Is the FDA Too Conservative or Too Aggressive?: A Bayesian Decision Analysis of Clinical Trial Design, Aug 2015, Vahid Montazerhodjat & Andrew W. Lo

december 2018 by pierredv

Via Tom Hazlett, Nov 2017

NBER Working Paper No. 21499

Issued in August 2015

NBER Program(s):Health Care, Health Economics

Implicit in the drug-approval process is a trade-off between Type I and Type II error. We explore the application of Bayesian decision analysis (BDA) to minimize the expected cost of drug approval, where relative costs are calibrated using U.S. Burden of Disease Study 2010 data. The results for conventional fixed-sample randomized clinical-trial designs suggest that for terminal illnesses with no existing therapies such as pancreatic cancer, the standard threshold of 2.5% is substantially more conservative than the BDA-optimal threshold of 27.9%. However, for relatively less deadly conditions such as prostate cancer, 2.5% is more risk-tolerant or aggressive than the BDA-optimal threshold of 1.2%. We compute BDA-optimal sizes for 25 of the most lethal diseases and show how a BDA-informed approval process can incorporate all stakeholders’ views in a systematic, transparent, internally consistent, and repeatable manner.
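The cost-minimization idea in the abstract can be sketched numerically: choose the significance level alpha that minimizes prior-weighted expected cost, where raising alpha admits more ineffective drugs (Type I) but rejects fewer effective ones (Type II). This is a minimal sketch with made-up costs, prior, effect size, and sample size, not the paper's Burden-of-Disease-calibrated model:

```python
from statistics import NormalDist

def bda_optimal_alpha(p_effective, cost_type1, cost_type2,
                      effect=0.25, n=200, grid=2000):
    """Grid-search the one-sided significance level minimizing expected cost.

    expected_cost(alpha) =
        (1 - p) * cost_type1 * alpha           # approve an ineffective drug
      + p * cost_type2 * (1 - power(alpha))    # reject an effective drug
    """
    z = NormalDist()
    best_alpha, best_cost = None, float("inf")
    for i in range(1, grid):
        alpha = i / grid
        z_crit = z.inv_cdf(1 - alpha)
        # Power of a one-sided z-test against the true standardized effect.
        power = 1 - z.cdf(z_crit - effect * n ** 0.5)
        cost = ((1 - p_effective) * cost_type1 * alpha
                + p_effective * cost_type2 * (1 - power))
        if cost < best_cost:
            best_alpha, best_cost = alpha, cost
    return best_alpha

# Terminal illness with no alternatives: rejecting an effective drug is far
# costlier, so the optimal threshold rises well above the usual 2.5%.
lethal = bda_optimal_alpha(0.5, cost_type1=1.0, cost_type2=10.0)
# Milder condition: approving a dud dominates, and alpha falls below 2.5%.
mild = bda_optimal_alpha(0.5, cost_type1=10.0, cost_type2=1.0)
print(lethal, mild)
```

The qualitative pattern matches the abstract: the deadlier the disease relative to the harm of a false approval, the higher the optimal threshold.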

NBER
medicine
risk-assessment
probability
statistics
decision-making
Bayesian
research
healthcare
cancer
BDA
FDA

december 2018 by pierredv

Epistemic uncertainty – netjeff.com

july 2018 by pierredv

excerpt from John Ridgway's book review of "Waltzing with Bears", which is about software project risk management

"Do yourself a favour, ignore what the book says about risk analysis [for software projects] and go and buy a good book on Bayesian Methods and Decision Theory. You don't have to take my word for this, just type in 'epistemic uncertainty and Monte Carlo' into your Internet search engine and take it from there. In the meantime, here are some background notes to help explain my remark"

"epistemic uncertainty results from gaps in knowledge"

"Frequentist probability theory is used to analyse systems that are subject to aleatory uncertainty. Bayesian probability theory is used to analyse epistemic uncertainty."

"When Monte Carlo is used to model schedule risk, the [software] schedule uncertainties are being treated as if they are aleatory, even though they are predominantly epistemic. This is now considered to be unrealistic and is known to give incorrect results."

"Bayesian methods are appropriate in situations where there are gaps in information (i.e. where there is epistemic uncertainty). They involve the creation of Bayesian Belief Networks (BBNs) to model causal relationships. Data is fed into the model to enable the probability of specified outcomes to be calculated given the current body of knowledge."
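The updating step Ridgway describes can be sketched with the smallest possible belief network, two nodes and one causal link. All the numbers here are illustrative assumptions (a hypothetical bug/test-failure example), not figures from the review:

```python
# Minimal two-node belief network: Bug -> TestFailure.
# All probabilities are illustrative assumptions, not data.

p_bug = 0.10                 # prior: P(latent bug in the module)
p_fail_given_bug = 0.90      # P(test fails | bug present)
p_fail_given_no_bug = 0.05   # P(test fails | no bug), e.g. a flaky test

# Marginal probability of a failing test (law of total probability).
p_fail = p_fail_given_bug * p_bug + p_fail_given_no_bug * (1 - p_bug)

# Bayes' rule: revise belief in the bug after observing a failure.
p_bug_given_fail = p_fail_given_bug * p_bug / p_fail

print(round(p_bug_given_fail, 3))  # 0.667: one failure raises 10% to ~67%
```

Feeding the observation ("the test failed") into the model updates the probability of the outcome of interest given the current body of knowledge, which is exactly the BBN workflow the quote describes, just at toy scale.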

Bayesian
BayesianMethods
uncertainty
MonteCarlo
risk-management

july 2018 by pierredv

Quantum Bayesianism Explained By Its Founder | Quanta Magazine

may 2016 by pierredv

Interview with Christopher Fuchs

"Those interpretations [Copenhagen, many worlds, Bohmian] all have something in common: They treat the wave function as a description of an objective reality shared by multiple observers. QBism, on the other hand, treats the wave function as a description of a single observer’s subjective knowledge. It resolves all of the quantum paradoxes, but at the not insignificant cost of anything we might call “reality.” Then again, maybe that’s what quantum mechanics has been trying to tell us all along — that a single objective reality is an illusion."

Q. How does QBism get you around those limits?

A. "One way to look at it is that the laws of physics aren’t about the stuff “out there.” Rather, they are our best expressions, our most inclusive statements, of what our own limitations are. When we say the speed of light is the ultimate speed limit, we’re saying that we can’t go beyond the speed of light."

"I’ve become fascinated by these beautiful mathematical structures called SICs, symmetric informationally complete measurements — horrible name, almost as bad as bettabilitarianism. They can be used to rewrite the Born rule [the mathematical procedure that generates probabilities in quantum mechanics] in a different language, in which it appears that the Born rule is somehow deeply about analyzing the real in terms of hypotheticals."

QuantaMagazine
QBism
Bayesian
physics
quantum-mechanics
philosophy
interviews

may 2016 by pierredv

Chance: Peace talks in the probability wars - physics-math - 16 March 2015 - Control - New Scientist

june 2015 by pierredv

"statisticians are slowly coming to a new appreciation: in a world of messy, incomplete information, the best way might be to combine the two very different worlds of probability – or at least mix them up a little."

"a crucial distinction between two different sorts of uncertainty: stuff we don't know, and stuff we can't know"

Can't know -> frequentist. Don't know -> divides frequentists and Bayesians.

Strengths & weaknesses:

"Where data points are scant and there is little chance of repeating an experiment, Bayesian methods can excel in squeezing out information." -- e.g. astrophysics

"Where [well-grounded theories that provide good priors] don't exist, a Bayesian analysis can easily be a case of garbage in, garbage out."

"Frequentism in general works well where plentiful data should speak in the most objective way possible." -- e.g. Higgs boson

"frequentism's main weakness: the way it ties itself in knots through its disdain for all don't-know uncertainties."

statistics
bayesian
june 2015 by pierredv

Doing Bayesian Data Analysis: Now in JAGS! Now in JAGS!

april 2015 by pierredv

"I have created JAGS versions of all the BUGS programs in Doing Bayesian Data Analysis. Unlike BUGS, JAGS runs on MacOS, Linux, and Windows. JAGS has other features that make it more robust and user-friendly than BUGS. I recommend that you use the JAGS versions of the programs. "

bayesian
statistics
books
JAGS
[R]
textbooks
Kruschke
april 2015 by pierredv

Physics: QBism puts the scientist back into science : Nature News & Comment

april 2014 by pierredv

A participatory view of science resolves quantum paradoxes and finds room in classical physics for 'the Now', says N. David Mermin.

quantum-mechanics
David
Mermin
Bayesian
QBism
NatureJournal
april 2014 by pierredv

Physics: Quantum quest : Nature News & Comment

september 2013 by pierredv

The lesson, says Fuchs, isn't that Spekkens's model is realistic — it was never meant to be — but that entanglement and all the other strange phenomena of quantum theory are not a completely new form of physics. They could just as easily arise from a theory of knowledge and its limits. To get a better sense of how, Fuchs has rewritten standard quantum theory into a form that closely resembles a branch of classical probability theory known as Bayesian inference. "It turns out that many principles lead to a whole class of probabilistic theories, and not specifically quantum theory," says Schlosshauer.

Bayesian
physics
quantum-mechanics
NatureJournal
september 2013 by pierredv

Bayesian Statistics and what Nate Silver Gets Wrong : The New Yorker

january 2013 by pierredv

The New Yorker offers a frequentist critique of Nate Silver. Via stevecrowley.

Bayesian
newyorker
statistics
january 2013 by pierredv

Probability of ET Life Arbitrarily Small, Say Astrobiologists - Technology Review

august 2011 by pierredv

"David Spiegel at Princeton University and Edwin Turner at the University of Tokyo . . . used an entirely different kind of thinking, called Bayesian reasoning, to show that the emergence of life on Earth is consistent with life being arbitrarily rare in the universe"

bayesian
life
probability
x:arXivBlog
x:MITtechnologyreview
august 2011 by pierredv

Odds are, it's wrong: Science fails to face the shortcomings of statistics

march 2010 by pierredv

Tom Siegfried, Mar 27, 2010

"[D]uring the past century, though, a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot."

"Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contra"

statistics
ScienceNews
via:gmsv
bayesian
***

march 2010 by pierredv

Probably guilty: Bad mathematics means rough justice - science-in-society - 28 October 2009 - New Scientist

november 2009 by pierredv

review of Bayesian analysis of court cases

law
crime
justice
NewScientist
probability
bayesian
**
november 2009 by pierredv

A Brief Introduction to Graphical Models and Bayesian Networks, Kevin Murphy 1998

october 2009 by pierredv

A Brief Introduction to Graphical Models and Bayesian Networks - tutorial covering:

Inference, or, how can we use these models to efficiently answer probabilistic queries?

Learning, or, what do we do if we don't know what the model is?

Decision theory, or, what happens when it is time to convert beliefs into actions?

Applications, or, what's this all good for, anyway?

network
probability
statistics
bayesian

october 2009 by pierredv
