Yearly Archive

Yearly Archives: 2011

December 31, 2011

The (Statistical) Power and the Glory

The worst possible outcome from a clinical trial is a negative result with a drug that actually works in the chosen indication.  These ‘false negatives’ almost invariably destroy real value, and for all but the largest pharmaceutical companies spell the death not only of the programme but of the company itself.

In theory, drawing the short straw and getting a negative outcome when the drug actually works should be a genuine stroke of bad luck.  In a properly powered trial, the odds of such an outcome should be less than one in five.  But under-powering trials, at least in Phase IIa, is so prevalent that the number of ‘false negatives’ may be as high as one in every two early stage trials.

While the impact on the individual company is usually devastating, the impact on the pharmaceutical industry as a whole is no less severe.  DrugBaron contends that more attention to the detail of trial power could double the productivity of the sector as a whole, catapulting drug development from a poorly performing sector to the very top of the pile.

The most egregious examples of under-powered trials result from overlooking the need for power calculations altogether.  For the most part, the cause is not that the clinical development team lacks an understanding of the importance of correctly powering their trial, but that they misjudge the degree of comfort that accompanies a previous positive result.

A common approach to early-stage trial design is to copy a previous positive study involving a drug that went on to achieve regulatory approval.  If you use the same end-point in the same patient population and your drug is at least as good as the drug previously approved (which, clearly, it needs to be in order to be commercially valuable) then surely their positive trial must have been suitably powered.  No. Absolutely not.

If your CMO justifies his chosen trial design by pointing to a previous successful trial, follow DrugBaron’s advice and sack him on the spot.

Their trial might only have been powered to detect an effect of that magnitude one time in two, or one time in three, not the usual minimum power we aim for of four positives out of five.  In other words, they may very well have been lucky.  They may have taken a gamble on a trial that comes up positive one time in two and hit the jackpot.  But you may not be so lucky.  Indeed, if it’s a popular trial design that’s been used quite a few times, then even a couple of positive results in the literature is nowhere near enough to demonstrate adequate power.
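For context, “four positives out of five” is the conventional 80% power threshold.  Under the textbook normal-approximation assumptions (a continuous endpoint, equal arms, known standard deviation), the sample size needed per arm follows from a standard formula.  The sketch below is illustrative only — the function name and parameters are not from the post — and uses nothing beyond the Python standard library:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm trial with a normal endpoint.

    Textbook formula: n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2,
    where delta is the true difference in means you want to detect.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
    z_power = z(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)
```

For a standardised effect of half a standard deviation (delta = 0.5, sd = 1), this gives 63 patients per arm for 80% power — but only 31 per arm if you are content with the coin-flip 50% power the post warns about, which is exactly how a copied design can look adequate while being anything but.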

The simple fact is that estimating the power of a design requires a look at the distribution of possible outcomes when the same trial is run …
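That distribution of possible outcomes can be made concrete by simulation: assume a true effect, re-run the same two-arm trial many times, and count how often it comes up positive.  This is a hypothetical sketch, not the author’s method — the function name, the normally distributed endpoint, and the z-test shortcut (known SD rather than a full t-test) are all simplifying assumptions:

```python
import math
import random

def simulated_power(n_per_arm, true_effect, sd, n_trials=2000, seed=42):
    """Estimate power by re-running the same two-arm trial many times.

    Each simulated trial draws n_per_arm patients per arm from normal
    distributions (treated mean = true_effect, placebo mean = 0) and
    declares a positive result if a two-sided z-test on the difference
    in means is significant at the 5% level.
    """
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% critical value
    se = sd * math.sqrt(2.0 / n_per_arm)  # SE of the difference in means
    positives = 0
    for _ in range(n_trials):
        treated = [rng.gauss(true_effect, sd) for _ in range(n_per_arm)]
        placebo = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        diff = sum(treated) / n_per_arm - sum(placebo) / n_per_arm
        if abs(diff) / se > z_crit:
            positives += 1
    return positives / n_trials
```

Run with 10 patients per arm against a half-SD effect, the “same” trial comes up positive only around one time in five; with 63 per arm it succeeds roughly four times in five — the gap between a gamble and an adequately powered study.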

December 17, 2011

Chocolate-flavoured Poison: Like red wine and debt, the company-building model of biotech investing tastes good but kills you eventually

The most dangerous imperatives are those that you …

December 5, 2011

Too Much Skin in the Game?

There are very few rules for the biotech …

December 2, 2011

Environmental Pollutants: Opening a Soup-Can of Worms

They are everywhere: so called ‘persistent organic pollutants’, …

