Drugbaron Blog

February 19, 2012

Avoiding the pain of the not-so-mystical placebo effect with the almost-magical run-in period

Nothing kills biotech companies like an active placebo.  More often than not the press release after a failed trial points the finger at a “surprisingly large placebo response” against which the active drug had little scope to excel.

It’s easy to assume that this placebo effect has its origins in some almost mystical neuropharmacology, where simply believing you are being treated with a powerful new drug is sufficient to induce a miraculous recovery – and, as a result, that there is little you can do to mitigate the effect.

But in reality by far the largest proportion of the placebo effect originates from the design of our clinical trials, and can be readily neutralised by careful adherence to a few simple principles.  The first step to eliminating the avoidable failure that comes from high placebo response rates is to recognise that it’s not in the lap of the gods – but in the hands of your Chief Medical Officer.   All you need is the almost-magical power of a run-in period.

There are two very different factors that together result in a placebo effect – that is, a statistically significant improvement in the symptoms of a disease during a period of close observation, such as a clinical trial, even without any active pharmacological intervention.

The first, and perhaps most obvious, factor is the selection of the end-point.  End-points that have a subjective component almost always result in a larger placebo effect.  The reason is clear enough: patients want to feel better, and if the end-point includes a component that measures how the patients feel, then that desire to feel better translates into a “real” improvement.


Does that mean that subjective end-points have to be avoided?  Not at all – indeed, for some indications (such as neuropathic pain) there is little choice.  In these circumstances you can exploit the fact that the placebo effect is usually very short-lived.  You may want to feel better, but if after a while the symptoms are really not ameliorated then your self-reported symptom score soon returns to baseline.  It may not always be possible to have a longer treatment phase (perhaps because of the duration of toxicology cover, or simply due to cost or availability of drug product), but the same effect can be achieved by having a run-in phase when all the subjects receive placebo.

But where there is a choice of end-points, it is usually worth remembering that the power of the study will be lower with the more subjective end-point.  Molecular measures in particular are usually immune to change as a result of the psychology of the patient.  The magnitude of the placebo effect is therefore smaller with these molecular markers.  By contrast, what the patient (and therefore the doctor) usually cares about is how the patient feels – drugs have to treat the symptoms of disease.

There is, therefore, commonly a trade-off: more subjective end-points may be more clinically relevant but more susceptible to the placebo effect, while molecular markers have less direct relevance but greater statistical power.  It is precisely for this reason that early stage trials (which are smaller) tend to adopt less subjective molecular surrogate end-points, while late stage trials have to focus on the subjective assessment of symptoms that regulators often require.


The second factor is much more insidious, but just as easy to avoid once you know it is there.  The biggest contribution to the placebo effect is the same statistical principle that allows the police to claim that speed cameras on the roads are more effective than they actually are: regression to the mean.

The way speed cameras are positioned in the UK is very instructive in this regard.  If over a short space of time there are a number of serious accidents at a junction, perhaps including fatalities, that location is considered a high risk location – and the ideal site for a speed camera.  Once installed, the number of accidents at the location is monitored for years afterwards.  And what do they find?  They find that at almost all camera sites the rate of accidents is lower than before the cameras were fitted.

The government and the police attribute the reduction to the cameras lowering traffic speeds.  Perhaps they would be surprised, and maybe distressed, to discover that a similar outcome would have likely been achieved by installing signs saying “Go faster” at the same locations.  This is because the majority of the reduction observed was due to the motoring equivalent of the placebo effect (and would, therefore, have been observed whatever the “intervention” that was selected).

The problem lies in choosing the sites based on the accident rate over a relatively short period.  In the majority of cases, these junctions were not of themselves particularly dangerous but just had an excess of accidents through chance alone.  Once selected and monitored (whether you installed a camera or not) these junctions “revert to the mean” rate for accidents.  As they were selected for having a higher than average accident rate, the consequence is an apparent reduction in accidents at the majority of selected sites.

This is all an excellent reason to doubt the value of speed cameras, but what has it to do with clinical trial design?

Well, patients are selected for clinical trials in much the same way as sites for speed cameras.  Investigators look for subjects with a particular level of disease, and if they pass the threshold of the inclusion criteria they are randomized into the study.  Individuals who, through natural fluctuation in their disease severity, were feeling particularly well when screened wouldn’t meet the inclusion criteria and wouldn’t be in the study.  By contrast, individuals feeling at their worst are much more likely to be recruited.

Scroll forward a few weeks to the end of the treatment period.  All those individuals who were feeling particularly rotten at screening will, on average, be having a better time of it.  In other words, they will be showing “regression to the mean” – whether or not they had been treated.
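This selection effect is easy to demonstrate with a toy simulation (all numbers here are hypothetical: an arbitrary “severity score” with mean 50, an inclusion threshold of 60, and Gaussian day-to-day measurement noise).  Patients who clear the threshold at screening drift back towards their true average when simply re-measured, with no treatment at all:

```python
import random

random.seed(42)

# Hypothetical model: each patient has a stable underlying severity,
# but any single measurement adds day-to-day fluctuation on top of it.
N = 100_000
TRUE_MEAN, TRUE_SD, NOISE_SD = 50.0, 10.0, 10.0
THRESHOLD = 60.0  # inclusion criterion: screening score must exceed this


def measure(true_severity):
    """One noisy observation of a patient's underlying severity."""
    return true_severity + random.gauss(0, NOISE_SD)


enrolled_screen, enrolled_end = [], []
for _ in range(N):
    true_sev = random.gauss(TRUE_MEAN, TRUE_SD)
    screen = measure(true_sev)
    if screen > THRESHOLD:                       # recruited at (what may be) their worst
        enrolled_screen.append(screen)
        enrolled_end.append(measure(true_sev))   # re-measured weeks later, NO treatment

mean_screen = sum(enrolled_screen) / len(enrolled_screen)
mean_end = sum(enrolled_end) / len(enrolled_end)
print(f"mean score at screening: {mean_screen:.1f}")
print(f"mean score at end-point: {mean_end:.1f} (apparent 'improvement', no drug)")
```

The untreated cohort “improves” by several points purely because it was selected when its members were scoring high.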

The result will be a strong positive placebo effect.  And here’s the surprise: this regression to the mean can lead to a very large placebo effect even on molecular markers, or other end-points that entirely lack any subjective component.  If you thought that by avoiding subjective end-points you could forget about placebo effects, then it’s time to think again.

How do you neutralize the impact of “regression to the mean”?  There are several simple tricks.  Easiest of all is to avoid selecting patients using the same measure as the primary end-point.  In a rheumatoid arthritis (RA) trial, for example, if you want to use CRP as the primary measure of efficacy, enrol your patients on the basis of anti-CCP titre.  If you enrol them because their CRP levels are above a certain threshold, then regression to the mean will introduce a placebo effect and reduce your statistical power.

This trick isn’t always as easy to implement as it sounds: most measures of the presence or activity of a disease are, to a greater or lesser extent, correlated.  If you select on one parameter temporally correlated with your end-point, you will still suffer a degree of regression to the mean.  In a cardiovascular trial, using LDL-cholesterol to screen for entry and triglycerides as the end-point will not completely eliminate the risk of a placebo effect, because high LDL-cholesterol and high triglycerides are associated.
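The residual effect can be sketched with another toy model (all parameters hypothetical): two markers, A and B, that both track the same fluctuating disease activity, each with its own assay noise.  Enrolling on A and measuring B shrinks the spurious untreated “improvement” in B – but, because the markers are correlated through the disease activity itself, it does not abolish it:

```python
import random

random.seed(0)


def simulate(select_on_endpoint, n=100_000, threshold=60.0):
    """Mean untreated change in end-point marker B from baseline to follow-up.
    Disease activity = chronic level + daily fluctuation; markers A and B
    both reflect the activity on the day they are measured."""
    changes = []
    for _ in range(n):
        chronic = random.gauss(50, 8)
        day0 = chronic + random.gauss(0, 8)       # activity on screening day
        a0 = day0 + random.gauss(0, 5)            # screening marker A
        b0 = day0 + random.gauss(0, 5)            # end-point marker B, baseline
        screen = b0 if select_on_endpoint else a0
        if screen > threshold:
            day1 = chronic + random.gauss(0, 8)   # activity weeks later
            b1 = day1 + random.gauss(0, 5)        # end-point marker B, follow-up
            changes.append(b1 - b0)
    return sum(changes) / len(changes)


drop_same = simulate(select_on_endpoint=True)     # enrol on B, measure B
drop_other = simulate(select_on_endpoint=False)   # enrol on A, measure B
print(f"untreated change when enrolling on the end-point itself: {drop_same:.1f}")
print(f"untreated change when enrolling on the correlated marker: {drop_other:.1f}")
```

Both cohorts show a spurious improvement with no treatment; enrolling on the correlated marker merely makes it smaller.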

The sovereign salve against regression to the mean is the run-in period. 

If you screen your subjects and accept those who exceed a threshold on the entry criteria and then monitor them, but without intervention, for a period (perhaps a couple of weeks), measuring the end-point at intervals to establish a proper baseline, then you are safe.

As it happens, the run-in period (with all subjects on placebo) was also the solution to the placebo effect resulting from the use of end-points with a subjective component.

And establishing a proper baseline, with the temporal variation component removed, increases the statistical power to see a response to intervention even in the absence of any placebo effect.
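A final toy simulation (hypothetical numbers again) makes both points at once: re-measuring the end-point during an untreated run-in, and averaging several visits into the baseline, removes the regression-to-the-mean bias from the change score and shrinks its variance – which is where the extra statistical power comes from:

```python
import random
import statistics

random.seed(1)


def trial(run_in, n=50_000, threshold=60.0):
    """Untreated change scores when the baseline is either the screening
    value itself (run_in=False) or the mean of three subsequent run-in
    visits (run_in=True).  Returns (mean change, SD of change)."""
    changes = []
    for _ in range(n):
        chronic = random.gauss(50, 8)

        def obs():
            # One noisy observation of this patient's disease activity.
            return chronic + random.gauss(0, 10)

        screen = obs()
        if screen > threshold:
            if run_in:
                baseline = statistics.mean(obs() for _ in range(3))
            else:
                baseline = screen
            changes.append(obs() - baseline)
    return statistics.mean(changes), statistics.stdev(changes)


bias_no, sd_no = trial(run_in=False)
bias_yes, sd_yes = trial(run_in=True)
print(f"no run-in:   mean change {bias_no:+.1f}, SD {sd_no:.1f}")
print(f"with run-in: mean change {bias_yes:+.1f}, SD {sd_yes:.1f}")
```

With a run-in baseline the untreated mean change sits near zero (the placebo effect is neutralised), and the smaller SD of the change score is what allows the same effect to be detected with fewer subjects.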

If your CMO hasn’t got a run-in period in your next trial, you might want to ask why.  It isn’t likely to be a cost-saving measure to eliminate the run-in period.  The increased statistical power that results means that a typical trial can have 20% fewer subjects – saving more than the cost (in dollars and in days) of the run-in.

There aren’t many things in life that bring only benefits – and even fewer that save you money at the same time.  But a run-in period in clinical trials is one of them.  Once your investors read this article, it will be much harder to justify a failed trial due to an unexpectedly large placebo effect than it will be to justify adding a run-in period to your trial design.
