Creating new drugs is a process fraught with risk. The risk involved with discovery and early development is obvious, but increasingly the greatest risk lies at the very end of the process: winning a decent market for your approved product.
But history teaches us there is a sure-fire trick to eliminate a large portion of both the technical and commercial risk from drug development. A trick that has arguably yielded more blockbusters than any other approach: medicalizing biomarkers.
For sure, only a handful of biomarkers have undergone this transition – from something diagnostic to a target for intervention – but those that have, have yielded eye-watering drug sales. Drugs to treat elevated LDL-cholesterol (such as atorvastatin, Lipitor™), elevated blood pressure (such as valsartan, Diovan™, and other angiotensin receptor antagonists) or elevated fasting glucose (such as insulin and, more recently, GLP-1 agonists) have garnered countless billions in sales over the last two decades.
The industry will always push the boundaries to justify adoption of new biomarkers as therapeutic targets in their own right
The reason for their success is obvious: unlike with complex disease phenotypes, it's trivial (and usually very cheap) to identify the significant portion of the population with the elevated biomarker. It's also straightforward to demonstrate, during clinical development, the impact of the treatment on the biomarker. So the development risks are much lower than for drugs where unambiguous demonstration of impact on a complex clinical phenotype is required.
And the commercial risk is lower too. Provided the link between the biomarker and disease is well-accepted, medical practitioners, payers and patients alike are keen to embrace prevention rather than wait to treat the disease when it manifests (perhaps much) later.
To the chagrin of the pharmaceutical industry, there are but a handful of such mother lodes to mine. Hypercholesterolemia, hypertension and hyperglycemia are the only diagnostic measures that are universally accepted to justify intervention in the absence of any clinical symptoms.
Not surprisingly, therefore, we see a drive from the industry to expand the list – with elevated triglycerides and low testosterone in the vanguard. After all, once regulators, doctors or even patients can be persuaded that a new biomarker requires treatment in the absence of symptoms, it would open up a brand new treasure chest.
This is precisely the thinking behind Amarin’s long-running battle with the FDA to approve its fish oil capsule Vascepa™ to lower triglycerides among asymptomatic individuals with only a mild elevation in this biomarker. For now, the FDA has resisted and continues to demand hard end-point clinical data to support such a label, but the pressure to reverse that decision is immense.
Similarly, there is something of a campaign being waged at present to designate low testosterone among men as a phenotype requiring intervention. As with triglycerides, the argument goes that low testosterone is associated with a wide range of chronic degenerative diseases, so why not intervene to correct the “defect”?
Even without substantive data, consumers worldwide are spending several billion dollars a year on agents that lower triglycerides (primarily fish oils) and elevate testosterone (where Abbvie’s Androgel™ has the dubious honour of being the market leader).
The problem, for regulators and the pharmaceutical industry alike, is the standard of proof required to justify the switch from biomarker to drug target. Is it acceptable to demonstrate a link between the biomarker and the disease using, for example, Mendelian Randomization study designs? Or do you need to demonstrate a direct benefit on the disease for each new class of drugs targeting the biomarker? Even the FDA seems unsure, and there is no unifying guidance as to what package of data would constitute sufficient proof today to allow approval of a drug on the basis of its effect on a biomarker rather than on disease itself.
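To make concrete what a Mendelian Randomization design actually estimates, here is a minimal sketch of the Wald-ratio estimator on entirely synthetic data. The genetic variant, effect sizes and sample size are all hypothetical; the point is only to show how a gene→biomarker and a gene→outcome association combine into an estimate of the biomarker's causal effect.

```python
import numpy as np

# Minimal Wald-ratio Mendelian Randomization sketch (synthetic data).
# Hypothetical setup: a single variant G raises biomarker X, and X
# causally raises outcome Y with true effect 0.8.
rng = np.random.default_rng(0)
n = 100_000

g = rng.binomial(2, 0.3, n)            # genotype: 0/1/2 copies of the allele
x = 0.5 * g + rng.normal(0, 1, n)      # biomarker, partly set by genotype
y = 0.8 * x + rng.normal(0, 1, n)      # outcome, causally driven by biomarker

def slope(a, b):
    """Simple regression slope of b on a (covariance / variance)."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

beta_gx = slope(g, x)                  # gene -> biomarker association
beta_gy = slope(g, y)                  # gene -> outcome association
wald_ratio = beta_gy / beta_gx         # estimated causal effect of X on Y

print(round(wald_ratio, 2))            # recovers (approximately) the true 0.8
```

Because the genotype is fixed at conception, it cannot be confounded by lifestyle or reverse causation, which is why regulators might (or might not) accept such evidence as a substitute for hard end-point trials.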
This haziness risks trapping companies like Amarin in the unenviable position of not knowing whether or not a full hard end-point Phase 3 campaign will be required to secure a broad label until very late in the development pathway, when substantial dollars have already had to be committed. Indeed, Amarin have initiated such a study in cardiovascular disease that they may not be able to finance through to completion, so great are the cost and the risk.
It would help significantly if regulators on both sides of the Atlantic clarified their position on the evidence base required to validate a biomarker as an approvable end-point.
But that would not be the whole solution to the problem.
Without an estimate of the clinical benefit delivered by such drugs, payers are completely unable to assess the health economics case for widespread use of such medicines. At least with hard clinical end-point data from Phase 3, it is possible for bean counters to model the benefit of using the drug in different populations. But armed only with data on the degree of impact on the biomarker, such estimates are fraught with difficulties.
Even in those few cases where a biomarker has made the transition to drug target, as with LDL-cholesterol, there remains a significant debate about the best way to deploy drugs that target it. From an epidemiological standpoint, everyone theoretically benefits from lower LDL-cholesterol – so should statins be given to everyone? Maybe. In the UK, only now are NICE considering amending their guidelines to widen access (presumably driven by the lower prices since the arrival of generic atorvastatin). At generic prices, it probably makes sense to treat millions of people who would never have progressed to the clinical stages of disease (in this case, coronary heart disease or stroke), but at premium prices, that call was much harder to justify.
That summarises the issue with causative biomarkers in a nutshell. Even when no-one doubts their role as a causative risk factor for disease, their specificity is usually very low indeed. In other words, elevated LDL-cholesterol undoubtedly increases risk of heart disease, but most people with moderately elevated LDL-cholesterol will never have a heart attack.
As long as the drug is cheap and safe (and today statins meet both those criteria), treating biomarkers makes a positive health economic contribution. But that bar is very high indeed.
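The arithmetic behind that bar is worth spelling out. With a low-specificity biomarker, most treated individuals would never have progressed to disease, so the cost per event actually prevented is the number needed to treat multiplied by the full cost of treating everyone. A back-of-envelope sketch, with entirely hypothetical risk figures and drug prices:

```python
# Back-of-envelope cost per event prevented. All numbers are hypothetical
# illustrations, not estimates for any real drug.
def cost_per_event_prevented(baseline_risk, relative_risk_reduction,
                             annual_drug_cost, years):
    # Absolute risk reduction in the treated population
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    # Number needed to treat to prevent one event
    nnt = 1 / absolute_risk_reduction
    # Everyone treated pays the drug cost; only 1-in-NNT avoids an event
    return nnt * annual_drug_cost * years

# Hypothetical: 5% ten-year event risk, 25% relative risk reduction
generic = cost_per_event_prevented(0.05, 0.25, 50, 10)      # ~$40,000
branded = cost_per_event_prevented(0.05, 0.25, 1500, 10)    # ~$1,200,000
print(generic, branded)
```

The same efficacy looks like a bargain at generic prices and indefensible at premium prices, which is precisely why the NICE call on statins only became easy once atorvastatin went generic.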
There is another, more subtle issue to consider: many of these biomarkers sound reassuringly well-defined, but they are themselves actually complex phenotypes. LDL-cholesterol is a density-defined lipoprotein fraction whose chemical and protein composition can (and does) differ substantially between individuals. Triglyceride is a family of glycerol esters whose fatty acid components show almost infinite diversity. Hypertension is a pressure measurement that depends on multiple physiological variables such as blood volume and peripheral resistance, each of which in turn depends on many factors from electrolyte balance and kidney function to hormones and smooth muscle cell proliferation. Even a measure of testosterone encompasses sulfated metabolites and related hormones such as dihydrotestosterone (or even DHEA) that are chemically distinct but have almost identical activity.
As a result of this complexity, there is no guarantee that different approaches to modulating the biomarker will yield quantitatively or even qualitatively the same benefits. Approving drugs on the basis of the effect on the biomarker alone risks approving ineffective drugs.
“Once regulators, doctors or even patients can be persuaded that a new biomarker requires treatment in the absence of symptoms, it opens up a brand new treasure chest for the pharmaceutical industry”
The much-vaunted anti-PCSK9 antibodies that are hurtling towards the clinic are a case in point. These agents lower LDL-cholesterol at least as effectively as high-dose atorvastatin, and perhaps more importantly they do so in the 20% or so of patients who either respond poorly to statins, or else cannot tolerate them.
Almost everyone assumes these agents must deliver benefit measured by the degree of impact on LDL-cholesterol. But that is far from certain. Anti-PCSK9s affect LDL-cholesterol by a different mechanism to statins: by inhibiting the action of PCSK9, which binds to and marks LDL receptors for destruction, they increase clearance of LDL particles. By contrast, statins target cholesterol synthesis (although they also have a significant impact on clearance by altering LDL receptor expression). What if reducing new lipoprotein particle synthesis is more important than increasing clearance? Anti-PCSK9s would suddenly be much less effective than statins at reducing the hard clinical end-points, per mg/dL reduction in LDL-cholesterol achieved.
Is that possible? Certainly. LDL particles become modified with time, for example to form oxidized LDL (oxLDL). These particles are potentially pro-atherogenic and simultaneously less readily cleared (due to reduced affinity for the LDL receptor). Plausibly, statins could lower oxLDL much more than anti-PCSK9s do, because statins reduce new LDL particle synthesis, and hence possess a much more powerful anti-atherogenic effect despite being inferior in terms of total LDL-cholesterol reduction.
To preserve balance, the converse is equally possible. Anti-PCSK9s boost levels of lipoprotein remnant receptors (such as LRP1) as well as the canonical LDL receptor (as evidenced by the 20% or so reduction in LDL-cholesterol shown with Amgen’s AMG145 in patients with homozygous familial hypercholesterolemia – patients with no functional LDL receptor). Because LRPs are important in the clearance of atherogenic triglyceride-rich lipoproteins beyond LDL-cholesterol, anti-PCSK9s could have a more powerful effect on hard clinical end-points than statins for a similar degree of impact on the target biomarker.
The point of this discussion is simply to highlight the dangers of approving drugs even on the basis of well-established biomarkers, rather than to make the case for anti-PCSK9s being inferior or superior to statins.
So what of the case for adding triglyceride or testosterone to the list of medicalized biomarkers?
The case for triglycerides is relatively strong. Despite the lack of direct evidence for a primary prevention effect of fish oil among individuals with moderately elevated triglyceride (above 200 mg/dL), the circumstantial evidence has already won over many. Over-the-counter fish oil sales are substantial and growing. More generally, Mendelian randomization studies make a strong (though not definitive) case for a causative role of moderately elevated triglycerides in coronary heart disease. Indeed, in our MaGiCAD cohort we find a strong, independent prognostic association between triglyceride and prospective CHD mortality.
The case for testosterone is much weaker – so much so that it risks looking manufactured by those intent upon medicalizing it as a biomarker for commercial gain. Two independent Mendelian Randomization studies [1,2] found no evidence at all for a causal link between low testosterone and heart disease (or any other age-related degenerative conditions in men), while the Chinese study actually identified a possible link between higher testosterone and an unhealthy lipid profile.
In MaGiCAD we see strong links between testosterone and body mass index, and (interestingly) between low testosterone and elevated triglyceride. But there is absolutely no association with coronary heart disease, or (more powerfully) 5-year cardiovascular mortality.
The perils of wrongly medicalizing biomarkers are becoming increasingly evident. The industry wasted billions of R&D dollars in aggregate pursuing agents to elevate HDL cholesterol (and some, such as Merck’s Tredaptive™ niacin, were even approved), particularly on the CETP inhibitors. These failed efforts, together with Mendelian Randomization studies, today make it crystal clear that HDL is not a causative biomarker for cardiovascular disease. But the cultural recognition of its presumed importance will take longer to fade, aided in part by powerful language such as “good cholesterol” which has seeped deep into the public consciousness, and is proving difficult to dislodge even among the scientifically literate. This can harm industry (by misdirecting research dollars) and patients (by de-focusing them from real risk factors) alike.
Resisting the pressure to medicalize new biomarkers will also be tough. There is, after all, an enormous prize at stake with many groups standing to gain – at least in the short term – from the widespread acceptance of new treatable biomarkers. Some of the less scrupulous are prepared to invest heavily to conjure and then sustain the illusion, as seems to be the case with low testosterone, aided by the fact that it is much harder to disprove a link than to prove it. By constantly shifting the ground to different populations, different end-points and even different measures of testosterone activity, they can marshal an ostensibly powerful case even in the face of strong contrary evidence.
And there are few people as heavily incentivized to counter their push.
For this reason alone, we (the industry and the general public) should support the regulators – and the FDA have taken the lead in this respect as in so many others – in resisting calls to dilute the evidence base needed to elevate a biomarker to a broadly approvable end-point. Clearly, testosterone will never make that leap, but even on triglycerides they should hold their ground.
A decade from now we may even come to realize that those biomarkers we have allowed to become medicalized are misleading us – if anti-PCSK9 turns out to be much better (or, more worryingly, much worse) at preventing heart attacks than predicted by its effects on LDL-cholesterol.
The industry, then, will always push the boundaries to justify adoption of new biomarkers as therapeutic targets in their own right. The prize for success is simply too high. And in the modern globally-connected world, the public can also be mobilized into belief in the importance of treating a particular biomarker. The regulator, therefore, stands as the last bastion protecting us, the consumer, not only from the industrial piranhas but also from ourselves.
Amid this praise for the stance regulators have taken, one plea remains: the industry needs a better definition of what should constitute the standard of proof for a biomarker to become a broadly approvable end-point. After an appropriately wide-ranging consultation and discussion, the regulators should make that clear. Doing so not only defines the goalposts for the benefit of the industry, but perhaps more importantly shores up the defenses against broad pressure from all sides to approve (or, more insidiously, broaden the label of) things that fall short of that gold-standard.