A Cure for Rising Clinical Trial Costs?

Findings from our latest clinical operations study have drawn industry attention, and for good reason: they focus on rising clinical trial costs, an issue plaguing drug developers of all sizes.

A recent article from Pharmalot’s Ed Silverman highlighted some of the report’s most important data: the rapid rise in per-patient clinical trial costs between 2008 and 2011. During that period, average per-patient costs rose significantly in every stage of drug development:

• Phase I costs increased 46%
• Phase II costs increased 48%
• Phase IIIa costs increased 88%
• Phase IIIb costs increased 83%
• Phase IV costs increased 31%

These jumps coincided with an equally striking rise in clinical trial staffing. The average number of total FTEs needed to run a trial increased in every stage of clinical development between 2008 and 2011:

• Phase I staffing rose 108%
• Phase II rose 106%
• Phase IIIa rose 50%
• Phase IIIb rose 57%
• Phase IV rose 85%

Expanded staffing explains some of the cost increase, but not all of it. Reader comments on the Pharmalot article cast blame on everyone from principal investigators to CROs. When the list of potential culprits grows that wide, however, it can actually mean the root problem is more systemic.

So what’s the fix? An editorial printed in The Wall Street Journal last Wednesday suggested a radical change in the way the FDA approves drugs as a means of containing the rising costs of clinical development. Authors Michele Boldrin and S. Joshua Swamidass suggest shifting many of the efficacy endpoints to post-marketing trials (Phase IIIb and Phase IV), while using registration trials (Phase IIIa) primarily for safety endpoints. In this way the FDA doesn’t risk the launch of unsafe drugs, the burden of proving efficacy falls to developers’ post-marketing efforts, and innovation can move faster.

The editorial points out that proving efficacy is the most expensive aspect of drug development, while safety is the FDA’s first priority. The divided approach creates flexibility for companies that have safe drugs but fear the cost of proving efficacy will be prohibitive. This is especially relevant for rare diseases: as the system stands now, it can be difficult to produce a profitable drug when the patient population sits at only 3,000, or even 300, cases per year. Perhaps a more capitalistic (my word, not theirs) approach to drug development and approval could save the industry:

Get safe drugs into the market (in this case the healthcare system) and let physicians, post-marketing studies and investigator-initiated studies determine which drugs work the best. A closer collaboration between physicians and pharmaceutical companies in post-marketing studies could be a more effective way for the healthcare industry to choose the best treatments.

Such an approach might look something like “beta-testing” in the IT industry. First, the manufacturer refines a new piece of software or hardware enough to demonstrate functionality and avoid doing more harm than good. Then engineers and thought-leading consumers work closely together to refine the product; finally, it becomes available to an informed public and the marketplace determines its success.

It seems likely that asking drug companies to prove efficacy up front is costing patients access to helpful drugs, especially for rare diseases. Deferring the high cost of proving the exact details of efficacy (essentially a competitive issue that market forces and comparative effectiveness studies can sort out) until later in a product’s lifecycle might just help save lives and spur new drug development.

From Ryan McGuire, senior research analyst and project lead for the recent “Clinical Operations” report.
