Getting More For Less: The Not-So-Obvious Benefits of Publicly Funded Trials for New Drugs

Dean Baker
Milken Institute Review, July 2008


Health care professionals have long been concerned about conflicts of interest in the high-stakes process of determining whether newly invented drugs are safe and effective. After all, the profitability of the drugs’ corporate owners – and in many cases, the careers of senior executives – turn on the results. Clinical trials run by the drugs’ owners are subject to oversight by the Food and Drug Administration. But even when the overseers do their job well, the adversarial nature of the relationship between corporate researchers and government regulators virtually guarantees that resources will be wasted in the game. To cope with the perverse incentives inherent in the current system, we need to break the financial link between the development of drugs and their testing.

This article outlines a proposal for publicly financed clinical trials that builds on a plan offered earlier by Tracy Lewis, Jerome Reichman and Anthony So of Duke University in The Economists’ Voice, January 2007.

Someone would, of course, still need to foot the bill for publicly funded trials. But that seeming disadvantage could be turned into an advantage: if the administrators of Medicare’s drug-benefit program were given authority to negotiate with pharmaceutical makers, and in the process reduced the prices they paid to levels comparable to those paid by the Veterans Affairs Department (which already has the right to negotiate prices), the savings would be more than sufficient to pay the bill for the trials. Note, moreover, that this approach need not reduce the profitability of the pharmaceutical business or its incentives to develop new drugs: the industry’s reduced revenues would be offset by eliminating the cost of privately financed trials.

Reducing the price of prescription drugs – and thereby narrowing the gap between the price of drugs and the cost of manufacturing them – would also go a long way toward solving the efficiency problem inherent in marketing products for which most of the costs are sunk before the first sale. This inefficiency includes the efforts of patients and physicians to game the system to minimize drug expenditures, as well as the efforts of health insurers to restrict access to costly drugs.

Note, too, that reducing margins in drug sales would also reduce the incentives to plow money into marketing drugs, since the payoff in selling an extra pill would be lower. Finally, lower drug prices would reduce the incentive for patients to buy unauthorized (i.e., foreign) versions of drugs, or for counterfeiters to enter the drug business. In short, by removing the conflict of interest inherent in industry-funded trials and forcing down prices paid by Medicare recipients, it might well be possible to achieve both better health outcomes and greater economic efficiency without reducing the private returns to innovation in pharmaceuticals.

The Nuts and Bolts

I envision the establishment of multiple independent companies, operating on long-term federal contracts (say, 8 to 12 years), to perform publicly financed drug trials. Approximately $20 billion per year would be needed to maintain the current level of testing. This figure would likely rise by about 10 percent annually for the next decade – roughly the growth rate projected for Medicare outlays on prescription drugs.

The agency parceling out the contracts and overseeing testing quality could be the National Institutes of Health, the Food and Drug Administration, the Centers for Medicare and Medicaid Services or a new organization established explicitly for the purpose. The independent testing companies (rather than the federal government) would select drugs for testing according to their estimated potential to improve public health, basing the determinations on evidence from preclinical research.

A key goal here is to ensure that there would always be several contractors with overlapping areas of responsibility. Along with creating a competitive benchmark for efficiency both in choosing drugs to test and in minimizing the cost of testing, the overlap would reduce the chances of potentially promising drugs being overlooked.

Since all trial results would be public, the three phases of clinical trials needed for FDA approval could be performed by separate testing companies. Every tester would have the full benefit of the information obtained from prior-round trials. This openness should promote competition among testers at all stages of clinical testing, reducing the chances of trapping potentially valuable drugs in the bureaucracy of an inefficient contractor.

To minimize the potential for conflicts of interest, the management and employees of contracting firms would be barred from holding financial positions in pharmaceutical companies. In addition, all contact between the drug companies and the independent testers would be on the record and accessible to watchdog groups. Full-disclosure rules would also apply to trial results. And all the data collected from trials would be easily available – say on the Internet – in a timely manner.

Privately financed clinical testing, incidentally, would not be prohibited. But it would likely occur only in the rare instances in which the patent holder had far higher expectations of success in clinical trials than the independent testers, or in which the potential benefits had little to do with health.

Everybody’s a Winner

Separating the interests of the testers from those of the owners of the drug patents would get rid of the testers’ incentive to exaggerate the effectiveness of drugs or to conceal evidence of negative side effects. The elimination of the motivation to conceal proprietary data would also allow the scientific community, as well as practicing physicians, to get the full benefit of the information obtained in the trials.

There are other benefits, too. Under the current system, it often pays to invest hundreds of millions of dollars in clinical trials for a “me-too” drug that would generate minimal benefits for patients. By contrast, independent testers would have little reason to follow through with tests on drugs that have little promise compared to existing drugs. This is no small deal: between 1990 and 2004, nearly four of every five new drugs approved by the FDA fell into the “standard” classification, meaning they were safe and effective but provided no significant advance over existing drugs.

To be sure, some of the drugs classified as standard would have been approved under the system outlined here – for example, drugs involved in races for approval in which the success of competing drugs was uncertain. However, this is only likely to be the case with drugs that are already in the last stages of testing, when the marginal cost of gaining approval is relatively small. It may also be beneficial to bring some “me-too” drugs through the FDA approval process if the existing drugs are known to have harmful side effects or to interact badly with other commonly used drugs.

Another benefit of the publicly funded system is that full disclosure of research results would likely speed innovation and pare its costs. For example, researchers would be able to analyze data within and across studies to determine the relative efficacy of drugs and the frequency of side effects for different demographic groups and for those suffering from medical conditions other than the condition the new drug would treat. Under the current system, by contrast, companies release only the data that they choose to. And once they gain FDA approval, they have little to gain from sharing.

A publicly financed system would also eliminate the incentive to reward doctors participating in drug trials. There have been numerous news accounts of incidents in which drugmakers overpaid doctors to participate in drug trials as a way of rewarding them for prescribing their brands. And the evidence goes beyond the anecdotal: a study reported in The Journal of the American Medical Association found that doctors who were paid to take part in clinical trials were more likely to prescribe the company’s drugs after their participation than before.

The economics of this relationship are well understood. As with any product that is expensive to invent and relatively cheap to manufacture, the price of patented drugs is (necessarily) far above the marginal cost of production. This gives the manufacturer an enormous incentive to increase sales. And one way to do that is to invest heavily in convincing physicians to prescribe liberally. Of course, outright kickbacks are illegal. However, it is very difficult to distinguish between kickbacks and generous payments for services well done.

Paying the Bills

A successful program for publicly financing drug tests would have the resources to match the quality of the trials now performed by drugmakers. As suggested earlier, this could probably be accomplished with somewhat less spending, since trials of me-too drugs would be curtailed and the temptation to kick back fees to participating physicians would be eliminated. Still, a public program would cost a lot of money.

The National Science Foundation estimates that the industry spent $17 billion (in 2005 dollars) on research and development in 2003, while the Pharmaceutical Research and Manufacturers of America, the industry trade group, says that the industry spent $39 billion (also in 2005 dollars) on R&D in 2004. The NSF’s numbers imply a 5 percent real average annual growth rate since 1980, while PhRMA’s imply an 8 percent growth rate. Applying the growth rates to both numbers would imply spending of $22 billion in 2007 using the foundation’s data and $52 billion using the trade association’s data.

A reasonable estimate is that half of this research and development spending went into clinical testing, suggesting that the industry’s testing budget in 2007 was somewhere between $12 billion and $26 billion. Thus, $20 billion should be more or less sufficient to replace the clinical testing currently funded by the industry with a more efficient public program.
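The compounding arithmetic behind these projections can be sketched in a few lines. This is a back-of-the-envelope check, not part of the original analysis; the base figures ($17 billion in 2003, $39 billion in 2004) and the 5 and 8 percent growth rates come from the text, and the results land near the article's rounded figures, with small differences reflecting rounding and base-year choices.

```python
# Back-of-the-envelope check of the R&D spending projections in the text.
# Base figures and growth rates are taken from the article; the outputs
# differ slightly from the article's rounded numbers.

def project(base_billions, annual_rate, years):
    """Compound a spending figure forward at a constant real growth rate."""
    return base_billions * (1 + annual_rate) ** years

# NSF: $17 billion in 2003, roughly 5 percent real annual growth, to 2007
nsf_2007 = project(17, 0.05, 4)
# PhRMA: $39 billion in 2004, roughly 8 percent real annual growth, to 2007
phrma_2007 = project(39, 0.08, 3)

# Roughly half of R&D goes to clinical testing, per the article's estimate
testing_low, testing_high = nsf_2007 / 2, phrma_2007 / 2

print(f"2007 R&D: ${nsf_2007:.0f}B (NSF) to ${phrma_2007:.0f}B (PhRMA)")
print(f"Implied testing budget: ${testing_low:.0f}B to ${testing_high:.0f}B")
```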

Another way to assess the impact of a $20 billion annual appropriation for clinical testing would be to calculate the number of patients who could be enrolled in trials for this level of expenditure. Extrapolating from the calculations of Joseph DiMasi at the Tufts University Center for the Study of Drug Development, the average cost per subject was about $14,000 in 2007. At this cost, $20 billion would be sufficient to cover 1.42 million enrollees in clinical trials.
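The enrollment figure is simple division; as a quick sketch using the numbers from the text (a $20 billion budget and roughly $14,000 per subject), the budget covers on the order of 1.4 million trial subjects:

```python
# How many trial subjects a $20 billion annual budget could cover,
# at the article's estimated cost of about $14,000 per subject (2007).
annual_budget = 20_000_000_000   # dollars per year
cost_per_subject = 14_000        # dollars, extrapolated from DiMasi

enrollees = annual_budget / cost_per_subject
print(f"{enrollees / 1e6:.1f} million enrollees")  # roughly 1.4 million
```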

I estimate that Medicare could save between 40 and 60 percent on drugs if it were permitted to use its purchasing muscle to negotiate prices closer to production cost. These figures are derived from comparisons between drug prices in the United States and other wealthy countries where government agencies exercise “monopsony” buying power, as well as the prices paid by the Veterans Affairs Department in this country.

Congress could, of course, pay for testing from general revenues. But using savings from reducing the prices the government pays for drugs would be more palatable in political terms. And – equally important – it would reduce the inherently wasteful gap between drug prices and their cost of production without necessarily paring industry incentives to invest in innovation.

The budgetary savings, moreover, would represent only one part of the social gain. Lower prices for drugs purchased under Medicare would also lead to large savings for prescription drug users, who pay almost 70 percent of the cost of drugs purchased under the program.

Efficiency Gains

In addition to the direct savings from lower drug prices, pricing drugs closer to their marginal cost of production would cut the waste inherent in a private system that relies on the temporary monopoly power associated with patents to motivate private R&D. The largest source of efficiency gains would be the reduction in incentives to plow money into marketing drugs. A study published last year in The New England Journal of Medicine estimated that the industry’s marketing expenses were 18 percent of sales – roughly the same proportion spent on R&D! Cutting the markups on production costs paid by Medicare beneficiaries would reduce the incentive both to advertise prescription drugs and to invest in the goodwill of physicians who write the prescriptions.

While some useful information is no doubt conveyed through marketing, it is hard to avoid the conclusion that most marketing outlays for prescription drugs directed toward physicians and consumers generate trivial societal benefits. Indeed, at times the blizzard of propaganda may convey more misinformation than information, rewarding pharmaceutical companies for the quality of their marketing programs rather than the quality of the products.

Lower drug prices would also reduce incentives to spend time and effort gaming the health insurance system. Medicare beneficiaries would have less reason to choose insurance companies on the basis of which drugs they covered because insurers would be less likely to exclude useful drugs. Physicians would not need to spend as much effort convincing insurers to cover the costs of nongeneric drugs – say, by falsely claiming that a patient was being treated for a condition for which only the expensive drug would do.

Consider, too, that reducing the difference between the market price and production costs would reduce incentives to buy drugs in gray markets. That’s a good thing. For while it might be cheaper today to obtain drugs on the Internet from foreign countries, there are significant safety and efficacy issues in using drugs made with relatively little government oversight. By the same token, lower prices for the real thing would pare incentives to manufacture and market counterfeits.

Making Lemonade from Lemons

Everyone, by now, understands that the huge and rapidly growing drug industry is both a font of life-extending treatment and a source of immense frustration in an era of out-of-control health costs. But the waste inherent in any industry that is both heavily regulated and dependent on intellectual-property protection for profitability also creates opportunities for getting more from less.

On the one hand, publicly financed testing of drugs would reduce the resources that must be invested to ensure that new drugs are safe and effective. On the other, funding the program through government savings from negotiating lower drug prices would reduce the waste linked to the gap between production costs and market prices.

Admittedly, this latter benefit is more problematic than the former, since the willingness to invest in R&D is dependent on many other factors, ranging from the organizational skills of the drugmakers to their cost of capital. But there is good reason to believe that the combination of public financing of drug testing (which would lower the companies’ costs in bringing drugs to market) and the reduced incentive to throw money at hyping drugs would more than offset the negative impact of lower prices in the government-controlled portion of the drug market. Indeed, in an era in which drug companies are under increasing political pressure to negotiate prices anyway, the industry might even be persuaded to support a public financing deal.

Dean Baker is the co-director of the Center for Economic and Policy Research (CEPR). He is the author of The Conservative Nanny State: How the Wealthy Use the Government to Stay Rich and Get Richer. He also has a blog, "Beat the Press," where he discusses the media's coverage of economic issues. You can find it at the American Prospect's web site.