HOWARD: Obamacare’s ‘one size fits all’ health care guidelines
People are individuals, not government-defined herds
Amidst Washington’s bruising battles over Medicare and Medicaid reform, one of the few ideas that still enjoys broad bipartisan support is comparative effectiveness research. CER is designed to compare drugs, medical devices or surgeries and determine which treatment offers the best outcome for the greatest number of patients. The hope is that, used effectively, CER will help public (and eventually private) insurers slash spending without harming patient care.
CER should remain a critical component of health care reform efforts. Paradoxically, however, it can easily go astray and result in greater health care spending and worse health outcomes unless policymakers and researchers revisit some of its key assumptions.
While no one wants to spend money on useless or harmful therapies, picking winners and losers through CER is apt to be much harder than most proponents think. In a new report for the Manhattan Institute, researchers Tomas J. Philipson and Eric Sun find that short-term savings through reimbursement strategies based on CER research may be outweighed by long-term costs from poorer health.
For instance, in one 2005 CER study on antipsychotic drug treatment for schizophrenia in the Medicaid program, researchers found little difference between cheaper generic drugs and more expensive “second generation” treatments. Mr. Philipson and Mr. Sun estimate that if policymakers had restricted access to just the generic versions of those medicines, they would have seen substantial Medicaid savings: $1.2 billion, or roughly 20 percent of the $5.5 billion Medicaid spent on antipsychotic drugs in 2005.
However, Mr. Philipson and Mr. Sun caution that such restricted access ultimately would have denied medicines that would have worked for many patients, resulting in worse mental health for tens of thousands of patients. This is because individual responses to treatment can vary widely. Thus, a drug that seems cheap and effective for most patients can turn out to be a “loser” for large numbers of patients who don’t respond to it or suffer severe side effects.
Mr. Philipson and Mr. Sun found that applying restrictive reimbursement to the Medicaid program for antipsychotics would reduce patient health by more than 13,000 quality-adjusted life years (QALYs). The 75 percent of patients who didn’t respond to older generics would be in worse health because, under a restrictive policy, they wouldn’t have any other therapeutic options. When each QALY is valued at $100,000, a standard practice in the peer-reviewed economic literature, restricting drug access to generics would increase health care costs by $1.3 billion, outweighing the Medicaid savings.
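The cost-benefit arithmetic behind that conclusion can be checked with a simple back-of-the-envelope calculation (the figures below are the Philipson-Sun estimates cited above; the variable names are illustrative only):

```python
# Back-of-the-envelope check of the Philipson-Sun trade-off described above.
qaly_loss = 13_000          # quality-adjusted life years lost under a generics-only policy
value_per_qaly = 100_000    # standard valuation per QALY in the economic literature ($)
medicaid_savings = 1.2e9    # estimated Medicaid savings from restricting to generics ($)

health_cost = qaly_loss * value_per_qaly   # dollar value of the lost health
net_loss = health_cost - medicaid_savings  # restriction costs more than it saves

print(f"Health cost: ${health_cost / 1e9:.1f} billion")
print(f"Net loss after savings: ${net_loss / 1e9:.1f} billion")
```

The lost health ($1.3 billion) outweighs the projected savings ($1.2 billion), which is the report's central point.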
Denying or restricting treatment based on average patient responses ignores not only patient variation but the fact that failure on a “first-line” medicine can predict success with another drug later. Reimbursement policies that ignore this fact and assume that failing on drug A predicts futility with drug B will cut off doctors and patients from more effective options.
Mr. Philipson and Mr. Sun don’t suggest that CER be dropped. Instead, they recommend that researchers improve CER design and implementation to create more individualized treatment recommendations. For instance, CER studies can be designed to collect nuanced data about patient responses to medicines, including information about disease severity, patient tolerance for different side effects and demographic information. More CER studies should also use “cross-over” designs, in which patients are switched between multiple drug treatments, giving physicians a sense of what they should try next if “first line” drugs don’t work.
Most important, Philipson and Sun think CER studies should take into account real-world use - in which patients suffer from multiple problems (such as heart disease, diabetes, and obesity) and take multiple drugs at the same time. Drug trials required for FDA approval are often highly artificial and don’t reflect this complexity. Observational studies can focus on how patients such as these really behave, track side effects that might lead to stopping drug therapy, or illuminate dangerous drug interactions.
Beyond CER, we should recognize that companies already have substantial incentives to produce quality information about drugs and medical devices because they have an intellectual-property incentive to promote use of their products while they are protected by patents. For instance, many companies already compare new drugs to older or generic drugs that are the “standard of care” in clinical trials. Mining existing data like this is more cost effective than spending millions more on new head-to-head drug trials.
The FDA approval process generates plenty of data on drugs and medical devices, but there is much less comparative data available on surgeries and procedures - or the different payment strategies or organizational structures that might contribute to better, more affordable patient outcomes. Becoming overly focused on the price tags of individual drugs and devices risks ignoring the structural incentives that drive health care spending.
Improving CER research should lead to better and more individualized treatment recommendations in the long run. This is especially true because the most vulnerable patients in the system, those with debilitating diseases such as schizophrenia, have the most to lose from treatment guidelines that focus only on “average” patient responses.
Paul Howard is a Manhattan Institute senior fellow and director of the institute’s Center for Medical Progress. He is also managing editor of Medical Progress Today.
© Copyright 2014 The Washington Times, LLC.