Written by Helene Jorgensen
Thursday, 27 May 2010 11:42
The Infectious Diseases Society of America (IDSA) has developed treatment guidelines for a long list of infectious diseases, including Lyme disease. For Lyme disease, the IDSA recommends a very restricted course of 2 to 4 weeks of antibiotic therapy. Although many patients fail this treatment (reported treatment failure rates range from 15 to 69 percent in patients with neurologic Lyme disease), the IDSA recommends against additional treatment for patients who continue to be sick.
For the IDSA to make such a radical recommendation, no additional treatment for patients who fail the recommended treatment, one would expect several large clinical trials to have been conducted in support of it. But that is not the case. In fact, the recommendation rests on a single study by Klempner et al. (2001), which found no treatment effect in two trials conducted on a total of 114 patients. And the study was not even a good one: it suffered from design defects, and its statistical analysis was seriously flawed.
Patients enrolled in the study had been sick for a long time – 4.4 years on average – and had received multiple rounds of antibiotics before entering the study. In fact, more than 25 percent of the treatment group had already received more than 116 days of antibiotic treatment, including intravenous antibiotics, prior to the trial. So the study was not, as claimed, set up to evaluate the effect of treatment in patients who had failed 2 to 4 weeks of therapy. It is also unlikely that the 90 days of additional treatment administered in the study would permanently cure patients who were still sick after having received an even longer course of treatment.
Moreover, Klempner designed his statistical analysis in a way that was biased against finding any treatment effect. Specifically, patients had to show a huge improvement in quality of life, measured by a Medical Outcomes Study (MOS) test, in order for treatment to be considered effective. Klempner compared baseline scores on the physical and mental components of the MOS test with scores 180 days later (90 days after treatment ended). To be counted as “improved,” a patient’s follow-up score had to be at least 6.5 points higher than the baseline score on the physical component and 7.9 points higher on the mental component. On average, these cutoffs correspond to an 18 percent increase in scores, which is a huge improvement. A literature review by biostatisticians Allison Delong and Tau Liu at Brown University found that other medical studies used much lower cutoffs, ranging from 2.0 to 4.7 points (compared with the 6.5 and 7.9 used in the Klempner study). The larger the improvement required to clear the cutoff, the fewer patients will be counted as “improved.” Klempner’s higher-than-standard cutoffs therefore made it less likely that his study would find a treatment effect.
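The effect of the cutoff choice can be illustrated with a small simulation. This is a hypothetical sketch, not Klempner’s data: it assumes score changes are normally distributed around an invented average true improvement of 3 points with an invented spread of 8 points, and simply counts how many simulated patients clear each of the cutoffs mentioned above.

```python
# Hypothetical illustration (invented numbers, NOT the trial's data):
# how the choice of improvement cutoff changes the share of patients
# counted as "improved".
import random

random.seed(42)

true_mean_change = 3.0   # assumed average real improvement (hypothetical)
sd_change = 8.0          # assumed spread of changes (hypothetical)

# Simulate follow-up-minus-baseline score changes for 1,000 patients.
changes = [random.gauss(true_mean_change, sd_change) for _ in range(1000)]

# Literature cutoffs (2.0-4.7) versus Klempner's (6.5 and 7.9).
for cutoff in (2.0, 4.7, 6.5, 7.9):
    share = sum(1 for c in changes if c >= cutoff) / len(changes)
    print(f"cutoff {cutoff:>4} points -> {share:.0%} counted as improved")
```

Whatever numbers one assumes, the pattern is the same: the share of patients classified as “improved” falls monotonically as the cutoff rises, which is the bias the Brown biostatisticians pointed to.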
According to Klempner’s own baseline data, the patients’ mental score was only 6.4 points lower than the score for the general U.S. population without a chronic illness. This means that patients, on average, would have had to outperform the general population at the end of the study in order to register any improvement at all. That is clearly an unrealistic expectation, and it would therefore have been very surprising if Klempner had found any treatment effect. Delong and Liu determined that the Klempner study suffered from “substantial statistical problems that prevent its use in formulating treatment guidelines.”
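The arithmetic behind that point can be checked in a few lines, using only the figures quoted above (the general-population score is set to zero as an arbitrary reference level):

```python
# Back-of-the-envelope check of the argument above: patients start 6.4
# points below the general-population mental score, and the cutoff
# requires a gain of at least 7.9 points to count as "improved".
general_population = 0.0                     # arbitrary reference level
patient_baseline = general_population - 6.4  # baseline deficit (from the study)
required_followup = patient_baseline + 7.9   # minimum "improved" score

# The minimum score that counts as "improved" lands ABOVE the general
# population's score.
print(round(required_followup - general_population, 1))  # prints 1.5
```

In other words, a patient at the average baseline had to end up 1.5 points better off than the healthy general population just to be counted as improved.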
Having found no treatment effect from antibiotic therapy, Klempner et al. concluded that patients did not have an active Lyme disease infection, noting that the study did not detect any B. burgdorferi DNA in patients’ blood or spinal fluid. As they put it, although “we used PCR to detect B. burgdorferi [the bacteria causing Lyme disease] DNA in base-line samples of blood and cerebrospinal fluid as well as samples of blood collected during treatment, we did not find evidence of persistent infection with B. burgdorferi in these patients” (p. 89). But patients with a positive PCR test for the DNA were specifically excluded from the study from the very beginning. As the authors wrote on p. 85: “Patients with a positive polymerase-chain-reaction (PCR) test for B. burgdorferi DNA in plasma and cerebrospinal fluids at baseline were also excluded.”
So by design, the study excluded patients with evidence of an active infection (as determined by a positive PCR test). This severely limits the study’s usefulness for assessing the benefits of additional treatment. It therefore remains quite possible that patients who fail the IDSA-recommended treatment would benefit from additional antibiotic therapy.
With so little scientific research to support their recommendation, why would the Infectious Diseases Society of America, a medical society representing infectious disease doctors, recommend that sick patients be denied treatment? An investigation by the Attorney General of Connecticut found serious flaws in the IDSA’s process for writing treatment guidelines: the authors of the Lyme disease guidelines had various undisclosed conflicts of interest, including consulting arrangements with insurance companies.
Insurance companies have a stake in how treatment guidelines are written. They use restrictive guidelines as a basis for denying coverage of expensive therapies: treatment that falls outside the guidelines is deemed “experimental” and typically not covered. It is not uncommon, for example, for insurance companies to deny coverage of antibiotic therapy for chronic Lyme disease based on the IDSA’s flawed guidelines. This leaves Lyme patients with a choice between paying for treatment out of pocket and being condemned to a life of chronic illness. For people who cannot afford treatment, that is not much of a choice at all.