Dr. Caspi and his colleagues provide a valuable look at the vexing problem of proof. How do we determine whether a "standard" or "alternative/complementary" therapy actually works and is safe? Western medicine has traveled a challenging road to reach its current burden of proof. Complementary and alternative medicine (CAM) remains enormously popular among diverse segments of Western and non-Western societies. Proving the efficacy and safety of these techniques poses significant challenges. In part, the medical-scientific culture that defines mainstream medicine is different from the CAM culture that runs parallel to and divergent from it; however, when it comes to proof, we are left with few alternatives that eliminate the bias inherent in patients, caregivers, and scientific researchers.
Western medicine is laden with numerous and deep faults. We are all biased—even the most scientifically gifted fall prey to their emotions and feelings, elevating opinions beyond their authority. Consider a lecture, popular 15 to 20 years ago, given by a prominent epilepsy specialist whose medical and scientific publications focused on basic models of epilepsy, examining cellular mechanisms in the laboratory. He spoke on the topic of "rational polytherapy": how clinicians combine two or more traditional antiepileptic drugs in a patient using "scientific" principles. For example, the lecturer advocated combining drugs with different mechanisms of action and avoiding drugs with similar side-effect profiles. It was a well-conceived and well-articulated presentation. Its remarkable feature was the certainty with which the argument was framed. It was not offered as "my clinical approach" or "in my experience," nor was it based on any published data. It went further: it criticized physicians who used certain combinations and clearly favored specific combinations for certain seizure types. It was presented as if it were based on rigorous studies and data. It was not. If someone had presented an argument on similarly thin evidence at one of the scientific meetings on basic animal mechanisms, those in attendance would have laughed. Indeed, counterarguments could easily be raised. Some of the most effective anticancer drug combinations rely on targeting a single pathway at different sites. Further, although we had a good idea 15 to 20 years ago about how the major antiepileptic drugs prevent or stop seizure spread, our knowledge remains incomplete today. We must always honestly assess what we do know and what we do not know. Medical doctors often fail to meet this challenge.
This scientist's error came from generalizing his value system: he granted the certainty earned only by carefully designed scientific studies to clinical medicine's lowest echelon of proof, "in my experience" or "I think." Yet from these very humble origins of anecdotal observation in a few patients or interesting ideas, wonderful things have followed. Many were subsequently proven by well-designed and well-executed double-blind studies. In other cases, the opinions have been proven false. Darwin, in the closing chapter of The Descent of Man, admonished us with prophetic wisdom:
"False facts are highly injurious to the progress of science, for they often endure long; but false views, if supported by some evidence, do little harm, for everyone takes a salutary pleasure in proving their falseness: and when this is done, one path towards error is closed and the road to truth is often at the same time opened."
Studies on traditional medical and surgical therapies can also suffer from a number of other problems. In many cases, the companies that manufacture a drug or device fund the studies. The potential conflicts here are enormous—in study design, outcome measures, choice of investigators, control of the data, the statistical tests that are chosen (and possibly those that are tried and never reported), and the interpretation and "spin" placed on the results. For double-blind studies, much of this potential bias should be eliminated by the study design. The most notorious place for abuse is post-marketing studies. In some cases, companies provide direct grants to investigators to study a specific question about their drug or device by reviewing patient charts; in some cases, they assist in obtaining the data and writing the manuscript. The cracks in the medical system are perhaps deepest here, and attempts are underway to fix these problems. Similar, but also different, problems plague the CAM study literature.
Table 2.2 provides a nice overview of the types of studies that are used to support the effectiveness and safety of both traditional and CAM therapies. Single case reports and observations of small numbers of patients, an even more basic and less formal level of study than the "open-label" study, are often the starting ground for therapeutic advances. The use of amantadine to treat influenza led to the observation that some patients with Parkinson's disease showed improvements in energy level and motor function, which in turn led to more formal studies and wider use of this medication in Parkinson's disease. Observations in individual cases are powerful incentives and stimulants toward new ideas, but they are also where the greatest errors in relating cause and effect can occur. As one moves up the ladder from open-label to single-blind to dual-blind to double-blind studies, the value of the data increases; however, many other factors can limit the value of the data and their use in drawing conclusions. If the measures are not carefully designed, if outcome measures are not defined before the study begins, if the wrong statistics are used, or if too many subgroups or too few patients are studied, the results may lack meaning.
This chapter provides an important perspective on the critical nature of proof in scientific studies, whether those studies address traditional or CAM therapies. We hope that it promotes more accurate descriptions of studies and their designs, and further raises the bar on proving the efficacy and safety of all therapies.
Several factors might contribute to the misuse of the label "double-blind studies" with interventions in which double-blind testing is difficult, if not impossible. These factors include nondiscriminating research terminology, desire for academic respectability, and competition to publish and seek funding.
Medicine is practiced under societal forces that greatly influence practitioners. Data are the only dependable currency in the scientific world; the better the source of the data, the more valuable the currency. In the perceived hierarchy of evidence, double-blind trials produce methodologically superior, higher-quality data than non-double-blinded studies. The value of the label "double-blind study" creates a strong temptation to apply this potent label to clinical studies that involve some blinded parties, even when the design is not truly double-blind. Categorizing a dual-blind study as a double-blind one may be considered a "white lie," but it is a dangerous one. Are these clinical trials incorrectly described as double-blind with the possible intent that they will be regarded as somehow better and, therefore, more likely to be published or funded; or is the fault merely one of an inadequate methodologic vocabulary? We believe that adopting a specific descriptor for the blinded, but less than double-blind, study design will help foster greater truth in advertising and research reporting.
The problem of mislabeled studies is probably not unique to CAM; examples likely occur in other medical areas, especially those in which double-blind testing is very difficult. The distinction proposed herein could be applied generally; however, the field of CAM in particular would benefit from preserving this important scientific distinction and from more accurately characterizing the quality of the data that are presented.