Evidence-based medicine in need of proving its quality mark, Part 1

Evidence-based medicine (EBM) emerged in the early 1990s as a novel, ambitious paradigm aimed at making medical care safer and more cost-effective. The message was clear but revolutionary: healthcare decisions should be based on the "best evidence" obtained from high-quality experimental studies. Free from cognitive bias and standing firmly on scientific ground, EBM was supposed to replace clinical experience confounded by "making the same mistakes with increasing confidence." Randomized controlled trials (RCTs) became the acknowledged standard for generating scientifically robust evidence. To date, the total number of completed RCTs exceeds 100,000 and continues to grow exponentially. Has EBM's goal been achieved?

In fact, the answer is "somewhat". Success has been achieved in some areas, such as the treatment of asthma, the prevention of postsurgical thromboembolic complications, and myocardial infarction aftercare. But in general, no revolution happened. Moreover, some costly lessons have been learned, with the "oseltamivir fiasco" among the most striking cases1,2. A complete re-analysis of the data, initiated by the Cochrane Collaboration, revealed that oseltamivir's benefits had been substantially overestimated. Tamiflu only modestly reduced the time to first alleviation of symptoms, while causing many adverse events. Thus, the overall risk-benefit ratio was seriously compromised3,4.

Another example of misleading results driven by multiple biases is the "antidepressant myth." Plenty of industry-funded RCTs have been conducted for dozens of depression-targeted drugs, and many of them have received marketing approval. Independent analyses told a different story: many of these drugs are only slightly better than placebo, with differences that are clinically insignificant5.

How is this possible? The cause is inherent in the approval procedures employed by regulatory agencies. For example, the FDA requires two adequately conducted clinical trials with positive results. But why not perform an unlimited number of studies until the desired difference eventually appears? Negative findings can simply be omitted, and therein lies the loophole. Selective reporting of positive results, or publication bias, is an important and well-recognized problem in biomedical science, and RCTs, especially industry-funded ones, are highly prone to it6,7. Together with time lag bias, which arises from delays in publishing negative results, it leads to an exaggeration of the evidence.
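The arithmetic behind this loophole is simple. As an illustrative sketch (not any agency's actual calculation): even for a drug with no real effect, the chance that at least one trial crosses the conventional significance threshold of α = 0.05 grows quickly with the number of trials run.

```python
# Probability of at least one false-positive trial under a true null effect,
# when each independent trial uses a significance threshold of alpha = 0.05.
def chance_of_false_positive(num_trials: int, alpha: float = 0.05) -> float:
    # P(at least one "positive") = 1 - P(all trials negative)
    return 1.0 - (1.0 - alpha) ** num_trials

for k in (1, 2, 5, 10, 20):
    print(f"{k:2d} trials -> {chance_of_false_positive(k):.0%} "
          f"chance of at least one 'positive' result")
```

With ten trials of an ineffective drug, there is roughly a 40% chance of obtaining at least one "positive" study purely by chance; if only the positive trials are submitted or published, the evidence looks far stronger than it is.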

Selective publication is not the only way to manipulate data. For example, one can enroll participants who are more likely to respond to treatment than others, or selectively report positive findings for specific subgroups, outcomes, or drug doses. Furthermore, if allocation is not properly concealed and blinding is not thoroughly maintained, a "double-blind, randomized, placebo-controlled" trial stops being one. Maintaining blinding is not an easy task, and the methodology can be violated unintentionally: patients may guess which group they belong to from the adverse events they experience. Finally, manipulating a study's statistical power makes it possible either to declare two treatments similar when a real difference was simply missed (underpowered trials), or to produce statistically significant differences too small to be clinically important (overpowered trials).
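To make the under- and overpowering point concrete, here is a rough sketch using a standard normal approximation to a two-sample comparison (an illustrative formula, not any specific trial's calculation), showing how power depends on sample size and the standardized effect size d:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(d: float, n_per_group: int, alpha_crit: float = 1.959964) -> float:
    """Approximate power of a two-sided, two-sample z-test for
    standardized effect size d with n_per_group patients per arm.
    alpha_crit is the critical z-value (1.96 for alpha = 0.05, two-sided)."""
    return normal_cdf(d * math.sqrt(n_per_group / 2.0) - alpha_crit)

# Underpowered: a real, moderate effect (d = 0.3) is likely to be missed.
print(f"d = 0.3, n = 64 per arm:   power ~ {power_two_sample(0.3, 64):.2f}")
# Overpowered: a clinically trivial effect (d = 0.1) is almost surely 'significant'.
print(f"d = 0.1, n = 3000 per arm: power ~ {power_two_sample(0.1, 3000):.2f}")
```

With 64 patients per arm, a genuinely moderate effect has only about a 40% chance of reaching significance, so a "negative" result says little; with 3,000 per arm, even a trivial effect of d = 0.1 is detected about 97% of the time, yielding statistical significance without clinical importance.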

These problems and challenges have laid the ground for claims that the EBM quality mark has been obscured by vested interests, and that the movement as a whole is in crisis7,8.

Nevertheless, certain steps towards delivering "faithful" EBM can be taken, and they are being taken. Mandatory registration of all clinical trials, together with public disclosure of their protocols and results, seems the most direct path to a solution. Such disclosure would allow independent reassessment of the data and provide an opportunity to reveal methodological flaws and biases, such as lack of generalizability or deviation from the initial study plan.

Another possible solution is greater investment in independent research. Most RCTs are now sponsored by pharmaceutical companies, a practice Every-Palmer and Howick7 compared to "politicians counting their own votes." Having spent so much effort and money on RCTs, will sponsors be honest enough to accept that the initial hypothesis was wrong, or not as fruitful as they expected? Nobody can guarantee it. But the interpretation of research results should be free from any expectations: the ultimate goal of any study should be obtaining valid results, and negative findings must be valued as highly as positive ones.

Finally, special instruments can be used to reduce industry-driven bias by downgrading the weight given to studies with conflicts of interest. Currently, these methods do not account for all the relevant factors, and more effective schemes still need to be developed7.

Learn more: Part 2



[1] Jefferson T, Doshi P. Multisystem failure: the story of anti-influenza drugs. BMJ. 2014;348:g2263. http://dx.doi.org/10.1136/BMJ.G2263

[2] Gupta YK, Meenu M, Mohan P. The Tamiflu fiasco and lessons learnt. Indian J Pharmacol. 2015;47(1):11-16. http://dx.doi.org/10.4103/0253-7613.150308

[3] Jefferson T, et al. Oseltamivir for influenza in adults and children: systematic review of clinical study reports and summary of regulatory comments. BMJ. 2014;348:g2545.

[4] Jefferson T, et al. Neuraminidase inhibitors for preventing and treating influenza in adults and children. Cochrane Database Syst Rev. 2014;(4):CD008965.

[5] Kirsch I. Antidepressants and the placebo effect. Z Psychol. 2014;222(3):128-134.

[6] Dwan K, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3(8):e3081.

[7] Every-Palmer S, Howick J. How evidence-based medicine is failing due to biased trials and selective publication. J Eval Clin Pract. 2014;20(6):908-914.

[8] Greenhalgh T, et al. Evidence based medicine: a movement in crisis? BMJ. 2014;348:g3725.