Clinical studies can be divided into two major categories: interventional studies (also known as clinical trials or experimental studies) and observational studies. Both types can be used to investigate the relationship between an exposure (treatment with medicines, other manipulations, lifestyle changes, etc.) and an outcome. The main difference is that in interventional studies the exposure is assigned by the investigator, while in observational studies no change is imposed: the exposure occurs “naturally” or as part of routine medical care.
The most common type of interventional study design is the randomized controlled trial (RCT). RCTs are considered the best source of evidence for assessing the efficacy of treatment. This design makes it possible to minimize or even avoid several forms of bias, such as:
- allocation bias (by means of randomization)
- performance and ascertainment biases (due to blinding and controlled design)
- selection bias (via proper allocation concealment)
Thus, RCTs are generally superior to observational studies, which cannot employ randomization to eliminate confounding. However, despite these obvious strengths, RCTs have some limitations.
- High costs and time constraints limit sample sizes and encourage reliance on surrogate markers of the outcomes of interest. As a result, rare long-term outcomes (such as mortality), uncommon adverse events, and the durability of treatment effects cannot be properly tracked.
- Data obtained in RCTs cannot always be generalized to the entire population: although RCTs have high internal validity, they may lack external validity.
- Taking years to plan, conduct, and analyze, RCTs can hardly keep up with urgent needs such as infection outbreaks, or with clinical innovations and new standards of care1.
Moreover, it would be mistaken to think that all RCTs are flawless and never produce misleading or erroneous findings. In fact, the quality of the data ultimately depends on how the RCT has been implemented2. In some cases, this design is unethical or simply infeasible: a trial may require a huge number of participants or an extremely long study period, making its conduct an insurmountable challenge. The choice will then be either to go without evidence or to apply an observational study design3.
On the other hand, the coming era of “big data” provides unparalleled opportunities to expand the reach of clinical research4. Broad adoption of electronic medical records, the establishment of large biobanks containing detailed information on the health and well-being of thousands or even millions of individuals, and the willingness to share these data with all bona fide researchers5 make observational studies a powerful source of evidence that cannot be ignored. Their impact may be especially high in the field of comparative effectiveness research6, where observational studies can be a valuable alternative to RCTs. The important question, therefore, is how to overcome the hindrances related to hidden bias and enhance the validity and trustworthiness of causal inferences.
Different techniques and analytical approaches aimed at solving the confounding problem have been developed. Standard methods include regression models that statistically adjust for potential confounders, as well as more complex propensity score methods6,7. Nevertheless, it may be impossible to collect data on every potentially confounding variable, and unobserved characteristics can still remain unbalanced between the studied groups.
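The propensity score logic can be illustrated with a minimal sketch on simulated data (all numbers and variable names here are invented for illustration, not taken from any real study). A binary confounder drives both treatment assignment and outcome, so the naive treated-vs-untreated comparison is distorted; weighting each subject by the inverse of the estimated probability of the treatment actually received balances the confounder across groups and recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Observed binary confounder (think: disease severity): severe patients
# are more likely to receive the treatment AND have worse outcomes.
x = rng.binomial(1, 0.4, size=n)
# Treatment assignment depends on the confounder.
t = rng.binomial(1, np.where(x == 1, 0.7, 0.2))
# Outcome: the true treatment effect is 1.0; x shifts the outcome by -2.
y = 1.0 * t - 2.0 * x + rng.normal(size=n)

# Naive comparison of treated vs untreated -- confounded by x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Propensity score: P(treatment | x), here simply estimated within each
# stratum of x (a regression model would be used with many covariates).
ps = np.where(x == 1, t[x == 1].mean(), t[x == 0].mean())

# Inverse-probability weighting balances x between the groups.
ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
```

With these parameters the naive difference is pulled far below the true effect of 1.0, while the weighted estimate lands close to it. Note that this only works because x is observed; as the text stresses, no weighting scheme can balance a confounder that was never measured.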
Another state-of-the-art approach receiving increasing attention is the instrumental variable (IV) technique6. The main idea of this quasi-experimental study design is the selection of an additional variable (an instrument) that is strongly related to sorting into treatment and, at the same time, is NOT associated with the outcome other than through its effect on this sorting7. The first property is called instrument strength; the second, also known as the exclusion restriction, is the key assumption of the IV model: the IV must not be related to any factors that may influence the treatment effect (such as the general quality of care or patients’ willingness to try a new drug because of disease severity). The strength of the chosen instrument can be tested empirically, and confidence that the exclusion restriction is not violated can be increased by running certain falsification tests7.

If the IV is selected properly, it can serve as a “randomizer”, and the study can approximate an RCT. Such an IV can be, for example, a physician’s prescribing pattern, which was used as a source of random variation in a recent large observational study comparing the effectiveness of two second-line diabetes drugs8. It is noteworthy that a variable shown to be a good IV in one study may not be appropriate in another; therefore, IV validity should be rigorously tested every time. Finally, the IV method cannot be considered a “magic bullet”: causal inference provided by this analytic technique will never be as strong as evidence produced by a well-conducted RCT6. Nevertheless, its potential to shed light on multiple unanswered clinical questions is undoubtedly great.
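The IV logic can likewise be sketched with simulated data (again, all numbers are made up; the binary instrument z stands in for something like a prescribing preference). A hidden confounder u biases the naive treated-vs-untreated slope, but the simple Wald estimator, the instrument's effect on the outcome divided by its effect on treatment, recovers the true effect because z is unrelated to u:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved confounder: affects both treatment choice and outcome.
u = rng.normal(size=n)
# Instrument (e.g., a prescribing preference): influences treatment
# assignment but has no direct pathway to the outcome.
z = rng.binomial(1, 0.5, size=n)
# Treatment: driven by the instrument AND by the hidden confounder.
t = (0.5 * z + 0.5 * u + rng.normal(size=n) > 0.5).astype(float)
# Outcome: the true treatment effect is 2.0; u adds hidden confounding.
y = 2.0 * t + 1.5 * u + rng.normal(size=n)

# Naive OLS slope of y on t -- biased upward, since u raises both.
naive = np.cov(y, t)[0, 1] / np.var(t)

# Wald (IV) estimator with a binary instrument: the instrument's
# effect on the outcome divided by its effect on treatment uptake.
iv = (y[z == 1].mean() - y[z == 0].mean()) / (
    t[z == 1].mean() - t[z == 0].mean()
)
```

Here the naive slope substantially overshoots the true effect of 2.0, while the Wald ratio lands near it. The same estimator would fail if z had any direct effect on y, which is exactly the exclusion restriction the falsification tests mentioned above try to probe.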
1. Frieden TR. Evidence for health decision making — beyond randomized, controlled trials. N Engl J Med 2017;377:465–75. http://dx.doi.org/10.1056/NEJMra1614394
2. Evidence-based medicine in need of proving its quality mark, Part I. Accessed 26 September 2018. http://www.atlantclinical.com/evidence-based-medicine
3. Frakt AB. An observational study goes where randomized clinical trials have not. JAMA 2015;313:1091. http://dx.doi.org/10.1001/jama.2015.0544
4. Frakt AB, Pizer SD. The promise and perils of big data in healthcare. Am J Manag Care 2016;22:98–9. https://www.ajmc.com/journals/issue/2016/2016-vol22-n2/the-promise-and-perils-of-big-data-in-healthcare
5. UK Biobank. Accessed 26 September 2018. http://www.ukbiobank.ac.uk/
6. Pizer SD. An intuitive review of methods for observational studies of comparative effectiveness. Health Serv Outcomes Res Methodol 2009;9:54–68. http://dx.doi.org/10.1007/s10742-009-0045-3
7. Pizer SD. Falsification testing of instrumental variables methods for comparative effectiveness research. Health Serv Res 2016;51:790–811. http://dx.doi.org/10.1111/1475-6773.12355
8. Prentice JC, et al. Capitalizing on prescribing pattern variation to compare medications for type 2 diabetes. Value Health 2014;17:854–62. http://dx.doi.org/10.1016/j.jval.2014.08.2674