Basic Science vs. Applied Science Research: Why we need both

Randomized Controlled Trials (RCTs) have become the gold standard for drug and device approval primarily because of their namesake: randomization. Random assignment of patients reduces confounding from allocation,[1] so the treatment and control groups are essentially equivalent at baseline. If patients were not randomized, a biased investigator could stack the groups to achieve favorable results. RCTs are well suited to testing the expected outcomes of a treatment. For example, suppose 2,000 patients are randomized to treatment and placebo groups for a novel cholesterol therapy, and the treatment group shows a significant decrease in clinical biomarkers such as LDL, triglycerides, and total cholesterol. The number of patients in the trial provided enough power to avoid a false negative for our primary and secondary endpoints, but was it large enough to detect all of the potential adverse reactions to the drug compared to placebo? The high cost of RCTs prevents us from conducting massive blinded clinical trials with hundreds of thousands of patients, even though some rare adverse events would require a trial of that size to keep the Type II error rate acceptably low. These cost limitations leave us to use the information we have available, conducting observational research to identify unknown cause-and-effect relationships with approved drug therapies.[2]
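To see why rare adverse events strain trial size, consider how many patients are needed just to have a good chance of observing one event. A minimal sketch, with illustrative event rates not tied to any specific trial:

```python
import math

def patients_needed(event_rate, power=0.8):
    """Smallest n such that the chance of seeing at least one event,
    1 - (1 - p)^n, reaches the desired power. Illustrative only."""
    return math.ceil(math.log(1 - power) / math.log(1 - event_rate))

# A common adverse event (1 in 100) is detectable in a modest cohort,
# well within a 2,000-patient trial:
print(patients_needed(0.01))

# A rare event (1 in 10,000) pushes the required cohort into the
# tens of thousands, which is why cost becomes the binding constraint:
print(patients_needed(0.0001))
```

Observing an event once is a far weaker requirement than showing a statistically significant difference versus placebo, so real safety trials would need to be larger still.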

Health Services Research, or Applied Research, involves a multidisciplinary team approach to studying questions in a more real-world environment. In the lab, a scientist can control many external factors, but this control reduces the external validity of the research results.[3] By using applied research methods we can increase external validity and, with it, the usefulness of results for clinical and policy decisions. That strength comes with a drawback: because the environment is not as tightly controlled, the researcher may lose internal validity and the sensitivity to detect true differences between treatment and control.

Answering questions related to issues like health disparities can be achieved with both basic and applied research methods. Using traditional bench-style methods, we might bring in study volunteers and use games to simulate healthcare decision-making. This approach could help us identify differences in patients' willingness to pay (WTP) for health services based on ethnicity, gender, income, or other factors. For example, researchers used a controlled survey of 193 individuals to show differences in WTP for elective procedures such as total knee arthroplasty between racial groups.[4] That study poses an interesting question, and we can take the next step with applied research methods to determine whether actual decisions to have a knee replacement differ by race. To do this, we might use claims data over a period of time, include the millions of patients with a diagnosis of osteoarthritis of the knee, and separate those who went through with surgery from those who did not. We could then examine race along with many other factors and test for statistically significant differences. Both methods help answer why some people elect to have surgery and others do not. While it may be harder to detect differences in the applied study than in the basic research design, one could argue that its results are more useful for policy decisions because they are based on actual claims rather than on a survey of patients' answers to hypothetical scenarios.
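The claims-data comparison described above boils down to testing whether surgery rates differ between groups in a 2x2 table. A minimal sketch with invented counts (not from any real dataset), using a hand-rolled Pearson chi-square so no statistics library is assumed:

```python
# Hypothetical 2x2 contingency table from a claims-data cohort:
# rows = patient groups, columns = (had surgery, no surgery).
# All counts below are invented for illustration only.
table = [[300, 700],   # group A
         [220, 780]]   # group B

def chi_square(obs):
    """Pearson chi-square statistic for a table of observed counts."""
    row_totals = [sum(r) for r in obs]
    col_totals = [sum(c) for c in zip(*obs)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(obs):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / n  # expected count
            stat += (o - e) ** 2 / e
    return stat

# Compare against the 1-degree-of-freedom critical value at alpha = 0.05.
print(chi_square(table) > 3.841)
```

A real claims analysis would adjust for the "many other factors" mentioned above (age, comorbidities, insurance type) with a regression model rather than a single unadjusted test, but the underlying comparison is the same.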


[1] Stolberg HO, Norman G, Trop I. Fundamentals of clinical research for radiologists: randomized controlled trials. AJR. 2004. Accessed March 10, 2014.

[2] Avorn J. Powerful medicines: the benefits, risks, and costs of prescription drugs. Random House LLC (2004).

[3] Hedrick TE, et al. Applied Research Design: A Practical Guide. Sage (1993).

[4] Byrne MM, O'Malley KJ, Suarez-Almazor ME. Ethnic differences in health preferences: analysis using willingness-to-pay. J Rheumatol. 2004;31(9):1811-1818.

Copyright © 2014 A & J Consulting, LLC
Joey Mattingly, PharmD, MBA is an assistant professor at the University of Maryland School of Pharmacy located in Baltimore, Maryland. Joey has managed retail and long-term care pharmacy operations in Kentucky, Illinois, and Indiana. Leading Over The Counter is a blog of Joey's views and opinions on pharmacy leadership and management; it does not represent the University of Maryland, Baltimore. Joey can be followed on Twitter @joeymattingly.

