By Phillip M. Stephens, DHSc, PA-C
The “research police.” It’s a title every medical provider needs to embrace these days. Our evidence-based medical system is increasingly overwhelmed with data, and if there’s anything worse than no data, it’s bad data.
We like to think that physicians are comfortable navigating medical literature, but in reality much of it is hidden from us or purposely buried. Mark Twain was right: there are three kinds of lies: lies, damned lies, and statistics.
Research about research, sometimes called meta-research, is a growing field aimed at combating the problem. Its initial findings are startling.
In 2010, Harvard researchers examined drug trials across five major classes of drugs, measuring only whether each trial was positive and who sponsored it. After reviewing more than 500 trials, they found that 85 percent of industry-sponsored studies produced positive results, compared with only 50 percent of government-funded studies.
In 2006, researchers examined 542 trials of psychiatric drugs conducted over a 10-year period. Again, industry-sponsored studies favored the sponsor's drug 78 percent of the time, compared with 48 percent of independent trials.
Publication bias affects every field of science. Numerous groups have tried to reproduce studies published in academic journals, and when they fail to reproduce the results, the same journals often decline to publish the failed replication. Many are simply not interested in negative results.
In March 2012, the journal Nature did publish the results of an effort in which researchers attempted to replicate the findings of 53 cancer research studies. They were able to reproduce only six.
The core issue for evidence-based medicine is simple: if 10 studies are conducted but physicians see only the five that favored the tested drug or treatment, the evidence base itself is distorted. So what do we do?
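As a purely hypothetical illustration (not from any of the studies above), a short simulation shows why hiding negative trials distorts the evidence: even for a treatment with no real effect, averaging only the "positive" trials produces a convincingly positive result.

```python
import random

random.seed(42)

def run_trial(true_effect=0.0, noise=1.0):
    """One simulated trial: observed effect = true effect + sampling noise."""
    return random.gauss(true_effect, noise)

# Simulate 1,000 trials of a treatment with NO real effect.
all_trials = [run_trial() for _ in range(1000)]

# Publication bias: only trials with an apparently positive effect get published.
published = [effect for effect in all_trials if effect > 0]

mean_all = sum(all_trials) / len(all_trials)        # hovers near zero
mean_published = sum(published) / len(published)    # clearly positive

print(f"Mean effect, all trials:      {mean_all:+.2f}")
print(f"Mean effect, published only:  {mean_published:+.2f}")
```

Roughly half the trials land on the positive side by chance alone, so the "published" average suggests a benefit that does not exist. This is the same distortion, in miniature, that selective publication introduces into real literatures.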
While publication bias and selective referencing certainly exist, things are slowly changing. We now have open-access journals that publish human research trials whether the results are positive or negative. There is also a push for registration of trials, although this hasn't been aggressively enforced. Independent systematic reviews are a huge help and should be on the reading list of every medical provider. In particular, the Cochrane Collaboration has been a gold standard for evidence reviews since its founding in 1993. But there is more we can do.
Insist on Better Data
Whether interacting with the pharmaceutical industry or policymakers, insist that all trial data be made available, and that policies ensuring good methodology are developed by your local medical center, the Food and Drug Administration (FDA) and academic journals. After such a campaign, Cochrane finally persuaded the maker of Tamiflu to release all of its data, and Nature subsequently published an account of the Cochrane/Tamiflu controversy.
Teach Medical Students How to Spot Bad Evidence
Preparing the next generation of medical providers to defend themselves against industry marketing and flaky evidence is imperative. Critical thinking skills and research methodology should be a core component of every medical curriculum.
Professional Organization Involvement
Every medical association serves as a needed watchdog over medical evidence through its best-practices committees and policy statements.
Critically Appraise Data Independently
It's time consuming, but using Cochrane and other independent systematic reviews gets us only partway toward good data. Just as you have a system for reading an X-ray or interpreting a set of lab results, develop and teach a system for examining research that goes beyond simply reading the abstract.
This involves:
- Scanning the abstract
- Identifying the research problem and its logical consistency
- Examining the literature review for balanced critical analysis
- Identifying the theoretical framework, research question and hypothesis
- Reviewing the methods: the sample size and target population, and how the data was collected and analyzed
- Noting any ethical considerations, especially in the methodology
- Examining the procedures, variables, analysis and discussion
- Noting the level of evidence
- Remember: not all evidence is created equal. It ranges from level 5 evidence, "expert" opinion (which could mean three guys in a room who think it's a good idea), up to level 1a evidence, a systematic review of randomized trials with homogeneity of results.
The process by which we generate and interpret evidence-based data is not currently a prominent part of medical education, but in a world of evidence-based medicine, it should be. With an ever-increasing volume of evidence feeding evidence-based practice, faulty or misleading data is the next public health hazard. So when the news media announces one day that coffee is bad for you and unashamedly says the next day that it's good, question the evidence.
Phillip Stephens, DHSc, PA-C, is the associate practitioner site director for Emergency Medical Associates at Southeastern Regional Medical Center, Lumberton, N.C. He is adjunct faculty at A.T. Still University in Mesa, Ariz., where he teaches research methodology, and has practiced as an emergency medicine physician assistant for 25 years.
