
Big Data: The Danger in Knowing Less and Less about More and More

If policymakers want clearer direction about how to prioritize limited resources, then they can stop paying for meta-analysis.
September 25, 2015

OPINION -- Doctors are awash in data. With the advent of Big Data in healthcare, and its promise to bring physicians, policymakers, and payers a Grand Unifying Theory of Everything, a cottage industry has formed in its wake: Call it the Big Crunch.

The Big Crunch comprises academics, consultants, and others who promise to distill Big Data down to its essential meaning, reifying disparate data points into clear action. As funding sources for original scholarship have dried up, contracts awarded by government agencies to analyze existing data have become its lifeblood.

Meta-analysis is one popular analytical tool used by the Big Crunch, earning it canon status among biomedical, public health, and social scientists. The practice of meta-analysis resembles a sort of statistical alchemy: by combining the dregs and dross of individually insignificant studies, the meta-analyst may endeavor to produce the gold of a pooled positive result.
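
To make the alchemy concrete, here is a minimal sketch in Python of fixed-effect, inverse-variance pooling, the standard machinery under the hood of many meta-analyses. The three studies and their effect sizes are invented purely for illustration: each is non-significant on its own, yet the pooled estimate crosses the conventional significance threshold.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling: three
# hypothetical studies, each individually non-significant, combine
# into a significant pooled estimate. All numbers are invented.
import numpy as np
from scipy import stats

effects = np.array([0.20, 0.25, 0.18])  # hypothetical study effect sizes
ses = np.array([0.13, 0.15, 0.12])      # hypothetical standard errors

# Each study alone: z < 1.96, so p > 0.05
z_individual = effects / ses
print(z_individual)  # roughly [1.54, 1.67, 1.50] -- none is significant

# Inverse-variance weights favor the most precise studies
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

z_pooled = pooled / pooled_se
p_pooled = 2 * stats.norm.sf(abs(z_pooled))
print(pooled, z_pooled, p_pooled)  # pooled z ~ 2.7, p < 0.01: "gold"
```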

Other times, the meta-analyst may coax the opposite effect. By selecting which studies to include, weighting them unequally, or omitting studies that practicing experts would call pivotal, the meta-analyst can erase real effects.
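
The reverse trick is just as mechanical. Continuing the sketch above, again with invented numbers: a single large "pivotal" study drives a clearly significant pooled result, and quietly excluding it makes the effect all but disappear.

```python
# Continuing the sketch: selective exclusion. With one large,
# hypothetical "pivotal" study included, the pooled effect is clearly
# positive; drop it from the analysis and the effect vanishes.
import numpy as np
from scipy import stats

def pool(effects, ses):
    """Fixed-effect inverse-variance pooled estimate and two-sided p-value."""
    w = 1.0 / np.asarray(ses) ** 2
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, 2 * stats.norm.sf(abs(est / se))

effects = [0.40, 0.05, -0.02, 0.03]  # first entry is the pivotal trial
ses = [0.10, 0.20, 0.22, 0.25]

print(pool(effects, ses))          # pivotal study included: significant
print(pool(effects[1:], ses[1:]))  # pivotal study excluded: effect erased
```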

Importantly, because meta-analysis is non-experimental, the method itself does not control for sources of bias: a good meta-analysis of badly designed studies will still produce bad statistics, a phenomenon known as the garbage-in/garbage-out effect.

So, at its best, meta-analysis is an exercise in editorial discretion; conducting a meta-analysis involves many, many interpretive judgments. But at its worst, meta-analysis can fuel agenda-driven bias. I worry that the Big Crunch heralds a new age of paternalism in medicine.

That is, by choosing among a very narrow menu of studies, usually those commissioned by a program or by its own small circle of like-minded supporters, the Big Crunch exerts an inflated influence over what kind of evidence gets considered for policy. As the same data are compiled again and again, a policy-making echo chamber forms in which ideas are amplified and reinforced by repetition inside a closed system.

When the Big Crunch touts new conclusions based upon its meta-analyses, the effect is not unlike The Music Man’s Professor Harold Hill rolling into town: Consternation, confusion, and concern. In short, the results are often used to justify claims that there is new trouble in River City and that an urgent call to action is needed.

Because it typically grapples with broader issues, the Big Crunch can make a splash and grab headlines. (For example: an OHSU study finds that steroids, for one common malady, are overrated.) Social media flares, and the pundits amplify and reinforce the message.

By the time practicing experts can contextualize the information, review the glossy materials, and scrutinize their technical details, the Big Crunch has already moved on to its next grant or contract.

Thus, the real danger of meta-analysis is that it can put a premature end to a discussion on the strength of biased interpretation cloaked in quantitative authority. Contrary to the ideal of policymakers carefully weighing all evidence on complex issues before making rules and allocating resources, all too often policymakers have used research politically, selectively drawing on evidence to support already-held views.

Ultimately, the purveyors of the Big Crunch may be selling science that can’t live up to its own promises. Practicing experts in a field understand that one can be both “data rich” and “information poor.” In medicine, new payment structures now being implemented risk new kinds of conflicts of interest, competing organizational agendas, and other sources of bias embedded in selecting what counts as evidence.

Moreover, stakeholders may be unable to appraise every source of bias contained within a meta-analysis. Every salesman knows that decisions can be influenced by framing equivalent outcomes in terms of either relative gain or loss. The same can be said of meta-analysis.
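
A worked example of that framing, with invented numbers: a treatment that lowers an event rate from 2% to 1% can be sold as a 50% relative risk reduction, or reported as a one-percentage-point absolute reduction, meaning a hundred patients must be treated for one to benefit.

```python
# Hedged illustration of framing: the same hypothetical trial result
# stated as relative versus absolute risk reduction. Numbers invented.
control_rate = 0.02  # 2% of untreated patients have the event
treated_rate = 0.01  # 1% of treated patients do

rrr = (control_rate - treated_rate) / control_rate  # relative risk reduction
arr = control_rate - treated_rate                   # absolute risk reduction
nnt = 1 / arr                                       # number needed to treat

print(f"Relative framing: a {rrr:.0%} reduction in risk")  # "50%"
print(f"Absolute framing: a {arr:.0%} reduction in risk")  # "1%"
print(f"Number needed to treat: {nnt:.0f}")                # 100
```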

Oregon’s policymakers would do well to remember that wisdom resides at the corner of book smart and streetwise. Though hypothesis-generating, meta-analysis really can’t tell anyone much beyond what practicing experts already know. After all, the true heart of science is replication. As such, meta-analysis is unnecessary where it’s valid, and unhelpful where it’s needed most.

But one thing is certain: If policymakers want clearer direction about how to prioritize limited resources, then they can stop paying for meta-analysis. Instead, taxpayers would be best served by having publicly supported scientists spend their time conducting better experiments and getting dirty again with original data.

Dr. David Russo is a physiatrist and pain management specialist with Columbia Pain Management P.C. in Hood River. This column previously appeared in the Portland Business Journal on September 18, 2015.
