
Not all prevention research is created equal, but experts can pinpoint the solid science

Prevention science can help guide decisions about which programs and curricula to choose, but understanding its limitations is key
January 11, 2024

Preventing or delaying youth drug use reduces problems later in life, numerous studies indicate. That’s why school-based prevention programs play a critical role in combating substance use disorder.

This article is part of an investigative series showing that as Oregon kids face a world with increasingly dangerous drugs and unparalleled external pressures, the state’s education establishment has failed to adapt.

As communities look for ways to reduce youth suicide, gun violence, substance use and other problem behaviors, “there’s an enormous amount of evidence about what can be done to prevent these problems,” said Anthony Biglan, a senior scientist at the Oregon Research Institute who’s been studying youth prevention for 30 years. 

“We need schools to think about prevention,” said Rodney Wambeam, a prevention scientist at the University of Wyoming who’s conducted prevention work in about 40 of the 50 states. 

“It pays off in huge dividends.” 

The research on school-based substance use prevention programs indicates they can save tax dollars and young lives. Some studies have tracked students into adulthood, finding long-term reductions in substance use problems and other issues.

Though prevention science is relatively new, researchers have learned a lot about what works and what doesn’t. The field didn’t really take off until the mid-1990s, but today more than a dozen American universities offer prevention science programs.

Janet Welsh serves as principal investigator for the Evidence-based Prevention and Intervention Support center at Pennsylvania State University, where she and her colleagues help communities and schools around the state adopt evidence-backed prevention practices. 

When schools want to set up an “evidence-based” prevention program or curriculum, they have a wide menu of programs or interventions to choose from that purport to be just that. But “evidence” can mean a lot of different things. 

“The thing about evidence is it’s a continuum,” explained Welsh. “People tend to think of it very dichotomously: It’s either an evidence-based program, or it’s not. But it isn’t really like that. It’s more like: How strong is the evidence?”


As part of a six-month investigation into prevention in Oregon’s schools, The Lund Report dove into the world of substance use prevention programs and social-emotional learning instruction. We interviewed scientists around the U.S. and combed through research. 

This reporting revealed a dedicated scientific community that’s passionate about finding ways to address some of society’s toughest problems. 

But we also found that, like any other social science, prevention science comes with caveats and limitations that cannot be ignored. 

Clearinghouse researchers wade through ‘noise’ to find solid science 

As the field has developed, policymakers, academics and others increasingly have looked to a handful of reputable clearinghouses to sort through research when they want to know what substance use prevention programs are backed by high-quality evidence. 

These clearinghouses deploy research scientists to assess social science studies using standardized methods to produce what amounts to a registry or guide of programs they consider to have met their standards for efficacy. 

Pamela Buckley is the principal investigator of the foundation-funded clearinghouse Blueprints for Healthy Youth Development, at the University of Colorado Boulder.

The registry, she told The Lund Report, is “like the Consumer Reports guide” of prevention curricula and programs.

The challenge they face? Not all studies follow best practices and draw solid conclusions, and a large fraction of prevention studies in the past decade showed potential for bias. 

So, these clearinghouses deploy teams of research evaluation experts like Buckley to weed out junk science.

“There’s a lot of information out there — there’s a lot of noise — and it’s really hard to cut to the chase about what to trust and what not to,” Buckley said. 


Clearinghouse researchers evaluate studies of different programs and see if they used valid scientific methods to measure the right things in the correct way. In essence, they check whether a study’s design merits confidence that the results that followed an intervention weren’t actually the result of other factors. 

Blueprints is among the top four clearinghouses identified by the experts in the field whom The Lund Report consulted for this project on school prevention, and it’s known for having the most rigorous standards. The others include CrimeSolutions, under the U.S. Department of Justice; the academic outcomes-focused What Works Clearinghouse at the U.S. Department of Education; and the Collaborative for Academic, Social, and Emotional Learning, a clearinghouse focused on what are known as “social-emotional learning” programs. Its program guide is funded by the Bill & Melinda Gates Foundation and the Chan Zuckerberg Initiative. 

Experts say the gold standard for experiments on whether a program or intervention works is what’s known as a randomized controlled trial.

That’s where one group of, say, students goes through the program while a very similar, randomly selected comparison group — known as the “control” — does not. Then any differences in relevant behavior between the groups are measured. 

These trials are large and expensive, and they can take many years to complete. 

But they are also very complex, and missteps along the way can muddy the outcomes in many ways.

Buckley recently co-authored an analysis of 851 of these randomized controlled studies of prevention programs published between 2010 and 2020. 

She found that the majority of the studies — nearly 80% — failed to satisfy “important criteria for minimizing biases that threaten internal validity.”

This potential for bias is just one of the many things clearinghouses look for when evaluating a study.

Funding, independence lacking in prevention science

To know whether an intervention has the potential to prevent an undesired behavior, scientists not only want to see it work well in a high-quality experiment; they also want to see it work again and again in subsequent, identically designed studies. This is how scientists build confidence that the intervention caused the observed effects.

In fact, replication — repeating studies to see whether their results hold up — is a key component of any scientific field, but it isn’t happening as much as prevention scientists and other social scientists would like. 

This has major implications when it comes to figuring out what programs work to help kids. One reason is money.

It usually takes a large federal grant to pay for such studies. But prevention scientists at several universities told The Lund Report that the agencies that fund these large experiments, such as the National Institutes of Health, look for innovation in grant applications. And by definition, studies that replicate another study are anything but innovative.

Brittany Cooper, a prevention scientist at Washington State University, is running up against this challenge now. She’s been working on an intervention called “The First Years Away from Home,” a prevention program aimed at helping first-year college students transition away from home. Cooper said young adult prevention is a gap in the field, and this program showed positive results in its first randomized controlled trial. Now there are other scientists who want to replicate that experiment, but they’re on their “third try” to get funding, she said.

Also in short supply? Independent research. 

In prevention science, it’s common for the developers or owners of a prevention program to conduct studies of it. And in some cases, they are the very people who stand to profit from sales to schools and communities if the program is deemed to have evidence of efficacy. 

The developers are often scientists who are “trying to solve a research question,” Cooper said, adding that she doubts that profit is a driving force in the way it might be with, say, a health study conducted by Big Tobacco.

Emily Tanner-Smith, a prevention scientist at the University of Oregon’s Prevention Science Institute, said a developer is more likely to use very tightly controlled trials, which tend to show “strong internal validity.” But the study results may not be easily replicated, or may not translate to similar findings when the program is rolled out in the real world.

Regardless of motive, the conflict of interest introduces potential bias, and there is research to suggest that the positive effects in studies are larger when a developer is involved. 

Scientists use something called a research synthesis, which summarizes multiple studies of a particular program or program type, to examine whether the outcomes are being replicated across different contexts. CrimeSolutions accomplishes this in its review of practices.

Clearinghouses use different criteria

At each of the four expert clearinghouses, the stronger the evidence, the higher a program’s rating. 

Of the top clearinghouses, only Blueprints requires an independent study for something to be considered a model program — a ranking it dubs “Model+.”

To be rated as “promising,” Blueprints requires a program to have shown positive outcomes either in one high-quality randomized controlled trial or in two quasi-experimental evaluations — studies that also use a comparison group, but without random assignment.

To be considered promising by the National Institute of Justice’s CrimeSolutions, a program must undergo only one high-quality study, either a randomized controlled trial or a quasi-experimental evaluation using a comparison group.

The Collaborative for Academic, Social, and Emotional Learning, or CASEL, which evaluates social-emotional learning programs, employs lower standards of evidence. For instance, programs can be dubbed “promising” even if they “lack adequate research evidence of their effectiveness on student outcomes,” according to the group.

This difference can lead to conflicts between clearinghouse ratings, making the job of educators seeking help in finding an evidence-backed prevention approach even more challenging.

Evidence, ratings differ on programs required for Oregon schools

Oregon lawmakers in 2021 passed House Bill 2166, requiring schools to soon implement “social-emotional learning,” aimed at helping kids learn how to manage emotions, feel empathy and make good decisions. Experts say such programs are also among the best early approaches to substance use prevention. 

Now, school districts must adopt these social-emotional programs by July 1. 

The state education department has urged districts to look to CASEL, the social emotional learning collaborative, for guidance on programs. But, as noted, the evidence backing some of the collaborative’s listings is quite limited.

Character Strong and Second Step are the most popular social-emotional learning programs in Oregon, according to the results of a recent statewide survey. Character Strong, taught in 26 districts, has not yet been certified as evidence-based by any of the four clearinghouses consulted for this project, including CASEL, though it may soon be certified by a clearinghouse out of Johns Hopkins University. 

Second Step, taught at 25 districts, got mixed reviews among clearinghouses. Only the social-emotional collaborative, CASEL, includes Second Step on its recommended program list. 

Adding to the confusion, some programs that haven’t yet been listed by the collaborative are based on well-researched and recommended interventions.

One complicated example is the PAX Good Behavior Game, which teaches elementary school kids how to self-regulate in a classroom setting. It’s based on a program called the Good Behavior Game that achieved a “promising” ranking from Blueprints based on a study in Baltimore in the 1990s that found years after kids played the game, they were less likely to smoke and experience other behavioral issues linked to substance use. 

Several school districts in Oregon have widely implemented the PAX version of the game, which its developer, Dennis Embry, described as an enhanced version of the program studied in Baltimore. The PAX version was listed on SAMHSA’s National Registry of Evidence-based Programs and Practices before the Trump administration shut down the clearinghouse. 

What Works Clearinghouse included studies of different versions of the Good Behavior Game, including the PAX version, in its evaluation and concluded the evidence was “strong.”  

Blueprints, in contrast, has found the PAX version to have “insufficient evidence,” though Buckley said that could change as they are awaiting answers to some of their questions about newer research. 

Embry, a prevention scientist who worked on Sesame Street in the 1980s, is known in the scientific community for the sincerity of his efforts to help kids. He pointed to a large body of research at Johns Hopkins University, where he serves as co-investigator at the Center for Prevention, which focused on the PAX version and has shown positive outcomes.

Embry is bothered that his version of the Good Behavior Game isn’t certified by Blueprints, telling The Lund Report, “The politics of scientists, like anything, can become warfare.”

Experts cautious about drawing conclusions

Prevention science keeps improving, Buckley said. But conducting studies and reviews of studies is a lengthy process and clearinghouses have limited funding. 

But even when a program is well-backed by research, it would be wrong to assume it will work in every setting and with every group of kids.

University of Oregon prevention scientist Tanner-Smith said that while a high clearinghouse rating can typically “allow us to be confident that the program caused the observed effects … as a scientist, I would never claim that such programs are definitively ‘proven effective.’” 

She said a study merely shows “we can be confident this program worked in this particular student population, in this particular setting, and at this historical moment.”

At the CrimeSolutions clearinghouse, analyst Kaitlyn Sill agreed. 

“Over time kids and their environment change, and drugs and their availability change, and these changes can be rapid,” Sill said. “Ongoing research is needed.”

Regardless, experts say clearinghouses provide an important service by wading through science and identifying interventions that are most likely to achieve positive outcomes. 

“A lot of people,” said Buckley, of Blueprints, “they think … if it’s on the CrimeSolutions website, you can trust it. If it’s on Blueprints, you can trust it, if it’s on What Works (Clearinghouse), you can trust it. Those who can dig into the weeds, they know the level that you can trust it.”

This article was created as part of the series “Unsupported: Addiction prevention in Oregon classrooms,” a reporting project by The Lund Report, the University of Oregon’s Catalyst Journalism Project and Oregon Public Broadcasting, with support from the Fund for Investigative Journalism. Emily Green can be reached at [email protected].