In the weeks ahead, Congress will prepare to vote on sweeping changes to federal funding for sex education.
Unfortunately, most legislators will base this important decision on the popular belief that “comprehensive” sex education (CSE) is “evidence-based” - that it “works” to increase teen condom use, reduce teen pregnancies and STDs, and foster abstinence. That popular belief is wrong.
As researchers in this field for more than 30 years, we have studied sex education programs and examined the scientific merit of effectiveness claims. Our review of CSE programs has revealed a surprising lack of evidence to support these claims. This is especially true for school-based programs - where most teens receive sex education.
Our research has shown that no single school-based CSE program has produced evidence of a reduction in teen pregnancy or STD rates. None has produced a sustained increase in consistent condom use by teens - a behavior that’s necessary to achieve even the partial risk reduction afforded by their use. And none has shown compelling success at promoting both abstinence and increased condom use within the same program - the very advantage claimed by CSE proponents.
These findings were confirmed by a recent Centers for Disease Control and Prevention meta-analysis, which found that among school-based programs there was no significant effect on pregnancy reduction, STD prevention or increased condom use. But this is not what most people believe. Most believe that CSE programs are effective.
Why is there a gap between the facts and popular belief?
Over the years, CSE studies have revealed many instances of small, isolated, short-lived impacts on behavior, along with a few successes. CSE proponents have claimed these instances - no matter how inconsequential - as evidence of the success of the approach in general.
They could do this because across all programs there was not a consistent benchmark for defining success. To identify sex-ed programs with true impact, meaningful standards are required.
Effective programs should improve the behaviors that are most protective (rates of abstinence, preferably, or consistent condom use, secondarily) for the target population (not only a subgroup) for a sustained time period (at least 12 months - from school year to school year).
Public policy and decisions about sex education should be based on such standards, and programs failing to achieve those standards should not be eligible for preferential public funding.
These standards should also apply to abstinence education programs, which have their own challenges - they are relatively new, and have far fewer studies upon which policymakers can base decisions. But one thing we have found from careful examination of the research is that the proportion of abstinence education studies that have demonstrated meaningful results for teens in schools is greater than the proportion of CSE studies that have done the same (36 percent versus 25 percent).
When looking for evidence-based programs to implement in schools, abstinence programs should remain an option available for school administrators.
Unfortunately, as Congress votes on the future of sex education in America, it will be drawing on research that fails to account for these meaningful standards of effectiveness.
With CSE programs already receiving twice the federal funding as abstinence programs and receiving apparent deference in the proposed legislation, what is the impetus for widening this funding disparity?
We call for all sex education programs to be scrutinized by meaningful standards and not given the “evidence-based” seal of approval unless they are truly shown to make a difference on the outcomes that matter, and in a broad and sustained way.
The health of young people is too important to continue to ignore the need for meaningful standards in sexual health education. Accounting for these standards is the only way we will distinguish popular belief from fact, and it’s the only way Congress will get it right when it considers changes in the days ahead.
Dr. Stan Weed is the founder of and a senior fellow at the Institute for Research and Evaluation. Paul Birch is the group’s director.