The Washington Times - Tuesday, December 30, 2003

Press releases don’t usually make for the most gripping reading. But recently one came across my desk with this intriguing tagline: “Report Conflicts with Bush Policies on Faith-Based Initiatives.” The release was issued by the Charitable Choice Research Project, part of the Center for Urban Policy and the Environment at Indiana University-Purdue University Indianapolis. Apparently, here was evidence that faith-based initiatives are generally no more effective than secular social service programming, and are sometimes less effective. If true, this certainly was news.

So, wanting more, I got the full report, entitled “Charitable Choice: First Results from Three States.” And one of the first things I read was a disclaimer. The executive summary notes: “It would be a mistake to draw broad conclusions about Charitable Choice laws from this limited research project. Nevertheless, the findings to date raise issues that should be addressed in future efforts at implementation, and point to areas requiring further research.” Then, in the introduction, we’re told again, “it should be emphasized that this is an interim report.”

Strange. This limited and measured language never made it into the press release or, as far as I could tell, into the numerous media reports it generated. Neither, curiously, did the actual title of the report. So, what’s going on? As anyone who is remotely familiar with the discipline knows, social science research is an intensely political field of study. What we have here is a limited, preliminary study pushed well beyond the conclusions it could support. Was politics behind it all?

Although claiming to be the first academic study comparing the efficacy of faith-based and secular providers of social services, the Charitable Choice study is certainly not the lone work on this important topic.

The Pew Charitable Trusts have funded a large, ongoing research effort via the Roundtable on Religion and Social Welfare Policy, an initiative of the Rockefeller Institute of Government at the State University of New York. The Hudson Institute has been asking tough efficacy questions since Charitable Choice emerged in the welfare reform legislation of 1996. The Heritage Foundation, using an expert trainer, has been working with directors of faith-based organizations since 2001 to facilitate program evaluation to a standard that would satisfy any potential supporter.

Recently, federal agencies and private foundations have underwritten technical assistance, conference training and Web sites that provide unprecedented information, education and high-quality how-to’s on capacity issues such as systematic program evaluation. A newly launched Web site, www.fastennetwork.org (the Faith and Service Technical Education Network), is a substantive and impressive resource.

For centuries, faith communities have served the poor, drawing on a divine imperative: “He who oppresses the poor shows contempt for their Maker, but whoever is kind to the needy honors God” (Proverbs 14:31). But the “faith factor” — in staff, volunteers and supporters, in mission statements and privately supported programming — is no exemption from accountability. Faith communities should not expect that evaluation criteria applied to them will be any less rigorous than those used for other nonprofits.

Return on investment (ROI) measurements, therefore, should always be a central concern for nonprofits. Charitable needs are unlimited, while resources are limited. It is simply wise stewardship to invest in organizations that demonstrate they are keeping a sharp eye on ROI. Effective compassion means that program activities are “on target” with the mission, and that the mission enhances self-sufficiency rather than dependency. Honing strategy and improving effectiveness are not challenges unique to faith-based organizations; all nonprofits struggle with them.

Public and private funding sources are keenly interested in program outcomes, and rightly so, but the current discussion about outcomes seems to pit faith-based programming against secular programming — intentionally or not. This is misleading evaluation design.

Complete uniformity in nonprofit evaluation benchmarks is a difficult and perhaps even unwise goal. With this in mind, social science researchers generating reports on nonprofit effectiveness must be careful in their methods and transparent in their motives. Media stories containing misleading inferences and important omissions about research design and conclusions only harm the poor. Let’s try to keep the politicking to a minimum, and focus on the people who need our help.

Karen Woods is director of the Center for Effective Compassion at the Acton Institute for the Study of Religion and Liberty.

