Evaluation & Nonprofit Staff: Not a good fit

Previously, I wrote about my idea for an alternative way to approach evaluation in the nonprofit sector. In this post, I want to look at why it may be better to outsource evaluation work rather than rely on internal staff at nonprofit organizations.

So why are nonprofit staff not a good fit for conducting evaluation at their organization? The short answer is that, for most organizations, it is simply not an efficient use of the organization’s resources. The slightly longer answer is that many nonprofit staff are already overburdened with keeping their programs running and their doors open, often lack the particular education or experience needed to do good evaluation, likely have talents that are not aligned with evaluation work, and are more prone to bias in the evaluation process.

Currently, evaluation in the nonprofit sector leaves the bulk of the burden on nonprofits. Some organizations are able to get a few thousand dollars in a grant to hire a consultant. Certain other organizations, due to the nature of their work, have to do evaluation in order to get funding (e.g. because it is required by law), and so have dedicated staff built into their programming and operating costs. Most organizations, however, are left to fend for themselves, while still being asked to provide numbers and reporting to funders.

Historically, the sector has tried to ‘build capacity’ within nonprofits by offering manuals, workshops, and seminars, as well as by including evaluation in nonprofit certificate programs. However, these trainings may be a poor fit for purpose if the expectation is that participants will go back to their organizations and execute an evaluation plan. These efforts may, in fact, do more harm than good, considering the skill base and energy required to do effective, reliable evaluation work.


Let's look at why nonprofit staff time is not well spent on evaluation. There are four core reasons why internal staff are not a good fit for doing evaluation: Motivation, Qualifications, Talent, and Bias.

1. Motivation. In order for nonprofit staff to conduct their own evaluation, they have to be motivated to receive training, learn new material, and then do the evaluation work. That is a lot to ask of people who are constantly fighting to find funding, hire staff, track the budget and finances, administer programs, and report to boards and funders. Prioritizing evaluation is hampered further by two facts: a) it is not always clear that evaluation adds value, and b) funders seem to be satisfied with simple reporting efforts. Thus, ‘motivation’ is challenged by both bandwidth and valuation.

a. Bandwidth. Staff running nonprofits are busy people! Good evaluation takes time and thought, even when we know what we are doing. For someone who doesn't know what they are doing, or who is constantly being distracted, it is difficult to lay out a good evaluation protocol, crunch the numbers correctly, or even think through what all of those numbers mean, let alone fold that information back into improving the organization. Nonprofit staff only have so much energy and attention, and evaluation is just one more thing on their plate. It is easy to understand, then, how evaluation, the Brussels sprouts of running a nonprofit, might end up knocked off that plate and quietly tucked under a napkin somewhere. Traditional capacity building for evaluation only makes matters worse, as it adds the extra hurdle of asking staff or leadership to devote a chunk of energy to first learning how to do the evaluation work before they can actually do it.

b. Valuation. Among all of the competing demands on an organization's resources, evaluation is often near the bottom of the pile. This is due in part to many nonprofits not really seeing the value in evaluation, or not being pressed to value it. The biggest potential driving force for valuing evaluation is funders. However, funders, and especially foundations, often do not seem to be pushing very hard here, either (of the roughly 35 private foundations I have looked at in the State of North Carolina, only 2 had a clear commitment to evaluation on their website). While funders do tend to ask for more numbers and reporting from grantees these days, there does not seem to be any emphasis on nonprofits doing evaluation well or folding that information back into improving the organization. Rather, the funding cycle still seems to be driven primarily by emotional, anecdotal stories, with hard metrics a secondary concern and serious discussion of how an organization is improving itself not even on the menu. Why, then, should a nonprofit spend time and money on good evaluation when it is not clear that it will give them a competitive advantage in the grant cycle?

2. Qualifications. Evaluation is a specialized skill that requires both extensive education and experience to do well. Most nonprofit staff are not going to become specialists by reading a manual or taking a day-long seminar. Further, those who have the education may not be able to gain the experience, or may be limited by the other factors in this list.

a. Education. There is a lot to know about evaluation if one wants to do it well. We have to understand statistics, research methods, a bit of human psychology, and even some economics, just to name a few things. Most evaluators have a graduate degree of some sort that includes a robust amount of coursework in these areas. Sometimes leadership or staff at nonprofits have similar degrees, but the value of those degrees for evaluation is limited by lack of experience, motivation, misaligned talent, or bias.

b. Experience. The more we do something, the better we get at it. Someone who has dedicated their life to evaluation is going to do a better job in less time (or recognize when it is truly worth spending more time) and with fewer errors. They are simply going to be more efficient than someone who is struggling to grasp a concept from a workshop or a training manual, or even someone who has the education but not the dedicated experience.

3. Talent. The human brain is a very complicated thing and, across our society, different people are naturally given to being good at different things. Some folks are amazing mathematicians while others are phenomenal orators. Some folks know how to run an organization; others really know how to ask a question. We call this innate ability “talent.” While not universally true, we generally tend to end up in professional spaces that align with certain of those talents. Folks who end up working as evaluators often have more mathematical and scientific talents, while folks who end up operating a nonprofit tend to have more people and managerial talents. In the interest of efficiency, we are better off allowing individuals to focus on their strengths rather than asking everyone to be a generalist or, worse, to try to develop their weaknesses.

4. Bias. There is always the risk of bias in the process, both intentional and unintentional, when an organization or an individual feels their survival depends on “showing results.” Bias can be motivated by pride and fear. Just imagine: someone spends every day of their life working on a cause and applying a specific solution to the problem they want to solve in the world, then finds out the numbers aren’t what they expected. It is easy to understand how they would be tempted to create alternate presentations of the numbers to make them comport with their experience of the program or their ideas about ‘what it can be.’ Having an external evaluator who is not emotionally, politically, or financially attached to the success of a program do the number crunching and help with the report writing will go a long way toward limiting personal biases in the way an organization ultimately assesses itself and reports back to funders. Again, when bias occurs, it makes it impossible for an organization to improve itself and harder for funders to reliably predict which programs are good investments.

While much of evaluation work is probably best kept out of the hands of nonprofit staff, so that staff can focus on their core responsibilities and the evaluation work can be more effective, there are two parts of the process that staff do need to participate in: data collection and incorporating the insights from the data back into the organization.

Neither job can be done without the participation of the organization. Often, the only people who can collect data for a nonprofit are the staff of the nonprofit: they are the ones who fill out the intake forms, administer the follow-up surveys, and track participants. Further, the only people in a position to really judge what the organization can or should do to improve are the people leading it. These same people and their staff are also critical to contextualizing the results from any evaluation process so that paths for improvement can be identified. Consequently, data collection and incorporating feedback are the two primary points of contact a central evaluation organization might have with the organizations it serves.