Qualitative evaluation aims at a deeper understanding of the behavior of program actors (both staff and participants) and the thinking behind their practices. Its focus is not only WHETHER a program resulted in change but also HOW and WHY that change came about, if it did. Sometimes, however, the conclusions drawn from qualitative approaches are criticized as non-representative, overly subjective, and not replicable. Collecting material evidence of program activities (meeting agendas, attendance lists, published announcements and posters, existing status reports, working papers, staff assignments, job descriptions, staff resumes, planning documents, and other program records) counters this criticism by providing an objective record of what happened inside the organization on the way to meeting program goals. It's as simple as asking respondents to SHOW the evaluation team implementation-related activities rather than asking them to TELL the team about them. Although practically very different, such methods are in many ways akin to social media metrics that trace consumers' online clicks as behavioral evidence of interest in a website's offerings.


Capturing data for a program evaluation is often done outside a program's normal processes. Evaluators parachute in with the various data collection tools they need to gather information about your program and, in the process, often interrupt normal workflows. If staff have to manage distributing and collecting surveys and other forms, it can add significantly to their workload. Worse, it can alienate program participants, who may not understand why outsiders have suddenly appeared to poke and prod them. When evaluation work is superimposed in this way, stakeholders may come to see it as something external to their work and may therefore be less inclined to buy into what the findings suggest.

We believe that evaluation data is best captured as part of existing program processes. In some cases these processes may need to be modified slightly in order to collect information that isn't part of existing protocols. In others, it may be necessary to modify not just what is collected but how it is gathered and, significantly, how it is stored. Of course, there is no such thing as a free lunch when it comes to gathering the data needed to assess a program's impacts, but the strategies outlined below can reduce the cost of that lunch and make it easier to digest.


Open-Ended Questions in Surveys Part 2

March 6, 2014

In our last post, we talked about how to word open-ended questions. Why is this important? Because if done correctly, you can capture rich qualitative data from people using less expensive survey methods. We also talked about answer piping: taking the responses from one question and porting them into the text of another question. The [...]
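As a minimal sketch of the answer-piping idea described above, a prior response can be substituted into a placeholder in a follow-up question's text. The question wording, placeholder name, and helper function here are hypothetical illustrations, not any particular survey tool's API.

```python
def pipe_answer(template: str, answers: dict) -> str:
    """Fill {placeholder} slots in a question template with prior answers."""
    return template.format(**answers)

# Hypothetical example: the respondent's earlier answer is piped into
# the wording of the next question.
answers = {"program_name": "the mentoring program"}
follow_up = "What did you like most about {program_name}?"
print(pipe_answer(follow_up, answers))
# Prints: What did you like most about the mentoring program?
```

Commercial survey platforms implement piping with their own placeholder syntax, but the underlying mechanism is the same substitution shown here.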


Open-ended Questions on Surveys Part 1

February 13, 2014

The Problem: A key challenge to any study that relies on surveys is that they typically offer mostly closed-ended, forced-choice options to participants. Unlike qualitative approaches, surveys often frame questions in terms of the response categories conceived by the survey's designers. Rather than asking "what did you experience, how did you feel?", questions take [...]


A Sample Post-Training Survey

January 23, 2014

Several months ago we wrote about post-training surveys. You know, those surveys you get following a training session that ask about your experience. We’ve had several requests for more information on the topic and have decided to provide a sample survey you can download to use as a guide. The survey was created for a [...]


How to Prove that Your Program Works

September 5, 2013

You may not be able to. A rather extreme statement, particularly from an organization that does program evaluation. Nonetheless, we stand by it. It's hard to get too far into a discussion of the notion of proof without talking at least a bit about Karl Popper's philosophy of science and his key idea of falsifiability. [...]


Questioning the External Focus of Outcome Evaluation

July 27, 2013

We came across an opinion piece in the Chronicle of Philanthropy by Kelly Campbell and Matt Forti of the Bridgespan Group in which they make a number of arguments for conducting rigorous outcome evaluations of nonprofit programs. While the piece concludes with a statement about the value of ongoing assessment in the service of continuous program [...]


An Alternative to the Evaluation RFP Process

July 6, 2013

This post is the last in our series about hiring an evaluation firm. Just to recap, we've suggested in our previous posts that while technical expertise is absolutely critical to a successful evaluation, soft skills, including flexibility and a strong client-services orientation, can make or break a project. (See our White Paper on Hiring an Evaluation Consultant [...]


Hiring an Evaluation Firm - Part 3

June 6, 2013

Surfacing an Evaluation Firm: In the last post we discussed seven characteristics to look for in an evaluation consultant. In this post, we'll discuss how to actually go about surfacing one and how to manage the hiring process. In other words, we'll look at how to narrow your choices down to a short list of [...]


Hiring an Evaluation Consultant - Part 2

May 12, 2013

In the last post we discussed five different types of evaluation projects. Here, we'd like to describe seven key characteristics of an effective evaluation consultant. For each one, we've provided some insights into how to judge whether the consultants you are looking at possess the qualities you're looking for: how to evaluate your evaluator, in [...]
