Capturing data for a program evaluation is often done outside of a program’s normal processes. Evaluators parachute in with the various data collection tools they need to gather information about your program, and in the process they often interrupt normal workflows. If staff have to manage distributing and collecting various surveys and other forms, it can add significantly to their workload. Worse, it can alienate program participants, who may not understand why outsiders have suddenly appeared to poke and prod them. When evaluation work is superimposed in this way, stakeholders may come to see it as something external to their work and may be less inclined to buy into what the findings suggest.
We believe that evaluation data is best captured as part of existing program processes. In most cases these processes may need to be modified slightly in order to collect information that isn’t part of existing protocols. In others, it may be necessary to modify not just what is collected but how it is gathered, and significantly, how it is stored. Of course there is no such thing as a free lunch when it comes to gathering the data needed to assess a program’s impacts, but the strategies outlined below can reduce the cost of that lunch and make it easier to digest.
Collect Baseline Data at Program Start or Client Intake
This includes not only demographic information, but also any standard measures you use to assess participants’ current status. For example, if you serve people with HIV/AIDS, it might include their T-cell count. If you serve people with mental illness, it might be their score on a standard measure of depression or their DSM code. If you work with emerging community leaders, it might include background data on their communities. If you work with high school students, it might include their overall grade point average, days absent, disciplinary record, etc. Having baseline data at the start of a program encourages ongoing data collection and saves a step when it comes time to write up a report. Being able to document whom you serve is also essential for managing relationships with both existing and potential funders, and it is something you want to be able to do in real time. Look at it from a development perspective: which would be better to tell a funder, “our after school homework help program serves 100 students” or “our after school homework help program serves 100 students, 58% of whom have rates of school attendance that are below state standards”?
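To make that kind of real-time statistic concrete, here is a minimal sketch in Python. The field names, the sample records, and the 90% state attendance standard are all assumptions for the sake of illustration, not a prescribed intake format:

```python
# Hypothetical intake records; in practice these would come from a
# participant database populated at client intake.
students = [
    {"name": "A", "attendance_rate": 0.85},
    {"name": "B", "attendance_rate": 0.95},
    {"name": "C", "attendance_rate": 0.80},
]

def pct_below_standard(records, standard=0.90):
    """Percentage of participants whose attendance falls below the standard.

    The 0.90 threshold is an assumed stand-in for a state standard.
    """
    below = sum(1 for r in records if r["attendance_rate"] < standard)
    return 100 * below / len(records)
```

Because the figure is computed directly from intake records, it stays current as new participants enroll, which is exactly what makes the second funder statement possible.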
Collect Client Process Data in Real Time
Process data includes the data you need to assess program fidelity. It answers the question “did we do what we said we would do?” or, more simply, “did we make our numbers?” Tying such data to individual program participants will enable you to account for differences in outcomes later on. What do we mean by this? Many nonprofit programs involve staging events of various kinds: training sessions, community meetings, counselor interviews, etc. While staff typically do a headcount to record attendance, this kind of event-level data is not very useful. Consider the following example. A nutrition program for parents of overweight children has told its funders that it will achieve an 80% attendance rate among registered participants in all ten of the cooking classes it has planned. Rather than simply doing a headcount, it would be better to take attendance in order to determine who was present. Why? For two reasons.
First, from a programmatic standpoint, it might support outreach to participants who are consistently absent. Equally important, it might enable managers to identify groups of participants who are less connected to the program. For example, are parents of children who are only marginally overweight less likely to participate regularly? Are those for whom English is a second language less likely to attend? Second, individual-level data allows for a better account of the distribution of outcomes at the program’s conclusion. In this example, knowing an individual’s attendance allows the evaluation team to calculate program dosage. If a key outcome measure is change in weight three months post-program, it would be very valuable to look at the degree to which participation in the classes drives observed differences.
Collecting data in this way is more work, particularly because, to be useful, the information needs to be entered into a participant database. But it provides significant benefits when it comes to understanding whom the program impacts and the degree to which each program component contributes to outcomes.
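As a rough sketch of what “dosage” means in practice, here is a minimal example, assuming a simple in-memory record of which classes each participant attended. The identifiers, class names, and structure are illustrative assumptions, not a prescribed database schema:

```python
# Hypothetical per-participant attendance log for the ten planned classes.
attendance = {
    "p01": ["class1", "class2", "class3"],
    "p02": ["class1"],
}

TOTAL_CLASSES = 10  # the nutrition program planned ten cooking classes

def dosage(participant_id):
    """Fraction of the planned classes this participant attended.

    Participants with no recorded attendance get a dosage of 0.0.
    """
    return len(attendance.get(participant_id, [])) / TOTAL_CLASSES
```

With dosage computed per participant, the evaluation team can later ask whether those who attended more classes show larger changes in the outcome measure, rather than reporting only an average impact.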
Collect Program Process Data in Real Time
Data about program activities should also be as detailed as possible; it is usually not enough to record only the fact that scheduled events occurred. Thinking again about our nutrition program, it may be useful to document who led each class, where it was held, and the date and time it met. Such data enables program managers to look at the impact of staffing, venue, and schedule on outcomes. Do participants who worked with particular staff achieve more weight loss? Do particular venues draw better attendance? Again, these factors may affect outcomes. Going beyond simply reporting average impacts requires collecting data on the range of factors that drive them. Although it may be possible to gather this kind of data retrospectively, in our experience much of what took place is lost after the program concludes. Moreover, by integrating data collection into the program’s basic processes, the information becomes immediately available, allowing mid-course corrections.
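To show how this kind of event-level detail supports such questions, here is a small grouping sketch. The session records, field names, and attendance figures are invented for illustration:

```python
from collections import defaultdict

# Hypothetical session log: who led each class, where, and how many attended.
sessions = [
    {"instructor": "Kim", "venue": "Center A", "attended": 18},
    {"instructor": "Kim", "venue": "Center B", "attended": 22},
    {"instructor": "Lee", "venue": "Center A", "attended": 15},
]

def mean_attendance_by(field):
    """Average attendance grouped by any recorded session field."""
    totals, counts = defaultdict(int), defaultdict(int)
    for s in sessions:
        totals[s[field]] += s["attended"]
        counts[s[field]] += 1
    return {key: totals[key] / counts[key] for key in totals}
```

The same grouping works for venue or schedule as well as staff, which is the point: once the metadata is recorded per session, these comparisons cost almost nothing.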
Integrate Evaluation into Participants’ Activities
We recently designed a number of evaluation studies that included a diary component. In these studies, program participants recorded their thoughts and feelings about the program they were involved in via an online journaling system. Beyond the evaluation information we obtained, the diaries provided participants with a medium through which to reflect on their experiences in the program. Many requested a printout of their diaries at the program’s conclusion, indicating the value it held for them. If literacy concerns make a written journal unworkable, a video diary is possible though it requires technical capabilities that may be difficult to obtain.
Another way to capture participant progress toward outcomes is to assess them throughout their involvement in the program. A GED program, for example, will most likely administer a series of competency tests as participants move toward the actual GED exam. These intermediary tests are used by instructors to guide student learning, but they are equally valuable as evaluation data since they document progress and account for how it occurs over time. It’s not hard to imagine how other ‘check-in’ activities could be integrated into other kinds of programs to accomplish the same thing.
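A minimal sketch of what storing such check-in results might look like, assuming dated scores per participant; the dates, scores, and structure are illustrative assumptions:

```python
# Hypothetical history of check-in assessment scores, keyed by participant.
# ISO dates sort correctly as strings, so the history can be ordered simply.
scores = {
    "p01": [("2024-01-15", 52), ("2024-03-01", 61), ("2024-04-20", 70)],
}

def gain(participant_id):
    """Change from the first to the most recent check-in score."""
    history = sorted(scores[participant_id])
    return history[-1][1] - history[0][1]
```

Kept in this form, the same records serve two audiences: instructors can watch each participant’s trajectory during the program, and the evaluation team can later document not just whether progress occurred but when.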
We are real believers in the value of collecting program ephemera (see this post for more). Minutes of community meetings (you take minutes, right?), case notes, flyers announcing events, photographs of participant gatherings, and worksheets handed out at training sessions all provide powerful evidence of what took place, what was accomplished, and how participants felt about it. They are part of a program’s portfolio. Most programs produce these kinds of materials; the challenge is collecting them in one place for use by the evaluation team. Again, the best strategy is one that operates in real time, as part of the program’s activities. This ensures completeness, first because materials don’t get lost, and second because it creates an expectation that they are important and worth saving.
It’s true that some of the suggestions we’ve made here require you to collect, and notably, store data in ways that you may not have thought about in the past. Some add to staff workload as well. We believe, however, that the two main benefits of integrating data collection into program processes outweigh these costs: it lessens the interruptions and hassles that occur when outside evaluators superimpose data collection activities, and it builds support for evaluation in general. Perhaps most important, it supports a culture of learning in your organization.