If you follow a common approach to logic modeling used in the human services and other sectors, the three rightmost columns contain your project’s short-term, medium-term, and long-term outcomes. We prefer to call short-term outcomes effects; medium-term outcomes, outcomes; and long-term outcomes, impacts. Many programs follow a similar approach. Short-term outcomes capture the immediate results of the intervention and have to do with how people and systems have been affected by a program. Medium-term outcomes generally describe what people are doing differently, whereas long-term outcomes get at the impacts of those changes. The table below provides some examples: [Read more…]
Capturing data for a program evaluation is often done outside of a program’s normal processes. Evaluators parachute in with the various data collection tools they need to gather information about your program, and in the process often interrupt normal workflows. If staff have to manage distributing and collecting various surveys and other forms, it can add significantly to their workload. Worse, it can alienate program participants, who may not understand why outsiders have suddenly appeared to poke and prod them. When evaluation work is superimposed in this way, stakeholders may come to see it as something external to their work and for this reason may be less inclined to buy into what the findings suggest.
We believe that evaluation data is best captured as part of existing program processes. In some cases these processes may need to be modified only slightly in order to collect information that isn’t part of existing protocols. In others, it may be necessary to modify not just what is collected but how it is gathered and, significantly, how it is stored. Of course, there is no such thing as a free lunch when it comes to gathering the data needed to assess a program’s impacts, but the strategies outlined below can reduce the cost of that lunch and make it easier to digest. [Read more…]
In our last post, we talked about how to word open-ended questions. Why is this important? Because if done correctly, you can capture rich qualitative data from people using less expensive survey methods. We also talked about answer piping: taking the response from one question and porting it into the text of another question. The value of answer piping is that it allows each survey participant to answer questions that are personally relevant to them, which, crucially, engages them and inspires rich reflective responses. It looks something like this:
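To make the mechanics concrete, here is a minimal sketch of answer piping in plain Python, assuming a simple scripted survey flow rather than any particular survey platform. The question wording and the field name (`proudest_moment`) are hypothetical, made up for illustration.

```python
def pipe(template, answers):
    """Insert earlier answers into the wording of a later question."""
    return template.format(**answers)

# A hypothetical earlier response, keyed by a made-up field name.
answers = {"proudest_moment": "helping a family find stable housing"}

# The follow-up question is personalized with the participant's own words.
next_question = pipe(
    "You said your proudest moment was '{proudest_moment}'. "
    "What made that moment possible?",
    answers,
)
print(next_question)
```

In a real survey tool the piping syntax differs, but the idea is the same: each participant sees a follow-up question built from their own earlier answer rather than a generic prompt.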
Now imagine something even cooler.
We were recently asked to design a study that involved following a group of 90 professionals over time as they developed a new approach to working with their clients. As with any new program, we anticipated that they would, at least at first, experience challenges implementing the new approach. We were interested in learning whether and how, over the course of the program, they would deal with these challenges. With 90 participants, it wasn’t possible to conduct multiple interviews with everyone, yet we wanted to understand how each person addressed the issues they faced. We could have asked each person, at the end of the project, what challenges they had anticipated at the start, but we were concerned that, over the 18 months the program was active, they would not accurately recall their initial concerns. How can you make this work in a survey? [Read more…]
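One way to sketch the idea, under the assumption that answer piping is available between survey waves: record each participant’s anticipated challenge at baseline, then pipe that answer into a personalized question in the final survey, so no one has to rely on memory. The participant IDs and wording below are illustrative, not from the actual study.

```python
# Hypothetical baseline answers, keyed by participant ID.
baseline = {
    "P001": "getting client buy-in for the new approach",
    "P002": "finding time for the extra documentation",
}

def followup_question(participant_id, baseline_answers):
    """Build a final-survey prompt from the participant's own baseline answer."""
    challenge = baseline_answers[participant_id]
    return (
        "At the start of the program you anticipated this challenge: "
        f"'{challenge}'. Looking back, how did you deal with it?"
    )

for pid in baseline:
    print(followup_question(pid, baseline))
```

Because each prompt quotes the participant’s own words from 18 months earlier, recall error drops out of the picture: the survey remembers so the participant doesn’t have to.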
A key challenge to any study that relies on surveys is that they typically offer mostly closed-ended, forced-choice options to participants. Unlike qualitative approaches, surveys often frame questions in terms of the response categories conceived by the survey’s designers. Rather than asking “what did you experience, how did you feel?”, questions take the form “did you experience this or this or this, or did you feel this or this or this?” While a good survey can cover much of what respondents are likely to report, the richness of their personal voices is lost. Qualitative approaches, however, are very costly to implement, which typically limits the number of program participants who can be interviewed. This, in turn, jeopardizes the representativeness of the study.
To address these issues, we advocate an approach that integrates carefully worded open-ended questions into the surveys we conduct. In our experience, well-designed open-ended questions can garner the same kind of rich response typically found in interview-based studies. The key is to move beyond open-ended questions like this one:
“Is there anything else you would like to tell us about your experiences?”
Survey participants need more detailed instructions if they are to provide responses that begin to approach what you can get from an interview. They need to see exactly what you’re looking for. In one recent study, we used a fairly simple prompt in place of the generic example above: “What about the leadership training session (the program staff, the space, the agenda, or something else) most supported your learning?” In another project, we provided even more instruction:
“How has your experience in the program so far made a difference in your work? What actions have you taken to implement what you have learned? What new insights have you gained? What practices have you developed that have been particularly relevant or meaningful to you?”
This prompt yielded responses averaging 138 words, which is longer than the first paragraph of this post. More importantly, the responses were thoughtful and directly germane to the evaluation questions the client was seeking to answer. [Read more…]