Qualitative evaluation aims at a deeper understanding of the behavior of program actors (both staff and participants) and the thinking behind their practices. Its focus is not only WHETHER a program resulted in change but also HOW and WHY that change came about, if it did. Sometimes, however, the conclusions drawn from qualitative approaches are criticized as non-representative, overly subjective, and not replicable. Collecting material evidence of program activities (meeting agendas, attendance lists, published announcements and posters, existing status reports, working papers, staff assignments, job descriptions, staff resumes, planning documents, and other program records) counters this criticism by providing an objective record of what happened inside the organization on the way to meeting program goals. It is as simple as asking respondents to SHOW the evaluation team implementation-related activities rather than asking them to TELL the team about them. Although very different in practice, such methods are in many ways akin to social media metrics that trace consumers' online clicks as behavioral evidence of interest in a website's offerings.
Capturing data for a program evaluation is often done outside of a program's normal processes. Evaluators parachute in with the various data collection tools they need to gather information about your program and, in the process, often interrupt normal workflows. If staff have to manage distributing and collecting various surveys and other forms, it can add significantly to their workload. Worse, it can alienate program participants, who may not understand why outsiders have suddenly appeared to poke and prod them. When evaluation work is superimposed in this way, stakeholders may come to see it as something external to their work and, for that reason, may be less inclined to buy into what the findings suggest.
We believe that evaluation data is best captured as part of existing program processes. In most cases, these processes need only be modified slightly in order to collect information that isn't part of existing protocols; in others, it may be necessary to change not just what is collected but how it is gathered and, significantly, how it is stored. Of course, there is no such thing as a free lunch when it comes to gathering the data needed to assess a program's impacts, but the strategies outlined below can reduce the cost of that lunch and make it easier to digest.