Qualitative evaluation aims at a deeper understanding of the behavior of program actors (both staff and participants) and the thinking behind their practices. Its focus is not only WHETHER a program resulted in change but also HOW and WHY that change came about, if it did. Sometimes, however, the conclusions drawn from qualitative approaches are criticized as non-representative, overly subjective, and not replicable. Collecting material evidence of program activities (meeting agendas, attendance lists, published announcements and posters, existing status reports, working papers, staff assignments, job descriptions, staff resumes, planning documents, and other program records) counters this criticism by providing an objective record of what happened inside the organization on the way to meeting program goals. It is as simple as asking respondents to SHOW the evaluation team implementation-related activities rather than asking them to TELL the team about them. Although very different in practice, such methods are in many ways akin to social media metrics that trace consumers' online clicks as behavioral evidence of interest in a web site's offerings.
At Usable Knowledge, we see our request for program records as more than a standard documentation review common to many consulting assignments. Our approach is broader and part of a larger inquiry into the activities undertaken during the project and their relationship to organizational goals and culture on the one hand and program outcomes on the other. Let’s put material data collection back in the context of qualitative research:
Often we’re asked to conduct in-depth interviews with program staff to develop an understanding of program activities and the thinking behind individual behavior. Self-reports of behavior and motivation, while yielding rich data, may also be limited: people may present the truth in a way that puts them in the best light or that fits with the goals of the project. They may also forget the details of what took place. Sometimes, even where program goals and desired behaviors are explicit, they may not be able to articulate exactly what they have done or why they have done it. In our work, we are always balancing three different views of program activities:
- How the program was designed to operate: the formal view, the way things are supposed to be
- How people describe their own and others' activities, including their motivations. This view is by definition subjective but reveals what people consider important and how they interpret formal aspects of program design
- Actual behavior: what actually took place during the course of program implementation
We find that simply asking for examples of the physical evidence listed below yields direct evidence of actual implementation behavior.
| Document type | Behavioral questions that add context to material evidence |
|---|---|
| Research proposal, program description | Has the program plan changed? Are there other documents that show interim changes? |
| Status reports; interim program working papers | What is included in or excluded from reports? What is considered important to report on? Have program elements been changed as a result of reports? |
| Meeting agendas, attendance sheets, notes | Do these exist? What records are kept about program participation and activities? Do meeting agendas align with program plans? |
| Program posters, web announcements, email blasts | Are results of publicity tracked? Does publicity content match program goals? |
| Program staff job descriptions, staff resumes | Is the program staffed with appropriate talent? |
| Emails | What can we tell from everyday communication about program implementation and success factors? |
Used in conjunction with one-on-one interviews and other qualitative and quantitative approaches, this material evidence yields the most insight into how programs actually operate.