The Support Center for Nonprofit Management will be hosting us on September 4th for two evaluation-related workshops. The morning session is all about survey design. In it, we'll cover strategies for increasing response rates (always a challenge) as well as techniques for writing valid survey questions. The framework we'll use is Don Dillman's Total Design Method, which applies to both paper-and-pencil and web-based surveys. If you need to survey your participants, visitors, staff, donors, or your community, you won't want to miss this brief three-hour introduction.
OK, you followed our suggestions in the survey workshop, and now you've got a pile of data to analyze. How do you turn that data into usable knowledge? Or suppose you and your team just conducted twenty-five client interviews. You've got pages and pages of notes from each one. How do you summarize what you've learned? How do you identify common themes in what your respondents told you? Answering these questions is what our afternoon workshop is all about. We'll show you how to analyze your data so that you can report what you've learned to stakeholders and advance your organization's goals.
To see the outlines for the two sessions, click here.
To download the demo data for the data analysis workshop, click here.
To register, click here.
Qualitative evaluation aims at a deeper understanding of the behavior of program actors (both staff and participants) and the thinking behind their practices. Its focus is not only WHETHER a program resulted in change but also HOW and WHY that change came about, if it did. Sometimes, however, the conclusions drawn from qualitative approaches are criticized as non-representative, overly subjective, and not replicable. The collection of material evidence of program activities (meeting agendas, attendance lists, published announcements and posters, existing status reports, working papers, staff assignments, job descriptions, staff resumes, planning documents, and other program records) counters this criticism by providing an objective record of what happened inside the organization on the way to meeting program goals. It's as simple as asking respondents to SHOW the evaluation team implementation-related activities rather than asking them to TELL the team about them. Although practically very different, such methods are in many ways akin to social media metrics that trace consumer online clicks as behavioral evidence of interest in a web site's offerings. Read the full article →