Several months ago we wrote about post-training surveys. You know, those surveys you get following a training session that ask about your experience. We’ve had several requests for more information on the topic and have decided to provide a sample survey you can download to use as a guide. The survey was created for a client that received a grant to provide basic legal services to poor New Yorkers living with mental illness. Its underlying goal was to advocate for individuals facing administrative proceedings aimed at reducing or eliminating their public benefits. Without such benefits, the organization’s clients would have been at risk of losing their homes. One facet of the program was training for mental health advocates, designed to help them guide the people they worked with through the administrative hearings process and prevail.
There are two key issues in a post-training survey: how participants responded to the trainer, and what they took away from the experience. The former is pretty straightforward. If you download the sample survey, you’ll find items that are usable in several different types of training programs. Assessing the takeaways from training is a bit more challenging. In the sample survey we kept it simple, asking whether participants felt they would be able to help clients navigate the benefits hearing process. Of course, it’s possible they knew how to do this beforehand, but earlier in the survey we asked about the overall appropriateness of the training.
Ideally, you want to ask at least one question about each of the core learning objectives of the training, along the lines of, “Based on the training I just experienced I am confident that I could… land a Boeing 747, or say, dance the Macarena, etc.” The example survey uses an “agree-disagree” format, although we could just as easily have reworded the question and employed a “very confident – not at all confident” scale. We chose the former because we had other items suited to the agree-disagree design.
Of course, the questions above rely on self-report. In some cases such an approach is just fine. In others, however, a more objective assessment is required, especially if the content covered in the training is technical in nature or there is some kind of competency requirement that has to be met. Designing such an assessment is beyond the scope of this post. In the meantime, if you are interested in such tests, take a look at Shrock and Coscarelli’s chapter in the ASTD Handbook of Measuring and Evaluating Training. The book is a bit pricey, but it may be worth buying if your job requires a lot of assessment work. One final note: try to avoid asking participants whether they will change their behavior based on the training. Well, you can ask them that, but take the responses you receive as an indicator of relevance, not a measure of behavior change. People are notoriously bad at predicting what they’re going to do in the future, especially if it involves new behaviors and practices.
It’s also important to include a strong open-ended item. By strong, we mean an item that offers meaningful prompts (note the plural) that engage participants to respond. A simple question asking “what else would you like to tell us about the training?” won’t cut it. In the absence of a more directed request, chances are good participants won’t know what else they might want to tell you. Finally, it’s a good idea to ask some background questions. There are two reasons for this. First, you will probably want to report who attended the training. For example, were participants line staff or managers? Second, and more interesting, it is often worthwhile to look at participants’ responses according to their background. Did more experienced managers find the training less valuable? Did the trainer relate well to the men but less well to the women who attended? These kinds of analyses enable you to improve your training programs and, in so doing, add value to the evaluation effort. A simple sketch of what such a breakdown can look like follows below.
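For readers who key their responses into a spreadsheet or statistics package, here is a minimal sketch of the kind of subgroup analysis described above, written in Python with pandas. The file name and column names (role, gender, years_experience, overall_rating) are hypothetical stand-ins for whatever your own survey collects; this is an illustration, not part of the sample survey itself.

```python
# A minimal sketch of a subgroup analysis, assuming responses have been
# entered into a CSV with hypothetical columns: "role" (line staff or
# manager), "gender", "years_experience", and a 1-5 "overall_rating" item.
import pandas as pd

df = pd.read_csv("post_training_survey.csv")  # hypothetical file name

# Average overall rating by role: did managers rate the training differently
# than line staff?
print(df.groupby("role")["overall_rating"].agg(["mean", "count"]))

# Average overall rating by gender: did the trainer connect better with one
# group than another?
print(df.groupby("gender")["overall_rating"].agg(["mean", "count"]))

# Among managers only: did more experienced managers rate the training lower?
# A negative correlation would suggest the training lands better with newer
# managers.
managers = df[df["role"] == "manager"]
print(managers["years_experience"].corr(managers["overall_rating"]))
```

With only a handful of background questions, a few group means like these are usually all it takes to spot the patterns worth acting on.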
One last concern: like many post-training surveys, we decided to hand out this one on paper. In many cases, the best strategy is to distribute surveys at the close of the training, since relying on participants to complete an online survey once they return to their offices may not be realistic. It’s an organizational culture issue, really, and evaluators are wise to rely on their experience within their company or nonprofit when deciding which method to use. Paper surveys, of course, require manual data entry, although here at Usable Knowledge we can scan and error-check 500 double-sided questionnaires in about the time it took you to read this post.
If you’d like to read the original post, you can find it here.