Thanks to Brenda Quinney (Sales Training Manager at RSA) for this thought-provoking question, and to our LinkedIn Group members for their insightful answers.
A: Mary Vivit, e-Learning & Training Developer, Salt Lake City: Good questions, Brenda. Ideally, I support a two-pronged approach: one to measure the program itself, and the other to measure the success of the design (have learners changed old or learned new behaviors?).
Most LMSs have some evaluation tool. These are great for finding out how learners felt about the visual design, content presentation pace, and other activities related to the course itself. A core set of questions is helpful: Was the presenter clear and easy to understand? Was the visual design clear and pleasing? Then add questions specific to the course. These can focus on specific examples presented, or even on whether learners had difficulty completing additional assignments.
Measuring the success of the entire course should be planned when the course is first being considered for creation. The question, "How will we know when the course goals have been achieved?" is crucial to designing a course. While a survey of learners may be helpful, a more objective method of ascertaining success is needed. In the case of a shop safety course, for example, accident/incident rates before and after the course give a better view of success. Other changes, some not related to the course, should also be factored in. Again using the shop safety course as an example: the distribution of additional safety gear can have an impact on incident reports.
A: Laura Bunte, Instructional Design Consultant, Chicago: Definitely conduct program and learner evaluation via surveys - but you could also add qualitative and/or quantitative measures that link the training to the organization's business goals (and ROI) so you can communicate the course's success to everyone in the organization. Based on the course goals and objectives, the evaluation should be able to show a baseline score (in knowledge, skill, or even confidence level) before the training and some measure of improvement immediately following the training, using the same evaluation instrument.
Other Answers: 1) We use several surveys to measure our students' success. Certainly we go off class metrics, but we also have student surveys at the beginning and end of the semester, as well as a technology survey. We definitely ask whether they achieved the goals they planned at the completion of their class/program. We also ask them to share any comments on their experience, such as input on faculty, administrative staff, the online format and materials, etc. One valuable piece of data we pay attention to is their goals and expectations at the beginning of the class vs. the same (or similar) questions at the end, and whether those goals were realized.
2) In our blended learning programs, we do not use level 1 evaluations, as a number of our programs are fairly intensive and designed to create capability change, but not necessarily to be compelling or even enjoyable. I know that's heresy, but as an example, we required claim adjusters to document sample claims as part of their training, which was pretty annoying to them but saved the company $170M in unnecessary losses. So we generally evaluate by going straight to level 3 and looking for indicators of improved performance on the job.
3) Mixed methods (quantitative and qualitative) will give a rich picture. You could consider questions like:
4) We blend our training through several events that could be online, classroom, webcast, reading, or skill practice exercises, relative to each learner's unique situation. I go straight to level 3. During training sessions, each participant completes a personal action plan identifying steps to take to implement the learning. After the last learning event, I email the participant and the supervisor asking them to meet, discuss the action plan, and set 30-day goals. After 30 days, I send an email asking the participant to answer these five questions and copy the answers to the supervisor: