Soapbox: The Importance of an Independent Evaluator

An independent evaluator can help training program designers gain an objective, clear-eyed, and credible perspective on their work.

By Steven Katzman, Ph.D., Director of Organizational Effectiveness and Performance Measurement, KPMG LLP

Like most organizations committed to the continuous improvement of training, KPMG LLP highly values the post-course debrief. But in the past, when our content developers and instructional designers were asked to assess a program they had built, their responses were often met with skepticism. Could a professional so deeply involved in the development of a course ever be truly objective about assessing its effectiveness?

For us at KPMG, a simple addition made a big difference. It began the day we assigned a member of our Performance Measurement Group (PMG) to observe an important training program and conduct a post-course participant focus group immediately after the session ended, while participants were still on site. The PMG team brought to the assignment its training in evaluation methods and its deep knowledge of KPMG Business School’s measurement strategy, post-course surveys, and testing tools. Just as important, the team brought independence from the design and development process, and that independence imbued its findings with a new level of credibility.

This made sense. Independence and objectivity are deeply embedded in KPMG’s culture. In advising our clients on compliance with a complex web of laws, standards, and other regulatory pronouncements, our client service professionals rigorously avoid interests and relationships that might impair objectivity. This independence is the core of the KPMG brand. By investing the time of the PMG team in observation of a training program, KPMG Business School brought the same value to an internal learning initiative as the firm routinely brings to its external clients: As independent measurement experts, PMG could help program designers gain an objective, clear-eyed, and credible perspective on their work.

Instructional designers, content developers, and KPMG Business School leadership found this perspective extraordinarily valuable. This led to an increased demand for independent observations and focus groups across multiple KPMG training programs. Between 2008 and 2012, the number of PMG on-site reviews rose from three to 55.

Which Programs to Cover?

Choosing where and when to conduct on-site reviews is critical. KPMG offers hundreds of training programs to our professionals each year. In our last fiscal year alone, the firm developed 596 new courses nationally. Local offices developed another 1,240 courses. Delivered through a blend of traditional and virtual classroom modalities, these new courses accounted for more than 600,000 of the 1.4 million total CPE credits earned by our professionals. As a small, streamlined team, PMG cannot observe and conduct focus groups for all of these programs. So which programs should be covered?

To make strategic choices about the use of on-site reviews, PMG meets annually with each of the four departments that design and deliver the firm’s training to understand their strategies for the upcoming fiscal year. We identify their most important training interventions, programs that have significant changes from the prior year, and programs that represent a significant monetary investment.

On-site services are particularly valuable for programs conducted multiple times over several weeks. We recommend “front-loading” on-site services for these programs—conducting observations and focus groups at three or four sessions offered early in the schedule. Findings from these early sessions can influence the remaining sessions. It is gratifying to see designers make changes to the program in response to observation and focus group feedback, and then see the positive results of those changes on subsequent sessions.

What Do We Look for?

PMG observers sit in the back of the classroom, monitoring participant engagement. We note the mix of lecture, exercises, and group work. We track whether participants are asking insightful questions, questions for clarification, or no questions at all. We evaluate the facilitation (and, where applicable, co-facilitation) practices of the instructors.

We have created an Observation Guidance spreadsheet to ensure that all observers consistently monitor important aspects of the training in every on-site review.

What Do We Ask Focus Groups?

Near the end of our on-site observations, PMG team members facilitate one or more 45- to 60-minute focus groups with program participants. If a program has fewer than 20 participants, we invite all of them to join the focus group. If a program is larger, we invite a random sample of 10 to 12 participants.

To ensure participants are comfortable providing their candid feedback, only the PMG facilitator and the participants are present for the focus group. We begin by providing an explanation of our independence from the program designers, and an assurance that feedback collected will be aggregated and summarized—not individually identifiable. We set the “ground rules” (e.g., feedback is expected from everyone, take turns, etc.). We then begin the questions.

Some questions are common to all focus groups:

  • Were the objectives met?
  • What was most valuable?
  • What was least valuable?

Other questions are custom developed for each program: PMG meets with the program designers prior to course delivery to understand program objectives and changes from prior programs, and to identify key areas designers want to explore. Custom questions are worded in an objective manner designed to elicit effective feedback.

On-Site, Real-Time Course Enhancements

In spring 2012, KPMG’s regulators recommended a heightened focus on several industry standards. In response to these recommendations, KPMG rolled out an extensive training plan, including initial sessions in the spring and refresher sessions in the fall. PMG attended the first few sessions of the spring course, providing a feedback report to the design team. Participant feedback on these sessions indicated a significant need for greater clarity on certain concepts and greater specificity in the examples provided. The content and design teams made immediate changes to the spring program for the remaining sessions, and carried these changes into the fall sessions.

The impact was evident: Participants in subsequent sessions provided much more positive focus group feedback, and rated the sessions significantly higher in their post-course evaluations.

Building On-Site Evaluation Resources

  1. Assign an independent evaluator. An evaluation report from a member of the design and delivery team is not as credible as a report from an objective observer.
  2. Engage the design/delivery team. Maintaining independence does not mean working in a vacuum. The evaluator should work with the design and delivery team to understand the program’s objectives, business goals, and unique aspects. This helps attune the evaluator to key actions that should be observed and important questions focus group participants should be asked.
  3. Be selective. You probably can’t have an on-site evaluation resource at every program. Choose programs of the greatest strategic importance to your business, those that had the greatest year-over-year changes, those that are brand new, and/or those that represent the most significant financial investment. Factor in the potential for impact: Programs with multiple sessions—where early feedback can improve later sessions—are prime candidates for on-site review.
  4. Consider virtual options. Where on-site reviews are not practical, virtual focus groups may be. We’ve successfully collected feedback from program participants through conference calls or through the use of our virtual meeting platform.
  5. Implement foundational evaluation processes. On-site reviews provide important perspectives, but they don’t replace other components of a comprehensive evaluation system. For example, all KPMG course participants receive a post-course survey, and many of our courses include knowledge exams. Our on-site evaluation services complement these efforts.
  6. Use trained evaluators. The on-site review team should include individuals with education and experience in the evaluation field.

As we all try to do more with less, the idea of assigning additional resources to a training program can seem counter-intuitive. But in our experience, utilizing an independent evaluator has added significant value. Design/delivery teams and business leaders now regularly look to our PMG to validate the impact of their investment in learning and development. They trust our feedback and recommendations, and that trust is enhanced significantly by our independence.  

Dr. Steven Katzman is director of Organizational Effectiveness and Performance Measurement at Training Top 10 Hall of Fame member KPMG LLP. In addition to helping KPMG manage employee surveys and feedback processes, Dr. Katzman directs the Performance Measurement Group within KPMG’s Business School. He holds a Ph.D. in Industrial/Organizational Psychology from the State University of New York at Albany. Katzman’s colleagues, Michele Graham and Michael Orth, contributed to this article.
