Case Study: New Hire Scorecards at Discover Financial Services

Discover developed a set of reports—one for each New Hire Program—that quickly illustrate the ability of workers-in-training to perform like workers already staffing call centers.

As balanced scorecards have become increasingly common tools used to manage entire businesses, many training and development professionals have tried to adapt them for use in our profession. For example, ASTD developed its Workplace Learning and Performance Scorecard in 2006, and Ajay Pangarkar and Teresa Kirkwood published their Trainer’s Balanced Scorecard in 2009.

Such scorecards are generic, focusing on the overall operations of a training function. Some organizations need more specific scorecards that track the performance of an individual training program and of learners as they make the transition from the classroom to the workplace.

That’s the situation faced by the Training Center of Excellence at Discover Financial Services. This case study describes that challenge and the solution the training team at Discover devised, and explains how they arrived at that solution—and what they learned from it.

Background

Discover Financial Services is a U.S.-based financial services company that is probably best known for its Discover credit card but also offers banking, student and personal loans, and various credit protection services. The company offers a variety of training opportunities to its managers and employees, including a flagship program for new employees in its various call centers.

How This Case Study Was Prepared

The information in this case study comes from interviews with Jon Kaplan, director, and Doug Anderson, project manager, Discover Training Center of Excellence, as well as a review of the scorecards they and their staffs developed. To ensure accuracy, Kaplan and Anderson also reviewed the transcripts of their interviews and drafts of this case study.

Basic Problem

Like most financial services companies, Discover relies heavily on call centers to serve its customers. And like most call centers, the ones at Discover experience high staff turnover, so the company regularly offers four- to six-week training programs for new workers in its various business units. The exact length of the training varies depending on the nature of the position and the expectations for the workers. Assuming these workers-in-training successfully complete the program, they staff the phones, where the need for—and expectations of—high performance await them.

With a steady stream of workers to train and high demands for performance once they start staffing the phones, management in each of the business units naturally wondered how effective the training was.

Further complicating the training and integration of new workers was a decentralized training structure, in which each of nine business units trained its own call center staff.

Discover developed a multi-pronged approach to address the problem. First, the company centralized all of its training into a single Training Center of Excellence. In the process of doing so, the company centralized the design, development, and implementation of its New Hire Program. Although the programs leverage this common base of expertise, each is customized to the processes, policies, and business practices of the business units in which newly hired staff will work.

For example, the different New Hire Programs provide training on the products and services offered by the business units to which workers are assigned, including customer service, collections, banking, and various financial protection services.

Discover hoped that centralizing the training would “professionalize” the training staff and strengthen both the facilitation of training and the transfer of skills to the workplace. As Jon Kaplan, director of the Training Center of Excellence, notes, although the old structure ensured trainers had strong business expertise, it did not emphasize the equally important skills of training and building on-the-job performance.

Not only did Discover develop a new curriculum, the company also developed a new means of evaluating the effectiveness of workers who participated in the new hire training. When considering their needs for evaluation, the training team at Discover realized that evaluating the extent to which participants merely learned and retained content was insufficient.

As Kaplan and his colleague, Doug Anderson, project manager, Training Center of Excellence, observe, what good is knowing the content if workers cannot perform on the phones? In other words, tests of learning—Level 2 in the Kirkpatrick evaluation model—are not enough.

Rather, assessment of actual performance plays a crucial role in assessing the effectiveness of training. That is, participants in the New Hire Program needed to answer customer calls courteously, knowledgeably, and promptly. Discover needed a means of assessing transfer of the learning to performance on the job—Level 3 in the Kirkpatrick evaluation model. And Discover needed a means for conducting these assessments in a timely and effective manner, and reporting the results in a way all stakeholders—workers, their managers, and the training staff—could easily understand. From this evaluation data on worker performance, the training staff could easily compile data on the impact of the training—Level 4 in the Kirkpatrick model.

Kaplan adds that the opportunity to develop such an evaluation was what attracted him to his job; he feels that establishing the link between the performance of workers and their training is key to determining the value of training.

Constraints Affecting the Design and Development of the Project

  • As is typical in most large organizations, developing a common report for a variety of products and services proved a challenge. Before starting this project, each business unit already had its own performance measures.
  • As is also typical in a dynamic company facing a rapidly changing business environment, the company and its business units frequently changed their goals—including their performance goals for workers. This not only posed challenges for training, but also for evaluating worker performance as workers would be assessed against a constantly moving target.
  • Trainers primarily had expertise in the operations of the organization but had limited skills in interpreting spreadsheets or statistics. “While my team is talented in almost every regard, we initially had few people who had the ability to divine the underlying meaning behind a set of numbers,” Kaplan says. They could not distinguish between signal and noise (the random fluctuations in data) or apply the statistical tools that help tell the two apart.

For example, one metric of worker performance might have improved 2 percent. But depending on the way the data was collected, that improvement might merely be a common statistical fluctuation—it might not be significant.

Similarly, the start and end dates from which data were collected might affect the performance metrics. Staff did not instinctively know to look for this and similar issues.
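To make the signal-versus-noise problem concrete, here is a minimal sketch, in Python, of the kind of check a statistically trained analyst would apply to that 2 percent improvement. It is illustrative only—not part of Discover's toolset—and the metric, call volumes, and numbers are invented.

```python
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pool the groups under the null hypothesis that nothing really changed.
    pooled = (successes_a + successes_b) / (n_a + n_b)
    std_err = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented numbers: first-call resolution improves from 70% to 72%
# ("a 2 percent improvement"), measured on 400 calls per group.
z, p = two_proportion_z_test(successes_a=280, n_a=400, successes_b=288, n_b=400)
print(f"z = {z:.2f}, p = {p:.2f}")  # roughly z = 0.62, p = 0.53 -- far from significant
```

With only 400 calls in each group, a 2-point lift is indistinguishable from random fluctuation; with tens of thousands of calls the same lift would be highly significant. That is exactly the kind of judgment the training team initially lacked the expertise to make.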

The Solution

Purpose: Develop a set of reports—one for each New Hire Program—that quickly illustrate the ability of workers-in-training to perform like workers already staffing call centers. The report would be used by the following stakeholders:

  • Workers-in-training, to assess the extent to which they had developed their performance and the likelihood of succeeding on the phones once they completed their training.
  • Trainers, to assess the performance of individual learners so they could provide targeted feedback to those learners, as well as assess the general performance of workers and the effectiveness of the training in preparing the workers for the phones. Kaplan hoped the reports would provide both trainers and workers-in-training with “red flags” to which they could quickly respond.
  • Managers of trainers, who could use this information to coach, recognize, and reward trainers based on the performance of the employees in their respective classes.
  • Managers of new workers-in-training, so they would know whether the workers had sufficiently developed their performance in training and have a performance-based prediction of how those workers would do on the phones. Ideally, Kaplan and Anderson hoped the report would give these managers a high level of confidence in their new hires when they began staffing the phones.
  • Management in the business units and in the Training Center of Excellence, to assess the impact of the training on the performance of the business.

Walk-Through of the Solution: The Training team at Discover developed a scorecard to present performance statistics on new hires.

A scorecard is:

A strategic planning and management system that is used extensively in business and industry, government, and nonprofit organizations worldwide to align business activities to the vision and strategy of the organization, improve internal and external communications, and monitor organization performance against strategic goals (Balanced Scorecard Institute, http://www.balancedscorecard.org/BSCResources/AbouttheBalancedScorecard/tabid/55/Default.aspx, accessed December 6, 2011).

Scorecards present performance metrics in a visual way, so all stakeholders can quickly spot what’s working—and what needs attention.

Scorecards seemed like a natural choice to track the performance of workers-in-training because Discover already used scorecards widely throughout the organization to present other business results. Until the implementation of TCOE New Hire Program scorecards, however, the company had not adapted the practice to tracking the effectiveness of training efforts.

The scorecard provides performance information on workers-in-training. It reports on workers by location (the company has four) and by training class.

The scorecard starts with a summary report that provides an overall picture of three things:

  • How workers met key business goals that are measured and tracked quantitatively in their first period after training.
  • Metrics of call quality for workers in their first period after training.
  • Enrollments in individual training classes.

This section also contains notes that set context and explain unusual metrics. For example, if a glitch in the system mistakenly routes calls to operators who are not expecting them, the metrics associated with these calls appear but need to be discounted when considering overall performance.

Next, the scorecard provides a series of charts that provide more in-depth information about each metric and use color coding to quickly call attention to certain information. For example, like a stoplight, the charts highlight metrics that are below expectations in red.

The last section provides the detailed numerical information from which the summary reports and charts are developed.
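The stoplight coding described above amounts to comparing each metric against its target and a tolerance band. The sketch below illustrates the idea in Python; Discover's actual scorecards were built with its corporate reporting platform and Excel, and the metric names, targets, and 5 percent warning band here are invented.

```python
GREEN, YELLOW, RED = "green", "yellow", "red"

def stoplight(value, target, higher_is_better=True, warn_band=0.05):
    """Color a metric: green at or beyond target, yellow within the
    warning band of the target, red otherwise."""
    # Shortfall is the fraction by which the metric misses its target.
    if higher_is_better:
        shortfall = (target - value) / target
    else:
        shortfall = (value - target) / target
    if shortfall <= 0:
        return GREEN
    if shortfall <= warn_band:
        return YELLOW
    return RED

# Invented metrics for one training class.
class_metrics = {
    # name: (value, target, higher_is_better)
    "Calls handled per hour":  (8.2, 9.0, True),
    "Average handle time (s)": (310, 300, False),
    "Call quality score":      (0.93, 0.90, True),
}

for name, (value, target, higher) in class_metrics.items():
    print(f"{name:25s} {stoplight(value, target, higher_is_better=higher)}")
# Calls handled per hour    red
# Average handle time (s)   yellow
# Call quality score        green
```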

Because the specific aspects of performance needed varied across business units, the scorecard also provided individual units with the specific performance data they sought for their staffs. The Training team eventually developed 25 versions of the scorecard, each tailored to the unique needs of the business unit it serves.

Currently, the charts are presented on paper in a face-to-face meeting between a learning strategist (who serves as the account executive to the business units) and the management of the business units. This face-to-face meeting allows the learning strategist to provide the context underlying the information in the report.

Table 1: Facts about the Solution

Budget: Although the project did not have a specific budget, management estimates that, with staffing costs considered, the budget was approximately $100,000. Staffing was the primary cost of the project: two to three training team members worked approximately 50 percent of their time for roughly six months to develop and validate the scorecard.

Length of time needed to complete the project: From beginning to end, 2.5 years; intensive development, six months.

Skills used in the project:

Within the Training team:

  • Project management
  • Performance requirements of customer service staff
  • Existing performance reporting on customer service staff
  • Report design
  • Data visualization
  • Communication
  • Spreadsheet skills

From the Information Technology staff within the Training Center of Excellence (this group has its own technology capabilities):

  • Database management and reporting
  • Spreadsheet skills
  • Interface design
  • Graphic design

Software used:

  • To provide data for the reports: the internal data warehouse at Discover
  • To synthesize the data for reporting purposes: the corporate performance reporting system and a custom reporting application used within Discover that is based on SQL and ColdFusion
  • To present the reports to stakeholders: Excel and Acrobat

Other resources used: People. All of the individuals involved in this project contributed greatly in knowledge and resources.

Process for Developing the Solution: The team at Discover followed this process to develop the scorecards:

General Issue: An Evolving Project

Rather than being a single project with the goal of developing a particular type of report, the development of scorecards at Discover happened as the result of ongoing efforts to provide more effective reports on workers-in-training to its internal clients. Kaplan and Anderson refer to the different phases of this evolution as “versions” rather than phases of development.

With each version of the report, the metrics tracked were refined, the report was made clearer, and its uses were expanded.

Phase 1: Determining what to measure and how to use the metrics

As noted earlier, Kaplan already had an interest in tracking and reporting metrics on training, and that’s one of the many reasons he initiated this project. “We measure everything; it’s why I came here.”

But he admits, “I didn’t fully understand all of the challenges” with reporting metrics through scorecards. Admittedly, some challenges could not have been predicted by anyone.

A few challenges arose during this phase.

  • Determining what to measure. The metrics initially proposed were the typical ones used to assess call center workers, such as average call handle time (the average time it takes for a call center worker to handle an inquiry and wrap up a phone call).

Certain training metrics also were tracked, so managers of the workers-in-training would have a strong idea of the extent of direct training each worker received. Examples of metrics included the number of coaching sessions (in which a worker was coached on his or her performance in phone calls) and the number of listening sessions, in which an experienced agent audited a call and provided feedback.

But the business units sought additional information about the workers-in-training. For example, some business units sought descriptive—or qualitative—measures of performance along with quantitative measures of performance by customer service representatives.

More significantly, all business units sought metrics that indicated the workers-in-training would perform their jobs in such a way that their work would align with the results sought by the business unit.

  • Determining the quality of information included in the report. Although Discover tracks everything and keeps the data in its data warehouse, the Training team quickly learned that having the data and preparing it in a usable form were two different issues. Kaplan notes that subtle differences in data preparation, such as the timing of scorecard cut-offs, created potential differences between the data reported on scorecards generated by the business and used by managers to assess performance, and the scorecards used for training. At times, this undermined the credibility of the training scorecards.
  • Dealing with a moving target. Although metrics are important in his company, Kaplan learned that the metrics that matter one quarter might not matter as much in the next. Part of that resulted from larger industry changes, which forced financial service companies to frequently change their focus. For example, a metric might have been important when a set of employees were hired and trained, but when those workers started on the phones, the metric was no longer an effective measure of business performance. 
  • Finding staff with expertise in statistics to lead and manage these efforts. As noted earlier, one of the major challenges Kaplan faced was the training team's limited expertise in data interpretation. He eventually hired a former university professor to manage this effort; soon afterward, he learned that a new curriculum manager also happened to have this experience.

Phase 2: Determining how to present the reports

In the process of determining what to measure and how, the idea arose to use scorecards. The company already reports many of its key performance metrics using scorecards; doing so in the Training Center of Excellence extended a company practice to Training and could serve as a means of aligning Training with the business.

To present the metrics in a scorecard, the reporting platform had to be selected.

The training team chose the official corporate platform for preparing other scorecards used within Discover.

Specifically, this platform would let the training team pull data into a scorecard, then use that information to track and communicate performance metrics of workers-in-training, and predict their future success on the job.

Phase 3: Rolling out the scorecards

After the scorecards were developed, the training team at Discover started using them to coach workers in new hire training classes and report the results to the managers of these workers.

One of the challenges was presenting the scorecards to stakeholders. Kaplan and Anderson decided to structure the presentation as a face-to-face one, in which learning strategists provided the context before presenting the results.

To do so, the learning strategists needed to be trained in how to interpret and present the reports so they could handle any questions that arose.

Phase 4: Improving the reporting of data with the Training Trends report

For the most valuable performance data, the training team needed the most recent data about the workers-in-training. The current scorecards report on performance only after a time lag; Kaplan and Anderson wanted a report that provided information about workers as they go through training and that pinpointed possible performance problems a day or two after they first appeared, so trainers could intervene immediately.

The resulting Training Trends report provides daily reports on training classes both to the trainers and their managers. For example, because the amount of time spent on the phones in training is correlated with later job performance, Training Trends tracks the number of hours that workers-in-training spend on the phones. If it goes below a threshold, both the trainer and his or her manager are alerted, so they can address the problem.

Similarly, the report tracks the performance of workers-in-training during these calls and, if their performance falls below targets, it alerts the trainer so he or she might explore the situation and appropriately coach the learner.
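At its core, the alerting that Training Trends performs is a daily threshold check. The following sketch illustrates that logic; it is a hypothetical reconstruction, not Discover's implementation, and the threshold value, class records, and notify function are invented.

```python
from dataclasses import dataclass

PHONE_HOURS_THRESHOLD = 2.0  # invented daily minimum of live phone time per class

@dataclass
class DailyClassRecord:
    class_id: str
    trainer: str
    trainer_manager: str
    phone_hours: float  # average hours workers-in-training spent on live calls today

def notify(recipient: str, message: str) -> None:
    # Stand-in for whatever channel the real report would use (e-mail, dashboard flag, etc.).
    print(f"ALERT to {recipient}: {message}")

def check_phone_hours(records) -> None:
    """Alert both the trainer and the trainer's manager when phone time drops below the threshold."""
    for rec in records:
        if rec.phone_hours < PHONE_HOURS_THRESHOLD:
            msg = (f"Class {rec.class_id}: only {rec.phone_hours:.1f} phone hours today "
                   f"(threshold {PHONE_HOURS_THRESHOLD:.1f}).")
            notify(rec.trainer, msg)
            notify(rec.trainer_manager, msg)

check_phone_hours([
    DailyClassRecord("CS-07", "trainer_a", "manager_x", 1.4),  # triggers an alert
    DailyClassRecord("CS-08", "trainer_b", "manager_x", 2.6),  # above threshold, no alert
])
```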

Training Trends rolls out later in 2012.

Results: Although the focus of the project was to provide a quick, clear, consistent, and timely means of reporting the quantitatively tracked performance of workers-in-training, it has resulted in qualitative shifts in thinking about the training experience for new hires. Nearly all stakeholders have shifted their views. Table 2 summarizes some of the key shifts in view.

Table 2: How the Scorecards Have Shifted Thinking about the Training Experience for New Hires

Training managers

  • Anderson notes that having this data on his new hire classes has “changed his view of new hires.” Data from the scorecards provided insights into the effectiveness of training activities and helped to focus adjustments to classroom facilitation, activities, and development that could make the New Hire training more effective.
  • As a result of the Training Trends report, Anderson feels more comfortable engaging with trainers on performance-related issues. The scorecard provides specific metrics on which to base such discussions.
  • As a result of gaps in the phone hours reported for a specific class on Training Trends, Anderson has made adjustments to his own team, helping several instructors find ways to deliver training content more efficiently.
  • As a result of the publication of call quality metrics on training scorecards, Anderson has adjusted the training itself to focus on those metrics, so workers-in-training are better prepared and employ more effective strategies in their new jobs.

Trainers

  • Because skills in reading and reporting performance data are central to effectively using scorecards, and because trainers lacked these skills before, the scorecard project helped to develop them. Furthermore, the scorecard itself facilitates the reading and reporting of data, because it presents the data in clear, visual terms and automatically flags performance issues that require further attention.
  • Because the scorecards provide specific performance data, and because that data is linked to the larger performance objectives of the organization, trainers can have more meaningful and targeted discussions with business units about the performance of workers-in-training and its likely impact on those units.

Business units

  • Because they have specific data on the performance of workers-in-training, and because that data is tailored to the needs of the business, management in the business units has a higher level of confidence in the ability of the training staff to prepare workers for the job.

Lorri Freifeld
Lorri Freifeld is the editor/publisher of Training magazine. She writes on a number of topics, including talent management, training technology, and leadership development. She spearheads two awards programs: the Training APEX Awards and Emerging Training Leaders. A writer/editor for the last 30 years, she has held editing positions at a variety of publications and holds a Master’s degree in journalism from New York University.