Did It Stick?

I've heard it many times from both customers and employees who have taken training: "I don't know if it was good training or not."

I then ask, "Well, what did they do at the end of the course to evaluate the students?"

"They gave them a certificate," is an all-too-common reply.

Frequently, those looking to purchase training are so focused on the syllabus or content of the course that they do not look any further to see what evaluation tools are used, if any. Therefore, they have no real way of knowing whether the training "stuck"—that is, whether knowledge and skills really transferred—until that employee gets the opportunity to use it on the job.

So what constitutes a useful evaluation when it comes to training? To answer this, let's first identify the four levels of evaluation (widely known as the Kirkpatrick model):

Level One: Reactions. How did the students feel about the training? What did they like, or think was valuable? What didn't they like? This is usually accomplished using a post-course survey or structured debrief interview.

Level Two: Learning. Did the students learn what they were supposed to learn? Did they master the course objectives at an acceptable level? This is evaluated using a post-test. In some cases a pretest is administered to start the class, and a post-test is used to measure learning "gain."
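The pretest/post-test "gain" described above is simple arithmetic: subtract each student's pretest score from their post-test score. Here is a minimal sketch using made-up scores (the student names and numbers are purely illustrative):

```python
# Hypothetical scores (percent correct); illustrates measuring learning
# "gain" as post-test score minus pretest score for each student.
pretest = {"Alice": 55, "Ben": 70, "Carla": 60}
posttest = {"Alice": 85, "Ben": 90, "Carla": 88}

for student, before in pretest.items():
    gain = posttest[student] - before
    print(f"{student}: {before}% -> {posttest[student]}% (gain: {gain} points)")

# Average gain across the class, a rough summary of how much learning occurred.
avg_gain = sum(posttest[s] - pretest[s] for s in pretest) / len(pretest)
print(f"Average gain: {avg_gain:.1f} points")
```

A large average gain suggests the course itself added knowledge; a high post-test score with little gain may mean the students already knew the material coming in.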

Level Three: Behavioral Change. Can the students now apply the knowledge and/or skills on the job? Acquiring new knowledge is great, but simply being able to recall it on a test isn't good enough. It must be applied to job tasks to really have value. Behavioral change can be measured through on-the-job observations or follow-up surveys at least 30 days after training.

Level Four: Organizational Change. Ultimately, you want training to have some kind of positive effect on your organization. Though rarely done, this can be evaluated by comparing an important business measure before and after the training. This can include, but is not limited to: increased productivity (e.g., more repairs completed per day); fewer warranty returns of "no trouble found" parts; fewer vehicle comebacks for the same concern; increased on-time deliveries; or increased customer satisfaction.
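A Level Four before-and-after comparison can be as simple as averaging a business measure over comparable periods and computing the change. A sketch, using hypothetical daily repair counts (the numbers are invented, not benchmarks):

```python
# Hypothetical shop data: repairs completed per day, sampled for a
# period before and a period after training.
repairs_before = [11, 12, 10, 13, 11]
repairs_after = [13, 14, 12, 15, 14]

avg_before = sum(repairs_before) / len(repairs_before)
avg_after = sum(repairs_after) / len(repairs_after)

# Percent change in the business measure across the training.
pct_change = (avg_after - avg_before) / avg_before * 100

print(f"Before: {avg_before:.1f}/day, after: {avg_after:.1f}/day "
      f"({pct_change:+.1f}%)")
```

Keep in mind that this comparison alone does not prove the training caused the change; the variables discussed later in this article still apply.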

As a buyer of training, you should require that, at a minimum, evaluation through Level Two is done. A "certificate of participation" simply won't cut it if you want the training to have any value. Request to see the course objectives, not just a description of the content. (If a training supplier cannot provide you with objectives, I would be highly suspicious!) See if those objectives are in line with the on-the-job goals you have for that employee. Then, ask to see what measurements are used to ensure the employee has mastered those objectives at the end of the class.

Level Two measurements can be a simple post-test with multiple choice questions—a method that is particularly effective for measuring knowledge gain. It can also include a performance test, either in a "live" environment (e.g., on-vehicle) or embedded in a simulation (e.g., a diagnostic scenario in an online course). Simply put, the measurement should fit the objective being measured! If the objective is performance-oriented ("Diagnose concerns in the turbocharger control system"), a performance-based evaluation method is best.

Level Three and Four evaluations are difficult to do and potentially costly—but they are not impossible. A training provider that promises to follow up with students 30, 60 or 90 days after the course to survey on-the-job application of the material is certainly providing you a value-added service. But this is also something you can do yourself. Use the course objectives (remember, you requested those!) and follow up with your employees to see if their mastery of those objectives has carried over to the accomplishment of job tasks. And I'm sure many fleet operators already keep the kinds of data mentioned for Level Four evaluation.

If you do take on the task of Level Three and Four evaluations, keep in mind a couple of key variables:

(1) What was the ultimate purpose of the training? Was it to get the employee "minimally qualified" for a new type of work, or was it to improve competency or performance in key task areas where the employee already works? This affects what you want to measure in follow-up evaluations.

(2) There are usually other variables involved in employee performance that might need to be "filtered out" (availability of proper equipment, shifting of job assignments due to employee attrition or time off, shop environment, support from management or other employees, etc.).

As I said, difficult, but not impossible, and often worth the extra effort!