Saturday, November 20, 2010

4th Floor -- Results, Organizational Benefits, Objectives, Goals

I gave a seminar on elearning at a federal department this week. It was full of training managers and other human capital experts. They all had the same comment: "This is great information, but it doesn't address what I really need. How do I convince upper management that training is valuable?" As we examined that topic from different angles, a few things became clear. First, many training managers aren't measuring their training beyond a course evaluation. Second, and more shocking, upper management isn't sharing program objectives or organizational goals with the training team.

Let's take the measures first. You're probably familiar with Donald Kirkpatrick's evaluation model. It breaks training assessment into four levels: the learner's evaluation of the course, a learning assessment, knowledge transfer to the job, and organizational benefit. Each level is measured and builds on the preceding one. It's pretty simple in concept, but it can be difficult to put into practice.

The training managers had some problems with the level one assessments. One issue was that the questions didn't seem to go far enough. It's good to ask a learner how he or she liked the course, but it's better to go a little deeper. Ask how the learner would have done the course differently, or whether any information seemed to be missing. Find out how the learner would have survived without the course, and what resources he or she might have used if the course didn't exist. Another big issue was making sure the online evaluations were anonymous in the LMS -- something any good programmer should be able to provide with little difficulty.
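As an aside on that anonymity point, here's a minimal sketch of the idea in Python with SQLite. The table names and schema are my own invention, not any particular LMS's: completion tracking and answers live in separate tables with no shared key, so a report can show who finished the evaluation but never what any individual said.

    import sqlite3

    conn = sqlite3.connect("lms.db")
    cur = conn.cursor()

    # Completion and answers are stored separately; the answers table
    # has no learner column to join against.
    cur.execute("""CREATE TABLE IF NOT EXISTS eval_completion (
                       learner_id   TEXT,
                       course_id    TEXT,
                       completed_on TEXT)""")
    cur.execute("""CREATE TABLE IF NOT EXISTS eval_response (
                       course_id TEXT,
                       question  TEXT,
                       answer    TEXT)""")

    def record_evaluation(learner_id, course_id, answers, when):
        """Mark the learner done, then store answers anonymously."""
        cur.execute("INSERT INTO eval_completion VALUES (?, ?, ?)",
                    (learner_id, course_id, when))
        for question, answer in answers.items():
            cur.execute("INSERT INTO eval_response VALUES (?, ?, ?)",
                        (course_id, question, answer))
        conn.commit()

    record_evaluation("jdoe", "SEC-101",
                      {"What was missing?": "More hands-on practice"},
                      "2010-11-20")

A real system would go a bit further -- batching inserts, for instance, so responses can't be matched to completions by row order or timestamp -- but the separation of tables is the core of it.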

At level two, managers seemed uncomfortable with the idea of a pretest evaluation of the learning population. They wanted to get on with the training. I sympathize, but the pretest is your baseline: without it, you can't say how the training changed the learner's knowledge, and you can't claim any direct effect for it at all.
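To make that concrete, here's a back-of-the-envelope sketch of the arithmetic a pretest makes possible. The scores are made up, and the normalized-gain formula is just one common way to express improvement, not something Kirkpatrick prescribes: it asks what fraction of the available headroom the training actually closed.

    # Hypothetical pre- and post-test scores for five learners, 0-100 scale.
    pretest  = [55, 62, 48, 70, 60]
    posttest = [80, 85, 72, 88, 83]

    def average(scores):
        return sum(scores) / len(scores)

    pre_avg, post_avg = average(pretest), average(posttest)

    # Normalized gain: the share of possible improvement that was realized.
    # Without the pretest, neither the baseline nor the gain exists.
    gain = (post_avg - pre_avg) / (100 - pre_avg)

    print(f"Average before training: {pre_avg:.1f}")   # 59.0
    print(f"Average after training:  {post_avg:.1f}")  # 81.6
    print(f"Normalized gain:         {gain:.0%}")      # 55%

Numbers like that last line are exactly the foundation the level three and level four conversations need.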

Level three, interestingly, was the easiest for the training managers to handle. After all, they had numbers showing that a particular skill was lacking in some way, and it was easy enough to run an additional evaluation to note the change in skill. Since the level two measures were missing, however, managers could only guess that training was responsible for the change. Increased awareness, marketing, or other factors may have been responsible.

The biggest issue was level four, which is to be expected. Since training managers didn't have the first three levels of measurement on solid footing, and upper management wasn't sharing its objectives (or didn't have any), level four was an impossible climb. We've already looked at some issues with the first three levels, which are generally under the training manager's control. Since organizational goals and program objectives are the purview of higher powers, what can a training manager do?

I think the answer is simple: make something up. Training managers know enough about the organization to understand its mission. They know about the big programs and what they're designed to do. The trick is to tie the course to the program.

Though training managers may not get the news, marketing is all over this stuff. Every press release is peppered with "meeting the executive's goal of..." or "increasing operational efficiency" or "reducing our risk exposure." This kind of language is all over an organization's website. Trainers should just pick a goal and march with it. Better yet, pick several. It's not a trick; any good trainer can identify organizational deficiencies instinctively. Trainers know what the organization needs; they just need a little help describing it to the folks with the checkbook.

Training must have a purpose, and that purpose should at least include an organizational benefit. As long as the first three Kirkpatrick levels are solid, it should be easy to show the way to level four.
