Hi team! I see when I audit client Quality Management Systems that they often forget to evaluate training effectiveness. There are several ways this can be conducted, documented, and then demonstrated in an audit. How do you do this at your workplace? Interested to hear your methods.
Dear Susan Gorveatte,
Kirkpatrick left us with a very interesting model, and I was always trying to encourage behavior that produced results.
Whenever my guys went for training, I would expect them to implement what they learned on the line. They would then write a report showing the savings that resulted from their implementation.
The result was that my training requests were always approved immediately, as the ROI was confirmed.
A word of caution, though: when I first implemented it, the guys thought I had a "sneaky" way of discouraging them from training. This gradually changed when they realized I was using their reports in their annual appraisals to demonstrate their contribution.
It looks a bit backward, as level 3 is behavior and level 4 is results. But it did have a positive effect on behavior in the long run…
Let me know if any of you also feel that although behavior drives results, sometimes…
Thoughts?
Yours sincerely.
Ernest
I think Kirkpatrick's model is a fundamental tool for ensuring training meets requirements.
When designing training we want to make sure four things happen:
- Training is used correctly as a solution to a performance problem
- Training has the right content, objectives, and methods
- Trainees are sent only to training for which they have the basic skills, prerequisite knowledge, and confidence needed to learn
- Training delivers the expected learning
Training is a useful lever in organizational change and improvement, so we want to make sure the training drives organizational metrics. And like everything else, you need to be able to measure it in order to improve it.
The Kirkpatrick model is a simple and fairly accurate way to measure the effectiveness of adult learning events (i.e., training), and while other methods are introduced periodically, the Kirkpatrick model endures because of its simplicity. The model consists of four levels, each designed to measure a specific element of the training. Created by Donald Kirkpatrick, the model has been in use for over 50 years and has evolved through application by learning and development professionals around the world. It is the most widely recognized method of evaluating the effectiveness of training programs, and it became popular because it breaks a complex subject down into manageable levels. It accommodates any style of training, both informal and formal.
Level 1: Reaction
Kirkpatrick’s first level measures the learners’ reaction to the training. A level 1 evaluation leverages the strong correlation between learning retention and how much the learners enjoyed, and found value in, the time spent. Level 1 evaluations, euphemistically called “smile sheets,” should delve deeper than merely whether people liked the course. A good course evaluation will concentrate on three elements: the course content, the physical environment, and the instructor’s presentation skills.
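To make that concrete, here is a minimal sketch of aggregating smile-sheet responses across those three elements. The field names, 1-5 scale, and data are illustrative assumptions, not from any particular survey tool:

```python
# Minimal sketch: aggregating Level 1 "smile sheet" scores per element.
# The three fields mirror the elements above; the 1-5 scale and the
# sample responses are illustrative assumptions.
from statistics import mean

responses = [
    {"content": 4, "environment": 3, "instructor": 5},
    {"content": 5, "environment": 4, "instructor": 4},
    {"content": 3, "environment": 4, "instructor": 5},
]

for element in ("content", "environment", "instructor"):
    avg = mean(r[element] for r in responses)
    print(f"{element}: {avg:.2f} / 5")
```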
Level 2: Learning
Level 2 of Kirkpatrick’s model, learning, measures how much of the content attendees learned as a result of the training session. The best way to make this evaluation is with a pre- and posttest. Identical pre- and posttests are essential because the difference between the two scores indicates the amount of learning that took place. Without a pretest, one does not know whether the trainees already knew the material before the session, and unless the questions are the same, one cannot be certain that trainees learned the material in the session.
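As a rough illustration (the trainee names and scores are invented), the learning measure is simply the per-trainee difference between identical pre- and posttests, plus a cohort average:

```python
# Minimal sketch: Level 2 learning as the pre/posttest score difference.
# Identical tests are assumed, so the deltas are directly comparable.
from statistics import mean

scores = {  # trainee -> (pretest %, posttest %); illustrative data
    "trainee_a": (55, 85),
    "trainee_b": (70, 90),
    "trainee_c": (40, 75),
}

gains = {name: post - pre for name, (pre, post) in scores.items()}
for name, gain in gains.items():
    print(f"{name}: +{gain} points")
print(f"cohort average gain: {mean(gains.values()):.1f} points")
```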
Level 3: Behavior
Level 3 measures whether the learning is transferred into practice in the workplace.
Level 4: Results
Level 4 measures the effect of the training on the business environment: do we meet our objectives?
| Evaluation Level | Characteristics | Examples |
|---|---|---|
| Level 1: Reaction | Reaction evaluation is how the delegates felt about the training or learning experience, and their personal reactions to it. Did the trainee consider the training relevant? | Feedback forms based on subjective personal reaction to the training experience; verbal reactions, which can be analyzed; post-training surveys or questionnaires; online evaluation or grading by delegates; subsequent verbal or written reports given by delegates to managers back at their jobs. Typically “happy sheets.” |
| Level 2: Learning | Learning evaluation is the measurement of the increase in knowledge or intellectual capability from before to after the learning experience. Did the trainees learn what was intended to be taught? Did the trainees experience what was intended for them to experience? What is the extent of advancement or change in the trainees after the training, in the direction or area that was intended? | Typically assessments or tests before and after the training; interview or observation can also be used before and after, although this is time-consuming and can be inconsistent. Methods of assessment need to be closely related to the aims of the learning, and reliable, clear scoring and measurement need to be established. Hard-copy, electronic, online, or interview-style assessments are all possible. |
| Level 3: Behavior | Behavior evaluation is the extent to which the trainees applied the learning and changed their behavior; this can be measured immediately and several months after the training, depending on the situation. Did the trainees put their learning into effect when back on the job? Were the change in behavior and the new level of knowledge sustained? | Observation and interview over time are required to assess change, the relevance of change, and the sustainability of change. Assessments need to be designed to reduce the subjective judgment of the observer. |
| Level 4: Results | Results evaluation is the effect on the business or environment resulting from the improved performance of the trainee; it is the acid test. Measures would typically be business or organizational key performance indicators such as volumes, values, percentages, timescales, return on investment, and other quantifiable aspects of organizational performance, for instance: numbers of complaints, staff turnover, attrition, failures, wastage, non-compliance, quality ratings, achievement of standards and accreditations, growth, retention, etc. | The challenge is to identify which results relate to the trainee’s input and influence, and how. It is therefore important to identify and agree accountability and relevance with the trainee at the start of the training, so they understand what is to be measured. This process overlays normal good management practice; it simply needs linking to the training input. For senior people particularly, annual appraisals and ongoing agreement of key business objectives are integral to measuring business results derived from training. |
Example in Practice – CAPA
When building a training program, start with the intended behaviors that will drive results. Evaluating our CAPA (corrective and preventive action) program, we have key aims against which we can apply measures, as sketched below.
| Behavior | Measure |
|---|---|
| Investigate to find root cause | % recurring issues |
| Implement actions to eliminate root cause | Preventive-to-corrective action ratio |
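Here is a minimal sketch of how those two measures could be computed from CAPA records. The record structure, field names, and sample data are assumptions made for illustration, not a prescribed schema:

```python
# Minimal sketch: computing the two CAPA measures in the table above from
# a list of issue records. Field names and sample data are illustrative.
capas = [
    {"id": 1, "type": "corrective", "recurrence_of": None},
    {"id": 2, "type": "preventive", "recurrence_of": None},
    {"id": 3, "type": "corrective", "recurrence_of": 1},  # repeat of issue 1
    {"id": 4, "type": "preventive", "recurrence_of": None},
]

recurring = sum(1 for c in capas if c["recurrence_of"] is not None)
pct_recurring = 100 * recurring / len(capas)

preventive = sum(1 for c in capas if c["type"] == "preventive")
corrective = sum(1 for c in capas if c["type"] == "corrective")
ratio = preventive / corrective  # preventive-to-corrective action ratio

print(f"% recurring issues: {pct_recurring:.1f}%")
print(f"preventive:corrective ratio: {ratio:.2f}")
```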
To support each of these top-level measures we define a set of behavior indicators, such as cycle time, right-first-time rate, etc. To support these, a review rubric is implemented.
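One way such a rubric could be encoded and scored is sketched below. The criteria, weights, and 0-3 scale are invented for the example; a real rubric would reflect the behavior indicators above:

```python
# Minimal sketch: scoring one reviewed CAPA event against a rubric.
# Criteria, weights, and the 0-3 rating scale are illustrative assumptions.
RUBRIC = {
    "problem_statement_clear": 0.2,
    "root_cause_evidenced":    0.4,
    "actions_address_cause":   0.4,
}

def rubric_score(ratings: dict) -> float:
    """Weighted score on a 0-3 scale for one reviewed event."""
    return sum(RUBRIC[criterion] * ratings[criterion] for criterion in RUBRIC)

sample_event = {
    "problem_statement_clear": 3,
    "root_cause_evidenced":    2,
    "actions_address_cause":   2,
}
print(f"rubric score: {rubric_score(sample_event):.2f} / 3")
```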
Our four levels to measure training effectiveness will now look like this:
| Level | Measure |
|---|---|
| Level 1: Reaction | Personal action plan and a happy sheet |
| Level 2: Learning | Completion of the rubric on a sample event |
| Level 3: Behavior | Continued performance and improvement against the rubric and the key review behavior indicators |
| Level 4: Results | Improvement in the % of recurring issues and an increase in the preventive-to-corrective action ratio |
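For levels 3 and 4 the point is the trend over time, not a single snapshot. A small sketch of checking for sustained improvement (the quarterly values are invented):

```python
# Minimal sketch: Level 4 results as a trend in % recurring issues.
# Quarterly values are illustrative assumptions, not real data.
pct_recurring_by_quarter = {"Q1": 18.0, "Q2": 15.5, "Q3": 12.0, "Q4": 9.5}

quarters = list(pct_recurring_by_quarter)
deltas = [
    pct_recurring_by_quarter[b] - pct_recurring_by_quarter[a]
    for a, b in zip(quarters, quarters[1:])
]
improving = all(d < 0 for d in deltas)  # recurrence should fall each quarter
print(f"quarter-over-quarter change: {deltas}")
print("sustained improvement" if improving else "no sustained improvement")
```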
This is all about measuring the effectiveness of the transfer of behaviors.
| Strong Signals of Transfer Expectations in the Organization | Signals that Weaken Transfer Expectations in the Organization |
|---|---|
| Training participants are required to attend follow-up sessions and other transfer interventions. What it indicates: individuals and teams are committed to the change and to obtaining the intended benefits. | Attending the training is compulsory, but participating in follow-up sessions or other transfer interventions is voluntary or even resisted by the organization. What it indicates: the key factor for a trainee is attendance, not behavior change. |
| The training description specifies transfer goals (e.g., “Trainee increases CAPA success by driving down recurrence of root cause”). What it indicates: the organization has a clear vision of, and expectations for, what the training should accomplish. | The training description only roughly outlines training goals (e.g., “Trainee improves their root cause analysis skills”). What it indicates: the organization has only a vague idea of what the training should accomplish. |
| Supervisors take time to support transfer (e.g., through pre- and post-training meetings). Transfer support is part of regular agendas. What it indicates: transfer is considered important in the organization and is supported by supervisors and managers, all the way to the top. | Supervisors do not invest in transfer support. Transfer support is not part of the supervisor role. What it indicates: transfer is not considered very important in the organization. Managers have more important things to do. |
| Each training ends with careful planning of individual transfer intentions. What it indicates: defining transfer intentions is a central component of the training. | Transfer planning at the end of the training does not take place, or happens only sporadically. What it indicates: defining transfer intentions is not an essential part of the training, or not part of it at all. |
Good training, and thus good and consistent transfer, builds that into the process. It is why I am such a fan of using a rubric to drive consistent performance.
Originally appeared on my blog: Measuring Training Effectiveness for Organizational Performance – Investigations of a Dog (investigationsquality.com)
John Dew:
Participants in these posts on training might be interested in being part of the Workforce Development Network in the ASQ Education Division. We meet once a month via Zoom to discuss a wide range of issues, including the quality of training, training and teaching quality methods, and the role of the quality organization in an organization's overall workforce development efforts. Please contact me at jrdew@troy.edu if you would like to be added to the Network's list of participants.
John Dew
John, this is a great idea! Thank you, that's another great resource.