Kirkpatrick Model: Four Levels of Training Evaluation
When it comes to creating an employee learning and development strategy, a lot of time and attention goes into the prep work.
L&D managers must thoughtfully craft the training strategy by identifying employee skill gaps, setting learning objectives, choosing the appropriate training method, implementing the program, and reinforcing training.
After investing time and resources in creating elaborate employee training programs, it is crucial for L&D teams to understand if their programs are delivering ROI, whether the participants are putting their learning into practice, and if the training is positively impacting their job role and overall organizational success.
However, measuring the exact outcomes of training and uncovering those specific numbers is a time-consuming and challenging task for L&D teams. On top of that, traditional evaluation methods – such as a single end-of-course feedback form – rarely capture on-the-job impact, which is why L&D teams are always looking for new and engaging solutions to shed light on the effectiveness of their training programs.
One of the most effective and globally recognized methods of evaluating the results of employee training and learning programs is the Kirkpatrick Model.
In this article, we will learn in-depth about the model and explore its different levels, with examples to understand how the model is implemented.
What is the Kirkpatrick Model?
The Kirkpatrick Model of evaluation is the best-known model for analyzing and evaluating the effectiveness and results of employee training programs. It takes into account the style of training, whether formal or informal, and rates it against four levels of criteria:
- Reaction of the learner and their thoughts on the training program.
- Learning and increase in knowledge from the training program.
- Behavior change and improvement after applying the learnings on the job.
- Results that the learner’s performance has on the business.
Understanding the Four Levels of Kirkpatrick Model
The Kirkpatrick Model is part of building a comprehensive learning strategy, so it requires pre-planning. Before you create your next training program, read through the four levels of the Kirkpatrick model, and you’ll be in a strong position to record meaningful data for the effectiveness of your training programs.
Level 1: Reaction
The first level of the Kirkpatrick model is “reaction”. This level is learner-focused and helps you understand how employees want to learn. This makes your training efforts more effective because training is received better when it’s delivered according to learners’ preferences.
The objective for this level is straightforward – it measures whether the learners find the training to be relevant to their job role, engaging, and useful. There are three parts to the reaction phase:
- Satisfaction: Is the learner happy with what they learned in the training?
- Engagement: How much did the learner engage and contribute to the learning experience?
- Relevance: How much of the information acquired will employees be able to apply in their job?
The reaction level is most commonly assessed by a post-training survey (sometimes referred to as a “smile sheet”) that asks learners to rate their experience. The questions are designed to determine whether or not the learner enjoyed their experience and if they found the training useful for their work. The areas that the survey should focus on are:
- Program objectives
- Course materials
- Content relevance
- Facilitator knowledge
Surveys are designed to help maintain the quality and effectiveness of your training programs. The post-training evaluations provide an in-depth understanding of the value learners are getting from the training, how impactful the training is, its effectiveness concerning different trainees across the organization, and its ability to adapt to different types of learners.
Tips for Implementing Level 1: Reaction
- Use online questionnaires/surveys.
- Set aside extra time at the end of training for learners to fill out the survey.
- Pay attention to verbal responses given during training.
- Encourage written comments.
- Create questions focusing on learner takeaways.
- Reiterate the need for honesty in answers – you want learners’ true opinions, not polite responses.
Example – Reaction Level
Let’s consider a real-life scenario where training evaluation would be necessary.
A small sales company rolled out a new CRM platform for its sales reps to drive sales productivity and eliminate manual tasks. To get the maximum return on their investment, the company needs to provide CRM training to the sales reps to teach them how to use the platform to its full potential.
The company hires a trainer to host the first one-hour CRM training session. The objective of the session is to teach the sales reps how to create opportunities in the CRM and start migrating from spreadsheets to the new platform. The instructor puts in their best effort to provide a comprehensive training experience for the learners.
But how does the company know how well the employees engaged with the training session?
At the conclusion of the experience, employees are given an online survey to rate, on a scale of 1 to 5, how relevant they found the training to their jobs, how engaging they found the training, and how satisfied they are with the knowledge received.
There’s also a question or two about whether they would recommend the training to a colleague and whether they’re confident that they can use the new CRM tool for sales activities.
In this example, efforts are made to collect data about how the participants initially react to the training, and the collected data is then used to address concerns in the training experience and provide a much better experience to the participants in the future.
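The survey responses described above can be summarized with a few lines of code. This is a minimal sketch using hypothetical data – the question names and 1-to-5 scores are assumptions for illustration, not taken from a real survey tool.

```python
# Hypothetical Level 1 data: each dict holds one rep's 1-5 ratings
# on the three reaction dimensions from the post-training survey.
responses = [
    {"relevance": 4, "engagement": 5, "satisfaction": 4},
    {"relevance": 3, "engagement": 4, "satisfaction": 5},
    {"relevance": 5, "engagement": 4, "satisfaction": 4},
]

def average_scores(responses):
    """Average each rating dimension across all respondents."""
    totals = {}
    for response in responses:
        for question, score in response.items():
            totals[question] = totals.get(question, 0) + score
    return {q: round(total / len(responses), 2) for q, total in totals.items()}

# A dimension that averages noticeably lower than the others (e.g.
# relevance) points at what to fix before the next session.
print(average_scores(responses))
```

Averaging per dimension rather than overall is the useful part: a single combined score would hide whether the problem is content relevance or delivery.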
Level 2: Learning
Level 2 gauges the learning of each participant: whether learners have acquired the intended knowledge, skills, attitude, confidence, and commitment as a result of the training.
Learning can be evaluated through both formal and informal methods. Methods of evaluation include assessments (self-assessments and team-assessments) or interviews.
Tips for Implementing Level 2: Learning
- Set Training Objectives: Employee training objectives are long and short-term measurable outcomes that define what learners will be able to do at the end of the training. Identify what you want to accomplish with a training program – improve employee performance, bridge a knowledge gap, teach new employee skills, teach a new software or tool, etc. Whatever the case, make sure that the purpose of training is clear. Training objectives allow you to analyze each objective against the training goal and measure the effectiveness of your training program.
- Pre-test: A test or evaluation prior to the training must be conducted to determine the current knowledge and skill levels of the trainees. Then, when the training is finished, test your trainees a second time to measure what they have learned, or measure their learning with interviews or verbal assessments.
Example – Learning level
Carrying the example from the previous section forward, let’s see what an example of the level 2 evaluation would look like.
The objective of the training session is to teach employees how to create opportunities in their new CRM and facilitate a smooth move away from spreadsheets in order to improve an organization’s sales data quality. For this, a training program is implemented for a group of 20 sales reps to learn the opportunity creation process.
To determine if the learners gained the intended knowledge from the CRM training session, the trainer asks them to demonstrate the process of creating a new opportunity on the CRM platform.
This helps the instructor understand whether the sales reps are ready to move forward with the next training session. The trainers may also deliver a short multiple-choice assessment before and after the training session to measure how much knowledge the learners acquired from the training.
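The pre-test/post-test comparison above can be sketched as a simple per-trainee score delta. The rep names and percent-correct scores below are hypothetical, assuming a multiple-choice assessment graded on a 0–100 scale.

```python
# Hypothetical pre/post multiple-choice scores (percent correct)
# for a few of the 20 sales reps in the CRM training example.
pre_scores = {"rep_a": 40, "rep_b": 55, "rep_c": 70}
post_scores = {"rep_a": 80, "rep_b": 75, "rep_c": 90}

def learning_gains(pre, post):
    """Per-trainee improvement from pre-test to post-test, in points."""
    return {rep: post[rep] - pre[rep] for rep in pre}

gains = learning_gains(pre_scores, post_scores)

# The average gain summarizes how much the group learned; the
# per-rep gains show who may need a refresher before Level 3.
avg_gain = sum(gains.values()) / len(gains)
print(gains, round(avg_gain, 1))
```

Keeping both the individual and average gains matters: a healthy group average can mask one or two reps who barely improved.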
Level 3: Behavior
One of the most crucial steps in the Kirkpatrick Model is the “behavior” phase. It measures whether participants were truly impacted by the learning and if they’re applying the learnings in their daily tasks.
This level analyzes the differences in the participant’s behavior at work after completing the program, which helps determine if the knowledge and skills the program taught are being used in the workplace – and what impact it has made.
Testing behavior is challenging because it is generally difficult to anticipate when a trainee will start to properly apply their learnings, which makes it harder to decide when, how often, and how to conduct post-training evaluations.
Tips for Implementing Level 3: Behavior
- To get reliable data, implement this level 3 to 6 months after the training is completed.
- Use a mix of observations and interviews to analyze behavioral change.
- In the beginning, use subtle evaluations and observations to evaluate change and once the change is noticeable, start using evaluation tools such as interviews or surveys.
- Have a clear definition of what the desired change looks like. For instance, what skills are expected to be put into action by the learner?
- Evaluations are more successful when folded into existing management and training methods. Implementing a digital adoption platform (DAP) enables you to deliver on-the-job training via guided walkthroughs and observe on-the-job behavior change through an analytics dashboard that L&D professionals can monitor. DAP analytics provide information on metrics such as how much time users spent in a new system, how many activities were successfully completed, how many times the in-app contextual training walkthroughs were viewed, etc. These dashboards help gather user feedback in real time, right when employees access training and apply it to their work.
Example – Behavior Level
To observe the behavior level in our CRM training example, keep measuring the CRM adoption metrics that matter to your organization via end-user feedback loops and data analytics. Pull win rates and sales performance KPIs, and observe the time each participant takes to create an opportunity 3 to 6 months after completing the training session to examine any behavior changes.
Compare the metrics of each trainee to observe which of them have adopted the new CRM and are using it in their daily tasks, and which aren’t. Additionally, make use of your employee analytics to gather insights into which employees are engaging with the training and implementing their learnings on the job, as well as which employees are underutilizing the tool.
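The trainee-by-trainee comparison described above amounts to computing an adoption rate per rep and flagging underutilizers. This is an illustrative sketch: the activity counts and the 80% adoption threshold are assumptions, not values from any real analytics platform.

```python
# Hypothetical Level 3 analytics: opportunities each rep created in
# the CRM vs. in spreadsheets during the months after training.
activity = {
    "rep_a": {"crm_opps": 30, "spreadsheet_opps": 2},
    "rep_b": {"crm_opps": 5, "spreadsheet_opps": 25},
    "rep_c": {"crm_opps": 18, "spreadsheet_opps": 0},
}

def adoption_rate(rep_stats):
    """Share of a rep's opportunities created in the CRM (0.0-1.0)."""
    total = rep_stats["crm_opps"] + rep_stats["spreadsheet_opps"]
    return rep_stats["crm_opps"] / total if total else 0.0

# Flag reps below an assumed 80% adoption threshold as candidates
# for follow-up coaching or a different training approach.
laggards = [rep for rep, stats in activity.items()
            if adoption_rate(stats) < 0.8]
print(laggards)
```

A ratio works better here than raw CRM counts, because a rep who creates few opportunities overall can still be fully adopted.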
Level 4: Results
The results level of the Kirkpatrick Model, commonly regarded as the primary goal of the program, is dedicated to measuring the overall success of a training program. It measures learning against the organization’s business outcomes, i.e., the KPIs established at the start of the training.
Common KPIs include:
- A boost in sales
- Lowered spending
- Improved quality of products
- Fewer accidents in the workplace
- Increased productivity
- Increased customer satisfaction
Tips for Implementing Level 4: Results
- Before starting the training program, you should know exactly what metrics are to be measured throughout the program, and share that information with all participants.
- Use a control group. Take two groups with many common factors and put only one group through the training experience. Compare the data generated by each group and use it to improve the training experience.
- Don’t rush the final evaluation; give participants enough time to effectively adapt and implement their new skills.
- Observers need to properly understand the training type and desired outcome.
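The control-group tip above can be sketched as a simple comparison of group means on a KPI. The numbers below are hypothetical monthly opportunity counts, and a difference of means is only a rough effect estimate, assuming the groups really are comparable.

```python
# Hypothetical Level 4 comparison: average monthly opportunities
# per rep for a trained group vs. an untrained control group.
trained = [22, 25, 19, 28, 24]
control = [18, 17, 20, 16, 19]

def mean(values):
    return sum(values) / len(values)

# The gap between group means is a rough estimate of the training's
# effect on this KPI; a real analysis would also check statistical
# significance before crediting the training.
effect = mean(trained) - mean(control)
print(round(effect, 1))
```

The control group is what lets you attribute the lift to training rather than to seasonality or a market shift affecting everyone.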
Example – Results Level
In our CRM implementation example, the primary metric the training evaluators look at is opportunity creation in the CRM and data migration from old spreadsheets.
To measure results, observe whether new opportunities are being created in the CRM platform or in the old spreadsheets. If all 20 participants are now creating opportunities in the CRM tool and saving time doing it, the training program was effective and contributed to the success of the sales team and the overall organization.
On the other hand, if some of the participants are still using spreadsheets, it means the training wasn’t as useful for them. In this case, L&D teams can either arrange a different method of training for these employees or figure out if there is any other reason behind their resistance to change and work accordingly.
To recap, the Kirkpatrick Model analyzes and evaluates the effectiveness and results of employee training programs, formal or informal, against four levels of criteria: the learner’s Reaction to the program, their Learning and increase in knowledge, their Behavior change after applying the learnings on the job, and the Results their performance has on the business.
Using the Kirkpatrick Model creates an actionable training measurement plan that clearly defines goals, measures results, and identifies areas of notable impact. Analyzing data at each level allows organizations to evaluate the relationships between the levels, understand training results better, and adjust plans and correct course throughout the learning process.
Use Whatfix’s data analytics dashboard to determine how employees interact with your training programs. The dashboard enables you to track whether employees are engaging with the training, monitor completion rates, and see where employees drop off within the training. Based on those numbers, you can infer whether the training is effective or ineffective – useful, interesting, or confusing.
Request a demo to see how Whatfix empowers organizations to leverage data insights to build and optimize onboarding, training, and other in-app user experiences.