When a company I am currently interviewing with asked how I measure the success of Learning and Development strategies, this is what I shared with them. It is a holistic, structured approach in which I collect metrics on the three levels I have found most critical in L&D programs: Reaction, Learning, and Impact. Since every L&D initiative is tied to a company investment of money, time, and people, data (not just any and all data, but data that matters) is critical to the success of each project!
“What is not measured cannot be improved, and what is not improved will always degrade.” – William Thomson, Lord Kelvin
“Reaction” refers to the experience of the program: how employees feel during and after training, their level of satisfaction, and whether they felt it was worth their time. I collect this data through quantitative and qualitative feedback surveys both during and at the end of the program. This lets employees know that we genuinely care about their input, which boosts engagement and participation while also surfacing areas where their experience can be improved.
“Learning” refers to activity on and retention of the curriculum. I collect data such as completion rates, dropout rates, average test scores, and attendance, and I also look at how much time employees spend on training, since employee time is very valuable. I measure Learning at the start of a program with a skills-gap analysis or other baseline KPIs, again at the end of the program, and once more 2-3 months after completion to examine retention and identify where the content or curriculum needs updates. This way, instead of throwing the proverbial spaghetti against the wall to see what sticks, robust programs are built through intentional edits.
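To make the three-checkpoint idea concrete, here is a minimal sketch (with entirely hypothetical cohort scores) of how baseline, end-of-program, and follow-up assessments can be compared to quantify learning gain and retention:

```python
def average(scores):
    """Mean of a list of assessment scores."""
    return sum(scores) / len(scores)

# Hypothetical cohort scores (0-100) at the three checkpoints.
baseline = [52, 61, 48, 70]        # skills-gap assessment before training
end_of_program = [78, 85, 74, 90]  # assessment at program completion
follow_up = [74, 82, 70, 88]       # same assessment 2-3 months later

# How much the cohort improved, and how much of that improvement stuck.
learning_gain = average(end_of_program) - average(baseline)
retention_rate = average(follow_up) / average(end_of_program)

print(f"Average learning gain: {learning_gain:.1f} points")
print(f"Retention after 2-3 months: {retention_rate:.0%}")
```

A drop in the retention rate between cohorts is exactly the kind of signal that points to a specific module needing a curriculum update rather than a wholesale rewrite.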
“Impact” is where we demonstrate ROI. I measure the number of new initiatives and projects started, the number of individuals working on them, and the budget spent on them. I analyze Impact metrics alongside Reaction and Learning to see whether programs should be improved and adapted, pivoted, or sometimes terminated! I collect Impact data prior to program initiation and then quarterly, so that collection stays consistent and only best-in-class programs remain in active rotation.
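For the ROI piece, a simple sketch (again with hypothetical figures) shows the basic calculation: the net benefit a program generated relative to what it cost.

```python
def roi(benefit, cost):
    """ROI as a percentage: net benefit relative to cost."""
    return (benefit - cost) / cost * 100

# Hypothetical quarterly figures for one program.
program_cost = 40_000     # budget spent on the training program
program_benefit = 65_000  # estimated value of resulting initiatives

print(f"ROI: {roi(program_benefit, program_cost):.1f}%")
```

In practice the "benefit" side is the hard part, which is why the quarterly cadence matters: it gives each program several data points before a keep-or-terminate decision is made.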
Because this methodology is holistic, important decisions never rest on a small pool of data: trends can be detected early enough to intervene, improvements can be pinpointed, and ROI can be demonstrated across employee retention, engagement, and performance!