(and how you can get these 10 metrics really fast!)

There is much to say on the use of metrics and measurements, and there is much ado about the use of dashboards in analytics. A growing number of people (in and outside L&D) feel we should not be led by numbers, and a growing number feel we should not use dashboards at all. Their reasoning is that metrics lead to incorrect conclusions and cause people to act and behave differently from, and sometimes opposite to, what is actually desired. People find that placing too much emphasis on quantitative metrics, especially in a complex setting, paints an overly simplistic picture. People find that designing, creating and publishing metrics that make sense is costly and very time consuming. And many people find dashboards overly complex and far from user friendly.

But rather than taking a negative stance towards metrics and dashboards, I think these challenges reflect the complex nature of Learning Analytics (and data analytics as a whole). And it would be a huge missed opportunity if we in L&D kept ignoring the amazing value a dashboard, with the right metrics of course, can bring.

Because much of the criticism of dashboards and metrics has little to do with what they are, and much more with how we use them. Choosing the wrong metrics, or focusing on a single metric in isolation, will indeed lead to unwanted behavior. But that does not mean that all metrics should be avoided; it means you need to select more appropriate metrics and always look at all relevant metrics together. Incorrect interpretation of metrics that leads to the wrong decisions should not be a reason to abandon metrics; it should be a trigger to improve people's ability to interpret data correctly. And poorly designed dashboards that cause 'information overload' and are too complex to use should not be a reason to throw dashboards out of the window; they should be a reason to invest in well-designed dashboards.

But I can also not ignore the fact that selecting, designing and building metrics can be complicated. That is why I am sharing my top 10 list of fundamental L&D metrics that I think should be standard on every L&D dashboard. Not that you should stop at these metrics, on the contrary. Having these metrics in your L&D dashboard will provide you with a wealth of insights that enables you to immediately start making data-driven decisions in many different parts of the L&D process.

Especially when you are sitting on a huge pile of L&D data and are not sure what to do with it, this top 10 list could mean a flying start for your L&D department to become more analytical and use data to drive decisions. So rather than spending weeks or months discussing which metrics to select, you could simply adopt this list, get these metrics out quickly and then move on to the really interesting stuff.

So, here is the top 10!

And here’s an example of what a dashboard with these metrics could look like:

(read on if you want to know more about each metric and why it is so useful)

Engagement Metrics: Reach and Completion Ratios

I'm starting with two engagement metrics: Reach and Completion Ratios. Now, engagement is a much more complex topic that I typically explain by comparing it to a sales funnel, like I did in my quest for the ultimate learning KPI (link). The definition I give to learner engagement is 'stolen' from marketing and goes something like this: "All interaction between an employee and all available L&D tools, products, offerings and services in the organization (through various online or offline channels), in the belief that engaging target learning audiences to a high degree is conducive to learning transfer and impact and contributes to furthering business objectives". That is a mouthful and a lot to grasp (which is why I've dedicated a full post to the topic), and it makes analyzing engagement quite an undertaking. But as a first go and starting point, engagement can be expressed and tracked using these two metrics.

Audience Reach

Audience reach basically tells you what % of your audience you have been able to reach with your learning activities and solutions. You calculate audience reach by dividing the number of employees who are actively engaged with L&D activities by the total number of employees. This does mean deciding on a definition of 'actively engaged', but a solid starting point is to count employees who completed at least one, or a couple of, trainings. This may sound a bit lame. After all, one completion does not make you engaged in learning. But there are two things to consider here.

First, you need to exclude all mandatory training, simply because you only want to include training that was initiated by the employee.

Secondly, there may be little value in looking at audience reach across the entire workforce and the entire catalog, as it's to be expected that most if not all employees have completed at least one training on their own initiative. However, when you dive a little deeper into the data you start to realize the value of this metric. Here are some examples:

% Reach in AI Upskilling

Imagine that your company is in the middle of a large AI upskilling initiative. For this purpose a serious program is deployed, with several learning experiences and activities at different proficiency levels and covering different aspects of AI. If your standard L&D dashboard contains the metric audience reach, you can zoom into that AI program and use the data to start answering questions like: What portion of the workforce is actively upskilling in AI? What portion of the workforce is actively upskilling in using generative AI to accelerate administrative processes? Or to improve sales? And what portion of the workforce is engaged at the beginner level, and what portion at intermediate? If you have specific expectations and targets on what portion of the workforce needs AI knowledge and skills, % reach is a great metric to track the actual numbers against your target and take action if the numbers fall behind.

% Reach over time

Another interesting insight is the % reach over time. This is when you look at the % reach by quarter, month or even week. Looking at % reach in this manner allows you to better understand the L&D equivalent of returning customers: an employee who regularly completes a learning activity will appear more often on the list than an employee who completes activities more infrequently. So analyzing the % reach by week or by month can help you make decisions in two areas. First, it can help identify periods in time when learning is more popular, and you can use that information to plan deployments of new activities. Secondly, it helps you understand the general trend of learning consumption over time. And especially if there is a downward trend, there could be a reason to take action and increase initiatives, like marketing, to boost engagement.
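To make this concrete, here is a minimal sketch of how both reach metrics could be computed from a flat export of completion records. The column names, the headcount figure and the pandas approach are illustrative assumptions, not a prescribed data model.

```python
import pandas as pd

# Illustrative completion records (column names are assumptions;
# adjust to your own LMS/LXP export).
completions = pd.DataFrame({
    "employee_id": ["e1", "e1", "e2", "e3"],
    "completion_date": pd.to_datetime(
        ["2024-01-15", "2024-02-03", "2024-01-20", "2024-03-11"]),
    "mandatory": [False, False, False, True],
})
total_employees = 250  # headcount comes from HR, not from the learning data

# Exclude mandatory training: only employee-initiated learning counts.
voluntary = completions[~completions["mandatory"]]

# Overall audience reach: unique engaged employees / total employees.
reach_pct = voluntary["employee_id"].nunique() / total_employees * 100
print(f"Audience reach: {reach_pct:.1f}%")

# % reach over time: unique engaged employees per month.
monthly_reach_pct = (
    voluntary.groupby(voluntary["completion_date"].dt.to_period("M"))["employee_id"]
    .nunique()
    .div(total_employees)
    .mul(100)
)
print(monthly_reach_pct)
```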

Completion Ratio

The completion ratio is a metric where you take the number of completions and divide this by the total number of registrations plus completions. As with calculating audience reach, you should not include mandatory learning records. The reason for tracking this metric is that a low completion ratio suggests that employees quit activities early. And quitting early could mean one of two things: either employees have found what they were looking for in the activity and do not need to finish it in full, or employees realized that the activity was not meeting their expectations. Both insights could trigger an action. In the case where employees drop off because they have found what they were looking for, you could consider splitting the activity into smaller pieces or modules. In the case where employees drop off because they do not find the activity useful, you obviously want to make improvements to the activity.

So note that a low completion rate is not necessarily a bad thing, but for most of us in L&D it does act as a warning sign that you either need to modularize your content more, or update it to better fit the needs of the employee.
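As a rough illustration of the formula, here is a minimal sketch that computes the completion ratio from a list of non-mandatory learning records. The status values and column names are assumptions; map them to whatever your own LMS export contains.

```python
import pandas as pd

# Illustrative learning records: one row per registration or completion.
records = pd.DataFrame({
    "activity_id": ["a1", "a1", "a1", "a2", "a2"],
    "status": ["completed", "registered", "completed", "registered", "registered"],
    "mandatory": [False, False, False, False, True],
})

voluntary = records[~records["mandatory"]]
n_completions = (voluntary["status"] == "completed").sum()
n_registrations = (voluntary["status"] == "registered").sum()

# Completion ratio = completions / (registrations + completions)
completion_ratio = n_completions / (n_registrations + n_completions)
print(f"Completion ratio: {completion_ratio:.0%}")
```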

As with Audience Reach, the Completion Ratio can also be calculated for a portion of your dataset, meaning you can look at the Completion Ratio for just one country, a specific topic, or a specific learning modality. Here are some examples and scenarios:

Low completion rates in specific countries

Low completion rates in specific countries could indicate a language challenge. Maybe employees in these countries struggle with learning activities that are not in their native language, and you could consider updating your language strategy and encouraging more translations. Thanks to AI, the cost of translations has gone down considerably!

Low completion rates in specific functions

Low completion rates in specific functions like HR or Finance could indicate a number of things. It could be that the available activities do not provide sufficient context for employees in that function. For example, I am seeing a lot of learning activities on data analysis. But most of these activities take their examples and use cases from marketing and sales: they talk about website traffic, conversion and other key M&S aspects. Now, while these aspects are also of interest to L&D, the different context in which data analysis is explained makes it very difficult for participants to internalize the knowledge and apply it to their own domain. That is why they could drop off. This is actually one of the main reasons why we created the Learning Analytics Toolkit: a data & analytics program (it's actually much more than a program!) that is specifically designed for L&D professionals!

Low completion rates for (online) activities with long durations

Many of us have been in a situation where we needed to make the following trade-off: extend the (online) activity a little more, or split it up into smaller modules. Most of us choose to extend the activity. It's easier, faster and cheaper. But it has left us with fairly lengthy activities, especially online ones like eLearning and videos. And in times where time itself is our most valuable commodity, employees no longer have the time to go through lengthy activities, especially if these activities contain a lot of content and information on topics they are already familiar with. So a root cause of high drop-off rates could be that employees find the activity too lengthy. The solution? Make your activities more modular!

Learning Transfer Metrics: Average Learning Hours and Average Ratings

Learning Transfer is a complex concept that is not easy to measure. At SLT we speak of the law of increasing complexity when we explain that the things that are really worth measuring are those that are very hard to measure. And Learning Transfer is no different. Transfer of learning is described on Wikipedia as follows: "Transfer of learning occurs when people apply information, strategies, and skills they have learned to a new situation or context". In our analytics model we distinguish between different levels of transfer: the transfer of knowledge, which is fairly easy to measure using knowledge checks, for example; the transfer of skills, which is already much more difficult to measure, as explained here and here; and finally the transfer of desired behavior, which is even more complex. So building a full-on learning transfer dashboard for all these levels is a lot of work and requires deep expertise.

But there is a shortcut you can take. One that relies on a few assumptions, but can be used as a proxy for learning transfer.

Meerman’s Law: “The more valuable the insight, the more complex it is to obtain.”

Average Learning Hours per Employee

People who spend time on learning activities will learn something, and people who spend more time learning are likely to learn more than people who spend less time learning. This assumption is the basis for measuring the average learning hours per employee.

The average learning hours per employee is defined as the average estimated time spent by an employee on learning activities; that is, all learning activities that are measured and reported (for some organizations this covers most learning activities, while other organizations only measure time spent on formal learning, or even only compliance learning). The metric is calculated by taking the estimated duration of each learning activity (often recorded in the LMS or LXP), multiplying it by that activity's total number of completions, summing this up to the total learning hours, and then dividing this number by the total number of employees.
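Here is a minimal sketch of that calculation, assuming a simple activity-level export with estimated durations and completion counts; the column names and headcount figure are illustrative.

```python
import pandas as pd

# Illustrative activity catalog with estimated durations (in hours) and
# completion counts; in practice both usually come from the LMS/LXP.
activities = pd.DataFrame({
    "activity_id": ["a1", "a2", "a3"],
    "duration_hours": [0.5, 40.0, 2.0],
    "completions": [1200, 50, 300],
})
total_employees = 3000  # headcount from HR

# Total learning hours = sum of (estimated duration x completions).
total_hours = (activities["duration_hours"] * activities["completions"]).sum()

# Average learning hours per employee.
avg_hours_per_employee = total_hours / total_employees
print(f"Average learning hours per employee: {avg_hours_per_employee:.1f}")
```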

The average learning hours per employee is a great measure to have, and contrary to many others, I firmly believe every L&D dashboard should have it. In addition to the reasons mentioned in my blog post on learning hours, measuring the average learning hours as a proxy for learning transfer holds some merit. It is a quantifiable way to measure something that is by itself hard to measure, and it is supported by some educational psychology and neuroscience research around dedication, repetition and cognitive processing (although I am not an educational psychologist, nor a neuroscientist). And we know that dedicated practice (and the time it takes to do dedicated practice) is a crucial element of skill development.

However, there are also comments to make on this assumption. It ignores individual differences: some employees take less time to learn and apply new knowledge and skills, while others take more time. The quality of study strategies and materials also has a huge impact on the time spent. There is an optimum, so spending more time beyond that optimum does not mean more transfer; it could actually indicate a waste of time. In today's world of social media and constant distraction, it's questionable how much time people really spend on focused learning. And lastly, and not unimportantly, we have a tendency to focus too much on learning hours as an L&D metric. While I am a fan of measuring and tracking learning hours, we should not make the mistake of elevating learning hours to a goal, like some companies have done with '100 learning hours per year per employee' campaigns. In fact, I actually see it as the core value add of L&D that L&D activities upskill employees with the least possible effort!

Still, with all of these pros and cons in mind, I do reserve a seat in the list of 10 must-have L&D metrics for average learning hours per employee. Having this metric allows you to see trends over time. Is the average time spent increasing or decreasing? Whether that is good or bad depends on the context. I guess you would like to see an increase in average learning hours spent on AI upskilling activities, or on data and analytics, while you would like to see a decrease in learning hours spent on compliance-related learning (without compromising quality, naturally!). It also enables you to compare and benchmark parts of your organization. Why is the average so much higher in one country compared to another? Why so much lower in a specific pay grade, or in a specific function?

Word of caution!

As a closing remark I do need to share a very important word of caution. In many L&D organizations there is not enough attention to accurate and complete recording of activity durations in the LMS or LXP. It goes without saying (or at least I hope so) that the duration in the LMS or LXP is crucial input for this metric. Hence, making sure this value is populated, and populated with the best possible estimate of the time required to complete the activity, is very important when using the metric of average learning hours per employee!

Average Rating (score)

The second proxy for learning transfer is maybe even more controversial than the average learning hours per employee: the average rating of activities. Most LMS and LXP tools give employees the possibility to rate titles with, for example, 1 to 5 stars, where 5 stars is regarded as excellent and 1 star as poor. The assumption behind using this metric for learning transfer is that employees who have a positive notion towards an activity are likely to have learned more than employees taking activities that are rated lower or even poorly. There could be various reasons why employees rate activities poorly: poor quality, lack of relevance, too simple or too complex, poor structure. But whatever the reason, low average ratings indicate a low(er) transfer of learning, while there is some evidence that high ratings lead to more effective transfer.

Another advantage of using average rating as a proxy for learning transfer is that the data is easy to collect, and very easy to benchmark. Benchmarking average rating across your catalog (by modality, provider, skill, etc.) and your organization (location, function, etc.) can be very useful to understand differences (both positive and negative) and adjust activities accordingly to increase rating scores and possibly learning transfer. Note that I use the word 'possibly' for a very specific purpose here, to underline the assumption!

However, there are also some very convincing reasons not to use rating-based metrics. The first one is the subjective nature of a rating. As an L&D professional, I am very aware of what good learning experience design looks like, and I know what poor design looks like, so I typically rate activities that I participate in lower than the average. We also need to be aware of cultural differences: the likelihood of an employee giving poor ratings to activities also depends on cultural background. Secondly, employees could rate an activity positively while not applying anything they have learned in their work (for whatever reason that may be), which kind of defeats the whole purpose of this exercise. There is a high risk of oversimplification. Much like with the metric of average learning hours per employee, we in L&D have the tendency to see activity ratings as the objective, and high ratings as evidence of success. Which they are not. They are indications. They can be powerful indications, for sure. But they remain indications that help you make better decisions regarding the activity. Finally, most ratings are recorded immediately after the completion of the activity, so they do not provide any indication of the employee's perception and reflection at a later stage. It might well be that after three months the employee who gave a 5-star rating realizes the activity was a waste of time.

That said, I think it is not so much the indications of high ratings that are useful; it is the indications of low ratings that are especially useful. Because even though we cannot provide scientific evidence that high ratings always lead to improved learning transfer, we can imagine that a negative perception towards a learning activity does impede learning transfer.

Word of caution!

As with average learning hours per employee, I need to give a very important word of caution. The % of participants who rate a learning activity is typically very low, as low as a few percent of the total number of employees who have completed the activity. With such low numbers of responses you always need to ask yourself whether these few ratings are a fair representation of all participants. Most likely they are not, if only for the realization that people who are positively inclined towards a learning activity are more likely to rate it than employees who have a negative inclination. So 5% rating responses with an average of 4.5 out of 5 does NOT provide evidence of success! It might actually be a big reason for concern if you consider that 95% of the participants could dislike the program!
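Below is a small sketch of how you could surface that caveat on the dashboard itself: compute the average rating per activity together with its response rate, and flag averages that rest on too few responses. The 10% threshold and the column names are assumptions, not a standard.

```python
import pandas as pd

# Illustrative ratings export: one row per submitted rating (1-5 stars).
ratings = pd.DataFrame({
    "activity_id": ["a1", "a1", "a2"],
    "stars": [5, 4, 2],
})
# Completions per activity, used to judge how representative the ratings are.
completions = pd.Series({"a1": 400, "a2": 35}, name="completions")

summary = ratings.groupby("activity_id")["stars"].agg(["mean", "count"])
summary = summary.join(completions)
summary["response_rate"] = summary["count"] / summary["completions"]

# Flag averages that rest on too few responses to take at face value.
summary["low_confidence"] = summary["response_rate"] < 0.10  # threshold is a choice
print(summary)
```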

The single most important Compliance Metric: Percentage Past Due

Compliance learning is complex. But it is often a fairly well-structured process that lends itself well to analytics. The one compliance metric I would always include in my L&D dashboard is the percentage of past due records.

Past due occurs when the deadline to complete a compliance activity has passed, but the activity has not been completed. The key reason to use 'Past Due' rather than 'Completions' in compliance learning is that most of the time a due date is set for compliance learning. This means that employees with a due date in the future still have time to complete the learning activity, so it would be 'unfair' to only look at completions as a means of tracking compliance learning.

The Percentage Past Due is calculated by taking the total number of compliance learning records that are past due and dividing that by the total number of compliance records. It's key to look at records and not people, as a single employee can be involved in many compliance learning activities and may be past due on only one of them.
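A minimal sketch of the calculation, assuming one row per assigned compliance record with a due date and a completion flag (the column names are illustrative):

```python
import pandas as pd

# Illustrative compliance records: one row per assigned compliance activity.
records = pd.DataFrame({
    "employee_id": ["e1", "e1", "e2", "e3"],
    "due_date": pd.to_datetime(["2024-01-31", "2024-06-30", "2024-03-31", "2024-02-28"]),
    "completed": [True, False, False, True],
})
today = pd.Timestamp("2024-04-01")

# A record is past due when the deadline has passed and it is not completed.
past_due = (~records["completed"]) & (records["due_date"] < today)

pct_past_due = past_due.mean() * 100  # past due records / all compliance records
print(f"Percentage past due: {pct_past_due:.1f}% (target: <= 5%)")
```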

I would want this metric in my standard L&D dashboard because it provides immediately actionable insights into compliance learning, especially when you put a target on it, which is very easy to do for this metric. If you are serious about compliance learning, the percentage of past due records should not exceed 5%. I've seen committed organizations with 2% as a target. With a target past due % and the actual % measured, a well-designed dashboard shows you immediately whether you are under your target (which in this context is a good thing!) or over it and must take action. People with one or more past due records should be notified, as should the direct supervisors of notoriously past due employees.

Learning cost metrics: Cost per learning hour available and consumed

Every CLO, CHRO and CEO wants to see cost metrics on a dashboard, and so should every L&D professional. Because what we do costs money, and we are expected to spend that money as well as we can. However, calculating the full return on investment of learning is a very difficult thing to do, mainly due to the complexity of establishing the impact of L&D on business performance (see also here). This to the extent where the cost of calculating ROI could even outweigh the cost of the program. So for sure, I do not suggest always calculating the ROI, and definitely not beginning learning analytics with calculating ROI.

But that does not mean that a learning dashboard should not have cost metrics. The obvious choice would be to include L&D expenditure vs budget. But these metrics are often included in already available dashboards and reports from finance. You could still decide to include them in your learning dashboard. I want to focus, however, on two different metrics.

Average cost per learning hour available

First, the metric called "cost per available learning hour". This metric is also referred to as cost per created learning hour, but as many L&D teams these days procure content as much as they create it, or even more, I feel the latter name does not cover the full intent. The definition of this metric can be described as something like "all direct and indirect costs related to making an estimated 1 hour of learning experiences available for employees". This means that a 30-minute video on AI that cost a total of 15.000 euro has a cost per available learning hour of 30.000. And an in-house developed 40-hour leadership program costing 600.000 euro has a cost per available learning hour of 15.000. And yes, that is less expensive than the video! Procuring a library of content for a fixed price can be expressed in the same way. Let's say you invest 300.000 euro in an external online content library that holds a massive 60.000 learning hours. That would mean a cost of only 5 euro per learning hour available. That cost of 5 euro is actually not a huge exaggeration, and it demonstrates the appeal of these external libraries: as they are able to sell and resell the content over and over again, they can offer very low prices per learning hour available.
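The arithmetic is simple enough to sketch directly with the figures from these examples (a rough illustration, not a costing model):

```python
# Cost per available learning hour = total cost / learning hours made available.
# The figures below are the examples from the text (amounts in euro).
examples = {
    "30-minute AI video": {"cost": 15_000, "hours_available": 0.5},
    "40-hour leadership program": {"cost": 600_000, "hours_available": 40},
    "external content library": {"cost": 300_000, "hours_available": 60_000},
}

for name, e in examples.items():
    cost_per_hour = e["cost"] / e["hours_available"]
    print(f"{name}: {cost_per_hour:,.0f} euro per learning hour available")
# -> 30,000 / 15,000 / 5 euro, matching the examples above.
```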

The cost per available learning hour is a direct indication of the efficiency of your design and development process, or of the effectiveness of your procurement process when you buy learning experiences on the market. A high cost means inefficiency, a low cost means efficiency. It is also possible to put a target on these costs so you can compare actual costs with the target. One hour of classroom training, for example, could set you back anything between 500 and 5.000 euro (mind that this excludes the cost to deliver the training!), while a video runs between 1.000 and 25.000 euro, and a traditional eLearning somewhere in the region of 7.000 to 50.000.

You can immediately see the different cost ranges per learning modality, and that is one of the reasons why this metric is so interesting. It depends heavily not just on the learning modality, but also on things like the complexity of the topic and the audience (do you need translations?). And this helps a lot in establishing whether the cost per learning hour is reasonable or not. For example, if the cost of creating a simple 1-hour traditional eLearning on data privacy (here we have compliance learning again) is higher than the cost of creating a 1-hour serious game on how to apply AI to your daily work… there's something not quite right. Either you have massively overpaid for the compliance eLearning, or you have a fantastic and highly efficient framework for creating 1-hour serious games. Or both.

Also, if you have initiated any improvement plans, maybe outsourced L&D work, or centralized development work, you can use this metric to track the trend over time and see if the changes are really making an impact.

And if you are serious about carefully spending company money to get the most out of the investment in L&D, you can set a target, actively track these numbers and take action if, for whatever reason, the costs are increasing. In addition, you can track the estimated costs of design and development projects along the same lines, to be better able to project L&D expenditure!

Average cost per learning hour consumed

Remember the somewhat counterintuitive example where I showed that creating 1 hour of video is more expensive than 1 hour of classroom training? There is a huge risk that people using this metric as the sole guideline will end up only approving investments in classroom training. That can't be right, can it? Because if it were, why are most L&D organizations going through digitalization?

You’re right. You should never ONLY look at cost per available learning hour as a financial metric. If you base all your decisions on only this metric, you will end up becoming a library!

So I always recommend including a second, somewhat similar yet very different, metric: cost per learning hour consumed. This metric takes all the costs associated with content design, development, delivery, deployment and maintenance and divides them by the total number of learning hours that were consumed. So that 30-minute video that cost 15.000 euro? Well, if that video is mandatory to watch for the whole company of 3.000 employees, you come to only 10 euro per learning hour consumed (15.000 euro divided by 3.000 x 0.5 hours). The 600.000 euro budget to build and deliver that leadership program, on the other hand, only serves the top 50 employees. That would make the cost per learning hour consumed for this program 600.000 divided by 40 x 50 (learning hours per participant times the number of participants), which is 300 euro. That is significantly more expensive than the video! So the cost per learning hour consumed should relate to the objective and purpose of the program. Key programs (like onboarding and leadership) and key strategic skills (like Data, Digital, Analytics and AI) could have a higher cost per learning hour consumed, while generic programs and non-strategic skills should have lower costs per learning hour consumed.
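And the same arithmetic for cost per learning hour consumed, again using the figures from the example above as an illustration:

```python
# Cost per learning hour consumed =
#   total cost / (hours per participant x number of participants who consumed it).
# The figures below are the examples from the text (amounts in euro).
examples = {
    "mandatory 30-minute video": {"cost": 15_000, "hours_each": 0.5, "participants": 3_000},
    "leadership program": {"cost": 600_000, "hours_each": 40, "participants": 50},
}

for name, e in examples.items():
    hours_consumed = e["hours_each"] * e["participants"]
    print(f"{name}: {e['cost'] / hours_consumed:,.0f} euro per learning hour consumed")
# -> 10 euro for the video, 300 euro for the leadership program.
```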

And that is exactly why this metric is a useful proxy for the ROI of L&D. And it is much simpler to calculate!

Portfolio Health

Most of us either go to the doctor for a regular checkup, or we have wearable devices like the Apple Watch that provide indicators that tell us something about our health: body temperature, heart rate, blood pressure, weight, muscle vs fat mass, and so on.

What if I told you that these indicators are metrics? And that these metrics together tell you whether you are healthy or not. If one of these metrics goes above or below the accepted boundaries, either you or your doctor will take notice and do some more investigation.

Just as we can define metrics that tell us something about our personal health, we can also define metrics that tell us something about the health of our learning catalog, or as we at SLT refer to it, the L&D portfolio of products and services (which in most cases is almost equal to the catalog!). We refer to these metrics as Portfolio Health Metrics. They are metrics that tell you how well your catalog is serving the employees in your company, in a similar way as webshops monitor their portfolio of products to see if it still meets the needs of their customers. Portfolio Health metrics can indicate whether your catalog is overinflated and could use some content cleaning, or is short on activities in specific areas. The following three metrics are all metrics around portfolio health that I would always include in my L&D dashboard: the number of available learning hours, the average completions per learning activity and the % of activities utilized.

The number of available learning hours

Depending on your learning tools, learning programs and activities may be called anything from Smartcards, Training Objects, Courses, Offerings or Assets. But tracking how many of these you have available for your employees is always a good idea. And it is relatively easy to achieve by counting them.

The total number across all modalities, all topics and all systems (if you have more than one) can provide useful insights into the growth or decline of your portfolio. And depending on your strategy, growth can be a good or a bad thing. If you are expanding your portfolio, you will want to see growth and will take action if the dashboard indicates that the portfolio is shrinking, not growing, or not growing fast enough. If you are on a quest to reduce an inflated portfolio by promoting re-use of existing content and retiring outdated, poor quality and/or irrelevant content, you will want to take action if the portfolio is actually growing, not shrinking, or not shrinking fast enough. A trendline of total available assets over time can give you that instant insight.

However, there is one major challenge with tracking the number of learning items in your portfolio: a 5-minute video is considered equal to a 40-hour leadership program. And that is not a fair (fair as in justified) comparison. Counting only the number of learning activities would be like a webshop that only counts the number of sales, forgetting that the sale of a small 2 euro item does not bring the same revenue as a 500 euro new TV. So rather than counting the number of learning assets, I prefer to include the number of available learning hours in my standard L&D dashboard! This provides you with similar insights as explained above, but in a more balanced way. If you, for example, want to rationalize Excel training by replacing ten 3-hour eLearning courses with a lot of overlap (representing 10 assets and 30 learning hours) with twenty 15-minute videos that together cover the necessities (representing 20 assets but only 5 learning hours), using learning hours available would provide you with the more accurate insight; looking at the number of assets only would suggest a growth of the portfolio that is not really there!
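To illustrate the difference between the two views, here is a small sketch of that Excel rationalization example, counting assets versus summing learning hours available (the catalog contents are of course made up):

```python
import pandas as pd

# Illustrative catalog snapshots: before and after the Excel rationalization.
catalog_before = pd.DataFrame({"asset": [f"excel_elearning_{i}" for i in range(10)],
                               "duration_hours": [3.0] * 10})
catalog_after = pd.DataFrame({"asset": [f"excel_video_{i}" for i in range(20)],
                              "duration_hours": [0.25] * 20})

for label, catalog in [("before", catalog_before), ("after", catalog_after)]:
    n_assets = len(catalog)
    hours_available = catalog["duration_hours"].sum()
    print(f"{label}: {n_assets} assets, {hours_available:.0f} learning hours available")
# The asset count grows from 10 to 20, but hours available drop from 30 to 5 --
# only the hours-based view reflects the rationalization.
```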

Possibly more interesting is to combine learning hours available with learning activity dimensions like modality, provider and skill. This allows you to monitor any portfolio strategy you have in place and take action when the numbers require it.

So, for example, if you are ramping up AI upskilling, you should see a positive trend in the learning hours available on AI skills.

If you are transitioning from F2F to online learning, you should see a decrease in available learning hours for F2F, and possibly an increase in available learning hours for online learning.

If you adopt a micro-learning strategy based on video, you should see an increase in available learning hours in video.

If your strategy is to buy more and build less, you should see a decrease in learning hours available through in-house developed programs and an increase in available learning hours through external learning opportunities.

Average completions per activity

I've explained already how the cost of learning can be expressed as the cost to make 1 hour of learning available, and the cost to consume 1 hour of learning. The latter metric depends not just on the content created, but also on the total number of completions for each activity: the more completions, the lower the average cost per learning hour consumed.

But there is more you can do around completions. The average completions per learning activity can be calculated by taking all registered completions and dividing that by the number of available learning activities, including the ones that have no completions at all (this is essential!). This number overall, so all completions across all your systems divided by all available learning activities, tells you something about the level of inflation of your L&D portfolio or catalog. Low average completions per activity means that your catalog is inflated: it most likely has too many learning activities that nobody is really interested in (so with zero completions) and/or too many learning activities that only interest a handful of employees (so with very few completions). Both of these could be candidates to be retired.
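A minimal sketch of the calculation; the essential detail is that the denominator comes from the catalog, so zero-completion activities are counted (the data and column names are illustrative):

```python
import pandas as pd

# Illustrative catalog with completion counts. Activities with zero completions
# must stay in the denominator, which is why the calculation starts from the
# catalog rather than from the completion records.
catalog = pd.DataFrame({
    "activity_id": ["a1", "a2", "a3", "a4"],
    "completions": [500, 12, 0, 0],
})

avg_completions_per_activity = catalog["completions"].sum() / len(catalog)
print(f"Average completions per activity: {avg_completions_per_activity:.1f}")
```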

As with learning hours available, you can use average completions per activity to look at different learning modalities and providers.

For example, learning modalities that have a relatively high cost to design and build, like video and eLearning, or VR and serious games, should have much higher average completions than learning modalities that are reasonably easy to make, like podcasts and face-to-face classroom training. Learning experiences with lower than expected completions, or where the number of completions is in steady decline, suggest a decrease in popularity and should be candidates to retire.

The average number of completions also provides great (but less actionable) insight into how granularly learning is organized. The number of completions per activity tells you a lot about granularity: low average completions means that learning activities are being developed or bought for only small groups of people, while high average numbers of completions per activity indicate a more 'one size fits all' approach. Which one is best really depends on context. If you are supporting a small number of top researchers in your company by curating scientific papers with a high level of specialization, it does not matter that much that only a few people read them. On the other hand, if you develop activities on generic topics, skills and content and they show a low average number of completions… something is clearly wrong.

% activities utilized

Where low average completions per activity indicate low or declining popularity, tracking the % of utilized activities provides a great indication of obsolete learning activities. Obsolete activities are activities that nobody is really interested in, maybe because they cover an uninteresting topic, are outdated, or are of very poor quality. And especially when you look at activities that have not been used for a longer period of time, you should consider retiring them. Tracking the % utilization of the catalog then helps you track whether the retirement of these activities happens at the right pace. I do not think it is realistic for all your learning activities to be used, but it should still be possible to achieve between 75% and 95% utilization overall, depending on how many new activities are being added, as new activities might take a while before they are found by employees.

The reason to also include this metric in my top 10 is that the metric average completions per activity does not provide sufficient information on its own. Technically, this number could look reasonable in a catalog where half of the learning activities have extreme numbers of completions while the other half has none. I'll illustrate this with two examples.

The first example is the data, digital, analytics and AI portfolio in the catalog. This (section of the) portfolio is characterized by small, short learning modules that are curated by AI and L&D staff to provide maximum personalization. Because of this choice, the average number of completions is fairly low, as you have a lot of modules, while the % utilization is high, as most of the modules are used as part of a personalized learning experience.

On the other hand, you have a more traditional set of communication and change management learning experiences: a set of 10 long eLearning courses of 4 hours each. Over the years people have found their favorite few courses and have been curating and promoting these the most, while the other courses in this portfolio have been neglected and are not being used, as they more or less overlap with the ones that are being used.

So based on these two scenarios, you could conclude, by looking at the average completions only, that the Data, Digital, Analytics and AI segment of the portfolio requires immediate attention. If you look at both metrics, however, your actions will most likely be more targeted towards the Communication and Change segment of the portfolio.

With this information, you can actually establish a simple decision matrix that could look like this

High average completions + high utilization = great! keep going
High average completions + low utilization = worth investigating
Low average completions + high utilization = fine, no action required
Low average completions + low utilization = immediate action required
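Here is a small sketch of how such a decision matrix could be scripted against your portfolio data. The thresholds and the example segment figures are purely illustrative assumptions; you would tune them to your own catalog.

```python
import pandas as pd

# Illustrative portfolio segments with their two health metrics.
segments = pd.DataFrame({
    "segment": ["Data/Digital/Analytics/AI", "Communication & Change"],
    "avg_completions": [8, 120],
    "utilization_pct": [92, 40],
})

HIGH_COMPLETIONS = 50   # assumption: what counts as "high" for your catalog
HIGH_UTILIZATION = 75   # assumption: in line with the 75-95% range above

def advise(row):
    high_c = row["avg_completions"] >= HIGH_COMPLETIONS
    high_u = row["utilization_pct"] >= HIGH_UTILIZATION
    if high_c and high_u:
        return "great! keep going"
    if high_c and not high_u:
        return "worth investigating"
    if not high_c and high_u:
        return "fine, no action required"
    return "immediate action required"

segments["advice"] = segments.apply(advise, axis=1)
print(segments)
```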

Word of caution!

There is one final and crucial comment to make on the definition of utilization, because there is no fixed definition of utilization. Some use a single completion as evidence of utilization; other companies and L&D teams already count having a registration as evidence of utilization. You can imagine that the choice of what is considered a utilized activity versus a non-utilized one has a major effect on the number. And while I see L&D teams erring on the side of caution, that is, picking a low threshold for utilization, I would advise against it. Although the threshold for utilization surely depends on the company size, and to some extent on your L&D strategy, a high threshold for utilization is preferable to a low one. Let's face it, L&D is not put in place and given substantial budgets to create learning experiences for only a handful of people. Unless these people are the best of the best and the most strategic employees in the organization, it is probably a waste of time, resources and money. So I would rather define utilization as having at least 50 or more completions in the last year than set it at only one completion.

What is the best number to use? Good question! You can actually do some interesting analytics on this by taking data on available assets, completion numbers and costs to arrive at an optimal number of completions above which you can consider the investment worthwhile.

A final comment

There is a slide deck going around from one of the L&D analyst and research groups that includes over 400 L&D metrics. I can be sure of two things: first, that 400 metrics is way too many, and secondly, that this set of 10 metrics is a great starting point for any L&D dashboard. You might disagree with me. If so, please do let me know, as I would love feedback on this and I am open to ideas and suggestions for even better metrics.

For now, these 10 metrics are the foundation of the standard L&D dashboard that we offer at SLT Consulting. We offer this dashboard, built with Power BI, as a commercial service to L&D teams who do not have the knowledge, capacity and/or funding to design and build something themselves. But I would also argue that L&D teams who do have the knowledge, capacity and funding to build their own dashboard can make a huge jump ahead in learning analytics by using ours. That is why we built it in the first place.

When you as an L&D team implement our dashboard, you'll have actionable insights based on industry best practices in a matter of weeks, especially when you use Cornerstone OnDemand as your LMS, as our data model is more or less based on this most widely used LMS in the world. This is a huge time and money saver compared to trying to set up a foundational dashboard completely by yourself. And that means you can use your time and your budget to work on more advanced analytics!

So, be smart! Don’t try to reinvent the wheel. Let us know if you need help getting a foundational L&D dashboard up and running via info@sltconsulting.nl

