(Above: a representation of possibly the quest of all quests: King Arthur and his knights starting the search for the Holy Grail. By Scottish artist William Dyce, 1848)
Part I: Too many learning metrics, too little value
Most of us will remember the days when it seemed every corporation was (and some still are!) suffering from an informal KPI adopted by just about everyone: to have as many slides, and as much information on each slide, as possible for every presentation you gave. We started to call this ‘death by PowerPoint’, and despite the attempts of technology providers and storytelling thought leaders like David JP Philips, it is still a very current topic and challenge. The root cause of death by PowerPoint lies largely in our lack of storytelling abilities and our limited knowledge of how our brain works and remembers things.
In Data Analytics (not just in Learning Analytics), I’m seeing a similar trend emerging that I refer to as ‘death by data’. For some reason we think that more data points equal better insights. The consequence of this thinking is the creation of hideous dashboards packed with as many visuals, metrics and KPIs as will fit the page, using the most fascinating visuals we can find. Key among them: gauges and pie charts. And it does not matter whether these crowded dashboards are created in Excel, PowerPoint or more advanced business intelligence solutions like Power BI in the example below. We find them everywhere. Clearly this problem is not rooted in the use of specific tooling; rather, it is the way we so often (mis)use all of these tools.
In Learning Analytics, I have seen similar examples of “death by learning data”. If we are not sure what we want to report, and what story we want to tell with data and insights, we have a tendency to show all the data and metrics we can find. As with PowerPoint, showing too much data and too many metrics and KPIs in cluttered dashboards leaves people overwhelmed with information and unable to extract the key points, insights and actions, let alone remember or act upon them.
Some examples of what I’ve seen in Learning Dashboards:
- Long lists of course titles with registration and completion data
- Long lists of names with training data. (This raises extra concerns and risks related to data privacy! Why would you need the names of individual employees?)
- Visuals with so much data or so many labels that you cannot actually read them, like world maps where the bubbles displaying learning consumption overlap, or spiderweb visuals with so many axes that the labels overlap.
- Data points that are grouped in a single visualization but logically do not belong together, or are very hard to distinguish from each other
Inadequate KPIs stimulate the wrong behavior
Where showing too much data limits our ability to remember the key points and prevents us from taking action based on the insights provided by the dashboard, showing the wrong metrics or KPIs might be even more harmful. We could waste people’s time by focusing on metrics and KPIs that are irrelevant. But in a worst-case scenario, a strong focus on the wrong metric can have serious negative consequences. In his book “The Tyranny of Metrics”, Jerry Z. Muller shares a number of horrifying behavioral consequences of linking performance to the wrong metrics. The two examples that made the biggest impression on me were from healthcare: ambulances with patients waiting outside a hospital’s emergency entrance until the hospital was sure it could diagnose the patient within the agreed SLA, and surgeons refusing high-risk surgeries because they were heavily evaluated on the survival rate of their patients (and high-risk surgeries have a higher likelihood of patients not surviving the procedure).
In L&D we have plenty of examples as well (although fortunately not as life-threatening as the ones above).
In an effort to increase the ‘% seats occupied’ in classrooms, I have seen trainers promote their classes far beyond the original target audience. As a consequence, classrooms were filled, yes, but with participants who may not have the right job, level and/or need for the training topic, and who will most likely see far less impact from the training in terms of transfer and application on the job. In essence, employees end up spending valuable time on training that is less relevant (or not relevant at all!) for their job.
In an attempt to increase registration and completion numbers, I have seen L&D people formally push training out to a large audience in the same way as is done with compliance training, almost ‘tricking’ the audience into thinking it was mandatory to complete the training. The registration and completion numbers go up, but again at the cost of employees spending valuable time on training that is less relevant, or not relevant at all, for them.
To lower the cost per delivered learning hour (a common metric used to track the efficiency of L&D), I have seen training owners inflate the learning hours by, for example, having participants read one or more books as prerequisites for a course. I love reading books myself, but find I rarely have the time to read many. If you want participants to arrive with a basic knowledge level, there are far more effective methods available that take a lot less time. And yes, creating solid learning experiences that use employees’ time in the most efficient way requires higher L&D investment and will increase the cost per delivered learning hour.
A final example of misleading metrics in learning is the inflation of target audience numbers to reduce the cost per participant and create a more positive business case. The risk of inflating target audiences is that you will most likely end up with far fewer actual participants, and possibly overspend on solutions.
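The two cost metrics above share the same weakness: both divide a cost by a denominator that a training owner can inflate. A minimal sketch with hypothetical numbers (the figures below are illustrative, not from any real program) shows how padding the denominator makes the metric look better without adding any value:

```python
def cost_per_learning_hour(total_cost, participants, hours_per_participant):
    """Common L&D efficiency metric: cost divided by delivered learning hours."""
    return total_cost / (participants * hours_per_participant)

def cost_per_participant(total_cost, target_audience):
    """Business-case metric: cost divided by the (projected) audience size."""
    return total_cost / target_audience

# Hypothetical course: 50,000 in development and delivery cost, 100 participants.
honest = cost_per_learning_hour(50_000, 100, 8)       # 8 real classroom hours
padded = cost_per_learning_hour(50_000, 100, 8 + 12)  # plus 12 'book reading' hours

print(honest)  # 62.5 per learning hour
print(padded)  # 25.0 per hour: looks 60% cheaper, yet the value delivered is unchanged

# Inflating the projected target audience games the business case the same way:
print(cost_per_participant(50_000, 500))    # 100.0 per projected participant
print(cost_per_participant(50_000, 2_000))  # 25.0, until actual uptake falls short
```

The point is not the formulas themselves but that any ratio metric invites gaming of its denominator, which is why such metrics need to be paired with checks on what was actually delivered.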
The book “The Tyranny of Metrics” actually mentions a lot of examples from our educational system as well, most prominently the widespread application of testing in education, something I also recognize from corporate L&D. When too much emphasis is put on getting positive test results, we are fundamentally encouraging people to learn how to pass the test, rather than to acquire actual new knowledge or skills.
It shows that if you do not carefully select the right metric, or set of metrics and KPIs, you run the risk of encouraging the wrong, or even harmful, behavior.
Measure what matters
Having too many metrics creates information overload and does not allow you to use data and insights to tell a story, or to make better and faster decisions. Having the wrong metrics encourages people to ‘rig’ the system, possibly creating inefficiencies rather than removing them, or even harming the system altogether (imagine the effect that overloading your organization with learning, just because you want to increase your consumption numbers, has on how people perceive the L&D department, or even how they perceive learning altogether).
The key to preventing dashboards from becoming cluttered, meaningless or even harmful is to carefully consider what really matters. What data and insights help you check progress on your strategy? What data helps you tell the right story to your business stakeholders? What data helps you drive action to improve?
It is interesting to notice that many of us are working on defining metrics for learning. Just google “metrics for learning and development” and you’ll see plenty of examples. Even with every organization being different, I can’t help but think there might just be that one ultimate L&D KPI that is meaningful for every L&D organization. No matter what business you are in, no matter what location, no matter whether you’re supporting a thousand employees or several hundreds of thousands. This blog series is about exploring best practices in learning metrics and KPIs, and seeing if we can find one… or even several.
We continue this series with a look at one of L&D’s most controversial topics: learning hours!