Learning data and analytics are essential in any modern L&D department. There is more data available than ever to provide insights into what works and what doesn't, into how L&D is making an impact, and into where there are opportunities to improve.

Being able to correctly interpret data and insights through analytics is rapidly becoming a crucial skill for L&D professionals. Not being able to interpret data, or interpreting it incorrectly, can be rather problematic. In a worst-case scenario, it will cause us to continue doing things that do not work, or are even counterproductive!

The first part of this series addressed how we make conscious and unconscious assumptions during any analytics exercise. This second part is about the use and misuse of statistics by mixing up correlation and causation.

“Lies, damned lies, and statistics”

is a much-used quote in data science and statistics, and for good reason (not a quote from Mark Twain, as is popularly believed, but actually from British prime minister Benjamin Disraeli).

Statistics is a fascinating field that I’ve started to rediscover in the last couple of years. The challenge with statistics is that it is hugely complex and can easily be misused, and therefore it is misused a lot, by accident or on purpose. Now, I am by no means a statistician, so this is not an article about the correct application of statistics in learning & development, but rather a collection of actual and typical ways in which ‘causality’ is claimed. Knowing how these types of claims are used and misused will help you recognize them when you come across them. If you can recognize them, you are more likely to be able to challenge unjustified claims and adjust your interpretation accordingly.

Correlation vs Causation

Mixing up correlation with causation is arguably the most common mistake made with statistics.

Correlation is defined as ‘any statistical relationship‘ between two datasets. For example: your total consumed learning hours are nicely going up month over month this year, and they follow the same trend as your revenue growth. What does this mean and how should you interpret it?

Well, the right answer to this question is always “it depends”. Because correlation means ‘any’ relationship, it could be that the relationship is built on pure coincidence or chance. A much-used method in statistics to test the type of relationship is to calculate the likelihood or chance that your hypothesis is true (or false). So for this example, we could put the following hypothesis to the test: “what is the chance that revenue growth is caused by increased learning hours?” and then let the statisticians do their magic.
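To make this a little more concrete, here is a minimal sketch in Python of what a first, naive look at such a relationship could be. All numbers are made up, and the only point of the example is that even a strong correlation coefficient says nothing about causation.

```python
# A minimal sketch with made-up monthly numbers: quantify how strongly
# learning hours and revenue move together.
from scipy.stats import pearsonr

learning_hours = [1200, 1350, 1400, 1600, 1750, 1900]   # hypothetical monthly totals
revenue_meur   = [10.1, 10.4, 10.9, 11.2, 11.8, 12.3]   # hypothetical monthly revenue (M EUR)

r, p_value = pearsonr(learning_hours, revenue_meur)
print(f"Pearson r = {r:.2f}, p-value = {p_value:.3f}")

# A high r with a low p-value only tells us the two series move together.
# It cannot tell us whether learning drives revenue, revenue drives learning,
# or a third factor (hiring growth, acquisitions, seasonality) drives both.
```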

Then we can see three possible outcomes. The first is that we conclude that we do not have sufficient data to calculate the chance. This will be the most likely (or even the only viable) outcome if you only have these two datasets. The reason is that revenue growth depends on many different factors and each of them has to be brought into the analysis. It could simply be that both the growth in learning hours and the growth in revenue are caused by a number of mergers and acquisitions that brought new employees (generating more learning hours), new markets and new customers (generating the growth). It could also be that the factors contributing to growth are so diverse and complex that statisticians cannot find a model that can handle the complexity. Or that there is simply not enough good, usable data available.

This is what makes the ‘learning impact’ discussion such an interesting one. Truly determining the impact of learning investments on aspects like revenue or profit is all but impossible unless you have serious data at your disposal. People and companies who claim to have calculated learning ROI most often have not actually done so, and instead present different metrics and data to represent “impact” (I’ll look into this a bit more in part 3: using irrelevant data). If you know of an example that disproves my hypothesis, please share, because I would love to have a look at it!

The second possible outcome, in one of the rare cases where you do have sufficient data, is that the analysis shows the chance is very low. Most likely we would then reject the hypothesis and move on to the next one.

The final option, again provided that you have the right data available to do the analysis, is that the chance is high. So high, in fact, that you will be able to convince others that there is a causal relationship between increased learning hours and increased revenue. You will win the award for “best CLO” and can look forward to many invitations for keynotes, because everybody (including me!) will want to know how you did this.

“Correlation does not imply causation”

What is really concerning is that in so many cases correlation is presented in such a way that it strongly suggests causation. Or even worse, it is presented as a de facto causation, which it, in most cases, actually is not.

You might remember the outrageous claim I made in part I of this series on assumptions? The claim that ice-cream sales cause drownings? Well… the chart on drowning deaths and ice-cream sales from Spain (where I am writing this post) provides ultimate proof that this is indeed the case: as soon as ice-cream sales start to go up, the number of drownings starts to follow. A very clear positive correlation. Any policy maker should immediately, upon seeing this data, decide to ban ice-cream sales in order to save lives. Now, this sounds somewhat ridiculous. And it is. There is no connection between the two. They simply both depend on a third element, which is the weather. As soon as the weather improves in spring, ice-cream sales go up, and more people will be swimming in the sea (a little later, to allow the sea water to warm up a bit). More people swimming in the sea means more drownings.
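To show how easily such a pattern appears, here is a minimal simulation sketch in Python with purely synthetic data: temperature drives both ice-cream sales and drownings, and the two end up clearly correlated even though neither has any direct effect on the other.

```python
# A minimal sketch with synthetic data: a common cause (temperature)
# produces a spurious correlation between ice-cream sales and drownings.
import numpy as np

rng = np.random.default_rng(42)
temperature = rng.uniform(10, 35, size=365)                    # daily temperature (°C)
ice_cream   = 50 + 10 * temperature + rng.normal(0, 30, 365)   # sales depend on temperature only
drownings   = rng.poisson(0.1 * temperature)                   # drownings depend on temperature only

r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"corr(ice-cream sales, drownings) = {r:.2f}")

# The correlation is clearly positive, yet neither variable appears in the
# equation of the other; the weather is the third factor driving both.
```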

While this example of “correlation does not imply causation” is fairly straightforward, there are unfortunately far too many examples in every part of life (science, politics, media, etc.) where people blindly conclude that a correlation does imply causation. Below are a few additional examples from a TED Talk at my old university:

In learning, like in any other discipline, we have the same challenge of ‘jumping to conclusions’ and being overly eager to treat correlation as causation. A few examples:

The relationship between A and B is coincidental

The most common mistake is claiming that there is a relationship between two different factors, while in reality there is none and it really is coincidence. Personally I think this is the worst example of ‘jumping to conclusions’, but the good news is that, even with a bit of data exploration, you can validate and, if required, correct this. An elaborate example:

The impact of a Learning Campaign: Learning campaigns are great ways to promote learning and curiosity among employees, and they are really becoming popular these days. Using both online and offline messages and ‘advertising’ throughout the organization on the importance of continuous education and the great opportunities available for employees is a great way to create more attention, more consumption and accelerated skills development. So campaign management teams are pulled together and a fantastic program is developed for the month of April. The question then is: “Did the campaign cause an increase in learning activity?” In my quest for the ultimate learning KPI, I already dove deeper into what you should measure, and part 2 of that quest is on the fascinating topic of learning hours. So for this example I will look at learning hours as a representative metric for increased learning during the campaign month of April. We’ll start by looking at the total learning hours recorded per month.

Hooray! It looks like the campaign has worked! April is by far the month with the most learning hours. So, job well done!

But are we congratulating ourselves a little too soon? Let’s bring in some more data. Luckily we’ve been keeping track of our campaign data as well. So if we add a metric called ‘campaign activity’ (maybe the volume of messages or views, or a combination of multiple activities) to the chart, using a different scale for the y-axis, we get something like the next chart:

We can still clearly see that most of the campaign activity exactly matches the month with the highest learning hours. That would be reason to support our claim that “our learning campaign has led to more learning hours”. You would say so, right?

Even though it seems very tempting to agree, it would be worthwhile to dig into the data just a little bit more.

As L&D professionals, we still have a tendency to push training to people. We like to be in control and define what is right for people given their job in the organization. In my previous article on data interpretation in L&D, I blamed the school/educational system, but somehow I can’t help thinking that most of us do this because it is by far the easiest marketing strategy, with the highest guaranteed consumption and the subsequently highest volume of learning hours (that is one of the reasons why you should never look at learning hours in isolation!).

This is one of the reasons why push vs pull learning is always one of the first checks I do in any investigation or analysis. When we perform this check in this example, the chart above tells us a very interesting story. It shows that the increased learning hours in April are mainly due to an increased volume of push learning (or assigned learning, as I refer to it). This single chart completely changes the story, because we can hardly claim that the completion of mandatory learning hours is thanks to a campaign on learning and curiosity, right? Upon further investigation and validation of the (still imaginary) data, it proved to be the case that rather a lot of mandatory training was launched early in the year. Some of it consisted of programs postponed from the previous year (they should have been launched in Nov/Dec but got delayed because the SMEs were not available). And as the standard due date in this company is set at 90 days… it caused a surge of completions in April.

A better chart to look at, then, is one where we take mandatory training out of the equation:

This shows a completely different picture, where April’s consumption of learning hours is actually the lowest of any month in the year.
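For those who want to run this check on their own data, here is a minimal sketch in Python/pandas. The file name and column names (month, hours, assigned) are assumptions about what an LMS export might look like, not a reference to any specific system.

```python
# A minimal sketch, assuming an LMS export with columns
# 'month', 'hours' and 'assigned' (True for push/mandatory learning).
# File and column names are hypothetical; adapt them to your own data.
import pandas as pd

records = pd.read_csv("learning_records.csv")

# Total hours per month (the chart that made April look so good)
total_per_month = records.groupby("month")["hours"].sum()

# The same hours split into push (assigned) vs pull (self-directed) learning
push_vs_pull = records.pivot_table(index="month", columns="assigned",
                                   values="hours", aggfunc="sum")

# And the chart with mandatory training taken out of the equation
self_directed = records[~records["assigned"]].groupby("month")["hours"].sum()

print(total_per_month, push_vs_pull, self_directed, sep="\n\n")
```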

Now, remember: this is just an example and a hypothetical situation, although you might recognize some of the things I mentioned. But it is an example that illustrates that we should not jump to conclusions. Even the last chart does not provide conclusive evidence of whether the campaign was successful or not. Maybe the programs marketed during the campaign were recorded in a different system, or recorded incorrectly, or not recorded at all. All good things to check when you do a thorough analysis to try to figure out correlation and causation.

The relationship between A and B is reversed

Reverse causality is the mistake we make when we think A leads to B, while in fact B is causing A.

As always….Dilbert is right!

Plenty of examples of reverse causality exist in research (and yes, Dilbert is right). But taking it closer to home, a typical example would be a study on high performers and learning. There are typically two claims being made that could easily fall victim to reverse causality: first, that high-performing people are active learners, and second, that high-performing companies invest a lot in learning. In essence they both make the same claim, which is that learning causes high performance. The only difference is the aggregation level; you can make this claim for individuals, teams and entire organizations.

Various organizations and consultancies have done serious studies on the relationship between high-performing companies and learning. There’s Bersin, McKinsey, Deloitte. Big names for certain. And first, I do not claim that these studies are false. I simply don’t know enough about how they reached their conclusions, what models they used, what data they had and what assumptions they made. I also do not deny that these studies do not contain useful information (yes, that is a double negative… also interesting when it comes to data-driven storytelling!), insights and models. I think they do, and I use a lot of them to sharpen my thinking. What I can say is that, given the complexity of proving causation and the many times we get causality wrong, these studies should by no means be interpreted as a recipe based on the understanding that more investment in learning is the cause of business performance improvement. That claim would be inaccurate. A similar trend exists on learning hours. Will more learning hours lead to improved business performance? Anybody who presents this as a fact would need serious research and data to back it up. If you’re one of these people with sufficient research and data to do so, I would very much like to get in touch and look at your work!

There is a real possibility that various claims that more learning (investment) leads to improved business performance suffer from reverse causality: maybe outperforming companies simply have the luxury of investing more money and time in learning, and that is why they invest more. Not the other way around. Imagine a company that is market leader, has high margins and a full innovation funnel. Processes are efficient and streamlined, management is world class and employees are happy. It’s very likely that this company invests more money and time in learning than a company that is working on very thin margins in an extremely competitive environment. For the second company to think that if it invests as much in learning as the first company, it can become market leader as well, would be a serious strategic risk!

The same goes for individuals. A while back I was part of an experimental study to establish the relationship between high-performing individuals and their learning activities. The expected (or should I say desired) outcome of the study was that high-performing individuals spend significantly more time on learning compared to non-high-performing employees. We even wanted to go beyond that and see what type of training titles these high-performing employees were taking, to provide inspiration for others. The general idea being: “Look, our best people are taking these courses, maybe you should take them as well so you can become the best”. In the end the study provided inconclusive evidence for either (so no significant difference in volume and no significant similarity in titles). And that was, in hindsight, a good thing. Otherwise, conclusions that people would draw, like “more learning leads to better individual performance” or “taking these training titles leads to better individual performance”, would have had a very high likelihood of falling victim to reverse causality. It could be equally true to say that high-performing individuals have more time and mental capacity to learn new knowledge and skills on top of their everyday work, compared to employees who put all their energy into trying to achieve their performance goals.

Again, these are hypothetical examples. And I’m NOT claiming that the statement “high-performing individuals have more time and mental capacity to learn new knowledge and skills” is true. What I’m trying to explain is that, again, we should not jump to conclusions. Before you draw conclusions on causality, it is highly recommended to at least seriously check whether there is a reason to suspect reverse causality!

The relationship between A and B links to a third factor that drives both

Third factor causality means that A is not causing B, but there is a third factor C that is causing both A and B.

A relevant example of this in L&D could be leadership development. Suppose you’re studying the impact of leadership programs, and you look at data on participation in leadership development programs versus 180-degree feedback results where employees rate their manager. Your analytics show that people who participate in leadership development programs score on average 1 point higher on a 5-point evaluation scale. Eureka! We have found evidence that leadership programs cause higher leadership evaluation scores, and thus we have demonstrated that the LD programs have a positive ROI!

Or have you?

You could be right. But I would recommend doing a bit more research, as chances are that there is a third element that drives both a person’s interest in leadership development programs and high manager evaluation scores: for example, an intrinsic interest in people and people management (possibly even people development).

What if you make the claim “participation in leadership development results in higher manager evaluation scores”, while managers who get lower scores simply do not participate in LD programs because of their lack of interest in people management altogether?
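One simple sanity check is to compare evaluation scores within groups that have a similar level of interest in people management. Below is a minimal sketch in Python/pandas; the file and column names (participated, eval_score, people_interest) are purely hypothetical. If the score gap shrinks once you group by the suspected third factor, that factor is probably doing a lot of the work.

```python
# A minimal sketch with hypothetical file and column names: check whether a
# suspected third factor (interest in people management) explains the gap.
import pandas as pd

managers = pd.read_csv("manager_data.csv")
# Assumed columns: participated (bool), eval_score (1-5),
#                  people_interest ('low', 'medium', 'high')

# Naive comparison: participants vs non-participants
naive_gap = managers.groupby("participated")["eval_score"].mean()

# Stratified comparison: the same gap within each level of interest
stratified = managers.groupby(["people_interest", "participated"])["eval_score"].mean()

print(naive_gap, stratified, sep="\n\n")
# If the gap largely disappears within the strata, the third factor is a
# more plausible explanation than the program itself.
```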

Again, as with the previous examples, I am not saying that claims like “leadership programs lead to improved leadership evaluation scores” are false. Simply that we should be very careful when claiming they are true!

The relationship between A and B is bi-directional

Bidirectional causation is the case where A causes B, but B also causes A. So to a certain extent they reinforce each other.

The example used in the reverse causality case, “higher investments in learning lead to higher performance of the organization”, could in fact also be a case of bidirectional causation. It’s not hard to imagine high-performing organizations having a high level of learner engagement and activity, which in turn leads to even better performance, after which more funds and time are freed up to stimulate more learning…

So….what should I do now?

I can’t emphasize enough that I do not claim that typical statements about impact, efficiency, etc. made in the learning and development domain by companies, researchers and consultants are false. And even in cases where I am somewhat doubtful and feel a claim is exaggerated, I do not have the data to validate and check whether the claim they make is true.

However, in today’s world, where data and statistics play such an important role in decision making, I do think we should be better capable of separating fact from fiction. Just being aware of the correlation-causation fallacy will, I think, already be a big step. All around us, in marketing, politics and even in science, we see data and statistics being misused (on purpose or by accident, we might never know) and there is no denying that the same thing happens in L&D. So here are my two big recommendations for today:

First, if you are in a position where you need to make a decision, whether it is a decision about buying a new tool, investing in a new program, or even a decision on stopping or continuing an existing program, and you’re presented with data and statistics that claim any causal relationship, treat these claims of causality with a healthy dose of skepticism (in the classical sense of the word, skepticism being a “questioning attitude“) and ask questions. Ask to see evidence that the causal relationship is true, and how they know. Ask them whether they have checked and validated that they haven’t fallen victim to any of the above mistakes when defining a cause-and-effect relationship.

Secondly, if you are in a position where you are asked to provide evidence of a causal relationship:

  • Be aware of the above mistakes, so you do not make them
  • Be humble in your claims and don’t jump to conclusions. Even if no causality can be proven, it does not mean that your analysis is a waste of time. Any insight is valuable and with the right support, contextual knowledge and intuition, the right decisions can still be made.
  • Be transparent about what you can achieve given the data, time, resources and expertise available. If a case is really critical and you need more time, ask for it! If the question or hypothesis is important enough, you should get that extra time…
  • When in doubt yourself, and again if the case has a high priority, you can consider bringing in a top-notch statistician who can help do a very solid analysis of the chance that a certain hypothesis, “Do A and B have a causal relationship?”, is true. A minimal sketch of what such a test could look like follows below.
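As an illustration of the kind of analysis a statistician might start with, here is a minimal permutation-test sketch in Python with made-up numbers. It estimates how likely a correlation at least as strong as the observed one would be under pure chance, which is still a statement about correlation only, not about causation.

```python
# A minimal permutation-test sketch (made-up data): how likely is a
# correlation this strong if the two series were actually unrelated?
import numpy as np

rng = np.random.default_rng(0)
learning_hours = np.array([1200, 1350, 1400, 1600, 1750, 1900,
                           1850, 2000, 2100, 2050, 2200, 2300])  # hypothetical
revenue        = np.array([10.1, 10.4, 10.9, 11.2, 11.8, 12.3,
                           12.1, 12.6, 13.0, 12.9, 13.4, 13.8])  # hypothetical

observed = np.corrcoef(learning_hours, revenue)[0, 1]
shuffled = [np.corrcoef(rng.permutation(learning_hours), revenue)[0, 1]
            for _ in range(10_000)]
p_value = np.mean(np.abs(shuffled) >= abs(observed))

print(f"observed r = {observed:.2f}, permutation p-value = {p_value:.4f}")
# Even a tiny p-value only says the correlation is unlikely to be pure
# chance; it says nothing about which variable causes which.
```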

Summary

There’s a reason that learning analytics is not easy, but it is a lot of fun! It’s almost like being a detective: trying to find clues on what is going on and retrieving evidence for what you have found. Unfortunately, you also see a lot of false evidence being brought forward, and if you use data and insights for decision making, it’s essential that you are aware of the most common mistakes leading to false evidence.

If you’re in learning analytics yourself, it’s key to understand that providing correct evidence for causal relationships is not easy. It takes serious data, and you will find yourself in situations where there is simply not sufficient data to proceed. It takes serious analytics, and even then you might not find a correlation.

So we need to be prepared to face situations where:

1. Insufficient data: You have insufficient data to be able to draw any conclusion at all. Insufficient data could take the form of a low number of participants, or having just a handful of training records available. It could also be that you have no data available on crucial elements of your analysis because you have not designed your programs or systems in such a way that they capture the data you need (this actually happens a lot… that is why it’s so important to consider data at the start of your design!)

2. No Correlation: When you feel you have the right data in terms of volume and scope, you should naturally assume that there is no correlation in your data and try to prove that there actually is. An example is where we tried to identify common training among high-performing employees. It turned out there was no correlation. The training that high-performing employees participated in was too diverse, and much of it was push training anyway.

3. Correlation (no Causation): As mentioned above, most relationships are correlations, not causations. Finding a correlation can be valuable. A positive correlation can certainly indicate that there might be causation and is worthwhile to research further! But even without going into a deep causal analysis, a correlation can provide an indication. A signalling function. In SLT we’ve been developing a learning funnel model around engagement that aims to find correlations between people using your learning tools and content and the skills they build. The funnel in that model acts as a ‘flag’, ‘warning’ or ‘indicator’, based on the assumption that if more people visit your learning tools, they will likely consume more content and build more skills. Check the article for more details on the funnel!

Representing what you will face in learning analytics: in most cases you will not have sufficient data. In the cases where you do have sufficient data, most will show no correlation. And if you have found a correlation, only very few will turn out to be a causation…

4. Causation: Causal relationships do exist! I try to exercise at least 3x per week. And when I am done, I feel physically tired, having spent a lot of energy. I also feel like I have achieved something and feel mentally refreshed. I’m pretty sure that I do not need scientific research to conclude that the physical and mental change is caused by the exercise. Proving causal relationships is actually pretty simple in situations with only a few variables, i.e. when not much else is happening during exercise that could be the cause of my physical and mental change other than the exercise itself. However, when the number of variables and potential causes increases, the statistics rise dramatically in complexity.

That is the interesting challenge with finding causal relationships between learning & development investments and business performance. Business performance depends on many variables, and many of them are not being properly tracked or analyzed. Many of them are actually very difficult to track or analyze in the first place! That does not mean it is impossible. But it does mean that it’s not easy. Without building an L&D architecture that is fully focused on demonstrable business impact, you will not even get close, and at best you will only be able to identify correlations. This is not necessarily a bad thing, as sometimes being able to demonstrate a correlation is actually sufficient, as long as everybody in the room understands that correlation does not equal causation. To that end, I’ll share a few bonus examples Marc Ramos shared with me not long ago, to help you and the people you are working with understand the potential fallacy of correlations.

The next part of this series will be on a ‘lighter’ topic: the use of visualizations. Or better, how data visualizations are used to tell a story that can be very, very subjective. I will demonstrate how, with the exact same data, you can actually tell very different stories…

