Skills is arguably the single hottest topic in HR right now. Everybody is talking about skills.
Some of these reports are very useful, and so are some of the solutions. But a lot of what is written and offered in the marketplace is very thin and at best addresses only part of the skills challenge.
For me the skills challenge is about three elements:
- What skills do we need?
- What skills do we have?
- How do we close the gap?
Each of these presents its own challenges, and each of these is very much a data & analytics play (“skills analytics”).
For the first challenge, “What skills do we need?”, I would refer to the data and analytics challenges around identifying ‘future skills’ in part I of this series on data interpretation, as much of the research in this area rests on assumptions that cannot always be properly validated.
The third challenge, “How do we close the gap?”, is more of a learning-science and instructional-design question, which is not really my area of expertise, although I might share my thinking on the importance of data in closing the skill gap later in the year.
This article, however, focuses on the middle challenge: what skills do we have? It is arguably the one that offers the most interesting data and analytics challenges and opportunities!
Note that I’m not going into the details and discussions of what skills are, or how a skill assessment is defined. The wiki definitions work pretty well. More importantly, the principles and suggestions shared in this post hold for whatever definition of a skill you use, and for whatever your understanding and interpretation of skill assessments is.
Why a skill assessment model?
Whatever definition of skills and skill assessment you decide to adopt, you will face challenges when trying to understand what skills people currently have: the opportunity to measure, the level of depth you can measure, different methods, different contexts and many more.
If you do not use a solid model to organize and orchestrate your skill assessments, you will end up with a potpourri of data points that will not allow you to easily draw the right conclusions and take the right actions. Worse, you might be inclined to draw the wrong conclusions and take the wrong actions, as I explained earlier in the examples around data interpretation (link).
Some of the errors you could make:
Over- and underestimating people’s skills. This causes people to be suggested for roles that are still ‘a bridge too far’, or to receive recommendations for advanced training opportunities before they are ready for them (and subsequently waste their precious time). Or, naturally, the other way around: employees getting roles that do not provide sufficient challenge and growth opportunities, or learning recommendations that equally fail to provide any challenge and upskilling.
Making incorrect investment decisions. Related to the above, but at a more aggregated level, is the risk of overestimating the organizational level of strategic skills. If leadership is convinced that the organization does not need upskilling in strategic areas like data science and digital marketing & sales because the data shows the required skills are there, they will most likely decide to invest in other skill areas. If it then turns out that these skills were overestimated and are not really there, the organization will have to catch up quickly to realize its strategic goals. That always comes with more disruption and definitely with higher costs. It will also take significantly more time to reach the organizational goals; time that your competitors may have used to upskill in data science and digital marketing, allowing them to get ahead of you.
The same goes for underestimating skills at the organizational level. You might decide to invest heavily in three strategic areas of upskilling, only to realize you already have these skills available and only a few employees take part in the upskilling programs.
There is a set of ‘rules’ that I use frequently and that can help you avoid the above mistakes and mitigate the associated risks. These three rules are by no means ‘the definitive rules’, nor do they offer any guarantee that you will always make the right decisions. But they will definitely reduce the chances of making the wrong ones. The first rule is frequently overlooked and hardly used, yet it is essential when considering a model, let’s call it a maturity model, for assessing what skills are available in your organization: recording the method of assessment.
Rule 1: Always record the method of assessment
The first rule is about being fully transparent and clear on the method of assessment. In principle no skill assessment method is wrong, but assessment methods differ widely in reliability, accuracy, cost and scalability. If you want to draw accurate and correct conclusions from skill assessment data, you must include the method of data gathering itself in the dataset, so you can bring those differences in reliability and accuracy into your insights and conclusions.
For example, if you are recruiting a top-level expert or senior leader, chances are that you will put the candidates through a thorough assessment, no doubt organized, delivered, credentialed and validated by a well-known brand in the industry. This is almost the top end of the skill assessment spectrum (I’ll explain the real top end in a bit). Given the investment in these assessments, the quality and the credentialing, it is pretty safe to say that these types of skill assessments are accurate and trustworthy. However, they are also expensive and time-consuming: not easy to scale and too costly to apply to your whole workforce.
The most basic skill assessment is self-evaluation, where employees rate their own skills and levels. While this type of assessment is cheap and easy to scale, its accuracy and reliability are very questionable, as explained in the examples in the article on data interpretation (link). Data from self-assessments has such a low level of reliability that in many other fields, like marketing, this type of data source is largely ignored.
It would be a mistake to treat both data sources equally. Each has its value, but the first goes deep and accurate for a small number of people, while the second stays superficial at large scale.
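To make the method part of the dataset, the record layout can be as simple as the sketch below. This is a minimal illustration, not a standard: the field names, method labels and example values are all invented.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record layout for a single skill assessment data point.
@dataclass
class SkillAssessment:
    employee_id: str
    skill: str                      # label from your chosen taxonomy
    level: int                      # proficiency on your chosen scale
    method: str                     # e.g. "self", "peer", "manager", "external_certified"
    assessed_on: date               # always record when the assessment took place
    assessor: Optional[str] = None  # who performed the assessment, if anyone

records = [
    SkillAssessment("E001", "Statistics", 2, "self", date(2023, 5, 1)),
    SkillAssessment("E001", "Statistics", 3, "external_certified",
                    date(2023, 6, 12), assessor="UniX"),
]

# Because the method travels with every record, downstream analysis can
# group and weight the ratings by how reliable each method is.
by_method: dict = {}
for r in records:
    by_method.setdefault(r.method, []).append(r.level)
```

The point of the sketch is only that `method` (and `assessed_on`) are first-class columns, so no conclusion ever has to be drawn without knowing how the data was gathered.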
A ‘Skill Assessment Maturity Model’
When considering the different methods of assessment, and each method’s level of accuracy and reliability, it is almost natural to arrive at a ‘skill assessment maturity model’.
The lowest level of skill assessment in the model is the already mentioned self-assessment. This method simply asks employees to rate their level of proficiency for a skill. It is possibly the most frequently used method right now, because it is a solid starting point for any skill assessment initiative and is very easy to build and use. The data you get from these assessments is very useful for initial exploration and investigation. However, getting accurate and correct data from any such self-assessment rests on quite a few assumptions, including that participants:
- Equally understand the meaning of each skill in the list
- Equally understand the meaning of each proficiency level
- Equally understand the characteristics of every skill-proficiency combination
- Have a very accurate understanding of their own skills and proficiency level
- Are fully honest when taking the assessment
As long as you are not able to validate the above assumptions, the quality and reliability of the data will be rather poor.
The next step up the maturity ladder is the peer assessment: asking your peers to do a similar rating for you. This provides an additional angle on top of your self-assessment. A different angle does not automatically qualify as a better one, since the same assumptions listed for self-assessments hold true for peer assessments. A sixth could even be added, based on your relationship with your peers: a good relationship could result in overestimation, a strained relationship in underestimation. When you have both self- and peer-assessment datasets available, however, you can compare the results and identify instances where the peer ratings differ strongly from the self-assessments, in which case you could flag those records, or even ignore them as unreliable.
The next step from the peer assessment is the manager assessment. In many cases a manager could (should?) have similar skills to yours, albeit at a higher proficiency level, which would qualify the manager as a person who could rate you. Alternatively, a manager may be trained to assess people on skills the manager does not have. As this is again an assessment heavily based on human judgment, we need to consider the same assumptions and pitfalls as for self- and peer assessments.
But with manager assessments we have yet another source of data that can help us understand whether these people-based skill assessments have produced reliable outcomes. If you put the results of the self, peer and manager assessments together, you can create insights like the one shown below. A person has done a self, peer and manager assessment on five skills, with somewhat different results. “Empathy” and “Machine Learning” are rated equally across all three methods, making that data much more reliable than the rest. “Statistics” is rated the same by the person and the manager, with only the peer assessment differing, suggesting that the level 2 indicated by the manager and the person themselves is probably reliable; certainly more so than “Active listening”, where the person, peers and manager all provided different ratings.
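The triangulation described above can be sketched in a few lines. The skills mirror the example in the text, but the rating values themselves are invented for illustration.

```python
# Toy ratings for one person across five skills (levels are invented).
ratings = {
    "Empathy":          {"self": 3, "peer": 3, "manager": 3},
    "Machine Learning": {"self": 2, "peer": 2, "manager": 2},
    "Statistics":       {"self": 2, "peer": 4, "manager": 2},
    "Active listening": {"self": 4, "peer": 2, "manager": 3},
    "Communication":    {"self": 3, "peer": 3, "manager": 4},
}

def reliability_flag(skill_ratings: dict) -> str:
    """Classify a skill's ratings by how much the three methods agree."""
    distinct = len(set(skill_ratings.values()))
    if distinct == 1:
        return "agreement"    # all three methods agree: most reliable
    if distinct == 2:
        return "majority"     # two of three agree: lean on the majority value
    return "unreliable"       # all three differ: flag for review, or ignore

flags = {skill: reliability_flag(r) for skill, r in ratings.items()}
```

A real pipeline would of course weight by method reliability rather than treat a simple vote as truth, but even this crude flagging separates records you can trust from ones you should question.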
Stepping away from the ‘slicer’ methods based on personal interpretations of a skill level, we now move to the more reliable and sophisticated methods of skill assessment.
The first group in this category is formative and summative assessments. These are assessments taken during or after a learning experience, widely used in education to evaluate a student’s performance during and after a course, module or curriculum. In corporate learning, both formative and summative testing are widely used, by building questions into the course (formative) and having a final quiz or exam at the end (summative). Most of these tests in corporate L&D are limited to testing awareness or knowledge of a certain topic (a well-known example is the infamous quiz at the end of every compliance elearning: if you do not answer all questions correctly, you cannot complete), but it is perfectly feasible to perform these types of assessment at higher proficiency levels too. A nice example is codecademy.com, where you’ll find challenges to test yourself. In these challenges you need to solve something, actually demonstrating that you have the right knowledge and skills to do so, and each challenge comes with a proficiency level (intermediate or advanced). When designed well (and that is no small thing: it is really hard to design an accurate, high-quality assessment), these assessments link directly to tasks and activities expected in the actual work environment, and thus help predict whether you have the right knowledge and skills to perform those tasks and activities.
What is key for this maturity model is that the assessment is based on a simulated, designed environment. From the data-model perspective it does not really matter whether the assessment takes place during or after a learning experience; you should always record the assessment date, whatever the method. That is why I group formative and summative assessments together in a single ‘level’.
A second important aspect is that I would distinguish between certified/accredited assessments and non-certified/accredited assessments, with certified and accredited assessments being the more reliable and accurate of the two, depending obviously on the organization, institute or company administering the assessment. The best examples of very reliable accredited assessment organizations are educational institutes like universities, but there are plenty of commercial ones in different industries. In HR and L&D, for example, you have the ATD Master series and several CIPD qualifications. Some companies have very advanced and mature assessment capabilities that rival or even outperform commercial ones, so sometimes it will be desirable to distinguish between internal and external assessments as well.
However, it is important to note that most (if not all) of these assessments are still designed, and therefore based on a specific interpretation of what good looks like. This is why recording the method, assessor and assessment institute is vital for deep and thorough analysis of the quality and reliability of these designed assessments. In general, however, more formal assessments (especially those that are audited or accredited) can be expected to be more reliable than the ‘slicer’-based assessments.
The final step in understanding a person’s skill and skill level is to look at that person’s behavior in the workplace, i.e. while executing their tasks. I refer to this as performance-based assessment. In its simplest form this is a neutral observer (ideally trained and certified) who tracks the performance of a person during actual work activities, in the actual work environment and context, and rates skills and skill levels based on the activity outcomes. This practice is especially common in highly regulated environments. I have worked with a global oil and energy company that had a large resource pool of certified assessors who ensured, through observation, that people in specific roles had the right skills to perform their tasks accurately and on time. Internal and external audits are also nice examples of this type of performance-based assessment.
The added value of performance-based assessments over formative or summative assessments is that they take place in the flow of work, not in a simulated environment, and therefore provide much more reliable data. However well your simulation is designed (and thinking of VR… simulations can be pretty impressive these days!), it is never as ‘good’ as the real world. In theory you can pass every summative assessment on ‘bad news conversations’ in designed role-play activities with the best actors in the world; that does not guarantee you will handle your first real bad-news conversation like a pro (the 2009 movie “Up in the Air” with George Clooney and Anna Kendrick has a nice scene on ‘simulated’ vs ‘real life’). Real work always beats examples.
The challenge for performance-based assessments is scalability. No company can afford these assessments for all skills and all employees, so you will have to decide which people and which skills are critical, important and valuable enough to warrant this type of assessment.
Data-driven performance-based assessments
This is where the potential of data comes in… big data. Performance-based testing by human observation is expensive and not scalable. Unless you are operating with a really high margin, it is not something companies can afford beyond a select group of top experts and/or leaders, or in areas where the risk of a lack of (recorded) skill and competence is simply too high.
But as most of us now work in a digital environment, and the digital environment records all our actions, we have the potential to analyze this ‘business’ data to establish more objective, more accurate and more reliable skills data without needing to invest in a large number of human assessors.
Using actual performance data will not just enable us to create more objective, accurate and reliable skill assessments; it will also enable us to do so on an ongoing basis, making it possible to track changes over time.
This does require a careful and accurate mapping of business activities and actions: carefully considering which activities require which skills, and which outcomes relate to which level of skill.
A fairly simple skill like ‘Excel’ (funnily enough, Excel is in many cases the most popular search term in LMS and LXP systems!) could be linked to a large variety of tasks: creating a table, formatting a table, entering formulas, creating pivot tables, creating visuals, building macros… I have seen many extremely complex Excel files that required very advanced Excel skills (and a lot of patience!) to design and build. I still frequently use Excel to create handy, easy-to-use ‘databooks’ that, in the absence of a dashboarding tool, help L&D professionals access data and simple insights. In an ideal situation, an Excel guru lists the most common actions in Excel and rates each of them in terms of complexity. An Excel skill level could then be defined by looking at a person’s speed and accuracy in executing these actions. The combination of task complexity, speed and accuracy would then give you an Excel skill level. Not an easy exercise, and as your tasks become more complex, the mapping and the data required become more complex as well. But it can be done!
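A minimal sketch of the complexity + speed + accuracy idea, assuming a hypothetical task catalogue with expert-rated complexity scores. The task names, weights and level thresholds are all invented; the point is only the shape of the calculation.

```python
# Hypothetical catalogue: an "Excel guru" rated each action's complexity (1-5).
TASK_COMPLEXITY = {"create_table": 1, "pivot_table": 3, "build_macro": 5}

def task_score(task: str, seconds: float, accuracy: float,
               expected_seconds: float) -> float:
    """Score one observed task: complexity weighted by accuracy and speed."""
    # Faster than expected raises the score; the bonus is capped at 1.5x.
    speed_factor = min(expected_seconds / max(seconds, 1.0), 1.5)
    return TASK_COMPLEXITY[task] * accuracy * speed_factor

# Observed (task, seconds taken, accuracy 0..1, expected seconds) tuples.
observed = [
    ("create_table", 30, 1.00, 40),
    ("pivot_table", 200, 0.90, 180),
    ("build_macro", 900, 0.70, 600),
]

scores = [task_score(t, s, a, e) for t, s, a, e in observed]
# Collapse the task scores onto a 1-5 proficiency level (thresholds invented).
avg = sum(scores) / len(scores)
level = min(5, max(1, round(avg)))
```

In practice the hard work is the catalogue itself: agreeing complexity ratings and expected durations per task is exactly the mapping exercise the text describes.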
When you have solid formative and summative assessments in place, or better still, observation-based performance assessments, you have already made the crucial mapping from task/activity to skill and skill level. In that situation it “only” requires a mapping from the task/activity to the data it generates.
An example: Microsoft Viva provides advanced analytics that help create workplace insights based on real data from Outlook, Teams and SharePoint. The list of metrics is available online and includes many items related to communication, time management and collaboration. If you have this data available, you could start to map some of these metrics to tasks and skills, and the data to different skill levels. The skill of ‘time management’ (which, like Excel, always appears in the top five most popular search terms) could for example be assessed using the metric ‘% of low-quality meeting hours compared to total working hours’, on the assumption that people who attend a lot of low-quality meetings do not manage their time as well as people with fewer or no low-quality meetings. In fact, the same metric could also be mapped to the skill of ‘prioritization’, which shows the potential complexity of this type of mapping exercise.
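A sketch of what such a metric-to-skill mapping might look like. To be clear about the assumptions: the thresholds, the function and the link from this metric to these two skills are illustrative guesses that would need validating against your own data; none of this is a Viva feature.

```python
# Hypothetical mapping: share of low-quality meeting hours -> proficiency level.
# Thresholds are invented and would need to be calibrated per organization.
def time_management_level(low_quality_meeting_share: float) -> int:
    if low_quality_meeting_share < 0.05:
        return 4  # advanced: almost no time lost in low-quality meetings
    if low_quality_meeting_share < 0.15:
        return 3
    if low_quality_meeting_share < 0.30:
        return 2
    return 1      # basic: a large share of time in low-quality meetings

# One metric can feed several skills, which is where the mapping gets complex.
SKILLS_FOR_METRIC = {
    "low_quality_meeting_share": ["Time management", "Prioritization"],
}
```

The one-metric-to-many-skills dictionary is the crux: as soon as a metric maps to more than one skill, you need a rule for how much evidence it contributes to each.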
What about learning?
You might have noticed that I have not mentioned learning, or the completion of learning experiences, at all in the above model. The reason is that a learning experience could involve any of the above assessment methods, or a combination of them.
A traditional test or quiz during or after a learning experience can be labeled a “non-certified/accredited formative & summative assessment”. A formal leadership assessment administered by one of the many commercial organizations (not a free ‘self-assessment’!) at the start of a six-month advanced leadership program could be labeled a “certified/accredited (formative & summative) assessment”.
Very often people assume that the ‘mere’ completion of a learning experience (course, video, article, virtual reality event, you name it!) is sufficient to conclude that people are building skills. But that is a very dangerous assumption. In SLT we are currently developing an advanced skills data model that will allow a more nuanced and realistic way to build insights on this assumption. When it is ready, we will naturally share the model. For now it is sufficient to say that ‘completing learning = building skills’ is not a solid basis for assessing what skills you have in your organization.
Rule 2: Always bring all assessment data into a single repository
I am always amazed how much of our data still sits in silos, not just in HR but everywhere. While we all recognize that bringing data together creates huge advantages and opportunities, we hardly ever build ecosystems in a way that allows easy data sharing. My recommendation regarding skill assessment data (and all other skill-related data, for that matter) is to bring it all together in a single data repository, ideally one that is independent of your HR applications (I’ll explain why another time).
The key reasons for my recommendation to bring all skill assessment data into a single repository are the following:
1. Siloed skill assessment data has limited use for analytics and insights, because the reliability of isolated data is always lower than that of analytics and insights derived from a multitude of assessment methods, as explained when discussing rule 1 above.
2. Considering that the main customer for skills and learning analytics is the employee (more on that later!), and that we also want to provide personalized insights into skill levels and upskilling opportunities to each of them, it makes very little sense to have skill assessment data spread across your systems. That is a recipe for a very poor employee experience:
Imagine that a new hire goes to your core HR platform and has to complete their profile, including current skills and levels. At that moment they might think: wait a minute, did I not share this information during the recruitment process? Why am I being asked again? But being new, enthusiastic and a real professional, they enter their current skills again. Only to find, a few days later, when they access your LXP for the first time, that they are asked to enter their current skills yet again, and a while later again in your LMS, and in other relevant tools (think social learning, etc.). Asking employees to enter their current skills several times in several different platforms is really not something that will be appreciated in 2023, and it makes a poor first impression on any new employee!
3. The above example also illustrates an additional risk of skill data silos: the risk of using different skill labels and taxonomies. Many platforms have their own specific skill labels and taxonomies, and some HR and L&D system providers are very secretive about their taxonomies, and very inflexible. When you consider skills data from both a user-experience and an analytics point of view, having a single set of skill labels and a single taxonomy is absolutely crucial. You do not want employees to become confused because the skill labels in their role profile differ from core HR, and differ again from the LXP. And when you run analytics and create data-driven skill insights, you do not want to waste valuable investment and effort mapping the different skill labels time and time again. Bringing all skill data into a single repository enables you to align on labels and taxonomies and create a single ‘skill language’ that can be used across every skill-related process in the organization.
4. Finally, the more skill assessment data is available, the more value you can generate from AI. AI, and especially machine learning, thrives on data: the more data, the better the models can learn. Having all skill (assessment) data in one place will allow you to benefit more from AI than having it tucked away in silos!
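The single ‘skill language’ from point 3 usually starts as an alias table that maps each system's labels onto one canonical taxonomy. The labels below are invented for illustration; in practice the canonical side would come from your chosen taxonomy (e.g. Lightcast).

```python
# Hypothetical alias table: each silo's label mapped to one canonical skill.
CANONICAL = {
    "ms excel": "Microsoft Excel",
    "excel": "Microsoft Excel",
    "spreadsheets (excel)": "Microsoft Excel",
    "time mgmt": "Time management",
}

def to_canonical(label: str) -> str:
    """Normalize a raw label; unknown labels pass through for manual review."""
    return CANONICAL.get(label.strip().lower(), label)

# Records arriving from different silos (LMS, LXP, core HR) land under one label.
incoming = [("LMS", "Excel"), ("LXP", "MS Excel"), ("HR", "Spreadsheets (Excel)")]
unified = {to_canonical(label) for _, label in incoming}
```

Passing unknown labels through unchanged, rather than dropping them, gives you a natural work queue of labels still to be mapped into the taxonomy.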
Rule 3: Always iterate and improve your methods
There are few domains as dynamic as people’s knowledge and skills: they literally change every week. Not only do people learn new knowledge and skills all the time; our understanding of which knowledge and skills predict performance changes all the time as well.
When we start analyzing performance, knowledge and skills data, we might only have a rudimentary understanding of what knowledge and skills are required to do a job well. And while the biggest expert in a certain job could have the knowledge and experience to make a really good start, jobs change fast, and so does the context in which people perform them. As we continue to collect data and insights, we have a unique opportunity to continuously validate our thinking against all that newly acquired data. It would be a shame to ignore it and the insights it could provide.
Therefore I always recommend setting up a structure and process that enables continuous evaluation, iteration and improvement. This can be done in several areas, for example:
What skills predict success at a job? When we post job openings that include skills, the skills we list are always based on an assumption: that a person with these skills, or with the ability to learn them, will be successful at the job. If this is an assumption, we should put a process in place to periodically validate that it (still) holds true. It could well be that a wrong assumption was made when constructing the job profile, or that the job has changed so much that additional skills are required.
An illustration of this first example is a recent piece of work analyzing job profiles for digital sales professionals. We used natural language processing to identify the skills in each job profile, including skill levels. I am always in favor of including skill levels, and actually see it as a must if you (a) want to hire the right candidates and (b) want to do meaningful analytics. But in this case the skill levels were set too high: every sales professional was expected to be at expert level in social media skills. Now, you and I might have different expectations of what a social media expert is, and it could be very beneficial to have one in your digital sales team, but I would not expect every digital sales professional to be a social media expert. Upskilling all your digital sales professionals to that level would be far too expensive, and it would also mean that a significant portion of their expert social media skills would never be used in their everyday job!
Employees’ understanding of skills and skill levels. As mentioned when discussing skill self-assessments, different people have a different understanding of what a skill and a skill level really mean. If you design your skill processes well, you will no doubt use and share uniform definitions wherever possible, to maximize the likelihood of different people having the same understanding of a skill. I, for example, typically use Lightcast (formerly EMSI/Burning Glass) as a skills taxonomy. It is open (as in available to everybody, in contrast to some other technology providers!) and uses Wikipedia for most of its skill definitions. If you follow me and read my articles, you will know that I use Wikipedia a lot.
However, take the example of learning analytics (skill ESFAA90E184B379555FA in the Lightcast taxonomy): despite its extensive wiki page (link), it is still hard to define exactly what learning analytics skills are! If I asked 100 L&D professionals, I would expect close to 100 different answers. The difference in interpretation of learning analytics between an educational context and a corporate one alone will heavily influence what people perceive learning analytics skills to be. One L&D person might think learning analytics skills are the ability to collect training feedback data and create bar charts in Excel and PowerPoint; another might think of being able to process, structure and model 50 million training records in Alteryx and build a brilliant Power BI dashboard for a few hundred L&D colleagues.
The better employees understand what a skill and a skill level really mean, the better they will be able to (a) rate themselves and others (as a peer or manager), (b) look for fit-for-purpose opportunities to upskill and (c) write job profiles!
Data gathered through skill assessments can help pinpoint areas where employees’ understanding of skills and skill levels could use strengthening, for example when you see significant differences between results from self, peer and manager assessments. These differences could be caused by bias (basically over- or underestimation), but I would not rule out differing interpretations either. The same goes for large differences between internal and external assessment results: they could indicate a different interpretation of the same skill inside the organization versus the outside world.
Skill evaluation methods, people and suppliers. I am always a fan of continuously monitoring processes, possibly (likely?) because I am a control freak. Monitoring your skill assessment methods, people (if you use them) and suppliers is no exception, as even credentials on critical skills acquired from all the right universities could turn out to be less accurate than expected. Top universities can be rather poor at specific topics (while modest universities can be outstanding in one or two), and world-class accredited institutes can see dramatic drops in quality for many reasons; one is failing to keep up with the latest developments and consequently delivering an out-of-date skill assessment. Whatever the reason, using data to continuously validate the quality of your skill assessment processes is very much worth the effort.
Having people with the right skills in critical positions in your organization is essential for success. Being able to target upskilling on specific skills for specific people is strategically important for any organization that wants to thrive and grow in this world. And being able to understand exactly what skills you have in your organization is crucial for both.
That is where a structured, data-driven and holistic skill assessment model helps. Such a model should be based on different skill assessment methods, making sure each method is captured. It should bring all skill (assessment) data into a single repository, and continuously evaluate the reliability and quality of each method used.