Implementing an AI-based Learning Experience Platform (LXP) requires a different view on L&D and technology
PART I: Design
There is a lot being said about learning experience platforms. It seems to be the next buzzword in our industry, and there are plenty of experience platform vendors out there pushing this as the new solution for all our problems. However, I question some of the hype: we need to be careful we’re not falling into the next snake-oil trap.
The biggest challenge I see is that implementing a learning experience platform is not just the implementation of new technology. To make a learning experience platform really work as intended, you need to rethink the way you organize and facilitate (part of) learning in your organization. In this article I will demonstrate that you need to think this through carefully and, among many other things, ensure you have a very solid foundation in place when it comes to understanding technology and making technology work for you.
Let go of designing the full learner experience
Our classical approach to the design of online learning (and this, by the way, is also valid for any well-designed classroom training) is that we carefully consider each interaction between learner and teacher, as well as between learners. In the design of an online learning program we decide exactly when and how the interaction takes place. In other words, we define the experience. Think of a simple page turner: we design a solution where all learners have the same experience. If we make it a bit more complex by introducing branching or personalized scenarios that take user input into consideration, we design all possible experiences: how will the storyline go? Where do we ask for input, choices or responses? What will happen next with each possible response?
Every implementation of an artificial intelligence based learning experience platform will automate most of that design. You will no longer be able to control it directly.
Let me repeat that one: In a learning experience platform, the key interactions with the end users are driven by machine learning algorithms, not by a predefined design created by an instructional designer.
I have seen colleagues ask vendors to share the code so they could have a look. Even if you got the code, and you understood it, you would not be able to directly influence the algorithm anyway. That is the nature of using AI and machine learning: the algorithms in learning experience platforms use the data in your system, as well as data generated from user interactions (likes, shares, comments, registrations/completions, review ratings, removals, explicit asks for input – see also Magpie, etc.), to create personalized learning journeys and, more importantly, to learn for themselves (that is why it’s called machine learning!) how to keep creating better personalized learning journeys over time.
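To make the idea concrete, here is a deliberately minimal sketch of how interaction signals can drive content ranking that changes over time. This is a hypothetical illustration, not any vendor’s actual algorithm; the signal names and weights are invented for the example, and a real platform would use far more sophisticated models.

```python
from collections import defaultdict

# Hypothetical weights: each interaction type nudges an item's score up or down.
SIGNAL_WEIGHTS = {"like": 1.0, "share": 1.5, "completion": 2.0, "removal": -2.0}

class InteractionRanker:
    """Toy ranker: accumulates scores from user interactions, ranks content."""

    def __init__(self):
        self.scores = defaultdict(float)  # content_id -> learned score

    def record(self, content_id, signal):
        # Every interaction updates the score; no designer intervenes here.
        self.scores[content_id] += SIGNAL_WEIGHTS.get(signal, 0.0)

    def recommend(self, candidate_ids, top_n=3):
        # Rank candidates by accumulated score, best first.
        return sorted(candidate_ids, key=lambda c: self.scores[c], reverse=True)[:top_n]

ranker = InteractionRanker()
ranker.record("course-a", "completion")
ranker.record("course-a", "like")
ranker.record("course-b", "removal")
print(ranker.recommend(["course-a", "course-b", "course-c"]))
# → ['course-a', 'course-c', 'course-b']
```

The point of the sketch is that the ranking is a by-product of accumulated behavioral data: nobody wrote “course-a comes first” anywhere, which is exactly why reading the source code tells you little about what any individual learner will see.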
If this is the case, how do you ensure your learners have the best possible experience?
With the implementation of a learning experience platform, you are no longer in direct control of the moments and nature of the interactions. This means you will have to rethink the way you design and/or integrate your content. The key change I see is that the content should become much more granular (anybody thinking micro-learning here?). In a classical (digital) instructional design exercise you typically create fairly lengthy courses (20 minutes is what I consider lengthy here!) containing several pages, videos, questions, images and other digital content. This is then uploaded to an LMS or LXP as one big “blob” of content. I have seen several large companies try to implement a learning content management strategy to build modular content elements that could be reused in different courses, the way you would reuse Lego blocks to create endless designs. I never actually saw this fully work in reality: the discipline of the people involved and the functionality available in the systems to manage this by hand for large content libraries were simply not up to the task.
Now, with machine learning and AI coming in, we can leverage technology to analyze all the different content in our repositories and, based on user (experience) data as well as possibly some additional input from the user (“what do you want to learn today?”), assemble a personalized learning journey. See this as a machine that asks you what Lego structure you want to build, then looks at your box of Lego, selects the blocks required and starts building.
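The Lego-assembly idea can be sketched very simply: tag granular content blocks with topics, then match them against a learner’s stated goal within a time budget. This is a hypothetical illustration of the principle only; the tags, block names and selection rule are invented for the example, and a real platform would combine this with the behavioral signals discussed above.

```python
# Illustrative data: granular content "blocks", each tagged with topics.
CONTENT_BLOCKS = [
    {"id": "intro-video", "tags": {"python", "basics"}, "minutes": 4},
    {"id": "loops-quiz", "tags": {"python", "loops"}, "minutes": 3},
    {"id": "excel-tips", "tags": {"excel"}, "minutes": 5},
]

def assemble_journey(goal_tags, blocks, max_minutes=10):
    """Pick the blocks most relevant to the learner's goal, within a time budget."""
    # Keep blocks sharing at least one tag with the goal; most overlap first.
    relevant = sorted(
        (b for b in blocks if b["tags"] & goal_tags),
        key=lambda b: len(b["tags"] & goal_tags),
        reverse=True,
    )
    journey, used = [], 0
    for block in relevant:
        if used + block["minutes"] <= max_minutes:
            journey.append(block["id"])
            used += block["minutes"]
    return journey

# A learner answers "what do you want to learn today?" with Python loops.
print(assemble_journey({"python", "loops"}, CONTENT_BLOCKS))
# → ['loops-quiz', 'intro-video']
```

Notice that the journey is assembled, not authored: the quiz ranks first because it matches both goal tags, and the Excel block is never considered. The smaller and better-tagged the blocks, the more precisely such a machine can build.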
As with Lego, the bigger and more uniform the blocks are, the more limited you are in your ability to create fantastic stuff. Building a Lego house from only 8×2 rectangular blocks is possible, but it won’t be pretty, and some people might have trouble recognizing it as a house. Having only 1×1 and 1×2 rectangular blocks, plus some 1×2 roof tiles, increases your freedom to craft the house you want, but it will not give you nice doors and windows. The challenge is to find the right balance in the granularity of your content: not too big, not too small, not so specific that it is used only once, nor so generic that it no longer means anything. Not an easy job. The story of Lego, with its ups and downs, confirms this is really hard. But it can be done if you carefully rethink the way we design learning.