Recommendation engines for learning, with Marc Zao-Sanders

For this interview I spoke with Marc Zao-Sanders, CEO of Filtered, a platform that makes learning recommendations. In daily life we see recommendation engines in action all around us, for example behind Spotify and Netflix.

Recommendation engines and learning are a natural fit. The process of seeing patterns in what an organisation or an individual needs, and then finding the right learning experience, is a core function of L&D. This is something a recommendation engine can do.

Marc uses a bit of machine learning jargon at one stage: collaborative filtering. A basic description of collaborative filtering is that it’s a family of techniques that looks at a user’s past actions and interests, and how they relate to those of other users, and makes recommendations based on those relationships between users’ behaviour.
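
To make that a little more concrete, here is a minimal, hypothetical sketch of user-based collaborative filtering in Python. The interaction matrix, the similarity measure and the weighting are illustrative assumptions only, not a description of Filtered’s algorithms.

```python
# A minimal, illustrative user-based collaborative filter.
# The data and helper names are hypothetical, not taken from Filtered's product.
import numpy as np

# Rows are users, columns are learning assets; 1 means the user engaged with the asset.
interactions = np.array([
    [1, 1, 0, 0, 1],  # user A
    [1, 0, 1, 0, 1],  # user B
    [0, 1, 0, 1, 0],  # user C
], dtype=float)

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(np.dot(a, b) / denom)

def recommend(user_idx, interactions, top_n=2):
    """Score assets the user hasn't seen, weighted by the behaviour of similar users."""
    target = interactions[user_idx]
    scores = np.zeros(interactions.shape[1])
    for other_idx, other in enumerate(interactions):
        if other_idx == user_idx:
            continue
        # Weight each other user's history by how similar they are to the target user.
        scores += cosine_similarity(target, other) * other
    scores[target > 0] = -np.inf  # never re-recommend something already consumed
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, interactions))  # assets user A hasn't seen, ranked by similar users' behaviour
```

In a real system the matrix would be huge and sparse, and production engines typically use matrix factorisation or learned embeddings rather than this brute-force loop, but the behavioural intuition is the same.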

Filtered’s platform is actually a combination of a chatbot and a recommendation engine. Magpie is a version of this platform that has been designed specifically for L&D people. Magpie is a great way to experience what chatbots and recommendation engines can do.

Download the ‘How artificial intelligence is changing the way L&D is working’ eBook

To go along with the podcast series on AI and L&D, we have released an eBook with transcripts of all the interviews. The eBook also gives a brief explanation of what AI is and an overview of how it is being used in L&D.

In the eBook you will learn:

  • Some of the jargon behind the technologies, e.g. what data scientists mean when they talk about ‘training a model’.
  • How AI is being used in L&D today to gain insights and automate learning.
  • Why you should be starting to look at using chatbots in your learning programs.
  • How you can get started with recommendation engines.

Subscribe using your favourite podcast player or RSS

Subscribe: Apple Podcasts | Spotify | Amazon Music | Android | RSS

Useful links

Transcript - Recommendation engines for learning, with Marc Zao-Sanders

In L&D we collect a lot of data but we don’t use it

Robin: To get us started, I'd love to hear your thoughts on what the potential is for this cluster of technologies called ‘AI for L&D’?

Marc: I think the potential is like what is being seen in commerce, in marketing and in advertising. I think it's going to be big in L&D, because in L&D you have a lot of data. So you have a lot of content, you have a lot of people, you have a lot of people doing things with that content, and it is impossible to manage that data, to draw insights from it, if you just have humans doing that work. And that's where AI – and not just AI, but automation – comes in.

Currently in L&D I don't think we do loads with the data we collect, but there are all sorts of opportunities, because the data is vast. The prize is big. I mean, AI is a huge industry, it's a 200 billion dollar industry globally. So there'll be the investors, there'll be guys trying to solve some important problems. Where there’s one of them, there are others trying to solve these problems as well. So the potential is big, in a word.

Robin: Often there are systems we use in digital learning to collect lots of data, but in L&D we don't do a lot with it because I think often L&D people are coming from a very human-centred background. Their first intent is not data. Occasionally I come across people who are more data driven, but it's not a natural spot for most L&D people to work from.

We are not data driven

Marc: I think that’s probably true, and it goes beyond L&D in general – certainly in the UK, but I think it’s a global phenomenon. There is an aversion to numbers, data and spreadsheets. Maybe you’re right that there is a little bit more of a human bent in our industry, but I think actually it’s pretty pervasive, and that’s something that the global workforce is gradually waking up to and becoming better at.

I think as we start to see the rewards from being focused and data driven, we start to draw out some insights from maybe some smaller data sets. You put those to the powers-that-be at your company and, if they agree that this is helpful – that it’s affecting performance in a department – then your confidence grows. You get into a positive feedback loop and continue to grow it.

But we're still probably some way from that and, like you suggest, a little bit further back than other areas.

Robin: The podcast in this series with Mike Sharkey focuses on predictive analytics in higher education. He made the comment that essentially, education is just so different to, say, e-commerce because so many more variables apply in education. I know you've been doing some work with recommendation engines rather than prediction systems. How did you end up moving towards recommendation engines as your way of using AI?

Figuring out which content is right for a person

Marc: Well, we didn't come to it by thinking, ‘What will be the best way to use AI?’ We came to this from the point of view of a learner: a member of staff, an employee, a worker. A person with a job needs to know certain things, but there's just so much content out there.

So the first adaptive technology we created was a tool to work with Microsoft Excel. What we give the user are the most relevant functions and features they should learn in order to be more productive and add value in their role, given their aspirations, given what they already know, and what have you. That was a few years ago. More recently we've gone, ‘Okay, Excel is interesting but that's a relatively narrow domain. What if we expand that to everything that a person with a job might need to learn, in order to upskill and do their job better?’

Recommendation engines are a great way to figure out which content is right

Marc: There are many providers, and most of those providers have a lot of content. In some cases, tens of thousands of learning assets. So how do you connect an individual with the most relevant learning asset? AI recommendation engines and recommendation assistants, and some of the established algorithm types like collaborative filtering, are designed for exactly this purpose. If you look at, say, entertainment, and Netflix and Spotify, or if you look at social media, Facebook, LinkedIn, and Instagram, all of these huge companies have to get relevant content to their user in order to drive usage and subscriptions, to get more clicks, or whatever.

What we're doing in corporate learning, the problem is basically identical. Of course, there are nuances, but structurally it's the same problem. Loads of content, a proliferation of content, lots of users with individual characteristics, and not always making the best choices in terms of what they should learn next, and most of the time not making any choice whatsoever.

So a recommender system is a way of getting the right content to the right user, just like it is in other fields of modern life.

Robin: Recommender systems are really quite mature compared to some of the other machine learning technologies. They have a more solid technical base to build on as well.

Spotify and Netflix are great examples of getting the content to the right user

Marc: You've also got predictive analytics, which is reasonably mature technology. I feel blessed a little bit with what we're doing because we can point to Spotify and Netflix and these guys are so ubiquitous. If you don't use them, or if you don't have any of them, you're probably in a population of, you know, like, 13 people globally.

For anyone that is trying to explain these algorithmic conduits between content and users, these platforms are good examples. I think when you get beyond that to less tangible, maybe more exciting uses of AI, like in medicine or robotics, it's a little bit harder to see how it might be relevant straight away.

Getting the right content to the right users is well established. Amazon is another example, a massive example. eBay as well, even. They are pretty mature, but not in corporate learning. Corporate learning is often the industry that time forgot. L&D and some other industries are trying to catch up on some lost time.

The Netflix of learning – what are people really talking about?

Robin: I feel like I have to talk with someone in this series of podcasts about the idea – the term ‘Netflix of learning’. I keep hearing the term, and I wonder whether people are talking about the interface of Netflix, or about the content, or about the technology behind Netflix.

You're right, it's a great way of explaining recommendation technology, but I’m a bit concerned that sometimes when people are talking about the ‘Netflix of learning’, they aren't realising what needs to be below the Netflix interface.

Marc: It's quite an interesting conversation starter because, like you say, that has become a little bit of a cliché in the industry, the ‘Netflix of learning’. It's also significantly confusing in the case of Netflix because yes, there are recommendations but it's also a player. The interface is decent and it's certainly effective and used by many people. When people talk about the ‘Netflix of learning’, are they talking about the user interface or are they talking about recommendation capability and AI?

I'm genuinely not clear on what they are talking about and that's one of the reasons that when I mentioned Netflix, I made a mistake. I think it's much better to use, possibly, Twitter as a better example. So social media, all these tweets out there – you follow people, but the algorithms also sort which tweets are going to be most relevant to you given what you've liked in the past and the activity of other users. There's less room to confuse or conflate it with a player or just a user interface.

The other thing I'd say though about Netflix, since it comes up so often, is that the way that they are trying to get you to the right content is actually very sophisticated, because they have chosen really high profile films and TV series that have very exciting images and thumbnails. In fact, there's a lot of AI in just the thumbnail. They do split testing to get the right thumbnails in front of people.

Netflix directs your attention

Marc: You've got a big screen at the top showing you whatever Netflix thinks is most exciting, the latest release, and in a sense this is not personal. Let's say there's a new series of Orange is the New Black: they'll make a big splash of that. They'll just stick that on almost everyone's screen. Then there are those people who talk a lot about the films on Netflix. That's a social media activity and Netflix is getting the benefit of effective advertising, based on what they choose to put in front of people. Then below the big screen, they have the genres and what you most recently viewed, and there's nothing algorithmic about that. Netflix is actually a bit less dependent on algorithms to make recommendations than many of the other examples.

Spotify and Amazon are better examples

Marc: If you are trying to present your work as being recommendations-first, as we do, Netflix is possibly the worst of all of the examples to pick. If we're going with anything, I would pick Spotify or Amazon or any other company, basically.

Robin: The way you just explored what Netflix does is great. It's really not a great metaphor for learning. The way Spotify's radio feature and suggestions work is actually a more powerful way of thinking about recommendation systems.

Marc: Yes, and it’s also to do with the volume. With Netflix, of course they've got quite a big selection but it's a few thousand, versus Spotify, it's – I don't even know, but I'm sure it's in the millions of songs. The concept of the unknowns in the music industry is absolutely massive. The concept of unknowns in film: a lot less, and it's a lot chunkier and, you know, films are longer. Even TV series are a lot longer. If you get a recommendation wrong with a film, then it's more of a big deal because you've lost a couple of hours. The users have lost a couple of hours of their lives.

Learning is much more like Spotify in that respect. There are literally millions of learning assets, if you consider all the content libraries out there, and the Internet with all of its instructive, informative articles, and then what companies create themselves. In many cases, an individual company will have hundreds of thousands of assets: PDFs, MP4s, PowerPoints, and all sorts of stuff that they'll want to make available – or at least some of which they'll want to make available – to their staff. You add all of that up and you're in the many, many millions and possibly billions of learning assets.

Unknowns are really important, and so getting a recommendation that reaches into that unknown and brings it to a user that wouldn't otherwise discover it is more valuable than it is with Netflix.

Robin: To dive into the platform that your company's been working on: does that just pick up on past learning activities, or is it bringing in a larger ecosystem of data? Or is it profiling people in other ways? Is it doing all of these things?

Marc: Okay, thanks for asking about our platform.

Understanding the content

Marc: What it does is it ingests learning content from a client. There are three broad groups of content I've mentioned: the libraries, the internet, and what a client at a large company might have themselves. Part of the algorithm stack is understanding that content. You've got to be able to classify according to competencies, according to difficulty level, according to various characteristics. Understand the content, that's a part of what the stack has to do.

Understanding the users by using a chatbot

Marc: Then it needs to understand the user. That's one of the reasons that we have a chat interface, a conversational UI. We can ask some questions of the user. Of course, if the client already has some of that data, then we can ingest that data. Basically we get as much data as possible, in as reasonable a time as possible. What we collect is information about the user that pertains to making a useful recommendation for them.

Those two sides of it sound really obvious when you describe it this way, but I don't think that it's always thought of as simply as this. I do think it's useful to think of it in this way. You've really got to understand the content. You've really got to understand the user, the learner. Then you need to have some way of matching the two and a feedback mechanism so that the system will improve over time.
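
To illustrate the loop Marc describes – understand the content, understand the learner, match the two, and improve with feedback – here is a small, hypothetical sketch. The data structures, weights and example assets are assumptions made purely for illustration; they do not describe Filtered’s actual stack.

```python
# A hypothetical sketch of "profile the content, profile the learner,
# match the two, adjust with feedback". Not Filtered's implementation.
from dataclasses import dataclass, field

@dataclass
class Asset:
    title: str
    competencies: set        # e.g. {"excel", "data-analysis"}
    difficulty: int          # 1 = introductory ... 3 = advanced

@dataclass
class Learner:
    # Gathered via the chatbot: aspirations, current level, interests, and so on.
    interests: set
    level: int
    feedback: dict = field(default_factory=dict)  # asset title -> +1 (liked) / -1 (disliked)

def score(asset: Asset, learner: Learner) -> float:
    overlap = len(asset.competencies & learner.interests)  # content/user match
    level_gap = abs(asset.difficulty - learner.level)      # penalise the wrong difficulty
    nudge = learner.feedback.get(asset.title, 0)           # feedback loop
    return overlap - 0.5 * level_gap + nudge

def recommend(assets, learner, top_n=3):
    return sorted(assets, key=lambda a: score(a, learner), reverse=True)[:top_n]

learner = Learner(interests={"excel", "data-analysis"}, level=1)
assets = [
    Asset("Pivot tables in depth", {"excel"}, 2),
    Asset("Intro to data analysis", {"data-analysis"}, 1),
    Asset("Advanced VBA", {"excel", "vba"}, 3),
]
print([a.title for a in recommend(assets, learner, top_n=2)])
```

In this sketch, the chatbot answers Marc mentions would populate the Learner profile, and something like a thumbs-up or thumbs-down on each recommendation would feed the feedback map so the ranking improves over time.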

One of the reasons that we've moved to a chatbot is that you've got to be really careful with privacy. It's all very well delivering a personalised service; personalisation obviously requires data in order to personalise, otherwise you're stuck with something generic. But ingesting data that is pre-existing – maybe it exists on LinkedIn – is increasingly, and rightly I think, frowned upon. That's why we prefer to just get it from scratch, unless there's a very good reason to ingest. We ask explicitly, in this conversational chatbot UI, about your department, about your aspirations, about what you already know, and about what excites you, so that we can make better recommendations.

So you're doing it as a conversation: ‘Individual workers, we would love to help you learn the most useful stuff. Here are some questions so we can get to know you better, so that we can make some better recommendations. Here's a quick learning experience so you get a flavour for the kind of thing that we can deliver, and if you like that, tell us some more and we'll get more into it.’

Robin: That's a really great formula. What’s great about it is that it’s really explicit about how the profiling and revealing of personal data works.

Marc: I'm glad you think so. I mean, actually, I think it's always worth assessing anything in learning by comparing it to an adjacent field.

A platform like Spotify is always collecting data about you

Marc: It comes back to Spotify. They don't have to have a chat, or ask you about your actual musical likes and dislikes, although probably at the start you can indicate some genres that you like. They don't need to do that because they get so much data from people putting headphones on and playing Spotify for eight hours. They have so much data on what you like and don't like that they don't need to ask the explicit questions.

You’re not going to be learning for eight hours a day

Marc: In learning, it's not like that. No-one's going to be learning for eight hours a day. Learning assets generally aren't the length of a three- or a four-minute track. Some are but many aren't. You've got to understand the learner and get the data you need in order to make recommendations in a different way. That's one of the nuances I was referring to early on.

Although there are these analogous situations – analogous companies like, say, Spotify or Netflix – and they are very interesting, I find the nuances between them fascinating when you look at what the user activity is and what data you can actually pull in order to make good recommendations.

Robin: A couple of times when I've been talking to L&D people about the potential of some of these technologies, they want something to grasp – they want something to do. This is one reason why I really wanted to talk to you, because you actually have a platform that people can start to do trials on and play with. How do people get started?

Marc: Oh, that's a good question.

Getting started

Marc: Well, you can go to Filtered.com and you can try Magpie. That's one of the links on the home page. Unlike some of our competitors, there's a free version that people can just experiment with, enjoy and ask us questions about. We're very happy to put that there and be challenged on it. That's the best way to start. Anyone that's interested in anything that you and I have discussed – we'll be very happy to have them get in touch with me directly.