Social learning and xAPI learning systems

What is social learning, and what new opportunities can be opened up when social learning is introduced to the workplace? On the Learning While Working podcast, Ben Betts from HT2 Labs dives into social learning and how it can be used with data analysis for more valuable learning outcomes.

Subscribe using your favourite podcast player or RSS

Subscribe: Apple Podcasts | Spotify | Google Podcasts | Amazon Music | Android | Stitcher | RSS

Sign up to our email newsletter to get podcast updates

Links from the podcast



Welcome to the Learning While Working podcast. It's Robin Petterd here, the founder of Sprout Labs and the host of the Learning While Working podcast. This podcast is another in our series of podcasts about xAPI. I'm talking with Ben Betts, the CEO of HT2 Labs. HT2 Labs makes next-generation learning tools that include a learning record store. The other tools are all xAPI-enabled.

Ben and I cover a lot of ground really quickly. He talks about the need for L&D to be more data-driven and the relationship between business intelligence tools and xAPI, and Ben is incredibly passionate about the importance of social learning. We touch on the personalisation of learning, and start to talk a little bit about analysing social data in different ways.

This podcast is a little bit longer than some of our other podcasts; it goes for about thirty minutes. On the blog post that goes along with this podcast, there's a link to the social learning playbook that Ben talks about.

Ben, welcome to the Learning While Working podcast. It's great to have you here today.

Thanks for having me, Robin.

Ben, what sort of trends are you seeing in learning that xAPI is now starting to enable?

I guess I'd break it into three things. I think there's a bit of a trend around moving towards more data-driven decision making, more data being required, more pressure on return on investment. There are consistent studies coming out that suggest that organisations are barely measuring any impact of training at all, and I think they're having to answer for that quite a lot.

So I think there's a pressure there on reporting on data. I think there's a philosophical change, and obviously the podcast picks up on that big time, in terms of this move from courses towards more resources, more performance support, more working and learning being one and the same sort of thing. People are starting to move away from the course and looking for opportunities to learn across the spectrum, but they still want that first trend; they still want the data that comes with it. So those two things give rise to xAPI.

And then the third thing is the rise of the ecosystem, the rise of so many online learning tools, so that now we're living in a world of abundance. It used to be that elearning was a bit of a weird thing, kind of like online dating. Online dating was weird: if you said you met your wife or your girlfriend or your boyfriend or whoever through online dating back in '99 or 2000, you'd be looked at as being a bit of a weirdo. It was the same thing with me: I did a master's degree online, and I guess that finished in 2006 or 2007, something like that. People didn't quite believe that was a real thing.

When I said I'd got an MBA but I did it online, they were kind of like, "Okay, is that a real certificate? Is that actually a real exam?" My friends, employers, anybody, didn't really believe that it was a real thing. Since then, over the last decade, everything has been legitimised, to the point that we can scarcely remember there was a time when it was weird. So as we come into that sort of world now, we see so many websites, so many resources, your edXs, your LinkedIn Learnings, your Degreeds and things like that, that are starting to aggregate resources or put stuff out there for free, or for subscriptions, which is setting the bar for content and experience.

So we're seeing more and more organisations wanting to bring some of that in and being able to select the best-in-breed tools as opposed to having to sit with a learning management system for five to seven years. They're starting to want to be able to shop around.

So, I think those three things: the need for more data, the move towards working and learning being the same thing, and the rise of the ecosystem, with this abundance of content and learning experiences and the legitimisation of learning online, are giving rise to the conditions that mean we need to be able to talk about data in a standard way. We need to be able to reference these things using something like the xAPI.

Yes, and it's totally interesting because essentially we're in a spot where there are so many choices about how we learn, and that's really that ecosystem and that sense of learning not just being about things that are locked into an LMS. But that also then creates this whole data capture problem, which xAPI partly solves.

It's really interesting because one thing I have an issue with is that, in some ways, xAPI is still only learning data; it's not actually performance data unless you've got it integrated with something that's measuring performance. So, it's only one step. There's still a conceptual gap in linking it to business outcomes, and xAPI is like an input rather than an outcome. Are you starting to see people actually bring that data into the same learning reporting systems?

Yes. I see two things here: differences on data in and differences on data out. On data in, when we started down this xAPI journey, there was some very naïve notion that the world would become xAPI; that xAPI would catch on, that Salesforce would do it, or that Yammer would do it, or that I'd get it in my electronic point-of-sale system.

Of course, that's not happened, and it's probably never going to happen, because these organisations serve huge numbers across lots of different industries. They build APIs, and they don't necessarily build them to the specification that we desire.

At HT2 Labs, we're increasingly going to those systems and transforming them. You take an API, you take Salesforce's API, and you transform it into xAPI, pretty much in real time. Then, when somebody's escalating opportunities, we've got the data coming in from the sales training, from the LMS, or whatever, and also the opportunity data coming in from Salesforce.
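As a rough illustration of the kind of transformation Ben describes (this is a sketch, not HT2 Labs' actual pipeline; the event field names and the verb IRI are hypothetical), a CRM event can be mapped onto an xAPI statement, which is just a structured "actor, verb, object" record:

```python
# Sketch: map a hypothetical CRM opportunity event onto an xAPI statement.
# Field names in `event` and the verb IRI are illustrative assumptions.
from datetime import datetime, timezone

def opportunity_to_xapi(event: dict) -> dict:
    """Build an xAPI statement dict from a CRM opportunity event."""
    return {
        "actor": {
            "mbox": f"mailto:{event['owner_email']}",
            "name": event["owner_name"],
        },
        # xAPI verb ids are IRIs; this one is made up for illustration.
        "verb": {
            "id": "http://example.com/verbs/escalated",
            "display": {"en-US": "escalated"},
        },
        "object": {
            "id": f"http://example.com/opportunity/{event['opportunity_id']}",
            "definition": {"name": {"en-US": event["opportunity_name"]}},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Once both the training activity and the CRM activity arrive in the LRS in this common shape, they can be queried and correlated side by side.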

That sort of stuff is starting to happen more and more. We've done that with Salesforce, with SAP, with Yammer and a bunch of those other tools. That's becoming more known as a need and more well articulated by people. It's not always easy to do because all these organisations, if you actually start talking to whoever owns Salesforce or whoever owns Yammer, they get very nervous that you're going to take all of their data or something like that. There are negotiations to be had there. But we can do that, and we are doing that.

On the way out, increasingly people are starting to see that where expectations are falling short of xAPI's potential, it's because they just don't have the people and the mindset to deal with this sort of data. xAPI data is fundamentally different to getting a score or something like that. Whilst a score is self-contained, xAPI is a log. It just shows everything that happened. Being able to pick back through that, being able to aggregate it and create meaningful metrics from it, is a skill. Not a skill that many in the learning and development department have, and why would they? It's not been their job.
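The point that xAPI is a log rather than a score can be made concrete with a small sketch (illustrative only): turning a raw statement stream into a simple metric means aggregating over the log, for example counting activity per verb.

```python
# Sketch: reduce a raw xAPI statement log to a simple metric.
# Here we count how often each verb occurs across the log.
from collections import Counter

def verb_counts(statements: list) -> Counter:
    """Aggregate a list of xAPI statements into events-per-verb counts."""
    return Counter(s["verb"]["display"]["en-US"] for s in statements)
```

Real reporting would slice by actor, activity, and time window as well, but the shape of the work is the same: the metric is derived from the log, it is not stored in it.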

Increasingly, we're seeing that that data is being passed over to data analysts, or that data analysts are starting to be picked up in job roles within HR, within HR tech, within L&D, and they want that data in a BI tool. They want it in Tableau or Looker or Dynamix or something like that. Again, you're having to build these sorts of translation pieces from xAPI into those tools, because it seems unlikely that your LRS is going to compete, nor should it in my opinion, with a Tableau or something like that.

This is also just the sort of thing that's an example of ecosystems, of multiple components working together as one to get an outcome. It's actually interesting with the BI tools that you're talking about, because I happened to reflect back on the last week and realised, "Oh, I've actually had two discussions where essentially, L&D and the IT areas have said: actually, we don't want the data there, we want that back in our BI tool. That's only a stepping stone." And that was a really powerful thing, to sit there and go, "Oh okay, so in actual fact we're not completely sure what we're going to do with it when we get it there, but we actually just want to make sure we can explore it in the same environment we're exploring the rest of the business data in."

I think that really does play to the ecosystem route of recognising what tools can do best and sticking to it. This is where we fell down, and continue to fall down, with the learning management system, because the demands put on a learning management system are those of umpteen other systems.

You've got to be the best at managing people to have a good learning management system. It has to manage people flawlessly, it has to manage content brilliantly, it has to be a booking system, it has to be a calendar, it has to be an e-commerce thing. It has to be so many things to make a decent LMS solution, one that ticks all of those boxes. It's just impossible to be a good one, because how can you be the best booking system, the best e-commerce system, the best classroom system, the best elearning launcher? Frankly, it's bullshit; you can't do that sort of thing and come out with a good product, consistently.

I think it's very tempting, when you procure things from an organisational point of view: why procure three things when you can procure one? Because it's just a pain in the arse to get three in; to get one in would be better. So if I can get one that does all of these things, that would be tempting.

But we've got a legacy now of that not really working very well, and of us coming back to the table in three, five, seven years to say, "Oh, that was crap." I have this hesitance when I see the feature list that comes out of an LMS procurement or something like that; you go, "What good piece of software has 350 features like that? Why can't we focus on what they do best?"

So for me, a mature ecosystem that invokes the use of xAPI would have a learning records warehouse, a data warehouse that is fundamentally xAPI-conformant, at the heart of it. But that's all it would do; it'd be really good at getting data in and getting data out, cleansing that data, making sure that it was sane, that it was usable, that it could go out to those BI tools.

But I don't think you should build a BI tool in it, and equally I don't think you should necessarily build all those filters on the way in, as a part of it. I think you've got to see these things as a pipeline, as a series of tools, and then focus in on what can be best-of-breed.

One thing I just want to explore a little bit, because I think that's really interesting: as well as Learning Locker, HT2 Labs has a couple of other tools, which I think of as almost next-generation xAPI tools, Red Panda and Curator. Can you talk a little bit about why you developed those in the first place?

Curator is where we started, and it was a result of my Ph.D work in the UK. I was researching a better way to do learning online. Let me take it back a little bit; forgive me for two minutes while I go back in history.

I've worked in elearning for most of my career. I used to be a bit of a web developer, and became an elearning developer kind of by default. I've always been kind of underwhelmed with the world of elearning. There was an episode of The Simpsons, I looked it up, I think it's from the early 90s, where the town of Springfield inherits a lot of money and Lisa imagines what the school would be like. She puts on her VR headset and goes rampaging with Genghis Khan. That was like 1993, and it's 2017 and we're still like, "Ah, we could almost do that in five years' time, maybe."

I'm like, "Oh, geez guys." That's always my touchstone of what people thought online learning could be, and then the reality has been different. So anyway, I was disillusioned, and I went back and started to research how we could do things differently. One of my problems was the isolation of online learning. In the classroom, I wasn't a big believer in preaching from the front of the room, but I was a big believer in people talking. The best moments in any face-to-face workshop or classroom or lecture I've ever been in have been in the bar afterwards.

By taking stuff online, we just ruined any semblance of that experience. I was interested in social. That led us to create Curator, which is basically a social online classroom. As a result of that, we were getting all sorts of data out of it. We were getting people making comments, liking things, uploading their own content. That didn't fit with things like SCORM that we were used to tracking with.

This was at the time when Tin Can, as it was then known - Tin Can API - was in development, so we became an adopter. It led us to build Learning Locker and go from there. We kind of had this social piece, this social classroom where you go to do learning, but Learning Locker, where you were storing learning. And now we've got a third bit, the Panda.

This was really trying to ask: well, why was I in that classroom in the first place? What was the plan here? What was I hoping to achieve? There's an undertone of wanting to make relevant learning opportunities appear at just the right time. A personalised way of learning is the theme for everybody at the moment. Personalisation, things like that.

It is a desire of mine, I do think that if we could make learning more relevant to your needs right now, it would be a better experience. Red Panda is there to try and answer that a little bit, to try and understand okay, well if this is what you've done, what might you want to do next? To try and adapt that a little bit, to try and personalise that a little bit, and to be able to feed off xAPI data in the background. Because at the moment, there's not a lot in the xAPI world that actually consumes xAPI data. Quite a lot of stuff spits it out, not a lot consumes it. So this is where Red Panda came in.

Ben, it's lovely to meet someone who's got their heart in the same place as me. It's also really interesting because I come from a strong multimedia background, and it's presumed that you always want all the sexy tools and sexy things. But occasionally I come across people who can do that, but then sit there and go, "Well, the real power of the tools we've got is actually our ability to communicate with people, not just to show another piece of video."

It's really nice to see that that was what came out of that rethinking for you, around what learning really could be. It's interesting, we've got a bit of a learning ecosystem framework, and it starts with: you need to have the knowledge, which is sort of the bits you're curating. Then people need to know where they're going, which is the Red Panda goal. People need spots to practise, which is, I think, possibly what you're talking about with some of the Red Panda. And then people need to be able to work together and communicate, and think through ideas, and that's Curator a bit as well, to bring everything together.

It's interesting as well, because essentially, people often don't do anything with learning data. There was one LRS demo we were doing and we said, "We've got a workflow system that just sends email messages depending on which statement comes into the system." And they went, "Oh, most people don't do anything with the data, or they ask what we could do with the data."
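A statement-triggered workflow like the one mentioned above can be sketched very simply (a hypothetical rule shape, not any particular LRS's actual workflow feature): each incoming statement is checked against rules, and a matching rule fires a notification.

```python
# Sketch: fire an email when an incoming xAPI statement matches a rule.
# The rule structure and template fields here are illustrative assumptions.
def handle_statement(statement: dict, rules: list, send_email) -> None:
    """Check a statement against simple verb-matching rules and notify."""
    for rule in rules:
        if statement["verb"]["id"] == rule["verb_id"]:
            # Fill the message template from the statement's actor fields.
            send_email(rule["to"], rule["template"].format(**statement["actor"]))
```

The interesting part is less the mechanism than the decision to act on the data at all, which, as the anecdote suggests, many teams never get to.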

So Red Panda actually, literally responds back to past activity?

That's the idea of it; the execution of that is difficult because you then have to start mapping things to understand. You don't really want to surface xAPI to any end user. Nobody should ever need to know about that, or be involved in that. How you can communicate that if you do this in Salesforce, that in Yammer, and take this course, then that would be a journey, is something we can articulate from a tech point of view in xAPI, but we need to create interfaces and opportunities that people can just use and see and do quite well.

On that, there's a guy down at xAPIapps who's done some really nice work of starting to humanise some of that, and it picks up on some similar themes. It's not easy to do, because we're used to coming at it from an xAPI angle, where just mentioning the word doesn't mean anything. Why are we talking about these things when you're in the context of a user interface?

Getting past that is tricky and still something we're trying to overcome. We're still very much a beta product for the Panda. We have our first sort of six clients that we're rolling in with now, as we try and really articulate what is the need that we're really fulfilling with this sort of thing.

Because we believe we've got something here, but we are a little bit solution-led, in terms of, "Look, we've got xAPI here, we think it might be useful. Can it be useful for this, or this, or this?" So you're kind of shopping around a little bit to see where it actually makes a difference.

In discussions I had last week with some people designing a solution, what was really lovely was them coming into elearning and expecting it to be personalised. We were in the background thinking through some adaptive learning things, and like you just said, technically it's actually really quite easy, but how you show it to a learner, and how you do the design and thinking-through of that, that's where the real challenge is, around the processes for it.

It's going to be a spot to watch what happens in that personalisation spot, I think, in the next two or three years.

I think it's a huge area. You'll see us putting out more content, more ideas around this, because what is personalised learning? You've got different people passing off different things as being personalised or adaptive in different ways, like coming out with a branching scenario or something like that and saying, "Hey look, it's my personalised adaptive tool." Well, it doesn't adapt to the person at all; it was adapting to the clicks on the screen, you know what I mean? There was nothing inherent in the person that led it down that route; it was just a preset track being triggered.

I think you'll start to tease out a bunch of snake oil in the area of what is personalised and what is not. I think there'll also potentially be a bit of a backlash against these sorts of personalised things. Where you've seen companies like Knewton, for example, in the US go big on adaptive learning for schools and for curriculums in the K-12 area, you start to see anecdotal evidence of students starting to become the victims of personalised learning, with the tutor an innocent bystander, and things like that.

Because the computer says you will do this, the computer says you will do that; we're not really sure how the algorithm came to it, but the algorithm tells you, and you will do it. At that point, there's a bit of a backlash against machines, and I think if you take that into the workplace, then you're definitely going to have people rebelling against it.

So how 'personalised' actually gets put in front of people comes back to the UI stuff. Facebook and the other tools in that sort of region have become very clever about personalising the experience without you really knowing it. You go to tag a new photo that you've uploaded, and the first three people that you can tag just happen to be the people who were near you at the time. That creates a seamless user experience, but you don't even realise until you start digging into it: "Oh crap, they've done that. That wasn't an accident."

How personalised actually faces up to the world and people accept that as it becomes the norm, I think is going to be an interesting area.

It will be an interesting area, because there's a set of privacy issues around that. It all feels very Big Brother. But I think that Facebook example's a nice one, where you're augmenting and giving people a large amount of value, rather than extracting value from them.

Yes, I think that's exactly the point, yes.

What do you think is the big trend that's going to happen within social learning space within the next couple of years, Ben?

I think there's a couple of things on my mind. Aggregation and curation are themes that are there. Aggregation is a theme that is abundant on the internet. What you see in internet industries is that the first step is people don't believe you can do it online, then a bunch of people come along and do it online for the first time, and then so many people are doing it online that we need aggregators to come along to do something about it.

You see it in books or in cars or in whatever. First people said, "You'll never be able to retail that online, you'll never be able to do that online." Until it was, and then there were hundreds of the buggers. Now you've got things coming out to aggregate those. There's an advert here in the UK for a holiday website called Trivago that's driving me insane. Its key USP is that there are so many aggregators out there for your holiday, wouldn't it be great if you could aggregate the aggregators? Geez, we've fallen down the well.

There's a trend there in aggregation. I think the same thing is coming in learning, because we've seen this abundance rise: OpenLearning, edX, Coursera, whatever. The marginal cost of content is becoming very, very low, and the abundance of it is very, very high. So we'll see a trend towards more aggregation of that.

I think part of the trend you'll see with social in that is you'll start to see more of that being aggregated through the lens of people: you should look at these things because Robin thinks these are great, or something like that. As you go through that trend of aggregation, you start to bump into the social and see how it comes in with a bit more of a light touch.

The other bit that I'm very keen on in social is the measurement of value in the way that people progress their thinking with social. It used to be that social was this bit on the side of an online course: you would do the elearning and then there'd be a forum, or something like that. Go click on the forum and make three posts, and that'd be you done.

Increasingly, you're starting to see the social come into the conversation from the beginning. The social is right there, alongside it. One of my favourite areas of research at the moment is how we can build models of what people are actually saying, to suggest whether they're making progress.

That progress could be different things to different people. In an academic course, it might be that they're expressing a level of critical thought in their conversation. For the workplace, it might be that they're exhibiting that they've tried a new behaviour, and that maybe they're building a new habit, or something like that.

But actually being able to use the data that sits behind the social, to be able to suggest whether people are making progress and making achievement I think is an area that's hugely exciting, and one we're working towards. And it means that we can move away from the social being this add-on piece over here, you've still got to take the test, to be: forget the test, just work through this as you would, do it in the course of normal working and learning, and we can measure your progress from what you say and what you do.

At that point, that just takes over the world for me, and there's very little need to get into the multiple choice question world for an awful lot of workplace learning.

I'm still a huge lover of real workplace evidence and people reflecting on workplace evidence, because the language of people around their workplace evidence, it really reveals what their understanding is. You're almost talking about trying to automate that with sentiment analysis style technology.

Yes, and we've had some luck. I've found like 70,000 ways that don't work, and like two and a half that maybe do, a little bit. One of the advantages of having Curator is that we've got hundreds of thousands of social courses out there, and thanks to various kind companies who have given us rights to anonymously aggregate some of that data, we've been able to do quite a lot of research in terms of: what are the markers? What are the models?

Some of those are structural things; just the length that somebody types or talks is a reasonable indicator of some things. Some of it is thematic: what are they talking about in terms of their language? Has their language become intentional? Has it become reflective? Has it become historical? You change from, "Oh yeah, I read that and I understood" to, "I am going to do this next time" to, "I did it and this is how it went." There are grammatical structures to the way that people talk about things, which work independently of the subject matter, that we can start to pick into.
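As a toy illustration of the kind of structural and grammatical markers being described here (purely a sketch, not HT2 Labs' actual models; the marker phrases are invented for the example), one could tag posts by the stance their language expresses:

```python
# Toy sketch: tag a post with the linguistic stances it exhibits.
# The marker phrases below are illustrative, not a real research model.
import re

MARKERS = {
    "intentional": re.compile(r"\b(i am going to|i will|next time)\b", re.I),
    "historical": re.compile(r"\b(i did|i tried|how it went)\b", re.I),
    "reflective": re.compile(r"\b(i read|i understood|i realised)\b", re.I),
}

def classify_post(text: str) -> dict:
    """Return the stances found in a post plus a structural feature (length)."""
    stances = [name for name, pattern in MARKERS.items() if pattern.search(text)]
    return {"stances": stances, "length": len(text.split())}
```

Real models of this kind would of course need large corpora and much richer features, which is exactly the point made about the data set below.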

I don't necessarily think there's going to be a silver bullet in terms of us finding the algorithm that works every time, but what we are seeing is that there are increasing numbers of structures and small things that can build that portfolio of evidence that somebody has made a change, or is doing something differently, has adapted their behaviour in some way, shape or form.

So, for me, that again comes back to the Panda in terms of how am I going to service all of that? Because it isn't going to be a moment in time where you get 60%. It's going to be a journey and it's going to be lots of forms of evidence in there and it's going to be in the weight of that evidence that we suggest that you have met a goal or reached an outcome, as opposed to that single moment of testing.

Yes, and it's a richer possibility as well. It's interesting because I was thinking it through: in a classroom situation, teachers get a sense of people's language around things, and it's almost that same sort of analysing of language. You're lucky to have such a big data set to work with. We've looked at automatic marking of short-answer questions; we just haven't been able to get the huge data sets that you need to build up the possibilities with those things.

Yes, and this is an interesting area in terms of that ecosystem, coming back full circle. If you really want to take advantage of data, and do things in a personalised, adaptive way, you need an awful lot of data. There's a real question mark for me, whether many companies are ever going to get to the level of data that they need inside the firewall, to really do any of this stuff.

You look at things like IBM's Watson and things like that. Those sorts of services emerge because nobody, no individual company has scale, or the number of people doing any particular job role or anything like that, to really get to that. So you have to go to this enormous global scale in order to really get the data that means you end up with results that aren't liable to spurious outliers, completely screwing them up.

I think that again plays back to the ecosystem point: if folks really want to take advantage of some of the things that are coming down the line, they're going to have to get to a point where they're used to using more third-party services, because some of these things will not be achievable onsite.

Ben, I've just noticed what the time is; we're nearly out of time and I'd really like to ask a wrap-up question. If an organisation was starting to think about getting into a small ecosystem approach, what do you think the first step should be?

You're leading me to pitch a little guide that I just put together recently, called The Social Learning Playbook. So go download The Social Learning Playbook. My mindset is really about trying to think about how folks can imagine learning as a campaign, as opposed to a course. And on that sort of ecosystem route, just jot down on a piece of paper: what are the tools that we already have in the organisation today? They could be learning tools, they might not be learning tools. They might be tools of work.

Think about them in terms of three lenses. I call them Inspiration, Instruction and Implementation. So take all of those tools that you've got there and think: which of those could I use to inspire people? Well, maybe Yammer, or some other social tool that I've got in the organisation. Its job is to put out content, to put out conversations, maybe to inspire.

What about instruction? What have I got there? Well, I've got the classroom, I've got online, I've got elearning, I've got MOOC providers, I've got all sorts of stuff that could go in that instruction piece, there.

And implementation: I've got things like a Google Doc, working out in the open, maybe Slack, maybe some sort of way of building a team and working together around that. The sort of things you can do in a CRM with cases and opportunities and other things like that.

There are tools in your organisation already that can help you think about how you'd build a campaign: taking someone from inspiration, getting them to want to learn or to change; to instruction, getting them thinking about how they might use a new model; to implementation, thinking about how you can help them use that new model every day.

So I really think that people should start on the ecosystem route by looking at what they've already got, and then seeing how that maps. There might be some gaps; there might be some need to go shopping for some pieces. There will definitely be needs for some integration, be it the single sign-on at the front end or the data at the back end.

But I think a lot of this can be done with the tools you already have, and that's the start of the ecosystem, for me. As opposed to thinking, "We're going to run a course and it's going to be on the LMS," and that's where it begins and ends. It's definitely about getting out of this pre-and-post mindset, because it still pays deference to the idea that the middle bit is the important bit. In the pre-and-post sort of world, the middle bit is the most important, but we know, in the 'working is learning' sort of world, that it is not the most important. It's the prelude to actually going back to your desk and doing something different.

So we've got to stop paying deference to the pre and the post. Start thinking about campaigns.

Thank you, that was a really great conversation, Ben. I might leave it at that point and include the link to that playbook. Thank you very much. I think we could actually talk for hours.

Well, maybe next time Robin, maybe next time.