Introduction to xAPI with Andrew Downes

An interview with Andrew Downes from Watershed LRS about xAPI.


Links from the podcast

 

Download: Interviews and expert advice from the leaders in using xAPI (aka Tin Can)    

This download is a selection of interviews from Sprout Labs’ Learning While Working podcast with leaders in the area of using Experience API.

The download includes:

  • An introduction to xAPI, with Andrew Downes from Watershed LRS
  • The evolution of xAPI, with Andrew Downes from Watershed LRS
  • Using xAPI with Totara LMS, with Hamish Dewe from Orion Health 
  • Social learning and xAPI learning systems, with Ben Betts from HT2 Labs
  • What is xapiapps? with Nick Stephenson, CEO of xapiapps

There’s more than two hours of audio and over 18,000 transcribed words. The interviews include dozens of examples of how xAPI is being used in organisations today.

Transcript

Robin:
It’s Robin Petterd here, the host of the Learning While Working podcast. This is the first in a series of podcasts that are a detailed exploration and examination of what’s happening with xAPI at the moment.

The podcasts are going to be a mixture of interviews with learning record store vendors, hopefully with people who are providing data to learning record stores, and then some case studies as well.

It feels like 2017 has been the year when xAPI has become mainstream. At Sprout Labs we’re finding people are either asking for it, or when they hear about what it is they go, “Yes, that is something we want, and it’s useful for us.”

In this podcast I’m talking with Andrew Downes from Watershed LRS. Andrew has an amazing wealth of knowledge and he mentions a number of resources that we’ll put into the blog post that goes along with this podcast. He talks about Watershed LRS and gives some advice on how to get started. I’ve also done a second podcast with Andrew, which talks about the potential and history of xAPI as well.

As we’re doing this series of podcasts, we’re also reaching out to Learning Record Stores, vendors and developers to find out more about what each one’s capable of. So it’s going to be interesting at the end of this to be able to look back and to possibly do some comparisons. I hope you get a lot from this podcast and I’m looking forward to this series of podcasts on xAPI.

Robin:
Andrew, welcome to the Learning While Working podcast.

Andrew:
Thank you.

Robin:
When we first met, you had this fantastic title of xAPI Evangelist. You're now working with Watershed. What exactly is your role?

Andrew:
Yes, and now my title is Learning and Interoperability Consultant, which acts as quite a conversation starter, because people often don't necessarily know what interoperability is, let alone what an Interoperability Consultant is. My role is really to wear quite a few different hats within Watershed. One of my roles is working on client implementation projects, where I'm supporting clients, third parties and vendors, perhaps such as yourselves, who we're working with to get data into Watershed, to bring learning data from wherever learning is happening all together into one place.

My job is helping to shepherd that data into the fold, as it were. I also get involved with supporting clients with configuring Watershed, particularly when they get on to using some of our more advanced features. So that's one of my hats. I also get involved in a lot of our marketing efforts. You'll notice if you go to our blog, Watershed.com/blog, a shameless plug, that we've got loads of useful content there. I have written a lot of that content. That's where the learning consultant hat comes from. There's a lot of content around learning evaluation, around learning analytics, and we're helping people to think about the theoretical framework that Watershed sits in.

Watershed is a piece of software, a tool that lets you do a job, but you also need to have that theoretical framework around it: you need to think about how you're doing data analytics, and you need to be thinking about the effectiveness of your learning. So a lot of that content is designed to help people do that, whether or not they're Watershed customers. We try to write it in a neutral way so the content is going to be useful for everybody.

Robin:
That's one of the great things about having you on as our first guest in this series of podcasts around xAPI, because Watershed is really the leader in that whole area of learning analytics. I might ask some more a little bit later about what you mean by those theoretical frameworks, but I'll get started with another question that I think will actually lead nicely into that, because I'm really interested to explore what you mean by that.

It sounds like you've helped a lot of people to get started with xAPI, what are your gems of advice about how to get started?

Andrew:
Great question. So, I mentioned our blog a moment ago, and we do have a lot of content on there, but really it's designed to help people get started. One blog series we've done recently is around the topic of learning analytics, and there are five steps, actually six: we call it the five steps, but there's a sixth bonus step that I'll mention. There are five steps to get started with learning analytics, and you can actually just do the first step, or just the first couple of steps. Each step takes you a little bit deeper into more robust learning analytics.

So, the first step is about gathering your data. It's about looking at what learning is happening in your organisation already, what data you're already capturing, and just bringing that together. That can be pretty straightforward to do; you might have products that already support xAPI. If you're using Watershed, we have functionality where you can take a database dump in the form of a CSV file and import that into Watershed. There are tools like Zapier that allow you to pull data from lots of different products into your learning record store. So gathering your data is that first step.
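(To make that first step a little more concrete, here is a minimal sketch of what sending a single xAPI statement to a learning record store looks like over HTTP. The endpoint URL, credentials, and activity details below are placeholders, not Watershed specifics.)

```python
# A minimal, hedged sketch: posting one xAPI statement to an LRS.
# The endpoint and credentials are placeholders, not real values.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi"   # placeholder LRS endpoint
AUTH = ("lrs_key", "lrs_secret")                # placeholder Basic-auth credentials

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/intro-to-xapi",
        "definition": {"name": {"en-US": "Intro to xAPI"}},
    },
}

response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},  # version header required by the xAPI spec
)
response.raise_for_status()
print("Stored statement id:", response.json()[0])   # the LRS returns a list of statement ids
```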

Then take some time to get to know your data, to look at the data you've actually got. Figure out if there are gaps in the data, figure out if there are errors in the data, figure out what looks right and what doesn't look quite right, and perhaps take some time to improve that data gathering. The next step we call operationalising your data. This is where you actually start to use the data. You start to build it out into reports, to analyse the data and to look at what learning is actually happening within your organisation.

And then the next step, explore your data, is where you go a little bit deeper and don't just look at what's happening, but look at why it's happening, and start to understand why it's having a certain impact on the business. Is learning having that impact? Why is it that people are not attending classes on a particular day of the week? You're just starting to dig into the data and look for interesting things: anomalies, outliers, things that are perhaps unexpected. Or even validating things that are expected.

Our fifth step, build on what you've learnt, is about iterating that whole process again. So once you've gone through that process of collecting data, getting to know it, operationalising it, exploring it, look at "Well okay, what additional data can I capture?" And go through that process again, build on what you've done.

The sixth thing, which isn't really a step but something you should be doing throughout the whole process, is showing off your data: sharing your insights with the business, getting feedback from others, and showing off the successes of both the data program and your learning programs as revealed by the data.

So that's the five-step process, and we go into that in a lot more detail with very practical tips on the blog. A few other tips that I want to mention: look to case studies for inspiration. We've got a lot of case studies on our site; if you go to watershedlrs.com, click on resources, then click on client stories, you'll see a number of case studies there. What I will say about the case studies is that that's very much a growing list. A lot of the time the case studies are a little bit out of date, in the sense that maybe it takes a few months to implement the project, then you've got to get permission from the client's marketing department to share the story, and all these things.

So there are actually a lot more case studies that we don't have on our website, lots more exciting things going on, and perhaps even more advanced uses of Watershed and of learning analytics. Our case studies fall into a few different buckets. We have a number of clients that are looking at utilisation of different learning experiences and different learning resources across their organisation, whether that's content in the LMS, classroom sessions, content that they've purchased from things like GetAbstract or Lynda.com, or just content from across the web, maybe tracked by a learning experience platform like Pathgather, those sorts of things.

So clients have this learning happening everywhere, and they don't necessarily have insight into what learners are doing, what's actually being used, what isn't being used. Trends in uses, is it increasing, is it decreasing, those sorts of things. Which departments are more active, or using different elements. So by bringing all that data together they can just get a really clear picture of what's happening.

We have other clients, a great example is Medstar, who are listed in our case studies, who are looking at the impact of learning on job performance metrics. In Medstar's case, their job is resuscitating people in the event of what they call a ‘Code Blue’, which is where your heart stops, you go into cardiopulmonary arrest, I think the term is. They have a series of performance metrics, and they know those benchmarks have an impact on survival rates for patients, so it's pretty important stuff. What Medstar is doing is looking at the learning experiences, and then also testing those benchmarks to see: does the learning have an impact on those benchmarks? And particularly, which of the learning experiences have the biggest impact, really digging into the detail there.

We have other clients who are already doing reporting, but they're doing that learning evaluation using spreadsheet software, maybe Excel, maybe something else, and often it's incredibly time consuming. So every month they bring all their data together into this mammoth spreadsheet that nearly crashes their computer just to open it up, perhaps. These huge spreadsheets, and lots of organisations have these kind of super spreadsheets. It takes a long time to wrangle all of the data, get it into the reports and get it looking right. Then they copy that into PowerPoint and they present it once a month. So they're getting this kind of snapshot every month of the data, and it's taking huge amounts of effort.

By using the Experience API and a learning record store, they're able to automate that process, so that at any time, not just once a month, they can log into the learning record store and instantly see the reports with up-to-date data without doing any work. Obviously there's some initial work to set up those reports, which they work on together with us to implement. But once that initial work is done, they can just log in and instantly see the data. So it's saving them huge amounts of time, and because they're saving time, they've now got the time to really dig into the data, explore, and get some more insights.
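(As an illustration of what that automation can look like, not Watershed's own reporting, here is a hedged sketch of querying an LRS's standard statements API for fresh data on demand. The endpoint and credentials are placeholders.)

```python
# A hedged sketch: pulling up-to-date completion data from an LRS on demand,
# instead of rebuilding a monthly spreadsheet. Endpoint and credentials are placeholders.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi"   # placeholder LRS endpoint
AUTH = ("lrs_key", "lrs_secret")                # placeholder credentials

params = {
    "verb": "http://adlnet.gov/expapi/verbs/completed",  # only completion statements
    "since": "2017-01-01T00:00:00Z",                     # everything since the start of the year
    "limit": 100,
}
response = requests.get(
    f"{LRS_ENDPOINT}/statements",
    params=params,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
statements = response.json()["statements"]
print(f"Retrieved {len(statements)} completion statements")
```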

So, follow the five steps is the first piece of advice. Look to case studies for inspiration, that's the second piece of advice. The final piece of advice is to use off-the-shelf tools. A lot of the time when people talk about xAPI, maybe you go to a conference and there's a session about xAPI, they're talking all about writing code, doing custom development, custom programming. Even many of the case studies on our website (I mentioned that some of the case studies are fairly old by now, because it takes a while to get them out) are talking about writing custom code. But actually today in 2017, and even going back into 2016, there are more and more off-the-shelf tools that have support for xAPI. You can run an xAPI project without having to write any code at all.

Now that doesn't mean you can do that with any tool. There are obviously some products that don't have xAPI support, and if you're using those products there might be some need to write code or do something a little bit technical. But if you're willing to pick your tools based on the quality of their xAPI support, it is possible to do everything without writing any code. It can be easier to get started than you might think. We have a list of what we call certified data sources on our website, which you can have a look at. At a few conferences, we've been running a thing called xAPI Go, which is a game where we get vendors involved to link up to Watershed and show off their integrations, so you can actually see these integrations with these different products. You go around the different stands at the conference, try the activities, and then you see the data coming into the learning record store.

We generally see ten or fifteen tools involved in those activities when we run them. There are lots of tools that you can just plug-and-play together.

Robin:
That last one is a really sweet idea; I might follow up with that and do something else later. There are some really interesting things in that five-step process, Andrew. I think stepping it out a little bit like a maturity curve, with that first layer of getting your data in, and then actually starting to think about what you're going to do with it, is really nice, because then you're not jumping to the complicated bit.

I've heard some people talking about the fact that xAPI data is more about what people do when they learn, rather than why things are happening. You made a comment that as you start to look at the data you can actually look at the "why", if you think about it a certain way.

Andrew:
Yes, so the "why" comes into play when you start comparing data. So let me give you a couple of examples.

Let's say that we have a chart in Watershed, a simple line chart, and a line chart is actually really good at looking at activity over time. So we might be looking at logins to a particular platform over time, or downloads of files, or whatever the learning activity is; we can count the number of times it happens, or the number of people that do it, on a given day, or week, or month. And then we plot that as a line chart, so you can very visually see levels of activity. So if you're looking at a line chart and you see, "Oh, there's a spike in activity at this particular time," you then obviously ask the question, "Okay, well why was there a spike in activity?" And so you can go and have a look at what you did in the learning department, or what was happening in the organisation.

And some of that might be happening within the tool itself, or some of this research might be happening outside of Watershed, where you're looking at your calendar, perhaps in Outlook, or you're just asking people in your organisation, going and talking to people. But you might know, "Okay, well at that particular point in time we ran this particular email campaign to promote the tool. And we can see we ran the email campaign on this day, and then there was a spike a few days later or immediately afterwards. We think, therefore, that the email campaign probably had an impact." You can then look at the other dates where you ran email campaigns and see if there were corresponding increases at those times. And perhaps you also look at the other ways you're promoting the tool. Maybe you have something on the intranet, or there are other methods of promoting the learning content. And again you can see which of those have the biggest impact on the spikes.
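(As a purely illustrative sketch, with made-up statements rather than Watershed data, this is the kind of per-day activity count that sits behind a line chart like the one Andrew describes.)

```python
# Illustrative only: turn raw xAPI statements into a per-day activity count,
# the series you would plot as a line chart to spot spikes. Timestamps are made up.
from collections import Counter

statements = [
    {"timestamp": "2017-03-01T09:12:00Z"},
    {"timestamp": "2017-03-01T14:40:00Z"},
    {"timestamp": "2017-03-03T08:05:00Z"},
]

# xAPI timestamps are ISO 8601, so the first ten characters are the date.
activity_per_day = Counter(s["timestamp"][:10] for s in statements)

for day, count in sorted(activity_per_day.items()):
    print(day, count)   # an unusually high count on one day shows up as a spike
```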

Another type of report we have in Watershed is our correlation card, which is where you can compare two different metrics. So, for example, Medstar, who I mentioned, are using that to look at the benchmarks achieved by a particular person, the amount of time it takes before they zap people with their paddles when they're on their teams doing these simulations, and then comparing that to whether or not they used this mobile app, a mobile app simulation of the defibrillator, in the last year. The idea is that people who have used the mobile app, or are using the mobile app, are more likely to perform better on those metrics. So they can plug that into a correlation card and see how closely correlated those two things are, and that starts to give them an idea of, "Is this use of the mobile app related to how well people do?" So they're starting to understand a little bit of why people are performing well, or they're testing theories they have about why people are performing well.

So it's when you're comparing the data, that's where you're able to get into that question of "Why?"
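(Again purely as a hedged illustration, not Watershed's implementation, a correlation card boils down to a comparison like the one below; the numbers are invented.)

```python
# Illustrative only: how strongly is one metric (minutes spent practising in an app)
# related to another (seconds until the simulated defibrillator is used)?
from statistics import correlation  # Pearson correlation coefficient, Python 3.10+

app_minutes      = [0, 5, 10, 20, 30, 45]        # hypothetical practice time per person
seconds_to_shock = [140, 130, 118, 95, 90, 72]   # hypothetical simulation benchmark per person

r = correlation(app_minutes, seconds_to_shock)
print(f"Pearson r = {r:.2f}")  # a strongly negative r suggests more practice goes with faster response
```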

Robin:
Those are a couple of really nice examples. Occasionally you have people ask how to build a culture of data-driven learning and evaluation, and essentially as I'm listening to you I'm sitting here thinking, "Oh, what you're talking about is being curious": asking questions of the data that's coming through, of other behaviours, and of what the effects are. It's really interesting, because the data-driven approach doesn't always come naturally to L&D people, but curiosity does. It's almost as if, by making it visual with some of the reporting tools, you harness that.

Andrew:
Yes, what we find is that Watershed can never answer all of your questions, because as soon as we've answered one set of questions, it's like chopping the head off the Hydra: twice as many questions pop out. Once people start to dig into these things and start to get these insights, it really does become a little bit addictive, and they want more insights to have even more impact and answer even more questions. So I think you're right, that curiosity is definitely there, and once you get started you can't stop.

Robin:
Andrew, you've given some great advice about getting started with xAPI. Thank you so much for joining me on the Learning While Working podcast today.

Andrew:
Yes no problem, thanks for having me!

 

 
