What is broken with assessment in eLearning, with Jenny Saucerman

This is the start of a series of interviews with the speakers from Learning Solutions 2019.

This podcast is with Jenny Saucerman on what is broken with assessment in eLearning. L&D is becoming more focused on performance-based learning, but the missing element is often assessment. There is often a gap between learning and on-the-job performance. Jenny has some great thoughts about this challenge and how simulations, particularly branching scenarios, can be used in assessment. The podcast will help you rethink the assessments in your eLearning modules, moving beyond multiple choice questions.

Subscribe using your favourite podcast player or RSS

Subscribe: Apple Podcasts | Spotify | Amazon Music | Android | RSS


What is broken with assessments in eLearning - transcript

Robin: Jenny, what do you think is broken with assessment in eLearning?

Jenny: So right now I think we are really stuck on this one particular method of assessment, which is multiple choice exams. And multiple choice exams were actually invented to measure really low-level reading comprehension in elementary school students. And so I think we, as professionals doing training for other professional adults, are trying to measure performance in a way that goes beyond basic reading comprehension skills. And so the current assessment method used isn't actually capturing what we care about, which is learners' performance on the task that we care about. So I'm suggesting we use assessment methods that are better matched for who our learners actually are and what they need to be able to do.

Robin: So there's lots of thinking around getting learning experiences closer to being an on-the-job task and driving the learning experiences from the decisions people need to make. There hasn't been as much attention put on assessment in terms of aligning it with on-the-job performance.

Jenny: Yes, you're totally right, and I think the reason is that multiple choice tests are just so easy to make. And our authoring software is optimised to build them. It's just been a very common method of assessment throughout our school systems. It shows up in our eLearning because that's our understanding of what assessment is, and because it's how we were assessed when we were in school.

Robin: Yes, and we generally copy what we've seen in the past. It's a natural part of mirroring. If multiple choice questions are what we have mostly seen in the past, they're what we keep on doing.

Jenny: Definitely.

Robin: There are different types of multiple choice questions as well: they can be used for knowledge assessment, but they can also be used for scenarios or multi-step scenarios. Is it multiple choice questions that are the issue, or is it how we're using multiple choice questions?

Jenny: The multiple choice format works really well when you have a large number of learners who need to demonstrate relatively low-level reading comprehension skills. Multiple choice exams do a great job at that, and they do a great job of standardising the assessment. They were invented back in a time when teachers had lots of leeway in how they graded their students, so there was some concern about fairness and bias in their assessments of students. They do an incredibly good job at assessing those sorts of skills. So if you need to test the reading comprehension of elementary school students, it's great. But as far as being an assessment that predicts performance on the job by adults, I don't really think it's a valid measure for that.

Robin: There's an interesting word that you're using there: 'predict'. Often when people start thinking about more valid, more reliable assessments they go straight to on-the-job assessments, or evidence and portfolios of on-the-job work, which quite often is not scalable. It takes more time and effort. So when you say predict, what sort of things are you talking about as a solution to this problem?

Jenny: I see performance assessments as being the middle ground. We certainly want to scaffold people's learning so they're able to review what's expected of them, and then they're able to practise that and demonstrate it in a relatively safe environment before they have to go out and do it on the job. For example, I've seen simulations of the intubation of patients. For the medical student who's learning how to do that, it's a much smoother learning experience to be able to practise the intubation process on a mannequin rather than having to do it on a human right away. It allows them to understand, and get feedback on, their timing and placement before they have to take the risk of doing it on a live human.

Robin: Medical education has a rich set of tools that I think the rest of workplace learning can learn a lot from. It's often seen as a very university-based approach, but it's also an on-the-job apprenticeship model that is highly reflective, has this lovely layer of practice, and has deep discipline around simulation as well. How do you think that sort of approach transfers into an eLearning module?

Jenny: Yes, I think we can definitely use our tools to simulate on-the-job performance. For example, one of my courses was a member service scenario for credit union staff, and the course was about how to handle difficult member discussions. A member comes up and they're angry because a cheque is being held or they have overdraft fees. We could ask them a multiple choice question like "Do you: A, accept the cheque; B, tell them it needs to be held; C..." Or we can put them in a branching video assessment that shows them, in video format, the member coming up to them and demanding to know what's going on, gives them a number of dialogue options, and lets the conversation flow from there to see how they do in that conversation. Which is a lot more similar to what they're doing on the job than a multiple choice exam would be able to assess.
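To make the contrast concrete, here's a minimal sketch of how a branching conversation like this could be modelled as data. This is not CUNA's actual implementation; all node names, dialogue lines and scores are invented for illustration.

```typescript
// A branching conversation modelled as a graph of nodes. Each learner
// choice routes to another node, and choices can carry a score that
// feeds the overall assessment. All ids, dialogue and scores invented.

interface DialogueOption {
  text: string;       // what the learner can say
  nextNodeId: string; // where this choice takes the conversation
  score: number;      // contribution to the assessment score
}

interface DialogueNode {
  id: string;
  memberLine: string;         // the line the (video) member delivers
  options?: DialogueOption[]; // omitted on terminal nodes
}

const scenario: Record<string, DialogueNode> = {
  start: {
    id: "start",
    memberLine: "Why is my cheque being held? I need that money today!",
    options: [
      { text: "That's just our policy.", nextNodeId: "escalated", score: 0 },
      {
        text: "I can see that's frustrating. Let me look at the hold and explain what's happening.",
        nextNodeId: "calmer",
        score: 2,
      },
    ],
  },
  escalated: {
    id: "escalated",
    memberLine: "Policy? That's not good enough!",
    options: [
      { text: "I hear you. Let me see what I can do right now.", nextNodeId: "calmer", score: 1 },
    ],
  },
  calmer: { id: "calmer", memberLine: "Okay... thank you for explaining." },
};

// Replay a sequence of choices (by option index) and total the score.
function scoreRun(startId: string, choices: number[]): number {
  let node = scenario[startId];
  let total = 0;
  for (const pick of choices) {
    const option = node.options?.[pick];
    if (!option) break; // terminal node or invalid pick
    total += option.score;
    node = scenario[option.nextNodeId];
  }
  return total;
}

console.log(scoreRun("start", [0, 0])); // recovers via "escalated": 0 + 1 = 1
```

Note how a weaker opening choice doesn't end the attempt; it just routes the conversation somewhere harder, which is the quality Jenny returns to later about being able to recover.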

Robin: So you showed this at Learning Solutions, and you also talked about the fidelity of the assessment. You used video to get the experience close to what it would really be like in the workplace. Interactive videos can be complex. What really drove that choice?

Jenny: Yes, that's a great question, and there's a lot of research regarding when higher fidelity or lower fidelity is a better fit for a particular assessment. That's certainly a consideration that you need to have. In this case, because part of dealing with angry members is self-regulation of your own emotions, we really wanted to give the learners a chance to have that simulated visceral response to a social conversation that's likely to happen. The way to get them to feel that response, and refer back to their training to know how to respond, meant getting professional actors to act out the role of someone who's angry, so you get that feeling from the video of just watching this person who's mad at you, rather than, say, a 2D image on the screen or just text. Because that's a part of member service: that emotional regulation you need to have can't be ignited or initiated in the same way by a 2D image or just text.

Robin: That sense of seeing the subtleties in someone's facial muscles around anger, and their posture, that reading of a person, is something you can do in video or in a face-to-face role play but can't do in a 2D illustration.

Jenny: Exactly, and getting that feedback from the member based on your dialogue choices is also very important. Being able to read people and see how they react, how their body posture changes, how their facial expressions change, or the intonation they use: all of that is very important in human communication, and it can't be portrayed as well through text or 2D images.

Robin: In that learning experience, did you also use branching scenarios or was that done in some other way?

Jenny: Yes, we certainly scaffolded the experience. So we had more didactic screens, just explaining "This is the process, this is the framework you use, here are some things to keep in mind", and then I provided them with 2D conversations. Because that's much lower fidelity, they're less likely to have that emotional response, and it's their first time trying it out, so I didn't want to overwhelm them. So we did have those 2D conversations with those characters, but I gave them immediate feedback rather than letting them branch. They could only proceed in the conversation if they selected the dialogue option that worked best. If they picked something wrong they got a big red banner and text explaining why that wasn't the best response. That immediate feedback is really good for a formative assessment, but is not necessarily something I wanted to include in the summative assessment.
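The gating Jenny describes, where the learner only advances on the best option and otherwise gets corrective feedback and tries again, might look something like this in an authoring tool's scripting layer. This is a hypothetical sketch; the option shape and callbacks are assumptions, not her actual build.

```typescript
// Gated formative feedback: the conversation only proceeds when the
// learner picks the best response; otherwise show the "big red banner"
// explanation and stay on this step. Names are illustrative.

interface FormativeOption {
  text: string;
  isBest: boolean;
  feedback: string; // why this option isn't the best response
}

function answerFormative(
  option: FormativeOption,
  showBanner: (message: string, correct: boolean) => void,
  advance: () => void
): void {
  if (option.isBest) {
    showBanner("That's the response that works best here.", true);
    advance(); // proceed to the next step of the conversation
  } else {
    // No branching away: explain the problem and let them choose again.
    showBanner(option.feedback, false);
  }
}

// Example usage with a console-based banner:
answerFormative(
  {
    text: "That's just our policy.",
    isBest: false,
    feedback: "Leading with policy can escalate the member's frustration; acknowledge the emotion first.",
  },
  (msg, ok) => console.log(ok ? "CORRECT:" : "TRY AGAIN:", msg),
  () => console.log("...conversation continues")
);
```

The design contrast with the earlier branching sketch is deliberate: here a wrong choice never changes the path, it only triggers feedback, which suits practice rather than a summative test.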

Robin: That's really powerful in terms of the scaffolding, from a couple of points of view. You removed some of the complexity of the real task, e.g. the emotional load, by not using video. But you've also guided people down a pathway towards making the right decision rather than it being more exploratory, which quite often branching scenarios are. Sometimes SMEs really wrestle with the fact that in a branching scenario people can make mistakes. You're really helping guide people through the first practice stage. You used the term 'formative assessment' for those; I avoid using the word 'assessment' completely in learning tasks because it's really practice rather than assessment.

Jenny: Yes, I consider it to be an assessment because it is giving them feedback on the decisions they make based on what they had just learned. It doesn't count towards their final score, but it does assess their performance and gives them that immediate feedback. That's why I would classify it as a formative assessment.

Robin: When you're giving good feedback in that type of assessment, you're giving learners a chance to self-reflect. Quite often when we're doing instructional design processes the sequence is "This one's the learning one, this is the formative one to allow you to practise, and this is the real one". You talked about branching scenarios as being predictive of on-the-job performance. What are your thoughts about the next step of being able to actually assess on-the-job performance at scale with digital learning?

Jenny: Yes, that's a great question. Really, part of this is going to be institutions identifying important performance metrics and integrating them into a learning system. I can't tell you what the measure for a given performance is going to be, because it depends on your institution. The institution will have to define it: perhaps as member ratings of how an interaction went. Maybe you have an observer to see how certain interactions went. Maybe you have a real-life role play, upping the fidelity of the simulation even further.

So that's the way you would have to identify those performance metrics, and then you would be able to tie them to the score on this assessment. You would see if the assessment is predictive of the metric you've identified that you care about. And that's what I'm hoping to get into. As for at scale, yes, that's an interesting question, because qualitative data like in-person observations might be hard to do at scale. So you might have to look at alternative measures. For example, I'm talking in a credit union context, so you could certainly look at data from chat assistance: a member pings you on chat, you chat with them, and then they're able to rate how you performed with them. I think that would probably be one way to do it at scale. But again it really matters... the institution is going to have to identify the performance metrics that really matter to them, and then use that as what is predicted by the assessment score.
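As a rough sketch of the validation step Jenny describes, you could check whether simulation scores track the on-the-job metric with a simple correlation. All the data below is invented, and real validation would also need to consider sample size, confounds and the reliability of the metric itself.

```typescript
// Pearson correlation between assessment scores and an on-the-job
// metric (here, hypothetical member chat ratings). A strongly positive
// value suggests the assessment is predictive of the metric.

function pearson(xs: number[], ys: number[]): number {
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / v.length;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < xs.length; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// Paired data per staff member: simulation score vs. average chat rating.
const simulationScores = [62, 75, 81, 90, 55, 78];
const chatRatings = [3.4, 3.2, 4.5, 4.1, 3.0, 3.9];
console.log(pearson(simulationScores, chatRatings).toFixed(2)); // 0.79 for this toy data
```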

Robin: What's exciting about the work you're doing, Jenny, is there are lots of people talking about needing to be more focused on those performance outcomes on the job. It seems to me what you're doing is bridging the gap between "multiple choice questions are not a great way of assessing what's happening on the job" and "how do we then integrate workplace assessments into the learning process".

At Sprout Labs we are building apps for people to be able to do assessments in the workplace. The process can be triggered when someone completes a module; a notification then goes to the learner's manager, who can then say "now you need to do a workplace observation of this person to see how they're going in the workplace". So the eLearning module no longer becomes an isolated bubble, is maybe the way to think about it.
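As an illustration of that trigger, a module-completion event could kick off an observation request to the manager. This is only a sketch of the idea, not Sprout Labs' actual product; the event shape and notification helper are invented, and a real system would more likely hang off an LMS webhook or xAPI statements.

```typescript
// When a learner completes a module, ask their manager to schedule a
// workplace observation. Event shape and transport are hypothetical.

interface ModuleCompletedEvent {
  learnerId: string;
  learnerName: string;
  moduleId: string;
  managerEmail: string;
}

async function onModuleCompleted(event: ModuleCompletedEvent): Promise<void> {
  const message =
    `${event.learnerName} has completed module ${event.moduleId}. ` +
    `Please schedule a workplace observation to see how they apply it on the job.`;
  await notifyManager(event.managerEmail, "Workplace observation due", message);
}

// Placeholder transport; swap in your mail or messaging service.
async function notifyManager(to: string, subject: string, body: string): Promise<void> {
  console.log(`To: ${to}\nSubject: ${subject}\n\n${body}`);
}

// Example:
onModuleCompleted({
  learnerId: "u-101",
  learnerName: "Alex",
  moduleId: "difficult-member-conversations",
  managerEmail: "manager@example.com",
});
```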

Jenny: Right. It's really cool that that's what you guys are doing, because it is a whole system. The entire point of training is to modify behaviour, and if we don't capture what the behaviour is then we have no way of determining the efficacy of the learning intervention.

Robin: Yes, it was interesting that during your talk at Learning Solutions I was sketching out a complete assessment workflow: "There needs to be a pre-assessment, possibly a self-assessment, then this experience where they practise and explore a high-fidelity assessment over a period of time, then automate pushing the workplace assessment back out as well". The organisation you're working with essentially supplies learning experiences to multiple different types of credit unions, is that right, Jenny?

Jenny: Yes. I work for CUNA, and CUNA provides support and services for credit unions all over America. It's interesting because part of my job is providing that training and those assessments at scale, which is why exploring simulated assessments is so fascinating: it would be great if we could send people to every credit union in America and assess them during a role play, but that's simply not feasible. So being able to do it in eLearning is, I think, absolutely fantastic for our learners.

Robin: You hinted that each organisation would need to figure out their performance metrics as well. I think generic content like you're working with is always challenging, and I think that's what's exciting about putting those sorts of richer simulations in a learning experience: you get a little bit away from the generic content.

Jenny: Yes. The feedback we've received from credit unions has been overwhelmingly positive. Apparently the staff love it, the managers love it. They say it's very engaging, and I think people are looking for that. And maybe it's just me personally, but staring at those exams always deeply stressed me out, and I felt like I could never really show people what I actually knew because I was so stuck on "Well, I'm torn between A and C, and I'm not sure which one it is, and it's going to be 50/50" and "Oh, should I change my answer?". That's not the experience of the branching video assessment I developed. You're able to recover in the conversation if you say something and notice that the member didn't respond well. I purposefully built it that way because that's like real life: generally, unless you say something outrageous, you can recover from the conversation. So it's not as high-risk and nerve-wracking, and you don't feel like a failure as much as I think a lot of multiple choice tests make staff feel.

Robin: I've got one more question for you. If someone is listening to this podcast and really wants to improve their assessment practice, what would be your biggest piece of advice?

Jenny: My biggest piece of advice would be to really deeply consider what is the behaviour that we are trying to change, what is the behaviour that we want to see and then work backwards from there to develop your assessment. So thinking about how to do that in content authoring systems, it's going to be completely dependent on your context. What I built here might not work if you're trying to teach learners physical manipulation of objects. So the context is going to completely determine the simulation and so that's where you need to start.

Robin: Thank you, that's a great piece of advice, and thank you so much for this conversation today, Jenny.

Jenny: Thank you, I’ve really enjoyed talking to you.