Thoughts and Reflections on The Assessment Summit
Episode summary
Assessment is something we don’t think or talk about enough in learning and development, but it's one of the keys to transforming the impact of what we do in L&D. This episode is a bit different from others – it’s Robin reflecting on some key takeaways and themes from this year’s Assessment Summit.
About The Assessment Summit
The Assessment Summit brought together some of the world’s most accomplished learning experts to share a smorgasbord of practical, actionable advice on assessment – and about 80% of the speakers have previously been on the podcast.
Key takeaways:
- Why do people have such negative associations with assessment? Paul Kearney, Enterprising Education Specialist at Enterprise Design Consultancy, presented an interesting talk on ‘The Myths of Assessment’ and reflected on why so many people have had bad experiences with assessment, often through exams. As L&D experts, we need to shift our thinking about the term – it’s about feedback and moving forward.
- Assessment design. Multiple choice questions are often our default assessment tool and strategy. Jenny Saucerman, Online Learning Instructional Design Manager at Credit Union National Association, demonstrated how scenario questions are a great way to predict someone's future performance. We also learnt about VR assessment tools and about embracing the idea that assessment happens over time.
- Assessment in the creative industries. Learning is quite often the process of solving a problem, and the evidence of the solution then becomes the assessment. It’s about not giving the answers, but using collaborative approaches and peer assessment.
- Digital assessment. Cheryle Walker, Founder, Innovator, Consultant and Facilitator at Learn LIVE Online, touched on technical considerations such as running verifiable virtual assessments. Dan McFadyen, MD at Edalex, spoke about using microcredentials as a way to connect, recognise and uncover skills. It was also fascinating to learn more about the rich source of data you can obtain from assessments, which Bikram Kawan, Software Engineer at Sprout Labs, delved into.
Segmented time stamps:
- 02:16 Why so many of us associate ‘assessment’ with fear
- 04:17 Measuring the impact of our work
- 06:12 Assessment design
- 09:07 Using VR
- 11:18 The role of assessment in creative industries
- 15:28 The opportunities and challenges that come with digital assessment
- 17:06 Leveraging your rich sources of data
Links from the podcast:
- Access all talks from The Assessment Summit
Transcript - Thoughts and Reflections on The Assessment Summit:
Robin Petterd:
I'm recording this a week after the Assessment Summit, which has given me a little bit of time to think about it. This episode is an attempt to bring together some of my thoughts and my learning from the summit. As you listen, you'll hear many familiar names – about 80% of the people who were part of the summit have been past podcast guests. You can get the full content from the summit in the content section of the Sprout Labs website. This includes the recordings, links to the Miro boards with notes and summaries, and links to the things people talked about.
I decided to do this summit because I really felt that assessment wasn't something we were talking about enough in workplace L&D. I expected a fairly conservative level of signups, and we actually got double that. I was really surprised, and really happy, that so many people were interested in talking about this particular area.
The way I've organised this particular podcast is around themes and ideas, picking out moments from each of the sessions. It's not a session-by-session summary, and some sessions I only talk about really briefly. They were really good sessions; they just didn't quite fit with this particular summary.
During the live interview I did with Paul Kearney, he started by asking the audience a question: what does the word 'assessment' bring to mind? For me, it brought to mind fear – fear of exam-type situations, of being measured. I've talked to a number of clients since the summit about their thoughts. Paul picked up on this idea, talking about the baggage of past experiences, quite often negative, that we carry around assessment. Assessment doesn't actually have to be hard, especially in workplaces. We want people to succeed. We want people to be competent, to be able to do their jobs. I think this is a really interesting shift in thinking about assessment. I hope to release Paul's interview as a podcast in the near future as well.
If we go back to the Latin origin of the word 'assess', it means 'to sit by'. That points to feedback: feedback from learners, feedback to trainers, and feedback for reflection – to sit by, to think, and to move forward.
This summit actually started with this frame of thinking about learning and reflection in our workplaces. Ray Jimenez opened the summit with a session that revolved around the use of his Situation Expert tool, and the way that reflecting on what you're doing and on problems gives you the chance to assess, look at and measure what you're doing, and then plan for change. There's a deep linkage between doing, assessing what you're doing, reflecting, and planning and moving forward.
One of the themes I've talked about a few times on the podcast and in other content is the importance of thinking about the impact of what we're doing in workplaces. For me, there is a direct linkage between thinking about learning in terms of a business outcome, and that business outcome happening because of a change in behaviour in our people. The thing is, we can only know whether that behaviour change has actually happened if we assess people. We give them targets to be assessed against, and then they know what that behavioural change is. I've realised that the missing link between business impact and learning is this assessment piece.
We often copy what we've seen in education in the past. Education has actually changed quite a lot, but we still keep copying our own experiences, which may be 20 years old or longer (or shorter in some cases). But in our workplaces, we're in a really different situation to education institutions, which are mostly doing two things. First, they're awarding qualifications, so assessments are actually barriers to qualifications.
Second, most of the time (but not always) the learners don't have a workplace, so they're talking and thinking about knowledge rather than learning by doing. I know I'm making great generalisations here. In actual fact, Nick Petch reminded me of my own background at university and my own creative learning. In studio-based creative learning in design and the arts, it's quite often self-directed learning – learning by doing – and the assessment is based on creative outputs. I hope to do a follow-up podcast with Nick on that particular topic.
A lovely quote from Cara North: if we have a hammer, we see nails. If we have a quiz tool – which most of our LMSs and authoring tools have – we only see the assessment possibilities that a quiz tool gives us. Too often in our practice, multiple choice questions are our default assessment tool and strategy. If we start to think about assessment design as collecting a series of evidence that sufficiently demonstrates someone's ability to do something now or into the future, then in most cases a single multiple choice question is not going to be adequate. Jenny Saucerman talked about the person who invented and first worked with multiple choice questions: they were really designed for testing primary school children on literacy and numeracy – very different types of skills from what they're being used for now.
The other thing is that multiple choice questions are not easy to write. Cara got us doing a lovely activity in which we essentially had to deduce the answers to a series of questions. This showed that quite often we are actually measuring people's deductive abilities with multiple choice questions rather than their ability to do something, and it also showed a whole lot of bad practices: things like the longest choice being the right one, 'all of the above', or the correct answer always being the third option, C.
Jenny Saucerman gave a great presentation – a mixture of theory, practice, and a deep dive into tools. Jenny talked about her frustration that multiple choice questions really only measure knowledge, and knowledge is only part of predicting future behaviour. What she has switched to for her assessments is scenarios. Sometimes these are branching scenarios; sometimes they're standalone scenario questions. They're a great way of predicting someone's future performance. Instead of being knowledge-focused, they become focused on what people need to do and what decisions they need to make.
For me, one of the other keys is that, even with scenarios, we need to start separating our learning experiences from our assessments. The assessments happen over time – we space them out – so we can ask: is this person transferring and applying this knowledge in the workplace? Can they still recall and apply this behaviour when they encounter it in three months' time in a simulated scenario?
Adi Stephan finished the day talking about simulations and VR, and about how VR – especially in very physical situations, and a lot of his examples were in mining – lets you get learners to perform in something close to the real situation. You can also put learners into situations where they would normally be unsafe and check that their behaviour is correct. This is just great because it's a total move away from knowledge assessments.
One of the keys for me came from a question I asked Cara North at the end of her session. She talked about the fact that really good assessment design comes down to your needs analysis. When you do better needs analysis and really focus on the performance outcomes, quite often what falls out is different, richer types of assessment tools: performance checklists and evidence of what people do in the workplace, rather than multiple choice questions. I'll do some follow-up on this as well.
Jo Cook talked about formative assessment in virtual classrooms. I asked her a question about how summative assessments work in virtual delivery. What really fascinated me was her comment that, like Cara said, when you start to think about things differently you often go: "Actually, it's probably not appropriate to do the assessment in the virtual classroom. We haven't got a quiz tool – we haven't got our hammer. So let's go back out into the workplace: get someone to do something, collect some evidence, and that can be used as the assessment."
It's interesting that the performance checklist is a really good example. Especially if it's in a spot where it's something that the learner is doing anyway. They're often quite time-consuming. People think about them as form-filling. At Sprout Labs, we've actually got a mobile assessment app that works with Glasshouse, that allows this process of workplace observations and checklists to be digitalized and automated.
As I said, Nick Petch reminded me of assessment in the creative industries. In these areas, assessments are often based on the outcomes and artefacts of a problem – for example, what's your design that solves the problem? The learning is quite often the process of solving that problem, and the evidence of the solution then becomes the assessment. It's a guided process.
In Nick's session, he talked about a couple of examples where he built programmes that were quite often project-based and incredibly social, where this process of people doing activities then start to add up to the behaviour that's changed. It was quite nicely scaffold. Nick also reminded me that you have to be really careful when you start to ask learners for evidence of process. It can actually start to sometimes define what the outcomes are. And in design and creativity, that's a really bad thing. In some spots it's actually a really nice thing because you want people to actually follow a process every time.
Often our stakeholders are really negative about these ideas of using workplace evidence as part of assessment. They see it as a negative for two reasons: it means more work for the learners, and then someone has to check it. I actually think this is a great opportunity to rethink a couple of things, and to re-look at and build on what could be strengths of different approaches.
For me, there are two ways of getting past this viewpoint. First of all, it's a matter of philosophy. Behaviour change is not easy; learning does involve work, so the learner having to do more work is actually a positive. If it's just a tick-and-flick exercise, maybe it shouldn't be a learning experience at all – maybe it's not something you need people to spend time on. Is this significant enough to be assessed? If it is, then you can justify more sophisticated approaches than a multiple choice question.
The other thing I think is really important, about this whole 'someone needing to check it' objection, is collaborative approaches to assessment. By collaborative, I mean things like peers assessing each other, a manager being involved in assessment, and self-assessment. Work is a team activity, and that fits so nicely with this sensibility of peer work and collaborative assessment. We don't do things individually, and there are lots of possibilities here to expand the way we work.
Robert Kienzle gave a great example of using self-assessment: a team assessment tool that could be completed beforehand. The live virtual classroom sessions then become reflective sharing and planning sessions with the team. It switches the approach – it becomes assessment for learning rather than assessment of learning.
Mick Gwyther talked about peer assessment. He used an example where he needed to integrate soft skills learning and assessment into a course that had previously been technically focused on digital design. He used a nice process where the learners first thought about what the benchmarks for these types of skills were going to be, then designed the tasks themselves. This involved a lot of meta-thinking for the learners about what these skills actually look like. Once they had built the tools and tasks, they used them to give feedback to each other and evaluate each other against those benchmarks.
Digital assessment has lots of possibilities and challenges. I've already talked about the challenge of quizzes being knowledge-focused, and about how using them as the only type of assessment is probably not sufficient.
Cheryle Walker also reminded me of some of the technical challenges. Is the person doing the quiz actually the learner? Some of the ways you might get around this include using multiple types of verification, a bit like the multi-factor authentication we use to get into systems. In the past, there's also been a strong culture of assessment centres, where people have to physically come in to do a quiz. This can be done in web conferencing as well: you can say, "At 9:00 AM on Wednesday, we will be doing this digital assessment. Everyone also needs to join an MS Teams or Zoom session so that we really know it's you doing this."
Daniel McFadyen talked about microcredentials and microcredential platforms as a way of connecting, recognising and uncovering skills. Part of what Dan introduced me to was rich skills descriptors – and there are more of these out there than I thought. If you're thinking about a capability framework, or about how you're going to assess your learners, they're a really good place to look first.
Assessment can be a rich source of data. Bikram Kawan, one of the Sprout Labs staff members, talked about using xAPI. Jingjing talked about the way you can use tagging to bring together different types of assessments across different courses and map them to outcomes. It might sound like a really simple idea, but it's something I don't see us applying enough in our learning management systems. Quite often, assessments are fragmented; we're not thinking about how things link together to values, behaviour outcomes or business outcomes. Tagging different types of learning resources and assessments is a way of bringing things together.
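To make the tagging idea concrete, here's a minimal sketch in Python. It isn't from Jingjing's session – the record layout, tag names and field names are all assumptions for illustration – but it shows how results tagged by outcome can be rolled up across courses:

```python
from collections import defaultdict

# Hypothetical assessment results, each tagged with the business
# outcomes it provides evidence for. All names here are made up.
results = [
    {"learner": "lee", "assessment": "sales-call-scenario", "passed": True,
     "tags": ["customer-conversations", "product-knowledge"]},
    {"learner": "lee", "assessment": "product-quiz", "passed": False,
     "tags": ["product-knowledge"]},
    {"learner": "kim", "assessment": "sales-call-scenario", "passed": True,
     "tags": ["customer-conversations"]},
]

# Roll fragmented results up by tag, so evidence from different
# courses and tools can be read against a single outcome.
by_outcome = defaultdict(lambda: {"passed": 0, "total": 0})
for record in results:
    for tag in record["tags"]:
        by_outcome[tag]["total"] += 1
        by_outcome[tag]["passed"] += int(record["passed"])

for tag, counts in by_outcome.items():
    print(f"{tag}: {counts['passed']}/{counts['total']} evidence items passed")
```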
Bikram talked about the differences between xAPI and SCORM and how with xAPI you can start to collect so much more detail about what your learners are doing within a learning experience and assessment.
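For anyone who hasn't worked with xAPI, the sketch below shows the extra detail a single statement can carry and how it gets sent to a learning record store (LRS). The endpoint URL, credentials and activity id are placeholders, and this is a generic example rather than the specific setup Bikram demonstrated:

```python
import requests  # third-party HTTP client: pip install requests

# The shape of a single xAPI statement: who did what, to which
# activity, with what result. Where SCORM typically reports a
# completion and a score, xAPI can record each interaction like this.
statement = {
    "actor": {"name": "Example Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en": "answered"},
    },
    "object": {
        "id": "https://example.com/activities/scenario-question-3",
        "definition": {"name": {"en": "Scenario question 3"}},
    },
    "result": {"success": True, "response": "option-b"},
}

# Statements are POSTed to the LRS's /statements endpoint with the
# xAPI version header and whatever auth the LRS uses (Basic here).
response = requests.post(
    "https://lrs.example.com/xapi/statements",
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("LRS_KEY", "LRS_SECRET"),
)
response.raise_for_status()
```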
You can also start to collect assessment data beyond the learning management system, so it can be hooked up to simulators or business systems. In actual fact, if the metric for a salesperson being competent after a course is their sales, xAPI could be hooked into Salesforce to measure that person's performance and report back to your learning systems. Once you have that data in, you can do different types of analysis. You can look at learners' performance, and you can drill in and ask: is this particular activity working? Are people actually answering it correctly? Then you can get the data on how you might change and improve things in the future.
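Pulling that data back out for analysis is essentially a query against the LRS's statements endpoint. Here's a hedged sketch, again with placeholder URLs and credentials (and ignoring pagination via the 'more' link for brevity):

```python
import requests

LRS = "https://lrs.example.com/xapi"  # placeholder LRS endpoint
AUTH = ("LRS_KEY", "LRS_SECRET")      # placeholder credentials

# Fetch every 'answered' statement recorded against one activity.
resp = requests.get(
    f"{LRS}/statements",
    params={
        "verb": "http://adlnet.gov/expapi/verbs/answered",
        "activity": "https://example.com/activities/scenario-question-3",
    },
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=AUTH,
)
resp.raise_for_status()
statements = resp.json()["statements"]

# Is this particular question working? Are people answering correctly?
outcomes = [
    s["result"]["success"]
    for s in statements
    if "success" in s.get("result", {})
]
if outcomes:
    print(f"{sum(outcomes)}/{len(outcomes)} correct responses")
```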
xAPI is one of the things we work with at Sprout Labs. It can be used for building automations as well – doing things like sending managers, or the learners themselves, different types of email messages based on their behaviours. Bikram's going to do some follow-ups on things like integrating and working with xAPI in tools like Power BI.
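In outline, an automation like those reminder emails might look like the sketch below. The function and data are hypothetical – this isn't a Glasshouse or Power BI API – but it shows the general shape: inspect statements, find the behaviour of interest, and hand the result to your messaging system:

```python
def overdue_learners(statements, required_activity, enrolled):
    """Return enrolled learners with no statement yet for a required
    assessment – candidates for an automated reminder to them or
    their manager."""
    completed = {
        s["actor"].get("mbox")
        for s in statements
        if s["object"]["id"] == required_activity
    }
    return [learner for learner in enrolled if learner not in completed]

# Hypothetical usage: statements would come from the LRS as in the
# earlier sketch; the result feeds whatever email system you use.
pending = overdue_learners(
    statements=[],
    required_activity="https://example.com/activities/safety-checklist",
    enrolled=["mailto:lee@example.com", "mailto:kim@example.com"],
)
for learner in pending:
    print(f"Send reminder about {learner}")
```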
The Assessment Summit, for me, feels like the start of a journey on assessment. I'm going to keep doing follow-up interviews on the podcast over the next little while with some of the guests, and with some of the people who couldn't make it to the summit, and I'll also be releasing other content around assessment. In this particular recording, I've just given my highlights, some summaries and some thoughts. As I said in the introduction, the full recordings and the transcripts are in the content section of the Sprout Labs website.