Measuring the impact of learning ecosystems with Steve Foreman

This podcast is an exploration of measuring the impact of learning ecosystems with Steve Foreman, the President of InfoMedia Designs. The podcast starts with Steve outlining his ecosystem model before he talks about measuring impact. Steve describes the strategic work that L&D needs to be doing and how it leads to a layered approach to measurement. The layers are business metrics, then performance metrics, then learning metrics. Together, these layers form a framework for learning dashboards.

Subscribe using your favourite podcast player or RSS

Subscribe: Apple Podcasts | Spotify | Amazon Music | Android | RSS

Robin: How would you describe a learning ecosystem?

Steve: So much of the work we do requires us to master a lot of information; we're constantly learning in our professional lives and in our personal lives. And we don't just learn through training, obviously. We learn in many ways: through using tools and resources, through work experiences and practice, through self-study and research, networking with other people, seeking guidance from experts, observing others, in all kinds of ways.

So, my view on ecosystems is that L&D, learning and development organisations in general, need to support some of these other ways in which people learn and move beyond the classroom, or even the virtual classroom. We need to make our solutions more direct, more effective, and more instantly available.

The learning and performance ecosystem is a framework, or a model, that places people in the centre and surrounds them with tools and resources that help them perform their work. And I like to think of it in terms of six components, or six sets of tools; it's kind of like a bigger toolbox than just training. So, obviously one of the tools in our toolbox is structured learning, where we're helping people develop skills and knowledge, meet compliance requirements, get to baseline proficiency, and so on.

But there are other tools, like social networking and collaboration, which allow people to exchange ideas and experiences and crowd-source questions. Access to experts, facilitating connections between people and experts in our organisations, where people can ask questions, get guidance, and avoid the typical problems they may encounter if they don't meet with an expert.

Knowledge management, where people can look up information and access tools and information quickly on their own. Performance support solutions, which help people complete processes and tasks and make decisions and so on. And then talent management, which allows people to assess themselves for growth and development planning, improve in their current job, or prepare for a new job.

So those six components are the parts of the ecosystem that we can bring to bear. We don't have to use all of them. Any one of our solutions can tap into just one tool, multiple tools, or all of the tools in combination. It depends on what the problem is that we're trying to solve and how we apply them.

But the learning and performance ecosystem is enabled by technology, processes, content, and a set of skills we have to develop in our organisations. Those skills are things like performance consulting and human performance analysis, more holistic design thinking, and continuous needs analysis.

Robin: I really like that sentiment about putting the learner at the centre and having things around the outside. The 70-20-10 learning model is popular, and when you start working with it, it quite often needs to be this richer combination of things, as you say. One thing I like about your definition is bringing in the talent management side as well. In learning, we have real problems measuring impact; I think it's a wicked problem. Do you think it's possible, when you're working with so many moving components, to actually measure their impact?

Steve: That's a great question and a great point. And absolutely yes. We all know Donald Kirkpatrick's model, his four levels of evaluation that he first published in 1954. It's amazing how old that model is, but it really has stood the test of time, because it's a good model. However, in most organisations with which I've worked, there's a lot of level one and two getting done, where people are measuring student reaction to their training and whether they learned anything, through testing or observation, or what have you.

Occasionally you come across organisations which are doing level three, where they're measuring behaviour change or performance on the job. But rarely do I come across anyone who's measuring level four, which is about results, business results. So, when we talk about metrics in the learning and development space, we typically are thinking about learner satisfaction and test scores. And we count a lot of things. We count how many courses we have, how many enrolments, how many completions, how many training hours, credits, things like that.

We've got this well-oiled machine for producing courses. And for some courses, those metrics may be fine, but as the late, great Peter Drucker once said, "What's measured improves." I really like that quote, because it's so true. When we measure things, we have the information and the insight we need to improve them. So when we're counting courses and enrolments and we're measuring learner satisfaction, what are we improving?

We're improving things that may make our products better, but do our sponsors really care about those things? The answer is no. So ecosystem solutions, especially those that involve multiple components and are a little bit more complex, often involve more time, more effort, more money, more resources to put in place. To ensure their success, they require a more rigorous analysis process, a more robust design, and a more involved evaluation strategy that provides evidence of their impact on our customers' metrics.

So when we think about ecosystems, our focus starts with level four results. That's where we become accountable for our customers' metrics; not our metrics but their metrics, our sponsors' metrics. What is their goal? What are they trying to accomplish? What is the problem they're trying to solve and how do they know they have a problem? What are they measuring? How will they know when the problem is fixed? What metrics are they looking at?

If we can identify those metrics and keep them in sight as our guideposts, everything we do in terms of solving a problem and creating a solution has to stay focused on those metrics. In many organisations it may not be practical to take this more rigorous approach for everything, but we can do it for more strategic programmes.

Robin: One of the things that's great about Kirkpatrick's model is that it's a shared language. I think people start at the wrong end of it, which is level one. It's powerful when you start at the top level, at level four. Not actually throwing the model out, still keeping it as a shared language.

When you start to talk about that level four in early conversations, the solutions quickly have to move beyond a simple intervention as well. That quite often leads very naturally to an ecosystem approach, because all of a sudden people realise that if they want this impact on the business, there are actually these five different aspects we need to be looking at. I think it leads towards a different way of thinking, by working with the results-driven approach first.

Steve: Yeah, and if you read Kirkpatrick, that's what his intention was: that we start with level four. And it's funny, the numbering convention; we're used to starting with level one and moving up to four. But the idea is that the results become our guidepost. And on measuring results, so many people say, "Well, it's so hard to measure results, because you can't isolate training's impact on business metrics."

Well that's true but you can't isolate anything's impact on anything, really. The idea is to provide a chain of evidence that links your solution's results to performance metrics and in turn, to business metrics. You can't necessarily provide proof of impact but you can certainly provide evidence of impact.

Robin: Just making that subtle distinction between proof and evidence is a really nice way of reframing it. It makes people feel a little bit less precious about it as well. Steve, you touched briefly on applying impact measures to more strategic programmes. What do you actually mean by strategic programmes?

Steve: I've been working with a lot of organisations that are saying, "Well, how do we do this? We're in a mode of being order takers." When you go to the industry conferences, you see a lot of presentations on how to stop being order takers. Like in a restaurant, where the server is taking the orders of the people who are dining there and then going off to the kitchen, placing the order, and then fulfilling the order.

This is a mode that many learning and development organisations are stuck in. People come to them and say, "We want training. Can you give us this training?" So I like to think in terms of inventorying all of the learning programmes that your organisation offers and saying: we're going to think of this as a portfolio of all of our products, all of our solutions.

And if you were to categorise the solutions in three categories within your portfolio, there are solutions that I like to term contributory. They contribute to the organisation's success but they're not necessarily critical to the organisation's existence.

So these are things like ... We've all seen lots of courses on critical thinking skills and personal productivity, time management, teamwork, managing difficult conversations, written and oral communication skills. All kinds of things that are important and contribute to success but they're not critical. So, that’s one bucket.

Another bucket in the portfolio is the primary courses or solutions. So these are solutions that address operational needs of the organisation. Things like policies, processes, compliance, values, and ethics. The things that are really critical to how the organisation operates. Then strategic programmes would be the third bucket in your portfolio.

Those I define as having a strong executive sponsor, a leadership-level sponsor, and clearly articulated goals and metrics. So when you start thinking about your products this way, you can even apply some criteria to organising your portfolio and seeing what you have. When I work with organisations on this, they often discover most of their content is contributory, some of it is primary, and very little of it is strategic.

So there's a need to rebalance the portfolio and make space for those more strategic projects. Well how do you do that if you're taking orders? I think in that case, we can take a lesson from our partners in the IT function, who have, typically, a project intake process. You can apply the same criteria as you did in the inventory to new project requests.

The way IT does it, is they ask people who are requesting work from IT, to provide a mini business case. What is the strategic value of this work for the business? The more compelling arguments, the more compelling business cases, get the attention from IT. So this is a way to start making some space for us to work on more strategic work.

Robin: That is a nice frame of mind, thinking about the project approach that is used by IT. Another way I've seen teams move to more strategic work is to actually give line managers a small training budget, so if they want something that's really tactical rather than strategic, they have a budget to be able to do it. Then the L&D team can move their activity towards being more strategic.

The other way is that a lot of the topics you've just talked about, like time management, can be found off the shelf in content libraries.

With strategic projects, as you were saying, there is a strong problem statement and there are metrics tied to them. How do you bring together the evidence and show the relationship between the activity that someone might be doing in a learning experience and how it impacts back on the business metrics?

Steve: The role of needs analysis is critical in this. And it all starts with meeting with your executive sponsor, the one whose problem you're trying to solve, and using your consulting skills to get them to articulate the problem they're trying to solve and to identify the metric, or the combination of metrics, that they're looking at.

If you can do that, then you're in a position where you can say, "Okay, we're going to try to partner with you and be a problem solver for you, instead of those course people over there." And we're going to try to see what we can do to positively impact that metric and solve this problem or help solve the problem.

That involves analysis and often we get some pushback when we say, "We want to do some analysis." But the idea here is that, if we're going to really solve the problem, we have to understand the nature of the problem. If we're talking about the customer's metrics and these are metrics that are important to our customer, then they're probably going to give us permission to go ahead and do some analysis.

That analysis involves, first, looking at the critical job roles that most impact that metric. Once you've identified the job roles, what are the critical job tasks that each of those roles performs that impact the metric? Then once you've figured that out, what are the challenges in performing those job tasks? And then, what are the root causes behind those challenges?

So it's really a pretty standard analysis process, but the key, when you're looking at root causes, is to look at the work itself. The work that's being done, the job tasks that are being done, how they're done, and what the challenges are. Then looking at the worker, the workflow and the workplace.

So when we look at the worker, we're often looking at things like knowledge and skills. When we're looking at the workflow, we're looking at the business processes and the inputs and outputs. When we're looking at the workplace, we're looking at the systems and the information and the tools people are using to accomplish their work. So when we look at all of those things holistically, we may find solutions that require some work.

For example, a knowledge base, or a performance support tool that enhances the workflow or adds to the tools available in the workplace. When we're in L&D and we focus on training needs analysis, and we've already decided we're going to develop training, we're always focusing simply on the worker and skills and knowledge. And that isn't always the most effective and efficient way to solve the problem. In fact, trying to change worker behaviour is usually the most expensive and difficult way to solve problems.

So when we're thinking in terms of ecosystems, we're really trying to get at the root causes of why the performance we're looking for is not happening. And we're using the metric as our guidepost. With every question we ask and everyone we talk to, we're always focusing on what kind of impact this has on our customer's metric.

As we peel back the layers of the onion and we talk to the people in the critical job roles and so on, we're also finding out what their performance metrics are. Our customers are responsible for the linkage between performance metrics and business metrics. And by performance metrics, I mean things like key performance indicators that are used in different functional areas and productivity metrics and so on.

They have metrics for performance. It's a matter of identifying those metrics as we do the analysis, and which of those metrics is having the most impact on the business metrics. And then, when we start conceptualising what kind of ecosystem solution we might want to provide, we're keeping all those metrics in mind. And we're thinking about what we want to measure in our solution that is going to give us an indicator of impact on those performance metrics.

So, it's really a chain of evidence. We look at the business metric, we look at the performance metrics that impact the business metric, and then we look at the ecosystem solution metrics that impact performance. That's the analysis process. Then later, after we've got that solution in place, we're using analytics and dashboards to see how we're doing. We're looking at it in reverse: we're looking at our solution metrics and whether they're moving the needle on the performance metrics, and whether that's moving the needle on the business metrics.

Robin: What I really like about this particular thinking is that, literally in a dashboard, you can line those three things up and see them together. Which I think is a really powerful endpoint, after going through the right needs analysis.

You can't just dive into the business metrics; you need to look at all three aspects: looking at the root cause, then figuring out what the performance metrics are and then the business metrics. It's an interesting way of thinking that goes way beyond just measuring course completion.
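To make that lining-up concrete, here is a minimal sketch of what a three-layer dashboard might look like for a single programme; the metric names, figures, and the idea of comparing a baseline against a current value are all illustrative assumptions rather than anything specified in the conversation.

```python
from dataclasses import dataclass

@dataclass
class MetricLayer:
    layer: str      # "Business", "Performance", or "Solution"
    metric: str     # what is being measured at this layer
    baseline: float # hypothetical value before the ecosystem solution
    current: float  # hypothetical latest value from analytics

# Hypothetical chain of evidence for one strategic programme:
# business metric -> performance metric -> ecosystem solution metric.
chain = [
    MetricLayer("Business",    "Customer churn (%)",                  9.5,  7.8),
    MetricLayer("Performance", "First-call resolution rate (%)",     71.0, 80.0),
    MetricLayer("Solution",    "Performance support tool usage (%)", 10.0, 62.0),
]

# A very small "dashboard": one row per layer, baseline alongside current value.
print(f"{'Layer':<12} {'Metric':<38} {'Baseline':>9} {'Current':>9}")
for row in chain:
    print(f"{row.layer:<12} {row.metric:<38} {row.baseline:>9.1f} {row.current:>9.1f}")
```

The row order follows the analysis direction described above, starting from the business metric; reading the rows bottom to top is the reverse check, asking whether the solution metric is moving the needle on the performance metric and, in turn, on the business metric.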

Steve: That's a really good point, Robin, about lining those metrics up in a dashboard. The whole ecosystem metaphor is an interesting one, in that an ecosystem, in order to thrive, needs to be sustainable. And it needs to be constantly growing and changing in response to things from outside that affect it. So, in using dashboards to monitor our ecosystem solutions ... and by the way, when we're designing our solutions, right away we should be designing the metrics and the dashboards as part of the solution.

We should be thinking about that all along. It allows us to do continuous needs analysis. We're so used to doing front-end analysis, then we solve the problem and we move on to the next project. But especially with these strategic projects, which have a real opportunity for impact on the business's success, there's this opportunity to continually look at the data and refine, adjust, enhance, and expand our solutions for even more impact.

Robin: When I talk about being data driven as an L&D person, I have a story where the L&D person starts the day looking at the dashboards for their programmes and asking: what's happening? What do I need to shift today to make the metrics change? If you can see it on the dashboard, visually, you can then make modifications to it.

Steve, lots of great ideas and lots of great advice in this podcast. If there was one thing you could say to someone around learning ecosystems, what would your big piece of advice be?

Steve: Interesting. Well, we need to transform the Learning and Development function from being course providers to being perceived as business partners and problem solvers. And there's this move throughout business towards data-driven decision making, where senior leaders aren't just making decisions subjectively but are using data to drive some of their decision making. And they're coming back to functions like sales and customer service and marketing and expecting data to help them drive their decisions.

And if they're not already asking L&D for this, they're going to be doing it soon. So it's just not good enough to keep using the same old things that we've been counting and measuring in the past, around training. We need to really be thinking in terms of ecosystems, strategic projects in our portfolio, and metrics that show evidence of impact, so that we're going to continue to thrive into the future.

So I think it's all about setting some ... once you've figured out your product portfolio and where your current state is, set some goals for adding more strategic solutions and building up that part of your portfolio. Building up your ecosystem capabilities and skill set and building up your relationships with the senior leaders in your organisation, with whom you can partner to help them with their issues and what's keeping them up at night.

Robin: Great. Steve, that's a really nice way to wrap up the session. Thank you for joining me today.

Steve: Thank you, Robin.