Developing a digital mindset for L&D: Test-driven cultures

Digital businesses have strong test-driven cultures that L&D can learn from. A test-driven culture means changes are constantly being made, data is constantly being collected and analysed, and what is learned from the analysis is put into practice. One of the core reasons digital businesses have adopted a test-driven approach is that they are often innovating: testing lets an idea be measured before the business commits to it, which reduces the risk of the innovation.

A nice framework from Strategyzer (the people behind the Business Model Canvas) looks like this:

  • Hypothesis: Which hypotheses did we set out to test?
  • Experiment: How did we test the hypothesis in question?
  • Evidence: What data was produced from an experiment to validate or invalidate a particular hypothesis?
  • Insights: What insights did we gain from that data?
  • Action: What are our next actions going to be, based on the insights we gained? Will we continue the experiment? Will we go back to the drawing board and change our idea and subsequently the underlying hypotheses? Will we run entirely new experiments?


Gathering evidence, insights and actions is a process that looks similar to the end-of-program evaluation common in L&D. The different, and possibly transformational, aspect of this framework is generating a hypothesis and devising an experiment up front. It’s a different frame of mind from measurement and evaluation: it’s more like a research project, where ideas and approaches are tested and the organisation learns from and optimises its approach.

Test-driven cultures in digital business often have three key aspects: they are data driven, the testing processes are often automated, and user testing is central.

 

Data-driven testing

Most digital businesses have a culture of collecting rich, detailed data, often more than is strictly needed – data such as when users are accessing services and what they are doing. L&D evaluations, by contrast, are often just about collecting data for continuous improvement, or, where more behavioural data is collected, the focus is on figuring out how people are learning. Often in L&D, data beyond completions is not being stored by the LMS, or, if it is being stored, it is not being analysed and used. Learning actually has some powerful tools and standards for collecting data compared with other industries: each digital marketing system records data in its own format, whereas in learning we have standards such as xAPI, which make learning data shareable and interchangeable between systems. Exciting possibilities begin to emerge when you are able to correlate learning behaviour with business performance data.
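
To make this concrete, here is a minimal sketch in Python of what an xAPI statement looks like – an actor, a verb and an object – and how it might be sent to a learning record store (LRS). The endpoint URL, credentials, learner and course are all hypothetical placeholders.

```python
import requests

# A minimal xAPI statement: actor / verb / object, per the xAPI spec.
statement = {
    "actor": {"mbox": "mailto:jo.learner@example.com", "name": "Jo Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/courses/safety-induction",
        "definition": {"name": {"en-US": "Safety induction"}},
    },
}

# Hypothetical LRS endpoint and credentials -- substitute your own.
response = requests.post(
    "https://lrs.example.com/xapi/statements",
    json=statement,
    auth=("lrs_key", "lrs_secret"),
    headers={"X-Experience-API-Version": "1.0.3"},
)
print(response.status_code)  # 200 means the LRS stored the statement
```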

Often, L&D people assume they need to bring this performance data into learning systems to be able to report on and explore these relationships. But many organisations now have business intelligence systems (usually called BI systems) that can pull data in from many different sources to explore exactly these kinds of relationships. In the near future, the key may lie in bringing learning data into BI systems so it can be explored in the same ways as other business data.
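
As a toy illustration of that idea, the Python sketch below joins LMS completion records with sales figures using pandas. All of the data, column names and the suggestion that a coaching course relates to sales are invented for the example.

```python
import pandas as pd

# Hypothetical extracts: completions from the LMS, sales from a BI system.
completions = pd.DataFrame({
    "employee_id": [101, 102, 103, 104],
    "completed_coaching_course": [True, True, False, False],
})
sales = pd.DataFrame({
    "employee_id": [101, 102, 103, 104],
    "q3_sales": [48000, 52000, 39000, 41000],
})

# Join the two sources on a shared key, then compare the two groups.
merged = completions.merge(sales, on="employee_id")
print(merged.groupby("completed_coaching_course")["q3_sales"].mean())
```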

 

A/B testing in marketing

Digital marketing has developed a number of powerful techniques for making decisions based on data. One of these is the A/B test: two variations are presented to users, and the better-performing one is selected. In marketing this might mean testing two different button labels or two different subject lines for an email message, with the measurement being the number of clicks on the button or the number of people who open the message. Many websites have these types of tests running constantly. A/B testing removes bias – the ‘I like this one’ factor – because it is evidence based.
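
For the statistically minded, here is a rough sketch in Python of how the result of such a test might be checked. The click counts are invented, and a two-proportion z-test is just one common way of judging whether the difference between variants is real or noise.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates (e.g. button clicks) with a
    two-proportion z-test; returns the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: variant A = 120 clicks from 2,400 views,
# variant B = 165 clicks from 2,380 views.
z, p = two_proportion_z_test(120, 2400, 165, 2380)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real difference
```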

 

How could A/B testing be used in L&D?

The question to ask yourself is: when could you be running micro-experiments? What variables in the program could be tested? Below are two examples.

Testing facilitators

You could run the same virtual classroom program with two different facilitators, and the measurement could be learners’ self-assessed improvement in performance.

Testing resources

You could have two variations of a learning resource, e.g. one text-based and the other video-based, and the measurement could be learners’ scores on activities.
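
For either example, the analysis step could be as simple as comparing the two groups’ scores. The Python sketch below uses hypothetical activity scores and Welch’s t-test (via SciPy) as one reasonable way to make the comparison; it assumes learners were randomly assigned to a variant.

```python
from scipy import stats

# Hypothetical activity scores (out of 100) from two randomly assigned groups.
text_based = [72, 68, 80, 75, 66, 71, 78, 69]
video_based = [79, 83, 74, 88, 81, 77, 85, 80]

# Welch's t-test: do the two variants produce different average scores?
t_stat, p_value = stats.ttest_ind(video_based, text_based, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```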


Automated testing

There is a development process called test-driven development (often shortened to TDD). When a developer starts building a new feature, they first write the test, e.g. When a user clicks on … they should see …

This is part of the shift to agile software development, which is always open to change. When a developer makes a change, they need to know whether that change has broken another feature, so these tests run automatically to make sure the system stays stable. A developer’s code is constantly being tested.
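
As a miniature illustration of the test-first idea, here is a sketch in Python using pytest conventions. The test is written before the function it exercises; every name here is illustrative rather than part of any real system.

```python
# Test-first in miniature: this test was written before the function below,
# pinning down the behaviour "when a user submits an enrolment, they should
# see a confirmation". Run it with pytest.

def test_submit_shows_confirmation():
    result = submit_enrolment(learner_id=101, course="safety-induction")
    assert result["status"] == "confirmed"
    assert "enrolled" in result["message"]

# The implementation is then written (and rewritten) until the test passes.
def submit_enrolment(learner_id, course):
    return {"status": "confirmed",
            "message": f"Learner {learner_id} is enrolled in {course}"}
```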

Humans are more complex than code, and behavioural change is a completely different problem, but a test-driven approach to learning offers some possibilities. It could mean that L&D focuses first on what the impact of the learning is going to be and puts in place automatic ways to measure that impact. The automatic measure might be business performance data, or short surveys sent to learners each week or each month. These constant surveys can aid the transfer of learning by reminding learners what the goal was, and they enable the rapid collection of data. Often, the same tools in learning management systems that remind employees to complete compliance training can be used for this kind of automatic messaging.
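
As a rough sketch of one shape such automatic messaging could take outside an LMS, the Python snippet below sends a weekly pulse-survey email. The addresses, question and mail server are all placeholders, and in practice the sending would be triggered by a scheduler or the LMS’s own reminder engine rather than run by hand.

```python
import datetime
import smtplib
from email.message import EmailMessage

# Hypothetical weekly pulse survey: one question tied to the program's goal,
# sent to each learner to prompt recall and collect impact data over time.
LEARNERS = ["jo.learner@example.com", "sam.builder@example.com"]
QUESTION = "How often did you apply the coaching model with your team this week?"

def send_pulse_survey(smtp_host="mail.example.com"):
    with smtplib.SMTP(smtp_host) as smtp:
        for address in LEARNERS:
            msg = EmailMessage()
            msg["To"] = address
            msg["From"] = "learning@example.com"
            msg["Subject"] = f"Pulse check {datetime.date.today():%d %b}"
            msg.set_content(QUESTION + "\nReply with a number from 1 to 5.")
            smtp.send_message(msg)

# A scheduler (e.g. cron) would call send_pulse_survey() once a week.
```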

 

User testing

Most digital products, including learning experiences, undergo some form of user testing. It doesn’t need to be complex – often, most issues are discovered by testing with just five users. The most powerful technique is to give users a list of tasks to complete and simply watch them work through it. A variation is to ask them to ‘think aloud’ as they go, to help identify why people are struggling.

 

Questions and ideas to explore around test-driven learning cultures

  1. What experiments could you run?
  2. Could your next learning program be reframed as an experiment?
  3. How could you automate the collection of performance data?
  4. What will be the impact of your next program?
  5. If you’re not already user testing your learning resources, how could you start?