Although we may recognise how change can benefit compliance-based learning, stakeholders can be nervous about it. That can feel ironic, given that organisations don’t always measure the impact of the training well enough to know if it works in the first place. So, what process can you follow to convince them? How can you drive the change you want to see?
Article no. 7 of 10 - The Kineo view on: Compliance elearning
We’ve looked at using methodologies for innovation, and how you can draw on the principles of learner experience design (LXD) to prove what your proposed changes for compliance will deliver to your organisation.
By adopting a learning experience design mindset, you can carry out a staged approach to change with evaluation built in throughout. The evidence that materialises from this iterative process will add valuable clout to your eventual recommendations. Where we’ve seen our clients do this, it’s led to very tangible gains and enhanced insight into what works and what doesn’t.
A process for driving change
In this article, we’ll look at the process Kineo recommend for driving change. It comprises three main phases:
1. Understand the challenge:
Do your research! Consult with your learners first and foremost, but also your stakeholders, to understand what their needs and expectations are. Measure your previous success in whatever way possible, including industry benchmarks. Define your problem statement, which might also be an opportunity, before you start working on that change you want to make.
2. Prototype and iterate:
Focus on that change and think about your hypothesis: what’s going to happen as a result? How will you measure the success and prove the changes are working? Turn that into a strong business case which aligns with your organisation’s strategy and will convince your stakeholders to go ahead with the project. That gives you solid ground for designing your solution – but don’t go too far just yet. Create a proof of concept and test it out with your learners to see what impact it has.
3. Scale up or revise:
Borrowing from lean and continuous improvement methodologies, this is a real ‘fork in the road’ moment. Do you want to scale up your approach – perhaps going from a small test group to a full organisational roll-out – or do you want to revise the approach based on your results?
Let’s explore some of those elements in the process in a bit more detail.
Define the problem
This is a critical step. To define the problem we want to fix, we need to look at it from several angles:
- What are the real issues that your business needs to address? Look at what the regulator actually requires, which can be blurry for some policy owners: they understand the policy and the need for training, but the regulatory requirements can be quite ambiguous. It may be easier said than done, but getting as much clarity as possible upfront is important, because without it, how can we be sure we’re asking the right questions? The answer could have quite a radical impact on the solution you go on to develop.
- What do the learners need to do in their role to keep themselves, others, and the organisation safe (including from financial damage and reputational harm)? What are the barriers? You can use interviews, focus groups, surveys and existing data to really look through their lens and invite them into the conversation. Ideally you will end up with a matrix mapping different roles against what they need to do – that is, the real-world skills – to keep the organisation safe and compliant (tools like Cathy Moore’s action mapping process can help with this). As a result, you might decide to design different learning pathways for different audiences, ensuring greater efficiency and relevance for your end learners.
- Do you need to improve mindset, culture or resources? Are these really training challenges? Again, tools like Cathy Moore’s ‘Will training help?’ flowchart can help you to work that one out. What positive actions can learners take, rather than simply avoiding negative ones?
- Finally, do you truly know how well your current training is working? What’s your benchmark? What data can you use as part of this research?
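The role-to-behaviour matrix described above can be sketched as a simple data structure. This is a minimal, hypothetical illustration – the roles and behaviours below are invented for the example, not drawn from any real compliance framework:

```python
# A minimal sketch of a role-to-behaviour matrix. Roles and behaviours
# here are hypothetical; yours would come from your own research and
# action-mapping work with learners and stakeholders.
REQUIRED_BEHAVIOURS = {
    "Customer adviser": {"verify identity", "record consent", "escalate red flags"},
    "Team leader": {"record consent", "escalate red flags", "approve exceptions"},
    "Back-office staff": {"record consent"},
}

def learning_pathway(role):
    """Return the sorted list of behaviours a given role needs training on."""
    return sorted(REQUIRED_BEHAVIOURS.get(role, set()))

for role in REQUIRED_BEHAVIOURS:
    print(role, "->", learning_pathway(role))
```

Even a sketch like this makes the pathway-per-audience idea concrete: each role gets training only on the behaviours mapped to it, rather than everyone completing one generic course.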
Make a hypothesis
Once you know the problem you’re attempting to solve, don’t jump into a solution: first make your hypothesis. In other words, make an educated guess (based on your research) that if you do a certain thing – if you change the way you deliver something, or deliver it in a different format or duration – then it will have a particular impact.
Hypotheses are an important concept in data analysis, as they define what you will look for when measuring success, and the specific effect you expect to have on your learners’ behaviours, ideally in a measurable way.
Ultimately your hypothesis will form a business case for you to take to your stakeholders. Without this, it may be hard to gain traction for your change. And the change doesn’t have to be a grand innovation. Sometimes we can try and innovate for the sake of it, when even a small change can have a big impact.
You can test a hypothesis by further research, but most decisively through a proof of concept or prototype.
Prototype a solution
This is the fun part! Any change is an experiment and you should be keen to test its effectiveness. Prototypes allow us to do this.
We know from what learners tell us that many learning experiences can be improved. And while it might sound dangerous to ‘experiment’ with compliance learning, the ‘set it and forget it’ mindset is also risky. We don’t want to fail, but we do want to know if we need to improve our approach.
That’s why it’s important to start small with a Minimum Viable Product (MVP) – the smallest number of key features that are vital for testing your hypothesis quickly.
Involve learners in this process: understand what change they want to see. Design Thinking tells us that the most insightful suggestions can come from anyone. And test concepts early, including small parts of what you build (e.g. ‘unit tests’) to get early insights.
Using design patterns as part of your LX design
User experience (UX) can be one of the biggest barriers for learners completing digital learning. It’s not necessarily the content or even the delivery of it, but the fact they can get frustrated or misled by problems with the way the learning functions or looks. For example, not knowing where to go next or not having a clear indication of when the course is complete. Online learning is a web experience and we can learn from the processes that web and interaction designers use. These include tried and tested ‘user experience patterns’ that define quick and intuitive ways to get users from A to B. It helps to apply similar concepts in your learning designs.
Patterns can describe small scale interactions such as how a button works, to larger ones such as how a learner completes a quiz or a scenario. Everything within this broad spectrum of the term ‘experience’ needs to be considered from a usability point of view.
You can borrow inspiration from existing patterns and principles on the web, such as Google’s Material library or Nielsen Norman Group’s 10 Usability Heuristics. Such ideas and principles can be used during the design phase to help you create an easy and enjoyable journey through the learning, and referred to as guidelines during your usability testing process.
Measure the results
Now it’s time to measure against the set of success criteria you defined at the start, when you were building your business case.
Think about the size and the composition of the group that you want to test with. You need to get people involved who are representative of your end learners, as well as consider the timeframe over which you want to test your prototype.
You could just test it and find out whether people liked it, but that’s unlikely to be enough.
“What we normally like to see is some indication that the learning is transferring into the real world, manifesting itself in the behaviours we see. That will necessitate a longer test period.”
It can help to do some A/B testing. ‘A’ could be your current learning offer, and ‘B’ your new approach. Over the course of, say, one year, you can see whether your new approach gets better results than your previous one. That’s particularly important if you don’t have much in the way of benchmarking data.
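When you compare the two groups, it helps to check whether the difference is likely to be real rather than noise. As a sketch, assuming your success measure is a simple pass/fail outcome (the cohort sizes and pass counts below are hypothetical), a standard two-proportion z-test does this with nothing beyond Python’s standard library:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical cohorts: 'A' took the current course, 'B' the new approach.
# The outcome here is "passed a scenario-based assessment", but it could be
# any behavioural metric from your success criteria.
z, p = two_proportion_ztest(success_a=62, n_a=100, success_b=78, n_b=100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the improvement in group B is unlikely to be chance alone – useful evidence when you reach the pivot point below.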
Having done our measurement, we come to the pivot point: this is when you decide whether things are good to scale up or whether you want to revise your solution. That revision might not be scrapping it and starting again. The feedback from your measurement process might just tell you that you need to tweak a bit of your UX, in which case you need to go back to the implementation phase.
Or it might tell you that you need to go back to the hypothesis phase and look at different hypotheses to solve your initial problem. But that’s OK – in fact, it’s great – because what you’re doing is testing your hypothesis before taking that big commitment to scale up. And that’s what we have found to be most effective in driving change for a lot of our clients.
If you want to know more about how we create better learning experiences for compliance-based learning, drop us a line to book a free consultation with one of our learning experts.