How user research challenged policy to move from a data-centric view to a teacher-centric view
What do you do when all the data that you hold internally tells one story, but your users tell a very different one when you speak to them? How do you make sure that your team appreciates that this is a problem?
We recently overcame this challenge on a project offering financial incentives to teachers who trained as either maths or physics specialists. While maths teachers felt confident about their eligibility, we kept seeing science teachers begin to doubt theirs as they worked through the user journey.
Identifying the problem
Our main journey was testing well with our teachers, with the exception of one particular page:
The question itself felt fairly innocuous to us: we had data from the policy team showing that teachers qualified with a specialism. However, many of the science teachers who took part in our research struggled to answer it.
I taught Physics during my training but my PGCE was science… I trained as a science teacher…
When we first played this finding back, we were told that it simply wasn’t possible that any of them had ‘qualified in science’. The policy team reiterated that teachers have a specialism, that there were official records of those specialisms and that, therefore, teachers should know what they were or at least be able to find the information easily.
As the project continued, we gathered more evidence that this was an issue for teachers, despite what the database was telling us. We discovered that while science teachers do have a specialism, they are still trained to teach all three science subjects and will, most likely, continue to teach all three sciences during their careers. Essentially, they qualify and stop thinking of themselves as a “physics teacher”, and start thinking of themselves as a “science teacher”.
As a science teacher, you are required to teach all 3 sciences, especially at key stage 3. I don’t know any science teachers who only teach their specialism.
On top of this, our research highlighted that many qualification certificates do not explicitly state the subject of the specialism. We saw certificates which simply said “secondary education”, or “PGCE science”.
In practical terms, our users were facing a question whose answer determined whether they were eligible for £2,000. The uncertainty over how to answer was understandably stressful and left many unsure what to do.
If I click no, does that mean that I will be locked out of this? I don’t want to say ‘no’ and then it says I’m not eligible and I’ve missed out.
Essentially, our users did not understand their eligibility, which created two risks: eligible teachers who didn’t claim, and ineligible teachers who put in claims that would ultimately be rejected.
Presenting the problem
Despite mentioning this issue throughout the project, and in (almost) every research playback, we struggled to get buy-in from policy to engage with it. We needed to change how the policy team thought about the problem, encouraging them to view it from a teacher-centric point of view rather than a data-centric one. This would allow us to design a question which everyone could answer, not just those who were clear about their specialism.
So we faced a new challenge: changing how we presented the information to make it more engaging for the teams. We were quickly learning that showing clips, repeating quotes and talking about our insights were no match for a database with thousands of rows containing evidence of teachers having a specialism.
We went back through our research sessions and put the thought processes and evidence in a much clearer format. We set out 6 scenarios, like the one below, each containing a real quote from a user, what their PGCE certificate said, what their course was called on their transcript, what the internal system said, and whether they would claim or not. These were a mixture of the three main possibilities:
- Teachers that were eligible according to our systems, but would not claim
- Teachers that were not eligible according to our systems, but would claim
- Teachers that we had no accurate information for, but would claim
Setting out the evidence like this allowed the team to build empathy with teachers. They started to understand why it was difficult for science teachers to answer the question of what their specialism was, and why that meant some users couldn’t complete the task the first time they tried.
Now that we had buy-in and an understanding of the problem, we could start to work through the potential solutions we had designed.
For each potential solution, we tried to answer the question from a teacher’s perspective, referring back to the evidence that the teacher in each scenario had. We mapped out the positives (green post-its) and negatives (red post-its) to make sure that we were clear on the implications of each solution:
Some of these solutions were based around giving users an option to say that they weren’t sure what they specialised in, while others allowed them to select science, with help text explaining that we would assist them:
Eventually, we settled on a solution that combined several of our ideas: allowing a user to select “science”, which opened a separate page that briefly explained what we meant by specialism and offered an “I’m not sure” option that would trigger a manual check.
Testing our solution
Now that we had a new question, we needed to test it with more users. While it still caused some confusion, the majority of science teachers we spoke to felt more comfortable with this new question.
When I think about my PGCE it is science with a physics specialism. I guess I could have put either of those answers
The fact that they could essentially ‘ask’ the service to check their qualification for them provided reassurance that it was fine not to know what their specialism was. It also reduced the stress of not knowing whether they were eligible for the incentive.
What we learned

As the researcher, trust your instinct. You are the one who spends the most time with your users; if you feel something is an issue, keep putting it forward. Our job is to advocate for people who cannot be in the room while we’re designing services. It can feel uncomfortable to keep bringing up the same thing, and it’s disheartening to be told that there is no problem to solve, but that does not mean it’s not worth fighting for.
Find an ally who can help you map the problem out and visualise new ways of presenting the information to your teams. We worked as a team of research, content and service design to pool our insights, working out how many scenarios we might see and how we might help users in each of them.
Encourage your team to practise empathy, and give them the information and evidence to do this. Ask them to go back to the evidence your users have access to and then try to answer questions from the user’s point of view. We found that this was much more impactful than saying “this is difficult and we should change it”.
In this instance, we couldn’t solve the root cause of the problem in our user journey, but we could fight to change the journey to make it easier and more comfortable for those who were going to struggle. Never fall into the “it can’t be that difficult” trap of assuming that users will have access to the information somewhere. Truly listen to your users, and try to uncover the layers of where their confusion has come from.