Measuring digital inclusion with research participants

We live in the information age, where every person is expected to access information, essential services and products digitally. It’s therefore imperative for innovators, creators, designers, developers and government to make their digital products and services inclusive.

At Paper we’ve been working with businesses and government to design new services and products, ensuring that they meet the needs of all users by doing research with them. One of the ways that we make sure we are researching with the right people is by asking potential participants questions to assess where they are in terms of digital inclusion.

The definition of digital inclusion from GOV.UK highlights three criteria:

  • Digital skills - Being able to use computers and the internet. This is important, but a lack of digital skills is not necessarily the only, or the biggest, barrier people face
  • Connectivity - Access to the internet and the right infrastructure that people need for access
  • Accessibility - Accessibility is a barrier for many people, and services need to be designed to meet all users’ needs

This definition is from GOV.UK’s Digital Landscape Research report, in which they also created a 9-point scale ranging from ‘users who never have and never will use digital’ to ‘confident users and experts’. Their research plotted the UK population across the scale, highlighting three groups: Non-digital skills, Low digital skills and High digital skills (a rough code sketch of this grouping follows the list below):

GOV.UK Inclusion Scale Graph

Their research is based on people’s:

  • ability to go online and connect to the internet
  • skills when it comes to using the internet
  • motivation for using the internet
  • trust in the things they access on the internet (like a fear of crime, or not knowing where to start to go online)
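
To make the scale concrete, here’s a minimal sketch in Python of how the nine points might map onto the three groups. Only the two end-point labels come from the report; the cut-off points between the groups are our own assumption, made purely for illustration.

    # A rough model of the GOV.UK 9-point digital inclusion scale.
    # Only the labels for points 1 and 9 come from the report; the
    # intermediate labels are omitted, and the group cut-offs below
    # are assumptions made purely for illustration.
    SCALE_ENDPOINTS = {
        1: "Users who never have and never will use digital",
        9: "Confident users and experts",
    }

    def skill_group(point: int) -> str:
        """Map a 1-9 scale point onto one of the three groups (assumed cut-offs)."""
        if not 1 <= point <= 9:
            raise ValueError("scale points run from 1 to 9")
        if point <= 3:
            return "Non-digital skills"
        if point <= 6:
            return "Low digital skills"
        return "High digital skills"

    print(skill_group(2))  # Non-digital skills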

Paper has used the GOV.UK digital inclusion scale with each of our research participants, but we began to see that we could iterate on it, and possibly improve it, through our Research & Development (R&D) stream of work.

The challenges

The first set of challenges we identified:

  • Businesses and organisations we work with struggle to recruit people who cover a broad range of the digital inclusion scale (particularly those lower down on it).
  • Capturing the digital inclusion of users isn’t easy. There are many parameters to think through and implement, and there is a risk of subjectivity if the researchers’ approach isn’t consistent.
  • As people’s relationship with digital becomes ever more complex, an ever-increasing part of each research session was taken up with trying to accurately plot where people sit on the scale.

However, when we dug deeper into the topic of digital inclusion, we found issues that seemed more fundamental and trickier to solve.

How to define an expert and a beginner user?

We went on to explore four parameters as possible improvements to the digital inclusion scale: the definition of beginner and expert, subjectivity, recruitment and timing, and data awareness.

The line between a beginner and an expert user is blurred. For example:

  • We discovered that people who would, in fact, rank as experts on the scale would not necessarily consider themselves experts, precisely because they are more aware of digital advancements (and of what they don’t yet know) than non-experts.
  • People who are expert users may make informed choices not to use digital products, because of concerns about data security or simply a preference for more traditional options, like going to the supermarket instead of having shopping delivered.
  • Children are beginning to learn how to code as a part of their school curriculum but aren’t necessarily being taught the critical thinking involved in identifying trusted sources of information on the web.

These factors are reshaping the digital inclusion landscape and, consequently, we need to start adjusting the way we capture and analyse the data we plot onto the scale.

Researcher and user subjectivity

Subjectivity occurs when different researchers ask the same questions without a consistent metric: it’s easy for a researcher’s own judgement to creep in, and participants may end up scored higher or lower based on how the researcher interprets an answer, rather than on solid evidence.

And if you ask participants to score their answers themselves, the results can be skewed by their own subjectivity.

Subjectivity also explains why, as technology evolves, experts exploring new realms of digital may perceive themselves as less skilled, while people who aren’t aware of up-and-coming technologies may consider themselves expert users.

Data awareness

Another key topic is how aware users are of what happens to their data online, how they can protect themselves, and what their rights around data sharing are. People are increasingly aware of how their online data footprint is used (and abused), and that their data presence is something that needs to be carefully considered.

As Doteveryone noted in their latest report:

“…while around a third don’t realise that information about previous searches or purchases is collected, two-thirds are unaware that information about their internet connection is gathered and over 80% don’t realise that information which other people share about them is collected.”

What we did

If we accept that asking about the digital inclusion of users should be a part of any research, then we also need to accept that our research tools will need to adapt to the changing environment.

A researcher’s mission is to capture the users’ digital inclusion information quickly and in a way that doesn’t interrupt or overwhelm the rest of the research.

We mapped the journey of getting users into testing, from recruitment and sign-up through to the testing session itself, to identify the best possible time to ask people about their digital inclusion.

Recruitment timeline for research

Next we looked at non-digital users. How can they be recruited and included in the research process? As most research recruitment happens online, non-digital users can easily be excluded from testing. It’s important for the process to be accessible to non-digital and digital users alike.

Asking who else is doing this

We spoke with Amy Everett, a user researcher at the Home Office working on digital inclusion. She told us how the Home Office is looking at the emotional side of research: how are people feeling emotionally while interacting with a service? How do these emotions affect their behaviour and ability to make decisions?

This was a fascinating insight. During user research, participants are pulled out of their usual day-to-day environment, and their emotional state is affected by taking part in the research itself, which in turn affects their digital inclusion scores. The question this led to was: ‘how can we minimise the emotional impact of the research?’

Prototyping a new digital inclusion measurement tool

Based on our conversations with others in the digital industry, the current digital inclusion scale and the four parameters explored in the last section, we created a hypothetical user need and started exploring solutions.

As a user researcher

I need to be able to quickly understand and capture someone’s digital skills and abilities

so that I have context for the research session.

Hypothetical user need

Initially, five different prototypes were created. They ranged from tasks, simulations of internet errors, levels and inclusive surveys to a prototype inspired by Maslow’s hierarchy of needs. Following a show-and-tell with the rest of the team at Paper and a workshop, we decided to progress with the prototype based on Maslow’s hierarchy of needs.

We adapted the original scale to be more aware of users’ behaviours around their daily internet use. Using Maslow’s hierarchy of needs as a base, we looked at how much of a participant’s life is tied up with digital (food, health, friends & family, security, etc.), how comfortable they are using it, and where improvements could be made. This helped us understand which aspects of their lives are fully digitised and in which areas they still lack digital skills.

Diagram of Maslow’s hierarchy of needs

Our hypothesis was that we could relate essential human needs back to digital products and services that people might choose to use depending on where they were on a digital inclusion scale.

Developing the prototype

Testing the prototype

The prototype was initially tested with 13 people. We recruited people with low digital skills by contacting local community centres that help people in their community gain or improve their digital skills.

We tested our new digital inclusion evaluation tool with users and benchmarked it against the existing scale to see how effective and accurate it was.

Through testing we discovered that the prototype didn’t offer enough options to capture the full range of users’ digital skills. We did, however, get an overview of how users’ digital skill levels vary by splitting areas of their lives into different question topics. This gave us a deeper understanding of people’s digital inclusion: which areas of life they have stronger digital skills in, and which areas they lack them in.

We also recognised that the questions we used didn’t cover all the possible levels of digital skill, so we adapted the prototype to cover five different levels (a rough code sketch of the adapted tool follows the list):

  1. I understand what I’m doing and I’m a user (High digital skills)
  2. I understand and know how to use, but I choose not to use the digital products (High digital skills)
  3. I’m using this digital product but would need some help with it (Low digital skills)
  4. I don’t have skills, but would like to gain them as I see benefits in digital options (Low digital skills)
  5. I don’t use it and don’t want to learn how to use it (No digital skills)
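
To show how the adapted tool hangs together, here’s a minimal sketch in Python. Only the five level descriptions come from the list above; the numeric ordering of the levels, the life-area names and the simple strong/weak summary are our own assumptions, made purely for illustration.

    from enum import IntEnum

    class SkillLevel(IntEnum):
        """The five levels from our adapted prototype (numeric order is an assumption)."""
        CONFIDENT_USER = 5      # 1. understands and uses (high digital skills)
        INFORMED_NON_USER = 4   # 2. understands but chooses not to use (high)
        NEEDS_SOME_HELP = 3     # 3. uses, but would need some help (low)
        WANTS_TO_LEARN = 2      # 4. no skills yet, sees the benefits (low)
        DECLINES = 1            # 5. doesn't use, doesn't want to learn (none)

    # Hypothetical life areas, loosely based on Maslow's hierarchy of needs.
    AREAS = ["food", "health", "friends_and_family", "security"]

    def inclusion_profile(answers):
        """Summarise one participant's per-area answers into strong and weak areas."""
        strong = [a for a, lvl in answers.items() if lvl >= SkillLevel.INFORMED_NON_USER]
        weak = [a for a, lvl in answers.items() if lvl <= SkillLevel.WANTS_TO_LEARN]
        return {"strong_areas": strong, "weak_areas": weak}

    # Example participant: confident with online shopping, not yet using
    # digital health services.
    participant = dict(zip(AREAS, [
        SkillLevel.CONFIDENT_USER,     # food
        SkillLevel.WANTS_TO_LEARN,     # health
        SkillLevel.INFORMED_NON_USER,  # friends & family
        SkillLevel.NEEDS_SOME_HELP,    # security
    ]))
    print(inclusion_profile(participant))
    # {'strong_areas': ['food', 'friends_and_family'], 'weak_areas': ['health']}

A single participant can then sit high in one area of life and low in another, which is exactly the nuance a single point on the original scale flattens out.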

Here’s a visual representation of our vision of the prototype after testing.

Inclusion scale findings

Evaluating the project

After creating, testing and evaluating our tool, we reflected on how we approached the project as a whole. Here are some of our findings.

A diagram showing the evaluation exercise

Subjectivity

The new digital inclusion prototype has a consistent metric that helps to eliminate researchers’ personal judgements about participants’ digital inclusion.

Timing

This is still an open question for us as each research project has its own needs and level of complexity. We didn’t discover the optimal time to test the digital inclusion of participants in research.

The emotional aspect

Unfortunately, we haven’t been able to tackle this one just yet. However, we do think it’s something we can measure by using the prototype at different stages of the journey with participants. This might be what we test next.

Is the digital inclusion scale enough to base your assessment of a participant’s digital inclusion on?

We tried to make the new question set broad and inclusive enough to define people’s levels of digital skill, and to remove the subjectivity of researchers having the final word on a participant’s score. By broadening the questions to look at different areas of participants’ lives, we increased the credibility of the results while eliminating the researcher subjectivity that could affect the score.

How to define an expert and beginner user

The line is constantly shifting, and the only way we can keep pace with it is to iterate on existing tests of digital inclusion, update them regularly, and be involved in informing people about new and up-and-coming technology.

Data awareness

We added a question about data to see how aware people are of how their data is used online, and whether they understand how to limit its use. This is an evolving aspect of users’ awareness, and our digital inclusion scoring will need to take it into account in the future.

Summarising our work

Digital is evolving and the digital inclusion scale should evolve with it. We believe that this research opens up new insights into digital inclusion and the complexity involved in measuring it.

The project opened up many questions; prototyping answered some of them, but others remain. At this stage we understand the depth and complexity of digital inclusion, recognise how it varies across different areas of our lives, and have a tool and a method to keep improving how we measure digital inclusion.

Digital inclusion can’t be looked at as one broad topic. We need to look at different aspects of users’ lives (social life, friends & family, health, food, security…) and consider their different levels of digital inclusion in those specific areas.

This will help us understand where users are already skilled and where their digital inclusion needs a bit of a push (as shown in the graphs above). We hope that this will open a wider discussion and help evolve the exploration of digital inclusion.