The digital apprenticeship service is a complex programme with 7 scrum teams developing different elements of the service.
We had already carried out lots of user testing on the individual components, but it was really important to run a private beta across the whole service so that we could test how it all fitted together.
How it worked
We wanted all 7 scrum teams to have sight of everything we were learning in private beta, which logistically is quite a challenge.
We set up a wall of research (see photo) which was highly visible to all teams. We printed the screens which were being tested each week and annotated them every day with comments from employers on post-it notes. Comments were either green (it’s fine, I can use it), amber (I’m a bit confused) or red/pink (I can’t carry on). Other colours were used for our notes. This wall became a visual way of sharing what employers were saying (for example, a pile of red post-its on one screen showed there was an issue).
Once a week, the user research clan (user researchers from each scrum) invited all teams to stand in front of the wall and ask questions about employers’ feedback. Following the session, the researchers worked with the relevant product owners to make decisions about changes which were needed to the product.
Everyone had to do their bit to make it work; at times it was a real challenge to get things done on time, and sometimes, in the spirit of agile, we would change the priorities of what was built first. This meant that we had to be flexible about our research and re-think where we could best add value.
We ran the cycle for two months with 100 employers. By the end, we were much closer as product teams, and we all understood how the service fitted together from an employer's perspective. We're now all working to a single backlog so that we keep the end-to-end experience at the front of our minds.
A lot of the changes we made were in the detail: how to word things better or where to place items on screen. But there were some bigger, more important lessons too:
1. Details about our users
We learnt more about exactly who our users will be. During the alpha phase, the people in the organisations we engaged with who were keen to talk to us were generally apprentice or training managers, often sitting within HR and usually at middle management level. When they started to use the service, they quickly realised there were some tasks they couldn't complete alone. Because of this, we began to engage with other potential users, like members of the payroll team or more senior managers.
2. How different organisations are structured
Again, because we were asking employers to use the service for the first time in a real situation and to input real data, they often needed to communicate with different parts of their organisation. Sometimes these departments were completely separate or even outsourced. Awareness of this really helped us to tighten up our segmentation and personas.
We discovered, for example, that finding the right person with access to Government Gateway details was really hard. In a small organisation, those details could be with someone sitting at the next table. In a big organisation, this could be outsourced, or handled by someone in a completely different department or building.
3. Assumptions about users’ knowledge
During alpha, all of our testing was moderated, and most of it focused on small parts of the service at a time. This meant that participants were usually given some context about the service and what was being tested. Unmoderated testing in private beta really challenged our assumptions about users' understanding of what the service is for and how to use it.
Private beta taught us a lot about our users and what they did and didn’t know. I’m looking forward to finding out even more as we enter public beta. You can follow our progress here on the SFA Digital blog by signing up to email updates or our RSS feed.