UK public sector digital projects fall under the influence of GDS and the Service Design Manual, which describes five service phases: Discovery, Alpha, Beta, Live and Retirement.
In this post, I will discuss what it means to run a successful Alpha. To illustrate this, I'll refer to the recent work we successfully completed for the Office for National Statistics (ONS) as part of its Digital Transformation Programme.
What is an Alpha?
As described by the GDS Service Design Manual, "Alpha is the development phase that comes after discovery." Therefore, to understand the Alpha we should take a quick look at Discovery.
In this initial phase, the project team is expected to identify the users of the service being considered, map out their user journey(s) and understand how their needs are currently being met. Discovery is all about analysis and should be treated as a green-field approach to service design. Consideration should be given to how the needs of the users might be met with a digital/technology solution and how that solution might fit into the organisation. The Discovery team should also consider what shape the Alpha phase will take and, more importantly, what shape the team delivering that phase will take.
At the end of Discovery, the team present their findings to their GDS-assigned assessment panel and, if the panel is satisfied, the project is given the green light to proceed into Alpha.
Transitioning from Discovery to Alpha
When starting Alpha, it is vital that due consideration is given to both user research and technical prototyping. It is easy to take the 'development phase' headline and start building a solution, when in fact the purpose of Alpha is to build prototype solutions that can be tested with users. The findings of this user research form the basis for moving on to actual service/product development in the Beta phase.
A typical Alpha team will consist of a full-time User Researcher, a Technical Lead, a Product Owner and a Delivery Manager, plus a mixture of highly skilled technical and analytical specialists, depending on the solution envisaged coming out of Alpha.
For the ONS, we brought in DevOps expertise, front-end web development and back-end Python development, as we knew there would be significant challenges in building and deploying the new product due to a transformation of their cloud infrastructure. We also had a UX specialist and a Business Analyst with domain knowledge. This was quite a large team for an Alpha, but we wanted to make sure we could prove the technical build as well as provide prototypes to support user research.
We started Alpha by consuming the outputs from Discovery and then, in collaboration with the Service Manager, identified three user stories that would cut vertically and horizontally through the service to improve understanding of some of the key unknowns. In particular, we wanted to expand our understanding of the needs of two key user personas that had proved hard to unpack during Discovery. In addition, we had stories to develop working back-end and front-end prototypes that would talk to each other using the micro-services architecture and languages that the ONS preferred.
Alpha was timeboxed at six two-week sprints. It's important to get this timebox agreed at the start: in many cases, the boundary between each GDS phase is also a spend-control gateway, and the cost of the Alpha, being timeboxed, should ideally be understood before you start, and certainly within the first sprint.
During our Alpha, we produced several iterations of a functional prototype with increasing features that were demonstrable at the fortnightly show and tell. We also iterated through sketches, wireframes and clickable prototypes for both the front stage and back stage, which were used in user research.
By the end of the six sprints, we were confident that we had proven the infrastructure. We had a CI pipeline from development to production that we demonstrated as part of the final show and tell. We were also confident that our understanding of the user needs had been sufficiently expanded that the next step, building a production-ready Beta, was viable. As part of the output from our Alpha, we also made recommendations for the next steps in user research: in this case, to better understand the assisted digital user journey, which had been particularly hard to explore during the Alpha.
Navigating the Service Assessment
When it came to assessment, we were fortunate to be the first team in the ONS to go through the new style of assessment panel. In the past, the panel had used a set of 18 questions to drive the session. Recognising that this tended to cause fixation on the early (user research) questions, GDS moved to a prompted conversation: the agenda is now based on the question set but not controlled by it.
This approach allowed a freer-flowing session that could identify and discuss the areas requiring deeper understanding, without the prescriptive Q&A of previous assessments. It encouraged the team to 'know their stuff' rather than just know the answers to the questions.
That's not to say we didn't know the answers: we demonstrated this by creating a page on our Confluence for each of the original 18 questions.
Throughout the Alpha, I instilled a single message in the team. While it was obvious that the best approach would be for the ONS to retain much, if not all, of the team into the Beta, I held firm to the mantra that the team should treat the Alpha as a discrete project, and that the outputs from the Alpha should be placed in a (virtual) box with a nice ribbon on it. The Beta team could then walk in and unwrap the Alpha to absorb everything we had done and discovered.
This approach meant that the team were focussed on the end date and the results they were producing, and it led to a very successful assessment. In fact, the service manager, himself a former senior GDS assessor, said it was probably the best he'd seen!