Pellets not cannonballs: How we experiment at Monzo


I joined Monzo in October 2021 as a product designer in our Operations collective. We’re responsible for the customer support experience in the Monzo app as well as the internal tooling we use to help customers.

Monzo is #1 for overall service quality in the UK. But there’s always room to improve and we’re constantly thinking about our customers and their needs.

When we think we’ve found something to improve, we run an experiment. We run experiments mostly to figure out what better product solutions we could build for user problems or pain points. As a designer, I love this way of working because it means we stay focused on the customer while measuring our impact and gathering learnings for future work. In my first 6 months at Monzo we ran 21 experiments!

In this post, I'll explain what my first few months looked like as a member of the design community at Monzo and my role in those experiments.

Experiments: this is the way

Most experiments are simple A/B tests where we show half our customers what’s called an 'enabled variant' (a fancy way of saying the new experience) and the other half the 'control', which is what we’re testing against. We also run a smaller, longer-term 'holdout' group to make sure the effects we’re seeing from one experiment aren’t unravelling over time, or interacting with other experiments in a negative way.

A table showing control with 50% weight and 45820 users, enabled with 50% weight and 45956 users, and holdout with 0% weight and 10294 users.

Once results reach significance and there’s no further doubt about which variant has won, we shift 100% of the weight (minus the holdout) to the winning design and build from there. It becomes the new 'control'.
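To make those mechanics concrete, here’s a minimal sketch of how weighted variant assignment can work. It assumes a hypothetical hash-based bucketing function and illustrative weights (a roughly 10% holdout carved out first, with the rest split 50/50); it’s not Monzo’s actual experimentation framework.

```python
import hashlib

# Illustrative weights only, not Monzo's real configuration.
# The holdout comes first so its bucket boundaries stay fixed even if the
# control/enabled split changes later (e.g. shifting weight to the winner).
WEIGHTS = {"holdout": 0.10, "control": 0.45, "enabled": 0.45}

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into a variant by hashing their ID."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest, 16) % 10_000 / 10_000  # uniform value in [0, 1)
    cumulative = 0.0
    for variant, weight in WEIGHTS.items():
        cumulative += weight
        if point < cumulative:
            return variant
    return "control"  # guard against floating-point rounding

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user_123", "new_help_flow"))
```

Because the bucket is derived from a hash of the user ID and the experiment name, a customer sees the same variant on every session, and keeping the holdout’s slice first and fixed means later weight changes don’t pull anyone out of it.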

When I joined in October, our Operations squads had already run 6 experiments in just the month before, so I joined a team with a lot of momentum. The challenge was to get up to speed quickly enough to match this pace.

What a project looks like

So, what does a project look like? How were our squads able to ship 21 experiments in such a short amount of time? And what were the steps we took to get there?

Every idea at Monzo begins with a proposal. We use Notion heavily. It’s definitely the most writing I’ve done as a designer. We have lots of templates and best practices for getting our ideas out and into a form that’s easy to share with everyone across a squad or the wider business for feedback and iteration.

A screenshot showing part of an experiment proposal template on Notion. The headings include: What problem are we trying to solve? Why should we solve it? How should we solve it? What if this problem didn't exist?

In total, I myself wrote 5 or 6 of the proposals that went on to become experiments. Anyone can write them, and direction is definitely not set from the top down. Some of the best ideas I’ve worked on have come from our customer operations staff themselves, often in the form of feedback gathered during shadowing sessions, or from our customer care product partners, who have an incredible wealth of knowledge about our customers and those who help them.

Getting data as early as possible

Proposals often include a good amount of data to help size impact. It’s great at this early stage to get a sense of the problem’s scale, as well as the potential opportunity.

We use Looker, a tool I have a love-hate relationship with from previous roles. But at Monzo, I find our data to be a lot more approachable. And when it isn’t, I can easily reach out to a data scientist to walk me through something more complex, like writing an SQL query to narrow things down.

Used effectively, data can be a strong communicator on proposals, often leading to “no way” moments that help expose particularly surprising behaviour and lead to a lot of discussion and 🤯 emojis.

Data also plays a huge role in how we measure success and impact. There are separate, even more data-rich experiment plans, written up alongside proposals, that clearly define what success looks like. These cover what we should be looking for once an experiment concludes in order to judge whether it had the impact we expected.

Pellets not cannonballs

Once a proposal’s been written up and shared with the team, then highlighted almost everywhere with comments, improved based on feedback, sometimes rewritten totally, and (if it’s not abandoned – that’s ok, it happens) prioritised for an experiment, it’s launched as a 'pellet'.

A screenshot of a presentation slide. On the left is a list of benefits of pellets including that they are quick to build, help us learn quickly and are low risk. On the right-hand side are cannonballs with a list of risks, including: long time to build, slow to learn from, and potential for high reward but no guarantee.

If you’re trying to sink a ship and you only have a few cannonballs you can fire, the best course of action is to shoot lots of little pellets to find your target first. Then launch your cannonball. The great thing about pellets is that sometimes one of them can hit so well on target that they take out the captain in one shot and it turns out you fired a cannonball after all.

We use pellets to scope out projects from the outset, asking ourselves tough questions about the focus, scale, and risk of work. As a regulated bank, keeping experiments small, nimble, and rigorously adherent to our values and governance means there are no regrets if a hypothesis doesn’t turn out to be true and the control wins. Thankfully this happens infrequently, but we always learn a lot when it does.

Yarrr!

You can imagine how many pirate memes this framework has generated. It’s gone a long way towards building momentum, allowing us to string together a number of experiment launches in quick succession. Once an experiment wraps up, it’s also common for the findings to inspire whole new proposals, which in turn become whole new experiments.

It’s one of the most motivating ways I’ve worked, and I highly recommend adopting this simple 'we sink ships' analogy to inspire your team to always learn quickly and say no to big projects that sap a team’s energy.

It’s how we launched 21 experiments in my first 6 months at Monzo. And how we’ll continue to work incredibly hard to keep our #1 ranking for service quality. Our band of customer support pirates are always hunting for our next ship to sink, designing and improving the customer experience with each experiment we launch.

If you want to join us and help launch even more experiments, we're currently looking for a Director of Product Design and a Senior Product Designer.