Making big decisions is hard. And at Monzo, it’s not always easy to know the best way to explain, price or design something.
We’ve got an amazing user research team, who carry out research with real customers, and use what they learn to help us make more informed decisions. But another way we try to make better decisions is by running experiments.
Today, more than two million people use Monzo. And we couldn’t possibly understand all the different ways our customers see and think about things. Making decisions based on our own opinions could lead to problems, or have consequences we hadn’t expected. So we test out our ideas with you, whenever we can.
When do we do experiments?
Let’s take Monzo’s energy switching feature as an example.
When you start the switching process, you’ll swipe through three screens that explain how it works.
One way we can measure how well the feature's working is to see how many people finish their energy switch after seeing these screens.
If our data shows us that 40% of people close the app (or “drop-off”) when they reach the second screen, that’s a clear signal something’s not quite right.
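To make that concrete, a drop-off rate like this is just a ratio of funnel counts. Here's a loose sketch with made-up numbers (not real Monzo data):

```python
# Hypothetical counts of people who reached each explainer screen.
viewed = {"screen_1": 1000, "screen_2": 950, "screen_3": 570}

def drop_off_rate(funnel, step, next_step):
    """Share of people who saw `step` but never reached `next_step`."""
    return 1 - funnel[next_step] / funnel[step]

# 1 - 570/950 = 0.4, i.e. 40% of people drop off at the second screen.
rate = drop_off_rate(viewed, "screen_2", "screen_3")
```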
It might be that there are too many screens, or that a particular screen has too much text, unclear language or a confusing image.
We’ll run an experiment to test different solutions to the problem, and check whether making those changes increases the number of people who finish their energy switch. Below are three examples of screens we might show people as part of the experiment.
How do experiments work?
We often test just one thing at a time, so we can draw clear, actionable conclusions. This approach is known as “split” or “A/B testing”. Sometimes we’ll split things even further, and test multiple variations with several groups of people.
Let’s say we’ve decided to A/B test one of the new screens above. We’ll keep showing the original screen to one group of people, without making any changes (this is known as the “control”).
We’ll show the new screen to another group of people (this is known as the “treatment”).
Then we’ll monitor the results to see if the new screen makes a difference.
Our aim is to increase the number of people swiping through all three screens and completing their energy switch. If the new screen performs better, and the results of our tests are reliable, we’ll show the new screen to all Monzo customers.
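One common way to split people into control and treatment groups is to hash each user into a stable bucket, so the same person always sees the same version. Here's a rough sketch of that idea (the names and numbers are hypothetical, not Monzo's actual system):

```python
import hashlib

def assign_group(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing the experiment name together with the user ID gives the same
    answer every time for the same person, without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # a stable number in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# Once the test has run, compare completion rates between the two groups.
# (shown, completed) counts below are invented for illustration.
results = {"control": (5000, 2100), "treatment": (5000, 2400)}
rates = {group: done / shown for group, (shown, done) in results.items()}
```

Including the experiment name in the hash means one person can land in different groups across different experiments, which keeps the groups independent.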
How do we know if our results are reliable?
There are a few important steps we need to follow to make sure we can rely on our results:
Keep all other factors the same. If something else is different about the two groups, and how and when they see the screens, we can’t be sure that it hasn’t influenced the results. So we’d start the test for everyone at the same time of day, in the same format, and for the same length of time.
Make sure the results aren’t random. We run the test with a group of people that’s large enough, and for a period that’s long enough, that if there’s a real difference we’ll probably see it. This helps us confirm that the results we’ve seen are highly unlikely to be down to chance – so we’re confident that if we’d included all our customers in the test, we’d have come to the same conclusion.
Don’t influence people’s behaviour by telling them they’re part of a test. We can’t tell people that they’re part of an experiment, as it might influence their behaviour in some way. For example, you might spend more time reading the screens than you normally would, or feel pressured to make a decision that you otherwise wouldn’t have.
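The “make sure the results aren’t random” step usually comes down to a statistical significance test. A minimal sketch, using a standard two-proportion z-test with hypothetical counts (not Monzo's actual tooling):

```python
from math import erf, sqrt

def two_proportion_z_test(n_a, x_a, n_b, x_b):
    """Two-sided z-test for a difference between two conversion rates.

    n_a, n_b: number of people shown each version.
    x_a, x_b: number of people who completed the switch in each group.
    Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# With 5,000 people per group, a jump from 42% to 48% completion
# gives a tiny p-value, so it's very unlikely to be chance.
z, p = two_proportion_z_test(5000, 2100, 5000, 2400)
```

A small p-value (conventionally below 0.05) means a difference this large would almost never appear between two groups shown the same screen, which is what lets us roll the winning version out to everyone with confidence.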
By running experiments like this, we can get quick feedback from thousands of people. And we use that feedback to make Monzo even better!
This is how we approach product development at Monzo. Instead of polishing features internally for months and months based on our own predictions about what works, we release features early to get feedback from you!
What does this mean for me?
This means you’ll occasionally see something in your Monzo app that other people won’t. So don’t worry if a friend or someone on the community forum sees something different.
We look at the overall results of the experiment and use those results to inform how we develop features. We don’t use experiments to learn more about you personally. We might look at other factors when we try to understand our results, like how often you use Monzo, the phone you have, or the time of day that you interacted with the experiment.
And in the long run, we hope that running experiments helps us build Monzo in a way that works for you.