How we use design to create business impact


Monzo has grown spectacularly over its 9-year history and has set the bar for product-led growth. 

Product-led growth (PLG) is a strategy where the product itself becomes the primary driver of acquiring and retaining customers. Instead of relying solely on traditional sales or marketing, businesses design the product to be incredibly valuable, with the intention that this very act encourages people to discover, adopt, and advocate for it independently. 

Our first example of product-led growth was our invite-only Golden Ticket referral scheme (yes, it was Wonka-inspired), in which customers could gift their friends and family exclusive access. It was very successful in spreading the word, and it was our first foray into network effects.

However, just because we've had this success in the past doesn't mean we should rest on our growth laurels. In this post, I'll delve into how we're building on it by adopting new growth design ways of working, and into why design is so uniquely positioned as a business tool for growth. Buckle up!

In this blog post, I aim to help you understand:

  • what Growth Design is

  • how we use experimentation to learn

  • how Monzo applies experimentation to create great experiences

  • how to craft an effective hypothesis

  • and how to map metrics to an experiment

What is Growth Design?

Growth design is the intersection of product, marketing, design and data. Here at Monzo, we take pride in adding our unique product and brand craft to the mix, too. 

A Growth Designer shares many of the traits and much of the expertise of a Product Designer, but may index higher in some specialities than others. Both focus on creating well-designed experiences that build customer and business value, and both use experimentation techniques to learn whether their design has an impact. How they apply these methods is where the difference lies. 

Experiment to learn

A growth designer is more likely to use experimentation principles to learn than to de-risk a design. Experimenting to learn requires a different approach: the design may be bolder than usual, aiming to better understand a person's behaviour, whereas a design the team already has confidence in only needs to be de-risked.

Fry from Futurama squinting with the words 'Not sure if it's the future or just an experiment' around him.

For example, we wanted to better understand how new customers interact with our app after the onboarding period. While we had several assumptions about which features would be helpful, we needed more data. To get it, Almo, our growth designer, and his squad ran an experiment that simplified the signup process for new customers and focused on a core set of setup actions in our app. The experiment was deliberately bold, guiding new users along a predetermined path so we could learn what works. 

Overall, we got more customers 'through the door' and significantly improved physical card activations, a metric that has historically been hard to move. However, the experiment also reduced feature adoption and overdraft uptake, and we now know that overdrafts drive significant revenue during the first few days of a new customer's lifecycle.

At Monzo, product designers focus on creating great core experiences that generate value for our customers, and Growth Designers focus on getting more customers to that value sooner. The critical factor in this difference is time. A growth designer might ask themselves: 'What is the thinnest slice of this design that will build my confidence in it?'. It's about learning faster to remain competitive and continue creating great things for our business and customers.

Scrappy, not crappy!

A key point to note here is that we're not in the business of creating half-baked experiences. We don't want the customer or prospective customer to feel like they're in an experiment—we don't want them to see behind the curtain. Seeing the edges of the experiment will erode our hard-earned trust and potentially bias results. The person must feel like this is a real experience to get the richest learning. We have adopted a mantra to help us remember this: scrappy, not crappy!

For us, 'scrappy, not crappy' means focusing on the core hypothesis and crafting a high-quality slice of the experience to test it. The counterbalance at Monzo is that we're likely to spend a little more time refining our design than other companies that practise growth. This may mean cutting scope to get the experience right, but that's a good tradeoff as long as we're consistently laddering up to the hypothesis. 

Monzo stands out with its beloved brand and community of super fans. Our growth design journey aims to help people interested in Monzo become customers and customers into superfans, all whilst preserving the inviting and delightful nature of our product experience.

Putting Growth Design into practice

In growth design, the scientific method serves as our fundamental approach, aligning with the iterative and data-driven nature of the discipline. Just as scientists approach experiments to test hypotheses and uncover insights, growth designers leverage this methodology to create and refine strategies that drive customer acquisition and activation. 

We also use Spotify’s Thoughtful Execution framework to ensure we’re working on the right things. 

Our process begins with observation and understanding; growth designers keenly observe user behaviours, leveraging data and insights to identify opportunities and challenges. This forms the basis for hypotheses, where designers work with their teams to articulate assumptions and potential solutions to address specific aspects of the user journey. 

Hypothesis generation is central to this process, and crafting a good hypothesis is no small task!

Crafting hypotheses

A hypothesis is an educated, testable statement about how a specific change or experiment might impact user behaviour, engagement, or a targeted business metric. It serves as a foundation for conducting experiments and is formulated based on insights, observations, and an understanding of user needs. A well-crafted hypothesis often includes the expected outcome and the rationale behind why the change is expected to have a particular effect. The goal is to test and validate the hypothesis through data-driven experimentation, allowing growth teams to learn and iterate on their strategies.

A woman looking confused with maths equations and diagrams floating around her.

The format that works for me has been to write the statement like this: 

We know that [1] is true. If we do [2], then we will see [3] happening, resulting in [4] a change in the metric.

  1. Start with what you know (data and insight)

  2. then what you will change (your action)

  3. then what you expect to happen and why (your assumed reason)

  4. …and finally, how you know this change will be successful (what metric it's moving)

An example of what this might look like in practice: 

45% of web visitors are abandoning their applications [1]. We believe that re-engaging people with an abandoned application campaign [2] will help people remember to continue [3] and increase the sign-up form submitted to the application completed conversion rate by at least +10% [4]. 
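Once an experiment like this has run, part [4] gets checked against the data. As an illustration of how that check works (this is not Monzo's actual tooling, and all the numbers are invented), a standard two-proportion z-test in Python can tell you whether an observed lift in a conversion rate is likely to be real rather than noise:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical readout: control converts at 55%, variant at 60.5% (a +10% relative lift)
p_a, p_b, z, p = two_proportion_z_test(1100, 2000, 1210, 2000)
print(f"control {p_a:.1%} vs variant {p_b:.1%}: z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value comes out well below 0.05, so a team could treat the lift as significant; with smaller samples the same relative lift might not be.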

Data centricity

A young Bill Gates lounging on a desk surrounded by old computers with the words 'When it comes to data analysis, I excel' around him.

Data-centricity is fundamental to the Growth Designer's toolbox. Growth teams meticulously analyse user interactions, employing statistical tools and methodologies to identify patterns, correlations, and areas for improvement. This continuous analysis informs the refinement of hypotheses and ensures a dynamic, informed approach to the evolving design. Just as the scientific method emphasises revising hypotheses based on results, growth designers embrace an iterative mindset. Failures are not setbacks but stepping stones, guiding designers toward improved touchpoints and enhanced user experiences. 

Yoda from Star Wars with his quote 'The greatest teacher, failure is'

Learning is a process and involves many failed attempts, as ex-Head of Growth at Miro Kate Syuma describes:

…our activation experiments didn’t show statistical significance for quarters… after 20+ experiments with the onboarding flows we finally uncovered the trend. And even after that, the first iteration with the new approach wasn’t successful, but the second was. You need to fail at least once to do the right thing.
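Waiting quarters for statistical significance often comes down to sample size: small relative lifts on small baseline rates need a lot of traffic to detect. A rough power calculation makes this concrete. The sketch below (my own illustration with invented numbers, not anything from Miro or Monzo) uses the standard normal approximation for a two-proportion test:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_base, rel_lift, alpha=0.05, power=0.8):
    """Rough per-arm sample size for a two-proportion A/B test."""
    p1 = p_base
    p2 = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2)

# e.g. a 5% baseline activation rate and a hoped-for +10% relative lift
print(sample_size_per_arm(0.05, 0.10))
```

For these hypothetical figures you'd need roughly 31,000 users per arm, which is one reason activation experiments can take a long time to resolve.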

Watch the metrics

Once we have a hypothesis and a plan, the next step is determining what metrics this experiment will move. All our squad initiatives must align with overall company objectives, so knowing what numbers we are moving is critical to determining if we are on the right track. 

We line up our metrics by determining a primary, secondary, and guard rail metric:

The primary metric is the single figure that evidences impact. Isolating the experiment to a single primary metric helps us avoid misinterpreting its results.

The secondary metrics are numbers that are useful to monitor but not essential to the decision. They can help identify future hypotheses and generate ideas for further experiments.

The guardrail metric protects us from unintended harm. If an experiment unexpectedly moves a guardrail metric, we may reduce our trust in the results, or stop the experiment entirely if it is unintentionally blocking or harming users or the organisation.

Here is one of our recent metric maps for a test on monzo.com:

Primary metric: /sign-up/ to /sign-up/initiated conversion rate (moving people from one page to another, basically)

Secondary metrics: 

  • Homepage viewed to sign up initiated page viewed

  • Homepage to business application started

  • Homepage to business application completed

Guardrail metric: Bounce rate
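A metric map like this can be expressed as a small data structure with a decision rule layered on top. The sketch below is a deliberately simplified illustration (the metric names are modelled on the map above, the thresholds are invented, and a real readout would also account for statistical significance):

```python
from dataclasses import dataclass

@dataclass
class MetricMap:
    primary: str
    secondary: list[str]
    guardrails: dict[str, float]  # metric name -> worst acceptable relative change

def review(metric_map: MetricMap, observed: dict[str, float]) -> str:
    """Rough verdict for an experiment readout.

    `observed` maps metric names to relative change vs control (+0.10 = +10%).
    """
    # Guardrails are checked first: a breach overrides any primary-metric win
    for name, limit in metric_map.guardrails.items():
        if observed.get(name, 0.0) > limit:
            return f"stop: guardrail '{name}' breached"
    lift = observed.get(metric_map.primary, 0.0)
    return "ship" if lift > 0 else "iterate"

# Hypothetical map modelled on the monzo.com test described above
mm = MetricMap(
    primary="signup_to_initiated_conversion",
    secondary=["home_to_signup_initiated", "home_to_business_started"],
    guardrails={"bounce_rate": 0.02},  # tolerate at most a +2% relative rise
)
print(review(mm, {"signup_to_initiated_conversion": 0.12, "bounce_rate": 0.01}))
```

Making the guardrail check run before the primary-metric check encodes the priority described above: protecting users and trust comes before claiming a win.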

What’s next?

Run the test! You have your hypothesis, your design and your plan; all that's left is to sit back and watch the numbers come in… up or down!

A looping gif of George from Seinfeld eating popcorn on a couch.

At Monzo, we value customer feedback greatly, and this is especially true for growth design. When we launch an experiment, we analyse customers' usage data, and to ensure a comprehensive understanding we also conduct one-to-one interviews for deeper insights. This helps us better understand the impact of our experiments and make informed decisions when iterating in the future.

As with any growth endeavour, our journey with growth design is iterative. We're learning, growing and adjusting our process as we go, optimising for exceptional customer experiences whilst ensuring we grow sustainably as one of the UK's best banks. 

If you’d like to learn more about Growth Design, here are some interesting links:

Defining Growth Design: The Guide to the Role Most Startups are Missing

The emergence of the growth designer

From UX to Growth Design: 5 principles to multiply your value

Kate Syuma’s Newsletter and Podcast, Growth Mates

Come join us 🤩

If this sounds like your thing and you'd like to get involved, check out our careers page, where we keep our open roles up to date. We also have a couple at the moment that might interest you 👇