If you’ve been following my product-market-fit series, you should now understand how to conduct customer and product research.
You’ve done all the hard work of talking to people and gathering data. Now you need to surface those learnings and turn them into actionable insights!
In this post, we’ll cover how to do just that. 💪
Welcome to part 4 of my PMF sprint playbook series! Make sure to catch up on the previous posts 👀
Part 1: How to run effective PMF sprints
Part 2: How to get people to talk to you about your ideas
Part 3: How to extract unbiased insights from customer calls
Surfacing the signal
We’ve already briefly discussed how to structure and format your customer calls to optimise for unbiased information. The ultimate goal, and the reason you started this whole process in the first place, hasn’t changed. You have to:
Discover an important and unmet need.
Hopefully, if you’ve done your customer calls right, you will have gathered valuable data and can extract these jobs-to-be-done or unmet needs from each one. LLMs are obviously great at this (a minimal sketch follows the examples below), but really try to distil the core learning from each call into a couple of sentences. For recruitment agencies this could be, for example:
Clients want visibility into recruitment progress but in manageable formats
Retained jobs often come with detailed market maps, built in Excel spreadsheets and shared with clients. This duplicates work and takes an hour on average to complete.
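If you do want an LLM to take the first pass, something like this is enough (a minimal sketch assuming the OpenAI Python SDK; the prompt, model name and variable names are mine, not a prescribed setup):

```python
# Sketch: distil each customer call into a couple of sentences of
# unmet needs. Assumes `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "From this customer call transcript, summarise the single most "
    "important unmet need or job-to-be-done in at most two sentences."
)

def distil_call(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model will do
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content.strip()

# insights = [distil_call(t) for t in transcripts]
```

Even so, read the raw notes yourself. The distillation is a starting point, not a verdict.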
In cases where no obvious themes or clusters emerge, you can drill down further by plotting the data on a priority matrix in your preferred whiteboarding tool. Here’s one we did for project management:
The idea here is to help you identify, rank and prioritise the interesting problems surrounding your hypothesis. This is not an exact science; you will need to mix intuition with past experience and your memory of previous conversations. Hopefully you will have gathered strong data during your interviews on whether or not these needs are important.
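If a whiteboard isn’t your thing, the same matrix takes a dozen lines of Python (a sketch only; the axes and the example needs are my assumptions, not from our project management exercise):

```python
# Sketch: plot distilled needs on a 2x2 priority matrix.
# Assumes `pip install matplotlib`; the data points are invented.
import matplotlib.pyplot as plt

needs = {
    "Progress visibility": (9, 3),  # (importance, how well it's solved today)
    "Market map exports": (7, 2),
    "Candidate reminders": (4, 6),
}

fig, ax = plt.subplots()
for label, (importance, satisfaction) in needs.items():
    ax.scatter(importance, satisfaction)
    ax.annotate(label, (importance, satisfaction))

ax.axvline(5, linestyle="--")  # quadrant guides
ax.axhline(5, linestyle="--")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_xlabel("Importance to the customer")
ax.set_ylabel("How well it's solved today")
ax.set_title("Priority matrix")
plt.show()
```

High importance and poorly solved is the quadrant you care about.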
You’ll walk away from this process with either:
Confirmation that your initial hypothesis was spot-on
Discovery of adjacent problems with greater opportunities for growth
In both cases you’ll want to dig deeper and feed this learning back into your product roadmap or into your list of growth experiments for your next PMF sprint.
Start with the end
The best way to start thinking about next steps for any of these ideas is to define the expected results. This is the hardest part of planning, but it will help you clearly define the success criteria for when to commit or move on, and it makes the potential impact on your goal concrete. In our case, it also flagged areas where we weren’t tracking or surfacing the right data from our product analytics.
You’ll also want to quantify your hunch that an idea will work. That will help you prioritise and decide which ideas to tackle first.
Sean Ellis, co-author of “Hacking Growth”, codified this process into the ICE framework. The main idea is to list these hypotheses and score each against three criteria:
Impact: “If successful, how big will the impact be on our goal?”
Confidence: “How sure are you that this will work?”
Ease: “How easy is this to implement?”
Here are a few examples of what this could look like in a fictional scenario:
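Something like this (every row and number below is invented for illustration):

| Hypothesis | Impact | Confidence | Ease | Expected results | Results date |
| --- | --- | --- | --- | --- | --- |
| Weekly client progress digest email | 8 | 6 | 7 | +10% weekly active clients | 2 weeks |
| Fake-door button for market map export | 6 | 5 | 9 | ≥ 25% of viewers click it | 1 week |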
I added a results date to measure “time to impact” and to optimise for fast feedback loops.
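If you’d rather keep the ranking in code than in a spreadsheet, a toy version of the scoring is enough (the hypotheses and scores below are made up; I’m averaging the three scores here, though some teams multiply them instead):

```python
# Toy ICE sketch: score each hypothesis 1-10 on Impact,
# Confidence and Ease, then rank by the average.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    impact: int      # "If successful, how big will the impact be on our goal?"
    confidence: int  # "How sure are you that this will work?"
    ease: int        # "How easy is this to implement?"

    @property
    def ice(self) -> float:
        return (self.impact + self.confidence + self.ease) / 3

backlog = [
    Hypothesis("Weekly client progress digest", impact=8, confidence=6, ease=7),
    Hypothesis("Auto-generated market maps", impact=9, confidence=4, ease=3),
]

for h in sorted(backlog, key=lambda h: h.ice, reverse=True):
    print(f"{h.name}: ICE = {h.ice:.1f}")
```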
What’s missing from this sheet is the methodology: how you will actually execute and move the needle. In some cases this is obvious; when it isn’t, work out the minimum viable experiment that will give you evidence of positive results.
The key is to break a potentially high-effort experiment down into the smallest effort that still tests the hypothesis. Iterate on this until the ‘Ease’ score is ideally above 5, while accepting that a smaller experiment will have a lesser impact or dilute your expected results.
A classic example of this is the fake door test, where you present an option to end users and track engagement before implementing anything. Read this pricing experiment from a growth engineer at Masterclass for a more concrete example.
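To show how cheap a fake door can be, here’s a hypothetical Flask endpoint; the route name, log file and response are all placeholders of mine, not from the Masterclass experiment:

```python
# Minimal fake-door sketch using Flask (pip install flask).
# We log every click on a not-yet-built feature to measure demand.
from datetime import datetime, timezone
from flask import Flask

app = Flask(__name__)

@app.route("/api/export-market-map", methods=["POST"])
def fake_door():
    # Record the click, then tell the user the feature isn't ready yet.
    with open("fake_door_clicks.log", "a") as log:
        log.write(f"{datetime.now(timezone.utc).isoformat()}\n")
    return {"status": "coming_soon"}, 202

if __name__ == "__main__":
    app.run()
```

The button exists, the feature doesn’t; the click log is your evidence.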
Another idea that we’ve been playing around with is changing the ‘Expected results’ column into ‘Kill criteria’, for example:
< 25% adoption after 7 days ➜ kill
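Checking a criterion like that at the end of the experiment window can be a one-liner (a hypothetical helper; the numbers are invented):

```python
# Sketch: evaluate a kill criterion after the experiment window.
def should_kill(adopters: int, exposed: int, min_adoption: float = 0.25) -> bool:
    """True if adoption after the window is below the threshold."""
    if exposed == 0:
        return True  # nobody even saw it: kill
    return adopters / exposed < min_adoption

# e.g. 40 adopters out of 200 exposed users after 7 days -> 20% adoption
print(should_kill(adopters=40, exposed=200))  # True: kill it
```

Writing the threshold down up front removes the temptation to move the goalposts later.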
Lean learning loop
Your PMF journey lives and dies by how fast you can learn and experiment before you run out of money. Form hypotheses, distil the raw signal, and feed it into the next experiment. Keep the loop tight and the rituals light.
Everything else is just noise.