With so many tools to measure, experiment, and quickly iterate through the designs of cloud software products, it’s tempting to fly by instruments and “let the market show you the way”.
Engineers love structured and formulaic methods, so they’re naturally drawn to micro-improvement problems. The thing is, making something better only works if you start with something good. And good product design starts far outside the realm of the quantifiable. For this reason, it’s a bad idea to rely too much on iterative optimization during the early, formative years of a product-oriented startup.
Better is easy, good is hard
The biggest problem with micro-improvements is that they’re commercially introverted. They don’t drive real growth. Real growth comes from making a product that meets customer needs, telling a story that resonates, designing an engaging experience, building scalable distribution channels, making people happy, and getting them to talk about you. None of these things can be built by A/B testing or polishing funnels. Yes, they can be improved — but that can wait for now.
Click-through rates don’t write good stories
I’m sure The Economist employs great editors who write compelling headlines, but their work rests on a solid layer of good reporting and analysis. If you tried to build a publication simply by optimizing for the click-through rate of the headlines, you’d probably end up with a tabloid.
There’s a common misconception that optimization is a series of micro-improvements to polish something into perfection. So, the more stuff you try, and the more things you optimize, the better your product gets. But it’s not about perfection through accumulation — it’s about choice. You optimize for something. You choose something that you sacrifice other things for, and this implicitly sets a design direction.
The dangerous mind-game of micro-optimization is that it consistently yields improvements in this or that metric that give a sense of progress. Setting your direction to chase whatever metric seems to be the easiest to push at the moment is an easy way to get trapped around a local maximum. You tell your investors that you’re focusing on the high-impact factors, but the truth is you might be going down a product design rathole.
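The local-maximum trap has a textbook analogue in greedy hill-climbing. A toy sketch (the metric landscape and numbers here are invented for illustration, not from the post): if you only ever accept small tweaks that improve the current metric, you settle on whatever peak happens to be nearest, even when a much higher one exists elsewhere.

```python
def metric(x):
    """Hypothetical 'metric landscape': a small peak near x=2
    and a much higher peak near x=8."""
    return 1.0 / (1 + (x - 2) ** 2) + 3.0 / (1 + (x - 8) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Greedily accept whichever small tweak improves the metric."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=metric)
        if best == x:  # no neighbouring tweak helps any more: stuck
            break
        x = best
    return x

# Starting near the small peak, greedy iteration climbs it and stops there,
# never discovering the far higher peak near x=8.
stuck = hill_climb(0.0)
print(f"settled at x={stuck:.1f}, metric={metric(stuck):.2f}")
```

Every iteration reports an improvement, which is exactly the "sense of progress" described above — the loop only fails silently, by stopping short.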
Behaviour is a symptom, not a cause
It’s a very common mistake to optimize for proxies of an outcome, instead of for the outcome itself. We make recruiting software. Statistically, our loyal, paying customers share some common behaviours: they add our jobs widget to their website, post to job boards, email candidates, and invite their colleagues to schedule interviews and give feedback.
I can trick you into doing some of the above tasks with some big yellow buttons (we’ll A/B test the colour) that each say — actually, never mind what it says, we’ll A/B test that too. But does the correlation work backwards? The common behaviours are only symptoms of the collision of people who care about hiring with software that does the job well. A substantial and lasting increase in the number of people that engage with the product cannot be achieved by gimmicks.
Funnels are a useful model for troubleshooting flows and discovering hidden causalities, but it’s worth remembering that they are merely an abstraction of a segment of user experience, not the experience itself. You can’t compartmentalize user experience and drive it by min-maxing clicks and screens.
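To see just how lossy the abstraction is, here is a minimal funnel computation (step names and counts are invented for illustration): each step-to-step ratio compresses everything a user experienced between two screens into a single number.

```python
# Hypothetical signup funnel: raw counts at each step (invented numbers).
funnel = [
    ("visited landing page", 10_000),
    ("started signup",        2_500),
    ("finished signup",       1_000),
    ("invited a colleague",     300),
]

# Step-to-step conversion rates: useful for spotting where users drop off,
# but each ratio discards *why* they dropped off.
rates = [n / prev_n for (_, prev_n), (_, n) in zip(funnel, funnel[1:])]

for ((prev_name, _), (name, _)), rate in zip(zip(funnel, funnel[1:]), rates):
    print(f"{prev_name} -> {name}: {rate:.0%}")
```

The table of ratios tells you *where* to look, but only talking to users and interpreting their behaviour tells you *what* is actually happening there.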
User behaviour is something to be measured, understood, and interpreted. Design follows the interpretation of behaviour but shouldn’t try to force it.
In a startup environment where resources, time, and attention are in short supply, it’s vitally important to pick one thing to obsess over at any one time and only optimize for that one thing.
Optimizing for acquisition is a good bet. It’s a broad enough goal to affect the design of everything that relates to product-market fit and marketing, but it leaves some complex questions like pricing, churn, and cost-efficiency out of the picture for the moment. Focusing on increasing the conversion rate on step 2 of some arbitrary funnel is too specific and too low-leverage to make a meaningful impact on your product development. Testing, measuring, and iterating are important tools for a product designer, but they’re only helpful if you ask the right questions.
The problem with insignificant battles is that they’re still battles nonetheless. It’s too easy to spend enormous amounts of time optimizing for something that won’t make much of a difference. To make it worse, it’s a sophisticated-looking task, so it feels like a good day’s work. We’re suckers for streams of small but frequent rewards.
Don’t get me wrong. It’s healthy to be analytical and make decisions supported by factual evidence. It’s smart to harvest quick wins by removing adoption blockers and refining critical flows. It’s beneficial to nudge users here and there. But you can’t build an entire product by throwing shit at a wall to see what sticks.
[This post was originally published on the Inside Intercom blog.]