One of the worst lines we have ever heard in an article or blog post trying to explain how to do analytics for startups was the following by Ryan Holiday in a Kissmetrics post:
“A somewhat counterintuitive comment on analytics: I see people make decisions that are backed by the numbers, but violate common sense. If your ad is working (getting clicks), but it’s boring, you have a problem. Because as soon as you stop running the ad, the clicks will stop.”
A quick question for him: how does he know if an ad is boring? A better way to phrase the question: how do you measure quality?
Right now his answer is common sense. He is appealing to your vanity. He is trying to convince you of one of the falsehoods that good growth hackers tell you to avoid: that your mental model of your customer’s thought process is more accurate than the one in the real world, your actual customer’s mental model.
There are valid places to start with your mental model. Suppose that instead of your brain, we only have a computer model - a script called intuitiontron.php. 80% of the time, intuitiontron.php accurately predicts the click-through rate of a given advertisement, and 20% of the time it predicts incorrectly. It would be a useful script to run if you had written 100 different pieces of copy but only had the budget for 10 ads, in order to give your actual test ad buy the best chance of succeeding. Most of the time, it would choose ads that are likely to do well in actual test buys. However, it would be sheer folly to pull an ad which intuitiontron.php says is “boring and has a low response rate”, but which we already know has a high conversion rate, since we know intuitiontron.php fails to predict click-through rates accurately 20% of the time.
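To make this concrete, here is a small simulation (a hypothetical sketch, not anything from the original post - the CTR ranges and the 80% accuracy mechanism are made-up assumptions): a predictor that reports the true click-through rate 80% of the time and a random guess otherwise still picks a much better-than-average set of 10 ads out of 100, which is exactly why it is useful before a test buy, and why it should not override data from real buys.

```python
import random

random.seed(42)

def intuitiontron(true_ctrs, accuracy=0.8):
    """Toy stand-in for intuitiontron.php: with probability `accuracy`
    it reports an ad's true click-through rate; otherwise it reports a
    random guess in the same 0-10% range."""
    predictions = []
    for ctr in true_ctrs:
        if random.random() < accuracy:
            predictions.append(ctr)                      # correct prediction
        else:
            predictions.append(random.random() * 0.10)   # wrong guess
    return predictions

# 100 pieces of ad copy with (unknown to us) true CTRs between 0% and 10%
true_ctrs = [random.random() * 0.10 for _ in range(100)]
predicted = intuitiontron(true_ctrs)

# Budget for only 10 test buys: run the 10 ads the model likes best
ranked = sorted(range(100), key=lambda i: predicted[i], reverse=True)
chosen = ranked[:10]

avg_chosen = sum(true_ctrs[i] for i in chosen) / len(chosen)
avg_all = sum(true_ctrs) / len(true_ctrs)
print(f"average true CTR of model's 10 picks: {avg_chosen:.3f}")
print(f"average true CTR of all 100 ads:      {avg_all:.3f}")
```

The picks beat the field on average even though one in five predictions is garbage - the model is useful for choosing what to test, but once an ad has real conversion data, the data wins.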
Your common sense plays the same role as intuitiontron.php - it is a mental model rather than a computer model. But it is still a model, not the real world. It often predicts the world correctly and occasionally gets things wrong. When it gets things wrong, you should try to understand why, improve your model, and accept reality.
This is how the scientific method works. We build models, test them against reality, and update as needed.
This does not mean there are no models of branding that could tell us whether an ad is bad despite getting lots of clicks. In fact, there is a whole family of models, called media mix models, designed to do exactly that. They were developed in the 1950s by JWT to measure the effectiveness of various advertising mediums for consumer packaged goods (like soap). These models assume that you know what you’ve spent on advertising and that you can get accurate data back about what you’ve sold, particularly your product’s market share.

In practice, media mix modeling is not very suitable for startups, which are often trying to create or reorganize an entire category of products. Further, most of these models heavily overweight discounts, a problem when you are trying to grow your revenue. Without this sort of statistical power, it is in fact impossible to know whether a branding campaign is failing. Maybe one day these models can be adapted in a way that makes sense for a startup, but that day is not today. And without the numbers that come out of such models, Ryan Holiday’s advice should be considered spurious at best.
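For readers who have never seen one, the core idea behind a media mix model can be sketched as a regression of sales on per-channel ad spend. The sketch below runs on entirely synthetic data - the channels, spend figures, and ROI numbers are invented for illustration, and real media mix models add complications like adstock (carryover) and saturation effects.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly data

# Synthetic weekly spend per channel (the "media mix"): TV, radio, print
spend = rng.uniform(0, 100, size=(weeks, 3))

# Assumed ground truth for the simulation: each dollar of TV/radio/print
# spend returns $2.00/$0.50/$0.10 of sales, plus a $500 baseline and noise.
true_roi = np.array([2.0, 0.5, 0.1])
sales = 500 + spend @ true_roi + rng.normal(0, 20, size=weeks)

# Fit: sales = baseline + sum(roi_i * spend_i), by ordinary least squares
X = np.column_stack([np.ones(weeks), spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, roi = coef[0], coef[1:]
print("estimated baseline:", round(baseline, 1))
print("estimated ROI per channel (TV, radio, print):", np.round(roi, 2))
```

Note what the model needs to work: two years of spend and sales data per channel, and stable channels whose effect can be isolated. That statistical power is precisely what a startup inventing a new category does not have.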
Therefore, if a growth hacker comes to you and talks about how your work is “boring” and your brand is “off” - ask them for the numbers, and ask them how they found those numbers, before accepting their advice.