But how do you *know*?

The case for (and against) testability

There’s an old saying in the ad world, attributed to John Wanamaker: ‘Half the money I spend on advertising is wasted; the trouble is I don’t know which half’.
Designing systems can feel a lot like this – agonising for hours over which features people will find valuable, which part is the core of the product and, more than anything else, 'What the Users Want'. Des Traynor wrote eloquently about this recently.

Approaches to finding out what users (or, in a more traditional sense, customers) want fall into two main groups. The first is the hero / genius model, in which the designer decides what the public needs and builds it for them; its famous practitioners include Thomas Edison, Henry 'If I asked my customers what they want, they'd say a faster horse' Ford, and the ubiquitous Steve Jobs. As Freakonomics tells us, any business built on a hero / genius model, whether music or sport or drug dealing, is filled with people lower down the pyramid of success struggling to make their mark.

The second model, based on the scientific method, has emerged relatively recently in technology, thanks in part to the rise of 'big data' and the means to analyse it, and to the data-driven approach to web optimisation championed by Lean Startup's Eric Ries. In this approach, every design variant can be tested with real users (through A/B testing), and the most successful chosen on hard data. While this has some obvious benefits (being able to quote the data showing that implementing the whim of a senior exec would be disastrous for conversion, for one), an over-reliance on data as a design aid can lead to focusing on the wrong things, as the legendary Jared Spool wrote recently.
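To make the A/B idea concrete, here's a minimal sketch of the kind of comparison an analytics tool runs under the hood – a two-proportion z-test on two variants' conversion rates. The function name and the visitor numbers are invented for illustration, not any particular product's API.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two variants' conversion rates; |z| > 1.96 is roughly
    'significant at the 95% level' under the usual normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # conversions pooled across variants
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: variant B converts 120/1000 visitors vs A's 100/1000.
z = two_proportion_z_test(100, 1000, 120, 1000)
print(f"z = {z:.2f}")  # ~1.43: suggestive, but not yet conclusive evidence
```

Even in this toy example the tension shows: a thousand visitors per variant yields data that is merely suggestive, and someone still has to decide.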

The answer seems to lie somewhere in between. I wrote a while ago about the differing strengths of people vs systems: the human mind is exceptionally good at connecting disparate thoughts, experiences and technologies into new ideas – the insight that fuels the creation of new products and services, and drives great businesses.

Systems, and the maturing market of analytics software built on them, are incredibly good at analysing the user behaviour that shows whether our great idea is any good at all. If we can't test whether an idea is any good, what's to stop us flitting to the next one when we meet the first real resistance, forging a reputation as an 'Ideas' person or, more likely, as a corporate magpie – jumping from one shiny new thing to the next?

However, if we rely on data to make every decision, we're looking backwards to decide how to go forwards, and when the time comes to make a big decision where the data is inconclusive (or absent), we'll be terrified of the responsibility.

In business, as in life, there is risk at every turn – most of us do all we can to minimise risk, to understand the context of our decisions, and to have the courage of our convictions.

The truth is that if we're doing something new, we never really *know*. As a once-decent crooner sang: 'you gotta have faith.' And data.