
The Entrepreneur’s Guide to Customer Development

Brant Cooper and Patrick Vlaskovits have written a new book, The Entrepreneur’s Guide to Customer Development, which builds upon the foundational work of The Four Steps to the Epiphany while improving accessibility, updating the ideas, and making it more actionable. I believe it is the best introduction to Customer Development you can buy.


As all of you know, Steve Blank is the progenitor of Customer Development and author of The Four Steps to the Epiphany. I have personally sold many copies of his book, and continue to recommend it as one of the most important books a startup founder can read. 

I used to give copies of Four Steps out to my employees, in the hopes that it would instantly indoctrinate them into the methodology of Customer Development. I just assumed that everybody would love the book as much as I did, and would instantly change their behavior based on what they read in a book. You can imagine how well that worked. Instead of that naive approach, I wish I'd had a book like this one, to help me figure out how to get started with customer development step-by-step. 

When I wrote a review of Four Steps on this blog in November 2008, I did my best to be candid and warn of a few shortcomings:
And Steve is the first to admit that it's a "turgid" read, without a great deal of narrative flow. It's part workbook, part war story compendium, part theoretical treatise, and part manifesto. It's trying to do way too many things at once. On the plus side, that means it's a great deal. On the minus side, that has made it a wee bit hard to understand.
Brant and Patrick undertook a difficult challenge: to provide a generally accessible introduction to Customer Development, without diluting its impact or dumbing-down its principles. I think they've succeeded.

The Entrepreneur’s Guide is an easy read.  It is written in a conversational tone, doesn't take itself too seriously, and avoids extraneous fluff. It does a great job of laying out general principles and suggesting specific, highly actionable tactics. You can easily take from it whatever makes sense for your business, and leave the rest. And it's incredibly to-the-point: you can digest this book in a couple of hours.

While the customer development framework of Four Steps is universally relevant, The Entrepreneur’s Guide updates its practices for modern startups. Four Steps primarily centers its stories and case studies on B2B hardware and software startups. This new volume also tackles examples from the Internet and wireless startups of today, both B2B and B2C. And throughout, they maintain a thoroughly realistic take on the power - and limitations - of an entrepreneurship methodology:
Successful implementation of Customer Development, let alone simply believing in it, will not guarantee success for your business. Customer Development will help you – force you – to make better decisions based on tested hypotheses, rather than untested assumptions. The results of the Customer Development process may indicate that the assumptions about your product, your customers and your market are all wrong. In fact, they probably will. And then it is your responsibility, as the idea-generator (read: entrepreneur), to interpret the data you have elicited and modify your next set of assumptions to iterate upon.
Many “airport business books” urge entrepreneurs to never give in. They tell them to persist in their dream of building a great product and/or company, no matter what the odds are or what the market might be telling them – success is just around the corner. They tend to illustrate this sort of advice with inspiring stories of entrepreneurs who succeeded against all odds and simply refused to throw in the towel. While maintaining persistence and willpower is certainly good advice, Customer Development methodologies are designed to give you data and feedback you may not want to hear. It is incumbent upon you to listen.
The Entrepreneur’s Guide to Customer Development includes four powerful case studies/interviews with successful entrepreneurs who have taken iterative approaches to their respective startups that very much resemble the spirit of Lean Startups and Customer Development.  I found these to be particularly interesting and worthwhile.

At the heart of Brant and Patrick's interpretation of Customer Development is their belief that its fundamental teaching is to question assumptions. This gives them a hook with which to apply their ideas to a wide variety of situations. In other words, if particular examples in the book don’t apply to you directly, Brant and Patrick show you how to figure out what might work for you.  This is important, since every situation is different.  I'll give them the last word:
You are already skeptical of Customer Development and Lean Startups and the slew of emerging buzzwords and supple-to-the-point-of-meaningless terms. That’s great, more power to you; we applaud your skepticism. But be philosophically consistent: periodically take the time to question your own expertise and that of your friends, partners and investors. Make the effort to test your assumptions.
If there’s a shortcoming to this book, it’s that it focuses primarily on the Customer Discovery step in The Four Steps. Here’s hoping they soon tackle Customer Validation. Well done, Brant and Patrick. I can't wait to see what's next. In the meantime, go buy a copy of The Entrepreneur’s Guide to Customer Development right now.


Don't be the Ice Cream Glove

I have a new post up today on O'Reilly Radar, called "Is your product an Ice Cream Glove or a Snuggie?" It is based on two videos I normally use in workshops - each of which contains an important entrepreneurship lesson for all of us. Here's an excerpt:
For those that haven’t watched it, I’ll give a brief recap. Ali G meets with business leaders and investors on Wall Street to learn how to create a new company around a new product idea. After some general lessons, he then proposes his first product idea, complete with flip charts, business plan, and marketing plan. His idea? The Ice Cream Glove, a special glove you can carry around with you so that, if you happen to eat ice cream, you can prevent your hands from getting sticky. After failing to persuade most of the investors to back him in that venture, he then tries to sell a second idea: a Hoverboard, “like from Back to the Future.” After all, they must have made at least one of them for the movie, right?

Both of these ideas for companies are terrible, and the show is funny because he manages to keep on selling them with a straight face. But there are also important lessons baked into the humor. Take the example of the Hoverboard. If you look at the typical startup, you will see the vast majority of their energy and time invested in building new technology. We act as if the biggest risk to startup success is that the technology won’t work. But in reality, most products fail because they are the Ice Cream Glove, that is, because there are no customers who will buy them.

Read the rest (and be sure to watch the videos)...
These videos make an important point: that almost all product ideas sound bad. At the whiteboard, you can make any idea seem brilliant or ridiculous. It's only by actually moving through the fundamental startup feedback loop, which involves facts, that we can find out which ones have a kernel of truth baked within them.

Let me also say a brief thank you to those who replied to my previous ask for feedback about cross-posting. So far, all the feedback has been in favor of doing it whenever I have a guest post elsewhere. If you have further thoughts, please leave them as a comment. Thanks!


Validated learning about customers

Would you rather have $30,000 or $1 million in revenues for your startup? Sounds like a no-brainer, but I’d like to try and convince you that it’s not. All things being equal, of course, you’d rather have more revenue rather than less. But all things are never equal. In an early-stage startup especially, revenue is not an important goal in and of itself.

This may sound crazy, coming as it does from an advocate of charging customers for your product from day one. I have counseled innumerable entrepreneurs to change their focus to revenue, and many companies who refuse this advice get themselves into trouble by running out of iterations. And yet revenue alone is not a sufficient goal. Focusing on it exclusively can lead to failure as surely as ignoring it altogether.

Let’s start with a simple question: why do early-stage startups want revenue? We all know why big companies want revenue – it’s one of two critical halves of the formula for profit. And big companies exist to maximize profit. Don’t startups exist for the same reason? I think such reasoning is an example of the “startup dollhouse fallacy” – that startups are just shrunken-down big companies. In fact, I don’t think revenue is in and of itself a goal for startups, and neither is profit. What matters is proving the viability of the company’s business model, what investors call “traction.” Demonstrating traction is the true purpose of revenue in an early growth company. (Of course this is not at all true of many profitable small businesses, but they are not what I mean by startups.) Before I explain what I mean, let me add an important caveat: traction is not just important for investors. It should be even more important to the founders themselves, because it demonstrates that their business hypothesis is grounded in reality. More on that in a moment.

Consider this company (as always, a fictionalized composite): they have a million dollars of revenue, and are showing growth quarter after quarter. And yet, their investors are frustrated. Every board meeting, the metrics of success change. Their product definition fluctuates wildly – one month, it’s a dessert topping, the next it’s a floor wax. Their product development team is hard at work on a next-generation product platform, which is designed to offer a new suite of products – but this effort is months behind schedule. In fact, this company hasn’t shipped any new products in months. And yet their numbers continue to grow, month after month. What’s going on?

In my consulting practice, I sometimes have the opportunity to work with companies like this. Diagnosis is easy: they are exceptionally gifted salesmen. This is an incredible skill, one that most engineers overlook. True salesmen are artists, able to home in on just those key words, phrases, features, and benefits that will convince another human being to give up their hard-earned money in exchange for even an early product. For a startup, having great sales DNA is a wonderful asset. But in this kind of situation, it can devour the company’s future.

The problem stems from selling each customer a custom one-time product. This is the magic of sales: by learning about each customer in-depth, they can convince each of them that this product would solve serious problems. That leads to cashing many checks. Now, in some situations, this over-selling would lead to a secondary problem, namely, that customers would realize they had been duped and refuse to re-subscribe. But here’s where a truly great sales artist comes in. Customers don’t usually mind a bait-and-switch if the switched-to product really does solve an important problem for them. These salesmen used their insight into what their customers really needed to make the sale and then deliver something of even greater value. They are closing orders. They are gaining valuable customer data. They are close to breakeven. What’s the problem?

This approach is fundamentally non-scalable. These founders have not managed, to borrow a phrase from Steve Blank, to create a scalable and repeatable sales process. Every sale requires handholding and personal attention from the founders themselves. This process cannot be delegated, because it’s impossible to explain to a normal person what’s involved in making the sale. The founders have a lethal combination of insight about what potential customers want and in-depth knowledge about what their current product can really deliver. As a result, potential customers are being turned away; they can only afford to engage with the customers that are best qualified.

And what of the product development team? They are busy too, but they are not creating value for the company. They are trying to build a product to an ever-changing spec, based on intuitions from the founders about what might be able to sell itself. Worse, the founders are never around – they are too busy going out and selling! Without access to customer data, or even a clear product owner, the product development team keeps building feature after feature based on what they think might be useful. But since nobody in the company can clearly articulate what the product is, their efforts result in incoherence. Worst of all, their next-generation product is so bad they are not allowed to try it out on any customers. The team is thus completely starved of any form of external feedback.

Let me describe a different company, one with only $30,000 in revenue (again, pure fiction). This company has a large long-term vision, but their current product is only a fraction of what they hope to build. Compared to the million-dollar startup, they are operating at micro-scale. How does that stack up?

First of all, they are not selling their product by hand. Instead, each potential customer has to go through a self-serve process of signing up and paying money. Because they have no presence in the market, they have to find distribution channels to bring in customers. They can only afford those (like Google AdWords) that support buying in small volume.

Compensating for these limitations is the fact that they know each of their customers extremely well, and they are constantly experimenting with new product features and product marketing to increase the yield on each new crop of customers they bring in. Over time, they have found a formula for acquiring, qualifying, and selling customers in the market segments they have targeted. Most importantly, they have lots of data about the unit economics of their business. They know how much it costs to bring in a customer and they know how much money they can expect to make on each one.

In other words, they have learned to grow renewable audiences. Given the data they’ve collected about these early customers, they are also able to estimate with modest precision how big the market is for their product in its current form. They may be at micro-scale now, but they are in a very good position to raise venture money and engage in extremely rapid growth.
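The unit economics described above lend themselves to a quick back-of-the-envelope calculation. Here is a minimal sketch of that arithmetic in Python; every number and name below is invented for illustration, not drawn from any real company:

```python
# Back-of-the-envelope unit economics; all figures are hypothetical.

def payback_months(acquisition_cost, revenue_per_month):
    """Months of revenue needed to recoup the cost of acquiring one customer."""
    return acquisition_cost / revenue_per_month

cac = 0.50           # hypothetical cost to acquire one customer (e.g. via AdWords)
arpu = 1.00          # hypothetical revenue per user per month
months_retained = 6  # hypothetical average customer lifetime

ltv = arpu * months_retained  # lifetime value of one customer
print(f"LTV ${ltv:.2f} vs CAC ${cac:.2f} -> {ltv / cac:.0f}x return per customer")
print(f"Payback period: {payback_months(cac, arpu):.1f} months")
```

With numbers like these in hand, a team at micro-scale can make exactly the kind of market-size and growth estimates the paragraph describes.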

Our million-dollar startup, by contrast, is stuck in the mud.

Stories like these are what has led me to this definition of progress for a startup: validated learning about customers. (Steve calls this just Customer Validation, but I like to emphasize the learning aspect, so I accept a far more awkward phrase.)

This unit of progress is remarkable in several ways. First of all, it means that most aggregate measures of success, like total revenue, are not very useful. They don’t tell us the key things we need to know about the business: how profitable is it on a per-customer basis? What’s the total available market? What’s the ROI on acquiring new customers? And how do existing customers respond to our product over time?

Secondly, this definition locates progress firmly in the heads of the people inside the company, and not in any artifacts the company produces. That’s why dollars, milestones, products, and code don't count as progress by themselves. Given a choice between what a successful team has learned and the source code they have produced, I would take the learning every time. This is why companies often get out-competed by former employees (Palm vs Handspring, to name just one), even though the upstart lacks all of the familiar resources, tools, processes, and support they used to have. (Incidentally, it’s also why these upstarts often get sued for bogus reasons. Companies can’t believe they didn’t steal any of their “precious” assets.)

But learning is a tricky thing to quantify, which is why the word “validated” is so important in this definition. Validation comes in the form of data that demonstrates that the key risks in the business have been addressed by the current product. That doesn’t always mean revenue, either. Some products have relatively obvious monetization mechanisms, and the real risks are in customer adoption. Products can find sources of validation with impressive stats along a number of dimensions, such as high engagement, viral coefficient, or long-term retention. What’s important is that the data tell a compelling story, one that demonstrates that the business is on a solid economic footing. (It being so easy to convince yourself that you’re in one of these “special case” businesses, I do recommend you give revenue a long, hard look first.)
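As an illustration of one such adoption metric, here is a sketch of the viral coefficient using its standard definition (invites sent per user times the rate at which invites convert); the function and all numbers are hypothetical:

```python
# Hypothetical viral-coefficient sketch; the numbers are invented.

def viral_coefficient(invites_per_user, invite_conversion_rate):
    """New users each existing user brings in; above 1.0, growth compounds on its own."""
    return invites_per_user * invite_conversion_rate

k = viral_coefficient(invites_per_user=5, invite_conversion_rate=0.15)
print(f"k = {k:.2f}")  # below 1.0, so growth still depends on other channels
```

A number like this only validates the business if, as the paragraph argues, it fits into a compelling overall economic story.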

For example, I’ve talked a few times about how IMVU raised its first venture round with monthly revenues of around $10,000. This wasn’t very impressive, but we had two things going for us:
  1. A hockey stick shaped growth curve. People often forget the most important part of the hockey stick: the long flat part. We had months of data that showed customers more-or-less uninterested in our product. We were limping along at a few hundred dollars a month in revenue. All this time, we were continuously changing our product, talking to customers, trying to improve on our AdWords spend. Eventually, these efforts bore fruit – and this was evident in the data. This lent credibility to our claims about learning and discovery.

  2. Compelling per-customer economics. We had only a small number of customers – if memory serves, only a few thousand active users. But a little math will show that we were making over a dollar per-user per-month. Our cost to acquire a customer on AdWords was only a few cents. Our eventual VCs were quick to grasp what this meant (in fact, they understood it better than we did): that if our product achieved significant scale, it would be wildly profitable.
These two aspects could be plotted on one simple graph, which tells this equally simple story: if there is a market out there for this kind of product, we are the team that will find it and profit from it. That turned out to be a compelling investment thesis, despite our micro-scale results.

Let’s return to my example of the million-dollar-revenue company. If you find yourself in this kind of situation, what can you do? I’d suggest a few things, each rooted in the idea of breaking down the wall between the two halves of this company.
  1. Go on an agile diet. With a product development team that is not shipping, any agile methodology will surface major problems quickly. Force anyone who is in customer contact to take the role of the Product Owner and insist that they deliver something new on a short, regular interval.

  2. Get product into customers’ hands. The sales strategy currently leaves many customers completely un-served (those that don’t qualify for the founders’ personal time). Start using some of those customers as guinea pigs for a self-serve version of the product. Even if the product is absolutely terrible, it will establish a baseline against which the product development team can try and improve.

  3. Build tools for the sales team that reduce the time investment required for each sale. Instead of devoting all product development efforts to building a full-blown product, try building just those parts of the product that would allow the current sales process to go a little faster. For example, could we develop a simple landing page that would allow customers to pre-qualify for sales time? Iterating on these kinds of features has two benefits: it frees up time for the founders and simultaneously starts building a feedback loop with the product development team. Pretty soon, the text on that landing page is going to become an effective explanation for what the product does, because if it's not, the salesman will have to spend time re-explaining the product to potential customers. Time-to-complete-a-sale is not a bad metric for validated learning at this stage.
This last point is especially important. Although this kind of team may understand their customers well, they don’t yet know how to talk to them in a standardized way. Without that, they probably won’t achieve significant scale. (For more on how this plays into the process of scaling up, see the Customer Creation stage of the customer development model.) Perhaps they’ll be able to hire someone especially skilled in the marketing skills needed to find this positioning. But in the meantime, by iterating on their product with customers, they have a chance to get there on their own.


Combining agile development with customer development

Today I read an excellent blog post that I just had to share. Jim Murphy is a long-time agile practitioner in startups. He's often felt that there was something missing. In most agile development systems, there is a notion of the "product backlog": a prioritized list of what software is most valuable to be developed next. The breakthrough idea of agile is that software should be built iteratively, with the pieces that customers value most created first. This is a significant improvement on the traditional waterfall methodology.

But startups sometimes have trouble applying agile successfully. Or, rather, they apply it successfully, but things don't turn out so well. Enter Jim's post...
Customer Development - The Missing Piece!

But, over the years I’ve realized that the toughest problem - the one that matters most and was consistently the most challenging - was figuring out what the product backlog should be.

The backlog is the answer to the question: “What is the most important work we should do right now?” it presumes that you could confidently make that list, and keep it up to date as things change - or at least articulate what you’re building and for whom. Embedded in that assumption is why startups fail. How do you really make the best backlog for your company?

XP and Scrum don’t have much to say - they punt. Its by far the hardest part of the puzzle of shipping successful products and both recommend that you get a customer in the room and ask them to clarify what they want as you go. Well, that’s fine as far as it goes but when you’re a startup and you don’t have customers yet you need a way to bootstrap and that can feel awfully chaotic and wasteful. What’s worse is that as you grow you’ve probably developed some pretty bad habits as far as setting priorities and strategy: like thinking you’re a genius - just because you got funded - and that genius is what allows you to *know* what the market wants.

I remember having this exact same "aha!" moment, auditing Steve Blank's class when we were first building IMVU. Ever since that time, I have struggled to explain how the feedback loop in customer development should interface with the feedback loop in product development.

If you look at the origins of most agile systems, including Scrum and XP, they come out of experiences in big companies. Consider the classic project that was essential to the creation of extreme programming, the Chrysler Comprehensive Compensation System. This was to be a new piece of software to run payroll for Chrysler. In a project like that there are lots of big questions that need to be answered in order to build a working product. But you don't generally have to ask "what problem are we trying to solve?" That's pretty clear. In the case of C3, that was to run payroll for 87,000 employees, who were presumably receiving payroll before the project began. What causes projects like this to fail in traditional software development is that the solution is unknown. Agile is one way to succeed, because pursuing unknowns iteratively is a good way to mitigate risk. What do you do if the problem itself is unknown?

In a startup, rather than think of ourselves as having a marketing department and an engineering department, I now believe it's better to think of ourselves as focusing our energies on unknown problems and unknown solutions. Approaching each of them iteratively is the right thing to do. But the biggest payoff of all can be found when we combine them into one large company-wide feedback loop.

Last year, I found myself back in Steve Blank's class at Haas, this time trying to teach the students about what it's like running engineering alongside customer development. Working with Steve, I came up with schematic diagrams that I hope illustrate this point. (You can see the full deck in my post on Customer Development Engineering or listen to audio from a more recent lecture)

I thought given Jim's prompting it might be useful to post this excerpt. Notice that the unit of progress changes as we move from waterfall to agile to the lean startup. For more on this latter point and why it's so important, consider taking a look at the posts Achieving a failure and Throwing away working code.




Anyway, thanks Jim for the great post. And credit once again goes to Nivi from Venture Hacks for sharing it with me.


Achieving a failure

We spend a lot of time planning. We even make contingency plans for what to do if the main plan goes wrong. But what if the plan goes right, and we still fail? This is my most dreaded kind of failure, because it tricks you into thinking that you're in control and that you're succeeding. In other words, it inhibits learning. My worst failures have all been of this kind, and learning to avoid them has been a constant struggle.

See if this plan sounds like a good one to you:
  • Start a company with a compelling long-term vision. Don't get distracted by trying to flip it. Instead, try and build a company that will matter on the scale of the next century. Aim to become the "next AOL or Microsoft," not a niche player.
  • Raise sufficient capital to have an extended runway from experienced smart money investors with deep pockets who are prepared to make follow-on investments.
  • Hire the absolute best and the brightest, true experts in their fields, who in turn can hire the smartest people possible to staff their departments. Insist on incredibly high-IQ employees and hold them to equally high standards.
  • Bring in an expert CEO with outstanding business credentials and startup experience to focus on relentless execution.
  • Build a truly mainstream product. Focus on quality. Ship it when it's done, not a moment before. Insist on high levels of usability, UI design, and polish. Conduct constant focus groups and usability tests.
  • Build a world-class technology platform, with patent-pending algorithms and the ability to scale to millions of simultaneous users.
  • Launch with a PR blitz, including mentions in major mainstream publications. Build the product in stealth mode to build buzz for the eventual launch.
I had the privilege, and the misfortune, to be involved with a startup that executed this plan flawlessly. It took years, tens of millions of dollars, and the efforts of hundreds of talented people to pull it off. And here's the amazing thing about this plan: it actually worked. I think we accomplished every one of those bullet points. Check. Mission accomplished.

Only this company was a colossal failure. It never generated positive returns for its investors, and most of its employees walked away dejected. What went wrong?

This company was shackled by shadow beliefs that turned all those good intentions, and all that successful execution, into a huge pile of wasted effort. Here are a few:
  • We know what customers want. By hiring experts, conducting lots of focus groups, and executing to a detailed plan, the company became deluded that it knew what customers wanted. I remember vividly a scene at a board meeting, where the company was celebrating a major milestone. The whole company and board play-tested the product to see its new features firsthand. Everyone had fun; the product worked. But that was two full years before any customers were allowed to use it. Nobody even asked the question: why not ship this now? It was considered naive to suggest that the "next AOL" would ship a product that wasn't ready for prime time. Stealth is a customer-free zone. All of the efforts to create buzz, keep competitors in the dark, and launch with a bang had the direct effect of starving the company for much-needed feedback.

  • We can accurately predict the future. Even though some aspects of the product were eventually vindicated as good ones, the underlying architecture suffered from hard-to-change assumptions. After years of engineering effort, changing these assumptions was incredibly hard. Without conscious process design, product development teams turn lines of code written into momentum in a certain direction. Even a great architecture becomes inflexible. This is why agility is such a prized quality in product development.

  • We can skip the chasm. As far as I know, there are no products that are immune from the technology life cycle adoption curve. By insisting on building a product for mainstream customers, the company guaranteed that they would be unhappy with the number and type of customers they got for the first version. Worse was the large staff in departments appropriate to a mainstream-scale product, especially in customer service and QA. The passionate early adopters who flocked to the product at its launch could not sustain this outsized burn rate.

  • We can capitalize on new customers. As with many Silicon Valley failures, a flawless PR launch turned into a flawed customer acquisition strategy. The first version product wasn't easy enough to use, install, and pay for. It also had hardware requirements that excluded lots of normal people. Millions of people flocked to the website, but the company could only successfully monetize early adopters. As a result, the incredible launch was mostly wasted.

  • We know what quality means. All of the effort invested in quality, polish, stability and usability turned out to be for nothing. Although the product was superior to its competitors in many ways, it was missing key features that were critical for the kinds of customers who never got to participate in the company's focus groups (or were represented on its massive QA staff). Worse, many of the wrong assumptions built into the technical architecture meant that, in the real world outside the testing lab, the product's stability was nothing to write home about. So despite the millions invested in quality, the end result for most customers was no better than the sloppy beta-type offerings of competitors.

  • Advancing the plan is progress. This is the most devastating thing about achieving a failure: while in the midst of it, you think you're making progress. This company had disciplined schedules, milestones, employee evaluations, and a culture of execution. When schedules were missed, people were held accountable. Unfortunately, there was no corresponding discipline of evaluating the quality of the plan itself. As the company built infrastructure and added features, the team celebrated these accomplishments. Departments were built and were even metrics-driven. But there was no feedback loop to help the company find the right metrics to focus on.
These shadow beliefs have a common theme: a lack of reality checks. In my experience, great startups require humility, not in the personal sense, but in the organizational capacity to emphasize learning. A good learning feedback loop trumps even the best execution of a linear plan. And what happened with this ill-fated company? Although it failed, many of the smart people involved have accomplished great things. I know of at least five former employees that went on to become startup founders. They all got a tremendous first-hand lesson in achieving a failure, all on someone else's dime (well, millions of dimes). As we move into a new economic climate, it's my hope that our industry will stop this expensive kind of learning and start building lean startups instead.

The interesting thing about an analysis like this is that it seems obvious in retrospect. A lot of people say that they know that they don't know what customers want. And yet, if you go back and look at the key elements of the plan, many of them are in subtle conflict with the shadow beliefs. Understanding that tension requires a lot of reflection and a good teacher. I myself didn't understand it until I had the opportunity to view that failure through the lens of the customer development theory. I had a big advantage, because Steve Blank, the father of customer development, got to see the failure up close and personal, too, as an investor and board member. A much less painful way to learn the lesson is to read his book: The Four Steps to the Epiphany.


What is customer development?

When we build products, we use a methodology. For software, we have many - you can enjoy a nice long list on Wikipedia. But too often when it's time to think about customers, marketing, positioning, or PR, we delegate it to "marketroids" or "suits." Many of us are not accustomed to thinking about markets or customers in a disciplined way. We know some products succeed and others fail, but the reasons are complex and unpredictable. We're easily convinced by the argument that all we need to do is "build it and they will come." And when they don't come, well, we just try, try again.

What's wrong with this picture?

Steve Blank has devoted many years now to trying to answer that question, with a theory he calls Customer Development. This theory has become so influential that I have called it one of the three pillars of the lean startup - every bit as important as the changes in technology or the advent of agile development.

You can learn about customer development, and quite a bit more, in Steve's book The Four Steps to the Epiphany. I highly recommend this book for all entrepreneurs, in startups as well as in big companies. Here's the catch. This is a self-published book, originally designed as a companion to Steve's class at Berkeley's Haas school of business. And Steve is the first to admit that it's a "turgid" read, without a great deal of narrative flow. It's part workbook, part war story compendium, part theoretical treatise, and part manifesto. It's trying to do way too many things at once. On the plus side, that means it's a great deal. On the minus side, that has made it a wee bit hard to understand.

Some notable bloggers have made efforts to overcome these obstacles. VentureHacks did a great summary, which includes slides and video. Marc Andreessen also took a stab, calling it "a very practical how-to manual for startups ... a roadmap for how to get to Product/Market Fit." The theory of Product/Market Fit is one key component of customer development, and I highly recommend Marc's essay on that topic.

Still, I feel the need to add my two cents. There's so much crammed into The Four Steps to the Epiphany that I want to distill out what I see as the key points:
  1. Get out of the building. Very few startups fail for lack of technology. They almost always fail for lack of customers. Yet surprisingly few companies take the basic step of attempting to learn about their customers (or potential customers) until it is too late. I've been guilty of this many times in my career - it's just so easy to focus on product and technology instead. True, there are the rare products that have literally no market risk; they are all about technology risk ("cure for cancer"). For the rest of us, we need to get some facts to inform and qualify our hypotheses (a fancy word for guesses) about what kind of product customers will ultimately buy.

    And this is where we find Steve's maxim that “In a startup no facts exist inside the building, only opinions.” Most likely, your business plan is loaded with opinions and guesses, sprinkled with a dash of vision and hope. Customer development is a parallel process to product development, which means that you don't have to give up on your dream. We just want you to get out of the building, and start finding out whether your dream is a vision or a delusion. Surprisingly early, you can start to get a sense for who the customer of your product might be, how you'll reach them, and what they will ultimately need. Customer development is emphatically not an excuse to slow down or change the plan every day. It's an attempt to minimize the risk of total failure by checking your theories against reality.

  2. Theory of market types. Layered on top of all of this is a theory that helps explain why different startups face wildly different challenges and time horizons. There are three fundamental situations that change what your company needs to do: creating a new market (the original Palm), bringing a new product to an existing market (Handspring), and resegmenting an existing market (niche, like In-n-Out Burger; or low-cost, like Southwest Airlines). If you're entering an existing market, be prepared for fast and furious competition from the incumbent players, but enjoy the ability to fail (or succeed) fast. When creating a new market, expect to spend as long as two years before you manage to get traction with early customers, but enjoy the utter lack of competition. What kind of market are you in? The Four Steps to the Epiphany contains a detailed approach to help you find out.

  3. Finding a market for the product as specified. When I first got the "listening to customers" religion, my plan was to talk to as many customers as possible, and build as many of the features they asked for as possible. This is a common mistake. Our goal in product development is to find the minimum feature set required to get early customers. In order to do this, we have our customer development team work hard to find a market, any market, for the product as currently specified. We don't just abandon the vision of the company at every turn. Instead, we do everything possible to validate the founders' belief.

    The nice thing about this paradigm is it sets the company up for a rational discussion when the task of finding customers fails. You can start to think through the consequences of this information before it's too late. You might still decide to press ahead building the original product, but you can do so with eyes open, knowing that it's going to be a tough, uphill battle. Or, you might start to iterate the concept, each time testing it against the set of facts that you've been collecting about potential customers. You don't have to wait to iterate until after the splashy high-burn launch.

  4. Phases of product & company growth. The book takes its name from Steve's theory of the four stages of growth any startup goes through. He calls these steps Customer Discovery (when you're just trying to figure out if there are any customers who might want your product), Customer Validation (when you make your first revenue by selling your early product), Customer Creation (akin to a traditional startup launch, only with strategy involved), and Company Building (where you gear up to Cross the Chasm). Having lived through a startup that went through all four phases, I can attest to how useful it is to have a roadmap that can orient you to what's going on as your job and company changes.

    As an aside, here's my experience: you don't get a memo that tells you that things have changed. If you did, it would read something like this: "Dear Eric, thank you for your service to this company. Unfortunately, the job you have been doing is no longer available, and the company you used to work for no longer exists. However, we are pleased to offer you a new job at an entirely new company, that happens to contain all the same people as before. This new job began months ago, and you are already failing at it. Luckily, all the strategies you've developed that made you successful at the old company are entirely obsolete. Best of luck!"

  5. Learning and iterating vs. linear execution. I won't go through all four steps in detail (buy the book already). I'll just focus on the paradigm shift represented by the first two steps and the last two steps. In the beginning, startups are focused on figuring out which way is up. They really don't have a clue what they should be doing, and everything is guesses. In the old model, they would probably launch during this phase, failing or succeeding spectacularly. Only after a major, public, and expensive failure would they try a new iteration. Most people can't sustain more than a few of these iterations, and the founders rarely get to be involved in the later tries.

    The root of that mistake is premature execution. The major insight of The Four Steps to the Epiphany is that startups need time spent in a mindset of learning and iterating, before they try to launch. During that time, they can collect facts and change direction in private, without dramatic and public embarrassment for their founders and investors. The book lays out a disciplined approach to make sure this period doesn't last forever, and clear criteria for when you know it's time to move to an execution footing: when you have a repeatable and scalable sales process, as evidenced by early customers paying you money for your early product.
It slices, it dices. It's also a great introduction to selling and positioning a product for non-marketeers, a workbook for developing product hypotheses, and a compendium of incredibly useful tactics for startups young and old.

When I first encountered this book, my first impulse was to buy a bunch of copies, give them out to my co-founders and early employees, and then expect the whole company's behavior to change radically the next day. That doesn't work (you can stop laughing now). This is not a book for everyone. I've only had luck sharing it with other entrepreneurs who are actually struggling with their product or company. If you already know all the answers, you can skip this one. But if you find some aspect of the situation you're in confusing, maybe this will provide some clarity. Or at least some techniques for finding clarity soon.

My final suggestion is that you buy the book and skim it. Try to find sections that apply to the startup you're in (or are thinking of building). Make a note of the stuff that doesn't seem to make sense. Then put it on your shelf and forget about it. If your experience is anything like mine, here's what will happen. One day, you'll be banging your head against the wall, trying to make progress on some seemingly intractable problem (like, how the hell do I know if this random customer is an early adopter I should spend time listening to, or a mainstream customer who won't buy my product for years). That's when you'll get that light bulb moment: this problem sounds familiar. Go to your shelf. Get down the book, and be amazed that you are not the first person to tackle this problem in the history of the world.

I have been continually surprised at how many times I could go back to that same well for wisdom and advice. I hope you will be too.


Using AdWords to assess demand for your new online service, step-by-step

If you want to build an online service, and you don't test it with a fake AdWords campaign ahead of time, you're crazy. That's the conclusion I've come to after watching tons of online products fail for a complete lack of customers. So I thought I would walk you through exactly how to run a "fake landing page" test using cheap tools that require no technical skills whatsoever.

Our goal is to find out whether customers are interested in your product by offering to give (or even sell) it to them, and then failing to deliver on that promise. If you're worried about disappointing some potential customers - don't be. Most of the time, the experiments you run will have a zero percent conversion rate - meaning no customers were harmed during the making of this experiment. And if you do get a handful of people taking you up on the offer, you'll be able to send them a nice personal apology. And if you get tons of people trying to take you up on your offer - congratulations. You probably have a business. Hopefully that will take some of the sting out of the fact that you had to engage in a little trickery.

To motivate you to give this a try, let me tell you a story from the early days of IMVU. It was fall 2004, and the presidential election was in full swing. One day, we became convinced that a killer app for IMVU would be to sell a presidential debate bundle, where our customers could put on a Bush or Kerry avatar, and then engage in mock debates with each other. It was one of those brilliant startup brainstorms that comes to the team in a flash, with a giant thunderclap. We spent weeks working on this new product, racing the clock so it would be done in time for the real presidential debates. We had endless arguments internally about what features it should include, how the avatars should look, and how much it should cost. We finally settled on a $1.99 price point, figuring that we wouldn't make much money, but at least we wouldn't get in the way of achieving scale. Finally the day came, we unleashed the landing page, emailed our existing customers, and started advertising online.

The net result: we sold exactly zero presidential debate avatars. None. Nada. We tried different price points, different ad copy, different landing pages. Nothing made any difference. Turns out, there was absolutely no demand whatsoever for that particular product. And we could have found that out quite easily, if we'd used the simple five-step process below. Oops - there went several precious weeks of development effort down the drain.

So, if you're interested in helping avoid mistakes like that, here are the steps:
  1. Get a domain name. It doesn't have to be the world's catchiest name, just pick something reasonably descriptive. If you're concerned about sullying your eventual brand name, don't use your "really good" name, pick a code name. Make sure your domain registrar offers free "website forwarding" if you don't use a hosting service that lets you use a custom domain name in step 2.

  2. Setup a simple website. I recommend using a hosted service like SnapPages. You basically want to create two pages: a landing page that says what your product does, and a signup page that people can use to register for it. If you're feeling charitable, you can add a third page that lets people know that the product isn't available right now, and that you'll get back to them when it is.

  3. Enable Google Analytics tracking. The nice thing about services like SnapPages is that they offer this built-in. You just have to sign up for Google Analytics, get your account number, and plug it into your site.

  4. Start an AdWords campaign. Google AdWords has no minimum buy required, so you can easily run a campaign for five dollars a day, or even less. Just put in your credit card. I recommend using their Keyword Tool to set up your initial list of ad targets. Don't worry about selecting particularly good keywords if you're new to SEM. Just load them all in and choose a low cost-per-click. I used to use $.05, but you might want to go as high as $.25 or $.50. Just make sure you choose a maximum daily budget you can afford to sustain for a few weeks. I would aim to get no more than 100 clicks per day - over the course of a week or two, you'll get pretty good conversion data.

  5. Measure conversion rates. Use Google's built-in Analytics/AdWords integration, to track the effectiveness of each ad you run. Then set up "goal tracking" in Analytics to see how many people actually sign up using your registration page. Here are the stats you want to pay particular attention to: the overall conversion % of customers from landing page to completed registration, the click-through-rate for your ads on different keywords, and the bounce rate of your landing page for different keywords.
Armed with that data, you will know a lot about what your business will look like when you finally do build the product you're imagining. At the very least, you can plug those assumptions into your financial model, now that you have a sense for what the cost of acquiring new customers might look like.
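The arithmetic behind step 5 is worth making concrete. Here's a minimal sketch in Python of the funnel math; every number is made up for illustration, standing in for what the AdWords and Analytics dashboards would actually report for your campaign:

```python
# Hypothetical results from a two-week fake-landing-page test.
# All figures below are invented for illustration.
impressions = 48000   # times the ad was shown
clicks = 1200         # visits to the landing page
signups = 36          # completed registrations
spend = 120.00        # total AdWords spend, in dollars

ctr = clicks / impressions      # how compelling the ad copy is
conversion = signups / clicks   # how compelling the landing page is
cpa = spend / signups           # rough cost to acquire one signup

print(f"Click-through rate: {ctr:.2%}")
print(f"Signup conversion:  {conversion:.2%}")
print(f"Cost per signup:    ${cpa:.2f}")
```

That last number, cost per signup, is the one to plug into your financial model: if acquiring a registration costs more than a registered customer is ever likely to be worth, you've learned something important for a few hundred dollars instead of a few months of engineering.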

Even more importantly, you can start to experiment with feature set, positioning, and marketing - all without building a product. Use Google Optimizer to try different landing pages (even radically different landing pages) to see if any particular way of talking about your product makes a difference to your conversion rates. And if you're getting conversion rates that you feel good about, try asking for a credit card or other method of payment. If that takes your conversion rate to zero, that doesn't necessarily mean you don't have a business, but I'd give it some serious thought. Products that truly solve a severe pain for early adopters can usually find some visionary customers who will pre-order, just based on the vision that you're selling. If you can't find any, maybe that means you haven't figured out who your customer is yet.

And if you don't know who your customer is, perhaps some customer development is in order?


Principles of Lean Startups, presentation for Maples Investments

Steve Blank and I had the opportunity to create a presentation about lean startups for Maples Investments. Maples Investments is the only venture investor I know who has oriented their entire strategy to lean startups. They've invested in many great companies, including IMVU, Digg, Kongregate, Twitter... you get the idea. I really enjoy working with them and their companies.

Steve and I worked to find a metaphor that would help explain the power of lean startups, and why they have a serious competitive advantage, especially in these challenging economic times. We borrowed John Boyd's OODA loop, a concept from military strategy. Boyd emphasized the importance of agility in combat: "the key to victory is to be able to create situations wherein one can make appropriate decisions more quickly than one's opponent." We think this same principle applies to startups, which have the same problems of maneuvering on unknown or confusing terrain.

As I've written previously, lean startups are built upon three main trends:
  • Technology commoditization. It is becoming easier and cheaper for companies to bring products to market, leveraging free and open source software, cloud computing, open social data (Facebook, OpenSocial), and open distribution (AdWords, SEO). Lean startups have the ability to use this commodity stack to lower costs and, more importantly, reduce time to market.

  • Agile software development. Agile allows companies to build higher quality software faster. This speeds up the Ideas-Code-Data feedback loop. Combined with the technology trends above, it also enables rapid deployment strategies like just-in-time scalability.

  • Customer development. It's not enough just to build a product with great features - you have to figure out if there is a market for it. The only way to do this is to get out of the building and test your hypotheses against reality. The biggest source of cost/time advantage that all lean companies have is avoiding building features that customers don't want.
In this presentation, we tried to summarize those trends, show how they give startups an advantage in good economic times as well as bad, and explain why they enable a new and better investment strategy. Hopefully others will find it useful as well.

For those interested in getting started with agile or customer development, I thought I'd include a few links. My path to lean startups began with Kent Beck and extreme programming. The best resources there are his book Extreme Programming Explained: Embrace Change and the gentle introduction at extremeprogramming.org. For customer development, start with Steve's book The Four Steps to the Epiphany or take a look at his recent Entrepreneurial Thought Leader Lecture.



When NOT to listen to your users; when NOT to rely on split-tests

There are three legs to the lean startup concept: agile product development, low-cost (fast to market) platforms, and rapid-iteration customer development. When I have the opportunity to meet startups, they usually have one of these aspects down, and need help with one or two of the others. The most common need is becoming more customer-centric. They need to incorporate customer feedback into the product development and business planning process. I usually recommend two things: try to get the whole team to start talking to customers ("just go meet a few") and get them to use split-testing in their feature release process ("try it, you'll like it").

However, that can't be the end of the story. If all we do is mechanically embrace these tactics, we can wind up with a disaster. Here are two specific ways it can go horribly wrong. Both are related to a common brain defect we engineers and entrepreneurs seem to be especially prone to. I call it "if some is good, more is better" and it can cause us to swing wildly from one extreme of belief to another.

What's needed is a disciplined methodology for understanding the needs of customers and how they combine to form a viable business model. In this post, I'll discuss two particular examples, but for a full treatment, I recommend Steve Blank's The Four Steps to the Epiphany.




Let's start with the "do whatever customers say, no matter what" problem. I'll borrow this example from randomwalker's journal - Lessons from the failure of Livejournal: when NOT to listen to your users.
The opportunity was just mind-bogglingly huge. But none of that happened. The site hung on to its design philosophy of being an island cut off from the rest of the Web, and paid the price. ... The site is now a sad footnote in the history of Social Networking Services. How did they do it? By listening to their users.
randomwalker identifies four specific ways in which LJ's listening caused them problems, and they are all variations on a theme: listening to the wrong users. The early adopters of LiveJournal didn't want to see the site become mainstream, and the team didn't find a way to stand up for their business or vision.

I remember having this problem when I first got the "listening to customers" religion. I felt we should just talk to as many customers as possible, and do whatever they say. But that is a bad idea. It confuses the tactic, which is listening, with the strategy, which is learning. Talking to customers is important because it helps us deal in facts about the world as it is today. If we're going to build a product, we need to have a sense of who will use it. If we're going to change a feature, we need to know how our existing customers will react. If we're working on positioning for our product, we need to know what is in the mind of our prospects today.

If your team is struggling with customer feedback, you may find this mantra helpful. Seek out a synthesis that incorporates both the feedback you are hearing plus your own vision. Any path that leaves out one aspect or the other is probably wrong. Have faith that this synthesis is greater than the sum of its parts. If you can't find a synthesis position that works for your customers and for your business, it either means you're not trying hard enough or your business is in trouble. Figure out which one it is, have a heart-to-heart with your team, and make some serious changes.




Especially for us introverted engineering types, there is one major drawback to talking to customers: it's messy. Customers are living, breathing, complex people, with their own drama and issues. When they talk to you, it can be overwhelming to sort through all that irrelevant data to capture the nuggets of wisdom that are key to learning. In a perfect world, we'd all have the courage and stamina to persevere, and implement a complete Ideas-Code-Data rapid learning loop. But in reality, we sometimes fall back on inadequate shortcuts. One of those is an over-emphasis on split-testing.

Split-testing provides objective facts about our product and customers, and this has strong appeal to the science-oriented among us. But the thing to remember about split-testing is that it is always retrospective - it can only give you facts about the past. Split-testing is completely useless in telling you what to do next. Now, to make good decisions, it's helpful to have historical data about what has and hasn't worked in the past. If you take it too far, though, you can lose the creative spark that is also key to learning.

For example, I have often fallen into the trap of wanting to optimize the heck out of one single variable in our business. One time, I became completely enamored with Influence: The Psychology of Persuasion (which is a great book, but that's for another post). I managed to convince myself that the solution to all of our company's problems was contained in that book, and that if we just faithfully executed a marketing campaign around the principles therein, we'd solve everything. I convinced a team to give this a try, and they tried dozens of split-test experiments, each around a different principle or combination of principles. We tried and tried to boost our conversion numbers, each time analyzing what worked and what didn't, and iterating. We were excited by each new discovery, and each iteration we managed to move the conversion needle a little bit more. Here was the problem: the total impact we were having was minuscule. It turns out that we were not really addressing the core problem (which had nothing to do with persuasion). So although we felt we were making progress, and even though we were moving numbers on a spreadsheet, it was all for nothing. Only when someone hit me over the head and said "this isn't working, let's try a radically new direction" did I realize what had happened. We'd forgotten to use all the tools in our toolbox, and lost sight of our overarching goal.

It's important to be open to hearing new ideas, especially when the ideas you're working on are split-testing poorly. That's not to say you should give up right away, but always take a moment to step back and ask yourself if your current path is making progress. It might be time to reshuffle the deck and try again.

Just don't forget to subject the radical new idea to split-testing too. It might be even worse than what you're doing right now.




So, both split-testing and customer feedback have their drawbacks. What can you do about it? There are a few ideas I have found generally helpful:
  • Identify where the "learning block" is. For example, think of the phases of the synthesis framework: collecting feedback, processing and understanding it, choosing a new course of action. If you're not getting the results you want, it's probably because one of those phases is blocked. I've had the opportunity to work with a brilliant product person who had an incredible talent for rationalization. Once he got the "customer feedback" religion, I noticed this pattern: "Guys! I've just conducted three customer focus groups, and, incredibly, the customers really want us to build the feature I've been telling you about for a month." No matter what the input, he'd come around to the same conclusion as before.

    Or maybe you have someone on your team that's just not processing: "Customers say they want X, so that's what we're building." Each new customer that walks in the door wants a different X, so we keep changing direction.

    Or consider my favorite of all: the "we have no choice but to stay the course" pessimist. For this person, there's always some reason why what we're learning about customers can't help. We're doomed! For example, we simply cannot make the changes we need because we've already promised something to partners. Or the press. Or to some passionate customers. Or to our team. Whoever it is, we just can't go back on our promise, it'd be too painful. So we have to roll the dice with what we're working on now, even if we all agree it's not our best shot at success.

    Wherever the blockage is happening, by identifying it you can work on fixing it.

  • Focus on "minimum feature set" whenever processing feedback. It's all too easy to put together a spec that contains every feature that every customer has ever asked for. That's not a challenge. The hard part is to figure out the fewest possible features that could possibly accomplish your company's goals. If you ever have the opportunity to remove a feature without impacting the customer experience or business metrics - do it. If you need help determining what features are truly essential, pay special attention to the Customer Validation phase of Customer Development.

  • Consider whether the company is experiencing a phase-change that might make what's made you successful in the past obsolete. The most famous of these phase-change theories is Crossing the Chasm, which gives very clear guidance about what to do in a situation where you can't seem to make any more progress with the early-adopter customers you have. That's a good time to change course. One possibility: try segmenting your customers into a few archetypes, and see if any of them sounds more promising than the others. Even if one archetype currently dominates your customer base, would it be more promising to pursue a different one?
As much as we try to incorporate scientific product development into our work, the fact remains that business is not a science. I think Drucker said it best. It's pretty easy to deliver results in the short term or the long term. It's pretty easy to optimize our business to serve one of employees, customers or shareholders. But it's incredibly hard to balance the needs of all three stakeholders over both the short and long-term time horizon. That's what business is designed to do. By learning to find a synthesis between our customers and our vision, we can make a meaningful contribution to that goal.


The one line split-test, or how to A/B all the time

Split-testing is a core lean startup discipline, and it's one of those rare topics that comes up just as often in a technical context as in a business-oriented one when I'm talking to startups. In this post I hope to talk about how to do it well, in terms appropriate for both audiences.

First of all, why split-test? In my experience, the majority of changes we made to products have no effect at all on customer behavior. This can be hard news to accept, and it's one of the major reasons not to split-test. Who among us really wants to find out that our hard work is for nothing? Yet building something nobody wants is the ultimate form of waste, and the only way to get better at avoiding it is to get regular feedback. Split-testing is the best way I know to get that feedback.

My approach to split-testing is to try to make it easy in two ways: incredibly easy for the implementers to create the tests and incredibly easy for everyone to understand the results. The goal is to have split-testing be a continuous part of our development process, so much so that it is considered a completely routine part of developing a new feature. In fact, I've seen this approach work so well that it would be considered weird and kind of silly for anyone to ship a new feature without subjecting it to a split-test. That's when this approach can pay huge dividends.
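To show how lightweight the implementer's side can be, here's one common way to make test assignment routine: derive the bucket from a hash of the user and experiment name, so no extra state has to be stored anywhere. This is a sketch of the general technique under my own assumptions, not the actual implementation from my previous job; the function name and details are invented:

```python
import hashlib

def assign_bucket(user_id, experiment, hypotheses=("control", "A", "B")):
    """Deterministically assign a user to a split-test bucket.

    Hashing (experiment, user_id) together guarantees that the same
    user always sees the same hypothesis for a given experiment, while
    different experiments bucket users independently of each other.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return hypotheses[int(digest, 16) % len(hypotheses)]

# The same user always lands in the same bucket for this experiment.
bucket = assign_bucket("user-42", "new-signup-flow")
```

Because assignment is a pure function of the inputs, any part of the code can ask which bucket a user is in without coordinating with any other part, which is what makes it cheap enough to wrap around every new feature.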

Reports
Let's start with the reporting side of the equation. We want a simple report format that anyone can understand, and that is generic enough that the same report can be used for many different tests. I usually use a "funnel report" that looks like this:


             Control        Hypothesis A   Hypothesis B
Registered   1000 (100%)    1000 (100%)    500 (100%)
Downloaded    650 (65%)      750 (75%)     200 (40%)
Chatted       350 (35%)      350 (35%)     100 (20%)
Purchased     100 (10%)      100 (10%)      25 (5%)


In this case, you could run the report for any time period. The report is set up to show you what happened to customers who registered in that period (a so-called cohort analysis). For each cohort, we can learn what percentage of them did each action we care about. This report is set up to tell you about new customers specifically. You can do this for any sequence of actions, not just ones relating to new customers.
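To make the mechanics concrete, here is a minimal sketch of how such a cohort funnel report could be computed from a raw event log. The event names and the `(user, hypothesis, stage)` layout are my assumptions for illustration, not something from the original report:

```python
from collections import defaultdict

# Funnel stages in order, matching the report above.
STAGES = ["registered", "downloaded", "chatted", "purchased"]

def funnel_report(events):
    """events: iterable of (user_id, hypothesis, stage) tuples.

    Returns {hypothesis: {stage: (count, percent_of_registered)}},
    where the cohort baseline is the set of registered users.
    """
    # Collect the set of distinct users who reached each stage, per hypothesis.
    reached = defaultdict(lambda: defaultdict(set))
    for user, hypothesis, stage in events:
        reached[hypothesis][stage].add(user)

    report = {}
    for hypothesis, stages in reached.items():
        base = len(stages["registered"]) or 1  # avoid division by zero
        report[hypothesis] = {
            s: (len(stages[s]), 100.0 * len(stages[s]) / base)
            for s in STAGES
        }
    return report

# Tiny example: two users register in "control", one converts all the way.
events = [
    ("u1", "control", "registered"),
    ("u2", "control", "registered"),
    ("u1", "control", "downloaded"),
    ("u1", "control", "purchased"),
]
report = funnel_report(events)
assert report["control"]["purchased"] == (1, 50.0)
```

Because each stage counts distinct users against the registered cohort, the percentages read exactly like the table above, for any time window you feed in.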

If you take a look at the dummy data above, you'll see that Hypothesis A is clearly better than Hypothesis B, because it beats B at every stage of the funnel. But compared to the control, its advantage holds only partway down: the lift at "Downloaded" has evaporated by the "Chatted" stage. This kind of result is typical when you ship a redesign of some part of your product. The new design improves on the old one in several ways, but those improvements don't translate all the way through the funnel. Usually, I think that means you've lost some good aspect of the old design. In other words, you're not done with your redesign yet. The designers might be telling you that the new design looks much better than the old one, and that's probably true. But it's worth conducting some more experiments to find a new design that beats the old one all the way through. In my previous job, this led us to confront the disappointing reality that sometimes customers actually prefer an uglier design to a prettier one. Without split-testing, your product tends to get prettier over time. With split-testing, it tends to get more effective.
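One caveat worth adding before declaring a winner: check that the difference isn't just noise. This is my addition rather than part of the original report format, but a standard two-proportion z-test is a quick sanity check, sketched here with nothing beyond the standard library:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test.

    conv_* = number of users who converted, n_* = cohort size.
    Returns (z, two_sided_p).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothesis A vs. control at the "Downloaded" stage from the table above:
z, p = z_test(750, 1000, 650, 1000)
# p comes out well below 0.05, so the 75% vs. 65% gap is very
# unlikely to be random chance at these cohort sizes.
```

With small cohorts the same percentage gap can easily be noise, which is another argument for keeping tests running until enough customers have flowed through the funnel.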

One last note on reporting. Sometimes it makes sense to measure the micro-impact of a micro-change: by making this button green, did more people click on it? But in my experience this is not useful most of the time. That green button is part of a customer flow, a series of actions you want customers to complete for some business reason. If it's part of a viral loop, it's probably trying to get them to invite more friends (on average). If it's part of an e-commerce site, it's probably trying to get them to buy more things. Whatever its purpose, try measuring it only at the level you care about. Focus on the output metrics of that part of the product, and the problem becomes a lot clearer. It's one of those situations where more data can impede learning.

I had the opportunity to pioneer this approach to funnel analysis at IMVU, where it became a core part of our customer development process. To promote this metrics discipline, we would present the full funnel to our board (and advisers) at the end of every development cycle. It was actually my co-founder Will Harvey who taught me to present this data in the simple format we've discussed in this post. And we were fortunate to have Steve Blank, the originator of customer development, on our board to keep us honest.

Code
To make split-testing pervasive, it has to be incredibly easy. With an online service, we can make it as easy to do a split-test as to not do one. Whenever you are developing a new feature, or modifying an existing one, you already have a split-test situation: the product as it will exist (in your mind), and the product as it exists already. The only change you have to get used to as you start to code in this style is to wrap your changes in a simple one-line condition. Here's what the one-line split-test looks like in pseudocode:


if (setup_experiment(...) == "control") {
    // do it the old way
} else {
    // do it the new way
}


The call to setup_experiment has to do all of the work, which for a web application involves a sequence something like this:
  1. Check if this experiment exists. If not, make an entry in the experiments list that records the hypotheses passed in as parameters to this call.
  2. Check if the currently logged-in user is part of this experiment already. If she is, return the name of the hypothesis she was exposed to before.
  3. If the user is not part of this experiment yet, pick a hypothesis using the weightings passed in as parameters.
  4. Make a note of which hypothesis this user was exposed to. In the case of a registered user, this could be part of their permanent data. In the case of a not-yet-registered user, you could record it in their session state (and translate it to their permanent state when they do register).
  5. Return the name of the hypothesis chosen or assigned.
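The sequence above can be sketched in a few lines. This is a minimal in-memory version for illustration: the `session` dict stands in for real session or user state, and the names are taken from the pseudocode rather than any real library:

```python
import random

# Step 1's "experiments list": experiment name -> [(hypothesis, weight), ...]
experiments = {}

def setup_experiment(name, hypotheses, session):
    """Implements steps 1-5: register the experiment, then return the
    hypothesis this user was (or is now) assigned to."""
    # 1. Register the experiment and its hypotheses on first sight.
    if name not in experiments:
        experiments[name] = hypotheses
    # 2. If this user was already assigned, keep them in that bucket
    #    so they see a consistent experience.
    assigned = session.get(name)
    if assigned is not None:
        return assigned
    # 3. Otherwise pick a hypothesis according to the weights.
    names = [h for h, _ in hypotheses]
    weights = [w for _, w in hypotheses]
    choice = random.choices(names, weights=weights)[0]
    # 4. Record the assignment (session state here; permanent user
    #    data for registered users in a real system).
    session[name] = choice
    # 5. Return the chosen hypothesis name.
    return choice

# Usage: repeated calls for the same session return the same hypothesis.
session = {}
h = setup_experiment("FancyNewDesign1.2",
                     [("control", 50), ("design1", 50)], session)
assert h == setup_experiment("FancyNewDesign1.2",
                             [("control", 50), ("design1", 50)], session)
```

A production version would also need to log each assignment so the funnel report can segment users by hypothesis, but the caller's contract is exactly this simple.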
From the caller's point of view, you just pass in the name of the experiment and its various hypotheses. You don't have to worry about reporting, or assignment, or weighting, or, well, anything else. You just ask "which hypothesis should I show?" and get the answer back as a string. Here's what a more fleshed-out example might look like in PHP:



$hypothesis = setup_experiment("FancyNewDesign1.2",
                               array(array("control", 50),
                                     array("design1", 50)));
if ($hypothesis == "control") {
    // do it the old way
} elseif ($hypothesis == "design1") {
    // do it the fancy new way
}

In this example, we have a simple even 50-50 split test between the way it was (called "control") and a new design (called "design1").

Now, it may be that these code examples have scared off our non-technical friends. But for those who persevere, I hope this will prove a helpful example you can show to your technical team. Most of the time when I am talking to a mixed team of technical and business backgrounds, the technical people start worrying that this approach will mean massive amounts of new work for them. But the discipline of split-testing should be just the opposite: a way to save massive amounts of time. (See Ideas. Code. Data. Implement. Measure. Learn for more on why this savings is so valuable.)


Hypothesis testing vs hypothesis generation
I have sometimes opined that split-testing is the "gold standard" of customer feedback. This gets me into trouble, because it conjures up for some the idea that product development is simply a rote mechanical exercise of linear optimization: you just constantly test little micro-changes and follow a hill-climbing algorithm to build your product. This is not what I have in mind. Split-testing is ideal when you want to put your ideas to the test, to find out whether what you think is really what customers want. But where do those ideas come from in the first place? You need to make sure you don't drift away from trying bold new things, using some combination of your vision and in-depth customer conversations to come up with the next idea to try. Split-testing doesn't have to be limited to micro-optimizations, either. You can use it to test large changes as well as small, which is why it's important to keep the reporting focused on the macro statistics you care about. Sometimes, small changes make a big difference. Other times, large changes make no difference at all. Split-testing can help you tell which is which.

Further reading
The best paper I have read on split-testing is "Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO" - it describes the techniques and rationale behind controlled experiments at companies like Amazon and Microsoft. One of the key lessons it emphasizes is that, in the absence of data about what customers want, companies generally revert to the Highest Paid Person's Opinion (hence, HiPPO). But an even more important idea is the discipline of insisting that any product change that doesn't move metrics in a positive direction be reverted. Even if the change is "only neutral" and you really, really, really like it better, force yourself (and your team) to go back to the drawing board and try again. When you started working on that change, surely you had some idea of what it would accomplish for your business. Check your assumptions: what went wrong? Why did customers like your change so much that they didn't change their behavior one iota?

Read More »