
Case Study: UX, Design, and Food on the Table

(One of the common questions I hear is how to reconcile design and user experience (UX) methods with the Lean Startup. To answer, I asked one of my favorite designers to write a case study illustrating the way they work, taking us step-by-step through a real life redesign.

This is something of an IMVU reunion. The attendees at sllconf 2010 were wowed by Food on the Table's presentation. If you weren't there, be sure to watch the video. Manuel Rosso was IMVU's first VP of Marketing, and is now CEO of Food on the Table, one of the leading lean startups in Austin. I first met Laura Klein when we had the good fortune of hiring her at IMVU to join our interaction design team. Since then, she's gone on to become one of the leading experts implementing UX and design in lean startups. 

In this case study, Laura takes us inside the design process in a real live startup. I hope you'll find it illuminating. -Eric)

A lot of people ask me whether design fits into the lean startup process. They're concerned that if they do any research or design up front, they'll end up in a waterfall environment.

This is simply not true. Even the leanest of startups can benefit from design and user research. The following is a great example of how they can work together.

A couple of months ago, Manuel Rosso, the CEO of Food on the Table, came to me with a problem. He had a product with a great value proposition and thousands of passionate customers. That wasn't the problem. The problem was activation.

As a bit of background, Food on the Table helps people plan meals for their families around what is on sale in their local grocery stores. The team defined an activated user as someone who made it through all the steps of the first time user experience: selecting a grocery store, indicating food preferences, picking recipes, and printing a grocery list.

Users who made it through activation loved the product, but too many first time users were getting lost and never getting all the way to the end.

Identifying The Problem

More than any startup I've worked with, Food on the Table embraces the lean startup methodology. They release early and often. They get tons of feedback from their users. And, most importantly, they measure and a/b test absolutely everything.

Because of their dedication to metrics, they knew all the details of their registration funnel and subsequent user journey. This meant that they knew exactly how many people weren't finishing activation, and they knew that number was higher than they wanted.

Unfortunately, they fell into a trap that far too many startups fall into at some point: they tried to measure their way out of the problem. They would look at a metric, spot a problem, come up with an idea for how to fix it, release a change, and test it. But the needle wasn't moving.

After a couple of months, Manuel had a realization. The team had always been dedicated to listening to users. But as they added new features, their conversations with users had changed - they became more narrowly focused on new features and whether each individual change was usable and useful. Somewhere along the way, they'd stopped observing the entire user experience, from end to end. This didn't last very long - maybe a month or two, but it was long enough to cause problems.

As soon as he realized what had happened, Manuel went back to talking directly to users about their overall experiences rather than just doing targeted usability tests, and within a few hours he knew what had gone wrong. Even though the new features were great in isolation, they were making the overall interface too complicated. New users were simply getting lost on their way to activation.

Now that they knew generally why they were having the problem, Manuel decided he needed a designer to identify the exact pain points and come up with a way to simplify the interface without losing any of the features.

Key Takeaways:
  • Don't try to measure your way out of a problem. Metrics do a great job of telling you what your problem is, but only listening to and observing your users can tell you why they're having trouble.
  • When you're moving fast enough, a product can become confusing in a surprisingly short amount of time. Make sure you're regularly observing the user experience.
  • Adding a new feature can be useful, but it can also clutter up an interface. Good design helps you offer more functionality with less complexity.

Getting an Overview of the Product

When I first came on board, the team had several different experiments going, including a couple of different competing flows. I needed to get a quick overview of the entire user experience in order to understand what was working and what wasn't.

Of course, the best way to do that is to watch new and current customers use the product. In the old days, I would have recruited test participants, brought them into an office, and run usability sessions. It would have taken a couple of weeks.

Not anymore! I scheduled UserTesting.com sessions, making sure that I got participants in all the main branches of the experiments. Within a few hours, I had a dozen 15-minute videos of people using the product. The entire process, including analysis, took about one full day.

Meanwhile, we set up several remote sessions with current users and used GoToMeeting to run fast observational sessions in order to understand the experience of active users. That took another day.

Key Takeaway: Get feedback fast. Online tools like GoToMeeting and UserTesting.com (and about a hundred others) can help you understand the real user experience quickly and cheaply.

Low Hanging Fruit

Once we had a good idea of the major pain points, we decided to split the design changes into two parts: fixing low hanging fruit and making larger, structural changes to the flow. Obviously, we weren't going to let engineering sit around on their hands while we made major design changes.

The most important reason to do this was that some of the biggest problems for users were easy to fix technically and could be accomplished with almost no design input whatsoever.

For example, in one unsuccessful branch of a test, users saw a button that would allow them to add a recipe to a meal plan. When user test participants within the office pressed the button, it would very quickly add the recipe to the meal plan, and users had no problem understanding it. When we observed users pressing the button on their own computers with normal home broadband connections, the button took a few seconds to register the click.

Of course, this meant that users would click the button over and over, since they were getting no feedback. When the script returned, the user would often have added the recipe to their meal plan several times, which wasn't what they meant to do.

This was, by all accounts, a bad user experience. Why wasn't it caught earlier?

Well, as is the case with all software companies, the computers and bandwidth in the office were much better than the typical user's setup, so nobody saw the problem until we watched actual users in their natural environments.

What was the fix? We put in a "wait" spinner and disabled the button while the script was processing. It took literally minutes to implement and delivered a statistically significant improvement in the performance of that branch of the experiment.

[Image: Giving immediate feedback drastically reduced user error]

Manuel told me that, immediately after that experience, the team added a very old, slow computer to the office and recently caught a nasty problem that could add 40 seconds to page load times. Needless to say, all usability testing within the office is now done on the slowest machine.

Key Takeaways:
  • Sometimes big user problems don't require big solutions.
  • To truly understand what your user is experiencing, you have to understand the user's environment.
  • Sometimes an entire branch of an experiment can be killed by one tiny bug. If your metrics are surprising, do some qualitative research to figure out why!

A Redesign

While the engineering team worked on the low-hanging fruit, we started the redesign. But we didn't just chuck everything out. We started from the current design and iterated. We identified a few critical areas that were making the experience confusing and fixed those.

For example, we started with the observation that people were doing ok for the first couple of screens, but then they were getting confused about what they were supposed to do next. A simple "Step" counter at the top of each page and very clear, obvious "Next" and "Back" buttons told users where they were and what they should do next.

Users also claimed to want more freedom to select their recipes, but they were quickly overwhelmed by the enormous number of options, so we put in a simple and engaging way to select from recommended recipes while still allowing users to access the full collection with the click of one button.

[Image: Users were confused by how to change their meal plan]
[Image: Recommended recipe carousels made choosing a meal plan fun and easy to understand]

One common problem was that users asked for a couple of features that were actually already in the product. The features themselves were very useful and well-designed; they just weren't discoverable enough. By changing the location of these features, we made them more obvious to people.

Most importantly, we didn't just jump to Photoshop mockups of the design. Instead, we created several early sketches before moving to interactive wireframes, which we tested and iterated on with current users. In this case, I created the interactive wireframes in HTML and JavaScript. While they were all grayscale with no visual design, they worked. Users could perform the most important actions in them, like stepping through the application, adding meals to their meal plan, and editing recipes. This made participants feel like they were using an actual product so that they could comment not just on the look and feel but on the actual interactions.

By the end of the iterations and tests, every single one of the users liked the new version better than the old, and we had a very good idea why.

Did we make it perfect? No. Perfection takes an awful lot of time and too often fails to be perfect for the intended users.

Instead, we identified several areas we'd like to optimize and iterate on going forward. But we also decided that it was better to release a very good version and continue improving it, rather than aim for absolute perfection and never get it out the door.

The redesign removed all of the major pain points that we'd identified in the testing and created a much simpler, more engaging interface that would allow the team to add features going forward. It improved the user experience and set the stage for lots more iteration and experimentation in the future. In fact, the team currently has several more exciting experiments running!

Key Takeaways:
  • Interactive prototypes and iterative testing let you improve the design quickly before you ever get to the coding stage.
  • Targeting only the confusing parts of the interface for redesign reduces the number of things you need to rebuild and helps make both design and development faster.
  • Lean design is about improving the user experience iteratively! Fixing the biggest user problems first means getting an improved experience to users quickly and optimizing later based on feedback and metrics.

The Metrics

Like any good lean startup, we released the new design in an a/b test with new users. We had a feeling it would be better, but we needed to know whether we were right. We also wanted to make sure there weren't any small problems we'd overlooked that might have big consequences.

After running for about six weeks with a few thousand new users, we had our statistically significant answer: a 77% increase in the number of new users who made it all the way through activation.
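
For readers who want to check the math, here's a minimal Python sketch - with made-up counts of roughly the right size, since the raw numbers aren't published here - of the two-proportion z-test commonly used to decide whether an activation lift like this is statistically significant.

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return p_a, p_b, z, p_value

# Hypothetical counts: ~2,000 new users per branch; the old flow activates 13%,
# the new flow 23% -- roughly the 77% relative lift reported above.
p_a, p_b, z, p = z_test_two_proportions(conv_a=260, n_a=2000, conv_b=460, n_b=2000)
print(f"control {p_a:.1%}, redesign {p_b:.1%}, lift {p_b / p_a - 1:.0%}, p-value {p:.2g}")
```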

My entire involvement with the project - the research, design, and usability testing - was just under 90 hours spread over about six weeks.

Key Takeaway: Design - even major redesigns - can be part of an agile, lean startup environment, if done in an efficient way with a lot of iteration and customer involvement.



Laura Klein has been working in Silicon Valley as both an engineer and a UX professional for the last 15 years. She currently consults with lean startups to help them make their products easier to use. She frequently blogs about design, usability, metrics, and product management at Users Know. You can follow her on Twitter at @lauraklein.


Learning is better than optimization (the local maximum problem)

Lean startups don’t optimize. At least, not in the traditional sense of trying to squeeze every tenth of a point out of a conversion metric or landing page. Instead, we try to accelerate with respect to validated learning about customers.

For example, I’m a big believer in split-testing. Many optimizers are in favor of split-testing, too: direct marketers, landing page and SEO experts -- heck even the Google Website Optimizer team. But our interest in the tactic of split-testing is only superficially similar.

Take the infamous “41 shades of blue” split-test. I understand and respect why optimizers want to do tests like that. There are often counter-intuitive changes in customer behavior that depend on little details. In fact, the curse of product development is that sometimes small things make a huge difference and sometimes huge things make no difference. Split-testing is great for figuring out which is which.

But what do you learn from the “41 shades of blue” test? You only learn which specific shade of blue customers are more likely to click on. And, in most such tests, the differences are quite small, which is why sample sizes have to be very large. In Google’s case, often in the millions of people. When people (ok, engineers) who have been trained in this model enter most startups, they quickly get confused. How can we do split-testing when we have only a pathetically small number of customers? What’s the point when the tests aren’t going to be statistically significant?
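
To make that concrete, here's a back-of-the-envelope calculation (my own illustration, not Google's actual method), using the standard rule of thumb that detecting an absolute lift of delta on a baseline rate p, at roughly 5% significance and 80% power, needs about 16·p·(1-p)/delta² users per variant.

```python
def sample_size_per_variant(baseline, delta):
    """Rough users-per-variant needed to detect an absolute change of `delta`
    in a conversion rate at ~5% significance and ~80% power (n ≈ 16·p·(1-p)/δ²)."""
    return 16 * baseline * (1 - baseline) / delta ** 2

# A tiny effect (0.1 percentage point on a 10% click-through rate) needs millions of users...
print(f"{sample_size_per_variant(0.10, 0.001):,.0f} users per shade of blue")
# ...while a big effect (5 percentage points) needs only a few hundred.
print(f"{sample_size_per_variant(0.10, 0.05):,.0f} users per variant")
```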

And they’re not the only ones. Some designers also hate optimizing (which is why the “41 shades of blue” test is so famous – a famous designer claims to have quit over it). I understand and respect that feeling, too. After you’ve spent months on a painstaking new design, who wants to be told what color blue to use? Split-testing a single element in an overall coherent design seems ludicrous. Even if it shows improvement in some micro metric, does that invalidate the overall design? After all, most coherent designs have a gestalt that is more than the sum of the parts – at least, that’s the theory. Split-testing seems fundamentally at odds with that approach.

But I'm not done with the complaints yet. Optimization also sounds bad for visionary thinking. That's why you hear so many people proclaim proudly that they never listen to customers. Customers can only tell you what they think they want, and tend to have a very near-term perspective. If you just build what they tell you, you generally wind up with a giant, incoherent mess. Our job as entrepreneurs is to invent the future, and any optimization technique - including split-testing, many design techniques, or even usability testing - can lead us astray. Sure, customers think they want something, but how do they know what they will want in the future?

You can always tell who has a math background in a startup, because they call this the local maximum problem. Those of us with a computer science background call it the hill-climbing algorithm. I’m sure other disciplines have their own names for it; even protozoans exhibit this behavior (it's called taxis). It goes like this: whenever you’re not sure what to do, try something small, at random, and see if that makes things a little bit better. If it does, keep doing more of that, and if it doesn’t, try something else random and start over. Imagine climbing a hill this way; it’d work with your eyes closed. Just keep seeking higher and higher terrain, and rotate a bit whenever you feel yourself going down. But what if you’re climbing a hill that is in front of a mountain? When you get to the top of the hill, there’s no small step you can take that will get you on the right path up the mountain. That’s the local maximum. All optimization techniques get stuck in this position.
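
For the programmers reading along, here is a toy Python sketch of that behavior (the landscape function is invented purely for illustration): the climber dutifully tops the small hill and never finds the mountain next to it.

```python
import random

def landscape(x):
    """A small hill peaking at x=2 (height 3) in front of a mountain peaking at x=8 (height 10)."""
    hill = 3 * max(0.0, 1 - abs(x - 2) / 2)
    mountain = 10 * max(0.0, 1 - abs(x - 8) / 2)
    return max(hill, mountain)

def hill_climb(x, steps=10_000, step_size=0.1):
    """Try a small random move; keep it only if it makes things a little bit better."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if landscape(candidate) > landscape(x):
            x = candidate
    return x

random.seed(0)
best = hill_climb(x=0.0)
print(f"stuck at x={best:.2f}, height={landscape(best):.2f} "
      f"(the mountain at x=8 reaches height 10)")
```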

Because this causes a lot of confusion, let me state this as unequivocally as I can. The Lean Startup methodology does not advocate using optimization techniques to make startup decisions. That’s right. You don’t have to listen to customers, you don’t have to split-test, and you are free to ignore any data you want. This isn’t kindergarten. You don’t get a gold star for listening to what customers say. You only get a gold star for achieving results.

What should you do instead? The general pattern is: have a strong vision, test that vision against reality, and then decide whether to pivot or persevere. Each part of that answer is complicated, and I’ve written extensively on the details of how to do each. What I want to convey here is how to respond to the objections I mentioned at the start. Each of those objections is wise, in its own way, and the common reaction – to just reject that thinking outright – is a bad idea. Instead, the Lean Startup offers ways to incorporate those people into an overall feedback loop of learning and discovery.

So when should we split-test? There's nothing wrong with using split-testing, as part of the solution team, to do optimization. But that is not a substitute for testing big hypotheses. The right split-tests to run are ones that put big ideas to the test. For example, we could split-test what color to make the "Register Now" button. But how much do we learn from that? Suppose customers prefer one color over another. Then what? Instead, how about a test where we completely change the value proposition on the landing page?

I remember the first time we changed the landing page at IMVU from offering “avatar chat” to “3D instant messaging.” We didn’t expect much of a difference, but it dramatically changed customer behavior. That was evident in the metrics and in the in-person usability tests. It taught us some important things about our customers: that they had no idea what an avatar was, they had no idea why they would want one, and they thought “avatar chat” was something weird people would do. When we started using “3D instant messaging,” we validated our hypothesis that IM was an activity our customers understood and were interested in “doing better.” But we also invalidated a hypothesis that customers wanted an avatar; we had to learn a whole new way of explaining the benefits of avatar-mediated communication because our audience didn’t know what that word meant.

However, that is not the end of the story. If you go to IMVU’s website today, you won’t find any mention of “3D instant messaging.” That’s because those hypotheses were replaced by yet more, each of which was subject to this kind of macro-level testing. Over many years, we’ve learned a lot about what customers want. And we’ve validated that learning by being able to demonstrate that when we change the product as a result of that learning, the key macro metrics improve.

A good rule of thumb for split-testing is that even when we're doing micro-level split-tests, we should always measure the macro. So even if you want to test a new button color, don't measure the click-through rate on that button! Instead, ask yourself: "why do we care that customers click that button?" If it's a "Register Now" button, it's because we want customers to sign up and try the product. So let's measure the percentage of customers who try the product. If the button color change doesn't have an impact there, it's too small and should be reverted. Over time, this discipline helps us ignore the minor stuff and focus our energies on learning what will make a significant impact. (It also just so happens that this style of reporting is easier to implement; you can read more here.)
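
As a sketch of what that reporting discipline looks like in code (the event fields and variant names below are invented, not taken from any real system), the report rolls each split-test branch up to the macro metric rather than the button's click-through rate:

```python
from collections import defaultdict

# Hypothetical per-visitor event log: which variant each visitor saw, whether they
# clicked the button, and whether they went on to try the product (the macro metric).
events = [
    {"variant": "blue_button",  "clicked": True,  "tried_product": True},
    {"variant": "blue_button",  "clicked": True,  "tried_product": False},
    {"variant": "green_button", "clicked": False, "tried_product": False},
    # ...thousands more rows in a real report
]

def macro_report(events):
    totals = defaultdict(lambda: {"visitors": 0, "clicks": 0, "tried": 0})
    for e in events:
        t = totals[e["variant"]]
        t["visitors"] += 1
        t["clicks"] += e["clicked"]
        t["tried"] += e["tried_product"]
    for variant, t in totals.items():
        # The headline number is "tried the product", not the click-through rate.
        print(f'{variant}: {t["tried"] / t["visitors"]:.1%} tried the product '
              f'({t["clicks"] / t["visitors"]:.1%} clicked)')

macro_report(events)
```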

Next, let's take on the sample-size issue. Most of us learn about sample sizes from things like political polling. In a large country, in order to figure out who will win an election with any kind of accuracy, you need to sample a large number of people. What most of us forget is that statistical significance is a function of both sample size and the magnitude of the underlying signal. Presidential elections are often decided by a few percentage points or less. When we're optimizing, product development teams encounter similar situations. But when we're learning, that's the rare exception. Recall that the biggest source of waste in product development is building something nobody wants. In that case, you don't need a very large sample.

Let me illustrate. I've previously documented that early on in IMVU's life, we made the mistake of building an IM add-on product instead of a standalone network. Believe me, I had to be dragged kicking and screaming to the realization that we'd made a mistake. Here's how it went down. We would bring customers in for a usability test, and ask them to use the IM add-on functionality. The first one flat-out refused. I mean, here we are, paying them to be there, and they won't use the product! (For now, I won't go into the reasons why - if you want that level of detail, you can watch this interview.) I was the head of product development, so can you guess what my reaction was? It certainly wasn't "ooh, let's listen to this customer." Hell no, "fire that customer! Get me a new one" was closer. After all, what is a sample size of one customer? Too small. Second customer: same result. Third, fourth, fifth: same. Now, what are the odds that five customers in a row refuse to use my product, and it's just a matter of chance or small sample size? No chance. The product sucks - and that is a statistically significant result.
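
Here's that intuition as arithmetic. The 80% figure below is just an assumed "what if the product were actually fine" baseline, not real data:

```python
# If the product were fine and, say, 80% of target customers would happily use it,
# how likely is it that n usability-test participants in a row refuse purely by chance?
p_refuse = 0.2   # assumed refusal rate for a product people actually want
for n in range(1, 6):
    print(f"{n} refusal(s) in a row: probability {p_refuse ** n:.5f}")
# By n=5 the probability is 0.00032 -- "the product sucks" is the far likelier explanation.
```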

When we switch from an optimization mindset to a learning mindset, design gets more fun, too. It takes some getting used to for most designers, though. They are not generally used to having their designs evaluated by their real-world impact. Remember that plenty of design organizations and design schools give out awards for designing products that never get built. So don't hold it against a classically trained designer if they find split-testing a little off-putting at first. The key is to get new designers integrated with a split-testing regimen as soon as possible. It's a good deal: by testing to make sure (I often say "double check") each design actually improves customers' lives, startups can free designers to take much bigger risks. Want to try out a wacky, radical, highly simplified design? In a non-data-driven environment, this is usually impossible. There's always that engineer in the back of the room with all the corner cases: "but how will customers find Feature X? What happens if we don't explain in graphic detail how to use Feature Y?" Now these questions have an easy answer: we'll measure and see. If the new design performs worse than the current design, we'll iterate and try again. But if it performs better, we don't need to keep arguing. We just keep iterating and learning. This kind of setup leads to a much less political and much less arbitrary design culture.

This same approach can also lead us out of the big incoherent mess problem. Teams that focus on optimizing can get stuck bolting on feature upon feature until the product becomes unusable. No one feature is to blame. I've made this mistake many times in my career, especially early on when I first began to understand the power of metrics. When that happens, the solution is to do a whole product pivot. "Whole product" is a term I learned from Bill Davidow's classic Marketing High Technology. A whole product is one that works for mainstream customers. Sometimes, a whole product is much bigger than a simple device - witness Apple's mastery of creating a whole ecosystem around each of their devices that make them much more useful than their competitors. But sometimes a whole product is much less - it requires removing unnecessary features and focusing on a single overriding value proposition. And these kinds of pivots are great opportunities for learning-style tests. It only requires the courage to test the new beautiful whole product design against the old crufty one head-to-head.

By now, I hope you're already anticipating how to answer the visionary's objections. We don't split-test or talk to customers to decide if we should abandon our vision. Instead, we test to find out how to achieve the vision in the best possible way. Startup success requires getting many things right all at once: building a product that solves a customer problem, having that problem be an important one to a sufficient number of customers, having those customers be willing to pay for it (in one of the four customer currencies), being able to reach those customers through one of the fundamental growth strategies, etc. When you read stories of successful startups in the popular and business press, you usually hear about how the founders anticipated several of these challenges in their initial vision. Unfortunately, startup success requires getting them all right. What the PR stories tend to leave out is that we can get attached to every part of our vision, even the dumb parts. Testing the parts simply gives us information that can help us refine the vision - like a sculptor removing just the right pieces of marble. There is tremendous art to knowing which pieces of the vision to test first. It is highly context-dependent, which is why different startups take dramatically different paths to success. Should you charge from day one, testing the revenue model first? Or should you focus on user engagement or virality? What about companies, like Siebel, that started with partner distribution first? There are no universally right answers to such questions. (For more on how to figure out which question applies in which context, see Business ecology and the four customer currencies.)

Systematically testing the assumptions that support the vision is called customer development, and it’s a parallel process to product development. And therein lies the most common source of confusion about whether startups should listen to customers. Even if a startup is doing user-centered design, or optimizing their product through split-testing, or conducting tons of surveys and usability tests, that’s no substitute for also doing customer development. It’s the difference between asking “how should we best solve this problem for these customers?” and “what problem should we be solving? and for which customer?” These two activities have to happen in parallel, forming a company-wide feedback loop. We call such companies built to learn. Their speed should be measured in validated learning about customers, not milestones, features, revenue, or even beautiful design. Again, not because those things aren’t important, but because their role in a startup is subservient to the company’s fundamental purpose: piercing the veil of extreme uncertainty that accompanies any disruptive innovation.

The Lean Startup methodology can’t guarantee you won’t find yourself in a local maximum. But it can guarantee that you’ll know about it when it happens. Even better, when it is time to pivot, you’ll have actual data that can help inform where you want to head next. The data doesn’t tell you what to do – that’s your job. The bad news: entrepreneurship requires judgment. The good news: when you make data-based decisions, you are training your judgment to get better over time.


A real Customer Advisory Board

A reader recently asked on a previous post about the technique of having customers periodically produce a “state of the company” progress report. I consider this an advanced technique, and it is emphatically not for everyone.

Many companies seek to involve customers directly in the creation of their products. This is a lot harder than it sounds. Hearing occasional input is one thing, but building an institutional commitment to acting on this feedback is hard. For one, there are all the usual objections to customer feedback: it is skewed in favor of the loud people, customers don’t know what they want, and it is fundamentally our job to figure out what to build. All of those objections are valid, but that can’t be the end of the story. Just because we don’t blindly obey what our customers say doesn’t absolve us of the responsibility of hearing them out.

The key to successful integration of customer feedback is to make each kind of feedback collection part of the regular company discipline of building and releasing products. In previous posts, I've mentioned quite a few of these techniques, including the most important ones:
  • having engineers post on the forums in their own name when they make a change
  • routinely split-testing new changes
  • routinely conducting in-person usability tests and interviews
  • Net Promoter Score

Each of these techniques is fundamentally bottom-up. They assume that each person on the team is genuinely interested in testing their work and ideas against the reality of what customers want. Anyone who has worked in a real-world product development team can tell you how utopian that sounds. In real life, teams are under tremendous time pressure, they are trying to balance the needs of many stakeholders, and they are human. They make mistakes. And when they do, they are prone to all the normal human failings when it comes to bad news: the desire to cover it up, rationalize the failure away, or redefine success.

To counteract those tendencies, it helps to supplement with top-down process as well. One example is having a real Customer Advisory Board. Here’s what it looks like. In a previous company, we put together a group of passionate early adopters. They had their own private forum, and a company founder (aka me) personally ran the group in its early days. Every two months, the company would have a big end-of-milestone meeting, with our Board of Directors, Business Advisory Board, and all employees present. At this meeting, we’d present a big package of our progress over the course of the cycle. And at each meeting, we’d also include an unedited, uncensored report direct from the Customer Advisory Board.

I wish I could say that these reports were always positive. In fact, we often got a failing grade. And, as you can see in my previous post on "The cardinal sin of community management," the feedback could be all over the map. But we had some super-active customers who would act as editors, collecting feedback from all over the community and synthesizing it into a report of the top issues. It was a labor of love, and it meant we always had a real voice of the customer right there in the meeting with us. It was absolutely worth it.

Passionate online communities are real societies. What we call “community management” is actually governance. It is our obligation to govern well, but – as history has repeatedly shown – this is incredibly hard. The decisions that a company makes with regard to its community are absolute. We aspire to be benevolent dictators. And unlike in many real-world societies, our decisions are not rendered as law but as code. (For more on this idea, see Lawrence Lessig’s excellent Code is Law.) The people who create that code are notoriously bad communicators, even when they are allowed to communicate directly to their customers.

A customer advisory board that has the ear of the company’s directors acts as a kind of appeals process for company decisions. As I mentioned in “The cardinal sin of community management,” many early adopters will accept difficult decisions as long as they feel listened to. As a policy matter, this is easy to say and very hard to implement. That’s why the CAB is so valuable. They provide a forum for dissenting voices to be heard. The members of the CAB have a stake in providing constructive feedback, since they will tend to be ignored if they pass on vitriol. In turn, they become company-sanctioned listeners. By leveraging them, the company is able to make many more customers feel heard.

The CAB report acts as a BS detector for top management. It’s a lot harder to claim everything is going smoothly, and that customers are dying for Random New Feature X when the report clearly articulates another point of view. Sometimes the right thing to do is to ignore the report. After all, listening to customers is not intrinsically good. As always, the key is to synthesize the customer feedback with the company’s unique vision. But that’s often used as an excuse to ignore customers outright. I know I was guilty of this many times. It’s all-too-easy to convince yourself that customers will want whatever your latest brainstorm is. And it’s so much more pleasant to just go build it, foist it on the community, and cross your fingers. It sure beats confronting reality, right?

Let me give one small example. Early in IMVU’s life, IM was a core part of the experience. Yet we were very worried about having to re-implement every last feature that modern IM clients had developed: away messages, file transfer, voice and video, etc. As a result, we tried many different stratagems to avoid giving the impression that we were a fully-featured IM system, going so far as to build our initial product as an add-on to existing IM programs. (You can read how well that went in another post here.)

This strategy was simply not working. Customers kept demanding that we add this or that IM feature, and we were routinely refusing. Eventually, the CAB decided to weigh in on the matter in their board-level report. I remember it so clearly, because their requests were actually very simple. They asked us to implement five – and only five – key IM features. For weeks we debated whether to do what they asked. We were afraid that this was just the tip of the iceberg, and that once we “gave in” to these five demands there would be five more, ad infinitum. It actually took courage to do what they wanted – as it does for all visionaries. Every time you listen to customers, you fear diluting your vision. That’s natural. But you have to push through the fear, at least on occasion, to make sure you’re not crazy.

In this particular example, it turned out they were right. Just those few IM features made the product dramatically better. And, most importantly, that was the end of IM feature creep. Nobody even mentioned it as an issue in subsequent board meetings. That felt good – but it also gave our Board tremendous confidence that we could change the kind of feedback we were getting by improving the product.

This technique is not for everybody. It gets much harder as the company – and the community – scales, and, in fact, IMVU uses a different system of gathering community feedback today. But, if your community is giving you a headache, give this a try. Either way, I hope you’ll share your experiences, too.



Net Promoter Score: an operational tool to measure customer satisfaction

I've mentioned Net Promoter Score (NPS) in a few previous posts, but haven't had a chance to describe it in detail yet. It is an essential lean startup tool that combines seemingly irreconcilable attributes: it provides operational, actionable, real-time feedback that is truly representative of your customers' experience as a whole. It does it all by asking your customers just one magic question.

In this post I'll talk about why NPS is needed, how it works, and show you how to get started with it. I'll also reveal the Net Promoter Score for this blog, based on the data you've given me so far.

How can you measure customer satisfaction?
Other methods for collecting data about customers have obvious drawbacks. Doing in-depth customer research, with long questionnaires and detailed demographic and psychographic breakdowns, is very helpful for long-range planning, interaction design and, most importantly, creating customer archetypes. But it's not immediately actionable, and it's far too slow to be a regular part of your decision loop.

At the other extreme, there's the classic A/B split-test, which provides nearly instantaneous feedback on customer adoption of any given feature. If your process for creating split-tests is extremely light (for example, it requires only one line of code), you can build a culture of lightweight experimentation that allows you to audition many different ideas, and see what works. But split-tests also have their drawbacks. They can't give you a holistic view, because they only tell you how your customers reacted to that specific test.
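
What does a one-line split-test culture look like in practice? Here is a hypothetical sketch - the helper name, hashing scheme, and logging call are all invented for illustration - of the kind of lightweight assignment function such a culture relies on:

```python
import hashlib

def log_assignment(user_id, experiment, variant):
    """Stand-in for writing the assignment to your analytics store."""
    print(f"user {user_id} -> {experiment}:{variant}")

def ab_test(experiment, variants, user_id):
    """Deterministically assign a user to a variant (same user, same answer every time)
    and record the assignment so the funnel can be compared per variant later."""
    digest = hashlib.sha1(f"{experiment}:{user_id}".encode()).hexdigest()
    variant = variants[int(digest, 16) % len(variants)]
    log_assignment(user_id, experiment, variant)
    return variant

# The "one line" in application code:
if ab_test("registration_cta", ["Register Now", "Join Free"], user_id=42) == "Join Free":
    pass  # render the new call to action here
```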

You could conduct an in-person usability test, which is very useful for getting a view of how actual people perceive the totality of your product. But that, too, is limited, because you are relying on a very small sample, from which you can only extrapolate broad trends. A major usability problem is probably experienced similarly by all people, but the absence of such a defect doesn't tell you much about how well you are doing.

Net Promoter Score
NPS is a methodology that comes out of the service industry. It involves using a simple tracking survey to constantly get feedback from active customers. It is described in detail by Fred Reichheld in his book The Ultimate Question: Driving Good Profits and True Growth. The tracking survey asks one simple question: How likely are you to recommend Product X to a friend or colleague? The answer is then put through a formula to give you a single overall score that tells you how well you are doing at satisfying your customers. Both the question and formula are the results of a lot of research that claims that this methodology can predict the success of companies over the long-term.

There's a lot of controversy surrounding NPS in the customer research community, and I don't want to recapitulate it here. I think it's important to acknowledge, though, that lots of smart people don't agree with the specific question that NPS asks, or the specific formula used to calculate the score. For most startups, however, I think these objections can safely be ignored, because there is absolutely no controversy about the core idea that a regular and simple tracking survey can give you customer insight.

Don't let the perfect be the enemy of the good. If you don't like the NPS question or scoring system, feel free to use your own. I think any reasonably neutral approach will give you valuable data. Still, if you're open to it, I recommend you give NPS a try. It's certainly worked for me.

How to get started with NPS
For those who want to follow the NPS methodology, I will walk you through how to integrate it into your company, including how to design the survey, how to collect the answers, and how to calculate your score. Because the book is chock-full of examples of how to do this in older industries, I will focus on my experience integrating NPS into an online service, although it works equally well if your primary contact with customers is through a different channel, such as the telephone.

Designing the survey
The NPS question itself (again, "How likely are you to recommend X to a friend or colleague?") is usually asked on a 0-10 point scale. It's important to let people know that 10 represents "most likely" and 0 represents "least likely," but it's also important not to use words like promoter or detractor anywhere in the survey itself.

The hardest part about creating an NPS survey is to resist the urge to load it up with lots of questions. The more questions you ask, the lower your response rate, and the more you bias your results towards more-engaged customers. The whole goal of NPS is to get your promoters and your detractors alike to answer the question, and this requires that you not ask for too much of their time. Limit yourself to two questions: the official NPS question, and exactly one follow-up. Options for the follow-up could be a different question on a 10-point scale, or just an open-ended question asking why they chose the rating that they did. Another possibility is to ask "If you are open to answering some follow-up questions, would you leave your phone number?" or other contact info. That would let you talk to some actual detractors, for example, and get a qualitative sense of what they are thinking.

For an online service, just host the survey on a webpage with as little branding or decoration as possible. Because you want to be able to produce real-time graphs and results, this is one circumstance where I recommend you build the survey yourself, versus using an off-the-shelf hosted survey tool. Just dump the results in a database as you get them, and let your reports calculate scores in real-time.
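
To give a sense of how simple "dump the results in a database" can be, here is a minimal sketch using SQLite; the table name, columns, and the one-time invite code are illustrative assumptions, and real-time reports can compute the score straight from this table.

```python
import sqlite3
from datetime import datetime

db = sqlite3.connect("nps_responses.db")
db.execute("""CREATE TABLE IF NOT EXISTS responses (
                  invite_code TEXT PRIMARY KEY,  -- one row per one-time invitation
                  rating      INTEGER,           -- the 0-10 answer
                  comment     TEXT,              -- the single follow-up question
                  answered_at TEXT)""")

def record_response(invite_code, rating, comment=""):
    """Store one survey answer; reports calculate the score from this table in real time."""
    db.execute("INSERT OR REPLACE INTO responses VALUES (?, ?, ?, ?)",
               (invite_code, rating, comment, datetime.utcnow().isoformat()))
    db.commit()

record_response("invite-abc123", 9, "Love the product, wish it were faster.")
```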

Collecting the answers
Once you have the survey up and running, you need to design a program to have customers take it on a regular basis. Here's how I've set it up in the past. Pick a target number of customers to take the survey every day. Even if you have a very large community, I don't think this number needs to be higher than 100. Even just 10 might be enough. Build a batch process (using Gearman, cron, or whatever you use for offline processing) whose job is to send out invites to the survey.

Use whatever communication channel you normally rely on for notifying your customers. Email is great; of course, at IMVU, we had our own internal notification system. Either way, have the process gradually ramp up the number of outstanding invitations throughout the day, stopping when it's achieved 100 responses. This way, no matter what the response rate, you'll get a consistent amount of data. I also recommend that you give each invitation a unique code, so that you don't get random people taking the survey and biasing the results. I'd also recommend you let each invite expire, for the same reason.

Choose the people to invite to the survey according to a consistent formula every day. I recommend a simple lottery among people who have used your product that same day. You want to catch people when their impression of your product is fresh - even a few days can be enough to invalidate their reactions. Don't worry about surveying churned customers; you need to use a different methodology to reach them. I also normally exclude anyone from being invited to take the survey more than once in any given time period (you can use a month, six months, anything you think is appropriate).
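
Putting those collection rules together, a minimal sketch of the daily selection logic might look like this (the function boundaries and data structures are invented; the real thing would read from your own user and invitation tables):

```python
import random
from datetime import date, timedelta

DAILY_TARGET = 100                        # responses we want each day
EXCLUSION_WINDOW = timedelta(days=180)    # don't re-survey anyone within ~6 months

def pick_invitees(active_today, last_invited, already_sent, responses_so_far):
    """Lottery among today's active users, excluding anyone surveyed recently.

    active_today      -- ids of users who used the product today
    last_invited      -- dict of user id -> date of their last invitation
    already_sent      -- ids already invited by earlier batches today
    responses_so_far  -- responses collected today; stop once the target is hit
    """
    if responses_so_far >= DAILY_TARGET:
        return []
    eligible = [uid for uid in active_today
                if uid not in already_sent
                and date.today() - last_invited.get(uid, date.min) > EXCLUSION_WINDOW]
    random.shuffle(eligible)
    # Ramp up gradually: send a small batch each run rather than everything at once.
    return eligible[:max(1, (DAILY_TARGET - responses_so_far) // 4)]

# Each invitation would then go out with its own unique, expiring code.
print(pick_invitees(active_today=[101, 102, 103], last_invited={},
                    already_sent=set(), responses_so_far=40))
```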

Calculate your score
Your NPS score is derived in three steps:
  1. Divide all responses into three buckets: promoters, detractors, and others. Promoters are anyone who chose 9 or 10 on the "likely to recommend" scale, and detractors are those who chose any number from 0-6.
  2. Figure out the percentage of respondents that fall into the promoter and detractor buckets.
  3. Subtract your detractor percentage from your promoter percentage. The result is your score. Thus, NPS = P% - D% (see the sketch below).
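
Those three steps translate directly into code. A minimal sketch, with made-up sample ratings:

```python
def net_promoter_score(ratings):
    """NPS = %promoters - %detractors: promoters answered 9-10, detractors 0-6
    on the "likely to recommend" question."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical batch of answers on the 0-10 scale:
sample = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3]
print(f"NPS: {net_promoter_score(sample):+.0f}")   # 4 promoters, 3 detractors -> +10
```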

You can then compare your score to companies in other industries. Any positive score is good news, and a score higher than +50 is considered exceptional. Here are a few example scores taken from the official Net Promoter website:

  • Apple: 79
  • Adobe: 46
  • Google: 73
  • Barnes & Noble online: 74
  • American Express: 47
  • Verizon: 10
  • DIRECTV: 20

Of course, the most important thing to do with your NPS score is to track it on a regular basis. I used to look at two NPS-related graphs on a regular basis: the NPS score itself, and the response rate to the survey request. These numbers were remarkably stable over time, which, naturally, we didn't want to believe. In fact, there were some definite skeptics about whether they measured anything of value at all, since it is always dismaying to get data that says the changes you're making to your product are not affecting customer satisfaction one way or the other.

However, at IMVU one summer, we had a major catastrophe. We made some changes to our service that wound up alienating a large number of customers. Even worse, the way we chose to respond to this event was terrible, too. We clumsily gave our community the idea that we didn't take them seriously, and weren't interested in listening to their complaints. In other words, we committed the one cardinal sin of community management. Yikes.

It took us months to realize what we had done, and to eventually apologize and win back the trust of those customers we'd alienated. The whole episode cost us hundreds of thousands of dollars in lost revenue. In fact, it was the revenue trends that eventually alerted us to the magnitude of the problem. Unfortunately, revenue is a trailing indicator. Our response time to the crisis was much too slow, and as part of the post-mortem analysis of why, I took a look at the various metrics that all took a precipitous turn for the worse during that summer. Of everything we measured, it was Net Promoter Score that plunged first. It dropped down to an all-time low, and stayed there for the entire duration of the crisis, while other metrics gradually came down over time.

After that, we stopped being skeptical and started to pay very serious attention to changes in our NPS. In fact, I didn't consider the crisis resolved until our NPS peaked above our previous highs.

Calculating the NPS of Lessons Learned
I promised that I would reveal the NPS of this blog, which I recently took a snapshot of by offering a survey in a previous post. Here's how the responses break down, based on the first 100 people who answered the question:
  • Number of promoters: 47
  • Number of detractors: 22
  • NPS: 25

Now, I don't have any other blogs to compare this score to. Plus, the way I offered the survey (just putting a link in a single post), the fact that I didn't target specific people to take it, and the fact that the invitation was impersonal all make the sample deeply flawed. Still, all things considered, I'm pretty happy with the result. Of course, now that I've described the methodology in detail, I've probably poisoned the well for taking future unbiased samples. But that's a small price to pay for having the opportunity to share the magic of NPS.

I hope you'll find it useful. If you do, come on back and post a comment letting us all know how it turned out.



What is customer development?

When we build products, we use a methodology. For software, we have many - you can enjoy a nice long list on Wikipedia. But too often when it's time to think about customers, marketing, positioning, or PR, we delegate it to "marketroids" or "suits." Many of us are not accustomed to thinking about markets or customers in a disciplined way. We know some products succeed and others fail, but the reasons are complex and unpredictable. We're easily convinced by the argument that all we need to do is "build it and they will come." And when they don't come, well, we just try, try again.

What's wrong with this picture?

Steve Blank has devoted many years now to trying to answer that question, with a theory he calls Customer Development. This theory has become so influential that I have called it one of the three pillars of the lean startup - every bit as important as the changes in technology or the advent of agile development.

You can learn about customer development, and quite a bit more, in Steve's book The Four Steps to the Epiphany. I highly recommend this book for all entrepreneurs, in startups as well as in big companies. Here's the catch. This is a self-published book, originally designed as a companion to Steve's class at Berkeley's Haas School of Business. And Steve is the first to admit that it's a "turgid" read, without a great deal of narrative flow. It's part workbook, part war story compendium, part theoretical treatise, and part manifesto. It's trying to do way too many things at once. On the plus side, that means it's a great deal. On the minus side, that has made it a wee bit hard to understand.

Some notable bloggers have made efforts to overcome these obstacles. VentureHacks did a great summary, which includes slides and video. Marc Andreessen also took a stab, calling it "a very practical how-to manual for startups ... a roadmap for how to get to Product/Market Fit." The theory of Product/Market Fit is one key component of customer development, and I highly recommend Marc's essay on that topic.

Still, I feel the need to add my two cents. There's so much crammed into The Four Steps to the Epiphany that I want to distill out what I see as the key points:
  1. Get out of the building. Very few startups fail for lack of technology. They almost always fail for lack of customers. Yet surprisingly few companies take the basic step of attempting to learn about their customers (or potential customers) until it is too late. I've been guilty of this many times in my career - it's just so easy to focus on product and technology instead. True, there are the rare products that have literally no market risk; they are all about technology risk ("cure for cancer"). For the rest of us, we need to get some facts to inform and qualify our hypotheses ("fancy word for guesses") about what kind of product customers will ultimately buy.

    And this is where we find Steve's maxim that “In a startup no facts exist inside the building, only opinions.” Most likely, your business plan is loaded with opinions and guesses, sprinkled with a dash of vision and hope. Customer development is a parallel process to product development, which means that you don't have to give up on your dream. We just want you to get out of the building, and start finding out whether your dream is a vision or a delusion. Surprisingly early, you can start to get a sense for who the customer of your product might be, how you'll reach them, and what they will ultimately need. Customer development is emphatically not an excuse to slow down or change the plan every day. It's an attempt to minimize the risk of total failure by checking your theories against reality.

  2. Theory of market types. Layered on top of all of this is a theory that helps explain why different startups face wildly different challenges and time horizons. There are three fundamental situations that change what your company needs to do: creating a new market (the original Palm), bringing a new product to an existing market (Handspring), and resegmenting an existing market (niche, like In-n-Out Burger; or low-cost, like Southwest Airlines). If you're entering an existing market, be prepared for fast and furious competition from the incumbent players, but enjoy the ability to fail (or succeed) fast. When creating a new market, expect to spend as long as two years before you manage to get traction with early customers, but enjoy the utter lack of competition. What kind of market are you in? The Four Steps to the Epiphany contains a detailed approach to help you find out.

  3. Finding a market for the product as specified. When I first got the "listening to customers" religion, my plan was to talk to as many customers as possible, and build as many of the features they asked for as possible. This is a common mistake. Our goal in product development is to find the minimum feature set required to get early customers. In order to do this, we have our customer development team work hard to find a market, any market, for the product as currently specified. We don't just abandon the vision of the company at every turn. Instead, we do everything possible to validate the founders' belief.

    The nice thing about this paradigm is it sets the company up for a rational discussion when the task of finding customers fails. You can start to think through the consequences of this information before it's too late. You might still decide to press ahead building the original product, but you can do so with eyes open, knowing that it's going to be a tough, uphill battle. Or, you might start to iterate the concept, each time testing it against the set of facts that you've been collecting about potential customers. You don't have to wait to iterate until after the splashy high-burn launch.

  4. Phases of product & company growth. The book takes its name from Steve's theory of the four stages of growth any startup goes through. He calls these steps Customer Discovery (when you're just trying to figure out if there are any customers who might want your product), Customer Validation (when you make your first revenue by selling your early product), Customer Creation (akin to a traditional startup launch, only with strategy involved), and Company Building (where you gear up to Cross the Chasm). Having lived through a startup that went through all four phases, I can attest to how useful it is to have a roadmap that can orient you to what's going on as your job and company changes.

    As an aside, here's my experience: you don't get a memo that tells you that things have changed. If you did, it would read something like this: "Dear Eric, thank you for your service to this company. Unfortunately, the job you have been doing is no longer available, and the company you used to work for no longer exists. However, we are pleased to offer you a new job at an entirely new company, that happens to contain all the same people as before. This new job began months ago, and you are already failing at it. Luckily, all the strategies you've developed that made you successful at the old company are entirely obsolete. Best of luck!"

  5. Learning and iterating vs. linear execution. I won't go through all four steps in detail (buy the book already). I'll just focus on the paradigm shift represented by the first two steps and the last two steps. In the beginning, startups are focused on figuring out which way is up. They really don't have a clue what they should be doing, and everything is guesses. In the old model, they would probably launch during this phase, failing or succeeding spectacularly. Only after a major, public, and expensive failure would they try a new iteration. Most people can't sustain more than a few of these iterations, and the founders rarely get to be involved in the later tries.

    The root of that mistake is premature execution. The major insight of The Four Steps to the Epiphany is that startups need time spent in a mindset of learning and iterating, before they try to launch. During that time, they can collect facts and change direction in private, without dramatic and public embarrassment for their founders and investors. The book lays out a disciplined approach to make sure this period doesn't last forever, and clear criteria for when you know it's time to move to an execution footing: when you have a repeatable and scalable sales process, as evidenced by early customers paying you money for your early product.
It slices, it dices. It's also a great introduction to selling and positioning a product for non-marketeers, a workbook for developing product hypotheses, and a compendium of incredibly useful tactics for startups young and old.

When I first encountered this book, my first impulse was as follows. I bought a bunch of copies, gave them out to my co-founders and early employees, and then expected that the whole company's behavior would radically change the next day. That doesn't work (you can stop laughing now). This is not a book for everyone. I've only had luck sharing it with other entrepreneurs who are actually struggling with their product or company. If you already know all the answers, you can skip this one. But if you find some aspect of the situation you're in confusing, maybe this will provide some clarity. Or at least some techniques for finding clarity soon.

My final suggestion is that you buy the book and skim it. Try and find sections that apply to the startup you're in (or are thinking of building). Make a note of the stuff that doesn't seem to make sense. Then put it on your shelf and forget about it. If your experience is anything like mine, here's what will happen. One day, you'll be banging your head against the wall, trying to make progress on some seemingly intractable problem (like, how the hell do I know if this random customer is an early adopter who I should spend time listening to, or a mainstream customer who won't buy my product for years). That's when I would get that light bulb moment: this problem sounds familiar. Go to your shelf. Get down the book, and be amazed that you are not the first person to tackle this problem in the history of the world.

I have been continually surprised at how many times I could go back to that same well for wisdom and advice. I hope you will be too.


When NOT to listen to your users; when NOT to rely on split-tests

There are three legs to the lean startup concept: agile product development, low-cost (fast to market) platforms, and rapid-iteration customer development. When I have the opportunity to meet startups, they usually have one of these aspects down, and need help with one or two of the others. The most common need is becoming more customer-centric. They need to incorporate customer feedback into the product development and business planning process. I usually recommend two things: try to get the whole team to start talking to customers ("just go meet a few") and get them to use split-testing in their feature release process ("try it, you'll like it").

However, that can't be the end of the story. If all we do is mechanically embrace these tactics, we can wind up with a disaster. Here are two specific ways it can go horribly wrong. Both are related to a common brain defect we engineers and entrepreneurs seem to be especially prone to. I call it "if some is good, more is better" and it can cause us to swing wildly from one extreme of belief to another.

What's needed is a disciplined methodology for understanding the needs of customers and how they combine to form a viable business model. In this post, I'll discuss two particular examples, but for a full treatment, I recommend Steve Blank's The Four Steps to the Epiphany.




Let's start with the "do whatever customers say, no matter what" problem. I'll borrow this example from randomwalker's journal - Lessons from the failure of Livejournal: when NOT to listen to your users.
The opportunity was just mind-bogglingly huge. But none of that happened. The site hung on to its design philosophy of being an island cut off from the rest of the Web, and paid the price. ... The site is now a sad footnote in the history of Social Networking Services. How did they do it? By listening to their users.
randomwalker identifies four specific ways in which LJ's listening caused them problems, and they are all variations on a theme: listening to the wrong users. The early adopters of LiveJournal didn't want to see the site become mainstream, and the team didn't find a way to stand up for their business or vision.

I remember having this problem when I first got the "listening to customers" religion. I felt we should just talk to as many customers as possible, and do whatever they say. But that is a bad idea. It confuses the tactic, which is listening, with the strategy, which is learning. Talking to customers is important because it helps us deal in facts about the world as it is today. If we're going to build a product, we need to have a sense of who will use it. If we're going to change a feature, we need to know how our existing customers will react. If we're working on positioning for our product, we need to know what is in the mind of our prospects today.

If your team is struggling with customer feedback, you may find this mantra helpful: seek out a synthesis that incorporates both the feedback you are hearing and your own vision. Any path that leaves out one aspect or the other is probably wrong. Have faith that this synthesis is greater than the sum of its parts. If you can't find a synthesis position that works for your customers and for your business, it either means you're not trying hard enough or your business is in trouble. Figure out which one it is, have a heart-to-heart with your team, and make some serious changes.




Especially for us introverted engineering types, there is one major drawback to talking to customers: it's messy. Customers are living, breathing, complex people, with their own drama and issues. When they talk to you, it can be overwhelming to sort through all that irrelevant data to capture the nuggets of wisdom that are key to learning. In a perfect world, we'd all have the courage and stamina to persevere, and implement a complete Ideas-Code-Data rapid learning loop. But in reality, we sometimes fall back on inadequate shortcuts. One of those is an over-emphasis on split-testing.

Split-testing provides objective facts about our product and customers, and this has strong appeal to the science-oriented among us. But the thing to remember about split-testing is that it is always retrospective - it can only give you facts about the past. Split-testing is completely useless in telling you what to do next. Now, to make good decisions, it's helpful to have historical data about what has and hasn't worked in the past. If you take it too far, though, you can lose the creative spark that is also key to learning.
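To make that concrete, here is a minimal sketch (in Python, with made-up numbers - nothing below comes from a real experiment) of the only question a finished split-test can answer: did the variant that already ran convert differently from the control? Notice that it scores the past; it can't suggest the next idea.

    import math

    def split_test_z_score(control_conversions, control_visitors,
                           variant_conversions, variant_visitors):
        """Two-proportion z-test on a finished A/B test.
        Roughly, |z| > 1.96 means ~95% confidence the difference is real."""
        p_control = control_conversions / control_visitors
        p_variant = variant_conversions / variant_visitors
        # Pooled rate under the null hypothesis that there is no difference
        pooled = ((control_conversions + variant_conversions)
                  / (control_visitors + variant_visitors))
        std_err = math.sqrt(pooled * (1 - pooled) *
                            (1 / control_visitors + 1 / variant_visitors))
        return (p_variant - p_control) / std_err

    # Hypothetical numbers: roughly 5.0% vs. 6.6% conversion
    z = split_test_z_score(120, 2400, 156, 2380)
    print("z = %.2f" % z)  # facts about the past, not a plan for the future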

For example, I have often fallen into the trap of wanting to optimize the heck out of one single variable in our business. One time, I became completely enamored with Influence: The Psychology of Persuasion (which is a great book, but that's for another post). I managed to convince myself that the solution to all of our company's problems was contained in that book, and that if we just faithfully executed a marketing campaign around the principles therein, we'd solve everything. I convinced a team to give this a try, and they ran dozens of split-test experiments, each around a different principle or combination of principles. We tried and tried to boost our conversion numbers, each time analyzing what worked and what didn't, and iterating. We were excited by each new discovery, and each iteration we managed to move the conversion needle a little bit more. Here was the problem: the total impact we were having was minuscule. It turns out that we were not really addressing the core problem (which had nothing to do with persuasion). So although we felt we were making progress, and even though we were moving numbers on a spreadsheet, it was all for nothing. Only when someone hit me over the head and said "this isn't working, let's try a radically new direction" did I realize what had happened. We'd forgotten to use all the tools in our toolbox, and lost sight of our overarching goal.

It's important to be open to hearing new ideas, especially when the ideas you're working on are split-testing poorly. That's not to say you should give up right away, but always take a moment to step back and ask yourself if your current path is making progress. It might be time to reshuffle the deck and try again.

Just don't forget to subject the radical new idea to split-testing too. It might be even worse than what you're doing right now.




So, both split-testing and customer feedback have their drawbacks. What can you do about it? There are a few ideas I have found generally helpful:
  • Identify where the "learning block" is. For example, think of the phases of the synthesis framework: collecting feedback, processing and understanding it, and choosing a new course of action. If you're not getting the results you want, it's probably because one of those phases is blocked. For example, I've had the opportunity to work with a brilliant product person who had an incredible talent for rationalization. Once he got the "customer feedback" religion, I noticed this pattern: "Guys! I've just conducted three customer focus groups, and, incredibly, the customers really want us to build the feature I've been telling you about for a month." No matter what the input, he'd come around to the same conclusion as before.

    Or maybe you have someone on your team that's just not processing: "Customers say they want X, so that's what we're building." Each new customer that walks in the door wants a different X, so we keep changing direction.

    Or consider my favorite of all: the "we have no choice but to stay the course" pessimist. For this person, there's always some reason why what we're learning about customers can't help. We're doomed! For example, we simply cannot make the changes we need because we've already promised something to partners. Or the press. Or to some passionate customers. Or to our team. Whoever it is, we just can't go back on our promise, it'd be too painful. So we have to roll the dice with what we're working on now, even if we all agree it's not our best shot at success.

    Wherever the blockage is happening, by identifying it you can work on fixing it.

  • Focus on "minimum feature set" whenever processing feedback. It's all too easy to put together a spec that contains every feature that every customer has ever asked for. That's not a challenge. The hard part is to figure out the fewest possible features that could possibly accomplish your company's goals. If you ever have the opportunity to remove a feature without impacting the customer experience or business metrics - do it. If you need help determining what features are truly essential, pay special attention to the Customer Validation phase of Customer Development.

  • Consider whether the company is experiencing a phase-change that might make what's made you successful in the past obsolete. The most famous of these phase-change theories is Crossing the Chasm, which gives very clear guidance about what to do in a situation where you can't seem to make any more progress with the early-adopter customers you have. That's a good time to change course. One possibility: try segmenting your customers into a few archetypes, and see if any of them looks more promising than the others. Even if one archetype currently dominates your customer base, would it be more promising to pursue a different one?
As much as we try to incorporate scientific product development into our work, the fact remains that business is not a science. I think Drucker said it best. It's pretty easy to deliver results in the short term or the long term. It's pretty easy to optimize our business to serve just one of employees, customers, or shareholders. But it's incredibly hard to balance the needs of all three stakeholders over both the short- and long-term time horizons. That's what business is designed to do. By learning to find a synthesis between our customers and our vision, we can make a meaningful contribution to that goal.


Lo, my 5 subscribers, who are you?

It's not always fun being small. When you have an infinitesimal number of customers, it can be embarrassing. Some might look at my tiny "5 readers" badge and laugh. But as long as your ego can take it, there are huge advantages to having a small number of customers.

Most importantly, you can get to know those few customers in a way that people with zillions of customers can't. You can talk to them on the phone. You can provide personalized support. You can find out what it would take for them to adopt your product, and then follow up a week later and see if they did. Same with finding out what it would take to get them to recommend your product to a friend. You can even meet the friend.

For companies in the early-adopter phase, you can play "the earlyvangelist game" whenever a customer turns out to be too mainstream for your product. Pick a similar product that they do use, and ask them "who was the first person you know who started using [social networking, mobile phones, plasma TV, instant messaging...]? can I talk to them?" If your subject is willing to answer, you can keep going, following the chain of early-adoption back to someone who is likely to want to early-adopt you.

That level of depth can help you build a strong mental picture of the people behind the numbers. It's enormously helpful when you need to generate new ideas about what to do, or when you face a product problem you don't know how to solve.

(For example, we used to be baffled at IMVU by the significant minority of people who would download the software but never chat with anyone. It wasn't until we met a few of them in person that we realized that they were having plenty of fun dressing up their avatar and modeling clothes. They wanted to get their look just right before they showed it to anyone else - they would even pay money to do it. But all of our messaging and "helpful tutorials" were pushing them to chat way before they were ready. How annoying!)

And since I have a blog, I have a way to ask questions directly to you. If you have a minute, post your answers in a comment, or email me. Here's what I want to know:
  1. First of all, the NPS question: On a scale of 0-10 (where 10 is most likely), how likely is it that you would recommend this blog to a friend or colleague?
  2. How did you hear about it?
  3. What led you to become a subscriber, versus just reading an article and leaving like everybody else? (or, if you're not a subscriber, what would it take to convince you?)
  4. What do you hope to see here in the future?
Thanks, you loyal few. I am grateful for your time and feedback.


How to Usability Test your Site for Free

Noah Kagan has a great discussion of usability testing which can help get you over the "that's too hard" or "that's too expensive" fear.

At Facebook we never did testing or looked at analytics. At Mint, Aaron (CEO) was very very methodical and even flew in his dad who is a usability expert. We did surveys, user testing and psychological profiles. This was extremely useful in identifying the types of users we may have on the site and especially for seeing how people use the site. I never really did this before and was AMAZED how people use the site vs. what I expected. Most people know I am very practical or as my ex-gfs call it “cheap.” Anyways, here how our new start-up user tests.

His tips are both very practical and very effective - I've used craigslist, surveymonkey, and, yes, even cafes too. Usability testing is great for coming up with ideas about what to change in your product, but don't forget to split-test those ideas to make sure they work, too.

Read more at How to Usability Test your Site for Free | Noah Kagan's Okdork.com


How to listen to customers, and not just the loud people

Frequency is more important than talking to the "right" customers, especially early on. You'll know when the person you're talking to is not a potential customer - they just won't understand what you're saying. In the very early days, the trick is to find anyone at all who can understand you when you are talking about your product.

In our first year at IMVU, we thought we were building a 3D avatar chat product. It was only when we asked random people we brought in for usability tests "who do you think of as our competitors?" that we learned otherwise. As product people, we thought of competition in terms of features. So the natural comparison, we thought, would be to other 3D avatar based products, like The Sims and World of Warcraft. But the early customers all compared it to MySpace. This was 2004, and we had never even heard of MySpace, let alone had any understanding of social networking. It required hearing customers say it over and over again for us to take a serious look, and eventually to realize that social networking was core to our business.

Later, when the company was much larger, we had everyone on our engineering team agree to sit in on one usability test every month. It wasn't a huge time commitment, but it meant that every engineer was getting regular contact with an actual customer, which was invaluable. Most of the people building our product weren't themselves target customers. So there was simply no substitute for watching actual customers use the product, live.

Today, when I talk to startup founders, the most common answer I get to the question "do you talk to your customers?" is something like "yes, I personally answer the customer support emails." That's certainly better than nothing, but it's not a good substitute for proactively reaching out. As Seth writes this week in Seth's Blog: Listening to the loud people, the most aggressive customers aren't necessarily the ones you want to hear from. For example, my experience with teenagers is that they are very reluctant to call or email asking for support, even when they have a severe problem. They just don't need another authority figure in their life.

Don't confuse passion with volume. The people who are the lifeblood of an early-stage startup are earlyvangelists. These are people who understand the vision of your company even before the product lives up to it, and, most importantly, will buy your product on that basis. In some situations, they are also the vocal minority who wants to reach out and get in your face when you do something wrong, but not always. If you're just getting negativity from someone, they are more likely an internet troll - not an earlyvangelist. (For more on earlyvangelists and why they are so important, see Steve Blank's The Four Steps to the Epiphany)

Here's the suggestion from Seth Godin I want to emphasize:
And here's one thing I'd do on a regular basis: Get a video camera or perhaps a copy machine and collect comments and feedback from the people who matter most to your business. Then show those comments to the boss and to your staff and to other customers. Do it regularly. The feedback you expose is the feedback you'll take to heart.
It's not enough to just look at the feedback that comes across your desk. You need to foster situations where you - and everyone you work with - are likely to see feedback that matters. Some techniques that I've found especially helpful:

  1. Build your own tracking survey, using a methodology like Net Promoter Score (NPS) to identify and get a regular check-up from promoters (and to screen out detractors). As a nice side-effect, NPS gives you a very reliable report card on customer satisfaction (a minimal sketch of the arithmetic follows this list).
  2. Create a members-only forum where only qualified customers (perhaps, paying customers) can post. Let them connect with each other, but also with you. Treat these people as VIPs, and listen to what they have to say.
  3. Establish a customer advisory board. Hand pick a dozen customers who "get" your vision. The way I have run these in the past (when I was dealing with extremely passionate customers) is to have them periodically produce a "state of your company" progress report. I would insist that this report be included in the materials at every board meeting, uncensored and unvarnished.
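To spell out the report card from the first item: the NPS arithmetic fits in a few lines of Python. The responses below are made up, and the buckets (9-10 promoters, 7-8 passives, 0-6 detractors) are the standard NPS definitions rather than anything specific to this survey.

    def net_promoter_score(ratings):
        """NPS = % promoters (9-10) minus % detractors (0-6).
        Passives (7-8) count toward the total but neither bucket."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Hypothetical survey responses, not real data
    responses = [10, 9, 9, 8, 7, 10, 6, 9, 3, 10]
    print("NPS: %+.0f" % net_promoter_score(responses))  # +40 for this sample

Run the same calculation on each month's responses and you have the regular check-up described above.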


Customer Development Engineering


Yesterday, I had the opportunity to guest lecture again in Steve Blank's entrepreneurship class at the Berkeley-Columbia executive MBA program. In addition to presenting the IMVU case, we tried for the first time to do an overview of a software engineering methodology that integrates practices from agile software development with Steve's method of Customer Development.

I've attempted to embed the relevant slides below. The basic idea is to extend agile, which excels in situations where the problem is known but the solution is unknown, into areas of even greater uncertainty, such as your typical startup. In a startup, both the problem and solution are unknown, and the key to success is building an integrated team that includes product development in the feedback loop with customers.

[Embedded slides: overview of the Customer Development Engineering methodology]

As always, we had a great discussion with the students, which is helping refine how we talk about this. As usual, I'm heavy on the theory and not on the specifics, so I thought I'd share some additional thoughts that came up in the course of the classroom discussion.

  1. Can this methodology be used for startups that are not exclusively about software? We talk about taking advantage of the incredible agility offered by modern web architecture for extremely rapid deployment, etc. What about a hardware business with some long-lead-time components?

    To be clear, I have never run a business with a hardware component, so I really can't say for sure. But I am confident that many of these ideas still apply. One major theory that has influenced the way I think about processes comes from Lean Manufacturing, where they use these same techniques to build cars. If you can build cars with it, I'm pretty sure you can use it to add agility and flexibility to any product development process.

  2. What's an example of a situation where "a line of working code" is not a valid unit of progress?

    This is incredibly common in startups, because you often build features that nobody wants. We had lots of these examples at IMVU; my favorite is the literally thousands of lines of code we wrote for IM interoperability. This code worked pretty well, was under extensive test coverage, worked as specified, and was generally a masterpiece of amazing programming (if I do say so myself). Unfortunately, positioning our product as an "IM add-on" was a complete mistake. Customers found it confusing and it turned out to be at odds with our fundamental value proposition (which really requires an independent IM network). So we had to completely throw that code away, including all of its beautiful tests and specs. Talk about waste.


  3. There were a lot of questions about outsourcing/offshoring and startups. It seems many startups these days are under a lot of pressure to outsource their development organization to save costs. I haven't worked in that model myself, so I can't say anything definitive. I do have faith that, whatever situation you find yourself in, you can always find ways to increase the speed of iteration. I don't see any reason why having the team offshore is any more of a liability in this area than, say, having to do this work while selling through a channel (and hence, not having direct access to customers). Still, I'm interested in exploring this - some of the companies I work with as an advisor are tackling this problem as we speak.


  4. Another question that always comes up when talking about customer development is whether VCs and other financial backers are embracing this way of building companies. Of course, my own personal experience has been pretty positive, so I think the answer is yes. Still, I thought I'd share this email that happened to arrive during class. Names have, of course, been changed to protect the innocent:


    Hope you're well; I thought I'd relay a recent experience and thank you.
    I've been talking to the folks at [a very good VC firm] about helping them with a new venture ... Anyway, a partner was probing me about what I knew about low-burn marketing tactics, and I mentioned a book I read called "Four Steps to the…"
    It made me a HUGE hit, with the partner explaining that they "don't ramp companies like they used to, and have very little interest in marketing folks that don't know how to build companies in this new way."

Anyway, thanks to Steve and all of his students - it was a fun and thought-provoking experience.
