
Case Study: The Nordstrom Innovation Lab

Today's case study answers a bunch of questions all at once about Lean Startup principles: Can they be used inside a Fortune 500 company? Can they be used to sell physical, low-tech products? Can they be used in a retail store? I have been confidently answering questions like these non-stop for the past few months. I do believe the answer is yes. But, as the saying goes, seeing is believing. And now you won't have to take my word for it.

Nordstrom is currently ranked #254 on the Fortune 500 (yes, I looked it up) with over $9 billion in revenues. Scrappy startup they are not. And yet they face the same competitive pressures that are causing every modern company to take a long, hard look at the process they use to innovate. Anyone who has read The Innovator's Dilemma knows just how hard it is for a company that has been successful to invest in potentially disruptive innovations.

I have been talking to JB Brown, the manager of the Nordstrom Innovation Lab, about publishing a case study. At the same time, Nordstrom had sent a camera crew to document the Lab at work. When I saw the rough cut of the videos they were producing, I knew they would be a powerful teaching tool. It's one thing to talk about "rapid experimentation" and "validated learning" as abstract concepts. It's quite another to see them in action, in a real-world setting. Proving his understanding of minimum viable product, JB suggested that we start small, by posting a "case study MVP." That's how this post came to be.

Below, you'll find two videos: one about the lab, and one containing a case study of the team at work. Watch them both. If you have questions, JB has generously agreed to make himself available to answer them in a future post. Just leave your question as a comment to this post. If there's sufficient interest, we'll expand this MVP.


"A Lean Startup Inside a Fortune 500 Company"



"We Really Don't Know What the Features Are Yet..."


Here are some highlights that I found especially interesting:
  • One-week iterations. One of the hardest things about corporate innovation is breaking through the slow default pace of most initiatives. The Nordstrom Innovation Lab solves this problem by working in one-week increments. In the second video above, you'll see them build an entire new product, end to end, in one week.

  • Genchi gembutsu. This is one of my favorite concepts from the Toyota Production System. It translates roughly as "go and see for yourself" - it's the Toyota version of "get out of the building." By talking face-to-face with customers, salespeople, and managers in a physical store, the innovation team is able to identify an opportunity that they can execute against extremely quickly. But they go beyond simply "getting out of the building" - they actually set up shop physically in a retail store for the entire week. They build products, test new features, and get feedback all out in the open. You really have to see it to believe it.

  • Simple, rapid experiments. I hear all the time that developing for iOS, with its myriad approval delays and deployment obstacles, means that you can't use rapid development techniques on that platform. Yet in the video you'll see this team overcome that bias with a little ingenuity. They simply brought two iPads with them. While the app is in development, the sales team uses one iPad and the developers work on the other. At every break, the sales team swaps iPads with the developers - so the sales team is always using the latest version of the app. (The same technique works with paper prototypes, too.)

Have questions for JB and the rest of the Nordstrom Innovation Lab team? Post them as comments.


Case Study: Lean UX at work

This article is a guest post by Jeff Gothelf, Director of User Experience at TheLadders in New York City. Jeff has been promoting the use of Lean UX as an effective method to spur greater innovation, quality and productivity in startups as well as within teams in larger organizations. Jeff blogs at www.jeffgothelf.com/blog and tweets at @jboogie (www.twitter.com/jboogie). Jeff will also be joining us on stage at Startup Lessons Learned 2011, where he will be presenting a case study on his experience blending design, Lean Startup, and large company culture.
  
Lean Startups need to make snap decisions, iterate quickly and pivot when needed. Can an established organization with a recognized brand, proven revenue stream and an employee count in the hundreds or thousands embrace these principles? I’m happy to report the answer is “yes” as we’ve proven at TheLadders.

TheLadders is an eight-year-old company based out of New York City focusing on the $100k+ employment market (both jobseekers and recruiters). TheLadders’ subscription model has proven to be successful and the company has enjoyed solid growth, with the employee count recently crossing 400. Originally a waterfall shop, we transitioned to Agile development about two years ago. We entered the process largely blind and had to feel our way in the dark until we found processes that worked for each team (you can read more about our transition here). Now entering our third year with Agile, we’re continually trying to improve the efficiency, productivity and velocity of the various teams. Levels of Agile adoption span the full spectrum across our six Scrum teams. Some teams are still working to get beyond the gated approach of various phases and approval cycles, while other teams are beginning to break down the silos of roles and focus on team-wide problem solving and velocity.

The most advanced of these teams, by adhering to the build-measure-learn methodology, has removed so much waste from their process that they’ve actually become an internal version of a lean startup within the context of our larger organization.

Here’s what’s worked for them:

Lean UX
TheLadders’ suite of products has UI components throughout the experience, meaning some level of design is required for each feature to ship. A dedicated User Experience (UX) designer is assigned to this team. Traditionally, the focus of this designer’s efforts was a highly-detailed set of deliverables that included workflow diagrams, sitemaps, wireframes, annotations and detailed UI specification docs. These deliverables, however, had a detrimental effect on the collective problem-solving power of the team. They favored the “designer as hero” approach, where an individual would disappear for an extended period of time and return with a fully-baked solution to present to the team. The team would then have to push back on issues of appropriateness of solution, scope and feasibility, and terrifically time-consuming negotiations would ensue.

Driven by a desire to implement a highly-iterative approach favoring the validated learning of actual customer usage of our products, the team adopted Lean UX as their design approach. Covered extensively here and here, Lean UX is a team-wide acceptance of significantly lighter-weight design deliverables that are used to drive conversation, estimation and validation both with internal stakeholders and ultimately customers. The approach demystifies the design process by involving the other team members (yes, developers too!) in the problem-solving process.

The deliverables are low-fidelity – meaning the designer has spent a minimal amount of time creating them. Investment in these documents is low with their real purpose being to spur discussion – both within the team and beyond it to business owners and customers. The low-fidelity nature of the deliverable also encourages non-designers to join the discussion. The fear of “messing with” a beautiful mockup dissipates and a healthier discussion over functionality and workflow follows.

“Demos have become our design reviews.”

Examples of these low-fi deliverables include whiteboard sketches, pencil/pen-and-paper sketchbooks, wireframes, hacked screenshots (i.e., screengrabs that have been quickly manipulated in Fireworks) and even rough prototypes (in such varied tools as PowerPoint, Adobe Fireworks and even paper). This level of design takes less time to create and to iterate upon. As feedback is collected by the designer from team members and stakeholders, the next version of the design asset is turned around in hours, not days. This gives the project a powerful feeling of forward motion and progress. In addition, the build-measure-learn cycle can be executed much more quickly (even inside the walls of a larger organization) with this level of design.

This whiteboard sketch serves as an adequate representation of a proposed user experience and feature set. The conversation around this sketch provides enough directional insight for the rest of the team to voice any concerns, to begin the estimation process and to start coding.


Mobile designs require the context of the device to truly get a sense of the experience. Using readily available (and free) wireframe templates, we were able to quickly create this representation of a proposed mobile UI focusing on validating the content and workflow rather than creating pixel-perfect graphics.


Initially, the biggest challenge was to increase developers’ comfort level with such low-fi assets. In the past, every pixel and interaction was spelled out in extreme detail. With these lighter-weight designs, the missing documented details are conveyed through conversation. One of the ways we achieve this level of understanding and comfort is by including the entire team (a product manager, 4 developers, a QA analyst, a UX designer and a dev manager) in solving each problem.



A member of the team participating in a brainstorm activity.

The implicit decisions that were not covered by these assets became innate product knowledge through this team-wide involvement. For example, in the mobile wireframe above, nowhere does it actually say what happens when the user clicks the “Follow” button (something that would’ve been meticulously documented in our past process). That workflow was discussed and agreed to as part of a team conversation around feasibility and scope.

Beyond that, we face the challenge of getting business owners to make decisions based on these rough representations of customers’ end-state experiences. This is an ongoing challenge. The most effective way we’ve found to mitigate this is by holding live, weekly demos of working (unfinished) code to which these business owners are invited. At these demos, our rough designs come to life allowing business owners to react and provide insight and feedback prior to the push to production. Given that we push code live every two weeks, these demos have become our design reviews.

Competencies over roles
One of the interesting outcomes of implementing this process at TheLadders has been the gradual breakdown of the concept of “roles” within the team. Roles, such as software engineer, user experience designer and product manager, come with narrow and explicitly-assumed responsibilities. For example, software engineers write code, UX designers create wireframes and product managers gather requirements. Traditionally, team members were only expected to work on the tasks directly associated with their roles (e.g., deciding how the UI was laid out and what color certain elements were was always the realm of the “designer”). As the team has matured through multiple iterations, roles have given way to competencies.

An individual may have multiple competencies. There is usually a core competency they are best at (e.g., writing code), but they likely have other competencies that can contribute to the team’s goals. By demystifying the design process, the team has provided a venue for other team members’ competencies to shine. Conversely, input on coding decisions has also become open to non-coders. Yes, we all still have titles that are aligned with these traditional roles, but the work we do now spans the various competencies we all bring to the table. It may seem like a subtle difference, but it’s powerful.


Transparency builds trust
The Lean UX process has increased transparency into the way designers work. This transparency is, at times, uncomfortable for designers because it moves them away from the hero-based designer attitude. Instead, it shares the risks and rewards with the entire team. Historically, a designer wouldn’t reveal a design until they felt it was “ready.” This usually meant a prolonged period of time working alone followed by a dramatic reveal of “the solution” to the team. Lean UX pushes for validation (both internal and external) much earlier in the design process, requiring the designer to reveal their thinking in its infancy. Providing insight early allows the rest of the team to move forward as well. For example, if the UX designer is considering implementing a real-time statistics feature, showing the team a sketch of the elements proposed to go on that screen and discussing their behavior provides them with enough information to figure out how to get that data to the presentation layer.

Letting the rest of the team into the design process yields a much stronger level of trust. Surprises are minimized while scope issues are resolved sooner. Repeated over multiple iterations, this process begins to build a shared understanding of the team’s capacities and constraints. This understanding matures into trust over time making intra-team communications more efficient since they’re no longer burdened with posturing and politics. The team is able to address the core issues faster and learns how to solve problems much more effectively.

Freedom to fail
Critical to the success of this team is freedom – freedom from micromanagement, freedom from review cycles and, most importantly, freedom to fail. Senior management has entrusted this team with a problem statement, set short-term KPIs as goalposts for the team to strive towards and allowed them to go and execute. The Lean UX team at TheLadders is tasked with solving the issue of communications between jobseekers and recruiters. Given the problem statement of figuring out how to get these two parties communicating more effectively, and guided by KPIs such as response rates and the number of communications sent through the system, the team comes up with its own solutions. Those solutions are implemented quickly, the results of those changes are measured and the learnings from the analysis of those measurements are applied to the next iteration.

In the cases where conflict may occur with other teams or with senior management, the development manager working closely with the UX designer (who, in this case, happens to be relatively senior) protects the team by reminding the organization that anything implemented is a short two weeks away from its next iteration should the learnings reveal that it was a suboptimal decision.

Proven over many iterations, the build-measure-learn cycle has insulated the team from the micromanagement of the past. If an estimate is wrong or an iteration falls short of its original number of story cards, no one gets fired. The trick is to keep management aware of your activities and intentions, justify those intentions with the learnings from each iteration and (most importantly) show progress. Hit a milestone? Tell the business owner. Exceeded your original expectations? Tell your boss. Learned something completely new about your audience? Stand up in front of the company and explain it. It’s this proactive approach that keeps the micromanagers at bay and the team free to solve problems as it sees fit.

Self-organized “enlightenment”
What’s surprised the team the most as it continues to mature is the nature of their process – or lack thereof. It seems that as a team matures and the trust bonds between the members grow, the rituals of formal process fall away in favor of less-prescribed, more “understood” cadences. The role of scrum master has become less relevant as each team member owns their own accountability. Participation, coordination and conversation all begin to just “happen” without the need for external prodding. Elements of scrum have begun to mix seamlessly with kanban as the way the team works continues to evolve and improve every two weeks. Seemingly minor tactics, like limiting the number of “in dev” cards at any time and visually “aging” each one based on how many days it’s been in flight, have driven up awareness, focus, productivity and quality while reducing project management overhead.

The team’s board holds a constantly prioritized and updated backlog of stories for each iteration. A maximum of 4 cards (one for each developer) can be in-flight at any given time. Even if no UI assets have been provided for the next card in the backlog, the team begins development assuming certain design tactics based on existing style guides.  Paired developer/designer sessions refine the UX. The “release target” is a moving line. Unforeseen complexities or reprioritizations force the team to move stories above/below the line for each release.



Each in-flight story card is aged using magnetic avatars. On our board, each day in flight corresponds to a stage in the avatar’s life: as the story ages, the avatar ages – from a baby through adulthood to old age and finally, death. Our goal is to never let a story die. This technique keeps the team keenly aware of where they’re getting bogged down and urges resolution.

We keep our important KPIs updated on the board daily to let us know how we’re progressing. We also keep count of the number of days we’ve gone without a story reaching the “death” stage on the board. Finally, every team member who’s late for stand-up owes $1 (which inevitably gets spent on beer).
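
The card-aging mechanic is simple enough to express in a few lines. Here is a minimal sketch of the idea in TypeScript; the stage names, the one-stage-per-day pacing, and the function names are assumptions for illustration (the team's actual board used physical magnetic avatars, not software):

    type Stage = "baby" | "child" | "adult" | "elder" | "dead";
    const STAGES: Stage[] = ["baby", "child", "adult", "elder", "dead"];

    // Map the number of days a story has been in flight to an avatar stage.
    function avatarStage(daysInFlight: number): Stage {
      return STAGES[Math.min(daysInFlight, STAGES.length - 1)];
    }

    // The four-card WIP limit described above: one in-flight card per developer.
    const WIP_LIMIT = 4;
    function canStartCard(cardsInFlight: number): boolean {
      return cardsInFlight < WIP_LIMIT;
    }

    console.log(avatarStage(0)); // "baby"
    console.log(avatarStage(7)); // "dead" - the outcome the team tries to avoid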

Conclusion
The Lean Startup approach is not limited to small companies. The drive to minimize waste, work collaboratively, solve problems and develop customers can manifest inside larger organizations eager to continue innovating. By broadening the application of these principles to disciplines which, in the past, may have been seen as bottlenecks (like UX design), a team-wide approach is developed. The Lean UX process exemplifies this and has proven its value within TheLadders organization. When a team is tasked with solving a problem, not implementing a solution, and given an environment where autonomy and accountability are held in the highest regard, a truly productive and invested team emerges. We’re actively working to train the rest of our teams to work this way. As we do, we’re learning that the training extends beyond the teams to their business owners and stakeholders as well. Lean UX has proven successful at TheLadders. Give it a shot at your organization and share your story.

Liked this case study? Come see Jeff speak at SLLCONF 2011 on May 23 in SF. 



Case Study: UX, Design, and Food on the Table

(One of the common questions I hear is how to reconcile design and user experience (UX) methods with the Lean Startup. To answer, I asked one of my favorite designers to write a case study illustrating the way they work, taking us step-by-step through a real life redesign.

This is something of an IMVU reunion. The attendees at sllconf 2010 were wowed by Food on the Table's presentation. If you weren't there, be sure to watch the video. Manuel Rosso was IMVU's first VP of Marketing, and is now CEO of Food on the Table, one of the leading lean startups in Austin. I first met Laura Klein when we had the good fortune of hiring her at IMVU to join our interaction design team. Since then, she's gone on to become one of the leading experts implementing UX and design in lean startups. 

In this case study, Laura takes us inside the design process in a real live startup. I hope you'll find it illuminating. -Eric)

A lot of people ask me whether design fits into the lean startup process. They're concerned that if they do any research or design up front, they will end up in a waterfall environment.

This is simply not true. Even the leanest of startups can benefit from design and user research. The following is a great example of how they can work together.

A couple of months ago, Manuel Rosso, the CEO of Food on the Table, came to me with a problem. He had a product with a great value proposition and thousands of passionate customers. That wasn't the problem. The problem was activation.

As a bit of background, Food on the Table helps people plan meals for their families around what is on sale in their local grocery stores. The team defined an activated user as someone who made it through all the steps of the first time user experience: selecting a grocery store, indicating food preferences, picking recipes, and printing a grocery list.

Users who made it through activation loved the product, but too many first time users were getting lost and never getting all the way to the end.

Identifying The Problem

More than any startup I've worked with, Food on the Table embraces the lean startup methodology. They release early and often. They get tons of feedback from their users. And, most importantly, they measure and A/B test absolutely everything.

Because of their dedication to metrics, they knew all the details of their registration funnel and subsequent user journey. This meant that they knew exactly how many people weren't finishing activation, and they knew that number was higher than they wanted.

Unfortunately, they fell into a trap that far too many startups fall into at some point: they tried to measure their way out of the problem. They would look at a metric, spot a problem, come up with an idea for how to fix it, release a change, and test it. But the needle wasn't moving.

After a couple of months, Manuel had a realization. The team had always been dedicated to listening to users. But as they added new features, their conversations with users had changed - they became more narrowly focused on new features and whether each individual change was usable and useful. Somewhere along the way, they'd stopped observing the entire user experience, from end to end. This didn't last very long - maybe a month or two, but it was long enough to cause problems.

As soon as he realized what had happened, Manuel went back to talking directly to users about their overall experiences rather than just doing targeted usability tests, and within a few hours he knew what had gone wrong. Even though the new features were great in isolation, they were making the overall interface too complicated. New users were simply getting lost on their way to activation.

Now that they knew generally why they were having the problem, Manuel decided he needed a designer to identify the exact pain points and come up with a way to simplify the interface without losing any of the features.

Key Takeaways:
  • Don't try to measure your way out of a problem. Metrics do a great job of telling you what your problem is, but only listening to and observing your users can tell you why they're having trouble.
  • When you're moving fast enough, a product can become confusing in a surprisingly short amount of time. Make sure you're regularly observing the user experience.
  • Adding a new feature can be useful, but it can also clutter up an interface. Good design helps you offer more functionality with less complexity.

Getting an Overview of the Product

When I first came on board, the team had several different experiments going, including a couple of different competing flows. I needed to get a quick overview of the entire user experience in order to understand what was working and what wasn't.

Of course, the best way to do that is to watch new and current customers use the product. In the old days, I would have recruited test participants, brought them into an office, and run usability sessions. It would have taken a couple of weeks.

Not anymore! I scheduled UserTesting.com sessions, making sure that I got participants in all the main branches of the experiments. Within a few hours, I had a dozen 15-minute videos of people using the product. The entire process, including analysis, took about one full day.

Meanwhile, we set up several remote sessions with current users and used GoToMeeting to run fast observational sessions in order to understand the experience of active users. That took another day.

Key Takeaway: Get feedback fast. Online tools like GoToMeeting and UserTesting.com (and about a hundred others) can help you understand the real user experience quickly and cheaply.

Low Hanging Fruit

Once we had a good idea of the major pain points, we decided to split the design changes into two parts: fixing low-hanging fruit and making larger, structural changes to the flow. Obviously, we weren't going to let engineering sit on their hands while we made major design changes.

The most important reason to do this was that some of the biggest problems for users were easy to fix technically and could be addressed with almost no design input whatsoever.

For example, in one unsuccessful branch of a test, users saw a button that would allow them to add a recipe to a meal plan. When user-test participants within the office pressed the button, it very quickly added the recipe to the meal plan, and users had no problem understanding it. When we observed users pressing the button on their own computers, with normal home broadband connections, the button took a few seconds to register the click.

Of course, this meant that users would click the button over and over, since they were getting no feedback. When the script returned, the user would often have added the recipe to their meal plan several times, which wasn't what they meant to do.

This was, by all accounts, a bad user experience. Why wasn't it caught earlier?

Well, as is the case at most software companies, the computers and bandwidth in the office were much better than the typical user's setup, so nobody saw the problem until we watched actual users in their natural environments.

What was the fix? We put in a "wait" spinner and disabled the button while the script was processing. It took literally minutes to implement and delivered a statistically significant improvement in the performance of that branch of the experiment.
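
Here's a minimal sketch of that fix, assuming a browser button wired to an asynchronous call; the element id and the addRecipeToMealPlan() function are hypothetical stand-ins, since the post doesn't show the actual code:

    // Hypothetical stand-in for the slow server round trip described above.
    declare function addRecipeToMealPlan(): Promise<void>;

    async function onAddRecipeClick(button: HTMLButtonElement): Promise<void> {
      const spinner = document.getElementById("wait-spinner");
      button.disabled = true;              // block repeat clicks immediately
      if (spinner) spinner.hidden = false; // show the "wait" feedback
      try {
        await addRecipeToMealPlan();       // felt instant in the office, slow at home
      } finally {
        if (spinner) spinner.hidden = true;
        button.disabled = false;
      }
    }

Disabling the button the instant it is clicked is what eliminates the duplicate submissions; the spinner just tells the user why they're waiting.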

Giving immediate feedback drastically reduced user error

Manuel told me that, immediately after that experience, the team added a very old, slow computer to the office and recently caught a nasty problem that could add 40 seconds to page load times. Needless to say, all usability testing within the office is now done on the slowest machine.

Key Takeaways:
  • Sometimes big user problems don't require big solutions.
  • To truly understand what your user is experiencing, you have to understand the user's environment.
  • Sometimes an entire branch of an experiment can be killed by one tiny bug. If your metrics are surprising, do some qualitative research to figure out why!

A Redesign

While the engineering team worked on the low-hanging fruit, we started the redesign. But we didn't just chuck everything out. We started from the current design and iterated. We identified a few critical areas that were making the experience confusing and fixed those.

For example, we started with the observation that people were doing ok for the first couple of screens, but then they were getting confused about what they were supposed to do next. A simple "Step" counter at the top of each page and very clear, obvious "Next" and "Back" buttons told users where they were and what they should do next.

Users also claimed to want more freedom to select their recipes, but they were quickly overwhelmed by the enormous number of options, so we put in a simple and engaging way to select from recommended recipes while still allowing users to access the full collection with the click of one button.

Users were confused by how to change their meal plan

Recommended recipe carousels made choosing a meal plan fun and easy to understand

One common problem was that users asked for a couple of features that were actually already in the product. The features themselves were very useful and well-designed; they just weren't discoverable enough. By changing the location of these features, we made them more obvious to people.

Most importantly, we didn't just jump to Photoshop mockups of the design. Instead, we created several early sketches before moving to interactive wireframes, which we tested and iterated on with current users. In this case, I created the interactive wireframes in HTML and JavaScript. While they were all grayscale with no visual design, they worked. Users could perform the most important actions in them, like stepping through the application, adding meals to their meal plan, and editing recipes. This made participants feel like they were using an actual product so that they could comment not just on the look and feel but on the actual interactions.

By the end of the iterations and tests, every single one of the users liked the new version better than the old, and we had a very good idea why.

Did we make it perfect? No. Perfection takes an awful lot of time and too often fails to be perfect for the intended users.

Instead, we identified several areas we'd like to optimize and iterate on going forward. But we also decided that it was better to release a very good version and continue improving it, rather than aim for absolute perfection and never get it out the door.

The redesign removed all of the major pain points that we'd identified in the testing and created a much simpler, more engaging interface that would allow the team to add features going forward. It improved the user experience and set the stage for lots more iteration and experimentation in the future. In fact, the team currently has several more exciting experiments running!

Key Takeaways:
  • Interactive prototypes and iterative testing let you improve the design quickly before you ever get to the coding stage.
  • Targeting only the confusing parts of the interface for redesign reduces the number of things you need to rebuild and helps make both design and development faster.
  • Lean design is about improving the user experience iteratively! Fixing the biggest user problems first means getting an improved experience to users quickly and optimizing later based on feedback and metrics.

The Metrics

Like any good lean startup, we released the new design in an A/B test with new users. We had a feeling it would be better, but we needed to know whether we were right. We also wanted to make sure there weren't any small problems we'd overlooked that might have big consequences.

After the test had run for about six weeks and reached a few thousand people, we had our statistically significant answer: a 77% increase in the number of new users who were making it all the way through activation.
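
The post doesn't share the raw funnel counts, so the numbers below are made up, but a two-proportion z-test is one standard way to check that a lift like this is statistically significant:

    function twoProportionZ(convA: number, nA: number,
                            convB: number, nB: number): number {
      const pA = convA / nA;
      const pB = convB / nB;
      const pooled = (convA + convB) / (nA + nB);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
      return (pB - pA) / se;
    }

    // Hypothetical counts: 15% activation in control vs. a 77% relative lift.
    // |z| > 1.96 corresponds to p < 0.05 (two-tailed).
    const z = twoProportionZ(300, 2000, 531, 2000);
    console.log(z.toFixed(1), z > 1.96 ? "significant" : "not significant");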

My entire involvement with the project to do the research, design, and usability testing was just under 90 hours spread over about 6 weeks.

Key Takeaway: Design - even major redesigns - can be part of an agile, lean startup environment, if done in an efficient way with a lot of iteration and customer involvement.



Laura Klein has been working in Silicon Valley as both an engineer and a UX professional for the last 15 years. She currently consults with lean startups to help them make their products easier to use. She frequently blogs about design, usability, metrics, and product management at Users Know. You can follow her on Twitter at @lauraklein.


Case Study: Rapid iteration with hardware

(I am often asked to explain how to apply Lean Startup approaches to domains beyond software. In order to answer, I have taken to drawing a two-axis diagram. 

On one axis we have the degree of market uncertainty for a given industry. For "cure for cancer" type businesses, there is no question about who the customer is and what the customer wants, and therefore there is no market uncertainty. On the other extreme, modern web-based applications face almost no technical risk, and are governed by high market uncertainty.

On the other axis we have the underlying cycle time of the industry in question. Slow-moving cycles, like drug discovery or new automobile models, govern the slow part of the axis. On the extreme opposite end are rapid iteration businesses like software or fashion.

The key to understanding Lean Startup is to recognize two things:
  1. Lean Startup techniques confer maximum benefit in the upper-right quadrant, namely high market uncertainty coupled with fast cycle time.
  2. Every industry on Earth is currently undergoing a disruption that is causing it to move along both axes: more uncertainty and faster cycle times.

I am aware of no industries that are moving "backwards" on either dimension. Thus, more and more industries are starting to look like the software business. Of course, the underlying root cause of this worldwide disruption is the software and semiconductor revolution. Industries are disrupted as their traditional work process is "infected" by software. And, as a result, more and more companies are able to benefit from Lean Startup practices.

The following case study looks at one such industry, consumer electronics, where the pace of iteration has taken a marked turn towards high speed. It is written by Ronald Mannak, who is currently the CEO of a startup named Yobble. What follows are solely his opinions. -Eric)

In a bar in Amsterdam in 2005, my two cofounders and I came to the sad conclusion that the startup we had been trying to build for two years was doomed. In 2003 we had started developing a martial arts motion-sensing toy, a full three years before the Nintendo Wii changed the world of motion sensing. The toy (we called it Ninja Master) consisted of two hardware units, attached to both wrists. When a child performed a perfect karate move (or better yet: a combo of several karate moves in a row), Bruce Lee-like karate sounds would emerge from a small speaker in the device. We loved the product. Test users loved the product. It was way ahead of its time. We thought we were visionaries and believed the future was motion control. Yet we failed to sell the toy. We talked to every toy company imaginable, but none wanted to license it. "Kids nowadays don't want to move, they play PlayStation" was the most common reply, even though our user tests suggested otherwise. To make matters worse, we lived in a country (Holland) without a properly functioning startup, VC and angel ecosystem. The company was doomed. My co-founders decided startup life wasn't for them.

However, one new idea emerged at that meeting. What if we could make an air drum? Drum sticks with sensors in them. Now that was an idea. Music is much easier to sell (to toy companies) than the abstract martial arts Ninja Master toy. Besides, we could easily expand the line with an air guitar and a device to link the air instruments to a PC. How cool. I loved the idea so much that I decided to pursue it.

I envisioned the product would be popular with 8 to 12 year old boys. I thought the price couldn't be higher than $40. I already knew how the product would be used. Boy, was I wrong.

Waterfall
I had previously worked on a couple of IT projects that used the 'waterfall model', where specifications were written down by one team, thrown over an imaginary wall and implemented by another team. Every single waterfall project I encountered turned out to be a disaster in every way. Specifications turned out to be open to multiple interpretations, and usability was the last priority (if a priority at all). It Just Did Not Work. As a beta tester of the first Borland Delphi, I learned the wonders of rapid prototyping and fast iterations. I wondered if we could do the same for hardware development. It turned out we indeed could.

The first hire
The first hire was critical. I wanted somebody who was creative first and technical second. I found the perfect person at the department of Industrial Design Engineering of the Delft University of Technology: Joris. Joris was creative and eager to learn. Better yet, he plays drums. Even better than that: he likes to tinker with electronics. Hiring him was a no-brainer, and he didn't disappoint.

The internship only lasted six months. That's not much time, considering the scope of the project. I convinced the university that Joris should not be writing specifications and other nonsense first, but start right away building prototypes. And he did.

Joris suggested that before he started working on electronics, we should invite children, give them wooden drum sticks and let them pretend they were playing air drums. It turned out to be an excellent idea. Children are perfect test subjects. To our surprise, every single child did something we didn't anticipate. Without any exception, they all whacked the wooden drum sticks *sideways* and made 'crashing sounds'. I certainly didn't think of sideways movements when I created the first ideas, but apparently it was a good idea to implement.

The prototypes
The next day we started building the first prototype to see if the sensors actually behaved like they were supposed to, and to see if we could measure the sideways movements. The prototype was crude. Joris taped sensors to his arms with duct tape and started drumming in the air with wooden drum sticks (which did not contain any electronics). We connected the sensors to a seven-year-old PC with an Arduino-like interface that ran a simple drum program we developed. The results were amazing. It actually worked. (A video of the first prototype can be found here.)
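
To make the core trick concrete, here is a sketch of the kind of gesture classification such a prototype needs: deciding from an accelerometer sample whether the stick made a downward drum hit or the sideways "crash" whack the kids invented. The axis orientation, threshold and classification rule are illustrative assumptions; the actual prototype's logic was never published:

    interface AccelSample { x: number; y: number; z: number } // acceleration in g
    type Hit = "drum" | "crash" | null;

    const HIT_THRESHOLD = 2.0; // g; something to tune by watching real test users

    function classifyHit(s: AccelSample): Hit {
      const magnitude = Math.sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
      if (magnitude < HIT_THRESHOLD) return null; // too gentle to count as a hit
      // Dominant sideways motion -> the crash gesture.
      return Math.abs(s.x) > Math.abs(s.z) ? "crash" : "drum";
    }

    console.log(classifyHit({ x: 0.2, y: 0.1, z: 3.1 })); // "drum"
    console.log(classifyHit({ x: 2.8, y: 0.3, z: 0.4 })); // "crash"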

We now knew what kids liked and we knew the product was technically feasible. Yet, I still felt we didn't know all we needed to know and wanted to test more. And I'm glad we did.

For the next prototypes, we placed the sensors in PVC pipes to optimize sensor angles and added features to the PC software.

We made another discovery we did not anticipate. We found out that parents (who came along with their children for user testing) often liked the prototype as much as their kids did. We decided to interview the parents and quickly found out that the parents who liked our product were video game players. Of course we liked our product, but we never would have guessed other grown-ups would like it too. Knowing this, we invited test users aged 12 to 30. They also loved the prototypes. Our target audience had just exploded in size. We decided to make a few changes that would make the product less 'toy' and more 'gadget'.

Over a period of six months, we made eight generations of prototypes, each version adding more features and making the product more reliable. By testing each generation, we learned that many of our hypotheses were correct, but a large number were not. By testing early and often, we were able to adjust the product. I believe we demonstrated that it is indeed possible to iterate fast and often in hardware development.

Product Launch
After some financing-related delays, the products went on sale in Europe and Asia in the summer of 2008. The retail selling price was $40, exactly what we targeted. In less than six months, we sold over 90,000 units. Shops sold out of our products two months before Christmas, all without our spending one penny on marketing. The products were voted 'best music gadget' on the television program The Gadget Show, became the best-selling music toys on Amazon.co.uk and the best-selling products on Firebox.com. Best of all, users love the products. On Firebox.com, the average user rating (740 users) is 4.5 out of 5 stars (link). We couldn't have been happier.

Post mortem
We demonstrated it is possible to iterate often and fast, and I believe a lot of the product's success can be attributed to the iterative development process. We didn't find every issue, though. We didn't test the price, and we didn't see the Nintendo Wii or Guitar Hero coming. We chose to enter through the low-margin toy market, where (in hindsight) we should have positioned the products as video games, with higher video-game margins.

Another thing we missed: after we launched, we received many requests to add a double bass drum, as often used in metal. The drums include two drum pedals, and a double bass drum could have been added with a simple, minor change to the embedded software. However, updating the embedded software in sold devices isn't possible with the microcontroller we used. We could have included the feature in a 1.1 version of the product, but the toy manufacturer we licensed the toy to wasn't interested in a new version, as the original version continues to sell well to this day.

Tools for developing hardware keep getting better and cheaper. Open source projects like Arduino and SuperCollider make iterative hardware development cheaper and faster than ever. We learned that connecting the prototypes to PCs and using the PC to run the program is a very good way to test hardware (developing on a PC is still much faster than embedded development on a standalone hardware device).

In the summer of this year, I moved to San Francisco and founded a new startup that makes music-related games and hardware controllers that connect to the iPhone. There are a lot of new opportunities. New, cheap, flashable microcontrollers make firmware updates possible for low-cost hardware. With hardware connected to the internet (in our case, through the iPhone) it should be possible to use continuous deployment: small and very frequent updates of the firmware instead of less frequent large updates. Bugs in firmware could be fixed within minutes or hours instead of weeks or months.
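
A minimal sketch of that continuous-deployment loop, assuming the phone app checks for new firmware on every connection; the endpoint URL and the flashFirmware() helper are hypothetical:

    declare function flashFirmware(image: ArrayBuffer): Promise<void>;

    async function updateFirmwareIfStale(deviceVersion: string): Promise<void> {
      // Ask the server (made-up URL) which build is current.
      const res = await fetch("https://example.com/firmware/latest.json");
      const { version, imageUrl } = await res.json();
      if (version === deviceVersion) return; // device is already up to date
      const image = await (await fetch(imageUrl)).arrayBuffer();
      await flashFirmware(image); // small, frequent updates instead of rare big ones
    }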

(Continuous deployment of hardware is an exciting new capability. In addition to continuous deployment of firmware via the Internet, it is also possible to do continuous deployment by taking advantage of a small-batch production process. When the complete cycle time of assembly is low and the design can be specified mostly through software, it's conceivable that each batch rolling off the line could have a different design. -Eric)

As a final thought, I am convinced iterative design depends mostly on the mindset of the team and the company culture, and less on the tools. I was lucky to have a great team of A-players who were willing to take responsibility and risk. If the company culture is such that mistakes are punished, I am pretty sure iterative development won't work.


Case Study: SlideShare goes freemium

(Normally, I do not write about companies that are doing a marketing launch. But I have decided to make an exception today, for two reasons. First, SlideShare is a fantastic product (that I use on a regular basis) and an impressive example of Lean Startup practices in action at an established company. Second, their story illustrates a key Lean Startup idea: proving the business in micro-scale. It requires separating the product launch from the marketing launch (see Don't Launch), along with other staple Lean Startup tactics: minimum viable product, split-testing, customer development and the pivot. This story especially demonstrates that these techniques are not reserved only for tiny startups just starting out. When SlideShare began the journey you're about to read, they already had more than a million visitors a day. Because the stakes were high, they had to successfully use a technique I call Innovation inside the box, which is important for entrepreneurs inside established companies of all sizes.


Once again, this case study is a collaboration with Sarah Milstein, who conducted the interviews and wrote the post itself, with some minor edits and commentary from me. As this is a new initiative for this blog, we especially welcome your feedback. Did you find this post useful? One recurring request I hear from Lean Startup practitioners around the world is a desire to see examples of the ideas in action. How are we doing?


In the meantime, take a look at how SlideShare performed a significant pivot while still moving at full speed. -Eric)

“The first user experience was actually terrible.” Rashmi Sinha, co-founder and CEO of SlideShare, describes an early version of the analytics package that’s part of the Pro accounts the company announced today.

If your company is using minimum viable product, you’ve probably said the same thing yourself. A lot. SlideShare, founded in 2007, started experimenting with MVPs and A/B testing this year. Those tools, combined with focused customer interviews, have turbo-charged the company’s ability to learn.

What prompted the process change? Early this year, SlideShare launched custom channels. Designed for large businesses, the channels let a company share several types of documents, brand the channel with their own design elements, and then include display advertising, contest promotions, blog aggregation, social media integration and metrics reporting. The idea seemed to SlideShare to be a natural direction. Except it didn’t take off. [I was an early adopter of this feature, and participated in the last marketing launch, as you can see here. Alas, even brilliant marketing adorned with a giant picture of me can't fix the wrong product. -Eric]

Big companies said they liked the idea, but SlideShare found it hard to close deals. Meanwhile, individuals and smaller companies emailed by the hundreds to say that they wanted the features of custom channels, but the sales model—arranged like a media buy—didn’t make sense to them.

SlideShare’s existing customers had needs that the company’s new product—along with its pricing and positioning—simply weren’t solving. Realizing it had taken a wrong turn, SlideShare rethought its approach to premium accounts and ultimately performed what we’d call a value capture pivot, one where the company changes the way it collects revenue from customers.

The process started with a few moving parts. First, the company began quietly testing subscription pricing plans, initially positing a basic plan and an enterprise version. Second, when an individual or small company signed up, Sinha would email them to ask if they’d be willing to hold a phone interview with her to discuss their experience of the product. Despite the fact that SlideShare's product is well-established with many customers, Sinha still took the critical step of (to use Steve Blank's famous phrase) getting out of the building, a particularly important job for founders. Third, SlideShare started holding sales calls with large companies to learn what would prompt them to buy the enterprise version.

“Individuals and small companies wanted analytics, they wanted to know what was happening in social media [for their content], they wanted ad removal and lead gen. Branding was less important to them,” says Sinha. Big companies had other needs. “We didn’t anticipate at all the control features. For instance, we worked with Pfizer, and they wanted the comments turned off. I hadn’t thought that would be a feature. But they’re regulated, so it makes sense.” SlideShare used the two streams of information to segment their market and come up with three plans that recombined the custom channel features in meaningful ways.

But that’s just part of the story. As SlideShare was pivoting, it was also trying out two processes to get better results: 1) A/B testing to refine the pricing plans and the page describing them; and 2) MVPs to hone the actual premium features. The combo helped SlideShare learn a lot in short order. [This is the essential approach to testing a big vision that avoids the "local maximum" trap. See Learning is better than optimization. -Eric]

The company ran landing page splits every two or three days (they initially used Unbounce to generate the pages) and measured them carefully with KISSmetrics. They also used SnapABug for live chat on their site. Between the metrics and the direct customer questions, SlideShare had what Sinha calls “minor learnings and then major shifts.”

For example, early iterations of their pricing page included the original, free version of SlideShare. “We realized that was really confusing people,” says Sinha. “We don’t give you all this Pro plan information right away when you join SlideShare. It’s more like, ‘If you’re already using SlideShare, you might want to try this.’” They removed the core plan, and conversions went up.

The A/B testing did have its challenges. Because SlideShare has more than a million visitors a day, the team is used to developing features that at least 100,000 people will use. “You get used to having a big impact,” says Sinha. With the split tests, maybe 500 people would see an iteration (SlideShare drove traffic with calls to action around their own site). “You have to get ready to deal with much smaller numbers.”
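
One building block behind tests like these is deterministic variant assignment, so each visitor sees the same page on every visit. Here is a minimal sketch; the hash and variant names are illustrative, and SlideShare's actual tooling was Unbounce and KISSmetrics, not custom code like this:

    function assignVariant(visitorId: string, variants: string[]): string {
      let hash = 0;
      for (const ch of visitorId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
      }
      return variants[hash % variants.length];
    }

    // The same visitor always lands in the same variant across visits.
    console.log(assignVariant("visitor-8675309", ["pricing-a", "pricing-b"]));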

The MVPs were tricky to implement for emotional reasons, too. Because the SlideShare team was used to giving away a high-value product, engineers balked at charging for a clearly imperfect product. The analytics package, for instance, launched in what Sinha calls “a very crude version; we started off and sold it before we were comfortable with it.”

The saving grace was follow-up interviews. SlideShare asked customers what they had expected in the product; the responses were often literal descriptions. People consistently said they were dying for analytics and specifically that they wanted to track social media and understand the people visiting their content (SlideShare eventually discovered that showing visitors’ locations and timing satisfied people’s needs).

“Charging for something half-baked is really interesting,” says Sinha. “It makes the product team uncomfortable. At the same time, you make sure that you get honest feedback. If the product doesn’t meet customers’ expectations, they cancel. It’s a very honest relationship. On analytics, we got a lot of feedback that it was half-baked, that we sold it under false pretenses. But we would just respond honestly and fast and say, ‘Tell us what you want.’ Then we’d get back to them when we had built it.” Customers appreciated the follow-up, and many bought again after the feature had evolved. In this regard, SlideShare used the early adopter feedback not only to improve the product, but also to improve its understanding of what subsequent customers would want. [That is customer development. -Eric]

The marketing launch for SlideShare’s Pro accounts is today. But the product launch has been happening iteratively over the past months—which means the company is confident in its new offerings. “When we launched custom channels in February, a lot of people reached out and said, ‘We’d love to buy,’” recalls Sinha. “But it never happened.” [Alas, customers don't know what they want! -Eric] Since creating and refining its premium accounts, SlideShare has closed a number of deals, including high-profile accounts like Dell, Cisco and Pfizer.

Sinha notes that Eric Schmidt, in a recent interview, said that you find out whether people truly like a product in the second phase after launch. In the first phase, you get a lot of curious people. Only after the buzz has died down do you truly understand what’s going on. With careful and continuous learning processes, SlideShare is inverting that idea and going to market with a validated product. That is the essence of proving the business in micro-scale.

[We'll see if the marketing launch results follow the predictions of SlideShare's validated business model. We wish them the best of luck, and hope we can convince them to share their results - positive or negative - in the near future. In the meantime, good luck and thanks for letting us share your story. -Eric]


Case Study: kaChing, Anatomy of a Pivot

(The following guest post is a new experiment for this blog. It was written by Sarah Milstein in collaboration with kaChing CEO Andy Rachleff. kaChing has been very active in the Lean Startup movement. If you haven't seen it, Pascal's recent presentation on continuous deployment is a must-see; slides are here. In the interests of full disclosure: I am an advisor to kaChing but did not participate in the interviews that led to this case.

With case studies like this, we aim to illustrate specific Lean Startup techniques through the stories of current practitioners. It is written using the information that the company voluntarily shared, and therefore reflects their current thinking and recollections. I am particularly interested in feedback on this case study. Do you find it helpful? Please give us your feedback in the comments. Thanks, Eric)

You probably know that Flickr, the photo-sharing site, started out as an MMOG. And if you’re a regular reader of this blog, you may know that IMVU started out as an instant messaging add-on. It’s common, perhaps the norm, for startups to pivot like that—to discover that a product is catching on in unintended ways worth pursuing. Yet there’s a lot of mystery around pivots, and entrepreneurs ask all the time how you know it’s time to commit to a new direction.

To shed some light, I talked with kaChing, a destination that enables individual investors to find outstanding money managers. The company’s audacious goal is to disrupt the $11 trillion mutual fund industry. The startling part is that kaChing started out as a…Facebook game. That’s an epic pivot, like shifting from making solar calculators to powering the Space Shuttle. How’d it happen?

kaChing launched a virtual portfolio management game on Facebook in January 2008 and a similar version shortly thereafter on kaChing.com. The intent was to discover amateurs who could manage a portfolio as well as, if not better than, professionals (think American Idol) and then facilitate individual investors giving them their real money to manage. In other words, the game would serve as a kind of minor league for the profession. Because kaChing prefers its portfolio managers to have a long track record, the marketplace launch (i.e., the version that would facilitate the investment of real money) was planned for late 2009.

kaChing deployed the game across a slew of platforms, including MySpace, the iPhone, and the Yahoo App Platform. The result? They attracted more than 450,000 portfolios—a decent number for a company that hoped a good percentage would prove out as capable managers. They also hoped a reasonable percentage would realize they were lousy money managers and would then convert to clients.

In the early fall of 2009, as kaChing prepared for its marketplace launch, the management team showed the app—which included real-time market data, SEC-grade accounting, analytics, compliance and customer management tools—to a number of investment pros to get feedback and endorsements. One of those pros was John Powers, head of the Stanford endowment. He noted the platform would be good not only for amateurs who had proven themselves as outstanding portfolio managers in the game, but also for professional money managers—a group that had insufficient tools for managing and scaling their businesses.

The kaChing system was based on full transparency. A portfolio manager’s entire track record and holdings had to be disclosed. The company didn’t believe professionals would be willing to reveal that level of detail. But Powers’s reaction was intriguing enough to prompt Andy Rachleff, kaChing’s CEO, to call friends who were professional money managers and describe the idea. The response was surprisingly positive.

Andy Mathieson, a founder and managing member at Fairview Capital, was particularly supportive. He was unconcerned about transparency, noting the good have nothing to fear. Mathieson signed on to be a money manager in the marketplace launch, committing five years’ worth of prior transactional data. Mathieson’s firm has a minimum investment of $1 million outside of kaChing. On kaChing, consumers could invest in Fairview Capital’s strategy with as little as $3k.

When the marketplace launched on October 19, it included seven amateurs who had risen through the game’s ranks and four professionals, including Mathieson. Within a month, kaChing observed several interesting things. First, because the amateurs weren’t SEC-registered, the site had to refer to them with awkward terms like “geniuses.” That was confusing for consumers, who already had to figure out what on kaChing.com was a game and what was real. Second, out of 450,000 gamers, only seven had qualified to become kaChing managers. Third, the company expected hundreds of amateurs who performed poorly in the game to realize they weren’t good at investing and therefore become customers; in fact, only five people converted into paying customers. Finally, after launch, 30 professional money managers, having read articles on the company, contacted kaChing out of the blue. These managers weren’t concerned with transparency. They were interested in the tools and new distribution medium kaChing provided.

In November, kaChing held an all-hands meeting, circling up chairs in their small Palo Alto office, to discuss whether they should focus solely on professionals and abandon the systems for proving amateurs. “Some people weren’t comfortable because it wasn’t as fun, and one senior engineer thought we’d be losing the part of kaChing that was an enabler for anyone who wanted to make it as a pro,” Rachleff recalls. “But what we really wanted to change was not who manages the money, but who has access to the best possible talent. We’d originally thought we’d need to build a significant business with amateur managers to get professionals to come on board, but fortunately, it turned out that wasn’t necessary.”

The staff agreed they could better fulfill their goals by working just with professional managers. In December, they removed the game from kaChing.com. In February, they held another all-hands meeting to talk about shutting down the legacy Facebook game, which still had 60,000 active users. “Everybody felt the burden of supporting all those transactions every day,” says Pascal-Louis Perez, kaChing’s CTO. “It took a ton of our time, and just wasn’t contributing to our long-term vision.” That all-hands lasted five minutes.

Which is a nice story. But when kaChing actually shut down each game, hundreds of angry players spewed venom. “We had to ignore them, because they weren’t our target audience – and were never likely to become customers,” says Rachleff.

kaChing says they had the fortitude to make quick decisions and stay the course not just because they’d observed how people were using the marketplace, but also because they’d spoken with hundreds of potential and past customers. To acquire new money managers, the company makes traditional sales calls, which means they’ve interviewed many, many professionals and gotten a strong sense of their needs. At the same time, whenever a customer closes an account, kaChing contacts the person to find out why; most agree to a short phone interview. (The site has about 700 active paying customers.)

Perez says this level of contact, synthesized with their own observations, has given them confidence to make bold decisions. Of the money managers they’ve interviewed, he notes, “The feedback is consistent; we solve big enough problems for people that we believe they’ll come on board.”

With 21 employees today, kaChing is devoted to recruiting professional managers and finding product/market fit, first for money managers, then for consumers. Thus far the results are encouraging. More than 30 qualified professional money managers have been attracted to the platform and more than $190 million of customer assets have been committed as well.

The kaChing team is quick to note that because they’re still closing in on product/market fit, they’re less data-driven than they plan to be once they’re in optimizing mode. “We create hypotheses, and test them,” says Rachleff. “If something fails, we cut it off. If something seems to succeed, we pursue it aggressively. You have to have the courage of your convictions. With limited data, you have to make tough decisions.”

Special thanks to Pascal-Louis Perez for sharing information and making this post possible.


Case Study: Continuous deployment makes releases non-events

0 comments
The following is a case study of one entrepreneur's transition from a traditional development cycle to continuous deployment. Many people still find this idea challenging, even for companies that operate solely on the web. This case presents a further complication: desktop software. Without being able to transparently modify the software in situ, is it still possible to deploy on a continuous basis? Read on to find out.


Ash Maurya is the founder of WiredReach, a bootstrapped startup that he has been running for seven years. Recently, he was bitten by the lean startup bug and has started writing about his experiences attempting to apply lean startup and customer development principles. I've previously named his post Achieving Flow in a Lean Startup as one of my favorite blog posts of 2009. 

What follows is his own account of the challenges he faced as well as the solutions he adopted, lightly edited for style. If you're interested in contributing a case study for publication here, consider getting started by adding it to the Lean Startup Wiki case study section. -Eric

Of all the Lean Startup techniques, Continuous Deployment is by far the most controversial. Continuous Deployment is a process by which software is released several times throughout the day – in minutes versus days, weeks, or months. Continuous Flow Manufacturing is a Lean technique that boosts productivity by rearranging manufacturing processes so products are built end-to-end, one at a time (using single-piece flow), versus the more prevalent batch-and-queue approach.

Continuous Deployment is Continuous Flow applied to software. The goal of both is to eliminate waste. The biggest waste in manufacturing is created from having to transport products from one place to another. The biggest waste in software is created from waiting for software as it moves from one state to another: waiting to code, waiting to test, waiting to deploy. Reducing or eliminating these waits leads to faster iterations, which is the key to success.



My transition to Continuous Deployment

Prior to adopting continuous deployment, I used to release software on a weekly schedule (come rain or shine), which I viewed as pretty agile, disciplined, and aggressive. I identified the must-have code updates on Monday, official code cutoff was on Thursday, and Friday was slated for the big release event. The release process took at least half a day and sometimes the whole day. Dedicating up to 20% of the week to releasing software is incredibly wasteful for a small team, and that's not counting the ongoing coordination effort needed to reprioritize the ever-changing release content for the week as new critical issues are discovered. Despite these challenges, I fought the temptation to move to a longer bi-weekly or monthly release cycle because I wanted to stay highly responsive to customers (something our customers repeatedly appreciate). Managing weekly releases got a lot harder once I started doing customer development. Spending more time outside the building meant less time for coding, testing, and deploying. Things started to slip. That is when I devised a set of work hacks to manage my schedule (described here), and it is what drove me to adopt Continuous Deployment.

My transition from staged releases to continuous deployment took roughly 2 weeks. I read Eric Ries' 5-step primer on getting started with Continuous Deployment and found that I already had a lot of the necessary pieces. Continuous integration, deployment scripts, monitoring, and alerting are all best practices for any release process - staged or continuous.

The fundamental challenge with Continuous Deployment is getting comfortable with releasing all the time.
Continuous deployment makes releases non-events, and checking in code is synonymous with triggering a release. On the one hand, this is the ultimate in customer responsiveness. On the other hand, it is scary as hell. With staged releases, time provides a (somewhat illusory) safety net. There is also comfort in sharing test responsibility with someone else (the QA team). No one wants to be solely responsible for bringing a production system down. For me, neither was a consideration: I didn't have the time or a QA team.

I took things easy at first - made small changes and audited the release process maniacally. I started relying heavily on functional tests (over unit tests), which allowed me to test changes as a user would. I also identified a set of events that would indicate something going terribly wrong (e.g. no users on the system) and built real-time alerting around them (using Nagios/Ganglia). As we built confidence, we started committing bigger and multi-part changes, each time building up our suite of testing and monitoring scripts. After a few iterations, our fear level was actually lower than it had been after a staged release. Because we were committing less code per release, we could correlate issues to a release with certainty.

These days, we never wonder if unexpected errors could have been introduced as a result of a large code merge (since there is no branching). We also rely on more testing and monitoring automation, which is far more robust and consistent than what we were doing before.

All that said, mistakes are still made, and we commit bad code now and then. None has taken the system down (not yet, anyway). Rather than seeing these as shortcomings of the process, we view them as opportunities to build up our Cluster Immune System. We try to follow a Five Whys approach to keep these errors from recurring. There is always some action to take: writing more tests, more monitoring, more alerts, more code, or more process.

Looking back, I struggled to balance the opposing pulls of "outside the building" versus "inside the building" activities. Adopting Continuous Deployment has allowed me to build "flow" into my day, which lets me do both. But easier releases are not the only benefit of Continuous Deployment. Smaller releases lead to faster build/measure/learn loops. I've used these faster build/measure/learn loops to optimize my User Activation flow, delight customers with "near-instant" fixes to issues, and even eliminate features that no one was using.

While it is somewhat easier to continuously deploy web-based software, with a little discipline, desktop-based software can be built to flow too. Here's how I implement continuous deployment for my desktop-based application (CloudFire).

My Continuous Deployment process


Don't push features

If you've followed a customer discovery process, identified a problem worth solving, and built out your minimum viable product, DON'T keep adding features until you've validated the MVP, or more specifically the unique value proposition of the MVP. Unneeded features are waste and not only create more work but can needlessly complicate the product and prolong the "customer validation" phase.

Every new feature should ideally be pulled by more than one customer before showing up in a release.
Build in response to a signal from the "customer", and otherwise rest or improve.
As a technologist, I too love to measure progress based on how much stuff I build. But instead of channeling all my energy towards building new features, I channel roughly 80% of it towards measuring and optimizing existing features. I am not advocating adding no features at all. Users will naturally ask for more stuff and your MVP by definition is minimal and needs more love. Just don't push it.

Code in small batches

I've previously described my 2-hour blocks of maker time for maximizing my work "flow". Prior to starting any maker activity, I clearly identify what needs to get done (the goal) and sketch out how it needs to get done (the design).

It is important to point out that the goal of the maker activity need not be a user-facing feature or even a complete feature. There is inherent value in committing incremental work into production to defuse future integration surprises. During the maker activity, I code, unit test, and create or update functional tests, as needed. At the end of the maker activity, I check in code, which automatically triggers a build on a continuous integration server that is then run through a battery of unit and functional tests. The artifacts created at the end of the build are installers for Mac and Windows (for new users) along with an Eclipse P2 repository (OSGi) for automatic software updates (for current users). The release process takes ~15 minutes and runs in the background.
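To make the pipeline concrete, here is a rough sketch in Python of what a check-in-triggered release script could look like. The Ant target names are hypothetical stand-ins, not the actual build scripts; the point is simply that every step must pass before anything reaches users.

```python
# Illustrative sketch of a check-in-triggered release pipeline, assuming
# hypothetical Ant targets; not the actual WiredReach build scripts.
import subprocess
import sys

def run(step):
    """Run one build step; any failure aborts the release."""
    print(f"==> ant {step}")
    subprocess.run(["ant", step], check=True)

def main():
    try:
        run("clean-build")           # compile the application
        run("unit-tests")            # fast checks first
        run("functional-tests")      # Selenium-driven user scenarios
        run("package-installers")    # Mac and Windows installers for new users
        run("publish-p2-repo")       # update repository for current users
    except subprocess.CalledProcessError as e:
        sys.exit(f"Release aborted: step failed (exit code {e.returncode})")
    print("Release complete; nothing reached users until every step passed.")

if __name__ == "__main__":
    main()
```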

Prefer functional tests over unit tests whenever possible

I don't believe in blindly writing unit tests just to achieve 100% code coverage as reported by some tool. To do that, I would have to mock (simulate) too many critical components. I deem excessive unit testing a form of waste. Whenever possible, I rely on functional tests that verify user actions. I use Selenium, which lets me control the application on multiple browsers and OS platforms, just as a user would. One thing to be wary of is that functional tests run longer than unit tests and will gradually increase the release cycle time. Parallelizing tests across multiple test machines is a way to address this. I am not at that point yet, but Selenium Grid looks like a good option. So does Go Test It.
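As an illustration, here is a minimal functional test using Selenium's current Python WebDriver bindings (a later API than the Selenium of the time). The URL and element IDs are hypothetical placeholders, not CloudFire's actual markup.

```python
# A minimal functional test in the spirit described above: drive a real browser
# as a user would. The URL and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_log_in():
    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost:8080/login")  # assumed local instance
        driver.find_element(By.ID, "email").send_keys("test@example.com")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # Assert on what the user actually sees, not on internal state.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()

if __name__ == "__main__":
    test_user_can_log_in()
    print("Login path passed.")
```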

Always test the User Activation flow

After the integration tests are run and the software packaged, I always verify my User Activation flow before going live. The user activation flow is the most critical path toward achieving initial user gratification or product/market fit. My user activation flow is automatically tested on both a Mac and a Windows machine.

Utilize automagic software updates

A major challenge with desktop-based (versus web-based) software is propagating software updates. Studies have shown that users find traditional software update dialogs annoying. To overcome this, I am using a software update strategy that works silently without ever interrupting the user, much like an appliance. Google Chrome utilizes a similar update process. The biggest risk with this approach is that users will find it Orwellian. So far no one has complained, and many users like the auto-update feature. It helps that CloudFire, being a p2web app, runs headlessly with a browser-based UI.

This is how the software update process currently works:
  1. At the end of each build, we push an Eclipse P2 repository (OSGi), which is a set of versioned plug-ins that make up the application. Because the application is composed of many small plug-ins, and because we commit small code batches, each software update is small and downloads quickly.
  2. Every time the user starts up the application, it checks for a new update, then downloads and installs one if available. Depending on the type of update, it could take effect immediately or require an application restart. If a restart is required, we wait until the next user-initiated relaunch of the application or trigger one silently when the system is idle.
  3. If the application is already running, it periodically polls for new updates. If an update is found, it is also downloaded and installed in the background (as above) without interrupting the user.
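The real mechanism here is Eclipse P2, but the polling logic in steps 2 and 3 can be sketched in a few lines of Python. The endpoint, version scheme, and interval below are all assumptions made for illustration.

```python
# Sketch of the update-polling logic from steps 2-3 above. The real updater is
# Eclipse P2/OSGi; the endpoint, interval, and version handling here are
# hypothetical stand-ins.
import threading
import urllib.request

UPDATE_URL = "https://example.com/updates/latest-version.txt"  # assumed endpoint
CURRENT_VERSION = "1.4.2"          # assumed version scheme
POLL_INTERVAL_SECONDS = 60 * 60    # assumed: check hourly while running

def download_and_install(version):
    # Placeholder: fetch the new plug-ins and stage them so they take effect
    # immediately, on the next user-initiated relaunch, or silently when idle.
    print(f"Staging update {version} in the background...")

def check_for_update():
    with urllib.request.urlopen(UPDATE_URL) as resp:
        latest = resp.read().decode().strip()
    if latest != CURRENT_VERSION:
        download_and_install(latest)  # no dialogs, no interruption

def poll_forever():
    check_for_update()  # also runs once at application startup (step 2)
    threading.Timer(POLL_INTERVAL_SECONDS, poll_forever).start()
```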
Alerts and monitoring

I use Nagios and Ganglia to implement both system- and application-level monitoring and alerting on the overall health of the production cluster. Examples of things I monitor are the number of user activations, the number of active users, and aggregate page hits to user galleries. Any out-of-the-norm dip in these numbers immediately alerts us (via Twitter/SMS) to a potential issue.
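As a concrete example, a check like this could be written as a Nagios-style plugin; Nagios treats exit codes 0, 1, and 2 as OK, WARNING, and CRITICAL. The thresholds and the stubbed query below are hypothetical, not the actual production values.

```python
#!/usr/bin/env python3
# A minimal Nagios-style check for the "active users" metric described above.
# Nagios interprets exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
# The thresholds and the query stub are hypothetical.
import sys

WARN_THRESHOLD = 10   # assumed: unusually few active users
CRIT_THRESHOLD = 0    # assumed: nobody on the system at all

def count_active_users():
    # Stub: replace with a real query against the production database or API.
    return 42

def main():
    active = count_active_users()
    if active <= CRIT_THRESHOLD:
        print(f"CRITICAL - {active} active users")
        sys.exit(2)  # a CRITICAL state is what fans out to Twitter/SMS alerts
    if active < WARN_THRESHOLD:
        print(f"WARNING - {active} active users")
        sys.exit(1)
    print(f"OK - {active} active users")
    sys.exit(0)

if __name__ == "__main__":
    main()
```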

Application level diagnostics

Despite the best testing, defects still happen. More testing is not always the answer, as some defects are intermittent and a function of the end-user's environment. It is virtually impossible to test all combinations of hardware, OS, browsers, and third-party apps (e.g. Norton anti-virus, Zone Alarm, etc.).

Relying on users to report errors doesn't always work in practice. To compensate, we've had to build basic diagnostics right into the application itself. These can notify both the user and us of unexpected errors, and allow us to pull configuration information and logs remotely. We can also do remote rollbacks this way.
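One minimal way to picture the "notify us of unexpected errors" piece is a global exception hook that phones home. The reporting endpoint below is made up, and the real application is a desktop Java app, so treat this Python version strictly as a sketch of the idea.

```python
# Sketch of built-in crash diagnostics: a global exception hook that reports
# unexpected errors back to us. The reporting endpoint is hypothetical.
import sys
import traceback
import urllib.parse
import urllib.request

REPORT_URL = "https://example.com/diagnostics/report"  # assumed endpoint

def report_crash(exc_type, exc_value, exc_tb):
    details = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
    payload = urllib.parse.urlencode({"error": details}).encode()
    try:
        urllib.request.urlopen(REPORT_URL, data=payload, timeout=5)
    except Exception:
        pass  # the reporter itself must never crash the application
    # Still show the error via the default handler so the user is notified too.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = report_crash
```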

Tolerate unexpected errors exactly once

Unexpected errors provide the opportunity to learn and bullet-proof a system early. Ignoring them or implementing quick-and-dirty patches inevitably leads to repeat errors, which are another form of waste. I try to follow a formalized Five Whys process (using our internal wiki) for every error. This forces us to really stop, think, and fix the right problem(s).

My continuous deployment process is summarized below:


So why is Continuous Deployment so controversial?
Eric has addressed a lot of the objections already on his blog. One that I hear a lot is the belief that you need a massive team to pull off continuous deployment. I would argue that the earlier in the development cycle and the smaller the team, the easier it is to implement a continuous deployment process. If you are a start-up with an MVP, there is no better time to adopt a continuous deployment process than the present. You don't yet have hundreds of customers, dozens of peers, or dozens of features. It is a lot easier to lay the groundwork now with time on your side.

If you enjoyed Ash's writing in this case study, I suggest you subscribe to his blog. -Eric


Case Study: Using an LOI to get customer feedback on a minimum viable product

0 comments
How much work should you do on a new product before involving customers? If you subscribe to the theory of the minimum viable product, the answer is: only enough to get meaningful feedback from early adopters. Sometimes the best way to do this is to put up a public beta and drive a limited amount of traffic to it. But other times, the right way to learn is actually to show a product prototype to customers one-on-one. This is especially useful in situations, like most B2B businesses, where the total number of customers is likely to be small.

This case study illustrates one company’s attempt to do customer development by testing their vision with customers before writing a single line of code. In the process, they learned a lot by asking initial prospects to sign a non-binding letter of intent to buy the software. As you’ll see, this quickly separated the serious early adopters from everyone else. Mainstream customers don’t have enough motivation to buy an early product, and so building in response to their feedback is futile.

Along the way, this case study raises interesting ethical issues. The lean startup methodology is based on enlisting customers as allies, which requires honesty and integrity. If you deceive customers by showing them screenshots of a product that is “in-development” but for which you have written no code, are you lying to them? And, if so, will that deception come back to haunt you later? Read on and judge for yourself.

The following was written by an actual lean startup practitioner. It was originally posted anonymously to the Lean Startup Circle mailing list, and then further developed on the Lean Startup Wiki’s Case Studies section. If you’re interested in writing a future case study, or commenting/contributing to one, please join the mailing list or head on over to the wiki. What follows is a brief introduction by me, the case study itself, and then some Q&A led by LSC creator Rich Collins. Disclaimer: claims and opinions expressed by the authors of case studies are theirs alone; I can’t take credit or responsibility. – Eric Ries

In April of 2009 my partner and I had an idea for a web app, a B2C platform that we are selling as SaaS [software-as-a-service]. We decided from the get-go that, while we clearly saw the benefits and necessity of our concept, we would remain fiercely skeptical of our own ideas and implement the customer development process to vet the idea, market, customers, etc., before writing a single line of code.

My partner was especially adamant about this as he had spent the last 6 months in a cave writing a monster, feature-rich web app for the financial sector that a potential client had promised to buy, but backed out at the last second.  He then tried to shop the app around and found no takers.  Thousands of lines of code, all for naught -- as is usually the case without a customer development process. (See Throwing away working code for more on this unfortunate phenomenon. -Eric)

We made a few pencil drawings of what the app would look like, which we then gave to a graphic designer.  With that, the graphic designer created a Photoshop image. We had him create what we called our "screenshots" (a term that suggests an app actually existed at the time) and had him wrap them in one of these freely available PS Browser Templates. Now, armed with 4 "screenshots" and a story, we approached our target market, some of it through warm introductions, and some, very literally, through simple cold-calling.

Once we secured a meeting, we told our potential customers that we were actively developing our web app (implying that code was being written) and wanted to get potential user input into the development process early on.  Looking at paper print-outs of our "screenshots", no one could tell that this was simply a printout of a PSD, and not a live app sitting on a server somewhere. We walked them through what we thought would be the major application of our product.  Most people were quite receptive and encouraging.  What proved to be very interesting was that we quickly observed a bimodal distribution with regard to understanding the problem and our proposed solution:

  • people either became very excited and started telling us what we should do, what features it needed and how to run with this, or
  • they didn't think there was a real problem here, much less a needed solution.
We ruminated on this for a while. The vehemence of those that didn't get it surprised us.  Perhaps we had a super-duper-hyper-ultra-cool idea --- but not enough customers existed to make it worth the effort. We visited each potential customer at least twice, if not three times.  Each time we would come back with a few more "screenshots" and tell them that development was progressing nicely and ask them for more input. We also solicited information as to how they were currently solving the problem and how much they paid for their solution.

On the third visit, we pressed those who saw merit in the idea to sign a legally non-binding Letter of Intent.  Namely, that they agree to use it free of charge if we deliver it to them and it is capable of X, Y, and Z.  And not only do they agree to use it, but that they intend to purchase it by Y date at X price if it meets their needs.

By the way, this LOI was not written in legalese.  Three quarters of it was simple everyday English.  In fact, we customer dev-ed the LOI itself.  The first time, we asked a client to sign it before we had even written it.  When they agreed to sign it, we quickly whipped it up while sitting in a coffee shop and emailed it off to them.  This would help us separate the wheat from the chaff when it came to determining interest and commercial viability.  Once we had two LOIs signed and in-hand, we actually began to write code.

We also implicitly used the LOIs for price structure and price discovery - which we are still working on.  We backed into prices from all sorts of angles, estimating the time-cost of equivalent functionality, competitive offerings, other tools we were potentially displacing -- but in the end, we lobbed a few numbers at them and waited to see if they flinched.

Customer A got X price, Customer B got X + Y price, and so on.  So far, our customers have never mentioned price as an objection, which suggests to me that at this point we are very much underpriced. The LOI was also useful as leverage: we approached the competitor of one of the signees and simply let them know that their competitor would be using our app.  They returned our cold intro email within 8 mins.

We have two customers that have balked at signing LOIs, but want to use our product.  This has been somewhat of a quandary for us.  When we decided to go the LOI route, we thought that we would not bend and that we would only service those customers who would sign the LOI.  In the end, we decided that these two customers were large enough to help us with exposure, provide good usage data and worth the risk of them wasting our time.  Time will tell if this theory proves correct.

Right now, the app itself is pretty ugly, a bit buggy and slow -- and doesn't even do a lot.  It is borderline embarrassing.  Don't get me wrong, it does the few necessary things.  BUT it definitely does NOT have the super-duper-hyper-ultra-cool Web 2.0 spit and polish about it. Interestingly enough, our ratio of positive comments to negative comments from actual users is about 10 to 1.  One of our first customers had a disastrous launch with it, yet has signed on to try it again (granted, they did get it for free and we did offer it for free for this next time). But they didn't hesitate to try it again.  I thought we would have to plead, beg and beseech.  But for them, it was a no-brainer.  So, we have to be doing something right.

Our feature set is very limited and being developed almost strictly from user input.  While I personally have all sorts of super-duper-hyper-ultra-cool Web 2.0 ideas --- we are holding ourselves back, forcing ourselves to wait for multiple, explicit, and overlapping user requests.  We have seen competitors whose feature sets are very rich, to say the least, but which we think, in some cases, are as over-engineered as they are feature-rich.

Only time and the market will tell if they are innovative and we are slow, lazy pigs or they have gotten ahead of themselves/the market and our minimalist solution will be better received.

Rich Collins, founder of the Lean Startup Circle, responded to the poster with some Q&A.
LSC: What is your response to some of the people on Hacker News that questioned the ethics of taking this approach?

Some of the commenters have some good points.  It definitely explores ethical boundaries.  However, I don't think we indulged in any zero-sum game type deception.  By that, I mean our intentional fuzziness about the state of development did not cause harm in any manner to our prospective clients.  In fact, just us showing up at their offices and talking about our screenshots benefited our prospective clients tremendously as:

  1. Those clients who had never even entertained the functionality we were proposing gained significant knowledge.
  2. With that knowledge, they could (and did) Google our competition and start exploring the space and current offerings. 
We did, in fact, tell one of our prospects in the beginning that our screenshots were simply mock-ups.  However, that makes the prospect feel as if you are wasting their time and they then are unlikely to provide input.

"Oh, this is just a Photoshop file?  Well, come back to us when you are further along." which defeats the whole purpose of getting face time for Customer Development!

When you tell them the app is in development (and it was: even before coding, we were spending a lot of time on what we wanted and didn't want, how it would look, use cases, etc.), the prospects are interested in providing input and shaping the product.  They need to feel and see some momentum.

LSC: Your use of a non-binding letter of intent was another interesting tactic.  Did the customers that signed it end up paying for your product?

Yes and no.  We had a dispute with one signee and couldn't convert them.  However, we successfully converted others.  I should also mention that there was one client who refused to sign an LOI, but we are in the process of converting them.

The LOI was designed to give us hard, non-bullshit-able feedback instantly.  Too often people will affirm your idea so that you (or they) can save face, which BTW is a form of well-intentioned and socially acceptable deception.  This is why, IMHO, friends, wives, and significant others are probably not good people to talk to about your idea.  At the end of the day, no one knows if the idea is any good.  The market will tell you.

LSC: Would you respond to a few selected Hacker News comments?
"If I were one of your prospects, I would never sign a letter of intent based on drawings only. I'd make you come back later with something, anything I could play with ... Come back when you have something real to show. Until then you're no different from any other poser."

I myself probably would never sign an LOI on screenshots only.  However, our customers did a lot of stuff that I would never do.  Lesson learned:  I am not my customer.  We think differently.  We solve our problems differently.  We have different needs and wants.  Repeat after me:  You are not your customer.

LSC: And one more: "Except the LOIs in this case are utterly meaningless. I've been on the customer side of LOIs that were signed on request, knowing that it obligated us to nothing."

Wrong.  We got instantaneous feedback on the validity of the idea and started our sales process concurrently.  While legally non-binding, an LOI makes customers a lot less likely to disappear or make themselves hard to get hold of.  LOIs, while clearly not as good as a signed sales contract, do have meaning and are valuable.  I encourage B2B startups to keep them in their customer development arsenal.

Special thanks to Rich Collins, the Lean Startup Circle practitioners, and to everyone who has contributed to the Case Studies on the wiki. And thanks to these entrepreneurs for sharing their story. Have a case study you’d like to share? Head on over to the Lean Startup Wiki.
