The lean startup comes to Stanford

I'm going to be talking about lean startups (and the IMVU case in particular) three times in the next two weeks at Stanford. It's exciting to see the theory and methodology being discussed in an academic context. The entrepreneurship programs of the business, engineering, and undergraduate schools are all tackling the subject this semester, and I'm honored to be part of it. Even better, my friend Mike Maples, one of the pioneers of microcap investing in startups, is teaching a unit in Stanford's E145 on "The New Era of Lean Startups."

It's a real challenge to communicate honestly in these classes. I struggle to make the students actually experience how confusing and frustrating startup environments are. When we do the IMVU case, the class generally reaches complete consensus that several of the zany things we did were 100% right. Complete consensus? We didn't even think they were 100% right ourselves. And we still argue about whether our success came from those decisions or from some exogenous factor.

It's one of the hard things about learning just from hindsight, and it matters in the board room every bit as much as in the classroom. You can only learn from being wrong, but our brains are excellent rationalizers. When something works, it's too easy to invent a story about how that was your intention all along. If you don't make predictions ahead of time, there's no way to call you on it.

In fact, in the early days, when IMVU would experience unexpected surges of revenue or traffic, it was inevitable that every person in the company was convinced that their project was responsible. Those stories would be retold and repeated, and eventually achieved mythological status as "facts" that guided future action. But making decisions on the basis of myths is dangerous territory.

How did we combat this tendency? I don't pretend that we did it well. But many of the tools of lean startups are designed for just this purpose:
  • Regularly checking in with and talking to customers surfaces bogus theories pretty fast
  • Split-tests make it harder to take credit when some external factor is actually making you successful (see the sketch after this list)
  • Cross-functional teams tend to examine their assumptions harder and with more skepticism than purely single-function teams
  • Working in small batches makes it less likely that you'll misattribute big results to small changes (even though, counter-intuitively, small changes sometimes do lead to big results)
  • Rapid iteration makes it easy to test and re-test your assumptions, giving you many opportunities to drive out superstition
  • Open source code invites criticism and active questioning
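To make the split-test point concrete, here is a minimal sketch in Python of how assignment might work. Everything in it is hypothetical (the function name, the experiment label, the variant names) and it is nothing like IMVU's actual system; the idea is just that hashing a user id together with an experiment name gives every user a stable bucket without storing any per-user state, so nobody can quietly reassign credit after the fact.

    import hashlib

    def assign_variant(user_id, experiment, variants=("control", "treatment")):
        """Deterministically bucket a user into a split-test variant.

        Hashing the user id with the experiment name keeps assignment
        stable across sessions without storing any per-user state.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # The same user always lands in the same bucket for a given experiment.
    print(assign_variant("user-42", "much-loved-feature"))

Because assignment is deterministic, you can write down your prediction before the experiment runs and then check it, which is the whole point.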
Still, it's hard to make the case that these solutions are needed, because the problems seem so obvious. I hear some variation of this pretty often: "I mean, sure those guys were rationalizing and kidding themselves. But our team would never do that, right? We'll just be more vigilant." Good luck.

Let me end with a challenge: see if you can find and kill just one myth in your development team. My suggestion: take a much-loved feature and split-test it with some new customers to see if it really makes a difference. If you try, share your story here. I'm especially interested in what you used to share the idea with your colleagues. What language should we use? What arguments are persuasive? What works and what doesn't?
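If you take up the challenge, the readout can be almost as simple as the test itself. Here is one hedged sketch, a two-proportion z-test using only the Python standard library; the conversion counts below are invented for illustration, not real data from any company:

    from math import erf, sqrt

    def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
        """Two-sided z-test for a difference between two conversion rates."""
        p_a, p_b = conversions_a / n_a, conversions_b / n_b
        pooled = (conversions_a + conversions_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Normal CDF via erf; the p-value is the two-tailed probability.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Made-up numbers: 120 of 1000 new customers convert with the feature,
    # 110 of 1000 without it.
    z, p = two_proportion_z(120, 1000, 110, 1000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a large p means the difference may be noise

With these particular made-up numbers the test doesn't come close to significance, which is exactly the kind of result that kills a myth: the much-loved feature may not be doing what everyone's story says it does.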

