
Validated learning about customers

Would you rather have $30,000 or $1 million in revenues for your startup? Sounds like a no-brainer, but I’d like to try and convince you that it’s not. All things being equal, of course, you’d rather have more revenue than less. But all things are never equal. In an early-stage startup especially, revenue is not an important goal in and of itself.

This may sound crazy, coming as it does from an advocate of charging customers for your product from day one. I have counseled innumerable entrepreneurs to change their focus to revenue, and many companies who refuse this advice get themselves into trouble by running out of iterations. And yet revenue alone is not a sufficient goal. Focusing on it exclusively can lead to failure as surely as ignoring it altogether.

Let’s start with a simple question: why do early-stage startups want revenue? We all know why big companies want revenue – it’s one of two critical halves of the formula for profit. And big companies exist to maximize profit. Don’t startups exist for the same reason? I think such reasoning is an example of the “startup dollhouse fallacy” – that startups are just shrunken-down big companies. In fact, I don’t think revenue is in and of itself a goal for startups, and neither is profit. What matters is proving the viability of the company’s business model, what investors call “traction.” Demonstrating traction is the true purpose of revenue in an early growth company. (Of course this is not at all true of many profitable small businesses, but they are not what I mean by startups.) Before I explain what I mean, let me add an important caveat: traction is not just important for investors. It should be even more important to the founders themselves, because it demonstrates that their business hypothesis is grounded in reality. More on that in a moment.

Consider this company (as always, a fictionalized composite): they have a million dollars of revenue, and are showing growth quarter after quarter. And yet, their investors are frustrated. Every board meeting, the metrics of success change. Their product definition fluctuates wildly – one month, it’s a dessert topping, the next it’s a floor wax. Their product development team is hard at work on a next-generation product platform, which is designed to offer a new suite of products – but this effort is months behind schedule. In fact, this company hasn’t shipped any new products in months. And yet their numbers continue to grow, month after month. What’s going on?

In my consulting practice, I sometimes have the opportunity to work with companies like this. Diagnosis is easy: they are exceptionally gifted salesmen. This is an incredible skill, one that most engineers overlook. True salesmen are artists, able to home in on just those key words, phrases, features, and benefits that will convince another human being to give up their hard-earned money in exchange for even an early product. For a startup, having great sales DNA is a wonderful asset. But in this kind of situation, it can devour the company’s future.

The problem stems from selling each customer a custom one-time product. This is the magic of sales: by learning about each customer in depth, the salesmen can convince each one that this product would solve their serious problems, and cash many checks as a result. Now, in some situations, this over-selling would lead to a secondary problem, namely, that customers would realize they had been duped and refuse to re-subscribe. But here’s where a truly great sales artist comes in. Customers don’t usually mind a bait-and-switch if the switched-to product really does solve an important problem for them. These salesmen used their insight into what their customers really needed to make the sale, and then delivered something of even greater value. They are closing orders. They are gaining valuable customer data. They are close to breakeven. What’s the problem?

This approach is fundamentally non-scalable. These founders have not managed, to borrow a phrase from Steve Blank, to create a scalable and repeatable sales process. Every sale requires handholding and personal attention from the founders themselves. This process cannot be delegated, because it’s impossible to explain to a normal person what’s involved in making the sale. The founders have a lethal combination of insight about what potential customers want and in-depth knowledge about what their current product can really deliver. As a result, potential customers are being turned away; the founders can only afford to engage with the prospects that are best qualified.

And what of the product development team? They are busy too, but they are not creating value for the company. They are trying to build a product to an ever-changing spec, based on intuitions from the founders about what might be able to sell itself. Worse, the founders are never around – they are too busy going out and selling! Without access to customer data, or even a clear product owner, the product development team keeps building feature after feature based on what they think might be useful. But since nobody in the company can clearly articulate what the product is, their efforts result in incoherence. Worst of all, their next-generation product is so bad they are not allowed to try it out on any customers. The team is thus completely starved of any form of external feedback.

Let me describe a different company, one with only $30,000 in revenue (again, pure fiction). This company has a large long-term vision, but their current product is only a fraction of what they hope to build. Compared to the million-dollar startup, they are operating at micro-scale. How does that stack up?

First of all, they are not selling their product by hand. Instead, each potential customer has to go through a self-serve process of signing up and paying money. Because they have no presence in the market, they have to find distribution channels to bring in customers. They can only afford those (like Google AdWords) that support buying in small volume.

Compensating for these limitations is the fact that they know each of their customers extremely well, and they are constantly experimenting with new product features and product marketing to increase the yield on each new crop of customers they bring in. Over time, they have found a formula for acquiring, qualifying, and selling customers in the market segments they have targeted. Most importantly, they have lots of data about the unit economics of their business. They know how much it costs to bring in a customer and they know how much money they can expect to make on each one.

In other words, they have learned to grow renewable audiences. Given the data they’ve collected about these early customers, they are also able to estimate with modest precision how big the market is for their product in its current form. They may be at micro-scale now, but they are in a very good position to raise venture money and engage in extremely rapid growth.

Our million-dollar startup, by contrast, is stuck in the mud.

Stories like these are what has led me to this definition of progress for a startup: validated learning about customers. (Steve calls this just Customer Validation, but I like to emphasize the learning aspect, so I accept a far more awkward phrase.)

This unit of progress is remarkable in several ways. First of all, it means that most aggregate measures of success, like total revenue, are not very useful. They don’t tell us the key things we need to know about the business: how profitable is it on a per-customer basis? What’s the total available market? What’s the ROI on acquiring new customers? And how do existing customers respond to our product over time?

Secondly, this definition locates progress firmly in the heads of the people inside the company, and not in any artifacts the company produces. That’s why dollars, milestones, products, and code cannot, by themselves, count as progress. Given a choice between what a successful team has learned and the source code they have produced, I would take the learning every time. This is why companies often get out-competed by former employees (Palm vs Handspring, to name just one example), even though the upstart lacks all of the familiar resources, tools, processes, and support they used to have. (Incidentally, it’s also why these upstarts often get sued for bogus reasons. The old company can’t believe they didn’t steal any of its “precious” assets.)

But learning is a tricky thing to quantify, which is why the word “validated” is so important in this definition. Validation comes in the form of data that demonstrates that the key risks in the business have been addressed by the current product. That doesn’t always mean revenue, either. Some products have relatively obvious monetization mechanisms, and the real risks are in customer adoption. Products can find sources of validation in impressive stats along a number of dimensions, such as high engagement, viral coefficient, or long-term retention. What’s important is that the data tell a compelling story, one that demonstrates that the business is on a solid economic footing. (Because it is so easy to convince yourself that you’re in one of these “special case” businesses, I do recommend you give revenue a long, hard look first.)
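To make one of those dimensions concrete, here is the standard arithmetic behind the viral coefficient as a short Python sketch; the numbers are invented purely for illustration:

    # Viral coefficient: the expected number of new users that each
    # existing user brings in. If k > 1, every cohort recruits a larger
    # one and the user base grows on its own.
    def viral_coefficient(invites_per_user, conversion_rate):
        return invites_per_user * conversion_rate

    k = viral_coefficient(invites_per_user=3.0, conversion_rate=0.25)  # 0.75

    # Project a few viral generations from a hypothetical seed of 1,000 users.
    total, cohort = 1000.0, 1000.0
    for generation in range(1, 5):
        cohort *= k
        total += cohort
        print(f"generation {generation}: {total:.0f} total users")
    # With k < 1 the cohorts shrink; the total approaches seed / (1 - k) = 4,000.

Tracked over time, a rising k can validate an adoption hypothesis the same way rising revenue validates a monetization hypothesis.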

For example, I’ve talked a few times about how IMVU raised its first venture round with monthly revenues of around $10,000. This wasn’t very impressive, but we had two things going for us:
  1. A hockey stick shaped growth curve. People often forget the most important part of the hockey stick: the long flat part. We had months of data that showed customers more-or-less uninterested in our product. We were limping along at a few hundred dollars a month in revenue. All this time, we were continuously changing our product, talking to customers, and trying to improve our AdWords spend. Eventually, these efforts bore fruit – and this was evident in the data. This lent credibility to our claims about learning and discovery.

  2. Compelling per-customer economics. We had only a small number of customers – if memory serves, only a few thousand active users. But a little math will show that we were making over a dollar per-user per-month. Our cost to acquire a customer on AdWords was only a few cents. Our eventual VCs were quick to grasp what this meant (in fact, they understood it better than we did): that if our product achieved significant scale, it would be wildly profitable.
These two aspects could be plotted on one simple graph, which tells this equally simple story: if there is a market out there for this kind of product, we are the team that will find it and profit from it. That turned out to be a compelling investment thesis, despite our micro-scale results.
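Here is that little math spelled out as a sketch, using round numbers like the ones above; the retention figure is purely an assumption for illustration, not a reconstruction of IMVU’s actual books:

    # Back-of-envelope per-customer economics, with illustrative round numbers.
    revenue_per_user_per_month = 1.00  # "over a dollar per-user per-month"
    cost_to_acquire_user = 0.05        # "only a few cents" on AdWords
    months_retained = 6                # assumed retention; purely hypothetical

    lifetime_value = revenue_per_user_per_month * months_retained
    roi = (lifetime_value - cost_to_acquire_user) / cost_to_acquire_user

    print(f"LTV ${lifetime_value:.2f} vs CAC ${cost_to_acquire_user:.2f}")
    print(f"Return on each acquired customer: {roi:.0f}x")  # ~119x here

At micro-scale the absolute numbers are tiny; the point is that the ratio stays compelling as the user count grows.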

Let’s return to my example of the million-dollar-revenue company. If you find yourself in this kind of situation, what can you do? I’d suggest a few things, each rooted in the idea of breaking down the wall between the two halves of this company.
  1. Go on an agile diet quickly. With a product development team that is not shipping, any agile methodology will surface major problems quickly. Force anyone who is in customer contact to take the role of the Product Owner, and insist that they deliver something new at a short, regular interval.

  2. Get product into customers’ hands. The sales strategy currently leaves many customers completely un-served (those that don’t qualify for the founders’ personal time). Start using some of those customers as guinea pigs for a self-serve version of the product. Even if the product is absolutely terrible, it will establish a baseline against which the product development team can try and improve.

  3. Build tools for the sales team that reduce the time investment required for each sale. Instead of devoting all product development efforts to building a full-blown product, try building just those parts of the product that would allow the current sales process to go a little faster. For example, could we develop a simple landing page that would allow customers to pre-qualify for sales time? Iterating on these kinds of features has two benefits: it frees up time for the founders and simultaneously starts building a feedback loop with the product development team. Pretty soon, the text on that landing page is going to become an effective explanation of what the product does, because if it’s not, the salesman will have to spend time re-explaining the product to potential customers. Time-to-complete-a-sale is not a bad metric for validated learning at this stage (there’s a small sketch of tracking it below).
This last point is especially important. Although this kind of team may understand their customers well, they don’t yet know how to talk to them in a standardized way. Without that, they probably won’t achieve significant scale. (For more on how this plays into the process of scaling up, see the Customer Creation stage of the customer development model.) Perhaps they’ll be able to hire someone with the marketing skills needed to find this positioning. But in the meantime, by iterating on their product with customers, they have a chance to get there on their own.
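To make the time-to-complete-a-sale metric concrete, here is a minimal Python sketch; the in-memory log and field names are hypothetical stand-ins for whatever CRM or database you actually use:

    from datetime import datetime
    from statistics import median

    # Hypothetical in-memory log; a real version would persist this.
    sales_log = {}  # lead_id -> {"started": datetime, "closed": datetime}

    def start_sale(lead_id):
        sales_log[lead_id] = {"started": datetime.now(), "closed": None}

    def close_sale(lead_id):
        sales_log[lead_id]["closed"] = datetime.now()

    def median_days_to_close():
        durations = [
            (s["closed"] - s["started"]).total_seconds() / 86400.0
            for s in sales_log.values()
            if s["closed"] is not None
        ]
        return median(durations) if durations else None

If the landing page and the product really are absorbing more of the founders’ explanations, this median should fall from one iteration to the next.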


Continuous deployment and continuous learning

At long last, some of the actual implementers of the advanced systems we built at IMVU for rapid deployment and rapid response are starting to write about it. I find these on-the-ground descriptions of the system and how it works so much more credible than theory-only posts that I am excited to share them with you. I can personally attest that these guys know what they are talking about; I saw them do it first-hand. I will always be full of awe and gratitude for what they accomplished.
Continuous Deployment at IMVU: Doing the impossible fifty times a day by Timothy Fitz
Continuous Deployment isn’t just an abstract theory. At IMVU it’s a core part of our culture to ship. It’s also not a new technique here; we’ve been practicing continuous deployment for years, far longer than I’ve been a member of this startup.

It’s important to note that the system I’m about to explain evolved organically in response to new demands on the system and in response to post-mortems of failures. Nobody gets here overnight, but every step along the way has made us better developers.

The high level of our process is dead simple: Continuously integrate (commit early and often). On commit, automatically run all tests. If the tests pass, deploy to the cluster. If the deploy succeeds, repeat.

Our test suite takes nine minutes to run (distributed across 30-40 machines). Our code pushes take another six minutes. Since these two steps are pipelined, that means at peak we’re pushing a new revision of the code to the website every nine minutes. That’s six deploys an hour. Even at that pace we’re often batching multiple commits into a single test/push cycle. On average we deploy new code fifty times a day.
We call this process continuous deployment because it seemed to us like a natural extension of the continuous integration we were already doing. Our eventual conclusion was that there was no reason to have code that had passed the integration step but was not yet deployed. Every batch of software for which that is true is an opportunity for defects to creep in: maybe someone is changing the production environment in ways that are incompatible with code-in-progress; maybe someone in customer support is writing up a bug report about something that's just being fixed (or worse, the symptom is now changing); and no matter what else is happening, any problems that arise due to the code-in-progress require that the person who wrote it still remember how it works. The longer you wait to find out about the problem, the more likely it is to have fallen out of the human-memory cache.
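In outline, the loop is simple enough to sketch in a few lines of Python. The shell scripts named here are placeholders for whatever your own build and push machinery looks like; IMVU's actual cluster-push system was far more elaborate:

    import subprocess, time

    def run(cmd):
        """Run a command; True if it exited cleanly."""
        return subprocess.run(cmd).returncode == 0

    # Continuous deployment in outline: integrate, test, deploy, repeat.
    while True:
        run(["git", "pull"])                     # pick up newly committed work
        if not run(["./run_all_tests.sh"]):      # full suite, ideally distributed
            print("tests failed: stop the line; fix or revert before deploying")
        elif not run(["./push_to_cluster.sh"]):  # incremental push + health checks
            print("deploy failed: roll back and investigate")
        time.sleep(60)  # commits that land mid-cycle get batched into the next run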

Now, continuous deployment is not the only possible way to solve these kinds of problems. In another post I really enjoyed, Timothy explains five non-solutions that seem like they will help, but really won't.
1. More manual testing.
This obviously doesn’t scale with complexity. This also literally can’t catch every problem, because your test sandboxes or test clusters will never be exactly like the production system.
2. More up-front planning
Up-front planning is like spices in a cooking recipe. I can’t tell you how much is too little and I can’t tell you how much is too much. But I will tell you not to have too little or too much, because either will definitely ruin the food, or the product. The natural tendency of over-planning is to concentrate on non-real issues. Now you’ll be making more stupid mistakes, but they’ll be for requirements that won’t ever matter.
3. More automated testing.
Automated testing is great. More automated testing is even better. No amount of automated testing ensures that a feature given to real humans will survive, because no automated tests are as brutal, random, malicious, ignorant or aggressive as the sum of all your users will be.
4. Code reviews and pairing
Great practices. They’ll increase code quality, prevent defects and educate your developers. While they can go a long way to mitigating defects, ultimately they’re limited by the fact that while two humans are better than one, they’re still both human. These techniques only catch the failures your organization as a whole already was capable of discovering.
5. Ship more infrequently
While this may decrease downtime (things break and you roll back), the cost on development time from work and rework will be large, and mistakes will continue to slip through. The natural tendency will be to ship even more infrequently, until you aren’t shipping at all. Then you’ve gone and forced yourself into a total rewrite. Which will also be doomed.
What all of these non-solutions have in common is that they treat only one aspect of the problem, but at the expense of another aspect. This is a common form of sub-optimization, where you gain efficiency in one of the sub-parts at the expense of the efficiency of the overall process. You can't make these global efficiency improvements until you get clear about the goal of your development process.

That leads to a seemingly-obvious question: what is progress in software development? It seems like it should be the amount of correctly-working code we've written. Heck, that's what it says right there in the agile manifesto. But, unfortunately, startups can't afford to adopt that standard. As I've argued elsewhere, my belief is that startups (and anyone else trying to find an unknown solution to an unknown problem) have to measure progress with validated learning about customers. In a lot of cases, that's just a fancy name for revenue or profit, but not always. Either way, we have to recognize that the biggest form of waste is building something that nobody wants, and continuous deployment is an optimization that tries to shorten this code-data-learning feedback loop.

Assuming you're with me so far, what will that mean in practice? Throwing out a lot of code. That's because as you get better at continuous deployment, you learn more and more about what works and what doesn't. If you're serious about learning, you'll continuously learn to prune the dead weight that doesn't work. That's not entirely without risk, which is a lesson we learned all-too-well at IMVU. Luckily, Chad Austin has recently weighed in with an excellent piece called 10 Pitfalls of Dirty Code.

IMVU was started with a particular philosophy: We don't know what customers will like, so let's rapidly build a lot of different stuff and throw away what doesn't work. This was an effective approach to discovering a business by using a sequence of product prototypes to get early customer feedback. The first version of the 3D IMVU client took about six months to build, and as the founders iterated towards a compelling user experience, the user base grew monthly thereafter.

This development philosophy created a culture around rapid prototyping of features, followed by testing them against large numbers of actual customers. If a feature worked, we'd keep it. If it didn't, we'd trash it.

It would be hard to argue against this product development strategy, in general. However, hindsight indicates we forgot to do something important when developing IMVU: When the product changed, we did not update the code to reflect the new product, leaving us with piles of dirty code.

So that you can learn from our mistakes, Chad has helpfully listed ten reasons why you want to manage this dirty-code (sometimes called "technical debt") problem proactively. If we could do it over again, I would have started a full continuous integration, deployment, and refactoring process from day one, complete with five whys for root cause analysis. But, to me anyway, one of the most inspiring parts of the IMVU story is that we didn't start with all these processes. We hadn't even heard of half of them. Slowly, painfully, incrementally, we were able to build them up over time (and without ever taking a full-stop-let's-start-over timeout). If you read these pieces by the guys who were there, you'll get a visceral sense for just how painful it was.

But it worked. We made it. So can you.


Achieving a failure

We spend a lot of time planning. We even make contingency plans for what to do if the main plan goes wrong. But what if the plan goes right, and we still fail? This is my most dreaded kind of failure, because it tricks you into thinking that you're in control and that you're succeeding. In other words, it inhibits learning. My worst failures have all been of this kind, and learning to avoid them has been a constant struggle.

See if this plan sounds like a good one to you:
  • Start a company with a compelling long-term vision. Don't get distracted by trying to flip it. Instead, try and build a company that will matter on the scale of the next century. Aim to become the "next AOL or Microsoft," not a niche player.
  • Raise sufficient capital to have an extended runway from experienced smart money investors with deep pockets who are prepared to make follow-on investments.
  • Hire the absolute best and the brightest, true experts in their fields, who in turn can hire the smartest people possible to staff their departments. Insist on incredibly high-IQ employees and hold them to incredibly high standards.
  • Bring in an expert CEO with outstanding business credentials and startup experience to focus on relentless execution.
  • Build a truly mainstream product. Focus on quality. Ship it when it's done, not a moment before. Insist on high levels of usability, UI design, and polish. Conduct constant focus groups and usability tests.
  • Build a world-class technology platform, with patent-pending algorithms and the ability to scale to millions of simultaneous users.
  • Launch with a PR blitz, including mentions in major mainstream publications. Build the product in stealth mode to build buzz for the eventual launch.
I had the privilege, and the misfortune, to be involved with a startup that executed this plan flawlessly. It took years, tens of millions of dollars, and the efforts of hundreds of talented people to pull it off. And here's the amazing thing about this plan: it actually worked. I think we accomplished every one of those bullet points. Check. Mission accomplished.

Only this company was a colossal failure. It never generated positive returns for its investors, and most of its employees walked away dejected. What went wrong?

This company was shackled by shadow beliefs that turned all those good intentions, and all that successful execution, into a huge pile of wasted effort. Here are a few:
  • We know what customers want. By hiring experts, conducting lots of focus groups, and executing to a detailed plan, the company became deluded that it knew what customers wanted. I remember vividly a scene at a board meeting, where the company was celebrating a major milestone. The whole company and board play-tested the product to see its new features firsthand. Everyone had fun; the product worked. But that was two full years before any customers were allowed to use it. Nobody even asked the question: why not ship this now? The suggestion that the "next AOL" might ship a product that wasn't ready for prime time was considered naive. Stealth is a customer-free zone. All of the efforts to create buzz, keep competitors in the dark, and launch with a bang had the direct effect of starving the company of much-needed feedback.

  • We can accurately predict the future. Even though some aspects of the product were eventually vindicated as good ones, the underlying architecture suffered from hard-to-change assumptions. After years of engineering effort, changing these assumptions was incredibly hard. Without conscious process design, every line of code a product development team writes becomes momentum in a particular direction. Even a great architecture becomes inflexible. This is why agility is such a prized quality in product development.

  • We can skip the chasm. As far as I know, there are no products that are immune from the technology adoption life cycle. By insisting on building a product for mainstream customers, the company guaranteed that they would be unhappy with the number and type of customers they got for the first version. Worse was the large staff in departments appropriate to a mainstream-scale product, especially in customer service and QA. The passionate early adopters who flocked to the product at its launch could not sustain this outsized burn rate.

  • We can capitalize on new customers. As with many Silicon Valley failures, a flawless PR launch turned into a flawed customer acquisition strategy. The first version product wasn't easy enough to use, install, and pay for. It also had hardware requirements that excluded lots of normal people. Millions of people flocked to the website, but the company could only successfully monetize early adopters. As a result, the incredible launch was mostly wasted.

  • We know what quality means. All of the effort invested in quality, polish, stability and usability turned out to be for nothing. Although the product was superior to its competitors in many ways, it was missing key features that were critical for the kinds of customers who never got to participate in the company's focus groups (or to be represented on its massive QA staff). Worse, many of the wrong assumptions built into the technical architecture meant that, in the real world outside the testing lab, the product's stability was nothing to write home about. So despite the millions invested in quality, the end result for most customers was no better than the sloppy beta-type offerings of competitors.

  • Advancing the plan is progress. This is the most devastating thing about achieving a failure: while in the midst of it, you think you're making progress. This company had disciplined schedules, milestones, employee evaluations, and a culture of execution. When schedules were missed, people were held accountable. Unfortunately, there was no corresponding discipline of evaluating the quality of the plan itself. As the company built infrastructure and added features, the team celebrated these accomplishments. Departments were built and were even metrics-driven. But there was no feedback loop to help the company find the right metrics to focus on.
These shadow beliefs have a common theme: a lack of reality checks. In my experience, great startups require humility, not in the personal sense, but in the organizational capacity to emphasize learning. A good learning feedback loop trumps even the best execution of a linear plan. And what happened with this ill-fated company? Although it failed, many of the smart people involved have accomplished great things. I know of at least five former employees that went on to become startup founders. They all got a tremendous first-hand lesson in achieving a failure, all on someone else's dime (well, millions of dimes). As we move into a new economic climate, it's my hope that our industry will stop this expensive kind of learning and start building lean startups instead.

The interesting thing about an analysis like this is that it seems obvious in retrospect. A lot of people say that they know that they don't know what customers want. And yet, if you go back and look at the key elements of the plan, many of them are in subtle conflict with the shadow beliefs. Understanding that tension requires a lot of reflection and a good teacher. I myself didn't understand it until I had the opportunity to view that failure through the lens of customer development theory. I had a big advantage, because Steve Blank, the father of customer development, got to see the failure up close and personal too, as an investor and board member. A much less painful way to learn the lesson is to read his book: The Four Steps to the Epiphany.


Refactoring yourself out of business

Let me start out by saying I am a big fan of refactoring, the ongoing process of changing code so that it performs the same behaviors but has more elegant structure. It's an essential discipline of good software development, especially in startups. Nonetheless, I want to talk about a dysfunction I've seen in several startups: they are literally refactoring themselves to death.

Here's a company I met with recently. They have a product that has a moderate number of customers. It's well done, looks professional, and the customers that use it like it a lot. Still, they're not really hitting it out of the park, because their product isn't growing new users as fast as they'd like, and they aren't making any money from the current users. They asked for my advice, and we went through a number of recommendations that readers of this blog will already be able to guess: adding revenue opportunities, engagement loop optimization, and some immediate split-testing to figure out what's working and what's not. Most of all, I encouraged them to start talking to their most passionate customers and running some big experiments based on that feedback.

I thought we were having a successful conversation. Towards the end, I asked when they'd be able to make these changes, so that we could meet again and have data to look at together. I was told they weren't sure, because all of their engineers were currently busy refactoring. You see, the code is a giant mess, has bugs, isn't expandable, and is generally hard to modify without introducing collateral damage. In other words, it is dreaded legacy code. The engineering team has decided it's reached a breaking point, and is taking several weeks to bring it up to modern standards, including unit tests, getting started with continuous integration, and a new MVC architecture. Doesn't that sound good?

I asked, "how much money does the company have left?" And it was this answer that really floored me. They only have enough money to last another three months.

I have no doubt that the changes the team is currently working on are good, technically sound, and will deliver the benefits they've claimed. Still, I think it is a very bad idea to take a large chunk of time (weeks or months) to focus exclusively on refactoring. The fact that this time is probably a third of the remaining life of the company (these projects inevitably slip) only makes matters worse.

The problem with this approach is that it effectively suspends the company's learning feedback loop for the entire duration of the refactoring. Even if the refactoring is successful, it means time invested in features that may prove irrelevant once the company starts learning again. Add to that the risk that the refactoring never completes (because it becomes a dreaded rewrite).

Nobody likes working with legacy code, but even the best engineers constantly add to the world's store of legacy code. Why don't they just learn to do it right the first time? Because, unless you are working in an extremely static environment, your product development team is learning and getting better all the time. This is especially true in startups; even modest improvements in our understanding of the customer lead to massive improvements in developer productivity, because we have a lot less waste of overproduction. On top of that, we have the normal productivity gains we get from: trying new approaches on our chosen platform to learn what works and what doesn't; investments in tools and learning how to use them; and the ever-increasing library of code we are able to reuse. That means that, looking back at code we wrote a year or two ago, even if we wrote it using all of our best practices from that time, we are likely to cringe. Everybody writes legacy code.

We're always going to have to live with legacy code. And yet it's always dangerous to engage in large refactoring projects. In my experience, the way to resolve this tension is to follow these Rules for Refactoring:
  • Insist on incremental improvements. When sitting in the midst of a huge pile of legacy code, it's easy to despair of your ability to make it better. I think this is why we naturally assume we need giant clean-up projects, even though at some level we admit they rarely work. My most important lesson in refactoring is that small changes, if applied continuously and with discipline, actually add up to huge improvements. It's a version of the law of compounding interest. Compounding is not a process that most people find intuitive, and that's as true in engineering as it is in finance, so it requires a lot of encouragement in the early days to stay the course. Stick to some kind of absolute-cost rule, like "no one is allowed to spend more time on the refactoring for a given feature than the feature itself, but also no one is allowed to spend zero time refactoring." That means you'll often have to do a refactoring that's less thorough than you'd like. If you follow the suggestions below, you'll be back to that bit of code soon enough (if it's important).

  • Only pay for benefits to customers. Once you start making lots of little refactorings, it can be tempting to do as many as possible, trying to accelerate the compounding with as much refactoring as you can. Resist the temptation. There's an infinite amount of improvement you can make to any piece of code, no matter how well written. And every day, your company is adding new code that could also be refactored as soon as it's created. In order to make progress that is meaningful to your business, you need to focus on the most critical pieces of code. To figure out which parts those are, you should only ever refactor a piece of code that you are trying to change anyway.

    For example, let's say you have a subsystem that is buggy and hard to change. So you want to refactor it. Ask yourself how customers will benefit from having that refactoring done. If the answer is that they are complaining about bugs, then schedule time to fix the specific bugs that they are suffering from. Only allow yourself to do those refactorings that are in the areas of code that cause the bug you're fixing. If the problem is that code is hard to change, wait until the next new feature that trips over that clunky code. Refactor then, but resist the urge to do more. At first, these refactorings will have the effect of making everything you do a little slower. But don't just pad all your estimates with "extra refactoring time" and make them all longer. Pretty soon, all these little refactorings actually cause you to work faster, because you are cleaning up the most-touched areas of your code base. It's the 80/20 rule at work.

  • Only make durable changes (under test coverage). There's no point refactoring code if it's just going to go back to the way it was before, or if it's going to break something else while you're doing it. The only way to make sure refactorings are actually making progress (as opposed to just making work) is to ensure they are durable. What I mean is that they are somehow protected from inadvertent damage in the future (there's a small example of this after the list).

    The most common form of protection is good unit-test coverage with continuous integration, because that makes it almost impossible for someone to undo your good work without knowing about it right away. But there are other ways that are equally important. For example, if you're cleaning up an issue that only shows up in your production deployment, make sure you have sufficient alerting/monitoring so that it would trigger an immediate alarm if your fix became undone. Similarly, if you have members of your team who are not on the same page as you about what the right way to structure a certain module is, it's pointless to just unilaterally "fix" it if they are going to "fix" it right back. Perhaps you need to hash out your differences and get some team-wide guidelines in place first?

  • Share what you learn. As you refactor, you get smarter. If that's not a team-wide phenomenon, then it's still a form of waste, because everyone has to learn every lesson before it starts paying dividends. Instead of waiting for that to happen, make sure there is a mechanism for sharing refactoring lessons with the rest of the team. Often, a simple mailing list or wiki is good enough, but judge based on the results. If you see the same mistakes being made over again, intervene.

  • Practice five whys. As always with process advice, I think it's essential that you do root cause analysis whenever it's not working. I won't recap the five whys process here (you can read a previous post to find out more); the key idea is to refine all rules based on the actual problems you experience. Symptoms that deserve analysis include: refactorings that never complete, making incremental improvements but still feeling stuck in the mud, making the same mistakes over and over again, schedules slipping by increasing amounts, and, of course, month-long refactoring projects when you only have three months of cash burn left.
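As a concrete instance of the durable-changes rule: before restructuring a clunky function, pin its current behavior with a quick characterization test, then refactor under that protection. A minimal sketch using Python's built-in unittest (the pricing function is invented for illustration):

    import unittest

    def price_with_discount(amount, code):
        # Imagine this is the tangled legacy version you want to restructure.
        if code == "VIP":
            return amount - amount * 10 / 100
        return amount

    class TestPricingBehavior(unittest.TestCase):
        # Characterization tests pin what the code does today, so a
        # refactoring can't silently change behavior without a red build.
        def test_vip_gets_ten_percent_off(self):
            self.assertEqual(price_with_discount(200, "VIP"), 180)

        def test_unknown_code_pays_full_price(self):
            self.assertEqual(price_with_discount(200, "XYZ"), 200)

    if __name__ == "__main__":
        unittest.main()  # run under continuous integration so regressions alarm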
Back to my conversation with the company I met recently. Given their precarious situation, I really struggled with what advice to give. On the one hand, I think they have an urgent problem, and need to invest 100% of their energy into finding a business model (or another form of traction). On the other, they already have a team fully engaged on making their product architecture better. In my experience, it's very hard to be an effective advocate for "not refactoring" because you can come across as anti-quality or even anti-goodness. In any event, it's enormously disruptive to suddenly rearrange what people are working on, no matter how well-intentioned you are.

I did my best to help, offering some suggestions of ways they could incorporate a few of these ideas into their refactoring-in-progress. At a minimum, they could ensure their changes are durable, and they can always become a little more incremental. Most importantly, I encouraged the leaders of the company to bring awareness of the business challenges to the product development team. Necessity is the mother of invention, after all, and that gives me confidence that they will find answers in time.



Stevey's Blog Rants: Good Agile, Bad Agile

I thought I'd share an interesting post from someone with a decidedly anti-agile point of view.
Stevey's Blog Rants: Good Agile, Bad Agile: "Google is an exceptionally disciplined company, from a software-engineering perspective. They take things like unit testing, design documents and code reviews more seriously than any other company I've even heard about. They work hard to keep their house in order at all times, and there are strict rules and guidelines in place that prevent engineers and teams from doing things their own way. The result: the whole code base looks the same, so switching teams and sharing code are both far easier than they are at other places."
I think you can safely ignore the rantings about "bad agile" and the bad people who promote it. But it's helpful to take a detailed look inside the highly agile process used by Google to ship software. Three concepts I found particularly helpful:

  1. Process = discipline. Agile is not an excuse for random execution or lack of standards. They have an extreme focus on unit tests and code standards, which is highly recommended.

  2. Dates are irrelevant. Use a priority work queue instead of scheduling and estimating (there's a toy queue sketched after this list). As I've written previously:
    I think agile team-building practices make scheduling per se much less important. In many startup situations, ask yourself "Do I really need to accurately know when this project will be done?" When the answer is no, we can cancel all the effort that goes into building schedules and focus on making progress evident.
  3. Focus on launching. All of the incentives described in the article focus on making it easy and highly desirable to launch your product:
    [He] claimed that launching projects is the natural state that Google's internal ecosystem tends towards, and it's because they pump so much energy into pointing people in that direction. ...

    So launches become an emergent property of the system.

    This eliminates the need for a bunch of standard project management ideas and methods: all the ones concerned with dealing with slackers, calling bluffs on estimates, forcing people to come to consensus on shared design issues, and so on. You don't need "war team meetings," and you don't need status reports.
    I even believe in doing launches on a continuous basis (see continuous ship).
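The priority work queue is easy to sketch: keep work ordered by importance and always pull the top item next, with no dates attached. A toy version using Python's heapq (the tasks are made up):

    import heapq

    # A date-free backlog: items carry a priority, not an estimate.
    backlog = []
    counter = 0  # tie-breaker so equal priorities pop in insertion order

    def add_task(priority, description):
        global counter
        heapq.heappush(backlog, (priority, counter, description))
        counter += 1

    def next_task():
        return heapq.heappop(backlog)[2] if backlog else None

    add_task(1, "fix crash on login")        # lower number = higher priority
    add_task(3, "polish settings screen")
    add_task(2, "instrument signup funnel")

    while (task := next_task()) is not None:
        print("working on:", task)  # always the most important remaining item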
Anyway, thanks Stevey for your thoughtful post. And sorry about those horrible Bad Agile charlatans who have apparently been torturing you with their salesmanship and dumb ideas.





Customer Development Engineering


Yesterday, I had the opportunity to guest lecture again in Steve Blank's entrepreneurship class at the Berkeley-Columbia executive MBA program. In addition to presenting the IMVU case, we tried for the first time to do an overview of a software engineering methodology that integrates practices from agile software development with Steve's method of Customer Development.

I've attempted to embed the relevant slides below. The basic idea is to extend agile, which excels in situations where the problem is known but the solution is unknown, into areas of even greater uncertainty, such as your typical startup. In a startup, both the problem and solution are unknown, and the key to success is building an integrated team that includes product development in the feedback loop with customers.



As always, we had a great discussion with the students, which is helping refine how we talk about this. As usual, I'm heavy on the theory and not on the specifics, so I thought I'd share some additional thoughts that came up in the course of the classroom discussion.

  1. Can this methodology be used for startups that are not exclusively about software? We talk about taking advantage of the incredible agility offered by modern web architecture for extremely rapid deployment, etc. What about a hardware business with some long-lead-time components?

    To be clear, I have never run a business with a hardware component, so I really can't say for sure. But I am confident that many of these ideas still apply. One major theory that has influenced the way I think about processes comes from Lean Manufacturing, where they use these same techniques to build cars. If you can build cars with it, I'm pretty sure you can use it to add agility and flexibility to any product development process.

  2. What's an example of a situation where "a line of working code" is not a valid unit of progress?

    This is incredibly common in startups, because you often build features that nobody wants. We had lots of these examples at IMVU; my favorite is the literally thousands of lines of code we wrote for IM interoperability. This code worked pretty well, was under extensive test coverage, worked as specified, and was generally a masterpiece of amazing programming (if I do say so myself). Unfortunately, positioning our product as an "IM add-on" was a complete mistake. Customers found it confusing, and it turned out to be at odds with our fundamental value proposition (which really requires an independent IM network). So we had to completely throw that code away, including all of its beautiful tests and specs. Talk about waste.


  3. There were a lot of questions about outsourcing/offshoring and startups. It seems many startups these days are under a lot of pressure to outsource their development organization to save costs. I haven't had to work under those conditions, so I can't say anything definitive. I do have faith that, whatever situation you find yourself in, you can always find ways to increase the speed of iteration. I don't see any reason why having the team offshore is any more of a liability in this area than, say, having to do this work while selling through a channel (and hence not having direct access to customers). Still, I'm interested in exploring this - some of the companies I work with as an advisor are tackling this problem as we speak.


  4. Another question that always comes up when talking about customer development is whether VCs and other financial backers are embracing this way of building companies. Of course, my own personal experience has been pretty positive, so I think the answer is yes. Still, I thought I'd share this email that happened to arrive during class. Names have, of course, been changed to protect the innocent:


    Hope you're well; I thought I'd relay a recent experience and thank you.
    I've been talking to the folks at [a very good VC firm] about helping them with a new venture ... Anyway, a partner was probing me about what I knew about low-burn marketing tactics, and I mentioned a book I read called "Four Steps to the…"
    It made me a HUGE hit, with the partner explaining that they "don't ramp companies like they used to, and have very little interest in marketing folks that don't know how to build companies in this new way."

Anyway, thanks to Steve and all of his students - it was a fun and thought-provoking experience.


Ideas. Code. Data. Implement. Measure. Learn

I like theory too much. But hey, it's what helps me think about problems. This simple feedback loop has proven its worth to me time and again. It's inspired by the classic OODA Loop and is really just a simplified version of that concept, applied specifically to creating a software product development team.

There are three stages:
  1. We start with ideas about what our product could be.
  2. We're a software company, so what we do everyday is turn ideas into code.
  3. Hopefully, we find out what happened when people use that code, creating data.
Giving rise to three verbs:
  1. Implement (programming!) where we turn ideas into code the best way possible.
  2. Measure what happened, as quickly as possible.
  3. Learn from the data, letting it influence our ideas for the next iteration through the loop.
So far, it's all obvious. What helped me is the insight from the Theory of Constraints that in a dynamic system anything that optimizes the sub-parts tends to sub-optimize the whole. Which is a fancy way of saying: focus on total time through the loop, not on the time of any individual activity.
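A small sketch of what that focus looks like in practice: timestamp each idea as it passes the three stages and report the total trip time around the loop, not per-stage time. The stage names and the example idea are hypothetical:

    from datetime import datetime

    # Track each idea's trip around the loop; optimize the total, not the stages.
    loops = {}  # idea -> {"implement": datetime, "measure": ..., "learn": ...}

    def record(idea, stage):
        loops.setdefault(idea, {})[stage] = datetime.now()

    def total_loop_time(idea):
        stamps = loops[idea]
        return stamps["learn"] - stamps["implement"]

    record("one-click signup", "implement")  # the idea becomes code
    record("one-click signup", "measure")    # the code produces data
    record("one-click signup", "learn")      # the data changes the next idea
    print("total time through the loop:", total_loop_time("one-click signup"))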

Optimize speed through the whole loop. This sometimes steps on the favored opinions of functional specialists in any organization.

My personal favorite: "Code without data collection? Faster but..." Ever heard a programmer argue for ripping out all that pesky data monitoring code? It's slowing down the system, wasting resources, creating ugly code and uglier scaling problems. If we just stopped measuring, we could write code a hell of a lot faster.

If you have worked with a Professional Data Warehouse Expert, you might have seen: "Measure 10,000 things? Comprehensive but..." No human being can learn from 10,000 graphs. It's overwhelming. To turn data into learning, you have to focus on the few key pieces of data that everyone agrees are important. And you have to get the decision makers and implementers to look at (and believe!) the data on a regular basis.

How about documentation that nobody reads? Reports that go unnoticed? Alerts that go off so often that they get ignored? Split-test experiments that go on forever? All of these are true waste, and they generally happen because somebody is optimizing for their particular part of the puzzle, not for the team as a whole.


Just-In-Time Scalability

At my previous company, we pioneered an approach to building out our infrastructure that we called "Just-In-Time Scalability." We wanted an agile approach that would allow us to build our software architecture as we needed it, without downtime, but also without large amounts of up-front cost. After all, the worst kind of waste in software development is code to support a use case that never materializes. Scalable systems are no exception - if your assumptions about how many customers you'll have, or how they will behave, are just a little bit wrong, you can wind up with a massive amount of wasted code.

Chris and I had the opportunity to present on our approach this past spring at the MySQL Conference. You can also download our presentation, "Just-In-Time Scalability: Agile Methods to Support Massive Growth."
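The presentation covers the actual techniques; as one generic illustration of the "build it only as demand materializes" spirit (not necessarily what the talk describes), you can route a deterministic, gradually growing slice of traffic to new infrastructure and build out capacity only when the real numbers justify the next step:

    import hashlib

    def in_rollout(user_id, percent):
        # Deterministically bucket users into 0-99 by hashing their id,
        # so the same user always lands on the same side of the split.
        bucket = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % 100
        return bucket < percent

    NEW_BACKEND_PERCENT = 5  # grow this only as real load justifies it

    def choose_backend(user_id):
        # Backend names are hypothetical placeholders.
        if in_rollout(user_id, NEW_BACKEND_PERCENT):
            return "new_scalable_cluster"
        return "legacy_single_db"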

Update: PDF version for "lazy linux users" and others who love freedom.
