Lean Analytics: The Best Numbers for Non-Tech Companies
Guest post by Lisa Regan, writer for The Lean Startup Conference.
Analytics spark more questions and discussion than almost any other aspect of the Lean Startup method. If you’re coming to them from outside the tech sector, the language around analytics can be particularly confusing. Alistair and Ben, co-authors of the book Lean Analytics, will help you sort it out in our next webcast, Lean Analytics for Non-tech Companies. The webcast is this Friday, October 25, at 10 a.m. PT and includes live Q&A with participants. Registration is free.
For those new to analytics, Alistair and Ben have a free Udemy course well worth checking out. It provides a basic introduction to analytics as they apply to Lean Startup, including sections on what metrics to use and how to interpret them. It’s also a great starting point for learning the basic vocabulary and methods of analytics, especially for anyone in non-tech startups, where this kind of language is less prevalent. For instance, Ben lists the worst of the “vanity metrics,” a term for numbers that are appealing but meaningless or misleading. Alistair carefully breaks down cohort analysis, a method of grouping users by a shared criterion (all the users who joined in a given month, for instance, or during a particular campaign), and then demonstrates how you can test with those cohorts to yield actionable information (sketched in code below). And Ben explains the difference between “leading” and “lagging” indicators--the former can tell you, early enough to act, where effective changes will create growth, while the latter only report what has already happened.
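To make cohort analysis concrete, here’s a minimal sketch in Python (ours, not from the course): group users by signup month, then compare what fraction of each month’s cohort was still active 30 days later. All data and column names are invented for illustration.

```python
# Minimal cohort-analysis sketch (invented data): group users by the month
# they signed up, then measure 30-day retention per cohort.
import pandas as pd

users = pd.DataFrame({
    "user_id":     [1, 2, 3, 4, 5, 6],
    "signup_date": pd.to_datetime(["2013-08-03", "2013-08-19", "2013-08-25",
                                   "2013-09-02", "2013-09-14", "2013-09-30"]),
    "last_active": pd.to_datetime(["2013-08-10", "2013-09-22", "2013-10-01",
                                   "2013-09-05", "2013-10-20", "2013-10-05"]),
})

# The shared criterion that defines each cohort: signup month.
users["cohort"] = users["signup_date"].dt.to_period("M")

# Retained = still active at least 30 days after signing up.
users["retained_30d"] = (users["last_active"] - users["signup_date"]).dt.days >= 30

# One retention rate per cohort. If September's cohort retains worse than
# August's, something that changed between the two is worth investigating.
print(users.groupby("cohort")["retained_30d"].mean())
```

Comparing cohorts this way separates the effect of product changes from the mere accumulation of older users--which is exactly what makes cohort metrics actionable rather than vanity numbers.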
In the Udemy course, Alistair and Ben expand these basics into a description of how to create empathy, stickiness, virality, revenue, and scale. Stickiness, they say, is the stage people rush through--they move on before making sure they really have a product with the right features and functionality to meet their customers’ needs. It’s here that analytics are important in checking your (or your investors’) natural impulse to jump ahead to the next phase.
To help turn the conversation specifically to non-tech companies—the topic of our webcast this week—we asked Alistair to answer a few questions.
LSC: Tell us about the customer development you did for your book:
Alistair: We've been thrilled at how Lean Analytics seemed to resonate with founders. As operators of an accelerator—and founders in our own right—Ben and I had constantly struggled with what the “right” numbers are for a business. We decided to find out, and talked with around 130 founders, entrepreneurs, investors and analysts. The results were revealing: most people didn't know what “normal” was, but there were clear patterns that stood out.
While many of the organizations were technical, we also spoke to big non-tech companies, and smaller businesses like restaurant owners. Nearly all of the ones who'd been successful went through a natural process of customer development—what we call the “empathy” stage—followed by a tight focus on stickiness, then virality, then paid acquisition, and finally scaling.
LSC: What's an example of one metric, other than revenue, that you might look at for a non-tech product?
Alistair: There are plenty. The Net Promoter Score is an obvious one for an established product--how likely are you to recommend the product or service to someone else? It's a good measurement because it captures both satisfaction and virality. Customer support numbers, trouble tickets, returns and complaints are good too. But they're all lagging indicators. In other words, they show you that the horse has already left the barn.
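For anyone who hasn't computed it, the standard NPS formula is simple: the percentage of promoters (scores of 9 or 10 on the 0-10 "would you recommend?" question) minus the percentage of detractors (scores of 0 through 6). A quick sketch with invented responses:

```python
# Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# Responses are answers to "how likely are you to recommend us?" on a 0-10 scale.
def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

print(net_promoter_score([10, 9, 8, 6, 10, 7, 3, 9]))  # 25.0
```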
Consider a restaurant. Revenue is a good, obvious metric; but maybe the number of people who don't leave a tip is a leading indicator of revenue. If you could find a way to measure that, and then you understood that there was a strong correlation between tipping rates or amounts and revenue, then you could experiment with things more cleanly. You could try different menus to different tables, and then look at tip amounts, and figure out earlier in the process whether the new menu was better or worse.
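To make that concrete, here's a hypothetical sketch of the tip-rate experiment--invented numbers, our code, not Alistair's: first check that tip rate actually tracks revenue, then compare tip rates across menu variants.

```python
# Hypothetical restaurant experiment (all numbers invented).
from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

# Step 1: does nightly tip rate track nightly revenue?
tip_rate = [0.14, 0.18, 0.12, 0.20, 0.16, 0.19]
revenue  = [2100, 2800, 1900, 3100, 2500, 2900]
print(f"tip rate vs. revenue: r = {correlation(tip_rate, revenue):.2f}")

# Step 2: if the correlation is strong, tip rate becomes a leading
# indicator you can read table by table. Serve two menus and compare.
old_menu_tips = [0.14, 0.15, 0.13, 0.16]
new_menu_tips = [0.18, 0.17, 0.19, 0.16]
print(f"old menu: {mean(old_menu_tips):.3f}  new menu: {mean(new_menu_tips):.3f}")
```

The design point is speed: tips arrive with every check, while revenue trends take weeks to read, so a validated proxy lets you run more experiments per month.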
The reality, though, is that every company today is a tech company. The dominant channel by which we reach customers is the Internet, whether you're a small local restaurant on Yelp or a global maker of tissue paper. And the dominant tool we use to measure back-office operations is technology, from inventory to supply chain management to procurement to human resources.
The beautiful thing about this, to someone who's analytically minded, is that while humans are awful at recording things, software has no choice but to do so. As a result, we're awash in a sea of data that might yield good insights about the business. The challenge is to know what the biggest problem in the business is right now, then to find a metric that shows you, as early as possible in the customer lifecycle, whether that problem is getting better or worse.
LSC: Here's a common problem: you start measuring something, and you assume that the results will be clear enough to help you make additional decisions about your product (for example, to pivot, persevere, or kill an idea)--but then the results are hazy. What's a good step to take when your measurement Magic 8-Ball says, "Ask again later"?
Alistair: This is why it's so important to draw a line in the sand beforehand. Scientists know this: you formulate a hypothesis, and then you devise an experiment that will reveal the results. Unfortunately, as founders, we're so enthusiastic, so governed by our reality distortion field, that we often run the experiment and then find the results we want. This is confirmation bias, and it kills.
We often tell founders that a business plan is nonsense. A business model, on the other hand, is a snapshot of your business assumptions at this moment in time. Once you've stated those assumptions clearly, you run experiments to see if they're valid. We spoke with the head of innovation at one Fortune 500 company who told us his only metric for early-stage innovation is “how many assumptions have you tested this week?”
The confusion isn't that the results are hazy. It's that the business model is complex. If I think I can sell 100 widgets at $10 apiece, and they cost me $5 to build and market, that's a business model. But if my measurements show me that people will only pay $8 a widget, is that a failure? No—it means I now need to revise my assumptions and test whether people will buy 125 widgets instead, so I can generate the same revenue (and adjust my margins accordingly).
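Spelling out the arithmetic of that revision (just the example above, worked through):

```python
# The widget model, revised when the observed price drops from $10 to $8.
target_revenue = 100 * 10              # original assumption: 100 units at $10
units_needed = target_revenue / 8      # 125.0 units at the observed $8 price

# Margins are a separate assumption to revisit: at a $5 unit cost,
# per-unit profit falls from $5 to $3, so even at the same revenue,
# total profit drops from $500 to $375.
profit = units_needed * (8 - 5)        # 375.0
print(units_needed, profit)
```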
The Magic 8-Ball seldom says “Ask again later.” What it often says is “Revise your assumptions and test something else.” That's why the most critical attribute of early-stage product development is the ability to learn quickly.
LSC: For companies that aren't used to thinking in terms of metrics, any tips for getting a team on board?
Alistair: As we say in the book, once, the leader was someone who could convince others to act in the absence of information. Today, the leader is someone who can ask the right questions. Data-driven business is here today; it's just not evenly distributed. That's changing, slowly. But there are things you can do to hasten it along.
The first is to use a small data victory to create an appetite for a bigger one. Take, for example, David Boyle at EMI. The company had billions of transactions locked away that might reveal how and why people bought music. But there was little support for analyzing it. So David started his own analysis project, surveying a million people about their music. This was brand new data, and he evangelized it within the organization. Everyone wanted some. Once there was a demand for this data, he earned the political capital to dig into the vast troves of historical information.
The second is to treat everything as a study. Many companies like certainty. We've joked that if a startup is an organization in search of a sustainable, repeatable business model, then a big company is an organization designed to perpetuate such a model. That's in direct conflict with disruption and innovation. So how do you deal with a boss who wants certainty? When we spoke with DHL, they told us that they consider every new initiative a learning exercise that might just happen to produce a new product or service. They've launched new business ideas that failed—but that failure taught them valuable things about a particular market, which they then shared with customers and used for strategic planning.
The simple reality is that with cloud computing, prototyping, social media, and other recent tools, the cost of trying something out is now vanishingly small. In fact, it's often cheaper than the old cost of a big study or research project. Companies need to learn that trying something out is how you conduct the study. Let's say you want to know about the burgeoning market for mobile widgets. So you create a mobile widget MVP. If it fails, you've successfully studied it. If it succeeds, you've successfully studied it, and built a new venture along the way.
The third is, when in doubt, collect and analyze data. We've done some work with the folks at Code for America. In one case, a group was trying to reduce the Failure to Appear rate for people accused of a crime. This is a big deal: if you don't show up for court, it triggers a downward spiral of arrests and incarceration. But there were a lot of challenges to tackling the problem directly, so they took a different approach: they created tools to visualize the criminal justice system as a supply chain, making it easier to identify the bottlenecks where the system needed work most urgently.
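As a toy illustration of that supply-chain framing--entirely our invention, with made-up numbers--each stage of the system can be modeled by how many cases enter and resolve it, and the stage with the largest backlog flagged as the bottleneck:

```python
# Toy supply-chain view of a case pipeline (invented numbers).
stages = {
    "citation issued":  {"entered": 1000, "resolved": 980},
    "court date set":   {"entered": 980,  "resolved": 950},
    "appearance":       {"entered": 950,  "resolved": 700},  # failures to appear pile up here
    "case disposition": {"entered": 700,  "resolved": 690},
}

# Backlog per stage; the biggest one is where the system needs work first.
backlogs = {name: s["entered"] - s["resolved"] for name, s in stages.items()}
bottleneck = max(backlogs, key=backlogs.get)
print(bottleneck, backlogs[bottleneck])  # appearance 250
```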
If you're an intrapreneur tilting at corporate windmills, you need to embrace these kinds of tactics. Use small data victories to give management a taste of what's possible. Frame your work as a study that will be useful even if it fails. And when you run into roadblocks, grab data and analyze it in new ways to find where you'll get the most leverage.
--
Our webcast with Alistair and Ben, Lean Analytics for Non-tech Companies, is this Friday; register today and come ready with your questions. Alistair will also be giving a workshop at The Lean Startup Conference, December 9 – 11 in San Francisco. Join us there.