Rapid Iteration for Mobile App Design

Guest post by Lisa Regan, writer for The Lean Startup Conference

As we’ve mentioned before, this year’s Lean Startup Conference features a lot of speakers who have incredible expertise to share but are new to our event. Mariya Yao is one such speaker. She’s the founder and Creative Director at Xanadu, a mobile strategy and design consultancy that helps guide app developers to success in a rapidly changing, often chaotic mobile ecosystem.

We asked her a few questions about how mobile developers can measure and address their product’s performance in an environment that is both incredibly competitive and rapidly changing. She provided some basic answers for us here and will go into more depth at the conference.

LSC: You've spoken before about strategic failures--where people build the wrong product--versus tactical failures, where people build the product wrong. This is a great distinction, so how can a mobile app developer know which of these is their particular problem? In other words, are there dead giveaways that the problem with an app is strategic rather than tactical?

Mariya: A strategic failure occurs when--as Paul Graham is fond of saying--you build a product no one wants. This means that you can't easily get users through the door despite solid marketing efforts, they aren't proactively inviting their friends and colleagues, or no one is paying for your product. A tactical failure occurs when you do grow quickly or easily attract passionate users, but see major drop-offs at key points in product usage due to poor implementation and user experience.

When you build a product that is clearly performing poorly from the get-go and you've ruled out basic technical, marketing, or execution issues, it's very likely the product is a strategic failure. However, what often happens is that a startup builds a product people like but don't love. They'll typically appear to do well early on, but won't have enough of a passionate following to achieve meaningful growth or revenue.

There are two questions that I recommend startups use to differentiate between being liked and being loved. The first is the question Sean Ellis popularized, where you ask your users, "How disappointed would you be if you could no longer use our product?" and have them answer with either "Very Disappointed," "Somewhat Disappointed," "Not Disappointed," or "I no longer use the product." Sean did research across hundreds of startups and discovered that companies where fewer than 40% of users answered "Very Disappointed" tended to struggle with building a successful and sustainable business.
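As a rough sketch of how you might tally that survey (the response counts here are made up, and how to treat the "I no longer use the product" group is a judgment call not covered above):

```python
from collections import Counter

# Hypothetical survey responses, using the answer categories named above.
responses = (
    ["Very Disappointed"] * 34
    + ["Somewhat Disappointed"] * 41
    + ["Not Disappointed"] * 18
    + ["I no longer use the product"] * 7
)

counts = Counter(responses)
share = counts["Very Disappointed"] / len(responses)

print(f"'Very Disappointed' share: {share:.0%}")
# Ellis's benchmark: under 40% tends to predict trouble building a sustainable business.
print("At or above the 40% benchmark" if share >= 0.40 else "Below the 40% benchmark")
```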

The second question is the Net Promoter Score question, where you ask your users, "On a scale from 0-10, how likely are you to recommend us to your friends?" You mark those who answer 0-6 as Detractors, 7-8 as Neutral, and 9-10 as Promoters. Your Net Promoter Score is the percentage of Promoters minus the percentage of Detractors, which will be a number between -100 and +100. The world's most successful companies typically score around +50, and top-performing tech companies like Apple, Google, and Amazon regularly score over +70.
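The calculation itself is a one-liner; here is a minimal sketch with made-up ratings:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 ratings: percentage of Promoters (9-10) minus percentage of Detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical ratings from 20 survey respondents.
ratings = [10, 9, 9, 8, 8, 7, 7, 10, 6, 5, 9, 10, 8, 4, 9, 10, 7, 8, 9, 2]
print(f"NPS: {net_promoter_score(ratings):+.0f}")  # prints NPS: +25; the score always falls between -100 and +100
```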

LSC: You've also spoken before about the fact that mobile apps suffer a major drop-off in engagement between opening the app and registering. When that happens, what has a developer typically failed to validate before this step? How can they test for this during app development?

Mariya: The drop-off between opening the app and registering tends to occur because an app developer doesn't clearly communicate the value of their app before demanding that a user put in work to register an account. This is a violation of the "give before you take" principle that governs social interactions.

For example, you'll often see apps where the very first screen is a Facebook-only login screen. Most of the time, all you see here is the title of the app, some vague background image or tagline, and a big Facebook Connect button. While social registration can be easier than regular registration, you're also asking users to give you access to their social data before you've clearly shown them WHAT your app does and WHY they should hand over sensitive information.

Imagine if a random stranger, someone you know nothing about, came up to you and immediately demanded to know your birthday, your relationship status, and all your friends' email addresses. Obviously that would be wildly off-putting and you'd refuse. That behavior is socially awkward for people AND socially awkward for apps, and the numbers show it. The typical drop-off rate at these kinds of Facebook-only login screens is about 30%, and I've seen cases where it exceeds 50%.

My advice for developers who want to combat this immediate drop-off is to test different kinds of onboarding flows for brand new users and try to delay registration until user data is absolutely needed. There are many apps that deliver plenty of utility and value without mandating that a user create an account up front. Great examples include Yelp and Flipboard. Others like Airbnb allow you to browse listings to your heart's content and only require registration when you are at the last step of completing a booking. That said, there will always be categories of apps — such as social networks or messaging apps — that require a user's identity in order to deliver value. In those cases, I'd recommend testing very short "Learn more" overviews prior to registration and optimizing your social invite flows, as they will often be the most compelling ways to get new users over the registration hurdle.

If a developer has a live product with sufficient usage already in the market, I'd recommend running several split tests with delayed registration if he or she hasn't already. For developers who are still in early ideation phases and are building utility apps that don't require user identification, one quick way to get early feedback is to create a multitude of paper prototypes on index cards that test different opening flows and show them to potential users in the app's intended context. For apps that are social or require a user's identity to be useful, a prototype needs to be more fully fleshed out to give meaningful test results. Here I'd recommend developers build as minimal an HTML5 app as possible, hook up all the requisite analytics, and test as early as possible for retention on the core action loop they want their users to take. For less technical developers, I'll be covering some methods and tools in my talk for getting functional prototypes built with less dependency on engineering know-how.
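A minimal sketch of how such a split test might be wired up, assuming you log an onboarding variant and a registration-completed flag to your analytics (the variant names and event format here are hypothetical):

```python
import hashlib

# Hypothetical onboarding variants: ask for registration up front vs. delay it.
VARIANTS = ["register_first", "delayed_registration"]

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a new user into an onboarding variant.

    Hashing the user ID keeps the assignment stable across sessions
    without storing any extra state on the device.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

def registration_report(events):
    """events: (variant, completed_registration) pairs pulled from your analytics."""
    for variant in VARIANTS:
        outcomes = [done for v, done in events if v == variant]
        if outcomes:
            rate = sum(outcomes) / len(outcomes)
            print(f"{variant}: {rate:.0%} completed registration ({len(outcomes)} users)")

# Hypothetical analytics export.
events = [
    (assign_variant("user-001"), True),
    (assign_variant("user-002"), False),
    (assign_variant("user-003"), True),
    (assign_variant("user-004"), True),
]
registration_report(events)
```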

LSC: You do a lot of work helping app developers create long-term engagement. Do you have examples of app-specific measures that developers really should pay attention to (and generally don't) in order to validate customers' engagement?

Mariya: Compared to desktop usage patterns, mobile apps tend to see more frequent sessions but significantly shorter session lengths. For example, a product that has both a desktop and a mobile presence might see desktop users visit 10-20 times a month for sessions averaging over 10 minutes, whereas on mobile it might see users visit 30-50 times a month for less than 60 seconds at a time.

Another difference you'll see is that people will visit hundreds of websites in a month on desktop, but their bandwidth for apps is much more limited. On mobile, despite the fact that there are millions of offerings in the app stores, the average consumer only uses about 15-20 different apps per week on a regular basis. There's a limit on both the real estate on a mobile user's home screen and their capacity for adopting new apps for habitual use.

Thus for many types of mobile apps, the holy grail is to become a daily habit for users. For your app category, you want to be the "go-to" app that users depend on. Aim to get your users to come back every day, maybe even multiple times a day, in order to have a shot at broad long-term retention. A popular metric for measuring retention in the mobile games industry is DAU / MAU, or daily active users divided by monthly active users, and I highly recommend that consumer-facing mobile app developers keep track of that metric as well.
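For illustration, with made-up numbers:

```python
def stickiness(daily_active_users, monthly_active_users):
    """DAU / MAU: the share of your monthly users who show up on a given day."""
    return daily_active_users / monthly_active_users

# Hypothetical counts: 12,000 daily actives against 60,000 monthly actives.
print(f"DAU/MAU: {stickiness(12_000, 60_000):.0%}")  # 20%, i.e. the average user is active roughly 6 days a month
```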

LSC: How can app developers, particularly those working in a cross-platform environment, quickly test and validate new features and processes?

Mariya: Moving quickly across multiple platforms is tough because development and testing are both so much slower and more bug-prone than on desktop or a single platform. Generally speaking, I'd advise developers to focus on nailing the product experience on a single platform first before becoming too ambitious on the cross-platform front, but occasionally you come across apps whose value comes from being ubiquitous.

Regardless of what app or feature you want to test, I'd recommend you first follow Eric's advice in The Lean Startup and clearly identify your hypotheses and unanswered questions. Then decide on effective ways to test your assumptions and predetermine your metrics of success so you can make a go or no-go decision to build. Much of this is the same whether you are building for mobile or the web, though on mobile there are some specific tactics and tools you can use to prototype aspects of your new products or features quickly, which I'll share in my talk at the Lean Startup Conference. I shamelessly encourage all of you to attend my session on "Rapid Iteration on Mobile" if you'd like to learn more.
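The key discipline is fixing the success criterion before the test runs. A toy sketch of what that might look like (the hypothesis, metric, and numbers are all invented):

```python
# Hypothetical hypothesis record: the metric and threshold are set *before*
# the experiment runs, so the go / no-go call isn't rationalized after the fact.
hypothesis = {
    "statement": "Delaying registration raises day-1 retention",
    "metric": "day1_retention",
    "success_threshold": 0.30,
}

test_results = {"day1_retention": 0.27}  # made-up experiment outcome

observed = test_results[hypothesis["metric"]]
decision = "GO" if observed >= hypothesis["success_threshold"] else "NO-GO"
print(f"{hypothesis['statement']}: observed {observed:.0%} vs "
      f"threshold {hypothesis['success_threshold']:.0%} -> {decision}")
```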

LSC: Let's say an app has 2,000 monthly active users and a simple function those people like—but the developer has done some testing and thinks there's a much bigger market in a related but different product. How would you recommend that the developer pivot to the new idea without losing all of the existing customers?

Mariya: My advice would depend heavily on the resources--time, money, and engineering prowess--that the app developer has available and what the growth metrics and business model look like for this existing app with 2,000 MAU. For the vast majority of social games or consumer-facing mobile products, 2,000 MAU is probably too small a user base to sustain a real business model, as typically only 1%-5% of your users will convert to paying customers, and advertisers aren't usually enticed into partnerships unless your numbers are well into the millions. If there aren't real drivers of long-term growth behind this app, it may be the right (albeit incredibly tough) strategic decision to pursue a higher-potential market even if it means abandoning some early wins.
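To make that arithmetic concrete (the 1%-5% range is the one cited above; the price point is invented):

```python
mau = 2_000
monthly_price = 4.99              # hypothetical price point

for conversion in (0.01, 0.05):   # the 1%-5% conversion range cited above
    paying = int(mau * conversion)
    print(f"{conversion:.0%} conversion -> {paying} paying users, "
          f"roughly ${paying * monthly_price:,.0f}/month")
```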

That said, there are many ways to test new products and markets relatively cheaply, so any major pivoting decision can and should be vetted thoroughly. If the new app idea is closely related to the existing one, the developer should try cross-promoting the new product to their existing user base. A base of 2,000 MAU is fertile ground for recruiting potential users and conducting user research and usability studies. They may even choose to launch the new product in parallel with the existing one if the company can manage to do so without sacrificing too much momentum or morale. By comparing the live performance of both products in the market, you'll get the most accurate data to inform your strategic product decisions.

For an existing product on mobile, there are many ways to segment your audience to test new features. One of the most popular is to release an app in a limited number of countries, such as Canada or New Zealand, prior to a global launch. Another is to "white-label" your app and release parallel apps in the same market that test different value propositions. Yet another is to test with mobile web or Android apps prior to the official launch. For example, pushing new changes out on Android is typically much faster than on iOS, so it's popular, especially with mobile game developers, to fine-tune apps on Android first rather than starting with iOS.

--
Learn more at The Lean Startup Conference, December 9 - 11 in San Francisco. Register today.
