The ABCDEF's of conducting a technical interview

I am incredibly proud of the people I have hired over the course of my career. Finding great engineers is hard; figuring out who's good is even harder. The most important step in evaluating a candidate is conducting a good technical interview. If done right, a programming interview serves two purposes simultaneously. On the one hand, it gives you insight into what kind of employee the candidate might be. On the other, it is your first exercise in impressing them with the values your company holds. This second objective plays no small part in allowing you to hire the best.

Balancing competing objectives is a recurring theme on this blog - it's the central challenge of all management decisions. Hiring decisions are among the most difficult, and the most critical. The technical interview is at the heart of these challenges when building a product development team, and so I thought it deserved an entire post on its own.

In this post I'll follow what seems to be a pattern for me: lay out a theory of what characterizes a good interview, and then talk practically about how to conduct one.

When I train someone to participate in a technical interview, the primary topic is what we're looking for in a good candidate. I have spent so much time trying to explain these attributes that I even have a gimmicky mnemonic for remembering them. The six key attributes spell ABCDEF:
  • Agility. By far the most important thing you want to hire for in a startup is the ability to handle the unexpected. Most normal people have a fairly narrow comfort zone, where they excel in their trained specialty. Those people also tend to go crazy in a startup. Now, we're not looking for people who thrive on chaos or, worse, cause chaos. We want someone who is a strong lateral thinker, who can apply what they've learned to new situations, and who can un-learn skills that were useful in a different context but are lethal in a new one. When talking about their past experience, candidates with agility will know why they did what they did in a given situation. Beware anyone who talks too much about "best practices" - if they believe that there are practices that are ideally suited to all situations, they may lack adaptability.

    To probe for agility, you have to ask the candidate questions involving something that they know little about.

  • Brains. There's no getting around the fact that at least part of what you should screen for is raw intelligence. Smart people tend to want to work with smart people, so it's become almost a cliche that you want to keep the bar as high as you can for as long as you can. Microsoft famously uses brainteasers and puzzles as a sort of quasi-IQ test, but I find this technique difficult to train people in and apply consistently. I much prefer a hands-on problem-solving exercise, in a discipline related to the job the candidate is applying for. For software engineers, I think this absolutely has to be a programming problem solved on a whiteboard. You learn so much about how someone thinks by looking at code you know they've written that it's worth all the inconvenience of having to write, analyze, and debug it by hand.

    I prefer to test this with a question about the fundamentals. The best candidates have managed to teach me something about a topic I thought I already knew a lot about.

  • Communication. The "lone wolf" superstar is usually a disaster in a team context, and startups are all about teams. We have to find candidates that can engage in dialog, learning from the people around them and helping find solutions to tricky problems.

    Everything you do in an interview will tell you something about how the candidate communicates. To probe this deeply, ask them a question in their area of expertise. See if they can explain complex concepts to a novice. If they can't, how is the company going to benefit from their brilliance?

  • Drive. I have been burned most by hiring candidates who had incredible talents, but lacked the passion to actually bring them to work every day. You need to ask: 1) does the person care about what they work on? and 2) can they get excited about what your company does? For a marketing job, for example, it's reasonable to expect that a candidate will have done their homework and used your product (maybe even talked to your customers) before coming in. I have found this quite rare in engineers. At IMVU, most of them thought our product was ridiculous at best; hopeless at worst. That's fine for the start of their interview process. But if we haven't managed to get them fired up about our company mission by the end of the day, it's unlikely they are going to make a meaningful contribution.

    To test for drive, ask about something extreme, like a past failure or a peak experience. They should be able to tell a good story about what went wrong and why.

    Alternatively, ask about something controversial. I remember once being asked in a Microsoft group interview (and dinner) about the ActiveX security model. At the time, I was a die-hard Java zealot. I remember answering "What security model?" and going into a long diatribe about how insecure the ActiveX architecture was compared to Java's pristine sandbox. At first, I thought I was doing well. Later, the other candidates at the table were aghast - didn't I know who I was talking to?! Turns out, I had been lecturing the creator of the ActiveX security model. He was perfectly polite, not defensive at all, which was why I had no idea what was going on. Then I thought I was toast. Later, I got the job. Turns out, he didn't care that I disagreed with him, only that I had an opinion and wasn't afraid to defend it. Much later, I realized another thing. He wasn't defensive because, as it turns out, he was right and I was completely wrong (Java's sandbox model looked good on paper but its restrictions greatly slowed its adoption by actual developers).

  • Empathy. Just as you need to know a candidate's IQ, you also have to know their EQ. Many of us engineers are strong introverts, without fantastic people skills. That's OK, we're not trying to hire a therapist. Still, a startup product development team is a service organization. We're there to serve customers directly, as well as all of the other functions of the company. This is impossible if our technologists consider the other types of people in the company idiots, and treat them that way. I have sometimes seen technical teams that have their own "cave" that others are afraid to enter. That makes cross-functional teamwork nearly impossible.

    To test for empathy, I always make sure that engineers have one or two interviews with people of wildly different backgrounds, like a member of our production art department. If they can treat them with respect, it's that much less likely we'll wind up with a siloed organization.

  • Fit. The last and most elusive quality is how well the candidate fits in with the team you're hiring them into. I hear a lot of talk about fit, but also a lot of misunderstandings. Fit can wind up being an excuse for homogeneity, which is lethal. When everyone in the room thinks the same way and has the same background, teams tend to drink the proverbial Kool-Aid. The best teams have just the right balance of common background and diverse opinions, which I have found true in my experience and repeatedly validated in social science research (you can read a decent summary in The Wisdom of Crowds).

    This responsibility falls squarely to the hiring manager. You need to have a point of view about how to put together a coherent team, and how a potential candidate fits into that plan. Does the candidate have enough of a common language with the existing team (and with you) that you'll be able to learn from each other? Do they have a background that provides some novel approaches? Does their personality bring something new?
It's nearly impossible to get a good read on all six attributes in a single interview, so it's important to design an interview process that will give you a good sampling of data to look at. Exactly how to structure that process is a topic for another day, however, because I want to focus on the interview itself.

My technique is to structure a technical interview around an in-depth programming and problem-solving exercise. If it doesn't require a whiteboard, it doesn't count. You can use a new question each time, but I prefer to stick with a small number of questions that you can really get to know well. Over time, it becomes easier to calibrate a good answer if you've seen many people attempt it.

For the past couple of years I've used a question that I once was asked in an interview, in which you have the candidate produce an algorithm for drawing a circle on a pixel grid. As they optimize their solution, they eventually wind up deriving Bresenham's circle algorithm. I don't mind revealing that this is the question I ask, because knowing that ahead of time, or knowing the algorithm itself, confers no advantage to potential candidates.

That's because I'm not interviewing for the right answer to the questions I ask. Instead, I want to see how the candidate thinks on their feet, and whether they can engage in collaborative problem solving with me. So I always frame interview questions as if we were solving a real-life problem, even if the rules are a little far-fetched. For circle-drawing, I'll sometimes ask candidates to imagine that we are building a portable circle-drawing device with a black and white screen and a low-power CPU. Then I'll act as their "product manager" who can answer questions about what customers think, as well as acting as their combined compiler, interactive debugger, and QA tester.

You learn a lot from how interested a candidate is in why they are being asked to solve a particular problem. How do they know when they're done? What kind of solution is good enough? Do they get regular feedback as they go, or do they prefer to think, think, think and then dazzle with the big reveal?

My experience is that candidates who "know" the right answer do substantially worse than candidates who know nothing of the field. That's because they spend so much time trying to remember the final solution, instead of working on the problem together. Those candidates have a tendency to tell others that they know the answer when they only suspect that they do. In a real-world situation, they tend to wind up without credibility or forced to resort to bullying.

No matter what question you're asking, make sure it has sufficient depth that you can ask a lot of follow-ups, but that it has a first iteration that's very simple. An amazing number of candidates cannot follow the instruction to Do the Simplest Thing That Could Possibly Work. Some questions have a natural escalation path (like working through the standard operations on a linked-list) and others require some more creativity.
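To make that "simplest thing" concrete for the circle-drawing question, here is a rough sketch (in Python, purely for illustration; in the actual interview it's whiteboard code) of one plausible brute-force first pass: test each nearby pixel against the circle equation. It's a hypothetical starting point, not the answer the question is looking for.

```python
def draw_circle_naive(radius):
    """Brute-force first iteration: light up every pixel whose distance
    from the center is within half a pixel of the radius. It does
    O(radius^2) work and uses floating-point math, exactly the waste
    that follow-up questions can push the candidate to optimize away."""
    lit = set()
    for x in range(-radius, radius + 1):
        for y in range(-radius, radius + 1):
            if abs((x * x + y * y) ** 0.5 - radius) < 0.5:
                lit.add((x, y))
    return lit

if __name__ == "__main__":
    r = 8
    pixels = draw_circle_naive(r)
    for y in range(-r, r + 1):
        print("".join("#" if (x, y) in pixels else "." for x in range(-r, r + 1)))
```

From a first pass like this, the natural escalation is to eliminate the square root, then the floating point, then the wasted work on pixels nowhere near the circle, which is the path that eventually leads toward Bresenham's algorithm.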

For example, I would often ask a candidate to explain to me how the C code they are writing on the whiteboard would be rendered into assembly by the compiler. There is almost no earthly reason that someone should know about this already, so candidates answer in a wide variety of ways: some have no idea, others make something up; some have the insight to ask questions like "what kind of processor does this run on?" or "what compiler are we using?" And some just write the assembly down like it's a perfectly normal question. Any of these answers can work, and depending on what they choose, it usually makes sense to keep probing along these lines: which operations are the most expensive? what happens if we have a pipelined architecture?

Eventually, either the candidate just doesn't know, or they wind up teaching you something new. Either way, you'll learn something important. There are varying degrees of not-knowing, too.
  1. Doesn't know, but can figure it out. When you start to probe the edges of someone's real skills, they will start to say "I don't know" and then proceed to reason out the answer, if you give them time. This is usually what you get when you ask about big-O notation, for instance. They learned about it some time ago, don't remember all the specifics, but have a decent intuition that n-squared is worse than log-n.

  2. Doesn't know, but can deduce it given the key principles. Most people, for example, don't know exactly how your typical C++ compiler lays out objects in memory. But that's usually because most people don't know anything about how compilers work, or how objects work in C++. If you fill them in on the basic rules, can they reason with them? Can those insights change the code you're trying to get them to write?

  3. Doesn't understand the question. Most questions require a surprising amount of context to answer. It doesn't do you any good to beat someone up by forcing them through terrain that's too far afield from their actual area of expertise. For example, I would often work the circle-drawing question with candidates who had only ever programmed in a web-based scripting language like PHP. Some of them could roll with the punches and still figure out the algorithmic aspects of the answer. But it was normally useless to probe into the inner workings of the CPU, because it wasn't something they knew about, and it can't really be taught in less than a few hours. You might decide that this knowledge is critical for the job you're hiring for, and that's fine. But it's disrespectful and inefficient to waste the candidate's time. Move on.
My purpose in elaborating these degrees of not-knowingness is to emphasize this essential point: you want to keep as much of the interview as possible in boxes one and two. In other words, you want to keep asking questions on the boundaries of what they know. That's the only way to probe for agility and brains, and the best way to probe for communication. In the real world, the vast majority of time (especially in startups) is spent encountering novel situations without a clear answer. What matters is how good your thinking is at times like those, and how well you can communicate it. (It's kind of like playing Fischer Random Chess, where memorizing openings is useless.)

Let me return to my topic at the top of the post: using the interview to emphasize values as well as evaluate. The best interviews involve both the interviewer and the candidate learning something they didn't know before. Making clear that your startup doesn't have all the answers, but that your whole team pushes their abilities to their limits to find them, is a pretty compelling pitch. Best of all, it's something you just can't fake. If you go into an interview with the intention of lording your knowledge over a candidate, showing them how smart you are, they can tell. And if you ask questions but don't really listen to the answers, it's all-too-obvious. Instead, dive deep into a problem and, together, wrestle the solution to the ground.



Squash Soup with Vanilla Crème Fraiche

Our host for November was Meg of Joy Through Cooking. (The pictures throughout this post are from Kavs of The Girl Next Kitchen, Temperance of http://hoghigh.blogspot.com/ and Maybelle's Mom of Feeding Maybelle.) Here is Meg's post.

When I saw our fabulous hosts were looking for a soup recipe for November I JUMPED at the chance. You see… I love soup. Creamy or clear… warm and cozy this time of year it just warms you from the inside out. I have a few different soup cookbooks and have had fun experimenting over the last few years.

Temperance and Lori encourage hosts to choose something new to us, as well. Then I remembered… there WAS a recipe that I had been wanting to try. It was challenging, and had a twist but was still safe with lovely flavors.

You see, we love watching Bravo! reality shows. For our members abroad, this channel has some skill-based reality shows… a fashion designer show, a hairdressing show, and… a cooking show. Top Chef. In my humble opinion, the best of the bunch (new season starts next week!). In the last season, set in Chicago, there was a challenge at Second City Comedy Club where the audience threw out theme words that the chefs were to use as inspiration for their challenge dish. Each team had a color, a food or flavor, and an emotion. One team, the team that eventually won the challenge, had the words: Yellow, Vanilla, Love.

They made a soup. A squash bisque with a dollop of vanilla crème fraiche. And they put love in it as they layered flavors, tasted, and perfected it. To me, a rich warm thick soup is all about love… it is a dish that just loves you right back!

So this is the Challenge: Squash Soup with Vanilla Crème Fraiche

Main “requirements” for this soup: Make your own stock. You can use the recipe given or play with the flavors. The given stock is a vegetable one. If you want to go with a chicken stock you are welcome to, but it must be homemade. This part can be done ahead.

Incorporate a creamy vanilla element. You can use whipping cream if crème fraiche is cost-prohibitive. For those with allergies/food restrictions of course please substitute as needed but for those who do not have those restrictions, please incorporate a creamy vanilla garnish.

For the soup it should be a squash soup. I used a mix of Butternut, buttercup, and acorn squash but you can use your favorites (or what is available). You can pre-cook the squash as well.

Beyond that, please use your creativity. As we know, much of cooking is about feel and taste. So taste your soup as you are cooking it, often. Taste your stock. Adjust your seasonings as you see fit. I really enjoyed the flavor the miso brought to the soup – it was something a little different, but you can play with herbs, spices, fruit, etc that you think will be right for you. The more you cook, taste, season, cook, taste, season… the more layered your flavors will become. Whatever you choose to do, please cook your soup with LOVE.

The recipe given is huge. It would serve more than 8, easily. Go with the full recipe if you plan to serve this for your holiday dinner. I cut it in half and still had a ton of leftovers.

Suggestion: make more stock than the recipe calls for. You will want the flexibility to keep adding to the soup to achieve desired consistency. And you can certainly use any extra stock in your cooking.

Also, I highly recommend making the stock and cooking the squash ahead (and stick in the fridge). Then making the mirepoix and pulling the soup together will be much quicker… and since you do not want to rush soup, this is a good thing.

The recipe called for cheesecloth straining. You can do that, or use an immersion blender, or whatever other tools you may have on hand.

Squash Soup with Vanilla Creme Fraiche
Please see the original inspiration here.
Prep Time: one hour and 30 minutes
Serves: more than 8 (I estimate about 20)
Spike & Andrew's recipe:

Mirepoix:
3 sliced leek bottoms (rinsed)
4 carrots (peeled and sliced)
10 shallots (peeled and sliced)
1 clove garlic
1/2 lb butter
1/2 cup honey
1/2 cup miso
Salt and pepper

Squash:
5 acorn
5 butternut
Oil for rubbing
Sage leaves
Salt and pepper

Vegetable Stock:
4 quarts water
2 white onions
4 carrots, peeled
2 leeks
6-8 button mushrooms
Bouquet garnish (parsley, bay leaves, peppercorns)

Vanilla Creme Fraiche:
Creme fraiche
2 vanilla beans

Additional Ingredients:
Salt to taste
Cayenne to taste

DIRECTIONS:
Mirepoix:
Sweat all of the vegetables with butter. Sweat down and deglaze with honey. Stir and add miso. Season with salt and pepper.
Squash:
Cut squash in half, scoop out seeds and reserve one butternut head for garnish. Rub squash with oil and season with salt and pepper. Place one piece of sage under every piece of squash. Place squash face down on a sheet tray and roast at 350 degrees until done. Scoop flesh out and pass through a ricer.
Vegetable Stock:
In a pot, boil all ingredients together with the exception of the bouquet garnishes. (NOTE:allow to simmer for at least 1 hour)
Vanilla Creme Fraiche:
Whip creme fraiche and scrape vanilla beans and fold in.
Soup:
Combine squash and vegetable stock to desired consistency. Add mirepoix and cook. Blend with a vita prep and strain through a chinois. Season with salt and cayenne.
To Plate:
Add 6 ounces of soup in bowl and spoon in creme fraiche. Garnish with bouquet garnishes.

Quotes From the Forum:
Oh my gosh was this yummy!! I loved what the vanilla creme fraiche did for this dish! It was so fragrant, so flowery- it reminded me that vanilla comes from an orchid.
Lauren of I'll Eat You

The soup is awesome, and I have to say it gets even better every day.
Kathrine of South Bronx Foodie

Oh my! Just made the soup tonite everyone said it was amazing. This was a fantastic pick. I never would have put miso with squash but it was ABSOLUTELY delicious.
Lori of Lori's Lipsmacking Goodness


Twitter Rejects $500 Million Takeover Offer From Facebook

You might be wondering why Twitter would reject a $500 Million Takeover Offer From Facebook. Here's why: Twitter Rejects $500 Million Takeover Offer From Facebook


Lesson 44: Be A Kind Beekeeper


Hi, we're David and Sheri Burns, owners and operators of Long Lane Honey Bee Farms. And we look a little bit like this caricature drawing that was done of us at our last honey show that we did in Danville, Illinois. Those aren't devil horns sticking out of our heads, they are little bee antennas.

You'll see a widget to the right of this article where you can click on the file for Tuesday and hear it.
Sheri and I work hard at being kind, polite and cordial beekeepers. We try very hard to treat our customers as friends and family. We believe we should treat others as we would want to be treated.
Having said that, I feel it is time to write a lesson of a different nature. We talk a lot about wanting gentle bees, but every now and again beekeepers need that same expectation placed on them. We should be gentle too!
I received a phone call last week from a nice gentleman from another state who wants to get started keeping bees in the spring. He spoke with a man in his area, a commercial beekeeper, to get some advice. But the man was pretty negative and felt a little intimidated by "another beekeeper" who he perceived could be cutting in on his business or territory.
For the most part, the beekeeping community is matchless when it comes to kindred spirits gleaming with encouragement, camaraderie, and cordiality. Yet, like with every group, there can be competition, strife, jealousy and fear. And some beekeepers have to prove they are the smartest at the meeting. Beekeepers are a bit proud of the knowledge that they have gained, because most of us have gained that knowledge and wisdom at a great cost to our pocketbooks and our total hive count. Or we've paid hundreds of dollars for various beekeeping courses and conventions we've attended where we've gained our knowledge. We want a pat on the back for all that knowledge we've gained.
This is to be expected and is okay to a point. However, it can become prideful and greedy. I consider the knowledge I've gained so far to be public domain, shareware, free for others to know too! I share what I know not to seem or sound like a know-it-all but to help others avoid problems and to enjoy greater success. But some beekeepers, not a lot, but some are grouchy, resentful, territorial and negative! They are in every organization, so don't think that just beekeeping has its share of curmudgeons. That's right, curmudgeons. This best summarizes that elite segment of beekeepers who are no fun to be with. Look at the definition of a curmudgeon:


Sound like someone you know? Ill-tempered, full of resentment and stubborn notions. Avoid these kinds of beekeepers. They are out there and they are ready to tell you how stupid you are for listening to some other beekeeper or for buying your equipment from one place and not the other. Some will tell you of all the insurance you have to have in case your bees should sting a customer, and on goes the list of expressed fears.
Sometimes they don't even have to say a word, but they just give you that look, that makes you feel that what you've just said is stupid and ignorant. Behind your back they'll snicker and say things like, "That's the stupidest thing I've ever heard. His bees are going to die if he tries that..."
Almost every association has at least one curmudgeon. And one is all it takes to silence the eager student from asking genuine questions. One is all it takes to cause an association to be poorly attended.
There's not much we can do for the few curmudgeons out there. You can requeen a mean hive but you can't requeen your association from the curmudgeons.
So let me give you 10 things you can do to avoid becoming a beekeeping curmudgeon and to deal with those who are...
1) Be nice, friendly and encouraging to other beekeepers and to everyone for that matter.
2) Speak up at your association meetings. In a kind and nice way, try to refute the curmudgeon's negative outlook. Share what is positive and what successes you are enjoying.
3) When a curmudgeon gossips about someone else, stop them right there. Do not listen. If you listen and say nothing, even your silence is taken as agreement with them, so don't be silent. Speak positively.
4) Perhaps in a humorous way, you can ask the curmudgeon if he or she might consider requeening their attitude. "Why do you keep bees if you are so down on things anyway? I think you need to get out of beekeeping or requeen your attitude."
5) Each association should give out an annual award for the most kind, helpful and encouraging beekeeper among us.
6) When negative things happen to you, like your hive dies, look at it from a positive point of view. Look at what you learned from the bees that you can apply next time and do better or try a different approach.
7) Contribute to your local association. Don't just show up with a chip on your shoulder because you have family or financial issues. Leave those behind and come with something encouraging and positive to share at your meetings.
8) Think back to when you first started keeping bees. You had to work hard to find answers. So look around and find those new to beekeeping and mentor them and help them along with positive and encouraging advice.
9) When you think you know it all, and you badly want to share it, bite your tongue and try to learn more. Mark Twain once said, "It is better to keep your mouth closed and let people think you are a fool than to open it and remove all doubt".
10) Keep learning. You'll never reach a point where you know everything about beekeeping, so remember that though you may know more than some, lots of folks know more than you. So keep learning.
So, remember to be positive and supportive of other beekeepers. Why not share some equipment or if a neighbor beekeeper loses some hives in the winter and you didn't, why not give him a hive or two. After all, you could have lost those hives anyway.
If your life is filled with hardships and negative happenings, perhaps you need to focus on something positive. Why not listen to our new Studio Bee Live Beekeeping Broadcasts! Sheri and I have fun sharing silly things and smart things that we do on our honey bee farm. We'll give you information on beekeeping as well as make you smile. Just log on to: http://www.honeybeesonline.com/studiobeelive.html or click on the player in the upper right side of this blog.

We love to answer your beekeeping questions, and now we have a new line just for questions: 217-427-2430. Call that line if you have questions about beekeeping, but call our other line to place orders. The order line is 217-427-2678.
When you call in with your question, we'd love to play your question on our broadcast along with our answer. So, when you ask your question, and it's okay for us to use it on our broadcast, just say, "Hi I have a question for studio bee live..."
Or you can email us questions: david@honeybeesonline.com
Finally, now that it is November, here is what you should be doing with your hive.
NOVEMBER AND THE BEES: The bees continue to cluster for winter. They may not yet go into a full winter cluster, and may actually develop two clusters. They may break cluster frequently on warm days and recluster at night. But they will begin to cluster for the winter. The days are getting much shorter. The queen will lay less and less.
NOVEMBER AND THE BEEKEEPER: Feed your light hives as long as the sugar water doesn't freeze. Finish up all winterizing of your hives. On a cold day when the bees are all inside, weed-eat around your hives. Enjoy Thanksgiving! Start purchasing next year's equipment.
That's all for now, from Long Lane Honey Bee Farms, Sheri and I appreciate you and enjoy calling you our friends!!
Remember to Bee-Have yourself!
David & Sheri Burns
Long Lane Honey Bee Farms
http://www.honeybeesonline.com/
ORDER LINE: 217-427-2678
QUESTION LINE: 217-427-2430
EMAIL:
david@honeybeesonline.com


The power of DrudgeReport

Looking for a design for a simple site? Why the Drudge Report is one of the best designed sites on the web


Net Promoter Score: an operational tool to measure customer satisfaction

I've mentioned Net Promoter Score (NPS) in a few previous posts, but haven't had a chance to describe it in detail yet. It is an essential lean startup tool that combines seemingly irreconcilable attributes: it provides operational, actionable, real-time feedback that is truly representative of your customers' experience as a whole. It does it all by asking your customers just one magic question.

In this post I'll talk about why NPS is needed, how it works, and show you how to get started with it. I'll also reveal the Net Promoter Score for this blog, based on the data you've given me so far.

How can you measure customer satisfaction?
Other methods for collecting data about customers have obvious drawbacks. Doing in-depth customer research, with long questionnaires and detailed demographic and psychographic breakdowns, is very helpful for long-range planning, interaction design and, most importantly, creating customer archetypes. But it's not immediately actionable, and it's far too slow to be a regular part of your decision loop.

At the other extreme, there's the classic A/B split-test, which provides nearly instantaneous feedback on customer adoption of any given feature. If your process for creating split-tests is extremely light (for example, it requires only one line of code), you can build a culture of lightweight experimentation that allows you to audition many different ideas, and see what works. But split-tests also have their drawbacks. They can't give you a holistic view, because they only tell you how your customers reacted to that specific test.
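The one-line-of-code idea is easiest to see with a sketch. Here is a minimal, hypothetical split-test helper in Python; the function and logging hook are illustrative assumptions, not any particular library. Assignment is deterministic per user, and every assignment is recorded so a report can later compare the variants.

```python
import hashlib

def log_assignment(experiment, user_id, variant):
    """Stub: in a real system this would write to your analytics or reporting store."""
    print(experiment, user_id, variant)

def ab_test(experiment, user_id, variants=("control", "treatment")):
    """Deterministically assign a user to a variant by hashing (experiment, user),
    then record the assignment so reports can compare conversion by variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    variant = variants[int(digest, 16) % len(variants)]
    log_assignment(experiment, user_id, variant)
    return variant

# The call site stays a single line, which keeps the cost of experimenting low:
if ab_test("new_signup_flow", user_id=42) == "treatment":
    pass  # render the new flow instead of the old one
```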

You could conduct an in-person usability test, which is very useful for getting a view of how actual people perceive the totality of your product. But that, too, is limited, because you are relying on a very small sample, from which you can only extrapolate broad trends. A major usability problem is probably experienced similarly by all people, but the absence of such a defect doesn't tell you much about how well you are doing.

Net Promoter Score
NPS is a methodology that comes out of the service industry. It involves using a simple tracking survey to constantly get feedback from active customers. It is described in detail by Fred Reichheld in his book The Ultimate Question: Driving Good Profits and True Growth. The tracking survey asks one simple question: How likely are you to recommend Product X to a friend or colleague? The answer is then put through a formula to give you a single overall score that tells you how well you are doing at satisfying your customers. Both the question and formula are the results of a lot of research that claims that this methodology can predict the success of companies over the long-term.

There's a lot of controversy surrounding NPS in the customer research community, and I don't want to recapitulate it here. I think it's important to acknowledge, though, that lots of smart people don't agree with the specific question that NPS asks, or the specific formula used to calculate the score. For most startups, though, I think these objections can safely be ignored, because there is absolutely no controversy about the core idea that a regular and simple tracking survey can give you customer insight.

Don't let the perfect be the enemy of the good. If you don't like the NPS question or scoring system, feel free to use your own. I think any reasonably neutral approach will give you valuable data. Still, if you're open to it, I recommend you give NPS a try. It's certainly worked for me.

How to get started with NPS
For those that want to follow the NPS methodology, I will walk you through how to integrate it into your company, including how to design the survey, how to collect the answers, and how to calculate your score. Because the book is chock-full of examples of how to do this in older industries, I will focus on my experience integrating NPS into an online service, although it should be noted that it works equally well if your primary contact with customers is through a different channel, such as the telephone.

Designing the survey
The NPS question itself (again, "How likely are you to recommend X to a friend or colleague?") is usually asked on a 0-10 point scale. It's important to let people know that 10 represents "most likely" and 0 represents "least likely," but it's also important not to use words like promoter or detractor anywhere in the survey itself.

The hardest part about creating an NPS survey is to resist the urge to load it up with lots of questions. The more questions you ask, the lower your response rate, and the more you bias your results towards more-engaged customers. The whole goal of NPS is to get your promoters and your detractors alike to answer the question, and this requires that you not ask for too much of their time. Limit yourself to two questions: the official NPS question, and exactly one follow-up. Options for the follow-up could be a different question on a 10-point scale, or just an open-ended question asking why they chose the rating that they did. Another possibility is to ask "If you are open to answering some follow-up questions, would you leave your phone number?" or other contact info. That would let you talk to some actual detractors, and get a qualitative sense of what they are thinking, for example.

For an online service, just host the survey on a webpage with as little branding or decoration as possible. Because you want to be able to produce real-time graphs and results, this is one circumstance where I recommend you build the survey yourself, versus using an off-the-shelf hosted survey tool. Just dump the results in a database as you get them, and let your reports calculate scores in real-time.

Collecting the answers
Once you have the survey up and running, you need to design a program to have customers take it on a regular basis. Here's how I've set it up in the past. Pick a target number of customers to take the survey every day. Even if you have a very large community, I don't think this number needs to be higher than 100. Even just 10 might be enough. Build a batch process (using GearMan, cron, or whatever you use for offline processing) whose job is to send out invites to the survey.

Use whatever communication channel you normally rely on for notifying your customers. Email is great; of course, at IMVU, we had our own internal notification system. Either way, have the process gradually ramp up the number of outstanding invitations throughout the day, stopping when it's achieved 100 responses. This way, no matter what the response rate, you'll get a consistent amount of data. I also recommend that you give each invitation a unique code, so that you don't get random people taking the survey and biasing the results. I'd also recommend you let each invite expire, for the same reason.

Choose the people to invite to the survey according to a consistent formula every day. I recommend a simple lottery among people who have used your product that same day. You want to catch people when their impression of your product is fresh - even a few days can be enough to invalidate their reactions. Don't worry about surveying churned customers; you need to use a different methodology to reach them. I also normally exclude anyone from being invited to take the survey more than once in any given time period (you can use a month, six months, anything you think is appropriate).
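To make the mechanics concrete, here is a minimal sketch of what that daily invitation job might look like. The table and helper names (db, notifier, and so on) are hypothetical placeholders rather than any particular system; it's one possible shape for the batch process, assuming you store invites and responses in your own database as suggested above.

```python
import random
import uuid
from datetime import date, timedelta

SURVEY_COOLDOWN_DAYS = 180  # don't re-invite anyone within six months
INVITE_TTL_DAYS = 2         # invites expire so uninvited visitors can't skew results

def pick_invitees(db, batch_size):
    """Simple lottery among customers who used the product today,
    excluding anyone invited within the cooldown window."""
    cutoff = date.today() - timedelta(days=SURVEY_COOLDOWN_DAYS)
    active_today = set(db.user_ids_active_on(date.today()))
    recently_invited = set(db.user_ids_invited_since(cutoff))
    eligible = list(active_today - recently_invited)
    return random.sample(eligible, min(batch_size, len(eligible)))

def run_invite_batch(db, notifier, batch_size=10):
    """One iteration of the batch job; schedule it to run several times a day
    (cron, GearMan, etc.) and stop once the daily response target is met."""
    if db.responses_collected_today() >= db.daily_response_target():
        return
    for user_id in pick_invitees(db, batch_size):
        code = uuid.uuid4().hex  # unique code embedded in the survey URL
        expires = date.today() + timedelta(days=INVITE_TTL_DAYS)
        db.record_invite(user_id, code, expires)
        notifier.send(user_id, "https://example.com/nps?code=" + code)
```

The important properties are the ones described above: a consistent daily selection rule, unique expiring invite codes, and a cap on daily responses so you collect a comparable amount of data every day.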

Calculate your score
Your NPS score is derived in three steps:
  1. Divide all responses into three buckets: promoters, detractors, and others. Promoters are anyone who chose 9 or 10 on the "likely to recommend" scale, and detractors are those who chose any number from 0-6.
  2. Figure out the percentage of respondents that fall into the promoter and detractor buckets.
  3. Subtract your detractor percentage from your promoter percentage. The result is your score. Thus, NPS = P% - D%.
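For anyone who wants the arithmetic spelled out, here is a minimal sketch of that three-step calculation, assuming the raw responses are simply a list of 0-10 answers pulled from your survey database:

```python
def net_promoter_score(responses):
    """NPS = %promoters - %detractors, where promoters answered 9 or 10
    and detractors answered 0 through 6."""
    if not responses:
        raise ValueError("no responses collected yet")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100.0 * (promoters - detractors) / len(responses))

# 47 promoters, 22 detractors, and 31 passives out of 100 responses gives
# an NPS of 25, the same arithmetic behind the Lessons Learned score below.
print(net_promoter_score([10] * 47 + [5] * 22 + [8] * 31))  # prints 25
```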
You can then compare your score to people in other industries. Any positive score is good news, and a score higher than +50 is considered exceptional. Here are a few example scores taken from the official Net Promoter website:

  • Apple: 79
  • Adobe: 46
  • Google: 73
  • Barnes & Noble online: 74
  • American Express: 47
  • Verizon: 10
  • DIRECTV: 20

Of course, the most important thing to do with your NPS score is to track it on a regular basis. I used to look at two NPS-related graphs on a regular basis: the NPS score itself, and the response rate to the survey request. These numbers were remarkably stable over time, which, naturally, we didn't want to believe. In fact, there were some definite skeptics about whether they measured anything of value at all, since it is always dismaying to get data that says the changes you're making to your product are not affecting customer satisfaction one way or the other.

However, at IMVU one summer, we had a major catastrophe. We made some changes to our service that wound up alienating a large number of customers. Even worse, the way we chose to respond to this event was terrible, too. We clumsily gave our community the idea that we didn't take them seriously, and weren't interested in listening to their complaints. In other words, we committed the one cardinal sin of community management. Yikes.

It took us months to realize what we had done, and to eventually apologize and win back the trust of those customers we'd alienated. The whole episode cost us hundreds of thousands of dollars in lost revenue. In fact, it was the revenue trends that eventually alerted us to the magnitude of the problem. Unfortunately, revenue is a trailing indicator. Our response time to the crisis was much too slow, and as part of the post-mortem analysis of why, I took a look at the various metrics that all took a precipitous turn for the worse during that summer. Of everything we measured, it was Net Promoter Score that plunged first. It dropped down to an all-time low, and stayed there for the entire duration of the crisis, while other metrics gradually came down over time.

After that, we stopped being skeptical and started to pay very serious attention to changes in our NPS. In fact, I didn't consider the crisis resolved until our NPS peaked above our previous highs.

Calculating the NPS of Lessons Learned
I promised that I would reveal the NPS of this blog, which I recently took a snapshot of by offering a survey in a previous post. Here's how the responses break down, based on the first 100 people who answered the question:
  • Number of promoters: 47
  • Number of detractors: 22
  • NPS: 25
Now, I don't have any other blogs to compare this score to. Plus, the way I offered the survey (just putting a link in a single post), the fact that I didn't target people specifically to take the survey, and the fact that the invite was impersonal, are all deeply flawed. Still, all things considered, I'm pretty happy with the result. Of course, now that I've described the methodology in detail, I've probably poisoned the well for taking future unbiased samples. But that's a small price to pay for having the opportunity to share the magic of NPS.

I hope you'll find it useful. If you do, come on back and post a comment letting us all know how it turned out.



Catching a news or seasonal topic

If there is something popping in the news or that is happening soon and you can catch it right, you can find a lot of traffic. Case in point: BlackFriday.info. Take a look at its traffic curve on Alexa: Traffic for BlackFriday.info.


Letter. Writing. Campaign.


News Flash: For those of you who don't know, and I do apologize, I am pregnant with Number two. The bundle is slated to arrive mid-latish April, putting me at about 19 weeks, half way there. Okay, on with the post...

Dear Time:

Please slow down. I’m serious. My daughter was just born and now she’s two. And a half. It was just Halloween and now people are asking what I’m making for Thanksgiving. I started thinking about what to make for Thanksgiving and someone asked me if I’d started my Christmas shopping.

This has got to stop. I’m not sure how you’ll do it, but please, just slow down. Just for one day, I beg you.

Signed,

Timed Out

::

Dear Daylight Savings Time:

I’m not sure how you’re related to Time (note separate letter, above) but I just wanted to let you know that you’ve really screwed things up for me and my family. It’s been a couple of weeks now and we still can’t seem to figure ourselves out. I’m irritable and waking at odd hours, my kid tells me she’s ready for a nap at noon (which is really 1pm, her old nap time) and I can’t very well send her to bed without lunch, so we struggle through the next hour until 1pm (which is really 2pm) at which point she’s so tired that it takes her another hour to fall asleep, which she does until nearly 5pm because she’s so tired. By 5pm when she wakes it’s nearly dinnertime. She tells me she just ate (which is basically true). By the time dinner is through and the table is cleared it feels like it should be 9pm, but she’s not tired because she just slept three hours and it’s really only 7pm. Even though it’s dark enough to be midnight.

I just can’t figure you out. When are we actually in “daylight savings time” anyway? When we spring ahead or when we fall back or just always?

I’m sorry to be so feisty, but I’m tired and I’m pregnant.

Signed,

Too tired and cranky to come up with a clever sign-off

::

Dear CVS guy:

Listen, I haven’t been in your store in months, maybe even a year. And to be honest, when I do walk in there I start to have heart palpitations as it is. I’m not sure how you could fit one more Whitman’s Sampler Candy box, but you do.

At any rate, when you needed to pass me in the way-overcrowded-aisle (not with people, mind you, with stuff) all you needed to say was, “Excuse me,” and I would’ve happily moved aside.

But you didn’t.

You stood there and grunted and rolled your eyes when I didn’t even know you were there.

Maybe I’m a little more sensitive these days, being pregnant and all, but even if I weren’t pregnant I’d think you were pretty rude to someone who was just minding her own business and preparing to spend money on window candles that probably won’t work anyway.

So there, I feel better now.

Signed,


I'll huff and puff and blow that house down!

::

Dear Jesus:

I think we must be doing something right because our little two-year-old darling told me that I should talk to You the other day.

Our toaster wasn’t working right and I said, “Well, that’s a little bit of a problem” because I’d promised her toast and jam with breakfast. She told me that I should talk to You because You listen to us when we have problems and that You are everywhere.

Sigh.

Love,

Maureen


::

Dear Husband:

Remember when we were dating and first married and agreed that we’d never be like our parents and watch TV in different rooms? Heck, that we wouldn’t even watch it on different couches?

Well, I’m not sure if you’ve noticed, but we’ve started watching TV on different couches and if you keep up this Battlestar Galactica fixation, we might just end up watching it in different rooms, too.

Is this what happens when you have kids?

Love,

Wife

Ps….I’m pretty sure I have a crush on Chuck, but he kind of reminds me of you, if that makes it okay.

::

Dear Private Caller:

I’m not sure who you are or what you want, but please stop calling. At least move your pestering to the after-nap hour. One of these days you’re going to wake my little one and then I’ll really be annoyed.


Signed,

Publicly Pi**ed

::

Dear Peanut M&M’s in my cupboard:

I hear your taunts and I’m ignoring you. I am not going to open you, so please stop trying. Please.

Signed,

Stuffed well enough with my own peanut, thank you


::

Dear Olivia (of the Olivia series for children, by Ian Falconer):
I like your sass and all, but we have to talk about all this standing on tables and chairs business that you seem to enjoy (and get away with).

To date, my 2 year old is a great rule follower. She knows not to "write on people" and to "sit on her bottom." But when we read your stories and you are doing all of the above, it's planting a seed that I'm afraid is about to sprout.

So, at least if you're going to do those things, maybe your mom and dad could at least correct you on it. Publicly.

Thanks,

Mama of a fan


10 Social Media myths

Don’t Believe these 10 Social Media Myths. From the article: The importance of establishing a strong social media presence has been discussed to exhaustion. You know that you need to be an active user on these sites because social media can produce numerous benefits for you and your brand. Of course, as the bandwagon for social media has filled up, many myths have been spread. Another myth that I


Lo, my 1032 subscribers, who are you?

When I first wrote about the advantages of having a pathetically small number of customers, I only had 5 subscribers. When I checked my little badge on the sidebar today, I was shocked to see it read 1032. As it turns out, it was much harder to get those first five subscribers than the next thousand, thanks to great bloggers like Andrew Chen, Dave McClure, and the fine folks over at VentureHacks. Thank you all for stopping by.

Of course, 1000 customers is pretty pathetically small too. When startups achieve that milestone, it's a mixed blessing. On the one hand, having a little traction is a good thing. But on the other hand, figuring out what's going on starts to get more difficult. You can't quite talk to everyone on the phone. You have to start filtering and sorting; deciding which feedback to listen to and which loud people to ignore. It's also time to start thinking about customer segments. Do you have a particular set of early adopters that share some common traits? If so, they might be pointing the way towards a much bigger set of people who share those traits, but are not early adopters.

Let's take an example of a startup I was advising a few years ago. Of their early customers, about 1/3 of them turned out to be high school or middle school teachers. This wasn't an education product - it was a pretty surprising group to find using it. What all these teachers had in common were two things: they were technology early adopters that were willing to take a chance on a new software product, and they all had similar problems organizing their classes and students. At that early stage, it was the company's first glimpse of what a crossing the chasm strategy might look like: use these early adopters to build a whole product for the education market. Then sell it to mainstream educators, schools, and school districts, who shared the same problem of organizing classes, but were not themselves early adopters.

So how do you get started with customer segmentation? If you've already been talking to customers one-on-one, don't stop now (and if you haven't, this is still a good time to start). Those conversations are the best way to look for patterns in the noise. As you start to see them, collect your hypotheses and start using broader-reach tools to find out how they break down. I would recommend periodic surveys, along with some kind of forum or other community tool where the most passionate customers can congregate. You can also use Twitter, your blog (with comments), or even a more structured tool like uservoice.

I'd start with a simple survey (I use SurveyMonkey), combining the NPS question with a handful of more in-depth optional questions. In fact, I feel like I should eat my own dogfood, take my own medicine, or whatnot. Here's my survey for Lessons Learned:
As a loyal subscriber, I'd like to invite you to take the first Lessons Learned customer survey: Click Here to take survey
I put this together using the free version of SurveyMonkey, to show just how easy it is. If you're serious about this, you probably want to use their premium version, which will let you do things like add logic to let people easily skip the second page if they choose to, and send them to a "thank you page" afterward. Be sure to make the thank you page have a call to action (like a link to subscribe, for example) - after all, you're dealing with a customer passionate enough to talk to you.

So, to those of you who take the time to fill out the survey: thanks for the feedback! And to everyone who's taken the time to read, comment, or subscribe: thank you.

ScienceDaily: Corporate culture is most important factor in driving innovation

Some recent research into what makes innovation happen inside companies:
Corporate Culture Is Most Important Factor In Driving Innovation: "Looking at data from 759 firms across 17 countries the researchers found that location is not the determining factor in the degree to which any given firm is innovative; but rather, the innovative firms themselves share key internal cultural traits. Innovation appears to be a function of the degree to which a company fosters a supportive internal structure headed by product champions and bolstered by incentives and the extent to which that organization is able to change quickly"
The concept of a strong product champion is a recurring theme in successful product development organizations, large and small. It's even more critical in lean startups when they need to manage growth.

I believe it's important that product teams be cross-functional, no matter what other job function the product champion does. At IMVU, we called this person a Producer (revealing our games background); in Scrum, they are called the Product Owner. At Toyota, they are called Chief Engineer:
Toyota realizes that the Chief Engineer job is probably the most important one in the company because the Chief Engineer listens to the customer and then determines what the functions need to do to address the customer’s desires. Thus the power of the Chief Engineer is very large even though he (and they are all men so far) has no direct reports other than a secretary and a few assistants who are themselves being trained to be chief engineers.

The job of the Chief Engineer is to determine the needs of the product and then to negotiate with the heads of body engineering, drive train engineering, manufacturing engineering, production, purchasing, etc., about what their function needs to do to fully support the product. Once an agreement is reached, the Chief Engineer continually watches to make sure that the functions are following through. In the event there is an irreconcilable difference between Chief Engineer and function head, the issue can be elevated to a very high level, but apparently this doesn’t happen.

Great companies build highly adaptable teams, empower leaders to run them, and have high standards of accountability. I will share some further thoughts on how to build strong cross-functional teams in part three of The four kinds of work, and how to get them done.


Viral growth

Viral growth has its limitations, as described here: Three myths of Viral Growth


The four kinds of work, and how to get them done: part two

In part one, I talked about four different kinds of work that every company has to do: innovation/R&D, strategy, growth, and maintenance/scalability. When startups grow, they tend to have problems handling the inevitable conflicts that emerge from having to do multiple kinds of work all at once. In order to grow effectively, it's important to have a technique that mitigates these problems.

I ended part one with two questions: Why do these different kinds of work cause problems? And why do those problems seem to get worse as the company grows? Let's get to the answers.
  1. Apples-and-oranges trade-offs. It's extremely difficult to make intelligent trade-offs between things that are not at all alike. For example, should we invest a week of engineering time into making our website more failure-proof (which we're pretty sure will pay off right away) or into experimenting with a new technology (that might pay off in months, years, or never)? If I have some budget for outside help, should I hire a vendor to help us drive down our payment fraud rates by 1% (ROI easy to predict), or hire a market research firm to give us insights into potential customers (ROI hard to predict)? It's much easier to make trade-offs within a single kind of work than across types of work.

  2. People have natural affinity for some kinds of work. Even worse, in my opinion, is that I know there are at least a few readers out there who read the previous paragraph and thought, "those aren't hard choices to make; it's obvious what you should choose..." That's because most people have a natural affinity for certain kinds of work. Have you met that prickly operations guy who seems to love servers more than people (but would never let them fail on his watch)? Or the zany innovator who just can't comprehend schedules but always has a new trick up her sleeve? Those are the natural leaders of the kind of work they were born to do. But they are often counter-productive when placed in a management role for other kinds of work. Sometimes, just having other kinds of work being done nearby is enough to drive them crazy. This can lead to a lot of needless politics and needless suffering if it is not proactively managed.

  3. People get trapped doing the wrong kind of work. Successful products and features have a natural lifecycle. They are born in R&D, become part of the company's DNA in Strategy, delight zillions of customers in Growth, and eventually become just another box on a Maintenance checklist somewhere. The problem is that people who were essential to the product in a previous phase can get carried along and find themselves stuck downstream. For example, the original innovator from R&D can find himself the leader of a team tasked with executing incremental growth, because he understands the feature better than anyone. Or a critical engineer, who wrote the breakthrough code that first helped a feature achieve scale is considered "too essential" to be relieved of responsibility for maintaining it. This has two bad consequences: it puts people in jobs that they are not ideally suited for, and it reduces degrees of freedom for management to make optimal resource allocation decisions. If your top performers are all stuck in Growth and Maintenance, who do you have left in R&D and Strategy?
To mitigate these problems, we need a process that recognizes the different kinds of work a company does, and creates teams to get them done. It has to balance competing goals of establishing clear ownership, while avoiding talented employees getting stuck.

In part three, I'll lay out the criteria for such a process, and describe the techniques I've used to make it work.


The four kinds of work, and how to get them done: part one

I've written before about some of the advantages startups have when they are very small, like the benefits of having a pathetically small number of customers. Another advantage of the early stages is that most don't have to juggle too many competing priorities. If you don't have customers, a product, investors, or a board of directors, you can pretty much stay focused on just one thing at a time.

As companies grow, it becomes increasingly important to build an organization that can execute in multiple areas simultaneously. I'd like to talk about a technique I've used to help manage this growth without slowing down.

This technique rests on three things: identifying the kinds of work that need to get done, creating the right type of teams for each kind, and steering the company by allocating resources among them. For this analysis, I am heavily indebted to Geoff Moore, who laid out the theoretical underpinnings of this approach (and describes how to use it for companies of all sizes and scales) in Dealing with Darwin: How Great Companies Innovate at Every Phase of Their Evolution.

Four kinds of work
  1. Innovation / R&D - this is what all startups do in their earliest stages. Seeing what's possible. Playing with new technologies. Building and testing prototypes. Talking to potential customers and competitors' customers. In this kind of work, it's hard to predict when things will be done, what impact they will have, and whether you're making progress. Managers in this area have to take a portfolio approach, promoting ideas that work and might make good candidates for further investment. The ideal R&D team is a small skunkworks that is off the radar of most people in the company. A "startup within the startup" feeling is a good thing.

  2. Strategy - startups first encounter this when they have the beginnings of a product, and they've achieved some amount of product/market fit. Now it's time to start to think seriously about how to find a repeatable and scalable sales process, how to position and market the product, and how to build a product development team that can turn an early product into a Whole Product. As the company grows, this kind of work generalizes into "executing the company's current strategy." Usually, that will be about finding new segments of customers that the company can profitably serve. It's decidedly not about making incremental improvements for current customers - that's a different kind of work altogether. This kind of work requires the most cross-functional of teams, because it draws on the resources of the whole company. And although schedules and prediction are difficult here, they are critical. It's essential to know if the strategy is fundamentally working or failing, so the company can chart its course accordingly.

    Your strategy might be wrong; it might take a long time to pay off; it might even pay off in completely unexpected ways. That is why it is unwise to devote 100% of your resources to your current strategy. If you invest in strategy at the expense of innovation, you risk being unprepared for the next strategy (or achieving tunnel-vision in which everyone drinks the Kool-Aid). If you invest in strategy at the expense of growth, you can starve yourself of the resources you need to implement the strategy. And if you neglect maintenance, you may not have a business left at all.

  3. Growth - when you have existing customers, the pressure is on to grow your key metrics day-in day-out. If you're making revenue, you should be finding ways to grow it predictably month-over-month; if you're focused on customer engagement, your product should be getting more sticky, and so on. Some companies and founders refuse to serve existing customers, and are always lurching from one great idea to the next. Others focus exclusively on incremental growth, and can never find the time or resources for strategy. Either extreme can be fatal. This kind of work is where schedules, milestones, and accurate estimates thrive. Since the work is building on knowledge and systems built in the past, it's much more likely to get done on-time, on-budget, and to have a predictable effect on the business. Growth work calls for relentless executors, who know how to get things done.

  4. Maintenance and scalability - "keeping the lights on" gets harder and harder as companies grow. Yet the great companies manage to handle growth while keeping the resources dedicated to maintenance and scalability mostly fixed. That means they are continuously getting better and better at automating and driving out waste. Continuous improvement here frees up time and energy for the parts of the company that find new ways to make money. Often a company's unsung heroes are doing this kind of work: invisible when doing a good job, all-too-visible when something goes wrong. These teams tend to be incredibly schedule and process-centric, with detailed procedures for anything that might happen.
Companies of any size do all these kinds of work, and do them well. You don't need any special process to make it happen, just good people who are committed to making the company successful. So why do these different kinds of work cause problems? And why do those problems seem to get worse as the company grows?

We'll talk about those problems in detail in part two.

Crowdsourcing

Another source of content to consider: Crowdsourced Design: Is crowdSPRING the next iStockPhoto?


Achieving success

A 7 Step No-Waffle Plan for Real World Success


Five Whys

Taiichi Ohno was one of the inventors of the Toyota Production System. His book Toyota Production System: Beyond Large-Scale Production is a fascinating read, even though it's decidedly non-practical. After reading it, you might not even realize that there are cars involved in Toyota's business. Yet there is one specific technique that I learned most clearly from this book: asking why five times.

When something goes wrong, we tend to see it as a crisis and seek to blame. A better way is to see it as a learning opportunity. Not in the existential sense of general self-improvement. Instead, we can use the technique of asking why five times to get to the root cause of the problem.

Here's how it works. Let's say you notice that your website is down. Obviously, your first priority is to get it back up. But as soon as the crisis is past, you need the discipline to hold a post-mortem in which you start asking why:
  1. Why was the website down? The CPU utilization on all our front-end servers went to 100%.
  2. Why did the CPU usage spike? A new bit of code contained an infinite loop!
  3. Why did that code get written? So-and-so made a mistake.
  4. Why did his mistake get checked in? He didn't write a unit test for the feature.
  5. Why didn't he write a unit test? He's a new employee, and he was not properly trained in TDD.
So far, this isn't much different from the kind of analysis any competent operations team would conduct for a site outage. The next step is this: you have to commit to make a proportional investment in corrective action at every level of the analysis. So, in the example above, we'd have to take five corrective actions:
  1. bring the site back up
  2. remove the bad code
  3. help so-and-so understand why his code doesn't work as written
  4. train so-and-so in the principles of TDD
  5. change the new engineer orientation to include TDD
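To make the bookkeeping of the example above concrete, here is a minimal sketch, in Python, of how a team might record a five whys chain alongside the proportional corrective action committed to at each level. The names and structure are hypothetical illustrations of the technique, not a description of any actual tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WhyStep:
    question: str           # the "why" asked at this level
    answer: str             # what the post-mortem uncovered
    corrective_action: str  # the proportional fix committed to at this level

@dataclass
class FiveWhysRecord:
    incident: str
    steps: List[WhyStep] = field(default_factory=list)

    def as_email(self) -> str:
        """Render the analysis as the plain-English summary to share widely."""
        lines = [f"Incident: {self.incident}", ""]
        for depth, step in enumerate(self.steps, start=1):
            lines.append(f"{depth}. {step.question}")
            lines.append(f"   Answer: {step.answer}")
            lines.append(f"   Corrective action: {step.corrective_action}")
        return "\n".join(lines)

# Mirroring the outage walked through above.
record = FiveWhysRecord(
    incident="Website outage",
    steps=[
        WhyStep("Why was the website down?",
                "CPU utilization on all front-end servers hit 100%",
                "Bring the site back up"),
        WhyStep("Why did the CPU usage spike?",
                "A new bit of code contained an infinite loop",
                "Remove the bad code"),
        WhyStep("Why did that code get written?",
                "So-and-so made a mistake",
                "Help him understand why his code doesn't work as written"),
        WhyStep("Why did the mistake get checked in?",
                "He didn't write a unit test for the feature",
                "Train him in the principles of TDD"),
        WhyStep("Why didn't he write a unit test?",
                "He's a new employee and wasn't trained in TDD",
                "Change new-engineer orientation to include TDD"),
    ],
)
print(record.as_email())
```

The useful property of this layout is that every "why" is paired with exactly one corrective action, which makes it easy to check that the investment stays proportional rather than turning into a ground-up rewrite.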
I have come to believe that this technique should be used for all kinds of defects, not just site outages. Each time, we use the defect as an opportunity to find out what's wrong with our process, and make a small adjustment. By continuously adjusting, we eventually build up a robust series of defenses that prevent problems from happening. This approach is at the heart of breaking down the "time/quality/cost pick two" paradox, because these small investments cause the team to go faster over time.

I'd like to point out something else about the example above. What started as a technical problem actually turned out to be a human and process problem. This is completely typical. Our bias as technologists is to over-focus on the product part of the problem, and five whys tends to counteract that tendency. It's why, at my previous job, we were able to get a new engineer completely productive on their first day. We had a great on-boarding process, complete with a mentoring program and a syllabus of key ideas to be covered. Most engineers would ship code to production on their first day. We didn't start with a great program like that, nor did we spend a lot of time all at once investing in it. Instead, five whys kept leading to problems caused by an improperly trained new employee, and we'd make a small adjustment. Before we knew it, we stopped having those kinds of problems altogether.

It's important to remember the proportional investment part of the rule above. It's easy to decide that when something goes wrong, a complete ground-up rewrite is needed. It's part of our tendency to over-focus on the technical and to over-react to problems. Five whys helps us keep our cool. If you have a severe problem, like a site outage, that costs your company tons of money or causes lots of person-hours of debugging, go ahead and allocate about that same number of person-hours or dollars to the solution. But always have a maximum, and always have a minimum. For small problems, just move the ball forward a little bit. Don't over-invest. If the problem recurs, that will give you a little more budget to move the ball forward some more.

How do you get started with five whys? I recommend that you start with a specific team and a specific class of problems. For my first time, it was scalability problems and our operations team. But there is no right answer - I've run this process for many different teams. Start by having a single person be the five whys master. This person will run the post-mortem whenever anyone on the team identifies a problem. Don't let them do it by themselves; it's important to get everyone who was involved with the problem (including those who diagnosed or debugged it) into a room together. Have the five whys master lead the discussion, but they should have the power to assign responsibility for the solution to anyone in the room.

Once that responsibility has been assigned, have that person email the whole company with the results of the analysis. This last step is difficult, but I think it's very helpful. Five whys analyses should read like plain English. If they don't, you're probably obfuscating the real problem. The advantage of sharing this information widely is that it gives everyone insight not only into the kinds of problems the team is facing, but also into how those problems are being tackled. And if the analysis is airtight, it makes it pretty easy for everyone to understand why the team is taking some time out to invest in problem prevention instead of new features. If, on the other hand, it ignites a firestorm - that's good news too. Now you know you have a problem: either the analysis is not airtight, and you need to do it over again, or your company doesn't understand why what you're doing is important. Figure out which of these situations you're in, and fix it.

Over time, here's my experience with what happens. People get used to the rhythm of five whys, and it becomes completely normal to make incremental investments. Most of the time, you invest in things that otherwise would have taken tons of meetings to decide to do. And you'll start to see people from all over the company chime in with interesting suggestions for how you could make things better. Now, everyone is learning together - about your product, process, and team. Each five whys email is a teaching document.

Let me show you what this looked like after a few years of practicing five whys in the operations and engineering teams at IMVU. We had made so many improvements to our tools and processes for deployment, that it was pretty hard to take the site down. We had five strong levels of defense:
  1. Each engineer had his/her own sandbox which mimicked production as closely as possible (whenever it diverged, we'd inevitably find out in a five whys shortly thereafter).
  2. We had a comprehensive set of unit, acceptance, functional, and performance tests, and practiced TDD across the whole team. Our engineers built a series of test tags, so you could quickly run a subset of tests in your sandbox that you thought were relevant to your current project or feature.
  3. 100% of those tests ran, via a continuous integration cluster, after every checkin. When a test failed, it would prevent that revision from being deployed.
  4. When someone wanted to do a deployment, we had a completely automated system that we called the cluster immune system. This would deploy the change incrementally, one machine at a time. That process would continually monitor the health of those machines, as well as the cluster as a whole, to see if the change was causing problems. If it didn't like what was going on, it would reject the change, do a fast revert, and lock deployments until someone investigated what went wrong.
  5. We had a comprehensive set of Nagios alerts that would trigger a pager in operations if anything went wrong. Because five whys kept turning up a few key metrics that were hard to set static thresholds for, we even had a dynamic prediction algorithm that would make forecasts based on past data, and fire alerts if the metric ever went out of its normal bounds. (You can even read a cool paper one of our engineers wrote on this approach.)
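To illustrate the last item, here is a minimal sketch of one way dynamic thresholds can work: keep a rolling baseline of a metric's recent history and page when the current value wanders outside its normal band. The rolling mean/standard-deviation band is my own simplification for illustration, not the forecasting algorithm from the paper mentioned above.

```python
from collections import deque
from statistics import mean, stdev

class DynamicThresholdAlert:
    """Fire when a metric leaves the band implied by its own recent history."""

    def __init__(self, window: int = 60, tolerance: float = 3.0):
        self.history = deque(maxlen=window)  # recent samples of the metric
        self.tolerance = tolerance           # std deviations that still count as normal

    def observe(self, value: float) -> bool:
        """Record a new sample; return True if it should trigger a page."""
        out_of_bounds = False
        if len(self.history) >= 2:
            baseline = mean(self.history)
            spread = stdev(self.history) or 1e-9  # avoid a zero-width band
            out_of_bounds = abs(value - baseline) > self.tolerance * spread
        self.history.append(value)
        return out_of_bounds

# Feed the alert one sample per monitoring interval; the last sample here
# is far outside the band established by the earlier ones, so it pages.
alert = DynamicThresholdAlert(window=60, tolerance=3.0)
for sample in [100.0, 102.0, 98.0, 101.0, 99.0, 240.0]:
    if alert.observe(sample):
        print(f"metric out of its normal bounds: {sample} -- page operations")
```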
So if you had been able to sneak over to the desk of any of our engineers, log into their machine, and secretly check in an infinite loop on some highly-trafficked page, here's what would have happened. Somewhere between 10 and 20 minutes later, they would have received an email with a message more-or-less like this: "Dear so-and-so, thank you so much for attempting to check in revision 1234. Unfortunately, that is a terrible idea, and your change has been reverted. We've also alerted the whole team to what's happened, and look forward to you figuring out what went wrong. Best of luck, Your Software." (OK, that's not exactly what it said. But you get the idea.)

Having this series of defenses was helpful for doing five whys. If a bad change got to production, we'd have a built-in set of questions to ask: why didn't the automated tests catch it? why didn't the cluster immune system reject it? why didn't operations get paged? and so forth. And each and every time, we'd make a few more improvements to each layer of defense. Eventually, this let us do deployments to production dozens of times every day, without significant downtime or bug regressions.
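For readers who want to picture the cluster immune system more concretely, here is a rough sketch, assuming hypothetical deployment and health-check hooks, of the control loop it implies: deploy one machine at a time, watch health after each step, and revert and lock deployments at the first sign of trouble. This is an illustration of the idea, not IMVU's actual implementation.

```python
import time

def deploy_incrementally(revision, machines, apply_revision, revert_revision,
                         is_healthy, lock_deployments, soak_seconds=60):
    """Roll `revision` out one machine at a time; revert everything and lock
    deployments at the first sign of trouble. Every callable here is a
    placeholder for real deployment and monitoring infrastructure."""
    deployed = []
    for machine in machines:
        apply_revision(machine, revision)
        deployed.append(machine)
        time.sleep(soak_seconds)  # let the change soak before judging its health
        if not all(is_healthy(m) for m in machines):  # machines and cluster as a whole
            for m in deployed:                        # fast revert of everything touched
                revert_revision(m)
            lock_deployments(reason=f"revision {revision} failed health checks")
            return False
    return True
```

The important design choice is that the system errs on the side of rejecting changes: a suspicious deploy is reverted and deployments stay locked until a person investigates, which is exactly the trigger for the next five whys.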

One last comment. When I tell this story to entrepreneurs and big-company types alike, I sometimes get this response: "well, sure, if you start out with all those great tools, processes and TDD from the beginning, that's easy! But my team is saddled with zillions of lines of legacy code and ... and ..." So let me say for the record: we didn't start with any of this at IMVU. We didn't even practice TDD across our whole team. We'd never heard of five whys, and we had plenty of "agile skeptics" on the team. By the time we started doing continuous integration, we had tens of thousands of lines of code, none of it under test coverage. But the great thing about five whys is that it has a Pareto principle built right in. Because the most common problems keep recurring, your prevention efforts are automatically focused on the 20% of your product that needs the most help. That's also the same 20% that causes you to waste the most time. So five whys pays for itself awfully fast, and it makes life noticeably better almost right away. All you have to do is get started.

So thank you, Taiichi Ohno. I think you would have liked seeing all the waste we've been able to drive out of our systems and processes, all in an industry that didn't exist when you started your journey at Toyota. And I especially thank you for proving that this technique can work in one of the most difficult and slow-moving industries on earth: automobiles. You've made it hard for any of us to use the most pathetic excuse of all: surely, that can't work in my business, right? If it can work for cars, it can work for you.

What are you waiting for?


Things to think about when creating a site

18 Rules the Best Web Developers Follow


Where did Silicon Valley come from?

Those of us who have had the privilege of working in the premier startup hub in the world often take its advantages for granted. Among those: plentiful financing and nerds, a culture that celebrates both failure and success, and an ethos of openness and sharing. It's useful to look back to understand how we got those advantages. It's not a side-effect of some secret mineral in the water: it was painstakingly crafted by the people who came before us. And what may surprise you is how many of those people were part of the military-industrial complex.

I think the absolute best reading on this subject is a book called Regional Advantage: Culture and Competition in Silicon Valley and Route 128 by AnnaLee Saxenian. It's an academic treatise that tries to answer a seemingly straightforward question: after World War II, why did Silicon Valley become the undisputed leader of the technology world, while Boston's Route 128 corridor did not? To an early observer, it would have seemed obvious that Route 128 had all the advantages: a head start, more government and military funding, and far more established companies. And although both regions had outstanding research universities, MIT was way ahead of Stanford by every relevant measure. However...
While both Stanford and MIT encouraged commercially oriented research and courted federal research contracts in the postwar years, MIT's leadership focused on building relations with government agencies and seeking financial support from established electronics producers. In contrast, Stanford's leaders, lacking corporate or government ties or even easy proximity to Washington, actively promoted the formation of new technology enterprises and forums for cooperation with local industry.

This contrast — between MIT's orientation toward Washington and large, established producers and Stanford's promotion of collaborative relationships among small firms — would fundamentally shape the industrial systems emerging in the two regions.
The book is really fun to read (how often do you see an academic tome crossed with a real whodunit?). It's important not just for historical reasons, but because we are often called upon to take sides in current debates that impact the way our region and industry will develop. Just to pick one: will software patents, NDAs and trade secret laws make it harder for people to share knowledge outside of big companies? We need to work hard, as previous generations did, to balance the needs of everyone in our ecosystem. Otherwise, we risk sub-optimizing by focusing only on one set of players.

However, even that fascinating history is not the whole story. You might be wondering: who were those brilliant people who made the key decisions to mold Silicon Valley? And what were they doing beforehand? Steve Blank, who I've written about recently in a totally different context, has attempted to answer these questions in a talk called "Hidden in Plain Sight: The Secret History of Silicon Valley." If you're in the Bay Area, you have the opportunity to see it live: he's giving the talk at the Computer History Museum next Thursday, November 20:
Hear the story of how two major events – WWII and the Cold War – and one Stanford professor set the stage for the creation and explosive growth of entrepreneurship in Silicon Valley. In true startup form, the world was forever changed when the CIA and the National Security Agency acted as venture capitalists for this first wave of entrepreneurship. Learn about the key players and the series of events that contributed to this dramatic and important piece of the emergence of this world renowned technology mecca.
If you can't make it, you can take a look at this sneak peek of the slides, courtesy of the author:



In addition to learning who to thank (Frederick Terman and William Shockley), you'll get a behind-the-scenes look at World War II and the Cold War from an electronics perspective. Fans of Cryptonomicon will have a blast.
