Ricotta Cheese

Our host for February was Lauren of I'll Eat You. (The picture in this post is by KatBaro of A Good Appetite.) Here is Lauren's post.

I have been making soft cheeses at home for about a year now. They are surprisingly simple, and there are many that don't require special equipment. While I would love for us all to go out and buy cheese molds and cultures and age our own Gouda, we are going to be making fresh ricotta, which can be made in your own kitchen with things you already own. (Bon Appetit kinda stole my thunder on this, but whatever).

Fresh Ricotta
you'll need:
1 gallon milk (you can use 1 percent on up; remember that the more fat in the milk, the more cheese it will yield)
1 quart buttermilk

-cheesecloth (a good, tightly woven one, not the kind you buy at the supermarket). If you don't have one, you can get by with a slotted spoon, but you may lose some of the cheese.

-a thermometer (mine is for oil and candy)


Place the buttermilk and milk in a pot and heat on medium-low until the mixture reaches 185 degrees.

It will begin to separate into curds and whey. Be sure to stir occasionally to make sure no curds stick to the bottom and burn. You will see that as the temperature approaches 185, the whey becomes clearer as the curds coagulate more.


Pour the curds into a cheesecloth-lined colander. Tie the ends of the cheesecloth together and hang for 10-15 minutes. Remove from the cheesecloth and place in an airtight container.

Voila! Cheese!

Here is a link to a post about making ricotta, with pictures.

Some tips:

You can use milk that has been pasteurized, but not ultra-pasteurized. Ultra-pasteurization heats the milk too much and denatures the proteins that form curds. You will not get cheese from ultra-pasteurized milk. Sorry.

make sure your pots and other equipment are very clean before starting

you can make any amount as long as you stick to a ratio of 4 parts milk to 1 part buttermilk (the recipe above uses 1 gallon of milk, which is 4 quarts, to 1 quart of buttermilk).


From the Forum:
The ricotta was great. I had it drain longer, so it was drier; I personally like it drier. I used unsweetened soy milk, but there was still a hint of sweetness in some of the bites.
Debyi of Healthy Vegan Kitchen

Can't believe how simple it was, especially when we've been paying 18 bucks a pop for cream cheese.
Kavs of The Girl Next Kitchen

I made my cheese on Saturday and it tastes the same as a fresh cheese we have in Spain called Mató.
Olga of Las Recetas de Olga


The lean startup at UC Berkeley Haas School of Business

[Image: U.C. Berkeley Haas, by Jessica_Mah via Flickr]

Last week I had the opportunity to lecture again in Steve Blank's entrepreneurship class at Haas, which is always a great learning experience - for me at least! This time, I was lucky enough to have my friend Nivi from Venture Hacks in the audience, and he was recording. That means today, in addition to the slides themselves, you can listen to the whole talk in its hour-long glory, Q&A included. I liked the flattering Venture Hacks commentary so much I'll just quote it:

Many founders believe that early stage startups are endeavors of execution. The customer is known, the product is known, and all we have to do is act.

Eric takes a different approach. He believes that many early stage startups are labors of learning. The customer is unknown, the product is unknown, and startups must be built to learn.

...

It represents the triumph of learning over the naive startup creation myths we read about in the media.

IMVU learned to learn. This process can be replicated at your company. Please do try this at home.

As usual, I learned a ton from the students and their insightful questions. Some highlights, at least for me:
  • We walked through a great natural experiment: what would happen if a big company tried to compete with you, given complete knowledge of your idea and designs? We actually lived this situation, and got to validate an answer: a process of rapid iteration can beat massive strategic investments from a big competitor.

  • What happened when we got early press (circa 2004) in violation of our own no-PR rule. Learn why you don't want to do premature press, despite all the pressure you'll feel to "launch" early.

  • I especially enjoyed the discussion of the power of fact-based decision making. When we'd go into a product planning meeting, often an engineer or product person would have already run a small scale A/B experiment, and so already have some data to go on. That led to much shorter (and better!) meetings than the old opinion based marathon planning sessions.
I've embedded the audio and slides below, but I recommend you click through to the Venture Hacks article if you're interested.

A new site

Here is a new site created by a single person. It is simple in concept and nicely implemented. I do not know how long it will last, because he may have underestimated bandwidth costs, but perhaps he has deep pockets or receives enough donations. Excellent URL too: Imgur.com. His description is simple enough: "imgur is an image sharer. It allows you to upload a picture, manipulate it, and share it with..."


Please teach kids programming, Mr. President

Of course, what I really mean is: let them teach themselves. I'll explain in a moment.

After volunteering for the Obama campaign last year, a friend of mine insisted that I write a letter to our new President telling him what I thought he should do. This post is the result. Now, I don't pretend to be an expert on macroeconomics, international grand strategy or even enough of a policy wonk to make serious recommendations on how best to implement Issue X Reform. I can only speak to my little corner of the American experience. Here's what I do know:
  1. The future strength of our economy depends on its ability to create, support, and sustain entrepreneurs. (If you are somehow not convinced of this point, I'll let Fareed Zakaria explain)
  2. We know who the next generation of entrepreneurs are going to be. They are in school, right now, all across this country.
  3. They are nerds.
I'm not offering extensive studies or research to support this conclusion; the evidence from my peers right here in the innovation capital of America, Silicon Valley, is absolutely overwhelming. Almost to a person, we learned the key skills that would enable us to compete in this new economy in between shifts of highly regimented classes and turns of humiliation at the hands of our more popular peers. (See Paul Graham's Why Nerds are Unpopular to learn more)

Take a look at this article on a programming Q&A site: How old are you, and how old were you when you started coding? There are over forty pages of responses from programmers of all ages, and if you just read the stories at random, you'll see a clear pattern. (Or, if you prefer a more quantitative analysis, one of the commenters has helpfully summarized them in graph form. We are nerds, after all.) Here's the most striking thing about the statistics of this post: the average "age when started programming" is 13. Think of how many 10-year-olds there must be in the data to balance out the occasional person who started mid-career.

That data is completely consonant with the people I know who are successful technologists today, and similar patterns are documented in each recent wave of technology innovation. I am especially grateful to Malcolm Gladwell for reintroducing the stories of people like Bill Gates and Bill Joy into the mainstream discourse. What's striking about these stories, if you get past the PR hype, are two very important themes:
  1. These prodigies were self-taught, and had a fundamental fascination with technology from a very young age.

  2. Their stories would not have been possible without access to sympathetic adults with the necessary equipment and knowledge to get started.
It's this second point I want to emphasize. I've seen it first-hand in my own story. I learned programming on my own, because I thought it was fun. It was only years later that I discovered the shocking truth that you could get paid to do it for a living. But I was also very lucky. I went to excellent public schools (so-called science magnets) that had computer classes, which meant they had networked computers that I could use. My parents had computers in the house for as long as I can remember, and tolerated my obsession with them despite the fact that it seemed like a strange hobby with no obvious benefits. Without their support, there is no chance I would be where I am today. They didn't teach me to program, but they allowed me to teach myself. That's what I'd like every child to have the opportunity to do.

My belief is that, right now, even in the worst and most under-served schools in the country, there are kids with the same potential as Bill Joy. They are probably bored. They are getting beat up by their peers, getting into trouble with their teachers, and generally having a pretty bad time. Those are the kids I think we have an obligation and an opportunity to reach. I don't think we can rescue them from humiliation (that would require a seriousness about education reform I don't see any evidence we're ready for), but I do think we can offer them an escape. And it just so happens that escape is to an activity essential for the future of our civilization. I think it's a pretty good deal.

I didn't learn to program from school, although it sometimes happened at school. In fact, it often got me in trouble. We were supposed to learn how to use computers via a carefully structured curriculum that taught us basic concepts one at a time, slowly advancing the whole class through a regimented program. You've probably read accounts like this, from other arrogant nerds, but bear with mine: in the first week, my nerdy friends and I had already mastered the whole curriculum. We spent the rest of our time pretending to work on the assigned homework, but really trying to do interesting side projects, like sending juvenile messages across the school network or building primitive video games. We did our best not to get caught by our teachers or noticed by our peers. Our fear was well substantiated: both had severe consequences.

Later, I discovered the incredible world of online role-playing games, called MUDs. These were primitive open-ended video games created by the players themselves, using simple programming languages. I spent endless hours getting the world's best introduction to object-oriented programming, and I didn't even know I was doing work. MUDs made the essential truth about software into a powerful metaphor: that code is magic, giving those who wield it the ability to create new forms of value literally out of thin air. We also learned that law is code, and that leadership was needed to build thriving communities in a digital age. You can find the origins of many successful companies in these early lessons.

So all I'm asking, on behalf of the thousands of nerds who could one day change the world for the better, is that we give them access to simple, open, programmable devices; a little time to work on them; and a safe space to work in. They'll take it from there. They don't need adult supervision or a certified curriculum. If we network them together, they'll answer each other's questions and collaborate on projects we can hardly imagine.

Those of us who made it stand ready to do our part. Given the opportunity, we will build the systems these kids need. We will answer their questions. We will mentor them to get them started, and give them jobs and internships when they are ready. Asked to help, I am confident that Silicon Valley and every other innovation center will step up.

But I do think this requires participation from the public sector, too. There are three threats that are limiting the opportunity to unlock these kids' creativity:
  1. Inequity of access. Too many kids today don't have access to computers, cell phones, video games or other programmable devices. We need to leverage every part of our public infrastructure, including public schools and libraries, to make access universal for those who want to learn programming. This doesn't have to be expensive - in fact, many of the physical devices are already in place. But we need to open up access to kids so that they can use, program, and remix them on their own terms.

  2. DRM and other restrictions. Increasingly, today's computers and video games are not programmable, they are locked to their users. There would be no Microsoft, Sun Microsystems, or countless other job-creating tech companies today if early computers required corporate authorization to use.

    When I was a kid, the way I logged onto the internet for the first time (to play MUDs, naturally) was through an open dial-up console at San Diego State University. When I say open, it's hard to believe how open it was: just dial the number, and you were dropped directly at a UNIX prompt. No logins, no codes, just raw uncensored internet access.

  3. School hostility to phones, nerds, and other things they don't understand. An awful lot of kids have cell phones, and schools are busy banning them from classrooms. What a lost opportunity! Kids are voluntarily bringing a portable networked supercomputer to class, and we want to restrict them to pencil and paper?

    A modern phone like the iPhone is a miraculous device. But it's not very open, and not very programmable, unless you have an expensive Mac and an approved developer license. We need to think about how to make these devices programmable by their users, so that they can grow and share as soon as the innovation bug bites them. You might not enjoy typing in code on such a small device, but kids don't mind. I know; in class I used to write video games for the TI-82, a graphing calculator provided to me by my school. Sure it was tedious, but compared to the alternatives, I thought it was great.
Each of these trends will need to be countered by sensible public policy, and that's what I am hoping for from our new administration.

So that's my plea on behalf of nerds everywhere. If you're interested in helping them out, leave a note in the comments.

Work in small batches

Software should be designed, written, and deployed in small batches.

Of all of the insights I've contributed to the companies I've worked at over the years, the one I am most proud of is the importance of working in small batches. It's had tremendous impact in many areas: continuous deployment, just-in-time scalability, and even search engine marketing, to name a few. I owe it originally to lean manufacturing books like Lean Thinking and Toyota Production System.

The batch size is the unit at which work-products move between stages in a development process. For software, the easiest batch to see is code. Every time an engineer checks in code, they are batching up a certain amount of work. There are many techniques for controlling these batches, ranging from the tiny batches needed for continuous deployment to more traditional branch-based development, where all of the code from multiple developers working for weeks or months is batched up and integrated together.

It turns out that there are tremendous benefits from working with a batch size radically smaller than traditional practice suggests. In my experience, a few hours of coding is enough to produce a viable batch and is worth checking in and deploying. Similar results apply in product management, design, testing, and even operations. Normally I focus on the techniques you need to reduce batch size, like continuous integration. Today, I want to talk about the reasons smaller batches are better. This is actually a hard case to make, because most of the benefits of small batches are counter-intuitive.

Small batches mean faster feedback. The sooner you pass your work on to a later stage, the sooner you can find out how it will be received. If you're not used to working in this way, it may seem annoying to get interrupted so soon after you were "done" with something, instead of just working it all out by yourself. But these interruptions are actually much more efficient when you get them soon, because you're that much more likely to remember what you were working on. And, as we'll see in a moment, you may also be busy building subsequent parts that depend on mistakes you made in earlier steps. The sooner you find out about these dependencies, the less time you'll waste having to unwind them.

Take the example of a design team prepping mock-ups for their development team. Should they spend a month doing an in-depth set of specifications and then hand them off? I don't think so. Give the dev team your very first sketches and let them get started. Immediately they'll have questions about what you meant, and you'll have to answer them. You may surface assumptions you had about how the project was going to go that are way off. If so, you can immediately evolve the design to take the new facts into account. Every day, give them the updated drawings, always with the proviso that everything is subject to change. Sometimes that will require the team to build something over again, but that's rarely very expensive, because the second time is so much more efficient, thanks to the knowledge gained the first time through. And over time, the development team may be able to start anticipating your needs. Imagine not having to finish the spec at all, because the team has already found an acceptable solution. I've witnessed that dozens of times, and it's a huge source of time-savings.

Small batches mean problems are instantly localized. This is easiest to see in deployment. When something goes wrong with production software, it's almost always because of an unintended side-effect of some piece of code. Think about the last time you were called upon to debug a problem like that. How much of the time you spent debugging was actually dedicated to fixing the problem, compared to the time it took to track down where the bug originated?

Small batches reduce risk. An example of this is integration risk, which we use continuous integration to mitigate. Integration problems happen when two people make incompatible changes to some part of the system. This comes in all shapes and sizes. You can have code that depends on a certain configuration that's deployed on production. If that configuration changes before your code is deployed, the person who changes it won't know they've introduced a problem. Your code is now a ticking time bomb, waiting to cause trouble when it's deployed.

Or consider the case of code that changes the signature of a commonly-called function. It's easy to find collisions if you make a drastic change, but harder when you do things like add new default parameters. Imagine a branch-based development system with two different developers who each added a new, but different, default-value argument to the end of the signature, and then went through and updated all of its callers. Anyone who has had to spend hours late at night resolving one of these conflicts knows how painful they are. The smaller the batch size, the sooner these kinds of errors are caught, and the easier the integration is. When operating with continuous deployment, it's almost impossible to have integration conflicts.
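To make the default-parameter collision concrete, here is a minimal sketch; the function name and arguments are hypothetical, and Python is used purely for illustration:

```python
# Hypothetical illustration of the silent merge conflict described above.

# Mainline version:
def render_avatar(avatar_id):
    ...

# Developer A, on a long-lived branch, appends a default argument and
# updates every caller to pass it positionally:
def render_avatar(avatar_id, show_badge=False):
    ...

# Developer B, on a different branch, does the same with a different argument:
def render_avatar(avatar_id, use_cache=True):
    ...

# After both branches merge, the signature can keep only one ordering:
def render_avatar(avatar_id, show_badge=False, use_cache=True):
    ...

# But Developer B's updated call sites still read:
render_avatar(42, True)  # B meant use_cache=True; this now sets show_badge.

# Version control reports no textual conflict; the bug surfaces only at
# runtime. The smaller the batch, the sooner someone notices.
```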

Small batches reduce overhead. In my experience, this is the most counter-intuitive of its effects. Most organizations have their batch size tuned so as to reduce their overhead. For example, if QA takes a week to certify a release, it's likely that the company does releases no more than once every 30 or 60 days. Telling a company like that they should work in a two-week batch size sounds absurd - they'd spend 50% of their time waiting for QA to certify the release! But this argument is not quite right. This is something so surprising that I didn't really believe it the first few times I saw it in action. It turns out that organizations get better at those things that they do very often. So when we start checking in code more often, releasing more often, or conducting more frequent design reviews, we can actually do a lot to make those steps dramatically more efficient.

Of course, that doesn't necessarily mean we will make those steps more efficient. A common line of argument is: if we have the power to make a step more efficient, why don't we invest in that infrastructure first, and then reduce batch size as we lower the overhead? This makes sense, and yet it rarely works. The bottlenecks that large batches cause are often hidden, and it takes work to make them evident, and even more work to invest in fixing them. When the existing system is working "good enough," these projects inevitably languish and get deprioritized.

Take the example of the team that needs a week to certify a new release. Imagine moving to a two-week release cycle, with the rule that no additional work can take place on the next iteration until the current iteration is certified. The first time through, this is going to be painful. But very quickly, probably even by the second iteration, the weeklong certification process will be shorter. The development team that is now clearly bottlenecked will have the incentive needed to get involved and help with the certification process. They'll be able to observe, for example, that most of the certification steps are completely automatic (and horribly boring for the QA staff) and start automating them with software. But because they are blocked from being able to get their normal work done, they'll have a strong incentive to invest quickly in the highest ROI tests, rather than overdesigning a massive new testing system which might take ages to make a difference.
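As a sketch of what that first pass at automation might look like - the URLs and expected strings are invented, not any team's actual certification suite:

```python
# A minimal smoke-test script automating the kind of rote certification
# steps described above. The checks here are illustrative placeholders.
import sys
import urllib.request

CHECKS = [
    ("homepage loads", "https://example.com/", b"Welcome"),
    ("login page loads", "https://example.com/login", b"Sign in"),
    ("api responds", "https://example.com/api/status", b"ok"),
]

def main():
    failures = 0
    for name, url, expected in CHECKS:
        try:
            body = urllib.request.urlopen(url, timeout=10).read()
        except OSError as err:
            print(f"FAIL {name}: {err}")
            failures += 1
            continue
        if expected in body:
            print(f"PASS {name}")
        else:
            print(f"FAIL {name}: expected {expected!r} in response")
            failures += 1
    sys.exit(1 if failures else 0)  # a nonzero exit blocks the release

if __name__ == "__main__":
    main()
```

Every check automated this way permanently frees a human from repeating it, which is exactly the compounding effect described next.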

These changes pay increasing dividends, because each improvement now directly frees up somebody in QA at the same time as reducing the total time of the certification step. Those freed-up QA resources might be able to spend some of that time helping the development team actually prevent bugs in the first place, or just take on some of their routine work. That frees up even more development resources, and so on. Pretty soon, the team can be developing and testing in a continuous feedback loop, addressing micro-bottlenecks the moment they appear. If you've never had the chance to work in an environment like this, I highly recommend you try it. I doubt you'll go back.

If you're interested in getting started with the transition to small batches, I'd recommend beginning with Five Whys.

(I have infuriated many coworkers by advocating for smaller batch sizes without always being able to articulate why they work. Usually, I have to resort to some form of "try it, you'll like it," and that's often sufficient. Luckily, I now have the benefit of a forthcoming book, The Principles of Product Development Flow. It's really helped me articulate my thinking on this topic, and includes an entire chapter on the topic of reducing batch size.)


Continuous deployment with downloads

One of my goals in writing posts about topics like continuous deployment is the hope that people will take those ideas and apply them to new situations - and then share what they learn with the rest of us. So I was excited to read a recent post about applying the concept of continuous deployment to that thickest-of-all-clients, the MMOG. Joe Ludwig goes through his current release process and determines that the current time to make and deploy a release is about seven and a half hours, which is why his product is released about once a month. While that's actually quite speedy for a MMOG, Joe goes through the thought experiment of what it would take to do it much faster:
Programmer Joe - Continuous Deployment with Thick Clients
If it takes seven and a half hours to deploy a new build you obviously aren’t going to get more than one of them out in an 8 hour work day. Let’s forget for a moment that IMVU is able to do this in 15 minutes and pretend that our target is an hour. For now let’s assume that we will spend 10 minutes on building, 10 minutes on automated testing, 30 minutes on manual testing ...

In fact, if those 30 minutes of manual testing are your bottleneck and you can keep the pipeline full, you can push a fresh build every 30 minutes or 16 times a day. Forget entirely about pushing to live for a moment and consider what kind of impact that would have on your test server. Your team could focus on fixing issues that players on the test server find while those players are still online. Assuming a small change that you can make in half an hour, it would take only an hour from the start of the work on that fix to when it is visible to players. That pace is fast enough that it would be possible to run experiments with tuning values, prices of items, or even algorithms. Of course, for any of this to work the entire organization needs to be arranged around responding to player feedback multiple times per day. The real advantage of a rapid deployment system is to make your change -> test -> respond loop faster.
This is a great example of lean startup thinking. Joe is getting clear about which steps in the current process actually deliver value to the company, then imagining a world in which those steps were emphasized and others minimized. Of course, as soon as you do that, you start to reap other benefits, too.

I'd like to add one extra thought to Joe's thought experiment. Let's start with a distinction between shipping new software to the customer, and changing the customer's experience. The idea is that often you can change the customer's experience without shipping them new software at all. This is one of the most powerful aspects of web architecture, and it often gets lost in other client-server programming paradigms.

From one point of view, web browsers are a horribly inefficient platform. We often send down complete instructions for rendering a whole page (or even a series of pages) in response to every single click. Worse, those instructions often kick off additional requests back and forth. It would be a lot more efficient to send down a compressed packet with the entire site's data and presentation in an optimized format. Then we could render the whole site with much less latency, bandwidth usage, and server cost.

Of course, the web doesn't work this way for good reasons. Its design goals aren't geared towards efficiency in terms of technical costs. Instead, it's focused on flexibility, readability, and ease of interoperability. For example, it's quite common that we don't know the exact set of assets a given customer is going to want to use. By deferring their selection until later in the process, we can give up a lot of bookkeeping (again trading off for considerable costs). As a nice side-effect, it's also an ideal platform for rapid changes, because you can "update the software" in real time without the end-user even needing to be aware of the changes.

Some conclude that this phenomenon is made possible because the web browser is a fully general-purpose rendering platform, and assume that it'd be impossible to do this in their app without creating that same level of generality. But I think it's more productive to think of this as a spectrum. You can always move logic changes a little further "upstream" closer to the source of the code that is flowing to customers. Incidentally, this is especially important for iPhone developers, who are barred by Apple Decree from embedding a programming language or interpreter in their app (but who are allowed to request structured data from the server).

For example, at IMVU we would often run split-test experiments that affected the behavior of our downloadable client. Although we had the ability to do new releases of the client on a daily basis (more on this in a moment), this was actually too slow for most of the experiments we wanted to run. Plus, having the customer be aware that a new feature is part of a new release actually affects the validity of the experiment. So we would often ship a client that had multiple versions of a feature baked into it, and have the client call home to find out which version to show to any given customer. This added to the code complexity, latency, and server cost of a given release, but it was more than paid back by our ability to tune and tweak the experimental branches in near-real-time.
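Here is a minimal sketch of that call-home pattern, with invented endpoint and function names (this is not IMVU's actual protocol):

```python
# Client-side sketch: ship both versions of a feature, then ask the
# server which one this user should see. All names are hypothetical.
import json
import urllib.request

def fetch_variant(user_id, experiment, default="control"):
    """Ask the experiment server for this user's variant; fall back to a
    safe default so the client keeps working if the call fails."""
    url = (f"https://example.com/api/experiment"
           f"?user={user_id}&name={experiment}")
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return json.load(resp).get("variant", default)
    except OSError:
        return default  # never block the client on the experiment server

def show_new_shop_ui():      # stub for illustration
    print("new shop UI")

def show_classic_shop_ui():  # stub for illustration
    print("classic shop UI")

if fetch_variant(user_id=12345, experiment="new_shop_ui") == "treatment":
    show_new_shop_ui()
else:
    show_classic_shop_ui()
```

Because the variant decision lives on the server, the experimental branches can be tuned or shut off in near-real-time, without shipping a new client.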

Further upstream on the spectrum are features that can be parameterized. Common examples are random events which have some numeric weighting associated with them (and which can be tuned) or user interface elements that are composed of text or graphics. We tacked on a feature to the IMVU client that worked like this: whenever the client called home to report a data-warehousing event, we used to have a useless return field (the client doesn't care if the event was successfully recorded). We repurposed that field to optionally include some XML describing an on-screen dialog box. That meant we could notify some percentage of customers of something at any time, which was great for split-testing. A new feature would often be first "implemented" by a dialog box shown to a few percent of the userbase. We'd pay attention to how many clicked the embedded link to get an early read on how much they cared about the feature at all. Often, we'd do this before the feature existed at all, apologizing all the way.
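A sketch of how the client's side of that repurposed return field might look; the element names are invented, since the real schema was never published:

```python
# Sketch: the event-reporting reply used to be ignored; now it may
# optionally carry XML describing a dialog box. Names are hypothetical.
import xml.etree.ElementTree as ET

def show_dialog(title, body, link):
    print(f"[{title}] {body}" + (f" -> {link}" if link else ""))

def handle_event_response(response_text):
    """Parse the (previously ignored) reply to a metrics event."""
    if not response_text.strip():
        return  # the common case: nothing to show
    root = ET.fromstring(response_text)
    dialog = root.find("dialog")
    if dialog is not None:
        show_dialog(dialog.get("title", ""),
                    dialog.findtext("body", default=""),
                    dialog.findtext("link", default=None))

# Example payload the few percent of clients in the test might receive:
handle_event_response(
    "<response><dialog title='New!'>"
    "<body>Want music in your rooms? Click here.</body>"
    "<link>https://example.com/survey</link>"
    "</dialog></response>"
)
```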

There are plenty more techniques even further upstream. Eventually, you wind up with specialized state machines, interpreters, or a full-fledged embedded platform. We eventually embedded the Flash interpreter into our process, so we could experiment with our UI more quickly.

In fact, we considered releases themselves to be a special case of this more general system. We had a structured automated release process. After all, the release itself was just a static file checked into our website source control. Every new candidate release was automatically shown to a small number of volunteers, who would be prompted to upgrade. The system would monitor their data and, if it looked within norms, gradually offer the release to a small number of new users (who had no prior expectation of how the product should work). It would carefully monitor their behavior, and especially their technical metrics, like crashes and freezes. If their data looked OK, we'd have the option to ramp up the number of customers bit by bit until finally all new users were given the new release and all existing users were prompted to upgrade. Although we'd generally do a prerelease every day, we wouldn't pull the trigger on a full release that often, because our upgrade path for existing users wasn't (yet) without cost. It also gave our manual QA team a chance to inspect the client before it was widely deployed, due to the lower level of test coverage we had on that part of the product (it's much harder).
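The ramp-up gating might be sketched like this - the metric names and thresholds are invented, and the real system monitored far more:

```python
# Sketch of the gradual-rollout decision logic described above.
RAMP_STEPS = [0.01, 0.05, 0.25, 1.0]  # fraction of new users offered the build

def metrics_within_norms(candidate, baseline):
    """Allow a small regression margin versus the current release."""
    return (candidate["crash_rate"] <= baseline["crash_rate"] * 1.1 and
            candidate["freeze_rate"] <= baseline["freeze_rate"] * 1.1)

def next_rollout_fraction(current, candidate, baseline):
    if not metrics_within_norms(candidate, baseline):
        return 0.0  # pull the candidate entirely
    for step in RAMP_STEPS:
        if step > current:
            return step  # ramp up one notch
    return current  # already fully rolled out

# Example: a healthy candidate at 5% advances to 25% of new users.
baseline = {"crash_rate": 0.0020, "freeze_rate": 0.0010}
candidate = {"crash_rate": 0.0021, "freeze_rate": 0.0009}
print(next_rollout_fraction(0.05, candidate, baseline))  # -> 0.25
```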

In effect, every time we check in code to our client code base, we are kicking off another split-test experiment that asks: "is the business better off with this change in it than without it?" Because of the mechanics of our download process, this is answered a little slower than on the web. But that doesn't make it any less important to answer.

To return to the case of the thick-client MMOG, there are some additional design constraints. It's more risky to have different players using different versions of the software, because that might introduce gameplay fairness issues. I think Joe's idea of deploying to the test server is a great way around this, especially if there is a regular crew of players subjecting the test server to near-normal behavior. But I also think this could work in a lot of production scenarios. Take the case of determining the optimal spawn rate for a certain monster or treasure. Since this is a parameterized scenario, it should be possible to do time-based experiments. Change the value periodically, and have a process for measuring the subsequent behavior of customers who were subjected to each unique value. If you want to be really fancy, you can segregate the data for customers who saw multiple variations. I bet you'd be able to write a simple linear optimizer to answer a lot of design questions that way.
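A sketch of that time-based experiment, with invented values and a toy logging function:

```python
# Sketch: rotate a tunable game parameter on a fixed schedule and tag
# every observation with the value in effect, so later analysis can
# compare cohorts. All numbers here are hypothetical.
import time

SPAWN_RATES = [0.5, 1.0, 2.0]    # candidate values for the monster spawn rate
ROTATION_SECONDS = 6 * 60 * 60   # switch values every six hours

def current_spawn_rate(now=None):
    """Pick the spawn rate for the current time window."""
    now = time.time() if now is None else now
    window = int(now // ROTATION_SECONDS)
    return SPAWN_RATES[window % len(SPAWN_RATES)]

def log_observation(player_id, metric, value, now=None):
    """Record player behavior along with the parameter in effect, so the
    data for players who saw multiple variations can be segregated later."""
    print(player_id, metric, value, "spawn_rate =", current_spawn_rate(now))

log_observation(42, "gold_collected", 130)
```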

My experience is that once you have a tool like this, you start to use it more and more. Even better, you start to migrate your designs to be able to take ever-increasing advantage of it. At IMVU, we found ourselves constantly migrating functionality upstream, in order to get faster iteration. It was a nearly unconscious process; we just felt that much more productive with rapid feedback.

So thanks for sharing, Joe! Good luck with your thought experiment, and let us know if you ever decide to make those changes a reality.

What is a market? (a guide for hackers)

(This post was inspired by a conversation with Nivi from Venture Hacks, but is otherwise not his fault)

There has been a proliferation of frameworks and metaphors lately that are designed to help startups avoid the all-too-common fatal mistake of failing to find a market. To wit: achieving product/market fit, getting customer validation, making something people want, things that matter, and of course the many excellent books on the topic, of which I'll mention just two of the best, Crossing the Chasm and The Innovator's Dilemma.

Having worked with dozens of founders over the past few months, I think I can safely say, without naming any names, that most of us are not too clear on what we're talking about when it comes to markets. Is a market a set of paying customers? Are there different types, or are they all similar? If so, how do we think about successful companies that don't charge money for their product? Are advertisers customers, and are they part of a market? Founders are constantly being barraged by incoherent and contradictory advice. For example: focus on finding a big market, but also, don't try to compete against large incumbents who already have a large market. But also, remember that eBay started out with Pez dispensers, but also remember that Google never advertised, and I've heard that Facebook got it right from the start.

Few people want to admit that they don't understand what other people are talking about. That's especially true when jargon is flying and the stakes are high. For people who went to business school, I don't really know what to say. But I work with a lot of hackers-turned-founders who I do relate to. There's no earthly reason we should expect a programmer will have picked up a good understanding of market dynamics along the way, while they were busy figuring out how to grok partial template specialization.

Here's my attempt to explain market types using a metaphor most of us should be able to understand. It takes advantage of the idea, which I owe to Clayton Christensen, that customers buying a product are really "hiring" it to do a specific "job" for them. It's as if every customer, whether they are an enterprise, a small business, or an individual consumer, is actually an employer who wants to get things done. "People don't want to buy a quarter-inch drill. They want a quarter-inch hole!"

So imagine you're applying for a programming job. You're smart and have l33t skills. And yet, depending on who you talk to and what jobs you apply for, you may find an easier or harder time getting the job. Those differences are actually predictable, and fall into categories, and those categories are called market types. There are four:
  1. Existing market. This is applying for an open job req. An employer is trying to hire someone for a specific job, and they think they know what that job is. You might be smarter than the other applicants, but there are zillions of them. Because the employer has so many choices, they can afford to be very picky. This is the world of incomprehensible job postings, jargon, and HR keyword-based screening. It's important to understand how these companies see you: your resume looks like crap, your references are poor or nonexistent, and you don't have the mandated 10 years of experience in J2EE ERP CRM WTF. Tough sell. In the world of startups, this is like trying to sell a product to a very demanding customer who needs to see a lot of features before buying.

  2. Resegmented market (low cost variety). Here's a situation where you offer to do the job for 1/10th the cost. Instead of being paid $120k/yr, you are willing to do it for $12k/yr. This is how outsourcers get jobs they aren't otherwise "qualified" for. Keep in mind that a moderate price savings won't get it done; you can't call up traditional HR and say "I'll do the job for $105k" because that's not a meaningful sum to most companies, especially compared to the cost of making the wrong hire. In fact, you can't go through traditional channels at all. Recruiters and HR departments rarely recommend outsourcing - more likely, someone inside the company needs to get work done cheaply and circumvents the established hiring process to use an outside (cheap) vendor.

    Here your challenge is to convince that manager that you really can do the job at such low cost. Now your crappy resume and low-grade references become a strength: you're obviously not wasting money on marketing. Why do you think outsourcers have been happy to generate all this PR the past few years about people losing their jobs to India? Although it generates some political backlash, it establishes tremendous credibility among people who are desperately trying to save money. They must be thinking: "if it's costing people their jobs, it must really work. Maybe I should try that..."

  3. Resegmented market (niche variety). This involves changing the job description. If the company is looking for programmers, you convince them they absolutely need Ruby programmers. Now, there's nobody with 10 years experience in doing ETL QVC in Ruby, so now your 3 years is starting to look pretty good. And now all those "qualified" candidates who you were competing against in scenario #1 are starting to look unqualified, because although they may be experts in something, they don't know Ruby, and you've convinced the client that Ruby is the be-all-end-all of programming languages. Remember Java?

  4. New market - this is like applying to a company that does not have an open req for programmers. Your challenge is to convince them to hire you anyway, even though they don't know what they need. Now even 10 years of experience is probably not good enough, because they have no idea how many years of experience you ought to have. Without a basis for comparison, you first have to drive home the need; only then will you have an easy time making the sale (after all, you're the person they trust, since you brought it to their attention). A possible approach is to "steal" another job category. For example: "If you hire me you can free up 10 of your staff in department X, because I will write software that ..." The toughest part of the sale is to get agreement that you really can deliver benefits that they didn't previously know were possible.

    Here's the good news and the bad news about new markets: you don't have any competition. When you call on a customer to try and get the job, it's unlikely that somebody else got there first. Unfortunately, that also means the customer is unlikely to know what you're talking about. Be prepared for an extended slog.
Think back to the confusing and contradictory advice I mentioned earlier. The #1 best way to cope with advice like that is to know your market type. That way, when someone says something like "you only have a few niche customers, so you can't be in a very big market," you can reply and (as a bonus) know what you're talking about. How about something like: "don't worry! We're busy resegmenting the XYZ market with a disruptive low-price offering. Although that currently means we can only serve these few low-end customers, as our product improves, we'll eventually move up-market and kick the incumbents out."

They may not understand what you're talking about, but - don't worry! - they'll probably be too embarrassed to say anything.


You buy virtual goods

Jeremy Liew has a great piece in the WSJ about the central mystery of businesses that make money selling virtual goods:
Why do People Buy Virtual Goods? - WSJ.com: "My theory is that people buy digital goods for the same reason that they buy goods in the real world; (i) to be able to do more, (ii) to build relationships, and (iii) to establish identity."
Jeremy's argument is a good one, and I'm glad to see it being advanced in such a high-profile way. Having had a number of years to try and answer this question at cocktail parties and social gatherings of all kinds, I thought I'd try and expand on his framework a little, and then try and use these answers to help make suggestions for those who are trying to get people to buy virtual goods. Let me start by trying to convince you that, no matter who you are, you already buy virtual goods.

My goal, when talking to people who are new to the virtual worlds concept, is to convince them that they already buy virtual goods. This is always true, because modern economies have become increasingly virtual over the years. Why do Citizens of Humanity jeans cost $200, when physically similar pants can be had for one-tenth the price? The same is true of brand-name products in almost every category (and it can be measured). Brands are so pervasive that even those people who want to make a statement against them (do you build all your own software from source?) have to invest substantial time avoiding them, which is just another kind of premium. (Ironically, one of the best sources of insight on this phenomenon is Naomi Klein's anti-corporate manifesto No Logo.)

My point is not just that brands are a form of virtual goods, although that's true. Beyond their brand, I want to argue that every product contains both tangible and intangible sources of value, and that everything you buy has at least some "virtual goods" component. By recognizing these components, we can make better sense of what's going on in the online virtual goods market, and craft strategies that leverage people's pre-existing experience with virtual goods. This is why brand-based virtual goods have worked so well online: paying a premium for a branded pair of virtual jeans is, for most customers, a pretty similar experience to paying one for the physical pair.

I dissect the value that a product gives to a customer into four sources:
  • Practical utility - This is the tangible benefit that a product enables, whether that's transportation, warmth, cleanliness, or entertainment. Many products derive all of their value from this utility, online as well as off: generic drugs are pretty similar to a number of non-epic consumables in your average MMOG.

  • Perceived value - This is the extra value a customer perceives as a result of good marketing, product design, product quality, or exceptional product/market fit. For example, many customers derive satisfaction from feeling like they bought the "best" product in a given category, even if that product has no objective performance difference from its nearest competitor. This is true even in cases where the customer derives no status benefit from the product (which we'll cover in a second). For example, home electronics brands like Sony and Bose work very hard to create an impression of exceptional performance even in products that are used primarily in private.

  • Social value - When I can use a product to my benefit in a social situation, its value can be transformed. All gifting-type products are influenced by this source, as Hallmark has long understood. But plenty of other product categories depend on social factors: status purchases, beauty products, fashion products, and (at least here in San Francisco) food and produce. For a non-brand example, look no further than De Beers' successful, if pernicious, marketing of diamonds.

  • Identity value - This is the strongest source of value of all, and it's a little tricky to differentiate from the preceding two sources. This is the benefit you get from incorporating a product into your self-conception. For example, take your average Mac fanatic. When they buy an Apple laptop, they are doing more than enjoying a premium product and showing off. They are saying to the world and - more importantly - to themselves: I am the kind of person that buys Apple products. Apple has done a phenomenal job of convincing us that we, too, can be a little like Steve Jobs, if only we had one more iFoo in our lives. Many fashion and beauty products create this kind of affinity, especially in products that are not visible to others (don't make me spell it out). Identity products are not easily displaced, because the emotional investment is very high. This is every bit as true for online goods - just try and trade your friend's level 80 warlock for your "equivalent" level 80 rogue. Good luck.
I've tried to arrange these sources of value in a hierarchy, and although I don't have a lot of evidence that I got it right, this is my gut sense of how they stack up. The nice thing about understanding these sources is that consumers generally have separate budgets for each category. One of the biggest lessons I learned from my time at IMVU was that if you can move spending out of the entertainment budget (which is often constrained) and into the identity budget, you can make a lot more money per customer. Even in tough times (actually, especially in hard times) people spend significant sums to bolster their sense of who they are. For better or worse, products physical and online are parts of that formula for most people in our society.

So when thinking of selling virtual goods, consider combining sources of value into one product. This is true for online goods, for example in supporting item gifting. It's just as true for physical or even hybrid goods, as Webkinz has demonstrated by adding an online source of utility to a physical stuffed animal. Many virtual goods generate only mediocre returns when they draw from only one source of value, as Facebook has been finding out with their only-social-utility gifts, or dozens of startups have found out with their just-the-functionality technology offerings.

And when given the choice, try and move up the hierarchy of value. If given the opportunity to work with two customer segments, one of which sees your product as a basic utility and another of which sees it as a lifestyle statement, choose the latter. IMVU made that choice early on, when we abandoned some profitable customers who wanted to use our product as a regular-IM substitute. There was no way to service them while still engaging with the goths, emos and anime fans who were rapidly becoming IMVU's top evangelists. We doubled-down on identity value, and it worked out well.


The free software hiring advantage

This is one of those startup tips I'm a little reluctant to share, because it's been such a powerful source of competitive advantage in the companies I've worked with. But I'm going to share it anyway, because it feels like the right thing to do. Here's the short version: hire people from the online communities that develop free software. (Yes, you may be more familiar with the term open source, but let's give credit where credit is due, at least for today).

Especially for a startup, not taking maximum advantage of free software is crazy. The benefits are many and much-discussed, and so I'll mention only one in passing. It's one of the easiest ways to get leverage in your development process, amplifying the power of your team by letting you take advantage of code written by thousands of others. It's obvious that can lower your development costs, but I think it's even more important that it can reduce your time to market. It can benefit your team in other, more surprising ways as well.

This approach gives you an edge in hiring. Most of the best programmers I've known have been active in at least one free software project. It's a wonderful filter for people who are intrinsically motivated by the art of programming. Beyond the quality of the candidates themselves, I've noticed three big effects of hiring out of free software communities:
  1. You can hire an expert in your own code base. I've had the good fortune to see this first-hand. I hired someone who was a key contributor to a library that was heavily used in our application. Although he didn't know much about our app when he started, he was able to be productive from day one, because we immediately put him to work extending our application's use of the library in question. He saw opportunities none of us could, because his point of view about what constituted "legacy code that I know well" and "third party code that some other strange people wrote" was exactly inverted from ours.

  2. You can hire people who have worked together. Another unexpected benefit comes when you hire people who are part of the same online coding community. They share a common language, culture, and coding style. They have a certain amount of trust based on having been part of a common mission, and they share a passion for the project's goals. That's all helpful for recruiting, retaining, and ramping-up a new employee. You don't have to guess who their mentor will be.

  3. You're not competing with every other company in town. This is especially true in Silicon Valley. The great programmers are already being headhunted by 10 other startups, and the cognitive overhead of trying to figure out which are actually unique opportunities is high. Free software contributors tend to be geographically dispersed, and so aren't part of the echo chamber. When you call, it's a more unusual occurrence, and you're more likely to get their attention.
If you want to successfully hire from a coding community, don't just barge in. Job posts almost never work; they're considered a form of spam. Instead, engage with the project. Submit patches. Make suggestions for how the project could be improved. Publish examples of how you use it. A surprising number of contributors to these projects have no idea how their work is used; I've often found myself in the position of being the largest single user of the software on the planet, without having ever talked to the people who wrote it. Talk about a reliable, low-maintenance vendor. This effort takes time, so this approach is not for the impatient. Still, the engagement itself is worthwhile, even if you never hire anyone. You're helping people who spend part of their lives helping you. It's basic self-interest.

Once you're part of the community, a big question is who to try and hire. As you get to know the contributors, it's evident who the real leaders are. Here's my heuristic for deciding who to approach. Ignore the famous people who are busy giving lots of speeches about how technology X will change the world. Find the person on the mailing list who patiently corrects newbies' mistakes. When someone writes in with a bad (but earnest) idea, who has the combination of in-depth knowledge and communications skills necessary to correct them without alienating them from the project? Communities that don't attract these kinds of leaders don't scale, so any successful project is bound to have some. It's worth the effort to bring them on board.

Once I've identified one of these superstars, here's what has worked for me. Approach them directly and privately. State your case plainly and be honest. I've found almost anyone will give you the time of day, if you tell them directly: "I am working on a company whose mission I profoundly believe in. We are heavy users of Project X, and are grateful to you for making that possible. Can I have a few minutes of your time?" Build up a relationship over time, find out what makes them tick, and try to make the case that your company is a great place to pursue that passion. If you're telling the truth, they'll come to see it your way eventually.

Given how many startups complain bitterly about how hard it is to find qualified programmers, I'm surprised more don't engage more fully with the people who make their technology stack possible. Try it, you just might like it. If you've never been a contributor to a free software project before, take a look at Contributing to Open Source Without Committing a Line Of Code.

Of course, if you're the target of one of these hiring calls, and not the company doing it, the perspective is pretty different. How do you evaluate a startup as a potential employer? As I promised over at Hacker News, I'll try and tackle that question in a future post. If you're interested in that topic, let me know in a comment.

Continuous deployment and continuous learning

At long last, some of the actual implementers of the advanced systems we built at IMVU for rapid deployment and rapid response are starting to write about it. I find these on-the-ground descriptions of the systems and how they work so much more credible than theory-only posts that I am excited to share them with you. I can personally attest that these guys know what they are talking about; I saw them do it first-hand. I will always be full of awe and gratitude for what they accomplished.
Continuous Deployment at IMVU: Doing the impossible fifty times a day by Timothy Fitz
Continuous Deployment isn’t just an abstract theory. At IMVU it’s a core part of our culture to ship. It’s also not a new technique here; we’ve been practicing continuous deployment for years, far longer than I’ve been a member of this startup.

It’s important to note that the system I’m about to explain evolved organically in response to new demands on the system and in response to post-mortems of failures. Nobody gets here overnight, but every step along the way has made us better developers.

The high level of our process is dead simple: Continuously integrate (commit early and often). On commit automatically run all tests. If the tests pass deploy to the cluster. If the deploy succeeds, repeat.

Our test suite takes nine minutes to run (distributed across 30-40 machines). Our code pushes take another six minutes. Since these two steps are pipelined, that means at peak we’re pushing a new revision of the code to the website every nine minutes. That’s 6 deploys an hour. Even at that pace we’re often batching multiple commits into a single test/push cycle. On average we deploy new code fifty times a day.
We call this process continuous deployment because it seemed to us like a natural extension of the continuous integration we were already doing. Our eventual conclusion was that there was no reason to have code that had passed the integration step but was not yet deployed. Every batch of software for which that is true is an opportunity for defects to creep in: maybe someone is changing the production environment in ways that are incompatible with code-in-progress; maybe someone in customer support is writing up a bug report about something that's just being fixed (or worse, the symptom is now changing); and no matter what else is happening, any problems that arise due to the code-in-progress require that the person who wrote it still remember how it works. The longer you wait to find out about the problem, the more likely it is to have fallen out of the human-memory cache.
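As a rough illustration, the loop Timothy describes could be sketched like this - the shell commands are placeholders for your own build system, not IMVU's actual tooling:

```python
# A minimal sketch of the commit -> test -> deploy loop described above.
# A real system would also pipeline the test and push stages rather than
# running them strictly in sequence.
import subprocess
import time

def run(cmd):
    """Run one pipeline stage; return True on success."""
    return subprocess.call(cmd, shell=True) == 0

def continuous_deployment_loop():
    last_deployed = None
    while True:
        run("git fetch origin")
        revision = subprocess.check_output(
            "git rev-parse origin/master", shell=True).strip()
        if revision != last_deployed:
            if run("./run_all_tests.sh") and run("./deploy_to_cluster.sh"):
                last_deployed = revision
            # on any failure: stop the line, alert, and fix before continuing
        time.sleep(60)  # poll for newly integrated commits

if __name__ == "__main__":
    continuous_deployment_loop()
```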

Now, continuous deployment is not the only possible way to solve these kinds of problems. In another post I really enjoyed, Timothy explains five other non-solutions that seem like they will help, but really won't.
1. More manual testing.
This obviously doesn’t scale with complexity. This also literally can’t catch every problem, because your test sandboxes or test clusters will never be exactly like the production system.
2. More up-front planning
Up-front planning is like spices in a cooking recipe. I can’t tell you how much is too little and I can’t tell you how much is too much. But I will tell you not to have too little or too much, because those definitely ruin the food or product. The natural tendency of over planning is to concentrate on non-real issues. Now you’ll be making more stupid mistakes, but they’ll be for requirements that won’t ever matter.
3. More automated testing
Automated testing is great. More automated testing is even better. No amount of automated testing ensures that a feature given to real humans will survive, because no automated tests are as brutal, random, malicious, ignorant or aggressive as the sum of all your users will be.
4. Code reviews and pairing
Great practices. They’ll increase code quality, prevent defects and educate your developers. While they can go a long way to mitigating defects, ultimately they’re limited by the fact that while two humans are better than one, they’re still both human. These techniques only catch the failures your organization as a whole already was capable of discovering.
5. Ship more infrequently
While this may decrease downtime (things break and you roll back), the cost in development time from work and rework will be large, and mistakes will continue to slip through. The natural tendency will be to ship even more infrequently, until you aren't shipping at all. Then you've gone and forced yourself into a total rewrite. Which will also be doomed.
What all of these non-solutions have in common is that they treat one aspect of the problem at the expense of another. This is a classic form of sub-optimization, where you gain efficiency in one sub-part at the expense of the efficiency of the overall process. You can't make global efficiency improvements until you get clear about the goal of your development process.

That leads to a seemingly obvious question: what is progress in software development? It seems like it should be the amount of correctly working code we've written. Heck, that's what it says right there in the agile manifesto. But, unfortunately, startups can't afford to adopt that standard. As I've argued elsewhere, my belief is that startups (and anyone else trying to find an unknown solution to an unknown problem) have to measure progress with validated learning about customers. In a lot of cases, that's just a fancy name for revenue or profit, but not always. Either way, we have to recognize that the biggest form of waste is building something that nobody wants, and continuous deployment is an optimization that shortens this code-data-learning feedback loop.
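
To ground "validated learning" in something concrete, here is a hedged sketch of the decision at the end of that code-data-learning loop: expose a feature to a test cohort, measure a customer metric against a control, and let the data decide whether the code stays or gets pruned. The numbers, metric, and significance threshold below are illustrative assumptions, not IMVU's actual methodology.

```python
# Hypothetical keep-or-trash decision for a feature, using a standard
# two-proportion z-test on conversion counts. All figures are made up.
import math


def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


# Control cohort saw the old product; variant cohort got the new feature.
z = z_score(conv_a=200, n_a=5000, conv_b=260, n_b=5000)

if z > 1.96:       # ~95% confidence the feature helped the metric
    print("Validated: keep the feature.")
elif z < -1.96:    # ~95% confidence the feature hurt the metric
    print("Invalidated: trash the feature and prune the code.")
else:
    print("Inconclusive: keep experimenting.")
```

The point is not the statistics; it's that "progress" gets defined by what the experiment teaches you about customers, not by how much code survives.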

Assuming you're with me so far, what does that mean in practice? Throwing out a lot of code. As you get better at continuous deployment, you learn more and more about what works and what doesn't, and if you're serious about learning, you'll continuously prune the dead weight. That's not entirely without risk - a lesson we learned all too well at IMVU. Luckily, Chad Austin has recently weighed in with an excellent piece called 10 Pitfalls of Dirty Code.

IMVU was started with a particular philosophy: We don't know what customers will like, so let's rapidly build a lot of different stuff and throw away what doesn't work. This was an effective approach to discovering a business by using a sequence of product prototypes to get early customer feedback. The first version of the 3D IMVU client took about six months to build, and as the founders iterated towards a compelling user experience, the user base grew monthly thereafter.

This development philosophy created a culture around rapid prototyping of features, followed by testing them against large numbers of actual customers. If a feature worked, we'd keep it. If it didn't, we'd trash it.

It would be hard to argue against this product development strategy, in general. However, hindsight indicates we forgot to do something important when developing IMVU: When the product changed, we did not update the code to reflect the new product, leaving us with piles of dirty code.

So that you can learn from our mistakes, Chad has helpfully listed ten reasons why you want to manage this dirty-code problem (sometimes called "technical debt") proactively. If we could do it over again, I would have started a full continuous integration, deployment, and refactoring process from day one, complete with five whys for root-cause analysis. But, to me anyway, one of the most inspiring parts of the IMVU story is that we didn't start with all these processes. We hadn't even heard of half of them. Slowly, painfully, incrementally, we built them up over time (and without ever taking a full-stop-let's-start-over timeout). If you read these pieces by the guys who were there, you'll get a visceral sense of just how painful it was.

But it worked. We made it. So can you.


The lean startup @ Web 2.0 Expo (and a call for help)

I've been asked to speak this year at the Web 2.0 Expo to explain the lean startup concept to a larger audience.
The Lean Startup: A Disciplined Approach to Imagining, Designing, and Building New Products (Web 2.0 Expo San Francisco 2009)

The current macroeconomic climate presents unparalleled opportunities for those who can thrive with constrained resources. The Lean Startup is a practical approach for creating and managing a new breed of company that excels in low-cost experimentation, rapid iteration, and true customer insight. It uses principles of agile software development, open source and web 2.0, and lean manufacturing to guide the creation of technology businesses that produce disruptive innovation.

1:30pm Wednesday, 04/01/2009, Location: 2022
I'd like to extend my thanks to the people who've come to my recent "office hours" talks and given such valuable feedback. I know some folks weren't able to get into the most recent one (sorry!), so I'm looking forward to having a bigger venue to work with. The trade-off, though, is a more structured presentation format. And that's where the call for help comes in.

I have to practice what I preach, right? If you're interested in being part of my "customer advisory board" for this presentation, please get in touch. I'm looking for a few people to run ideas by - people who can give feedback and help shape the material so that it has the biggest impact on this audience. I'm especially interested in hearing from those of you who are planning to attend the Web 2.0 Expo or have attended one in the past. If you want to help, please start with the presentation Steve and I did for Maples Investments a few months ago. That was for a different audience and a much smaller venue, but I'd like to use it as a starting point. If you'd like to give feedback, feel free to post comments here, get in touch on LinkedIn or Facebook, or try to beat the spam filter on email.

Lastly, for those of you who decide to stop by the Expo, please come say hello and let me know you're a reader, so I can say thanks in person. As a group you've given me incredible support and feedback these past few months, and I truly appreciate it.


Creating a high-end look for a web site

Some of these are so simple, but they look great: 21 Simple But Impressive Corporate Web Designs Of Top Brands


Lesson 48: Snap Shot Of Getting Started Keeping Honey Bees

Hello Everybody! We are David & Sheri Burns, taking all the confusion out of beekeeping. Our goal is to help more people start keeping bees! And it is working. More and more people have taken up beekeeping because of our efforts. We are trying to spread the word about how important honey bees are to all of us.

Yesterday, out of the blue, I was asked by FOX NEWS to do a live interview on the Neil Cavuto show. I welcomed the opportunity to get the word out on national news about the plight of the honey bee. Click here to watch the interview.

Many people want to start keeping bees, but are clueless as to when to start, what to buy, how to do it and where to buy everything. Then, there is the huge learning curve of knowing what you are doing.

Here at Long Lane Honey Bee Farms, we know these are challenges to prospective beekeepers, so we are doing our part to walk you through how to become a beekeeper. We want to make it simple and easy to understand.

Beekeeping is a wonderful hobby, and can become a home-based business as your hives expand. Many customers tell us how much they enjoy beekeeping and how relaxing it really is.

So in today's Basic Beekeeping Lesson #48, we want to give you a snapshot of how to start keeping honey bees. Are you ready? Let's go!

1) Buy Your Equipment, called woodenware because most bee hives are made of wood. Our cost for a complete hive is $249, assembled and painted. Nothing to build or paint! When should you buy your hive? NOW! (Jan-May)

2) Buy Your Bees. We ship your bees to you through UPS if you also order a hive; otherwise, plan to pick them up at our facility. You receive 3 pounds of bees, which is about 10,000 bees and 1 queen. The package is about the size of a shoe box with a screen around it. Inside the package is a can of sugar water that the bees eat while being shipped to you. The typical cost of our package of bees, including shipping, is $96. Bees must be purchased between January and April. Most packages sell out fast, so it is important to order your bees in the winter months!

3) Buy Your Tools. Basic equipment includes a hive tool to help separate the frames and hive boxes when you look inside your hive, a smoker to puff smoke into the hive to calm the bees before you open it up, and a hat and veil to give you protection and increase your confidence in working your bees. Our cost for all three pieces of equipment - hat/veil, smoker, and hive tool - is $59. Timing: make sure you have your equipment on hand prior to getting your bees :)

4) Once you order your items, review our free online lessons. They are very easy to follow and will answer all your questions, walking you through the entire process of keeping bees. For example, Lesson Seven will walk you through how to install your first package. It is simple and easy to understand.

5) We are your mentors! Every new beekeeper needs someone to call or email with questions, and we are here to answer yours. When you purchase your items from us, we'll be your personal mentor. Believe me, this is a huge sacrifice of our time, but we do it because we appreciate our customers' business and we want our customers to feel comfortable keeping bees.

Check out our New STUDIO BEE LIVE BROADCAST!
In our next lesson, I'll talk about feeding bees fondant in late winter/early spring and give you some fondant recipes. And I'll calm your fears about spotting on your hives on warm winter days.

Until next time, remember to Bee-have yourselves.

David & Sheri Burns
Long Lane Honey Bee Farms
14556 N. 1020 East Road
Fairmount, IL 61841
PHONE: 217-427-2678
David's EMAIL: david@honeybeesonline.com


Taking charge

If you need some extra energy: 60+ Resources For Entrepreneurs To Step Up and Take Charge


Beet Leaf Holopchi

Our hosts for January were KatBaro of A Good Appetite and Giz & Psychgrad from Equal Opportunity Kitchen. (The pictures throughout this post are from Temperance of High on the Hog and Psychgrad of Equal Opportunity Kitchen.) Here is KatBaro's post.

Happy New Year, everyone. This month I am hosting the challenge with Giz & Psychgrad from Equal Opportunity Kitchen. For this month's recipe we chose a Ukrainian dish called Holopchi. It's a bit like a cabbage roll, but it's not. It seemed like a good comfort dish for winter.

Here's the recipe, along with some notes from Giz:

Beet Leaf Holopchi
from The Keld Community Ladies Club in Ashville, Manitoba. The last printing of this cookbook was in 1976, and I doubt it's even in circulation anymore.

This is not your usual cabbage roll - can you imagine a bread dough wrapped in beet leaves and baked in a creamy garlic, onion, and dill sauce?

Bread Dough:

2 pkgs. yeast
1/2 cup warm water
1 tsp sugar
2 cups scalded milk
4 cups warm water
1/4 cup melted butter
8 cups flour
3 eggs, beaten
2 Tbsp salt
1 Tbsp sugar
6 1/2 cups flour
a couple bunches of beet leaves

Note: When I first saw this recipe I thought it was wrong - how many recipes need THAT much flour? I used the recipe as written and indeed had to add more flour to get the right consistency. AND I ran out of dough before I ran out of beet leaves.

Directions

1. Dissolve 1 tsp sugar in 1/2 cup tepid water, sprinkle with the yeast, and let stand for 10 minutes.
2. Combine the scalded milk and the 4 cups warm water. To this milk-water liquid add the melted butter, the dissolved yeast, and 8 cups of flour. Let rise in a warm place until double in bulk (about 1 hour).
3. Add the salt, beaten eggs, sugar, and remaining flour.
4. Knead well until the dough is smooth, and top with melted butter or oil.
5. Place in a warm place and let rise until double in bulk, about 2 hours. Punch down. When the dough has risen to double in bulk again, place a piece of dough the size of a walnut on a beet leaf and roll up (leaving the sides open).
6. Place the holopchi loosely in a pot to allow the dough to rise to double in bulk again.
7. Arrange in layers, dotting each layer with butter.
8. Cover tightly and bake in a moderate oven (350 F) for 3/4 to 1 hour. Serve with dill sauce or cream and onion sauce. (I like to cook the holopchi with the sauce, but you don't have to. You can add it later - just make sure you have enough butter in the roasting pan before layering your beet leaf rolls.)
I baked mine longer - about 1 1/2 hours - and was happy with the result.

Sauce
1/2 cup butter
2 cups whipping cream
8 small onions (I used chives)
2 handfuls of chopped fresh dill (this makes the whole dish)
2-4 large cloves of garlic, chopped fine

Melt the butter in a saucepan. Add the onions (chives), garlic, dill, and cream.
Let it come to a boil and then turn down the heat.

This is not a 5-minute recipe. When you commit to making it, it's an adventure - most definitely a worthwhile one. This recipe filled an open roaster and a turkey-sized roaster.


From the Forum:
Even though I thought this was a very strange concept, I am reporting back to you all that it was quite good.
Lauren of I'll Eat You

It was wonderful! This was my first R2R and a 'keeper' recipe for sure.
Snapper of Our House In The Middle Of The Street

I tucked some pork chops under the holopchi and boy, were they great - they came out so tender.
Temperance of High on the Hog
