@andrewchen



How to keep visual design consistent while A/B testing like crazy

without comments


If you don’t watch out, after a couple months of A/B testing, your product will end up looking like Las Vegas!

Why A/B testing and visual design come into conflict
It’s great when a team builds consistent A/B testing into its product process, but it then becomes even harder to keep a consistent visual design while running test after test. This tension comes from the fact that A/B tests push you towards local maxima, making the particular section of the page you’re testing high-performance, but at the expense of the overall experience. As a result, there’s a lot of temptation to “hack in” a new design, the way that software engineers have to “hack in” a feature – but this is short-term at best. This often means adding a bold, colored link to the top of the page with “NEW!” or adding yet another tab – these are all band-aid solutions, because once you get to the next set of features, a design with 100 tabs doesn’t scale.

Each of these competing features, taken by itself, moves the needle positively. However, there isn’t a great way to measure the gradual “tragedy of the commons” effect to the overall user experience. Each new loud page element competes with all previous page elements, and must be louder as a result – this leads to the Vegas effect that many Facebook apps end up in.

To really solve this problem, you need a central design vision – there’s no way around that. It also helps a lot to have a flexible design that embraces A/B testing – you can work with your designers to make this happen through modular, open elements.

Closed designs make it hard to add or remove content
Let’s take a particular example and look at it – this might be a standard example on a page like a video or otherwise:

[Image: the “closed” design example]

It looks nice, but it also has tremendous sensitivity to the content and an inflexible design that makes it hard to test new content. To be more specific, ask yourself the following questions:

  • If you wanted to add a comments count in addition to views and votes, how would you do that?
  • What happens when the views number gets beyond 10,000?
  • What if you wanted to add favorites, or flagging for inappropriate content?
  • If we decided to hide the thumbs down, how would this visually balance?
  • If we wanted to fit more thumbnails onto a browse page, how easy is it to shrink the main thumbnail?
  • etc.

The above design is an example of a “closed” design where everything fits just right, but makes it very difficult to add or remove elements. There’s an exact balancing of all the parts of the element, which makes it very sensitive.

Many of the solutions to these questions require building out new pieces next to the element, which throws it off balance. Thus, if the above design were used in an A/B test, the visual look would be immediately ruined.

Open designs that are A/B test-friendly
Let’s compare this to the elements below, which have a more modular design that can scale vertically:

[Image: the “open” design example]

The above elements don’t have the same “just right” visual appeal, but make it much easier to add and remove content. The key design decision is to add multiple bands of content which can be grouped together and extended vertically. Ideally, you would never end up with a repeating tile of 4-buttons and 3-stats, but you could certainly test it much more easily than with the closed design.

Here are some of the variations that can easily be tested:

  • Switch the title section and the stats/buttons sections
  • Add and remove buttons (or no buttons!)
  • Add and remove stats (or no stats!)
  • Combine price tags with other stats
  • Try different buttons
  • etc.

Following an open design for page elements enables substantial A/B testing within some flexible constraints. You may still be tempted to do something crazy like big hover overlays, <BLINK> tags, and other stuff, but at least it becomes easy to test a wide variety of low-hanging fruit. It also lets the owner of the overall visual design maintain a central “style guide” while still offering enough flexibility to keep people creative.
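As an aside of my own (not from the original post), the “bands” idea can be pictured as an ordered list of interchangeable parts, which makes the space of testable variants easy to enumerate. The band names below come from the examples above; the enumeration logic is just a sketch:

```python
from itertools import permutations

# An "open" element is just an ordered list of independent bands.
element = ["title", "stats", "buttons"]

# Because bands are modular, variants are cheap to enumerate:
# reorder bands, drop a band, or add a new one.
def variants(bands):
    out = []
    for perm in permutations(bands):       # reorderings
        out.append(list(perm))
    for band in bands:                     # removals ("no stats!", etc.)
        out.append([b for b in bands if b != band])
    out.append(bands + ["price_tag"])      # an added band
    return out

for v in variants(element):
    print(v)
```

With a closed design, by contrast, each of these variants would require rebalancing the whole element by hand.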

This same idea of open designs with horizontal bands of content can be applied to whole pages too – let’s examine a page from the king of A/B testing, Amazon.com.

Open page layouts
From the snapshot below, you can see that Amazon groups the center column of content – each element has a title explaining what it is, a list of items, and a navigation link to see more. This is also true of the item detail pages, which use a similar grouping to show everything from similar books to reviews to other elements. These pages can get very long, but because most of the content is below the fold, it’s easy to get away with.

I’ve been told that this modular design enables Amazon to take a “King of the Hill” approach, testing each horizontal band of content against the others. Different software teams will create different kinds of navigation and recommendations, and if a band causes people to click through and buy, it floats up higher on the page. This kind of systematic A/B testing is much easier when the design has the flexibility for it.

Here’s a snapshot for a reminder of what this looks like:

While you may argue that Amazon’s design is cluttered and actually sucks, this approach lets them take a very experimental approach to pushing out features. It makes it very low-cost to implement a new recommendations approach and try it out, without needing to figure out how to design it into the UX.

What’s next? Modular user flows?
Of course, if you can take a modular approach to scaling individual page elements or entire pages, the next question is whether you can take this approach to user flows.

I’ve never seen anyone do this, but this is how it might work:

  • Any linear user flow is identified in a product (like new registration, payment flow, etc)
  • This flow might be 1 page, or broken into N pages
  • Similarly, every individual page might have a bunch of fields (like photo, about me, etc.)
  • As part of the A/B testing process, you might want to drop a new page (or new fields) into the flow
  • Then an optimization process shuffles pages throughout the flow to identify the best page sequence and page-by-page configuration

You might imagine something like this could be a very powerful process, as it would allow you to identify whether you should offer a coupon pre-transaction or post-transaction, or where an input field should be placed on any given page.
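A toy version of that shuffle-and-measure process might look like the sketch below – the page names and conversion numbers are entirely invented, and in practice each ordering would be measured on live traffic rather than looked up in a table:

```python
from itertools import permutations

# Hypothetical pages in a registration flow.
pages = ["photo", "about_me", "payment"]

# Stand-in for live A/B measurements: in a real system each ordering
# would be served to a slice of traffic and its completion rate
# recorded. These numbers are invented for illustration.
measured_rate = {
    ("photo", "about_me", "payment"): 0.21,
    ("photo", "payment", "about_me"): 0.18,
    ("about_me", "photo", "payment"): 0.25,
    ("about_me", "payment", "photo"): 0.17,
    ("payment", "photo", "about_me"): 0.12,
    ("payment", "about_me", "photo"): 0.14,
}

# The "optimization process" is then just: pick the best-performing order.
best = max(permutations(pages), key=lambda seq: measured_rate[seq])
print("best ordering:", best)
```

The hard part in practice isn’t this selection step – it’s instrumenting the flow so pages and fields really are interchangeable modules.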

For those who want to know more, I have written a bunch more about A/B testing here.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

September 21st, 2009 at 8:30 am

Posted in Uncategorized

Netflix on their Freedom and Responsibility culture


Pretty fascinating slides – it’s 128 pages, but worth flipping through – see below. Those on RSS feeds, you can find a link to the presentation here. I found this via Bob Sutton, who writes:

This slideshow was on a number of blogs over the summer (see here), but I wanted to make sure that everyone saw it and, frankly, to get a post here so I have a record of it. Apparently, Reed Hastings, the amazing CEO of Netflix, put up a set of 128 slides that is a “reference guide to our freedom and responsibility culture.” I realize that most 128 page slide decks are deadly dull, but this is an exception. You may not agree with all their values and approaches, but on the whole I think you will be fascinated by the detail and thought. Now, I have no inside knowledge of what it is like to work at Netflix, but if this is accurate, it is a pretty impressive company — frankly far more enlightened than most in Silicon Valley.

Written by Andrew Chen

September 20th, 2009 at 2:09 pm


Why low-fidelity prototyping kicks butt for customer-driven design


Low-fidelity prototyping versus high-fidelity prototyping

In my discussions with designers, one of the interesting recurring conversations is the tools and process they use to prototype and mock up experiences. In particular, there’s a lot of divergence on how high or low-fidelity to go with a prototype.

For designers that primarily come from agency backgrounds, I’ve found that there’s an emphasis on quickly getting to a near pixel-perfect mockup, and the variations are minor in detail. In that worldview, the ideal deliverable is a single version of something that feels high-quality and gets minor feedback from clients. In the client-agency model, if you give your clients a bunch of rough mockups that seem low-fidelity, then you risk looking unprofessional. Or worse yet, you might get a ton of diverging comments that you then have to work out and iterate – in some cases that’s the last thing you want to do.

This agency-style approach is especially poor for companies that deliver products directly to customers – in that case, you mostly want your product to be the right one, no matter how many iterations it takes. As a result, low-fidelity prototyping can be really useful, because it supports an iterative, customer-focused approach rather than one where the Great Designer comes up with something directly from his brain.

Here’s a couple of the main advantages:

  • Get better and more honest feedback
  • It’s great for A/B testing
  • Make the cost of mistakes cheap, not expensive
  • Refine the page flow, not the pages
  • Figure out the interaction design rather than the visual design

In addition, after I’m done arguing my point, I’ll recommend some of the tools that have been useful for me in doing low-fidelity prototyping.

Get better and more honest feedback
The first time I really understood the power of low-fidelity prototyping was when I started doing usability tests on consumer products I had built (this was years ago). Initially, I wondered why anyone did paper prototyping at all. I immediately concluded that it was due to a deficiency of many designers: they couldn’t write code, and thus couldn’t do HTML mockups of the products they wanted to build.

But once I started getting people to view and interact with my prototypes, I realized that people don’t give good feedback when the prototype you present to them is too perfect. Rather than telling me about the really high-level things, like “does the value proposition make sense?”, they would focus on colors, fonts, the layout of the page, etc. Furthermore, they didn’t feel they could jump in and build on top of the ideas you showed them, because it was far beyond their capability to duplicate.

Compare this to a simple exercise where you are using hand-drawn cards or drawing paper and are literally sketching stuff out during a customer interview – you’re much more likely to try something out, and have the person you’re interviewing grab your pencil and say “no, more like this!” And that’s exactly the kind of interaction you want.

It’s great for A/B testing
As far as a metrics-driven approach goes, you have to remember that techniques like A/B testing fundamentally thrive on variety. In particular, they thrive on variety at the UI layer, where many small UI changes that cost very little technically can be tried out and optimized. As a result, you don’t want 2-3 pixel-perfect mockups; you actually want 10 or 20 rough mockups from which you can select only the highest-variance candidates.

Some of the highest-variance stuff has to do with changing the order in which you do things, opt-in versus opt-out, or richer AJAXy interactions. These are all things where it’s easier to generate many candidates through low-fidelity prototypes, since you’re often looking at things from a systems level.

Make the cost of mistakes cheap, not expensive
One of the hidden benefits of a low-fidelity prototyping process is that it makes changing direction much easier, which naturally facilitates a collaborative design discussion. When you’re using a customer-driven product philosophy that incorporates a lot of outside metrics and qualitative feedback, you’ll probably get multiple people involved in the design process. If the design is done by one person or a small group, and is polished significantly before it reaches the greater group, it discourages collaboration. It’s very hard for people not to get defensive when they’ve spent a lot of time polishing something only for it to get changed significantly. Using a low-cost process means you can try a lot of variations cheaply, without the emotions involved.

Refine the page flow, not the pages
One of the highest-leverage design decisions you can make is not about the look of an individual page, but what happens before and after it. For example, you can take a multi-step process and condense it onto one page, or change the ordering so that users do something first and then register, rather than the other way around. These kinds of design decisions ultimately focus on the order and flow of the user, rather than the look or interactions of any specific page. If you go with a low-fidelity approach, it’s easy to draw lots of small pages and link them up in a flow, and to do things like cross pages off or change the ordering of a funnel – all of which feel natural when the prototype is very rough. Otherwise, it’s too easy to get caught up in the details of the “right” way something works without exploring the options.

Work your way up from low-fidelity to high-fidelity
Of course, you want to make sure that you use the right process for the right job. So there’s nothing wrong with high-fidelity prototyping, especially when you are in the later stages and thinking about issues like branding, look and feel, and all those other details. One way to keep this process going is to have multiple rough prototyping checkpoints so that design decisions are constantly getting refined – maybe the first step is a sketch on a paper, the next is a rough mockup on the computer, then a detailed mockup, then a rough built-out version, and then iterate to the final product. These steps make it so that all the design decisions are well understood, refined, and debated all the way through.

Tools I recommend
Finally, a couple recommendations on tools for paper prototyping:

  • Number 2 Pencil :-)
  • Giant art pad for drawings – you can get these at an art shop or office store
  • Balsamiq (check out the video on the linked page)
  • Macromedia Fireworks

In general, nothing beats pencil and paper, but that’s just me ;-) I’ve been told that for people who aren’t comfortable drawing, using tools like Balsamiq helps quite a bit.


Written by Andrew Chen

September 15th, 2009 at 9:16 am


Building the initial team for seed stage startups


Seed funding and building out the initial team

One of the most exciting events for a startup is landing seed funding, which transforms a “2 dudes in a living room idea” into something with much more potential. I wanted to summarize a couple things that are relevant for this stage, learned from personal experiences and conversations with other entrepreneurs. This blog is targeted at startups who have raised their first $500k-1M of funding, which often leads to hiring 4-6 people – this first batch of folks is critical, of course.

Here’s a quick outline of some of the things I’ve encountered:

  1. Hiring T-shaped people versus specialists
  2. Try to get doers
  3. More candidate flow solves a lot of problems
  4. Interview for the actual work you’ll be doing, not skillset trivia
  5. Raw intelligence is just one factor – don’t overestimate it

There are many more topics, of course, but this is a good start – let’s dig in:

Hiring T-shaped people versus specialists
One of the truisms in startupland is that everyone has to wear many hats – backend programmers might have to pitch in and do some feature work, designers might have to write some marketing copy, and the CEO might have to vacuum the office ;-) Just as importantly, if you believe that startups are fundamentally undergoing a process to learn about their customer and the market, then you need versatile people who can see distant connections between a variety of topics. So you want generalists, but a specific kind.

I’ve come to believe that the first batch of people you want on your team are going to be T-shaped, meaning they are broad in a bunch of different areas and deep in a particular one. The breadth of skills gives them enough common context that they can have conversations with anyone on the team about anything, but the depth gives them a source of knowledge that makes them vital to the team.

Testing for this can be as simple as asking a deeper set of questions when interviewing candidates, and asking them to do exercises that are outside of their stated skillset. Most engineering interviews are specialized enough that there are coding questions, but not many that I’ve seen also include an interview around product creation or UI design. Similarly, when discussing candidates, you’ll want to give equal weight to their depth area as their breadth areas.

Watch out for people who are so deep in one area that they seem to be overspecialized – it can be a signal for a lack of interest for pitching in on areas that might be vital for your team, or they may have nothing to do if your startup inevitably changes directions.

Try to get doers
It’s very important to hire people who are execution-focused early on. You just don’t have a lot of room for senior people or “philosophers” that don’t immediately contribute value in the product development process. When it comes to seniority, I’ve often liked to hire people who have recently had titles as team leads or directors, but nothing more senior. That way, you get people who are used to the responsibility of leading a team, yet are still low enough to the ground to have immediate impact. This is why people who are fresh out of consulting or banking backgrounds make for impractical partners – they are too focused on strategy and financials when you really should be focused 100% on concrete products and customers.

The other type that you find that’s not execution-oriented is the philosopher type. These folks often interview really well, are familiar with a wide spectrum of things and often experiment with new technologies all the time. The hard part about adding these folks to your early team is that they may be more interested in reading blogs and indulging themselves intellectually than really working hard on a team to get a lot of work done.

More candidate flow solves a lot of problems

For most seed stage startups, getting the first 2-3 people usually won’t be a problem – you’ll have people in mind, or people who are in your immediate group of friends who are easily accessible. What’s much harder is once you move beyond your immediate network, where you may find:

  • People you want have jobs and aren’t interested
  • As an entrepreneur, you know lots of entrepreneurs who want to start something, not join something
  • Lots of “OK” people who are interested, but who are hard to get jazzed about
  • etc.

It’s easy to slip into a state where bars get lowered, things you didn’t want end up accommodated, and all sorts of other problems arise. Or you’ll have interviews where the person was OK but not great, and you really want the skillset.

All of this hand-wringing can be solved if you find a repeatable model for contacting qualified people and getting them in the door. I’ve found that when you start hiring for a new job role, it’s hard to figure out who a perfect candidate is – it isn’t until you see 10 candidates that you start to hone in on what you really want and like. Figuring out the repeatable process that works for you is the hard part – but you want to find communities where your ideal candidates are already involved, and start talking to as many of them as possible. This may be Newgrounds for Flash people, or the Firefox extensions directory for browser folks.

Interview for the actual work you’ll be doing, not skillset trivia
I’ve previously written a bunch about my feelings on this topic, so I’ll just link here. The short of it is that I think most interview processes suck because they aren’t actually tests of what it would be like to actually work together. The ideal interview, imho, is just to interview, then work together for 2 months and do a checkpoint to see if it’s working OK. But because most people looking for a job aren’t willing to do that, having a 3-day “working interview” is a reasonable substitute as well.

Raw intelligence is just one factor – don’t overestimate it
All the young, energetic entrepreneurs I know want to hire other people like them – high-horsepower people who work hard. As a result, you can organize an entire hiring process around intelligence, full of puzzles and brainteasers, and reward anyone who thinks quickly on their feet. I’ve found that this actually sucks as a minimum bar for hiring people – it’s just as important to evaluate things like passion for the area you’re working in, their reasons and goals for being at your startup, etc. The reason you need to evaluate this stuff is that startups are really hard, and often take more time than you think they will – as a result, it’s important to understand peoples’ motivations to make sure it’s a good match from the beginning.

A couple of the things that are useful to evaluate:

  • How do they see work fitting in their life? Does the style match? (Mornings vs late, hours, etc.)
  • Level of process and improvisation – how are decisions made, how well are things spec’d and defined, etc.
  • Mutual goals and motivations – money versus serving a goal versus learning versus others
  • What stage of startups do they like? Seed, mid, later on? Why?
  • Long-term goals – create a great lifestyle company, or try to go big?

I think questions like the above should get equal billing with testing for skillset – these are the kinds of things that impact long-term collaboration and performance as much as mastery of skills and knowledge.

Figuring out the alignment of these soft skills reminds me a lot of this great interview with John Doerr of Kleiner Perkins, where he discusses Missionaries versus Mercenaries – you can watch the video here. Ideally, you find people who really understand and believe in the mission of the company. If you have really smart mercenaries on the team, it may work well while things are going well, but there will be significant issues if there are any hiccups (and there are bound to be some).

As a side note, this is often a problem with hiring metrics-oriented people, because their passion and interests aren’t as much in the value that’s created through the product as much as figuring out how to make the metrics go up. This can create a very revenue-oriented mercenary culture that leads to weird company vision problems. I think the ideal scenario is to find people who have a passion for the particular product you’re bringing to market, and then training them to be metrics-oriented, versus taking people who love numbers/data/algos and trying to train them to love a particular product area.

Comments? Any lessons to share?
If so, please comment below – would love to hear from other early stage startups and their lessons in hiring the first 10 or so people.


Written by Andrew Chen

September 14th, 2009 at 8:30 am


BBS door games: Social Gaming innovation from the 1980s



I was always more of a Barren Realms Elite fan, but this picture was cooler!

And now you learn how I spent my childhood…
Some of the fondest memories from my childhood are of playing BBS door games. Back before the web existed, I was a 10-year-old kid in Seattle dialing into 3 or 4 different BBSes using a pirated version of Procomm Plus. There, I found that you could download all sorts of awesome products (though in 20 different parts, which you had to put back together), and more importantly, BBSes could launch “door games” like Tradewars 2002, Legend of the Red Dragon, Barren Realms Elite, and a number of other games I grew to love. I spent so many hours tying up the phone line after getting back from school that eventually my parents banned me from playing these games – but that only convinced me to set an alarm for 3am to wake up in the middle of the night to play!

Furthermore, it became a critical thing in my life to get all my friends from school to also play these games with me. Together, we’d team up into a powerful, coordinated unit, and dominate the other players on the BBS and regional network. (Or at least that was the plan) Eventually years passed, I learned the internet was more than gopher, and I moved on to better and more powerful massively multiplayer games.

It’s obvious, in retrospect, that a lot of the door games that existed 20 years ago pioneered a lot of the same techniques that social games use today. Let’s drill into a bunch of these similarities, covering the following topics:

  • Door games are the “apps” to the BBS platform
  • Social gameplay with friends and neighbors
  • Turn-based, RPG-like gameplay
  • Low-tech graphical experiences, delivered as a persistent social experience

If I’m missing anything, please leave a comment! Anyway, let’s get started…

Door games are the “apps” to the BBS platform
First, let me describe what a BBS actually is – you can read a more official version on Wikipedia here. Anyone with a phone line, modem, and computer running the right software could start up a BBS. The software would tell the computer, when it received a call, to automatically pick up the line and start talking to the computer on the other end. On the other side, anyone could dial into the BBS with the right terminal software and once the connection was established, you’d get a screen that looked something like this:

On these BBSes, you’d typically find a couple different sections:

  • Private communication (reading and writing messages)
  • Public communication (message boards)
  • File sharing (downloading and uploading)
  • External applications, including Door games

The external applications were integrated with the BBS as described by Wikipedia:

The BBS software starts the external program, and the door system passes data back and forth between the door program, the BBS, and the remote user. To supply the door program with the user’s information (such as the user’s alias and the amount of time they had spent online), the BBS software creates a dropfile containing information for the program to read.

This “dropfile” typically contained all the user information, so rather than the standard API where the app asks for that information, instead it was provided in one big file. The door game would then parse this data and use it for the game. For the extra nerdtastic detail, you can go here for the dropfile specs.
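As a rough illustration, a door game’s first step would be parsing that dropfile. The three-field layout below is invented for simplicity – real formats like DOOR.SYS or DORINFO1.DEF define their own (much longer) field orderings:

```python
# A dropfile is just a plain text file, one field per line.
# This layout is a simplified invention for illustration; real
# formats (DOOR.SYS, DORINFO1.DEF, etc.) each define their own.
sample_dropfile = """CaptainCrunch
42
2400
"""

def parse_dropfile(text):
    lines = text.splitlines()
    return {
        "alias": lines[0],              # user's handle on the BBS
        "minutes_left": int(lines[1]),  # time remaining this call
        "baud_rate": int(lines[2]),     # connection speed
    }

user = parse_dropfile(sample_dropfile)
print(user["alias"], "has", user["minutes_left"], "minutes left")
```

The key point is the inversion relative to a modern API: instead of the app requesting user data call by call, the BBS pushes everything to the door in one file up front.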

Now of course this process of extending the BBS’s functionality isn’t as flexible as things are now – after all, you couldn’t just upload a new Door game to a BBS and get it running. You also couldn’t update your game and instantly propagate the new version out to all the users. But the central idea is still the same.

Social gameplay with friends and neighbors
One of the interesting properties of BBSes is that because they are all accessed by phone, and you don’t want to pay long-distance bills, you end up dialing into BBSes in your own area code. For me, that was 206, and it means that I was mostly gaming with people in my same regional area. Similarly, I convinced all my friends to also dial into the same BBSes and play games with me. For the games that had leaderboards and user-to-user interaction, it was easy to feel the same fun game-like motivations that make social gaming work today.

As an aside, while doing research for this blog post, I found this funny ASCII based dating site with Myers-Briggs based writing, and a Wall!

You can tell from the number of dudes on the screen above that the world of lonely nerds has not changed much over the last 20 years.

Turn-based, RPG-like gameplay
One of the big design challenges for any BBS game is that you want to encourage social gameplay, yet it can’t be real-time. This is because, of course, multiple people can’t be logged into the same BBS simultaneously unless the BBS had a ton of different phone lines (not likely). As a result, each of the door games had to support a social, asynchronous play style that allowed people to dial in one after the other and still engage.

The way that was done depended on the game, but it usually combined a mix of computer players (aka NPCs) with “slow” real-time action where each loop of action lasted a day. On any given day, you are given a number of turns to expend; once you play them, you wait until the next day to get more. In a game like Tradewars 2002, you could log in, do your trading/mining/attacking, and interact with computer-controlled characters. If you encountered another player’s ship, you could interact with them while the computer controlled the other player – so if you attacked them, they would automatically defend.
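That daily turn allocation can be sketched roughly like this (a simplification of my own; the numbers are illustrative, not taken from any particular game):

```python
TURNS_PER_DAY = 50  # illustrative; each game set its own allotment

class Player:
    def __init__(self):
        self.last_day = None
        self.turns = 0

    def login(self, day):
        # Turns refill once per calendar day, not continuously.
        if day != self.last_day:
            self.turns = TURNS_PER_DAY
            self.last_day = day

    def act(self):
        if self.turns == 0:
            return False  # come back tomorrow
        self.turns -= 1
        return True

p = Player()
p.login(day=1)
for _ in range(TURNS_PER_DAY):
    assert p.act()
assert not p.act()  # out of turns for today
p.login(day=2)
assert p.act()      # fresh allotment the next day
```

This simple refill rule is what makes asynchronous play fair: everyone gets the same budget per day regardless of when they dial in.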

Some of these games played very much like RPGs, with levels, currency, monsters, swords, quests, and the usual mechanics. One of the most popular games was called The Legend of the Red Dragon:

Having the combination of social gameplay with the traditional RPG mechanics created a rich world that allowed for months of play time.

Low-tech graphical experiences, delivered as a persistent social experience
You may notice from all of these screenshots that these games are very, very basic. Many of my friends at the time criticized the simplicity of the gameplay, only to be sucked in for social reasons :-) As in the current incarnation of social gaming, the emphasis is not on graphics. The advantages of a persistent, continually updated world with social gameplay far outweighed the fact that downloadable single-player games had much better graphics. Of course I still downloaded and played Wolfenstein and Doom when they first came out, but throughout that time, I still played BBS games.

If there’s one thing to be learned from BBS games and their close cousins, MUDs, it’s that great social interactions can trump pretty much everything else. Of course, products that can deliver high production values and great social experiences are even better off.

It was fun to write this article! Such a blast from the past. Hope you share your memories of BBSes in the comments :-)


(This blog was republished by the excellent blog Inside Social Gaming)

Written by Andrew Chen

August 25th, 2009 at 8:30 am


How desktop apps beat websites at building large active userbases


Why does everyone hate on desktop applications? Homer and spiderpig love them.

Desktop apps have better retention, while websites have better user signup rates – which factor wins?
There’s a lot of conventional wisdom that it’s dumb to build a desktop app, and it’s marketing suicide to do so.

This argument usually has two parts:

  • Poor conversion rates: Maybe 1-2 out of every 100 visitors will download your application
  • Poor virality: Since desktop apps are often standalone utilities, they lack the social element that makes them viral

You can compare this to web products, which often have 10%+ registration rates – up to 50% when users arrive through friend-to-friend invitations.

It seems obvious that going down the web route is a slam dunk, but in fact it’s not – desktop applications often have better long-term retention, and this can easily offset the lower download-and-install rates. The way to look at this is that the number of active registered users is a function of signup rate AND retention, and you can balance one against the other. (And of course, ideally you have both.) To me, this discussion opens the way for more innovation in browser extensions, downloadable apps, and other low-signup-% products, as long as the long-term value is great enough.

Note that this blog post will focus exclusively on the signup rate versus retention rate, and leave the virality discussion for another day. Let’s dig into this further.

Comparing applications versus websites
Here’s an example of the simple differences between the two channels:

              Total users    Signup %    Registrations
Application       100,000          1%            1,000
Website           100,000         10%           10,000

Starting with the same number of new unique users, it’s obvious that this can lead to a huge difference in account registrations.

However, because web products are so easy to get into, they are also easy to get out of – it’s hard to be sticky. The one true retention mechanic is using e-mail notifications to get the user back. Compare that to desktop products, which use techniques like:

  • Open itself whenever a file extension is clicked
  • Install itself on the system tray
  • Add itself to the desktop
  • Start up automatically when the OS loads
  • Run nicely in the background to pop up when appropriate
  • … and many other retention-happy features

(Of course, you should never use these techniques without contributing value to the user, lest you get uninstalled and reported to Symantec).

Similarly, a world of in-between, web-triggered applications is also emerging – Firefox extensions and Adobe AIR apps – which are easier to install than desktop apps but still take advantage of a wider set of retention hooks to stay relevant.

So when all of this has been taken into account, you can see how our 100,000 new users fare after a couple time periods below. Here’s a table that describes two retention rates period-over-period, and how many active users are left after each period, starting with the initial numbers (1k vs 10k) discussed previously:

              Retention    Period 0    Period 1    Period 2    Period 3    Period 4    Period 5
Application         80%       1,000         800         640         512         410         328
Website             50%      10,000       5,000       2,500       1,250         625         313

As you can see, over the course of 5 periods, the application ends up with more active users than the website, even though it started with 1/10th the registered users.

Note that retention rates usually improve period-over-period, and are not constant as shown above – I’m just using a constant retention rate so that we can simplify the math in the next section!
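The tables above are easy to reproduce. Here’s a minimal Python sketch (the 1,000/10,000 starting counts and 80%/50% retention rates are just the illustrative numbers from the tables, not real-world data):

```python
# Project active users forward under a constant period-over-period retention rate.
def active_users(initial_signups, retention, periods):
    # int(x + 0.5) rounds half-up so the figures match the table above.
    return [int(initial_signups * retention ** t + 0.5) for t in range(periods + 1)]

app = active_users(1_000, 0.80, 5)    # desktop app: 1% of 100,000 signed up
web = active_users(10_000, 0.50, 5)   # website: 10% of 100,000 signed up

print(app)  # [1000, 800, 640, 512, 410, 328]
print(web)  # [10000, 5000, 2500, 1250, 625, 313]
```

By period 5 the app (328 actives) edges out the website (313), despite starting with a tenth of the registrations.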

Looking at the math
For the readers that fall asleep when an equation is shown, please skip this section :-)

Ultimately, the function for describing the number of active users at any period is:

# of active users at time t = initial user signups * (retention rate)^t
= (new users * signup %) * (retention rate) ^ t

So if you have 100,000 new users, a 10% signup rate and 50% retention rate, then your equation looks like:

# of website actives at time t = 100,000 * 10% * (0.50)^t

If you want to calculate when a website’s active users falls below a desktop app’s active users, you can set the two equal to each other and solve for t:

100,000 * 10% * (0.50)^t = 100,000 * 1% * (0.80)^t
10% * 0.5^t = 1% * 0.8^t
10% / 1% = 0.8^t / 0.5^t
log 10 = log(0.8^t/0.5^t)
1 = log(0.8^t) – log(0.5^t)
1 = t * log(0.8) – t * log(0.5)
1 = t * (log(0.8)-log(0.5))
1 = t * log(0.8/0.5)
t = 1 / log(0.8/0.5) = 4.9

Thus, after 4.9 periods you’d see the higher retention product start beating the high signup product. In the cases where they never intersect, you’d get a negative number there. I will leave it as an exercise for the reader to solve this in the general case where you know that a website’s signup rate is X times more than desktop app, and Y times in retention.
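The general exercise has a closed-form answer: if the website’s signup rate is X times the app’s, the curves cross at t = log(X) / log(retention_app / retention_web). A quick sketch (the function name and parameters are mine, not from the post):

```python
import math

def crossover_period(signup_ratio, web_retention, app_retention):
    """Periods until the high-retention app overtakes the high-signup website.

    signup_ratio is X, the website's signup rate divided by the app's.
    Returns a negative number when the curves never cross, matching
    the note about non-intersecting cases.
    """
    return math.log(signup_ratio) / math.log(app_retention / web_retention)

print(round(crossover_period(10, 0.50, 0.80), 1))  # 4.9, as derived above
```

With the 10x signup advantage and 80% vs. 50% retention from the example, this reproduces the 4.9-period crossover.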

Conclusions
Ultimately, I believe my calculations show that desktop apps have natural advantages (and disadvantages), but are not strictly worse than building a web property. You still need a long-term value proposition that drives great natural retention. You need expertly-done “hooks” into the OS, email, and other notification systems that encourage repeat usage. And finally, you need social hooks into viral channels (whether web or beyond) that encourage virality and user-to-user interaction.

I think it’s not a surprise that there have been great success stories in desktop apps in recent years, such as Skype, Twitter clients, new browsers, and other tools that follow the design patterns of the above. And of course, nothing beats building a killer product that spreads naturally through word-of-mouth – that said, you can stack the deck using great retention and virality :-)

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

August 24th, 2009 at 8:30 am

Posted in Uncategorized

iLike, Lookery, Google Voice: Recent platform lessons from app developers

without comments


Sometimes platforms can be dangerous to your health…

Recent platform news
I’ve been interested in the rush of recent news about the various challenges that app developers are facing relative to the platforms they’re building on, whether that’s Facebook or iPhone:

Reading the excerpts
Each of these stories is slightly different, but worth repeating – here’s the relevant paragraph from Scott Rafer’s Lookery post:

So far so good on using an ephemeral opportunity to create a company, but this is where I place Coulda-Shoulda #1. We exposed ourselves to a huge single point of failure called Facebook. I’ve ranted for years about how bad an idea it is for startups to be mobile-carrier dependent. In retrospect, there is no difference between Verizon Wireless and Facebook in this context. To succeed in that kind of environment requires any number of resources. One of them is clearly significant outside financing, which we’d explicitly chosen to do without. We could have and should have used the proceeds of the convertible note to get out from under Facebook’s thumb rather to invest further in the Facebook Platform.

Similarly, here’s the excerpt from the recent article about iLike:

Some in Silicon Valley have speculated that MySpace isn’t willing to pay more for iLike because it fears Facebook will boot iLike once its main rival takes control of the service. But that doesn’t go far enough in describing the situation, said one of the sources. What has pushed iLike’s valuation down is a problem with control. The company’s managers have no way to prove to potential acquirers that their business model has a bright future because they can’t predict from one day to the next which direction Facebook’s Platform will go. The source said that leaders at iLike, or any other company on the platform, are not truly in control of their fate–Facebook’s Mark Zuckerberg is.

“The cash flow of any company doing business on Facebook’s API, or Facebook Connect, or Facebook platform is inherently at risk,” said the source. “The multiple that an investor can place on that cash flow is not that much greater than 1, because you never know at which point Facebook could change the terms of the relationship or change the technology and cut off that cash flow.”

And finally, there’s the discussion of the iPhone platform: Jason Calacanis makes his case in 5 parts, with the last 3 points involving the App Store platform:

  1. Destroying MP3 player innovation through anti-competitive practices
  2. Monopolistic practices in telecommunications
  3. Draconian App Store policies that are, frankly, insulting
  4. Being a horrible hypocrite by banning other browsers on the iPhone
  5. Blocking the Google Voice Application on the iPhone

The mismatch of agendas
Ultimately, the vast majority of these disagreements between platforms and applications seem to come from an inherent mismatch of agendas between the two parties. Applications seek to maximize their distribution and gain customer share, while minimizing their dependence on any particular channel. Platforms, meanwhile, seek to control the applications that depend on them, and prioritize the long-term success of the application ecosystem over any individual application. The ecosystem around a platform is complex because there’s a 2-sided market built in – the customers the platform serves, and the applications that want to use it. Prioritizing application developers above all else leads to sloppy, disjointed experiences – that’s one of the things you have to admire about Apple’s tight-fisted approach to App Store discovery, payments, etc. I haven’t had a bad experience yet, versus the constant complaint comments I get from disgruntled users whenever I write about Super Rewards or Offerpal.

As I wrote about in my previous blog post Benefit-Driven Metrics, ultimately the platforms should try to help the applications that build on them make as much sustainable revenue as possible. As long as there’s a long-term business there, more developers will continue to be attracted to building more functionality and richness. I believe the Facebook economy has (surprisingly) shown itself capable of supporting several VC-backed companies making tens of millions in revenue, whereas the same cannot be said for the iPhone platform yet. I’m sure someone in the mobile world will figure it out eventually, though.

For any application developer though, the core lesson is – don’t get too comfortable ;-)

Conclusion
How will these recent issues steer the platform agenda in the future? In particular, let’s look at iLike’s exit – what are the implications for other startups?

Will people draw negative conclusions about:

  • music startups
  • ad-based app companies
  • startups that are building horizontal apps on Facebook/Twitter
  • any startups building on one of these platforms

Perhaps it will be all 4, or perhaps just localized to a particular sector. Only time will tell.

Hope you enjoyed this article, and leave me any comments if you have extended thoughts!

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

August 22nd, 2009 at 3:42 pm

Posted in Uncategorized

To my first 10,000 blog subscribers: Thank you!

without comments

To my first 10,000: thank you for reading! (And I will try to blog more)

Lately I have been tweeting more than anything else, and if you haven’t followed me already, you can do so here: @andrewchen.

Best,
Andrew

PS. I updated my list of essays recently, so all the latest stuff is on there. I wanted to include it below, for your convenience.

Viral marketing and user acquisition

For web entrepreneurs, growing your userbase is a key challenge, alongside product development and financing. These posts emphasize a quantitative approach to getting traction and growing users.

Engagement and product design

Using principles from game design and analysis of consumer behavior, these essays cover the process of creating experiences your customers will love.

Freemium and online ad monetization

Social web products have unique characteristics as they apply to online advertising and direct monetization. These posts cover some of the issues around key topics such as ARPU, conversion funnels, CPM rates, behavioral data, revenue modeling, etc.

Metrics

Without metrics, web entrepreneurs are just flying blind. These essays cover some of the organization and development issues around instituting a metrics system – what to measure, in what order, and how to implement them.

Media and games

Traditional media, including TV, music, games, and movies are at a crossroads. Here are some thoughts on how the industry is changing and evolving.

Entrepreneurship and startup life in San Francisco

Just a couple thoughts on things I’ve encountered while arriving in SF.

Written by Andrew Chen

August 19th, 2009 at 1:54 pm

Posted in Uncategorized

What if interviews poorly predict job performance? What if dating poorly predicts marital happiness?

without comments

Weird, contrarian business ideas

One of the best books I’ve read recently is Hard Facts, Dangerous Half-Truths, and Total Nonsense by Stanford Professor Bob Sutton. (He also now has a great book called The No Asshole Rule, which you may have heard of.) In Hard Facts, he covers a variety of common business topics, comparing the academic research on each against what paid management consultants often preach.

In particular, one of those questions is this: there’s a ton of anecdotes around the idea of The War for Talent, popularized by phrases like “A-Players hire A-Players, B-Players hire C- and D-Players,” etc. Embedded within many of these notions is, of course, the really big assumption that you can actually interview for talent – that interview processes actually work. In Hard Facts, Professor Sutton points to a bunch of research showing that the hiring process doesn’t work well: if you look at the marks people get coming out of a hiring process versus the on-the-job marks they get in their first year, they are not correlated at all.

I personally find the idea that interviews are poor predictors of job performance both unsurprising and troubling! Interviews predicting job performance seems like one of the core building blocks of American business.

This has been a particularly interesting topic for me to think about because of the differential between technical and non-technical interviews. All of the non-technical interviews I’ve ever been involved in have been terrible, and I’m still not 100% satisfied with my thoughts on how to improve them.

Anyway, I wanted to embed a video interview of Professor Sutton discussing his book below, which you can watch at your convenience. Unfortunately he doesn’t mention job interviews in it, but he talks about a bunch of other interesting stuff.


Short-term activity used to predict long-term activity
In fact, the core of job interviews really is about using some short-term activity (like dating, interviews, etc.) to try and predict some longer-term success (marriages, job performance). These little prediction scenarios pop up all over the place, and they’re inherently subjective.

Here are some other places where this takes place:

  • Investors evaluating a pitch, in order to invest in a company
  • Looking at headshots to cast someone in a play
  • Reading the script of a movie to greenlight the film
  • Being a great premed student versus becoming a doctor
  • etc.

Some of these evaluation processes are closely aligned with the actual long-term activity, but sometimes they are not. For job interviews, the skills needed to get your resume noticed by a recruiter, filtered up to a hiring manager, and then to pass an interview are hugely different from the skills needed to actually do the job in a team setting. This also reminds me of a common discussion I had with pre-meds back in college, where the primary short-term selection criteria seemed to be acing the easiest classes possible, pulling all-nighters, and memorizing obscure biology/chemistry textbooks. However, in the long run, being a good doctor was as much about dealing with people – be it other doctors, nurses, or patients – as it was about having good grades.

Is dating a good way to predict marriages?
OK, before we jump into job interviews, let’s talk about something more fun: Dating ;-)

Traditional dating is a funny thing because of how contrived it is in many ways. In particular, you might consider dating to have characteristics like:

  • Pre-defined activities designed to be fun and happy
  • Strong cultural, familial, and peer pressure that specifies standards and traditions
  • Low sample size relative to long-term marriage commitment (dating for 6-12 months and then intending to be married for 60+ years)
  • Relatively strong separation of stuff like finances, scheduling logistics, etc.
  • And of course, no sense of what raising children is like

Contrast this to actually being married, as imagined by an unmarried guy like myself:

  • Long stretches of normal, domestic life that can be exciting, but often aren’t
  • A very long-term outlook on the relationship, spanning 60+ years
  • Lots of intermingling of legal issues and logistics
  • And of course, the entire process of raising kids and having a house together is almost an enterprise in itself, regardless of the romantic situation

The fact that the before and after are so different, in many ways, means that you’d better hope the stuff you learn about the other person while dating gives you a strong view of how the long-term relationship would work.

Job interviews as predicting long-term work relationships
Of course, job interviews are very much like dating as well. Inside of the job interview process, there are a bunch of inherent assumptions about what kinds of candidates are good candidates.

For example, you often have processes that prioritize top schools, or that prioritize “culture fit” and other intangibles. Interview processes often test very specific skills against a “snapshot” of a candidate’s skills at any given moment in time. Is it fair to ask engineers questions about SQL or specific language trivia when it’s something they might learn or pick up in hours or days, not months? It’s not clear.

The worst issue is the inherent bias that comes into play. People like to hire people like themselves, and every startup full of Stanford-educated 20-something guys ends up hiring more young Stanford dudes. Sutton refers to this in his book as homosocial reproduction. How important, really, is what school you went to? Are your grades more important? Do you really need to know obscure things about a programming language, or its lower-level details, when your day-to-day job is unlikely to use that knowledge? I think a lot of these biases come from the people who design the interview, who don’t objectively evaluate what predicts success or failure in professional settings.

Do interviews miss the “intangibles?”
Most damning, of course, is if interviews simply don’t test the majority of a job applicant’s fit to a role. Try a thought experiment: suppose that for any job candidate, testable skills account for 20% of their performance, while other things like motivation, communication, and possibly weird, obscure skillsets contribute the other 80%. Then you can test the 20% all you want, but in a contrived 1:1 conference-room interview setting, you can’t scratch the 80%. In fact, you might find candidates who fail most of the 20% but are such amazing fits elsewhere that they’re in fact an awesome match!

Over time, I’ve come to believe that interviews likely test only a small portion of a job candidate’s skills, and you have to test candidates more directly in realistic work scenarios to get at the rest.

How would you more directly test job performance?
In many ways, thinking about job interviews as inherently bad predictors has a strong tie to the Built to Fail blog post I did a while back – except rather than assuming that your code is bad, and needing unit testing to support it, instead you assume job interviews are bad, and you need a larger framework to support that.

So taking the ideas from that post, I would recommend the following:

  • Accept that traditional job interviews suck, and you can’t learn much about a person in 30 minutes to an hour
  • You should interview MORE people, and potentially lots of weird people that don’t seem to be good matches right upfront
  • You must streamline your interview process to handle more people, in larger batches simultaneously – thus 8 hours of 1:1 interviews probably doesn’t scale
  • And you should test your job candidates in realistic work scenarios – assigning real tasks to groups of candidates (potentially mixed in with employees), working together in tandem
  • Also perhaps instead of focusing your hiring on specific individuals, instead you make offers conditional to entire teams that seem to work well together, and keep them together in their actual job

In many ways, I think this is closest to the Boiler Room or Bootcamp view of the world, which was pointed out to me by a long-term mentor, Bill Gossman. You bring in more people, test them in real-world settings, and hire whoever comes out on the other side, regardless of background or pedigree. To me, this has the benefit of being truly meritocratic: people are hired because of their real performance, rather than what the designers of the interview decided was subjectively important or unimportant.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

July 28th, 2009 at 8:25 am

Posted in Uncategorized

Does Silicon Valley noise detract from long-term value creation?

without comments

Encountering the Silicon Valley echo chamber

I grew up in Seattle and worked there for many years, and finally moved to the Bay Area in 2007 because I wanted to be at the epicenter of the startup scene. When I was in Seattle, I always lamented the fact that there wasn’t a “scene” the way that the Bay Area has one. I used to talk about it as one of the big negatives of the region, since you didn’t have the density of events, bloggers, and general activity in Seattle as in the Bay.

Now that I’ve been here for a few years, it’s clear to me that the Silicon Valley echo chamber has its clear negatives as well. Being out of touch with the average American consumer is one obvious negative. Chasing down technological rabbit holes is another.

But I believe one especially strong issue is the constant peer reinforcement from fellow entrepreneurs to work on stuff that everyone deems “hot” or easily relatable – and how easy it feels to be left behind when a “hot new trend” takes off in a particular direction, even though there are obviously many good markets to explore at any given time. I would argue that this strong peer reinforcement makes it easy to focus on very short-term successes and ignore long-term contrarian bets.

Peer reinforcement from fellow entrepreneurs
One of the most common conversations you’ll overhear at any startup event is one entrepreneur giving another entrepreneur their elevator pitch. Or you’ll overhear an entrepreneur pitching a prospective startup engineer. In fact, I would argue that many startups spend more time talking to other people “in the know” – whether those startup-savvy people are investors, job candidates, fellow entrepreneurs, or advisors – than they do talking to potential customers.

And just as similarly, a common conversation you’ll overhear is the equivalent of the “hot tip” on a stock – but instead, the conversation will be about a particular market or company. Oh, did you hear that company so-and-so is doing X million in revenue? Oh, the Y space is blowing up.

All of these conversations – insular, navel-gazing discussions establishing the social pecking order at any given time – provide quick feedback about the entire startup ecosystem. It’s what guides the decisions of many employees or investors to dive deeply into the “hot” markets. In many ways, this is one of the deep strengths of Silicon Valley: information flows so efficiently that new markets can be quickly identified and exploited by dozens of companies simultaneously.

Copy cats galore
At the same time, these conversations can easily reinforce the feeling for many entrepreneurs that “You’re missing out!” It causes many people to look at the sectors that seem to be immediately doing well, and jump into them as copycats, because it looks like easy money. There’s nothing like the feel of a gold rush to make everyone go nuts.

Interestingly enough, many of these copycats are only in it to exploit the short-term advantages of the market, and some are self-admittedly not passionate about the area they go into. In particular with this economy, a long-time mentor described it as “Silicon Valley’s version of quitting your idealistic startup and going back to work at Microsoft” – meaning potentially soulless activities that generate revenue, regardless of actual passion or long-term belief in the projects as viable businesses.

Focusing on the long-term
As an entrepreneur, I can’t help but look at the short-term choices that get made in an environment like this without some degree of disappointment. There are many brilliant people who could be trying to make the world for the better and really create long-term value, but instead they are engaged in a zero-sum game to extract as much value as possible from the world. Now perhaps as a market, these startups will collectively make the world a better place – such is the wonder of markets – but at the same time, it disturbs my sense of idealism about entrepreneurship.

More importantly, building a startup takes years no matter what the economic environment – maybe 5, maybe more. And if you’re going to be stuck doing something for 5 years or more, then you might as well pick something you’re really excited about. Taking a long-term view, I think, means accepting that many of these new markets will significantly change over time, possibly merge with other markets, or possibly turn out to be too small.

It’s worth thinking about what kind of company you want to be in 5 or more years, rather than just grabbing onto whatever trend seems to be floating by at the moment.

Staying focused on the long-term
So how do you stay focused on the long-term, when there’s so much noise? Here are a couple thoughts for you:

Stop reading blogs so damn much
Every once in a while, I’m busy enough that I don’t read any blogs for a week or so at a time, and you know what? The world doesn’t end ;-) Obviously it’s useful to keep up with how the rest of the tech industry is moving, and where the markets are developing, but clearly there are diminishing returns to the minutiae of the startup world.

Have a strong vision that’s flexible yet specific
Another issue is how easily small companies are swayed when the vision is not clearly defined and understood. It’s easy, when internal values and vision haven’t been set, to follow the customer wherever they would like to go. Or to fast-follow whatever is the darling startup of the moment, or to be swayed by competitor moves. So you need a vision that’s specific enough to tell you how much external data to incorporate, but flexible enough that if you hit contrary data, your entire startup’s core thesis doesn’t fall apart. The tension between the two is what makes this a challenge!

Ignore the competition
For most startups, the market is not clearly defined enough to also have clearly defined competition. In most cases, you’re better off focusing on your customer and learning from them both quantitatively and qualitatively, rather than emulating what your competitors are doing. And in particular, if you are extracting a ton of interesting knowledge about your customer, you may end up with a unique set of insights that would beat whatever you’d get from copying anyway.

Don’t go to startup events
Another common environment where you’re compelled to pitch your startup over and over is startup events. Skip these, and you’ll find yourself thinking more independently from other entrepreneurs.

Forgo short-term opportunities if they are clearly short-term
One very difficult challenge is that along the road to success, there will be many tempting rabbit holes to go down. Many ideas are hard to scale into larger businesses, but make a ton of sense at a smaller scale. Many ideas are also unsustainable, as a hole closes in the market, or because customers don’t get enough long-term value. Of course, sorting out long-term from short-term is the difficult part here.

Move down to the Peninsula, not the city
One of the small geographical differences between the city and the Peninsula is that there are far more media-oriented, hipster entrepreneurial engineers in San Francisco compared to Palo Alto, Mountain View, etc. As a result, the city is a fun place to get your company started, as there’s ample idea exchange between all your fellow entrepreneurs. On the other hand, once you get going with your startup, the sheer number of parties, get-togethers, and coffee meetings can get overwhelming.

Be skeptical of opportunities that are both hot, and easy
The most interesting opportunities I hear are ones that appear to be easy revenue, and I hear about them from multiple sources. Many of these opportunities are the equivalent of windows of arbitrage that appear in the stock market – they’ll quickly be closed, and never appear again.

Remember that you only need one big success
The final point I’ll make is that at least as far as startups go, you really only need to find one awesome line of attack on a market, and that’s it. Maybe that takes a month to find, or maybe it takes years. But ultimately, if you are making forward progress on your business and you reach a huge market eventually, it doesn’t matter much what happens between now and then. In this way, having a great deal of patience is very useful if you can systematically discover high-quality, long-term opportunities. This may be harder than the short-term stuff, but it also creates the ability to become a category-defining company.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

UPDATE: Thanks to Marc for reminding me about startup events and peninsula living also!

Written by Andrew Chen

July 27th, 2009 at 8:30 am

Posted in Uncategorized

Social design explosion: Polls, quizzes, reviews, forums, chat, blogs, videos, comments, oh my!

without comments


Why do social products tend towards clutter?
One of the toughest design problems for people working on social products is the inevitable path towards cluttered interfaces and diluted brands, as you try to build more social activities and richness within your product. As a result, these products tend to drift towards “portals,” “hubs,” or “platforms,” rather than clean, single-purpose destinations. This might be good or bad, depending on your viewpoint, but it certainly introduces a number of design challenges when every central “entity” on your site (be it a video page, or profile) has dozens of jumping off points for more complex interactions. Or when you have a “tab explosion” as you bolt on common social application paradigms like blogs, chat, or whatever.

For MySpace, this manifested itself as a massive top menu detailing all the different ways to interact with the site, including Classifieds, Music, Games, Video, Forums, and others. For Facebook, they had to build the Windows-like application bar that shows up on every page and allows access to chat and commonly used applications. It seems as though this clutter is almost inevitable as you try to centralize a wide variety of social activities.

Users push you towards more social activities, not less
The central driver, I believe, for this social activity explosion is that people want to have LOTs of different ways to interact with their friends. These different activities let them have very nuanced interactions that have deep and meaningful social signaling.

Let’s take an offline activity, for example, an invitation to a date – there are lots of nuances that can be read into asking someone to:

  • have a quick mid-day coffee
  • come over and have a nice dinner
  • go to a movie
  • have a drink mid-week versus Friday/Saturday
  • go out with a group of friends to a music show
  • have brunch with your parents
  • etc.

All of the above activities provide different social signals based on how big of a time commitment it is, who’s involved, what time of week it happens, how expensive it is, etc. And if you were to build online equivalents of these types of activities, each new one would make the product better, since it allows for richer interactions.

As a result of this, your users will always like any new social interactions you push out, and will often suggest/demand new activities.

The social web laundry list
As a result of the demand for new social activities, you inevitably get a series of bolt-on design patterns that recur across many different social products. An incomplete list might include:

  • polls
  • quizzes
  • reviews
  • comments
  • forums
  • chat
  • blogs
  • videos, photos, and other multimedia
  • avatars
  • leaderboards
  • private/public messaging
  • status messages
  • etc.

What else am I missing? Suggest some other ones in comments, and I’ll be happy to continue extending this list :-)

Either way, there’s probably some rule that if a new social product these days decides, “Hey! What our product needs is polls!” then its design philosophy probably should be reevaluated. It’s a powerful indicator that the product roadmap is overly focused on short-term user engagement rather than a long-term market position.

Drawing the line between “core” versus other
These mechanics are so easily bolt-on-able that it destroys the differentiated value of a product – this happens through clutter, confusion, and overduplication of features relative to other sites. It becomes a trap that weakens the brand long-term, while producing higher engagement in the short-term – quite the devilish dilemma.

Ultimately, to avoid this fate, every product needs to draw a line in the sand on what is core, and what are extraneous social activities that should happen off the site. Or, if not off the site, in a carefully cordoned-off area. Either way, these choices need to get made, otherwise clutter ensues.

Potential solutions
Several companies have dealt with this design problem in different ways – let’s go through all of them:

Solution 1: Build everything
In the MySpace example, the site ultimately decided to incorporate a very large chunk of all the functionality they could think of. Just explore the top menu bar, and I think you’d be surprised by how much product is sitting inside of there.

Solution 2: Open up the CSS/HTML layer
Interestingly enough, MySpace also let users extend their profiles by copying and pasting arbitrary CSS/HTML. Another company that did this is eBay, as did many blogging sites. Outside of the obvious security issues, the nice part is that this is a really simple integration that works with many different kinds of tools and widgets.

Solution 3: Provide a rich onsite platform
This is the Facebook/OpenSocial approach, where applications exist on a site rather than off of it.

Solution 4: Create off-site APIs and activities
To some extent, this can happen by itself with an API or not, as passionate users will create forums, mailing lists, blogs, and other social structures about your product. But as Twitter and blogs show, you can build an API which allows off-site applications and websites to build richer functionality. It will be interesting to see if Twitter eventually creates an on-site API a la Facebook, or if they will always make their onsite experience very simple and clean.

Solution 5: Just focus on one thing
And the final solution is just to ignore your users, and focus on the main value that your product provides. This certainly has a nice charm to it, but obviously few companies follow this – more ambition leads to more features, typically, even though the user experience might suck as a result.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

July 20th, 2009 at 9:00 am

Posted in Uncategorized

Built to Fail: How companies like Google, IDEO, and 37signals build failure-tolerant systems for anything!

Planning for success, not failure

High-achieving people who have a long history of success often plan accordingly – which means, of course, that they plan for success in whatever they do. And when you take a successful person and put them in a successful big company that’s already making money from its products, there’s even more reason to plan for high-achievement outcomes.

But let’s say you take these successful people and put them in environments of great uncertainty, like a Silicon Valley startup – what happens? That’s when realities collide! When you apply the big successful company playbook to startups, you can end up with monolithic planning processes, products that can’t find their markets, and lots of money being spent on launches for the wrong products. It’s not that these tactics are stupid, it’s just that they don’t work as well when you’re dealing with ill-defined customer problems with unknown solutions.

At the heart of this conversation is – what happens when you take something that’s usually assumed to be successful, and you instead say that it’s very likely to fail?

In a way, you can think of this as planning to fail, but then building the support structure around the failure in order to create a failure-tolerant system. Let’s dive into this.

Planning for failure, not success
The title of this blog post refers to the fact that companies like Google, IDEO, and 37signals all have the culture of “Failure is OK” built into them.

At Google:

  • Google makes money by being always available, ubiquitous, and having a great product
  • To deliver their service, they have 100,000s of servers (maybe more?)
  • Any one of these servers has a high likelihood of failing at any time
  • To create a fault-tolerant system, they have lots of redundancy and lots of sophistication around what happens when an individual box fails
  • Contrast this to a big-iron approach that builds all the redundancy into specialized hardware that’s designed to never fail

At IDEO:

  • Companies hire IDEO to give them fresh designs based on a customer-focused approach
  • Part of every project involves lots of brainstorming and coming up with ideas
  • However, any specific idea is likely bad (for example, 12 out of 4,000 toy ideas were actually successful = 0.3%)
  • Thus, IDEO combines structured brainstorming, rapid prototyping, and field research to rapidly try out new concepts and get to good products
  • Contrast this to a process where the “Great Man” designer thinks about a design problem and then comes up with the right solution spontaneously

At 37signals, in particular Ruby on Rails:

  • Rails is a framework built for programmers to build websites
  • Of course, every web project requires lots of lines of code which can easily break at any moment
  • If you assume that programmers will often write code that is buggy and breaks, then you’ll want to make testing and iteration easy – this is at the heart of Agile, TDD, continuous integration, and other related disciplines
  • Contrast this to a waterfall engineering approach which assumes the correct design and architecture can be thought out by experienced software engineers

Each one of these examples is similar, yet unique in their own way – but there are similar themes that pervade each one of these approaches.

Characteristics of failure-tolerant systems
Each one of these systems takes the central part of a process and assumes failure, and then builds up a support system around it.

This happens by building on a few core principles:

  • Acceptance of failure: You have to accept that shit happens and failure is commonplace – this needs to be internalized so that failure isn’t punished, but rather embraced!
  • Massive redundancy: Then, it needs to be easy to have lots of redundancy built into the system – for designers, that means lots of designs get generated. For startups, that means lots of ideas are tested, and for Google, that means lots of servers are used
  • Cheap, easy, fast: As a side-effect of the redundancy, it needs to be easy, cheap, and fast to have lots of ideas, run lots of servers, or write lots of code. The harder it is, the harder it will be to create redundancy
  • Iterative, reality-based testing: Testing these individual components constantly becomes key – you need to force failure on the system to figure out how it reacts from a system-wide level

Building up processes based on the ideas above makes it easier and easier to deal with failure and come out on the other side!
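To make the redundancy-plus-retry idea concrete, here’s a minimal sketch in Python – all names are hypothetical, not drawn from any of the companies above. A request is retried across a pool of interchangeable workers, on the assumption that any individual worker may fail at any moment:

```python
import random

def call_with_redundancy(replicas, request, max_attempts=3):
    """Try a request against a pool of redundant replicas.

    Assumes any single replica may fail at any time; failure is
    expected and routed around rather than treated as exceptional.
    `replicas` is a list of callables, each of which may raise.
    """
    last_error = None
    for _ in range(max_attempts):
        # Massive redundancy: many interchangeable workers to pick from
        replica = random.choice(replicas)
        try:
            return replica(request)
        except Exception as e:
            # Acceptance of failure: catch, remember, and retry elsewhere
            last_error = e
    raise RuntimeError(f"all {max_attempts} attempts failed") from last_error
```

The point isn’t this particular loop – it’s that the caller never assumes any one component will succeed, which is exactly the inversion the post describes.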

Conclusion and next ideas
There are lots of interesting directions that this line of thinking can go.

This area of thinking started out with the hiring process, and the idea that maybe interviews don’t work at all – there’s a bunch of academic research that implies exactly that. So how would you build a failure-tolerant system around the hiring process, if you assume that good interview performance actually has no correlation with successful employees?

For dating, what happens if you assume that people you like to date may not be the kind of person you’d have a successful marriage with? What if people suck at figuring out what kind of guy or gal is the “type you’d bring home to Mom?” I think anyone could attest to the idea that many people suck at figuring out the right person to date, much less the right kind of person to marry. I personally find it crazy that people make a 50+ year decision to be married based on an 18-month sample size :-)

For careers, what if it turns out that people are really bad at figuring out what they’ll actually want to do 40 hours a week, 50 weeks a year, for the rest of their lives? How would you figure out the right career faster rather than slower?

All of these are great thought experiments, I think.

What else am I missing? :-) I’d love to take any suggestions and write up some thought experiments around it.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

July 13th, 2009 at 8:30 am

Posted in Uncategorized

Dear readers, should I keep the automatic weekly Twitter links?

I’ve been auto-posting all my tweets on my blog. I’m curious what people think about it – useful? Or not?

Take the poll below, and please leave comments if you have extended thoughts:

Thanks for the feedback!

Written by Andrew Chen

June 27th, 2009 at 3:18 pm

Posted in Uncategorized

Why metrics-driven startups overlook brand value

The perils of ignoring brand value
The nature of internet marketing makes it easy to have a highly accountable, metrics-driven view – but companies that are highly metrics-driven easily overlook hard-to-measure issues like brand and user experience. The reason is that when all product decision-making runs through metrics-driven reports, soft things like “Brand” show up as costs, but never as benefits.

This leads to systematic erosion in many “soft” but important factors, like customer experience, brand value, and “love.” :-) And ultimately you need all of these things to create a massive, enduring consumer brand – it’s not enough to optimize funnels.

Let’s discuss why:

Two worlds: Direct marketing and brand marketing
In the advertising industry, there’s been a long, historic distinction between brands and direct response – and this distinction echoes its way into the online startup building world as well.

In the brand world, you have companies like Coca Cola, Apple, and others who pour millions of dollars into high-reach vehicles like TV which lack any real accountability. Thus the saying:

Half the money I spend on advertising is wasted; the trouble is I don’t know which half
— John Wanamaker, US department store merchant (1838 – 1922)

To many people, the brand advertising world is irrational and fashion-driven, because of the complex interactions between agencies, their partners, and the publishers that rely on them. Just watch Mad Men.

On the other hand, you have direct marketers who thrive on accountability. They buy into marketing channels like direct mail, coupons, infomercials, and most recently online remnant ads, because they can purchase cheaply and use sophisticated statistical techniques to optimize their media buys.

Startup engineers tend towards metrics-driven
So which side do startups tend to fall on? It obviously depends, but because of the highly accountable and measurable nature of online channels, it’s much easier to become metrics-focused. Similarly, startups are mostly poor ;-) Thus, expensive brand efforts are mostly out of reach. (Probably for the better!)

Also, with the possible exception of GoDaddy, I don’t know of a single startup whose success hinged on its brand advertising strategy. The typical path is focused on products and technology, and on large organic growth that builds large consumer audiences.

And obviously, readers of this blog will tend to be much more metrics driven compared to the average entrepreneur!

You optimize what you measure
The first issue that causes metrics-driven startups to ignore brand value has to do with the fact that it’s very hard to measure brand, and you tend to optimize what you can measure. As soon as you throw some numbers on a big report, there’s an inherent human desire to make the numbers go up!

This is why one of the fundamental tenets of metrics-driven startups is to build lots of highly accessible reports that everyone in the organization can look at. It’s one thing to pull a number via a SQL query; it’s another for anyone on the team to hit a URL and load it instantly.

Measuring brand value is hard!
But measuring brand value, or user experience, or community “feel,” or other soft things like that is very hard. I think they’re hard because while they’re clearly important, at the same time:

  • The quantitative effects accumulate over large periods of time
  • These might be “source” variables that drive lots of behavior, but they’re hard to measure beyond surveys and explicit information collection
  • Some of the most important datapoints may be qualitative, not quantitative
  • Changing these soft things may require big efforts above and beyond small A/B-testable changes

The companies out in the marketplace that try to measure brand value mostly just use surveys to detect changes. Or, many companies simply resort to a pretty ineffectual number like “reach,” which refers to the number of people who saw the campaign. This can sort of work, but self-reporting also sucks, and the quantitative data you get out may not be as useful as the qualitative data.

In my previous online ad career, I was shocked to hear that the standard way to measure a brand advertising campaign online was to fork $50k over to Dynamic Logic, whose job was to run a dinky little survey and tell you if your campaign worked. $50k to run a survey!

Reports show the cost of branding, but not the benefits
As a result of brand advertising being hard to measure, you get two systematic, interrelated issues:

  1. Product changes that result in brand value are overlooked, whereas the costs of delivering that value are not
  2. Features that negatively impact brand value but show short-term quantitative value are accepted

Here are two examples – let’s say that you think your site’s interface looks like crap, and you want to improve it to make it higher class and more trustworthy. But your metrics czar says, let’s make a really small improvement and see if it affects anything before we revamp the whole site. That sounds reasonable, but then you find out that in fact, making a visually compelling site just doesn’t drive better metrics, and in fact, it’s expensive and maybe lowers certain metrics. What do you do? (This is case #1)

Another example: you make it really hard to unsubscribe from your mailing list. Maybe you don’t include a link, or users have to log in first, or whatever. Making this change clearly affects your ability to retain users. You get a small percentage of complaints, but the overall quantitative metrics look good. Should you keep this hard-to-unsubscribe mailing list? (This is case #2)

Ultimately, it should be clear that neither case is clear-cut at all. I could find reasons to go either way, but when you’re trading off a qualitative cost against a quantitative gain, the numbers-driven approach tends to win. But this may not be the right call. Similarly, sometimes the numbers may justify the decision, and the brand costs are actually quite low.

How do you make these decisions then? I’ll just wave my hands and say, “Entrepreneurial judgment” ;-)

Who’s the brand advocate?
One of the big, important roles that you need on every team as a result is someone who can advocate for the soft things. Who’s your brand advocate? Or customer experience advocate? Having someone on your team who can make logical arguments to balance out the quantitative stuff is hugely important; otherwise you’ll inevitably go down a path of brand-eroding, quantitatively driven decisions.

Similarly, if you find that you’re never making decisions that go against the numbers, then frankly, you’re probably doing something wrong. If the data drives all the decision-making, then a lot of “soft” data is getting ignored.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

June 18th, 2009 at 9:34 am

Posted in Uncategorized

Why you should make it easy for users to quit your product

Don’t worry, I’m not a hippie
From the title of this blog post, you might think that I’m going to make a touchy-feely argument about why you should respect the right of your users to do all the terrible things that every entrepreneur fears:

  • delete their accounts
  • unsubscribe from email lists
  • cancel their subscriptions
  • uninstall their apps

… but you’d be wrong.

In fact, I’m going to argue that for every early product out in the market, making it really easy to quit is completely aligned with self-interested thinking. I’ll make the assumption that all the entrepreneurs reading this post are greedy, self-interested individuals, and target the appeal straight into your dark hearts ;-)

My central argument is that if you believe that every startup is an iterative learning process that converges towards product/market fit, then you need extremely high-fidelity signals to tell you if you’re going in the right direction. That means that along with trying to charge people money from early on – the highest form of “I love this!” – you should give people valves to tell you “I hate this!” so that you can learn more, faster.

Let’s dive into this further…

Product/market fit
There’s a notion of product/market fit that Marc Andreessen references in his blog, and he calls it the “only thing that matters” and says that every startup should do everything they can to get to this point. Let’s see what he writes:

The only thing that matters is getting to product/market fit.
Product/market fit means being in a good market with a product that can satisfy that market.

… and Marc continues:

Lots of startups fail before product/market fit ever happens.

My contention, in fact, is that they fail because they never get to product/market fit.

Carried a step further, I believe that the life of any startup can be divided into two parts: before product/market fit (call this “BPMF”) and after product/market fit (“APMF”).

When you are BPMF, focus obsessively on getting to product/market fit.

Do whatever is required to get to product/market fit. Including changing out people, rewriting your product, moving into a different market, telling customers no when you don’t want to, telling customers yes when you don’t want to, raising that fourth round of highly dilutive venture capital — whatever is required.

When you get right down to it, you can ignore almost everything else.

If you believe what he says, that gives you a pretty firm set of marching orders. And for early products on the market, getting to this point – where your product is good enough and the market is compelling enough – is a tough slog. So the question is, how do you navigate your way to product/market fit?

At the heart of every startup is a learning loop
For the idea that every startup is inherently a learning machine, we can turn to two of my favorite startup people, Steve Blank and Eric Ries. Eric has blogged in a lot of detail about how he believes that inside of every startup is an OODA loop that involves trying stuff out, learning, and trying more stuff again. And of course a lot of these ideas are built off of Steve Blank’s Customer Development framework that I’d encourage my readers to look into as well.

In this light, to combine the two ideas: Every startup is a series of iterative experiments that gets you from zero to product/market fit, and if you can do it before running out of money, then you might get rich ;-)

And the decision-making process in this approach is totally different. In most product strategy conversations I’ve been involved in, the most heated debates center around whether a particular product will work, and all the pros and cons of the situation. Contrast this to a learning-centric approach, which emphasizes whether or not experimenting with an idea will yield insights, and how much it’ll cost to learn these insights.

In other words, you’re much more likely to try things that will fail, if those failures teach you something important about the market.
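As a sketch of what learning-centric prioritization might look like – the scoring schema here is made up purely for illustration – you could rank candidate experiments by expected insight per unit cost, rather than by expected chance of success:

```python
def rank_experiments(experiments):
    """Order experiments by expected insight per unit cost (descending).

    `experiments` is a list of (name, expected_insight, cost) tuples –
    an invented schema; in practice both numbers are rough guesses,
    which is fine, since only the relative ordering matters.
    """
    return sorted(experiments, key=lambda e: e[1] / e[2], reverse=True)
```

Under this lens, a cheap experiment that’s likely to “fail” but teaches you a lot about the market can outrank an expensive one that’s likely to “succeed” but teaches you nothing.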

Of course, all of the decisions that power these iterations rely on data – and the better the data, the better your decisions will be, naturally. So where do you get the data that tells you whether customers are happy with your product?

Explicit signals beat implicit signals almost every time
One of the key lessons I took away from my time in the behavioral targeting ad industry is that explicit data is much, much better than implicit data when it comes to predicting user behavior.

That is, you’d prefer explicit “intent” data like:

  • made a purchase
  • used a student loan calculator
  • searched for “palo alto bmw dealership”
  • filled out a form

versus the less valuable implicit “interest” data like:

  • have similar demographics to other people who buy
  • visit the same publications as similar customers
  • having a pattern of reading finance articles

So if you are looking to collect data to drive decisions, then the best kind comes from the explicit data of having users specifically take action, whether it’s positive or negative. Purchase intent data, as illustrated above, is positive – and quitting intent gives you the negative half. In fact, if you only look at the positive feedback, you might be ignoring 50% of your data.

As a result, you want lots of explicit data points along the axis from “I love it!” to “I hate it!” – from people giving you money (donations being perhaps the ultimate form of love) to people being able to easily quit. Make it easy for your users to quit, unsubscribe, or otherwise cancel – it gives you a strong signal when you’re doing something wrong! And make sure to track it and include it in all of your quantitative experiments as well.
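One way to instrument this – a minimal sketch with made-up event names, not any particular analytics tool’s API – is to log explicit positive and negative signals per experiment variant, so that “I hate this!” actions land in the same reports as “I love this!” ones:

```python
from collections import Counter

# Explicit "love" and "hate" signals; the action names are illustrative.
POSITIVE = {"purchase", "donation", "upgrade"}
NEGATIVE = {"unsubscribe", "delete_account", "cancel_subscription"}

def signal_summary(events):
    """Tally explicit signals per experiment variant.

    `events` is an iterable of (variant, action) pairs, e.g. rows from
    an experiment log. Returns a Counter keyed by (variant, "love")
    and (variant, "hate"); implicit actions (pageviews etc.) are ignored.
    """
    counts = Counter()
    for variant, action in events:
        if action in POSITIVE:
            counts[(variant, "love")] += 1
        elif action in NEGATIVE:
            counts[(variant, "hate")] += 1
    return counts
```

With both halves of the axis in one report, a variant that wins on purchases but also spikes unsubscribes stops looking like a clean win.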

Better data = better learnings = better product
So to summarize my key arguments here:

  • Give users lots of explicit ways to show appreciation and hatred
  • These datapoints will help you iterate your product
  • Better product iterations will let you reach product/market fit faster
  • Reaching product/market fit will lead to more money faster

You can only learn so much from reacting to positive data, and trapping your users in unwanted subscriptions won’t get you to product/market fit any faster.

And finally, don’t do it because it’s annoying ;-)
‘nuf said.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

June 15th, 2009 at 8:30 am

Posted in Uncategorized

Benefit-Driven Metrics: Measure the lives you save, not the life preservers you sell

Measuring value created rather than optimizing for yourself
In my last blog post, I talked about the idea that value creation generates revenue, traffic, and other metrics, not the other way around. This is a particularly interesting idea to implement because it goes against much of the standard analytics reports that are out there.

The reason is that ultimately, most metrics tend to focus inwards, on self-interested gain, rather than outwards on the value you’re creating for your customers. Let’s take a couple examples of inward-focused metrics that people often cite:

  • account registrations
  • pageviews
  • unique visitors per month
  • revenues

I’m sure you measure many of the above, as I do as well. It’s OK to measure this stuff, but if you start to optimize for it, you are starting to focus on the business of value extraction, not value creation.

Measuring # of life preservers sold versus the # of lives saved
Thus we come back to the title of the post. Most people are using standard analytics packages that are commoditized to focus inwards on metrics like pageviews and revenue. To use an analogy, that’s akin to the idea of measuring the # of life preservers that you sell, and trying to optimize for that, rather than optimizing for the benefit, which is the # of lives saved.

If you focus primarily on selling life preservers, then you’ll tend to do all sorts of stuff like:

  • making them cheaper to build
  • adding doo-dads to them that are flashy to customers
  • using aggressive sales tactics
  • etc.

These things might generate revenue in the short to medium run, but if you prioritize this at the expense of actually delivering on the product benefit, then that’s a bad optimization in the long term.

Now contrast that to the idea of trying to save as many lives as possible. You might still want to make them as cheap as possible, so that every ship in the world can carry as many as needed. You might still want to add upgrades, but only if they help save lives. And so on. While these changes may be similar in execution, they are different in spirit from the changes you’d make when optimizing for sales.

Introducing “Benefit-Driven Metrics”
So ultimately to start this exercise, you should throw out all the standard metrics (conversion rates, pageviews, etc.) and just focus on one thing:

What are your customers measuring?

By looking at how they define value, you get yourself aligned with your customers as closely as possible. Answering this question sets your company up for value creation, which then unlocks the ability to capture some of that value – so you have to start here.

I’ll deem these quantitative measurements as “Benefit-driven metrics.”

How do you measure it?
Here’s the interesting part – everyone’s benefit-driven metrics will be completely different, because most people’s customers and value proposition and product are ultimately very different. Unfortunately, you don’t have the crutch of standardized numbers like pageviews or uniques to lean on.

But let me give you some examples for reference:

For dating sites:
Why do customers join dating sites? To find their soulmates. Thus, measure the quantity of successful matches you make, not the lifetime value of the customer. Focusing on LTV can easily lead you to do things like creating fake accounts to make people come back, or optimizing it so that they find their best matches several months down the line, or trying to get everyone to pre-pay for the service rather than making the product experience awesome.

For marketplaces:
Why do customers sell on a marketplace? To make money and get rid of their stuff. Why do customers buy on a marketplace? So that they can get things cheaply and quickly, and are happy with their purchase. Thus, measure the quantity of how much your sellers take home, and how many buyers are happy with their experiences. Contrast this with overfocusing on listing fee revenues, which might get you into a spiral of raising prices rather than creating the best commerce experience.

For social networks:
Why do customers use social networks? To “connect” with their friends – let’s boil that down to communicating (though it’s obviously much richer than that). Then ideally, you might want to focus on the number of messages/comments/posts that end up getting replies from their friends. If you overfocus on something like user registrations, then you might get a ton of users, but maybe they won’t be getting to experience the benefits of the product.

For online publishers who sell to advertisers:
Why do advertisers buy ads on websites? To generate traffic to their own sites, which in turn leads to revenue. In this case, you should quantify the amount of revenue you generate for your advertiser customers, or at least the number of conversions they receive. Contrast this with the approach of measuring and optimizing your own CPMs as a publisher, which results in potentially delivering a lot of crappy traffic to advertisers who will drop their payments in the long term.
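To make the social-network example above concrete, here’s a minimal sketch – the event schema is invented for illustration, standing in for whatever your real logs contain. Instead of counting registrations, measure the fraction of messages that actually got a reply:

```python
def reply_rate(messages):
    """Benefit-driven metric: share of messages that received a reply.

    `messages` is a list of dicts with a boolean "got_reply" key –
    an assumed schema. Returns 0.0 for an empty log.
    """
    if not messages:
        return 0.0
    return sum(1 for m in messages if m["got_reply"]) / len(messages)
```

Compare this to tracking raw registrations: a product change can win on sign-ups while losing on reply rate, which is exactly the trap of inward-focused metrics this post describes.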

Special note for ad-driven startups :-)
Now ad-supported startups have a particularly interesting issue in this, because I keep using the words “benefits” and “customers” and perhaps it’d be easy to think this refers to the users of the product. But maybe not, as I’ve outlined before in Your ad-supported Web 2.0 site is actually a B2B enterprise in disguise. The reason is that your customer may actually be the advertiser on the site, not your user!

And in fact, if you overfocus on pleasing your users to the detriment of your advertiser customers – which is very easy to do – that leads to very bad things.

Start this benefits-driven approach now, not later, so you can learn the right things
Finally, I want to emphasize that I believe it’s important to start thinking about these benefit-driven metrics from the beginning of your business, not later. The reason is that every learning that a startup makes is often hugely applicable to a specific context, but not at all applicable to other variations.

If you’re going to start a website that churns users like crazy but hits massive user goals, you will build an entire organization to optimize for those metrics. And once you’ve gone far on this, it’s not clear that you’ll have the DNA, the technology, the ideas, or the willpower to execute in a different direction.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

June 11th, 2009 at 9:00 am

Posted in Uncategorized

Creating value versus optimizing revenue


Revenue stems from value, not the other way around
One of the big thematic issues that has been referenced numerous times by myself, Eric Ries, Mike Speiser, and others is the limitations of quantitative testing in building a business. In particular, several objections have been mentioned:

  • Over-optimizing leads to local maxima, particularly in product design
  • Focusing too much on pageviews/uniques ignores actual product/market fit
  • Relying on quantitative models leads to anti-innovative behavior
  • etc.

For all the data geeks reading my blog – my opinion on all of this is, these are absolutely all true, and are all very important and relevant conversations that every data-driven startup needs to be having. Are you having them?

All of these have gotten me focused on one of the core questions of any business: What value are you actually creating?

Distribution-led approaches can lead to local maxima on value creation
Many new companies in this age of quantitative virality easily fall into hitting local maxima on value creation, all for very good reasons. By focusing on viral invites, addressbook scraping, A/B testing, and other techniques, you end up getting a big inflow of traffic and your question becomes, “What is the best product I can make to keep all these users around?”

There become three major temptations:

  1. First, there’s a huge desire to build as efficiently as possible. That is, build in just enough to satisfy the user, but don’t overpolish
  2. Similarly, there’s a big temptation to build for the lowest common denominator, because you’re trying to appeal to a huge audience. As a result, a lot of designs err towards persistent low-brow internet “recipes” – like quizzes, polls, forums, and other mechanics
  3. In addition, your product veers towards a portfolio of experiments, rather than one cohesive experience. After all, you’re still trying stuff out, and it’s a lot easier to add a new feature or use a crazy headline to get people to your site, rather than really going through the difficult synthesis process that’s at the heart of every design discussion

As a result of all of this, it’s very easy to build a shitty product that generates small to medium value, but doesn’t do something amazing.

I won’t go too much into how to solve this, but I think the key thing to remember is that the quantitative lean philosophy doesn’t allow you to skip the difficult process of coming up with a hugely value-creating product. You still have to do it, but at least you have a framework in which to think about the process.

Maximizing the source rather than your share
Another issue in all of this is that the focus for quantitatively driven companies ends up being on outputs rather than inputs. For example, it’s easy to start to optimize traffic as an entity in itself, rather than thinking about the fact that traffic comes out of product/market fit. Or similarly, you can optimize revenue, but I think it’s misguided to do it without considering the fact that you have to be creating value for whoever is paying you.

Thus, one can argue that “value creation” is the ultimate source for all of these secondary variables like revenue, traffic, etc. And you can make the decision to focus on extracting as much as possible from the secondary variables, but you become fundamentally limited by the primary value creation process within your product.

Another way to think of this is that ultimately, every product creates a bunch of “value” (however you want to define it) and then you end up taking some % of that value back as revenue. Abstractly, this is true regardless of whether your product is ad-based, freemium, or otherwise. If you think about things this way, the following two approaches are fundamentally different strategies:

  1. Create a massive amount of value, and capture a small amount
  2. Create a moderate amount of value, and totally dominate the economics

(and obviously this is a spectrum as well)
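As a toy illustration of the two strategies (every dollar figure and capture rate below is invented), revenue is just value created times the share of that value you capture – which is why the second strategy is capped by its smaller value pool:

```python
# Toy model: revenue = value created x share of that value you capture.
# All figures below are invented for illustration.
def revenue(value_created, capture_rate):
    return value_created * capture_rate

# Strategy 1: create massive value, capture a sliver of it.
strategy_1 = revenue(10_000_000, 0.02)

# Strategy 2: create moderate value, dominate the economics.
# Even at 100% capture, it can never exceed the value it created.
strategy_2 = revenue(100_000, 1.00)

print(strategy_1, strategy_2)
```

The point of the sketch: strategy 1 can always raise its take later, because the value pool is so much larger; strategy 2 has nowhere left to go.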

I would put companies like Wikipedia, Craigslist, Open Source, and others as extreme examples of #1. And unfortunately, I think a lot of short-lived apps on Facebook are really more or less examples of #2.

I think this is why, for people who question the value of internet companies like Facebook and Twitter, the natural thing to ask is, are these companies generating real value? If they are, I think the process of turning that into cash is much easier than the process of creating the huge value in the first place!

The biggest value drivers are qualitative
So the question then becomes, how do you systematically create value? I think that this is a very hard question, and one that it may be difficult to use quantitative tools to define, because the biggest value drivers are often qualitative. They are things like:

  • What does your product do?
  • Who’s your customer?
  • Why do people give you money?
  • etc.

Now, a lot of these you can turn from qualitative to quantitative. After all, after you build your product, you can generate hypotheses around how people ought to use it and make it better in the most common flows, by optimizing the page flows. Similarly, you can figure out how much money you should charge for something.

Yet simultaneously, the process of figuring out the core product requires the entrepreneur to have an opinion, perhaps one that is difficult to test or takes many years to test. And whether you do this quantitatively is its own thing – after all, companies like IDEO have a very evidence-driven design process, but it’s one that uses qualitative evidence gathering to generate the product prototypes.

Looking at landing page optimization as a value creator
I think that this entire perspective about maximizing value creation rather than optimizing outputs leads to a lot of interesting, subtle changes in how you approach things. Let’s take landing page optimization as an example of this.

Typically, the entire discussion around landing page optimization is just one about conversion rates, and all the different possible candidates to get to a conversion. Instead of this perspective, you might ask: What value does an optimized landing page generate in the first place? Ultimately, I think this optimization makes it so that people can grok what they’re signing up for better. It helps them scan the page better for relevant pieces of information. And it could make them less confused about the page they’re on.

Compare that line of thought with, “hey, let’s make a lot of random headlines and see what people react to” as two different ways to approach the same problem, with one prioritizing value creation and the other prioritizing the output (conversion rates in this case).

Looking at viral loops as a value creator
Same for viral loops, and the process of getting people to invite their friends to a site. If you are sincerely value-oriented, then the key questions are:

  • why do people WANT to invite their friends to the site?
  • how does having your friends on the site make the product a better experience?
  • what conveniences can you build in to make people expose their friends to the process?
  • etc.

Contrast this to a perspective that an outcome like the viral factor is all you care about optimizing, and however you can get that number >1.0, then the better off you are. I think that this numbers-centric model absolutely can lead to viral websites and apps, but also sucks at actually creating a huge base of value that you can recoup later.
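The mechanics behind that 1.0 threshold are just a geometric series: each generation of new users invites some multiple (the viral factor) of its own size. A minimal simulation, with made-up seed numbers, shows why everything hinges on whether that factor is above or below 1.0:

```python
# Each generation recruits viral_factor times its own size; sum the cohorts.
def total_users(seed, viral_factor, generations):
    total = cohort = seed
    for _ in range(generations):
        cohort *= viral_factor
        total += cohort
    return total

# Below 1.0 the series converges toward seed / (1 - k); above 1.0 it compounds.
print(round(total_users(1000, 0.5, 20)))  # settles near 2,000
print(round(total_users(1000, 1.2, 20)))  # already past 200,000 and climbing
```

The value-oriented questions above are really about moving that factor by making invites genuinely worth sending, rather than by squeezing the invite flow itself.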

Conclusion
My point with all of the above is simple: No matter what your product is, the only way to make money long-term is to make a lot of people happy, and then get some % of the value you created back in return. The right strategy to build a long-term sustainable business is to build long-term sustainable value. No amount of viral tricks or optimization will allow you to escape that truth!

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

June 10th, 2009 at 9:00 am

Posted in Uncategorized

How do you do concrete interviews for non-technical people?

without comments


Why do people interview technical and nontechnical people differently? Nerds vs jocks

Interviewing engineers and objective tests for competence
Recently, I’ve been meeting a number of engineers to find more people I might want to work with. I think as a whole, most tech companies do a pretty good job interviewing engineers because at the very least, there are objectively correct answers to programming questions. In particular, you can give engineering questions which result in code that either runs or doesn’t, and having an interview candidate code with you for an hour is pretty enlightening.

(Note that ultimately, there’s still lots of gray area, and some solutions are better than others, but there’s at least a minimum bar for objectivity. You can never get rid of human judgement, of course)

Because there’s a strong signal for competence, as a result, you can create a series of “can you tie your shoelaces?” type questions which quickly suss out their level of expertise, and you can do this all in one phone call (or at the very least at the end of a couple hours together). In fact, if you search for “programming interview questions” you can get a sense for how concrete these interviews get – I think this is great.

Nontechnical interviews and weak signals for competence
Now let’s compare this to nontechnical interviews, which, in my experience at least, generate weak signals for competence: Almost every interview process I’ve ever been involved with, whether I’m the interviewer or the interviewee, seems to lack the level of rigor that most engineers go through. Why is that? I imagine that much of it has to do with the fact that in nontechnical positions, it’s harder to decide objectively what’s “good” or “bad” – people often disagree on strategy and design, and it’s hard to figure out if someone is actually competent or not.

As a result, many of the nontechnical interviews I’ve seen tend to degenerate into descriptions of previous work, or soft skills, or very subjective conversations around “how would you improve X or Y?” That’s not to say that these discussions aren’t useful, but for me personally, I’ve seen far too many people with 10 years of experience in some area that turn out to not be able to tie their shoelaces. The question is, how do you find this out sooner rather than later?

In fact, it may be that this entire discussion isn’t really about interviewing for technical people versus nontechnical, but rather about thorough interviewing versus sucky processes. Even then, I’d mostly argue that there’s a real issue: you can use objective tests in the engineering world to create strong signals of competence, whereas it’s much harder for marketing and product roles.

Crafting concrete interview questions for nontechnical roles
So what would a series of concrete tests look like for nontechnical roles? I would argue that you can rigorously test a couple key areas such as:

  1. Can you create the deliverables that are part of the day-to-day role?
  2. Are you familiar with previous relevant work in your area (whether you follow it or not)?
  3. Can you demonstrate that you can do the thing you’re being hired to do?

Let’s dive into each one of these areas, as a thought experiment of what it’d look like to run an interview structure that’s as concrete as what most engineers have to go through:

Part 1: Can you create the deliverables that are part of the day-to-day role?
This is probably the closest that you can get to a coding question, at least for nontechnical people. The point is, most nontechnical jobs still do provide deliverables to other people in the company – for some, they will be spreadsheets, or documented product roadmaps, or launch schedules, or powerpoints, or whatever. The question is, can you have them sit down and craft a basic version of whatever deliverables they’ll be expected to create on the job?

Here’s an example: Let’s say that you were going to hire a product manager who needs to have a strong background in user acquisition via search engine marketing. Ideally, you should be able to sit them in front of a blank spreadsheet and they should be able to model out the user acquisition process from start to end. This means they’ll know how to think about the problem like a funnel, show the different steps, be able to roughly approximate what the numbers might be, and then calculate the cost per acquisition.
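A sketch of what that spreadsheet model might contain (every rate and dollar figure here is invented purely for illustration):

```python
# Hypothetical SEM funnel: impressions -> clicks -> signups -> activated users.
# All rates and the spend figure are made-up example numbers.
def cost_per_acquisition(ad_spend, impressions, ctr, signup_rate, activation_rate):
    clicks = impressions * ctr
    signups = clicks * signup_rate
    activated = signups * activation_rate
    return ad_spend / activated

# e.g. $10,000 spend, 1M impressions, 2% CTR, 10% signup, 50% activation
cpa = cost_per_acquisition(10_000, 1_000_000, 0.02, 0.10, 0.50)
print(round(cpa, 2))  # $10 per activated user
```

A candidate who really knows the space should be able to produce this shape of model unprompted – and then sanity-check whether $10 per activated user is survivable for the business in question.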

Or, if you have someone interviewing for a sales role, they should be able to sketch out the basics of an RFP response, or build out a sales pipeline document, or make a list of sales collateral they might need, or whatever. If it’s someone from the marcomm world, then they should be able to sit down with you and craft a budget for making a splash at a tradeshow, or create a schedule for a product launch, or whatever.

The point of all of this is that it’s a “tie your shoelaces” exercise that complements the soft skills discussion and meandering conversation about previous work.

Part 2: Are you familiar with previous relevant work in your area (whether you follow it or not)?
The next set of questions can be asked around how engaged they are in previous work in their area, regardless of whether or not they follow it. This area I’m often torn about, since there are often talented people who don’t know anything about historical precedent – but I do think that it demonstrates competence in the main. In concrete terms, I think that you can test for a couple specific things:

  • Are they familiar with industry jargon in their field?
  • Do they understand the theoretical underpinnings for what they’re doing?
  • Have they read relevant books and blogs, attended conferences, or otherwise engaged in the discussion?

So for example, a product manager who focuses on go-to-market strategy should ideally be familiar with books like Crossing the Chasm or be aware of previous successes/failures in the tech industry. If the product manager is involved in the development process, you’d want them to be familiar with Scrum or Agile development, and ideas like the man-month. If they’re involved in product design, they would ideally know terms like visual language or affordances or Fitts’s Law.

As with the caveat in the previous section, I would ask these questions primarily to suss out expertise level and while it would contribute to a final hire/no-hire decision, it wouldn’t be the overriding factor. Ultimately some people are amazing decision makers on products without having formal training, but as entrepreneurship has a long history of failures, you’d ideally find people who were familiar with other situations that led to success or failure.

Part 3: Can you demonstrate that you can do the thing you’re being hired to do?
Similar to a programming interview to test programming skills, ideally you’d have the applicant tested in a way that most resembles their actual day to day job. That way you are testing them for their actual skills, rather than their self-reported skills. The trick to this, I think, is to break down the actual job description into specific areas that define the success or failure for them in the role.

For example, let’s take hiring a technical recruiter, whose day-to-day role you might break into:

  • Prospecting (finding new candidates)
  • Making contact and selling
  • Evaluating skillsets
  • Scheduling and interview coordination
  • etc.

Ideally, the applicant would be tested using the real tools that they would use. If they are prospecting and using Linkedin, you’d give them a job description and ask them to pull up the site. Then you’d have them go through and try to find good candidates for you. Similarly, you’d ask them to pick a particular candidate and draft a high-quality, personalized request for them to come in. And so on.

When my girlfriend interviewed for IDEO, the global product design consultancy, they had her present her resume to a group of people in slide format and then take questions from the group. This is really smart, because of course a lot of her job is to take ideas, synthesize them down, and present them to groups. So I think that having her do that is a great test for competence in this area.

What’s next here?
I think the next step in this blog discussion would be to actually post some job titles and give rough formats for interviewing for that type. If I have time over the next couple weeks, I’ll write something up.

Has your company created a rigorous process for selecting non-technical hires? If so, I’d enjoy hearing more – please write a comment.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

May 18th, 2009 at 9:00 am

Posted in Uncategorized

I want more tools to reach my readers, not monetize them!

without comments

I am often pitched with new blog widgets and services to try out. Most of the time, I won’t try it, unless it falls into one of two categories:

  1. It helps me build my readership and audience, particularly email and RSS subscribers or Twitter followers
  2. Or, it helps me understand my audience, and even better, creates a way for me to start a dialog with them

The best tools do both. If you have one like that, then send me a note at voodoo at gmail. I am likely to try it out!

Money doesn’t matter
Note that monetization doesn’t factor into my decision above – and that’s because I didn’t start this blog to make money ;-) Instead, I am writing it because I enjoy the process, it helps me structure my thoughts, and I often meet interesting people through my blog.

When I’m thinking about a problem and am willing to make it public, a quick post or two about the topic often brings world class people to the forefront, and it’s a lot easier to learn from people who are much smarter than me than to try to figure things out from first principles in a vacuum. 

Plus, making money from writing a niche blog is hard, with the only viable method being direct monetization via ebooks or something similar. You definitely need scale to make advertising work.

For the above reasons, over time, I’ve come to rely on a number of tools that I can’t live without:

1) Feedburner, for email subscriptions
There are certainly better email feed managers, but I built my initial audience with this, and it’s working well enough. Every time someone subscribes by email, they enter their address. This becomes a valuable clue, and I often google email addresses or look them up on Linkedin, just to know who’s reading. And sometimes, it becomes a trigger for me to send an email and introduce myself. That’s pretty good.

2) Twitter, for follower bios
Twitter has a similar effect, except people are much more likely to follow you on Twitter than give you their email address. Similarly, Twitter has a nice “bio” that you can eyeball for details about the person. I’ve also recently discovered Tweepsearch, which lets me do things like search for any of my followers who do iPhone work and contact them. The point is, it makes it so that blogging becomes a great tool for me to massively (but passively) build a big business network – and the corners of this network may not be helpful today, but maybe one day ;-)

3) LinkedIn, for business backgrounds
LinkedIn has some of the same properties as above, but also has detailed info. On the other hand, it absolutely skews towards marketing and business professionals. I often find that a lot of engineers don’t have Linkedin accounts.

And by the way, Google Friend Connect sucks
I recently tried out Google Friend Connect, and it definitely sucks using the values above. If you go to my blog now, at the bottom there’s a “Social Bar” where you can “join” this blog (whatever that means). Since implementing it last night, I’ve had a couple people join, but it’s basically worthless. There’s really no profile to speak of, and there’s no way for me to reach out to interesting people, even if I could figure out whether they were interesting or not. What’s the point of having people join then, if I can’t do anything with the audience I build?

Disqus has a little bit of this also – people often comment on stuff, and sometimes their comments will be really interesting. But when I go to a Disqus profile, there’s often little to no information, and certainly no way to contact them. Instead, I just have to reply in the public comments, which is pretty kludgey and certainly not a “passive” way to go about doing this.

Stop focusing on monetization, at least for 99% of bloggers!
My uber point on this, ultimately, is that 99% of bloggers don’t make any money from their blogs, and the rewards for their work are meeting interesting people as well as building up a following. Things that help pierce the veil of anonymity (with reader consent, of course) are the most useful since it helps on all the non-monetization goals.

Anyway, if you have a useful widget or plugin for me to try that fulfills any of the above, please shoot me an email.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here

Written by Andrew Chen

May 17th, 2009 at 1:56 pm

Posted in Uncategorized

Dear readers, I need your help!

without comments

Hi everyone,

Hopefully all of you are enjoying the blog although I’ve only been writing about once a week now. We recently passed 7,000 subscribers, which is fantastic. The blog continues to grow and seems to be a useful reference for people.

Anyway, as some of you know, I’ve been working on some of my own startup projects – still quite early – and am now looking for recommendations for smart engineers.

In particular, I’m looking for engineers who:

  • Are interested in consumer internet startups
  • Have experience with Ruby (or PHP/Python), including frameworks like Rails or Django
  • Can work with me on customer-centered product development
  • Have a BS/MS in Computer Science or equivalent experience

If you have recommendations for interesting people for me to talk to, please shoot me a note at voodoo [at] gmail.

I’d hugely appreciate it!

Thanks,
Andrew

Written by Andrew Chen

May 5th, 2009 at 8:30 am

Posted in Uncategorized

Talk to your target customer in 4 easy steps

without comments

Answer this question honestly…
When’s the last time you spoke to your target customer? Like really talked to them?

If it’s been more than a month, then shame on you!

Consumer internet companies are often overly dependent on quantitative data like Google Analytics, but without understanding the qualitative parts – the consumer psychology that actually goes into making purchase decisions. It’s a good idea to balance out the data aspects, particularly if you are not your target customer.

If you haven’t finished developing your product yet, that’s no excuse! After all, there are many methods of doing qualitative user research without writing a single line of code. In fact, in many ways, talking to your customers and understanding them in great detail is often much more powerful before you even go through the product development process.

How to recruit target customers to talk to, in 4 easy steps
It’s very very easy to talk to people on the internet. You really don’t have to do much work. Here’s what I will often do, in order to get some opinions about a particular set of products, or to deeply understand user behavior (like gifting! or decorating), or to get a better picture of what people do day to day.

Step 1: Write a recruiting survey
First off, go to Wufoo.com or a similar site (Surveymonkey.com works well too).

The most important part is to title the survey “Get a $20 Amazon gift certificate for 1 hour on the phone” or something similar.

Make a survey that includes the following questions:

  • First name (text)
  • Phone number (phone number)
  • Email so we can send you a gift certificate (text)
  • Best time to call, morning, afternoon, evening, weekend (multiple choice)
  • Tell me about yourself! (textarea)

That is usually a good base, and you should make all the entries required. Then you also want to add a couple questions that can help you screen or otherwise prioritize your respondents. For example, for a Facebook app you might ask:

  • What types of games do you like? (multiple choice)
  • What kind of phone do you have? (multiple choice)
  • Why do you like game X? (textarea)
  • Have you ever spent money on a game? (multiple choice)
  • etc.

Anyway, you get the point. I usually try to keep these pretty short.

Step 2: Recruit your participants
Now that you have a survey set up, then you can take the URL and start getting people to fill it out. There are a couple obvious areas to recruit people, and I typically do the following:

  • Link the survey from your product (if it’s out there)
  • Buy ads on Facebook and send traffic to your link
  • Post your survey on Craigslist
  • Buy ads on Google Adwords and send clicks to your survey

For the ad-based solutions, I will usually limit the buy to $50 per day, and spend $0.50 or so per click. I usually find that it costs about $1-2 per survey completion. Once you’ve recruited a couple dozen respondents, you can start moving forward with the calls.
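Those budget numbers hang together arithmetically – here’s the back-of-envelope version (the survey completion rate is my own guess, chosen so the result lands in the $1-2 range the text describes):

```python
# Back-of-envelope recruiting math; completion_rate is an assumed figure.
daily_budget = 50.0      # daily ad spend cap
cost_per_click = 0.50    # rough CPC on Facebook or AdWords
completion_rate = 0.35   # guess: share of clicks that finish the survey

clicks_per_day = daily_budget / cost_per_click
completions_per_day = clicks_per_day * completion_rate
cost_per_completion = daily_budget / completions_per_day

print(clicks_per_day, round(cost_per_completion, 2))  # 100 clicks, ~$1.43 each
```

Tweaking the completion rate between roughly 25% and 50% moves the cost per completed survey across the $1-2 range.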

Step 3: Do your phone interviews and learn something!
This is where you’ll learn the most – you can just pick up the phone and start talking. I usually structure the interviews into a couple distinct sections, depending on what I’m trying to learn.

The first section I usually try to learn about basic internet usage:

  • Tell me about yourself
  • What’s your typical day like?
  • Tell me about your computer setup – what do you have? When do you use it?
  • What are your favorite internet sites? What sites do you use every day?

Then depending on the topic, I’ll usually drill into 3 or 4 different areas with a couple questions each. The entire point is to ask open-ended questions without leading them too much. I will do as many of these calls as makes sense until I am hearing the same information over and over. Then I’ll start tweaking things and changing the interview to adjust.

Also, I will usually not show them a product unless the entire discussion is focused on that – the point of these conversations for me is usually qualitative understanding, not usability. Having them thoroughly test competitive products can be interesting also. You want to use this information to drive product strategy, and not be reactive.

I guarantee you’ll learn something!

Step 4: Buy your interviewees a gift card
When you’re done, don’t forget to send your interviewees a gift certificate – a $20 card from Amazon is a good idea – to thank them for their time.

One of the best things is that once you get some relationships going with the best interviewees, you can go back to them for updates or to identify some of the most extreme cases.

Conclusion
The point is, it’s easy to talk to people, and it’s this type of detective work that separates customer-focused companies from technology-driven ones. There’s even a fun tool to suggest a bunch of other methodologies like this also – the IDEO method card deck.

If you have other additions to this, please suggest in the comments!

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here

Written by Andrew Chen

May 4th, 2009 at 8:30 am

Posted in Uncategorized

Bay Area investors I’m following on Twitter

without comments

Here’s more people to follow on Twitter!

As a followup to my previous post Bay Area entrepreneurs I’m following on Twitter, here’s the version for investors (mostly focused on early stage angels).

Bryce Roberts http://twitter.com/bryce O’Reilly Alphatech
Dave McClure http://twitter.com/davemcclure Founders Fund
David Lee http://twitter.com/daslee Baseline Ventures
David Shen http://twitter.com/dshen Betaworks
Jeff Clavier http://twitter.com/Jeff SoftTech VC
Josh Kopelman http://twitter.com/joshk First Round Capital
Marc Andreessen http://twitter.com/pmarca Andreessen Horowitz
Mike Maples http://twitter.com/m2jr Maples Investments
Mitch Kapor http://twitter.com/mkapor KEI
Naval Ravikant http://twitter.com/naval Hitforge
Om Malik http://twitter.com/om True Ventures

If you’re not on the list, then I didn’t see you use Twitter in the last day ;-)

And last but not least, you can follow me on Twitter here.

Am I missing anyone? Just comment if you have recommendations.

Written by Andrew Chen

April 27th, 2009 at 8:30 am

Posted in Uncategorized

3 key ideas from a recent Freemium dinner conversation

without comments

Freemium pow-wow!
My friend Charles Hudson and I recently co-hosted a dinner conversation on the topic of Freemium business models. First, a quick plug: if you aren’t reading Charles’s blog, you should check it out! He runs BD at Serious Business up in San Francisco, and also has put on a number of great conferences like the Social Gaming Summit.

Anyway, we had a bunch of interesting people on hand, including folks who were working on monetization from a bunch of companies. The dinner was generously hosted by Bluerun Ventures, and we ate a lot of pizza. We had folks from places like:

  • Xobni
  • LogMeIn
  • YouSendIt
  • Puzzle Pirates
  • Dropbox
  • Imeem
  • Dogster
  • Crazy Egg
  • etc.

There were a couple of key themes in the conversation, which I’ll outline below.

Key idea #1: There’s Consumer freemium, and there’s Enterprise freemium
First off, there was a strong distinction between the usage of freemium in the enterprise versus consumer. In many ways, it was as if there were two completely different conversations going on. In the consumer world, the focus is very much on topics like: payment methods, virtual items, subscription vs microtransactions, etc. In the enterprise, much of the focus is more on the IT infrastructure, departmental structure, expense reports, etc.

I think ultimately the distinction comes down to the fact that in the consumer world, people are spending their own money – as a result, they are much stingier, the demographics are more difficult, and you’re often an entertainment experience competing with other discretionary products. Compare this to enterprise, where the goals are more often utilitarian, and business users can more easily justify an ROI with the tools. Furthermore, because the users live in a broader business ecosystem, you have to deal with the IT organization, as well as the opportunity for people to simply expense their freemium costs.

Key idea #2: Freemium playbook has already been written
Another interesting discussion revolved around the fact that many of the basic tactics in the freemium world have already been documented and used by previous players.

In particular, there are tactics out of the playbook such as:

  • 30-day free trial (with credit card upfront)
  • Free service platform that upsells multiple premium products
  • Freemium service that disrupts existing pay-only product category
  • A/B testing pricing, purchase flows, etc.
  • Achieving purchases by optimizing the new user experience
  • Default to premium product, but allow the user to skip to Free
  • Lifecycle-based discounts and upsells
  • Start with a high price but A/B test coupons to price test
  • etc.

(am I missing any? Please write me a comment! Will drill into these in more detail sometime)

UPDATE: Ted Rheingold from Dogster also added a couple ideas that came up – see the list below:

  • The higher the price point, the less churn/drop-off amongst subscribers.
  • Offer users a 30-day free version of the premium product right next to the offer to join the free service. Put the two free offers side by side so people a) know they are making the choice for the premium version, and b) are less likely to be concerned about a ‘catch’
  • Offer a money back guarantee period once payment starts.
  • On the consumer side, do not overlook the emotional motivation for subscribing. Whether it’s to feel part of the club, to show your elevated commitment, or to keep moving up the kicking-ass ladder (see Kathy Sierra), subscribing to even utility services such as LinkedIn can have a very strong emotional component.

Many of these tactics have been used by successful players in the market – the most often used examples are ZoneAlarm, eFax, AVG, and others. In fact, here’s a longer Linkedin discussion with many of those brands and more. Sean Ellis in particular is an expert on this area.

Note, of course, that most of the above tactics and examples stem more from the enterprise world, whereas the consumer folks tend to use different examples – like Skype, Cyworld, etc.

Key idea #3: Freemium products face common design challenges
As a corollary to having a playbook of different tactics, you might imagine that Freemium products share a common set of design challenges. In particular, the biggest question of freemium is:

When does Free stop and Premium start?

On one hand, if you give away too much, then your conversion rate from free-to-paid ends up being too low: people are too easily satisfied with your product, and have no reason to convert to being paid users.

On the other hand, if you force the user to premium too early, then you lose out as well. They may not give your product a chance, and move on to something else, before they start down the path of converting to a premium user. Similarly, the free segment of your audience can help drive distribution and virality, and without that group, it becomes much harder to get meaningful amounts of traffic.

Charles Hudson has a great discussion of this design issue on a recent blog, where he writes:

It is very difficult to properly segment users and features such that you provide enough value to both paid and free audiences. For example, an email service that provided 10 MB of storage for free and 1 GB for the paid version would have a hard time surviving – the basic offering isn’t sufficiently compelling to get people in the door. Conversely, a service that offered 2 GB for free and 10 GB for the premium service might be giving away too much value in the free product to expect a large audience of people to upgrade. And that’s just one product dimension. Adding more dimensions just makes it that much more difficult to figure out the features for which users would be willing to pay.

You can read more here. The full post goes into the best way to segment free versus premium, and whether it’s better to go with a trial period of the full premium product, or with a stripped-down Free product and a separately upgraded Premium product. This segmentation is a key design issue in the Freemium world.
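
A toy model makes the tension concrete (all numbers here are hypothetical, not from any real freemium product): signups grow as the free tier gets more generous, while the free-to-paid conversion rate shrinks, so revenue is not monotonic in generosity.

```python
# Toy model of the free-vs-premium tradeoff. All numbers are hypothetical
# illustrations, not data from any real product.

def monthly_revenue(free_quota_gb, price=10.0):
    # Assumption: a more generous free tier drives more signups...
    signups = 10_000 * (1 + free_quota_gb)
    # ...but gives people less reason to upgrade (floor at 0.1%).
    conversion = max(0.001, 0.08 - 0.03 * free_quota_gb)
    return signups * conversion * price

for quota in (0.01, 0.5, 1.0, 2.0):
    print(f"{quota:>5} GB free -> ${monthly_revenue(quota):,.0f}/mo")
```

Under these made-up curves, revenue peaks at a middling free tier – too stingy and nobody comes in the door, too generous and nobody upgrades, which is exactly Hudson’s 10 MB vs 2 GB point.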

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Special thanks to Hiten Shah for helping me recall some of the bullet points!

Written by Andrew Chen

April 20th, 2009 at 8:30 am

Posted in Uncategorized

“Stealing MySpace” and my personal experience monetizing MySpace ads

without comments

Just finished “Stealing MySpace”
I recently got a Kindle, and one of the first books I ordered was Stealing MySpace by Julia Angwin, a journalist from the Wall Street Journal. There are lots of interesting reviews on the Amazon page, so I won’t rehash them here.

I added the book to my books list, which you can view here.

Instead, I’ll give a quick description of how I first ran into MySpace at my previous company, Revenue Science, and what I learned from it.

Why start a cost-per-click behavioral ad network?
Revenue Science is a behaviorally targeted ad network – it tags users based on their browsing history, then targets ads to them based on that historical behavior. So ideally, if you knew a user was in the market for a car, you could show them car ads and achieve better clickthrough rates and conversions. This technology has slowly permeated all corners of the ads world, and it’s a powerful story.

In 2004, we were in talks with Yahoo/Overture to experiment with behaviorally targeted ads within the Yahoo network. As with all behemoth companies, they had several internal teams already working on behavioral targeting, and there was some uneasiness due to classic NIH syndrome. As a result, some of our sponsors within the Yahoo organization encouraged us to take an XML feed of all of their ads, with some basic pricing data, and sign up publishers as a full-blown ad network. The arrangement was that we’d receive access to their thousands of cost-per-click (CPC) text ads, and we’d put together data showing that we could generate revenue using behavioral targeting in a direct response setting.

Chicken or the egg
Of course, with most ad networks, there is usually a chicken-and-egg problem of getting advertisers and publishers to the table. If you don’t have a ton of advertisers already buying, then you can’t deliver premium CPMs to the publishers. And of course the advertisers won’t buy without your network being able to deliver a significant number of ad impressions with publishers already signed up. Having this XML ad feed from Yahoo made it easy for us to bootstrap the advertiser side of our ad network quickly and easily.

As an aside, most ad networks get started because a top salesguy splinters off from an already effective network and brings a bunch of advertisers along – that means they can start doing some buying quickly and easily to bootstrap the business. As with most marketplaces, the Golden Rule applies – “he who has the gold makes the rules” – and as a result, most networks tend to skew advertiser-friendly rather than publisher-friendly.

Adventures on the open internet
Now that Revenue Science had access to a bunch of advertisers, it was time to sign up publishers. I started a small group within the company that consisted of a couple inside sales folks who could sign up publishers, fax them contracts, and quickly get ads flighted. This was truly a startup-within-a-startup experience.

One of the first things we did was construct a big target list – essentially this meant buying lists of domains from Alexa, Nielsen, and others, and prioritizing them based on factors like:

  • Is the content written in English?
  • Do they already have ads placed?
  • Are they selling something, or is it a content site?
  • Is it a forums site? Or a social site? Or a communication site?
  • Do they have at least 1 million impressions per day?

Using these basic criteria, we had a team of fresh-out-of-college sales analysts go through the top 30,000 sites on Alexa and email each publisher.
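
That prioritization pass can be sketched as a simple scoring function over the domain list. The weights and field names below are my own invention for illustration, not Revenue Science’s actual model:

```python
# Score candidate publisher sites against the criteria listed above.
# Field names and weights are illustrative assumptions, not the real model.

def score_site(site):
    score = 0
    if site.get("english"):                 # content written in English
        score += 2
    if site.get("has_ads"):                 # already running ads
        score += 2
    if not site.get("ecommerce"):           # content site, not a store
        score += 1
    if site.get("category") in ("forum", "social", "communication"):
        score += 2                          # the site types we wanted most
    if site.get("daily_impressions", 0) >= 1_000_000:
        score += 3                          # enough scale to matter
    return score

sites = [
    {"domain": "bigforum.example", "english": True, "has_ads": True,
     "ecommerce": False, "category": "forum", "daily_impressions": 5_000_000},
    {"domain": "tinyshop.example", "english": True, "has_ads": False,
     "ecommerce": True, "category": "retail", "daily_impressions": 20_000},
]
for s in sorted(sites, key=score_site, reverse=True):
    print(s["domain"], score_site(s))
```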

The sales pitch was simple – “You know that your audience is worth a lot of money, but your ad revenues aren’t reflecting this.” This pitch was especially compelling, of course, to social networks and consumer internet sites which lacked context. That is, people would go there to “see what was going on” versus going there to buy something specific. And of course, we found that the social networks that accumulated massive amounts of profile data on their users were especially interested in doing something with that data.

We had also heard that LinkedIn and Friendster were making a ton of dough targeting ads to keywords that people had placed in their profiles. It seemed like a worthy experiment.

Finding MySpace in 2004
Right around the end of 2004, we started getting in touch with some of the larger sites on our list. At the time, there was an entry called Intermix which was grouped as a network in the Nielsen top sites listing, but we couldn’t tell which sites were driving the traffic. And so we started emailing the properties we could find, and eventually reached an executive there via one of their eCard sites. As “Stealing MySpace” discusses, they had a number of eCard sites that were old-school BlueMountain.com-type clones. The business model for these was to harvest email addresses from people sending eCards back and forth, which would then be used to upsell consumers to offer-based monetization.

Anyway, the Intermix guy mentioned a bunch of different ways to work together, and near the end of the call, he asked us if we had ever heard of MySpace. And of course, this being late 2004, there was really no writing about the property, so we thought it was just another random site in the Intermix portfolio.

What really got us was that he said the site was adding 50,000 new registered users a day!

After the call, I ran to my desk and pulled up the site – and was immediately disappointed. It looked like a Geocities site and a Friendster clone; I just didn’t “get it” yet. But it was clear after clicking “Browse” that the site was incredibly active: there were tens of thousands of active users, and almost every profile had very recent comments and was completely pimped out.

Monetizing social networks is hard
After meeting with several of the execs there, we started thinking about a custom integration with them whereby they would pass us relevant keywords about user segments.

The founding team at MySpace was superb – we were impressed by almost everyone we met, and it was clear they were a scrappy, entrepreneurial group, not the staid media executives who roam the halls of most public internet companies. Many of the folks who are no longer there, including Jason Feffer, Steve Pearman, and others, are now starting new companies of their own.

Anyway, the entire idea for behavioral targeting on MySpace was that we would take relevant keywords about audiences and then target ads to those users. While this is a fantastic idea in theory, the process exposed a large number of difficulties:

  • From a technology standpoint, the keywords in user profiles are extremely free-form. There’s a lot of formatting garbage, like people listing *~ dancing ~* as an interest. Similarly, people filled out their interests in complete sentences and jargon, creating many other non-trivial text-parsing problems
  • Similarly, there’s a lack of purchase behavior on social networks – because people are there to hang out, they aren’t putting things like “looking for a new digital camera” in their profiles, nor are they searching for it. Instead, they are saying “i <3 taking pics”, which is different from saying you’re in the market for a great new digital SLR camera. The search traffic was similar.
  • Also, there was a ton of noise in the clickthrough data we were getting back – we found that MySpace traffic was very noisy, had a lot of accidental clicks, and thus created problems on the backend for advertisers
  • Similarly, there was just SO MUCH traffic – when we started working with MySpace, they were at about 1 billion ad impressions per month, but quickly got up to 1 billion ad impressions PER DAY. Pretty amazing growth, and we worked with them right through the inflection point. This really compelled us to get very serious about scaling our ad servers, and we heard repeatedly from other ad networks that they couldn’t take the volume MySpace tried to give them
  • Another difficulty was the completely context-free ad impressions that exist on a social networking site – people are there to hang out, not to buy stuff, so the clickthrough rates were very low: anywhere from 0.05% to 0.2%, never approaching the >10% CTRs that you’ll see on search pages.
  • Visit lengths were another issue – as I’ve written about previously, more engagement doesn’t mean better monetization. The first 10 ad impressions might monetize extremely well, but once you’ve exhausted those premium campaigns, your super-hardcore user who generates 200 pageviews is not substantially different from your engaged user who generates 30. Once you get past a certain point, it’s all punch-the-monkey ads.
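
The visit-length point can be made concrete with a two-tier eCPM model (the rates and cap below are hypothetical, not MySpace’s actual numbers): once the premium campaigns per user are exhausted, every additional pageview monetizes at remnant rates.

```python
# Revenue per user under a two-tier fill model: the first few impressions
# go to premium campaigns, the rest to remnant. All rates are hypothetical.

PREMIUM_ECPM = 4.00   # $ per 1,000 impressions for premium campaigns
REMNANT_ECPM = 0.10   # $ per 1,000 impressions once premium is exhausted
PREMIUM_CAP = 10      # premium impressions available per user

def revenue_per_user(pageviews, ads_per_page=1):
    imps = pageviews * ads_per_page
    premium = min(imps, PREMIUM_CAP)
    remnant = imps - premium
    return (premium * PREMIUM_ECPM + remnant * REMNANT_ECPM) / 1000

for pv in (10, 30, 200):
    print(f"{pv:>3} pageviews -> ${revenue_per_user(pv):.4f} per user")
```

Under these assumptions, the 200-pageview user earns only about 40% more than the 30-pageview user despite generating nearly 7x the impressions – which is the “more engagement doesn’t mean better monetization” effect.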

I’ve covered these monetization issues in more depth here and in these essays here.

That said, it was clear just looking from the data that MySpace was a very special property. In particular:

  • The traffic was growing incredibly fast – as I said above, they went from 10-15M registered users when we started working with them to over 100M in just a few years
  • The users were highly engaged, and were definitely spending hours on the site – this is all obvious now, but at the time, MySpace was the only site where we were seeing this type of engagement
  • The users were located all across the US, with 95%+ American traffic – which commands a huge premium within the advertising industry

I don’t think it needs to be said, but for the folks who are ga-ga over the monetization potential of Twitter, I’d encourage you to think about the monetization shortcomings of social networks, blogs, and email, and present an argument about why it’ll be 100X better. (That said, I love using Twitter as a service, so I hope they figure it out!)

And in comes Google
Over time, we started to find things that worked well for us to monetize MySpace traffic. Taking in-market data from outside of MySpace and then targeting those same uniques was definitely effective. We found that certain ad units and sections monetized much better than others.

Very quickly though, Google came in and threw down their $900MM deal for MySpace’s traffic. The details of this deal are in Stealing MySpace, and while I got to hear about the aftermath of the deal, Julia Angwin’s book fills out a bunch of the details from the executive point of view.

The point is, while the MySpace traffic was certainly amazing, it was clear to me (and the other ad network folks I was in touch with) that there was no way the ads would perform at a level that justified the deal. And ultimately, I think we were proven right. The deal seemed like a crazy auction with a “winner’s curse,” and it always seemed like the big bucks would get attached to the brand deals the FIM/MySpace team was putting together, rather than to making the remnant text ad inventory perform 10X better.

Conclusion
Now it’s clear that MySpace’s dominance of the internet is waning, and to me, there’s no stronger indicator of this than searches for “myspace” plateauing recently. This means that folks who typically type it to get back to the site via navigational search are not as interested anymore. You can fix some aspects of retention through notifications, better SEO, etc., but if your users don’t want to search for you anymore, then something is wrong.

Graph of searches for “myspace”

Anyway, only time will tell if MySpace is able to recover their strength, or if they are stuck where they are. Certainly their monetization is bound to improve, but turning the ship on growth is always very difficult.

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

April 14th, 2009 at 9:00 am

Posted in Uncategorized

Designing and Testing an Ad Product: 5 lessons learned from imeem’s audio ads (Guest Post)

without comments

Another guest blog, today from Sachin Rekhi, who is currently an entrepreneur-in-residence at Trinity Ventures. You can find more of his writing at SachinRekhi.com, and follow him on Twitter at @sachinrekhi. Most recently, he led product for imeem’s monetization efforts, and before that he started the company Anywhere.fm through the Y Combinator program (later acquired by imeem). I asked him to write about some of his work monetizing imeem’s ad inventory, and some of the issues he worked through – his post below is about those projects. One last important note: Sachin is engaged to marry my sister Ada next year! Congrats to the happy couple ;-) –Andrew

Designing and Testing an Ad Product: 5 Lessons Learned from imeem’s Audio Ads
By Sachin Rekhi

Introduction
In its search to find the most effective way to monetize users’ time spent listening to music, imeem has become one of the early innovators in the nascent online audio advertising space.

From the process of designing, testing, and iterating on imeem’s unique audio ad product, I wanted to highlight 5 key lessons that are applicable not only to developing imeem’s ad offering, but to designing any innovative ad product in general.

Lesson 1: Align the ad product with your site’s user experience
Lesson 2: The easy way is often not the best
Lesson 3: Pick the right metrics to optimize
Lesson 4: Make sure to look at qualitative feedback
Lesson 5: Iterate on the sell in addition to the ad product

Align the ad product with your site’s user experience

imeem had classically employed a variety of advertising strategies to monetize users, including display ad inventory that was filled by our direct sales team through high impact brand campaigns as well as dozens of ad networks we used to fill our glut of remnant inventory.

Yet we knew with our audio consumption experience, we were creating a new kind of available ad inventory which could be much more effective at reaching our users than display ads since audio-based advertising better aligned with the activity users were most engaged with on the site. With terrestrial radio ads still generating $21B in revenue, there was clearly an opportunity to shift some of those dollars online and provide a better experience for both users and advertisers.

The easy way is often not the best
Online audio ads are not a new concept. They have been used by a variety of major online streaming outlets, including AOL Radio, CBS Radio, Live 365, and Yahoo LaunchCast. However, the initial incarnation of audio ads took the easy way out. They typically ran 30 second audio ad spots which they obtained from ad agencies that re-purposed their terrestrial radio creative for online audio ads. This made it very easy for agencies to get their feet wet with online audio advertising with no additional creative costs. While this may work for traditional online streaming services, the new generation of music streamers like imeem, Last.FM, and Pandora would not be willing to run such long audio ads out of fear of losing their user base.

So what was needed was an audio ad unit custom tailored for personalized streaming services. And that’s what we ended up creating at imeem. We came up with an 8 second audio ad spot that would advertise a national brand and show a standard IAB medium rectangle (300×250) banner on top of the player during the audio ad playback. The user could click-through the medium rectangle to the advertiser’s landing page like classic banner ads. We started with a very low frequency of a maximum of 2 ads per user per hour.

However, this was far from easy: it required imeem to develop in-house production capabilities for the 8-second audio creative, since agencies had no existing creative in this format and were rarely willing to develop a new set themselves. While this was an undertaking, it is often necessary to bear the cost of innovation to deliver the right ad product to your audience.
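
The frequency cap described above – at most 2 audio ads per user per hour – can be sketched as a sliding-window check at ad-serving time. This is an illustrative sketch, not imeem’s actual ad-server code:

```python
# Sliding-window frequency cap: at most MAX_ADS audio ads per user per hour.
# Illustrative sketch only, not imeem's actual ad-serving implementation.
import time
from collections import defaultdict, deque

MAX_ADS = 2
WINDOW = 3600  # seconds in the rolling one-hour window

_served = defaultdict(deque)  # user_id -> timestamps of recent ad plays

def should_serve_ad(user_id, now=None):
    now = time.time() if now is None else now
    plays = _served[user_id]
    # Drop plays that have aged out of the one-hour window.
    while plays and now - plays[0] >= WINDOW:
        plays.popleft()
    if len(plays) < MAX_ADS:
        plays.append(now)  # record this play against the cap
        return True
    return False
```

A per-user deque keeps the check O(1) amortized; a real ad server would persist this state in something like memcached keyed by user id rather than process memory.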

Pick the right metrics to optimize
In order to understand the effectiveness of any ad unit, it’s important to test it systematically. The first step in designing a successful experiment was determining which metrics we were testing. We knew that we were trying to satisfy two customer segments with this ad product: advertisers and users. For advertisers, there were a variety of ad-related performance metrics that we could measure. However, we decided to start by measuring the metrics that ad agencies had classically been most interested in. We wanted to determine whether we could make advertisers happy through the performance of these classic metrics, since trying to educate ad agencies on the importance of new metrics is an uphill battle that would significantly decrease your ability to sell the unit. Thus the initial advertiser metrics we tracked were the click-through rate of the tethered medium rectangle banner, as well as aided and unaided brand recall as measured through quantitative surveys administered by our research partner Dynamic Logic.

For users, what we wanted to understand was whether introducing audio ads onto our site would decrease how much they used it. While we tracked page views, visits, session length, etc., we focused on the number of songs played per user during the life of the experiment as the most important proxy for site usage.

Make sure to look at qualitative feedback
In addition to measuring quantitative metrics, it’s equally important to collect qualitative feedback from real users. The iModerate online focus groups we conducted ended up being very enlightening and allowed us to derive interesting insights into consumer motivations and behaviors that looking at the quantitative data alone wouldn’t provide.

For example, though initially we were significantly worried that the introduction of audio ads would cause users to flock to our ad-free competitors, we learned through interviews that many of our young users had developed a strong affinity with imeem, understood the need for imeem to monetize, and were eager to suggest ad verticals they would be most interested in hearing to improve the product.

Iterate on the sell in addition to the ad product
An area that’s as important to iterate on as the ad product itself is how you sell or position the offering in the marketplace. Selling innovative ad products is actually the greatest challenge in the process. Anytime you introduce a new ad unit, significant education is required for brand marketers and agencies to help them understand the importance, effectiveness, and promise of the new medium.

Our sales planning team iterated many times on the pitch to advertisers for the audio ad product as well as how we reported on ad unit performance at the end of each campaign. This was regularly refined based on feedback we elicited from our advertising partners.

Conclusion
While many have claimed the death of online advertising in light of the recession, it’s important to remind ourselves that ad dollars are still being spent online. Now is an opportunity to innovate on the ad products we offer advertisers to show greater value, brand awareness, and performance. We must keep in mind that ad agencies are eager to find better ways to spend ad dollars, as they are equally interested in showing results to their brand clients in order to hold on to their ad budgets. We should partner with our advertisers and users to find the most efficient way to leverage online advertising to monetize our sites.

Written by Andrew Chen

April 14th, 2009 at 8:30 am

Posted in Uncategorized

Will social payment platforms really work long-term? (Guest post by Jay Weintraub)

without comments

My friend Jay Weintraub writes an amazing blog about the leadgen industry at JayWeintraub.com, which I’d recommend you check out. He also runs the LeadsCon conference series. Since Jay is such an expert in the leads space, I recently asked him to comment on the incentivized social offer platforms that have recently seen much success, and assess the pros and cons of the business. His thorough response is below! –Andrew

Conversionomics: Analyzing social cash/alternative payment platform market longevity
by Jay Weintraub

(Advanced/Ad industry readers: Scroll to “Conversion Economics” Section)

A Not So Brief Background
Every once in a while, the world of higher-brow internet advertising concepts – social media, engagement, Web 2.0, etc. – intersects with the oft-perceived lower-brow world of performance-based advertising. Calling performance-based advertising lower-brow does some injustice to one of the more dynamic sectors, but this is the same sector that has figured out how to convince tens of thousands of users daily to sign up for “free” trials of acai berry and colon cleanse products via the flog. It is also the same sector that, despite the legal and regulatory hurdles (including millions in fines) thrown at it for its marketing of ringtones, continues to generate well north of one million new cell phone pin submits (monetary transactions) monthly for other mobile subscription services. If it doesn’t seem intuitive that this almost-anything-goes wild west of arbitrage would intersect with the heavily funded world of higher-brow online advertising, you’re not alone. It was purely accidental, but perhaps not unexpected at some level, especially when we consider the underlying trend that tends to attract performance marketers – massive amounts of inventory that very few have a grasp on how to monetize. That description fits social media to a tee.

In the display world, surplus inventory tends to get lumped into the bucket of remnant ads. In this case, the surplus is only remnant because few outside of performance marketers have started to figure it out. The social media inventory being referenced here doesn’t include the MySpace login screen or other areas that a) have enough scale and b) have won over the trust of agencies. We are talking about inventory that, to some, doesn’t look much different from user-generated content: application traffic. We could write a whole piece on how the dominant ad networks missed the boat on monetizing app banner traffic. Each app has made room for standard ad units, and taken together, the apps resemble any other collection of publishers. The networks would even have greater data available for optimization – but alas, no cookies, on which most of their optimization platforms rely. Had these networks become the dominant players, we would still see the same result, though – a limited revenue stream for the app owner. None of which is new, especially to this audience. It does, however, set the stage for the unexpected rise of the more dominant player in app monetization, a group that taken together makes almost as much money monetizing app traffic as Facebook does monetizing its internally controlled impressions.

If we are honest with ourselves, it is fair to say that no one would have guessed just how lucrative monetizing app traffic would be. The data might now tell us this, but ask someone if they’d sign up for Dish TV in order to earn points to dress their baby or buy their friend, and the rational person would say no. Sure, they might do some things in order to earn more points – send invites to their friends, come back daily, etc. – but jump through enough hoops in a mob-themed fantasy game to earn its founders more than one million monthly? No one saw that coming. Plenty of companies are profiting from this discovery – be it app makers themselves like Zynga or, especially, the new breed of companies that enable this commerce: the alternative payment platforms, which include the likes of OfferPal Media, Super Rewards, Peanut Labs, Gambit, etc. Only now are those not directly tied to this ecosystem starting to grasp the scale that some of these companies have reached (>$100mm annual run rates). They are profitable, enjoying hockey-stick growth curves and great valuations by the venture community. They are also, for the time being, indelibly tied to the performance-marketing space, as that is where the vast majority of their ads come from. Why? Because ultimately, the alternative payment platforms are a form of incentivized ads.

History Repeated?
Among almost all forms of performance-based online advertising, not many have enjoyed the fame and notoriety that incentivized marketing has. Its initial rise to prominence took place in a similar setting – a media recession with a surplus of untargeted inventory available. The incentive promotion ads, think “Free iPod” offers, did what other types of online advertising could not: they took someone who hadn’t expressed any intent about a product or service and turned them into customers for a variety of products and services. Those who mastered this made large sums of money, so much so that they could afford to purchase other companies with a better public image, e.g., Intrepid Investments buying Q Interactive. Those who mastered it even better found themselves in the cross-hairs of the FTC. Distilled to its core, though, the incentive promotion companies operated on a simple principle – users want something, e.g., an iPod, and they will jump through hoops (complete offers) in order to get it. At that level, it doesn’t seem conceptually different from the alternative payment platforms. This newer generation of companies uses the same ads and the same tracking methodology; they just acquire their users a different way – app traffic instead of banner/email/search traffic.

Given the boom and bust that the original innovators of incentivized marketing have undergone, the big question for many tracking the unexpected popularity of the real Incentivized Marketing 2.0 is what the future holds in store for them. It is a topic that entrepreneur Niki Scevak – the most widely quoted blogger (from Andrew’s blog to VentureBeat) for someone with supposedly fewer than 200 RSS readers – discussed in his fantastic post, “The Impending Doom of Facebook Apps.” At the heart of the issue: quality. The incentive promotion space gained a reputation for burning through advertisers due to its horrible quality. Users who signed up for offers didn’t really want the offer; they just wanted their now not-so-free good. And a whole cottage industry even sprung up on how to game the system to get your iPod or other electronics for as little real money as possible. Is the same fate to beset the alternative payment platform space? Says Niki:

    “The consumer behavior has changed only subtly from five years ago: instead of completing a laundry list of offers to qualify for a $150-250 value product, each consumer is completing a small number of offers for a smaller economic value to be used in the game or application.
    But just like credit card companies and Netflix were happy to give the free ipod guys a shot, they are also happy to completely shut down the channel as well once it proves it doesn’t work. As the category of leads/customers grows, the more important it will be for more senior marketing folks to take a real look at the quality of leads provided through Facebook games and apps. And they’ll find the same result of quality they did back in 2004-5: a whole heap of shit.”

Types of Offers
Front and center in the ecosystem are the ads that these “Managed Offer Platform” companies (a term coined by Offerpal Media) run. At the end of the day, they pay the bills. In the world of online advertising, ads tend to fall along one of the following payment metrics – per thousand (CPM), per click (CPC), or per action (CPA). Those in the offer platform space focus on the last: per action. But per action is a broad term encompassing the following subcategories – per sale (CPS) and per lead (CPL), the chief distinction being that the former requires data plus a credit card whereas the latter requires just a user’s data. To confuse the situation further, we could create a whole new category of CPA ads based around subscription services (a form of cost per sale) – one that includes payment with credit cards, e.g., Netflix, or the (potentially) more sinister free trials (acai berries, for example). This subscription category also includes services where the transaction occurs using the mobile phone (ringtones, quiz, crush).

In case you haven’t installed an app that uses one of the offer platforms, here are some sample ads that you would see. This example looks at ads shown as a user tries to earn Lunch Money, a virtual currency managed and tracked by alternative payment platform company OfferPal Media. You will see examples of per lead (auto insurance), mobile subscription (IQ Challenge), credit-card subscription (Netflix), per sale (ZoneAlarm), and even one Incentive Promotion 1.0 offer (free laptop). Here they are:

Take the IQ Challenge!
Quiz Ad – Todays high score is 125. See if you can beat it? Compare your score to others in the IQ Challenge community!

No credit card needed to receive L$ within minutes.

L$ awarded after submission of a valid mobile number and PIN confirmation.

Price – 7.05

Netflix DVD Delivery Service – Free Trial & GET L$!
Try Netflix DVD home delivery for FREE and get some L$. No coupon codes allowed! If you use a coupon code, your L$ will not be awarded.

Purchase required to receive L$ within minutes.

L$ awarded upon registration for home DVD delivery. New members must order and receive initial movie order or L$ will be reversed.
Price – 15.00

Get Free Auto Insurance Quotes and easy L$!
Direct Insure Online offers exciting new insurance quotes fast, free and easy. Just fill out a simple form and get up to FIVE quotes which can help you save money on your Auto Insurance.

Free! No purchase required to receive L$ within minutes.

L$ awarded upon Complete insurance quote request. Fraudulent information will lead to expulsion from application.

Price – 3.78

Share your thoughts on Laptops, and get one for FREE!
Answer a short survey on laptops and receive one for FREE! Participation required.

Free! No purchase required to receive L$ within an hour.

L$ awarded after submission of a valid email address, personal information, and navigation to the final offers page where you must click on an offer.
Price – 3.00

Make your PC Safe and FAST
The Most Complete Internet Security. Need home protection for up to 3 PCs? Get the ZoneAlarm Internet Security Suite 3 User-Family Pack for just $10 more!

Purchase required to receive L$ within minutes.

L$ awarded when you buy now.
Price – 19.80

The “Price” reflects what the user earns as a result of their efforts. In this case, Lunch Money (L$) gets awarded in the millions, but we rounded down to the nearest million. And for those who do not want to earn their Lunch Money, they can simply purchase it: the rate is $9.99 per 100 million L$. That data point might not seem like much, but it provides a very important starting point for assessing quality.

Conversion Economics
In the world of performance-based advertising, the price per action correlates directly with quality. The higher the quality, the greater the payout. Let’s look at two examples, one from the lead generation world and one from the subscription world. In the lead generation space, e.g., auto insurance, the data purchased by a lead buyer has no real value to them. It only matters if the leads turn into policies. The greater the lead to policy ratio, the greater the value the buyer can and will pay for that lead. In the subscription world, the signup has some value, for users must enter valid credit card data, but more often than not, the advertiser must pay an initial bounty that exceeds what they earn from that initial charge. The longer the average user stays subscribed, the more the advertiser can afford to pay.

We can now look at two examples from the subscription world that appeared as choices for the user wanting to earn their Lunch Money – Netflix, which requires a credit card, and IQ Challenge, which requires a mobile subscription. Netflix pays the user 15mm L$. (As an aside, the fact that L$ has no obvious correspondence to real dollars is very wise.) That price is after the payment platform makes its money. In the default situation, this means a 50/50 split between the app owner and the offer platform. Assuming that to be the case with Lunch Money apps, the rate to the platform is 30mm. But what is 30mm L$ in actual dollars? One way to figure it out is to use the hard-currency L$ rate for a clue – $9.99 for 100mm L$. That implies the Netflix offer has a value of roughly $3.00. As a user, you’d be much better off paying $9.99 for 100mm L$ than converting on Netflix for 15mm, where you will receive a charge of at least $9.95 on your credit card. The challenge with this math is that the dollar/point ratio doesn’t always give us a good sense of the actual economics. If anything, it shows us the propensity of users to select an offer over paying hard dollars. If the system were truly aligned (ad dollars and offer dollars), the user would probably receive at least 100mm, because Netflix pays Offerpal at least $20 for that user. At $20, the publisher receives $10, with $10 being roughly equal to 100mm L$ according to the exchange rate. Right now, though, users don’t understand the offer economics the way someone in the performance marketing space would, so they don’t naturally question the point spread.

The second example, the mobile subscription offer IQ Challenge, pays 7mm L$. By knowing the market rates, we can back into a dollar-per-point value. In the affiliate space, this offer would pay between $6 and $10. Given this is incentive traffic, we will assume the low end, $6, of which the publisher would see $3. The publisher receives $3 and the user earns 7mm, showing an exchange rate of just greater than 2 to 1. That is still not commensurate with the 10 to 1 ratio of directly purchasing L$. But it is enlightening when we use this price point to try to estimate what Netflix might pay. If the mobile offer pays $6 for 7mm to the user, then Netflix could pay as little as $13 for 15mm to the user. The wrinkle in the analysis is that many of the offers do not come direct from the advertiser.
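For concreteness, here is a sketch of the back-of-envelope math above in Python. The figures come straight from this post; the helper names are mine and purely illustrative:

```python
# Hard-currency rate from the post: $9.99 buys 100mm L$.
USD_PER_MM_POINTS = 9.99 / 100

def implied_usd(points_mm):
    """Dollar value of a points figure at the hard-currency rate."""
    return points_mm * USD_PER_MM_POINTS

# Netflix: the user earns 15mm L$; assuming the default 50/50 split,
# the advertiser-side rate works out to roughly 30mm L$.
netflix_value = implied_usd(30)          # ~$3.00

# IQ Challenge: a $6 payout buys the user 7mm L$, implying a
# dollar-per-mm rate we can use to estimate what Netflix pays.
usd_per_mm_earned = 6 / 7
netflix_floor = usd_per_mm_earned * 15   # ~$12.86, i.e. "as little as $13"

print(round(netflix_value, 2), round(netflix_floor, 2))
```

The same two-line calculation works for any offer on the list: convert the L$ payout at the hard-currency rate for one bound, and at an observed dollars-per-point rate for the other.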

Traffic Blend
Despite their growth, the jury is still out on the offer platform companies’ traffic quality. We know that they are better than the free-iPod offers, but the $64,000(000) question is how much better. Is it Scenario A, where they are closer to the incentive promotion space, or Scenario B, where they are somewhere in between, perhaps even closer to true intent than to incentive promotion?

Scenario A:

Scenario B:

Regarding overall quality (and thus viability), the jury is still out because, not surprisingly, the results are all over the map. There are specific examples of people praising it and people lambasting it. Adding to the confusion, not all advertisers who receive traffic from the platforms realize that they do. This does not mean deliberate deceit, especially on the part of the offer platforms. But it doesn’t imply complete ignorance either. The CPA networks (aggregators of CPA offers) that supply them the offers don’t know the traffic quality yet either, so they hedge their bets. They make sure that the traffic from the “apps” (both the applications and the alternative payment platforms) doesn’t comprise too great a percentage of the total traffic to a given advertiser. That way, if the traffic isn’t as good as they estimate, it won’t hurt their relationship with the advertiser. They also do this because many networks know their advertisers wouldn’t necessarily give permission to run on the apps, so they take the ask-forgiveness route. While it might seem better to operate on full transparency, there are some quirks which prevent a fully transparent system from being the best solution.

When we say the platforms aren’t completely ignorant, it is because they know they have more traffic for a given advertiser/campaign than a given CPA network gives them. As a result, for some campaigns, they end up getting the campaign from multiple providers. Unlike TrialPay, which largely doesn’t play in the app space, those in the app space still rely on third-party providers. That is to say, while they do have direct relationships, it’s not unrealistic to assume that more than 50% of their revenue still comes from third-party providers. Complicating matters further, the platforms might have Netflix (as an example) direct but also run it from a third-party company. This happens all too commonly in the performance marketing world, where one company’s cap is lower than what it can deliver and another’s is higher. Plenty of offline parallels exist for that, but whether it is a good or sustainable practice online remains to be seen.

I find it hard to believe that the quality could be entirely bad, or that the traffic round-robin (taking an ad from one network x%, the same offer from another y%) is effective enough to cover up truly bad quality. Another option is that it could be that bad, and that there is enough overall budget spread among networks to cover up the quality. But again, with an industry contributing north of $200mm per year among what is ultimately a narrow set of advertisers, a cover-up doesn’t explain things fully. Is it bad but too small to get noticed? Good and built on a solid foundation? Good but still up in the air (because of this segment’s youth, and because the continuity advertisers receiving the traffic need multi-month periods to determine quality)? I’d say somewhere between the last two.

Standing Hypothesis
First, the bad news. Pricing will probably go down. The “app” ecosystem makes more than it probably should on a per-unit basis. The good news is that the traffic quality, in my opinion, more closely resembles Scenario B than Scenario A. In other words, I do not foresee the impending doom of Facebook apps or the companies that help them monetize the traffic. The not-so-eloquent answer as to why they won’t die comes from the world they mirror at some level – the good ol’ incentive promotion space. It is an industry that just won’t die, and it continues to morph as the audience and traffic sources change. If it can survive, how is it that the offer platforms, with their much higher intent, would not? Assuming you want a real answer and not an “if they can, so can you” response, here are two other reasons:

  1. One to one – app users are still participating in offers in order to earn soft money, which by definition is not only incentivized but also lower intent than a consumer choosing to seek out the service. Still, there is a big difference between how users engage with these offers versus those that are part of the free-iPod offers. Users engaging in the app process pick a specific offer from a list of offers. While still somewhat limited, it is a choice. In addition, users aren’t being asked to jump through major hoops. Contrast that with the free-iPod approach, where much of the disconnect comes from users being funneled through a flow in which they see more than a hundred opt-ins, none of which actually help them achieve their end goal. Plus, once they finally get to the final page, instead of being a simple process, it is quite cumbersome (number of offers, additional people, etc.) and designed for breakage. They make more if you don’t finish; not so with the offer platforms, which only make money if you do complete it.
  2. Accountability – in the incentive promotion space, users enter skeptically and/or with false expectations. Additionally, the incentive promotion path doesn’t have a connection, nor does it really try to build one, with its users. The exact opposite is the case in the app environment. The user has a vested interest in the app. It is tied to their personality, their profile. The action that they take or don’t take reflects directly on them. Want to cheat the system? You will get caught. But that is not always a strong deterrent, because people don’t realize it. So the offer platforms have become very explicit and obvious regarding the requirements for credit. Take Netflix, for example: “Purchase required to receive L$ within minutes. L$ awarded upon registration for home DVD delivery. New members must order and receive initial movie order or L$ will be reversed.” Very clear. So too is the one for ZoneAlarm: “Purchase required to receive L$ within minutes. L$ awarded when you buy now.” Perhaps my favorite is seeing the incentive promotion guys’ language: “Free! No purchase required to receive L$ within an hour. L$ awarded after submission of a valid email address, personal information, and navigation to the final offers page where you must click on an offer.” Good luck. Where things get tricky is with lead generation (my area of specific focus). Here, users have a better chance at gaming the system, and that partially explains why few true lead generation offers exist. The one from above has the following disclaimer: “Free! No purchase required to receive L$ within minutes. L$ awarded upon complete insurance quote request. Fraudulent information will lead to expulsion from application.” But no matter how strict the language, it doesn’t require much sophistication to enter fraudulent data that looks valid.


Saving Grace
Whether the current model employed by the alternative payment platforms stays the course almost doesn’t matter. At some point, users should wise up and realize that they can play better games for free elsewhere. Not all will, but the enthusiasm and take rates we see now won’t hold. But again, that won’t impact the longevity or long-run success of these payment platforms. App developers and offer platforms can help slow any decline by creating a balance between what they ask users to do and what users get in return. One could argue that users must do a little too much and, relatively speaking, get very little in return. The cost of converting on two higher-value offers would buy a person a full Xbox 360 game, but luckily for this ecosystem, users aren’t rational and thinking about alternatives when in the midst of impulse- and ego-driven actions.

Ultimately, the platforms have an incredibly valuable asset that assures them a role in the barely developed world of social media monetization – access. They own prime real estate, and if there is one thing the ad networks have shown, it truly is location, location, location. It is why an ad network with poor tech but great inventory will outperform one with amazing optimization but lesser traffic. Whether they intended to be or not, as opposed to the banner ad networks that work with app publishers, the offer platforms are the real gatekeepers, and they can transform themselves much the way the old incentive guys have as the traffic and advertiser base change. And luckily for the platforms, they will have no shortage of mediums with which to play – social media and the iPhone today, game consoles tomorrow. While their intent may always be questioned, their relevancy will not be.

About the author: Jay Weintraub works in and writes about the performance-marketing ad ecosystem. He runs the largest conference servicing the online customer acquisition industry – LeadsCon. His next event, LeadsCon East takes place August 17 – 18, 2009 in New York City at the Marriott Marquis Times Square. You can also find a collection of his writings on his personal blog, JayWeintraub.com.

Written by Andrew Chen

April 13th, 2009 at 9:00 am

Posted in Uncategorized

Video: Panel on “Monetization and Business Models for Flash Games”

without comments

Flash Games Summit
Yesterday I attended the first-ever Flash Games Summit, which had a bunch of informative sessions and interesting people involved. Thanks to Mochi Media for inviting me!

Monetization panel
I moderated a panel called “Monetization and Business Models for Flash Games” with:

  • Adam Caplan, Super Rewards
  • Kate Connally, AddictingGames
  • Jameson Hsu, Mochi Media
  • Kenny Rosenblatt, Arkadium Games

This was a nice mix of people because it represented a virtual goods-focused payments platform (Super Rewards), an ad network (Mochi Media), a content portal (AddictingGames), and a developer (Arkadium), so there was a variety of interesting viewpoints.

Panel topics
We covered a variety of topics, including:

  • How do the different players (ad network, portal, etc.) make money?
  • What are the biggest factors in driving monetization?
  • What are the differences in monetizing demographics and geographical areas?
  • What kinds of games are more successful at monetization?
  • How are social gaming folks different than flash gaming creators?
  • What are the key metrics they look at?
  • How has the recession changed their business, and how will it affect the industry overall?
  • …and many more!

Hope you enjoy watching the discussion.

Here’s the video – enjoy!
(click here if you don’t see the embed)


Written by Andrew Chen

March 23rd, 2009 at 12:34 pm

Posted in Uncategorized

Friends versus Followers: Twitter’s elegant design for grouping contacts

without comments


BFF means “best friends forever” for those of you who are wondering why there’s a monkey and banana at the top of this blog post

Examining the power of one-way friending AKA “follow”
When I first joined Twitter, I found the one-way following mechanic pretty weird – but now, it’s clear that it’s very powerful and provides a richness that you can’t get from two-way friend requests. Initially though, I was confused. After all, hadn’t all social networks standardized around two-way friend requests that both parties have to accept? Why try to fix it? It seemed like it’d just be confusing, and potentially freak some people out that they were being followed by random people they didn’t know.

This post examines the strengths of the one-way “follow” design, in particular, the ability for this paradigm to support 4-tiers of relationships rather than the simple 2-tiered model in the classic friends case.

First, let’s discuss the social groupings issue.

Social groupings and friend segmentation
At the same time as Twitter was just getting started, the rapid explosion of users on Facebook, MySpace, and other social networks raised a bunch of really core and important questions about these social applications. Among these issues:

  • Will one social network rule them all?
  • Or alternatively, will you use one social network for work, one for your personal life, and possibly others for other vertical interests?
  • If it’ll be one, how will you group your work friends in one, and your personal friends in another?
  • How will this work at a design level? How about at a technical level? (aka Data Portability?)

These are all great questions, and point out a number of potentially fundamental weaknesses to the all-in-one social networking model. If you look at many other communication channels, like phone, email, etc., you’ll often see people segment their identities. Their work voicemail will be boring, and their personal voicemail will be funny, and they’ll use different phone numbers for each.

Of course, the users in the initial petri dish where social networks grow – high school and college students – don’t really have to deal with this. Their social groupings are more or less homogeneous, because they only have personal friends. But after you’ve worked a couple places, moved around, and had your friends’ careers diverge into lawyers and slackers, your social network becomes more complex and segmented.

The approach many social networks have taken to solve this is to group people into networks and friend lists. Either through self-assignment or through you assigning them, people go into different lists. Of course, this hurdle is basically the kind of boring security configuration that consumers have historically had trouble with.

Twitter’s “follow” model
The amazing thing about Twitter’s model of allowing one-way following is that it adds depth and a couple simple segmentations to your friend list, without needing to do any configuration beyond hitting a button.

With the one-way follow design, you have:

  1. People who follow you, but you don’t follow back
  2. People who don’t follow you, but you follow them
  3. You both follow each other (Friends!)
  4. Neither of you follow each other

Having these four tiers of relationships on Twitter is nice – combined with Protected Updates, it creates a nuanced set of definitions, executed with just one button: Follow.
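As a sketch, the four tiers reduce to just two booleans – no configuration beyond the follow button. (The function and tier labels here are mine, purely illustrative.)

```python
def relationship(you_follow: bool, they_follow: bool) -> str:
    """Classify a one-way-follow relationship into the four tiers above."""
    if you_follow and they_follow:
        return "friends"      # tier 3: you both follow each other
    if they_follow:
        return "follower"     # tier 1: they follow you, you don't follow back
    if you_follow:
        return "following"    # tier 2: you follow them, they don't follow back
    return "strangers"        # tier 4: neither of you follows the other

print(relationship(you_follow=True, they_follow=True))   # friends
print(relationship(you_follow=False, they_follow=True))  # follower
```

Contrast that with two-way friending, where the same function could only ever return two values.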

The advantages are numerous: First, it’s easier to get started by opting into a number of feeds that pre-exist, and you can populate your timeline without anyone accepting your friend requests. Second, it makes it possible to have interactions with lots of people (@replies) while your timeline only carries information you care about, since you don’t have to follow folks you’re not interested in. Third, some profiles are inherently appealing to a cross-section of users – celebrities, companies, media content, etc. – and the one-way follow design supports all of these nicely, whereas two-way friending makes things complex.

Two-way friending with public profiles?
Compare the above to the traditional two-way friending case supported by most social networks:

  1. You’re friends
  2. You’re not friends

So how do you deal with Sean Combs aka P. Diddy (aka @iamdiddy)? If you were to friend him (and he friends you back), all of a sudden, you are exposed to the random people (like you) who are interacting with him, which creates a lot of low-value information on your newsfeed.

As a result, it only makes sense to separate Diddy’s profile into two, a public and a private profile, where the private one is for “real” friends and the public one is for everyone else. MySpace opted to differentiate these public profiles as “Artist Profiles,” whereas Facebook decided to call them “Pages.” I imagine that they treat information flowing in and out of these pages specially, so that they know not to publish crazy amounts of information from random people, and they can segment those interactions out.

Note that MySpace was very early in having these celebrity profiles, which has led to the rise of so-called MySpace celebs like Tila Tequila, Forbidden, etc., whereas I’m not aware of any Facebook celebs emerging ;-)

Maybe this two-way friends with public/private profiles works, but it’s much less elegant than a single “follow” button. In the dual profile version, you end up needing either lots of configuration (what photos to publish, which friends belong in which), or you end up with two distinct pieces of content. This would mean multiple photos, multiple profile content, and two places to do everything. Not attractive, in my opinion.

Conclusion
Ultimately, both approaches have their advantages – the two-way friending model is better at supporting strictly real-life relationships. That ability has obviously led MySpace and Facebook to conquer a lot of real estate and build eyeballs. At the same time, this model requires them to design around the complexity introduced by celebrities, brands, and companies, which are all important folks to have in your ecosystem for long-term monetization as well as mass appeal.

As always, leave a comment with your thoughts! See any other friending models that have advantages?

Want more?
If you liked this post, please subscribe or follow me on Twitter. You can also find more essays here.

Written by Andrew Chen

March 16th, 2009 at 9:00 am

Posted in Uncategorized

Random links for week of March 16th

without comments

Here are some links I’ve posted to my Twitter account over the last week or two. You can follow me on Twitter if you like these! Many are unrelated to work.

Written by Andrew Chen

March 15th, 2009 at 7:13 pm

Posted in Uncategorized

App monetization: Gambit launches, funnel metrics, and ARPU vs “CPM”

without comments

 

New monetization option for app developers launches
Today, my friend Noah Kagan launched a new payments service called Gambit which you can check out here. The focus is on virtual currency-based Facebook/OpenSocial applications, and supports credit cards, mobile payments, and offers-based monetization. He also has a blog and twitter account.

Given the proliferation of these services, I wanted to spend a couple minutes talking through the new monetization funnel for apps and some of the metrics that are being thrown around.

Spreadsheet model
First off, I wanted to quickly share a very simple spreadsheet model which you can download here.

Funnel steps
At a high level, the funnel to track for offers-driven monetization looks something like this:

  • total installs / registered users
  • monthly active uniques
  • daily active uniques
  • daily paypage uniques – how many users get to a page where they can pay or fill out offers?
  • daily lead clickthroughs
  • daily net lead completions (lead completions – chargebacks)
  • daily revenue
  • monthly revenue

From top to bottom, you can see that the focus is around uniques, and how many of them translate to completed offers and ultimately revenue. Of course, many of these transactions will actually end up as direct payment via credit card or mobile, and that is trivial to add to this model – I won’t cover those just for simplicity.

One quick note on leadgen, though, for those who are unfamiliar – essentially, leads are opt-in forms that users can fill out in order to generate virtual currency. This might be subscribing to a Netflix offer, for example, or giving out a real address to get a direct mailing from a university. More about the leadgen industry here. As a result of this construct, users may not have to use credit cards in order to get money to the publisher, which can be great.

As a result of this offer-based monetization, it becomes important to track not only how many people click through to begin filling out a lead, but also how many folks complete leads, and ultimately how much revenue each lead is worth. Different demographics, geographical areas, and lead types generate different amounts of revenue. There are also chargebacks, which happen when leads are rejected for being incomplete or incorrect.

Example numbers
If you plug this into a monetization table, then you can see the flow.

Here are some example numbers derived from games that Noah’s company Kickflip had previously developed and operated – he was comfortable sharing the data that came out of his own apps. The numbers below would represent a large and successful app with millions of actives per month:

 

total installs  15,000,000
monthly active uniques  3,000,000
daily active uniques  450,000
daily paypage uniques  45,000
daily lead clickthroughs  13,500
daily net lead completions  1,890
daily revenue  $5,670.00
monthly revenue  $172,935.00

And of course these numbers are driven by the percentage drop-off at each step. For quick reference, the percentages are listed below and drive the numbers in the table above.

% of installs that are monthly actives 20.00%
% of MAU that are daily actives 15.00%
% of DAUs that visit payments 10.00%
% that click through to leads 30.00%
% that complete leads 15.00%
revenue per lead  $3.00
% chargeback 1.00%
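The whole model is simple enough to sketch in a few lines of Python. One caveat: the listed 15% completion and 1% chargeback rates don’t quite reproduce the table’s 1,890 net completions, which imply an effective net rate of 14%, so that’s what’s used here:

```python
# Funnel sketch using the example figures above.
installs = 15_000_000
mau = installs * 0.20       # 3,000,000 monthly active uniques
dau = mau * 0.15            # 450,000 daily active uniques
paypage = dau * 0.10        # 45,000 daily paypage uniques
clicks = paypage * 0.30     # 13,500 daily lead clickthroughs
net_leads = clicks * 0.14   # 1,890 daily net lead completions (effective rate)

revenue_per_lead = 3.00
daily_revenue = net_leads * revenue_per_lead   # $5,670
monthly_revenue = daily_revenue * 30.5         # $172,935 (30.5-day month)

print(round(net_leads), round(daily_revenue, 2), round(monthly_revenue, 2))
```

The value of laying it out this way is that each conversion percentage becomes a knob you can test against independently.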

Now that we have these numbers, we can start to calculate other metrics.

Let’s define a new term, “ACPM” which stands for “App CPM”
As someone from the online ad industry, I was saddened to hear that the term “CPM” had been co-opted by these app monetization companies to mean something entirely different and weird.

In the app monetization world, this is the definition:

App CPM = (daily revenue / daily uniques to the paypage) * 1000

Recall that this is very different than the standard definition:

Online ad CPM = (daily revenue / daily ad impressions) * 1000

They are certainly not apples-to-apples, even though they are represented in some literature as such. And unfortunately, now that some of the market leaders are using these misguided terms, everyone is following suit. Yuck!

So as you can see, the “app CPM” (which I’ll refer to as ACPM) is defined by uniques to the payments/offer page, rather than by pageviews or impressions. Using the above numbers, we’d get:

ACPM = ($5,670 / 45,000) * 1000 = $126

I’ve been told by multiple people that numbers from $50-$200 are all possible here.

Measuring ARPU
You’ll notice that ACPM has no relation to the overall usage of the product – in fact, you might have $300-$400 ACPMs but still have low revenue, since ACPM is only defined once users hit the payments page. Maybe only a small number of your users do so, or maybe only a small % of users are active at any given time.

To measure how the numbers fit together from top to bottom, instead we’ll have to calculate the ARPU:

ARPU = revenue / total actives

This means that this would include any and all actives, regardless of whether or not they visited the payments page. For the numbers above, they’d translate to:

Monthly ARPU = $172,935 / 3,000,000 = $0.058

On Facebook, I’ve been told from multiple sources that numbers from $0.01 to $0.25 are all reasonable, and that off of the social platforms you’ll find more niche destinations that generate closer to $1 ARPU.
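To make the apples-to-oranges distinction concrete, here is a sketch of both metrics in Python, plugging in the figures above (function names are mine):

```python
def acpm(daily_revenue, daily_paypage_uniques):
    """'App CPM': revenue per thousand *paypage uniques*, not ad impressions."""
    return daily_revenue * 1000 / daily_paypage_uniques

def arpu(revenue, total_actives):
    """Average revenue per user, spread across *all* actives."""
    return revenue / total_actives

print(acpm(5_670, 45_000))                 # 126.0
print(round(arpu(172_935, 3_000_000), 3))  # 0.058
```

Note how the two denominators differ: ACPM only sees the 45,000 users who reached the paypage, while ARPU divides by all 3,000,000 actives, which is why a high ACPM can coexist with low overall revenue.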

Conclusion
Ultimately, it’s very exciting that multiple monetization platforms are getting created in the industry, and that competition will be great for everyone. Gambit is certainly one of many new creative companies that  will come out with great stuff.

At the same time, it’ll be up to publishers to figure out how to squeeze as much monetization as they can from their properties without compromising the user experience. Looking at the numbers above, it’s clear how you can increase both ARPU and ACPM in very systematic ways, but potentially at the cost of retention or engagement within the product.

Ideas or questions? Leave me a comment.

Want more?
If you liked this post, please subscribe or follow me on Twitter.

UPDATE: thanks to Jared Fliesler for correcting a silly mistake in my arithmetic ;-)

Written by Andrew Chen

March 11th, 2009 at 9:00 am

Posted in Uncategorized

Free to Freemium: 5 lessons learned from YouSendIt.com

without comments

Today we have a fantastic guest blog from Ranjith Kumaran on his adventure going from an ad-supported free service to a subscription-based freemium model. Ranjith is the Co-Founder & CTO of YouSendIt.com, a Silicon Valley company that allows businesses and individuals to send, receive, and track digital content securely and easily. Enjoy! -Andrew

Free to Freemium: 5 Lessons Learned
by Ranjith Kumaran

Introduction
A tech reporter recently asked me if YouSendIt.com had made the switch from a free ad-based business model to a subscription-based freemium model “just in the nick of time”. After all, with death chasing every ad-revenue-fueled startup these days, surely we must have been scrambling over the last few months!

The truth is that we got our first paid subscriber at YouSendIt on the night of February 28th, 2006, over three years ago. The company recently passed the 100,000 paid subscriber mark but that first customer was where it all started: the transition from free to freemium. As startup pundits we expect business models to iterate but this particular switch was a thrill-a-minute ride.

So if you’re ready to take the plunge or are still on the fence between free vs. freemium then read on. I’ll highlight five key lessons learned over the last three years as we went from a 100% free model to freemium:

Lesson 1: It’s all about DNA
Lesson 2: Funnels come in all shapes and sizes
Lesson 3: Compound growth is a double-edged sword
Lesson 4: Don’t let pricing psyche you out
Lesson 5: “Boring” things can give you lots of conversion lift

It’s all about DNA
It doesn’t matter how smart your team is or how hard you work; everyone has to want to make the switch from free to freemium. The thesis of our first venture round of investment was to test both models to see how they scaled. But the reality was we already had a significant free business (advertising revenue helped my co-founders and me keep the lights on – sound familiar?), so the lion’s share of the first six months was spent building a team that could keep the viral, ad-impression-generating parts of the business growing. When the subscription service launched and showed great promise (we collected our first payment within 4 minutes of pushing the code live), the business model was changed but many of the team’s mindsets were not. Reconciling these differences was exhausting, but we got there.

Do yourself a favor and pull the band-aid off quickly. If you re-channel all effort into improving conversion and building your brand your subscription business will get out of the blocks much faster. A change in DNA is the hardest thing a company can endure and some don’t; get through it early.

Funnels come in all shapes and sizes
Once you’ve made the switch a number of things will happen:

  • People who don’t believe in paying for web-based services will call you a sell-out. Unsurprisingly, these folks aren’t in your target market. If you provide a valuable service, the majority of your users will stay with you (most for free, and some percentage will subscribe right away). YouSendIt.com’s traffic took a 30% haircut during this process. If we didn’t have anything further down the funnel, this would have been devastating.
  • Expect to see a drastic change in the mix of users you serve going forward. YouSendIt’s business is international (everyone sends files), including geographies that any startup will struggle to effectively monetize showing ads; in general the same geographies also yield weaker subscriber numbers. This pruning of who you serve and how much (by, say, asking for payment) is very, very, common and often more deliberate; it’s a cost-to-serve discussion every web business that thinks beyond customer acquisition will eventually have. Over time we found that the users who were willing to pay for our services attracted similar users to the site.
  • Plan to change the metrics by which you measure your business. Our dashboard went from plotting CPMs, impressions and make-goods to conversion rates, churn, and ARPU. Acquisition cost, cost-to-serve, and lifetime value start to rear their ugly heads. If you want to fully understand your freemium business, learn to love them.
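To make the new dashboard concrete, here is a minimal sketch of how a few of these metrics relate. All numbers and function names are hypothetical, purely for illustration; they are not YouSendIt’s actual figures:

```python
# Back-of-the-envelope subscription metrics (illustrative numbers only).

def lifetime_value(arpu, monthly_churn, gross_margin=0.8):
    """Expected gross profit per subscriber: average lifetime is ~1/churn months."""
    return arpu * gross_margin / monthly_churn

def payback_months(acquisition_cost, arpu, gross_margin=0.8):
    """Months of margin needed to recoup the cost of acquiring one subscriber."""
    return acquisition_cost / (arpu * gross_margin)

ltv = lifetime_value(arpu=9.99, monthly_churn=0.04)        # ~$199.80 per subscriber
payback = payback_months(acquisition_cost=24.0, arpu=9.99)  # ~3 months
```

The rule of thumb implied by these numbers: keep lifetime value comfortably above acquisition cost, and payback well under a year.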

Compound growth is a double-edged sword
Once the freemium engine has run for a while you’ll see that, unlike fluctuations in ad-impressions and CPMs, subscription revenue is very predictable; your shareholders will appreciate this. Step functions in revenue are seen when new products are launched (including up-selling to the current base and convincing more users to subscribe) and new channels into the top of the funnel are created (making our way onto the desktop was a big one). Compounded subscriber growth is very powerful: convert more users in January and you’ll have a chunk of the year’s revenue in the bag, provided you’ve got churn under control; fall behind and revenue shortfall amplifies just as quickly over time.
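The compounding dynamic above can be sketched numerically (hypothetical numbers of my own; each month some fraction churns out and new subscribers come in the top of the funnel):

```python
def project_subscribers(start, new_per_month, monthly_churn, months):
    """Project a subscriber base: each month, churn out a fraction, add new signups."""
    subs = float(start)
    for _ in range(months):
        subs = subs * (1 - monthly_churn) + new_per_month
    return subs

# Converting 100 extra subscribers each month from January onward
# compounds into a meaningfully larger base by year end...
base = project_subscribers(10_000, 500, 0.05, 12)
better = project_subscribers(10_000, 600, 0.05, 12)
# ...while higher churn erodes the very same signups just as quickly.
leaky = project_subscribers(10_000, 500, 0.08, 12)
```

Running this shows both edges of the sword: the `better` scenario pulls ahead month after month, and the `leaky` one falls behind just as relentlessly.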

Don’t let pricing psyche you out
Balancing market penetration and the fear of leaving money on the table is no fun and more than one startup has failed to even launch a paid product because of the pricing hurdle. Here’s a quick and dirty way to put a stake in the ground:

  1. Make a list of your competitors or find adjacent markets / potential substitutes with similar users and use cases. You should already have this list.
  2. Plot the spectrum of all the price-points of their offerings.
  3. Plan to release at least two paid tiers: one at the bottom end of the spectrum that is driven by volume and one at the top that is clearly differentiated by value.

By doing this you can accomplish the following: fill in any market share vs. revenue maximization discussion rat-holes (now you can test both); give customers a way to compare between three offerings (free, a little more, and a lot more; being able to compare is an important part of any purchase decision); feel good that you’ve done some diligence on pricing without prematurely shelling out a lot of cash on market research.

If the above exercise seems unscientific that’s because it is. Your pricing work has just begun: constantly observe the rates at which users move through the funnel at different price-points, use promotions to get buyers off the fence, and re-price as you get more price elasticity data. At YouSendIt we raised prices (yes, it can be done) successfully several times in the early days as we learned more and more about buying behavior.

“Boring” things can give you lots of conversion lift
Conversion lift doesn’t always come from groundbreaking changes in product, offers, or funnel analysis. These days I will look for ten 1% lifts in conversion before one 10% magic bullet; in reality there probably aren’t a lot of 10% lifts left after the first handful. Get into the groove of turning knobs a little at a time, learning, and iterating; you never did this further down the funnel when you were selling ads, so you are likely out of practice. Other mundane things that you haven’t invested in start to get a lot of play: customer service SLAs, quality of service, and even the right terms of service are all areas which can drive conversion. Look for a 1% lift in conversion right now, it’s in there somewhere; then do it again a million times.
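The arithmetic behind stacking small wins is worth spelling out: lifts compound multiplicatively, so ten 1% improvements add up to more than a flat 10%. A quick sketch:

```python
# Each A/B test win multiplies conversion by 1.01 (a 1% lift on top of the last).
combined = 1.0
for _ in range(10):
    combined *= 1.01

# Ten 1% lifts compound to roughly a 10.5% total improvement, not 10%.
total_lift_pct = (combined - 1) * 100
```

Small knobs, turned often, beat waiting around for the magic bullet.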

Conclusion
With any luck there are enough examples above to convince you that switching from free to a freemium business model can be done with a little perseverance and a lot of belief. I’ve experienced the rush of going from 0 loyal users, to thousands, hundreds of thousands, and millions a few times in my career. But there is a different kind of satisfaction you and your team will get when your business starts to amass paid subscribers: users who believe the things your company has worked so hard to create are good enough to pay for. This is the ultimate validation of your efforts.

These topics will be covered in more detail at a couple of panels (one on Understanding Freemium Business Models, another on Customer Acquisition in a Down Economy) I’m helping to organize in the coming weeks. If you’d like to participate as an attendee, panelist, or moderator or if you’re simply interested in hearing about more lessons learned (the hard way), please follow the Twitter feed I’ve set up.

Written by Andrew Chen

March 9th, 2009 at 8:00 am

Posted in Uncategorized

Twitter links for March 4, 2009

without comments

Here are some links I’ve posted to my twitter account over the last week or two. You can follow me on Twitter if you like these! Many are work unrelated.

Enjoy!

Written by Andrew Chen

March 4th, 2009 at 10:00 am

Posted in Uncategorized

Bay Area entrepreneurs that I’m following on Twitter

without comments

Lots of Bay Area startup chatter on Twitter
As I’ve written about many times, I subscribe to the often-repeated notion that the SF Bay area is a one-of-a-kind place for entrepreneurs. There are lots of advantages, but a great one is the constant casual chatter of other entrepreneurs and CEOs here.

I quickly combed my tweets for today and made a list of active users of Twitter who also happen to be founder/CEOs in the companies around here. Almost all of them are running VC-backed startups, and they are among the best folks to learn from. I’m purposely leaving out obvious choices like @ev and conference types that get enough visibility as it is ;-)

The quick list of folks you might follow on Twitter
Here we go:

 

Albert Lai http://twitter.com/albertsupdates Kontagent
Bob Ippolito http://twitter.com/etrepum Mochi Media
Chris Lunt http://twitter.com/chrislunt Nombray, Friendster, etc.
Daniel Ha http://twitter.com/danielha Disqus
Darian Shirazi http://twitter.com/darian314 Fwix, Facebook
David Weekly http://twitter.com/dweekly PBWiki
Gabe Rivera http://twitter.com/gaberivera Techmeme
Garry Tan http://twitter.com/garrytan Posterous
Jon Aizen http://twitter.com/yonajon Dapper
Jonathan Abrams http://twitter.com/abrams Socializr, Friendster
Joyce Kim http://twitter.com/joycekim Soompi
Lane Becker http://twitter.com/monstro Get Satisfaction, Adaptive Path
Max Levchin http://twitter.com/mlevchin Slide, Paypal
Merci Grace http://twitter.com/merci Gamelayers
Ming Yeow Ng http://twitter.com/mingyeow MrTweet
Peter Pham http://twitter.com/peterpham Billshrink, Photobucket
Philip Kaplan http://twitter.com/pud Adbrite, Fucked Company
Sachin Rekhi http://twitter.com/sachinrekhi (stealth startup), Anywhere.fm
Scott Rafer http://twitter.com/rafer Lookery, MyBlogLog
Siqi Chen http://twitter.com/blader Serious Business (Friends for Sale)
Ted Rheingold http://twitter.com/tedr Dogster

If you’re not on the list, then I didn’t see you use Twitter in the last day ;-)

And last but not least, you can follow me on Twitter here.

Am I missing anyone? Just comment if you have recommendations.

Written by Andrew Chen

March 3rd, 2009 at 7:19 pm

Posted in Uncategorized

5 warning signs: Does A/B testing lead to crappy products?

without comments


Above: Hollywood sequels follow from risk-averse design decisions, like the widely panned Godfather Part 3

The dangers of the metrics-driven design process
Many readers of this blog are expert practitioners of metrics-driven product development, and with this audience in mind, my post today is on the dangers of going overboard with analytics.

I think that this is an important topic because the metrics-driven philosophy has come to dominate the Facebook/OpenSocial ecosystem, with negative consequences. App developers have pursued short-term goals and easy money – leading to many copycat and uninspired products.

At the same time, it’s clear that A/B testing and metrics culture serve only to generate more data-points, and what you do with that data is up to you. Smart decision-making by entrepreneurs is still required to reach successful outcomes. (Thus, my answer to the title question is that no, A/B testing does NOT lead to crappy products, but poor decision-making around data can absolutely lead to it)

So let’s talk about the dangers of being overly metrics-driven – here are a couple of the key issues that can come up:

  1. Risk-averse design
  2. Lack of cohesion
  3. Quitting too early
  4. Customer hitchhiking
  5. Metrics doesn’t replace strategy 

Let’s dive in deeper…

#1 Risk-averse design
The first big issue is that when you design for metrics, it’s easy to become risk-averse. Why try to create an innovative interaction when something proven like status/blogging/friends/profiles/forums/mafia/etc already exists? By copying something, you’re more likely to converge quickly to a mediocre outcome, rather than spending a ton of effort potentially creating something bad – but of course this also eliminates amazing, ecstatic design outcomes as well.

This risk-averse product design can lead to watered down experiences that combine a mish-mash of features that your audience has already seen elsewhere, and done better too. So while it’s an efficient use of effort, it’s unlikely that your experience will ever be a great one. It’s a recipe for mediocrity. 

Risk aversion is responsible for a whole bunch of bad product decisions outside of the Internet industry as well: Why do Hollywood sequels get made, even though they are usually much worse than the original? Why do companies continually do “brand extensions” that dilute the value of their brand position? The reason is that it’s an efficient thing to do, and it’s pretty easy to make some money even if the end product is not that great. But it’ll hurt in the long run, since these products will inherently be mediocre rather than great.

In my opinion, the only way to avoid this is to never get lazy about design, and to always take the time to create innovative product experiences. Of course you’ll always have parts of your product which will borrow from the tried-and-true, yet I think it’s always important that the core of the experience is differentiated and compelling.

#2 Lack of cohesion
As hinted above, the next issue is that A/B tested designs often create severe inconsistency within an experience. The bottoms-up design process that results from lots of split testing is likely to produce many local effects, which may overrule global design principles.

Here’s a thought experiment to demonstrate this: Let’s say you tested every form input on your website, with different labels, fonts, sizes, buttons, etc. You’re likely, if you picked the best-performing candidate, to have wildly different looking forms across the site. While it may perform better, it also makes the experience inconsistent and confusing.

Ultimately, I think resolving this has to do with striking a balance between global design principles and local effects. One great way to do this is to split out the extremely critical parts of your product funnel to be locally optimized, and keep the rest of the experience the same. For a social gaming site, the viral loop and the transaction funnel should be optimized separately, whereas the core of the game experience should be very internally consistent.

#3 Quitting too early
Another way to get to uninspired products is to quit too early while iterating an experience because of early test data. When metrics are easy to collect on a new product feature, it’s often very tempting to launch a very rough feature and use the initial metrics to judge the success of the overall project. And unfortunately, when the numbers are negative, there can be a huge urge to quit early – this is a very human reaction to wanting to not waste a bunch of time on something that’s perceived to fail.

Sometimes a product requires features A, B, and C to work right, and if you’ve only done A, it’s hard to figure out how the entire experience will work out. Maybe the overall data is negative? Or maybe it inspires dynamics that go away once all the features are bundled? Interim data is often just that – interim. People are great at extrapolating data, but sometimes the right approach is just to play out your hand, see where things go, and evaluate once the entire design process has completed.

#4 Customer hitchhiking
A colleague of mine once used the term “customer hitchhiking” to describe how it’s easy to follow the customer on whatever they want to do, rather than having an internal vision of where YOU want to go. This can happen whenever the data overrules internal discussion and resists interpretation, because it’s so uncompromising as hard evidence. The important thing to remember, of course, is that the analysis is only as good as the analyst, and it’s up to the entrepreneur to put the data into the context of strategy, design, team, and all the other perspectives that occur within a business.

Today, I think you see a lot of this customer hitchhiking whenever companies string together a bunch of unrelated features just to please a target audience. This reminds me of what is often called the “portal strategy” of the late-90s. Just combine a bunch of stuff in one place, and off you go. The danger of that, of course, is that it leads to an incoherent user experience, company direction, and numerous other sources of confusion.

In the Facebook/OpenSocial ecosystem, of course, this manifests itself as companies that have many unrelated apps. You can dress this up as a “portfolio” or a “platform” but at the same time, it can be a recipe for crappy product experiences.

#5 Metrics doesn’t replace strategy
What do you think it would be like to write a novel, one sentence at a time, without thinking about the broader plot? I’m sure it’d be a terrible novel, and similarly, I bet that testing one feature at a time is likely to lead to a crappy product.

Ultimately, every startup needs to decide what they want to do when they grow up – this is a combination of entrepreneurial judgement, instinct, and strategy.

Every startup has to figure out how big the market is, they have to deliver a compelling product, and they need a powerful marketing strategy to get their services in front of millions of people. Without a long-term vision of how these things will happen, an excessive amount of A/B testing will surely lead to a tiny business.

To use a mountaineering analogy: Metrics can be very helpful in helping you scale the mountain once you’re on top of the right one – but how do you figure out whether you’re scaling the right peak? Analytics are unlikely to help you there.

Conclusions
My point on this – nothing is ever a silver-bullet, and as much as I am an evangelist for metrics-driven approaches to startup building, I’m also very aware of the shortcomings. In general, these tools are great for optimizing specific, local outcomes, but they need to be combined with a larger framework to reach big successes.

Ultimately, quantitative metrics are just another piece of data that can be used to guide decision-making for product design – you have to combine this with all the other bits of information to get it right.

Agree or disagree? Have more examples? Leave me a comment! 

Want more?
If you liked this post, please subscribe or follow me on Twitter.

Written by Andrew Chen

March 2nd, 2009 at 9:00 am

Posted in Uncategorized

Which startup’s collapse will end the Web 2.0 era?

without comments

The Silicon Valley machine is still going, for now

Here in Palo Alto, the Silicon Valley machine is still going strong – entrepreneurs are still starting companies, angels and VCs are still investing, and engineers are still coding. In the last 3 months, I’ve had half a dozen friends get their companies financed, which is great. Certainly things are more difficult, but deals are still happening, and there are still a lot of companies growing.

But I’ll say that I’m still quite worried, because of my belief that the worst has yet to come. There is a large group of 2004-2007 self-described Web 2.0 companies which haven’t hit bottom yet, and I’d like to discuss this possibility in this post. I hope this post will spawn useful discussions for entrepreneurs thinking about where we are in the boom-bust cycle.

So first, some thoughts about Web 2.0 and how that category has played out:

Web 2.0 isn’t cool anymore
In the 2004-2007 era, many companies in the “Web 2.0” space received a tremendous amount of funding. You can debate what the term means, but generally I would classify them as companies having some of the following qualities:

  • Consumer internet destinations (or widgets!)
  • User generated content and activities
  • Advertising-based revenue models
  • Appealed strongly to the early adopter audience

Yet ultimately, it turned out that most of these startups didn’t work out as real businesses. The reasons hinged primarily on the difficulty of monetizing user-generated content with ads, which I’ve written about before. As a result, to be a VC-backable business, you either need to be a top 50 internet property (good luck on that!) or have a well-defined monetization backend that probably isn’t advertising.

My guess is that the # of companies describing themselves as Web 2.0 has dramatically decreased over the last year, as these business model problems have been rapidly discovered and popularized.

And yet, now the difficulty of course, is that there are dozens of Web 2.0 startups funded in the 2004-2007 timeframe that have a meaningful amount of cost, and not enough revenue. It’s these startups that I’m worried about.

Venture financings as a lagging indicator for the economy
The problem is, VC financings tend to be a lagging indicator for the economy. We haven’t yet seen the established startups who are trying to raise Series B or C rounds get turned down by the market. The reason is that it’s too early, and these companies’ failures will lag the downturn in the economy by a year or possibly more.

Lots of smart companies and entrepreneurs did a great job of getting their financings done last year before the economy really fell apart. As a result, these lucky ones have cash in the bank right now and can continue iterating on their model to hopefully figure things out. But if they haven’t figured things out and the economy is still bad, then ruh-roh, that’s no good. But we’re unlikely to see the effects of these sick companies with dysfunctional business models until later this year.

Who are these startups that might be in trouble? Let’s discuss:

Characteristics of startups in danger
I’m not going to call anyone out ;-) But I think that there are several startups out there which are now in the precarious position of either finding their model ASAP, or collapsing.

These companies might include the following characteristics:

  • Started in 2004-2007, and self-described as Web 2.0 startups
  • Have grown to lots of headcount, let’s say >40 people, which can burn through a $5M Series A in under a year
  • Substantial traffic, let’s say >5 million uniques per month, which drives up the cost structure
  • Ad-based business models, which rely on big sales teams calling up agencies (whose pockets are now reduced, if not closed)
  • Low-context advertising inventory, with low CPM in sectors like communication and entertainment
  • Mature internet sectors, where the upside is now established, and acquirers are less likely to pay up as a result
  • Not a leader in their category, where they may be ranked #5 or worse, and investors may be unlikely to keep supporting their growth
  • Media content hosting, where they allow users to upload, host, and stream content without charging a dime, which also drives up the cost structure

I will leave it as an exercise to the reader to pick out companies that might fit the bill.

The point is, I think this cycle is going to get a lot worse, and a downslide will likely be caused by one or a number of 2004-2007 vintage classically Web 2.0 companies hitting the skids. I am hoping that the slope down will be gentle.

Want more?
If you liked this post, please subscribe or follow me on Twitter.

Written by Andrew Chen

February 23rd, 2009 at 8:30 am

Posted in Uncategorized

10% off for Flash Gaming Summit, March 22nd in San Francisco

without comments

I wanted to pass this along – I will be moderating a panel at the conference, agenda and conference details below.

Click here for a 10% off of registration for readers of this blog.

More detail:

Conference description

Flash Gaming Summit is a one day conference dedicated to fostering the growth and success of the Flash games community. The conference will bring together leaders in the Flash game space to share industry insights and strategies for monetization, distribution and successful game development. The Mochis, a Flash games award show, will take place at the conference to recognize the best games of 2008. Flash Gaming Summit is sponsored by Adobe, Kongregate and Nonoba, and is organized by Mochi Media.

Agenda

Time Session
8:45 Doors Open – Registration & Breakfast
9:45 Opening Keynote
10:00 Session 1 – Designing and Building Successful Multiplayer Games   

Flash game developers are increasingly building more immersive and engaging multiplayer experiences for their users. What makes a multiplayer game successful? Our panel will share their experiences and best practices on designing successful multiplayer games that engage gamers and keep them coming back.

  • Moderator: Ranah Edelin, Raptr
  • Chris Benjaminsen, Nonoba
  • Daniel James, Three Rings
  • Jim Greer, Kongregate
  • Paul Preece, Casual Collective
11:00 Session 2 – Getting Eyeballs – Marketing and Distributing Flash Games   

How important is it (or not) to get distributed on game portals? Where are the game plays coming from? Hear from a panel of portals and game developers on the best strategies and tactics to market, distribute and publicize your game to reach gamers.

  • Moderator: Jeremy Liew, Lightspeed Venture Partners
  • Chris Hughes, FlashGameLicense.com
  • John Cooney, Armor Games
  • Richard Fields, MindJolt Games
  • Matt Spall, Gimme5games
12:00 Lunch
1:30 Session 3 – The Future of Flash   

Hear from Adobe about the latest developments in Flash 10, CS4, AIR, Flex, Catalyst and more. Adobe Evangelist Ryan Stewart will discuss and answer questions about how these improvements will impact the game development community.

  • Ryan Stewart, Adobe
2:30 Session 4 – Monetization and Business Models for Flash Games   

There are multiple models emerging in the industry for monetizing games. How should developers choose which business models to pursue? Our expert panel will share their experiences and thoughts on various business models and their potential.

  • Moderator: Andrew Chen, Futuristic Play
  • Adam Caplan, Super Rewards
  • Kate Connally, AddictingGames
  • Jameson Hsu, Mochi Media
  • Kenny Rosenblatt, Arkadium Games
3:30 The Mochis Flash Game Awards   

Recognize the best Flash games of 2008 with The Mochis Award Show!

4:30 Session 5 – What Makes a Flash Game a Hit?   

Game developers share insights on how to design and build hit games that appeal to a mass audience. Panelists will discuss topics including game design, game testing/tuning, the effectiveness of creating sequels, and metrics for measuring success.

  • Moderator: Brian Robbins, Fuel Industries
  • Edmund McMillen, creator of Coil (IGF Finalist)
  • Joel Breton, AddictingGames
  • Sean Cooper, SeanTCooper.com
  • Stephen Harris, Ninja Kiwi
5:30 – 6:00 Session 6 – Social Game Design 101: How To Make Flash Games That Social Networkers Want to Play   

What are the key elements to designing successful Flash games for reaching gamers on social networks? This talk includes a deep dive into game design and viral mechanisms that you must use to succeed on social networks such as Facebook and MySpace.

  • Bret Terrill, Zynga Games
6:00 Join us for drinks at the Official Flash Gaming Summit After Party: License to Play

 

Written by Andrew Chen

February 22nd, 2009 at 3:33 pm

Posted in Uncategorized

Warren Buffett’s bio “The Snowball” and lessons for startups

without comments


The Snowball by Alice Schroeder

I <3 Warren Buffett
One of the recent books I read during my blogging vacation was The Snowball, a comprehensive biography of Warren Buffett. I’ve always enjoyed reading the shareholder letters, and have read other books about him, but at nearly 1,000 pages, this book is particularly detailed and goes into a lot of unique stories.

In many ways, Buffett’s world is diametrically opposed to the startup world. He specializes in boring industries, doesn’t worry much about products, and has extremely long timeframes. Yet I took a lot out of reading about his experiences, and thought I’d share some thoughts about the startup world:

  1. Enduring businesses take a long time to build
  2. Who cares what other people think? Boring businesses can win big 
  3. Access to money can be a huge competitive advantage
  4. Success begets success

Let’s talk through a couple of these ideas:

1. Enduring businesses take a long time to build
I live in the heart of Silicon Valley, in Palo Alto, and there is an inescapable pull of “get rich quick” in the culture here. Every time there is a new trend, I quickly get calls and emails from people telling me about a new “gold rush” forming, and how I don’t want to miss out. In many ways, getting visibility on these trends is why I’m here, yet at the same time, it demonstrates a real breakdown in the business culture of Silicon Valley to be focused on quick flips.

Right now, the current “gold rush” trend is probably around mobile and social gaming, particularly the amount of revenue that small developer teams are generating on Mob Wars clones. Or iPhone apps. And before that, there was a huge gold rush around Facebook apps, and before that, Web 2.0 anything. The real pinnacle of this is an exit like YouTube, which generated over a billion dollars before it became profitable, and in less than 2 years.

Yet at the same time that this froth exists, there’s an undeniable fact that enduring, profitable, standalone businesses have still taken 5-8 years to build. Yes, it’s very cheap to get started and run some experiments, but to scale into a huge business takes real time and capital. Take a look at Facebook, for example, which clearly still has many years ahead of it before it cracks the code on its business model and scales into something huge. Even Google took from 1996/7 to 2004 to get big – can you really do it faster?

I think the Warren Buffett view of the world is in a decade-or-more type timeframes, not in months. It’d probably be worthwhile for everyone to think about their startups and their eventual businesses that way, rather than just trying to get rich as soon as possible.

2. Who cares what other people think? Boring businesses can win big 
The other striking thing about the Buffett model of investing is that he puts money into a ton of companies which ultimately sell pretty boring things – bricks, chocolates, jewelry, etc. These are evergreen industries that will be around 5 years from now or 50 years from now.

From my personal observation, the startup world doesn’t think like this, and the focus is on the hot new thing. There’s always a lot of excitement about technologies or products, regardless of whether or not they address mainstream consumer needs, and thus a lot of market risk gets injected into every company. From the last couple years, trends like podcasting or RSS or P2P/BitTorrent got white hot, but ultimately didn’t go anywhere. Right now these hot trends might be new platforms (like mobile, SNs, etc.) or data portability, or status messages, or lots of other candidates. Maybe these will all go somewhere, but maybe they won’t.

And woe to the startup which is not working on a sexy problem, but instead working on something boring ;-)

I think there’s a lot of great opportunities in consumer internet that are now considered boring or forgotten. For example, doing things like domaining, ad arbitrage, forum sites, SEO sites, etc. Or similarly, there are now a number of proven consumer interactions, like video sites and toolbar companies, which are not likely to ever get funded now. All of these boring sectors, given the right twist or the right team, might actually lead to big successes, since a significant portion of the model has been proven out, but not as many smart people are working on them. Certainly you won’t have Techcrunch and the geek press banging on your door, but for some, that’s part of the appeal.

Instead of asking “what are the hot areas right now?” the question to ask might be, “what are the overlooked areas right now?”

3. Access to money can be a huge competitive advantage
One of the big competitive advantages of Warren Buffett’s investment model is that he discovered he could take money out of his cash-rich businesses, particularly in the insurance world, and then use it to invest in more cash-rich companies. This allows him to have a lot more flexibility in reacting to the market, and to jump in only when he thinks companies are cheap. (Interestingly enough, the first chapter of the book starts out with everyone writing him off in the late 90s for not investing in tech companies, saying that he’s lost his touch, etc.)

For startups, my interpretation is pretty simple – ultimately, startups get money via two methods:

  • Customers give them money
  • Or, investors/bankers/VCs give them money

And these two methods are constantly in flux, depending on whether it’s more fashionable to sell stories to investors or whether the focus is more on revenue. And especially right now, the latter holds true. Having access to this capital is huge, for obvious reasons – there’s been a ton of discussion about why it’s a great time to start a company, because it’s easier to hire, office space is cheaper, etc. so I won’t go and repeat all of this.

The important part, I think, is to gear your company to maximize bringing cash in the door – you need to make sure you’re in a super hot space where investors are willing to throw money at you, and ideally that you’re also making a ton of money. Oftentimes it seems like companies end up going in one direction or the other – either they are going big, growing traffic, and raising money at crazy valuations, or they are boring and make money slowly but surely ;-) Not sure why it seems so mutually exclusive, but either way, have a strategy ;-)

4. Success begets success
Last thought on something that occurred to me while reading The Snowball – in the early years, you could really see that Warren Buffett succeeded just by pure skill alone. He invested in undervalued companies, and made money on them over time. But as his fame grew, it also became clear that there was a secondary impact from him investing – not only did his capital help, but the brand of Warren Buffett investing was enough to drive up the value of his shares. Pretty nice.

In the Silicon Valley startup world, I think it’s unavoidable that many of the successes here also hinge on luck as much as skill. But there’s also some secondary effects of having the right people involved, whether it’s founders or investors or other – you end up seeing that the social proof of having successful people involved has a self-perpetuating angle to it. Success begets success.

No wonder that the best venture capital firms experience serial persistence – the best ones stay at the top, year after year, even though this is typically not the pattern in other investment categories. As an entrepreneur, this tells me that it’s important to get the best people involved with your company, at all levels of the business, from investors to employees – and the more you are kicking ass, the easier it gets.

Your thoughts?
If you guys have thoughts on the above and/or on Warren Buffett or the book, please leave me a comment! Thanks.

Want more?
If you liked this post, please subscribe or follow me on Twitter

Written by Andrew Chen

February 22nd, 2009 at 3:17 pm

Posted in Uncategorized

Returning from my blogging vacation + Twitter links

without comments

Blogging vacation was fun
After a good couple months not blogging, I’m going to try to get back in the groove of things ;-) It’ll be hard since the format of writing long essays tends to suck up a lot of time, but I’m going to try to get back to writing something substantive 1-2 times a week. Ideas and suggestions for topics are welcome! Please comment or write me an email.

Twitter was a “good enough” replacement for blogging
Of course, even while I was off on blogging vacation, I was still tweeting away on links and other short comments. It was a great way to still publish content but without the constraints of doing something serious in the long-form blog format. If you haven’t followed me on Twitter, please click here to do so! And be sure to email me at voodoo [at] gmail and let me know what you’re working on these days.

Twitter links (lots of them)
I haven’t posted a link dump from Twitter lately, so I’ll do that now. Enjoy!

If you want to stay current, follow me at Twitter here.

Written by Andrew Chen

February 22nd, 2009 at 2:02 pm

Posted in Uncategorized

Great iPhone preso on AppStore retention curves, pricing strategies, engagement metrics, etc.

without comments

This was so good I had to repost it (via pinchmedia).

Here are some of the great graphs included in the presentation:

  • Downloads per unit time versus price
  • # of downloads needed to hit Top 25 and Top 100 list
  • Retention curves for free apps
  • Retention curves for paid apps
  • Retention curves per app category
  • Engagement for free vs paid and by category
  • Advertising revenue tradeoffs for free versus paid
  • Cumulative application runs since first use, by decile

Slides below:

Written by Andrew Chen

February 19th, 2009 at 10:09 am

Posted in Uncategorized

How to create a profitable Freemium startup (spreadsheet model included!)

without comments

freemium

How to get the spreadsheet
First, here’s the spreadsheet:
Click to download Freemium spreadsheet

You can open it in Excel and fiddle around with the numbers. The rest of this essay is a discussion of it!

Background on this discussion
Last year, the stupendous Daniel James co-hosted a talk with me on Lifetime Value metrics for subscription and virtual goods-based items. You can see the video/outline for the talk, Daniel’s commentary, and a mindmap of the talk (scroll to the bottom of the post).

As part of the talk, we worked on a spreadsheet model for freemium businesses that we didn’t get enough time to work on – so I’m going to cover it in this post! If you haven’t gotten the spreadsheet yet, here’s another link to it.

Here are the questions this post (and the spreadsheet) is meant to answer:

  • What are the key factors that drive freemium profitability?
  • How do freemium businesses acquire customers?
  • What are the drivers of customer lifetime value?
  • How do all these variables interact?

If these questions interest you, keep reading :-)

Article summary (for people with attention deficit!)
To become profitable using a freemium business model, this simple equation must hold true:

Lifetime value > Cost per acquisition + Cost of service (paying & free)

Said in plain English, the lifetime value of your paying customers needs to be greater than the cost it took to acquire them, plus the cost of servicing all users (free or paying).
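This inequality can be expressed as a minimal break-even check in Python. The numbers in the example are illustrative placeholders, not figures from the spreadsheet:

```python
def is_profitable(ltv_per_payer, payers, cpa_per_user, service_cost_per_user, total_users):
    """Freemium break-even check: total lifetime value of paying users
    must exceed acquisition cost plus the cost of serving ALL users."""
    lifetime_value = ltv_per_payer * payers
    total_cost = (cpa_per_user + service_cost_per_user) * total_users
    return lifetime_value > total_cost

# Illustrative: 50 payers worth $100 each, 1000 total users acquired
# at $3 each and served at $1.50 each over their lifetime
print(is_profitable(100, 50, 3.00, 1.50, 1000))  # 5000 > 4500 → True
```

Notice that the cost side scales with total users while the revenue side scales only with payers – that asymmetry is the whole game in freemium.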

There are lots of different factors that influence profitability, including:

  • Cost per acquisition
    • Efficiency of media (traffic sources, CTR, impressions)
    • Signup funnel conversion %
    • Average viral invites sent out
  • Lifetime value
    • Retention metrics
    • Revenue mix

By understanding these subcomponents, you can tweak your model and figure out what metrics need to be hit in order to reach profitability.

Now for all the gory details… 

User acquisition
The first tab in the spreadsheet covers the issue of paid user acquisition – many subscription businesses mostly rely on AdWords and ad network buys in order to acquire users. For freemium businesses, particularly ones that are social apps, there’s often a word of mouth or viral component, which we’ll cover in a second.

I’ve written extensively on paid user acquisition in the past, particularly the blog post: How to calculate cost-per-acquisition for startups relying on freemium, subscription, or virtual items biz models.

At a high level, here are some of the things you’ll want to track:

  • How are you paying for traffic? (CPM/CPA/CPC)
  • What do the intermediate metrics look like? (impressions/CTR/etc)
  • How does your signup funnel perform?
  • How much are you spending for the users you end up registering?

Basically, you end up with a media buying matrix that looks something like this:

Source | Ads bought | CTR   | Clicks | Signup % | Upload pic | Users | Cost       | CPA
Google | 1M         | 0.50% | 5,000  | 20%      | 50%        | 500   | $5,000.00  | $10.00
Ad.com | 20M        | 0.10% | 20,000 | 10%      | 50%        | 1000  | $20,000.00 | $20.00
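The arithmetic behind a row of that matrix is just the funnel multiplied out. A hedged sketch (the rates come straight from the table above):

```python
def cost_per_acquisition(impressions, ctr, signup_rate, activation_rate, spend):
    """Walk the funnel down: impressions → clicks → signups → activated
    users, then divide total ad spend by the users you actually got."""
    users = impressions * ctr * signup_rate * activation_rate
    return spend / users

# The Google row: 1M impressions, 0.5% CTR, 20% signup,
# 50% upload a pic, $5,000 spent → ≈ $10.00 per acquired user
print(cost_per_acquisition(1_000_000, 0.005, 0.20, 0.50, 5_000))
```

Each multiplier in the chain is a lever: double the signup rate and your CPA halves, which is why funnel design carries so many plus marks in the table below.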

and these are some factors worth thinking about, in terms of increasing or decreasing the cost per acquisition (CPA):

Type                | Options                         | Importance
Source of traffic   | Ad networks, publishers         | ++
Cost model          | CPM, CPC, CPA                   | +
User requirements   | Install, browser plug-in, Flash | +++++
Audience and theme  | Horizontal vs vertical          | ++
Funnel design       | Landing page, length, fields    | +++
Viral marketing     | Facebook, Opensocial, email     | +++++
A/B testing process | None, homegrown, Google         | +++++

As previously mentioned, lots more detail here.

Funnel
Once you get your users registered onto the site, then there’s the question of how to convert them to paying customers, and whether there are any viral effects. The model covered in the spreadsheet has a separate tab, called “Funnel,” which covers these issues.

At a high level, here’s what is happening:

  • Each time period, a bunch of newly registered users come in (both acquired through ads or through viral marketing)
  • Some % of these users convert into paying users
  • Some % of these users then send off viral invites
  • Revenue is generated by building up a base of paying users
  • Cost is generated through building up a base of active users (paying or not!)
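The per-period loop above can be sketched directly. All the rates here (pay rate, invite rate, invite conversion, revenue, cost) are placeholder assumptions, not numbers from the spreadsheet:

```python
def simulate_funnel(periods, paid_new, pay_rate, invite_rate, invite_conv,
                    rev_per_payer, cost_per_active):
    """Per-period loop: new users arrive (paid + viral), a slice converts
    to paying, another slice sends invites that become next period's
    viral signups; revenue scales with payers, cost with ALL actives."""
    active = payers = viral_new = profit = 0.0
    for _ in range(periods):
        new_users = paid_new + viral_new
        active += new_users
        payers += new_users * pay_rate
        viral_new = new_users * invite_rate * invite_conv  # next period's viral cohort
        profit += payers * rev_per_payer - active * cost_per_active
    return profit

# Placeholder rates: 100 bought users/period, 10% pay, half send invites,
# 20% of invites convert, $10/payer/period, $0.50/active/period
print(simulate_funnel(2, 100, 0.10, 0.5, 0.20, 10, 0.50))
```

Even this toy version shows the tension: the free base (the `active` term) accumulates cost every period, whether or not anyone new pays.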

To me, this tab captures the “art” side of building a freemium business. Persuading people to pay for your service and invite their friends requires creativity, product design, and lots of metrics. Josh Kopelman of First Round Capital had a great tweet recently on this topic where he says:

@joshk: Too many freemium models have too much free and not enough mium

As Josh notes, the key is to create the right mix of features to segment out the people who are willing to pay, but without alienating the users who make up your free audience. Do it right, and your conversion rates might be as high as 20%. Do it wrong, and your LTV gets very close to zero. This is why premium features have to be built into the core of a freemium business, rather than added in at the end. You want to be right at the balance between free and ‘mium!

Just remember that during the time period that it takes you to figure out your funnel, viral loop, and everything else, all the free users you’re building up create cost in your system.

Businesses that aren’t eyeball businesses shouldn’t act like eyeball businesses :-)

Anyway, the product design issue (and the resultant conversion rates) is a deep topic, and here are some other related posts (by others and myself):

User retention
Of course, it’s not enough to just acquire paying users, you need to retain them. If you have a super high churn rate, then at best you’ll be stuck on a revenue treadmill (doing lots of work but flat revenue and no profitability). At worst, it’s easy to lose a ton of money, if the CPA exceeds the LTV. I wrote about this topic earlier in my essay When and why do Facebook apps jump the shark (which also has a spreadsheet).

How sensitive are retention numbers on lifetime value? Here’s a quick thought experiment: Lifetime value is the sum of the revenue that a user might generate from their first time period to when they quit the service. Think of it as an infinite sum that looks like:

LTV = rev + rev*R + rev*R^2 + rev*R^3 + …

where rev is the revenue that a user produces during a time period, and R is the retention rate between time periods.

You can simplify this, based on the magic of infinite series:

LTV = 1/(1-R) * rev

So let’s say that you make $1 per time period, and you have 1000 paying users. Let’s compare the difference between a 50% retention rate and a 75% retention rate:

At a 50% retention rate:
LTV = 1/(1-0.5) * $1 * 1000 = $2000
At a 75% retention rate:
LTV = 1/(1-0.75) * $1 * 1000 = $4000

This means that in this case, by increasing your retention rate by half (relatively speaking), you actually DOUBLE your revenue. And even more when you reach “killer app” status and attain retention rates around 90%. This is a big lever.

At a 90% retention rate:
LTV = 1/(1-0.90) * $1 * 1000 = $10,000
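The closed-form calculation above is a one-liner; here is a sketch that reproduces all three examples (the constant-retention assumption is the same simplification noted below):

```python
def lifetime_value(retention, rev_per_period, paying_users=1):
    """Closed form of the geometric series rev + rev*R + rev*R^2 + ...
    = rev / (1 - R), assuming a constant per-period retention rate R < 1."""
    return rev_per_period / (1.0 - retention) * paying_users

# The examples above: $1/period, 1000 paying users
for r in (0.50, 0.75, 0.90):
    print(f"{r:.0%} retention → ${lifetime_value(r, 1, 1000):,.0f}")
```

Because of the 1/(1-R) term, LTV is hypersensitive near R = 1: each point of retention is worth more than the last, which is why the jump from 75% to 90% is bigger than the jump from 50% to 75%.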

Note that retention rates are generally not fixed numbers – they usually get better the longer a cohort of users stays with you! I’m using a fixed retention number to set a lower bound, and for mathematical simplicity.

OK, so the biggest factors affecting retention boil down to three things:

  • Product design
  • Notifications (optimize them, of course)
  • In success cases, saturation effects

For more reading on product design, I’d recommend Designing Interactions from IDEO. For notifications, there’s been a lot of great work in the database and catalog marketing world, for example Strategic Database Marketing. Tesco, Harrah’s, and Amazon are all companies well-known for their strategic use of personalization and customer interaction. For saturation effects, as previously mentioned, my old-ish article When and why do Facebook apps jump the shark.

Cashflow (and ad-reinvestment)
The tab “cashflow” in the spreadsheet captures a couple different issues:

  • Paid user acquisition is usually an upfront expense, whereas the revenue comes in over time
  • Your revenue per paying user depends on a mix of revenue sources
  • You pay a “cost of service” across all users, whether they are paying or not – be careful that this cost of service is not too high!!

Some more detail on the above:

In a model with paid user acquisition, it takes time to break even. You pay for a user upfront, but then the revenue stream trickles in over several time periods. As a result, you tend to be cashflow negative for some number of time periods, which then goes positive later. This effect is compounded further if your model specifically depends on viral acquisition, because you don’t get significant viral user inflow until your userbase becomes large.
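This upfront-spend, trickle-in-revenue dynamic can be sketched as a cumulative cashflow series. All the inputs below are illustrative assumptions:

```python
def cumulative_cashflow(periods, new_users, cpa, pay_rate, rev_per_payer, retention):
    """Pay acquisition cost upfront each period; each cohort's payers then
    generate revenue every period, shrinking by the retention rate."""
    cash, total, payers = [], 0.0, 0.0
    for _ in range(periods):
        total -= new_users * cpa                       # upfront ad spend
        payers = payers * retention + new_users * pay_rate
        total += payers * rev_per_payer                # revenue trickles in
        cash.append(round(total, 2))
    return cash

# Placeholder inputs: 100 users/period at $1 CPA, 10% pay $5/period, 80% retention
# Negative for the first few periods, then crosses into the black
print(cumulative_cashflow(5, 100, 1.00, 0.10, 5.00, 0.80))
```

With weaker inputs (say, a lower pay rate or retention) the series never crosses zero, which is exactly the “never break even” case described below.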

This is why you get a graph like this, where you’re unprofitable for a while, then break even:

Note that it’s also VERY possible that they never cross, and the entire business is unprofitable. Just play around with the numbers in the spreadsheet and you can see how easily that can happen!

In terms of average revenue per paying customer, what you typically find is that your customer base is made up of multiple segments. You can price them differently through different tiers of subscription (Free versus Pro versus Business) or with Pay-as-you-go or with many other models.

Ultimately you can roll this all up into a single number, which is referred to in the spreadsheet as revenue per paying customer. You can also divide the revenue by the number of total users (paying or not) in order to get the average revenue per user (ARPU).

As for the cost of service, your mileage will vary. The main thing is, try not to do anything too expensive for free users! After all, given that typical conversion rates are <10%, and subscription services are typically <$20/month, the following thought experiment is insightful:

Out of 1000 users, let’s say 50 pay $10/month. This generates $500/month
This means that the costs must not exceed $500/month for 1000 users, or $0.50/user

Plus then you have to factor in the acquisition cost! (Probably a couple bucks per user, so thousands of bucks per 1000 users).
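The thought experiment above, done in numbers (all figures are the illustrative ones from the text, plus an assumed ~$2 CPA):

```python
users, payers, price = 1000, 50, 10.00
monthly_revenue = payers * price                  # 50 payers × $10 = $500/month
service_ceiling = monthly_revenue / users         # max serving cost per user/month
acquisition_cost = users * 2.00                   # assumed ~$2 CPA per registered user
print(service_ceiling)    # 0.5
print(acquisition_cost)   # 2000.0
```

Fifty cents per user per month is not a lot of headroom for bandwidth, storage, and support – hence the warning about eyeball businesses.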

Lifetime value
And finally, the last tab on the spreadsheet calculates lifetime value. Basically you figure out the number of payments that a paying user will generate over their lifetime, referred to in the model as “user periods.” (I arbitrarily took this out to 20 time periods, but you can do something different) This is then multiplied by revenue per paying user, to get the total dollar figure generated.

More important for the paid acquisition model is to do the LTV calculation not for paying users, but for all registered users (paying or free). Doing this then lets you figure out if you can profitably arbitrage traffic via ad buying. This is done using the same method detailed in the above paragraph, but using total user numbers rather than just paying users. Then you compare this LTV number with the effective LTV that you get from buying users and then factoring in their viral effects (as shown in the Funnel tab).
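One hedged way to sketch that all-registered-users calculation: spread the paying-user LTV across everyone via the conversion rate, with an optional viral factor as a crude stand-in for the Funnel tab’s effects (the numbers are placeholders):

```python
def blended_ltv(conversion_rate, ltv_per_payer, viral_factor=0.0):
    """LTV per registered user (free + paying): paying-user LTV scaled by
    the conversion rate, with an optional viral factor crediting each
    bought user with the extra users they bring in for free."""
    return conversion_rate * ltv_per_payer * (1.0 + viral_factor)

# Illustrative: 5% of registered users pay, each worth $60 over their
# lifetime, and each bought user drags in 0.3 more users virally
print(round(blended_ltv(0.05, 60, 0.3), 2))  # 3.9
```

If that blended figure exceeds your CPA from the media buying matrix, you can profitably arbitrage traffic; if not, every ad dollar loses money.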

Model improvements
Of course there are tons of things in this model of freemium businesses that ought to be improved!

In particular, a couple ideas:

  • Benchmarks of real world data for comparison
  • More granularity for user acquisition for affiliate versus ad buys versus other
  • Saturation rates in the viral model
  • Better model for retention rate other than one fixed number
  • More sophisticated accounting of cost per user (infrastructure/employees/etc.)
  • Model in multiple revenue sources including transaction fees, for Paypal versus Offerpal versus In-store cards versus mobile
  • Better intelligence around ad-buying, including ramping up when profitable, slowing down when unprofitable
  • etc.

More on funnels, retention, viral, etc.
If you liked this article, please subscribe to my RSS feed! I will be writing more when I’m officially off my blog break ;-)

You can also see my other essays, check out some book recommendations, or follow me on Twitter.

Written by Andrew Chen

January 19th, 2009 at 9:00 am

Posted in Uncategorized