@andrewchen


Why are we so bad at predicting startup success?

Startups and bad predictions
One of my favorite reads this year was Nate Silver’s The Signal and the Noise which has the subtitle “Why so many predictions fail, but some don’t.” It covers a ton of different topics, from weather to politics to gambling, and I couldn’t help but read it with a startup/tech point of view.

After all, the industry of technology startups is all about prediction- we try to predict what will be a good market, what will be a good product, as we “iterate” and “pivot” on our predictions. And of course the business of venture capital is even more directly about knowing how to pick winners- especially the seed and Series A investments.

And yet, we’re all so bad at predicting what will work and what won’t. I’ve written about my embarrassing skepticism about Facebook, but hey, I’m just a random tech guy. The folks whose job it is to professionally pick winners, the venture capitalists, aren’t doing very well either. It’s been widely noted that the venture capital asset class, after fees, has lagged the public markets- you’d be better off buying some index funds.

Startup exceptionalism = sparse data sets = shitty prediction models
One of the most challenging aspects of predicting the next breakout startup is that there are so few of them. It’s been widely discussed that 10-15 startups a year generate 97% of the returns in tech, and each one seems like a crazy exception. And as an industry we get myopically focused on each one of them.

Watch Ben Horowitz elaborate on the sobering stats, starting at the 38:00 mark:

With these kinds of odds, our brains go crazy with pattern-matching. When a once-in-a-generation startup like Google comes around, for the next few years after that, we all ask, “OK, but do you have any PhDs on the team? What’s the ‘PageRank’ of your product?” And now that we have AirBnb, we’ve gone from being skeptical of designer-led companies to being huge fans of them. With so few datapoints, the prediction models we generate as a community aren’t great- they’re simplistic and are amplified with the swirl of attention-grabbing headlines and soundbites.

These simplistic models result in generic startup advice. As I wrote about earlier, there’s a whole ecosystem of vendors, press, consultants, and advisors who go on advice autopilot and give the same advice regardless of situation. Invest in great UX, charge users right away, iterate quickly, measure everything, launch earlier, work long hours, raise more money, raise less money – all of these ideas are helpful to complete newbies but dangerous when applied recklessly to every situation.

We all know how to parrot this common wisdom, but how do we know when we’re hearing good versus bad advice? If you think about the idea that there’s 10-15 companies every year who are breakouts, how many people really have first-hand experience making the right decisions to start and build breakout companies?

Hedgehogs and pundits
I was reminded of my dislike of generic startup advice when, in his book, Nate Silver writes about hedgehogs versus foxes and their approaches to generating predictions – here’s the Wikipedia definition of the concept:

[There are] two categories: hedgehogs, who view the world through the lens of a single defining idea and foxes who draw on a wide variety of experiences and for whom the world cannot be boiled down to a single idea.

Silver clearly identifies as a fox, and contrasts his approach to the talking head pundits that dominate political talk shows on TV and radio. For the pundits, the more aggressive, contrarian, and certain they seem, the more attention-grabbing they are. Rather similar to what we see in the blogosphere, where people are rewarded for writing headlines like “10 reasons why [hot company] will be killed by [new product].” Or “Every startup should care about [metric X]” or whatever.

This hedgehog-like behavior is amplified by the fact that there’s always pressure to articulate a thesis on what’s going on in the market. People in the press are always trying to spot trends or boil down complex ideas, and investors are constantly asked, “What kinds of startups are you investing in? Why?” And entrepreneurs are always forced to fit their businesses into the broader trends of the market, to find sexy competitors, all in the chase to find a simple narrative that describes what’s going on.

The solution to all of this isn’t easy- to be a fox means to draw from a much broader set of data, to look at the problem from multiple perspectives, and to reach a conclusion that combines all of those datapoints. There’s been some great work on the science of forecasting by Philip Tetlock of UPenn, who’s set up an open contest to study good forecasting here. There’s an interview with him at Edge.org here and a video describing some of his academic research below:

Worth watching.

My personal experience  
Over my 5 years in Silicon Valley, the biggest lesson I’ve learned from trying to predict startups is calibration. They talk about it in the video above, but the short way to describe it is to be careful with what you think you know versus what you don’t. I’ve found that my area of expertise where I can make good decisions is actually pretty narrow- I’ve done a bunch of work in online ads, analytics, consumer communication/publishing, and I think my judgement is pretty good there, but it’s much shakier outside of that area.

When I do an analysis, I try to match my delivery with how much I think I know- and these days, it means my conclusions sound a lot more tentative than the younger, brasher version of myself when I first came to SF. I’ve also tried to be diligent in avoiding “advice autopilot” – if I meet with entrepreneurs and find myself saying the same thing multiple times, then I try to refine the idea to take into account the specifics and nuances of that product. It’s easier and lazier, but less helpful, to just say the same thing over and over again.

Be the fox, not the hedgehog.

PS. Get new updates/analysis on tech and startups

I write a high-quality, weekly newsletter covering what's happening in Silicon Valley, focused on startups, marketing, and mobile.

My Quora answer to: How do you find insights like Facebook’s “7 friends in 10 days” to grow your product faster?

I recently answered a question on Quora and am sharing it on my blog:

How do you find insights like Facebook’s “7 friends in 10 days” to grow your product faster?

Here are my thoughts below:

Why make a rule like this?
It’s important to remember the goal of making a pithy rule like “7 friends in 10 days” – it’s to help your team drive towards a clear objective. I’m sure “10 friends in 12 days” works well too, as does “5 friends in 1 day” – you just pick something that makes sense and is easily memorable.

Anyway, here are some thoughts about how to make something useful:

Defining the success metric
First, you need a way to evaluate how “successful” a user is, based on their behaviors. You might define this based on something like:

  • days they were active in the last 28 days
  • revenue from purchases in the last 28 days
  • content uploaded in the last 28 days
  • … or whatever else you want to define.

How do you figure out the right evaluation function? You just have to pick one, based on what makes sense for your business. There’s no one-size-fits-all answer here- you need to tailor this based on what makes your product work. In Facebook and Twitter’s cases, since they are ad-based models, they care a lot about frequency and engagement.
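As a concrete sketch, an “active days in the last 28 days” style metric might look like the snippet below. This is purely illustrative – the function name, the fixed reference date, and the window are my own assumptions, not anything from Facebook or Twitter:

```python
from datetime import date, timedelta

def success_metric(activity_dates, today=date(2024, 1, 28), window_days=28):
    """Count the distinct days a user was active in the trailing window."""
    cutoff = today - timedelta(days=window_days)
    return len({d for d in activity_dates if cutoff < d <= today})

# A user active twice on Jan 27 and once on Jan 1 counts as 2 active days
print(success_metric([date(2024, 1, 27), date(2024, 1, 27), date(2024, 1, 1)]))
```

Swapping the body to sum purchase revenue or count uploads gives you the other variants from the list above; the shape of the function stays the same.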

Exploring the data
Once you have a way to evaluate the success of a user, then you want to grab a cohort of users (let’s say everyone who’s joined in the last X days) and create a row of data for each user. Include the success metric, but also include a bunch of other stats you are tracking- maybe how many friends they have, how much content they’ve created, whether they’ve downloaded the mobile app, maybe how many comments they’ve given, or received, or anything else.

Eventually you get a row like:
success metric, biz metric 1, biz metric 2, biz metric 3, etc…

Once you have a bunch of rows, you can run a couple of correlations and see which things tend to correlate with the success metric. And obviously the whole point of this is to formulate a hypothesis in your head about what drives the success metric. The famous caveat here is that fire engines correlate with house fires, but that doesn’t mean that fire engines CAUSE house fires.

Running the regression
In some cases, it might be obvious that a particular metric correlates more strongly with your success metric than anything else. That helps you along. But if you want to get more formal, then you can do the kind of regression that David Cook describes.

The usual problem I’ve seen for startups is that there’s often not enough data, and too many variables, to be able to generate a really strong statistically significant model. And you can’t really tell your growth team “OK guys, active days is driven by friends, posts, likes, and 20 other factors. Let’s increase them.” Not very inspiring. So instead you’re just looking for something simple that explains enough of the variation in success to rally your team behind it.

Verifying your model
After you’ve found the model that works for you, then the next step is to try to A/B test it. Do something that prioritizes the input variable and increases it, possibly at the expense of something else. See if those users are more successful as a result. If you see a big difference in your success metric, then you’re on to something. If not, then maybe it’s not a very good model.
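One simple way to check whether the lift you see is real rather than noise is a permutation test- shuffle the group labels and see how often chance alone produces a difference as big as yours. The per-user numbers below are invented:

```python
import random

# Success metric per user (made up); treatment got the friend-prioritizing change
control   = [3, 5, 2, 6, 4, 3, 5, 4]
treatment = [6, 8, 5, 9, 7, 6, 8, 7]

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Reshuffle the pooled users many times and count how often a random split
# yields a mean difference at least as large as the observed lift
random.seed(0)
pooled = control + treatment
hits, n_iter = 0, 10_000
for _ in range(n_iter):
    random.shuffle(pooled)
    perm_t = pooled[:len(treatment)]
    perm_c = pooled[len(treatment):]
    if sum(perm_t) / len(perm_t) - sum(perm_c) / len(perm_c) >= observed:
        hits += 1

p_value = hits / n_iter
print(f"observed lift = {observed:.1f}, p ~= {p_value:.4f}")
```

A tiny p-value with a meaningful lift says the model is worth rallying around; a big one says go back to the data.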

“Branding” your model
Finally, once you’ve explored the data, run some regressions, and verified that your model works- then you have to be able to explain it to other people. So make it dead simple to talk about, repeat it over and over, and generally simplify it to the point where a lot of your growth product roadmap is focused on moving the metric up.


I got a startup pitch via Snapchat, here’s the story

I’ve recently been asking my Twitter followers to add me on Snapchat, so I can build up a bigger addressbook there and have a more engaging experience. Even though my audience is skewed, it’s a way to attempt to break through into becoming an activated user. If you aren’t an activated user, social products can lack meaning, as I wrote about previously here.

To my surprise, after a few days, I got sent a URL to http://andrewmeetus.com, which turned out to be a new Polish team working on a local + social mobile app. Huge props for the cold snapchat pitch! I met them a few weeks later in Palo Alto, heard about their new product Nearbox, and congratulated them on their creative way to get my attention.

Last thing- feel free to add me on Snapchat, my username is andrewchen. Send me whatever!

