
I really enjoyed reading this, partly just because it felt like a good TLDR for The Lean Startup, but Facebook-centric. But I feel like there are a couple (and probably more) pretty important unanswered questions:

- Is this really how good ideas/products/experiences happened at Facebook and other successful companies? What I've seen in Facebook-lite companies is that without relevant experience/wisdom/discipline in reading data that is a poor proxy for measuring real-life events, the hypothesis can be so wrong that the company just loses its appetite for more iteration/testing in a given area. How many tests can you run and get bad results before you give up on the whole thing? What if you started with a better idea of where the target was? I think the way most companies mitigate this is just having slower, more thoughtful, more experienced people make sure they're basically testing in the right general area with reasonable hypotheses.

- What is the north star metric if you're iterating that fast? If it's growth in usage of the product, how do you weigh the cost of getting people killed by one of your experiments against pretty good growth on the other side? Is everybody cool with that? For people who would do this on a social network which affects real people, is this also how they raise their kids or govern relationships with their friends and family? No way, right?

(In reading through Nate's comments, he's touched on some of these things in a more thorough way and I see that you've already responded)


You are indeed right.

One thing that this post didn't really cover is the equivalent of "gradient descent": how do you predict which directions are likely to be good? Another important thing it didn't really cover is the importance of having a "chain of metrics", with fast-moving metrics (e.g. time spent) acting as leading indicators for slow-moving metrics (e.g. profitability).
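
As a toy illustration of the chain-of-metrics idea (all the numbers below are made up), one sanity check is whether the fast-moving metric actually leads the slow-moving one before trusting it as a proxy:

```python
import numpy as np

# Hypothetical weekly aggregates: "time spent" is the fast-moving candidate
# leading indicator, "profit" is the slow-moving metric we actually care about.
time_spent = np.array([3.1, 3.3, 3.2, 3.6, 3.8, 3.7, 4.0, 4.2, 4.1, 4.4, 4.6, 4.5])
profit = np.array([1.0, 1.1, 1.0, 1.1, 1.3, 1.4, 1.3, 1.5, 1.6, 1.5, 1.7, 1.8])

def lagged_correlation(leading, lagging, lag):
    # Correlate this week's leading metric with the lagging metric `lag` weeks later.
    return np.corrcoef(leading[:-lag], lagging[lag:])[0, 1]

for lag in (1, 2, 3):
    print(f"lag={lag} weeks: corr={lagged_correlation(time_spent, profit, lag):.2f}")

# If the correlation holds up at a useful lag, the fast metric can act as a
# leading indicator for experiments whose real payoff (profitability) is slow.
```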

This post also didn't really touch on the idea of human judgement, and the importance of one human (e.g. a PM or Engineer) being able to simulate the judgement of another (e.g. the CEO, or a user).

I'm considering writing a follow up post about some of these ideas, and maybe stirring in a bit of GPT along the way.

As always, there are more important ideas than can fit in one blog post, so it's partly a question of deciding where to draw the boundaries to end up with something neatly self-contained.


When Carol Dweck wrote Mindset, I really liked it and bought it for a bunch of friends and family. But something was nagging at me. I think part of it was her listing all the successful people who didn't have a growth mindset, but a fixed one. And it kind of made me think that even though I intuitively bought into her argument, it's not clear that it correlated well with actual success/getting things done/etc. (There might be really good reasons for this, such as "intuitive, fixed mindset" thinkers tending to have a clearer message, and so maybe being more charismatic/persuasive, even if their ideas have a higher failure rate or they're just not as rational/smart.)

I kind of feel the same way about this framing. Totally makes sense, would be weird to find a different philosophy in a large, successful company that pushed a lot of code. And yet, I don't really believe this is how all (or even most) small, innovative companies start and get to their early milestones.

Anyway, really enjoying your writing, Rob, thanks for sharing it. Would love to catch up over another beer some time!


I definitely don't think this is the "entirety" of how successful products are built, but I think it's a key part of how pretty much all complex things are "grown", once they become too complex for a single person to fully understand.

Notably, this approach doesn't really work prior to product market fit, because at that point you don't really know what it is that you are trying to optimize towards. At that point you are likely to be doing something closer to the process I describe in my "great but painful" post.

I guess I need to write a follow-up blog post :-)


I agree with much of what you have to say here... and yet.

I'm actually pretty mad at the "move fast and break things" ethos right now. I feel it has done (and is still doing) harm to the world, and made Silicon Valley more jaded / less ethical. All things being equal, just messing around until you find something that works is often a good strategy. However, when there's a power imbalance, the situation changes profoundly. One bad decision by a programmer on a popular project could have life-altering consequences for millions of people. I don't think developers should be paralyzed trying to think through every decision, but I do think we have a responsibility to be careful when we wield so much power. Folks who buy into "high speed stupidity" too deeply can convince themselves that it's for the best that they make no attempt to anticipate or mitigate the potential harms of their product. The amortized cost is better this way. The damages are acceptable, because "sometimes you have to break a few eggs." That's easy to say when they aren't your eggs.

So, yeah, this is an important principle to understand and use, but it's gotta be tempered with humility, common sense, and compassion.


Yeah, I think you are right. The question is: how do we do that?

High speed stupidity is clearly powerful. Do we have something comparably powerful that doesn’t have these issues, or a good way of “aligning” high speed stupidity that doesn’t dampen its benefits too much?


Let me try rephrasing your question like this:

How do we balance anticipating the consequences of our actions vs. testing them in reality?

Is that a fair substitution?

I think that makes the discussion a little clearer. The problem with anticipation is that it is costly and unreliable. As hard as you try, reality is still going to surprise you sometimes. On the flip side, anticipation is really valuable if it can prevent you from doing something dumb in reality where the costs are high.

In the tech industry, anticipation plays two critical roles. The first one is, where do you start your semi-random search? "High speed stupidity" can be powerful, but not if you're working on the wrong problem entirely, or starting off very far from the ideal solution.

The second one is, what are the consequences of this decision? Here we run into the reliability problem. Like a good Bayesian, you must discount your guess by how likely it is to come true, and since reality holds nearly infinite possibilities, you're probably wrong. However, you must also weigh the consequences of being right: if the outcome you anticipate is really good or really bad, then you should intentionally steer towards or away from it. It would be foolish to proceed randomly.

So, basically, it's important that we think long and hard about what problem to work on and how to work on it. From there, "high speed stupidity" may be a good tactic, but only when the consequences of those low-effort decisions are low. When you have a choice that _matters_, it's important to use human judgment. That means with every decision you should anticipate how big the consequences might be, which determines whether or not it's acceptable to make a quick and lightweight decision.
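
To make that weighing concrete, here's a toy sketch (the probabilities, impacts, and threshold are all invented numbers, not calibrated to any real product decision):

```python
def expected_impact(outcomes):
    """outcomes: list of (probability, impact) pairs; large negative impact = serious harm."""
    return sum(p * impact for p, impact in outcomes)

def worst_case(outcomes):
    return min(impact for _, impact in outcomes)

def safe_to_move_fast(outcomes, harm_threshold=-10.0):
    # Move fast only if neither the probability-weighted impact nor the plausible
    # worst case crosses the harm threshold; otherwise slow down and apply judgement.
    return expected_impact(outcomes) > harm_threshold and worst_case(outcomes) > harm_threshold

ui_tweak = [(0.5, 2.0), (0.5, -1.0)]            # low stakes either way -> iterate quickly
ranking_change = [(0.95, 3.0), (0.05, -500.0)]  # rare but severe downside -> deliberate

print(safe_to_move_fast(ui_tweak))        # True
print(safe_to_move_fast(ranking_change))  # False
```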

That's the part tech companies sometimes get wrong. They think "move fast and break things" is a license not to consider / care about the consequences of their actions. Power dynamics make this much harder to do well, because the consequences of a bad decision are typically much smaller for the company than for some of its users. If Facebook launches a feature that depresses people or encourages harassment, they might get some bad press for it, but some users might literally die. That's why it's so important for tech companies to have good relationships with their users and actually talk to them about major product changes.


This is a good point. High speed stupidity lets you optimize something fast, but it's important to make sure you are optimizing the right thing, and it's often less clear how to do that. Indeed this is something that frequently goes wrong: you see evolutionary dead ends, companies doing bad things when optimizing for money, tech companies optimizing for engagement over user health, etc.

This is an important space and I'm toying with writing a blog post about some of the thoughts I've got.

I'm quite excited about the idea of using GPT to construct better metrics, by having GPT judge whether the product seems "good for the user" in some kind of user-calibrated way. But I'm unsure whether I'm ready to write a blog post about that before I've tried it out in anger. Maybe I'll write a blog post with a lot of disclaimers about how tentative it is.


I'm surprised to hear you suggest using GPT to judge if a product is "good for the user." How would a text prediction engine have the capacity to do that? I'm confident it could generate plausible text to justify any product decision you like, and of course a human could judge if they agree or disagree with that reasoning. But isn't that just a recipe for self deception? Lots of bad arguments sound good if you don't verify the research methodology, and in this case _nobody_ would be doing actual research. GPT could write a research report, but only by inventing one out of thin air.


You are right that you can get an LLM to generate a convincing argument for why pretty much anything is good - but that's not what I'm proposing.

What I'm thinking about is using GPT to "simulate the CEO" or "simulate a user".

In an ideal world, you'd have the CEO, or a user, look carefully at a load of samples of product behavior and see if they appear to be good according to some criteria. E.g. "is this feed informative", "do these ads contain likely scams". Right now we mostly don't do this because it's so much more expensive and slow than just finding some simple metric to count - particularly if you want the actual CEO or user to be making the judgements.

I'm guessing it wouldn't take that many examples of real CEO/user judgements to be able to fine-tune an LLM to be a pretty good proxy - maybe even few enough that it would work well as a GPT few-shot prompt.

This approach wouldn't be perfect of course, but it should allow significantly more nuanced metric-guided optimization than the simple counting approaches we use today.
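
Roughly, a minimal sketch of this kind of thing might look like the following (assuming the OpenAI Python client; the prompt wording, labels, and sample feed items are all made up for illustration):

```python
from openai import OpenAI  # assumes the OpenAI Python client; any chat-capable LLM would do

client = OpenAI()

# A few real judgements from the CEO (or a panel of users), used as few-shot examples.
labelled_examples = [
    ("Local council votes on new bike lanes, details inside.", "informative"),
    ("You won't BELIEVE what this celebrity ate for breakfast!", "not informative"),
]

def judge_feed_item(item_text: str) -> str:
    """Ask the model to imitate the human judgements and label a new sample."""
    lines = ["Label each feed item as 'informative' or 'not informative', "
             "matching the style of the labelled examples.", ""]
    for example, label in labelled_examples:
        lines += [f"Feed item: {example}", f"Label: {label}", ""]
    lines += [f"Feed item: {item_text}", "Label:"]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model is available
        messages=[{"role": "user", "content": "\n".join(lines)}],
    )
    return response.choices[0].message.content.strip().lower()

# Hypothetical usage: judge a sample of feed items and report the share rated informative.
sample_feed_items = ["New study on sleep and memory", "10 shocking photos of ..."]
labels = [judge_feed_item(item) for item in sample_feed_items]
print(sum(label.startswith("informative") for label in labels) / len(labels))
```

The aggregated labels then act as a more nuanced metric to optimize against than raw counts.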


I see. That makes more sense, and I agree that with a fair amount of work you might be able to get ChatGPT to generate realistic simulacra of user feedback.

That raises another problem, though: ChatGPT isn't using the product to solve a problem. It has no needs, it has no intentions, and (for now) it can't interact with a UI. So, at best, it could generate text that's like the feedback users give for similar products.

You might be better off collecting feedback in the usual way, and then asking ChatGPT to summarize. Then you're at least actually getting input from real users of your product!


Fantastic piece. Thanks, Rob. Provocative and illuminating; I enjoyed your insights, particularly around the role of iteration in the context of metric-driven development. Love your examples and the reference to the hippy movement. And yet I'm not sure I agree with the conclusions. In particular, I see two large issues, or conundrums. One is that what is locally optimal or right (according to a "local" goal, such as an individual organization's) may not jibe well, if at all, with a larger (societal? community? species?) goal or definition of what constitutes "improvement." I wonder how we work with that. Perhaps there is an analogy to how societies attempt (or hope) to manage individual morality and individual aims amidst broader societal goals and wants.

The second issue, or question, I see is this: is faster necessarily better? As you point out, there can be a lot of suffering -- transition, adjustment, etc. -- between any two points A and B of improvement on these metrics, and so I think we need to take that upheaval and suffering into account. And perhaps more deeply, I wonder whether we know what the greatest (deepest) aim really is. Is it greater social connectivity? Is the species model of evolution -- slow, relatively speaking -- outdated and out of sync, or is there something positive and crucial hidden in not iterating at breakneck speed?
