Everyone agrees that online platforms have a problem with free speech. They just can't agree on whether we need more of it or less of it.
But maybe Free Speech is the wrong way to think about the problems that online platforms have, and the real problem is that speech needs to flow in better ways.
It’s tempting to think of free speech as being a dial that you can turn up or down. Turn it down to zero and you get the approved opinions of the Chinese Communist Party or BigCorp. Turn it up to 11 and everyone has a personalized feed of crazy, telling them to drink bleach and hate everyone. In this model, all you have to do is find the right setting for the dial.
Indeed, for many kinds of content this “turn the dial” approach works pretty well. For sex and violence, platforms like Facebook (where I worked in the Integrity org) make a somewhat arbitrary decision about how sexy or violent something can be before it isn’t allowed, build unavoidably noisy systems to detect such content, and make a somewhat arbitrary decision about the trade-off between false positives (e.g. taking down innocent baby photos) and false negatives (e.g. keeping up active shooter videos). People complain a bit on the edges, but by and large it works remarkably well.
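To make the trade-off concrete, here is a toy sketch of the dial. A noisy detector scores each post, and the platform picks a removal threshold; lowering the threshold misses fewer violations but takes down more innocent posts. All the scores, labels, and threshold values below are invented for illustration, not real platform data.

```python
# Toy posts: (violation score from a noisy detector, whether it actually violates).
# Both values are made up for illustration.
posts = [
    (0.95, True), (0.80, True), (0.70, False), (0.60, True),
    (0.40, False), (0.30, True), (0.20, False), (0.05, False),
]

def takedown_errors(threshold):
    """Count both kinds of mistake for a given removal threshold."""
    false_positives = sum(1 for score, bad in posts if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in posts if score < threshold and bad)
    return false_positives, false_negatives

for threshold in (0.25, 0.50, 0.75):
    fp, fn = takedown_errors(threshold)
    print(f"threshold={threshold}: {fp} innocent posts removed, {fn} violations missed")
```

Moving the threshold only shifts mistakes between the two columns; no setting eliminates both, which is why the decision ends up being a somewhat arbitrary policy call.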
Where this approach doesn’t work well is controlling the spread of ideas.
Let’s start with a fictitious example of how ideas can spread badly.
Alice writes an article saying “Eating Cheese Prevents Leprosy”. She isn’t meaning to deceive people - she’s just really bad at statistics. She shares it with some of her pro-dairy friends who re-share it to show their loyalty to dairy. A few layers of shares later, the post is going viral, reshare-optimized ranking algorithms are putting it top of everyone’s feeds, and the dairy industry is using ads to boost distribution. Ad-funded news sites see that the topic is trending and quickly produce copycat articles, including some that say cheese makes you immortal, or immune to vampires. Millions of people, seeing similar arguments from many sources, assume it to be true, switch to cheese only diets, and get admitted to hospital with cheese overdoses.
A day later the original pro-cheese article reaches the anti-cheese community. The community only consists of 5 people, but they are very loud and each have a thousand fake accounts. They write posts arguing that eating cheese is a pro Darth Vader idea. A few online influencers, wanting to distance themselves from Darth Vader, join in the pile-on. The Russians decide that cheese is a good wedge issue to divide America and amplify the pile-on. Alice’s employer hadn’t realized that eating cheese was now a taboo idea, but it seems everyone else thinks it is, so they quickly fire Alice to avoid PR problems.
This isn’t a real example, but it’s very nearly a real example. This is pretty much how social media works, and it’s terrible.
So what went wrong here?
The problem wasn’t that Alice’s original article should have been blocked by her social media platform. Maybe eating cheese actually does prevent leprosy and further scrutiny would have confirmed that important discovery. Or maybe investigating the flawed cheese-leprosy connection would have revealed something valid.
The first thing that went wrong was that Alice’s original idea spread too fast, without waiting to be validated. The second problem was that, by putting it in everyone’s feeds, apparently from multiple sources, the platforms created a false consensus effect that caused readers to think it was accepted truth. Essentially the same thing then happened with Alice’s firing: the argument against her spread too fast without scrutiny, creating a false consensus belief that she needed to be fired.
We also have a load of other problems that contribute to this mess. Journalists who are rewarded for saying things that are interesting rather than things that are right. Fake accounts and disproportionate engagement that make online behavior a poor proxy for what real people think. “Open and Connected” spaces that take away the ability for different groups to peacefully enforce internal cultural norms without fighting each other.
Really the whole thing is a mess and it’s amazing humanity hasn’t set itself on fire by now. But the problem isn’t free speech - it’s the way information flows between people.
So what might better speech flow look like?
Maybe we can get inspiration from in-person organizations. Inside a well-functioning in-person organization, problems are typically dealt with through nested groups and privacy. If Alice has a new idea, she starts by discussing it privately with someone she knows. If that goes well then she can present her (likely refined) idea to a small private group, or test it experimentally. If that goes well, then the idea gets presented to increasingly big groups as it gets increasingly validated.
And indeed organizations don’t have to be in-person to operate this way. Lots of remote organizations running on platforms like Slack distribute information in roughly this manner, with much less chaos than social media. What matters isn’t that these organizations are in-person, but that they have structure, gatekeepers, and hierarchy.
So maybe we have a second dial we can turn to choose how “structured” our information flow should be, similar to the “free speech” dial. It’s tempting to think that the “structure” and “free speech” dials are the same, but they are pretty different. You could have a very structured space with extreme free speech, where anyone can say whatever taboo idea they like in their small group, and those taboo ideas will flow up to the next level group if they get lower level support.
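The nested-groups mechanism can be sketched as a simple escalation rule: an idea starts in the smallest group and is only promoted to the next, wider group when enough members of the current group endorse it. The group names, sizes, and endorsement thresholds below are all hypothetical, purely to make the structure concrete.

```python
from dataclasses import dataclass

@dataclass
class Group:
    name: str
    size: int                  # how many people see ideas at this level
    endorse_threshold: float   # fraction of endorsements needed to promote

def propagate(idea: str, endorsements: list[int], groups: list[Group]) -> str:
    """Walk an idea up the hierarchy of nested groups.

    endorsements[i] is how many members of groups[i] endorsed the idea.
    Returns the name of the widest group the idea reached.
    """
    reached = groups[0].name   # an idea always starts in the smallest group
    for level, group in enumerate(groups[:-1]):
        support = endorsements[level] / group.size
        if support < group.endorse_threshold:
            break              # not enough support: the idea stops spreading
        reached = groups[level + 1].name   # promoted to the next, wider group
    return reached

groups = [
    Group("trusted friends", size=5, endorse_threshold=0.6),
    Group("local community", size=100, endorse_threshold=0.3),
    Group("whole platform", size=1_000_000, endorse_threshold=0.1),
]

# 4 of 5 friends endorse (promoted), but only 20 of 100 community
# members do, short of the 0.3 threshold, so it stops there.
print(propagate("Eating cheese prevents leprosy", [4, 20, 0], groups))  # → local community
```

Note that the free-speech dial never appears in this sketch: nothing is forbidden at the bottom level, and what varies is only how far an idea travels before it needs validation.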
So maybe “social town square” platforms like Facebook and Twitter would benefit from having more of this kind of structure - from feeling more like a workplace?
Maybe they would. Except so far nobody has found a good way to make this work. Most people don’t want their personal social communication to feel like BigCorp, and if platform companies give people a product they don’t like then they will leave (unlike BigCorp, where management decides on your platform).
So what happens next?
Maybe someone comes up with a better product idea that changes the world (I’m working on some myself)? Maybe our culture adapts to be able to cope better with borderless communication? Maybe people retreat into the arms of structured organizations in order to get more trustworthy information. But I do hope we move on from thinking we can solve this problem by turning the free speech dial.
Got thoughts? Hit me up in the comments…