Thoughts on Section 230

If you're outside the USA and don't know what "Section 230" is: in short, it's the piece of US legislation (Section 230 of the Communications Decency Act of 1996) that means social networks (and other online publishers/platforms) aren't legally liable for the content their users post.

It has some limitations that everyone can easily agree on. If somebody posts illegal pornography, for example, the platforms have to remove it. There are, however, still a great many things that fall into a grey area based on subjective opinions. It's commonplace to hear people from all over the political spectrum calling for somebody to be banned from Twitter for something they wrote.

While we can all likely point to some people we would rather didn't have a voice and a platform, it's nigh impossible to define what is sufficiently controversial to warrant censure, at least not in any way long-lived enough that we would want to codify it in legislation.

So I'd like to turn your attention away from Section 230; frankly, I think it's probably fine as is. Before we move on, though, I want to draw out one nugget: Section 230 treats the platform as a passive actor.

The Algorithms

At the end of the day, social media platforms, and most news sites, are trying to sell advertising. If they can demonstrate many users spending a lot of time on their site, they can sell more advertising at a higher price. So how do they get you to stay on, and engage with, their sites?

On the news sites, we see an awful lot of clickbait. Let's remember, though, that those news sites are typically presenting content written by their own stable of journalists. They are, in fact, liable for what they publish and can be sued for libel.

On the social media platforms, we see content promotion. Social media platforms have poured vast resources into presenting you with just the right content to keep you on the site. They don't produce any of that content; instead, they scrape the slurry of text, images, and videos that people are posting, trying to find something that aligns with content you've engaged with in the past.

To make sure you really understood that last sentence: they're not finding content you like, they're finding content you engage with. It could be content that makes you angry, or fearful, or anxious. We tend to have stronger reactions to the negative than to the positive. The traditional news outlets have known this for generations; "fear sells", as the saying goes.
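To make that concrete, here's a minimal sketch of what engagement-optimized ranking looks like. It's purely illustrative: real ranking systems are proprietary and vastly more complex, and every name and weight below is an assumption of mine, not anyone's actual code.

```python
# Purely illustrative sketch of engagement-optimized ranking.
# Real platform rankers are proprietary and far more complex;
# all names and weights here are assumptions for demonstration.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    predicted_clicks: float   # model's estimate you'll click/tap
    predicted_dwell: float    # estimated seconds you'll linger
    predicted_replies: float  # estimated chance you'll reply


def engagement_score(post: Post) -> float:
    # Nothing in this objective asks "will the user enjoy this?"
    # Anger and anxiety drive clicks, dwell time, and replies
    # just as effectively as delight does.
    return (1.0 * post.predicted_clicks
            + 0.5 * post.predicted_dwell
            + 2.0 * post.predicted_replies)


def build_feed(candidates: list[Post], limit: int = 20) -> list[Post]:
    # Rank the whole candidate pool by predicted engagement,
    # not by recency and not by who posted it.
    return sorted(candidates, key=engagement_score, reverse=True)[:limit]
```

The detail worth noticing is the objective function: it optimizes a proxy for your attention, and it's entirely indifferent to whether that attention comes from joy or from outrage.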

Passive no more

So these social media platforms may not be creating the content, but they also aren't publishing it in some objective stream. I still remember the early days of Facebook's feed, when there was an option to switch to a simple time-based feed of only your friends. Maybe it's still there somewhere; I wouldn't know these days. The default, however, is certainly a more managed content stream. It includes content from your friends and the groups you're a member of, of course, but it also includes suggestions of other content: specifically, content their algorithm has decided you're likely to engage with.

In this context, where the social media platforms are subjectively boosting content, they can no longer be considered passive actors. They're not simply platforms where disparate ideas, no matter how controversial, can be exchanged. They're actively distorting how common we perceive those ideas to be. That's a dangerous position.

We used to talk to our friends, family, and neighbours. If one person had a crackpot idea, there's a good chance we'd be inoculated against it by the rest of our social circle calling it out. If half or more of our social circle agreed on something, we'd be more inclined to go along with it.

This is exactly the kind of reality distortion that's taking place online. Without visibility into how many people actually agree on something, a problem made even harder by the proliferation of bots, it's easy to start seeing niche ideas as mainstream.

Accountability

It's worth noting social media platforms aren't doing this because they have some shady agenda. They're all in it for the money. Like any corporation, they're amoral, not immoral. As long as they're making more and more money, consequences be damned.

However, there needs to be some accountability. A platform shouldn't get to wave Section 230 around when it's playing an active role in what content is put in front of you. An old-school bulletin board that simply displays threaded discussions based on latest activity is clearly not trying to manipulate your emotions and keep you on the site. The same is not true of social media, at least not in its present form.
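For contrast, here's the same kind of sketch for that passive, bulletin-board-style ordering: a single objective sort rule, applied identically for every reader, with no per-user model and no judgment about what you should see. Again, this is illustrative only.

```python
# Illustrative sketch of a "passive" bulletin-board ordering:
# one objective rule, identical for every reader.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Thread:
    title: str
    last_activity: datetime


def bulletin_board_feed(threads: list[Thread]) -> list[Thread]:
    # Most recently active thread first. The platform exercises
    # no choice over which content any particular user sees.
    return sorted(threads, key=lambda t: t.last_activity, reverse=True)
```

The gap between these two sketches is the distinction I'm arguing the law should care about: the first makes an active, per-user choice about what to show; the second does not.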

Once a platform goes down the path of optimizing for engagement, and is selecting what content to present you with, there needs to be a higher degree of accountability and liability. The impotent solutions they've put forward so far, like labelling misinformation, are insufficient.

Maybe that means an update to Section 230 that clarifies it only protects passive content platforms. Maybe it needs to be new or additional legislation that specifically addresses platforms taking an active role in what content is promoted.
