Yesterday, Twitter announced Birdwatch, a community moderation system.
Keith Coleman, Twitter’s VP of Product:
Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.
Birdwatch is only available in the US right now. It currently exists as a separate site, where the community can flag misleading or false Tweets. Like every effort at adding moderation tools to an already mature social network, it will fail. And like everything else, it comes down to cold hard cash.
Your attention is, obviously, for sale
Free-to-use social networks like Twitter are virality engines that:
- Give greater visibility to content that is new, funny or outrageous so that
- Users come back more often in order for
- A greater number of increasingly personalised adverts to be shown which results in
- More profit for shareholders.
Every decision made by Twitter can be viewed through this four-stage process. It’s a sales funnel built from outrage and fastened together with despair.
The core business model of the major tech companies and their subsidiaries is to capture and sell attention. They make the vast majority of their fortunes from selling ads based on what they know about you. Recent figures show that Google makes 83 per cent of its money this way, Facebook 99 per cent, and Twitter over 70 per cent. Of this ad revenue, a huge proportion comes from targeted behavioural advertising. This is how these services are free at the point of use.
It necessarily follows that most of the terrible things these platforms do—boost inflammatory content, track our location, enable election manipulation, decimate the news industry, help fascists to rise to positions of power in government—arise from the goal of boosting ad revenues.
Birdwatch will fail because it runs counter to Twitter’s business model. In its infancy it already exists as an um so maybe let’s leave this to the community to sort out? model. This is, you will not be surprised to hear, generally a sign that the company is not fully committed to its success. If it fails, it won’t be Twitter’s fault—the users must not have wanted the problem to be solved.
(Of course, there are reasons why Twitter should put in place better moderation systems that are also related to this process—to allay the concerns of the subset of advertisers who are nervous about platform disinformation, or to head off the threat of regulation—but these are of lesser importance and urgency, or are more indirect, than the major engagement = $$$ through line.)
It very clearly does not have to be this way. You would think that sites like Wikipedia—something that anyone can edit—would be just as susceptible to being used as a tool for misinformation. But it’s not, because the content isn’t ad supported, isn’t personalised for users and isn’t subject to algorithmic amplification. If you’re reading an article about, say, a medieval flatulist named Roland the Farter, it’s almost certainly because you looked for it or some idiot like me shared it with you, rather than because of any on-platform mechanism. Wikipedia relies on other metrics for success and other ways to make money, and so should Twitter.
Remove the fiscal incentive for misinformation
We ought to have investment models where the wider interests of society contribute to shareholder profitability, not models that consider them a cost. Alas, we live in an imperfect world: see the popularity of James Corden for further evidence of this. In the meantime, the simplest fix for the issues I mention above is to ban, or at least heavily restrict, targeted behavioural advertising. Misinformation, disinformation, extreme content, copyright violations: instead of trying to solve these things individually, just remove the root fiscal incentive and force these virality engines to make their money another way.
In economics, Gresham’s law is commonly written as ‘bad money drives out good’. I’ve heard people comment that a version of this is at work here. The unwanted and damaging content (let’s use that horrible term ‘fake news’), which gets more engagement and is essentially free to produce, drives out real news, which often tells people things they don’t want to hear, and is expensive to produce.
I work with companies to help with, among other things, their online advertising. Lest anyone think I am trying to do myself out of a job, there are still great ways to advertise products and services to the right people without the ad industry necessarily needing to know users’ sexual orientations, current mood, menstrual cycle—i.e. the sorts of things that most users don’t even realise ad companies know about them. Good advertising can still exist, but bad advertising systems are causing irreparable harm in the wider world, and ideas like Birdwatch are mere salves that fail to treat the root illness.