It’s not a new topic—I’ve enjoyed reading John Herrman, Mike Caulfield, Caitlin Dewey, and Jeff Jarvis (among others) for some time. But Trump’s victory has turned it from a curiosity into a dangerous force.
Jarvis has co-written a list of 15 suggestions for platforms to adopt or investigate. This stands out to me as particularly important:
Create a system for media to send metadata about their fact-checking, debunking, confirmation, and reporting on stories and memes to the platforms. It happens now: Mouse over fake news on Facebook and there’s a chance the related content that pops up below can include a news site or Snopes reporting that the item is false. Please systematize this: Give trusted media sources and fact-checking agencies a path to report their findings so that Facebook and other social platforms can surface this information to users when they read these items and — more importantly — as they consider sharing them. Thus we can cut off at least some viral lies at the pass. The platforms need to give users better information and media need to help them. Obviously, the platforms can use such data from both users and media to inform their standards, ranking, and other algorithmic decisions in displaying results to users.
These linked-data connections are not difficult to implement, but they won’t happen without us asking for them; left to themselves, the platforms simply aren’t interested.
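One existing vocabulary for exactly this kind of fact-check metadata is schema.org’s ClaimReview markup, which lets a fact-checking organization publish a machine-readable verdict that a platform could ingest and surface next to a shared story. Here is a minimal sketch in Python; the URLs, organization name, and claim text are hypothetical placeholders, not real fact-checks:

```python
import json

# A sketch of the metadata pipeline Jarvis describes, expressed as
# schema.org ClaimReview JSON-LD. All URLs and names below are
# hypothetical examples.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    # Where the fact-checker's full write-up lives (hypothetical URL)
    "url": "https://example-factchecker.org/reviews/viral-claim",
    # The claim being evaluated, as it circulated
    "claimReviewed": "Viral headline making the rounds on social media",
    "datePublished": "2016-11-15",
    "author": {
        "@type": "Organization",
        "name": "Example Fact-Checking Agency",  # hypothetical
    },
    # The item that made the claim (hypothetical URL)
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://example-fake-news.site/story",
    },
    # The verdict, on a simple numeric scale plus a human-readable label
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Serialized as JSON-LD, this could be embedded in the fact-checker's
# own page markup or sent to a platform's reporting endpoint.
print(json.dumps(claim_review, indent=2))
```

A platform crawling or receiving this payload could match `itemReviewed.url` against links being shared and attach the `reviewRating` label before a user hits the share button—which is the “cut off viral lies at the pass” step Jarvis is asking for.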
Same for this idea, also on the list:
Make the brands of those sources more visible to users. Media have long worried that the net commoditizes their news such that users learn about events “on Facebook” or “on Twitter” instead of “from the Washington Post.” We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.
A key issue that Caulfield has repeatedly noted is that Facebook doesn’t really care whether you read the articles that are posted, only whether you react to them; each reaction helps the platform learn more about you and sharpen its ad targeting:
Facebook, on the other hand, doesn’t think the content is the main dish. Instead, it monetizes other people’s content. The model of Facebook is to try to use other people’s external content to build engagement on its site. So Facebook has a couple of problems.
First, Facebook could include whole articles, except for the most part they can’t, because they don’t own the content they monetize. (Yes, there are some efforts around full story embedding, but again, this is not evident on the stream as you see it today). So we get this weird (and think about it a minute, because it is weird) model where you get the headline and a comment box and if you want to read the story you click it and it opens up in another tab, except you won’t click it, because Facebook has designed the interface to encourage you to skip going off-site altogether and just skip to the comments on the thing you haven’t read.
Second, Facebook wants to keep you on site anyway, so they can serve you ads. Any time you spend somewhere else reading is time someone else is serving you ads instead of them and that is not acceptable.
The more I read about this, the more dispirited I become. Those of us who care about limiting fake news need to gather around a shared set of ideas and actions—and Jarvis’s list is the best we have so far.