Fact checking tools in the browser

Mike Caulfield has some ideas about how the humble browser could be used to combat mis- and disinformation. For example:

Site info: Browsers expose some site info, but it’s ridiculously limited. Here’s some site info that you could easily provide users: date domain first purchased, first crawl of URL by Google or archive.org, related Wikipedia article on organization (and please financially support Wikipedia if doing this), any IFCN or press certification. Journal impact factor. Date last updated. Even better: provide some subset of this info when hovering over links.
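
As a rough sketch of what two of those signals could look like in practice, here's how a browser (or extension) might fetch the first archive.org crawl date via the Internet Archive's public CDX API and the domain registration date via an RDAP lookup. The endpoints and error handling are my illustrative assumptions, not part of Caulfield's post:

```typescript
// Sketch: pull two of the suggested "site info" signals for a domain.
// Assumes the public Internet Archive CDX API and the rdap.org bootstrap
// service; both are illustrative choices, not a browser-vendor API.

async function firstArchiveCrawl(domain: string): Promise<string | null> {
  // CDX results come back oldest-first, so limit=1 yields the earliest capture.
  const res = await fetch(
    `https://web.archive.org/cdx/search/cdx?url=${encodeURIComponent(domain)}&output=json&limit=1`
  );
  const rows: string[][] = await res.json();
  return rows.length > 1 ? rows[1][1] : null; // row 0 is the header; column 1 is the timestamp
}

async function domainRegistrationDate(domain: string): Promise<string | null> {
  // RDAP (the structured successor to WHOIS) exposes a "registration" event.
  const res = await fetch(`https://rdap.org/domain/${domain}`);
  if (!res.ok) return null;
  const record = await res.json();
  const registration = (record.events ?? []).find(
    (e: { eventAction: string; eventDate: string }) => e.eventAction === "registration"
  );
  return registration?.eventDate ?? null;
}

// A hover-card could show both values next to a link.
firstArchiveCrawl("example.com").then(ts => console.log("First crawl:", ts));
domainRegistrationDate("example.com").then(d => console.log("Registered:", d));
```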

Likely original reporting source: For a news story that is being re-re-re-reported by a thousand clickbait artists, use network and content analysis to find what the likely original reporting source is and suggest people take a look at that.
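
A minimal sketch of the content-analysis half of that idea: cluster the articles by text overlap and surface the earliest-published member as the probable origin. The `Article` shape, shingle size and similarity threshold are all assumptions for illustration; a real system would also lean on link-network signals.

```typescript
// Among articles covering the same story, treat the earliest-published page
// whose text the others largely overlap with as the probable original source.

interface Article {
  url: string;
  published: Date;
  text: string;
}

// Break text into overlapping 5-word "shingles" for rough similarity comparison.
function shingles(text: string, size = 5): Set<string> {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  const out = new Set<string>();
  for (let i = 0; i + size <= words.length; i++) {
    out.add(words.slice(i, i + size).join(" "));
  }
  return out;
}

function jaccard(a: Set<string>, b: Set<string>): number {
  let intersection = 0;
  for (const s of a) if (b.has(s)) intersection++;
  return intersection / (a.size + b.size - intersection || 1);
}

// Assumes a non-empty list of candidate articles about one story.
function likelyOriginalSource(articles: Article[], threshold = 0.3): Article {
  const sets = articles.map(a => shingles(a.text));
  // Keep only articles that substantially overlap with the rest of the cluster,
  // then pick the earliest one.
  const clustered = articles.filter((_, i) =>
    sets.some((s, j) => i !== j && jaccard(sets[i], s) >= threshold)
  );
  const pool = clustered.length ? clustered : articles;
  return pool.reduce((earliest, a) =>
    a.published < earliest.published ? a : earliest
  );
}
```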

Other suggestions: in-built reverse image lookups, OCR of image memes, related sites.
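
Of those, the meme OCR is probably the easiest to prototype today. A minimal sketch using tesseract.js (my choice of library, not Caulfield's); the extracted text could then feed the same checks as ordinary article text:

```typescript
// Sketch of the "OCR of image memes" suggestion using the open-source
// tesseract.js library.
import Tesseract from "tesseract.js";

async function memeText(imageUrl: string): Promise<string> {
  const { data } = await Tesseract.recognize(imageUrl, "eng");
  return data.text.trim();
}

memeText("https://example.com/meme.jpg").then(text =>
  console.log("Text found in image:", text)
);
```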


Tagging fake news on Facebook doesn’t work

Jason Schwartz for Politico:

Facebook touts its partnership with outside fact-checkers as a key prong in its fight against fake news, but a major new Yale University study finds that fact-checking and then tagging inaccurate news stories on social media doesn’t work.

The study, reported for the first time by POLITICO, found that tagging false news stories as “disputed by third party fact-checkers” has only a small impact on whether readers perceive their headlines as true. Overall, the existence of “disputed” tags made participants just 3.7 percentage points more likely to correctly judge headlines as false, the study said.

This is particularly disappointing:

The researchers also found that, for some groups—particularly, Trump supporters and adults under 26—flagging bogus stories could actually end up increasing the likelihood that users will believe fake news.

Battling fake news with schema.org

More from The Economist, who’ve made a prototype tool that estimates a publisher’s standing based on the structured data it makes available about itself:

In simple terms, here’s how our idea works from the perspective of a news reader: imagine that you stumbled upon an article via social media or search. You’ve never seen this site before and you have never heard of the publisher. You want to be able to validate the page to make sure the organisation behind the news is legit. You simply enter the URL of the page into our tool and it produces a score based on how much information the publisher has disclosed about itself in the code of its web page.
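
A rough sketch of the general idea: fetch the page, pull out its schema.org JSON-LD blocks, and count how many publisher-identifying fields are present. The field list and flat weighting here are my assumptions, not necessarily the prototype's actual criteria:

```typescript
// Sketch: score a page by how much schema.org JSON-LD it discloses.
// The field list is an illustrative assumption.
const DISCLOSURE_FIELDS = [
  "publisher", "author", "datePublished", "dateModified",
  "headline", "sameAs", "logo", "address",
];

async function disclosureScore(url: string): Promise<number> {
  const html = await (await fetch(url)).text();
  // Grab every <script type="application/ld+json"> block.
  const blocks = [...html.matchAll(
    /<script[^>]+application\/ld\+json[^>]*>([\s\S]*?)<\/script>/gi
  )].map(m => m[1]);

  const found = new Set<string>();
  for (const block of blocks) {
    try {
      const data = JSON.parse(block);
      const items = Array.isArray(data) ? data : [data];
      for (const item of items) {
        for (const field of DISCLOSURE_FIELDS) {
          if (item && item[field] != null) found.add(field);
        }
      }
    } catch {
      // Ignore malformed JSON-LD rather than failing the whole check.
    }
  }
  // Score = fraction of expected fields the publisher actually disclosed.
  return found.size / DISCLOSURE_FIELDS.length;
}

disclosureScore("https://example.com/article").then(score =>
  console.log(`Disclosure score: ${(score * 100).toFixed(0)}%`)
);
```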

A few immediate thoughts:

  • This wouldn’t be impossible to game, but the extra work involved might make it slightly less appealing to pull the web equivalent of the Twitter egg-account move: setting up a bare WordPress site, with no identifying information, for the sole purpose of writing and sharing fake news stories for ad revenue.
  • As well as being an end-user action, these checks could be adopted by platforms (among many, many other signals) when determining how to rank content in news feeds and search results.
  • It could also be a quality factor for ad networks when determining where to place adverts.