Battling fake news with schema.org

More from The Economist, who’ve built a prototype tool that estimates a publisher’s standing based on the structured data the publisher makes available about itself:

In simple terms, here’s how our idea works from the perspective of a news reader: imagine that you stumbled upon an article via social media or search. You’ve never seen this site before and you have never heard of the publisher. You want to be able to validate the page to make sure the organisation behind the news is legit. You simply enter the URL of the page into our tool and it produces a score based on how much information the publisher has disclosed about itself in the code of its web page.
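To make the mechanics concrete, here’s a rough sketch of what such a check might look like: a publisher discloses information about itself via schema.org JSON-LD in the page source, and a scorer counts how many of an expected set of Organization properties are present. The field list, the NewsMediaOrganization example and the scoring formula below are illustrative assumptions of mine, not The Economist’s actual criteria.

```python
import json
from html.parser import HTMLParser

# Hypothetical set of schema.org Organization properties a publisher might
# disclose about itself; the real tool's scoring criteria aren't published.
DISCLOSURE_FIELDS = [
    "name", "url", "logo", "address", "foundingDate",
    "sameAs", "contactPoint", "publishingPrinciples",
]


class JsonLdExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            try:
                self.blocks.append(json.loads(data))
            except json.JSONDecodeError:
                pass  # ignore malformed markup


def disclosure_score(html: str) -> float:
    """Fraction of the expected Organization fields present in the page's JSON-LD."""
    parser = JsonLdExtractor()
    parser.feed(html)
    disclosed = set()
    for block in parser.blocks:
        items = block if isinstance(block, list) else [block]
        for item in items:
            if not isinstance(item, dict):
                continue
            # Look at the publisher object on an article, or a bare Organization.
            org = item.get("publisher", item)
            if not isinstance(org, dict):
                continue
            for field in DISCLOSURE_FIELDS:
                if org.get(field):
                    disclosed.add(field)
    return len(disclosed) / len(DISCLOSURE_FIELDS)


# Example page (hypothetical publisher) with a small amount of markup.
SAMPLE_HTML = """
<html><head>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example story",
  "publisher": {
    "@type": "NewsMediaOrganization",
    "name": "Example Gazette",
    "url": "https://example-gazette.test",
    "logo": "https://example-gazette.test/logo.png"
  }
}
</script>
</head><body>...</body></html>
"""

if __name__ == "__main__":
    # Prints 0.38: only name, url and logo of the eight fields are disclosed.
    print(f"disclosure score: {disclosure_score(SAMPLE_HTML):.2f}")
```

A publisher that also marked up its address, founding date, social profiles and publishing principles would score higher; the point is simply that the signal comes from what the page itself declares, not from any external vetting.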

A few immediate thoughts:

  • This wouldn’t be impossible to game, but the extra work involved might make it less easy or appealing to pull the web equivalent of the Twitter egg-account move: setting up a bare WordPress site, with no information about who runs it, solely to write and share fake news stories for ad revenue.
  • As well as being something end users run themselves, checks like these could be adopted by platforms (among many, many other signals) when determining how to rank content in news feeds and search results.
  • It could also be a quality factor for ad networks when determining where to place adverts.
