Fact checking tools in the browser

Mike Caulfield has some ideas about how the humble browser could be used to combat mis- and disinformation. For example:

Site info: Browsers expose some site info, but it’s ridiculously limited. Here’s some site info that you could easily provide users: date domain first purchased, first crawl of URL by Google or archive.org, related Wikipedia article on organization (and please financially support Wikipedia if doing this), any IFCN or press certification. Journal impact factor. Date last updated. Even better: provide some subset of this info when hovering over links.
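Some of this site info is already obtainable from public services. As a rough sketch of the "first crawl" idea: archive.org's CDX API (`web.archive.org/cdx/search/cdx?url=...&fl=timestamp&limit=1`) returns captures in chronological order, so the first line of the response is the earliest crawl. The sample response below is illustrative, not a real lookup.

```python
from datetime import datetime

def first_crawl_date(cdx_response: str) -> datetime:
    """Parse the earliest-capture timestamp (YYYYMMDDhhmmss) from a
    CDX API response; the first line is the oldest snapshot."""
    first_line = cdx_response.strip().splitlines()[0]
    return datetime.strptime(first_line, "%Y%m%d%H%M%S")

# Hypothetical CDX output for some URL:
sample = "20040213093148\n"
print(first_crawl_date(sample))  # 2004-02-13 09:31:48
```

A browser could fetch this once per domain, cache it, and surface it in the hover card alongside registration date and any press certification.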

Likely original reporting source: For a news story that is being re-re-re-reported by a thousand clickbait artists, use network and content analysis to find what the likely original reporting source is and suggest people take a look at that.
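One naive version of this: treat stories whose text heavily overlaps as re-reports of the same piece, then suggest the earliest one. The sketch below uses word-shingle Jaccard similarity as the content-analysis step; the threshold, shingle size, and example URLs are all arbitrary assumptions, and a real system would also weight link-network signals.

```python
from datetime import datetime

def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word tuples for comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets, 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def likely_original(stories):
    """stories: list of (url, published_datetime, text), seeded with the
    story being read. Cluster near-duplicates of the seed, then return
    the URL of the earliest story in that cluster."""
    _, _, seed_text = stories[0]
    seed = shingles(seed_text)
    cluster = [s for s in stories if jaccard(seed, shingles(s[2])) > 0.5]
    return min(cluster, key=lambda s: s[1])[0]

# Hypothetical example: two near-identical re-reports and one unrelated story.
text = "the mayor announced a new budget plan on tuesday amid protests"
stories = [
    ("clickbait.example/1", datetime(2016, 11, 10, 12, 0), text),
    ("localpaper.example/story", datetime(2016, 11, 9, 8, 0), text),
    ("unrelated.example", datetime(2016, 11, 1), "completely different words about sports results yesterday"),
]
print(likely_original(stories))  # localpaper.example/story
```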

Other suggestions: in-built reverse image lookups, OCR of image memes, related sites.


How social media broke our democracy

Mike Caulfield:

I could not sleep last night at all. So I organized my notes I’ve been taking over the last year on the problem of doing politics in distributed feed-based systems.

I know this election was about so much more than that (so much more), and our problems are so much deeper. But I remain convinced that even if social media is not the fire or the fuel of Breitbartian racism it is in fact the oxygen that helps it thrive and spread.

There are 537 pages of notes in this PDF, and it may not be immediately clear what each has to do with the book, but in my head at least they all relate. They are worth a read.

[pdf]

Wow—this is fantastic, and exactly what I was getting at in my earlier post about indiscriminate collecting. I’ve started using DEVONthink to collect and organise my notes and web clippings, and I hope to get to a point where I have a similar collection. Not only does it help my understanding of concepts, it also enables me to make unexpected connections between them.