News story formats

Following the birth of my son I’ve just started an extended period of parental leave from work. Prior to my departure I was trying to better understand, rationalise and improve the way we used platforms and formats.

These are clearly linked: you cannot post audio to Twitter; you can’t post a long-form article to Instagram. But this is good! Most publishers are just doing the basic stuff and there’s room to easily reach a far larger audience by publishing in different formats or by repurposing archive content for different platforms. And within our faculties we have so many potential writers, presenters and collaborators! We were just beginning to get somewhere. Oh well. Something to pick up when I go back.

When I’m doing this sort of work I’m a sucker for this sort of visualisation:

This comes from Beyond 800 words: new digital story formats for news, a typology of news formats by Tristan Ferne for BBC R&D:

For the inception of a BBC R&D project to explore alternatives to these conventional formats I’ve conducted a review of the landscape of digital news, looking for innovations in article and video formats online. I’ve been looking particularly for story formats used for news that aren’t legacies from print or broadcast, that try to use the affordances of digital, that have been specifically designed for news and that are re-usable across stories and genres.

Data reporting links from NICAR17

Chrys Wu has a comprehensive list of talks and resources from NICAR17—the conference for the (U.S.) National Institute for Computer-Assisted Reporting.

Some that jumped out at me as being particularly useful and/or interesting:

The different types of mis- and disinformation

From First Draft News’ post Fake news. It’s complicated there’s a useful figure that shows a spectrum of mis- and disinformation:

The scale, according to author Claire Wardle, “loosely measures the intent to deceive”.

Map these against the 8 Ps (Poor Journalism, Parody, to Provoke or ‘Punk’, Passion, Partisanship, Profit, Political Influence or Power, and Propaganda) and you start to see some mini-patterns:

Designing headlines to make them more useful

Nieman Lab is running their annual predictions for journalism. Melody Kramer’s piece about designing headlines caught my eye:

In other words, how can we encode as much useful information as possible in a headline? Colors, fonts, shading, size, position, pictures, interactivity, history, metadata — basically all the design elements of information encoding across multiple dimensions. Which of those are most helpful to enhancing the headline? How can we test them?

For example, could we think of a headline as something that one can hover over, and immediately see source material? Or how many times the headline has changed? Or how other publications have written the same headline? (How does that help readers? How could that help publications?)

Let’s go broader. Why are headlines text? Could they be something else? What is the most important element at the top of a page? Is it five to fourteen words or is it something else entirely?

The whole piece is interesting and (typically for Mel) full of good ideas. Later she discusses the role of text:

Do we only think of mainly-text-based solutions because of the current nature of the platforms we share on? What if that changes? How could that change? A lot of current restrictions around headlines come from social and search restrictions and it would be interesting to think about that impact and how publications might bypass them with headline-like constructs (like Mic’s multimedia notifications or BuzzFeed’s emoji notifications.) They’re taking the headline space and reworking it using images. What could we use besides images? In addition to images?

This is key. We use text because, well, text. It’s demanded by the channels we use to disseminate content. As readers we can react in non-textual ways: Facebook, BuzzFeed and others allow us to offer what might be very nuanced reactions using (barely?) representative icons and emoji. But as publishers, our platforms—both those that we own and third-party sites in our extended IA—generally haven’t evolved to a point where we can implement much of what Mel imagines.

This is a shame, as there’s plenty wrong with text and how it is used. Alan Jacobs wrote a short post in November, disagreeing with another post that championed text over other forms of communication:

Much of the damage done to truth and charity in this past election was done with text. (It’s worth noting that Donald Trump rarely uses images in his tweets.) And of all the major social media, the platform with the lowest levels of abuse, cruelty, and misinformation is clearly Instagram.

No: it’s not the predominance of image over text that’s hurting us. It’s the use of platforms whose code architecture promotes novelty, instantaneous response, and the quick dissemination of lies.

This is problematic, and brings me back once again to Mike Caulfield’s excellent take on the layout and purpose of Facebook’s news distribution:

The way you get your stories is this:

  • You read a small card with a headline and a description of the story on it.
  • You are then prompted to rate the card by liking it, sharing it or commenting on it.
  • This then is pushed out to your friends, who can in turn complete the same process.

This might be a decent scheme for a headline rating system. It’s pretty lousy for news though.

[…]

So we get this weird (and think about it a minute, because it is weird) model where you get the headline and a comment box and if you want to read the story you click it and it opens up in another tab, except you won’t click it, because Facebook has designed the interface to encourage you to skip going off-site altogether and just skip to the comments on the thing you haven’t read.

No conclusions this end, but plenty of interrelated issues to ponder:

  1. How do we (re-)engineer headlines to be more useful by revealing more information than is currently available in a few short words?
  2. How do we maintain the curiosity gap without ever-increasing reliance on clickbait?
  3. How do we continue the battle against fake news and propaganda masquerading as unbiased thought?
  4. How do we reconcile this with third-party distribution platforms that can only (barely) cope with text, and that treat content as a title and comments box only?

There’s something else in here about headlines and metadata and their role in content discovery and dissemination, and how users decide what to read and when. I was talking about this today with Richard Holden from The Economist and it’s sparked a few assorted thoughts that are yet to coalesce into anything new or meaningful. Perhaps in time.

How Google’s AMP and Facebook’s Instant Articles camouflage fake news

Kyle Chayka for The Verge:

The fake news problem we’re facing isn’t just about articles gaining traffic from Facebook timelines or Google search results. It’s also an issue of news literacy — a reader’s ability to discern credible news. And it’s getting harder to tell on sight alone which sites are trustworthy. On a Facebook timeline or Google search feed, every story comes prepackaged in the same skin, whether it’s a months-long investigation from The Washington Post or completely fabricated clickbait.

Another unintended consequence of the homogenised/minimalist publishing platform movement. See also: Medium.

Battling fake news with schema.org

More from The Economist, who’ve made a prototype of a tool that estimates the standing of a publisher based on the structured data it makes available about itself:

In simple terms, here’s how our idea works from the perspective of a news reader: imagine that you stumbled upon an article via social media or search. You’ve never seen this site before and you have never heard of the publisher. You want to be able to validate the page to make sure the organisation behind the news is legit. You simply enter the URL of the page into our tool and it produces a score based on how much information the publisher has disclosed about itself in the code of its web page.

A few immediate thoughts:

  • This wouldn’t be impossible to game, but the extra work involved might make it slightly less easy or appealing to pull the web equivalent of the Twitter egg account move: setting up a bare-bones WordPress site, with no publisher information, for the sole purpose of writing and sharing fake news stories for ad revenue.
  • As well as being an end-user action, platforms could adopt some of these checks (among many, many other signals) when determining how to rank content in news feeds and search results.
  • It could also be a quality factor for ad networks when determining where to place adverts.
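To make the mechanics concrete, here’s a minimal sketch of how such a check might work, assuming the publisher exposes schema.org metadata as JSON-LD in its pages. The field list and scoring below are my own illustration, not The Economist’s actual criteria.

```python
import json
import re
import urllib.request

# Fields a legitimate publisher is likely to disclose about itself via
# schema.org. An illustrative guess, not The Economist's actual criteria.
TRANSPARENCY_FIELDS = ["name", "logo", "url", "sameAs", "address", "foundingDate"]

def transparency_score(page_url: str) -> float:
    """Return a rough 0-1 score based on how much schema.org JSON-LD
    publisher information the page discloses."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "ignore")
    # Pull out every <script type="application/ld+json"> block in the page.
    blocks = re.findall(
        r'<script[^>]+application/ld\+json[^>]*>(.*?)</script>', html, re.S
    )
    disclosed = set()
    for block in blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        items = data if isinstance(data, list) else [data]
        for item in items:
            if not isinstance(item, dict):
                continue
            # Publisher details may sit at the top level (an Organization
            # object) or nested under a NewsArticle's "publisher" key.
            org = item.get("publisher", item)
            for field in TRANSPARENCY_FIELDS:
                if isinstance(org, dict) and org.get(field):
                    disclosed.add(field)
    return len(disclosed) / len(TRANSPARENCY_FIELDS)

# Example (hypothetical URL): print(transparency_score("https://example.com/story"))
```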

Cooperation against fake news

I’ve spent the past few days reading almost exclusively about the rise, dissemination and impact of fake news.

It’s not a new topic—I’ve enjoyed reading John Herrman, Mike Caulfield, Caitlin Dewey and Jeff Jarvis (among others) for some time. But Trump’s victory has turned it from a curiosity into a dangerous force.

Jarvis has co-written a list of 15 suggestions for platforms to adopt or investigate. This stands out to me as particularly important:

Create a system for media to send metadata about their fact-checking, debunking, confirmation, and reporting on stories and memes to the platforms. It happens now: Mouse over fake news on Facebook and there’s a chance the related content that pops up below can include a news site or Snopes reporting that the item is false. Please systematize this: Give trusted media sources and fact-checking agencies a path to report their findings so that Facebook and other social platforms can surface this information to users when they read these items and — more importantly — as they consider sharing them. Thus we can cut off at least some viral lies at the pass. The platforms need to give users better information and media need to help them. Obviously, the platforms can use such data from both users and media to inform their standards, ranking, and other algorithmic decisions in displaying results to users.

These linked data connections are not difficult to implement but they won’t happen without us asking for them. Platforms simply aren’t interested.
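Schema.org already defines a ClaimReview type for exactly this kind of fact-checking metadata, which gives a sense of how small the ask is. As a rough illustration, this is the sort of record a fact-checker might embed in a debunking article; the claim, URLs and rating scale here are invented placeholders.

```python
import json

# A minimal ClaimReview record, of the kind a fact-checking site might embed
# in a <script type="application/ld+json"> tag. All values are placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/reviews/12345",  # the fact-check itself
    "claimReviewed": "Example claim circulating on social media",
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://dubious-site.example/original-story",  # the item being checked
    },
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,        # 1 = false on this illustrative 1-5 scale
        "bestRating": 5,
        "alternateName": "False",
    },
}

print(json.dumps(claim_review, indent=2))
```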

Same for this idea, also on the list:

Make the brands of those sources more visible to users. Media have long worried that the net commoditizes their news such that users learn about events “on Facebook” or “on Twitter” instead of “from the Washington Post.” We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.

A key issue that Caulfield has repeatedly noted is that Facebook doesn’t really care whether you read articles that are posted; just whether you react to them, helping the platform learn more about you, in order to improve its ad targeting:

Facebook, on the other hand, doesn’t think the content is the main dish. Instead, it monetizes other people’s content. The model of Facebook is to try to use other people’s external content to build engagement on its site. So Facebook has a couple of problems.

First, Facebook could include whole articles, except for the most part they can’t, because they don’t own the content they monetize. (Yes, there are some efforts around full story embedding, but again, this is not evident on the stream as you see it today). So we get this weird (and think about it a minute, because it is weird) model where you get the headline and a comment box and if you want to read the story you click it and it opens up in another tab, except you won’t click it, because Facebook has designed the interface to encourage you to skip going off-site altogether and just skip to the comments on the thing you haven’t read.

Second, Facebook wants to keep you on site anyway, so they can serve you ads. Any time you spend somewhere else reading is time someone else is serving you ads instead of them and that is not acceptable.

The more I read about this, the more dispirited I become. Those of us who care about limiting fake news need to gather around a set of ideas and actions—Jarvis’s list is the best we have so far.

The Washington Post uses AI to generate Olympic content

Peter Kafka for Recode:

The Post is using homegrown software to automatically produce hundreds of real-time news reports about the Olympics. Starting tomorrow morning, those items will appear, without human intervention, on the Post’s website, as well as in outside channels like its Twitter account.

The idea is to use artificial intelligence to quickly create simple but useful reports on scores, medal counts and other data-centric news bits — so that the Post’s human journalists can work on more interesting and complex work, says Jeremy Gilbert, who heads up new digital projects for the paper.

Audiogram turns audio into video for social media

WNYC, America’s most popular public radio station, is open sourcing its Audiogram service for turning audio clips into videos for native sharing on social media.

The most popular social media platforms—Facebook, Instagram and Twitter—don’t have a content type for audio and are predominantly visual. Facebook in particular puts video at the heart of what it does, and brands are using the format more often: see, for example, the huge increase in cooking and how-to videos.

It’s increasingly important to share content natively on social media platforms—that is, to use the platforms’ own media types, which are privileged in users’ news feeds.

Common solutions are to use audio hosting services such as SoundCloud or Audioboom, but these are a click away from a user’s Facebook news feed, or at the very least don’t autoplay. This means that a user is less likely (source) to click to play or visit the content, which in turn results in low engagement, which in turn leads to lower exposure within Facebook.

I’ve seen this anecdotally when sharing SoundCloud recordings. I see far fewer likes, comments and shares, and people tell me they never saw the posts in their feeds.

WNYC’s tool turns audio files (.mp3 and .wav) into movie files, adding branding, captions and a waveform visualisation. They plan to introduce options for subtitling in a future release. The idea isn’t brand new—organisations like The Economist have had some success already—but by open sourcing their workflow, more people can try it out.
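The underlying technique is simple enough to sketch: draw a waveform over a branded still image and mux it with the original audio. Here’s a rough illustration using ffmpeg’s showwaves filter; it isn’t WNYC’s actual pipeline, and the paths and geometry are placeholders.

```python
import subprocess

def make_audiogram(audio_path: str, background_png: str, out_path: str) -> None:
    """Overlay an ffmpeg-generated waveform on a branded still image and mux
    it with the original audio. Assumes a 1280x720 background image."""
    filtergraph = (
        "[0:a]showwaves=s=1280x200:mode=line:colors=white[wave];"  # draw the waveform
        "[1:v][wave]overlay=0:520[out]"                            # place it near the bottom
    )
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", audio_path,                     # input 0: the audio clip (.mp3 or .wav)
            "-loop", "1", "-i", background_png,   # input 1: the branded background image
            "-filter_complex", filtergraph,
            "-map", "[out]", "-map", "0:a",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", "-c:a", "aac",
            "-shortest",                          # stop when the audio ends
            out_path,
        ],
        check=True,
    )

# make_audiogram("clip.mp3", "branding.png", "audiogram.mp4")
```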

The target audience for the tool is WNYC partners and other news organisations who record interviews, but there are potential uses for:

  • Bedroom musicians to share demos
  • Podcasters
  • Writers of spoken-word fiction or radio plays
  • Stand-up comics

WNYC’s Delaney Simmons:

WNYC shows have been seeing great results. On Twitter, the average engagement for an audiogram is 8x higher than a non-audiogram tweet and on Facebook some of our shows are seeing audiogram reach outperform photos and links by 58% and 83% respectively.

Maybe turning audio into video is the way for it to finally go viral?

All the city’s flotsam and jetsam

1: Cancer and climate change

I’m a climate scientist who has just been told I have Stage 4 pancreatic cancer.

This diagnosis puts me in an interesting position. I’ve spent much of my professional life thinking about the science of climate change, which is best viewed through a multidecadal lens. At some level I was sure that, even at my present age of 60, I would live to see the most critical part of the problem, and its possible solutions, play out in my lifetime. Now that my personal horizon has been steeply foreshortened, I was forced to decide how to spend my remaining time. Was continuing to think about climate change worth the bother?

2: Ten thousand years of the mortar and pestle

3: Vader’s Redemption: The Imperial March in a Major Key

4: The tube at a standstill: why TfL stopped people walking up the escalators

It’s British lore: on escalators, you stand on the right and walk on the left. So why did the London Underground ask grumpy commuters to stand on both sides? And could it help avert a looming congestion crisis?

5: Google Earth fractals

The following is a “photographic” gallery of fractal patterns found while exploring the planet with Google Earth. Each is provided with a KMZ file so the reader can explore the region for themselves. Readers are encouraged to submit their own discoveries for inclusion, credits will be included. Besides being examples of self similar fractals, they are often very beautiful structures … not an uncommon characteristic of fractal geometry.

6: The digital materiality of GIFs

The history, present and future of GIFs.

7: A collection of Bat-labels

Collecting the explanatory labels on everything in the 1966-1968 Batman TV series.

8: Michael Wolf captures abstract, accidental sculptures in Hong Kong alleyways

For over 20 years Michael Wolf has been photographing Hong Kong. During that time he has captured the towering pastel facades of its high rise architecture in a vein similar to Thomas Struth or Andreas Gursky, but perhaps more interestingly he has delved into the hidden maze of the city’s back alleys. What he found and has faithfully documented, are the innumerable abstract urban still lifes seen throughout. All the city’s flotsam and jetsam, from clusters of gloves and clothes hangers, to networks of pipes and a full colour spectrum of plastic bags, are photographed in strange, but entirely happenstance arrangements.

9: A list of the 100 oldest rockstars still living

10: ‘Shocking celebrity nip slips’: Secrets I learned writing clickbait journalism

I spent six months writing traffic-baiting articles about ‘nearly naked’ red carpet dresses and Hollywood bikini shots. Here is my dispatch from the dark side of online celeb journalism.

11: Poachers using science papers to target newly discovered species

Academic journals have begun withholding the geographical locations of newly discovered species after poachers used the information in peer-reviewed papers to collect previously unknown lizards, frogs and snakes from the wild, the Guardian has learned.

12: Why I ignore the daily news and read The Economist instead (and how you can too)

But there’s one big downside to The Economist: it’s a bear to read every week. Not because of the writing, which is crisp and engaging, but because of the volume. Each issue contains about 90 pages of densely packed 9-point type and few photos.

Here’s my 7-step system for reading The Economist every week.

Optimising a WordPress news site for content editors

I’ve produced OU News, a news and media WordPress site for my employer, The Open University (OU). It complements and may eventually replace an existing press release repository.


It’s aimed at a wider audience than just journalists: students, staff, alumni as well as the general public.

I approached it with a firm focus on content strategy, and this post outlines some choices I made to make it useful and usable for its readers as well as making the content publishing process as quick and easy as possible for editors.

The existing OU news site

I believe the press release site to be nearly 15 years old. It’s had minor incremental improvements in that time, but mostly to stop things breaking rather than as proactive enhancements.[1] Various features have been removed or deprecated, such as the RSS feed.

The site isn’t terribly useful for its intended audience: navigation isn’t great, and it is quite restrictive in terms of what can be included in a post. Just text and a thumbnail image. I understand the content publishing process can be laborious and time-consuming.

New OU sites are typically built using Drupal[2] and OU ICE, a front-end HTML, CSS and JavaScript framework for quick and easy production of accessible and brand-compliant sites. This is suitable for the vast majority of cases. In the 5–10% where something more is needed, OU IT will produce and support custom Drupal themes or sub-themes. In this case, IT weren’t able to support the design and build, so we embarked upon it ourselves.[3]

Building a WordPress replacement

I hadn’t used WordPress in anger for several years and, if I’m honest, I didn’t have terrific memories of using it. When I started this site (the one you are reading) I didn’t even consider it.

I spent a couple of days downloading, installing and testing a dozen or so CMSs. The majority were wholly unsuited for this project. They required technical expertise beyond what would be expected of the team who manage the content—their skills are in media relations, not wrangling static site generators using the command line.

It was obvious that I should leave any bias behind. WordPress ticked the most boxes: a couple of the team had used it before, it’s thoroughly extensible, and there’s a huge support community in case things go wrong.

CMS chosen, I looked at themes. The intention was always to buy a flexible theme and to customise it to better suit our needs. We chose Sense. It has good navigation and UX out of the box while being much more visually appealing than the previous site. There are a large number of ways to organise and lay out posts.

There are several other good things about Sense. It has a highly usable drag and drop interface for building page layouts and uses widgets to build sidebars and footers in an intuitive way. Site editors can drop in URLs from media and social media sites and they automatically embed the content—no need for shortcodes or embed codes.[4]

Choosing plugins to help editors

Part of the reason I hadn’t had a terrific experience with WordPress on a previous project was that I’d had bad experiences with plugins failing or being incompatible. This time round, I spent a lot of time searching for reliable plugins to make my life and the editors’ lives easier.

Here’s a rundown of the plugins I used:

  • Advanced Custom Fields. I ended up using this less than I expected. It allows you to customise the fields used in your posts in order to structure your content in a much more useful way. One for the content modellers out there. As the project developed, it was apparent that most of our content types were well served by WordPress’s standard content types and fields.
  • Avatar manager. Allows me to upload images of the site editors rather than them using Gravatar.
  • Better writing. Adds a readability score to all posts using the Flesch reading ease test. As a university we’re prone to unnecessary verbiage; this plugin is a reminder to editors to speak plainly for a general audience. (There’s a rough sketch of the underlying formula after this list.)
  • Broken Link Checker. Periodically scans your site for 404s (internal and external). Prints the results on the admin dashboard and emails them to the editor who published the post. I’m unsure how resource-intensive this plugin will be as the site grows, so I’ll keep my eye on it.
  • Google Analytics (Yoast). Makes it easy to add tracking code regardless of theme and see headline metrics in your WordPress admin interface.
  • ImageInject. Lets editors insert Creative Commons images based on the post title or a search string of their choosing. Adds them in the body of the post or as a featured image, and includes attribution information consistent with the licence.
  • Inline Tweet Sharer. Lets you turn quotes or other short passages of text into anchor text for Tweetable links. I might remove this as the editors haven’t really taken to using it.
  • P3: Plugin Performance Profiler. If you’ve looked at this list and thought, “That’s a long list of plugins”, you’ll like this one. It undertakes an on-demand scan of your site to see if any plugins are drastically affecting site performance. (For the record, there isn’t anything in this list causing alarm.)
  • Radio Buttons for Taxonomies. I have a compulsion that all posts should sit in a single category and have multiple tags, so I’ve enforced that on the editors using this plugin. Forces editors to choose one term from your taxonomy or taxonomies.
  • UK Cookie Consent. Adds a cookie banner to the site and produces a cookie information page. I discarded the default text, preferring to rewrite the OU’s existing cookie information into slightly better text so that it explains what a cookie is, why we use them, and which cookies are used.
  • WP Help. This is pretty great—it allows administrators to add custom help documentation for editors. I’ve made it quite granular, so there are entries for how to source and use images, guidelines for categorising and tagging content, improving readability scores, that sort of thing.
  • WP Hide Post. Allows you to publish a post to the live site but hide it from the main page, or its category page, or the author page, etc. Limited use cases, but potentially very helpful.
  • WP Super Cache. If you’ve used WordPress before then you probably know this one: a fast caching plugin to speed up sites. I adjusted the settings while building the site so I could see changes as I made them.
  • Yoast SEO. Probably the most important plugin we use, and another one you’ve probably heard of. We expect the posts to be shared widely on social media, and it makes it trivially easy to add Open Graph and Twitter metadata for better presentation in Facebook, Twitter and other platforms. It reminds the editors to optimise their posts for search by adding a primary keyword and ensuring that it is included in the title, URL, metadata and body content. The editors seem to like it as it focuses them on the user and what they might search for. There’s a useful analysis tab where you can review how well your content is optimised for search and readability. There’s now an equivalent Drupal module which I’d like IT to add to our standard distribution.
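As an aside on the Better Writing plugin mentioned above, the Flesch reading ease formula itself is simple enough to sketch. The syllable counter below is a crude vowel-group heuristic of my own, so scores will only approximate what the plugin reports.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable count: contiguous vowel groups."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Higher scores are easier; 60-70 is roughly plain English."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# print(flesch_reading_ease("The cat sat on the mat. It purred."))
```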

We’ll likely add Yoast’s Google News plugin in order to create a dynamic XML sitemap that conforms with the Google News schema.
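For reference, a Google News sitemap is an ordinary XML sitemap with an extra news namespace on each URL entry. A hand-rolled sketch might look like the following; the URLs and publication details are placeholders, and in practice the plugin would generate this dynamically.

```python
from xml.sax.saxutils import escape

# Placeholder entries; a real implementation would pull recent posts from WordPress.
posts = [
    {"loc": "https://news.example/arts/some-story/",
     "title": "Some story title",
     "date": "2015-11-16T09:00:00+00:00"},
]

entries = "".join(
    f"""
  <url>
    <loc>{escape(p['loc'])}</loc>
    <news:news>
      <news:publication>
        <news:name>OU News</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>{p['date']}</news:publication_date>
      <news:title>{escape(p['title'])}</news:title>
    </news:news>
  </url>"""
    for p in posts
)

sitemap = f"""<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">{entries}
</urlset>"""

print(sitemap)
```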

All in all I’m hugely impressed with the options available to optimise a WordPress site for better content strategy. And I only managed to delete all the site content once! (That was a hairy hour or so while I arranged for it to be restored from a recent backup. Don’t tell my boss.) WordPress and its community hardly need my patronage but I’ll definitely use it for future projects (where appropriate). I’m considering moving this site across to it.


  1. I haven’t worked at the OU for this long, so this is what I’ve heard rather than experienced.  ↩

  2. The major exception being the prospectus site, which was rebuilt using Kentico in 2014.  ↩

  3. This explains the current URL. The site may well move to become part of the main OU information architecture, but for the moment it uses a non-open.ac.uk domain name and is hosted externally.  ↩

  4. I think this is the theme doing this—since this site was produced I’ve played with other themes where it hasn’t done this.  ↩

Journalists vs. readers

Three new services in the amorphous sphere of news, “storytelling” and social media. What’s most interesting to me is the way they describe themselves, and what that in turn tells us about how they perceive the ultimate audience: users.

Sure, I’ve cherry-picked the quotes and the services are aimed at different audiences, but there is a colossal gulf in language and expectation between those who make news services and the people who will ultimately read or use them.

Google launches First Draft Coalition to answer questions and offer training around social media reporting:

Launching today, the First Draft Coalition is a group of thought leaders and pioneers in social media journalism who are coming together to help you answer these questions, through training and analysis of eyewitness media.

What the fuck? Ugh. Imagine writing a sentence like this and making it bold in the middle of the launch article.

Google and Storyful are launching YouTube Newswire, a feed of verified user-generated videos:

The platforms for Storyful are both the source and the destination for great content and powerful storytelling […] But those platforms are getting noisier and it’s our job to find the stories worth telling and help journalists do great journalism using the power of eyewitness media.

Slightly better?

BuzzFeed’s new app is everything the site isn’t: Short, news-focused and serious

It’s an interesting challenge, to get information to people who aren’t always swimming in it the way journalists like me are.

That makes sense! You should tell that to other people in your field.

This shit is toxic and it needs to die yesterday

1: A complete taxonomy of internet chum

Toward a grand unified theory of “Around the Web”, i.e. those terrible ad grids you see on desperate websites:

Chum is decomposing fish matter that elicits a purely neurological brain stem response in its target consumer: larger fish, like sharks. It signals that they should let go, deploy their nictitating membranes, and chomp down blindly on a morsel of fragrant, life-giving sustenance. Perhaps in a frenzied manner […] This is a chumbox. It is a variation on the banner ad which takes the form of a grid of advertisements that sits at the bottom of a web page underneath the main content.

2: Visipedia

Visipedia is a joint project between Pietro Perona’s Vision Group at Caltech and Serge Belongie’s Vision Group at Cornell Tech. Visipedia, short for “Visual Encyclopedia,” is an augmented version of Wikipedia, where pictures are first-class citizens alongside text. Goals of Visipedia include creation of hyperlinked, interactive images embedded in Wikipedia articles, scalable representations of visual knowledge, largescale machine vision datasets, and visual search capabilities. Toward achieving these goals, Visipedia advocates interaction and collaboration between machine vision and human users and experts.

3: NY Times: Trending

Billed as a real-time dashboard of popular Times content. Interesting to see the way they categorise content:

  • Fresh Eyes: stories that are popular with readers who are new to The Times
  • Page-Turner: stories that are holding the attention of our readers
  • Renewed Interest: older stories that are making a comeback and experiencing a second wind
  • Staying Power: stories that have been consistently popular since publication

4: Why “Agile” and especially Scrum are terrible

It’s probably not a secret that I dislike the “Agile” fad that has infested programming. One of the worst varieties of it, Scrum, is a nightmare that I’ve seen actually kill companies. By “kill” I don’t mean “the culture wasn’t as good afterward”; I mean a drop in the stock’s value of more than 85 percent. This shit is toxic and it needs to die yesterday. For those unfamiliar, let’s first define our terms. Then I’ll get into why this stuff is terrible and often detrimental to actual agility. Then I’ll discuss a single, temporary use case under which “Agile” development actually is a good idea, and from there explain why it is so harmful as a permanent arrangement.

5: The history of Henry Mancini’s Moon River

I didn’t realise how much I loved this song until relatively recently. I recorded a version of it, if you’re inclined to listen.

6: Inside the cult of Secret Wedding Pinterest, where fiances are optional

One third of all boards on Pinterest are secret wedding-planning boards.

7: A plant by any other name

On botanical and common names of plants. No, really, it’s a good short thing.

8: Abandoned fishing village in China reclaimed by nature

In the mouth of the Yangtze River off the eastern coast of China, a small island holds a secret haven lost to the forces of time and nature—an abandoned fishing village swallowed by dense layers of ivy slowly creeping over every brick and path.

9: On the fine art of the footnote

Ever since David Hume noted that, while reading Edward Gibbon’s The History of the Rise and Fall of the Roman Empire, “One is also plagued with his Notes, according to the present Method of printing the Book” and suggested that they “only to be printed at the Margin or the Bottom of the Page,” footnotes have been the hallmark of academia. For centuries, then, the footnote existed as a blunt instrument, wielded by pedants and populists alike, primarily for the transmission of information, but occasionally to antagonize opponents with arch rhetorical asides. But it would take a couple hundred years until writers again took up the footnote for other, more artful purposes, discovering in this tiny technique emotional and intellectual depth far beyond the realm of the merely experimental.

Shit readers give zero fucks about

The king of bullshit news

This is an excellent critical look into the veracity of CEN’s too-good-to-be-true stories, used by The Daily Mail, among (many) others:

How a small British news agency and its founder fill your Facebook feed with stories that are wonderful, wacky – and often wrong.

The words the media industry prefers

The scraping process and resulting visualisations are interesting; what got me more was Ford’s typically humorous style:

Does Bing care how I use it? I bet “nope.” After some testing, it seemed that was true. You can hit Bing tons of times and Microsoft is like, our milkshake brings all the bots to the cloud […] I exported the data to Excel because Google Spreadsheet charts look like they were made by color-blind eleven-year-olds. Excel charts, on the other hand, look like they were made by drunks who sell timeshares in Tampa […] In the far future, you might attend my wake. He did important work, you will think. His comparison of sexualized terms on websites changed America.

UX from hell

A couple [of] weeks ago a UX designer Twitter friend tweeted “Web peeps: Is there a particularly industry, segment, or niche that—generally speaking—has REALLY bad mobile web experiences?” I didn’t even have to think about it before replying: News sites.

I’m interested (personally and professionally) in news site UX, and have documented many similar things. I like the idea of “Shit the UX designer was forced to include” vs “Shit readers give zero fucks about”. Almost always a complete overlap.

A brief history of the # and the @, by Keith Houston

The average tweet is not an especially remarkable thing. It can contain letters (and almost always does), marks of punctuation (perhaps more of an acquired taste in this context), and pictures (mostly of cats and/or the photographer themselves). But in amongst these most conventional components of modern written communication are two special symbols around which orbits the whole edifice of Twitter. Neither letters nor marks of punctuation, the @- and #-symbols scattered throughout Twitter’s half billion daily messages are integral to its workings. And yet, they have always been interlopers amongst our written words.

Keith’s Shady Characters blog and book are both highly recommended.

Explore the trees

A terrifically detailed visualisation of all street trees in San Francisco. See also Matt Dance’s Trees of Edmonton.

A crowdsourced list of the top 50 cult movies

I asked my Twitter followers about their favorite cult films, and got some great responses (I also triggered a kind of Twitter war over whether quoting people’s tweets using the new embed feature is rude and/or noisy, but I will leave that for another day). Here’s a list of the top 50 suggestions — I didn’t include every one, but they all appear in the tweets I’ve embedded below.

Apple: ‘We do not accept fart apps on the Apple Watch’