The themes of the criticisms Apple has made against Facebook are true of Google too: data collection; the advertising model; “you are the product, not the customer”; and so on. Rhetorically savaging your opponent is generally a “bad look” in marketing for all kinds of reasons (it substantiates them; it looks desperate, angry, and gross), but savaging Facebook at a time when everyone is doing so lets Tim Cook attack Google implicitly. Whenever he says “companies that sell your data violate your human right to privacy,” the press covers it as him knocking Facebook; readers and the public, however, may recall it when thinking about Google and Android.
Facebook touts its partnership with outside fact-checkers as a key prong in its fight against fake news, but a major new Yale University study finds that fact-checking and then tagging inaccurate news stories on social media doesn’t work.
The study, reported for the first time by POLITICO, found that tagging false news stories as “disputed by third party fact-checkers” has only a small impact on whether readers perceive their headlines as true. Overall, the existence of “disputed” tags made participants just 3.7 percentage points more likely to correctly judge headlines as false, the study said.
This is particularly disappointing:
The researchers also found that, for some groups—particularly, Trump supporters and adults under 26—flagging bogus stories could actually end up increasing the likelihood that users will believe fake news.
What this means is that even more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company. I’ve spent time thinking about Facebook, and the thing I keep coming back to is that its users don’t realise what it is the company does. What Facebook does is watch you, and then use what it knows about you and your behaviour to sell ads. I’m not sure there has ever been a more complete disconnect between what a company says it does – ‘connect’, ‘build communities’ – and the commercial reality. Note that the company’s knowledge about its users isn’t used merely to target ads but to shape the flow of news to them. Since there is so much content posted on the site, the algorithms used to filter and direct that content are the thing that determines what you see: people think their news feed is largely to do with their friends and interests, and it sort of is, with the crucial proviso that it is their friends and interests as mediated by the commercial interests of Facebook. Your eyes are directed towards the place where they are most valuable for Facebook.
I finally got round to reading this—I currently, and temporarily, have a lot of free time on my hands, so I’m reading everything—and it’s fantastic. Recommended reading for anyone interested in the nascent subject of web platforms (in fact this piece is reminiscent at times of John Herrman, who is currently the writer of the most interesting and relevant articles on the topic).
In other words, how can we encode as much useful information as possible in a headline? Colors, fonts, shading, size, position, pictures, interactivity, history, metadata — basically all the design elements of information encoding across multiple dimensions. Which of those are most helpful to enhancing the headline? How can we test them?
For example, could we think of a headline as something that one can hover over, and immediately see source material? Or how many times the headline has changed? Or how other publications have written the same headline? (How does that help readers? How could that help publications?)
Let’s go broader. Why are headlines text? Could they be something else? What is the most important element at the top of a page? Is it five to fourteen words or is it something else entirely?
The whole piece is interesting and (typically for Mel) full of good ideas. Later she discusses the role of text:
Do we only think of mainly-text-based solutions because of the current nature of the platforms we share on? What if that changes? How could that change? A lot of current restrictions around headlines come from social and search restrictions and it would be interesting to think about that impact and how publications might bypass them with headline-like constructs (like Mic’s multimedia notifications or BuzzFeed’s emoji notifications). They’re taking the headline space and reworking it using images. What could we use besides images? In addition to images?
This is key. We use text because, well, text. It’s demanded by the channels we use to disseminate content. As readers we can react in non-textual ways: Facebook, BuzzFeed and others allow us to offer what might be very nuanced reactions using (barely?) representative icons and emoji. But as publishers, our platforms—both those that we own and third-party sites in our extended IA—generally haven’t evolved to a point where we can implement much of what Mel imagines.
This is a shame, as there’s plenty wrong with text and how it is used. Alan Jacobs wrote a short post in November, disagreeing with another post that championed text over other forms of communication:
Much of the damage done to truth and charity in this past election was done with text. (It’s worth noting that Donald Trump rarely uses images in his tweets.) And of all the major social media, the platform with the lowest levels of abuse, cruelty, and misinformation is clearly Instagram.
No: it’s not the predominance of image over text that’s hurting us. It’s the use of platforms whose code architecture promotes novelty, instantaneous response, and the quick dissemination of lies.
This is problematic, and brings me back once again to Mike Caulfield’s excellent take on the layout and purpose of Facebook’s news distribution:
The way you get your stories is this:
- You read a small card with a headline and a description of the story on it.
- You are then prompted to rate the card by liking it, sharing it, or commenting on it.
- This is then pushed out to your friends, who can in turn complete the same process.
This might be a decent scheme for a headline rating system. It’s pretty lousy for news though.
So we get this weird (and think about it a minute, because it is weird) model where you get the headline and a comment box and if you want to read the story you click it and it opens up in another tab, except you won’t click it, because Facebook has designed the interface to encourage you to skip going off-site altogether and just skip to the comments on the thing you haven’t read.
No conclusions this end, but plenty of interrelated issues to ponder:
- How do we (re-)engineer headlines to be more useful by revealing more information than is currently available in a few short words?
- How do we maintain the curiosity gap without ever-increasing reliance on clickbait?
- How do we continue the battle against fake news and propaganda masquerading as unbiased thought?
- How do we reconcile this with third-party distribution platforms that can only (barely) cope with text, and that treat content as a title and comments box only?
There’s something else in here about headlines and metadata and their role in content discovery and dissemination, and how users decide what to read and when. I was talking about this today with Richard Holden from The Economist and it’s sparked a few assorted thoughts that are yet to coalesce into anything new or meaningful. Perhaps in time.
It’s not a new topic—I’ve enjoyed reading John Herrman, Mike Caulfield, Caitlin Dewey and Jeff Jarvis (among others) for some time. But Trump’s victory has turned it from a curiosity into a dangerous force.
Jarvis has co-written a list of 15 suggestions for platforms to adopt or investigate. This stands out to me as particularly important:
Create a system for media to send metadata about their fact-checking, debunking, confirmation, and reporting on stories and memes to the platforms. It happens now: Mouse over fake news on Facebook and there’s a chance the related content that pops up below can include a news site or Snopes reporting that the item is false. Please systematize this: Give trusted media sources and fact-checking agencies a path to report their findings so that Facebook and other social platforms can surface this information to users when they read these items and — more importantly — as they consider sharing them. Thus we can cut off at least some viral lies at the pass. The platforms need to give users better information and media need to help them. Obviously, the platforms can use such data from both users and media to inform their standards, ranking, and other algorithmic decisions in displaying results to users.
These linked data connections are not difficult to implement but they won’t happen without us asking for them. Platforms simply aren’t interested.
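There is already a candidate vocabulary for exactly this kind of linked data: schema.org’s ClaimReview markup, which fact-checkers can embed in their pages so that platform crawlers can read verdicts mechanically. A minimal sketch in Python follows; the claim, URLs, and organisation name are invented for illustration, while the field names follow the published ClaimReview vocabulary:

```python
import json

# A hypothetical fact-check expressed as schema.org ClaimReview JSON-LD.
# The claim, rating, and URLs are invented for illustration; the field
# names follow the schema.org ClaimReview vocabulary.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Putting garlic in your shoes cures insomnia",
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://example-viral-site.test/garlic-shoes",
    },
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "datePublished": "2016-12-01",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,  # 1 = "False" on this outlet's 1-5 scale
        "bestRating": 5,
        "alternateName": "False",
    },
}

# Embedded in a page inside <script type="application/ld+json">…</script>,
# this lets a platform crawler attach the verdict to any share of the
# reviewed URL before the user hits the share button.
markup = json.dumps(claim_review, indent=2)
print(markup)
```

The point of Jarvis’s suggestion is the systematisation: the markup above only helps if platforms commit to crawling it and surfacing it at the moment of sharing.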
Same for this idea, also on the list:
Make the brands of those sources more visible to users. Media have long worried that the net commoditizes their news such that users learn about events “on Facebook” or “on Twitter” instead of “from the Washington Post.” We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.
A key issue that Caulfield has repeatedly noted is that Facebook doesn’t really care whether you read articles that are posted; just whether you react to them, helping the platform learn more about you, in order to improve its ad targeting:
Facebook, on the other hand, doesn’t think the content is the main dish. Instead, it monetizes other people’s content. The model of Facebook is to try to use other people’s external content to build engagement on its site. So Facebook has a couple of problems.
First, Facebook could include whole articles, except for the most part they can’t, because they don’t own the content they monetize. (Yes, there are some efforts around full story embedding, but again, this is not evident on the stream as you see it today). So we get this weird (and think about it a minute, because it is weird) model where you get the headline and a comment box and if you want to read the story you click it and it opens up in another tab, except you won’t click it, because Facebook has designed the interface to encourage you to skip going off-site altogether and just skip to the comments on the thing you haven’t read.
Second, Facebook wants to keep you on site anyway, so they can serve you ads. Any time you spend somewhere else reading is time someone else is serving you ads instead of them and that is not acceptable.
The more I read about this, the more dispirited I become. Those of us who care about limiting fake news need to gather around a set of ideas and actions; Jarvis’s list is the best we have so far.
Why we made the videos
Here’s what Prof. John Wolffe, an academic I worked with, said:
These short videos are designed to replicate on screen the experience of visiting seven of London’s principal religious buildings through the use of 360° technology. Each building is introduced by a leading member of the community associated with it.
Although Christianity has long lost its historic religious monopoly, it remains the largest religious tradition in London, and has indeed seen some resurgence in recent years. Hence three out of the seven buildings are Christian ones. St Paul’s Cathedral represents the Church of England, still the national church with residual ties to the state although actively supported only by a minority of London’s Christians. Westminster Cathedral and Jesus House represent the two numerically largest Christian groups, Roman Catholics and Pentecostals. The latter have grown particularly rapidly since the turn of the millennium.
The other four buildings represent London’s (and the UK’s) four largest religious minorities. The early eighteenth-century Bevis Marks Synagogue is a striking physical reminder that religious diversity has a long history in this country dating back to the readmission of the Jews in 1656. Hindus, Muslims and Sikhs have also had a longstanding presence in London, although major purpose-built places of worship such as the Neasden Temple, the East London Mosque and the Sri Guru Singh Sabha Gurdwara have only appeared in recent decades.
These buildings offer just one approach to the study of religion. They do however enable one to begin to appreciate some comparisons and contrasts between major traditions. To take the study further one needs, among other things, also to be aware of the countless smaller and inconspicuous places of worship to be found all over London and other towns and cities; to look at the rituals and practices taking place both in these buildings and in many other places; to understand the role of sacred texts and images in religious life; and to reflect on the nature and significance of religious experience. We should also balance the rich ‘insider’ perspectives offered in these videos with more detached academic analysis and remember that the rich internal diversity of religious traditions means that other ‘insiders’ might have different perspectives from the speaker in a particular video.
These films therefore serve as a ‘taster’ for a new Open University module, A227 Exploring Religion: Places, Practices, Texts and Experiences which will be offered from autumn 2017, and will pursue all these issues in depth.
Issues with publishing and embedding
This was a fun project! Despite the hype around 360° videos, several issues remain with publishing and embedding them on OpenLearn, the OU’s site for free learning:
- The videos display correctly when played on YouTube on desktop machines in Chrome.
- On mobiles and tablets, they work fine in the YouTube app, but not on mobile browsers. They display in their ‘unstitched’ state. Imagine the 360° video as a sphere, then flatten it out. It’s not attractive or useful.
- Even on a desktop, when the YouTube videos are embedded on a site, they usually (but not always) display unstitched.
- Uploading them to Facebook helps! They can be embedded on other sites without any noticeable problems on desktop browsers. Except…
- They don’t play in mobile browsers. The videos don’t even appear.
- Another option is to use Google VR, but there are more bugs and issues for various browser/OS combinations.
- OpenLearn itself isn’t responsively designed, making it harder for mobile users in general. We’re addressing this as part of a relaunch and redesign later this month.
It turns out that making 360° video work for all users is harder than the platforms would have you believe. There doesn’t seem to be a single way to present the videos to all users across all devices.
I’ve had to include some clunky advisory text on the OU site that some people probably won’t even notice. I’m holding off on promoting the videos widely on OU accounts until I can find a better way to do this.
Watch the videos
So, the videos below are embedded from Facebook. If you’re using a mobile or tablet, they may not play correctly or display at all. You can try opening the YouTube playlist in the YouTube app.
For now, the network hums along, mostly beneath the surface. A post from a Liberty Alliance page might find its way in front of a left-leaning user who might disagree with it or find it offensive, and who might choose to engage with the friend who posted it directly. But otherwise, such news exists primarily within the feeds of the already converted, its authorship obscured, its provenance unclear, its veracity questionable. It’s an environment that’s at best indifferent and at worst hostile to traditional media brands; but for this new breed of page operator, it’s mostly upside. In front of largely hidden and utterly sympathetic audiences, incredible narratives can take shape, before emerging, mostly formed, into the national discourse.
While this is (mostly) about the U.S. election, the same basic pattern exists in all territories at all times. The only surprising thing should be the scale (tiny) and profitability (staggeringly high) of the content farms.
The term “Weird Facebook” is fast becoming synonymous with Facebook pages dedicated to posting ironic memes — some of which, like Bernie Sanders’s Dank Meme Stash and I play KORN to my DMT plants, smoke blunts all day & do sex stuff, can clock over 100,000 followers. New York Magazine called them home to “thousands of the web’s most innovative weirdos,” while the Daily Dot called them “fodder for the guy you bought weed from in high school.” These larger groups often act like fan pages: One or a handful of admins make and post the memes for subscribers to like and share. But the delight of Weird Facebook is the network itself, which spills beyond these Facebook groups to the feeds of many of their members. “Dank Meme Stash” is only one realization of a vital and much more expansive sensibility. Weird Facebook lives in the posts that a loose community of artists, writers, weirdos and depressives make on their personal accounts and in conversation with each other. A genre emerges in these personal posts, something like a combination of performance art and comedy, and uniquely Facebookian: The art is in the performance of self, real or fictional or some combination thereof, with the depth and scope that a full profile, photo album and Timeline can allow.
This is an interesting change. Facebook clearly still shows share counts on their own buttons. It’s only the availability of data for third party buttons that has been removed. In other words, Facebook is trying to shut down third party share counters, in favor of making marketers either use no-count buttons like Twitter, or making them use the official Facebook buttons.
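For context, the third-party counters in question typically polled a Graph API URL-object lookup and read a count out of the response. A minimal sketch of that pattern, in Python: the endpoint version, the `engagement` field, and the payload shape are my assumptions about the (now restricted) API, and `ACCESS_TOKEN` is a placeholder, not a real credential.

```python
import json
from urllib.parse import urlencode

# Sketch of the lookup a third-party share-count widget relied on.
# Endpoint shape and the `engagement` field are assumptions about the
# (now restricted) Graph API; ACCESS_TOKEN is a placeholder.
GRAPH_ENDPOINT = "https://graph.facebook.com/v2.8/"

def build_count_url(page_url: str, token: str = "ACCESS_TOKEN") -> str:
    """Build the query a share-count widget would issue for a URL."""
    query = urlencode(
        {"id": page_url, "fields": "engagement", "access_token": token}
    )
    return GRAPH_ENDPOINT + "?" + query

def extract_share_count(response_body: str) -> int:
    """Pull the share count out of a response payload, defaulting to 0."""
    data = json.loads(response_body)
    return data.get("engagement", {}).get("share_count", 0)

# A response of roughly the shape the API returned (values invented):
sample = (
    '{"engagement": {"share_count": 1234, "comment_count": 56},'
    ' "id": "https://example.com/post"}'
)
```

Cutting off that data path is what forces site owners back to either count-free buttons or Facebook’s own widgets.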
Site owners everywhere will need to update or remove their sharing buttons. It’s questionable how useful having the count next to the button is to the audience anyway:
My question is actually how long Facebook’s buttons will continue showing share counts. I may be erring on the apocalyptic side here, but this hints to me at a larger change in the works. Facebook share counts are a good metric to monitor for tracking engagement rates, but the display of the counts wasn’t necessarily helpful or valuable.
Here’s what I think is the key takeaway:
[Marketers] didn’t work towards better goals, and treated share counts as the goal in and of themselves […] I’m not saying seeking engagement is a bad thing, but it’s just another example of fixation on a number that isn’t as meaningful as people thought it was.
We’ve heard from people that they specifically want to see fewer stories with clickbait headlines or link titles. These are headlines that intentionally leave out crucial information, or mislead people, forcing people to click to find out the answer. For example: “When She Looked Under Her Couch Cushions And Saw THIS… I Was SHOCKED!”; “He Put Garlic In His Shoes Before Going To Bed And What Happens Next Is Hard To Believe”; or “The Dog Barked At The Deliveryman And His Reaction Was Priceless.”
To address this feedback from our community, we’re making an update to News Feed ranking to further reduce clickbait headlines in the coming weeks. With this update, people will see fewer clickbait stories and more of the stories they want to see higher up in their feeds.