In an interview about his newest project (the just-over-one-year-old long-form platform Medium), Twitter co-founder Evan Williams shared a few thoughts on the uselessness of general news and the need for a platform to highlight ideas of lasting import.
Williams is taking aim squarely at the news industry’s most embarrassing vulnerability: the incessant need to trump up mundane happenings in order to habituate readers into needing news like a daily drug fix.
“News in general doesn’t matter most of the time, and most people would be far better off if they spent their time consuming less news and more ideas that have more lasting import,” he tells me during our interview inside a temporary Market Street office space that’s housing Medium, until the top two floors are ready for his growing team. “Even if it’s fiction, it’s probably better most of the time.”
[…] Instead, Williams argues, citizens should re-calibrate their ravenous appetite for information towards more awe-inspiring content. “Published written ideas and stories are life-changing,” he gushes, recalling his early childhood fascination with books as the motivation to take on the media establishment. The Internet “was freeing that up, that excitement about knowledge that’s inside of books, multiplied and freed and unlocked for the world; and the world would be better in every way.”
In Williams’s grand vision, the public reads for enlightenment; news takes a backseat, earning attention only in proportion to how often it leaves us more informed and inspired.
This is a valid and noble ambition, and it resonates with more than a few people. In a letter to a young journalist, Pulitzer Prize-winning writer Lane DeGregory looks back on her career and says she wishes she had “read more short stories and fewer newspaper articles.”
It also echoes what Maria Popova has been aiming to do with her curatorial interestingness project, Brain Pickings, for years now. Last week, she wrote a must-read piece on tech writer Clive Thompson’s new book, which pushes past “painfully familiar and trite-by-overuse notions like distraction and information overload,” to deeply examine the impact of digital tools. She writes:
Several decades after Vannevar Bush’s now-legendary meditation on how technology will impact our thinking, Thompson reaches even further into the fringes of our cultural sensibility — past the cheap techno-dystopia, past the pollyannaish techno-utopia, and into that intricate and ever-evolving intersection of technology and psychology.
The Problem: Though I’ve been excited about Medium and its potential, I’m inclined to file Williams’s vision for it into the “pollyannaish techno-utopia” bucket that Popova mentions. The impulse behind it (the desire for an antidote to our ravenous appetite for tidbits of useless information) is something I wholeheartedly agree with, but algorithmic curation worries me.
Traditional news editors stake their reputations on having an intuition for what drives eyeballs to their sites. Editors don’t, however, know whether readers leave more informed.
Williams thinks Medium has an answer, which he’s discussing publicly for the first time: an intelligent algorithm that suggests stories based primarily on how long users spend reading them. Like Pandora did for music discovery, Medium’s new intelligent curator aims to improve on the old human-powered system of manually scrolling through the Internet and asking others what to read.
In the algorithm itself, Medium prioritizes time spent on an article rather than simple page views. “Time spent is not actually a value in itself, but in a world where people have infinite choices, it’s a pretty good measure if people are getting value,” explains Williams.
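Medium hasn’t published its ranking code, so the following is only an illustrative sketch of the distinction Williams is drawing: counting views rewards whatever gets clicked, while summing session durations rewards whatever gets read. The function name and the sample data are my own invention.

```python
# Illustrative sketch only: Medium has not published its algorithm.
# Assumed input: a mapping from article title to a list of reading
# sessions, each recorded as a duration in seconds.

def rank_by_time_spent(articles):
    """Rank articles by total reading time rather than raw view counts.

    A view-based ranking would just compare len(sessions); a
    time-spent ranking compares sum(sessions) instead.
    """
    scores = {title: sum(sessions) for title, sessions in articles.items()}
    return sorted(scores, key=scores.get, reverse=True)

sessions = {
    "listicle":   [5, 8, 4, 6, 7, 5, 9, 6],  # many clicks, quick bounces
    "long essay": [420, 380, 610],           # few readers, deep reads
}

print(rank_by_time_spent(sessions))  # the essay outranks the listicle
```

The listicle “wins” on views (eight to three) but loses badly on time spent, which is exactly the behavior Williams is arguing for.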
"Time spent" seems like a questionable way to measure value, if "enlightening" content is what Medium wants to put on the screens of readers. As a content-neutral long-form discovery platform, sure, it makes sense. And there isn’t really anything wrong with it either. But touting itself as a solution to our appetite for endless streams of meaningless information seems troubling to me. Here’s why:
A key aspect of Thompson’s argument about the good the internet has done for our brains is that it has given us unprecedented access to one another’s memory stores, which means our ability to discover information and understand the world through it has expanded enormously. To oversimplify: we don’t have to remember as much ourselves; we simply need to remember where information is stored and how to access it quickly. While the benefits are obvious, the trouble is that this hampers creative thought and our ability to make connections.
On platforms like Medium, longer isn’t necessarily better, especially when the discovery of value is left to machines. Popova excerpts a portion of Thompson’s book in which he explains how an algorithm’s biases exist but are almost impossible to identify:
The real challenge of using machines for transactive memory lies in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners’ minds work — where they’re strong, where they’re weak, where their biases lie. I can judge that for people close to me. But it’s harder with digital tools, particularly search engines. You can certainly learn how they work and develop a mental model of Google’s biases. … But search companies are for-profit firms. They guard their algorithms like crown jewels. This makes them different from previous forms of outboard memory. A public library keeps no intentional secrets about its mechanisms; a search engine keeps many. On top of this inscrutability, it’s hard to know what to trust in a world of self-publishing. To rely on networked digital knowledge, you need to look with skeptical eyes. It’s a skill that should be taught with the same urgency we devote to teaching math and writing.
Popova explains that without a mental pool of resources from which to connect existing ideas into new combinations (and, I’d add, to access, retain, and be “enlightened” by information), our capacity to do so is diminished.
TL;DR: Popova’s piece doesn’t directly address or assess discovery platforms like Medium, but I think it’s worth considering them together. Longer-form writing isn’t an antidote to short bites of information, and ideas of lasting value can’t be judged by the time spent consuming them. The point here is that content platforms that truly seek to give people access to more ideas of lasting import have a lot more work to do, namely: (1) the limitations of algorithmic curation need to be transparent and talked about, and (2) readers need to be taught how to critically consume self-published writing that reaches them through digitally networked knowledge. —Jihii
Even robots have biases.
Any decision process, whether human or algorithm, about what to include, exclude, or emphasize — processes of which Google News has many — has the potential to introduce bias. What’s interesting in terms of algorithms though is that the decision criteria available to the algorithm may appear innocuous while at the same time resulting in output that is perceived as biased.
We need, in short, to pay attention to the materiality of algorithmic processes. By that, I do not simply mean the materiality of the algorithmic processing (the circuits, server farms, internet cables, super-computers, and so on) but to the materiality of the procedural inputs. To the stuff that the algorithm mashes up, rearranges, and spits out.
CW Anderson, Culture Digitally, “The Materiality of Algorithms.”
In what reads like a starting point for more posts on the subject, CUNY professor Chris Anderson discusses which documents journalists may want to design algorithms for, and just how hard that task will be.
Algorithms doing magic inside massive data sets and search engines, while not mathematically simple, are generally easy to conceptualize: the algorithm and its data sit in the computer, the algorithm sifts through the spreadsheet in the background, and bam! you have something.
But if you’re working with poorly organized documents, it’s difficult to simply plug them in.
Chris writes that the work required to include any document in a set will shape the algorithm that makes sense of the whole bunch. This will be a problem for journalists who want to examine documents made without much forethought, which is to say: government documents, phone records from different companies and countries, eyewitness reports, police sketches, mugshots, bank statements, tax forms, and hundreds of other things worth investigating.
The recovered text [from these documents] is a mess, because these documents are just about the worst possible case for OCR [optical character recognition]: many of these documents are forms with a complex layout, and the pages have been photocopied multiple times, redacted, scribbled on, stamped and smudged. But large blocks of text come through pretty well, and this command extracts what text there is into one file per page.
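Stray’s actual extraction command is elided above, but the cleanup problem he describes is easy to illustrate. A crude, hypothetical post-OCR filter might keep only lines that are long enough and mostly alphabetic, dropping the smudge-and-stamp garbage while preserving the “large blocks of text” that come through well. This is a heuristic sketch of my own, not Stray’s pipeline:

```python
# Hypothetical heuristic, not Stray's actual pipeline: keep OCR'd lines
# that look like real prose, drop lines dominated by recognition noise.

def filter_ocr_noise(text, min_alpha_ratio=0.7, min_length=20):
    """Keep lines that are long enough and mostly letters/spaces."""
    kept = []
    for line in text.splitlines():
        stripped = line.strip()
        if len(stripped) < min_length:
            continue  # short fragments are usually noise
        alpha = sum(ch.isalpha() or ch.isspace() for ch in stripped)
        if alpha / len(stripped) >= min_alpha_ratio:
            kept.append(stripped)
    return kept

# Simulated OCR output: two prose lines surrounded by garbage.
page = """\
%@#! ~~ 3$f .,;
Large blocks of text come through pretty well after OCR.
x7&* REDACTED )((
The pages have been photocopied, redacted, and smudged."""

print(filter_ocr_noise(page))  # only the two prose lines survive
```

The thresholds are arbitrary, which is precisely Anderson’s point: even this tiny inclusion decision quietly shapes which documents the downstream algorithm ever sees.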
To read the rest of Jonathan Stray’s account, see his Overview Project.
And to see more with Chris Anderson, see our recent video interviews with him.