If you’re a human reporter quaking in your boots this week over news of a Los Angeles Times algorithm that wrote the newspaper’s initial story about an earthquake, you might want to cover your ears for this fact:
Software from Automated Insights will generate about 1 billion stories this year — up from 350 million last year, CEO and founder Robbie Allen told Poynter via phone.
FJP: Here’s a ponderable for you.
A few weeks ago, the New York Post reported that Quinton Ross died. Ross, a former Brooklyn Nets basketball player, didn’t know he was dead and soon let people know he was just fine.
"A couple (relatives) already heard it," Ross told the Associated Press. “They were crying. I mean, it was a tough day, man, mostly for my family and friends… My phone was going crazy. I checked Facebook. Finally, I went on the Internet, and they were saying I was dead. I just couldn’t believe it.”
The original reporter on the story? A robot. Specifically, Wikipedia Live Monitor, created by Google engineer Thomas Steiner.
Slate explains how it happened:
Wikipedia Live Monitor is a news bot designed to detect breaking news events. It does this by listening to the velocity and concurrent edits across 287 language versions of Wikipedia. The theory is that if lots of people are editing Wikipedia pages in different languages about the same event and at the same time, then chances are something big and breaking is going on.
At 3:09 p.m. the bot recognized the apparent death of Quinton Ross (the basketball player) as a breaking news event—there had been eight edits by five editors in three languages. The bot sent a tweet. Twelve minutes later, the page’s information was corrected. But the bot remained silent. No correction. It had shared what it thought was breaking news, and that was that. Like any journalist, these bots can make mistakes.
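If you’re curious about the mechanics, here’s a minimal sketch of that kind of velocity-and-concurrency heuristic. To be clear, this is not Steiner’s actual Wikipedia Live Monitor code; the thresholds, names, and data structures are our own assumptions, chosen to mirror the “eight edits by five editors in three languages” example above.

```python
# Illustrative sketch only -- not the actual Wikipedia Live Monitor implementation.
# It mimics the heuristic Slate describes: flag an article as "breaking news" when
# a burst of edits arrives quickly, from several distinct editors, across several
# language editions of Wikipedia. All thresholds here are assumptions.

from collections import namedtuple

Edit = namedtuple("Edit", ["article", "language", "editor", "timestamp"])

# Assumed thresholds, loosely matching the example in the story
# (eight edits by five editors in three languages).
MIN_EDITS = 8
MIN_EDITORS = 5
MIN_LANGUAGES = 3
WINDOW_SECONDS = 240  # how recent the burst of edits must be


def is_breaking_news(edits, now):
    """Return True if recent edit activity looks like a breaking news event."""
    recent = [e for e in edits if now - e.timestamp <= WINDOW_SECONDS]
    editors = {e.editor for e in recent}
    languages = {e.language for e in recent}
    return (
        len(recent) >= MIN_EDITS
        and len(editors) >= MIN_EDITORS
        and len(languages) >= MIN_LANGUAGES
    )


# Example: a burst of edits to the Quinton Ross article in three languages.
edits = [
    Edit("Quinton Ross", lang, editor, t)
    for lang, editor, t in [
        ("en", "A", 0), ("en", "B", 30), ("en", "C", 60), ("en", "A", 90),
        ("es", "D", 100), ("es", "B", 120), ("de", "E", 150), ("de", "D", 180),
    ]
]
print(is_breaking_news(edits, now=200))  # True: 8 edits, 5 editors, 3 languages
```

Note what the sketch can’t do: it counts edits, editors, and languages, but it has no way to tell whether the edits are true. That’s exactly the gap the bot fell into.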
Quick takeaway: Robots, like the humans that program them, are fallible.
Slower, existential takeaway: “How can we instill journalistic ethics in robot reporters?”
As Nicholas Diakopoulos explains in Slate, code transparency alone is an inadequate answer. More important is understanding what he calls the “tuning criteria,” the inherent biases that shape editorial decisions when algorithms direct the news.
Read through for his excellent take.