Posts tagged robots

What Writer's Block? Swedish Man and His Bot Have Authored 2.7 Million Wikipedia Articles

Via The Wall Street Journal:

Sverker Johansson could be the most prolific author you’ve never heard of.

Volunteering his time over the past seven years publishing to Wikipedia, the 53-year-old Swede can take credit for 2.7 million articles, or 8.5% of the entire collection, according to Wikimedia analytics, which measures the site’s traffic. His stats far outpace any other user, the group says.

He has been particularly prolific cataloging obscure animal species, including butterflies and beetles, and is proud of his work highlighting towns in the Philippines. About one-third of his entries are uploaded to the Swedish language version of Wikipedia, and the rest are composed in two versions of Filipino, one of which is his wife’s native tongue.

An administrator holding degrees in linguistics, civil engineering, economics and particle physics, he says he has long been interested in “the origin of things, oh, everything.”

It isn’t uncommon, however, for Wikipedia purists to complain about his method. That is because the bulk of his entries have been created by a computer software program—known as a bot. Critics say bots crowd out the creativity only humans can generate.

Mr. Johansson’s program scrubs databases and other digital sources for information, and then packages it into an article. On a good day, he says, his “Lsjbot” creates up to 10,000 new entries.
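
The pattern is straightforward to sketch: fill a pre-written template from structured database fields. A minimal, hypothetical illustration in Python (the field names, template wording, and example record are invented, not Lsjbot’s actual code):

```python
SPECIES_TEMPLATE = (
    "{name} is a species of {group} in the family {family}. "
    "It was first described by {author} in {year}."
)

def make_stub(record):
    """Render one structured database record into a short stub article."""
    return SPECIES_TEMPLATE.format(**record)

# Invented example record; a real run would read thousands of rows
# from a taxonomic database.
record = {"name": "Agrilus exampleus", "group": "beetle",
          "family": "Buprestidae", "author": "Smith", "year": 1901}
```

At 10,000 entries a day, the bottleneck is the quality of the source data, not the rendering.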

That’s one way to go about it. Some Wikipedia editors aren’t happy about it, though.


The Robots are Coming, Part 132

First, some background, via Kevin Roose at New York Magazine:

Earlier this week, one of my business-beat colleagues got assigned to recap the quarterly earnings of Alcoa, the giant metals company, for the Associated Press. The reporter’s story began: “Alcoa Inc. (AA) on Tuesday reported a second-quarter profit of $138 million, reversing a year-ago loss, and the results beat analysts’ expectation. The company reported strong results in its engineered-products business, which makes parts for industrial customers, while looking to cut costs in its aluminum-smelting segment.”

It may not have been the most artful start to a story, but it got the point across, with just enough background information for a casual reader to make sense of it. Not bad. The most impressive part, though, was how long the story took to produce: less than a second.

If you’re into robots and algorithms writing the news, the article’s worth the read. It’s optimistic, asserting that in contexts like earnings reports, sports roundups and the like, the automation frees journalists for more mindful work, such as analyzing what those earnings reports actually mean.

With 300 million robot-driven stories produced last year – more than all media outlets in the world combined, according to Roose – and an estimated billion stories in store for 2014, that’s a lot of freed up time to cast our minds elsewhere.

Besides, as Roose explains, “The stories that today’s robots can write are, frankly, the kinds of stories that humans hate writing anyway.”

More interesting, and more troubling, are the ethics behind algorithmically driven articles. Slate’s Nicholas Diakopoulos tackled this question in April, asking how we can incorporate robots into our news gathering with the level of transparency expected in today’s media environment. Part of his solution is understanding what he calls the “tuning criteria,” or the inherent biases, that are used to make editorial decisions when algorithms direct the news.

Here’s something else to chew on. Back to Roose:

Robot-generated stories aren’t all fill-in-the-blank jobs; the more advanced algorithms use things like perspective, tone, and humor to tailor a story to its audience. A robot recapping a basketball game, for example, might be able to produce two versions of a story using the same data: one upbeat story that reads as if a fan of the winning team had written it; and another glum version written from the loser’s perspective.
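
That two-versions idea reduces to keying tone templates to a perspective while the underlying game data stays the same. A toy sketch, with invented wording and fictional team names:

```python
# Two tone templates keyed to perspective; the game data is identical.
TONE_TEMPLATES = {
    "fan": "What a night! {winner} powered past {loser}, {w}-{l}.",
    "glum": "{loser} fell to {winner}, {w}-{l}, in a game to forget.",
}

def recap_for(perspective, game):
    """Render the same game data in the requested tone."""
    return TONE_TEMPLATES[perspective].format(**game)

game = {"winner": "Lakeside", "loser": "Harbor", "w": 88, "l": 61}
```

One data set in, as many framings out as there are templates.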

Apply this concept to a holy grail of startups and legacy organizations alike: customizing and personalizing the news just for you. Will future robots feed us a feel-good, meat-and-potatoes partisan diet of news based on the same sort of behavioral tracking the ad industry uses to deliver advertising? With the time and cost of producing multiple stories from the same data sets approaching zero, it’s not difficult to imagine a news site deciding that they’ll serve different versions of the same story based on perceived political affiliations.

That’s a conundrum, and one more worth exploring than whether an algorithm can give us a few paragraphs on who’s nominated for the next awards show.

Want more robots? Visit our Robots Tag.

Image: Twitter post, via @hanelly.

‘Robot’ to write 1 billion stories in 2014 but will you know it when you see it? | Poynter.

If you’re a human reporter quaking in your boots this week over news of a Los Angeles Times algorithm that wrote the newspaper’s initial story about an earthquake, you might want to cover your ears for this fact:

Software from Automated Insights will generate about 1 billion stories this year — up from 350 million last year, CEO and founder Robbie Allen told Poynter via phone.

FJP: Here’s a ponderable for you.

A few weeks ago, the New York Post reported that Quinton Ross died. Ross, a former Brooklyn Nets basketball player, didn’t know he was dead and soon let people know he was just fine.

“A couple (relatives) already heard it,” Ross told the Associated Press. “They were crying. I mean, it was a tough day, man, mostly for my family and friends… My phone was going crazy. I checked Facebook. Finally, I went on the Internet, and they were saying I was dead. I just couldn’t believe it.”

The original reporter on the story? A robot. Specifically, Wikipedia Live Monitor, created by Google engineer Thomas Steiner.

Slate explains how it happened:

Wikipedia Live Monitor is a news bot designed to detect breaking news events. It does this by listening to the velocity and concurrent edits across 287 language versions of Wikipedia. The theory is that if lots of people are editing Wikipedia pages in different languages about the same event and at the same time, then chances are something big and breaking is going on.

At 3:09 p.m. the bot recognized the apparent death of Quinton Ross (the basketball player) as a breaking news event—there had been eight edits by five editors in three languages. The bot sent a tweet. Twelve minutes later, the page’s information was corrected. But the bot remained silent. No correction. It had shared what it thought was breaking news, and that was that. Like any journalist, these bots can make mistakes.
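
The detection heuristic Slate describes (many edits, by many distinct editors, across many languages, in one window) can be sketched in a few lines. The thresholds below mirror the figures in the excerpt, eight edits by five editors in three languages, but the event format is an assumption:

```python
from collections import defaultdict

def detect_breaking(edits, min_edits=8, min_editors=5, min_languages=3):
    """Flag articles whose edit activity in one time window crosses all
    three thresholds: total edits, distinct editors, distinct languages."""
    stats = defaultdict(lambda: {"edits": 0, "editors": set(), "langs": set()})
    for article, editor, language in edits:
        s = stats[article]
        s["edits"] += 1
        s["editors"].add(editor)
        s["langs"].add(language)
    return [a for a, s in stats.items()
            if s["edits"] >= min_edits
            and len(s["editors"]) >= min_editors
            and len(s["langs"]) >= min_languages]
```

Note what the sketch makes obvious: nothing here checks whether the edits are true, only that they are numerous, which is exactly how the bot got Ross’s death wrong.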

Quick takeaway: Robots, like the humans that program them, are fallible.

Slower, existential takeaway: “How can we instill journalistic ethics in robot reporters?”

As Nicholas Diakopoulos explains in Slate, code transparency is an inadequate part of the answer. More important is understanding what he calls the “tuning criteria,” or the inherent biases, that are used to make editorial decisions when algorithms direct the news.

Read through for his excellent take.


Robots Reporting Earthquakes

Via Slate:

Ken Schwencke, a journalist and programmer for the Los Angeles Times, was jolted awake at 6:25 a.m. on Monday by an earthquake. He rolled out of bed and went straight to his computer, where he found a brief story about the quake already written and waiting in the system. He glanced over the text and hit “publish.” And that’s how the LAT became the first media outlet to report on this morning’s temblor. “I think we had it up within three minutes,” Schwencke told me.

If that sounds faster than humanly possible, it probably is. While the post appeared under Schwencke’s byline, the real author was an algorithm called Quakebot that he developed a little over two years ago. Whenever an alert comes in from the U.S. Geological Survey about an earthquake above a certain size threshold, Quakebot is programmed to extract the relevant data from the USGS report and plug it into a pre-written template. The story goes into the LAT’s content management system, where it awaits review and publication by a human editor.
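
As described, the Quakebot pattern is a threshold check plus a fill-in-the-blank template, with a human editor kept in the loop. A rough sketch; the field names, wording, and magnitude cutoff are assumptions, since the LAT’s actual template isn’t public in this excerpt:

```python
QUAKE_TEMPLATE = (
    "A magnitude {mag} earthquake struck {distance} miles from {place} "
    "at {time}, according to the U.S. Geological Survey."
)

def draft_quake_story(alert, min_magnitude=3.0):
    """Return a draft story for a human editor to review,
    or None when the quake is below the size threshold."""
    if alert["mag"] < min_magnitude:
        return None
    return QUAKE_TEMPLATE.format(**alert)
```

The design choice worth noticing is that the bot drafts but never publishes; Schwencke still hit the button.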

Interested in – or freaked out about – robots writing your news? Check our Robots Tag.

Image: Screenshot, text I received from my brother Peter this morning. – Michael

Where Robots Create Your Weekly Paper

Via Nieman Lab:

The Guardian is experimenting in the craft newspaper business and getting some help from robots.

That may sound odd, given that the company prints a daily paper read throughout Britain. A paper staffed by humans. But the company is tinkering with something smaller and more algorithm-driven.

The Guardian has partnered with The Newspaper Club, a company that produces small-run DIY newspapers, to print The Long Good Read, a weekly print product that collects a handful of The Guardian’s best longform stories from the previous seven days. The Newspaper Club runs off a limited number of copies, which are then distributed at another Guardian experiment: a coffee shop in East London. That’s where, on Monday mornings, you’ll find a 24-page tabloid with a simple layout available for free.

On the surface, The Long Good Read has the appeal of being a kind of analog Instapaper for all things Guardian. But the interesting thing is how the paper is produced: robots. Okay, algorithms if you want to be technical — algorithms and programs that both select the paper’s stories and lay them out on the page.

Jemima Kiss, head of technology for The Guardian, said The Long Good Read is another attempt at finding ways to give stories new life beyond the day they’re published: “It’s just a way of reusing that content in a more imaginative way and not getting too hung up on the fact it’s a newspaper.”

Read through to see how it’s done.

Penn State UPenn Creates All-Terrain Walking Robot

Researchers at Penn State UPenn are teaching Rhex (short for “robot hexapod”) how to travel across varied terrain by basing its movements on parkour, an inventive way of propelling yourself from Point A to Point B as quickly as possible using only your body and your surroundings.

The robot is unique because it’s equipped with legs instead of wheels, so researchers are taking their design inspiration from the movements of humans. Rhex can jump, back-flip, and even pull itself up over obstacles that are bigger than the robot itself.

Video: University of Pennsylvania 

Update: Sorry for the Penn State/UPenn snafu. (Go Quakers!)

The Robots Will Now Grade Your Papers

Via the New York Times:

Imagine taking a college exam, and, instead of handing in a blue book and getting a grade from a professor a few weeks later, clicking the “send” button when you are done and receiving a grade back instantly, your essay scored by a software program.

And then, instead of being done with that exam, imagine that the system would immediately let you rewrite the test to try to improve your grade.

EdX, the nonprofit enterprise founded by Harvard and the Massachusetts Institute of Technology to offer courses on the Internet, has just introduced such a system and will make its automated software available free on the Web to any institution that wants to use it. The software uses artificial intelligence to grade student essays and short written answers, freeing professors for other tasks.

FJP: As we said when we first heard about this, “Perhaps if robots are grading the papers, students can use robots to write the papers. Then everyone can call it even and head outdoors for class.”

FJP: Last week we highlighted a Slate article that looked into the morality of war and robots. In particular, that autonomous war “machines are not, and cannot be, legally accountable for their actions.”

Today, Human Rights Watch released “Losing Humanity: The Case Against Killer Robots,” a 50-page report arguing for the ban of fully autonomous weapon systems.

humanrightswatch:

Ban ‘Killer Robots’ Before It’s Too Late

Losing Humanity is the first major publication about fully autonomous weapons by a nongovernmental organization and is based on extensive research into the law, technology, and ethics of these proposed weapons. It is jointly published by Human Rights Watch and the Harvard Law School International Human Rights Clinic.

Human Rights Watch and the International Human Rights Clinic called for an international treaty that would absolutely prohibit the development, production, and use of fully autonomous weapons. They also called on individual nations to pass laws and adopt policies as important measures to prevent development, production, and use of such weapons at the domestic level.

Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield. The United States is a leader in this technological development. Several other countries – including China, Germany, Israel, South Korea, Russia, and the United Kingdom – have also been involved. Many experts predict that full autonomy for weapons could be achieved in 20 to 30 years, and some think even sooner.

Read more after the jump.

Robots, War and Morality

Via Slate:

The “Global Campaign To Stop Killer Robots” kicked off in New York on Oct. 21. Nobel Peace Prize Laureate Jody Williams urged the nations of the world to act against lethal autonomous robots, declaring them “beyond the pale.” Williams is not alone; on CNN earlier in October, Peter Bergen, the author of several best-selling books about Osama Bin Laden, also argued for a convention regulating lethal robots. The International Committee for Robot Arms Control, a group of academic experts on robot technologies and international security, is on board as well. The pressure on the robots is mounting.

Underlying the debate about “killer robots” is concern that machines are not, and cannot be, legally accountable for their actions. As professor Oren Gross of the University of Miami School of Law told this year’s inaugural “We Robot” conference on robots and the law in April, domestic and international law are not well suited to dealing with robots that commit war crimes.

As technology advances, we face a very real danger that it will become increasingly difficult to hold those who wage war on our behalf accountable for what they do.

Paul Robinson, Slate. Who Will Be Accountable for Military Technology?


The AP Plans to Use Robotic Cameras for Olympic Coverage

The Associated Press isn’t just sending photographers, photo editors and video journalists to the Olympics. They’re also booting up the robots.

Via the AP:

Remote-controlled robotic cameras at the swimming, weightlifting and diving venues will provide alternative angles, including under water, to supplement AP’s regular photo coverage. In addition to a selection of hand-placed remote cameras at several other venues, such as those for gymnastics and track and field, AP photographers will use the latest Canon 1DX cameras and take advantage of new workflows and technology to move more photos faster than ever before.

Being the remote operator would be a fun gig. — Michael


Just Another 47 Passenger Carrying Mechanical Elephant

Via Les Machines de L’ile

I like to know I’m writing for a real flesh-and-blood reader who is excited by the words on the page. I’m sure children feel the same way.

Harvard College Writing Program director Thomas Jehn • Fathoming the idea of automated essay grading — essentially, essays graded by robots. The idea is getting pitched in a contest by the William and Flora Hewlett Foundation, which plans to offer $100,000 in prize money to any group of programmers that can figure out a way to automate the process of grading essays. We’re with Jehn: If students are spending all this time writing essays, it’s only right that the person on the other side of the coin is also a human being. (via shortformblog)

FJP: Perhaps if robots are grading the papers, students can use robots to write the papers. Then everyone can call it even and head outdoors for class.


Future Journalists, Pounding the Pavement

The following was written by a robot:

Newt Gingrich received the largest increase in Tweets about him today. Twitter activity associated with the candidate has shot up since yesterday, with most users tweeting about taxes and character issues. Newt Gingrich has been consistently popular on Twitter, as he has been the top riser on the site for the last four days. Conversely, the number of tweets about Ron Paul has dropped in the past 24 hours. Another traffic loser was Rick Santorum, who has also seen tweets about him fall off a bit.

While the overall tone of the Gingrich tweets is positive, public opinion regarding the candidate and character issues is trending negatively. In particular, @MommaVickers says, “Someone needs to put The Blood Arm’s ‘Suspicious Character’ to a photo montage of Newt Gingrich. #pimp”.

Stilted and inelegant to be sure, the computer-generated story was created by Narrative Science, an Illinois-based startup that combines machine learning, data analysis and artificial intelligence to produce short- and long-form articles from data-heavy industries such as real estate, finance, sports and polling.

For example, Narrative Science technology creates computer generated sports recaps for the Big Ten Network, a joint venture between the Big Ten Conference and Fox Networks.

As the New York Times explained last fall:

The Narrative Science software can make inferences based on the historical data it collects and the sequence and outcomes of past games. To generate story “angles,” explains Mr. Hammond of Narrative Science, the software learns concepts for articles like “individual effort,” “team effort,” “come from behind,” “back and forth,” “season high,” “player’s streak” and “rankings for team.” Then the software decides what element is most important for that game, and it becomes the lead of the article, he said. The data also determines vocabulary selection. A lopsided score may well be termed a “rout” rather than a “win.”
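
The angle and vocabulary selection the Times describes can be approximated with simple rules keyed to the box score. The thresholds, labels, and team names below are invented for illustration, not Narrative Science’s real criteria:

```python
def score_verb(margin):
    """Vocabulary selection: a lopsided score becomes a 'rout'."""
    if margin >= 20:
        return "routed"
    if margin >= 10:
        return "beat"
    return "edged"

def pick_angle(winner_trailed_late, is_season_high):
    """Crude stand-in for choosing the most important story concept,
    which then becomes the lead of the article."""
    if winner_trailed_late:
        return "come from behind"
    if is_season_high:
        return "season high"
    return "team effort"

def recap_lead(winner, loser, w_pts, l_pts):
    """Build the lead sentence from the final score."""
    return f"{winner} {score_verb(w_pts - l_pts)} {loser} {w_pts}-{l_pts}."
```

The real software presumably learns these concepts from historical data rather than hard-coding them, but the output shape is the same: data in, an angle and a vocabulary out.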

Glass half empty: journalists will be automated out of their jobs.

Glass half full: journalists will be freed from writing drudgey news summaries and can focus on more significant work.

Image: via Senor Roboto (yes, I smiled too).

Robots, Video and the News

A South Korean startup called Shakr is automating video news production for the web.

To do so, they run a semantic analysis of top news stories, send out bots to gather images and publicly available video about the story, send out another bot to gather text to be read by an automated voice, mash it all together and boom, a video’s produced on the topic.
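
That pipeline reduces to a chain of stages. In this runnable skeleton every stage is a trivial placeholder standing in for the real component (semantic analysis, the media-gathering bots, the automated voice); none of it reflects Shakr’s actual code:

```python
def extract_topics(story_text):
    # Placeholder "semantic analysis": treat capitalized words as topics.
    return [w.strip(".,") for w in story_text.split() if w[:1].isupper()]

def gather_media(topics):
    # Placeholder for the bots that fetch public images and footage.
    return ["clip-for-" + t for t in topics]

def write_voiceover(story_text):
    # Placeholder for the text handed to the automated voice.
    return story_text[:140]

def make_news_video(story_text):
    topics = extract_topics(story_text)
    return {"media": gather_media(topics), "script": write_voiceover(story_text)}
```

Because each stage only depends on the story text, the whole chain can run as soon as a story breaks, which is what makes near-realtime output plausible.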

The company says it can do it almost in realtime.

Via ReadWriteWeb:

Shakr is led by David Lee, an entrepreneur we wrote about first for his work on video chat platform Tinychat. Lee says the new company has raised seed funding and has already secured a deal with Tatter Media, a large South Korean blog syndicate. Shakr will automatically produce video versions of that company’s bloggers’ text, in near real time. A consumer-facing app will also allow end users to create multi-media shows out of their home media assets.

“For writers the transition to video is lucrative but extremely expensive on the front-end,” Lee says. “We will help bloggers and small online news sites compete with the powerhouses of online content by turning out video even faster than the big boys do.”

A network of “little guys” all participating with the most powerful parts of their computers will enable Shakr to create news video automatically, faster than Fox News or CNN. That’s the company’s aim, but there’s no need to stop at news, either.

If this sounds vaguely familiar, you might remember that the StatSheet sports network is written by robots.