Posts tagged with ‘artificial intelligence’

These are the humans trying to give our jobs to robots

There’s been a lot of talk lately about Narrative Science, its boss Kristian Hammond, and their algorithmic journalist robots of the future. Most of the controversy has been over a few audacious comments, as most controversy usually is (via Wired):

Last year at a small conference of journalists and technologists, I asked Hammond to predict what percentage of news would be written by computers in 15 years. At first he tried to duck the question, but with some prodding he sighed and gave in: “More than 90 percent.”

He also predicted that a computer will win the Pulitzer Prize by 2017. That’s just talk, though: reading what his algorithms have produced so far, it’s hard to expect a Pulitzer, but the 90 percent prediction is not as easy to rebut.

via Slate, on what the robots cover:

Narrative Science is one of several companies developing automated journalism software. These startups work primarily in niche fields—sports, finance, real estate—in which news stories tend to follow the same pattern and revolve around statistics. 

Take the financial articles that NS writes for Forbes, as considered a little later in the article:

Don’t miss the irony here: Automated platforms are now “writing” news reports about companies that make their money from automated trading. These reports are eventually fed back into the financial system, helping the algorithms to spot even more lucrative deals. Essentially, this is journalism done by robots and for robots. The only upside here is that humans get to keep all the cash.

Following the diplomatic and commodity trails that influence stock prices, or tracking stats and numbers in sports to find stories, may eventually become obsolete tasks for us humans as robots begin to cover them faster and more efficiently. And now that Narrative Science has begun to crawl Twitter for election coverage, its scope may (soon! soon!) slowly grow.

FJP: As for what this post covers, the concern is a lot like other worries people have about today’s journalism. In the same way that programmers and bloggers won’t replace columnists and reporters, but will instead facilitate, complement, and in all sorts of ways share the new workload, so too might Narrative Science-esque algorithms take on some of the responsibilities that future journalism expects but that would be difficult, unreasonable, or even impossible for, say, a journalist from ten years ago to handle.

Photo courtesy of Narrative Science.

Will Robots Herd us into Information Silos? →

Evgeny Morozov, author of The Net Delusion: The Dark Side of Internet Freedom, writes about two information trends he worries will limit the news and ideas we’re exposed to.

The first is robots, which we’ve written about here. These aren’t the tin-can humanoids of jurassic sci-fi but rather the artificial intelligence used by the likes of Narrative Science, an Illinois-based startup that turns data into written prose.

For example, as Morozov points out, Forbes uses Narrative Science to automatically generate articles on corporate earnings statements. Other organizations use the company’s data analysis and artificial intelligence to create articles on real estate, sports and polling.

The second trend is the personal customization all the Internet heavies are working so hard to deliver. We already know, for example, that the ads we see as we move from site to site reflect where we’ve been and what we’ve indicated we liked before arriving at the page in question.

Swap the ad tracking for content and we begin to see that the stories we’re exposed to are similarly customized to our tastes. Have a political slant? The recommendation algorithms will make sure you get your daily dose of red meat.

So what happens when you marry the two? When robot-generated news articles can be endlessly produced on the fly at little to no cost, and those articles are customized to a viewer’s taste? We have something along the lines of what Morozov describes here:

[T]he rise of “automated journalism” may eventually present a new and different challenge, one that the excellent discovery mechanisms of social media cannot solve yet: What if we click on the same link that, in theory, leads to the same article but end up reading very different texts?

How will it work? Imagine that my online history suggests that I hold an advanced degree and that I spend a lot of time on the websites of the Economist or the New York Review of Books; as a result, I get to see a more sophisticated, challenging, and informative version of the same story than my USA Today-reading neighbor. If one can infer that I’m also interested in international news and global justice, a computer-generated news article about Angelina Jolie might end by mentioning her new film about the war in Bosnia. My celebrity-obsessed neighbor, on the other hand, would see the same story end with some useless gossipy tidbit about Brad Pitt.

Producing and tweaking stories on the spot, customized to suit the interests and intellectual habits of just one particular reader, is exactly what automated journalism allows—and why it’s worth worrying about. Advertisers and publishers love such individuation, which could push users to spend more time on their sites. But the social implications are quite dubious. At the very least, there’s a danger that some people might get stuck in a vicious news circle, consuming nothing but information junk food and having little clue that there is a different, more intelligent world out there.
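
To make that mechanism concrete, here is a minimal Python sketch of the kind of per-reader tailoring Morozov describes: the same underlying story rendered with different endings depending on an inferred reader profile. The profiles, topics and sentences are hypothetical illustrations, not anyone’s actual pipeline.

    # A minimal sketch of per-reader story customization, as Morozov describes it.
    # All profiles, topics and sentences are hypothetical.

    READER_PROFILES = {
        "alice": {"interests": {"international", "global-justice"}},
        "bob":   {"interests": {"celebrity"}},
    }

    STORY_ENDINGS = {
        "international": "Her new film revisits the war in Bosnia.",
        "celebrity":     "Meanwhile, sources say Brad Pitt was spotted nearby.",
    }

    def render_story(base_text, reader):
        """Return the same underlying story, tailored to one reader's profile."""
        profile = READER_PROFILES[reader]
        # Pick the first ending whose topic matches the reader's inferred interests.
        ending = next(
            (line for topic, line in STORY_ENDINGS.items()
             if topic in profile["interests"]),
            "",  # fall back to no extra ending
        )
        return f"{base_text} {ending}".strip()

    base = "Angelina Jolie appeared at a premiere in New York on Tuesday."
    print(render_story(base, "alice"))  # ends with the Bosnia film
    print(render_story(base, "bob"))    # ends with the gossipy tidbit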

The upside to this downside state of affairs is something we’ve mentioned before: automation technologies like those from Narrative Science could theoretically free up journalists to do deeper, more analytical work.

The downside to the downside: as algorithms push us into information silos, nobody will actually see it happening.

Evgeny Morozov, Slate, A Robot Stole My Pulitzer!

Future Journalists, Pounding the Pavement

The following was written by a robot:

Newt Gingrich received the largest increase in Tweets about him today. Twitter activity associated with the candidate has shot up since yesterday, with most users tweeting about taxes and character issues. Newt Gingrich has been consistently popular on Twitter, as he has been the top riser on the site for the last four days. Conversely, the number of tweets about Ron Paul has dropped in the past 24 hours. Another traffic loser was Rick Santorum, who has also seen tweets about him fall off a bit.

While the overall tone of the Gingrich tweets is positive, public opinion regarding the candidate and character issues is trending negatively. In particular, @MommaVickers says, “Someone needs to put The Blood Arm’s ‘Suspicious Character’ to a photo montage of Newt Gingrich. #pimp”.

Stilted and inelegant, to be sure, the computer-generated story was created by Narrative Science, an Illinois-based startup that combines machine learning, data analysis and artificial intelligence to produce short- and long-form articles from data-heavy industries such as real estate, finance, sports and polling.
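
As a rough illustration of how a recap like the one above might be assembled, here is a short Python sketch that turns day-over-day tweet counts into template sentences. The figures are invented and the logic is a guess at the general approach, not Narrative Science’s actual code.

    # Turn hypothetical day-over-day tweet counts into a templated recap.
    tweet_counts = {
        # candidate: (yesterday, today) -- made-up figures
        "Newt Gingrich": (8200, 14600),
        "Ron Paul":      (9100, 6500),
        "Rick Santorum": (4300, 3900),
    }

    def daily_recap(counts):
        changes = {name: today - yesterday
                   for name, (yesterday, today) in counts.items()}
        top_riser = max(changes, key=changes.get)
        decliners = [name for name, delta in changes.items() if delta < 0]

        sentences = [
            f"{top_riser} received the largest increase in tweets about him today."
        ]
        for i, name in enumerate(decliners):
            if i == 0:
                sentences.append(f"Conversely, the number of tweets about {name} "
                                 "has dropped in the past 24 hours.")
            else:
                sentences.append(f"Another traffic loser was {name}, who has also "
                                 "seen tweets about him fall off a bit.")
        return " ".join(sentences)

    print(daily_recap(tweet_counts))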

For example, Narrative Science technology creates computer-generated sports recaps for the Big Ten Network, a joint venture between the Big Ten Conference and Fox Networks.

As the New York Times explained last fall:

The Narrative Science software can make inferences based on the historical data it collects and the sequence and outcomes of past games. To generate story “angles,” explains Mr. Hammond of Narrative Science, the software learns concepts for articles like “individual effort,” “team effort,” “come from behind,” “back and forth,” “season high,” “player’s streak” and “rankings for team.” Then the software decides what element is most important for that game, and it becomes the lead of the article, he said. The data also determines vocabulary selection. A lopsided score may well be termed a “rout” rather than a “win.”
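
A toy Python sketch of that selection logic might look like the following: check a handful of candidate angles against the box score, lead with the first one that fires, and let the margin pick the verb. The thresholds, teams and numbers are invented for illustration; the real software is considerably more sophisticated.

    # Pick a story "angle" from game data and let the data choose the vocabulary.
    def pick_angle(game):
        angles = {
            "come from behind":  game["winner_trailed_by"] >= 10,
            "individual effort": game["top_scorer_points"] >= 40,
            "back and forth":    game["lead_changes"] >= 12,
        }
        for angle, triggered in angles.items():
            if triggered:
                return angle
        return "team effort"

    def describe_margin(margin):
        # A lopsided score becomes a "rout" rather than a "win".
        return "routed" if margin >= 20 else "beat"

    game = {
        "winner": "Wisconsin", "loser": "Indiana", "margin": 23,
        "winner_trailed_by": 4, "top_scorer_points": 41, "lead_changes": 3,
    }
    lead_angle = pick_angle(game)           # "individual effort"
    verb = describe_margin(game["margin"])  # "routed"
    print(f"{game['winner']} {verb} {game['loser']} behind a big {lead_angle}.")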

Glass half empty: journalists will be automated out of their jobs.

Glass half full: journalists will be freed from writing drudgey news summaries and can focus on more significant work.

Image: via Señor Roboto (yes, I smiled too).

Computer-Generated Articles Are Gaining Traction →

lifeandcode:

Cue 10K media blogs losing their shit. 

thereadingspace:

“The clever code is the handiwork of Narrative Science, a start-up in Evanston, Ill., that offers proof of the progress of artificial intelligence — the ability of computers to mimic human reasoning. (…)

The Narrative Science software can make inferences based on the historical data it…

FJP: Let’s pack it up. Robots are crossing the threshold and will soon rule this world.

By way of background, Narrative Science’s software currently generates articles about college sports. No humans required. The article The Reading Space refers to appeared in this weekend’s New York Times.

I am not a Robot, I’m a Unicorn

Cornell’s Creative Machines Lab wanted to know what would happen if a chatbot talked to itself. So they hooked up two instances of Cleverbot to get the ball rolling.

Via Singularity Hub:

The program Cleverbot is a web-based application that talks to people through a text interface. It’s one of many such “chatbots” you can find online, each able to respond to messages you type. Cleverbot learns to be a better conversationalist by remembering all the previous discussions it has had (20 million+ so far) and choosing which previous statements made by humans best fit the current discussion it’s having with a human. If you want, you can go to the Cleverbot site right now and participate in its learning process. When you do, I want you to keep in mind what you see in the following video from Cornell’s Creative Machines Lab. We (the internet) taught Cleverbot how to converse. If even it seems to find itself ridiculous and hard to listen to, what does that say about us?
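
For a sense of that “choose the best-fitting past statement” idea, here is a toy retrieval-style bot in Python. It stores previous exchanges and answers with the reply whose recorded prompt shares the most words with the incoming message; Cleverbot itself is far more elaborate, so treat this purely as an illustration.

    # A toy retrieval-style chatbot: remember past exchanges, answer with the
    # reply attached to the closest-matching past prompt.
    memory = [
        ("hello", "Hi there. How are you today?"),
        ("are you a robot", "No, I am a unicorn."),
        ("what is the meaning of life", "To live, I suppose."),
    ]

    def overlap(a, b):
        # Count shared words between two utterances.
        return len(set(a.lower().split()) & set(b.lower().split()))

    def reply(message):
        _, best_reply = max(memory, key=lambda pair: overlap(message, pair[0]))
        memory.append((message, best_reply))  # "learn" by remembering the exchange
        return best_reply

    print(reply("Are you a robot?"))  # -> "No, I am a unicorn."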

How’s that for existential robot unicorn conversation?