Posts tagged with ‘big data’

Bidding on Your Personal Browser History

Proclivity Media and others are working very hard to find out what you want to buy, and they’re getting to know you very well along the way.

Here’s the backstory: one particularly savvy way of advertising has begun receiving a lot of attention lately. It’s called re-targeting, and it relies on personal browser history to figure out what users may want to buy.

Automated programs bid on the ad space individual users see based on their personal search histories, more traditional consumer reports and retailer records, selling one-time ads at several hundred dollars a pop.

via Internet Retailer:

Proclivity uses its Consumer Valuation Platform to place cookies in consumers’ web browsers to monitor their browsing behavior around the Internet and tracks their specific interactions on a client retailer’s site using tiny pieces of embedded software code in site content. Proclivity adds data from the retailer, including the merchant’s own web analytics on shoppers’ click activity, and information on sales, merchandising campaigns and product pricing, then scores it to determine when each customer is likely to buy and at what price point.
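To make the scoring step above concrete, here's a miniature sketch of how browsing events might be turned into a likelihood-to-buy score and a bid. Everything in it is illustrative: the event types, weights and threshold are invented for the example, not drawn from Proclivity's actual Consumer Valuation Platform.

```python
import math
from collections import Counter

# Hypothetical weights for how strongly each browsing event signals
# intent to buy. These numbers are invented for illustration.
EVENT_WEIGHTS = {
    "viewed_product": 0.4,
    "searched_brand": 0.6,
    "added_to_cart": 1.2,
    "abandoned_checkout": 1.5,
}

def purchase_propensity(events):
    """Squash a weighted sum of observed events into a 0-1 score."""
    raw = sum(EVENT_WEIGHTS.get(e, 0.1) * n for e, n in Counter(events).items())
    return 1 / (1 + math.exp(-(raw - 2.0)))  # logistic curve around a threshold

def bid(score, max_bid=3.50):
    """Scale the most we'd pay for this impression by the shopper's score."""
    return round(max_bid * score, 2)

casual = purchase_propensity(["viewed_product"])
eager = purchase_propensity(["viewed_product", "added_to_cart",
                             "abandoned_checkout"])
```

A real platform would learn those weights from retailer sales records rather than hard-code them, and would adjust the price point per shopper as well as the bid.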

This is very similar to Facebook Exchange, which has been working well, if cautiously, since June.

Here’s the Wall Street Journal:

Facebook is using its data trove to study the links between Facebook ads and members’ shopping habits at brick-and-mortar stores, part of an effort to prove the effectiveness of its $3.7 billion annual ad business to marketers.

FJP: This is big data at work — for many businesses, there’s a lot to find when comparing data sets that follow consumer behavior online and in stores.

Imagine if, your whole life, you’ve looked through one eye, only seeing through one eye, and suddenly scientists can give you the ability to open up a second eye. What you would see is not just more data but a whole different way of seeing.

Said photojournalist Rick Smolan today, telling the audience at a Human Face of Big Data event the same thing he told his son when, at 2am, the little boy climbed out of bed, snuck into the kitchen and asked him why he stayed up late every night on the phone talking about “big data.” Smolan continued:

My son, who again wanted to stay up as late as he could before I sent him back to bed, said: Could scientists and computers, like, let us open up a third eye and a fourth and a fifth? And I said yes.

See the group’s phone app, its upcoming book and more here.

Congratulations to the Winners of the Knight News Challenge!

A Knight Foundation contest seeking the best proposals for bringing big data to the general public has announced its six winners.

Three projects present new data: OpenElections, Census.IRE.org, and Pop Up Archive plan to provide comprehensive, highly searchable data for public research and enjoyment.

The other three — Safecast, LocalData, and Development Seed — have created new toolsets that let people contribute to big data, be it by measuring radiation in Los Angeles or by using a smartphone to share data with Google Earth, Fusion Tables, and elsewhere.

Finding the Photogenic Side of Big Data

via the New York Times:

Massive rivers of digital information are a snooze, visually. Yet that is the narrow, literal-minded view. Mr. Smolan’s new project, “The Human Face of Big Data,” which is being formally announced on Thursday, focuses on how data, smart software, sensors and computing are opening the door to all sorts of new uses in science, business, health, energy and water conservation. And the pictures are mostly of the people doing that work or those being affected.

The heart of data science is designing instruments to turn signals from the real world into actionable information. Fighting the data providers to give you those signals in a convenient form is a losing battle, so the key to success is getting comfortable with messy requirements and chaotic inputs. As an engineer, this can feel like a deal with the devil, as you have to accept error and uncertainty in your results. But the alternative is no results at all.

— Pete Warden, former Apple engineer and current CTO of Jetpac, to O’Reilly Radar. Embracing the Chaos of Data.

See this animation? It’s made with a global life expectancy data set from the World Bank that’s been imported into Google’s Public Data Explorer.

Google’s been working on the Data Explorer ever since it acquired the Trendalyzer software from Gapminder. So far, visualizations like these have been created by Google’s in-house team. Not anymore: they’ve released a Web interface that lets data journalists, academics and others with large data sets upload their info and start visualizing.

Reports Nieman Lab:

[Benjamin] Yolken and Omar Benjelloun, Google Public Data’s tech lead, have written a new data format, the Dataset Publishing Language (DSPL), an XML-based format designed particularly to support animated visualizations. “DSPL is like those in the Public Data Explorer,” Benjelloun notes in a blog post announcing the opening. “We’ve been using DSPL internally to produce all of the datasets and visualizations in the product”; he writes, “you can now use it to upload and visualize your own DSPL-formatted datasets in your own applications.”

It’s an experimental feature that, like the Public Data Explorer itself — not to mention some of Google’s most fun features (Google Scribe, Google Body, Google Books’ Ngrams viewer, etc.) — lives under the Google Labs umbrella. And, importantly, it’s a feature, Yolken notes, that “allows users who may or may not have technical expertise to explore, visually, a number of public data sets.”
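For a sense of what a DSPL descriptor looks like, here's a sketch that builds a minimal one with Python's standard library. The namespace is the one DSPL documents use, but the structure is simplified for illustration and omits the slices and tables a real upload would need.

```python
import xml.etree.ElementTree as ET

# DSPL documents live in this XML namespace; the fragment built below is
# a simplified sketch, not a complete, upload-ready dataset descriptor.
NS = "http://schemas.google.com/dspl/2010"
ET.register_namespace("", NS)

def q(tag):
    """Qualify a tag name with the DSPL namespace."""
    return f"{{{NS}}}{tag}"

dspl = ET.Element(q("dspl"))

# Human-readable metadata about the dataset.
info = ET.SubElement(dspl, q("info"))
name = ET.SubElement(info, q("name"))
ET.SubElement(name, q("value")).text = "Life expectancy (example)"

# One concept: the metric the visualization would animate over time.
concepts = ET.SubElement(dspl, q("concepts"))
concept = ET.SubElement(concepts, q("concept"), id="life_expectancy")
ET.SubElement(concept, q("type"), ref="float")

xml_text = ET.tostring(dspl, encoding="unicode")
```

A full descriptor would also declare dimensions like country and year, plus the CSV tables holding the actual numbers, which is what lets the Explorer animate a data set like the World Bank one above.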

The newly open tool could be particularly useful for news organizations that would like to get into the dataviz game but don’t have the resources — of time, of talent, of money — to invest in proprietary systems.

Good times.