Whether you want to share sensitive protest footage without exposing the faces of the activists involved, or share the winning point in your 8-year-old’s basketball game without broadcasting the children’s faces to the world, our face blurring technology is a first step towards providing visual anonymity for video on YouTube.
Related: The Guardian and Witness.org released ObscuraCam earlier this year to accomplish much the same thing.
Google has caught a lot of heat over its G+ real-name policy. Part of it is simply the arbitrary nature of the enforcement: many people using their real names — and well-known nicknames — have been kicked off Plus.
But there’s a much deeper and more important conversation taking place that has to do with identity, privacy and the right to anonymity.
Danah Boyd, a researcher with Microsoft and fellow at Harvard’s Berkman Center, considers real name policies an abuse of power:
I’m really really glad to see seriously privileged people take up the issue, because while they are the least likely to actually be harmed by “real names” policies, they have the authority to be able to speak truth to power. And across the web, I’m seeing people highlight that this issue has more depth to it than fun names (and is a whole lot more complicated than boiling it down to being about anonymity, as Facebook’s Randi Zuckerberg foolishly did).
What’s at stake is people’s right to protect themselves, their right to actually maintain a form of control that gives them safety. If companies like Facebook and Google are actually committed to the safety of its users, they need to take these complaints seriously. Not everyone is safer by giving out their real name. Quite the opposite; many people are far LESS safe when they are identifiable. And those who are least safe are often those who are most vulnerable.
News sites are continually grappling with how to elevate the tone of reader comments. One approach is to require people to use their real names in order to comment on stories. Some sites, for example, require you to swipe your credit card for a nominal one-time fee (say, a dollar) to prove you’re you.
Sites that have done this (or found other ways to implement “real name” systems) generally report that while the overall number of comments goes down, the quality of discussion improves. That is, there’s less of an impulse to lob rhetorical bombs when everyone knows exactly who you are.
But apply what Boyd writes here to the newspaper rather than the social network, and the same dynamic emerges: the paper dictates who can comment and participate, while ignoring the very real reasons why some members of a community might need to contribute anonymously to a conversation about sensitive issues.
If news sites want to clean up their comment sections, they should build a civil culture within them by having moderators, reporters and editors set the tone through active participation. Otherwise, the crazies with axes to grind will continue to ruin the roost.
But Facebook, which celebrates its seventh birthday Friday and has more than a half-billion users worldwide, is not eagerly embracing its role as the insurrectionists’ instrument of choice. Its strategy contrasts with rivals Google and Twitter, which actively helped opposition leaders communicate after the Egyptian government shut down Internet access.
The Silicon Valley giant, whether it likes it or not, has been thrust like never before into a sensitive global political moment that pits the company’s need for an open Internet against concerns that autocratic regimes could limit use of the site or shut it down altogether.
Clay Shirky has some thoughts on conversations and comments online.
“…bad discourse isn’t a behavior problem,” he says, “it’s a design problem.”
Fine, fine, but does he have applicable ideas? Sort of.
That provides some options for turning the jerk dial down. One is to make identity valuable: Stack Overflow won’t let new users post until they have exhibited enough other behaviors—visiting the site, responding in helpful ways to other posts—to earn the karma for full participation. Another approach is to partition public platforms, thus reducing the incentive to publicly act out. Twitter does this by segmenting its audience: I can rant all I like, but only to the users I can persuade to follow me. Yet another approach is to enlist users in defensive filtering. Amazon sometimes refuses to publish a post, but most of its policing is done by customers who flag offensive reviews and elevate those they find helpful.
[Washington] Post readers constantly complain about the excessive use of anonymous sources in the newspaper. But the problem is even worse online.
Staff-written news blogs are replete with violations of The Post’s long-established and laudable standards governing confidential sources. These unnamed sources often are cited without providing readers with even a hint of their reliability or why they were granted anonymity.
In the first two weeks of December alone, Post news blogs included more than 20 unnamed sources without any explanation of their quality or why they warranted confidentiality. Many blogs referred only to “sources” or “those close to” a subject or situation.