Posts tagged with ‘crowdsourcing’
While most of these and other statistics are backed up by a substantial amount of empirical research, estimates of the total number of labor-hours contributed to Wikipedia are one notable exception. However, this has not stopped champions of the project from stating, with more or less certainty, that Wikipedia is one of the largest projects in human history…
…[A] well-documented and often-repeated labor hour estimation is that of the Empire State Building, which took 3,000 laborers a total of 7 million labor-hours to construct. Figures for the construction of the Channel Tunnel report a total of 170 million labor-hours, while estimates for the Great Pyramid at Giza range from 880 million to 3.5 billion labor-hours. The first edition of the Encyclopedia Britannica was written and published by 3 employees authoring 24 pages a week for 100 weeks, which is around 12,000 labor-hours assuming a 40-hour work week…
…Summing the duration of all continuous editing sessions and single edit sessions, we identified 41,018,804 total labor-hours expended in the English-language version of Wikipedia… Extrapolating to all language versions of Wikipedia based on the total number of edits made to each project, we estimate that 61,706,883 total labor-hours have been contributed in edit sessions for non-English language Wikipedias, for a total of 102,673,683 total labor-hours to all Wikipedia versions.
R. Stuart Geiger and Aaron Halfaker, Using Edit Sessions to Measure Participation in Wikipedia (PDF).
FJP: That’s approximately 11,720 years of peer production.
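The arithmetic behind these figures is easy to reproduce. Here's a quick sanity check in Python, a sketch using only the numbers quoted above; note that the conversion to years assumes calendar time (24-hour days, 365-day years), not 40-hour work-years:

```python
# Sanity-checking the labor-hour figures quoted above.
# All inputs are taken from the quoted passages; variable names are ours.

# Encyclopedia Britannica, 1st edition:
# 3 employees over 100 weeks, assuming a 40-hour work week
britannica_hours = 3 * 100 * 40
assert britannica_hours == 12_000  # matches the "around 12,000" figure

# Geiger & Halfaker's grand total across all Wikipedia language versions
total_hours = 102_673_683

# "Years of peer production" in calendar hours (24 h/day, 365 days/year)
hours_per_year = 24 * 365
print(total_hours // hours_per_year)  # 11720
```

So the 11,720-year figure is simply the grand total divided by the 8,760 hours in a year.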
It’s pretty simple to do, and very interesting to explore:
Take a photo of yourself holding a sign with a key word or phrase you want the president to remember.
Then explain, in as many words as you want, what you mean and see yourself here.
See Global Voices, a citizen journalism site that does an incredible job of providing passionate people with a place to coordinate, research, write, translate and distribute online news. Above is a case study of a land grab in Brazil, which follows the story from initial idea through to its Italian translation, among other languages.
Sharon Weinberger, BBC. Intelligence agencies turn to crowdsourcing.
Sharon’s talking about the Intelligence Advanced Research Projects Activity, a US Government research organization that crowdsources geopolitical predictions.
Sharon suggests that the crowd may foresee events that we wouldn’t guess at otherwise, like these infamous examples:
The intelligence community has often been blasted for its failure to forecast critical world events, from the fall of the Soviet Union to the Arab Spring that swept across North Africa and the Middle East. It was also heavily criticized for its National Intelligence Estimate in 2002, which supported claims that Iraq had weapons of mass destruction.
The latest site, however, is their most interesting. It’s called Global Crowd Intelligence and its creators have catered to our (that is, human) desires for competition, games, and fun.
Indeed, what users wanted, it turned out, was something competitive, so that’s what the company has given them. The new website rewards players who successfully forecast future events by giving them privileged access to certain “missions,” and also allowing them to collect reputation points, which can then be used for online bragging rights. When contributors enter the new site, they start off as junior analysts, but eventually progress to higher levels, allowing them to work on privileged missions.
Appealing to people is, after all, a good way to solicit information from them.
Sharon looks elsewhere, too, where other crowds are making guesses of their own. Wikistrat, a privately owned, self-described Massively Multiplayer Online Consultancy (MMOC, seriously), has guessed at possible outcomes for Syria.
Yesterday Jihii wrote about an effort originating in the Reddit community to crowdsource a privacy bill to protect people’s online rights.
Perhaps, then, a trend, because yesterday also saw the launch of The Internet Blueprint, an effort by Public Knowledge, a Washington DC-based digital advocacy group, to crowdsource technology bills that members of Congress can then pick up and run with.
The idea is certainly interesting. What we saw recently in the fights over SOPA and PIPA (and see generally over everything else) were reactive protests against proposed laws drafted with little public input, often by the very lobbyists whose groups stand to benefit most from them.
The Internet Blueprint attempts to turn this process on its head by proactively promoting Internet-related laws that are written in public, by the public (and with Public Knowledge lawyers massaging them into proper DC legalese). Visitors to the site can vote up and comment on particular bills, vote on ideas they think should become proposed bills, and contact their representatives to get behind completed bills.
Via Public Knowledge:
While it can be reasonably easy to get people to agree on broad principles, conflict can often come when it is time to focus on details. That is especially true when it comes to legislative language – a single word (or even a single comma) can change the impact of a bill. That is why The Internet Blueprint goes beyond broad concepts and proposes concrete legislative language. The bills on The Internet Blueprint could be introduced and passed as-is.
The Internet Blueprint is a place for everyone – individuals, organizations, and companies – to come together and make it clear what is important to them. When you visit the site, the first thing you will see is a list of complete bills. Along with the text there is a headline, a short explanation, and a more detailed explanation of both the problem and our solution.
Public Knowledge has seeded the site with a few completed bills that focus on copyright policy and openness in international intellectual property negotiations. You can view them here.
— Ben Huh, CEO, Cheezburger, Inc. The Ethics of the Fail.