Friday, May 22, 2009

There's crowdsourcing, and there's crowdsourcing

During my last year at Reed College, I wrote an undergraduate thesis in economics that hinged upon the concepts of efficiency and equity. I merrily went about writing the thesis, turned it in, then prepared for its oral defense. One of the professors asked me to explain why I chose the definitions of efficiency and equity that I used in my thesis. As it turns out, I hadn't really thought about the "why." I turned to my thesis advisor for support, but my advisor (rightly) noted that the definitions I chose were up to me. A definite learning experience.

In a May 20 post, I spoke about the concept of crowdsourcing. As part of the post, I referenced two definitions from Jeff Howe:

The White Paper Version: Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.

The Soundbyte Version: The application of Open Source principles to fields outside of software.
In a May 21 post, Rob Diana also spoke about crowdsourcing, asking whether the American Idol television show is truly an example of crowdsourcing. He turned to a Wikipedia definition of crowdsourcing, which included the following two sentences (Diana quoted the second):

Crowdsourcing is a neologism for the act of taking a task traditionally performed by an employee or contractor, and outsourcing it to an undefined, generally large group of people or community in the form of an open call. For example, the public may be invited to develop a new technology, carry out a design task (also known as community-based design[1] and distributed participatory design), refine or carry out the steps of an algorithm (see Human-based computation), or help capture, systematize or analyze large amounts of data (see also citizen science).

Focusing upon the examples given by Wikipedia, Diana said:

This does not sound like voting on a very subjective matter like talent or who makes the better pop star. This sounds more like which solution works best, or what is the common pattern in a large set of data. These are very objective measures.

Diana refers to another Wikipedia entry (this one on "The Wisdom of Crowds") that identifies, among other things, four criteria that separate wise crowds from irrational ones. These four criteria are diversity of opinion, decentralization, independence, and aggregation.

In the case of American Idol, Diana questions whether independence is truly present in the voting, given that (as Diana notes) "[p]eople are highly influenced by the opinions of others around them."

So how does that apply to Sheila Scarborough's request for superior Virginia destinations? In this case, I believe that all four criteria were met. Regarding independence, I did not have contact with any of the other respondents before expressing my opinion, and frankly I don't think I've even heard of any of the other respondents (unless one of them is Steven Perez posting under an assumed name).

But are these four criteria truly necessary for crowdsourcing to occur? It's all a matter of semantics, but it's worthwhile to note that, at least at first glance, the inventor of the term "crowdsourcing" does not seem to impose any requirement that the crowd itself be wise. While one idealistically hopes that a crowd will be wise, it's not always the case. You can, in Howe's words, apply "Open Source principles" and come up with a really crappy product.

But regardless of how you define crowdsourcing, Rob Diana's conclusions at the end of his post are still valid:

So, if you are looking to create some great application on the web, and crowdsourcing is how you are going to solve some problem, think about what your crowd should be. Who should be in your crowd? Should it be a bunch of technology-loving early adopters? Maybe, or maybe not. You do not need to go after the largest crowd, just a crowd that is relevant to your problem.

But then again, a caution is in order here. I'll speak more about requirements in a future post, but if you're going to crowdsource (or otherwise solicit) the requirements for a software product, you need to think about the participants. Do you only talk to the purchasing manager who will buy the product? Or do you also talk to the person who will manage the group that will use the product? How about the employees? The IT organization? The customers of the company who will use the product? And what weight should be assigned to each of these groups?