By Carlo Soresina, Co-Founder SkipsoLabs
When Jeff Howe coined the term crowdsourcing in the June 2006 issue of Wired Magazine (in his article “The Rise of Crowdsourcing”), he referred to crowdsourcing as a new organizational model that allowed companies to take functions that were once performed internally and outsource them to online communities in the form of an open call.
Some of the pioneering companies he mentioned in his article – Threadless.com, InnoCentive.com, Amazon’s Mechanical Turk and iStockphoto.com – have become well-known case studies and have paved the way for a number of exciting new crowdsourcing models that have disrupted (and are still disrupting) entire industries.
With this proliferation of new crowdsourcing start-ups, applications and business models, it has become quite challenging to clearly define what crowdsourcing really is and where its true boundaries lie. You might be familiar with Ross Dawson’s crowdsourcing landscape map or Crowdsourcing.org’s own industry landscape; both do a great job of graphically displaying the different companies that operate in this (crowded!) space. If you do some desktop research (by the way, over 7 million results for “Crowdsourcing” on Google!) you will see countless other frameworks, models and maps used to classify and define crowdsourcing. So let’s try to bring some order.
While there probably isn’t one right way to categorise the crowdsourcing landscape, the most popular classifications by industry experts and researchers define crowdsourcing according to the following four variables:
- Based on the type of labor performed
- Based on the motivation to participate
- Based on how applications function
- Based on the problems that crowdsourcing is trying to solve
#1 Crowdsourcing classified based on Labor Performed (Nicholas Carr)
This approach categorises crowdsourcing based on the type of labor performed by the crowd and the way individuals in the crowd communicate and collaborate with one another:
- Social-production crowds – a large group of individuals lend their distinct talents to the creation of some product (as an example, Wikipedia or Linux).
- Averaging crowds – provide an average judgement on some complex matter that can be, in some cases, more accurate than the judgement of any one individual (as an example, the stock market).
- Data-mine crowds – a large group of people, without any knowledge of its members, produces a set of behavioral data that allows companies to gain insight into market patterns (as an example, eBay’s or Amazon’s recommendation systems).
- Networking crowds – a group that trades information through a shared communication system such as Facebook or Twitter.
- Transactional crowds – a group that coordinates mainly around point-to-point transactions (as an example eBay and Innocentive).
This categorization is useful as it helps us understand the different abilities crowds possess and the many ways they can work together, or in isolation, to perform a task.
#2 Crowdsourcing classified based on Motivation to Participate (Eric Martineau)
Crowdsourcing is categorised based on the motivation that drives crowds to participate in crowdsourcing applications.
- Communals – mesh their identities with the crowd and develop social capital through participation on the site
- Utilizers – develop social capital by developing their individual skills through the site
- Aspirers – help select content in crowdsourcing contests but do not contribute original content themselves
- Lurkers – simply observe
This categorization focuses more on the crowd members rather than the problems that crowdsourcing may solve.
#3 Crowdsourcing based on How Various Applications Function
Another interesting approach is to categorise crowdsourcing based on how various applications function.
Jeff Howe used this approach to classify crowdsourcing in the following 4 categories:
- Crowd wisdom – using the “collective intelligence” of people within or outside an organization to solve complex problems (Innocentive is the classic example).
- Crowd creation – leveraging the ability and insights of a crowd of people to create new products. Since Howe’s original definition this is an area that has evolved significantly and that I am following with particular interest (as an example I love Quirky’s co-creation community).
- Crowd voting – where the community votes for their favorite idea or product (Threadless is Howe’s original example).
- Crowd funding – there is a proliferation of crowdfunding platforms in the market of different types (rewards-based such as Kickstarter and equity-based such as CrowdCube) and serving different purposes.
Another great categorization of crowdsourcing based on a similar approach is the one provided by Ross Dawson:
- Distributed innovation platforms – the main concept here is that there are people outside your organization who have the answer to your challenges. We mentioned Innocentive, but there are a number of examples here. One that I love is Kaggle’s global innovation community of data scientists.
- Idea platforms – used by companies to be able to source, gather and filter ideas that are proposed. As an example Starbucks leverages the ideas of its community to improve its products and services.
- Innovation prizes – increasingly used by companies, governments and not for profit organizations to generate ground-breaking ideas. There are many examples here, but one you might be familiar with is the X-prize.
- Content markets – platforms where people submit their content for people to purchase (such as the online art community Red Bubble).
- Prediction markets – bring together many different opinions from a community of people to predict the future often based on “stock market-type” mechanisms (check out enterprise prediction market Consensus Point).
- Competition platforms – are becoming more popular as a way to source experts and expertise in different areas (design is particularly popular for competition platforms, e.g. DesignCrowd).
#4 Crowdsourcing classified based on the Problem Being Solved (Daren Brabham)
I recently came across a great book by Daren C. Brabham, “Crowdsourcing” (MIT Press), that does a great job of defining crowdsourcing:
“Crowdsourcing is an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals.”
According to Brabham, none of the above segmentations of crowdsourcing focus on the kind of problem an organization wants to solve when it turns to a crowd. His problem-centric segmentation is instead based on the types of problems that crowdsourcing is best suited to solve:
- Knowledge discovery and management – an organization tasks a crowd with finding and collecting information into a common format (examples: Peer-to-Patent at peertopatent.org, SeeClickFix, or BeMyEye, recently launched by my friend Gianluca Petrelli). Ideal for information gathering, organization and reporting problems.
- Broadcast search – an organization tasks a crowd with solving empirical problems (e.g. Innocentive, the Goldcorp Challenge). Ideal for ideation problems with empirically provable solutions, such as scientific challenges.
- Peer-vetted creative production – an organization tasks a crowd with creating and selecting creative ideas (e.g. Threadless, the Doritos contest).
- Distributed human intelligence tasking – appropriate not for producing designs, finding information, or developing solutions, but for processing data. Large data problems are decomposed into small tasks requiring human intelligence, and individuals in the crowd are compensated for processing bits of data; monetary compensation is a common motivator for participation. Amazon Mechanical Turk is the perfect example.
I like this last problem-centric approach a lot. I think it is a great way to explain, in simple terms, what crowdsourcing is and how it can be applied to solve tangible problems. At SkipsoLabs we generally follow this approach when implementing our Crowdsourcing and Open Innovation platforms and services. For us the starting point is always the same: what is the problem you are trying to solve? Our solution is built to help you address that problem.
However, our experience tells us that there isn’t a one-size-fits-all solution, but rather a blend of different applications with varying degrees of customization. When it comes to implementing the right crowdsourcing solution, we tend to adopt a holistic approach that takes into consideration all of the key drivers and variables described above:
- Problems being addressed
- Functions / Functionality required
- Type of audience (crowd) to be engaged
- Incentives to participate
I will expand on our approach and methodology in future posts!