There are many ways to solve problems and make innovation happen, but crowdsourcing has proven to be a very successful approach for some companies. In this article, old and new examples of crowdsourcing competitions that transformed companies and industries demonstrate that it is possible to innovate by opening the doors to new ideas from the crowd.
Learn about the crowdsourcing competitions that proved to be a successful innovation model
John Harrison and the Chronometer
Contrary to what many innovation professionals might think, the crowdsourcing approach to problem-solving is nothing new: in fact, the invention of the first marine chronometer, some 300 years ago, is a successful case of crowdsourcing.
The chronometer was a long-sought-after device for establishing the east-west position, or longitude, of a ship at sea. Knowing that position was essential when approaching land: after a long voyage, cumulative errors in dead reckoning frequently led to shipwrecks and great loss of life.
The problem was considered so intractable, and following the Scilly naval disaster of 1707 so important, that the British Parliament offered the Longitude Prize of £20,000 (the equivalent of £2.75 million today) for a solution. This open call mattered because earlier attempts by the scientific establishment had proven unsuccessful: a host of brilliant scientists, including Giovanni Domenico Cassini, Christiaan Huygens, Edmond Halley, and Isaac Newton, had all tried to address the issue, only to find their solutions were not fully reliable.
The winning solution, one of more than 100 submissions, was an accurate chronometer that let navigators determine longitude by comparing local solar time with the time kept at a reference port. It came from John Harrison, a carpenter and clockmaker from the English countryside, who was eventually awarded the prize.
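The arithmetic behind Harrison's solution is simple once a reliable clock exists: the Earth rotates through 15° of longitude per hour, so the gap between local noon and the time on the reference clock translates directly into an east-west position. A minimal sketch (the function name and sign convention are illustrative, not from the original challenge):

```python
def longitude_degrees(reference_clock_at_local_noon):
    """Estimate longitude from a chronometer reading.

    The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour.
    If the chronometer (keeping reference-port time, e.g. Greenwich)
    reads 15:00 when the sun reaches its local peak (local noon), the
    ship is 3 hours behind the reference meridian, i.e. 45 degrees west.
    Positive result = degrees west, negative = degrees east.
    """
    hours_behind_reference = reference_clock_at_local_noon - 12.0
    return hours_behind_reference * 15.0


print(longitude_degrees(15.0))  # 3 hours of difference -> 45.0 degrees west
```

Before accurate chronometers, that time difference simply could not be measured at sea, which is why dead-reckoning errors accumulated voyage after voyage.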
The story of the first marine chronometer is a reminder of the value of crowdsourcing: The impact that hidden heroes like John Harrison make sometimes lasts for generations.
Mining company Goldcorp found gold
Back in the late 90s, technology was not as advanced as it is today. Even so, the mining company Goldcorp was clever enough to borrow the open-source model and develop what many experts consider a prime example of crowdsourcing.
Rob McEwen, CEO of Goldcorp, was inspired by Linux, the open-source operating system whose code was freely available to developers all over the world. Taking this democratization of technology as his model, McEwen proposed releasing all the geological data from one of the company's most complex projects: a gold mine in Canada that was expected to hold significant gold deposits.
By 2000, the Goldcorp Challenge was a reality. With a cash prize of $575,000, people from all over the world were encouraged to examine all the geological data going back to 1948 in order to work out where the gold might be. Entrants from many different backgrounds took part, and four of the five best proposals ended up as winning solutions.
The rest, as they say, is history. Goldcorp is now worth around $10 billion, 100 times its 1999 value. In the beginning, business purists and geologists were outraged; in the end, they all had to admit that this crowdsourcing competition had significantly accelerated the entire process. That is why we keep coming back to the importance of an open-doors approach to innovation.
DARPA and the swimming tank
Pentagon researchers launched a crowdsourcing competition to build a swimming tank for the Marines in a fraction of the time the military's lumbering acquisitions process takes. Only the relevant data and a set of web-based collaborative tools were needed to compete for a million dollars.
Welcome to the FANG Challenge, one of Darpa’s various design challenges that leverage the distributed intelligence of the crowd.
FANG stands for Fast, Adaptable, Next-Generation Ground Vehicle. In this case, Darpa wanted to build an amphibious infantry vehicle to the specifications of the Marine Corps' Amphibious Combat Vehicle, designed to carry Marines from ship to shore under fire. Only Darpa believed the crowd could design something more innovative than a traditional military vehicle, in less time, and without the support of mega-defense corporations. To do so, participants would have to break from the process through which those vehicles usually get engineered.
The gamble behind the FANG Challenge is that once a design team has the data spelling out the requirements of each of the systems for a military vehicle, that team ought to be able to design the individual parts of component systems while taking note of how the other parts have to knit together.
Again, Darpa wasn't expecting teams to design every part of the infantry vehicle all at once: the FANG Challenge was broken up into three phases. The first phase sought to design the drivetrain and mobility systems. Darpa opened its C2M2L component model library and the VehicleFORGE collaboration platform to participants, and teams had four months to work. The winning design, judged against the Marine Corps' criteria for the Amphibious Combat Vehicle, got a million dollars.
The next phase was to design the hull, informed by the work in phase 1. That’s another $1 million, and the winning design for the full vehicle got $2 million.
Darpa has crowdsourced vehicle designs before: the 2010 Experimental Crowd-derived Combat-Support Vehicle (XC2V) Design Challenge produced a combat vehicle that could be used for medevac.
Kaggle and Land Use in the Amazon
Planet launched the Understanding the Amazon from Space competition on Kaggle. The goal of the contest was to build a classifier that predicts the types of land use appearing in satellite images of the Amazon. Planet provided over 100,000 image chips extracted from large scenes captured by its flock of satellites over the Amazon basin in 2016.
The 40,000 training and 60,000 testing chips were provided in both 3-band RGB JPEG and 4-band RGB-IR TIFF formats. Using crowdsourced labor, each training chip was assigned a set of ground-truth labels indicating the types of land use appearing in it. The quality of the predictions was measured using the F2 score, a weighted harmonic mean of precision and recall that places greater emphasis on recall.
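To make the metric concrete, here is a minimal sketch of how an F2 score can be computed for one chip's predicted label set against its ground truth (the label names are illustrative; the competition averaged a per-chip F2 over all chips):

```python
def f_beta(true_labels, predicted_labels, beta=2.0):
    """F-beta score for one multi-label example.

    Precision = correct predictions / all predictions made,
    Recall    = correct predictions / all true labels.
    F-beta is their weighted harmonic mean; beta=2 weights recall
    more heavily than precision, so missing a true label hurts
    more than predicting a spurious one.
    """
    true_set, pred_set = set(true_labels), set(predicted_labels)
    hits = len(true_set & pred_set)
    if hits == 0:
        return 0.0
    precision = hits / len(pred_set)
    recall = hits / len(true_set)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)


# With beta=2, an extra wrong label costs little while recall stays perfect:
print(f_beta(["agriculture", "road"], ["agriculture", "road", "water"]))
```

The recall emphasis fits the task: for land-use monitoring, failing to flag deforestation in a chip is worse than the occasional false alarm.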
The winning Kaggler, `best-fitting`, obtained a score of 0.93318; a discussion of their approach can be found in their solution summary.