A session from the recent LIDA conference outlined how the power of the crowd contributed to the success of a local history documentation project. The possibilities for this type of public collaboration are endless, and people are happy to contribute their knowledge if the platform is easy to use.
Crowdsourcing places its emphasis on people: the collective "crowd". This crowd consists of individuals scattered all over the world, all connected via the web. However, when we use the term "crowdsourcing", we should also ask whether we really mean "nichesourcing", a strategy that taps into the expert knowledge available in niche communities rather than the general "crowd".
This got me thinking about the differences (and similarities) between crowdsourcing and nichesourcing, and how they both relate to content aggregation platforms like Vable.
There is much to be explored regarding information management and the wisdom of an expert crowd, and I am glad that it is being talked about at library conferences. In the meantime, I was pleased to discover Harri Oinas-Kukkonen and his chapter on "Crowds of People as Sources of New Organisational Knowledge".
In it he explores - amongst other things - how crowds of people can become sources of new organisational knowledge.
We spend our entire library careers saying “the right information needs to be delivered to the right people in the right place, at the right time, and in the right way”. We achieve this by implementing the right information aggregation functionality in our information departments. But before we go further, let’s establish some definitions.
Crowdsourcing has been defined as,
"an online, distributed problem-solving and production model that leverages the collective intelligence of online communities, to serve specific organizational goals”. (Daren Brabham, 2013)
The term “crowdsourcing” was coined by Jeff Howe in a 2006 article in response to a proliferation of organisations harnessing the power of many “enthusiastic amateurs”. He defined it as,
"an act whereby an organization or institution takes a function or more which was once performed by employees and outsources them to an undefined network of people which is generally in the form of an open call".
In that article Howe brought together some of the best examples of crowdsourcing. They all required people to tag, sort, or transcribe information so that meaningful knowledge could be extracted from a vast amount of data.
The list of projects which employ - or have employed - crowdsourcing is staggering. Not-for-profits and GLAMs (Galleries, Libraries, Archives and Museums) are represented, as are companies such as IBM (with Watson), Unilever and Pepsi.
Nichesourcing is a type of crowdsourcing, but there are fewer people involved and they are all experts in their field. One Finnish team defined it as,
"a specific type of crowdsourcing where tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources, their specific richness in skill is suited for the complex tasks with high-quality product expectations".
Indeed, it could be argued that most of the crowdsourcing projects mentioned above require some expertise.
Art UK, one of the first tagging crowdsourcing projects I was involved with, encouraged people to add extra tags if they could demonstrate an art history qualification. Certainly, the project described in the LIDA conference paper required specialists - how many people do you know who can read medieval Glagolitic script? A niche topic indeed!
I have written about information overload on this blog before. Automated web crawling, RSS feeds and emails together generate an immense amount of content. Automated systems can cope with the sheer volume of news and information produced every day, but human input is still required to critically evaluate the sources and search results.
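To make that division of labour concrete, here is a minimal sketch in Python of the automated half of the workflow, using the third-party feedparser library. The feed URL and the review queue are illustrative assumptions of mine, not a description of how any particular platform works.

```python
import feedparser  # pip install feedparser

# Hypothetical feeds an information team might monitor.
FEED_URLS = [
    "https://example.com/legal-news/rss",
]

def collect_new_items(urls):
    """Pull entries from each RSS feed and queue them for human review."""
    review_queue = []
    for url in urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            review_queue.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
            })
    # Automation stops here: an information professional still has to
    # evaluate each item before it reaches a newsletter or alert.
    return review_queue

if __name__ == "__main__":
    for item in collect_new_items(FEED_URLS):
        print(item["title"], "->", item["link"])
```

The point of the sketch is where it ends: collection can be automated, but nothing leaves the review queue without a human judgement.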
We can agree, as legal information professionals, that content evaluation is a niche specialism. As John DiGilio, the firmwide Director of Library Services at Sidley Austin LLP points out,
"[We] need the guidance of people trained in library and information science to help make sense of it all, to facilitate skilled information retrieval and to collaborate with information requesters for evaluation and further report".
A content aggregation platform like Vable relies on qualified information professionals to add only the best and most trustworthy content. We use our knowledge and experience to assess the quality of the online news and information that populates newsletters and alerts, so that we can build and maintain trust with end-users.
Nichesourcing relies on collective expertise, which is reflected in the quality of the information added to crowdsourced current awareness platforms. Not only do we need an excellent understanding of an organisation's underlying business goals, we also need to know which sources are relevant to end-users.
We should instinctively consider accuracy, neutrality, readability, relevance, and trustworthiness, along with any criteria specific to our own organisation.
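Purely as an illustration, that checklist could be expressed as a simple data structure. The sketch below is hypothetical - it assumes a strict rule that every criterion must pass, and is not any platform's actual model.

```python
from dataclasses import dataclass

@dataclass
class SourceEvaluation:
    """One professional's judgement of a candidate source."""
    accuracy: bool
    neutrality: bool
    readability: bool
    relevance: bool
    trustworthiness: bool

    def passes(self) -> bool:
        # A strict rule: a source is only added when every criterion holds.
        return all((self.accuracy, self.neutrality, self.readability,
                    self.relevance, self.trustworthiness))

# Example: a readable, relevant, but one-sided source fails the check.
evaluation = SourceEvaluation(accuracy=True, neutrality=False,
                              readability=True, relevance=True,
                              trustworthiness=True)
print(evaluation.passes())  # False
```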
As long as we constantly evaluate the information we add to the platform, standards will be maintained. However, content aggregation platforms also have safeguards in place, so that access to sources can be restricted where appropriate.
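Reduced to its simplest form, such a safeguard might look like the hypothetical sketch below; the source names, group names and access lists are entirely invented for illustration.

```python
# Hypothetical access lists mapping each source to the groups allowed to see it.
ACCESS_LISTS = {
    "premium-subscription-feed": {"library-team", "partners"},
    "public-news-feed": {"everyone"},
}

def can_view(source: str, user_group: str) -> bool:
    """Return True if the user's group may see this source."""
    allowed = ACCESS_LISTS.get(source, set())
    return "everyone" in allowed or user_group in allowed

print(can_view("premium-subscription-feed", "trainees"))  # False
print(can_view("public-news-feed", "trainees"))           # True
```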
Most of the time, though, we are simply benefitting from the expertise of the information crowd. Because new content is being added all the time, we might discover new, relevant sources of use to colleagues, or find trustworthy free alternatives to expensive subscriptions.
Nichesourcing and the more general crowdsourcing have both demonstrated "the power of the crowd", and done in the right way they have incredible potential. If one information specialist adds value to an organisation, just imagine the effect of combining the expertise of many! Apply this thinking to the current awareness workflow, and see where collaboration, automation and imagination can take us.