Crowd Truth: Harnessing disagreement in crowdsourcing a relation extraction gold standard
One of the first steps in most web data analytics is creating a human-annotated ground truth, typically based on the assumption that for each annotated instance there is a single right answer. From this assumption it has always followed that ground truth quality can be measured by inter-annotator agreement. We challenge this assumption by observing that for certain annotation tasks, disagreement reflects semantic ambiguity in the target instances and provides useful information. We propose a new type of ground truth, a crowd truth, which is richer in its diversity of perspectives and interpretations, and reflects more realistic human knowledge. We propose a framework that exploits such diverse human responses to annotation tasks for analyzing and understanding disagreement. Our discussion centers on a use case of relation extraction from medical text; however, the crowd truth idea generalizes to most crowdsourced annotation tasks.
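To make the notion of inter-annotator agreement concrete, the following is a minimal, illustrative sketch (not from the paper) of Cohen's kappa, a standard chance-corrected agreement statistic for two annotators; the labels and annotator data are hypothetical:

```python
from collections import Counter

def cohen_kappa(ann_a, ann_b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of instances labeled identically.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected agreement by chance, from each annotator's label distribution.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical relation labels from two annotators on five sentences.
a = ["treats", "treats", "none", "treats", "none"]
b = ["treats", "none", "none", "treats", "treats"]
print(cohen_kappa(a, b))  # low kappa despite 60% raw agreement
```

Low kappa is conventionally read as poor annotation quality; the crowd truth view is that on ambiguous instances such disagreement can instead be a signal worth modeling.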