UQV100: An IR Test Collection With Query Variability

<b>Abstract from the SIGIR 2016 short paper (DOI below)</b>: We describe the UQV100 test collection, designed to incorporate variability from users. Information need “backstories” were written for 100 topics (or sub-topics) from the TREC 2013 and 2014 Web Tracks. Crowd workers were asked to read the backstories, and provide the queries they would use; plus effort estimates of how many useful documents they would have to read to satisfy the need. A total of 10,835 queries were collected from 263 workers. After normalization and spell-correction, 5,764 unique variations remained; these were then used to construct a document pool via Indri-BM25 over the ClueWeb12 corpus. Qualified crowd workers made relevance judgments relative to the backstories, using a relevance scale similar to the original TREC approach; first to a pool depth of ten per query, then deeper on a set of targeted documents. The backstories, query variations, normalized and spell-corrected queries, effort estimates, run outputs, and relevance judgments are made available collectively as the UQV100 test collection. We also make available the judging guidelines and the gold hits we used for crowd-worker qualification and spam detection. We believe this test collection will unlock new opportunities for novel investigations and analysis, including for problems such as task-intent retrieval performance and consistency (independent of query variation), query clustering, query difficulty prediction, and relevance feedback, among others.

<b>Files</b>: Download uqv100-allfiles.zip to get all of the files available as part of this collection, including README.txt.

<b>Citation</b>: Please cite the paper linked below if you make use of the collection.

<b>Authors</b>: Peter Bailey (Microsoft), Alistair Moffat (The University of Melbourne), Falk Scholer (RMIT University), Paul Thomas (Microsoft).
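The abstract notes that the 10,835 raw queries reduced to 5,764 unique variations after normalization and spell-correction. As an illustration only, the kind of normalize-then-deduplicate step this implies can be sketched as follows; the `normalize` function here (casefolding, punctuation stripping, whitespace collapse) is a hypothetical stand-in, not the collection's actual pipeline, which also included spell-correction.

```python
import re

def normalize(query: str) -> str:
    """Hypothetical normalization: lowercase, drop punctuation,
    collapse runs of whitespace. The real UQV100 pipeline may differ."""
    q = query.lower()
    q = re.sub(r"[^\w\s]", " ", q)         # replace punctuation with spaces
    return re.sub(r"\s+", " ", q).strip()  # collapse whitespace

def unique_variations(raw_queries):
    """Return distinct normalized query forms, preserving first-seen order."""
    seen = {}
    for q in raw_queries:
        seen.setdefault(normalize(q), None)
    return list(seen)

# Three surface variants of the same query collapse to one variation.
raw = ["Cheap Flights to Rome", "cheap flights to rome!", "cheap  flights to Rome"]
print(unique_variations(raw))  # → ['cheap flights to rome']
```

Counting unique variations this way (rather than raw strings) is what makes the query-variability analyses mentioned above possible, since equivalent surface forms are grouped together.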