Crowdsourcing Backstories for Complex Task-Based Search

Abstract

Backstories provide vital contextual information for information retrieval evaluation. They serve as textual representations of information needs and are useful, for example, to aid relevance judgements as part of test collections for performance evaluation, to study longer search queries, and to support interactive retrieval. While backstories exist for some popular search tasks and domains thanks to evaluation campaigns such as TREC, NTCIR, and CLEF, they are not available for a wide range of other tasks and domains. In this paper, we explore crowdsourcing as an approach for obtaining high-quality backstories, with the aim of supporting the development of backstories as key resources for new domains and search tasks. Compared to typical crowdsourcing tasks in the IR domain, such as gathering relevance judgements or short textual search queries, obtaining backstories is more complex: workers must conceive of information need scenarios and express them in comparatively lengthy text fragments, which potentially entails a higher cognitive load and longer working times. We describe a crowdsourcing methodology to maximise the usefulness of the resulting backstories, using the creation of backstories for the job search domain as an example. We also present and release a collection of 756 job search backstories obtained via the proposed methodology.

Publication
Australasian Document Computing Symposium