Extracting Audio Summaries to Support Effective Spoken Document Search

  • Damiano Spina
  • Johanne R. Trippas
  • Lawrence Cavedon
  • Mark Sanderson
Journal of the Association for Information Science and Technology, 2017

We address the challenge of extracting "query-biased audio summaries" from podcasts to support users in making relevance decisions in spoken document search over an audio-only communication channel. We conducted a crowdsourced experiment demonstrating that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries, or "snippets," for supporting users in judging relevance against a query. In particular, our results show that summaries generated from ASR transcripts are comparable, in both utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. Finally, we describe the methodology used to construct a new test collection, which we have made publicly available.

@article{spina2017extracting,
  author = {Spina, Damiano and Trippas, Johanne R. and Cavedon, Lawrence and Sanderson, Mark},
  title = {Extracting audio summaries to support effective spoken document search},
  journal = {Journal of the Association for Information Science and Technology},
  volume = {68},
  number = {9},
  issn = {2330-1643},
  url = {http://dx.doi.org/10.1002/asi.23831},
  doi = {10.1002/asi.23831},
  pages = {2101--2115},
  year = {2017}
}