We propose research to investigate a new paradigm for Interactive Information Retrieval (IIR) in which all input and output is mediated via speech. Our aim is to develop a framework for effective and efficient IIR over a speech-only channel: a Spoken Conversational Search System (SCSS). This SCSS will provide an interactive, conversational approach to determining user information needs, presenting results, and enabling search reformulations. Thus far, we have investigated the format of result summaries in both audio and text, examining features such as summary length and whether summaries are generated from clean documents or from noisy speech-recognition transcripts of spoken documents. In this paper we discuss future directions for a novel spoken interface targeting search result presentation, query intent detection, and interaction patterns for audio search.