Conversational agents based on Large Language Models (LLMs), such as ChatGPT, can provide numerous benefits to stakeholders inside and outside an organisation such as RMIT University. One of the primary benefits is that they can help stakeholders reduce the cost of various tasks, saving time and resources (e.g., translating data into different formats, preparing drafts of documents). Note that the aim is not to replace the humans doing these tasks, but to assist them in being more cost-effective.
In this setting, one of the major concerns is the risk of giving away sensitive data, compromising the privacy and security of individuals. This can make it harder for members of the organisation to comply with data management policies, as they may be uncertain about what data is being collected by third parties such as OpenAI.
At CIDDA, we wanted to join forces, combining our diverse expertise, to demonstrate to ourselves (and to learn in doing so) how far we could get in designing and deploying our in-house version of a conversational LLM. To make this long-term goal more reachable, we identified a more tangible milestone: could we deploy our own LLM (i.e., CIDDA-GPT) to help us build a chatbot for RMIT Open Day? We treated a manually curated Frequently Asked Questions (FAQ) document from the School of Computing Technologies (SCT) as “sensitive data”, and challenged ourselves to use CIDDA-GPT to create training phrases (and alternative paraphrases of the correct answers in the FAQ) for Walert, a conversational agent that answers questions about SCT programs.
What is it?
The prototype has two main components: Walert, a conversational model deployed on Amazon Alexa, and CIDDA-GPT, an open-source large language model deployed on Amazon Web Services (AWS) through RACE.
For the chatbot Walert, we used the Alexa Skills Kit developer console to create a custom skill, which was deployed and tested on an Amazon Echo Dot device. We leveraged the Natural Language Understanding (NLU) embedded in Amazon Alexa's services to identify the user's intent behind a question and respond accordingly. The custom skill was trained using popular question-answer pairs from RMIT SCT's internal FAQ document; in this setup, the questions from the FAQ serve as training utterances for Walert.
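Conceptually, the NLU step maps a user utterance to the intent whose training utterances it most resembles. A toy sketch of this idea (the intent names and sample utterances below are hypothetical, not the actual skill definition) might look like:

```python
from difflib import SequenceMatcher

# Hypothetical intents with sample training utterances, modelled on
# FAQ question-answer pairs (illustrative only, not the real skill data).
INTENTS = {
    "ProgramDurationIntent": [
        "how long is the bachelor of computer science",
        "what is the duration of the computer science degree",
    ],
    "AtarRequirementIntent": [
        "what atar do i need for computer science",
        "what is the atar requirement for the it degree",
    ],
}

def classify(utterance: str) -> str:
    """Return the intent whose sample utterances best match the input."""
    def best_score(samples):
        return max(
            SequenceMatcher(None, utterance.lower(), s).ratio() for s in samples
        )
    return max(INTENTS, key=lambda name: best_score(INTENTS[name]))
```

Production NLU is of course far more sophisticated than string similarity, but the sketch shows why multiple paraphrased training utterances per intent matter: more variety improves the chance that an unseen phrasing lands on the right intent.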
As chatbots need multiple semantically equivalent utterances as training phrases for the conversational model, we experimented with using CIDDA-GPT to generate paraphrases of the questions and answers extracted from the FAQ.
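In practice, this paraphrasing step amounts to prompting the LLM for a fixed number of rewordings and parsing them from the completion. A minimal sketch of such a pipeline is below; the prompt wording and output format are assumptions for illustration, not the exact prompts we used:

```python
def build_paraphrase_prompt(question: str, n: int = 3) -> str:
    """Assemble a simple instruction prompt asking an LLM for paraphrases.

    The exact wording is illustrative, not the prompt used in the project.
    """
    return (
        f"Generate {n} paraphrases of the following question, "
        f"one per line, numbered 1-{n}.\n"
        f"Question: {question}\n"
        "Paraphrases:"
    )

def parse_paraphrases(completion: str) -> list[str]:
    """Extract numbered lines like '1. ...' or '2) ...' from a completion."""
    paraphrases = []
    for line in completion.splitlines():
        line = line.strip()
        if line and line[0].isdigit():
            # Drop the leading number and its separator ('1. ' or '1) ').
            paraphrases.append(line.lstrip("0123456789.) ").strip())
    return paraphrases
```

For example, a completion of `"1. What degrees are offered?\n2) Which programs can I study?"` parses into two clean paraphrases ready to be added as training utterances.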
For CIDDA-GPT, we considered different open-source alternatives and ended up exploring two options: Meta's LLaMA, deployed on Amazon EC2, and Falcon-7B, available through Amazon SageMaker JumpStart.
We used SageMaker Studio as the IDE for Python Jupyter notebooks. To set up the environment, a GPU instance needs to be selected; for example, Falcon-7B needs a GPU instance from the G5 family to load the model for inference.
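Invoking a deployed endpoint itself requires AWS credentials, so the sketch below only shows the shape of the request payload one would send to a Falcon-7B-Instruct endpoint deployed via SageMaker JumpStart. The `inputs`/`parameters` field names follow the common Hugging Face text-generation convention, and the parameter values are illustrative assumptions; verify both against your own endpoint:

```python
def build_falcon_payload(prompt: str, max_new_tokens: int = 100) -> dict:
    """Build a JSON payload for a Hugging Face text-generation endpoint.

    Field names follow the usual 'inputs'/'parameters' convention used by
    text-generation models on SageMaker JumpStart (assumed, not verified
    against a live endpoint here).
    """
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.7,   # illustrative sampling settings
            "do_sample": True,
        },
    }

# With a deployed endpoint, the call would then look roughly like:
#   predictor.predict(build_falcon_payload("Paraphrase: What is the ATAR?"))
```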
In our deployment of LLaMA-1, we used checkpoints from Meta and chose the 7B version, aiming to minimise execution costs. CIDDA-GPT's main application was to rephrase questions and answers, which would later feed our conversational agent Walert.
We conducted a qualitative comparison between LLaMA-1 and Falcon-7B-Instruct; the latter seemed more robust, delivering the more human-like paraphrases of the questions and answers.
The team
The team working on the CIDDA-GPT project and Walert comprises HDR students, research fellows, and members from [ADM+S](https://www.admscentre.org) and [RMIT CIDDA](https://www.rmit.edu.au/cidda), with expertise in information access and retrieval, conversational user interfaces, natural language processing, software engineering, and machine learning: Sachin Pathiyan Cherumanal, Kaixin Ji, Lin Tian, Futoon Abu Shaqra, Angel Felipe Magnossão de Paula, Danula Hettiachchi, Halil Ali, Johanne Trippas, Falk Scholer, and Damiano Spina.
Acknowledgements
Walert was designed and developed in the unceded lands of the Woi Wurrung and Boon Wurrung peoples, as part of the CIDDA-GPT project. CIDDA-GPT is partially supported by the Australian Research Council (DE200100064, CE200100005) and the RMIT RACE Hub. We thank Amina Hossain and Santha Sumanasekara for their valuable contributions.