Project group Facts4Chat
Current generative language models such as ChatGPT have issues with factual correctness. They work well for creative writing, but they are far less useful in a professional or scientific context, because every answer has to be checked for factual accuracy. For example, a chatbot answering product questions on a company's website must state the correct product features and must not recommend competitors' products. Many projects are therefore underway to improve this by integrating information retrieval functionality (e.g., via plugins in ChatGPT): the retrieval component supplies the facts, and the generative model only provides the conversational interface.
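As a rough illustration of this retrieval-augmented setup, the Python sketch below retrieves passages from a local index and places them in the prompt before calling a generative model. The `search` and `generate` functions are placeholders for whatever index and open-source chat model the project ends up using, not a fixed API.

```python
# Minimal sketch of retrieval-augmented answering. `search` and `generate`
# are assumed helpers (local index lookup and open-source chat model call).
def answer_with_retrieval(question: str, search, generate, k: int = 3) -> str:
    # Look up the k most relevant passages in the domain-specific index.
    passages = search(question, top_k=k)
    # Put the retrieved facts into the prompt so the model only has to
    # verbalize them instead of recalling facts from its parameters.
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using only the facts below.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)
```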
The tasks of this project group are to:
- set up their own chatbot using state-of-the-art open-source models
- build a local search index over domain-specific corpora (see the index sketch after this list)
- use the search index to improve the answers of the chatbot
- evaluate the quality of answers of different models and the effect of the index
- perform model fine-tuning using feedback obtained from users (see the fine-tuning sketch after this list)
- write a progress report and a final report for the project
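The local search index can be built, for example, with sentence embeddings in the spirit of Sentence-BERT (Reimers and Gurevych, see the literature below). The sketch below uses the sentence-transformers library; the model name and the brute-force cosine-similarity search are assumptions chosen for brevity, and a larger corpus would call for an approximate-nearest-neighbour index.

```python
# Sketch of a small embedding index over a local corpus.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example encoder

def build_index(passages: list[str]) -> np.ndarray:
    # Encode every passage once; normalize so a dot product equals cosine similarity.
    emb = model.encode(passages, convert_to_numpy=True)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def search(query: str, passages: list[str], index: np.ndarray, top_k: int = 3) -> list[str]:
    q = model.encode([query], convert_to_numpy=True)[0]
    q = q / np.linalg.norm(q)
    best = np.argsort(-(index @ q))[:top_k]
    return [passages[i] for i in best]

# Example: index two product facts and retrieve the one relevant to a question.
docs = ["Product X has a two-year warranty.", "Product Y weighs 1.2 kg."]
idx = build_index(docs)
print(search("How long is the warranty of product X?", docs, idx, top_k=1))
```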
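For the fine-tuning task, one lightweight option is parameter-efficient fine-tuning with LoRA adapters (see Lialin et al. in the literature below), trained on question/answer pairs that users rated positively; this is a much simpler stand-in for the RLHF-style pipelines of Ziegler et al. and Ouyang et al. The checkpoint name, target modules, and hyperparameters in this sketch are assumptions, not project requirements.

```python
# Sketch: wrap an open causal LM with LoRA adapters via Hugging Face `peft`,
# so only small low-rank matrices are trained on user-approved answers.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-1.4b"  # assumed example; any open causal LM could be used
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,  # assumed hyperparameters
    target_modules=["query_key_value"],     # attention projection in this architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights remain trainable

# A list of (question, answer) pairs rated as helpful by users would then be
# formatted as prompts and fed to an ordinary causal-LM training loop
# (e.g. transformers.Trainer) over the wrapped model.
```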
A longer description is available on the German version of this page.
Literature
- A. Vaswani, N. Shazeer, N. Parmar et al. “Attention is All you Need”. In: NeurIPS. 2017, pp. 5998–6008. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.
- A. Radford, K. Narasimhan, T. Salimans et al. Improving language understanding by generative pre-training. 2018. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.
- J. Devlin, M. Chang, K. Lee et al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. In: NAACL-HLT. 2019, pp. 4171–4186. https://doi.org/10.18653/v1/n19-1423.
- N. Reimers and I. Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks”. In: EMNLP-IJCNLP. 2019, pp. 3980–3990. https://doi.org/10.18653/v1/D19-1410.
- D. M. Ziegler, N. Stiennon, J. Wu et al. Fine-Tuning Language Models from Human Preferences. 2019. https://doi.org/10.48550/arXiv.1909.08593.
- T. B. Brown, B. Mann, N. Ryder et al. “Language Models are Few-Shot Learners”. In: NeurIPS. 2020. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.
- N. Stiennon, L. Ouyang, J. Wu et al. Learning to summarize from human feedback. 2020. https://doi.org/10.48550/arXiv.2009.01325.
- Y. Bai, S. Kadavath, S. Kundu et al. Constitutional AI: Harmlessness from AI Feedback. 2022. https://doi.org/10.48550/arXiv.2212.08073.
- S. Borgeaud, A. Mensch, J. Hoffmann et al. “Improving Language Models by Retrieving from Trillions of Tokens”. In: International Conference on Machine Learning, ICML. Vol. 162. Proceedings of Machine Learning Research. 2022, pp. 2206–2240. https://proceedings.mlr.press/v162/borgeaud22a.html.
- T. Dao, D. Y. Fu, S. Ermon et al. “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness”. In: NeurIPS. 2022. http://papers.nips.cc/paper/2022/hash/67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html.
- J. Geiping and T. Goldstein. Cramming: Training a Language Model on a Single GPU in One Day. 2022. https://doi.org/10.48550/arXiv.2212.14034.
- J. Hoffmann, S. Borgeaud, A. Mensch et al. Training Compute-Optimal Large Language Models. 2022. https://doi.org/10.48550/arXiv.2203.15556.
- L. Ouyang, J. Wu, X. Jiang et al. “Training language models to follow instructions with human feedback”. In: NeurIPS. 2022. http://papers.nips.cc/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html.
- K. Shuster, J. Xu, M. Komeili et al. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage. 2022. https://doi.org/10.48550/arXiv.2208.03188.
- R. Thoppilan, D. D. Freitas, J. Hall et al. LaMDA: Language Models for Dialog Applications. 2022. https://doi.org/10.48550/arXiv.2201.08239.
- V. Lialin, V. Deshpande and A. Rumshisky. Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning. 2023. https://doi.org/10.48550/arXiv.2303.15647.
- R. Taori, I. Gulrajani, T. Zhang et al. Stanford Alpaca: An Instruction-following LLaMA model. GitHub repository. 2023. https://github.com/tatsu-lab/stanford_alpaca.
- H. Touvron, T. Lavril, G. Izacard et al. LLaMA: Open and Efficient Foundation Language Models. 2023. https://doi.org/10.48550/arXiv.2302.13971.