Project group Facts4Chat

Current generative language models such as ChatGPT have issues with factual correctness. They work well for creative writing, but they are far less useful in a professional or scientific context, because every answer has to be checked for factual accuracy. For example, a chatbot that answers product questions on a company's website must state the product features correctly and must not recommend competitors' products. Many projects are therefore underway to address this by integrating information retrieval functionality (e.g., via plugins in ChatGPT): the retrieved documents supply the facts, and the generative model only provides the conversational user interface.
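
To make the retrieval idea concrete, here is a minimal sketch of such a retrieval-augmented pipeline, assuming the sentence-transformers library [4] is installed; the fact list, the embedding model name, and the final generation step are illustrative assumptions, not part of the project description.

```python
# Minimal retrieval-augmented answering sketch (illustrative only):
# facts are retrieved by embedding similarity, and the generative model
# would only be asked to verbalise them.
from sentence_transformers import SentenceTransformer, util

# Tiny in-memory knowledge base of product facts (made-up examples).
facts = [
    "The X200 laptop ships with 16 GB of RAM and a 512 GB SSD.",
    "The X200 battery lasts up to 12 hours of light office use.",
    "The X100 model has been discontinued and replaced by the X200.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # cf. Sentence-BERT [4]
fact_embeddings = embedder.encode(facts, convert_to_tensor=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    # 1) Retrieve the most relevant facts via embedding similarity.
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, fact_embeddings, top_k=top_k)[0]
    context = "\n".join(facts[hit["corpus_id"]] for hit in hits)

    # 2) Constrain the generative model to the retrieved facts; the model
    #    acts only as the conversational user interface.
    return (
        "Answer the question using ONLY the facts below.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# In a real system this prompt would be sent to an instruction-tuned chat
# model [13, 17, 18]; here we only print it.
print(build_prompt("How much memory does the X200 have?"))
```

The same pattern scales from a hand-written fact list to a full search index over product documentation; only the retrieval step changes.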

The task of this project group is to build such a fact-grounded chat system; a longer description is available on the German version of this page.

Literature

  1. A. Vaswani, N. Shazeer, N. Parmar et al. “Attention is All you Need”. In: NeurIPS. 2017, pp. 5998–6008. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.
  2. A. Radford, K. Narasimhan, T. Salimans et al. Improving language understanding by generative pre-training. 2018. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.
  3. J. Devlin, M. Chang, K. Lee et al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. In: NAACL-HLT. 2019, pp. 4171–4186. https://doi.org/10.18653/v1/n19-1423.
  4. N. Reimers and I. Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks”. In: EMNLP-IJCNLP. 2019, pp. 3980–3990. https://doi.org/10.18653/v1/D19-1410.
  5. D. M. Ziegler, N. Stiennon, J. Wu et al. Fine-Tuning Language Models from Human Preferences. 2019. https://doi.org/10.48550/arXiv.1909.08593.
  6. T. B. Brown, B. Mann, N. Ryder et al. “Language Models are Few-Shot Learners”. In: NeurIPS. 2020. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.
  7. N. Stiennon, L. Ouyang, J. Wu et al. Learning to summarize from human feedback. 2020. https://doi.org/10.48550/arXiv.2009.01325.
  8. Y. Bai, S. Kadavath, S. Kundu et al. Constitutional AI: Harmlessness from AI Feedback. 2022. https://doi.org/10.48550/arXiv.2212.08073.
  9. S. Borgeaud, A. Mensch, J. Hoffmann et al. “Improving Language Models by Retrieving from Trillions of Tokens”. In: International Conference on Machine Learning, ICML. Vol. 162. Proceedings of Machine Learning Research. 2022, pp. 2206–2240. https://proceedings.mlr.press/v162/borgeaud22a.html.
  10. T. Dao, D. Y. Fu, S. Ermon et al. “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness”. In: NeurIPS. 2022. http://papers.nips.cc/paper/2022/hash/67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html.
  11. J. Geiping and T. Goldstein. Cramming: Training a Language Model on a Single GPU in One Day. 2022. https://doi.org/10.48550/arXiv.2212.14034.
  12. J. Hoffmann, S. Borgeaud, A. Mensch et al. Training Compute-Optimal Large Language Models. 2022. https://doi.org/10.48550/arXiv.2203.15556.
  13. L. Ouyang, J. Wu, X. Jiang et al. “Training language models to follow instructions with human feedback”. In: NeurIPS. 2022. http://papers.nips.cc/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html.
  14. K. Shuster, J. Xu, M. Komeili et al. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage. 2022. https://doi.org/10.48550/arXiv.2208.03188.
  15. R. Thoppilan, D. D. Freitas, J. Hall et al. LaMDA: Language Models for Dialog Applications. 2022. https://doi.org/10.48550/arXiv.2201.08239.
  16. V. Lialin, V. Deshpande and A. Rumshisky. Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning. 2023. https://doi.org/10.48550/arXiv.2303.15647.
  17. R. Taori, I. Gulrajani, T. Zhang et al. Stanford Alpaca: An Instruction-following LLaMA model. GitHub repository. 2023. https://github.com/tatsu-lab/stanford_alpaca.
  18. H. Touvron, T. Lavril, G. Izacard et al. LLaMA: Open and Efficient Foundation Language Models. 2023. https://doi.org/10.48550/arXiv.2302.13971.