taesiri committed on
Commit
6bb8582
1 Parent(s): ae3dbad

Upload abstract/2310.11511.txt with huggingface_hub

Files changed (1)
  1. abstract/2310.11511.txt +1 -0
abstract/2310.11511.txt ADDED
@@ -0,0 +1 @@
+ Despite their remarkable capabilities, large language models often produce responses containing factual inaccuracies because they rely solely on the parametric knowledge they encapsulate. Retrieval-Augmented Generation (RAG), an ad hoc approach that augments language models with retrieval of relevant knowledge, reduces such issues. However, indiscriminately retrieving and incorporating a fixed number of passages, regardless of whether retrieval is necessary or the passages are relevant, diminishes the language model's versatility or can lead to unhelpful responses. We introduce a new framework called Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances a language model's quality and factuality through retrieval and self-reflection. Our framework trains a single arbitrary language model that adaptively retrieves passages on demand, and that generates and reflects on retrieved passages and its own generations using special tokens called reflection tokens. Generating reflection tokens makes the language model controllable during inference, enabling it to tailor its behavior to diverse task requirements. Experiments show that Self-RAG significantly outperforms state-of-the-art language models and retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG outperforms ChatGPT and retrieval-augmented Llama2-chat on open-domain QA, reasoning, and fact-verification tasks, and it shows significant gains in factuality and citation accuracy for long-form generations relative to these models.
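
For readers skimming the diff, a minimal sketch of the kind of inference loop the abstract describes may help. It is not the authors' released code: ToyLM, ToyRetriever, token_prob, generate, and search are hypothetical stand-ins, and the bracketed reflection-token strings are illustrative placeholders for the paper's retrieval / relevance / support / utility categories rather than the exact vocabulary.

```python
# Illustrative sketch only: class and method names are hypothetical stand-ins,
# not the released Self-RAG implementation or its API.

class ToyLM:
    """Stand-in for a language model trained to emit reflection tokens."""

    def token_prob(self, context: str, token: str) -> float:
        # Hypothetical: probability that the model emits `token` next.
        return 0.9 if token == "[Retrieve]" else 0.6

    def generate(self, context: str) -> str:
        # Hypothetical: plain-text continuation of `context`.
        return "draft answer conditioned on: " + context[:40]


class ToyRetriever:
    """Stand-in for a passage retriever."""

    def search(self, query: str, k: int = 5) -> list[str]:
        return [f"passage {i} about {query[:20]}" for i in range(k)]


def selfrag_generate(prompt: str, lm: ToyLM, retriever: ToyRetriever, k: int = 5) -> str:
    # 1. Adaptive retrieval: the model itself signals whether retrieval is needed.
    if lm.token_prob(prompt, "[Retrieve]") < 0.5:
        return lm.generate(prompt)  # answer from parametric knowledge only

    # 2. Generate one candidate continuation per retrieved passage.
    scored = []
    for passage in retriever.search(prompt, k=k):
        context = f"{prompt}\n[Passage] {passage}"
        output = lm.generate(context)
        # 3. Critique-style reflection tokens grade passage relevance, support
        #    for the output, and overall utility (token strings are illustrative).
        score = (
            lm.token_prob(context, "[Relevant]")
            + lm.token_prob(context + output, "[Fully supported]")
            + lm.token_prob(context + output, "[Utility:5]")
        )
        scored.append((score, output))

    # 4. Keep the best-scored candidate (segment-level re-ranking).
    return max(scored, key=lambda pair: pair[0])[1]


if __name__ == "__main__":
    print(selfrag_generate("Who wrote 'On the Origin of Species'?", ToyLM(), ToyRetriever()))
```

Because the candidate score is built from reflection-token probabilities, the relative weight given to relevance, support, and utility can be adjusted at inference time, which is one way to read the abstract's claim that reflection tokens make the model controllable across different task requirements.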