# Modules

[Figure 3: side-by-side pipelines of Naive RAG, Advanced RAG, and Modular RAG, showing modules such as indexing, retrieval, rerank, rewrite, routing, fusion, demonstrate, and predict.]
Fig. 3. Comparison between the three paradigms of RAG. (Left) Naive RAG mainly consists of three parts: indexing, retrieval, and generation. (Middle) Advanced RAG proposes multiple optimization strategies around pre-retrieval and post-retrieval, with a process similar to Naive RAG, still following a chain-like structure. (Right) Modular RAG inherits and develops from the previous paradigms, showcasing greater flexibility overall. This is evident in the introduction of multiple specific functional modules and the replacement of existing modules. The overall process is not limited to sequential retrieval and generation; it includes methods such as iterative and adaptive retrieval.

Pre-retrieval process. In this stage, the primary focus is on optimizing the indexing structure and the original query. The goal of indexing optimization is to enhance the quality of the content being indexed. This involves strategies such as enhancing data granularity, optimizing index structures, adding metadata, alignment optimization, and mixed retrieval. The goal of query optimization, meanwhile, is to make the user's original question clearer and more suitable for the retrieval task. Common methods include query rewriting, query transformation, query expansion, and other techniques [7], [9]–[11].

Post-retrieval process. Once relevant context is retrieved, it is crucial to integrate it effectively with the query. The main methods in the post-retrieval process include reranking chunks and compressing context. Re-ranking the retrieved information to relocate the most relevant content to the edges of the prompt is a key strategy.
This concept has been implemented in frameworks such as LlamaIndex 2, LangChain 3, and Haystack [12]. Feeding all relevant documents directly into LLMs can lead to information overload, diluting the focus on key details with irrelevant content. To mitigate this, post-retrieval efforts concentrate on selecting the essential information, emphasizing critical sections, and shortening the context to be processed.

Modular RAG. The modular RAG architecture advances beyond the former two RAG paradigms, offering enhanced adaptability and versatility. It incorporates diverse strategies for improving its components, such as adding a search module for similarity searches and refining the retriever through fine-tuning. Innovations like restructured RAG modules [13] and rearranged RAG pipelines [14] have been introduced to tackle specific challenges. The shift towards a modular RAG approach is becoming prevalent, supporting both sequential processing and integrated end-to-end training across its components. Despite its distinctiveness, Modular RAG builds upon the foundational principles of Advanced and Naive RAG, illustrating a progression and refinement within the RAG family.

New Modules: The Modular RAG framework introduces additional specialized components to enhance retrieval and processing capabilities. The Search module adapts to specific scenarios, enabling direct searches across various data sources such as search engines, databases, and knowledge graphs, using LLM-generated code and query languages [15]. RAG-Fusion addresses traditional search limitations by employing a multi-query strategy that expands user queries into diverse perspectives, utilizing parallel vector searches and intelligent re-ranking to uncover both explicit and transformative knowledge [16]. The Memory module leverages the LLM's memory to guide retrieval, creating an unbounded memory pool that is refined through iterative self-enhancement [17].

2 https://www.llamaindex.ai
3 https://www.langchain.com/
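To make the RAG-Fusion idea above concrete, the sketch below fans a user query out into several reformulations, runs each against a retriever, and merges the ranked lists with reciprocal rank fusion. The `generate_query_variants` and `vector_search` callables are hypothetical placeholders rather than the API of any particular framework, and RAG-Fusion itself may differ in its details.

```python
from collections import defaultdict
from typing import Callable, List

def rag_fusion_retrieve(
    query: str,
    generate_query_variants: Callable[[str, int], List[str]],  # hypothetical LLM hook
    vector_search: Callable[[str, int], List[str]],            # hypothetical retriever hook
    n_variants: int = 4,
    top_k: int = 5,
    rrf_k: int = 60,
) -> List[str]:
    """Expand the query, search once per variant, and fuse the rankings with RRF."""
    variants = [query] + generate_query_variants(query, n_variants)
    scores = defaultdict(float)
    for q in variants:
        for rank, doc_id in enumerate(vector_search(q, top_k)):
            # Reciprocal rank fusion: documents ranked highly by many variants win.
            scores[doc_id] += 1.0 / (rrf_k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```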
# III. RETRIEVAL
In the context of RAG, it is crucial to efficiently retrieve relevant documents from the data source. This involves several key issues, such as the retrieval source, retrieval granularity, pre-processing of retrieval, and selection of the corresponding embedding model.

# A. Retrieval Source
RAG relies on external knowledge to enhance LLMs, and both the type of retrieval source and the granularity of retrieval units affect the final generation results.
1) Data Structure: Initially, text was the mainstream source of retrieval. Subsequently, the retrieval source expanded to include semi-structured data (e.g., PDFs) and structured data (e.g., knowledge graphs, KGs) for enhancement. In addition to retrieving from original external sources, there is also a growing trend in recent research towards utilizing content generated by LLMs themselves for retrieval and enhancement purposes.
# TABLE I
|Method|Retrieval Source|Retrieval Data Type|Retrieval Granularity|Augmentation Stage|Retrieval Process|
|---|---|---|---|---|---|
|CoG [29]|Wikipedia|Text|Phrase|Pre-training|Iterative|
|DenseX [30]|FactoidWiki|Text|Proposition|Inference|Once|
|EAR [31]|Dataset-base|Text|Sentence|Tuning|Once|
|UPRISE [20]|Dataset-base|Text|Sentence|Tuning|Once|
|RAST [32]|Dataset-base|Text|Sentence|Tuning|Once|
|Self-Mem [17]|Dataset-base|Text|Sentence|Tuning|Iterative|
|FLARE [24]|Search Engine, Wikipedia|Text|Sentence|Tuning|Adaptive|
|PGRA [33]|Wikipedia|Text|Sentence|Inference|Once|
|FILCO [34]|Wikipedia|Text|Sentence|Inference|Once|
|RADA [35]|Dataset-base|Text|Sentence|Inference|Once|
|Filter-rerank [36]|Synthesized dataset|Text|Sentence|Inference|Once|
|R-GQA [37]|Dataset-base|Text|Sentence Pair|Tuning|Once|
|LLM-R [38]|Dataset-base|Text|Sentence Pair|Inference|Iterative|
|TIGER [39]|Dataset-base|Text|Item-base|Pre-training|Once|
|LM-Indexer [40]|Dataset-base|Text|Item-base|Tuning|Once|
|BEQUE [9]|Dataset-base|Text|Item-base|Tuning|Once|
|CT-RAG [41]|Synthesized dataset|Text|Item-base|Tuning|Once|
|Atlas [42]|Wikipedia, Common Crawl|Text|Chunk|Pre-training|Iterative|
|RAVEN [43]|Wikipedia|Text|Chunk|Pre-training|Once|
|RETRO++ [44]|Pre-training Corpus|Text|Chunk|Pre-training|Iterative|
|INSTRUCTRETRO [45]|Pre-training corpus|Text|Chunk|Pre-training|Iterative|
|RRR [7]|Search Engine|Text|Chunk|Tuning|Once|
|RA-e2e [46]|Dataset-base|Text|Chunk|Tuning|Once|
|PROMPTAGATOR [21]|BEIR|Text|Chunk|Tuning|Once|
|AAR [47]|MSMARCO, Wikipedia|Text|Chunk|Tuning|Once|
|RA-DIT [27]|Common Crawl, Wikipedia|Text|Chunk|Tuning|Once|
|RAG-Robust [48]|Wikipedia|Text|Chunk|Tuning|Once|
|RA-Long-Form [49]|Dataset-base|Text|Chunk|Tuning|Once|
|CoN [50]|Wikipedia|Text|Chunk|Tuning|Once|
|Self-RAG [25]|Wikipedia|Text|Chunk|Tuning|Adaptive|
|BGM [26]|Wikipedia|Text|Chunk|Inference|Once|
|CoQ [51]|Wikipedia|Text|Chunk|Inference|Iterative|
|Token-Elimination [52]|Wikipedia|Text|Chunk|Inference|Once|
|PaperQA [53]|Arxiv, Online Database, PubMed|Text|Chunk|Inference|Iterative|
|NoiseRAG [54]|FactoidWiki|Text|Chunk|Inference|Once|
|IAG [55]|Search Engine, Wikipedia|Text|Chunk|Inference|Once|
|NoMIRACL [56]|Wikipedia|Text|Chunk|Inference|Once|
|ToC [57]|Search Engine, Wikipedia|Text|Chunk|Inference|Recursive|
|SKR [58]|Dataset-base, Wikipedia|Text|Chunk|Inference|Adaptive|
|ITRG [59]|Wikipedia|Text|Chunk|Inference|Iterative|
|RAG-LongContext [60]|Dataset-base|Text|Chunk|Inference|Once|
|ITER-RETGEN [14]|Wikipedia|Text|Chunk|Inference|Iterative|
|IRCoT [61]|Wikipedia|Text|Chunk|Inference|Recursive|
|LLM-Knowledge-Boundary [62]|Wikipedia|Text|Chunk|Inference|Once|
|RAPTOR [63]|Dataset-base|Text|Chunk|Inference|Recursive|
|RECITE [22]|LLMs|Text|Chunk|Inference|Once|
|ICRALM [64]|Pile, Wikipedia|Text|Chunk|Inference|Iterative|
|Retrieve-and-Sample [65]|Dataset-base|Text|Doc|Tuning|Once|
|Zemi [66]|C4|Text|Doc|Tuning|Once|
|CRAG [67]|Arxiv|Text|Doc|Inference|Once|
|1-PAGER [68]|Wikipedia|Text|Doc|Inference|Iterative|
|PRCA [69]|Dataset-base|Text|Doc|Inference|Once|
|QLM-Doc-ranking [70]|Dataset-base|Text|Doc|Inference|Once|
|Recomp [71]|Wikipedia|Text|Doc|Inference|Once|
|DSP [23]|Wikipedia|Text|Doc|Inference|Iterative|
|RePLUG [72]|Pile|Text|Doc|Inference|Once|
|ARM-RAG [73]|Dataset-base|Text|Doc|Inference|Iterative|
|GenRead [13]|LLMs|Text|Doc|Inference|Iterative|
|UniMS-RAG [74]|Dataset-base|Text|Multi|Tuning|Once|
|CREA-ICL [19]|Dataset-base|Crosslingual, Text|Sentence|Inference|Once|
|PKG [75]|LLM|Tabular, Text|Chunk|Inference|Once|
|SANTA [76]|Dataset-base|Code, Text|Item|Pre-training|Once|
|SURGE [77]|Freebase|KG|Sub-Graph|Tuning|Once|
|MK-ToD [78]|Dataset-base|KG|Entity|Tuning|Once|
|Dual-Feedback-ToD [79]|Dataset-base|KG|Entity Sequence|Tuning|Once|
|KnowledGPT [15]|Dataset-base|KG|Triplet|Inference|Multi-time|
|FABULA [80]|Dataset-base, Graph|KG|Entity|Inference|Once|
|HyKGE [81]|CMeKG|KG|Entity|Inference|Once|
|KALMV [82]|Wikipedia|KG|Triplet|Inference|Iterative|
|RoG [83]|Freebase|KG|Triplet|Inference|Iterative|
|G-Retriever [84]|Dataset-base|TextGraph|Sub-Graph|Inference|Once|
Fig. 4. RAG compared with other model optimization methods in the aspects of "External Knowledge Required" and "Model Adaptation Required". Prompt engineering requires little modification of the model and little external knowledge, focusing on harnessing the capabilities of LLMs themselves. Fine-tuning, on the other hand, involves further training the model. In the early stages of RAG (Naive RAG), there is a low demand for model modifications. As research progresses, Modular RAG has become more integrated with fine-tuning techniques.

Unstructured data, such as text, is the most widely used retrieval source and is mainly gathered from corpora. For open-domain question-answering (ODQA) tasks, the primary retrieval source is the Wikipedia dump, with current major versions including those used by HotpotQA (1 October 2017) and DPR (20 December 2018). In addition to encyclopedic data, common unstructured data includes cross-lingual text [19] and domain-specific data (such as the medical [67] and legal [29] domains).

Semi-structured data typically refers to data that contains a combination of text and table information, such as PDF files. Handling semi-structured data poses challenges for conventional RAG systems for two main reasons. First, text-splitting processes may inadvertently separate tables, leading to data corruption during retrieval. Second, incorporating tables into the data can complicate semantic similarity searches. When dealing with semi-structured data, one approach involves leveraging the code capabilities of LLMs to execute Text-2-SQL queries on tables within databases, such as TableGPT [85]. Alternatively, tables can be transformed into text format for further analysis using text-based methods [75]. However, neither of these methods is an optimal solution, indicating substantial research opportunities in this area.

Structured data, such as knowledge graphs (KGs) [86], is typically verified and can provide more precise information. KnowledGPT [15] generates KB search queries and stores knowledge in a personalized base, enhancing the RAG model's knowledge richness. In response to the limitations of LLMs in understanding and answering questions about textual graphs, G-Retriever [84] integrates Graph Neural Networks (GNNs), LLMs, and RAG, enhancing graph comprehension and question-answering capabilities through soft prompting of the LLM, and employs the Prize-Collecting Steiner Tree (PCST) optimization problem for targeted graph retrieval. On the other hand, building, validating, and maintaining structured databases requires additional effort.
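To illustrate the two workarounds for semi-structured (table) data mentioned above, the sketch below shows a simple table-to-text linearization and a Text-2-SQL prompt builder. The helper names, the prompt wording, and the example schema and values are illustrative assumptions rather than the interface of any cited system.

```python
from typing import Dict, List

def linearize_table(headers: List[str], rows: List[List[str]]) -> str:
    """Flatten a table into sentences so a text-only retriever/LLM can consume it."""
    lines = []
    for row in rows:
        cells = ", ".join(f"{h} is {v}" for h, v in zip(headers, row))
        lines.append(f"Row: {cells}.")
    return "\n".join(lines)

def text_to_sql_prompt(question: str, schema: Dict[str, List[str]]) -> str:
    """Build a prompt asking an LLM to write SQL over a known table schema."""
    schema_str = "\n".join(f"TABLE {t}({', '.join(cols)})" for t, cols in schema.items())
    return (
        "Given the following database schema, write a single SQL query that "
        f"answers the question.\n\n{schema_str}\n\nQuestion: {question}\nSQL:"
    )

# Example usage with placeholder data
print(linearize_table(["quarter", "revenue"], [["Q3", "100"]]))
print(text_to_sql_prompt("What was the revenue in Q3?", {"earnings": ["quarter", "revenue"]}))
```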
LLMs-Generated Content. Addressing the limitations of external auxiliary information in RAG, some research has focused on exploiting LLMs' internal knowledge. SKR [58] classifies questions as known or unknown, applying retrieval enhancement selectively. GenRead [13] replaces the retriever with an LLM generator, finding that LLM-generated contexts often contain more accurate answers due to better alignment with the pre-training objectives of causal language modeling. Self-Mem [17] iteratively creates an unbounded memory pool with a retrieval-enhanced generator, using a memory selector to choose outputs that serve as dual problems to the original question, thus self-enhancing the generative model. These methodologies underscore the breadth of innovative data source utilization in RAG, striving to improve model performance and task effectiveness.

2) Retrieval Granularity: Another important factor besides the data format of the retrieval source is the granularity of the retrieved data. Coarse-grained retrieval units can theoretically provide more relevant information for the problem, but they may also contain redundant content, which can distract the retriever and the language model in downstream tasks [50], [87]. On the other hand, fine-grained retrieval units increase the burden of retrieval and do not guarantee semantic integrity or coverage of the required knowledge. Choosing
the appropriate retrieval granularity during inference can be a simple and effective strategy to improve the retrieval and downstream task performance of dense retrievers. In text, retrieval granularity ranges from fine to coarse, including Token, Phrase, Sentence, Proposition, Chunk, and Document. Among them, DenseX [30] proposed the concept of using propositions as retrieval units. Propositions are defined as atomic expressions in the text, each encapsulating a unique factual segment and presented in a concise, self-contained natural-language format. This approach aims to enhance retrieval precision and relevance. On knowledge graphs, retrieval granularity includes Entity, Triplet, and Sub-Graph. The granularity of retrieval can also be adapted to downstream tasks, such as retrieving Item IDs [40] in recommendation tasks and Sentence Pairs [38]. Detailed information is illustrated in Table I.

# Indexing Optimization
In the indexing phase, documents are processed, segmented, and transformed into embeddings to be stored in a vector database. The quality of index construction determines whether the correct context can be obtained in the retrieval phase.

# Chunking Strategy:
The most common method is to split the document into chunks based on a fixed number of tokens (e.g., 100, 256, 512) [88]. Larger chunks can capture more context, but they also generate more noise, requiring longer processing time and higher costs. Smaller chunks may not fully convey the necessary context, but they do contain less noise. However, fixed-size chunks can truncate sentences, prompting optimizations such as recursive splitting and sliding-window methods, which enable layered retrieval by merging globally related information across multiple retrieval processes [89]. Nevertheless, these approaches still cannot strike a balance between semantic completeness and context length. Therefore, methods like Small2Big have been proposed, where sentences (small) are used as the retrieval unit, and the preceding and following sentences are provided as (big) context to the LLM [90].

# Metadata Attachments:
Chunks can be enriched with metadata information such as page number, file name, author, category, and timestamp. Retrieval can then be filtered based on this metadata, limiting the scope of the search. Assigning different weights to document timestamps during retrieval can achieve time-aware RAG, ensuring the freshness of knowledge and avoiding outdated information. In addition to extracting metadata from the original documents, metadata can also be artificially constructed.
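As a concrete illustration of the fixed-size chunking and metadata ideas above, here is a minimal sketch; whitespace tokens stand in for real tokenizer tokens, and the helper names are illustrative rather than taken from any specific framework.

```python
from typing import Dict, List

def chunk_with_overlap(text: str, chunk_size: int = 256, overlap: int = 32) -> List[str]:
    """Fixed-size chunking with a sliding window; whitespace-separated words
    stand in for tokenizer tokens."""
    tokens = text.split()
    step = max(chunk_size - overlap, 1)
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), step)]

def attach_metadata(chunks: List[str], source: str, author: str, timestamp: str) -> List[Dict]:
    """Pair each chunk with filterable metadata (file name, author, timestamp, ...);
    filtering on these fields at query time narrows the search scope and enables
    time-aware retrieval."""
    return [
        {"text": c, "source": source, "author": author, "timestamp": timestamp, "chunk_id": i}
        for i, c in enumerate(chunks)
    ]
```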
Figure 2: MultiHop-RAG construction pipeline (dataset collection and news selection, evidence extraction, claim / bridge-entity / bridge-topic generation, and query and answer generation for comparison, inference, temporal, and null query types).

# A Benchmarking Dataset: MultiHop-RAG
In this section, we provide detailed information on the construction of the MultiHop-RAG dataset. Specifically, we describe the process of creating a set of multi-hop queries, along with the corresponding ground truth evidence sets and answers derived from a collection of news articles.

# MultiHop-RAG Construction
Step 1: Dataset Collection. We download a news dataset using the mediastack API, a REST API interface delivering worldwide news data. The news data source comprises various English-language websites covering a range of news categories: entertainment, business, sports, technology, health, and science. To mimic real-world RAG scenarios, where the knowledge base data, such as an enterprise's internal data, may differ from the LLMs' training data, we select news articles published from September 26, 2023, to December 26, 2023. This timeframe extends beyond the knowledge cutoff of some widely used LLMs, including ChatGPT and LLaMA, as of the time of writing. Each news article is paired with metadata, including the title, publish date, author, category, URL, and news source.

Step 2: Evidence Extraction. For each article, we extract factual or opinion sentences using a trained language model. These factual sentences are later used as evidence for answering multi-hop queries.

Step 3: Claim, Bridge-Entity, and Bridge-Topic Generation. Our goal is to use GPT-4 to automatically generate high-quality multi-hop queries using the evidence set. However, the raw evidence obtained from Step 2 is not ideal for query generation due to inconsistency in linguistic structure. For example, some pieces of evidence use pronouns to refer to subjects and lack the actual entity in the text. To address this, we employ GPT-4 to paraphrase the evidence, which we refer to as claims, given the original evidence and its context. To ensure consistency between the generated claim and the evidence, we further perform fact-checking using the UniEval (Zhong et al., 2022) framework to verify the alignment between the evidence and the claim. Appendix A presents the prompt used for GPT-4 for claim generation.

Bridge-Entity and Bridge-Topic: The shared entity or topic across pieces of evidence is referred to as the bridge-entity or bridge-topic. These bridge-entities or bridge-topics can be used to link different pieces of evidence from which a multi-hop query's answer is derived. For example, in a claim such as "Google reports its third-quarter results for 2023, showcasing a detailed overview of its financial performance, including revenue growth, profit margins", the term profit margin can be viewed as a bridge-topic and the term Google can be viewed as a bridge-entity that links the different pieces of evidence. We prompt GPT-4 to identify the bridge-entity and bridge-topic for each claim. Appendix A also presents the prompt used for GPT-4 for bridge generation.
CAIN 2024, April 2024, Lisbon, Portugal. Scott Barnett, Stefanus Kurniawan, Srikanth Thudumu, Zach Brannelly, Mohamed Abdelrazek.

|Case Study|Domain|Doc Types|Dataset Size|RAG Stages|Sample Questions|
|---|---|---|---|---|---|
|Cognitive Reviewer*|Research|PDFs|(Any size)|Chunker, Rewriter, Retriever, Reader|What are the key points covered in this paper?|
|AI Tutor*|Education|Videos, HTML, PDF|38|Chunker, Rewriter, Retriever, Reader|What were the topics covered in week 6?|
|BioASQ|Biomedical|Scientific PDFs|4017|Chunker, Retriever, Reader|Define pseudotumor cerebri. How is it treated?|

Table 1: A summary of the RAG case studies presented in this paper. Case studies marked with a * are running systems currently in use.
Artificially constructed metadata can include, for example, summaries of paragraphs as well as hypothetical questions. The latter approach is also known as Reverse HyDE: an LLM is used to generate questions that can be answered by the document, and during retrieval the similarity between the original question and these hypothetical questions is calculated, reducing the semantic gap between the question and the answer.

# Structural Index:
One effective method for enhancing information retrieval is to establish a hierarchical structure for the documents. By constructing such a structure, the RAG system can expedite the retrieval and processing of pertinent data.

Hierarchical index structure. Files are arranged in parent-child relationships, with chunks linked to them. Data summaries are stored at each node, aiding in the swift traversal of data and assisting the RAG system in determining which chunks to extract. This approach can also mitigate hallucinations caused by block-extraction issues.
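A minimal sketch of such a hierarchical index is shown below: each node stores a summary plus either child nodes or leaf chunks, and traversal collects chunks from nodes whose summaries match the query. Keyword overlap is used here only as a crude, illustrative stand-in for embedding-based matching of node summaries.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Node:
    """One node of a hierarchical index: a summary plus child nodes and/or leaf chunks."""
    summary: str
    children: List["Node"] = field(default_factory=list)
    chunks: List[str] = field(default_factory=list)

def traverse(node: Node, query_terms: Set[str], results: List[str]) -> None:
    """Walk the tree, collecting chunks from nodes whose summaries overlap the query."""
    if query_terms & set(node.summary.lower().split()):
        results.extend(node.chunks)
    for child in node.children:
        traverse(child, query_terms, results)

# Example: a document tree with per-node summaries
doc = Node("annual report", children=[
    Node("financial results revenue profit", chunks=["Revenue grew ..."]),
    Node("product updates", chunks=["The new model ..."]),
])
hits: List[str] = []
traverse(doc, set("what was the revenue".split()), hits)
print(hits)  # ['Revenue grew ...']
```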
Knowledge Graph index. Utilizing a KG to construct the hierarchical structure of documents helps maintain consistency. It delineates the connections between different concepts and entities, markedly reducing the potential for hallucinations. Another advantage is the transformation of the information retrieval process into instructions that the LLM can comprehend, thereby enhancing the accuracy of knowledge retrieval and enabling the LLM to generate contextually coherent responses, thus improving the overall efficiency of the RAG system. To capture the logical relationships between document content and structure, KGP [91] proposed building an index across multiple documents using a KG. This KG consists of nodes (representing paragraphs or structures in the documents, such as pages and tables) and edges (indicating semantic/lexical similarity between paragraphs or relationships within the document structure), effectively addressing knowledge retrieval and reasoning problems in a multi-document environment.

# Query Optimization
One of the primary challenges with Naive RAG is its direct reliance on the user's original query as the basis for retrieval. Formulating a precise and clear question is difficult, and imprudent queries result in subpar retrieval effectiveness.
Sometimes, the question itself is complex, and the language is not well organized. Another difficulty lies in language complexity and ambiguity. Language models often struggle when dealing with specialized vocabulary or ambiguous abbreviations with multiple meanings. For instance, they may not discern whether "LLM" refers to a large language model or a Master of Laws in a legal context.

# Query Expansion:
Expanding a single query into multiple queries enriches the content of the query, providing further context to address any lack of specific nuances, thereby ensuring the optimal relevance of the generated answers.

Multi-Query. By employing prompt engineering to expand queries via LLMs, these queries can then be executed in parallel. The expansion of queries is not random, but rather meticulously designed.

Sub-Query. The process of sub-question planning represents the generation of the necessary sub-questions to contextualize and fully answer the original question when combined. This process of adding relevant context is, in principle, similar to query expansion. Specifically, a complex question can be decomposed into a series of simpler sub-questions using the least-to-most prompting method [92].

Chain-of-Verification (CoVe). The expanded queries undergo validation by the LLM to reduce hallucinations. Validated expanded queries typically exhibit higher reliability [93].
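To make the multi-query and sub-query ideas concrete, the following minimal sketch builds the corresponding prompts; the `llm` callable (assumed to return one generated query per list element) is a hypothetical hook, and the prompt wording is illustrative rather than taken from any cited method.

```python
from typing import Callable, List

def expand_query(query: str, llm: Callable[[str], List[str]], n: int = 4) -> List[str]:
    """Multi-query expansion: ask an LLM for alternative phrasings/perspectives,
    which can then be retrieved against in parallel."""
    prompt = (
        f"Rewrite the question below from {n} different perspectives, one per line, "
        f"keeping the original intent.\nQuestion: {query}"
    )
    return [query] + llm(prompt)

def decompose_query(query: str, llm: Callable[[str], List[str]]) -> List[str]:
    """Sub-query planning (least-to-most style): break a complex question into
    simpler sub-questions that together answer it."""
    prompt = (
        "Decompose the question into the minimal ordered list of simpler "
        f"sub-questions needed to answer it, one per line.\nQuestion: {query}"
    )
    return llm(prompt)
```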
# Query Transformation
The core concept is to retrieve chunks based on a transformed query instead of the user's original query.

Query Rewrite. The original queries are not always optimal for LLM retrieval, especially in real-world scenarios. Therefore, we can prompt an LLM to rewrite the queries. In addition to using an LLM for query rewriting, specialized smaller language models can also be used, such as RRR (Rewrite-Retrieve-Read) [7]. The implementation of the query rewrite method at Taobao, known as BEQUE [9], has notably enhanced recall effectiveness for long-tail queries, resulting in a rise in GMV.

Another query transformation method is to use prompt engineering to let the LLM generate a query based on the original query for subsequent retrieval. HyDE [11] constructs hypothetical documents (assumed answers to the original query). It focuses on embedding similarity from answer to answer rather than seeking embedding similarity for the problem or query. Using the Step-back Prompting method [10], the original query is abstracted to generate a high-level concept question (step-back question). In the RAG system, both the step-back question and the original query are used for retrieval, and both results are utilized as the basis for language model answer generation.

# Query Routing
Based on varying queries, routing to distinct RAG pipelines is suitable for a versatile RAG system designed to accommodate diverse scenarios.

Metadata Router/Filter. The first step involves extracting keywords (entities) from the query, followed by filtering based on the keywords and metadata within the chunks to narrow down the search scope.

Semantic Router. Another method of routing involves leveraging the semantic information of the query; see, for example, Semantic Router 6. A hybrid routing approach can also be employed, combining both semantic and metadata-based methods for enhanced query routing.

# Embedding
In RAG, retrieval is achieved by calculating the similarity (e.g., cosine similarity) between the embeddings of the question and the document chunks, where the semantic representation capability of embedding models plays a key role. This mainly includes a sparse encoder (BM25) and a dense retriever (BERT-architecture pre-trained language models). Recent research has introduced prominent embedding models such as AngIE, Voyage, and BGE [94]–[96], which benefit from multi-task instruction tuning. Hugging Face's MTEB leaderboard 7 evaluates embedding models across 8 tasks, covering 58 datasets, while C-MTEB focuses on Chinese capability, covering 6 tasks and 35 datasets. There is no one-size-fits-all answer to "which embedding model to use"; however, some specific models are better suited for particular use cases.
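As a minimal illustration of embedding-based retrieval, the sketch below ranks chunks by cosine similarity to the query embedding; the `embed` callable is a hypothetical hook for whichever embedding model is chosen.

```python
from typing import Callable, List

import numpy as np

def dense_retrieve(
    query: str,
    chunks: List[str],
    embed: Callable[[str], np.ndarray],   # hypothetical embedding hook
    top_k: int = 5,
) -> List[str]:
    """Rank chunks by cosine similarity between query and chunk embeddings."""
    q = embed(query)
    chunk_vecs = np.stack([embed(c) for c in chunks])   # shape (N, d)
    sims = chunk_vecs @ q / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9
    )
    order = np.argsort(-sims)[:top_k]
    return [chunks[i] for i in order]
```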
# Fine-tuning Embedding Model
In instances where the context significantly deviates from the pre-training corpus, particularly within highly specialized disciplines such as healthcare, legal practice, and other sectors replete with proprietary jargon, fine-tuning the embedding model on a domain dataset becomes essential to mitigate such discrepancies. In addition to supplementing domain knowledge, another purpose of fine-tuning is to align the retriever and the generator, for example, by using the results of the LLM as the supervision signal for fine-tuning, known as LSR (LM-supervised Retriever). PROMPTAGATOR [21] utilizes the LLM as a few-shot query generator to create task-specific retrievers, addressing challenges in supervised fine-tuning, particularly in data-scarce domains. Another approach, LLM-Embedder [97], exploits LLMs to generate reward signals across multiple downstream tasks. The retriever is fine-tuned with two types of supervised signals: hard labels for the dataset and soft rewards from the LLMs. This dual-signal approach fosters a more effective fine-tuning process, tailoring the embedding model to diverse downstream applications. REPLUG [72] utilizes a retriever and an LLM to calculate the probability distributions of the retrieved documents and then performs supervised training by computing the KL divergence. This straightforward and effective training method enhances the performance of the retrieval model by using an LM as the supervisory signal, eliminating the need for specific cross-attention mechanisms. Moreover, inspired by RLHF (Reinforcement Learning from Human Feedback), LM-based feedback can be used to reinforce the retriever through reinforcement learning.

# Adapter
Fine-tuning models may present challenges, such as integrating functionality through an API or addressing constraints arising from limited local computational resources. Consequently, some approaches opt to incorporate an external adapter to aid in alignment. To optimize the multi-task capabilities of LLMs, UPRISE [20] trained a lightweight prompt retriever that can automatically retrieve suitable prompts from a pre-built prompt pool for a given zero-shot task input.
# Mix/hybrid Retrieval
Sparse and dense embedding approaches capture different relevance features and can benefit from each other by leveraging complementary relevance information. For instance, sparse retrieval models can be used to provide initial search results for training dense retrieval models. Additionally, pre-trained language models (PLMs) can be utilized to learn term weights to enhance sparse retrieval. It has also been shown that sparse retrieval models can enhance the zero-shot retrieval capability of dense retrieval models and assist dense retrievers in handling queries containing rare entities, thereby improving robustness.
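One simple way to combine the two signals is a weighted blend of min-max-normalized sparse (e.g., BM25) and dense (e.g., cosine) scores, as in the sketch below; the score dictionaries and the weight `alpha` are illustrative assumptions, not a prescription from any cited work.

```python
from typing import Dict, List

def normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Min-max normalize so sparse and dense scores live on a comparable scale."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {k: (v - lo) / span for k, v in scores.items()}

def hybrid_fuse(
    sparse_scores: Dict[str, float],   # e.g. BM25 scores per doc id
    dense_scores: Dict[str, float],    # e.g. cosine similarities per doc id
    alpha: float = 0.5,                # weight on the dense signal
    top_k: int = 5,
) -> List[str]:
    """Blend the two relevance signals and return the top-k document ids."""
    s, d = normalize(sparse_scores), normalize(dense_scores)
    fused = {doc: (1 - alpha) * s.get(doc, 0.0) + alpha * d.get(doc, 0.0)
             for doc in set(s) | set(d)}
    return sorted(fused, key=fused.get, reverse=True)[:top_k]
```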
6 https://github.com/aurelio-labs/semantic-router
7 https://huggingface.co/spaces/mteb/leaderboard
PKG [75] introduces an innovative method for integrating knowledge into white-box models via directive fine-tuning. In this approach, the retriever module is directly substituted to generate relevant documents according to a query. This method assists in addressing the difficulties encountered during the fine-tuning process and enhances model performance.
# IV. GENERATION
After retrieval, it is not good practice to feed all the retrieved information directly to the LLM for answering questions. The following introduces adjustments from two perspectives: adjusting the retrieved content and adjusting the LLM.

# A. Context Curation
Redundant information can interfere with the final generation of the LLM, and overly long contexts can also lead the LLM into the "lost in the middle" problem [98]. Like humans, the LLM tends to focus only on the beginning and end of long texts, while forgetting the middle portion.
Therefore, in the RAG system, we typically need to further process the retrieved content.

1) Reranking: Reranking fundamentally reorders document chunks to highlight the most pertinent results first, effectively reducing the overall document pool. It serves a dual purpose in information retrieval, acting as both an enhancer and a filter, delivering refined inputs for more precise language model processing [70]. Reranking can be performed using rule-based methods that depend on predefined metrics such as Diversity, Relevance, and MRR, or model-based approaches such as BERT-series encoders (e.g., SpanBERT), specialized reranking models such as Cohere rerank or bge-reranker-large, and general large language models like GPT [12], [99].

2) Context Selection/Compression: A common misconception in the RAG process is the belief that retrieving as many relevant documents as possible and concatenating them into a lengthy retrieval prompt is beneficial. However, excessive context can introduce more noise, diminishing the LLM's perception of key information. (Long)LLMLingua [100], [101] utilize small language models (SLMs), such as GPT-2 Small or LLaMA-7B, to detect and remove unimportant tokens, transforming the prompt into a form that is challenging for humans to comprehend but well understood by LLMs. This approach presents a direct and practical method for prompt compression, eliminating the need for additional training of LLMs while balancing language integrity and compression ratio. PRCA tackled this issue by training an information extractor [69]. Similarly, RECOMP adopts a comparable approach by training an information condenser using contrastive learning [71]. Each training data point consists of one positive sample and five negative samples, and the encoder undergoes training using contrastive loss throughout this process [102]. In addition to compressing the context, reducing the number of documents also helps improve the accuracy of the model's answers. Ma et al. [103] propose the "Filter-Reranker" paradigm, which combines the strengths of LLMs and SLMs.
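A minimal sketch of this rerank-then-select step is given below; the `score` callable stands in for whatever reranker is used (a cross-encoder, a reranking service, or an LLM), and the whitespace-based token count and budget are illustrative assumptions.

```python
from typing import Callable, List, Tuple

def rerank_and_select(
    query: str,
    chunks: List[str],
    score: Callable[[str, str], float],   # hypothetical reranker hook
    token_budget: int = 1024,
) -> List[str]:
    """Rerank retrieved chunks by query relevance, then keep only as many of the
    best chunks as fit a context budget (a crude form of context selection)."""
    ranked: List[Tuple[float, str]] = sorted(
        ((score(query, c), c) for c in chunks), reverse=True
    )
    selected, used = [], 0
    for _, chunk in ranked:
        cost = len(chunk.split())          # whitespace words as a rough token count
        if used + cost > token_budget:
            continue
        selected.append(chunk)
        used += cost
    return selected
```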
# B. LLM Fine-tuning
Targeted fine-tuning of LLMs based on the scenario and data characteristics can yield better results. This is also one of the greatest advantages of using on-premise LLMs. When LLMs lack data in a specific domain, additional knowledge can be provided to the LLM through fine-tuning. Hugging Face's fine-tuning data can also be used as an initial step. Another benefit of fine-tuning is the ability to adjust the model's input and output. For example, it can enable the LLM to adapt to specific data formats and generate responses in a particular style as instructed [37]. For retrieval tasks that engage with structured data, the SANTA framework [76] implements a tripartite training regimen to effectively encapsulate both structural and semantic nuances. The initial phase focuses on the retriever, where contrastive learning is harnessed to refine the query and document embeddings.

Aligning LLM outputs with human or retriever preferences through reinforcement learning is another potential approach, for instance, by manually annotating the final generated answers and then providing feedback through reinforcement learning. In addition to aligning with human preferences, it is also possible to align with the preferences of fine-tuned models and retrievers [79]. When circumstances prevent access to powerful proprietary models or larger-parameter open-source models, a simple and effective method is to distill the more powerful models (e.g., GPT-4). Fine-tuning of the LLM can also be coordinated with fine-tuning of the retriever to align preferences. A typical approach, such as RA-DIT [27], aligns the scoring functions between the retriever and the generator using KL divergence.

# V. AUGMENTATION PROCESS IN RAG
In the domain of RAG, the standard practice often involves a single (once) retrieval step followed by generation, which can be inefficient and is typically insufficient for complex problems demanding multi-step reasoning, as it provides a limited scope of information [105]. Many studies have optimized the retrieval process in response to this issue, and we summarize them in Figure 5.

# A. Iterative Retrieval
Iterative retrieval is a process where the knowledge base is repeatedly searched based on the initial query and the text generated so far, providing a more comprehensive knowledge base for LLMs.
[Fig. 5 panels: iterative retrieval (provide more context; stop after a maximum number of iterations), recursive retrieval (break down complex problems step by step; stop at a maximum tree depth), adaptive retrieval (flexible, on-demand control; stop when a special token is generated).]

Fig. 5. In addition to the most common single (once) retrieval, RAG also includes three types of retrieval augmentation processes. (Left) Iterative retrieval involves alternating between retrieval and generation, allowing for richer and more targeted context from the knowledge base at each step. (Middle) Recursive retrieval involves gradually refining the user query and breaking down the problem into sub-problems, then continuously solving complex problems through retrieval and generation. (Right) Adaptive retrieval focuses on enabling the RAG system to autonomously determine whether external knowledge retrieval is necessary and when to stop retrieval and generation, often utilizing LLM-generated special tokens for control.
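The control flow shared by these augmentation processes can be sketched as a simple loop that alternates retrieval and generation and stops either at a confidence threshold or after a maximum number of iterations. The `retrieve`, `generate`, and `confidence` callables are hypothetical hooks; real systems such as FLARE or Self-RAG implement the stop condition differently (e.g., via token probabilities or reflection tokens).

```python
from typing import Callable, List

def iterative_rag(
    query: str,
    retrieve: Callable[[str], List[str]],         # hypothetical retriever hook
    generate: Callable[[str, List[str]], str],    # hypothetical generator hook
    confidence: Callable[[str], float],           # hypothetical self-judgment hook
    max_iters: int = 3,
    threshold: float = 0.8,
) -> str:
    """Alternate retrieval and generation: each draft answer seeds the next
    retrieval query; stop at a confidence threshold or a max iteration count."""
    context: List[str] = []
    answer = ""
    for _ in range(max_iters):
        context.extend(retrieve(query if not answer else f"{query}\n{answer}"))
        answer = generate(query, context)
        if confidence(answer) >= threshold:   # adaptive stop condition
            break
    return answer
```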
We used the OpenEvals technique implemented by OpenAI. From the generated questions we manually inspected 40 issues and all issues that OpenEvals flagged as inaccurate. We found that the automated evaluation was more pessimistic than a human rater for this domain. However, one threat to validity of this finding is that BioASQ is a domain-specific dataset and the reviewers were not experts, i.e., the large language model may know more than a non-expert.

5 FAILURE POINTS OF RAG SYSTEMS
From the case studies we identified a set of failure points, presented below. The following section addresses the research question: What are the failure points that occur when engineering a RAG system?
- FP1 Missing Content: The first failure case is asking a question that cannot be answered from the available documents. In the happy case the RAG system will respond with something like "Sorry, I don't know". However, for questions that are related to the content but don't have answers, the system could be fooled into giving a response.
- FP2 Missed the Top Ranked Documents: The answer to the question is in the document but did not rank highly enough to be returned to the user. In theory, all documents are ranked and used in the next steps. However, in practice only the top K documents are returned, where K is a value selected based on performance.
- FP3 Not in Context - Consolidation Strategy Limitations: Documents with the answer were retrieved from the database but did not make it into the context for generating an answer. This occurs when many documents are returned from the database and a consolidation process takes place to retrieve the answer.
- FP4 Not Extracted: Here the answer is present in the context, but the large language model failed to extract the correct answer. Typically, this occurs when there is too much noise or contradicting information in the context.
- FP5 Wrong Format: The question involved extracting information in a certain format, such as a table or list, and the large language model ignored the instruction.
- FP6 Incorrect Specificity: The answer is returned in the response but is not specific enough, or is too specific, to address the user's need. This occurs when the RAG system designers have a desired outcome for a given question, such as teachers for students; in this case, specific educational content should be provided with answers, not just the answer. Incorrect specificity also occurs when users are not sure how to ask a question and are too general.

6 LESSONS AND FUTURE RESEARCH DIRECTIONS
The lessons learned from the three case studies are shown in Table 2.
This approach has been shown to enhance the robustness of subsequent answer generation by offering additional contextual references through multiple retrieval iterations. However, it may be affected by semantic discontinuity and the accumulation of irrelevant information. ITER-RETGEN [14] employs a synergistic approach that leverages "retrieval-enhanced generation" alongside "generation-enhanced retrieval" for tasks that necessitate the reproduction of specific information. The model harnesses the content required to address the input task as a contextual basis for retrieving pertinent knowledge, which in turn facilitates the generation of improved responses in subsequent iterations.

# B. Recursive Retrieval
Recursive retrieval is often used in information retrieval and NLP to improve the depth and relevance of search results. The process involves iteratively refining search queries based on the results obtained from previous searches. Recursive retrieval aims to enhance the search experience by gradually converging on the most pertinent information through a feedback loop. IRCoT [61] uses chain-of-thought to guide the retrieval process and refines the CoT with the obtained retrieval results. ToC [57] creates a clarification tree that systematically optimizes the ambiguous parts of the query. Recursive retrieval can be particularly useful in complex search scenarios where the user's needs are not entirely clear from the outset or where the information sought is highly specialized or nuanced.
The recursive nature of the process allows for continuous learning and adaptation to the user's requirements, often resulting in improved satisfaction with the search outcomes. To address specific data scenarios, recursive retrieval and multi-hop retrieval techniques are utilized together. Recursive retrieval involves using a structured index to process and retrieve data in a hierarchical manner, which may include summarizing sections of a document or a lengthy PDF before performing a retrieval based on this summary. Subsequently, a secondary retrieval within the document refines the search, embodying the recursive nature of the process. In contrast, multi-hop retrieval is designed to delve deeper into graph-structured data sources, extracting interconnected information [106].

# C. Adaptive Retrieval
Adaptive retrieval methods, exemplified by FLARE [24] and Self-RAG [25], refine the RAG framework by enabling LLMs to actively determine the optimal moments and content for retrieval, thus enhancing the efficiency and relevance of the information sourced. These methods are part of a broader trend wherein LLMs employ active judgment in their operations, as seen in model agents such as AutoGPT, Toolformer, and Graph-Toolformer [107]–[109]. Graph-Toolformer, for instance, divides its retrieval process into distinct steps in which LLMs proactively use retrievers, apply Self-Ask techniques, and employ few-shot prompts to initiate search queries. This proactive stance allows LLMs to decide when to search for necessary information, akin to how an agent utilizes tools. WebGPT [110] integrates a reinforcement learning framework to train the GPT-3 model to autonomously use a search engine during text generation. It navigates this process using special tokens that facilitate actions such as search engine queries, browsing results, and citing references, thereby expanding GPT-3's capabilities through the use of external search engines. FLARE automates the timing of retrieval by monitoring the confidence of the generation process, as indicated by the
probability of generated terms [24]. When the probability falls below a certain threshold, the retrieval system is activated to collect relevant information, thus optimizing the retrieval cycle. Self-RAG [25] introduces "reflection tokens" that allow the model to introspect its outputs. These tokens come in two varieties: "retrieve" and "critic". The model autonomously decides when to activate retrieval, or alternatively, a predefined threshold may trigger the process. During retrieval, the generator conducts a fragment-level beam search across multiple paragraphs to derive the most coherent sequence. Critic scores are used to update the subdivision scores, with the flexibility to adjust these weights during inference, tailoring the model's behavior. Self-RAG's design obviates the need for additional classifiers or reliance on Natural Language Inference (NLI) models, thus streamlining the decision-making process for when to engage retrieval mechanisms and improving the model's autonomous judgment capabilities in generating accurate responses.

# VI. TASK AND EVALUATION
The rapid advancement and growing adoption of RAG in the field of NLP have propelled the evaluation of RAG models to the forefront of research in the LLM community. The primary objective of this evaluation is to comprehend and optimize the performance of RAG models across diverse application scenarios. This chapter mainly introduces the main downstream tasks of RAG, datasets, and how to evaluate RAG systems.

# A. Downstream Task
The core task of RAG remains question answering (QA), including traditional single-hop/multi-hop QA, multiple-choice QA, domain-specific QA, and long-form scenarios suitable for RAG. In addition to QA, RAG is continuously being expanded into multiple downstream tasks, such as information extraction (IE), dialogue generation, and code search. The main downstream tasks of RAG and their corresponding datasets are summarized in Table II.

# B. Evaluation Target
Historically, assessments of RAG models have centered on their execution in specific downstream tasks. These evaluations employ established metrics suited to the tasks at hand. For instance, question answering evaluations might rely on EM and F1 scores [7], [45], [59], [72], whereas fact-checking tasks often hinge on Accuracy as the primary metric [4], [14], [42]. BLEU and ROUGE metrics are also commonly used to evaluate answer quality [26], [32], [52], [78]. Tools like RALLE, designed for the automatic evaluation of RAG applications, similarly base their assessments on these task-specific metrics [160]. Despite this, there is a notable paucity of research dedicated to evaluating the distinct characteristics of RAG models. The main evaluation objectives include:

# Retrieval Quality
Evaluating the retrieval quality is crucial for determining the effectiveness of the context sourced by the retriever component. Standard metrics from the domains of search engines, recommendation systems, and information retrieval are employed to measure the performance of the RAG retrieval module. Metrics such as Hit Rate, MRR, and NDCG are commonly utilized for this purpose [161], [162].

# Generation Quality
The assessment of generation quality centers on the generator's capacity to synthesize coherent and relevant answers from the retrieved context. This evaluation can be categorized based on the content's objectives: unlabeled and labeled content.
For unlabeled content, the evaluation encompasses the faithfulness, relevance, and non-harmfulness of the generated answers. In contrast, for labeled content, the focus is on the accuracy of the information produced by the model [161]. Additionally, both retrieval and generation quality assessments can be conducted through manual or automatic evaluation methods [29], [161], [163]. # C. Evaluation Aspects Contemporary evaluation practices of RAG models emphasize three primary quality scores and four essential abilities, which collectively inform the evaluation of the two principal targets of the RAG model: retrieval and generation. 1) Quality Scores: Quality scores include context relevance, answer faithfulness, and answer relevance. These quality scores evaluate the efficiency of the RAG model from different perspectives in the process of information retrieval and generation [164]–[166]. Context Relevance evaluates the precision and specificity of the retrieved context, ensuring relevance and minimizing processing costs associated with extraneous content. Answer Faithfulness ensures that the generated answers remain true to the retrieved context, maintaining consistency and avoiding contradictions.
Answer Relevance requires that the generated answers are directly pertinent to the posed questions, effectively addressing the core inquiry. 2) Required Abilities: RAG evaluation also encompasses four abilities indicative of its adaptability and efficiency: noise robustness, negative rejection, information integration, and counterfactual robustness [167], [168]. These abilities are critical for the model’s performance under various challenges and complex scenarios, impacting the quality scores.
Noise Robustness appraises the model’s capability to manage noise documents that are question-related but lack substantive information. Negative Rejection assesses the model’s discernment in refraining from responding when the retrieved documents do not contain the necessary knowledge to answer a question. Information Integration evaluates the model’s proficiency in synthesizing information from multiple documents to address complex questions. Counterfactual Robustness tests the model’s ability to recognize and disregard known inaccuracies within documents, even when instructed about potential misinformation. Context relevance and noise robustness are important for evaluating the quality of retrieval, while answer faithfulness, answer relevance, negative rejection, information integration, and counterfactual robustness are important for evaluating the quality of generation.
|Task|Sub Task|Dataset|Method|
|---|---|---|---|
|QA|Single-hop|Natural Questions (NQ) [111]|[3], [4], [22], [27], [40], [43], [54], [62], [71], [112]|
| | |TriviaQA (TQA) [113]|[4], [27], [59], [62], [112], [22], [25], [43], [44], [71], [72]|
| | |SQuAD [114]|[20], [23], [30], [32], [45], [69], [112]|
| | |Web Questions (WebQ) [115]|[3], [4], [13], [30], [50], [68]|
| | |PopQA [116]|[7], [25], [67]|
| | |MS MARCO [117]|[4], [40], [52]|
| |Multi-hop|HotpotQA [118]|[23], [26], [31], [34], [47], [51], [61], [82], [7], [14], [22], [27], [59], [62], [69], [71], [91]|
| | |2WikiMultiHopQA [119]|[14], [24], [48], [59], [61], [91]|
| | |MuSiQue [120]|[14], [51], [61], [91]|
| |Long-form QA|ELI5 [121]|[27], [34], [43], [49], [51]|
| | |NarrativeQA (NQA) [122]|[45], [60], [63], [123]|
| | |ASQA [124]|[24], [57]|
| | |QMSum (QM) [125]|[60], [123]|
| |Domain QA|Qasper [126]|[60], [63]|
| | |COVID-QA [127]|[35], [46]|
| | |CMB [128], MMCU Medical [129]|[81]|
| |Multi-Choice QA|QuALITY [130]|[60], [63]|
| | |ARC [131]|[25], [67]|
| | |CommonsenseQA [132]|[58], [66]|
| |Graph QA|GraphQA [84]|[84]|
|Dialog|Dialog Generation|Wizard of Wikipedia (WoW) [133]|[13], [27], [34], [42]|
| |Personal Dialog|KBP [134], DuleMon [136]|[74], [135]|
| |Task-oriented Dialog|CamRest [137]|[78], [79]|
| |Recommendation|Amazon (Toys, Sport, Beauty) [138]|[39], [40]|
|IE|Event Argument Extraction|WikiEvent [139], RAMS [140]|[13], [27], [37], [42]|
| |Relation Extraction|T-REx [141], ZsRE [142]|[27], [51]|
|Reasoning|Commonsense Reasoning|HellaSwag [143]|[20], [66]|
| |CoT Reasoning|CoT Reasoning [144]|[27]|
| |Complex Reasoning|CSQA [145]|[55]|
|Others|Language Understanding|MMLU [146]|[7], [27], [28], [42], [43], [47], [72]|
| |Language Modeling|WikiText-103 [147]|[5], [29], [64], [71]|
| |Fact Checking/Verification|FEVER [149], PubHealth [150]|[4], [13], [27], [34], [42], [50]|
| |Text Generation|Biography [151]|[67]|
| |Text Summarization|WikiASP [152]|[24]|
| | |XSum [153]|[17]|
| |Text Classification|VioLens [154]|[19]|
| | |TREC [155]|[33]|
| |Sentiment|SST-2 [156]|[20], [33], [38]|
| |Code Search|CodeSearchNet [157]|[76]|
| |Robustness Evaluation|NoMIRACL [56]|[56]|
| |Math|GSM8K [158]|[73]|
| |Machine Translation|JRC-Acquis [159]|[17]|
# TABLE III

|Context|Faithfulness|Answer Relevance|Noise Robustness|Negative Rejection|Information Integration|Counterfactual Robustness|
|---|---|---|---|---|---|---|
|Accuracy|✓|✓|✓|✓|✓|✓|
|EM| | | | | |✓|
|Recall|✓| | | | | |
|Precision|✓| |✓| | | |
|R-Rate| | | | | |✓|
|Cosine Similarity| |✓| | | | |
|Hit Rate|✓| | | | | |
|MRR|✓| | | | | |
|NDCG|✓| | | | | |
|BLEU|✓|✓|✓| | | |
|ROUGE/ROUGE-L|✓|✓|✓| | | |

The specific metrics for each evaluation aspect are summarized in Table III. It is essential to recognize that these metrics, derived from related work, are traditional measures and do not yet represent a mature or standardized approach for quantifying RAG evaluation aspects.
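As a concrete companion to the traditional retrieval metrics named above (Hit Rate, MRR, NDCG), the following is a minimal, binary-relevance sketch; the `ranked` and `relevant` data structures and the cut-off `k` are assumptions about how a particular system represents retrieval results, not part of any benchmark's official tooling.

```python
# Minimal reference implementations of the retrieval-quality metrics above.
# `ranked` is the list of document ids returned by the retriever (best first);
# `relevant` is the set of gold ids for the query.
import math
from typing import List, Set

def hit_rate(ranked: List[str], relevant: Set[str], k: int = 10) -> float:
    # 1 if any relevant document appears in the top-k, else 0.
    return 1.0 if any(doc in relevant for doc in ranked[:k]) else 0.0

def mrr(ranked: List[str], relevant: Set[str]) -> float:
    # Reciprocal rank of the first relevant document.
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def ndcg(ranked: List[str], relevant: Set[str], k: int = 10) -> float:
    # Binary-relevance NDCG@k: discounted gain over the ideal ordering.
    dcg = sum(1.0 / math.log2(i + 1)
              for i, doc in enumerate(ranked[:k], start=1) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0

# Averaging these per-query values over an evaluation set gives the
# retrieval-quality numbers typically reported for RAG systems.
```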
Custom metrics tailored to the nuances of RAG models, though not included here, have also been developed in some evaluation studies.

# Evaluation Benchmarks and Tools

A series of benchmark tests and tools have been proposed to facilitate the evaluation of RAG. These instruments furnish quantitative metrics that not only gauge RAG model performance but also enhance comprehension of the model’s capabilities across various evaluation aspects. Prominent benchmarks such as RGB, RECALL, and CRUD focus on appraising the essential abilities of RAG models. Concurrently, state-of-the-art automated tools like RAGAS, ARES, and TruLens employ LLMs to adjudicate the quality scores. These tools and benchmarks collectively form a robust framework for the systematic evaluation of RAG models, as summarized in Table IV.

# DISCUSSION AND FUTURE PROSPECTS

Despite the considerable progress in RAG technology, several challenges persist that warrant in-depth research. This chapter mainly introduces the current challenges and future research directions faced by RAG.

# RAG vs Long Context

With the deepening of related research, the context window of LLMs is continuously expanding. Presently, LLMs can effortlessly manage contexts exceeding 200,000 tokens. This capability signifies that long-document question answering, previously reliant on RAG, can now incorporate the entire document directly into the prompt. This has also sparked discussions on whether RAG is still necessary when LLMs are no longer constrained by context. In fact, RAG still plays an irreplaceable role. On one hand, providing LLMs with a large amount of context at once significantly slows their inference, whereas chunked retrieval and on-demand input can markedly improve operational efficiency. On the other hand, RAG-based generation can quickly locate the original references for LLMs, helping users verify the generated answers. The entire retrieval and reasoning process is observable, while generation relying solely on a long context remains a black box. Conversely, the expansion of context provides new opportunities for the development of RAG, enabling it to address more complex problems and integrative or summary questions that require reading a large amount of material to answer. Developing new RAG methods for super-long contexts is one of the future research trends.

# RAG Robustness

The presence of noise or contradictory information during retrieval can detrimentally affect RAG’s output quality. This situation is figuratively referred to as “misinformation can be worse than no information at all”. Improving RAG’s resistance to such adversarial or counterfactual inputs is gaining research momentum and has become a key performance metric. Cuconasu et al. [54] analyze which type of documents should be retrieved, evaluating the relevance of the documents to the prompt, their position, and the number included in the context. The research findings reveal that including irrelevant documents can unexpectedly increase accuracy by over 30%, contradicting the initial assumption of reduced quality. These results underscore the importance of developing specialized strategies to integrate retrieval with language generation models, highlighting the need for further research into the robustness of RAG.

# Hybrid Approaches

Combining RAG with fine-tuning is emerging as a leading strategy. Determining the optimal integration of RAG and fine-tuning (whether sequential, alternating, or through end-to-end joint training) and how to harness both parameterized and non-parameterized advantages are areas ripe for exploration [27].
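As a hedged illustration of the robustness experiments discussed above (deliberately mixing irrelevant documents into the retrieved context and measuring the effect on accuracy), the sketch below assumes placeholder `retrieve`, `generate`, and `exact_match` callables for the system under test.

```python
# A minimal harness for noise-robustness evaluation: measure how answer
# accuracy changes as irrelevant documents are mixed into the retrieved context.
import random
from typing import Callable, List, Sequence

def noisy_context(gold_docs: List[str], distractors: Sequence[str],
                  n_noise: int, seed: int = 0) -> List[str]:
    rng = random.Random(seed)
    noise = rng.sample(list(distractors), k=min(n_noise, len(distractors)))
    mixed = gold_docs + noise
    rng.shuffle(mixed)  # the position of noise also matters, so shuffle
    return mixed

def robustness_curve(dataset, retrieve: Callable, generate: Callable,
                     exact_match: Callable, distractors: Sequence[str],
                     noise_levels=(0, 2, 4, 8)):
    curve = {}
    for n_noise in noise_levels:
        correct = 0
        for question, gold_answer in dataset:
            docs = noisy_context(retrieve(question), distractors, n_noise)
            answer = generate(question, docs)
            correct += exact_match(answer, gold_answer)
        curve[n_noise] = correct / len(dataset)
    return curve  # accuracy as a function of injected noise documents
```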
|Evaluation Framework|Evaluation Targets|Evaluation Aspects|Quantitative Metrics|
|---|---|---|---|
|RGB†|Retrieval Quality|Noise Robustness|Accuracy|
| |Generation Quality|Negative Rejection|EM|
| | |Information Integration|Accuracy|
| | |Counterfactual Robustness|Accuracy|
|RECALL†|Generation Quality|Counterfactual Robustness|R-Rate (Reappearance Rate)|
|RAGAS‡|Retrieval Quality|Context Relevance|*|
| |Generation Quality|Faithfulness|*|
|ARES‡|Retrieval Quality|Context Relevance|Accuracy|
| |Generation Quality|Faithfulness|Accuracy|
|TruLens‡|Retrieval Quality|Context Relevance|*|
| |Generation Quality|Faithfulness|*|
| | |Answer Relevance|*|
|CRUD†|Retrieval Quality|Knowledge-intensive QA|ROUGE-L|
| |Generation Quality|Error Correction|BertScore|
| | |Summarization|RAGQuestEval|

† represents a benchmark, and ‡ represents a tool. * denotes customized quantitative metrics, which deviate from traditional metrics.
We present our findings for the research question: What are the key considerations when engineering a RAG system? Based on our takeaways, we identified multiple potential research areas linked to RAG, as follows:

6.1 Chunking and Embeddings

Chunking documents sounds trivial. However, the quality of chunking affects the retrieval process in many ways; in particular, it shapes the embeddings of each chunk, which in turn affect the similarity matching of chunks to user queries. There are two ways of chunking: heuristics-based (using punctuation, end of paragraph, etc.) and semantic chunking (using the semantics of the text to determine where a chunk starts and ends). Further research should explore the tradeoffs between these methods and their effects on critical downstream processes like embedding and similarity matching. A systematic evaluation framework comparing chunking techniques on metrics like query relevance and retrieval accuracy would benefit the field. Embeddings represent another active research area, including generating embeddings for multimedia and multimodal chunks such as tables, figures, and formulas. Chunk embeddings are typically created once during system development or when a new document is indexed. Query preprocessing significantly impacts a RAG system’s performance, particularly the handling of negative or ambiguous queries. Further research is needed on architectural patterns and approaches to address the inherent limitations of embeddings (the quality of a match is domain-specific).

6.2 RAG vs Finetuning

LLMs are great world models due to the amount of training data and the finetuning tasks applied to the model before it is released. However, these models are general-purpose (they may not know the very specifics of your domain) and not up to date (there is a cutoff date on their knowledge). Fine-tuning and RAG offer two potential customization pathways, each with distinct tradeoffs. Finetuning requires curating internal datasets to adapt and train the LLM on. However, all your data are baked into the model, and you need to
Readers are encouraged to consult the pertinent literature for the specific quantification formulas associated with these metrics, as required.

Another trend is to introduce SLMs with specific functionalities into RAG and fine-tune them on the outputs of the RAG system. For example, CRAG [67] trains a lightweight retrieval evaluator to assess the overall quality of the retrieved documents for a query and triggers different knowledge retrieval actions based on confidence levels.

# D. Scaling Laws of RAG

End-to-end RAG models and pre-trained models based on RAG are still one of the focuses of current researchers [173]. The parameters of these models are one of the key factors. While scaling laws [174] are established for LLMs, their applicability to RAG remains uncertain. Initial studies like RETRO++ [44] have begun to address this, yet the parameter count in RAG models still lags behind that of LLMs. The possibility of an Inverse Scaling Law 10, where smaller models outperform larger ones, is particularly intriguing and merits further investigation.

# E. Production-Ready RAG

RAG’s practicality and alignment with engineering requirements have facilitated its adoption. However, enhancing retrieval efficiency, improving document recall in large knowledge bases, and ensuring data security, such as preventing inadvertent disclosure of document sources or metadata by LLMs, are critical engineering challenges that remain to be addressed [175]. The development of the RAG ecosystem is greatly impacted by the progression of its technical stack. Key tools like LangChain and LlamaIndex have quickly gained popularity with the emergence of ChatGPT, providing extensive RAG-related APIs and becoming essential in the realm of LLMs. The emerging technology stack, while not as rich in features as LangChain and LlamaIndex, stands out through its specialized products. For example, Flowise AI prioritizes a low-code approach, allowing users to deploy AI applications, including RAG, through a user-friendly drag-and-drop interface. Other technologies like HayStack, Meltano, and Cohere Coral are also gaining attention for their unique contributions to the field. In addition to AI-focused vendors, traditional software and cloud service providers are expanding their offerings to include RAG-centric services. Weaviate’s Verba 11 is designed for personal assistant applications, while Amazon’s Kendra 12 offers intelligent enterprise search services, enabling users to browse various content repositories using built-in connectors. In the development of RAG technology, there is a clear trend towards different specialization directions, such as: 1) Customization - tailoring RAG to meet specific requirements; 2) Simplification - making RAG easier to use to reduce the

10 https://github.com/inverse-scaling/prize
11 https://github.com/weaviate/Verba
12 https://aws.amazon.com/cn/kendra/
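The CRAG-style confidence gating described above can be sketched as follows; the thresholds and the `score_relevance` and `web_search` callables are assumptions for illustration and do not reproduce CRAG's published implementation.

```python
# A hedged sketch of confidence-gated retrieval: a lightweight evaluator scores
# the retrieved documents and the system branches on that score.
from typing import Callable, List

def corrective_retrieve(query: str, docs: List[str],
                        score_relevance: Callable[[str, str], float],
                        web_search: Callable[[str], List[str]],
                        upper: float = 0.7, lower: float = 0.3) -> List[str]:
    scores = [score_relevance(query, d) for d in docs]
    best = max(scores, default=0.0)
    if best >= upper:
        # Confident: keep only the documents the evaluator trusts.
        return [d for d, s in zip(docs, scores) if s >= upper]
    if best <= lower:
        # Not confident at all: discard and fall back to another source.
        return web_search(query)
    # Ambiguous: combine filtered internal documents with external results.
    kept = [d for d, s in zip(docs, scores) if s > lower]
    return kept + web_search(query)
```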
# Fig. 6. Summary of RAG ecosystem (figure placeholder: downstream tasks, technology stacks, key issues and techniques of RAG, evaluation aspects, challenges, modality extension, and the surrounding ecosystem)

The mutual growth of RAG models and their technology stacks is evident; technological advancements continuously establish new standards for existing infrastructure. In turn, enhancements to the technology stack drive the development of RAG capabilities. RAG toolkits are converging into a foundational technology stack, laying the groundwork for advanced enterprise applications. However, a fully integrated, comprehensive platform concept is still in the future, requiring further innovation and development.

# Multi-modal RAG

RAG has transcended its initial text-based question-answering confines, embracing a diverse array of modal data. This expansion has spawned innovative multimodal models that integrate RAG concepts across various domains:

|Image|Audio and Video|Code|
|---|---|---|
|RA-CM3 [176]|GSS method|RBPS [182]|
|BLIP-2 [177]|UEOP|CoK method [106]|
|Visualize Before You Write method [178]|KNN-based attention fusion| |
| |Vid2Seq| |

The summary of this paper, as depicted in Figure 6, emphasizes RAG’s significant advancement in enhancing the capabilities of LLMs by integrating parameterized knowledge from language models with extensive non-parameterized data from external knowledge bases. The survey showcases the evolution of RAG technologies and their application on many different tasks. The analysis outlines three developmental paradigms within the RAG framework: Naive, Advanced, and Modular RAG, each representing a progressive enhancement over its predecessors. RAG’s technical integration with other AI methodologies, such as fine-tuning and reinforcement learning, has further expanded its capabilities. Despite the progress in RAG technology, there are research opportunities to improve its robustness and its ability to handle extended contexts. RAG’s application scope is expanding into multimodal domains, adapting its principles to interpret and process diverse data forms like images, videos, and code. This expansion highlights RAG’s significant practical implications for AI deployment, attracting interest from academic and industrial sectors.
# REFERENCES

[1] N. Kandpal, H. Deng, A. Roberts, E. Wallace, and C. Raffel, “Large language models struggle to learn long-tail knowledge,” in International Conference on Machine Learning. PMLR, 2023, pp.
15 696–15 707. [2] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu, X. Huang, E. Zhao, Y. Zhang, Y. Chen et al., “Siren’s song in the AI ocean: A survey on hallucination in large language models,” arXiv preprint arXiv:2309.01219, 2023. [3] D. Arora, A. Kini, S. R. Chowdhury, N. Natarajan, G. Sinha, and A. Sharma, “Gar-meets-rag paradigm for zero-shot information retrieval,” arXiv preprint arXiv:2310.20158, 2023. [4] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel et al., “Retrieval-augmented generation for knowledge-intensive nlp tasks,” Advances in Neural Information Processing Systems, vol. 33, pp.
9459–9474, 2020. [5] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark et al., “Improving language models by retrieving from trillions of tokens,” in International conference on machine learning. PMLR, 2022, pp. 2206–2240. [6] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., “Training language models to follow instructions with human feedback,” Advances in neural information processing systems, vol. 35, pp.
27 730–27 744, 2022. [7] X. Ma, Y. Gong, P. He, H. Zhao, and N. Duan, “Query rewriting for retrieval-augmented large language models,” arXiv preprint arXiv:2305.14283, 2023. [8] I. ILIN, “Advanced rag techniques: an illustrated overview,” https://pub.towardsai.net/advanced-rag-techniques-an-illustrated-overview-04d193d8fec6, 2023. [9] W. Peng, G. Li, Y. Jiang, Z. Wang, D. Ou, X. Zeng, E. Chen et al., “Large language model based long-tail query rewriting in taobao search,” arXiv preprint arXiv:2311.03758, 2023. [10] H. S. Zheng, S. Mishra, X. Chen, H.-T. Cheng, E. H. Chi, Q. V. Le, and D. Zhou, “Take a step back: Evoking reasoning via abstraction in large language models,” arXiv preprint arXiv:2310.06117, 2023. [11] L. Gao, X. Ma, J. Lin, and J. Callan, “Precise zero-shot dense retrieval without relevance labels,” arXiv preprint arXiv:2212.10496, 2022. [12] V. Blagojevi, “Enhancing rag pipelines in haystack: Introducing diversityranker and lostinthemiddleranker,” https://towardsdatascience.com/enhancing-rag-pipelines-in-haystack-45f14e2bc9f5, 2023. [13] W. Yu, D. Iter, S. Wang, Y. Xu, M. Ju, S. Sanyal, C. Zhu, M. Zeng, and M. Jiang, “Generate rather than retrieve: Large language models are strong context generators,” arXiv preprint arXiv:2209.10063, 2022. [14] Z. Shao, Y. Gong, Y. Shen, M. Huang, N. Duan, and W. Chen, “Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy,” arXiv preprint arXiv:2305.15294, 2023. [15] X. Wang, Q. Yang, Y. Qiu, J. Liang, Q. He, Z. Gu, Y. Xiao, and W. Wang, “Knowledgpt: Enhancing large language models with retrieval and storage access on knowledge bases,” arXiv preprint arXiv:2308.11761, 2023. [16] A. H. Raudaschl, “Forget rag, the future is rag-fusion,” https://towardsdatascience.com/forget-rag-pe-future-is-rag-fusion-1147298d8ad1, 2023. [17] X. Cheng, D. Luo, X. Chen, L. Liu, D. Zhao, and R. Yan, “Lift yourself up: Retrieval-augmented text generation with self memory,” arXiv preprint arXiv:2305.02437, 2023. [18] S. Wang, Y. Xu, Y. Fang, Y. Liu, S. Sun, R. Xu, C. Zhu, and M. Zeng, “Training data is more valuable than you think: A simple and effective method by retrieving from training data,” arXiv preprint arXiv:2203.08773, 2022. [19] X. Li, E. Nie, and S. Liang, “From classification to generation: Insights into crosslingual retrieval augmented icl,” arXiv preprint arXiv:2311.06595, 2023. [20] D. Cheng, S. Huang, J. Bi, Y. Zhan, J. Liu, Y. Wang, H. Sun, F. Wei, D. Deng, and Q. Zhang, “Uprise: Universal prompt retrieval for improving zero-shot evaluation,” arXiv preprint arXiv:2303.08518, 2023. [21] Z. Dai, V. Y. Zhao, J. Ma, Y. Luan, J. Ni, J. Lu, A. Bakalov, K. Guu, K. B. Hall, and M.-W. Chang, “Promptagator: Few-shot dense retrieval from 8 examples,” arXiv preprint arXiv:2209.11755, 2022. [22] Z. Sun, X. Wang, Y. Tay, Y. Yang, and D. Zhou, “Recitation-augmented language models,” arXiv preprint arXiv:2210.01296, 2022. [23] O. Khattab, K. Santhanam, X. L. Li, D. Hall, P. Liang, C. Potts, and M. Zaharia, “Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp,” arXiv preprint arXiv:2212.14024, 2022. [24] Z. Jiang, F. F. Xu, L. Gao, Z. Sun, Q. Liu, J. Dwivedi-Yu, Y. Yang, J. Callan, and G. Neubig, “Active retrieval augmented generation,” arXiv preprint arXiv:2305.06983, 2023. [25] A. Asai, Z. Wu, Y. Wang, A. Sil, and H. Hajishirzi, “Self-rag: Learning to retrieve, generate, and critique through self-reflection,” arXiv preprint arXiv:2310.11511, 2023. [26] Z. Ke, W. Kong, C. Li, M. Zhang, Q. Mei, and M. Bendersky, “Bridging the preference gap between retrievers and llms,” arXiv preprint arXiv:2401.06954, 2024. [27] X. V. Lin, X. Chen, M. Chen, W. Shi, M. Lomeli, R. James, P. Rodriguez, J. Kahn, G. Szilvasy, M. Lewis et al., “Ra-dit: Retrieval-augmented dual instruction tuning,” arXiv preprint arXiv:2310.01352, 2023. [28] O. Ovadia, M. Brief, M. Mishaeli, and O. Elisha, “Fine-tuning or retrieval? comparing knowledge injection in llms,” arXiv preprint arXiv:2312.05934, 2023. [29] T. Lan, D. Cai, Y. Wang, H. Huang, and X.-L. Mao, “Copy is all you need,” in The Eleventh International Conference on Learning Representations, 2022. [30] T. Chen, H. Wang, S. Chen, W. Yu, K. Ma, X. Zhao, D. Yu, and H. Zhang, “Dense x retrieval: What retrieval granularity should we use?” arXiv preprint arXiv:2312.06648, 2023. [31] F. Luo and M. Surdeanu, “Divide & conquer for entailment-aware multi-hop evidence retrieval,” arXiv preprint arXiv:2311.02616, 2023. [32] Q. Gou, Z. Xia, B. Yu, H. Yu, F. Huang, Y. Li, and N. Cam-Tu, “Diversify question generation with retrieval-augmented style transfer,” arXiv preprint arXiv:2310.14503, 2023. [33] Z. Guo, S. Cheng, Y. Wang, P. Li, and Y. Liu, “Prompt-guided retrieval augmentation for non-knowledge-intensive tasks,” arXiv preprint arXiv:2305.17653, 2023. [34] Z. Wang, J. Araki, Z. Jiang, M. R. Parvez, and G. Neubig, “Learning to filter context for retrieval-augmented generation,” arXiv preprint arXiv:2311.08377, 2023. [35] M. Seo, J. Baek, J. Thorne, and S. J. Hwang, “Retrieval-augmented data augmentation for low-resource domain tasks,” arXiv preprint arXiv:2402.13482, 2024. [36] Y. Ma, Y. Cao, Y. Hong, and A. Sun, “Large language model is not a good few-shot information extractor, but a good reranker for hard samples!” arXiv preprint arXiv:2303.08559, 2023. [37] X. Du and H. Ji, “Retrieval-augmented generative question answering for event argument extraction,” arXiv preprint arXiv:2211.07067, 2022. [38] L. Wang, N. Yang, and F. Wei, “Learning to retrieve in-context examples for large language models,” arXiv preprint arXiv:2307.07164, 2023. [39] S. Rajput, N. Mehta, A. Singh, R. H. Keshavan, T. Vu, L. Heldt, L. Hong, Y. Tay, V. Q. Tran, J. Samost et al., “Recommender systems with generative retrieval,” arXiv preprint arXiv:2305.05065, 2023. [40] B. Jin, H. Zeng, G. Wang, X. Chen, T. Wei, R. Li, Z. Wang, Z. Li, Y. Li, H. Lu et al., “Language models as semantic indexers,” arXiv preprint arXiv:2310.07815, 2023. [41] R. Anantha, T. Bethi, D. Vodianik, and S. Chappidi, “Context tuning for retrieval augmented generation,” arXiv preprint arXiv:2312.05708, 2023. [42] G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave, “Few-shot learning with retrieval augmented language models,” arXiv preprint arXiv:2208.03299, 2022. [43] J. Huang, W. Ping, P. Xu, M. Shoeybi, K. C.-C. Chang, and B. Catanzaro, “Raven: In-context learning with retrieval augmented encoder-decoder language models,” arXiv preprint arXiv:2308.07922, 2023.
[44] B. Wang, W. Ping, P. Xu, L. McAfee, Z. Liu, M. Shoeybi, Y. Dong, O. Kuchaiev, B. Li, C. Xiao et al., “Shall we pretrain autoregressive language models with retrieval? a comprehensive study,” arXiv preprint arXiv:2304.06762, 2023. [45] B. Wang, W. Ping, L. McAfee, P. Xu, B. Li, M. Shoeybi, and B. Catanzaro, “Instructretro: Instruction tuning post retrieval-augmented pre-training,” arXiv preprint arXiv:2310.07713, 2023. [46] S. Siriwardhana, R. Weerasekera, E. Wen, T. Kaluarachchi, R. Rana, and S. Nanayakkara, “Improving the domain adaptation of retrieval augmented generation (rag) models for open domain question answering,” Transactions of the Association for Computational Linguistics, vol. 11, pp. 1–17, 2023. [47] Z. Yu, C. Xiong, S. Yu, and Z. Liu, “Augmentation-adapted retriever improves generalization of language models as generic plug-in,” arXiv preprint arXiv:2305.17331, 2023. [48] O. Yoran, T. Wolfson, O. Ram, and J. Berant, “Making retrieval-augmented language models robust to irrelevant context,” arXiv preprint arXiv:2310.01558, 2023. [49] H.-T. Chen, F. Xu, S. A. Arora, and E. Choi, “Understanding retrieval augmentation for long-form question answering,” arXiv preprint arXiv:2310.12150, 2023. [50] W. Yu, H. Zhang, X. Pan, K. Ma, H. Wang, and D. Yu, “Chain-of-note: Enhancing robustness in retrieval-augmented language models,” arXiv preprint arXiv:2311.09210, 2023. [51] S. Xu, L. Pang, H. Shen, X. Cheng, and T.-S. Chua, “Search-in-the-chain: Towards accurate, credible and traceable large language models for knowledge-intensive tasks,” CoRR, vol. abs/2304.14732, 2023. [52] M. Berchansky, P. Izsak, A. Caciularu, I. Dagan, and M. Wasserblat, “Optimizing retrieval-augmented reader models via token elimination,” arXiv preprint arXiv:2310.13682, 2023. [53] J. Lála, O. O’Donoghue, A. Shtedritski, S. Cox, S. G. Rodriques, and A. D. White, “Paperqa: Retrieval-augmented generative agent for scientific research,” arXiv preprint arXiv:2312.07559, 2023. [54] F. Cuconasu, G. Trappolini, F. Siciliano, S. Filice, C. Campagnano, Y. Maarek, N. Tonellotto, and F. Silvestri, “The power of noise: Redefining retrieval for rag systems,” arXiv preprint arXiv:2401.14887, 2024. [55] Z. Zhang, X. Zhang, Y. Ren, S. Shi, M. Han, Y. Wu, R. Lai, and Z. Cao, “Iag: Induction-augmented generation framework for answering reasoning questions,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023, pp. 1–14. [56] N. Thakur, L. Bonifacio, X. Zhang, O. Ogundepo, E. Kamalloo, D. Alfonso-Hermelo, X. Li, Q. Liu, B. Chen, M. Rezagholizadeh et al., “Nomiracl: Knowing when you don’t know for robust multilingual retrieval-augmented generation,” arXiv preprint arXiv:2312.11361, 2023. [57] G. Kim, S. Kim, B. Jeon, J. Park, and J. Kang, “Tree of clarifications: Answering ambiguous questions with retrieval-augmented large language models,” arXiv preprint arXiv:2310.14696, 2023. [58] Y. Wang, P. Li, M. Sun, and Y. Liu, “Self-knowledge guided retrieval augmentation for large language models,” arXiv preprint arXiv:2310.05002, 2023. [59] Z. Feng, X. Feng, D. Zhao, M. Yang, and B. Qin, “Retrieval-generation synergy augmented large language models,” arXiv preprint arXiv:2310.05149, 2023. [60] P. Xu, W. Ping, X. Wu, L. McAfee, C. Zhu, Z. Liu, S. Subramanian, E. Bakhturina, M. Shoeybi, and B. Catanzaro, “Retrieval meets long context large language models,” arXiv preprint arXiv:2310.03025, 2023. [61] H. Trivedi, N. Balasubramanian, T. Khot, and A. Sabharwal, “Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions,” arXiv preprint arXiv:2212.10509, 2022. [62] R. Ren, Y. Wang, Y. Qu, W. X. Zhao, J. Liu, H. Tian, H. Wu, J.-R. Wen, and H. Wang, “Investigating the factual knowledge boundary of large language models with retrieval augmentation,” arXiv preprint arXiv:2307.11019, 2023. [63] P. Sarthi, S. Abdullah, A. Tuli, S. Khanna, A. Goldie, and C. D. Manning, “Raptor: Recursive abstractive processing for tree-organized retrieval,” arXiv preprint arXiv:2401.18059, 2024. [64] O. Ram, Y. Levine, I. Dalmedigos, D. Muhlgay, A. Shashua, K. Leyton-Brown, and Y. Shoham, “In-context retrieval-augmented language models,” arXiv preprint arXiv:2302.00083, 2023. [65] Y. Ren, Y. Cao, P. Guo, F. Fang, W. Ma, and Z. Lin, “Retrieve-and-sample: Document-level event argument extraction via hybrid retrieval augmentation,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023, pp. 293–306. [66] Z. Wang, X. Pan, D. Yu, D. Yu, J. Chen, and H. Ji, “Zemi: Learning zero-shot semi-parametric language models from multiple tasks,” arXiv preprint arXiv:2210.00185, 2022. [67] S.-Q. Yan, J.-C.
Gu, Y. Zhu, and Z.-H. Ling, “Corrective retrieval augmented generation,” arXiv preprint arXiv:2401.15884, 2024. [68] P. Jain, L. B. Soares, and T. Kwiatkowski, “1-pager: One pass answer generation and evidence retrieval,” arXiv preprint arXiv:2310.16568, 2023. [69] H. Yang, Z. Li, Y. Zhang, J. Wang, N. Cheng, M. Li, and J. Xiao, “Prca: Fitting black-box large language models for retrieval question answering via pluggable reward-driven contextual adapter,” arXiv preprint arXiv:2310.18347, 2023. [70] S. Zhuang, B. Liu, B. Koopman, and G. Zuccon, “Open-source large language models are strong zero-shot query likelihood models for document ranking,” arXiv preprint arXiv:2310.13243, 2023. [71] F. Xu, W. Shi, and E. Choi, “Recomp: Improving retrieval-augmented lms with compression and selective augmentation,” arXiv preprint arXiv:2310.04408, 2023. [72] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W.-t. Yih, “Replug: Retrieval-augmented black-box language models,” arXiv preprint arXiv:2301.12652, 2023. [73] E. Melz, “Enhancing llm intelligence with arm-rag: Auxiliary rationale memory for retrieval augmented generation,” arXiv preprint arXiv:2311.04177, 2023. [74] H. Wang, W. Huang, Y. Deng, R. Wang, Z. Wang, Y. Wang, F. Mi, J. Z. Pan, and K.-F. Wong, “Unims-rag: A unified multi-source retrieval-augmented generation for personalized dialogue systems,” arXiv preprint arXiv:2401.13256, 2024. [75] Z. Luo, C. Xu, P. Zhao, X. Geng, C. Tao, J. Ma, Q. Lin, and D. Jiang, “Augmented large language models with parametric knowledge guiding,” arXiv preprint arXiv:2305.04757, 2023. [76] X. Li, Z. Liu, C. Xiong, S. Yu, Y. Gu, Z. Liu, and G. Yu, “Structure-aware language model pretraining improves dense retrieval on structured data,” arXiv preprint arXiv:2305.19912, 2023. [77] M. Kang, J. M. Kwak, J. Baek, and S. J. Hwang, “Knowledge graph-augmented language models for knowledge-grounded dialogue generation,” arXiv preprint arXiv:2305.18846, 2023. [78] W. Shen, Y. Gao, C. Huang, F. Wan, X. Quan, and W. Bi, “Retrieval-generation alignment for end-to-end task-oriented dialogue system,” arXiv preprint arXiv:2310.08877, 2023. [79] T. Shi, L. Li, Z. Lin, T. Yang, X. Quan, and Q. Wang, “Dual-feedback knowledge retrieval for task-oriented dialogue systems,” arXiv preprint arXiv:2310.14528, 2023. [80] P. Ranade and A. Joshi, “Fabula: Intelligence report generation using retrieval-augmented narrative construction,” arXiv preprint arXiv:2310.13848, 2023. [81] X. Jiang, R. Zhang, Y. Xu, R. Qiu, Y. Fang, Z. Wang, J. Tang, H. Ding, X. Chu, J. Zhao et al., “Think and retrieval: A hypothesis knowledge graph enhanced medical large language models,” arXiv preprint arXiv:2312.15883, 2023. [82] J. Baek, S. Jeong, M. Kang, J. C. Park, and S. J. Hwang, “Knowledge-augmented language model verification,” arXiv preprint arXiv:2310.12836, 2023. [83] L. Luo, Y.-F. Li, G. Haffari, and S. Pan, “Reasoning on graphs: Faithful and interpretable large language model reasoning,” arXiv preprint arXiv:2310.01061, 2023. [84] X. He, Y. Tian, Y. Sun, N. V. Chawla, T. Laurent, Y. LeCun, X. Bresson, and B. Hooi, “G-retriever: Retrieval-augmented generation for textual graph understanding and question answering,” arXiv preprint arXiv:2402.07630, 2024. [85] L. Zha, J. Zhou, L. Li, R. Wang, Q. Huang, S. Yang, J. Yuan, C. Su, X. Li, A. Su et al., “Tablegpt: Towards unifying tables, nature language and commands into one gpt,” arXiv preprint arXiv:2307.08674, 2023. [86] M. Gaur, K. Gunaratna, V. Srinivasan, and H. Jin, “Iseeq: Information seeking question generation using dynamic meta-information retrieval and knowledge graphs,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no.
# Seven Failure Points When Engineering a Retrieval Augmented Generation System CAIN 2024, April 2024, Lisbon, Portugal

|FP|Lesson|Description|Case Studies|
|---|---|---|---|
|FP4|Larger context gets better results (context refers to a particular setting or situation in which the content occurs).|A larger context enabled more accurate responses (8K vs 4K), contrary to prior work with GPT-3.5 [13].|AI Tutor|
|FP1|Semantic caching drives cost and latency down.|RAG systems struggle with concurrent users due to rate limits and the cost of LLMs. Prepopulate the semantic cache with frequently asked questions [1].|AI Tutor|
|FP5-7|Jailbreaks bypass the RAG system and hit the safety training.|Research suggests fine-tuning LLMs reverses safety training [11]; test all fine-tuned LLMs used in a RAG system.|AI Tutor|
|FP2, FP4|Adding metadata improves retrieval.|Adding the file name and chunk number into the retrieved context helped the reader extract the required information. Useful for chat dialogue.|AI Tutor|
|FP2, FP4-7|Open-source embedding models perform better for small text.|Open-source sentence embedding models performed as well as closed-source alternatives on small text.|BioASQ, AI Tutor|
|FP2-7|RAG systems require continuous calibration.|RAG systems receive unknown input at runtime, requiring constant monitoring.|AI Tutor, BioASQ|
|FP1, FP2|Implement a RAG pipeline for configuration.|A RAG system requires calibrating chunk size, embedding strategy, chunking strategy, retrieval strategy, consolidation strategy, context size, and prompts.|Cognitive Reviewer, AI Tutor, BioASQ|
|FP2, FP4|RAG pipelines created by assembling bespoke solutions are suboptimal.|End-to-end training enhances domain adaptation in RAG systems [18].|BioASQ, AI Tutor|
|FP2-7|Testing performance characteristics is only possible at runtime.|Offline evaluation techniques such as G-Evals [14] look promising but are premised on having access to labelled question and answer pairs.|Cognitive Reviewer, AI Tutor|

Table 2: The lessons learned from the three case studies with key takeaways for future RAG implementations

Sort out the security/privacy (who can access what). Furthermore, as the foundation model itself evolves or you get new data to add to the model, you will need to run finetuning again.
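The semantic-caching lesson (FP1) in Table 2 can be illustrated with a minimal sketch; the `embed` function and the 0.9 cosine threshold are assumptions to be tuned per deployment, not a prescribed configuration.

```python
# A hedged sketch of semantic caching: before calling the LLM, look for a
# previously answered question whose embedding is close to the incoming query.
import numpy as np
from typing import Callable, List, Optional, Tuple

class SemanticCache:
    def __init__(self, embed: Callable[[str], np.ndarray], threshold: float = 0.9):
        self.embed = embed
        self.threshold = threshold
        self.entries: List[Tuple[np.ndarray, str]] = []  # (query embedding, answer)

    def _cosine(self, a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def lookup(self, query: str) -> Optional[str]:
        q = self.embed(query)
        best = max(self.entries, key=lambda e: self._cosine(q, e[0]), default=None)
        if best is not None and self._cosine(q, best[0]) >= self.threshold:
            return best[1]  # cache hit: skip the LLM call entirely
        return None

    def add(self, query: str, answer: str) -> None:
        self.entries.append((self.embed(query), answer))

# Prepopulating add(...) with frequently asked questions, as Table 2 suggests,
# reduces both latency and LLM cost for concurrent users.
```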
10, 2022, pp.
10 672–10 680. [87] F. Shi, X. Chen, K. Misra, N. Scales, D. Dohan, E. H. Chi, N. Schärli, and D. Zhou, “Large language models can be easily distracted by irrelevant context,” in International Conference on Machine Learning. PMLR, 2023, pp. 31 210–31 227. [88] R. Teja, “Evaluating the ideal chunk size for a rag system using llamaindex,” https://www.llamaindex.ai/blog/evaluating-pe-ideal-chunk-size-for-a-rag-system-using-llamaindex-6207e5d3fec5, 2023.
Langchain, “Recursively split by character,” https://python.langchain.com/docs/modules/data_connection/document_transformers/recursive_text_splitter, 2023.
S. Yang, “Advanced rag 01: Small-to-big retrieval,” https://towardsdatascience.com/advanced-rag-01-small-to-big-retrieval-172181b396d4, 2023.
Y. Wang, N. Lipka, R. A. Rossi, A. Siu, R. Zhang, and T. Derr, “Knowledge graph prompting for multi-document question answering,” arXiv preprint arXiv:2308.11730, 2023.
D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. Le et al., “Least-to-most prompting enables complex reasoning in large language models,” arXiv preprint arXiv:2205.10625, 2022.
S. Dhuliawala, M. Komeili, J. Xu, R. Raileanu, X. Li, A. Celikyilmaz, and J. Weston, “Chain-of-verification reduces hallucination in large language models,” arXiv preprint arXiv:2309.11495, 2023.
X. Li and J. Li, “Angle-optimized text embeddings,” arXiv preprint arXiv:2309.12871, 2023.
VoyageAI, “Voyage’s embedding models,” https://docs.voyageai.com/embeddings/, 2023.
BAAI, “Flagembedding,” https://github.com/FlagOpen/FlagEmbedding, 2023.
P. Zhang, S. Xiao, Z. Liu, Z. Dou, and J.-Y. Nie, “Retrieve anything to augment large language models,” arXiv preprint arXiv:2310.07554, 2023.
N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang, “Lost in the middle: How language models use long contexts,” arXiv preprint arXiv:2307.03172, 2023.
Y. Gao, T. Sheng, Y. Xiang, Y. Xiong, H. Wang, and J. Zhang, “Chat-rec: Towards interactive and explainable llms-augmented recommender system,” arXiv preprint arXiv:2303.14524, 2023.
N. Anderson, C. Wilson, and S. D. Richardson, “Lingua: Addressing scenarios for live interpretation and automatic dubbing,” in Proceedings of the 15th Biennial Conference of the Association for Machine Translation in the Americas (Volume 2: Users and Providers Track and Government Track), J. Campbell, S. Larocca, J. Marciano, K. Savenkov, and A. Yanishevsky, Eds. Orlando, USA: Association for Machine Translation in the Americas, Sep. 2022, pp. 202–209. [Online]. Available: https://aclanthology.org/2022.amta-upg.14
H. Jiang, Q. Wu, X. Luo, D. Li, C.-Y. Lin, Y. Yang, and L. Qiu, “Longllmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression,” arXiv preprint arXiv:2310.06839, 2023.
V. Karpukhin, B. Oğuz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-t. Yih, “Dense passage retrieval for open-domain question answering,” arXiv preprint arXiv:2004.04906, 2020.
Y. Ma, Y. Cao, Y. Hong, and A. Sun, “Large language model is not a good few-shot information extractor, but a good reranker for hard samples!” ArXiv, vol. abs/2303.08559, 2023.
[Online]. Available: https://api.semanticscholar.org/CorpusID:257532405
J. Cui, Z. Li, Y. Yan, B. Chen, and L. Yuan, “Chatlaw: Open-source legal large language model with integrated external knowledge bases,” arXiv preprint arXiv:2306.16092, 2023.
O. Yoran, T. Wolfson, O. Ram, and J. Berant, “Making retrieval-augmented language models robust to irrelevant context,” arXiv preprint arXiv:2310.01558, 2023.
X. Li, R. Zhao, Y. K. Chia, B. Ding, L. Bing, S. Joty, and S. Poria, “Chain of knowledge: A framework for grounding large language models with structured knowledge bases,” arXiv preprint arXiv:2305.13269, 2023.
H. Yang, S. Yue, and Y. He, “Auto-gpt for online decision making: Benchmarks and additional opinions,” arXiv preprint arXiv:2306.02224, 2023.
T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom, “Toolformer: Language models can teach themselves to use tools,” arXiv preprint arXiv:2302.04761, 2023.
J. Zhang, “Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt,” arXiv preprint arXiv:2304.11116, 2023.
R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders et al., “Webgpt: Browser-assisted question-answering with human feedback,” arXiv preprint arXiv:2112.09332, 2021.
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee et al., “Natural questions: a benchmark for question answering research,” Transactions of the Association for Computational Linguistics, vol. 7, pp.
453–466, 2019.
[157] H. Husain, H.-H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt, “Codesearchnet challenge: Evaluating the state of semantic code search,” arXiv preprint arXiv:1909.09436, 2019.
[135] ——, “Large language models as source planner for personalized knowledge-grounded dialogue,” arXiv preprint arXiv:2310.08840, 2023.
[136] X. Xu, Z. Gou, W. Wu, Z.-Y. Niu, H. Wu, H. Wang, and S. Wang, “Long time no see! open-domain conversation with long-term persona memory,” arXiv preprint arXiv:2203.05797, 2022.
[137] T.-H. Wen, M. Gasic, N. Mrksic, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, D. Vandyke, and S. Young, “Conditional generation and snapshot learning in neural dialogue systems,” arXiv preprint arXiv:1606.03352, 2016.
[138] R. He and J. McAuley, “Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering,” in Proceedings of the 25th International Conference on World Wide Web, 2016, pp. 507–517.
[139] S. Li, H. Ji, and J. Han, “Document-level event argument extraction by conditional generation,” arXiv preprint arXiv:2104.05919, 2021.
[140] S. Ebner, P. Xia, R. Culkin, K. Rawlins, and B. Van Durme, “Multi-sentence argument linking,” arXiv preprint arXiv:1911.03766, 2019.
[141] H. Elsahar, P. Vougiouklis, A. Remaci, C. Gravier, J. Hare, F. Laforest, and E. Simperl, “T-rex: A large scale alignment of natural language with knowledge base triples,” in Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.
[142] O. Levy, M. Seo, E. Choi, and L. Zettlemoyer, “Zero-shot relation extraction via reading comprehension,” arXiv preprint arXiv:1706.04115, 2017.
[143] R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi, “Hellaswag: Can a machine really finish your sentence?” arXiv preprint arXiv:1905.07830, 2019.
[144] S. Kim, S. J. Joo, D. Kim, J. Jang, S. Ye, J. Shin, and M. Seo, “The cot collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning,” arXiv preprint arXiv:2305.14045, 2023.
[145] A. Saha, V. Pahuja, M. Khapra, K. Sankaranarayanan, and S. Chandar, “Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no.
1, 2018.
[146] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt, “Measuring massive multitask language understanding,” arXiv preprint arXiv:2009.03300, 2020.
[147] S. Merity, C. Xiong, J. Bradbury, and R. Socher, “Pointer sentinel mixture models,” arXiv preprint arXiv:1609.07843, 2016.
[148] M. Geva, D. Khashabi, E. Segal, T. Khot, D. Roth, and J. Berant, “Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies,” Transactions of the Association for Computational Linguistics, vol. 9, pp.
346–361, 2021.
[149] J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal, “Fever: a large-scale dataset for fact extraction and verification,” arXiv preprint arXiv:1803.05355, 2018.
[150] N. Kotonya and F. Toni, “Explainable automated fact-checking for public health claims,” arXiv preprint arXiv:2010.09926, 2020.
[151] R. Lebret, D. Grangier, and M. Auli, “Neural text generation from structured data with application to the biography domain,” arXiv preprint arXiv:1603.07771, 2016.
[152] H. Hayashi, P. Budania, P. Wang, C. Ackerson, R. Neervannan, and G. Neubig, “Wikiasp: A dataset for multi-domain aspect-based summarization,” Transactions of the Association for Computational Linguistics, vol. 9, pp.
211–225, 2021.
[153] S. Narayan, S. B. Cohen, and M. Lapata, “Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization,” arXiv preprint arXiv:1808.08745, 2018.
[154] S. Saha, J. A. Junaed, M. Saleki, A. S. Sharma, M. R. Rifat, M. Rahouti, S. I. Ahmed, N. Mohammed, and M. R. Amin, “Vio-lens: A novel dataset of annotated social network posts leading to different forms of communal violence and its evaluation,” in Proceedings of the First Workshop on Bangla Language Processing (BLP-2023), 2023, pp. 72–84.
[155] X. Li and D. Roth, “Learning question classifiers,” in COLING 2002: The 19th International Conference on Computational Linguistics, 2002.
[156] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts, “Recursive deep models for semantic compositionality over a sentiment treebank,” in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013, pp. 1631–1642.
[181] A. Yang, A. Nagrani, P. H. Seo, A. Miech, J. Pont-Tuset, I. Laptev, J. Sivic, and C. Schmid, “Vid2seq: Large-scale pretraining of a visual language model for dense video captioning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 10 714–10 726.
On the other side, RAG systems seem to offer a pragmatic solution, allowing you to chunk your data as needed and feed only the relevant chunks into the context so the LLM can generate an answer grounded in that context. This facilitates continuously updating the knowledge base with new documents and also gives control over which chunks the user is able to access. However, optimal strategies for chunk embedding, retrieval, and contextual fusion remain active research topics. Further work should systematically compare finetuning and RAG paradigms across factors including accuracy, latency, operating costs, and robustness.
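To ground the chunking discussion from Section 6.1, here is a hedged sketch contrasting a heuristic splitter with a simple embedding-based semantic splitter; the `embed` callable and the 0.75 similarity threshold are illustrative assumptions, not recommended settings.

```python
# Two chunking families: a heuristic splitter (paragraph boundaries with a size
# cap) and a semantic splitter that starts a new chunk when consecutive
# sentences drift apart in embedding space.
import re
import numpy as np
from typing import Callable, List

def heuristic_chunks(text: str, max_chars: int = 800) -> List[str]:
    chunks, current = [], ""
    for para in re.split(r"\n\s*\n", text):          # split on paragraph boundaries
        if len(current) + len(para) > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def semantic_chunks(text: str, embed: Callable[[str], np.ndarray],
                    threshold: float = 0.75) -> List[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], [sentences[0]]
    for prev, sent in zip(sentences, sentences[1:]):
        a, b = embed(prev), embed(sent)
        sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        if sim < threshold:        # topic shift: close the current chunk
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
    chunks.append(" ".join(current))
    return chunks
```

Comparing the two on the same corpus (for example, by measuring retrieval accuracy downstream) is exactly the kind of systematic evaluation Section 6.1 calls for.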
[182] N. Nashid, M. Sintaha, and A. Mesbah, “Retrieval-based prompt selection for code-related few-shot learning,” in 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 2023, pp. 2450–2462.
# Harnessing Retrieval-Augmented Generation (RAG) for Uncovering Knowledge Gaps

Joan Figuerola Hurtado, Independent Researcher, joanfihu@gmail.com

Abstract

We present a methodology for uncovering knowledge gaps on the internet using the Retrieval Augmented Generation (RAG) model. By simulating user search behaviour, the RAG system identifies and addresses gaps in information retrieval systems. The study demonstrates the effectiveness of the RAG system in generating relevant suggestions with a consistent accuracy of 93%. The methodology can be applied in various fields such as scientific discovery, educational enhancement, research development, market analysis, search engine optimization, and content development. The results highlight the value of identifying and understanding knowledge gaps to guide future endeavors.

# 1 Introduction

The increasing number of users dissatisfied with the relevance of commercial search engine results is surprising, given the unprecedented access to vast information and sophisticated search technologies. In this paper, we employ the Retrieval Augmented Generation (RAG) model to simulate user search behaviour, aiming to identify and address knowledge gaps on the Internet. We posit that uncovering and bridging these gaps is crucial for enhancing the efficacy of information retrieval systems.

# 2 Related Work
Yom-Tov et al. [14] present an algorithm to estimate query difficulty. Estimation is based on the agreement between the top results of the full query and the top results of its sub-queries. In doing so, difficult queries reveal gaps in a content library. Their methodology relies on training an estimator on a small dataset. We argue that there are now simpler LLM prompting techniques that do not require training a custom model and yield better generalization across multiple domains.

# 3 Methodology

To identify knowledge gaps, we simulate user interactions with search engines in a structured process. Initially, we begin with a query and methodically review each search result until an answer is found. If the top 10 results do not yield an answer, we generate up to four alternative queries and retrieve up to two documents per query, iterating through the search process again.

Figure 1: Iteration loop to find knowledge gaps

Our approach utilizes AskPandi [12], a Retrieval-Augmented Generation (RAG) system, to mimic user behavior. AskPandi integrates Bing's web index for data retrieval and GPT as a reasoning engine. After finding an answer, we capitalize on the in-context capabilities of LLMs to generate a series of relevant follow-up questions. This process is guided by the premise that a well-generalized LLM should provide useful information and
recommendations based on the initial question and answer. The prompt we use is: 'Based on the answer '{}' and the question '{}', what are some potential short follow-up questions?' This methodology diverges from traditional recommender systems [9], which filter through existing content. In contrast, our system focuses on generating the most relevant content, regardless of its preexistence, highlighting a shift from extractive to generative approaches. The process is then iterated, with each cycle going deeper into the query's topic, thus increasing the difficulty of finding relevant information. We consider a knowledge gap to emerge when the LLM can no longer generate an answer. To terminate the process, we incorporate a mechanism to identify stop words in answers. We explored two methods: either letting the model naturally produce a stop word or directing the model to generate one in cases of uncertainty [10]. This comprehensive process not only helps in identifying knowledge gaps but also enhances our understanding of the potential of generative AI in facilitating more relevant information retrieval systems.

# 4 Experiments

We build a dataset with 500 search queries classified in 25 categories. We pick the parent categories from Google Trends as of 2023 [11]. Given that Google Trends derives its data from Google search queries, it is hypothesised that this tool provides a representative sample of general online search behaviour. All 500 search queries can be found in our GitHub repository [13].
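The simulation loop from Section 3, paired with the follow-up prompt quoted above, can be sketched as follows. Here `search`, `answer_with_rag`, and `llm` are hypothetical stand-ins for AskPandi's Bing-backed retrieval and GPT reasoning engine, the stop marker is an assumption, and the alternative-query fallback (up to four rewrites, two documents each) is omitted for brevity.

```python
# Sketch of the user-search simulation: answer, generate follow-ups, go deeper,
# and report a knowledge gap when the system can no longer answer.
STOP_MARKER = "I don't know"  # assumed stop phrase signalling uncertainty

def simulate(query, search, answer_with_rag, llm, max_depth=10):
    depth = 0
    while depth < max_depth:
        docs = search(query, top_k=10)
        answer = answer_with_rag(query, docs)
        if not docs or STOP_MARKER in answer:
            # Could not answer: a knowledge gap at this topic depth.
            return {"gap_query": query, "topic_depth": depth}
        follow_ups = llm(
            f"Based on the answer '{answer}' and the question '{query}', "
            "what are some potential short follow-up questions?"
        )
        # Go one level deeper into the topic with the first follow-up question.
        query = follow_ups[0]
        depth += 1
    return {"gap_query": None, "topic_depth": depth}
```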
|1. Arts & Entertainment|2. Autos & Vehicles|3. Beauty & Fitness|4. Books & Literature|5. Business & Industrial|
|---|---|---|---|---|
|6. Computers & Electronics|7. Finance|8. Food & Drinks|9. Games|10. Health|
|11. Hobbies & Leisure|12. Home & Garden|13. Internet & Telecom|14. Jobs & Education|15. Law & Government|
|16. News|17. Online Communities|18. People & Society|19. Pets & Animals|20. Property|
|21. Reference|22. Science|23. Shopping|24. Sports|25. Travel|

For each category, we generate 20 queries grouped by their complexity: easy and difficult. To determine the complexity of each query, we use the following criteria:

Length of Query
- Easy: Short queries, usually 1-3 words.
- Difficult: Very long queries or full sentences, more than 6 words.

Specificity of Query
- Easy: General or broad queries.
- Difficult: Highly specific, niche, or detailed queries.

Use of Jargon or Technical Terms
- Easy: Common language, no specialised terms.
- Difficult: Heavy use of technical terms, jargon, or acronyms.

Ambiguity or Clarity of Query
- Easy: Clear and straightforward, with likely one main interpretation.
- Difficult: Ambiguous, requiring context or additional information to interpret.

Search Intent
- Easy: General information seeking or popular topics.
- Difficult: In-depth research, controversial topics, or highly detailed queries.

Knowledge Level Required
- Easy: Suitable for a general audience, no special knowledge needed.
- Difficult: Requires in-depth knowledge or expertise in the field.

Query Format
- Easy: Basic questions or keyword searches.
- Difficult: Complex questions, hypotheticals, or requiring multi-step thinking.

For each search simulation, we measured the following metrics:
- Accuracy: the percentage of queries that were answered correctly by the RAG system; answers were manually reviewed.
- Topic Depth: the number of iterations until the LLM system stopped answering the question.
- Average number of sources used per search simulation.
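Only the mechanical criteria above (query length and format) reduce to a simple rule; specificity, jargon, ambiguity, intent, and required knowledge level call for human or LLM judgment. The toy heuristic below is therefore an illustrative sketch, not the labeling procedure used for the dataset.

```python
# Illustrative heuristic covering only the length/format criteria; the remaining
# criteria are judged manually in the study and are not captured here.
QUESTION_WORDS = {"how", "why", "what", "which", "who", "when", "where"}

def rough_complexity(query: str) -> str:
    words = query.split()
    if not words:
        return "easy"
    looks_like_question = query.strip().endswith("?") or words[0].lower() in QUESTION_WORDS
    if len(words) > 6 or (looks_like_question and len(words) > 3):
        return "difficult"
    return "easy"
```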
# 5 Analysis

We carried out search simulations for 60 keywords, generating 323 answers across 655 sources. We found that using more than 60 keywords from the initial 500-keyword dataset did not make a significant difference. All the search simulations can be found in our GitHub repository [13]. The results demonstrate the effectiveness of using a RAG system to simulate user search behaviour and generate relevant suggestions.

With a consistent accuracy of 93% for both simple and complex keywords, the RAG system proved to be a reliable tool for information retrieval. The study also found that finding sources becomes slightly more challenging for specific topics, as indicated by the average number of sources needed per keyword difficulty: 10.9 sources for easy queries and 11.23 for difficult ones. No significant differences were observed in accuracy or source quantity across categories, likely due to the broad and balanced nature of the selected categories.

# 6 Applications

Recommending nonexistent content is a powerful tool for revealing knowledge gaps. This approach has a wide range of applications, including:

1. Scientific Discovery: It can pinpoint unexplored areas in research, highlighting future research topics that have yet to be investigated.
2. Educational Enhancement: By identifying missing elements in learning materials, it helps in creating more comprehensive educational resources.
3. Research Development: This method can uncover untapped research opportunities, guiding scholars and scientists towards novel inquiries.
4. Market Analysis: In the business realm, it can reveal product gaps in a catalogue, offering insights for new product development.
5. Search Engine Optimization: Improving search recommendations by identifying what users might be looking for but isn't currently available online.
6. Content Development: It aids in recognizing content gaps within a content library, assisting content creators in filling these voids.

Each of these applications demonstrates the value of identifying and understanding what is missing, thereby guiding future endeavours in various fields.

# 7 Conclusion

We have successfully demonstrated a methodology for identifying knowledge gaps in content libraries. For future work, there is potential to expand this research by exploring alternative search simulation methods. Specifically, utilising agents could be a promising avenue: these agents, with their broader bandwidth in search engine usage and content processing, offer capabilities surpassing those of human users. Future research could extend the evaluation to additional answer engines, thereby enabling a more comprehensive benchmarking of the estimation methodology outlined in reference [14]. Additionally, we discovered that, on average, a knowledge gap is encountered at the fifth level of topic depth. It is worth pointing out that we do not have direct access to a web index to carry out a more rigorous evaluation. Future work could consider the system's ability to predict whether a query is a missing content query (MCQ) [14] given gold-standard labels, perhaps using a TREC-style test collection and removing the relevant documents from the collection for some queries.

# References

1. Dmitri Brereton. 2022. Google Search Is Dying. Published on February 15, 2022. [Online]. Available: https://dkb.io/post/google-search-is-dying
2. Edwin Chen. 2022. Is Google Search Deteriorating? Measuring Google's Search Quality in 2022. Published on January 10, 2022. [Online]. Available: https://www.surgehq.ai/blog/is-google-search-deteriorating-measuring-search-quality-in-2022
3. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv:2005.11401 [cs.CL].
# 6.3 Testing and Monitoring RAG Systems

Software engineering best practices are still emerging for RAG systems. Software testing and test case generation are among the areas needing refinement. RAG systems require application-specific questions and answers, which are often unavailable when indexing unstructured documents. Emerging work has considered using LLMs to generate questions from multiple documents [4]; how to generate realistic, domain-relevant questions and answers remains an open problem (a sketch of this idea appears after this section). Once suitable test data is available, quality metrics are also required to assist engineers in making quality trade-offs. Using large language models is expensive, introduces latency concerns, and has performance characteristics that change with each new release. This characteristic has previously been studied for machine learning systems [5, 6], but the required adaptations (if any) have yet to be applied to LLM-based systems such as RAGs. Another idea is to incorporate ideas from self-adaptive systems to support monitoring and adapting RAG systems; preliminary work has started for other machine learning applications [2].

# 7 Conclusion

RAG systems are a new approach to information retrieval that leverages LLMs. Software engineers increasingly interact with RAG systems a) through implementing semantic search, or b) through new code-dependent tasks. This paper presented the lessons learned from 3 case studies, including an empirical investigation involving 15,000 documents and 1,000 questions. Our findings provide a guide for practitioners by presenting the challenges faced when implementing RAG systems. We also included future research directions for RAG systems related to 1) chunking and embeddings, 2) RAG vs finetuning, and 3) testing and monitoring. Large language models are going to continue to obtain new capabilities of interest to engineers and researchers. This paper presents the first investigation into RAG systems from a software engineering perspective.

ACKNOWLEDGMENTS

To Amanda Edgar, Rajesh Vasa, Kon Mouzakis, Matteo Vergani, Trish McCluskey, Kathryn Perus, Tara Draper, Joan Sutherland and Ruary Ross for their support and involvement in making the AI Tutor project possible.
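To make the test-case generation idea from Section 6.3 concrete, here is a minimal sketch: an LLM drafts application-specific question-answer pairs from indexed chunks, and the questions are then replayed against the RAG pipeline. `llm` and `rag_answer` are hypothetical callables, and both the prompt and the crude overlap metric are assumptions rather than the procedure used in the case studies.

```python
# Sketch: generate test QA pairs from chunks, then score the RAG pipeline on them.
def generate_test_cases(chunks, llm, n_per_chunk=2):
    cases = []
    for chunk in chunks:
        pairs = llm(
            f"Write {n_per_chunk} question-answer pairs that can be answered "
            f"solely from this passage:\n\n{chunk}\n\n"
            'Return JSON: [{"question": ..., "answer": ...}]'
        )
        cases.extend(pairs)
    return cases

def evaluate(cases, rag_answer):
    # Crude quality signal: token overlap between reference and system answer.
    def overlap(reference, candidate):
        ref, cand = set(reference.lower().split()), set(candidate.lower().split())
        return len(ref & cand) / max(len(ref), 1)
    return sum(overlap(c["answer"], rag_answer(c["question"])) for c in cases) / max(len(cases), 1)
```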
4. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. In Proceedings of the 2019 Conference. [Online]. Available: https://api.semanticscholar.org/CorpusID:160025533 5. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. 2022. Chain of Thought Prompting Elicits Reasoning in Large Language Models. CoRR, abs/2201.11903.
[Online]. Available: https://arxiv.org/abs/2201.11903 6. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher
Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. NeurIPS.
7. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. arXiv:2210.03629 [cs.CL].
8. Kenji Kawaguchi, Yoshua Bengio, and Leslie Kaelbling. 2022. Generalisation in Deep Learning. In Mathematical Aspects of Deep Learning, Philipp Grohs and Gitta Kutyniok, Eds. Cambridge University Press, Cambridge, 112–148. DOI: https://doi.org/10.1017/9781009025096.003
9. Ricci, F., Rokach, L., Shapira, B. (2022). Recommender Systems: Techniques, Applications, and Challenges. In: Ricci, F., Rokach, L., Shapira, B. (eds) Recommender Systems Handbook. Springer, New York, NY. https://doi.org/10.1007/978-1-0716-2197-4_1
10. Anthropic's Team. Let Claude Say "I Don't Know" to Prevent Hallucinations. Anthropic. Accessed in 2023. [Online]. Available: https://docs.anthropic.com/claude/docs/let-claude-say-i-dont-know
11. Google Trends Team. Google Trends. Google. Accessed in 2023. [Online]. Available: https://trends.google.com/trends/
12. AskPandi's Team. AskPandi - Ask Me Anything. AskPandi. Accessed in 2023. [Online]. Available: https://askpandi.com
13. https://github.com/webeng/llm_knowledge_gap_finder
14. Yom-Tov, Elad et al. "Learning to estimate query difficulty: including applications to missing content detection and distributed information retrieval." Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2005).
# Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation

Shicheng Xu1,2*, Liang Pang1†, Mo Yu3†, Fandong Meng3, Huawei Shen1, Xueqi Cheng1, Jie Zhou3
1CAS Key Laboratory of AI Safety, Institute of Computing Technology, CAS
2University of Chinese Academy of Sciences
3Pattern Recognition Center, WeChat AI
{xushicheng21s,pangliang,shenhuawei,cxq}@ict.ac.cn, moyumyu@global.tencent.com, {fandongmeng,withtomzhou}@tencent.com

Abstract

Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additional information from retrieval. However, studies have shown that LLMs still face challenges in effectively using the retrieved information.
In this paper, a novel perspective is proposed where LLMs act as "Information Refiner" to generate more concise, accurate, and complete texts than the input retrieved texts. An information refinement training method named INFO-RAG is introduced to optimize LLMs for RAG in an unsupervised manner. INFO-RAG is low-cost and general across various tasks, showing improvements in performance and advantages in in-context learning and robustness of RAG.

# Introduction

Retrieval-augmented generation (RAG) is a popular framework in modern NLP systems that equips neural networks with retrieved information for text generation in tasks like open-domain question answering, dialogue, etc. Recently, RAG has been applied to large language models (LLMs) to provide additional knowledge.
However, standard training leads LLMs to regard the input retrieved texts only as part of the prefix for language modeling rather than as an additional reference, which leads to the following problems. Firstly, for long and complex retrieved texts, LLMs struggle to accurately extract the correct answers (Deng et al., 2023). Secondly, in situations where the retrieved texts cannot address the task, LLMs lack the capability to integrate the knowledge within model parameters with the retrieved texts to generate improved texts. Thirdly, LLMs are susceptible to incorrect and noisy information in retrieved texts, posing a risk of being misled (Chen et al., 2023; Yoran et al., 2023). To solve the above problems, some previous methods explore strategies for how or when to perform retrieval for LLMs via prompting techniques (Press et al., 2023; Khattab et al., 2022; Xu et al., 2023; Asai et al., 2023). However, prompting cannot materially change the ability of LLMs to utilize retrieved texts because model parameters are not updated for this ability. Other methods fine-tune LLMs on constructed RAG data for a specific task such as QA (Yoran et al., 2023; Yu et al., 2023). However, under the trend that LLMs are regarded as foundation models for various tasks in the zero-shot setting, fine-tuning LLMs only on a few tasks limits them to the RAG of the training tasks and costs them their generalizability, because catastrophic forgetting still exists in supervised fine-tuning of LLMs (Luo et al., 2023). Although constructing data for a large number of tasks can alleviate this, it is hard to design the data for various RAG tasks and it requires high data annotation costs. Our paper aims to fundamentally improve the ability of LLMs to utilize retrieved texts while preserving the generalizability of LLMs for various RAG tasks in the zero-shot setting, which is orthogonal to prompting techniques and can be combined with them to get better performance. In this paper, considering that LLMs have a certain ability to use their own knowledge to examine information (Dhuliawala et al., 2023), we introduce a novel perspective to reassess the role of LLMs in RAG. Specifically, we propose considering LLMs as "Information Refiner". The key idea behind this is to continue training the pre-trained LLMs with an Information Refinement objective so that, regardless of the correctness, completeness, or usefulness of the input retrieved texts, LLMs can consistently integrate knowledge within the retrieved texts and model parameters to generate texts that are more concise, accurate, and complete than the retrieved texts (Figure 1). We term this process "Positive Information Gain". This enables LLMs to extract correct information from complex texts as well as resist and rectify retrieved erroneous information and noise, thereby improving the information bottleneck of RAG and allowing the knowledge capacity of RAG to approximate the combined knowledge of IR and LLMs. We make the information refinement training work in a completely unsupervised manner, such that it is easy to obtain large-scale training data and to maintain the generalizability of the trained LLMs, which can be used in various RAG tasks in the zero-shot setting. Specifically, we propose an unsupervised training method named INFO-RAG. INFO-RAG classifies the retrieved texts into three scenarios (shown in Figure 1) and proposes an unsupervised training task for each scenario.
For the first scenario that all knowledge for the question is already in the retrieved texts, LLMs need to accurately extract relevant knowledge from complex retrieved texts and generate more concise texts. For the second scenario that retrieved texts are incomplete or incorrect for the question, LLMs need to combine the knowledge within model parameters to verify the retrieved texts, correct the wrong knowledge, and complete the missing knowledge. For the third scenario that retrieved texts are relevant but do not have any answer, LLMs need to find the knowledge within model parameters based on relevant context to generate correct answers. We mix the above three tasks to train INFO-RAG unsupervisedly. Main contributions of this paper are as follows: (1) We introduce a novel perspective to reassess the role of LLMs in the RAG system that considers LLMs as “Information Refiner” that can produce positive information gain in RAG scenarios. (2) We propose an unsupervised training method named INFO-RAG that enables LLMs to perform information refinement in RAG. INFO-RAG is low-cost and general for various RAG tasks. (3) Extensive experiments show INFO-RAG enhances the zero-shot RAG of LLaMA2 across Question Answering, Slot-Filling, Language Modeling, Dialog, and Code Generation. INFO-RAG also shows advantages in in-context learning and robustness of RAG.
Code is released at https://github.com/xsc1234/INFO-RAG/.
# Related Work

Retrieval Augmented Generation. Retrieval augmented generation (RAG) aims to provide additional
# CAIN 2024, April 2024, Lisbon, Portugal
Scott Barnett, Stefanus Kurniawan, Srikanth Thudumu, Zach Brannelly, Mohamed Abdelrazek

# REFERENCES

[1] Fu Bang. 2023. GPTCache: An Open-Source Semantic Cache for LLM Applications Enabling Faster Answers and Cost Savings. In 3rd Workshop for Natural Language Processing Open Source Software.
Unsupervised Learning of RAG Unsupervised learning of RAG can be divided into the training of retrievers and language models. As for retrievers, REALM (Guu et al., 2020) proposes using masked language modeling to pre-train a knowledge retriever. REPLUG (Shi et al., 2023) trains the retriever according to the feedback from black-box LM. As for language models, RETRO (Borgeaud et al., 2022) improves language models by retrieving tokens. Atlas proposes pretext tasks to jointly train the retriever and language model. However, these two methods focus on the model of encoder-decoder architecture, which is inconsistent with the current mainstream LLMs based on decoder-only. Previous unsupervised training methods do not consider the specific role that language models should play in RAG. In this paper, we focus on training language model as an “Information Refiner” that can further improve the information bottleneck of RAG and be robust to retrieved texts. Our INFO-RAG This section introduces our INFO-RAG, an unsupervised training method to enable LLMs to perform information refinement in RAG. Firstly, we summarize the retrieved texts in RAG into three scenarios and define the positive information gain for each scenario. Secondly, we construct sample pairs in which the output has information gain compared to the input for these three scenarios and design three training tasks. Thirdly, we train LLMs under our designed tasks on the unsupervised samples. Unsupervised training makes INFO-RAG low-cost and general for RAG in various tasks. Positive Information Gain in RAG In this paper, we introduce a novel perspective to reassess the role of LLMs in RAG that LLMs should be the “Information Refiner” that can produce “Positive Information Gain” in the information flow of RAG. This section details the scenarios of retrieved texts and defines specific information gain LLMs should produce in each scenario.
Scenario 1. The first scenario is that all knowledge needed for the question is already in the retrieved texts. Even if the correct knowledge already exists in the retrieved texts, complex and lengthy retrieved texts are not conducive for users to directly obtain the knowledge. Therefore, the positive information gain in this scenario means that LLMs extract correct knowledge as much as possible while removing irrelevant information, thereby generating more direct and concise texts for users.

Scenario 2. The second scenario is that although the retrieved texts contain some usable knowledge, they still contain some incomplete or incorrect knowledge. This scenario is very common, especially with the current proliferation of fake news, misinformation, and fragmented knowledge on the Internet. Studies have shown that noise and erroneous knowledge in retrieved texts greatly mislead the generation of LLMs (Xu et al., 2023). The positive information gain in this scenario is that LLMs can exploit the knowledge within their parameters to verify the knowledge in the retrieved texts: utilize accurate knowledge, rectify incorrect knowledge, and complete missing knowledge.
Scenario 3. The third scenario is that the retrieved texts do not have any answer that can be used to solve the question. This scenario means that the question is very difficult or the target knowledge is very long-tail for information retrieval systems. Even in this case, the retrieval model’s ability to model semantics allows it to provide texts that are semantically related to the question (Karpukhin et al., 2020). Therefore, the positive information gain in this scenario is that LLMs can stimulate the knowledge within their parameters based on semantically relevant context to solve the question.
Figure 2: Overview of our INFO-RAG. Each sample is only processed for a single scenario to avoid data leakage. (The figure shows how an initial sentence set S intercepted from a Wikipedia document is turned into simulated retrieved texts R(s_l) and prefix/target pairs; its panel labels include "Intercept k consecutive sentences", "Keep", "Correct and Complete", "Eliminate s_l", and "Contextual Stimulation" for the three scenarios.)
# Unsupervised Learning

This section introduces unsupervised learning in INFO-RAG. We construct input-output pairs that satisfy the information gain of the above three scenarios on Wikipedia. We continue to train pre-trained LLMs on the constructed data to perform information refinement in the form of next token prediction in prefix language modeling, which is general for various tasks. The pipeline is shown in Figure 2.

# Data Collection

The data construction is performed on English Wikipedia. Specifically, for each document $d$ in Wikipedia, we intercept $k$ consecutive sentences from $d$ and get the sentence set $S = [s_1, s_2, ..., s_k]$. Our method randomly selects $s_l$ from $S$ and uses it as the object for language modeling. The first 1/3 to 1/2 of the tokens of $s_l$ are randomly intercepted as the prefix ($s_l^p$) and the remaining tokens of $s_l$ are used as the prediction target ($s_l^t$). We also process the sentence set $S$ (Section 3.2.2) so that it can be used to simulate the retrieved texts $\mathcal{R}(s_l^p)$ for the prefix $s_l^p$ in the three scenarios, conditioning the generation of $s_l^t$. Then, we obtain an unsupervised training sample for prefix language modeling that predicts $s_l^t$ given the prefix $s_l^p$ and the retrieved texts $\mathcal{R}(s_l^p)$. This can be formulated as

$p(s_l^t) = p_\theta([\mathcal{R}(s_l^p); s_l^p])$,

where $\theta$ are the parameters of the LLM and $[\cdot;\cdot]$ denotes concatenating $\mathcal{R}(s_l^p)$ and $s_l^p$ with a special token.

# Data Construction and Training Tasks

This section details our data construction and training tasks for the three scenarios in Section 3.1.

For Scenario 1, which needs LLMs to extract the correct knowledge from complex texts, we propose the training task named Select and Copy. Specifically, given the sentence set $S$ for a sample, Select and Copy directly uses all sentences in $S$ as the retrieved texts conditioning LLMs to predict $s_l^t$ for the given prefix $s_l^p$. This can be formulated as

$p(s_l^t) = p_\theta([S; s_l^p])$.

In Select and Copy, $s_l$ (both $s_l^p$ and $s_l^t$) is already contained in the retrieved texts $S$, so LLMs need to select the texts matching the prefix $s_l^p$ from the complex retrieved texts $S$ and directly copy the target $s_l^t$ for generation. The information gain between the output $s_l^t$ and the input retrieved texts $S$ is that $s_l^t$ is more direct and concise for the prefix $s_l^p$.

For Scenario 2, which needs LLMs to verify the knowledge in the retrieved texts, utilize accurate knowledge, rectify incorrect knowledge, and complete missing knowledge, we propose the training task named Correct and Complete. Given a sentence set $S$, this task first uses the stability of the word distribution between layers to identify informative tokens. The intuition is that the more unstable the word distribution of a token is among the topmost layers, the more informative the token is. We follow (Chuang et al., 2023) to achieve this. Specifically, for each sentence $s_i$ in $S$, our method obtains the next-word distribution of the $a$-th token $s_i[a]$ given the prefix $s_i^{<a}$ in each layer of the LLM as

$p_j(s_i[a] \mid s_i^{<a}) = \mathrm{softmax}(W H_j[a])$,

where $j$ indexes the $j$-th layer of the LLM, $H_j[a] \in \mathbb{R}^h$ is the hidden state for token $s_i[a]$ in the $j$-th layer, and $W \in \mathbb{R}^{v \times h}$ is the vocabulary head that maps the hidden state $H_j[a]$ to a distribution over the vocabulary of size $v$.
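A sketch of the Scenario 1 (Select and Copy) sample construction described above, assuming pre-split sentences and whitespace tokenization; the [RETRIEVAL] and [PREFIX] markers stand in for the special separator token and are assumptions, not the actual tokens used by INFO-RAG.

```python
import random

# Build one unsupervised Select-and-Copy sample from k consecutive Wikipedia sentences.
def select_and_copy_sample(sentences):
    s_l = random.choice(sentences)                      # sentence chosen for language modeling
    tokens = s_l.split()                                # whitespace split approximates tokenization
    lo, hi = max(1, len(tokens) // 3), max(1, len(tokens) // 2)
    cut = random.randint(lo, hi)                        # prefix takes the first 1/3 to 1/2 of tokens
    prefix, target = " ".join(tokens[:cut]), " ".join(tokens[cut:])
    retrieved = " ".join(sentences)                     # S: all k sentences, s_l kept inside
    model_input = f"[RETRIEVAL] {retrieved} [PREFIX] {prefix}"
    return {"input": model_input, "target": target}
```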
# Training Strategy

After the data construction for the three training tasks, we mix them for multi-task training. Specifically, we use LoRA (Hu et al., 2021) to train the pre-trained LLMs on the mixed dataset of the three tasks. The three tasks are trained alternately in batches. Since Select and Copy is relatively simple for LLMs, it only accounts for 20% of the batches, while Correct and Complete and Contextual Stimulation each account for 40% of the batches. Using LoRA not only reduces training costs but also makes our method plug-and-play: the trained LoRA parameters are loaded when LLMs need to perform RAG and unloaded when RAG is not needed.

# Experiments

# Datasets and Evaluation Metrics

To demonstrate the generality of our unsupervised training method, we evaluate the performance of INFO-RAG on eleven datasets across seven tasks.

Open-domain Question Answering. Open-domain QA is a typical knowledge-intensive task that can directly evaluate the knowledge of LLMs. We use Natural Questions (Kwiatkowski et al., 2019) (NQ) and WebQuestions (Berant et al., 2013) (WebQ) as the datasets. We use cover Exact Match (EM) to determine whether the ground truth exactly appears in the output, and accuracy is used as the evaluation metric, following (Schick et al., 2023).

Slot Filling. Slot filling requires LLMs to output the object entities for the input subject entity and relation. We use two knowledge-intensive datasets: Zero Shot RE (Levy et al., 2017) (ZS) and T-REx (Elsahar et al., 2018).
We use the same evaluation metric as Open-domain QA.

Long-Form Question Answering. Compared with open-domain QA, LFQA is a QA task whose ground-truth answer is a relatively long text. We use ELI5 (Fan et al., 2019), a knowledge-intensive dataset for LFQA. We use ROUGE-L as the evaluation metric (Petroni et al., 2020).

Dialogue. Dialogue in our experiments focuses on factual knowledge. We use Wizard of Wikipedia (Dinan et al., 2018) (WoW), a knowledge-powered dialogue dataset whose conversations are grounded in knowledge. We use F1 as the evaluation metric (Petroni et al., 2020).

Language Modeling. We use WikiText-103 (Merity, 2016), a popular dataset for language modeling. We use ROUGE-L as the evaluation metric.
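The cover EM metric used for QA and slot filling can be sketched as below; the normalization details (lowercasing and punctuation stripping) are assumptions, since the text only states that the metric checks whether a ground-truth answer exactly appears in the output.

```python
import re
import string

# Cover Exact Match: the prediction counts as correct if any gold answer appears,
# after light normalization, anywhere in the generated output.
def normalize(text: str) -> str:
    text = text.lower()
    text = re.sub(f"[{re.escape(string.punctuation)}]", " ", text)
    return " ".join(text.split())

def cover_em(prediction: str, gold_answers) -> bool:
    pred = normalize(prediction)
    return any(normalize(g) in pred for g in gold_answers)

def accuracy(predictions, gold_answer_lists) -> float:
    hits = sum(cover_em(p, g) for p, g in zip(predictions, gold_answer_lists))
    return hits / max(len(predictions), 1)
```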
# Overall performance on retrieval-augmented generation on 11 datasets across 7 tasks in zero-shot setting | |Soft-Filling|ODQA|Multi-Hop QA|LFQA|Dialog|LM|Code Gen| |---|---|---|---|---|---|---|---| |LLaMA-2-7B|55.60|54.08|46.82|43.52|39.40|25.95|15.18|7.85|60.77|21.44|22.99|35.78| |+ INFO-RAG|65.91|57.01|45.74|44.68|46.56|30.19|17.18|9.09|62.91|26.75|32.06|39.83| |LLaMA-2-7B-chat|60.63|55.03|49.42|46.72|50.03|42.69|27.81|10.21|60.26|22.46|23.90|40.83| |+ INFO-RAG|65.77|58.32|53.93|49.13|52.01|44.45|28.15|10.49|63.24|27.25|28.79|43.78| |LLaMA-2-13B|60.08|50.77|47.40|44.62|42.12|25.78|14.80|7.04|62.20|21.52|29.16|36.86| |+ INFO-RAG|62.80|55.63|47.82|45.42|51.48|35.02|17.48|7.20|64.14|29.00|35.50|41.04| |LLaMA-2-13B-chat|62.53|56.81|50.36|45.47|61.23|47.06|27.07|11.19|60.52|22.34|30.96|43.23| |+ INFO-RAG|65.39|59.05|54.04|51.07|61.91|47.93|27.24|11.38|63.92|31.98|38.12|46.55| # Multi-Hop Question Answering (Multi-hop QA) Measures the ability of LLMs to perform combined reasoning on multiple knowledge. HotpotQA (Yang et al., 2018) and Musique (Trivedi et al., 2022b) are used for this task.
The same evaluation metric as Open-domain QA is utilized. # Code Generation Code generation aims to generate the code for the given natural language. Java and Python in CodeXGLUE (Iyer et al., 2018) are used for this task. CodeBLEU (Ren et al., 2020) is used as the evaluation metric. # Experimental Settings LLMs in the paper include LLaMA-2-7B, 13B, and their chat version (Touvron et al., 2023b). LoRA is used to fine-tune these pre-trained LLMs on four A100 GPUs with a learning rate of 1e-5, per-gpu batch size of 4 (for 7B) and 2 (for 13B) for 5K steps. For the training data, 15 consecutive sentences are intercepted for each example. For Open-domain QA, Soft Filling, and Language Modeling, ColBERTv2 (Santhanam et al., 2022) is used as the retriever, and Wikipedia consisting of 21,015,324 passages (Karpukhin et al., 2020) is used as the retrieval database. For Code Generation, SCODE-R (Parvez et al., 2021) is used as the code retriever, and deduplicated source codes in CodeSearchNET (Husain et al., 2019) are used as the retrieval database. Top-5 retrieved passages are given to each example for all tasks.
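A minimal sketch of this training setup under the stated hyperparameters: LoRA adapters on a LLaMA-2 base model, with batches drawn from the three tasks at the 20/40/40 ratio. The LoRA rank, target modules, and the surrounding training loop are illustrative assumptions; only the mixing ratio and the base models come from the description above.

```python
import random
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint; the paper trains LLaMA-2-7B/13B and their chat variants.
BASE = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA configuration (rank and target modules are illustrative assumptions).
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora)

# Alternate batches between the three tasks at the 20/40/40 ratio from the paper.
TASKS = ["select_and_copy", "correct_and_complete", "contextual_stimulation"]
WEIGHTS = [0.2, 0.4, 0.4]

def next_batch(task_iterators):
    task = random.choices(TASKS, weights=WEIGHTS, k=1)[0]
    return task, next(task_iterators[task])  # task_iterators: dict of batch iterators
```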
# Experimental Results Main Results (Zero-Shot Setting): Experimental results show the improvement of the method on the utilization of retrieved knowledge from four aspects. 1. Short and Direct Knowledge: Significant improvement in RAG performance on ODQA and Slot-Filling tasks. 2. Reasoning on Multiple Knowledge: Advantages in cross-passage reasoning on multiple knowledge of retrieval lists. 3. Long and Complex Knowledge: Improvement in RAG performance on LFQA, Dialogue, and Language Modeling. 4. Code Knowledge: Improvement in RAG performance on Code Generation, demonstrating cross-task generality. The method is trained on natural language but shows advantages in programming language tasks, indicating successful utilization of retrieved information. Unsupervised and prefix language modeling training paradigms make the method general in various tasks.
Paliouras. 2023. BioASQ-QA: A manually curated corpus for biomedical question answering. Scientific Data 10 (2023), 170.
[2] Maria Casimiro, Paolo Romano, David Garlan, Gabriel Moreno, Eunsuk Kang, and Mark Klein. 2022. Self-adaptive Machine Learning Systems: Research Challenges and Opportunities. 133–155. https://doi.org/10.1007/978-3-031-15116-3_7
[3] Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2023. Benchmarking Large Language Models in Retrieval-Augmented Generation. arXiv preprint arXiv:2309.01431 (2023).
[4] Mingda Chen, Xilun Chen, and Wen-tau Yih. 2023. Efficient Open Domain Multi-Hop Question Answering with Few-Shot Data Synthesis. arXiv preprint arXiv:2305.13691 (2023).
[5] Alex Cummaudo, Scott Barnett, Rajesh Vasa, and John Grundy. 2020. Threshy: Supporting safe usage of intelligent web services. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1645–1649.
[6] Alex Cummaudo, Scott Barnett, Rajesh Vasa, John Grundy, and Mohamed Abdelrazek. 2020.
# Table 1 | |has-ans.|replace|no-ans.|has-ans.|replace|no-ans.|has-ans.|replace|no-ans.|has-ans.|replace|no-ans.| |---|---|---|---|---|---|---|---|---|---|---|---|---| |LLaMA-2-7B|67.19|38.37|6.49|64.41|12.78|2.44|65.54|16.91|3.41|60.64|25.68|7.90| |+ INFO-RAG|79.80|41.79|7.04|68.10|13.55|3.26|64.43|22.68|4.70|62.70|26.48|8.96| |LLaMA-2-7B-chat|73.79|40.56|4.87|66.71|14.19|1.63|68.72|20.81|4.50|66.86|28.63|5.62| |+ INFO-RAG|80.01|42.92|5.42|69.64|15.02|2.65|70.99|23.14|5.62|68.73|29.74|9.12| |LLaMA-2-13B|72.26|39.47|7.76|60.14|19.71|4.69|65.94|18.45|4.42|62.09|26.63|9.27| |+ INFO-RAG|75.80|44.08|8.48|65.94|23.21|4.90|64.98|27.60|8.02|63.51|28.24|9.88| |LLaMA-2-13B-chat|75.96|43.79|5.59|67.03|16.58|1.42|69.37|30.72|6.16|65.07|31.88|5.47| |+ INFO-RAG|79.25|48.59|6.67|70.26|25.02|3.87|73.73|33.85|8.39|70.59|37.48|11.25| # Table 2: Experimental results on three scenarios "has-ans." is the first scenario that correct answers are in retrieved texts.
"replace" is the second scenario that correct answers are randomly replaced with other phrases to simulate the incorrect and incomplete knowledge. "no-ans." is the third scenario that retrieval cannot find any answers. # Results on In-context Learning for RAG | |NQ|LLaMA-2| | | | | | | | | | |---|---|---|---|---|---|---|---|---|---|---|---| |Number of Examples in ICL| |0|2|4|8|12|16| | | | | | | |+INFO-RAG|43.36|44.35|45.88|44.45|47.75|46.25| | | | | | |WebQ LLaMA-2|43.20|18.36|9.40|36.71|44.80|44.81| | | | | | |+INFO-RAG|43.20|48.03|49.82|48.25|47.86|47.29| | | | | | |T-REx LLaMA-2|59.83|47.05|49.11|56.51|55.23|56.31| | | | | | |+INFO-RAG|59.83|63.08|63.45|63.54|63.57|63.38| | | | |ZS LLaMA-2| |52.41|42.71|37.05|50.40|50.20|51.01| | | | | | | |+INFO-RAG|52.41|56.53|60.37|59.86|59.75|59.85| | | | # Table 3: RAG performance changes with number of examples in In-context learning # Enhancement to the state-of-the-art RAG framework | |Multi-Hop QA|Slot-Filling| |---|---|---| | |Previous SOTA|28.19|10.03|63.10|57.09| | |SearChain|31.21|11.27|64.58|58.91| |+INFO-RAG| |33.04|12.10|66.95|60.72| Table 4: Enhancement to the state-of-the-art RAG framework. Previous SOTA includes DSP, Self-Ask, React. To show the enhancement to SearChain by INFO-RAG, we perform SearChain based on LLaMA-2-13B-chat trained by INFO-RAG. Results in Table 4 show that INFO-RAG can make SearChain achieve better performance. This provides additional support that our unsupervised INFO training fundamentally improves the RAG performance of LLMs. # 4.4 Analysis Fine-grained Analysis for Three Scenarios: As shown in Table 2, our INFO-RAG is effective in all three RAG scenarios and shows better robustness to incorrect, incomplete, and noisy retrieved texts. We propose corresponding unsupervised training tasks for the three scenarios of RAG. This section introduces the fine-grained analysis for each scenario.
|Method|NQ|has-ans.|replace|no-ans.|
|---|---|---|---|---|
|Baseline|50.36|69.37|30.72|6.16|
|S1: Select and Copy|48.77|69.59|25.40|0.11|
|S2: Correct and Complete|51.59|70.42|32.71|4.48|
|S3: Contextual Stimulation|52.75|72.50|31.77|8.86|
|S2&S3|53.73|73.01|32.50|9.01|
|INFO-RAG (S1&S2&S3)|54.04|73.73|33.85|8.39|

# Table 5: Effects of the three training tasks, analyzed on the best-performed model LLaMA-2-13B-chat (NQ, overall and by retrieval scenario). S1 has negative effects when performed alone; it achieves the best results when trained together with S2 and S3.

|Datasets|Method|Max ∆ ratio|Max ∆ position|Max ∆ number|
|---|---|---|---|---|
|NQ|LLaMA-2|-51.94%|-16.18%|-25.43%|
|NQ|+ INFO-RAG|-43.48%|-15.80%|-17.25%|
|WebQ|LLaMA-2|-50.57%|-5.63%|-22.13%|
|WebQ|+ INFO-RAG|-45.48%|-8.72%|-11.91%|
|T-REx|LLaMA-2|-46.57%|-9.45%|-5.95%|
|T-REx|+ INFO-RAG|-44.38%|-8.61%|-2.99%|

# Table 7: Maximum relative performance change caused by changes in retrieval results

Robustness to Retrieval Results. INFO-RAG is more robust to changes in retrieval results, including the ratio and position of positive passages and the number of retrieved passages. More details can be found in Section A of the Appendix.

Avoid Catastrophic Forgetting. Experiments on MMLU (Hendrycks et al., 2020) without RAG show that INFO-RAG performs very close to the original LLaMA-2 (7B: 45.0 vs. 45.3; 13B: 54.3 vs.
54.8), which indicates that INFO-RAG enhances RAG while avoiding catastrophic forgetting. More details can be found in Section A.6 of the Appendix.

# Conclusion

This paper proposes a novel perspective to reassess the role of LLMs in RAG that considers LLMs as "Information Refiner". This means that regardless of the correctness, completeness, or usefulness of the retrieved texts, LLMs can consistently integrate knowledge within model parameters and the retrieved texts to generate texts that are more concise, accurate, and complete. To achieve it, we propose an information refinement training method named
INFO-RAG in an unsupervised manner, which is low-cost and general across various tasks. Extensive experiments across 11 datasets of 7 tasks in the zero-shot setting show that INFO-RAG improves the performance of LLMs for RAG. INFO-RAG also shows advantages in ICL and robustness of RAG and can be combined with the SOTA RAG framework to further improve its performance.

# Limitations

This paper aims to enable LLMs to perform information refinement in RAG by unsupervised training, so as to accurately extract correct information and avoid the interference of incorrect information. The main limitation of this paper is that, due to the lack of computing resources, we only conduct experiments on models with 7B and 13B parameter sizes. In the future, we consider using more computing resources to explore the performance of models with larger parameter sizes.

# Ethics Statement

After careful consideration, we believe that our paper does not introduce additional ethical concerns. We declare that our work complies with the ACL Ethics Policy.

# Acknowledgements

This work was supported by the National Key R&D Program of China (2022YFB3103700, 2022YFB3103704), the National Natural Science Foundation of China (NSFC) under Grants No. 62276248 and U21B2046, and the Youth Innovation Promotion Association CAS under Grants No. 2023111.

# References

Tom B. Brown, Benjamin Mann, Nick Ryder, et al. 2020.

Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-rag: Learning to retrieve, generate, and critique through self-reflection. arXiv preprint arXiv:2310.11511.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang.
2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the EMNLP 2013, pages 1533–1544.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206–2240. PMLR.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. Eli5: Long form question answering. arXiv preprint arXiv:1907.09190.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International conference on machine learning, pages 3929–3938. PMLR.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.

Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston.
2023. Chain-of-verification reduces hallucination in large language models.
Jingcheng Deng, Liang Pang, Huawei Shen, and Xueqi Cheng. 2023. RegaVAE: A retrieval-augmented Gaussian mixture variational auto-encoder for language modeling. arXiv preprint arXiv:2310.10567.
Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of LREC 2018.
Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, and Pengcheng He. 2023. DoLa: Decoding by contrasting layers improves factuality in large language models. arXiv preprint arXiv:2309.03883.
Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun.
2023. Benchmarking large language models in retrieval-augmented generation. arXiv preprint arXiv:2309.01431.
Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, and Shuming Shi.
2019. Retrieval-guided dialogue response generation via a matching-to-generation framework. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1866–1875.
Deng Cai, Yan Wang, Victoria Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. 2018. Skeleton-to-response: Dialogue generation guided by retrieval memory. arXiv preprint arXiv:1809.05296.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Tomáš Mikolov et al. 2012. Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 80(26).
Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. arXiv preprint arXiv:1808.09588.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781.
Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2022. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics.