# Query: Was the performance of the Chicago Bears’ defense reported as improved by Yardbarker after Sporting News highlighted a sack by the Bears’ defense on Joshua Dobbs during the NFL ’Monday Night Football’ game?

# Answer: Yes

# Evidence List:

|Title|Bears vs. Vikings live score, updates, highlights from NFL ’Monday Night Football’ game|
|---|---|
|Source|Sporting News|
|Published Time|2023-11-27T23:32:04+00:00|
|Fact|The Bears answer right back and sack Dobbs, with Sweat and Brisker in there to take him down.|

|Title|Hottest seat on each NFC team: Buns burning for these four head coaches|
|---|---|
|Source|Yardbarker|
|Published Time|2023-11-30T22:29:33+00:00|
|Fact|In his second season as HC, the defense has improved, but positive results are hard to come by behind a lackluster offense ranked 19th in yards (323.2) and 21st in points per game (20.2).|

# Query: What is the first letter of the CEO’s last name in the news article from Bloomberg on TomTom, and what is the first letter of the city where the company’s headquarters is located in the news article from Reuters?
Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}}
(eds.) Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 14447–14465. Association for Computational Linguistics, Toronto, Canada (Jul 2023). https://doi.org/10.18653/v1/2023.acl-long.808, https://aclanthology.org/2023.acl-long.808

54. Zhang, S., Liu, X., Liu, J., Gao, J., Duh, K., Van Durme, B.: Record: Bridging the gap between human and machine commonsense reading comprehension (Oct 2018). https://doi.org/10.48550/ARXIV.1810.12885

55. Zhang, T., Kishore, V., Wu, F., Weinberger, K.Q., Artzi, Y.: BERTScore: Evaluating Text Generation with BERT. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net (2020). https://openreview.net/forum?id=SkeHuCVFDr

56. Zhang, Y., Khalifa, M., Logeswaran, L., Lee, M., Lee, H., Wang, L.: Merging Generated and Retrieved Knowledge for Open-Domain QA. In: Bouamor, H., Pino, J., Bali, K.
(eds.) Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. pp. 4710–4728. Association for Computational Linguistics, Singapore (Dec 2023). https://doi.org/10.18653/v1/2023.emnlp-main.286, https://aclanthology.org/2023.emnlp-main.286

57. Zhao, P., Zhang, H., Yu, Q., Wang, Z., Geng, Y., Fu, F., Yang, L., Zhang, W., Cui, B.: Retrieval-augmented generation for ai-generated content: A survey (Feb 2024). https://doi.org/10.48550/ARXIV.2402.19473

58. Zheng, L., Chiang, W.L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E.P., Zhang, H., Gonzalez, J.E., Stoica, I.: Judging llm-as-a-judge with mt-bench and chatbot arena (Jun 2023). https://doi.org/10.48550/ARXIV.2306.05685

59. Zhu, F., Lei, W., Wang, C., Zheng, J., Poria, S., Chua, T.S.: Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering. Tech. rep. (May 2021). http://arxiv.org/abs/2101.00774, arXiv:2101.00774 [cs]
# Evaluation of Retrieval-Augmented Generation: A Survey

# Structure of RAG System

# Retrieval Component

The retrieval component of RAG systems in Figure 1 can be categorized into three types: sparse retrieval, dense retrieval [59], and web search engine. The evaluable output of this component is a set of relevant documents with numerical scores or rankings.
Before the introduction of neural networks, sparse retrieval was widely used for retrieving relevant text content. Methods like TF-IDF [38] and BM25 [39] rely on keyword matching and word frequency, but they may miss semantically relevant documents that share no keywords with the query. By leveraging deep learning models such as BERT [8], dense retrieval can capture the semantic meaning of texts, which allows it to find relevant documents even when keyword overlap is minimal. This is crucial for complex queries that require contextual understanding to retrieve accurate information. With advanced fusion structures for queries and documents [24] and efficient implementations of K-Nearest Neighbors (KNN) [42] and Approximate Nearest Neighbor (ANN) [11,22] search, dense retrieval methods have become practical for large-scale use. Web search engines such as Google Search [16], Bing Search [35], and DuckDuckGo [12] provide relevant documents through complex online search infrastructure. Via the API of the search provider, RAG systems can traverse the web's extensive information and potentially return a more diverse and semantically relevant set of documents, although the black-box nature of the search engine and the expense of large-scale search are not always affordable. Overall, dense retrieval techniques, particularly those leveraging embeddings, stand out as the preferred choice within the RAG ecosystem. These methods are frequently employed in tandem with sparse retrieval strategies, creating a hybrid approach that balances precision and breadth in information retrieval. Moreover, the adoption of sophisticated web search engines for benchmark assessment underscores their growing significance in enhancing the robustness and comprehensiveness of evaluations.

# Indexing

The indexing component processes and indexes document collections, such as HuggingFace datasets or Wikipedia pages. Chunking before indexing can improve retrieval by limiting similarity scores to individual chunks, as semantic embedding is less accurate for long articles and the desired content is often brief [27]. Index creation is designed for fast and efficient search: for example, the inverted index for sparse retrieval and the ANN index for dense retrieval. Sparse retrieval involves calculating IDF for each term and storing the values in a database for quick look-up and scoring when queried. Dense retrieval encodes documents into dense vectors using a pre-trained language model like BERT. These vectors are then indexed using an Approximate Nearest Neighbor (ANN) search technique, such as the graph-based Hierarchical Navigable Small World (HNSW) or the Inverted File Index (IVF) [11]. This process allows for the efficient retrieval of the "closest" items under predefined distance metrics.
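To make the dense-retrieval indexing step concrete, here is a minimal sketch of building and querying an ANN index with FAISS's HNSW implementation. Random vectors stand in for chunk embeddings from a BERT-style encoder, and the dimensionality and parameters are illustrative assumptions rather than settings used by the surveyed systems.

```python
import numpy as np
import faiss  # from the faiss-cpu package; provides the HNSW index described above

dim = 384           # embedding dimensionality (illustrative)
n_chunks = 10_000   # number of document chunks produced by the chunking step

# Stand-in for encoding chunks with a BERT-based embedding model.
rng = np.random.default_rng(0)
chunk_embeddings = rng.standard_normal((n_chunks, dim)).astype("float32")

# Build a graph-based HNSW index; 32 is the number of graph neighbors per node.
index = faiss.IndexHNSWFlat(dim, 32)
index.hnsw.efConstruction = 200  # higher values give a better graph at slower build time
index.add(chunk_embeddings)

# At query time, encode the query the same way and fetch the k closest chunks.
query_embedding = rng.standard_normal((1, dim)).astype("float32")
distances, chunk_ids = index.search(query_embedding, 5)
print(chunk_ids[0], distances[0])  # ids and L2 distances of the retrieved chunks
```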
# Search

This step is responsible for retrieving relevant documents based on a given query. For web search engine retrieval, queries are submitted through the respective API to retrieve relevant documents. For local resources, the query component formats the query as required by the different sparse or dense retrieval methods. The query is then submitted to the retrieval system, which returns a set of relevant documents along with their scores. In both local and web-based scenarios, an optional reranker can be employed to further refine the ranking of the retrieved documents. The reranker usually comprises a more complex and larger model that considers additional features of the documents and the given query, such as the semantic relationship between the query and the document content, document importance or popularity, and other custom measures specific to the information need at hand.

# Generation Component

The evaluable output of the generation component is the response of the LLM and the structured or formatted output parsed from that response.

# Prompting

The generation process critically hinges on prompting, where a query, retrieval outcomes, and instructions converge into a single input for the language model. Research showcases various strategic prompting tactics such as Chain of Thought (CoT) [48], Tree of Thought (ToT) [3], and Self-Note [26], each significantly shaping the model's output. These methods, especially the step-by-step approach, are pivotal in augmenting LLMs for intricate tasks. Prompting innovations have also introduced methods like Rephrase and Respond (RaR) [7], which enhance LLMs by refining queries within prompts for better comprehension and response; this technique has been shown to boost performance across diverse tasks. The latest RAG benchmarks [49,50] in specific domains have started to evaluate the robustness of various prompt engineering skills, including CoT, RaR, etc.

# Inference

The final input string prepared in the prompting step is then passed to the LLM, which generates the output. The inference stage is where the LLM operates on the input derived from the retrieval and prompting stages of the pipeline to produce the final output. This is usually the answer to the initial query and is used for downstream tasks.
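As a concrete illustration of the prompting step described above, the sketch below assembles a query, retrieved documents, and an instruction into a single input string. The template wording and function name are illustrative assumptions, not a format prescribed by the surveyed papers.

```python
from typing import List

def build_rag_prompt(query: str, documents: List[str], instruction: str) -> str:
    """Merge instruction, retrieved context, and the user query into one prompt."""
    context = "\n\n".join(
        f"[Document {i + 1}] {doc}" for i, doc in enumerate(documents)
    )
    return (
        f"{instruction}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        f"Answer:"
    )

# Example usage with placeholder retrieval results.
prompt = build_rag_prompt(
    query="Who proposed the Transformer architecture?",
    documents=["The Transformer was introduced in 'Attention Is All You Need' (2017)."],
    instruction="Answer the question using only the context. Think step by step.",
)
print(prompt)
```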
Depending on the specifics of the task or the expected output structure, a post-processing step may be implemented here to format the generated output suitably or to extract specific information from the response. For example, for classification problems (multiple-choice questions), or when the task requires extracting specific information from the generated text, this step could involve additional named entity recognition or parsing operations.
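The following minimal sketch shows one way such a post-processing step might parse a generated response, under the assumption that answers are multiple-choice letters A–D; the regular expressions and fallback behaviour are illustrative choices, not a method prescribed by the surveyed papers.

```python
import re
from typing import Optional

def extract_choice(generated_text: str, valid_choices: str = "ABCD") -> Optional[str]:
    """Parse a multiple-choice answer letter out of a free-form model response."""
    # Prefer explicit patterns such as "Answer: B" or "The answer is (B)".
    match = re.search(
        rf"answer\s*(?:is)?\s*[:\-]?\s*\(?([{valid_choices}])\)?",
        generated_text,
        flags=re.IGNORECASE,
    )
    if match:
        return match.group(1).upper()
    # Fallback: take the first standalone choice letter that appears.
    match = re.search(rf"\b([{valid_choices}])\b", generated_text)
    return match.group(1).upper() if match else None

print(extract_choice("Let's reason step by step... The answer is (C)."))  # -> C
```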
# arXiv:2405.13576v1 [cs.CL] 22 May 2024

# FlashRAG: A Modular Toolkit for Efficient Retrieval-Augmented Generation Research

Jiajie Jin, Yutao Zhu, Xinyu Yang, Chenghao Zhang, Zhicheng Dou*
Gaoling School of Artificial Intelligence, Renmin University of China
{jinjiajie, dou}@ruc.edu.cn, yutaozhu94@gmail.com

# Abstract

With the advent of Large Language Models (LLMs), the potential of Retrieval-Augmented Generation (RAG) techniques has garnered considerable research attention. Numerous novel algorithms and models have been introduced to enhance various aspects of RAG systems. However, the absence of a standardized framework for implementation, coupled with the inherently intricate RAG process, makes it challenging and time-consuming for researchers to compare and evaluate these approaches in a consistent environment. Existing RAG toolkits like LangChain and LlamaIndex, while available, are often heavy and unwieldy, failing to meet the personalized needs of researchers. In response to this challenge, we propose FlashRAG, an efficient and modular open-source toolkit designed to assist researchers in reproducing existing RAG methods and in developing their own RAG algorithms within a unified framework. Our toolkit implements 12 advanced RAG methods and has gathered and organized 32 benchmark datasets. Our toolkit has various features, including a customizable modular framework, a rich collection of pre-implemented RAG works, comprehensive datasets, efficient auxiliary pre-processing scripts, and extensive, standardized evaluation metrics. Our toolkit and resources are available at https://github.com/RUC-NLPIR/FlashRAG.

# 1 Introduction

In the era of large language models (LLMs), retrieval-augmented generation (RAG) [1, 2] has emerged as a robust solution to mitigate hallucination issues in LLMs by leveraging external knowledge bases [3]. The substantial applications and the potential of RAG technology have attracted considerable research attention. With the introduction of a large number of new algorithms and models to improve various facets of RAG systems in recent years, comparing and evaluating these methods under a consistent setting has become increasingly challenging. Many works are not open-source or have fixed settings in their open-source code, making it difficult to adapt them to new data or innovative components. Besides, the datasets and retrieval corpora used often vary, with resources being scattered, which can lead researchers to spend excessive time on pre-processing steps instead of focusing on optimizing their methods. Furthermore, due to the complexity of RAG systems, involving multiple steps such as indexing, retrieval, and generation, researchers often need to implement many parts of the system themselves. Although there are some existing RAG toolkits like LangChain [4] and LlamaIndex [5], they are typically large and cumbersome, hindering researchers from implementing customized processes and failing to address the aforementioned issues.

*Corresponding author
Thus, a unified, researcher-oriented RAG toolkit is urgently needed to streamline methodological development and comparative studies. To address the issues mentioned above, we introduce FlashRAG, an open-source library designed to enable researchers to easily reproduce existing RAG methods and develop their own RAG algorithms. This library allows researchers to utilize built-in pipelines to replicate existing work, employ the provided RAG components to construct their own RAG processes, or simply use the organized datasets and corpora to accelerate their own RAG workflow. Compared to existing RAG toolkits, FlashRAG is better suited for researchers. To summarize, the key features and capabilities of our FlashRAG library can be outlined in the following four aspects:

Extensive and Customizable Modular RAG Framework. To facilitate an easily expandable RAG process, we implemented modular RAG at two levels. At the component level, we offer comprehensive RAG components, including 13 components across four major categories: judger, retriever, refiner, and generator. These components can be used individually in one's code or combined to form a cohesive pipeline. At the pipeline level, after reviewing the current state of RAG development, we implemented 8 common RAG processes. Based on this framework, existing methods can be easily replicated, and RAG processes can be run and evaluated under different settings.

Pre-Implemented Advanced RAG Algorithms. To our knowledge, the implementation of existing work provided by FlashRAG is the most extensive. So far, based on our framework, we have implemented 12 advanced RAG algorithms, such as Self-RAG and FLARE, covering the Sequential RAG, Conditional RAG, Branching RAG, and Loop RAG categories. These methods have been evaluated under a unified setting, and a benchmark report is available.
With our framework, researchers can easily evaluate these methods under various settings and fairly compare them with their own methods, enhancing overall reproducibility. More methods are planned to be incorporated into our library.
# Answer: Insufficient information.
Comprehensive Benchmark Datasets. To improve the consistency and reusability of datasets in RAG research, we have compiled 32 common RAG benchmark datasets and preprocessed them into a unified format. Some of these datasets, such as asqa and wikiasp, have undergone specific adjustments for RAG scenarios to ensure consistency. We have hosted these datasets on the Hugging Face platform for easy access and use.

Efficient Helping Scripts for RAG. To minimize the setup time of RAG experiments, we offer a comprehensive suite of helping scripts, including downloading and slicing Wikipedia for corpus creation, building indexes for retrieval, and preparing retrieval results in advance. These steps are important for the subsequent process, but they are often tedious and time-consuming. Our user-friendly scripts are designed to be intuitive, ensuring researchers can easily navigate the preparatory stages of RAG-related research.

# Related work

The RAG process often involves various components and complex preliminary handling (such as constructing the corpus and building indexes). Due to the lack of a dedicated RAG library for research, most open-source code tends to use its preferred implementation and entails intricate environment configuration. Therefore, it is often time-consuming to run others' code and difficult to migrate it to one's own settings. Simultaneously, the processing and use of datasets and corpora lack standardization, making fair comparison between one's own and existing methods even more challenging. In recent years, numerous open-source toolkits pertaining to RAG have been developed, providing rich RAG components. LangChain [4], LlamaIndex [5], and Haystack [6] are among the most widely adopted. These libraries provide a range of advanced APIs related to LLMs, such as vector databases and embedding models, which greatly streamline interaction with LLMs and make it easy to run a RAG process. Despite their many advantages, these libraries lack support for researchers. On one hand, they tend to overlook the implementation of existing works, including methods, widely used retrieval corpora, and benchmark datasets. On the other hand, they are often too hefty and heavily encapsulated, obscuring operational details or necessitating complex document searches, thereby lacking flexibility for customization. Given these issues, several specialized toolkits for RAG have been introduced that are lighter and more customizable.
# Table 1: Comparison with other RAG toolkits

|Toolkit|Modular Component|Automatic Evaluation|Corpus Helper|# Provided Dataset|# Support Work|
|---|---|---|---|---|---|
|Langchain [4]|✓|✗|✓|-|2|
|LlamaIndex [5]|✓|✓|✓|-|2|
|Haystack [6]|✓|✓|✗|-|-|
|FastRAG [7]|✓|✗|✗|2|1|
|LocalRQA [8]|✗|✓|✗|3|-|
|AutoRAG [9]|✓|✓|✗|4|-|
|FlashRAG (ours)|✓|✓|✓|32|12|

For instance, FastRAG [7] optimizes based on Haystack's API and provides a limited number of supported methods and benchmark datasets. LocalRQA [8] focuses on the training stage of the RAG process, providing comprehensive scripts for training the various components (such as retrievers and generators) that might be involved in the RAG process during research. AutoRAG [9] adopts a design philosophy similar to ours, implementing a modular RAG process. This library represents each component in RAG as a node, and the RAG process is achieved by connecting the nodes. Although AutoRAG encompasses a variety of evaluation metrics and benchmarks, it falls short concerning the direct implementation of existing works. Therefore, in our library, we have not only designed an exhaustive assortment of RAG components to implement a wide array of RAG processes but also implemented various RAG works, so that the effects of existing works under various settings can be replicated directly with a few lines of code. Furthermore, we offer a wealth of resources, including a large number of processed datasets and scripts for obtaining and pre-processing widely-used corpora, among others, to reduce researchers' preparation time as much as possible.

# 3 The Toolkit: FlashRAG

FlashRAG is designed to facilitate RAG-related research. As depicted in Figure 1, the overall structure of the FlashRAG toolkit comprises three hierarchical modules: the environment module, the component module, and the pipeline module. The environment module is fundamental to the toolkit, establishing the requisite datasets, hyperparameters, and evaluation metrics necessary for experimentation. Building upon the environment module, the component module consists of various RAG components, each endowed with its specific role (e.g., retrieval and generation). The pipeline module synthesizes an assortment of component modules with the purpose of effectuating a complete RAG process. In this paper, we introduce the component and pipeline modules; additional details are available in the documentation of our library.

# 3.1 Component Module

The Component Module consolidates all elements involved in the RAG process into a unified framework. Each component is equipped with autonomous functionality, enabling standalone application.
Currently, the Component Module encompasses five main components: Judger, Retriever, Reranker, Refiner, and Generator.

Judger functions as a preliminary component that assesses whether a query necessitates retrieval. Given the limited work and models in this domain, we presently offer a judger based on the SKR [10] method, which determines the necessity of retrieval using a curated set of LLM self-knowledge data.

Retriever implementations are extensively covered by our toolkit. For sparse retrieval, we have integrated the Pyserini library [11] to facilitate the BM25 method. For dense retrieval, we provide support for various BERT-based embedding models such as DPR [12], E5 [13], and BGE [14]. FlashRAG also supports models based on the T5 architecture, like ANCE [15]. We employ FAISS [16, 17] for vector database computations to ensure retrieval efficiency and utilize HuggingFace's datasets library to enhance corpus loading speed. To enhance the reusability of retrieval results and accommodate non-open-source retrievers, our library supports the use of pre-retrieved results, termed the "retrieval cache".
During each retrieval instance, the system automatically searches the retrieval cache for relevant results using the current query and presents them as the return value. Using our retrievers, users can enable automatic saving of retrieval caches as JSONL files for future use.
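To illustrate the retrieval-cache behaviour described here, below is a minimal, hypothetical sketch of a JSONL-backed cache wrapper; the class and file names are assumptions for illustration and are not FlashRAG's actual interface.

```python
import json
import os
from typing import Callable, Dict, List


class CachedRetriever:
    """Wrap any retrieve(query) -> list[dict] function with a JSONL-backed cache."""

    def __init__(self, retrieve_fn: Callable[[str], List[Dict]],
                 cache_path: str = "retrieval_cache.jsonl"):
        self.retrieve_fn = retrieve_fn
        self.cache_path = cache_path
        self.cache: Dict[str, List[Dict]] = {}
        if os.path.exists(cache_path):
            with open(cache_path, encoding="utf-8") as f:
                for line in f:
                    record = json.loads(line)
                    self.cache[record["query"]] = record["results"]

    def retrieve(self, query: str) -> List[Dict]:
        # Return cached results when the query has been seen before.
        if query in self.cache:
            return self.cache[query]
        results = self.retrieve_fn(query)
        self.cache[query] = results
        # Append new results so they can be reused later, or so pre-formatted
        # results from non-open-source retrievers can be loaded the same way.
        with open(self.cache_path, "a", encoding="utf-8") as f:
            f.write(json.dumps({"query": query, "results": results}) + "\n")
        return results


# Example with a dummy retrieval function standing in for a real retriever.
retriever = CachedRetriever(lambda q: [{"doc_id": 0, "score": 1.0, "text": f"doc for {q}"}])
print(retriever.retrieve("what is dense retrieval?"))
```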
Figure 1: An overview of the FlashRAG toolkit, comprising the Environment module (config file, parameter dict, evaluation module, DatasetZoo, CorpusZoo), the Data resources (QA, multiple choice, dialog generation, entity linking, fact verification, and other datasets, plus Wikipedia and MS MARCO pre-processing scripts), the four Pipeline types (sequential, conditional, branching, iterative loop), the Basic Components (generators and refiners), and the Components (retriever, reranker, judger).
For non-open-source retrievers, users can format their retrieval results to fit our cache structure for loading.

Reranker aims at refining the order of results returned by the retriever to enhance retrieval accuracy. Currently, FlashRAG supports a variety of widely-used Cross-Encoder models, such as bge-reranker and jina-reranker. In scenarios where embedding models are used for reranking (e.g., employing BM25 as the retriever), we also facilitate the use of Bi-Encoder models like E5 as rerankers. In practice, the reranker is integrated into the retriever's retrieval function via a decorator, enabling seamless combination with any retriever. Users can assemble any retriever and reranker with just one line of code (an illustrative sketch of this decorator pattern follows the refiner overview below).

Refiner refines the input text for generators to reduce token usage and reduce noise from retrieved documents, improving the final RAG responses. As an essential part of the RAG process, various studies focus on developing superior refiners. We have reviewed the existing literature and implemented four types of refiners, each handling retrieved documents differently. The Extractive Refiner employs an embedding model to extract semantic units, such as sentences or phrases, from the retrieved text that hold higher semantic similarity with the query. The Abstractive Refiner utilizes a seq2seq model to directly summarize the retrieved text, supporting dedicated models like RECOMP [18] as well as general summarizer models with similar structures available on HuggingFace. Furthermore, we also facilitate the use of the LLMLingua [19, 20] Refiner and the Selective-Context [21] Refiner, both perplexity-based refiners.
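As referenced above, here is a minimal, hypothetical sketch of the decorator pattern that couples a retriever with a reranker; the function names and the toy term-overlap scorer are illustrative assumptions standing in for a real cross-encoder, not FlashRAG's actual interfaces.

```python
from functools import wraps
from typing import Callable, Dict, List


def toy_rerank_score(query: str, doc_text: str) -> float:
    """Stand-in for a cross-encoder: simple term-overlap scoring."""
    query_terms = set(query.lower().split())
    doc_terms = set(doc_text.lower().split())
    return len(query_terms & doc_terms) / max(len(query_terms), 1)


def with_reranker(score_fn: Callable[[str, str], float]):
    """Decorator that re-orders any retrieve(query, k) output by a reranker score."""

    def decorate(retrieve_fn: Callable[..., List[Dict]]):
        @wraps(retrieve_fn)
        def wrapper(query: str, k: int = 5) -> List[Dict]:
            candidates = retrieve_fn(query, k)
            return sorted(
                candidates,
                key=lambda doc: score_fn(query, doc["text"]),
                reverse=True,
            )
        return wrapper

    return decorate


@with_reranker(toy_rerank_score)
def retrieve(query: str, k: int = 5) -> List[Dict]:
    # Stand-in first-stage retriever (e.g., BM25 or a dense model).
    corpus = ["dense retrieval uses embeddings", "bm25 relies on term matching", "unrelated text"]
    return [{"doc_id": i, "text": t} for i, t in enumerate(corpus[:k])]


print(retrieve("term matching with bm25", k=3))
```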
Generator is the final component in the RAG process and is thoroughly covered within our toolkit. In the generator module, we have integrated two leading LLM acceleration libraries, vllm [22] and FastChat [23]; hence, a myriad of mainstream LLMs are supported. Furthermore, we provide the native interface of the Transformers library [24] to enhance robustness. We also support various encoder-decoder models, such as Flan-T5 [25]. For these models, we facilitate the use of Fusion-in-Decoder (FiD) techniques [26], further optimizing efficiency when dealing with retrieved documents.

# Pipeline Module

Building on the diverse components outlined earlier, we are able to decouple the algorithmic flow of the RAG process from the specific implementations of each component, facilitating the assembly of the entire pipeline. The pipeline processes the dataset provided by the user, executes the corresponding RAG process on it, and delivers both the final evaluation outcomes and the intermediate results. In constructing a pipeline, one only needs to consider which components are required for the entire RAG process and the logic of the data flow between these components. Specifically, within
each pipeline, it is necessary to load the required components in the init() function and implement the corresponding logic in the run()
function according to each component's interface. To systematically execute the operational logic of various RAG tasks, we conducted an in-depth survey of RAG-related literature. Drawing on the summaries from the RAG survey [27], we categorized all RAG process flows into four types: Sequential, Branching, Conditional, and Loop. So far, we have implemented 8 different pipelines, covering a range of advanced RAG works.

Sequential Pipeline implements a linear execution path for the query, formally represented as query -> retriever -> post-retrieval (reranker, refiner) -> generator. Once the user has configured their settings, the library automatically loads the necessary components along with their corresponding process logic.

Branching Pipeline executes multiple paths in parallel for a single query (often one path per retrieved document) and merges the results from all paths to form the ultimate output. Currently, our library supports two advanced branching methods: the REPLUG pipeline [28] and the SuRe pipeline [29]. The REPLUG pipeline processes each retrieved document in parallel and combines the generation probabilities from all documents to produce the final answer. The SuRe pipeline generates a candidate answer from each retrieved document and then ranks all candidate answers. In implementing SuRe, we adhere to the original paper's prompt and processing flow to ensure accuracy and comparability of the results.

Conditional Pipeline utilizes a judger to direct the query into different execution paths based on the judgement outcome. In the current framework, queries deemed in need of retrieval are sent into the normal sequential process, while the rest bypass retrieval and proceed directly to generation. We offer utility functions to split and merge the input dataset based on the judger's determination, ensuring that all processing can be conducted in batches, which enhances the efficiency of the pipeline. Moreover, the conditional pipeline supports integration with various types of pipelines, meaning it can execute different pipelines for queries based on different judger outcomes.

Loop Pipeline involves complex interactions between the retrieval and generation processes, often encompassing multiple cycles of retrieval and generation. Compared to the previous three types of pipelines, this type offers greater flexibility and improved outcomes. We support four widely recognized methods, including Iterative [30, 31], Self-Ask [32], Self-RAG [33], and FLARE [34]. For each of these methods, we support flexible adjustments to the retrievers and generators to test their performance in different scenarios.

# Datasets and Corpus

# Datasets

As shown in Table 2, we collect and pre-process 32 benchmark datasets, covering the majority of the datasets utilized in RAG works. We researched and listed the sources of the answers in each dataset for reference. For most datasets, the knowledge comes from Wikipedia, underscoring its importance in RAG tasks. All datasets have been formatted into a unified JSONL structure, typically encapsulating four fields: ID, question, golden answer, and metadata. For multiple-choice datasets like MMLU [35, 36] and OpenBookQA [37], an additional "choices" field is provided as options. We have hosted the processed datasets on HuggingFace for easy access.
Details on dataset processing can be found in the appendix. Besides the datasets themselves, we offer a variety of dataset filtering tools. For instance, users can choose a certain number of samples from the entire dataset, either randomly or sequentially, for evaluation, or select a subset of the dataset through the dataset's metadata. These methods are unified within a dataset loading function, which is accessible through a standard interface. Users are also allowed to implement their own filtering functions (a minimal illustrative loader is sketched after the corpus overview below).

# Corpus

Besides datasets, the corpus used for retrieval, also known as the knowledge base, is another vital part of experiment preparation. In various research works, the following two types of corpus are often used: the Wikipedia dump and the MS MARCO passage collection.
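As referenced above, the following hypothetical loader illustrates the unified JSONL format and the filtering options (random or sequential sampling, metadata-based selection); the function name and arguments are assumptions for illustration, not FlashRAG's actual interface.

```python
import json
import random
from typing import Callable, Dict, List, Optional


def load_dataset(
    path: str,
    sample_num: Optional[int] = None,
    random_sample: bool = False,
    metadata_filter: Optional[Callable[[Dict], bool]] = None,
    seed: int = 42,
) -> List[Dict]:
    """Load a JSONL dataset with fields: id, question, golden_answers, metadata."""
    with open(path, encoding="utf-8") as f:
        items = [json.loads(line) for line in f]

    # Optional filtering by metadata (e.g., keep only a topic or split).
    if metadata_filter is not None:
        items = [item for item in items if metadata_filter(item.get("metadata", {}))]

    # Optional sub-sampling, either random or sequential from the start.
    if sample_num is not None:
        if random_sample:
            items = random.Random(seed).sample(items, min(sample_num, len(items)))
        else:
            items = items[:sample_num]
    return items


# Example: 100 random samples whose (hypothetical) metadata marks them as "dev".
# subset = load_dataset("nq_dev.jsonl", sample_num=100, random_sample=True,
#                       metadata_filter=lambda m: m.get("split") == "dev")
```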
# arXiv:2403.09040v1 [cs.CL] 14 Mar 2024

# RAGGED: Towards Informed Design of Retrieval Augmented Generation Systems

Jennifer Hsia*, Afreen Shaikh*, Zhiruo Wang, Graham Neubig
Carnegie Mellon University
{jhsia2,afreens,zhiruow,gneubig}@cs.cmu.edu

# Abstract

Retrieval-augmented generation (RAG) greatly benefits language models (LMs) by providing additional context for tasks such as document-based question answering (DBQA). Despite its potential, the power of RAG is highly dependent on its configuration, raising the question: what is the optimal RAG configuration? To answer this, we introduce the RAGGED framework to analyze and optimize RAG systems. On a set of representative DBQA tasks, we study two classic sparse and dense retrievers, and four top-performing LMs in encoder-decoder and decoder-only architectures. Through RAGGED, we uncover that different models suit substantially varied RAG setups. While encoder-decoder models monotonically improve with more documents, we find decoder-only models can only effectively use < 5 documents, despite often having a longer context window. RAGGED offers further insights into LMs' context utilization habits, where we find that encoder-decoder models rely more on contexts and are thus more sensitive to retrieval quality, while decoder-only models tend to rely on knowledge memorized during training.

# Introduction

Retrieval-augmented generation (RAG) (Chen et al., 2017a; Lewis et al., 2020) is a technique widely applied to enhance the performance of top-performing LMs on knowledge-intensive generation tasks like document-based question answering (Karpukhin et al., 2020). Given a question, the technique includes using a retriever model to obtain multiple relevant passages (i.e.
# Table 2: Benchmark datasets collected and pre-processed in FlashRAG

|Task|Dataset Name|Knowledge Source|# Train|# Dev|# Test|
|---|---|---|---|---|---|
|QA|NQ [38]|Wiki|79,168|8,757|3,610|
| |TriviaQA [39]|Wiki & Web|78,785|8,837|11,313|
| |PopQA [40]|Wiki|/|/|14,267|
| |SQuAD [41]|Wiki|87,599|10,570|/|
| |MSMARCO-QA [42]|Web|808,731|101,093|/|
| |NarrativeQA [43]|Books, movie scripts|32,747|3,461|10,557|
| |WikiQA [44]|Wiki|20,360|2,733|6,165|
| |WebQuestions [45]|Google Freebase|3,778|/|2,032|
| |AmbigQA [46, 38]|Wiki|10,036|2,002|/|
| |SIQA [47]|-|33,410|1,954|/|
| |CommonsenseQA [48]|-|9,741|1,221|/|
| |BoolQ [49]|Wiki|9,427|3,270|/|
| |PIQA [50]|-|16,113|1,838|/|
| |Fermi [51]|Wiki|8,000|1,000|1,000|
| |HotpotQA [52]|Wiki|90,447|7,405|/|
|Multi-Hop QA|2WikiMultiHopQA [53]|Wiki|15,000|12,576|/|
| |Musique [54]|Wiki|19,938|2,417|/|
| |Bamboogle [32]|Wiki|/|/|125|
|Long-Form QA|ASQA [55]|Wiki|4,353|948|/|
| |ELI5 [56]|Reddit|272,634|1,507|/|
| |MMLU [35, 36]|-|99,842|1,531|14,042|
| |TruthfulQA [57]|Wiki|/|817|/|
|Multiple-Choice|HellaSwag [58]|ActivityNet|39,905|10,042|/|
| |ARC [59]|-|3,370|869|3,548|
| |OpenBookQA [37]|-|4,957|500|500|
|Entity-linking|AIDA CoNLL-YAGO [60, 61]|Wiki & Freebase|18,395|4,784|/|
| |WNED [62, 61]|Wiki|/|8,995|/|
|Slot filling|T-REx [63, 61]|DBPedia|2,284,168|5,000|/|
| |Zero-shot RE [64, 61]|Wiki|147,909|3,724|/|
|Fact Verification|FEVER [65, 61]|Wiki|104,966|10,444|/|
|Dialog Generation|WOW [66, 61]|Wiki|63,734|3,054|/|
|Open-domain Summarization*|WikiAsp [67]|Wiki|300,636|37,046|37,368|
# Evaluation

Our library supports a variety of evaluation metrics to assess the quality of the RAG process. Depending on the subject of evaluation, the supported metrics can be divided into two categories: retrieval-aspect metrics and generation-aspect metrics.

Retrieval-aspect metrics: To evaluate the quality of retrieval, we support four metrics: recall@k, precision@k, F1@k, and mean average precision (MAP). Unlike assessing standalone retrieval systems, the documents retrieved in the RAG process often lack golden labels (e.g., related/unrelated tags). Therefore, we facilitate these evaluations by considering whether the golden answer is present within the retrieved documents as an indicator of relevance. Other types of metrics can be obtained by inheriting the existing metric classes and modifying the internal calculation methods.

Generation-aspect metrics: For evaluating the quality of generation, we support five metrics: token-level F1 score, exact match, accuracy, BLEU [69], and ROUGE-L [70]. Moreover, we support counting the number of tokens used in generation, to facilitate analysis of the overall process cost. To accommodate custom evaluation metrics, our library provides a metric template for users to implement. As our library automatically saves intermediate results of the execution, users can conveniently evaluate the outcomes produced by intermediate components. For example, users might compare the number of tokens before and after the refiner runs, or the precision differences between multiple rounds of retrieval results.
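To make the answer-presence relevance criterion and the generation metrics concrete, here is a minimal sketch of exact match, token-level F1, and a recall@k that counts a retrieved document as relevant when it contains a golden answer. The normalization choices are illustrative and may differ from FlashRAG's exact implementation.

```python
import re
import string
from collections import Counter
from typing import List


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, golden_answers: List[str]) -> float:
    return float(any(normalize(prediction) == normalize(g) for g in golden_answers))


def token_f1(prediction: str, golden_answer: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(golden_answer).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


def answer_recall_at_k(retrieved_docs: List[str], golden_answers: List[str], k: int) -> float:
    """Relevance proxy: a document counts as relevant if it contains a golden answer."""
    top_k = retrieved_docs[:k]
    hit = any(normalize(g) in normalize(doc) for doc in top_k for g in golden_answers)
    return float(hit)


print(token_f1("Joe Biden", "Joseph Biden"))                                   # partial overlap
print(answer_recall_at_k(["...Paris is the capital of France..."], ["Paris"], k=1))
```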
# Experimental Results and Discussion

FlashRAG enables researchers to benchmark RAG methods, evaluate their own RAG approaches, and conduct investigations within the RAG field. To demonstrate the capabilities of FlashRAG, we conducted several experiments to provide reproducible benchmarks and exploration.

Experimental Setup: In our main experiment, we employed the latest LLAMA3-8B-instruct [71] as the generator and E5-base-v2 as the retriever, utilizing Wikipedia data from December 2018 as the retrieval corpus. The maximum input length of the generator model is set to 4096. For each query, we retrieved five documents.
For approaches not utilizing custom-defined prompts, we applied a consistent default prompt, which is shown in the appendix. Methods requiring specific settings and hyperparameters are marked with asterisks in our tables, with their specific configurations noted in the appendix. All experiments were carried out on 8 NVIDIA A100 GPUs. We conducted experiments on six common datasets: Natural Questions (NQ) [38], TriviaQA [39], HotpotQA [52], 2WikiMultihopQA [53], PopQA [40], and WebQuestions [45]. We use exact match as the metric on NQ, TriviaQA, and WebQuestions, and token-level F1 as the metric on HotpotQA, 2WikiMultihopQA, and PopQA.

Methods: We conducted experiments on all supported RAG methods. These methods are categorized based on the RAG component they primarily focus on optimizing: AAR [72] aims at optimizing the retriever; LongLLMLingua [20], RECOMP [18], and Selective-Context [21] focus on the refiner to compress input prompts; Ret-Robust [73] and REPLUG [28] focus on optimizing the generator and its related decoding methods; SKR [10] enhances the judger that decides whether to retrieve for a query; SuRe [29], Self-RAG [33], FLARE [34], Iter-RetGen [30], and ITRG [31] optimize the entire RAG flow, including multiple retrieval and generation processes.

# Main results

The main results of the various methods are shown in Table 3. Overall, RAG methods improve significantly over the direct generation baseline. Standard RAG, with advanced retrievers and generators, is a strong baseline, performing well across all six datasets. AAR improves the retriever by training the contriever model and achieves results comparable to the E5 baseline on multiple datasets. For refiners, all three methods show notable improvements. Refiners perform especially well on multi-hop datasets like HotpotQA and 2WikiMultihopQA. This is likely because complex problems lead to less accurate document retrieval, creating more noise and requiring refiner optimization. Among the generator optimization methods, Ret-Robust uses the Llama2-13B model with a LoRA module, greatly enhancing the generator's understanding of retrieved documents and outperforming other training-free methods.
|Method|Optimize component|Pipeline type|NQ (EM)|TriviaQA (EM)|HotpotQA (F1)|2Wiki (F1)|PopQA (F1)|WebQA (EM)|
|---|---|---|---|---|---|---|---|---|
|Naive Generation|-|Sequential|22.6|55.7|28.4|33.9|21.7|18.8|
|AAR [72]|Retriever|Sequential|30.1|56.8|33.4|19.8|36.1|16.1|
|LongLLMLingua [20]|Refiner|Sequential|32.2|59.2|37.5|25.0|38.7|17.5|
|RECOMP-abstractive [18]|Refiner|Sequential|33.1|56.4|37.5|32.4|39.9|20.2|
|Selective-Context [21]|Refiner|Sequential|30.5|55.6|34.4|18.5|33.5|17.3|
|Ret-Robust∗ [73]|Generator|Sequential|42.9|68.2|35.8|43.4|57.2|9.1|
|SuRe [29]|Flow|Branching|37.1|53.2|33.4|20.6|48.1|24.2|
|REPLUG [28]|Generator|Branching|28.9|57.7|31.2|21.1|27.8|20.2|
|SKR [10]∗|Judger|Conditional|25.5|55.9|29.8|28.5|24.5|18.6|
|Self-RAG [33]|Flow|Loop|36.4|38.2|29.6|25.1|32.7|21.9|
|FLARE [34]|Flow|Loop|22.5|55.8|28.0|33.9|20.7|20.2|
|Iter-RetGen [30], ITRG [31]|Flow|Loop|36.8|60.1|38.3|21.6|37.9|18.2|

Figure 2: The results of the standard RAG process under different numbers of retrieved documents (top 1/3/5/10/15) and retrievers (E5-base, BM25, bge-base). Left: Average results on six datasets using three different retrievers with varying numbers of retrieved documents.
Right: Individual results on six datasets using E5 as the retriever.

The effectiveness of optimizing the RAG process varies by dataset. On simpler datasets like NQ and TriviaQA, FLARE and Iter-RetGen are on par with or slightly below standard RAG. However, on complex datasets that require multi-step reasoning, like HotpotQA, there are significant improvements over the baseline. This suggests that adaptive retrieval methods are better suited for complex problems, while on simpler tasks they may incur higher costs with only modest benefits.

# 4.2 Impact of Retrieval on RAG

In the RAG process, the retriever is a crucial component that significantly impacts the results. The quantity and quality of the retrieved documents determine the final answer. However, due to considerations such as cost, existing research often employs a fixed retriever and a fixed number of retrieved documents, neglecting exploration in this area. To thoroughly investigate the influence of the retrieval process on overall RAG results, we conducted a series of experiments. In Figure 2, we present the results for varying numbers of retrieved documents. As shown in the left part of Figure 2, the overall performance is optimal when the number of retrieved documents is 3 or 5. Both an excessive and an insufficient number of retrieved documents lead to a significant decrease in performance, with a drop of up to 40%. This trend is consistent across different retrievers, including both dense and sparse retrieval methods. Additionally, we observe that when the number of retrieved documents is large, the results of the three retrievers of different quality converge. In contrast, for the top-1 results, there is a substantial gap between the dense methods (E5, BGE) and BM25, indicating that the fewer documents are retrieved, the greater the impact of the retriever's quality on the final result.
In the right part of Figure 2, we plot the impact of the number of retrieved documents on the individual datasets. It can be seen that on most datasets, using the top-3 or top-5 retrieved results yields the best performance, suggesting that this may represent a good balance between the quality of retrieved documents and noise.

# Limitations

Our toolkit currently has some limitations, which we plan to gradually address in the future.
(1) Although we strive to encompass many representative RAG methods, due to time and cost considerations, we have not included all existing RAG works. This may require contributions from the open-source community in the future. (2) Our toolkit lacks support for training RAG-related components. We considered training during the initial design, but given the diversity of training methods and the presence of many repositories specifically dedicated to the training of retrievers and generators, we did not include this part. In the future, we may add helper scripts to assist with researchers' training needs.

# Conclusion

To address the challenges researchers face in replicating studies and the high development costs associated with research in the RAG domain, we introduce a modular RAG toolkit. Our toolkit includes comprehensive RAG benchmark datasets, implementations of advanced RAG methods, code for pre-processing corpora, and multiple evaluation metrics. It enables researchers to easily reproduce existing RAG methods, develop new algorithms, and focus on optimizing their research.

# References

[1] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240. PMLR, 2022.

[2] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. REALM: Retrieval-augmented language model pre-training. In International Conference on Machine Learning. JMLR.org, 2020.

[3] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity, 2023.

[4] Harrison Chase. LangChain, October 2022.

[5] Jerry Liu.
LlamaIndex, November 2022.

[6] Malte Pietsch, Timo Möller, Bogdan Kostic, Julian Risch, Massimiliano Pippi, Mayank Jobanputra, Sara Zanzottera, Silvano Cerza, Vladimir Blagojevic, Thomas Stadelmann, Tanay Soni, and Sebastian Lee. Haystack: the end-to-end NLP framework for pragmatic builders, November 2019.

[7] Peter Izsak, Moshe Berchansky, Daniel Fleischer, and Ronen Laperdon. fastRAG: Efficient Retrieval Augmentation and Generation Framework, February 2023.

[8] Xiao Yu, Yunan Lu, and Zhou Yu. Localrqa: From generating data to locally training, testing, and deploying retrieval-augmented qa systems, 2024.

[9] Jeffrey Kim and Bwook Kim. AutoRAG, 2024.

[10] Yile Wang, Peng Li, Maosong Sun, and Yang Liu. Self-knowledge guided retrieval augmentation for large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10303–10315, Singapore, December 2023. Association for Computational Linguistics.

[11] Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. Pyserini: A Python toolkit for reproducible information retrieval research with
[12] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online, November 2020. Association for Computational Linguistics.

[13] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533, 2022.

[14] Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. C-pack: Packaged resources to advance general chinese embedding, 2023.

[15] Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations, 2021.

[16] Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. The faiss library. 2024.
paragraphs) across potentially different documents, then inputting these passages to a reader model as additional contexts for generating an answer.

*Equal contribution. Code/data for the RAGGED framework are available at https://github.com/neulab/ragged
[17] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2019.

[18] Fangyuan Xu, Weijia Shi, and Eunsol Choi. Recomp: Improving retrieval-augmented lms with compression and selective augmentation. arXiv preprint arXiv:2310.04408, 2023.

[19] Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. LLMLingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13358–13376. Association for Computational Linguistics, December 2023.

[20] Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. LongLLMLingua: Accelerating and enhancing llms in long context scenarios via prompt compression. ArXiv preprint, abs/2310.06839, 2023.

[21] Yucheng Li. Unlocking context constraints of llms: Enhancing context efficiency of llms with self-information-based content filtering. arXiv preprint arXiv:2304.12102, 2023.

[22] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

[23] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric
P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
[24] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, October 2020. Association for Computational Linguistics.
[25] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022.
[26] Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. In Paola Merlo, Jorg Tiedemann, and Reut Tsarfaty, editors, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online, April 2021. Association for Computational Linguistics.
# References |[27]|Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey, 2024.| |---|---| |[28]|Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. Replug: Retrieval-augmented black-box language models, 2023.| |[29]|Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, and Jinwoo Shin. Sure: Summarizing retrievals using answer candidates for open-domain QA of LLMs. In The Twelfth International Conference on Learning Representations, 2024.| |[30]|Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9248–9274, Singapore, December 2023. Association for Computational Linguistics.| |[31]|Zhangyin Feng, Xiaocheng Feng, Dezhi Zhao, Maojin Yang, and Bing Qin. Retrieval-generation synergy augmented large language models, 2023.| |[32]|Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5687–5711, Singapore, December 2023.
Association for Computational Linguistics.| |[33]|Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. Self-RAG:Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations, 2024.| |[34]|Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983, 2023.| |[35]|Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR), 2021.| |[36]|Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. Aligning ai with shared human values. Proceedings of the International Conference on Learning Representations (ICLR), 2021.| |[37]|Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In EMNLP, 2018.| |[38]|Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466, 2019.| |[39]|Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan, editors, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada, July 2017.
Association for Computational Linguistics.| |[40]|Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint, 2022.| |[41]|Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Jian Su, Kevin Duh, and Xavier Carreras, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas, November 2016. Association for Computational Linguistics.| |[42]|Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS MARCO: A human-generated MAchine reading COmprehension dataset, 2017.|
|[43]|Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328, 2018.| |---|---| |[44]|Yi Yang, Wen-tau Yih, and Christopher Meek. WikiQA: A challenge dataset for open-domain question answering. In Lluís Màrquez, Chris Callison-Burch, and Jian Su, editors, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018, Lisbon, Portugal, September 2015. Association for Computational Linguistics.| |[45]|Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In David Yarowsky, Timothy Baldwin, Anna Korhonen, Karen Livescu, and Steven Bethard, editors, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA, October 2013. Association for Computational Linguistics.| |[46]|Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. AmbigQA: Answering ambiguous open-domain questions. In EMNLP, 2020.| |[47]|Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi.
Social IQa: Commonsense reasoning about social interactions. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463–4473, Hong Kong, China, November 2019. Association for Computational Linguistics.| |[48]|Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.| |[49]|Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL, 2019.| |[50]|Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial Intelligence, 2019.| |[51]|Ashwin Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Ashish Sabharwal, and Peter Clark. How much coffee was consumed during emnlp 2019? fermi problems: A new reasoning challenge for ai. arXiv preprint arXiv:2110.14207, 2021.| |[52]|Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun’ichi Tsujii, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium, October-November 2018. Association for Computational Linguistics.| |[53]|Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6609–6625, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics.| |[54]|Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. MuSiQue: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 2022.| |[55]|Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, and Ming-Wei Chang.
ASQA: Factoid questions meet long-form answers. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8273–8288, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.| |[56]|Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. ELI5: Long form question answering. In Anna Korhonen, David Traum, and Lluís Màrquez, editors,|
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019. Association for Computational Linguistics.
[57] Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. ACL 2022, Dublin, Ireland, May 2022.
[58] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? ACL 2019.
[59] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. CoRR, 2018.
[60] Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. Robust disambiguation of named entities in text. EMNLP 2011, Edinburgh, Scotland, UK, July 2011.
[61] Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. KILT: A benchmark for knowledge intensive language tasks. NAACL-HLT 2021, Online, June 2021.
[62] Simone Tedeschi, Simone Conia, Francesco Cecconi, and Roberto Navigli. Named Entity Recognition for Entity Linking: What works and what's next. EMNLP 2021, Punta Cana, Dominican Republic, November 2021.
[63] Hady ElSahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon S. Hare, Frédérique Laforest, and Elena Simperl. T-REx: A large scale alignment of natural language with knowledge base triples. LREC 2018, Miyazaki, Japan, May 7-12, 2018.
[64] Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. Zero-shot relation extraction via reading comprehension. CoNLL 2017, Vancouver, Canada, August 2017.
[65] James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: a large-scale dataset for fact extraction and VERification. NAACL-HLT 2018.
[66] Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. Wizard of Wikipedia: Knowledge-powered conversational agents. ICLR 2019.
[67] Hiroaki Hayashi, Prashant Budania, Peng Wang, Chris Ackerson, Raj Neervannan, and Graham Neubig. WikiAsp: A dataset for multi-domain aspect-based summarization. TACL 2020.
[68] Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. ACL 2017.
[69] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. ACL '02, USA, 2002.
[70] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, Barcelona, Spain, July 2004.
[71] AI@Meta. Llama 3 model card, 2024.
high-quality data, while Yu et al.
[72] Zichun Yu, Chenyan Xiong, Shi Yu, and Zhiyuan Liu. Augmentation-adapted retriever improves generalization of language models as generic plug-in. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2421–2436, Toronto, Canada, July 2023.
Association for Computational Linguistics.
[73] Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan Berant. Making retrieval-augmented language models robust to irrelevant context, 2023.
# Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations

arXiv:2404.13948v1 [cs.CL] 22 Apr 2024

Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park*
School of Computing, Korea Advanced Institute of Science and Technology
{nelllpic,starsuzi,yena.seo,doubleyyh,jongpark}@kaist.ac.kr

# Abstract

The robustness of recent Large Language Models (LLMs) has become increasingly crucial as their applicability expands across various domains and real-world applications. Retrieval-Augmented Generation (RAG) is a promising solution for addressing the limitations of LLMs, yet existing studies on the robustness of RAG often overlook the interconnected relationships between RAG components or the potential threats prevalent in real-world databases, such as minor textual errors. In this work, we investigate two underexplored aspects when assessing the robustness of RAG: 1) vulnerability to noisy documents through low-level perturbations and 2) a holistic evaluation of RAG robustness. Furthermore, we introduce a novel attack method, the Genetic Attack on RAG (GARAG), which targets these aspects. Specifically, GARAG is designed to reveal vulnerabilities within each component and test the overall system functionality against noisy documents. We validate RAG robustness by applying our GARAG to standard QA datasets, incorporating diverse retrievers and LLMs. The experimental results show that GARAG consistently achieves high attack success rates. Also, it significantly devastates the performance of each component and their synergy, highlighting the substantial risk that minor textual inaccuracies pose in disrupting RAG systems in the real world.

# Introduction

Recent Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023b) have enabled remarkable advances in diverse Natural Language Processing (NLP) tasks, especially in Question-Answering (QA) tasks (Joshi et al., 2017; Kwiatkowski et al., 2019). Despite these advances, however, LLMs face challenges in having to adapt to ever-evolving or long-tailed knowledge due to their limited parametric memory (Kasai et al., 2023; Mallen et al., 2023), resulting in a hallucination where the models generate convincing yet factually incorrect text (Li et al., 2023a). Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) has emerged as a promising solution by utilizing a retriever to fetch enriched knowledge from external databases, thus enabling accurate, relevant, and up-to-date response generation. Specifically, RAG has shown its superior performance across diverse knowledge-intensive tasks (Lewis et al., 2020; Lazaridou et al., 2022; Jeong et al., 2024), leading to its integration as a core component in various real-world APIs (Qin et al., 2024; Chase, 2022; OpenAI, 2023a).
Figure 1: Impact of the noisy document in the real-world database on the RAG system.

Given its extensive applications, ensuring robustness under diverse conditions of real-world scenarios becomes critical for safe deployment. Thus, assessing potential vulnerabilities within the overall RAG system is vital, particularly by assessing its components: the retriever and the reader. However, existing studies on assessing the robustness of RAG often focus solely on either retrievers (Zhong et al., 2023; Zou et al., 2024; Long et al., 2024) or readers (Li et al., 2023b; Wang et al., 2023; Zhu et al., 2023). The robustness of a single component might only partially capture the complexities of RAG systems, where the retriever and reader work together in a sequential flow, which is
crucial for optimal performance.
In other words, the reader’s ability to accurately ground information significantly depends on the retriever’s capability of sourcing query-relevant documents (Baek et al., 2023; Lee et al., 2023). Thus, it is important to consider both components simultaneously when evaluating the robustness of an RAG system. While concurrent work has shed light on the sequential interaction between two components, they have primarily evaluated the performance of the reader component given the high-level perturbed errors within retrieved documents, such as context relevance or counterfactual information (Thakur et al., 2023; Chen et al., 2024; Cuconasu et al., 2024). However, they have overlooked the impact of low-level errors, such as textual typos due to human mistakes or preprocessing inaccuracies in retrieval corpora, which commonly occur in real-world scenarios (Piktus et al., 2021; Le et al., 2023). Additionally, LLMs, commonly used as readers, often struggle to produce accurate predictions when confronted with textual errors (Zhu et al., 2023; Wang et al., 2023). Note that these are the practical issues that can affect the performance of any RAG system in real-world scenarios, as illustrated in Figure 1. Therefore, to deploy a more realistic RAG system, we should consider: “Can minor document typos comprehensively disrupt both the retriever and reader components in RAG systems?” In this work, we investigate two realistic yet underexplored dimensions of RAG robustness evaluation: 1) the quantitative resilience of the individual retriever and reader components and their sequential relationships and 2) vulnerability to noisy documents with low-level perturbations. First, we introduce two specific objectives for a retriever and reader to assess each component’s robustness against low-level perturbations. These objectives assess the impact of perturbed documents on the RAG pipeline’s retrieval and grounding capabilities, providing a detailed understanding of component-specific resilience beyond traditional QA metrics. To further explore robustness under these newly defined dimensions, we introduce a novel adversarial attack algorithm, namely GARAG, which targets at the dual objectives within the RAG system. Specifically, the adversarial document population is initially generated by injecting low-level perturbations to clean documents while keeping the answer tokens intact. The population then undergoes iterative crossover, mutation, and selection processes to discover the most optimal adversarial documents within the search space formulated by our objectives. To sum up, GARAG assesses the holistic robustness of an RAG system against minor textual errors, offering insights into the system’s resilience through iterative adversarial refinement. We validate our method on three standard QA datasets (Joshi et al., 2017; Kwiatkowski et al., 2019; Rajpurkar et al., 2016), with diverse retrievers (Karpukhin et al., 2020; Izacard et al., 2022) and LLMs (Touvron et al., 2023; Chiang et al., 2023; Jiang et al., 2023). The experimental results reveal that adversarial documents with low-level perturbation generated by GARAG significantly induce retrieval and grounding errors, achieving a high attack success rate of approximately 70%, along with a significant reduction in the performance of each component and overall system. Our analyses also highlight that lower perturbation rates pose a greater threat to the RAG system, emphasizing the challenges of mitigating such inconspicuous yet critical vulnerabilities. 
Our contributions in this paper are threefold:
- We point out that the RAG system is vulnerable to minor but frequent textual errors within the documents, by evaluating the functionality of each retriever and reader component.
- We propose a simple yet effective attack method, GARAG, based on a genetic algorithm searching for adversarial documents targeting both components within RAG simultaneously.
- We experimentally show that noisy documents prevalent in real-world databases can be fatal to the RAG system.
# Related Work

# Robustness in RAG

The robustness of RAG, characterized by its ability to fetch and incorporate external information dynamically, has gained much attention for its critical role in real-world applications (Chase, 2022; Liu, 2022; OpenAI, 2023a). However, previous studies concentrated on the robustness of individual components within RAG systems, either retriever or reader. The vulnerability of the retriever is captured by injecting adversarial documents, specially designed to disrupt the retrieval capability, into retrieval corpora (Zhong et al., 2023; Zou et al., 2024; Long et al., 2024). Additionally, the robustness of LLMs, often employed as readers, has been critically examined for their resistance to out-of-distribution data and adversarial attacks (Wang et al., 2021; Li et al., 2023b; Wang et al., 2023;
Zhu et al., 2023). However, these studies overlook the sequential interaction between the retriever and reader components, thus not fully addressing the overall robustness of RAG systems. In response, there is an emerging consensus on the need to assess the holistic robustness of RAG, with a particular emphasis on the sequential interaction of the retriever and reader (Thakur et al., 2023; Chen et al., 2024). They point out that RAG's vulnerabilities stem from retrieval inaccuracies and inconsistencies in how the reader interprets retrieved documents. Specifically, the reader generates incorrect responses if the retriever fetches partially (or entirely) irrelevant or counterfactual documents within the retrieved set. The solutions to these challenges range from prompt design (Cho et al., 2023; Press et al., 2023) and plug-in models (Baek et al., 2023) to specialized language models for enhancing RAG's performance (Yoran et al., 2024; Asai et al., 2024). However, they focus on the high-level errors within retrieved documents, which may overlook more subtle yet realistic low-level errors frequently encountered in the real world.

In this study, we spotlight a novel vulnerability in RAG systems related to low-level textual errors found in retrieval corpora, often originating from human mistakes or preprocessing inaccuracies (Thakur et al., 2021; Piktus et al., 2021; Le et al., 2023). Specifically, Faruqui et al. (2018) pointed out that Wikipedia, a widely used retrieval corpus, frequently contains minor errors within its contents. Therefore, we focus on a holistic evaluation of the RAG system's robustness against pervasive low-level text perturbations, emphasizing the critical need for systems that can maintain comprehensive effectiveness for real-world data.

# Adversarial Attacks in NLP

Adversarial attacks involve generating adversarial samples designed to meet specific objectives to measure the robustness of models (Zhang et al., 2020). In NLP, such attacks use a transformation function to inject perturbations into text, accompanied by a search algorithm that identifies the most effective adversarial sample. The operations of the transformation function can be categorized into high-level and low-level perturbations. High-level perturbations leverage semantic understanding (Alzantot et al., 2018; Ribeiro et al., 2018; Jin et al., 2020), while low-level perturbations are based on word or character-level changes, simulating frequently occurring errors (Eger et al., 2019; Eger and Benz, 2020; Le et al., 2022; Formento et al., 2023). Search algorithms aim to find optimal adversarial samples that meet specific objectives, utilizing diverse methods such as greedy search, gradient descent-based approaches, and genetic algorithms. Given our aim to evaluate the robustness of the overall RAG system, which has non-differentiable and dual objectives for a retriever and a reader, we propose a novel attack algorithm that incorporates a genetic algorithm.

# Method

Here, we introduce our task formulation and a novel attack method, GARAG.
Further details of the proposed method are described in Appendix A.

# Adversarial attack on RAG

Pipeline of RAG. Let q be a query the user requests.
In an RAG system, the retriever first fetches the query-relevant document d, then the reader generates the answer grounded on the document-query pair (d, q). The retriever, parameterized with ϕ = (ϕd, ϕq), identifies the most relevant document in the database. The relevance score r is computed by the dot product of the embeddings for document d and query q, as rϕ(d, q) = Enc(d; ϕd) · Enc(q; ϕq). Finally, the reader, using an LLM parameterized with θ, generates the answer a from the document-query pair (d, q), as a = LLM(d, q; θ).

Adversarial Document Generation. To simulate typical noise encountered in real-world scenarios that attack RAG, we introduce low-level perturbations to mimic these conditions. Specifically, we design an adversarial document d′ by transforming the original, clean document d into its noisy counterpart with perturbations. Formally, this transformation involves a function f that alters each token di in d into a perturbed version di′, where these perturbed tokens collectively form d′. Specifically, the function f randomly applies one of several low-level perturbation operations to each selected token.
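To make this pipeline concrete, the following minimal sketch computes the dual-encoder relevance score rϕ(d, q) as a dot product of document and query embeddings and picks the top document for the reader. The bag-of-words encoder and the example texts are illustrative placeholders, not the retrievers, readers, or data used in this paper.

```python
import numpy as np

# Minimal sketch of the dual-encoder scoring step r_phi(d, q) = Enc(d) . Enc(q).
# The toy bag-of-words encoder below stands in for the document/query encoders.

def _toy_encode(text: str, dim: int = 64) -> np.ndarray:
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec

def relevance_score(doc: str, query: str) -> float:
    """Dot product of document and query embeddings, mirroring r_phi(d, q)."""
    return float(_toy_encode(doc) @ _toy_encode(query))

docs = [
    "Thanksgiving dinner traditionally features turkey.",
    "The Fed raised interest rates to fight inflation.",
]
query = "Name food you might eat on Thanksgiving?"

# Retrieval step: keep the document with the highest relevance score.
best_doc = max(docs, key=lambda d: relevance_score(d, query))
print(best_doc)
# Reader step (not shown): the answer would then be generated as a = LLM(best_doc, query).
```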
(2023) intentionally inject noisy content into input context during model training, and observe increased robustness of these models to low-quality contexts. To provide more concrete suggestions of the best practices under various cases, we introduce an analysis framework, RAGGED, to test RAG combinations on a suite of representative document-based question answering (DBQA) tasks, including open-domain datasets like Natural Questions (Kwiatkowski et al., 2019) and HotpotQA (Yang et al., 2018), which respectively focus on single-hop and multi-hop questions, as well as BioASQ, which targets the specialized, biomedical domain. To ensure a comprehensive evaluation, we incorporate both classic sparse and dense retrievers, BM25 (Robertson et al., 2009) and ColBERT (Khattab and Zaharia, 2020), and four top-performing reader models in encoder-decoder and decoder-only architectures, from FLAN (Chung et al., 2022; Tay et al., 2023) and LLAMA (Touvron et al., 2023b) families, respectively.

We begin by exploring "How many contexts can readers benefit from?" (§5). Our analysis identifies the optimal context quantity for individual LMs. We find that encoder-decoder models can effectively utilize up to 30 passages within their 2k-token limit, whereas decoder-only models' performance declines beyond 5 passages, despite having twice the size of the context limit (4k). Given this intriguing difference, we investigate the models' context utilization behaviors (§6) and ask "How reliant are models on provided contexts?". We find that decoder-only models, which memorize more during training, exhibit comparatively less reliance on additional, test-time contexts. In contrast, encoder-decoder models, which memorize less during training, are more faithful to the provided contexts. This suggests that providing passages for context-reliant encoder-decoder models is beneficial, whereas it is less so for memory-reliant decoder-only models.

Given that some models are reliant on context, we also examine the importance of context quality by asking "How does the retriever quality affect readers' contextualization behavior?" (§7) Our analysis considers two aspects: a retriever's ability to identify high-quality passages and a reader's response to varying passage quality. While dense, neural retrievers perform better on open-domain questions, sparse, lexical retrievers readily achieve comparable accuracy on special domains, with much less computation. Neural retrievers' advantage readily benefits encoder-decoder models, especially for single-hop questions. However, the benefits are much less pronounced for decoder-only models and multi-hop questions.

In summary, we demonstrate how RAGGED enables us to derive actionable insights about the conditions under which state-of-the-art RAG components combine to excel. We introduce a reusable framework that can easily be adapted to analyze new RAG components, such as retriever and reader models, as they evolve. We release our full dataset and code, aiming to provide the community with a deeper understanding of the nuanced interplay between context quantity, quality, and model architecture in RAG systems.

# The RAGGED Framework

# Framework Overview

We first explain the three aspects we vary in our analysis, then explain the three research questions we can answer with our analysis. The three aspects we vary in our framework are:
- RAG system components.
For example, we vary the choice of the retriever (e.g., BM25, ColBERT), the reader family (e.g., LLAMA2, FLAN-T5), and the max input token length.
- The number of retrieved passages, denoted as k. We vary k from 1 to 100, though we find the most insightful variations in behavior occur before k = 30.
- The slices of data to examine. For example, we examine slices where the top-k retrieved passages include gold passages and where they do not include gold passages.

By varying these, we analyze the three following aspects of RAG system behavior:

Effective Number of Context Passages (§5) We vary the number of passages and the choice of reader to see how different model architectures and context limits affect the effective number of context passages a reader can process. Here, we evaluate all instances instead of specific subsets.
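As a rough illustration of how these three axes combine in practice, the sketch below enumerates a (retriever, reader, k) grid and tags each instance by whether its top-k passages contain a gold passage. The model names, the `run_reader` stub, and the instance fields are assumptions for illustration, not the RAGGED implementation.

```python
from itertools import product

# Hypothetical experiment grid over the three axes varied in the analysis:
# the retriever, the reader family, and the number of retrieved passages k.
retrievers = ["bm25", "colbert"]            # sparse vs. dense
readers = ["flan-t5-xxl", "llama2-7b"]      # encoder-decoder vs. decoder-only
ks = [1, 2, 3, 5, 10, 20, 30, 50, 100]

def run_reader(reader: str, question: str, passages: list[str]) -> str:
    """Placeholder for prompting a reader model with the question plus k passages."""
    return "<answer>"

def evaluate(instance: dict, retriever: str, reader: str, k: int) -> dict:
    top_k = instance["retrieved"][retriever][:k]
    return {
        "retriever": retriever,
        "reader": reader,
        "k": k,
        # Data slice: does the top-k set contain a gold passage?
        "has_gold": any(p in instance["gold_passages"] for p in top_k),
        "prediction": run_reader(reader, instance["question"], top_k),
    }

def run_grid(dataset: list[dict]) -> list[dict]:
    results = []
    for instance in dataset:
        for retriever, reader, k in product(retrievers, readers, ks):
            results.append(evaluate(instance, retriever, reader, k))
    return results
```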
The operations applied by f include inner shuffling, truncation, keyboard typos, and natural typos. Note that lower values of LRSR and LGPR indicate a stronger negative effect on the RAG system. Specifically, each value below 1 identifies a successful adversarial attack against the document d. In detail, generating the adversarial document d′ involves selecting tokens for attack, applying perturbations, and assembling the modified document. Initially, to identify the tokens to be altered, a subset of indices I′ is randomly selected from the complete set of token indices I = {1, . . . , N },
where N is the total number of tokens in d. This selection is designed to exclude any indices that correspond to the correct answer a within the document, thus ensuring that the perturbations focus exclusively on assessing the impact of noise. Each selected token di is then transformed using the function f, yielding a perturbed version di′, for i ∈ I′ ⊂ I. The final document d′ merges the set of unaltered tokens T = {di | i ∈ I \ I′} with the set of modified tokens T′ = {dj′ | j ∈ I′}, forming d′ = T ∪ T′.

Attack Objectives on RAG. Compromising both the system's retrieval and grounding capabilities is essential for a successful adversarial attack on an RAG system. Given a set of adversarial documents D′, the optimal adversarial document d∗ ∈ D′ must achieve the following two objectives. First, d∗ should shift the system's attention away from d, ensuring that it no longer appears as the top relevance for q. At the same time, d∗ should distract the LLM from generating the correct answer a, given the adversarial pair (d∗, q). To quantify the effectiveness of the aforementioned goals, we formally define two novel objectives: the Relevance Score Ratio (RSR) for measuring retrieval capability and the Generation Probability Ratio (GPR) for measuring grounding capability. To be specific, the former calculates the ratio of the perturbed document d′ to the original document d in relation to the query q, while the latter does so in relation to the correctly generated answer a. In other words, the RSR quantifies variations in the relevance score determined by the retriever, whereas the GPR assesses changes in the likelihood of generating the correct answer a, as assigned by the LLM. These two metrics are formally represented as follows:

LRSR(d′) = rϕ(d′, q) / rϕ(d, q),  LGPR(d′) = pθ(a | d′, q) / pθ(a | d, q).  (1)

Given the potential for relevance scores to be negative, we have structured the term to guarantee positivity. Note that the optimal adversarial document should be located within the holistic error zone, where both retrieval and grounding errors occur simultaneously. To achieve this, we present a novel adversarial attack strategy, called GARAG, which employs the genetic algorithm NSGA-II (Deb et al., 2002) to target the two objectives, which are not differentiable, simultaneously. Specifically, GARAG iteratively refines a population of adversarial documents, methodically moving them closer to the origin of the search space. Given the star-shaped original document in its clean version (see Figure 2), our goal is to generate noisy versions (adversarial documents), represented as orange-colored and blue-colored dots, and to locate them within the holistic error zone. This process includes exploring the search space to find new adversarial documents and selecting the most effective ones, which is achieved through crossover, mutation, and selection steps.

Initialization. Our attack begins with the initialization step. We first construct the initial population P0, consisting of adversarial documents di′, formalized as P0 = {di′}, i = 1, . . . , S, where S is the total number of documents in the population. The extent of perturbation for each adversarial document di′ is determined by applying a predefined perturbation level prper to the number of tokens N in the original document d.
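The sketch below illustrates the document-generation and initialization steps just described: sample a fraction prper of token positions, skip positions overlapping the answer a, and perturb the selected tokens to form each member of P0. The `perturb` callable stands in for the operation set of f; the snippet is a simplified illustration, not the exact GARAG implementation.

```python
import random
from typing import Callable

def make_adversarial_doc(
    tokens: list[str],
    answer_tokens: set[str],
    perturb: Callable[[str], str],
    pr_per: float = 0.2,
) -> list[str]:
    """Build d' from d: perturb about pr_per * N randomly chosen tokens, never touching the answer."""
    candidates = [i for i, tok in enumerate(tokens) if tok.lower() not in answer_tokens]
    budget = min(int(pr_per * len(tokens)), len(candidates))
    selected = set(random.sample(candidates, budget))  # the index subset I'
    return [perturb(tok) if i in selected else tok for i, tok in enumerate(tokens)]

def init_population(
    tokens: list[str],
    answer_tokens: set[str],
    perturb: Callable[[str], str],
    size: int = 25,
    pr_per: float = 0.2,
) -> list[list[str]]:
    """Initial population P0 of S adversarial documents."""
    return [make_adversarial_doc(tokens, answer_tokens, perturb, pr_per) for _ in range(size)]
```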
The initial (parent) documents are represented as orange-colored dots in the initialization step of the figure on the right in Figure 2.

Figure 2: (Left) The search space formulated by our proposed attack objectives, LRSR and LGPR. (Right) An overview of the iterative process implemented by our proposed method, GARAG.

# Crossover & Mutation

Then, through the crossover and mutation steps, the adversarial documents are generated by balancing the exploitation of existing knowledge within the current population (parent documents) and the exploration of new documents (offspring documents). In detail, the crossover step generates offspring documents by recombining tokens from pairs of parent documents, incorporating their most effective adversarial features. Subsequently, the mutation step introduces new perturbations to some tokens in the offspring, aiming to explore genetic variations that are not present in the parent documents. Formally, the crossover step selects Nparents pairs of parent documents from the population P. Let d0′ and d1′ be the selected parent documents along with their perturbed token sets T0′ and T1′, respectively. Then, swapping tokens perturbed in each parent document generates the offspring documents, excluding those in the shared set T0′ ∩ T1′. The number of swapping tokens is determined by the predefined crossover rate prcross, applied to the number of unique perturbed tokens in each document. The mutation step selects two corresponding subsets of tokens, M from the original token set T and M′ from the perturbed token set T′, ensuring that both subsets are of equal size |M| = |M′|. The size of these subsets is determined by the predefined mutation probability prmut, which is applied to prper · N.
Tokens di ∈ M are altered using the perturbation function f, whereas tokens dj′ ∈ M′ are reverted to their original states dj. Following this, the sets of unperturbed and perturbed tokens, Tnew and T′new, are updated to incorporate these modifications: Tnew = (T \ M) ∪ M′ and T′new = (T′ \ M′) ∪ M. The newly mutated document is composed of the updated sets Tnew and T′new, and the offspring set O is then formed, comprising these mutated documents. The offspring documents are represented by blue-colored dots in the figure on the right in Figure 2.
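A compact sketch of the crossover and mutation updates, written over sets of perturbed token positions. It mirrors the updates Tnew = (T \ M) ∪ M′ and T′new = (T′ \ M′) ∪ M at the level of index sets only, so it is an illustration under simplified assumptions rather than the paper's implementation.

```python
import random

def crossover(perturbed_a: set[int], perturbed_b: set[int], pr_cross: float) -> tuple[set[int], set[int]]:
    """Swap a fraction pr_cross of each parent's unique perturbed positions (shared positions are kept)."""
    only_a, only_b = perturbed_a - perturbed_b, perturbed_b - perturbed_a
    swap_a = set(random.sample(sorted(only_a), int(pr_cross * len(only_a))))
    swap_b = set(random.sample(sorted(only_b), int(pr_cross * len(only_b))))
    return (perturbed_a - swap_a) | swap_b, (perturbed_b - swap_b) | swap_a

def mutate(clean: set[int], perturbed: set[int], pr_mut: float, budget: int) -> tuple[set[int], set[int]]:
    """Move equal-sized subsets in both directions: T_new = (T \\ M) | M', T'_new = (T' \\ M') | M."""
    size = min(int(pr_mut * budget), len(clean), len(perturbed))
    m = set(random.sample(sorted(clean), size))            # clean positions to perturb
    m_prime = set(random.sample(sorted(perturbed), size))  # perturbed positions to revert
    return (clean - m) | m_prime, (perturbed - m_prime) | m
```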
# Selection The remaining step is to select the most optimal adversarial documents from the combined set Pˆ = P ∪ O, which includes both parent and offspring documents. Specifically, each document within Pˆ is evaluated against the two attack objectives, LRSR and LGPR, to assess their effectiveness in the adversarial context. Note that it is crucial to balance these two objectives when generating adversarial documents. Therefore, we incorporate a non-dominated sorting strategy (Deb et al., 2002) to identify the optimal set of documents, known as the Pareto front. In this front, each document is characterized by having all objective values lower than those in any other set, as shown in the right of Figure 2. Then, the documents in the Pareto front will be located in a holistic error zone closer to the origin. Additionally, to help preserve diversity within the document population, we further utilize the crowding distance sorting strategy to identify adversarial documents that possess unique knowledge by measuring how isolated each document is relative to others. Then, the most adversarial document d∗ is selected from a less crowded region of the Pareto front, enhancing the efficiency of our adversarial strategy. Note that this process, including crossover, mutation, and selection steps, continues iteratively until a successful attack is achieved, where the selected adversarial document d∗ prompts an incorrect answer a, as illustrated in the figure on the right in Figure 2. If the process fails to produce a successful attack, it persists through the predefined number of iterations, Niter.
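As an illustration of this selection step, the helper below scores each candidate with the two objective ratios from Eq. (1), keeps the non-dominated (Pareto) subset, and breaks ties with a crude isolation measure in place of the full crowding-distance computation of NSGA-II. The `rel` and `ans_prob` callables are placeholders for the actual retriever scoring and reader likelihood calls.

```python
from typing import Callable

def objective_ratios(
    rel: Callable[[str, str], float],
    ans_prob: Callable[[str, str, str], float],
    d: str, d_adv: str, q: str, a: str,
) -> tuple[float, float]:
    """Return (L_RSR, L_GPR): ratios of relevance score and answer probability for d' vs. d."""
    return rel(d_adv, q) / rel(d, q), ans_prob(a, d_adv, q) / ans_prob(a, d, q)

def pareto_front(scores: list[tuple[float, float]]) -> list[int]:
    """Indices of candidates not dominated on (L_RSR, L_GPR); lower is better on both."""
    front = []
    for i, (ri, gi) in enumerate(scores):
        dominated = any(
            rj <= ri and gj <= gi and (rj < ri or gj < gi)
            for j, (rj, gj) in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def select_candidate(scores: list[tuple[float, float]]) -> int:
    """Prefer the most isolated point on the front (a stand-in for crowding-distance sorting)."""
    front = pareto_front(scores)
    if len(front) == 1:
        return front[0]
    def isolation(i: int) -> float:
        ri, gi = scores[i]
        return min(abs(ri - scores[j][0]) + abs(gi - scores[j][1]) for j in front if j != i)
    return max(front, key=isolation)
```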
Table 1: Results of adversarial attacks using GARAG, averaged across three datasets. Attack Success Ratio (↑) covers ASRR, ASRL, and ASRT; Component Error (↓) covers R.E. and G.E.; End-to-End (↓) covers EM and Acc. The most vulnerable results are in bold.

|Retriever|LLM|ASRR|ASRL|ASRT|R.E.|G.E.|EM|Acc|
|---|---|---|---|---|---|---|---|---|
|DPR|Llama2-7b|**79.2**|90.5|70.1|0.327|0.674|77.1|81.3|
|DPR|Llama2-13b|**78.4**|92.0|70.8|0.308|0.745|81.9|87.3|
|DPR|Vicuna-7b|**88.7**|80.7|69.8|0.384|0.388|57.2|79.3|
|DPR|Vicuna-13b|**88.8**|81.6|70.8|0.375|0.409|58.4|83.2|
|DPR|Mistral-7b|**83.7**|85.5|69.5|0.363|0.520|66.7|96.5|
|Contriever|Llama2-7b|**85.3**|91.0|76.6|0.940|0.674|75.0|79.6|
|Contriever|Llama2-13b|**82.0**|92.0|74.2|0.936|0.740|80.7|87.3|
|Contriever|Vicuna-7b|**92.1**|81.5|73.9|0.948|0.391|55.1|76.9|
|Contriever|Vicuna-13b|**91.3**|83.2|74.7|0.950|0.376|53.5|79.5|
|Contriever|Mistral-7b|**89.2**|86.6|75.9|0.942|0.514|63.1|95.3|
|w/o GARAG| |-|-|-|1.000|1.000|100|100|

Figure 3: (Left & Center) Adversarial attack results depending on the number of iterations Niter, on NQ with Contriever and Llama2-7b. (Right) Distribution of incorrectness among predictions with the Contriever and Llama2-7b depending on LGPR.
# 4 Experimental Setup

In this section, we describe the experimental setup.

# 4.1 Model

Retriever. We use two recent dense retrievers: DPR (Karpukhin et al., 2020), a supervised one trained on query-document pairs, and Contriever (Izacard et al., 2022), an unsupervised one.

Reader. Following concurrent work (Asai et al., 2024; Wang et al., 2024) that utilizes LLMs as readers for the RAG system, with parameters ranging from 7B to 13B, we have selected open-source LLMs of similar capacities: Llama2 (Touvron et al., 2023), Vicuna (Chiang et al., 2023), and Mistral (Jiang et al., 2023). Each model has been either chat-versioned or instruction-tuned. To adapt these models for open-domain QA tasks, we employ a zero-shot prompting template for exact-match QA derived from Wang et al. (2024).

# 4.2 Dataset

We leverage three representative QA datasets: Natural Questions (NQ) (Kwiatkowski et al., 2019), TriviaQA (TQA) (Joshi et al., 2017), and SQuAD (SQD) (Rajpurkar et al., 2016), following the setups of Karpukhin et al. (2020). To assess the robustness of the RAG system, we randomly extract 1,000 instances of the triple (q, d, a). In each triple, q is a question from the datasets, d is a document from the top-100 documents retrieved from the Wikipedia corpus corresponding to q, and a is the answer generated by the LLM, which is considered as correct for the specific question-document pair.

# 4.3 Evaluation Metric

Since we aim to measure how the adversarial documents generated with GARAG attack the RAG system, we incorporate three types of metrics to show 1) the overall effectiveness of the adversarial attacks, 2) the impact of the adversarial samples on each retriever and reader component, and 3) the end-to-end QA performance.

Attack Success Ratio (ASR). Attack Success Ratio (ASR) measures the effectiveness of the adversarial document d′ in disrupting the RAG system compared to the original document d. Specifically, it is quantified by the proportion of adversarial documents that achieve values below 1 in our objective functions. ASRR and ASRL denote the ratios of documents meeting this criterion for the objective functions LRSR and LGPR, respectively, while ASRT denotes the proportion of documents that satisfy both simultaneously.

Component Error (C.E.). To assess the impact of d∗ located in the holistic error zone on each component of RAG, we utilize Retrieval Error (R.E.) and Grounding Error (G.E.).
Specifically, R.E.
measures the average of LRSR values, indicating the relative relevance score compared to the original document. Then, G.E. measures the proportion of predictions that exactly match the actual answers, capturing the grounding capability on noisy documents. Lower values of each metric mean that the corresponding component is more vulnerable to adversarial documents. End-to-End Performance (E2E). To assess how GARAG influences end-to-end performance, we report it with standard QA metrics: Exact Match (EM) and Accuracy (Acc). In cases when the attack fails, we report the scores using the original document d instead of the adversarial one d′.

# Implementation Details

The proposed method, GARAG, was configured with the following hyperparameters: Niter was set to 25, Nparents to 10, and S to 25. prper, prcross, and prmut were set to 0.2, 0.2, and 0.4, respectively. The operations of the perturbation function f in GARAG consist of the inner swap, truncate, keyboard typo, and natural typo, following Eger and Benz (2020). For computing resources, we use A100 GPU clusters.
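For illustration, toy versions of the four character-level operations named above could look as follows; the keyboard-neighbor map and the natural-typo list are tiny illustrative subsets, not the resources used by GARAG.

```python
import random

# Toy character-level perturbations in the spirit of the four operations listed above.
_NEIGHBORS = {"a": "qs", "e": "wr", "i": "uo", "o": "ip", "n": "bm", "t": "ry"}
_NATURAL = {"the": "teh", "and": "adn", "with": "wiht"}

def inner_swap(tok: str) -> str:
    if len(tok) < 4:
        return tok
    i = random.randrange(1, len(tok) - 2)
    return tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:]

def truncate(tok: str) -> str:
    return tok[:-1] if len(tok) > 2 else tok

def keyboard_typo(tok: str) -> str:
    positions = [i for i, c in enumerate(tok.lower()) if c in _NEIGHBORS]
    if not positions:
        return tok
    i = random.choice(positions)
    return tok[:i] + random.choice(_NEIGHBORS[tok[i].lower()]) + tok[i + 1:]

def natural_typo(tok: str) -> str:
    return _NATURAL.get(tok.lower(), tok)

def perturb(tok: str) -> str:
    """f(d_i): apply one randomly chosen low-level operation to a token."""
    return random.choice([inner_swap, truncate, keyboard_typo, natural_typo])(tok)
```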
First, we begin by extracting factual sentences from each news article as evidence. For example, an extracted piece of evidence from an article may state: “Back then, just like today, home prices had boomed for years before Fed officials were ultimately forced to hike interest rates aggressively in an attempt to fight inflation.” Second, we input each evidence piece into GPT-4, prompting it to rephrase the evidence into a claim. This claim is clarified with a disambiguated topic and entity. For instance, GPT-4 might rephrase the aforementioned evidence into: “Federal Reserve officials were forced to aggressively hike interest rates to combat inflation after years of booming home prices”, identifying “Interest rate hikes to combat inflation” as the topic and “Federal Reserve” as the entity. These topics and entities act as bridges for constructing multi-hop queries, known as bridge-topic or bridge-entity. Next, we use GPT-4 to generate specific multi-hop queries related to the same bridge-topic or bridge-entity, accompanied by the correct answers. Lastly, we undertake a validation step to ensure the data quality.

We demonstrate the benchmarking capabilities of MultiHop-RAG using two experiments, utilizing a RAG system implemented with LlamaIndex (Liu, 2022). The first experiment involves a comparison of different embedding models for retrieving relevant evidence for multi-hop queries. In the second experiment, we assess the reasoning and answering abilities of various state-of-the-art LLMs, including GPT-4, GPT-3.5, PaLM, Claude-2, Llama2-70B, and Mixtral-8x7B, for multi-hop queries when retrieved text is provided. The results from both experiments indicate that the current RAG implementations are inadequate for effectively retrieving and answering multi-hop queries. We publicly release the MultiHop-RAG dataset and the implemented RAG system.
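A hedged sketch of the evidence-to-claim step in this construction pipeline is given below; the `llm` callable, the prompt wording, and the JSON fields are assumptions for illustration, not the exact prompts used to build MultiHop-RAG.

```python
import json
from typing import Callable

def evidence_to_claim(llm: Callable[[str], str], evidence: str) -> dict:
    """Rephrase an extracted evidence sentence into a claim with a disambiguated topic and entity.

    `llm` is a placeholder for a GPT-4-style completion call; the prompt and the
    JSON fields are illustrative, not the exact ones used by MultiHop-RAG.
    """
    prompt = (
        "Rephrase the following evidence into a standalone claim and identify a "
        "disambiguated topic and entity. Respond as JSON with keys "
        '"claim", "topic", and "entity".\n\nEvidence: ' + evidence
    )
    return json.loads(llm(prompt))

def group_by_bridge(claims: list[dict]) -> dict:
    """Group claims sharing a bridge topic or entity, the basis for multi-hop query generation."""
    groups: dict[str, list[dict]] = {}
    for claim in claims:
        for key in (claim["topic"], claim["entity"]):
            groups.setdefault(key, []).append(claim)
    return {k: v for k, v in groups.items() if len(v) > 1}
```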
While some models monotonically improve with more provided passages, others may have an early
# Results
In this section, we present our experimental results with an in-depth analysis of the adversarial attack. Main Result. Table 1 shows our main results, averaged over three datasets, using GARAG with three metrics: attack success ratio (ASR), component errors (C.E.), and end-to-end performance (E2E). First, a notable success rate of over 70% across all scenarios indicates that GARAG effectively locates adversarial documents within the holistic error zone by simultaneously considering retrieval and reader errors. This also implies that the RAG system is vulnerable to low-level (yet realistic) perturbations. Additionally, the results indicate that two different retrievers show varying susceptibilities to attacks: Contriever is more vulnerable than DPR. Furthermore, the results reveal that an increase in model size does not necessarily enhance robustness to adversarial attacks, as shown by the minimal differences in ASR between LLMs with 7B and 13B parameters. This suggests that simply increasing model size may not be an optimal solution for addressing the realistic challenges in RAG. Then, how does an optimal adversarial document located in the holistic error zone specifically influence each component within the RAG system? To answer this, we analyze its impact on both the retrieval and reader components by measuring C.E. Interestingly, the results indicate that adversarial documents within the holistic error zone do not affect the retriever and reader components of different models to the same extent. Note that a higher ASR does not necessarily result in lower C.E.
for each component. In detail, although DPR exhibits a significantly lower ASR compared to Contriever, its Retrieval Error (R.E.) remains significantly low, consistently below 0.5. This suggests that adversarial documents targeting DPR are ranked higher in the retrieval corpora, indicating a more effective disruption despite fewer successful attacks. On the other hand, Contriever is more susceptible to attacks, but the impact of these attacks tends to be relatively smaller. Furthermore, although Vicuna appears to be the least vulnerable according to its ASR, it suffers the most significant effects from successful adversarial attacks, as indicated by its Grounding Error (G.E.). Finally, we further analyze the E2E performance to assess how adversarial attacks impact overall QA performance. Based on the EM metric, the performance of RAG systems decreased by an average of 30% across all cases, with a maximum drop of close to 50%. These findings imply that noisy documents with minor errors, frequently found in real-world databases, can pose significant risks to downstream tasks using RAG. Additionally, we find that the robustness of an RAG system varies significantly depending on the specific retriever and LLMs targeted, thus necessitating careful design of both retrievers and readers to address challenges in robust RAG applications effectively. Impact of Hyperparameters. We further explore how varying the perturbation probability pr_per and the number of iterations N_iter affects the attack outcomes. As the left and center figures of Figure 3 illustrate, there is an apparent correlation between the attack success rates for the retriever (ASR_R) and the entire pipeline (ASR_T), while also revealing a significant vulnerability in the reader, as indicated by the high success rate for the LLM (ASR_L). Interestingly, in the left figure of Figure 3, the results indicate that a lower proportion of perturbation within a document leads to a more disruptive impact on the RAG system. This poses a significant concern, given that documents with a few typos are commonly found in the real world. Overall, these findings highlight the critical role of the retriever as a first line of defense in the entire RAG system.
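To make these quantities concrete, the snippet below computes attack success rates from per-example outcomes. The record fields are assumptions for illustration; ASR_R, ASR_L, and ASR_T follow the usage above (success against the retriever, the reader, and the full pipeline, respectively).

```python
def attack_success_rates(results):
    """results: list of dicts with boolean flags
    'retriever_fooled' (the adversarial document outranks the original) and
    'reader_fooled' (the reader no longer produces the correct answer)."""
    n = len(results)
    asr_r = sum(r["retriever_fooled"] for r in results) / n
    asr_l = sum(r["reader_fooled"] for r in results) / n
    asr_t = sum(r["retriever_fooled"] and r["reader_fooled"] for r in results) / n
    return {"ASR_R": 100 * asr_r, "ASR_L": 100 * asr_l, "ASR_T": 100 * asr_t}

# Example: three attacked queries.
print(attack_success_rates([
    {"retriever_fooled": True,  "reader_fooled": True},
    {"retriever_fooled": True,  "reader_fooled": False},
    {"retriever_fooled": False, "reader_fooled": True},
]))
```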
# Table 2: Case study with Contriever and Llama-7b, where perturbed texts are in red and correct answers are in blue.

|Question|Name a food you might eat on thanksgiving.|
|---|---|
|Noisy Document|Thanksgivong ( 8nited States) the Pilgrims who settled at Plymouth Plantation. It is continued in modern times with the Thanksgiving dinner, traditionally featuring turkey , playing a central ro;e in the celebartion of Thanksgiving. In the United States, cetrain kinds of good are traditionally served at Thanksgiving meals. Turkey , usualla roasted and stuffed (but sometimes deep-fried instead), is typically the feat8red!25 item on most Thanksgiving feast tables, so much so that Thanksgiving is also colloquially known as" Turkey Day." In fact, 45 mollion turkeys were consumed on Thanksgiving Day alone in 2015. With 85 percent of Americans partaking in the meal, that’s an estimated 276.|
|Answer|Turkey|
|Prediction|Mashed potatoes|

# Table 3: Results of punctuation insertion, phonetic swap, and visual swap on NQ with Contriever and Llama-7b.

|Attack|ASR_R|ASR_L|ASR_T|R.E.|G.E.|EM|
|---|---|---|---|---|---|---|
|Typo|85.9|91.1|77.5|0.96|63.0|70.1|
|Punc.|93.0|93.7|86.7|0.91|65.8|68.9|
|Phonetic|84.7|92.1|76.8|0.96|62.3|70.0|
|Visual|77.7|90.5|68.8|0.98|61.0|72.5|

# Table 4: Ablation studies assessing the impact of each step within GARAG on NQ with Contriever and Llama-7b.

| |ASR_R|ASR_L|ASR_T|N_iter|
|---|---|---|---|---|
| |85.9|91.1|77.5|14.8|
| |83.0|90.7|73.7|15.6|
| |79.4|89.9|69.5|15.6|

Impact of Lowering L_GPR. Since the value of L_GPR does not directly indicate the likelihood of generating incorrect answers with auto-regressive models, we analyze the correlation between the likelihood of generating incorrect answers and L_GPR (a sketch of this bucketing procedure is given after the ablation study below). As illustrated in the right figure of Figure 3, we categorize predictions into buckets based on their L_GPR ranges and calculate the proportion of incorrect answers within each bucket. The results indicate that a lower L_GPR value is correlated with a higher likelihood of incorrect responses, thus corroborating our objective design.

Other Low-level Perturbations. While focusing on character-level perturbations, we also investigate other low-level yet prevalent disturbances, such as punctuation insertion (Formento et al., 2023) and character swaps based on phonetic or visual similarities (Eger et al., 2019; Le et al., 2022). As shown in Table 3, these perturbations achieve higher success rates and lower E2E performance than typos, with punctuation insertion alone compromising the RAG system in 86% of attacks. These results emphasize the RAG system’s susceptibility to diverse low-level perturbations.

Ablation Study. We conducted ablation studies to see how each step in GARAG contributes to the overall performance. As shown in Table 4, omitting the crossover and mutation steps results in a lower ASR, reducing the attack’s overall effectiveness due to limited exploration of the search space. Furthermore, without the selection step, the lower ASR_R indicates that the optimization becomes unbalanced. Overall, each step in GARAG plays a crucial role in achieving a balanced optimization during attacks targeting both the retriever and reader components.
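As referenced in the L_GPR analysis above, the following is a minimal sketch of the bucketing procedure, assuming per-example records of (L_GPR value, correctness); it is not the paper's analysis code.

```python
from collections import defaultdict

def incorrect_rate_by_bucket(records, bucket_width=0.1):
    """records: list of (l_gpr, is_correct) pairs for attacked examples."""
    totals, wrong = defaultdict(int), defaultdict(int)
    for l_gpr, is_correct in records:
        bucket = round(l_gpr // bucket_width * bucket_width, 2)  # left edge of the bucket
        totals[bucket] += 1
        wrong[bucket] += (not is_correct)
    return {b: wrong[b] / totals[b] for b in sorted(totals)}

# A lower L_GPR bucket should show a higher proportion of incorrect answers.
print(incorrect_rate_by_bucket([(0.32, False), (0.38, False), (0.74, True), (0.81, True)]))
```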
Case Study. We further qualitatively assess the impact of low-level textual perturbations within a document in Table 2. Note that since we ensure that the answer spans remain unperturbed, the readers should ideally generate correct answers. However, interestingly, the LLM fails to identify the correct answer, “Turkey”, which is mentioned four times in the document, and instead generates “Mashed potatoes”, which is never mentioned at all. We include more diverse cases in Table 6.
# Conclusion
In this work, we highlighted the importance of assessing the overall robustness of the retriever and reader components within the RAG system, particularly against noisy documents containing minor typos that are common in real-world databases. Specifically, we proposed two objectives to evaluate the resilience of each component, focusing on their sequential dependencies. Furthermore, to simulate real-world noise with low-level perturbations, we introduced a novel adversarial attack method, GARAG, which incorporates a genetic algorithm. Our findings indicate that noisy documents critically hurt the RAG system, significantly degrading its performance. Although the retriever serves as a protective barrier for the reader, it still remains susceptible to minor disruptions. Our GARAG shows promise as an adversarial attack strategy for assessing the holistic robustness of RAG systems against various low-level perturbations.
# Acknowledgement

# Limitation

In this work, we explored the robustness of the RAG system using various recent open-source LLMs of different sizes, which are widely used as reader components in this system. However, due to our limited academic budget, we could not include much larger black-box LLMs such as the GPT series models, which have more than a hundred billion parameters. We believe that exploring the robustness of these LLMs as reader components would be a valuable line of future work. Furthermore, GARAG aims for the optimal adversarial document to be located within the holistic error zone by simultaneously considering both retrieval and grounding errors. However, we would like to note that even though the adversarial document is located within the holistic error zone, this does not necessarily mean that the reader will always generate incorrect answers for every query, due to the auto-regressive way in which reader models generate tokens. Nevertheless, as shown in the right figure of Figure 3 and discussed in its analysis, we would like to emphasize that there is a clear correlation: a lower L_GPR value is associated with a higher likelihood of incorrect responses.

# Ethics Statement

We designed a novel attack strategy for the purpose of building robust and safe RAG systems when deployed in the real world. However, given the potential for malicious users to exploit our GARAG and deliberately attack the system, it is crucial to consider these scenarios. Therefore, to prevent such incidents, we also present a defense strategy, detailed in Figure 4 and its analysis. Additionally, we believe that developing a range of defense strategies remains a critical area for future work.

# References

# Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2890–2896. Association for Computational Linguistics.
# Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2024. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations.
# Jinheon Baek, Soyeong Jeong, Minki Kang, Jong C. Park, and Sung Ju Hwang. 2023. Knowledge-augmented language model verification. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 1720–1736. Association for Computational Linguistics.
# Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
# Harrison Chase. 2022. LangChain.
# Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2024. Benchmarking large language models in retrieval-augmented generation. In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, pages 17754–17762. AAAI Press.
# Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.
# Sukmin Cho, Jeongyeon Seo, Soyeong Jeong, and Jong C. Park. 2023. Improving zero-shot reader by reducing distractions from irrelevant documents in open-domain question answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 3145–3157. Association for Computational Linguistics.
# Florin Cuconasu, Giovanni Trappolini, Federico Siciliano, Simone Filice, Cesare Campagnano, Yoelle Maarek, Nicola Tonellotto, and Fabrizio Silvestri. 2024. The power of noise: Redefining retrieval for RAG systems. arXiv preprint arXiv:2401.14887, abs/2401.14887.
# Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and T. Meyarivan. 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput., 6(2):182–197.
# Mohammad Dehghan, Dhruv Kumar, and Lukasz Golab 2022. GRS: combining generation and revision in unsupervised sentence simplification. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 949–960. Association for Computational Linguistics. # Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou 2018. Hotflip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 31–36. Association for Computational Linguistics. # Steffen Eger and Yannik Benz 2020. From hero to zéroe: A benchmark of low-level adversarial attacks. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, AACL/IJCNLP 2020, Suzhou, China, December 4-7, 2020, pages 786–803. Association for Computational Linguistics. # Steffen Eger, Gözde Gül Sahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych 2019. Text processing like humans do: Visually attacking and shielding NLP systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1634–1647. Association for Computational Linguistics. # Manaal Faruqui, Ellie Pavlick, Ian Tenney, and Dipanjan Das 2018. Wikiatomicedits: A multilingual corpus of wikipedia edits for modeling language and discourse. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 305–315. Association for Computational Linguistics. # Brian Formento, Chuan-Sheng Foo, Anh Tuan Luu, and See-Kiong Ng 2023. Using punctuation as an adversarial attack on deep learning-based NLP systems: An empirical study. In Findings of the Association for Computational Linguistics: EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 1–34. Association for Computational Linguistics. # Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave 2022. Unsupervised dense information retrieval with contrastive learning. Trans.
Mach. Learn. Res., 2022.

# Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, and Jong C. Park 2024. Adaptive-RAG: Learning to adapt retrieval-augmented large language models through question complexity. In 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
# Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018–8025. AAAI Press.
# Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1601–1611. Association for Computational Linguistics.
# Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6769–6781. Association for Computational Linguistics.
# Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A. Smith, Yejin Choi, and Kentaro Inui 2023. Realtime QA: what’s the answer right now? In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.
# Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov 2019. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452–466.
# Context Utilization Behaviors (§6)
We focus on varying the choice of reader and the slice of instances to examine how different readers use context when the context quality varies. We examine the impact of context quality by analyzing slices of data where the top-k retrieved passages include gold passages and slices where they do not. As a result of this analysis, one can better understand how sensitive a reader is to positive context and how robust it is against incorrectly retrieved passages. We also investigate when models benefit from context and when models may be harmed by it.

# Influence of Retriever Quality (§7)
We focus on varying the retriever and data domain, observing how well retrievers perform based on the nature of the questions and how sensitive readers are to the quality of the retrieved passages. As reader models may have varied sensitivity to retrieval quality, one could select more appropriate models given the question characteristics and retrieval performance.

# Implementation Details
For all experiments, we use the following prompt:

|Instruction:|Give simple short one phrase answers for the questions based on the context|
|---|---|
|Context:|[passage_1, passage_2, · · · , passage_k]|
|Question:|[the question of the current example]|
|Answer:| |

We truncate the Context to make sure the question that follows it is still included in the prompt. Since performing generation with top-k passages for every k ∈ {1, 2, · · · , 30} would be computationally demanding, we sample k ∈ {1, 2, 3, 5, 10, 20, 30} to represent the general trend. More model implementation details can be found in §A.
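A rough sketch of how such a prompt could be assembled is shown below; the character-based truncation budget is an assumption, since the exact truncation procedure is only described at a high level.

```python
def build_prompt(passages, question, max_context_chars=8000):
    instruction = ("Instruction: Give simple short one phrase answers for the "
                   "questions based on the context")
    context = " ".join(passages)
    # Truncate the context, never the question, so the question always fits.
    if len(context) > max_context_chars:
        context = context[:max_context_chars]
    return f"{instruction}\nContext: {context}\nQuestion: {question}\nAnswer:"

# Sampled k values used to trace the trend over top-k passages.
K_VALUES = [1, 2, 3, 5, 10, 20, 30]
prompt = build_prompt(["passage one ...", "passage two ..."], "Who wrote Hamlet?")
print(prompt)
```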
# Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115, abs/2203.05115.
# Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, and Dongwon Lee 2022. Perturbations in the wild: Leveraging
# Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang 2016. Crafting adversarial input sequences for recurrent neural networks. In 2016 IEEE Military Communications Conference, MILCOM 2016, Baltimore, MD, USA, November 1-3, 2016, pages 49–54. IEEE.
# Thai Le, Yiran Ye, Yifan Hu, and Dongwon Lee 2023. Cryptext: Database and interactive toolkit of human-written text perturbations in the wild. In 39th IEEE International Conference on Data Engineering, ICDE 2023, Anaheim, CA, USA, April 3-7, 2023, pages 3639–3642. IEEE.
# Hyunji Lee, Se June Joo, Chaeeun Kim, Joel Jang, Doyoung Kim, Kyoung-Woon On, and Minjoon Seo 2023. How well do large language models truly ground? arXiv preprint arXiv:2311.09069, abs/2311.09069.
# Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
# Junyi Li, Xiaoxue Cheng, Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen 2023a. Halueval: A large-scale hallucination evaluation benchmark for large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 6449–6464. Association for Computational Linguistics.
# Xinzhe Li, Ming Liu, Shang Gao, and Wray L. Buntine 2023b. A survey on out-of-distribution evaluation of neural NLP models. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China, pages 6683–6691. ijcai.org.
# Jerry Liu 2022. LlamaIndex.
# Quanyu Long, Yue Deng, Leilei Gan, Wenya Wang, and Sinno Jialin Pan 2024. Backdoor attacks on dense passage retrievers for disseminating misinformation. arXiv preprint arXiv:2402.13532, abs/2402.13532. # Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 9802–9822. Association for Computational Linguistics. # OpenAI 2023a. Chatgpt plugins. # OpenAI 2023b. GPT-4 technical report. arXiv preprint arXiv:2303.08774, abs/2303.08774.
# Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.
# Jin Yong Yoo and Yanjun Qi. 2021. Towards improving adversarial training of NLP models. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 945–956. Association for Computational Linguistics.
# Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan Berant. 2024. Making retrieval-augmented language models robust to irrelevant context. In The Twelfth International Conference on Learning Representations.
# Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6066–6080. Association for Computational Linguistics.
# Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, and Chenliang Li. 2020. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Trans.
Intell. Syst. Technol., 11(3):24:1–24:41. # Zexuan Zhong, Ziqing Huang, Alexander Wettig, and Danqi Chen. 2023. Poisoning retrieval corpora by injecting adversarial passages. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023. Pages 13764–13775. Association for Computational Linguistics. # Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, and Xing Xie. 2023. Promptbench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528, abs/2306.04528. # Wei Zou, Runpeng Geng, Binghui Wang, and Jinyuan Jia. 2024. Poisonedrag: Knowledge poisoning attacks to retrieval-augmented generation of large language models. arXiv preprint arXiv:2402.07867, abs/2402.07867. # Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, and Bo Li. 2023. Decodingtrust: A comprehensive assessment of trustworthiness in GPT models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023. # Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, and Bo Li. 2021. Adversarial GLUE: A multi-task benchmark for robustness evaluation of language models. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. # Yuhao Wang, Ruiyang Ren, Junyi Li, Wayne Xin Zhao, Jing Liu, and Ji-Rong Wen. 2024. REAR: A relevance-aware retrieval-augmented framework for open-domain question answering. arXiv preprint arXiv:2402.17497, abs/2402.17497.
# Phoenix Neale Williams and Ke Li. 2023. Black-box sparse adversarial attack via multi-objective optimization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pages 12291–12301. IEEE.
# Implementation Detail

Algorithm 1: Genetic Attack on RAG
Input: Query q, Document d, Number of iterations N_iter, Number of parents N_parent, Population size S, Perturbation rate pr_per, Crossover rate pr_cross, Mutation rate pr_mut

The operations of the transformation function f in our work are as follows:
- Inner-Shuffle: Randomly shuffles the letters within a subsequence of a word token, limited to words with more than three characters.
- Truncate: Removes a random number of letters from either the beginning or end of a word token. This operation is restricted to words with more than three characters, with a maximum of three characters removed.
- Keyboard Typo: Substitutes a letter with its adjacent counterpart on an English keyboard layout to simulate human typing errors. Only one character per word is replaced.
- Natural Typo: Replaces letters based on common human errors derived from Wikipedia’s edit history. This operation encompasses a variety of error types, including phonetic errors, omissions, morphological errors, and their combinations.

We also explore other types of low-level perturbations, namely punctuation insertion and character swaps based on phonetic or visual similarity; a code sketch of a few of these character-level operations is given after the prompt template below. The operations of these low-level perturbations are as follows:
- Punctuation Insertion: Inserts random punctuation marks at the beginning or end of a word token, with a maximum of three identical marks per word. The punctuation characters used are " ,.’!?; ".
- Phonetic Similarity: Replaces characters in a word with characters that are phonetically similar to the originals.
- Visual Similarity: Replaces characters in a word with characters that are visually similar to the originals.

# Process of GARAG
The detailed process of GARAG is presented in Algorithm 1. It begins with the initialization of the adversarial document population, which then repeats cycles of crossover, mutation, and selection.

# Template
We adopt the zero-shot prompting template optimal for exact QA tasks, derived from Wang et al. (2024), for all LLMs used in our experiments.

QA Template for LLMs
[INST] Documents: {Document} Answer the following question with a very short phrase, such as "1998", "May 16th, 1931", or "James Bond", to meet the criteria of exact match datasets. Question: {Question} [/INST] Answer:
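As referenced above, the sketch below illustrates a few of the character-level operations (inner-shuffle, keyboard typo, and punctuation insertion). The keyboard-adjacency map is abbreviated and purely illustrative; none of this is the authors' released implementation.

```python
import random

# Abbreviated keyboard-neighbour map, for illustration only.
KEYBOARD_NEIGHBOURS = {"a": "qwsz", "e": "wsdr", "o": "iklp", "t": "rfgy"}
PUNCT = list(",.'!?;")

def inner_shuffle(word: str) -> str:
    """Shuffle the interior letters of words longer than three characters."""
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def keyboard_typo(word: str) -> str:
    """Replace one character with an adjacent key, if a mapping exists."""
    positions = [i for i, ch in enumerate(word.lower()) if ch in KEYBOARD_NEIGHBOURS]
    if not positions:
        return word
    i = random.choice(positions)
    repl = random.choice(KEYBOARD_NEIGHBOURS[word[i].lower()])
    return word[:i] + repl + word[i + 1:]

def punctuation_insert(word: str) -> str:
    """Insert up to three identical punctuation marks at the start or end of a word."""
    mark = random.choice(PUNCT) * random.randint(1, 3)
    return mark + word if random.random() < 0.5 else word + mark

print(inner_shuffle("thanksgiving"), keyboard_typo("turkey"), punctuation_insert("dinner"))
```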
# Experimenting with RAG Systems
Below, we explain the representative retrievers and readers we analyze in our experiments.

# Retriever
For retrieval, we employ two contrasting approaches: (1) sparse retrieval, based on lexical information, and (2) dense retrieval, based on neural models.

|BM25|BM25, a probabilistic retrieval model formulated by Robertson et al. (2009), leverages TF-IDF (Chen et al., 2017b) principles to estimate passage relevance via term weighting and passage length normalization. BM25 uses term-matching and hence experiences fewer out-of-domain failures when facing special-domain vocabulary, such as legal or medical texts.|
|---|---|
|ColBERT|One of the best-performing neural-based retrievers is ColBERT (Khattab and Zaharia, 2020), i.e., contextualized late interaction over BERT. The transformer-based, contextualized embedding of ColBERT makes it more proficient than BM25 at identifying semantic similarities between queries and passages beyond lexical matching.|

# Reader
We compare four top-performing, open-source reader models of varied architectures: encoder-decoder models from the FLAN family, and decoder-only models from the LLAMA2 family.

|FLAN Models|The FLAN models are encoder-decoder models. We use FLAN-T5-XXL (Chung et al., 2022) with 11B parameters and FLAN-UL2 (Tay et al., 2023) with 20B parameters, both with a context length of 2k tokens. FLAN-T5-XXL is an instruction-tuned variant of the T5 model (Raffel et al., 2023). FLAN-UL2 (Tay et al., 2023) is an upgraded T5-based model that is trained with the Unifying Language Learning Paradigm, a pretraining process that uses a mixture-of-denoisers and mode switching to improve the model’s adaptability to different scenarios.|
|---|---|
|LLAMA Models|The LLAMA models are decoder-only models. For the LLAMA family (Touvron et al., 2023a), we adopt the latest LLAMA models (Touvron et al., 2023b) with 7B and 70B parameters, both having a context length of 4k tokens. A key feature is that these LLAMA models are trained with reinforcement learning from human feedback (RLHF).|

# Datasets and Evaluation Metrics

# Dataset
We adopt three DBQA datasets from various domains (Wikipedia, biomedical) and of various types (single-hop, multi-hop, list, yes/no).

Natural Questions. We choose the Natural Questions (NQ) dataset (Kwiatkowski et al., 2019) to examine model performance on the most generic
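As an aside on the sparse retriever described above, a BM25 baseline of this kind can be sketched with the `rank_bm25` package (an assumed convenience choice; the experiments' actual BM25 implementation is not specified here).

```python
from rank_bm25 import BM25Okapi  # pip install rank_bm25

corpus = [
    "Turkey is typically the featured item on most Thanksgiving feast tables.",
    "ColBERT performs late interaction over contextualized BERT embeddings.",
    "BM25 ranks passages using term frequency and passage length normalization.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = "how does bm25 rank passages".split()
scores = bm25.get_scores(query)                   # one lexical relevance score per passage
top_passage = bm25.get_top_n(query, corpus, n=1)  # convenience helper for top-k retrieval
print(scores, top_passage)
```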
# Table 5: Adversarial attack results of GARAG on three QA datasets across different retrievers and LLMs. For each dataset, the columns are ASR_R, ASR_L, ASR_T (ASR, ↑), R.E., G.E. (C.E., ↓), and EM, Acc (E2E, ↓).

NQ
|Retriever|LLM|ASR_R|ASR_L|ASR_T|R.E.|G.E.|EM|Acc|
|---|---|---|---|---|---|---|---|---|
|DPR|Llama2-7b|75.4|89.8|66.0|0.387|0.689|76.8|80.6|
|DPR|Llama2-13b|71.3|91.7|63.5|0.357|0.695|82.8|88.2|
|DPR|Vicuna-7b|83.0|81.6|65.1|0.423|0.786|62.0|79.2|
|DPR|Vicuna-13b|82.8|80.9|64.4|0.423|0.77|58.5|83.3|
|DPR|Mistral-7b|78.5|85.9|65.1|0.397|0.8|69.1|96.5|
|Contriever|Llama2-7b|85.9|91.1|77.5|0.941|0.639|70.1|74.7|
|Contriever|Llama2-13b|78.9|91.2|70.5|0.939|0.647|78.7|85.7|
|Contriever|Vicuna-7b|90.8|81.3|72.4|0.949|0.738|52.2|72.5|
|Contriever|Vicuna-13b|87.5|85.5|73.3|0.94|0.735|63.9|95.4|
|Contriever|Mistral-7b|87.5|85.5|73.3|0.94|0.735|63.9|95.4|

TriviaQA
|Retriever|LLM|ASR_R|ASR_L|ASR_T|R.E.|G.E.|EM|Acc|
|---|---|---|---|---|---|---|---|---|
|DPR|Llama2-7b|78.2|91.7|70.2|0.312|0.730|81.6|85.3|
|DPR|Llama2-13b|83.9|92.0|76.1|0.266|0.630|76.7|83.3|
|DPR|Vicuna-7b|91.1|79.5|70.8|0.391|0.775|58.4|81.7|
|DPR|Vicuna-13b|91.8|83.5|75.4|0.367|0.779|59.2|85.7|
|DPR|Mistral-7b|84.7|84.9|69.8|0.352|0.811|66.5|97.7|
|Contriever|Llama2-7b|84.9|90.7|76.0|0.94|0.725|82.0|86.9|
|Contriever|Llama2-13b|81.0|91.9|72.9|0.932|0.723|86.2|91.7|
|Contriever|Vicuna-7b|93.0|80.8|74.0|0.946|0.764|60.3|81.5|
|Contriever|Vicuna-13b|88.8|86.4|75.2|0.944|0.796|66.2|97.8|
|Contriever|Mistral-7b|88.8|86.4|75.2|0.944|0.796|66.2|97.8|

SQuAD
|Retriever|LLM|ASR_R|ASR_L|ASR_T|R.E.|G.E.|EM|Acc|
|---|---|---|---|---|---|---|---|---|
|DPR|Llama2-7b|84.1|90.1|74.2|0.280|0.637|73.0|78.|
|DPR|Llama2-13b|80.0|92.4|72.7|0.299|0.722|86.3|90.5|
|DPR|Vicuna-7b|92.0|81.1|73.4|0.338|0.742|51.2|76.9|
|DPR|Vicuna-13b|91.7|80.5|72.5|0.336|0.722|57.4|80.5|
|DPR|Mistral-7b|87.8|85.7|73.5|0.34|0.701|64.4|95.2|
|Contriever|Llama2-7b|85.2|91.2|76.4|0.94|0.605|72.9|77.2|
|Contriever|Llama2-13b|86.1|93.0|79.1|0.938|0.633|77.2|84.5|
|Contriever|Vicuna-7b|92.6|82.5|75.2|0.948|0.712|52.7|76.7|
|Contriever|Vicuna-13b|91.2|88.0|79.3|0.942|0.704|59.2|92.6|
|Contriever|Mistral-7b|91.2|88.0|79.3|0.942|0.704|59.2|92.6|

Figure 4: Distribution of grammatically correct documents among d∗ on NQ with Contriever and Llama2-7b.

Figure 5: Correlation matrices of predictions from the adversarial document d∗ across EM and Acc with Contriever.
# Additional Results

# B.1 Overall Result
Table 5 shows the overall results across three QA datasets, two retrievers, and five LLMs.

# B.2 Defense Strategy
Various defense mechanisms against adversarial attacks in NLP have been proposed. Adversarial training, i.e., fine-tuning the model on adversarial samples, is a popular approach (Yoo and Qi, 2021). However, this strategy is not practically viable for RAG systems, given the prohibitive training costs associated with models exceeding a billion parameters. Alternatively, a grammar checker is an effective defense against low-level perturbations within documents (Formento et al., 2023). Our analysis, depicted in Figure 4, compares the grammatical correctness of original and adversarial documents via the grammar checker model presented in Dehghan et al. (2022). It reveals that approximately 50% of the original samples contain grammatical errors. Also, even within the adversarial set, about 25% of the samples maintain grammatical correctness at a low perturbation level. This observation highlights a critical limitation: relying solely on a grammar checker would result in dismissing many original documents and accepting some adversarial ones. Consequently, this underscores the limitations of grammar checkers as a standalone defense and points to the need for more sophisticated and tailored defense strategies.

# B.3 Analysis on Prediction
We analyze the discrepancy between EM and Acc through the response patterns of diverse LLMs when affected by adversarial documents, categorizing results based on EM and Acc values in Figure 5. Specifically, EM strictly assesses whether the prediction exactly matches the correct answer, while Acc assesses only whether the answer span is included within the predicted response. When EM is 0 and Acc is 1 (i.e., (0,1)), the answer span is included along with extraneous tokens. By contrast, when EM is 0 and Acc is 0 (i.e., (0,0)), the answer span is entirely incorrect, indicating a hallucinated prediction. Under this categorization, Llama2 demonstrates a higher tendency to generate responses that exactly match the annotated samples, as indicated by the high proportion of (1,1). However, given its lower proportion of (0,1) results, it frequently produces entirely incorrect answers when exposed to adversarial conditions. By contrast, Mistral, while generating fewer exact matches compared to Llama2, more consistently includes the correct answer span in its responses. These findings are crucial for understanding the behavior of different models in real-world scenarios, particularly in how they handle documents containing noise or adversarial modifications.
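A minimal sketch of this EM/Acc categorization, assuming simple lowercase and whitespace normalization rather than the paper's exact scoring scripts:

```python
def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def em_acc(prediction: str, answer: str) -> tuple[int, int]:
    pred, gold = normalize(prediction), normalize(answer)
    em = int(pred == gold)    # exact match
    acc = int(gold in pred)   # answer span contained in the prediction
    return em, acc

# (1,1): exact match; (0,1): answer present with extra tokens; (0,0): hallucinated or wrong.
print(em_acc("Turkey", "Turkey"))                    # (1, 1)
print(em_acc("It is turkey, of course.", "Turkey"))  # (0, 1)
print(em_acc("Mashed potatoes", "Turkey"))           # (0, 0)
```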
Therefore, these results show that the patterns of LLMs vary under the influence of noisy documents.

Figure 6: The process of population refinement by GARAG on NQ with Contriever and Llama-7b.

# Case Study
We conducted case studies with diverse LLMs, including Llama-7b, Vicuna-7b, and Mistral-7b, as shown in Table 6.
In all these studies, while the answer tokens were not perturbed—allowing for the possibility of grounding correct information—the LLMs typically failed to locate the correct knowledge within the document. This often resulted in incorrect predictions or even hallucinations, where the answer was not just wrong but absent from the document. However, there was an exception with Mistral-7b, which generated the correct answer and additional explanatory text. While this prediction did not meet the Exact Match (EM) metric, it was semantically correct. Additionally, we provide a detailed overview of how the population is refined through the iterative process, as illustrated in Figure 6.
|Question|Noisy Document|Answer|Prediction| |---|---|---|---| |Which site of an enzyme is called allosteric site?|Alosteric enzyme Long-range allostery is especially important in cell signaling. Allosteric regulation is also particularly important in the cell’s ability to adjust enzyme activity. The term "allostery" comes from the Greek "allos", "other," and "stereos", "solid (object)." This is in reference to the fact that the regulatory site of an allosteric protein is physically distinct from its active site. The protein catalyst (enzyme) may be part of a multi-subunit complex, and/or may transiently or permanently associate with a Cofactor (e.g. adenosine triphosphate). Catalysis of biochemical reactions is vital due to the very low reaction rates of the uncatalyzed reactions.|Regulatory site|Active site| |Who did Cora marry in once upon a time?|The Miller´s Daughter (Once Upon a Time) to the King and accepts Henry ’s resultant marriage proposal. The day before her wedding, Cora questions her plans; she is unlikely to become queen as Henry is fifth in line to the throne, while Rumplestiltskin, with whom she has been having an affair, offers her love. They agree to amend the contract so Cora owes Rumplestiltskin "his" child. He also agrees to teach her how to take a heart, so that she can kill King Xavier.
That night, she confronts the king. He reveals that he knows of her relationship with Rumplestiltskin; telling her that "love is weakness," he says|Henry|Rumplestiltskin| |What is the 3rd largest state in USA?|Western United States LGBT community, and Oakland, California has a large percentage of residents being African-American, as well as Long Beach, California which also has a large Black community. The state of Utah has a Mormon majority (estimated at 62.4% in 2004), while some cities like Albuquerque, New Mexico; Billings, Montana; Spokane, Washington; and Tucson, Arizona are located near Indian Reservations. In remote areas there are settlements of Alaskan Natives and Native Hawaiians. Facing both the Pacific Ocean and the Mexican border, the West has been shaped by a variety of ethnic groups. Hawaii is the only state in the union in which|California, Alaska|Oregon| |When did the movie Peter Pan come out?|Peter Pan as an adult by Robin Williams, with blue eyes and dark brown hair; in flashbacks to him in his youth, his hair is light brown. In this film his ears appear pointy only when he is Peter Pan, not as Peter Banning. His Pan attire resembles the Disney outfit (minus the cap). In the live-action 2003 "Peter Pan" film, he is portrayed by Jeremy Sumpter, who has blond hair and blue-green eyes. His outfit is made of leaves and vines.
J.M. Barrie created his character based on his older brother, David, who died in an ice-skating accident the day before|25 December 2003|1998| |Make it or break it who goes to the Olympics?|Make It or Break It A new gymnast, Max (Josh Bowman), comes to The Rock, attracting the attention of Lauren and Payson. Though Max seems more interested in Payson, she is more focused on her dream. Lauren tells Payson that Max is her new boyfriend; Max denies this. Max, Austin, and Payson hope Kaylie train in secret, as they try to convince Sasha that Kaylie is ready to compete at Worlds. In the week of Emily’s departure, Kelly Parker becomes closer to Kaylie, despite the scheming of her stage mother, Sheila (Kathy Najimy). Kaylie offers her comfort as Kelly’s mom is only supportive of Kelly’s|Jordan, Payson, Lauren, Colleen, Kaylie|Kaylie goes to the Olympics| |When did Michael Jordan get his last ring?|History of basketball A large part of this is due to the transcendent stars that have played the game through the years. It was because of the play of Michael Jordan that basketball started to reach international audiences, especially on the 1992 United States men’s Olympic basketball team, known as the Dream Team. After his final championship and second retirement in 1998, there was a void as in who would be the face of basketball. Soon after with the help of Shaquille O’Neal, Kobe Bryant would go on to win three straight championships from 2000–2002 with the Los Angeles Lakers, helping make basketball|1998|2002. (This is based on the context of the given document stating that Kobe Bryant and Shaquille O’Neal)|
# From RAG to QA-RAG: Integrating Generative AI for Pharmaceutical Regulatory Compliance Process

Jaewoong Kim (Department of Applied Data Science, Sungkyunkwan University; jwoongkim11@g.skku.edu) and Moohong Min (Social Innovation Convergence Program, University College, Sungkyunkwan University; iceo@g.skku.edu)

# Abstract
Regulatory compliance in the pharmaceutical industry entails navigating through complex and voluminous guidelines, often requiring significant human resources. To address these challenges, our study introduces a chatbot model that utilizes generative AI and the Retrieval Augmented Generation (RAG) method. This chatbot is designed to search for guideline documents relevant to the user inquiries and provide answers based on the retrieved guidelines. Recognizing the inherent need for high reliability in this domain, we propose the Question and Answer Retrieval Augmented Generation (QA-RAG) model. In comparative experiments, the QA-RAG model demonstrated a significant improvement in accuracy, outperforming all other baselines including conventional RAG methods. This paper details QA-RAG’s structure and performance evaluation, emphasizing its potential for the regulatory compliance domain in the pharmaceutical industry and beyond. We have made our work publicly available for further research and development.
# 1 Introduction

# 1.1 The Advancement of Chatbots
Recent advancements in Generative AI have significantly enhanced the capabilities of chatbots. The industrial application of these chatbots, powered by Generative AI, is being explored across various sectors [Bahrini et al., 2023; Castelvecchi, 2023; Badini et al., 2023], with the pharmaceutical industry being a notable area of focus. In the realm of drug discovery, recent studies have shown that chatbots powered by Generative AI can play a significant role in advancing drug discovery [Wang et al., 2023b; Savage, 2023; Bran et al., 2023]. Such advancements not only streamline the discovery process but also pave the way for chatbots to suggest novel research ideas or methodologies, enhancing the collaborative aspect of research. Focusing on healthcare, chatbots are proving to be particularly effective in offering personalized support that can lead to better health outcomes and more effective management of treatments [Ogilvie et al., 2022; Abbasian et al., 2023]. These chatbots can provide timely medication reminders, relay information about potential side effects, and even assist in scheduling physician consultations.

# 1.2 The Need for a Chatbot for Pharmaceutical Regulatory Guidance
Another crucial domain where Generative AI can be harnessed in the pharmaceutical industry is ensuring compliance with regulatory guidelines. Navigating the complex and extensive guidelines provided by agencies like the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) is often a daunting and time-consuming task for industry players.