NQ The Natural Questions (NQ) dataset provides open-domain, single-hop questions; NQ questions are real user search queries issued to Google. We adopt the KILT version (Petroni et al., 2021) of the dataset, which provides, for each example, at least one gold relevant passage and one short answer.

HotpotQA We choose HotpotQA (Yang et al., 2018), which provides challenging multi-hop questions: each question requires reasoning over at least two passages to answer. While remaining in the same Wikipedia domain as the NQ dataset, HotpotQA enables comparisons of model reasoning over multiple pieces of evidence.

BioASQ We choose BioASQ's Task 11B (Krithara et al., 2023), with biomedical questions, as a representative of special-domain questions. Our evaluation dataset is a compilation of the BioASQ Task 11B training and golden enriched sets. BioASQ also presents challenging question types, such as list and yes/no questions.

For the open-domain NQ and HotpotQA datasets, we retrieve passages from the Wikipedia corpus provided by the KILT benchmark (Petroni et al., 2021), with a total of 111M passages. For BioASQ, we use the PubMed Annual Baseline Repository for 2023 (National Library of Medicine, 2023), with a total of 58M passages, each of which is either a title or a passage of a PubMed paper.

# Metrics
Below we describe the metric definition for each instance.
The sheer volume of guidelines, combined with their intricate details, can make it challenging for companies to quickly find and apply the relevant information. This often results in increased costs, as teams spend valuable time navigating the vast repository of guidelines. A recent study highlighted the financial impact of compliance with regulatory guidelines [Crudeli, 2020]: compliance efforts can consume up to 25% of a medium or large pharmaceutical manufacturing site's operational budget. In light of these challenges, the pharmaceutical industry requires a more efficient method for navigating and interpreting regulatory guidelines.

Large language models (LLMs) can contribute to solving this problem. However, despite their extensive pre-training, LLMs often encounter inherent limitations in accessing knowledge that was not included in their initial training data. In pharmaceutical regulatory compliance, a field characterized by its highly specialized and detailed nature, such domain-specific knowledge has clearly not been fully included in the training material. As a result, LLMs are likely to be ill-equipped to answer questions in this field accurately. The Retrieval-Augmented Generation (RAG) model stands out as a bridge across this gap: it not only utilizes the innate knowledge of these models but also fetches additional information from external sources to generate responses. The RAG framework, as illustrated in the works of [Wen et al., 2023] and [Yang et al., 2023], demonstrates a sophisticated integration of expansive background documents with answers, ensuring comprehensive and accurate responses to queries. These studies highlight the versatility of RAG in diverse applications, from complex story generation to theorem proving. Furthermore, evidence has shown that RAG models excel over typical seq2seq models and certain retrieve-and-extract architectures, particularly in knowledge-dense NLP tasks [Lewis et al., 2020]. Despite these advancements, the accuracy of conventional RAG methods may fall short in the regulatory compliance domain, where domain-specific and highly specialized information is required. Hence, we introduce the Question and Answer Retrieval Augmented Generation (QA-RAG). Tailored for highly domain-specific sectors that require professional knowledge, the QA-RAG model precisely aligns regulatory guidelines with practical implementation, streamlining compliance in the pharmaceutical industry.

# Method
In this chapter, we present the QA-RAG model in detail. QA-RAG is designed specifically for highly domain-specific areas such as pharmaceutical regulatory compliance. Its purpose is to answer users' queries about the guidelines with remarkable accuracy.

# Overview of the QA-RAG Model
Figure 1 illustrates the overall structure of the QA-RAG model. In contrast to conventional RAG, the QA-RAG system utilizes the answer from a fine-tuned LLM agent, with additional support from the query. Half of the documents are sourced through the answer provided by the fine-tuned LLM agent, which is adept at generating contextually rich and accurate responses to the user's question; the other half of the document set is acquired using the original query. This method not only broadens the scope of the search but also captures a wider array of potentially relevant information. After obtaining documents through both the fine-tuned agent's answer and the query, the system applies a reranking process: it evaluates the relevance scores of all retrieved documents against the question and retains only those with the highest scores. Each part is described below.

# Document preprocessing & similarity search
For document retrieval, sparse retrieval methods such as BM25 [Robertson and Zaragoza, 2009; Trotman et al., 2014] have long been prevalent owing to their straightforward keyword-matching approach. However, they are limited by their inability to capture the deeper semantic meaning of text. Dense retrieval, on the other hand, has shown many advantages over sparse retrieval [Lee et al., 2019; Karpukhin et al., 2020; Li et al., 2021]: it goes beyond mere keyword matching by generating vector representations of documents and queries, which captures deep semantic meaning. This is crucial in fields requiring high accuracy and contextual understanding, where the relevance of documents cannot be determined by keyword frequency alone. Given these advantages, dense retrieval was selected for our model.

Document preprocessing We compiled 1,263 final and valid versions of FDA (Food and Drug Administration) guideline documents concerning the pharmaceutical industry, along with 141 ICH (International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use) guideline documents, for a total of 1,404 documents, each uniquely identified as Di, where i ∈ {1, 2, ..., 1404}. The FDA is a U.S. federal agency responsible for regulating and overseeing the safety and efficacy of drugs, food, and cosmetics, while the ICH works to harmonize regulatory standards for drug development and registration across countries, ensuring the safety, quality, and efficacy of medicines. To extract the content of the documents into text, we used OCR; specifically, we employed Nougat, a transformer model developed for scientific texts [Blecher et al., 2023], which is particularly adept at processing technical and scientific documents. Each document Di processed through OCR is then divided into several chunks, denoted Di,j, where j is the sequence number of each chunk of document i.
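The excerpt does not specify the chunking parameters, embedding model, or vector store used for the dense index; the sketch below only illustrates the general pipeline described above (chunk the OCR'd guideline text Di into passages Di,j, embed them, and build a similarity-search index). The encoder checkpoint, chunk sizes, and use of FAISS here are assumptions for illustration (FAISS appears in the paper's references, but its exact configuration is not stated in this section).

```python
# Hedged sketch of dense indexing for the guideline corpus. All model/parameter
# choices below are illustrative assumptions, not the paper's reported setup.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk_document(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split one OCR'd document into overlapping character chunks D_{i,j}."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Stand-in documents; in the real pipeline these would be the Nougat OCR outputs.
documents = {
    "FDA-0001": "Guidance for Industry: stability testing of drug substances and products ...",
    "ICH-Q1A": "Stability testing of new drug substances and products ...",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model (assumption)
passages, passage_ids = [], []
for doc_id, text in documents.items():
    for j, chunk in enumerate(chunk_document(text)):
        passages.append(chunk)
        passage_ids.append((doc_id, j))  # keeps the D_{i,j} identity of each chunk

embeddings = encoder.encode(passages, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))
```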
# Dual-track Retrieval: Leveraging the Answer of a Fine-tuned LLM for Document Retrieval
We propose a hybrid method that leverages not only the question but also a hypothetical answer generated by a fine-tuned LLM agent. In the conventional RAG approach, a single query is employed in the similarity search for retrieving relevant documents. However, this method can be limited in scope, especially when dealing with the nuances and variability of language: relevant documents may be missed because retrieval depends on the specific keywords or phrases present in the user's query. To address this issue, query transformation for enhanced information retrieval has often been utilized [Wang et al., 2023a; Anand et al., 2023; Wang et al., 2020], and solutions such as Multiquery retrieval and HyDE [Gao et al., 2022] have been proposed. Multiquery retrieval is an advanced technique that automatically generates multiple queries from the original question, each offering a different perspective. This process, facilitated by a Large Language Model (LLM), retrieves a set of relevant documents for each query, thereby broadening the scope of the search. HyDE, on the other hand, leverages hypothetical documents generated in response to the query: an instruction-following language model generates a text snippet that responds to the query, and this snippet is then used in the similarity search for document retrieval. The key aspect of this approach is that the generated text does not need to be factually accurate; it only needs to capture the relevance pattern of the query, allowing a more nuanced and effective retrieval of information. However, Multiquery retrieval remains confined to the narrow scope of the user's question, hindering its ability to capture a wide range of information. Moreover, in domain-specific and highly specialized areas like pharmaceutical regulatory compliance, using a general LLM as in HyDE often produces very incomplete hypothetical answers, necessitating a more specialized approach. Recognizing this, we utilized a fine-tuned LLM trained on domain-specific data, enabling it to generate responses with a level of detail and accuracy akin to that of an expert in the pharmaceutical field. Half of the documents were retrieved using the answers provided by this fine-tuned LLM; to enhance the diversity of the search, the other half was sourced using the user's question. By utilizing both the user's query and the tailored responses generated by the fine-tuned LLM, this dual-track approach achieves a more thorough and nuanced retrieval of information.
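The following is a minimal sketch of the dual-track retrieval just described: half of the candidate passages come from the fine-tuned agent's hypothetical answer, half from the original question, and a reranker then keeps the most relevant ones. The function signatures (`generate_answer`, `retrieve`, `rerank`) and the toy components in the usage example are assumptions made for illustration, not the paper's actual interfaces.

```python
from typing import Callable, List

def dual_track_retrieve(
    question: str,
    generate_answer: Callable[[str], str],          # fine-tuned LLM agent (assumed interface)
    retrieve: Callable[[str, int], List[str]],      # similarity search over the guideline index
    rerank: Callable[[str, List[str]], List[str]],  # relevance reranker, e.g. a cross-encoder
    k_total: int = 24,
    k_final: int = 6,
) -> List[str]:
    """Retrieve half the candidates from the hypothetical answer and half from the
    question, then rerank against the question and keep the top k_final passages."""
    hypothetical_answer = generate_answer(question)
    candidates = retrieve(hypothetical_answer, k_total // 2) + retrieve(question, k_total // 2)
    deduped = list(dict.fromkeys(candidates))  # drop exact duplicates, keep order
    return rerank(question, deduped)[:k_final]

# Toy usage with stand-in components (the real system uses a fine-tuned LLM,
# a dense index over the guideline chunks, and a cross-encoder reranker).
docs = ["FDA stability testing guidance ...", "ICH Q1A stability conditions ...", "Unrelated text ..."]
result = dual_track_retrieve(
    "What are the stability testing requirements?",
    generate_answer=lambda q: "Stability testing should follow ICH Q1A conditions ...",
    retrieve=lambda text, k: sorted(
        docs, key=lambda d: -len(set(text.lower().split()) & set(d.lower().split()))
    )[:k],
    rerank=lambda q, cands: cands,  # identity reranker, for the toy example only
)
print(result)
```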
# Fine-tuning process
# i) Dataset
We used official Q&A datasets from the FDA for fine-tuning. Because of the comprehensive and sometimes unclear nature of FDA guidelines, a multitude of questions have emerged from both industry and academia. The FDA offers official responses to these frequently asked questions. We collected them, amounting to 1,681 question-answer pairs, and designated 85% of the data for training, 10% for validation, and the remaining 5% for testing.
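A small sketch of the 85/10/5 split described above is shown below; the use of scikit-learn and the random seed are assumptions, since the paper only reports the ratios and the total of 1,681 pairs.

```python
# Hedged sketch of the 85/10/5 split of the FDA question-answer pairs.
from sklearn.model_selection import train_test_split

qa_pairs = [{"question": f"q{i}", "answer": f"a{i}"} for i in range(1681)]  # stand-in data
train, rest = train_test_split(qa_pairs, test_size=0.15, random_state=42)   # 85% train
val, test = train_test_split(rest, test_size=1 / 3, random_state=42)        # 10% val, 5% test
print(len(train), len(val), len(test))  # roughly 1428 / 168 / 85
```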
Overall performance on a dataset is computed by averaging over all instances.

# Retrieval
Following Petroni et al. (2021), we evaluate retrieval performance using the recall@k metric.

Recall@k For a given example with a query and ground-truth passage(s), recall@k measures the ratio of ground-truth passages among the top-k retrieved passages. This measure readily supports retrieval for multi-hop questions (i.e., in HotpotQA) as well, where instead of retrieving any one of the gold passages, all passages along the reasoning path are necessary.

# Reader Metrics
We evaluate the reader predictions using exact match and unigram F1 metrics.

Exact Match Exact match (EM) measures whether the model-generated output is the same as at least one of the ground-truth answers (Richardson et al., 2013). This metric requires models to be precise, yet it may be overly strict when models produce verbose answers. Given the extractive nature of the NQ dataset (i.e., answers are spans in supporting passages), we use EM to evaluate NQ.

Unigram F1 Unigram F1 is a well-adopted metric for QA, which quantifies the overlap of unigrams in the generated and reference texts, computed through the harmonic mean of precision and recall at the unigram level (Petroni et al., 2021). For each query, we compute the F1 score of the reader output against all possible gold answers and report the highest score.
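A minimal sketch of the two metric definitions above follows; the whitespace tokenization and lowercasing are simplifying assumptions (the benchmark's exact normalization is not specified in this excerpt).

```python
from collections import Counter

def recall_at_k(retrieved_ids, gold_ids, k):
    """Fraction of gold passages found among the top-k retrieved passages."""
    top_k = set(retrieved_ids[:k])
    return sum(1 for g in gold_ids if g in top_k) / len(gold_ids)

def unigram_f1(prediction, references):
    """Best unigram F1 of the prediction against any gold answer (simple whitespace tokens)."""
    def f1(pred_tokens, ref_tokens):
        common = Counter(pred_tokens) & Counter(ref_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)
    pred = prediction.lower().split()
    return max(f1(pred, ref.lower().split()) for ref in references)

# Example from the text: "5 seasons" vs. gold "five seasons" fails EM but earns 0.5 under F1.
print(unigram_f1("5 seasons", ["five seasons"]))  # 0.5
```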
The dataset we processed is available online.2

# ii) Base Model
In this study, we selected ChatGPT 3.5-Turbo and Mistral-7B [Jiang et al., 2023] as the base LLMs to be fine-tuned. ChatGPT 3.5-Turbo, developed by OpenAI, offers the highest performance among the LLMs currently available for fine-tuning, making it a standout choice. The Mistral-7B model, despite having only 7.3 billion parameters, is acclaimed for its high performance. Developed by Mistral AI, it has demonstrated exceptional results on various benchmarks, outperforming Llama 2 13B across all metrics and showing comparable or superior results to Llama 1 34B in key areas such as reasoning, mathematics, and code generation. For the ChatGPT 3.5-Turbo model, we conducted fine-tuning over 3 epochs and 2101 steps. For the Mistral-7B model, to achieve efficient resource handling, we utilized the LoRA (Low-Rank Adaptation) technique [Hu et al., 2021; Zeng and Lee, 2023]. LoRA allows the efficient adaptation of large language models by adjusting a small set of parameters, significantly reducing computational and storage costs, and has been highly successful in modifying large-scale language models [Dinh et al., 2022]. Using LoRA, implemented through Hugging Face's PEFT library, the Mistral-7B model was fine-tuned over 3 epochs and 1074 steps. Figure 3 shows the result of the fine-tuning process. By the end of tuning, the loss of both models was significantly reduced, demonstrating their enhanced capability to accurately interpret and respond to complex FDA regulatory queries.

# iii) Evaluation
We evaluated the performance of the fine-tuned models using BertScore [Zhang et al., 2019], a metric for assessing the quality of text generation. BertScore is a sophisticated evaluation method that compares the semantic similarity of the generated text and the reference text using contextual embeddings.

| |ChatGPT-3.5 Finetuned|Mistral 7B Finetuned|ChatGPT-4|
|---|---|---|---|
|precision|0.579|0.485|0.505|
|recall|0.589|0.503|0.622|
|f1|0.578|0.489|0.555|

Table 1: Evaluation Results of Fine-Tuned and Standard LLMs on BertScore Metrics

2 https://huggingface.co/datasets/Jaymax/FDA Pharmaceuticals FAQ
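The section states that Mistral-7B was adapted with LoRA through Hugging Face's PEFT library, but does not report the adapter hyperparameters. The sketch below shows what such a setup typically looks like; the rank, alpha, target modules, and base checkpoint name are assumptions, and the training loop itself (3 epochs over the FDA Q&A pairs) is omitted.

```python
# Hedged sketch of a LoRA fine-tuning setup for the Mistral-7B agent via PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=16,                                  # low-rank dimension (assumption)
    lora_alpha=32,                         # scaling factor (assumption)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapter weights are trained

# Training would then proceed with transformers.Trainer (or a similar loop) on
# prompt/response-formatted FDA question-answer pairs for 3 epochs.
```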
# Experiment & Result
# Evaluation metric
For the evaluation, we used LLMs-as-judges metrics. Traditional n-gram-based evaluation methods such as BLEU [Papineni, 2002; Reiter, 2018] have shown limitations outside of machine translation [Post, 2018; Sulem et al., 2018], and ROUGE faces challenges on key factors of machine-learning evaluation [Grusky, 2023]. Human evaluation has also been a traditional method [Awasthi et al., 2023].
Since the model is judged directly by humans, human evaluation allows for an accurate assessment; for the same reason, however, it can be too costly. The LLMs-as-judges method is a good alternative to human evaluation [Chiang and Lee, 2023]. This method has shown the highest similarity to human evaluation compared to other approaches [Liu et al., 2023; Svikhnushina and Pu, 2023], and when a high-performance LLM such as GPT-4 is used, the similarity is known to reach up to 80% [Zheng et al., 2023].

# Evaluation metric for context retrieval
Among LLMs-as-judges metrics, we chose the Retrieval Augmented Generation Assessment (Ragas) framework [Es et al., 2023] for evaluating the accuracy of context retrieval. Ragas is a framework for the evaluation of RAG systems. It introduces a suite of metrics for evaluating RAG systems without relying solely on ground-truth human annotations. The most notable feature of Ragas for evaluating context retrieval is that it does not require a reference context: the framework assesses the accuracy of the retrieved context based only on the question and the reference answer. This approach is well suited to situations where no direct reference context is available. The most prominent evaluation metrics Ragas offers for context retrieval include:
i) Context Precision This assesses the relevance of the retrieved documents to the question. It is especially important given the necessity of retrieving guideline documents.

ii) Context Recall This evaluates the ability of the retrieval system to gather all the information needed to answer the question. It uses the ground-truth answer and an LLM to check whether each statement in the answer can also be found in the retrieved context.

Thus, we employed these two metrics to evaluate the quality of the retrieved context in our experiments. By leveraging them, we aimed to provide a comprehensive and objective assessment of the QA-RAG model's performance in retrieving relevant and complete information from the pharmaceutical guidelines.
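The sketch below illustrates how retrieved contexts might be scored with Ragas' context precision and recall, as described above. Ragas' column names and metric imports have changed across versions, so treat this as an illustration to adapt against the installed release rather than a fixed recipe; the example question and placeholder strings are assumptions.

```python
# Hedged sketch of Ragas-based context evaluation (API details vary by Ragas version).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_precision, context_recall

eval_data = Dataset.from_dict({
    "question": ["What stability data are required for a new drug application?"],  # example query
    "contexts": [["...retrieved guideline passage 1...", "...retrieved guideline passage 2..."]],
    "ground_truth": ["...reference answer taken from the FDA Q&A test set..."],
})
scores = evaluate(eval_data, metrics=[context_precision, context_recall])
print(scores)  # e.g. context_precision and context_recall values per the installed version
```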
# Evaluation metric for answer generation
The final answers generated by the model, based on the retrieved context, can be evaluated by comparing their similarity to the reference answers. For this purpose, we utilized BertScore [Zhang et al., 2019]. Given BertScore's ability to capture nuanced semantic correlations, it was an ideal choice for comparing the semantic similarity and relevance of the model's responses against the reference answers.

# Dataset
The test dataset described in the fine-tuning process section (2.3), which was not used in the fine-tuning procedure, was utilized for the experiments. It consists of answers to frequently asked questions about industry guidelines, compiled directly from official FDA documents. This dataset's characteristics ensured that it effectively represented the real-world challenges and queries faced in this domain, making it an ideal choice for assessing the performance of the various approaches.

# Experimental setup
The evaluation process was designed to compare the performance of the QA-RAG model with various baselines. To ensure a fair evaluation, we fixed the number of retrieved documents at 24 for each method and then narrowed it down to the top 6 documents during the post-processing stage (e.g., reranking). The accuracy of these contexts was compared across the different baselines. Subsequently, the final answer generation step was conducted based on the retrieved context, and the generated answers were also compared across the baselines. The final answer agent used in all cases was the ChatGPT-3.5 Turbo model, and the prompts for answer generation were kept consistent. The experiments include:

Custom Scoring agent vs. Reranker To select an appropriate post-processing method, we compared a custom scoring agent with a reranker; as the reranker, we opted for bge-reranker-large [Xiao et al., 2023], a powerful cross-encoder model. This comparison aimed to evaluate the effectiveness of each method in accurately prioritizing retrieved documents by their relevance to the query and then selecting the top-ranked ones.

Evaluation of Context retrieval performance This experiment assessed the accuracy of the documents selected and finalized through the post-processing stage. The objective was to determine the effectiveness of QA-RAG, compared with various baselines, in accurately extracting relevant documents. The focus was on the precision and relevance of the documents retrieved by each model, to evaluate their overall capability in extracting question-related contexts from complex pharmaceutical guidelines.

Evaluation of Answer generation performance Following the context retrieval phase, a critical aspect of the evaluation was to assess the QA-RAG model's ability to generate the final answers, in comparison with the other baselines. These answers, formulated from the question and the retrieved contexts, underwent a thorough examination for effectiveness and accuracy.

Ablation Study To further understand the individual contributions of each component of the QA-RAG model, we conducted an ablation study. The QA-RAG setup was configured to retrieve 12 documents based on the question and another 12 from the fine-tuned LLM's answers; post-processing through reranking then narrowed these down to the final 6. We first evaluated an "Only hypothetical answer" approach, which removed the question-based retrieval and retrieved only 12 documents derived from the fine-tuned LLM's answer, again narrowing down to the final 6. Similarly, we evaluated an "Only question" approach, which retrieved documents based solely on the question, excluding the fine-tuned LLM's answer.

# Baseline Selection
Question + Hypothetical answer This is the QA-RAG model, which incorporates both the question and the hypothetical answer derived from the fine-tuned LLM into the retrieval process.

Multiquery Questions We expanded the original question by generating three additional questions, each offering a distinct viewpoint, using the langchain package for the implementation.3 We used GPT-4 to generate the additional questions.
For each of the four queries in total, six contextually pertinent documents were retrieved. After applying the reranker, the top six most relevant documents were extracted.

3 https://github.com/langchain-ai/langchain
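The experimental setup above narrows 24 retrieved candidates to the top 6 with bge-reranker-large. Below is a minimal sketch of that reranking step using the named cross-encoder checkpoint through Hugging Face transformers; the batching, truncation length, and CPU-only execution are simplifying assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of cross-encoder reranking with BAAI/bge-reranker-large.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def rerank(question: str, passages: list[str], top_k: int = 6,
           model_id: str = "BAAI/bge-reranker-large") -> list[str]:
    """Score (question, passage) pairs with the cross-encoder and keep the top_k passages."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    pairs = [(question, p) for p in passages]
    with torch.no_grad():
        inputs = tokenizer(pairs, padding=True, truncation=True, max_length=512,
                           return_tensors="pt")
        scores = model(**inputs).logits.view(-1)  # one relevance score per pair
    ranked = sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True)
    return [p for p, _ in ranked[:top_k]]
```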
# Evaluation of Context retrieval performance

|Retrieval method (number of documents retrieved)|Context precision|Context recall|
|---|---|---|
|Question (12) + Hypothetical answer (12)|0.717|0.328|
|Multiquery questions (24)|0.564|0.269|
|HyDE with reranker (24)|0.673|0.283|
|Only question (24)|0.556|0.270|
|Only hypothetical answer (24)|0.713|0.295|

The QA-RAG model, using a combination of the question and a hypothetical answer from the fine-tuned LLM, achieved the highest context precision (0.717) and context recall (0.328). This superior performance underscores the model's ability to retrieve highly relevant documents. HyDE was surpassed by the "Only hypothetical answer" approach, in which context retrieval was based solely on answers from the fine-tuned LLM. This finding underscores the effectiveness of employing fine-tuned LLM responses, especially in specialized domains: the fine-tuned model's answers, being more aligned with expert knowledge of pharmaceutical regulations, enhance the relevance and accuracy of the retrieved documents. In contrast, the Multiquery approach, while effective, showed limitations in achieving high precision (0.564) and recall (0.269). This limitation was even more pronounced in the "Only question" approach, which showed the least effective performance among the methods tested, highlighting the challenge of relying solely on the query in areas where domain-specific knowledge is critical.

# Evaluation of Answer generation performance

|Retrieval method (number of documents retrieved)|Precision|Recall|F1|
|---|---|---|---|
|Question (12) + Hypothetical answer (12)|0.551|0.645|0.591|
|Multiquery questions (24)|0.532|0.629|0.573|
|HyDE with reranker (24)|0.540|0.641|0.582|
|Only question (24)|0.540|0.636|0.581|
|Only hypothetical answer (24)|0.539|0.642|0.583|

The evaluation of final answer generation indicates similar findings. The QA-RAG model achieved the highest scores in precision (0.551), recall (0.645), and F1 (0.591).
Notably, the ranking by F1 score, a metric that combines precision and recall, exactly matched the top-three ranking in context retrieval performance, demonstrating the efficacy of employing high-accuracy contexts in generating precise responses.

Table 2: Comparison results of Reranker vs. Scoring LLM

The comparative analysis between the reranker and the scoring agent revealed a consistent superiority of the reranker in context precision and context recall across almost every method, the only exceptions being the context recall of the Multiquery and HyDE methods. This result suggests that although the scoring-agent method may have a slight advantage in retrieving relevant information, as determined through comparison with the ground truth, the reranker excels at accurately identifying relevant documents in almost every case. Given the reranker's overall superior performance, we selected it as the post-processing method for the QA-RAG model and applied it across all other baseline methods in our experiments.

Ablation study

|Retrieval method (number of documents retrieved)|Context precision|Context recall|
|---|---|---|
|Question (12) + Hypothetical answer (12)|0.717|0.328|
|Only question (12)|0.559|0.308|
|Only hypothetical answer (12)|0.700|0.259|

The ablation study provided valuable insights into the distinct components of the QA-RAG model.
Focusing on the hypothetical answer component alone, the model achieved an impressive context precision of 0.700, just 0.017 points below the full model's performance. Conversely, removing the hypothetical answer element and relying solely on the user's question led to a marked drop in context precision, to 0.559. In terms of context recall, the "Only question" approach achieved a slightly higher score (0.308) than the "Only hypothetical answer" method (0.259). The gap in context precision between "Only question" (0.559) and "Only hypothetical answer" (0.700), which is more pronounced than the gap in context recall, highlights the crucial role that hypothetical answers play in enhancing precision, suggesting their significant contribution to the model's overall accuracy.

# Conclusion
# Summary of Findings
Our investigation of the QA-RAG model in the regulatory compliance domain shows its effectiveness in combining generative AI and RAG with pharmaceutical regulatory guidelines. The model provides accurate and contextually relevant document retrieval, ultimately delivering precise answers.
This is especially crucial in the pharmaceutical sector, where adherence to regulatory standards is critical. Key findings from our research include:

Superior Performance Driven by Utilization of Answers In our experiments, strategies that incorporated answers into document retrieval exhibited notable advantages. The QA-RAG model, employing a hypothetical answer from a fine-tuned LLM, achieved the highest context precision and recall scores. Following closely was the "Only hypothetical answer" approach, which exclusively used a fine-tuned-LLM-generated answer and secured the second-highest context precision and recall scores. Furthermore, HyDE, which utilized an answer derived from a general LLM that was not fine-tuned, achieved the third-highest ranking. This emphasizes the advantage of answer-based retrieval strategies for document precision.

Advantages of the Hybrid Query-Answer Approach The ablation study results underlined the importance of a balanced hybrid question-answer approach in the QA-RAG model. While the hypothetical answer component was vital for high precision and recall, integrating the original question also enhanced the model's overall performance. By effectively merging these two elements, the QA-RAG model optimizes its retrieval accuracy and relevance, proving the value of this combined approach.

Impact of the Fine-Tuned LLM in Retrieval The significance of the fine-tuned LLM in the QA-RAG model is validated by its strong performance in our tests. In the context retrieval experiment, the approaches using the fine-tuned LLM ("Only hypothetical answer" and "Question + Hypothetical answer") ranked among the top two in context precision and recall. Similarly, in the answer generation evaluation, these two approaches again secured the top positions in F1 score. This consistent high ranking across different metrics underscores the fine-tuned LLM's critical role in extracting pertinent documents: by providing accurate answers tailored to pharmaceutical regulations, it effectively guides the retrieval of relevant documents.

# Implications for the Pharmaceutical Industry
The successful integration of the QA-RAG model into the pharmaceutical industry's regulatory compliance domain has the following implications:

Streamlining Regulatory Compliance The QA-RAG model streamlines the compliance process by efficiently providing information on pharmaceutical regulatory guidelines through Q&A. This not only reduces the time and resources required for navigating complex regulations but also facilitates more informed decision-making.
This is suitable if we do not need the generated answer to match the exact formatting and wording of the ground-truth answer. For example, "five seasons" and "5 seasons" would fail EM but can get partial credit of 0.5 under F1. We evaluate HotpotQA and BioASQ with the F1 metric.

# Are More Contexts Always Better?
In this section, we study how models perform with various numbers of passages (§5.2) and whether these results relate to context limits (§5.3).

# RAGGED Analysis Guidelines
What to Vary: Experiment with different numbers of context passages provided to the reader model. Start from a small number first, since many reader models' performance peaks before their context limits are reached.

Behaviors to Expect: Expect a non-linear relationship between the number of contexts and model performance. Initially, reader performance may improve with more contexts due to the increased availability of "signal", or supporting information. However, as the number of context passages increases, the amount of "noise", or irrelevant information, also increases. As a result, reader performance may plateau or even degrade.

Model Behavior Implications: Fitting as many contexts as the reader's context limit can hold does not always guarantee optimal downstream performance. Using these experiments, one can better understand a reader's ability to sift for "signal" among "noise", and thereby find the optimal range of context passages for your model. Staying within this range maximizes performance without incurring unnecessary computational costs.
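The sketch below illustrates the analysis recommended above: sweep the number of context passages given to the reader and record downstream answer quality. The `reader_answer` and `unigram_f1` callables are assumed interfaces (an LLM reader and the F1 metric sketched earlier), and the sweep values are an example rather than the benchmark's exact grid.

```python
# Illustrative sweep over the number of context passages k provided to the reader.
def sweep_num_contexts(queries, retrieved, gold_answers, reader_answer, unigram_f1,
                       k_values=(1, 2, 5, 10, 20, 30)):
    """Return mean reader F1 for each number of provided context passages k."""
    results = {}
    for k in k_values:
        scores = []
        for q, passages, gold in zip(queries, retrieved, gold_answers):
            prediction = reader_answer(q, passages[:k])   # feed only the top-k passages
            scores.append(unigram_f1(prediction, gold))
        results[k] = sum(scores) / len(scores)
    return results  # expect a peak, then a plateau or decline as "noise" grows
```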
Reduction in Dependency on Human Expertise The model reduces reliance on the extensive human expertise traditionally required in this field. By automating parts of the compliance process, it allows experts to focus on more strategic tasks, thereby optimizing the overall workflow.

Pioneering the Use of Generative AI in the Pharmaceutical Regulatory Compliance Domain As one of the first instances of employing generative AI within the realm of pharmaceutical regulatory compliance, the QA-RAG model sets a precedent. It illustrates an effective strategy for applying generative AI and RAG in pharmaceutical regulatory compliance, providing a cornerstone for future research.

# Final Thoughts
In conclusion, the QA-RAG model marks a step forward in the application of generative AI to pharmaceutical regulatory compliance. It stands out as one of the first models to leverage high-performance Large Language Models (LLMs) for navigating the complex landscape of regulatory guidelines in the pharmaceutical industry. Its enhanced capabilities in document retrieval and answer generation establish it as a more suitable approach than conventional RAG. Moreover, the adaptable design of the QA-RAG model shows potential for use in other industries that deal with highly domain-specific information and require professional analysis. Sectors such as legal compliance, financial regulation, and academic research could greatly benefit from the model's advanced capabilities. Its application could revolutionize the way organizations across various industries manage large volumes of data, leading to swifter and more accurate information retrieval that enhances decision-making. However, like any emerging technology, the long-term implications of the model within various industries will require ongoing evaluation and refinement. The integration of generative AI in highly specialized fields will raise questions about the model's adaptability to nuanced changes in data and industry practices. Thus, future developments should focus on proving the model's sustained effectiveness, ensuring it remains a robust tool in the face of ever-changing landscapes. Furthermore, it is crucial to keep enhancing the model's performance by staying aligned with evolving generative AI technologies.
# Ethical Statement
In the development and application of the QA-RAG model, we emphasize its role as a complementary tool for professionals in the pharmaceutical field. While the model enhances the efficiency and accuracy of navigating complex guidelines, it is designed to augment, not replace, human expertise and judgment. The dataset used for training and evaluating the model consists of publicly accessible documents from the Food and Drug Administration (FDA) and the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), adhering to all applicable data privacy and security protocols.

# Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2022-00166729). We acknowledge the use of ChatGPT, developed by OpenAI, for generating figures used in this paper to illustrate the model's design and functionality.

# References
[Abbasian et al., 2023] M. Abbasian, I. Azimi, A. M. Rahmani, and R. Jain. Conversational health agents: A personalized LLM-powered agent framework. arXiv preprint arXiv:2310.02374, 2023.
[Anand et al., 2023] A. Anand, A. Anand, and V. Setty. Query understanding in the age of large language models. arXiv preprint arXiv:2306.16004, 2023.
[Awasthi et al., 2023] R. Awasthi, S. Mishra, D. Mahapatra, A. Khanna, K. Maheshwari, J. Cywinski, et al. Humanely: Human evaluation of LLM yield, using a novel web-based evaluation tool. medRxiv, 2023-12.
[Badini et al., 2023] S. Badini, S. Regondi, E. Frontoni, and R. Pugliese. Assessing the capabilities of ChatGPT to improve additive manufacturing troubleshooting. Advanced Industrial and Engineering Polymer Research, 2023.
[Bahrini et al., 2023] A. Bahrini, M. Khamoshifar, H. Abbasimehr, R. J. Riggs, M. Esmaeili, R. M. Majdabadkohne, and M. Pasehvar. ChatGPT: Applications, opportunities, and threats. In 2023 Systems and Information Engineering Design Symposium (SIEDS), pages 274–279. IEEE, April 2023.
[Blecher et al., 2023] L. Blecher, G. Cucurull, T. Scialom, and R. Stojnic. Nougat: Neural optical understanding for academic documents. arXiv preprint arXiv:2308.13418, 2023.
[Bran et al., 2023] A. M. Bran, S. Cox, A. D. White, and P. Schwaller. ChemCrow: Augmenting large-language models with chemistry tools. arXiv preprint arXiv:2304.05376, 2023.
[Brown et al., 2020] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[Castelvecchi, 2023] D. Castelvecchi. Open-source AI chatbots are booming-what does this mean for researchers? Nature, 2023.
[Chiang and Lee, 2023] C. H. Chiang and H. Y. Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023.
[Crudeli, 2020] M. Crudeli. Calculating quality management costs. Technology Record, 2020.
[Danopoulos et al., 2019] D. Danopoulos, C. Kachris, and D. Soudris. Approximate similarity search with FAISS framework using FPGAs on the cloud. In Embedded Computer Systems: Architectures, Modeling, and Simulation, pages 373–386, 2019.
[Dinh et al., 2022] T. Dinh, Y. Zeng, R. Zhang, Z. Lin, M. Gira, S. Rajput, et al. LIFT: Language-interfaced fine-tuning for non-language machine learning tasks. In Advances in Neural Information Processing Systems, volume 35, pages 11763–11784, 2022.
[Es et al., 2023] S. Es, J. James, L. Espinosa-Anke, and S. Schockaert. Ragas: Automated evaluation of retrieval augmented generation. arXiv preprint arXiv:2309.15217, 2023.
[Fei-Fei et al., 2006] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594–611, 2006.
[Gao et al., 2022] L. Gao, X. Ma, J. Lin, and J. Callan. Precise zero-shot dense retrieval without relevance labels. arXiv preprint arXiv:2212.10496, 2022.
[Grusky, 2023] M. Grusky. Rogue scores. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1914–1934, July 2023.
[Hu et al., 2021] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, et al. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[Jiang et al., 2023] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. D. L. Casas, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
[Johnson et al., 2019] J. Johnson, M. Douze, and H. Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2019.
[Karpukhin et al., 2020] V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781. Association for Computational Linguistics, 2020.
[Lake et al., 2015] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
# References

[Lee et al., 2019] K. Lee, M.-W. Chang, and K. Toutanova. Latent retrieval for weakly supervised open domain question answering. In A. Korhonen, D. Traum, and L. Màrquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy, 2019. Association for Computational Linguistics.
[Sulem et al., 2018] E. Sulem, O. Abend, and A. Rappoport. BLEU is not suitable for the evaluation of text simplification. arXiv preprint arXiv:1810.05995, 2018.
[Lewis et al., 2020] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
[Li et al., 2021] Y. Li, Z. Liu, C. Xiong, and Z. Liu. More robust dense retrieval with contrastive dual learning. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, pages 287–296, 2021.
[Liu et al., 2023] Y. Liu, D. Iter, Y. Xu, S. Wang, R. Xu, and C. Zhu. GPTEval: NLG evaluation using GPT-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023.
[Miller et al., 2000] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transforms. In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), volume 1, pages 464–471. IEEE, June 2000.
[Mu et al., 2019] C. Mu, B. Yang, and Z. Yan. An empirical comparison of FAISS and FENSHSES for nearest neighbor search in Hamming space. arXiv preprint arXiv:1906.10095, 2019.
[Nogueira et al., 2019] R. Nogueira, W. Yang, K. Cho, and J. Lin. Multi-stage document ranking with BERT. arXiv preprint arXiv:1910.14424, 2019.
[Nogueira et al., 2020] R. Nogueira, Z. Jiang, and J. Lin. Document ranking with a pretrained sequence-to-sequence model. arXiv preprint arXiv:2003.06713, 2020.
[Ogilvie et al., 2022] L. Ogilvie, J. Prescott, and J. Carson. The use of chatbots as supportive agents for people seeking help with substance use disorder: A systematic review. European Addiction Research, 28(6):405–418, 2022.
[Papineni, 2002] K. Papineni. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics, pages 311–318, 2002.
[Post, 2018] M. Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation, 2018.
[Reiter, 2018] E. Reiter. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393–401, 2018.
[Robertson and Zaragoza, 2009] S. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389, 2009.
[Savage, 2023] N. Savage. Drug discovery companies are customizing ChatGPT: here's how. Nat Biotechnol, 41:585–586, 2023.
arXiv:2404.07220v1 [cs.IR] 22 Mar 2024

# Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers

|1st Kunal Sawarkar|2nd Abhilasha Mangal|3rd Shivam Raj Solanki|
|---|---|---|
|IBM|IBM|IBM|
|Kunal@ibm.com|Abhilasha.Mangal@ibm.com|Shivam.Raj.Solanki@ibm.com|

Abstract—Retrieval-Augmented Generation (RAG) is a prevalent approach for infusing a private knowledge base of documents into Large Language Models (LLMs) to build Generative Q&A (Question-Answering) systems. However, RAG accuracy becomes increasingly challenging as the corpus of documents scales up, with Retrievers playing an outsized role in overall RAG accuracy by extracting the most relevant documents from the corpus to provide context to the LLM. In this paper, we propose the 'Blended RAG' method of leveraging semantic search techniques, such as Dense Vector indexes and Sparse Encoder indexes, blended with hybrid query strategies. Our study achieves better retrieval results and sets new benchmarks for IR (Information Retrieval) datasets like NQ and TREC-COVID. We further extend such a 'Blended Retriever' to the RAG system to demonstrate far superior results on Generative Q&A datasets like SQuAD, even surpassing fine-tuning performance.

Index Terms—RAG, Retrievers, Semantic Search, Dense Index, Vector Search

# I. INTRODUCTION

RAG represents an approach to text generation that is based not only on patterns learned during training but also on dynamically retrieved external knowledge. This method combines the creative flair of generative models with the encyclopedic recall of a search engine. The efficacy of the RAG system relies fundamentally on two components: the Retriever (R) and the Generator (G), the latter representing the size and type of LLM. The language model can easily craft sentences, but it might not always have all the facts.
Figure 3 (Reader Performance; x-axis: top-k documents; readers: FlanT5, FlanUL2, LLaMa 7B, LLaMa 70B, and their truncated variants): Reader output scores when using varying numbers of passages on three datasets (NQ, HotpotQA, BioASQ). Colored circles mark the best performance on each line. Dashed lines indicate results when truncating LLaMa inputs to 2k tokens. For NQ, we use exact match.
This is where the Retriever (R) steps in, quickly sifting through vast amounts of documents to find relevant information that can be used to inform and enrich the language model's output. Think of the retriever as the researcher part of the AI, feeding contextually grounded text to the Generator (G) so that it can produce knowledgeable answers. Without the retriever, RAG would be like a well-spoken individual who delivers irrelevant information.
# II. RELATED WORK

Search has been a focal point of research in information retrieval, with numerous studies exploring various methodologies. Historically, the BM25 (Best Match) algorithm, which uses similarity search, has been a cornerstone in this field, as explored by Robertson and Zaragoza (2009). BM25 prioritizes documents according to their pertinence to a query, capitalizing on Term Frequency (TF), Inverse Document Frequency (IDF), and Document Length to compute a relevance score.

Dense vector models, particularly those employing KNN (k-Nearest Neighbours) algorithms, have gained attention for their ability to capture deep semantic relationships in data. Studies by Johnson et al. (2019) demonstrated the efficacy of dense vector representations in large-scale search applications. The kinship between data entities (including the search query) is assessed by computing vectorial proximity (via cosine similarity, etc.). During search execution, the model discerns the 'k' vectors closest in resemblance to the query vector and returns the corresponding data entities as results. The ability of these models to transform text into a vector space, where semantic similarities can be quantitatively assessed, marks a significant advancement over traditional keyword-based approaches.

On the other hand, sparse encoder-based vector models have also been explored for their precision in representing document semantics. The work of Zaharia et al. (2010) illustrates the potential of these models in efficiently handling high-dimensional data while maintaining interpretability, a challenge often faced in dense vector representations. In sparse encoder indexes, the indexed documents and the user's search query are mapped into an extensive array of associated terms derived from a vast corpus of training data, encapsulating relationships and the contextual use of concepts. The resultant expanded terms for documents and queries are encoded into sparse vectors, an efficient data representation format when handling an extensive vocabulary.
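To make the two retrieval families above concrete, the following is a minimal, illustrative sketch of a BM25-style relevance score and of the cosine similarity used by dense (KNN) retrieval. It is not the exact formulation used by Elasticsearch or any specific library, and the parameter values k1 and b are merely common defaults.

```python
import math
from collections import Counter


def bm25_score(query_terms, doc_terms, doc_freqs, num_docs, avgdl, k1=1.2, b=0.75):
    """Classic BM25: term frequency, inverse document frequency, and length normalization.

    doc_freqs maps a term to the number of documents containing it;
    avgdl is the average document length (in terms) over the corpus.
    """
    tf = Counter(doc_terms)
    score = 0.0
    for term in set(query_terms):
        if term not in tf:
            continue
        df = doc_freqs.get(term, 0)
        idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
        denom = tf[term] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf[term] * (k1 + 1) / denom
    return score


def cosine_similarity(u, v):
    """Vector proximity measure used by dense (KNN) retrieval."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```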
# A. Limitations in the current RAG system

Most current retrieval methodologies employed in Retrieval-Augmented Generation (RAG) pipelines rely on keyword- and similarity-based searches, which can restrict the RAG system's overall accuracy. Table I provides a summary of the current benchmarks for retriever accuracy.

# TABLE I: Current Retriever Benchmarks

|Dataset|Benchmark Metrics|NDCG@10|P@20|F1|
|---|---|---|---|---|
|NQ Dataset|P@20|0.633|86|79.6|
|TREC-Covid|NDCG@10| | |80.4|
|HotpotQA|F1, EM| | |0.85|

While most prior efforts to improve RAG accuracy focus on the G part, e.g., by tweaking LLM prompts or tuning [9], they have limited impact on the overall accuracy of the RAG system: if the R part feeds irrelevant context, the answer will be inaccurate. Furthermore, most retrieval methodologies employed in RAG pipelines rely on keyword and similarity-based searches, which can restrict the system's overall accuracy. Finding the best search method for RAG is still an emerging area of research. The goal of this study is to enhance retriever and RAG accuracy by incorporating Semantic Search-Based Retrievers and Hybrid Search Queries.

# III. BLENDED RETRIEVERS

For RAG systems, we explored three distinct search strategies: keyword-based similarity search, dense vector-based search, and semantic-based sparse encoders, integrating these to formulate hybrid queries. Unlike conventional keyword matching, semantic search delves into the nuances of a user's query, deciphering context and intent. This study systematically evaluates an array of search techniques across three primary indices: BM25 [3] for keyword-based search, KNN [4] for vector-based search, and the Elastic Learned Sparse Encoder (ELSER) for sparse encoder-based semantic search.

1. BM25 Index: The BM25 index is adept at employing full-text search capabilities enhanced by fuzzy matching techniques, laying the groundwork for more sophisticated query operations.
2. Dense Vector Index: We construct a dense vector index empowered by sentence transformers. It identifies the proximity of vector representations derived from document and query content.
3. Sparse Encoder Index: The Sparse EncodeR Retriever Model index is an amalgam of semantic understanding and similarity-based retrieval that encapsulates the nuanced relationships between terms, thereby capturing a more authentic representation of user intent and document relevance.

# A. Methodology

Our methodology unfolds in a sequence of progressive steps, commencing with the elementary match query within the BM25 index. We then escalate to hybrid queries that amalgamate diverse search techniques across multiple fields, leveraging the multi-match query within the Sparse Encoder-Based Index. This method proves invaluable when the exact location of the query text within the document corpus is indeterminate, ensuring a comprehensive match retrieval. The multi-match queries are categorized as follows:

- Cross Fields: Targets concurrence across multiple fields.
- Most Fields: Seeks text representation through different lenses across various fields.
- Best Fields: Pursues the aggregation of words within a singular field.
- Phrase Prefix: Operates similarly to Best Fields but prioritizes phrases over keywords.

After the initial match queries, we incorporate dense vector (KNN) and sparse encoder indices, each with their own bespoke hybrid queries.
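As an illustrative sketch (not the authors' exact implementation), the multi-match variants above can be expressed as Elasticsearch-style query bodies. The 'title' and 'text' field names follow the query tables in Appendix B and would need to match the actual index mapping.

```python
def multi_match_body(query_text, match_type):
    """Build a multi-match query body of the given type.

    match_type is one of: "cross_fields", "most_fields", "best_fields", "phrase_prefix".
    """
    return {
        "query": {
            "multi_match": {
                "query": query_text,
                "type": match_type,
                "analyzer": "standard",
                "fields": ["title", "text"],
            }
        }
    }


# Example: the 'Best Fields' variant, which performs best in the experiments below.
best_fields_query = multi_match_body("who founded the league of nations", "best_fields")
```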
This strategic approach synthesizes the strengths of each index, channeling them towards the unified goal of refining retrieval accuracy within our RAG system. We calculate the top-k retrieval accuracy metric to distill the essence of each query type. In Figure 1, we introduce a scheme designed to create Blended Retrievers by blending semantic search with hybrid queries.

# B. Constructing RAG System

From the plethora of possible permutations, a select sextet (top 6) of hybrid queries, those exhibiting the best retrieval efficacy, were chosen for further scrutiny. These queries were then subjected to rigorous evaluation across the benchmark datasets to ascertain the precision of the retrieval component within RAG. The sextet queries represent the culmination of retriever experimentation, embodying the synthesis of our finest query strategies aligned with various index types. The six blended queries are then fed to generative question-answering systems. This process finds the best retrievers to feed to the Generator of RAG, given the exponential growth in the number of potential query combinations stemming from the integration with distinct index types. The intricacies of constructing an effective RAG system are multi-fold, particularly when source datasets have diverse and complex landscapes. We undertook a comprehensive evaluation of a myriad of hybrid query formulations, scrutinizing their performance across benchmark datasets, including Natural Questions (NQ), TREC-COVID, the Stanford Question Answering Dataset (SQuAD), and HotpotQA.

# IV. EXPERIMENTATION FOR RETRIEVER EVALUATION

We used top-10 retrieval accuracy to narrow down the six best types of blended retrievers (index + hybrid query) for comparison on each benchmark dataset.

1. Top-10 retrieval accuracy on the NQ dataset: For the NQ dataset [5], our empirical analysis has demonstrated the superior performance of hybrid query strategies, attributable to their ability to utilize multiple data fields effectively. In Figure 2, our findings reveal that the hybrid query approach employing the Sparse Encoder with Best Fields attains the highest retrieval accuracy, reaching an impressive 88.77%. This result surpasses the efficacy of all other formulations, establishing a new benchmark for retrieval tasks within this dataset.
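The top-k retrieval accuracy used throughout this section is not spelled out in code in the paper; the following is a plausible minimal sketch of the metric, assuming each question comes with a set of gold passage IDs.

```python
def top_k_accuracy(results, k=10):
    """Fraction of questions whose gold passage appears among the top-k retrieved passages.

    results: list of (retrieved_ids, gold_ids) pairs, one per question,
    where retrieved_ids is ranked best-first.
    """
    if not results:
        return 0.0
    hits = sum(
        1
        for retrieved_ids, gold_ids in results
        if any(doc_id in gold_ids for doc_id in retrieved_ids[:k])
    )
    return hits / len(results)
```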
2. Top-10 retrieval accuracy on the TREC-Covid dataset: The TREC-COVID dataset [6] provides relevancy scores spanning from -1 to 2, with -1 indicating irrelevance and 2 denoting high relevance. Our initial assessments targeted documents with a relevancy of 1, deemed partially relevant.
Blended Retriever Queries using Similarity and Semantic Search Indexes:

|TF-IDF similarity-based search (BM25 index)|Vector-based search (KNN index)|Semantic-based search (Sparse Encoder Model index)|
|---|---|---|
|Match Query|Multi-match Query|Multi-match Query|
|BM25+MQ|KNN+MQ|SE+MQ|
|BM25+CF|KNN+CF|SE+CF|
|BM25+MF|KNN+MF|SE+MF|
|BM25+BF|KNN+BF|SE+BF|
|BM25+PP|KNN+PP|SE+PP|

Fig. 1: Scheme of Creating Blended Retrievers using Semantic Search with Hybrid Queries.
Fig. 2: Top-10 Retriever Accuracy for the NQ Dataset.
Fig. 3: Top-10 retriever accuracy for TREC-Covid (relevancy score 1).

The analysis in Figure 3 reveals a superior performance of vector-search hybrid queries over those based on keywords. In particular, hybrid queries that leverage the Sparse EncodeR utilizing Best Fields demonstrate the highest efficacy across all index types, at 78% accuracy. Subsequent to the initial evaluation, the same spectrum of queries was assessed against the TREC-COVID dataset with a relevancy score of 2, denoting documents entirely pertinent to the associated queries. The results in Figure 4, for a relevance score of two, where documents fully meet the relevance criteria for the associated queries, reinforce the efficacy of vector-search hybrid queries.
Fig. 4: Top-10 retriever accuracy for TREC-Covid (relevancy score 2).
Fig. 5: Top-10 retriever accuracy for the HotpotQA dataset.
Fig. 6: NQ dataset benchmarking using the NDCG@10 metric.

# TABLE II: Retriever Benchmarking using NDCG@10

|Dataset|Model/Pipeline|NDCG@10|
|---|---|---|
|TREC-Covid|COCO-DR Large|0.804|
|TREC-Covid|Blended RAG|0.87|
|NQ dataset|monoT5-3B|0.633|
|NQ dataset|Blended RAG|0.67|

# A. Retriever Benchmarking

Now that we have identified the best set of combinations of index + query types, we use these sextet queries on IR datasets for benchmarking with NDCG@10 [8] scores (Normalised Discounted Cumulative Gain).

1) NQ dataset benchmarking: The results for NDCG@10 using the sextet queries and the current benchmark on the NQ dataset are shown in Figure 6. Our pipeline provides the best NDCG@10 score of 0.67, which is 5.8% higher than conventional keyword-based methods.
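For reference, NDCG@10 can be computed per query roughly as in the hedged sketch below, which uses the linear-gain DCG variant; the cited benchmarks may use the exponential-gain formulation instead.

```python
import math


def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query.

    relevances: graded relevance of the retrieved documents, in rank order.
    """
    dcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```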
Notably, the hybrid query incorporating the Sparse Encoder with Best Fields demonstrates a 98% top-10 retrieval accuracy, eclipsing all other formulations. This suggests that a methodological pivot towards more nuanced blended search, particularly formulations that effectively utilize Best Fields, can significantly enhance retrieval outcomes in information retrieval (IR) systems.

2) TREC-Covid dataset benchmarking: The suite of hybrid queries devised in our research has demonstrably exceeded the current benchmark NDCG@10 score of 0.80, signaling their superior candidature for the RAG pipeline. Figure 7 shows the results for NDCG@10 using the sextet queries. Blended Retrievers achieved an NDCG@10 score of 0.87, which marks an 8.2% increment over the benchmark score of 0.804 established by the COCO-DR Large model (Table II).

3) SQuAD dataset benchmarking: The Stanford Question Answering Dataset (SQuAD) [9] is not an IR dataset, but we evaluated retrieval accuracy on it for consistency. First, we created a corpus from the SQuAD dataset using the title and context fields. Then, we indexed the corpus using BM25, dense vector, and Sparse Encoder indices. The top-k (k = 5, 10, and 20) retrieval accuracy results are reported in the Appendix.
Fig. 7: TREC-Covid dataset benchmarking using the NDCG@10 metric (comparison of the current benchmark score with hybrid queries).

For the SQuAD dataset, dense vector (KNN)-based semantic searches achieve higher accuracy than sparse vector-based semantic searches and traditional similarity-based searches, particularly for top-k retrieval performance with k values of 5, 10, and 20. (See the Appendix for more details.)

# B. Summary of Retriever Evaluation

We evaluated the retrieval accuracy of our approach, quantified by Top-k metrics where k ∈ {5, 10, 20}, across the NQ, TREC-COVID, SQuAD, and CoQA datasets. This synopsis demonstrates the capability of our Blended Retrieval methodology within diverse informational contexts. Key observations are:

- Enhanced retrieval accuracy is exhibited on all datasets except CoQA. This enhancement is attributable to the capability of our hybrid queries to effectively utilize available metadata to source the most pertinent results.
- Implementing dense vector-based (KNN) semantic search results in a marked improvement over keyword-based search approaches.
- Employing semantic search-based hybrid queries yields better retrieval precision than all conventional keyword-based or vector-based searches.
- Furthermore, the Sparse Encoder-based semantic search, when amalgamated with the 'Best Fields' hybrid query, often provides superior results to any other method.

# V. RAG EXPERIMENTATION

# A. RAG Evaluation on the SQuAD Dataset

SQuAD is a commonly benchmarked dataset for RAG systems and Generative Q&A using LLMs. Our study juxtaposes three variations of the RAG pipeline from prior work, using the evaluation metrics of Exact Match (EM) and F1 to gauge the accuracy of answer generation, as well as Top-5 and Top-10 for retrieval accuracy.

- RAG-original: This variant, a model fine-tuned on the Natural Questions dataset, has been appraised without domain-specific adaptation.
For HotpotQA and BioASQ, we use F1.

# 5.2 Number of Provided Contexts

We analyze the effect of the number of retrieved passages on reader model performance by providing the top-k passages retrieved by ColBERT and evaluating the EM scores of answers generated by four reader models. Results in Figure 3 reveal substantially different patterns between FLAN and LLAMA models. On the NQ dataset, EM scores of both LLAMA models peak early at k ≤ 3 before continuing to decline steadily. In contrast, FLAN models peak later, around k = 20, and only level off instead of declining. When conditioning on a total of 30 passages, LLAMA 7B and LLAMA 70B underperform FLAN models by about 17 to 33 points in exact match. Similarly, on multi-hop questions, FLAN models can still benefit from more than 10 passages, whereas LLAMA can only benefit from 1 or 2 passages. In contrast, on the BioASQ dataset, both model families have a small peak around 2 passages and roughly maintain the results afterwards. These findings imply that encoder-decoder models can more effectively process tens of passages, whereas decoder-only models can use at most 2–3 passages in all cases. This observation underscores the necessity to carefully select the number of provided passages for different models to optimize their performance.

# 5.3 Context Limit vs. Early Truncation

Despite LLAMA models having twice the context length of FLAN models (4k versus 2k), they do not achieve superior performance when there is a large number of contexts. Nonetheless, this evidence alone does not allow us to claim that LLAMA models are worse than FLAN models at processing long contexts. LLAMA's longer context window could be a confounding factor that exposes the model to more (noisy) passages. To make a fair comparison, we truncate LLAMA model inputs to 2k tokens, as we do for the FLAN models, then evaluate LLAMA's truncated-context performance. As shown by the dashed lines in Figure 3, early truncation only prevents LLAMA from further degrading after 15 passages on Wikipedia-domain questions; it does not improve results at smaller k. The best-performing LLAMA (70B, truncated) still scores lower than the smallest FLAN model, despite an almost 7-times difference in their number of parameters (70B vs. 11B).
- RAG-end2end: As an extension of RAG-original, this model undergoes additional fine-tuning tailored for domain adaptation to SQuAD.
- Blended RAG: Distinctively, our Blended RAG variant has not been trained on the SQuAD dataset or any related corpora. It combines optimized field selections and hybrid query formulations with semantic indices to supply the LLM with the most relevant context and render the most precise responses possible.

Consequently, as shown in Table IV, our Blended RAG showcases enhanced performance for Generative Q&A, with F1 scores higher by 50%, even without dataset-specific fine-tuning. This characteristic is particularly advantageous for large enterprise datasets, where fine-tuning may be impractical or unfeasible, underscoring this research's principal application.

# B. RAG Evaluation on the NQ Dataset

Natural Questions (NQ) is another commonly studied dataset for RAG. The Blended RAG pipeline, utilizing zero-shot learning, was evaluated to ascertain its efficacy against other non-fine-tuned models. The assessment focused on the following metrics: Exact Match (EM), F1 score, and retrieval accuracy (Top-5 and Top-20), as reported in Table V. Blended RAG (zero-shot) demonstrated superior performance with an EM of 42.63, improving on the prior benchmark by 35%.
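The EM and F1 figures above follow the standard SQuAD-style answer comparison; below is a minimal sketch of how such scores are typically computed. The normalization details may differ slightly from the authors' exact setup.

```python
import re
import string
from collections import Counter


def normalize(text):
    """Lower-case, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))


def f1_score(prediction, gold):
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```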
# TABLE IV: RAG evaluation on the SQuAD dataset

|Model/Pipeline|EM|F1|Top-5|Top-20|
|---|---|---|---|---|
|RAG-original|28.12|39.42|59.64|72.38|
|RAG-end2end|40.02|52.63|75.79|85.57|
|Blended RAG|57.63|68.4|94.89|98.58|

# TABLE V: RAG evaluation on the NQ dataset

|Model/Pipeline|EM|F1|Top-5|Top-20|
|---|---|---|---|---|
|GLaM (Oneshot) [12]|26.3| | | |
|GLaM (Zeroshot) [12]|24.7| | | |
|PaLM540B (Oneshot) [13]|29.3| | | |
|Blended RAG (Zero-shot)|42.63|53.96|88.22|88.88|

# VI. DISCUSSION

While RAG is a commonly used approach in industry, we realized during the course of this study that various challenges still exist; for example, there are no standard datasets on which both R (Retriever) and RAG benchmarks are available. The Retriever is often studied as a separate problem in the IR domain, while RAG is studied in the LLM domain. With this work, we attempted to bring synergy between the two domains.
In this section, we share some learnings on the limitations and appropriate use of this method.

# A. Trade-off between Sparse and Dense Vector Indices

The HotpotQA corpus, with 5M documents, presents substantial computational challenges: its dense vector index grows to approximately 50GB, a factor that significantly hampers processing efficiency. Dense vector indexing, characterized by its rapid indexing capability, is offset by relatively sluggish querying performance. Conversely, sparse vector indexing, despite its slower indexing process, offers expeditious querying. Furthermore, a stark contrast in storage requirements is observed; for instance, the sparse vector index of the HotpotQA corpus occupied a mere 10.5GB, as opposed to the 50GB required for the dense vector equivalent. In such cases, we recommend sparse encoder indexes. Furthermore, for enterprise corpora of this volume, we found it better to use multi-tenancy with federated search queries.

# B. Blended Retrievers without Metadata

When datasets are enriched with metadata or other relevant informational facets, they improve the efficacy of blended retrievers. Conversely, for datasets devoid of metadata, such as CoQA, the improvement is not as impressive. The absence of metadata in the CoQA dataset resulted in hybrid queries offering no improvement over basic queries. This limitation underscores the critical role of metadata in enhancing the efficacy of complex query structures. However, Sparse Encoder-based semantic searches still yield more favorable outcomes than traditional methods.

Additionally, we note that while NDCG@10 scores for the Retriever and F1/EM scores for RAG are commonly used metrics, we found them to be poor proxies for human alignment in Generative Q&A systems. Better metrics for evaluating RAG systems are a key area of future work.
# VII. CONCLUSION

The Blended RAG pipeline is highly effective across multiple datasets despite not being specifically trained on them. Notably, this approach does not necessitate the exemplars for prompt engineering that are often required in few-shot learning, indicating a robust generalization capability within the zero-shot paradigm. This study demonstrated:

- Optimization of R with Blended Search: Incorporating semantic search, specifically Sparse Encoder indices coupled with 'Best Fields' queries, has emerged as the superior construct across all datasets, setting a new benchmark of 87% for retriever accuracy on TREC-COVID.
- Enhancement of RAG via Blended Retrievers: The significant amplification in retrieval accuracy is particularly pronounced in the overall evaluation of the RAG pipeline, surpassing prior benchmarks on fine-tuned sets by a wide margin. Blended RAG sets a new benchmark of a 68% F1 score on SQuAD and a 42% EM score on NQ for non-fine-tuned Q&A systems.

The empirical findings endorse the potency of Blended Retrievers in refining RAG systems beyond a focus on LLM size and type, achieving better results with relatively smaller LLMs and thus setting a foundation for more intelligent and contextually aware Generative Q&A systems.
# ACKNOWLEDGMENT
The authors would like to acknowledge the following for making this study possible.

- IBM Ecosystem: The authors conducted this study while employed at IBM Ecosystem. They would like to express their gratitude to the Ecosystem team and leadership for their support in carrying out this work.
- IBM Research: The authors have received generous feedback on their work from colleagues at IBM Research, particularly Radu Florian, whom the authors would like to acknowledge.
- Elastic: The authors were granted access to the Elastic Search platform and the ELSER index as an embodiment of a sparse index. They would like to thank Elastic for their support.

# REFERENCES

[1] S. Robertson and H. Zaragoza, "The BM25 algorithm," Foundations and Trends in Information Retrieval, 2009.
[2] M. Johnson et al., "KNN algorithms for semantic search," in Proceedings of the International Conference on Machine Learning, 2019.
[3] G. Amati, BM25, pp. 257–260.
Boston, MA: Springer US, 2009.
[4] K. Taunk, S. De, S. Verma, and A. Swetapadma, "A brief review of nearest neighbor algorithm for learning and classification," in 2019 International Conference on Intelligent Computing and Control Systems (ICCS), pp. 1255–1260, 2019.
[5] T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M.-W. Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov, "Natural Questions: a benchmark for question answering research," Transactions of the Association of Computational Linguistics, 2019.
[6] L. L. Wang, K. Lo, Y. Chandrasekhar, R. Reas, J. Yang, D. Burdick, D. Eide, K. Funk, Y. Katsis, R. Kinney, et al., "CORD-19: The COVID-19 Open Research Dataset," ArXiv, 2020.
[7] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning, "HotpotQA: A dataset for diverse, explainable multi-hop question answering," arXiv preprint arXiv:1809.09600, 2018.
[8] Y. Wang, L. Wang, Y. Li, D. He, and T.-Y. Liu, "A theoretical analysis of NDCG type ranking measures," in Conference on Learning Theory, pp. 25–54, PMLR, 2013.
[9] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, "SQuAD: 100,000+ questions for machine comprehension of text," arXiv preprint arXiv:1606.05250, 2016.
[10] S. Reddy, D. Chen, and C. D. Manning, "CoQA: A conversational question answering challenge," Transactions of the Association for Computational Linguistics, vol. 7, pp. 249–266, 2019.
[11] S. Siriwardhana, R. Weerasekera, E. Wen, T. Kaluarachchi, R. Rana, and S. Nanayakkara, "Improving the domain adaptation of retrieval augmented generation (RAG) models for open domain question answering," Transactions of the Association for Computational Linguistics, vol. 11, pp. 1–17, 2023.
[12] N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat, et al., "GLaM: Efficient scaling of language models with mixture-of-experts," in International Conference on Machine Learning, pp. 5547–5569, PMLR, 2022.
[13] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel, "PaLM: Scaling language modeling with pathways," Journal of Machine Learning Research, vol. 24, no. 240, pp. 1–113, 2023.
# APPENDIX A
# DATASET INFORMATION

# A. TREC-COVID dataset

The TREC-COVID Challenge leveraged the continually updated COVID-19 Open Research Dataset (CORD-19) to assess information retrieval systems, providing valuable insights for the current pandemic response and future system development. This iterative evaluation process, involving biomedical experts' relevance judgments, culminated in a comprehensive test collection known as TREC-COVID Complete, facilitating research on dynamic search environments. Table VI shows the basic structure of the TREC-COVID dataset.

# TABLE VI: TREC-COVID Dataset

|id|title|text|metadata|
|---|---|---|---|
|ug7v899j|Clinical features of culture-proven Mycoplasma|OBJECTIVE: This retrospective chart review des...|{'url': 'https://www.ncbi.nlm.nih.gov/pmc/arti...'}|
|02tnwd4m|Nitric oxide: a pro-inflammatory mediator in respiratory tract|Inflammatory diseases of the respiratory tract...|{'url': 'https://www.ncbi.nlm.nih.gov/pmc/arti...'}|
|ejv2xln0|Surfactant protein-D and pulmonary host defense|Surfactant protein-D (SP-D) participates in th...|{'url': 'https://www.ncbi.nlm.nih.gov/pmc/arti...'}|
|2b73a28n|Role of endothelin-1 in lung disease|Endothelin-1 (ET-1) is a 21 amino acid peptide...|{'url': 'https://www.ncbi.nlm.nih.gov/pmc/arti...'}|
|9785vg6d|Gene expression in epithelial cells in response|Respiratory syncytial virus (RSV) and pneumoni...|{'url': 'https://www.ncbi.nlm.nih.gov/pmc/arti...'}|

# B. NQ dataset

We downloaded this data from the BEIR GitHub repository. The dataset was created by Google AI and made available as open source to help advance open-domain question answering; the NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question.
Table VII shows the basic structure of the NQ dataset.

# TABLE VII: NQ Dataset

|id|text|relevant|answers|
|---|---|---|---|
|153|what episode in victorious is give it up|1|Freak the Freak Out|
|7043|malcolm in the middle what is their last name|14|Wilkerson|
|4392|distance from las vegas to red wood forest|16|15 miles|
|8260|what kind of animal is boots from dora|18|anthropomorphic monkey|
|6740|where did the rockefeller tree come from 2014|21|Danville, PA|

# C. HotpotQA dataset

HotpotQA is a question answering dataset with natural, multi-hop questions and strong supervision for supporting facts, built to enable more explainable question answering systems. It was gathered by a group of NLP researchers from Université de Montréal, Stanford University, and Carnegie Mellon University. Table VIII shows the basic structure of the HotpotQA dataset.

# D. CoQA dataset

The CoQA dataset is an open-source, large-scale dataset for building conversational question-answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation. This data contains dev and train splits.
The train split is used to fine-tune models; in our experiments, we used the dev split. This split has 500 documents covering 7,983 question-answer pairs. Table IX shows the basic structure of the dataset.

# E. SQuAD Dataset

The Stanford Question Answering Dataset (SQuAD) is an open-source, large-scale dataset. It is a collection of question-and-answer pairs derived from Wikipedia articles. Two versions are available: SQuAD 1.1 and SQuAD 2.0; we used SQuAD 1.1. This data contains dev and train splits, and we used the dev split for our experiments. It contains 2,067 documents and 10,570 question-answer pairs. Table X shows the basic structure of the dataset.
This outcome challenges the assumption that models trained with extended context limits are inherently better at processing more contexts. Contrary to expectations, our results show that the short-window encoder-decoder models are more efficient in leveraging long contexts. Based on this intriguing observation, in the following sections, we investigate the cause of this difference in context utilization, from models’ willingness to integrate external contexts (§6) and the quality of retrieved passages (§7). # 6 Context Utilization Habits We study how readers utilize contexts, by examining how capable models are at answering questions without context, but instead simply from their pretrained knowledge and generalization capabilities (§6.2). We then study how the readers incorporate test-time contexts with varied qualities (§6.3, §6.4). # 6.1 RAGGED Analysis Guidelines What to Vary: Analyze the reader model’s performance using different slices of instances that represent different qualities of retrieved contexts. The first slice is where the retrieved context in-
# TABLE VIII: HotpotQA Dataset |id|title|text|metadata| |---|---|---|---| |12|Anarchism|Anarchism is a political philosophy that advocates self-governed societies based on voluntary institutions. These are often described as stateless societies, although several authors have defined them more specifically as institutions based on non-hie|'url': 'https://en.wikipedia.org/wiki?curid=12'| |25|Autism|Autism is a neurodevelopmental disorder characterized by impaired social interaction, impaired verbal and non-verbal communication, and restricted and repetitive behavior. Parents usually notice signs in the first two years of their child's life. Thes|'url': 'https://en.wikipedia.org/wiki?curid=25'| |39|Albedo|Albedo is a measure for reflectance or optical brightness (Latin albedo, whiteness) of a surface. It is dimensionless and measured on a scale from zero (corresponding to a black body that absorbs all incident radiation) to one (corresponding t|'url': 'https://en.wikipedia.org/wiki?curid=39'| |290|A|A (named , plural "As", "A's", "a"s, "a's" or "aes" ) is the first letter and the first vowel of the ISO basic Latin alphabet. It is similar to the Ancient Greek letter alpha, from which it derives. The upper-case version consists of the two slanting|'url': 'https://en.wikipedia.org/wiki?curid=290'| |303|Alabama|Alabama ( ) is a state in the southeastern region of the United States. It is bordered by Tennessee to the north, Georgia to the east, Florida and the Gulf of Mexico to the south, and Mississippi to the west. Alabama is the 30th largest by area and th|'url': 'https://en.wikipedia.org/wiki?curid=303'| # TABLE IX: CoQA Dataset |version|data| |---|---| |1|{'source': 'wikipedia', 'id': '3zotghdk5ibi9cex97fepx7jetpso7', 'filename': 'Vatican Library.txt', 'story': 'The Vatican Apostolic Library (), more commonly called the Vatican Library or simply the Vat, is the library of the Holy See, located in Vatican City. Formally established in 1475, although it is much older, it is one of the oldest libraries in the world and contains one of the most significant collections of historical texts. It has 75,000 codices from throughout history, as well as 1.1 million printed books, which include some 8,500 incunabula. The Vatican Library is a research library for history, law, philosophy, science and theology. The Vatican Library is open to anyone who can document their qualifications and research needs. Photocopies for private study of pages from| |1|{'source': 'cnn', 'id': '3wj1oxy92agboo5nlq4r7bndc3t8a8', 'filename': 'cnn fe05c61a7e48461f7883cdec387567029614f07b.story', 'story': 'New York (CNN) – More than 80 Michael Jackson collectibles – including the late pop star's famous rhinestone-studded glove from a 1983 performance – were auctioned off Saturday, reaping a total $2 million. Profits from the auction at the Hard Rock Cafe in New York's Times Square crushed pre-sale expectations of only $120,000 in sales. The highly prized memorabilia, which included items spanning the many stages of Jackson's career, came from more than 30 fans, associates and family members, who contacted Julien's Auctions to sell their gifts and mementos of the singer. Jackson's flashy glove was the big-ticket item of the night, fetching $420,000| |1|{'source': 'gutenberg', 'id': '3bdcf01ogxu7zdn9vlrbf2rqzwplyf', 'filename': 'data/gutenberg/txt/Zane Grey Riders of the Purple Sage.txt/CHAPTER VII 78c077ef5e268383edbec1f1c9d644b1423f889d258d95ff055aa92', 'story': 'CHAPTER VII. 
THE DAUGHTER OF WITHERSTEEN "Lassiter, will you be my rider?"
Jane had asked him.
"I reckon so," he had replied. Few as the words were, Jane knew how infinitely much they implied. She wanted him to take charge of her cattle and horse and ranges, and save them if that were possible. Yet, though she could not have spoken aloud all she meant, she was perfectly honest with herself. Whatever the price to be paid, she must keep Lassiter close to her; she must shield from him the man who had led Milly Erne to Cottonwoods. In her fear she so controlled her mind| |1|{'source': 'cnn', 'id': '3ewijtffvo7wwchw6rtyaf7mfwte0p', 'filename': 'cnn 0c518067e0df811501e46b2e1cd1ce511f1645b7.story', 'story': '(CNN) – The longest-running holiday special still has a very shiny nose. "Rudolph the Red-Nosed Reindeer" premiered on television December 6, 1964, and is now one of the holiday season's perennial favorites. The story of the reindeer who saves Christmas is beloved among children and adults alike. The Rankin-Bass animated film production company used Japanese puppets and stop motion to tell the tale, bolstered by a soundtrack featuring Burl Ives' rendition of the theme song. In the story, Santa's reindeer Donner and his wife have a son, Rudolph, who has the distinction of a nose that glows. He runs away after being made to feel an outcast and links..| # APPENDIX B INDEX AND QUERIES DETAILS A. BM25 Index Fundamental to our exploration is the BM25 index [3], which utilizes TF-IDF (Term Frequency-Inverse Document Frequency) to evaluate document relevance based on query terms. This index serves as a cornerstone for our initial forays into search
# TABLE X: SQuAD Dataset

|data|version|
|---|---|
|{'title': 'Super Bowl 50', 'paragraphs': [{'context': 'Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominent...|1.1|

# TABLE XI: BM25 Queries

|Query Type|Query Syntax|Explanation|
|---|---|---|
|Match Query|"query": {"match": {"content": {"query": ""}}}|The match query is the standard query for performing a full-text search, including options for fuzzy matching.|
|Cross Fields (Blended Query)|{"query": {"multi_match": {"query": "", "type": "cross_fields", "analyzer": "standard", "fields": ["title", "text"]}}}|Treats fields with the same analyzer as though they were one big field. Looks for each word in any field. Fields are either title or content.|
|Most Fields|{"query": {"multi_match": {"query": "", "type": "most_fields", "fields": ["title", "text"]}}}|Finds documents that match any field and combines the scores from each field. Returns the top documents based on the combined score.|
|Phrase Prefix|{"query": {"multi_match": {"query": "", "type": "phrase_prefix", "fields": ["title", "text"]}}}|Runs a match_phrase_prefix query on each field and uses the score from the best field.|
|Bool Prefix|{"query": {"multi_match": {"query": "", "type": "bool_prefix", "fields": ["title", "text"]}}}|Creates a match_bool_prefix query on each field and combines the scores from each field.|
|Best Fields|{"query": {"multi_match": {"query": "", "type": "best_fields", "analyzer": "standard", "fields": ["title", "text"]}}}|Finds documents that match any field, but uses the score from the best field.|
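For illustration, a query like the 'Best Fields' row above could be issued through the Elasticsearch Python client as sketched below. The endpoint and index name are assumptions, and the 'title'/'text' fields follow Table XI.

```python
from elasticsearch import Elasticsearch

# Placeholder endpoint and index name for a BM25-indexed corpus.
es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="nq_bm25",
    size=10,
    query={
        "multi_match": {
            "query": "when was the declaration of independence signed",
            "type": "best_fields",
            "analyzer": "standard",
            "fields": ["title", "text"],
        }
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```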
# B. KNN Index

Dense Vector (KNN) Index: In pursuit of enhanced retrieval precision, we construct a dense vector index. This approach, empowered by sentence transformers, advances the retrieval process beyond the traditional confines of keyword and similarity searches. Utilizing k-nearest neighbor (KNN) [4] algorithms, this vector search method proffers a marked improvement in accuracy by identifying the proximity of vector representations derived from document and query content. We used the KNN index with the KNN query plus a match query and combinations of multi-match queries. Table XII shows the query examples used in our experiments.

# TABLE XII: KNN Queries and Score Calculations

|Query Type|Query Syntax|Score Calculation|
|---|---|---|
|KNN Simple Query|"knn": {"field": "vector", "query_vector": [], "k": 10, "num_candidates": 100, "fields": ["text", "title"]}|Dense vector score|
|KNN + Cross Fields (Blended Query)|search_query1 = {"query": {"multi_match": {"query": "", "type": "cross_fields", "fields": ["title", "text"]}}, "knn": {"field": "vector", "query_vector": [54, 10, -2], "k": 10, "num_candidates": 100, "boost": 0.1}, "size": 10}|Dense vector score + boost + multi-match query score|
|KNN + Most Fields|search_query1 = {"query": {"multi_match": {"query": "", "type": "most_fields", "fields": ["title", "text"]}}, "knn": {"field": "vector", "query_vector": [54, 10, -2], "k": 10, "num_candidates": 100, "boost": 0.1}, "size": 10}|Dense vector score + boost + multi-match query score|
|KNN + Phrase Prefix|search_query1 = {"query": {"multi_match": {"query": "", "type": "phrase_prefix", "fields": ["title", "text"]}}, "knn": {"field": "vector", "query_vector": [54, 10, -2], "k": 10, "num_candidates": 100, "boost": 0.1}, "size": 10}|Dense vector score + boost + multi-match query score|
|KNN + Bool Prefix|search_query1 = {"query": {"multi_match": {"query": "", "type": "bool_prefix", "fields": ["title", "text"]}}, "knn": {"field": "vector", "query_vector": [54, 10, -2], "k": 10, "num_candidates": 100, "boost": 0.1}, "size": 10}|Dense vector score + boost + multi-match query score|
|KNN + Best Fields|search_query1 = {"query": {"multi_match": {"query": "", "type": "best_fields", "fields": ["title", "text"]}}, "knn": {"field": "vector", "query_vector": [54, 10, -2], "k": 10, "num_candidates": 100, "boost": 0.1}, "size": 10}|Dense vector score + boost + multi-match query score|

# C. Sparse Encoder Model Index

Sparse Encoder (Sparse EncodeR Retriever Model) Index: The Sparse EncodeR Retriever Model index is emblematic of our commitment to semantic search. This model, an amalgam of semantic understanding and similarity-based retrieval, allows for the fusion of these paradigms to formulate hybrid queries.
By harnessing this index, we capture the nuanced relationships between terms, giving a more faithful representation of user intent and document relevance. We used the Sparse Encoder Model-based index with the sparse encoder query, the match query, and combinations of multi_match queries. Table XIII shows the query templates used in our experiments.
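As a complement to the templates in Table XIII below, the following sketch shows how one such hybrid query (a sparse text_expansion clause combined with a best_fields multi_match clause) could be issued from Python; the model id, index name, and query text are placeholders.

```python
# Minimal sketch of a hybrid query mirroring the "SERM + Best field" template:
# a sparse-encoder text_expansion clause plus a best_fields multi_match clause.
# Model id, index name, and query text are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

question = "what are the common symptoms of covid-19"
hybrid_query = {
    "query": {
        "bool": {
            "should": {
                "text_expansion": {
                    "ml.tokens": {
                        "model_id": ".sparse_encoder_retriever_model_1",  # placeholder id
                        "model_text": question,
                    }
                }
            },
            "must": {
                "multi_match": {
                    "query": question,
                    "type": "best_fields",
                    "fields": ["title", "text"],
                }
            },
        }
    },
    "size": 10,
}

response = es.search(index="docs", body=hybrid_query)
```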
# TABLE XIII: Query Types and Score Calculations

|Query Type|Query Syntax|Score Calculation|
|---|---|---|
|Sparse EncodeR Retriever Model (SERM) + Match Query|{"query": {"text_expansion": {"ml.tokens": {"model_id": ".Sparse EncodeR retriever model_model_1", "model_text": ""}}}}|Sparse vector score|
|SERM + Cross field|{"query": {"bool": {"should": {"text_expansion": {"ml.tokens": {"model_text": "", "model_id": ".Sparse EncodeR retriever model_model_1"}}}, "must": {"multi_match": {"query": "", "type": "cross_fields", "analyzer": "standard", "fields": ["title", "text"]}}}}}|Sparse vector score + boost + Multi match query score|
|SERM + Most Field|{"query": {"bool": {"should": {"text_expansion": {"ml.tokens": {"model_text": "", "model_id": ".Sparse EncodeR retriever model_model_1"}}}, "must": {"multi_match": {"query": "", "type": "most_fields", "fields": ["title", "text"]}}}}}|Sparse vector score + boost + Multi match query score|
|SERM + Phrase prefix|{"query": {"bool": {"should": {"text_expansion": {"ml.tokens": {"model_text": "", "model_id": ".Sparse EncodeR retriever model_model_1"}}}, "must": {"multi_match": {"query": "", "type": "phrase_prefix", "fields": ["title", "text"]}}}}}|Sparse vector score + boost + Multi match query score|
|SERM + Bool prefix|{"query": {"bool": {"should": {"text_expansion": {"ml.tokens": {"model_text": "", "model_id": ".Sparse EncodeR retriever model_model_1"}}}, "must": {"multi_match": {"query": "", "type": "bool_prefix", "fields": ["title", "text"]}}}}}|Sparse vector score + boost + Multi match query score|
|SERM + Best field|{"query": {"bool": {"should": {"text_expansion": {"ml.tokens": {"model_text": "", "model_id": ".Sparse EncodeR retriever model_model_1"}}}, "must": {"multi_match": {"query": "", "type": "best_fields", "fields": ["title", "text"]}}}}}|Sparse vector score + boost + Multi match query score|

# APPENDIX C ABBREVIATION TABLE

# TABLE XIV: List of Abbreviations

|Abbreviation|Full Form|
|---|---|
|KNN|k-nearest neighbour|
|MQ|Match Query|
|BF|Best Fields|
|SERM|Sparse EncodeR Retriever Model|
|NQ|Natural Questions|
|EM|Exact Match|

# APPENDIX D RETRIEVER RESULTS - TOP-K ACCURACY RESULTS

The following section reports the retrieval accuracy of our evaluation approach, quantified by top-k metrics where k ∈ {5, 10, 20}, across various datasets:
1. NQ (Natural Questions) dataset
2. TREC-Covid dataset
3. SQuAD (Stanford Question Answering Dataset)
4. CoQA (Conversational Question Answering)
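For reference, a minimal sketch of how the top-k retrieval accuracy reported in the tables below can be computed is given here: a question counts as a hit if any of its gold passage ids appears among the first k retrieved ids. The function and toy data are illustrative, not the original evaluation code.

```python
from typing import Dict, List, Set

def top_k_accuracy(retrieved: Dict[str, List[str]],
                   gold: Dict[str, Set[str]],
                   k: int) -> float:
    """Percentage of queries whose top-k retrieved ids contain a gold passage id."""
    hits = 0
    for query_id, ranked_ids in retrieved.items():
        if gold.get(query_id, set()) & set(ranked_ids[:k]):
            hits += 1
    return 100.0 * hits / max(len(retrieved), 1)

# Toy usage: q1 has its gold passage in the top-3, q2 does not.
retrieved = {"q1": ["p3", "p7", "p1"], "q2": ["p9", "p2", "p4"]}
gold = {"q1": {"p1"}, "q2": {"p5"}}
print(top_k_accuracy(retrieved, gold, k=3))  # 50.0
```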
# Tables Synopsis

Tables XV, XVI, and XVII demonstrate the capability of our retrieval methodology across diverse informational contexts. The results show that the Blended Retriever offers better accuracy than current methods across all datasets. The Sparse EncodeR Retriever Model (SERM) based index with Best Fields queries often gives the best results, with 88% top-5 accuracy on the NQ dataset and 94% on TREC-Covid; the numbers increase further for top-10 and top-20 accuracy. Figures 10, 11, and 12 visualize these results.

# Table XV: Top-5 Retrieval accuracy

|Top-5 retrieval accuracy|BM25 + MQ|BM25 + BF|KNN + MQ|KNN + BF|SERM + MQ|SERM + BF|
|---|---|---|---|---|---|---|
|NQ Dataset|25.19|85.05|87|87.67|88|88.22|
|Trec-covid Score1|36|40|36|40|46|48|
|Trec-covid Score2|86|86|86|92|92|94|
|HotpotQA|49.52|52.28| | | | |
|SQuAD|91.5|91.52|94.86|94.89|90.7|90.7|

# Table XVI: Top-10 Retrieval accuracy

|Top-10 retrieval accuracy|BM25 + MQ|BM25 + BF|KNN + MQ|KNN + BF|SERM + MQ|SERM + BF|
|---|---|---|---|---|---|---|
|NQ Dataset|36.7|86.26|88.46|88.66|88.55|88.77|
|Trec-covid Score1|66|72|66|74|52|78|
|Trec-covid Score2|92|96|96|97|64|98|
|HotpotQA|55|58.93| | |62.5|65.7|
|SQuAD|94.43|94.49|97.43|97.43|94.13|94.16|

# Table XVII: Top-20 Retrieval accuracy

|Top-20 retrieval accuracy|BM25 + MQ|BM25 + BF|KNN + MQ|KNN + BF|SERM + MQ|SERM + BF|
|---|---|---|---|---|---|---|
|NQ Dataset|37.13|87.12|88.58|88.66|88.66|88.88|
|Trec-covid Score1|86|90|90|92|94|98|
|Trec-covid Score2|98|100|100|100|100|100|
|HotpotQA|61.32| | | | | |
|SQuAD|96.3|96.36|98.57|98.58|96.49|96.52|

Fig. 10: Top-10 Retrieval accuracy: (a) NQ dataset; (b) Trec-Covid.

# Appendix E RAG EVALUATION RESULTS

Distinctively, our Blended RAG approach has not undergone training on any related corpora.
# Table 1: Performance of four reader models when using no context or top-1 retrieved passages.

|Model|NQ|HotpotQA|BioASQ|
|---|---|---|---|
|Flan-T5 XXL|16.7|33.8|27.4|
|Flan-UL2|23.7|35.0|29.6|
|LLaMa2 7B|21.5|29.2|23.5|
|LLaMa2 70B|34.1|35.6|32.5|

Behaviors to Expect: Models may show varying degrees of sensitivity to the quality of the context. Reader models provided with only gold passages often act as an upper bound on reader performance with top-k passages. Although one might expect reader models given no context to act as a lower bound on performance with top-k passages, that is not always the case; it depends on the reader's ability to discern between potentially sufficient internal knowledge and irrelevant context knowledge.

Implications of Behaviors: For practitioners, these comparisons indicate how effectively the reader model sifts signal from noise. If the model performs close to how it would with only gold passages, it is highly effective at using relevant information and ignoring irrelevant passages. Conversely, a significant drop in performance with lower-quality contexts suggests a reliance on high-quality retrieval and a potential vulnerability to noise. This analysis can guide the development or selection of more robust models that maintain performance despite variability in context quality.

# 6.2 No-context Generalization from Pre-trained Knowledge

We evaluate how capable models are at answering questions directly from their parameters, without any context, by evaluating reader test-time performance with no context. We denote this setup as no-ctx and compare it with the standard setting using top-1 passages from ColBERT.
It harnesses an optimized combination of field selections, query formulations, indices, and Large Language Models (LLMs) to render precise responses; we used Flan-T5 XXL as the generator in this pipeline. Consequently, Blended RAG shows enhanced performance in the RAG use case even without dataset-specific fine-tuning. This characteristic makes it particularly advantageous for large enterprise datasets, where fine-tuning is often impractical or unfeasible, underscoring this research's principal application.
Fig. 11: Top-10 Retrieval accuracy: (a) Trec-Covid Score 2; (b) HotpotQA.
Fig. 12: Top-k Retrieval accuracy: (a) Top-10; (b) Top-20.

**Table 17 shows the RAG evaluation results for the NQ Dataset with all relevant metrics.**

|Query Types|EM|F1|BLEU|METEOR|ROUGE|Sentence similarity|SimHash|Perplexity|BLEURT|BERTScore|
|---|---|---|---|---|---|---|---|---|---|---|
|BM25 + MQ|32.91|40.4|3.81|33.47|42.65|57.47|18.95|3.15|27.73|6.11|
|BM25 + BF|37.58|47.31|4.63|3.98|49.79|63.33|17.02|3.07|13.62|65.11|
|KNN + MQ|40.21|50.51|4.77|42.11|53.32|67.02|15.94|3.04|5.12|67.27|
|KNN + BF|40.32|50.45|5.05|42.34|53.24|66.88|15.94|3.048|5.7|67.3|
|ELSER + MQ|42.63|53.96|5.27|45.13|57.07|70.47|14.95|3.01|2.02|69.25|
|ELSER + BF|42.3|53.25|5.24|44.77|56.36|69.65|15.14|3.02|0.24|68.97|

# APPENDIX F GITHUB REPO

The GitHub repo for this work is https://github.com/ibm-ecosystem-engineering/Blended-RAG
# Telco-RAG: Navigating the Challenges of Retrieval-Augmented Language Models for Telecommunications

Andrei-Laurentiu Bornea∗, Fadhel Ayed∗, Antonio De Domenico∗, Nicola Piovesan∗, Ali Maatouk+
∗Paris Research Center, Huawei Technologies, Boulogne-Billancourt, France
+Yale University, New Haven, Connecticut, USA
(arXiv:2404.15939v2 [cs.IR], 26 Apr 2024)

Abstract—The application of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems in the telecommunication domain presents unique challenges, primarily due to the complex nature of telecom standard documents and the rapid evolution of the field. The paper introduces Telco-RAG, an open-source RAG framework designed to handle the specific needs of telecommunications standards, particularly 3rd Generation Partnership Project (3GPP) documents. Telco-RAG addresses the critical challenges of implementing a RAG pipeline on highly technical content, paving the way for applying LLMs in telecommunications and offering guidelines for RAG implementation in other technical domains.

# I. INTRODUCTION

Large language models (LLMs) are designed to understand, generate, and process text by leveraging extensive training data. These models, built upon architectures such as Transformers, employ deep learning techniques to analyze and predict language patterns [1]. Their capabilities are largely attributed to the vast amount of text they process during training, allowing LLMs to develop a nuanced understanding of language, context, and even idiomatic expressions. The utility of LLMs extends across various domains, including telecommunications, where such models can improve operational efficiency and enhance customer satisfaction [2].

Standalone language models rely solely on their internal representations and learned parameters to generate text, which yields only modest knowledge in technical domains such as telecommunication standard documents [3]. Two primary methodologies have emerged to address this challenge: fine-tuning and retrieval-augmented generation (RAG). Fine-tuning enables language model specialization via further training of a fraction of the parameters on a domain-specific dataset. However, fine-tuning can incur a high computational cost [4] and is not suited for rapidly evolving domains where new knowledge must be incorporated on a regular basis. RAG stands out as an appealing alternative due to its cost-effectiveness, adaptability, and scalability [5]. In the RAG paradigm, knowledge from external sources is fetched in real time when a query is addressed to the system, which is particularly suited to quickly evolving fields [6].

In the telecommunication industry, a retrieval-augmented language model that masters complex industry-specific knowledge, such as the content of technical standards, would hold significant practical value [7]. For instance, it would allow the development of an advanced chatbot for professionals.
Such a tool would increase the accuracy and speed with which telecommunications professionals access and comply with international standards, fostering quicker development cycles and improved regulatory adherence. In this work, we concentrated our efforts on telecommunication standards, and specifically 3rd Generation Partnership Project (3GPP) documents. This focus was motivated by the aforementioned practical utility of a chatbot specialized in 3GPP and by the observation that even state-of-the-art language models, such as GPT-4, exhibit scarce knowledge of this content [3]. We have identified that the conventional RAG setup, which typically extracts three to five data segments of 512 tokens each [8], does not adequately meet the intricate demands of telecommunications standards. Consequently, we have developed a specialized RAG pipeline named Telco-RAG, specifically optimized for 3GPP documents. Besides, through our design and methodology, we aim to provide generally applicable guidelines to overcome the common challenges faced when implementing a RAG pipeline in highly technical domains. These include identifying the most impactful hyperparameters to tune, recommending default settings [9], reducing the high random access memory (RAM) usage, and refining the user's query [10]. We expect that Telco-RAG, which we make publicly available as an open-source chatbot for 3GPP standards, and the associated results will contribute substantially to integrating AI in the telecommunications field.
# II. METHODOLOGY

RAG improves the quality of LLM-generated responses by providing the LLM with external sources of knowledge drawn from a large corpus of documents. RAG pipelines start by splitting the document corpora into fixed-size segments called chunks, whose length is the chunk size. Using an embedding model, each chunk is transformed into a vector representation that captures the semantics of the segment. When a query is presented, the system identifies the relevant chunks by computing a similarity between the chunks' embeddings and the query's embedding. Lastly, RAG presents the relevant chunks, called the context, alongside the query, to an LLM that generates the final response.
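To make the pipeline description concrete, the following minimal sketch implements the retrieval stage just described: fixed-size chunking, embedding, an inner-product FAISS index, and top-k retrieval. The embed function is a stand-in for an embedding model (e.g., text-embedding-3-large) and is assumed for illustration; this is not Telco-RAG's released code.

```python
# Minimal sketch of the generic RAG retrieval stage described above.
# `embed` is a placeholder for a real embedding model call.
import numpy as np
import faiss

def chunk(text: str, chunk_size: int = 125) -> list[str]:
    """Split a document into fixed-size (whitespace-token) chunks."""
    tokens = text.split()
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), chunk_size)]

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding function; replace with a real model call."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(texts), 1024)).astype("float32")

documents = ["... text of one 3GPP specification ...", "... text of another specification ..."]
chunks = [c for d in documents for c in chunk(d)]

vectors = embed(chunks)
index = faiss.IndexFlatIP(vectors.shape[1])   # inner-product index
index.add(vectors)

query_vec = embed(["What does the RRC layer do?"])
_, ids = index.search(query_vec, 5)           # ids of the top-5 chunks
context = "\n".join(chunks[i] for i in ids[0] if i >= 0)
```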
Fig. 1: The Telco-RAG pipeline: a query enhancement stage (1. glossary enhancement using the vocabulary for 3GPP specifications, 2. NN router over the embeddings database, 3. a first retrieval against the FAISS index, 4. query refinement), followed by the retrieval stage and answer generation with an LLM such as GPT-3.5.
Any implementation of a RAG system for telecommunications will face four critical challenges: sensitivity to hyperparameters [9], vague user queries [10], high RAM requirements, and sensitivity to the quality of the prompts [11]. In particular, poor prompts affect the capacity of the LLM to comprehend the context of a query and reply correctly, while vague queries limit the precision of the retrieval stage.

Fig. 1 depicts the proposed Telco-RAG, tailored for LLM deployment in the telecommunications sector. The proposed system aims to improve the retrieval and processing of technical documents, focusing specifically on 3GPP standards. It features a dual-stage pipeline, including a query enhancement stage and a retrieval stage. The query enhancement stage includes four steps: initially, it employs a custom glossary of technical terms to augment the query, enhancing contextual understanding. Subsequently, a neural network (NN) router selectively identifies relevant documents from the document corpus. This sub-selection optimizes accuracy and efficiency, reducing the number of documents loaded in the preliminary retrieval (step 3), which provides the first round of context used to further refine the queries (step 4). Following this, the retrieval stage utilizes the NN router to select the documents (step 1) on which the RAG realizes the second retrieval (step 2). Note that using the improved query boosts the accuracy of the second retrieval thanks to more accurate embedding representations. The pipeline ends with a generating component, relying on a state-of-the-art language model such as GPT-3.5, which generates responses based on the retrieved context.

# A. Hyperparameters Optimization

As numerous studies have shown, hyperparameter optimization can provide large gains for retrieval-augmented models (see [8], [12]). Therefore, using a synthetic dataset constructed for this purpose, we conducted a meticulous optimization of the chunk size, context length, indexing strategy, and embedding models (see Sec. III). These hyperparameters are explained below:

- Chunk Size: The length of each text segment the RAG processes at once.
- Context Length: The length of the context yielded by the retrieval component.
- Embedding Models: The algorithms that transform text into numerical representations.
- Indexing Strategy: The FAISS index [13] by which the model assesses the relevance of each text chunk with respect to the given query.

# B. Query Augmentation

Numerous studies indicate significant improvements when augmenting vague queries in RAG pipelines [10]. In telecom documents, two major issues arise with vague queries: the abundance of technical terms and abbreviations in questions, and the inability of the RAG to discern user intent, leading to the retrieval of irrelevant, albeit similar, information.

1) Lexicon-enhanced Queries: In this section, we address the challenge posed by the prevalence of technical terms and abbreviations in questions, which are often difficult to capture accurately in the embedding space. To tackle this issue, we utilized the "Vocabulary for 3GPP Specifications" [14] to construct two dictionaries: one for abbreviations and another for terms with their definitions. Integrating these dictionaries into the query enhancement block of the pipeline (see Fig. 1) allowed us to refine the embedding process.
For each question, we enriched the embedding with the definitions of relevant terms from the dictionaries. This process refines the similarity evaluation between the question and potential answers.
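A minimal sketch of this glossary enhancement step is shown below. The two dictionaries and the matching logic are illustrative placeholders standing in for the 3GPP vocabulary files and the actual implementation.

```python
# Minimal sketch of lexicon/glossary query enhancement: append the definitions
# of matched abbreviations and technical terms to the query before embedding.
# The dictionaries and their contents are illustrative placeholders.
abbreviations = {"RRC": "Radio Resource Control"}
terms = {"handover": "Transfer of a UE connection from one cell to another."}

def enhance_query(query: str) -> str:
    extra = []
    for abbr, expansion in abbreviations.items():
        if abbr in query.split():
            extra.append(f"{abbr}: {expansion}")
    for term, definition in terms.items():
        if term.lower() in query.lower():
            extra.append(f"{term}: {definition}")
    if not extra:
        return query
    return query + "\nTerms and Definitions:\n" + "\n".join(extra)

# The enhanced string is what gets embedded and compared against the chunks.
enhanced = enhance_query("When is an RRC connection re-established after handover failure?")
```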
Incorporating domain-specific knowledge in this way makes the retrieval more reliable. We also integrated relevant terms from the dictionaries into our final prompt, built using the retrieved context, the user query, and the defined terms and definitions. This ensures that the LLM is prompted with the technical vocabulary and definitions needed to process the question effectively. This method is employed in the Glossary Enhancement block of our pipeline (see Fig. 1).

2) Generating Candidate Answers: We use a language model to generate plausible answers based on the preliminary context selected in Retrieval 1. We then add these generated candidate answers to the user's query, clarifying its intent and preventing the retrieval of irrelevant information. The embedded enhanced query improves the identification of the relevant information in the corpora, yielding a superior final answer quality.

# C. Enhancing the RAM Usage of the Telco-RAG

For large document corpora, the dataset of embedded chunks becomes so voluminous that it exceeds the limitations of RAM capacity. Besides, we show in this work that, for highly technical documents, smaller chunks yield better performance (see Sec. III-A2). However, the smaller the chunks, the more text segments the RAG must process, which increases the required RAM resources. To deal with this issue, we recall that the 3GPP standards categorize specifications into 18 distinct series [15]. Each series provides the technical details of a specific aspect of mobile telecommunications technologies (radio access, core network components, security, etc.). To improve RAM usage efficiency, we developed an NN router tailored to predict the relevant 3GPP series for a query. This model enables selective loading of embeddings, thus drastically reducing RAM usage.

The NN router has two input channels. The first channel processes input 1, a 1024-sized vector embedding the initial user query, while the second channel processes input 2, an 18-sized vector. Each entry of this vector is defined as the inner product between the query embedding and the embedding of each 3GPP series summary description, generated through a dedicated LLM. Central to our model are two adjustable trainable parameters, α and β, which modulate the influence exerted by each input stream on the resultant output. The overall architecture is illustrated in Fig. 2.
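As an illustration of the "Generating Candidate Answers" step described above, the following sketch enhances a query with LLM-generated candidate answers before the second retrieval. The llm function and the prompt wording are placeholders, not the actual Telco-RAG implementation.

```python
# Minimal sketch of query refinement with candidate answers: generate plausible
# answers from the preliminary context and append them to the query, so that the
# second retrieval operates on a richer embedding. `llm` is a placeholder.
def llm(prompt: str) -> str:
    # Stand-in for a call to a generator such as GPT-3.5; returns a canned answer here.
    return "The UE performs RRC connection re-establishment as specified in TS 38.331."

def refine_query(query: str, preliminary_context: str) -> str:
    candidates = llm(
        "Considering the context below, list plausible answers to the question.\n"
        f"Context:\n{preliminary_context}\n\nQuestion: {query}"
    )
    # The refined query (question + candidate answers) is re-embedded for Retrieval 2.
    return f"{query}\nCandidate answers:\n{candidates}"
```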
# RAG with Multi-Hop Queries

# Retrieval-Augmented Generation (RAG)

In a RAG application, we utilize an external corpus, denoted as D, which comprises multiple documents and serves as the knowledge base. Each document within this corpus, represented as d_i ∈ D, is segmented into a set of chunks.
Figure 4: NQ results when gold passages are included in the top-k passages. Gray lines on the top show results when providing only the gold passages within the top-k retrieved passages. Figure 4 shows that LLAMA models are more severely affected by noisy context, as their scores decrease more sharply as k increases. Notably, around k = 15 and k = 20, LLAMA 7B and
For the processing of the embedded query, our model implements a series of linear transformations that reduce its dimensionality from 1024 to 256. This reduction incorporates dropout layers to mitigate overfitting and a batch normalization layer to enhance training stability. Concurrently, the second input stream begins with a dimensionality of 18, is preprocessed through a softmax layer, and is then expanded to 256 dimensions so that the contributions from both input streams can be processed jointly in the decision-making process. The outputs from these pathways are weighted by the two trainable parameters, α and β. These weighted outputs are summed into a unified representation, which our neural network model uses to identify the target 3GPP series with heightened accuracy.
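Based on the description above, a PyTorch-style sketch of the NN router could look as follows. Layer sizes follow the text (1024 to 256 with dropout and batch normalization, 18 through a softmax and expanded to 256, trainable weights alpha and beta, and an 18-way classifier), but the exact number of layers and the hyperparameter values are assumptions.

```python
# Sketch of the NN router described above (not the authors' released code):
# two input streams are projected to 256 dims, weighted by trainable parameters
# alpha and beta, summed, and classified into one of the 18 3GPP series.
import torch
import torch.nn as nn

class NNRouter(nn.Module):
    def __init__(self, embed_dim: int = 1024, n_series: int = 18, hidden: int = 256):
        super().__init__()
        self.query_branch = nn.Sequential(
            nn.Linear(embed_dim, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
        )
        self.series_branch = nn.Sequential(
            nn.Softmax(dim=-1), nn.Linear(n_series, hidden), nn.ReLU(),
        )
        self.alpha = nn.Parameter(torch.tensor(1.0))  # weight of the query stream
        self.beta = nn.Parameter(torch.tensor(1.0))   # weight of the series stream
        self.classifier = nn.Linear(hidden, n_series)

    def forward(self, query_emb: torch.Tensor, series_scores: torch.Tensor) -> torch.Tensor:
        fused = self.alpha * self.query_branch(query_emb) + self.beta * self.series_branch(series_scores)
        return self.classifier(fused)  # logits over the 18 3GPP series

router = NNRouter()
logits = router(torch.randn(4, 1024), torch.randn(4, 18))  # a batch of 4 queries
```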
Fig. 2. The proposed NN router architecture.

Integrating this NN model into the Telco-RAG framework significantly elevates the ability to discern and categorize standards-related queries, paving the way for more targeted and efficient information retrieval. To train the NN router, we created a synthetic dataset comprising 30,000 questions from 500 documents of 3GPP Release 18, with their originating series serving as target labels. The adoption of synthetic data for training and testing our NN router reduces the risk of overfitting the dataset on which we test the Telco-RAG pipeline [16].

# D. Prompt Engineering

Prompt engineering plays a crucial role in RAG systems, particularly in ensuring that the RAG maintains focus on the user's question while comprehending the broader context [11]. In our study, we designed a structured, dialogue-oriented prompt, as the prompt engineering literature has shown better LLM performance with this format [17]. More specifically, the final prompt of Telco-RAG starts with the query, followed by the definitions of the terms and abbreviations. After that, the prompt includes the retrieved context. Importantly, the prompt then repeats the question together with the related options and a query instruction, helping the model to effectively generate a relevant response. The designed format of the LLM prompt is as follows (a sketch of assembling it is given after the list):

- Please provide the answers to the following multiple-choice question: <Question>
- Terms and Definitions: <Defined Terms>
- Abbreviations: <Abbreviations>
- Considering the following context: <Retrieved Context>
- Please provide the answers to the following multiple-choice question: <Question>
- Options: <Options>
- Write only the option number corresponding to the correct answer.
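A minimal sketch of assembling this final prompt is given below; all field values are placeholders and the wording mirrors the template above.

```python
# Sketch: assembling the final prompt from the template above. Field values are placeholders.
def build_prompt(question: str, options: list[str], terms: str,
                 abbreviations: str, context: str) -> str:
    numbered = "\n".join(f"{i + 1}) {opt}" for i, opt in enumerate(options))
    return (
        "Please provide the answers to the following multiple-choice question:\n"
        f"{question}\n\n"
        f"Terms and Definitions:\n{terms}\n\n"
        f"Abbreviations:\n{abbreviations}\n\n"
        f"Considering the following context:\n{context}\n\n"
        "Please provide the answers to the following multiple-choice question:\n"
        f"{question}\n\nOptions:\n{numbered}\n"
        "Write only the option number corresponding to the correct answer."
    )
```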
# TABLE I: Main parameters of the Benchmark RAG and Telco-RAG architectures

|Name|Embedding Model|Chunk Size|Context Length|Indexing Strategy|Glossary Enhancement|NN Router|Enhanced Final Prompt|
|---|---|---|---|---|---|---|---|
|Benchmark RAG|text-embedding-3-large [1024 size]|500|1500-2500|IndexFlatIP|No|No|No|
|Telco-RAG|text-embedding-3-large [1024 size]|125|1500-2500|IndexFlatIP|Yes|Yes|Yes|

# III. EXPERIMENTAL RESULTS

In this section, we present the performance of the Telco-RAG framework in enhancing the capabilities of LLMs applied to the telecom domain. To achieve this task, we have used two sets of multiple choice questions (MCQs), one optimization set and one evaluation set, specifically created to assess the knowledge of LLMs in telecommunications. The optimization set is composed of 2000 MCQs generated following the methodology presented in [3] and based on documents from 3GPP Rel. 18. The second set consists of the 1840 TeleQnA MCQs related to 3GPP documentation [3]. The purpose of Telco-RAG is to effectively help professionals with complex queries from the telecom domain. The MCQ format, though very convenient for evaluation purposes, does not realistically correspond to the type of queries that will be submitted to the system; i.e., a user will likely not provide any options to the LLM. Hence, we decided not to include the options in the retrieval process and to use them solely to assess Telco-RAG accuracy. In the following results, accuracy measures the fraction of queries in the datasets that Telco-RAG answers correctly. Table I presents the main parameters of Telco-RAG and the benchmark RAG architecture compared throughout the following experiments.

# A. Hyperparameters Optimization

1) Selecting the Embedding Model: In this experiment, we compare the performance of two OpenAI embedding models for the Telco-RAG framework: text-embedding-3-large and text-embedding-ada-002 [18]. Text-embedding-3-large extends the capabilities of its predecessor, text-embedding-ada-002, and is trained using Matryoshka Representation Learning [19], a technique that allows shortening the embedding vectors, which reduces computational and RAM requirements while preserving strong performance. Our results show that, on average, the text-embedding-3-large model, with a fixed embedding dimension of 1024, improves the accuracy of Telco-RAG by 2.29% over the text-embedding-ada-002 model.

2) Chunk Size Optimization: We have assessed the influence of varying chunk sizes (125, 250, and 500 tokens) on the accuracy of RAG systems. Importantly, there is an inverse relationship between chunk size and Telco-RAG accuracy. These results highlight the critical importance of optimizing the chunk size, which led to an average improvement of 2.9% in accuracy when selecting a chunk size of 125 tokens instead of 500 tokens, for equal context length.

3) Context Length Optimization: Fig. 3 shows the linear regression fitted on the RAG accuracy computed for a diverse set of context lengths, with different configurations. The results show an ascending trend of accuracy as a function of context length. As a side note, we noticed a drop in performance when the context length exceeds 1500 tokens; however, this is alleviated by presenting the query twice, before and after the context, as discussed in Sec. II-D.
4) Indexing Strategy Selection: In our research, we have evaluated the impact of different indexing strategies on the accuracy of Telco-RAG: 1) IndexFlatL2, 2) IndexFlatIP, and 3) IndexHNSW. IndexFlatL2 is based on the Euclidean distance, while IndexFlatIP uses the inner (dot) product. In contrast, IndexHNSW is an approximate method for efficient search in high-dimensional spaces using the Euclidean distance. IndexHNSW has shown considerably inferior performance compared to IndexFlatIP and IndexFlatL2. Importantly, despite marginal differences in accuracy, IndexFlatIP outperformed IndexFlatL2 in 80% of our experiments.

# B. Query Augmentation

In this section, we evaluate the gain in accuracy brought by enhancing the user queries through the methodology described in Sec. II-B.
1) Lexicon-enhanced Queries: To validate the effectiveness of this approach, we applied it to a subset of lexicon-focused questions from TeleQnA [3], which were designed to evaluate the understanding of abbreviations and technical terms within the telecommunications sector. Our results, presented in Table II, show that the designed RAG framework enhances the baseline LLM accuracy on lexicon questions from 80.2% to 84.8%, while the lexicon-enhanced queries of Telco-RAG push it further to 90.8%.
# TABLE II: Accuracy on lexicon-focused TeleQnA questions

|Baseline (No Context)|Benchmark RAG|Telco-RAG|
|---|---|---|
|80.2%|84.8%|90.8%|

2) Enhancing User's Query With Candidate Answers: To retrieve a better context, we enhance the user's query with candidate answers generated by an LLM (step 4 of the query enhancement stage). Table III presents the accuracy of Telco-RAG with and without the usage of these candidate answers. Specifically, for the text-embed-ada-002 embedding model, the addition of candidate answers considerably improves the query embedding representations, bringing a 3.56% average accuracy gain. The accuracy of the RAG with text-embed-ada-002 and refined queries is larger than the one achieved using text-embed-3-large without refined queries. Furthermore, with text-embed-3-large, we observe a gain of 2.06% in average accuracy when using candidate answers in the retrieval process.

# TABLE III: RAG's accuracy with and without refined queries

|Embedding Model|Chunk Size|Context Length|Initial Accuracy|Refined Accuracy|
|---|---|---|---|---|
|Text-embed-ada-002|125|750|0.729|0.777 (+4.8%)|
|Text-embed-ada-002|250|2000|0.770|0.795 (+2.5%)|
|Text-embed-ada-002|500|2000|0.740|0.774 (+3.4%)|
|Text-embed-3-large|125|750|0.744|0.780 (+3.6%)|
|Text-embed-3-large|250|2000|0.784|0.796 (+1.2%)|
|Text-embed-3-large|500|2000|0.774|0.788 (+1.4%)|

# C. RAM Usage Analysis in the Telco-RAG

Selecting a 125-token chunk size increases the RAM requirements of the Telco-RAG (see Sec. II-C).
However, the integration of the designed NN router can tackle this issue. Fig. 5 presents the histogram of RAM usage for the 2000 MCQs in the optimization set. The NN router dynamically selects the number of documents processed by the Telco-RAG pipeline based on their relevance to the query, as opposed to the fixed number of documents processed by the Benchmark RAG architecture. This method introduces variability in RAM usage among different queries, which results in the probability density function (PDF) shown in Fig. 5. Our results show that the NN-enhanced RAG model leads on average to a RAM consumption of 1.25 GB, a 45% reduction with respect to the 2.3 GB required by the Benchmark RAG solution.

# D. Enhanced Prompt Formatting

In this section, we highlight the accuracy gain brought by the prompt presented in Sec. II-D, which we designed for LLMs answering MCQs in the telecom domain. Our analysis of the results revealed a 4.6% average gain in accuracy compared to the original JSON format of the TeleQnA questions. This result suggests that human-like query structures can significantly elevate the contextual understanding and accuracy of LLMs.

# E. Overall Performance

In this section, we present the accuracy of Telco-RAG on the evaluation MCQs, i.e., 1840 3GPP-related questions from TeleQnA [3]. Specifically, we consider three groups of MCQs: Rel. 17 MCQs, Rel. 18 MCQs, and the overall set of TeleQnA MCQs related to 3GPP documentation. For each of these sets, we compare the performance of GPT 3.5 with Telco-RAG, GPT 3.5 with the Benchmark RAG, and GPT 3.5 without RAG. Fig. 4 highlights that Telco-RAG leads to notable gains in all experiments. Importantly, Telco-RAG results in an average improvement of 6.6% and 14.45% compared to GPT 3.5 with and without the Benchmark RAG, respectively.

# TABLE IV: Accuracy in identifying the relevant 3GPP series (top-k)

|Top k|NN Router|GPT 3.5|GPT 4|
|---|---|---|---|
|k=1|51.3%|19.9%|30.4%|
|k=3|80.6%|36.6%|70.8%|
|k=5|88.3%|50.3%|85.6%|

The ability of the designed NN router to accurately deduce the applicable 3GPP series for a given query reduces the consideration of irrelevant content. This reduction not only lowers the computational complexity of the retrieval steps but also the overall resources needed for processing the retrieved content.

# IV. CONCLUSIONS

This paper presented Telco-RAG, a novel RAG framework for processing 3GPP telecommunications standards and supporting LLMs in telecom use cases.
We have demonstrated that refinements in chunk sizes, embedding models, indexing strategies, and query structuring significantly boost RAG system performance and accuracy. The provided solutions are general and can address frequent challenges encountered in building RAG pipelines for highly technical domains. We expect that Telco-RAG, which we make publicly available, and the associated results will contribute substantially to the integration of AI in the telecommunications field.

# Fig. 4. Comparison of the accuracy of the Telco-RAG system with baseline GPT 3.5, with and without the Benchmark RAG, on the TeleQnA questions related to 3GPP documents.

| |GPT 3.5|GPT 3.5 + Benchmark RAG|GPT 3.5 + Telco-RAG|
|---|---|---|---|
|2500 - Overall|60.1|+8.7|+6.9|
|1500 - Overall|60.1|+7.0|+6.3|
|2500 - Release 17|59.6|+3.3|+9.6|
|1500 - Release 17|59.6|+2.5|+8.4|
|2500 - Release 18|60.8|+11.2|+6.4|
|1500 - Release 18|60.8|+9.3|+5.8|

# Fig. 5. PDF of the RAM usage of Telco-RAG vs the Benchmark RAG (Benchmark RAG: 2.3 GB).

# REFERENCES

1. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems (NIPS), 2017, pp. 5998-6008.
2. A. Maatouk, N. Piovesan, F. Ayed, A. De Domenico, and M. Debbah, "Large Language Models for Telecom: Forthcoming Impact on the Industry," arXiv preprint arXiv:2308.06013, 2024.
3. A. Maatouk, F. Ayed, N. Piovesan, A. De Domenico, M. Debbah, and Z.-Q. Luo, "TeleQnA: A Benchmark Dataset to Assess Large Language Models Telecommunications Knowledge," arXiv preprint arXiv:2310.15051, 2023.
4. N. C. Thompson, K. Greenewald, K. Lee, and G. F. Manso, "The computational limits of deep learning," arXiv preprint arXiv:2007.05558, 2020.
5. O. Ovadia, M. Brief, M. Mishaeli, and O. Elisha, "Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs," arXiv preprint arXiv:2312.05934, 2024.
LLaMA 7B and 70B models' performance deteriorates to a level below that of their no-context baseline. In comparison, FLAN models consistently outperform their no-context counterparts by a large margin.

# Special Questions and Domains

We perform similar analyses on the HotpotQA and BioASQ datasets. While FLAN models exhibit trends akin to their open-domain behaviors, we observe substantial behavioral shifts in the LLaMA models (Figure 5). Unlike the severe degradation on NQ questions, LLaMA consistently outperforms the no-context baseline. Additional special-domain contexts seem to induce less conflict with LLaMA's parametric knowledge than additional contexts do in NQ, potentially because they provide new, useful information that the model cannot derive from its pretraining memory alone.

Figure 5: LLaMA results on HotpotQA (left) and BioASQ (right) when gold passages are provided.
# Improving the Domain Adaptation of Retrieval Augmented Generation (RAG) Models for Open Domain Question Answering

Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Tharindu Kaluarachchi, Rajib Rana, and Suranga Nanayakkara
Augmented Human Lab, Auckland Bioengineering Institute, The University of Auckland (firstname@ahlab.org); Department of Information Systems & Analytics, National University of Singapore; University of Southern Queensland (Rajib.Rana@usq.edu.au)

# Abstract

Retrieval Augmented Generation (RAG) is a recent advancement in Open-Domain Question Answering (ODQA). RAG has only been trained and explored with a Wikipedia-based external knowledge base and is not optimized for use in other specialized domains such as healthcare and news. In this paper, we evaluate the impact of joint training of the retriever and generator components of RAG for the task of domain adaptation in ODQA. We propose RAG-end2end, an extension to RAG that can adapt to a domain-specific knowledge base by updating all components of the external knowledge base during training. In addition, we introduce an auxiliary training signal to inject more domain-specific knowledge. This auxiliary signal forces RAG-end2end to reconstruct a given sentence by accessing the relevant information from the external knowledge base. Our novel contribution is that, unlike RAG, RAG-end2end does joint training of the retriever and generator for the end QA task and domain adaptation. We evaluate our approach with datasets from three domains (COVID-19, News, and Conversations) and achieve significant performance improvements compared to the original RAG model. Our work has been open-sourced through the Huggingface Transformers library, attesting to our work's credibility and technical consistency.
# Introduction

Open Domain Question Answering (ODQA) is an important task in natural language understanding. ODQA methods generally feature a two-stage pipeline: a retriever that selects passages relevant to a given question and a reader that generates the answers from selected passages. Conventionally, these two components are trained separately using ground truth context passages relevant to question-answer (QA) pairs. However, for many real-world scenarios, it is hard to find explicitly annotated context-question-answer triplets (Lee et al., 2019; Lewis et al., 2020b; Guu et al., 2020).

Recently, Retrieval Augmented Models (RAGs) have drawn considerable attention from researchers. RAG consists of a state-of-the-art neural retriever called Dense Passage Retrieval (DPR) and a BART seq2seq language model. Compared to conventional two-stage ODQA pipelines, RAG merges the retriever and reader stages into one architecture. Moreover, unlike expensive language models with billions of parameters (e.g., GPT-3 and Megatron-LM), where the model's parametric memory represents the complete knowledge, RAG can also extract knowledge from an external knowledge base. Using both parametric and non-parametric memory generally leads to reduced hallucinations and higher interpretability in tasks like question answering and summarization.

In this work, we focus on exploring retrieval augmented architectures for the task of domain-specific open-domain question answering and use RAG-end2end for domain adaptation.
Although there are several similar retrieval augmented architectures, such as REALM and RETRO, we used Retrieval Augmented Generation (RAG) in our experiments due to its excellent open-source documentation and availability. When the RAG model is finetuned for downstream QA tasks, the original implementation keeps the encoding of passages and the external knowledge base fixed.
This is because re-encoding the external knowledge base is computationally expensive and relies on a sophisticated implementation. Despite not finetuning the passage encodings, the RAG model performs well for datasets with Wikipedia-like knowledge bases because the DPR retriever components have already been trained on Wikipedia-based datasets (Kwiatkowski et al., 2019; Joshi et al., 2017). However, the feasibility of adapting RAG to specific ODQA domains such as research papers and news is not well understood. This is a critical research gap to address, as improved domain adaptation can further improve the ODQA performance of RAG. This paper explores the feasibility of using RAG in specialized domains for ODQA. In particular, we propose two modifications to the original RAG to improve its domain adaptability. Motivated by recent end2end retrieval augmented mechanisms (Guu et al., 2020; Sachan et al., 2021; Singh et al., 2021), we first propose a method to finetune the RAG model with its neural retriever and update its knowledge encodings asynchronously during training. We refer to this as RAG-end2end since it allows us to update all RAG components during training, including the external knowledge base, the DPR model, and the BART model. Secondly, we propose an auxiliary training signal to help our model learn more domain-specific knowledge. This took the form of generating a concise and factual statement about a document using a self-retrieved set of passages from the provided domain-specific knowledge base. These two modifications offer a unique feature to RAG-end2end over RAG: joint training of the retriever and generator for the end QA task and domain adaptation. Although asynchronous updates to the knowledge encoder have been proposed before in the REALM, previous work has not evaluated the effects of joint training of the RAG's retriever and the generator for the domain adaptation in ODQA. We evaluate our proposed approach on three different datasets from three domains: COVID-19 research (Wang et al., 2020), Conversations (Wu et al., 2021b), and News (Trischler et al., 2016). The major finding of our work is that the adaptation of the retriever component plays a critical role in overall domain adaptation performance in RAG-like architectures. Updating only the question encoder without updating the knowledge base encoding could degrade performance. Instead of finetuning the DPR retriever separately, our experiments show that finetuning it as a part of the RAG-end2end mechanism gives better overall results.
Our results also show that using the auxiliary signal improves both the retriever component and the overall accuracy. In addition, we open-source the implementation of RAG-end2end with the HuggingFace Transformers (Wolf et al., 2019) library, providing the opportunity for the scientific community to use, test, and build on our work.

# Background and Related Work

Open-domain QA systems (Yang et al., 2015; Kwiatkowski et al., 2019) generally have a two-stage pipeline: passage retrieval (i.e., finding relevant text chunks related to an input question from a knowledge base) and machine comprehension (i.e., generating an answer from a set of selected documents). Traditionally, sparse vector methods such as TF-IDF and BM25 are used for document retrieval (Robertson and Zaragoza, 2009). Researchers have recently moved to dense text representations, which allow modeling textual similarity at a more semantic level. A recent example is the Dense Passage Retriever (DPR) (Karpukhin et al., 2020), which generates embeddings for questions and text passages using two BERT (Devlin et al., 2018) models. The dot product of the embeddings is used as a similarity score between a question and a passage. DPR has demonstrated that higher retrieval precision results in higher end-to-end QA accuracy. For the answer generation component of QA systems, recent studies have used either extractive language models like BERT or generative language models like BART/GPT-2 (Min et al., 2021; Lewis et al., 2021).

# Retrieval Augmented Architecture

Recently, Retrieval Augmented Architectures (Lewis et al., 2020b; Guu et al., 2020) have drawn a lot of attention due to their explainable, scalable, and adaptable nature.
Unlike other open-domain QA architectures, RAG (Lewis et al., 2020b) combines the information retrieval stage and answer generation stage in a differentiable manner. It uses a combination of parametric and non-parametric memory, where the parametric memory consists of a pre-trained seq2seq BART (Lewis et al., 2019) generator, and the non-parametric memory consists of dense vector representations of Wikipedia articles indexed with the FAISS library (Johnson et al., 2017). RAG first encodes a question into a dense representation, retrieves the relevant passages from an indexed Wikipedia knowledge base, and then feeds them into the generator. The loss function can finetune both the generator and the question encoder at the same time. Lewis et al. (2020b) highlight RAG's ability to perform well on Wikipedia-based general question-answering datasets like Natural Questions (Kwiatkowski et al., 2019). Other recent work also highlights how the outputs generated from RAG models are much more factual due to RAG being conditioned on the retrieved documents, possibly providing an answer to the hallucination problem of generative language models. Shuster et al. (2021) also highlight how RAG reduces hallucinations in knowledge-grounded conversational tasks, where the task is to generate responses to dialogues based on a large Wikipedia knowledge base. Xu et al. (2021) illustrate the effectiveness of RAG in chat-bot frameworks and highlight how RAG models are able to recall and summarize conversations compared to standard seq2seq models with only parametric memory. This paper aims to understand how RAG could be extended to an end2end model and adapted to specific domains. To the best of our knowledge, this is the first time RAG has been investigated for domain adaptation in the task of ODQA.

# REALM-like end2end Retrieval Augmented Architectures

REALM (Guu et al., 2020) is a Retrieval Augmented model similar to RAG. REALM introduced a novel masked language pre-training step that involves an end-to-end trainable retriever. In the REALM work, the authors first train the entire model on the masked language prediction task and then fine-tune it on question-answering tasks (keeping the retriever frozen). In comparison to REALM, the original RAG model uses an already trained DPR retriever and conducts partial end-to-end training with a BART reader model. Compared to REALM, RAG is less computationally expensive, and its code is available open-source. We explore and extend the original RAG architecture for domain adaptation in our work. We adapted some concepts of our RAG-end2end extension from REALM. REALM only updates its retriever during the pre-training process that uses the masked language modeling (MLM) (Devlin et al., 2018) task. Then, during the downstream fine-tuning task, REALM keeps its retriever fixed. However, the REALM end-to-end training code is not open-sourced, possibly due to its computational complexity. Compared to REALM, RAG is a combination of already pre-trained language models, so users do not need to go through a heavy pre-training stage. Due to these engineering-friendly features and high availability, we conducted our experiments with RAG and extended RAG into an end-to-end trainable retrieval augmentation model. It is also important to highlight that none of the prior work has explored the domain adaptation of retrieval augmented models for question answering; instead, most focus on general question answering with Wikipedia-based knowledge bases.
# Model Architecture and Training Procedure

In this work, we extend RAG to fine-tune all components, including the DPR retriever, and to dynamically update the external knowledge base during training.
We hypothesize that the use of asynchronous updates helps with domain adaptation.
Figure 1 demonstrates the main workflow of our model.
[Figure 1 diagram: an "Asynchronous re-encoding" panel (DPR Passage Encoder, FAISS Indexer, Encoded Knowledge Base) and an "Asynchronous re-indexing" panel (Questions, Retrieved Passages, Generated Answers such as "Fever is the most common symptom").]

Figure 1: System Overview. Our RAG-end2end training architecture uses asynchronous processes to dynamically re-encode and re-index the knowledge base while optimizing a joint QA and paraphrasing-signal loss. The training dataset consists of both reconstruction signals and QA pairs, and the network learns to generate answers to questions and useful statements jointly. The input to the BART reader is illustrated in Equation 3, where a control token lets the model differentiate the answer-generation task from the statement-reconstruction task. During training, the passage embeddings and the knowledge-base index are updated asynchronously.
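As a rough sketch of the asynchronous re-encoding and re-indexing idea in Figure 1, the snippet below shows one way a background job could periodically re-embed the knowledge base with the current passage-encoder weights and rebuild a FAISS index. The function name, batching scheme, and flat inner-product index are our own illustrative assumptions rather than the exact RAG-end2end implementation.

```python
# Illustrative sketch: periodically re-encode the knowledge base with the latest
# passage-encoder weights and rebuild the FAISS index used for retrieval.
import faiss
import numpy as np
import torch

@torch.no_grad()
def rebuild_index(passage_encoder, passage_tokenizer, passages, batch_size=64):
    """Re-embed every passage and return a fresh inner-product FAISS index (hypothetical helper)."""
    embeddings = []
    for start in range(0, len(passages), batch_size):
        batch = passage_tokenizer(
            passages[start:start + batch_size],
            padding=True, truncation=True, return_tensors="pt",
        )
        out = passage_encoder(**batch)
        embeddings.append(out.pooler_output.cpu().numpy())  # CLS-pooled vectors (DPR-style encoder assumed)
    matrix = np.vstack(embeddings).astype("float32")
    index = faiss.IndexFlatIP(matrix.shape[1])  # inner product matches Equation 1's dot-product similarity
    index.add(matrix)
    return index

# During training, an asynchronous worker could call rebuild_index(...) every N steps
# and swap the new index into the retriever, so retrieval reflects the updated passage encoder.
```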
In the following sections, we describe our extensions and training signals.

# 3.1 RAG Retriever and Generator

The retriever is a DPR (Karpukhin et al., 2020) model pre-trained on Wikipedia-based question-answering datasets (Kwiatkowski et al., 2019; Joshi et al., 2017). It consists of two BERT-based tower networks: the Question Encoder ($E_Q$) and the Passage Encoder ($E_P$). We use their CLS-token embeddings as representations for questions and passages. The similarity between a question $q$ and a passage $p$ is calculated as the dot product of the two embeddings, as shown in Equation 1:

$$\mathrm{sim}(p, q) \propto E_Q(q)^{\top} E_P(p) \qquad (1)$$

RAG's generator is a pre-trained BART (Lewis et al., 2019) seq2seq language model. To train the retriever and generator components, RAG enhances the traditional sequence-to-sequence cross-entropy loss by treating the retrieved passage as a latent variable $Z$ (Guu et al., 2020; Lewis et al., 2020b). The loss of generating each token is marginalized over the probability of selecting each document given the context $X$ (i.e., the document score $p(Z|X)$). The resulting RAG-Token loss can be written as illustrated in Equation 2:

$$p_{\mathrm{RAG\text{-}Token}}(y \mid X) \approx \prod_{i}^{N} \sum_{Z \in \mathrm{top\text{-}}k(p(\cdot \mid X))} p(Z \mid X)\, p(y_i \mid X, Z, y_{1:i-1}) \qquad (2)$$

# 3.2 Indexing of the External Knowledge Base

Before the training phase, we need to encode all passages in the external knowledge base using $E_P$. During training, we then retrieve the passages most similar to the output of $E_Q$, which mainly involves dot-product calculations between the input-question embedding and the encoded passages. Since the knowledge base usually contains millions of passages, this retrieval process is likely to become a performance bottleneck during training. To address this issue, RAG adopts the FAISS indexing approach proposed in (Johnson et al., 2017). With the help of these indexes, we can skip a considerable amount of repeated computation and significantly accelerate the retrieval process.

# 3.3 End-to-End Retriever Training

Although the DPR module makes use of two BERT models ($E_P$, $E_Q$), the original RAG architecture only fine-tunes the question encoder $E_Q$ in the retriever. The passage encoder $E_P$ and the encoding of the external knowledge base are fixed during training.
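To make Equation 1 concrete, the sketch below scores a question-passage pair with the off-the-shelf DPR encoders from the `transformers` library. The checkpoint names and example strings are illustrative; in the original RAG setup only $E_Q$ would receive gradient updates, while $E_P$ and the pre-computed passage index stay frozen.

```python
# Sketch of Equation 1: sim(p, q) as the dot product of DPR CLS embeddings.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
p_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
p_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

question = "What is the most common symptom?"            # illustrative example
passage = "Fever is the most common symptom reported."   # illustrative example

with torch.no_grad():
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output  # E_Q(q)
    p_emb = p_encoder(**p_tokenizer(passage, return_tensors="pt")).pooler_output   # E_P(p)

score = (q_emb @ p_emb.T).item()  # higher score => more relevant passage
print(score)
```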