Gray lines on the top indicate gold performance.

# Response to Negative Passages
We now study the setting where no passages are explicitly useful for answering the question. At each k, we select the slice of examples in which no gold passage appears among the top-k retrieved candidates. We report model results when providing (1) these k negative contexts or (2) no context at all. As shown in Figure 6, compared to the no-ctx baseline, LLAMA performance degrades as more negative passages are added to the context (i.e., as k increases). FLAN models, on the other hand, can benefit from negative contexts, consistently outperforming the no-ctx baseline across all k. Surprisingly, FLAN models perform better with negative passages than with no contexts at all. One explanation is that negative passages may still be partially helpful even though they do not contain the exact correct answer: they may contain keywords or related topics that provide hints to the reader models.

# Case Study: What Negative Contexts Help?
If a negative context is related to the question but insufficient to answer it, it can still improve the reader model's performance. For instances where FLANT5 answers correctly with the top-5 negative contexts but incorrectly with no contexts, 43% have top-k passages that include at least one passage from the Wikipedia page the gold passages come from. Even though none of these negative passages includes the exact correct answer, they are topically related enough that FLANT5 can bridge the knowledge gap.

³For multi-hop questions, we select examples for which all gold passages are retrieved within the top-k passages, since all passages are necessary to answer the question.
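To make this evaluation setup concrete, the following is a minimal sketch of selecting the negative-context slice and building the two reader input conditions. The field names (`question`, `gold_passage_ids`, `retrieved`) and the prompt format are illustrative assumptions, not the paper's implementation.

```python
def negative_context_slice(examples, k):
    """Select examples whose top-k retrieved passages contain no gold passage."""
    selected = []
    for ex in examples:
        top_k_ids = {p["id"] for p in ex["retrieved"][:k]}
        if not top_k_ids & ex["gold_passage_ids"]:  # no gold passage among top-k
            selected.append(ex)
    return selected


def build_reader_input(example, k, with_context=True):
    """Format the two evaluation conditions: k negative contexts vs. no context."""
    if with_context:
        context = "\n\n".join(p["text"] for p in example["retrieved"][:k])
        return f"{context}\n\nQuestion: {example['question']}\nAnswer:"
    return f"Question: {example['question']}\nAnswer:"
```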
phase. In other words, the pre-trained passage encoder of DPR is only used once, to encode the external knowledge base. The RAG authors suggest that this design performs well for Wikipedia-based ODQA datasets (Kwiatkowski et al., 2019; Joshi et al., 2017). This setting works because the DPR model was also pre-trained on Wikipedia-based datasets and their experiments use an external knowledge base consisting of Wikipedia articles. However, when the model needs access to a different, domain-specific external knowledge base, it may be beneficial to fine-tune all the DPR components during RAG training for domain adaptation.

In this work, we introduce RAG-end2end, in which we augment RAG to be fully end-to-end trainable. We fine-tune both the passage encoder and the question encoder and update the index of the external knowledge base during training. Propagating gradients to both the passage and question encoders is straightforward with RAG's loss function, because that loss employs the passage-selection probability known as the doc-score (the pη(z|x) term illustrated in Equation 2). However, for this to have a real effect on the overall training process, we must iteratively re-compute the passage embeddings with the updated context encoder and then update the index of the external knowledge base. In other words, we need to re-encode and re-index the knowledge base using the updated passage encoder.

When the external knowledge base contains tens of millions of passages, the re-encoding and re-indexing steps can be very time-consuming: re-encoding can take several GPU hours and re-indexing with FAISS several CPU hours, depending on the size of the knowledge base. It is therefore inefficient to stall the training loop while the re-encoding and re-indexing steps are carried out. To obtain an efficient training mechanism, we split our training framework into three main processes: (1) the main training loop, which updates the gradients; (2) re-encoding processes on several GPUs, which update the knowledge-base encodings with the updated DPR context encoder; and (3) a re-indexing process, which uses FAISS to build an index over the updated encodings. Figure 1 illustrates these three processes.

Our implementation uses two asynchronous processes to re-encode and re-index the external knowledge base, running independently of the main training loop. We first distribute the external knowledge base across a set of GPUs that are not used in the main training loop and encode the passages with the updated passage encoder (the re-encoding process). Once re-encoding has finished, we re-index the knowledge base in another parallel process that uses FAISS (the re-indexing process). Inside the main training loop, we ensure that the re-indexing process always starts after the re-encoding process has finished. As soon as the new index of the external knowledge base is created, we load it into the main training loop. Once the new index has been loaded, we start the re-encoding process again, repeating the entire embedding-update cycle. It is important that each re-encoding process finishes, and the new embeddings are saved to disk, before the FAISS indexing process starts; if the knowledge base is not entirely updated with the new embeddings, the re-indexing process fails.
We use Python multiprocessing handles to keep this ordering; the re-encoding and re-indexing processes are asynchronous only with respect to the main training loop. The number of training steps between knowledge-base updates depends on the size of the dataset. To measure this, we experimented with a knowledge base of 250,000 passages, using four dedicated GPUs for the re-encoding process with a batch size of 32 each, on a machine with 96 CPU cores. We found that one full update cycle takes an average of 750 training updates. The computation time can easily be reduced by using more GPUs for encoding and a machine with more CPU cores (the FAISS indexing time depends on the number of CPU cores). These steps repeat throughout the training loop. Since the training process and the knowledge-base index updates run asynchronously, the model may train on stale gradients; however, according to prior research (Guu et al., 2020), this does not significantly degrade model performance.
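The following is a minimal sketch (not the authors' released code) of how such an asynchronous update cycle could be organized with Python multiprocessing and FAISS; the helper names (`encode_passages`, `train_one_step`, `load_index_into_retriever`), the embedding dimension, and the file paths are illustrative assumptions.

```python
import multiprocessing as mp

import faiss
import numpy as np

PASSAGE_DIM = 768  # DPR embedding size (assumption)


def reencode_worker(passages, emb_path):
    # GPU-bound step: encode all passages with the *updated* DPR context encoder
    # and save the embeddings to disk for the indexing process.
    embeddings = encode_passages(passages)              # hypothetical helper
    np.save(emb_path, embeddings.astype("float32"))


def reindex_worker(emb_path, index_path):
    # CPU-bound step: build a fresh FAISS index over the newly saved embeddings.
    embeddings = np.load(emb_path)
    index = faiss.IndexHNSWFlat(PASSAGE_DIM, 128)        # HNSW with 128 links, as in Sec. 4.2
    index.add(embeddings)
    faiss.write_index(index, index_path)


def training_loop(passages, num_steps):
    emb_path, index_path = "embeddings.npy", "kb.faiss"
    encoder_proc = mp.Process(target=reencode_worker, args=(passages, emb_path))
    encoder_proc.start()
    indexer_proc = None

    for step in range(num_steps):
        train_one_step()                                 # hypothetical helper: one gradient update

        # Re-indexing may only start once re-encoding has fully finished.
        if not encoder_proc.is_alive() and indexer_proc is None:
            indexer_proc = mp.Process(target=reindex_worker, args=(emb_path, index_path))
            indexer_proc.start()

        # Once the new index exists, load it and start the next re-encoding pass.
        if indexer_proc is not None and not indexer_proc.is_alive():
            load_index_into_retriever(index_path)        # hypothetical helper
            encoder_proc = mp.Process(target=reencode_worker, args=(passages, emb_path))
            encoder_proc.start()
            indexer_proc = None
```

In practice the re-encoding work would itself be sharded across the dedicated GPUs, but the ordering constraint (encode, then index, then reload) is the same as described above.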
# Statement Reconstruction We explore the incorporation of statement reconstruction as an auxiliary signal assuming that it forces the model to gain more domain-specific knowledge. As illustrated in Figure 1, we first encode input statements using the input/question encoder (EQ).
Then the retriever retrieves the most
similar set of passages from the indexed external knowledge base by conducting a similarity search. Afterward, the final output generator attempts to re-construct the input statements using only the selected support set of documents.
We ensure that the external knowledge base does not contain the input statement itself, to prevent the model from overfitting to lexical content. To differentiate the reconstruction signal from the QA signal, we prepend a special token <p> (representing passages) to the reconstruction inputs, which acts as a control token in seq2seq language modeling (Raffel et al., 2019; Keskar et al., 2019). Concretely, when training the RAG architecture on QA pairs, the question is prepended to the retrieved passages before they are fed to the BART generator; for the reconstruction signal, we prepend only the <p> token to the retrieved passages, as illustrated in Equation 3.

|Signal|Generator input|
|---|---|
|QA INPUT|〈Question〉 + 〈Retrieved Passages〉|
|RECONSTRUCTION INPUT|〈<p>〉 + 〈Retrieved Passages〉|

# Experiments & Results

# 4.1 Domain-Specific Dataset Setup
In this work, our main goal is to explore the adaptation of domain-specific retrieval augmentation for ODQA. As noted in recent work (Lewis et al., 2020b), most ODQA datasets, such as Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), WebQuestions (Berant et al., 2013), and CuratedTrec (Baudiš and Šedivý, 2015), are answered with Wikipedia-based knowledge bases. Since neural retrievers like DPR are already trained on Wikipedia-based datasets, this setting does not allow a fair exploration of the domain adaptation of RAG. We therefore selected three domain-specific settings for our experiments: COVID-19 QA, News QA, and Conversation QA. Since the availability of domain-specific ODQA datasets is minimal, we open-source all domain-specific knowledge bases and question-answer pairs to support future research.

# COVID-19 QA
Knowledge Base Generation: To create the external knowledge base, we use 5,000 full-text scientific articles extracted from the CORD-19 dataset (Wang et al., 2020). The knowledge base consists of 250,000 100-word passages, each prepended with the title of its research paper.

Reconstruction Statement Generation: We use sentences from the abstracts of research articles as the reconstruction signal. We first extract the abstract sections of 10K papers and split them into sentences using the NLTK library (Loper and Bird, 2002). We filter out sentences that are too short (fewer than 15 words) or too long (more than 35 words), yielding approximately 50,000 abstract statements. Note that the abstract sections are excluded when generating the knowledge base.

Synthetic QA Generation: In this domain, we use only synthetic data for training and validation. Following prior work (Shakeri et al., 2020), we use a BART seq2seq model trained on the SQuAD dataset (Rajpurkar et al., 2016) to generate synthetic QA pairs given a passage. We used the SQuAD passages as input and the corresponding question-answer pairs as the expected output, training a BART-large checkpoint for two epochs. We then applied round-trip consistency filtering (Alberti et al., 2019) to the synthetic QA pairs. The final synthetic dataset consists of 225,000 QA pairs, of which 90% are used for training and 10% for validation. As test data, we use 2,000 human-labeled question-answer pairs from the COVID-QA dataset (Möller et al., 2020).
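As an illustration of this synthetic QA generation step, here is a minimal sketch using a HuggingFace BART checkpoint fine-tuned for question generation. The checkpoint name and the "question <sep> answer" output format are assumptions for illustration, not the authors' released artifacts.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Hypothetical checkpoint: a BART-large model fine-tuned on SQuAD to map a
# passage to a "question <sep> answer" string (assumption, not a released model).
MODEL_NAME = "your-org/bart-large-squad-qa-generator"

tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)


def generate_qa_pairs(passage, num_pairs=3, max_length=64):
    """Generate synthetic question-answer pairs for one 100-word passage."""
    inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(
        **inputs,
        num_beams=num_pairs,
        num_return_sequences=num_pairs,
        max_length=max_length,
    )
    pairs = []
    for seq in outputs:
        text = tokenizer.decode(seq, skip_special_tokens=True)
        if "<sep>" in text:                      # assumed output format
            question, answer = text.split("<sep>", 1)
            pairs.append((question.strip(), answer.strip()))
    return pairs


def round_trip_filter(passage, question, answer, qa_pipeline):
    """Keep a synthetic pair only if an extractive QA model recovers the same
    answer from the passage (round-trip consistency, Alberti et al., 2019)."""
    predicted = qa_pipeline(question=question, context=passage)["answer"]
    return predicted.strip().lower() == answer.strip().lower()
```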
# News QA
Knowledge Base Generation: We extract 85,000 100-word passages as the knowledge base from 10,000 news articles in the NewsQA dataset (Trischler et al., 2016).

Reconstruction Statement Generation: We use the corresponding news summary sentences from the CNN/DM dataset (Hermann et al., 2015) as the reconstruction signal. Every article has more than one summary sentence; we use the first sentence as the article title (which is also used in knowledge-base generation) and the remaining sentences as reconstruction statements. The final set contains 35,000 summary statements.

QA Generation: The NewsQA dataset (Trischler et al., 2016) consists of 100,000 human-annotated QA pairs over 10,000 news articles from the CNN/DM dataset (Hermann et al., 2015). We use the train (90,000), validation (5,000), and test (5,000) splits given in the dataset to train and evaluate our model. All questions in the NewsQA dataset focus on the high-level content of articles.
So, to answer
these questions, the model must access a large span of passages to conduct the reasoning process.

# Conversation QA
Knowledge Base Generation: We create an external knowledge base of 110,000 passages by splitting the 10,000 conversations in the QAConv dataset (Wu et al., 2021b) into passages of at most 100 words each. We prepend each conversation's identifier (found in the original dataset) as the title of its passages.
We also prepend the speaker's name, followed by a ":" symbol, to the beginning of each utterance so that each conversation stays connected to its speakers.

Reconstruction Statement Generation: We use a state-of-the-art abstractive conversation summarization model (Wu et al., 2021a) (the publicly available Salesforce checkpoint) to generate a one-sentence (TLDR) summary of approximately 45 words per conversation, which we use as the auxiliary signal. We only generate summaries for conversations with more than 45 words. In this way, we collect 35,000 synthetic summary/reconstruction statements.

QA Generation: We use the QAConv dataset (Wu et al., 2021b), which contains 35,000 QA pairs generated from 10,000 conversations involving two or more parties. We use the train (25,000), validation (5,000), and test (5,000) splits given in the dataset to train and evaluate our model.

# 4.2 Training and Evaluation Setup
We use the HuggingFace Transformers library (Wolf et al., 2019) to implement the RAG-end2end architecture, initializing the DPR and BART models from the open-source HuggingFace checkpoints (the rag-token-base checkpoint). Prior to fine-tuning, we encode and index the external knowledge base using FAISS, selecting HNSW FLAT as the indexing mechanism (with 128 bi-directional links). We use 100 words as the maximum passage length, as suggested by the prior RAG work (Lewis et al., 2020a). During training, we use six Tesla V100 GPUs with 32 GB of memory each: four for training and two for re-encoding. We train each RAG variant for ten epochs and select the final checkpoint with the highest validation accuracy.

We use Exact Match (EM), F1 score, and Top-k retrieval accuracy as evaluation metrics. The EM score computes the word-level exact match between the predicted answer and the reference answer. The F1 score counts the number of words in the predicted answer that overlap with the reference answer, regardless of order. The Top-k retrieval accuracy is calculated by matching the answer string against the contents of the retrieved k passages.
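A minimal sketch of these three metrics (not the authors' evaluation script); the plain whitespace tokenization and lowercasing are assumptions.

```python
from collections import Counter


def exact_match(prediction, reference):
    """EM: word-level exact match between predicted and reference answers."""
    return float(prediction.strip().lower() == reference.strip().lower())


def f1_score(prediction, reference):
    """F1: word overlap between predicted and reference answers, ignoring order."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def top_k_retrieval_accuracy(answer, retrieved_passages, k):
    """Top-k accuracy: does the answer string appear in any of the top-k passages?"""
    return float(any(answer.lower() in p.lower() for p in retrieved_passages[:k]))
```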
We compare RAG and RAG-end2end in the following five scenarios.

- RAG-original. This model is finetuned on the Natural Questions dataset (Kwiatkowski et al., 2019) with the Wikipedia knowledge base (the public rag-token-nq checkpoint) and serves as the non-domain-adapted baseline. It is not finetuned with domain-specific question-answer pairs, and we report its zero-shot performance.
- RAG-original-QA. The original RAG model finetuned with only domain-specific question-answer pairs.
- RAG-end2end-QA. The RAG model with our end2end retriever extensions, finetuned only with domain-specific question-answer pairs.
- RAG-original-QA+R. The RAG-original model finetuned with both domain-specific question-answer pairs and our reconstruction signal.
- RAG-end2end-QA+R. The RAG model with our end2end retriever extensions, trained with both question-answer pairs and our reconstruction signal.

We present the results of each scenario in Table 1.

# 4.3 Effect of End-to-End Retriever Training on Domain Adaptation
We first test whether finetuning both the passage encoder and question encoder of RAG's retriever, while updating the external knowledge base, improves domain adaptation. We compare the performance of RAG-original-QA and RAG-end2end-QA, isolating any performance improvement due to the reconstruction signal. The results in Table 1 show that RAG-end2end-QA significantly outperforms RAG-original-QA on all metrics (EM, F1, Top-5, and Top-20) across all three domains. The improvements in the EM score range from 1.13 points in the News domain to 12.16 points in the Conversation domain.
COVID-19 Domain
|Model Name|EM|F1|Top-5|Top-20|
|---|---|---|---|---|
|(1) RAG-original|0.0|4.73|10.56|15.69|
|(2) RAG-original-QA|2.95|12.01|12.29|18.43|
|(3) RAG-end2end-QA|8.08|18.38|19.85|26.91|
|(4) RAG-original-QA+R|3.66|12.20|12.79|18.45|
|(5) RAG-end2end-QA+R|8.32|19.57|23.05|31.23|

News Domain
|Model Name|EM|F1|Top-5|Top-20|
|---|---|---|---|---|
|(1) RAG-original|4.33|7.92|19.46|30.33|
|(2) RAG-original-QA|7.26|14.26|22.86|34.55|
|(3) RAG-end2end-QA|8.39|16.31|28.89|41.20|
|(4) RAG-original-QA+R|8.62|16.46|27.28|39.56|
|(5) RAG-end2end-QA+R|14.08|23.7|39.67|50.95|

Conversation Domain
|Model Name|EM|F1|Top-5|Top-20|
|---|---|---|---|---|
|(1) RAG-original|5.49|9.27|12.14|20.02|
|(2) RAG-original-QA|12.09|20.05|22.73|32.05|
|(3) RAG-end2end-QA|24.25|36.05|46.01|55.55|
|(4) RAG-original-QA+R|14.21|24.62|26.32|36.21|
|(5) RAG-end2end-QA+R|25.95|37.96|49.11|58.75|

# Table 1: Domain adaptation performance of different RAG models used in our experiments. We report results for all three domains; details about each model are described in Section 4.2.
# Results on special questions when all retrieved passages are negative
We evaluate the retrieval performance of BM25 and ColBERT by computing recall@k for all k from 1 to 50 (Table 2). In general, ColBERT performs better than BM25 on questions about Wikipedia knowledge (NQ and HotpotQA). However, ColBERT offers only a negligible advantage over BM25 on the special-domain BioASQ, and at a much higher computational cost. Averaged over k from 1 to 50, ColBERT offers a 50-point gain in paragraph recall over BM25 on NQ, but a gain of less than 1 point on BioASQ. When applied to out-of-domain questions, ColBERT gains little advantage from its neural, contextual embeddings. Nonetheless, not all negative passages from the gold page are helpful.
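A minimal sketch of how recall@k could be computed over a retrieval run; the field names (`gold_ids`, `retrieved_ids`) are illustrative assumptions, and using page ids instead of passage ids gives the page-level numbers in Table 2.

```python
def recall_at_k(examples, k):
    """Fraction of questions for which at least one gold passage (or page)
    appears among the top-k retrieved results."""
    hits = 0
    for ex in examples:
        top_k = set(ex["retrieved_ids"][:k])   # ranked retrieval ids, assumed field
        if top_k & ex["gold_ids"]:             # gold passage/page ids, assumed field
            hits += 1
    return hits / len(examples)


def average_recall(examples, ks=range(1, 51)):
    """Recall averaged over k = 1..50, as used to compare BM25 and ColBERT."""
    ks = list(ks)
    return sum(recall_at_k(examples, k) for k in ks) / len(ks)
```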
Evaluating passage retrieval performance using the Top-5 and Top-20 scores, we see a marked increase of around 25 points in the Conversation domain, with the other domains showing improvements of between 4.7 and 6.6 points. Above all, these results suggest that fine-tuning both the passage and question encoders of RAG's retriever while updating the external knowledge base can improve domain adaptation.

# 4.4 Effect of Adding the Statement-Reconstruction Auxiliary Task
In this experiment, we test our next hypothesis: adding the auxiliary training signal of statement reconstruction alongside QA pairs improves domain adaptation. We compare the performance of RAG-end2end with and without the reconstruction signal by comparing RAG-end2end-QA+R and RAG-end2end-QA in Table 1. RAG-end2end-QA+R outperforms RAG-end2end-QA in all three domains: the EM score increases range from 1.7 points in the Conversation domain to 8.39 points in the News domain, and the Top-20 retrieval accuracy increases by between 3.2 and 8 points. We further compare the effect of adding the reconstruction signal to RAG-original by comparing RAG-original-QA with RAG-original-QA+R. We find that even without the end2end extension, the reconstruction signal improves performance moderately; the improvement in the EM score ranges from 0.84 points in the COVID-19 domain to 3.12 points in the Conversation domain.

# 4.5 Retriever's Domain Adaptation with RAG-end2end
An important part of our RAG-end2end extension is updating the entire DPR retriever during training. Previous work (Ma et al., 2020) has explored the importance of domain adaptation for neural retrievers and highlighted the performance gains on domain-specific retrieval tasks. Based on our RAG-end2end retriever results and this prior work, we argue that a domain-specific retriever plays a key role in achieving good performance when adapting RAG to new domains. However, end-to-end RAG fine-tuning can become computationally costly, especially when the external knowledge base contains a large number of passages that must be re-encoded and re-indexed. Instead of fine-tuning DPR as part of RAG-end2end, an alternative approach is to fine-tune DPR separately on domain-specific data with its own vector-similarity-based loss function (Karpukhin et al., 2020) and then initialize the RAG architecture with it prior to fine-tuning with the QA data. We explore whether RAG-end2end can still perform on par, or better, when the RAG model is initialized with such an independently domain-adapted DPR.
|Domain|Input Statement|Most Similar Retrieved Document|Reconstructed Statement|
|---|---|---|---|
|COVID-19|Cough (82.5%), fever (75%), and malaise (58.8%) were the most common symptoms, and crepitations (60%) and wheezes (40%) were the most common signs.|The most common signs and symptoms on admission included fever and cough. Of all children, 32% had complaint of difficulty in respiration. Other symptoms observed were myalgia, headache and vomiting. On examination, 66% cases had crepitations and 42% had wheezing. Hypoxemia was observed in 31% cases at admission.|The most common signs and symptoms on admission were fever and cough, and 32% had complaint of difficulty breathing.|
|News|Capsule was carrying South Korea's first astronaut.|Soyuz capsule carrying South Korea's first astronaut lands 260 miles off its mark.|Soyuz capsule carrying South Korea's first astronaut lands 260 miles off its mark.|
|Conversation|The Supreme Court will hear the case on the grounds of First Amendment protection of free speech.|(PETITIONER): Yes, Your Honor.
Mr. Tory, who was appearing pro se in the trial court, from the very outset objected that he was being held liable for speech protected by the First Amendment.| |

# Table 2: Examples of Reconstructed Statements. Reconstructions generally capture the context of the retrieved documents and are similar to the input statement, but are not always 100% factually correct (e.g., the COVID-19 example). The Input Statement column shows the input to the model with the special <p> token; the Most Similar Retrieved Document column shows a snapshot of the top retrieved document used to reconstruct the statement.

Initializing RAG with an independently domain-adapted DPR model helps us further understand the ability of the RAG-end2end extension to fine-tune the retriever with domain-specific data.

Standalone DPR fine-tuning with domain-specific data: The standalone DPR model can be fine-tuned if we have access to gold-standard passages that contain the answers to given questions, together with hard-negative passages that contain details similar to the question but not the exact answer. DPR uses a dot-product-based similarity loss that captures the similarity between the question and the correct passage while contrasting it with hard negatives (Karpukhin et al., 2020), which are lexically similar but do not contain the answer. We use the Haystack framework by deepset-ai to fine-tune DPR for each domain using domain-specific data, creating fine-tuning datasets for all three domains. For the COVID-19 domain, we use the synthetic question-answer pairs and their corresponding 100-word passages; using domain-specific synthetic QA pairs for DPR fine-tuning has already been shown to give performance improvements (Ma et al., 2020). For hard negatives, we use BM25 lexical-matching search, as suggested by the DPR authors, retrieving passages that are lexically similar to the question but do not contain the answer. Although the News and Conversation domains have supervised datasets in which questions can be mapped to their correct passages, we did not obtain better results after fine-tuning the original DPR with this supervised data.
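A minimal sketch of BM25-based hard-negative mining for DPR fine-tuning, using the rank_bm25 package; the whitespace tokenization and the answer-containment check are illustrative assumptions rather than the authors' pipeline.

```python
from rank_bm25 import BM25Okapi


def mine_hard_negatives(question, answer, passages, num_negatives=7):
    """Return passages that score highly against the question under BM25 but do
    not contain the answer string: hard negatives for DPR fine-tuning."""
    tokenized_corpus = [p.lower().split() for p in passages]
    bm25 = BM25Okapi(tokenized_corpus)
    scores = bm25.get_scores(question.lower().split())

    # Rank passages from most to least lexically similar to the question.
    ranked = sorted(range(len(passages)), key=lambda i: scores[i], reverse=True)

    negatives = []
    for idx in ranked:
        if answer.lower() not in passages[idx].lower():  # must NOT contain the answer
            negatives.append(passages[idx])
        if len(negatives) == num_negatives:
            break
    return negatives
```

In practice the BM25 index would be built once over the whole knowledge base rather than per question; the per-question loop here is only for readability.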
|Model Name|Top-5|Top-20|
|---|---|---|
|COVID-19 Domain| | |
|(1) DPR-original|9.39|14.72|
|(2) DPR-domain-adapted|13.66|20.01|
|(3) DPR-RAG-end2end|20.64|28.21|
|News Domain| | |
|(1) DPR-original|20.95|31.04|
|(2) DPR-domain-adapted|20.98|31.92|
|(3) DPR-RAG-end2end|39.67|50.95|
|Conversation Domain| | |
|(1) DPR-original|15.15|23.95|
|(2) DPR-domain-adapted|23.15|34.53|
|(3) DPR-RAG-end2end|49.11|58.75|

Table 3: Comparison of DPR models fine-tuned on domain-specific data against the publicly available DPR checkpoint trained on the Wikipedia domain, for all three domains.

The main reason for the degradation in performance is the length of the correct passages associated with the questions. In both the News and Conversation domains, most questions come from longer passages, whereas the pre-trained DPR only accepts 100-word passages. To mitigate this issue, we generated synthetic question-answer pairs from the external knowledge bases of the News and Conversation domains, following the same procedure used for the COVID-19 domain (Section 4.1). Hard-negative examples were then mined with the BM25 lexical-matching method described above.
After training, we evaluate DPR retrieval accuracy using the test dataset and the external knowledge base of each domain, similar to the RAG retrieval evaluation described in Section 4.2. Table 3 compares (1) DPR-original, the publicly available checkpoint trained on Wikipedia, with (2) DPR-domain-adapted, the model fine-tuned with DPR's original loss function. (3) DPR-RAG-end2end is the retriever component of RAG-end2end-QA+R from Table 1, included to highlight how much the DPR model improves as a result of RAG-end2end training with both training signals. Comparing DPR-RAG-end2end with the other variants in Table 3, we observe that the RAG-end2end architecture significantly improves DPR's domain adaptation.
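For illustration, here is a minimal sketch of this kind of retrieval evaluation with a HuggingFace DPR question encoder and a FAISS inner-product index over pre-computed passage embeddings. The checkpoint names and the answer-string matching criterion follow the description above, but the code itself is an assumption rather than the authors' implementation.

```python
import faiss
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")
question_encoder = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")


def build_index(passage_embeddings):
    """Inner-product FAISS index over DPR passage embeddings (shape: [n, 768])."""
    index = faiss.IndexFlatIP(passage_embeddings.shape[1])
    index.add(passage_embeddings.astype("float32"))
    return index


@torch.no_grad()
def top_k_accuracy(questions, answers, passages, index, k=5):
    """Fraction of questions whose answer string appears in the top-k retrieved passages."""
    hits = 0
    for question, answer in zip(questions, answers):
        inputs = tokenizer(question, return_tensors="pt", truncation=True, max_length=256)
        q_emb = question_encoder(**inputs).pooler_output.numpy().astype("float32")
        _, ids = index.search(q_emb, k)
        retrieved = [passages[i] for i in ids[0]]
        hits += any(answer.lower() in p.lower() for p in retrieved)
    return hits / len(questions)
```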
Initializing RAG with a domain-adapted DPR prior to fine-tuning: Next, we investigate the performance of RAG models initialized with a domain-adapted DPR. We initialize RAG's question encoder and passage encoder with DPR-domain-adapted (from the trained models shown in Table 3) and fine-tune RAG with the settings of RAG-original-QA+R. The objective is to compare how RAG models initialized with domain-adapted DPR models perform against the RAG-end2end extension.

Table 4 presents results from three models. (1) RAG-original-QA+R and (3) RAG-end2end-QA+R are taken from the main results (Table 1). (2) RAG-original-QA+R (DPR-adapted) was first initialized with a domain-adapted DPR model (from Table 3) and then fine-tuned with domain-specific QA pairs and reconstruction signals using the RAG-original settings. The results in Table 4 indicate that, for all domains, fine-tuning RAG-original with a domain-adapted DPR gives higher performance than fine-tuning RAG-original from the standard DPR checkpoint (compare (1) and (2) in Table 4).

COVID-19 Domain
|Model Name|EM score|F1 Score|Top-5|Top-20|
|---|---|---|---|---|
|(1) RAG-original-QA+R|3.66|12.12|12.79|18.45|
|(2) RAG-original-QA+R (DPR-adapted)|7.36|17.91|22.39|30.87|
|(3) RAG-end2end-QA+R|8.32|19.51|23.05|31.23|

News Domain
|Model Name|EM score|F1 Score|Top-5|Top-20|
|---|---|---|---|---|
|(1) RAG-original-QA+R|8.62|16.46|27.28|39.56|
|(2) RAG-original-QA+R (DPR-adapted)|10.92|19.44|30.72|41.9|
|(3) RAG-end2end-QA+R|14.08|23.7|39.67|50.95|

Conversation Domain
|Model Name|EM score|F1 Score|Top-5|Top-20|
|---|---|---|---|---|
|(1) RAG-original-QA+R|14.21|24.62|26.32|36.21|
|(2) RAG-original-QA+R (DPR-adapted)|15.78|25.47|29.01|40.03|
|(3) RAG-end2end-QA+R|25.95|37.96|49.11|58.75|

Table 4: Comparing the effect of the RAG-end2end extension against initializing RAG-original models with domain-adapted DPR models prior to fine-tuning (see Table 1). We use the independently domain-adapted DPR models shown in Table 3.

We highlight the performance improvements in both answer-generation accuracy and retrieval recall, with the COVID-19 domain showing the largest improvements. We also compare the RAG-end2end model with the RAG-original model that was first initialized with the domain-adapted DPR models (compare (2) and (3) in Table 4). This comparison shows that the RAG-end2end training mechanism can outperform the RAG-original mechanism even when the latter uses a domain-adapted DPR. These results further highlight the value of the RAG-end2end mechanism for domain adaptation, where the DPR model does not need to be trained separately.

# Discussion

# 5.1 Role of the retriever in domain adaptation
As the results suggest, the retriever plays an essential role in domain adaptation for open-domain QA. RAG-end2end training clearly improves the results, since it can update both the embeddings and the index of the knowledge base.
Compared with original RAG fine-tuning, RAG-end2end improves performance on all datasets. The main reason could be that neural retrievers such as DPR, which are trained on public datasets, struggle to perform well on domain-specific data. Our results also highlight an important aspect of the stand-alone DPR's document-retrieval performance: RAG-end2end can improve the domain adaptation of DPR more than fine-tuning DPR with its own mechanism.

# 5.2 Cost of end2end retriever adaptation
It is important to note that RAG-end2end fine-tuning can be expensive if the number of passages in the external knowledge base is large. With millions of passages, it is beneficial to dedicate a number of GPUs to the encoding process; re-indexing with the FAISS library likewise depends on the number of CPU cores available. When sufficient computational power is available, it is preferable to use RAG-end2end, since we can directly use the passages in a knowledge base together with question-answer pairs to train both the retriever and the reader, and we do not need to generate the synthetic passage-level question-answer pairs that are otherwise required to train DPR.

# 5.3 Comparing RAG-original with RAG-end2end on an in-domain dataset

|Model Name|EM|F1|Top-5|Top-20|
|---|---|---|---|---|
|(1) RAG-original|28.12|39.42|59.64|72.38|
|(2) RAG-end2end|40.02|52.63|75.79|85.57|

Table 5: Open-domain performance comparison between RAG-original and RAG-end2end. We used the SQuAD dataset in an ODQA setting for this experiment.

Although our work mainly focuses on adapting RAG to specific domains, we also explored whether end2end training improves results on an in-domain dataset. Since the original RAG model uses a DPR model trained on the Wikipedia-based Natural Questions dataset, we consider Wikipedia-based data in-domain. Although SQuAD (Rajpurkar et al., 2016) is a machine comprehension dataset, we adapted it to the ODQA setting. First, we extracted the contexts related to each question-answer pair and created an external knowledge base.
Then we split the knowledge base into 100-word passages; the final knowledge base consists of 30K passages.
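A minimal sketch of the 100-word passage splitting used to build these knowledge bases; the whitespace tokenization and the optional title prefix are assumptions for illustration.

```python
def split_into_passages(text, title=None, passage_len=100):
    """Split a document into consecutive passages of at most `passage_len` words,
    optionally prepending the document title to each passage."""
    words = text.split()
    passages = []
    for start in range(0, len(words), passage_len):
        passage = " ".join(words[start:start + passage_len])
        if title:
            passage = f"{title} {passage}"
        passages.append(passage)
    return passages


def build_knowledge_base(documents):
    """Build a flat passage list from (title, context) pairs, e.g. SQuAD contexts."""
    knowledge_base = []
    for title, context in documents:
        knowledge_base.extend(split_into_passages(context, title=title))
    return knowledge_base
```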
As illustrated in Table 5, we compared the performance of RAG-original and RAG-end2end on answer generation and on retrieving the correct documents. The results suggest that RAG-end2end performs better than RAG-original even on a Wikipedia-based dataset, likely because RAG-end2end updates the context encoder and the passage embeddings during training.

# Conclusion and Future Work
In this paper, we proposed a novel extension of RAG: RAG-end2end, which, unlike RAG, jointly trains the retriever and the generator for the end QA task and for domain adaptation. We showed that RAG-end2end can improve DPR performance more than fine-tuning DPR independently; this allows DPR models to be trained with QA pairs alone and eliminates the need for gold-standard passages associated with each question. We also highlighted that adding a reconstruction auxiliary signal further improves both the retriever and the final answer-generation accuracy. We evaluated our approach on three datasets from different domains (COVID-19, News, and Conversations), showing that RAG-end2end achieves significant performance improvements in all three domains compared to the original RAG implementation. In addition, we conducted several other experiments to validate our approach comprehensively. Overall, our results show that the approach is stable and generalizes across different domains, and our experiments highlight the importance of RAG's retriever component in domain-specific question answering.

Based on our findings, we suggest three directions for future research on the domain adaptation of RAG models. Firstly, we consider it worthwhile to explore RAG-end2end on other tasks such as fact checking (Lewis et al., 2020b), summarization (Shuster et al., 2021), and conversational response generation (Xu et al., 2021), where the original RAG has shown interesting results. Secondly, it is important to explore generative capabilities with qualitative metrics, aligned with research on measuring the factual consistency (Kryściński et al., 2019; Cao et al., 2022) and hallucinations (Cao et al., 2022; Shuster et al., 2021; Nie et al., 2019) of generative language models; future work could explore whether updating the retriever and document embeddings during training improves factual consistency and reduces hallucinations in the final generations. Thirdly, the improvement of RAG with our extension (RAG-end2end) highlights the importance of the retriever in the RAG architecture, which motivates us to improve the retriever further in future work; and since the statement-reconstruction signal acts as a good auxiliary signal, we encourage exploring other auxiliary signals that could improve the overall performance of RAG models.

# References
Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA corpora generation with roundtrip consistency. arXiv preprint arXiv:1906.05416.

Petr Baudiš and Jan Šedivý.
2015. Modeling of the question answering task in the YodaQA system. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 222–228. Springer.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang.
Among instances where top-5 underperforms no-ctx, 28% include at least one passage from the ground-truth Wikipedia page.

# Comparing Different Retrievers

|Dataset|Retriever|Recall|1|2|5|10|20|50|
|---|---|---|---|---|---|---|---|---|
|NQ|BM25|paragraph|0.7|1.1|2.5|4.1|6.0|7.5|
| | |page|10.3|16.3|27.8|36.8|47.7|53.2|
| |ColBERT|paragraph|12.3|18.0|25.7|32.1|38.1|41.8|
| | |page|27.2|38.8|54.4|65.0|72.9|77.2|
|HotpotQA|BM25|paragraph|0.2|0.4|1.0|1.6|2.4|3.0|
| | |page|23.3|31.2|42.7|52.1|59.1|62.8|
| |ColBERT| |34.2|44.7|56.3|63.6|69.9|73.1|
|BioASQ|BM25|paragraph|8.8|12.9|19.6|25.8|33.3|37.8|
| | |page|12.4|16.4|23.9|30.6|38.7|43.6|
| |ColBERT|paragraph|8.8|13.5|20.7|27.1|34.3|38.6|
| | |page|14.2|18.2|25.6|32.2|39.8|44.2|

# Table 2: passage/paragraph (top) and page (bottom) recall of BM25 and ColBERT retrievers.

# Generation with Different Retrievers
More importantly, we are interested in the question: how much does the choice of retriever affect downstream performance? We find that the nature of the task (e.g., single-hop vs. multi-hop, domain specificity) can significantly affect the choice between neural and sparse retrievers (Figure 8). While neural retrievers may require more resources, their large advantage on single-hop, Wikipedia-based questions and small advantage in specialized domains may still justify the investment. In contrast, neural retrievers are less beneficial for multi-hop questions.
2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2021. Improving language models by retrieving from trillions of tokens. arXiv preprint arXiv:2112.04426.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Meng Cao, Yue Dong, and Jackie Chi Kit Cheung. 2022.
Hallucinated but factual! Inspecting the factuality of hallucinations in abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3340–3354.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909.

Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer. 2020a. Pre-training via paraphrasing. arXiv preprint arXiv:2006.15020.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in Neural Information Processing Systems, 28:1693–1701.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al.
2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. arXiv preprint arXiv:2005.11401.

Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2020c. Question and answer test-train overlap in open-domain question answering datasets. arXiv preprint arXiv:2008.02637.

Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them. arXiv preprint arXiv:2102.07033.

Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. arXiv preprint cs/0205028.

Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2020. Zero-shot neural retrieval via domain-targeted synthetic query generation. arXiv preprint arXiv:2004.14503.
Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, et al. 2021. NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned. arXiv preprint arXiv:2101.00133.
Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.

Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-augmented dialogue generation. arXiv preprint arXiv:2107.07566.

Wojciech Kryściński, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the factual consistency of abstractive text summarization. arXiv preprint arXiv:1910.12840.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466.
Timo Möller, Anthony Reina, Raghavan Jayakumar, and Malte Pietsch. 2020. COVID-QA: A question answering dataset for COVID-19. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020, Online. Association for Computational Linguistics.

Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, et al. 2021. Efficient large-scale language model training on GPU clusters using Megatron-LM. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–15.

Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards reducing hallucination in neural surface realization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2673–2679.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.

Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston.
2021. Retrieval augmentation reduces hallucination in conversation. arXiv preprint arXiv:2104.07567.
Devendra Singh, Siva Reddy, Will Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. Advances in Neural Information Processing Systems, 34.

Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.

Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, et al. 2020. CORD-19: The COVID-19 Open Research Dataset. ArXiv.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.

Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, and Caiming Xiong. 2021a. Controllable abstractive dialogue summarization with sketch supervision. arXiv preprint arXiv:2105.14064.

Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc.
Figure 8: Reader results (FlanT5, FlanUL2, LLaMa 7B, LLaMa 70B) on NQ, HotpotQA, and BioASQ (from left to right) using BM25 retrieved passages, as a function of the number of top-k documents.

Open Domain (Wikipedia). For open-domain single-hop questions, using ColBERT offers substantial improvements over using BM25. At k = 5, ColBERT helps FLAN models achieve a significant 16–18 point EM improvement and LLAMA2 models a more modest 4–6 point increase. This suggests there is value in using high-quality neural retrieval for open-domain, single-hop questions.

Special Question Type: Multi-Hop. For multi-hop questions, despite the stark retrieval performance gap between BM25 and ColBERT, the impact on reader performance is minimal. At k = 5, FLAN models improve by 3–4 F1 points when paired with ColBERT over BM25, while LLAMA2 models improve by 2–3 F1 points. Both model families show only marginal gains with ColBERT, suggesting that multi-hop questions challenge reader models more in reasoning than in context utilization. While BM25's recall is consistently about 10 points lower than ColBERT's, it is surprising that the reader performs comparably to using ColBERT-retrieved passages. This could be explained by BM25's still-high Wikipedia page recall, indicating its capability to find relevant, if inexact, information that readers can still benefit from.

Special Biomedical Domain. ColBERT and BM25 perform similarly in the biomedical domain, so we might also expect similar downstream performance. However, there is still a difference, albeit a small one, between using ColBERT and BM25. At k = 5, FLAN models achieve a 2–5 point improvement when paired with ColBERT over BM25, while LLAMA2 models exhibit a much smaller, < 1 point improvement. Although the retrieval-quality differences are slight, the specialized nature of the domain can amplify their impact on reader performance. As a result, there is still a discernible, though modest, preference for ColBERT over BM25 for accessing quality context.

# Related Work
Context Limit and Processing Capacity. LMs with longer context windows are applicable across various knowledge-intensive generation tasks (Beltagy et al., 2020; Bertsch et al., 2023; Su et al., 2024). However, it is unclear how well these models process long contexts. Liu et al. (2023) study whether LMs are sensitive to the position of useful content within a long context, finding that they struggle when it appears in the middle.
Chien-Sheng Wu, Andrea Madotto, Wenhao Liu, Pascale Fung, and Caiming Xiong. 2021b. QAConv: Question answering on informative conversations. arXiv preprint arXiv:2105.06912.

Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. arXiv preprint arXiv:2107.07567.

Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018.

Devendra Singh Sachan, Mostofa Patwary, Mohammad Shoeybi, Neel Kant, Wei Ping, William L Hamilton, and Bryan Catanzaro. 2021. End-to-end training of neural retrievers for open-domain question answering. arXiv preprint arXiv:2101.00408.

Siamak Shakeri, Cicero Nogueira dos Santos, Henry Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. End-to-end synthetic data generation for domain adaptation of question answering systems. arXiv preprint arXiv:2010.06028.
# Appendix

|Test Set Question|RAG-original retrieved passages|RAG-end2end retrieved passages|Label|RAG-original prediction|RAG-end2end prediction|
|---|---|---|---|---|---|
|Where does the Kiwi girl that Darren T Maloney spoke to on the phone commute from?|what seems like N Africa to get to work|N Africa to get to work|New Zealand| | |

Additional test set questions and predicted answers:

- Why did Brian Kerrigan apologize to Deutsche Bank? Because we don't have any control over the counterparty's activities.
- How many payday loan stores are there? There are nearly 18000 payday loan stores in this country right now.
- Which library is recommended for sending mail? Postal Service.
- Which person made his most famous speech on the steps of the Lincoln Memorial? Martin Luther King Jr.
- Which programming language implementations provide transformers according to Kimery? C

# Figure 2: Predicted answers and retrieved passages for a set of questions from the conversational domain (Wu et al., 2021b).
arXiv:2406.18676v1 [cs.CL] 26 Jun 2024

# Understand What LLM Needs: Dual Preference Alignment for Retrieval-Augmented Generation

Guanting Dong1, Yutao Zhu1, Chenghao Zhang1, Zechen Wang2, Zhicheng Dou1∗ and Ji-Rong Wen1

1School of Artificial Intelligence, Beijing University of Posts and Telecommunications, 2Gaoling School of Artificial Intelligence, Renmin University of China.

{dongguanting19990611,yutaozhu94}@gmail.com, dou@ruc.edu.cn

∗ Corresponding author. Preprint.

# Abstract

Retrieval-augmented generation (RAG) has demonstrated effectiveness in mitigating the hallucination problem of large language models (LLMs). However, the difficulty of aligning the retriever with the diverse knowledge preferences of LLMs poses an inevitable challenge in developing a reliable RAG system. To address this issue, we propose DPA-RAG, a universal framework designed to align diverse knowledge preferences within RAG systems. Specifically, we first introduce a preference knowledge construction pipeline and incorporate five novel query augmentation strategies to alleviate preference data scarcity. Based on the preference data, DPA-RAG accomplishes both external and internal preference alignment: 1) It jointly integrates pair-wise, point-wise, and contrastive preference alignment abilities into the reranker, achieving external preference alignment among RAG components. 2) It further introduces a pre-aligned stage before vanilla Supervised Fine-tuning (SFT), enabling LLMs to implicitly capture knowledge aligned with their reasoning preferences, achieving the LLMs' internal alignment. Experimental results across four knowledge-intensive QA datasets demonstrate that DPA-RAG outperforms all baselines and seamlessly integrates both black-box and open-sourced LLM readers. Further qualitative analysis and discussions also provide empirical guidance for achieving reliable RAG systems. Our code is publicly available at https://github.com/dongguanting/DPA-RAG.

# 1 Introduction

The emergence of large language models (LLMs) [1, 2, 3] has profoundly revolutionized a variety of real-world tasks expressed in natural languages [4, 5, 6, 7, 8, 9]. However, when faced with knowledge-intensive tasks, relying solely on internal knowledge for reasoning may easily expose LLMs to factual inconsistency and hallucination [10, 11]. To alleviate these issues, researchers use retrieval-augmented technology [12, 13] to assist LLMs in integrating relevant external knowledge, providing a promising solution to improve the quality of generated answers [14].

In an ideal Retrieval-Augmented Generation (RAG) system, the goal is to enhance LLMs by incorporating supporting documents that align with their intrinsic knowledge preferences, thus facilitating reasoning. However, in practical applications, the retriever and the LLM-based reader serve as separate components within the RAG system, each with distinct model architectures, training objectives, and task formats [15, 16]. These differences often result in documents retrieved by vector similarity
failing to meet the specific knowledge demands of LLM reasoning. Moreover, the retrieved documents may even conflict with the self-knowledge of LLMs, potentially disrupting the LLMs' original reasoning abilities [17, 18].

# Figure 1: The results for GPT-3.5 comparing direct responses and answers referencing different retrieved documents (Grounding, 1st, 10th, 50th, 100th) on three QA benchmarks.

As depicted in Figure 1, we conduct a preliminary analysis of GPT-3.5 across three QA benchmarks, comparing two setups: the LLM directly answering the question, and the LLM answering by referencing different types of retrieved documents. We categorize the results into four distinct conditions:

- Both Correct.
The question can be resolved directly by LLMs or through the retrieved document.
- Aligned Knowledge. The LLM directly gives the wrong answer, but the retrieved document guides it to the right solution.
- Unaligned Knowledge. The LLM gives the right answer, but the retrieved document may mislead it.
- Both Incorrect. Neither the retrieved document nor the LLM can provide the answer correctly.

We then make the following observations. In the "Aligned Knowledge" scenario, it is notable that documents with low vector similarity (100th) still support the LLM in deducing correct answers. Conversely, within the "Unaligned Knowledge" scenario, several documents with high vector similarity tend to mislead the LLM more than those with lower similarity (e.g., 10th vs. 100th). Surprisingly, even some documents that contain relevant grounding information struggle to align with the LLM's preferences [19]. These results support our statement that "the retrieved documents do not exactly match the knowledge required for LLM reasoning". Therefore, mitigating the preference gap between the LLM and the retriever emerges as a critical challenge in developing a reliable RAG system.

To address the above limitation, we propose Dual Preference Alignment for Retrieval-Augmented Generation (DPA-RAG), a universal framework designed to align diverse preference knowledge within RAG systems. DPA-RAG consists of three key components:

1. Preference Knowledge Construction: Motivated by our preliminary results, we first extract the specific knowledge that significantly affects LLMs' reasoning preferences. We then introduce five query augmentation strategies and a quality filtering process to synthesize high-quality preference knowledge.
2. Reranker-LLM Alignment: To meet the diverse knowledge preferences of LLMs, we carefully design multi-grained alignment tasks for fine-tuning a preference-aligned reranker. Specifically, we jointly integrate pair-wise, point-wise, and contrastive preference alignment abilities into the reranker via multi-task optimization [20]. By this means, the reranker can provide the knowledge necessary for the LLM's inference, achieving external alignment between the retriever and LLMs.
3. LLM Self-Alignment: To further enable LLMs to concentrate on knowledge aligned with their reasoning preferences, we introduce a pre-aligned stage prior to the vanilla SFT stage. This stage allows LLMs to capture preference-aligned knowledge from multiple documents, completing the LLM's internal self-alignment.

To summarize, our contributions are as follows:

- Based on a preliminary analysis of GPT-3.5 across three QA benchmarks, we reveal the inherent preference gaps between the retriever and the LLM-based reader in RAG systems.
- We propose DPA-RAG, a universal framework designed to align the knowledge preferences of diverse LLMs within RAG systems. DPA-RAG achieves dual preference alignment in two aspects: (1) It jointly integrates multi-grained preference alignment abilities into the reranker, facilitating
external alignment across RAG components. (2) It introduces a pre-aligned stage prior to the standard SFT stage, guiding LLMs to concentrate on the aligned knowledge and thereby unlocking the internal alignment abilities of the LLMs.
- To overcome the scarcity and limited diversity of preference data, we devise five novel query augmentation strategies and a quality filtering process, aimed at automatically synthesizing high-quality preference data for effectively aligning downstream models.
- Experimental results on four knowledge-intensive QA datasets demonstrate the effectiveness of DPA-RAG. Further analysis across dimensions such as Model Parameters, Preference Alignment, Data Quality, and Training Strategies confirms DPA-RAG's role as a plug-and-play solution, providing practical insights for developing reliable RAG systems.

# Related Work

Preference Alignment for Large Language Models.
Traditional Preference Alignment (PA) methodologies [21, 22, 23, 24] are designed to tailor pre-trained language models to reflect human preferences. Recently, a series of works have relied on reinforcement learning (RL) [25] to align LLMs with human preferences [1]. Owing to the sensitivity of RL's parameters and the complexity of reward modeling, research works [26, 27, 28, 29, 30, 31, 32, 33, 34] represented by DPO [35] have further tried to optimize the loss function and reward scoring mechanism for pruning. However, depending on annotations from humans or expert models still increases the alignment cost. To construct reliable RAG systems, a branch of studies [36, 37, 38] aims to align the retriever with supervision signals generated by LLMs, showcasing remarkable alignment potential. Conversely, other studies attempt to improve the alignment abilities of RAG systems by implementing a multi-round retrieval paradigm [39, 40, 41, 42, 43, 44] or by filtering out noise from the training corpus [45, 46, 47, 48, 49]. These approaches, however, often lack multi-level alignment, which limits their ability to adapt to the diverse knowledge preferences of LLMs. In this paper, we introduce DPA-RAG, which bridges this gap by aligning the retriever to the diverse knowledge preferences of LLMs without relying on external expert annotations.

Reranking Techniques for Retrieval-Augmented Generation. In a RAG system, the reranker is designed to rank the list of retrieved documents to accurately meet the LLM's demands. A series of sentence-transformer models [50, 51, 52, 53] have achieved excellent fine-grained ranking by better aligning the representations of queries and documents. With the rapid development of prompt learning [54], point-wise generative re-ranking frameworks [55, 56, 57, 58] have transformed the traditional discriminative task into a Seq2seq paradigm, showcasing promising initial alignment abilities. The recent development and application of LLMs have introduced innovative pair-wise and list-wise rerankers, such as RankGPT [59], PRP [60], LRL [61], and RankLLaMA [62], which bring multiple perspectives to the fine-grained re-ranking problem. Moreover, in response to the unique preferences of different users, various methods [63, 64, 65, 66, 67] have been developed to achieve personalized ranking, yielding significant results in industrial scenarios. These advancements inspire us to distill the preferences of LLMs into the reranker, facilitating effective alignment between the RAG system's components.

# Methodology

To address the misalignment between the different components of retrieval-augmented generation (RAG) and improve overall generation performance, we propose the DPA-RAG framework, illustrated in Figure 2. In general, DPA-RAG improves the traditional RAG architecture in two main aspects: (1) we fine-tune a preference-aligned reranker between the retriever and the LLM to select knowledge that aligns with the LLM's knowledge preferences (§3.3); and (2) we design a self-alignment mechanism that fine-tunes LLMs to better recognize and utilize knowledge consistent with their reasoning preferences (§3.4). To acquire the LLM's preference knowledge, we devise a three-step construction method, motivated by our preliminary analysis of how different types of retrieved documents affect RAG performance (§3.2).
Moreover, Xu et al. (2024) show that an LM with a smaller context window (4k) using RAG performs comparably to a finetuned LM with a longer window (16k). Following this line of work, we study how effectively LMs with different input capacities utilize long contexts.

Domain Influence on Downstream Performance. It is crucial to know when LMs benefit from including retrieved passages in context. Mallen et al. (2023) find that retrieving contexts may be unnecessary and even detrimental when asking about common knowledge, but beneficial for questions about rare knowledge. In contrast, we find that using RAG under the right configurations still offers significant downstream performance boosts even for common, Wikipedia-based questions.

Robustness to Noisy Contexts. Feeding in noisy contexts deteriorates LM performance. Asai et al.
Below, we first introduce the task definition (§3.1) and then delve into the specifics of our approach.
# Figure 2: The overall framework of DPA-RAG. The upper part shows the pipeline for preference knowledge construction (the four conditions and the query augmentation strategies). The middle part displays the task formats for dual preference alignment (point-wise, pair-wise, and contrastive preference alignment for the reranker, and LLM self-alignment). The bottom part illustrates the inference process of DPA-RAG.

# Task Definition

Compared to standard text generation, RAG often follows a retrieve-then-read paradigm [13], where an additional retriever is introduced to collect external knowledge and enhance the generation process. This architecture involves constructing a query q to reflect the information needs of the generation. For example, in question-answering systems, the input question is often used as the query. Given the query q, the retriever R returns relevant documents from a corpus D_q = {d_i}_{i=1}^{N} with N documents. The relevance between a document d and the query q can be measured by various methods. In this work, we employ a dense retriever that utilizes dual encoders to obtain hidden representations for both the query and the documents. The relevance score is then calculated by computing the dot-product similarity between these representations, enabling the retrieval of the top-k documents D_retrieve:

D_retrieve = arg top-k { E_d(d_i)^T · E_q(q) | i = 1, ..., N }.
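To make the retrieval step concrete, below is a minimal NumPy sketch of dual-encoder top-k retrieval by dot-product similarity. It is an illustration rather than the paper's implementation, and the random vectors stand in for embeddings produced by a real dense encoder such as DPR.

```python
import numpy as np

def retrieve_top_k(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 5):
    """Return indices and scores of the top-k documents by dot-product similarity.

    query_emb: (dim,) embedding E_q(q) of the query.
    doc_embs:  (N, dim) embeddings E_d(d_i) of the corpus documents.
    """
    scores = doc_embs @ query_emb      # relevance score E_d(d_i)^T · E_q(q) for every document
    top_k = np.argsort(-scores)[:k]    # arg top-k over all N documents
    return top_k, scores[top_k]

# Toy usage with random embeddings standing in for a real encoder.
rng = np.random.default_rng(0)
doc_embs = rng.normal(size=(1000, 768))
query_emb = rng.normal(size=(768,))
indices, scores = retrieve_top_k(query_emb, doc_embs, k=5)
print(indices, scores)
```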
While the retrieved documents are relevant to the query, they may not necessarily contain the knowledge required by the LLMs. Therefore, in this study, we introduce a reranker E_r to rerank D_retrieve and keep only the documents aligned with the LLMs' preferences, i.e., D_rerank = E_r(q, D_retrieve). Finally, the LLM reads the reranked documents and generates the target text based on the query, y = LLM(q, D_rerank), where the LLM's generation probability distribution is P_θ(y | q, D_rerank). Recognizing that LLMs might struggle to effectively utilize retrieved knowledge, we also design a self-alignment mechanism to optimize θ for RAG tasks.

# Preference Knowledge Construction

To mitigate the misalignment between different RAG components, a critical step is to collect data that reflects LLMs' knowledge preferences. Therefore, we design a three-step method to gradually mine, augment, and filter high-quality preference knowledge of the LLM, as shown in Figure 2.

Preference Knowledge Extraction. To align with LLMs' knowledge preferences, it is essential to identify the specific knowledge that brings performance gains or harms during the model's inference process. Motivated by the preliminary analysis in Figure 1, we start from the training set D_train = {q_i, D_{q_i}, y_{q_i}}_{i=1}^{N_train}, where each sample includes a query q_i, top-k retrieved documents
D_{q_i} = {d_i}_{i=1}^{k}, and an answer y_{q_i}. We guide LLMs to answer questions either directly or by referencing different types of documents, aiming to filter out the samples from D_train that reflect LLMs' knowledge preferences. To ensure distinctiveness among these documents, we hierarchically sample four documents from D_{q_i} to construct the document subset D_{q_i}^{sub} = {d_i | i = 1, 25, 50, 100} for each query, as shown in the upper part of Figure 2. Consequently, we categorize the results of the LLM into "Both Correct", "Both Incorrect", "Aligned Knowledge", and "Unaligned Knowledge". From D_train, we selectively extract the samples whose document subsets D_{q_i}^{sub} contain at least one document labeled "Aligned Knowledge" or "Unaligned Knowledge". This allows us to obtain the preference dataset D_pref = {q_i, D_{q_i}^{sub}, Y_i^{sub}}_{i=1}^{N}, where Y_i^{sub} = {y_i | i = 1, 25, 50, 100} denotes the preference labels of D_{q_i}^{sub}, corresponding to the four distinct categories. The motivation behind this selection process is that documents labeled as "Aligned Knowledge" or "Unaligned Knowledge" provide the LLM with a clearly positive or negative impact during reasoning. Because it is difficult to attribute the role of retrieved documents labeled "Both Correct" or "Both Incorrect", we discard them.
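As a rough illustration of this extraction step, the sketch below labels a (query, document) pair into the four conditions. Here `llm_answer` is a hypothetical helper for querying the LLM with or without a context document, and exact match stands in for whatever answer-checking the evaluation actually uses.

```python
def exact_match(pred: str, gold: str) -> bool:
    return pred.strip().lower() == gold.strip().lower()

def categorize(llm_answer, query: str, doc: str, gold: str) -> str:
    """Label a (query, document) pair by comparing closed-book vs. document-grounded answers.

    llm_answer(query, context=None) is a hypothetical callable that returns the LLM's answer,
    optionally prepending a retrieved document as context.
    """
    direct_ok = exact_match(llm_answer(query), gold)
    ref_ok = exact_match(llm_answer(query, context=doc), gold)
    if direct_ok and ref_ok:
        return "Both Correct"        # discarded when building the preference set
    if not direct_ok and ref_ok:
        return "Aligned Knowledge"   # the document helped the LLM
    if direct_ok and not ref_ok:
        return "Unaligned Knowledge" # the document misled the LLM
    return "Both Incorrect"          # discarded when building the preference set
```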
Diverse Query Augmentation. Upon obtaining D_pref, which reflects the LLM's preferences, its scarcity (only about 20% of D_train) still poses an obstacle to fine-tuning high-quality models. More critically, the sparsity of preference data results in limited data patterns, reducing both the diversity and complexity of the dataset [68, 69]. To address these limitations, we draw inspiration from several augmentation methods [7, 8, 9, 70, 71] and design five novel query augmentation strategies for the RAG system, as follows:
- Rephrasing. Rephrase the original query with the same intention.
- Complexity. Increase the semantic complexity of the original query.
- Decomposition. Decompose the original query into several sub-problems.
- Constraint. Add more conditional and constrained statements to the original query.
- SPARQL.
Rewrite the original query based on the SPARQL syntax and generate it directly.

We utilize GPT-3.5-turbo to generate the different augmented datasets {D_aug^i}_{i=1}^{n} and then merge them with the original dataset D_ori, which can be formulated as D_pref = D_ori ∪ (∪_{i=1}^{n} D_aug^i). To control the quality of the augmented data, we introduce a quality filtering procedure based on a natural language inference (NLI) model. Given the original query q as the "premise" and the augmented query q_aug as the "hypothesis", the NLI model determines the semantic relationship between the two queries, which can be categorized as entailment, contradiction, or neutral:

p_θ(· | q, q_aug) = softmax(score_θ(q, q_aug)),

where score_θ : R^{k×ℓ_q} × R^{k×ℓ_{q_aug}} → R^3 is a scoring function dependent on the model's parameters θ. To maintain intent consistency between the original and augmented datasets, we exclude any augmented data labeled as "contradiction" (approximately 20%). Detailed information on the different augmentation strategies can be found in Appendix C.2.

# Reranker-LLM Alignment

After obtaining D_pref, we introduce multi-grained preference alignment tasks to jointly fine-tune a reranker, aiming to select retrieved knowledge that aligns with LLM preferences.

Point-wise Preference Alignment. Distinguishing the knowledge that is beneficial or harmful to LLMs is essential for aligning their preferences. Hence, from each sample {q_i, D_{q_i}^{sub}, Y_i^{sub}} ∼ D_pref, we further extract one sub-sample {q_i, d_i, y_i}, where y_i is labeled as "Aligned Knowledge" or "Unaligned Knowledge". As shown in Figure 2, we use {q_i, d_i, y_i}_{i=1}^{N} to fine-tune the reranker model E_r(θ) with the binary cross-entropy loss [72], achieving point-wise preference alignment:

L_point = − (1/N) Σ_{i=1}^{N} [ y_i log(p_θ(q_i, d_i)) + (1 − y_i) log(1 − p_θ(q_i, d_i)) ],

where y_i is the label (Positive / Negative) indicating whether d_i is aligned or unaligned knowledge.
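The following is a minimal PyTorch sketch of the point-wise objective above: a toy reranker scores a (query, document) pair and is trained with binary cross-entropy against the Aligned/Unaligned label. The bag-of-embeddings encoder is only a placeholder for the actual reranker backbone (e.g., a BGE-style cross-encoder), so treat this as a sketch under those assumptions.

```python
import torch
import torch.nn as nn

class ToyReranker(nn.Module):
    """Placeholder cross-encoder: scores the concatenated query + document tokens."""
    def __init__(self, vocab_size: int = 30522, dim: int = 64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)   # stand-in for a transformer encoder
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) ids of the concatenated query + document text
        return self.score(self.emb(token_ids)).squeeze(-1)  # unnormalized logit p_theta(q, d)

model = ToyReranker()
criterion = nn.BCEWithLogitsLoss()   # implements the L_point objective above
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Toy batch: random token ids and Aligned (1) / Unaligned (0) labels.
token_ids = torch.randint(0, 30522, (8, 128))
labels = torch.tensor([1, 0, 1, 1, 0, 0, 1, 0], dtype=torch.float)

loss = criterion(model(token_ids), labels)
loss.backward()
optimizer.step()
```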
Pair-wise Preference Alignment. While point-wise alignment empowers the reranker to identify the LLM's favored knowledge, teaching the reranker to prioritize this preferred knowledge presents a new challenge. Therefore, we propose a pair-wise preference ranking task for fine-grained alignment. In detail, given {q_i, D_{q_i}^{sub}, Y_i^{sub}} ∼ D_pref, we derive an order {o_i}_{i=1}^{K} of the document subset D_{q_i}^{sub} = {d_i}_{i=1}^{K} based on the initial similarity scores from the retriever. Our idea is elegantly simple: we leverage the LLM within the RAG system as a preference reward model r_θ to score documents, eliminating the need for external experts. To mitigate the bias of relying solely on LLM-generated preference scores [73], we calculate the preference score s_i for each document by weighting both the LLM preference score r_θ and the original similarity score s_R(·) from the retriever:

s_i = a · r_θ(q, d_i) + (1 − a) · s_R(q, d_i),   (5)

where s_i denotes the preference score of the i-th retrieved document. We then sort the documents according to these preference scores to obtain the LLM's knowledge preference order {ô_i}_{i=1}^{K}. Subsequently, we integrate the preference order into the reranker using the RLHF loss [1, 74]:

L_pair = − (1 / C_K^2) E_{(q, d_w, d_l, y_w, y_l) ∼ D_pref} [ log(σ(p_θ(q, d_w, y_w) − p_θ(q, d_l, y_l))) ],   (6)

where y_w and y_l represent the labels of documents d_w and d_l, corresponding to the "winner" and "loser" in the preference order {ô_i}_{i=1}^{K}, and p_θ denotes the logits of the output.

Contrastive Preference Alignment. To align query representations with the LLM's preferred knowledge, we employ contrastive learning [75, 76] to fine-tune our reranker, thereby preventing the LLM from being misled by highly similar but unaligned knowledge. Unlike previous pairwise approaches [35], our D_pref dataset associates each query with multiple documents, rather than a single positive or negative example. Considering this one-to-N scenario, we employ Supervised Contrastive Learning (SCL) [77] to fully leverage D_pref. In our task, the query serves as an anchor point h_q. Aligned documents are treated as positive samples h_p, while documents randomly sampled from other instances in the batch act as negative samples h_n. As shown in Figure 2, SCL seeks to reduce the distance between queries and positive samples h_p, while increasing the distance from negative samples h_n in the semantic space. The loss L_CPA is formulated as follows:

L_CPA = − Σ_{i=1}^{N_t} (1 / (N_{y_i} − 1)) Σ_{j=1, j≠i}^{N_t} 1_{y_i = y_j} log [ exp(h_q · h_p / τ) / Σ_{k=1, k≠i}^{N_t} exp(h_q · h_n / τ) ],   (7)

where N_t is the number of samples in each batch and N_{y_i} denotes the number of samples in the batch that share the same label as y_i.
τ is a temperature parameter, and 1(·) is an indicator function.
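Below is a simplified PyTorch sketch of the contrastive alignment idea in Equation (7): query embeddings are pulled toward in-batch document embeddings that share their label and pushed away from the rest. It treats all same-label candidates as positives and uses the full in-batch denominator, so it is a close variant of the formula rather than a literal transcription.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(anchors: torch.Tensor,
                                candidates: torch.Tensor,
                                labels: torch.Tensor,
                                tau: float = 0.07) -> torch.Tensor:
    """Simplified SCL over a batch of query/document embeddings.

    anchors:    (B, dim) query embeddings h_q.
    candidates: (B, dim) document embeddings (positives h_p share the anchor's label,
                the remaining in-batch documents act as negatives h_n).
    labels:     (B,) integer preference labels.
    """
    anchors = F.normalize(anchors, dim=-1)
    candidates = F.normalize(candidates, dim=-1)
    sim = anchors @ candidates.T / tau                      # (B, B) scaled similarity matrix
    pos_mask = labels.unsqueeze(1) == labels.unsqueeze(0)   # positives share the same label
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average the log-probability over each anchor's positives, then over the batch.
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()

# Toy usage with random embeddings.
B, dim = 8, 32
loss = supervised_contrastive_loss(torch.randn(B, dim), torch.randn(B, dim),
                                   torch.randint(0, 2, (B,)))
print(loss)
```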
Multi-task Optimization. Optimizing the multi-grained preference tasks via Multi-task Learning (MTL) [78, 79] offers an efficient way to fine-tune the reranker. However, learning the tasks jointly may introduce additional bias and conflicts [80]. To tackle this challenge, we employ MGDA-UB [20], aiming to dynamically find a Pareto-optimal [81] solution that balances the multi-task optimization. Using MGDA-UB to optimize the MTL weights {c_t}_{t=1}^{T} for the T tasks, we finally obtain our multi-grained alignment loss function as:

L_total = c_1 L_point + c_2 L_pair + c_3 L_CPA.   (8)

# LLM Self-Alignment

After initially aligning the preferences between the external RAG components, in this section we focus on guiding LLMs to emphasize aligned knowledge during the reasoning process to achieve internal alignment. Inspired by several pre-alignment works [82, 83], we introduce a pre-aligned stage to assist LLMs in implicitly identifying the knowledge crucial for reasoning [48]. An in-depth discussion of scoring mechanisms for different LLMs can be found in Appendix A.2.
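A minimal sketch of Equation (8) is given below. The task weights are fixed scalars here, whereas DPA-RAG sets c_1, c_2, c_3 dynamically with MGDA-UB; that gradient-level computation is omitted to keep the sketch self-contained.

```python
import torch

def multi_grained_loss(l_point: torch.Tensor, l_pair: torch.Tensor, l_cpa: torch.Tensor,
                       weights=(1.0, 1.0, 1.0)) -> torch.Tensor:
    """Eq. (8): weighted sum of the three alignment losses.

    DPA-RAG chooses c_1, c_2, c_3 with MGDA-UB to approximate a Pareto-optimal trade-off;
    fixed weights are used here only for illustration.
    """
    c1, c2, c3 = weights
    return c1 * l_point + c2 * l_pair + c3 * l_cpa

# Toy usage: combine three loss values computed elsewhere in the training step.
l_total = multi_grained_loss(torch.tensor(0.72), torch.tensor(0.41), torch.tensor(1.08))
print(l_total)  # tensor(2.2100)
```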
(2022) propose to select documents with high evidentiality scores. Wang et al. (2023) learn a filter model to remove the noisy sentences, and Berchansky et al. (2023) adopt a similar approach at the token level. Further, Yu et al. (2023) and Xu et al. (2023) use a neural summarizer model to aid the LM in identifying relevant information in long retrieved passages. Instead of reducing noise at test
Pre-aligned Stage. As illustrated in Figure 2, for each sample {q_i, D_{q_i}^{sub}, Y_i^{sub}} ∼ D_pref, we randomly select one document d_q labeled "Aligned Knowledge" or "Unaligned Knowledge" from D_{q_i}^{sub}, along with k − 1 random documents from the retrieved corpus D_{q_i}, constructing a top-k document set D_align = {d_q, d_rand^1, ..., d_rand^{k−1}}. We then optimize the following training objective with a task-specific prompt template for each query:
L(θ) = Σ_n log P_θ(y_n | prompt(q_n, D_align)),   (9)

with the pre-aligned prompt template: Given the documents {D_align = (d_q, d_rand^1, ..., d_rand^{k−1})}.
Answer the following question based on the given information or your internal knowledge with few words without the source. Query: {q}. [Judgement]: document-{id_q} is Positive or Negative knowledge for answering the question.

Here, log P_θ(·) denotes the probability distribution of the LLM's output, θ denotes the model parameters, and {id_q} represents the position of the preference document. By distinguishing y ∈ {Positive, Negative} during the pre-aligned task, LLMs implicitly learn to capture self-preferred knowledge from the top-k documents.

Supervised Fine-tuning Stage. Following the pre-aligned task, we load the pre-trained parameters and perform subsequent Supervised Fine-tuning (SFT) for QA tasks using the same objective described in Equation (9). We utilize the traditional QA-format training set D_train = {q_i, D_{q_i}, y_{q_i}}_{i=1}^{N_train} and merge the five augmented datasets {D_aug^i}_{i=1}^{5} with it. Using the preference-aligned reranker E_r, we reorder the documents and keep the top-k documents as described in Equation (10), forming the final training set D_train^rank = {q_i, D_{q_i}^{rank}, y_{q_i}}_{i=1}^{N_train} for the SFT stage:

D_{q_i}^{rank} = arg top-k [ E_r(q_i, D_{q_i}) ].   (10)

The preference knowledge identification capability developed during the pre-aligned stage enables LLMs to focus more effectively on aligned knowledge during the SFT stage, thereby enhancing their internal alignment potential. The prompt template for the SFT stage is as follows:

Prompt: Given the documents {Top-K Docs: D_q^rank}. Answer the following question based on the given information or your internal knowledge with few words without the source. Query: {q}.

# Experiments

# Datasets and Metrics

We select four question answering (QA) datasets covering three types: (1) Open-Domain QA, represented by NaturalQuestions (NQ) [84] and TriviaQA (TQA) [85]; (2) Multi-Hop QA, represented by HotpotQA (HQA) [86]; and (3) Knowledge Base QA, represented by WebQuestionsSP (WebQSP) [87]. Table 1 shows their statistics. For evaluation metrics, we use Hit@1 for the accuracy of the top-ranked response and the F1 score to assess the quality of and similarity to the ground truth. More details of the experimental setup are listed in Appendix B.

# Table 1: Dataset statistics (numbers of examples, in thousands)

|Dataset|Train|Dev|Test|
|---|---|---|---|
|NQ|79.2|8.7|3.6|
|TriviaQA|78.8|8.8|11.3|
|HotpotQA|88.9|5.6|5.6|
|WebQSP|2.84|0.25|1.6|

# Main Results

The experimental results are shown in Table 2. In general, our DPA-RAG significantly outperforms all baselines across the four datasets under different setups, which clearly highlights the superiority of our approach. We further have the following observations:
# Table 2: The main results of DPA-RAG and different kinds of baselines on four QA benchmarks. Values in parentheses are improvements over the corresponding RAG baseline with the same reader.

|Method|Reader|NQ Hit@1|NQ F1|TQA Hit@1|TQA F1|HQA Hit@1|HQA F1|WebQSP Hit@1|WebQSP F1|
|---|---|---|---|---|---|---|---|---|---|
|RAG [88]|GPT-3.5|47.47|47.99|75.04|74.13|26.28|32.84|67.97|63.33|
|RAG [89]|GPT-4|54.04|51.19|79.98|76.85|28.46|33.87|71.30|67.20|
|RAG [90]|LLaMA2-7B|50.94|54.76|63.90|63.80|31.40|38.90|68.52|64.22|
|RAG [90]|LLaMA2-13B|56.60|60.60|70.43|71.32|36.31|45.23|76.39|78.63|
|RAG [91]|LLaMA3-8B|54.81|58.33|69.54|71.21|34.28|42.29|72.82|73.94|
|RAG [92]|Qwen2-7B|52.01|56.13|63.88|66.52|31.39|39.70|75.98|77.82|
|RAG+RankGPT [59]|LLaMA2-7B|47.81|52.37|59.05|56.39|28.32|37.06|66.32|62.22|
|RAG+LRL [61]|LLaMA2-7B|48.09|53.06|60.33|56.86|29.13|37.81|67.43|63.44|
|RAG+PRP [60]|LLaMA2-7B|51.91|56.17|62.28|57.98|31.90|40.87|68.54|64.08|
|RAG+RankLLaMA [62]|LLaMA2-7B|52.18|56.62|62.34|58.05|32.31|41.39|69.11|65.70|
|RAG+BGE [51]|LLaMA2-7B|52.43|56.92|62.70|57.58|32.53|41.73|70.20|68.80|
|RAG+BCEmbedding [93]|LLaMA2-7B|49.91|53.19|61.93|57.67|31.52|40.59|68.20|65.40|
|RAG+ColBERTv2 [94]|LLaMA2-7B|51.49|56.02|62.34|58.16|31.72|40.79|69.70|66.90|
|KnowPAT [47]|LLaMA2-7B|51.42|54.82|63.20|65.20|29.00|37.40|68.73|65.31|
|REPLUG [36]|GPT-3.5|49.67|50.58|75.67|75.34|27.30|34.30|69.59|66.22|
|RA-Judgement [41]|GPT-3.5|48.52|50.18|76.21|76.58|26.50|32.81|66.07|68.32|
|RRHF [95]|LLaMA2-7B|50.11|52.01|62.50|60.20|28.16|35.40|66.90|63.10|
|RAFT [45]|LLaMA2-7B|50.24|53.86|60.10|57.40|30.20|35.80|-|-|
|FILCO [46]|LLaMA2-7B|52.71|55.32|67.30|67.80|32.70|40.80|69.96|68.34|
|DPA-RAG (ours)|GPT-3.5|51.60 (+4.13)|52.80 (+4.81)|78.65 (+3.61)|77.05 (+2.92)|28.42 (+2.14)|36.12 (+3.28)|71.80 (+3.83)|69.20 (+5.87)|
|DPA-RAG (ours)|GPT-4|56.45 (+2.41)|53.28 (+2.09)|84.41 (+4.43)|80.08 (+3.23)|33.79 (+5.33)|37.67 (+3.80)|73.12 (+1.82)|74.83 (+7.63)|
|DPA-RAG (ours)|LLaMA2-7B|56.03 (+5.09)|60.19 (+5.43)|70.16 (+6.26)|70.29 (+6.49)|35.23 (+3.83)|43.34 (+4.44)|72.40 (+3.88)|71.80 (+7.58)|
|DPA-RAG (ours)|LLaMA2-13B|59.19 (+2.59)|62.97 (+2.37)|74.18 (+3.75)|75.53 (+4.31)|41.07 (+4.76)|49.60 (+4.37)|80.28 (+3.89)|81.74 (+3.11)|
|DPA-RAG (ours)|LLaMA3-8B|57.43 (+2.62)|61.02 (+2.69)|72.04 (+2.50)|73.58 (+2.37)|36.01 (+1.73)|44.32 (+2.03)|74.26 (+1.44)|76.11 (+2.17)|
|DPA-RAG (ours)|Qwen2-7B|54.66 (+2.65)|58.84 (+2.71)|68.58 (+4.70)|70.26 (+3.74)|34.56 (+2.87)|42.47 (+2.77)|78.66 (+2.68)|80.53 (+2.71)|

(1) Compared to traditional RAG baselines, DPA-RAG (LLaMA2-7B) shows a remarkable performance improvement (over 5%) across all four datasets. More importantly, this improvement is consistent across various models, including LLaMA2-13B, Qwen2-7B, LLaMA3-8B, GPT-3.5, and GPT-4.
This indicates the broad applicability and generalizability of our method. (2) For reranker-based methods, we find that smaller rerankers such as BGE and ColBERTv2 can achieve comparable or even better performance than LLM-based rerankers. This result validates our motivation for using BGE as the alignment backbone, as it combines efficiency with effectiveness. (3) Among preference-aligned methods, DPA-RAG outperforms direct alignment methods (i.e., REPLUG and RA-Judgement), which rely on logits. This emphasizes the value of implementing multi-grained alignments within our framework. Surprisingly, FILCO, which employs data filtering, shows robust alignment capabilities, confirming that unaligned knowledge exists in training corpora. This observation again highlights the importance of our preference optimization at the data level, ensuring that the retrieved and used knowledge is highly relevant and aligned with the LLM's needs.

# Ablation Study

To explore the roles of different modules in DPA-RAG, we perform an ablation study; Table 3 shows the results. We use w/o to indicate the version without a particular module. We can see:

# Table 3: Ablation study on NQ and TQA

|Method|NQ Hits@1|NQ F1|TQA Hits@1|TQA F1|
|---|---|---|---|---|
|LLaMA2-7B RAG|50.94|54.76|63.90|63.80|
|LLaMA2-7B DPA-RAG|56.03|60.19|70.16|70.29|
|w/o PA-Rerank.|-3.23|-3.51|-3.64|-3.91|
|w/o Pre-Align.|-1.72|-1.76|-2.21|-2.45|
|w/o Pre-Align. + PA-Rerank.|-4.12|-4.21|-4.66|-4.50|
|w/o Query Aug.|-2.13|-2.31|-2.62|-2.87|

(1) The performance of DPA-RAG declines when any component is removed, which suggests that all components are effective. (2) Removing the preference-aligned reranker (PA-Rerank.) leads to the largest performance drop, indicating a clear knowledge preference gap between RAG components and LLMs. This confirms the benefit of using a preference-aligned reranker for external alignment. (3) The combined performance gains of the preference-aligned reranker and the pre-aligned task are lower than those of the complete DPA-RAG framework, which implies that integrating both alignment methods yields a mutually reinforcing effect, demonstrating the superiority of our dual alignment strategies.
# Figure 3: Scaling analysis of different reader model parameters on TQA and HQA, comparing the RAG baseline and DPA-RAG across readers ranging from Qwen1.5-0.5B/1.8B, Phi2-2.7B, Qwen1.5-4B, LLaMA2-7B, Mistral-7B, and Qwen1.5-7B up to LLaMA2-13B and Qwen1.5-14B.

# Figure 4: The comparison experiment of preference alignment on NQ and TQA (Llama2-RAG, Llama2-ColBERT-RAG, FILCO, and Llama2-DPA-RAG).
More detailed results can be found in Appendix C.1.

# 4.3 Quantitative Analysis

Scaling Analysis for Different Model Parameters. To investigate the relationship between parameter scale and RAG performance, we gradually increase the parameters of the LLM readers (ranging from 500M to 13B) and evaluate their performance. According to the results in Figure 3, we have the following observations:
- Emergence of RAG Capabilities at Lower Parameter Scales (<7B): We notice a significant improvement in RAG baseline performance, which rises sharply from 500M to 7B parameters (a 40% F1 score increase) and then stabilizes beyond 7B. A similar pattern is observed on HQA, indicating a strong correlation between the emergence of RAG capabilities and model parameters. This finding presents an interesting parallel to those reported in LIMA [96], where parameter increases below a certain threshold significantly boost model capabilities.
- Stable Performance Gains with DPA-RAG as Parameters Increase: Compared to the baseline, DPA-RAG delivers stable improvements as the parameter size expands across both datasets, displaying a smoother performance curve.
- Greater Benefits from DPA-RAG on Datasets with More Unaligned Knowledge: The performance gains from DPA-RAG vary interestingly between TQA and HQA as parameters increase. On TQA, where the average F1 score is already over 60, the model quickly reaches a high-performance threshold as parameters increase, leaving limited room for further improvement through preference alignment. Conversely, on HQA, characterized by more extensive unaligned knowledge and a lower average F1 score (below 50), the alignment gains provided by DPA-RAG exceed those from increasing foundational RAG capabilities alone, leading to a greater improvement from alignment.

Effectiveness of Preference Alignment. To delve deeper into the impact of preference alignment, in line with the setup in Section 3.2, we conduct a comparative experiment on answering queries directly versus referencing the top-3 documents.
# Figure 5: The left figure illustrates the visualization of data complexity and diversity for the different augmentation strategies (Origin, Rephrasing, SPARQL, Constraint, Decomposition, Complexity) on NQ. The right figure shows the performance of different training strategies (mixed training vs. standard QA training) on NQ.
As shown in Figure 4, DPA-RAG consistently achieves the highest scores in the "Aligned Knowledge" category on all datasets, while significantly reducing the "Unaligned Knowledge" category. This demonstrates that DPA-RAG effectively aligns retrieved knowledge with the LLM's inherent preferences. Interestingly, the improvement of DPA-RAG in the "Both Correct" category even exceeds that observed in "Aligned Knowledge". Given the significant decrease in "Unaligned Knowledge", this suggests that DPA-RAG prioritizes addressing the conflicts present in retrieved documents. This behavior is in line with our pipeline's core principle: the preference-aligned reranker first externally eliminates misaligned knowledge, and the subsequent self-alignment stage allows the LLM to more effectively and implicitly capture information that is aligned with its preferences.

Discussion on Query Augmentations. Liu [68] and Lu [97] highlight the significant impact of dataset complexity and diversity on model alignment. To investigate how the complexity and diversity of our augmented queries affect RAG performance, we randomly select 1,000 samples from each dataset and employ InsTag technology [97] for automated intent annotation. For each dataset, we measure diversity as (number of unique tags) / (number of all samples) and complexity as (number of all tags) / (number of all samples).

# Table 4: The performance result correlates with complexity and diversity on NQ

|Aug-Type|Complexity|Diversity|Total|NQ|
|---|---|---|---|---|
|Origin|1.61|0.35|1.96|51.78|
|Rephras.|1.64|0.39|2.03|52.27|
|SPARQL|1.77|0.39|2.16|52.95|
|Constraint|1.72|0.47|2.19|53.75|
|Decompos.|1.77|0.51|2.28|54.16|
|Complexity|1.85|0.48|2.33|54.81|

Figure 5 visualizes the quality of the augmented data, showing that our five methods consistently enhance data complexity. Specifically, Complexity and Decomposition markedly boost both the complexity and diversity scores, which also aligns with the case studies presented in Table 6. Moreover, we mix the augmented data with the original training set in their actual proportions and calculate the data quality. Table 4 shows that all five augmentation strategies enhance the LLM's performance to different degrees. Surprisingly, when we sum the two metrics, the overall performance trend on NQ increases along with the growth of the total quality score. This insight further validates that in RAG tasks, the effectiveness of query augmentations is highly correlated with their complexity and diversity.
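As a worked example of the two statistics, the following snippet computes complexity and diversity from per-query tag annotations. The tags here are made up for illustration; in the paper they come from the InsTag annotator.

```python
def complexity_and_diversity(tagged_samples):
    """Compute the two InsTag-style statistics used above.

    tagged_samples: list of tag lists, one list of intent tags per query.
    complexity = (number of all tags) / (number of all samples)
    diversity  = (number of unique tags) / (number of all samples)
    """
    n = len(tagged_samples)
    all_tags = [tag for tags in tagged_samples for tag in tags]
    complexity = len(all_tags) / n
    diversity = len(set(all_tags)) / n
    return complexity, diversity

# Toy example with three tagged queries.
samples = [["sports", "date lookup"], ["geography"], ["sports", "counting", "comparison"]]
print(complexity_and_diversity(samples))  # (2.0, 1.666...)
```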
Sequential Training vs. Mixed Training. In Section 3.4, we design a knowledge self-alignment task during the pre-aligned stage and then perform sequential SFT on the QA dataset. An alternative approach is to directly mix the preference data with the QA task data for joint training. Figure 5 illustrates the performance of these two training strategies across training steps. Compared to standard QA fine-tuning, mixing training data from both tasks leads to a noticeable performance decline and fluctuations, which may stem from optimization conflicts in multi-task training [98]. In contrast, sequential training after the pre-aligned stage yields stable performance gains, validating its efficacy. Similar conclusions have been reported in studies on reasoning [83, 99, 100, 101].
time, Yoran et al. (2023) train LMs to be robust to irrelevant content. Lastly, Chen et al. (2023) build an evaluation benchmark to test LMs' noise robustness. Our work similarly studies LMs' responses to noisy content but is more fine-grained, with varied noise ratios.

# References

Akari Asai, Matt Gardner, and Hannaneh Hajishirzi.
# Conclusion

In this paper, we reveal the inherent preference gap among RAG components and propose DPA-RAG to align diverse knowledge preferences. Specifically, we gradually extract and filter out the LLM-preferred knowledge from the training set, and propose five high-quality query augmentation strategies to alleviate data sparsity issues. Based on the preference data, we jointly integrate pair-wise, point-wise, and contrastive preference alignment abilities into the reranker, achieving external preference alignment among RAG components. Further, we introduce an LLM self-alignment task to remove knowledge biases and achieve internal alignment. Experimental results demonstrate that DPA-RAG outperforms all strong baselines across four knowledge-intensive QA datasets. The extensive analysis also provides practical insights for developing reliable RAG systems.

# References

[1] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.

[2] Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernández Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, and et al.
PaLM 2 technical report. CoRR, abs/2305.10403, 2023.

[3] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023.
[4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021.

[5] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. The flan collection: Designing data and methods for effective instruction tuning. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 22631–22648. PMLR, 2023.

[6] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural
[7] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. CoRR, abs/2308.09583, 2023.

[8] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. CoRR, abs/2306.08568, 2023.

[9] Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. CoRR, abs/2308.01825, 2023.

[10] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. In Jong C. Park, Yuki Arase, Baotian Hu, Wei Lu, Derry Wijaya, Ayu Purwarianti, and Adila Alfa Krisnadhi, editors, Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, IJCNLP 2023 - Volume 1: Long Papers, Nusa Dua, Bali, November 1 - 4, 2023, pages 675–718. Association for Computational Linguistics, 2023.

[11] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. Siren’s song in the AI ocean: A survey on hallucination in large language models. CoRR, abs/2309.01219, 2023.
[12] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. REALM: retrieval-augmented language model pre-training. CoRR, abs/2002.08909, 2020.

[13] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

[14] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 5687–5711. Association for Computational Linguistics, 2023.

[15] Jiaqi Bai, Hongcheng Guo, Jiaheng Liu, Jian Yang, Xinnian Liang, Zhao Yan, and Zhoujun Li. Griprank: Bridging the gap between retrieval and generation via the generative knowledge improved passage ranking. In Ingo Frommholz, Frank Hopfgartner, Mark Lee, Michael Oakes, Mounia Lalmas, Min Zhang, and Rodrygo L. T. Santos, editors, Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023, Birmingham, United Kingdom, October 21-25, 2023, pages 36–46. ACM, 2023.
[16] Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. A survey on retrieval-augmented text generation. CoRR, abs/2202.01110, 2022.

[17] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. Language models as knowledge bases? In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2463–2473. Association for Computational Linguistics, 2019.
[18] Sewon Min, Jordan L. Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick S. H. Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Sejr Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. Neurips 2020 efficientqa competition: Systems, analyses and lessons learned. In Hugo Jair Escalante and Katja Hofmann, editors, NeurIPS 2020 Competition and Demonstration Track, 6-12 December 2020, Virtual Event / Vancouver, BC, Canada, volume 133 of Proceedings of Machine Learning Research, pages 86–111. PMLR, 2020.
[19] Sarah Lebovitz, Natalia Levina, and Hila Lifshitz-Assaf. Is AI ground truth really true? the dangers of training and evaluating AI tools based on experts’ know-what. MIS Q., 45(3), 2021. [20] Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 525–536, 2018. [21] Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan Yee Ng, Juntao Dai, Xuehai Pan, Aidan O’Gara, Yingshan Lei, Hua Xu, Brian Tse, Jie Fu, Stephen McAleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, and Wen Gao. AI alignment: A comprehensive survey. CoRR, abs/2310.19852, 2023. [22] Yin Fang, Ningyu Zhang, Zhuo Chen, Lingbing Guo, Xiaohui Fan, and Huajun Chen. Domain-agnostic molecular generation with chemical feedback, 2024. [23] Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey. CoRR, abs/2307.12966, 2023. [24] Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Xin Zhao, and Ji-Rong Wen. Structgpt: A general framework for large language model to reason over structured data. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 9237–9251. Association for Computational Linguistics, 2023. [25] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. [26] Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. Chain of hindsight aligns language models with feedback. CoRR, abs/2302.02676, 2023. [27] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. CoRR, abs/2305.16960, 2023.
[28] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. CoRR, abs/2309.06657, 2023. [29] Deepak Nathani, David Wang, Liangming Pan, and William Yang Wang. MAF: multi-aspect feedback for improving reasoning in large language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 6591–6616. Association for Computational Linguistics, 2023.
[30] Bo Shen, Jiaxin Zhang, Taihong Chen, Daoguang Zan, Bing Geng, An Fu, Muhan Zeng, Ailun Yu, Jichuan Ji, Jingyang Zhao, Yuenan Guo, and Qianxiang Wang. Pangu-coder2: Boosting large language models for code with ranking feedback. CoRR, abs/2307.14936, 2023.

[31] Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training. CoRR, abs/2306.01693, 2023.

[32] Weizhe Yuan, Kyunghyun Cho, and Jason Weston. System-level natural language feedback. CoRR, abs/2306.13588, 2023.

[33] Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J. Liu. Slic-hf: Sequence likelihood calibration with human feedback. CoRR, abs/2305.10425, 2023.

[34] Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. In Michael J. Wooldridge, Jennifer G. Dy, and Sriraam Natarajan, editors, Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, pages 18990–18998. AAAI Press, 2024.

[35] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.

[36] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. REPLUG: retrieval-augmented black-box language models. CoRR, abs/2301.12652, 2023.

[37] Luiz Henrique Bonifacio, Hugo Queiroz Abonizio, Marzieh Fadaee, and Rodrigo Frassetto Nogueira. Inpars: Data augmentation for information retrieval using large language models. CoRR, abs/2202.05144, 2022.

[38] Vitor Jeronymo, Luiz Henrique Bonifacio, Hugo Queiroz Abonizio, Marzieh Fadaee, Roberto de Alencar Lotufo, Jakub Zavrel, and Rodrigo Frassetto Nogueira. Inpars-v2: Large language models as efficient dataset generators for information retrieval. CoRR, abs/2301.01820, 2023.

[39] Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. Active retrieval augmented generation. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 7969–7992. Association for Computational Linguistics, 2023.
[40] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.

[41] Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, and Haifeng Wang. Investigating the factual knowledge boundary of large language models with retrieval augmentation. CoRR, abs/2307.11019, 2023.

[42] Yujia Zhou, Zheng Liu, Jiajie Jin, Jian-Yun Nie, and Zhicheng Dou. Metacognitive retrieval-augmented large language models. CoRR, abs/2402.11626, 2024.

[43] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 10014–10037. Association for Computational Linguistics, 2023.
[44] Keheng Wang, Feiyu Duan, Peiguang Li, Sirui Wang, and Xunliang Cai. LLMs know what they need: Leveraging a missing information guided framework to empower retrieval-augmented generation. 2024.

[45] Tianjun Zhang, Shishir G. Patil, Naman Jain, Sheng Shen, Matei Zaharia, Ion Stoica, and Joseph E. Gonzalez. RAFT: adapting language model to domain specific RAG. 2024.

[46] Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md. Rizwan Parvez, and Graham Neubig. Learning to filter context for retrieval-augmented generation. 2023.

[47] Yichi Zhang, Zhuo Chen, Yin Fang, Yanxi Lu, Fangming Li, Wen Zhang, and Huajun Chen. Knowledgeable preference alignment for llms in domain-specific question answering. 2024.

[48] Jiajie Jin, Yutao Zhu, Yujia Zhou, and Zhicheng Dou. BIDER: bridging knowledge inconsistency for efficient retrieval-augmented llms via key supporting evidence. 2024.

[49] Zihao Wang, Anji Liu, Haowei Lin, Jiaqi Li, Xiaojian Ma, and Yitao Liang. RAT: retrieval augmented thoughts elicit context-aware reasoning in long-horizon generation. 2024.

[50] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. 2019.

[51] Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. C-pack: Packaged resources to advance general chinese embedding. 2023.

[52] Omar Khattab and Matei Zaharia. Colbert: Efficient and effective passage search via contextualized late interaction over BERT. 2020.

[53] Rodrigo Frassetto Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. Multi-stage document ranking with BERT. 2019.

[54] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. 2023.

[55] Rodrigo Frassetto Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. Document ranking with a pretrained sequence-to-sequence model. 2020.

[56] Jia-Huei Ju, Jheng-Hong Yang, and Chuan-Ju Wang. Text-to-text multi-view learning for passage re-ranking. 2021.

[57] Ronak Pradeep, Rodrigo Frassetto Nogueira, and Jimmy Lin. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. 2021.
[58] Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. Rankt5: Fine-tuning T5 for text ranking with ranking losses. In Hsin-Hsi Chen, Wei-Jou (Edward) Duh, Hen-Hsen Huang, Makoto P. Kato, Josiane Mothe, and Barbara Poblete, editors, Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, pages 2308–2313. ACM, 2023.

[59] Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and Zhaochun Ren. Is chatgpt good at search? investigating large language models as re-ranking agents. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 14918–14937. Association for Computational Linguistics, 2023.

[60] Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. Large language models are effective text rankers with pairwise ranking prompting. CoRR, abs/2306.17563, 2023.

[61] Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. Zero-shot listwise document reranking with a large language model. CoRR, abs/2305.02156, 2023.

[62] Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. Fine-tuning llama for multi-stage text retrieval. CoRR, abs/2310.08319, 2023.
[63] Changhua Pei, Yi Zhang, Yongfeng Zhang, Fei Sun, Xiao Lin, Hanxiao Sun, Jian Wu, Peng Jiang, Junfeng Ge, Wenwu Ou, and Dan Pei. Personalized re-ranking for recommendation. In Toine Bogers, Alan Said, Peter Brusilovsky, and Domonkos Tikk, editors, Proceedings of the 13th ACM Conference on Recommender Systems, RecSys 2019, Copenhagen, Denmark, September 16-20, 2019, pages 3–11. ACM, 2019. [64] Yi Li, Jieming Zhu, Weiwen Liu, Liangcai Su, Guohao Cai, Qi Zhang, Ruiming Tang, Xi Xiao, and Xiuqiang He. PEAR: personalized re-ranking with contextualized transformer for recommendation. In Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides Gionis, Ivan Herman, and Lionel Médini, editors, Companion of The Web Conference 2022, Virtual Event / Lyon, France, April 25 - 29, 2022, pages 62–66. ACM, 2022.
[65] Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin Franz, Salim Roukos, Avirup Sil, Md. Arafat Sultan, and Christopher Potts. UDAPDR: unsupervised domain adaptation via LLM prompting and distillation of rerankers. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 11265–11279. Association for Computational Linguistics, 2023.

[66] Yubo Ma, Yixin Cao, Yong Hong, and Aixin Sun. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 10572–10601. Association for Computational Linguistics, 2023.

[67] Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. XRICL: cross-lingual retrieval-augmented in-context learning for cross-lingual text-to-sql semantic parsing. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5248–5259. Association for Computational Linguistics, 2022.

[68] Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. What makes good data for alignment? A comprehensive study of automatic data selection in instruction tuning. CoRR, abs/2312.15685, 2023.

[69] Weihao Zeng, Can Xu, Yingxiu Zhao, Jian-Guang Lou, and Weizhu Chen. Automatic instruction evolving for large language models, 2024.
[70] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. CoRR, abs/2309.12284, 2023.

[71] Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, and Chang Zhou. Query and response augmentation cannot help out-of-domain math reasoning generalization. CoRR, abs/2310.05506, 2023.

[72] Claude E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27(3):379–423, 1948.

[73] Shengyao Zhuang, Bing Liu, Bevan Koopman, and Guido Zuccon. Open-source large language models are strong zero-shot query likelihood models for document ranking. arXiv preprint arXiv:2310.13243, 2023.

[74] Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. Learning to summarize from human feedback. CoRR, abs/2009.01325, 2020.

[75] Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. CoRR, abs/1807.03748, 2018.

[76] Philip Bachman, R. Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 15509–15519, 2019.

[77] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

[78] Rich Caruana. Multitask learning. Mach. Learn., 28(1):41–75, 1997.

[79] Bernardino Romera-Paredes, Hane Aung, Nadia Bianchi-Berthouze, and Massimiliano Pontil. Multilinear multitask learning. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pages 1444–1452. JMLR.org, 2013.

[80] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.

[81] Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qingfu Zhang, and Sam Kwong. Pareto multi-task learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 12037–12047, 2019.

[82] Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, and Nigel Collier. Self-alignment pretraining for biomedical entity representations. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tür, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 4228–4238. Association for Computational Linguistics, 2021.
[83] Yejie Wang, Keqing He, Guanting Dong, Pei Wang, Weihao Zeng, Muxi Diao, Yutao Mou, Mengdi Zhang, Jingang Wang, Xunliang Cai, and Weiran Xu. Dolphcoder: Echo-locating code large language models with diverse and multi-objective instruction tuning. CoRR, abs/2402.09136, 2024.

[84] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452–466, 2019.

[85] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan, editors, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1601–1611. Association for Computational Linguistics, 2017.

[86] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun’ichi Tsujii, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2369–2380. Association for Computational Linguistics, 2018.

[87] Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. The Association for Computer Linguistics, 2016.

[88] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS, 2022.

[89] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023.

[90] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023.

[91] Meta. Introducing meta llama 3: The most capable openly available llm to date, 2024.

[92] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
[93] NetEase Youdao, Inc. Bcembedding: Bilingual and crosslingual embedding for rag. https://github.com/netease-youdao/BCEmbedding, 2023.

[94] Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. Colbertv2: Effective and efficient retrieval via lightweight late interaction. In Marine Carpuat, Marie-Catherine de Marneffe, and Iván Vladimir Meza Ruíz, editors, Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3715–3734. Association for Computational Linguistics, 2022.

[95] Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. RRHF: rank responses to align language models with human feedback without tears. CoRR, abs/2304.05302, 2023.

[96] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. LIMA: less is more for alignment. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.

[97] Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, and Jingren Zhou. #instag: Instruction tagging for analyzing supervised fine-tuning of large language models. CoRR, abs/2308.07074, 2023.

[98] Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. How abilities in large language models are affected by supervised fine-tuning data composition, 2024.

[99] Chengyue Wu, Yukang Gan, Yixiao Ge, Zeyu Lu, Jiahao Wang, Ye Feng, Ping Luo, and Ying Shan. Llama pro: Progressive llama with block expansion. arXiv preprint arXiv:2401.02415, 2024.
[100] Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, et al. The art of balancing: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment. arXiv preprint arXiv:2312.09979, 2023.

[101] Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei. Mathscale: Scaling instruction tuning for mathematical reasoning, 2024.

[102] Guanting Dong, Keming Lu, Chengpeng Li, Tingyu Xia, Bowen Yu, Chang Zhou, and Jingren Zhou. Self-play with execution feedback: Improving instruction-following capabilities of large language models, 2024.

[103] Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. UniK-QA: Unified representations of structured and unstructured knowledge for open-domain question answering. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1535–1546, Seattle, United States, July 2022. Association for Computational Linguistics.

[104] Guanting Dong, Rumei Li, Sirui Wang, Yupeng Zhang, Yunsen Xian, and Weiran Xu. Bridging the kb-text gap: Leveraging structured knowledge-aware pre-training for kbqa, 2023.

[105] Haoran Luo, Haihong E, Zichen Tang, Shiyao Peng, Yikai Guo, Wentai Zhang, Chenghao Ma, Guanting Dong, Meina Song, and Wei Lin. Chatkbqa: A generate-then-retrieve framework for knowledge base question answering with fine-tuned large language models, 2023.

[106] Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering, 2020.

[107] Denny Vrandečić and Markus Krötzsch. Wikidata: A free collaborative knowledgebase. Commun. ACM, 57(10):78–85, sep 2014.
[108] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.

[109] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. arXiv preprint arXiv:2403.13372, 2024.

[110] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. CoRR, abs/2309.16609, 2023.

[111] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b. CoRR, abs/2310.06825, 2023.

[112] Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Sebastien Bubeck, Caio César Teodoro Mendes, Weizhu Chen, Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, et al. Phi-2: The surprising power of small language models. Microsoft Research Blog, 2023.
# Appendix

# Contents

- 1 Introduction
- 2 Related Work
- 3 Methodology
  - 3.1 Task Definition
  - 3.2 Preference Knowledge Construction
  - 3.3 Reranker-LLM Alignment
  - 3.4 LLM Self-Alignment
- 4 Experiments
  - 4.1 Datasets and Metrics
  - 4.2 Main Results
  - 4.3 Quantitative Analysis
- 5 Conclusion
- A More Details about DPA-RAG
  - A.1 The Overall Algorithm Workflow of DPA-RAG
  - A.2 Preference Scoring Mechanism for Different LLMs
- B More Details on Experiment Setup
  - B.1 Datasets
  - B.2 Prompt Templates
  - B.3 Implementation Details
  - B.4 Baselines
- C More Details about Experimental Results
  - C.1 Detailed Results for Ablation Studies
  - C.2 Details about Diverse Query Augmentations
  - C.3 Case Studies for Preference Alignment
# More Details about DPA-RAG

A.1 The Overall Algorithm Workflow of DPA-RAG

In this section, we delve into the overall workflow of the DPA-RAG algorithm, which can be divided into reranker training and LLM-based generator training.

Reranker Training Algorithm. Given the train set $D_{train} = \{q_i, D_{q_i}, y_{q_i}\}_{i=1}^{N_{train}}$, we first apply preference knowledge mining to select, augment, and filter the data, constructing a preference-aligned dataset $D_{pref}$. Relying on $D_{pref}$, we then perform multi-grained distillation alignment with the MGDA-UB strategy to fine-tune a preference-aligned reranker. The detailed process is listed in Algorithm 1.

LLM-based Reader Training Algorithm. As shown in Algorithm 2, for an open-source LLM-based reader, we directly use the preference-aligned reranker to perform preference-based reranking of the retrieved documents in $D_{train}$ and $D_{test}$, yielding the sorted datasets $D_{train}^{rank}$ and $D_{test}^{rank}$. In addition, we construct a dataset for the knowledge self-alignment task based on $D_{pref}$. We first use this dataset for the pre-aligned task, then load the pre-trained model parameters and conduct vanilla QA supervised fine-tuning on $D_{train}^{rank}$. During the inference phase, we feed the preference-sorted test set $D_{test}^{rank}$ into the LLM to complete the prediction. For a closed-source LLM-based reader, the process is simpler: the preference-aligned reranker sorts the documents in the test set ($D_{test} \rightarrow D_{test}^{rank}$), and the LLM is then used directly for prediction.

A.2 Preference Scoring Mechanism for Different LLMs

In practice, we find that models with fewer than 7B parameters struggle with instruction following, making it difficult for them to perform the scoring task directly. To address this, we follow RankLLaMA and RePLUG and use the model's output logits as the basis for scoring:

$$r_\theta(q, d) = \log P_\theta(\mathrm{prompt}(q, d)) \quad (11)$$

$$s_i = a \cdot r_\theta(q, p_i) + (1 - a) \cdot s_R(q, p_i) \quad (12)$$

where $q$ and $d_i$ denote the query and the top $i$-th document, $\log P_\theta(\cdot)$ represents the model's probability distribution, and $\mathrm{prompt}(\cdot)$ denotes the prompt template.
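To make the logit-based scoring of Equations (11)-(12) concrete, the sketch below scores a (query, document) pair by the token-averaged log-likelihood of a filled prompt under a small causal LM and fuses it with a retrieval score. This is a minimal illustration rather than the paper's implementation: the model name, the prompt template, the token-level averaging, and the default weight `a` are all assumptions.

```python
# Minimal sketch of Eq. (11)-(12): score a (query, document) pair by the
# LM's log-likelihood of prompt(q, d), then fuse it with a retrieval score.
# The model choice, prompt template, and a=0.5 are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for any small causal LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def lm_log_likelihood(query: str, doc: str) -> float:
    """r_theta(q, d): token-averaged log P_theta(prompt(q, d)) under the LM."""
    prompt = f"Passage: {doc}\nQuestion: {query}\nDoes the passage help answer the question?"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position t predicts token t+1, so shift logits and targets by one.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.mean().item()


def fused_score(query: str, doc: str, retrieval_score: float, a: float = 0.5) -> float:
    """s_i = a * r_theta(q, p_i) + (1 - a) * s_R(q, p_i), as in Eq. (12)."""
    return a * lm_log_likelihood(query, doc) + (1 - a) * retrieval_score
```

In practice, the LM log-likelihood and the retriever's score live on different scales, so some normalization over the candidate list (e.g., min-max or softmax) would likely be needed before mixing them with a fixed weight.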