# More Analysis
# Robustness to ratio of Positive Passages
Our INFO-RAG improves the robustness of RAG to retrieval performance. The performance of the retriever greatly affects the performance of the LLM in RAG (Chen et al., 2023), so we explore this effect in this section. Specifically, we simulate changes in retrieval performance by varying the ratio of positive and negative passages in the retrieved list and report the RAG performance at each ratio. Table 8 shows that INFO-RAG performs better when the ratio is low and that its performance is more stable than the baseline as the ratio changes from 100% to 0% (Max ∆). The model in this experiment is LLaMA-2-13B-chat.
# Robustness to Positive Passage Position
Experimental results in Table 9 show that our INFO-RAG consistently outperforms the baseline (LLaMA-2) regardless of where the positive passage (the passage containing the correct answer) appears in the retrieved list. Specifically, we mix positive and negative passages in a ratio of 1:9 to simulate the retrieved passage list, vary the position of the positive passage in the list from 0 to 9, and evaluate the corresponding RAG performance. The model in this experiment is LLaMA-2-13B-chat. The results show that INFO-RAG not only outperforms the baseline at every position but also achieves more stable performance as the position varies (Max ∆).
# Robustness to Number of Retrieved Passages
Experimental results in Table 10 show that our INFO-RAG consistently outperforms the baseline across different numbers of retrieved passages (from 1 to 10) and is robust to changes in this number. In this experiment, we use LLaMA-2-13B-chat as the base model, vary the number of retrieved passages from 1 to 10, and evaluate the corresponding performance.
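As a concrete illustration of the setup in these three experiments, the sketch below builds retrieved lists with a controlled positive-passage ratio or position and computes the Max ∆ statistic. The passage pools, list length, and shuffling are our assumptions; Max ∆ is taken to be the largest relative drop from the best-performing setting, which reproduces the values reported in Tables 8-10 (e.g. NQ with LLaMA-2 in Table 8: (42.35 - 88.11) / 88.11 ≈ -51.94%).

```python
import random

def list_with_ratio(positives, negatives, k=10, positive_ratio=0.5, seed=0):
    # Simulate a retrieved list of k passages with a given ratio of positive passages.
    rng = random.Random(seed)
    n_pos = round(k * positive_ratio)
    passages = rng.sample(positives, n_pos) + rng.sample(negatives, k - n_pos)
    rng.shuffle(passages)
    return passages

def list_with_position(positive, negatives, k=10, position=0, seed=0):
    # Simulate a 1:9 positive:negative list and place the positive passage at `position`.
    rng = random.Random(seed)
    passages = rng.sample(negatives, k - 1)
    passages.insert(position, positive)
    return passages

def max_delta(scores):
    # Largest relative drop from the best-performing setting (the "Max ∆" column).
    best = max(scores)
    return (min(scores) - best) / best
```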
|Data|Model|ratio of Positive Passages (100% → 0%)| | | | | |Max ∆|
|---|---|---|---|---|---|---|---|---|
|NQ|LLaMA-2|88.11|82.71|80.81|77.62|69.73|42.35|-51.94%|
| |+ INFO-RAG|90.31|83.72|81.72|79.72|71.52|51.04|-43.48%|
|WebQ|LLaMA-2|79.41|75.43|71.63|65.53|63.39|39.25|-50.57%|
| |+ INFO-RAG|83.66|76.23|74.23|69.05|65.74|45.61|-45.48%|
|T-REx|LLaMA-2|80.01|70.05|71.52|68.53|66.23|42.75|-46.57%|
| |+ INFO-RAG|83.52|73.22|74.93|72.32|70.12|46.45|-44.38%|
|ZS|LLaMA-2|69.52|65.48|63.81|60.95|57.14|28.33|-59.25%|
| |+ INFO-RAG|72.50|72.62|67.62|67.86|60.48|36.19|-50.08%|

# Table 8: RAG performance changes with the ratio of positive passages in the retrieved list (randomly selected 500 samples).
|Datasets|Method|0|1|2|3|4|5|6|7|8|9|Max ∆|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|NQ|LLaMA-2|54.94|48.05|46.05|46.45|46.35|48.30|48.35|47.15|51.64|50.44|-16.18%|
| |+ INFO-RAG|63.23|58.34|54.54|54.44|53.54|53.24|53.84|54.44|53.34|53.34|-15.80%|
|WebQ|LLaMA-2|66.13|63.21|62.54|62.68|64.01|62.41|63.21|64.54|63.87|64.14|-5.63%|
| |+ INFO-RAG|71.58|68.39|66.26|65.34|67.19|65.73|65.73|65.81|65.54|66.72|-8.72%|
|T-REx|LLaMA-2|64.43|60.13|58.34|60.23|58.54|59.14|59.74|60.53|63.53|63.23|-9.45%|
| |+ INFO-RAG|70.72|66.23|64.93|65.23|65.43|64.83|66.03|67.23|64.63|66.83|-8.61%|
|ZS|LLaMA-2|63.04|59.04|54.59|55.03|55.17|57.15|56.42|57.89|58.04|59.47|-13.40%|
| |+ INFO-RAG|66.42|63.33|59.04|60.23|61.42|61.66|60.00|61.19|60.23|62.14|-11.11%|

# Table 9: RAG performance changes with the position (0-9) of the positive passage in the retrieved list (randomly selected 500 samples).
|Method|T-REx|ZS|NQ|WebQ|
|---|---|---|---|---|
|Baseline|51.47|40.26|45.05|41.78|
|+ INFO-RAG|55.67|43.29|49.76|44.02|

# Table 11: Works based on BM25.

|Method|T-REx|ZS|NQ|WebQ|
|---|---|---|---|---|
|Baseline|62.53|56.81|50.36|45.47|
|Simple Mask|64.05|58.91|53.80|50.55|
|Our method|65.39|59.05|54.04|51.07|

# Table 13: Ablation study of masking strategy.
# Ablation Study on Masking Strategy
In general, Tables 13 and 12 show that our masking strategy in Scenario 3 is more effective than simple and straightforward masking. Specifically, our method is significantly more effective in the scenarios where correct answers are randomly replaced with other phrases (replace) and where retrieval cannot find any answer (no answer).
# Works with Different Retriever
We evaluate our method and the baseline (LLaMA-2-13B-chat) with BM25 as the retriever. The experimental results shown in Table 11 indicate that our method still performs better than the baseline when BM25 is used as the retriever.
# Performance on MMLU
Experimental results on the MMLU benchmark in the setting without RAG, shown in Table 14, demonstrate that our INFO-RAG maintains the versatility of the LLM and avoids catastrophic forgetting while significantly improving its performance in RAG. MMLU is a benchmark that measures the massive multitask language understanding ability of LLMs. It covers 57 subjects across STEM, the humanities, the social sciences, and more. It ranges in difficulty from an elementary level to an advanced professional level, and it tests both world knowledge and problem-solving ability (Hendrycks et al., 2020). The experiments show that our INFO-RAG performs very close to the original LLaMA-2 on MMLU, which shows that INFO-RAG does not damage the basic language understanding ability of LLMs. This is mainly because the prefix language model-
|Datasets|Method|1|2|3|4|5|6|7|8|9|10|Max ∆|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|NQ|LLaMA-2|38.80|43.21|46.62|47.84|48.61|49.42|52.03|50.23|50.40|50.20|-25.43%|
| |INFO-RAG|45.18|46.80|51.44|51.23|51.00|53.21|54.03|53.44|53.82|54.60|-17.25%|
|WebQ|LLaMA-2|40.22|43.63|48.20|46.61|48.32|49.11|49.40|50.22|51.65|50.43|-22.13%|
| |INFO-RAG|50.21|53.84|54.41|55.07|55.25|55.27|57.00|55.45|56.62|56.03|-11.91%|
|T-REx|LLaMA-2|66.20|63.45|67.22|64.45|64.43|65.40|64.41|65.22|63.22|65.01|-5.95%|
| |INFO-RAG|66.25|66.03|66.31|65.80|67.23|67.22|66.65|67.83|67.03|67.40|-2.99%|
|ZS|LLaMA-2|49.25|50.01|52.38|54.09|56.12|56.20|56.13|56.05|55.95|56.11|-12.37%|
| |INFO-RAG|53.17|54.08|56.35|58.01|59.45|59.12|59.40|58.55|60.03|59.08|-11.43%|

# Table 10: RAG performance changes with the number of retrieved passages (from 1 to 10).
| |T-REx|ZS|NQ|WebQ|
|---|---|---|---|---|
|Baseline|75.96 43.79 5.59|67.03 16.58 1.42|69.37 30.72 6.16|65.07 31.88 5.47|
|Simple Mask|78.43 44.05 5.75|70.30 19.45 1.96|73.59 31.05 6.51|70.55 32.96 6.83|
|Our method|79.25 48.59 6.67|70.26 25.02 3.87|73.73 33.85 8.39|70.59 37.48 11.25|
| |Humanities|STEM|Social-Sciences|Other|Average|
|---|---|---|---|---|---|
|LLaMA-2-7B w/o RAG|42.9|36.4|51.2|52.2|45.3|
|INFO-RAG w/o RAG|42.8|36.1|50.8|52.0|45.0|
|LLaMA-2-13B w/o RAG|52.8|44.1|62.6|61.1|54.8|
|INFO-RAG w/o RAG|52.5|43.7|62.1|60.9|54.3|

# Table 14: Results of LLaMA-2 and INFO-RAG on the MMLU benchmark (without RAG).
arXiv:2403.09727v1 [cs.CL] 12 Mar 2024
# Investigating the performance of Retrieval-Augmented Generation and fine-tuning for the development of AI-driven knowledge-based systems
Róbert Lakatos2,3,4, Péter Pollner1, András Hajdu2, and Tamás Joó1,4
1Data-Driven Health Division of National Laboratory for Health Security, Health Services Management Training Centre, Semmelweis University
2Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen
3Doctoral School of Informatics, University of Debrecen
4Neumann Technology Platform, Neumann Nonprofit Ltd.
# Abstract
The development of generative large language models (G-LLM) opened up new opportunities for the development of new types of knowledge-based systems similar to ChatGPT, Bing, or Gemini. Fine-tuning (FN) and Retrieval-Augmented Generation (RAG) are techniques that can be used to implement domain adaptation for the development of G-LLM-based knowledge systems. In our study, using ROUGE, BLEU, METEOR scores, and cosine similarity, we compare and examine the performance of RAG and FN for the GPT-J-6B, OPT-6.7B, LlaMA, and LlaMA-2 language models.
Based on measurements performed on different datasets, we demonstrate that RAG-based constructions are more efficient than models produced with FN. We point out that connecting RAG and FN is not trivial, because connecting FN models with RAG can cause a decrease in performance. Furthermore, we outline a simple RAG-based architecture which, on average, outperforms the FN models by 16% in terms of the ROUGE score, 15% in the case of the BLEU score, and 53% based on the cosine similarity. This shows the significant advantage of RAG over FN in terms of hallucination, which is not offset by the fact that the average 8% better METEOR score of the FN models indicates greater creativity compared to RAG.
# Introduction
Transformer-based large language models (LLMs) are a major advance in natural language processing (NLP). Recently, the most popular networks, such as BERT, XL, or GPT, and their different versions have constantly competed with each other on various language tasks. However, currently the best results are achieved by generative large language models (G-LLMs). Among G-LLMs, the GPT network, developed by OpenAI and operating as part of the ChatGPT service and the Microsoft Bing search engine, is considered a pioneer. But other competitors have also appeared, such as PaLM, developed by Google, or its further developed version, Gemini, which is the basis of Google's Bard system. Furthermore, it is also worth mentioning the LLaMA model. LLaMA is an open-source G-LLM created by Meta. The most advanced versions of these model families demonstrate remarkable language comprehension and generation capabilities. G-LLMs have transformed the field of natural language processing by achieving next-level performance in various tasks, including text generation, translation, and question-answering. These models are trained on massive datasets of text and code, enabling them to capture complex linguistic patterns and generate human-quality text. However, their true potential often emerges when applied to concrete domains. An example of this is the FunSearch evolution process: the authors of FunSearch show how language models can even be used to solve mathematical problems.
A specialized field of application of G-LLMs is AI-driven knowledge systems, which represent a novel approach to the development of such systems. General and at the same time multimodal approaches to AI-driven knowledge-based systems are the services of OpenAI ChatGPT, Microsoft Bing, or Google Bard. Knowledge-based systems using classic database queries are designed to retrieve and store information in a way that is easy for humans to understand. For this, a combination of NLP and information retrieval techniques (special indexing and querying) is used. In contrast, G-LLMs are suitable for creating new texts, code, scripts, musical works, e-mails, or letters, for example. This allows them to create new information that is not already in the data they were trained on. This makes them great for tasks such as writing creative text formats, translating languages, or answering complex questions.
In turn, these capabilities can be extended to create state-of-the-art knowledge-based systems as well.
To be able to expand the capabilities of G-LLMs and make them suitable for building a knowledge-based system, there are two dominant strategies: FN and RAG. In the field of domain adaptation, both solutions offer unique advantages and challenges.
FN involves further training the G-LLM on domain- and task-specific texts. It is a well-established technique for the domain adaptation of G-LLMs: further training the LLM on a dataset of domain-specific text allows the model to incorporate domain-specific knowledge. This approach is effective in improving the performance of G-LLMs on a variety of tasks, including text generation, machine translation, or question-answering.
In the case of RAG, we use a semantic search engine to find the relevant information, which is injected into the context of the G-LLM to help the model solve the task, since G-LLMs are sensitive to the provided context due to the attention mechanism of the transformer architecture. One of the biggest advantages of this approach is that it does not require continuous retraining of the G-LLM: it is enough to extend the database used to generate the context in order to increase the knowledge base of the system.
An important difference between RAG and FN is that in the case of FN, the risk of hallucination may be greater than in the case of RAG. However, fine-tuned models can better adapt to the target task and reach conclusions that may not be available with RAG. Naturally, we can also apply ensemble approaches. However, it is far from trivial whether we can combine the capabilities provided by RAG and FN to achieve better performance.
Currently, there is no recommendation or best practice that precisely defines how to build a knowledge-based system using G-LLMs. This deficiency motivated this chapter of my dissertation, in which I present a possible method of building a knowledge-based system, considering that a G-LLM can even be used as an AI-driven expert system in the case of a well-defined target area.
This chapter of my thesis is structured as follows. In section 2, we present the databases used to find the best parameters, settings, and methods. In section 3, we describe our methodological approach. In section 4, we provide the measurement results that demonstrate the performance of the different solutions. Finally, in section 5, we draw our conclusions.
# Data
We applied two approaches to create the data. On the one hand, we examined how we can create datasets from PDF and Microsoft Word-based scientific publications, because our long-term goals include building our own G-LLM-based system. On the other hand, besides the data we created ourselves, we composed a second dataset for the measurements from publicly available data. All this is to make our results easily reproducible and verifiable.
For the scientific dataset we collected ourselves, we curated a collection of specialist publications on urban monitoring and corn cultivation with the help of the National Library of the University of Debrecen and the Faculty of Agriculture, Food Science, and Environmental Management. This corpus, comprising 69 pieces of literature on corn cultivation (CORN) and 83 pieces of literature on urban monitoring (UB), provided a rich source of domain-specific terminology and concepts. Every article or book was available to us in PDF or Word format, and the University of Debrecen had to have a special license under which we could download the publications.
As an independent and open-access dataset, we utilized the CORD-19 dataset, a freely available repository of tens of thousands of scientific articles on COVID-19, SARS-CoV-2, and related coronaviruses. It is in JSON format and represents about 80 GB of text data.
The data preparation processes for the model’s FN and the RAG application are described in more detail in subsection 3.3.
# Methodology
To decide whether RAG or FN is the better approach for creating a G-LLM-based system, we used the following method. We determined the models suitable for the task (subsection 3.1), selected the appropriate metrics (3.2), and prepared the data according to the needs of RAG and FN (3.3). We then fine-tuned the models (3.4) and finally evaluated their performance based on the metrics (3.5).
# 3.1 Models
To select the models, we took into account the following aspects:
- The models must be G-LLM.
- The models have been documented scientifically.
- The models have pre-trained versions.
- The models have been well implemented. That means they should be part of the model repository of the HuggingFace12 and PyTorch13 development libraries.
The resources provided by NVIDIA DGX systems, to which we have access, should be sufficient to train and test the models. Based on these criteria, we selected the GPT-J-6B,14 OPT-6.7B,15 LLaMA-7B,9 and LLaMA2-7B9 models.
# 3.2 Selected metrics
The following metrics were used to determine the performance of the different language models: Bilingual Evaluation Understudy (BLEU),16 Recall-Oriented Understudy for Gisting Evaluation (ROUGE),17 Metric for Evaluation of Translation with Explicit ORdering (METEOR),18 and cosine similarity as defined in Equation (1).
$$\mathrm{Cosine}(x, y) = \frac{x \cdot y}{\|x\|\,\|y\|} = \frac{\sum_{i=1}^{n} x_i y_i}{\sqrt{\sum_{i=1}^{n} x_i^2}\,\sqrt{\sum_{i=1}^{n} y_i^2}} \quad (1)$$
BLEU is used to measure machine translation performance. BLEU measures n-gram accuracy, which means it counts how many n-grams of the generated text are found in the reference translation.
ROUGE is used to measure the performance of machine translation and text summarization tasks and measures recall, which means that it counts how many n-grams of the reference translation are found in the generated text. ROUGE is designed to work around some of BLEU’s limitations. Namely, ROUGE places more emphasis on recall than BLEU and better takes into account the meaning of the text.
METEOR is used to measure the performance of machine translation, text summaries, and creative text formats. METEOR measures Recall, Precision, and word order compatibility.
Cosine similarity is also used to measure the similarity of texts. To use it, the text must be converted into sentence or word vectors and then the cosine similarity between the vectors must be calculated. A higher cosine similarity means that the texts are more similar to each other. We applied this approach by dividing the generated and reference texts into sentences and then converting the individual sentences into embedding vectors using the MiniLM L6 v219 sentence transformer. After that, we calculated
$$CS = \mathrm{Cosine}(G \times R) = \{\,\mathrm{Cosine}(g, r) \mid g \in G \text{ and } r \in R\,\} \quad (2)$$
where R is the set of sentence vectors of the reference text and G is the set of sentence vectors of the generated text. Finally, we applied
$$\frac{1}{n}\sum_{v=1}^{n} \max(CS_v) \quad (3)$$
where CS_v denotes the row of the cosine similarity matrix CS that belongs to the v-th generated sentence, i.e. its similarity values against all reference sentences. Taking the maximum of each row selects, for every generated sentence, the reference sentence with which the greatest similarity is measured. In other words, we calculated the average of the best matches of the generated sentences with the reference sentences.
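This score can be written down in a few lines of Python; the sketch below is our reading of Equations (1)-(3), assuming the sentence-transformers package and the publicly available all-MiniLM-L6-v2 checkpoint (the "MiniLM L6 v2" sentence transformer mentioned above).

```python
import numpy as np
from sentence_transformers import SentenceTransformer

_embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def cs_score(generated_sentences, reference_sentences):
    # Embed both sentence sets and L2-normalise so a dot product equals cosine similarity (Eq. 1).
    G = _embedder.encode(generated_sentences)
    R = _embedder.encode(reference_sentences)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    R = R / np.linalg.norm(R, axis=1, keepdims=True)
    CS = G @ R.T                           # Eq. 2: similarity of every generated/reference pair
    return float(np.mean(CS.max(axis=1)))  # Eq. 3: average of each generated sentence's best match
```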
To summarize and compare these metrics:
- BLEU is usually the best metric used for machine translation and takes into account matches of words and sentence structures.
- ROUGE is generally the best metric to use for text summaries and creative text formats.
- METEOR is a good general-purpose metric. The meaning and style of the creative text formats such as poems and stories are often evaluated with METEOR.
- With the cosine similarity, we introduce a metric based on vector similarity.
With this, we were able to measure the semantic similarity between the referenced and the generated texts.
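For the lexical metrics, the Hugging Face evaluate package offers ready-made implementations; the sketch below is an assumption about tooling (the paper does not name the exact implementations) and reports ROUGE-L as the ROUGE value.

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
meteor = evaluate.load("meteor")

def lexical_scores(predictions, references):
    # predictions and references are parallel lists of generated and reference answers.
    return {
        "ROUGE-L": rouge.compute(predictions=predictions, references=references)["rougeL"],
        "BLEU": bleu.compute(predictions=predictions, references=references)["bleu"],
        "METEOR": meteor.compute(predictions=predictions, references=references)["meteor"],
    }
```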
# 3.3 Data preparation for RAG and FN
In the case of RAG and FN, we had to use two different approaches to data preparation. In the case of FN, we considered the method of the Stanford Alpaca20 model to be the guiding principle. In the case of RAG, we have created easily searchable datasets capable of supporting context transfer.
# 3.3.1 Q&A datasets for FN
To prepare the Q&A datasets from the collected CORN and UB documents, we split the documents into paragraphs. In the next step, we converted them to raw text, and then we cleaned them with the help of human experts. Regarding the COVID data, since the entire COVID dataset was too large for our computational resources (NVIDIA DGX systems), we extracted a subset from it. For this, we selected articles from the COVID dataset based on the following filter criteria:
- Articles must have abstracts.
- Articles must be in the PubMed Central repository. That is, the articles must be open access and belong to medical biology and the life sciences.
- Articles must have an arXiv id. This also strengthens open access.
- Articles must not contain LaTeX elements, so they can be easily read and validated by human experts.
With these conditions, we managed to restrict the dataset in such a way that we were able to manage it in our environment. We also divided our own datasets (CORN, UB) and the COVID dataset into paragraphs. To do this, we took into account the tokenizers of each model. When dividing the paragraphs, we worked in such a way that the individual text parts cannot be longer than 256 tokens according to any model's tokenizer.
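As an illustration of this 256-token constraint, the sketch below accepts a chunk only if it fits the budget under every tokenizer and splits longer paragraphs on sentence boundaries; the tokenizer checkpoints and the naive sentence splitting are our assumptions, not the authors' exact pipeline.

```python
from transformers import AutoTokenizer

# Example checkpoints; in the paper the tokenizers of all four models are taken into account.
TOKENIZERS = [AutoTokenizer.from_pretrained(name)
              for name in ("EleutherAI/gpt-j-6b", "facebook/opt-6.7b")]

def within_budget(text, max_tokens=256):
    # A chunk is accepted only if it is short enough under *every* model's tokenizer.
    return all(len(tok(text)["input_ids"]) <= max_tokens for tok in TOKENIZERS)

def split_paragraph(paragraph, max_tokens=256):
    chunks, current = [], ""
    for sentence in paragraph.split(". "):          # naive sentence split, for illustration only
        candidate = f"{current} {sentence}".strip()
        if within_budget(candidate, max_tokens):
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = sentence                      # an over-long single sentence is kept as-is
    if current:
        chunks.append(current)
    return chunks
```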
To create the questions of the Q&A datasets, we used a BERT-based question generator, which is available as part of the Hugging Face model collection. We generated 5 questions for each paragraph. To ensure that no two questions for the same paragraph are identical, duplicates were filtered and removed. Thus, we created a maximum of 5 but at least 1 question for each paragraph in the database. With this, we applied a kind of oversampling to the dataset. Table 1 lists the number of paragraphs and questions in the created Q&A (Q&ACORN, Q&AUB, Q&ACOVID) datasets:

|Dataset|Paragraphs|Questions|
|---|---|---|
|Q&ACORN|7058|28790|
|Q&AUB|8553|27974|
|Q&ACOVID|18004|58290|

# Table 1: Number of paragraphs and questions in the Q&A datasets.
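A hedged sketch of the question-generation step is shown below. The checkpoint name is a placeholder for a publicly available question-generation model (the paper only states that a BERT-based generator from the Hugging Face model collection was used), and the expected input format depends on the chosen checkpoint.

```python
from transformers import pipeline

# Placeholder question-generation checkpoint; substitute the generator actually used.
qg = pipeline("text2text-generation", model="valhalla/t5-base-qg-hl")

def questions_for(paragraph, n=5):
    # Sample up to n questions per paragraph and drop duplicates, keeping at least one.
    outputs = qg(paragraph, do_sample=True, num_return_sequences=n, max_new_tokens=64)
    return list({o["generated_text"].strip() for o in outputs})
```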
# 3.3.2 RAG datasets
The performance of RAG is highly dependent on the accuracy of the context used to answer the questions. Therefore, we used two different approaches to test RAG. On the one hand, we used the Q&ACORN, Q&AUB, Q&ACOVID datasets created for FN.
Namely, in these datasets, we managed to generate at least one question for each paragraph. We transformed these questions into vectors using the MiniLM L6 v2 sentence transformer. Thus, with the help of cosine similarity, they became identifiable relative to a reference question. After all, the answers will be in those paragraphs whose generated questions are most similar to the reference question. This is our first type of indexed dataset (IDq).
On the other hand, we also used a more obvious approach. We split the entire text into sentences and embedded every sentence individually with the MiniLM L6 v2 sentence transformer. For more effective embedding, sentences shorter than 10 words or longer than 30 words were removed. In this way, we could manage the sentences as vectorized indices. This is our second type of indexed dataset (IDs). For all datasets (CORN, UB, COVID) we created both types. The properties of these datasets are shown in Table 2.
|Dataset|Sentences (IDs)|Questions (IDq)|
|---|---|---|
|IDCORN|37874|28790|
|IDUB|40002|27974|
|IDCOVID|56861|58290|

# Table 2: Number of sentences (IDs) and questions (IDq) in the indexed datasets.
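A minimal sketch of how the two index types can be built is given below; the dictionary layout (unit-normalised vectors plus the texts they point to) is our assumption about the implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def build_idq(question_paragraph_pairs):
    # IDq: one vector per generated question, pointing back to its source paragraph.
    questions = [q for q, _ in question_paragraph_pairs]
    vectors = embedder.encode(questions, normalize_embeddings=True)
    return {"vectors": np.asarray(vectors),
            "payloads": [p for _, p in question_paragraph_pairs]}

def build_ids(sentences):
    # IDs: one vector per sentence, keeping only sentences of 10 to 30 words.
    kept = [s for s in sentences if 10 <= len(s.split()) <= 30]
    vectors = embedder.encode(kept, normalize_embeddings=True)
    return {"vectors": np.asarray(vectors), "payloads": kept}
```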
# 3.3.3 Training, validation and test datasets
For FN, we split the datasets (Q&ACORN, Q&AUB, Q&ACOVID) into training and validation datasets in an 80/20 ratio. We used a special approach for this split: we did not simply split the datasets at random, but from those question-answer pairs where we had more than one reference question, we selected pairs corresponding to 20% of the entire dataset. With this, we achieved that the validation accuracy of the models measures their ability of association. The inference ability of the models was measured on the test dataset. When creating the test dataset, we tried to create questions and related answers that the model definitely could not learn directly. Therefore, we used topic modeling based on embedded vectors to create the test dataset.
For this, we used Sentence Transformer, UMAP, and HDBSCAN models. For the identification of the topics, we used the IDs datasets (IDsCORN, IDsUB, IDsCOVID). We embedded all sentences from the IDs datasets with the Sentence Transformer. Following this, we reduced the embedded vectors from 384 to 2 dimensions using the UMAP dimension reduction technique. Lastly, we clustered them with the HDBSCAN algorithm. In the case of HDBSCAN, we set the maximum cluster number to 15 and the minimum cluster number to 6. For the clusters to consist of about 256 tokens for sentences of between 10 and 30 words, the parameters 15 and 6 proved to be optimal. Outlier clusters were removed. We then measured the exact number of tokens contained in each cluster with the tokenizer of each model and removed clusters with more than 256 tokens. For the remaining clusters, as in the case of the training and validation datasets, we generated questions here as well. The created test data contained 279 question-answer pairs for each dataset.
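The clustering step can be sketched as follows. Note that HDBSCAN has no explicit maximum-cluster-count parameter, so the cluster-count control described above is approximated here with min_cluster_size and min_samples; treating the reported values 15 and 6 as these parameters is our assumption.

```python
import hdbscan
import umap
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def cluster_sentences(sentences):
    vectors = embedder.encode(sentences)                        # 384-dimensional MiniLM embeddings
    reduced = umap.UMAP(n_components=2).fit_transform(vectors)  # reduce 384 -> 2 dimensions
    labels = hdbscan.HDBSCAN(min_cluster_size=15, min_samples=6).fit_predict(reduced)
    clusters = {}
    for sentence, label in zip(sentences, labels):
        if label == -1:            # -1 marks HDBSCAN outliers; outlier clusters are removed
            continue
        clusters.setdefault(label, []).append(sentence)
    return clusters
```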
# 3.4 Fine-tuning settings
For training, we worked on an NVIDIA DGX computing system. We fine-tuned our models using standard Hugging Face training code with the following hyperparameters for all models: categorical cross-entropy loss, batch size 4, learning rate 2e-4, 5 epochs, and maximum sequence length 256.
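A minimal sketch of this setup with the stated hyperparameters is shown below; the model identifier and dataset objects are placeholders, and the categorical cross-entropy loss is the standard causal-LM loss applied by Trainer.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-j-6b"            # placeholder: one of the four evaluated models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token     # needed for batching with some of these tokenizers
model = AutoModelForCausalLM.from_pretrained(model_name)

args = TrainingArguments(
    output_dir="qa-finetune",
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    num_train_epochs=5,
)

# train_dataset / eval_dataset are assumed to be tokenized Q&A datasets truncated to 256 tokens;
# validation loss is checked at the end of each epoch and the best checkpoint is kept.
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset,
#                   data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
# trainer.train()
```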
# 3.5 Evaluation strategy
Our evaluation strategy was to measure ROUGE, BLEU, and METEOR scores for the models.
Moreover, we also calculated the cosine similarity of the generated responses compared to the reference responses according to Equations (2) and (3). During the evaluation, we followed different strategies for measuring the fine-tuned models and RAG.
We fine-tuned the GPT-J-6b, OPT-6.7b, LlaMA-7b, and LlaMA-2-7b models with the Q&ACORN, Q&AUB, and Q&ACOVID datasets. For the FN of the models, we measured the validation accuracy at the end of each epoch and saved the models. We only evaluated the best-performing models on the test datasets. To do this, we passed all the questions from the test datasets to the most accurate models. Finally, we calculated BLEU, ROUGE, and METEOR scores and cosine similarity values between the responses generated by the models and the reference responses.
To measure the performance of RAG, we used the LLaMA-2-7b model, which was trained by its authors for this type of application as well. This is not true for the other models (GPT-J-6b, OPT-6.7b, LLaMA-7b), so we did not use them to measure RAG performance. In the evaluation of RAG, the injected context and its content are critical. However, the input size of each model may differ. The model we have chosen, LLaMA-2-7b, has a maximum input size of 4096 tokens. The size of the input determines the size of the attachable context.
For this reason, we introduced a filter to control the size and quality of the context.
We defined the filtering by a threshold value based on cosine similarity. The threshold value specified what was considered relevant information during the search in the dataset. As described in the section on data preparation, we worked with two types of datasets (IDq, IDs), and the measurements were made for all datasets. The threshold values were defined on a scale from 0 to 1 with a step interval of 0.1. This meant that for any question, we discarded matches worse than the threshold value. For example, with a threshold value of 0.5 and a given question taken from the test dataset, only those paragraphs (IDq) or sentences (IDs) passed the filter whose indices showed a cosine similarity greater than 0.5 compared to the reference question. This also means that with a threshold of 0 everything is accepted, while with a threshold of 1 only a 100% semantic match is accepted.
The sentences (IDs) or paragraphs (IDq) that passed the filter were packaged in a uniform context in descending order of similarity and handed over to the model to try to answer the given question based on it. If the size of the packed context was larger than the input of the given model allowed, the context was cut off at the maximum input size.
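Under our reading of this procedure, the context construction can be sketched as follows; the index layout matches the build_idq / build_ids sketches above, and the embedder and tokenizer are assumed to be the MiniLM sentence transformer and the target model's tokenizer.

```python
import numpy as np

def build_context(question, index, embedder, tokenizer, threshold=0.5, max_tokens=4096):
    # Embed the question and score it against every index entry (vectors are unit-normalised,
    # so the dot product equals cosine similarity).
    q = embedder.encode([question], normalize_embeddings=True)[0]
    sims = index["vectors"] @ q
    order = np.argsort(-sims)                                   # descending similarity
    picked = [index["payloads"][i] for i in order if sims[i] > threshold]
    context = " ".join(picked)
    ids = tokenizer(context)["input_ids"][:max_tokens]          # cut off at the model's input limit
    return tokenizer.decode(ids)
```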
With these rules, we controlled the size and quality of the context. Using all indexed datasets (IDqCORN, IDqUB, IDqCOVID and IDsCORN, IDsUB, IDsCOVID), we generated answers to all questions in the reference dataset. Finally, we calculated BLEU, ROUGE, and METEOR scores and cosine similarity values between the responses generated by the models and the reference responses.
# Results
The FN was carried out as specified in section 3.4.
During FN, the models achieved the lowest loss on the validation datasets in epochs 3 or 4.
On the Q&ACORN and Q&ACOVID datasets, the LLaMA-7b model was the best.
# Table 3: The loss of the models measured on the validation datasets by epochs
|Epoch|GPT-J-6B|OPT-6.7B|LLaMA-7B|LLaMA2-7B|
|---|---|---|---|---|
|1|0.416608|0.596310|0.384520|0.388159|
|2|0.196132|0.216316|0.162778|0.181099|
|3|0.163878|0.163674|0.144640|0.149348|
|4|0.159600|0.153637|0.144515|0.149162|
|5|0.168813|0.155910|0.154746|0.156936|
|1|0.511894|0.825766|0.447366|3.398389|
|2|0.209409|0.258804|0.180724|0.819327|
|3|0.170602|0.171143|0.150210|0.186827|
|4|0.164166|0.159860|0.153346|0.145882|
|5|0.172908|0.161635|0.162520|0.150440|
|1|0.586879|1.618626|0.678659|0.488456|
|2|0.238213|0.471962|0.218672|0.217865|
|3|0.192331|0.227678|0.182879|0.187428|
|4|0.186943|0.190710|0.185803|0.187884|
|5|0.194221|0.187811|0.195959|0.198900|
The final evaluation was performed according to the following directives:
- We evaluated the models on the test dataset that we presented in subsection 3.3.3 of the methodology.
- We applied to the measurements the ROUGE, METEOR, BLEU, and CS scores that we presented in section 3.2.
- For the base model LlaMA-2-7b, we also calculated the scores without applying RAG and FN, since the creators of LlaMA-2-7b pre-trained the base model on a robust corpus, which is a good basis for comparison in the case of FN and RAG. We consider this approach as the baseline of our evaluation.
- For each fine-tuned model, we calculated the total score as described in section 3.5 of the methodology.
- In the case of RAG, we calculated the scores using the Llama-2-7b base and fine-tuned models.
- The threshold value of the search engine used for the RAG presented in section 3.5 was tested through all possible variations between 0 and 1 with a step interval of 0.1 using the indexed datasets IDs and IDq.
We summarize our measurement results in the radar plots in Figure 1, which illustrate the relative performance of the models.
Furthermore, the average performance of each model approach is presented in Table 4.
[Figure 1: radar plots of the ROUGE, METEOR, and BLEU scores of the evaluated configurations (GPT-J-6B FN, OPT-6.7B FN, LLaMA-7B FN, LLaMA-2-7B FN, LLaMA-2-7B, LLaMA-2-7B RAG(Q), LLaMA-2-7B RAG(S), LLaMA-2-7B FN RAG(Q), LLaMA-2-7B FN RAG(S)) on the (a) COVID, (b) CORN, and (c) UB test datasets.]

Figure 1: Radar plot of the evaluation results of the models.
|Models|ROUGE|METEOR|BLEU|CS|
|---|---|---|---|---|
|Baseline|0.142117|0.119251|0.002770|0.335299|
|Fine-tuned|0.254003|0.242348|0.050048|0.356439|
|RAG with fine-tuned|0.229296|0.195219|0.029378|0.305797|
|RAG|0.294986|0.222193|0.057998|0.544829|

# Table 4: Average performance of each model approach.
As shown in Figure 1 and Table 4, the results suggest that both FN and RAG outperformed the baseline, and RAG was the best approach overall.
Moreover, the FN did not help RAG. This is supported by the fact that the best threshold parameter for the LlaMA-2-7b base model during the application of RAG was the value of 0.5.
In the case of the fine-tuned LlaMA-2-7b model, the best threshold was 1.0, which practically means 100% rejection. So the fine-tuned model could no longer be helped by context injection.
The METEOR and BLEU scores of the fine-tuned models were better than those of the RAG models, but in terms of the ROUGE score, they were already inferior compared to the RAG. Furthermore, the RAG produced a significantly better CS score than the fine-tuned models.
This shows that RAG significantly reduces hallucination, and although the association skills of fine-tuned models may be better, their degree of hallucination is significantly larger.
Overall, the best result on the test dataset was obtained by using RAG with the Llama-2-7b base model and the IDs dataset. The results of the best approach are the following: ROUGE 0.3, METEOR 0.22, BLEU 0.063, and CS 0.57. The best construction is presented in detail in Figure 2.
# OBSERVATIONS ON BUILDING RAG SYSTEMS FOR TECHNICAL DOCUMENTS
Published as a Tiny Paper at ICLR 2024
Sumit Soman and Sujoy Roychowdhury∗ {sumit.soman, sujoy.roychowdhury}@ericsson.com
# ABSTRACT
Retrieval augmented generation (RAG) for technical documents creates challenges as embeddings often do not capture domain information. We review prior art on important factors affecting RAG and perform experiments to highlight best practices and potential challenges in building RAG systems for technical documents.
# 1 INTRODUCTION
Long form Question Answering (QA) involves generating paragraph-size responses from Large Language Models (LLMs). RAG for technical documents has several challenges Xu et al. (2023); Toro et al. (2023). Factors affecting retrieval performance, including in-context documents, LLMs and metrics, have been evaluated Chen et al. (2023a). To further build on this work, we conduct experiments on technical documents with telecom and battery terminology to examine the influence of chunk length, keyword-based search and ranks (sequence) of retrieved results in the RAG pipeline.
# 2 EXPERIMENTAL SETUP
Our experiments are based on IEEE Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications IEEE (2021) and IEEE Standard Glossary of Stationary Battery Terminology 1881-2016 (2016). We separately process the glossary of definitions and the full document, as many expected questions are based on the definitions.
[Figure 2 shows the flow: the text corpus is separated into sentences and each sentence is embedded to build the vector-based indexed dataset; the user input (question) is embedded and matched against the index using a cosine similarity threshold; the retrieved sentences are concatenated with the user input as the context; and the G-LLM generates the response.]

Figure 2: Flow diagram of the RAG model (best approach) that uses a search engine based on the vectorial embedding of sentences.
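For completeness, an illustrative query function for this best-performing construction (RAG over the sentence-level IDs index with the LLaMA-2-7b model) might look as follows; the model identifier, prompt template, and generation settings are our assumptions, and build_context is the sketch shown earlier.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

def answer(question, ids_index, embedder, threshold=0.5):
    context = build_context(question, ids_index, embedder,
                            generator.tokenizer, threshold=threshold)
    prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    return generator(prompt, max_new_tokens=256)[0]["generated_text"]
```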
# Conclusions
Comparing FN and RAG, RAG achieves better results when the goal is a G-LLM-based knowledge system. For RAG, searching an indexed database is critical, but by indexing with embedded vectors it is possible to create a dataset that can be searched efficiently and with which RAG can outperform fine-tuned models. RAG-based systems hallucinate less and are simpler to extend, since adding new information only requires adding the new data, which needs much less computation than FN. The combination of FN and RAG is not trivial: applying a fine-tuned model with RAG did not yield an extra performance increase.
# References
[1] Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. arXiv preprint arXiv:1810.04805 2018,
[2] Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.; Le, Q. V.; Salakhutdinov, R. arXiv preprint arXiv:1901.02860 2019,
[3] Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. Improving language understanding by generative pre-training. 2018,
[4] OpenAI OpenAI.
2024; https://openai.com/, Accessed on March 12, 2024,
[5] OpenAI ChatGPT. 2024; https://chat.openai.com/, Accessed on March 12, 2024,
[6] Microsoft Microsoft Bing Search. 2024; https://www.bing.com/, Accessed on March 12, 2024,
[7] Chowdhery, A. et al. PaLM: Scaling Language Modeling with Pathways. 2022,
[8] AI, G. Gemini. 2024; https://gemini.google.com, Accessed on March 12, 2024,
[9] Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; Rodriguez, A.; Joulin, A.; Grave, E.; Lample, G. LLaMA: Open and Efficient Foundation Language Models. 2023,
[10] Romera-Paredes, B.; Barekatain, M.; Novikov, A.; Balog, M.; Kumar, M. P.; Dupont, E.; Ruiz, F. J.; Ellenberg, J. S.; Wang, P.; Fawzi, O.; et al. Nature 2023, 1–3.
[11] Wang, L. L.; Lo, K.; Chandrasekhar, Y.; Reas, R.; Yang, J.; Burdick, D.; Eide, D.; Funk, K.; Katsis, Y.; Kinney, R.; et al. ArXiv 2020,
[12] Wolf, T. et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing. 2020.
[13] Paszke, A. et al. Advances in Neural Information Processing Systems 32 ; Curran Associates, Inc., 2019; pp 8024–8035.
[14] Wang, B.; Komatsuzaki, A. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax, 2021.
[15] Zhang, S.; Roller, S.; Goyal, N.; Artetxe, M.; Chen, M.; Chen, S.; Dewan, C.; Diab, M.; Li, X.; Lin, X. V.; et al. arXiv preprint arXiv:2205.01068 2022,
[16] Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.-J. Bleu: a Method for Automatic Evaluation of Machine Translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia, Pennsylvania, USA, 2002; pp 311–318.
[17] Lin, C.-Y. ROUGE: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out. Barcelona, Spain, 2004; pp 74–81.
[18] Banerjee, S.; Lavie, A. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Ann Arbor, Michigan, 2005; pp 65–72.
[19] Wang, W.; Wei, F.; Dong, L.; Bao, H.; Yang, N.; Zhou, M. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. 2020.
[20] Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; Hashimoto, T. B. Stanford Alpaca: An Instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
[21] Voidful Context Only Question Generator. 2024; https://huggingface.co/voidful/context-only-question-generator, Accessed on January 01, 2024.
[22] Reimers, N.; Gurevych, I. arXiv preprint arXiv:1908.10084 2019,
[23] Lawrence, N. D. The Journal of Machine Learning Research 2012, 13, 1609–1638.
[24] Campello, R. J.; Moulavi, D.; Zimek, A.; Sander, J. ACM Transactions on Knowledge Discovery from Data (TKDD) 2015, 10, 1–51.
Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures
Ruiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1, Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 (1University of Notre Dame, 2University at Buffalo–SUNY)
# ABSTRACT
Large Language Models (LLMs) deployed on edge devices learn through fine-tuning and updating a certain portion of their parameters. Although such learning methods can be optimized to reduce resource utilization, the overall required resources remain a heavy burden on edge devices. Instead, Retrieval-Augmented Generation (RAG), a resource-efficient LLM learning method, can improve the quality of the LLM-generated content without updating model parameters. However, the RAG-based LLM may involve repetitive searches on the profile data in every user-LLM interaction. This search can lead to significant latency along with the accumulation of user data. Conventional efforts to decrease latency result in restricting the size of saved user data, thus reducing the scalability of RAG as user data continuously grows. It remains an open question: how to free RAG from the constraints of latency and scalability on edge devices? In this paper, we propose a novel framework to accelerate RAG via Computing-in-Memory (CiM) architectures. It accelerates matrix multiplications by performing in-situ computation inside the memory while avoiding the expensive data transfer between the computing unit and memory. Our framework, Robust CiM-backed RAG (RoCR), utilizing a novel contrastive learning-based training method and noise-aware training, can enable RAG to efficiently search profile data with CiM. To the best of our knowledge, this is the first work utilizing CiM to accelerate RAG.
# 1 INTRODUCTION
The emerging Large Language Models (LLMs) are deployed primarily on centralized cloud platforms (Cloud LLMs), raising concerns about user privacy and trustworthiness. These issues become even more prominent in areas such as healthcare, companionship, and personal assistance, where user privacy and the trustworthiness of LLMs are crucial. To address these issues, cloud LLMs will eventually transform into personalized LLMs, capable of generating personalized responses and deployed on edge devices (Edge LLMs), where users can keep all their private data and the model learns from those data locally.
To better suit the needs of individual users, Edge LLMs must learn from user interactions. However, their capability of learning is constrained by their limited RAM and computational power. Similar to Cloud LLMs, Edge LLMs primarily learn by fine-tuning their model parameters. Yet, given that these models often contain over 3 billion parameters, updates can be challenging, even with numerous efforts to accelerate them. For example, even on an experimental high-performance embedded system such as the NVIDIA AGX, the PockEngine method can still take 90 hours to learn from Alpaca, a middle-sized dataset with only 52k documents, making this option impractical for normal users.
it will need to be offloaded into storage, such as a hard disk drive (HDD) or solid-state drive (SSD). Accessing data from HDD or SSD will significantly increase the data transfer latency [12], rendering real-time user interaction impractical. Secondly, the core retrieval method of RAG, MIPS, may experience decreased efficiency as profile data grows, and it can become potentially prohibitive when dealing with overwhelmingly large datasets. For example, on a Raspberry Pi 4B, MIPS can take 5 minutes to find one appropriate profile document among 21M documents [10], which is even longer than the 2-minute inference time of an Edge LLM. Unfortunately, few efforts have been made to optimize RAG towards Edge LLMs.
Thus, we propose to utilize the Computing-in-Memory (CiM) architecture to address this issue. As shown in Figure 1, CiM architectures using memory arrays have shown substantial promise in accelerating matrix-vector multiplication [13], which is the key operation of MIPS. The CiM architectures often utilize massive parallel processing to perform computations directly within the memory array where the data is stored, such that they can minimize the data movement through in-situ data access and significantly increase the throughput [14]. Given the same amount of documents, CiM can finish computation within 50ms [15], which is negligible compared to the computation latency on normal edge devices. Furthermore, by incorporating non-volatile memory (NVM) devices, such as phase-change memories (PCMs), resistive random-access memories (RRAMs), and ferroelectric field-effect transistors (Fe-FETs), CiM can outperform conventional MOSFET-based designs in terms of energy efficiency [16].
[Figure 2 plot: retrieval accuracy (y-axis, 0.0 to 1.0) versus level of noise σ (x-axis, 0.00 to 0.60).]
Figure 2: The impact on MIPS accuracy when the RAG’s document embedding is perturbed by various levels of Gaussian noise caused by the device variations. An accurate retrieval means the document retrieved under the impact of the noise is the same as that retrieved without noise.
Unfortunately, simply changing the underlying hardware is not enough, as the non-idealities of the NVM devices in CiM array could greatly deteriorate the RAG performance. First, the operations performed in CiM architectures are susceptible to various sources of noise, including electronic noise (thermal, shot, and flicker), device-to-device variability, and line noise from the supporting circuitry [17]. These noise sources can corrupt the computations, especially when the signal levels are close to the noise floor, which is a common scenario in high-precision tasks. Such noise issues are critical in RAG applications where the accuracy and quality of the generated content heavily rely on the precision of the underlying computations. Additionally, the CiM architecture is primarily designed and optimized for low-resolution computation [18].
We source questions based on domain knowledge and report experimental results on 42 representative queries across the documents. Multiple embedding models can be used (Reimers & Gurevych, 2019); we use MPNET (Song et al., 2020) for the entire document, excluding tables and captions. For the glossary, we split the term and the definition and generate separate embeddings for each, as well as for the full paragraph containing the defined term and the definition. Soman & HG (2023) have reviewed other LLMs for the telecom domain, but we chose the llama2-7b-chat model (Touvron et al., 2023) as it is free and has a commercial-friendly license. We evaluate on multiple questions and report on selected questions to substantiate our observations. For reference, the prompts used for the LLM are provided in Appendix A.
# 3 OBSERVATIONS
We first observe that sentence embeddings become unreliable with increasing chunk size. Appendix B Fig. 1 shows the Kernel Density Estimate (KDE) plot of cosine similarity scores for various sentence lengths. We take 10,970 sentences and look at pairwise similarity for all the sentences. A high similarity is observed when the length of the sentences is relatively long. The higher similarity distribution for larger lengths indicates spurious similarities, which we manually validate for a few samples. We find that when both the query and the queried document are over 200 words, the similarity distribution is bimodal. When either of them is over 200 words, there is a small but less perceptible lift at higher similarities.
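A sketch of this pairwise-similarity check is given below. The MPNET checkpoint name, the 200-word cut, and the use of only the upper triangle of the similarity matrix are assumptions for illustration; the two returned score arrays can be passed to any KDE plot.

```python
# Sketch: pairwise cosine similarities, split by whether both sentences
# in a pair exceed a word-length cut, to inspect the length effect.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")

def similarity_by_length(sentences, length_cut=200):
    emb = model.encode(sentences, normalize_embeddings=True)
    sims = emb @ emb.T                                   # pairwise cosine similarities
    long_mask = np.array([len(s.split()) > length_cut for s in sentences])
    iu = np.triu_indices(len(sentences), k=1)            # unique pairs only
    both_long = long_mask[iu[0]] & long_mask[iu[1]]
    return sims[iu][both_long], sims[iu][~both_long]     # plot each distribution (e.g. KDE)
```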
Hypotheses and Key Observations
Splitting on definition and terms can help improve results (H1)
Similarity scores being a good measure (H2)
Position of keywords influencing results (H3)
Sentence-based similarity resulting in a better retriever (H4) and generator (H5)
∗Global AI Accelerator, Ericsson R&D, Bangalore, India. Both authors contributed equally. Git Repo Link.
Moreover, CiM arrays are typically sized at a fixed dimension, such as 64x64 [19], which is different from the documents' embedding dimension (e.g., 128). Therefore, both RAG's data precision (typically FP32) and its embedding dimension need to be adapted before the embeddings can be mapped onto CiM arrays.
# RELATED WORK
# CiM Architectures and their NVMs
As shown in the middle part of Figure 1, memory arrays are the key component for vector-matrix multiplication. In this array, matrix values are stored in NVM cells, such as emerging NVM technologies like PCMs, RRAMs, and FeFETs, at the cross-points of vertical and horizontal lines. Simultaneously, vector values flow along the horizontal lines of the array. Operations within the memory array take place in the analog domain by exploiting the laws of physics directly. Other essential functions, such as shift-and-add for multiple bits and sorting to find the top-k ranked values, are done in the digital domain.
[Figure 3 diagram: profile data passes through the data construction module to produce anchor, positive, and negative examples; the flexible noise-aware training module combines contrastive learning (pulling positives close, pushing negatives far) with device-variation constraints profiled from the NVMs to optimize and reshape the sentence embedding model.]
Figure 3: Overview of the proposed Robust CiM-backed RAG framework (RoCR). It optimizes the sentence embedding model to adapt different types of NVMs utilized by CiM.
Thus, digital-to-analog and analog-to-digital converters (DACs and ADCs) are used to connect these different components. CiM arrays suffer from various sources of variations and noises. Two major ones are spatial variations and temporal variations. Spatial variations result from fabrication defects and have both local and global correlations. FeFET devices also suffer from temporal variations due to the stochasticity in memory switching and aging, which causes fluctuations in conductance when programmed at different times. Temporal variations are typically independent from device to device and are irrelevant to the value to be programmed [20]. In this work, as a proof of concept, we focus on the impact of temporal variations in the programming process on DNN performance. Temporal variation makes the programmed resistance of a device deviate from what is expected. The proposed framework can also be extended to other sources of variations with modification.
Measurement results [21, 22] show that the noise on DNN weights caused by device variations can be safely modeled as a Gaussian noise with zero mean, each with a standard deviation associated with the weight value. A detailed representation is given by:
v = v0 + Δv, Δv ∼ N(0, 𝜎𝑣) (1)
where v is the actual embedding deployed on the accelerators, v0 is the target embedding value, and 𝜎𝑣 is a value measured by the experiments. We collect the measurement results from RRAM and FeFET devices and the specific value will be discussed in Section 4.1.
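A minimal sketch of this noise model is shown below, treating σ_v as a single scalar for simplicity; the measured, level-dependent values appear in Table 2.

```python
# Sketch of Eq. (1): each stored embedding value v0 is perturbed by
# zero-mean Gaussian noise with standard deviation sigma_v.
import numpy as np

def program_with_variation(embeddings: np.ndarray, sigma_v: float,
                           rng=np.random.default_rng(0)) -> np.ndarray:
    """Return v = v0 + dv with dv ~ N(0, sigma_v), applied elementwise."""
    return embeddings + rng.normal(loc=0.0, scale=sigma_v, size=embeddings.shape)
```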
# Past Noise Mitigation Methods
Several strategies have been introduced to tackle the challenge of device variations in CiM accelerators. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
These methods can be separated into software and hardware-based techniques. The software-based techniques are generally developed to obtain more robust DNN models [19, 22–24] or recommendation systems [25], and are thus not suitable for generating more robust MIPS solutions.
For the hardware techniques, the write-verify procedure [26, 27] is one of the most commonly used approaches during programming. Initially, an NVM device is programmed to a set state via a designated pulse pattern.
Subsequent to this, the device’s value is verified to ascertain if its conductance aligns with a stipulated range of the desired value, essentially assessing its accuracy. If discrepancies arise, a supplemental update pulse is initiated to reset the device conductance nearer to the target. This loop persists until the disparity between the programmed device value and the target value diminishes to a satisfactory margin, typically taking a handful of cycles. Cutting-edge research suggests that by selectively applying write-verify to a subset of pivotal devices, one can uphold the average accuracy of a DNN [21]. Additionally, a variety of circuit design initiatives [18, 28] have been put forth to counteract device variations.
# Proposed Work
# Framework Overview
As shown in Figure 3, our proposed framework, Robust CiM-backed RAG (RoCR), consists of three stages. First, we apply contrastive learning, using the training data to optimize the training module.
To do that, in the second stage, we take the profile data and pass it through a data construction module to obtain contrastive training data pairs, which are then used in the flexible noise-aware training module. In the third stage, we obtain the constraints of the NVMs in CiM via profiling. These constraints are encoded into the flexible noise-aware training module and used to train the sentence embedding model so that it generates embeddings that are robust against the device variation of the target NVMs. After training, the training module can be turned into a new sentence embedding model that generates CiM-friendly embeddings.
# Contrastive Learning: Triplet Loss Function
When we apply RAG using CiM, we first need to store embeddings into NVMs, as shown in Figure 1. Such embeddings are generated by the sentence embedding model, and they are the numerical representations of the profile data. Each single document in the profile data has its own embedding, which is a vector. The embeddings stored on the NVMs form a matrix, shown as the orange blocks in Figure 1. Given a user query, which will also be converted into an embedding, CiM can operate MIPS between this user query embedding and all profile embeddings simultaneously via vector-matrix multiplication. The top-ranked values in the product are used as the index to retrieve the corresponding document data, shown as the pink block in Figure 1. This retrieved user-relevant document is the output of MIPS.
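A minimal sketch of MIPS as described here: the vector-matrix product is what the CiM array computes in the analog domain, and the top-k ranking is done digitally.

```python
# Sketch of MIPS: multiply the query embedding against the matrix of
# stored document embeddings and select the top-ranked indices.
import numpy as np

def mips(query_emb: np.ndarray, doc_matrix: np.ndarray, top_k: int = 1):
    scores = doc_matrix @ query_emb          # one vector-matrix multiplication
    top_idx = np.argsort(-scores)[:top_k]    # top-k ranking in the digital domain
    return top_idx, scores[top_idx]
```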
However, as we have explained in Section 2.1, writing the document embeddings into NVMs causes them to suffer from temporal variations (device variations). The NVM-stored embeddings will then differ from the embeddings originally generated by the sentence embedding model. As shown in Figure 4, the vanilla embedding model generates the desired embedding, which deviates to the noisy embedding under device variation, such that an irrelevant embedding is ranked higher than the desired embedding due to its larger inner product.
Contrastive learning learns representations by pushing dissimilar examples apart and pulling similar examples together [29]. In particular, the contrastive loss function can be used to increase the distance between dissimilar examples.
In our work, we propose to improve the noise-resilience capability via contrastive learning: by increasing the distance between dissimilar embeddings, the retrieval ranking becomes less likely to be flipped by device-variation noise.
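A sketch of the triplet objective is shown below, assuming PyTorch's built-in triplet margin loss; the margin value is an illustrative choice, not taken from the paper.

```python
# Sketch of the triplet objective: pull the anchor toward the positive
# embedding and push it away from the negative one.
import torch

triplet_loss = torch.nn.TripletMarginLoss(margin=1.0, p=2)

def loss_on_batch(emb_anchor, emb_positive, emb_negative):
    # each argument: tensor of shape (batch, dim) from the sentence embedding model
    return triplet_loss(emb_anchor, emb_positive, emb_negative)
```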
[Figure 4 diagram: with the vanilla CiM-backed RAG, device variation in the NVMs shifts the desired embedding to a noisy one, so an irrelevant embedding can be retrieved instead; with the Robust CiM-backed RAG, the embedding model produces noise-resilient embeddings.]
Figure 4: Improvement by our Robust CiM-backed RAG. Our framework generates noise-resilient embeddings, as shown by the orange and blue points in the right subfigure.
[Figure 5 examples: with an explicit label, the anchor/positive example concatenates the movie description ("Jake Blues, just released from prison, puts his old band back together to save the Catholic home where he and his brother Elwood were raised.") with its label "classic" (the positive is encoded with dropout r = 0.1), while the negative example concatenates the same description with a different label, "dystopia".]
Figure 5: Examples of the two data construction methods. For data with explicit labels, CDE is used to construct the training data.
For data without explicit labels (implicit labeled data), CDI is used to construct the training data.
# Construction Trios via Data with Explicit Labels (CDE)
For data with explicit labels, each item consists of a textual content c and its corresponding label l, which indicates the user-preferred response regarding the content c. As shown in the CDE part of Figure 5, the explicit label is circled by a dashed line. Using the profile data, we construct triplet examples in the format (𝑥𝑖, 𝑥𝑖−, 𝑥𝑖+). Given a dataset D with n profile documents, each piece of data consists of a content 𝑐𝑖 and the corresponding label 𝑙𝑖, where 𝑖 ∈ {1, 2, ..., 𝑛}. The anchor example 𝑥𝑖 can be constructed as:
𝑥𝑖 = 𝑐𝑖 ⊕ 𝑙𝑖, for 𝑖 = 1, 2, . . . , 𝑛
where ⊕ denotes a concatenation operation, specifically used here to combine label and content. Negative examples 𝑥𝑖− can be constructed by concatenating 𝑐𝑖 with a random label 𝑙𝑗 that is different from 𝑙𝑖, as follows:
𝑥𝑖− = 𝑐𝑖 ⊕ 𝑙𝑗, where 𝑙𝑖 ≠ 𝑙𝑗.
The distance 𝑑(𝑥𝑎, 𝑥𝑏) is calculated as the Euclidean distance between the embeddings 𝑒𝑚𝑏(𝑥𝑎) and 𝑒𝑚𝑏(𝑥𝑏) of two examples. The function 𝑠𝑖𝑚() calculates the semantic similarity.
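A sketch of CDE construction under these definitions follows; the " is " separator mirrors the example in Figure 5, and the randomly drawn negative label follows the rule 𝑥𝑖− = 𝑐𝑖 ⊕ 𝑙𝑗 above.

```python
# Sketch of CDE (explicit labels): the anchor concatenates the content with its
# true label, a negative concatenates the same content with a different label,
# and the positive reuses the anchor text (its variation comes later from a
# non-zero dropout in the embedding model).
import random

def build_cde_triplet(content: str, label: str, all_labels: list[str], sep: str = " is "):
    anchor = content + sep + label                           # x_i = c_i ⊕ l_i
    wrong = random.choice([l for l in all_labels if l != label])
    negative = content + sep + wrong                         # x_i^- = c_i ⊕ l_j, l_j != l_i
    positive = anchor                                        # embedded with dropout r > 0
    return anchor, negative, positive
```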
# Data Construction
To train the sentence embedding model via contrastive learning, it is critical to construct pairs of examples where the positive examples and negative examples need to be distinct from each other. In our work, since we use triplet contrastive loss, instead of pairs of examples, we will construct trios of examples where each triplet contains an anchor, positive, and negative example.
We use profile data to construct triplets of examples. The profile data is generated by the user during user-LLM interaction and contains user preference information. There are two situations for such data. First, the profile data can contain explicit labels indicating the user-preferred response to the corresponding content. Second, the profile data can also be statements containing user-related information but without explicit user preferences. As shown in Figure 5, to deal with the two situations, we propose two data construction methods: Construction of Data with Explicit labels (CDE) and Construction of Data with Implicit labels (CDI).
Second, the embedding generation process varies based on the dropout rate applied within the model M. When model M is utilized to generate embeddings for anchor and negative examples, the dropout rate is set to 0. In contrast, for generating embeddings for positive examples, a non-zero dropout rate 𝑟 is used. The anchor, negative, positive examples, as shown in Figure 5, can be constructed as:
𝑒𝑚𝑏(𝑥𝑖) = M(𝑥𝑖, dropout = 0)
𝑒𝑚𝑏(𝑥𝑖−) = M(𝑥𝑖−, dropout = 0)
𝑒𝑚𝑏(𝑥𝑖+) = M(𝑥𝑖+, dropout = 𝑟)
The condition of 𝑟 ≠ 0 can induce variation in the embeddings, enhancing the model’s ability to recognize semantically similar yet variably expressed content.
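A sketch of how the dropout setting changes the encoding pass is shown below, assuming a Hugging Face transformer encoder with mean pooling; setting a specific rate r would be done through the model configuration, which is not shown here.

```python
# Sketch: anchors/negatives are encoded deterministically (dropout off),
# while positives are encoded with dropout active to induce variation.
import torch

def encode(model, tokenizer, text: str, dropout_active: bool):
    model.train(dropout_active)            # train() keeps dropout on, train(False) turns it off
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)   # simple mean pooling over tokens
```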
Given the construction factor 𝐾, we can construct the triplet data examples as:
D𝑡𝑟𝑖𝑝𝑙𝑒𝑡 = ⋃_{𝑖=1}^{𝑁} { (𝑥𝑖^(𝑘), 𝑥𝑖^−(𝑘), 𝑥𝑖^+(𝑘)) : 𝑘 = 1, 2, . . . , 𝐾 }
For the triplet data examples D𝑡𝑟𝑖𝑝𝑙𝑒𝑡 , their embeddings for each augmentation 𝑘 are given by:
⋃_{𝑖=1}^{𝑁} { (𝑒𝑚𝑏(𝑥𝑖^(𝑘)), 𝑒𝑚𝑏(𝑥𝑖^−(𝑘)), 𝑒𝑚𝑏(𝑥𝑖^+(𝑘))) : 𝑘 = 1, 2, . . . , 𝐾 }
# Published as a Tiny Paper at ICLR 2024
|Hyp|Hypothesis|Observation|Support (Samples)|
|---|---|---|---|
|H1|Splitting definition and defined words help in queries|For definitions, using the defined word and definition separately for retrieval gives better performance|22 of 30 queries (ID 2, 3)|
|H2|Similarity scores should not be used to compare retrieved results|We observe that similarity scores between different approaches are not comparable and absolute values are often very small for correct answers|24 of 30 queries (ID 2, 3)|
|H3|Position of keywords matters|Keywords closer to the beginning of the sentence are retrieved with high accuracy; keywords which occur later in the sentence are difficult to retrieve|25 of 30 queries (ID 1, 4, 5, 6)|
|H4|Sentence Based Similarity is better|Similarity based on sentence and distinct paragraphs retrieved gives much detailed context to generator|ID F1 - Table 2 (8 of 10 queries)|
|H5|Generator for sentence based similarity|Generated answer using sentence based similarity and paragraph based retrieval gives better results|8 of 10 queries (App. Table 3 - ID F1)|
|H6|Definitions with acronyms or words having acronyms don’t perform well|Generated answers often expand or provide abbreviations which is not helpful|15 of 16 queries (App. Table 3 - ID F2, F3)|
|H7|Order of retrieved paragraphs in generator results|Order of retrieved paragraphs do not affect generator results in our experiments|NA|
Table 1: Summary of observations - details of individual queries in Appendix B
Answers for definitions based on acronyms (H6) and the effect of the order of retrieved results on generator performance (H7). Of these, H2 is a result of our experiments with distributions of similarity scores referred to earlier, and H7 is based on Chen et al. (2023a).
As shown in Figure 5, for data with explicit labels, a content 𝑐 can be concatenated with its corresponding label 𝑙 to form the positive and anchor examples. That content 𝑐 can also be concatenated with other labels 𝑙′ to form the negative example. The positive example is finally obtained from the sentence embedding model with dropout rate 𝑟, while the anchor and negative examples are obtained from the sentence embedding model with 𝑟 = 0.
# Construction Trios via Data with Implicit Labels (CDI)
For data with implicit labels, each item consists solely of textual content c. As shown in the CDI part of Figure 5, there is no explicit label to indicate user preferences. Instead, the data can be seen as a statement containing some user-related information. To construct the anchor examples and positive examples, we can use the same method as in CDE. Given a dataset D with n profile documents, each piece of data consists of a content 𝑐. The anchor data 𝑥𝑖 can be constructed as:
𝑥𝑖 = 𝑐𝑖 , for 𝑖 = 1, 2, . . . , 𝑛
For each anchor data 𝑥𝑖 , constructing its corresponding negative example is not as simple as merely concatenating the content 𝑐𝑖 with a non-corresponding label 𝑙𝑘. To construct negative examples, we employ a reciprocal approach with the positive examples, applying a similar method to both.
We first initialize the negative example and positive example following the equation 5:
𝑥𝑖− = 𝑥𝑖+ = 𝑥𝑖, for 𝑖 = 1, 2, . . . , 𝑛
For the positive example 𝑥𝑖+, it can be finalized by incorporating a dropout rate 𝑟 into the sentence embedding model M, where a non-zero 𝑟 induces variation in the resulting embedding.
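A sketch of CDI construction follows; `encode_with_dropout` is a hypothetical helper wrapping the sentence embedding model, and the 0.1/0.9 rates follow the default settings reported in Section 4.1.

```python
# Sketch of CDI (implicit labels): anchor, positive and negative all reuse the
# same content; differentiation comes only from the dropout rate at encoding
# time (0 for the anchor, small for the positive, large for the negative).
def build_cdi_triplet_embeddings(content: str, encode_with_dropout):
    emb_anchor   = encode_with_dropout(content, dropout=0.0)
    emb_positive = encode_with_dropout(content, dropout=0.1)
    emb_negative = encode_with_dropout(content, dropout=0.9)
    return emb_anchor, emb_positive, emb_negative
```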
# Table 1: Performance comparison between our framework and four baselines on five CiM devices with device variation specified in Table 2, across five datasets (rows are grouped by CiM device, Device-1 through Device-5 from top to bottom). We evaluate our framework using CDE (RoCR-CDE) and CDI (RoCR-CDI) to optimize the performance of RAG, which utilizes Gemma-2B as its LLM.
|Method|Citation Acc ↑|Citation F1 ↑|Movie Acc ↑|Movie F1 ↑|Rating MAE ↓|Rating RMSE ↓|News ROUGE-1 ↑|News ROUGE-L ↑|DBLP ROUGE-1 ↑|DBLP ROUGE-L ↑|
|---|---|---|---|---|---|---|---|---|---|---|
|SWV|0.4208|0.3339|0.1305|0.1974|0.3850|0.8093|0.0754|0.0731|0.1709|0.1590|
|CxDNN|0.4223|0.3576|0.1516|0.1762|0.4404|0.9135|0.0640|0.0632|0.1646|0.1449|
|CorrectNet|0.4155|0.3791|0.0996|0.1305|0.3609|0.7071|0.0512|0.0764|0.1603|0.1538|
|RoCR-CDE|0.5536|0.3956|0.2242|0.2303|0.3108|0.6856|0.1041|0.0987|0.2066|0.1924|
|RoCR-CDI|0.5409|0.5117|0.2273|0.2487|0.2767|0.6083|0.0831|0.0808|0.2317|0.2176|
|SWV|0.1831|0.1552|0.1992|0.1957|0.4205|0.8775|0.0296|0.0289|0.1968|0.1874|
|CxDNN|0.4013|0.3557|0.2167|0.2019|0.4423|0.8367|0.0604|0.0791|0.1517|0.1401|
|CorrectNet|0.3827|0.3209|0.1625|0.1909|0.3762|0.8062|0.0513|0.0505|0.2042|0.1945|
|Vanilla RAG|0.4801|0.3462|0.1576|0.2079|0.4153|0.9354|0.0296|0.0289|0.1618|0.1353|
|RoCR-CDE|0.5407|0.4396|0.2924|0.2509|0.2553|0.5385|0.1209|0.0946|0.2025|0.1906|
|RoCR-CDI|0.5299|0.4591|0.2971|0.2386|0.2124|0.5763|0.0884|0.0853|0.2240|0.2098|
|SWV|0.2450|0.2564|0.1695|0.1641|0.3460|0.7416|0.0725|0.069|0.1018|0.0954|
|CxDNN|0.4811|0.4006|0.2367|0.2113|0.2851|0.6928|0.0761|0.0707|0.1425|0.1111|
|CorrectNet|0.4510|0.3918|0.0792|0.1029|0.3704|0.7937|0.0585|0.0555|0.1715|0.1346|
|Vanilla RAG|0.4852|0.3618|0.1614|0.1636|0.3255|0.7649|0.0725|0.0690|0.1647|0.1437|
|RoCR-CDE|0.5139|0.4116|0.2242|0.2215|0.3208|0.6481|0.0825|0.0805|0.1893|0.1754|
|RoCR-CDI|0.5515|0.4984|0.2152|0.2131|0.2916|0.6245|0.1099|0.1049|0.2294|0.2140|
|SWV|0.5135|0.4260|0.1271|0.1178|0.3610|0.8196|0.0259|0.0256|0.1871|0.1786|
|CxDNN|0.4733|0.3964|0.1267|0.2158|0.3468|0.7616|0.0646|0.0634|0.1603|0.1538|
|CorrectNet|0.4628|0.4019|0.1592|0.1847|0.4013|0.9274|0.0705|0.0750|0.1628|0.1292|
|Vanilla RAG|0.2101|0.2401|0.1219|0.2019|0.4015|0.8544|0.0505|0.0489|0.1929|0.1814|
|RoCR-CDE|0.5836|0.5555|0.1706|0.2817|0.3139|0.6856|0.0873|0.0851|0.1984|0.1882|
|RoCR-CDI|0.5352|0.4289|0.1642|0.2445|0.2706|0.5916|0.1154|0.1128|0.2148|0.1978|
|SWV|0.4320|0.3541|0.1250|0.1076|0.3652|0.7616|0.0434|0.0427|0.0985|0.0923|
|CxDNN|0.4301|0.0538|0.0751|0.0458|0.3503|0.8185|0.0707|0.0682|0.2042|0.1945|
|CorrectNet|0.4145|0.3926|0.1083|0.1395|0.5526|0.8185|0.0735|0.0776|0.2096|0.1879|
|Vanilla RAG|0.4256|0.3522|0.0847|0.0863|0.3951|0.8515|0.0676|0.0653|0.2018|0.1846|
|RoCR-CDE|0.5698|0.5223|0.2152|0.1669|0.2959|0.6245|0.0936|0.0891|0.1946|0.1844|
|RoCR-CDI|0.5254|0.4504|0.2394|0.2458|0.2624|0.6325|0.0799|0.0764|0.2238|0.2095|
where 𝑒′ = 𝑒𝑚𝑏 (𝑥𝑖 )𝑑∗𝑝 .
The device variation, as noise, is injected into embeddings to formalize 𝑒𝑚𝑏 (𝑥𝑖 )𝑑∗𝑝, which will be used in contrastive learning to train the sentence embedding model, as shown in Figure 3.
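A sketch of one noise-aware training step under these definitions is given below; the optimizer, margin, and σ are placeholders rather than the paper's exact hyperparameters.

```python
# Sketch: inject device-variation noise (per Eq. 1) into the triplet embeddings
# before computing the contrastive loss, so the embedding model learns
# representations whose ranking survives the perturbation.
import torch

def noise_aware_step(model_embs, optimizer, sigma=0.1, margin=1.0):
    emb_a, emb_p, emb_n = model_embs             # embeddings with grad from the model
    noisy = [e + torch.randn_like(e) * sigma for e in (emb_a, emb_p, emb_n)]
    loss = torch.nn.functional.triplet_margin_loss(*noisy, margin=margin)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```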
# 4 EXPERIMENTAL EVALUATION
# 4.1 Experimental Setup
4.1.1 Datasets. To demonstrate our robust CiM-backed RAG, we employ five datasets with different tasks and domains, including Citation Identification (Citation), Movie Tagging (Movie), Product Rating (Rating), News Headline Generation (News), and DBLP-Citation-network V14 (DBLP), to evaluate the proposed framework. The data in each dataset consists of query data and profile data. In our evaluation, the profile data is used to form the user history, and the query data corresponding to each profile is used as the user input. The first three datasets contain binary, five-class, and fifteen-class classification tasks, respectively. The last two datasets contain text generation tasks. In the Citation Identification dataset, every piece of query data consists of a paper title and two references, and the correct reference is provided. RAG uses the profile data corresponding to the paper titles with their detailed contents to choose the appropriate reference. In the Movie Tagging dataset, each query data contains a description of a movie, and RAG uses a similar description and its corresponding tag in the profile data to tag the query data. The Product Rating dataset has a similar structure to the Movie Tagging dataset. In the News Headline Generation and DBLP datasets, each query data contains an abstract, which can be summarized into a title. RAG uses a similar abstract and its corresponding title in the profile data to generate the title for the query data. All five datasets have labels in their query data.
4.1.2 Default Experimental Setting. Our framework chooses all-MiniLM-L6-v2 as the sentence embedding model. For each dataset, we randomly select 2000 documents from the profile data as the anchor examples. To examine the data construction method CDE, we set the augmentation factor 𝑘 = 5 to obtain 10000 negative and positive examples. We set the dropout rate to 0.1 to obtain the positive examples while keeping it at 0 when processing anchor and negative examples. To examine the data construction method CDI, we set the dropout rate for positive examples to 0.1 and the dropout rate for negative examples to 0.9. To align with the experiments for CDE, we also set 𝑘 = 5 in the experiments for CDI. For the experimental results, we run five times and report the average. In the experiments, we set the device variation 𝜎 = 0.1 and shape embeddings into a dimension of 64 with int8 precision. The learning rate is 2e−5. In all experiments, we adhere to the device variation model previously described. The specific parameters are abstracted and then simplified from three representative NVM devices, two of them RRAM and one FeFET, as detailed below and in Table 2.
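A sketch of the embedding shaping step is given below; the random projection and the symmetric int8 quantization are illustrative assumptions, since the exact reshaping operator is not spelled out here.

```python
# Sketch: shape an embedding to the reported dimension (64) and precision (int8)
# before mapping it onto the CiM array.
import numpy as np

def shape_for_cim(emb: np.ndarray, dim: int = 64, rng=np.random.default_rng(0)):
    proj = rng.standard_normal((emb.shape[-1], dim)) / np.sqrt(dim)  # assumed random projection
    reduced = emb @ proj
    scale = np.max(np.abs(reduced)) / 127.0                          # symmetric int8 quantization
    quantized = np.clip(np.round(reduced / scale), -127, 127).astype(np.int8)
    return quantized, scale
```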
[Figure 6 plots: accuracy of SWV, CxDNN, CorrectNet, Vanilla RAG, RoCR-CDE, and RoCR-CDI. Panels: (a) Citation on Gemma-2B, (b) Citation on Phi-2, (c) Citation on Mistral-7B, (d) Citation on Llama-2-3B, (e) Movie on Gemma-2B, (f) Movie on Phi-2, (g) Movie on Mistral-7B, (h) Movie on Llama-2-3B.]
Figure 6: Performance comparison between our framework and four baselines on RAG utilizing the LLMs including Gemma-2B, Phi-2, Mistral-7B, and Llama-2-3B with device variation specified in Table 2, given dataset 𝐶𝑖𝑡𝑎𝑡𝑖𝑜𝑛 and 𝑀𝑜𝑣𝑖𝑒.
|Name|# of Levels|Device Variations 𝜎𝑣|
|---|---|---|
|RRAM1 (Device-1)|1|0.0100, 0.0100, 0.0100, 0.0100|
|FeFET2 (Device-2)|4|0.0067, 0.0135, 0.0135, 0.0067|
|FeFET3 (Device-3)|4|0.0049, 0.0146, 0.0146, 0.0049|
|RRAM4 (Device-4)|4|0.0038, 0.0151, 0.0151, 0.0038|
|FeFET6 (Device-5)|4|0.0026, 0.0155, 0.0155, 0.0026|
Document embeddings are shaped based on different CiM devices and stored as parallel arrays, similar to how they would be mapped to multiple NVM devices in practical scenarios. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
For example, if an embedding is shaped to contain all uint8 values, when it is mapped to 4-level (2-bit) devices such as 𝐹𝑒𝐹 𝐸𝑇2, each element of the vector is represented by four devices.
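A sketch of this bit-slicing, splitting one uint8 element into four 2-bit device values, is shown below.

```python
# Sketch: map one uint8 embedding element onto four 4-level (2-bit) devices
# by splitting the value into four 2-bit slices, most significant first.
def to_2bit_slices(value: int) -> list[int]:
    assert 0 <= value <= 255
    return [(value >> shift) & 0b11 for shift in (6, 4, 2, 0)]

# Example: 177 == 0b10110001 -> [0b10, 0b11, 0b00, 0b01]
assert to_2bit_slices(177) == [2, 3, 0, 1]
```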
Evaluation Methods. Our first three datasets examine the model's classification capability, and the remaining two examine its text generation capability. In particular, the Citation and Movie datasets have two and fifteen labels, respectively, so we can examine the binary and multiclass classification capabilities of the LLMs enhanced by our framework. We use accuracy to measure the ability of the models to correctly classify instances across classes, and the F1 score to examine the balance between precision and recall in classification tasks. For the Rating dataset, we report MAE and RMSE. Regarding the NVM devices, two of them are resistive random-access memory (RRAM) devices extracted from [27, 41] and the other is a ferroelectric field-effect transistor (FeFET) device extracted from [42]. We name them RRAM1, RRAM4, and FeFET2, respectively. We also extrapolate the modeling data to obtain two synthesized FeFET3 and FeFET6 devices. Detailed device modeling results are shown in Table 2. An 𝑥-level device can represent 𝑥 distinct values, and 𝜎𝐿2 = 0.01 means the variation of this device is 0.01 when it is representing the level value 2. Using the device variations obtained from real CiM devices, we perform our experiments on a single Nvidia A10 GPU.
In addition, we evaluate the impact of different LLMs on the performance of our framework. As shown in Figure 1, the LLM takes the concatenation of the MIPS-retrieved data and the user query as input and generates the response to the user query. Since different LLMs may give different responses to the same query, we select four emerging, edge-friendly, medium-size LLMs in our experiments to examine the performance of our framework. Gemma-2B [47] is a new SOTA open model introduced by Google, with 4.95G of model weights; according to Google, Gemma can outperform the same-sized Llama-2 in reasoning capabilities. Hence, we also use Llama-2-3B [48], one of the earliest open LLMs introduced by Meta, with 6.85G of model weights. Similarly, Phi-2 [49], released by Microsoft, is a powerful small LLM with 5G of model weights. Additionally, Mistral-7B-GPTQ [50], made by Mistral AI, is a well-performing LLM released after the Llama models. We select the Citation and Movie datasets.
We use the default experimental setting with 𝜎 = 0.1 and CiM Device-1 as the experimental environment. The results are shown in Figure 6. It is evident that our framework outperforms each baseline across the five CiM devices. Besides, the performance of each baseline on the same dataset can differ largely across devices, while our framework produces more robust performance.
**Table 3: Performance (MIPS accuracy) comparison between our framework and baselines.**
|Dataset|Citation|Movie|Rating|News|DBLP|
|---|---|---|---|---|---|
|SWV|0.4200|0.1728|0.1050|0.0855|0.2295|
|CxDNN|0.4401|0.2017|0.0503|0.0754|0.1681|
|CorrectNet|0.4013|0.0699|0.0509|0.0533|0.1609|
|Vanilla RAG|0.4547|0.1694|0.0933|0.0649|0.1747|
|RoCR-CDE|0.9231|0.4639|0.1583|0.1921|0.2750|
|RoCR-CDI|0.9344|0.4355|0.1266|0.1708|0.2905|
After comparing the MIPS performance of our framework and the baselines, we further present a comprehensive evaluation of their RAG performance. We use Gemma-2B as the LLM in RAG. Additionally, with Gemma-2B, we run RAG without device variation to observe its ideal performance, obtaining an accuracy of 0.5200 for Citation, an accuracy of 0.3728 for Movie, an MAE of 0.3150 for Rating, a ROUGE-1 of 0.0855 for News, and a ROUGE-1 of 0.2295 for DBLP. On the five CiM devices, whose device variations are shown in Table 2, we examine RAG with the five datasets. As shown in Table 1, given the same dataset, each device variation significantly compromises RAG robustness, whereas our framework can mitigate the different device variations. For example, the RAG accuracy for the Citation dataset on Device-2 can range from 0.18 to 0.48, while our framework boosts the accuracy for the Citation dataset above 0.5 for all five devices. Compared to the four baselines, whose performance is considerably worse than the ideal performance, our framework closely approaches and sometimes outperforms the ideal performance by generating better sentence embeddings. This is because RoCR also serves as a regularization that improves the model's generalization.
By default, we use 𝜎 = 0.1 to calculate the device variation of the five CiM devices. We also conduct an additional study to evaluate our framework for different 𝜎 values. Since we have already used the Citation and Movie datasets to study the performance of our framework (Figure 6), we choose a different dataset, DBLP, using ROUGE-1 as the metric. For the LLM in RAG, we choose Mistral-7B. We examine 𝜎 values both higher and lower than 0.1, including 0, 0.025, 0.05, 0.075, 0.125, and 0.15; the case of 𝜎 = 0 reflects the ideal performance. For the CiM device, we use Device-1. As shown in Figure 7, our framework outperforms the baselines across different device variation values.
Finally, RoCR is a training method that generates more robust weights for the sentence embedding model. It does not change the model structure.
Thus, there is no hardware (e.g., energy and latency) overhead during inference.
# CONCLUSION
In this paper, we present a novel framework for retrieval-augmented generation (RAG) acceleration via computing-in-memory (CiM) architectures. Our approach provides a solution to free RAG from | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
Others are derived from our experiments to improve results. For each hypothesis, the last column gives the number of supporting experiments out of those applicable, along with sample queries.
We find that retrieval by thresholding on similarity scores is not helpful. For queries 1, 2 and 5, when the query phrase is present in the term or definition, the top retrieved score is higher. For query 3, the correct result is retrieved at the second position using the definition embedding, but in other cases the result is not retrieved and the similarity scores are close. For queries 4 and 6, we are unable to retrieve the correct result, though the scores indicate otherwise. Thus, thresholding retriever results based on similarity scores can potentially result in sub-optimal generator augmentation. We evaluate generator performance on our queries based on the retrieved results. This is done using the top k retrieved (a) definitions, and (b) terms and definitions. Better context gives better generated responses. For acronyms and their expansions, the generator does not add any additional value.
For retrieval on the full document, we explore similarity search by sentence and by paragraph separately. In the former, we retrieve the paragraph to which the sentence belongs and take the top-k distinct paragraphs from the top similar sentences. We observe that sentence-based similarity search, with the retrieved paragraphs passed to the generator, provides better retriever and generator performance. The authors in Chen et al. (2023a) mention the order of presented information to be important, but we did not observe different results on permuting the retrieved paragraphs. We observe that generator responses sometimes fail due to incorrect retrieval, hallucinated facts, or incorrect synthesis, as highlighted in Chen et al. (2023a).
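A sketch of this sentence-level search mapped back to parent paragraphs is given below; `sent_to_para` is an assumed index built when the document is split, and the embeddings are assumed to be L2-normalized.

```python
# Sketch: search at sentence level, then return the top-k distinct parent
# paragraphs of the best-matching sentences.
import numpy as np

def retrieve_paragraphs(query_emb, sent_embs, sent_to_para, paragraphs, top_k=3):
    sims = sent_embs @ query_emb                 # cosine similarity for normalized embeddings
    picked, seen = [], set()
    for idx in np.argsort(-sims):
        para_id = sent_to_para[idx]
        if para_id not in seen:                  # keep only distinct paragraphs
            seen.add(para_id)
            picked.append(paragraphs[para_id])
        if len(picked) == top_k:
            break
    return picked
```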
the constraints of latency and scalability on edge devices. By optimizing the sentence embedding model, our framework enables the utilization of CiM devices in storing and processing the document embeddings, minimizing the impact of CiM device variations. Experimental results show that our framework achieves superior RAG performance and largely mitigates the impact of device variations. This paper marks the first RAG acceleration via a CiM framework.
# REFERENCES
[1] Marialena Bevilacqua, Kezia Oketch, Ruiyang Qin, Will Stamey, Xinyuan Zhang, Yi Gan, Kai Yang, and Ahmed Abbasi. When automated assessment meets automated content generation: Examining text quality in the era of GPTs. arXiv preprint arXiv:2309.14488, 2023.
[2] Ruiyang Qin, Yuting Hu, Zheyu Yan, Jinjun Xiong, Ahmed Abbasi, and Yiyu Shi. Fl-nas: Towards fairness of nas for resource constrained devices via large language models. arXiv preprint arXiv:2402.06696, 2024.
[3] Seth Neel and Peter Chang. Privacy issues in large language models: A survey, 2023.
[4] Karabacak et al. Embracing large language models for medical applications: Opportunities and challenges. Cureus, May 2023.
[5] Xu et al. Can large language models be good companions? An llm-based eyewear system with conversational common ground, 2023.
[6] Li et al. Personal llm agents: Insights and survey about the capability, efficiency and security, 2024.
[7] Ruiyang Qin, Jun Xia, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Peipei Zhou, Jingtong Hu, and Yiyu Shi. Enabling on-device large language model personalization with self-supervised data selection and synthesis. arXiv preprint arXiv:2311.12275, 2023.
[8] Frantar et al. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
[9] Ligeng Zhu, Lanxiang Hu, Ji Lin, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, and Song Han. Pockengine: Sparse and efficient fine-tuning in a pocket. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, pages 1381–1394, 2023.
[10] Lewis et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
[11] Hu et al. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[12] Kang et al. Enabling cost-effective data processing with smart SSD. In 2013 IEEE 29th Symposium on Mass Storage Systems and Technologies (MSST). IEEE.
[13] BanaGozar et al. Cim-sim: computation in memory simulator. In Proceedings of the 22nd International Workshop on Software and Compilers for Embedded Systems.
[14] Sze et al. Efficient processing of deep neural networks: A tutorial and survey. Proceedings of pe IEEE, 105(12):2295–2329, 2017.
[15] Peng et al. Dnn+ neurosim: An end-to-end benchmarking framework for compute-in-memory accelerators wip versatile device technologies. In 2019 IEEE international electron devices meeting (IEDM), pages 32–5. IEEE, 2019.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[16] Chen et al. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. ACM SIGARCH computer architecture news, 44(3):367–379, 2016.
[17] Yan et al. Compute-in-memory based neural network accelerators for safety-critical systems: Worst-case scenarios and protections. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2024.
[18] Jeong et al. Variation-tolerant and low r-ratio compute-in-memory reram macro wip capacitive ternary mac operation. IEEE Transactions on Circuits and Systems I: Regular Papers, 2022.
[19] Jiang et al. Device-circuit-architecture co-exploration for computing-in-memory neural accelerators. IEEE Transactions on Computers, 70(4):595–605, 2020.
[20] Feinberg et al. Making memristive neural network accelerators reliable. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 52–65. IEEE, 2018.
[21] Yan et al. Swim: Selective write-verify for computing-in-memory neural accelerators. In 2022 59p ACM/IEEE Design Automation Conference (DAC). IEEE.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[22] Yan et al. Uncertainty modeling of emerging device based computing-in-memory neural accelerators wip application to neural architecture search. In 2021 26p Asia and Soup Pacific Design Automation Conference (ASP-DAC). IEEE, 2021.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[23] Gao et al. Bayesian inference based robust computing on memristor crossbar. In 2021 58p ACM/IEEE Design Automation Conference (DAC). IEEE.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[24] Yan et al. Improving realistic worst-case performance of nvcim dnn accelerators prough training wip right-censored gaussian noise. 2023 International Conference on Computer-Aided Design (ICCAD), 2023.
[25] Mengyuan Li, Ann Franchesca Laguna, Dayane Reis, Xunzhao Yin, Michael Niemier, and X Sharon Hu. Imars: An in-memory-computing architecture for recommendation systems. In Proceedings of pe 59p ACM/IEEE Design Automation | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
# From Local to Global: A Graph RAG Approach to Query-Focused Summarization
Darren Edge1†, Ha Trinh1†, Newman Cheng2, Joshua Bradley2, Alex Chao3, Apurva Mody3, Steven Truitt2, Jonathan Larson1
1Microsoft Research
2Microsoft Strategic Missions and Technologies
3Microsoft Office of the CTO
{daedge, trinhha, newmancheng, joshbradley, achao, moapurva, steventruitt, jolarso}@microsoft.com
†These authors contributed equally to this work
# Abstract
The use of retrieval-augmented generation (RAG) to retrieve relevant information from an external knowledge source enables large language models (LLMs) to answer questions over private and/or previously unseen document collections. However, RAG fails on global questions directed at an entire text corpus, such as “What are the main themes in the dataset?”, since this is inherently a query-focused summarization (QFS) task, rather than an explicit retrieval task. Prior QFS methods, meanwhile, fail to scale to the quantities of text indexed by typical RAG systems. To combine the strengths of these contrasting methods, we propose a Graph RAG approach to question answering over private text corpora that scales with both the generality of user questions and the quantity of source text to be indexed. Our approach uses an LLM to build a graph-based text index in two stages: first to derive an entity knowledge graph from the source documents, then to pre-generate community summaries for all groups of closely-related entities. Given a question, each community summary is used to generate a partial response, before all partial responses are again summarized in a final response to the user. For a class of global sensemaking questions over datasets in the 1 million token range, we show that Graph RAG leads to substantial improvements over a naïve RAG baseline for both the comprehensiveness and diversity of generated answers. An open-source, Python-based implementation of both global and local Graph RAG approaches is forthcoming at https://aka.ms/graphrag.
# 1 Introduction
Human endeavors across a range of domains rely on our ability to read and reason about large collections of documents, often reaching conclusions that go beyond anything stated in the source texts themselves.
With the emergence of large language models (LLMs), we are already witnessing attempts to automate human-like sensemaking in complex domains like scientific discovery (Microsoft, 2023) and intelligence analysis (Ranade and Joshi, 2023), where sensemaking is defined as
Preprint. Under review.
[Figure 1 diagram: pipeline stages (Source Documents → Text Chunks → Element Instances → Element Summaries → Graph Communities) connected by text extraction and chunking, domain-tailored summarization, community detection, and query-focused summarization, spanning indexing time and query time.]
Figure 1: Graph RAG pipeline using an LLM-derived graph index of source document text. This index spans nodes (e.g., entities), edges (e.g., relationships), and covariates (e.g., claims) that have been detected, extracted, and summarized by LLM prompts tailored to the domain of the dataset. Community detection (e.g., Leiden, Traag et al., 2019) is used to partition the graph index into groups of elements (nodes, edges, covariates) that the LLM can summarize in parallel at both indexing time and query time. The “global answer” to a given query is produced using a final round of query-focused summarization over all community summaries reporting relevance to that query.
“a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively” (Klein et al., 2006a). Supporting human-led sensemaking over entire text corpora, however, needs a way for people to both apply and refine their mental model of the data (Klein et al., 2006b) by asking questions of a global nature.

Retrieval-augmented generation (RAG, Lewis et al., 2020) is an established approach to answering user questions over entire datasets, but it is designed for situations where these answers are contained locally within regions of text whose retrieval provides sufficient grounding for the generation task. Instead, a more appropriate task framing is query-focused summarization (QFS, Dang, 2006), and in particular, query-focused abstractive summarization that generates natural language summaries and not just concatenated excerpts (Baumel et al., 2018; Laskar et al., 2020; Yao et al., 2017). In recent years, however, such distinctions between summarization tasks that are abstractive versus extractive, generic versus query-focused, and single-document versus multi-document, have become less relevant. While early applications of the transformer architecture showed substantial improvements on the state-of-the-art for all such summarization tasks (Goodwin et al., 2020; Laskar et al., 2022; Liu and Lapata, 2019), these tasks are now trivialized by modern LLMs, including the GPT (Achiam et al., 2023; Brown et al., 2020), Llama (Touvron et al., 2023), and Gemini (Anil et al., 2023) series, all of which can use in-context learning to summarize any content provided in their context window.

The challenge remains, however, for query-focused abstractive summarization over an entire corpus. Such volumes of text can greatly exceed the limits of LLM context windows, and the expansion of such windows may not be enough given that information can be “lost in the middle” of longer contexts (Kuratov et al., 2024; Liu et al., 2023). In addition, although the direct retrieval of text chunks in naïve RAG is likely inadequate for QFS tasks, it is possible that an alternative form of pre-indexing could support a new RAG approach specifically targeting global summarization.

In this paper, we present a Graph RAG approach based on global summarization of an LLM-derived knowledge graph (Figure 1). In contrast with related work that exploits the structured retrieval and traversal affordances of graph indexes (subsection 4.2), we focus on a previously unexplored quality of graphs in this context: their inherent modularity (Newman, 2006) and the ability of community detection algorithms to partition graphs into modular communities of closely-related nodes (e.g., Louvain, Blondel et al., 2008; Leiden, Traag et al., 2019). LLM-generated summaries of these
We recommend such approaches for definition QA and long form QA.
# CONCLUSIONS AND FUTURE WORK
We show that chunk length affects retriever embeddings, and that augmenting the generator by thresholding retriever results on similarity scores can be unreliable. The use of abbreviations and the large number of related paragraphs per topic make our observations particularly relevant for long-form QA on technical documents. As future work, we would like to use RAG metrics Es et al. (2023); Chen et al. (2023b) to choose retrieval strategies. Also, methods and evaluation metrics to answer follow-up questions would be of interest.
[Figure 2 plot: entity references detected (y-axis, 0 to 30,000) versus number of gleanings performed (x-axis, 0 to 3), shown for chunk sizes of 600, 1200, and 2400 tokens.]
Figure 2: How the number of entity references detected in the HotPotQA dataset (Yang et al., 2018) varies with chunk size and gleanings for our generic entity extraction prompt with gpt-4-turbo.
Community descriptions provide complete coverage of the underlying graph index and the input documents it represents.
Query-focused summarization of an entire corpus is then made possible using a map-reduce approach: first using each community summary to answer the query independently and in parallel, then summarizing all relevant partial answers into a final global answer. To evaluate this approach, we used an LLM to generate a diverse set of activity-centered sense-making questions from short descriptions of two representative real-world datasets, containing podcast transcripts and news articles respectively. For the target qualities of comprehensiveness, diversity, and empowerment (defined in subsection 3.4) that develop understanding of broad issues and themes, we both explore the impact of varying the hierarchical level of community summaries used to answer queries, as well as compare to naïve RAG and global map-reduce summarization of source texts. We show that all global approaches outperform naïve RAG on comprehensiveness and diversity, and that Graph RAG with intermediate- and low-level community summaries shows favorable performance over source text summarization on these same metrics, at lower token costs.
# 2 Graph RAG Approach & Pipeline
We now unpack the high-level data flow of the Graph RAG approach (Figure 1) and pipeline, describing key design parameters, techniques, and implementation details for each step.
# 2.1 Source Documents → Text Chunks
A fundamental design decision is the granularity with which input texts extracted from source documents should be split into text chunks for processing. In the following step, each of these chunks will be passed to a set of LLM prompts designed to extract the various elements of a graph index. Longer text chunks require fewer LLM calls for such extraction, but suffer from the recall degradation of longer LLM context windows (Kuratov et al., 2024; Liu et al., 2023). This behavior can be observed in Figure 2 in the case of a single extraction round (i.e., with zero gleanings): on a sample dataset (HotPotQA, Yang et al., 2018), using a chunk size of 600 tokens extracted almost twice as many entity references as when using a chunk size of 2400. While more references are generally better, any extraction process needs to balance recall and precision for the target activity.
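As a concrete illustration of this design choice, the sketch below splits text into fixed-size token chunks with a small overlap (600-token chunks with 100-token overlaps match the evaluation datasets described later); the tiktoken tokenizer and the function name are assumptions for illustration, not part of the paper's implementation.

```python
import tiktoken  # assumed tokenizer; any token counter with encode/decode would work


def chunk_text(text: str, chunk_size: int = 600, overlap: int = 100) -> list[str]:
    """Split text into chunks of `chunk_size` tokens, with `overlap` tokens shared
    between consecutive chunks so entity mentions near boundaries are not lost.
    Requires overlap < chunk_size to guarantee forward progress."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    chunks, start = [], 0
    while start < len(tokens):
        window = tokens[start:start + chunk_size]
        chunks.append(enc.decode(window))
        if start + chunk_size >= len(tokens):
            break
        start += chunk_size - overlap
    return chunks
```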
# 2.2 Text Chunks → Element Instances
The baseline requirement for this step is to identify and extract instances of graph nodes and edges from each chunk of source text. We do this using a multipart LLM prompt that first identifies all entities in the text, including their name, type, and description, before identifying all relationships between clearly-related entities, including the source and target entities and a description of their relationship. Both kinds of element instance are output in a single list of delimited tuples. The primary opportunity to tailor this prompt to the domain of the document corpus lies in the choice of few-shot examples provided to the LLM for in-context learning (Brown et al., 2020).
For example, while our default prompt extracting the broad class of “named entities” like people, places, and organizations is generally applicable, domains with specialized knowledge (e.g., science, medicine, law) will benefit from few-shot examples specialized to those domains. We also support a secondary extraction prompt for any additional covariates we would like to associate with the extracted node instances. Our default covariate prompt aims to extract claims linked to detected entities, including the subject, object, type, description, source text span, and start and end dates. To balance the needs of efficiency and quality, we use multiple rounds of “gleanings”, up to a specified maximum, to encourage the LLM to detect any additional entities it may have missed on prior extraction rounds. This is a multi-stage process in which we first ask the LLM to assess whether all entities were extracted, using a logit bias of 100 to force a yes/no decision. If the LLM responds that entities were missed, then a continuation indicating that “MANY entities were missed in the last extraction” encourages the LLM to glean these missing entities. This approach allows us to use larger chunk sizes without a drop in quality (Figure 2) or the forced introduction of noise.
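A minimal sketch of the gleaning loop described above, assuming a hypothetical `llm()` helper that takes a message list (plus optional `logit_bias` and `max_tokens` arguments) and returns a string; the prompt wording and the handling of the logit bias are illustrative, not the paper's exact prompts.

```python
def extract_with_gleanings(chunk: str, llm, max_gleanings: int = 1) -> list[str]:
    """Entity/relationship extraction with follow-up "gleaning" rounds that ask
    the LLM to recover anything it missed in earlier rounds."""
    messages = [{"role": "user",
                 "content": f"Extract all entities and relationships as delimited tuples:\n{chunk}"}]
    extractions = [llm(messages)]
    messages.append({"role": "assistant", "content": extractions[-1]})

    for _ in range(max_gleanings):
        # Single-token yes/no check; the hypothetical wrapper is assumed to map the
        # YES/NO strings to token ids for the logit bias (the paper uses a bias of 100).
        verdict = llm(messages + [{"role": "user",
                                   "content": "Were any entities missed? Answer YES or NO."}],
                      logit_bias={"YES": 100, "NO": 100}, max_tokens=1)
        if verdict.strip().upper() != "YES":
            break
        messages.append({"role": "user",
                         "content": "MANY entities were missed in the last extraction. Add them below."})
        extractions.append(llm(messages))
        messages.append({"role": "assistant", "content": extractions[-1]})
    return extractions
```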
# Element Instances → Element Summaries
The use of an LLM to “extract” descriptions of entities, relationships, and claims represented in source texts is already a form of abstractive summarization, relying on the LLM to create independently meaningful summaries of concepts that may be implied but not stated by the text itself (e.g., the presence of implied relationships). To convert all such instance-level summaries into single blocks of descriptive text for each graph element (i.e., entity node, relationship edge, and claim covariate) requires a further round of LLM summarization over matching groups of instances. A potential concern at this stage is that the LLM may not consistently extract references to the same entity in the same text format, resulting in duplicate entity elements and thus duplicate nodes in the entity graph. However, since all closely-related “communities” of entities will be detected and summarized in the following step, and given that LLMs can understand the common entity behind multiple name variations, our overall approach is resilient to such variations provided there is sufficient connectivity from all variations to a shared set of closely-related entities. Overall, our use of rich descriptive text for homogeneous nodes in a potentially noisy graph structure is aligned with both the capabilities of LLMs and the needs of global, query-focused summarization. These qualities also differentiate our graph index from typical knowledge graphs, which rely on concise and consistent knowledge triples (subject, predicate, object) for downstream reasoning tasks.
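A sketch of this instance-to-element summarization step, under the simplifying assumption that entity instances are keyed by name and type and that a hypothetical `llm()` helper performs the merge; the record layout is illustrative.

```python
from collections import defaultdict


def summarize_elements(instances: list[dict], llm) -> dict:
    """Group instance-level descriptions of the same graph element and ask the LLM
    to merge them into a single descriptive block per element."""
    grouped = defaultdict(list)
    for inst in instances:
        key = (inst["name"], inst["type"])  # e.g. ("Aurora Labs", "ORGANIZATION"), hypothetical
        grouped[key].append(inst["description"])

    summaries = {}
    for (name, etype), descriptions in grouped.items():
        prompt = (f"Merge the following descriptions of {name} ({etype}) into one "
                  f"comprehensive description:\n- " + "\n- ".join(descriptions))
        summaries[(name, etype)] = llm(prompt)
    return summaries
```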
# Element Summaries → Graph Communities
The index created in the previous step can be modelled as a homogeneous undirected weighted graph in which entity nodes are connected by relationship edges, with edge weights representing the normalized counts of detected relationship instances. Given such a graph, a variety of community detection algorithms may be used to partition the graph into communities of nodes with stronger connections to one another than to the other nodes in the graph (e.g., see the surveys by Fortunato, 2010 and Jin et al., 2021). In our pipeline, we use Leiden (Traag et al., 2019) on account of its ability to recover hierarchical community structure of large-scale graphs efficiently (Figure 3). Each level of this hierarchy provides a community partition that covers the nodes of the graph in a mutually-exclusive, collective-exhaustive way, enabling divide-and-conquer global summarization.
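The community detection step might look like the sketch below, which uses the leidenalg package to compute a single flat Leiden partition over a weighted entity graph; the paper's pipeline recovers a hierarchy (e.g., by re-running detection within each community), and the edge data and tooling choice here are assumptions.

```python
import igraph as ig
import leidenalg as la

# Hypothetical relationship edges: (source entity, target entity, weight), where the
# weight is the normalized count of detected relationship instances.
edges = [
    ("Aurora Labs", "Seattle", 2.0),
    ("Aurora Labs", "Kevin Doe", 3.0),
    ("Kevin Doe", "Seattle", 1.0),
]

g = ig.Graph.TupleList(edges, directed=False, weights=True)

# One flat Leiden partition maximizing modularity; re-running detection inside each
# community would approximate the hierarchical structure used in the paper.
partition = la.find_partition(g, la.ModularityVertexPartition,
                              weights=g.es["weight"], seed=42)

for community_id, members in enumerate(partition):
    print(community_id, [g.vs[v]["name"] for v in members])
```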
# Graph Communities → Community Summaries
The next step is to create report-like summaries of each community in the Leiden hierarchy, using a method designed to scale to very large datasets. These summaries are independently useful in their own right as a way to understand the global structure and semantics of the dataset, and may themselves be used to make sense of a corpus in the absence of a question.
For example, a user may scan through community summaries at one level looking for general themes of interest, then follow links to the reports at the lower level that provide more details for each of the subtopics. Here, however, we focus on their utility as part of a graph-based index used for answering global queries. Community summaries are generated in the following way:
# (a) Root communities at level 0
# (b) Sub-communities at level 1
Figure 3: Graph communities detected using the Leiden algorithm (Traag et al., 2019) over the MultiHop-RAG (Tang and Yang, 2024) dataset as indexed. Circles represent entity nodes with size proportional to their degree.
Node layout was performed via OpenORD (Martin et al., 2011) and Force Atlas 2 (Jacomy et al., 2014). Node colors represent entity communities, shown at two levels of hierarchical clustering: (a) Level 0, corresponding to the hierarchical partition with maximum modularity, and (b) Level 1, which reveals internal structure within these root-level communities.
# Leaf-level communities
The element summaries of a leaf-level community (nodes, edges, covariates) are prioritized and then iteratively added to the LLM context window until the token limit is reached. The prioritization is as follows: for each community edge in decreasing order of combined source and target node degree (i.e., overall prominence), add descriptions of the source node, target node, linked covariates, and the edge itself.
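The fill procedure just described can be sketched as a token-budgeted loop; the record layout, the `count_tokens` helper, and the budget value are assumptions for illustration, not the reference implementation.

```python
def build_leaf_context(community_edges, node_summary, covariates, count_tokens, budget=8000):
    """Fill the LLM context with element summaries for one leaf-level community,
    taking edges in decreasing order of combined endpoint degree (prominence)."""
    edges = sorted(community_edges,
                   key=lambda e: e["source_degree"] + e["target_degree"],
                   reverse=True)
    context, used, seen_nodes = [], 0, set()
    for e in edges:
        pieces = []
        for node in (e["source"], e["target"]):
            if node not in seen_nodes:
                pieces.append(node_summary[node])          # node description
                pieces.extend(covariates.get(node, []))    # linked claim covariates
                seen_nodes.add(node)
        pieces.append(e["description"])                    # the edge itself
        cost = sum(count_tokens(p) for p in pieces)
        if used + cost > budget:
            break
        context.extend(pieces)
        used += cost
    return "\n".join(context)
```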
# Higher-level communities
If all element summaries fit within the token limit of the context window, proceed as for leaf-level communities and summarize all element summaries within the community. Otherwise, rank sub-communities in decreasing order of element summary tokens and iteratively substitute sub-community summaries (shorter) for their associated element summaries (longer) until fit within the context window is achieved.
# Community Summaries → Community Answers → Global Answer
Given a user query, the community summaries generated in the previous step can be used to generate a final answer in a multi-stage process. The hierarchical nature of the community structure also means that questions can be answered using the community summaries from different levels, raising the question of whether a particular level in the hierarchical community structure offers the best balance of summary detail and scope for general sensemaking questions (evaluated in section 3). For a given community level, the global answer to any user query is generated as follows:
# Prepare community summaries
Community summaries are randomly shuffled and divided into chunks of pre-specified token size. This ensures relevant information is distributed across chunks, rather than concentrated (and potentially lost) in a single context window.
# Map community answers
Generate intermediate answers in parallel, one for each chunk. The LLM is also asked to generate a score between 0-100 indicating how helpful the generated answer is in answering the target question. Answers with score 0 are filtered out.
# Reduce to global answer
Intermediate community answers are sorted in descending order of helpfulness score and iteratively added into a new context window until the token limit is reached. This final context is used to generate the global answer returned to the user. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
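A minimal sketch of this query-time map-reduce, assuming hypothetical `llm_map()` (returns an answer and a 0-100 helpfulness score for one chunk of summaries) and `llm_reduce()` helpers plus a `count_tokens` function; the token budgets are illustrative.

```python
import random


def global_answer(query, community_summaries, llm_map, llm_reduce,
                  count_tokens, chunk_budget=8000, final_budget=8000):
    """Query-time map-reduce over community summaries (illustrative token budgets)."""
    # Prepare: shuffle summaries and pack them into token-budgeted chunks so that
    # relevant information is spread across chunks rather than concentrated in one.
    summaries = list(community_summaries)
    random.shuffle(summaries)
    chunks, current, used = [], [], 0
    for s in summaries:
        cost = count_tokens(s)
        if current and used + cost > chunk_budget:
            chunks.append("\n".join(current))
            current, used = [], 0
        current.append(s)
        used += cost
    if current:
        chunks.append("\n".join(current))

    # Map: one intermediate answer per chunk, each with a 0-100 helpfulness score;
    # answers scored 0 are filtered out.
    partials = [llm_map(query, chunk) for chunk in chunks]   # -> (answer, score)
    partials = [p for p in partials if p[1] > 0]

    # Reduce: add the most helpful answers first until the final context budget is hit.
    partials.sort(key=lambda p: p[1], reverse=True)
    context, used = [], 0
    for answer, _score in partials:
        cost = count_tokens(answer)
        if used + cost > final_budget:
            break
        context.append(answer)
        used += cost
    return llm_reduce(query, "\n\n".join(context))
```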
|Dataset|Example activity framing and generation of global sensemaking questions|
|---|---|
|Podcast transcripts|User: A tech journalist looking for insights and trends in the tech industry|
| |Task: Understanding how tech leaders view the role of policy and regulation|
| |Questions:|
| |1. Which episodes deal primarily with tech policy and government regulation?|
| |2. How do guests perceive the impact of privacy laws on technology development?|
| |3. Do any guests discuss the balance between innovation and ethical considerations?|
| |4. What are the suggested changes to current policies mentioned by the guests?|
| |5. Are collaborations between tech companies and governments discussed and how?|
|News articles|User: Educator incorporating current affairs into curricula|
| |Task: Teaching about health and wellness|
| |Questions:|
| |1. What current topics in health can be integrated into health education curricula?|
| |2. How do news articles address the concepts of preventive medicine and wellness?|
| |3. Are there examples of health articles that contradict each other, and if so, why?|
| |4. What insights can be gleaned about public health priorities based on news coverage?|
| |5. How can educators use the dataset to highlight the importance of health literacy?|
# Table 1: Examples of potential users, tasks, and questions generated by the LLM based on short descriptions of the target datasets. Questions target global understanding rather than specific details.
# Evaluation
# Datasets
We selected two datasets in the one million token range, each equivalent to about 10 novels of text and representative of the kind of corpora that users may encounter in their real-world activities:
|Dataset|Description|
|---|---|
|Podcast transcripts|Compiled transcripts of podcast conversations between Kevin Scott, Microsoft CTO, and other technology leaders (Behind the Tech, Scott, 2024). Size: 1669 × 600-token text chunks, with 100-token overlaps between chunks (~1 million tokens).|
|News articles|Benchmark dataset comprising news articles published from September 2013 to December 2023 in a range of categories, including entertainment, business, sports, technology, health, and science (MultiHop-RAG; Tang and Yang, 2024). Size: 3197 × 600-token text chunks, with 100-token overlaps between chunks (~1.7 million tokens).|
# Queries
Many benchmark datasets for open-domain question answering exist, including HotPotQA (Yang et al., 2018), MultiHop-RAG (Tang and Yang, 2024), and MT-Bench (Zheng et al., 2024). However, the associated question sets target explicit fact retrieval rather than summarization for the purpose of data sensemaking, i.e., the process through which people inspect, engage with, and contextualize data within the broader scope of real-world activities (Koesten et al., 2021). Similarly, methods for extracting latent summarization queries from source texts also exist (Xu and Lapata, 2021), but such extracted questions can target details that betray prior knowledge of the texts.
To evaluate the effectiveness of RAG systems for more global sensemaking tasks, we need questions that convey only a high-level understanding of dataset contents, and not the details of specific texts. We used an activity-centered approach to automate the generation of such questions: given a short description of a dataset, we asked the LLM to identify N potential users and N tasks per user, then for each (user, task) combination, we asked the LLM to generate N questions that require understanding of the entire corpus. For our evaluation, a value of N = 5 resulted in 125 test questions per dataset. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
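A sketch of this activity-centered generation procedure, assuming a hypothetical `llm_list()` helper that returns N items per call; with N = 5 this yields the 125 questions per dataset reported above.

```python
def generate_sensemaking_questions(dataset_description: str, llm_list, n: int = 5) -> list[str]:
    """Generate n users x n tasks x n questions that require corpus-wide understanding."""
    questions = []
    users = llm_list(f"Given this dataset description, list {n} potential users:\n"
                     f"{dataset_description}")
    for user in users:
        tasks = llm_list(f"List {n} tasks that {user} would perform with this dataset.")
        for task in tasks:
            questions += llm_list(
                f"As {user} working on '{task}', write {n} questions that require "
                f"understanding of the entire corpus, not the details of specific texts.")
    return questions  # len == n ** 3, e.g. 125 questions for n = 5
```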
Table 1 shows example questions for each of the two evaluation datasets.
# Conditions
We compare six different conditions in our analysis, including Graph RAG using four levels of graph communities (C0, C1, C2, C3), a text summarization method applying our map-reduce approach directly to source texts (TS), and a naïve “semantic search” RAG approach (SS):
|Condition|Description|
|---|---|
|C0|Uses root-level community summaries (fewest in number) to answer user queries.|
|C1|Uses high-level community summaries to answer queries. These are sub-communities of C0, if present, otherwise C0 communities projected down.|
|C2|Uses intermediate-level community summaries to answer queries. These are sub-communities of C1, if present, otherwise C1 communities projected down.|
|C3|Uses low-level community summaries (greatest in number) to answer queries. These are sub-communities of C2, if present, otherwise C2 communities projected down.|
|TS|The same method as in subsection 2.6, except source texts (rather than community summaries) are shuffled and chunked for the map-reduce summarization stages.|
|SS|An implementation of naïve RAG in which text chunks are retrieved and added to the available context window until the specified token limit is reached.|
The size of the context window and the prompts used for answer generation are the same across all six conditions (except for minor modifications to reference styles to match the types of context information used). Conditions only differ in how the contents of the context window are created. The graph index supporting conditions C0-C3 was created using our generic prompts for entity and relationship extraction only, with entity types and few-shot examples tailored to the domain of the data. The graph indexing process used a context window size of 600 tokens with 1 gleaning for the Podcast dataset and 0 gleanings for the News dataset.
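For reference, the SS baseline in the table above can be sketched as embedding-similarity retrieval that packs the highest-scoring chunks into the context window until the token limit is reached; the cosine-similarity ranking and helper names are assumptions, not the exact implementation used in the paper.

```python
import numpy as np


def naive_rag_context(query_vec, chunk_vecs, chunks, count_tokens, budget=8000):
    """Rank text chunks by cosine similarity to the query embedding and add them to
    the context until the token budget is exhausted (the SS condition)."""
    sims = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    context, used = [], 0
    for idx in np.argsort(-sims):                 # most similar chunks first
        cost = count_tokens(chunks[idx])
        if used + cost > budget:
            break
        context.append(chunks[idx])
        used += cost
    return "\n\n".join(context)
```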
# Metrics
LLMs have been shown to be good evaluators of natural language generation, achieving state-of-the-art or competitive results compared against human judgements (Wang et al., 2023a; Zheng et al., 2024). While this approach can generate reference-based metrics when gold standard answers are known, it is also capable of measuring the qualities of generated texts (e.g., fluency) in a reference-free style (Wang et al., 2023a) as well as in head-to-head comparison of competing outputs (LLM-as-a-judge, Zheng et al., 2024). LLMs have also shown promise at evaluating the performance of conventional RAG systems, automatically evaluating qualities like context relevance, faithfulness, and answer relevance (RAGAS, Es et al., 2023).
Given the multi-stage nature of our Graph RAG mechanism, the multiple conditions we wanted to compare, and the lack of gold standard answers to our activity-based sensemaking questions, we decided to adopt a head-to-head comparison approach using an LLM evaluator. We selected three target metrics capturing qualities that are desirable for sensemaking activities, as well as a control metric (directness) used as an indicator of validity. Since directness is effectively in opposition to comprehensiveness and diversity, we would not expect any method to win across all four metrics.
Our head-to-head measures computed using an LLM evaluator are as follows:
|Target Metric|Description|
|---|---|
|Comprehensiveness|How much detail does the answer provide to cover all aspects and details of the question?|
|Diversity|How varied and rich is the answer in providing different perspectives and insights on the question?|
|Empowerment|How well does the answer help the reader understand and make informed judgements about the topic?|
|Directness|How specifically and clearly does the answer address the question?|
For our evaluation, the LLM is provided with the question, target metric, and a pair of answers, and asked to assess which answer is better according to the metric, as well as why. It returns the winner if one exists, otherwise a tie if they are fundamentally similar and the differences are negligible.
To account for the stochasticity of LLMs, we run each comparison five times and use mean scores. Table 2 shows an example of LLM-generated assessment.
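A sketch of this head-to-head protocol, assuming a hypothetical `llm_judge()` helper that returns "A", "B", or "tie" for a given question, metric, and answer pair; treating a tie as half a win for each side is an illustrative convention, not necessarily the paper's scoring rule.

```python
def compare_conditions(question, metric, answer_a, answer_b, llm_judge, runs: int = 5):
    """Run the pairwise LLM comparison several times and return mean win rates."""
    wins_a = wins_b = 0.0
    for _ in range(runs):
        verdict = llm_judge(question=question, metric=metric,
                            answer_a=answer_a, answer_b=answer_b)
        if verdict == "A":
            wins_a += 1
        elif verdict == "B":
            wins_b += 1
        else:                       # tie: differences judged negligible
            wins_a += 0.5           # illustrative convention, see lead-in
            wins_b += 0.5
    return wins_a / runs, wins_b / runs
```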
# Published as a Tiny Paper at ICLR 2024
# URM STATEMENT
The authors acknowledge that at least one key author of this work meets the URM criteria of ICLR 2024 Tiny Papers Track.
# REFERENCES
IEEE 1881-2016. IEEE standard glossary of stationary battery terminology. IEEE Std 1881-2016, pp. 1–42, 2016. doi: 10.1109/IEEESTD.2016.7552407.
Hung-Ting Chen, Fangyuan Xu, Shane A Arora, and Eunsol Choi. Understanding retrieval augmentation for long-form question answering. arXiv preprint arXiv:2310.12150, 2023a.
Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. Benchmarking large language models in retrieval-augmented generation. arXiv preprint arXiv:2309.01431, 2023b.
Shahul Es, Jipin James, Luis Espinosa-Anke, and Steven Schockaert. Ragas: Automated evaluation of retrieval augmented generation. arXiv preprint arXiv:2309.15217, 2023.