# Seven Failure Points When Engineering a Retrieval Augmented Generation System
Scott Barnett, Stefanus Kurniawan, Srikanth Thudumu, Zach Brannelly, Mohamed Abdelrazek
{scott.barnett,stefanus.kurniawan,srikanth.thudumu,zach.brannelly,mohamed.abdelrazek}@deakin.edu.au
Applied Artificial Intelligence Institute
Geelong, Australia
# ABSTRACT
Software engineers are increasingly adding semantic search capabilities to applications using a strategy known as Retrieval Augmented Generation (RAG). A RAG system involves finding documents that semantically match a query and then passing the documents to a large language model (LLM) such as ChatGPT, which extracts the right answer from them. RAG systems aim to: a) reduce the problem of hallucinated responses from LLMs, b) link sources/references to generated responses, and c) remove the need for annotating documents with meta-data. However, RAG systems suffer from limitations inherent to information retrieval systems and from reliance on LLMs. In this paper, we present an experience report on the failure points of RAG systems across three case studies in separate domains: research, education, and biomedical. We share the lessons learned and present 7 failure points to consider when designing a RAG system. The two key takeaways arising from our work are: 1) validation of a RAG system is only feasible during operation, and 2) the robustness of a RAG system evolves rather than being designed in at the start. We conclude with a list of potential research directions on RAG systems for the software engineering community.
# CCS CONCEPTS
• Software and its engineering → Empirical software validation.
# KEYWORDS
Retrieval Augmented Generation, RAG, SE4AI, Case Study
ACM Reference Format:
Scott Barnett, Stefanus Kurniawan, Srikanth Thudumu, Zach Brannelly, Mohamed Abdelrazek. 2024.
Seven Failure Points When Engineering a Retrieval Augmented Generation System. In Proceedings of 3rd International Conference on AI Engineering — Software Engineering for AI (CAIN 2024).
arXiv:2403.14374v1 [cs.CL] 21 Mar 2024
FIT-RAG: Black-Box RAG with Factual Information and Token Reduction
YUREN MAO, XUEMEI DONG, WENYI XU, YUNJUN GAO, and BIN WEI, Zhejiang University, China
YING ZHANG, Zhejiang Gongshang University, China
Due to the extraordinarily large number of parameters, fine-tuning Large Language Models (LLMs) to update long-tail or out-of-date knowledge is impractical in many applications. To avoid fine-tuning, we can alternatively treat an LLM as a black box (i.e., freeze its parameters) and augment it with a Retrieval-Augmented Generation (RAG) system, namely black-box RAG. Recently, black-box RAG has achieved success in knowledge-intensive tasks and has gained much attention. Existing black-box RAG methods typically fine-tune the retriever to cater to LLMs' preferences and concatenate all the retrieved documents as the input, which suffers from two issues: (1) Ignorance of Factual Information. The documents preferred by the LLM may not contain the factual information for the given question, which can mislead the retriever and hurt the effectiveness of black-box RAG; (2) Waste of Tokens. Simply concatenating all the retrieved documents brings large amounts of unnecessary tokens to the LLM, which degrades the efficiency of black-box RAG. To address these issues, this paper proposes FIT-RAG, a novel black-box RAG framework which utilizes the factual information in retrieval and reduces the number of tokens used for augmentation. FIT-RAG utilizes the factual information by constructing a bi-label document scorer which takes the factual information and the LLM's preferences as its two labels. Besides, it reduces the tokens by introducing a self-knowledge recognizer and a sub-document-level token reducer, which enable FIT-RAG to avoid unnecessary augmentation and to reduce augmentation tokens as much as possible. FIT-RAG achieves both superior effectiveness and efficiency, which is validated by extensive experiments across three open-domain question-answering datasets: TriviaQA, NQ and PopQA. FIT-RAG can improve the answering accuracy of Llama2-13B-Chat by 14.3% on TriviaQA, 19.9% on NQ and 27.5% on PopQA, respectively. Furthermore, it can save approximately half of the tokens on average across the three datasets.
CCS Concepts: • Information systems → Novelty in information retrieval; • Computing methodologies → Natural language generation.
Additional Key Words and Phrases: Retrieval-Augmented Generation, Large Language Models
ACM Reference Format:
Yuren Mao, Xuemei Dong, Wenyi Xu, Yunjun Gao, Bin Wei, and Ying Zhang. 2018. FIT-RAG: Black-Box RAG with Factual Information and Token Reduction. In .
ACM, New York, NY, USA, 24 pages.
# 1 INTRODUCTION
Large language models (LLMs), which typically have billions of parameters, have demonstrated remarkable performance on a wide range of natural language processing tasks [5, 19, 34]. Pretrained on massive text corpora, recent LLMs, such as GPT-4 [30], showcase a level of sophistication that approaches human-like proficiency, especially in text generation. However, the knowledge stored in the parameters of LLMs is fixed, which makes it susceptible to becoming out-of-date and prevents LLMs from addressing tasks that require time-sensitive information. Moreover, LLMs struggle to learn long-tail knowledge which appears infrequently in their training data [18, 28]. Additionally, due to the extraordinarily large number of parameters, frequent fine-tuning of LLMs to update knowledge is expensive and infeasible in practice.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Manuscript submitted to ACM
ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
# INTRODUCTION
Recent advances in Large Language Models (LLMs), including ChatGPT, have given software engineers new capabilities to build HCI solutions, complete complex tasks, summarise documents, answer questions over a given set of artefacts, and generate new content. However, LLMs suffer from limitations when it comes to up-to-date knowledge or domain-specific knowledge currently captured in a company's repositories. Two options to address this problem are: a) fine-tuning LLMs (continuing to train an LLM on domain-specific artefacts), which requires managing or serving a fine-tuned LLM; or b) using Retrieval-Augmented Generation (RAG) systems that rely on LLMs to generate answers from existing (extensible) knowledge artefacts. Both options have pros and cons related to privacy/security of data, scalability, cost, required skills, etc. In this paper, we focus on the RAG option.
Retrieval-Augmented Generation (RAG) systems offer a compelling solution to this challenge. By integrating retrieval mechanisms with the generative capabilities of LLMs, RAG systems can synthesise contextually relevant, accurate, and up-to-date information. A RAG system combines information retrieval capabilities with the generative prowess of LLMs. The retrieval component focuses on retrieving information relevant to a user query from a data store. The generation component focuses on using the retrieved information as context to generate an answer to the user query. RAG systems are an important use case because unstructured information can now be indexed and made available to query, reducing development time: no knowledge graph needs to be created, and only limited data curation and cleaning is required.
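To make the retrieval and generation components concrete, the sketch below shows a minimal query-time flow. It is illustrative only: the hashed bag-of-words `embed` stub stands in for a real embedding model, and `build_prompt` simply formats the retrieved context for an LLM call rather than invoking any particular API.

```python
import numpy as np

# Toy embedding: hashed bag-of-words. A real RAG system would use a trained
# sentence-embedding model; this stub keeps the sketch self-contained.
def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# Retrieval component: rank stored chunks by cosine similarity to the query.
def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: float(q @ embed(c)), reverse=True)
    return ranked[:top_k]

# Generation component: build a grounded prompt; a real system would send this
# prompt to an LLM API and return its answer.
def build_prompt(query: str, context: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer the question using only this context:\n{joined}\n\nQuestion: {query}"

chunks = [
    "The State Hermitage Museum is located in Saint Petersburg, Russia.",
    "BioASQ is a biomedical question answering benchmark.",
]
query = "Where is the State Hermitage Museum?"
print(build_prompt(query, retrieve(query, chunks)))
```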
Software engineers building RAG systems are expected to preprocess domain knowledge captured as artifacts in different formats, store the processed information in an appropriate data store (a vector database), implement or integrate the right query-artifact matching strategy, rank matched artifacts, and call the LLM's API, passing in the user query and context documents. New advances for building RAG systems are constantly emerging [8, 12], but how they relate to and perform in a specific application context has to be discovered.
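The indexing side of that engineering workflow can be sketched in the same spirit. The chunk size, overlap, and in-memory vector store below are placeholder choices for illustration, not recommendations from the case studies; a production system would use a real embedding model and a vector database.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Hashed bag-of-words placeholder, as in the previous sketch."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def chunk_text(text: str, size: int = 300, overlap: int = 50) -> list[str]:
    """Split an artifact into overlapping character windows (placeholder strategy)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

class InMemoryVectorStore:
    """Stand-in for a vector database: holds chunk embeddings and answers top-k queries."""
    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add_document(self, text: str) -> None:
        for piece in chunk_text(text):
            self.chunks.append(piece)
            self.vectors.append(embed(piece))

    def query(self, question: str, top_k: int = 3) -> list[str]:
        q = embed(question)
        sims = np.array([q @ v for v in self.vectors])
        return [self.chunks[i] for i in np.argsort(-sims)[:top_k]]

store = InMemoryVectorStore()
store.add_document("BioASQ is a biomedical question answering benchmark built on PubMed.")
print(store.query("What is BioASQ?", top_k=1))
```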
In this work we present the lessons learned and 7 failure points arising from 3 case studies. The purpose of this paper is to provide 1) a reference for practitioners and 2) a research roadmap for RAG systems. To the best of our knowledge, we present the first empirical insight into the challenges of creating robust RAG systems. As advances in LLMs continue to take place, the software engineering community has a responsibility to provide knowledge on how to realise robust systems with LLMs. This work is an important step towards robustness in building RAG systems.
Research questions for this work include:
• What are the failure points that occur when engineering a RAG system? (section 5) We present an empirical experiment using the BioASQ data set to report on potential failure points. The experiment involved 15,000 documents and 1000 question
Fig. 1. Examples illustrating LLM preferred retrieved documents that do not contain relevant factual information. These examples are obtained from the training set of TriviaQA and the answers are generated using Llama1-13B-Chat. The three examples are: "The State Hermitage Museum is in which Russian city?" (answer: St. Petersburg, Russia); "The Simpsons cartoon series was originally part of whose TV show?" (answer: The Tracey Ullman Show); and "Who played Mike Baldwin in Coronation St.?" (answer: Johnny Briggs).
Out-of-date and long-tail knowledge lead to LLMs struggling with hallucinations and factual errors, especially in knowledge-intensive tasks.
To address these issues, an emerging approach is Retrieval-Augmented Generation (RAG) [4, 15, 21, 51]. Instead of relying solely on the LLM's inherent knowledge, RAG augments LLMs with external knowledge retrieved from large corpora or knowledge databases. Such RAG systems provide relevant context to ground the LLM's predictions and fill knowledge gaps. Prior RAG systems typically train both the retriever and the generation model to align with each other and adjust to downstream tasks [14, 15]. This joint training helps the generation model better utilize the retrieved information and improves model synergy and generalization performance. However, this approach becomes impractical when the generation module is a large language model, which can have billions of parameters. On one hand, fine-tuning the full LLM is often infeasible due to the massive computational resources required; on the other hand, many existing LLMs are only accessible via APIs [30, 31] and cannot be fine-tuned.
To overcome the infeasibility of fine-tuning LLMs in RAG, black-box RAG, which instead regards the LLM as a black box (i.e., freezes the parameters of the LLM) and augments it without fine-tuning, has achieved success in knowledge-intensive tasks and gained much attention. Existing black-box RAG methods [36, 44, 53, 54] typically fine-tune the retriever based only on LLMs' preferences (e.g., whether the LLM can give a correct answer with the retrieved documents) and concatenate all the retrieved documents as the input, which suffers from both effectiveness and efficiency issues. Considering only LLMs' preferences in retrieval causes ignorance of factual information, which can degrade the effectiveness of RAG because it may mislead the retriever. As demonstrated in Figure 1, the LLM can answer correctly with the retrieved documents, but the documents themselves do not actually contain relevant factual information for the given question. For example, Q1 asks for the location of the State Hermitage Museum; however, the retrieved document
Fig. 2. The overview of FIT-RAG. The diagram shows the similarity-based retriever, the bi-label document scorer (with its Has_Answer and LLM_Prefer labels), and the self-knowledge recognizer, together with the input prompts (with and without retrieved passages) constructed for the example question "Who was the British Prime Minister in 1953?".
provides information about the Museum of Moscow. Although the LLM can give the correct answer, the retrieved document is actually unnecessary. If such unnecessary documents are used to reward the retriever, they can mislead it. Besides, concatenating all the retrieved documents as the input wastes tokens: it introduces many unnecessary tokens and hurts the efficiency of RAG.
To simultaneously avoid the ignorance of factual information and the waste of tokens, this paper proposes FIT-RAG, a novel black-box RAG framework that uses both factual information and LLM preference in retrieval and performs token reduction on the input. Figure 2 gives an overview of FIT-RAG, which consists of five components: a similarity-based retriever, a bi-label document scorer, a bi-faceted self-knowledge recognizer, a sub-document-level token reducer, and a prompt construction module. Among these components, the bi-label document scorer models alignment with LLM preferences as well as factual information, which avoids the ignorance of factual information; the bi-faceted self-knowledge recognizer and the sub-document-level token reducer reduce the input tokens, which avoids the waste of tokens.
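The dataflow described in this paragraph can be summarised in Python. The interfaces below are hypothetical stand-ins that only mirror the order of the five components; in FIT-RAG the scorer, recognizer, and reducer are learned models rather than the trivial callables used in the usage example.

```python
from dataclasses import dataclass

@dataclass
class ScoredDoc:
    text: str
    has_answer: float   # factual-information label score
    llm_prefer: float   # LLM-preference label score

def fit_rag_answer(question, retriever, scorer, recognizer, reducer, llm):
    """Hypothetical dataflow mirroring Figure 2: gate, retrieve, score, reduce, prompt."""
    # 1. Bi-faceted self-knowledge recognizer: skip retrieval when the LLM is
    #    judged to already know the answer (avoids unnecessary augmentation).
    if recognizer(question):
        return llm(f"Answer the question: {question}")

    # 2. Similarity-based retriever + bi-label document scorer.
    docs = [ScoredDoc(d, *scorer(question, d)) for d in retriever(question)]

    # 3. Sub-document-level token reducer: keep a small, sufficient context.
    context = reducer(question, docs)

    # 4. Prompt construction.
    prompt = ("Refer to the passages below and answer the question.\n"
              + "\n".join(context) + f"\nQuestion: {question}")
    return llm(prompt)

# Minimal usage with trivial stand-ins for the learned components:
answer = fit_rag_answer(
    "Who was the British Prime Minister in 1953?",
    retriever=lambda q: ["Winston Churchill served as Prime Minister from 1951 to 1955."],
    scorer=lambda q, d: (1.0, 1.0),
    recognizer=lambda q: False,
    reducer=lambda q, docs: [d.text for d in docs if d.has_answer > 0.5],
    llm=lambda prompt: "Winston Churchill",
)
print(answer)
```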
The bi-label document scorer is trained with bi-label learning over a factual information label (Has_Answer) and an LLM preference label (LLM_Prefer). The factual information label indicates whether the document contains the answer to the question, while the LLM preference label indicates whether the document helps the LLM generate an accurate response. However, there is a serious data imbalance between the labels, which can degrade the performance of bi-label learning. To address this imbalance, this paper proposes a data-imbalance-aware bi-label learning method that allocates different weights to different data points, with the weights learned automatically via hypergradient descent. The proposed method properly handles the data imbalance, and the resulting bi-label document scorer gives a comprehensive evaluation of the retrieved documents.
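As a rough illustration of bi-label learning with per-label weighting, the snippet below takes one weighted gradient step for a two-head logistic scorer over (question, document) features. The logistic form and the fixed weights are assumptions made for the sketch; FIT-RAG's scorer architecture is not reproduced here, and its weights are obtained via hypergradient descent rather than set by hand.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bi_label_step(W, x, y, weights, lr=0.1):
    """One weighted gradient step for a two-head logistic scorer.

    W:       (2, d) weights for the Has_Answer and LLM_Prefer heads.
    x:       (d,) feature vector for a (question, document) pair.
    y:       (2,) binary labels [has_answer, llm_prefer].
    weights: (2,) per-label example weights compensating label imbalance
             (fixed here; learned with hypergradient descent in the paper).
    """
    p = sigmoid(W @ x)                     # predicted probability per label
    grad = np.outer(weights * (p - y), x)  # weighted binary cross-entropy gradient
    return W - lr * grad

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(2, 4))
x = np.array([0.9, 0.1, 0.3, 0.7])         # toy retrieval features
y = np.array([1.0, 0.0])                   # contains the answer, but the LLM did not prefer it
W = bi_label_step(W, x, y, weights=np.array([1.0, 3.0]))
print(sigmoid(W @ x))                      # scores for (Has_Answer, LLM_Prefer)
```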
The bi-faceted self-knowledge recognizer reduces input tokens by avoiding unnecessary augmentation, while the sub-document-level token reducer reduces input tokens by eliminating the unnecessary sub-documents.
The bi-faceted self-knowledge recognizer determines whether the LLM requires external knowledge by estimating whether the LLM has self-knowledge from two facets: whether the question relates to long-tail or out-of-date knowledge, and whether the question's nearest neighbors have self-knowledge. The sub-document-level token reducer eliminates unnecessary sub-documents by selecting sub-document combinations from the retrieved documents that contain few sub-documents yet are still sufficient to augment the LLM to give correct answers.
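The following stubs illustrate these two token-saving mechanisms under simplifying assumptions: the gate is a plain boolean rule over two caller-supplied predicates, and the reducer is a greedy budgeted selection, whereas FIT-RAG's actual components are learned and search for a combination that is still sufficient for a correct answer.

```python
def needs_retrieval(question: str, looks_long_tail_or_stale, neighbors_lack_self_knowledge) -> bool:
    """Illustrative bi-faceted gate: augment only if the question looks long-tail or
    out-of-date, or if similar past questions could not be answered from self-knowledge."""
    return looks_long_tail_or_stale(question) or neighbors_lack_self_knowledge(question)

def reduce_sub_documents(question: str, sub_docs: list[str], score_fn, budget_tokens: int = 200) -> list[str]:
    """Greedy stand-in for the sub-document-level token reducer: keep the highest
    scoring sub-documents that fit a small token budget."""
    ranked = sorted(sub_docs, key=lambda s: score_fn(question, s), reverse=True)
    chosen, used = [], 0
    for s in ranked:
        cost = len(s.split())  # crude whitespace token count
        if used + cost <= budget_tokens:
            chosen.append(s)
            used += cost
    return chosen
```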
To verify the effectiveness of FIT-RAG, we adopt it to augment the Llama2-13B-Chat model on three open-domain question answering datasets: TriviaQA, NQ and PopQA. Compared with the original Llama2-13B-Chat model without retrieval augmentation, FIT-RAG improves answering accuracy by 14.3% on TriviaQA, 19.9% on NQ and 27.5% on PopQA, respectively. Furthermore, it outperforms all other baseline RAG frameworks, which experimentally demonstrates the effectiveness of our proposed method. Besides, FIT-RAG consumes the fewest input tokens of all baseline black-box RAG methods. On average across the datasets, our proposed method saves approximately half of the tokens, which greatly improves token efficiency and saves computational resources.
# RELATED WORK
# Large Language Models
Recently, Large Language Models (LLMs) have grown rapidly in scale and capabilities. Early language models like BERT and T5 show strong performance on natural language understanding and generation tasks. These early successes spur further expansion of LLMs to even larger scales.
Models such as InstructGPT, LLaMA, OPT, and BLOOM possess tens or even hundreds of billions of parameters. This substantial increase in scale brings a significant enhancement in model capacity. Recent models like GPT-4, at an even larger scale, showcase a level of sophistication that approaches human-like proficiency. However, even the strongest GPT-4 model suffers from hallucinations and factual errors, as the knowledge stored in its parameters is limited and easily becomes out-of-date. To address these issues, a possible solution is Retrieval-Augmented Generation (RAG), which augments LLMs with external knowledge. Traditional RAG frameworks often target white-box settings, which may not be applicable in many scenarios, since fine-tuning the full LLM requires massive computational resources and many LLMs can only be accessed through APIs. Therefore, we need to investigate RAG systems tailored for LLMs under black-box settings.
# Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) is a technique that augments natural language generation models with relevant content retrieved from knowledge sources, aiming at improving the quality and relevance of text generation. Previous works have demonstrated its strong performance in knowledge-intensive tasks such as question answering, fact checking, and content recommendation.
Retrievers interact with an external corpus to acquire relevant information. For open-domain question answering, the Wikipedia corpus is commonly used. Retrieval methods can broadly be categorized into two types: sparse retrievers and dense retrievers. Sparse retrievers, such as TF-IDF and BM25, predominantly rely on keyword matching for document retrieval. These methods determine the relevance between queries and documents by analyzing the occurrence and distribution of keywords within the documents. Dense retrievers employ dual encoders to generate dense vector representations of text for more accurate semantic matching. Consequently, dense retrievers
are considered more suitable for retrieval-augmented applications. Techniques such as vector quantization [25, 47] and embedding optimization [48] also improve the efficiency of dense retrievers.
Common dense retrievers include DPR [20], ANCE [49] and Contriever [13]. Specifically, DPR [20] is trained with supervised learning on question-answer pairs and focuses on extracting relevant passages by analyzing the semantic content of both questions and answers. ANCE [49] leverages approximate nearest neighbor search and contrastive learning to enhance the model's ability to discern between relevant and non-relevant documents in a dense vector space. Contriever [13] employs unsupervised contrastive learning to adapt to the inherent data structure, which is especially beneficial when annotated training data is scarce. To enhance the quality of retrieved documents, some works further rerank these documents for personalization [3, 40, 56] and diversification [27, 38].
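A minimal sketch of the dense-retrieval mechanics these systems share is given below. The `encode` function is a hashed bag-of-words placeholder for a trained dual encoder such as DPR, ANCE, or Contriever, and `rerank` is only a hook where a cross-encoder or personalization/diversification model could be plugged in.

```python
import numpy as np

def encode(text: str, dim: int = 256) -> np.ndarray:
    """Placeholder for a trained dual encoder; hashed bag-of-words so the sketch
    runs without model weights."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def dense_retrieve(query: str, corpus: list[str], top_k: int = 3) -> list[tuple[float, str]]:
    """First-stage dense retrieval: rank documents by inner product with the query vector."""
    q = encode(query)
    scores = [(float(q @ encode(d)), d) for d in corpus]
    return sorted(scores, reverse=True)[:top_k]

def rerank(query: str, candidates: list[tuple[float, str]], cross_score) -> list[str]:
    """Second-stage reranking hook: cross_score stands in for a cross-encoder or a
    personalization/diversification model applied to the shortlist."""
    return [d for _, d in sorted(candidates, key=lambda p: cross_score(query, p[1]), reverse=True)]
```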
Recent work has explored different ways for language models to leverage retrieved or generated text as external knowledge. One approach is to integrate retrieval into language model pre-training or fine-tuning. For instance, REALM [9] integrates external document retrieval into pre-training, enhancing performance in downstream tasks by retrieving relevant information. RAG [24] adopts a generative approach that blends retrieval and generation. It is specifically fine-tuned for knowledge-intensive tasks like open-domain question answering, leveraging the synergy of retrieval-augmented capabilities with generative modeling. Atlas [15] builds upon the RAG framework by combining RAG's retrieval-generation method with encoder-decoder language model pre-training, with a focus on fine-tuning for question answering tasks. Another approach, RETRO [4], modifies the language model architecture for better text retrieval by employing kNN-LM to retrieve contextually relevant tokens and integrating their distribution with the predictions of the language model.
Instead of retrieval, some methods use text generation as the knowledge source. The concept is that knowledge within the language model can be retrieved through direct text generation [17]. For example, the Selfmem framework [7] ingeniously employs a generator to repeatedly produce synthesized texts, forming an unlimited memory pool for future use. This model uniquely uses its own generated outputs as self-memory to aid subsequent generation, showcasing an innovative approach where text generation helps fabricate new memory references. Another notable method is RECITE [39], which is designed to enable LLMs to produce relevant information without resorting to external data sources. It first prompts the LLM to recite relevant passages based on its internal knowledge. These passages are then used as a pseudo evidence document that the LLM conditions on to produce the final answer. Similarly, GENREAD [52] also utilizes the generative capabilities of LLMs to avoid external retrieval, prompting the LLM to generate context-specific documents in response to the question.
The LLM then reads these synthesized documents and generates the final response.
# Retrieval-Augmentation for Black-box Language Models
Large language models, such as InstructGPT [31] and GPT-4 [30], are often non-open-source and exist as black-box APIs, allowing users only to send queries and receive responses without access or modification to their internal parameters. Traditional retrieval-augmented models typically focus on a white-box setup that is tunable, but this is infeasible for large-scale black-box language models. Addressing this challenge, recent research has developed retrieval augmentation methods suitable for black-box settings. For instance, REPLUG [36] operates by utilizing a fixed language model to evaluate and provide probability distributions for the retrieved documents. This supervises the retriever to select documents preferred by the LLM. Another method, AAR [54], creates positive and negative documents for a given question and uses these documents to fine-tune the retriever to align with the LLM's preference. REFEED [53] first creates answers, then uses a retrieval model to obtain relevant information from a large document corpus based on the question and answers, and finally integrates the retrieved information into the in-context demonstration for output refinement. LRT [50] addresses the high computational cost of updating databases by introducing an adaptive similarity matching module and fine-tuning with fewer than one million parameters. Despite the applicability of the above-mentioned methods for retrieval augmentation in black-box settings, these methods typically ignore the importance of factual information and face issues of input token inefficiency, which hurts both the effectiveness and efficiency of the RAG system.
# PRELIMINARIES
# Problem Formulation
This paper focuses on Retrieval-Augmented Generation (RAG) systems for black-box Large Language Models (LLMs), namely black-box RAG. In this section, we first give the definition of RAG and subsequently introduce the black-box RAG.
Retrieval-Augmented Generation (RAG). Given a natural language question 𝑞, an external knowledge corpus W and a generative language model M, a RAG system aims to help M generate more accurate and informative responses to 𝑞 using a retrieval model R, which effectively retrieves relevant documents D = (𝑑1, 𝑑2, 𝑑3, ...) from W. The form of introducing external knowledge to the language model varies, including modifying attention weights during generation, incorporating it into input prompts, or using it in post-calibration of the model output. Moreover, existing RAG methods typically require joint fine-tuning of the retriever and the language model (e.g., Atlas [15], REALM [9]). However, joint fine-tuning is unaffordable in many practical scenarios due to the extremely large parameter scale of LLMs. In these scenarios, we can alternatively treat an LLM as a black-box (i.e., freeze the parameters of the LLM) and augment it with a RAG system, namely black-box RAG. Next, we introduce the definition of the black-box RAG.
Retrieval-Augmented Generation System for Black-box LLM (Black-box RAG). A RAG system for black-box LLM aims to enhance the generation capability of the black-box LLM M𝐵 by retrieving external knowledge without updating the LLM parameters. While the parameters of the black-box LLM M𝐵 are frozen, the parameters of the retrieval model R are learnable. Thus, the RAG system for black-box LLM only optimizes R to improve overall system performance, without modifying M𝐵 . Moreover, existing black-box RAG systems typically inject the retrieved documents D into M𝐵 by constructing an input prompt that concatenates the question 𝑞 and documents D, which leverages the powerful in-context learning capabilities of the LLM.
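The following is a minimal sketch of the black-box setting just described: the LLM is reachable only through a text-in/text-out API, so retrieved documents are injected by concatenating them with the question into an input prompt. The `retriever` and `call_llm` callables are hypothetical placeholders for the learnable retriever R and the frozen LLM M_B.

```python
# A minimal sketch of a black-box RAG call; only the retriever is trainable.
from typing import Callable, List

def black_box_rag_answer(
    question: str,
    retriever: Callable[[str, int], List[str]],
    call_llm: Callable[[str], str],
    top_k: int = 10,
) -> str:
    docs = retriever(question, top_k)  # R is learnable; M_B stays frozen
    context = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(docs))
    prompt = (
        "Refer to the passages below and answer the following question.\n"
        f"Passages:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)  # relies on the LLM's in-context learning ability
```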
# Parameter-Efficient Fine-Tuning
Parameter-Efficient Fine-Tuning (PEFT) methods, which fine-tune only a small or additional set of model parameters and keep the majority of pre-trained parameters fixed, can largely reduce the computational cost of model adaptation. This makes PEFT more practical than full parameter fine-tuning, especially for large language models. Recently, various PEFT methods have been proposed, such as Adapter [10], Prompt Tuning [23], Prefix-Tuning [26] and LoRA [11]. These methods have shown competitive performance compared to full parameter fine-tuning on various downstream tasks. In our framework, we employ the LoRA method to fine-tune our T5-based bi-label document scorer. Specifically, LoRA fine-tunes the model by introducing an additional low-rank matrix.
By optimizing only the parameters of the low-rank matrices, LoRA adapts T5 to the downstream task while keeping the original T5 parameters frozen, which greatly reduces the computational cost while maintaining competitive performance.
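As an illustration of the LoRA idea referenced above, the sketch below wraps a frozen linear layer with a trainable low-rank update. This is an illustrative module under our own assumptions (initialization, scaling), not the actual PEFT/T5 integration used in FIT-RAG.

```python
# A minimal LoRA sketch: the pre-trained weight is frozen and only the
# low-rank update B @ A (rank r) is trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained parameters
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus trainable low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Usage: wrapped = LoRALinear(nn.Linear(768, 768), r=16, alpha=32)
```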
# 3.3 Hallucination of LLMs
The hallucination of LLMs refers to the phenomenon where LLMs generate content that seems reasonable but is actually incorrect, irrelevant to the input prompt, or even self-contradictory. Despite their impressive capabilities, LLMs still face challenges of hallucination. The hallucination phenomenon in LLMs is closely tied to their uncertainty and overconfidence. Without awareness of what they do not know, LLMs tend to exhibit excessive faith in their own predictions, oblivious to potential knowledge gaps. Addressing hallucination in LLMs is crucial for ensuring their reliability in real-world applications. In this work, we aim to alleviate the potential for hallucination in LLMs by augmenting them with external knowledge using our proposed FIT-RAG framework. By providing relevant factual information from knowledge sources, we ground the generation of LLMs in tangible evidence, enhancing their capacity for accurate and contextually relevant outputs.
# 3.4 Prompt Learning
Prompt learning is an emerging technique which stimulates the capabilities of LLMs through designing proper prompts. Unlike traditional fine-tuning, where model parameters are directly updated, this approach emphasizes the crucial role of well-crafted prompts in guiding language models to produce more accurate and contextually relevant outputs. Moreover, by constructing effective prompts, LLMs can be customized for new tasks without costly training. This makes prompt learning much more practical and applicable in various natural language processing domains. In this work, we find that different prompts significantly impact the ability of LLMs to leverage external information and influence the performance of the generated outputs. By carefully designing prompts tailored for our RAG system, the generation quality of the LLM is greatly improved.
# 4 METHODOLOGY
# 4.1 Overall Framework of FIT-RAG
In this section, we present an overview of FIT-RAG, which simultaneously achieves effectiveness and efficiency based on our proposed bi-label document scorer, bi-faceted self-knowledge recognizer and sub-document-level token reducer. The framework and workflow of FIT-RAG are illustrated in Figure 2 and Algorithm 1 respectively.
FIT-RAG comprises five components:
1. a Similarity-based Retriever, which selects a set of candidate documents from the corpus;
2. a Bi-label Document Scorer which scores the candidate documents based on both the factual information and LLM preferences;
3. a Bi-faceted Self-Knowledge Recognizer which determines if external knowledge is necessary for the given question;
4. a Sub-document-level Token Reducer which selects the Top-10 candidate documents and compresses them by extracting the sub-documents;
5. a Prompt Construction module, which constructs a prompt based on the question, the result of the self-knowledge recognizer and the output of the token reducer.
When a question comes, FIT-RAG first adopts the similarity-based retriever to obtain the 100 most relevant candidate documents based on vector similarity. Here, we build the similarity-based retriever based on Contriever [13]. Next, the bi-label document scorer scores the candidate documents; the scores include the factual information label score (Has_Answer) and the LLM preference label score (LLM_Prefer). Then, the self-knowledge recognizer evaluates whether external knowledge is necessary for each question, according to whether the question is related to long-tail or out-of-date knowledge and whether the question's nearest neighbors have self-knowledge. If necessary, the candidate documents are input into the token reducer, where the Top-10 candidate documents are selected according to the scores of the bi-label document scorer.
Algorithm 1: Inference of FIT-RAG
Required: a given question 𝑞, the corpus W, the Similarity-based Retriever R, the Bi-label Document Scorer S, the Self-Knowledge Recognizer K, the Token Reducer T, the Large Language Model M.
D_r ← use R to retrieve the Top-100 relevant documents from W given 𝑞   // Similarity-Based Retriever
for 𝑑_r ∈ D_r do
    (score1, score2) ← use S to generate the bi-label scores of 𝑑_r   // Bi-label Document Scorer
end
S_ltod(𝑞) ← measure the relevance of 𝑞 to long-tail or out-of-date knowledge
S_nn(𝑞) ← measure the self-knowledge of the nearest neighbors of 𝑞
K(𝑞) ← decide whether retrieval is necessary for 𝑞 using K according to S_ltod and S_nn   // Bi-faceted Self-Knowledge Recognizer
if K(𝑞) == No_Retrieve then
    P(𝑞) ← construct the input prompt using only 𝑞   // Prompt Construction
    answer ← use M to generate the answer given P(𝑞)
end
else if K(𝑞) == Retrieve then
    D_f ← get the compressed sub-document combination using T(D_r)   // Sub-document-level Token Reducer
    P(𝑞, D_f) ← construct the input prompt using 𝑞 and D_f   // Prompt Construction
    answer ← use M to generate the answer given P(𝑞, D_f)
end
The selected documents are then compressed into a set of sub-documents to reduce the number of tokens; after that, the selected sub-documents and the question are jointly integrated into a prompt. Otherwise, if the external knowledge is unnecessary, only the question is integrated into a prompt. Finally, the prompt is fed to the LLM to generate an answer. Benefiting from the proposed bi-label document scorer, bi-faceted self-knowledge recognizer, and sub-document-level token reducer, FIT-RAG can provide effective and efficient augmentation for LLMs. In the following sections, we introduce the details of these novel components.
# 4.2 Bi-Label Document Scorer
To evaluate both the factual information and the LLM's preference for a retrieved document, we propose to learn a bi-label document scorer using bi-label learning, where the two labels are defined as (1) Has_Answer and (2) LLM_Prefer. The factual information label Has_Answer assesses the likelihood of the document containing the answer to the question, while the LLM preference label LLM_Prefer measures the document's usefulness in helping the LLM generate an accurate response. To learn the scorer, we first formulate the bi-label learning problem for learning a bi-label document scorer. However, serious data imbalance occurs in the training data, which degrades the performance of the bi-label document scorer. To solve the data imbalance, we propose a data-imbalance-aware bi-label learning algorithm for the bi-label document scorer.
# 4.2.1 Bi-Label Learning for Bi-Label Document Scorer
Consider a bi-label classification problem over an input space X and an output space Y = {0, 1}^2. Given a training dataset 𝐷, we learn a model ℎ(·, 𝜃): X → Y from a parameterized hypothesis class H, where 𝜃 represents the model parameters.
For a training sample (𝑥, 𝑦) ∈ 𝐷, 𝑦 = (𝑦1, 𝑦2), where 𝑦1 and 𝑦2 represent the Has_Answer label and the LLM_Prefer label, respectively. The loss function is denoted by 𝑙(·, ·), and we use the binary cross-entropy loss, that is,

l(h(x, θ), y) = − Σ_{i=1}^{2} [ y_i log h_i(x, θ) + (1 − y_i) log(1 − h_i(x, θ)) ].

Fig. 3. The training process of the Bi-Label Document Scorer.
We learn the bi-label scorer by optimizing the following learning objective.
min_θ L(θ, D) = (1/|D|) Σ_{(x,y)∈D} l(h(x, θ), y).   (1)
To learn the bi-label document scorer, we collect the training set 𝐷 and build the classification model ℎ as follows. The process of training data annotation and model fine-tuning is demonstrated in Figure 3.
The training set consists of the Top-50 documents for each question that are retrieved from the corpus using Contriever [13]. They are annotated with the following principles. For the Has_Answer label, we check whether the document contains any candidate answer in the gold answer set; if it does, Has_Answer is labeled as 1, otherwise 0. As for the LLM_Prefer label, we append the document to the question and feed it to the LLM to obtain a predicted answer. If appending the document improves the LLM's performance (i.e., leads to a correct answer), LLM_Prefer is labeled as 1; otherwise, it is labeled as 0.
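The annotation rules above can be made concrete with the following minimal sketch. The `llm_answer` callable is a hypothetical placeholder for querying the black-box LLM; matching by substring containment is an assumption for illustration.

```python
# A minimal sketch of annotating the Has_Answer and LLM_Prefer labels.
from typing import Callable, List

def contains_answer(document: str, gold_answers: List[str]) -> int:
    # Has_Answer = 1 iff any gold answer string appears in the document.
    doc = document.lower()
    return int(any(ans.lower() in doc for ans in gold_answers))

def llm_prefers(question: str, document: str, gold_answers: List[str],
                llm_answer: Callable[[str], str]) -> int:
    # LLM_Prefer = 1 iff appending the document lets the LLM answer correctly.
    prompt = f"Passage: {document}\nQuestion: {question}\nAnswer:"
    pred = llm_answer(prompt).lower()
    return int(any(ans.lower() in pred for ans in gold_answers))

def annotate(question: str, document: str, gold_answers: List[str],
             llm_answer: Callable[[str], str]):
    return (contains_answer(document, gold_answers),
            llm_prefers(question, document, gold_answers, llm_answer))
```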
The classification model is constructed based on T5 [ 33 ], where the decoder is replaced by a binary classification head. Furthermore, to accelerate training and save computational resources, we use LoRA to fine-tune the T5 model.
However, through experimental analysis, we find that the number of samples with {1, 1} or {0, 0} labels is typically more than ten times larger than the number of samples with {0, 1} or {1, 0} labels. This causes serious data imbalance in the bi-label learning problem and degrades the performance of the scorer. To address this problem, we propose a data-imbalance-aware bi-label learning algorithm in the next section.
Data-imbalance-aware Bi-label Learning. To alleviate the impact of data imbalance, we propose to assign different weights to different data and automatically learn the weights with hypergradient descent [1, 2]. The weighted loss is
given as follows:

L(θ, D) = (1/|D|) Σ_{(x,y)∈D} [ f(w) · l(h(x, θ), y) ],   (2)

where f(w) = w · δ_{y1,y2} + (1 − w)(1 − δ_{y1,y2}), (y1, y2) ∈ y.   (3)
In the above equation, 𝑤 is the weight, and 𝛿𝑦1,𝑦2 is the Kronecker delta which is 1 when 𝑦1 is equal to 𝑦2 and 0 otherwise. To minimize 𝐿(𝜃, 𝐷), we adopt gradient descent. In the 𝑘𝑡ℎ iteration, 𝜃𝑘 updates as follows.
θ_k = θ_{k−1} − η ∇ L(θ_{k−1}, D),   (4)
where 𝜂 is the learning rate.
We propose to find 𝑤 that optimizes the generalization performance of the model by means of hypergradient descent. Specifically, we randomly split the training data into two subsets: a training set D_t and a validation set D_v, where D_t is used to train the classification model and D_v is used to estimate the generalization loss. D_t and D_v are each further divided into two subsets according to the values of the two labels: the matched training set D_t^{mat} and matched validation set D_v^{mat}, and the mismatched training set D_t^{mis} and mismatched validation set D_v^{mis}. Based on this, the training loss can be defined as
L_t(θ, D_t) = w · L_t(θ, D_t^{mat}) + (1 − w) · L_t(θ, D_t^{mis}),   (5)

where L_t(θ, D_t^{mat}) = (1/|D_t|) Σ_{(x,y)∈D_t^{mat}} l(h(x, θ), y) and L_t(θ, D_t^{mis}) = (1/|D_t|) Σ_{(x,y)∈D_t^{mis}} l(h(x, θ), y), and the two validation losses can be defined as

L_v^{mat}(θ, D_v^{mat}) = (1/|D_v^{mat}|) Σ_{(x,y)∈D_v^{mat}} l(h(x, θ), y),   (6)

L_v^{mis}(θ, D_v^{mis}) = (1/|D_v^{mis}|) Σ_{(x,y)∈D_v^{mis}} l(h(x, θ), y),   (7)
respectively. Then we formulate the optimization objective as follows:
min_w  L_v^{mat}(θ_k, D_v^{mat}) + L_v^{mis}(θ_k, D_v^{mis})   (8)

s.t.  θ_k = θ_{k−1} − η ∇_θ L_t(θ_{k−1}, D_t)
Based on Equation (8), we first find the hypergradient direction w.r.t. w for the label-matched data and the label-mismatched data:

d_mat = ∂L_v^{mat}(θ_k, D_v^{mat}) / ∂w = η ∇_θ L_v^{mat}(θ_k, D_v^{mat}) · (∇_θ L_t(θ_{k−1}, D_t^{mat}) − ∇_θ L_t(θ_{k−1}, D_t^{mis})),   (9)

d_mis = ∂L_v^{mis}(θ_k, D_v^{mis}) / ∂w = η ∇_θ L_v^{mis}(θ_k, D_v^{mis}) · (∇_θ L_t(θ_{k−1}, D_t^{mat}) − ∇_θ L_t(θ_{k−1}, D_t^{mis})).   (10)

By uniformly summing these two directions, we expect to obtain a common descent direction. Define this common direction as d_com = (d_mat + d_mis)/2. Then w can be updated as follows.
w_k = w_{k−1} − α · d_com,   (11)
where 𝛼 is the hypergradient step size.
Overall, we propose the data-imbalance-aware bi-label learning algorithm, which is presented in algorithmic form in Algorithm 2.
# Algorithm 2: Data-imbalance-aware Bi-label Learning Algorithm
Input: training set D_t, validation set D_v, step size α for updating w, learning rate η for updating the model.
Initialize: w_0, θ_0
for k = 1 to K do
    θ_k ← θ_{k−1} − η ∇_θ L_t(θ_{k−1}, D_t)
    d_mat ← η ∇_θ L_v^{mat}(θ_k, D_v^{mat}) · (∇_θ L_t(θ_{k−1}, D_t^{mat}) − ∇_θ L_t(θ_{k−1}, D_t^{mis}))
    d_mis ← η ∇_θ L_v^{mis}(θ_k, D_v^{mis}) · (∇_θ L_t(θ_{k−1}, D_t^{mat}) − ∇_θ L_t(θ_{k−1}, D_t^{mis}))
    d_com ← (d_mat + d_mis)/2
    w_k ← w_{k−1} − α · d_com
end
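To make the weighted loss and the hypergradient weight update concrete, the following is a minimal PyTorch sketch of one training step under our reading of Equations (2), (4) and (9)-(11). The `model`, the batch format, and the `train_step` helper are illustrative assumptions, not the authors' implementation; labels are assumed to be float tensors of shape (batch, 2), and both matched and mismatched subsets are assumed non-empty.

```python
# A minimal sketch (not the authors' code) of data-imbalance-aware bi-label learning.
import torch
import torch.nn.functional as F

def bce_bilabel(logits, labels):
    # Binary cross-entropy summed over the two labels (cf. the bi-label loss).
    return F.binary_cross_entropy_with_logits(logits, labels, reduction="none").sum(dim=1)

def split_by_match(x, y):
    # "Matched" samples have y1 == y2 ({0,0} or {1,1}); "mismatched" otherwise.
    matched = y[:, 0] == y[:, 1]
    return (x[matched], y[matched]), (x[~matched], y[~matched])

def train_step(model, w, train_batch, val_batch, eta=1e-3, alpha=1e-2):
    (xt_m, yt_m), (xt_s, yt_s) = split_by_match(*train_batch)
    (xv_m, yv_m), (xv_s, yv_s) = split_by_match(*val_batch)
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients of the matched / mismatched training losses (Eq. 2 with f(w)).
    L_mat = bce_bilabel(model(xt_m), yt_m).mean()
    L_mis = bce_bilabel(model(xt_s), yt_s).mean()
    g_mat = torch.autograd.grad(L_mat, params)
    g_mis = torch.autograd.grad(L_mis, params)

    # Model update (Eq. 4) with the w-weighted gradient.
    with torch.no_grad():
        for p, gm, gs in zip(params, g_mat, g_mis):
            p -= eta * (w * gm + (1 - w) * gs)

    # Hypergradients (Eqs. 9-10): validation gradients dotted with the
    # difference of the matched / mismatched training gradients.
    gv_mat = torch.autograd.grad(bce_bilabel(model(xv_m), yv_m).mean(), params)
    gv_mis = torch.autograd.grad(bce_bilabel(model(xv_s), yv_s).mean(), params)
    diff = [gm - gs for gm, gs in zip(g_mat, g_mis)]
    d_mat = eta * sum((a * b).sum() for a, b in zip(gv_mat, diff))
    d_mis = eta * sum((a * b).sum() for a, b in zip(gv_mis, diff))

    # Weight update (Eq. 11), clipped to [0, 1] as a practical assumption.
    w = float(min(max(w - alpha * (d_mat + d_mis).item() / 2, 0.0), 1.0))
    return w
```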
# 4.3 Bi-faceted Self-Knowledge Recognizer
Given a question 𝑞, we determine whether retrieval is necessary by recognizing whether the LLM has self-knowledge on this question, namely whether the LLM can answer this question without retrieving external documents. This paper determines whether the LLM has the self-knowledge based on two facets: (1) whether the question is related to long-tail knowledge or out-of-date knowledge; (2) whether the question’s nearest neighbors have self-knowledge. We illustrate the inference process of bi-faceted self-knowledge recognizer in Figure 4.
To detect whether the question is related to long-tail knowledge or out-of-date knowledge, we would need access to the pretraining corpus and the memorization behavior of the LLM; however, these are unavailable for black-box LLMs. To tackle this problem, existing work [29] utilizes Wikipedia's monthly page views as a proxy to simulate the pretraining corpus and achieves reasonable performance. Following this idea, this paper utilizes Wikipedia's monthly page views as a proxy and determines whether the question is related to long-tail knowledge or out-of-date knowledge based on it. Based on the output of the retriever 𝐷𝑟(𝑞), we adopt the Has_Answer label of the Bi-Label Document Scorer and define a score to measure the degree of the question's relevance to long-tail or out-of-date knowledge. The score is defined as follows.
S_ltod(q) = (1/|D_r(q)|) Σ_{x∈D_r(q)} 1[h_ans(x, θ) > δ_ltod](x),   (12)

where 1[h_ans(x, θ) > δ_ltod](x) is an indicator function which equals 1 if h_ans(x, θ) > δ_ltod and 0 otherwise, h_ans(x, θ) is the output of the Bi-Label Document Scorer on the Has_Answer label, and δ_ltod is a hyperparameter.
Besides, the self-knowledge of the question’s nearest neighbors is an important facet for recognizing whether the LLM has self-knowledge on the given question [44]. In this paper, we first label a set of questions in the training set as correct_w/o_retrieve or incorrect_w/o_retrieve based on whether the LLM can directly answer correctly, and then transform the questions into embedding space by using the T5 encoder and assess their similarity by measuring the Euclidean distance between them. For a given question, this paper finds k nearest neighbors for it.
The set of nearest neighbors is denoted as 𝐷𝑛 (𝑞). Based on 𝐷𝑛 (𝑞), we design a score to measure self-knowledge of the question’s nearest neighbors for the given question. The score is defined as follows.
S_nn(q) = (1/|D_n(q)|) Σ_{x∈D_n(q)} 1[l_x = correct_w/o_retrieve](x),   (13)

where 1[l_x = correct_w/o_retrieve](x) is an indicator function which equals 1 if the label of x is correct_w/o_retrieve and 0 otherwise, and l_x is the label of the corresponding question.
Fig. 4. The inference process of the Bi-faceted Self-Knowledge Recognizer.
Combining the above two facets, this paper constructs a bi-faceted self-knowledge recognizer K (𝑞) as follows:
K(q) = No_Retrieve, if S_ltod(q) > s_l and S_nn(q) > s_n; Retrieve, otherwise.   (14)
where Retrieve means the given question requires retrieval and No_Retrieve means it does not. s_l and s_n are hyperparameters.
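The recognizer defined by Equations (12)-(14) can be sketched in a few lines. In this illustrative version, `has_answer_scores` are the Has_Answer outputs of the bi-label scorer for the retrieved documents, `neighbor_labels` mark whether each nearest-neighbor training question was answered correctly without retrieval, and the thresholds stand for δ_ltod, s_l and s_n.

```python
# A minimal sketch of the bi-faceted self-knowledge recognizer (Eqs. 12-14).
from typing import List

def s_ltod(has_answer_scores: List[float], delta_ltod: float) -> float:
    # Fraction of retrieved documents whose Has_Answer score exceeds delta_ltod.
    return sum(s > delta_ltod for s in has_answer_scores) / len(has_answer_scores)

def s_nn(neighbor_labels: List[bool]) -> float:
    # Fraction of nearest-neighbor questions labeled correct_w/o_retrieve.
    return sum(neighbor_labels) / len(neighbor_labels)

def recognize(has_answer_scores, neighbor_labels, delta_ltod, s_l, s_n) -> str:
    if s_ltod(has_answer_scores, delta_ltod) > s_l and s_nn(neighbor_labels) > s_n:
        return "No_Retrieve"
    return "Retrieve"
```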
# 4.4 Sub-document-level Token Reducer
In the Retrieve case, we first rerank the candidate documents using a bi-criteria reranker and select the Top-10 documents. Then, we further eliminate the redundant tokens by adopting our proposed sub-document-level token reducer. The details of the reranker and token reducer are introduced as follows.
# 4.4.1 Bi-criteria Reranker
The one hundred documents retrieved by the retriever typically contain many redundant documents, which not only increase the input token cost but may also confuse the LLM and degrade the RAG performance. This paper proposes to eliminate the redundant documents based on two criteria, namely the Has_Answer and LLM_Prefer scores given by the Bi-Label Document Scorer. Specifically, we score the one hundred documents with the uniform sum of the Has_Answer and LLM_Prefer scores, and then rank the documents according to this score. Based on this ranking, we select the Top-10 documents and input them into the sub-document-level token reducer.
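A minimal sketch of this bi-criteria reranking step is given below; `scored_docs` is assumed to be a list of (document, has_answer_score, llm_prefer_score) triples produced by the bi-label scorer.

```python
# A minimal sketch of bi-criteria reranking: sum the two scores and keep Top-10.
from typing import List, Tuple

def bi_criteria_rerank(
    scored_docs: List[Tuple[str, float, float]], top_k: int = 10
) -> List[str]:
    ranked = sorted(scored_docs, key=lambda d: d[1] + d[2], reverse=True)
    return [doc for doc, _, _ in ranked[:top_k]]
```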
# 4.4.2 Sub-document-level Token Reducer
The retrieved documents typically contain redundant content that is irrelevant to the question. Eliminating this redundant content can significantly reduce the number of tokens without degrading the RAG performance. This paper proposes a sub-document-level token reducer, which splits the documents into sub-documents and selects a small number of sub-documents to augment the LLM. It consists of three components: a sub-document generator, an eligible augmentation detector, and a sub-document filter.
Fig. 5. The inference process of the Sub-document-level Token Reducer. Here we take three documents for the question as an example.
Sub-document Generator splits the retrieved documents into sub-documents. Specifically, we apply a sliding window of size 3 (i.e., containing three sentences) with a stride of 1 (i.e., striding one sentence) on each document and generate a set of three-sentence-constructed sub-documents for each document.
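A minimal sketch of this sliding-window sub-document generator follows; sentence splitting is simplified here to splitting on periods, which is an assumption for illustration.

```python
# A minimal sketch of the sub-document generator: window of 3 sentences, stride 1.
from typing import List

def generate_sub_documents(document: str, window: int = 3, stride: int = 1) -> List[str]:
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    subs = []
    for start in range(0, max(len(sentences) - window + 1, 1), stride):
        subs.append(". ".join(sentences[start:start + window]) + ".")
    return subs
```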
Eligible Augmentation Detector is a classifier which can determine whether a combination of sub-documents is eligible to augment the LLM to give correct answers. To learn the classifier, we construct the training data with the following steps:
1. Data Selection. We first select the questions that require retrieval augmentation to be answered correctly from the training data. For each question, we take its Top-10 retrieved documents and split them into sub-documents using the sub-document generator. Then, we randomly combine the sub-documents to form a set of sub-document combinations and filter out combinations with high overlap. Subsequently, we score each combination using the Bi-Label Document Scorer, generating two scores, and only select the sub-document combinations whose scores are located on the skyline.
2. Feature Engineering. For each combination, we concatenate the two scores of all its sub-documents and pad with zeros at the end to maintain the same total input length.
3. Labeling. We concatenate each sub-document combination with the question and input it to Llama2-13B-Chat. If Llama generates the correct answer, the sub-document combination is labeled as 1; otherwise, it is labeled as 0.
# Algorithm 3: Sub-document Filter
Input: Reranked document set D𝑠
Output: Optimal combination of sub-documents D𝑓
Function GenerateSubDocs(D𝑠 ):
D𝑟𝑒𝑝 ← ∅ ;
// Initialize set for representative sub-documents
foreach 𝑑 ∈ D𝑠 do
𝑚𝑎𝑥_𝑠𝑐𝑜𝑟𝑒 ← −∞, 𝑟𝑒𝑝_𝑠𝑢𝑏𝑑𝑜𝑐 ← null;
foreach 𝑑𝑠 generated by sliding window over 𝑑 do
(𝑠𝑐𝑜𝑟𝑒1, 𝑠𝑐𝑜𝑟𝑒2) ← BiLabelScorer(𝑑𝑠 );
if 𝑠𝑐𝑜𝑟𝑒1 + 𝑠𝑐𝑜𝑟𝑒2 > 𝑚𝑎𝑥_𝑠𝑐𝑜𝑟𝑒 then
𝑚𝑎𝑥_𝑠𝑐𝑜𝑟𝑒 ← 𝑠𝑐𝑜𝑟𝑒1 + 𝑠𝑐𝑜𝑟𝑒2, 𝑟𝑒𝑝_𝑠𝑢𝑏𝑑𝑜𝑐 ← 𝑑𝑠 ;
D𝑟𝑒𝑝 ← D𝑟𝑒𝑝 ∪ {𝑟𝑒𝑝_𝑠𝑢𝑏𝑑𝑜𝑐};
return D𝑟𝑒𝑝 ;
Function PreRank(D𝑠𝑢𝑏 ):
Sort D𝑠𝑢𝑏 in descending order based on the sum of their scores;
return Sorted sub-documents;
Function GreedySearch(D𝑠𝑜𝑟𝑡𝑒𝑑 ):
𝐹 ← ∅, 𝑥 ← ∅ ;
// Initialize final set and feature vector
for 𝑑 ∈ D𝑠𝑜𝑟𝑡𝑒𝑑 do
𝑥 ← 𝑥 ∪ {score1 (𝑑), score2 (𝑑)} ;
// Accumulate reranked score pairs
𝑠𝑖 ← BinaryDocDetector(𝑥) ;
// Use MLP model for document selection
if 𝑠𝑖 == 1 then
𝐹 ← 𝐹 ∪ {𝑑} ;
// Add to final set if predicted 1
break;
return 𝐹 ;
D𝑠𝑢𝑏 ← GenerateSubDocs(D𝑠 );
D𝑠𝑜𝑟𝑡𝑒𝑑 ← PreRank(D𝑠𝑢𝑏 );
D𝑓 ← GreedySearch(D𝑠𝑜𝑟𝑡𝑒𝑑 );
We build the classifier with a simple four-layer fully connected neural network and train it with the training data. Then, we obtain our eligible augmentation detector and use it to filter out the unnecessary sub-documents.
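The sketch below illustrates the feature construction and the small fully connected detector described above. The maximum number of sub-documents per combination and the hidden width are assumptions for illustration, not values reported by the paper.

```python
# A minimal sketch of the eligible augmentation detector and its input features.
import torch
import torch.nn as nn

MAX_SUBDOCS = 10  # assumed cap on sub-documents per combination

class EligibleAugmentationDetector(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # Four fully connected layers predicting whether the combination suffices.
        self.net = nn.Sequential(
            nn.Linear(2 * MAX_SUBDOCS, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # probability the combination is eligible

def build_features(score_pairs):
    # Concatenate (Has_Answer, LLM_Prefer) score pairs and pad with zeros.
    flat = [s for pair in score_pairs for s in pair][: 2 * MAX_SUBDOCS]
    flat += [0.0] * (2 * MAX_SUBDOCS - len(flat))
    return torch.tensor(flat).unsqueeze(0)
```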
Sub-document Filter selects a sub-document combination that has few sub-documents but is eligible to augment the LLM to give correct answers. Its workflow is illustrated in Figure 5 and Algorithm 3, which demonstrate a sub-document filtering case involving three sub-documents. From the figure, we can see that the filtering process has three steps: (1) candidate sub-document generation, where the Sub-document Generator splits each document into multiple sub-documents. These sub-documents are then scored by the Bi-Label Document Scorer, producing two scores for each. The sub-document with the highest sum of scores is selected to represent the original document; (2) eligibility pre-ranking, where the sub-documents obtained in the above step are ranked in descending order according to the sum of their two scores; (3) greedy search, where we search for the optimal sub-document combination in a greedy manner w.r.t. the number of sub-documents. The sub-document combinations are classified with the eligible augmentation detector.
- What are the key considerations when engineering a RAG system? (section 6) We present the lessons learned from three case studies involving the implementation of a RAG system. This presents the challenges faced and insights gained.
Contributions arising from this work include:
- A catalogue of failure points (FP) that occur in RAG systems.
- An experience report from 3 case studies of implementing a RAG system, two of which are currently running at Deakin University.
Fig. 6. The prompt templates for scenarios with and without RAG. (a) W/o RAG: "Generate a background passage of the following question and answer it." (b) With RAG: "Refer to the passages below and answer the following question," followed by the retrieved passages and the question.
This process begins with a single sub-document. If the current sub-document combination cannot obtain a positive result from the eligible augmentation detector, the process continues by adding one more sub-document. It stops when the eligible augmentation detector outputs a positive result. The sub-document-level token reducer can effectively reduce the number of tokens. In Section 5.5, the experimental results demonstrate that it can reduce approximately half of the tokens.
# 4.5 Prompt Construction
This paper adds the retrieved documents into prompts to augment LLMs. The design of the prompt template significantly affects the performance of RAG. This paper designs the prompt template for the Retrieve case as Figure 6 (b) shows. In this template, we propose a sophisticated instruction, which consists of three parts. In the first part, we ask the LLM to refer to the passages below and answer the following question, which leads the LLM to answer with the retrieved passages. In the second part, we emphasize that the LLM needs to read and understand the question carefully and make sure it fully understands what the question means, which encourages the LLM to think more deeply about the question. In the last part, we ask the LLM to answer the question and explain why it chooses this answer, since asking the LLM to explain its choice enables it to perform better. Following this instruction, we directly put the retrieved documents as context and then give the question.
As to the No_Retrieve case, we follow the idea of provoking the LLM’s internal knowledge proposed in GenRead [52]. The prompt template is illustrated in Figure 6 (a), where we instruct the LLM to first generate a background passage about the question based on its internal knowledge, and then ask the LLM to answer the question by reasoning over its self-generated context.
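The two templates can be sketched as below. The exact wording used in FIT-RAG is not reproduced here; the strings only illustrate the three-part instruction of the Retrieve case and the generate-then-answer style of the No_Retrieve case.

```python
# A minimal sketch of the Retrieve and No_Retrieve prompt templates.
from typing import List

def build_retrieve_prompt(question: str, sub_documents: List[str]) -> str:
    passages = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(sub_documents))
    return (
        "Refer to the passages below and answer the following question. "
        "Read and understand the question carefully and make sure you fully "
        "understand what it means. Answer the question and explain why you "
        "choose this answer.\n"
        f"Passages:\n{passages}\n"
        f"Question: {question}\nAnswer:"
    )

def build_no_retrieve_prompt(question: str) -> str:
    return (
        "Generate a background passage about the following question and then "
        "answer it based on the generated passage.\n"
        f"Question: {question}\nAnswer:"
    )
```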
# 5 EXPERIMENTS
# 5.1 Experimental Settings
# 5.1.1 Datasets.
Following prior works [15, 36, 54], we choose TriviaQA [16], Natural Questions [22], and PopQA [29] as the datasets for our experiments.
- TriviaQA (TQA) is a dataset specifically designed for enhancing reading comprehension tasks. It comprises over 650,000 question-answer-evidence combinations.
The dataset includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, providing high-quality distant supervision for answering the questions. (https://nlp.cs.washington.edu/triviaqa)
Natural Questions (NQ) is a large dataset for training and evaluating open-domain question-answering systems. It consists of approximately 300,000 question-answer pairs.
The questions are genuine, anonymized queries from Google search users, and the answers are drawn from Wikipedia. The dataset requires systems to read and comprehend entire Wikipedia articles for effective answering.
PopQA is an open-domain question-answering dataset aimed at assessing the factual knowledge memorization of large language models. Comprising approximately 14,000 questions, it focuses on entities sampled from Wikidata. The dataset emphasizes long-tail knowledge, covering a wide range of factual information often underrepresented in popular QA datasets, making it significant for evaluating language models on less popular factual knowledge.
# Baselines
We conduct experiments on the TQA, PopQA, and NQ datasets and compare our method with the following baselines:
|Baselines|Description|
|---|---|
|Llama2-13B-Chat|The chat-optimized version of Llama2 with around 13 billion parameters. Tailored for conversational tasks, Llama2-13B-Chat is adept at understanding and engaging in dialogues.|
|ChatGPT|A variant of the GPT (Generative Pre-trained Transformer) models, which is specifically enhanced for conversational engagement. With its advanced capabilities in processing and generating human-like text, ChatGPT excels in interactive and coherent dialogue creation.|
|GenRead+Llama2-13B-Chat|A method that combines the Llama2-13B-Chat model with the GenRead method. The GenRead method adopts a generate-then-read approach, which first utilizes the LLM to create contextual documents based on the given question, and then employs the LLM to generate the final answer based on these documents.|
|REFEED+Llama2-13B-Chat|A method that combines the Llama2-13B-Chat model with the REFEED pipeline. The pipeline first prompts the LLM to generate an initial answer and use a retrieval model to retrieve relevant information based on the question and initial answer. Then it uses the retrieved information to help the LLM refine its initial answer.|
|AAR+Llama2-13B-Chat|A method that augments the Llama2-13B-Chat model with the Augmentation-Adapted Retriever (AAR). AAR uses a source language model to provide preference signals and fine-tune the retriever to align with its preferences by utilizing the encoder-decoder attention mechanisms.|
# Evaluation Metrics
Following the evaluation metric used in AAR and GenRead, we evaluate the model performance based on whether gold answers are included in the model generations. Furthermore, to evaluate the retrieval performance, we employ Recall@K (R@K) as the evaluation metric, which is the percentage of questions whose top-K retrieved documents contain the answer, following previous works such as DPR, Contriever, and ANCE.
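The sketch below illustrates the two metrics as described above: answer accuracy counts a generation as correct if any gold answer string is contained in it, and Recall@K is the fraction of questions whose Top-K retrieved documents contain a gold answer. The simple lowercase substring matching is an assumption for illustration.

```python
# A minimal sketch of the answer-containment accuracy and Recall@K metrics.
from typing import List

def answer_accuracy(generations: List[str], gold: List[List[str]]) -> float:
    hits = sum(
        any(a.lower() in g.lower() for a in answers)
        for g, answers in zip(generations, gold)
    )
    return hits / len(generations)

def recall_at_k(retrieved: List[List[str]], gold: List[List[str]], k: int) -> float:
    hits = sum(
        any(any(a.lower() in d.lower() for a in answers) for d in docs[:k])
        for docs, answers in zip(retrieved, gold)
    )
    return hits / len(retrieved)
```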
# Implementation
We employ the Wikipedia 2018 dataset as the knowledge corpus, utilize the Llama2-13B-Chat model as our black-box large language model, and build the similarity-based retriever based on Contriever. The T5-Base model is used as the base model to construct the bi-label document scorer; furthermore, the model is fine-tuned with LoRA. Specifically, the rank 𝑟 of the LoRA matrix is 16, 𝑙𝑜𝑟𝑎_𝑎𝑙𝑝ℎ𝑎 is 32, 𝑙𝑜𝑟𝑎_𝑑𝑟𝑜𝑝𝑜𝑢𝑡 is 0.05.
# Table 1. Overall performance (answering accuracy and average input tokens per question) on the TriviaQA, NQ, and PopQA datasets.
|Model|# Parameters|TriviaQA Acc (%)|TriviaQA Input Tokens|NQ Acc (%)|NQ Input Tokens|PopQA Acc (%)|PopQA Input Tokens|
|---|---|---|---|---|---|---|---|
|Llama2-13B-Chat|13B|60.9|47|34.1|35|26.9|37|
|ChatGPT|>13B|69.5|47|34.5|35|35.4|37|
|GenRead + Llama2-13B-Chat|13B|68.0|952|44.3|1272|34.7|1179|
|REFEED + Llama2-13B-Chat|13B|71.7|1793|50.5|1766|42.9|1634|
|AAR + Llama2-13B-Chat|13B|69.8|1689|47.9|1683|46.8|1540|
|FIT-RAG + Llama2-13B-Chat (ours)|13B|75.2|816|54.0|1059|54.4|883|
The training sets of TriviaQA and NQ are used for training the bi-label document scorer, and Llama2-13B-Chat [42] is adopted to label the training data. During training, the batch size is 16, the learning rate is 3e-4, and the number of epochs is 20. In the bi-faceted self-knowledge recognizer, the hyperparameter 𝛿𝑙𝑡𝑜𝑑 is 4.5, 𝑠𝑙 is 0.04, and 𝑠𝑛 is 0.67 for TriviaQA and 0.55 for NQ and PopQA. In the sub-document-level token reducer, we use the four-layer fully connected neural network as the detector and use the training set of TriviaQA to label the training data. Finally, we utilize the Llama2-13B-Chat model as our black-box large language model. We run all experiments on a single A100 GPU (40G).
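The LoRA configuration listed above (r=16, lora_alpha=32, lora_dropout=0.05) could be set up as in the sketch below, assuming the HuggingFace peft library is used; the target modules and the use of the T5 encoder alone are assumptions, not the authors' released training script.

```python
# A minimal sketch of wrapping a T5-Base backbone with LoRA via HuggingFace peft.
from transformers import T5EncoderModel
from peft import LoraConfig, get_peft_model

base = T5EncoderModel.from_pretrained("t5-base")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # assumed attention projections to adapt
)
scorer_backbone = get_peft_model(base, lora_config)
scorer_backbone.print_trainable_parameters()

# Training hyperparameters from the paper (sketch): batch size 16,
# learning rate 3e-4, 20 epochs, on a single 40GB A100.
```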
# 5.2 Overall Performance
In Table 1, we report the performance of our method and the baseline methods on the TriviaQA, NQ, and PopQA datasets. From this table, we can see that our method clearly outperforms all the baseline methods on the three datasets and consumes the fewest input tokens compared with other black-box RAG systems such as REFEED and AAR. Specifically, our method achieves an answering accuracy of 75.2% on the TriviaQA dataset, 54.0% on the NQ dataset and 54.4% on the PopQA dataset.
Compared with the baseline Llama2-13B-Chat, our method improves the accuracy by 14.3% on the TriviaQA dataset, 19.9% on the NQ dataset and 27.5% on the PopQA dataset, which demonstrates the significant benefits of external knowledge augmentation. Notably, compared with ChatGPT, which is widely considered to have many more parameters than Llama2-13B-Chat, our RAG-augmented Llama model surpasses the performance of ChatGPT by 5.7% on the TriviaQA dataset, 19.5% on the NQ dataset and 19% on the PopQA dataset. This highlights that effectively incorporating external knowledge can compensate for model size, allowing even mid-sized LLMs to rival the capabilities of much larger models. Furthermore, we can see that the gap between different datasets varies. On TriviaQA, the performance gap between ChatGPT and Llama2-13B-Chat enhanced with RAG is relatively small. However, on the more challenging QA datasets NQ and PopQA, the gaps are substantial.
This is because these datasets involve more updated and long-tail knowledge, making it challenging to rely solely on LLM’s inherent knowledge. By effectively augmenting with external information, our method provides greater benefits on such knowledge-intensive datasets, enabling substantial capability enhancement beyond what can be achieved through scaling model parameters alone.
Compared with the state-of-the-art black-box RAG methods, AAR and REFEED, the improvement in our approach is also significant. Moreover, from Table 1 we can see that our method consumes substantially fewer tokens than the other black-box RAG methods. On average, our approach reduces the number of input tokens per question by approximately half across the three datasets compared with REFEED and AAR. It demonstrates that our approach not only enhances | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
Fig. 7. The Recall@K (Top-5/10/20/50) of the reranked Top-100 documents on the TriviaQA dataset, comparing Contriever, One Label (Has_Answer), One Label (LLM_Prefer), and Bi-Label.
RAG performance but also improves computational efficiency by preventing the LLM from being overloaded with excessive irrelevant tokens.
# 5.3 Effect of Bi-label Document Scorer
In this section, we experimentally investigate the impact of the bi-label document scorer on RAG performance. We first analyze the effects of the two labels (i.e., the factual information label (Has_Answer) and the LLM preference label (LLM_Prefer)). Then, we conduct an ablation study on the data-imbalance-aware bi-label learning algorithm.
# 5.3.1 Effect of the Two Labels.
To explore the effect of each label, we evaluate both the recall of the retrieved documents and the answering accuracy obtained with the retrieved documents. For the recall performance, we use the R@k metric, which calculates the proportion of documents containing the gold answers within the Top-k retrieved documents. Specifically, we first use the Contriever to retrieve Top-100 documents from the Wikipedia corpus, then score them by Has_Answer score, LLM_Prefer score and uniformly sum of the two scores, respectively. Based on the scores, we rerank the Top-100 documents and select the Top-k from them. For the answering accuracy, we record the accuracy of answers generated by Llama2-13B-Chat when using the Top-10 reranked documents as the external knowledge. For convenience, the token reducer is not involved, and the Top-10 documents are directly integrated into the prompt template shown as Fig 6 (b). Then, we feed the prompt into Llama2-13B-Chat to generate answers.
The results of R@k on the three datasets are recorded in Figures 7 to 9, respectively, and the answering accuracy is recorded in Figure 10. We can see that Has_Answer score-based augmentation has the highest R@k on all three datasets, and it yields higher answering accuracy than LLM_Prefer score-based augmentation. This result validates the critical importance of the factual information that Has_Answer indicates. Using only the LLM_Prefer score yields lower R@k than using only the Has_Answer score. The uniform sum of the two scores, which considers Has_Answer and LLM_Prefer together, does not improve R@k, but it significantly improves answering accuracy. This is because Has_Answer ensures that documents contain factual information, while LLM_Prefer selects documents that are useful to the LLM. Combining the two labels provides documents that both contain factual information and are preferred by the LLM, which improves answering accuracy. We can see from Figure 10 that our proposed bi-label scorer improves the accuracy by 2.5% on TriviaQA, 0.4% on NQ and 1.7% on PopQA compared with the similarity-based retriever, showing the effectiveness of our method.
- A research direction for RAG systems based on the lessons learned from the 3 case studies.
# RELATED WORK
Retrieval augmented generation encompasses using documents to augment large language models through pre-training and at inference time [7, 9, 12]. Due to the compute cost, data preparation time and required resources, using RAG without training or fine-tuning is an attractive proposition. However, challenges arise when using large language models for information extraction, such as performance with long text [8]. A recent survey [19] showed that large language models are used across the RAG pipeline including retriever, data generation, rewriter, and reader. Our work complements this survey by taking a software engineering perspective to shine a light on what issues engineers will face and what software engineering research is necessary to realize solutions with the current state-of-the-art RAG systems. Emerging work has looked at benchmarking RAG systems [3] but not at the failures occurring during implementation. Software engineering research has investigated the use of RAG systems for code-related tasks [15]. However, the application of RAG systems is broader than software engineering tasks. This paper complements existing work by presenting challenges faced during the implementation of a RAG system with a focus on practitioners. Errors and failures that arise from RAG systems overlap with other information retrieval systems including 1) no metrics for query rewriting, 2) document re-ranking, and 3) effective content summarization [19]. Our results confirm this. The unique aspects are related to the semantic and generative nature of the use of large language models, including evaluating factual accuracy [16].
# RETRIEVAL AUGMENTED GENERATION
With the explosion in popularity of large language model services such as ChatGPT (https://chat.openai.com/), Claude (https://claude.ai/), and Bard (https://bard.google.com/), people have explored their use as question answering systems. While the performance is impressive [16] there are two fundamental challenges: 1) hallucinations - where the LLM produces a response that looks right but is incorrect, and 2) unbounded - no way to direct or update the content of the output (other than through prompt engineering). A RAG system is an information retrieval approach designed to overcome the limitations of using a LLM directly. RAG works by converting a natural language query into an embedding which is used to semantically search a set of documents. Retrieved documents are then passed to a large language model to generate an answer. An overview of a RAG system is shown in Figure 1 as two separate processes, Index and Query. See this survey for more details [19].
# Index Process
In a RAG system, the retrieval system works using embeddings that provide a compressed semantic representation of the document. An embedding is expressed as a vector of numbers. During the Index process each document is split into smaller chunks that are converted into an embedding using an embedding model. The original chunk and the embedding are then indexed in a database. Software engineers face design decisions around how best to chunk the document and how large a chunk should be. If chunks are too small, certain questions cannot be answered; if chunks are too long, the answers include generated noise. Different types of documents require different chunking and processing stages. For example, video content requires a transcription pipeline to extract the audio and convert it to text prior to encoding (see subsection 4.2). The choice of which embedding to use also matters, as changing the embedding strategy requires re-indexing all chunks. An embedding should be chosen based on its ability to semantically retrieve correct responses. This process depends on the size of the chunks, the types of questions expected, the structure of the content and the application domain.
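The Index process described above can be sketched as follows. The `embed` callable is a hypothetical embedding model and the "database" is just an in-memory list; the chunk size and overlap are illustrative assumptions.

```python
# A minimal sketch of the Index process: chunk, embed, and store chunk/embedding pairs.
from typing import Callable, List, Tuple

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def index_documents(
    docs: List[str], embed: Callable[[str], List[float]]
) -> List[Tuple[str, List[float]]]:
    index = []
    for doc in docs:
        for chunk in chunk_text(doc):
            index.append((chunk, embed(chunk)))  # store chunk with its embedding
    return index
```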
# Query Process
The Query process takes place at run time. A question expressed in natural language is first converted into a general query. To generalize the query, a large language model is used, which enables additional context such as previous chat history to be included in the new query. An embedding is then calculated from the new query and used to locate relevant documents in the database. The top-k similar documents are retrieved using a similarity measure such as cosine similarity (vector databases have techniques such as inverted indexes to speed up retrieval time). The intuition is that chunks that are semantically close to the query are likely to contain the answer. Retrieved documents are then re-ranked to maximize the likelihood that the chunk with the answer is located near the top. The next stage is the Consolidator, which is responsible for processing the chunks. This stage is needed to overcome two limitations of large language models: 1) the token limit and 2) the rate limit. Services such as OpenAI place hard limits on the amount of text that can be included in a prompt. This restricts the number of chunks that can be included in a prompt to extract an answer, so a reduction strategy is needed to chain prompts to obtain an answer. These online services also restrict the number of tokens that can be used within a time frame, which constrains the latency of a system. Software engineers need to consider these trade-offs when designing a RAG system.
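The following sketch strings these stages together: query generalization, embedding-based top-k retrieval with cosine similarity, re-ranking, and token-budget consolidation. The `rewrite`, `embed`, and `rerank` callables, the character-based token estimate, and the budget value are illustrative assumptions rather than the components used in the case studies.

```python
from typing import Callable, List
import numpy as np

def cosine_similarity(a: List[float], b: List[float]) -> float:
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def query_process(question: str,
                  chat_history: str,
                  index: List[dict],
                  rewrite: Callable[[str, str], str],
                  embed: Callable[[str], List[float]],
                  rerank: Callable[[str, List[dict]], List[dict]],
                  k: int = 20,
                  token_budget: int = 3000) -> List[dict]:
    # 1) Generalize the query using the chat history (handled by an LLM).
    general_query = rewrite(question, chat_history)

    # 2) Embed the query and retrieve the top-k most similar chunks.
    q_emb = embed(general_query)
    scored = sorted(index,
                    key=lambda c: cosine_similarity(q_emb, c["embedding"]),
                    reverse=True)[:k]

    # 3) Re-rank so the chunk containing the answer sits near the top.
    ranked = rerank(general_query, scored)

    # 4) Consolidate: keep chunks until a rough token budget for the
    #    downstream LLM prompt is reached.
    selected, used = [], 0
    for chunk in ranked:
        cost = len(chunk["text"]) // 4  # crude tokens-per-character estimate
        if used + cost > token_budget:
            break
        selected.append(chunk)
        used += cost
    return selected
```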
# FIT-RAG: Black-Box RAG with Factual Information and Token Reduction
Fig. 8. The recall@k (k = 5, 10, 20, 50) of the reranked top-100 documents on the NQ dataset, comparing Contriever (no reranking), one-label (Has_Answer) reranking, one-label (LLM_Prefer) reranking, and bi-label reranking.
Fig. 9. The recall@k (k = 5, 10, 20, 50) of the reranked top-100 documents on the PopQA dataset, comparing Contriever (no reranking), one-label (Has_Answer) reranking, one-label (LLM_Prefer) reranking, and bi-label reranking.
Fig. 10. Comparison between the answering accuracy achieved by Contriever, Has_Answer score-based rerank, LLM_Prefer score-based rerank, and bi-label rerank on TriviaQA, NQ, and PopQA, where Contriever represents the method that does not involve reranking.
# 5.3.2 Ablation Study on the Data-imbalance-aware Bi-label Learning Algorithm.
To investigate the effect of the data-imbalance-aware bi-label learning algorithm, we conduct an ablation study on it. For convenience, the token reducer is not involved, and the Top-10 documents are directly integrated into the prompt template shown in Fig. 6(b).
Then we use Llama2-13B-Chat to generate the answers according to the input prompt. The results are recorded in Figure 11. From this figure, we can see that the answering accuracy drops by 0.6% on TriviaQA, 0.2% on NQ, and 0.1% on PopQA,
respectively, when the data-imbalance-aware bi-label learning algorithm is excluded. The drops on NQ and PopQA are not significant because the labels in these two datasets are highly noisy and the imbalance ratio is undetermined. This result demonstrates that our approach can effectively address data imbalance and improve the performance of black-box RAG.
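The paper's data-imbalance-aware algorithm is not reproduced here; as a loose, simplified stand-in, the sketch below trains a bi-label scorer with a per-label positive-class weight in the binary cross-entropy loss, which is one common way to counteract imbalanced labels. The architecture, the weight values, and the assumption that LLM_Prefer positives are the rare class are purely illustrative.

```python
import torch
import torch.nn as nn

class BiLabelScorer(nn.Module):
    """Scores a (question, document) pair embedding with two labels,
    Has_Answer and LLM_Prefer, as in FIT-RAG's bi-label document scorer."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, pair_embedding: torch.Tensor) -> torch.Tensor:
        return self.head(pair_embedding)  # logits for [Has_Answer, LLM_Prefer]

# Per-label positive weights: the rarer positive label gets a larger weight.
# The value 8.0 is an illustrative assumption, not a value from the paper.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([1.0, 8.0]))

model = BiLabelScorer()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Toy batch: 4 pair embeddings with bi-labels [Has_Answer, LLM_Prefer].
x = torch.randn(4, 768)
y = torch.tensor([[1., 0.], [1., 1.], [0., 0.], [1., 0.]])

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```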
# 5.4 Effect of Bi-faceted Self-Knowledge Recognizer
To investigate the effect of the bi-faceted self-knowledge recognizer, we conduct an ablation study on it. The results are recorded in Figure 12. From this figure, we can see that our proposed bi-faceted self-knowledge recognizer can significantly reduce the number of tokens without decreasing the answering accuracy on the TriviaQA dataset. By contrast, the token reduction effects on the NQ and PopQA datasets are not substantial. This is because Llama2-13B-Chat has less self-knowledge for these two datasets and requires retrieval for most of their questions. Our bi-faceted self-knowledge recognizer reduces input tokens by avoiding unnecessary retrieval; for datasets that require retrieval for most questions, its token reduction effect is therefore limited.
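As a rough illustration of how such a recognizer saves tokens by skipping retrieval, the sketch below treats a question as answerable from self-knowledge when it is close to questions the LLM previously answered correctly without retrieval. This is only a simplified approximation of FIT-RAG's bi-faceted recognizer; the similarity threshold, the `embed`, `llm`, and `retrieve` callables, and the prompt wording are assumptions.

```python
from typing import Callable, List
import numpy as np

def has_self_knowledge(question: str,
                       known_questions: List[str],
                       embed: Callable[[str], List[float]],
                       threshold: float = 0.85) -> bool:
    """Return True when the question is close to one the LLM previously
    answered correctly without retrieval, so retrieval is likely unnecessary."""
    q = np.asarray(embed(question))
    for prev in known_questions:
        p = np.asarray(embed(prev))
        sim = float(q @ p / (np.linalg.norm(q) * np.linalg.norm(p) + 1e-9))
        if sim >= threshold:
            return True
    return False

def answer(question: str, llm, retrieve, embed, known_questions) -> str:
    if has_self_knowledge(question, known_questions, embed):
        # Skip retrieval: no augmentation tokens are added to the prompt.
        return llm(question)
    docs = retrieve(question)
    prompt = ("Refer to the passages below and answer the question.\n"
              + "\n".join(d["text"] for d in docs)
              + "\nQuestion: " + question)
    return llm(prompt)
```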
# 5.5 Effect of Sub-Document-Level Token Reducer
In this section, we investigate the effect of the sub-document-level token reducer. We first conduct an ablation study on it.
The results are illustrated in Figure 12. From this figure, we can see that our proposed token reducer can significantly reduce the input tokens while not decreasing the answering accuracy. Specifically, compared with the original input tokens, our method can reduce 49% of input tokens on the TriviaQA dataset, 37% on the NQ dataset and 49% on the PopQA dataset. The results demonstrate the effectiveness of our proposed token reducer.
Next, we investigate the impact of the number of documents that are input to the token reducer. Specifically, we set the number of documents to 5, 10, 15, and 20, respectively, and observe the changes in RAG performance. For each document, we choose the sub-document with the highest score, obtained by a uniform sum of the two scores generated by the bi-label document scorer, and add it to the candidate sub-document list. Then we use the sub-document filter to choose the final sub-document combinations as the external knowledge for the LLM. We report the changes in RAG performance in Figure 13. From this figure, we can see that as the number of documents increases, the number of input tokens also rises. When 5 documents are input, the model has the lowest answering accuracy.
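The selection just described can be sketched as follows: keep, for each document, the sub-document with the highest uniform sum of the two bi-label scores, then greedily keep candidates under a token budget. The `score` callable, the character-based token estimate, and the budget are placeholders; the paper's sub-document filter is more elaborate than this greedy pass.

```python
from typing import Callable, List, Tuple

def reduce_tokens(question: str,
                  documents: List[List[str]],  # each document = list of sub-documents
                  score: Callable[[str, str], Tuple[float, float]],  # (Has_Answer, LLM_Prefer)
                  token_budget: int = 1200) -> List[str]:
    # 1) Per document, keep the sub-document with the highest uniform sum
    #    of the two bi-label scores.
    candidates = []
    for sub_docs in documents:
        best = max(sub_docs, key=lambda s: sum(score(question, s)))
        candidates.append(best)

    # 2) Filter: greedily keep the best-scoring candidates that fit the budget.
    candidates.sort(key=lambda s: sum(score(question, s)), reverse=True)
    selected, used = [], 0
    for sub in candidates:
        cost = len(sub) // 4  # crude token estimate
        if used + cost > token_budget:
            continue
        selected.append(sub)
        used += cost
    return selected
```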
Fig. 11. Comparison of the answering accuracy with and without the data-imbalance-aware bi-label learning algorithm, on TriviaQA, NQ, and PopQA.
Fig. 12. Answering accuracy and average input tokens on TriviaQA, NQ, and PopQA for FIT-RAG and for FIT-RAG without the bi-faceted self-knowledge recognizer and without the sub-document-level token reducer.
Fig. 13. The answering accuracy and the average input tokens per question when the top-k documents (k = 5, 10, 15, 20) are input to the token reducer, on TriviaQA, NQ, and PopQA. For each dataset, we randomly choose 3000 samples for the experiment.
This demonstrates that too few input documents restrict the amount of knowledge available for augmentation and degrade the answering accuracy. Increasing the number of input documents to 10 yields a corresponding improvement in answering accuracy. However, as the number of input documents reaches 20, there is a decline in the answering accuracy, which may be caused by the inclusion of redundant information. Overall, setting the number of input documents to 10 achieves a proper trade-off between answering accuracy and the effect of token reduction, and we adopt this setting in our sub-document-level token reducer.
# 5.6 Effect of Prompt Construction
In this section, we experimentally compare the performance of different prompt templates. As shown in Table 2, we conduct experiments with three types of prompts: (1) Simple Prompt. This basic prompt simply concatenates a short instruction and the augmentation documents provided by FIT-RAG; (2) CoT Prompt. This prompt is based on the Chain-of-Thought (CoT) prompting method [45], which guides the model to reason step by step. Specifically, we append the CoT cue "Let's think step by step" to the end of the simple prompt; (3) Comprehensive Prompt. This is our proposed sophisticated prompt used in scenarios where retrieval is needed, as introduced in Section 4.5.
# Table 2. Comparison between different prompt templates
|Prompt Name|Prompt Template|Acc|
|---|---|---|
|Simple Prompt|Refer to the passage below and answer the following question. Passages: 1. 16 September 1953, de Valera met British Prime Minister...... 2. Denis Thatcher became the first husband of a British Prime...... Question: Who was the British Prime Minister in 1953? The answer is|72.7|
|CoT Prompt|Refer to the passage below and answer the following question. Passages: 1. 16 September 1953, de Valera met British Prime Minister...... 2. Denis Thatcher became the first husband of a British Prime...... Question: Who was the British Prime Minister in 1953? Let’s think step by step.|73.9|
|Comprehensive Prompt (ours)|Refer to the passage below and answer the following question. Make sure you fully understand the meaning of the question and passages. Then give the answer and explain why you choose this answer. Passages: 1. 16 September 1953, de Valera met British Prime Minister...... 2. Denis Thatcher became the first husband of a British Prime...... Question: Who was the British Prime Minister in 1953?|75.4|
The answering accuracy results for the different prompts are recorded in Table 2. From this table, we can see that our proposed prompt outperforms the simple prompt and the CoT prompt by 2.7% and 1.5%, respectively.
This demonstrates that our proposed prompt helps FIT-RAG achieve strong RAG performance.
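For reference, the three templates compared in Table 2 can be assembled with a small helper like the one below; the wording follows the table, while the helper function itself is only an illustrative sketch.

```python
from typing import List

def build_prompt(question: str, passages: List[str], style: str = "comprehensive") -> str:
    passage_block = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(passages))
    if style == "simple":
        return ("Refer to the passage below and answer the following question.\n"
                f"Passages:\n{passage_block}\nQuestion: {question}\nThe answer is")
    if style == "cot":
        return ("Refer to the passage below and answer the following question.\n"
                f"Passages:\n{passage_block}\nQuestion: {question}\n"
                "Let's think step by step.")
    # Comprehensive prompt: ask the model to fully understand the passages,
    # answer, and explain why it chose that answer (wording from Table 2).
    return ("Refer to the passage below and answer the following question. "
            "Make sure you fully understand the meaning of the question and "
            "passages. Then give the answer and explain why you choose this "
            f"answer.\nPassages:\n{passage_block}\nQuestion: {question}")
```

For example, `build_prompt("Who was the British Prime Minister in 1953?", passages)` yields the comprehensive variant.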
# CONCLUSIONS
In this paper, we propose FIT-RAG, a novel RAG framework for black-box LLMs that achieves both superior effectiveness and token efficiency. FIT-RAG improves the effectiveness of black-box RAG by utilizing both factual information and LLM preference in retrieval; in addition, it boosts the token efficiency of black-box RAG by fully exploiting self-knowledge and conducting sub-document-level token reduction. With its superior effectiveness and token efficiency, FIT-RAG has the potential to be widely applied in vertical domains. However, this paper only considers the input-augmented RAG mode, which includes the retrieved documents in the prompt. In the future, we will extend FIT-RAG to the output-augmented RAG mode, in which the retrieved documents are used to edit the output of LLMs.
# REFERENCES
[1] Luís B Almeida, Thibault Langlois, José D Amaral, and Alexander Plakhov. 1998. Parameter adaptation in stochastic optimization. In On-Line Learning in Neural Networks. Cambridge University Press, 111–134.
[2] Atilim Gunes Baydin, Robert Cornish, David Martínez-Rubio, Mark Schmidt, and Frank Wood. 2018. Online Learning Rate Adaptation with Hypergradient Descent. In ICLR.
[3] Paul N Bennett, Ryen W White, Wei Chu, Susan T Dumais, Peter Bailey, Fedor Borisyuk, and Xiaoyuan Cui. 2012. Modeling the impact of short-and long-term behavior on search personalization. In SIGIR.
[4] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In ICML. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
# Seven Failure Points When Engineering a Retrieval Augmented Generation System
Figure 1: Indexing and Query processes required for creating a Retrieval Augmented Generation (RAG) system. The indexing process is typically done at development time and queries at runtime. Failure points identified in this study are shown in red boxes (e.g. Missing Content, Incorrect Specificity, Response Not Extracted). All required stages are underlined. Figure expanded from [19].
# FIT-RAG: Black-Box RAG with Factual Information and Token Reduction
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In NeurIPS.
[6] Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051 (2017).
[7] Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. 2023. Lift Yourself Up: Retrieval-augmented Text Generation with Self Memory. In ACL.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT.
[9] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In ICML.
[10] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-Efficient Transfer Learning for NLP. In ICML.
[11] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In ICLR.
[12] Yushi Hu, Hang Hua, Zhengyuan Yang, Weijia Shi, Noah A Smith, and Jiebo Luo. 2023. PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3. In ICCV.
[13] Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised Dense Information Retrieval with Contrastive Learning. Transactions on Machine Learning Research (2022).
[14] Gautier Izacard and Edouard Grave. 2021. Distilling Knowledge from Reader to Retriever for Question Answering. In ICLR.
[15] Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot Learning with Retrieval Augmented Language Models. Journal of Machine Learning Research 24 (2022), 1–43.
[16] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In ACL.
[17] Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221 (2022).
[18] Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2023. Large language models struggle to learn long-tail knowledge. In ICML.
[19] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 (2020).
[20] Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In EMNLP.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[21] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In ICLR.
[22] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.
[23] Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In EMNLP.
[24] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[25] Chaofan Li, Zheng Liu, Shitao Xiao, Yingxia Shao, Defu Lian, and Zhao Cao. 2023. LibVQ: A Toolkit for Optimizing Vector Quantization and Efficient Neural Retrieval. In SIGIR.
[26] Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In ACL.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[27] Jiongnan Liu, Zhicheng Dou, Xiaojie Wang, Shuqi Lu, and Ji-Rong Wen. 2020. DVGAN: A minimax game for search result diversification combining explicit and implicit features. In SIGIR.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[28] Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In ACL.
[29] Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories. In ACL.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
[30] Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. 2023. Capabilities of gpt-4 on medical challenge problems. arXiv preprint arXiv:2303.13375 (2023).
[31] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. In NeurIPS.
[32] Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, et al. 2021. KILT: a benchmark for knowledge intensive language tasks. In ACL.
[33] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research (2020).
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910 (2020).
Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at TREC-3. NIST Special Publication SP (1995).
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652 (2023).
Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation 28 (1972), 11–21.
Zhan Su, Zhicheng Dou, Yutao Zhu, Xubo Qin, and Ji-Rong Wen. 2021. Modeling intent graph for search result diversification. In SIGIR.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, and Denny Zhou. 2023. Recitation-augmented language models. In ICLR.
Jaime Teevan, Susan T Dumais, and Eric Horvitz. 2005. Personalizing search via automated analysis of interests and activities. In SIGIR.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a Large-scale Dataset for Fact Extraction and VERification. In NAACL.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023).
Yile Wang, Peng Li, Maosong Sun, and Yang Liu. 2023. Self-Knowledge Guided Retrieval Augmentation for Large Language Models. In Findings of EMNLP.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of Thought Prompting Elicits Reasoning in Large Language Models. In NeurIPS.
BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100 (2022).
Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, and Xing Xie. 2022. Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings. In SIGIR.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Yingxia Shao, Defu Lian, Chaozhuo Li, Hao Sun, Denvy Deng, Liangjie Zhang, Qi Zhang, and Xing Xie. 2022. Progressively Optimized Bi-Granular Document Representation for Scalable Embedding Based Retrieval. In WWW.
| Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
The final stage of a RAG pipeline is when the answer is extracted from the generated text. Readers are responsible for filtering the noise from the prompt, adhering to formatting instructions (i.e. answering the question as a list of options), and producing the output to return for the query. Implementation of a RAG system requires customising multiple prompts to process questions and answers. This process ensures that answers relevant to the domain are returned. The use of large language models to answer real-time questions from documents opens up new application domains where question answering is a new capability. Thus, RAG systems are difficult to test, as no test data typically exists; it needs to be discovered experimentally through either a) synthetic data generation, or b) piloting the system with minimal testing.
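One lightweight way to realize option a) above is to ask an LLM to generate question-answer pairs directly from indexed chunks and use them as an initial test set. The prompt wording and the `llm` callable below are assumptions, not the procedure followed in the case studies.

```python
import json
from typing import Callable, List

def synthesize_qa_pairs(chunks: List[str],
                        llm: Callable[[str], str],
                        per_chunk: int = 2) -> List[dict]:
    """Generate synthetic QA pairs from document chunks for RAG evaluation."""
    pairs = []
    for chunk in chunks:
        prompt = (
            f"Write {per_chunk} question-answer pairs that can be answered "
            "using only the passage below. Respond as a JSON list of objects "
            "with 'question' and 'answer' fields.\n\nPassage:\n" + chunk
        )
        try:
            pairs.extend(json.loads(llm(prompt)))
        except json.JSONDecodeError:
            continue  # skip chunks where the model did not return valid JSON
    return pairs
```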
# 4 CASE STUDIES
This study conducted three case studies to discover the challenges that arise when implementing RAG systems. A summary of each of the case studies is shown in Table 1. All scripts, data, and examples of each of the failure points for the BioASQ case study are available online 5. The other two case studies have been excluded due to confidentiality concerns.
# 4.1 Cognitive Reviewer
Cognitive Reviewer is a RAG system designed to support researchers in analyzing scientific documents. Researchers specify a research question or objective and then upload a collection of related research papers. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In ICLR.
Xiao-Wen Yang, Hong-Jie You, Peng-Xiao Song, Hao-Ran Hao, Jie-Jing Shao, and Yu-Feng Li. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
2023. Lightweight Retrieval Tuning for Black-Box Language Models. In NeurIPS.
Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Retrieval-augmented multimodal language modeling. In ICML.
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. Generate rather than Retrieve: Large Language Models are Strong Context Generators. In ICLR.
Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, and Ashish Sabharwal. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
2023. Improving Language Models via Plug-and-Play Retrieval Feedback. arXiv preprint arXiv:2305.14002 (2023).
Zichun Yu, Chenyan Xiong, Shi Yu, and Zhiyuan Liu. 2023. Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In. In ACL.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068 (2022).
Yujia Zhou, Zhicheng Dou, Yutao Zhu, and Ji-Rong Wen. 2021. PSSL: self-supervised learning for personalized search with contrastive sampling. In CIKM.
# Retrieval-Augmented Generation for Large Language Models: A Survey
Yunfan Gaoa, Yun Xiongb, Xinyu Gaob, Kangxiang Jiab, Jinliu Panb, Yuxi Bic, Yi Daia, Jiawei Suna, Meng Wangc, and Haofen Wanga,c
aShanghai Research Institute for Intelligent Autonomous Systems, Tongji University
bShanghai Key Laboratory of Data Science, School of Computer Science, Fudan University; cCollege of Design and Innovation, Tongji University
Abstract—Large Language Models (LLMs) showcase impressive capabilities but encounter challenges like hallucination, outdated knowledge, and non-transparent, untraceable reasoning processes. Retrieval-Augmented Generation (RAG) has emerged as a promising solution by incorporating knowledge from external databases. This enhances the accuracy and credibility of the generation, particularly for knowledge-intensive tasks, and allows for continuous knowledge updates and integration of domain-specific information. RAG synergistically merges LLMs’ intrinsic knowledge with the vast, dynamic repositories of external databases. This comprehensive review paper offers a detailed examination of the progression of RAG paradigms, encompassing the Naive RAG, the Advanced RAG, and the Modular RAG. It meticulously scrutinizes the tripartite foundation of RAG frameworks, which includes the retrieval, the generation and the augmentation techniques. The paper highlights the state-of-the-art technologies embedded in each of these critical components, providing a profound understanding of the advancements in RAG systems. Furthermore, this paper introduces an up-to-date evaluation framework and benchmark.
At the end, this article delineates the challenges currently faced and points out prospective avenues for research and development.
Index Terms—Large language model, retrieval-augmented generation, natural language processing, information retrieval
# I. INTRODUCTION
Large language models (LLMs) have achieved remarkable success, though they still face significant limitations, especially in domain-specific or knowledge-intensive tasks, notably producing “hallucinations” when handling queries beyond their training data or requiring current information. To overcome these challenges, Retrieval-Augmented Generation (RAG) enhances LLMs by retrieving relevant document chunks from an external knowledge base through semantic similarity calculation. By referencing external knowledge, RAG effectively reduces the problem of generating factually incorrect content. Its integration into LLMs has resulted in widespread adoption, establishing RAG as a key technology in advancing chatbots and enhancing the suitability of LLMs for real-world applications.
RAG technology has rapidly developed in recent years, and the technology tree summarizing related research is shown in Figure 1. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
The development trajectory of RAG in the era of large models exhibits several distinct stage characteristics. Initially, RAG’s inception coincided with the rise of the Transformer architecture, focusing on enhancing language models by incorporating additional knowledge through Pre-Training Models (PTM). This early stage was characterized by foundational work aimed at refining pre-training techniques. The subsequent arrival of ChatGPT marked a pivotal moment, with LLM demonstrating powerful in context learning (ICL) capabilities. RAG research shifted towards providing better information for LLMs to answer more complex and knowledge-intensive tasks during the inference stage, leading to rapid development in RAG studies. As research progressed, the enhancement of RAG was no longer limited to the inference stage but began to incorporate more with LLM fine-tuning techniques.
The burgeoning field of RAG has experienced swift growth, yet it has not been accompanied by a systematic synthesis that could clarify its broader trajectory. This survey endeavors to fill this gap by mapping out the RAG process and charting its evolution and anticipated future paths, with a focus on the integration of RAG within LLMs. This paper considers both technical paradigms and research methods, summarizing three main research paradigms from over 100 RAG studies, and analyzing key technologies in the core stages of “Retrieval,” “Generation,” and “Augmentation.” On the other hand, current research tends to focus more on methods, lacking analysis and summarization of how to evaluate RAG. This paper comprehensively reviews the downstream tasks, datasets, benchmarks, and evaluation methods applicable to RAG. Overall, this paper sets out to meticulously compile and categorize the foundational technical concepts, historical progression, and the spectrum of RAG methodologies and applications that have emerged post-LLMs. It is designed to equip readers and professionals with a detailed and structured understanding of both large models and RAG. It aims to illuminate the evolution of retrieval augmentation techniques, assess the strengths and weaknesses of various approaches in their respective contexts, and speculate on upcoming trends and innovations.
Our contributions are as follows:
- In this survey, we present a thorough and systematic review of the state-of-the-art RAG methods, delineating its evolution through paradigms including Naive RAG, Advanced RAG, and Modular RAG.
Corresponding Author. Email: haofen.wang@tongji.edu.cn
Resources are available at https://github.com/Tongji-KGLLM/RAG-Survey | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
# Technology tree of RAG research
The stages at which RAG is applied mainly include pre-training, fine-tuning, and inference. With the emergence of LLMs, research on RAG initially focused on leveraging the powerful in-context learning abilities of LLMs, primarily concentrating on the inference stage. Subsequent research has delved deeper, gradually integrating more with the fine-tuning of LLMs. Researchers have also been exploring ways to enhance language models in the pre-training stage through retrieval-augmented techniques.
# Overview of RAG
A typical application of RAG is illustrated in Figure 2. Here, a user poses a question to ChatGPT about a recent, widely discussed news event.
Given ChatGPT’s reliance on pre-training data, it initially lacks the capacity to provide updates on recent developments. RAG bridges this information gap by sourcing and incorporating knowledge from external databases. In this case, it gathers relevant news articles related to the user’s query. These articles, combined with the original question, form a comprehensive prompt that empowers LLMs to generate a well-informed answer.
The RAG research paradigm is continuously evolving, and we categorize it into three stages: Naive RAG, Advanced RAG, and Modular RAG, as shown in Figure 3. Despite RAG methods being cost-effective and surpassing the performance of the native LLM, they also exhibit several limitations. The development of Advanced RAG and Modular RAG is a response to these specific shortcomings in Naive RAG.
# Naive RAG
The Naive RAG research paradigm represents the earliest methodology, which gained prominence shortly after the widespread adoption of ChatGPT.
The example in Figure 2 illustrates both modes. The user asks ChatGPT how to evaluate the fact that OpenAI's CEO, Sam Altman, went through a sudden dismissal by the board in just three days and was then rehired by the company, resembling a real-life version of "Game of Thrones" in terms of power dynamics. Without RAG, the model replies that it is unable to comment and currently has no information regarding the dismissal and rehiring of OpenAI's CEO. With RAG, retrieved chunks (e.g. "The Drama Concludes? Sam Altman Returns", "OpenAI as CEO, Silicon Valley Drama Resembles the 'Zhen Huan' Comedy", and "The Personnel Turmoil at OpenAI Comes to an End; Who Won and Who Lost?") are combined with the question into a prompt, and the model answers that the episode suggests significant internal disagreements within OpenAI regarding the company's future direction and strategic decisions, reflecting power struggles and corporate governance issues within OpenAI.
Fig. 2. A representative instance of the RAG process applied to question answering.
It mainly consists of 3 steps. 1) Indexing. Documents are split into chunks, encoded into vectors, and stored in a vector database. 2) Retrieval. Retrieve the Top k chunks most relevant to the question based on semantic similarity. 3) Generation. Input the original question and the retrieved chunks together into LLM to generate the final answer.
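The three steps listed above translate directly into code. The sketch below is a minimal end-to-end Naive RAG loop with an in-memory vector store; the `embed` and `llm` callables and the prompt wording are placeholder assumptions rather than any specific system described here.

```python
from typing import Callable, List
import numpy as np

def naive_rag(question: str,
              chunks: List[str],
              embed: Callable[[str], List[float]],
              llm: Callable[[str], str],
              top_k: int = 3) -> str:
    # 1) Indexing: encode every chunk into a vector (an in-memory "vector DB").
    vectors = [np.asarray(embed(c)) for c in chunks]

    # 2) Retrieval: rank chunks by cosine similarity to the question.
    q = np.asarray(embed(question))
    sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
            for v in vectors]
    top = sorted(range(len(chunks)), key=lambda i: sims[i], reverse=True)[:top_k]

    # 3) Generation: feed the question plus the retrieved chunks to the LLM.
    context = "\n".join(f"- Chunk {i + 1}: {chunks[i]}" for i in top)
    prompt = (f"{question}\nPlease answer the above based on the following "
              f"information:\n{context}")
    return llm(prompt)
```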
# Advanced RAG
Advanced RAG introduces specific improvements to overcome the limitations of Naive RAG. Focusing on enhancing retrieval quality, it employs pre-retrieval and post-retrieval strategies. To tackle the indexing issues, Advanced RAG refines its indexing techniques through the use of a sliding window approach, fine-grained segmentation, and the incorporation of metadata. Additionally, it incorporates several optimization methods to streamline the retrieval process. | Provided the following context, I want to generate QA embedding pairs for each of my classes that are 3B and 7B LLM based on increasing complexity. For the 3B model, generate simple question-answer pairs, and for the 7B model, generate slightly more complex pairs. Please format the output in JSON as: {3B: {q1:, a1:}, {q2:, a2:}}, {7B: {q1:, a1:}, {q2:, a2:}} |
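A sliding-window index that attaches metadata to each chunk, one of the pre-retrieval refinements mentioned above, might look like the sketch below; the window and stride sizes and the metadata fields are illustrative assumptions.

```python
from typing import List

def sliding_window_chunks(text: str, doc_id: str, title: str,
                          window: int = 256, stride: int = 128) -> List[dict]:
    """Overlapping word-level windows, each carrying metadata that can be
    used for filtering or re-ranking at query time."""
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append({
            "doc_id": doc_id,
            "title": title,            # metadata incorporated into the index
            "start_word": start,
            "text": " ".join(words[start:start + window]),
        })
        if start + window >= len(words):
            break
        start += stride
    return chunks
```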
All of the documents are then ranked in accordance with the stated objective for the researcher to manually review. The researcher can also ask questions directly against all of the documents. Cognitive Reviewer is currently used by PhD students from Deakin University to support their literature reviews. The Cognitive Reviewer does the Index process at runtime and relies on a robust data processing pipeline to handle uploaded documents, i.e., no quality control is possible at development time. This system also uses a ranking algorithm to sort the uploaded documents.
# 4.2 AI Tutor
The AI Tutor is a RAG system where students ask questions about the unit and answers are sourced from the learning content. Students are able to verify the answers by accessing a list of the sources from which each answer was drawn. The AI Tutor works by integrating into Deakin’s learning management system, indexing all of the content including PDF documents, videos, and text documents. As part of the Index process, videos are transcribed using the deep learning model Whisper [17] before being chunked. The AI Tutor was developed between August 2023 and November 2023 for a pilot in a unit with 200 students that commenced on the 30th of October 2023. Our intention is to present the lessons learned during implementation and to report follow-up findings at the conclusion of the pilot. This RAG pipeline includes a rewriter to generalize queries.
We implemented a chat interface where previous dialogue between the user and the AI Tutor was used as part of the context for each question. The rewriter considers this context and rewrites the query to resolve ambiguous requests such as ‘Explain this concept further.’
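A minimal sketch of such a history-aware rewriter is shown below; the prompt wording and the `llm` callable are assumptions rather than the AI Tutor's actual implementation.

```python
from typing import Callable, List, Tuple

def rewrite_query(question: str,
                  history: List[Tuple[str, str]],  # (student, tutor) turns
                  llm: Callable[[str], str]) -> str:
    """Resolve ambiguous follow-ups such as 'Explain this concept further'
    into a standalone query, using the previous dialogue as context."""
    dialogue = "\n".join(f"Student: {u}\nTutor: {a}" for u, a in history)
    prompt = (
        "Rewrite the student's latest question as a standalone search query, "
        "resolving any references to the earlier conversation.\n\n"
        f"Conversation so far:\n{dialogue}\n\n"
        f"Latest question: {question}\n\nStandalone query:"
    )
    return llm(prompt).strip()
```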
# 4.3 Biomedical Question and Answer
The previous case studies focused on documents with smaller content sizes. To explore the issues at a larger scale, we created a RAG system using the BioASQ [10] dataset, which comprises questions, links to documents, and answers. The answers to questions were one of yes/no, text summarization, factoid, or list. This dataset was prepared by biomedical experts and contains domain-specific question and answer pairs. We downloaded 4017 open access documents from the BioASQ dataset and had a total of 1000 questions. All documents were indexed and the questions asked against the RAG system. The generated questions were then evaluated using the