Xueying Du, Geng Zheng, Kaixin Wang, Jiayi Feng, Wentai Deng, Mingwei Liu, Bihuan Chen, Xin Peng, Tao Ma, and Yiling Lou

# RESULTS

# RQ1: Compared to SOTA techniques

In RQ1, we evaluate Vul-RAG with the same setting as our preliminary study (Section 3), including the same benchmark (i.e., PairVul), the same metrics, and the same baselines (i.e., LLMAO, LineVul, DeepDFA, and Cppcheck). Due to space limits, we do not repeat the results of the baselines (previously presented in Table 3) and present the results of Vul-RAG in Table 4. Based on the two tables, we have the following findings.

First, Vul-RAG achieves the highest accuracy (i.e., 0.61) and pairwise accuracy (0.21) among all baselines, substantially outperforming the best baseline LLMAO with 12.96% and 110% relative improvements, respectively. The significant improvement in pairwise accuracy shows the advantage of Vul-RAG in distinguishing between vulnerable code and similar-but-correct code. Additionally, Vul-RAG achieves the best trade-off between recall and precision, with both metrics at 0.61. Although these scores are not the highest individually, baselines with a higher score in one metric often fall short in the other. For example, LineVul, with the highest recall of 0.87, tends to predict most code as vulnerable, leading to a low precision of 0.50 (the same as uniform guessing). In particular, we consider the F1 metric to offer less practical insight on our benchmark, as a uniform guess achieves the highest F1 (with 1.0 recall and 0.5 precision); yet suggesting all code as vulnerable provides limited practical benefit to developers.
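To make the pairwise accuracy metric concrete, the following is a minimal sketch under our own assumptions (the names and the exact pair-scoring rule are ours, not from the Vul-RAG artifact): a pair is counted as correct only when the vulnerable version is flagged and its patched counterpart is not.

```python
from typing import Iterable, Tuple

def pairwise_accuracy(pairs: Iterable[Tuple[int, int]]) -> float:
    """Fraction of (vulnerable, patched) pairs where both predictions are right.

    Each element of `pairs` holds the model's predictions for one CVE pair:
    (prediction_on_vulnerable_code, prediction_on_patched_code), with
    1 = "vulnerable" and 0 = "not vulnerable".
    """
    pairs = list(pairs)
    correct = sum(1 for pred_vuln, pred_patch in pairs
                  if pred_vuln == 1 and pred_patch == 0)
    return correct / len(pairs) if pairs else 0.0

# A classifier that labels everything as vulnerable gets recall 1.0 but
# pairwise accuracy 0.0, which is why both metrics are reported.
print(pairwise_accuracy([(1, 0), (1, 1), (0, 0)]))  # 1/3 ≈ 0.33
```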
Nevertheless, the overall limited effectiveness of all techniques indicates that capturing such subtle semantic differences remains very challenging, which calls for more attention in future work.

# RQ2: Compared to GPT-4-based techniques

RQ2 evaluates the usefulness of the knowledge-level RAG framework by comparing Vul-RAG with two GPT-4-based baselines, i.e., basic GPT-4 and GPT-4 enhanced with code-level RAG.

# Baselines

|CWE|Technique|FN|FP|Acc.|Pair Acc.|Precis.|Recall|F1|
|---|---|---|---|---|---|---|---|---|
|CWE-416|Basic GPT-4 Code-based|42.5%|38.2%|11.6%|6.4%|0.52|0.50|0.05|
| |Vul-RAG|17.8%|21.2%|0.61|0.22|0.60|0.64|0.62|
|CWE-476|Basic GPT-4 Code-based|43.3%|37.1%|12.9%|7.9%|0.47|0.50|0.04|
| |Vul-RAG|23.0%|15.2%|0.62|0.22|0.64|0.54|0.59|
|CWE-362|Basic GPT-4 Code-based|40.1%|9.9%|0.50|0.01|0.50|0.20|0.28|
| |Vul-RAG|19.6%|21.7%|0.59|0.20|0.58|0.61|0.60|
|CWE-119|Basic GPT-4 Code-based|37.7%|12.3%|0.50|0.02|0.50|0.25|0.33|
| |Vul-RAG|17.9%|19.8%|0.62|0.23|0.62|0.64|0.63|
|CWE-787|Basic GPT-4 Code-based|37.1%|11.3%|0.52|0.03|0.54|0.26|0.35|
| |Vul-RAG|22.1%|17.2%|0.61|0.18|0.62|0.56|0.59|
|Overall|Code-based|38.3%|11.5%|0.50|0.01|0.51|0.23|0.32|
| |Vul-RAG|19.4%|19.8%|0.61|0.21|0.61|0.61|0.61|

# RQ3: Usefulness for Developers

In RQ3, we conduct a user study to investigate whether the vulnerability knowledge generated by Vul-RAG can help developers identify vulnerable code more precisely.
ChatGPT’s scores on the Korean National Licensing Examination for Korean Medicine Doctors barely reached the passing threshold, underperforming in subjects unique to KM, especially Sasang constitutional medicine and public health & medicine-related law.(21) In this niche area, rich in specialized knowledge and distinct from Conventional Medicine (CM), we first demonstrated the functional suboptimality of LLM-based vector embeddings.
# Vul-RAG: Enhancing LLM-based Vulnerability Detection via Knowledge-level RAG

[Figure: an example of vulnerability detection with and without knowledge-level RAG. (A) Basic GPT-4, asked whether a device-removal function (`da9150_charger_remove()`) is vulnerable, answers YES but points to an unchecked return value of `platform_get_irq_byname()` and fails to identify the root cause. (B) and (C) contrast a detection attempt that fails to identify the relevant associations with one that, given the code snippet together with retrieved vulnerability knowledge (cause: pending work is not cancelled during device removal, which can lead to a use-after-free; fixing solution: cancel any pending work before proceeding with further cleanup), successfully identifies the root cause.]

Figure 4: An example of vulnerability knowledge representation. The extracted functional semantics describe the code purpose (handling a logical link complete event in the Bluetooth stack) and the code behavior: 1. Log a logical link complete event. 2. Look up an HCI connection based on the physical handle. 3...6. Confirm the logical link for a BR/EDR channel. 7.
Hold the HCI connection.

[Figure: for a code snippet under retrieval (`hci_log_link_complete_evt()` in the Bluetooth stack), code-based retrieval returns irrelevant code with a different vulnerability (CVE-2021-33034), whereas functional-semantics-based retrieval returns relevant code with similar vulnerabilities (CVE-2023-1989, CVE-2023-1855).]

Figure 5: An example of knowledge retrieval strategy

6.3.1 User study methodology. Tasks.
We select 10 cases from the benchmark PairVul for a user study. Specifically, we randomly select two cases from each of the five CWE categories in PairVul, including both true positive (i.e., genuinely vulnerable code snippets) and false positive (i.e., correct code snippets mistakenly predicted by Vul-RAG as vulnerable) instances. To ensure a balanced evaluation, we randomly assign the two cases from each CWE category into two equal groups (𝑇𝐴 and 𝑇𝐵), with each group comprising 5 cases.

Participants. We invite 6 participants with 3-5 years of C/C++ programming experience for the user study. We conduct a pre-experiment survey on their C/C++ programming expertise, based on which they are divided into two participant groups (𝐺𝐴 and 𝐺𝐵) with similar expertise distributions.

Procedure. Each participant is tasked with identifying whether the given code snippet is vulnerable. For comparison, participants identify vulnerabilities in two settings: (i) basic setting: provided with the given code snippets and the detection labels generated by Vul-RAG, or (ii) knowledge-accompanied setting: provided with the given code snippets, the detection labels generated by Vul-RAG, and the relevant vulnerability knowledge generated by Vul-RAG.
# Generalizability: The vulnerability knowledge maintains a degree of general applicability, eschewing overly specific descriptions (e.g., narratives overly reliant on variable names from the source code) that would diminish its broad utility.

# Results: Compared to the basic setting, participants provided with the vulnerability knowledge generated by Vul-RAG identify vulnerable and non-vulnerable code more precisely (i.e., 77% detection accuracy with knowledge vs. 60% without). This indicates that the vulnerability knowledge generated by Vul-RAG indeed helps developers better understand the semantics and vulnerabilities of the given code. In addition, based on the survey feedback, participants rate the helpfulness, preciseness, and generalizability with average scores of 3.00, 3.20, and 2.97, respectively. The results further indicate the high quality and usefulness of the vulnerability knowledge generated by Vul-RAG.

Table 5: Reasons for false negatives (FN) and false positives (FP) reported by Vul-RAG.

|Type|Reason|Number|
|---|---|---|
|FN|Inaccurate vulnerability knowledge descriptions.|5|
| |Unretrieved relevant vulnerability knowledge.|2|
| |Non-existent relevant vulnerability knowledge.|12|
|FP|Mismatched fixing solutions.|11|
| |Irrelevant vulnerability knowledge retrieval.|10|

# RQ4: Bad Case Analysis

To understand the limitations of Vul-RAG, we manually analyze the bad cases (i.e., false negatives and false positives reported by Vul-RAG). In particular, we include all 19 FN and 21 FP cases from CWE-119 for manual analysis. Table 5 summarizes the reasons and their distributions.
In particular, the reasons for false negatives fall into three primary categories:
- Inaccurate Vulnerability Knowledge Descriptions. For 5 instances (26.3%), Vul-RAG successfully retrieves relevant vulnerability knowledge but fails to detect the vulnerability due to imprecise knowledge descriptions. For example, given the vulnerable code snippet of CVE-2021-4204, although Vul-RAG successfully retrieves the relevant knowledge of the same CVE, it yields a false negative due to the vague description of the vulnerability knowledge (i.e., the vulnerability cause and fixing solution only briefly mention "lacks proper bounds checking" without explicitly stating what kind of bounds checking should be performed).
- Unretrieved Relevant Vulnerability Knowledge. For 2 cases (10.5%), Vul-RAG fails to retrieve relevant vulnerability knowledge, leading to false negatives. Although there are instances in the knowledge base that share similar vulnerability root causes and fixing solutions with the given code, their functional semantics are significantly different; therefore, Vul-RAG fails to retrieve them from the knowledge base.
- Non-existent Relevant Vulnerability Knowledge. Based on our manual checking, the 12 cases (63.2%) in this category are caused by the absence of relevant vulnerability knowledge in our knowledge base. Even though there are other vulnerable and patched code pairs of the same CVE, their vulnerability behaviors and fixing solutions are dissimilar, rendering these cases unsolvable with the current knowledge base. This limitation is inherent to the RAG-based framework. In future work, we will further extend the knowledge base by extracting more CVE information to mitigate this issue.

In addition, the reasons for false positives fall into two categories:
- Mismatched Fixing Solutions. In 11 cases (52.4%), although Vul-RAG successfully retrieves relevant vulnerability knowledge, the code snippet is still classified as vulnerable because it is considered not to have applied the fixing solution of the retrieved knowledge. This happens because one vulnerability can be fixed by more than one alternative solution.
- Irrelevant Vulnerability Knowledge Retrieval. 10 false positives (47.6%) are caused by Vul-RAG retrieving irrelevant vulnerability knowledge. Based on our manual inspection, these incorrectly-retrieved knowledge descriptions often contain generic statements such as "missing proper validation of specific values", which are too general for GPT-4 to precisely identify the vulnerability.
# THREATS TO VALIDITY

Threats in benchmarks. There might be a potential data leakage issue between the vulnerability benchmark and the GPT-4 training data. Nevertheless, the substantial improvements of Vul-RAG over basic GPT-4 show that the effectiveness of Vul-RAG is not simply due to data memorization.

Threats in generalization. Our benchmark focuses on Linux kernel CVEs due to their prevalence and rich vulnerability information [41], which might limit the generalization of our results. However, our approach is not limited to Linux kernel CVEs and can be extended to CVEs of other systems in the future. In addition, another generalizability issue of Vul-RAG arises when the constructed knowledge base does not contain relevant knowledge for the code under detection, which raises concerns about whether the extracted vulnerability knowledge can generalize to detect code snippets from different CVEs. To mitigate this threat, we manually compile a small-scale benchmark comprising 60 code functions (30 positive and 30 negative samples) across 30 unique CVEs. For each case in this benchmark, we manually verify the presence of relevant vulnerability knowledge extracted from other CVEs in the knowledge base. The performance of Vul-RAG on this benchmark (i.e., a recall of 0.83 and a precision of 0.76) demonstrates the generalizability of the extracted vulnerability knowledge across different CVEs.

# RELATED WORK

DL-based Vulnerability Detection. Most DL-based work leverages graph neural network (GNN) models and pre-trained language models (PLMs) for vulnerability detection. Devign [1] employs a GNN to efficiently extract useful features in a joint graph, and REVEAL [2] conceptualizes function-level code as a Code Property Graph (CPG) and uses a GGNN for CPG embedding. VulChecker [4] uses program slicing and a message-passing GNN to precisely locate vulnerabilities in code and classify their type (CWE).
DeepDFA [3] uses a data flow analysis-guided graph learning framework to simulate data flow computation. For PLM-based vulnerability detection, VulBERTa [5] uses the RoBERTa model [22] as the encoder, while LineVul [6] uses attention scores for line-level prediction.
LLM-based Vulnerability Detection. Wu et al. [42] and Zhou et al. [43] explore the effectiveness and limits of ChatGPT in software security applications; Gao et al. [44] build a comprehensive vulnerability benchmark, VulBench, to evaluate the effectiveness of 16 LLMs in vulnerability detection. Zhang et al. [7] investigate various prompts to improve ChatGPT in vulnerability detection. Yang et al. [8] and Shestov et al. [9] fine-tune LLMs for vulnerability detection. Additionally, Li et al. [10] and Sun et al. [11] combine LLMs with static analysis for vulnerability detection. Wang et al. [45] boost static analysis with LLM-based intention inference to detect resource leaks. To the best of our knowledge, we propose the first vulnerability detection technique based on a knowledge-level RAG framework. In addition, we make the first attempt to evaluate existing techniques on distinguishing vulnerable code from similar-but-benign code.

# CONCLUSION

In this work, we propose a novel LLM-based vulnerability detection technique, Vul-RAG, which leverages a knowledge-level retrieval-augmented generation (RAG) framework to detect vulnerabilities in a given code snippet. Overall, compared to four representative baselines, Vul-RAG shows substantial improvements (i.e., 12.96% improvement in accuracy and 110% improvement in pairwise accuracy). Our user study shows that the vulnerability knowledge improves manual detection accuracy from 0.60 to 0.77, and the user feedback also confirms the high quality of the generated knowledge regarding helpfulness, preciseness, and generalizability.

# REFERENCES

[1] Y. Zhou, S. Liu, J. K. Siow, X. Du, and Y. Liu, "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks," in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox, and R. Garnett, Eds., 2019, pp. 10197–10207. [Online]. Available: Link
[2] S. Chakraborty, R. Krishna, Y. Ding, and B. Ray, "Deep learning based vulnerability detection: Are we there yet?" IEEE Trans. Software Eng., vol. 48, no. 9, pp. 3280–3296, 2022.
[Online]. Available: Link
[3] B. Steenhoek, H. Gao, and W. Le, "Dataflow analysis-inspired deep learning for efficient vulnerability detection," in Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, ICSE 2024, Lisbon, Portugal, April 14-20, 2024. ACM, 2024, pp. 16:1–16:13. [Online]. Available: Link
Subsequently, we demonstrated Prompt-RAG's effectiveness in this context. A Question-Answering (QA) chatbot based on Prompt-RAG was built using KM-specific documents, and our model’s performance was compared with that of ChatGPT and conventional vector embedding-based RAG models. This study not only highlights the challenges of conventional RAG methods in niche domains but also showcases the potential of Prompt-RAG as a more effective alternative.
[4] Y. Mirsky, G. Macon, M. D. Brown, C. Yagemann, M. Pruett, E. Downing, S. Mertoguno, and W. Lee, "Vulchecker: Graph-based vulnerability localization in source code," in 32nd USENIX Security Symposium, USENIX Security 2023, Anaheim, CA, USA, August 9-11, 2023, J. A. Calandrino and C. Troncoso, Eds. USENIX Association, 2023, pp. 6557–6574. [Online]. Available: Link
[5] H. Hanif and S. Maffeis, "Vulberta: Simplified source code pre-training for vulnerability detection," in International Joint Conference on Neural Networks, IJCNN 2022, Padua, Italy, July 18-23, 2022. IEEE, 2022, pp. 1–8. [Online]. Available: Link
[6] M. Fu and C. Tantithamthavorn, "Linevul: A transformer-based line-level vulnerability prediction," in 19th IEEE/ACM International Conference on Mining Software Repositories, MSR 2022, Pittsburgh, PA, USA, May 23-24, 2022. ACM, 2022, pp. 608–620. [Online]. Available: Link
[7] C. Zhang, H. Liu, J. Zeng, K. Yang, Y. Li, and H. Li, "Prompt-enhanced software vulnerability detection using chatgpt," CoRR, vol. abs/2308.12697, 2023. [Online]. Available: Link
[8] A. Z. H. Yang, C. L. Goues, R. Martins, and V. J. Hellendoorn, "Large language models for test-free fault localization," in Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, ICSE 2024, Lisbon, Portugal, April 14-20, 2024. ACM, 2024, pp. 17:1–17:12. [Online]. Available: Link
[33] S. E. Robertson and S. Walker, "Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval," in Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Dublin, Ireland, 3-6 July 1994 (Special Issue of the SIGIR Forum).
ACM/Springer, 1988, pp. 232–241. [Online]. Available: https://doi.org/10.1016/0306-4573(88)90021-0
[34] M. Çagatayli and E. Çelebi, "The effect of stemming and stop-word-removal on automatic text classification in turkish language," in Neural Information Processing - 22nd International Conference, ICONIP 2015, Istanbul, Turkey, November 9-12, 2015, Proceedings, Part I, ser. Lecture Notes in Computer Science, S. Arik, T. Huang, W. K. Lai, and Q. Liu, Eds., vol. 9489. Springer, 2015, pp. 168–176. [Online]. Available: https://doi.org/10.1007/978-3-319-26532-2_19
[35] N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang, "Lost in the middle: How language models use long contexts," CoRR, vol. abs/2307.03172, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2307.03172
[36] (2023) Gpt-3-5-turbo documentation. [Online]. Available: https://platform.openai.com/docs/models/gpt-3-5-turbo
[37] (2023) Gpt-4 documentation. [Online]. Available: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
[38] OpenAI, "GPT-4 technical report," CoRR, vol. abs/2303.08774, 2023.
[Online]. Available: https://doi.org/10.48550/arXiv.2303.08774
[39] (2023) Elasticsearch. [Online]. Available: https://github.com/elastic/elasticsearch
[40] R. Likert, "A technique for the measurement of attitudes," Archives of Psychology, 1932.
[41] M. Jimenez, M. Papadakis, and Y. L. Traon, "An empirical analysis of vulnerabilities in openssl and the linux kernel," in 23rd Asia-Pacific Software Engineering Conference, APSEC 2016, Hamilton, New Zealand, December 6-9, 2016, A. Potanin, G. C. Murphy, S. Reeves, and J. Dietrich, Eds. IEEE Computer Society, 2016, pp. 105–112. [Online]. Available: https://doi.org/10.1109/APSEC.2016.025
[42] F. Wu, Q. Zhang, A. P. Bajaj, T. Bao, N. Zhang, R. Wang, and C. Xiao, "Exploring the limits of chatgpt in software security applications," CoRR, vol. abs/2312.05275, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2312.05275
[43] X. Zhou, T. Zhang, and D. Lo, "Large language model for vulnerability detection: Emerging results and future directions," CoRR, vol. abs/2401.15468, 2024. [Online]. Available: https://doi.org/10.48550/arXiv.2401.15468
[44] Z. Gao, H. Wang, Y. Zhou, W. Zhu, and C. Zhang, "How far have we gone in vulnerability detection using large language models," CoRR, vol. abs/2311.12420, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2311.12420
[45] C. Wang, J. Liu, X. Peng, Y. Liu, and Y. Lou, "Boosting static resource leak detection via llm-based resource-oriented intention inference," CoRR, vol. abs/2311.04448, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2311.04448
# Revolutionizing Retrieval-Augmented Generation with Enhanced PDF Structure Recognition

Demiao LIN, chatdoc.com (arXiv:2401.12599v1 [cs.AI], 23 Jan 2024)

# Abstract

With the rapid development of Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) has become a predominant method in the field of professional knowledge-based question answering. Presently, major foundation model companies have opened up Embedding and Chat API interfaces, and frameworks like LangChain have already integrated the RAG process. It appears that the key models and steps in RAG have been resolved, leading to the question: are professional knowledge QA systems now approaching perfection? This article finds that current mainstream methods depend on the premise of accessing high-quality text corpora. However, since professional documents are mainly stored in PDFs, the low accuracy of PDF parsing significantly impacts the effectiveness of professional knowledge-based QA. We conducted an empirical RAG experiment across hundreds of questions from corresponding real-world professional documents. The results show that ChatDOC (chatdoc.com), a RAG system equipped with a panoptic and pinpoint PDF parser, retrieves more accurate and complete segments, and thus produces better answers. Empirical experiments show that ChatDOC is superior to the baseline on nearly 47% of questions, ties on 38% of cases, and falls short on only 15% of cases. This suggests that we may revolutionize RAG with enhanced PDF structure recognition.

# 1 Introduction

Large language models (LLMs) are trained on data that predominantly come from publicly available internet sources, including web pages, books, news, and dialogue texts. This means that LLMs primarily rely on internet sources as their training data, which are vast, diverse, and easily accessible, enabling them to scale up their capabilities. However, in vertical applications, professional tasks require LLMs to utilize domain knowledge, which unfortunately is private and not part of their pre-training data. A popular approach to equip LLMs with domain knowledge is Retrieval-Augmented Generation (RAG). The RAG framework answers a question in four steps: the user proposes a query, the system retrieves relevant content from private knowledge bases, combines it with the user query as context, and finally asks the LLM to generate an answer. This is illustrated in Figure 1 with a simple example.
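The four steps can be sketched in a few lines of Python. This is an illustration only, not the paper's implementation: the OpenAI SDK calls follow the current `openai` Python client, the model names are placeholders, and the tiny in-memory cosine-similarity store stands in for a real knowledge base.

```python
# Minimal RAG loop: embed the query, retrieve similar chunks, prompt the LLM.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# Step 0 (offline): embed the private knowledge base once and keep the vectors.
chunks = ["To enable Full Self-Driving (Beta), touch Controls > Autopilot > ...",
          "To adjust mirrors, touch Controls > Mirrors ..."]
chunk_vecs = embed(chunks)

def answer(query: str, k: int = 2) -> str:
    # Steps 1-2: embed the user query and retrieve the k most similar chunks.
    q = embed([query])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    retrieved = [chunks[i] for i in np.argsort(-sims)[:k]]
    # Step 3: combine the retrieved snippets with the user query as context.
    prompt = ("Using the provided document snippets, reply to the given query.\n"
              "[Snippets]\n" + "\n".join(retrieved) + f"\n[Query] {query}")
    # Step 4: ask the LLM to generate the answer.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

print(answer("How to turn on full self-driving mode?"))
```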
This process mirrors the typical cognitive process of encountering a problem: consulting relevant references and subsequently deriving an answer. In this framework, the pivotal component is the accurate retrieval of pertinent information, which is critical for the efficacy of the RAG model. However, retrieval from PDF files is fraught with challenges. Common issues include inaccuracies in text extraction and disarray in the row-column relationships of tables inside PDF files. Thus, before RAG, we need to convert large documents into retrievable content. The conversion involves several steps, as shown in Figure 2:
[Figure content: an example query, "How to turn on full self-driving mode?", is embedded and matched against the knowledge base; the retrieved document snippet ("To enable Full Self-Driving (Beta), touch Controls > Autopilot > Autopilot Features > Full Self-Driving (Beta)") is combined with instructions and the query into a prompt, and the LLM's chat completion produces the answer ("To turn on Full Self-Driving mode in a Tesla Model 3, follow these steps: 1. On the vehicle's touchscreen, touch 'Controls.' ...").]

Figure 1.
The workflow of Retrieval-Augmented Generation (RAG).

[Figure content: documents are parsed into titles, paragraphs, tables, and images, divided into chunks (Chunk 1 ... Chunk 4), and stored for retrieval.]

Figure 2. The process of converting PDFs into retrievable contents.

- Document Parsing & Chunking. It involves extracting paragraphs, tables, and other content blocks, then dividing the extracted content into chunks for subsequent retrieval.
- Embedding. It transforms text chunks into real-valued vectors and stores them in a database.

Since each of these steps can lead to information loss, the compounded losses can significantly impact the effectiveness of RAG's responses. This paper primarily addresses the question of whether the quality of PDF parsing and chunking affects the outcomes of RAG. We explore the challenges, methodologies, and real-world case studies pertaining to this issue, including an examination of the two types of methods in this field, namely rule-based and deep learning-based methods, followed by empirical evaluations of their efficacy through practical examples.

# 2 PDF Parsing & Chunking

# 2.1 Challenges and Methods Overview

To humans, the cognitive process of perusing any document page is similar. When we read a page, characters are captured by our retinas; then, in our brains, these characters are organized into paragraphs, tables, and charts, and then understood or memorized.
[Figure content: screenshots of a tagged document and an untagged document.]

Figure 3. Two types of documents in the view of computers.
However, computers perceive information as binary codes. From their perspective, as illustrated in Figure 3, documents can be categorized into two distinct types:
- Tagged Documents: Examples include Microsoft Word and HTML documents, which contain special tags like <p> and <table> to organize the text into paragraphs, cells, and tables.
- Untagged Documents: Examples include PDFs, which store instructions on the placement of characters, lines, and other content elements on each document page. They focus on 'drawing' these basic content elements in a way that makes the document legible to human readers.
Finally, we use two approaches to reassure the dataset quality. First, we manually review a subset sample of the generated multi-hop queries, their corresponding evidence sets, and the final answers. The results of the manual examination indicate a high degree of accuracy and data quality. Second, we utilize GPT-4 to assess each example in the dataset against the following criteria: 1) The generated query must utilize all provided evidence in formulating the response; 2) The query should be answerable solely based on the provided evidence; 3) The response to the generated query should be either a single word or a specific entity; 4) The query must conform to its designated query type.

… with "who," with the remainder incorporating a small percentage of other interrogative words such as "when." Moreover, the number of evidence required to answer a multi-hop query varies. Table 4 shows the distribution of evidence numbers for each query in the dataset. Around 42% of queries can be answered using two pieces of evidence, while approximately 30% and 15% of queries can be answered using three or four pieces of evidence, respectively.

# 4 Benchmarking RAG system using MultiHop-RAG

MultiHop-RAG can be used as a benchmark for various RAG-related tasks.
# Design of Prompt-RAG

In this study, we introduce Prompt-RAG, a novel approach distinct from conventional vector embedding-based RAG. Prompt-RAG consists of three steps: preprocessing, heading selection, and retrieval-augmented generation. The overall scheme of Prompt-RAG might seem similar to that of conventional RAG methods; however, the details of each step are quite distinct, especially in that conventional RAG relies on a complex multi-step process involving the vectorization of documents and algorithmic retrieval from a vector database to ground a generative model's response. The workflows of vector embedding-based RAG and our method are depicted in Figure 1.

Figure 1. Comparative workflows of two RAG models. (A) depicts the vector embedding-based RAG process: relevant pieces of information are retrieved from a database of document embeddings through algorithms, and the retrieved data are augmented in a generative model to produce a response. (B) illustrates the process of Prompt-RAG: an LLM-based generative model directly uses a table of contents to construct a contextual reference and then generates a response with it. Abbreviations: RAG, retrieval-augmented generation; LLM, large language model.

# Preprocessing

Prompt-RAG starts by extracting or creating a Table of Contents (ToC) from the user's document(s), which is the main subject of the retrieval. This procedure can be done flexibly depending on the type of document and the user's preferences.
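To make the heading-selection and generation steps concrete, here is a minimal sketch of how Prompt-RAG could be wired up. It is our own illustration, not the authors' code: the prompt wording, the model name, and the `sections` mapping are hypothetical, and a production system would parse the model's heading list more robustly.

```python
# Sketch of Prompt-RAG: let the LLM pick relevant headings from a ToC,
# then build the reference context from the bodies of the chosen headings.
from openai import OpenAI

client = OpenAI()

# Hypothetical document: a table of contents mapped to the section bodies.
sections = {
    "1. Principles of Sasang constitutional medicine": "…",
    "2. Diagnosis by constitution": "…",
    "3. Herbal prescriptions by constitution": "…",
}

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def prompt_rag(question: str, max_headings: int = 2) -> str:
    toc = "\n".join(sections)
    # Heading selection: the ToC (not vector embeddings) drives retrieval.
    chosen = ask(
        f"Table of contents:\n{toc}\n\nQuestion: {question}\n"
        f"List up to {max_headings} headings most relevant to the question, "
        "one per line, copied verbatim.")
    context = "\n\n".join(body for head, body in sections.items()
                          if head in chosen)
    # Retrieval-augmented generation with the selected sections as reference.
    return ask(f"Reference:\n{context}\n\nAnswer the question: {question}")
```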
They do not store any structural information of the document, like tables or paragraphs. Thus, untagged documents are suitable only for human reading and are essentially unreadable by machines. This becomes evident when attempting to copy a table from a PDF into MS Word, where the original structure of the table is often completely lost.

However, Large Language Models (LLMs) exhibit proficiency in processing serialized text. Consequently, to enable LLMs to effectively manage untagged documents, a parser that organizes scattered characters into coherent texts with their structures is necessary. Ideally, a PDF parser should exhibit the following key features:

- Document Structure Recognition: It should adeptly divide pages into different types of content blocks like paragraphs, tables, and charts. This ensures that the divided text blocks are complete and independent semantic units.
- Robustness in Complex Document Layout: It should work well even for document pages with complex layouts, such as multi-column pages, borderless tables, and even tables with merged cells.

Currently, there are two main types of PDF parsing methods: rule-based approaches and deep learning-based approaches. Among them, PyPDF, a widely-used rule-based parser, is a standard method in LangChain for PDF parsing. Conversely, our approach, the ChatDOC PDF Parser (https://pdfparser.io/), is grounded in deep learning models. Next, we illustrate the difference between them by introducing the methods and delving into some real-world cases.

# 2.2 Rule-based Method: PyPDF

We first introduce the parsing & chunking workflow based on PyPDF. First, PyPDF serializes the characters in a PDF into a long sequence without document structure information. Then, this sequence is segmented into discrete chunks using a segmentation rule, such as the "RecursiveCharacterTextSplitter" function in LangChain. Specifically, this function divides the document based on a predefined list of separators, such as the newline character "\n". After this initial segmentation, adjacent chunks are merged only if the length of the combined chunks is not bigger than a predetermined limit of N characters. Hereafter, we use "PyPDF" to refer to the method of document parsing and chunking using PyPDF + RecursiveCharacterTextSplitter, provided there is no contextual ambiguity. The maximum length of a chunk is set to 300 tokens in the following. Next, we use a case to observe the inherent nature of PyPDF.
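Before turning to the case, note that the baseline workflow just described can be approximated in a few lines of Python. This is our sketch of the "PyPDF" setting, assuming the `pypdf` and `langchain-text-splitters` packages; the token-based chunk limit uses LangChain's tiktoken-backed splitter, and the input file name is hypothetical.

```python
# Our approximation of the PyPDF baseline: serialize the PDF into one long
# string, then split it with LangChain's RecursiveCharacterTextSplitter.
from pypdf import PdfReader
from langchain_text_splitters import RecursiveCharacterTextSplitter

def pypdf_chunks(pdf_path: str, max_tokens: int = 300) -> list[str]:
    # 1. Rule-based parsing: concatenate the extracted text of every page.
    #    No structure (paragraphs, tables, reading order) is recovered.
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # 2. Chunking: split on separators such as "\n", then merge adjacent
    #    pieces while the combined length stays under the limit.
    splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
        encoding_name="cl100k_base",   # token-based length, as in the paper
        chunk_size=max_tokens,
        chunk_overlap=0,
        separators=["\n\n", "\n", " ", ""],
    )
    return splitter.split_text(text)

chunks = pypdf_chunks("annual_report.pdf")  # hypothetical input file
```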
# Case 1: PyPDF

[Figure content: the original page and the PyPDF chunking result. The chunk text interleaves the flattened table (e.g., "Year ended March 31, 2021") with body text ("We believe that adjusted EBITDA, ...").]
… into multiple lines (e.g., the cell "China commerce(1)"), and some adjacent cells may be arranged in one line (e.g., the third to fifth cells in the second line, "services(1) Cainiao Cloud"). So, the structure of the table is completely destroyed. If this chunk is retrieved for RAG, the LLM is unable to perceive any meaningful information from it. The situation is similar for Chunk 2. Moreover, the headers of the table only exist in Chunk 1, so the lower part of the table in Chunk 2 becomes meaningless.
3. It cannot recognize the reading order of the content. The last line of Chunk 5, "Management Discussion and Analysis", is actually located at the top of the page, but is parsed as the last sentence in the result. This is because PyPDF parses the document in the storage order of the characters, instead of their reading order. This may cause chaotic results when faced with complex layouts.

The result on another case, Case 2, which features a complex cross-page table, is shown in Figure 15 in the Appendix.

# 2.3 Deep Learning-based Method: ChatDOC PDF Parser

Next, we turn our attention to deep learning-based parsing, exemplified by our ChatDOC PDF Parser. The ChatDOC PDF Parser (https://pdfparser.io/) has been trained on a corpus of over ten million document pages. Following the method in [2], it incorporates a sequence of sophisticated steps, including: 1.
OCR for text positioning and recognition; 2. Physical document object detection; 3. Cross-column and cross-page trimming; 4. Reading order determination; 5. Table structure recognition; 6. Document logical structure recognition. Readers might refer to [2] for the details of these steps. After parsing, we use the paragraphs and tables as basic blocks, and merge adjacent blocks until reaching the token limit to form a chunk. The ChatDOC PDF Parser is designed to consistently deliver parsing results in JSON or HTML formats, even for challenging PDF files. It parses a document into content blocks, where each block refers to a table, paragraph, chart, or other type of content.
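The block-merging step ("merge adjacent blocks until reaching the token limit") can be sketched as follows. This is our simplified reading of that step, not ChatDOC's implementation: blocks are assumed to arrive as already-parsed strings in reading order, and token counts are approximated by whitespace splitting.

```python
# Sketch: form chunks by greedily merging adjacent parsed blocks
# (paragraphs, tables, ...) until the next block would exceed the token limit.
def merge_blocks(blocks: list[str], max_tokens: int = 300) -> list[str]:
    def n_tokens(text: str) -> int:
        return len(text.split())  # crude whitespace approximation

    chunks, current, current_len = [], [], 0
    for block in blocks:
        block_len = n_tokens(block)
        # Close the current chunk if adding this block would overflow it.
        if current and current_len + block_len > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
        current.append(block)
        current_len += block_len
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```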
For tables, it outputs the text in each table cell and also indicates which cells are merged into a new one. Moreover, for documents with hierarchical headings, it outputs the hierarchical structure of the document. In summary, the parsed result is like a well-organized Word file. Figure 5 shows a scanned page and its parsing result. The left side displays the document and the recognized content blocks (with different colored rectangles). The right side shows the parsing result in JSON or HTML format. Readers might refer to [3] for the live demo of this parsing result. Then, we check the result of the ChatDOC PDF Parser on Case 1 in Figure 6. It successfully addresses the three shortcomings of PyPDF.
1. As shown in the "3 Visualization" part, it recognizes the mixed layout and correctly sets the whole table as a separate chunk. For paragraphs, as shown in Chunk 2 in the "2 Chunking Result" part, text lines in the same paragraph are merged together, making them easier to understand.
2. In the "2 Chunking Result" part, in Chunk 1, we can see the table is represented in markdown format, which preserves the table's internal structure. Additionally, the ChatDOC PDF Parser can recognize the merged cells inside a table. Since the markdown format cannot represent merged cells, we put the whole text of a merged cell into each of its original cells in the markdown output (see the sketch after this list). As you can see, in Chunk 1 the text "Year ended March 31, 2021" repeats 9 times, which stands for a merged cell spanning the original 9 cells.
3. Moreover, "Management Discussion and Analysis" and "112 Alibaba Group Holding Limited" are recognized as the page header and footer, and they are placed at the top and bottom of the parsing result, which is consistent with the reading order.

The result on another case, Case 2, which features a complex cross-page table, is shown in Figure 16 in the Appendix.
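The merged-cell handling mentioned in item 2 can be illustrated with a small, self-contained example. The cell model here is our own simplification: each cell is given by its text and the inclusive grid rectangle it covers, and merged text is copied into every covered position before the grid is serialized as markdown, which is why a header merged across nine columns repeats nine times.

```python
# Sketch: render a table with merged cells as markdown by duplicating the
# merged text into every grid position it spans (markdown has no colspan).
def table_to_markdown(n_rows: int, n_cols: int,
                      cells: list[tuple[str, int, int, int, int]]) -> str:
    """cells: (text, top_row, left_col, bottom_row, right_col), 0-indexed,
    inclusive; a merged cell simply spans more than one grid position."""
    grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]
    for text, r0, c0, r1, c1 in cells:
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                grid[r][c] = text
    header = "| " + " | ".join(grid[0]) + " |"
    sep = "|" + "|".join(["---"] * n_cols) + "|"
    body = ["| " + " | ".join(row) + " |" for row in grid[1:]]
    return "\n".join([header, sep, *body])

# A header cell merged across three columns is repeated three times.
print(table_to_markdown(2, 3, [
    ("Year ended March 31, 2021", 0, 0, 0, 2),
    ("Revenue", 1, 0, 1, 0), ("Cost", 1, 1, 1, 1), ("Margin", 1, 2, 1, 2),
]))
```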
[Figure content: a page of contractors' background information with recognized content blocks, and the corresponding parsing result shown in both JSON and HTML views, including element types, style and margin attributes, and table cells such as "Years in business", "Number of employees", and "Construction type".]

Figure 5. An example illustrating the results of the ChatDOC PDF Parser.
Zoom in to see the details.

# Experiments on the Impact of PDF Recognition on RAG

Back to the main topic of this paper: does the way a document is parsed and chunked affect the quality of answers provided by a RAG system? To answer this, we carried out a systematic experiment to assess the impact.

# Quantitative Evaluation of RAG Answer Quality

# Settings

We compared two RAG systems, as listed in Table 1:
- ChatDOC: uses the ChatDOC PDF Parser to parse the document and leverages the structure information for chunking.
- Baseline: uses PyPDF to parse the document and the RecursiveCharacterTextSplitter function for chunking.

Other components, like embedding, retrieval, and QA, are the same for both systems.

# Data Preparation

For our experiment, we assembled a dataset that closely mirrors real-world conditions, comprising 188 documents from various fields. Specifically, this collection includes 100 academic papers, 28 financial reports, and 60 documents from other categories such as textbooks, courseware, and legislative materials. We then gathered 800 manually generated questions via crowd-sourcing. After careful screening, we removed low-quality questions and kept 302 questions for evaluation. These questions were divided into two categories (as shown in Table 2):
- Extractive questions are those that can be answered with direct excerpts from the documents. Usually, they require pinpoint answers because they seek specific information. We found when
# Case 1: ChatDOC PDF Parser |1 Original Page:|2 Chunking Result:| |---|---| |[Chunk 1] <Page Header> Management Discussion and Analysis | | Year ended March 31, 2021 | Year ended March 31, 2021 | Year ended March 31, 2021 | Year ended March 31, 2021 | Year ended March 31, 2021 | Year ended March 31, 2021 | Year ended March 31, 2021 | Year ended March 31, 2021 | Year ended March 31, 2021 | |-|-|-|-|-|-|-|-|-|-| | | China commerce(1) | International commerce | Local consumer services(1) | Cainiao | Cloud | Digital media and entertainment | Innovation initiatives and others | Unallocated(2) | Consolidated | | | RMB | RMB | RMB | RMB | RMB | RMB | RMB | RMB | RMB | | | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | (in millions, except percentages) | | Revenue | 501,379 | 48,851 | 35,746 | 37,258 | 60,558 | 31,186 | 2,311 |—| 717,289 | | Income (Loss) from operations | 197,232 | (9,361) | (29,197) | (3,964) | (12,479) | (10,321) | (7,802) | (34,430) | 89,678 | | Add: Share-based compensation expense | 14,505 | 4,223 | 4,972 | 1,956 | 10,205 | 3,281 | 2,518 | 8,460 | 50,120 | | Add: Amortization of intangible assets | 1,922 | 206 | 7,852 | 1,195 | 23 | 922 | 83 | 224 | 12,427 | | Add: Anti-monopoly Fine(3) |—| —| —| —| —| —| —| 18,228 | 18,228 | | Adjusted EBITA | 213,659 | (4,932) | (16,373) | (813) | (2,251) | (6,118) | (5,201) | (7,518) | 170,453 | | Adjusted EBITA margin | 43% | (10)% | (46)% | (2)% | (4)% | (20)% | (225)% | N/A | 24% ||[Chunk 2] (1) Beginning on October 1, 2022, we reclassified the results of our Instant Supermarket Delivery (全能超市) business, which was previously reported under China commerce segment, to Local consumer services segment following the strategy refinement of Instant Supermarket Delivery business to focus on building customer mindshare for grocery delivery services through Ele.me platform. This reclassification conforms to the way that we manage and monitor segment performance.
Comparative figures were reclassified to conform to this presentation. (2) Unallocated expenses primarily relate to corporate administrative costs and other miscellaneous items that are not allocated to individual segments. The goodwill impairment, and the equity-settled donation expense related to the allotment of shares to a charitable trust, are presented as unallocated items in the segment information because our management does not consider these as part of the segment operating performance measure. (3) For a description of the relevant PRC Anti-monopoly investigation and administrative penalty decision, see“Business Overview — Legal and Administrative Proceedings — PRC Anti-monopoly Investigation and Administrative Penalty Decision.”| [Chunk 3] We use adjusted EBITDA (including adjusted EBITDA margin), adjusted EBITA (including adjusted EBITA margin), non-GAAP net income, non-GAAP diluted earnings per share/ADS and free cash flow, each a non
In the most ideal case, a ToC prepared by the author(s) of the document already exists. Yet even in the absence of a pre-existing ToC, one can be generated, for example by a generative model or manually, based on the document's quantitative, semantic, or individual divisions. It should be noted that the size of the ToC must not exceed the context window size of the generative model used for heading selection. Consequently, some headings or details of the ToC (e.g., heading or page numbers, or the hierarchical structure) might need to be removed to reduce the number of tokens. The body of the document should then be divided.
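Returning to the ToC size constraint, here is a minimal sketch of one way to trim a ToC to a token budget. It is our own illustration under stated assumptions: tokens are counted with tiktoken's `cl100k_base` encoding, trailing page numbers are stripped with a simple regex, and the deepest (most indented) heading levels are dropped first; the text describes this step only at the level of "remove headings or details to reduce tokens".

```python
# Sketch: shrink a table of contents until it fits the model's context budget.
import re
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def n_tokens(text: str) -> int:
    return len(enc.encode(text))

def trim_toc(headings: list[str], budget: int) -> str:
    # First pass: drop trailing page numbers such as "  ...  123".
    headings = [re.sub(r"[\s.·]*\d+\s*$", "", h) for h in headings]
    toc = "\n".join(headings)
    # Second pass: drop the deepest heading levels (most leading whitespace)
    # until the ToC fits within the token budget.
    while n_tokens(toc) > budget and headings:
        deepest = max(len(h) - len(h.lstrip()) for h in headings)
        headings = [h for h in headings if len(h) - len(h.lstrip()) < deepest]
        toc = "\n".join(headings)
    return toc
```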
# Steps

|Step|ChatDOC (PDFlux-LLM)|Baseline (PyPDF-LLM)|
|---|---|---|
|PDF Parsing|PDFlux (deep learning-based)|PyPDF (rule-based, default method in LangChain)|
|Chunking|≈300 tokens per chunk + chunking via paragraphs, tables, etc.|≈300 tokens per chunk + separator|
|Embedding|text-embedding-ada-002|text-embedding-ada-002|
|Retrieval|≤3000 tokens|≤3000 tokens|
|QA|GPT-3.5-Turbo|GPT-3.5-Turbo|

# Table 1. Settings of two RAG systems: ChatDOC and Baseline.
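For reference, the Baseline column of Table 1 can be sketched roughly as below, assuming the classic LangChain 0.0.x module layout (paths differ in newer releases); the file name, chunk overlap, the value of k, and the sample query are placeholders, not settings taken from the experiments.

```python
# A rough sketch of the Baseline pipeline in Table 1 (PyPDF + LangChain).
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import TokenTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

docs = PyPDFLoader("owners_manual.pdf").load()            # rule-based PDF parsing
chunks = TokenTextSplitter(chunk_size=300, chunk_overlap=0).split_documents(docs)

vectorstore = FAISS.from_documents(
    chunks, OpenAIEmbeddings(model="text-embedding-ada-002")
)
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 10}),  # ~10 x 300 tokens stays within the ≤3000-token budget
)
print(qa.run("How much cargo can I carry at most in terms of size?"))
```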
| |Extractive Questions|Comprehensive Analysis Questions|
|---|---|---|
|Number|86|216|
|Question Examples|1. Locate the content of section ten, what is the merged operating cost in the income statement? 2. What is the specific content of table 1? 3. Extract financial data and profit forecast tables. 4. Find the long-term loan table.|1. Summarize and analyze the profit forecast and valuation in the research report. 2. Fully report the research approach of this text. 3. Analyze the long-term debt-paying ability based on this report. 4. How is the feasibility analysis done in this article? 5. Give a simple example to explain the encoding steps and algorithm in the paper.|
|Evaluation|Human evaluation|GPT-4 evaluation|

# Table 2. The questions in the dataset are categorized into extractive questions and comprehensive analysis questions.

For each comprehensive analysis question, we feed answers A and B to GPT-4 to compare and score them twice. We also flip their order, feed B and A to GPT-4, and repeat the request twice.

# Results

# Results of Extractive Questions

The results of extractive questions are shown in Table 3. Out of the 86 extractive questions, ChatDOC performed better than the baseline on 42 cases, tied on 36 cases, and was inferior to Baseline on only 8 cases. The distribution of rating scores is further detailed in Figure 7. In the distribution table, Tij = k means there are k questions whose answer by ChatDOC is rated as i and whose answer by Baseline is rated as j. Cases where ChatDOC scores higher than the baseline (ChatDOC wins) are represented in the lower-left half, while cases where the baseline scores higher are in the upper-right. Notably, most samples with a clear winner are in the lower-left half, indicating ChatDOC's superiority. Impressively, ChatDOC achieved full marks (10) on nearly half of these cases, a total of 40.

# Results of Comprehensive Analysis Questions
# Table 3. The comparison result between ChatDOC and Baseline.

| |Total|ChatDOC wins|Tie|Baseline wins|
|---|---|---|---|---|
|Extractive Questions|86|42 (49%)|36 (42%)|8 (9%)|
|Comprehensive Questions|216|101 (47%)|79 (37%)|36 (17%)|
|Summary|302|143 (47%)|115 (38%)|44 (15%)|

# Figure 7. Distribution of rating scores of extractive questions (rows: score of ChatDOC; columns: score of Baseline).

|Score of ChatDOC \ Baseline|0-2|3|4|5|6|7|8|9|10|
|---|---|---|---|---|---|---|---|---|---|
|0-2|5|2|0|1|0|0|1|0|1|
|3|1|1|0|0|0|0|0|0| |
|4|0|0|1|0|0|0|0|0|0|
|5|3|1|0|3|1|0|0|0|0|
|6|1|0|2|1|0|0|0|0|0|
|7|2|0|0|3|0|3|0|0|0|
|8|1|0|0|3|0|1|2|0|0|
|9|0|1|0|0|1|0|0|4|0|
|10|7|3|2|2|0|2|3|1|20|
# Figure 8. Distribution of rating scores of comprehensive analysis questions (rows: score of ChatDOC; columns: score of Baseline).

|Score of ChatDOC \ Baseline|0-5|5-6|6-7|7-7.5|7.5-8|8-8.5|8.5-9|9-9.5|9.5-10|
|---|---|---|---|---|---|---|---|---|---|
|0-5|8|2|0|0|1|0|1|2|1|
|5-6|3|3|1|0|0|2|3|0|0|
|6-7|1|2|4|2|0|1|1|1|0|
|7-7.5|0|0|3|0|0|3|3|2|0|
|7.5-8|1|1|5|1|5|7|4|0|0|
|8-8.5|0|2|4|4|4|11|5|2|0|
|8.5-9|1|0|1|11|7|21|12|4|1|
|9-9.5|0|0|2|10|8|8|9|7|1|
|9.5-10|0|0|0|0|0|3|2|2|0|
# Case A: ChatDOC

Query: How much cargo can I carry at most in terms of size?
Retrieved Text Chunk:

|Cargo Volume| |
|---|---|
|Front Trunk|3.1 cu ft (88 L)|
|Behind 2nd row|19.8 cu ft (561 L)|
|Maximum total cargo volume with 5 passengers|22.9 cu ft (649 L)|

Answer: Based on the document, the maximum cargo capacity in terms of size for the Model 3 is as follows:
- Front Trunk: 3.1 cu ft (88 L)
- Behind 2nd row: 19.8 cu ft (561 L)
- Maximum total cargo volume with 5 passengers: 22.9 cu ft (649 L)

# Case A: Baseline (PyPDF + LangChain)

Query: How much cargo can I carry at most in terms of size?

Retrieved Text Chunk: "Hip Room Front Rear 53.4 in 52.4 in 1,356 mm 1,331 mm Cargo Volume Front Trunk 3.1 cu ft (88 L) Behind 2nd row 19.8 cu ft (561 L) Maximum total cargo volume with 5 passengers 22.9 cu ft (649 L)"

Answer: The maximum cargo size capacity for your Model 3 is not explicitly mentioned in the provided document snippets.
# Case B: ChatDOC

Query: What is the dependent variable in table 8?

Retrieved Text Chunks:

[Chunk 1] TABLE 8 ESTIMATES OF RETURNS TO COGNITIVE ABILITY, NONCOGNITIVE ABILITY, AND FACTOR LOADINGS OF UNOBSERVED HETEROGENEITY IN HIGH SCHOOL AND COLLEGE

[Chunk 2]

| |Year|Cognitive ability: Estimate|Cognitive ability: Std. error|Noncognitive ability: Estimate|Noncognitive ability: Std. error|Unobs. heterogeneity: Estimate|Unobs. heterogeneity: Std. error|
|---|---|---|---|---|---|---|---|
|High school|1968|0.038|0.021|0.064|0.024|0.030| |
| |2000|0.047|0.025|0.091|0.028|0.225|0.023|
|College|1968|0.085|0.038|0.082|0.045|-0.129|0.054|
| |2000|0.151|0.039|0.100|0.037|0.079|0.037|

Answer: The dependent variable in Table 8 is the estimate of returns to cognitive ability, noncognitive ability, and factor loadings of unobserved heterogeneity in high school and college.

Original Page Containing the Retrieved Chunks (Correct):

Figure 11. ChatDOC's result on locating a specific table in a research paper (original document: [7]).

# 3.2.1 Case A - Find Specific information in the Tesla manual

Case A involves a query about Tesla's user manual, specifically targeting cargo volume information. For this query, ChatDOC and Baseline perform differently, as shown in Figure 9 and Figure 10. The figures show the most relevant chunk(s) retrieved and the LLM's answer. They also show the document page that the relevant chunk(s) are located in, and highlight these chunks. In this case, both systems located the table, but the text they feed to the LLM is different, so the answers differ. Specifically:

- ChatDOC recognizes the table structure and interprets the text in markdown format (as shown in the "Retrieved Text Chunks" part), which makes it easier for the language model to comprehend.
# Case B: Baseline (PyPDF + LangChain)

Query: What is the dependent variable in table 8?

Retrieved Text Chunk:

[Chunk 1] The estimates and standard errors can be obtained upon request. coefficients on a quartic in periods, where a period is defined to be calendar year minus 1968. From this figure, we can see that the year dummies in the two different sectors have followed roughly a parallel pattern, except during the late 1960s and early 1970s when they go in opposite directions, increasing in the college sector and declining in the high school sector. In Table 8, we display the point estimates and standard errors of the returns to ability and the factor loadings associated with the unobserved heterogeneity factor. Because...

Answer: The dependent variable in Table 8 is not explicitly mentioned in the provided document snippets.

Original Page Containing the Retrieved Chunk (Wrong):

Figure 12. Baseline's result in locating a specific table in a research paper (original document: [7]).

Baseline erroneously merges the target table and the table above it into one chunk and loses the table structure. Hence, the text in the chunk is not understandable (as shown in the "Retrieved Text Chunk" part), and the LLM can only answer that the information is "not explicitly mentioned". This case underscores the effectiveness of ChatDOC's parsing method, particularly in handling tables and presenting them in an LLM-friendly format.

# Case B - Research paper

In Case B, the user's query is about a specific research paper. It requests the system to identify "Table 8" in the paper and enumerate all the dependent variables it lists.
Both the title and the content of the table were necessary for identifying these variables. Figure 11 and Figure 12 show how ChatDOC and Baseline perform in this case. ChatDOC effectively retrieves the entire table, encompassing both its title and content. This comprehensive retrieval allows for an accurate response to the query.
# 2) Heading selection

A prompt containing both the query and the ToC is passed to an LLM-based generative model, which is asked to autonomously select the headings most pertinent to the query, or those most helpful for finding information related to it. If a user wants to make use of all the headings of an oversized ToC, heading selection can be performed multiple times over the ToC's hierarchical structure, narrowing down from main headings to subheadings. Since this procedure only prepares a reference for answer generation, the number of selected headings can be fixed in the prompt in advance, depending on the budget and on the context window size of the generative model used for answer generation. It is recommended that the model produce its response in a structured format during heading selection, to streamline the subsequent retrieval step and to economize on token usage.

# 3) Retrieval-augmented generation

The sections of the document under the selected headings are retrieved and concatenated into a reference for answer generation. Again, the reference must be smaller than the context window of the answer-generation model; therefore, it has to be reduced by truncation or summarization when it is too large. Once the reference is prepared, a prompt including both the query and the reference is forwarded to the generative model.
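To make the two steps concrete, here is a minimal sketch using the OpenAI chat API; the prompt wording, the truncation threshold, and the `sections` mapping from headings to body text are assumptions for illustration, not the exact implementation described in this document.

```python
# Minimal prompt-RAG sketch: heading selection followed by retrieval-augmented
# answer generation. Assumes `sections` maps each ToC heading to its body text.
from openai import OpenAI

client = OpenAI()

def select_headings(query: str, toc: list[str], n: int = 5) -> list[str]:
    prompt = (
        f"Question: {query}\n"
        "Table of Contents:\n" + "\n".join(toc) + "\n"
        f"Select the {n} headings that help the most to answer the question. "
        "List them one per line, exactly as written in the table of contents."
    )
    reply = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content
    # Keep only lines that match real headings, in the order they were returned.
    return [line.strip() for line in reply.splitlines() if line.strip() in toc][:n]

def answer(query: str, toc: list[str], sections: dict[str, str]) -> str:
    headings = select_headings(query, toc)
    reference = "\n\n".join(sections[h] for h in headings)[:12000]  # crude truncation
    prompt = (
        f"Reference:\n{reference}\n\nQuestion: {query}\n"
        "Use the reference to answer the question."
    )
    return client.chat.completions.create(
        model="gpt-3.5-turbo-16k", messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content
```

The chatbot described later in this document additionally carries conversation memory in these prompts and falls back to a no-reference prompt for casual queries.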
# Figure 13. An example where ChatDOC encounters the ranking and token limit issues.

ChatDOC — Query: how many occupation dummies are included in the regression in table 4?

Retrieved Text Chunks:

[Chunk 1] Table 4: Education, Occupational Choice, and the Height Premium

[Chunk 2] Notes: The sample includes 5286 observations from 3860 men. Brackets contain standard errors clustered at the PSU-level. Regressions are weighted using sample weights and control for year, ethnicity, age, and age squared. The Raven score is standardized to have a standard deviation of 1 across the entire Mexican adult population. The p-values at the bottom of the table: ⁎ p < 0.10, ⁎⁎ p < 0.05, ⁎⁎⁎ p < 0.01.

Answer: In Table 4, the regression includes occupation dummies. However, the number of occupation dummies included in the regression is not specified in the document snippets provided.

# Figure 14. An example where ChatDOC fails to retrieve the relevant table (original document: [8]).

- Baseline does not retrieve the true "Table 8", but only a text chunk below "Table 7" (since it contains the text "Table 8"). Due to the baseline's segmentation strategy, the content of "Table 8" and other content on the same page are combined into one large chunk. This chunk, containing a mix of unrelated content, has a low similarity score and consequently does not show up in the retrieval results.

This case highlights ChatDOC's superior ability to handle complex document structures and its impact on retrieving specific segments for accurate responses.
# Discussion on Limitations

While ChatDOC generally performs well, there are instances where its retrieval quality is not as good as Baseline's. We observe two patterns in these cases.

Ranking and Token Limit Issue. If ChatDOC retrieves a large but irrelevant table first, it uses up the context window, preventing access to the relevant information, as the example in Figure 13 shows. This is mainly because the embedding model does not rank the relevant chunk as the top result. It may be addressed by a better embedding model, or by a more sophisticated way of handling large tables and paragraphs, such as retaining only the relevant part of a table for the LLM.

Fine Segmentation Drawback. Figure 14 shows a case that requires retrieving a whole table together with its title. However, ChatDOC wrongly recognizes the title as a regular paragraph, so the title and the table are stored in different chunks. This leads to retrieving only part of the required information, namely the table's title and footnotes, but not the key content within the table. Improving table title recognition could address such issues.

# Applications in ChatDOC

We apply the enhanced PDF structure recognition framework in ChatDOC (chatdoc.com), an AI file-reading assistant that helps to summarize long documents, explain complex concepts, and find key information in seconds. In terms of reliability and accuracy, it ranks at the top among ChatPDF products. Here's what makes ChatDOC special:

- Mastery over tables: simply select any table or text, and dive right into the details.
- Multi-file conversation: talk about many documents at the same time, without worrying about how many pages each one has.
- Citation-backed responses: all answers are supported by direct quotes pulled from the source documents.
- Handles many file types: works seamlessly with scanned files, ePub, HTML, and docx formats.

We are still working on publishing the API of the ChatDOC PDF Parser. Please subscribe to the wait list via pdfparser.io.

# Conclusion

Large Language Models (LLMs) are capable of producing more accurate responses when assisted by a PDF parser that effectively extracts and integrates structured information from documents into the prompts. This process enhances the quality and relevance of the data fed into the models, thereby improving their output. In the future, we will compare more deep learning-based document parsing methods to give a more comprehensive understanding of the relationship between RAG quality and document parsing quality. Some initial experiments show that some open-source PDF parsing methods cannot meet the bar for high-quality RAG.

# References
[1] Alibaba Group Holding Limited. Fiscal year annual report 2023. https://static.alibabagroup.com/reports/fy2023/ar/ebook/en/index.html, 2023.
[2] Rongyu Cao, Hongwei Li, Ganbin Zhou, and Ping Luo. Towards document panoptic segmentation with pinpoint accuracy: Method and evaluation. In 16th International Conference on Document Analysis and Recognition, pages 3–18, 2021.
[3] ChatDOC Team. https://pdfparser.io/
[4] Daisho Microline Holdings Limited. Fiscal year annual report 2022. https://www1.hkexnews.hk/listedco/listconews/sehk/2022/0626/2022062600094.pdf, 2022.
[5] Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators, 2023.
[6] Tesla Inc. Model 3 owner's manual. https://manual-directory.com/manual/2023-tesla-model-3-owners-manual/, 2023.
[7] Flávio Cunha, Fatih Karahan, and Ilton Soares.
Returns to skills and the college premium. Journal of Money, Credit and Banking, 43:39–86, 2011. https://sci-hub.hkvisa.net/https://doi.org/10.1111/j.1538-4616.2011.00410.x.
[8] Tom S. Vogl. Height, skills, and labor market outcomes in Mexico. NBER Working Paper Series, 2012. https://www.nber.org/system/files/working_papers/w18318/w18318.pdf.
# A. More Cases on PDF Parsing & Chunking

Case 2 in Figure 15 features a large borderless table that spans two pages. Figure 15 shows the result produced by PyPDF. A close inspection reveals that tables are represented merely as sequences of text, making them challenging to interpret and understand, and that the table is scattered across three chunks. The results on these two cases demonstrate that a rule-based method like PyPDF tends to dissect a document without a true understanding of its content structure. As a result, tables are often torn apart and paragraphs become jumbled, leading to a disjointed and confusing representation of the original document. For the ChatDOC PDF Parser, shown in Figure 16, the parsing outcome is notably different: it not only preserves the document structure but also segments the document in a way that maintains its inherent meaning.
In this case, the table that spans two pages is set into one chunk, with its title at the beginning. So, the information in this chunk is self-contained. If this chunk is retrieved for RAG, the LLM can digest useful information within it.
# Case 2: PyPDF

Original Pages: 1, 2 — Chunking Result / Visualization of Chunking Result:

[Chunk 1] Hong Kong Exchanges and Clearing Limited and The Stock Exchange of Hong Kong Limited take no responsibility for the contents of this announcement, make no representation as to its accuracy or completeness and expressly disclaim any liability whatsoever for any loss howsoever arising from or in reliance upon the whole or any part of the contents of this announcement.

[Chunk 2] DAISHO MICROLINE HOLDINGS LIMITED (Incorporated in Bermuda with limited liability) (Stock Code: 0567) ANNOUNCEMENT OF ANNUAL RESULTS FOR THE YEAR ENDED 31 MARCH 2022 The Board of Directors (the "Board") of Daisho Microline Holdings Limited (the "Company") announces the preliminary consolidated results of the Company and its subsidiaries (the "Group") for the year ended 31 March 2022 together with the comparative figures of the previous corresponding year as follows: CONSOLIDATED STATEMENT OF PROFIT OR LOSS Year ended 31 March 2022 2022 2021 Note HK$'000 HK$'000 Continuing operations Revenue 3 106,471 67,886 Cost of sales (98,670) (55,605) Gross profit 7,801 12,281 Other income 5 7,341 4,616 Selling and distribution expenses (5,083) (3,401) Administrative expenses (31,157) (35,422) Other operating expenses (480) (527) Fair value gain on derivative financial instruments – 101 Reversal of (Provision for) impairment loss on trade receivables, net 10(b) 1,808 (2,859) Impairment loss on other receivables – (1,780) Impairment loss on property, plant and equipment 15 (5,010) (2,314) Change in fair value of contingent consideration receivable – 3,311 Gain on bargain purchase arising from the acquisition of subsidiaries – 1,197 Loss on early redemption of a promissory note – (4,512) Finance costs 6 (2,244) (7,655)

[Chunk 3] 2022 2021 Note HK$'000 HK$'000 Loss before taxation from continuing operations 6 (27,024) (36,964) Income tax expense 7 (444) (532) Loss for the year from continuing operations (27,468) (37,496) Discontinued operation Loss for the year from discontinued operation 11 (1,660) (29,480) Loss for the year (29,128) (66,976) From continuing and discontinued operations Loss per share Basic (Hong Kong cents) 8 (2.80) (10.38) Diluted (Hong Kong cents) 8 (2.80) (10.38) From continuing operations Loss per share Basic (Hong Kong cents) 8 (2.64) (5.81) Diluted (Hong Kong cents) 8 (2.64) (5.81)

[Chunk 4] 2022 2021 Note HK$'000 HK$'000 Loss before taxation from continuing operations 6 (27,024) (36,964) Income tax expense 7 (444) (532) Loss for the year from continuing operations (27,468) (37,496) Discontinued operation Loss for the year from discontinued operation 11 (1,660) (29,480) Loss for the year (29,128) (66,976) From continuing and discontinued operations Loss per share Basic (Hong Kong cents) 8 (2.80) (10.38) Diluted (Hong Kong cents) 8 (2.80) (10.38) From continuing operations Loss per share Basic (Hong Kong cents) 8 (2.64) (5.81) Diluted (Hong Kong cents) 8 (2.64) (5.81)

Figure 15. Parsing and chunking results of PyPDF on Case 2 (original document: [4]).
In response, the model consults the augmentations to generate a response to the query.
# Case 2: ChatDOC PDF Parser

Original Pages: 2 — Chunking Result:

[Chunk 1] <td
# Experiments

1) Comparative exploration of LLM-based vector embeddings in the KM and CM domains

This experiment aimed to identify and exemplify the relative representational defects of LLM-based vector embeddings in niche domains compared to other well-established domains. To examine this point, we conducted a comparative analysis of vector embeddings from documents in the KM and CM domains. For this experiment, we selected 10 documents each from the KM and CM domains, specifically regarding their physiological contents. 'Eastern Medicine Physiology'(22) served as the document pool for KM. This book, compiled in Korean, has been revised by professors from every Korean Medicine college in South Korea and is used as the principal textbook in the physiology curriculum.
On the other hand, 'Physiology'(23) was chosen for the CM domain. To investigate the impact of language on representational differences in embeddings, we collected documents with exactly identical contents from both the English version and the Korean-translated version of 'Physiology'. The titles of the selected documents from each domain are listed in Appendix Table 1. We extracted the embedding vectors for a total of 30 documents – 10 each from KM physiology, CM physiology in Korean (CM_KR), and CM physiology in English (CM_EN) – using E5-mistral-7b-instruct(24), Voyage AI's voyage-02, and OpenAI's text-embedding-ada-002 models to figure out LLMs' representations of KM and CM knowledge.

Our analysis focused on identifying patterns of the KM and CM domain embeddings with three key document similarity metrics: human-evaluated document relatedness, embedding correlation coefficients, and token overlap coefficients. We assessed whether the correlation coefficients between embedding pairs closely align with the human-evaluated ground truth or merely follow surface-level similarity (token overlap) by conducting correlation analyses across these metrics. This allows us to understand the depth of the embedding representations and their correlation with human-perceived document pairwise relevance. For this, Pearson correlation coefficients(25) were calculated for every embedding vector pair, covering 45 pairs in each of the three categories (KM, CM_KR, CM_EN). To assess explicit similarity in a document pair, we computed the overlap coefficient(26) for tokens in the KM, CM_KR, and CM_EN documents. The token overlap coefficient was calculated as:

Token overlap coefficient = |A ∩ B| / min(|A|, |B|),

where |A ∩ B| is the count of token co-occurrences between documents A and B, and min(|A|, |B|) is the minimum token count of either document. Token overlap coefficients were calculated three times with different tokenizers corresponding to the embedding models: E5-mistral-7b-instruct(24), Voyage AI's voyage-02, and OpenAI's text-embedding-ada-002. Repeated appearances of a single token in a document were counted and considered separately. To determine the ground truth of document pair correlations within each domain, two KM doctors with national licenses evaluated the relatedness between each pair of the KM and CM documents. A binary scoring system was adopted: a score of 1 indicated that a pair was interrelated, and 0 for unrelated documents.
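For illustration, a minimal sketch of the token overlap coefficient and the two correlation analyses described above is given below; the tokenizer choice (tiktoken's cl100k_base, which pairs with text-embedding-ada-002) and the placeholder inputs are assumptions, not the exact pipeline used in the study.

```python
import numpy as np
import tiktoken
from collections import Counter
from scipy.stats import pearsonr, spearmanr

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer paired with text-embedding-ada-002

def token_overlap_coefficient(doc_a: str, doc_b: str) -> float:
    """|A ∩ B| / min(|A|, |B|), counting repeated tokens separately."""
    a, b = Counter(enc.encode(doc_a)), Counter(enc.encode(doc_b))
    intersection = sum((a & b).values())            # multiset intersection size
    return intersection / min(sum(a.values()), sum(b.values()))

def embedding_correlation(vec_a: np.ndarray, vec_b: np.ndarray) -> float:
    """Pearson correlation between two embedding vectors (one document pair)."""
    return pearsonr(vec_a, vec_b)[0]

# Per-pair metrics collected over the 45 document pairs of one domain
# (human_scores, emb_corrs, overlaps are illustrative placeholders):
# rho, p1 = spearmanr(human_scores, emb_corrs)   # alignment with human judgment
# r, p2 = pearsonr(emb_corrs, overlaps)          # dependence on surface overlap
```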
The human-evaluated document relatedness scores were then obtained by averaging the two doctors' scores for the KM and CM documents, respectively. The correlation analyses were conducted between human-evaluated document relatedness scores and embedding correlation coefficients, and between embedding correlation coefficients and token overlap coefficients, with SciPy(27) in Python 3.11. Bonferroni correction(28) was applied to the p-values due to the multiple comparisons.

# 2) Performance comparison of Prompt-RAG and existing models

# (1) Chatbot Settings

For the evaluation, we developed a domain-specific, prompt-RAG-based chatbot for the book 'Introduction to Current Korean Medicine'(29). The chatbot employed GPT architectures: GPT-4-0613 for heading selection and GPT-3.5-turbo-16k-0613 for answer generation. The original ToC of the book had already been defined by the authors. Subheadings were added to it, aligning with the book's actual sections. The expanded table of contents exceeded the context window size for heading selection, so some headings were removed to handle this issue. The body of the book was then segmented according to the modified headings for the subsequent retrieval.

We passed a GPT-4-based model a prompt containing both the revised ToC and a query, asking the model to identify five pertinent headings from the ToC. At the same time, it was instructed to avoid selecting a heading if the query was about greetings or casual talk. The prompt for heading selection is shown in Table 1.

Table 1. The prompt for heading selection

"Current context: {history}ᵃ
Question: {question}ᵃ
Table of Contents: {index}ᵃ
Each heading (or line) in the table of contents above represents a fraction in a document. Select the five headings that help the best to find out the information for the question. List the headings in the order of importance and in the format of '1. --- 2. --- 3. --- 4. --- 5. ---'. Don't say anything other than the format. If the question is about greetings or casual talks, just say 'Disregard the reference.'."

ᵃ These represent the placeholders for conversational buffer memory, the user's query, and the table of contents, respectively.
Broadly speaking, RAG-
Table 2. The prompts for answer generation

Prompt 1: Answer generation with selected headings

"You are a chatbot based on a book called '현대한의학개론'. Here is a record of previous conversation for your smooth chats.: {history}ᵃ
Reference: {context}ᵃ
Question: {question}ᵃ
Use the reference to answer the question. The reference above is only fractions of '현대한의학개론'. Be informative, gentle, and formal. If you can't answer the question with the reference, just say like 'I couldn't find the right answer this time'.
Answer in Korean:"

Prompt 2: Answer generation without selected headings for casual queries

"You are a chatbot based on a book called '현대한의학개론'. Here is a record of previous conversation for your smooth chats.: {history}ᵃ
Question: {question}ᵃ
Answer the question. Be informative, gentle, and formal.
Answer in Korean:"

ᵃ These denote the placeholders for conversational buffer memory, the reference based on the selected heading, and the user's query, respectively, from top to bottom.

Conversation buffer memory was incorporated in the prompts for both heading selection and answer generation, within each context window limit. We employed LangChain(30) for the processes above.

# Baselines

① ChatGPT. For the first baseline to compare the performance of our model with, we utilized ChatGPT without any retrieval-augmentation process. ChatGPT is based on a diverse, large-scale corpus, equipped with an immense range of global knowledge.(31) Therefore, we evaluated our model's proficiency in generating answers specific to the domain of KM, in contrast with the general knowledge of ChatGPT. This baseline included employing both the GPT-3.5 and GPT-4 models of ChatGPT (ChatGPT-3.5 and ChatGPT-4, respectively).

② Chunk retrievals. As our second baseline, we adopted vector embedding-based chunk retrieval. The text of the book was divided into chunks of size 50 and 100, respectively, using Tiktoken(32). Subsequently, each chunk was vectorized through OpenAI's text-embedding-ada-002. Vectors that most closely matched the query
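A rough sketch of this chunk-retrieval baseline is shown below; the 50-token chunk size corresponds to the C50 setting, while the helper names, the value of k, and the cosine-similarity retrieval step are illustrative assumptions rather than the exact implementation.

```python
import numpy as np
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")

def chunk_by_tokens(text: str, size: int = 50) -> list[str]:
    """Split the book text into fixed-size token chunks (C50 / C100 baselines)."""
    ids = enc.encode(text)
    return [enc.decode(ids[i : i + size]) for i in range(0, len(ids), size)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

def retrieve(query: str, chunks: list[str], chunk_vecs: np.ndarray, k: int = 6) -> list[str]:
    q = embed([query])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(-sims)[:k]]
```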
# Tasks and performance evaluation metrics

To evaluate the performance of our domain-specific, prompt-RAG-based chatbot and the other baseline models, we composed a series of 30 questions related to KM. The models were to generate answers to those questions in order.
Each question was categorized into one of three types to examine the models' capabilities in direct retrieval, comprehensive understanding, and functional robustness. The questions of the three types followed a ratio of 4:4:2. For the ChatGPT baselines, which do not utilize retrieval augmentation, questions specifically inquiring about the author's perspective were appropriately adjusted. Further details on the questions and their types are provided in Appendix Table 2.

Human evaluation of the generated answers was performed by three KM doctors. The evaluators assessed the models' answers in terms of three criteria: relevance, readability, and informativeness. Relevance measured how well the answer directly addressed the central topic of the question. Readability evaluated the naturalness and fluency of the answer. Informativeness assessed the depth and significance of the answer's content. Each question was scored on every criterion with either 0, 1, or 2 points. In the evaluation process, each response started with a base score of 2 for each criterion, and evaluators were instructed to deduct points based on the presence of specific flaws. Descriptions of the criteria and the scoring system are provided in Table 3. The response time taken to generate each answer was also measured for the comparison of our model and the chunk retrieval models.

# Table 3. Evaluation criteria for answers.

|Criterion|Point scale|Description|Deduction|
|---|---|---|---|
|Relevance|0, 1, 2|Assesses direct connection with the central topic of the question. High relevance achievable even with low readability or meaningless content.|Irrelevance to the question.|
|Readability|0, 1, 2|Evaluates the naturalness and fluency of an answer. High readability achievable even with irrelevant or meaningless content.|Grammatical errors or incoherence.|
|Informativeness|0, 1, 2|Assesses the depth and significance of the answer's content. High informativeness achievable even with low readability or irrelevance.|Superficial or meaningless content including hallucination.|
|Scoring guide|0 points|Criterion severely damaged, making the answer unacceptable.| |
# Statistical analysis

To evaluate the statistical significance of our model's scores in relation to those of the others, we performed t-tests and Mann-Whitney U tests. The t-tests compared the scores across the criteria of relevance, readability, and informativeness, while Mann-Whitney U tests were applied to the scores categorized by question types. P-values were adjusted using Bonferroni correction(28) to account for the multiple comparisons.
All statistical analyses were conducted with the Statsmodels(36) package in Python 3.11.
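A rough sketch of these tests is shown below; the score arrays are illustrative placeholders (the real inputs are the evaluators' ratings), while the use of SciPy for the tests and statsmodels for the Bonferroni adjustment mirrors the packages cited above.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu
from statsmodels.stats.multitest import multipletests

# Placeholder per-question criterion scores for two models (illustrative only).
prompt_rag = np.array([2, 2, 1, 2, 2, 2, 1, 2, 2, 2])
baseline   = np.array([1, 2, 0, 1, 2, 1, 1, 0, 2, 1])

# t-test for a criterion (e.g., informativeness) and Mann-Whitney U test for a
# question type (e.g., direct retrieval).
_, p_ttest = ttest_ind(prompt_rag, baseline)
_, p_mwu = mannwhitneyu(prompt_rag, baseline, alternative="two-sided")

# Bonferroni correction over the family of comparisons.
reject, p_adj, _, _ = multipletests([p_ttest, p_mwu], method="bonferroni")
print(p_adj, reject)
```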
# Results

1) Comparative analysis of LLM-based vector embeddings in KM and CM

(1) Comparison of KM and CM document pairs by correlation metrics

Human-evaluated document relatedness scores, embedding correlation coefficients, and token overlap coefficients were calculated for KM and CM document pairs using three different embedding models. To compare the overall pattern of these metrics across the domains and the models, they are visually presented in Figure 2.

Figure 2. Comparative analysis of human-evaluated document relatedness, embedding correlation coefficients, and token overlap coefficients in KM, CM_KR, and CM_EN. (A) shows clustermaps of human-evaluated document relatedness scores for KM and CM, where each cell represents the perceived relatedness between document pairs as judged by human evaluators. (B) illustrates the embedding correlation coefficients across the different domains and models. (C) depicts the token overlap coefficients, which measure the extent of shared tokens between document pairs. The hierarchical clustering was conducted based on squared Euclidean distance, with embedding correlation coefficients and token overlap coefficients sequentially arranged in an identical order to this cluster structure. Abbreviations: KM, Korean medicine; CM, conventional medicine; CM_KR, CM physiology in Korean; CM_EN, CM physiology in English; D, Document.
(2) Correlation analyses between metrics in KM and CM documents
|Num. of Evidence Needed|Count|Percentage|
|---|---|---|
|0 (Null Query)|301|11.78%|
|2|1078|42.18%|
|3|779|30.48%|
|4|398|15.56%|
|Total|2,556|100.00%|

Table 4: The distribution of the number of evidence required to answer multi-hop queries in MultiHop-RAG.

Related tasks can be categorized as retrieval-related tasks and generation-related tasks.
To analyze the correlations between human-evaluated document relatedness scores and embedding correlation coefficients, and between embedding correlation coefficients and token overlap coefficients, Pearson or Spearman correlation coefficients were calculated for each metric pair. Figure 3 provides scatter plots for showing the relationship between the metrics in KM, CM_KR, and CM_EN. Figure 3. Correlation of document embedding correlation coefficients with human-evaluated document relatedness, and token overlap coefficients in KM, CM_KR, and CM_EN. The figure displays regression plots for pairwise correlations between the metrics within KM, CM_KR, and CM_EN documents. (A) displays scatter plots with fitted regression lines showing the relationship between human-evaluated document relatedness (x-axis) and the embedding correlation coefficient (y-axis) for each of the three language models. Each point represents a document pair. (B) shows the relationship between the embedding correlation coefficients (x-axis) and token overlap coefficients (y-axis). The colors correspond to the different document sets: KM, CM_KR, and CM_EN. The regression lines and correlation coefficients represent the strength and direction of the relationships. The symbols 'r' and 'ρ' indicate the Pearson and Spearman correlation coefficients, respectively. Abbreviations: KM, Korean medicine; CM, Conventional medicine; CM_KR, CM physiology in Korean; CM_EN, CM physiology in English. For the first metric pair, Spearman's correlation coefficients were calculated between human-evaluated document relatedness scores and the embedding correlation coefficients.
Across all evaluated models (E5-mistral-7b-instruct, voyage-02, and text-embedding-ada-002), the correlation coefficients for CM were consistently higher than those for KM, indicating a stronger alignment with human judgment in the context of CM. Within CM, the coefficients for CM_EN were higher than those for CM_KR. Specifically, for the E5-mistral-7b-instruct model, the Spearman's correlation coefficient was 0.503 for KM, while it increased to 0.691 for CM_KR and was highest for CM_EN at 0.725. Similarly, voyage-02 presented a negative correlation for KM (-0.016), but it showed positive correlations of 0.376 for CM_KR and a notably stronger 0.670 for CM_EN. The text-embedding-ada-002 model demonstrated a coefficient of 0.167 for KM, with higher values of 0.563 for CM_KR and 0.625 for CM_EN. Notably, CM_EN exhibited statistically significant positive correlations across all models (0.725, 0.670, and 0.625, respectively), indicating a robust positive correlation in the context of CM and English compared to KM and Korean. In contrast, the correlations in KM were either slightly negative (-0.016) or weak (0.167), with the exception of the E5-mistral-7b-instruct model, which yielded a moderate 0.503.

Secondly, the Pearson correlation coefficients between the embedding correlation coefficients and the token overlap coefficients showed varied patterns. In CM_EN, the E5-mistral-7b-instruct model had a Pearson's correlation coefficient of 0.438, and voyage-02 had a coefficient of 0.518, both indicating moderate positive correlations. However, these correlations, including the one for text-embedding-ada-002, were all lower than those observed for human-evaluated document relatedness. For KM, significant positive correlations were observed with voyage-02 and text-embedding-ada-002, with coefficients of 0.429 and 0.501, respectively. These values are in stark contrast to the previously discussed Spearman's correlations between human-evaluated document relatedness scores and embedding correlation coefficients for KM (-0.016 and 0.167, respectively). This suggests that these models may prioritize token-level features of documents over their human-perceived meanings when generating vector representations. These findings are summarized in Table 4.
|Embedding model|Embedding correlation coefficient (Spearman's ρ): KM|Spearman's ρ: CM_KR|Spearman's ρ: CM_EN|Token overlap coefficient (Pearson's r): KM|Pearson's r: CM_KR|Pearson's r: CM_EN|
|---|---|---|---|---|---|---|
|E5-mistral-7b-instruct|0.503ᵇ|0.691ᶜ|0.725ᶜ|0.304|0.365|0.438ᵃ|
|voyage-02|-0.016|0.376|0.670ᶜ|0.429ᵃ|0.177|0.518ᵇ|
|text-embedding-ada-002|0.167|0.563ᶜ|0.625ᶜ|0.501ᵇ|0.343|0.335|

Superscripts indicate statistical significance in correlation analysis: ᵃ p < 0.05, ᵇ p < 0.005, ᶜ p < 0.001.
Abbreviations: KM, Korean medicine; CM, conventional medicine; CM_KR, CM physiology in Korean; CM_EN, CM physiology in English.

Overall, embedding correlations in CM_EN consistently demonstrate a higher alignment with human-evaluated document relatedness compared to KM and CM_KR. On the contrary, the embedding representation of KM tends to be determined by the explicit lexical similarity from token overlaps. These findings illustrate the insufficiency of LLM-based vector embeddings in capturing human-perceived conceptual meanings in niche domains, suggesting that their application in conventional RAG systems may result in suboptimal performance.

# Performance comparison of Prompt-RAG and existing models

# Main results

|Model|Relevance (mean score)|Readability (mean score)|Informativeness (mean score)|Response time (mean seconds)|
|---|---|---|---|---|
|ChatGPT-3.5|1.711|1.900|0.667ᵈ|-|
|ChatGPT-4|1.833|1.922|1.033ᵇ|-|
|C50-V300|1.733|1.733ᵃ|0.644ᵈ|6.454ᵈ|
|C100-V150|1.800|1.722|0.833ᵈ|7.033ᶜ|
|Prompt-RAG|1.956|1.900|1.589|24.840|

Superscripts indicate statistical significance in comparison to the Prompt-RAG model.

Firstly, we compared the performance of our prompt-RAG model with that of ChatGPT to examine its proficiency in the KM domain. Prompt-RAG achieved mean scores of 1.956 for relevance and 1.589 for informativeness, surpassing ChatGPT-3.5 (1.711 for relevance, 0.667 for informativeness) and ChatGPT-4 (1.833 for relevance, 1.033 for informativeness). It is noteworthy that our model's informativeness score was significantly higher, being more than double that of ChatGPT-3.5 and exceeding that of ChatGPT-4 by over 1.5 times. In terms of readability, our model scored 1.900, equal to ChatGPT-3.5's score and slightly lower than ChatGPT-4's (1.922). Overall, our model outperformed the ChatGPT baselines, especially GPT-3.5, in generating domain-specific answers related to KM.

Further, we explored whether the prompt-RAG approach could produce better answers than the conventional chunk retrieval method. On all criteria, our model scored higher than C50-V300 and C100-V150.
The readability scores of our model were significantly higher compared to C100-V150, and especially for informativeness, our model obtained statistically significantly higher scores: approximately 2.5 times that of C50-V300 and around 1.9 times that of C100-V150. However, our model was significantly slower in terms of average response time, taking an additional 18.356 seconds compared to C50-V300 and 17.806 seconds more than C100-V150. These results indicate that the Prompt-RAG model excelled in answer quality, while its answer-generation latency was larger than that of the chunk retrieval method.

# Comparison by types of questions

To assess the overall quality and applicability of our prompt-RAG, we conducted a comparative analysis of its performance against the other models across different question types: direct retrieval, comprehensive understanding, and functional robustness. The summed scores for relevance, readability, and informativeness from the three evaluators were averaged for each question and each question type, respectively. The results by question type are illustrated in Figure 4.

Figure 4. Model performance comparison across different question types.
(A) Direct retrieval questions. (B) Comprehensive understanding questions. (C) Functional robustness questions. The asterisks indicate statistically significant differences in comparison to the Prompt-RAG model.
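The score aggregation referenced before Figure 4 can be sketched as follows. It assumes each criterion (relevance, readability, informativeness) is rated on a 0-2 scale, consistent with the reported 6-point maximum, and that the evaluators' summed scores are averaged per question and then per question type; these assumptions are for illustration only.

```python
# Sketch of the score aggregation (assumes a 0-2 scale per criterion and three evaluators).
from collections import defaultdict
from statistics import mean

def aggregate_scores(ratings: dict, question_types: dict) -> dict:
    """ratings[(question_id, evaluator)] = {"relevance": r, "readability": d, "informativeness": i};
    question_types[question_id] = one of {"direct retrieval", "comprehensive understanding",
    "functional robustness"}. Returns the mean summed score (max 6) per question type."""
    per_question = defaultdict(list)
    for (qid, _evaluator), scores in ratings.items():
        per_question[qid].append(sum(scores.values()))       # sum the three criteria per evaluator
    question_means = {qid: mean(v) for qid, v in per_question.items()}  # average over evaluators
    by_type = defaultdict(list)
    for qid, m in question_means.items():
        by_type[question_types[qid]].append(m)
    return {qtype: mean(v) for qtype, v in by_type.items()}  # average over questions per type
```

With up to 2 points per criterion, a question's aggregated score can reach at most 6, matching the per-type averages reported below.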
Our model reached an average score of 5.5 for direct retrieval, 5.389 for comprehensive understanding, and 5.444 for functional robustness out of 6, outperforming all other models in every question type. Notably, its scores for direct retrieval were significantly higher than those of all the other models, and its scores for comprehensive understanding were significantly higher than those of the chunk retrieval models and ChatGPT-3.5.
This suggests not only our model's strong retrieval capability but also its comprehension-based answering performance, which is comparable to that of ChatGPT-4.
A retrieval-related task focuses on retrieving relevant text from the knowledge base, while a generation-related task focuses on generating high-quality responses given the retrieved text. In this section, we showcase two use cases for each task where MultiHop-RAG can be employed.

# 4.1 Retrieval-related Task

An important design choice in an RAG system is the selection of the embedding model. An embedding model converts data into numerical vectors and subsequently stores these vectors in embedding databases. In this experiment, we evaluate different embedding models by examining their retrieval quality.

Experiment Setup: We implement an RAG system using the LlamaIndex framework (Liu, 2022). We partition the documents in the MultiHop-RAG knowledge base into chunks, each consisting of 256 tokens. We then convert the chunks using an embedding model and save the embeddings into a vector database. Similarly, in the retrieval step, we convert a query using the same embedding model and retrieve the top-K chunks that have the highest cosine similarity with the query embedding. In this experiment, we test a variety of embedding models, including the ada embeddings by OpenAI (text-embedding-ada-002, text-search-ada-query-001), voyage-02 (https://www.voyageai.com/), llm-embedder (Zhang et al., 2023), bge-large-en-v1.5 (Xiao et al., 2023), jina-embeddings-v2-base-en (Günther et al., 2023), e5-base-v2 (Wang et al., 2022), and instructor-large (Su et al., 2023). NULL queries are excluded from this experiment because they have no matching evidence. Additionally, we include a Reranker module, bge-reranker-large (Xiao et al., 2023), to examine retrieval performance: after retrieving 20 related chunks with the embedding model, the reranker reorders them and keeps the final top-K chunks.

# 4.2 Generation-related Task

The underlying LLMs play a crucial role in generating responses in an RAG system. In this experiment, we evaluate the quality of generated responses under two different settings. In the first setting, we employ the best-performing retrieval model, namely voyage-02 with bge-reranker-large, as indicated in Table 5, to retrieve the top-K texts and then feed them into the LLM. In the second setting, we use the ground-truth evidence associated with each query as the retrieved text for the LLM. This setting represents a ceiling performance for testing the LLM's response capabilities, as it utilizes the actual evidence.

Experiment Setup: In the first experiment, we retrieve the top-6 chunks so that the total length of the retrieved text does not exceed 2,048 tokens. All queries in MultiHop-RAG are tested in the experiment.
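For illustration, below is a minimal, self-contained sketch of the retrieval step described above: fixed-size chunking, embedding, cosine-similarity top-K retrieval, and a hook where a reranker would reorder the candidates. The `embed` function is a toy stand-in for whichever embedding model is used, and the chunking is plain whitespace tokenization rather than the tokenizer-based chunking an actual LlamaIndex pipeline would perform.

```python
# Minimal sketch of the chunk-retrieval step (not the MultiHop-RAG authors' code).
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy deterministic embedding so the sketch runs end to end; in practice this would be a
    # call to one of the embedding models listed above (e.g., text-embedding-ada-002, voyage-02).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def chunk(document: str, chunk_size: int = 256) -> list[str]:
    # Fixed-size chunking; the experiment uses 256-token chunks (here: whitespace tokens).
    tokens = document.split()
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), chunk_size)]

def retrieve(query: str, chunks: list[str], top_k: int = 20) -> list[str]:
    # Rank chunks by cosine similarity with the query embedding and return the top-K.
    chunk_vecs = np.stack([embed(c) for c in chunks])
    query_vec = embed(query)
    scores = chunk_vecs @ query_vec  # unit-normalized vectors, so dot product = cosine similarity
    top_idx = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in top_idx]

if __name__ == "__main__":
    documents = ["news article text one ...", "news article text two ..."]
    all_chunks = [c for d in documents for c in chunk(d)]
    candidates = retrieve("example multi-hop query", all_chunks, top_k=20)
    # A reranker such as bge-reranker-large would then reorder `candidates` and keep the
    # final top-K (top-6 in the generation experiment) before they are fed to the LLM.
```

In the generation setting of Section 4.2, the final top-6 chunks would simply be concatenated into the LLM prompt alongside the query.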
# Discussion

In this study, our exploration of LLM-based vector embeddings revealed marked limitations within the KM domain. The analysis showed that vector embeddings are heavily influenced by language and token overlap, which are not always compatible with human reasoning, potentially leading to suboptimal performance when used in RAG methods. To address these shortcomings, we introduced Prompt-RAG, a natural language prompt-based RAG methodology that represents a strategic shift from conventional RAG built on vector embeddings. The design stems from recognizing the limitations inherent in LLMs: it leverages the linguistic capabilities of an LLM while working around its constraints. As a result, our QA chatbot equipped with Prompt-RAG showed promising outcomes in terms of relevance, readability, and informativeness in the KM domain. Moreover, it handled a variety of KM-related question types, demonstrating its practical stability.

The potential of Prompt-RAG is substantial. Importantly, our model is not confined to the KM domain but can be applied to other marginal domains that require RAG. GPT is recognized for its emergent properties, which can help it deal with highly abstract, contextual, or previously unseen expressions. This facilitates high-quality retrieval through a table of contents (ToC) that captures the comprehensive and essential context of a document, leading to desirable responses across various domains.
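To make the ToC-driven retrieval concrete, here is a minimal sketch of the idea rather than the authors' implementation: the model is shown the table of contents in a natural language prompt, asked which headings it needs, and the selected sections are then passed back as context for answer generation. The `ask_llm` function is a placeholder for a GPT chat-completion call, and the heading names and output format are hypothetical.

```python
# Minimal sketch of ToC-based heading selection (an illustration, not the authors' implementation).

def ask_llm(prompt: str) -> str:
    # Placeholder for a GPT chat-completion call; returns a canned answer so the sketch runs.
    return "Qi and Blood; The Five Viscera"  # hypothetical KM headings

def select_headings(question: str, toc: list[str]) -> list[str]:
    # Show the model the table of contents and ask, in natural language, which sections it needs.
    prompt = (
        "Table of contents:\n"
        + "\n".join(f"- {h}" for h in toc)
        + f"\n\nQuestion: {question}\n"
        + "List the headings (separated by semicolons) whose sections are needed to answer the question."
    )
    answer = ask_llm(prompt)
    chosen = [h.strip() for h in answer.split(";")]
    return [h for h in chosen if h in toc]  # keep only headings that actually exist in the ToC

def answer_with_selected_sections(question: str, sections: dict[str, str]) -> str:
    # Retrieve the bodies of the selected headings and use them as context for the final answer.
    headings = select_headings(question, list(sections.keys()))
    context = "\n\n".join(sections[h] for h in headings)
    return ask_llm(f"Context:\n{context}\n\nAnswer the question: {question}")
```

A production version would replace `ask_llm` with an actual GPT API call and would likely constrain the output format (for example, numbered heading indices) to make parsing robust.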
Its applicability and efficiency can expand further as natural language processing techniques continue to develop. As the cognitive abilities of LLMs advance, we expect Prompt-RAG to become an even more powerful tool, relying fully on the capabilities of the LLM itself. Its adaptability, derived from the ability to understand and process unfamiliar or uncertain concepts and terminology, poses a challenge for conventional vector embedding-based RAG. For example, short queries are known to undermine the performance of vector embedding-based information retrieval due to their lack of context, even though they are the predominant form of search query on the internet. Handling queries through natural language prompts with GPT allows for a nuanced understanding of queries and thus yields more detailed, accurate, and relevant retrieval. In addition, Prompt-RAG can be much more efficient for model updates, saving the expense and time of regenerating document embeddings, especially for larger documents. These properties stand out in environments with dynamic data, since the method can be applied without repeated retraining or re-embedding.

However, we acknowledge that Prompt-RAG has certain limitations. Firstly, the requirement for a ToC may pose an obstacle, depending on the type or structure of the document. Secondly, the recurring latency and expense of running a generative model, or of making Application Programming Interface (API) calls for heading selection, result in longer response times and higher costs. These issues are expected to lessen naturally as the generative performance of LLMs continues to improve and model pricing becomes more economical, as has been the trend. Explorations and developments in model compression and lightweight artificial intelligence for resource-constrained devices have recently been encouraged by the popularization of edge devices. This trend appears to be extending to natural language processing as well, which would help address the latency issue of our model.
The rapid advancements in generative models suggest that the limitations of our model will become increasingly less problematic in the foreseeable future, likely sooner than anticipated.
# Conclusion

We propose Prompt-RAG as an alternative to conventional vector embedding-based RAG methods, addressing the limitations of LLM-based vector embeddings in niche domains, where inconsistencies with human reasoning can lead to suboptimal performance. The QA chatbot built on Prompt-RAG achieved notable outcomes in our study on KM, showing its potential as a versatile and effective tool in line with the rapidly evolving LLM field. While there is room for improvement, its practical benefits are expected to grow through internal and external development.
By offering a new paradigm for RAG, it contributes to the advancement of information retrieval in specific domains with remarkable ease.