As expected, improved retrieval performance in both single and multi-step settings leads to strong overall improvements of up to 3%, 17%, and 1% F1 on MuSiQue, 2WikiMultiHopQA, and HotpotQA, respectively, using the same QA reader. Notably, single-step HippoRAG is on par with or outperforms IRCoT while being 10-30 times cheaper and 6-13 times faster during online retrieval (Appendix G).
To determine whether using GPT-3.5 is essential for building our KG, we replace it with an end-to-end OpenIE model, REBEL [26], and an instruction-tuned LLM, Llama-3 [1]. As shown in Table 5 (row 2), building our KG with REBEL results in large performance drops, underscoring the importance of LLM flexibility. Specifically, GPT-3.5 produces twice as many triples as REBEL, indicating REBEL's bias against triples with general concepts, which leaves many useful associations behind. In terms of open-source LLMs, Table 5 (rows 3-4) shows that the performance of Llama-3 8B is comparable to GPT-3.5, although its 70B counterpart performs worse. This surprising behavior is due to the 70B model's production of ill-formatted outputs, which result in the loss of around 20% of the passages, compared to about 4% for the 8B model and less than 1% for GPT-3.5. The strong performance of Llama-3 8B is encouraging because it offers a cheaper alternative for indexing over large corpora. The statistics for these OpenIE alternatives can be found in Appendix C.
To examine how much of our results are due to the strength of PPR, we replace the PPR output with the query node probability $\vec{n}$ multiplied by node specificity values (Table 5, row 5) and with a version of this baseline that also distributes a small amount of probability to the direct neighbors of each query node (row 6). First, we find that PPR is a much more effective method for including associations for retrieval on all three datasets than either simple baseline. It is interesting to note that adding the neighborhood of the $R_q$ nodes without PPR leads to worse performance than only using the query nodes themselves.
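For concreteness, the following is a minimal sketch of the three scoring schemes compared above, written with networkx; the toy graph, the node specificity attribute, and the spread parameter are illustrative stand-ins rather than our actual implementation. Passage scores would then follow by aggregating node scores over the passages in which each node appears.

```python
import networkx as nx

# Hypothetical KG with node specificity stored as a node attribute.
kg = nx.Graph()
kg.add_nodes_from([
    ("Alhandra", {"specificity": 1.0}),
    ("Vila de Xira", {"specificity": 0.5}),
    ("Portugal", {"specificity": 0.1}),
])
kg.add_edges_from([("Alhandra", "Vila de Xira"), ("Vila de Xira", "Portugal")])

query_nodes = {"Alhandra": 1.0}  # query node probabilities (the PPR seed)

def baseline_query_only(kg, query_nodes):
    """Row 5: query node probability times node specificity, no graph traversal."""
    return {n: p * kg.nodes[n]["specificity"] for n, p in query_nodes.items()}

def baseline_with_neighbors(kg, query_nodes, spread=0.1):
    """Row 6: additionally spread a small amount of probability to direct neighbors."""
    scores = baseline_query_only(kg, query_nodes)
    for n, p in query_nodes.items():
        for nbr in kg.neighbors(n):
            scores[nbr] = scores.get(nbr, 0.0) + spread * p * kg.nodes[nbr]["specificity"]
    return scores

def ppr(kg, query_nodes, alpha=0.85):
    """Full Personalized PageRank seeded on the query nodes."""
    return nx.pagerank(kg, alpha=alpha, personalization=query_nodes)
```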
As seen in Table 5 (rows 7-8), node specificity obtains considerable improvements on MuSiQue and HotpotQA and yields almost no change on 2WikiMultiHopQA. This is likely because 2WikiMultiHopQA relies on named entities, which show little difference in terms of term weighting. In contrast, synonymy edges have the largest effect on 2WikiMultiHopQA, suggesting that noisy entity standardization is useful when most relevant concepts are named entities, and that improvements to synonymy detection could lead to stronger performance on other datasets.
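As a reminder of what these two components add to the hippocampal index, the sketch below assumes node specificity is computed as the inverse of the number of passages a phrase node appears in and that synonymy edges connect phrase nodes whose embedding cosine similarity exceeds a threshold; both the threshold value and the function names are illustrative.

```python
import numpy as np

def node_specificity(node_to_passages):
    """IDF-like signal: the inverse of the number of passages each phrase node
    appears in, so rarer (more specific) nodes receive more weight."""
    return {node: 1.0 / len(passages) for node, passages in node_to_passages.items()}

def add_synonymy_edges(kg, embeddings, threshold=0.8):
    """Connect phrase-node pairs whose embedding cosine similarity exceeds a
    threshold with a 'synonym' edge, approximating noisy entity standardization."""
    nodes = list(embeddings)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            va, vb = np.asarray(embeddings[a]), np.asarray(embeddings[b])
            sim = float(va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
            if sim >= threshold:
                kg.add_edge(a, b, relation="synonym")
```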
A major advantage of HippoRAG over conventional RAG methods in multi-hop QA is its ability to perform multi-hop retrieval in a single step. We demonstrate this by measuring the percentage of queries for which all the supporting passages are retrieved successfully, a feat that can only be accomplished through successful multi-hop reasoning. Table 8 in Appendix D shows that the gap between our method and ColBERTv2, using the top-5 passages, increases even further, from 3% to 6% on MuSiQue and from 20% to 38% on 2WikiMultiHopQA, suggesting that large improvements come from obtaining all supporting documents rather than from achieving partial retrieval on more questions.
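This all-or-nothing retrieval metric is simple to compute; a minimal sketch is shown below, where the variable names and data layout are illustrative.

```python
def all_supporting_recall_at_k(retrieved, supporting, k=5):
    """Fraction of queries whose *entire* set of supporting passages appears in the
    top-k retrieved passages; a query counts as a success only when every gold
    passage needed for the multi-hop chain is recovered."""
    hits = 0
    for query_id, gold in supporting.items():
        top_k = set(retrieved[query_id][:k])
        if set(gold) <= top_k:
            hits += 1
    return hits / len(supporting)
```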
We further illustrate HippoRAG's unique single-step multi-hop retrieval ability through the first example in Table 6. In this example, even though Alhandra was not mentioned in Vila de Xira's passage, HippoRAG can directly leverage Vila de Xira's connection to Alhandra as his place of birth to determine its importance, something that standard RAG methods would be unable to do directly. Additionally, even though IRCoT can also solve this multi-hop retrieval problem, it is, as shown in Appendix G, 10-30 times more expensive and 6-13 times slower than our method in terms of online retrieval, arguably the most important factor when it comes to serving end users.
The second example in Table 6, also present in Figure 1, shows a type of question that is trivial for informed humans but out of reach for current retrievers without further training. This type of question, which we call a path-finding multi-hop question, requires identifying one path between a set of entities when many paths exist to explore, rather than following a specific path as in standard multi-hop questions. More specifically, a simple iterative process can retrieve the appropriate passages for the first question by following the single path set by Alhandra's one place of birth, as seen in IRCoT's perfect performance. However, an iterative process would struggle to answer the second question given the many possible paths to explore, whether through professors at Stanford University or through professors working on the neuroscience of Alzheimer's. It is only by associating disparate information about Thomas Südhof that someone who knows about this professor would be able to answer this question easily. As seen in Table 6, both ColBERTv2 and IRCoT fail to extract the necessary passages since they cannot access these associations. On the other hand, HippoRAG leverages the web of associations in its hippocampal index and its graph search algorithm to determine that Professor Thomas Südhof is relevant to this query and retrieves his passages appropriately. More examples of these path-finding multi-hop questions can be found in our case study in Appendix E.
It is well accepted, even among skeptical researchers, that the parameters of modern LLMs encode a remarkable amount of world knowledge [2, 10, 17, 21, 24, 31, 47, 62], which can be leveraged by an LLM in flexible and robust ways [64, 65, 74]. Nevertheless, our ability to update this vast knowledge store, an essential part of any long-term memory system, is still surprisingly limited. Although many techniques to update LLMs exist, such as standard fine-tuning, model unlearning, and model editing [12, 37, 38, 39, 40, 76], it is clear that no methodology has emerged as a robust solution for continual learning in LLMs [20, 35, 78]. On the other hand, using RAG methods as a long-term memory system offers a simple way to update knowledge over time [28, 33, 50, 56].
More sophisticated RAG methods, which perform multiple steps of retrieval and generation from an LLM, are even able to integrate information across new or updated knowledge elements [30, 49, 55, 61, 69, 71, 73], another crucial aspect of long-term memory systems. As discussed above, however, this type of online information integration is unable to solve the more complex knowledge integration tasks that we illustrate with our path-finding multi-hop QA examples. Some other methods, such as RAPTOR [54], MemWalker [7], and GraphRAG [14], integrate information during the offline indexing phase similarly to HippoRAG and might be able to handle these more complex tasks. However, these methods integrate information by summarizing knowledge elements, which means that the summarization process must be repeated any time new data is added. In contrast, HippoRAG can continuously integrate new knowledge by simply adding edges to its KG, as sketched below.
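To make this contrast concrete, the sketch below shows the kind of incremental update we have in mind: triples extracted from a newly indexed passage are appended as nodes and edges to the existing graph, with no summaries to recompute. The triple format and graph library here are illustrative, not our exact implementation.

```python
import networkx as nx

def integrate_new_passage(kg: nx.Graph, passage_id: str, triples):
    """Incrementally add newly extracted (subject, relation, object) triples to the KG.
    Existing nodes and edges are left untouched, so the update cost is proportional
    to the new content only."""
    for subj, rel, obj in triples:
        kg.add_node(subj)
        kg.add_node(obj)
        kg.add_edge(subj, obj, relation=rel, source=passage_id)

# Indexing one new passage only touches the triples it contributes.
kg = nx.Graph()
integrate_new_passage(kg, "doc-42", [("Thomas Südhof", "works at", "Stanford University")])
```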
Context lengths for both open- and closed-source LLMs have increased dramatically in the past year [9, 13, 16, 46, 51]. This scaling trend seems to indicate that future LLMs could perform long-term memory storage within massive context windows. However, the viability of this future remains largely uncertain given the many engineering hurdles involved and the apparent limitations of long-context LLMs, even within current context lengths [32, 34, 77].
Combining the strengths of language models and knowledge graphs has been an active research direction for many years, both for augmenting LLMs with a KG in different ways [36, 63, 66] and for augmenting KGs, either by distilling knowledge from an LLM's parametric knowledge [5, 67] or by using LLMs to parse text directly [6, 22, 75]. In an exceptionally comprehensive survey, Pan et al. [43] present a roadmap for this research direction and highlight the importance of work that synergizes these two important technologies [29, 57, 72, 80]. Like these works, HippoRAG is a strong and principled example of the synergy we must strike between these two technologies, combining the power of LLMs for knowledge graph construction with the strengths of structured knowledge and graph search for improved augmentation of an LLM's capacities.
Our proposed neurobiologically principled methodology, although simple, already shows promise for overcoming the inherent limitations of standard RAG systems while retaining their advantages over parametric memory. HippoRAG's knowledge integration capabilities, demonstrated by its strong results on path-following multi-hop QA and its promise on path-finding multi-hop QA, together with its dramatic efficiency improvements and continuously updating nature, make it a powerful middle-ground framework between standard RAG methods and parametric memory and offer a compelling solution for long-term memory in LLMs. Nevertheless, several limitations can be addressed in future work to enable HippoRAG to achieve this goal better.
First, we note that all components of HippoRAG are currently used off-the-shelf without any extra training. There is therefore much room to improve our method's practical viability through component-specific fine-tuning. This is evident in the error analysis discussed in Appendix F, which shows that most errors made by our system are due to NER and OpenIE, both of which could benefit from direct fine-tuning. Given that the remaining errors are graph search errors, also analyzed in Appendix F, we note that several avenues for improvement over simple PPR exist, such as allowing relations to guide graph traversal directly. Finally, and perhaps most importantly, HippoRAG's scalability still calls for further validation. Although we show that Llama-3 can obtain performance similar to closed-source models and thus reduce costs considerably, we have yet to empirically demonstrate the efficiency and efficacy of our synthetic hippocampal index as its size grows well beyond current benchmarks.
The authors would like to thank colleagues from the OSU NLP group and Percy Liang for their thoughtful comments. This research was supported in part by NSF OAC 2112606, NIH R01LM014199, ARL W911NF2220144, and Cisco. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein.