# Proof for Equation 5

Proof. The transformation is motivated by Xie et al. [2021] and we apply it to the analysis of RAG:

\( p(x_i|R, x_{1:i-1}) = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz \) (20)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \frac{p(R, x_{1:i-1}|z)\, p(z)}{p(R, x_{1:i-1})}\, dz \) (21)

\( \propto \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(R, x_{1:i-1}|z)\, p(z)\, dz \) (22)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)}\, p(z)\, dz \) (23)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz, \quad r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \) (24)

# Proof for Equation 6

Proof. For \( p(R, x_{1:i-1}|z) \) in \( r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \), we can make a further derivation:

\( p(R, x_{1:i-1}|z) = p(x_{1:i-1}|R, z)\, p(R|z) \) (25)

According to the definition of the latent variable model in the analysis of in-context learning from Xie et al. [2021], which views latent variable inference as a Hidden Markov Model (HMM) in which the latent concept z determines the transition probability matrix over the HMM hidden states h, we can get the following derivations:

\( p(x_{1:i-1}|R, z)\, p(R|z) = \sum_{h} p(x_{1:i-1}|h, z)\, p(h|R, z)\, p(R|z) \) (26)

\( r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \) (27)

\( = \log\frac{\sum_{h} p(x_{1:i-1}|h, z)\, p(h|R, z)}{\sum_{h} p(x_{1:i-1}|h, z^*)\, p(h|R, z^*)} + \log\frac{p(R|z)}{p(R|z^*)} \) (28)

Based on previous work Xie et al. [2021], Zhang et al.
[2023], we make the following assumptions:
See downlink multi-user multiple input, multiple output (DL-MU-MIMO) (in 3.2). (0.343)

- unknown_definition_2: NOTE See IETF RFC 3610. (0.376)
- unknown_definition_13: NOTE For the purposes of this Standard, there is at most one portal in a given extended service set's (ESS's) infrastructure. In an implementation, a single logical portal function may be provided by multiple devices that provide integration services for the ESS. How such multiple devices coordinate to appear as a single logical portal is implementation dependent. (0.337)
- unknown_definition_9: NOTE See IETF RFC 4282. (0.370)
- unknown_definition_18: NOTE These uses include calculation of transmit steering, calculation of recommended modulation and coding scheme (MCS), and calculation of calibration parameters. (0.334)
- service hash: A value used for representing a service. This value is formed from a hash of the service name. (0.322)
- master session key (MSK): Keying material that is derived between the Extensible Authentication Protocol (EAP) peer and exported by the EAP method to the Authentication Server (AS). (0.491)
- peer mesh station (STA): A mesh STA to which a mesh peering has been established. (0.460)
- peer-to-peer link: A direct link within a quality-of-service (QoS) basic service set (BSS), a tunneled direct-link setup (TDLS) link, or a station-to-station (STA-to-STA) communication in an independent basic service set (IBSS). (0.401)
- IEEE 802.1X authentication: Extensible Authentication Protocol (EAP) authentication transported by the IEEE 802.1X protocol. (0.476)
- peer-to-peer traffic specification (PTP TSPEC): The quality-of-service (QoS) characteristics of a data flow between non-access point (non-AP) QoS stations (STAs). (0.373)
- master session key (MSK): Keying material that is derived between the Extensible Authentication Protocol (EAP) peer and exported by the EAP method to the Authentication Server (AS). (0.451)
# Assumption 1

All tokens can be predicted, which means that for every token x there is some hidden state h that lower-bounds it: \( p(x|h, z^*) > c_1 > 0 \).

# Assumption 2

The delimiter is an important distinguishing signal between the passages r in the retrieved texts R. For any delimiter hidden state \( h^{d} \) and any other hidden state h, there are upper and lower bounds on the transition probability from h to \( h^{d} \): \( 0 \le c_2 \le p(h^{d}|h, z) \le c_3 \).

Then we can get:

\( r(z) \le \log\frac{\sum_{h} 1 \cdot p(h|R, z)}{\sum_{h} c_1 \cdot p(h|R, z^*)} + \log\frac{p(R|z)}{p(R|z^*)} = -\log c_1 + \log\frac{p(R|z)}{p(R|z^*)} = -\log c_1 + \log\frac{\prod_{i=1}^{n} p(r_i|r_{1:i-1}, z)}{\prod_{i=1}^{n} p(r_i|r_{1:i-1}, z^*)}. \)

So we can get:

\( p(R, x_{1:i-1}|z) = p(x_{1:i-1}|R, z)\, p(R|z) \approx O(1) \prod_{i=1}^{n} p(r_i|r_{1:i-1}, z) = \prod_{i=1}^{n} \sum_{h^{d}_{i-1} \in D} p(r_i|h^{d}_{i-1}, z)\, p(h^{d}_{i-1}|r_{1:i-1}, z), \) (34)

in which \( r_i \) is a passage in the retrieved texts list R and \( h^{d}_{i-1} \) is the hidden state for the delimiter between \( r_{i-1} \) and \( r_i \) in R. According to Assumption 2, \( p(h^{d}_{i-1}|r_{1:i-1}, z) = O(1) \), so Equation 34 is approximately equal to \( \prod_{i=1}^{n} O(1)\, p(r_i|z) \), which means that \( p(R, x_{1:i-1}|z) \approx \prod_{i=1}^{n} O(1)\, p(r_i|z) \). So we can get that:

\( r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \approx \log\frac{\prod_{i=1}^{n} O(1)\, p(r_i|z)}{\prod_{i=1}^{n} O(1)\, p(r_i|z^*)} \rightarrow O(1) + n \cdot \frac{1}{n}\sum_{i=1}^{n}\log\frac{p(r_i|z)}{p(r_i|z^*)} = O(1) + n \cdot \mathbb{E}_{r \sim p_R}\log\frac{p(r|z)}{p(r|z^*)} \)

\( \propto \int p_R(r)\log\frac{p(r|z)}{p(r|z^*)}\, dr = \int p_R(r)\log\frac{p(r|z)}{p_R(r)}\, dr - \int p_R(r)\log\frac{p(r|z^*)}{p_R(r)}\, dr \)

\( = -\Big(\underbrace{KL(p_R(r)\,\|\,p(r|z))}_{\text{benefit, denote as } \Omega} - \underbrace{KL(p_R(r)\,\|\,p(r|z^*))}_{\text{detriment, denote as } \Upsilon}\Big), \) (38)

where \( p_R(\cdot) \) is the distribution of the retrieved texts and \( p(\cdot) \) is the distribution of the LLMs' pre-trained knowledge.

# Effect of r(z) in Knowledge Fusion

Recapping Equation 38, we find that r(z) actually regulates the proportion between the LLMs' pre-trained knowledge and the retrieved knowledge in the knowledge fusion of the RAG prediction:

- The more benefit outweighs detriment, the more \( r(z) \rightarrow -\infty \) and \( \exp(r(z)) \rightarrow 0 \) for all \( z \neq z^* \). This indicates that concepts z sampled from the LLMs' space contribute little to \( p(x_i|R, x_{1:i-1}) \).
When \( z = z^* \), \( \exp(r(z^*)) = 1 \), which means that the latent variable model concentrates more on z∗ sampled from the retrieved texts. As r(z) decreases, the proportion of retrieved knowledge in the fusion becomes larger and larger.
- The more detriment outweighs benefit, the more \( r(z) \rightarrow +\infty \) and \( \exp(r(z)) \rightarrow +\infty \) for all \( z \neq z^* \), while for \( z = z^* \), \( \exp(r(z^*)) = 1 \). This indicates that concepts z sampled from the LLMs' space contribute more and more than z∗ sampled from the retrieved texts as r(z) increases.

# Proof for Theorem 1

Proof. Recapping Equation 2, which describes the knowledge fusion in RAG via the latent variable model:

\( p(x_i|R, x_{1:i-1}) = \underbrace{\int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz}_{\text{denote as } \Phi} + \underbrace{p(x_i|R, x_{1:i-1}, z^*)\, p(z^*|R, x_{1:i-1})}_{\text{denote as } \Lambda} \) (39)

Since the latent concept z∗ determines the hidden states h, Λ can be transformed as:

\( p(x_i|R, x_{1:i-1}, z^*)\, p(z^*|R, x_{1:i-1}) = \sum_{h} p(x_i|x_{1:i-1}, h, z^*)\, p(h|R, x_{1:i-1}, z^*)\, p(z^*|R, x_{1:i-1}). \) (40)

Let \( p(z^*|R, x_{1:i-1}) = \beta \):

\( p(x_i|R, x_{1:i-1}) = \Phi + \beta \sum_{h} p(x_i|x_{1:i-1}, h, z^*)\, p(h|R, x_{1:i-1}, z^*) \) (41)

\( p_R(x_i|x_{1:i-1}) = \sum_{h} p(x_i|x_{1:i-1}, h, z^*)\, p_R(h|x_{1:i-1}) \) (42)

\( p(h|R, x_{1:i-1}, z^*) \propto p(x_{1:i-1}|h, z^*)\, p(h|R, z^*) \) (43)

\( p_R(h|x_{1:i-1}) \propto p(x_{1:i-1}|h, z^*)\, p_R(h) \) (44)

Let the probabilities \( p(x_i|x_{1:i-1}, h, z^*)\, p(x_{1:i-1}|h, z^*) \) in Equation 40 be represented as a matrix \( W \in \mathbb{R}^{|X| \times |H|} \) for all possible \( x_i \in X \) and \( h \in H \), let \( p(h|R, z^*) \) in Equation 43 be represented as B, and let \( p_R(h) \) in Equation 44 be represented as \( u \in \mathbb{R}^{|H|} \).
We use the 1-norm to calculate the difference between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \), which can be formalized as:

\( \|p(x_i|R, x_{1:i-1}) - p_R(x_i|x_{1:i-1})\|_1 = \|\Phi + \beta W B - W u\|_1. \) (45)

Then, according to the triangle inequality of the 1-norm, the difference between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \) is bounded by:

\( \|\Phi\|_1 - \|\beta W B - W u\|_1 \le \|\Phi + \beta W B - W u\|_1 \le \|\Phi\|_1 + \|\beta W B - W u\|_1. \) (46)

We further analyze \( \|\beta W B - W u\|_1 \), where \( W_i \) denotes the i-th row of W:

\( \|\beta W B - W u\|_1 = \sum_{i=1}^{|X|} \big| W_i^{T} (\beta B - u) \big| \) (47)

\( = \sum_{i=1}^{|X|} \Big| \sum_{j=1}^{|H|} W_{ij}\, (\beta B - u)_j \Big| \) (48)

\( \le \sum_{i=1}^{|X|} \sum_{j=1}^{|H|} W_{ij}\, \big|(\beta B - u)_j\big| \) (49)

\( = \sum_{j=1}^{|H|} \Big( \sum_{i=1}^{|X|} W_{ij} \Big) \big|(\beta B - u)_j\big| \) (50)

\( = \sum_{j=1}^{|H|} \big|(\beta B - u)_j\big| \) (51)

\( = \|\beta B - u\|_1 \) (52)
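The chain in Equations 47–52 relies only on W being entry-wise nonnegative with columns that sum to one. The following toy numerical check (randomly generated matrices, not from the paper) confirms that \( \|\beta W B - W u\|_1 \le \|\beta B - u\|_1 \) for such W.

```python
import numpy as np

rng = np.random.default_rng(0)
X, H = 50, 20          # |X| token vocabulary size, |H| number of hidden states (arbitrary)

# W[i, j] stands in for p(x_i | ..., h_j, z*): nonnegative, each column sums to 1.
W = rng.random((X, H))
W /= W.sum(axis=0, keepdims=True)

B = rng.dirichlet(np.ones(H))   # stands in for p(h | R, z*)
u = rng.dirichlet(np.ones(H))   # stands in for p_R(h)
beta = 0.7                      # stands in for p(z* | R, x_{1:i-1})

lhs = np.abs(W @ (beta * B) - W @ u).sum()   # ||beta*W*B - W*u||_1
rhs = np.abs(beta * B - u).sum()             # ||beta*B - u||_1
assert lhs <= rhs + 1e-12
print(f"{lhs:.4f} <= {rhs:.4f}")
```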
Then:

\( \|\beta B - u\|_1 = 2\, TV(p_R(\cdot),\ \beta\, p(\cdot|R, z^*)) \quad \text{(TV is the Total Variation Distance)} \) (53)

\( \le 2\beta\, TV(p_R(\cdot),\ p(\cdot|R, z^*)) \) (54)

\( \le \sqrt{2\, KL(p_R(\cdot)\,\|\,p(\cdot|R, z^*))} \quad \text{(Pinsker's Inequality)} \) (55)

\( \le \sqrt{2\, KL(p_R(\cdot)\,\|\,p(\cdot|z^*))} \) (56)

\( \approx \sqrt{2\, KL(p_R(r)\,\|\,p(r|z^*))}, \) (57)

in which r is a passage in R, and \( KL(p_R(r)\,\|\,p(r|z^*)) \) is actually the detriment in Equation 9. Recapping Equation 46, we can get:

\( \|\Phi + \beta W B - W u\|_1 \le \|\Phi\|_1 + \sqrt{2\, KL(p_R(r)\,\|\,p(r|z^*))} \) (58)

Since \( 0 \le \|\beta W B - W u\|_1 \le \sqrt{2\, KL(p_R(\cdot)\,\|\,p(\cdot|z^*))} \) and \( \|\Phi + \beta W B - W u\|_1 \ge \|\Phi\|_1 - \|\beta W B - W u\|_1 \), the lower bound for \( \|\Phi + \beta W B - W u\|_1 \) is included in

\( \big[\, \|\Phi\|_1 - \sqrt{2\, KL(p_R(\cdot)\,\|\,p(\cdot|z^*))},\ \|\Phi\|_1 \,\big], \) (59)

and we take the minimum value as the lower bound. Define \( D = \|p(x_i|R, x_{1:i-1}) - p_R(x_i|x_{1:i-1})\|_1 \) as the difference between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \). According to Equations 45 and 46, the lower and upper bounds for D are:

\( \|\Phi\|_1 - \sqrt{2\, \underbrace{KL(p_R(r)\,\|\,p(r|z^*))}_{\text{detriment}}} \ \le\ D\ \le\ \|\Phi\|_1 + \sqrt{2\, \underbrace{KL(p_R(r)\,\|\,p(r|z^*))}_{\text{detriment}}}. \) (60)

For ease of description, we denote the benefit \( KL(p_R(r)\,\|\,p(r|z)) \) as Ω and the detriment \( KL(p_R(r)\,\|\,p(r|z^*)) \) as Υ. Recapping Equations 5 and 9:

\( p(x_i|R, x_{1:i-1}) = \underbrace{\int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz}_{\text{denote as } \Phi} + \underbrace{p(x_i|R, x_{1:i-1}, z^*)\, p(z^*|R, x_{1:i-1})}_{\text{denote as } \Lambda} \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \frac{p(R, x_{1:i-1}|z)\, p(z)}{p(R, x_{1:i-1})}\, dz \)

\( \propto \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(R, x_{1:i-1}|z)\, p(z)\, dz \quad (p(R, x_{1:i-1}) \text{ is a constant, so we drop it}) \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)}\, p(z)\, dz \quad (p(R, x_{1:i-1}|z^*) \text{ is a constant, so we add it}) \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz, \quad r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \)

\( r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \approx -\Big(\underbrace{KL(p_R(r)\,\|\,p(r|z))}_{\text{benefit, denote as } \Omega} - \underbrace{KL(p_R(r)\,\|\,p(r|z^*))}_{\text{detriment, denote as } \Upsilon}\Big) \)

Φ in Equation 60 can be transformed as:

\( \Phi = \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz \) (61)

\( = \frac{p(R, x_{1:i-1}|z^*)}{p(R, x_{1:i-1})} \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz \) (62)

\( = \alpha \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz \quad (p(R, x_{1:i-1}|z^*) \text{ and } p(R, x_{1:i-1}) \text{ are constants}) \) (63)

\( \approx \alpha \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, \exp(-(\Omega - \Upsilon))\, p(z)\, dz \quad (\text{Equation 9}). \) (64)
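Before closing the proof, the Pinsker step that turns the total-variation term into the detriment Υ (Equations 53–57) can be illustrated numerically. The sketch below uses made-up discrete distributions (not from the paper) and checks that \( 2\,TV(p_R, p(\cdot|z^*)) \le \sqrt{2\, KL(p_R\,\|\,p(\cdot|z^*))} \).

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as 1-D arrays."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def tv(p, q):
    """Total Variation Distance between two discrete distributions."""
    return 0.5 * float(np.abs(p - q).sum())

rng = np.random.default_rng(1)
p_R      = rng.dirichlet(np.ones(10))   # toy stand-in for p_R(r)
p_z_star = rng.dirichlet(np.ones(10))   # toy stand-in for p(r | z*)

detriment = kl(p_R, p_z_star)           # Upsilon = KL(p_R || p(r|z*))
assert 2 * tv(p_R, p_z_star) <= np.sqrt(2 * detriment) + 1e-12
print(f"2*TV = {2*tv(p_R, p_z_star):.4f} <= sqrt(2*KL) = {np.sqrt(2*detriment):.4f}")
```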
Now Theorem 1 has been proven.

# Proof for Theorem 2

In this section, we try to prove that the gap between the values of benefit and detriment is approximately positively correlated with the similarity \( \frac{1}{D} \) between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \). To achieve this, we can start from Equation 60 and prove that the gap between the values of benefit and detriment is negatively correlated with the difference D between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \), which is actually the reciprocal of the similarity \( \frac{1}{D} \). Specifically, we want to prove that the gap between the values of benefit and detriment, \( KL(p_R(r)\,\|\,p(r|z)) - KL(p_R(r)\,\|\,p(r|z^*)) \), is negatively correlated with both the lower and upper bounds in Equation 60. For ease of description, we denote the benefit \( KL(p_R(r)\,\|\,p(r|z)) \) as Ω and the detriment \( KL(p_R(r)\,\|\,p(r|z^*)) \) as Υ. Proof.
Recapping Equations 5 and 9:

\( p(x_i|R, x_{1:i-1}) = \underbrace{\int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz}_{\text{denote as } \Phi} + \underbrace{p(x_i|R, x_{1:i-1}, z^*)\, p(z^*|R, x_{1:i-1})}_{\text{denote as } \Lambda} \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \frac{p(R, x_{1:i-1}|z)\, p(z)}{p(R, x_{1:i-1})}\, dz \)

\( \propto \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(R, x_{1:i-1}|z)\, p(z)\, dz \quad (p(R, x_{1:i-1}) \text{ is a constant, so we drop it}) \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)}\, p(z)\, dz \quad (p(R, x_{1:i-1}|z^*) \text{ is a constant, so we add it}) \)

\( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz, \quad r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \)

\( r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \approx -\Big(\underbrace{KL(p_R(r)\,\|\,p(r|z))}_{\text{benefit, denote as } \Omega} - \underbrace{KL(p_R(r)\,\|\,p(r|z^*))}_{\text{detriment, denote as } \Upsilon}\Big) \)

Φ in Equation 60 can be transformed as:

\( \Phi = \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz \) (65)

\( = \frac{p(R, x_{1:i-1}|z^*)}{p(R, x_{1:i-1})} \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz \) (66)

\( = \alpha \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz \quad (p(R, x_{1:i-1}|z^*) \text{ and } p(R, x_{1:i-1}) \text{ are constants}) \) (67)

\( \approx \alpha \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, \exp(-(\Omega - \Upsilon))\, p(z)\, dz \quad (\text{Equation 9}). \) (68)

Therefore, the lower bound of Equation 60 is:

\( \|\Phi\|_1 - \sqrt{2\Upsilon} \approx \alpha \Big\| \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, \exp(-(\Omega - \Upsilon))\, p(z)\, dz \Big\|_1 - \sqrt{2\Upsilon} \) (69)

\( \propto \exp(-(\Omega - \Upsilon)) - \sqrt{2\Upsilon} \) (70)
and the upper bound of Equation 60 is:

\( \|\Phi\|_1 + \sqrt{2\Upsilon} \propto \exp(-(\Omega - \Upsilon)) + \sqrt{2\Upsilon} \) (71)

Because both Ω and Υ are variables, analyzing the result of the subtraction between Ω and Υ under their simultaneous changes is complex. Therefore, we use "separation of variables" to simplify our analysis. Specifically, we first assume that one is constant, and then analyze the changes caused by the variation of the other:

- Assume Ω is constant. As the value of Ω − Υ increases, Υ decreases and the upper bound \( \exp(-(\Omega - \Upsilon)) + \sqrt{2\Upsilon} \) also decreases. In the lower bound \( \exp(-(\Omega - \Upsilon)) - \sqrt{2\Upsilon} \), since the first term is an exponential function and the second term is a square-root function, a decrease of Υ leads to a decrease in the entire lower bound. Therefore, both the lower and upper bounds in Equation 60 decrease as Ω − Υ increases.
- Assume Υ is constant. As the value of Ω − Υ increases, Ω increases and the upper bound \( \exp(-(\Omega - \Upsilon)) + \sqrt{2\Upsilon} \) decreases. In the lower bound \( \exp(-(\Omega - \Upsilon)) - \sqrt{2\Upsilon} \), since the first term is an exponential function and the second term is a square-root function, an increase of Ω leads to a decrease in the entire lower bound. Therefore, both the lower and upper bounds in Equation 60 decrease as Ω − Υ increases.
Based on the analysis above, we can derive that both the lower and upper bounds in Equation 60 are approximately negatively correlated with the gap between the values of benefit and detriment. Therefore, the difference D between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \) is approximately negatively correlated with the gap between the values of benefit and detriment. In other words, \( \frac{1}{D} \) can be treated as the similarity between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \), and it is approximately positively correlated with the gap between the values of benefit and detriment:

\( \underbrace{KL(p_R(r)\,\|\,p(r|z))}_{\text{benefit}} - \underbrace{KL(p_R(r)\,\|\,p(r|z^*))}_{\text{detriment}} \propto \frac{1}{D}. \) (72)

So we have proved that the gap between the values of benefit and detriment is approximately positively correlated with \( \frac{1}{D} \).

# Proof for Theorem 3

This section aims to prove:

\( \begin{cases} KL(p_R(r)\,\|\,p(r|z)) < KL(p_R(r)\,\|\,p(r|z^*)), & \text{detriment outweighs benefit,} & \text{if } \frac{1}{D} < \frac{1}{M} \\ KL(p_R(r)\,\|\,p(r|z)) = KL(p_R(r)\,\|\,p(r|z^*)), & \text{benefit is equal to detriment,} & \text{if } \frac{1}{D} = \frac{1}{M} \\ KL(p_R(r)\,\|\,p(r|z)) > KL(p_R(r)\,\|\,p(r|z^*)), & \text{benefit outweighs detriment,} & \text{if } \frac{1}{D} > \frac{1}{M} \end{cases} \) (73)

in which \( \frac{1}{M} \) is the similarity between \( p(x_i|R, x_{1:i-1}) \) and \( p(x_i|x_{1:i-1}) \) (LLMs' pre-trained knowledge), and \( \frac{1}{D} \) is the similarity between \( p(x_i|R, x_{1:i-1}) \) and \( p_R(x_i|x_{1:i-1}) \) (the distribution of the retrieved texts).

Proof. When benefit is equal to detriment:

\( KL(p_R(r)\,\|\,p(r|z)) - KL(p_R(r)\,\|\,p(r|z^*)) = 0, \) (74)

which means that:

\( \int p_R(r)\, \log\frac{p(r|z^*)}{p(r|z)}\, dr = 0. \) (75)

Since \( p_R(r) \) cannot be 0, then:

\( \log\frac{p(r|z^*)}{p(r|z)} = 0, \) (76)

\( \frac{p(r|z^*)}{p(r|z)} = 1, \) (77)

\( p(r|z) = p(r|z^*). \) (78)
bridge-topic into a claim set. We restrict the claim set to have at least two claims but no more than four claims. For each type of query, we feed the claim set to GPT-4 and prompt it with an instruction to generate a query with information from each claim. Below, we explain the specifications for the different multi-hop query types. In the construction of each query, we also include the source of the news article with which the supporting evidence is associated, to mimic real-world RAG scenarios.

Inference Query: These queries are formulated by synthesizing the various characterizations of the bridge-entity across multiple claims, with the final answer being the identification of the entity itself.

Comparison Query: These queries are structured to compare the similarities and differences related to the bridge entity or topic. The resultant answer to such queries is typically a definitive "yes" or "no", based on the comparison.

Temporal Query: These queries explore the temporal ordering of events across different points in time. The answer to such queries is typically a "yes" or "no" or a single temporal indicator word like "before" or "after".

Null Query: A null query is a query whose answer cannot be derived from the retrieved set. To create null queries, we generate multi-hop queries using entities that do not exist in the existing bridge-entities. To add complexity, we also include fictional news source metadata when formulating these questions, ensuring that the questions do not reference any contextually relevant content from the knowledge base. The answer to a null query should be "insufficient information" or similar.

Step 5: Quality Assurance.

|Category|Avg. Tokens|Entry Count|
|---|---|---|
|technology|2262.3|172|
|entertainment|2084.3|114|
|sports|2030.6|211|
|science|1745.5|21|
|business|1723.8|81|
|health|1481.1|10|
|total|2046.5|609|

Table 2: Descriptive statistics of the news article knowledge base in MultiHop-RAG.

|Query Category|Entry Count|Percentage|
|---|---|---|
|Inference Query|816|31.92%|
|Comparison Query|856|33.49%|
|Temporal Query|583|22.81%|
|Null Query|301|11.78%|
|Total|2,556|100.00%|

Table 3: The distribution of query types in MultiHop-RAG.

3.2 Descriptive Statistics

The MultiHop-RAG dataset contains six different types of news articles, covering 609 distinct news articles, with an average of 2,046 tokens. The distribution of the news categories is shown in Table 2. MultiHop-RAG contains four types of multi-hop queries, and the distribution of these queries is shown in Table 3. In total, about 88% of queries in the dataset are non-null queries whose answers can be retrieved and reasoned from the knowledge base. In addition, the form of the queries exhibits considerable diversity: approximately 27% of interrogative queries start with "does," around 15% initiate with "what," a similar proportion start with "which," and 14% begin
|Query ID|Query|Expected answer|Search for paragraph|Search by sentence retrieve paragraph|Observations| |---|---|---|---|---|---| |F1|What do the values of RAW Group Indication subfield in RPS element indicate?|The RAW Group Indication subfield indicates whether the RAW Group subfield is present in the RAW Assignment subfield and is interpreted as follows:|The format of the RAW Group subfield is shown in Figure 9-672|When the RAW is an AP PM RAW, the RAW Group Indication subfield equal to 0 indicates that the RAW group does not include any of the non-AP STAs, and the RAW Group subfield is not present. When the RAW Group Indication subfield is equal to 1, the RAW Group subfield is present.|It is clear that similarity search at sentence level and retrieval at paragraph level gives significantly better results. Since we retrieve 3 distinct paragraphs there is far more context available at the generator create good responses| | | | |When the RAW is an AP PM RAW, the RAW Group Indication subfield equal to 0 indicates that the RAW group does not include any of the non-AP STAs, and the RAW Group subfield is not present. When the RAW Group Indication subfield is equal to 1, the RAW Group subfield is present.|The RAW Group Indication subfield indicates whether the RAW Group subfield is present in the RAW Assignment subfield and is interpreted as follows:|When the RAW type is generic RAW, sounding RAW, or triggering frame RAW, the RAW Group Indication subfield indicates whether the RAW group defined in the current RAW assignment is the same RAW group as defined in the previous RAW assignment. When the RAW Group Indication subfield is equal to 0, the RAW group defined in the current RAW assignment is the same as the RAW group defined in the previous RAW assignment and the RAW Group subfield is not present in this RAW assignment. When the RAW Group Indication subfield is equal to 1, the RAW Group subfield is present. The RAW Group Indication subfield indicates whether the RAW Group subfield is present in the RAW Assignment subfield and is interpreted as follows: When the RAW type is generic RAW, sounding RAW, or triggering frame RAW, the RAW Group Indication subfield indicates whether the RAW group defined in the current RAW assignment is the same RAW group as defined in the previous RAW assignment. When the RAW Group Indication subfield is equal to 0, the RAW group defined in the current RAW assignment is the same as the RAW group defined in the previous RAW assignment and the RAW Group subfield is not present in this RAW assignment.
Recapping Equation 2, where z∗ is sampled from the retrieved texts and z is sampled from the LLMs' pre-trained knowledge, Equation 78 indicates that the knowledge of the retrieved texts has already been involved in the LLMs' pre-trained knowledge, so:

\( p(x_i|x_{1:i-1}) = p_R(x_i|x_{1:i-1}), \) (79)

then:

\( \|p(x_i|R, x_{1:i-1}) - p(x_i|x_{1:i-1})\|_1 = \|p(x_i|R, x_{1:i-1}) - p_R(x_i|x_{1:i-1})\|_1, \) (80)

which means that D = M, so \( \frac{1}{D} = \frac{1}{M} \) is an important dividing point. When \( \frac{1}{D} = \frac{1}{M} \), we can get that benefit is equal to detriment, i.e., \( KL(p_R(r)\,\|\,p(r|z)) - KL(p_R(r)\,\|\,p(r|z^*)) = 0 \). Equation 72 indicates that the gap between the values of benefit and detriment, \( KL(p_R(r)\,\|\,p(r|z)) - KL(p_R(r)\,\|\,p(r|z^*)) \), is approximately positively correlated with \( \frac{1}{D} \). Therefore, when \( \frac{1}{D} > \frac{1}{M} \) we can get that benefit outweighs detriment ( \( KL(p_R(r)\,\|\,p(r|z)) - KL(p_R(r)\,\|\,p(r|z^*)) > 0 \) ), and when \( \frac{1}{D} < \frac{1}{M} \) we can get that detriment outweighs benefit ( \( KL(p_R(r)\,\|\,p(r|z)) - KL(p_R(r)\,\|\,p(r|z^*)) < 0 \) ). Now the proof of Theorem 3 has been finished.

# Proof for RAG is actually unsupervised In-context Learning

This section aims to prove that RAG is actually unsupervised ICL from two perspectives. One is that previous studies find that ICL performs gradient descent as a meta-optimizer Von Oswald et al. [2023], Akyürek et al. [2022], Dai et al.
[2022]. We prove that, from this perspective, the distribution of the texts in the context drives the learning even without explicit input-output supervision. Therefore, the distribution of the unsupervised retrieved texts in RAG, which is actually the distribution of the context for the query, can also drive the learning. Then we can prove that RAG is actually unsupervised in-context learning. The specific proof is:

Proof. From the perspective that ICL performs gradient descent as a meta-optimizer, ICL can be formalized as follows. Gradient descent in the optimization of a linear layer has a dual form of linear attention Irie et al. [2022], Aizerman et al. [1964]. Define a linear layer as:

\( f(x) = W_0 x, \) (81)

in which \( W_0 \) is the initial weight matrix. Given a sequence of historical input vectors \( x_i \in \mathbb{R}^{d_{in}} \) and corresponding error signals \( e_i \in \mathbb{R}^{d_{out}} \), \( i \in [1, N] \), obtained by gradient descent, the update of the weight matrix can be represented as:

\( W' = W_0 + \Delta W = W_0 + \sum_{i=1}^{N} e_i \otimes x_i. \) (82)

Recall that linear attention can be formulated as:

\( \text{LinearAttn}(V, K, q) = \sum_i v_i (k_i^{T} q). \) (83)

Then the dual form of the updated linear layer with a new input \( x_{N+1} \) is:

\( f'(x_{N+1}) = (W_0 + \Delta W)\, x_{N+1} \) (84)

\( = \Big(W_0 + \sum_{i=1}^{N} e_i \otimes x_i\Big)\, x_{N+1} \) (85)

\( = W_0 x_{N+1} + \sum_{i=1}^{N} (e_i \otimes x_i)\, x_{N+1} \) (86)

\( = W_0 x_{N+1} + \sum_{i=1}^{N} e_i (x_i^{T} x_{N+1}) \) (87)

\( = W_0 x_{N+1} + \text{LinearAttn}(E, x_{1:N}, x_{N+1}) \) (88)
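The identity in Equations 84–88 can be verified numerically. The sketch below (toy random vectors, not from the paper) confirms that applying the explicitly updated weights \( W_0 + \sum_i e_i \otimes x_i \) to a new input equals the zero-shot term \( W_0 x_{N+1} \) plus an unnormalized linear attention over the stored \( (x_i, e_i) \) pairs.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, N = 8, 4, 6                      # toy dimensions and number of past examples

W0 = rng.standard_normal((d_out, d_in))       # initial weights of the linear layer
xs = rng.standard_normal((N, d_in))           # historical inputs x_1..x_N
es = rng.standard_normal((N, d_out))          # error signals e_1..e_N from gradient descent
x_new = rng.standard_normal(d_in)             # new query input x_{N+1}

# Left-hand side: apply the explicitly updated weights (Equations 82 and 84).
delta_W = sum(np.outer(e, x) for e, x in zip(es, xs))
lhs = (W0 + delta_W) @ x_new

# Right-hand side: zero-shot term plus linear attention (Equations 87-88).
rhs = W0 @ x_new + sum(e * (x @ x_new) for e, x in zip(es, xs))

assert np.allclose(lhs, rhs)
print("dual form verified:", np.round(lhs, 3))
```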
# In-context Learning Equations:

1. \( f_{ICL}(q) = \text{Attn}(V, K, q) \) (89)
2. \( = W_V [B' : B]\, \text{softmax}\big((W_K [B' : B])^T q / \sqrt{d}\big) \) (90)
3. To simplify the qualitative analysis, the standard attention is approximated as relaxed linear attention by removing the softmax function and the scaling factor:
   \( f_{ICL}(q) \approx W_V [B' : B] (W_K [B' : B])^T q \) (91)
   \( = W_V B (W_K B)^T q + W_V B' (W_K B')^T q \) (92)
   \( = W_V B (W_K B)^T q + \text{LinearAttn}(W_V B', W_K B', q) \) (93)
4. The dual form of the Transformer attention is then:
   \( f_{ICL}(q) \approx W_V B (W_K B)^T q + \text{LinearAttn}(W_V B', W_K B', q) = W_V B (W_K B)^T q + \sum_i W_V b'_i \big((W_K b'_i)^T q\big) \) (94-96)
5. Comparing Equation 86 with Equation 96, we find that:
   - \( W_V B (W_K B)^T q \) corresponds to the zero-shot prediction without demonstrations in the context (the \( W_0 x_{N+1} \) term), while the demonstration term uses meta-gradients to update the weight matrix;
   - the correspondences are \( W_V b'_i \leftrightarrow e_i \) and \( W_K b'_i \leftrightarrow x_i \) (97-98).
6. In standard gradient descent with loss L, the error signal is \( e_i = -\eta\, \partial L / \partial y_i \), so under the correspondences above the meta-gradient in ICL is carried by \( W_V b'_i \) and the corresponding input by \( W_K b'_i \) (99-101).
7. The loss L can accordingly be represented as \( L = -\frac{1}{\eta} \int W_V b'_i\, d(W\, W_K b'_i) \) (102).
8. The supervision signal, both the loss and the gradient, is therefore directly related to the semantic representations of the tokens in the demonstrations of the context (97-102).
9. In-context learning thus does not require explicit input-output pairs in the demonstrations: the distribution of the text in the context itself serves as the learning signal.
10. ICL performs gradient descent as a meta-optimizer.

# Theoretical Perspective:

1. \( p(x_i|R, x_{1:i-1}) = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz \) (103)
2. \( = \int_{Z-\{z^*\}} p(x_i|R, x_{1:i-1}, z)\, p(z|R, x_{1:i-1})\, dz + p(x_i|R, x_{1:i-1}, z^*)\, p(z^*|R, x_{1:i-1}) \) (104)
3. \( \propto \int_{Z} p(x_i|R, x_{1:i-1}, z)\, p(R, x_{1:i-1}|z)\, p(z)\, dz \) (105)
4. \( = \int_{Z} p(x_i|R, x_{1:i-1}, z)\, \exp(r(z))\, p(z)\, dz, \quad r(z) = \log\frac{p(R, x_{1:i-1}|z)}{p(R, x_{1:i-1}|z^*)} \) (106)
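Likewise, the block split used in Equations 91–96 above, where relaxed attention over the concatenation [B′ : B] decomposes into a zero-shot term on B plus a linear-attention term over the demonstration tokens B′, is a direct consequence of block matrix multiplication. The toy check below (random matrices, not from the paper) confirms it.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n = 16, 5, 7                             # hidden size, #demonstration tokens, #query tokens

W_V = rng.standard_normal((d, d))
W_K = rng.standard_normal((d, d))
B_prime = rng.standard_normal((d, m))          # demonstration (retrieved-context) token states B'
B       = rng.standard_normal((d, n))          # query token states B
q       = rng.standard_normal(d)               # current attention query vector

concat = np.concatenate([B_prime, B], axis=1)  # [B' : B]

# Relaxed (softmax-free, unscaled) attention over the concatenation, Equation 91.
lhs = W_V @ concat @ (W_K @ concat).T @ q

# Zero-shot term on B plus linear attention over B', Equations 92-93.
rhs = W_V @ B @ (W_K @ B).T @ q + W_V @ B_prime @ (W_K @ B_prime).T @ q

assert np.allclose(lhs, rhs)
print("block split verified, first entries:", np.round(lhs[:4], 3))
```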
# Experimental details

# Baselines

For the primary experiment, which needs methods to determine the value order between benefit and detriment for each token, the task is actually binary classification (benefit outweighs detriment or not). The mainstream methods in this area detect and compare the degree of hallucination between tokens generated by LLMs (w/o RAG) and by RAG.
Below we describe in detail how we apply these baselines to this task.

# Logprobs

Logprobs can indicate the confidence of LLMs in generating the tokens Kuhn et al. [2023]. We use the value order between the top-1 log-probabilities of the tokens output by the pure LLM and by RAG to determine the value order between benefit and detriment for these tokens. If the logprobs of the tokens generated by RAG are greater than the logprobs of the tokens generated by the pure LLM, the benefit outweighs the detriment; otherwise, the detriment outweighs the benefit.

# Uncertainty

We use Length-normalized Entropy Malinin and Gales [2020] to measure the uncertainty of the tokens generated by the pure LLM and by RAG, respectively. If the uncertainty of the tokens generated by RAG is lower than the uncertainty of the tokens generated by the pure LLM, the benefit outweighs the detriment; otherwise, the detriment outweighs the benefit.

# Consistency-Lexical

Consistency-based methods make LLMs perform multiple generations for a question and calculate a consistency score among the multiple answers. If the consistency score of the tokens generated by RAG is greater than the consistency score of the tokens generated by the pure LLM, the benefit outweighs the detriment; otherwise, the detriment outweighs the benefit. Lexical-based consistency means calculating the consistency score by lexical similarity among the multiple answers. Since the experiment is at the token level, we use the number of tokens that are completely consistent across the multiple generations as the consistency score.
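The Logprobs and Uncertainty comparisons above can be sketched as follows. This is an illustrative stand-in, not the paper's code: `top1_logprob_score` and `length_normalized_entropy` are assumed helper names, the numeric log-probabilities are made up, and the entropy is approximated from sampled generations in the spirit of Malinin and Gales [2020].

```python
import numpy as np

def top1_logprob_score(token_logprobs):
    """Average top-1 token log-probability of a generation (higher = more confident)."""
    return float(np.mean(token_logprobs))

def length_normalized_entropy(sampled_seq_logprobs):
    """Approximate length-normalized predictive entropy from K sampled generations:
    the negative mean of per-token-averaged sequence log-probabilities (lower = less uncertain)."""
    per_seq = [np.mean(lp) for lp in sampled_seq_logprobs]   # (1/L_k) * log p(y_k | x)
    return float(-np.mean(per_seq))

# Illustrative numbers only: per-token log-probs for one generated answer.
llm_logprobs = [-1.2, -0.9, -1.5, -0.7]
rag_logprobs = [-0.4, -0.3, -0.8, -0.2]

# Logprobs baseline: benefit outweighs detriment if RAG is more confident than the pure LLM.
benefit_wins_logprob = top1_logprob_score(rag_logprobs) > top1_logprob_score(llm_logprobs)

# Uncertainty baseline: benefit outweighs detriment if RAG is less uncertain than the pure LLM.
llm_unc = length_normalized_entropy([[-1.1, -0.8], [-1.4, -1.0], [-0.9, -1.2]])
rag_unc = length_normalized_entropy([[-0.3, -0.5], [-0.4, -0.2], [-0.6, -0.3]])
benefit_wins_uncertainty = rag_unc < llm_unc

print(benefit_wins_logprob, benefit_wins_uncertainty)
```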
# Consistency-Semantic

We follow Chen et al. [2024] to use EigenScore to calculate the semantic similarity among the hidden states of tokens in multiple generations and use it as the consistency score. For open-domain Q&A under a practical autoregressive generation setting, the baselines include methods that introduce additional modules to filter irrelevant passages (NLI+RAG Yoran et al. [2024]) or as action triggers (CRAG Yan et al. [2024]), methods that train more robust LLMs for RAG (RetRobust Yoran et al. [2024] and INFO-RAG Xu et al. [2024a]), and methods that train LLMs to dynamically retrieve and critique retrieved texts (Self-RAG Asai et al. [2023]).

NLI+RAG. This method uses a Natural Language Inference model to filter possibly irrelevant documents in the retrieved results and provides the remaining documents to LLMs for generation. We follow Yoran et al. [2024] to use a BART-Large model Lewis et al. [2019] with 407 million parameters trained on the MNLI dataset Williams et al. [2017]. We consider a query-document pair as entailed if the probability for the entailment label is ≥ 0.5 and filter out the documents with an entailment probability < 0.5.

CRAG.
This method uses a retrieval evaluator to assess the correctness of the retrieved texts and triggers different actions based on the evaluation results. One of the actions uses an additional Google search API for web search, which is unfair to the other baselines and to our method, so we remove this action and use its knowledge refinement strategy for document filtering Yan et al. [2024].
RetRobust. This method fine-tunes LLMs to properly leverage retrieved passages with a mix of relevant and irrelevant contexts Yoran et al. [2024].

INFO-RAG. This method uses an unsupervised approach to make LLMs learn to use the retrieved texts robustly. It enables LLMs to judge the correctness of the retrieved texts, extract the correct content, and revise the wrong content Xu et al. [2024a].

Self-RAG. This method trains LLMs to dynamically retrieve and critique retrieved texts. Self-RAG first decodes a retrieval token to evaluate the utility of retrieval and control a retrieval component. If retrieval is not required, the LLM continues generation as usual. If retrieval is required, the LLM calls an external retrieval module to find the top relevant documents using the input query and the previous generation, then generates a critique token evaluating whether the retrieved documents are relevant and support the generation, and finally generates the continuation conditioned on the retrieved passages Asai et al. [2023].
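As an illustration of the NLI+RAG filtering step described above, here is a minimal sketch using the public facebook/bart-large-mnli checkpoint. The pairing of document as premise and query as hypothesis, the helper names, and the preprocessing are assumptions; only the ≥ 0.5 entailment threshold follows the description above, and the exact setup in Yoran et al. [2024] may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "facebook/bart-large-mnli"          # BART-Large trained on MNLI
tok = AutoTokenizer.from_pretrained(model_name)
nli = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

def entailment_prob(premise, hypothesis):
    """Probability that the premise entails the hypothesis."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits[0]
    probs = logits.softmax(-1)
    entail_idx = nli.config.label2id.get("entailment", 2)   # MNLI label order can vary
    return float(probs[entail_idx])

def filter_documents(query, documents, threshold=0.5):
    """Keep only documents whose entailment probability for the query is >= threshold."""
    return [d for d in documents if entailment_prob(d, query) >= threshold]
```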
# Implementation details

All models are run on a V100 GPU with PyTorch Paszke et al. [2019] and accelerated by DeepSpeed3. As for retrieval for RAG, we follow Xu et al. [2023, 2024a] to use ColBERTv2 Santhanam et al. [2021], an excellent generalizable model, as the retriever, and use Wikipedia consisting of 21,015,324 passages Karpukhin et al. [2020] as the retrieval database. All baselines and X-RAG share the same retrieval setup and prompt. We use OPT-6.7B, LLaMA-2-7B and Mistral-7B-v0.1 as the LLMs in the primary experiment and use a greedy-decoding strategy for generation.

# Case Study

Figure 4 shows the case study for collaborative generation between the pure LLM and RAG at the token level in our X-RAG. At each step where the pure LLM and RAG generate different tokens, X-RAG uses our theoretical results in Theorem 3 to compare the benefit and the detriment. If the benefit is greater than the detriment, the token from RAG is selected; otherwise, the token from the pure LLM is selected. The selected tokens are marked in green and bold. The discarded tokens are marked in gray. The orange arrow represents the direction of token selection and usage. The selected tokens are used for the next-step generation of both the pure LLM and RAG. This case study visually demonstrates that our X-RAG effectively enables the pure LLM and RAG to generate collaboratively so as to preserve benefit and avoid detriment.

3https://github.com/microsoft/DeepSpeed
When the RAW Group Indication subfield is equal to 1, the RAW Group subfield is present. The RAW Group Indication subfield in the first RAW assignment is set to 0 to indicate the RAW group in the first RAW assignment is the same as the range of AIDs in all the TIM bitmaps in the S1G Beacon frame. When the RAW is a non-TIM RAW, the RAW Group Indication subfield is set to 0 and the RAW Group subfield is not present. When the RAW is an AP PM RAW, the RAW Group Indication subfield equal to 0 indicates that the RAW group does not include any of the non-AP STAs, and the RAW Group subfield is not present. When the RAW Group Indication subfield is equal to 1, the RAW Group subfield is present.|
# Question: Who is the book of Galatians written to? |Pure LLM:|It was written by the Apostle Peter to the churches in Galatia, a region of present-day Turkey.| |---|---| |RAG:|It was written by the Apostle Paul to the churches in Corinth, a region of present-day Turkey.| |Output:|It was written by the Apostle Paul to the churches in Galatia, a region of present-day Turkey.| # Question: Who conducted the opening concert at Carnegie Hall? In which year? |Pure LLM:|It was conducted by Leopold Damrosch and composer Pyotr Ilyich Tchaikovsky in 1891.| |---|---| |RAG:|It was conducted by Walter Damrosch and composer Pyotr Ilyich Tchaikovsky in 1901.| |Output:|It was conducted by Walter Damrosch and composer Pyotr Ilyich Tchaikovsky in 1891.| # Question: Who says that which we call a rose? |Pure LLM:|It is said by the character Juliet in William Shakespeare's play "Romeo and Juliet."| |---|---| |RAG:|It is said by the character Lysander in William Shakespeare's play "Romeo and Juliet."| |Output:|It is said by the character Juliet in William Shakespeare's play "Romeo and Juliet."| # Question: Who signed the largest on the Declaration of Independence? |Pure LLM:|It belongs to John Hancock, who was the president of the Continental Congress| |---|---| |RAG:|It belongs to Charles Hancock, who was a member of the Continental Congress| |Output:|It belongs to John Hancock, who was the president of the Continental Congress| # Question: Who is under the mask of Darth Vader? |Pure LLM:|It is the character Anakin Skywalker, as revealed in the Star Wars series.| |---|---| |RAG:|It is the character Anakin Skywalker, as revealed in the Star Wars series.| |Output:|It is the character Anakin Skywalker, as revealed in the Star Wars series.| # Question: Where does US Highway 1 start and end? |Pure LLM:|It starts in Fort Kent, Maine, at the Canadian border and ends in Key West, Florida.| |---|---| |RAG:|It starts in Fort Kent, Maine, at the Canadian border and ends in New West, Florida.| |Output:|It starts in Fort Kent, Maine, at the Canadian border and ends in Key West, Florida.| Figure 4: Case study for collaborative generation between pure LLM and RAG at token level in our X-RAG.
The pure LLM and RAG generate the texts in parallel at the token level. At each step where the pure LLM and RAG generate different tokens, X-RAG uses our theoretical results in Theorem 3 to compare the benefit and the detriment. If the benefit is greater than the detriment, the token from RAG is selected; otherwise, the token from the pure LLM is selected. The selected tokens are marked in green and bold. The discarded tokens are marked in gray. The orange arrow represents the direction of token selection and usage. The selected tokens are used for the next-step generation of both the pure LLM and RAG.
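A simplified sketch of this token-level collaboration is given below. It is a schematic reconstruction rather than the released X-RAG code: `llm_next_token`, `rag_next_token`, and `benefit_outweighs_detriment` are hypothetical callables standing in for one greedy decoding step of the pure LLM, one step of RAG, and the Theorem 3 benefit-versus-detriment comparison, respectively.

```python
from typing import Callable, List

def collaborative_generate(
    llm_next_token: Callable[[List[str]], str],            # one greedy step of the pure LLM
    rag_next_token: Callable[[List[str]], str],            # one greedy step of RAG (LLM + retrieved texts)
    benefit_outweighs_detriment: Callable[[List[str], str, str], bool],  # Theorem 3 comparison
    max_new_tokens: int = 64,
    eos_token: str = "</s>",
) -> List[str]:
    """Token-level collaboration: both models decode in parallel; whenever they disagree,
    keep the RAG token only if benefit outweighs detriment, then feed the chosen token
    back to both models for the next step."""
    generated: List[str] = []
    for _ in range(max_new_tokens):
        t_llm = llm_next_token(generated)
        t_rag = rag_next_token(generated)
        if t_llm == t_rag:
            chosen = t_rag
        elif benefit_outweighs_detriment(generated, t_llm, t_rag):
            chosen = t_rag          # retrieval helps here: keep the RAG token
        else:
            chosen = t_llm          # retrieval hurts here: fall back to the pure LLM token
        generated.append(chosen)
        if chosen == eos_token:
            break
    return generated
```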
# arXiv:2311.09476v2 [cs.CL] 31 Mar 2024 |ARES: An Automated Evaluation Framework for Retrieval-AugmentedGeneration Systems| |---| |Jon Saad-Falcon|Omar Khattab| |Stanford University *|Stanford University| |jonsaadfalcon@stanford.edu|okhattab@stanford.edu| |Christopher Potts|Matei Zaharia| |Stanford University|Databricks and UC Berkeley| |cgpotts@stanford.edu|matei@databricks.com| Abstract Evaluating retrieval-augmented generation (RAG) systems traditionally relies on hand annotations for input queries, passages to retrieve, and responses to generate. We introduce ARES, an Automated RAG Evaluation System, for evaluating RAG systems along the dimensions of context relevance, answer faithfulness, and answer relevance. By creating its own synthetic training data, ARES finetunes lightweight LM judges to assess the quality of individual RAG components. To mitigate potential prediction errors, ARES utilizes a small set of human-annotated datapoints for prediction-powered inference (PPI). Across eight different knowledge-intensive tasks in KILT, SuperGLUE, and AIS, ARES accurately evaluates RAG systems while using only a few hundred human annotations during evaluation. Furthermore, ARES judges remain effective across domain shifts, proving accurate even after changing the type of queries and/or documents used in the evaluated RAG systems. We make our code and datasets publicly available on Github. # 1 Introduction Retrieval-augmented generation (RAG) has become a prominent approach for building user-facing NLP applications, such as systems for question answering (QA), fact-checking, and customer support (Petroni et al., 2021; Wang et al., 2019). Typically, a RAG system consists of a retriever and a downstream language model (LM). Given a user question, the retriever finds relevant passages from a corpus and the LM uses these passages to generate a response. This formulation admits a multitude of choices: what retrieval model to use, how to divide the documents into retrieval chunks, and how to prompt or finetune the LM to use the retrieved information, to name only a few of the simplest design decisions. * Project started during research internship at Databricks The best design for a RAG system is not necessarily universal across data domains, corpus sizes, and cost/latency budgets. To tune their own RAG systems, practitioners traditionally need hand annotations for test questions, passages to retrieve (to assess the retriever), and responses to generate, labeled specifically for their target domain. Alternatively, they may evaluate different approaches in production by collecting human preferences that compare the candidate systems.
Unfortunately, both of these strategies demand high expertise and impose considerable annotation costs. Model-based evaluation is an inexpensive strategy to test generative output quality (Zheng et al., 2023). For instance, the open-source RAGAS framework (James and Es, 2023) prompts an LM for evaluating the relevance of retrieved information and the faithfulness and accuracy of generated responses. Unfortunately, such strategies currently rely for evaluation on a fixed set of heuristically hand-written prompts, offering little adaptability to various evaluation contexts and no guarantees about quality. To evaluate RAG systems rapidly and accurately, we propose ARES, the Automated RAG Evaluation System. ARES is the first automated RAG evaluation system to generate tailored LLM judges for each component of a RAG pipeline, leading to substantial boosts in evaluation precision and accuracy compared to existing approaches like RAGAS. Furthermore, unlike existing RAG evaluation systems, ARES provides confidence intervals for its scoring by leveraging prediction-powered inference (PPI; Angelopoulos et al.
2023). Given a corpus of documents and a RAG system, ARES reports three evaluation scores: context relevance (is the retrieved information pertinent to the test question), answer faithfulness (is the response generated by the language model properly grounded in the retrieved context), and answer relevance (is the response also relevant to the question). A good
RAG system finds relevant contexts and generates answers that are both faithful and relevant. Many existing RAG evaluation frameworks require substantial human annotations for scoring. ARES significantly improves data efficiency during evaluation by only requiring three inputs: an in-domain passage set, a human preference validation set of approximately 150 annotated datapoints or more, and few-shot examples of in-domain queries and answers (e.g. five examples or more), which are used for prompting LLMs in synthetic data generation. Given the corpus of in-domain passages, ARES proceeds in three stages. First, it leverages an LM to construct a synthetic dataset of question–answer pairs, derived from the passages in the corpus. Second, it defines three separate judge models to perform three classification tasks (context relevance, answer faithfulness, and answer relevance). These judges are lightweight models fine-tuned against a contrastive learning objective. Third, ARES scores the different RAG systems being assessed using prediction-powered inference (PPI; Angelopoulos et al. 2023) to improve model-based evaluation accuracy and provide statistical confidence intervals for RAG scoring. PPI utilizes a small set of human annotated datapoints for computing its confidence intervals; we designate this annotated set as our human preference validation set, which is composed of approximately 150 annotated datapoints or more that designate both positive and negative examples for context relevance, answer faithfulness, and answer relevance. We conduct extensive empirical evaluations, demonstrating that ARES accurately scores RAG systems across the six knowledge-intensive datasets in KILT and SuperGLUE, beating existing automated evaluation approaches like RAGAS by 59.3 and 14.4 percentage points on average across context relevance and answer relevance evaluation accuracy, respectively. Additionally, ARES accurately calculates answer hallucination occurrences in the AIS attribution dataset (Rashkin et al., 2022), predicting within 2.5 percentage points of the ground truth average for answer hallucinations. Compared to annotation-based evaluation methods, ARES is substantially more accurate and efficient, requiring 78% less annotations than the baseline approach. We also find that ARES consistently distinguishes competitive RAG systems that are only a few points apart in ground-truth metrics. This precision enables ARES to guide the development and comparison of competitive approaches and configurations. We make the ARES code and datasets publicly available on Github. # Related Work RAG (Guu et al., 2020; Lewis et al., 2020; Khattab et al., 2021; Izacard et al., 2022) is now a common strategy for bolstering LLMs by combining them with retrieval systems. Through retrieval, RAG helps LM systems gather domain-specific knowledge, ground generations in factual information (Shuster et al., 2021; Huo et al., 2023), and offer a degree of transparency or interpretability via citing sources (Mialon et al., 2023). Multiple LLM-based evaluation techniques have emerged for gauging LLM systems.
This is essential for rapid deployment in new settings, where it is difficult to build a traditional benchmark dataset from scratch. Early attempts at this use LLMs out of the box, as in MT-Bench and Chatbot Arena (Zheng et al., 2023). AutoCalibrate (Liu et al., 2023b) seeks to align an LLM-judge with human preferences, leveraging a self-refinement prompt to iteratively improve the LLM judge. However, AutoCalibrate does not offer any statistical guarantees for the accuracy of its predictions. Other work has used LLM prompting to evaluate system quality across natural language generation tasks, such as translation, summarization, and dialogue (Kocmi and Federmann, 2023; Fu et al., 2023; Liu et al., 2023a; Wang et al., 2023). In the context of knowledge-intensive NLP tasks, LLMs have been explored for assessing attribution and factuality in LLMs (Min et al., 2023; Gekhman et al., 2023; Yue et al., 2023). New guidelines like LongEval (Krishna et al., 2023) and datasets like Hagrid and ALCE (Kamalloo et al., 2023; Gao et al., 2023) provide resources for analyzing knowledge-intensive LLM pipelines. The two most-closely related projects to ARES are EXAM (Sander and Dietz, 2021) and RAGAS (James and Es, 2023). To evaluate RAG systems, the EXAM metric estimates how many exam questions a reader (simulated as a QA system) can answer correctly based on the generated response. This requires a set of queries with several associated sub-questions each, which adds a burden that ARES does not bring.
RAGAS is based on a handful of heuristic hand-written prompts. These offer little adaptability to new RAG evaluation settings.
# ARES ARES proceeds in three stages (Figure 1). There are three required inputs: an in-domain passage set, a human preference validation set of approximately 150 annotated datapoints (or more), and few-shot examples of in-domain queries and answers (five or more examples), which are used for prompting LLMs in synthetic data generation. With our inputs prepared, we begin by generating synthetic queries (and their answers) from the passages in the target corpus. We then use these query–passage–answer triples to train LLM judges. Subsequently, we apply these judges to any RAG system, scoring a sample of its in-domain query-document-answer triples, and use prediction-powered inference (PPI) with our human preference validation set to estimate a confidence interval for the quality of each RAG system. # LLM Generation of Synthetic Dataset We generate synthetic queries and answers from the corpus passages using generative LLMs. The generated data represent both positive and negative examples of query–passage–answer triples (e.g., relevant/irrelevant passages and correct/incorrect answers). For generation, the LLM uses our input set of few-shot examples with in-domain passages mapped to in-domain queries and answers; the model then generates a synthetic question and answer from a given in-domain passage, allowing us to create both positive and negative training examples. We include example prompts for generating synthetic queries and answers in A.6. For creating our synthetic data, we primarily use on FLAN-T5 XXL (discussed in subsection 4.1). ARES works well with this model (see section 5) but our system can ultimately use another high-quality model for generating synthetic queries and answers. We then filter out low-quality queries by testing if a given query can retrieve its original passage as the top result using its retriever. This filtering approach has been used in previous work to isolate high-quality synthetic queries (Dai et al., 2022; Saad-Falcon et al., 2023). To generate negatives for fine-tuning our LLM judges, we rely on two novel strategies, generating the same number of negatives with each strategy: |Weak Negative Generation:|For context relevance negatives, we randomly sample in-domain passages unrelated to a given synthetic query. For answer faithfulness and answer relevance negatives, we randomly sample synthetically-generated answers from other passages, which were created using FLAN-T5 XXL.| |---|---| |Strong Negative Generation:|For context relevance negatives, we randomly sample in-domain passages from the same document as the gold passage. For datasets in which multiple passages are not available for the same document, we use BM25 to retrieve the top-10 passages similar to the passage and sample from them for our context relevance strong negatives. For answer faithfulness and answer relevance negatives, we prompt FLAN-T5 XXL (subsection 4.1) to generate a contradictory answer using the few-shot prompt in subsection A.5.| In total, the number of negatives generated equals the number of positives generated for evaluating context relevance and answer relevance. # Preparing LLM Judges To prepare our RAG evaluation judges, we use our synthetic dataset to fine-tune DeBERTa-v3-Large judges (discussed in subsection 4.1) to evaluate three different capabilities (Chen et al., 2023; James and Es, 2023): 1. Context Relevance: Is the passage returned relevant for answering the given query?
# Preparing LLM Judges

To prepare our RAG evaluation judges, we use our synthetic dataset to fine-tune DeBERTa-v3-Large judges (discussed in subsection 4.1) to evaluate three different capabilities (Chen et al., 2023; James and Es, 2023):

1. Context Relevance: Is the passage returned relevant for answering the given query?
2. Answer Faithfulness: Is the answer generated faithful to the retrieved passage, or does it contain hallucinated or extrapolated statements beyond the passage?
|Query ID|Query|Expected answer|Generated response – similarity by paragraph|Generated Response – similarity by sentence, retrieve paragraph|Observations| |---|---|---|---|---|---| |F1|What do the values of RAW Group Indication subfield in RPS element indicate?|The RAW Group Indication subfield indicates whether the RAW Group subfield is present in the RAW Assignment subfield and is interpreted as follows: - When the RAW type is generic RAW, sounding RAW, or triggering frame RAW, the RAW Group Indication subfield indicates whether the RAW group defined in the current RAW assignment is the same RAW group as defined in the previous RAW assignment. When the RAW Group Indication subfield is equal to 0, the RAW group defined in the current RAW assignment is the same as the RAW group defined in the previous RAW assignment and the RAW Group subfield is not present in this RAW assignment.
3. Answer Relevance: Is the answer generated relevant given the query and retrieved passage?

For each metric, a separate LLM with a binary classifier head is fine-tuned to classify positive and negative examples. For each concatenated query-document-answer triple, a single LLM judge must classify the triple as positive or negative for that judge’s metric. To fine-tune these judges, we use our human preference validation set to evaluate model improvement after each epoch, stopping when we have three epochs with no improvement in loss (see subsection A.1 for more information).
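A minimal fine-tuning sketch with Hugging Face transformers is shown below. It mirrors the hyperparameters reported in A.1 (5e-6 learning rate, batch size 32, early stopping after three epochs without improvement), while the dataset objects, column names, and the " | " concatenation format are assumptions for illustration.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer, EarlyStoppingCallback)

model_name = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def encode(batch):
    # Concatenate query, document, and answer into one sequence for the judge.
    text = [q + " | " + d + " | " + a for q, d, a in
            zip(batch["query"], batch["document"], batch["answer"])]
    return tokenizer(text, truncation=True, padding="max_length", max_length=512)

# train_ds holds synthetic positive/negative triples for one metric; dev_ds is the
# human preference validation set. Both are assumed to be datasets.Dataset objects
# with query/document/answer/label columns.
args = TrainingArguments(
    output_dir="context_relevance_judge", learning_rate=5e-6,
    per_device_train_batch_size=32, num_train_epochs=10,
    evaluation_strategy="epoch", save_strategy="epoch",
    load_best_model_at_end=True, metric_for_best_model="eval_loss",
    greater_is_better=False)
trainer = Trainer(
    model=model, args=args,
    train_dataset=train_ds.map(encode, batched=True),
    eval_dataset=dev_ds.map(encode, batched=True),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)])
trainer.train()
```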
Step #1: LLM Generation of Synthetic Dataset: Generate synthetic queries and answers from in-domain passages
Step #2: Preparing LLM Judges: Train LLM judges with synthetic data
Step #3: Ranking RAG Systems with Confidence Intervals: Use LLM judges to evaluate RAG systems with PPI and human labels

Figure 1: Overview of ARES. As inputs, the ARES pipeline requires an in-domain passage set, a human preference validation set of 150 annotated datapoints or more, and few-shot examples of in-domain queries and answers (five or more), which are used for prompting LLMs in synthetic data generation. To prepare our LLM judges for evaluation, we first generate synthetic queries and answers from the corpus passages. Using our generated training triples and a contrastive learning framework, we fine-tune an LLM to classify query–passage–answer triples on three different criteria: context relevance, answer faithfulness, and answer relevance. Finally, we use the LLM judges to score RAG systems and generate confidence bounds for the ranking using PPI and the human preference validation set.

# Ranking RAG Systems with Confidence Intervals

Once we have prepared our LLM judges, we need to use them to score and rank the competing RAG systems. To do this, ARES samples the in-domain query-document-answer triples produced by each RAG approach, and the judges label each triple, predicting its context relevance, answer faithfulness, and answer relevance. By averaging the individual predicted labels for each in-domain triple, we calculate the RAG system's performance on each of the three metrics.
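As a sketch of this scoring step, one of the fine-tuned judges from the previous subsection could be applied to a sample of a RAG system's query-document-answer triples as follows; the " | " concatenation format and the helper name are assumptions.

```python
import torch

@torch.no_grad()
def judge_success_rate(model, tokenizer, triples, device="cuda", batch_size=32):
    """Average the positive-class predictions of one judge over sampled triples
    (e.g. the predicted context-relevance rate of a single RAG system)."""
    model.eval().to(device)
    preds = []
    for i in range(0, len(triples), batch_size):
        batch = triples[i:i + batch_size]
        enc = tokenizer([q + " | " + d + " | " + a for q, d, a in batch],
                        truncation=True, padding=True, return_tensors="pt").to(device)
        preds.extend(model(**enc).logits.argmax(dim=-1).tolist())
    return sum(preds) / len(preds)
```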
In principle, we could simply report these average scores as quality metrics for each RAG system. However, these scores reflect entirely unlabeled data with predictions from a synthetically-trained LLM judge, and hence they may not be entirely accurate. As an extreme alternative, we could use just the small human preference validation set discussed previously for evaluation, reporting the extent to which each RAG system agrees with (or deviates from) the human annotations. However, an annotation-based evaluation approach would require labeling substantially more generative outputs from each RAG system separately, which can be costly in terms of both time and money.

To combine the benefits of both, and hence boost the precision of the evaluation, ARES uses prediction-powered inference (PPI; Angelopoulos et al. 2023) to predict the system scores. PPI is a recent statistical method that provides tighter confidence intervals on a small set of annotated datapoints (i.e., our validation set) by leveraging predictions on a much larger set of non-annotated datapoints. PPI can leverage both the labeled datapoints and the ARES judge predictions on the non-annotated datapoints to construct confidence intervals for our RAG system’s performance.

To do this, PPI uses the LLM judges on the human preference validation set to learn a rectifier function for constructing a confidence set of the ML model’s performance, using each ML prediction in the larger non-annotated dataset. The confidence set can then be used to create a tighter confidence interval for the performance of the evaluated RAG system (e.g. its context relevance, answer faithfulness, or answer relevance accuracy individually) compared to simply using annotated outputs from the evaluated RAG system. By bolstering the human preference validation set with the much larger set of datapoints with ML predictions, PPI can develop reliable confidence intervals for ML model performance that beat previous classical inference approaches.

The PPI rectifier function allows us to estimate the errors of the LLM judge and generate confidence bounds for the success and failure rates of the RAG system, estimating context relevance, answer faithfulness, and answer relevance performance. Additionally, PPI allows us to estimate confidence intervals with a selected level of probability; for our experiments, we use a standard 95% confidence level for our confidence intervals.
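The snippet below is a simplified sketch of the prediction-powered estimate and confidence interval for a single metric (e.g. a system's context-relevance success rate). It follows the basic PPI recipe for mean estimation, not the exact rectifier construction used by ARES, so treat it as illustrative.

```python
import numpy as np
from scipy.stats import norm

def ppi_mean_ci(labeled_preds, labeled_truth, unlabeled_preds, alpha=0.05):
    """Prediction-powered point estimate and (1 - alpha) confidence interval.

    labeled_preds / labeled_truth: judge predictions and human labels (0/1) on the
    small validation set; unlabeled_preds: judge predictions on the large unlabeled sample.
    """
    labeled_preds = np.asarray(labeled_preds, dtype=float)
    labeled_truth = np.asarray(labeled_truth, dtype=float)
    unlabeled_preds = np.asarray(unlabeled_preds, dtype=float)

    rectifier = labeled_truth - labeled_preds            # judge error measured on labeled data
    theta = unlabeled_preds.mean() + rectifier.mean()    # rectified point estimate

    n, N = len(labeled_preds), len(unlabeled_preds)
    se = np.sqrt(unlabeled_preds.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    z = norm.ppf(1 - alpha / 2)
    return theta - z * se, theta + z * se
```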
With the accuracy confidence interval for each component of the RAG, we find the midpoint of each confidence interval and use the midpoints to rank the RAG systems. With our ranking, we can compare different RAG systems, as well as different configurations of the same RAG system, to find the best-performing approach for a given domain.
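A minimal sketch of this midpoint-based ranking, assuming the PPI intervals from the previous step are stored per system and per metric:

```python
def rank_by_midpoint(ci_by_system):
    """ci_by_system: {"system name": (lower, upper)} for one metric; returns best-first order."""
    midpoints = {name: (lo + hi) / 2 for name, (lo, hi) in ci_by_system.items()}
    return sorted(midpoints, key=midpoints.get, reverse=True)

# Illustrative numbers only:
print(rank_by_midpoint({"ColBERTv2 + GPT-4": (0.81, 0.88), "BM25 + MPT": (0.62, 0.71)}))
```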
# Experiments

# Models

For our fine-tuned judges, ARES relies on generating cheap but high-quality synthetic queries and answers using LLMs. For generating our synthetic datasets, we use FLAN-T5 XXL (Chung et al., 2022). We selected DeBERTa-v3-Large (He et al., 2021) for our fine-tuned LLM judge. Our fine-tuned LLM judges allow us to rank RAG systems without relying on external APIs, solely using few-shot prompts and deployable LLMs on commercial GPUs. For our in-context learning baseline, we use OpenAI’s gpt-3.5-turbo-16k (version 10/23; Brown et al., 2020) in a zero/few-shot setting. For similarity search over in-domain passages, we use FAISS IndexFlatL2 for indexing (Johnson et al., 2019) and OpenAI’s text-embedding-ada-002 for generating embeddings. We use this similarity search to filter out synthetic queries that cannot retrieve the passage from which they were generated. We use version 0.0.18 of RAGAS in our experiments (James and Es, 2023).

# Datasets

Our core experimental goal is to provide a rich picture of where ARES can be applied effectively. To test across multiple types of queries, documents, and answers, we selected all the datasets from the widely-used KILT and SuperGLUE benchmarks for which RAG is appropriate. From KILT (Petroni et al., 2021), we use Natural Questions (NQ), HotpotQA, FEVER, and Wizard of Wikipedia (WoW) (Kwiatkowski et al., 2019; Yang et al., 2018; Akhtar et al., 2023; Dinan et al., 2018). Each dataset uses Wikipedia passages, but the queries and answers offer a range of applications. Both NQ and HotpotQA feature direct questions and expect short answers, but NQ uses single passages for reasoning while HotpotQA requires multiple passages for reasoning. Furthermore, FEVER focuses on fact-verification, determining if a passage supports or refutes a given statement, and expects an output of “SUPPORTS” or “REFUTES”. WoW seeks to evaluate dialogue agents by mapping user dialogue to relevant Wikipedia passages before a chatbot generates a paragraph-length chat response incorporating passage knowledge.

From SuperGLUE (Wang et al., 2019), we use MultiRC and ReCoRD (Khashabi et al., 2018; Zhang et al., 2018). MultiRC focuses on direct questions for seven different domains (News, Wikipedia articles, articles on society/law/justice, articles on history/anthropology, elementary school science textbooks, 9/11 reports, and fiction). ReCoRD focuses on determining the placeholder entity in a statement, focusing on news articles from CNN and the Daily Mail. For MultiRC and ReCoRD, we create open-domain versions of their tasks. For MultiRC, we perform retrieval over its seven sets of domain passages. For ReCoRD, we perform retrieval over its news article passages.

The efficacy of ARES relies on its ability to rank different RAG systems while only using a human preference validation set and domain-targeted LLM judges. To test the limits of ARES, we need to simulate the existence of many RAG systems that are separated by small accuracy margins on our evaluation metrics. For this, we create systems using artificial query-passage-answer triples, in which we empirically know the positive and negative examples of the mock RAG system. We generate these mock splits of the given datasets by selecting (1) the positive and negative query-passage matches for context relevance, and (2) the positive and negative query-passage-answer matches for answer relevance. We include positive and negative examples from our evaluation sets in Table 7.
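Returning to the synthetic-query filtering step described in the Models paragraph above, a minimal sketch with FAISS IndexFlatL2 could look like the following. The embedding function and the passage-ID bookkeeping are assumptions; any embedding model producing float32 vectors would work the same way.

```python
import faiss
import numpy as np

def build_index(passage_embeddings: np.ndarray) -> faiss.IndexFlatL2:
    """passage_embeddings: (num_passages, dim) array; row i belongs to passage id i."""
    embeddings = np.ascontiguousarray(passage_embeddings, dtype="float32")
    index = faiss.IndexFlatL2(embeddings.shape[1])
    index.add(embeddings)
    return index

def keep_synthetic_query(index: faiss.IndexFlatL2,
                         query_embedding: np.ndarray,
                         source_passage_id: int) -> bool:
    """Keep a synthetic query only if its source passage is the top-1 retrieval result."""
    query = np.ascontiguousarray(query_embedding.reshape(1, -1), dtype="float32")
    _, ids = index.search(query, 1)
    return int(ids[0, 0]) == source_passage_id
```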
For our positive triples, we can simply use the KILT and SuperGLUE examples without any alteration. For gathering negative query-passage pairs and query-passage-answer triples, we randomly sample passages and answers from either the same Wikipedia document or an entirely random Wikipedia document. This sampling allows us to artificially create mock RAG systems for testing ARES. By sampling both related and unrelated documents/answers, we hope to better gauge the efficacy of ARES in judging RAG outputs. We do not evaluate answer faithfulness for KILT and SuperGLUE datasets since we do not have human-annotated hallucinated answers to use for evaluation. However, we do test the ARES framework on real attribution datasets in Section 5.2.

Using the validation subsets for each KILT and SuperGLUE dataset, we create nine different dataset splits, ranging from a 70% success rate to a 90% success rate for each of the evaluated RAG criteria; the splits are separated by 2.5 accuracy points (i.e., 70.0%, 72.5%, 75.0%, ..., 90.0%). Each split also represents a different mock RAG system. Since we know the success percentage of each dataset split, we know the appropriate ranking of each mock RAG system.
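A small sketch of how such mock splits could be assembled, assuming lists of known-positive and known-negative triples are available; the split size and seed are illustrative.

```python
import random

def build_mock_split(positives, negatives, success_rate, size=1000, seed=0):
    """Mix known-good and known-bad triples so that `success_rate` of the split is positive."""
    rng = random.Random(seed)
    n_pos = round(size * success_rate)
    split = rng.sample(positives, n_pos) + rng.sample(negatives, size - n_pos)
    rng.shuffle(split)
    return split

# Nine mock RAG systems from 70% to 90% success in 2.5-point steps:
mock_systems = {rate: build_mock_split(positives, negatives, rate / 100)
                for rate in [70 + 2.5 * i for i in range(9)]}
```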
This allows us to test ARES’s success at both scoring and ranking the mock RAG systems appropriately across the three evaluation criteria.

4.3 Metrics

To calculate the correlation between the correct ranking and the ARES ranking, we use the Kendall rank correlation coefficient, or Kendall’s τ:

τ = [(# of concordant pairs) − (# of discordant pairs)] / (total # of pairs)

Concordant pairs are defined as two ordinal values in the ranking where the earlier value in the sequence is lower than the later value in the sequence. Discordant pairs are defined as two ordinal values in the ranking where the earlier value in the sequence is greater than or equal to the later value in the sequence. Kendall’s τ ranges from −1.0 to 1.0, and a value greater than 0.9 is considered successful.

In development, researchers and engineers will be comparing different RAG configurations through individual pairwise comparisons of model choices, retriever selection, and document preprocessing. We want to make sure that ARES has satisfactory accuracy in pairwise comparisons across a variety of performance gaps between RAG systems. Kendall’s τ is explicitly designed for measuring the accuracy of such pairwise comparisons, calculating the correlation between a perfectly accurate pairwise ranking and an experimental pairwise ranking. Thus, it is a popular and widespread metric in information retrieval, allowing developers to evaluate ranking systems empirically. Therefore, we believe Kendall’s τ and prediction accuracy provide meaningful metrics for testing the efficacy of ARES as a RAG evaluation system.

5 Results & Analysis

5.1 ARES Ranking

Table 1 summarizes our main evaluation of ARES (with DeBERTa-v3-Large as the pretrained basis for the judges). We compare against RAGAS (version 0.0.18) and a baseline few-shot prompted GPT-3.5 judge (gpt-3.5-turbo-16k). For the few-shot GPT-3.5 judge, we provide few-shot examples for guiding predictions; the prompts are included in Appendices A.2, A.3, and A.4. For both ARES and the GPT-3.5 judge baseline, we augment the LLM with PPI, using a 300-datapoint human preference validation set to rectify the ML predictions and produce confidence intervals.

Across almost all settings on the datasets from KILT and SuperGLUE, ARES provides a more accurate ranking of RAG systems than RAGAS. ARES averages a Kendall’s τ 0.065 higher for context relevance and 0.132 higher for answer relevance than RAGAS. Additionally, the fine-tuned LLM judge is substantially more accurate than RAGAS at predicting the context relevance and answer relevance of a query-passage-answer triple. For context relevance, ARES with a fine-tuned LLM judge is 59.9 percentage points higher than RAGAS, while for answer relevance, our system is 14.4 percentage points higher than RAGAS. Overall, ARES provides a more accurate system for automatically evaluating RAG configurations than RAGAS by leveraging domain-adaptive techniques for prompting and training as well as utilizing PPI to bolster model predictions. As an additional comparison, we also include the Kendall’s τ for RAG ranking with the ARES LLM judge without PPI; for all datasets tested, PPI improved the ranking prediction accuracy of the fine-tuned LLM judge. Furthermore, we included a sampled annotations configuration, in which we sampled 150 datapoints from each mock RAG system, totaling 1,350 annotations.
Even with all these annotations, the Kendall’s τ for ARES is 0.08 higher on average, across both context and answer relevance, compared to sampled annotations, despite using 78% fewer annotations. In sum, ARES proves significantly more data-efficient with human annotations while being more accurate at scoring than standard sampled-annotation methods. Compared to the GPT-3.5 judge, ARES provides a more accurate ranking of the RAG systems, averaging a Kendall’s τ 0.06 higher over both context relevance and answer relevance. Between the judge configurations, the fine-tuned LLM judge of ARES can more precisely distinguish between RAG systems and guide configuration decisions surrounding document splitting, retriever selection, and generative LLM choice. However, while the fine-tuned LLM judge had a higher Kendall’s τ on average, the GPT-3.5 judge is more readily deployable and does not require any additional fine-tuning. The GPT-3.5 judge does come with its own querying costs, which can vary based on the date of querying as well as the total tokens used in evaluation. We also wanted to better understand the importance of human annotations for ARES.
To this end, we conducted two sets of experiments.
First, we
When the RAW Group Indication subfield is equal to 1, the RAW Group subfield is present in this RAW assignment. The RAW Group Indication subfield in the first RAW assignment is set to 0 to indicate the RAW group in the first RAW assignment is the same as the range of AIDs in all the TIM bitmaps in the S1G Beacon frame. - When the RAW is a non-TIM RAW, the RAW Group Indication subfield is set to 0 and the RAW Group subfield is not present. - When the RAW is an AP PM RAW, the RAW Group Indication subfield equal to 0 indicates that the RAW group does not include any of the non-AP STAs, and the RAW Group subfield is not present. When the RAW Group Indication subfield is equal to 1, the RAW Group subfield is present.|'According to the given paragraphs, the values of the RAW Group Indication subfield in the RPS (Radio Parameter Set) element indicate whether the RAW Group subfield is present or not in the RAW Assignment subfield.'|According to the given paragraphs, the values of the RAW Group Indication subfield in the RPS (Radio Path Selection) element indicate whether the RAW Group subfield is present in this RAW assignment. If the value of RAW Group Indication subfield is equal to 0, it means that the RAW group defined in the current RAW assignment is the same as the RAW group defined in the previous RAW assignment, and the RAW Group subfield is not present in this RAW assignment. If the value of RAW Group Indication subfield is equal to 1, it means that the RAW Group subfield is present in this RAW assignment|A much broader context in general being available for the LLM because of multiple retrieved paragraphs helps it in create good long form question answering.| |F2|What is IGTK|integrity group temporal key (IGTK): A random value, assigned by the broadcast/multicast source station (STA), which is used to protect group addressed medium access control (MAC) management protocol data units (MMPDUs) from that source STA.|Sure! Based on the given paragraphs, the answer to your question is: IGTK stands for "integrity group temporal key"|According to the given paragraphs, the answer to question 1 is: IGTK stands for "integrity group temporal key"|Very often the generated response is a trivial answer giving the full form of an abbreviated technical term but not giving its definition| |F3|What is idle power indicator|idle power indicator (IPI): A physical layer (PHY) indication of the total channel power (noise and interference) as measured in the channel at the receiving antenna connector while the station (STA) is idle, i.e., neither transmitting nor receiving a frame.|IPI - idle power indicator|IPI - idle power indicator|Very often a term is just abbreviated in a response. This can sometimes be reduced if a user a question like “Define Idle Power Indicator” but since this is left to the user can create issues|
# ARES Ranking of Pseudo RAG Systems

| |NQ C.R.|NQ A.R.|HotpotQA C.R.|HotpotQA A.R.|WoW C.R.|WoW A.R.|FEVER C.R.|FEVER A.R.|MultiRC C.R.|MultiRC A.R.|ReCoRD C.R.|ReCoRD A.R.|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|Kendall’s Tau for Sampled Annotations|0.83|0.89|0.78|0.78|0.78|0.83|0.89|0.89|0.83|0.83|0.72|0.94|
|Kendall’s Tau for RAGAS|0.89|0.89|0.94|0.89|0.94|0.94|0.72|0.61|0.83|0.94|0.89|0.44|
|Kendall’s Tau for GPT-3.5 Judge|0.89|0.94|0.67|0.94|0.94|0.89|0.78|0.78|0.83|0.89|0.83|0.94|
|Kendall’s Tau for ARES LLM Judge|0.89|1.0|0.89|0.94|0.94|1.0|0.83|0.72|0.94|0.83|0.78|0.83|
|Kendall’s Tau for ARES|0.94|1.0|0.94|0.94|1.0|1.0|0.89|0.78|0.94|0.89|0.83|0.89|
|RAGAS Accuracy|31.4%|71.2%|17.2%|76.0%|36.4%|77.8%|23.7%|69.2%|16.1%|75.0%|15.0%|72.8%|
|GPT-3.5 Judge Accuracy|73.8%|95.5%|75.3%|71.6%|84.3%|85.2%|60.4%|59.6%|72.4%|60.3%|81.0%|65.8%|
|ARES Accuracy|79.3%|97.2%|92.3%|81.3%|85.7%|96.1%|88.4%|78.5%|85.8%|82.7%|67.8%|92.3%|

Table 1: ARES Ranking with Fine-tuned LLM Judges vs. Sampled Annotations, RAGAS, and GPT-3.5 Judge: For scoring context relevance and answer relevance (C.R. and A.R. in the table, respectively), we compare ARES with our fine-tuned LLM judges against a sampled annotations benchmark, RAGAS, and a few-shot GPT-3.5 judge. For our sampled annotations, we gather 150 annotated datapoints from each mock RAG system and use those labels to score the system. RAGAS also uses GPT-3.5 as its judge, but it uses few-shot prompts that are not targeted for each evaluation domain. Overall, we found that ARES ranked RAG systems more accurately than RAGAS and GPT-3.5 across all the explored datasets. The Kendall’s tau for ARES was 0.065 higher on average for scoring context relevance and 0.132 higher on average for scoring answer relevance than RAGAS. Additionally, we include the Kendall’s taus for the ARES LLM Judge without PPI and found that PPI further boosted the ranking accuracy of the judge across the board. We selected GPT-3.5 instead of GPT-4 due to the lower financial costs required to run. For PPI in both ARES and the GPT-3.5 judge, we used 300 human annotations for our human preference validation set. The prompts used for the GPT-3.5 judges are included in Sections A.2, A.3, and A.4.

5.2 ARES Performance on AIS

To evaluate whether ARES can effectively gauge answer faithfulness in real RAG systems, we tested ARES on the AIS attribution benchmark (Rashkin et al., 2022). In AIS, we selected the Wizard of Wikipedia (WoW) and CNN/DM datasets; the other benchmark datasets involve either table reasoning (ToTTo) or focus on passage summarization (QRECC), so we excluded them. In WoW and CNN/DM, each evaluation example includes a query, a retrieved passage, and a generated answer (which is either faithful or non-attributed to the retrieved passage). Table 2 summarizes our AIS results. We found that ARES can effectively score the AIS datasets, getting within 2.5 accuracy points of the correct scores. Furthermore, for scoring each system, we only use 200 annotated datapoints for our human preference validation set. Our results on AIS demonstrate the ability of ARES to reliably distinguish faithful and hallucinated answers in real-world RAG systems.

| |WoW|CNN / DM|
|---|---|---|
|ARES Split Prediction|0.478|0.835|
|Correct Positive/Negative Split|0.458|0.859|
|ARES Judge Accuracy|62.5%|84.0%|
|Evaluation Set Size|707|510|
|Human Preference Data Size|200|200|

Table 2: ARES Results on the AIS benchmark

5.3 ARES Ranking of Existing RAG Systems

We also wanted to evaluate whether ARES can score and rank existing RAG systems across both context relevance and answer relevance. For evaluation, we selected the NQ, WoW, and FEVER datasets from KILT.
We consider the answer gen
In Table 5, we found that ARES can reliably score and rank RAG systems in real-world applications, averaging a Kendall’s tau of 0.91 for context relevance and 0.97 for answer relevance. Compared to RAGAS, ARES is 0.16 higher for context relevance and 0.15 higher for answer relevance, on average. ARES also provided accurate confidence bounds for its predictions, capturing the ground truth average outcomes for context relevance and answer relevance more than 95% of the time; on average, the PPI confidence intervals were 7.4 points wide for context relevance and 6.1 points wide for answer relevance (see Figure 2 and Figure 3 for ARES vs. RAGAS). Among the models tested, the best performing retriever was ColBERTv2 while the best performing generative LLM was GPT-4. # Strengths and Limits of Cross-Domain Applications The generalizability of the LLM judge used in ARES is critical for deploying our framework in specialized domains, particularly domains where in-domain queries, documents, and answers are difficult to gather. Therefore, we wanted to test how the LLM judges used in ARES would be affected by three domain shifts: change in query type from training to test (e.g. NQ to FEVER), change in document type from training to test (e.g. NQ to MultiRC), and change in both query and document type (e.g.
NQ to ReCoRD). In Table 6, we found that the fine-tuned LLM judges used in ARES proved successful in cross-domain applications. Across all settings, we found that LLM judges in ARES had strong generalizability, even when only using 300 datapoints in our human preference validation set for PPI. Furthermore, we found that even when the LLM judge’s accuracy suffered in cross-domain applications, PPI helped mitigate the loss in accuracy and still allow ARES to be successful. Additional examples for PPI also continued to boost cross-domain ARES performance in subsequent tests. While LLM judges in ARES were successful in cross-domain applications for KILT and SuperGLUE, LLM judges are unable to generalize when making more drastic shifts in domain, such as: switching languages (e.g. English to Spanish, German, and other languages), switching from text to code (e.g.
questions + passages to coding functions + documentation), and switching from retrieving text to extraction of entities, webpages, or citations. To test cross-lingual transfer, we used the XGLUE datasets (Liang et al., 2020); an LLM judge fine-tuned on NQ achieved a Kendall’s tau of 0.33 over both context relevance and answer relevance scoring for XGLUE. To test text-to-code, we used CodeSearchNet (Husain et al., 2019); an LLM judge fine-tuned on NQ achieved a Kendall’s tau of 0.28 over both context relevance and answer relevance scoring for CodeSearchNet. To test extraction-task generalizability, we used T-Rex from KILT (Elsahar et al., 2018; Petroni et al., 2021); an LLM judge fine-tuned on NQ achieved a Kendall’s tau of 0.38 over both context relevance and answer relevance scoring for T-Rex. Each cross-domain shift requires in-domain passages and few-shot query examples for reconfiguring ARES judges.

# Conclusion

In this work, we present ARES, a novel automated evaluation framework for retrieval-augmented generation (RAG). ARES offers a novel training pipeline for fine-tuning lightweight LLM judges on synthetically generated queries and answers. ARES can evaluate each component of a RAG system separately to help improve system understanding and create targeted solutions, and it requires only minimal human annotations. For the eight different datasets in KILT, SuperGLUE, and AIS requiring RAG-based solutions, we found that ARES can accurately score and rank RAG systems based on context relevance, answer faithfulness, and answer relevance scores, beating the existing RAGAS automated evaluation framework. ARES is a flexible framework, and there may be variants of it that are even more powerful than the ones we explored here. Avenues to explore include GPT-4 as a replacement for human labeling (Table 4), more robust techniques for the synthetic datasets used in fine-tuning LLM judges, utilizing
logits in LLM judge prediction to improve PPI confidence intervals, and testing more sophisticated LLMs as fine-tuned judges for ARES.

# Limitations

ARES relies on a small set of annotations in the human preference validation set (roughly 150-300 datapoints, but more is better). These annotations often require an annotator familiar with the RAG system’s domain application. While these annotations can be easy to generate for general-domain applications, more specialized domains, such as law, medicine, and finance, may require annotators with specialized expertise.

The LLMs used in ARES benefit substantially from GPU-based hardware with substantial storage. In ARES, DeBERTa-v3-Large (304M) and FLAN-T5-XXL (11.3B) required GPUs with about 32GB of memory to run, taking several hours for fine-tuning and generation, respectively. While commercial GPUs are widely available, they are not easily accessible to all NLP researchers and practitioners due to their costs.

Additionally, all of the datasets used in our evaluation of ARES are in English, a well-resourced language with abundant annotations. Future work should explore how ARES can be employed in other languages by utilizing different LLMs for the ARES judge and the synthetic data generation. This can help us better understand the strengths and weaknesses of the current ARES framework.

# References

Mubashara Akhtar, Rami Aly, Christos Christodoulopoulos, Oana Cocarascu, Zhijiang Guo, Arpit Mittal, Michael Schlichtkrull, James Thorne, and Andreas Vlachos, editors. 2023. Proceedings of the Sixth Fact Extraction and VERification Workshop (FEVER). Association for Computational Linguistics, Dubrovnik, Croatia.

Anastasios N. Angelopoulos, Stephen Bates, Clara Fanjiang, Michael I. Jordan, and Tijana Zrnic. 2023. Prediction-powered inference.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.

Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2023. Benchmarking large language models in retrieval-augmented generation. arXiv preprint arXiv:2309.01431.

2023. Retrieving supporting evidence for LLMs generated answers. arXiv preprint arXiv:2306.13781.
# Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt 2019. Code-SearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436. # Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave 2022. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299.
Prompt-RAG: Pioneering Vector Embedding-Free Retrieval-Augmented Generation in Niche Domains, Exemplified by Korean Medicine Bongsu Kang1, Jundong Kim1, Tae-Rim Yun, Chang-Eop Kim1, 2, * 1Department of Physiology, College of Korean Medicine, Gachon University, Seongnam, Gyeonggi, Republic of Korea 2Department of Neurobiology, Stanford University School of Medicine, Stanford, California, USA * Corresponding Author: Chang-Eop Kim Email: eopchang@gachon.ac.kr ABSTRACT We propose a natural language prompt-based retrieval augmented generation (Prompt-RAG), a novel approach to enhance the performance of generative large language models (LLMs) in niche domains. Conventional RAG methods mostly require vector embeddings, yet the suitability of generic LLM-based embedding representations for specialized domains remains uncertain. To explore and exemplify this point, we compared vector embeddings from Korean Medicine (KM) and Conventional Medicine (CM) documents, finding that KM document embeddings correlated more with token overlaps and less with human-assessed document relatedness, in contrast to CM embeddings. Prompt-RAG, distinct from conventional RAG models, operates without the need for embedding vectors.
# Jithin James and Shahul Es 2023. Ragas: Evaluation framework for your retrieval augmented generation (rag) pipelines. # Jeff Johnson, Matthijs Douze, and Hervé Jégou 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547. # Ehsan Kamalloo, Aref Jafari, Xinyu Zhang, Nandan Thakur, and Jimmy Lin 2023. Hagrid: A human-llm collaborative dataset for generative information-seeking with attribution. # Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen tau Yih 2020. Dense passage retrieval for open-domain question answering. # Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262. # Omar Khattab, Christopher Potts, and Matei Zaharia 2021. Relevance-guided supervision for openqa with colbert. Transactions of the association for computational linguistics, 9:929–944. # Diederik P. Kingma and Jimmy Ba 2017. Adam: A method for stochastic optimization. # Tom Kocmi and Christian Federmann 2023. Large language models are state-of-the-art evaluators of translation quality. arXiv preprint arXiv:2302.14520.
# Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, and Kyle Lo 2023. LongEval: Guidelines for human evaluation of faithfulness in long-form summarization. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1650–1669, Dubrovnik, Croatia.
Association for Computational Linguistics. # Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466. # Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. # Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474. # Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou 2020. Xglue: A new benchmark dataset for cross-lingual pre-training, understanding and generation. arXiv, abs/2004.01401. # Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu 2023a. G-eval: Nlg evaluation using gpt-4 with better human alignment, may 2023. arXiv preprint arXiv:2303.16634. # Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang 2023b. Calibrating llm-based evaluator. arXiv preprint arXiv:2309.13308. # Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom 2023. Augmented language models: a survey. # Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi 2023. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. # Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online. Association for Computational Linguistics. # Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter 2022. Measuring attribution in natural language generation models. # Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin Franz, Salim Roukos, Avirup Sil, Md Arafat Sultan, and Christopher Potts 2023.
Udapdr: Unsupervised domain adaptation via llm prompting and distillation of rerankers. arXiv preprint arXiv:2303.00807.
David P Sander and Laura Dietz. 2021. Exam: How to evaluate retrieve-and-generate systems for users who do not (yet) know what they want. In DESIRES, pages 136–146. Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia.
2022. COLBERTv2: Effective and efficient retrieval via lightweight late interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3715–3734, Seattle, United States. Association for Computational Linguistics. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. MosaicML NLP Team. 2023. Introducing mpt-30b: Raising the bar for open-source foundation models. Accessed: 2023-06-22. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32. Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.
Xiang Yue, Boshi Wang, Ziru Chen, Kai Zhang, Yu Su, and Huan Sun. 2023. Automatic evaluation of attribution by large language models.

Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685.

# Appendix

# Fine-tuning Configuration for LLM Judges

For our loss function used in LLM judge training, we selected cross-entropy loss using Adam (Kingma and Ba, 2017). For our classification head, we use a single linear classification layer and apply a 0.1 dropout to the input, which is the final hidden state of the [CLS] token. For our learning schedule, we use linear warmup and linear decay (Howard and Ruder, 2018) with a 5e-6 learning rate and a 32 training batch size across all experimental configurations.

# GPT Prompting for Context Relevance Scoring

For the NQ, HotpotQA, MultiRC, and ReCoRD datasets, we use 8 few-shot examples with the following prompt to score context relevance:

- Given the following question and document, you must analyze the provided document and determine whether it is sufficient for answering the question. In your evaluation, you should consider the content of the document and how it relates to the provided question. Output your final verdict by strictly following this format: "[[Yes]]" if the document is sufficient and "[[No]]" if the document provided is not sufficient. Do not provide any additional explanation for your decision.
- Question: <few-shot example here>
- Document: <few-shot example here>

For FEVER, we use the following prompt to score context relevance:

- You are an expert fact-checking agent. Given the following statement and document, you must analyze the provided document and determine whether it is sufficient for determining the statement’s factuality. In your evaluation, you should consider the content of the document and how it relates to the provided statement’s factuality. Output your final verdict by strictly following this format: "[[Yes]]" if the document is sufficient and "[[No]]" if the document is not sufficient. Do not provide any additional explanation for your decision.
- Statement: <few-shot example here>
- Document: <few-shot example here>

For WoW, we use the following prompt to score context relevance:

- You are an expert dialogue agent. Given the following dialogue and document, you must
analyze the provided document and determine whether it is relevant for responding to the dialogue. In your evaluation, you should consider the content of the document and how it relates to the provided dialogue. Output your final verdict by strictly following this format: "[[Yes]]" if the document is relevant and "[[No]]" if the document provided is not relevant. Do not provide any additional explanation for your decision.
- Dialogue: <few-shot example here>
- Document: <few-shot example here>

# GPT Prompting for Answer Faithfulness Scoring

For the NQ, HotpotQA, MultiRC, and ReCoRD datasets, we use 8 few-shot examples with the following prompt to score answer faithfulness:

- Given the following question, document, and answer, you must analyze the provided answer and determine whether it is faithful to the contents of the document. The answer must not offer new information beyond the context provided in the document.
The answer also must not contradict information provided in the document. Output your final verdict by strictly following this format: "[[Yes]]" if the answer is faithful to the document and "[[No]]" if the answer is not faithful to the document. Do not provide any additional explanation for your decision.
- Question: <few-shot example here>
- Document: <few-shot example here>
- Answer: <few-shot example here>

For FEVER, we change the word "question" in the prompt to "statement". For WoW, we change the word "question" in the prompt to "dialogue".

# GPT Prompting for Answer Relevance Scoring

For the NQ, HotpotQA, MultiRC, and ReCoRD datasets, we use 8 few-shot examples with the following prompt to score answer relevance:

- Given the following question, document, and answer, you must analyze the provided answer and document before determining whether the answer is relevant for the provided question. In your evaluation, you should consider whether the answer addresses all aspects of the question and provides only correct information from the document for answering the question. Output your final verdict by strictly following this format: "[[Yes]]" if the answer is relevant for the given question and "[[No]]" if the answer is not relevant for the given question. Do not provide any additional explanation for your decision.
- Question: <few-shot example here>
- Document: <few-shot example here>
- Answer: <few-shot example here>

For FEVER, we change the word "question" in the prompt to "statement". For WoW, we change the word "question" in the prompt to "dialogue".

# Prompting for Generation of Synthetic Queries and Answers

To generate synthetic queries and answers using FLAN-T5, we use the following prompt and provide 5 few-shot examples:

- Example N
- Question: <few-shot example here>
- Document: <few-shot example here>
- Answer: <few-shot example here>

We use the same prompting structure for generating incorrect or contradictory answers; we simply swap out the few-shot examples to be incorrect or contradictory instead.

# Synthetic Query and Answer Generation

For generating our synthetic questions, we use the following prompt for FLAN-T5 XXL:

- Example #1
- Document: <few-shot example here>
- Query: <few-shot example here>
- Example #2
- Document: <few-shot example here>
- Query: <few-shot example here>
- Example #3
- Document: <few-shot example here>
- Query: <few-shot example here>
- Example #4
- Document: <in-domain passage>
- Query:
For generating our synthetic answers, we use the following prompt for FLAN-T5 XXL:

- Example #1
- Query: <few-shot example here>
- Document: <few-shot example here>
- Answer: <few-shot example here>
- Example #2
- Query: <few-shot example here>
- Document: <few-shot example here>
- Answer: <few-shot example here>
- Example #3
- Query: <few-shot example here>
- Document: <few-shot example here>
- Answer: <few-shot example here>
- Example #4
- Query: <synthetic query here>
- Document: <in-domain passage here>
- Answer:
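To make the query- and answer-generation prompts above concrete, a minimal sketch of assembling them for FLAN-T5 XXL with Hugging Face transformers could look as follows. The model identifier, device placement, and generation settings are assumptions about one possible deployment, not the exact configuration used for ARES.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Loading FLAN-T5 XXL this way assumes the `accelerate` package and enough GPU memory.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl", device_map="auto")

def synthetic_query(few_shot_examples, passage, max_new_tokens=64):
    """few_shot_examples: list of (document, query) pairs used verbatim in the prompt."""
    prompt = ""
    for i, (doc, query) in enumerate(few_shot_examples, start=1):
        prompt += f"Example #{i}\nDocument: {doc}\nQuery: {query}\n\n"
    prompt += f"Example #{len(few_shot_examples) + 1}\nDocument: {passage}\nQuery:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```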
Its performance was assessed through a Question-Answering (QA) chatbot application, where responses were evaluated for relevance, readability, and informativeness. The results showed that Prompt-RAG outperformed existing models, including ChatGPT and conventional vector embedding-based RAGs, in terms of relevance and informativeness. Despite challenges like content structuring and response latency, the advancements in LLMs are expected to encourage the use of Prompt-RAG, making it a promising tool for other domains in need of RAG methods. Keywords: Retrieval augmented generation, Natural language process, Korean medicine, Conversational AI, Question-answering, GPT
# RAG Systems Evaluation on NQ - Context Relevance

[Figure: context relevance scores on NQ for the Facebook RAG baseline and for BM25, OpenAI embedding, and ColBERT retrievers, each paired with MPT, GPT-3.5, and GPT-4 generators.]

# RAG Systems Evaluation on NQ - Answer Relevance

[Figure: answer relevance scores on NQ for the same retriever and generator combinations.]
# Kendall’s Tau by Dataset

|PPI Labeled Count|NQ C.R.|NQ A.R.|MultiRC C.R.|MultiRC A.R.|ReCoRD C.R.|ReCoRD A.R.|
|---|---|---|---|---|---|---|
|400|1.0|1.0|0.89|0.94|0.89|0.94|
|300|0.89|1.0|0.94|0.89|0.83|0.89|
|200|0.83|1.0|0.83|0.94|0.83|0.83|
|150|0.72|1.0|0.83|0.89|0.72|0.83|
|100|0.44|1.0|0.67|0.67|0.67|0.83|
|50|0.44|0.94|0.61|0.44|0.56|0.67|
|25|0.44|0.89|0.56|0.44|0.44|0.56|

Table 3: Analysis of PPI Labeled Count vs. ARES Efficacy by Kendall’s Tau: The Kendall’s tau values represent the correlation between the correct ranking and the ARES ranking of the pseudo RAG systems. We use the same experimental set-up as described in subsection 4.2. We find that below about 100-150 datapoints in the human preference validation set, ARES cannot meaningfully distinguish between the alternate RAG systems based on their accuracies in context relevance and answer relevance (C.R. and A.R., respectively).

# ARES Ranking of Pseudo RAG Systems using GPT-4 Labels

| |NQ Context Relevance|NQ Answer Relevance|ReCoRD Context Relevance|ReCoRD Answer Relevance|MultiRC Context Relevance|MultiRC Answer Relevance|
|---|---|---|---|---|---|---|
|Kendall’s Tau|0.78|1.0|0.78|0.72|0.89|0.78|
|Kendall’s Tau of Human Labeled Approach|0.94|1.0|0.83|0.89|0.94|0.89|
|Average PPI Range|9.2%|6.8%|8.2%|9.0%|7.7%|8.3%|
|Accuracy on RAG Evaluation Sets|79.3%|96.7%|88.4%|78.3%|85.8%|82.5%|

Table 4: GPT-4 Labels vs. Human Labels: We wanted to explore the practicality of using GPT-4 generated labels instead of human annotations for our human preference validation set in ARES. In the experiments, we generated 500 GPT-4 labels as replacements for human labeling using few-shot prompts (see Sections A.2, A.3, and A.4). While GPT-4 generated labels decreased Kendall’s tau in most settings by 0.05 to 0.30, the ability to cheaply produce GPT-4 generated labels significantly reduces the cost of annotation, cutting it from hundreds of annotations to less than ten for few-shot prompts. Additionally, the efficacy of PPI continues improving as we generate more GPT-4 generated labels.
In the table, we define PPI range as the number of percentage points from the lower bound to the upper bound of the PPI confidence interval. Additionally, we use the fine-tuned LLM judge (DeBERTa-v3-Large) for evaluation.
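To make the ranking-correlation numbers in these tables concrete, the sketch below computes a Kendall's tau between a reference ranking of RAG systems and a predicted (e.g., ARES-produced) ranking with SciPy; the system scores are invented placeholders for illustration.

```python
# Sketch: Kendall's tau between a reference ranking and a predicted ranking of RAG systems.
from scipy.stats import kendalltau

# Higher score = better system; both lists are indexed by the same nine systems.
reference_scores = [0.62, 0.58, 0.55, 0.51, 0.47, 0.42, 0.38, 0.33, 0.29]
predicted_scores = [0.60, 0.59, 0.52, 0.50, 0.48, 0.40, 0.39, 0.31, 0.30]

tau, p_value = kendalltau(reference_scores, predicted_scores)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```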
# Table 5: ARES Ranking on Real-World RAG Systems

| |Kendall’s Tau for Sampled Annotations|Kendall’s Tau for RAGAS|Kendall’s Tau for GPT-3.5 Judge|Kendall’s Tau for ARES LLM Judge|Kendall’s Tau for ARES|RAGAS Accuracy|GPT-3.5 Accuracy|ARES Accuracy|
|---|---|---|---|---|---|---|---|---|
|C.R.|0.73|0.73|0.73|0.82|0.82|35.9%|80.5%|85.6%|
|A.R.|0.78|0.82|0.87|0.96|0.96|68.2%|91.2%|93.3%|

For scoring context relevance and answer relevance (C.R. and A.R. in the table, respectively), we compare ARES with our fine-tuned LLM judges against a sampled-annotations benchmark, RAGAS, and a few-shot GPT-3.5 judge. For our sampled annotations, we gather 150 annotated datapoints from each mock RAG system and use those labels to score the system. RAGAS also uses GPT-3.5 as its judge, but its few-shot prompts are not targeted to each evaluation domain. Overall, we found that ARES ranked RAG systems more accurately than RAGAS and GPT-3.5 across all the explored datasets. Additionally, we include the Kendall’s taus for the ARES LLM Judge without PPI and found that PPI further boosted the ranking accuracy of the judge across the board. We selected GPT-3.5 instead of GPT-4 due to the lower financial costs required to run. For PPI in both ARES and the GPT-3.5 judge, we used 300 human annotations for our human preference validation set. The prompts used for the GPT-3.5 judges are included in Sections A.2, A.3, and A.4.

# Table 6: ARES Cross-Domain Ranking of Pseudo RAG Systems

| |NQ to FEVER C.R.|NQ to FEVER A.R.|FEVER to NQ C.R.|FEVER to NQ A.R.|NQ to MultiRC C.R.|NQ to MultiRC A.R.|MultiRC to NQ C.R.|MultiRC to NQ A.R.|NQ to ReCoRD C.R.|NQ to ReCoRD A.R.|ReCoRD to NQ C.R.|ReCoRD to NQ A.R.|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|Kendall’s Tau|0.89|0.89|1.0|0.83|0.94|0.89|1.0|0.89|0.78|0.89|0.89|0.94|
|Kendall’s Tau of In-Domain LLM Judge|0.89|0.78|0.94|1.0|0.94|0.89|0.94|1.0|0.83|0.89|0.94|1.0|
|Average PPI Range|8.7%|7.2%|6.5%|11.5%|10.2%|11.3%|11.9%|11.5%|10.5%|10.1%|9.7%|6.2%|
|Accuracy on RAG Evaluation Sets|92.4%|28.4%|85.7%|22.6%|81.5%|92.1%|87.6%|80.2%|29.1%|81.2%|80.1%|92.1%|

We tested the cross-domain application of the fine-tuned LLM judge in the ARES framework. We found that for both context relevance and answer relevance (C.R. and A.R. in the table, respectively), fine-tuned LLM judges showed strong generalizability across domains when changing query type (e.g. NQ and FEVER), document type (e.g. NQ and MultiRC), or both (e.g.
NQ and ReCoRD). For PPI, we used 300 labeled examples for our human preference validation set but also found that additional examples further improved the performance of ARES. Furthermore, we found that even in scenarios where the fine-tuned LLM judge’s accuracy significantly dropped out-of-domain (e.g. answer relevance for NQ to FEVER), PPI mitigated the decrease in judge performance. In the table, we define PPI range as the number of percentage points from the lower bound to the upper bound of the PPI confidence interval.
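The confidence intervals behind the PPI range come from prediction-powered inference, which combines a small human-labeled validation set with judge predictions on a larger unlabeled set. The sketch below shows one common form of the PPI mean estimate and its normal-approximation interval; it is a generic sketch, not the exact estimator or implementation used by ARES.

```python
# Generic prediction-powered inference (PPI) sketch for an accuracy-style mean.
# labeled_preds / labeled_truth: judge predictions and human labels on the small
# validation set; unlabeled_preds: judge predictions on the large unlabeled set.
import numpy as np
from scipy.stats import norm

def ppi_confidence_interval(labeled_preds, labeled_truth, unlabeled_preds, alpha=0.05):
    labeled_preds = np.asarray(labeled_preds, dtype=float)
    labeled_truth = np.asarray(labeled_truth, dtype=float)
    unlabeled_preds = np.asarray(unlabeled_preds, dtype=float)

    rectifier = labeled_truth - labeled_preds          # judge error on labeled data
    estimate = unlabeled_preds.mean() + rectifier.mean()

    n, big_n = len(labeled_preds), len(unlabeled_preds)
    variance = rectifier.var(ddof=1) / n + unlabeled_preds.var(ddof=1) / big_n
    half_width = norm.ppf(1 - alpha / 2) * np.sqrt(variance)
    return estimate - half_width, estimate + half_width

lower, upper = ppi_confidence_interval([1, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 1, 0, 1])
print(f"PPI range: {(upper - lower) * 100:.1f} percentage points")
```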
|Query|Passage|Answer|Context Relevance|Answer Relevance|
|---|---|---|---|---|
|How can a ball that is not moving possess energy of position?|Mechanical energy is a combination of the energy of motion or position. This type of energy describes objects that are moving or could move. A moving ball can have energy from motion. An arrow can also have the energy of motion. Both are types of mechanical energy.|The ball holds mechanical energy|1|1|
|Who has a Jimmy Stewart-like quality of quiet trust?|One look at Fred Rooney, and you just know he’s the good guy. A trace of childish innocence in his face gives the lanky Bethlehem lawyer a Jimmy Stewart-like quality of quiet trust. In black jeans and button-down shirt, he’s a kind of folk hero in the south Bethlehem melting pot where he’s crafted a law practice catering to working-class families - mostly Latino - in the shadow of the hulkish remnants of Bethlehem Steel.|Fred Rooney|1|1|
|Before he murder the doctor and Ralph Smith, where did the stepfather reside?|Surviving being shot and stabbed at the end of the previous film, the stepfather has been institutionalized in Puget Sound, Washington since, spending his time building model houses in the workshop. Assigned a new doctor named Joseph Danvers the stepfather begins confiding in him to gain his trust, ultimately murdering the doctor during a session by stabbing him in the neck with a blade smuggled out of the workshop. After killing Danvers the stepfather beats a suspicious guard named Ralph Smith to death with his own nightstick with only two strikes and takes his uniform, successfully sneaking out of the sanitarium. Checking into a hotel after robbing and murdering a traveling salesman the stepfather alters his appearance, takes the name Doctor Gene F. Clifford from the newspaper obituaries and travels to Palm Meadows, Los Angeles after seeing an ad for it on an episode of Dream House.|Los Angeles|1|0|
|What was the name of the 2006 film about Pushkin’s death, and who portrayed Pushkin?|During the days following, he was given the keys to the city by Mayor Jimmy Walker and met the president of Columbia University, who described Einstein as "The ruling monarch of the mind." Harry Emerson Fosdick, pastor at New York’s Riverside Church, gave Einstein a tour of the church and showed him a full-size statue that the church made of Einstein, standing at the entrance.|Vasily Szaitsev portrayed Pushkin in the film Pushkin Returns|0|0|
arXiv:2406.11147v2 [cs.SE] 19 Jun 2024

# Vul-RAG: Enhancing LLM-based Vulnerability Detection via Knowledge-level RAG

Xueying Du (Fudan University, China), Geng Zheng (Alibaba Group, China), Kaixin Wang (Fudan University, China), Jiayi Feng (Fudan University, China), Wentai Deng (Nanjing University, China), Mingwei Liu (Sun Yat-sen University, China), Bihuan Chen (Fudan University, China), Xin Peng (Fudan University, China), Tao Ma (Alibaba Group, China), Yiling Lou (Fudan University, China)

ABSTRACT

Vulnerability detection is essential for software quality assurance. In recent years, deep learning models (especially large language models) have shown promise in vulnerability detection. In this work, we propose a novel LLM-based vulnerability detection technique Vul-RAG, which leverages a knowledge-level retrieval-augmented generation (RAG) framework to detect vulnerability for the given code in three phases. First, Vul-RAG constructs a vulnerability knowledge base by extracting multi-dimension knowledge via LLMs from existing CVE instances; second, for a given code snippet, Vul-RAG retrieves the relevant vulnerability knowledge from the constructed knowledge base based on functional semantics; third, Vul-RAG leverages LLMs to check the vulnerability of the given code snippet by reasoning about the presence of vulnerability causes and fixing solutions of the retrieved vulnerability knowledge. Our evaluation of Vul-RAG on our constructed benchmark PairVul shows that Vul-RAG substantially outperforms all baselines by 12.96%/110% relative improvement in accuracy/pairwise-accuracy. In addition, our user study shows that the vulnerability knowledge generated by Vul-RAG can serve as high-quality explanations which can improve the manual detection accuracy from 0.60 to 0.77.

# 1 INTRODUCTION

Security vulnerabilities in software leave open doors for disruptive attacks, resulting in serious consequences during software execution. To date, there has been a large body of research on automated vulnerability detection techniques. In addition to leveraging traditional program analysis, deep learning has been incorporated into vulnerability detection techniques given the recent advance in the artificial intelligence domain. Learning-based vulnerability detection techniques [1–6] mainly formulate vulnerability detection as a binary classification task for the given code: they first train different models (e.g., graph neural networks or pre-trained language models) on existing vulnerable code and benign code, and then predict the vulnerability for the given code. More recently, the advanced progress in large language models (LLMs) further boosts the learning-based vulnerability detection techniques. Due to their strong code and text comprehension capabilities, LLMs show promising effectiveness in analyzing malicious behaviors (e.g., bugs or vulnerabilities) in code [7–11]. For example, existing LLM-based vulnerability detection techniques incorporate prompt engineering (e.g., chain-of-thought [12, 13] and few-shot learning [14]) to facilitate more accurate vulnerability detection.

Preliminary Study. However, due to the limited interpretability of deep learning models, it remains unclear whether existing learning-based vulnerability detection techniques really understand and capture the code semantics related to vulnerable behaviors, especially when the only outputs of the models are binary labels (i.e., vulnerable or benign).
To fill this knowledge gap, we first perform a preliminary study based on the assumption that "if the technique can precisely distinguish a pair of vulnerable code and non-vulnerable code with high lexical similarity (i.e., only differing in several tokens), we consider the technique with the better capability of capturing the vulnerability-related semantics in code". As two lexically-similar code snippets can differ in code semantics, it is likely that models have captured the high-level vulnerability-related semantics if they can precisely distinguish between them. As no existing vulnerability detection benchmark focuses on such pairs of vulnerable code and non-vulnerable code with high lexical similarity, we first construct a new benchmark PairVul, which contains 4,314 pairs of vulnerable and patched code functions across 2,073 CVEs. We then evaluate three representative learning-based techniques (i.e., LLMAO [8], LineVul [6], and DeepDFA [3]) along with one static analysis technique (i.e., Cppcheck [15]) on our constructed benchmark to study their capability of distinguishing such code pairs. Based on the results, existing learning-based techniques exhibit rather limited effectiveness in distinguishing between such lexically-similar code pairs. In particular, the accuracy on our benchmark PairVul drops to 0.50 ∼ 0.54, which is much lower than that reported on previous benchmarks (e.g., the 0.99 accuracy of LineVul [6] on BigVul [16]).
The results demonstrate that existing
trained models have limited capabilities of capturing the high-level code semantics related to vulnerable behaviors in the given code.

Technique. Inspired by the observation in our preliminary study, our insight is to distinguish the vulnerable code from the similar-but-correct code with high-level vulnerability knowledge. In particular, based on how developers manually identify a vulnerability, understanding a vulnerability often involves the code semantics from the three dimensions: (i) the functionality the code is implementing, (ii) the causes for the vulnerability, and (iii) the fixing solution for the vulnerability. Such high-level code semantics can serve as the vulnerability knowledge for vulnerability detection. To this end, we propose a novel LLM-based vulnerability detection technique Vul-RAG, which leverages knowledge-level retrieval-augmented generation (RAG) framework to detect vulnerability in the given code. The main idea of Vul-RAG is to leverage LLM to reason for vulnerability detection based on the similar vulnerability knowledge from existing vulnerabilities. In particular, Vul-RAG consists of three phases. First, Vul-RAG constructs a vulnerability knowledge base by extracting multi-dimension knowledge (i.e., functional semantics, causes, and fixing solutions) via LLMs from existing CVE instances; second, for a given code snippet, Vul-RAG retrieves the relevant vulnerability knowledge from the constructed knowledge base based on functional semantics; third, Vul-RAG leverages LLMs to check the vulnerability of the given code snippet by reasoning the presence of vulnerability causes and fixing solutions of the retrieved vulnerability knowledge. The main technical novelties of Vul-RAG include: (i) a novel representation of multi-dimension vulnerability knowledge that focuses on more high-level code semantics rather than lexical details, and (ii) a novel knowledge-level RAG framework for LLMs that first retrieves relevant knowledge based on functional semantics and then detects vulnerability by reasoning from the vulnerability causes and fixing solutions.

Evaluation. We further evaluate Vul-RAG on our benchmark PairVul. First, we compare Vul-RAG with three representative learning-based vulnerability detection techniques and one static analysis technique. The results show that Vul-RAG substantially outperforms all baselines by more precisely identifying the pairs of vulnerable code and similar-but-correct code, e.g., 12.96% improvement in accuracy and 110% improvement in pairwise accuracy (i.e., the ratio of both non-vulnerable code and vulnerable code in one pair being correctly identified). Second, we evaluate the usefulness of our vulnerability knowledge by comparing Vul-RAG with both the basic GPT-4 and the GPT-4 enhanced with code-level RAG. The results show that Vul-RAG consistently outperforms two GPT-4-based variants in all metrics. Third, we further perform a user study of vulnerability detection with/without the vulnerability knowledge generated by Vul-RAG. The results show that the vulnerability knowledge can improve the manual detection accuracy from 0.6 to 0.77, and the user feedback also shows the high quality of generated knowledge regarding the helpfulness, preciseness, and generalizability.
In summary, the evaluation results confirm two-fold benefits of the proposed knowledge-level RAG framework: (i) enhancing automated vulnerability detection by better retrieving and utilizing existing vulnerability knowledge, and (ii) enhancing manual vulnerability detection by providing developer-friendly explanations for understanding vulnerable or non-vulnerable code. In summary, this paper makes the following contributions: - Benchmark.
# Introduction

Retrieval-Augmented Generation (RAG) models combine a generative model with an information retrieval function, designed to overcome the inherent constraints of generative models. They integrate the robustness of a large language model (LLM) with the relevance and up-to-dateness of external information sources, resulting in responses that are not only natural and human-like but also current, accurate, and contextually relevant to the query. The interaction of the two modules (retrieval and generation) enables responses that would not be achievable with either module alone, making RAG more than just the sum of its components. This approach represents a significant milestone in the field of generative models by enabling high-quality responses in less-explored domains at low cost. In the conventional RAG operation, the initial step involves converting input queries into vector embeddings, which are then used to retrieve relevant data from the vectorized database. Following this, the generative part of RAG utilizes the retrieved external data to produce contextually rich responses. Thus, both the embedding and generative models are crucial factors in the performance of RAG, directly affecting the retrieval process. However, in niche domains, the performance of generic LLM-based embedding models appears suboptimal compared to their effectiveness in more general fields. The lack of specialized training data in these domains results in embeddings that do not adequately capture the nuances and specificity of the domain, leading to less accurate and contextually relevant information retrieval. Although these functional limitations are evident, they have rarely been examined experimentally, so whether conventional LLM-based vector embedding RAG methods are optimal for niche domains has remained unclear. Researchers have been aware of these shortcomings of LLMs and have explored supplementary processes such as fine-tuning to improve the performance.
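As a point of reference for the conventional pipeline described above, the following sketch shows a minimal vector-embedding RAG loop; the embedding model name, in-memory index, and prompt wording are illustrative assumptions, not the configuration evaluated in the paper.

```python
# Minimal sketch of a conventional vector-embedding RAG pipeline (illustrative only).
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

def build_index(chunks):
    # Pre-compute normalized embeddings for every document chunk.
    vectors = embedder.encode(chunks, normalize_embeddings=True)
    return chunks, np.asarray(vectors)

def retrieve(query, index, top_k=3):
    chunks, vectors = index
    query_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ query_vec              # cosine similarity via dot product
    best = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in best]

def build_generation_prompt(query, retrieved_chunks):
    context = "\n\n".join(retrieved_chunks)
    return (
        f"Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
```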
We construct a new benchmark PairVul that exclusively contains pairs of vulnerable code and similar-but-correct code. - Preliminary Study.
We perform the first study to find that existing learning-based techniques have limited capabilities of understanding and capturing the vulnerability-related code semantics. - Technique. We construct a vulnerability knowledge base based on the proposed multi-dimension knowledge representation, and propose a novel knowledge-level RAG framework Vul-RAG for vulnerability detection. - Evaluation.
We evaluate Vul-RAG and find the vulnerability knowledge generated by Vul-RAG useful for both automated and manual vulnerability detection.

# BACKGROUND

# CVE and CWE

Existing vulnerability classification systems, such as Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE), provide a comprehensive taxonomy for categorizing and managing vulnerabilities. CVE is a publicly disclosed list of common security vulnerabilities. Each vulnerability is assigned a unique identifier (CVE ID). A single CVE ID may be associated with multiple distinct code snippets. CWE is a publicly accessible classification system of common software and hardware security vulnerabilities. Each weakness type within this enumeration is assigned a unique identifier (CWE ID). While CWE provides a broad classification of vulnerability types, the specific code behaviors leading to a vulnerability under a given CWE category may vary widely. For example, CWE-416 (Use After Free) signifies the issue of referencing memory after it has been freed. The root cause of this vulnerability might stem from improper synchronization under race conditions (e.g., CVE-2023-30772), or errors in reference counting leading to premature object destruction (e.g., CVE-2023-3609).

# Learning-based Vulnerability Detection

The recent advance in deep learning has boosted many learning-based vulnerability detection techniques. GNN-based Vulnerability Detection typically represents the code snippets under detection as graph-based intermediate representations, such as Abstract Syntax Trees (ASTs) or Control Flow Graphs (CFGs). Graph neural networks (GNN) are then applied to these abstracted code representations for feature extraction. The features learned by the models are subsequently fed into a binary classifier for vulnerability detection. PLM-based Vulnerability Detection typically involves fine-tuning existing PLMs on vulnerability detection datasets. In this way, code snippets are tokenized and processed by pre-trained language models (PLM, e.g., RoBERTa) serving as the encoder. The extracted features are then used for binary classification. LLM-based Vulnerability Detection leverages large language models (LLMs) for vulnerability detection via prompt engineering or fine-tuning. The former leverages different prompting strategies, e.g., Chain-of-Thought (CoT) and few-shot learning, for more accurate LLM-based vulnerability
detection, without modifying the original LLM parameters; the latter updates LLM parameters by training on vulnerability detection datasets, to learn the features of vulnerable code.

2.3 Retrieval-Augmented Generation

Retrieval-Augmented Generation (RAG) is a general paradigm which enhances LLMs by including relevant information retrieved from external databases into the input [24]. RAG typically consists of three phases: indexing, retrieval, and generation. First, the indexing phase constructs external databases and their retrieval indexes from external data sources; second, given a user query, the retrieval system utilizes these indexes to fetch the relevant document chunks as context; third, the retrieved context is integrated into the input prompt for LLMs, and LLMs then generate the final output based on the augmented inputs.
RAG has been widely used in various domains [25–28]. For example, RAG has been specialized to software engineering tasks such as code generation [27, 28], which retrieves similar code from the code base and augments the prompt with the retrieved code for model inference.

3 PRELIMINARY STUDY

Although existing learning-based vulnerability detection techniques show promising effectiveness, it still remains unclear whether these techniques really understand and capture the code semantics related to vulnerable behaviors, due to the weak interpretability of deep learning models. To fill this knowledge gap, in this preliminary study, we make the assumption that "if the technique can precisely distinguish a pair of vulnerable code and non-vulnerable code with high lexical similarity (i.e., only differing in several tokens), we consider the technique with the better capability of capturing the vulnerability-related semantics in code". As shown by the example in Figure 1, the vulnerable code is fixed by moving the statement inet_frag_lru_add(nf, qp) into the lock-protected code block, so the pair of vulnerable code and non-vulnerable code shares high lexical similarity but differs in semantics. To this end, we first propose to construct a benchmark that contains pairs of vulnerable code and its corresponding patched code, as patched code often shares high similarity with the original code; then we evaluate the existing learning-based techniques on our constructed benchmark to study their capability of distinguishing such code pairs.

|Vulnerable Code|Patched Code|
|---|---|
|static struct inet_frag_queue *inet_frag_intern(struct netns_frags *nf, struct inet_frag_queue *qp_in, struct inet_frags *f, void *arg)|static struct inet_frag_queue *inet_frag_intern(struct netns_frags *nf, struct inet_frag_queue *qp_in, struct inet_frags *f, void *arg)|
|...|...|
|read_lock(&f->lock); /* Protects against hash rebuild */ hash = f->hashfn(qp_in);|read_lock(&f->lock); /* Protects against hash rebuild */ hash = f->hashfn(qp_in);|
|hb = &f->hash[hash];|hb = &f->hash[hash];|
|spin_lock(&hb->chain_lock);|spin_lock(&hb->chain_lock);|
|...|...|
|qp = qp_in;|qp = qp_in;|
|if (!mod_timer(&qp->timer, jiffies + nf->timeout))|if (!mod_timer(&qp->timer, jiffies + nf->timeout))|
|atomic_inc(&qp->refcnt);|atomic_inc(&qp->refcnt);|
|hlist_add_head(&qp->list, &hb->chain);|hlist_add_head(&qp->list, &hb->chain);|
|spin_unlock(&hb->chain_lock);|inet_frag_lru_add(nf, qp);|
|read_unlock(&f->lock);|spin_unlock(&hb->chain_lock);|
|inet_frag_lru_add(nf, qp);|return qp;|

Diff: the vulnerable code calls “inet_frag_lru_add” after unlocking, while the patched code calls it before unlocking. Figure 1: A pair of vulnerable code and similar non-vulnerable code (the patched code)
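The notion of "high lexical similarity, only differing in several tokens" can be made concrete with a simple token-level ratio; the sketch below uses difflib as an illustrative measure, which is not necessarily the criterion used to construct PairVul.

```python
# Illustrative token-level similarity between a vulnerable function and its patch.
import difflib

def lexical_similarity(code_a: str, code_b: str) -> float:
    # Compare whitespace-delimited token sequences; 1.0 means identical.
    return difflib.SequenceMatcher(None, code_a.split(), code_b.split()).ratio()

vulnerable = "spin_unlock(&hb->chain_lock); read_unlock(&f->lock); inet_frag_lru_add(nf, qp);"
patched = "inet_frag_lru_add(nf, qp); spin_unlock(&hb->chain_lock); read_unlock(&f->lock);"
print(f"similarity = {lexical_similarity(vulnerable, patched):.2f}")
```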
# Table 1: Existing Benchmarks for Vulnerability Detection

|Benchmark|Time|Positive Number/Ratio|#CVE|Positive LOC|Negative LOC|Patched Code Included|Patched Code Verified|
|---|---|---|---|---|---|---|---|
|BigVul|2020|10,900 (5.78%)|3,285|73.47|23.83|N|/|
|Devign|2019|12,460 (45.61%)|/|54.50|49.53|N|/|
|ReVeal|2020|2,240 (9.85%)|/|67.73|28.69|Y|N|
|PairVul|2024|1,923 (50.00%)|896|68.58|70.25|Y|Y|

# Benchmark Statistics

As a result, we obtain a new benchmark PairVul of 4,314 pairs of vulnerable and patched code functions across 2,073 CVEs. In this work, we focus on the top-5 CWEs in our benchmark given the non-trivial costs of model execution and manual analysis. In particular, as this work focuses on learning-based techniques which often require training datasets, we further divide the benchmark into a training set and a testing set in the following steps. For each CVE, we randomly select one instance into the testing set, with the remaining instances (if any) of the CVE going into the training set. We exclude cases where the code length exceeds the current token limit of GPT-3.5-turbo (i.e., 16,384 tokens). The final training set includes 896 CVEs with 1,462 pairs of vulnerable and patched code functions, while the testing set includes 373 CVEs with 592 pairs. The statistics of each CWE category in our benchmark are shown in Table 2.

# Table 2: Statistics of each CWE in PairVul

|CWE|Training Set CVE Num.|Training Set Func. Pair Num.|Test Set CVE Num.|Test Set Func. Pair Num.|
|---|---|---|---|---|
|CWE-416|339|587|145|267|
|CWE-476|194|262|60|89|
|CWE-362|169|280|81|121|
|CWE-119|129|163|42|53|
|CWE-787|122|170|45|62|
# Studied Baselines

We evaluate the following state-of-the-art (SOTA) vulnerability detection techniques on our benchmark PairVul:
- LLMAO: An LLM-based fault localization approach fine-tuning an LLM (i.e., CodeGen), which has also been fine-tuned on the Devign dataset for vulnerability detection.
- LineVul: A PLM-based vulnerability detection model, offering both function-level and line-level detection granularity.
- DeepDFA: A GNN-based detection technique with a data flow analysis-guided graph learning framework, designed for function-level vulnerability detection.
- Cppcheck: A widely-used open-source static analysis tool.

# Metrics

To evaluate the capability of distinguishing a pair of vulnerable code and non-vulnerable code with high lexical similarity, we develop a new metric, pairwise accuracy, which calculates the ratio of pairs whose vulnerable and patched code are both correctly identified. Besides, we also use six commonly-used metrics in vulnerability detection tasks: FN, FP, accuracy, precision, recall, and F1. FN is the ratio of false negatives; FP is the ratio of false positives; accuracy is the proportion of correctly detected instances; precision is the proportion of true positive predictions among all positive predictions; recall is the proportion of true positive predictions among all actual vulnerable instances; and F1-score is the harmonic mean of precision and recall, which balances both values.

# Table 3: Effectiveness of SOTA techniques in PairVul

|CWE|Tech.|FN|FP|Acc.|Pair Acc.|Precis.|Recall|F1|
|---|---|---|---|---|---|---|---|---|
|CWE-416|CppCheck|50.0%|0.0%|0.50|0.00|/|0.00|/|
|CWE-416|DeepDFA|9.3%|40.3%|0.50|0.02|0.50|0.81|0.62|
|CWE-416|LineVul|0.0%|50.0%|0.50|0.04|0.50|1.00|0.67|
|CWE-416|LLMAO|24.5%|20.4%|0.55|0.14|0.56|0.51|0.53|
|CWE-476|CppCheck|48.9%|0.6%|0.51|0.01|0.67|0.02|0.04|
|CWE-476|DeepDFA|8.5%|42.6%|0.49|0.01|0.49|0.83|0.62|
|CWE-476|LineVul|12.9%|33.7%|0.54|0.09|0.53|0.75|0.62|
|CWE-476|LLMAO|44.9%|3.4%|0.52|0.03|0.60|0.10|0.17|
|CWE-362|CppCheck|49.6%|0.0%|0.50|0.01|1.00|0.01|0.02|
|CWE-362|DeepDFA|5.9%|45.1%|0.49|0.00|0.49|0.88|0.63|
|CWE-362|LineVul|10.7%|40.9%|0.49|0.02|0.49|0.79|0.61|
|CWE-362|LLMAO|16.9%|30.2%|0.53|0.11|0.52|0.66|0.58|
|CWE-119|CppCheck|48.4%|1.6%|0.50|0.02|0.50|0.03|0.06|
|CWE-119|DeepDFA|9.8%|40.7%|0.50|0.00|0.49|0.80|0.61|
|CWE-119|LineVul|19.8%|32.1%|0.49|0.04|0.49|0.62|0.55|
|CWE-119|LLMAO|45.3%|2.8%|0.52|0.04|0.63|0.09|0.16|
|CWE-787|CppCheck|48.1%|1.6%|0.50|0.02|0.50|0.03|0.06|
|CWE-787|DeepDFA|9.8%|40.7%|0.50|0.00|0.49|0.80|0.61|
|CWE-787|LineVul|4.0%|46.8%|0.50|0.02|0.50|0.92|0.65|
|CWE-787|LLMAO|29.7%|6.3%|0.56|0.11|0.77|0.16|0.27|

|Overall|LineVul| | | | |0.50|0.54|0.10|0.02|0.50|0.55|0.87|0.41|0.64|0.47| | | |Uniform Guess| | |0.50|0.00|0.50|1.00|0.67|

# Results

As shown in Table 3, existing techniques exhibit limited effectiveness on our benchmark PairVul. In particular, compared to the effectiveness reported on previous benchmarks (e.g., 0.99 accuracy of LineVul on BigVul), existing techniques perform much more poorly on PairVul (ranging from 0.50 to 0.54 accuracy), showing even lower accuracy and F1 than the uniform guess (i.e., identifying all instances as vulnerable).
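As a concrete reference for the metrics defined above, the sketch below computes accuracy, precision, recall, F1, and pairwise accuracy over (vulnerable, patched) prediction pairs; it is an illustrative implementation, not the evaluation script used in the paper.

```python
# Sketch: standard metrics plus pairwise accuracy for (vulnerable, patched) code pairs.
# Each pair holds the model's predictions: 1 = flagged vulnerable, 0 = flagged benign.
# Ground truth is fixed by construction: the first element is vulnerable, the second is not.

def evaluate_pairs(pair_predictions):
    tp = fp = tn = fn = correct_pairs = 0
    for pred_on_vulnerable, pred_on_patched in pair_predictions:
        tp += pred_on_vulnerable == 1
        fn += pred_on_vulnerable == 0
        fp += pred_on_patched == 1
        tn += pred_on_patched == 0
        correct_pairs += (pred_on_vulnerable == 1) and (pred_on_patched == 0)

    total = 2 * len(pair_predictions)
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "pairwise_accuracy": correct_pairs / len(pair_predictions)}

# Example: three pairs, only the first fully distinguished.
print(evaluate_pairs([(1, 0), (1, 1), (0, 0)]))
```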
In particular, the pairwise accuracy ranges from 0.01 to 0.10, indicating that existing learning-based techniques fail to capture the subtle difference between similar vulnerable code and non-vulnerable code. The observations imply that the learning-based models have limited capability of understanding the semantics related to the vulnerability. Our insight: In fact, two code snippets with subtle lexical difference can have different semantics (i.e., different functionalities). Therefore, identifying vulnerability based on the high-level code semantics can help better distinguish the vulnerable code from the non-vulnerable code.
similar-but-correct code. In particular, based on how developers manually identify a vulnerability, understanding a vulnerability often involves the code semantics from three dimensions: (i) the functionality the code is implementing, (ii) the causes for the vulnerability, and (iii) the fixing solution for the vulnerability. Such high-level code semantics can serve as the knowledge for vulnerability detection. Therefore, in this work, we propose to distinguish the vulnerable code from the similar-but-correct code by enhancing LLMs with high-level vulnerability knowledge. In particular, we first leverage LLMs to automatically construct a vulnerability knowledge base from existing vulnerability instances, which is further utilized to boost LLMs in vulnerability detection.

# 4 APPROACH

# 4.1 Overview

In this work, we present a novel LLM-based vulnerability detection technique Vul-RAG, which leverages a knowledge-level RAG framework to detect vulnerability in the given code. The main idea of Vul-RAG is to leverage LLMs to reason for vulnerability detection based on similar vulnerability knowledge from existing vulnerabilities. Figure 2 shows the overview of our approach, which includes the following three phases.

- Phase-1: Offline Vulnerability Knowledge Base Construction (Section 4.2): Vul-RAG first constructs a vulnerability knowledge base by extracting multi-dimension knowledge via LLMs from existing CVE instances.
- Phase-2: Online Vulnerability Knowledge Retrieval (Section 4.3): For a given code snippet, Vul-RAG retrieves the relevant vulnerability knowledge from the constructed knowledge base based on functional semantics.
- Phase-3: Online Knowledge-Augmented Vulnerability Detection (Section 4.4): Vul-RAG leverages LLMs to check the vulnerability of the given code snippet by reasoning about the presence of vulnerability causes and fixing solutions of the retrieved vulnerability knowledge.

[Figure 2: Overview of Vul-RAG, covering vulnerability knowledge base construction over a Linux kernel CVE corpus (functional semantics extraction, vulnerability causes and fixing solutions extraction, and knowledge abstraction), vulnerability knowledge retrieval, and knowledge-augmented vulnerability detection.]

Vul-RAG represents the knowledge of each vulnerability instance along three dimensions:

- Functional Semantics: It summarizes the high-level functionality (i.e., what this code is doing) of the vulnerable code, including:
  - Abstract purpose: brief summary of the code intention.
  - Detailed behavior: detailed description of the code behavior.
- Vulnerability Causes: It describes the reasons for triggering vulnerable behaviors by comparing the vulnerable code and its corresponding patch. We consider causes described from different perspectives, including:
  - Abstract vulnerability description: brief summary of the cause.
  - Detailed vulnerability description: more concrete descriptions of the causes.
  - Triggering action: the direct action triggering the vulnerability, e.g., “concurrent access to shared data structures” in Figure 3.
- Fixing Solutions: It summarizes the fixing of the vulnerability by comparing the vulnerable code and its corresponding patch.

The example output of functional semantics is in Figure 3. Vulnerability Causes and Fixing Solutions Extraction: as the causes and fixing solutions are often logically connected, Vul-RAG extracts them together so that the reasoning capabilities of LLMs can be better utilized. In particular, Vul-RAG incorporates two rounds of extraction, i.e., the first round asks LLMs to explain why the modification of the vulnerable code snippet is necessary, and the second round asks LLMs to further summarize the causes and fixing solutions based on the explanations generated in the first round.
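To make the three-dimension representation described above concrete, here is a minimal sketch of a knowledge item as a Python data structure; the field names are illustrative, not the schema used in the Vul-RAG implementation, while the example values are taken from the CVE-2022-38457 case shown in Figure 3.

```python
# Illustrative schema for one vulnerability knowledge item (one CVE instance).
from dataclasses import dataclass
from typing import List

@dataclass
class VulnerabilityKnowledgeItem:
    cve_id: str
    # Functional semantics: what the vulnerable code is doing.
    abstract_purpose: str
    detailed_behavior: List[str]
    # Vulnerability causes: why the behavior is vulnerable.
    abstract_vulnerability_description: str
    detailed_vulnerability_description: str
    trigger_action: str
    # Fixing solution: how the patch removes the vulnerability.
    solution_description: str

example = VulnerabilityKnowledgeItem(
    cve_id="CVE-2022-38457",
    abstract_purpose="Look up a TTM base object associated with a given key in a TTM object file.",
    detailed_behavior=[
        "Look up a TTM base object using a key in a TTM object file.",
        "Acquire a reference to the base object if found successfully.",
        "Return the base object if a reference is acquired, otherwise return NULL.",
    ],
    abstract_vulnerability_description=(
        "Use of RCU read lock without proper synchronization mechanism to protect shared data structures."
    ),
    detailed_vulnerability_description=(
        "The code accesses a shared data structure within an RCU read lock context without ensuring "
        "exclusive access, allowing a race condition where the object can be freed while still being accessed."
    ),
    trigger_action="Concurrent access to the shared data structure while under RCU read lock context.",
    solution_description=(
        "Replace the RCU read lock with a spin lock so that exclusive access to the shared data structure is ensured."
    ),
)
```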
However, the cost of fine-tuning, especially when it involves adjusting all or most of the parameters of an LLM, has rapidly become expensive, thereby increasing the demand for alternative solutions. To address these challenges, we propose a novel methodology: Prompt-RAG. This new approach to RAG eliminates the reliance on vector embeddings, adopting a more direct and flexible retrieval process based on natural language prompts. It involves a large-scale pre-trained generative model that handles all the steps from document retrieval to response generation without the need for a vector database or an algorithm for indexing and selecting vectors, thus greatly simplifying the processing structure of RAG. Therefore, it not only takes advantage of RAG's strengths but also circumvents the limitations of the conventional vector embedding-based methodology. Prompt-RAG is based on maximizing the use of the advanced natural language processing capabilities of LLMs. In particular, by using the latest GPT models, our method can compensate for the deficiencies in vector embedding-based RAG arising from the shortage of domain-specific knowledge. To examine the utility of Prompt-RAG in practice, we conducted two exemplary studies focusing on the Korean Medicine (KM) domain. KM, a branch of traditional East Asian medicine, has diverged from traditional Chinese medicine and Japanese Kampo medicine in aspects like physiological theories, treatments, and Sasang constitutional medicine. It was reported that GPT models have achieved excellent results in the United States Medical Licensing Examination (USMLE), while
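As an illustration of what prompt-based retrieval without a vector database might look like, the sketch below asks a GPT model to pick relevant section headings from a table of contents and then answer from the selected sections; this is an assumed workflow for illustration, not the exact Prompt-RAG procedure.

```python
# Illustrative prompt-based retrieval: the LLM selects relevant sections by heading,
# then answers using only those sections (assumed workflow, not the paper's design).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def prompt_rag_answer(question: str, sections: dict) -> str:
    # sections maps heading -> section text for the whole document.
    toc = "\n".join(f"- {heading}" for heading in sections)
    selection = ask_llm(
        f"Table of contents:\n{toc}\n\nQuestion: {question}\n"
        "List the headings (one per line) most relevant to answering the question."
    )
    chosen = [heading for heading in sections if heading in selection]
    context = "\n\n".join(sections[heading] for heading in chosen)
    return ask_llm(f"Context:\n{context}\n\nAnswer the question: {question}")
```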
4.2.1 Vulnerability Knowledge Representation. Vul-RAG represents the vulnerability knowledge of a CVE instance from three dimensions: functional semantics, vulnerability causes, and fixing solutions. Figure 3 exemplifies the three-dimension representation for CVE-2022-38457. In this case, the vulnerable code accesses a shared data structure within an RCU read lock context without a proper synchronization mechanism, allowing a race condition and use-after-free vulnerability. To fix this vulnerability, the patched code adds a spin lock to protect the shared data structure.

4.2.2 Knowledge Extraction. For each existing vulnerability instance (i.e., the vulnerable code and its patch), Vul-RAG prompts LLMs to extract three-dimension knowledge, and then abstracts the extracted knowledge to facilitate a more general representation. We then explain each step in detail.

Functional Semantics Extraction. Given the vulnerable code snippet, Vul-RAG prompts LLMs with the following instructions to summarize the abstract purpose and the detailed behavior respectively, where the placeholder “[Vulnerable Code]” denotes the vulnerable code snippet.

Prompt for Abstract Purpose Extraction: [Vulnerable Code] What is the purpose of the function in the above code snippet? Please summarize the answer in one sentence with the following format: “Function purpose:”.

Prompt for Detailed Behavior Extraction: [Vulnerable Code] Please summarize the functions of the above code snippet in the list format without any other explanation: “The functions of the code snippet are: 1. 2.
3...”
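A rough sketch of driving these two extraction prompts through the OpenAI chat API is shown below; the model choice follows the implementation details reported later in the paper (gpt-3.5-turbo-0125 for knowledge base construction), while the helper names and return format are assumptions.

```python
# Sketch: functional semantics extraction for one vulnerable code snippet.
from openai import OpenAI

client = OpenAI()

ABSTRACT_PURPOSE_PROMPT = (
    "{code}\nWhat is the purpose of the function in the above code snippet? "
    'Please summarize the answer in one sentence with the following format: "Function purpose:".'
)
DETAILED_BEHAVIOR_PROMPT = (
    "{code}\nPlease summarize the functions of the above code snippet in the list format "
    'without any other explanation: "The functions of the code snippet are: 1. 2. 3...".'
)

def extract_functional_semantics(vulnerable_code: str, model: str = "gpt-3.5-turbo-0125") -> dict:
    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

    return {
        "abstract_purpose": ask(ABSTRACT_PURPOSE_PROMPT.format(code=vulnerable_code)),
        "detailed_behavior": ask(DETAILED_BEHAVIOR_PROMPT.format(code=vulnerable_code)),
    }
```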
[Figure 3: An Example of Vulnerability Knowledge Extraction from the Historical Commit of CVE-2022-38457. CVE description: A use-after-free (UAF) vulnerability was found in function 'vmw_cmd_res_check' in drivers/gpu/vmxgfx/vmxgfx_execbuf.c in the Linux kernel's vmwgfx driver with device file '/dev/dri/renderD128 (or Dxxx)'. Extracted functional semantics: Abstract purpose: Look up a TTM base object associated with a given key in a TTM object file. Detailed behavior: 1. Look up a TTM base object using a key in a TTM object file. 2. Acquire a reference to the base object if found successfully. 3. Return the base object if a reference is acquired, otherwise return NULL. Extracted vulnerability causes: Abstract vulnerability description: Use of RCU read lock without proper synchronization mechanism to protect shared data structures. Trigger action: Concurrent access to the shared data structure while under RCU read lock context. Detailed vulnerability description: The code accesses a shared data structure within an RCU read lock context without ensuring exclusive access, allowing a race condition where the object can be freed while still being accessed. Extracted fixing solution: To mitigate the vulnerability, it is necessary to use a synchronization mechanism, such as a spin lock, to protect the shared data structure from concurrent modifications. By replacing the RCU read lock with a spin lock, exclusive access to the data structure is ensured, preventing the race condition and use-after-free vulnerability.]

The first round asks LLMs to explain why the modification of the vulnerable code snippet is necessary, and the second round asks LLMs to further summarize the causes and fixing solutions based on the explanations generated in the first round. Such a two-step strategy is based on the CoT paradigm, which inspires LLM reasoning capabilities by thinking step-by-step and further results in better extraction [12, 13, 31, 32]. In addition, to enable LLMs to summarize the causes and solutions in the proper formats, Vul-RAG incorporates few-shot learning by including two demonstration examples of vulnerability causes and fixing solutions, due to the limited input length of GPT models. Following the vulnerability knowledge representation outlined in Section 4.4, we manually construct two examples. The detailed prompts are as follows, where the placeholders “[Vulnerable Code]”, “[Patched Code]”, and “[Patch Diff]” denote the vulnerable code, the patched code, and the code diff of the given vulnerability, and [CVE ID] and [CVE Description] denote the details of the given vulnerability.

Extraction Prompt in Round 1: This is a code snippet with a vulnerability [CVE ID]: [Vulnerable Code] The vulnerability is described as follows: [CVE Description] The correct way to fix it is by [Patch Diff] The code after modification is as follows: [Patched Code] Why is the above modification necessary?

Extraction Prompt in Round 2: I want you to act as a vulnerability detection expert and organize vulnerability knowledge based on the above vulnerability repair information. Please summarize the generalizable specific behavior of the code that leads to the vulnerability and the specific solution to fix it. Format your findings in JSON. Here are some examples to guide you on the level of detail expected in your extraction: [Vulnerability Causes and Fixing Solution Example 1] [Vulnerability Causes and Fixing Solution Example 2]

Knowledge Abstraction. Different vulnerability instances might share common high-level knowledge (e.g., similar causes and fixing solutions), and thus abstracting the high-level commonality among the extracted vulnerability knowledge can further distill a more general knowledge representation less bound to concrete code implementation details.
To this end, Vul-RAG leverages LLMs to abstract high-level knowledge by generalizing the following concrete code elements (i.e., method invocations, variable names, and types) in the extracted vulnerability causes and fixing solutions. We do not abstract functional semantics, as it is utilized only during the retrieval phase and not provided as enhanced knowledge to LLMs during the vulnerability detection process. We then describe the knowledge abstraction guidelines and examples as follows.
- Abstracting Method Invocations. The extracted knowledge might contain concrete method invocations with detailed function identifiers (e.g., the io_worker_handle_work function) and parameters (e.g., mutex_lock(&dmxdev->mutex)), which can be abstracted into a generalized description (e.g., “during handling of IO work processes” and “employing a locking mechanism akin to mutex_lock()”).
- Abstracting Variable Names and Types.
The extracted knowledge might contain concrete variable names or types (e.g., “without &dev->ref initialization”), which can be abstracted into a more general description (e.g., “without proper reference counter initialization”). Vul-RAG incorporates the following prompt to leverage LLMs for knowledge abstraction, which queries LLMs to abstract the method invocations and variable names.

Prompt for Knowledge Abstraction: With the detailed vulnerability knowledge extracted from the previous stage, your task is to abstract and generalize this knowledge to enhance its applicability across different scenarios. Please adhere to the following guidelines and examples provided:

The final output is the three-dimension knowledge of each vulnerability instance (i.e., denoted as a knowledge item). In particular, given a set of existing vulnerability instances (i.e., the training set constructed from PairVul as mentioned in Section 3.1), we repeat the extraction procedure for each vulnerability instance and aggregate the extracted knowledge items of all instances as the final vulnerability knowledge base.

Vulnerability Knowledge Retrieval. Given a code snippet for vulnerability detection, Vul-RAG retrieves relevant vulnerability knowledge items from the constructed
vulnerability knowledge base in a three-step retrieval process: query generation, candidate knowledge retrieval, and candidate knowledge re-ranking.

Query Generation.
Instead of relying solely on the code as the retrieval query, Vul-RAG incorporates both the code and its functional semantics as a multi-dimension query. Firstly, Vul-RAG prompts LLMs to extract the functional semantics of the given code, as described in the knowledge base construction (Section 4.2.2). The abstract purpose, detailed behavior, and the code itself form the query for the subsequent retrieval. Candidate Knowledge Retrieval. Vul-RAG conducts similarity-based retrieval using three query elements: the code, the abstract purpose, and the detailed behavior. It separately retrieves the top-n (where n = 10 in our experiments) knowledge items for each query element. Consequently, Vul-RAG retrieves a total of 10 to 30 candidate knowledge items (accounting for potential duplicates among the items retrieved across the different query elements). The retrieval is based on the similarity between each query element and the corresponding elements of the knowledge items. Vul-RAG adopts BM25 [33] for similarity calculation, a method widely used in search engines due to its efficiency and effectiveness [11]. Given a query q and a document d for retrieval, BM25 calculates the similarity score between q and d as defined in Equation 1.
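Equation 1 is not reproduced in this excerpt; for reference, the standard BM25 scoring function is commonly written as follows, with term frequency f(t, d), document length |d|, average document length avgdl, and tuning parameters k1 and b:

```latex
\mathrm{BM25}(q, d) = \sum_{t \in q} \mathrm{IDF}(t) \cdot
\frac{f(t, d)\,(k_1 + 1)}{f(t, d) + k_1 \left(1 - b + b \cdot \frac{|d|}{\mathrm{avgdl}}\right)}
```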
Candidate Knowledge Re-ranking. We re-rank candidate knowledge items with the Reciprocal Rank Fusion (RRF) strategy. For each retrieved knowledge item k, we calculate its re-rank score by aggregating the reciprocal of its rank across all three query elements. If a knowledge item k is not retrieved by a particular query element, we assign its rank as infinity. The re-rank score for k is calculated using Equation 2. In the end, we obtain the top 10 candidate knowledge items with the highest re-rank scores as the final knowledge items to be provided to the LLMs for vulnerability detection.

Knowledge-Augmented Vulnerability Detection. Based on the retrieved knowledge items, Vul-RAG leverages LLMs to reason about whether the given code is vulnerable. However, directly incorporating all the retrieved knowledge items into one prompt can hinder the effectiveness of the models, as LLMs often perform poorly on lengthy contexts. Therefore, Vul-RAG iteratively enhances LLMs with each retrieved knowledge item in sequence.

EVALUATION SETUP

We evaluate the effectiveness and usefulness of Vul-RAG by answering the following four research questions:
- RQ1: Compared to SOTA techniques: How does Vul-RAG perform compared to state-of-the-art (SOTA) vulnerability detection techniques?
- RQ2: Compared to GPT-4-based techniques: How does Vul-RAG perform compared to GPT-4-based detection techniques?
- RQ3: Usefulness for developers: Can the vulnerability knowledge generated by Vul-RAG help developers in manual vulnerability detection?
- RQ4: Bad Case Analysis: Why does Vul-RAG fail in detecting some vulnerabilities?

Implementation

We build Vul-RAG on top of GPT series models. In particular, for the offline knowledge base construction, given the large number of vulnerability knowledge items to be generated, we use the gpt-3.5-turbo-0125 model [36] due to its rapid response and cost-effectiveness [11]; for the online knowledge-augmented detection, we use the GPT-4 model [37] as it is currently one of the most effective LLMs with superior understanding and logical reasoning capabilities [38]. For the knowledge retrieval process, we utilize Elasticsearch [39] as our search engine, which is based on the Lucene library and uses BM25 as the default scoring function.
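Equation 2 is likewise not reproduced in this excerpt; the sketch below implements a reciprocal-rank-fusion re-ranking consistent with the description above, where an item missing from a ranking is treated as having infinite rank and therefore contributes nothing; the plain 1/rank scoring without a smoothing constant is an assumption.

```python
# Sketch: reciprocal rank fusion over the three query elements (code, abstract
# purpose, detailed behavior). rankings maps each query element to the ordered
# list of knowledge-item IDs returned by BM25 retrieval.
from collections import defaultdict

def rrf_rerank(rankings: dict, top_k: int = 10):
    # Sum of 1/rank over the rankings in which an item appears; absence from a
    # ranking corresponds to rank = infinity, i.e. a contribution of zero.
    scores = defaultdict(float)
    for ordered_ids in rankings.values():
        for position, item_id in enumerate(ordered_ids, start=1):
            scores[item_id] += 1.0 / position
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

rankings = {
    "code": ["k3", "k1", "k7"],
    "abstract_purpose": ["k1", "k5"],
    "detailed_behavior": ["k1", "k3", "k9"],
}
print(rrf_rerank(rankings))  # e.g. ['k1', 'k3', ...]
```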