zihanliu committed on
Commit
f0d9637
1 Parent(s): f2ad62a

Update README.md

Files changed (1)
  1. README.md +20 -5
README.md CHANGED
@@ -62,12 +62,11 @@ configs:
---

## ConvRAG Bench
- ConvRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. ConvRAG Bench are built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopioCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, ConvFinQA. ConvRAG Bench covers a wide range of documents and question types, which require models to generate responses from long context, comprehend and reason over tables, and conduct arithmetic calculations.
-
- ConvRAG Bench also includes evaluations for the unanswerable scenario, where we evaluate models' capability to determine whether the answer to the question can be found within the given context. Equipping models with such capability can substantially decrease the likelihood of hallucination.
+ ConvRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. ConvRAG Bench is built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopiOCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, and ConvFinQA. ConvRAG Bench covers a wide range of documents and question types, which require models to generate responses from long context, comprehend and reason over tables, conduct arithmetic calculations, and indicate when answers cannot be found within the context.

## Benchmark Results

+ ### Main Results
| | ChatQA-1.0-7B | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 38.9 | 39.33 | 41.26 |
@@ -85,8 +84,24 @@ ConvRAG Bench also includes evaluations for the unanswerable scenario, where we

Note that ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure fair comparison, we also compare average scores excluding HybriDial.

- ## Evaluation
- We open-source the scripts for running and evaluating on ConvRAG (including the unanswerable evaluations).
+ ### Evaluation of Unanswerable Scenario
+
+ ConvRAG Bench also includes evaluations for the unanswerable scenario, where we evaluate models' capability to determine whether the answer to the question can be found within the given context. Equipping models with such capability can substantially decrease the likelihood of hallucination.
+
+ | | GPT-3.5-turbo-0613 | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
+ | -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
+ | Avg-Both | 73.27 | 68.11 | 76.42 | 80.73 | 77.25 | 75.57 | 71.86 |
+ | Avg-QuAC | 78.335 | 69.605 | 81.285 | 87.415 | 80.755 | 79.3 | 72.59 |
+ | QuAC (no*) | 61.91 | 41.79 | 66.89 | 83.45 | 77.66 | 63.39 | 48.25 |
+ | QuAC (yes*) | 94.76 | 97.42 | 95.68 | 91.38 | 83.85 | 95.21 | 96.93 |
+ | Avg-DoQA | 68.21 | 66.62 | 71.555 | 74.05 | 73.74 | 71.84 | 71.125 |
+ | DoQA (no*) | 51.99 | 46.37 | 60.78 | 74.28 | 68.81 | 62.76 | 52.24 |
+ | DoQA (yes*) | 84.43 | 86.87 | 82.33 | 73.82 | 78.67 | 80.92 | 90.01 |
+
+ We use the QuAC and DoQA datasets, which contain such unanswerable cases, to evaluate this capability, and we use both answerable and unanswerable samples in the evaluation. For an unanswerable case, the model is considered correct if it indicates that the question cannot be answered; for an answerable case, the model is considered correct if it does not indicate the question is unanswerable (i.e., it gives an answer). The final metric is the average of the accuracy scores on unanswerable and answerable cases.
+
+ ## Evaluation Scripts
+ We also open-source the [scripts](https://huggingface.co/datasets/nvidia/ConvRAG-Bench/tree/main/evaluation) for running and evaluating on ConvRAG Bench (including the unanswerable scenario evaluations).

## License
ConvRAG Bench is built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset.
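
For readers who want to inspect the data behind the tables above, a minimal loading sketch with the Hugging Face `datasets` library follows. The config name `"doc2dial"` is an assumption inferred from the dataset list in the README, not something specified in this commit; check the dataset card's `configs:` section for the exact config and split names.

```python
# Minimal sketch: load one ConvRAG Bench subset with the Hugging Face `datasets` library.
# NOTE: the config name "doc2dial" is an assumption inferred from the dataset list above;
# the actual config names are defined in the README's `configs:` section.
from datasets import load_dataset

dataset = load_dataset("nvidia/ConvRAG-Bench", "doc2dial")
print(dataset)  # DatasetDict showing the available splits and their sizes

# Peek at one record to see its fields.
first_split = next(iter(dataset.values()))
print(first_split[0])
```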
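
The unanswerable-scenario metric described above (the average of accuracy on unanswerable and answerable cases) can be sketched roughly as follows. This is not the released evaluation script: the `is_no_answer_response` heuristic and the `response`/`answerable` field names are hypothetical placeholders.

```python
# Rough sketch of the unanswerable-scenario metric described above: accuracy on
# unanswerable cases (model should say it cannot answer) and accuracy on answerable
# cases (model should NOT say it cannot answer), averaged into one score.
# The helper and field names below are placeholders, not the released scripts' logic.
def is_no_answer_response(response: str) -> bool:
    # Placeholder heuristic: treat common refusal phrasings as "cannot answer".
    keywords = ["cannot be answered", "cannot find the answer", "not answerable"]
    response = response.lower()
    return any(k in response for k in keywords)


def unanswerable_scenario_score(samples) -> float:
    """samples: iterable of dicts with 'response' (model output, str) and
    'answerable' (ground-truth bool)."""
    correct_ans = total_ans = 0
    correct_unans = total_unans = 0
    for s in samples:
        said_no_answer = is_no_answer_response(s["response"])
        if s["answerable"]:
            total_ans += 1
            correct_ans += int(not said_no_answer)  # giving an answer counts as correct
        else:
            total_unans += 1
            correct_unans += int(said_no_answer)    # refusing counts as correct
    acc_ans = correct_ans / total_ans if total_ans else 0.0
    acc_unans = correct_unans / total_unans if total_unans else 0.0
    return (acc_ans + acc_unans) / 2  # final metric: mean of the two accuracies
```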