Update README.md

README.md CHANGED

@@ -64,44 +64,44 @@ configs:

---

## ChatRAG Bench

ChatRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. It is built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopioCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, and ConvFinQA. ChatRAG Bench covers a wide range of documents and question types, which require models to generate responses from long context, comprehend and reason over tables, conduct arithmetic calculations, and indicate when answers cannot be found within the context. The details of this benchmark are described [here](https://arxiv.org/pdf/2401.10225v3). **For more information about ChatQA, check the [website](https://chatqa-project.github.io/)!**
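
As a quick orientation, the following is a minimal sketch of loading one subset of the benchmark with the Hugging Face `datasets` library. The config name (`doc2dial`) and split name (`test`) are assumptions used for illustration; the exact names are listed in the `configs` section of this README's YAML header.

```python
# Minimal sketch: load one ChatRAG Bench subset with Hugging Face `datasets`.
# The config name ("doc2dial") and split ("test") are assumptions for illustration;
# see the `configs` list in the YAML header for the actual names.
from datasets import load_dataset

ds = load_dataset("nvidia/ChatRAG-Bench", "doc2dial", split="test")

# Inspect one conversational QA example (field names vary by subset).
print(ds[0])
```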

## Other Resources

[Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)   [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B)   [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data)   [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)   [Website](https://chatqa-project.github.io/)   [Paper](https://arxiv.org/pdf/2401.10225v3)

## Benchmark Results

### Main Results

| | ChatQA-1.0-7B | Command-R-Plus | Llama3-instruct-70b | GPT-4-0613 | GPT-4-Turbo | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 35.35 | 38.90 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 40.10 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 51.46 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 77.73 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 41.60 | 51.94 | 49.60 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 84.16 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 79.98 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 48.32 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 47.86 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 33.75 | 31.90 | 30.10 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.03 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 54.72 | 53.89 | 53.99 | 57.14 |

Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. The ChatQA-1.5 models use the HybriDial training dataset; to ensure a fair comparison, we also report average scores that exclude HybriDial.
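
As a small worked example of how these averages are computed (an unweighted mean over the ten datasets, dropping HybriDial for the second row), here is a sketch using the ChatQA-1.5-70B column from the table above:

```python
# Reproduce the two average rows for the ChatQA-1.5-70B column of the table above.
scores = {
    "Doc2Dial": 41.26, "QuAC": 38.82, "QReCC": 51.40, "CoQA": 78.44, "DoQA": 50.67,
    "ConvFinQA": 81.88, "SQA": 83.82, "TopioCQA": 55.63, "HybriDial": 68.27, "INSCIT": 32.31,
}

avg_all = sum(scores.values()) / len(scores)
without_hybridial = {k: v for k, v in scores.items() if k != "HybriDial"}
avg_excl = sum(without_hybridial.values()) / len(without_hybridial)

print(f"Average (all): {avg_all:.2f}")                  # 58.25
print(f"Average (exclude HybriDial): {avg_excl:.2f}")   # 57.14
```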

### Evaluation of Unanswerable Scenario

ChatRAG Bench also includes evaluations for the unanswerable scenario, where we evaluate a model's capability to determine whether the answer to a question can be found within the given context. Equipping models with this capability can substantially decrease the likelihood of hallucination.

| | GPT-3.5-turbo-0613 | Command-R-Plus | Llama3-instruct-70b | GPT-4-0613 | GPT-4-Turbo | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Avg-Both | 73.27 | 68.11 | 76.42 | 80.73 | 80.47 | 77.25 | 75.57 | 71.86 |
| Avg-QuAC | 78.335 | 69.605 | 81.285 | 87.42 | 88.73 | 80.76 | 79.3 | 72.59 |
| QuAC (no*) | 61.91 | 41.79 | 66.89 | 83.45 | 80.42 | 77.66 | 63.39 | 48.25 |
| QuAC (yes*) | 94.76 | 97.42 | 95.68 | 91.38 | 97.03 | 83.85 | 95.21 | 96.93 |
| Avg-DoQA | 68.21 | 66.62 | 71.555 | 74.05 | 72.21 | 73.74 | 71.84 | 71.125 |
| DoQA (no*) | 51.99 | 46.37 | 60.78 | 74.28 | 72.28 | 68.81 | 62.76 | 52.24 |
| DoQA (yes*) | 84.43 | 86.87 | 82.33 | 73.82 | 72.13 | 78.67 | 80.92 | 90.01 |

We use the QuAC and DoQA datasets, which contain such unanswerable cases, to evaluate this capability, using both answerable and unanswerable samples. Specifically, for unanswerable cases we count the model as correct if it indicates that the question cannot be answered, and for answerable cases we count the model as correct if it does not indicate the question is unanswerable (i.e., it gives an answer). We then report the average of the accuracy on unanswerable cases and the accuracy on answerable cases as the final metric.
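
As a concrete reading of this metric, here is a minimal sketch. The `indicates_unanswerable` heuristic is a hypothetical placeholder, not the benchmark's actual matching criterion, which is not spelled out here. Consistent with the table above, the Avg-QuAC and Avg-DoQA rows are means of the corresponding no*/yes* rows, and Avg-Both averages across the two datasets.

```python
# Sketch of the unanswerable-scenario metric described above.
# `indicates_unanswerable` is a hypothetical placeholder; the exact check for
# "the model says it cannot answer" depends on the evaluation's matching rules.
def indicates_unanswerable(response: str) -> bool:
    response = response.lower()
    return "cannot be answered" in response or "unanswerable" in response

def unanswerable_metric(samples):
    """samples: list of (model_response, is_answerable) pairs."""
    answerable = [r for r, a in samples if a]
    unanswerable = [r for r, a in samples if not a]
    # Answerable case is correct when the model gives an answer (does not refuse);
    # unanswerable case is correct when the model indicates it cannot answer.
    acc_answerable = sum(not indicates_unanswerable(r) for r in answerable) / len(answerable)
    acc_unanswerable = sum(indicates_unanswerable(r) for r in unanswerable) / len(unanswerable)
    # Final metric: average accuracy over the two case types.
    return (acc_answerable + acc_unanswerable) / 2
```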

@@ -117,7 +117,7 @@ Zihan Liu (zihanl@nvidia.com), Wei Ping (wping@nvidia.com)

## Citation
<pre>
@article{liu2024chatqa,
  title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG},
  author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
  journal={arXiv preprint arXiv:2401.10225},
  year={2024}}
</pre>