Update README.md
README.md CHANGED
@@ -67,7 +67,7 @@ configs:
 ConvRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. ConvRAG Bench is built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopioCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, and ConvFinQA. ConvRAG Bench covers a wide range of documents and question types, which require models to generate responses from long context, comprehend and reason over tables, conduct arithmetic calculations, and indicate when answers cannot be found within the context. The details of this benchmark are described [here](https://arxiv.org/abs/2401.10225).
 
 ## Other Resources
-[ChatQA-1.5-8B](https://huggingface.co/nvidia/ChatQA-1.5-8B)   [ChatQA-1.5-70B](https://huggingface.co/nvidia/ChatQA-1.5-70B)   [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data)   [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)
+[Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)   [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B)   [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data)   [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)
 
 ## Benchmark Results
 
@@ -87,7 +87,7 @@ ConvRAG Bench is a benchmark for evaluating a model's conversational QA capabili
 | Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
 | Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |
 
-Note that ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure fair comparison, we also compare average scores excluding HybriDial.
+Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure a fair comparison, we also compare average scores excluding HybriDial.
 
 ### Evaluation of Unanswerable Scenario
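For illustration, the benchmark described in the card above (10 sub-datasets exposed as separate configs) can be pulled with the Hugging Face `datasets` library. This is a minimal sketch only: the repo id `nvidia/ConvRAG-Bench` and the config name `doc2dial` are assumptions not stated in this diff, so check the card's `configs:` section for the exact names.

```python
# Minimal sketch: load one ConvRAG Bench sub-dataset with the `datasets` library.
# The repo id ("nvidia/ConvRAG-Bench") and config name ("doc2dial") are assumptions
# made for illustration; use the names listed under `configs:` in the actual card.
from datasets import load_dataset

convrag = load_dataset("nvidia/ConvRAG-Bench", "doc2dial")

print(convrag)                        # show the available splits
split = next(iter(convrag.values()))  # take the first split without assuming its name
print(split[0].keys())                # field names of one evaluation example
```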
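The Other Resources line also links a multi-turn retriever for fetching the context that the benchmark evaluates over. Below is a hedged sketch of DPR-style dense retrieval with it, assuming the usual dual-encoder pattern: a separate context encoder (the repo id `nvidia/dragon-multiturn-context-encoder` is an assumption), CLS-token embeddings, dot-product scoring, and a query formed by concatenating the dialogue turns.

```python
# Hedged sketch of dual-encoder retrieval with the linked multi-turn retriever.
# Assumptions not stated in this diff: the context-encoder repo id, the use of the
# CLS (first-token) embedding as the sentence vector, and the query formatting.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nvidia/dragon-multiturn-query-encoder")
query_encoder = AutoModel.from_pretrained("nvidia/dragon-multiturn-query-encoder")
context_encoder = AutoModel.from_pretrained("nvidia/dragon-multiturn-context-encoder")  # assumed repo id

# Multi-turn query: dialogue history flattened into one string (assumed formatting).
query = "user: What does the benchmark cover? agent: Ten datasets. user: Which ones use tables?"
contexts = [
    "HybriDialogue and SQA require question answering over tables.",
    "Doc2Dial grounds each conversation in a long document.",
]

with torch.no_grad():
    q = tokenizer(query, return_tensors="pt", truncation=True, max_length=512)
    c = tokenizer(contexts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    q_emb = query_encoder(**q).last_hidden_state[:, 0, :]    # (1, hidden)
    c_emb = context_encoder(**c).last_hidden_state[:, 0, :]  # (num_contexts, hidden)

scores = (q_emb @ c_emb.T).squeeze(0)  # dot-product relevance scores
best = int(scores.argmax())            # index of the best-matching context
print(best, contexts[best])
```

In a full pipeline of this kind, the top-scoring passages would then be placed in front of the conversation before prompting one of the linked ChatQA models.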