Update README.md
README.md CHANGED

@@ -7,7 +7,7 @@ size_categories:
 - 1K<n<10K
 tags:
 - RAG
+- ChatRAG
 - conversational QA
 - multi-turn QA
 - QA with context

@@ -63,8 +63,8 @@ configs:
 path: data/sqa/test.json
 ---

+## ChatRAG Bench
+ChatRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. It is built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopiOCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, and ConvFinQA. ChatRAG Bench covers a wide range of documents and question types, requiring models to generate responses from long contexts, comprehend and reason over tables, conduct arithmetic calculations, and indicate when the answer to a question cannot be found within the context. The details of this benchmark are described [here](https://arxiv.org/abs/2401.10225).

 ## Other Resources
 [Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)   [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B)   [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data)   [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)
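The `configs` context in the hunk above (e.g., `path: data/sqa/test.json`) suggests each subset can be loaded directly with the Hugging Face `datasets` library. Below is a minimal sketch; the config name `sqa` and the `test` split are assumptions inferred from that entry, not something stated in this diff.

```python
# Minimal sketch (assumptions noted above): load one ChatRAG Bench subset and
# inspect its fields before writing any evaluation code.
from datasets import load_dataset

# "sqa" and split="test" are inferred from the configs entry data/sqa/test.json.
sqa = load_dataset("nvidia/ChatRAG-Bench", "sqa", split="test")

print(len(sqa))        # number of conversational QA samples in this subset
print(sqa[0].keys())   # field names may vary per subset; check before evaluating
```

Presumably the other subsets listed in the description can be loaded the same way by swapping the config name.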

@@ -91,7 +91,7 @@ Note that ChatQA-1.5 is built based on Llama-3 base model, and ChatQA-1.0 is bui

 ### Evaluation of Unanswerable Scenario

+ChatRAG Bench also includes evaluations for the unanswerable scenario, where we evaluate models' capability to determine whether the answer to the question can be found within the given context. Equipping models with such capability can substantially decrease the likelihood of hallucination.

 | | GPT-3.5-turbo-0613 | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
 | -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|

@@ -106,10 +106,10 @@ ConvRAG Bench also includes evaluations for the unanswerable scenario, where we
 We use the QuAC and DoQA datasets, which contain such unanswerable cases, to evaluate this capability, and we use both answerable and unanswerable samples in the evaluation. Specifically, for unanswerable cases, the model is considered correct if it indicates that the question cannot be answered; for answerable cases, the model is considered correct if it does not claim the question is unanswerable (i.e., it gives an answer). The final metric is the average of the accuracy on unanswerable cases and the accuracy on answerable cases.

 ## Evaluation Scripts
+We also open-source the [scripts](https://huggingface.co/datasets/nvidia/ChatRAG-Bench/tree/main/evaluation) for running and evaluating on ChatRAG Bench (including the unanswerable scenario evaluations).

 ## License
+ChatRAG Bench is built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset.

 ## Correspondence to
 Zihan Liu (zihanl@nvidia.com), Wei Ping (wping@nvidia.com)
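For concreteness, the averaged unanswerable/answerable accuracy described above can be sketched as follows. This is only an illustrative sketch, not the released evaluation script; `model_says_unanswerable` is a hypothetical stand-in for however one detects a "cannot be answered" response.

```python
# Illustrative sketch of the final metric described above: the mean of the
# accuracy on unanswerable samples and the accuracy on answerable samples.
# model_says_unanswerable is a hypothetical callable that returns True when a
# generated response indicates the question cannot be answered from the context.
from typing import Callable, List

def unanswerable_scenario_score(
    responses: List[str],
    is_unanswerable: List[bool],
    model_says_unanswerable: Callable[[str], bool],
) -> float:
    unans_correct = unans_total = 0
    ans_correct = ans_total = 0
    for response, unanswerable in zip(responses, is_unanswerable):
        flagged = model_says_unanswerable(response)
        if unanswerable:
            unans_total += 1
            unans_correct += int(flagged)       # correct: model says it cannot answer
        else:
            ans_total += 1
            ans_correct += int(not flagged)     # correct: model actually gives an answer
    unans_acc = unans_correct / max(unans_total, 1)
    ans_acc = ans_correct / max(ans_total, 1)
    return (unans_acc + ans_acc) / 2            # final metric: average of the two accuracies
```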