Update README.md
README.md CHANGED
@@ -87,7 +87,7 @@ ConvRAG Bench is a benchmark for evaluating a model's conversational QA capabilities
 | Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
 | Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |

-Note that
+Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure a fair comparison, we also report average scores excluding HybriDial.

 ### Evaluation of Unanswerable Scenario
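For clarity, here is a minimal sketch of how the two average rows above could be computed from per-dataset scores; the dataset names and score values in the snippet are placeholders rather than the actual ConvRAG Bench results.

```python
# Minimal sketch: computing "Average (all)" vs. "Average (exclude HybriDial)".
# The per-dataset names and scores below are placeholders, not real results.
per_dataset_scores = {
    "Doc2Dial": 38.9,   # placeholder score
    "QuAC": 37.1,       # placeholder score
    "HybriDial": 56.4,  # placeholder score
    # ... remaining benchmark datasets
}

# Average over every dataset in the benchmark.
avg_all = sum(per_dataset_scores.values()) / len(per_dataset_scores)

# Average over the same datasets, excluding HybriDial, since ChatQA-1.5
# saw some HybriDial training samples.
excl = {name: s for name, s in per_dataset_scores.items() if name != "HybriDial"}
avg_excl_hybridial = sum(excl.values()) / len(excl)

print(f"Average (all): {avg_all:.2f}")
print(f"Average (exclude HybriDial): {avg_excl_hybridial:.2f}")
```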