zihanliu committed
Commit 1a60ce8
1 Parent(s): 0f5db73

Update README.md

Files changed (1)
  1. README.md +40 -3
README.md CHANGED
@@ -43,6 +43,26 @@ configs:
  data_files:
  - split: test
    path: data/qrecc/test.json
+ - config_name: doqa_cooking
+   data_files:
+   - split: test
+     path: data/doqa/test_cooking.json
+ - config_name: doqa_movies
+   data_files:
+   - split: test
+     path: data/doqa/test_movies.json
+ - config_name: doqa_travel
+   data_files:
+   - split: test
+     path: data/doqa/test_travel.json
+ - config_name: sqa
+   data_files:
+   - split: test
+     path: data/sqa/test.json
+ - config_name: convfinqa
+   data_files:
+   - split: dev
+     path: data/convfinqa/dev.json
  ---
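
For reference, each `config_name` declared in the front matter above can be loaded by name with the Hugging Face `datasets` library. A minimal sketch (the repository id is a placeholder, since the dataset's actual id is not stated here):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual id of this dataset repo.
REPO_ID = "<namespace>/<convrag-bench>"

# Each `config_name` from the YAML front matter is a loadable configuration.
# Note that the convfinqa config ships a "dev" split rather than "test".
doqa_cooking = load_dataset(REPO_ID, "doqa_cooking", split="test")
convfinqa = load_dataset(REPO_ID, "convfinqa", split="dev")

print(doqa_cooking)
```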
 
  ## ConvRAG Bench
 
@@ -50,14 +70,31 @@ ConvRAG Bench is a benchmark for evaluating a model's conversational QA capabili
  ConvRAG Bench also includes evaluations for the unanswerable scenario, where we evaluate models' capability to determine whether the answer to the question can be found within the given context. Equipping models with such capability can substantially decrease the likelihood of hallucination.
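
As a rough illustration of that unanswerable check, the idea is to test whether a model declines exactly when the context lacks the answer. The field names and refusal phrase below are assumptions for the sketch, not taken from the benchmark's data or scripts:

```python
# Hypothetical refusal phrase and field names -- assumptions for illustration only.
UNANSWERABLE_MARKER = "cannot find the answer"

def is_refusal(answer: str) -> bool:
    """Heuristic: does the model's reply say the context lacks the answer?"""
    return UNANSWERABLE_MARKER in answer.lower()

def unanswerable_accuracy(samples: list[dict]) -> float:
    """Fraction of turns where the model correctly decides answerable vs. not.

    Each sample is assumed to carry a boolean `unanswerable` label and the
    model's `prediction` string.
    """
    correct = sum(is_refusal(s["prediction"]) == s["unanswerable"] for s in samples)
    return correct / len(samples)

samples = [
    {"prediction": "Sorry, I cannot find the answer in the given context.", "unanswerable": True},
    {"prediction": "The warranty covers repairs for up to 30 days.", "unanswerable": False},
]
print(unanswerable_accuracy(samples))  # 1.0
```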
 
- ## Evaluation
- We open-source the scripts for running and evaluating on ConvRAG
+ ## Benchmark Results
+
+ | | ChatQA-1.0-7B | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
+ | -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|
+ | Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 38.9 | 39.33 | 41.26 |
+ | QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 41.82 | 39.73 | 38.82 |
+ | QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 48.05 | 49.03 | 51.40 |
+ | CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 78.57 | 76.46 | 78.44 |
+ | DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 51.94 | 49.6 | 50.67 |
+ | ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 73.69 | 78.46 | 81.88 |
+ | SQA | 61.87 | 74.07 | 69.61 | 79.21 | 69.14 | 73.28 | 83.82 |
+ | TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 50.98 | 49.96 | 55.63 |
+ | HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 56.44 | 65.76 | 68.27 |
+ | INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 31.9 | 30.1 | 32.31 |
+ | Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
+ | Average (excluding HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |
 
+ Note that ChatQA-1.5 used some samples from the HybriDial training dataset. To ensure a fair comparison, we also report average scores excluding HybriDial.
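
As a sanity check on the two average rows, the sketch below recomputes them from the per-dataset scores of one column (ChatQA-1.5-70B); the other columns follow the same arithmetic:

```python
# Recompute the "Average" rows of the table above for the ChatQA-1.5-70B column.
scores = {
    "Doc2Dial": 41.26, "QuAC": 38.82, "QReCC": 51.40, "CoQA": 78.44,
    "DoQA": 50.67, "ConvFinQA": 81.88, "SQA": 83.82, "TopioCQA": 55.63,
    "HybriDial": 68.27, "INSCIT": 32.31,
}

avg_all = sum(scores.values()) / len(scores)
avg_wo_hybridial = sum(v for k, v in scores.items() if k != "HybriDial") / (len(scores) - 1)

print(f"Average (all): {avg_all:.2f}")                           # 58.25
print(f"Average (excluding HybriDial): {avg_wo_hybridial:.2f}")  # 57.14
```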
+
+ ## Evaluation
+ We open-source the scripts for running and evaluating on ConvRAG (including the unanswerable evaluations).
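
The released scripts are the source of truth for the metrics. Purely as an illustration, conversational QA predictions of this kind are commonly scored with a SQuAD-style token-overlap F1 against the reference answers; the helper below is a generic sketch, not code from the benchmark:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer and one reference answer."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the fridge", "fridge"))  # 1.0 after normalization
```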
 
  ## License
ConvRAG Bench is built on and derived from existing datasets. We refer users to the original licenses accompanying each dataset.
 
 
  ## Citation
  If you evaluate using ConvRAG, please cite all the datasets you use.
  <pre>