zihanliu committed
Commit 751a8a6
1 Parent(s): 68b0d6a

Update README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -64,7 +64,10 @@ configs:
 ---
 
 ## ConvRAG Bench
-ConvRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. ConvRAG Bench are built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopioCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, ConvFinQA. ConvRAG Bench covers a wide range of documents and question types, which require models to generate responses from long context, comprehend and reason over tables, conduct arithmetic calculations, and indicate when questions cannot be found within the context.
+ConvRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. It is built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopiOCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, and ConvFinQA. ConvRAG Bench covers a wide range of documents and question types, requiring models to generate responses from long context, comprehend and reason over tables, conduct arithmetic calculations, and indicate when answers cannot be found within the context. The details of this benchmark are described [here](https://arxiv.org/abs/2401.10225).
+
+## Other Resources
+[ChatQA-1.5-8B](https://huggingface.co/nvidia/ChatQA-1.5-8B) &ensp; [ChatQA-1.5-70B](https://huggingface.co/nvidia/ChatQA-1.5-70B) &ensp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &ensp; [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)
 
 ## Benchmark Results
 
@@ -184,4 +187,4 @@ We would like to give credits to all the works constructing the datasets we use
 journal={Transactions of the Association for Computational Linguistics},
 year={2023}
 }
-</pre>
+</pre>
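The README text added in this commit describes a benchmark where a model answers conversational questions grounded in a retrieved context. As a rough sketch of what an evaluation harness might do with such data, the snippet below flattens one conversational-QA record into a single prompt string. The record layout (`document` plus a `messages` list) and the `User:`/`Assistant:` role labels are assumptions for illustration, not the benchmark's actual schema or the models' official prompt format.

```python
# Hypothetical sketch: turn a retrieved context plus a multi-turn conversation
# into one prompt for a conversational-QA model. Field names and role labels
# are illustrative assumptions, not ConvRAG Bench's real schema.

def build_prompt(document: str, messages: list) -> str:
    """Flatten a context passage and conversation turns into a single prompt."""
    lines = ["Context: " + document, ""]
    for turn in messages:
        role = "User" if turn["role"] == "user" else "Assistant"
        lines.append(role + ": " + turn["content"])
    lines.append("Assistant:")  # the model is asked to complete the next turn
    return "\n".join(lines)


# A made-up example record in the assumed layout.
example = {
    "document": "The Eiffel Tower is 330 metres tall.",
    "messages": [
        {"role": "user", "content": "How tall is the Eiffel Tower?"},
    ],
}

prompt = build_prompt(example["document"], example["messages"])
print(prompt)
```

A harness along these lines would send `prompt` to each model under evaluation and score the completion against the reference answer; the "unanswerable" cases the README mentions would be scored by checking whether the model declines when the answer is absent from `document`.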