zihanliu committed on
Commit 4ab852d
1 Parent(s): d5a8a52

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -15,6 +15,8 @@ tags:
 ## Model Details
 We introduce ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). ChatQA-1.5 is built on top of the Llama-3 foundation model using the training recipe from [ChatQA (1.0)](https://arxiv.org/abs/2401.10225). Additionally, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capabilities. ChatQA-1.5 has two variants: ChatQA-1.5-8B and ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM); we converted the checkpoints to Hugging Face format.
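
 Since the converted checkpoints are in Hugging Face format, loading one with `transformers` might look roughly like the sketch below. This is a minimal sketch, not the model card's documented usage: the repo ID `nvidia/ChatQA-1.5-8B` is taken from the resources linked below, and the prompt is an illustrative placeholder rather than ChatQA's actual conversation format.

 ```python
 # Minimal sketch: load the Hugging Face-format checkpoint with transformers.
 # Assumes the nvidia/ChatQA-1.5-8B repo ID from the resources below; the prompt
 # is a placeholder, not ChatQA's documented conversation format.
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer

 model_id = "nvidia/ChatQA-1.5-8B"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(
     model_id, torch_dtype=torch.float16, device_map="auto"
 )

 prompt = "Question: What does ChatQA-1.5 excel at?\n\nAnswer:"
 inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
 outputs = model.generate(**inputs, max_new_tokens=64)
 print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
 ```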
 
+ ## Other Resources
+ [ChatQA-1.5-8B](https://huggingface.co/nvidia/ChatQA-1.5-8B) &nbsp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ConvRAG-Bench) &nbsp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &nbsp; [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder)
 
 ## Benchmark Results
 Results in ConvRAG Bench are as follows: