lmzheng commited on
Commit b0aa428
1 Parent(s): 87b8bd3

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -34,7 +34,7 @@ The primary intended users of the model are researchers and hobbyists in natural
  ## Training Details
 
  Vicuna v1.5 (16k) is fine-tuned from LLaMA with supervised instruction fine-tuning and linear RoPE scaling.
- The training data is around 140K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each.
+ The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each.
  See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
 
  ## Evaluation
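
The "linear RoPE scaling" mentioned in the diff refers to the common technique of dividing rotary-embedding position indices by a constant so a longer context maps onto the position range the base model was trained on. The sketch below is a minimal illustration of that idea only; the function names are hypothetical, and the scale factor of 4 (a 16K window over a 4K base context) is an assumption, not a value confirmed by this commit.

```python
import math

def rope_inv_frequencies(dim: int, base: float = 10000.0) -> list[float]:
    # Standard RoPE inverse frequencies, one per pair of embedding dimensions.
    return [base ** (-2.0 * i / dim) for i in range(dim // 2)]

def rope_angles(position: int, dim: int, scale: float = 1.0) -> list[float]:
    # Linear RoPE scaling: divide the position index by `scale` before
    # computing rotation angles, compressing long positions into the
    # base model's trained range. scale=4.0 would map a 16K context onto
    # a 4K base context (an assumed value for illustration).
    return [(position / scale) * f for f in rope_inv_frequencies(dim, base=10000.0)]
```

With `scale=4.0`, the angles at position 8 equal the unscaled angles at position 2, which is exactly how the extended context reuses the original positional geometry.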