Update README.md
README.md
CHANGED
@@ -34,7 +34,7 @@ The primary intended users of the model are researchers and hobbyists in natural
 ## Training Details
 
 Vicuna v1.5 (16k) is fine-tuned from LLaMA with supervised instruction fine-tuning and linear RoPE scaling.
-The training data is around
+The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each.
 See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
 
 ## Evaluation
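For context on the "linear RoPE scaling" the diff mentions: the idea is to divide token positions by a fixed factor before computing rotary-embedding angles, so an extended context window maps back into the position range the base model was trained on. Below is a minimal NumPy sketch; the function name, the head dimension, and the scaling factor of 4 are illustrative assumptions, not details taken from the model card.

```python
import numpy as np

def rope_angles(dim, positions, base=10000.0, scale=1.0):
    """Rotary-embedding angles; scale > 1 is linear RoPE scaling:
    positions are divided by `scale`, compressing a long context
    into the position range seen during pretraining."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    # Linear scaling: shrink positions by the context-extension factor.
    angles = np.outer(positions / scale, inv_freq)
    return np.cos(angles), np.sin(angles)

# Hypothetical factor of 4: position 16380 under scale=4 yields the
# same angles as position 4095 did in the unscaled model.
cos_long, _ = rope_angles(128, np.arange(16384), scale=4.0)
cos_base, _ = rope_angles(128, np.arange(4096), scale=1.0)
assert np.allclose(cos_long[16380], cos_base[4095])
```

The scaled model is then fine-tuned (here, on the 16K-token packed sequences the diff describes) so attention adapts to the compressed positions.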