---
license: apache-2.0
datasets:
- Skylion007/openwebtext
language:
- en
pipeline_tag: text-generation
---
This is a pre-trained language model based on the Mistral 7B architecture, scaled down to approximately 248 million parameters. So far it has been trained on 770,000 examples over 256,000 steps within the first epoch, and the batch size will remain low for future epochs. This model is not intended for direct use; it is meant to be fine-tuned on a downstream task.
During evaluation on InstructMix, this model achieved an average perplexity score of 6.3. More training sessions are planned for this model.
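Since the model is intended as a base for fine-tuning, here is a minimal sketch of loading it with the `transformers` library; the repo id `Locutusque/TinyMistral-248M` is assumed from this repository's name, and raw generations from the base checkpoint are not expected to be high quality:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id; adjust if the repository name differs.
model_id = "Locutusque/TinyMistral-248M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Quick generation check on the base model before fine-tuning.
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For downstream use, this checkpoint would typically be passed as the starting point to a standard fine-tuning setup (for example, the `Trainer` API) on task-specific data.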