Model Card for Mistral-7B-v0.1-flashback
The Mistral-7B-v0.1-flashback model continues the pretraining of the base Mistral-7B-v0.1 model on roughly 40 GB of forum threads from the Swedish website flashback.org. Training used the QLoRA method, updating about 88 million parameters over a single epoch.
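To put the "about 88 million parameters" figure in context, the sketch below estimates the trainable-parameter count of a LoRA setup on Mistral-7B-v0.1. The rank (r=32) and the choice of adapting every linear projection in each decoder layer are assumptions, not taken from the card; the model dimensions are Mistral-7B-v0.1's published architecture. Under these assumptions the count lands near the reported figure.

```python
# Rough LoRA trainable-parameter estimate for Mistral-7B-v0.1.
# Assumptions (NOT from the card): rank r = 32, adapters on all seven
# linear projections in each of the 32 decoder layers.

HIDDEN = 4096          # Mistral-7B hidden size
KV_DIM = 1024          # 8 KV heads x 128 head dim (grouped-query attention)
INTERMEDIATE = 14336   # MLP intermediate size
LAYERS = 32
R = 32                 # assumed LoRA rank

def lora_params(d_in: int, d_out: int, r: int = R) -> int:
    """A LoRA adapter adds two low-rank matrices: (d_in x r) and (r x d_out)."""
    return r * (d_in + d_out)

per_layer = (
    lora_params(HIDDEN, HIDDEN)          # q_proj
    + lora_params(HIDDEN, KV_DIM)        # k_proj
    + lora_params(HIDDEN, KV_DIM)        # v_proj
    + lora_params(HIDDEN, HIDDEN)        # o_proj
    + lora_params(HIDDEN, INTERMEDIATE)  # gate_proj
    + lora_params(HIDDEN, INTERMEDIATE)  # up_proj
    + lora_params(INTERMEDIATE, HIDDEN)  # down_proj
)
total = per_layer * LAYERS
print(f"{total:,} trainable parameters")  # prints 83,886,080
```

About 84M under these assumptions, so the card's ~88M is consistent with a rank-32 (or similar) configuration, with the small gap plausibly explained by additional target modules such as the embedding or output layers.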