# Wizard Mega 13B

Wizard Mega is a Llama 13B model fine-tuned on the ShareGPT, WizardLM, and Wizard-Vicuna datasets. Each of these datasets has been filtered to remove responses where the model replies with "As an AI language model..." or similar, as well as responses where the model refuses to answer.
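
The model can be loaded with the standard `transformers` text-generation APIs. Below is a minimal usage sketch; the prompt template and generation settings shown here are assumptions rather than the exact format used during fine-tuning, so adjust them to match the templates in this repo's configs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/wizard-mega-13b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 13B model is roughly 26 GB of weights in fp16
    device_map="auto",          # requires `accelerate`; shards layers across available devices
)

# Prompt template is an assumption -- check the repo configs for the exact format.
prompt = "### Instruction: Write a haiku about wizards.\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```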
Release (Epoch Two): The Wizard Mega 13B model is being released after two epochs, as the eval loss increased during the third (and final planned) epoch. Because of this, we have preliminarily decided to use the epoch 2 checkpoint as the final release candidate. The training run is logged at https://wandb.ai/wing-lian/vicuna-13b/runs/5uebgm49.
## Build
Wizard Mega was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB for 15 hours. The configuration to duplicate this build is provided in this repo's [/configs folder](https://huggingface.co/openaccess-ai-collective/wizard-mega-13b/tree/main/configs).
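
If you only want those configuration files without cloning the whole repo, one option is a sketch like the following, assuming a recent `huggingface_hub` release that supports `allow_patterns` and `local_dir`:

```python
from huggingface_hub import snapshot_download

# Fetch only the Axolotl configs from this repo; all other files are skipped.
snapshot_download(
    repo_id="openaccess-ai-collective/wizard-mega-13b",
    allow_patterns=["configs/*"],
    local_dir="wizard-mega-13b-configs",
)
# The downloaded YAML can then be passed to Axolotl's trainer; see the Axolotl
# README for the launch command matching your installed version.
```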