Update README.md
Yi-34B 200K XLCTX base model fine-tuned on the adamo1139/rawrr_v2-2_stage1 (DPO), adamo1139/AEZAKMI_v3-7 (SFT) and adamo1139/toxic-dpo-natural-v5 (ORPO) datasets. Training took around 7 (DPO) + 13 (SFT) + 3 (ORPO) = 23 hours total on a single RTX 3090 Ti; all fine-tuning was done locally. This excludes failed attempts and issues I had with the merging script, which basically made me run the DPO and SFT stages twice because I thought my LoRAs were broken; it turned out to be a bug in newer transformers/peft versions.
This model is tuned to use more natural language and also be very uncensored.
Say goodbye to "It's important to remember"! \
The prompt format is standard ChatML. Don't expect it to be good at math or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy, free chatbot.
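For reference, a minimal sketch of a ChatML-formatted prompt (the system message below is just a placeholder, not the one used in training):

```python
# Minimal ChatML prompt sketch; the system message is only a placeholder.
system = "A chat with an uncensored assistant."
user = "Hello, who are you?"

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
print(prompt)
```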
The cost of this fine-tune was about $5-$10 in electricity.
The base model used for fine-tuning was the Yi-34B-200K model shared by 01.ai, the newer version with improved long-context needle-in-a-haystack retrieval. They didn't give it a new name, and giving it a number would mess up the AEZAKMI naming scheme by adding a second number, so I will be calling it XLCTX.
[You can see examples of responses to various prompts here (loaded with transformers load_in_4bit)](https://huggingface.co/datasets/adamo1139/misc/blob/main/benchmarks/yi-34b-200k-xlctx-aezakmi-raw-toxic-natural-orpo-0205/benchmark_prompts.txt)
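If you want to try something similar locally, a rough sketch of 4-bit loading with transformers and bitsandbytes could look like this (the model id is a placeholder for this repo, and the generation settings are just examples):

```python
# Rough sketch of 4-bit loading with transformers + bitsandbytes.
# "your-username/your-model" is a placeholder for this repo's id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,   # requires the bitsandbytes package
    device_map="auto",
)

prompt = (
    "<|im_start|>system\nA chat.<|im_end|>\n"
    "<|im_start|>user\nHello, who are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```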
I had to lower `max_position_embeddings` in config.json and `model_max_length` for training to start; otherwise I was OOMing straight away.
This attempt had both `max_position_embeddings` and `model_max_length` set to 4096, which worked perfectly fine. I then reverted them to 200000 when uploading.
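As a sketch, the temporary edit looked roughly like this (I'm assuming `model_max_length` lives in tokenizer_config.json; file layout may differ in your setup):

```python
# Sketch of the temporary context-length change so training would fit in VRAM.
# Assumes model_max_length is in tokenizer_config.json, which may vary per setup.
import json

def set_json_key(path, key, value):
    with open(path) as f:
        data = json.load(f)
    data[key] = value
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

# Lower for training...
set_json_key("config.json", "max_position_embeddings", 4096)
set_json_key("tokenizer_config.json", "model_max_length", 4096)

# ...then revert to the full 200K context before uploading:
# set_json_key("config.json", "max_position_embeddings", 200000)
# set_json_key("tokenizer_config.json", "model_max_length", 200000)
```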