updated model info
- **Trained by:** [Piotr Zalewski](https://huggingface.co/lunahr)
- **License:** llama3.2
- **Finetuned from model:** [lunahr/Hermes-3-Llama-3.2-3B-abliterated](https://huggingface.co/lunahr/Hermes-3-Llama-3.2-3B-abliterated)*
- **Dataset used:** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)

This Llama model was trained faster than [Unsloth](https://github.com/unslothai/unsloth) using [custom training code](https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4).

Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4 to learn how to finetune your models on **both** of the Kaggle-provided GPUs.
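The linked notebook's actual training code is not reproduced in this card. As a rough illustration of the general idea of putting two GPUs to work, here is a minimal PyTorch sketch that splits a toy model's layers across two devices; every name in it is made up for the example, and on Kaggle the two devices would be the provided T4s (`cuda:0` and `cuda:1`), whereas `cpu` is used here so the sketch runs anywhere:

```python
# Illustrative only -- NOT the notebook's training code: a naive split of a
# toy model's layers across two devices, with activations moved between them.
import torch
import torch.nn as nn

dev0 = torch.device("cpu")  # stand-in for the first T4 ("cuda:0")
dev1 = torch.device("cpu")  # stand-in for the second T4 ("cuda:1")

class TwoDeviceMLP(nn.Module):
    """Toy model whose two halves live on different devices."""
    def __init__(self):
        super().__init__()
        self.first = nn.Linear(16, 32).to(dev0)
        self.second = nn.Linear(32, 4).to(dev1)

    def forward(self, x):
        x = torch.relu(self.first(x.to(dev0)))
        return self.second(x.to(dev1))  # activations hop to the second device

model = TwoDeviceMLP()
out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```

The notebook itself may use a different parallelism scheme (e.g. data-parallel training); see the Kaggle link above for the real setup.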
\*Created from https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B using a custom abliterator.