this is not a base Llama 3.2 uncensored model
README.md
CHANGED
@@ -10,7 +10,7 @@ tags:
 - sft
 - reasoning
 - llama-3
-base_model:
+base_model: lunahr/Hermes-3-Llama-3.2-3B-abliterated
 datasets:
 - KingNish/reasoning-base-20k
 - lunahr/thea-name-overrides
@@ -65,4 +65,4 @@ print("ANSWER: " + response_output)
 
 This Llama model was trained faster than [Unsloth](https://github.com/unslothai/unsloth) using [custom training code](https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4).
 
-Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4 to find out how you can finetune your models using BOTH of the Kaggle provided GPUs.
+Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4 to find out how you can finetune your models using BOTH of the Kaggle provided GPUs.
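For reference, a sketch of what the model card's metadata block looks like after this commit, assuming the surrounding frontmatter fields shown in the diff context are unchanged:

```yaml
tags:
- sft
- reasoning
- llama-3
base_model: lunahr/Hermes-3-Llama-3.2-3B-abliterated
datasets:
- KingNish/reasoning-base-20k
- lunahr/thea-name-overrides
```

Declaring `base_model` in the frontmatter lets the Hub link this model to its parent, which is the point of the commit: it marks the model as a finetune of Hermes-3-Llama-3.2-3B-abliterated rather than a base Llama 3.2 uncensored model.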