Update README.md
README.md CHANGED
@@ -13,7 +13,6 @@ Updated 240413: Dataset: 14002 rows. Rank: 64/128. Increased diversity of the in
 A light DPO pass to 'align' the model and make it less prone to saying untrue things. Ref: https://huggingface.co/datasets/neph1/truthy-dpo-v0.1-swe
 
 QLoRA trained for ~2 epochs on 14k rows of Q&A, Python examples and general 'instruct' type questions.
-Dataset generated using gpt-3.5-turbo and Mixtral 8x7b (about one third), plus manual gathering and some by ChatGPT and Gemini.
 
 The goal is to improve knowledge in Swedish topics, while improving the quality of the language.
 
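For readers who want a concrete picture of the DPO pass described in the diff above, here is a minimal sketch using the Hugging Face `trl`, `peft`, `transformers`, and `datasets` libraries. Only the LoRA rank/alpha (64/128) and the truthy-dpo-v0.1-swe dataset come from the README; the base model name, batch size, epoch count, and output path are illustrative placeholders, the 4-bit (QLoRA) quantization step is omitted for brevity, and exact keyword arguments vary between `trl` releases.

```python
# Hypothetical sketch of a "light DPO pass" with a LoRA adapter.
# Placeholders (not from the README): base_model, output_dir, batch size, epochs.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder, not the actual base model

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Preference dataset referenced in the README; assumed to expose the
# "prompt", "chosen" and "rejected" columns that DPOTrainer expects.
train_dataset = load_dataset("neph1/truthy-dpo-v0.1-swe", split="train")

# LoRA adapter using the rank/alpha noted in the README (64/128).
peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

args = DPOConfig(
    output_dir="dpo-truthy-swe",    # placeholder
    per_device_train_batch_size=2,  # placeholder
    num_train_epochs=1,             # "a light DPO pass"
    beta=0.1,                       # common DPO default
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,     # `tokenizer=` in older trl releases
    peft_config=peft_config,
)
trainer.train()
```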