Update README.md
language:
- en
library_name: transformers
---
More information about the previous [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) version is available here: [Don't stop DPOptimizing!](https://www.linkedin.com/pulse/dont-stop-dpoptimizing-jan-koco%2525C5%252584-mq4qf)

Author: Jan Kocoń [LinkedIn](https://www.linkedin.com/in/jankocon/) [Google Scholar](https://scholar.google.com/citations?user=pmQHb5IAAAAJ&hl=en&oi=ao) [ResearchGate](https://www.researchgate.net/profile/Jan-Kocon-2)

Changes relative to [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2):

1. **Training Dataset**: In addition to the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset, this version incorporates the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. The combined datasets enhance the model's capabilities in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation (see the dataset sketch after this list).

2. **Tokenizer and Formatting**: The tokenizer now originates directly from the [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) model (see the tokenizer sketch below).

3. **Training Configuration**: Training has shifted from a fixed step budget (`max_steps=200`) to epoch-based training (`num_train_epochs=1`), i.e., one full pass over the combined dataset rather than a fixed number of optimizer steps (see the configuration sketch below).
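
Below is a minimal sketch, using the `datasets` library, of how the two preference datasets named in point 1 could be combined. The dataset IDs come from this card; the column-intersection step and the shuffle seed are assumptions, not the author's published preprocessing.

```python
# Hypothetical illustration (not the author's exact preprocessing):
# combine the two DPO preference datasets into one training set.
from datasets import load_dataset, concatenate_datasets

orca = load_dataset("Intel/orca_dpo_pairs", split="train")
chatml = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

# concatenate_datasets needs matching schemas, so keep only the columns
# the two datasets share (assumes the shared columns are compatible).
common = [c for c in orca.column_names if c in chatml.column_names]
combined = concatenate_datasets(
    [orca.select_columns(common), chatml.select_columns(common)]
).shuffle(seed=42)

print(combined)
```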
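
Point 2 corresponds to standard `transformers` usage; a sketch, assuming only that the tokenizer is loaded from the previous checkpoint:

```python
# Sketch: load the tokenizer straight from the previous model version,
# as described in point 2 (standard transformers usage).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Neuronovo/neuronovo-9B-v0.2")

# Whatever formatting (e.g. a chat template) is stored with the tokenizer
# travels with it; this may print None if no template is set.
print(tokenizer.chat_template)
```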
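
Finally, a sketch of the step-to-epoch change in point 3, written against `transformers.TrainingArguments` and `trl`'s `DPOTrainer` as they looked in early-2024 releases. Every value except `num_train_epochs=1` (and the retired `max_steps=200`) is an illustrative assumption, not the card's published configuration.

```python
# Sketch: epoch-based training instead of a fixed step budget.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="neuronovo-dpo",     # assumed name
    num_train_epochs=1,             # replaces the earlier max_steps=200
    per_device_train_batch_size=2,  # assumed value
    gradient_accumulation_steps=4,  # assumed value
    learning_rate=5e-5,             # assumed value
    logging_steps=10,               # assumed value
)

# With model, ref_model, tokenizer, and the combined dataset in scope
# (early-2024 trl API):
# trainer = DPOTrainer(model, ref_model, args=training_args,
#                      train_dataset=combined, tokenizer=tokenizer)
# trainer.train()
```

With one epoch over the concatenated data, the number of optimizer steps scales with dataset size instead of being capped at 200.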