---
license: apache-2.0
base_model: Rijgersberg/GEITje-7B
tags:
- generated_from_trainer
- GEITje
- conversational
model-index:
- name: GEITje-7B-chat-v2
results: []
datasets:
- Rijgersberg/no_robots_nl
- Rijgersberg/ultrachat_10k_nl
- BramVanroy/dutch_chat_datasets
language:
- nl
pipeline_tag: text-generation
---
# GEITje-7B-chat-v2
**🤖️ Try the chat model in [🤗 Hugging Face Spaces](https://huggingface.co/spaces/Rijgersberg/GEITje-7B-chat)!**
# GEITje-7B
GEITje is a large open Dutch language model with 7 billion parameters, based on Mistral 7B.
It has been further trained on 10 billion tokens of Dutch text.
This has improved its Dutch language skills and increased its knowledge of Dutch topics.
## Model description
### _Mistral_ β Base Model
GEITje is based on [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/).
It's a large open language model with 7 billion parameters,
trained by [Mistral AI](https://mistral.ai).
According to Mistral AI, the 7B model performs better than [Llama 2](https://ai.meta.com/llama/) 13B on all (English-language) benchmarks they tested it on.
Mistral 7B has been released under the Apache 2.0 open source license.
### _GEITje_ β Trained Further on Dutch Texts
GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the [Dutch Gigacorpus](http://gigacorpus.nl) and the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) web crawling corpus.
It is a so-called _full-parameter finetune_: all parameters were updated during training.
It is not a [PEFT](https://huggingface.co/blog/peft) or [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora) finetune.
Like Mistral, GEITje has a _context length_ of 8,192 tokens.
### _GEITje-chat_ β Finetuned for Dialogues
As a demonstration of GEITje's capabilities for chat applications, two initial chat variants of GEITje have also been finetuned: GEITje-chat and GEITje-chat-v2.
They can follow instructions, answer questions, and hold dialogues on a variety of topics.
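A minimal usage sketch for the chat model is shown below. This is an assumption-laden illustration, not documented usage from this card: it assumes a recent `transformers` version (with chat-message support in the `text-generation` pipeline) plus `torch`, and the call is wrapped in a function because loading the model downloads roughly 15 GB of weights.

```python
# Hypothetical usage sketch for GEITje-7B-chat-v2 (not from the model card).
# Assumes recent `transformers` + `torch`; downloading the weights is heavy,
# so everything expensive lives inside the function and is not run here.

def chat_with_geitje(user_message: str, max_new_tokens: int = 256):
    from transformers import pipeline  # imported lazily: heavy dependency

    generator = pipeline(
        "text-generation",
        model="Rijgersberg/GEITje-7B-chat-v2",
        device_map="auto",
    )
    # Standard role/content chat format accepted by recent pipelines.
    messages = [{"role": "user", "content": user_message}]
    return generator(messages, max_new_tokens=max_new_tokens)[0]["generated_text"]

# Example call (commented out to avoid the multi-gigabyte download):
# print(chat_with_geitje("Schrijf een kort gedicht over een geitje."))
```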
## More info
Read more about GEITje-chat in the [📖 README](https://github.com/Rijgersberg/GEITje/blob/main/README-en.md) on GitHub.
## Checkpoints
An intermediate checkpoint is available in the `checkpoints` branch.
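Such a branch checkpoint can be loaded by pointing `from_pretrained` at it via the `revision` argument. This is a sketch under the assumption that the branch holds a complete checkpoint at its root; it requires `transformers` and `torch` and downloads the full weights, so the calls are wrapped in a function that is not invoked here.

```python
# Sketch: load the intermediate checkpoint from the `checkpoints` branch.
# `revision` selects a branch, tag, or commit on the Hugging Face Hub.

def load_intermediate_checkpoint():
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy: heavy deps

    model = AutoModelForCausalLM.from_pretrained(
        "Rijgersberg/GEITje-7B-chat-v2", revision="checkpoints"
    )
    tokenizer = AutoTokenizer.from_pretrained(
        "Rijgersberg/GEITje-7B-chat-v2", revision="checkpoints"
    )
    return model, tokenizer
```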
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
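The listed values are internally consistent, which a few lines of arithmetic confirm (the total step count at epoch 1.0 is taken from the training-results table below):

```python
# Sanity-check the hyperparameters against the training-results table.
train_batch_size = 2               # per-device batch size
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)      # 16, matching the value listed above

total_steps = 12180                # optimizer steps at epoch 1.0 (results table)
warmup_steps = int(0.1 * total_steps)  # lr_scheduler_warmup_ratio: 0.1
print(warmup_steps)                # 1218 warmup steps on the cosine schedule

# Implied training-set size: optimizer steps x effective batch size.
print(total_steps * total_train_batch_size)  # 194880 samples
```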
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7832 | 0.05 | 609 | 0.8844 |
| 0.6904 | 0.1 | 1218 | 0.8698 |
| 0.8195 | 0.15 | 1827 | 0.8583 |
| 0.7463 | 0.2 | 2436 | 0.8475 |
| 0.6739 | 0.25 | 3045 | 0.8395 |
| 0.7604 | 0.3 | 3654 | 0.8332 |
| 0.8024 | 0.35 | 4263 | 0.8261 |
| 0.6881 | 0.4 | 4872 | 0.8203 |
| 0.6466 | 0.45 | 5481 | 0.8167 |
| 0.7042 | 0.5 | 6090 | 0.8121 |
| 0.702 | 0.55 | 6699 | 0.8081 |
| 0.7255 | 0.6 | 7308 | 0.8054 |
| 0.7558 | 0.65 | 7917 | 0.8036 |
| 0.7587 | 0.7 | 8526 | 0.8022 |
| 0.9217 | 0.75 | 9135 | 0.8016 |
| 0.6938 | 0.8 | 9744 | 0.8011 |
| 0.6962 | 0.85 | 10353 | 0.8011 |
| 0.664 | 0.9 | 10962 | 0.8011 |
| 0.6544 | 0.95 | 11571 | 0.8011 |
| 0.6782 | 1.0 | 12180 | 0.8011 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0