Occiglot-7B-DE-EN-Instruct bf16 format

A polyglot language model for the Occident.

Occiglot-7B-DE-EN-Instruct is the instruct version of occiglot-7b-de-en, a generative language model with 7B parameters supporting German and English, trained by the Occiglot Research Collective. It was trained on 180M tokens of additional multilingual and code instructions. Note that the model has not been safety-aligned and might generate problematic outputs.

This is the first release of an ongoing open research project for multilingual language models. If you want to train a model for your own language or are working on evaluations, please contact us or join our Discord server. We are open to collaborations!

*Special thanks go to Disco Research, Jan Philipp Harries, and Björn Plüster for sharing the German dataset with us*

Model details

  • Instruction tuned from: occiglot-7b-de-en
  • Model type: Causal decoder-only transformer language model
  • Languages: English, German, and code.
  • License: Apache 2.0
  • Compute resources: DFKI cluster
  • Contributors: Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting
  • Research labs: Occiglot with support from SAINT and SLT
  • Contact: Discord

How to use

The model was trained using the ChatML instruction template. You can use the transformers chat template feature for interaction. Since generation relies on some randomness, we set a seed for reproducibility:

>>> from transformers import AutoTokenizer, MistralForCausalLM, set_seed
>>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-de-en-instruct")
>>> model = MistralForCausalLM.from_pretrained("occiglot/occiglot-7b-de-en-instruct")  # you may want to use bfloat16 and/or move the model to GPU here
>>> messages = [
>>>    {"role": "system", "content": "You are a helpful assistant. Please give short and concise answers."},
>>>    {"role": "user", "content": "Wer ist der deutsche Bundeskanzler?"},
>>> ]
>>> tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
>>> set_seed(42)
>>> outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=200)
>>> tokenizer.decode(outputs[0][len(tokenized_chat[0]):])
'Der deutsche Bundeskanzler ist Olaf Scholz.'
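If you want to inspect the exact prompt string the ChatML template produces (ChatML wraps each turn in <|im_start|> / <|im_end|> markers), you can render the conversation without tokenizing it:

>>> prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
>>> print(prompt)  # ChatML-formatted conversation, ending with the opening of the assistant turn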

Dataset

The training data was split evenly between German and English based on the total number of tokens. We would like to thank Disco Research, Jan Philipp Harries, and Björn Plüster for making their dataset available to us.

English and Code

German

Training settings

  • Full instruction fine-tuning on 8xH100.
  • 0.6 - 4 training epochs (depending on dataset sampling).
  • Framework: axolotl
  • Precision: bf16
  • Optimizer: AdamW
  • Global batch size: 128 (with 8192 context length)
  • Learning rate schedule: cosine annealing with warmup
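For a sense of scale, these settings imply roughly one million tokens per optimizer step (a back-of-the-envelope sketch, assuming fully packed sequences):

# Approximate tokens per optimizer step, assuming fully packed sequences
global_batch_size = 128   # sequences per global batch
context_length = 8192     # tokens per sequence
tokens_per_step = global_batch_size * context_length
print(f"{tokens_per_step:,}")  # 1,048,576 -> roughly 1M tokens per step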

Tokenizer

Tokenizer is unchanged from Mistral-7B-v0.1.
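To verify this yourself, a minimal sketch (assuming both Hugging Face repositories are accessible) is to compare the two vocabularies directly:

from transformers import AutoTokenizer

occiglot_tok = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-de-en-instruct")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# An unchanged tokenizer means identical vocabularies (and therefore identical token IDs)
print(occiglot_tok.get_vocab() == mistral_tok.get_vocab())  # expected: True
print(mistral_tok.vocab_size)                               # 32000 for Mistral-7B-v0.1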

Evaluation

Preliminary evaluation results can be found below. Please note that the non-English results are based on partially machine-translated datasets and English prompts (Belebele and the Okapi framework) and should therefore be interpreted with caution; for example, they may be biased towards models with strong English performance. We are currently working on more suitable benchmarks for Spanish, French, German, and Italian.

Evaluation results

All 5 Languages

Model                       avg       arc_challenge  belebele  hellaswag  mmlu      truthfulqa
Occiglot-7b-eu5             0.516895  0.508109       0.675556  0.718963   0.402064  0.279782
Occiglot-7b-eu5-instruct    0.537799  0.53632        0.691111  0.731918   0.405198  0.32445
Occiglot-7b-de-en           0.518337  0.496297       0.715111  0.669034   0.412545  0.298697
Occiglot-7b-de-en-instruct  0.543173  0.530826       0.745778  0.67676    0.411326  0.351176
Leo-mistral-hessianai-7b    0.484806  0.462103       0.653556  0.642242   0.379208  0.28692
Mistral-7b-v0.1             0.547111  0.528937       0.768444  0.682516   0.448253  0.307403
Mistral-7b-instruct-v0.2    0.56713   0.547228       0.741111  0.69455    0.422501  0.430262

English

Model                       avg       arc_challenge  belebele  hellaswag  mmlu      truthfulqa
Occiglot-7b-eu5             0.59657   0.530717       0.726667  0.789882   0.531904  0.403678
Occiglot-7b-eu5-instruct    0.617905  0.558874       0.746667  0.799841   0.535109  0.449
Occiglot-7b-de-en           0.518337  0.496297       0.715111  0.669034   0.412545  0.298697
Occiglot-7b-de-en-instruct  0.543173  0.530826       0.745778  0.67676    0.411326  0.351176
Leo-mistral-hessianai-7b    0.600949  0.522184       0.736667  0.777833   0.538812  0.429248
Mistral-7b-v0.1             0.668385  0.612628       0.844444  0.834097   0.624555  0.426201
Mistral-7b-instruct-v0.2    0.713657  0.637372       0.824444  0.846345   0.59201   0.668116

German

Model                       avg       arc_challenge_de  belebele_de  hellaswag_de  mmlu_de   truthfulqa_de
Occiglot-7b-eu5             0.508311  0.493584          0.646667     0.666631      0.483406  0.251269
Occiglot-7b-eu5-instruct    0.531506  0.529512          0.667778     0.685205      0.488234  0.286802
Occiglot-7b-de-en           0.540085  0.50556           0.743333     0.67421       0.514633  0.26269
Occiglot-7b-de-en-instruct  0.566474  0.54491           0.772222     0.688407      0.515915  0.310914
Leo-mistral-hessianai-7b    0.517766  0.474765          0.691111     0.682109      0.488309  0.252538
Mistral-7b-v0.1             0.527957  0.476476          0.738889     0.610589      0.529567  0.284264
Mistral-7b-instruct-v0.2    0.535215  0.485885          0.688889     0.622438      0.501961  0.376904
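The avg column in each table is the unweighted mean of the five benchmark scores. For example, recomputing it for Occiglot-7b-de-en-instruct on the German benchmarks, using the values from the table above:

# Recompute the German "avg" for Occiglot-7b-de-en-instruct from the table values
scores = {
    "arc_challenge_de": 0.54491,
    "belebele_de": 0.772222,
    "hellaswag_de": 0.688407,
    "mmlu_de": 0.515915,
    "truthfulqa_de": 0.310914,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 6))  # 0.566474, matching the avg column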

Acknowledgements

The pre-trained model training was supported by a compute grant at the 42 supercomputer, a central component in the development of hessian AI, the AI Innovation Lab (funded by the Hessian Ministry of Higher Education, Research and the Arts (HMWK) and the Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)), and the AI Service Centers (funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK)). The curation of the training data is partially funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) through the project OpenGPT-X (project no. 68GX21007D).

License

Apache 2.0
