Richard Neuschulz committed
Commit c31932c
1 Parent(s): 9dba3d4

updated new files

Files changed (1)
  1. README.md +8 -100
README.md CHANGED
@@ -1,100 +1,8 @@
- ---
- library_name: transformers
- tags:
- - deutsch
- - german
- - seedbox
- - mistral
- - mixtral
- license: apache-2.0
- datasets:
- - seedboxai/multitask_german_examples_32k
- - seedboxai/ultra_feedback_german_modified_v1
- language:
- - de
- pipeline_tag: text-generation
- ---
-
- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645ded34a45b4182d7f5c385/9QywLGTbRrHYSq-m6fQmJ.jpeg)
-
-
- # KafkaLM-8x7b-German-V0.1
-
- **KafkaLM 8x7b** is a MoE model based on [Mistral AI's Mixtral 8x7b](https://mistral.ai/news/mixtral-of-experts/), fine-tuned on an ensemble of popular, high-quality open-source instruction sets (translated from English to German).
-
- KafkaLM 8x7b is a [Seedbox](https://huggingface.co/seedboxai) project trained by [Dennis Dickmann](https://huggingface.co/doubledsbv).
-
- **Why Kafka?**
- The models are proficient yet creative, and have a tendency to push linguistic boundaries 😊
-
-
- ## Model Details
-
- The purpose of releasing the **KafkaLM series** is to contribute to the German AI community a set of fine-tuned LLMs that are easy to use in everyday applications across a variety of tasks.
-
- The main goal was to provide LLMs proficient in German, especially for use in German-speaking business contexts where English alone is not sufficient.
-
- ### DPO
-
- The model has been aligned via DPO with a modified German version of the UltraFeedback dataset from Hugging Face ([seedboxai/ultra_feedback_german_modified_v1](https://huggingface.co/datasets/seedboxai/ultra_feedback_german_modified_v1)).
-
- ### Dataset
-
- I used an 8k-filtered version of the following dataset: [seedboxai/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxai/multitask_german_examples_32k)
-
- ### Prompt Format
-
- This model uses the following prompt format:
-
- ```
- <|system|>
- Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen.</s>
- <|user|>
- Welche Möglichkeiten der energetischen Sanierung habe ich neben Solar und Energiespeicher?</s>
- <|assistant|>
- ```
-
- ### Inference
-
- Getting started with the model is straightforward:
-
- ```python
- import transformers
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "seedboxai/KafkaLM-8x7B-German-V0.1-DPO"
-
- model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, trust_remote_code=True)
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- def generate_prompt(user_input):
-     sys_prompt = "Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen."
-
-     prompt = f"<|system|>\n{sys_prompt.strip()}</s>\n"
-     prompt += f"<|user|>\n{user_input.strip()}</s>\n"
-     prompt += "<|assistant|>\n"
-
-     return prompt.strip()
-
- generate_text = transformers.pipeline(
-     model=model,
-     tokenizer=tokenizer,
-     return_full_text=True,
-     task='text-generation',
-     temperature=0.5,
-     max_new_tokens=512,
-     top_p=0.95,
-     top_k=50,
-     do_sample=True,
- )
-
- print(generate_text(generate_prompt("Wer ist eigentlich dieser Kafka?")))
- ```
-
- ## Disclaimer
-
- The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
- This model should only be used for research purposes. The original Mixtral license and all restrictions of the datasets used to train this model apply.
 
+ title: KafkaLM 8x7b German
+ emoji: 🇩🇪
+ colorFrom: Black
+ colorTo: Yellow
+ sdk: gradio
+ sdk_version: 4.13.0
+ app_file: app.py
+ pinned: false