Commit 8cd25a4 (parent b002ce5) by bjoernp: Create README.md
---
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
license: llama2
---
# LAION LeoLM 70b Chat: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release a series of foundation models trained with 8k context length
under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt). Now, we're finally releasing the
much-anticipated `leo-hessianai-70b`, the largest model of this series, based on `Llama-2-70b`.
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post](https://laion.ai/blog/leo-lm/) or our paper (preprint coming soon) for more details!

*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*

## LeoLM Chat
`LeoLM/leo-hessianai-70b-chat` is a German chat model built on our foundation model `LeoLM/leo-hessianai-70b` and finetuned on a selection of German instruction datasets.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench-DE scores:
```
{
    "first_turn": 7.2375,
    "second_turn": 6.5375,
    "categories": {
        "writing": 8.55,
        "roleplay": 7.15,
        "reasoning": 4.2,
        "math": 4.85,
        "coding": 4.85,
        "extraction": 7.75,
        "stem": 8.45,
        "humanities": 9.3
    },
    "average": 6.8875
}
```

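As a quick sanity check on the numbers above, the reported overall average is both the mean of the two turn scores and the mean of the eight category scores:

```python
# MT-Bench-DE scores as reported above
scores = {
    "first_turn": 7.2375,
    "second_turn": 6.5375,
    "categories": {
        "writing": 8.55, "roleplay": 7.15, "reasoning": 4.2, "math": 4.85,
        "coding": 4.85, "extraction": 7.75, "stem": 8.45, "humanities": 9.3,
    },
}

# Mean of the two turn scores ...
turn_avg = (scores["first_turn"] + scores["second_turn"]) / 2
# ... and the mean over the eight category scores.
cat_avg = sum(scores["categories"].values()) / len(scores["categories"])

print(round(turn_avg, 4), round(cat_avg, 4))  # both 6.8875
```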
## Model Details

- **Finetuned from:** [LeoLM/leo-hessianai-70b](https://huggingface.co/LeoLM/leo-hessianai-70b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:bjoern.pl@outlook.de)

## Use in 🤗Transformers
First install the direct dependencies:
```
pip install transformers torch
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch

system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereiten KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>

"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir, wie die Fahrradwegesituation in Hamburg ist."

generator = pipeline(model="LeoLM/leo-hessianai-70b-chat", device="cuda", torch_dtype=torch.float16)
# Prepend the system prompt so the model receives the full ChatML context.
print(generator(system_prompt + prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```

"*Hamburg verfügt über ein umfangreiches Netzwerk an Radwegen, das ein effizientes und angenehmes Radfahren in der ganzen Stadt ermöglicht. Die Radwege variieren in Bezug auf ihre Qualität, wobei einige erstklassig mit eigens für Radfahrer reservierten Spuren sind, während andere einfache Fahrradsymbole auf dem Boden haben, die anzeigen, dass Radfahrer abwechselnd mit dem Autoverkehr auf der Straße fahren sollten. Einige Nebenstraßen haben auch spezielle Fahrradspuren, wobei einige mit Bordsteinabsenkungen zur Seite der Autospuren markiert sind. Zusätzlich haben viele Hauptstraßen, insbesondere in Nebenstraßen, fahrradfreundliche Abstände zwischen den geparkten Autos und dem Gehweg, was ein bequemes Fahren auf der Straße ermöglicht. Der Bau von Radschnellwegen, die schnelles und effizientes Radfahren in und aus der Stadt ermöglichen, ist im Gange und wird in den kommenden Jahren fortgesetzt. Insgesamt sind die Radwege in Hamburg weitläufig und gut ausgeschildert, was es zu einem angenehmen Ort macht, um mit dem Fahrrad zu fahren.*"

## Prompting / Prompt Template

Prompt dialogue template (ChatML format):

```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```

The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```

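For programmatic use, a small helper (a hypothetical sketch, not shipped with the model) can assemble this multi-turn ChatML format from a system message and a list of past turns, ending with an open assistant turn for the model to complete:

```python
# Hypothetical helper: builds a ChatML prompt matching the template above.
def build_chatml_prompt(system_message: str, turns: list[tuple[str, str]], next_prompt: str) -> str:
    """turns is a list of (user_prompt, assistant_reply) pairs from earlier in the conversation."""
    parts = [f"<|im_start|>system\n{system_message}<|im_end|>"]
    for user_msg, assistant_msg in turns:
        parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
        parts.append(f"<|im_start|>assistant\n{assistant_msg}<|im_end|>")
    # End with an open assistant turn so generation produces the next reply.
    parts.append(f"<|im_start|>user\n{next_prompt}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(build_chatml_prompt(
    "Du bist ein hilfreicher Assistent.",
    [("Hallo!", "Hallo, wie kann ich helfen?")],
    "Wie ist das Wetter?",
))
```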
## Ethical Considerations and Limitations

LeoLM has been tested in English and German, and this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-70b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-70b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).

## Finetuning Details

| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 131214 |
| Global batch size | 256 |
| Learning rate | 1.5e-5 |
| Warmup steps | 15 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
| Weight decay | 0.01 |

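From the table above, the optimizer step count follows directly (a sketch, assuming the final partial batch of each epoch is kept rather than dropped):

```python
import math

# Derived from the finetuning hyperparameters above.
examples_per_epoch = 131214
global_batch_size = 256
num_epochs = 3

steps_per_epoch = math.ceil(examples_per_epoch / global_batch_size)
total_steps = steps_per_epoch * num_epochs
print(steps_per_epoch, total_steps)  # 513 steps per epoch, 1539 in total
```

The 15 warmup steps are thus only about 1% of the schedule, with the cosine decay covering the rest.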
## Dataset Details
```
## Stats for 'Subset of OpenAssistant/OASST-DE' (3534 samples (100.0%))
-----------------
Accepted: 3534/3534 (100.0%)
Accepted tokens: 2259302
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 639.3044708545557
-----------------

## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------

## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------

## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------

## Stats for 'Subset of LeoLM/German_Poems' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------

## Stats for 'Subset of LeoLM/German_Songs' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------

## Stats for 'total' (132540 samples (100.0%))
-----------------
Accepted: 132540/132540 (100.0%)
Accepted tokens: 67530728
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 5507
Avg tokens per sample: 509.51205673758864
-----------------
```
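The per-subset numbers above are internally consistent with the 'total' block; a quick check:

```python
# Per-subset (samples, accepted tokens) copied from the stats above.
subsets = {
    "OASST-DE": (3534, 2259302),
    "evol-instruct-deutsch": (57841, 42958192),
    "alpaca-gpt4-deutsch": (48969, 13372005),
    "OpenSchnabeltier": (21314, 8134690),
    "German_Poems": (490, 618642),
    "German_Songs": (392, 187897),
}

total_samples = sum(n for n, _ in subsets.values())
total_tokens = sum(t for _, t in subsets.values())
print(total_samples, total_tokens, total_tokens / total_samples)
# 132540 samples, 67530728 tokens, ~509.5 avg tokens per sample
```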