mpasila committed on
Commit
27360f0
1 Parent(s): 41c2701

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,120 @@
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
---

# Viking 7B

**NOTE: We are aware of an incompatibility with HF transformers that impacts finetuning and are working to correct it.**

_**NOTE:** This is a **research checkpoint** of a model for which **training has not been completed.** It is being provided in its current state for research and testing purposes. **Care should be taken when using the outputs of the model.** Once pretraining has completed we intend to release additional instruction-tuned and chat-tuned varieties._

Viking 7B is a 7B parameter decoder-only transformer pretrained on Finnish, English, Swedish, Danish, Norwegian, Icelandic and code. It is being trained on 2 trillion tokens (1 trillion as of this release). Viking 7B is a fully open source model and is made available under the Apache 2.0 License.

Viking was created in a collaboration between the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, [SiloGen](https://www.silo.ai/silogen) from [Silo AI](https://www.silo.ai/), and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland.

This project is part of an ongoing effort to create open source large language models for non-English and especially low-resource languages like Finnish. The model is fluent in Finnish, English and the Scandinavian languages, and is capable of basic translation between them. It is also able to understand and generate code.

## Model Family

Viking is the second set of models released by LumiOpen and is available in three parameter counts:

[Viking 7B](https://huggingface.co/LumiOpen/Viking-7B)

[Viking 13B](https://huggingface.co/LumiOpen/Viking-13B)

[Viking 33B](https://huggingface.co/LumiOpen/Viking-33B)

## Model Overview
_**NOTE:** In addition to being an early research release, Viking is a base model which needs further fine tuning for most use cases._

Viking is a generative pretrained transformer using a LLaMA-like GPT architecture, and makes use of rotary positional embeddings and flash attention.

| Hyperparameter | Value |
| :------------- | :----: |
| n_parameters | 7.55B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab_size | 131072 |
| sequence_length | 4096 |

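As a rough sanity check, the parameter count follows from the table; below is a back-of-the-envelope estimate that ignores norm parameters and assumes the SwiGLU MLP with intermediate size 11008 from config.json and untied input/output embeddings:

```python
d_model, n_layers, vocab = 4096, 32, 131072
d_ff = 11008  # intermediate_size from config.json

attn = 4 * d_model * d_model       # q, k, v and output projections
mlp = 3 * d_model * d_ff           # gate, up and down projections (SwiGLU)
embeddings = 2 * vocab * d_model   # untied input and output embeddings

total = n_layers * (attn + mlp) + embeddings
print(f"{total / 1e9:.2f}B")       # ~7.55B, matching n_parameters above
```
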
## Training

Viking 7B was trained on the LUMI supercomputer, using 256 AMD MI250X GPUs. Each MI250X GPU has two Graphics Complex Dies (GCDs), for a world size of 512 during training. Training used activation checkpointing, a micro batch size of 1, gradient accumulation of 16, and a 3D parallelism strategy of TP=1, PP=4, DP=128.

Training began in September 2023 using a [custom fork](https://github.com/LumiOpen/Megatron-DeepSpeed) of the Megatron-DeepSpeed framework.

## Training Hyperparameters

| Hyperparameter | Value | Comment |
| :------------: | :---: | :------: |
| Precision | bfloat16 | |
| Optimizer | AdamW | |
| Learning rate | 3e-4 | 10B tokens warm-up, cosine decay to 3e-5 |
| Weight decay | 1e-1 | |
| Batch size | 1024 | 1024 samples x 4096 tokens = 4,194,304 tokens |

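The learning rate schedule in the table can be sketched as follows; this is a minimal illustration that assumes linear warm-up over the first 10B tokens and cosine decay to the 3e-5 floor over the remaining 2T-token budget (the exact decay horizon is an assumption):

```python
import math

def viking_lr(tokens_seen, peak_lr=3e-4, min_lr=3e-5,
              warmup_tokens=10e9, total_tokens=2e12):
    """Illustrative warm-up + cosine decay schedule."""
    if tokens_seen < warmup_tokens:
        # Linear warm-up over the first 10B tokens
        return peak_lr * tokens_seen / warmup_tokens
    # Cosine decay from peak_lr to min_lr over the remaining token budget
    progress = min((tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens), 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```
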
## Tokenizer

Viking uses a custom 128K Bloom tokenizer trained on the same English, Finnish, Swedish, Danish, Norwegian, Icelandic and code dataset used to train the model.

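The tokenizer loads through the standard `transformers` API; a minimal sketch (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Viking-7B")

print(tokenizer.vocab_size)                   # 131072
print(tokenizer.tokenize("Hyvää huomenta!"))  # inspect Finnish tokenization
```
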
## Dataset

Viking is being trained on a 2 trillion token mixed dataset of English, Finnish, Swedish, Danish, Norwegian, Icelandic and code.

More details on the exact dataset will be published soon.

## Evaluation Results

Full evaluation results will be published with the final model.

## Training Checkpoints

Training checkpoints are available as branches in the repository. Checkpoints will be released roughly every 100B tokens. The main branch will always point to the latest checkpoint. The following checkpoints are available:

* [100B](https://huggingface.co/LumiOpen/Viking-7B/tree/100B)
* [200B](https://huggingface.co/LumiOpen/Viking-7B/tree/200B)
* [300B](https://huggingface.co/LumiOpen/Viking-7B/tree/300B)
* [400B](https://huggingface.co/LumiOpen/Viking-7B/tree/400B)
* [500B](https://huggingface.co/LumiOpen/Viking-7B/tree/500B)
* [600B](https://huggingface.co/LumiOpen/Viking-7B/tree/600B)
* [700B](https://huggingface.co/LumiOpen/Viking-7B/tree/700B)
* [800B](https://huggingface.co/LumiOpen/Viking-7B/tree/800B)
* [900B](https://huggingface.co/LumiOpen/Viking-7B/tree/900B)
* [1000B](https://huggingface.co/LumiOpen/Viking-7B/tree/1000B)

The transformers library allows you to load a checkpoint from a branch as follows:

```python
import torch
import transformers

branch = "200B"
model = transformers.AutoModelForCausalLM.from_pretrained(
    "LumiOpen/Viking-7B",
    torch_dtype=torch.bfloat16,
    revision=branch,
)
```
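
A brief usage sketch once a checkpoint is loaded (the prompt and generation settings are illustrative; as a base model, Viking simply continues the input text):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Viking-7B")

inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```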

## Ethical Considerations and Limitations

_Viking 7B is a release of a partially trained model, and special care should be taken when using any output._

Viking is an advanced language model, primarily optimized for English, Finnish, Swedish, Norwegian, Danish, Icelandic and code, with no meaningful proficiency in any other languages. As with most AI-driven systems, Viking is a product of the vast data it has been trained on, which may reflect the imperfections, biases, and idiosyncrasies of the wider web. Viking may, at times, produce outputs that can be considered inaccurate, prejudiced, or controversial. Users and developers engaging with Viking should exercise discretion and consider additional evaluation and customization to ensure the model's responses align with their specific needs and ethical standards.

## License

Viking is released under the Apache 2.0 license.
config.json ADDED
@@ -0,0 +1,40 @@
{
  "_name_or_path": "/scratch/project_462000319/general-tools/viking_v2_checkpoints/viking_v2_7B_iter_0239000_bfloat16",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.37.2",
  "untie_embeddings_and_output_weights": true,
  "use_cache": true,
  "vocab_size": 131072,
  "quantization_config": {
    "quant_method": "exl2",
    "version": "0.0.17",
    "bits": 4.0,
    "head_bits": 6,
    "calibration": {
      "rows": 100,
      "length": 2048,
      "dataset": "(default)"
    }
  }
}
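
The `quantization_config` block shows that this upload is a 4.0 bits-per-weight ExLlamaV2 ("exl2") quantization produced with version 0.0.17. A minimal loading sketch with the `exllamav2` library, assuming the repository has been downloaded locally (the path, prompt and sampling settings are illustrative):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Viking-7B-exl2"  # hypothetical local path to this repository
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the quantized weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Suomen pääkaupunki on", settings, 32))
```
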
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.37.2"
}
job_new.json ADDED
The diff for this file is too large to render. See raw diff
 
measurement.json ADDED
The diff for this file is too large to render. See raw diff
 
output.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:979701ce153735a3408ec379cd5ba4ccdc1a70f7c4643d6da6d64cf4ffd06db0
size 4738150256
special_tokens_map.json ADDED
@@ -0,0 +1,52 @@
{
  "additional_special_tokens": [
    "<fim_prefix>", "<fim_middle>", "<fim_suffix>", "<fim_pad>", "<filename>",
    "<gh_stars>", "<issue_start>", "<issue_comment>", "<issue_closed>",
    "<jupyter_start>", "<jupyter_text>", "<jupyter_code>", "<jupyter_output>",
    "<empty_output>", "<commit_before>", "<commit_msg>", "<commit_after>",
    "<reponame>", "<|im_start|>", "<|im_end|>"
  ],
  "bos_token": {"content": "<s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "eos_token": {"content": "</s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<pad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,227 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "1": {"content": "<s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "2": {"content": "</s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "3": {"content": "<pad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "4": {"content": "<fim_prefix>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "5": {"content": "<fim_middle>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "6": {"content": "<fim_suffix>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "7": {"content": "<fim_pad>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "8": {"content": "<filename>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "9": {"content": "<gh_stars>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "10": {"content": "<issue_start>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "11": {"content": "<issue_comment>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "12": {"content": "<issue_closed>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "13": {"content": "<jupyter_start>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "14": {"content": "<jupyter_text>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "15": {"content": "<jupyter_code>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "16": {"content": "<jupyter_output>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "17": {"content": "<empty_output>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "18": {"content": "<commit_before>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "19": {"content": "<commit_msg>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "20": {"content": "<commit_after>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "21": {"content": "<reponame>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "22": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "23": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
  },
  "additional_special_tokens": [
    "<fim_prefix>", "<fim_middle>", "<fim_suffix>", "<fim_pad>", "<filename>",
    "<gh_stars>", "<issue_start>", "<issue_comment>", "<issue_closed>",
    "<jupyter_start>", "<jupyter_text>", "<jupyter_code>", "<jupyter_output>",
    "<empty_output>", "<commit_before>", "<commit_msg>", "<commit_after>",
    "<reponame>", "<|im_start|>", "<|im_end|>"
  ],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<pad>",
  "padding_side": "left",
  "tokenizer_class": "BloomTokenizer",
  "unk_token": "<unk>"
}