lachiewyoung committed
Commit 0da29ac
1 Parent(s): 74b110b

Upload 11 files
README.md CHANGED
@@ -1,3 +1,89 @@
---
license: apache-2.0
+ pipeline_tag: text-generation
+ tags:
+ - finetuned
+ inference: true
+ widget:
+ - messages:
+   - role: user
+     content: What is your favorite condiment?
---
+
+ # Model Card for Mistral-7B-Instruct-v0.2
+
+ The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
+
+ For full details of this model, please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
+
+ ## Instruction format
+
+ In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.
+
+ E.g.
+ ```
+ text = "<s>[INST] What is your favourite condiment? [/INST]"
+ "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
+ "[INST] Do you have mayonnaise recipes? [/INST]"
+ ```
+
+ This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ device = "cuda"  # the device to load the model onto
+
+ model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
+
+ messages = [
+     {"role": "user", "content": "What is your favourite condiment?"},
+     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
+     {"role": "user", "content": "Do you have mayonnaise recipes?"}
+ ]
+
+ encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
+
+ model_inputs = encodeds.to(device)
+ model.to(device)
+
+ generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
+ decoded = tokenizer.batch_decode(generated_ids)
+ print(decoded[0])
+ ```
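+
+ Note that `generated_ids` contains the prompt tokens followed by the new tokens. If you only want the assistant's reply, a minimal follow-up sketch is to slice off the prompt length before decoding:
+
+ ```python
+ # Decode only the newly generated tokens (everything after the prompt).
+ new_tokens = generated_ids[:, model_inputs.shape[1]:]
+ print(tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0])
+ ```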
+
+ ## Model Architecture
+ This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices; a short sketch of the grouped-query arithmetic follows the list:
+ - Grouped-Query Attention
+ - Sliding-Window Attention (note that `config.json` below sets `"sliding_window": null` for v0.2)
+ - Byte-fallback BPE tokenizer
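+
+ For intuition on grouped-query attention: per `config.json` below, the model uses 32 query heads but only 8 key/value heads, so the KV cache is roughly 4x smaller than with full multi-head attention. A rough back-of-the-envelope sketch (assuming bf16 at 2 bytes per value and head_dim = hidden_size / num_attention_heads = 4096 / 32 = 128):
+
+ ```python
+ # KV-cache bytes per token: K and V tensors, per layer, per KV head, per head dim.
+ num_layers, num_kv_heads, head_dim, bytes_per_param = 32, 8, 128, 2
+ kv_bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_param
+ print(kv_bytes_per_token)      # 131072, i.e. 128 KiB per token
+ print(2 * 32 * 32 * 128 * 2)   # 524288 with 32 KV heads, i.e. 4x larger
+ ```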
+
+ ## Troubleshooting
+ - If you see the following error:
+ ```
+ Traceback (most recent call last):
+   File "<stdin>", line 1, in <module>
+   File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
+     config, kwargs = AutoConfig.from_pretrained(
+   File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
+     config_class = CONFIG_MAPPING[config_dict["model_type"]]
+   File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
+     raise KeyError(key)
+ KeyError: 'mistral'
+ ```
+
+ installing transformers from source should solve the issue:
+
+ ```
+ pip install git+https://github.com/huggingface/transformers
+ ```
+
+ This should not be required after transformers-v4.33.4.
+
+ ## Limitations
+
+ The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
+ It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
+
+ ## The Mistral AI Team
+
+ Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.36.0",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
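
A quick sanity check after downloading is to load this config with `transformers` and confirm the grouped-query settings; a minimal sketch (the repo id is a placeholder for wherever these files are hosted):

```python
from transformers import AutoConfig

# Placeholder repo id for illustration; substitute the actual repository path.
config = AutoConfig.from_pretrained("lachiewyoung/Mistral-7B-Instruct-v0.2")
assert config.model_type == "mistral"
print(config.num_attention_heads, config.num_key_value_heads)  # 32 8
```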
gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
pytorch_model-00001-of-00003.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4c03495ea5e36c5191c44b84da5a4532b758d7cc57e1a3b7861bed04d9c922a
+ size 2471609582
pytorch_model-00002-of-00003.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e096d082800d0b19b2c85446e1f8d99ab073fd0765b9545761b7c0e43116bca5
+ size 2499940784
pytorch_model-00003-of-00003.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a1596023def7686b0adfdfcc706b188c32ef21683d0748368d98d6963743b66
+ size 2270284018
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,298 @@
+ {
+   "metadata": {
+     "total_size": 14483464192
+   },
+   "weight_map": {
+     "lm_head.weight": "pytorch_model-00003-of-00003.bin",
+     "model.embed_tokens.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.20.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
+     "model.layers.23.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.30.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
+     "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
+     "model.norm.weight": "pytorch_model-00003-of-00003.bin"
+   }
+ }
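
A quick integrity sketch: every shard named in `weight_map` should exist locally, and the shard file sizes should sum to `metadata["total_size"]`. (The LFS pointer sizes above sum to about 7.2 GB, roughly half of `total_size`; a check like this would surface that discrepancy.)

```python
import json
import os

# Sum the on-disk sizes of all shards referenced by the index and compare
# against the total recorded in the index metadata.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

shards = sorted(set(index["weight_map"].values()))
total = sum(os.path.getsize(shard) for shard in shards)
print(shards)
print(total, index["metadata"]["total_size"])
```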
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false,
+   "chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}"
+ }
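
The `chat_template` above deterministically renders conversations into the `[INST]` format described in the README; a minimal sketch of what it produces (the repo id is a placeholder for wherever these files are hosted):

```python
from transformers import AutoTokenizer

# Placeholder repo id for illustration; substitute the actual repository path.
tokenizer = AutoTokenizer.from_pretrained("lachiewyoung/Mistral-7B-Instruct-v0.2")
messages = [{"role": "user", "content": "What is your favourite condiment?"}]
print(tokenizer.apply_chat_template(messages, tokenize=False))
# -> <s>[INST] What is your favourite condiment? [/INST]
```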