Dracones committed on
Commit 7088237 (1 parent: 98e270d)

Upload folder using huggingface_hub

LICENSE ADDED
@@ -0,0 +1,53 @@
+ Tongyi Qianwen LICENSE AGREEMENT
+
+ Tongyi Qianwen Release Date: August 3, 2023
+
+ By clicking to agree or by using or distributing any portion or element of the Tongyi Qianwen Materials, you will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
+
+ 1. Definitions
+ a. This Tongyi Qianwen LICENSE AGREEMENT (this "Agreement") shall mean the terms and conditions for use, reproduction, distribution and modification of the Materials as defined by this Agreement.
+ b. "We"(or "Us") shall mean Alibaba Cloud.
+ c. "You" (or "Your") shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Materials for any purpose and in any field of use.
+ d. "Third Parties" shall mean individuals or legal entities that are not under common control with Us or You.
+ e. "Tongyi Qianwen" shall mean the large language models (including Qwen model and Qwen-Chat model), and software and algorithms, consisting of trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Us.
+ f. "Materials" shall mean, collectively, Alibaba Cloud's proprietary Tongyi Qianwen and Documentation (and any portion thereof) made available under this Agreement.
+ g. "Source" form shall mean the preferred form for making modifications, including but not limited to model source code, documentation source, and configuration files.
+ h. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ 2. Grant of Rights
+ You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Alibaba Cloud's intellectual property or other rights owned by Us embodied in the Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Materials.
+
+ 3. Redistribution
+ You may reproduce and distribute copies of the Materials or derivative works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
+ a. You shall give any other recipients of the Materials or derivative works a copy of this Agreement;
+ b. You shall cause any modified files to carry prominent notices stating that You changed the files;
+ c. You shall retain in all copies of the Materials that You distribute the following attribution notices within a "Notice" text file distributed as a part of such copies: "Tongyi Qianwen is licensed under the Tongyi Qianwen LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved."; and
+ d. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such derivative works as a whole, provided Your use, reproduction, and distribution of the work otherwise complies with the terms and conditions of this Agreement.
+
+ 4. Restrictions
+ If you are commercially using the Materials, and your product or service has more than 100 million monthly active users, You shall request a license from Us. You cannot exercise your rights under this Agreement without our express authorization.
+
+ 5. Rules of use
+ a. The Materials may be subject to export controls or restrictions in China, the United States or other countries or regions. You shall comply with applicable laws and regulations in your use of the Materials.
+ b. You can not use the Materials or any output therefrom to improve any other large language model (excluding Tongyi Qianwen or derivative works thereof).
+
+ 6. Intellectual Property
+ a. We retain ownership of all intellectual property rights in and to the Materials and derivatives made by or for Us. Conditioned upon compliance with the terms and conditions of this Agreement, with respect to any derivative works and modifications of the Materials that are made by you, you are and will be the owner of such derivative works and modifications.
+ b. No trademark license is granted to use the trade names, trademarks, service marks, or product names of Us, except as required to fulfill notice requirements under this Agreement or as required for reasonable and customary use in describing and redistributing the Materials.
+ c. If you commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against Us or any entity alleging that the Materials or any output therefrom, or any part of the foregoing, infringe any intellectual property or other right owned or licensable by you, then all licences granted to you under this Agreement shall terminate as of the date such lawsuit or other proceeding is commenced or brought.
+
+ 7. Disclaimer of Warranty and Limitation of Liability
+
+ a. We are not obligated to support, update, provide training for, or develop any further version of the Tongyi Qianwen Materials or to grant any license thereto.
+ b. THE MATERIALS ARE PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND INCLUDING WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A PARTICULAR PURPOSE. WE MAKE NO WARRANTY AND ASSUME NO RESPONSIBILITY FOR THE SAFETY OR STABILITY OF THE MATERIALS AND ANY OUTPUT THEREFROM.
+ c. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MATERIALS OR ANY OUTPUT OF IT, NO MATTER HOW IT’S CAUSED.
+ d. You will defend, indemnify and hold harmless Us from and against any claim by any third party arising out of or related to your use or distribution of the Materials.
+
+ 8. Survival and Termination.
+ a. The term of this Agreement shall commence upon your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
+ b. We may terminate this Agreement if you breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, you must delete and cease use of the Materials. Sections 7 and 9 shall survive the termination of this Agreement.
+
+ 9. Governing Law and Jurisdiction.
+ a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
+ b. The People's Courts in Hangzhou City shall have exclusive jurisdiction over any dispute arising out of this Agreement.
README.md ADDED
@@ -0,0 +1,85 @@
+ ---
+ license: other
+ license_name: tongyi-qianwen
+ license_link: >-
+   https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ base_model: Qwen/CodeQwen1.5-7B-Chat
+ tags:
+ - exl2
+ - chat
+ ---
+
+ # CodeQwen1.5-7B-Chat - EXL2 6.0bpw
+
+ This is a 6.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat).
+
+ Details about the model can be found on the model page linked above.
+
+ ## EXL2 Version
+
+ These quants were made with exllamav2 version 0.0.18. Quants made with this version of exllamav2 may not work on older versions of the library.
+
+ If you have problems loading these models, please update Text Generation WebUI to the latest version.
+
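+ If you prefer to load the quant directly with the exllamav2 Python API rather than through a front end, a minimal sketch follows. This is an illustration, not part of this repo: the local model path, the prompt, and the token budget are placeholder assumptions.
+
+ ```python
+ # Minimal exllamav2 loading sketch; adjust model_dir to wherever you downloaded this quant.
+ from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
+ from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
+
+ config = ExLlamaV2Config()
+ config.model_dir = "CodeQwen1.5-7B-Chat-exl2-6.0bpw"  # placeholder: your local download path
+ config.prepare()
+
+ model = ExLlamaV2(config)
+ cache = ExLlamaV2Cache(model, lazy=True)
+ model.load_autosplit(cache)  # load weights, splitting layers across available GPUs
+
+ tokenizer = ExLlamaV2Tokenizer(config)
+ generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
+
+ settings = ExLlamaV2Sampler.Settings()
+ settings.temperature = 1.0  # matches the defaults in generation_config.json
+ settings.top_p = 0.95
+
+ print(generator.generate_simple("def quicksort(arr):", settings, 256))
+ ```
+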
+ ## Quant Details
+
+ This is the script used for quantization.
+
+ ```bash
+ #!/bin/bash
+
+ # Activate the conda environment
+ source ~/miniconda3/etc/profile.d/conda.sh
+ conda activate exllamav2
+
+ # Set the model name
+ MODEL_NAME="CodeQwen1.5-7B-Chat"
+
+ # Define variables
+ MODEL_DIR="models/$MODEL_NAME"
+ OUTPUT_DIR="exl2_$MODEL_NAME"
+ MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
+
+ # Create the measurement file if needed
+ if [ ! -f "$MEASUREMENT_FILE" ]; then
+     echo "Creating $MEASUREMENT_FILE"
+
+     # Start from a clean working directory
+     if [ -d "$OUTPUT_DIR" ]; then
+         rm -r "$OUTPUT_DIR"
+     fi
+     mkdir "$OUTPUT_DIR"
+
+     python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -om "$MEASUREMENT_FILE"
+ fi
+
+ # Choose one of the below: either create a single quant for testing or a batch of them.
+ # BIT_PRECISIONS=(2.25)
+ BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
+
+ for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
+ do
+     CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
+
+     # If it doesn't already exist, make the quant
+     if [ ! -d "$CONVERTED_FOLDER" ]; then
+         echo "Creating $CONVERTED_FOLDER"
+
+         # Start from a clean working directory
+         if [ -d "$OUTPUT_DIR" ]; then
+             rm -r "$OUTPUT_DIR"
+         fi
+         mkdir "$OUTPUT_DIR"
+         mkdir "$CONVERTED_FOLDER"
+
+         # Run the conversion
+         python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b "$BIT_PRECISION" -cf "$CONVERTED_FOLDER"
+     fi
+ done
+ ```
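+
+ Each pass writes its quant to `models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw`. The measurement pass runs only once per model, and precisions whose output folder already exists are skipped, so the script can be re-run safely after an interruption.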
config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 2,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 13440,
+   "max_position_embeddings": 65536,
+   "max_window_layers": 28,
+   "model_type": "qwen2",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 4,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000,
+   "rotary_emb_base": 1000000,
+   "seq_length": 65536,
+   "sliding_window": 65536,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.39.3",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 92416,
+   "quantization_config": {
+     "quant_method": "exl2",
+     "version": "0.0.18",
+     "bits": 6.0,
+     "head_bits": 6,
+     "calibration": {
+       "rows": 100,
+       "length": 2048,
+       "dataset": "(default)"
+     }
+   }
+ }
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "bos_token_id": 2,
+   "do_sample": true,
+   "eos_token_id": [
+     4,
+     2
+   ],
+   "pad_token_id": 92298,
+   "repetition_penalty": 1.0,
+   "temperature": 1.0,
+   "top_p": 0.95,
+   "transformers_version": "4.39.3"
+ }
model.safetensors.index.json ADDED
@@ -0,0 +1,394 @@
+ {
+   "metadata": {
+     "total_size": 14500569088
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00004-of-00004.safetensors",
+     "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.16.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.17.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.17.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.17.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.25.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.26.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.27.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.27.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.27.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.27.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+     "model.layers.28.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.28.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.28.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.28.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.28.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.29.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.29.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.29.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.30.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.30.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.30.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.30.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.30.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.input_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.31.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.31.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.31.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
+     "model.layers.31.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+     "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+     "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+     "model.norm.weight": "model-00004-of-00004.safetensors"
+   }
+ }
output.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a153bfbfa8bf11977a8ebdacc38e49a684253729d743ad2d184b61bd01c8528b
+ size 5927065344
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:656b66a920a54bc45e8e06dc587691ab3c0b2930b9ae56d5fa31e72db2f3bff3
+ size 1423961
tokenizer_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "add_bos_token": false,
+   "add_eos_token": false,
+   "add_prefix_space": false,
+   "additional_special_tokens": ["<|im_start|>", "<|im_end|>", "<fim_prefix>", "<fim_middle>", "<fim_suffix>", "<fim_pad>"],
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "legacy": false,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<fim_pad>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "PreTrainedTokenizerFast",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false,
+   "add_prefix_space": true,
+   "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
+ }