Commit ad4d20f by pom
Parent(s): ef04640
release xverse-65b-chat
Browse files
- MODEL_LICENSE.pdf +0 -0
- README.md +259 -0
- config.json +27 -0
- configuration_xverse.py +123 -0
- generation_config.json +12 -0
- modeling_xverse.py +881 -0
- pytorch_model-00001-of-00017.bin +3 -0
- pytorch_model-00002-of-00017.bin +3 -0
- pytorch_model-00003-of-00017.bin +3 -0
- pytorch_model-00004-of-00017.bin +3 -0
- pytorch_model-00005-of-00017.bin +3 -0
- pytorch_model-00006-of-00017.bin +3 -0
- pytorch_model-00007-of-00017.bin +3 -0
- pytorch_model-00008-of-00017.bin +3 -0
- pytorch_model-00009-of-00017.bin +3 -0
- pytorch_model-00010-of-00017.bin +3 -0
- pytorch_model-00011-of-00017.bin +3 -0
- pytorch_model-00012-of-00017.bin +3 -0
- pytorch_model-00013-of-00017.bin +3 -0
- pytorch_model-00014-of-00017.bin +3 -0
- pytorch_model-00015-of-00017.bin +3 -0
- pytorch_model-00016-of-00017.bin +3 -0
- pytorch_model-00017-of-00017.bin +3 -0
- pytorch_model.bin.index.json +810 -0
- quantization.py +124 -0
- special_tokens_map.json +23 -0
- tokenizer.json +0 -0
- tokenizer_config.json +5 -0
MODEL_LICENSE.pdf
ADDED
Binary file (306 kB)
README.md
CHANGED
@@ -1,3 +1,262 @@
---
license: apache-2.0
inference: false
---

# XVERSE-65B-Chat

## 模型介绍

**XVERSE-65B-Chat**为[**XVERSE-65B**](https://huggingface.co/xverse/XVERSE-65B)模型对齐后的版本。

模型对齐中,不同能力类型的数据采样比例如下:

|          | 编码能力 | 数学能力 | 对话生成 | 角色扮演 | 工具调用 | 知识问答 | 文本生成 | 安全 | 逻辑推理 | 语言理解 |
|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:----:|:--------:|:--------:|
| Ratio(%) | 21.2 | 18.6 | 12.4 | 11.3 | 9.8 | 6.8 | 5.4 | 5.1 | 4.8 | 4.6 |

**XVERSE-65B** 是由深圳元象科技自主研发的支持多语言的大语言模型(Large Language Model),参数规模为 650 亿,本次开源的模型为底座模型 **XVERSE-65B**,主要特点如下:

- **模型结构**:XVERSE-65B 使用主流 Decoder-only 的标准 Transformer 网络结构,支持 16K 的上下文长度(Context Length),能满足更长的多轮对话、知识问答与摘要等需求,模型应用场景更广泛。
- **训练数据**:构建了 2.6 万亿 token 的高质量、多样化的数据对模型进行充分训练,包含中、英、俄、西等 40 多种语言,通过精细化设置不同类型数据的采样比例,使得中英两种语言表现优异,也能兼顾其他语言效果。
- **分词**:基于 BPE(Byte-Pair Encoding)算法,使用上百 GB 语料训练了一个词表大小为 100,534 的分词器,能够同时支持多语言,而无需额外扩展词表。
- **训练框架**:训练中采用 FlashAttention2 加速计算,3D 并行基础上采用虚拟流水线(virtual pipeline)技术,降低较长流水线和 16K 上下文窗口产生的过高气泡率,在千卡集群的峰值算力利用率达到业界前列。同时通过集群基础设施运营、资源调度、训练框架和调度平台协同等持续优化,打造出高稳定、低中断、强容错的训练系统,将每周有效训练率提升至 98.6%。

**XVERSE-65B** 的模型大小、架构和学习率如下:

| params | d_model | n_heads | n_layers | d_ff  | learning rate |
|:------:|:-------:|:-------:|:--------:|:-----:|:-------------:|
|  65B   |  8192   |   64    |    80    | 22016 |    1.5e−4     |

## 底座数据介绍

在预训练阶段,**XVERSE-65B** 主要使用了 7 类不同的数据类型。以下表格展示了 XVERSE-65B 与其他一些知名模型在预训练数据集方面的比较:

| 数据类别 | [GPT3](https://arxiv.org/abs/2005.14165) | [Llama](https://arxiv.org/abs/2302.13971) | [BLOOM](https://arxiv.org/abs/2211.05100) | [PaLM](https://arxiv.org/abs/2204.02311) | [Chinchilla](https://arxiv.org/abs/2203.15556) | [Gopher](https://arxiv.org/abs/2112.11446) | [MT-NLG](https://arxiv.org/abs/2201.11990) | XVERSE-65B |
|:-------:|:--------:|:---------:|:---------:|:--------:|:--------------:|:----------:|:----------:|:----------:|
| 网页类 | Y | Y | Y | Y | Y | Y | Y | Y |
| 代码类 |   | Y | Y | Y | Y | Y | Y | Y |
| 百科类 | Y | Y |   | Y | Y | Y | Y | Y |
| 书籍类 | Y | Y |   | Y | Y | Y | Y | Y |
| 论文类 |   | Y |   |   |   |   | Y | Y |
| 问答类 | Y | Y |   | Y |   |   | Y | Y |

> 注:'Y' 表示使用了该类数据。

在预训练阶段,不同类别数据的采样比例如下所示:

|         | 网页类 | 代码类 | 百科类 | 书籍类 | 论文类 | 问答类 | 其他类 |
|:-------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| 比例(%) | 72.91  |  7.09  |  4.81  |  5.62  |  6.55  |  1.15  |  1.87  |

在预训练阶段,**XVERSE-65B** 主要使用了 41 种自然语言,以下表格展示了不同语种在底座数据中的占比:

| 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) |
|:----:|:-------:|:----:|:-------:|:----:|:-------:|:----:|:-------:|:----:|:-------:|:----:|:-------:|
| en | 54.91 | pl | 0.48 | hu | 0.19 | ar | 0.12 | fa | 0.07 | sl | 0.05 |
| zh | 31.09 | it | 0.36 | ko | 0.18 | ro | 0.11 | hi | 0.07 | et | 0.04 |
| ja | 3.22 | pt | 0.34 | sv | 0.15 | bg | 0.10 | no | 0.07 | lv | 0.03 |
| ru | 3.15 | cs | 0.27 | el | 0.14 | th | 0.10 | ca | 0.06 | sr | 0.03 |
| de | 1.52 | uk | 0.24 | fi | 0.14 | da | 0.09 | iw | 0.06 | ta | 0.03 |
| es | 0.91 | tr | 0.23 | id | 0.13 | mr | 0.08 | lt | 0.05 | kk | 0.02 |
| fr | 0.73 | nl | 0.20 | vi | 0.13 | sk | 0.08 | ms | 0.05 |    |    |

> 注:各种语言简称的对照可参考:[ISO_639-1](https://zh.wikipedia.org/wiki/ISO_639-1)

对于代码类数据,以下表格展示了不同编程语言的占比:

| 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) | 语言 | 比例(%) |
|:----------:|:-------:|:------:|:-------:|:------------:|:-------:|:----------:|:-------:|:-------------:|:-------:|:-------:|:-------:|
| PHP | 17.06 | Go | 3.38 | Shell | 0.74 | PowerShell | 0.23 | Arduino | 0.13 | R | 0.04 |
| JavaScript | 15.65 | Rust | 2.33 | Haskell | 0.46 | Groovy | 0.21 | Assembly | 0.13 | ABAP | 0.01 |
| Java | 15.18 | Ruby | 1.61 | Common Lisp | 0.43 | Pascal | 0.20 | Clojure | 0.12 | COBOL | 0.0022 |
| Python | 14.64 | Swift | 1.40 | Perl | 0.34 | FORTRAN | 0.19 | Cuda | 0.12 | Verilog | 0.0001 |
| TypeScript | 6.55 | Kotlin | 1.40 | CSS | 0.32 | Elixir | 0.17 | VHDL | 0.09 |   |   |
| C | 4.84 | Scala | 1.08 | Julia | 0.32 | Solidity | 0.16 | Emacs Lisp | 0.08 |   |   |
| C++ | 4.68 | Dart | 0.95 | Visual Basic | 0.25 | F# | 0.14 | Objective-C++ | 0.08 |   |   |
| C# | 3.44 | SQL | 0.76 | OCaml | 0.24 | Erlang | 0.14 | Crystal | 0.06 |   |   |

## Model Introduction

**XVERSE-65B-Chat** is the aligned version of the [**XVERSE-65B**](https://huggingface.co/xverse/XVERSE-65B) model.

In the alignment, the sampling ratio of data of different capability types is as follows:

|          | Code | Math | Chat | Role-Play | Agent | QA | Text-Gen | Security | Logic | NLU |
|:--------:|:----:|:----:|:----:|:---------:|:-----:|:--:|:--------:|:--------:|:-----:|:---:|
| Ratio(%) | 21.2 | 18.6 | 12.4 | 11.3 | 9.8 | 6.8 | 5.4 | 5.1 | 4.8 | 4.6 |

**XVERSE-65B** is a multilingual large language model, independently developed by Shenzhen Yuanxiang Technology. The model released this time is the base model **XVERSE-65B**. Its key features are as follows:

- **Model Structure**: XVERSE-65B uses the mainstream decoder-only Transformer network structure and supports a 16K context length, which meets the needs of longer multi-round dialogues, knowledge question answering, and summarization. This makes the model more versatile across application scenarios.
- **Training Data**: The model has been thoroughly trained on a diversified and high-quality dataset of 2.6 trillion tokens, covering more than 40 languages such as Chinese, English, Russian, and Spanish. The sampling ratio of different types of data is finely tuned, which makes the performance in Chinese and English excellent while also taking other languages into account.
- **Tokenization**: Based on the BPE (Byte-Pair Encoding) algorithm, a tokenizer with a vocabulary size of 100,534 has been trained using hundreds of gigabytes of language data. This tokenizer supports multiple languages without the need for additional vocabulary expansion.
- **Training Framework**: The training utilizes FlashAttention2 for accelerated computation, and on top of 3D parallelism, virtual pipeline technology is applied to reduce the excessive bubble rate caused by longer pipelines and 16K context windows. This achieves peak computational efficiency within the industry-leading range on the petaflop-scale cluster. Concurrently, through continuous optimization of cluster infrastructure operations, resource scheduling, training frameworks, and the scheduling platform, a highly stable, low-interruption, and fault-tolerant training system has been developed, raising the effective weekly training rate to 98.6%.

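The context window and vocabulary size quoted above can be read back from the published configuration and tokenizer. A minimal sketch, assuming the `transformers` library and network access to the Hugging Face Hub:

```python
from transformers import AutoConfig, AutoTokenizer

model_path = "xverse/XVERSE-65B-Chat"
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

print(config.max_position_embeddings)  # 16384, i.e. the 16K context length
print(tokenizer.vocab_size)            # multilingual BPE vocabulary (100,534 per the card)
```
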
The model size, architecture, and learning rate of **XVERSE-65B** are shown below:

| params | d_model | n_heads | n_layers | d_ff  | learning rate |
|:------:|:-------:|:-------:|:--------:|:-----:|:-------------:|
|  65B   |  8192   |   64    |    80    | 22016 |    1.5e−4     |

## Introduction of Pre-training Data

During the pre-training phase, **XVERSE-65B** primarily utilized 7 different types of data. The following table shows a comparison of the pre-training datasets of XVERSE-65B with some other well-known models:

| Data Type | [GPT3](https://arxiv.org/abs/2005.14165) | [Llama](https://arxiv.org/abs/2302.13971) | [BLOOM](https://arxiv.org/abs/2211.05100) | [PaLM](https://arxiv.org/abs/2204.02311) | [Chinchilla](https://arxiv.org/abs/2203.15556) | [Gopher](https://arxiv.org/abs/2112.11446) | [MT-NLG](https://arxiv.org/abs/2201.11990) | XVERSE-65B |
|:---------------:|:--------:|:---------:|:---------:|:--------:|:--------------:|:----------:|:----------:|:----------:|
| Web Pages | Y | Y | Y | Y | Y | Y | Y | Y |
| Code |   | Y | Y | Y | Y | Y | Y | Y |
| Encyclopedia | Y | Y |   | Y | Y | Y | Y | Y |
| Books | Y | Y |   | Y | Y | Y | Y | Y |
| Academic Papers |   | Y |   |   |   |   | Y | Y |
| QA | Y | Y |   | Y |   |   | Y | Y |

> Note: 'Y' indicates that the data type was used.

The sampling ratios of different data types during the pre-training phase are as follows:

|                | Web Pages | Code | Encyclopedia | Books | Academic Papers | QA   | Other |
|:--------------:|:---------:|:----:|:------------:|:-----:|:---------------:|:----:|:-----:|
| Proportion (%) | 72.91     | 7.09 | 4.81         | 5.62  | 6.55            | 1.15 | 1.87  |

During the pre-training phase, **XVERSE-65B** primarily used 41 kinds of natural language, and the following table shows the proportion of different languages in the pre-training data:

| Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) |
|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|
| en | 54.91 | pl | 0.48 | hu | 0.19 | ar | 0.12 | fa | 0.07 | sl | 0.05 |
| zh | 31.09 | it | 0.36 | ko | 0.18 | ro | 0.11 | hi | 0.07 | et | 0.04 |
| ja | 3.22 | pt | 0.34 | sv | 0.15 | bg | 0.10 | no | 0.07 | lv | 0.03 |
| ru | 3.15 | cs | 0.27 | el | 0.14 | th | 0.10 | ca | 0.06 | sr | 0.03 |
| de | 1.52 | uk | 0.24 | fi | 0.14 | da | 0.09 | iw | 0.06 | ta | 0.03 |
| es | 0.91 | tr | 0.23 | id | 0.13 | mr | 0.08 | lt | 0.05 | kk | 0.02 |
| fr | 0.73 | nl | 0.20 | vi | 0.13 | sk | 0.08 | ms | 0.05 |    |    |

> Note: Reference for the language code abbreviations: [ISO_639-1](https://zh.wikipedia.org/wiki/ISO_639-1)

For the Code data, the following table shows the proportion of different programming languages:

| Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) |
|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|
| PHP | 17.06 | Go | 3.38 | Shell | 0.74 | PowerShell | 0.23 | Arduino | 0.13 | R | 0.04 |
| JavaScript | 15.65 | Rust | 2.33 | Haskell | 0.46 | Groovy | 0.21 | Assembly | 0.13 | ABAP | 0.01 |
| Java | 15.18 | Ruby | 1.61 | Common Lisp | 0.43 | Pascal | 0.20 | Clojure | 0.12 | COBOL | 0.0022 |
| Python | 14.64 | Swift | 1.40 | Perl | 0.34 | FORTRAN | 0.19 | Cuda | 0.12 | Verilog | 0.0001 |
| TypeScript | 6.55 | Kotlin | 1.40 | CSS | 0.32 | Elixir | 0.17 | VHDL | 0.09 |   |   |
| C | 4.84 | Scala | 1.08 | Julia | 0.32 | Solidity | 0.16 | Emacs Lisp | 0.08 |   |   |
| C++ | 4.68 | Dart | 0.95 | Visual Basic | 0.25 | F# | 0.14 | Objective-C++ | 0.08 |   |   |
| C# | 3.44 | SQL | 0.76 | OCaml | 0.24 | Erlang | 0.14 | Crystal | 0.06 |   |   |

## 评测结果

为了综合评估模型的性能,我们在一系列标准数据集上进行了全面测试,包括C-Eval、CMMLU、Gaokao-Bench、MMLU、GAOKAO-English、AGIEval、RACE-M、CommonSenseQA、PIQA、GSM8K和HumanEval。这些评估覆盖了模型在多个领域的能力,具体包括中文问答、英文问答、语言理解、常识问答、逻辑推理、数学问题解答以及编程能力。评估结果如下:

|  能力维度  |           数据集           |        | XVERSE-65B | Llama1-65B | Llama2-70B | Falcon-180B | GPT-3.5 | GPT-4 |
| :--------: | :------------------------: | :----: | :--------: | :--------: | :--------: | :---------: | :-----: | :---: |
|  中文问答  |           C-Eval           | 5-shot |    68.6    |    38.8    |    49.9    |    54.2     |  54.4   | 68.7  |
|            |           CMMLU            | 5-shot |    72.6    |    40.6    |    53.6    |    57.2     |  53.9   | 71.0  |
|            |  Gaokao-Bench<sup>1</sup>  | 5-shot |    73.9    |    38.9    |    51.4    |    50.5     |    -    |   -   |
|  英文问答  |            MMLU            | 5-shot |    70.8    |    63.4    |    68.9    |    70.5     |  70.0   | 86.4  |
|            | GAOKAO-English<sup>1</sup> | 5-shot |    85.3    |    67.0    |    76.6    |    63.3     |    -    |   -   |
| 中英文问答 |    AGIEval<sup>1</sup>     | 5-shot |    61.8    |    42.4    |    51.4    |    51.3     |    -    |   -   |
|  语言理解  |           RACE-M           | 0-shot |    90.6    |    67.9    |    81.5    |    87.6     |  85.6   | 93.7  |
|  常识问答  |       CommonSenseQA        | 7-shot |    79.8    |    74.0    |    78.5    |    82.4     |  80.2   | 88.3  |
|    推理    |            PIQA            | 0-shot |    80.4    |    82.8    |    82.8    |    85.3     |  81.7   | 89.2  |
|    数学    |           GSM8K            | 4-shot |    60.3    |    50.9    |    56.8    |    62.6     |  57.1   | 92.0  |
|    代码    |         HumanEval          | 0-shot |    26.8    |    23.7    |    29.9    |      -      |  48.1   | 67.0  |

> <sup>1:只针对其中的单项选择题进行测试,即排除了填空题、开放性问题和多项选择题</sup>

对于上述所有比较模型,我们优先汇报其官方公布的结果。在缺少官方结果的情况下,我们采用了 [OpenCompass 榜单](https://opencompass.org.cn/leaderboard-llm)的报告结果。其他结果则来自于我们自行执行的评估流程所获得的数据。
对于 MMLU ,我们采用作者提供的[评测工具](https://github.com/hendrycks/test),C-Eval、AGIEval、GAOKAO-Bench、GAOKAO-English 与 MMLU 的评测方式相同,其余评测数据集使用 [OpenCompass 评估框架](https://github.com/open-compass/OpenCompass/)进行评估。

## Model Evaluation

To comprehensively assess the performance of the model, we conducted extensive testing across a range of standard datasets, including C-Eval, CMMLU, Gaokao-Bench, MMLU, GAOKAO-English, AGIEval, RACE-M, CommonSenseQA, PIQA, GSM8K and HumanEval. These evaluations span multiple capabilities of the model, specifically including Chinese question answering, English question answering, language comprehension, common sense question answering, logical reasoning, mathematical problem-solving, and coding ability. The results of the evaluations are as follows:

|  Capability Dimension  |          Dataset           |        | XVERSE-65B | Llama1-65B | Llama2-70B | Falcon-180B | GPT-3.5 | GPT-4 |
| :--------------------: | :------------------------: | :----: | :--------: | :--------: | :--------: | :---------: | :-----: | :---: |
|       Chinese QA       |           C-Eval           | 5-shot |    68.6    |    38.8    |    49.9    |    54.2     |  54.4   | 68.7  |
|                        |           CMMLU            | 5-shot |    72.6    |    40.6    |    53.6    |    57.2     |  53.9   | 71.0  |
|                        |  Gaokao-Bench<sup>1</sup>  | 5-shot |    73.9    |    38.9    |    51.4    |    50.5     |    -    |   -   |
|       English QA       |            MMLU            | 5-shot |    70.8    |    63.4    |    68.9    |    70.5     |  70.0   | 86.4  |
|                        | GAOKAO-English<sup>1</sup> | 5-shot |    85.3    |    67.0    |    76.6    |    63.3     |    -    |   -   |
|  Chinese & English QA  |    AGIEval<sup>1</sup>     | 5-shot |    61.8    |    42.4    |    51.4    |    51.3     |    -    |   -   |
| Language Understanding |           RACE-M           | 0-shot |    90.6    |    67.9    |    81.5    |    87.6     |  85.6   | 93.7  |
|    Common Sense QA     |       CommonSenseQA        | 7-shot |    79.8    |    74.0    |    78.5    |    82.4     |  80.2   | 88.3  |
|       Reasoning        |            PIQA            | 0-shot |    80.4    |    82.8    |    82.8    |    85.3     |  81.7   | 89.2  |
|          Math          |           GSM8K            | 4-shot |    60.3    |    50.9    |    56.8    |    62.6     |  57.1   | 92.0  |
|         Coding         |         HumanEval          | 0-shot |    26.8    |    23.7    |    29.9    |      -      |  48.1   | 67.0  |

> <sup>1: Tests are conducted only on single-answer multiple-choice questions, thus excluding fill-in-the-blanks, open-ended questions, and multiple-answer multiple-choice questions.</sup>

For all the comparison models mentioned above, we prioritize their officially published results. In the absence of official results, we refer to the reported outcomes from the [OpenCompass Leaderboard](https://opencompass.org.cn/leaderboard-llm). The remaining results come from our own evaluation pipeline.
For MMLU, we adopt the [evaluation tools](https://github.com/hendrycks/test) provided by the authors; C-Eval, AGIEval, GAOKAO-Bench, and GAOKAO-English are evaluated in the same way as MMLU. The remaining evaluation datasets are evaluated with the [OpenCompass evaluation framework](https://github.com/open-compass/OpenCompass/).

## 使用方法

### 硬件需求
下表列出了在 XVERSE-65B 上进行推理和微调所需要的硬件资源:

|            | 类型 | 方法             | 内存   | GPU        |
| ---------- | ---- | ---------------- | ------ | ---------- |
| XVERSE-65B | 训练 | LoRA with ZeRO-3 | 1500GB | 8*A800 80G |
| XVERSE-65B | 推理 | BF16/FP16        | 500GB  | 2*A800 80G |

## Usage

### Hardware requirements
The following table lists the hardware resources required for inference and fine-tuning on XVERSE-65B:

|            | Type      | Method           | Memory | GPU        |
| ---------- | --------- | ---------------- | ------ | ---------- |
| XVERSE-65B | Training  | LoRA with ZeRO-3 | 1500GB | 8*A800 80G |
| XVERSE-65B | Inference | BF16/FP16        | 500GB  | 2*A800 80G |

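The GPU column for inference can be sanity-checked with a rough memory estimate. The sketch below is a back-of-the-envelope calculation that only counts the weights and ignores the KV cache and activation overhead:

```python
n_params = 65e9            # 65B parameters
bytes_per_param = 2        # BF16/FP16
weights_gib = n_params * bytes_per_param / 1024**3
print(f"{weights_gib:.0f} GiB of weights")  # ~121 GiB, already more than a single 80 GB GPU
```
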
### Loading with Transformers

可通过以下代码加载 XVERSE-65B 模型进行推理:

The XVERSE-65B model can be loaded for inference using the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation.utils import GenerationConfig

model_path = "xverse/XVERSE-65B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model.generation_config = GenerationConfig.from_pretrained(model_path)
model = model.eval()

history = [{"role": "user", "content": "1955年谁是美国总统?他是什么党派?"}]
response = model.chat(tokenizer, history)
print(response)

history.append({"role": "assistant", "content": response})
history.append({"role": "user", "content": "他任职了多少年"})
response = model.chat(tokenizer, history)
print(response)
```

更多有关相关细节,包括文本生成demo和环境依赖,请参考我们的[Github](https://github.com/xverse-ai/XVERSE-65B)。

For more details, including the demo of text generation and environmental dependencies, please refer to our [Github](https://github.com/xverse-ai/XVERSE-65B).

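If two 80 GB GPUs are not available for a full-precision load, a quantized load is one option. The following is a hedged sketch using the stock `transformers`/`bitsandbytes` path; whether 8-bit loading behaves well with this repository's remote code is an assumption, and it is a separate mechanism from the bundled quantization.py helper, whose usage is not documented here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "xverse/XVERSE-65B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# load_in_8bit requires bitsandbytes and roughly halves GPU memory relative
# to BF16, at some cost in quality and speed.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    load_in_8bit=True,
    device_map="auto",
).eval()
```
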
## 局限性与免责申明

XVERSE-65B 与其他所有 LLM 一样,在某些情况下可能会产生不准确、有偏见或其他令人反感的内容。因此,请谨慎使用模型生成的内容,请勿将生成的有害内容进行传播,在部署任何 XVERSE-65B 的应用之前,开发人员应根据其具体应用对模型进行安全测试和调优。

我们强烈警告不要将 XVERSE-65B 模型用于制造或传播有害信息,或进行任何可能损害公众、国家、社会安全或违反法规的活动。如果使用 XVERSE-65B 模型产生任何问题,无论是数据安全问题、公共舆论风险,还是模型被误解、滥用、传播或不合规使用所引发的任何风险和问题,我们将不承担任何责任。

## Limitations and Disclaimer

Like all other Large Language Models (LLMs), XVERSE-65B may produce inaccurate, biased, or otherwise offensive content under certain circumstances. Therefore, please use the content generated by the model with caution and refrain from disseminating harmful content. Before deploying any application of XVERSE-65B, developers should conduct safety tests and optimization of the model according to its specific application.

We strongly warn against the use of the XVERSE-65B model for producing or spreading harmful information, or conducting any activities that might harm public, national, or social security or violate regulations. We assume no responsibility for any problems arising from the use of the XVERSE-65B model, whether it be data security issues, public opinion risks, or any risks and issues caused by misunderstanding, misuse, dissemination, or non-compliant use of the model.

## 模型开源协议

使用本仓库的源码需要遵循 [Apache-2.0](https://github.com/xverse-ai/XVERSE-65B/blob/main/LICENSE) 开源协议,使用 XVERSE-65B 的模型权重则需要遵循[模型许可协议](https://github.com/xverse-ai/XVERSE-65B/blob/main/MODEL_LICENSE.pdf)。

XVERSE-65B 模型权重对学术研究**完全开放**,并且支持**免费商用**。如需申请商业许可证,请填写【[申请表](https://chat.xverse.cn/home/business.html)】,如有其他问题或合作,请联系 <opensource@xverse.cn>。

## Open Source License

The use of the source code in this repository must follow the [Apache-2.0](https://github.com/xverse-ai/XVERSE-65B/blob/main/LICENSE) open-source license, while the use of the model weights of XVERSE-65B needs to adhere to the [Model License Agreement](https://github.com/xverse-ai/XVERSE-65B/blob/main/MODEL_LICENSE.pdf).

The XVERSE-65B model weights are **fully open** to academic research and support **free commercial use**. To apply for a commercial license, please fill in the [application form](https://chat.xverse.cn/home/business.html). For other questions or collaborations, please contact <opensource@xverse.cn>.
config.json
ADDED
@@ -0,0 +1,27 @@
```json
{
  "architectures": [
    "XverseForCausalLM"
  ],
  "auto_map": {
    "AutoConfig": "configuration_xverse.XverseConfig",
    "AutoModelForCausalLM": "modeling_xverse.XverseForCausalLM"
  },
  "pad_token_id": 1,
  "bos_token_id": 2,
  "eos_token_id": 3,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 22016,
  "max_position_embeddings": 16384,
  "max_tokenizer_truncation": 16384,
  "model_type": "xverse",
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.28.1",
  "use_cache": true,
  "vocab_size": 100534
}
```
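As a rough consistency check, the hyper-parameters above do add up to roughly 65B parameters. This is a sketch assuming a standard untied-embedding, bias-free Llama-style layout (which matches modeling_xverse.py below) and ignoring the small RMSNorm weights:

```python
hidden, layers, ffn, vocab = 8192, 80, 22016, 100534

attn_per_layer = 4 * hidden * hidden   # q/k/v/o projections
mlp_per_layer  = 3 * hidden * ffn      # gate/up/down projections
embeddings     = 2 * vocab * hidden    # input embeddings + LM head (untied)

total = layers * (attn_per_layer + mlp_per_layer) + embeddings
print(f"{total / 1e9:.1f}B")           # ~66B, in line with the quoted 65B
```
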
configuration_xverse.py
ADDED
@@ -0,0 +1,123 @@
````python
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" XVERSE model configuration"""

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)

XVERSE_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class XverseConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`XverseModel`]. It is used to instantiate an Xverse
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a similar configuration to that of the XVERSE-13B.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 100278):
            Vocabulary size of the XVERSE model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`XverseModel`]
        hidden_size (`int`, *optional*, defaults to 5120):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 13824):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 40):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 40):
            Number of attention heads for each attention layer in the Transformer encoder.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 8192):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-6):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        tie_word_embeddings(`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings

    Example:

    ```python
    >>> from transformers import XverseModel, XverseConfig

    >>> # Initializing a Xverse XVERSE-13B style configuration
    >>> configuration = XverseConfig()

    >>> # Initializing a model from the XVERSE-13B style configuration
    >>> model = XverseModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""
    model_type = "xverse"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=100278,
        hidden_size=5120,
        intermediate_size=13824,
        num_hidden_layers=40,
        num_attention_heads=40,
        hidden_act="silu",
        max_position_embeddings=8192,
        max_tokenizer_truncation=8192,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=None,
        bos_token_id=1,
        eos_token_id=2,
        tie_word_embeddings=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads

        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.max_tokenizer_truncation = max_tokenizer_truncation

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
````
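The class defaults above describe the smaller XVERSE-13B configuration; the 65B checkpoint overrides them through the config.json shown earlier. A small sketch of that relationship, assuming the file is imported from a local checkout of this repository that contains config.json:

```python
from configuration_xverse import XverseConfig

default_cfg = XverseConfig()                  # 13B-style defaults baked into the class
chat_cfg = XverseConfig.from_pretrained(".")  # reads ./config.json from this repository

print(default_cfg.hidden_size, default_cfg.num_hidden_layers)  # 5120, 40
print(chat_cfg.hidden_size, chat_cfg.num_hidden_layers)        # 8192, 80
```
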
generation_config.json
ADDED
@@ -0,0 +1,12 @@
```json
{
  "pad_token_id": 1,
  "bos_token_id": 2,
  "eos_token_id": 3,
  "max_new_tokens": 2048,
  "temperature": 0.5,
  "top_k": 30,
  "top_p": 0.85,
  "repetition_penalty": 1.1,
  "do_sample": true,
  "transformers_version": "4.29.1"
}
```
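These are the sampling defaults that `GenerationConfig.from_pretrained(model_path)` picks up in the README example, and they can be overridden per run. A hedged sketch, assuming `model` is the object loaded there and that its `chat()` helper honours `model.generation_config`:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("xverse/XVERSE-65B-Chat")
gen_cfg.temperature = 0.8      # sample more diversely than the shipped 0.5
gen_cfg.max_new_tokens = 512   # cap responses well below the default 2048
model.generation_config = gen_cfg
```
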
modeling_xverse.py
ADDED
@@ -0,0 +1,881 @@
```python
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PyTorch XVERSE model."""
import math
from typing import List, Optional, Tuple, Union

import torch
import torch.nn.functional as F
import torch.utils.checkpoint
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss

from transformers.activations import ACT2FN
from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
from transformers.modeling_utils import PreTrainedModel
from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
from transformers.generation.utils import GenerationConfig
from .configuration_xverse import XverseConfig


logger = logging.get_logger(__name__)

_CONFIG_FOR_DOC = "XverseConfig"


# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
    input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
):
    """
    Make causal mask used for bi-directional self-attention.
    """
    bsz, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)

    if past_key_values_length > 0:
        mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
    return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)


# Copied from transformers.models.bart.modeling_bart._expand_mask
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
    """
    Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
    """
    bsz, src_len = mask.size()
    tgt_len = tgt_len if tgt_len is not None else src_len

    expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)

    inverted_mask = 1.0 - expanded_mask

    return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)


class XverseRMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        """
        XverseRMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        input_dtype = hidden_states.dtype
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)

        return (self.weight * hidden_states).to(input_dtype)


class XverseRotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
        self.register_buffer("inv_freq", inv_freq)

        # Build here to make `torch.jit.trace` work.
        self.max_seq_len_cached = max_position_embeddings
        t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        # Different from paper, but it uses a different permutation in order to obtain the same calculation
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

    def forward(self, x, seq_len=None):
        # x: [bs, num_attention_heads, seq_len, head_size]
        # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
        if seq_len > self.max_seq_len_cached:
            self.max_seq_len_cached = seq_len
            t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype)
            freqs = torch.einsum("i,j->ij", t, self.inv_freq)
            # Different from paper, but it uses a different permutation in order to obtain the same calculation
            emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
            self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
            self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )


def rotate_half(x):
    """Rotates half the hidden dims of the input."""
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
    cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
    sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
    cos = cos[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    sin = sin[position_ids].unsqueeze(1)  # [bs, 1, seq_len, dim]
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed


class XverseMLP(nn.Module):
    def __init__(
        self,
        hidden_size: int,
        intermediate_size: int,
        hidden_act: str,
    ):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.act_fn = ACT2FN[hidden_act]

    def forward(self, x):
        return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))


class XverseAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: XverseConfig):
        super().__init__()
        self.config = config
        self.hidden_size = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.hidden_size // self.num_heads
        self.max_position_embeddings = config.max_position_embeddings

        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )
        self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
        self.rotary_emb = XverseRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)

    def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
        return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: bool = False,
        use_cache: bool = False,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
        bsz, q_len, _ = hidden_states.size()

        query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)

        kv_seq_len = key_states.shape[-2]
        if past_key_value is not None:
            kv_seq_len += past_key_value[0].shape[-2]
        cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
        # [bsz, nh, t, hd]

        if past_key_value is not None:
            # reuse k, v, self_attention
            key_states = torch.cat([past_key_value[0], key_states], dim=2)
            value_states = torch.cat([past_key_value[1], value_states], dim=2)

        past_key_value = (key_states, value_states) if use_cache else None

        attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)

        if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
                f" {attn_weights.size()}"
            )

        if attention_mask is not None:
            if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
                )
            attn_weights = attn_weights + attention_mask
            attn_weights = torch.max(
                attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)
            )

        # upcast attention to fp32
        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_output = torch.matmul(attn_weights, value_states)

        if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
                f" {attn_output.size()}"
            )

        attn_output = attn_output.transpose(1, 2)
        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)

        attn_output = self.o_proj(attn_output)

        if not output_attentions:
            attn_weights = None

        return attn_output, attn_weights, past_key_value


class XverseDecoderLayer(nn.Module):
    def __init__(self, config: XverseConfig):
        super().__init__()
        self.hidden_size = config.hidden_size
        self.self_attn = XverseAttention(config=config)
        self.mlp = XverseMLP(
            hidden_size=self.hidden_size,
            intermediate_size=config.intermediate_size,
            hidden_act=config.hidden_act,
        )
        self.input_layernorm = XverseRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = XverseRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_value: Optional[Tuple[torch.Tensor]] = None,
        output_attentions: Optional[bool] = False,
        use_cache: Optional[bool] = False,
    ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
        """
        Args:
            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
            attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
                `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
            output_attentions (`bool`, *optional*):
                Whether or not to return the attentions tensors of all attention layers. See `attentions` under
                returned tensors for more detail.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
                (see `past_key_values`).
            past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
        """

        residual = hidden_states

        hidden_states = self.input_layernorm(hidden_states)

        # Self Attention
        hidden_states, self_attn_weights, present_key_value = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )
        hidden_states = residual + hidden_states

        # Fully Connected
        residual = hidden_states
        hidden_states = self.post_attention_layernorm(hidden_states)
        hidden_states = self.mlp(hidden_states)
        hidden_states = residual + hidden_states

        outputs = (hidden_states,)

        if output_attentions:
            outputs += (self_attn_weights,)

        if use_cache:
            outputs += (present_key_value,)

        return outputs


XVERSE_START_DOCSTRING = r"""
    This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
    library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
    etc.)

    This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
    Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
    and behavior.

    Parameters:
        config ([`XverseConfig`]):
            Model configuration class with all the parameters of the model. Initializing with a config file does not
            load the weights associated with the model, only the configuration. Check out the
            [`~PreTrainedModel.from_pretrained`] method to load the model weights.
"""


@add_start_docstrings(
    "The bare Xverse Model outputting raw hidden-states without any specific head on top.",
    XVERSE_START_DOCSTRING,
)
class XversePreTrainedModel(PreTrainedModel):
    config_class = XverseConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _no_split_modules = ["XverseDecoderLayer"]
    _skip_keys_device_placement = "past_key_values"
    _keys_to_ignore_on_load_unexpected = [r"decoder\.version"]

    def _init_weights(self, module):
        std = self.config.initializer_range
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=std)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()

    def _set_gradient_checkpointing(self, module, value=False):
        if isinstance(module, XverseModel):
            module.gradient_checkpointing = value


XVERSE_INPUTS_DOCSTRING = r"""
    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
            it.

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)
        attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.

            [What are attention masks?](../glossary#attention-mask)

            Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
            [`PreTrainedTokenizer.__call__`] for details.

            If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
            `past_key_values`).

            If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
            and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
            information on the default strategy.

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
            config.n_positions - 1]`.

            [What are position IDs?](../glossary#position-ids)
        past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
            `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
            `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
            blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.

            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
            is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
            model's internal embedding lookup matrix.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
        output_attentions (`bool`, *optional*):
            Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
            tensors for more detail.
        output_hidden_states (`bool`, *optional*):
            Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
            more detail.
        return_dict (`bool`, *optional*):
            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
"""

@add_start_docstrings(
    "The bare Xverse Model outputting raw hidden-states without any specific head on top.",
    XVERSE_START_DOCSTRING,
)
class XverseModel(XversePreTrainedModel):
    """
    Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`XverseDecoderLayer`]

    Args:
        config: XverseConfig
    """

    def __init__(self, config: XverseConfig):
        super().__init__(config)
        self.padding_idx = config.pad_token_id
        self.vocab_size = config.vocab_size

        self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
        self.layers = nn.ModuleList([XverseDecoderLayer(config) for _ in range(config.num_hidden_layers)])
        self.norm = XverseRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        self.gradient_checkpointing = False
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embed_tokens

    def set_input_embeddings(self, value):
        self.embed_tokens = value

    # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
    def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
        # create causal mask
        # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
        combined_attention_mask = None
        if input_shape[-1] > 1:
            combined_attention_mask = _make_causal_mask(
                input_shape,
                inputs_embeds.dtype,
                device=inputs_embeds.device,
                past_key_values_length=past_key_values_length,
            )

        if attention_mask is not None:
            # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
            expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
                inputs_embeds.device
            )
            combined_attention_mask = (
                expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
            )

        return combined_attention_mask

    @add_start_docstrings_to_model_forward(XVERSE_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutputWithPast]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        use_cache = use_cache if use_cache is not None else self.config.use_cache

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        # retrieve input_ids and inputs_embeds
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
        elif input_ids is not None:
            batch_size, seq_length = input_ids.shape
        elif inputs_embeds is not None:
            batch_size, seq_length, _ = inputs_embeds.shape
        else:
            raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")

        seq_length_with_past = seq_length
        past_key_values_length = 0

        if past_key_values is not None:
            past_key_values_length = past_key_values[0][0].shape[2]
            seq_length_with_past = seq_length_with_past + past_key_values_length

        if position_ids is None:
            device = input_ids.device if input_ids is not None else inputs_embeds.device
```
|
524 |
+
position_ids = torch.arange(
|
525 |
+
past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
|
526 |
+
)
|
527 |
+
position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
|
528 |
+
else:
|
529 |
+
position_ids = position_ids.view(-1, seq_length).long()
|
530 |
+
|
531 |
+
if inputs_embeds is None:
|
532 |
+
inputs_embeds = self.embed_tokens(input_ids)
|
533 |
+
# embed positions
|
534 |
+
if attention_mask is None:
|
535 |
+
attention_mask = torch.ones(
|
536 |
+
(batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
|
537 |
+
)
|
538 |
+
attention_mask = self._prepare_decoder_attention_mask(
|
539 |
+
attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
|
540 |
+
)
|
541 |
+
|
542 |
+
hidden_states = inputs_embeds
|
543 |
+
|
544 |
+
if self.gradient_checkpointing and self.training:
|
545 |
+
if use_cache:
|
546 |
+
logger.warning_once(
|
547 |
+
"`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
|
548 |
+
)
|
549 |
+
use_cache = False
|
550 |
+
|
551 |
+
# decoder layers
|
552 |
+
all_hidden_states = () if output_hidden_states else None
|
553 |
+
all_self_attns = () if output_attentions else None
|
554 |
+
next_decoder_cache = () if use_cache else None
|
555 |
+
|
556 |
+
for idx, decoder_layer in enumerate(self.layers):
|
557 |
+
if output_hidden_states:
|
558 |
+
all_hidden_states += (hidden_states,)
|
559 |
+
|
560 |
+
past_key_value = past_key_values[idx] if past_key_values is not None else None
|
561 |
+
|
562 |
+
if self.gradient_checkpointing and self.training:
|
563 |
+
|
564 |
+
def create_custom_forward(module):
|
565 |
+
def custom_forward(*inputs):
|
566 |
+
# None for past_key_value
|
567 |
+
return module(*inputs, output_attentions, None)
|
568 |
+
|
569 |
+
return custom_forward
|
570 |
+
|
571 |
+
layer_outputs = torch.utils.checkpoint.checkpoint(
|
572 |
+
create_custom_forward(decoder_layer),
|
573 |
+
hidden_states,
|
574 |
+
attention_mask,
|
575 |
+
position_ids,
|
576 |
+
None,
|
577 |
+
)
|
578 |
+
else:
|
579 |
+
layer_outputs = decoder_layer(
|
580 |
+
hidden_states,
|
581 |
+
attention_mask=attention_mask,
|
582 |
+
position_ids=position_ids,
|
583 |
+
past_key_value=past_key_value,
|
584 |
+
output_attentions=output_attentions,
|
585 |
+
use_cache=use_cache,
|
586 |
+
)
|
587 |
+
|
588 |
+
hidden_states = layer_outputs[0]
|
589 |
+
|
590 |
+
if use_cache:
|
591 |
+
next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
|
592 |
+
|
593 |
+
if output_attentions:
|
594 |
+
all_self_attns += (layer_outputs[1],)
|
595 |
+
|
596 |
+
hidden_states = self.norm(hidden_states)
|
597 |
+
|
598 |
+
# add hidden states from the last decoder layer
|
599 |
+
if output_hidden_states:
|
600 |
+
all_hidden_states += (hidden_states,)
|
601 |
+
|
602 |
+
next_cache = next_decoder_cache if use_cache else None
|
603 |
+
if not return_dict:
|
604 |
+
return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
|
605 |
+
return BaseModelOutputWithPast(
|
606 |
+
last_hidden_state=hidden_states,
|
607 |
+
past_key_values=next_cache,
|
608 |
+
hidden_states=all_hidden_states,
|
609 |
+
attentions=all_self_attns,
|
610 |
+
)
|
611 |
+
|
612 |
+
|
613 |
+
class XverseForCausalLM(XversePreTrainedModel):
|
614 |
+
def __init__(self, config):
|
615 |
+
super().__init__(config)
|
616 |
+
self.model = XverseModel(config)
|
617 |
+
|
618 |
+
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
|
619 |
+
|
620 |
+
# Initialize weights and apply final processing
|
621 |
+
self.post_init()
|
622 |
+
|
623 |
+
def get_input_embeddings(self):
|
624 |
+
return self.model.embed_tokens
|
625 |
+
|
626 |
+
def set_input_embeddings(self, value):
|
627 |
+
self.model.embed_tokens = value
|
628 |
+
|
629 |
+
def get_output_embeddings(self):
|
630 |
+
return self.lm_head
|
631 |
+
|
632 |
+
def set_output_embeddings(self, new_embeddings):
|
633 |
+
self.lm_head = new_embeddings
|
634 |
+
|
635 |
+
def set_decoder(self, decoder):
|
636 |
+
self.model = decoder
|
637 |
+
|
638 |
+
def get_decoder(self):
|
639 |
+
return self.model
|
640 |
+
|
641 |
+
@add_start_docstrings_to_model_forward(XVERSE_INPUTS_DOCSTRING)
|
642 |
+
@replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
|
643 |
+
def forward(
|
644 |
+
self,
|
645 |
+
input_ids: torch.LongTensor = None,
|
646 |
+
attention_mask: Optional[torch.Tensor] = None,
|
647 |
+
position_ids: Optional[torch.LongTensor] = None,
|
648 |
+
past_key_values: Optional[List[torch.FloatTensor]] = None,
|
649 |
+
inputs_embeds: Optional[torch.FloatTensor] = None,
|
650 |
+
labels: Optional[torch.LongTensor] = None,
|
651 |
+
use_cache: Optional[bool] = None,
|
652 |
+
output_attentions: Optional[bool] = None,
|
653 |
+
output_hidden_states: Optional[bool] = None,
|
654 |
+
return_dict: Optional[bool] = None,
|
655 |
+
) -> Union[Tuple, CausalLMOutputWithPast]:
|
656 |
+
r"""
|
657 |
+
Args:
|
658 |
+
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
|
659 |
+
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
|
660 |
+
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
|
661 |
+
(masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
|
662 |
+
|
663 |
+
Returns:
|
664 |
+
|
665 |
+
Example:
|
666 |
+
|
667 |
+
```python
|
668 |
+
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
|
669 |
+
|
670 |
+
>>> model = AutoModelForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS, trust_remote_code=True)
|
671 |
+
>>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
|
672 |
+
|
673 |
+
>>> prompt = "Hey, are you conscious? Can you talk to me?"
|
674 |
+
>>> inputs = tokenizer(prompt, return_tensors="pt")
|
675 |
+
|
676 |
+
>>> # Generate
|
677 |
+
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
|
678 |
+
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
|
679 |
+
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
|
680 |
+
```"""
|
681 |
+
|
682 |
+
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
|
683 |
+
output_hidden_states = (
|
684 |
+
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
|
685 |
+
)
|
686 |
+
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
|
687 |
+
|
688 |
+
# decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
|
689 |
+
outputs = self.model(
|
690 |
+
input_ids=input_ids,
|
691 |
+
attention_mask=attention_mask,
|
692 |
+
position_ids=position_ids,
|
693 |
+
past_key_values=past_key_values,
|
694 |
+
inputs_embeds=inputs_embeds,
|
695 |
+
use_cache=use_cache,
|
696 |
+
output_attentions=output_attentions,
|
697 |
+
output_hidden_states=output_hidden_states,
|
698 |
+
return_dict=return_dict,
|
699 |
+
)
|
700 |
+
|
701 |
+
hidden_states = outputs[0]
|
702 |
+
logits = self.lm_head(hidden_states)
|
703 |
+
|
704 |
+
loss = None
|
705 |
+
if labels is not None:
|
706 |
+
# Shift so that tokens < n predict n
|
707 |
+
shift_logits = logits[..., :-1, :].contiguous()
|
708 |
+
shift_labels = labels[..., 1:].contiguous()
|
709 |
+
# Flatten the tokens
|
710 |
+
loss_fct = CrossEntropyLoss()
|
711 |
+
shift_logits = shift_logits.view(-1, self.config.vocab_size)
|
712 |
+
shift_labels = shift_labels.view(-1)
|
713 |
+
# Enable model parallelism
|
714 |
+
shift_labels = shift_labels.to(shift_logits.device)
|
715 |
+
loss = loss_fct(shift_logits, shift_labels)
|
716 |
+
|
717 |
+
if not return_dict:
|
718 |
+
output = (logits,) + outputs[1:]
|
719 |
+
return (loss,) + output if loss is not None else output
|
720 |
+
|
721 |
+
return CausalLMOutputWithPast(
|
722 |
+
loss=loss,
|
723 |
+
logits=logits,
|
724 |
+
past_key_values=outputs.past_key_values,
|
725 |
+
hidden_states=outputs.hidden_states,
|
726 |
+
attentions=outputs.attentions,
|
727 |
+
)
|
728 |
+
|
729 |
+
def _build_chat_input(self, tokenizer, messages: List[dict], max_new_tokens: int=2048):
|
730 |
+
max_new_tokens = max_new_tokens or self.generation_config.max_new_tokens
|
731 |
+
max_input_tokens = self.config.max_position_embeddings - max_new_tokens
|
732 |
+
max_input_tokens = max(self.config.max_position_embeddings // 2, max_input_tokens)
|
733 |
+
max_input_tokens = min(self.config.max_tokenizer_truncation, max_input_tokens)
|
734 |
+
|
735 |
+
total_input, round_input = [], []
|
736 |
+
user_prompt_tokens = tokenizer.encode("Human: ", return_token_type_ids=False)
|
737 |
+
exec_prompt_tokens = tokenizer.encode("Exec: ", return_token_type_ids=False)
|
738 |
+
assist_prompt_tokens = tokenizer.encode("Assistant: ", return_token_type_ids=False)
|
739 |
+
assist_prompt_len = len(assist_prompt_tokens)
|
740 |
+
|
741 |
+
for i, message in enumerate(messages[::-1]):
|
742 |
+
if message['role'] == 'user' or message['role'] == 'exec':
|
743 |
+
user_content = f"{message['content']}\n\n"
|
744 |
+
content_tokens = user_prompt_tokens + tokenizer.encode(user_content, return_token_type_ids=False) if message['role'] == 'user' else \
|
745 |
+
exec_prompt_tokens + tokenizer.encode(user_content, return_token_type_ids=False)
|
746 |
+
if i == 0:
|
747 |
+
content_tokens = content_tokens[:max_input_tokens-assist_prompt_len]
|
748 |
+
content_tokens += assist_prompt_tokens
|
749 |
+
round_input = content_tokens + round_input
|
750 |
+
|
751 |
+
if i != 0:
|
752 |
+
if len(total_input) + len(round_input) > max_input_tokens:
|
753 |
+
break
|
754 |
+
else:
|
755 |
+
total_input = round_input + total_input
|
756 |
+
else:
|
757 |
+
total_input = round_input + total_input
|
758 |
+
if len(total_input) >= max_input_tokens:
|
759 |
+
break
|
760 |
+
round_input = []
|
761 |
+
elif message['role'] == 'assistant':
|
762 |
+
assist_content = f"{message['content']}"
|
763 |
+
content_tokens = assist_prompt_tokens + tokenizer.encode(assist_content, return_token_type_ids=False)
|
764 |
+
round_input = content_tokens + [self.generation_config.eos_token_id] + round_input
|
765 |
+
elif message['role'] == 'system':
|
766 |
+
assert i == len(messages) - 1
|
767 |
+
user_content = f"{message['content']}\n"
|
768 |
+
content_tokens = tokenizer.encode(user_content, return_token_type_ids=False)
|
769 |
+
round_input = user_prompt_tokens + content_tokens + round_input
|
770 |
+
if len(total_input) + len(round_input) > max_input_tokens:
|
771 |
+
break
|
772 |
+
else:
|
773 |
+
total_input = round_input + total_input
|
774 |
+
else:
|
775 |
+
raise ValueError(f"message role not supported yet: {message['role']}")
|
776 |
+
total_input = torch.LongTensor([total_input]).to(self.device)
|
777 |
+
return total_input
|
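For orientation, a sketch (not part of the commit) of the prompt layout `_build_chat_input` assembles from a `messages` list; the literal prefixes come from the encode calls above.

```python
# A single user turn
messages = [{"role": "user", "content": "你好"}]
# is tokenized into ids corresponding roughly to the text:
#   "Human: 你好\n\nAssistant: "
# An earlier assistant turn contributes "Assistant: <reply>" followed by the EOS id,
# and a leading system message is prepended as "Human: <system content>\n".
```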
778 |
+
|
779 |
+
@torch.no_grad()
|
780 |
+
def chat(self, tokenizer, messages: List[dict], stream=False,
|
781 |
+
generation_config: Optional[GenerationConfig]=None):
|
782 |
+
generation_config = generation_config or self.generation_config
|
783 |
+
input_ids = self._build_chat_input(tokenizer, messages, generation_config.max_new_tokens)
|
784 |
+
if stream:
|
785 |
+
from transformers import TextIteratorStreamer
|
786 |
+
from threading import Thread
|
787 |
+
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True)
|
788 |
+
self.__class__.generate = PreTrainedModel.generate
|
789 |
+
|
790 |
+
def stream_generator():
|
791 |
+
generation_kwargs = dict(inputs=input_ids, generation_config=generation_config, streamer=streamer)
|
792 |
+
thread = Thread(target=self.generate, kwargs=generation_kwargs)
|
793 |
+
thread.start()
|
794 |
+
for next_text in streamer:
|
795 |
+
yield next_text.replace(tokenizer.eos_token, "")
|
796 |
+
|
797 |
+
return stream_generator()
|
798 |
+
else:
|
799 |
+
self.__class__.generate = PreTrainedModel.generate # disable stream
|
800 |
+
outputs = self.generate(input_ids, generation_config=generation_config)
|
801 |
+
response = tokenizer.decode(outputs[0][len(input_ids[0]):], skip_special_tokens=True)
|
802 |
+
return response
|
803 |
+
|
804 |
+
def prepare_inputs_for_generation(
|
805 |
+
self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
|
806 |
+
):
|
807 |
+
if past_key_values:
|
808 |
+
input_ids = input_ids[:, -1:]
|
809 |
+
|
810 |
+
position_ids = kwargs.get("position_ids", None)
|
811 |
+
if attention_mask is not None and position_ids is None:
|
812 |
+
# create position_ids on the fly for batch generation
|
813 |
+
position_ids = attention_mask.long().cumsum(-1) - 1
|
814 |
+
position_ids.masked_fill_(attention_mask == 0, 1)
|
815 |
+
if past_key_values:
|
816 |
+
position_ids = position_ids[:, -1].unsqueeze(-1)
|
817 |
+
|
818 |
+
# if `inputs_embeds` are passed, we only want to use them in the 1st generation step
|
819 |
+
if inputs_embeds is not None and past_key_values is None:
|
820 |
+
model_inputs = {"inputs_embeds": inputs_embeds}
|
821 |
+
else:
|
822 |
+
model_inputs = {"input_ids": input_ids}
|
823 |
+
|
824 |
+
model_inputs.update(
|
825 |
+
{
|
826 |
+
"position_ids": position_ids,
|
827 |
+
"past_key_values": past_key_values,
|
828 |
+
"use_cache": kwargs.get("use_cache"),
|
829 |
+
"attention_mask": attention_mask,
|
830 |
+
}
|
831 |
+
)
|
832 |
+
return model_inputs
|
833 |
+
|
834 |
+
@staticmethod
|
835 |
+
def _reorder_cache(past_key_values, beam_idx):
|
836 |
+
reordered_past = ()
|
837 |
+
for layer_past in past_key_values:
|
838 |
+
reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
|
839 |
+
return reordered_past
|
840 |
+
|
841 |
+
def quantize(self, bit_length: int):
|
842 |
+
from .quantization import QuantizationLinear
|
843 |
+
|
844 |
+
for layer in self.model.layers:
|
845 |
+
layer.self_attn.q_proj = QuantizationLinear(
|
846 |
+
bit_length=bit_length,
|
847 |
+
weight=layer.self_attn.q_proj.weight.to(torch.cuda.current_device()),
|
848 |
+
device=layer.self_attn.q_proj.weight.device,
|
849 |
+
)
|
850 |
+
layer.self_attn.k_proj = QuantizationLinear(
|
851 |
+
bit_length=bit_length,
|
852 |
+
weight=layer.self_attn.k_proj.weight.to(torch.cuda.current_device()),
|
853 |
+
device=layer.self_attn.k_proj.weight.device
|
854 |
+
)
|
855 |
+
layer.self_attn.v_proj = QuantizationLinear(
|
856 |
+
bit_length=bit_length,
|
857 |
+
weight=layer.self_attn.v_proj.weight.to(torch.cuda.current_device()),
|
858 |
+
device=layer.self_attn.v_proj.weight.device
|
859 |
+
)
|
860 |
+
layer.self_attn.o_proj = QuantizationLinear(
|
861 |
+
bit_length=bit_length,
|
862 |
+
weight=layer.self_attn.o_proj.weight.to(torch.cuda.current_device()),
|
863 |
+
device=layer.self_attn.o_proj.weight.device
|
864 |
+
)
|
865 |
+
layer.mlp.gate_proj = QuantizationLinear(
|
866 |
+
bit_length=bit_length,
|
867 |
+
weight=layer.mlp.gate_proj.weight.to(torch.cuda.current_device()),
|
868 |
+
device=layer.mlp.gate_proj.weight.device
|
869 |
+
)
|
870 |
+
layer.mlp.down_proj = QuantizationLinear(
|
871 |
+
bit_length=bit_length,
|
872 |
+
weight=layer.mlp.down_proj.weight.to(torch.cuda.current_device()),
|
873 |
+
device=layer.mlp.down_proj.weight.device
|
874 |
+
)
|
875 |
+
layer.mlp.up_proj = QuantizationLinear(
|
876 |
+
bit_length=bit_length,
|
877 |
+
weight=layer.mlp.up_proj.weight.to(torch.cuda.current_device()),
|
878 |
+
device=layer.mlp.up_proj.weight.device
|
879 |
+
)
|
880 |
+
|
881 |
+
return self
|
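Taken together, a minimal usage sketch for the `chat()` and `quantize()` helpers defined in this file; the repository id, dtype and device placement below are assumptions for illustration, not part of the commit.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "xverse/XVERSE-65B-Chat"  # assumption: hub id or local path of this release
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
).eval()

# Messages use the roles handled by _build_chat_input: "system", "user", "assistant", "exec".
messages = [{"role": "user", "content": "请介绍一下深圳有哪些著名的景点。"}]
response = model.chat(tokenizer, messages)
print(response)

# Streaming variant: chat(..., stream=True) returns a generator that yields text chunks.
for chunk in model.chat(tokenizer, messages, stream=True):
    print(chunk, end="", flush=True)

# Optional: quantize() above replaces the attention and MLP projections with
# QuantizationLinear, e.g. model.quantize(8); it requires a CUDA device.
```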
pytorch_model-00001-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c89d4c1e25576cd08861c6b8ff064356a1a5c516bf7a103ed93dac94a1456e0
+size 7041329028
pytorch_model-00002-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ba7410a29da6eaa438eeb265935e17ad84d20eec39e9a33fc94563de483c300
+size 8095183924
pytorch_model-00003-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9a5e5b77e3ef6a898513bd3155f9f335be7e6a81a2e0f64256bc623ba691fd9
+size 8095183924
pytorch_model-00004-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f3450ecf60593e33b3ddd76eb54e9fb9f9657a53ca4ec37af061d599063e540b
+size 8095183988
pytorch_model-00005-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c728a7facfd3cc5df0c5efa163f56e69a8cad9e009bd21e99eb24f822c7d7bc3
+size 8095183988
pytorch_model-00006-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7931f3f052f06d36451bce824522ad02a8809478107de2e478d9070fbfd5a9d
+size 8095183988
pytorch_model-00007-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbb2a9edd5d39d72b3e2e5015db1891e9bfcc1c7264039194165fd4d80ba1fe0
+size 8095183988
pytorch_model-00008-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8dca0e51c8fb2ad789f59df5804793a929ffea9b557d59d6916265f8f29ebb4
+size 8095183988
pytorch_model-00009-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b9cdb564d9d88007da9b0bd6602a21027069212c901a1c42df72ed7c12dd0d1
+size 8095183988
pytorch_model-00010-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4298e4a6a77bf46924e6bb8c746c24b1a1a5da018a9377508625407bde1c84de
+size 8095183988
pytorch_model-00011-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63f138f4901320545984d63f27a7f99422b77520288bef8f6d76cdc7c6dfdfb7
+size 8095183988
pytorch_model-00012-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77ce254c74d4582bbf463788d13384d956de6e84c59a44597e127e16206e21ed
+size 8095183988
pytorch_model-00013-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5f1c4e340bf69330a526a7f61f18a036a600eccda0145460a7ba5d84b1d2d6c
+size 8095183988
pytorch_model-00014-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a319d1de441c0fba10f162c173faffdd2edf9ac28521e550f2f0ea67bb130fa
+size 8095183988
pytorch_model-00015-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b53140c436f91ae812af53cb4d12826d43f87dc7b83d5cb992f361091cb6440c
+size 8095183988
pytorch_model-00016-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ddb55ea756e7380ab37d13dd28c6e84461bf3afffd403906cb2e198c232fcbb
+size 8095183988
pytorch_model-00017-of-00017.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4ec707bf3c626469d4009640aa1858e3e21eaa6d482e62e0043eb3c91e08995
+size 4348497508
pytorch_model.bin.index.json
ADDED
@@ -0,0 +1,810 @@
1 |
+
{
|
2 |
+
"metadata": {
|
3 |
+
"total_size": 82485405696
|
4 |
+
},
|
5 |
+
"weight_map": {
|
6 |
+
"lm_head.weight": "pytorch_model-00017-of-00017.bin",
|
7 |
+
"model.embed_tokens.weight": "pytorch_model-00001-of-00017.bin",
|
8 |
+
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
9 |
+
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00017.bin",
|
10 |
+
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00017.bin",
|
11 |
+
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00017.bin",
|
12 |
+
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
13 |
+
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00017.bin",
|
14 |
+
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00017.bin",
|
15 |
+
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00017.bin",
|
16 |
+
"model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00017.bin",
|
17 |
+
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00017.bin",
|
18 |
+
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
19 |
+
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00017.bin",
|
20 |
+
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00017.bin",
|
21 |
+
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00017.bin",
|
22 |
+
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
23 |
+
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00017.bin",
|
24 |
+
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00017.bin",
|
25 |
+
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00017.bin",
|
26 |
+
"model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00017.bin",
|
27 |
+
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00017.bin",
|
28 |
+
"model.layers.10.input_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
29 |
+
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00003-of-00017.bin",
|
30 |
+
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00003-of-00017.bin",
|
31 |
+
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00003-of-00017.bin",
|
32 |
+
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
33 |
+
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00003-of-00017.bin",
|
34 |
+
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00003-of-00017.bin",
|
35 |
+
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00003-of-00017.bin",
|
36 |
+
"model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00017.bin",
|
37 |
+
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00003-of-00017.bin",
|
38 |
+
"model.layers.11.input_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
39 |
+
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00003-of-00017.bin",
|
40 |
+
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00003-of-00017.bin",
|
41 |
+
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00003-of-00017.bin",
|
42 |
+
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
43 |
+
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00003-of-00017.bin",
|
44 |
+
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00003-of-00017.bin",
|
45 |
+
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00003-of-00017.bin",
|
46 |
+
"model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00017.bin",
|
47 |
+
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00003-of-00017.bin",
|
48 |
+
"model.layers.12.input_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
49 |
+
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00003-of-00017.bin",
|
50 |
+
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00003-of-00017.bin",
|
51 |
+
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00003-of-00017.bin",
|
52 |
+
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
53 |
+
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00003-of-00017.bin",
|
54 |
+
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00003-of-00017.bin",
|
55 |
+
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00003-of-00017.bin",
|
56 |
+
"model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00017.bin",
|
57 |
+
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00003-of-00017.bin",
|
58 |
+
"model.layers.13.input_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
59 |
+
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00004-of-00017.bin",
|
60 |
+
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00004-of-00017.bin",
|
61 |
+
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00004-of-00017.bin",
|
62 |
+
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
63 |
+
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00003-of-00017.bin",
|
64 |
+
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00003-of-00017.bin",
|
65 |
+
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00003-of-00017.bin",
|
66 |
+
"model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00017.bin",
|
67 |
+
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00003-of-00017.bin",
|
68 |
+
"model.layers.14.input_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
69 |
+
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00004-of-00017.bin",
|
70 |
+
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00004-of-00017.bin",
|
71 |
+
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00004-of-00017.bin",
|
72 |
+
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
73 |
+
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00004-of-00017.bin",
|
74 |
+
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00004-of-00017.bin",
|
75 |
+
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00004-of-00017.bin",
|
76 |
+
"model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00017.bin",
|
77 |
+
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00004-of-00017.bin",
|
78 |
+
"model.layers.15.input_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
79 |
+
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00004-of-00017.bin",
|
80 |
+
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00004-of-00017.bin",
|
81 |
+
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00004-of-00017.bin",
|
82 |
+
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
83 |
+
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00004-of-00017.bin",
|
84 |
+
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00004-of-00017.bin",
|
85 |
+
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00004-of-00017.bin",
|
86 |
+
"model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00017.bin",
|
87 |
+
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00004-of-00017.bin",
|
88 |
+
"model.layers.16.input_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
89 |
+
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00004-of-00017.bin",
|
90 |
+
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00004-of-00017.bin",
|
91 |
+
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00004-of-00017.bin",
|
92 |
+
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
93 |
+
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00004-of-00017.bin",
|
94 |
+
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00004-of-00017.bin",
|
95 |
+
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00004-of-00017.bin",
|
96 |
+
"model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00017.bin",
|
97 |
+
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00004-of-00017.bin",
|
98 |
+
"model.layers.17.input_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
99 |
+
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00004-of-00017.bin",
|
100 |
+
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00004-of-00017.bin",
|
101 |
+
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00004-of-00017.bin",
|
102 |
+
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
103 |
+
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00004-of-00017.bin",
|
104 |
+
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00004-of-00017.bin",
|
105 |
+
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00004-of-00017.bin",
|
106 |
+
"model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00017.bin",
|
107 |
+
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00004-of-00017.bin",
|
108 |
+
"model.layers.18.input_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
109 |
+
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00005-of-00017.bin",
|
110 |
+
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00005-of-00017.bin",
|
111 |
+
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00005-of-00017.bin",
|
112 |
+
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00004-of-00017.bin",
|
113 |
+
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00004-of-00017.bin",
|
114 |
+
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00004-of-00017.bin",
|
115 |
+
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00004-of-00017.bin",
|
116 |
+
"model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00017.bin",
|
117 |
+
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00004-of-00017.bin",
|
118 |
+
"model.layers.19.input_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
119 |
+
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00005-of-00017.bin",
|
120 |
+
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00005-of-00017.bin",
|
121 |
+
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00005-of-00017.bin",
|
122 |
+
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
123 |
+
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00005-of-00017.bin",
|
124 |
+
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00005-of-00017.bin",
|
125 |
+
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00005-of-00017.bin",
|
126 |
+
"model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00017.bin",
|
127 |
+
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00005-of-00017.bin",
|
128 |
+
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
129 |
+
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00017.bin",
|
130 |
+
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00017.bin",
|
131 |
+
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00017.bin",
|
132 |
+
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
133 |
+
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00017.bin",
|
134 |
+
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00017.bin",
|
135 |
+
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00017.bin",
|
136 |
+
"model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00017.bin",
|
137 |
+
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00017.bin",
|
138 |
+
"model.layers.20.input_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
139 |
+
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00005-of-00017.bin",
|
140 |
+
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00005-of-00017.bin",
|
141 |
+
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00005-of-00017.bin",
|
142 |
+
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
143 |
+
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00005-of-00017.bin",
|
144 |
+
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00005-of-00017.bin",
|
145 |
+
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00005-of-00017.bin",
|
146 |
+
"model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00017.bin",
|
147 |
+
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00005-of-00017.bin",
|
148 |
+
"model.layers.21.input_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
149 |
+
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00005-of-00017.bin",
|
150 |
+
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00005-of-00017.bin",
|
151 |
+
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00005-of-00017.bin",
|
152 |
+
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
153 |
+
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00005-of-00017.bin",
|
154 |
+
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00005-of-00017.bin",
|
155 |
+
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00005-of-00017.bin",
|
156 |
+
"model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00017.bin",
|
157 |
+
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00005-of-00017.bin",
|
158 |
+
"model.layers.22.input_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
159 |
+
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00005-of-00017.bin",
|
160 |
+
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00005-of-00017.bin",
|
161 |
+
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00005-of-00017.bin",
|
162 |
+
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
163 |
+
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00005-of-00017.bin",
|
164 |
+
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00005-of-00017.bin",
|
165 |
+
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00005-of-00017.bin",
|
166 |
+
"model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00017.bin",
|
167 |
+
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00005-of-00017.bin",
|
168 |
+
"model.layers.23.input_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
169 |
+
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00006-of-00017.bin",
|
170 |
+
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00006-of-00017.bin",
|
171 |
+
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00006-of-00017.bin",
|
172 |
+
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00005-of-00017.bin",
|
173 |
+
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00005-of-00017.bin",
|
174 |
+
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00005-of-00017.bin",
|
175 |
+
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00005-of-00017.bin",
|
176 |
+
"model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00017.bin",
|
177 |
+
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00005-of-00017.bin",
|
178 |
+
"model.layers.24.input_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
179 |
+
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00006-of-00017.bin",
|
180 |
+
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00006-of-00017.bin",
|
181 |
+
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00006-of-00017.bin",
|
182 |
+
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
183 |
+
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00006-of-00017.bin",
|
184 |
+
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00006-of-00017.bin",
|
185 |
+
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00006-of-00017.bin",
|
186 |
+
"model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00017.bin",
|
187 |
+
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00006-of-00017.bin",
|
188 |
+
"model.layers.25.input_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
189 |
+
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00006-of-00017.bin",
|
190 |
+
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00006-of-00017.bin",
|
191 |
+
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00006-of-00017.bin",
|
192 |
+
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
193 |
+
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00006-of-00017.bin",
|
194 |
+
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00006-of-00017.bin",
|
195 |
+
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00006-of-00017.bin",
|
196 |
+
"model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00017.bin",
|
197 |
+
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00006-of-00017.bin",
|
198 |
+
"model.layers.26.input_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
199 |
+
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00006-of-00017.bin",
|
200 |
+
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00006-of-00017.bin",
|
201 |
+
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00006-of-00017.bin",
|
202 |
+
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
203 |
+
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00006-of-00017.bin",
|
204 |
+
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00006-of-00017.bin",
|
205 |
+
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00006-of-00017.bin",
|
206 |
+
"model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00017.bin",
|
207 |
+
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00006-of-00017.bin",
|
208 |
+
"model.layers.27.input_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
209 |
+
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00006-of-00017.bin",
|
210 |
+
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00006-of-00017.bin",
|
211 |
+
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00006-of-00017.bin",
|
212 |
+
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
213 |
+
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00006-of-00017.bin",
|
214 |
+
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00006-of-00017.bin",
|
215 |
+
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00006-of-00017.bin",
|
216 |
+
"model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00017.bin",
|
217 |
+
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00006-of-00017.bin",
|
218 |
+
"model.layers.28.input_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
219 |
+
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00007-of-00017.bin",
|
220 |
+
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00007-of-00017.bin",
|
221 |
+
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00007-of-00017.bin",
|
222 |
+
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00006-of-00017.bin",
|
223 |
+
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00006-of-00017.bin",
|
224 |
+
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00006-of-00017.bin",
|
225 |
+
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00006-of-00017.bin",
|
226 |
+
"model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00017.bin",
|
227 |
+
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00006-of-00017.bin",
|
228 |
+
"model.layers.29.input_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
229 |
+
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00007-of-00017.bin",
|
230 |
+
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00007-of-00017.bin",
|
231 |
+
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00007-of-00017.bin",
|
232 |
+
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
233 |
+
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00007-of-00017.bin",
|
234 |
+
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00007-of-00017.bin",
|
235 |
+
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00007-of-00017.bin",
|
236 |
+
"model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00017.bin",
|
237 |
+
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00007-of-00017.bin",
|
238 |
+
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
239 |
+
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00002-of-00017.bin",
|
240 |
+
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00002-of-00017.bin",
|
241 |
+
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00002-of-00017.bin",
|
242 |
+
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00017.bin",
|
243 |
+
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00017.bin",
|
244 |
+
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00017.bin",
|
245 |
+
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00017.bin",
|
246 |
+
"model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00017.bin",
|
247 |
+
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00017.bin",
|
248 |
+
"model.layers.30.input_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
249 |
+
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00007-of-00017.bin",
|
250 |
+
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00007-of-00017.bin",
|
251 |
+
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00007-of-00017.bin",
|
252 |
+
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
253 |
+
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00007-of-00017.bin",
|
254 |
+
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00007-of-00017.bin",
|
255 |
+
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00007-of-00017.bin",
|
256 |
+
"model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00017.bin",
|
257 |
+
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00007-of-00017.bin",
|
258 |
+
"model.layers.31.input_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
259 |
+
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00007-of-00017.bin",
|
260 |
+
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00007-of-00017.bin",
|
261 |
+
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00007-of-00017.bin",
|
262 |
+
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
263 |
+
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00007-of-00017.bin",
|
264 |
+
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00007-of-00017.bin",
|
265 |
+
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00007-of-00017.bin",
|
266 |
+
"model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00017.bin",
|
267 |
+
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00007-of-00017.bin",
|
268 |
+
"model.layers.32.input_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
269 |
+
"model.layers.32.mlp.down_proj.weight": "pytorch_model-00007-of-00017.bin",
|
270 |
+
"model.layers.32.mlp.gate_proj.weight": "pytorch_model-00007-of-00017.bin",
|
271 |
+
"model.layers.32.mlp.up_proj.weight": "pytorch_model-00007-of-00017.bin",
|
272 |
+
"model.layers.32.post_attention_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
273 |
+
"model.layers.32.self_attn.k_proj.weight": "pytorch_model-00007-of-00017.bin",
|
274 |
+
"model.layers.32.self_attn.o_proj.weight": "pytorch_model-00007-of-00017.bin",
|
275 |
+
"model.layers.32.self_attn.q_proj.weight": "pytorch_model-00007-of-00017.bin",
|
276 |
+
"model.layers.32.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00017.bin",
|
277 |
+
"model.layers.32.self_attn.v_proj.weight": "pytorch_model-00007-of-00017.bin",
|
278 |
+
"model.layers.33.input_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
279 |
+
"model.layers.33.mlp.down_proj.weight": "pytorch_model-00008-of-00017.bin",
|
280 |
+
"model.layers.33.mlp.gate_proj.weight": "pytorch_model-00008-of-00017.bin",
|
281 |
+
"model.layers.33.mlp.up_proj.weight": "pytorch_model-00008-of-00017.bin",
|
282 |
+
"model.layers.33.post_attention_layernorm.weight": "pytorch_model-00007-of-00017.bin",
|
283 |
+
"model.layers.33.self_attn.k_proj.weight": "pytorch_model-00007-of-00017.bin",
|
284 |
+
"model.layers.33.self_attn.o_proj.weight": "pytorch_model-00007-of-00017.bin",
|
285 |
+
"model.layers.33.self_attn.q_proj.weight": "pytorch_model-00007-of-00017.bin",
|
286 |
+
"model.layers.33.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00017.bin",
|
287 |
+
"model.layers.33.self_attn.v_proj.weight": "pytorch_model-00007-of-00017.bin",
|
288 |
+
"model.layers.34.input_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
289 |
+
"model.layers.34.mlp.down_proj.weight": "pytorch_model-00008-of-00017.bin",
|
290 |
+
"model.layers.34.mlp.gate_proj.weight": "pytorch_model-00008-of-00017.bin",
|
291 |
+
"model.layers.34.mlp.up_proj.weight": "pytorch_model-00008-of-00017.bin",
|
292 |
+
"model.layers.34.post_attention_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
293 |
+
"model.layers.34.self_attn.k_proj.weight": "pytorch_model-00008-of-00017.bin",
|
294 |
+
"model.layers.34.self_attn.o_proj.weight": "pytorch_model-00008-of-00017.bin",
|
295 |
+
"model.layers.34.self_attn.q_proj.weight": "pytorch_model-00008-of-00017.bin",
|
296 |
+
"model.layers.34.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00017.bin",
|
297 |
+
"model.layers.34.self_attn.v_proj.weight": "pytorch_model-00008-of-00017.bin",
|
298 |
+
"model.layers.35.input_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
299 |
+
"model.layers.35.mlp.down_proj.weight": "pytorch_model-00008-of-00017.bin",
|
300 |
+
"model.layers.35.mlp.gate_proj.weight": "pytorch_model-00008-of-00017.bin",
|
301 |
+
"model.layers.35.mlp.up_proj.weight": "pytorch_model-00008-of-00017.bin",
|
302 |
+
"model.layers.35.post_attention_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
303 |
+
"model.layers.35.self_attn.k_proj.weight": "pytorch_model-00008-of-00017.bin",
|
304 |
+
"model.layers.35.self_attn.o_proj.weight": "pytorch_model-00008-of-00017.bin",
|
305 |
+
"model.layers.35.self_attn.q_proj.weight": "pytorch_model-00008-of-00017.bin",
|
306 |
+
"model.layers.35.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00017.bin",
|
307 |
+
"model.layers.35.self_attn.v_proj.weight": "pytorch_model-00008-of-00017.bin",
|
308 |
+
"model.layers.36.input_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
309 |
+
"model.layers.36.mlp.down_proj.weight": "pytorch_model-00008-of-00017.bin",
|
310 |
+
"model.layers.36.mlp.gate_proj.weight": "pytorch_model-00008-of-00017.bin",
|
311 |
+
"model.layers.36.mlp.up_proj.weight": "pytorch_model-00008-of-00017.bin",
|
312 |
+
"model.layers.36.post_attention_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
313 |
+
"model.layers.36.self_attn.k_proj.weight": "pytorch_model-00008-of-00017.bin",
|
314 |
+
"model.layers.36.self_attn.o_proj.weight": "pytorch_model-00008-of-00017.bin",
|
315 |
+
"model.layers.36.self_attn.q_proj.weight": "pytorch_model-00008-of-00017.bin",
|
316 |
+
"model.layers.36.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00017.bin",
|
317 |
+
"model.layers.36.self_attn.v_proj.weight": "pytorch_model-00008-of-00017.bin",
|
318 |
+
"model.layers.37.input_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
319 |
+
"model.layers.37.mlp.down_proj.weight": "pytorch_model-00008-of-00017.bin",
|
320 |
+
"model.layers.37.mlp.gate_proj.weight": "pytorch_model-00008-of-00017.bin",
|
321 |
+
"model.layers.37.mlp.up_proj.weight": "pytorch_model-00008-of-00017.bin",
|
322 |
+
"model.layers.37.post_attention_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
323 |
+
"model.layers.37.self_attn.k_proj.weight": "pytorch_model-00008-of-00017.bin",
|
324 |
+
"model.layers.37.self_attn.o_proj.weight": "pytorch_model-00008-of-00017.bin",
|
325 |
+
"model.layers.37.self_attn.q_proj.weight": "pytorch_model-00008-of-00017.bin",
|
326 |
+
"model.layers.37.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00017.bin",
|
327 |
+
"model.layers.37.self_attn.v_proj.weight": "pytorch_model-00008-of-00017.bin",
|
328 |
+
"model.layers.38.input_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
329 |
+
"model.layers.38.mlp.down_proj.weight": "pytorch_model-00009-of-00017.bin",
|
330 |
+
"model.layers.38.mlp.gate_proj.weight": "pytorch_model-00009-of-00017.bin",
|
331 |
+
"model.layers.38.mlp.up_proj.weight": "pytorch_model-00009-of-00017.bin",
|
332 |
+
"model.layers.38.post_attention_layernorm.weight": "pytorch_model-00008-of-00017.bin",
|
333 |
+
"model.layers.38.self_attn.k_proj.weight": "pytorch_model-00008-of-00017.bin",
|
334 |
+
"model.layers.38.self_attn.o_proj.weight": "pytorch_model-00008-of-00017.bin",
|
335 |
+
"model.layers.38.self_attn.q_proj.weight": "pytorch_model-00008-of-00017.bin",
|
336 |
+
"model.layers.38.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00017.bin",
|
337 |
+
"model.layers.38.self_attn.v_proj.weight": "pytorch_model-00008-of-00017.bin",
|
338 |
+
"model.layers.39.input_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
339 |
+
"model.layers.39.mlp.down_proj.weight": "pytorch_model-00009-of-00017.bin",
|
340 |
+
"model.layers.39.mlp.gate_proj.weight": "pytorch_model-00009-of-00017.bin",
|
341 |
+
"model.layers.39.mlp.up_proj.weight": "pytorch_model-00009-of-00017.bin",
|
342 |
+
"model.layers.39.post_attention_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
343 |
+
"model.layers.39.self_attn.k_proj.weight": "pytorch_model-00009-of-00017.bin",
|
344 |
+
"model.layers.39.self_attn.o_proj.weight": "pytorch_model-00009-of-00017.bin",
|
345 |
+
"model.layers.39.self_attn.q_proj.weight": "pytorch_model-00009-of-00017.bin",
|
346 |
+
"model.layers.39.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00017.bin",
|
347 |
+
"model.layers.39.self_attn.v_proj.weight": "pytorch_model-00009-of-00017.bin",
|
348 |
+
"model.layers.4.input_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
349 |
+
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00002-of-00017.bin",
|
350 |
+
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00002-of-00017.bin",
|
351 |
+
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00002-of-00017.bin",
|
352 |
+
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
353 |
+
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00002-of-00017.bin",
|
354 |
+
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00002-of-00017.bin",
|
355 |
+
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00002-of-00017.bin",
|
356 |
+
"model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00017.bin",
|
357 |
+
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00002-of-00017.bin",
|
358 |
+
"model.layers.40.input_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
359 |
+
"model.layers.40.mlp.down_proj.weight": "pytorch_model-00009-of-00017.bin",
|
360 |
+
"model.layers.40.mlp.gate_proj.weight": "pytorch_model-00009-of-00017.bin",
|
361 |
+
"model.layers.40.mlp.up_proj.weight": "pytorch_model-00009-of-00017.bin",
|
362 |
+
"model.layers.40.post_attention_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
363 |
+
"model.layers.40.self_attn.k_proj.weight": "pytorch_model-00009-of-00017.bin",
|
364 |
+
"model.layers.40.self_attn.o_proj.weight": "pytorch_model-00009-of-00017.bin",
|
365 |
+
"model.layers.40.self_attn.q_proj.weight": "pytorch_model-00009-of-00017.bin",
|
366 |
+
"model.layers.40.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00017.bin",
|
367 |
+
"model.layers.40.self_attn.v_proj.weight": "pytorch_model-00009-of-00017.bin",
|
368 |
+
"model.layers.41.input_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
369 |
+
"model.layers.41.mlp.down_proj.weight": "pytorch_model-00009-of-00017.bin",
|
370 |
+
"model.layers.41.mlp.gate_proj.weight": "pytorch_model-00009-of-00017.bin",
|
371 |
+
"model.layers.41.mlp.up_proj.weight": "pytorch_model-00009-of-00017.bin",
|
372 |
+
"model.layers.41.post_attention_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
373 |
+
"model.layers.41.self_attn.k_proj.weight": "pytorch_model-00009-of-00017.bin",
|
374 |
+
"model.layers.41.self_attn.o_proj.weight": "pytorch_model-00009-of-00017.bin",
|
375 |
+
"model.layers.41.self_attn.q_proj.weight": "pytorch_model-00009-of-00017.bin",
|
376 |
+
"model.layers.41.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00017.bin",
|
377 |
+
"model.layers.41.self_attn.v_proj.weight": "pytorch_model-00009-of-00017.bin",
|
378 |
+
"model.layers.42.input_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
379 |
+
"model.layers.42.mlp.down_proj.weight": "pytorch_model-00009-of-00017.bin",
|
380 |
+
"model.layers.42.mlp.gate_proj.weight": "pytorch_model-00009-of-00017.bin",
|
381 |
+
"model.layers.42.mlp.up_proj.weight": "pytorch_model-00009-of-00017.bin",
|
382 |
+
"model.layers.42.post_attention_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
383 |
+
"model.layers.42.self_attn.k_proj.weight": "pytorch_model-00009-of-00017.bin",
|
384 |
+
"model.layers.42.self_attn.o_proj.weight": "pytorch_model-00009-of-00017.bin",
|
385 |
+
"model.layers.42.self_attn.q_proj.weight": "pytorch_model-00009-of-00017.bin",
|
386 |
+
"model.layers.42.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00017.bin",
|
387 |
+
"model.layers.42.self_attn.v_proj.weight": "pytorch_model-00009-of-00017.bin",
|
388 |
+
"model.layers.43.input_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
389 |
+
"model.layers.43.mlp.down_proj.weight": "pytorch_model-00010-of-00017.bin",
|
390 |
+
"model.layers.43.mlp.gate_proj.weight": "pytorch_model-00010-of-00017.bin",
|
391 |
+
"model.layers.43.mlp.up_proj.weight": "pytorch_model-00010-of-00017.bin",
|
392 |
+
"model.layers.43.post_attention_layernorm.weight": "pytorch_model-00009-of-00017.bin",
|
393 |
+
"model.layers.43.self_attn.k_proj.weight": "pytorch_model-00009-of-00017.bin",
|
394 |
+
"model.layers.43.self_attn.o_proj.weight": "pytorch_model-00009-of-00017.bin",
|
395 |
+
"model.layers.43.self_attn.q_proj.weight": "pytorch_model-00009-of-00017.bin",
|
396 |
+
"model.layers.43.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00017.bin",
|
397 |
+
"model.layers.43.self_attn.v_proj.weight": "pytorch_model-00009-of-00017.bin",
|
398 |
+
"model.layers.44.input_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
399 |
+
"model.layers.44.mlp.down_proj.weight": "pytorch_model-00010-of-00017.bin",
|
400 |
+
"model.layers.44.mlp.gate_proj.weight": "pytorch_model-00010-of-00017.bin",
|
401 |
+
"model.layers.44.mlp.up_proj.weight": "pytorch_model-00010-of-00017.bin",
|
402 |
+
"model.layers.44.post_attention_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
403 |
+
"model.layers.44.self_attn.k_proj.weight": "pytorch_model-00010-of-00017.bin",
|
404 |
+
"model.layers.44.self_attn.o_proj.weight": "pytorch_model-00010-of-00017.bin",
|
405 |
+
"model.layers.44.self_attn.q_proj.weight": "pytorch_model-00010-of-00017.bin",
|
406 |
+
"model.layers.44.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00017.bin",
|
407 |
+
"model.layers.44.self_attn.v_proj.weight": "pytorch_model-00010-of-00017.bin",
|
408 |
+
"model.layers.45.input_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
409 |
+
"model.layers.45.mlp.down_proj.weight": "pytorch_model-00010-of-00017.bin",
|
410 |
+
"model.layers.45.mlp.gate_proj.weight": "pytorch_model-00010-of-00017.bin",
|
411 |
+
"model.layers.45.mlp.up_proj.weight": "pytorch_model-00010-of-00017.bin",
|
412 |
+
"model.layers.45.post_attention_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
413 |
+
"model.layers.45.self_attn.k_proj.weight": "pytorch_model-00010-of-00017.bin",
|
414 |
+
"model.layers.45.self_attn.o_proj.weight": "pytorch_model-00010-of-00017.bin",
|
415 |
+
"model.layers.45.self_attn.q_proj.weight": "pytorch_model-00010-of-00017.bin",
|
416 |
+
"model.layers.45.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00017.bin",
|
417 |
+
"model.layers.45.self_attn.v_proj.weight": "pytorch_model-00010-of-00017.bin",
|
418 |
+
"model.layers.46.input_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
419 |
+
"model.layers.46.mlp.down_proj.weight": "pytorch_model-00010-of-00017.bin",
|
420 |
+
"model.layers.46.mlp.gate_proj.weight": "pytorch_model-00010-of-00017.bin",
|
421 |
+
"model.layers.46.mlp.up_proj.weight": "pytorch_model-00010-of-00017.bin",
|
422 |
+
"model.layers.46.post_attention_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
423 |
+
"model.layers.46.self_attn.k_proj.weight": "pytorch_model-00010-of-00017.bin",
|
424 |
+
"model.layers.46.self_attn.o_proj.weight": "pytorch_model-00010-of-00017.bin",
|
425 |
+
"model.layers.46.self_attn.q_proj.weight": "pytorch_model-00010-of-00017.bin",
|
426 |
+
"model.layers.46.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00017.bin",
|
427 |
+
"model.layers.46.self_attn.v_proj.weight": "pytorch_model-00010-of-00017.bin",
|
428 |
+
"model.layers.47.input_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
429 |
+
"model.layers.47.mlp.down_proj.weight": "pytorch_model-00010-of-00017.bin",
|
430 |
+
"model.layers.47.mlp.gate_proj.weight": "pytorch_model-00010-of-00017.bin",
|
431 |
+
"model.layers.47.mlp.up_proj.weight": "pytorch_model-00010-of-00017.bin",
|
432 |
+
"model.layers.47.post_attention_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
433 |
+
"model.layers.47.self_attn.k_proj.weight": "pytorch_model-00010-of-00017.bin",
|
434 |
+
"model.layers.47.self_attn.o_proj.weight": "pytorch_model-00010-of-00017.bin",
|
435 |
+
"model.layers.47.self_attn.q_proj.weight": "pytorch_model-00010-of-00017.bin",
|
436 |
+
"model.layers.47.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00017.bin",
|
437 |
+
"model.layers.47.self_attn.v_proj.weight": "pytorch_model-00010-of-00017.bin",
|
438 |
+
"model.layers.48.input_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
439 |
+
"model.layers.48.mlp.down_proj.weight": "pytorch_model-00011-of-00017.bin",
|
440 |
+
"model.layers.48.mlp.gate_proj.weight": "pytorch_model-00011-of-00017.bin",
|
441 |
+
"model.layers.48.mlp.up_proj.weight": "pytorch_model-00011-of-00017.bin",
|
442 |
+
"model.layers.48.post_attention_layernorm.weight": "pytorch_model-00010-of-00017.bin",
|
443 |
+
"model.layers.48.self_attn.k_proj.weight": "pytorch_model-00010-of-00017.bin",
|
444 |
+
"model.layers.48.self_attn.o_proj.weight": "pytorch_model-00010-of-00017.bin",
|
445 |
+
"model.layers.48.self_attn.q_proj.weight": "pytorch_model-00010-of-00017.bin",
|
446 |
+
"model.layers.48.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00017.bin",
|
447 |
+
"model.layers.48.self_attn.v_proj.weight": "pytorch_model-00010-of-00017.bin",
|
448 |
+
"model.layers.49.input_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
449 |
+
"model.layers.49.mlp.down_proj.weight": "pytorch_model-00011-of-00017.bin",
|
450 |
+
"model.layers.49.mlp.gate_proj.weight": "pytorch_model-00011-of-00017.bin",
|
451 |
+
"model.layers.49.mlp.up_proj.weight": "pytorch_model-00011-of-00017.bin",
|
452 |
+
"model.layers.49.post_attention_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
453 |
+
"model.layers.49.self_attn.k_proj.weight": "pytorch_model-00011-of-00017.bin",
|
454 |
+
"model.layers.49.self_attn.o_proj.weight": "pytorch_model-00011-of-00017.bin",
|
455 |
+
"model.layers.49.self_attn.q_proj.weight": "pytorch_model-00011-of-00017.bin",
|
456 |
+
"model.layers.49.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00017.bin",
|
457 |
+
"model.layers.49.self_attn.v_proj.weight": "pytorch_model-00011-of-00017.bin",
|
458 |
+
"model.layers.5.input_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
459 |
+
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00002-of-00017.bin",
|
460 |
+
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00002-of-00017.bin",
|
461 |
+
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00002-of-00017.bin",
|
462 |
+
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
463 |
+
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00002-of-00017.bin",
|
464 |
+
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00002-of-00017.bin",
|
465 |
+
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00002-of-00017.bin",
|
466 |
+
"model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00017.bin",
|
467 |
+
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00002-of-00017.bin",
|
468 |
+
"model.layers.50.input_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
469 |
+
"model.layers.50.mlp.down_proj.weight": "pytorch_model-00011-of-00017.bin",
|
470 |
+
"model.layers.50.mlp.gate_proj.weight": "pytorch_model-00011-of-00017.bin",
|
471 |
+
"model.layers.50.mlp.up_proj.weight": "pytorch_model-00011-of-00017.bin",
|
472 |
+
"model.layers.50.post_attention_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
473 |
+
"model.layers.50.self_attn.k_proj.weight": "pytorch_model-00011-of-00017.bin",
|
474 |
+
"model.layers.50.self_attn.o_proj.weight": "pytorch_model-00011-of-00017.bin",
|
475 |
+
"model.layers.50.self_attn.q_proj.weight": "pytorch_model-00011-of-00017.bin",
|
476 |
+
"model.layers.50.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00017.bin",
|
477 |
+
"model.layers.50.self_attn.v_proj.weight": "pytorch_model-00011-of-00017.bin",
|
478 |
+
"model.layers.51.input_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
479 |
+
"model.layers.51.mlp.down_proj.weight": "pytorch_model-00011-of-00017.bin",
|
480 |
+
"model.layers.51.mlp.gate_proj.weight": "pytorch_model-00011-of-00017.bin",
|
481 |
+
"model.layers.51.mlp.up_proj.weight": "pytorch_model-00011-of-00017.bin",
|
482 |
+
"model.layers.51.post_attention_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
483 |
+
"model.layers.51.self_attn.k_proj.weight": "pytorch_model-00011-of-00017.bin",
|
484 |
+
"model.layers.51.self_attn.o_proj.weight": "pytorch_model-00011-of-00017.bin",
|
485 |
+
"model.layers.51.self_attn.q_proj.weight": "pytorch_model-00011-of-00017.bin",
|
486 |
+
"model.layers.51.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00017.bin",
|
487 |
+
"model.layers.51.self_attn.v_proj.weight": "pytorch_model-00011-of-00017.bin",
|
488 |
+
"model.layers.52.input_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
489 |
+
"model.layers.52.mlp.down_proj.weight": "pytorch_model-00011-of-00017.bin",
|
490 |
+
"model.layers.52.mlp.gate_proj.weight": "pytorch_model-00011-of-00017.bin",
|
491 |
+
"model.layers.52.mlp.up_proj.weight": "pytorch_model-00011-of-00017.bin",
|
492 |
+
"model.layers.52.post_attention_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
493 |
+
"model.layers.52.self_attn.k_proj.weight": "pytorch_model-00011-of-00017.bin",
|
494 |
+
"model.layers.52.self_attn.o_proj.weight": "pytorch_model-00011-of-00017.bin",
|
495 |
+
"model.layers.52.self_attn.q_proj.weight": "pytorch_model-00011-of-00017.bin",
|
496 |
+
"model.layers.52.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00017.bin",
|
497 |
+
"model.layers.52.self_attn.v_proj.weight": "pytorch_model-00011-of-00017.bin",
|
498 |
+
"model.layers.53.input_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
499 |
+
"model.layers.53.mlp.down_proj.weight": "pytorch_model-00012-of-00017.bin",
|
500 |
+
"model.layers.53.mlp.gate_proj.weight": "pytorch_model-00012-of-00017.bin",
|
501 |
+
"model.layers.53.mlp.up_proj.weight": "pytorch_model-00012-of-00017.bin",
|
502 |
+
"model.layers.53.post_attention_layernorm.weight": "pytorch_model-00011-of-00017.bin",
|
503 |
+
"model.layers.53.self_attn.k_proj.weight": "pytorch_model-00011-of-00017.bin",
|
504 |
+
"model.layers.53.self_attn.o_proj.weight": "pytorch_model-00011-of-00017.bin",
|
505 |
+
"model.layers.53.self_attn.q_proj.weight": "pytorch_model-00011-of-00017.bin",
|
506 |
+
"model.layers.53.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00017.bin",
|
507 |
+
"model.layers.53.self_attn.v_proj.weight": "pytorch_model-00011-of-00017.bin",
|
508 |
+
"model.layers.54.input_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
509 |
+
"model.layers.54.mlp.down_proj.weight": "pytorch_model-00012-of-00017.bin",
|
510 |
+
"model.layers.54.mlp.gate_proj.weight": "pytorch_model-00012-of-00017.bin",
|
511 |
+
"model.layers.54.mlp.up_proj.weight": "pytorch_model-00012-of-00017.bin",
|
512 |
+
"model.layers.54.post_attention_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
513 |
+
"model.layers.54.self_attn.k_proj.weight": "pytorch_model-00012-of-00017.bin",
|
514 |
+
"model.layers.54.self_attn.o_proj.weight": "pytorch_model-00012-of-00017.bin",
|
515 |
+
"model.layers.54.self_attn.q_proj.weight": "pytorch_model-00012-of-00017.bin",
|
516 |
+
"model.layers.54.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00017.bin",
|
517 |
+
"model.layers.54.self_attn.v_proj.weight": "pytorch_model-00012-of-00017.bin",
|
518 |
+
"model.layers.55.input_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
519 |
+
"model.layers.55.mlp.down_proj.weight": "pytorch_model-00012-of-00017.bin",
|
520 |
+
"model.layers.55.mlp.gate_proj.weight": "pytorch_model-00012-of-00017.bin",
|
521 |
+
"model.layers.55.mlp.up_proj.weight": "pytorch_model-00012-of-00017.bin",
|
522 |
+
"model.layers.55.post_attention_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
523 |
+
"model.layers.55.self_attn.k_proj.weight": "pytorch_model-00012-of-00017.bin",
|
524 |
+
"model.layers.55.self_attn.o_proj.weight": "pytorch_model-00012-of-00017.bin",
|
525 |
+
"model.layers.55.self_attn.q_proj.weight": "pytorch_model-00012-of-00017.bin",
|
526 |
+
"model.layers.55.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00017.bin",
|
527 |
+
"model.layers.55.self_attn.v_proj.weight": "pytorch_model-00012-of-00017.bin",
|
528 |
+
"model.layers.56.input_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
529 |
+
"model.layers.56.mlp.down_proj.weight": "pytorch_model-00012-of-00017.bin",
|
530 |
+
"model.layers.56.mlp.gate_proj.weight": "pytorch_model-00012-of-00017.bin",
|
531 |
+
"model.layers.56.mlp.up_proj.weight": "pytorch_model-00012-of-00017.bin",
|
532 |
+
"model.layers.56.post_attention_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
533 |
+
"model.layers.56.self_attn.k_proj.weight": "pytorch_model-00012-of-00017.bin",
|
534 |
+
"model.layers.56.self_attn.o_proj.weight": "pytorch_model-00012-of-00017.bin",
|
535 |
+
"model.layers.56.self_attn.q_proj.weight": "pytorch_model-00012-of-00017.bin",
|
536 |
+
"model.layers.56.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00017.bin",
|
537 |
+
"model.layers.56.self_attn.v_proj.weight": "pytorch_model-00012-of-00017.bin",
|
538 |
+
"model.layers.57.input_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
539 |
+
"model.layers.57.mlp.down_proj.weight": "pytorch_model-00012-of-00017.bin",
|
540 |
+
"model.layers.57.mlp.gate_proj.weight": "pytorch_model-00012-of-00017.bin",
|
541 |
+
"model.layers.57.mlp.up_proj.weight": "pytorch_model-00012-of-00017.bin",
|
542 |
+
"model.layers.57.post_attention_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
543 |
+
"model.layers.57.self_attn.k_proj.weight": "pytorch_model-00012-of-00017.bin",
|
544 |
+
"model.layers.57.self_attn.o_proj.weight": "pytorch_model-00012-of-00017.bin",
|
545 |
+
"model.layers.57.self_attn.q_proj.weight": "pytorch_model-00012-of-00017.bin",
|
546 |
+
"model.layers.57.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00017.bin",
|
547 |
+
"model.layers.57.self_attn.v_proj.weight": "pytorch_model-00012-of-00017.bin",
|
548 |
+
"model.layers.58.input_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
549 |
+
"model.layers.58.mlp.down_proj.weight": "pytorch_model-00013-of-00017.bin",
|
550 |
+
"model.layers.58.mlp.gate_proj.weight": "pytorch_model-00013-of-00017.bin",
|
551 |
+
"model.layers.58.mlp.up_proj.weight": "pytorch_model-00013-of-00017.bin",
|
552 |
+
"model.layers.58.post_attention_layernorm.weight": "pytorch_model-00012-of-00017.bin",
|
553 |
+
"model.layers.58.self_attn.k_proj.weight": "pytorch_model-00012-of-00017.bin",
|
554 |
+
"model.layers.58.self_attn.o_proj.weight": "pytorch_model-00012-of-00017.bin",
|
555 |
+
"model.layers.58.self_attn.q_proj.weight": "pytorch_model-00012-of-00017.bin",
|
556 |
+
"model.layers.58.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00017.bin",
|
557 |
+
"model.layers.58.self_attn.v_proj.weight": "pytorch_model-00012-of-00017.bin",
|
558 |
+
"model.layers.59.input_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
559 |
+
"model.layers.59.mlp.down_proj.weight": "pytorch_model-00013-of-00017.bin",
|
560 |
+
"model.layers.59.mlp.gate_proj.weight": "pytorch_model-00013-of-00017.bin",
|
561 |
+
"model.layers.59.mlp.up_proj.weight": "pytorch_model-00013-of-00017.bin",
|
562 |
+
"model.layers.59.post_attention_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
563 |
+
"model.layers.59.self_attn.k_proj.weight": "pytorch_model-00013-of-00017.bin",
|
564 |
+
"model.layers.59.self_attn.o_proj.weight": "pytorch_model-00013-of-00017.bin",
|
565 |
+
"model.layers.59.self_attn.q_proj.weight": "pytorch_model-00013-of-00017.bin",
|
566 |
+
"model.layers.59.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00017.bin",
|
567 |
+
"model.layers.59.self_attn.v_proj.weight": "pytorch_model-00013-of-00017.bin",
|
568 |
+
"model.layers.6.input_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
569 |
+
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00002-of-00017.bin",
|
570 |
+
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00002-of-00017.bin",
|
571 |
+
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00002-of-00017.bin",
|
572 |
+
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
573 |
+
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00002-of-00017.bin",
|
574 |
+
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00002-of-00017.bin",
|
575 |
+
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00002-of-00017.bin",
|
576 |
+
"model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00017.bin",
|
577 |
+
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00002-of-00017.bin",
|
578 |
+
"model.layers.60.input_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
579 |
+
"model.layers.60.mlp.down_proj.weight": "pytorch_model-00013-of-00017.bin",
|
580 |
+
"model.layers.60.mlp.gate_proj.weight": "pytorch_model-00013-of-00017.bin",
|
581 |
+
"model.layers.60.mlp.up_proj.weight": "pytorch_model-00013-of-00017.bin",
|
582 |
+
"model.layers.60.post_attention_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
583 |
+
"model.layers.60.self_attn.k_proj.weight": "pytorch_model-00013-of-00017.bin",
|
584 |
+
"model.layers.60.self_attn.o_proj.weight": "pytorch_model-00013-of-00017.bin",
|
585 |
+
"model.layers.60.self_attn.q_proj.weight": "pytorch_model-00013-of-00017.bin",
|
586 |
+
"model.layers.60.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00017.bin",
|
587 |
+
"model.layers.60.self_attn.v_proj.weight": "pytorch_model-00013-of-00017.bin",
|
588 |
+
"model.layers.61.input_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
589 |
+
"model.layers.61.mlp.down_proj.weight": "pytorch_model-00013-of-00017.bin",
|
590 |
+
"model.layers.61.mlp.gate_proj.weight": "pytorch_model-00013-of-00017.bin",
|
591 |
+
"model.layers.61.mlp.up_proj.weight": "pytorch_model-00013-of-00017.bin",
|
592 |
+
"model.layers.61.post_attention_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
593 |
+
"model.layers.61.self_attn.k_proj.weight": "pytorch_model-00013-of-00017.bin",
|
594 |
+
"model.layers.61.self_attn.o_proj.weight": "pytorch_model-00013-of-00017.bin",
|
595 |
+
"model.layers.61.self_attn.q_proj.weight": "pytorch_model-00013-of-00017.bin",
|
596 |
+
"model.layers.61.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00017.bin",
|
597 |
+
"model.layers.61.self_attn.v_proj.weight": "pytorch_model-00013-of-00017.bin",
|
598 |
+
"model.layers.62.input_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
599 |
+
"model.layers.62.mlp.down_proj.weight": "pytorch_model-00013-of-00017.bin",
|
600 |
+
"model.layers.62.mlp.gate_proj.weight": "pytorch_model-00013-of-00017.bin",
|
601 |
+
"model.layers.62.mlp.up_proj.weight": "pytorch_model-00013-of-00017.bin",
|
602 |
+
"model.layers.62.post_attention_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
603 |
+
"model.layers.62.self_attn.k_proj.weight": "pytorch_model-00013-of-00017.bin",
|
604 |
+
"model.layers.62.self_attn.o_proj.weight": "pytorch_model-00013-of-00017.bin",
|
605 |
+
"model.layers.62.self_attn.q_proj.weight": "pytorch_model-00013-of-00017.bin",
|
606 |
+
"model.layers.62.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00017.bin",
|
607 |
+
"model.layers.62.self_attn.v_proj.weight": "pytorch_model-00013-of-00017.bin",
|
608 |
+
"model.layers.63.input_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
609 |
+
"model.layers.63.mlp.down_proj.weight": "pytorch_model-00014-of-00017.bin",
|
610 |
+
"model.layers.63.mlp.gate_proj.weight": "pytorch_model-00014-of-00017.bin",
|
611 |
+
"model.layers.63.mlp.up_proj.weight": "pytorch_model-00014-of-00017.bin",
|
612 |
+
"model.layers.63.post_attention_layernorm.weight": "pytorch_model-00013-of-00017.bin",
|
613 |
+
"model.layers.63.self_attn.k_proj.weight": "pytorch_model-00013-of-00017.bin",
|
614 |
+
"model.layers.63.self_attn.o_proj.weight": "pytorch_model-00013-of-00017.bin",
|
615 |
+
"model.layers.63.self_attn.q_proj.weight": "pytorch_model-00013-of-00017.bin",
|
616 |
+
"model.layers.63.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00017.bin",
|
617 |
+
"model.layers.63.self_attn.v_proj.weight": "pytorch_model-00013-of-00017.bin",
|
618 |
+
"model.layers.64.input_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
619 |
+
"model.layers.64.mlp.down_proj.weight": "pytorch_model-00014-of-00017.bin",
|
620 |
+
"model.layers.64.mlp.gate_proj.weight": "pytorch_model-00014-of-00017.bin",
|
621 |
+
"model.layers.64.mlp.up_proj.weight": "pytorch_model-00014-of-00017.bin",
|
622 |
+
"model.layers.64.post_attention_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
623 |
+
"model.layers.64.self_attn.k_proj.weight": "pytorch_model-00014-of-00017.bin",
|
624 |
+
"model.layers.64.self_attn.o_proj.weight": "pytorch_model-00014-of-00017.bin",
|
625 |
+
"model.layers.64.self_attn.q_proj.weight": "pytorch_model-00014-of-00017.bin",
|
626 |
+
"model.layers.64.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00017.bin",
|
627 |
+
"model.layers.64.self_attn.v_proj.weight": "pytorch_model-00014-of-00017.bin",
|
628 |
+
"model.layers.65.input_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
629 |
+
"model.layers.65.mlp.down_proj.weight": "pytorch_model-00014-of-00017.bin",
|
630 |
+
"model.layers.65.mlp.gate_proj.weight": "pytorch_model-00014-of-00017.bin",
|
631 |
+
"model.layers.65.mlp.up_proj.weight": "pytorch_model-00014-of-00017.bin",
|
632 |
+
"model.layers.65.post_attention_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
633 |
+
"model.layers.65.self_attn.k_proj.weight": "pytorch_model-00014-of-00017.bin",
|
634 |
+
"model.layers.65.self_attn.o_proj.weight": "pytorch_model-00014-of-00017.bin",
|
635 |
+
"model.layers.65.self_attn.q_proj.weight": "pytorch_model-00014-of-00017.bin",
|
636 |
+
"model.layers.65.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00017.bin",
|
637 |
+
"model.layers.65.self_attn.v_proj.weight": "pytorch_model-00014-of-00017.bin",
|
638 |
+
"model.layers.66.input_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
639 |
+
"model.layers.66.mlp.down_proj.weight": "pytorch_model-00014-of-00017.bin",
|
640 |
+
"model.layers.66.mlp.gate_proj.weight": "pytorch_model-00014-of-00017.bin",
|
641 |
+
"model.layers.66.mlp.up_proj.weight": "pytorch_model-00014-of-00017.bin",
|
642 |
+
"model.layers.66.post_attention_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
643 |
+
"model.layers.66.self_attn.k_proj.weight": "pytorch_model-00014-of-00017.bin",
|
644 |
+
"model.layers.66.self_attn.o_proj.weight": "pytorch_model-00014-of-00017.bin",
|
645 |
+
"model.layers.66.self_attn.q_proj.weight": "pytorch_model-00014-of-00017.bin",
|
646 |
+
"model.layers.66.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00017.bin",
|
647 |
+
"model.layers.66.self_attn.v_proj.weight": "pytorch_model-00014-of-00017.bin",
|
648 |
+
"model.layers.67.input_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
649 |
+
"model.layers.67.mlp.down_proj.weight": "pytorch_model-00014-of-00017.bin",
|
650 |
+
"model.layers.67.mlp.gate_proj.weight": "pytorch_model-00014-of-00017.bin",
|
651 |
+
"model.layers.67.mlp.up_proj.weight": "pytorch_model-00014-of-00017.bin",
|
652 |
+
"model.layers.67.post_attention_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
653 |
+
"model.layers.67.self_attn.k_proj.weight": "pytorch_model-00014-of-00017.bin",
|
654 |
+
"model.layers.67.self_attn.o_proj.weight": "pytorch_model-00014-of-00017.bin",
|
655 |
+
"model.layers.67.self_attn.q_proj.weight": "pytorch_model-00014-of-00017.bin",
|
656 |
+
"model.layers.67.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00017.bin",
|
657 |
+
"model.layers.67.self_attn.v_proj.weight": "pytorch_model-00014-of-00017.bin",
|
658 |
+
"model.layers.68.input_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
659 |
+
"model.layers.68.mlp.down_proj.weight": "pytorch_model-00015-of-00017.bin",
|
660 |
+
"model.layers.68.mlp.gate_proj.weight": "pytorch_model-00015-of-00017.bin",
|
661 |
+
"model.layers.68.mlp.up_proj.weight": "pytorch_model-00015-of-00017.bin",
|
662 |
+
"model.layers.68.post_attention_layernorm.weight": "pytorch_model-00014-of-00017.bin",
|
663 |
+
"model.layers.68.self_attn.k_proj.weight": "pytorch_model-00014-of-00017.bin",
|
664 |
+
"model.layers.68.self_attn.o_proj.weight": "pytorch_model-00014-of-00017.bin",
|
665 |
+
"model.layers.68.self_attn.q_proj.weight": "pytorch_model-00014-of-00017.bin",
|
666 |
+
"model.layers.68.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00017.bin",
|
667 |
+
"model.layers.68.self_attn.v_proj.weight": "pytorch_model-00014-of-00017.bin",
|
668 |
+
"model.layers.69.input_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
669 |
+
"model.layers.69.mlp.down_proj.weight": "pytorch_model-00015-of-00017.bin",
|
670 |
+
"model.layers.69.mlp.gate_proj.weight": "pytorch_model-00015-of-00017.bin",
|
671 |
+
"model.layers.69.mlp.up_proj.weight": "pytorch_model-00015-of-00017.bin",
|
672 |
+
"model.layers.69.post_attention_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
673 |
+
"model.layers.69.self_attn.k_proj.weight": "pytorch_model-00015-of-00017.bin",
|
674 |
+
"model.layers.69.self_attn.o_proj.weight": "pytorch_model-00015-of-00017.bin",
|
675 |
+
"model.layers.69.self_attn.q_proj.weight": "pytorch_model-00015-of-00017.bin",
|
676 |
+
"model.layers.69.self_attn.rotary_emb.inv_freq": "pytorch_model-00015-of-00017.bin",
|
677 |
+
"model.layers.69.self_attn.v_proj.weight": "pytorch_model-00015-of-00017.bin",
|
678 |
+
"model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
679 |
+
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00017.bin",
|
680 |
+
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00017.bin",
|
681 |
+
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00017.bin",
|
682 |
+
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
683 |
+
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00002-of-00017.bin",
|
684 |
+
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00002-of-00017.bin",
|
685 |
+
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00002-of-00017.bin",
|
686 |
+
"model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00017.bin",
|
687 |
+
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00002-of-00017.bin",
|
688 |
+
"model.layers.70.input_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
689 |
+
"model.layers.70.mlp.down_proj.weight": "pytorch_model-00015-of-00017.bin",
|
690 |
+
"model.layers.70.mlp.gate_proj.weight": "pytorch_model-00015-of-00017.bin",
|
691 |
+
"model.layers.70.mlp.up_proj.weight": "pytorch_model-00015-of-00017.bin",
|
692 |
+
"model.layers.70.post_attention_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
693 |
+
"model.layers.70.self_attn.k_proj.weight": "pytorch_model-00015-of-00017.bin",
|
694 |
+
"model.layers.70.self_attn.o_proj.weight": "pytorch_model-00015-of-00017.bin",
|
695 |
+
"model.layers.70.self_attn.q_proj.weight": "pytorch_model-00015-of-00017.bin",
|
696 |
+
"model.layers.70.self_attn.rotary_emb.inv_freq": "pytorch_model-00015-of-00017.bin",
|
697 |
+
"model.layers.70.self_attn.v_proj.weight": "pytorch_model-00015-of-00017.bin",
|
698 |
+
"model.layers.71.input_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
699 |
+
"model.layers.71.mlp.down_proj.weight": "pytorch_model-00015-of-00017.bin",
|
700 |
+
"model.layers.71.mlp.gate_proj.weight": "pytorch_model-00015-of-00017.bin",
|
701 |
+
"model.layers.71.mlp.up_proj.weight": "pytorch_model-00015-of-00017.bin",
|
702 |
+
"model.layers.71.post_attention_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
703 |
+
"model.layers.71.self_attn.k_proj.weight": "pytorch_model-00015-of-00017.bin",
|
704 |
+
"model.layers.71.self_attn.o_proj.weight": "pytorch_model-00015-of-00017.bin",
|
705 |
+
"model.layers.71.self_attn.q_proj.weight": "pytorch_model-00015-of-00017.bin",
|
706 |
+
"model.layers.71.self_attn.rotary_emb.inv_freq": "pytorch_model-00015-of-00017.bin",
|
707 |
+
"model.layers.71.self_attn.v_proj.weight": "pytorch_model-00015-of-00017.bin",
|
708 |
+
"model.layers.72.input_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
709 |
+
"model.layers.72.mlp.down_proj.weight": "pytorch_model-00015-of-00017.bin",
|
710 |
+
"model.layers.72.mlp.gate_proj.weight": "pytorch_model-00015-of-00017.bin",
|
711 |
+
"model.layers.72.mlp.up_proj.weight": "pytorch_model-00015-of-00017.bin",
|
712 |
+
"model.layers.72.post_attention_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
713 |
+
"model.layers.72.self_attn.k_proj.weight": "pytorch_model-00015-of-00017.bin",
|
714 |
+
"model.layers.72.self_attn.o_proj.weight": "pytorch_model-00015-of-00017.bin",
|
715 |
+
"model.layers.72.self_attn.q_proj.weight": "pytorch_model-00015-of-00017.bin",
|
716 |
+
"model.layers.72.self_attn.rotary_emb.inv_freq": "pytorch_model-00015-of-00017.bin",
|
717 |
+
"model.layers.72.self_attn.v_proj.weight": "pytorch_model-00015-of-00017.bin",
|
718 |
+
"model.layers.73.input_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
719 |
+
"model.layers.73.mlp.down_proj.weight": "pytorch_model-00016-of-00017.bin",
|
720 |
+
"model.layers.73.mlp.gate_proj.weight": "pytorch_model-00016-of-00017.bin",
|
721 |
+
"model.layers.73.mlp.up_proj.weight": "pytorch_model-00016-of-00017.bin",
|
722 |
+
"model.layers.73.post_attention_layernorm.weight": "pytorch_model-00015-of-00017.bin",
|
723 |
+
"model.layers.73.self_attn.k_proj.weight": "pytorch_model-00015-of-00017.bin",
|
724 |
+
"model.layers.73.self_attn.o_proj.weight": "pytorch_model-00015-of-00017.bin",
|
725 |
+
"model.layers.73.self_attn.q_proj.weight": "pytorch_model-00015-of-00017.bin",
|
726 |
+
"model.layers.73.self_attn.rotary_emb.inv_freq": "pytorch_model-00015-of-00017.bin",
|
727 |
+
"model.layers.73.self_attn.v_proj.weight": "pytorch_model-00015-of-00017.bin",
|
728 |
+
"model.layers.74.input_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
729 |
+
"model.layers.74.mlp.down_proj.weight": "pytorch_model-00016-of-00017.bin",
|
730 |
+
"model.layers.74.mlp.gate_proj.weight": "pytorch_model-00016-of-00017.bin",
|
731 |
+
"model.layers.74.mlp.up_proj.weight": "pytorch_model-00016-of-00017.bin",
|
732 |
+
"model.layers.74.post_attention_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
733 |
+
"model.layers.74.self_attn.k_proj.weight": "pytorch_model-00016-of-00017.bin",
|
734 |
+
"model.layers.74.self_attn.o_proj.weight": "pytorch_model-00016-of-00017.bin",
|
735 |
+
"model.layers.74.self_attn.q_proj.weight": "pytorch_model-00016-of-00017.bin",
|
736 |
+
"model.layers.74.self_attn.rotary_emb.inv_freq": "pytorch_model-00016-of-00017.bin",
|
737 |
+
"model.layers.74.self_attn.v_proj.weight": "pytorch_model-00016-of-00017.bin",
|
738 |
+
"model.layers.75.input_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
739 |
+
"model.layers.75.mlp.down_proj.weight": "pytorch_model-00016-of-00017.bin",
|
740 |
+
"model.layers.75.mlp.gate_proj.weight": "pytorch_model-00016-of-00017.bin",
|
741 |
+
"model.layers.75.mlp.up_proj.weight": "pytorch_model-00016-of-00017.bin",
|
742 |
+
"model.layers.75.post_attention_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
743 |
+
"model.layers.75.self_attn.k_proj.weight": "pytorch_model-00016-of-00017.bin",
|
744 |
+
"model.layers.75.self_attn.o_proj.weight": "pytorch_model-00016-of-00017.bin",
|
745 |
+
"model.layers.75.self_attn.q_proj.weight": "pytorch_model-00016-of-00017.bin",
|
746 |
+
"model.layers.75.self_attn.rotary_emb.inv_freq": "pytorch_model-00016-of-00017.bin",
|
747 |
+
"model.layers.75.self_attn.v_proj.weight": "pytorch_model-00016-of-00017.bin",
|
748 |
+
"model.layers.76.input_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
749 |
+
"model.layers.76.mlp.down_proj.weight": "pytorch_model-00016-of-00017.bin",
|
750 |
+
"model.layers.76.mlp.gate_proj.weight": "pytorch_model-00016-of-00017.bin",
|
751 |
+
"model.layers.76.mlp.up_proj.weight": "pytorch_model-00016-of-00017.bin",
|
752 |
+
"model.layers.76.post_attention_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
753 |
+
"model.layers.76.self_attn.k_proj.weight": "pytorch_model-00016-of-00017.bin",
|
754 |
+
"model.layers.76.self_attn.o_proj.weight": "pytorch_model-00016-of-00017.bin",
|
755 |
+
"model.layers.76.self_attn.q_proj.weight": "pytorch_model-00016-of-00017.bin",
|
756 |
+
"model.layers.76.self_attn.rotary_emb.inv_freq": "pytorch_model-00016-of-00017.bin",
|
757 |
+
"model.layers.76.self_attn.v_proj.weight": "pytorch_model-00016-of-00017.bin",
|
758 |
+
"model.layers.77.input_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
759 |
+
"model.layers.77.mlp.down_proj.weight": "pytorch_model-00016-of-00017.bin",
|
760 |
+
"model.layers.77.mlp.gate_proj.weight": "pytorch_model-00016-of-00017.bin",
|
761 |
+
"model.layers.77.mlp.up_proj.weight": "pytorch_model-00016-of-00017.bin",
|
762 |
+
"model.layers.77.post_attention_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
763 |
+
"model.layers.77.self_attn.k_proj.weight": "pytorch_model-00016-of-00017.bin",
|
764 |
+
"model.layers.77.self_attn.o_proj.weight": "pytorch_model-00016-of-00017.bin",
|
765 |
+
"model.layers.77.self_attn.q_proj.weight": "pytorch_model-00016-of-00017.bin",
|
766 |
+
"model.layers.77.self_attn.rotary_emb.inv_freq": "pytorch_model-00016-of-00017.bin",
|
767 |
+
"model.layers.77.self_attn.v_proj.weight": "pytorch_model-00016-of-00017.bin",
|
768 |
+
"model.layers.78.input_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
769 |
+
"model.layers.78.mlp.down_proj.weight": "pytorch_model-00017-of-00017.bin",
|
770 |
+
"model.layers.78.mlp.gate_proj.weight": "pytorch_model-00017-of-00017.bin",
|
771 |
+
"model.layers.78.mlp.up_proj.weight": "pytorch_model-00017-of-00017.bin",
|
772 |
+
"model.layers.78.post_attention_layernorm.weight": "pytorch_model-00016-of-00017.bin",
|
773 |
+
"model.layers.78.self_attn.k_proj.weight": "pytorch_model-00016-of-00017.bin",
|
774 |
+
"model.layers.78.self_attn.o_proj.weight": "pytorch_model-00016-of-00017.bin",
|
775 |
+
"model.layers.78.self_attn.q_proj.weight": "pytorch_model-00016-of-00017.bin",
|
776 |
+
"model.layers.78.self_attn.rotary_emb.inv_freq": "pytorch_model-00016-of-00017.bin",
|
777 |
+
"model.layers.78.self_attn.v_proj.weight": "pytorch_model-00016-of-00017.bin",
|
778 |
+
"model.layers.79.input_layernorm.weight": "pytorch_model-00017-of-00017.bin",
|
779 |
+
"model.layers.79.mlp.down_proj.weight": "pytorch_model-00017-of-00017.bin",
|
780 |
+
"model.layers.79.mlp.gate_proj.weight": "pytorch_model-00017-of-00017.bin",
|
781 |
+
"model.layers.79.mlp.up_proj.weight": "pytorch_model-00017-of-00017.bin",
|
782 |
+
"model.layers.79.post_attention_layernorm.weight": "pytorch_model-00017-of-00017.bin",
|
783 |
+
"model.layers.79.self_attn.k_proj.weight": "pytorch_model-00017-of-00017.bin",
|
784 |
+
"model.layers.79.self_attn.o_proj.weight": "pytorch_model-00017-of-00017.bin",
|
785 |
+
"model.layers.79.self_attn.q_proj.weight": "pytorch_model-00017-of-00017.bin",
|
786 |
+
"model.layers.79.self_attn.rotary_emb.inv_freq": "pytorch_model-00017-of-00017.bin",
|
787 |
+
"model.layers.79.self_attn.v_proj.weight": "pytorch_model-00017-of-00017.bin",
|
788 |
+
"model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
789 |
+
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00003-of-00017.bin",
|
790 |
+
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00003-of-00017.bin",
|
791 |
+
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00003-of-00017.bin",
|
792 |
+
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00017.bin",
|
793 |
+
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00017.bin",
|
794 |
+
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00017.bin",
|
795 |
+
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00017.bin",
|
796 |
+
"model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00017.bin",
|
797 |
+
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00017.bin",
|
798 |
+
"model.layers.9.input_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
799 |
+
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00003-of-00017.bin",
|
800 |
+
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00003-of-00017.bin",
|
801 |
+
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00003-of-00017.bin",
|
802 |
+
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00003-of-00017.bin",
|
803 |
+
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00003-of-00017.bin",
|
804 |
+
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00003-of-00017.bin",
|
805 |
+
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00003-of-00017.bin",
|
806 |
+
"model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00017.bin",
|
807 |
+
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00003-of-00017.bin",
|
808 |
+
"model.norm.weight": "pytorch_model-00017-of-00017.bin"
|
809 |
+
}
|
810 |
+
}
|
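The weight map above follows the standard Hugging Face sharded-checkpoint index layout: each parameter name is mapped to the shard file that stores it, and `from_pretrained` reads this index to locate the 17 shard files. A minimal sketch of how the index can be inspected offline (illustrative only, not part of the repository; it assumes `pytorch_model.bin.index.json` has been downloaded to the working directory):

```python
import json

# Load the sharded-checkpoint index and look up which shard holds a given parameter.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]
print(weight_map["model.norm.weight"])   # pytorch_model-00017-of-00017.bin
print(len(set(weight_map.values())))     # number of distinct shard files
```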
quantization.py
ADDED
@@ -0,0 +1,124 @@
import bz2
import torch
import base64
import ctypes
from transformers.utils import logging
from typing import List

logger = logging.get_logger(__name__)

try:
    from cpm_kernels.kernels.base import LazyKernelCModule, KernelFunction, round_up

    class Kernel:
        def __init__(self, code: bytes, function_names: List[str]):
            self.code = code
            self._function_names = function_names
            self._cmodule = LazyKernelCModule(self.code)

            for name in self._function_names:
                setattr(self, name, KernelFunction(self._cmodule, name))

+
quantization_code = "QlpoOTFBWSZTWapgbn4ALTZ/////////9f/n9+/r/v//3/Tt7cDwfe5sdXXdZNR/9++P4BkfAfIVSVACQUFCEIJQoJUqkgAACJRSqAAAQgEqKqqoKoAAVINADQDTCMhoAGCZBpoNMg0Bk0AGRiGmQBghkNGmgAaGgGhkyAAADJoMhBoAaAaYRkNAAwTINNBpkGgMmgAyMQ0yAMEMho00ADQ0A0MmQAAAZNBkINADQDTCMhoAGCZBpoNMg0Bk0AGRiGmQBghkNGmgAaGgGhkyAAADJoMhBoAaAaYRkNAAwTINNBpkGgMmgAyMQ0yAMEMho00ADQ0A0MmQAAAZNBkCapJBTEyQ0nqeSZioDeqNlG1PU2o9Go9TTQ09QaPUD0CPSAA9Q9QPKA0AGQaANHqBo0GgNGgB6gaApSSCAJiIwmhlNGmjEmCYmTKbSnqP1J6m8gp56ppN6pp5R7U9Q9JMjxTbIk9Mj1BtUMCeoaAB+qNHqAyaNpHtoqfqcgV6L7jHRddxuUdeeq6Vy1s1q0PGdR98uY7Jv2di02u25eLvCuiUfprinG43A5DrtXzq4XdXQ7HSYXSK3F/dxtXgcbZptcVjbJtpppp/Jf7H57vPBXRiuNZXU0OXvTI3V0W5xmyvSd95rhM4s5HF6D4c/uT8etNOq3zqacJzz03vLmXS4Ti2aY77qnmeN2ze315l+oX4t89/x/uvSeN03beGb50V1vfv6lfNn12n6b0Nr8O/F31q1q6ToOydk+ONGjRhho7FPPyPwV4R1FsuRZeCbL18PZnyFyjkGq+Ies8bnvM1MaZeeT30nU/N9Pp+n+f+V+X3gHeXPR8p8ufwnqncyf4K4K2m5qaj1z2mi04YafvWjcbODfNTcysmVvfrTZjhVkHhejPQeOeZ4p5nknpOSZORxb5wdM783XnrL1K6lli2Oqt87d6wxoq9A/sudf+W6sMUfSaLwsnTN1/8P6Uyxj/fPzay3SR2UxR1BeGaFV7DFFecsUrcenP69fnv+pkybyK9J3V5ZpqYxqanlMnjm+OqO56ZxeWPhrS0vUt7DlmN1p3pckN+KtJYG+cJ3nHdBXlG5cTg3Ttraw5zi0vVz7jzst5xk3rRqVcjCLknW9BsR79+RYyYzSuLBWVvm9o+83EtLZ672k6tvu1Xo2MWpHxd6r2g8tdw5n8p/3GOByV4H8N+K7p+W5DDcuDhPi+JjMY6U+y/sP1GOntZY8C6+dj0maa1YYysvzXR0N1jWrucLof8TTZyB45+y6nFjg/z69n0TMdBu1kbTHvWR3mZvN+zc00PBYssS6HoPcW7g7bUzKwy1Y9c5ji4ufZYcurGDGGMzGMZTGVjLLBXdcro2lT5SwSfEmSqfBMpVxLGGTKxkwsr2E0T3yYkH6DPqsYy0zTGmO8ZS4Xv7G1YxjaloxJ3Y4vrmqo5m5qT7Jbm/aIca41org/rW6bJxTDgY7u31qhvMTpViiu/uvaK4LEptss0nNYk08D89snSsWLBPxFinvnjR7TGODdV8KyfMup+o2rmRjGLK/ErRxyWPKtRuYxh6DFteVdC1NnqHf1F02DtrErv3bdBsZeZP0VtpNLSxiNYV8BZDqLJ51g9ZXYmWJss2t9i4OJ8hbU3uNobm1cLibk4le1PGjxY2jnS8kcJP1JZLuhncnnjScVhX7Bf0TVXyy+CakeSMk9Bo1X0JyPKbQsrvLVLhWJTwGRXlsVdtcvwGMcN0nrsocD6Y9htSjvMReLCx9i9PaU+wwScpkj02EXpV51fKk2qHW7zRTUydk9k5W1epWXltGktGU6S7DMsmWWJcxkc1W0m6PhRhYsCxZL3C7btuqrcq3WL8A63U2WP1WO1tL4NZJvna1XblwdrlxtQ2d9p1f2WLBlqpi5Wi8U4LVjnY/SYXU3Rq4xpUPZuStUdjrvTi8StNHaOS7VVxVwrt2rmrrDTmS5jRWjE5phjDkrFwum1W9LhYai4OuNSca3JbzDYqWjAuPPedpdWzRlWWKbqV5NcDUjDI5JhoyXKsfnPOvymmOFW5RwmTzrLo78mvRdl+A4LhXCuC6vWrDsaNDNrSw4zDS9V6q3t0my3XDK1bB0VlTaHI5myxblfqn9ph0VwriA4jHMZYZRcito3m83W+sWTG4piuqytrJYLFtq1ZmWrWlixZLGGq4XdDRxOJqjz4vZWR58ZXfRlGoy/KlkNRqlvj8mS2W5ZZVe7NHKN9cpyngzbY2NWsWdWrzPM7/3H1f0/1nJV/9OXU6w0yw8NtNYGbJeJjZbXUZjzMeJuvKYvgNNGMaZuTE3G9h8dPBN28mo8L0a3E7lcK0NKyi98dIXNfoHSPUHGcbJ4U945n0jGn0V2D//d45VNjue9eC1k+Vew1Yw2N3FPZbq2sq3LFyOg8j77ctzkUeAxWLCauB1upch/ib0/JMowwyxNjIY5jUuaOjyVs6KsrnhzGczEtssWSy3WZjGWquxNDRgyMpjDEwWW1qWjKxZczLanB0vvuh6Det7gOpYzGctNLLdWaNMzTLS1WYmGWDbGGlq5lhpuatWtGpWI3GW0ppqqmcV2nfcW9Li52FojtWJXwcFeN7E/Dr48+G+292xp/hf5Z+M/4HRWXtpdq/jX+N5q8ReyK6tHoXyi1DsX0zwW1tlcHkN7H7C7l5a1au+sfwL4N/guN5li+BfvL3t7w+nPeP6Ttr6k/Xfqz91+3P3z2z+uXJWj9Gp9lZtXhpYeRf16RelfDx5tbPBs9IxVdowJ7d8Vgww0X23RWkyspfOYn4RMfiPntI+pVl819O+l+A+E3Mzsf5181q97fOMfUTc3ORjmhxXpjFi0dN4LXdsuOq0undqvur5Jj131ThWPoL9ReB1dK/E415JMC+Sb00nVdHXOqch5ZwTvpe7ucytpPsa61y1zLdMcNKfhmzcdLdGVkaYqaMF9A4tUri6XUxtI6jjNhyzlZvmuFbm0NWBqc60TVdl+df6LtVc9bzhGLmSxLtutNy6gwp1kuFcK61ul4w55LtqtVbE/kNzSvCyR4THEyI3MJyMFYeU1OUMlY7n6A69wOl2Hm2bRt2fRlz210T7Lc4hvrIOFXItTLIwadavpRz19Z+6uVe6/vLwq9B6DnH1ZNNy7l1suc1Xk8Tsq83CrvPCP5zXUaa7xyOLzPx3nfoOqNMqu5mzLXpOB/qHkk3sk00mzELVhJwq5G5vV3q3rV31jRjRpXKYlcjkYfTPVN0e+dTi7Ha6nA4Pf16L+0+S3vxntMeQ94dFs3PbPVaaY0eJ4n12LDmerNzmdVeo3jZ0Wn6TztPAY7nfr8r+XufE+rp/K9z+XyPC3zD0WGP902PXvh9ravQvM7fjz584OBXqDj4V5K9ls8TG3C9byv9tsbzDmcrGcWr23E5XqOven2Hqtzmb/OdBj4Drb3oZvxh81qTZstK3MaYx7xps9tonI3mqnla6q+BuWx2T+O/vv8b3vl9Zvdb+/XMruD0z4rpdzQxeVbOR3q5J3THBvPA3tzyP8BTnGH3jENMMyxlMsxg0sxi
1aasZS633V9BdDpYx0nq+E6HvHS1fMdRp53c077vvVNjnnfdbZ52HQ6GnOx52mlxMrifkJ3P4H333X1n663ud4n8Jzl5TmdDyPmF+c/cd5xcgD8x0v9puXlOx0HfeL4ryD2fBXTXbX0a+jX5E9weUcTicTicTicTicFXA4HA4HA4HhXm+VetOVcxh4p3LV5Myp1UvGPRPcnqD8Q2NjY2NjY2NjhXuUserunePXK67236bxvkMfCcyu2uZ8dy258NqOlwrcHJZecxORjTpelVvbljocqbI53xWZjlcXl+W+Yuze35blS9+MVctlmKc2VdNh8rjTwHcX5rdXynpPYdFmPw30HBte5bNNm+1egy2vC8TqbNNmm1dK8VtX/s+e/i9jdU8BiDhed2um3V2saPM4ydlp2Mb2m+652Ox8KsbrLlnh5KPVGSded05W584ceNxVev7y3XJbo7DptKOplXrsE6WSfEN903xvyZR8B8R8R73dJyhwjtPcJ6NNyK8lWI3PXbm0bOq7/HpmrdUdC4GjvbLjYsWJ7z4J49j4Vo0f4U/bb6zMf9OGlu3zJ/2za/xP8fG4eAOc7E6ho+uWLqGPFxd727L8aq5f/b0n/MtrjX1r+Bafhm3Ce5WmVixZWNPuNGPdvj2x7rLgZcMMmrlO0Buc7fml0arVwuE47mYw0maYynSMORNTUbjhnI1bGN1XKP2zLGpfJb8ZlwXO0ZjGlz4f1l0LhG6MXzHYHcZVt+EaTxlj/cMm5h+cyHrvzGX8SenN09OZOVkYNPLfqk3e4+8WoXmr5NfbH0XD8iT435PY4uVgxo0000f8q/FvjG+vEzGSuxcLV75vrU8aT9K/GclTcTJMFjIc75taDx2DnXPbrRtjv5qnjxMrpco5Z+NacOENnW0bORzOsxwcH0HFcjneNpC/iri4t9cCfzq/erS70xHTX753jYBxrBisj1cU6LIO2bpNo2V9Qak1TF4H2k8rHsNMewfOXyvwr+McsncOT1j/vObbPYPsLvsHx33mn8R7r+ibzH9Ns7jZp8e/eb3zXBwc5+y24vmOZztqvkN6z2j13or4K9dXiV76vXeeGq9k4GyppTHvZuT4Qr5F+fW9Hxji+kux1Sb57OC+57rR7rBvmJMGTHTVonx3M7xsbN5pvPSq1VirJNVqrpbDtna3zYnfXzK/91fTVP8qb02TYjUFZGUskslT6Bf0WK9rHvY+wN1irdG6NVujaNo1G0exl3aXjnQuQdEyVo7vkktptbrL2024zUv+KZGSslZGRhixZYXVPcHSrpLIFi0/qIG5xjzv7xzvEY3SfivEafafzD+Y0MdLGnGuxWmzHQ/0Di2O9VXI2cF3pzXLdGTnzWtD4T5DsW1eCudDg4ty9qd4/sOJeivWODcvq1xbLmfj1qub6ldHQ8LsOCr126tLjw5utuV6ECx0XBrOH2XlfwMaae0556C/nmGMbDTD9+sDmI/BvcvktXvlb1v+adfWu46W4Z6S0cFvfra47b16DcvCcjg2bMq4OXfbQew9sYxjGLSfaXz13Lr21lllqy2twX75Vfu2AyH0YPf24+yjorQ5+8h0S7dR/rH2qVqDdXUk+Sk5IxXPQf9ir234LZzKm8dy41xkt0PgR76HjQ6JL3SqmSOtJ3UHKcRXRKn+Rsf8j7ir7kC6/UfLOA/TUPSgXdAu4V0VV1q4zxsP4xznEWxYnoSP1ivCrxq7wNVTdC0D0lVOeCvEqOi3R3sLnna9RMsY+MamYxjUf2V661P47v2nXtXzs5wetm+bTIdBwh6e+WvvbVc+bZP7Yy1YvBXOqxJ30PIV7fuqx9SMmMMY7LVjG+S6IyPiqp4Eakui7CvM7Kv2EnFUjjAvfFewVwVOSS7Mao9TDxTlVfClhkspcsB2PcfAe8nYOww6SmqwRlZCysNdLaej4FTwo9nwXBZaWl5uatmmrfbrZbHBYNmrFtbJ7xtbTG6xYerb7LL2UmT9ur54D+5V6tPU3ozGSyZPEV4S6JYfyT7qaNGzVNDs8Feer6i9gfBrFpbzjfYr0Z4huT5zJk5VWwfzX45uNljyNDG05zZZKdtWi/ecFpYbIu+m0cllZZVlZHQPYDnh469YW+YyzGOs1S5DzmlyGLc62l2rDyDKuQ9jai4VknV0tK2tS0sZjDWVzstxY3VaP4afXf6L6++rhYwx42nFYXeLpTqVp3DHVMeFwJplHv2RyzDe41pTmr9dquVk5WI+7tqN7xMY51NrCwxYxWMMMUODVd94IflreuiuDS4Q8B+01PdWIeR7Z2PSboDmZBzTZbm0vlm+YamDe71cHDeLe3NLQx35tNze/Zd4OtZJ4H5jleDtaO1dLTc/Yf6Tit0uDFV/VZHc77k3njf+BwXQY7l3VpE6HQ1bjTVdzLwTiWkXW61jv/bfz2MftP2mn8hsbNmnfe1dlci/cpeBZe9O9Mb3ad5J4ZMoeVvXnvE3Vf8IfuPGfXMf/K309N6h2v/Rji5HmfdXovWedzucx3N7e+s7GzZvK6n3jxvabNmzc+0/BWl3nefl8G5cr7RzMaaP8jFsvuXMcFe0713O7ZfWn3pPaq4rdKejPByrzPUN6cBH6LZsTzcXrHkrrdfMONXA5hhyybH4Ru4LktC3T4dvvdXM7ThXPdPp7m9yTlH6VdKvbzBd4ZjJll3nAi+2sX31iU4m52d+92rlq6g9cftu+HkO1bx/+LuSKcKEhVMDc/A="

    kernels = Kernel(
        bz2.decompress(base64.b64decode(quantization_code)),
        [
            "weightInt8_int4",
            "weightInt4_fp16",
            "weightInt4_bf16"
        ],
    )
except Exception as exception:
    kernels = None
    logger.warning("Failed to load cpm_kernels:" + str(exception))


def quantize_int8(weight: torch.Tensor, bit_length: int):
    # Per-row absmax quantization: one float32 scale per output row.
    weight_scale = weight.abs().max(dim=-1).values / ((2 ** (bit_length - 1)) - 1)
    weight_scale = weight_scale.to(torch.float32)

    weight = torch.round(weight.to(weight_scale.dtype) / weight_scale[:, None]).to(torch.int8)
    return weight, weight_scale


def compress_int4_weight(weight: torch.Tensor):
    # Pack two int4 values into each int8 byte using the CUDA kernel.
    with torch.cuda.device(weight.device):
        num_row, num_chan = weight.size(0), weight.size(1)
        num_chan = num_chan // 2

        int8_weight = torch.empty(num_row, num_chan, dtype=torch.int8, device="cuda")
        stream = torch.cuda.current_stream()
        dim_grid = (num_row, 1, 1)
        dim_block = (min(round_up(num_chan, 32), 1024), 1, 1)

        kernels.weightInt8_int4(
            dim_grid,
            dim_block,
            0,
            stream,
            [
                ctypes.c_void_p(weight.data_ptr()),
                ctypes.c_void_p(int8_weight.data_ptr()),
                ctypes.c_int32(num_row),
                ctypes.c_int32(num_chan)
            ],
        )

        return int8_weight


def dequantize_float(weight: torch.Tensor, weight_scale: torch.Tensor, bit_length: int, input: torch.Tensor):
    # Recover a float weight matrix in the dtype of `input`; the int8 path is plain
    # PyTorch, the int4 path goes through the CUDA kernels.
    if bit_length == 8:
        float_weight = weight.to(input.dtype) * weight_scale.to(input.dtype)[:, None]
        return float_weight

    assert bit_length == 4, f"unsupported bit length: {bit_length}"

    func = (
        kernels.weightInt4_fp16 if input.dtype == torch.half else kernels.weightInt4_bf16
    )
    with torch.cuda.device(weight.device):
        num_row, num_chan = weight.size(0), weight.size(1)

        float_weight = torch.empty(num_row, num_chan * 2, dtype=input.dtype, device="cuda")
        stream = torch.cuda.current_stream()
        dim_grid = (num_row, 1, 1)
        dim_block = (min(round_up(num_chan, 32), 1024), 1, 1)

        func(
            dim_grid,
            dim_block,
            0,
            stream,
            [
                ctypes.c_void_p(weight.data_ptr()),
                ctypes.c_void_p(weight_scale.data_ptr()),
                ctypes.c_void_p(float_weight.data_ptr()),
                ctypes.c_int32(num_row),
                ctypes.c_int32(num_chan),
            ],
        )
        return float_weight


class QuantizationLinear(torch.nn.Module):
    # Linear layer that stores quantized weights and dequantizes them on the fly in forward().
    def __init__(self, bit_length: int, weight: torch.Tensor, device="cuda"):
        super().__init__()

        self.bit_length = bit_length

        weight, weight_scale = quantize_int8(weight=weight, bit_length=bit_length)
        if bit_length == 4:
            weight = compress_int4_weight(weight)

        self.weight = torch.nn.Parameter(weight.to(device), requires_grad=False)
        self.weight_scale = torch.nn.Parameter(weight_scale.to(device), requires_grad=False)

    def forward(self, input: torch.Tensor):
        input_size = input.size()

        input = input.contiguous().view(-1, input.size(-1))
        original_weight = dequantize_float(self.weight, self.weight_scale, self.bit_length, input)

        output = torch.matmul(input, original_weight.t())
        return output.view(*(input_size[:-1] + (self.weight.size(0),)))
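quantization.py therefore implements per-row absmax quantization: `quantize_int8` stores one float32 scale per output row, 4-bit weights are additionally packed two-per-byte by the CUDA kernel behind `compress_int4_weight`, and `QuantizationLinear` dequantizes on the fly during `forward`. A minimal CPU-only sketch of the int8 round trip (illustrative, not part of the repository; it assumes quantization.py is importable from the working directory, and the 4-bit path would additionally require CUDA and cpm_kernels):

```python
import torch
from quantization import quantize_int8, dequantize_float  # assumes quantization.py is on the path

w = torch.randn(8, 16, dtype=torch.float16)        # a toy weight matrix
q, scale = quantize_int8(w, bit_length=8)          # int8 weights + per-row fp32 scales
ref = torch.empty(1, dtype=torch.float16)          # only its dtype is read on the int8 path
w_hat = dequantize_float(q, scale, 8, ref)         # reconstruct a fp16 weight matrix
print((w.float() - w_hat.float()).abs().max())     # small round-trip quantization error
```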
special_tokens_map.json
ADDED
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,5 @@
{
  "clean_up_tokenization_spaces": true,
  "model_max_length": 1000000000000000019884624838656,
  "tokenizer_class": "PreTrainedTokenizerFast"
}