kvaishnavi committed on
Commit
aded733
•
1 Parent(s): bb34c77

Upload Phi-3.5-mini-instruct ONNX models for GPUs

README.md CHANGED
@@ -7,6 +7,7 @@ inference: false
 
 # Phi-3.5-Mini-Instruct ONNX models
 This repository hosts the optimized versions of [Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) to accelerate inference with ONNX Runtime.
+
 Optimized Phi-3.5 Mini models are published here in [ONNX](https://onnx.ai) format to run with [ONNX Runtime](https://onnxruntime.ai/) on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets.
 
 To easily get started with Phi-3.5, you can use our newly introduced ONNX Runtime Generate() API. See [here](https://aka.ms/generate-tutorial) for instructions on how to run it.
@@ -14,15 +15,14 @@ To easily get started with Phi-3.5, you can use our newly introduced ONNX Runtim
 ## ONNX Models
 Here are some of the optimized configurations we have added:
 
-1. ONNX model for fp16 CUDA: ONNX model you can use to run for your NVIDIA GPUs.
-2. ONNX model for int4 CUDA: ONNX model for NVIDIA GPUs using int4 quantization via AWQ.
-3. ONNX model for int4 CPU and Mobile: ONNX model for CPU and mobile using int4 quantization via AWQ.
+1. ONNX model for INT4 CPU: ONNX model for CPUs using int4 quantization via AWQ.
+2. ONNX model for INT4 GPU: ONNX model for GPUs using int4 quantization via AWQ.
 
 ## Model Summary
-Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning-dense data. The model belongs to the Phi-3 model family and supports a 128K token context length. The model underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
+Phi-3.5 mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning-dense data. The model belongs to the Phi-3 model family and supports a 128K token context length. The model underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
 
 ## Intended Uses
-The Phi 3.5 model is intended for commercial and research use in multiple languages. It is designed for general purpose AI systems and applications which require:
+The Phi 3.5 mini model is intended for commercial and research use in multiple languages. It is designed for general purpose AI systems and applications which require:
 
 1. Memory/compute constrained environments
 2. Latency bound scenarios
@@ -30,6 +30,7 @@ The Phi 3.5 model is intended for commercial and research use in multiple langua
 
 ## Use Case Considerations
 Phi 3.5 models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
+
 Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
 
 ## Release Notes
@@ -56,7 +57,7 @@ Minimum Configuration Required:
 - **Model Description:** This is a conversion of the Phi-3.5 Mini-Instruct model for ONNX Runtime inference.
 
 ## How to Get Started with the Model
-To make it possible to run the Phi-3 models across a range of devices and platforms with various execution provider backends, we introduce a new API that wraps several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX Runtime, follow the steps [here](http://aka.ms/generate-tutorial).
+To make it possible to run the Phi-3.5 models across a range of devices and platforms with various execution provider backends, we introduce a new API that wraps several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX Runtime, follow the steps [here](http://aka.ms/generate-tutorial).
 
 For example:
 
@@ -152,9 +153,10 @@ The table below shows the average throughput of the first 256 tokens generated (
 |----------------------------|----------|
 | torch                      | 2.4.1    |
 | triton                     | 3.0.0    |
-| onnxruntime-gpu            | 1.19.2   |
-| onnxruntime-genai          | 0.4.0    |
-| onnxruntime-genai-cuda     | 0.4.0    |
+| onnxruntime-gpu            | 1.20.1   |
+| onnxruntime-genai          | 0.5.2    |
+| onnxruntime-genai-cuda     | 0.5.2    |
+| onnxruntime-genai-directml | 0.5.2    |
 | transformers               | 4.44.2   |
 | llama.cpp                  | bdf314f38a2c90e18285f7d7067e8d736a14000a |
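The README's "For example" section is truncated by the hunk above. For orientation, here is a minimal sketch of the Generate() API flow it refers to, written against the onnxruntime-genai 0.5.x Python package listed in the version table; the model path, prompt, and search values are illustrative assumptions, not values fixed by this commit.

```python
import onnxruntime_genai as og

# Minimal sketch, assuming the gpu/gpu-int4-awq-block-128 folder from this repo
# has been downloaded locally and onnxruntime-genai-cuda (or the DirectML
# flavor) is installed.
model = og.Model("gpu/gpu-int4-awq-block-128")  # folder containing genai_config.json
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Phi-3.5 chat format, per the chat_template in tokenizer_config.json below.
prompt = "<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=512)  # override the 131072 default
generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode(prompt))

while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```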
 
cuda/cuda-fp16/genai_config.json DELETED
@@ -1,59 +0,0 @@
-{
-    "model": {
-        "bos_token_id": 1,
-        "context_length": 131072,
-        "decoder": {
-            "session_options": {
-                "log_id": "onnxruntime-genai",
-                "provider_options": [
-                    {
-                        "cuda": {
-                            "enable_cuda_graph": "0"
-                        }
-                    }
-                ]
-            },
-            "filename": "phi-3.5-mini-instruct-cuda-fp16.onnx",
-            "head_size": 96,
-            "hidden_size": 3072,
-            "inputs": {
-                "input_ids": "input_ids",
-                "attention_mask": "attention_mask",
-                "past_key_names": "past_key_values.%d.key",
-                "past_value_names": "past_key_values.%d.value"
-            },
-            "outputs": {
-                "logits": "logits",
-                "present_key_names": "present.%d.key",
-                "present_value_names": "present.%d.value"
-            },
-            "num_attention_heads": 32,
-            "num_hidden_layers": 32,
-            "num_key_value_heads": 32
-        },
-        "eos_token_id": [
-            32007,
-            32001,
-            32000
-        ],
-        "pad_token_id": 32000,
-        "type": "phi3",
-        "vocab_size": 32064
-    },
-    "search": {
-        "diversity_penalty": 0.0,
-        "do_sample": false,
-        "early_stopping": true,
-        "length_penalty": 1.0,
-        "max_length": 131072,
-        "min_length": 0,
-        "no_repeat_ngram_size": 0,
-        "num_beams": 1,
-        "num_return_sequences": 1,
-        "past_present_share_buffer": true,
-        "repetition_penalty": 1.0,
-        "temperature": 1.0,
-        "top_k": 1,
-        "top_p": 1.0
-    }
-}
cuda/cuda-fp16/phi-3.5-mini-instruct-cuda-fp16.onnx.data DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ec1a077e5ebf1072ef95af5712951af20ce33b003a53717a134b4522b6104067
-size 7642159104
cuda/cuda-fp16/special_tokens_map.json DELETED
@@ -1,30 +0,0 @@
-{
-    "bos_token": {
-        "content": "<s>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    },
-    "eos_token": {
-        "content": "<|endoftext|>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    },
-    "pad_token": {
-        "content": "<|endoftext|>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    },
-    "unk_token": {
-        "content": "<unk>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    }
-}
cuda/cuda-int4-awq-block-128/config.json DELETED
@@ -1,138 +0,0 @@
-{
-    "_name_or_path": "Phi-3.5-mini-instruct",
-    "architectures": [
-        "Phi3ForCausalLM"
-    ],
-    "attention_dropout": 0.0,
-    "auto_map": {
-        "AutoConfig": "configuration_phi3.Phi3Config",
-        "AutoModelForCausalLM": "modeling_phi3.Phi3ForCausalLM"
-    },
-    "bos_token_id": 1,
-    "embd_pdrop": 0.0,
-    "eos_token_id": 32000,
-    "hidden_act": "silu",
-    "hidden_size": 3072,
-    "initializer_range": 0.02,
-    "intermediate_size": 8192,
-    "max_position_embeddings": 131072,
-    "model_type": "phi3",
-    "num_attention_heads": 32,
-    "num_hidden_layers": 32,
-    "num_key_value_heads": 32,
-    "original_max_position_embeddings": 4096,
-    "pad_token_id": 32000,
-    "resid_pdrop": 0.0,
-    "rms_norm_eps": 1e-05,
-    "rope_scaling": {
-        "long_factor": [
-            1.0800000429153442,
-            1.1100000143051147,
-            1.1399999856948853,
-            1.340000033378601,
-            1.5899999141693115,
-            1.600000023841858,
-            1.6200000047683716,
-            2.620000123977661,
-            3.2300000190734863,
-            3.2300000190734863,
-            4.789999961853027,
-            7.400000095367432,
-            7.700000286102295,
-            9.09000015258789,
-            12.199999809265137,
-            17.670000076293945,
-            24.46000099182129,
-            28.57000160217285,
-            30.420001983642578,
-            30.840002059936523,
-            32.590003967285156,
-            32.93000411987305,
-            42.320003509521484,
-            44.96000289916992,
-            50.340003967285156,
-            50.45000457763672,
-            57.55000305175781,
-            57.93000411987305,
-            58.21000289916992,
-            60.1400032043457,
-            62.61000442504883,
-            62.62000274658203,
-            62.71000289916992,
-            63.1400032043457,
-            63.1400032043457,
-            63.77000427246094,
-            63.93000411987305,
-            63.96000289916992,
-            63.970001220703125,
-            64.02999877929688,
-            64.06999969482422,
-            64.08000183105469,
-            64.12000274658203,
-            64.41000366210938,
-            64.4800033569336,
-            64.51000213623047,
-            64.52999877929688,
-            64.83999633789062
-        ],
-        "short_factor": [
-            1.0,
-            1.0199999809265137,
-            1.0299999713897705,
-            1.0299999713897705,
-            1.0499999523162842,
-            1.0499999523162842,
-            1.0499999523162842,
-            1.0499999523162842,
-            1.0499999523162842,
-            1.0699999332427979,
-            1.0999999046325684,
-            1.1099998950958252,
-            1.1599998474121094,
-            1.1599998474121094,
-            1.1699998378753662,
-            1.2899998426437378,
-            1.339999794960022,
-            1.679999828338623,
-            1.7899998426437378,
-            1.8199998140335083,
-            1.8499997854232788,
-            1.8799997568130493,
-            1.9099997282028198,
-            1.9399996995925903,
-            1.9899996519088745,
-            2.0199997425079346,
-            2.0199997425079346,
-            2.0199997425079346,
-            2.0199997425079346,
-            2.0199997425079346,
-            2.0199997425079346,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0299997329711914,
-            2.0799996852874756,
-            2.0899996757507324,
-            2.189999580383301,
-            2.2199995517730713,
-            2.5899994373321533,
-            2.729999542236328,
-            2.749999523162842,
-            2.8399994373321533
-        ],
-        "type": "longrope"
-    },
-    "rope_theta": 10000.0,
-    "sliding_window": 262144,
-    "tie_word_embeddings": false,
-    "torch_dtype": "bfloat16",
-    "transformers_version": "4.43.3",
-    "use_cache": true,
-    "attention_bias": false,
-    "vocab_size": 32064
-}
cuda/cuda-int4-awq-block-128/configuration_phi3.py DELETED
@@ -1,227 +0,0 @@
-# coding=utf-8
-# Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-""" Phi-3 model configuration"""
-
-
-from transformers.configuration_utils import PretrainedConfig
-from transformers.utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-PHI3_PRETRAINED_CONFIG_ARCHIVE_MAP = {
-    "microsoft/Phi-3-mini-4k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/config.json",
-    "microsoft/Phi-3-mini-128k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/config.json",
-}
-
-
-class Phi3Config(PretrainedConfig):
-    r"""
-    This is the configuration class to store the configuration of a [`Phi3Model`]. It is used to instantiate a Phi-3
-    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
-    defaults will yield a similar configuration to that of the
-    [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
-
-    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
-    documentation from [`PretrainedConfig`] for more information.
-
-    Args:
-        vocab_size (`int`, *optional*, defaults to 32064):
-            Vocabulary size of the Phi-3 model. Defines the number of different tokens that can be represented by the
-            `inputs_ids` passed when calling [`Phi3Model`].
-        hidden_size (`int`, *optional*, defaults to 3072):
-            Dimension of the hidden representations.
-        intermediate_size (`int`, *optional*, defaults to 8192):
-            Dimension of the MLP representations.
-        num_hidden_layers (`int`, *optional*, defaults to 32):
-            Number of hidden layers in the Transformer decoder.
-        num_attention_heads (`int`, *optional*, defaults to 32):
-            Number of attention heads for each attention layer in the Transformer decoder.
-        num_key_value_heads (`int`, *optional*):
-            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
-            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
-            `num_key_value_heads=1 the model will use Multi Query Attention (MQA) otherwise GQA is used. When
-            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
-            by meanpooling all the original heads within that group. For more details checkout [this
-            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
-            `num_attention_heads`.
-        resid_pdrop (`float`, *optional*, defaults to 0.0):
-            Dropout probability for mlp outputs.
-        embd_pdrop (`int`, *optional*, defaults to 0.0):
-            The dropout ratio for the embeddings.
-        attention_dropout (`float`, *optional*, defaults to 0.0):
-            The dropout ratio after computing the attention scores.
-        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
-            The non-linear activation function (function or string) in the decoder.
-        max_position_embeddings (`int`, *optional*, defaults to 4096):
-            The maximum sequence length that this model might ever be used with.
-        original_max_position_embeddings (`int`, *optional*, defaults to 4096):
-            The maximum sequence length that this model was trained with. This is used to determine the size of the
-            original RoPE embeddings when using long scaling.
-        initializer_range (`float`, *optional*, defaults to 0.02):
-            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
-        rms_norm_eps (`float`, *optional*, defaults to 1e-05):
-            The epsilon value used for the RMSNorm.
-        use_cache (`bool`, *optional*, defaults to `True`):
-            Whether or not the model should return the last key/values attentions (not used by all models). Only
-            relevant if `config.is_decoder=True`. Whether to tie weight embeddings or not.
-        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
-            Whether to tie weight embeddings
-        rope_theta (`float`, *optional*, defaults to 10000.0):
-            The base period of the RoPE embeddings.
-        rope_scaling (`dict`, *optional*):
-            The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
-            contain the following keys: `type`, `short_factor` and `long_factor`. The `type` must be `longrope` and
-            the `short_factor` and `long_factor` must be lists of numbers with the same length as the hidden size
-            divided by the number of attention heads divided by 2.
-        bos_token_id (`int`, *optional*, defaults to 1):
-            The id of the "beginning-of-sequence" token.
-        eos_token_id (`int`, *optional*, defaults to 32000):
-            The id of the "end-of-sequence" token.
-        pad_token_id (`int`, *optional*, defaults to 32000):
-            The id of the padding token.
-        sliding_window (`int`, *optional*):
-            Sliding window attention window size. If `None`, no sliding window is applied.
-
-    Example:
-
-    ```python
-    >>> from transformers import Phi3Model, Phi3Config
-
-    >>> # Initializing a Phi-3 style configuration
-    >>> configuration = Phi3Config.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
-
-    >>> # Initializing a model from the configuration
-    >>> model = Phi3Model(configuration)
-
-    >>> # Accessing the model configuration
-    >>> configuration = model.config
-    ```"""
-
-    model_type = "phi3"
-    keys_to_ignore_at_inference = ["past_key_values"]
-
-    def __init__(
-        self,
-        vocab_size=32064,
-        hidden_size=3072,
-        intermediate_size=8192,
-        num_hidden_layers=32,
-        num_attention_heads=32,
-        num_key_value_heads=None,
-        resid_pdrop=0.0,
-        embd_pdrop=0.0,
-        attention_dropout=0.0,
-        hidden_act="silu",
-        max_position_embeddings=4096,
-        original_max_position_embeddings=4096,
-        initializer_range=0.02,
-        rms_norm_eps=1e-5,
-        use_cache=True,
-        tie_word_embeddings=False,
-        rope_theta=10000.0,
-        rope_scaling=None,
-        bos_token_id=1,
-        eos_token_id=32000,
-        pad_token_id=32000,
-        sliding_window=None,
-        **kwargs,
-    ):
-        self.vocab_size = vocab_size
-        self.hidden_size = hidden_size
-        self.intermediate_size = intermediate_size
-        self.num_hidden_layers = num_hidden_layers
-        self.num_attention_heads = num_attention_heads
-
-        if num_key_value_heads is None:
-            num_key_value_heads = num_attention_heads
-
-        self.num_key_value_heads = num_key_value_heads
-        self.resid_pdrop = resid_pdrop
-        self.embd_pdrop = embd_pdrop
-        self.attention_dropout = attention_dropout
-        self.hidden_act = hidden_act
-        self.max_position_embeddings = max_position_embeddings
-        self.original_max_position_embeddings = original_max_position_embeddings
-        self.initializer_range = initializer_range
-        self.rms_norm_eps = rms_norm_eps
-        self.use_cache = use_cache
-        self.rope_theta = rope_theta
-        self.rope_scaling = rope_scaling
-        self._rope_scaling_adjustment()
-        self._rope_scaling_validation()
-        self.sliding_window = sliding_window
-
-        super().__init__(
-            bos_token_id=bos_token_id,
-            eos_token_id=eos_token_id,
-            pad_token_id=pad_token_id,
-            tie_word_embeddings=tie_word_embeddings,
-            **kwargs,
-        )
-
-    def _rope_scaling_adjustment(self):
-        """
-        Adjust the `type` of the `rope_scaling` configuration for backward compatibility.
-        """
-        if self.rope_scaling is None:
-            return
-
-        rope_scaling_type = self.rope_scaling.get("type", None)
-
-        # For backward compatibility if previous version used "su" or "yarn"
-        if rope_scaling_type is not None and rope_scaling_type in ["su", "yarn"]:
-            self.rope_scaling["type"] = "longrope"
-
-    def _rope_scaling_validation(self):
-        """
-        Validate the `rope_scaling` configuration.
-        """
-        if self.rope_scaling is None:
-            return
-
-        if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 3:
-            raise ValueError(
-                "`rope_scaling` must be a dictionary with three fields, `type`, `short_factor` and `long_factor`, "
-                f"got {self.rope_scaling}"
-            )
-        rope_scaling_type = self.rope_scaling.get("type", None)
-        rope_scaling_short_factor = self.rope_scaling.get("short_factor", None)
-        rope_scaling_long_factor = self.rope_scaling.get("long_factor", None)
-        if rope_scaling_type is None or rope_scaling_type not in ["longrope"]:
-            raise ValueError(f"`rope_scaling`'s type field must be one of ['longrope'], got {rope_scaling_type}")
-        if not (
-            isinstance(rope_scaling_short_factor, list)
-            and all(isinstance(x, (int, float)) for x in rope_scaling_short_factor)
-        ):
-            raise ValueError(
-                f"`rope_scaling`'s short_factor field must be a list of numbers, got {rope_scaling_short_factor}"
-            )
-        if not len(rope_scaling_short_factor) == self.hidden_size // self.num_attention_heads // 2:
-            raise ValueError(
-                f"`rope_scaling`'s short_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_short_factor)}"
-            )
-        if not (
-            isinstance(rope_scaling_long_factor, list)
-            and all(isinstance(x, (int, float)) for x in rope_scaling_long_factor)
-        ):
-            raise ValueError(
-                f"`rope_scaling`'s long_factor field must be a list of numbers, got {rope_scaling_long_factor}"
-            )
-        if not len(rope_scaling_long_factor) == self.hidden_size // self.num_attention_heads // 2:
-            raise ValueError(
-                f"`rope_scaling`'s long_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_long_factor)}"
-            )
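A quick arithmetic check of the validation logic above against the deleted config.json: head size is hidden_size / num_attention_heads = 3072 / 32 = 96 (matching "head_size": 96 in genai_config.json), so each rope_scaling factor list must have 96 / 2 = 48 entries, which is exactly the length of the long_factor and short_factor lists shown earlier.

```python
# Worked check of _rope_scaling_validation for the values in config.json above.
hidden_size, num_attention_heads = 3072, 32
expected_len = hidden_size // num_attention_heads // 2  # 96 // 2 == 48
assert expected_len == 48  # matches the 48-entry long_factor/short_factor lists
```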
cuda/cuda-int4-awq-block-128/phi-3.5-mini-instruct-cuda-int4-awq-block-128.onnx.data DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:dfba2f38f25040110fd7beb4475a342af7e4dae6f889a6cb7a52131387a42c95
-size 2277120000
cuda/cuda-int4-awq-block-128/tokenizer.json DELETED
The diff for this file is too large to render. See raw diff
 
cuda/cuda-int4-awq-block-128/tokenizer_config.json DELETED
@@ -1,131 +0,0 @@
-{
-    "add_bos_token": false,
-    "add_eos_token": false,
-    "add_prefix_space": null,
-    "added_tokens_decoder": {
-        "0": {
-            "content": "<unk>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": false,
-            "single_word": false,
-            "special": true
-        },
-        "1": {
-            "content": "<s>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": false,
-            "single_word": false,
-            "special": true
-        },
-        "2": {
-            "content": "</s>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": false
-        },
-        "32000": {
-            "content": "<|endoftext|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": false,
-            "single_word": false,
-            "special": true
-        },
-        "32001": {
-            "content": "<|assistant|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32002": {
-            "content": "<|placeholder1|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32003": {
-            "content": "<|placeholder2|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32004": {
-            "content": "<|placeholder3|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32005": {
-            "content": "<|placeholder4|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32006": {
-            "content": "<|system|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32007": {
-            "content": "<|end|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32008": {
-            "content": "<|placeholder5|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32009": {
-            "content": "<|placeholder6|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        },
-        "32010": {
-            "content": "<|user|>",
-            "lstrip": false,
-            "normalized": false,
-            "rstrip": true,
-            "single_word": false,
-            "special": true
-        }
-    },
-    "bos_token": "<s>",
-    "chat_template": "{% for message in messages %}{% if message['role'] == 'system' and message['content'] %}{{'<|system|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'user' %}{{'<|user|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'assistant' %}{{'<|assistant|>\n' + message['content'] + '<|end|>\n'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}",
-    "clean_up_tokenization_spaces": false,
-    "eos_token": "<|endoftext|>",
-    "legacy": false,
-    "model_max_length": 131072,
-    "pad_token": "<|endoftext|>",
-    "padding_side": "left",
-    "sp_model_kwargs": {},
-    "tokenizer_class": "LlamaTokenizer",
-    "unk_token": "<unk>",
-    "use_default_system_prompt": false
-}
{cuda/cuda-fp16 → gpu/gpu-int4-awq-block-128}/config.json RENAMED
File without changes
{cuda/cuda-fp16 → gpu/gpu-int4-awq-block-128}/configuration_phi3.py RENAMED
File without changes
{cuda/cuda-int4-awq-block-128 → gpu/gpu-int4-awq-block-128}/genai_config.json RENAMED
@@ -1,59 +1,54 @@
-{
-    "model": {
-        "bos_token_id": 1,
-        "context_length": 131072,
-        "decoder": {
-            "session_options": {
-                "log_id": "onnxruntime-genai",
-                "provider_options": [
-                    {
-                        "cuda": {
-                            "enable_cuda_graph": "0"
-                        }
-                    }
-                ]
-            },
-            "filename": "phi-3.5-mini-instruct-cuda-int4-awq-block-128.onnx",
-            "head_size": 96,
-            "hidden_size": 3072,
-            "inputs": {
-                "input_ids": "input_ids",
-                "attention_mask": "attention_mask",
-                "past_key_names": "past_key_values.%d.key",
-                "past_value_names": "past_key_values.%d.value"
-            },
-            "outputs": {
-                "logits": "logits",
-                "present_key_names": "present.%d.key",
-                "present_value_names": "present.%d.value"
-            },
-            "num_attention_heads": 32,
-            "num_hidden_layers": 32,
-            "num_key_value_heads": 32
-        },
-        "eos_token_id": [
-            32007,
-            32001,
-            32000
-        ],
-        "pad_token_id": 32000,
-        "type": "phi3",
-        "vocab_size": 32064
-    },
-    "search": {
-        "diversity_penalty": 0.0,
-        "do_sample": true,
-        "early_stopping": true,
-        "length_penalty": 1.0,
-        "max_length": 131072,
-        "min_length": 0,
-        "no_repeat_ngram_size": 0,
-        "num_beams": 1,
-        "num_return_sequences": 1,
-        "past_present_share_buffer": true,
-        "repetition_penalty": 1.0,
-        "temperature": 1.0,
-        "top_k": 1,
-        "top_p": 1.0
-    }
+{
+    "model": {
+        "bos_token_id": 1,
+        "context_length": 131072,
+        "decoder": {
+            "session_options": {
+                "log_id": "onnxruntime-genai",
+                "provider_options": []
+            },
+            "filename": "model.onnx",
+            "head_size": 96,
+            "hidden_size": 3072,
+            "inputs": {
+                "input_ids": "input_ids",
+                "attention_mask": "attention_mask",
+                "position_ids": "position_ids",
+                "past_key_names": "past_key_values.%d.key",
+                "past_value_names": "past_key_values.%d.value"
+            },
+            "outputs": {
+                "logits": "logits",
+                "present_key_names": "present.%d.key",
+                "present_value_names": "present.%d.value"
+            },
+            "num_attention_heads": 32,
+            "num_hidden_layers": 32,
+            "num_key_value_heads": 32
+        },
+        "eos_token_id": [
+            32007,
+            32001,
+            32000
+        ],
+        "pad_token_id": 32000,
+        "type": "phi3",
+        "vocab_size": 32064
+    },
+    "search": {
+        "diversity_penalty": 0.0,
+        "do_sample": true,
+        "early_stopping": true,
+        "length_penalty": 1.0,
+        "max_length": 131072,
+        "min_length": 0,
+        "no_repeat_ngram_size": 0,
+        "num_beams": 1,
+        "num_return_sequences": 1,
+        "past_present_share_buffer": true,
+        "repetition_penalty": 1.0,
+        "temperature": 1.0,
+        "top_k": 1,
+        "top_p": 1.0
+    }
 }
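Two things change in this config: provider_options is now empty, so the execution provider presumably comes from whichever onnxruntime-genai package is installed (CUDA or DirectML, consistent with the cuda/ to gpu/ folder rename and the new onnxruntime-genai-directml row in the version table), and the search block keeps sampling enabled by default. The search values are only defaults; a sketch of overriding them per run with set_search_options, where the numeric values are illustrative assumptions:

```python
import onnxruntime_genai as og

# The "search" block in genai_config.json supplies defaults; set_search_options
# overrides them for a given run.
model = og.Model("gpu/gpu-int4-awq-block-128")
params = og.GeneratorParams(model)
params.set_search_options(
    do_sample=True,    # keep the config's sampling default
    temperature=0.7,   # assumption: instead of the config's 1.0
    top_p=0.9,         # assumption: instead of 1.0
    max_length=4096,   # well below the 131072 context-length ceiling
)
generator = og.Generator(model, params)
```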
cuda/cuda-int4-awq-block-128/phi-3.5-mini-instruct-cuda-int4-awq-block-128.onnx → gpu/gpu-int4-awq-block-128/model.onnx RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fdf3ed5dab1205213634405e69ae41b5a564d93c646783ead672ad4406c1ed5f
-size 26214533
+oid sha256:d4392f76ffec63b659a83261e08337fbb33194f509816b7f843f7c46a6f37cc1
+size 320891
cuda/cuda-fp16/phi-3.5-mini-instruct-cuda-fp16.onnx → gpu/gpu-int4-awq-block-128/model.onnx.data RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e63bd67da9613d088594a83ca58665f4d5e931d673beab93157e56b5893b64c3
-size 26124593
+oid sha256:3ccad8fba8b01a75f6ef96bd5f27401b1ba92eca512819eee3128f576453fa15
+size 2303072256
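These entries are Git LFS pointers; the actual int4 weights live in the roughly 2.3 GB model.onnx.data payload. To pull only the GPU variant rather than every LFS file in the repo, something like the following works with huggingface_hub; the repo id here is an assumption inferred from this model card, adjust it to the repo you are cloning:

```python
# Sketch: download just gpu/gpu-int4-awq-block-128 (model.onnx, model.onnx.data,
# and the tokenizer files) instead of the whole repo's LFS content.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="microsoft/Phi-3.5-mini-instruct-onnx",  # assumption, adjust as needed
    allow_patterns="gpu/gpu-int4-awq-block-128/*",
    local_dir="phi-3.5-mini-instruct-onnx",
)
```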
{cuda/cuda-int4-awq-block-128 → gpu/gpu-int4-awq-block-128}/special_tokens_map.json RENAMED
@@ -1,30 +1,30 @@
-{
-    "bos_token": {
-        "content": "<s>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    },
-    "eos_token": {
-        "content": "<|endoftext|>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    },
-    "pad_token": {
-        "content": "<|endoftext|>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    },
-    "unk_token": {
-        "content": "<unk>",
-        "lstrip": false,
-        "normalized": false,
-        "rstrip": false,
-        "single_word": false
-    }
-}
+{
+    "bos_token": {
+        "content": "<s>",
+        "lstrip": false,
+        "normalized": false,
+        "rstrip": false,
+        "single_word": false
+    },
+    "eos_token": {
+        "content": "<|endoftext|>",
+        "lstrip": false,
+        "normalized": false,
+        "rstrip": false,
+        "single_word": false
+    },
+    "pad_token": {
+        "content": "<|endoftext|>",
+        "lstrip": false,
+        "normalized": false,
+        "rstrip": false,
+        "single_word": false
+    },
+    "unk_token": {
+        "content": "<unk>",
+        "lstrip": false,
+        "normalized": false,
+        "rstrip": false,
+        "single_word": false
+    }
+}
{cuda/cuda-fp16 → gpu/gpu-int4-awq-block-128}/tokenizer.json RENAMED
File without changes
{cuda/cuda-fp16 → gpu/gpu-int4-awq-block-128}/tokenizer_config.json RENAMED
@@ -1,131 +1,131 @@
1
- {
2
- "add_bos_token": false,
3
- "add_eos_token": false,
4
- "add_prefix_space": null,
5
- "added_tokens_decoder": {
6
- "0": {
7
- "content": "<unk>",
8
- "lstrip": false,
9
- "normalized": false,
10
- "rstrip": false,
11
- "single_word": false,
12
- "special": true
13
- },
14
- "1": {
15
- "content": "<s>",
16
- "lstrip": false,
17
- "normalized": false,
18
- "rstrip": false,
19
- "single_word": false,
20
- "special": true
21
- },
22
- "2": {
23
- "content": "</s>",
24
- "lstrip": false,
25
- "normalized": false,
26
- "rstrip": true,
27
- "single_word": false,
28
- "special": false
29
- },
30
- "32000": {
31
- "content": "<|endoftext|>",
32
- "lstrip": false,
33
- "normalized": false,
34
- "rstrip": false,
35
- "single_word": false,
36
- "special": true
37
- },
38
- "32001": {
39
- "content": "<|assistant|>",
40
- "lstrip": false,
41
- "normalized": false,
42
- "rstrip": true,
43
- "single_word": false,
44
- "special": true
45
- },
46
- "32002": {
47
- "content": "<|placeholder1|>",
48
- "lstrip": false,
49
- "normalized": false,
50
- "rstrip": true,
51
- "single_word": false,
52
- "special": true
53
- },
54
- "32003": {
55
- "content": "<|placeholder2|>",
56
- "lstrip": false,
57
- "normalized": false,
58
- "rstrip": true,
59
- "single_word": false,
60
- "special": true
61
- },
62
- "32004": {
63
- "content": "<|placeholder3|>",
64
- "lstrip": false,
65
- "normalized": false,
66
- "rstrip": true,
67
- "single_word": false,
68
- "special": true
69
- },
70
- "32005": {
71
- "content": "<|placeholder4|>",
72
- "lstrip": false,
73
- "normalized": false,
74
- "rstrip": true,
75
- "single_word": false,
76
- "special": true
77
- },
78
- "32006": {
79
- "content": "<|system|>",
80
- "lstrip": false,
81
- "normalized": false,
82
- "rstrip": true,
83
- "single_word": false,
84
- "special": true
85
- },
86
- "32007": {
87
- "content": "<|end|>",
88
- "lstrip": false,
89
- "normalized": false,
90
- "rstrip": true,
91
- "single_word": false,
92
- "special": true
93
- },
94
- "32008": {
95
- "content": "<|placeholder5|>",
96
- "lstrip": false,
97
- "normalized": false,
98
- "rstrip": true,
99
- "single_word": false,
100
- "special": true
101
- },
102
- "32009": {
103
- "content": "<|placeholder6|>",
104
- "lstrip": false,
105
- "normalized": false,
106
- "rstrip": true,
107
- "single_word": false,
108
- "special": true
109
- },
110
- "32010": {
111
- "content": "<|user|>",
112
- "lstrip": false,
113
- "normalized": false,
114
- "rstrip": true,
115
- "single_word": false,
116
- "special": true
117
- }
118
- },
119
- "bos_token": "<s>",
120
- "chat_template": "{% for message in messages %}{% if message['role'] == 'system' and message['content'] %}{{'<|system|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'user' %}{{'<|user|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'assistant' %}{{'<|assistant|>\n' + message['content'] + '<|end|>\n'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}",
121
- "clean_up_tokenization_spaces": false,
122
- "eos_token": "<|endoftext|>",
123
- "legacy": false,
124
- "model_max_length": 131072,
125
- "pad_token": "<|endoftext|>",
126
- "padding_side": "left",
127
- "sp_model_kwargs": {},
128
- "tokenizer_class": "LlamaTokenizer",
129
- "unk_token": "<unk>",
130
- "use_default_system_prompt": false
131
- }
 
1
+ {
2
+ "add_bos_token": false,
3
+ "add_eos_token": false,
4
+ "add_prefix_space": null,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<unk>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<s>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ },
22
+ "2": {
23
+ "content": "</s>",
24
+ "lstrip": false,
25
+ "normalized": false,
26
+ "rstrip": true,
27
+ "single_word": false,
28
+ "special": false
29
+ },
30
+ "32000": {
31
+ "content": "<|endoftext|>",
32
+ "lstrip": false,
33
+ "normalized": false,
34
+ "rstrip": false,
35
+ "single_word": false,
36
+ "special": true
37
+ },
38
+ "32001": {
39
+ "content": "<|assistant|>",
40
+ "lstrip": false,
41
+ "normalized": false,
42
+ "rstrip": true,
43
+ "single_word": false,
44
+ "special": true
45
+ },
46
+ "32002": {
47
+ "content": "<|placeholder1|>",
48
+ "lstrip": false,
49
+ "normalized": false,
50
+ "rstrip": true,
51
+ "single_word": false,
52
+ "special": true
53
+ },
54
+ "32003": {
55
+ "content": "<|placeholder2|>",
56
+ "lstrip": false,
57
+ "normalized": false,
58
+ "rstrip": true,
59
+ "single_word": false,
60
+ "special": true
61
+ },
62
+ "32004": {
63
+ "content": "<|placeholder3|>",
64
+ "lstrip": false,
65
+ "normalized": false,
66
+ "rstrip": true,
67
+ "single_word": false,
68
+ "special": true
69
+ },
70
+ "32005": {
71
+ "content": "<|placeholder4|>",
72
+ "lstrip": false,
73
+ "normalized": false,
74
+ "rstrip": true,
75
+ "single_word": false,
76
+ "special": true
77
+ },
78
+ "32006": {
79
+ "content": "<|system|>",
80
+ "lstrip": false,
81
+ "normalized": false,
82
+ "rstrip": true,
83
+ "single_word": false,
84
+ "special": true
85
+ },
86
+ "32007": {
87
+ "content": "<|end|>",
88
+ "lstrip": false,
89
+ "normalized": false,
90
+ "rstrip": true,
91
+ "single_word": false,
92
+ "special": true
93
+ },
94
+ "32008": {
95
+ "content": "<|placeholder5|>",
96
+ "lstrip": false,
97
+ "normalized": false,
98
+ "rstrip": true,
99
+ "single_word": false,
100
+ "special": true
101
+ },
102
+ "32009": {
103
+ "content": "<|placeholder6|>",
104
+ "lstrip": false,
105
+ "normalized": false,
106
+ "rstrip": true,
107
+ "single_word": false,
108
+ "special": true
109
+ },
110
+ "32010": {
111
+ "content": "<|user|>",
112
+ "lstrip": false,
113
+ "normalized": false,
114
+ "rstrip": true,
115
+ "single_word": false,
116
+ "special": true
117
+ }
118
+ },
119
+ "bos_token": "<s>",
120
+ "chat_template": "{% for message in messages %}{% if message['role'] == 'system' and message['content'] %}{{'<|system|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'user' %}{{'<|user|>\n' + message['content'] + '<|end|>\n'}}{% elif message['role'] == 'assistant' %}{{'<|assistant|>\n' + message['content'] + '<|end|>\n'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}",
121
+ "clean_up_tokenization_spaces": false,
122
+ "eos_token": "<|endoftext|>",
123
+ "legacy": false,
124
+ "model_max_length": 131072,
125
+ "pad_token": "<|endoftext|>",
126
+ "padding_side": "left",
127
+ "sp_model_kwargs": {},
128
+ "tokenizer_class": "LlamaTokenizer",
129
+ "unk_token": "<unk>",
130
+ "use_default_system_prompt": false
131
+ }
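The chat_template above defines the prompt format the ONNX model expects. As a sanity check, it can be rendered with transformers (4.44.2 per the version table) once the folder is downloaded; a sketch, assuming the folder loads via AutoTokenizer:

```python
# Sketch: render the Phi-3.5 chat template to see the exact prompt string.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpu/gpu-int4-awq-block-128")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is ONNX Runtime?"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape, per the template above:
# <|system|>\nYou are a helpful assistant.<|end|>\n<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>\n
```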