flozi00 committed
Commit 10ff798
1 Parent(s): 53a295f

Upload model

Files changed (2):
  1. README.md +84 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -158,6 +158,83 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: float16
 
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
 - load_in_4bit: True
@@ -184,5 +261,12 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
 - PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
+- PEFT 0.4.0.dev0
 
 - PEFT 0.4.0.dev0
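The fields in the repeated config block above map one-to-one onto the arguments of `transformers.BitsAndBytesConfig`. A minimal sketch of reconstructing that config in code, assuming a recent `transformers` with `bitsandbytes` support installed (this is an illustration of the listed values, not part of the commit):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the config block recorded in the README during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_8bit: False, load_in_4bit: True
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```

Such a config would typically be passed as `quantization_config=bnb_config` when calling `from_pretrained` on the base model before attaching the PEFT adapter.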
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7b84f642ca839980d04318d50e84cc6946156ef3313deadf59c8ae5c9cec2b37
+oid sha256:25613c9b9ed16e0aabe1d3b2fc32ac84589d3c3b3bfa6b895b119161112cfd5f
 size 324598229
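The `version` / `oid` / `size` lines above are a Git LFS pointer file, not the adapter weights themselves: the repository tracks only this small text stub, and the 324 MB binary is fetched from LFS storage by its SHA-256. A small stdlib-only sketch of parsing such a pointer (the `parse_lfs_pointer` helper is hypothetical, written for illustration):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse the 'key value' lines of a Git LFS pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer contents from this commit's adapter_model.bin.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:25613c9b9ed16e0aabe1d3b2fc32ac84589d3c3b3bfa6b895b119161112cfd5f
size 324598229"""

info = parse_lfs_pointer(pointer)
```

Note that only the `oid` changed between parent and child commit while `size` stayed identical, which is consistent with re-uploading a retrained adapter of the same architecture.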