LeoChiuu committed
Commit ca4e4e8
1 Parent(s): fdb3656

Add new SentenceTransformer model.

Files changed (3):
  1. README.md +108 -108
  2. config_sentence_transformers.json +1 -1
  3. model.safetensors +1 -1

README.md CHANGED
@@ -45,34 +45,34 @@ tags:
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
- - dataset_size:124
  - loss:MultipleNegativesRankingLoss
  widget:
- - source_sentence: なにも要らない
  sentences:
- - 欲しくない
- - 暖炉を調べよう
- - キャンドルがいいな
- - source_sentence: 試すため
  sentences:
- - 誰にもらったやつ?
- - スカーフはナイトスタンドにある?
- - ためすため
- - source_sentence: ビーフシチュー作った?
  sentences:
- - 昨日作ったのはビーフシチュー?
- - キャンドル要らない
- - 昨日夕飯にビーフシチュー食べた?
- - source_sentence: あれってキミのスカーフ?
  sentences:
- - あの木の上にあるやつはなに?
- - あれってレオのスカーフ?
- - どっちをさがせばいい?
- - source_sentence: どっちも欲しくない
  sentences:
- - 気にスカーフがひっかかってる
- - 花壇を調べよう
- - タイマツ要らない
  model-index:
  - name: SentenceTransformer based on colorfulscoop/sbert-base-ja
  results:
@@ -84,109 +84,109 @@ model-index:
  type: custom-arc-semantics-data
  metrics:
  - type: cosine_accuracy
- value: 0.967741935483871
  name: Cosine Accuracy
  - type: cosine_accuracy_threshold
- value: 0.2947738766670227
  name: Cosine Accuracy Threshold
  - type: cosine_f1
- value: 0.9836065573770492
  name: Cosine F1
  - type: cosine_f1_threshold
- value: 0.2947738766670227
  name: Cosine F1 Threshold
  - type: cosine_precision
  value: 1.0
  name: Cosine Precision
  - type: cosine_recall
- value: 0.967741935483871
  name: Cosine Recall
  - type: cosine_ap
- value: 0.9999999999999998
  name: Cosine Ap
  - type: dot_accuracy
- value: 0.967741935483871
  name: Dot Accuracy
  - type: dot_accuracy_threshold
- value: 144.98019409179688
  name: Dot Accuracy Threshold
  - type: dot_f1
- value: 0.9836065573770492
  name: Dot F1
  - type: dot_f1_threshold
- value: 144.98019409179688
  name: Dot F1 Threshold
  - type: dot_precision
  value: 1.0
  name: Dot Precision
  - type: dot_recall
- value: 0.967741935483871
  name: Dot Recall
  - type: dot_ap
- value: 0.9999999999999998
  name: Dot Ap
  - type: manhattan_accuracy
- value: 0.967741935483871
  name: Manhattan Accuracy
  - type: manhattan_accuracy_threshold
- value: 585.5504150390625
  name: Manhattan Accuracy Threshold
  - type: manhattan_f1
- value: 0.9836065573770492
  name: Manhattan F1
  - type: manhattan_f1_threshold
- value: 585.5504150390625
  name: Manhattan F1 Threshold
  - type: manhattan_precision
  value: 1.0
  name: Manhattan Precision
  - type: manhattan_recall
- value: 0.967741935483871
  name: Manhattan Recall
  - type: manhattan_ap
- value: 0.9999999999999998
  name: Manhattan Ap
  - type: euclidean_accuracy
- value: 0.967741935483871
  name: Euclidean Accuracy
  - type: euclidean_accuracy_threshold
- value: 26.343276977539062
  name: Euclidean Accuracy Threshold
  - type: euclidean_f1
- value: 0.9836065573770492
  name: Euclidean F1
  - type: euclidean_f1_threshold
- value: 26.343276977539062
  name: Euclidean F1 Threshold
  - type: euclidean_precision
  value: 1.0
  name: Euclidean Precision
  - type: euclidean_recall
- value: 0.967741935483871
  name: Euclidean Recall
  - type: euclidean_ap
- value: 0.9999999999999998
  name: Euclidean Ap
  - type: max_accuracy
- value: 0.967741935483871
  name: Max Accuracy
  - type: max_accuracy_threshold
- value: 585.5504150390625
  name: Max Accuracy Threshold
  - type: max_f1
- value: 0.9836065573770492
  name: Max F1
  - type: max_f1_threshold
- value: 585.5504150390625
  name: Max F1 Threshold
  - type: max_precision
  value: 1.0
  name: Max Precision
  - type: max_recall
- value: 0.967741935483871
  name: Max Recall
  - type: max_ap
- value: 0.9999999999999998
  name: Max Ap
  ---

@@ -239,9 +239,9 @@ from sentence_transformers import SentenceTransformer
  model = SentenceTransformer("LeoChiuu/sbert-base-ja-arc")
  # Run inference
  sentences = [
- 'どっちも欲しくない',
- 'タイマツ要らない',
- '花壇を調べよう',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
@@ -287,40 +287,40 @@ You can finetune this model on your own dataset.

  | Metric                       | Value    |
  |:-----------------------------|:---------|
- | cosine_accuracy              | 0.9677   |
- | cosine_accuracy_threshold    | 0.2948   |
- | cosine_f1                    | 0.9836   |
- | cosine_f1_threshold          | 0.2948   |
  | cosine_precision             | 1.0      |
- | cosine_recall                | 0.9677   |
  | cosine_ap                    | 1.0      |
- | dot_accuracy                 | 0.9677   |
- | dot_accuracy_threshold       | 144.9802 |
- | dot_f1                       | 0.9836   |
- | dot_f1_threshold             | 144.9802 |
  | dot_precision                | 1.0      |
- | dot_recall                   | 0.9677   |
  | dot_ap                       | 1.0      |
- | manhattan_accuracy           | 0.9677   |
- | manhattan_accuracy_threshold | 585.5504 |
- | manhattan_f1                 | 0.9836   |
- | manhattan_f1_threshold       | 585.5504 |
  | manhattan_precision          | 1.0      |
- | manhattan_recall             | 0.9677   |
  | manhattan_ap                 | 1.0      |
- | euclidean_accuracy           | 0.9677   |
- | euclidean_accuracy_threshold | 26.3433  |
- | euclidean_f1                 | 0.9836   |
- | euclidean_f1_threshold       | 26.3433  |
  | euclidean_precision          | 1.0      |
- | euclidean_recall             | 0.9677   |
  | euclidean_ap                 | 1.0      |
- | max_accuracy                 | 0.9677   |
- | max_accuracy_threshold       | 585.5504 |
- | max_f1                       | 0.9836   |
- | max_f1_threshold             | 585.5504 |
  | max_precision                | 1.0      |
- | max_recall                   | 0.9677   |
  | **max_ap**                   | **1.0**  |

  <!--
@@ -342,19 +342,19 @@ You can finetune this model on your own dataset.
  #### Unnamed Dataset


- * Size: 124 training samples
  * Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
  | | text1 | text2 | label |
  |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-----------------------------|
  | type | string | string | int |
- | details | <ul><li>min: 4 tokens</li><li>mean: 8.59 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 8.58 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
  * Samples:
- | text1 | text2 | label |
- |:------------------------|:-----------------------|:---------------|
- | <code>昨晩何を食べたの?</code> | <code>昨夜何を食べたの?</code> | <code>1</code> |
- | <code>スリッパをはいたの?</code> | <code>スリッパはいてた?</code> | <code>1</code> |
- | <code>家の中</code> | <code>家の中へ行こう</code> | <code>1</code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
@@ -368,19 +368,19 @@ You can finetune this model on your own dataset.
  #### Unnamed Dataset


- * Size: 31 evaluation samples
  * Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
  | | text1 | text2 | label |
  |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-----------------------------|
  | type | string | string | int |
- | details | <ul><li>min: 5 tokens</li><li>mean: 8.39 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.06 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
  * Samples:
- | text1 | text2 | label |
- |:----------------------|:-----------------------|:---------------|
- | <code>花壇</code> | <code>花壇を調べよう</code> | <code>1</code> |
- | <code>タイマツ要らない</code> | <code>キャンドル要らない</code> | <code>1</code> |
- | <code>なにも要らない</code> | <code>欲しくない</code> | <code>1</code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
@@ -519,27 +519,27 @@ You can finetune this model on your own dataset.
  ### Training Logs
  | Epoch | Step | Training Loss | loss   | custom-arc-semantics-data_max_ap |
  |:-----:|:----:|:-------------:|:------:|:--------------------------------:|
- | None  | 0    | -             | -      | 1.0000                           |
- | 1.0   | 16   | 0.5617        | 0.5022 | 1.0000                           |
- | 2.0   | 32   | 0.2461        | 0.3870 | 1.0000                           |
- | 3.0   | 48   | 0.0968        | 0.3929 | 1.0000                           |
- | 4.0   | 64   | 0.0408        | 0.4012 | 1.0000                           |
- | 5.0   | 80   | 0.0151        | 0.4023 | 1.0000                           |
- | 6.0   | 96   | 0.0118        | 0.3851 | 1.0000                           |
- | 7.0   | 112  | 0.0087        | 0.3637 | 1.0000                           |
- | 8.0   | 128  | 0.0053        | 0.3662 | 1.0000                           |
- | 9.0   | 144  | 0.0046        | 0.3799 | 1.0000                           |
- | 10.0  | 160  | 0.002         | 0.3772 | 1.0000                           |
- | 11.0  | 176  | 0.0025        | 0.3765 | 1.0000                           |
- | 12.0  | 192  | 0.0021        | 0.3751 | 1.0000                           |
- | 13.0  | 208  | 0.0015        | 0.3752 | 1.0000                           |


  ### Framework Versions
  - Python: 3.10.14
  - Sentence Transformers: 3.0.1
  - Transformers: 4.44.2
- - PyTorch: 2.4.0+cu121
  - Accelerate: 0.34.0
  - Datasets: 2.20.0
  - Tokenizers: 0.19.1
 
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
+ - dataset_size:228
  - loss:MultipleNegativesRankingLoss
  widget:
+ - source_sentence: 家の外を探そう
  sentences:
+ - ベットを調べよう
+ - 何を作ったの?
+ - 外を見てみよう
+ - source_sentence: 物の姿を変える魔法が使える村人を知っている?
  sentences:
+ - 中を見てみよう
+ - ベッドにある?
+ - 物体の形を変えられる魔法使いを知っている?
+ - source_sentence: ぬいぐるみが花
  sentences:
+ - リリアンはどんな呪文が使えるの?
+ - ぬいぐるみ
+ - 花がぬいぐるみに変えられている
+ - source_sentence: ベッドにスカーフはある?
  sentences:
+ - 井戸へ行ったことある?
+ - どっちも要らない
+ - スカーフはベッドにある?
+ - source_sentence: キャンドル頂戴
  sentences:
+ - 祭壇の些細な違和感ってなに?
+ - やっぱり、キャンドルがいい
+ - テーブルを調べよう
  model-index:
  - name: SentenceTransformer based on colorfulscoop/sbert-base-ja
  results:
 
  type: custom-arc-semantics-data
  metrics:
  - type: cosine_accuracy
+ value: 0.9827586206896551
  name: Cosine Accuracy
  - type: cosine_accuracy_threshold
+ value: 0.2341834306716919
  name: Cosine Accuracy Threshold
  - type: cosine_f1
+ value: 0.9913043478260869
  name: Cosine F1
  - type: cosine_f1_threshold
+ value: 0.2341834306716919
  name: Cosine F1 Threshold
  - type: cosine_precision
  value: 1.0
  name: Cosine Precision
  - type: cosine_recall
+ value: 0.9827586206896551
  name: Cosine Recall
  - type: cosine_ap
+ value: 1.0
  name: Cosine Ap
  - type: dot_accuracy
+ value: 0.9827586206896551
  name: Dot Accuracy
  - type: dot_accuracy_threshold
+ value: 134.29324340820312
  name: Dot Accuracy Threshold
  - type: dot_f1
+ value: 0.9913043478260869
  name: Dot F1
  - type: dot_f1_threshold
+ value: 134.29324340820312
  name: Dot F1 Threshold
  - type: dot_precision
  value: 1.0
  name: Dot Precision
  - type: dot_recall
+ value: 0.9827586206896551
  name: Dot Recall
  - type: dot_ap
+ value: 1.0
  name: Dot Ap
  - type: manhattan_accuracy
+ value: 0.9827586206896551
  name: Manhattan Accuracy
  - type: manhattan_accuracy_threshold
+ value: 644.1650390625
  name: Manhattan Accuracy Threshold
  - type: manhattan_f1
+ value: 0.9913043478260869
  name: Manhattan F1
  - type: manhattan_f1_threshold
+ value: 644.1650390625
  name: Manhattan F1 Threshold
  - type: manhattan_precision
  value: 1.0
  name: Manhattan Precision
  - type: manhattan_recall
+ value: 0.9827586206896551
  name: Manhattan Recall
  - type: manhattan_ap
+ value: 1.0
  name: Manhattan Ap
  - type: euclidean_accuracy
+ value: 0.9827586206896551
  name: Euclidean Accuracy
  - type: euclidean_accuracy_threshold
+ value: 29.542858123779297
  name: Euclidean Accuracy Threshold
  - type: euclidean_f1
+ value: 0.9913043478260869
  name: Euclidean F1
  - type: euclidean_f1_threshold
+ value: 29.542858123779297
  name: Euclidean F1 Threshold
  - type: euclidean_precision
  value: 1.0
  name: Euclidean Precision
  - type: euclidean_recall
+ value: 0.9827586206896551
  name: Euclidean Recall
  - type: euclidean_ap
+ value: 1.0
  name: Euclidean Ap
  - type: max_accuracy
+ value: 0.9827586206896551
  name: Max Accuracy
  - type: max_accuracy_threshold
+ value: 644.1650390625
  name: Max Accuracy Threshold
  - type: max_f1
+ value: 0.9913043478260869
  name: Max F1
  - type: max_f1_threshold
+ value: 644.1650390625
  name: Max F1 Threshold
  - type: max_precision
  value: 1.0
  name: Max Precision
  - type: max_recall
+ value: 0.9827586206896551
  name: Max Recall
  - type: max_ap
+ value: 1.0
  name: Max Ap
  ---

 
  model = SentenceTransformer("LeoChiuu/sbert-base-ja-arc")
  # Run inference
  sentences = [
+ 'キャンドル頂戴',
+ 'やっぱり、キャンドルがいい',
+ 'テーブルを調べよう',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
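Once `encode` returns the embedding matrix (likely shape `(3, 768)` for this BERT-base model), pairwise similarity can be derived with plain NumPy. A minimal sketch, using toy vectors in place of the real model output so it runs without downloading the checkpoint:

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between row vectors."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

# Toy stand-ins for the three sentence embeddings above.
embeddings = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 0.0, 1.0],
])
sims = cosine_similarity_matrix(embeddings)
print(sims.shape)  # (3, 3)
```

With the real embeddings, Sentence Transformers 3.x also exposes `model.similarity(embeddings, embeddings)` for the same purpose.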
 

  | Metric                       | Value    |
  |:-----------------------------|:---------|
+ | cosine_accuracy              | 0.9828   |
+ | cosine_accuracy_threshold    | 0.2342   |
+ | cosine_f1                    | 0.9913   |
+ | cosine_f1_threshold          | 0.2342   |
  | cosine_precision             | 1.0      |
+ | cosine_recall                | 0.9828   |
  | cosine_ap                    | 1.0      |
+ | dot_accuracy                 | 0.9828   |
+ | dot_accuracy_threshold       | 134.2932 |
+ | dot_f1                       | 0.9913   |
+ | dot_f1_threshold             | 134.2932 |
  | dot_precision                | 1.0      |
+ | dot_recall                   | 0.9828   |
  | dot_ap                       | 1.0      |
+ | manhattan_accuracy           | 0.9828   |
+ | manhattan_accuracy_threshold | 644.165  |
+ | manhattan_f1                 | 0.9913   |
+ | manhattan_f1_threshold       | 644.165  |
  | manhattan_precision          | 1.0      |
+ | manhattan_recall             | 0.9828   |
  | manhattan_ap                 | 1.0      |
+ | euclidean_accuracy           | 0.9828   |
+ | euclidean_accuracy_threshold | 29.5429  |
+ | euclidean_f1                 | 0.9913   |
+ | euclidean_f1_threshold       | 29.5429  |
  | euclidean_precision          | 1.0      |
+ | euclidean_recall             | 0.9828   |
  | euclidean_ap                 | 1.0      |
+ | max_accuracy                 | 0.9828   |
+ | max_accuracy_threshold       | 644.165  |
+ | max_f1                       | 0.9913   |
+ | max_f1_threshold             | 644.165  |
  | max_precision                | 1.0      |
+ | max_recall                   | 0.9828   |
  | **max_ap**                   | **1.0**  |
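The thresholded metrics in this table follow a simple rule: a pair is predicted as a paraphrase (label 1) when its similarity score exceeds the reported threshold, and the metric is computed over those predictions. A minimal sketch with made-up scores, not the actual evaluation data:

```python
import numpy as np

def accuracy_at_threshold(scores, labels, threshold):
    """Binary accuracy when pairs scoring above `threshold` are predicted positive."""
    preds = np.asarray(scores) > threshold
    return float(np.mean(preds == np.asarray(labels).astype(bool)))

# Made-up cosine scores and gold labels, for illustration only.
scores = [0.91, 0.42, 0.30, 0.12]
labels = [1, 1, 1, 0]
print(accuracy_at_threshold(scores, labels, 0.2342))  # 1.0 with these toy values
```

The evaluator picks the threshold that maximizes each metric, which is why the accuracy and F1 thresholds coincide here.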
 
  <!--
 
  #### Unnamed Dataset


+ * Size: 228 training samples
  * Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
  | | text1 | text2 | label |
  |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-----------------------------|
  | type | string | string | int |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 8.28 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.63 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
  * Samples:
+ | text1 | text2 | label |
+ |:----------------------------|:------------------------|:---------------|
+ | <code>キャンドルを用意して</code> | <code>ロウソク</code> | <code>1</code> |
+ | <code>なんで話せるの?</code> | <code>なんでしゃべれるの?</code> | <code>1</code> |
+ | <code>それは物の見た目を変える魔法</code> | <code>物の見た目を変える</code> | <code>1</code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
 
  #### Unnamed Dataset


+ * Size: 58 evaluation samples
  * Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
  | | text1 | text2 | label |
  |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-----------------------------|
  | type | string | string | int |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 8.33 tokens</li><li>max: 13 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.38 tokens</li><li>max: 13 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
  * Samples:
+ | text1 | text2 | label |
+ |:----------------------------|:----------------------------|:---------------|
+ | <code>雲より高くってどこ?</code> | <code>雲より高くってなに?</code> | <code>1</code> |
+ | <code>気にスカーフがひっかかってる</code> | <code>キにスカーフが引っかかってる</code> | <code>1</code> |
+ | <code>夕飯が辛かったから</code> | <code>夕飯に辛いスープを飲んだから</code> | <code>1</code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
 
  ### Training Logs
  | Epoch | Step | Training Loss | loss   | custom-arc-semantics-data_max_ap |
  |:-----:|:----:|:-------------:|:------:|:--------------------------------:|
+ | None  | 0    | -             | -      | 1.0                              |
+ | 1.0   | 29   | 0.6181        | 0.3774 | 1.0                              |
+ | 2.0   | 58   | 0.2538        | 0.3356 | 1.0                              |
+ | 3.0   | 87   | 0.063         | 0.3885 | 1.0                              |
+ | 4.0   | 116  | 0.015         | 0.4536 | 1.0                              |
+ | 5.0   | 145  | 0.0061        | 0.4475 | 1.0                              |
+ | 6.0   | 174  | 0.002         | 0.4805 | 1.0                              |
+ | 7.0   | 203  | 0.0015        | 0.4826 | 1.0                              |
+ | 8.0   | 232  | 0.0012        | 0.4831 | 1.0                              |
+ | 9.0   | 261  | 0.0008        | 0.4848 | 1.0                              |
+ | 10.0  | 290  | 0.0006        | 0.4862 | 1.0                              |
+ | 11.0  | 319  | 0.0006        | 0.4883 | 1.0                              |
+ | 12.0  | 348  | 0.0007        | 0.4903 | 1.0                              |
+ | 13.0  | 377  | 0.0006        | 0.4912 | 1.0                              |


  ### Framework Versions
  - Python: 3.10.14
  - Sentence Transformers: 3.0.1
  - Transformers: 4.44.2
+ - PyTorch: 2.4.1+cu121
  - Accelerate: 0.34.0
  - Datasets: 2.20.0
  - Tokenizers: 0.19.1
config_sentence_transformers.json CHANGED
@@ -2,7 +2,7 @@
  "__version__": {
  "sentence_transformers": "3.0.1",
  "transformers": "4.44.2",
- "pytorch": "2.4.0+cu121"
+ "pytorch": "2.4.1+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8fc4e45d599bf652009a700bbe34833cd60aa374b5cc54118f054d6133e99844
+ oid sha256:039721c1fad0c0fa6d3c342ca79d7eb552b0005e9c34c0bcc96c0455e340a82d
  size 442491744
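The `model.safetensors` entry is a Git LFS pointer file: `oid sha256:…` is the SHA-256 digest of the actual weights, so a downloaded copy can be verified against it. A minimal sketch (the `demo.bin` file is a stand-in created here for illustration, not the model file):

```python
import hashlib
from pathlib import Path

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 hex digest of a file, streamed in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Create a small stand-in file and hash it; for the real check, point this
# at the downloaded model.safetensors and compare with the pointer's oid.
Path("demo.bin").write_bytes(b"hello")
print(lfs_sha256("demo.bin"))
```

The pointer's `size` field can be checked the same way against `os.path.getsize`.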