ekurtic committed on
Commit f8bcfd3
1 Parent(s): eecf962

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,39 @@
+ # oBERT-12-downstream-pruned-unstructured-90-mnli
+
+ This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+ It corresponds to the model presented in `Table 1 - 30 Epochs - oBERT - MNLI 90%`.
+
+ ```
+ Pruning method: oBERT downstream unstructured
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: MNLI
+ Sparsity: 90%
+ Number of layers: 12
+ ```
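+
+ A minimal loading sketch (not part of the original model card), assuming the checkpoint can be fetched by repository id with the Hugging Face `transformers` library; the id `neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli` and the layer-name filter are assumptions, shown only to illustrate checking the ~90% unstructured sparsity:
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification
+
+ # Assumed repository id for this checkpoint; adjust if it is hosted elsewhere.
+ model_id = "neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli"
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+
+ # Count exact zeros in the encoder's Linear weights; unstructured pruning
+ # at 90% sparsity should leave roughly 90% of these entries at zero.
+ zeros, total = 0, 0
+ for name, module in model.named_modules():
+     if isinstance(module, torch.nn.Linear) and "encoder" in name:
+         zeros += (module.weight == 0).sum().item()
+         total += module.weight.numel()
+ print(f"Encoder Linear sparsity: {zeros / total:.2%}")
+ ```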
+
+ The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
+
+ ```
+ | oBERT 90%     | m-acc | mm-acc |
+ | ------------- | ----- | ------ |
+ | seed=42       | 83.74 | 84.31  |
+ | seed=3407 (*) | 83.85 | 84.40  |
+ | seed=54321    | 83.77 | 84.33  |
+ | ------------- | ----- | ------ |
+ | mean          | 83.79 | 84.35  |
+ | stdev         | 0.056 | 0.047  |
+ ```
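+
+ A hypothetical inference sketch (not from the model card): running the released MNLI classifier on a premise/hypothesis pair through the `transformers` text-classification pipeline. The repository id is again an assumption, and the meaning of the returned label follows whatever `id2label` mapping is stored in this checkpoint's config:
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed repository id for this checkpoint.
+ nli = pipeline(
+     "text-classification",
+     model="neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli",
+ )
+
+ # MNLI takes a (premise, hypothesis) pair as input.
+ result = nli({"text": "A soccer game with multiple males playing.",
+               "text_pair": "Some men are playing a sport."})
+ print(result)  # e.g. [{'label': ..., 'score': ...}], per the checkpoint's id2label
+ ```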
+
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24048476b7c55a7419fb13926bee2dbb54035889358775f64df361b83f39375b
+ size 806
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ecc5f4bda1b1ec69e586ea9fcc4b442fbd5338d0a924953bb692b1a0c64deb2
+ size 825
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cd26b09d1118c560df65cc3658964ba551e63fe20fd2bd8f1429acb4244ab9b
+ size 355
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e18fbc17e5e4ea8b7ddf2d1c66d855ecc2f87f5e0d9a281715ffca7fa81c72fe
+ size 438027529
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a863c20bb9664ba983f10e20d34c790e0eea92f165fc4716c4bad62f6bdc70b4
+ size 285
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96ac29cf4fe5683c93310737f61e6cb6fb49df61eac954a541100306d0a1ec8b
+ size 472
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7782a29b03faad002696fe7340f73d1fc2cb5e6f752e080ac870dfd48aa4af1
+ size 97755
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c0353b143bc2d1795886624af2b44d92f61ff76abcf86ce5b66fdb5308f8e6b
+ size 2415
vocab.txt ADDED
The diff for this file is too large to render.