ekurtic committed
Commit: d0e0bd2
Parent: b886bf6

Model release

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,39 @@
+ # oBERT-12-downstream-pruned-unstructured-80-qqp
+
+ This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+
+ It corresponds to the model presented in `Table 1 - 30 Epochs - oBERT - QQP 80%` of the paper.
+
+ ```
+ Pruning method: oBERT downstream unstructured
+ Paper: https://arxiv.org/abs/2203.07259
+ Dataset: QQP
+ Sparsity: 80%
+ Number of layers: 12
+ ```
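+
+ A minimal usage sketch (the model path below is a placeholder assumption, not confirmed by this card): load the checkpoint with `transformers` and inspect the unstructured sparsity of the encoder weights.
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ # Placeholder model path: point this at the present repository.
+ model_id = "oBERT-12-downstream-pruned-unstructured-80-qqp"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+
+ # QQP is a duplicate-question (sentence-pair) classification task.
+ inputs = tokenizer(
+     "How do I learn Python?",
+     "What is the best way to learn Python?",
+     return_tensors="pt",
+ )
+ with torch.no_grad():
+     probs = model(**inputs).logits.softmax(dim=-1)
+ print(probs)
+
+ # The 2D encoder weight matrices should be roughly 80% zeros.
+ weights = [p for n, p in model.named_parameters()
+            if "encoder" in n and n.endswith(".weight") and p.dim() == 2]
+ sparsity = sum((w == 0).sum().item() for w in weights) / sum(w.numel() for w in weights)
+ print(f"encoder weight sparsity: {sparsity:.2%}")
+ ```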
+
+ The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
+
+ ```
+ | oBERT 80%    | acc   | F1    |
+ | ------------ | ----- | ----- |
+ | seed=42 (*)  | 91.66 | 88.72 |
+ | seed=3407    | 91.51 | 88.56 |
+ | seed=54321   | 91.54 | 88.60 |
+ | ------------ | ----- | ----- |
+ | mean         | 91.57 | 88.63 |
+ | stdev        | 0.079 | 0.083 |
+ ```
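+
+ The `mean` and `stdev` rows can be reproduced from the three per-seed results above, assuming `stdev` denotes the sample standard deviation:
+
+ ```python
+ from statistics import mean, stdev  # stdev = sample standard deviation (n - 1 denominator)
+
+ acc = [91.66, 91.51, 91.54]
+ f1 = [88.72, 88.56, 88.60]
+
+ print(round(mean(acc), 2), round(stdev(acc), 3))  # 91.57 0.079
+ print(round(mean(f1), 2), round(stdev(f1), 3))    # 88.63 0.083
+ ```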
+
+ Code: _coming soon_
+
+ ## BibTeX entry and citation info
+ ```bibtex
+ @article{kurtic2022optimal,
+   title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+   author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+   journal={arXiv preprint arXiv:2203.07259},
+   year={2022}
+ }
+ ```
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ad535d6bbb3db215d328fb419aacc63367e76d2fca0aad1e5fd06f978ce1e84
+ size 890
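
all_results.json above, like the files below, is stored via Git LFS: the commit records only a small pointer containing the LFS spec version, the SHA-256 object id of the real content, and its size in bytes. A minimal sketch of reading such a pointer (hypothetical helper, not part of this repository; the example values are the all_results.json pointer above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].split(":", 1)[-1],
        "size_bytes": int(fields["size"]),
    }

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:1ad535d6bbb3db215d328fb419aacc63367e76d2fca0aad1e5fd06f978ce1e84\n"
    "size 890\n"
)
print(parse_lfs_pointer(pointer))
```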
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17f4bc8a78be847cb5ed1d04ca2c60ba026c4f7145f32c150091612d321d6a29
+ size 669
eval_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87fefd345b07c8e058b85ddd81ec851f6d7986daf1afb2881d593febc6bfccf8
+ size 438
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6c5637dc6a6e5c2e08b0fbc51de459775362f0de6e564f26676c48aeb0dc7f4
+ size 438024457
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+ size 112
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a863c20bb9664ba983f10e20d34c790e0eea92f165fc4716c4bad62f6bdc70b4
+ size 285
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48cf64484a52bab200a703254a9d1736edb350166f7f10a80f85c1c66c6afc6d
+ size 473
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2aed676a5da414ef0f06e69da4c0e28ba0cde2e6b44723dcef6afdf2270c779a
+ size 93685
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:814362915fc1843e5eb3c173600c783de6288f549fa0b2bf2c0792937214242c
+ size 2415
vocab.txt ADDED
The diff for this file is too large to render. See raw diff