Model release
- README.md +39 -0
- config.json +3 -0
- eval_results.txt +3 -0
- nbest_predictions.json +3 -0
- predictions.json +3 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +3 -0
- tokenizer_config.json +3 -0
- training_args.bin +3 -0
- vocab.txt +0 -0
README.md
ADDED
@@ -0,0 +1,39 @@
# oBERT-12-downstream-pruned-unstructured-80-squadv1

This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).

It corresponds to the model presented in `Table 1 - 30 Epochs - oBERT - SQuADv1 80%`.

```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 12
```
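As a quick sanity check of the `Sparsity: 80%` figure, the zeroed weights in the released checkpoint can be counted directly. Below is a minimal sketch, assuming the pruned weights are stored as explicit zeros in `pytorch_model.bin` and that the reported sparsity covers the 2-D encoder weight matrices (the exact set of counted layers is an assumption, not taken from the paper):

```python
import torch

# Load the released checkpoint from this repo (fetch the LFS file first).
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

total = zeros = 0
for name, tensor in state_dict.items():
    # Assumption: count only 2-D encoder weight matrices; embeddings,
    # biases, and LayerNorm parameters are excluded from the tally.
    if "encoder" in name and name.endswith(".weight") and tensor.dim() == 2:
        total += tensor.numel()
        zeros += (tensor == 0).sum().item()

print(f"unstructured sparsity: {zeros / total:.2%}")  # expected: ~80%
```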

The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):

```
| oBERT 80%     | F1    | EM    |
| ------------- | ----- | ----- |
| seed=42       | 88.95 | 82.08 |
| seed=3407 (*) | 89.16 | 82.05 |
| seed=54321    | 89.01 | 82.12 |
| ------------- | ----- | ----- |
| mean          | 89.04 | 82.08 |
| stdev         | 0.108 | 0.035 |
```
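The `mean` and `stdev` rows follow directly from the three per-seed scores; a quick check (the reported `stdev` matches the sample standard deviation, i.e. the n-1 denominator):

```python
from statistics import mean, stdev

f1 = [88.95, 89.16, 89.01]
em = [82.08, 82.05, 82.12]

# stdev() uses the sample (n-1) formula, which reproduces the table.
print(f"F1: mean={mean(f1):.2f}, stdev={stdev(f1):.3f}")  # 89.04, 0.108
print(f"EM: mean={mean(em):.2f}, stdev={stdev(em):.3f}")  # 82.08, 0.035
```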

Code: _coming soon_

## BibTeX entry and citation info

```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```
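Since the commit ships a standard BERT question-answering checkpoint (`pytorch_model.bin` plus `vocab.txt`, `tokenizer_config.json`, and `special_tokens_map.json`), it should load with the stock `transformers` QA pipeline. A minimal sketch; the hub id below is assumed from the repo name and may need adjusting:

```python
from transformers import pipeline

# Assumed hub id; replace with the actual repo path if it differs.
qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-12-downstream-pruned-unstructured-80-squadv1",
)

result = qa(
    question="What is pruned in the model?",
    context="The Optimal BERT Surgeon prunes BERT weights to 80% sparsity.",
)
print(result["answer"], result["score"])
```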
config.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c8486603b13aa568cb30a3f12f76029a3365344a234ace7353010ea02ea15338
size 659
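The binary artifacts in this commit are stored as Git LFS pointers: three-line stubs holding the spec version, the SHA-256 of the actual content, and its size in bytes. After fetching (e.g. `git lfs pull`), a file can be checked against its pointer; a small sketch using the `config.json` values above:

```python
import hashlib
from pathlib import Path

def verify_lfs_object(path: str, oid: str, size: int) -> bool:
    """Compare a fetched file against the oid/size from its LFS pointer."""
    data = Path(path).read_bytes()
    return len(data) == size and hashlib.sha256(data).hexdigest() == oid

# Values copied from the config.json pointer above.
print(verify_lfs_object(
    "config.json",
    "c8486603b13aa568cb30a3f12f76029a3365344a234ace7353010ea02ea15338",
    659,
))
```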
eval_results.txt
ADDED
@@ -0,0 +1,3 @@
exact_match = 82.05298013245033
f1 = 89.15639642737877
epoch = 30.0
nbest_predictions.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf49852f4a2242719a4185b54fa5b4ed2dbe1d29df28b503f769e719edfc6c25
size 45958883
predictions.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff5739b8ccf208e2fd3afb0a67fda2b830a552246612e550f34cc20366350b4e
size 590409
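`predictions.json` and `nbest_predictions.json` are presumably the dev-set outputs in the standard SQuAD v1.1 format, where `predictions.json` maps each question id to the predicted answer string. Under that assumption, the scores in `eval_results.txt` can be reproduced with the official scorer:

```python
import json

# Assumption: standard SQuAD v1.1 format, i.e. {question_id: answer_text}.
with open("predictions.json") as f:
    predictions = json.load(f)

print(len(predictions), "answers")      # one entry per dev-set question
print(next(iter(predictions.items())))  # (question_id, predicted_answer)

# To reproduce eval_results.txt, run the official SQuAD v1.1 scorer:
#   python evaluate-v1.1.py dev-v1.1.json predictions.json
```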
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f339a6ee278b757c0d5faa448b400956e64dea7784667d9e86df871318c5f9d8
size 435661303
special_tokens_map.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
size 112
tokenizer_config.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a863c20bb9664ba983f10e20d34c790e0eea92f165fc4716c4bad62f6bdc70b4
size 285
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e182a9a298c4c0863b3c4e03437580f056d30844dff7b92409ca8ca9a7bb85f
size 2415
vocab.txt
ADDED
The diff for this file is too large to render.