---
language:
- en
metrics:
- glue
---

# FastBERT-1x11-long

This is the final model described in "Exponentially Faster Language Modelling".
The model was pretrained just like crammedBERT, but with fast feedforward networks (FFFs) in place of the traditional feedforward layers.
To use this model, you need the code from the repo at https://github.com/pbelcak/FastBERT.

The paper is available at https://arxiv.org/abs/2311.10770; its abstract follows:

> Language models only really need to use an exponential fraction of their neurons for individual inferences.
> As proof, we present FastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. FastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs).
> While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights.

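To give a concrete picture of conditional execution, here is a minimal, self-contained sketch of a fast-feedforward-style layer. It is my own simplification for illustration, not the authors' implementation, and it assumes single-neuron leaves with tree depth 11 (matching the "1x11" in the model name): the layer is a complete binary tree of 4095 neurons, each token only touches the 12 neurons on one root-to-leaf path, and the sign of each neuron's pre-activation selects the next child.

```python
import torch

class FFFSketch(torch.nn.Module):
    """Toy fast-feedforward layer: a complete binary tree of neurons in which
    each token traverses a single root-to-leaf path (12 of 4095 neurons for depth 11)."""

    def __init__(self, width: int = 768, depth: int = 11):
        super().__init__()
        self.depth = depth
        n_neurons = 2 ** (depth + 1) - 1                      # 4095 neurons for depth 11
        self.w_in = torch.nn.Parameter(torch.randn(n_neurons, width) / width ** 0.5)
        self.w_out = torch.nn.Parameter(torch.randn(n_neurons, width) / width ** 0.5)

    def forward(self, x):                                     # x: (batch, width), one token per row
        out = torch.zeros_like(x)
        node = torch.zeros(x.shape[0], dtype=torch.long)      # every token starts at the root
        for _ in range(self.depth + 1):                       # 11 internal nodes + 1 leaf = 12 neurons
            logit = torch.einsum("bw,bw->b", x, self.w_in[node])
            out = out + torch.nn.functional.gelu(logit).unsqueeze(-1) * self.w_out[node]
            node = 2 * node + 1 + (logit > 0).long()          # the sign of the logit picks the child
        return out

y = FFFSketch()(torch.randn(4, 768))                          # y has shape (4, 768)
```

Because only one path is evaluated per token, the per-layer cost grows with the tree depth (logarithmically in the number of neurons) rather than with the layer width; the authors' actual implementation and the CPU/PyTorch speedup code live in the repository linked above.
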
## Intended uses & limitations

This is the raw pretraining checkpoint. You can use it to fine-tune on a downstream task such as GLUE, as discussed in the paper. The model is provided only as a sanity check for research purposes; it is untested and unfit for deployment.

### How to use

```python
import cramming  # from https://github.com/pbelcak/FastBERT; importing it makes the custom architecture available to transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pbelcak/FastBERT-1x11-long")
model = AutoModelForMaskedLM.from_pretrained("pbelcak/FastBERT-1x11-long")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

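Continuing from the snippet above, if you want to look at the model's predictions for a masked position, a generic `transformers` masked-LM pattern along these lines should work (this is not from the FastBERT repo; it assumes the tokenizer defines a standard mask token and that the model output exposes `logits`):

```python
import torch

masked_text = "Paris is the capital of " + tokenizer.mask_token + "."
inputs = tokenizer(masked_text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

# find the position(s) of the mask token and decode the highest-scoring prediction
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```
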
### Limitations and bias

The training data used for this model was further filtered and sorted beyond the standard Pile preprocessing. These modifications were not tested for unintended consequences.

## Training data, Training procedure, Preprocessing, Pretraining

These are discussed in the paper. You can find the final configurations for each in this repository.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results.

GLUE-dev results:

| Task | MNLI (m/mm) | QQP | QNLI | SST-2 | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:-----:|:----:|:----:|:-------:|
| Score| 81.3 | 87.6 | 89.7 | 89.9 | 86.4 | 87.5 | 60.7 | 83.0 |

These numbers are the median over 5 trials of "GLUE-sane" fine-tuning, evaluated on the GLUE dev set. In this variant of GLUE, fine-tuning is limited to at most 5 epochs per task, and the same hyperparameters must be used for all tasks.

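The authors' fine-tuning runs go through the evaluation scripts in the FastBERT repository. Purely to illustrate the protocol above (at most 5 epochs per task, identical hyperparameters for every task), a generic `transformers`/`datasets` setup might look roughly like the sketch below; note that it is an assumption that `AutoModelForSequenceClassification` can load this checkpoint with a classification head, and the hyperparameter values are illustrative, not the paper's:

```python
import cramming  # as above: makes the custom architecture available to transformers
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

task = "rte"  # any GLUE task; the same settings are reused for all of them
raw = load_dataset("glue", task)
tokenizer = AutoTokenizer.from_pretrained("pbelcak/FastBERT-1x11-long")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, padding="max_length", max_length=128)

data = raw.map(tokenize, batched=True)

# assumption: the checkpoint's custom config supports a sequence-classification head
model = AutoModelForSequenceClassification.from_pretrained("pbelcak/FastBERT-1x11-long", num_labels=2)

args = TrainingArguments(
    output_dir="fastbert-" + task,
    num_train_epochs=5,                 # the GLUE-sane cap
    learning_rate=2e-5,                 # shared across all tasks (illustrative)
    per_device_train_batch_size=32,
)
Trainer(model=model, args=args,
        train_dataset=data["train"], eval_dataset=data["validation"]).train()
```
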
### BibTeX entry and citation info

```bibtex
@article{belcak2023exponential,
  title = {Exponentially {{Faster}} {{Language}} {{Modelling}}},
  author = {Belcak, Peter and Wattenhofer, Roger},
  year = {2023},
  month = nov,
  eprint = {2311.10770},
  eprinttype = {arxiv},
  primaryclass = {cs},
  publisher = {{arXiv}},
  url = {https://arxiv.org/pdf/2311.10770},
  urldate = {2023-11-21},
  archiveprefix = {arXiv},
  keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning},
  journal = {arxiv:2311.10770[cs]}
}
```