---
language: en
thumbnail: null
license: mit
tags:
- question-answering
- bert
- bert-base
datasets:
- squad
metrics:
- squad
widget:
- text: Where is the Eiffel Tower located?
  context: >-
    The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in
    Paris, France. It is named after the engineer Gustave Eiffel, whose
    company designed and built the tower.
- text: Who is Frederic Chopin?
  context: >-
    Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 –
    17 October 1849), was a Polish composer and virtuoso pianist of the
    Romantic era who wrote primarily for solo piano.
---
# BERT-base uncased model fine-tuned on SQuAD v1
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain 30.0% of the original weights.

This model **CANNOT be used without the `nn_pruning` `optimize_model` function**, as it uses NoNorms instead of LayerNorms, and this is not currently supported by the Transformers library.

It uses ReLUs instead of the GeLUs of the original BERT network to speed up inference. This needs no special handling, as it is supported by the Transformers library and flagged in the model config by the `"hidden_act": "relu"` entry.
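A quick way to confirm this is to read the activation entry from the checkpoint's config (a minimal sketch using `AutoConfig`, which downloads only the config file, not the weights):

```python
from transformers import AutoConfig

# Load just the configuration and inspect the activation function entry.
config = AutoConfig.from_pretrained(
    "madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1"
)
print(config.hidden_act)  # expected: "relu"
```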
The model contains 45.0% of the original weights overall (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices, it ran 2.01x as fast as BERT-base on the evaluation. This is possible because the pruning method leads to structured matrices: an interactive plot on the model page shows the non-zero/zero parts of each matrix.

In terms of accuracy, its F1 is 89.19, compared with 88.5 for BERT-base, an F1 gain of 0.69.
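As a rough way to see that structure without the plot, one can check for fully zeroed rows in a feed-forward weight matrix. This sketch assumes the checkpoint stores the pruned weights zero-padded (which is what `optimize_model`'s row/column removal implies) and uses the standard `BertForQuestionAnswering` module layout:

```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained(
    "madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1"
)

# Structured pruning zeroes whole rows/columns, which is what later lets
# optimize_model pack the matrix into a smaller dense one.
w = model.bert.encoder.layer[0].intermediate.dense.weight
zero_rows = int((w.abs().sum(dim=1) == 0).sum())
print(f"layer 0 intermediate: {zero_rows} / {w.shape[0]} rows entirely zero")
```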
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [BERT base uncased](https://huggingface.co/bert-base-uncased) checkpoint on SQuAD1.1, and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad). This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 55 heads out of a total of 144 were removed (38.2%). A plot on the model page gives a detailed view of how the remaining heads are distributed in the network after pruning.
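If the pruned heads are recorded in the checkpoint config (the parameter-count print in the usage example below suggests they are), the per-layer head counts that Transformers maintains should reflect the removal directly after loading. A sketch under that assumption, using the standard BERT module layout:

```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained(
    "madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1"
)

# Each BertSelfAttention tracks its own (possibly reduced) head count.
remaining = sum(
    layer.attention.self.num_attention_heads for layer in model.bert.encoder.layer
)
print(f"remaining attention heads: {remaining} / 144")  # expected: 89 (144 - 55)
```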
## Details of the SQuAD1.1 dataset
| Dataset  | Split | # samples |
|----------|-------|-----------|
| SQuAD1.1 | train | 90.6k     |
| SQuAD1.1 | eval  | 11.1k     |
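For reference, the raw dataset can be loaded with the `datasets` library; the raw example counts differ slightly from the table above (which likely reports post-tokenization feature counts), and the eval split is named `validation`:

```python
from datasets import load_dataset

squad = load_dataset("squad")
print(squad["train"].num_rows)       # 87599 raw training examples
print(squad["validation"].num_rows)  # 10570 raw eval examples
```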
## Fine-tuning

- Python: `3.8.5`
- Machine specs:
  - Memory: 64 GiB
  - GPUs: 1 GeForce RTX 3090, with 24 GiB memory
  - GPU driver: 455.23.05, CUDA: 11.1
## Results

**Pytorch model file size**: `374MB` (original BERT: `438MB`)
| Metric | Value | Original ([Table 2](https://arxiv.org/abs/1810.04805) of the BERT paper) | Variation |
|--------|-------|--------------------------------------------------------------------------|-----------|
| EM     | 82.21 | 80.8                                                                     | +1.41     |
| F1     | 89.19 | 88.5                                                                     | +0.69     |
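These EM/F1 numbers use the standard SQuAD metric, which can be reproduced with the `evaluate` library; a minimal sketch with a single toy prediction (not this model's actual outputs):

```python
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "1", "prediction_text": "Gustave Eiffel"}]
references = [
    {"id": "1", "answers": {"text": ["Gustave Eiffel"], "answer_start": [61]}}
]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```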
## Example Usage

Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.

```bash
pip install nn_pruning
```
Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` when the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model

qa_pipeline = pipeline(
    "question-answering",
    model="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1",
    tokenizer="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1",
)

print("BERT-base parameters: 110M")
print(f"Parameters count (includes head pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")

# Required for this checkpoint: pack the pruned linear layers into smaller
# dense matrices (see the note about optimize_model at the top of this card).
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")

print(f"Parameters count after optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")

predictions = qa_pipeline({
    "context": "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
    "question": "Who is Frederic Chopin?",
})
print("Predictions", predictions)
```