license: mit
datasets:
- BabyLM_strict-small
language:
- en
metrics:
- glue
Model Card for SzegedAI/babylm-strict-small-mlsm
This base-sized DeBERTa model was created using the Masked Latent Semantic Modeling (MLSM) pre-training objective, a sample-efficient alternative to classic Masked Language Modeling (MLM).
During MLSM, the objective is to recover the latent semantic profile of the masked tokens, as opposed to recovering their exact identity.
The contextualized latent semantic profile used during pre-training is determined by performing sparse coding of the hidden representations of a partially pre-trained model (in this particular case, a base-sized DeBERTa model pre-trained over only 20 million input sequences).
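As a rough illustration of this objective (not the authors' implementation), the sketch below assumes a pre-computed dictionary of latent semantic atoms and uses randomly generated hidden states in place of a real teacher model; the dictionary size (3000) and hidden size (768) are illustrative. It turns the sparse codes of the hidden representations into a probability distribution and trains the student to match it with a KL-divergence loss:

import numpy as np
import torch
import torch.nn.functional as F
from sklearn.decomposition import SparseCoder

# Stand-ins for real tensors: hidden states of the masked positions from a
# partially pre-trained teacher and a dictionary learned over such states.
hidden_states = np.random.randn(16, 768).astype(np.float32)   # 16 masked tokens
dictionary = np.random.randn(3000, 768).astype(np.float32)    # 3000 latent "semantic atoms"

# Sparse, non-negative codes of the hidden states over the dictionary.
coder = SparseCoder(dictionary=dictionary, transform_algorithm='lasso_lars',
                    transform_alpha=0.05, positive_code=True)
codes = torch.from_numpy(coder.transform(hidden_states)).float()   # shape (16, 3000)

# Normalizing the (smoothed) codes yields the latent semantic profile of each masked token.
targets = (codes + 1e-6) / (codes + 1e-6).sum(dim=-1, keepdim=True)

# The student predicts a distribution over the same latent dimensions and is
# trained to recover the profile instead of the exact token identity.
student_logits = torch.randn(16, 3000, requires_grad=True)
loss = F.kl_div(F.log_softmax(student_logits, dim=-1), targets, reduction='batchmean')
loss.backward()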
Model Details
Model Description
- Developed by: SzegedAI
- Model type: transformer encoder
- Language: English
- License: MIT
Model Sources
- Repository: https://github.com/szegedai/MLSM
- Paper: Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling
How to Get Started with the Model
The pre-trained model can be used in the usual manner; e.g., to fine-tune it on a particular sequence classification task, load it with the following code:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('SzegedAI/babylm-strict-small-mlsm')
model = AutoModelForSequenceClassification.from_pretrained('SzegedAI/babylm-strict-small-mlsm')
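A quick sanity check on the loaded model might look like the snippet below (the example sentence is a placeholder, and the classification head is randomly initialized until fine-tuning):

inputs = tokenizer('This is a test sentence.', return_tensors='pt')
outputs = model(**inputs)
print(outputs.logits.shape)   # (1, num_labels)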
Training Details
Training Data
The model was pre-trained on the 10-million-token BabyLM strict-small dataset.
Training Procedure
Preprocessing
Training Hyperparameters
Pre-training was conducted with a batch size of 128 sequences and a gradient accumulation over 8 batches, resulting in an effective batch size of 1024.
A total of 100,000 update steps were performed using the AdamW optimizer with linear learning rate scheduling and a peak learning rate of 1e-4.
A maximum sequence length of 128 tokens was employed throughout pre-training.
- Training regime: fp32
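These hyperparameters could be expressed, for instance, with Hugging Face TrainingArguments roughly as follows (the output directory name is an illustrative assumption, and the warmup schedule is omitted because it is not specified in this card):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir='mlsm-babylm-pretraining',   # hypothetical name
    per_device_train_batch_size=128,
    gradient_accumulation_steps=8,          # effective batch size of 1024
    max_steps=100_000,
    learning_rate=1e-4,                     # peak learning rate
    lr_scheduler_type='linear',
    optim='adamw_torch',
    fp16=False,                             # fp32 training regime
    bf16=False,
)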
Evaluation
Metrics
The model was evaluated using the BabyLM evaluation pipeline.
Results
The results below were obtained by fine-tuning the model on a wide range of tasks.
For each task, four fine-tuning experiments were performed; the only difference between the runs was the random initialization of the task-specific classification head.
Apart from reducing the batch size from 64 to 32 (in order to avoid OOM errors), we used the recommended hyperparameter settings of the shared task.
Both the average and the standard deviation over these runs are reported below for each task.
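A minimal sketch of this aggregation protocol is given below; the fine_tune_and_evaluate helper is hypothetical (it stands in for a run of the BabyLM evaluation pipeline) and returns a dummy score here:

import numpy as np

def fine_tune_and_evaluate(task: str, seed: int) -> float:
    # Hypothetical placeholder: in practice this would fine-tune the model on
    # `task` with the classification head initialized from `seed` and return
    # the dev-set metric; here it just returns a dummy number.
    rng = np.random.default_rng(seed)
    return float(rng.normal(0.667, 0.013))

scores = [fine_tune_and_evaluate('boolq', seed) for seed in range(4)]
print(f'{np.mean(scores):.3f} +/- {np.std(scores):.3f}')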
(Super)GLUE
Unless stated otherwise (in parentheses after the task name), the default evaluation metric is accuracy.
Task | Avg. | Std. |
---|---|---|
BoolQ | 0.667 | 0.013 |
CoLA (MCC) | 0.417 | 0.022 |
MNLI | 0.754 | 0.006 |
MNLI-mm | 0.754 | 0.010 |
MRPC (F1) | 0.765 | 0.019 |
MultiRC | 0.568 | 0.068 |
QNLI | 0.824 | 0.003 |
QQP (F1) | 0.835 | 0.008 |
RTE | 0.520 | 0.024 |
SST2 | 0.892 | 0.006 |
WSC | 0.608 | 0.016 |
MSGS
Results reported in MCC.
Task | Avg. | Std. |
---|---|---|
control_raising_control | 0.735 | 0.036 |
control_raising_lexical_content_the | -0.073 | 0.300 |
control_raising_relative_token_position | -0.652 | 0.140 |
lexical_content_the_control | 1.000 | 0.000 |
main_verb_control | 0.998 | 0.002 |
main_verb_lexical_content_the | -0.947 | 0.071 |
main_verb_relative_token_position | -0.395 | 0.204 |
relative_position_control | 0.896 | 0.076 |
syntactic_category_control | 0.784 | 0.078 |
syntactic_category_lexical_content_the | -0.166 | 0.119 |
syntactic_category_relative_position | -0.528 | 0.038 |
Environmental Impact
- Hardware Type: RTX A6000
- Hours used: 70
- Carbon Emitted: approx. 9 kg CO2 eq. (based on the Machine Learning Impact calculator)
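For reference, this estimate is consistent with a back-of-the-envelope calculation of the following kind (the 300 W board power of the RTX A6000 and the 0.432 kg CO2 eq./kWh carbon efficiency are assumptions in line with the calculator's defaults, not values stated in this card):

hours = 70
gpu_power_kw = 0.3          # assumed RTX A6000 board power (300 W)
carbon_per_kwh = 0.432      # assumed grid carbon efficiency, kg CO2 eq. per kWh
print(hours * gpu_power_kw * carbon_per_kwh)   # ~9.1 kg CO2 eq.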
Citation
The MLSM pre-training objective is introduced in the ACL Findings paper Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling.
BibTeX:
@inproceedings{berend-2023-masked,
title = "Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling",
author = "Berend, G{\'a}bor",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.876",
pages = "13949--13962",
abstract = "In this paper, we propose an alternative to the classic masked language modeling (MLM) pre-training paradigm, where the objective is altered from the reconstruction of the exact identity of randomly selected masked subwords to the prediction of their latent semantic properties. We coin the proposed pre-training technique masked latent semantic modeling (MLSM for short). In order to make the contextualized determination of the latent semantic properties of the masked subwords possible, we rely on an unsupervised technique which uses sparse coding. Our experimental results reveal that the fine-tuned performance of those models that we pre-trained via MLSM is consistently and significantly better compared to the use of vanilla MLM pretraining and other strong baselines.",
}