
EFTNAS Model Card: eftnas-s1-bert-base

A super-network fine-tuned from BERT-base on the GLUE benchmark using EFTNAS.
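
As a minimal loading sketch, assuming the checkpoint is compatible with the standard transformers BERT classes (an assumption; extracting the optimal sub-network requires the EFTNAS code itself, which is not shown here):

```python
# Minimal sketch (assumption: the checkpoint loads with standard
# transformers classes; EFTNAS sub-network extraction is separate).
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "IntelLabs/eftnas-s1-bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("EFTNAS searches for efficient sub-networks.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)
```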

Model Details

EFTNAS-S1 is a first-order weight-reordered super-network built on BERT-base; an optimal sub-network is searched for within it (see the citation below).

Training and Evaluation

The super-network was fine-tuned and evaluated on the GLUE benchmark.
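
For reference, the GLUE tasks reported below can be loaded with the datasets library (a sketch; SST-2 is an arbitrary choice of task):

```python
# Load one GLUE task; the other tasks in the results table use the
# configs "mnli", "qnli", "qqp", "cola", "mrpc", and "rte".
from datasets import load_dataset

sst2 = load_dataset("glue", "sst2")
print(sst2["validation"][0])
```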

Results

Results of the optimal sub-network discovered from the super-network:

| Model     | Set  | GFLOPs | GLUE Avg. | MNLI-m | QNLI | QQP  | SST-2 | CoLA | MRPC | RTE  |
|-----------|------|--------|-----------|--------|------|------|-------|------|------|------|
| EFTNAS-S1 | Dev  | 5.7    | 82.9      | 84.6   | 90.8 | 91.2 | 93.5  | 60.6 | 90.8 | 69.0 |
| EFTNAS-S1 | Test | 5.7    | 77.7      | 83.7   | 89.9 | 71.8 | 93.4  | 52.6 | 87.6 | 65.0 |
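
The GLUE Avg. column is the unweighted mean of the seven per-task scores, which can be checked directly:

```python
# Verify that GLUE Avg. equals the unweighted mean of the task scores.
dev = [84.6, 90.8, 91.2, 93.5, 60.6, 90.8, 69.0]   # MNLI-m .. RTE (dev)
test = [83.7, 89.9, 71.8, 93.4, 52.6, 87.6, 65.0]  # MNLI-m .. RTE (test)
print(round(sum(dev) / len(dev), 1))    # 82.9
print(round(sum(test) / len(test), 1))  # 77.7
```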

Model Sources

Citation

@inproceedings{eftnas2024,
  title={Searching for Efficient Language Models in First-Order Weight-Reordered Super-Networks},
  author={J. Pablo Munoz and Yi Zheng and Nilesh Jain},
  booktitle={The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation},
  year={2024},
  url={}
}

License

Apache-2.0


Dataset used to train IntelLabs/eftnas-s1-bert-base: GLUE