benjamin committed
Commit afea91a • 1 Parent(s): c596805

Update README.md

Files changed (1): README.md (+14 -8)
README.md CHANGED
@@ -9,7 +9,7 @@ Model trained with WECHSEL: Effective initialization of subword embeddings for c
 
 See the code here: https://github.com/CPJKU/wechsel
 
-And the paper here: https://arxiv.org/abs/2112.06598
+And the paper here: https://aclanthology.org/2022.naacl-main.293/
 
 ## Performance
 
@@ -65,12 +65,18 @@ See our paper for details.
 Please cite WECHSEL as
 
 ```
-@misc{minixhofer2021wechsel,
-      title={WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models},
-      author={Benjamin Minixhofer and Fabian Paischer and Navid Rekabsaz},
-      year={2021},
-      eprint={2112.06598},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL}
+@inproceedings{minixhofer-etal-2022-wechsel,
+    title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
+    author = "Minixhofer, Benjamin  and
+      Paischer, Fabian  and
+      Rekabsaz, Navid",
+    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
+    month = jul,
+    year = "2022",
+    address = "Seattle, United States",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.naacl-main.293",
+    pages = "3992--4006",
+    abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
 }
 ```
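
The abstract in the updated citation summarizes the core of the method: the English tokenizer is replaced with a target-language tokenizer, and the new token embeddings are initialized to be semantically close to English tokens via aligned multilingual static word embeddings. The snippet below is a minimal NumPy sketch of that initialization idea only, not the authors' implementation; the function name `init_target_embeddings`, the k-nearest-neighbour weighting, and the `temperature` value are illustrative assumptions, and the reference code lives at https://github.com/CPJKU/wechsel.

```python
# Illustrative sketch of similarity-based embedding initialization (assumptions noted above);
# this is NOT the WECHSEL reference implementation.
import numpy as np

def init_target_embeddings(
    source_subword_vecs: np.ndarray,      # aligned static vectors for source subwords, shape (S, d)
    target_subword_vecs: np.ndarray,      # aligned static vectors for target subwords, shape (T, d)
    source_model_embeddings: np.ndarray,  # pretrained LM input embeddings, shape (S, h)
    k: int = 10,                          # neighbours per target subword (assumed value)
    temperature: float = 0.1,             # softmax temperature for the weights (assumed value)
) -> np.ndarray:
    """Initialize target-language LM embeddings (T, h) from their most similar source subwords."""
    # Cosine similarity between every target subword and every source subword.
    src = source_subword_vecs / np.linalg.norm(source_subword_vecs, axis=1, keepdims=True)
    tgt = target_subword_vecs / np.linalg.norm(target_subword_vecs, axis=1, keepdims=True)
    sims = tgt @ src.T                                        # shape (T, S)

    out = np.empty((tgt.shape[0], source_model_embeddings.shape[1]))
    for t in range(tgt.shape[0]):
        nearest = np.argpartition(-sims[t], k)[:k]            # k most similar source subwords
        weights = np.exp(sims[t, nearest] / temperature)
        weights /= weights.sum()
        # Weighted average of the corresponding pretrained LM embeddings.
        out[t] = weights @ source_model_embeddings[nearest]
    return out
```

In the paper the transferred model is then further trained on target-language text; only the embedding initialization step is sketched here.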