asahi417 committed on
Commit
d61ec10
1 Parent(s): 5a3c6b7

commit files to HF hub

Files changed (1): README.md +18 -0
README.md ADDED
# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-10000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en), produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of a language model to reduce its size.
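Since the trimmed model keeps the original sequence-classification head, it can presumably be loaded through the standard `transformers` API; a minimal sketch (model name taken from this card, assuming the usual text-classification pipeline interface):

```python
from transformers import pipeline

# Load the trimmed sentiment model from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-10000",
)

# The pipeline returns a list of {"label": ..., "score": ...} dicts.
print(classifier("I love this new phone!"))
```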
The following table summarizes the trimming process.

|                                | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-10000 |
|:-------------------------------|-----------------------------------------------:|------------------------------------------------------------------:|
| parameter_size_full            | 278,045,955                                    | 93,725,955                                                         |
| parameter_size_embedding       | 192,001,536                                    | 7,681,536                                                          |
| vocab_size                     | 250,002                                        | 10,002                                                             |
| compression_rate_full (%)      | 100.0                                          | 33.71                                                              |
| compression_rate_embedding (%) | 100.0                                          | 4.0                                                                |
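The compression rates follow directly from the parameter counts: each rate is the trimmed count divided by the original count. A quick arithmetic check using the figures from the table (hidden size 768 for XLM-R base):

```python
# Figures copied from the table above; XLM-R base uses a hidden size of 768.
hidden_size = 768

full_params, trimmed_params = 278_045_955, 93_725_955
full_vocab, trimmed_vocab = 250_002, 10_002

# Embedding parameters are vocab_size * hidden_size.
full_emb = full_vocab * hidden_size        # 192,001,536
trimmed_emb = trimmed_vocab * hidden_size  # 7,681,536

print(round(trimmed_params / full_params * 100, 2))  # 33.71 (compression_rate_full)
print(round(trimmed_emb / full_emb * 100, 2))        # 4.0 (compression_rate_embedding)
```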

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| en       | vocabtrimmer/mc4_validation | text           | en           | validation    |             10000 |             2 |
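Conceptually, vocabulary trimming keeps only the token ids that appear at least `min_frequency` times in the target-language corpus and slices the embedding matrix down to those rows. A rough numpy sketch of that row-selection step (the frequency counting over mc4 is elided; `kept_ids` here is a placeholder, not the tool's actual selection logic):

```python
import numpy as np

hidden_size = 768  # XLM-R base

# Stand-in for the original 250,002 x 768 embedding matrix.
full_embeddings = np.zeros((250_002, hidden_size), dtype=np.float32)

# Placeholder for the token ids that survive the frequency filter
# (the real tool derives these by counting tokens in the corpus).
kept_ids = np.arange(10_002)

# The trim itself is a row selection on the embedding matrix.
trimmed_embeddings = full_embeddings[kept_ids]
print(trimmed_embeddings.shape)  # (10002, 768)
```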