Commit a6bb16f
Parent(s): bda5e8d

feat: update the readme

Files changed (1):
  README.md +103 -1
README.md CHANGED
@@ -1,3 +1,105 @@
+---
+tags:
+- transformers
+- xlm-roberta
+library_name: transformers
+license: cc-by-nc-4.0
+language:
+- multilingual
+- af
+- am
+- ar
+- as
+- az
+- be
+- bg
+- bn
+- br
+- bs
+- ca
+- cs
+- cy
+- da
+- de
+- el
+- en
+- eo
+- es
+- et
+- eu
+- fa
+- fi
+- fr
+- fy
+- ga
+- gd
+- gl
+- gu
+- ha
+- he
+- hi
+- hr
+- hu
+- hy
+- id
+- is
+- it
+- ja
+- jv
+- ka
+- kk
+- km
+- kn
+- ko
+- ku
+- ky
+- la
+- lo
+- lt
+- lv
+- mg
+- mk
+- ml
+- mn
+- mr
+- ms
+- my
+- ne
+- nl
+- 'no'
+- om
+- or
+- pa
+- pl
+- ps
+- pt
+- ro
+- ru
+- sa
+- sd
+- si
+- sk
+- sl
+- so
+- sq
+- sr
+- su
+- sv
+- sw
+- ta
+- te
+- th
+- tl
+- tr
+- ug
+- uk
+- ur
+- uz
+- vi
+- xh
+- yi
+- zh
+---
 Core implementation of Jina XLM-RoBERTa
 
 This implementation is adapted from [XLM-Roberta](https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta). In contrast to the original implementation, this model uses Rotary positional encodings and supports flash-attention 2.
@@ -9,4 +111,4 @@ to be added soon
 
 ### Converting weights
 
-Weights from an [original XLMRoberta model](https://huggingface.co/FacebookAI/xlm-roberta-large) can be converted using the `convert_roberta_weights_to_flash.py` script in the model repository.
+Weights from an [original XLMRoberta model](https://huggingface.co/FacebookAI/xlm-roberta-large) can be converted using the `convert_roberta_weights_to_flash.py` script in the model repository.
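For context on the README updated above: a minimal usage sketch for loading the model it describes. The repository id is a placeholder, and loading via `trust_remote_code` is an assumption based on this being a custom implementation repo; neither detail appears in the diff.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical repository id -- substitute the actual Hugging Face repo.
repo_id = "<org>/<jina-xlm-roberta>"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# trust_remote_code loads the custom RoPE / flash-attention-2 implementation
# shipped in the repo; half precision is what flash-attention 2 kernels expect.
model = AutoModel.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to("cuda")

inputs = tokenizer("Hello, world!", return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```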
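The diff references `convert_roberta_weights_to_flash.py` without showing its interface. As a rough illustration of what such a conversion typically involves (the target key names and fused-QKV layout below are assumptions modeled on flash-attention-style encoders, not the script's actual behavior):

```python
import torch
from transformers import AutoModel

# Load the original checkpoint whose weights are to be converted.
src = AutoModel.from_pretrained("FacebookAI/xlm-roberta-large")
sd = src.state_dict()

converted = {}
for i in range(src.config.num_hidden_layers):
    p = f"encoder.layer.{i}.attention.self"
    # Flash-attention blocks typically use a single fused QKV projection,
    # so the separate query/key/value matrices are concatenated row-wise.
    # (Target key names here are hypothetical.)
    converted[f"layers.{i}.mixer.Wqkv.weight"] = torch.cat(
        [sd[f"{p}.{n}.weight"] for n in ("query", "key", "value")], dim=0
    )
    converted[f"layers.{i}.mixer.Wqkv.bias"] = torch.cat(
        [sd[f"{p}.{n}.bias"] for n in ("query", "key", "value")], dim=0
    )
# Remaining embeddings, layer norms, and MLP weights would be renamed 1:1;
# absolute position embeddings are dropped, since this model uses RoPE.
torch.save(converted, "xlm_roberta_flash.pt")
```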