ZhiyuanChen committed on
Commit c397eef
1 Parent(s): 2856727

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,265 @@
---
language: rna
tags:
  - Biology
  - RNA
license: agpl-3.0
datasets:
  - multimolecule/rnacentral
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
  - example_title: "microRNA-21"
    text: "UAGC<mask>UAUCAGACUGAUGUUGA"
    output:
      - label: "G"
        score: 0.10253211855888367
      - label: "R"
        score: 0.09673436731100082
      - label: "A"
        score: 0.09126435220241547
      - label: "V"
        score: 0.08036787807941437
      - label: "S"
        score: 0.07541776448488235
---

# RNAErnie

Pre-trained model on non-coding RNA (ncRNA) using a multi-stage masked language modeling (MLM) objective.

## Statement

_Multi-purpose RNA language modelling with motif-aware pretraining and type-guided fine-tuning_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00836-4), which is a Closed Access / Author-Fee journal.

> Machine learning has been at the forefront of the movement for free and open access to research.
>
> We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step.

The MultiMolecule team is committed to the principles of open access and open science.

We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals.

Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu).

## Disclaimer

This is an UNOFFICIAL implementation of _RNAErnie: An RNA Language Model with Structure-enhanced Representations_ by Ning Wang, Jiang Bian, Haoyi Xiong, et al.

The OFFICIAL repository of RNAErnie is at [CatIIIIIIII/RNAErnie](https://github.com/CatIIIIIIII/RNAErnie).

!!! Danger "Reproducibility"

    The MultiMolecule team is unable to confirm that the provided model and checkpoints produce the same intermediate representations as the original implementation.
    This is because the proposed method is published in a Closed Access / Author-Fee journal.

**The team releasing RNAErnie did not write this model card for this model, so this model card has been written by the MultiMolecule team.**

## Model Details

RNAErnie is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

Note that during the conversion process, additional tokens such as `[IND]` and ncRNA class symbols are removed.

### Model Specification

| Num Layers | Hidden Size | Num Heads | Intermediate Size | Num Parameters (M) | FLOPs (G) | MACs (G) | Max Num Tokens |
| ---------- | ----------- | --------- | ----------------- | ------------------ | --------- | -------- | -------------- |
| 12         | 768         | 12        | 3072              | 86.06              | 22.36     | 11.17    | 512            |

### Links

- **Code**: [multimolecule.rnaernie](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/rnaernie)
- **Weights**: [`multimolecule/rnaernie`](https://huggingface.co/multimolecule/rnaernie)
- **Data**: [RNAcentral](https://rnacentral.org)
- **Paper**: Multi-purpose RNA language modelling with motif-aware pretraining and type-guided fine-tuning
- **Developed by**: Ning Wang, Jiang Bian, Yuchen Li, Xuhong Li, Shahid Mumtaz, Linghe Kong, Haoyi Xiong
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ERNIE](https://huggingface.co/nghuyong/ernie-3.0-base-zh)
- **Original Repository**: [CatIIIIIIII/RNAErnie](https://github.com/CatIIIIIIII/RNAErnie)

## Usage

The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:

```bash
pip install multimolecule
```

### Direct Use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/rnaernie')
>>> unmasker("uagc<mask>uaucagacugauguuga")

[{'score': 0.10253211855888367,
  'token': 8,
  'token_str': 'G',
  'sequence': 'U A G C G U A U C A G A C U G A U G U U G A'},
 {'score': 0.09673436731100082,
  'token': 18,
  'token_str': 'R',
  'sequence': 'U A G C R U A U C A G A C U G A U G U U G A'},
 {'score': 0.09126435220241547,
  'token': 6,
  'token_str': 'A',
  'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
 {'score': 0.08036787807941437,
  'token': 13,
  'token_str': 'V',
  'sequence': 'U A G C V U A U C A G A C U G A U G U U G A'},
 {'score': 0.07541776448488235,
  'token': 20,
  'token_str': 'S',
  'sequence': 'U A G C S U A U C A G A C U G A U G U U G A'}]
```

### Downstream Use

#### Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

```python
from multimolecule import RnaTokenizer, RnaErnieModel


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieModel.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')

output = model(**input)
```

#### Sequence Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RnaErnieForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieForSequencePrediction.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])

output = model(**input, labels=label)
```

#### Nucleotide Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.

Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RnaErnieForNucleotidePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieForNucleotidePrediction.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))

output = model(**input, labels=label)
```

#### Contact Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RnaErnieForContactPrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieForContactPrediction.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)
```

## Training Details

RNAErnie used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

### Training Data

The RNAErnie model was pre-trained on [RNAcentral](https://rnacentral.org). RNAcentral is a comprehensive database of non-coding RNA sequences from a wide range of species. It combines 47 different databases, adding up to around 34 million RNA sequences in total.

RNAErnie used a subset of RNAcentral for pre-training. The subset contains 23 million sequences.
RNAErnie preprocessed all tokens by replacing "T"s with "U"s.

Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.

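For instance, a minimal sketch of this behaviour (the token lists in the comments are illustrative expectations, not verified outputs):

```python
from multimolecule import RnaTokenizer

# by default, DNA-style "T"s are converted to "U"s before tokenization
tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
print(tokenizer.tokenize("TAGCTT"))  # expected: ['U', 'A', 'G', 'C', 'U', 'U']

# disable the conversion to keep "T"s as-is; since the vocabulary is
# RNA-only, such tokens may fall back to `<unk>`
tokenizer_raw = RnaTokenizer.from_pretrained('multimolecule/rnaernie', replace_T_with_U=False)
print(tokenizer_raw.tokenize("TAGCTT"))
```
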
### Training Procedure

#### Preprocessing

RNAErnie used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token different from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.

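As an illustration, here is a minimal sketch of this masking scheme over a batch of token IDs. It is not the original training code; `transformers.DataCollatorForLanguageModeling` implements the same logic:

```python
import torch


def mask_tokens(input_ids, mask_token_id, vocab_size, special_token_ids, mlm_probability=0.15):
    """BERT-style masking: select 15% of tokens; of those, replace 80% with
    `<mask>`, 10% with a random token, and leave 10% unchanged."""
    labels = input_ids.clone()
    probability = torch.full(labels.shape, mlm_probability)
    # never select special tokens such as <cls>, <eos>, or <pad>
    probability.masked_fill_(torch.isin(input_ids, torch.tensor(special_token_ids)), 0.0)
    masked = torch.bernoulli(probability).bool()
    labels[~masked] = -100  # loss is computed only on masked positions

    # 80% of the masked positions are replaced by <mask>
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # half of the remaining 20% (i.e. 10% overall) are replaced by a random token
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, labels.shape)[randomized]

    # the remaining 10% are left as is
    return input_ids, labels
```
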
#### Pre-Training

RNAErnie uses a special 3-stage training pipeline to pre-train the model, each stage with a different masking strategy:

- **Base-level masking**: The masking applies to individual nucleotides in the sequence.
- **Subsequence-level masking**: The masking applies to subsequences of 4-8 bp in the sequence.
- **Motif-level masking**: The model is trained on motif datasets.

The model was trained on 4 NVIDIA V100 GPUs with 32GiB memory each.

- Batch size: 50
- Learning rate: 1e-4
- Weight decay: 0.01
- Optimizer: AdamW
- Steps: 2,580,000
- Learning rate warm-up: 129,000 steps
- Learning rate cool-down: 129,000 steps
- Minimum learning rate: 5e-5

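This schedule can be approximated with a `LambdaLR`; the sketch below assumes a linear warm-up from zero to the peak rate and a linear cool-down to the minimum rate, which is an assumption rather than a documented detail:

```python
import torch
from multimolecule import RnaErnieModel

model = RnaErnieModel.from_pretrained('multimolecule/rnaernie')
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

TOTAL_STEPS, WARMUP, COOLDOWN = 2_580_000, 129_000, 129_000
PEAK_LR, MIN_LR = 1e-4, 5e-5


def lr_lambda(step: int) -> float:
    if step < WARMUP:  # linear warm-up: 0 -> peak
        return step / WARMUP
    if step >= TOTAL_STEPS - COOLDOWN:  # linear cool-down: peak -> minimum
        remaining = max(TOTAL_STEPS - step, 0) / COOLDOWN
        return (MIN_LR + (PEAK_LR - MIN_LR) * remaining) / PEAK_LR
    return 1.0  # hold at the peak rate in between


scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```
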
## Citation

Citation information is not available for papers published in Closed Access / Author-Fee journals.

## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

Please contact the authors of the RNAErnie paper for questions or comments on the paper/model.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
config.json ADDED
@@ -0,0 +1,50 @@
{
  "architectures": [
    "RnaErnieForPreTraining"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": null,
    "layer_norm_eps": 1e-12,
    "num_labels": null,
    "output_name": null,
    "problem_type": null,
    "transform": null,
    "transform_act": "gelu"
  },
  "hidden_act": "relu",
  "hidden_dropout": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "lm_head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": 768,
    "layer_norm_eps": 1e-12,
    "output_name": null,
    "transform": "nonlinear",
    "transform_act": "gelu"
  },
  "mask_token_id": 4,
  "max_position_embeddings": 513,
  "model_type": "rnaernie",
  "null_token_id": 5,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.41.2",
  "type_vocab_size": 2,
  "unk_token_id": 3,
  "use_cache": true,
  "vocab_size": 26
}
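The configuration above can also be inspected programmatically. A short sketch, assuming `RnaErnieConfig` is exported by `multimolecule` alongside the model classes used earlier:

```python
from multimolecule import RnaErnieConfig

config = RnaErnieConfig.from_pretrained('multimolecule/rnaernie')
print(config.hidden_size, config.num_hidden_layers, config.vocab_size)  # 768 12 26
```
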
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f97a4770e2b11f36e772050d714f2a39967055d1d9538a65dcc436071fda9ef0
size 346641488
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e82f83cb4f4d511bc41896bef0f54c490908596f0a390282c713a04a1018208
size 346684922
special_tokens_map.json ADDED
@@ -0,0 +1,44 @@
{
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<cls>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "null_token": {
    "content": "<null>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer_config.json ADDED
@@ -0,0 +1,5 @@
{
  "tokenizer_class": "RnaTokenizer",
  "clean_up_tokenization_spaces": true,
  "model_max_length": 513
}