t03i committed
Commit
f94c9e6
1 Parent(s): e380a83

Update model config

Files changed (5)
  1. README.md +79 -1
  2. config.json +28 -0
  3. special_tokens_map.json +1 -0
  4. spiece.model +0 -0
  5. tokenizer_config.json +1 -0
README.md CHANGED
@@ -1,3 +1,81 @@
  ---
- license: agpl-3.0
+ language: protein
+ tags:
+ - protein language model
+ datasets:
+ - UniRef50
  ---
+
+ # Encoder-only ProtT5-XL-UniRef50, half-precision model
+
+ An encoder-only, half-precision version of the [ProtT5-XL-UniRef50](https://huggingface.co/Rostlab/prot_t5_xl_uniref50) model. The original model and its pretraining were introduced in
+ [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in
+ [this repository](https://github.com/agemagician/ProtTrans). The model was trained on uppercase amino acids and only works with capital-letter sequences.
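+
+ As a quick added sketch (not part of the original card), a raw sequence can be put into the expected input format by uppercasing it and separating the residues with spaces, matching the example further below; the variable names are hypothetical:
+
+ ```python
+ raw_seq = "mktayiakqr"                # hypothetical lowercase input
+ prepared = " ".join(raw_seq.upper())  # -> "M K T A Y I A K Q R"
+ ```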
+
+
+ ## Model description
+
+ ProtT5-XL-UniRef50 is based on the `t5-3b` model and was pretrained on a large corpus of protein sequences in a self-supervised fashion.
+ This means it was pretrained on raw protein sequences only, with no human labelling (which is why it can use lots of
+ publicly available data), using an automatic process to generate inputs and labels from those protein sequences.
+
+ One important difference between this T5 model and the original T5 version is the denoising objective.
+ The original T5-3B model was pretrained using a span-denoising objective, while this model was pretrained with a BART-like MLM denoising objective.
+ The masking probability matches the original T5 training: 15% of the amino acids in the input are randomly masked.
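+
+ For intuition only, here is a minimal sketch of what masking 15% of the input amino acids looks like; the mask-token name and the surrounding pretraining pipeline are assumptions and are not taken from the actual training code:
+
+ ```python
+ import random
+
+ def mask_residues(residues, mask_token="<extra_id_0>", prob=0.15):
+     # Illustrative only: each amino acid is masked independently with
+     # probability 0.15, one token at a time (no span merging).
+     return [mask_token if random.random() < prob else aa for aa in residues]
+
+ print(" ".join(mask_residues(list("AETCAAKSPSKTAP"))))
+ ```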
+
+ This model contains only the encoder portion of the original ProtT5-XL-UniRef50 model, stored in half precision (float16).
+ As such, it can be used efficiently to create protein or per-amino-acid representations. When used for feature extraction or for training downstream networks, these embeddings yield almost the same performance as the full-precision model (established empirically by comparison on several downstream tasks).
+
+
+ ## Intended uses & limitations
+
+ This version of the original ProtT5-XL-UniRef50 is mainly meant for conveniently creating amino-acid or protein embeddings with a low GPU-memory footprint and reasonable embedding quality. The model is fully usable on 8 GB of video RAM.
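+
+ As a rough added sketch (not from the original card), the low memory footprint is easiest to keep by loading the stored float16 weights directly instead of letting `from_pretrained` upcast them to float32; `torch_dtype` is a standard `transformers` option, not something the original card shows:
+
+ ```python
+ import torch
+ from transformers import T5EncoderModel
+
+ # Load the weights in half precision to roughly halve the memory footprint.
+ model = T5EncoderModel.from_pretrained(
+     "Rostlab/prot_t5_xl_half_uniref50-enc", torch_dtype=torch.float16
+ )
+ print(next(model.parameters()).dtype)  # torch.float16
+ ```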
+
+ ### How to use
+
+ An extensive, interactive example of how to use this model for common tasks can be found [on Google Colab](https://colab.research.google.com/drive/1TUj-ayG3WO52n5N50S7KH9vtt6zRkdmj?usp=sharing#scrollTo=ET2v51slC5ui).
+
+ Here is how to use this model to extract the features of a given protein sequence in PyTorch:
+
+ ```python
+ import re
+
+ import torch
+ from transformers import T5Tokenizer, T5EncoderModel
+
+ tokenizer = T5Tokenizer.from_pretrained("Rostlab/prot_t5_xl_half_uniref50-enc", do_lower_case=False)
+ model = T5EncoderModel.from_pretrained("Rostlab/prot_t5_xl_half_uniref50-enc")
+
+ # Amino acids must be uppercase and separated by spaces.
+ sequences_Example = ["A E T C Z A O", "S K T Z P"]
+
+ # Map rare/ambiguous amino acids (U, Z, O, B) to X.
+ sequences_Example = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences_Example]
+
+ ids = tokenizer.batch_encode_plus(sequences_Example, add_special_tokens=True, padding="longest")
+
+ input_ids = torch.tensor(ids["input_ids"])
+ attention_mask = torch.tensor(ids["attention_mask"])
+
+ with torch.no_grad():
+     embedding_repr = model(input_ids=input_ids, attention_mask=attention_mask)
+
+ # Per-residue embeddings, excluding the appended special token:
+ # the first sequence has 7 residues, the second has 5.
+ emb_0 = embedding_repr.last_hidden_state[0, :7]
+ emb_1 = embedding_repr.last_hidden_state[1, :5]
+ ```
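+
+ Building on the example above, per-protein embeddings are often derived by mean-pooling the per-residue embeddings. The following is a minimal added sketch (the pooling choice and the GPU handling are not prescribed by the original card; for simplicity the pooling also averages over the end-of-sequence position):
+
+ ```python
+ # Continuing from the example above. Half precision is intended for GPU use;
+ # on CPU, keep the default full precision instead.
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ if device.type == "cuda":
+     model = model.half()
+ model = model.to(device).eval()
+
+ with torch.no_grad():
+     out = model(input_ids=input_ids.to(device), attention_mask=attention_mask.to(device))
+
+ # Mean-pool over the non-padded positions to get one 1024-d vector per protein.
+ mask = attention_mask.to(device).unsqueeze(-1).to(out.last_hidden_state.dtype)
+ per_protein = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
+ print(per_protein.shape)  # torch.Size([2, 1024])
+ ```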
+
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @article {Elnaggar2020.07.12.199554,
+ author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard},
+ title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
+ elocation-id = {2020.07.12.199554},
+ year = {2020},
+ doi = {10.1101/2020.07.12.199554},
+ publisher = {Cold Spring Harbor Laboratory},
+ abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability ProtTrans: \<a href="https://github.com/agemagician/ProtTrans"\>https://github.com/agemagician/ProtTrans\</a\>Competing Interest StatementThe authors have declared no competing interest.},
+ URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
+ eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
+ journal = {bioRxiv}
+ }
+ ```
+
+
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "Rostlab/prot_t5_xl_half_uniref50-enc",
+   "architectures": [
+     "T5EncoderModel"
+   ],
+   "d_ff": 16384,
+   "d_kv": 128,
+   "d_model": 1024,
+   "decoder_start_token_id": 0,
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "relu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_decoder_layers": 24,
+   "num_heads": 32,
+   "num_layers": 24,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float16",
+   "transformers_version": "4.17.0",
+   "use_cache": true,
+   "vocab_size": 128
+ }
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "additional_special_tokens": ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>", "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>", "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>", "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>", "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>", "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>", "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>", "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>", "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>", "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"]}
spiece.model ADDED
Binary file (238 kB).
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false}