---
license: mit
metrics:
- accuracy
tags:
- chemistry
---
# Molecular BERT Pretrained Using ChEMBL Database

This model was pretrained following the methodology outlined in the paper [Pushing the Boundaries of Molecular Property Prediction for Drug Discovery with Multitask Learning BERT Enhanced by SMILES Enumeration](https://spj.science.org/doi/10.34133/research.0004). The original model was trained with custom code; in this project it has been adapted to the Hugging Face Transformers framework.
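
Because the checkpoint has been adapted to the Transformers framework, it can be loaded like a standard BERT model. The snippet below is a minimal sketch: the repository id is a placeholder, and the `Auto*` classes are assumptions that may need adjusting for this checkpoint's SMILES tokenizer.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Placeholder repository id -- substitute this model's actual Hub id.
model_id = "jonghyunlee/molecular-bert-chembl"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Encode a SMILES string (aspirin) and run a forward pass.
inputs = tokenizer("CC(=O)Oc1ccccc1C(=O)O", return_tensors="pt")
outputs = model(**inputs)
```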

## Model Details
The model architecture is based on BERT. The key configuration details are:

```python
from transformers import BertConfig

# tokenizer_pretrained and max_seq_len are defined by the pretraining setup:
# the vocabulary comes from the SMILES tokenizer, and max_seq_len bounds the
# tokenized sequence length.
config = BertConfig(
    vocab_size=len(tokenizer_pretrained.vocab),
    hidden_size=256,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=1024,
    hidden_act="gelu",
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    max_position_embeddings=max_seq_len,
    type_vocab_size=1,
    pad_token_id=tokenizer_pretrained.vocab["[PAD]"],
    position_embedding_type="absolute",
)
```
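
For reference, a randomly initialized model for masked-language-model pretraining can be built from this configuration as sketched below; `BertForMaskedLM` is the standard Transformers MLM head, which may differ from the wrapper used in the original code.

```python
from transformers import BertForMaskedLM

# Randomly initialized BERT with the configuration above; pretraining
# then optimizes the masked-language-modeling objective over SMILES.
model = BertForMaskedLM(config)
print(f"{model.num_parameters():,} trainable parameters")
```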

## Pretraining Database
The model was pretrained on data from the ChEMBL database, version 33. The database can be downloaded from [ChEMBL](https://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/latest/).
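
For reference, one way to pull SMILES strings for pretraining out of the ChEMBL SQLite release is sketched below. The file name and schema match the public ChEMBL 33 SQLite dump (the `compound_structures` table stores one canonical SMILES per molecule), but this is illustrative rather than the exact pipeline used here.

```python
import sqlite3

# Open the SQLite dump extracted from chembl_33_sqlite.tar.gz.
conn = sqlite3.connect("chembl_33.db")

# Pull every non-null canonical SMILES from compound_structures.
rows = conn.execute(
    "SELECT canonical_smiles FROM compound_structures "
    "WHERE canonical_smiles IS NOT NULL"
)
smiles = [row[0] for row in rows]
conn.close()

print(f"Loaded {len(smiles):,} SMILES strings")
```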

## Performance
The pretrained model achieves an accuracy of 0.9672, evaluated on a held-out test set comprising 10% of the ChEMBL dataset.
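
As an illustration only, assuming the reported score is masked-token prediction accuracy (the usual metric for BERT-style pretraining), it could be computed along these lines, reusing the `tokenizer` and `model` from the loading snippet above; `test_smiles` stands in for the held-out SMILES strings.

```python
import torch
from transformers import DataCollatorForLanguageModeling

# Mask 15% of tokens, the standard BERT masking rate.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator([tokenizer(s) for s in test_smiles])

with torch.no_grad():
    logits = model(
        input_ids=batch["input_ids"], attention_mask=batch["attention_mask"]
    ).logits

# Labels are -100 everywhere except the masked positions.
masked = batch["labels"] != -100
accuracy = (logits.argmax(-1)[masked] == batch["labels"][masked]).float().mean()
print(f"Masked-token accuracy: {accuracy:.4f}")
```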
