eduardosoares99 committed on
Commit 75e99a4
Parent: 1433d3f

Update README.md

Files changed (1)
1. README.md +2 -0
README.md CHANGED
@@ -26,6 +26,8 @@ Paper: [Arxiv Link](https://github.com/IBM/materials/blob/main/smi-ted/paper/smi

 For more information contact: eduardo.soares@ibm.com or evital@br.ibm.com.

+![ted-smi](https://github.com/IBM/materials/blob/main/smi-ted/images/smi-ted.png)
+
 ## Introduction

 We present a large encoder-decoder chemical foundation model, SMILES-based Transformer Encoder-Decoder (SMI-TED), pre-trained on a curated dataset of 91 million SMILES samples sourced from PubChem, equivalent to 4 billion molecular tokens. SMI-TED supports various complex tasks, including quantum property prediction, with two main variants ($289M$ and $8 \times 289M$). Our experiments across multiple benchmark datasets demonstrate state-of-the-art performance for various tasks. For more information contact: eduardo.soares@ibm.com or evital@br.ibm.com.
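For context on the "4 billion molecular tokens" figure in the paragraph above: 91 million SMILES strings at roughly 4 billion tokens works out to about 44 tokens per molecule, which matches atom- and bond-level SMILES tokenization. The sketch below is a hedged illustration using the widely adopted SMILES regex from Schwaller et al.; it is not SMI-TED's own tokenizer, which lives in the IBM/materials repository and may differ.

```python
import re

# Widely used SMILES tokenization regex (Schwaller et al.), shown here only to
# illustrate what a "molecular token" is; SMI-TED's actual tokenizer may differ.
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize(smiles: str) -> list[str]:
    """Split a SMILES string into atom- and bond-level tokens."""
    return SMILES_REGEX.findall(smiles)

# Aspirin tokenizes into 21 tokens; at this granularity, 91M molecules
# averaging ~44 tokens each is on the order of 4B tokens.
print(tokenize("CC(=O)Oc1ccccc1C(=O)O"))
# ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c', '1',
#  'C', '(', '=', 'O', ')', 'O']
```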