
Model documentation & parameters

Language model: Type of language model to be used.

Prefix: Task-specific prefix that defines the task (see the provided examples for the specific tasks).

Text prompt: The text input to the model.

Num beams: Number of beams used for beam search during text generation (a usage sketch follows below).
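
The snippet below is a minimal sketch of how these parameters map onto a standard Hugging Face transformers call. The checkpoint identifier, task prefix, and example input are illustrative assumptions rather than the exact values shipped with the model; please refer to the GT4SD documentation and the provided examples for the correct ones.

# Minimal usage sketch; checkpoint name, prefix and input below are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "GT4SD/multitask-text-and-chemistry-t5-base-augm"  # hypothetical checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# "Prefix" and "Text prompt" are concatenated to form the model input;
# "Num beams" is passed to generate() to control beam search.
prefix = "Predict the product of the following reaction: "  # illustrative task prefix
prompt = "CC(=O)O.OCC"                                       # illustrative text prompt
inputs = tokenizer(prefix + prompt, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))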

Model card -- Multitask Text and Chemistry T5

Model Details: Multitask Text and Chemistry T5: a multi-domain, multi-task language model for solving a wide range of tasks in both the chemical and natural language domains. Published by Christofidellis et al.

Developers: Dimitrios Christofidellis*, Giorgio Giannone*, Jannis Born, Teodoro Laino and Matteo Manica from IBM Research, and Ole Winther from the Technical University of Denmark.

Distributors: Model natively integrated into GT4SD.

Model date: 2022.

Model type: A Transformer-based language model trained on a multi-domain, multi-task dataset assembled by aggregating available datasets for the tasks of forward reaction prediction, retrosynthesis, molecular captioning, text-conditional de novo generation, and paragraph-to-actions.

Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: N.A.

Paper or other resource for more information: The Multitask Text and Chemistry T5 paper by Christofidellis et al. (see the citation below).

License: MIT

Where to send questions or comments about the model: Open an issue on GT4SD repository.

Intended Use. Use cases that were envisioned during development: N.A.

Primary intended uses/users: N.A.

Out-of-scope use cases: Production-level inference, producing molecules with harmful properties.

Metrics: N.A.

Datasets: N.A.

Ethical Considerations: Unclear; please consult the original authors in case of questions.

Caveats and Recommendations: Unclear; please consult the original authors in case of questions.

Model card prototype inspired by Mitchell et al. (2019)

Citation

@inproceedings{christofidellis2023unifying,
  title =    {Unifying Molecular and Textual Representations via Multi-task Language Modelling},
  author =       {Christofidellis, Dimitrios and Giannone, Giorgio and Born, Jannis and Winther, Ole and Laino, Teodoro and Manica, Matteo},
  booktitle =    {Proceedings of the 40th International Conference on Machine Learning},
  pages =    {6140--6157},
  year =   {2023},
  volume =   {202},
  series =   {Proceedings of Machine Learning Research},
  publisher =    {PMLR},
  pdf =    {https://proceedings.mlr.press/v202/christofidellis23a/christofidellis23a.pdf},
  url =    {https://proceedings.mlr.press/v202/christofidellis23a.html},
}

*equal contribution