
NLP Modeling Library

This library provides a set of Keras primitives (Layers, Networks, and Models) that can be assembled into transformer-based models. They are flexible, validated, interoperable, and compatible with both TF1 and TF2.

  • layers are the fundamental building blocks for NLP models. They can be used to assemble new layers, networks, or models (see the sketch after this list).

  • networks are combinations of layers (and possibly other networks). They are sub-units of models that would not be trained alone. They encapsulate common network structures like a classification head or a transformer encoder into an easily handled object with a standardized configuration.

  • models are combinations of layers and networks that would be trained. Pre-built canned models are provided as both convenience functions and canonical examples.

  • losses contains common loss computation used in NLP tasks.
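To make the layers → networks → models progression concrete, here is a minimal sketch built from stock tf.keras primitives rather than this library's own classes; the names TransformerBlock, build_encoder, and build_classifier are illustrative, not the library's API:

```python
import tensorflow as tf

# "Layer": a single transformer block assembled from Keras layers.
class TransformerBlock(tf.keras.layers.Layer):
    def __init__(self, hidden_size=128, num_heads=4, ff_size=512, **kwargs):
        super().__init__(**kwargs)
        self.attention = tf.keras.layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=hidden_size // num_heads)
        self.attention_norm = tf.keras.layers.LayerNormalization()
        self.ffn = tf.keras.Sequential([
            tf.keras.layers.Dense(ff_size, activation="relu"),
            tf.keras.layers.Dense(hidden_size),
        ])
        self.ffn_norm = tf.keras.layers.LayerNormalization()

    def call(self, inputs):
        # Self-attention and feed-forward sublayers, each with a
        # residual connection followed by layer normalization.
        x = self.attention_norm(inputs + self.attention(inputs, inputs))
        return self.ffn_norm(x + self.ffn(x))

# "Network": an encoder stacking embedding + transformer blocks;
# a sub-unit that would not be trained on its own.
def build_encoder(vocab_size=30522, hidden_size=128, num_layers=2):
    tokens = tf.keras.Input(shape=(None,), dtype=tf.int32, name="input_ids")
    x = tf.keras.layers.Embedding(vocab_size, hidden_size)(tokens)
    for _ in range(num_layers):
        x = TransformerBlock(hidden_size=hidden_size)(x)
    return tf.keras.Model(tokens, x, name="encoder")

# "Model": the encoder plus a classification head, ready for training
# with a task loss.
def build_classifier(num_classes=2):
    encoder = build_encoder()
    pooled = tf.keras.layers.GlobalAveragePooling1D()(encoder.output)
    logits = tf.keras.layers.Dense(num_classes)(pooled)
    return tf.keras.Model(encoder.input, logits, name="classifier")

model = build_classifier()
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```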

Besides the pre-defined primitives, the library also provides scaffold classes that allow easy experimentation with novel architectures; for example, you don't need to fork a whole Transformer object to try a different kind of attention primitive.
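A hedged sketch of the scaffold idea follows: the block takes the attention layer's class as a constructor argument, so a new attention primitive can be tried without forking the block itself. The names attention_cls and attention_kwargs are illustrative and may not match the library's actual scaffold signature.

```python
import tensorflow as tf

class TransformerScaffoldSketch(tf.keras.layers.Layer):
    """Transformer block parameterized by its attention class."""
    def __init__(self, attention_cls=tf.keras.layers.MultiHeadAttention,
                 attention_kwargs=None, **kwargs):
        super().__init__(**kwargs)
        if attention_kwargs is None:
            attention_kwargs = {"num_heads": 4, "key_dim": 32}
        self.attention = attention_cls(**attention_kwargs)
        self.norm = tf.keras.layers.LayerNormalization()

    def call(self, inputs):
        # Self-attention with a residual connection and layer norm.
        return self.norm(inputs + self.attention(inputs, inputs))

# Any layer with a compatible (query, value) call signature can be
# plugged in; this trivial stand-in returns the query unchanged.
class IdentityAttention(tf.keras.layers.Layer):
    def call(self, query, value):
        return query

block = TransformerScaffoldSketch(attention_cls=IdentityAttention,
                                  attention_kwargs={})
outputs = block(tf.zeros([2, 8, 128]))  # (batch, seq_len, hidden)
```

The design choice being illustrated: dependency injection at the layer level keeps the surrounding residual/normalization structure fixed while the attention computation varies.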

BERT and ALBERT models in this repository are implemented using this library. Code examples can be found in the corresponding model folders.