
This is a Medium (360M parameter) Transformer trained for 200k steps on arrival-time encoded music from the Lakh MIDI dataset. This model was trained with anticipation, the training procedure introduced in the Anticipatory Music Transformer paper referenced below.
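
The checkpoint is a standard decoder-only causal language model over the arrival-time event vocabulary, so it can be loaded with the Hugging Face transformers library. The sketch below is illustrative only: the repository ID "stanford-crfm/music-medium-200k" and the placeholder prompt token are assumptions, and turning sampled tokens back into MIDI requires the tokenization and conversion utilities from the GitHub repository linked below.

```python
# Minimal loading/sampling sketch (assumption: the checkpoint is hosted under an
# ID such as "stanford-crfm/music-medium-200k"; substitute the actual repo name).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("stanford-crfm/music-medium-200k")
model.eval()

# Sampling works like any other decoder-only transformer. The start token below
# is a placeholder; real prompts are built with the arrival-time encoding code
# from the accompanying GitHub repository.
with torch.no_grad():
    tokens = model.generate(
        torch.tensor([[0]]),   # placeholder prompt
        max_new_tokens=32,
        do_sample=True,
        top_p=0.95,
    )
print(tokens)
```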

References for the Anticipatory Music Transformer

The Anticipatory Music Transformer paper is available on arXiv.

The full model card is available here.

Code for using this model is available on GitHub.

See the accompanying blog post for additional discussion of this model.
