
Model Card

This is a standard attention model (Llama architecture) pretrained on 30B tokens of the Pile corpus.

Model Sources

The implementation and training code that produced this model are available at: https://github.com/HazyResearch/based
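
The card does not include usage instructions, so the sketch below shows one plausible way to load the checkpoint with Hugging Face transformers. It assumes the repository is compatible with the standard AutoModelForCausalLM API, that any custom modeling code shipped with the based repository is handled by trust_remote_code=True, and that the repo provides a usable tokenizer; none of this is confirmed by the card itself.

```python
# A minimal loading sketch. Assumes the checkpoint works with the standard
# transformers causal-LM API; trust_remote_code=True is set in case the repo
# ships custom modeling code, as models from the based repository often do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hazyresearch/attn-360M-30B"

# Assumption: the repo includes a tokenizer; otherwise substitute the
# tokenizer the training run actually used.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The Pile is a dataset of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```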
