This model has been pushed to the Hub using the PyTorchModelHubMixin integration.

Library: pxia
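
The PyTorchModelHubMixin from huggingface_hub gives any torch.nn.Module subclass from_pretrained, save_pretrained, and push_to_hub methods. The snippet below is a minimal sketch of that general pattern, not pxia's actual model definition; ToyModel, its hidden_size argument, and the repo names are hypothetical.

# Minimal sketch of the PyTorchModelHubMixin pattern from huggingface_hub.
# ToyModel and its argument are hypothetical and are not part of pxia.
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class ToyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

model = ToyModel(hidden_size=16)
model.save_pretrained("toy-model")                      # writes config.json + safetensors weights
# model.push_to_hub("your-username/toy-model")          # hypothetical repo name; needs a Hub token
# reloaded = ToyModel.from_pretrained("your-username/toy-model")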

How to load

pip install pxia

Use the AutoModel class:

from pxia import AutoModel
model = AutoModel.from_pretrained("phxia/gpt2")

Or you can use the model class directly:

from pxia import GPT2
model = GPT2.from_pretrained("phxia/gpt2")
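
Because the pxia model classes build on the same mixin, the object returned by from_pretrained is a regular torch.nn.Module and also exposes the mixin's serialization helpers. A hedged sketch follows; the local directory and target repo name are hypothetical.

from pxia import GPT2

model = GPT2.from_pretrained("phxia/gpt2")

# Helpers inherited from PyTorchModelHubMixin:
model.save_pretrained("gpt2-local")              # hypothetical local directory
# model.push_to_hub("your-username/gpt2-copy")   # hypothetical repo; requires `huggingface-cli login`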

Contributions

Contributions are welcome at https://github.com/not-lain/pxia

Model size: 176M parameters (F32, safetensors format)