omost-llama-3-8b is Omost's llama-3 model with 8k context length in fp16.
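
Below is a minimal loading sketch using Hugging Face transformers. It assumes the checkpoint ships a Llama-3-style chat template and that a CUDA-capable GPU is available; the prompt is purely illustrative and does not reflect Omost's actual prompting or system-message conventions.

```python
# Minimal sketch: load lllyasviel/omost-llama-3-8b with transformers and generate.
# Assumptions: the checkpoint provides a chat template, and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lllyasviel/omost-llama-3-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, matching the stored tensor type listed below
    device_map="auto",
)

# Illustrative prompt only; Omost's real prompts describe the image to be composed.
messages = [{"role": "user", "content": "generate an image of a cat on a windowsill"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```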

Downloads last month: 96
Model size: 8.03B params
Tensor type: BF16
Inference Providers
This model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because the model authors have explicitly turned that off.

Model tree for lllyasviel/omost-llama-3-8b
Finetunes: 3 models
Quantizations: 10 models

Spaces using lllyasviel/omost-llama-3-8b: 15