---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
pipeline_tag: text-generation
---
# mlx-community/Hermes-2-Theta-Llama-3-8B-4bit
This model was converted to MLX format from [`NousResearch/Hermes-2-Theta-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) using mlx-lm version **0.14.3**.
Converted & uploaded by: @ucheog ([Uche Ogbuji](https://ucheog.carrd.co/)).
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) for more details on the model.
## Use with mlx
```sh
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit quantized model and its tokenizer
model, tokenizer = load('mlx-community/Hermes-2-Theta-Llama-3-8B-4bit')

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt='Hello! Tell me something good.', verbose=True)
```
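Hermes 2 Θ expects ChatML-style prompting; see the original model card for the exact format. Below is a minimal sketch of rendering the prompt through the tokenizer's chat template before generation, assuming the bundled tokenizer ships a chat template (as the upstream repository does); the system and user messages are illustrative placeholders.
```python
from mlx_lm import load, generate

model, tokenizer = load('mlx-community/Hermes-2-Theta-Llama-3-8B-4bit')

# Example messages; replace with your own system prompt and user turn
messages = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Hello! Tell me something good.'},
]

# Render the messages into the model's expected prompt format (ChatML for Hermes)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```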
## Conversion command
```sh
python -m mlx_lm.convert --hf-path NousResearch/Hermes-2-Theta-Llama-3-8B --mlx-path ~/.local/share/models/mlx/Hermes-2-Theta-Llama-3-8B -q
```
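The bare `-q` flag applies mlx-lm's default quantization settings (4-bit at the time of this conversion). If you want to set the quantization parameters explicitly, recent mlx-lm versions accept bit-width and group-size options; verify against `python -m mlx_lm.convert --help` for your installed version before relying on them.
```sh
python -m mlx_lm.convert \
  --hf-path NousResearch/Hermes-2-Theta-Llama-3-8B \
  --mlx-path ~/.local/share/models/mlx/Hermes-2-Theta-Llama-3-8B \
  -q --q-bits 4 --q-group-size 64
```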