---
base_model: unsloth/gemma-2-2b-it
library_name: transformers
tags:
- medical
- unsloth
- peft
- qlora
- mlx
---
# cmcmaster/rheum-gemma-2-2b-it-mlx
The model [cmcmaster/rheum-gemma-2-2b-it-mlx](https://huggingface.co/cmcmaster/rheum-gemma-2-2b-it-mlx) was converted to MLX format from [cmcmaster/rheum-gemma-2-2b-it](https://huggingface.co/cmcmaster/rheum-gemma-2-2b-it) using mlx-lm version **0.18.1**.
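A conversion like this can be reproduced with the `mlx_lm.convert` entry point. The command below is a sketch: the output path and the `-q` quantization flag are assumptions, since the card does not record the exact options used.
```bash
# Sketch only: quantization and output path are assumed, not taken from the card
python -m mlx_lm.convert \
    --hf-path cmcmaster/rheum-gemma-2-2b-it \
    --mlx-path rheum-gemma-2-2b-it-mlx \
    -q
```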
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("cmcmaster/rheum-gemma-2-2b-it-mlx")
# Gemma 2 is instruction-tuned, so wrap the prompt in its chat template
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
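Generation can also be run straight from the command line via the `mlx_lm.generate` entry point; the prompt below is just a placeholder.
```bash
python -m mlx_lm.generate \
    --model cmcmaster/rheum-gemma-2-2b-it-mlx \
    --prompt "hello"
```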