---
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ja
  - ko
  - zh
  - ar
  - el
  - fa
  - pl
  - id
  - cs
  - he
  - hi
  - nl
  - ro
  - ru
  - tr
  - uk
  - vi
license: cc-by-nc-4.0
library_name: transformers
tags:
  - mlx
---

# mlx-community/aya-23-35B-4bit

The model `mlx-community/aya-23-35B-4bit` was converted to MLX format from [`CohereForAI/aya-23-35B`](https://huggingface.co/CohereForAI/aya-23-35B) using mlx-lm version 0.13.1.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the 4-bit weights from the Hugging Face Hub and load them
model, tokenizer = load("mlx-community/aya-23-35B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
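The same model can also be run without writing any Python, using the generation entry point that ships with mlx-lm. A minimal sketch, assuming the flag names of mlx-lm 0.13.x (check `python -m mlx_lm.generate --help` on newer versions):

```shell
# Download (if needed) and generate from the 4-bit model on the command line
python -m mlx_lm.generate --model mlx-community/aya-23-35B-4bit --prompt "hello"
```

Note that the 4-bit weights of a 35B-parameter model still require on the order of 20 GB of memory, so this is best run on a Mac with sufficient unified memory.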