---
license: apache-2.0
library_name: transformers
language:
- en
- zh
pipeline_tag: text-generation
base_model: sthenno-com/miscii-14b-1028
tags:
- chat
- conversational
- custom-research
- mlx
model-index:
- name: miscii-14b-1028
  results:
  - task:
      type: text-generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: exact-match
      value: 0.6143
      name: exact_match
---
# mlx-community/miscii-14b-1028-4bit
The model [mlx-community/miscii-14b-1028-4bit](https://huggingface.co/mlx-community/miscii-14b-1028-4bit) was converted to MLX format from [sthenno-com/miscii-14b-1028](https://huggingface.co/sthenno-com/miscii-14b-1028) using mlx-lm version **0.19.3**.
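For reference, conversions like this are typically produced with the `mlx_lm.convert` command. A minimal sketch, assuming default 4-bit quantization settings (the exact options and output path used for this repo are an assumption):

```bash
# Sketch: quantize the base model to 4-bit MLX format.
# -q quantizes with the default settings (4-bit); the output path is illustrative.
mlx_lm.convert \
    --hf-path sthenno-com/miscii-14b-1028 \
    --mlx-path miscii-14b-1028-4bit \
    -q
```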
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/miscii-14b-1028-4bit")

prompt = "hello"

# Apply the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
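The model can also be queried directly from the shell via mlx-lm's bundled CLI; a quick smoke test:

```bash
# Generate a short completion without writing any Python.
mlx_lm.generate --model mlx-community/miscii-14b-1028-4bit --prompt "hello"
```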