---
base_model: BEE-spoke-data/Mixtral-GQA-400m-v2
inference: false
language:
  - en
license: apache-2.0
model_creator: BEE-spoke-data
model_name: Mixtral-GQA-400m-v2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# BEE-spoke-data/Mixtral-GQA-400m-v2-GGUF

Quantized GGUF model files for Mixtral-GQA-400m-v2 from BEE-spoke-data.
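The GGUF files can be run directly with llama.cpp. A minimal sketch, assuming a recent llama.cpp build whose CLI binary is named `llama-cli` and a hypothetical filename for the `q4_k_m` quantization (match the name to the file you actually downloaded):

```shell
# Hypothetical filename; pick the quantization that fits your hardware.
# q4_k_m is a common balance between file size and output quality.
./llama-cli \
  -m Mixtral-GQA-400m-v2.q4_k_m.gguf \
  -p "My favorite movie is Godfather because" \
  -n 256
```

Lower quantizations such as `q2_k` use less memory at the cost of quality; `q8_0` is closest to the original weights.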

## Original Model Card

### BEE-spoke-data/Mixtral-GQA-400m-v2

Testing code:

```python
# !pip install -U -q transformers datasets accelerate sentencepiece
import pprint as pp

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="BEE-spoke-data/Mixtral-GQA-400m-v2",
    device_map="auto",
)
# avoid warnings: use the EOS token for padding
pipe.model.config.pad_token_id = pipe.model.config.eos_token_id

prompt = "My favorite movie is Godfather because"

res = pipe(
    prompt,
    max_new_tokens=256,
    top_k=4,
    penalty_alpha=0.6,  # top_k + penalty_alpha enables contrastive search
    use_cache=True,
    no_repeat_ngram_size=4,
    repetition_penalty=1.1,
    renormalize_logits=True,
)
pp.pprint(res[0])
```