---
base_model: GeneZC/MiniChat-3B
inference: false
language:
- en
- zh
library_name: transformers
license: apache-2.0
model_creator: GeneZC
model_name: MiniChat-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# GeneZC/MiniChat-3B-GGUF

Quantized GGUF model files for MiniChat-3B from GeneZC.
| Name | Quant method | Size |
| ---- | ------------ | ---- |
| minichat-3b.q2_k.gguf | q2_k | 1.30 GB |
| minichat-3b.q3_k_m.gguf | q3_k_m | 1.51 GB |
| minichat-3b.q4_k_m.gguf | q4_k_m | 1.85 GB |
| minichat-3b.q5_k_m.gguf | q5_k_m | 2.15 GB |
| minichat-3b.q6_k.gguf | q6_k | 2.48 GB |
| minichat-3b.q8_0.gguf | q8_0 | 3.21 GB |
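
These files can be loaded with any GGUF-compatible runtime such as llama.cpp. Below is a minimal sketch using the `llama-cpp-python` bindings; the local file path, context size, and sampling settings are illustrative assumptions and are not part of the original card, and for best results the raw prompt should be wrapped in MiniChat's conversation template (see the original model card below).

```python
# Minimal sketch (not from the original card): run one of the quantized files
# locally with llama-cpp-python. File path, n_ctx, and sampling values are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="./minichat-3b.q4_k_m.gguf",  # any of the files listed above
    n_ctx=2048,                               # assumed context window
)

# For best results, format the prompt with MiniChat's conversation template
# (see the transformers example in the original model card below).
result = llm(
    "Implement a program to find the common elements in two arrays.",
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```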
## Original Model Card:

# MiniChat-3B
arXiv | HuggingFace-MiniMA | HuggingFace-MiniChat | ModelScope-MiniMA | ModelScope-MiniChat
Must comply with the LICENSE of LLaMA2 since it is derived from LLaMA2.
MiniChat-3B is a language model distilled and finetuned from an adapted version of LLaMA2-7B, following "Towards the Law of Capacity Gap in Distilling Language Models". It outperforms a wide range of 3B competitors in GPT-4 evaluation and even competes with several 7B chat models.
The following is an example code snippet to use MiniChat-3B:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# `conversation.py` is provided in the MiniChat repository and defines the chat template.
from conversation import get_default_conv_template

# MiniChat tokenizer.
tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniChat-3B", use_fast=False)
# GPU.
model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval()
# CPU.
# model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval()

# Build the prompt with MiniChat's conversation template.
conv = get_default_conv_template("minichat")
question = "Implement a program to find the common elements in two arrays without using any extra data structures."
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer([prompt]).input_ids
output_ids = model.generate(
    torch.as_tensor(input_ids).to(model.device),  # place inputs on the model's device (GPU or CPU)
    do_sample=True,
    temperature=0.7,
    max_new_tokens=1024,
)
# Decode only the newly generated tokens, skipping the prompt.
output_ids = output_ids[0][len(input_ids[0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
# output: "def common_elements(arr1, arr2):\n if len(arr1) == 0:\n return []\n if len(arr2) == 0:\n return arr1\n\n common_elements = []\n for element in arr1:\n if element in arr2:\n common_elements.append(element)\n\n return common_elements"
# Multiturn conversation could be realized by continuously appending questions to `conv`.
```
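
As noted in the final comment above, the template can carry a full dialogue. The snippet below is a hedged sketch of a second turn, not part of the original card: it assumes the conversation object stores its history as FastChat-style `[role, text]` pairs in `conv.messages`, and the follow-up question is a hypothetical example.

```python
# Illustrative sketch of a multi-turn exchange (assumption: `conv.messages` holds
# FastChat-style [role, text] pairs; check MiniChat's `conversation.py` for the exact API).
conv.messages[-1][-1] = output  # record the assistant's first reply in the history

follow_up = "Now add a docstring and a few test cases."  # hypothetical follow-up question
conv.append_message(conv.roles[0], follow_up)
conv.append_message(conv.roles[1], None)

prompt = conv.get_prompt()  # the prompt now contains the whole conversation so far
input_ids = tokenizer([prompt]).input_ids
output_ids = model.generate(
    torch.as_tensor(input_ids).to(model.device),
    do_sample=True,
    temperature=0.7,
    max_new_tokens=1024,
)
second_answer = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True).strip()
```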
## Bibtex

```bibtex
@article{zhang2023law,
    title={Towards the Law of Capacity Gap in Distilling Language Models},
    author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
    year={2023},
    url={}
}
```