Llama.cpp imatrix quantizations of nvidia/Llama-3.1-Minitron-4B-Width-Base
Using llama.cpp commit 2e59d61 for quantization.
Original model: https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base
All quants were made using the imatrix option and Bartowski's calibration file.
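As a rough sketch of the pipeline used for quants like these (all file names and paths below are placeholders, and the binary names are the ones shipped by llama.cpp builds from around that commit):

```bash
# Convert the original HF checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py /path/to/Llama-3.1-Minitron-4B-Width-Base \
  --outtype f16 --outfile Minitron-4B-Width-f16.gguf

# Compute the importance matrix from a calibration text file
./llama-imatrix -m Minitron-4B-Width-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize with the imatrix (IQ4_XS shown as an example)
./llama-quantize --imatrix imatrix.dat Minitron-4B-Width-f16.gguf \
  Minitron-4B-Width-IQ4_XS.gguf IQ4_XS
```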
Perplexity table (the lower the better)
Quant | Size (MB) | PPL | Size (% of F16) | Accuracy (vs F16, %) | PPL error rate |
---|---|---|---|---|---|
IQ1_S | 1158 | 81.7502 | 13.44 | 9.26 | 0.69197 |
IQ1_M | 1227 | 40.8601 | 14.24 | 18.53 | 0.31979 |
IQ2_XXS | 1343 | 16.5816 | 15.59 | 45.67 | 0.11466 |
IQ2_XS | 1448 | 13.024 | 16.81 | 58.14 | 0.08768 |
IQ2_S | 1551 | 12.6045 | 18 | 60.07 | 0.08478 |
IQ2_M | 1643 | 11.0911 | 19.07 | 68.27 | 0.07374 |
Q2_K_S | 1654 | 11.0796 | 19.2 | 68.34 | 0.07646 |
Q2_K | 1755 | 10.3111 | 20.37 | 73.44 | 0.07045 |
IQ3_XXS | 1794 | 9.342 | 20.82 | 81.05 | 0.0612 |
IQ3_XS | 1934 | 9.4403 | 22.45 | 80.21 | 0.06137 |
Q3_K_S | 2005 | 8.8949 | 23.27 | 85.13 | 0.05946 |
IQ3_S | 2017 | 9.0714 | 23.41 | 83.47 | 0.05851 |
IQ3_M | 2083 | 8.3352 | 24.18 | 90.84 | 0.0534 |
Q3_K_M | 2191 | 8.1839 | 25.43 | 92.52 | 0.05408 |
Q3_K_L | 2351 | 8.093 | 27.29 | 93.56 | 0.05352 |
IQ4_XS | 2419 | 7.774 | 28.08 | 97.4 | 0.05097 |
Q4_0 | 2533 | 7.8479 | 29.4 | 96.49 | 0.05132 |
IQ4_NL | 2538 | 7.7697 | 29.46 | 97.46 | 0.05091 |
Q4_K_S | 2541 | 7.8125 | 29.49 | 96.92 | 0.05101 |
Q4_K_M | 2650 | 7.7376 | 30.76 | 97.86 | 0.05038 |
Q4_1 | 2772 | 7.8155 | 32.17 | 96.89 | 0.05116 |
Q5_K_S | 3017 | 7.6649 | 35.02 | 98.79 | 0.05021 |
Q5_0 | 3024 | 7.6407 | 35.1 | 99.1 | 0.0499 |
Q5_K_M | 3081 | 7.6283 | 35.76 | 99.26 | 0.04985 |
Q5_1 | 3263 | 7.6439 | 37.87 | 99.06 | 0.04996 |
Q6_K | 3539 | 7.587 | 41.07 | 99.8 | 0.04945 |
Q8_0 | 4581 | 7.5739 | 53.17 | 99.98 | 0.04941 |
F16 | 8616 | 7.5721 | 100 | 100 | 0.04942 |
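The PPL values come from llama.cpp's perplexity tool, the Size (% of F16) column is the file size relative to the F16 GGUF, and the Accuracy column corresponds to the F16 perplexity divided by the quant's perplexity (e.g. 7.5721 / 81.7502 ≈ 9.26% for IQ1_S). A minimal example of such a measurement, assuming a WikiText-style evaluation file (the actual corpus and context size used for this table are not stated here):

```bash
# Compute perplexity of a quantized GGUF over an evaluation text file
./llama-perplexity -m Minitron-4B-Width-Q4_K_M.gguf -f wiki.test.raw -c 512
```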
Llama-3.1-Minitron-4B-Width-Base
Model Overview
Llama-3.1-Minitron-4B-Width-Base is a base text-to-text model that can be adopted for a variety of natural language generation tasks. It is obtained by pruning Llama-3.1-8B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
This model is ready for commercial use.
Model Developer: NVIDIA
Model Dates: Llama-3.1-Minitron-4B-Width-Base was trained between July 29, 2024 and Aug 3, 2024.
License
This model is released under the NVIDIA Open Model License Agreement.
Model Architecture
Llama-3.1-Minitron-4B-Width-Base uses a model embedding size of 3072, 32 attention heads, MLP intermediate dimension of 9216, with 32 layers in total. Additionally, it uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
Architecture Type: Transformer Decoder (Auto-Regressive Language Model)
Network Architecture: Llama-3.1
Input Type(s): Text
Input Format(s): String
Input Parameters: None
Other Properties Related to Input: Works best with inputs of 8,000 characters or fewer.
Output Type(s): Text
Output Format: String
Output Parameters: 1D
Other Properties Related to Output: None
Usage
Pull requests to support this model in Hugging Face Transformers are currently under review (#32495 and #32502) and are expected to be merged soon. In the meantime, please follow the installation instructions below:
# Fetch PR 32502
$ git clone -b suhara/llama-kv-channels --single-branch https://github.com/suhara/transformers.git && cd transformers
# Fetch changes from PR 32495
$ git fetch https://github.com/suiyoubi/transformers.git aot/head_dim_rope && git cherry-pick FETCH_HEAD --strategy-option theirs
# Install transformers
$ pip install -e .
We can now run inference on this model:
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
# Load the tokenizer and model
model_path = "nvidia/Llama3.1-Minitron-4B-Width-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
device = 'cuda'
dtype = torch.bfloat16
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)
# Prepare the input text
prompt = 'Complete the paragraph: our solar system is'
inputs = tokenizer.encode(prompt, return_tensors='pt').to(model.device)
# Generate the output
outputs = model.generate(inputs, max_length=20)
# Decode and print the output
output_text = tokenizer.decode(outputs[0])
print(output_text)
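The snippet above runs the original Transformers checkpoint. To run one of the GGUF quants from this repository instead, a llama.cpp invocation along these lines should work (the exact .gguf filename depends on which quant you download):

```bash
# Run a quantized GGUF with llama.cpp (filename is an example)
./llama-cli -m Llama-3.1-Minitron-4B-Width-Base-IQ4_XS.gguf \
  -p "Complete the paragraph: our solar system is" -n 128
```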
Software Integration
Runtime Engine(s):
- NeMo 24.05
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Hopper
- NVIDIA Lovelace
Supported Operating System(s):
- Linux
Dataset & Training
Data Collection Method by Dataset: Automated
Labeling Method by Dataset: Not Applicable
Properties: The training corpus for Llama-3.1-Minitron-4B-Width-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types, such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued-training set, we introduce a small portion of question-answering and alignment-style data to improve model performance.
Data Freshness: The pretraining data has a cutoff of June 2023.
Evaluation Results
Overview
5-shot performance. Language understanding evaluated using the Massive Multitask Language Understanding (MMLU) benchmark:
Average |
---|
60.5 |
Zero-shot performance. Evaluated using select datasets from the LM Evaluation Harness with additions:
HellaSwag | Winogrande | GSM8K | ARC-Challenge | XLSum |
---|---|---|---|---|
76.1 | 73.5 | 41.2 | 55.6 | 28.7 |
Code generation performance. Evaluated using MBPP:
Score |
---|
32.0 |
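These are NVIDIA's reported results for the original BF16 checkpoint. As a hedged sketch, the MMLU and zero-shot rows could be approximated with the public lm-evaluation-harness roughly as follows; the exact harness version and the "additions" mentioned above are not specified, so scores may deviate:

```bash
# 5-shot MMLU (requires the patched transformers install from the Usage section)
lm_eval --model hf \
  --model_args pretrained=nvidia/Llama-3.1-Minitron-4B-Width-Base,dtype=bfloat16 \
  --tasks mmlu --num_fewshot 5 --batch_size 8

# Zero-shot tasks (XLSum and MBPP need task additions / a separate code-eval harness)
lm_eval --model hf \
  --model_args pretrained=nvidia/Llama-3.1-Minitron-4B-Width-Base,dtype=bfloat16 \
  --tasks hellaswag,winogrande,gsm8k,arc_challenge --num_fewshot 0 --batch_size 8
```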
Inference
Engine: TensorRT-LLM
Test Hardware: NVIDIA A100
DType: BFloat16
Limitations
The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and it may produce socially unacceptable or undesirable text even if the prompt itself does not include anything explicitly offensive.
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.