Quantization
Converted from Mistral format to Hugging Face safetensors using Hugging Face's transformers Mistral conversion script:
pip install protobuf sentencepiece torch transformers accelerate
python3 ~/convert_mistral_weights_to_hf-22B.py --input_dir ~/Codestral-22B-v0.1/ --model_size 22B --output_dir ~/models/Codestral-22B-v0.1-hf/ --is_v3 --safe_serialization
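A quick way to sanity-check the converted weights is to load them back with transformers. A minimal sketch, assuming the output_dir from the step above (device_map="auto" needs the accelerate package installed earlier):
import os
from transformers import AutoModelForCausalLM, AutoTokenizer
# Path produced by the conversion step above
model_path = os.path.expanduser("~/models/Codestral-22B-v0.1-hf/")
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
print(model.config.model_type)  # a successful conversion should report "mistral"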
Then measurement.json was created using exllamav2's convert.py (the measurement pass profiles per-layer quantization error once, so later quantizations at any bitrate can reuse it via -m):
python3 convert.py -i ~/models/Codestral-22B-v0.1-hf/ -o /tmp/exl2/ -nr -om ~/models/Machinez_Codestral-22B-v0.1-exl2/measurement.json
Finally, quantized (e.g. 4.0bpw):
python3 convert.py -i ~/models/Codestral-22B-v0.1-hf/ -o /tmp/exl2/ -nr -m ~/models/Machinez_Codestral-22B-v0.1-exl2/measurement.json -cf ~/models/Machinez_Codestral-22B-v0.1-exl2_4.0bpw/ -b 4.0
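The resulting quant can then be loaded for inference with exllamav2's Python API. A minimal sketch, assuming a recent exllamav2 with its base generator and the 4.0bpw output directory from above:
import os
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
config = ExLlamaV2Config()
config.model_dir = os.path.expanduser("~/models/Machinez_Codestral-22B-v0.1-exl2_4.0bpw/")
config.prepare()
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
print(generator.generate_simple("def fibonacci(n):", settings, 128))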
Model Card for Codestral-22B-v0.1
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the blog post). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
Installation
It is recommended to use mistralai/Codestral-22B-v0.1 with mistral-inference.
pip install mistral_inference
Download
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
Chat
After installing mistral_inference, a mistral-chat CLI command should be available in your environment.
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
This will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.
fn fibonacci(n: u32) -> u32 {
match n {
0 => 0,
1 => 1,
_ => fibonacci(n - 1) + fibonacci(n - 2),
}
}
fn main() {
let n = 10;
println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}
This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
Fill-in-the-middle (FIM)
After installing mistral_inference and running pip install --upgrade mistral_common to make sure you have mistral_common >= 1.2 installed:
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest
tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")
prefix = """def add("""
suffix = """ return sum"""
# Build a FIM request from the code surrounding the gap
request = FIMRequest(prompt=prefix, suffix=suffix)
tokens = tokenizer.encode_fim(request).tokens
# Greedy decoding (temperature=0.0) gives a deterministic completion
out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
# The model may echo the suffix; keep only the generated middle
middle = result.split(suffix)[0].strip()
print(middle)
This should give something along the following lines:
num1, num2):
# Add two numbers
sum = num1 + num2
# return the sum
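Continuing from the snippet above, the generated middle can be stitched back between the prefix and suffix to recover (roughly) the complete function:
# Reassemble the full function from prefix, generated middle, and suffix
print(prefix + middle + "\n" + suffix)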
Download instructions
With git:
git clone --single-branch --branch 4_0 https://huggingface.co/machinez/Codestral-22B-v0.1-exl2
With huggingface hub:
pip3 install -U "huggingface_hub[cli]"
(optional) Store your credentials and log in:
git config --global credential.helper 'store --file ~/.my-credentials'
huggingface-cli login
To download the main branch (only useful if you only care about measurement.json) to a folder called machinez_Codestral-22B-v0.1-exl2:
mkdir machinez_Codestral-22B-v0.1-exl2
huggingface-cli download machinez/Codestral-22B-v0.1-exl2 --local-dir machinez_Codestral-22B-v0.1-exl2 --local-dir-use-symlinks False
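Equivalently, if you only need measurement.json (for example, to run your own quantizations at other bitrates), you can fetch that single file from Python. A minimal sketch using huggingface_hub:
from huggingface_hub import hf_hub_download
# Fetch only the measurement file from the main branch
path = hf_hub_download(repo_id="machinez/Codestral-22B-v0.1-exl2", filename="measurement.json")
print(path)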
To download from a different branch, add the --revision parameter:
mkdir machinez_Codestral-22B-v0.1-exl2_6.0bpw
huggingface-cli download machinez/Codestral-22B-v0.1-exl2 --revision 6_0 --local-dir machinez_Codestral-22B-v0.1-exl2_6.0bpw --local-dir-use-symlinks False
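The same branch download can be done from Python with snapshot_download; a minimal sketch (revision names follow the branch naming above):
from huggingface_hub import snapshot_download
# Download the 6.0bpw branch into a local folder
snapshot_download(repo_id="machinez/Codestral-22B-v0.1-exl2", revision="6_0", local_dir="machinez_Codestral-22B-v0.1-exl2_6.0bpw")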
Limitations
The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
License
Codestral-22B-v0.1 is released under the MNPL-0.1 license.
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall