---
pipeline_tag: text-generation
library_name: mlx
inference: false
tags:
- facebook
- meta
- llama
- llama-2
- codellama
- mlx
license: llama2
---
# CodeLlama
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This model is designed for general code synthesis and understanding. This is the repository for the 7B base model, in `npz` format suitable for use in Apple's [MLX](https://github.com/ml-explore/mlx) framework.
Weights have been converted to `float16` from the original `bfloat16` type, because `numpy` is not compatible with `bfloat16` out of the box.
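For reference, here is a minimal sketch of the kind of conversion involved, assuming the original PyTorch checkpoint is available locally (the file names `consolidated.00.pth` and `weights.npz` are illustrative, and `mlx-examples` ships its own conversion scripts):

```python
# Sketch: cast bfloat16 PyTorch weights to float16 and save as .npz.
# File names here are illustrative, not necessarily those used for this repo.
import numpy as np
import torch

state = torch.load("consolidated.00.pth", map_location="cpu")

# numpy has no bfloat16 dtype, so cast every tensor to float16 first.
np.savez(
    "weights.npz",
    **{name: tensor.to(torch.float16).numpy() for name, tensor in state.items()},
)
```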
## How to use with MLX
```bash
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git

# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir CodeLlama-7b-mlx mlx-llama/CodeLlama-7b-mlx

# Run example
python mlx-examples/llama/llama.py --prompt "int main(char argc, char **argv) {" CodeLlama-7b-mlx/ CodeLlama-7b-mlx/tokenizer.model
```
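To sanity-check the download before running the example, the converted weights can also be loaded directly with MLX (a sketch; the archive name `weights.npz` is an assumption about this repo's layout):

```python
# Sketch: load the npz archive and inspect the parameter names.
import mlx.core as mx

weights = mx.load("CodeLlama-7b-mlx/weights.npz")  # dict of name -> mx.array
print(f"{len(weights)} arrays, e.g. {next(iter(weights))}")
```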
Please refer to the original model card for details on CodeLlama.