
Model Card: Meta CodeLlama-7b-Python GGUF

Origin

Meta's CodeLlama-7b-Python, a Code Llama large language model for coding, converted into GGUF format with llama.cpp.

License

"Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved."

Policy

Run model

./main -m ggml-model-f32-00001-of-00010.gguf -p "def fibonacci("
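
The prompt can be combined with other llama.cpp main flags. As an illustrative sketch (flag availability and sensible values depend on your llama.cpp build and hardware), -n limits the number of tokens to predict and -ngl offloads layers to the GPU:

./main -m ggml-model-f32-00001-of-00010.gguf -p "def fibonacci(" -n 128 -ngl 32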

Convert to GGUF

python3 convert.py ../codellama/CodeLlama-7b-Python
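
As a hedged variant, if the convert.py in your llama.cpp checkout supports the --outtype and --outfile options, the output precision and path can be set explicitly instead of relying on the defaults (by default the f32 GGUF file is written next to the source model):

python3 convert.py ../codellama/CodeLlama-7b-Python --outtype f32 --outfile ./models/CodeLlama-7b-Python/ggml-model-f32.gguf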

Split model

The original Meta CodeLlama-7b-Python model was converted to GGUF with python3 convert.py, producing CodeLlama-7b-Python/ggml-model-f32.gguf, and then split with gguf-split into smaller chunks of up to 32 tensors each (--split-max-tensors 32).

python3 convert.py ../codellama/CodeLlama-7b-Python
./gguf-split --split --split-max-tensors 32 ./models/CodeLlama-7b-Python/ggml-model-f32.gguf ./models/CodeLlama-7b-Python/ggml-model-f32
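
The split writes numbered shards next to the given output prefix; judging from the run and merge commands in this card, this model ends up in ten shards named as follows:

ggml-model-f32-00001-of-00010.gguf
ggml-model-f32-00002-of-00010.gguf
...
ggml-model-f32-00010-of-00010.gguf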

Merge the model back

./gguf-split --merge ggml-model-f32-00001-of-00010.gguf ggml-model-f32.gguf
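
Merging back is only needed when a single file is preferred; recent llama.cpp builds can also load the split model directly by pointing -m at the first shard, as in the run command above.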

Model details

Format: GGUF, 32-bit (f32)
Model size: 6.74B params
Architecture: llama