---
license: mit
tags:
- merge
---
# Quantizations of BigCodeLLama LFG 🚀
## An experimental CodeLlama Frankenstein merge of the 70B Instruct, Python, and base models, built to see how it benchmarks
### Models Merged
The following models were included in the merge:
* ../CodeLlama-70b-hf
* ../CodeLlama-70b-Instruct-hf
* ../CodeLlama-70b-Python-hf
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [66, 76]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 66]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [13, 37]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
- sources:
  - layer_range: [10, 80]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
```
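If you want to reproduce the merge yourself, mergekit's `mergekit-yaml` entry point consumes a config like the one above. A minimal sketch, assuming the config is saved as `merge-config.yml` (the filename is an assumption) and the three CodeLlama checkpoints sit one directory up as the paths indicate:
```bash
# Sketch of reproducing the merge with mergekit (config filename is an assumption).
pip install mergekit
mergekit-yaml merge-config.yml ./BigCodeLlama-169b
```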
To reassemble each split GGUF after downloading, concatenate its parts in order:
```bash
cat BigCodeLlama-169b-q2k.gguf.part0 BigCodeLlama-169b-q2k.gguf.part1 > BigCodeLlama-169b-q2k.gguf
cat BigCodeLlama-169b-q3km.gguf.part0 BigCodeLlama-169b-q3km.gguf.part1 > BigCodeLlama-169b-q3km.gguf
cat BigCodeLlama-169b-q4ks.gguf.part0 BigCodeLlama-169b-q4ks.gguf.part1 > BigCodeLlama-169b-q4ks.gguf
cat BigCodeLlama-169b-q5km.gguf.part0 BigCodeLlama-169b-q5km.gguf.part1 BigCodeLlama-169b-q5km.gguf.part2 > BigCodeLlama-169b-q5km.gguf
cat BigCodeLlama-169b-q8.gguf.part0 BigCodeLlama-169b-q8.gguf.part1 BigCodeLlama-169b-q8.gguf.part2 BigCodeLlama-169b-q8.gguf.part3 > BigCodeLlama-169b-q8.gguf
```
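Since the `.partN` suffixes sort lexicographically (and no quant here has more than ten parts), a glob loop does the same thing with less typing:
```bash
# Equivalent shorthand: the shell expands the .part* glob in order,
# so cat stitches each quant back together in one pass.
for q in q2k q3km q4ks q5km q8; do
  cat "BigCodeLlama-169b-${q}.gguf.part"* > "BigCodeLlama-169b-${q}.gguf"
done
```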