---
license: mit
tags:
- merge
---
# Quantizations of BigCodeLlama LFG 🚀

## An experimental CodeLlama Frankenstein merge of the 70B Instruct, Python, and base models, to see how it benchmarks

### Models Merged

The following models were included in the merge:
* ../CodeLlama-70b-hf
* ../CodeLlama-70b-Instruct-hf
* ../CodeLlama-70b-Python-hf

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [66, 76]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 66]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [13, 37]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
- sources:
  - layer_range: [10, 80]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
```
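As a sanity check on the merged depth, the slice sizes can be summed; a quick sketch, assuming mergekit's `layer_range` bounds are end-exclusive (`[0, 69]` meaning layers 0 through 68):

```bash
# Sketch: total layers in the passthrough stack, assuming mergekit's
# layer_range is end-exclusive ([0, 69] = layers 0..68, i.e. 69 layers).
echo $(( (69 - 0) + (76 - 66) + (66 - 42) + (37 - 13) + (80 - 10) ))
# -> 197 stacked layers, versus 80 in a single CodeLlama-70b
```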

To reassemble each split file after downloading, run:
```bash
cat BigCodeLlama-169b-q2k.gguf.part0 BigCodeLlama-169b-q2k.gguf.part1 > BigCodeLlama-169b-q2k.gguf

cat BigCodeLlama-169b-q3km.gguf.part0 BigCodeLlama-169b-q3km.gguf.part1 > BigCodeLlama-169b-q3km.gguf

cat BigCodeLlama-169b-q4ks.gguf.part0 BigCodeLlama-169b-q4ks.gguf.part1 > BigCodeLlama-169b-q4ks.gguf

cat BigCodeLlama-169b-q5km.gguf.part0 BigCodeLlama-169b-q5km.gguf.part1 BigCodeLlama-169b-q5km.gguf.part2 > BigCodeLlama-169b-q5km.gguf

cat BigCodeLlama-169b-q8.gguf.part0 BigCodeLlama-169b-q8.gguf.part1 BigCodeLlama-169b-q8.gguf.part2 BigCodeLlama-169b-q8.gguf.part3 > BigCodeLlama-169b-q8.gguf
```
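Since the `.partN` suffixes sort lexicographically, a glob should work just as well; a minimal sketch for one quant (repeat per quantization, and compare the resulting file size against the upload before use):

```bash
# Assumes all parts for the quant are present; the shell expands .part*
# in lexicographic order (part0, part1, ...), matching the explicit cats above.
cat BigCodeLlama-169b-q5km.gguf.part* > BigCodeLlama-169b-q5km.gguf
```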