v000000 committed on
Commit 2ebeb4e
1 Parent(s): e4f09d5

Update README.md

Files changed (1)
  1. README.md +59 -35
README.md CHANGED
@@ -1,54 +1,78 @@
  ---
- base_model: v000000/L3-11.5B-DuS-MoonRoot
+ base_model: v000000/L3-11.5B-DuS-FrankenRoot
  library_name: transformers
  tags:
  - mergekit
  - merge
- - llama
  - llama-cpp
- - gguf-my-repo
+ - llama
  ---

- # v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF
- This model was converted to GGUF format from [`v000000/L3-11.5B-DuS-MoonRoot`](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot) for more details on the model.
+ # v000000/L3-11.5B-DuS-MoonRoot-Q8_0
+ This model was converted to GGUF format from [`v000000/L3-11.5B-DuS-MoonRoot`](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot) using llama.cpp.
+ Refer to the [original model card](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot) for more details on the model.

- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux)
+ ---
+ base_model:
+ - Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
+ - v000000/L3-8B-Poppy-Moonfall-C
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - llama
+ ---
+ ### Llama-3-11.5-Depth-Upscaled-MoonRoot
+ Experiment, no continued pretraining.

- ```bash
- brew install llama.cpp
+ # Quants
+ * [Q8_0](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF)
+ * [Q6_K](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot-Q6_K-GGUF)

- ```
- Invoke the llama.cpp server or the CLI.
+ # merge

- ### CLI:
- ```bash
- llama-cli --hf-repo v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF --hf-file l3-11.5b-dus-moonroot-q8_0.gguf -p "The meaning to life and the universe is"
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the passthrough merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B)
+ * [v000000/L3-8B-Poppy-Moonfall-C](https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-C)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+   - model: v000000/L3-8B-Poppy-Moonfall-C
+     layer_range: [0, 24]
+ - sources:
+   - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
+     layer_range: [8, 32]
+ merge_method: passthrough
+ dtype: bfloat16
  ```

- ### Server:
+ ---
+ base_model:
+ - Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
+ - v000000/L3-8B-Poppy-Moonfall-C
+
+ # Prompt Template:
  ```bash
- llama-server --hf-repo v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF --hf-file l3-11.5b-dus-moonroot-q8_0.gguf -c 2048
- ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>

- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
+ {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

- Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
+ {output}<|eot_id|>

- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF --hf-file l3-11.5b-dus-moonroot-q8_0.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF --hf-file l3-11.5b-dus-moonroot-q8_0.gguf -c 2048
- ```
+ ```
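
A note on the `### Configuration` block added above: the passthrough merge is a depth upscale, stacking the first 24 layers of v000000/L3-8B-Poppy-Moonfall-C on top of layers 8-31 of Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B. That yields 48 transformer layers versus 32 in each 8B donor, which is roughly where the 11.5B parameter count in the model name comes from. A minimal sketch of running such a config with mergekit, assuming the YAML is saved as `config.yaml` (the output directory name is hypothetical, not the author's exact invocation):

```bash
# Install mergekit and run the passthrough config shown in the diff above.
# config.yaml holds the "### Configuration" YAML; the output path is hypothetical.
pip install mergekit
mergekit-yaml config.yaml ./L3-11.5B-DuS-MoonRoot --cuda
```

The `--cuda` flag is optional here; a passthrough merge only copies tensors, so it also runs fine on CPU.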
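
As for the `# Prompt Template:` block, it is the standard Llama 3 Instruct chat format. The commit drops the old llama.cpp instructions, but the Q8_0 quant linked under `# Quants` can still be run the same way; a hedged example reusing the `--hf-repo` and `--hf-file` values from the removed lines, with a placeholder system prompt and user message standing in for `{system_prompt}` and `{input}`:

```bash
# Run the Q8_0 GGUF with llama.cpp's CLI; -e expands the \n escapes
# into real newlines. Generation continues from the assistant header.
llama-cli --hf-repo v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF \
  --hf-file l3-11.5b-dus-moonroot-q8_0.gguf \
  -e -p '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'
```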