---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
datasets:
- Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1
base_model: fblgit/cybertron-v4-qw7B-MGS
library_name: transformers
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
language:
- en
model-index:
- name: cybertron-v4-qw7B-MGS
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 62.64
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-MGS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 37.04
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-MGS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 27.72
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-MGS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.05
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-MGS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.2
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-MGS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 38.59
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-MGS
      name: Open LLM Leaderboard
---

# dwdcth/cybertron-v4-qw7B-MGS-Q5_K_S-GGUF
This model was converted to GGUF format from [`fblgit/cybertron-v4-qw7B-MGS`](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS) for more details on the model.
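
If you would rather fetch the quantized file yourself instead of letting llama.cpp download it on first use, the `huggingface-cli` tool from the `huggingface_hub` package can pull it directly (a minimal sketch; the repo and filename match the commands shown below):

```bash
# Download the GGUF file into the current directory
# (requires: pip install huggingface_hub)
huggingface-cli download dwdcth/cybertron-v4-qw7B-MGS-Q5_K_S-GGUF \
  cybertron-v4-qw7b-mgs-q5_k_s.gguf --local-dir .
```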

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```
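
As a quick sanity check (not part of the original instructions), you can confirm the binaries are on your PATH before going further:

```bash
# Should print llama.cpp's usage and option list if the install succeeded
llama-cli --help
```
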
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo dwdcth/cybertron-v4-qw7B-MGS-Q5_K_S-GGUF --hf-file cybertron-v4-qw7b-mgs-q5_k_s.gguf -p "The meaning to life and the universe is"
```
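
The CLI also accepts llama.cpp's usual sampling and length options on top of the invocation above; a sketch using two common flags (`-n` caps the number of generated tokens, `--temp` sets the sampling temperature):

```bash
# Same invocation, limited to 256 generated tokens at a lower temperature
llama-cli --hf-repo dwdcth/cybertron-v4-qw7B-MGS-Q5_K_S-GGUF \
  --hf-file cybertron-v4-mgs-q5_k_s.gguf \
  -p "The meaning to life and the universe is" \
  -n 256 --temp 0.7
```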

### Server:
```bash
llama-server --hf-repo dwdcth/cybertron-v4-qw7B-MGS-Q5_K_S-GGUF --hf-file cybertron-v4-qw7b-mgs-q5_k_s.gguf -c 2048
```
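
Once running, llama-server exposes an HTTP API (on port 8080 by default), including an OpenAI-compatible chat endpoint. A minimal sketch of querying it with curl, assuming the default host and port:

```bash
# Send a chat request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the meaning of life?"}
    ],
    "max_tokens": 128
  }'
```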

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
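
Note that recent llama.cpp revisions have moved from Makefiles to CMake, so if `make` fails on a fresh checkout, the roughly equivalent CMake build is (a sketch; option names follow llama.cpp's current CMake configuration):

```bash
# CMake-based build with CURL support enabled; binaries land under build/bin/
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```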

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo dwdcth/cybertron-v4-qw7B-MGS-Q5_K_S-GGUF --hf-file cybertron-v4-qw7b-mgs-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo dwdcth/cybertron-v4-qw7B-MGS-Q5_K_S-GGUF --hf-file cybertron-v4-qw7b-mgs-q5_k_s.gguf -c 2048
```