build: 3787 (6026da52) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from Qwen2.5-Math-7B-Instruct-IMat-GGUF/Qwen2.5-Math-7B-Instruct.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 Math 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5-Math
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-M...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 Math 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-M...
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 4096
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 7
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q8_0: 198 tensors
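The metadata dump above can be reproduced outside of llama.cpp with the gguf-py package that ships in the llama.cpp repo (pip install gguf). A minimal sketch, assuming that package and a local copy of the file; the path is illustrative and the exact reader attributes may differ between gguf-py versions:

    # Sketch: list GGUF metadata keys and tensors with gguf-py (pip install gguf).
    from gguf import GGUFReader

    reader = GGUFReader("Qwen2.5-Math-7B-Instruct.Q8_0.gguf")  # illustrative path

    # Each key matches one "kv" line in the dump above, e.g. "qwen2.block_count".
    for key in reader.fields:
        print(key)

    # 339 entries here, matching the tensor count reported by llama_model_loader.
    for t in reader.tensors:
        print(t.name, t.shape, t.tensor_type)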
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 7.54 GiB (8.50 BPW)
llm_load_print_meta: general.name = Qwen2.5 Math 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
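The derived attention and size figures in this block follow arithmetically from the dumped hyperparameters. A quick sketch re-deriving them, with the values hard-coded from this log:

    # Re-derive the GQA and bits-per-weight figures printed above.
    n_head, n_head_kv, n_embd = 28, 4, 3584

    n_embd_head_k = n_embd // n_head          # 3584 / 28 = 128
    n_gqa = n_head // n_head_kv               # 28 / 4 = 7 query heads per KV head
    n_embd_k_gqa = n_head_kv * n_embd_head_k  # 4 * 128 = 512

    # BPW = file size in bits / parameter count.
    size_bits = 7.54 * 1024**3 * 8
    print(size_bits / 7.62e9)                 # ~8.50 BPW, as reported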
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 552.23 MiB
llm_load_tensors: CUDA0 buffer size = 7165.44 MiB
........................................................................................
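With all 29/29 layers offloaded, the remaining 552.23 MiB CPU buffer is the token-embedding table, which stays on the host; its size checks out against the vocab and embedding dimensions above. A rough check, assuming the Q8_0 layout of 34 bytes per block of 32 weights (32 int8 values plus one f16 scale):

    # Size of the Q8_0 token-embedding table kept on the host.
    n_vocab, n_embd = 152064, 3584
    embd_bytes = n_vocab * n_embd // 32 * 34  # 34-byte blocks of 32 weights
    print(embd_bytes / 2**20)                 # ~552.23 MiB, matching the CPU buffer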
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 28.00 MiB
llama_new_context_with_model: KV self size = 28.00 MiB, K (f16): 14.00 MiB, V (f16): 14.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.58 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 304.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 8.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 2
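The 28.00 MiB KV cache also follows from the numbers above: 28 layers, a 512-token context, and 512-wide K and V states per layer (the GQA-reduced width), stored as f16. A quick check:

    # f16 KV cache: n_layer * n_ctx * width * 2 bytes, separately for K and V.
    n_layer, n_ctx, n_embd_gqa = 28, 512, 512
    k_bytes = n_layer * n_ctx * n_embd_gqa * 2
    print(k_bytes / 2**20, 2 * k_bytes / 2**20)  # 14.0 MiB each, 28.0 MiB total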
system_info: n_threads = 25 (n_threads_batch = 25) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 131.5 ms
compute_imatrix: computing over 128 chunks with batch_size 512
compute_imatrix: 0.69 seconds per pass - ETA 1.47 minutes
[1]28.7884,[2]18.5953,[3]16.3479,[4]20.1465,[5]19.5615,[6]20.3099,[7]21.3095,[8]20.8118,[9]23.2923,[10]22.3917,[11]20.9308,[12]24.0642,[13]28.6004,[14]30.3579,[15]34.0691,[16]36.3716,[17]38.2215,[18]42.9121,[19]43.4196,[20]43.9599,[21]48.1262,[22]49.1824,[23]49.4922,[24]51.4308,[25]53.2112,[26]54.2365,[27]58.6712,[28]61.5836,[29]64.8420,[30]65.0070,[31]65.2907,[32]62.9179,[33]60.2049,[34]57.5294,[35]55.2869,[36]63.1691,[37]74.8448,[38]76.4167,[39]75.7432,[40]77.4725,[41]77.4437,[42]81.9817,[43]85.9957,[44]90.6828,[45]94.4227,[46]95.9006,[47]94.7120,[48]94.5773,[49]93.7139,[50]93.1522,[51]91.8615,[52]91.9521,[53]94.8851,[54]95.4016,[55]98.0729,[56]99.5771,[57]99.4158,[58]100.0638,[59]100.1002,[60]100.4101,[61]98.6097,[62]97.6243,[63]97.8416,[64]99.3454,[65]97.8466,[66]96.3482,[67]95.1378,[68]92.5769,[69]91.1216,[70]89.6284,[71]87.5463,[72]86.2201,[73]84.8736,[74]82.5846,[75]80.3171,[76]78.4186,[77]77.2179,[78]76.3259,[79]75.0681,[80]73.5364,[81]73.1254,[82]72.6815,[83]71.3370,[84]71.2249,[85]70.7469,[86]70.5564,[87]69.6381,[88]69.1689,[89]69.6594,[90]69.8000,[91]69.6801,[92]68.1260,[93]66.6620,[94]64.8501,[95]63.2899,[96]61.9002,[97]60.3517,[98]58.9511,[99]58.9191,[100]58.8341,[101]58.8857,[102]59.9250,[103]61.2395,[104]62.2741,[105]64.0173,[106]65.3101,[107]65.5034,[108]64.8326,[109]64.9166,[110]65.0537,[111]64.1082,[112]63.1934,[113]62.8422,[114]63.3379,[115]63.4099,[116]63.3883,[117]63.7308,[118]64.1004,[119]63.9750,[120]63.7882,[121]63.7849,[122]62.9774,[123]63.4501,[124]64.2247,[125]65.0395,[126]66.1319,[127]66.9557,[128]67.6565,
Final estimate: PPL = 67.6565 +/- 1.42709
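The bracketed numbers above are a running perplexity estimate, updated after each of the 128 chunks of 512 tokens (128 x 512 = 65536 tokens, consistent with the 0.69 s/pass ETA of roughly 1.47 minutes). Perplexity is the exponential of the mean per-token negative log-likelihood; a minimal sketch of the chunked accumulation, where token_nll() is a hypothetical stand-in for the model's per-token losses:

    import math

    def running_ppl(chunks, token_nll):
        # token_nll(chunk) is a hypothetical callback returning the per-token
        # negative log-likelihoods for one 512-token chunk.
        total_nll, n_tokens = 0.0, 0
        for i, chunk in enumerate(chunks, 1):
            nlls = token_nll(chunk)
            total_nll += sum(nlls)
            n_tokens += len(nlls)
            # Same format as the log: [chunk]cumulative-PPL
            print(f"[{i}]{math.exp(total_nll / n_tokens):.4f}", end=",")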
llama_perf_context_print: load time = 2621.56 ms
llama_perf_context_print: prompt eval time = 65452.29 ms / 65536 tokens ( 1.00 ms per token, 1001.28 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 68380.69 ms / 65537 tokens
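The perf figures are self-consistent: 65536 prompt tokens in 65452.29 ms is almost exactly 1 ms per token. A last arithmetic check:

    # Prompt-eval throughput from the perf print above.
    ms, tokens = 65452.29, 65536
    print(ms / tokens)           # ~1.00 ms per token
    print(tokens / (ms / 1e3))   # ~1001.28 tokens per second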