---
tags:
- fp8
- vllm
---

# gemma-2-9b-it-FP8

## Model Overview
* <h3 style="display: inline;">Model Architecture:</h3> Based on and identical to the gemma-2-9b-it architecture
* <h3 style="display: inline;">Model Optimizations:</h3> Weights and activations quantized to FP8
* <h3 style="display: inline;">Release Date:</h3> July 8, 2024
* <h3 style="display: inline;">Model Developers:</h3> Neural Magic

gemma-2-9b-it quantized to FP8 weights and activations using per-tensor quantization through the [AutoFP8 repository](https://github.com/neuralmagic/AutoFP8), ready for inference with vLLM >= 0.5.0.
Calibrated with one repeat of each token in the tokenizer's vocabulary, in random order, achieving 100% recovery of the unquantized model's Open LLM Leaderboard scores.
FP8 quantization reduces disk space by approximately 50%.
Part of the [FP8 LLMs for vLLM collection](https://huggingface.co/collections/neuralmagic/fp8-llms-for-vllm-666742ed2b78b7ac8df13127).

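The checkpoint is drop-in compatible with vLLM. A minimal inference sketch (the prompt and sampling settings here are illustrative, not prescriptive):

```python
from vllm import LLM, SamplingParams

# FP8 weights and activation scales are picked up automatically by vLLM >= 0.5.0.
model = LLM("neuralmagic/gemma-2-9b-it-FP8", max_model_len=4096)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = model.generate(["What is FP8 quantization?"], params)
print(outputs[0].outputs[0].text)
```
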
## Usage and Creation
Produced using AutoFP8 with randomly shuffled vocabulary tokens as calibration data, adapted from [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py).

```python
import numpy as np
import torch
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

MODEL_DIR = "google/gemma-2-9b-it"
final_model_dir = MODEL_DIR.split("/")[-1]

CONTEXT_LENGTH = 4096
NUM_REPEATS = 1

pretrained_model_dir = MODEL_DIR
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=CONTEXT_LENGTH)
tokenizer.pad_token = tokenizer.eos_token

# Build calibration data that covers every vocabulary token NUM_REPEATS times.
tokenizer_num_tokens = len(tokenizer.get_vocab())
total_token_samples = NUM_REPEATS * tokenizer_num_tokens
# Ceiling division: number of CONTEXT_LENGTH-token sequences needed.
num_random_samp = -(-total_token_samples // CONTEXT_LENGTH)

# Tile one extra repeat so the final sequence can be filled, then shuffle.
input_ids = np.tile(np.arange(tokenizer_num_tokens), NUM_REPEATS + 1)[:num_random_samp * CONTEXT_LENGTH]
np.random.shuffle(input_ids)
input_ids = input_ids.reshape(num_random_samp, CONTEXT_LENGTH)
input_ids = torch.tensor(input_ids, dtype=torch.int64).to("cuda")

# Static (per-tensor) FP8 quantization of both weights and activations.
quantize_config = BaseQuantizeConfig(
    quant_method="fp8",
    activation_scheme="static",
)

examples = input_ids

model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config=quantize_config)
model.quantize(examples)

quantized_model_dir = f"{final_model_dir}-FP8"
model.save_quantized(quantized_model_dir)
```
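
For intuition, the "static" per-tensor scheme above amounts to storing a single FP8 scale per tensor. A minimal sketch of per-tensor FP8 (E4M3) quantization, assuming PyTorch >= 2.1 for the `torch.float8_e4m3fn` dtype (an illustration of the idea, not AutoFP8's internal code):

```python
import torch

# Largest finite value representable in float8_e4m3fn.
FP8_E4M3_MAX = 448.0

def quantize_per_tensor_fp8(x: torch.Tensor):
    # One scale per tensor, chosen so the largest magnitude maps onto the FP8 range.
    scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

weight = torch.randn(4096, 4096)
w_fp8, w_scale = quantize_per_tensor_fp8(weight)
# Dequantize to inspect the round-trip error introduced by FP8.
error = (weight - w_fp8.to(torch.float32) * w_scale).abs().max()
print(f"max abs round-trip error: {error.item():.4f}")
```
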
Evaluated through vLLM >= 0.5.1 with the following script:

```bash
#!/bin/bash

# Example usage:
# CUDA_VISIBLE_DEVICES=0 ./eval_openllm.sh "neuralmagic/gemma-2-9b-it-FP8" "tensor_parallel_size=1,max_model_len=4096,add_bos_token=True,gpu_memory_utilization=0.7"

export MODEL_DIR=${1}
export MODEL_ARGS=${2}

declare -A tasks_fewshot=(
    ["arc_challenge"]=25
    ["winogrande"]=5
    ["truthfulqa_mc2"]=0
    ["hellaswag"]=10
    ["mmlu"]=5
    ["gsm8k"]=5
)

declare -A batch_sizes=(
    ["arc_challenge"]="auto"
    ["winogrande"]="auto"
    ["truthfulqa_mc2"]="auto"
    ["hellaswag"]="auto"
    ["mmlu"]=1
    ["gsm8k"]="auto"
)

for TASK in "${!tasks_fewshot[@]}"; do
    NUM_FEWSHOT=${tasks_fewshot[$TASK]}
    BATCH_SIZE=${batch_sizes[$TASK]}
    lm_eval --model vllm \
        --model_args pretrained=$MODEL_DIR,$MODEL_ARGS \
        --tasks ${TASK} \
        --num_fewshot ${NUM_FEWSHOT} \
        --write_out \
        --show_config \
        --device cuda \
        --batch_size ${BATCH_SIZE} \
        --output_path="results/${TASK}"
done
```
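
Each run writes its metrics under `results/<task>`. A hedged sketch for collecting them afterwards, assuming the standard lm-eval JSON output with a top-level `"results"` key (the exact file layout varies across lm-eval versions):

```python
import glob
import json

# Walk every JSON the evaluation runs produced and print per-task metrics.
for path in sorted(glob.glob("results/**/*.json", recursive=True)):
    with open(path) as f:
        report = json.load(f)
    for task, metrics in report.get("results", {}).items():
        print(task, metrics)
```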


## Evaluation

Evaluated on the Open LLM Leaderboard benchmark tasks through vLLM.

### Open LLM Leaderboard evaluation scores
|                         | gemma-2-9b-it | neuralmagic/gemma-2-9b-it-FP8<br>(this model) |
| :---------------------: | :-----------: | :-------------------------------------------: |
| arc-c<br>25-shot        | 71.50         | 71.50                                         |
| hellaswag<br>10-shot    | 81.91         | 81.70                                         |
| mmlu<br>5-shot          | 72.28         | 71.99                                         |
| truthfulqa<br>0-shot    | 60.32         | 60.52                                         |
| winogrande<br>5-shot    | 77.11         | 78.37                                         |
| gsm8k<br>5-shot         | 76.26         | 76.87                                         |
| **Average<br>Accuracy** | **73.23**     | **73.49**                                     |
| **Recovery**            | **100%**      | **100.36%**                                   |
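
Recovery is the ratio of the FP8 model's average accuracy to the unquantized baseline's average, reproducible from the table above:

```python
# Per-task scores from the table above.
baseline = [71.50, 81.91, 72.28, 60.32, 77.11, 76.26]  # gemma-2-9b-it
fp8 = [71.50, 81.70, 71.99, 60.52, 78.37, 76.87]       # this model

avg_baseline = sum(baseline) / len(baseline)  # 73.23
avg_fp8 = sum(fp8) / len(fp8)                 # 73.49
print(f"Recovery: {avg_fp8 / avg_baseline:.2%}")  # Recovery: 100.36%
```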