ericsorides committed

Commit 1a44caa
1 Parent(s): b8fce9d

Added new inputs and README

README.md ADDED
@@ -0,0 +1,153 @@
+ ---
+ tags:
+ - text-generation-inference
+ - mistral
+ - 4-bit precision
+ - AWQ
+ base_model:
+ - mistralai/Mistral-7B-v0.1
+ ---
+
+
+ # Mistral 7B v0.1 with Key-Value-Cache enabled in ONNX AWQ (4-bit) format
+ - Model creator: [MistralAI](https://huggingface.co/mistralai)
+ - Original model: [MistralAI Mistral 7B v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains the ONNX files for the conversion of Mistral 7B v0.1 carried out by Esperanto Technologies.
+ The model is quantized to 4 bits with AWQ and has the key-value cache (KVC) enabled.
+
+ ### About AWQ
+
+ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
+ More here: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
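+
+ For reference, this is a minimal sketch of a typical AutoAWQ quantization run. It is an assumption about the general workflow (including the quantization config values and the output directory name), not the exact commands used to produce this repo:
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer
+
+ model_id = "mistralai/Mistral-7B-v0.1"
+ quant_path = "mistral-7b-awq-int4"  # hypothetical output directory
+ quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
+
+ # Load the FP16 model and its tokenizer
+ model = AutoAWQForCausalLM.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Run AWQ 4-bit quantization and save the result
+ model.quantize(tokenizer, quant_config=quant_config)
+ model.save_quantized(quant_path)
+ tokenizer.save_pretrained(quant_path)
+ ```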
+
+ <!-- description end -->
+
+ ## How to download ONNX model and weight files
+
+ The easiest way to obtain the model is to clone this whole repo.
+ Alternatively, you can download the files using the `huggingface-hub` Python library, as shown in the sketch at the end of this section.
+
+ ```shell
+ pip3 install huggingface-hub>=0.17.1
+ ```
+
+ Then you can download the model files to a local directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download Esperanto/mistral-7b-kvc-AWQ-int4-onnx --local-dir mistral-7b-kvc-AWQ-int4-onnx --local-dir-use-symlinks False
+ ```
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
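+
+ If you prefer to stay in Python, the whole repo can also be fetched with the `huggingface-hub` library mentioned above. A minimal sketch (the local directory name is just an example):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download every file of the repo into a local folder
+ snapshot_download(
+     repo_id="Esperanto/mistral-7b-kvc-AWQ-int4-onnx",
+     local_dir="mistral-7b-kvc-AWQ-int4-onnx",
+ )
+ ```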
+
+ ## How to run from Python code using ONNXRuntime
+
+ This model can easily be run on a CPU using [ONNXRuntime](https://onnxruntime.ai/).
+
+ #### First install the packages
+
+ ```bash
+ pip3 install onnx==1.16.1
+ pip3 install onnxruntime==1.17.1
+ ```
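+
+ The generation example below also relies on `transformers` (for the tokenizer) and `numpy`, which you may need to install separately. Once ONNXRuntime is installed, a quick sanity check is to open the model and list its inputs; this is a minimal sketch assuming the repo was downloaded to `mistral-7b-kvc-AWQ-int4-onnx` as above:
+
+ ```python
+ import onnxruntime
+
+ # Open a CPU inference session and list the model's inputs
+ # (input_ids, attention_mask, position_ids, tree_attention and the
+ # past_key_values.* cache tensors used by the generation loop below).
+ session = onnxruntime.InferenceSession("mistral-7b-kvc-AWQ-int4-onnx/model.onnx")
+ for tensor in session.get_inputs():
+     print(tensor.name, tensor.shape, tensor.type)
+ ```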
+
+ #### Example code: generate text with this model
+
+ We define the generation loop with greedy decoding:
+ ```python
+ import numpy as np
+ import onnxruntime
+ import onnx
+ from transformers import AutoTokenizer
+
+ def generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context):
+     model = onnx.load(model_path)
+
+     # Create the inputs for the first iteration
+     input_tensor = tokenizer(prompt, return_tensors="np")
+     prompt_size = len(input_tensor['input_ids'][0])
+     actual_input = input_tensor['input_ids']
+     if prompt_size < window:
+         # Left-pad the prompt with BOS tokens until it fills one window
+         actual_input = np.concatenate((tokenizer.bos_token_id * np.ones([1, window - prompt_size], dtype='int64'),
+                                        actual_input), axis=1)
+     if prompt_size + max_gen_tokens > total_sequence:
+         print("ERROR: Longer total sequence is needed!")
+         return
+     first_attention = np.concatenate((np.zeros([1, total_sequence - window], dtype='int64'),
+                                       np.ones((1, window), dtype='int64')), axis=1)
+     max_gen_tokens += prompt_size  # we need to generate on top of parsing the prompt
+     inputs_names = [node.name for node in model.graph.input]
+     output_names = [node.name for node in model.graph.output]
+     n_heads = 8  # number of key-value (GQA) heads in the cache
+     inputs_dict = {}
+     inputs_dict['input_ids'] = actual_input[:, :window].reshape(1, window)
+     inputs_dict['attention_mask'] = first_attention
+     index_pos = sum(first_attention[0])
+     inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - index_pos], dtype='int64'),
+                                                   np.arange(index_pos, dtype='int64').reshape(1, index_pos)), axis=1)
+     inputs_dict['tree_attention'] = np.triu(-65504 * np.ones(total_sequence), k=1).astype('float16').reshape(1, 1, total_sequence, total_sequence)
+     # The remaining inputs are the (initially empty) key-value-cache tensors
+     for name in inputs_names:
+         if name in ('input_ids', 'attention_mask', 'position_ids', 'tree_attention'):
+             continue
+         inputs_dict[name] = np.zeros([1, n_heads, context - window, 128], dtype="float16")
+     new_token = np.array([10])  # dummy non-EOS token so the loop starts
+     next_index = window
+     old_j = 0
+     total_input = actual_input
+
+     rt_session = onnxruntime.InferenceSession(model_path)
+     # Run the inferences
+     while next_index < max_gen_tokens:
+         if (new_token == tokenizer.eos_token_id).any():
+             break
+         # Inference
+         output = rt_session.run(output_names, inputs_dict)
+         outs_dictionary = {name: content for (name, content) in zip(output_names, output)}
+         # Prepare the inputs for the next inference
+         for name in inputs_names:
+             if name == 'input_ids':
+                 old_j = next_index
+                 if next_index < prompt_size:
+                     # Still parsing the prompt: advance by one window (or to the end of the prompt)
+                     if prompt_size - next_index >= window:
+                         next_index += window
+                     else:
+                         next_index = prompt_size
+                     j = next_index - window
+                 else:
+                     # Generating: advance one token at a time
+                     next_index += 1
+                     j = next_index - window
+                 new_token = outs_dictionary['logits'].argmax(-1).reshape(1, window)
+                 total_input = np.concatenate((total_input, new_token[:, -1:]), axis=1)
+                 inputs_dict['input_ids'] = total_input[:, j:next_index].reshape(1, window)
+             elif name == 'attention_mask':
+                 inputs_dict['attention_mask'] = np.concatenate((np.zeros((1, total_sequence - next_index), dtype='int64'),
+                                                                 np.ones((1, next_index), dtype='int64')), axis=1)
+             elif name == 'position_ids':
+                 inputs_dict['position_ids'] = np.concatenate((np.zeros([1, total_sequence - next_index], dtype='int64'),
+                                                               np.arange(next_index, dtype='int64').reshape(1, next_index)), axis=1)
+             elif name == 'tree_attention':
+                 continue
+             else:
+                 # Feed back the updated key-value cache returned by the previous run
+                 old_name = name.replace("past_key_values", "present")
+                 inputs_dict[name] = outs_dictionary[old_name][:, :, next_index - old_j:context - window + (next_index - old_j), :]
+
+     answer = tokenizer.decode(total_input[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
+     return answer
+ ```
+ We now run the inferences:
+
+ ```python
+ tokenizer = AutoTokenizer.from_pretrained("Esperanto/mistral-7b-kvc-AWQ-int4-onnx")
+ model_path = "mistral-7b-kvc-AWQ-int4-onnx/model.onnx"
+
+ max_gen_tokens = 20    # number of tokens we want to generate
+ total_sequence = 128   # total sequence length
+ context = 1024         # context length the key-value cache is extended to
+ window = 16            # number of tokens we parse at a time
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+ generated = generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context)
+ print(generated)
+ ```
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "</s>": 2,
+   "<s>": 1,
+   "<unk>": 0
+ }
config.json ADDED
File without changes
model.onnx CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:765a971ec732e03357d7376fd922e4285ad68cc226c2bbbd5fd37c140b640a6f
- size 19129839
+ oid sha256:c8ebbb43f8a077ecc4d18c6dbef3882ac11719969f25d1cba9a856f379123771
+ size 19119214
special_tokens_map.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "unk_token": "<unk>"
+ }
EtGlowExecutionProvider_GLOW_graph_Extracted_from_-Extracted_from_-Extracted_from_-main_graph---_9643334558254212124_0_0_0.onnx → tokenizer.model RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b96191a2f77e07f1ef5d7ab490520ae75c235df5344046ceaac680d2f0d11d7b
- size 19003592
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,47 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "tokenizer_file": "models/pytorch_mistral/tokenizer.json",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": true
+ }