AdinaM committed
Commit c1ab4ad
1 Parent(s): 3133be3

Upload 12 files

copied another project for testing

README.md CHANGED
@@ -1,204 +1,222 @@
  ---
  license: apache-2.0
- datasets:
- - HuggingFaceTB/cosmopedia
- metrics:
- - accuracy
- pipeline_tag: question-answering
  tags:
- - medical
  ---
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

  ---
+ language:
+ - fr
+ - it
+ - de
+ - es
+ - en
  license: apache-2.0
  tags:
+ - moe
+ model-index:
+ - name: Mixtral-8x22B-v0.1
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 70.48
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 88.73
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 77.81
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 51.08
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 84.53
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 74.15
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mistral-community/Mixtral-8x22B-v0.1
+       name: Open LLM Leaderboard
  ---
+ # Mixtral-8x22B
+
+ > [!TIP]
+ > MistralAI has uploaded weights to their organization at [mistralai/Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) and [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) too.
+
+ > [!TIP]
+ > Kudos to [@v2ray](https://huggingface.co/v2ray) for converting the checkpoints and uploading them in a `transformers`-compatible format. Go give them a follow!
+
+ Converted to the HuggingFace Transformers format using the script [here](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1/blob/main/convert.py).
+
+ The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
+ ## Run the model
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "mistral-community/Mixtral-8x22B-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt")
+
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+ By default, `transformers` loads the model in full precision. You can therefore substantially reduce the memory requirements for running the model through the optimizations offered in the HF ecosystem:
+ ### In half-precision
+ Note that `float16` precision only works on GPU devices.
+ <details>
+ <summary> Click to expand </summary>
+
+ ```diff
+ + import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "mistral-community/Mixtral-8x22B-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
+
+ text = "Hello my name is"
+ + inputs = tokenizer(text, return_tensors="pt").to(0)
+
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+ </details>
+
+ ### Lower precision (8-bit & 4-bit) using `bitsandbytes`
+ <details>
+ <summary> Click to expand </summary>
+
+ ```diff
+ + import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "mistral-community/Mixtral-8x22B-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
+
+ text = "Hello my name is"
+ + inputs = tokenizer(text, return_tensors="pt").to(0)
+
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+ </details>
+
+ ### Load the model with Flash Attention 2
+ <details>
+ <summary> Click to expand </summary>
+
+ ```diff
+ + import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "mistral-community/Mixtral-8x22B-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
+
+ text = "Hello my name is"
+ + inputs = tokenizer(text, return_tensors="pt").to(0)
+
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+ </details>
+
+ ## Notice
+ Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
+ # The Mistral AI Team
+ Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistral-community__Mixtral-8x22B-v0.1).
+
+ | Metric                            | Value |
+ |-----------------------------------|------:|
+ | Avg.                              | 74.46 |
+ | AI2 Reasoning Challenge (25-Shot) | 70.48 |
+ | HellaSwag (10-Shot)               | 88.73 |
+ | MMLU (5-Shot)                     | 77.81 |
+ | TruthfulQA (0-shot)               | 51.08 |
+ | Winogrande (5-shot)               | 84.53 |
+ | GSM8k (5-shot)                    | 74.15 |
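
The precision options in the card above map onto memory roughly as follows. A back-of-the-envelope sketch, under stated assumptions (~141B total parameters as announced by Mistral AI, weights only, ignoring activations and KV cache):

```python
# Rough weight-memory estimate for Mixtral-8x22B at different precisions.
N_PARAMS = 141e9  # assumed total parameter count (~141B per Mistral AI's announcement)

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    gib = N_PARAMS * bytes_per_param / 2**30
    print(f"{name:>9}: ~{gib:,.0f} GiB for weights alone")
```

Only 2 of the 8 experts run per token, so per-token compute is far lower than these totals suggest, but all expert weights must still be resident in memory.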
RELEASE ADDED
@@ -0,0 +1,71 @@
+
+                                                                     ▄▄▄░░
+                                                        ▄▄▄▄▄█████████░░░░
+                                           ▄▄▄▄▄▄████████████████████░░░░░
+                                        █████████████████████████████░░░░░
+                         ▄▄▄▄▄▄█████░░░ █████████████████████████████░░░░░
+         ▄▄▄▄▄██████████████████░░░░░░ ██████████████████████████████░░░░░
+ ▄█████████████████████████████░░░░░░░░██████████████████████████████░░░░░
+ ███████████████████████████████░░░░░░░██████████████████████████████░░░░░
+ ███████████████████████████████░░░░░░░██████████████████████████████░░░░░
+ ███████████████████████████████░░░░░░███████████████████████████████░░░░░
+ ████████████████████████████████░░░░░███████████████████████████████░░░░░
+ ████████████████████████████████░░░░████████████████████████████████░░░░░
+ █████████████████████████████████░░░████████████████████████████████░░░░░
+ █████████████████████████████████░░░████████████░███████████████████░░░░░
+ ██████████████████████████████████░█████████████░███████████████████░░░░░
+ ███████████████████░██████████████▄█████████████░███████████████████░░░░░
+ ███████████████████░███████████████████████████░░███████████████████░░░░░
+ ███████████████████░░██████████████████████████░░███████████████████░░░░░
+ ███████████████████░░█████████████████████████░░░███████████████████░░░░░
+ ███████████████████░░░████████████████████████░░░███████████████████░░░░░
+ ███████████████████░░░████████████████████████░░░███████████████████░░░░░
+ ███████████████████░░░░██████████████████████░░░░███████████████████░░░░░
+ ███████████████████░░░░██████████████████████░░░░███████████████████░░░░░
+ ███████████████████░░░░░█████████████████████░░░░███████████████████░░░░░
+ ███████████████████░░░░░████████████████████░░░░░███████████████████░░░░░
+ ███████████████████░░░░░░███████████████████░░░░░███████████████████░░░░░
+ ███████████████████░░░░░░██████████████████░░░░░░███████████████████░░░░░
+ ███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░
+ ███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░
+ ███████████████████░░░░░░░░███████████████░░░░░░░██████████░░░░░░░░░░░░░░
+ ███████████████████░░░░░░░░███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
+ ███████████████████░░░░░░░░███████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
+ ███████████████████░░░░░░░░░██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
+ ███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
+ ██████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  ░░░░░░░
+ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      ░░░
+ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░                ░░░░░░░░░░░░░░░░░░
+ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░
+ ░░░░░░░░░░░░░░░░░
+ ░░░░░
+
+
+ ╓────────────────────────────────────────────────────────────────────────────╖
+ ║                          MIXTRAL 8x22B ·· 24/04/10                          ║
+ ╙────────────────────────────────────────────────────────────────────────────╜
+
+ ╓────────────────────────────────────────────────────────────────────────────╖
+ ║                                                                            ║
+ ║                                ·· md5sum ··                                ║
+ ║                                                                            ║
+ ║         3816cd2c4f827b4b868bc6481d5d3ba2 consolidated.safetensors          ║
+ ║         37974873eb68a7ab30c4912fc36264ae tokenizer.model                   ║
+ ╙────────────────────────────────────────────────────────────────────────────╜
+
+ ╓────────────────────────────────────────────────────────────────────────────╖
+ ║                                                                            ║
+ ║                   ·· Released by the Mistral AI team ··                    ║
+ ║     Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,     ║
+ ║ Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, ║
+ ║      Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,      ║
+ ║   Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,   ║
+ ║      Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,     ║
+ ║    Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon    ║
+ ║   Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,   ║
+ ║   Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,   ║
+ ║      Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak,      ║
+ ║     Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet,     ║
+ ║    Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall    ║
+ ║                                                                            ║
+ ╙────────────────────────────────────────────────────────────────────────────╜
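
For anyone verifying a download against the checksums in this release note, a minimal sketch (the expected digests come from the box above; the file paths are an assumption, taken relative to the current directory):

```python
import hashlib

# Expected md5 digests, copied from the release note.
expected = {
    "consolidated.safetensors": "3816cd2c4f827b4b868bc6481d5d3ba2",
    "tokenizer.model": "37974873eb68a7ab30c4912fc36264ae",
}

for name, md5 in expected.items():
    h = hashlib.md5()
    # Stream the file in 1 MiB chunks so large weights don't need to fit in RAM.
    with open(name, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    status = "OK" if h.hexdigest() == md5 else "MISMATCH"
    print(f"{name}: {status}")
```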
app.py ADDED
@@ -0,0 +1,83 @@
+ import streamlit as st
+ from streamlit_chat import message
+ # from langchain.llms import OpenAI  # this import has been replaced by the one below
+ from langchain_openai import OpenAI
+ from langchain.chains import ConversationChain
+ from langchain.chains.conversation.memory import (ConversationBufferMemory,
+                                                   ConversationSummaryMemory,
+                                                   ConversationBufferWindowMemory)
+
+ # Initialise session state on first load
+ if 'conversation' not in st.session_state:
+     st.session_state['conversation'] = None
+ if 'messages' not in st.session_state:
+     st.session_state['messages'] = []
+ if 'API_Key' not in st.session_state:
+     st.session_state['API_Key'] = ''
+
+ # Setting page title and header
+ st.set_page_config(page_title="Chat GPT Clone", page_icon=":robot_face:")
+ st.markdown("<h1 style='text-align: center;'>How can I assist you?</h1>", unsafe_allow_html=True)
+
+ st.sidebar.title("😎")
+ st.session_state['API_Key'] = st.sidebar.text_input("What's your API key?", type="password")
+ summarise_button = st.sidebar.button("Summarise the conversation", key="summarise")
+ if summarise_button:
+     st.sidebar.write("Nice chatting with you my friend ❤️:\n\n" + st.session_state['conversation'].memory.buffer)
+
+ def getresponse(userInput, api_key):
+     # Lazily create the conversation chain the first time the user sends a message
+     if st.session_state['conversation'] is None:
+         llm = OpenAI(
+             temperature=0,
+             openai_api_key=api_key,
+             model_name='gpt-3.5-turbo-instruct'  # 'text-davinci-003' is deprecated, so we use OpenAI's recommended model
+         )
+
+         st.session_state['conversation'] = ConversationChain(
+             llm=llm,
+             verbose=True,
+             memory=ConversationSummaryMemory(llm=llm)
+         )
+
+     response = st.session_state['conversation'].predict(input=userInput)
+     print(st.session_state['conversation'].memory.buffer)
+     return response
+
+ response_container = st.container()
+ # Here we will have a container for the user input text box
+ container = st.container()
+
+ with container:
+     with st.form(key='my_form', clear_on_submit=True):
+         user_input = st.text_area("Your question goes here:", key='input', height=100)
+         submit_button = st.form_submit_button(label='Send')
+
+         if submit_button:
+             st.session_state['messages'].append(user_input)
+             model_response = getresponse(user_input, st.session_state['API_Key'])
+             st.session_state['messages'].append(model_response)
+
+             with response_container:
+                 # Messages alternate user / AI, so even indices are the user's turns
+                 for i in range(len(st.session_state['messages'])):
+                     if (i % 2) == 0:
+                         message(st.session_state['messages'][i], is_user=True, key=str(i) + '_user')
+                     else:
+                         message(st.session_state['messages'][i], key=str(i) + '_AI')
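
A note on running this demo (my reading of the file, not stated anywhere in the commit): `app.py` is a Streamlit app, so it is launched with `streamlit run app.py` rather than `python app.py`, and it expects the `streamlit`, `streamlit_chat`, `langchain`, and `langchain-openai` packages to be installed. The OpenAI API key is supplied at runtime through the sidebar text input, which is why no key needs to be hardcoded in the source.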
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "architectures": [
+     "MixtralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 6144,
+   "initializer_range": 0.02,
+   "intermediate_size": 16384,
+   "max_position_embeddings": 65536,
+   "model_type": "mixtral",
+   "num_attention_heads": 48,
+   "num_experts_per_tok": 2,
+   "num_hidden_layers": 56,
+   "num_key_value_heads": 8,
+   "num_local_experts": 8,
+   "output_router_logits": false,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000,
+   "router_aux_loss_coef": 0.001,
+   "router_jitter_noise": 0.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.40.0.dev0",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
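
This config is what makes the checkpoint a sparse MoE: each layer has 8 expert MLPs (`num_local_experts`), of which only 2 run per token (`num_experts_per_tok`). A rough sketch of what that implies for parameter counts, reading the values straight from this config (my own back-of-the-envelope, not part of the commit; `AutoConfig.from_pretrained` fetches the config from the Hub):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistral-community/Mixtral-8x22B-v0.1")

# Each expert MLP has three hidden_size x intermediate_size matrices (w1, w2, w3).
per_expert = 3 * cfg.hidden_size * cfg.intermediate_size

total_experts = cfg.num_hidden_layers * cfg.num_local_experts * per_expert
active_experts = cfg.num_hidden_layers * cfg.num_experts_per_tok * per_expert

print(f"expert parameters, total:            ~{total_experts / 1e9:.0f}B")
print(f"expert parameters, active per token: ~{active_experts / 1e9:.0f}B")
```

Attention, embedding, and router weights add a few billion more on top, which is how the model reaches roughly 141B parameters in total while activating only about 39B per token.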
convert.py ADDED
@@ -0,0 +1,278 @@
+ # Copyright 2023 Mistral AI and The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ import argparse
+ import json
+ import os
+
+ import torch
+ from safetensors.torch import load_file
+
+ from transformers import (
+     MixtralConfig,
+     MixtralForCausalLM,
+ )
+
+ """
+ Sample usage:
+
+ ```
+ python convert.py \
+     --input-dir /path/to/downloaded/mixtral/weights --model-size 22B --output-dir /output/path
+ ```
+
+ Thereafter, models can be loaded via:
+
+ ```py
+ from transformers import MixtralForCausalLM
+
+ model = MixtralForCausalLM.from_pretrained("/output/path")
+ ```
+
+ Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions
+ come in several checkpoints, they each contain a part of each weight of the model, so we need to load them all in RAM).
+ """
+
+ def compute_intermediate_size(n, ffn_dim_multiplier=1, multiple_of=256):
+     return multiple_of * ((int(ffn_dim_multiplier * int(8 * n / 3)) + multiple_of - 1) // multiple_of)
+
+ def read_json(path):
+     with open(path, "r") as f:
+         return json.load(f)
+
+ def write_json(text, path):
+     with open(path, "w") as f:
+         json.dump(text, f)
+
+ def write_model(model_path, input_base_path, model_size, safe_serialization=True):
+     os.makedirs(model_path, exist_ok=True)
+
+     params = read_json(os.path.join(input_base_path, "params.json"))
+     num_shards = 1
+
+     # For some reason this is a string in the params.json
+     sliding_window = int(params["sliding_window"]) if "sliding_window" in params else None
+     base = params.get("rope_theta", 10000.0)
+     vocab_size = params["vocab_size"]
+
+     if model_size == "7B":
+         dim = params["hidden_size"]
+         max_position_embeddings = 4096 * 8
+         num_local_experts = params["num_local_experts"]
+         ffn_dim = params["intermediate_size"]
+         n_layers = params["num_hidden_layers"]
+         n_heads = params["num_attention_heads"]
+         n_heads_per_shard = n_heads // num_shards
+         dims_per_head = dim // n_heads
+         if "num_key_value_heads" in params:
+             num_key_value_heads = params["num_key_value_heads"]  # for GQA / MQA
+             num_local_key_value_heads = num_key_value_heads // num_shards
+             key_value_dim = dims_per_head * num_local_key_value_heads
+         else:  # compatibility with other checkpoints
+             num_key_value_heads = n_heads
+             num_local_key_value_heads = n_heads_per_shard
+             key_value_dim = dim
+         rms_norm_eps = params["rms_norm_eps"]
+     elif model_size == "22B":
+         dim = params["dim"]
+         max_position_embeddings = params["max_seq_len"]
+         num_local_experts = params["moe"]["num_experts"]
+         ffn_dim = params["hidden_dim"]
+         n_layers = params["n_layers"]
+         n_heads = params["n_heads"]
+         n_heads_per_shard = n_heads // num_shards
+         dims_per_head = dim // n_heads
+         if "n_kv_heads" in params:
+             num_key_value_heads = params["n_kv_heads"]  # for GQA / MQA
+             num_local_key_value_heads = num_key_value_heads // num_shards
+             key_value_dim = dims_per_head * num_local_key_value_heads
+         else:  # compatibility with other checkpoints
+             num_key_value_heads = n_heads
+             num_local_key_value_heads = n_heads_per_shard
+             key_value_dim = dim
+         rms_norm_eps = params["norm_eps"]
+     else:
+         raise ValueError(f"Unknown model size: {model_size}")
+
+     # permute for sliced rotary
+     def permute(w, n_heads=n_heads, dim1=dim, dim2=dim):
+         return w.view(n_heads, dim1 // n_heads // 2, 2, dim2).transpose(1, 2).reshape(dim1, dim2)
+
+     print(f"Fetching all parameters from the checkpoint at \"{input_base_path}\"...")
+     # Load weights
+     if model_size == "7B":
+         loaded = [
+             torch.load(os.path.join(input_base_path, f"consolidated.{i:02d}.pt"), map_location="cpu") for i in range(8)
+         ]
+         merged_state_dict = {}
+         for state_dict in loaded:
+             merged_state_dict.update(state_dict)
+     elif model_size == "22B":
+         merged_state_dict = load_file(os.path.join(input_base_path, "consolidated.safetensors"))
+     print("Parameters load finished.")
+
+     state_dict = {}
+     for layer_i in range(n_layers):
+         print(f"At layer {layer_i}...")
+         # Sharded
+         # Note that attention.w{q,k,v,o}, feed_forward.w[1,2,3], attention_norm.weight and ffn_norm.weight share
+         # the same storage object; saving attention_norm and ffn_norm would save other weights too, which is
+         # redundant as other weights will be stitched from multiple shards. To avoid that, they are cloned.
+
+         state_dict.update(
+             {
+                 f"model.layers.{layer_i}.input_layernorm.weight": merged_state_dict[
+                     f"layers.{layer_i}.attention_norm.weight"
+                 ].clone(),
+                 f"model.layers.{layer_i}.post_attention_layernorm.weight": merged_state_dict[
+                     f"layers.{layer_i}.ffn_norm.weight"
+                 ].clone(),
+             }
+         )
+
+         state_dict[f"model.layers.{layer_i}.self_attn.q_proj.weight"] = permute(
+             merged_state_dict[f"layers.{layer_i}.attention.wq.weight"]
+             .view(n_heads_per_shard, dims_per_head, dim)
+             .reshape(dim, dim)
+         )
+         state_dict[f"model.layers.{layer_i}.self_attn.k_proj.weight"] = permute(
+             merged_state_dict[f"layers.{layer_i}.attention.wk.weight"]
+             .view(num_local_key_value_heads, dims_per_head, dim)
+             .reshape(key_value_dim, dim),
+             num_key_value_heads,
+             key_value_dim,
+             dim,
+         )
+         state_dict[f"model.layers.{layer_i}.self_attn.v_proj.weight"] = (
+             merged_state_dict[f"layers.{layer_i}.attention.wv.weight"]
+             .view(num_local_key_value_heads, dims_per_head, dim)
+             .reshape(key_value_dim, dim)
+         )
+
+         state_dict[f"model.layers.{layer_i}.self_attn.o_proj.weight"] = merged_state_dict[
+             f"layers.{layer_i}.attention.wo.weight"
+         ]
+
+         if model_size == "7B":
+             w1 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w1"]
+             w2 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w2"]
+             w3 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w3"]
+
+             experts_w1 = [
+                 w1[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
+                 for expert_idx in range(num_local_experts)
+             ]
+
+             for idx, expert_block in enumerate(experts_w1):
+                 expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w1"
+                 state_dict[expert_key + ".weight"] = expert_block.clone()
+
+             experts_w2 = [
+                 w2[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
+                 for expert_idx in range(num_local_experts)
+             ]
+
+             for idx, expert_block in enumerate(experts_w2):
+                 expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w2"
+                 state_dict[expert_key + ".weight"] = expert_block.T.clone().contiguous()
+
+             experts_w3 = [
+                 w3[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
+                 for expert_idx in range(num_local_experts)
+             ]
+
+             for idx, expert_block in enumerate(experts_w3):
+                 expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w3"
+                 state_dict[expert_key + ".weight"] = expert_block.clone()
+
+             state_dict[f"model.layers.{layer_i}.block_sparse_moe.gate.weight"] = merged_state_dict[
+                 f"layers.{layer_i}.block_sparse_moe.gate.weight"
+             ]
+         elif model_size == "22B":
+             for expert_i in range(num_local_experts):
+                 w1 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w1.weight"]
+                 w2 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w2.weight"]
+                 w3 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w3.weight"]
+                 state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w1.weight"] = w1.contiguous().clone()
+                 state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w2.weight"] = w2.contiguous().clone()
+                 state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w3.weight"] = w3.contiguous().clone()
+             state_dict[f"model.layers.{layer_i}.block_sparse_moe.gate.weight"] = merged_state_dict[
+                 f"layers.{layer_i}.feed_forward.gate.weight"
+             ]
+
+     state_dict.update(
+         {
+             "model.norm.weight": merged_state_dict["norm.weight"],
+             "model.embed_tokens.weight": merged_state_dict["tok_embeddings.weight"],
+             "lm_head.weight": merged_state_dict["output.weight"],
+         }
+     )
+
+     config_additional_kwargs = {}
+     if model_size == "22B":
+         config_additional_kwargs["num_experts_per_tok"] = params["moe"]["num_experts_per_tok"]
+     config = MixtralConfig(
+         hidden_size=dim,
+         intermediate_size=ffn_dim,
+         num_attention_heads=n_heads,
+         num_hidden_layers=n_layers,
+         rms_norm_eps=rms_norm_eps,
+         num_key_value_heads=num_key_value_heads,
+         vocab_size=vocab_size,
+         rope_theta=base,
+         max_position_embeddings=max_position_embeddings,
+         sliding_window=sliding_window,
+         num_local_experts=num_local_experts,
+         **config_additional_kwargs,
+     )
+
+     print("Loading the checkpoint in a Mixtral model.")
+     with torch.device("meta"):
+         model = MixtralForCausalLM(config)
+     # Avoid saving this as part of the config.
+     del model.config._name_or_path
+     model.config.torch_dtype = torch.bfloat16
+     print("Saving in the Transformers format.")
+
+     model.load_state_dict(state_dict, strict=True, assign=True)
+
+     for n, p in model.named_parameters():
+         assert p.device.type != "meta", f"{n} has not been loaded!"
+
+     model.save_pretrained(model_path, safe_serialization=safe_serialization)
+
+ def main():
+     parser = argparse.ArgumentParser()
+     parser.add_argument(
+         "--input-dir",
+         help="Location of Mixtral weights, which contains tokenizer.model and model folders",
+         required=True,
+     )
+     parser.add_argument(
+         "--model-size",
+         choices=["7B", "22B"],
+         help="Size of the Mixtral checkpoint to convert: 7B for Mixtral-8x7B, 22B for Mixtral-8x22B.",
+         default="7B",
+     )
+     parser.add_argument("--output-dir", help="Location to write HF model", required=True)
+     parser.add_argument("--safe-serialization", type=bool, default=True, help="Whether or not to save using `safetensors`.")
+     args = parser.parse_args()
+     write_model(
+         model_path=args.output_dir,
+         input_base_path=args.input_dir,
+         model_size=args.model_size,
+         safe_serialization=args.safe_serialization,
+     )
+
+ if __name__ == "__main__":
+     main()
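
The `permute` helper above is the subtle part of the conversion: within each attention head, it reorders rows from the interleaved rotary layout of the original checkpoint to the split-half layout `transformers` expects. A tiny standalone sketch with toy sizes (my own example, not part of the commit) makes the reordering visible:

```python
import torch

# Same reshape trick as convert.py's `permute`, on a toy 2-head, dim-8 weight.
def permute(w, n_heads, dim1, dim2):
    return w.view(n_heads, dim1 // n_heads // 2, 2, dim2).transpose(1, 2).reshape(dim1, dim2)

w = torch.arange(8 * 8).view(8, 8)
# Within each head, rows (0, 1, 2, 3) come out as (0, 2, 1, 3):
# even-indexed members of each rotary pair first, then odd-indexed ones.
print(permute(w, n_heads=2, dim1=8, dim2=8)[:, 0].tolist())
# [0, 16, 8, 24, 32, 48, 40, 56]
```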
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.40.0.dev0"
+ }
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
requirements.txt ADDED
Binary file (202 Bytes).
 
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": null,
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
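
A quick way to confirm the tokenizer behaves as this config describes (`add_bos_token: true`, BOS id 1, a `LlamaTokenizer`-class tokenizer), a small sketch assuming the model id from the README:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistral-community/Mixtral-8x22B-v0.1")

print(tok.bos_token, tok.eos_token, tok.unk_token)  # <s> </s> <unk>
# add_bos_token=true means every encoded sequence starts with the BOS id 1:
print(tok("Hello").input_ids[0])  # 1
```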