TheBloke committed on
Commit 29d6440
1 Parent(s): 7454c2f

Initial GPTQ model commit

Files changed (1): README.md added (+362 lines)
---
inference: false
license: llama2
model-index:
- name: Phind-CodeLlama-34B-v2
  results:
  - dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - name: pass@1
      type: pass@1
      value: 73.8%
      verified: false
    task:
      type: text-generation
model_creator: Phind
model_link: https://huggingface.co/Phind/Phind-CodeLlama-34B-v2
model_name: CodeLlama 34B v2
model_type: llama
quantized_by: TheBloke
tags:
- code llama
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# CodeLlama 34B v2 - GPTQ
- Model creator: [Phind](https://huggingface.co/Phind)
- Original model: [CodeLlama 34B v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)

<!-- description start -->
## Description

This repo contains GPTQ model files for [Phind's CodeLlama 34B v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGML)
* [Phind's original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Phind

```
### System Prompt
{system_message}

### User Message
{prompt}

### Assistant

```

<!-- prompt-template end -->

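A small illustrative helper that fills in this template (the function name `build_phind_prompt` is ours, not part of any library):

```python
# Illustrative helper: assemble a Phind-format prompt from its two parts.
def build_phind_prompt(system_message: str, prompt: str) -> str:
    return (
        "### System Prompt\n"
        f"{system_message}\n\n"
        "### User Message\n"
        f"{prompt}\n\n"
        "### Assistant\n\n"
    )

print(build_phind_prompt("You are an intelligent programming assistant.",
                         "Implement a linked list in C++"))
```
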
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

All GPTQ files are made with AutoGPTQ.

<details>
  <summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16K+), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 17.69 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 20.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 18.98 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 18.33 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.54 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 14.14 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |

<!-- README_GPTQ.md-provided-files end -->

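Each branch also ships a `quantize_config.json` recording these parameters, which loaders read automatically (see the text-generation-webui section below). If you want to inspect a branch's settings before downloading the full model, here is a minimal sketch using `huggingface_hub`; the field names in the comment are the usual AutoGPTQ ones:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch only the quantisation config from a chosen branch and print it.
cfg_path = hf_hub_download(
    repo_id="TheBloke/Phind-CodeLlama-34B-v2-GPTQ",
    filename="quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",  # any branch from the table above
)
with open(cfg_path) as f:
    print(json.dumps(json.load(f), indent=2))  # expect bits, group_size, desc_act, damp_percent, ...
```
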
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Phind-CodeLlama-34B-v2-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
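
You can also fetch a whole branch programmatically; a minimal sketch using `huggingface_hub.snapshot_download` (the `local_dir` destination is just an example):

```python
from huggingface_hub import snapshot_download

# Download every file from one quantisation branch into a local directory.
snapshot_download(
    repo_id="TheBloke/Phind-CodeLlama-34B-v2-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # branch name, as in the table above
    local_dir="Phind-CodeLlama-34B-v2-GPTQ",  # example destination
)
```
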
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Phind-CodeLlama-34B-v2-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/Phind-CodeLlama-34B-v2-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Phind-CodeLlama-34B-v2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install "transformers>=4.32.0" "optimum>=1.12.0"
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```

### For CodeLlama models only: you must use Transformers 4.33.0 or later.

If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```

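To confirm the installed versions meet these requirements, a quick check using only the Python standard library (a minimal sketch; the strings are the PyPI package names):

```python
from importlib.metadata import version

# Print the installed version of each required package.
for pkg in ("transformers", "optimum", "auto-gptq"):
    print(pkg, version(pkg))
```
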
### You can then use the following code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Phind-CodeLlama-34B-v2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             torch_dtype=torch.float16,
                                             device_map="auto",
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

system_message = "You are an intelligent programming assistant."
prompt = "Tell me about AI"
prompt_template=f'''### System Prompt
{system_message}

### User Message
{prompt}

### Assistant

'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
# do_sample=True is needed for temperature to take effect
output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

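For example, if you serve this model with TGI, you can query the server over its REST API; a minimal sketch, assuming an instance already running locally on port 8080:

```python
import requests

# Query a locally running TGI server (assumed address; adjust to your deployment).
response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "### System Prompt\nYou are an intelligent programming assistant.\n\n"
                  "### User Message\nImplement a linked list in C++\n\n### Assistant\n\n",
        "parameters": {"max_new_tokens": 512, "temperature": 0.7, "top_p": 0.95},
    },
)
print(response.json()["generated_text"])
```
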
<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Phind's CodeLlama 34B v2


# **Phind-CodeLlama-34B-v2**
We've fine-tuned Phind-CodeLlama-34B-v1 on an additional 1.5B tokens of high-quality programming-related data, achieving **73.8% pass@1** on HumanEval. It's the current state-of-the-art amongst open-source models.

Furthermore, this model is **instruction-tuned** on the Alpaca/Vicuna format to be steerable and easy to use.

More details can be found on our [blog post](https://www.phind.com/blog/code-llama-beats-gpt4).

## Model Details
This model is fine-tuned from Phind-CodeLlama-34B-v1 and achieves **73.8% pass@1** on HumanEval.

Phind-CodeLlama-34B-v2 is **multi-lingual** and is proficient in Python, C/C++, TypeScript, Java, and more.

## Dataset Details
We fine-tuned on a proprietary dataset of 1.5B tokens of high-quality programming problems and solutions. This dataset consists of instruction-answer pairs instead of code completion examples, making it structurally different from HumanEval. LoRA was not used -- both models are native fine-tunes. We used DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in 15 hours on 32 A100-80GB GPUs, with a sequence length of 4096 tokens.

## How to Get Started with the Model

Make sure to install Transformers from the main git branch:

```bash
pip install git+https://github.com/huggingface/transformers.git
```

## How to Prompt the Model
This model accepts the Alpaca/Vicuna instruction format.

For example:

```
### System Prompt
You are an intelligent programming assistant.

### User Message
Implement a linked list in C++

### Assistant
...
```

## How to reproduce HumanEval Results

To reproduce our results:

```python
from transformers import AutoTokenizer, LlamaForCausalLM
from human_eval.data import write_jsonl, read_problems
from tqdm import tqdm

# initialize the model

model_path = "Phind/Phind-CodeLlama-34B-v2"
model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

# HumanEval helper

def generate_one_completion(prompt: str):
    tokenizer.pad_token = tokenizer.eos_token
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096)

    # Generate
    generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=384, do_sample=True, top_p=0.75, top_k=40, temperature=0.1)
    completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    completion = completion.replace(prompt, "").split("\n\n\n")[0]

    return completion

# perform HumanEval
problems = read_problems()

num_samples_per_task = 1
samples = [
    dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in tqdm(problems)
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# run `evaluate_functional_correctness samples.jsonl` in your HumanEval code sandbox
```

## Bias, Risks, and Limitations

This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.


## Training details

- **Hardware Type:** 32x A100-80GB
- **Hours used:** 480 GPU-hours
- **Cloud Provider:** AWS
- **Compute Region:** us-east-1