Sagar Krishna committed on
Commit 0505e9b
1 Parent(s): ef6d003

Update README.md

Files changed (1):
  README.md +684 -131

README.md CHANGED
@@ -1,49 +1,326 @@
  ---
- library_name: transformers
- tags: []
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->


  ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

  import ctranslate2
  import transformers

- model_id = "Llama-3-8B-Text2SQL_Instruct-ct2-int8_float16"
- model = ctranslate2.Generator(model_id, device="cpu")
  tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

  messages = [
@@ -68,163 +345,439 @@ results = model.generate_batch([input_tokens], include_prompt_in_result=False, m
  output = tokenizer.decode(results[0].sequences_ids[0])

  print(output)


- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

- ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics

- #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

- #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]

- #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

- ### Results

- [More Information Needed]

- #### Summary


- ## Model Examination [optional]

- <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]

- ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

- ## Technical Specifications [optional]

- ### Model Architecture and Objective

- [More Information Needed]

- ### Compute Infrastructure

- [More Information Needed]

- #### Hardware

- [More Information Needed]

- #### Software

- [More Information Needed]

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**

- [More Information Needed]

- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
  ---
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-3
+ - ctranslate2
+ - quantization
+ - int8
+ - float16
+ license: other
+ license_name: llama3
+ license_link: LICENSE
+ extra_gated_prompt: >-
+ ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
+
+ Meta Llama 3 Version Release Date: April 18, 2024
+
+ "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
+ Llama Materials set forth herein.
+
+ "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
+ distributed by Meta at https://llama.meta.com/get-started/.
+
+ "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
+ this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
+ regulations to provide legal consent and that has legal authority to bind your employer or such other
+ person or entity if you are entering in this Agreement on their behalf.
+
+ "Meta Llama 3" means the foundational large language models and software and algorithms, including
+ machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
+ fine-tuning enabling code and other elements of the foregoing distributed by Meta at
+ https://llama.meta.com/llama-downloads.
+
+ "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
+ portion thereof) made available under this Agreement.
+
+ "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
+ principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
+ outside of the EEA or Switzerland).
+
+ 1. License Rights and Redistribution.
+
+ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
+ limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
+ Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
+ Llama Materials.
+
+ b. Redistribution and Use.
+
+ i. If you distribute or make available the Llama Materials (or any derivative works
+ thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
+ a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
+ Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
+ use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
+ distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
+ name.
+
+ ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
+ of an integrated end user product, then Section 2 of this Agreement will not apply to you.
+
+ iii. You must retain in all copies of the Llama Materials that you distribute the following
+ attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
+ licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
+ Reserved.”
+
+ iv. Your use of the Llama Materials must comply with applicable laws and regulations
+ (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
+ Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
+ reference into this Agreement.
+
+ v. You will not use the Llama Materials or any output or results of the Llama Materials to
+ improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
+
+ 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
+ of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
+ million monthly active users in the preceding calendar month, you must request a license from Meta,
+ which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
+ rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
+
+ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
+ OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
+ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
+ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
+ DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
+ ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
+ RESULTS.
+
+ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
+ LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
+ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
+ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
+ OF THE POSSIBILITY OF ANY OF THE FOREGOING.
+
+ 5. Intellectual Property.
+
+ a. No trademark licenses are granted under this Agreement, and in connection with the Llama
+ Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
+ or any of its affiliates, except as required for reasonable and customary use in describing and
+ redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
+ use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
+ comply with Meta’s brand guidelines (currently accessible at
+ https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
+ of the Mark will inure to the benefit of Meta.
+
+ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
+ respect to any derivative works and modifications of the Llama Materials that are made by you, as
+ between you and Meta, you are and will be the owner of such derivative works and modifications.
+
+ c. If you institute litigation or other proceedings against Meta or any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
+ results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
+ rights owned or licensable by you, then any licenses granted to you under this Agreement shall
+ terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
+ harmless Meta from and against any claim by any third party arising out of or related to your use or
+ distribution of the Llama Materials.
+
+ 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
+ Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
+ accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
+ breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
+ and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
+ Agreement.
+
+ 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
+ the State of California without regard to choice of law principles, and the UN Convention on Contracts
+ for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
+ exclusive jurisdiction of any dispute arising out of this Agreement.
+
+ ### Meta Llama 3 Acceptable Use Policy
+
+ Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
+ access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
+ this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
+
+ #### Prohibited Uses
+
+ We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
+ others to use, Meta Llama 3 to:
+ 1. Violate the law or others’ rights, including to:
+ 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
+ 1. Violence or terrorism
+ 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
+ 3. Human trafficking, exploitation, and sexual violence
+ 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
+ 5. Sexual solicitation
+ 6. Any other criminal activity
+ 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
+ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
+ 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
+ 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
+ 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
+ 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
+ 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
+ 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
+ 2. Guns and illegal weapons (including weapon development)
+ 3. Illegal drugs and regulated/controlled substances
+ 4. Operation of critical infrastructure, transportation technologies, or heavy machinery
+ 5. Self-harm or harm to others, including suicide, cutting, and eating disorders
+ 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
+ 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
+ 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
+ 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
+ 3. Generating, promoting, or further distributing spam
+ 4. Impersonating another individual without consent, authorization, or legal right
+ 5. Representing that the use of Meta Llama 3 or outputs are human-generated
+ 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
+ 4. Fail to appropriately disclose to end users any known dangers of your AI system
+
+ Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
+ of this Policy through one of the following means:
+ * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
+ * Reporting risky content generated by the model:
+ developers.facebook.com/llama_output_feedback
+ * Reporting bugs and security concerns: facebook.com/whitehat/info
+ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com
+ extra_gated_fields:
+ First Name: text
+ Last Name: text
+ Date of birth: date_picker
+ Country: country
+ Affiliation: text
+ geo: ip_location
+ By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
+ extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
+ extra_gated_button_content: Submit
+ widget:
+ - example_title: Hello
+ messages:
+ - role: user
+ content: Hey my name is Julien! How are you?
+ - example_title: Winter holidays
+ messages:
+ - role: system
+ content: You are a helpful and honest assistant. Please, respond concisely and truthfully.
+ - role: user
+ content: Can you recommend a good destination for Winter holidays?
+ - example_title: Programming assistant
+ messages:
+ - role: system
+ content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully.
+ - role: user
+ content: Write a function that computes the nth fibonacci number.
+ inference:
+ parameters:
+ max_new_tokens: 300
+ stop:
+ - <|end_of_text|>
+ - <|eot_id|>
  ---

+ ## meta-llama/Meta-Llama-3-8B-Instruct for CTranslate2

+ **This model is a quantized version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), quantized to int8_float16, and can be used with [CTranslate2](https://github.com/OpenNMT/CTranslate2).**

+ ## Conversion details

+ The original model was converted in April 2024 with the following command:
+ ```
+ ct2-transformers-converter --model Path\To\Local\meta-llama\Meta-Llama-3-8B-Instruct \
+ --quantization int8_float16 --output_dir Meta-Llama-3-8B-Instruct-ct2-int8_float16
+ ```
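+
+ The same conversion can also be scripted through CTranslate2's Python API. The following is a minimal sketch under the same assumptions as the command above (the original weights are available locally or from the Hub after accepting the gated license), not the exact recipe used for this repository:
+
+ ```python
+ from ctranslate2.converters import TransformersConverter
+
+ # Same inputs as the CLI call above: source model and int8_float16 weight quantization.
+ converter = TransformersConverter("meta-llama/Meta-Llama-3-8B-Instruct")
+ converter.convert("Meta-Llama-3-8B-Instruct-ct2-int8_float16", quantization="int8_float16")
+ ```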

  ## Model Details

+ Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
+
+ **Model developers** Meta
+
+ **Variations** Llama 3 comes in two sizes, 8B and 70B parameters, in pre-trained and instruction tuned variants.
+
+ **Input** Models input text only.
+
+ **Output** Models generate text and code only.
+
+ **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
+
+ <table>
+ <tr>
+ <td>
+ </td>
+ <td><strong>Training Data</strong>
+ </td>
+ <td><strong>Params</strong>
+ </td>
+ <td><strong>Context length</strong>
+ </td>
+ <td><strong>GQA</strong>
+ </td>
+ <td><strong>Token count</strong>
+ </td>
+ <td><strong>Knowledge cutoff</strong>
+ </td>
+ </tr>
+ <tr>
+ <td rowspan="2" >Llama 3
+ </td>
+ <td rowspan="2" >A new mix of publicly available online data.
+ </td>
+ <td>8B
+ </td>
+ <td>8k
+ </td>
+ <td>Yes
+ </td>
+ <td rowspan="2" >15T+
+ </td>
+ <td>March, 2023
+ </td>
+ </tr>
+ <tr>
+ <td>70B
+ </td>
+ <td>8k
+ </td>
+ <td>Yes
+ </td>
+ <td>December, 2023
+ </td>
+ </tr>
+ </table>
+
+ **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
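+
+ For intuition: GQA lets several query heads share one key/value head, shrinking the KV cache relative to standard multi-head attention. Below is a minimal sketch, not part of the official card; it omits the causal mask and KV cache, and the 32 query / 8 KV head counts are an assumption matching the published 8B configuration:
+
+ ```python
+ import torch
+
+ def grouped_query_attention(q, k, v):
+     # q: (batch, n_q_heads, seq, dim); k, v: (batch, n_kv_heads, seq, dim)
+     group = q.shape[1] // k.shape[1]       # query heads served by each KV head
+     k = k.repeat_interleave(group, dim=1)  # broadcast each KV head to its query group
+     v = v.repeat_interleave(group, dim=1)
+     scores = (q @ k.transpose(-1, -2)) / (q.shape[-1] ** 0.5)
+     return torch.softmax(scores, dim=-1) @ v
+
+ # Toy shapes: 32 query heads sharing 8 KV heads (4 query heads per KV head).
+ out = grouped_query_attention(torch.randn(1, 32, 16, 128),
+                               torch.randn(1, 8, 16, 128),
+                               torch.randn(1, 8, 16, 128))
+ ```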

+ **Model Release Date** April 18, 2024.
+
+ **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
+
+ **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
+
+ **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
+
+ ## Intended Use
+
+ **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
+
+ **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
+
+ **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
+
+ ## How to use
+
+ This repository is for use with [CTranslate2](https://github.com/OpenNMT/CTranslate2).
+
+ ### Use with CTranslate2
+
+ This example code is adapted from [CTranslate2_transformers](https://opennmt.net/CTranslate2/guides/transformers.html#mpt) and [tokenizer AutoTokenizer](https://huggingface.co/docs/transformers/main_classes/tokenizer).
+ More detailed information about the `generate_batch` method can be found at [CTranslate2_Generator.generate_batch](https://opennmt.net/CTranslate2/python/ctranslate2.Generator.html#ctranslate2.Generator.generate_batch).
+
+ ```python
  import ctranslate2
  import transformers

+ model_id = "avans06/Meta-Llama-3-8B-Instruct-ct2-int8_float16"
+ model = ctranslate2.Generator(model_id, device="auto", compute_type="int8_float16")
  tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

  messages = [
  # ... (unchanged lines collapsed in this diff view; see the hunk header above for the generate_batch call) ...
  output = tokenizer.decode(results[0].sequences_ids[0])

  print(output)
+ ```
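+
+ Because the diff collapses the middle of the block above, here is a self-contained sketch of the full flow for reference. The `include_prompt_in_result=False` argument mirrors the hunk header, while `max_length` and the stop tokens follow the `inference` parameters in the front matter; the `snapshot_download` step and the prompt are illustrative assumptions (`ctranslate2.Generator` expects a local directory rather than a Hub repo id):
+
+ ```python
+ import ctranslate2
+ import transformers
+ from huggingface_hub import snapshot_download
+
+ model_id = "avans06/Meta-Llama-3-8B-Instruct-ct2-int8_float16"
+ model_dir = snapshot_download(model_id)  # fetch the converted weights to a local directory
+
+ model = ctranslate2.Generator(model_dir, device="auto", compute_type="int8_float16")
+ tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "Write a haiku about quantization."},
+ ]
+
+ # Render the chat template to a prompt string, then to the token strings CTranslate2 consumes.
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt, add_special_tokens=False))
+
+ results = model.generate_batch(
+     [input_tokens],
+     include_prompt_in_result=False,
+     max_length=300,
+     end_token=["<|end_of_text|>", "<|eot_id|>"],
+ )
+ print(tokenizer.decode(results[0].sequences_ids[0]))
+ ```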
+
+ ## Hardware and Software
+
+ **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
+
+ **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
+
+ <table>
+ <tr>
+ <td>
+ </td>
+ <td><strong>Time (GPU hours)</strong>
+ </td>
+ <td><strong>Power Consumption (W)</strong>
+ </td>
+ <td><strong>Carbon Emitted (tCO2eq)</strong>
+ </td>
+ </tr>
+ <tr>
+ <td>Llama 3 8B
+ </td>
+ <td>1.3M
+ </td>
+ <td>700
+ </td>
+ <td>390
+ </td>
+ </tr>
+ <tr>
+ <td>Llama 3 70B
+ </td>
+ <td>6.4M
+ </td>
+ <td>700
+ </td>
+ <td>1900
+ </td>
+ </tr>
+ <tr>
+ <td>Total
+ </td>
+ <td>7.7M
+ </td>
+ <td>
+ </td>
+ <td>2290
+ </td>
+ </tr>
+ </table>
+
+ **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
+
+ ## Training Data
+
+ **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
+
+ **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
+
+ ## Benchmarks
+
+ In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
+
+ ### Base pretrained models
+
+ <table>
+ <tr>
+ <td><strong>Category</strong>
+ </td>
+ <td><strong>Benchmark</strong>
+ </td>
+ <td><strong>Llama 3 8B</strong>
+ </td>
+ <td><strong>Llama2 7B</strong>
+ </td>
+ <td><strong>Llama2 13B</strong>
+ </td>
+ <td><strong>Llama 3 70B</strong>
+ </td>
+ <td><strong>Llama2 70B</strong>
+ </td>
+ </tr>
+ <tr>
+ <td rowspan="6" >General
+ </td>
+ <td>MMLU (5-shot)
+ </td>
+ <td>66.6
+ </td>
+ <td>45.7
+ </td>
+ <td>53.8
+ </td>
+ <td>79.5
+ </td>
+ <td>69.7
+ </td>
+ </tr>
+ <tr>
+ <td>AGIEval English (3-5 shot)
+ </td>
+ <td>45.9
+ </td>
+ <td>28.8
+ </td>
+ <td>38.7
+ </td>
+ <td>63.0
+ </td>
+ <td>54.8
+ </td>
+ </tr>
+ <tr>
+ <td>CommonSenseQA (7-shot)
+ </td>
+ <td>72.6
+ </td>
+ <td>57.6
+ </td>
+ <td>67.6
+ </td>
+ <td>83.8
+ </td>
+ <td>78.7
+ </td>
+ </tr>
+ <tr>
+ <td>Winogrande (5-shot)
+ </td>
+ <td>76.1
+ </td>
+ <td>73.3
+ </td>
+ <td>75.4
+ </td>
+ <td>83.1
+ </td>
+ <td>81.8
+ </td>
+ </tr>
+ <tr>
+ <td>BIG-Bench Hard (3-shot, CoT)
+ </td>
+ <td>61.1
+ </td>
+ <td>38.1
+ </td>
+ <td>47.0
+ </td>
+ <td>81.3
+ </td>
+ <td>65.7
+ </td>
+ </tr>
+ <tr>
+ <td>ARC-Challenge (25-shot)
+ </td>
+ <td>78.6
+ </td>
+ <td>53.7
+ </td>
+ <td>67.6
+ </td>
+ <td>93.0
+ </td>
+ <td>85.3
+ </td>
+ </tr>
+ <tr>
+ <td>Knowledge reasoning
+ </td>
+ <td>TriviaQA-Wiki (5-shot)
+ </td>
+ <td>78.5
+ </td>
+ <td>72.1
+ </td>
+ <td>79.6
+ </td>
+ <td>89.7
+ </td>
+ <td>87.5
+ </td>
+ </tr>
+ <tr>
+ <td rowspan="4" >Reading comprehension
+ </td>
+ <td>SQuAD (1-shot)
+ </td>
+ <td>76.4
+ </td>
+ <td>72.2
+ </td>
+ <td>72.1
+ </td>
+ <td>85.6
+ </td>
+ <td>82.6
+ </td>
+ </tr>
+ <tr>
+ <td>QuAC (1-shot, F1)
+ </td>
+ <td>44.4
+ </td>
+ <td>39.6
+ </td>
+ <td>44.9
+ </td>
+ <td>51.1
+ </td>
+ <td>49.4
+ </td>
+ </tr>
+ <tr>
+ <td>BoolQ (0-shot)
+ </td>
+ <td>75.7
+ </td>
+ <td>65.5
+ </td>
+ <td>66.9
+ </td>
+ <td>79.0
+ </td>
+ <td>73.1
+ </td>
+ </tr>
+ <tr>
+ <td>DROP (3-shot, F1)
+ </td>
+ <td>58.4
+ </td>
+ <td>37.9
+ </td>
+ <td>49.8
+ </td>
+ <td>79.7
+ </td>
+ <td>70.2
+ </td>
+ </tr>
+ </table>
+
+ ### Instruction tuned models
+
+ <table>
+ <tr>
+ <td><strong>Benchmark</strong>
+ </td>
+ <td><strong>Llama 3 8B</strong>
+ </td>
+ <td><strong>Llama 2 7B</strong>
+ </td>
+ <td><strong>Llama 2 13B</strong>
+ </td>
+ <td><strong>Llama 3 70B</strong>
+ </td>
+ <td><strong>Llama 2 70B</strong>
+ </td>
+ </tr>
+ <tr>
+ <td>MMLU (5-shot)
+ </td>
+ <td>68.4
+ </td>
+ <td>34.1
+ </td>
+ <td>47.8
+ </td>
+ <td>82.0
+ </td>
+ <td>52.9
+ </td>
+ </tr>
+ <tr>
+ <td>GPQA (0-shot)
+ </td>
+ <td>34.2
+ </td>
+ <td>21.7
+ </td>
+ <td>22.3
+ </td>
+ <td>39.5
+ </td>
+ <td>21.0
+ </td>
+ </tr>
+ <tr>
+ <td>HumanEval (0-shot)
+ </td>
+ <td>62.2
+ </td>
+ <td>7.9
+ </td>
+ <td>14.0
+ </td>
+ <td>81.7
+ </td>
+ <td>25.6
+ </td>
+ </tr>
+ <tr>
+ <td>GSM-8K (8-shot, CoT)
+ </td>
+ <td>79.6
+ </td>
+ <td>25.7
+ </td>
+ <td>77.4
+ </td>
+ <td>93.0
+ </td>
+ <td>57.5
+ </td>
+ </tr>
+ <tr>
+ <td>MATH (4-shot, CoT)
+ </td>
+ <td>30.0
+ </td>
+ <td>3.8
+ </td>
+ <td>6.7
+ </td>
+ <td>50.4
+ </td>
+ <td>11.6
+ </td>
+ </tr>
+ </table>
+
+ ### Responsibility & Safety
+
+ We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
+
+ Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
+
+ Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
+
+ As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
+
+ #### Llama 3-Instruct
+
+ As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
+
+ <span style="text-decoration:underline;">Safety</span>
+
+ For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
+
+ <span style="text-decoration:underline;">Refusals</span>
+
+ In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
+
+ We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
+
+ #### Responsible release
+
+ In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
+
+ Misuse
+
+ If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
+
+ #### Critical risks
+
+ <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
+
+ We have conducted a two-fold assessment of the safety of the model in this area:
+
+ * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
+ * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
+
+ ### <span style="text-decoration:underline;">Cyber Security</span>
+
+ We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
+
+ ### <span style="text-decoration:underline;">Child Safety</span>
+
+ Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
+
+ ### Community
+
+ Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
+
+ Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
+
+ ## Ethical Considerations and Limitations
+
+ The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
+
+ But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
+
+ Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
+
+ ## Citation instructions
+
+ @article{llama3modelcard,
+   title={Llama 3 Model Card},
+   author={AI@Meta},
+   year={2024},
+   url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
+ }
+
+ ## Contributors
+
+ Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos