Files changed (1)
README.md (+10, -28)
@@ -4,24 +4,17 @@ tags:
 - orca
 - orca2
 - microsoft
-license: other
-license_name: microsoft-research-license
-license_link: LICENSE
 ---
 
 # Orca 2
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reasoning.
+Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response
+in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization.
+The model is designed to excel particularly in reasoning.
 
-Note that:
-
-1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack.
-2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task.
-3. Beyond reasoning, the model inherits capabilities and limitations of its base (LLAMA-2 base). We have already seen that the benefits of the Orca training can be applied to other base model too.
-
-We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs.
+We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
 
 ## What is Orca 2’s intended use(s)?
 
@@ -30,19 +23,18 @@ We make Orca 2's weights publicly available to support further research on the d
 
 ## How was Orca 2 evaluated?
 
-+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
-to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
++ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer to Section 6 and Appendix in the paper for details on evaluations.
 
 ## Model Details
 
-Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
-All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
+Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was moderated using the Microsoft Azure content filters.
+More details about the model can be found at: LINK to Tech Report
 
 Please refer to LLaMA-2 technical report for details on the model architecture.
 
 ## License
 
-Orca 2 is licensed under the [Microsoft Research License](LICENSE).
+Orca 2 is licensed under the [Microsoft Research License]().
 
 Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
 
@@ -129,6 +121,7 @@ tokenizer = transformers.AutoTokenizer.from_pretrained(
 system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
 user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
 
+# We use Chat Markup Language https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/includes/chat-markup-language.md#working-with-chat-markup-language-chatml
 prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
 
 inputs = tokenizer(prompt, return_tensors='pt')
@@ -218,6 +211,7 @@ tokenizer = transformers.AutoTokenizer.from_pretrained(
 system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
 user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."
 
+# We use Chat Markup Language https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/includes/chat-markup-language.md#working-with-chat-markup-language-chatml
 prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
 
 inputs = tokenizer(prompt, return_tensors='pt')
@@ -230,16 +224,4 @@ answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
 final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
 
 print(final_output)
-```
-
-## Citation
-```bibtex
-@misc{mitra2023orca,
-      title={Orca 2: Teaching Small Language Models How to Reason},
-      author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
-      year={2023},
-      eprint={2311.11045},
-      archivePrefix={arXiv},
-      primaryClass={cs.AI}
-}
 ```
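For reference, the code fragments touched by the last three hunks all belong to the model card's single inference example. Below is a minimal end-to-end sketch of that flow, assuming the microsoft/Orca-2-13b checkpoint (the `from_pretrained(` call is truncated in the hunk context, so the exact model id is not shown) and stubbing `should_filter_out`, which the diff references but never defines:

```python
import transformers

# Assumed checkpoint; the diff truncates the from_pretrained(...) call.
model_id = "microsoft/Orca-2-13b"

model = transformers.AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_fast=False)

def should_filter_out(text: str) -> bool:
    # Hypothetical stub: the card calls should_filter_out, but its
    # definition falls outside the hunks shown above.
    return False

system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"

# ChatML framing, per the comment the diff adds: each turn is wrapped in
# <|im_start|>{role}\n ... <|im_end|> markers, and the string ends with an
# open assistant turn for the model to complete.
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(inputs["input_ids"], max_new_tokens=256)

# Drop the echoed prompt tokens so only the assistant's reply is decoded,
# matching the new_output_ids / batch_decode lines in the final hunk.
new_output_ids = output_ids[:, inputs["input_ids"].shape[1]:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)

final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```

The slice after `generate` is what makes `batch_decode` return only the assistant's reply rather than the prompt plus the reply, which is why the card's snippet decodes `new_output_ids` instead of the raw generation output.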