Maximum2000 committed · Commit 9e6c538 (verified) · Parent: 4b3c11d

Update README.md

Files changed (1): README.md (+55 −3)

---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
---

## Model Summary

**Phi-3.5-mini ONNX Conversion**

The Phi-3.5-mini model has been converted to ONNX format. This conversion allows for:

- Improved compatibility across different platforms and frameworks
- Potential performance optimizations through ONNX Runtime
- Easier deployment in production environments

Users can now access the ONNX version of Phi-3.5-mini-instruct for their applications. The core capabilities and performance of the model remain unchanged.

For usage instructions and more details, please refer to the updated documentation.

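As a quick environment check before deploying, the snippet below (a minimal sketch, assuming only that the `onnxruntime` Python package is installed) lists which execution providers the local ONNX Runtime build can use:

```python
# List the execution providers (CPU, CUDA, DirectML, ...) available in this
# ONNX Runtime install; the ONNX model can then run on whichever backend fits.
import onnxruntime as ort

print(ort.__version__)
print(ort.get_available_providers())  # e.g. ['CPUExecutionProvider'] on a CPU-only build
```
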
ONNX Runtime C# bindings and samples: https://github.com/microsoft/onnxruntime/tree/main/csharp

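The repository linked above covers the C# bindings. For Python, a common route for Phi-3-family ONNX exports is the `onnxruntime-genai` package; the sketch below is a minimal, non-authoritative example. The model path is a placeholder for wherever the exported ONNX and tokenizer/configuration files live, and the calls follow the early onnxruntime-genai Python examples, so details may differ in newer releases.

```python
# pip install onnxruntime-genai   (or onnxruntime-genai-cuda / -directml)
# API as shown in the early onnxruntime-genai examples; newer releases may differ.
import onnxruntime_genai as og

# Placeholder path: point this at the folder holding the exported ONNX model
# and its tokenizer/configuration files.
model = og.Model("./phi-3.5-mini-instruct-onnx")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Phi-3.5-mini chat format; prompt taken from the widget example above.
prompt = ("<|user|>\nCan you provide ways to eat combinations of bananas "
          "and dragonfruits?<|end|>\n<|assistant|>\n")

params = og.GeneratorParams(model)
params.set_search_options(max_length=512)
params.input_ids = tokenizer.encode(prompt)

# Stream tokens to stdout as they are generated.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```
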
**Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct); [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)

## Intended Uses

### Primary Use Cases

The model is intended for commercial and research use in multiple languages. It is suited to general purpose AI systems and applications which require:

1) Memory/compute-constrained environments
2) Latency-bound scenarios
3) Strong reasoning (especially code, math and logic)

Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI-powered features.

### Use Case Considerations

Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***

## Release Notes

This is an update over the June 2024 instruction-tuned Phi-3 Mini release, based on valuable user feedback. The model used additional post-training data, leading to substantial gains in multilingual support, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test it in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.

You can find more details here: https://huggingface.co/microsoft/Phi-3.5-mini-instruct