Update README.md
README.md

The provided OpenVINO™ IR model is compatible with:

* OpenVINO version 2024.1.0 and higher
* Optimum Intel 1.16.0 and higher
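
Both requirements can be checked from the active environment; a minimal sketch using standard package metadata (`openvino` and `optimum-intel` are assumed to be the PyPI distribution names these bullets refer to):

```
from importlib.metadata import version

# Query the installed package versions via standard package metadata.
print(version("openvino"))       # expect 2024.1.0 or higher
print(version("optimum-intel"))  # expect 1.16.0 or higher
```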

## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

```
pip install optimum[openvino]
```

2. Run model inference:

```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/mistral-7b-instrcut-v0.1-fp16-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
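
Because this is an instruction-tuned model, prompts can also be built with the tokenizer's chat template instead of raw text. A minimal sketch, assuming the tokenizer in this repository ships Mistral's chat template (the message content is illustrative):

```
# Format a conversation with the chat template, then generate a reply.
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
outputs = model.generate(model_inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
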
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
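
As one such option, the OpenVINO backend selects its inference device at load time; a sketch assuming a supported GPU is available and that this optimum-intel version exposes the documented `.to()` device API:

```
# Recompile the loaded model for an OpenVINO GPU device instead of the default CPU.
model.to("gpu")
```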

## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)

1. Install packages required for using OpenVINO GenAI:

```
pip install openvino-genai huggingface_hub
```

2. Download the model from the Hugging Face Hub:

```
import huggingface_hub as hf_hub

model_id = "OpenVINO/mistral-7b-instrcut-v0.1-fp16-ov"
model_path = "mistral-7b-instrcut-v0.1-fp16-ov"

hf_hub.snapshot_download(model_id, local_dir=model_path)
```

3. Run model inference:

```
import openvino_genai as ov_genai

device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?"))
```
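
`generate` also accepts generation controls; a short sketch, assuming this openvino-genai build supports the `max_new_tokens` keyword and the token-streaming callback shown in the upstream samples:

```
# Limit the reply length and stream subwords to stdout as they are produced.
def streamer(subword):
    print(subword, end="", flush=True)
    return False  # returning True would stop generation early

pipe.generate("What is OpenVINO?", max_new_tokens=100, streamer=streamer)
```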

More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).

## Limitations

Check the original model card for [limitations](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1#limitations).