Update README.md
README.md CHANGED
@@ -20,7 +20,7 @@ tags:
 - code
 - granite
 model-index:
-- name: granite-20b-code-instruct
+- name: granite-20b-code-instruct-r1.1
   results:
   - task:
       type: text-generation

@@ -206,7 +206,7 @@ model-index:
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png)
 
-# Granite-20B-Code-Instruct
+# Granite-20B-Code-Instruct-r1.1
 
 ## Model Summary
 **Granite-20B-Code-Instruct-r1.1** is a 20B parameter model fine-tuned from *Granite-20B-Code-Base* on a combination of **permissively licensed** instruction data to enhance instruction-following capabilities, including mathematical reasoning and problem-solving skills.

@@ -230,7 +230,7 @@ This is a simple example of how to use the **Granite-20B-Code-Instruct-r1.1** model.
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # or "cpu"
-model_path = "ibm-granite/granite-20b-code-instruct"
+model_path = "ibm-granite/granite-20b-code-instruct-r1.1"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 # drop device_map if running on CPU
 model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
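The snippet in the diff stops after loading the model. A minimal sketch of the remaining generation steps, assuming the tokenizer ships a chat template and using only standard transformers APIs (the prompt string is a made-up example, not from the README), might look like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"
model_path = "ibm-granite/granite-20b-code-instruct-r1.1"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

# Hypothetical prompt; the chat template itself comes from the tokenizer config.
chat = [{"role": "user", "content": "Write a Python function to find the maximum of a list."}]
input_text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Tokenize and move the input tensors to the same device as the model.
inputs = tokenizer(input_text, return_tensors="pt").to(device)

# Generate, then decode only the newly produced tokens.
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Slicing off the prompt tokens before decoding keeps only the model's reply; dropping `device_map`, as the README's own comment notes, is enough to run the same code on CPU.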