Update README.md
README.md CHANGED
@@ -113,17 +113,24 @@
 
 If you don’t have access to a larger GPU but want to try the model out, you can run it in a quantized format in Google Colab. **The quality of the responses might deteriorate significantly.** Follow these steps:
 
-### Step 1:
-
+### Step 1: Connect to a free GPU
+1. Click **Connect** near the top right of the notebook.
+2. Select **Change runtime type**.
+3. In the modal window, select **T4 GPU** as your hardware accelerator.
+4. Click **Save**.
+5. Click the **Connect** button to connect to your runtime. After some time, the button shows a green checkmark along with RAM and disk usage graphs, indicating that a server with the required hardware has been created.
+
+
+### Step 2: Install dependencies
+
+```python
 !pip install -U bitsandbytes
 import os
 os._exit(00)
 ```
 
-### Step
-
+### Step 3: Download and quantize the model
 ```python
-
 from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
 import torch
 
@@ -145,8 +152,8 @@ generation_pipeline = pipeline(
 device_map="auto",
 )
 ```
-### Step
-```
+### Step 4: Run inference on a papyrus fragment of your choice
+```python
 # This is a rough transcription of Pap.Ups. 106
 papyrus_edition = """
 ετουσ τεταρτου αυτοκρατοροσ καισαροσ ουεσπασιανου σεβαστου ------------------
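
A note on Step 2: in Colab, `os._exit(00)` terminates the Python process so the runtime restarts and picks up the freshly installed bitsandbytes on the next import. After reconnecting, a quick sanity check can confirm the upgrade took effect; this is a sketch, assuming the standard `bitsandbytes` module layout:

```python
# Run in a fresh cell after the runtime has restarted.
import bitsandbytes as bnb

# Prints the installed version; it should match the release pip just fetched.
print(bnb.__version__)
```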
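
The diff elides the body of Step 3 (the lines between the two hunks). Judging from the imports and the `generation_pipeline = pipeline(` / `device_map="auto"` context, it plausibly builds a `BitsAndBytesConfig` and wraps the quantized model in a text-generation pipeline. The following is a minimal sketch under that assumption; the model id is a placeholder, since the actual checkpoint name is not shown in this excerpt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
import torch

# Placeholder: the actual checkpoint name is not shown in the diff.
model_id = "example-org/papyrus-model"

# 4-bit NF4 quantization keeps the model within the T4's ~16 GB of VRAM.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)

generation_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",
)
```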
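
Step 4 then feeds the transcription to the pipeline. The excerpt ends before the call itself, so the prompt format and generation parameters below are illustrative assumptions:

```python
# Sketch: pass the transcription to the pipeline; parameters are illustrative.
outputs = generation_pipeline(
    papyrus_edition,
    max_new_tokens=256,
    do_sample=False,
)
print(outputs[0]["generated_text"])
```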