francislabounty committed
Commit 38c23cb
Parent(s): 3d443e9
Update README.md
README.md CHANGED
@@ -4,7 +4,7 @@ license: mit
 LoRA weights only and trained for research - nothing from the foundation model. Trained using Open-Assistant's dataset. Shout-out to Open-Assistant and LAION for giving us early research access to the dataset.
 
 Sample usage
-
+```python
 import torch
 import os
 import transformers
@@ -32,7 +32,7 @@ with torch.no_grad():
 temperature=1.0
 )
 print(tokenizer.decode(out[0]))
-
+```
 The model will continue the conversation between the user and itself. If you want to use as a chatbot you can alter the generate method to include stop sequences for 'User:' and 'Assistant:' or strip off anything past the assistant's original response before returning.
 
 Trained for 4 epochs with a sequence length of 2048 on 8 A6000s with an effective batch size of 120.
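The loading and generation code in the middle of the sample is unchanged by this commit, so the diff does not show it. For orientation only, here is a rough sketch of how LoRA adapter weights like these are commonly attached to a base model with the peft library; the repo IDs, prompt, and generation settings below are placeholders and assumptions, not the repository's actual sample.

```python
# Illustrative sketch only - the real loading code lives in the unchanged part of the
# README that this diff does not show. Repo IDs below are placeholders, not real names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "base-model-id"      # placeholder: the foundation model these LoRA weights target
LORA_WEIGHTS = "lora-weights-id"  # placeholder: this repository's adapter weights

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, LORA_WEIGHTS)  # attach the LoRA adapter on top
model.eval()

prompt = "User: Hello, who are you?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, temperature=1.0)
print(tokenizer.decode(out[0]))
```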
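The README's chatbot note suggests either adding stop sequences for 'User:' and 'Assistant:' or stripping everything past the assistant's first reply. Below is a minimal sketch of both approaches using the standard transformers StoppingCriteria API; it assumes the model, tokenizer, inputs, and prompt from the sketch above and is not code from this repository.

```python
# Sketch of the README's suggestion: stop generation when the model starts a new
# "User:" / "Assistant:" turn, or strip everything past the first reply afterwards.
# `model`, `tokenizer`, `inputs`, and `prompt` are assumed from the sketch above.
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnStrings(StoppingCriteria):
    """Stop as soon as any of the given strings appears in the newly generated text."""

    def __init__(self, stop_strings, tokenizer, prompt_len):
        self.stop_strings = stop_strings
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len  # number of prompt tokens to ignore when checking

    def __call__(self, input_ids, scores, **kwargs):
        new_text = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return any(s in new_text for s in self.stop_strings)

stopping = StoppingCriteriaList(
    [StopOnStrings(["User:", "Assistant:"], tokenizer, inputs["input_ids"].shape[1])]
)
with torch.no_grad():
    out = model.generate(
        **inputs, stopping_criteria=stopping, max_new_tokens=256, temperature=1.0
    )

# Alternatively, post-process: keep only the text up to the next turn marker.
decoded = tokenizer.decode(out[0], skip_special_tokens=True)
reply = decoded[len(prompt):] if decoded.startswith(prompt) else decoded
for marker in ("User:", "Assistant:"):
    idx = reply.find(marker)
    if idx != -1:
        reply = reply[:idx]
print(reply.strip())
```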