This model can be used for **conversational AI tasks** related to Spanish-language news. The fine-tuned LoRA model is especially suitable for use cases that require both understanding and generating text, such as chat-based interactions, answering questions about news, and discussing headlines.

Copy the code from this Gist for easy chatting in a Jupyter Notebook: https://gist.github.com/reddgr/20c2e3ea205d1fedfdc8be94dc5c1237
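For chat-style use outside the notebook, a minimal read-generate-print loop is enough. The sketch below is an illustration, not the linked Gist code; it assumes `tokenizer` and `peft_model` have already been loaded as shown in the How to Get Started section further down.

```python
# Minimal interactive chat loop (illustrative sketch, not the linked Gist).
# Assumes `tokenizer` and `peft_model` are loaded as in "How to Get Started".
while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"exit", "quit"}:
        break
    inputs = tokenizer(user_input, return_tensors="pt")
    output = peft_model.generate(**inputs, max_length=100)
    print("Model:", tokenizer.decode(output[0], skip_special_tokens=True))
```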
### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
## How to Get Started with the Model

Copy the code from this Gist for easy chatting in a Jupyter Notebook: https://gist.github.com/reddgr/20c2e3ea205d1fedfdc8be94dc5c1237

Additionally, you can use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the tokenizer, the base model, and the LoRA adapter
save_directory = "./fine_tuned_model"
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = AutoModelForCausalLM.from_pretrained(save_directory)
peft_model = PeftModel.from_pretrained(model, save_directory)

# Example usage: tokenize a prompt, generate, and decode the response
input_text = "¿Qué opinas de las noticias recientes sobre la economía?"
inputs = tokenizer(input_text, return_tensors="pt")
output = peft_model.generate(**inputs, max_length=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
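If you want more varied responses, `generate` also accepts standard sampling arguments; the values below are illustrative rather than tuned for this model.

```python
# Illustrative sampling settings (assumed values, not tuned for this model)
output = peft_model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```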