- [Alpaca chat Dialogs](https://github.com/project-baize/baize)
- [Medical chat Dialogs](https://github.com/project-baize/baize)

## How to use it
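The snippet below loads the LLaMA 7B base model in 8-bit, attaches the baizemocracy LoRA adapter with `peft`, and defines a `generate` helper that prints the decoded response together with the elapsed time. The `GenerationConfig` values shown are illustrative assumptions, not the exact settings used to produce the examples below; adjust them to your use case.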

```python
import time

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

peft_model_id = "hackathon-somos-nlp-2023/baizemocracy-lora-7B-cfqa"
config = PeftConfig.from_pretrained(peft_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", return_dict=True, load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)

# Load the LoRA adapter on top of the base model
tuned_model = PeftModel.from_pretrained(base_model, peft_model_id)
# original_model = PeftModel.from_pretrained(base_model, "project-baize/baize-lora-7B")

# Illustrative decoding settings (an assumption; tune to your use case)
generation_config = GenerationConfig(temperature=0.7, top_p=0.9, do_sample=True)

def generate(text):
    stt = time.time()
    print("hackathon-somos-nlp-2023/baizemocracy-lora-7B-cfqa response:")
    inputs = tokenizer(text, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    tuned_model.eval()
    with torch.cuda.amp.autocast():
        generation_output = tuned_model.generate(
            input_ids=input_ids[:, 1:-1],  # drop the leading BOS token and the final token
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=256,
        )
    for s in generation_output.sequences:
        output = tokenizer.decode(s)
        print(output)
    ent = time.time()
    elapsed_time = round(ent - stt, 2)
    print(f"{elapsed_time} seconds")
```
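Loading the base model with `load_in_8bit=True` requires the `bitsandbytes` package and a CUDA GPU; without 8-bit quantization the 7B model needs considerably more memory.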

## Example outputs

baizemocracy-lora-7B-cfqa model:
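Prompts for this variant combine a short instruction, a `Context:` passage, and a `Question:`, as in the example below.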

```python
# Text taken from the Mexican political reform: https://www.gob.mx/cms/uploads/attachment/file/3080/EXPLICACION_AMPLIADA_REFORMA_POLITICA_ELECTORAL.pdf
text = """
Given the Context answer the Question. Answers must be source based, use topics to elaborate on the Response if they're provided.
Context:'Se otorga autonomía constitucional al Consejo Nacional de Evaluación de la Política de Desarrollo Social (CONEVAL), hasta ahora un organismo público descentralizado dependiente de la
Secretaría de Desarrollo Social. La autonomía garantizará la objetividad, independencia y rigor necesarios para evaluar la política social del país. Esto permitirá perfeccionar el diseño y aplicación de las políticas públicas destinadas a mejorar la calidad de vida de los sectores de menores
ingresos'
Question: '¿para qué se le dará autonomía al CONEVAL?'"""
generate(text)

output:
Respuesta: El CONEVAL recibirá autonomía para garantizar la objetividad, independencia y rigor necesarios para evaluar la política social del país. Esto permitirá perfeccionar el diseño y aplicación de las políticas públicas destinadas a mejorar la calidad de vida de los sectores de menores
ingresos.
```

```
input:
"Given the context answer the Human question.
|