---
datasets:
- bertin-project/alpaca-spanish
language:
- es
license: apache-2.0
---

<div style="text-align:center;width:350px;height:350px;">
<img src="https://huggingface.co/hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca/resolve/main/Alpaca2.png" alt="SAlpaca logo">
</div>

# SAlpaca: Spanish + Alpaca

## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library. It fine-tunes the base model *bertin-project/bertin-gpt-j-6B* on the *Spanish Alpaca Dataset* using the *LoRA* method.
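For context, the sketch below shows how a LoRA adapter of this kind is typically set up with PEFT. It is illustrative only: the rank, scaling, and dropout values are assumptions, not the settings used to train this adapter.

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model; its original weights stay frozen during LoRA training.
base = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")

# LoRA injects small trainable low-rank matrices alongside the frozen weights.
# r, lora_alpha, and lora_dropout below are assumed values, not this model's.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```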

## How to use
```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 8-bit and the tokenizer shipped with the adapter
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)

def gen_conversation(text):
    # Wrap the input in the prompt format the adapter was trained on
    text = "<SC>instruction: " + text + "\n "
    batch = tokenizer(text, return_tensors='pt').to(model.device)
    with torch.cuda.amp.autocast():
        output_tokens = model.generate(
            **batch,
            max_new_tokens=256,
            eos_token_id=50258,
            do_sample=True,   # enable sampling so temperature takes effect
            temperature=0.9,
        )

    print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=False))

text = "hola"

gen_conversation(text)
```
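The prompt is wrapped as `<SC>instruction: ...` to match the fine-tuning format, and generation stops at token id 50258, which appears to be a special end-of-response token added during fine-tuning. If you want to run the model without the PEFT wrapper, the LoRA weights can be folded into the base model. A minimal sketch, assuming enough memory to load GPT-J-6B in half precision (merging is not supported on 8-bit-loaded weights, and the output directory name is illustrative):

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

peft_model_id = "hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca"

# Reload the base model in fp16; merging requires unquantized weights
base = AutoModelForCausalLM.from_pretrained(
    "bertin-project/bertin-gpt-j-6B", torch_dtype=torch.float16
)

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper
merged = PeftModel.from_pretrained(base, peft_model_id).merge_and_unload()
merged.save_pretrained("salpaca-merged")  # plain transformers checkpoint
```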

## Resources used
Google Colab machine with the following specifications:
<div style="text-align:center;width:550px;height:550px;">
<img src="https://huggingface.co/hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca/resolve/main/resource.jpeg" alt="Resource specifications">
</div>

## Citation
```bibtex
@misc{hackathon-somos-nlp-2023,
  author    = { Edison Bejarano, Leonardo Bolaños, Santiago Pineda, Nicolay Potes, Daniel Terraza },
  title     = { SAlpaca },
  year      = 2023,
  url       = { https://huggingface.co/hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca },
  publisher = { Hugging Face }
}
```