adalbertojunior committed
Commit
4b2f9e0
1 Parent(s): be8906c

Create README.md

Files changed (1)
  1. README.md +54 -0
README.md ADDED
@@ -0,0 +1,54 @@
---
library_name: transformers
datasets:
- adalbertojunior/dolphin_pt_test
language:
- pt
---

# Model Card for Llama-3-8B-Dolphin-Portuguese-v0.2

Model trained on a Portuguese-translated version of the Dolphin dataset.

## Usage
```python
import transformers
import torch

model_id = "adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # System prompt: "You are a pirate robot that always answers as a pirate should!"
    {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
    # User prompt: "Who are you?"
    {"role": "user", "content": "Quem é você?"},
]

# Render the chat messages into a single prompt string using the model's chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop generation at either the EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Print only the newly generated text, stripping the echoed prompt.
print(outputs[0]["generated_text"][len(prompt):])
```
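
If you prefer loading the model directly rather than through `pipeline`, the sketch below shows one possible equivalent using `AutoTokenizer` and `AutoModelForCausalLM`. It assumes the repository ships a standard Llama 3 chat template with the `<|eot_id|>` end-of-turn token, as in the example above; treat it as an illustrative variant, not an officially documented snippet.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    # "Who are you?"
    {"role": "user", "content": "Quem é você?"},
]

# Tokenize the chat directly into input IDs (no intermediate prompt string).
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    # Same terminators as the pipeline example: EOS plus Llama 3's end-of-turn token.
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
)
# Decode only the tokens generated after the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```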