---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- llama
base_model: meta-llama/Llama-2-13b-chat-hf
datasets:
- rhaymison/superset
pipeline_tag: text-generation
---

# portuguese-tom-cat-13b

<p align="center">
  <img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/13b.webp" width="50%" style="margin-left:auto; margin-right:auto; display:block"/>
</p>


This model was trained on a superset of 300,000 instructions in Portuguese.
It aims to help fill the gap in Portuguese-language models. Fine-tuned from Llama-2-13b-chat-hf.

# How to use

### FULL MODEL: A100
### HALF MODEL: L4
### 8BIT OR 4BIT: T4 or V100

You can use the model in its normal form down to 4-bit quantization; both approaches are shown below.
Remember that verbs matter in your prompt: tell the model how to act or behave so you can guide it toward the response you want.
Details like these help the model perform much better.

```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the full-precision model onto GPU 0 (an A100 is recommended).
model = AutoModelForCausalLM.from_pretrained("rhaymison/portuguese-tom-cat-13b", device_map={"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/portuguese-tom-cat-13b")
model.eval()

```
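
For the half-precision option (L4), a minimal sketch using the standard `torch_dtype` argument of `from_pretrained`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Half precision (float16) roughly halves GPU memory usage,
# which is enough for an L4.
model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/portuguese-tom-cat-13b",
    torch_dtype=torch.float16,
    device_map={"": 0}
)
tokenizer = AutoTokenizer.from_pretrained("rhaymison/portuguese-tom-cat-13b")
model.eval()
```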

You can use it with the `pipeline` API.
```python

from transformers import pipeline
pipe = pipeline("text-generation",
                model=model,
                tokenizer=tokenizer,
                do_sample=True,
                max_new_tokens=512,
                num_beams=2,
                temperature=0.3,
                top_k=50,
                top_p=0.95,
                early_stopping=True,
                pad_token_id=tokenizer.eos_token_id,
                )


def format_question(question: str) -> str:
  base_instruction = """Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido."""
  _input = f"""<s>[INST] <<SYS>>\n {base_instruction}
  <</SYS>> {question}  [/INST]
  """

  return _input.strip()

prompt = "Me explique sobre os romanos"
pipe(format_question(prompt))
```

```text
Os romanos foram um povo que viveu na Itália antiga, entre o século VIII a.C. e o século V d.C.
Eles eram conhecidos por sua habilidade em construir estradas, edifícios e aquedutos, e também por suas conquistas militares.
O Império Romano, que durou de 27 a.C. a 476 d.C., foi o maior império da história, abrangendo uma área que ia da Grécia até a Inglaterra.
Os romanos também desenvolveram um sistema de leis e instituições políticas que influenciaram profundamente a cultura ocidental.
```
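
The `TextStreamer` imported earlier lets you print tokens as they are generated instead of waiting for the full answer. A minimal sketch, reusing `model`, `tokenizer`, and `format_question` from above:

```python
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer(format_question("Me explique sobre os romanos"), return_tensors="pt").to(model.device)

# Tokens are written to stdout as soon as they are decoded.
model.generate(
    **inputs,
    streamer=streamer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.3,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
```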

If you hit a memory error such as "CUDA out of memory", use 4-bit or 8-bit quantization.
To run the complete model in Colab you will need an A100.
With 4-bit or 8-bit quantization, a T4 or L4 is enough.

# 4bits example

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# NF4 4-bit quantization with double quantization and bfloat16 compute.
bnb_4bit_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/portuguese-tom-cat-13b",
    quantization_config=bnb_4bit_config,
    device_map={"": 0}
)

```
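
# 8bits example

The 8-bit path mentioned above works the same way. A minimal sketch, assuming the standard `bitsandbytes` integration in `transformers`:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization; fits on a T4 or V100 as noted above.
bnb_8bit_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/portuguese-tom-cat-13b",
    quantization_config=bnb_8bit_config,
    device_map={"": 0}
)
```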


### Comments

Any ideas, help, or reports are always welcome.

email: rhaymisoncristian@gmail.com

 <div style="display:flex; flex-direction:row; justify-content:left">
    <a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
    <img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
  </a>
  <a href="https://github.com/rhaymisonbetini" target="_blank">
    <img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
  </a>
</div>