---
license: apache-2.0
datasets:
- ruanchaves/faquad-nli
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- textual-entailment
widget:
- text: "<s>Qual a capital do Brasil?<s>A capital do Brasil é Brasília!</s>"
  example_title: Exemplo
- text: "<s>Qual a capital do Brasil?<s>Anões são muito mais legais do que elfos!</s>"
  example_title: Exemplo
---
# TeenyTinyLlama-160m-FaQuAD-NLI

TeenyTinyLlama is a series of small foundational models trained in Brazilian Portuguese.

This repository contains a version of [TeenyTinyLlama-160m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m) (`TeenyTinyLlama-160m-FaQuAD-NLI`) fine-tuned on the [FaQuAD-NLI dataset](https://huggingface.co/datasets/ruanchaves/faquad-nli).

## Details

- **Number of Epochs:** 3
- **Batch size:** 16
- **Optimizer:** `torch.optim.AdamW` (learning_rate = 4e-5, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB

## Usage

Using `transformers.pipeline`:

```python
from transformers import pipeline

text = "<s>Qual a capital do Brasil?<s>A capital do Brasil é Brasília!</s>"

classifier = pipeline("text-classification", model="nicholasKluge/TeenyTinyLlama-160m-FaQuAD-NLI")
classifier(text)

# >>> [{'label': 'SUITABLE', 'score': 0.9774010181427002}]
```
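
The widget examples above follow the format used during fine-tuning: question and answer joined by the tokenizer's BOS token (`<s>`) and terminated with EOS (`</s>`). A minimal sketch of a helper to build such inputs (the `format_pair` name and the hard-coded special tokens are illustrative assumptions, not part of this repository):

```python
def format_pair(question: str, answer: str, bos: str = "<s>", eos: str = "</s>") -> str:
    """Join a question/answer pair in the BOS-separated format shown in the widget examples."""
    return f"{bos}{question}{bos}{answer}{eos}"

text = format_pair("Qual a capital do Brasil?", "A capital do Brasil é Brasília!")
print(text)
# <s>Qual a capital do Brasil?<s>A capital do Brasil é Brasília!</s>
```

The resulting string can be passed directly to the `classifier` pipeline above.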

## Reproducing

To reproduce the fine-tuning process, use the following code snippet:

```python
# Faquad-nli
! pip install transformers datasets evaluate accelerate -q

import evaluate
import numpy as np
from datasets import load_dataset, Dataset, DatasetDict
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

# Load the task
dataset = load_dataset("ruanchaves/faquad-nli")

# Create a `ModelForSequenceClassification`
model = AutoModelForSequenceClassification.from_pretrained(
    "nicholasKluge/TeenyTinyLlama-160m", 
    num_labels=2, 
    id2label={0: "UNSUITABLE", 1: "SUITABLE"}, 
    label2id={"UNSUITABLE": 0, "SUITABLE": 1}
)

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-160m")

# Format the dataset
train = dataset['train'].to_pandas()
train['text'] = train['question'] + tokenizer.bos_token + train['answer'] + tokenizer.eos_token
train = train[['text', 'label']]
train['label'] = train['label'].astype(int)
train = Dataset.from_pandas(train)

test = dataset['test'].to_pandas()
test['text'] = test['question'] + tokenizer.bos_token + test['answer'] + tokenizer.eos_token
test = test[['text', 'label']]
test['label'] = test['label'].astype(int)
test = Dataset.from_pandas(test)

dataset = DatasetDict({
    "train": train,
    "test": test
})

# Preprocess the dataset
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True)

dataset_tokenized = dataset.map(preprocess_function, batched=True)

# Create a simple data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Use accuracy as evaluation metric
accuracy = evaluate.load("accuracy")

# Function to compute accuracy
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)

# Define training arguments
training_args = TrainingArguments(
    output_dir="checkpoints",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_token="your_token_here",
    hub_model_id="username/model-ID"
)

# Define the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_tokenized["train"],
    eval_dataset=dataset_tokenized["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# Train!
trainer.train()
```
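
As a sanity check, the `compute_metrics` logic used above can be exercised on dummy logits without loading any model. A minimal sketch using NumPy only (the `compute_accuracy` helper and the example values are illustrative, not part of the training script):

```python
import numpy as np

def compute_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Argmax over the class dimension, then fraction of matches —
    mirrors what compute_metrics computes via evaluate's accuracy metric."""
    predictions = np.argmax(logits, axis=1)
    return float((predictions == labels).mean())

# Four examples, two classes (0 = UNSUITABLE, 1 = SUITABLE)
logits = np.array([[0.1, 0.9], [2.0, -1.0], [0.3, 0.7], [1.5, 0.2]])
labels = np.array([1, 0, 0, 0])
print(compute_accuracy(logits, labels))  # 0.75
```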

## Fine-Tuning Comparisons

| Models                                                                                     | [FaQuAD-NLI](https://huggingface.co/datasets/ruanchaves/faquad-nli) |
|--------------------------------------------------------------------------------------------|---------------------------------------------------------------------|
| [Bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) | 93.07                                                               |
| [Bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased)| 92.26                                                               |
| [Teeny Tiny Llama 460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m)          | 91.18                                                               |
| [Teeny Tiny Llama 160m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m)          | 90.00                                                               |
| [Gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese)        | 86.46                                                               |

## Cite as 🤗

```latex
@misc{nicholas22llama,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m},
  author = {Nicholas Kluge Corrêa},
  title = {TeenyTinyLlama},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```

## Funding

This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.

## License

TeenyTinyLlama-160m-FaQuAD-NLI is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.