---
license: apache-2.0
datasets:
- christykoh/imdb_pt
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- sentiment-analysis
widget:
- text: "Esqueceram de mim 2 é um dos melhores filmes de natal de todos os tempos."
  example_title: Exemplo
- text: "Esqueceram de mim 2 é o pior filme da franquia inteira."
  example_title: Exemplo
---

# TeenyTinyLlama-162m-IMDB

TeenyTinyLlama is a series of small foundation models trained on Portuguese text. This repository contains a version of [TeenyTinyLlama-162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m) fine-tuned for sentiment classification on a translated version of the [IMDB dataset](https://huggingface.co/datasets/christykoh/imdb_pt).

## Reproducing

```python
# IMDB
! pip install transformers datasets evaluate accelerate -q

import evaluate
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

# Load the task
dataset = load_dataset("christykoh/imdb_pt")

# Create a `ModelForSequenceClassification`
model = AutoModelForSequenceClassification.from_pretrained(
    "nicholasKluge/TeenyTinyLlama-162m",
    num_labels=2,
    id2label={0: "NEGATIVE", 1: "POSITIVE"},
    label2id={"NEGATIVE": 0, "POSITIVE": 1}
)

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-162m")

# Preprocess the dataset: tokenize the reviews, truncating to 256 tokens
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True, max_length=256)

dataset_tokenized = dataset.map(preprocess_function, batched=True)

# Create a data collator that dynamically pads each batch
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Use accuracy as the evaluation metric
accuracy = evaluate.load("accuracy")

# Function to compute accuracy from the model's logits
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)

# Define training arguments
training_args = TrainingArguments(
    output_dir="checkpoints",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_token="your_token_here",
    hub_model_id="username/model-name-imdb"
)

# Define the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_tokenized["train"],
    eval_dataset=dataset_tokenized["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# Train!
trainer.train()
```

## Results

| Models                                                                                      | [IMDB](https://huggingface.co/datasets/christykoh/imdb_pt) (Accuracy %) |
|---------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| [TeenyTinyLlama-162m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-162m)             | 91.14                                                                    |
| [Bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)  | 92.22                                                                    |
| [Gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese)         | 91.60                                                                    |
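
As a sanity check, the test-set accuracy above can be recomputed directly from a fine-tuned checkpoint. The sketch below assumes the checkpoint was pushed to the Hub under this repository's id, `nicholasKluge/TeenyTinyLlama-162m-IMDB`; substitute your own `hub_model_id` if you reproduced the run yourself.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Assumed Hub id for the fine-tuned checkpoint; swap in your own
# `hub_model_id` if you ran the fine-tuning script above.
classifier = pipeline(
    "text-classification",
    model="nicholasKluge/TeenyTinyLlama-162m-IMDB",
)

test_set = load_dataset("christykoh/imdb_pt", split="test")
label2id = {"NEGATIVE": 0, "POSITIVE": 1}

# Classify the test reviews, truncating to the same 256-token
# limit used during fine-tuning
outputs = classifier(
    test_set["text"],
    batch_size=16,
    truncation=True,
    max_length=256,
)
predictions = [label2id[output["label"]] for output in outputs]

accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=predictions, references=test_set["label"]))
```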
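
## Usage

For inference, the fine-tuned model can be loaded through the `text-classification` pipeline. This is a minimal sketch, again assuming the checkpoint id `nicholasKluge/TeenyTinyLlama-162m-IMDB`; it classifies the same two example reviews used in the widget above.

```python
from transformers import pipeline

# Assumed Hub id for the fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="nicholasKluge/TeenyTinyLlama-162m-IMDB",
)

reviews = [
    "Esqueceram de mim 2 é um dos melhores filmes de natal de todos os tempos.",
    "Esqueceram de mim 2 é o pior filme da franquia inteira.",
]

for review, prediction in zip(reviews, classifier(reviews)):
    # Each prediction is a dict like {"label": "POSITIVE", "score": 0.99}
    print(f"{prediction['label']} ({prediction['score']:.2f}): {review}")
```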