
Model Trained Using AutoTrain

  • Problem type: Binary Classification
  • Model ID: 2489776826
  • Base model: bert-base-portuguese-cased
  • Parameters: 109M
  • Model size: 416MB
  • CO2 Emissions (in grams): 1.7788

Validation Metrics

  • Loss: 0.412
  • Accuracy: 0.815
  • Precision: 0.793
  • Recall: 0.794
  • AUC: 0.895
  • F1: 0.793

Usage

This model was trained on a random subset of the ToLD-Br dataset (one third of the original size). Our main objective is to provide a small model that can classify Brazilian Portuguese tweets in a binary way ('toxic' or 'non-toxic').

You can use cURL to access this model:

$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/alexandreteles/autotrain-told_br_binary_sm_bertimbau-2489776826

Or Python API:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# use_auth_token is only required if you access the model through a gated/private setup
model = AutoModelForSequenceClassification.from_pretrained("alexandreteles/autotrain-told_br_binary_sm_bertimbau-2489776826", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alexandreteles/autotrain-told_br_binary_sm_bertimbau-2489776826", use_auth_token=True)

# tokenize the input text and run a forward pass; outputs.logits holds the raw class scores
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
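The forward pass returns raw logits rather than probabilities. A minimal sketch of turning logits into a predicted label is below; the logit values and the label order (index 0 = 'non-toxic', index 1 = 'toxic') are illustrative assumptions, not guaranteed by the model card.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of raw scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical logits for one input, as in outputs.logits[0].tolist()
logits = [0.2, 1.4]
probs = softmax(logits)

labels = ["non-toxic", "toxic"]  # assumed label order; check model.config.id2label
pred = labels[probs.index(max(probs))]
```

In practice you would replace the hypothetical `logits` list with `outputs.logits[0].tolist()` and read the real label mapping from `model.config.id2label`.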
