---
license: mit
datasets:
  - ethical-spectacle/biased-corpus
language:
  - en
metrics:
  - f1
  - precision
  - recall
library_name: transformers
co2_eq_emissions:
  emissions: 10
  source: Code Carbon
  training_type: fine-tuning
  geographical_location: Albany, New York
  hardware_used: T4
base_model:
  - google-bert/bert-base-uncased
pipeline_tag: text-classification
tags:
  - Social Bias
---

## How to Use

```python
from transformers import pipeline

# Pass return_all_scores=True for multi-label output (scores for every bias category)
classifier = pipeline("text-classification", model="maximuspowers/bias-type-classifier")
result = classifier("Tall people are so clumsy")

# Example result:
# [
#   {
#     "label": "physical",
#     "score": 0.9972801208496094
#   }
# ]
```
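
To get a score for every bias category at once (the multi-label case mentioned above), pass `return_all_scores=True` at call time. A minimal sketch; newer `transformers` releases may prefer `top_k=None` for the same behavior:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="maximuspowers/bias-type-classifier")

# Returns a nested list with one {"label", "score"} dict per bias category
results = classifier("Tall people are so clumsy", return_all_scores=True)
print(results)
```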

This model was trained on a synthetic dataset of biased statements and questions, generated by Mistral 7B as part of the GUS-Net paper.

## Model Performance

| Label | F1 Score | Precision | Recall |
|---|---|---|---|
| Macro Average | 0.8998 | 0.9213 | 0.8807 |
| racial | 0.8613 | 0.9262 | 0.8049 |
| religious | 0.9655 | 0.9716 | 0.9595 |
| gender | 0.9160 | 0.9099 | 0.9223 |
| age | 0.9185 | 0.9683 | 0.8737 |
| nationality | 0.9083 | 0.9053 | 0.9113 |
| sexuality | 0.9304 | 0.9484 | 0.9131 |
| socioeconomic | 0.8273 | 0.8727 | 0.7864 |
| educational | 0.8791 | 0.9091 | 0.8511 |
| disability | 0.8713 | 0.8762 | 0.8665 |
| political | 0.9127 | 0.8914 | 0.9351 |
| physical | 0.9069 | 0.9547 | 0.8635 |
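
The macro average is the unweighted mean of the per-label scores. A quick check of the F1 column:

```python
# Per-label F1 scores from the table above
f1_scores = [0.8613, 0.9655, 0.9160, 0.9185, 0.9083, 0.9304,
             0.8273, 0.8791, 0.8713, 0.9127, 0.9069]

# Unweighted (macro) average over the 11 bias categories
print(round(sum(f1_scores) / len(f1_scores), 4))  # 0.8998
```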

## Training Params

- Learning Rate: 5e-5
- Batch Size: 16
- Epochs: 3
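
For reference, a minimal fine-tuning sketch using the hyperparameters above. It assumes the `ethical-spectacle/biased-corpus` dataset has a `train` split with `text` and integer `label` columns (not confirmed by this card) and treats the task as single-label classification over the 11 categories; the actual run may have used a multi-label head instead.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed split and column names ("text", "label"); check the dataset card
dataset = load_dataset("ethical-spectacle/biased-corpus")

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# 11 bias categories, as listed in the performance table above
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=11
)

args = TrainingArguments(
    output_dir="bias-type-classifier",
    learning_rate=5e-5,               # from Training Params
    per_device_train_batch_size=16,   # from Training Params
    num_train_epochs=3,               # from Training Params
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
```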