---
license: afl-3.0
datasets:
- jigsaw_toxicity_pred
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
## Model description
This model is a fine-tuned version of the [bert-base-uncased](https://huggingface.co/transformers/model_doc/bert.html) model, trained to classify toxic comments.
## How to use
You can use the model with the following code.
```python
from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline

model_path = "JungleLee/bert-toxic-comment-classification"

# Load the tokenizer and the fine-tuned binary classifier
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2)

# Wrap them in a pipeline and classify a sample comment
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("You're a fucking nerd."))
```
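The pipeline returns only the top label. If you want probabilities for both classes, you can run the model directly; a minimal sketch (the label names come from the model's `id2label` config, so inspect `model.config.id2label` rather than assuming an order):
```python
import torch

# Tokenize the comment and run a forward pass without gradients
inputs = tokenizer("You're a fucking nerd.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two classes; label names are read from the model config
probs = torch.softmax(logits, dim=-1).squeeze()
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```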
## Training data
The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 90% of the `train.csv` data to train the model.
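The exact split is not published here; the sketch below shows one way to reproduce a 90/10 split of `train.csv`. The column names and the 0.5 binarization threshold are assumptions taken from the competition's data format, not from this card:
```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Column names ("comment_text", "target") are assumptions; check the
# competition's data page for the actual schema
df = pd.read_csv("train.csv")

# Binarize the continuous toxicity score at 0.5 (a common convention;
# the threshold used for this model is not stated on the card)
df["label"] = (df["target"] >= 0.5).astype(int)

# 90% train / 10% held out, as described above
train_df, eval_df = train_test_split(df, test_size=0.1, random_state=42)
```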
## Evaluation results
The model achieves 0.95 AUC on a held-out test set of 1,500 rows.
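For reference, a minimal sketch of how an AUC like this can be computed with scikit-learn. The `texts`/`labels` names and the assumption that class index 1 is the toxic class are illustrative, not from the card:
```python
import torch
from sklearn.metrics import roc_auc_score

def toxic_prob(text):
    # Probability of the positive (assumed toxic) class, index 1
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# texts: list[str], labels: list[int] for the held-out rows (placeholders)
scores = [toxic_prob(t) for t in texts]
print("AUC:", roc_auc_score(labels, scores))
```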