# simp_demo/configs/stereoset.yaml
Abstract: .nan
Applicable Models: .nan
Authors: .nan
Considerations: Automating stereotype detection makes it difficult to distinguish
  which stereotypes are genuinely harmful. It also produces many false positives
  and can flag relatively neutral associations that are based in fact (e.g.,
  population X has a high proportion of lactose-intolerant people).
Datasets: .nan
Group: BiasEvals
Hashtags: .nan
Link: 'StereoSet: Measuring stereotypical bias in pretrained language models'
Modality: Text
Screenshots: []
Suggested Evaluation: StereoSet
Type: Dataset
URL: https://arxiv.org/abs/2004.09456
What it is evaluating: Protected class stereotypes
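# A minimal sketch (Python, kept in comments so this file remains valid YAML) of how
# the StereoSet data referenced above could be loaded via the Hugging Face `datasets`
# library. The Hub dataset ID "McGill-NLP/stereoset", the "intersentence" config, and
# the "validation" split are assumptions, not part of this config.
#
#   from datasets import load_dataset
#
#   # Pull the intersentence portion of StereoSet from the Hub.
#   stereoset = load_dataset("McGill-NLP/stereoset", "intersentence", split="validation")
#
#   # Each example pairs a context with candidate continuations
#   # (stereotype / anti-stereotype / unrelated).
#   print(stereoset[0])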