---
license: apache-2.0
language:
- fr
pipeline_tag: text-classification
inference: false
---
# Affective Norms Extrapolation Model for French Language
## Model Description
This transformer-based model extrapolates affective norms for French words, predicting dimensions such as valence and arousal. It was fine-tuned from the French Toxicity Classifier Plus model ("EIStakovskii/french_toxicity_classifier_plus_v2") with additional layers added to predict the affective dimensions. The model was first released as part of the publication: "Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection" (Plisiecki & Sobieszek, 2023) [ https://doi.org/10.3758/s13428-023-02212-3 ]
## Training Data
The model was trained on the French affective norms dataset by Syssau et al. (2021) (http://dx.doi.org/10.3758/s13428-020-01450-z), which includes 1031 words rated by participants on various emotional and semantic dimensions. The dataset was split into training, validation, and test sets in an 8:1:1 ratio.
## Performance
The model achieved the following Pearson correlations with human judgments on the test set:
- Valence: 0.80
- Arousal: 0.77
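For reference, these scores are Pearson correlations between the model's predictions and the human norm ratings on held-out words. A minimal sketch of how such a correlation is computed (using made-up valence scores, not the actual test set):

```python
import numpy as np

def pearson_r(predictions, human_ratings):
    """Pearson correlation between model predictions and human norm ratings."""
    predictions = np.asarray(predictions, dtype=float)
    human_ratings = np.asarray(human_ratings, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; [0, 1] is r
    return np.corrcoef(predictions, human_ratings)[0, 1]

# Toy example with invented ratings, for illustration only
predicted = [5.1, 2.3, 6.8, 4.0, 3.2]
observed = [4.9, 2.0, 7.1, 4.4, 3.0]
print(f"Valence r = {pearson_r(predicted, observed):.2f}")
```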
## Usage
You can use the model and tokenizer as follows:
First, run the bash command below to clone the repository (this may take some time). Because the model uses a custom model class, it cannot be loaded with the standard Hugging Face `AutoModel` setup.
```bash
git clone https://huggingface.co/hplisiecki/word2affect_french
```
Then load the model and tokenizer from the cloned directory:
```python
from word2affect_french.model_script import CustomModel  # custom model class shipped with the repository
from transformers import AutoTokenizer

model_directory = "word2affect_french"  # path to the cloned repository

model = CustomModel.from_pretrained(model_directory)
tokenizer = AutoTokenizer.from_pretrained(model_directory)

inputs = tokenizer("This is a test input.", return_tensors="pt")
outputs = model(inputs['input_ids'], inputs['attention_mask'])

# Print out the emotion ratings
for emotion, rating in zip(['Valence', 'Arousal'], outputs):
    print(f"{emotion}: {rating.item()}")
```
## Citation
If you use this model, please cite the following paper:
```bibtex
@article{Plisiecki_Sobieszek_2023,
  title={Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection},
  author={Plisiecki, Hubert and Sobieszek, Adam},
  journal={Behavior Research Methods},
  year={2023},
  pages={1--16},
  doi={https://doi.org/10.3758/s13428-023-02212-3}
}
```