---
license: mit
datasets:
- allocine
language:
- fr
tags:
- camembert
---
## TextAttack Model Card
This `cmarkea/distilcamembert-base` model was fine-tuned using TextAttack and the allocine dataset, loaded with the `datasets` library. The model was fine-tuned for 3 epochs with a batch size of 64, a maximum sequence length of 512, and an initial learning rate of 5e-05. Since this was a classification task, the model was trained with a cross-entropy loss function.
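As a minimal sketch of the cross-entropy objective mentioned above (the probabilities and labels below are illustrative examples, not values from the actual training run):

```python
import math

def cross_entropy(probs, target):
    """Cross-entropy loss for a single example: the negative log of the
    probability the model assigned to the true class."""
    return -math.log(probs[target])

# allocine is a binary sentiment task: index 0 = negative, 1 = positive.
probs = [0.1, 0.9]  # hypothetical softmax output from the classifier

loss_if_positive = cross_entropy(probs, 1)  # confident and correct: low loss
loss_if_negative = cross_entropy(probs, 0)  # confident but wrong: high loss
```

Minimizing this loss over the training set pushes the model to assign high probability to the correct sentiment label.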
The best score the model achieved on this task was 0.9707 eval set accuracy, reached after 3 epochs.
For more information, check out [TextAttack on GitHub](https://github.com/QData/TextAttack).