Sentiment at aequa-tech

Cite this work

@inproceedings{arthur2023debunker,
  title={Debunker Assistant: a support for detecting online misinformation},
  author={Arthur, Thomas Edward Capozzi Lupi and Cignarella, Alessandra Teresa and Frenda, Simona and Lai, Mirko and Stranisci, Marco Antonio and Urbinati, Alessandra and others},
  booktitle={Proceedings of the Ninth Italian Conference on Computational Linguistics (CLiC-it 2023)},
  volume={3596},
  pages={1--5},
  year={2023},
  editor={Boschetti, Federico and Lebani, Gianluca E. and Magnini, Bernardo and Novielli, Nicole}
}

Model Description

This model is a fine-tuned version of the Italian AlBERTo model for sentiment analysis. It classifies Italian text into three sentiment classes and has roughly 184M parameters (float32, safetensors).

Training Details

Training Data

Training Hyperparameters

  • learning_rate: 2e-5
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam
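
These hyperparameters map naturally onto a transformers TrainingArguments object. The sketch below is a hypothetical reconstruction, not the original training script; the output directory and the exact optimizer name are illustrative assumptions:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentiment-it-finetune",  # illustrative path, not from the original run
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",  # Adam-family optimizer, per the list above
)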

Evaluation

Testing Data

The model was evaluated on the SENTIPOLC 2016 (EVALITA) test set.
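
A minimal evaluation sketch, assuming a local CSV copy of the SENTIPOLC 2016 test set (the file name and the text/label column names are assumptions for illustration, not part of the official release):

import pandas as pd
from sklearn.metrics import classification_report
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Hypothetical local copy of the test set; adapt the path and column names.
test = pd.read_csv("sentipolc16_test.csv")

model = AutoModelForSequenceClassification.from_pretrained(
    "aequa-tech/sentiment-it", num_labels=3, ignore_mismatched_sizes=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0"
)
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Take the top predicted label per text; depending on the model's label
# names (e.g. LABEL_0/1/2), a mapping to the gold labels may be needed.
preds = [out["label"] for out in classifier(test["text"].tolist())]
print(classification_report(test["label"].tolist(), preds))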

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.1.2
  • Datasets 2.19.0
  • Accelerate 0.30.0

How to use this model

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Fine-tuned sentiment model (3 classes) plus the original AlBERTo tokenizer.
model = AutoModelForSequenceClassification.from_pretrained('aequa-tech/sentiment-it', num_labels=3, ignore_mismatched_sizes=True)
tokenizer = AutoTokenizer.from_pretrained("m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)  # top_k=None returns scores for all classes
classifier("L'insostenibile leggerezza dell'essere")