Text Classification
Transformers
PyTorch
xlm-roberta
Inference Endpoints
Davlan committed
Commit 1019b8b
1 Parent(s): 8c644d6

Update README.md

Files changed (1)
  1. README.md +1 -8
README.md CHANGED
@@ -20,7 +20,7 @@ metrics:
 pipeline_tag: text-classification
 ---
 
-# naija-twitter-sentiment-afriberta-large
+# AfriSenti-twitter-sentiment-afroxlmr-large
 ## Model description
 **afrisenti-twitter-sentiment-afroxlmr-large** is the first multilingual twitter **sentiment classification** model for twelve (12) Nigerian languages (Amharic, Algerian Arabic, Darija, Hausa, Igbo, Kinyarwanda, Nigerian Pidgin, Mozambique Portuguese, Swahili, Tsonga, Twi, and Yorùbá) based on a fine-tuned castorini/afriberta_large large model.
 It achieves the **state-of-the-art performance** for the twitter sentiment classification task trained on the [AfriSenti corpus](https://github.com/afrisenti-semeval/afrisent-semeval-2023).
@@ -63,13 +63,6 @@ This model is limited by its training dataset and domain i.e Twitter. This may n
 
 ## Training procedure
 This model was trained on a single Nvidia A10 GPU with recommended hyperparameters from the [original AfriSenti paper](https://arxiv.org/abs/2302.08956).
-## Eval results on Test set (F-score), average over 5 runs.
-language|F1-score
--|-
-hau |81.2
-ibo |80.8
-pcm |74.5
-yor |80.4
 
 ### BibTeX entry and citation info
 ```
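Since the card tags the model with `pipeline_tag: text-classification`, it should be loadable through the standard `transformers` pipeline API. A minimal inference sketch follows; note the Hub id `Davlan/afrisenti-twitter-sentiment-afroxlmr-large` is an assumption pieced together from the card's new title and the committer's namespace, and the example tweet is illustrative only.

```python
from transformers import pipeline

# Assumed Hub id (card title + committer namespace); verify before use.
MODEL_ID = "Davlan/afrisenti-twitter-sentiment-afroxlmr-large"

def classify(texts):
    """Score a list of tweets with the card's text-classification pipeline."""
    clf = pipeline("text-classification", model=MODEL_ID)
    # Returns one {"label": ..., "score": ...} dict per input text.
    return clf(texts)

if __name__ == "__main__":
    # Illustrative tweet in Nigerian Pidgin (pcm), one of the covered languages.
    print(classify(["I dey feel this song well well!"]))
```

As the card notes, the model is trained on Twitter data only, so predictions on other text domains may be unreliable.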