FpOliveira committed on
Commit c80eb66 · 1 Parent(s): 59baefe

Update README.md

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -14,10 +14,10 @@ pipeline_tag: text-classification
 
 ## Introduction
 
-Tupi-Bert-Base is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) base. For further information or requests, please go to the [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).
-
-The performance of Language Models can change drastically when there is a domain shift between training and test data. To create a Portuguese Language Model adapted to the legal domain, the original BERTimbau model went through a fine-tuning stage in which one "PreTraining" epoch was performed over 30,000 Portuguese legal documents available online.
+Tupi-BERT-Base is a fine-tuned BERT model designed for binary classification of hate speech in Portuguese. Derived from [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased), the TuPi model family is a dedicated solution for addressing hate speech concerns.
+For more details or specific inquiries, please refer to the [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).
+The efficacy of Language Models can vary notably when there is a domain shift between training and test data. To create a Portuguese Language Model specialized for hate speech classification, the original BERTimbau model was fine-tuned for a single "PreTraining" epoch on the TuPi Hate Speech DataSet, sourced from diverse social networks.
 
 ## Available models
 
@@ -25,3 +25,5 @@ The performance of Language Models can change drastically when there is a domain
 | ---------------------------------------- | ---------- | ------- | ------- |
 | `FpOliveira/tupi-bert-base-portuguese-cased` | BERT-Base | 12 | 110M |
 | `FpOliveira/tupi-bert-large-portuguese-cased` | BERT-Large | 24 | 335M |
+| `FpOliveira/tupi-bert-large-portuguese-cased` | BERT-Large | 24 | 335M |
+| `FpOliveira/tupi-bert-large-portuguese-cased` | BERT-Large | 24 | 335M |
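For reference, a minimal inference sketch using the Hugging Face `transformers` library, assuming the checkpoints listed above load as standard sequence-classification models under the `text-classification` pipeline tag declared in this README; the example sentence and the returned label names are illustrative only, as this commit does not specify them.

```python
# Minimal sketch: run one of the TuPi checkpoints listed above through the
# transformers text-classification pipeline. Label names come from the model's
# own config and are not specified in this commit, so treat them as illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="FpOliveira/tupi-bert-base-portuguese-cased",
)

# Placeholder Portuguese input; replace with the text you want to classify.
print(classifier("Bom dia, tudo bem com você?"))
# Output shape: [{"label": "<model-specific label>", "score": <float>}]
```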