# CrisisTransformers
CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the paper "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://arxiv.org/abs/2309.05494)". The models were trained with the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with more than 30 crisis events, including disease outbreaks, natural disasters, and conflicts. Please refer to the associated paper for further details.

CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines such as BERT, RoBERTa, and BERTweet. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence encoder outperforms the state of the art by more than 17% in sentence encoding tasks.

## Uses
CrisisTransformers comprises eight pre-trained models and a sentence encoder. The pre-trained models should be fine-tuned for downstream tasks, just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoder can be used out-of-the-box, just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2), to encode sentences for tasks such as semantic search, clustering, and topic modelling.
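
Fine-tuning follows the standard Hugging Face `transformers` workflow. Below is a minimal sketch using one of the checkpoints listed under "Models and naming conventions"; the toy dataset, label scheme, and hyperparameters are illustrative assumptions, not the experimental setup from the paper.

```python
# Minimal fine-tuning sketch for a CrisisTransformers checkpoint.
# The dataset, labels, and hyperparameters are illustrative placeholders.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "crisistransformers/CT-M1-Complete"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy examples; substitute a real crisis-related dataset.
texts = ["Flood waters are rising fast near the river bank", "Great coffee this morning"]
labels = [1, 0]  # 1 = crisis-related, 0 = not crisis-related


class TweetDataset(torch.utils.data.Dataset):
    """Wraps tokenized tweets and labels for the Trainer API."""

    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ct-finetuned", num_train_epochs=3),
    train_dataset=TweetDataset(texts, labels),
)
trainer.train()
```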

## Models and naming conventions
*CT-M1* models were trained from scratch for up to 40 epochs. *CT-M2* models were initialized with pre-trained RoBERTa weights, *CT-M3* models were initialized with pre-trained BERTweet weights, and both were trained for up to 20 epochs. *OneLook* denotes the checkpoint after 1 epoch, *BestLoss* denotes the checkpoint with the lowest loss during training, and *Complete* denotes the checkpoint after all epochs. *SE* denotes sentence encoder.

| pre-trained model | source |
|--|--|
|CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
|CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
|CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
|CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
|CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
|CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
|CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
|CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|

| sentence encoder | source |
|--|--|
|CT-M1-Complete-SE|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
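
Since the sentence encoder is designed to work like Sentence-Transformers, it can be loaded with the `sentence-transformers` library. A minimal sketch, assuming the checkpoint loads through `SentenceTransformer`; the example tweets and the similarity step are illustrative:

```python
# Minimal sketch: encoding crisis-related texts with the sentence encoder.
# Example sentences are illustrative; if the checkpoint ships without a
# sentence-transformers config, mean pooling is applied by default.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("crisistransformers/CT-M1-Complete-SE")

sentences = [
    "Evacuations ordered as the wildfire spreads toward town",
    "Residents told to leave immediately because of the approaching fire",
    "The new cafe downtown has great coffee",
]
embeddings = encoder.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; the two fire-related tweets should score highest.
print(util.cos_sim(embeddings, embeddings))
```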

## Results
Here are the main results from the associated paper.

<p float="left">
  <img width="80%" alt="classification" src="https://raw.githubusercontent.com/rabindralamsal/images/main/cls.png" />
  <img width="55%" alt="sentence encoding" src="https://raw.githubusercontent.com/rabindralamsal/images/main/se.png" />
</p>

## Citation
If you use CrisisTransformers, please cite the following paper:
```bibtex
@misc{lamsal2023crisistransformers,
      title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
      author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera},
      year={2023},
      eprint={2309.05494},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```