---
tags:
- distilbert
- health
- tweet
datasets:
- custom-phm-tweets
metrics:
- accuracy
model-index:
- name: distilbert-phmtweets-sutd
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: custom-phm-tweets
      type: labelled
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.877
---

# distilbert-phmtweets-sutd

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) for text classification, trained to identify mentions of public health events in tweets. The project is based on the Emory University paper [Detection of Personal Health Mentions in Social Media](https://arxiv.org/pdf/1802.09130v2.pdf), which introduced the accompanying [custom dataset](https://github.com/emory-irlab/PHM2017).

It achieves the following results on the evaluation set:
- Accuracy: 0.877

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
```

### Model Evaluation Results

- Validation set accuracy: 0.8708661417322835
- Test set accuracy: 0.8772961058045555

## Reference for distilbert-base-uncased Model

```bibtex
@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}
```
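Once the tokenizer and model are loaded as shown in the Usage section, inference follows the standard `AutoModelForSequenceClassification` pattern: tokenize the tweet, run a forward pass, and take the argmax over the output logits. A minimal sketch (the example tweet is illustrative, and the mapping from class index to label name is an assumption, since this card does not list the label names):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
model.eval()

# Illustrative input tweet (not from the PHM2017 dataset)
text = "Finally got over the flu after a week in bed"

# Tokenize and run a forward pass without tracking gradients
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class is the index of the largest logit;
# model.config.id2label maps it to a label name if one is configured
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class, model.config.id2label.get(predicted_class))
```

Note that `torch.no_grad()` and `model.eval()` are used because this is inference only; they disable gradient tracking and training-time behaviors such as dropout.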