---
language: en
license: apache-2.0
datasets:
- ESGBERT/social_2k
tags:
- ESG
- social
---

# Model Card for SocialBERT-social

## Model Description

Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the SocialBERT-social language model: a language model trained to better classify social texts in the ESG domain.

Using the [SocialBERT-base](https://huggingface.co/ESGBERT/SocialBERT-base) model as a starting point, the SocialBERT-social language model is additionally fine-tuned on a 2k social dataset to detect social text samples.

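For illustration only, a minimal sketch of what such a fine-tuning run could look like with the Hugging Face `Trainer`, assuming the `ESGBERT/social_2k` dataset exposes `text` and `label` columns and a `train` split; the column names, label count, and hyperparameters below are assumptions, not the exact setup used to train this model:

```python
# Illustrative sketch only; not the exact training configuration behind SocialBERT-social.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("ESGBERT/social_2k")  # the 2k social dataset referenced above
tokenizer = AutoTokenizer.from_pretrained("ESGBERT/SocialBERT-base")

def tokenize(batch):
    # assumes the text column is named "text"
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

# binary head: social vs. not social (label count is an assumption)
model = AutoModelForSequenceClassification.from_pretrained("ESGBERT/SocialBERT-base", num_labels=2)

args = TrainingArguments(output_dir="SocialBERT-social", num_train_epochs=3, per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"], tokenizer=tokenizer)
trainer.train()
```
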
## How to Get Started With the Model

You can use the model with a pipeline for text classification:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

tokenizer_name = "ESGBERT/SocialBERT-social"
model_name = "ESGBERT/SocialBERT-social"

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, max_len=512)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)  # set device=0 to use GPU

# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
print(pipe("We follow rigorous supplier checks to prevent slavery and ensure workers' rights."))
```
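
For longer documents you would typically split the text into sentences and classify them in batches. A small sketch reusing the `pipe` object from above; the example sentences are illustrative placeholders:

```python
sentences = [
    "We follow rigorous supplier checks to prevent slavery and ensure workers' rights.",
    "Our new product line was launched in three additional markets this year.",
]

# The pipeline accepts a list of texts and returns one {"label": ..., "score": ...} dict per input.
results = pipe(sentences, padding=True, truncation=True)
for sentence, result in zip(sentences, results):
    print(f"{result['label']} ({result['score']:.3f}): {sentence}")
```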

## More details can be found in the paper

```bibtex
@article{Schimanski23ESGBERT,
    title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
    author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
    year={2023},
    journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
```