---
language: en
license: apache-2.0
datasets:
  - ESGBERT/governance_2k
tags:
  - ESG
  - governance
---

# Model Card for GovRoBERTa-governance

## Model Description

This is the GovRoBERTa-governance language model, trained to better classify governance-related text in the ESG domain.

Using the GovRoBERTa-base model as a starting point, the GovRoBERTa-governance language model is additionally fine-tuned on a 2k governance dataset to detect governance text samples.
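As a rough illustration of that fine-tuning step, the sketch below trains a binary sequence-classification head on the 2k governance dataset. It is not the authors' exact setup: the base checkpoint path, the column names (`text`, `label`), and the hyperparameters are assumptions.

```python
# Illustrative sketch only -- not the authors' exact training setup.
# The base checkpoint path, column names ("text", "label"), and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "ESGBERT/GovRoBERTa-base"  # assumed repository path of the starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# 2k governance dataset referenced in the metadata above
ds = load_dataset("ESGBERT/governance_2k")

def tokenize(batch):
    # Tokenize the text column, truncating to the model's 512-token limit
    return tokenizer(batch["text"], truncation=True, max_length=512)

ds = ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="govroberta-governance",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(model=model, args=args, train_dataset=ds["train"], tokenizer=tokenizer)
trainer.train()
```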

## How to Get Started With the Model

You can use the model with a pipeline for text classification:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

tokenizer_name = "ESGBERT/GovRoBERTa-governance"
model_name = "ESGBERT/GovRoBERTa-governance"

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, max_len=512)

# device=0 runs on the first GPU; omit the argument to run on CPU
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0)

# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
print(pipe("We also intend to improve both the monitoring (compliance) process of how our asset managers engage and engagement outcomes."))
```
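The pipeline also accepts a list of texts, so you can score a whole dataset in batches. The snippet below is a sketch that assumes the governance_2k dataset has a `text` column and a `train` split.

```python
# Sketch: batch-score a dataset (the "text" column and "train" split are assumptions)
from datasets import load_dataset

ds = load_dataset("ESGBERT/governance_2k", split="train")
for result in pipe(ds["text"], batch_size=16, truncation=True):
    print(result)  # e.g. {'label': ..., 'score': ...}
```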

More details can be found in the paper:

```bibtex
@article{Schimanski23ESGBERT,
    title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
    author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
    year={2023}
}
```