siebert committed
Commit 0561a83
1 Parent(s): e4b6e36

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -6,19 +6,19 @@ tags:
 
 
 # Overview
-This model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large) (Liu et al. 2019). It enables reliable binary sentiment analysis for various types of English-language text. For each instance it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 data sets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark) when used on new data as shown below.
+This model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large) (Liu et al. 2019). It enables reliable binary sentiment analysis for various types of English-language text. For each instance, it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 data sets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark) when used on new data as shown below.
 
 # Usage
-The model can be used with few lines of code. We suggest that you manually label a subset of your data to evaluate performance for your use case. For performance benchmark values across different sentiment analysis contexts, refer to our paper ([Heitmann et al. 2020](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3489963)). The model can also be used as a starting point for further [fine-tuning](https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer) on your sentiment analysis task.
+The model can be used with few lines of code. We suggest that you manually label a subset of your data to evaluate performance for your use case. For performance benchmark values across different sentiment analysis contexts, please refer to our paper ([Heitmann et al. 2020](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3489963)). The model can also be used as a starting point for further [fine-tuning](https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer) on your sentiment analysis task.
 
-The easiest way to use the model employs Huggingface's [sentiment analysis pipeline](https://huggingface.co/transformers/quicktour.html#getting-started-on-a-task-with-a-pipeline):
+The easiest way to use the model is Huggingface's [sentiment analysis pipeline](https://huggingface.co/transformers/quicktour.html#getting-started-on-a-task-with-a-pipeline):
 ```
 from transformers import pipeline
 sentiment_analysis = pipeline("sentiment-analysis",model="siebert/sentiment-roberta-large-english")
 print(sentiment_analysis("I love this!"))
 ```
 
-Alternatively you can load the model as follows:
+Alternatively, you can load the model as follows:
 ```
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 tokenizer = AutoTokenizer.from_pretrained("siebert/sentiment-roberta-large-english")
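
The hunk ends right after the tokenizer is loaded, so the rest of the alternative-loading example is not shown in this diff. Below is a minimal sketch of how such a snippet typically continues, assuming standard Transformers usage; the forward pass and label lookup are illustrative and not part of the commit:

```
# Sketch only: the commit's snippet stops after loading the tokenizer.
# The continuation below assumes standard Transformers usage, not the author's code.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("siebert/sentiment-roberta-large-english")
model = AutoModelForSequenceClassification.from_pretrained("siebert/sentiment-roberta-large-english")

# Tokenize an example input (reusing the pipeline example's text) and run a
# forward pass without gradient tracking.
inputs = tokenizer("I love this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the higher-scoring class; per the Overview, 0 is negative and 1 is positive.
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```

For comparison, the pipeline variant shown earlier prints a list of dicts with `label` and `score` keys; the exact label strings come from the model's configuration.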