Add model card (#2)
- Add model card (3b0d541517e4b56f955d2784813ea2694dbd6fb4)
- Update README.md (cf1ac3d21e579b7083aa77764db6bc2e9b146007)
- Update README.md (7e2b17308264612860745c0ff0154a8a1d126c36)
- Update README.md (91600d2b0b9adc32fdf38aee307d44fff59018d4)
- Update README.md (2af2c13364475b5fdc6d0f25d64a0097640ae25b)
Co-authored-by: Marissa Gerchick <Marissa@users.noreply.huggingface.co>
README.md
ADDED
@@ -0,0 +1,146 @@
---
language: en
license: mit
tags:
- exbert
datasets:
- bookcorpus
- wikipedia
---

# RoBERTa Base OpenAI Detector

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details

**Model Description:** RoBERTa Base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model on the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict whether text was generated by a GPT-2 model. OpenAI released this detector at the same time as the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.

- **Developed by:** OpenAI; see the [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for the full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa base](https://huggingface.co/roberta-base), [GPT-2 XL (the 1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-2 Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-2 Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
  - [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about automated ML-based detection)
  - [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
  - [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
  - [Explore the detector model here](https://huggingface.co/openai-detector)

## Uses

#### Direct Use

The model is a classifier that can be used to detect text generated by GPT-2 models.

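As a rough illustration of this direct use, the detector can be loaded as an off-the-shelf text-classification model. The snippet below is a minimal, unofficial sketch; the checkpoint ID `roberta-base-openai-detector` and the example text are assumptions, not details stated in this card.

```python
# Unofficial sketch: load the detector as a text-classification pipeline.
# The checkpoint ID "roberta-base-openai-detector" is an assumed Hub ID, not confirmed by this card.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

# Returns the predicted label and a confidence score for the input text.
print(detector("A short passage to check for GPT-2-generated text.", truncation=True))
```
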
#### Downstream Use

The model's developers state that they developed and released the model to support research on synthetic text generation, so it could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.

#### Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model to evade detection, or to support efforts to evade detection, would be a misuse of the model.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

#### Risks and Limitations

In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.

In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:

> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.

The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will become harder as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.

#### Bias

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built on and fine-tuned from) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa base](https://huggingface.co/roberta-base) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).

## Training

#### Training Data

The model is a sequence classifier based on RoBERTa base (see the [RoBERTa base model card](https://huggingface.co/roberta-base) for more details on the RoBERTa base training data), fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).

#### Training Procedure

The model developers write:

> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.

They later state:

> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.

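For orientation only, the setup described above roughly corresponds to fine-tuning a RoBERTa base sequence classifier on labeled examples of GPT-2 output versus human-written WebText. The sketch below is not the developers' original training code; the placeholder data, hyperparameters, and output directory are assumptions.

```python
# Unofficial sketch: fine-tune a RoBERTa base sequence classifier to separate
# GPT-2-generated text (label 1) from human-written WebText-style text (label 0).
# Placeholder data and hyperparameters are assumptions, not OpenAI's original recipe.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

texts = ["a human-written passage ...", "a GPT-2 generated passage ..."]  # placeholder examples
labels = [0, 1]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Tokenize and truncate to the RoBERTa context window.
dataset = Dataset.from_dict({"text": texts, "label": labels})
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-output-detector", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```
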
## Evaluation

The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).

#### Testing Data, Factors and Metrics

The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:

> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.

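As a loose illustration of this accuracy metric (not the original evaluation code), one could score a labeled set of human-written and GPT-2-generated texts and count correct predictions. The checkpoint ID, label names, and toy examples below are assumptions.

```python
# Unofficial sketch of the accuracy measurement described above.
# Checkpoint ID, label names ("Real"/"Fake"), and the toy examples are assumptions.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

# Each pair is (text, is_generated); a real evaluation would use thousands of held-out samples.
examples = [
    ("a human-written passage from WebText ...", False),
    ("a sample generated by the 1.5B GPT-2 model ...", True),
]

correct = 0
for text, is_generated in examples:
    predicted_label = detector(text, truncation=True)[0]["label"]
    correct += (predicted_label == "Fake") == is_generated  # assumed label convention

print(f"accuracy: {correct / len(examples):.2%}")
```
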
#### Results

The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):

> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy... The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Technical Specifications

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for details on the modeling architecture and training.

## Citation Information

```bibtex
@article{solaiman2019release,
  title={Release strategies and the social impacts of language models},
  author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
  journal={arXiv preprint arXiv:1908.09203},
  year={2019}
}
```

APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.

## Model Card Authors

This model card was written by the team at Hugging Face.

## How to Get Started with the Model

More information needed.
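
Until official usage examples are added, the snippet below is a minimal, unofficial sketch using the lower-level `transformers` API. The checkpoint ID `roberta-base-openai-detector` and the meaning of its label mapping are assumptions, not details confirmed by this card.

```python
# Unofficial sketch: score a passage with the detector and print class probabilities.
# The checkpoint ID and its label mapping are assumptions, not confirmed by this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "roberta-base-openai-detector"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Text to score for being GPT-2-generated.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax converts the two logits into class probabilities; label names come from the model config.
probabilities = torch.softmax(logits, dim=-1)[0]
for index, probability in enumerate(probabilities):
    print(model.config.id2label[index], f"{probability.item():.3f}")
```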