Update README.md

README.md CHANGED
@@ -1,9 +1,16 @@
 ---
 tags:
 - generated_from_trainer
+- text-generation-inference
 model-index:
 - name: gpt2-commongen-finetuned
   results: []
+license: cc-by-sa-4.0
+datasets:
+- Non-Residual-Prompting/C2Gen
+language:
+- en
+pipeline_tag: text-generation
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -11,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # gpt2-commongen-finetuned
 
-This model is a fine-tuned version of [gpt2
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2/) on the [Non-Residual-Prompting/C2Gen](https://huggingface.co/datasets/Non-Residual-Prompting/C2Gen) dataset.
 
 ## Model description
 
@@ -21,11 +28,12 @@ More information needed
 
 More information needed
 
-
+### Dataset Summary
 
-
+CommonGen [Lin et al., 2020](https://arxiv.org/abs/1911.03705) is a dataset for the constrained text generation task of word inclusion, but the task provides no surrounding context for the target words. To complement CommonGen, C2Gen [Carlsson et al., 2022](https://aclanthology.org/2022.acl-long.471) is an extended test set in which an additional context is given for each set of target words. The task is thus reformulated: generate commonsensical text that both includes the given words and adheres to the given context.
 
 ## Training procedure
+- Causal Language Modelling
 
 ### Training hyperparameters
 
@@ -44,4 +52,4 @@ The following hyperparameters were used during training:
 - Transformers 4.27.3
 - Pytorch 1.13.1+cu116
 - Datasets 2.13.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
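The word-inclusion constraint that the dataset summary describes (every target word must appear in the generated text) can be sketched as a simple check. This is an illustrative sketch only, not the dataset's official evaluation script; the real CommonGen evaluation also credits inflected forms of the target words, which this naive exact-match version skips:

```python
import re

def satisfies_word_inclusion(text: str, concepts: list[str]) -> bool:
    # CommonGen/C2Gen constraint: every target word must appear in the
    # generated text. Naive version: exact whole-word, case-insensitive
    # match (the real evaluation also accepts morphological variants).
    words = set(re.findall(r"[a-z]+", text.lower()))
    return all(concept.lower() in words for concept in concepts)

generated = "The dog jumped up to catch the frisbee in the park."
print(satisfies_word_inclusion(generated, ["dog", "frisbee", "catch"]))  # True
print(satisfies_word_inclusion(generated, ["dog", "ball", "catch"]))     # False
```

In C2Gen the generated text must additionally continue a given context passage, which a string check like this cannot judge; that part of the task is evaluated on commonsense plausibility.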
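The training procedure names Causal Language Modelling. In that setup the labels are just the input token ids shifted one position to the left, so the model learns to predict each next token from the tokens before it; the Hugging Face `Trainer` performs this shift inside the model's loss computation. A minimal framework-free illustration (the token ids are arbitrary example values):

```python
def clm_pair(token_ids: list[int]) -> tuple[list[int], list[int]]:
    # Causal language modelling predicts token t+1 from tokens 0..t,
    # so each training pair is (inputs, inputs shifted left by one).
    return token_ids[:-1], token_ids[1:]

inputs, labels = clm_pair([464, 3290, 11687, 13])
print(inputs)  # [464, 3290, 11687]
print(labels)  # [3290, 11687, 13]
```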