jonra1993 committed
Commit 802ca46 · 1 Parent(s): ff9f31f

update model card README.md

README.md CHANGED
@@ -1,21 +1,10 @@
 ---
 license: mit
 tags:
-- text-generation
-- superheroes
-- superheroe
-- name-generation
 - generated_from_trainer
 model-index:
 - name: jrtec-gpt2-superheroes-name-generator
   results: []
-datasets:
-- jrtec/Superheroes
-language:
-- en
-widget:
-- text: "hero: "
-  example_title: "New superhero name"
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -23,9 +12,7 @@ should probably proofread and complete it, then remove this comment. -->

 # jrtec-gpt2-superheroes-name-generator

-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the datasetX dataset.
-It achieves the following results on the evaluation set:
-- Loss: 3.2769
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.

 ## Model description

@@ -50,13 +37,10 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 5
+- num_epochs: 3

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 1.6025        | 3.45  | 500  | 3.2769          |


 ### Framework versions
@@ -64,4 +48,4 @@ The following hyperparameters were used during training:
 - Transformers 4.25.1
 - Pytorch 1.13.0+cu116
 - Datasets 2.8.0
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
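
For orientation on what the card describes, here is a minimal generation sketch. The repo id `jrtec/jrtec-gpt2-superheroes-name-generator` is an assumption inferred from the card's model-index name, and the `"hero: "` prompt is taken from the widget example this commit removes from the metadata, so both may need adjusting.

```python
from transformers import pipeline

# Repo id is an assumption inferred from the model-index name in the card;
# adjust it if the model lives under a different namespace.
generator = pipeline(
    "text-generation",
    model="jrtec/jrtec-gpt2-superheroes-name-generator",
)

# The "hero: " prefix mirrors the widget example removed in this commit;
# the model may expect a different prompt format.
outputs = generator(
    "hero: ",
    max_new_tokens=12,
    do_sample=True,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```

Keeping `max_new_tokens` small biases the output toward a short, name-like completion rather than a longer passage.
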
runs/Jan08_18-00-17_f279fa53eed4/events.out.tfevents.1673203551.f279fa53eed4.755.12 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fbdac9ac5dfb97518f47fde953a448b2247a1a25e9320fcaf03f80214584251a
-size 40
+oid sha256:d31fa974a4aef43650c87d053640dfb9312447424475ad7d293997c6f0dadf0a
+size 4094
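
The entry above is only a Git LFS pointer; the underlying TensorBoard event file (which grows from 40 bytes to 4094 bytes in this commit) has to be fetched with `git lfs pull` before it can be read. A minimal sketch for inspecting it locally, assuming TensorBoard is installed and that the Trainer logged scalars under tags such as `train/loss` (tag names are not confirmed by this diff):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Path from this commit; run `git lfs pull` first so the LFS pointer is
# replaced by the real event file.
path = "runs/Jan08_18-00-17_f279fa53eed4/events.out.tfevents.1673203551.f279fa53eed4.755.12"

acc = EventAccumulator(path)
acc.Reload()  # parse the event file

# List whichever scalar tags were actually logged, then dump one of them.
print(acc.Tags()["scalars"])
for event in acc.Scalars("train/loss"):  # tag name is an assumption
    print(event.step, event.value)
```
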