dvitel committed on
Commit
272f2b7
1 Parent(s): c004fa8

update model card README.md

Files changed (1)
README.md +75 -16
README.md CHANGED
@@ -1,23 +1,82 @@
  ---
- language:
- - en
- tags:
- - transformers
  license: apache-2.0
- datasets:
- - dvitel/hearthstone
  metrics:
- - exact_match
  - bleu
- - dvitel/codebleu
- - chrf
  ---

- Application of distilgpt2 to HearthStone card code synthesis. Dataset: [dvitel/hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) \
- Article under consideration: [Abstract Syntax Networks for Code Generation and Semantic Parsing](https://aclanthology.org/P17-1105.pdf) \
- We check whether distilgpt2 can produce better results than the ASNs from the article. The H0 model applies minimal preprocessing to the dataset, normalizing the Python code:
-
- ```python
- def normalize(line: str):
-     return line.strip().replace("§", "\n").replace(" ", "\t").replace("\\ ", "").replace("\n\n", "\n")
- ```
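For illustration, a minimal self-contained sketch of that normalization applied to an invented input (the sample string is hypothetical; "§" is assumed to be the dataset's newline marker):

```python
def normalize(line: str):
    return line.strip().replace("§", "\n").replace(" ", "\t").replace("\\ ", "").replace("\n\n", "\n")

# Hypothetical serialized line; "§" stands in for newlines in the dataset.
sample = "def use(self):§\treturn None"
print(normalize(sample))  # "§" markers are expanded back into real newlines
```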
 
  ---
  license: apache-2.0
+ tags:
+ - generated_from_trainer
  metrics:
  - bleu
+ model-index:
+ - name: h0
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # h0
+
+ This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [dvitel/hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3117
+ - Exact Match: 0.1970
+ - Bleu: 0.9085
+ - Codebleu: 0.7341
+ - Ngram Match Score: 0.7211
+ - Weighted Ngram Match Score: 0.7299
+ - Syntax Match Score: 0.7536
+ - Dataflow Match Score: 0.7317
+ - Chrf: 92.8689
+
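These scores mix string metrics with code-aware ones; a minimal sketch of how they can be computed with the 🤗 `evaluate` library (an assumed setup, since the card does not include the evaluation script; `dvitel/codebleu` is loaded as a community metric from the Hub):

```python
import evaluate

# Toy prediction/reference pair; the real evaluation presumably used the
# dvitel/hearthstone evaluation split.
preds = ["def foo():\n    return 1"]
refs = ["def foo():\n    return 1"]

exact_match = evaluate.load("exact_match")
bleu = evaluate.load("bleu")
chrf = evaluate.load("chrf")
codebleu = evaluate.load("dvitel/codebleu")  # community metric hosted on the Hub

print(exact_match.compute(predictions=preds, references=refs))
print(bleu.compute(predictions=preds, references=[[r] for r in refs]))
print(chrf.compute(predictions=preds, references=[[r] for r in refs]))
print(codebleu.compute(predictions=preds, references=[[r] for r in refs]))  # signature assumed to mirror bleu
```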
+ ## Model description
+
+ distilgpt2 fine-tuned for HearthStone card code synthesis, to compare against the Abstract Syntax Networks of the article cited above.
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ Trained and evaluated on [dvitel/hearthstone](https://huggingface.co/datasets/dvitel/hearthstone).
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 17
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 200
+ - mixed_precision_training: Native AMP
+
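These settings map roughly onto standard 🤗 Transformers `TrainingArguments`; a minimal sketch under that assumption (the actual training script is not part of this card, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above, not the author's exact script.
args = TrainingArguments(
    output_dir="h0",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=17,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=200,
    fp16=True,  # "Native AMP" mixed precision
)
```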
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Exact Match | Bleu | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf |
+ |:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
+ | 0.543 | 11.94 | 1600 | 0.2701 | 0.0152 | 0.8552 | 0.6144 | 0.6027 | 0.6136 | 0.6431 | 0.5982 | 89.0280 |
+ | 0.1459 | 23.88 | 3200 | 0.2408 | 0.0909 | 0.8841 | 0.6733 | 0.6610 | 0.6719 | 0.7210 | 0.6393 | 91.2517 |
+ | 0.0801 | 35.82 | 4800 | 0.2498 | 0.1515 | 0.8966 | 0.6999 | 0.6954 | 0.7054 | 0.7326 | 0.6662 | 92.1356 |
+ | 0.0498 | 47.76 | 6400 | 0.2569 | 0.1818 | 0.9012 | 0.7015 | 0.7022 | 0.7114 | 0.7428 | 0.6496 | 92.4668 |
+ | 0.0323 | 59.7 | 8000 | 0.2732 | 0.1667 | 0.9044 | 0.7241 | 0.7025 | 0.7123 | 0.7551 | 0.7266 | 92.5429 |
+ | 0.0214 | 71.64 | 9600 | 0.2896 | 0.1667 | 0.9034 | 0.7228 | 0.7101 | 0.7195 | 0.7670 | 0.6945 | 92.4258 |
+ | 0.015 | 83.58 | 11200 | 0.2870 | 0.1667 | 0.9046 | 0.7292 | 0.7137 | 0.7228 | 0.7667 | 0.7137 | 92.5979 |
+ | 0.0121 | 95.52 | 12800 | 0.2907 | 0.1667 | 0.9075 | 0.7287 | 0.7198 | 0.7297 | 0.7696 | 0.6958 | 92.7074 |
+ | 0.0093 | 107.46 | 14400 | 0.2976 | 0.1667 | 0.9073 | 0.7365 | 0.7134 | 0.7238 | 0.7732 | 0.7356 | 92.8347 |
+ | 0.0073 | 119.4 | 16000 | 0.3037 | 0.1818 | 0.9085 | 0.7326 | 0.7154 | 0.7241 | 0.7529 | 0.7381 | 92.8343 |
+ | 0.006 | 131.34 | 17600 | 0.3047 | 0.1970 | 0.9104 | 0.7410 | 0.7230 | 0.7312 | 0.7667 | 0.7433 | 92.8286 |
+ | 0.005 | 143.28 | 19200 | 0.3080 | 0.1970 | 0.9088 | 0.7377 | 0.7232 | 0.7316 | 0.7746 | 0.7214 | 92.8035 |
+ | 0.0044 | 155.22 | 20800 | 0.3071 | 0.1970 | 0.9076 | 0.7343 | 0.7196 | 0.7283 | 0.7783 | 0.7112 | 92.7742 |
+ | 0.004 | 167.16 | 22400 | 0.3097 | 0.1970 | 0.9082 | 0.7440 | 0.7236 | 0.7334 | 0.7601 | 0.7587 | 92.8117 |
+ | 0.0035 | 179.1 | 24000 | 0.3111 | 0.1970 | 0.9080 | 0.7355 | 0.7204 | 0.7295 | 0.7616 | 0.7304 | 92.7990 |
+ | 0.0036 | 191.04 | 25600 | 0.3117 | 0.1970 | 0.9085 | 0.7341 | 0.7211 | 0.7299 | 0.7536 | 0.7317 | 92.8689 |
+
+
+ ### Framework versions

+ - Transformers 4.24.0
+ - Pytorch 1.13.0
+ - Datasets 2.6.1
+ - Tokenizers 0.13.1
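A quick way to load and sample from the fine-tuned model (the Hub repo id `dvitel/h0` is an assumption based on the author and model name, and the prompt is a hypothetical serialized card description):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the author/model name; adjust if it differs.
tokenizer = AutoTokenizer.from_pretrained("dvitel/h0")
model = AutoModelForCausalLM.from_pretrained("dvitel/h0")

# Hypothetical prompt; the real input format follows the dataset's serialization.
prompt = "Soulfire NAME_END 0 ATK_END -1 DEF_END 1 COST_END"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```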