dvitel committed
Commit 999b43e (1 parent: de27158)

Update README.md

Files changed (1)
  1. README.md +35 -13
README.md CHANGED
@@ -1,20 +1,44 @@
 ---
 license: apache-2.0
 tags:
-- generated_from_trainer
+- distilgpt2
+- hearthstone
 metrics:
 - bleu
+- dvitel/codebleu
+- exact_match
+- chrf
+datasets:
+- dvitel/hearthstone
 model-index:
-- name: h3
-  results: []
+- name: h0
+  results:
+  - task:
+      type: text-generation
+      name: Python Code Synthesis
+    dataset:
+      type: dvitel/hearthstone
+      name: HearthStone
+      split: test
+    metrics:
+    - type: exact_match
+      value: 0.30303030303030304
+      name: Exact Match
+    - type: bleu
+      value: 0.8850182403024257
+      name: BLEU
+    - type: dvitel/codebleu
+      value: 0.677852377992836
+      name: CodeBLEU
+    - type: chrf
+      value: 91.00848749530383
+      name: chrF
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # h3

-This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
+This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
+[GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h3.py).
 It achieves the following results on the evaluation set:
 - Loss: 0.2782
 - Exact Match: 0.2879
@@ -28,15 +52,13 @@ It achieves the following results on the evaluation set:

 ## Model description

-More information needed
+DistilGPT2 fine-tuned on the HearthStone dataset for 200 epochs. \
+Related to [dvitel/h0](https://huggingface.co/dvitel/h0), but with preprocessing that anonymizes class and function variable names (local renaming). \
+[dvitel/h2](https://huggingface.co/dvitel/h2) implements global renaming, where all names are removed; global renaming showed worse results than local renaming.

 ## Intended uses & limitations

-More information needed
-
-## Training and evaluation data
-
-More information needed
+HearthStone card code synthesis.

 ## Training procedure
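
The updated "Model description" above refers to local renaming, i.e. anonymizing class and variable names within each training sample. The actual preprocessing lives in `h3.py` in the linked GitHub repo; the snippet below is only a minimal sketch of the idea, using Python's `ast` module and made-up placeholder names (`CLS0`, `v0`, ...).

```python
# Illustrative sketch of local renaming (not the preprocessing from h3.py):
# within one code sample, class names and function parameters are replaced by
# positional placeholders, and later uses of those names are rewritten to match.
import ast

class LocalRenamer(ast.NodeTransformer):
    def __init__(self):
        self.mapping = {}  # per-sample mapping: original name -> placeholder

    def _alias(self, name, prefix):
        if name not in self.mapping:
            self.mapping[name] = f"{prefix}{len(self.mapping)}"
        return self.mapping[name]

    def visit_ClassDef(self, node):
        node.name = self._alias(node.name, "CLS")
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        if node.arg != "self":  # keep `self` readable
            node.arg = self._alias(node.arg, "v")
        return node

    def visit_Name(self, node):
        # Rewrite only names that were anonymized above.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

src = "class Ragnaros(MinionCard):\n    def __init__(self, power):\n        self.power = power\n"
print(ast.unparse(LocalRenamer().visit(ast.parse(src))))
# -> class CLS0(MinionCard):
#        def __init__(self, v1):
#            self.power = v1
```

Because the mapping is rebuilt per sample, names stay consistent inside a sample but carry no information across samples, which is presumably what distinguishes this "local" scheme from the global renaming used in [dvitel/h2](https://huggingface.co/dvitel/h2).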
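The intended use is HearthStone card code synthesis. As a rough sketch of how one might query the model with the `transformers` library — assuming the model is published under the id `dvitel/h3`, and noting that the exact prompt format (how a card description is serialized) is defined by the preprocessing in the linked repo rather than by this card:

```python
# Hedged usage sketch: the model id and prompt format are assumptions, see the note above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dvitel/h3"  # assumed repo id for this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder input: a serialized HearthStone card description.
card = "<serialized card description>"
inputs = tokenizer(card, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,                      # greedy decoding for deterministic code output
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizers define no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```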
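The reported metrics (BLEU, chrF, exact match, plus the community CodeBLEU metric `dvitel/codebleu`) can in principle be recomputed with the `evaluate` library; the snippet below shows the standard metrics on toy data and is not the evaluation script from the repo.

```python
# Toy example of computing the standard metrics reported in the card with `evaluate`.
import evaluate

predictions = ["class CLS0(MinionCard):\n    pass"]   # generated code (toy)
references = [["class CLS0(MinionCard):\n    pass"]]  # reference code (toy)

bleu = evaluate.load("bleu")
chrf = evaluate.load("chrf")
exact_match = evaluate.load("exact_match")

print(bleu.compute(predictions=predictions, references=references))
print(chrf.compute(predictions=predictions, references=references))
print(exact_match.compute(predictions=predictions, references=[r[0] for r in references]))

# CodeBLEU is listed as the community metric "dvitel/codebleu"; presumably it can be
# loaded the same way via evaluate.load("dvitel/codebleu") (its exact signature is not shown here).
```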