---
tags:
- generated_from_trainer
model-index:
- name: TinyStories-3M-val-Hebrew
  results: []
license: mit
language:
- he
datasets:
- Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT
widget:
  - text: היה פעם
  - text: פעם אחת
  - text: החתול שלך מאוד חמוד ו
pipeline_tag: text-generation
---

# TinyStories-3M-val-Hebrew

This model was trained on [Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT](https://huggingface.co/datasets/Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT).

The dataset is a machine translation of [TinyStoriesV2-GPT4-valid.txt](https://huggingface.co/datasets/roneneldan/TinyStories/blob/main/TinyStoriesV2-GPT4-valid.txt) by [roneneldan](https://huggingface.co/roneneldan).

Translation was done using [this](https://huggingface.co/datasets/Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT/blob/main/translate_file_2.py) script.

The original [Dataset](https://huggingface.co/datasets/roneneldan/TinyStories) contains synthetically generated short stories (written by GPT-3.5 and GPT-4) that use only a small vocabulary.
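
If you want to inspect the Hebrew data, here is a minimal loading sketch. It assumes the standard `datasets` API; the available split names may differ from what the printout shows.

```
from datasets import load_dataset

# Load the Hebrew line-by-line dataset from the Hub (sketch; splits may vary).
dataset = load_dataset("Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT")
print(dataset)
```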

## Model description

A very small model (8M parameters) trained on a very small dataset.

A [sample inference script](https://huggingface.co/Norod78/TinyStories-3M-val-Hebrew/blob/main/TinyStories-3M-val-Hebrew-inference.py) is available
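
For convenience, here is a minimal generation sketch using the `transformers` text-generation pipeline. This is an illustrative example, not the linked script, and the sampling parameters are assumptions.

```
from transformers import pipeline

# Load the model from the Hub into a text-generation pipeline (sketch).
generator = pipeline("text-generation", model="Norod78/TinyStories-3M-val-Hebrew")

prompt = "היה פעם"  # "Once upon a time" (one of the widget example prompts)
outputs = generator(
    prompt,
    max_new_tokens=128,   # illustrative sampling settings, not the official ones
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(outputs[0]["generated_text"])
```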

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 300.0
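
These values map roughly onto a `TrainingArguments` configuration. The sketch below is an assumption-based reconstruction, not the exact training command; `output_dir` and any settings not listed above are placeholders.

```
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the listed hyperparameters (unlisted values are assumptions).
training_args = TrainingArguments(
    output_dir="TinyStories-3M-val-Hebrew",  # placeholder
    learning_rate=4e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=500,
    num_train_epochs=300.0,
)
```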

### Framework versions

- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3

### Parameter calculation

```
def gpt_params(seq_len, vocab_size, d_model, num_heads, num_layers):
    """ Given GPT config calculate total number of parameters """
    ffw_size = 4*d_model # in GPT the number of intermediate features is always 4*d_model
    # token and position embeddings
    embeddings = d_model * vocab_size + d_model * seq_len
    # transformer blocks
    attention = 3*d_model**2 + 3*d_model # weights and biases
    attproj = d_model**2 + d_model
    ffw = d_model*(ffw_size) + ffw_size
    ffwproj = ffw_size*d_model + d_model
    layernorms = 2*2*d_model
    # dense
    ln_f = 2*d_model
    dense = d_model*vocab_size # note: no bias here
    # note: embeddings are not included in the param count!
    total_params = num_layers*(attention + attproj + ffw + ffwproj + layernorms) + ln_f + dense
    return total_params

#gpt2 = dict(seq_len = 1024, vocab_size = 50257, d_model = 768, num_heads = 12, num_layers = 12)
gpt2 = dict(seq_len = 256, vocab_size = 50259, d_model = 128, num_heads = 16, num_layers = 8)
result = gpt_params(**gpt2)/1e6
print(result) #Prints 8.019584
```