GPT-2 model having 16 4-float attention heads

#2
by MartialTerran - opened

According to your config file, you trained a 3M-parameter, 8-layer GPT-2 model with 16 attention heads of only 4 floats each: "n_embd" divided by "n_head" = 64/16 = 4 floats per attention head. Thank you for doing this. (Has anyone else ever used so few floats per attention head? Have you considered 64 x 1-float-wide heads, or 32 x 2-float-wide heads, at n_embd = 64?) And your tiny 3M model achieved word-level integrity (each word is proper English and spelled correctly) and sentence-level coherence (each sentence hangs together). [Is there a Python-scripted way to automatically determine how many different words the tiny 3M GPT-2 model actually knows how to spell/form correctly, and how to use those words in a coherent sentence?]
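Here is a rough sketch of the kind of script I have in mind (it assumes the checkpoint loads with the transformers library and that a plain-text English word list such as english_words.txt is available locally; the paths and prompts are placeholders, and this only checks spelling coverage, not coherent usage):

```python
# Sketch: sample many continuations from the tiny model and count how many
# distinct, correctly spelled English words appear in its output.
# "path/to/tiny-gpt2" and "english_words.txt" are placeholders, not files
# shipped with this repo.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "path/to/tiny-gpt2"          # placeholder checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)
model.eval()

# Reference dictionary: one lowercase word per line (placeholder file).
with open("english_words.txt", encoding="utf-8") as f:
    dictionary = {line.strip().lower() for line in f if line.strip()}

prompts = ["Once upon a time", "One day, a little", "The big dog"]
seen, misspelled = set(), set()

with torch.no_grad():
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt")
        for _ in range(20):  # 20 samples per prompt
            out = model.generate(
                **inputs, max_new_tokens=128, do_sample=True,
                top_k=50, pad_token_id=tokenizer.eos_token_id,
            )
            text = tokenizer.decode(out[0], skip_special_tokens=True)
            for word in re.findall(r"[A-Za-z']+", text):
                w = word.lower().strip("'")
                if w:
                    (seen if w in dictionary else misspelled).add(w)

print(f"distinct dictionary words produced: {len(seen)}")
print(f"distinct out-of-dictionary tokens:  {len(misspelled)}")
```

Judging whether the words are also used in coherent sentences would need a separate grader (a human, or a larger model scoring the samples).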

I have shown at https://huggingface.co/datasets/roneneldan/TinyStories/discussions that the TinyStoriesV2-GPT4-train.txt dataset is polluted with many misspellings, many junk words, and even some Chinese characters. Could you build or obtain a clean TinyStoriesV3-train.txt dataset containing coherent stories composed of only five thousand unique words (e.g., words known to a 5-year-old), or a new dataset of ten thousand unique words, build a proper matching vocabulary (not the bloated 50,000-token BPE set obtained from web scraping), and see whether your next 3M GPT-2 tiny model performs better on such a coherent dataset with a proper vocabulary? Maybe also train another model with a sentencepiece-style whole-word vocab instead of a BPE vocab, to further boost tiny-model performance.
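As a rough starting point for such a cleanup, something like the following filter might work (the <|endoftext|> story separator and the allowed-word list file are assumptions about the input format, and the output name TinyStoriesV3-train.txt is just a placeholder):

```python
# Sketch: keep only stories whose words all fall inside a small allowed list,
# producing a candidate "TinyStoriesV3-train.txt". The separator token and
# the allowed-word file are assumptions about the input format.
import re

SEP = "<|endoftext|>"  # story separator assumed for the TinyStories txt dumps

with open("allowed_words_5k.txt", encoding="utf-8") as f:   # placeholder list
    allowed = {line.strip().lower() for line in f if line.strip()}

with open("TinyStoriesV2-GPT4-train.txt", encoding="utf-8") as f:
    stories = f.read().split(SEP)

kept = []
for story in stories:
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", story)]
    # Drop empty fragments, stories with non-ASCII junk, or out-of-list words.
    if not words or not story.strip().isascii():
        continue
    if all(w in allowed for w in words):
        kept.append(story.strip())

with open("TinyStoriesV3-train.txt", "w", encoding="utf-8") as f:
    f.write(("\n" + SEP + "\n").join(kept))

print(f"kept {len(kept)} of {len(stories)} stories")
```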

[Note: Andrej Karpathy developed a Python script that generates a new (smaller) vocab from the TinyStories dataset itself. To tokenize the data with a custom tokenizer that we train ourselves with sentencepiece, e.g.:
python tinystories.py download
python tinystories.py train_vocab --vocab_size=2048
python tinystories.py pretokenize --vocab_size=2048
See https://raw.githubusercontent.com/karpathy/llama2.c/master/tinystories.py ]

By generating a reduced vocab that matches the words actually used in the training dataset, you eliminate training inefficiencies and may further improve the tiny model's performance.
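For the whole-word vocab idea, a minimal sentencepiece sketch might look like this (the input file name and the 4096-word budget are placeholders of my own choosing):

```python
# Sketch: train a small word-level sentencepiece model directly on the story
# text, instead of reusing GPT-2's 50,257-token web-scraped BPE vocab.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="TinyStoriesV3-train.txt",   # cleaned dataset (placeholder name)
    model_prefix="tinystories_word",   # writes tinystories_word.{model,vocab}
    vocab_size=4096,                   # small budget; tune to the dataset
    model_type="word",                 # whole-word pieces instead of BPE
    character_coverage=1.0,            # keep all characters seen in the text
)

sp = spm.SentencePieceProcessor(model_file="tinystories_word.model")
print(sp.encode("Once upon a time there was a little dog.", out_type=str))
```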

Also, can you provide a pure-Python version of your GPT-2 model.py (not based on C code or on Hugging Face libraries), so that the model can be run for local inference on my Windows 10 machine and further modified for study?
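For reference, here is the kind of self-contained PyTorch sketch I mean, built only from the numbers in the posted config (a nanoGPT-style reimplementation for study, not your actual model.py; the weight tying and the omission of dropout are my assumptions):

```python
# Sketch: a minimal GPT-2 in plain PyTorch matching the posted config
# (n_layer=8, n_head=16, n_embd=64, n_positions=1024, vocab_size=50257).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    def __init__(self, n_embd=64, n_head=16, n_positions=1024):
        super().__init__()
        self.n_head, self.head_dim = n_head, n_embd // n_head  # 4 floats/head
        self.c_attn = nn.Linear(n_embd, 3 * n_embd)            # fused q,k,v
        self.c_proj = nn.Linear(n_embd, n_embd)
        mask = torch.tril(torch.ones(n_positions, n_positions))
        self.register_buffer("mask", mask.view(1, 1, n_positions, n_positions))

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.c_attn(x).split(C, dim=2)
        # reshape to (B, n_head, T, head_dim)
        q = q.view(B, T, self.n_head, self.head_dim).transpose(1, 2)
        k = k.view(B, T, self.n_head, self.head_dim).transpose(1, 2)
        v = v.view(B, T, self.n_head, self.head_dim).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        att = att.masked_fill(self.mask[:, :, :T, :T] == 0, float("-inf"))
        y = F.softmax(att, dim=-1) @ v
        y = y.transpose(1, 2).contiguous().view(B, T, C)
        return self.c_proj(y)

class Block(nn.Module):
    def __init__(self, n_embd=64, n_head=16, n_positions=1024):
        super().__init__()
        self.ln_1 = nn.LayerNorm(n_embd, eps=1e-5)
        self.attn = CausalSelfAttention(n_embd, n_head, n_positions)
        self.ln_2 = nn.LayerNorm(n_embd, eps=1e-5)
        self.mlp = nn.Sequential(            # n_inner = null -> 4 * n_embd
            nn.Linear(n_embd, 4 * n_embd),
            nn.GELU(approximate="tanh"),     # matches "gelu_new"
            nn.Linear(4 * n_embd, n_embd))

    def forward(self, x):
        x = x + self.attn(self.ln_1(x))      # pre-norm residual, as in GPT-2
        return x + self.mlp(self.ln_2(x))

class TinyGPT2(nn.Module):
    def __init__(self, vocab_size=50257, n_positions=1024,
                 n_embd=64, n_layer=8, n_head=16):
        super().__init__()
        self.wte = nn.Embedding(vocab_size, n_embd)    # token embeddings
        self.wpe = nn.Embedding(n_positions, n_embd)   # learned positions
        self.h = nn.ModuleList([Block(n_embd, n_head, n_positions)
                                for _ in range(n_layer)])
        self.ln_f = nn.LayerNorm(n_embd, eps=1e-5)
        self.lm_head = nn.Linear(n_embd, vocab_size, bias=False)
        self.lm_head.weight = self.wte.weight          # assumed weight tying

    def forward(self, idx):
        pos = torch.arange(idx.size(1), device=idx.device)
        x = self.wte(idx) + self.wpe(pos)
        for block in self.h:
            x = block(x)
        return self.lm_head(self.ln_f(x))              # logits (B, T, vocab)

if __name__ == "__main__":
    model = TinyGPT2()
    logits = model(torch.randint(0, 50257, (1, 16)))
    print(logits.shape)  # torch.Size([1, 16, 50257])
```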

Also, can you provide an address table revealing the internal structure of the 3M parameters, so that I can study the parameter data directly in your checkpoints containing the weights and biases you obtained? For example, the byte-range addresses of each layer's weights and biases, and the start and end addresses of the vocab/tokenizer embeddings, within the model checkpoints?
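Failing that, a rough way to build such a table from a PyTorch state_dict checkpoint would be the following (the filename is a placeholder, a .safetensors checkpoint would need the safetensors loader instead, and the offsets are packed-in-order offsets rather than literal file addresses):

```python
# Sketch: print an "address table" for a PyTorch state_dict checkpoint,
# listing each tensor's shape, element count, byte size, and the cumulative
# byte offset it would occupy if the tensors were packed in order.
import torch

state = torch.load("pytorch_model.bin", map_location="cpu")  # placeholder name

offset = 0
print(f"{'parameter':48s} {'shape':>18s} {'numel':>10s} {'bytes':>10s} {'start':>12s}")
for name, tensor in state.items():
    if not torch.is_tensor(tensor):
        continue  # skip any non-tensor entries in the checkpoint
    nbytes = tensor.numel() * tensor.element_size()
    print(f"{name:48s} {str(tuple(tensor.shape)):>18s} "
          f"{tensor.numel():>10d} {nbytes:>10d} {offset:>12d}")
    offset += nbytes
print(f"total: {offset} bytes across {len(state)} tensors")
```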
 
Can you explain why you chose each of these hyperparameters:

{
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_embd": 64,
  "n_head": 16,
  "n_inner": null,
  "n_layer": 8,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torch_dtype": "float32",
  "transformers_version": "4.35.0.dev0",
  "use_cache": true,
  "vocab_size": 50257
}
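For what it's worth, here is how I read a few of those numbers interacting; a back-of-the-envelope sketch only, since the exact total depends on the bias terms and on whether the lm_head is weight-tied:

```python
# Sketch: derive the per-head width and a rough parameter count from the
# posted config. Assumes n_inner = 4 * n_embd (the GPT-2 default when null)
# and a weight-tied lm_head, so treat the total as an estimate only.
n_embd, n_head, n_layer = 64, 16, 8
vocab_size, n_positions = 50257, 1024
n_inner = 4 * n_embd                      # "n_inner": null -> 4 * n_embd

head_dim = n_embd // n_head               # 64 / 16 = 4 floats per head
embeddings = vocab_size * n_embd + n_positions * n_embd
per_block = (
    (n_embd * 3 * n_embd + 3 * n_embd)    # attn c_attn weight + bias
    + (n_embd * n_embd + n_embd)          # attn c_proj
    + (n_embd * n_inner + n_inner)        # mlp c_fc
    + (n_inner * n_embd + n_embd)         # mlp c_proj
    + 4 * n_embd                          # two LayerNorms (weight + bias)
)
total = embeddings + n_layer * per_block + 2 * n_embd  # + final LayerNorm

print(f"head_dim = {head_dim}")
print(f"embedding params ~ {embeddings:,}")
print(f"transformer params ~ {n_layer * per_block + 2 * n_embd:,}")
print(f"total (tied lm_head) ~ {total:,}")
```

Most of that estimate sits in the 50,257-row token embedding, which is part of why a smaller, dataset-matched vocab seems worth trying.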

