# gpt2-shikoto
This model was trained on a dataset I obtained from an online novel site. Please be aware that the stories (training data) might contain inappropriate content. This model is intended for research purposes only.
The base model, jed351/gpt2-tiny-zh-hk, was obtained by patching a GPT-2 Chinese model and its tokenizer with Cantonese characters. Refer to the base model card for information on the patching process.
## Training procedure
Please refer to the training script provided by Hugging Face.
The model was trained for 400,000 steps on two NVIDIA Quadro RTX 6000 GPUs for around 15 hours at the Research Computing Services of Imperial College London.
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto Transformers `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 40
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400000
- mixed_precision_training: Native AMP
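
For reference, the settings above correspond roughly to the following Transformers `TrainingArguments`. This is only an illustrative sketch, not the exact configuration used: the output directory name is hypothetical, and the dataset loading, tokenisation, and `Trainer` setup from the Hugging Face example script are omitted.

```python
from transformers import TrainingArguments

# Illustrative sketch only; the actual run used the Hugging Face example script.
training_args = TrainingArguments(
    output_dir="gpt2_tiny_zh-hk-shikoto",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=20,        # 2 GPUs -> total train batch size 40
    per_device_eval_batch_size=20,         # 2 GPUs -> total eval batch size 40
    seed=42,
    max_steps=400_000,
    lr_scheduler_type="linear",
    fp16=True,                             # native AMP mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults.
)
```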
### Training results

## How to use it?
```python
from transformers import AutoTokenizer
from transformers import TextGenerationPipeline, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-tiny-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2_tiny_zh-hk-shikoto")

# Try experimenting with the generation parameters.
generator = TextGenerationPipeline(model, tokenizer,
                                   max_new_tokens=200,
                                   no_repeat_ngram_size=3)  # add device=0 if you have a GPU

input_string = "your input"
output = generator(input_string)

# Remove the spaces the tokenizer inserts between characters before printing.
string = output[0]['generated_text'].replace(' ', '')
print(string)
```
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2