---
language: zh
datasets: jinyong
inference:
  parameters:
    max_length: 108
    num_return_sequences: 1
    do_sample: True
widget:
- text: "杨过朗声说道:今番良晤,豪兴不浅,他日江湖相逢,再当杯酒言欢。咱们就此别过。 -"
  example_title: "神雕侠侣"
- text: "乱世之际,人不如狗。 -"
  example_title: "射雕英雄传"
---

# 飞雪连天射白鹿,笑书神侠倚碧鸳

## Model description

AI-generated Jin Yong fiction: give the model an opening passage and it continues the story.

## How to use

Use a text-generation pipeline to call the model:

```python
>>> # Load the fine-tuned model and generate a continuation
>>> from transformers import AutoTokenizer, GPT2LMHeadModel, TextGenerationPipeline

>>> model_id = "supermy/jinyong-gpt2"
>>> senc = "这些雪花落下来,多么白,多么好看.过几天太阳出来,每一片 雪花都变得无影无踪.到得明年冬天,又有许很多多雪花,只不过已不是 今年这些雪花罢了。"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = GPT2LMHeadModel.from_pretrained(model_id)
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> # Use the EOS token for padding so generation does not warn about a missing pad token
>>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
>>> text_generator(senc, max_length=108, do_sample=True)
[{'generated_text': '这些雪花落下来,多么白,多么好看.过几天太阳出来,每一片 雪花都变得无影无踪.到得明年冬天,又有许很多多雪花,只不过已不是 今年这些雪花罢了。 反正 老天爷 有眼 , 不知 哪里 是甚么 风 险 ?” 正 说到此处 , 突然 听得 谢逊 啸声 渐近 , 忍不住 张口 惊呼 , 一齐 向他 扑去 , 只听 谢逊 一声 怒吼 , 跟着 左手 用力 拍 出一掌 , 以 掌力 化开 。 众人 吃了一惊 , 同时 从 海 道 中 跃出 , 双双 倒退 。 张翠山和殷素素 对望一眼 , 均想 以 这两 大高手 之力 如何 抵挡 , 以 今日 之力 如何 攻敌 之'}]
```
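
The pipeline forwards the standard `generate` keyword arguments to the model, so sampling behavior can be tuned per call. The values below are illustrative assumptions, not the settings used to produce the sample above:

```python
>>> # Illustrative sampling settings (assumed values, tune to taste)
>>> text_generator(
...     senc,
...     max_length=108,
...     do_sample=True,
...     top_k=50,                # sample only from the 50 most likely next tokens
...     top_p=0.95,              # nucleus sampling: cut off the low-probability tail
...     repetition_penalty=1.2,  # penalize tokens that have already appeared
... )
```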

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/jinyong-gpt2")
model = AutoModelForCausalLM.from_pretrained("supermy/jinyong-gpt2")
```
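
The snippet above only loads the weights. A minimal sketch of actually extracting features from a text, assuming the final-layer hidden states are the representation you want:

```python
import torch

text = "乱世之际,人不如狗。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states holds one tensor per layer (plus the embeddings), each of
# shape (batch_size, sequence_length, hidden_size); the last entry is the
# final layer's representation of every token.
features = outputs.hidden_states[-1]
print(features.shape)
```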

## Training data

The model was trained on Jin Yong's novel collection 【飞雪连天射白鹿,笑书神侠倚碧鸳】, the couplet formed from the title characters of his fourteen wuxia novels.

## Training procedure

Base model: [GPT2](https://huggingface.co/gpt2)

Training environment: a single NVIDIA GPU with 16 GB of memory

BPE tokenization: `vocab_size` = 30000

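For reference, here is a minimal sketch of training such a BPE tokenizer with the Hugging Face `tokenizers` library; the corpus file name and special tokens are assumptions, not the exact script used for this model:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Byte-level BPE can represent any Chinese text without unknown tokens
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=30000,                         # matches the setting above
    special_tokens=["<s>", "</s>", "<unk>"],  # assumed special tokens
)

# "corpus.txt" is a placeholder for the Jin Yong text corpus
tokenizer.train(["corpus.txt"], trainer)
tokenizer.save("tokenizer.json")
```

Training log:
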
```
[INFO|trainer.py:1608] 2022-12-02 19:52:59,024 >> ***** Running training *****
[INFO|trainer.py:1609] 2022-12-02 19:52:59,024 >> Num examples = 9443
[INFO|trainer.py:1610] 2022-12-02 19:52:59,024 >> Num Epochs = 108
[INFO|trainer.py:1611] 2022-12-02 19:52:59,024 >> Instantaneous batch size per device = 12
[INFO|trainer.py:1612] 2022-12-02 19:52:59,024 >> Total train batch size (w. parallel, distributed & accumulation) = 12
[INFO|trainer.py:1613] 2022-12-02 19:52:59,024 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1614] 2022-12-02 19:52:59,024 >> Total optimization steps = 84996
[INFO|trainer.py:1616] 2022-12-02 19:52:59,025 >> Number of trainable parameters = 124439808

{'loss': 8.0431, 'learning_rate': 4.970998635229893e-05, 'epoch': 0.64}
{'loss': 7.4867, 'learning_rate': 4.94158548637583e-05, 'epoch': 1.27}
{'loss': 7.322, 'learning_rate': 4.912172337521766e-05, 'epoch': 1.91}
......
......
......
{'loss': 3.8686, 'learning_rate': 9.035719327968376e-07, 'epoch': 106.1}
{'loss': 3.8685, 'learning_rate': 6.094404442562004e-07, 'epoch': 106.73}
{'loss': 3.8678, 'learning_rate': 3.1530895571556306e-07, 'epoch': 107.37}

{'train_runtime': 71919.9835, 'train_samples_per_second': 14.18, 'train_steps_per_second': 1.182, 'train_loss': 4.661963973798675, 'epoch': 108.0}
***** train metrics *****
epoch = 108.0
train_loss = 4.662
train_runtime = 19:58:39.98
train_samples = 9443
train_samples_per_second = 14.18
train_steps_per_second = 1.182
12/03/2022 15:51:42 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2929] 2022-12-03 15:51:42,270 >> ***** Running Evaluation *****
[INFO|trainer.py:2931] 2022-12-03 15:51:42,270 >> Num examples = 283
[INFO|trainer.py:2934] 2022-12-03 15:51:42,270 >> Batch size = 12
100%|██████████| 24/24 [00:07<00:00, 3.17it/s]
[INFO|modelcard.py:449] 2022-12-03 15:51:52,077 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.2100502721055507}]}
***** eval metrics *****
epoch = 108.0
eval_accuracy = 0.2101
eval_loss = 6.889
eval_runtime = 0:00:07.90
eval_samples = 283
eval_samples_per_second = 35.79
eval_steps_per_second = 3.035
perplexity = 981.4321
```
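
The reported perplexity is simply the exponential of the evaluation loss, which is easy to verify:

```python
import math

eval_loss = 6.889           # from the eval metrics above
print(math.exp(eval_loss))  # ≈ 981.43, the reported perplexity
```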