---
license: apache-2.0
datasets:
- xmj2002/tang_poems
language:
- zh
---

The pretrained model used is [uer/gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall).

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xmj2002/gpt2_tang_poetry")
model = AutoModelForCausalLM.from_pretrained("xmj2002/gpt2_tang_poetry")

# Prompt format: poet name followed by the poem title in 《》
text = "白居易《远方》"
inputs = tokenizer(text, return_tensors="pt").input_ids

# Sample up to 100 new tokens as the poem continuation
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=100, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
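
A shorter alternative is the high-level `pipeline` API from transformers. This is a minimal sketch assuming the same checkpoint and prompt format as above; the prompt "李白《静夜思》" is only an illustrative example, not from the model card.

```python
from transformers import pipeline

# Build a text-generation pipeline directly from the Hub checkpoint
generator = pipeline("text-generation", model="xmj2002/gpt2_tang_poetry")

# Prompt format: poet name followed by a title in 《》 (illustrative example)
result = generator("李白《静夜思》", max_new_tokens=100, do_sample=True, top_k=100, top_p=0.95)
print(result[0]["generated_text"])
```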