Tags: Text Generation, Transformers, PyTorch, Safetensors, gpt2, stable-diffusion, prompt-generator, distilgpt2, text-generation-inference, Inference Endpoints
FredZhang7 committed
Commit 7d8f63e
1 Parent(s): 6f8b642

Update README.md

Files changed (1)
  1. README.md +1 -10
README.md CHANGED
@@ -23,14 +23,6 @@ pip install --upgrade transformers
 ```
 
 ```python
-# download DistilGPT2 Stable Diffusion if haven't already
-import os
-if not os.path.exists('./distil-sd-gpt2.pt'):
-    import urllib.request
-    print('Downloading model...')
-    urllib.request.urlretrieve('https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion/resolve/main/distil-sd-gpt2.pt', './distil-sd-gpt2.pt')
-    print('Model downloaded.')
-
 from transformers import GPT2Tokenizer, GPT2LMHeadModel
 
 # load the pretrained tokenizer
@@ -40,8 +32,7 @@ tokenizer.max_len = 512
 
 # load the fine-tuned model
 import torch
-model = GPT2LMHeadModel.from_pretrained('distilgpt2')
-model.load_state_dict(torch.load('distil-sd-gpt2.pt'))
+model = GPT2LMHeadModel.from_pretrained('FredZhang7/distilgpt2-stable-diffusion')
 
 # generate text using fine-tuned model
 from transformers import pipeline
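
After this change, the README no longer downloads a checkpoint file manually; the fine-tuned weights are fetched straight from the Hub via `from_pretrained`. Below is a minimal sketch of how the updated loading step fits into prompt generation. The tokenizer name, seed text, and sampling parameters (`max_length`, `top_k`, `temperature`, `num_return_sequences`) are illustrative assumptions, not taken from the README.

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, pipeline

# pretrained GPT-2 tokenizer (the README loads a pretrained tokenizer; 'distilgpt2' is assumed here)
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')

# fine-tuned prompt-generation model, loaded directly from the Hub (the change this commit introduces)
model = GPT2LMHeadModel.from_pretrained('FredZhang7/distilgpt2-stable-diffusion')

# text-generation pipeline built from the fine-tuned model
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)

# expand a short seed prompt into longer Stable Diffusion prompts
# (seed text and sampling settings below are illustrative, not from the README)
results = generator(
    'a beautiful landscape',
    max_length=77,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    num_return_sequences=3,
)
for r in results:
    print(r['generated_text'])
```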