sr5434 committed on
Commit b3e1045
1 Parent(s): 8d7a7da

update model card README.md

Files changed (1)
  1. README.md +12 -21
README.md CHANGED
@@ -5,11 +5,6 @@ tags:
  model-index:
  - name: gptQuotes
    results: []
- datasets:
- - Abirate/english_quotes
- language:
- - en
- pipeline_tag: text-generation
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,26 +12,19 @@ should probably proofread and complete it, then remove this comment. -->

  # gptQuotes

- This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an Abirate's English Quotes dataset.
+ This model is a fine-tuned version of [sr5434/gptQuotes](https://huggingface.co/sr5434/gptQuotes) on an unknown dataset.

+ ## Model description

- ## Intended uses & limitations
-
- Generating quotes with AI
+ More information needed

- ## Online demo
+ ## Intended uses & limitations

- A demo of this AI is availible in a [Huggingface Space](https://huggingface.co/spaces/sr5434/QuoteGeneration). Do not use the version attached to this model, as it doesn't work.
+ More information needed

- ## Sample code
- ```
- from transformers import pipeline
+ ## Training and evaluation data

- ai = pipeline('text-generation',model='sr5434/gptQuotes', tokenizer='facebook/opt-350m', device=-1)#,config={'max_length':45})
- while True:
-     result = ai(input("Prompt>>>"))[0]['generated_text']
-     print(result)
- ```
+ More information needed

  ## Training procedure

@@ -52,9 +40,12 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_steps: 500
  - num_epochs: 15

+ ### Training results
+
+

  ### Framework versions

- - Transformers 4.25.1
+ - Transformers 4.26.0
  - Pytorch 1.13.1+cu116
- - Tokenizers 0.13.2
+ - Tokenizers 0.13.2
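The sample code removed in this commit can still be used to try the model. Below is a lightly cleaned-up sketch of that snippet, assuming the model id `sr5434/gptQuotes` and the `facebook/opt-350m` tokenizer named in the old README; the `max_length=45` value mirrors the commented-out config there and is an assumption.

```python
# Sketch based on the sample code from the old README (not an official example).
# Model id and tokenizer come from that README; max_length=45 mirrors its
# commented-out config and is an assumption.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="sr5434/gptQuotes",
    tokenizer="facebook/opt-350m",
    device=-1,  # CPU; set device=0 to use the first GPU
)

while True:
    prompt = input("Prompt>>> ")
    result = generator(prompt, max_length=45)[0]["generated_text"]
    print(result)
```

The [Huggingface Space](https://huggingface.co/spaces/sr5434/QuoteGeneration) linked in the old README offers the same interaction without a local setup.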