bayang committed
Commit f5b32b0 (1 parent: 0030187)

Create README.md

Files changed (1):
- README.md +78 -0
README.md ADDED
@@ -0,0 +1,78 @@
---
language:
- en
- fr
- ro
- de
- multilingual

widget:
- text: "Translate to German: My name is Arthur"
  example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
  example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
  example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
  example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
  example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
  example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
  example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
  example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
  example_title: "Premise and hypothesis"

tags:
- text2text-generation

datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed

license: apache-2.0
---

# Model Card for FLAN-T5 XL

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>

# Table of Contents

0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)

# TL;DR

If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, covering more languages as well.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large).

# Model Details

## Model Description

The details are in the original [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) model card.
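
As a quick-start sketch (not part of the original card): the checkpoint can be loaded with the standard transformers seq2seq classes. The model id `google/flan-t5-xl` comes from the link above; the prompt and generation settings are illustrative assumptions.

```python
# Minimal inference sketch for FLAN-T5 XL (assumes the transformers library and the
# "google/flan-t5-xl" checkpoint linked above; generation settings are illustrative).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")

# One of the widget prompts from the front matter above.
inputs = tokenizer("Translate to German: My name is Arthur", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```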