jbraun19 and awacke1 committed on
Commit
f8bb7f3
•
0 Parent(s):

Duplicate from awacke1/BigScienceContinualGenerator


Co-authored-by: Aaron C Wacker <awacke1@users.noreply.huggingface.co>

Files changed (3)
  1. .gitattributes +34 -0
  2. README.md +14 -0
  3. app.py +112 -0
.gitattributes ADDED
@@ -0,0 +1,34 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,14 @@
+ ---
+ title: BigScienceContinualGenerator
+ emoji: 👀👀👀
+ colorFrom: green
+ colorTo: indigo
+ sdk: gradio
+ sdk_version: 3.18.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ duplicated_from: awacke1/BigScienceContinualGenerator
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py ADDED
@@ -0,0 +1,112 @@
+ import gradio as gr
+
+ #api = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
+ api = gr.Interface.load("models/bigscience/bloom")
+
+
+ def complete_with_gpt(text):
+     # Use the last 50 characters of the text as the prompt and append the model's completion
+     return text[:-50] + api(text[-50:])
+
+
+ with gr.Blocks() as demo:
+     with gr.Row():
+         textbox = gr.Textbox(placeholder="Type here and press enter...", lines=21)
+         with gr.Column():
+             btn = gr.Button("Generate")
+
+     btn.click(complete_with_gpt, textbox, textbox)
+
+     with gr.Row():
+         gr.Markdown("""
+ # BigScience Creates a 176-Billion-Parameter Large Language Model
+
+ ## BLOOM Sets a New Record as the Most Performant and Efficient Open AI Model for Science Yet!
+
+ BLOOM stands for:
+ B: Big Science
+ L: Large Language Model
+ O: Open Science
+ O: Open Access
+ M: Multilingual Language Model
+
+ 1. Video playlist to check it out: https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14
+ 2. Summary of important models and sizes:
+
+ # Model Sizes to Date
+
+ Model Name | Model Size (in Parameters)
+ ----------------|---------------------------------
+ BigScience-tr11-176B | 176 billion
+ GPT-3 | 175 billion
+ OpenAI's DALL-E 2.0 | 500 million
+ NVIDIA's Megatron | 8.3 billion
+ Google's BERT | 340 million
+ GPT-2 | 1.5 billion
+ OpenAI's GPT-1 | 117 million
+ ELMo | 90 million
+ ULMFiT | 100 million
+ Transformer-XL | 250 million
+ XLNet | 210 million
+ RoBERTa | 125 million
+ ALBERT | 12 million
+ DistilBERT | 66 million
+
+ 3. Background information on ChatGPT, BLOOM from BigScience on the Hugging Face platform, RLHF deep RL, and one-to-few-shot learning and generators:
+
+
+
+ # ChatGPT Datasets:
+ 1. WebText
+ 2. Common Crawl
+ 3. BooksCorpus
+ 4. English Wikipedia
+ 5. Toronto Books Corpus
+ 6. OpenWebText
+
+ # Comparison to BigScience Model:
+
+ # BigScience - How to get started
+
+ BLOOM, from the BigScience workshop, is a new 176B-parameter ML model trained on a set of datasets for natural language processing and many other tasks that are not yet explored. Below is a set of papers, models, links, and datasets around BigScience, which promises to be the best and most recent large model of its kind, benefiting all science pursuits.
+
+ # Model: https://huggingface.co/bigscience/bloom
+
+ # Papers:
+ 1. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model https://arxiv.org/abs/2211.05100
+ 2. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism https://arxiv.org/abs/1909.08053
+ 3. 8-bit Optimizers via Block-wise Quantization https://arxiv.org/abs/2110.02861
+ 4. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation https://arxiv.org/abs/2108.12409
+ 5. https://huggingface.co/models?other=doi:10.57967/hf/0003
+ 6. 217 other models optimizing use of BLOOM via specialization: https://huggingface.co/models?other=bloom
+
+ # Datasets
+ 1. Universal Dependencies: https://paperswithcode.com/dataset/universal-dependencies
+ 2. WMT 2014: https://paperswithcode.com/dataset/wmt-2014
+ 3. The Pile: https://paperswithcode.com/dataset/the-pile
+ 4. HumanEval: https://paperswithcode.com/dataset/humaneval
+ 5. FLORES-101: https://paperswithcode.com/dataset/flores-101
+ 6. CrowS-Pairs: https://paperswithcode.com/dataset/crows-pairs
+ 7. WikiLingua: https://paperswithcode.com/dataset/wikilingua
+ 8. MTEB: https://paperswithcode.com/dataset/mteb
+ 9. xP3: https://paperswithcode.com/dataset/xp3
+ 10. DiaBLa: https://paperswithcode.com/dataset/diabla
+
+ # Deep RL ML Strategy
+
+ 1. Language model preparation, human-augmented with supervised fine-tuning
+ 2. Reward model training with a prompts dataset; multiple models generate data to rank
+ 3. Fine-tuning with a reinforcement reward and a distance-distribution regret score
+ 4. Proximal Policy Optimization fine-tuning
+
+ # Variations - Preference Model Pretraining
+
+ 1. Use ranking datasets: sentiment, thumbs up/down, distribution
+ 2. Online version getting feedback
+ 3. OpenAI - InstructGPT - humans generate LM training text
+ 4. DeepMind - Advantage Actor-Critic: Sparrow, GopherCite
+ 5. Reward model with human preference feedback
+
+ """)
+
+ demo.launch()
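The `complete_with_gpt` function in the diff above is the whole continual-generation trick: each click sends only the last 50 characters to the hosted bigscience/bloom endpoint and pastes the result back into the textbox. A minimal sketch of that pattern, with a hypothetical `fake_bloom` stand-in instead of the real `gr.Interface.load` call, so the loop can be tried offline:

```python
# Sketch of the continual-generation loop from app.py.
# `fake_bloom` is a placeholder for the hosted BLOOM endpoint, which
# (like gr.Interface.load("models/bigscience/bloom")) returns the prompt
# followed by generated text.

def fake_bloom(prompt: str) -> str:
    # Placeholder: a real call would return `prompt` plus a model completion.
    return prompt + " [generated continuation]"

def complete_with_model(text: str, model=fake_bloom) -> str:
    # Keep everything except the last 50 characters, then append the
    # model's output for those 50 characters (prompt + completion).
    return text[:-50] + model(text[-50:])

if __name__ == "__main__":
    draft = "BLOOM is a 176B-parameter multilingual model from the BigScience workshop."
    # Each call extends the draft, just as pressing "Generate" does in the Space.
    draft = complete_with_model(draft)
    draft = complete_with_model(draft)
    print(draft)
```

Because the textbox is both the input and the output of `btn.click`, repeated clicks keep growing the same text, which is what makes the Space a "continual generator".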
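The "Deep RL ML Strategy" list in the app's Markdown follows the usual RLHF recipe (supervised fine-tuning, reward-model training on ranked completions, then PPO). A toy sketch of step 2 only, assuming hypothetical prompts, completions, and a stand-in `human_preference_score` scorer rather than any real BigScience or OpenAI pipeline, to show how ranked pairs for reward-model training are assembled:

```python
# Toy illustration of building ranked preference pairs for reward-model training.
# All data and the scoring function are hypothetical stand-ins.
from itertools import combinations

def human_preference_score(completion: str) -> int:
    # Stand-in for a human ranking signal: prefer longer, on-topic completions.
    return len(completion) + (10 if "BLOOM" in completion else 0)

prompts_to_completions = {
    "What is BLOOM?": [
        "BLOOM is a 176B-parameter open multilingual language model.",
        "It is a model.",
        "A big model trained by BigScience; BLOOM is open access.",
    ],
}

# (prompt, preferred, rejected) triples, the shape a reward model trains on
ranked_pairs = []
for prompt, completions in prompts_to_completions.items():
    for a, b in combinations(completions, 2):
        better, worse = (a, b) if human_preference_score(a) >= human_preference_score(b) else (b, a)
        ranked_pairs.append((prompt, better, worse))

for prompt, better, worse in ranked_pairs:
    print(f"{prompt!r}: prefer {better!r} over {worse!r}")
```

Steps 3 and 4 of the list would then fine-tune the policy against the learned reward with a clipped policy-gradient method such as PPO; that part is omitted here.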