Intradiction committed fe93989 (parent: 8077ebf): fix typos
app.py CHANGED
@@ -82,8 +82,8 @@ with gr.Blocks(
     <div style="overflow: hidden;color:#fff;display: flex;flex-direction: column;align-items: center; position: relative; width: 100%; height: 180px;background-size: cover; background-image: url(https://www.grssigns.co.uk/wp-content/uploads/web-Header-Background.jpg);">
     <img style="width: 130px;height: 60px;position: absolute;top:10px;left:10px" src="https://www.torontomu.ca/content/dam/tmumobile/images/TMU-Mobile-AppIcon.png"/>
     <span style="margin-top: 40px;font-size: 36px ;font-family:fantasy;">Efficient Fine tuning Of Large Language Models</span>
-    <span style="margin-top: 10px;font-size: 14px;">By: Rahul Adams, Greylyn Gao, Rajevan
-    <span style="margin-top: 5px;font-size: 14px;">Group Id: AR06 FLC: Alice
+    <span style="margin-top: 10px;font-size: 14px;">By: Rahul Adams, Greylyn Gao, Rajevan Logarajah & Mahir Faisal</span>
+    <span style="margin-top: 5px;font-size: 14px;">Group Id: AR06 FLC: Alice Reuda</span>
     </div>
     """)
     with gr.Tab("Text Classification"):
@@ -92,11 +92,11 @@ with gr.Blocks(
     with gr.Row():
         with gr.Column(scale=0.3,variant="panel"):
             gr.Markdown("""
-            <h2>
+            <h2>Specifications</h2>
             <p><b>Model:</b> Tiny Bert <br>
             <b>Dataset:</b> IMDB Movie review dataset <br>
             <b>NLP Task:</b> Text Classification</p>
-            <p>Text classification is an NLP task that focuses on automatically ascribing a predefined category or labels to an input prompt. In this demonstration the
+            <p>Text classification is an NLP task that focuses on automatically ascribing a predefined category or labels to an input prompt. In this demonstration the Tiny Bert model has been used to classify the text on the basis of sentiment analysis, where the labels (negative and positive) will indicate the emotional state expressed by the input prompt. The tiny bert model was chosen as in its base state its ability to perform sentiment analysis is quite poor, displayed by the untrained model, which often fails to correctly ascribe the label to the sentiment. The models were trained on the IMDB dataset which includes over 100k sentiment pairs pulled from IMDB movie reviews. We can see that when training is performed over [XX] of epochs we see an increase in X% of training time for the LoRA trained model.</p>
             """)

         with gr.Column(scale=0.3,variant="panel"):
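The paragraph added at line 99 describes the demo's setup: Tiny Bert fine-tuned on IMDB for sentiment, with a LoRA variant trained alongside the conventional one. A minimal sketch of what such a LoRA training run could look like with Hugging Face `peft`; the checkpoint name, hyperparameters, and subsampling below are illustrative assumptions, not this Space's actual training code:

```python
# Hypothetical sketch: LoRA fine-tuning a small BERT for IMDB sentiment
# classification. Checkpoint and hyperparameters are assumptions, not
# taken from this repository.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "prajjwal1/bert-tiny"  # assumed "Tiny Bert" checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the base model with LoRA adapters; only the low-rank matrices train,
# which is what makes this run cheaper than full fine-tuning.
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(model, lora_config)

imdb = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# Subsample for a quick demonstration run (an assumption, not the real split).
train_ds = imdb["train"].shuffle(seed=42).select(range(2000)).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-tinybert-imdb", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```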
@@ -139,13 +139,13 @@ with gr.Blocks(
     btn.click(fn=distilBERTwithLORA_fn, inputs=inp, outputs=TextClassOut2)


-    with gr.Tab("
+    with gr.Tab("Natural Language Inferencing"):
         with gr.Row():
-            gr.Markdown("<h1>Efficient Fine Tuning for
+            gr.Markdown("<h1>Efficient Fine Tuning for Natural Language Inferencing</h1>")
         with gr.Row():
             with gr.Column(scale=0.3, variant="panel"):
                 gr.Markdown("""
-                <h2>
+                <h2>Specifications</h2>
                 <p><b>Model:</b> Albert <br>
                 <b>Dataset:</b> Stanford Natural Language Inference Dataset <br>
                 <b>NLP Task:</b> Natual Languae Infrencing</p>
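The renamed tab wires premise/hypothesis textboxes into `AlbertnoLORA_fn` and `AlbertwithLORA_fn` (see the click handlers in the next hunk). A hedged sketch of the shape such a handler plausibly has; the `albert-base-v2` checkpoint and the SNLI label order are assumptions:

```python
# Hypothetical sketch of an NLI prediction function like AlbertnoLORA_fn:
# premise + hypothesis in, entailment/neutral/contradiction out. The
# checkpoint below is a stand-in for whatever the Space actually loads.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nli_tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
nli_model = AutoModelForSequenceClassification.from_pretrained("albert-base-v2", num_labels=3)
NLI_LABELS = ["entailment", "neutral", "contradiction"]  # assumed SNLI label order

def albert_nli_fn(premise: str, hypothesis: str) -> str:
    # Encode the sentence pair the way SNLI-style classifiers expect.
    inputs = nli_tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    return NLI_LABELS[int(logits.argmax(dim=-1))]
```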
@@ -200,17 +200,17 @@ with gr.Blocks(
     nli_btn.click(fn=AlbertnoLORA_fn, inputs=[nli_p1,nli_p2], outputs=NLIOut1)
     nli_btn.click(fn=AlbertwithLORA_fn, inputs=[nli_p1,nli_p2], outputs=NLIOut2)

-    with gr.Tab("
+    with gr.Tab("Semantic Text Similarity"):
         with gr.Row():
             gr.Markdown("<h1>Efficient Fine Tuning for Semantic Text Similarity</h1>")
         with gr.Row():
             with gr.Column(scale=0.3,variant="panel"):
                 gr.Markdown("""
-                <h2>
+                <h2>Specifications</h2>
                 <p><b>Model:</b> DeBERTa-v3-xsmall <br>
                 <b>Dataset:</b> Semantic Text Similarity Benchmark <br>
                 <b>NLP Task:</b> Semantic Text Similarity</p>
-                <p>Semantic text similarity measures the closeness in meaning of two pieces of text despite differences in their wording or structure.This task involves two input prompts which can be sentences, phrases or entire documents and assessing them for similarity.
+                <p>Semantic text similarity measures the closeness in meaning of two pieces of text despite differences in their wording or structure. This task involves two input prompts which can be sentences, phrases or entire documents and assessing them for similarity. In our implementation we compare phrases represented by a score that can range between zero and one. A score of zero implies completely different phrases, while one indicates identical meaning between the text pair. This implementation uses a DeBERTa-v3-xsmall and training was performed on the semantic text similarity benchmark dataset which contains over 86k semantic pairs and their scores. We can see that when training is performed over [XX] epochs we see an increase in X% of training time for the LoRA trained model compared to a conventionally tuned model.</p>
                 """)
             with gr.Column(scale=0.3,variant="panel"):
                 sts_p1 = gr.Textbox(placeholder="Prompt One",label= "Enter Query")
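The STS paragraph added at line 213 describes scoring a sentence pair on a zero-to-one scale with DeBERTa-v3-xsmall. A sketch of such a scorer, assuming a single-output regression head and STS-B's native 0-5 gold scale rescaled to 0-1 (both assumptions; the Space's real handler may differ):

```python
# Hypothetical sketch of an STS scoring function for the new tab: a
# DeBERTa-v3-xsmall regression head scores a sentence pair, squashed to
# the 0-1 range the description mentions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

sts_tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-xsmall")
sts_model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-xsmall", num_labels=1)  # single regression output

def sts_score_fn(text_a: str, text_b: str) -> float:
    inputs = sts_tokenizer(text_a, text_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        raw = sts_model(**inputs).logits.squeeze().item()
    # STS-B gold scores run 0-5; rescale and clamp to the app's 0-1 range
    # (an assumed convention, not confirmed by this diff).
    return max(0.0, min(raw / 5.0, 1.0))
```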
@@ -244,7 +244,7 @@ with gr.Blocks(
     </div>""")

     with gr.Row(variant="panel"):
-        sts_out1 = gr.Textbox(label= "
+        sts_out1 = gr.Textbox(label= "Conventionally Trained Model")
     gr.Markdown("""<div>
     <span><center><B>Training Information</B><center></span>
     <span><br><br><br><br><br></span>
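Every hunk above edits the same Blocks layout: a tab, a header row, panel columns, and paired `btn.click` calls that fan one input out to a baseline output box and a LoRA output box. A self-contained sketch of that pattern with placeholder handlers (the function bodies and the second label are stand-ins, not this Space's code):

```python
# Minimal sketch of the tab layout pattern these hunks edit: one input, one
# button, side-by-side outputs for the baseline and LoRA models.
import gradio as gr

def baseline_fn(text: str) -> str:   # stand-in for e.g. AlbertnoLORA_fn
    return f"baseline: {text}"

def lora_fn(text: str) -> str:       # stand-in for e.g. AlbertwithLORA_fn
    return f"lora: {text}"

with gr.Blocks() as demo:
    with gr.Tab("Text Classification"):
        with gr.Row():
            inp = gr.Textbox(placeholder="Enter a movie review", label="Enter Query")
            btn = gr.Button("Run")
        with gr.Row(variant="panel"):
            out1 = gr.Textbox(label="Conventionally Trained Model")
            out2 = gr.Textbox(label="LoRA Fine Tuned Model")  # assumed label
        # One click drives both models, mirroring the paired btn.click calls
        # in the diff, so the two outputs can be compared side by side.
        btn.click(fn=baseline_fn, inputs=inp, outputs=out1)
        btn.click(fn=lora_fn, inputs=inp, outputs=out2)

demo.launch()
```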
|