update: font style
app.py CHANGED
```diff
@@ -202,7 +202,7 @@ with gr.Blocks(css=custom_css) as interface:
     )

     gr.HTML("""
-    <span style="font-size:
+    <span style="font-size: 14px; font-weight: bold;">The visualization demonstrates object detection and interpretability. Detected objects are highlighted with bounding boxes, while the heatmap reveals regions of focus, offering insights into the model's decision-making process.</span>
     """)
     # Results and visualization
     with gr.Row(elem_classes="custom-row"):
@@ -224,7 +224,7 @@ with gr.Blocks(css=custom_css) as interface:


     gr.HTML("""
-    <span style="font-size:
+    <span style="font-size: 14px; ">
     <span style="color: #800000;">Concept Discovery</span> is the process of uncovering the hidden, high-level features that a deep learning model has learned. It provides a way to understand the essence of its internal representations, akin to peering into the mind of the model and revealing the meaningful patterns it detects in the data.
     <br><br>
     <span style="color: #800000;">Deep Feature Factorization</span> (DFF) serves as a tool for breaking down these complex features into simpler, more interpretable components. By applying matrix factorization on activation maps, it untangles the intricate web of learned representations, making it easier to comprehend what the model is truly focusing on. Together, these methods bring us closer to understanding the underlying logic of neural networks, shedding light on the often enigmatic decisions they make.
```
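For context on the technique the new text describes, here is a minimal sketch of Deep Feature Factorization: non-negative matrix factorization applied to a CNN's activation maps. The torchvision ResNet-50 backbone, the number of concepts `k`, and the random tensor standing in for a preprocessed image are illustrative assumptions, not this Space's actual code.

```python
import numpy as np
import torch
from sklearn.decomposition import NMF
from torchvision import models

# Keep only the convolutional feature extractor (drop avgpool and fc).
model = models.resnet50(weights="IMAGENET1K_V2").eval()
backbone = torch.nn.Sequential(*list(model.children())[:-2])

x = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed image
with torch.no_grad():
    acts = backbone(x)                        # (1, C, H, W), here (1, 2048, 7, 7)

_, C, H, W = acts.shape
A = acts[0].reshape(C, H * W).T.numpy()       # (H*W, C): one row per spatial position
A = np.maximum(A, 0)                          # NMF requires non-negative input

k = 4                                         # number of concepts (illustrative)
nmf = NMF(n_components=k, init="nndsvda", max_iter=400)
Wm = nmf.fit_transform(A)                     # (H*W, k): spatial weight of each concept
Hm = nmf.components_                          # (k, C): channel signature of each concept

heatmaps = Wm.T.reshape(k, H, W)              # k coarse concept heatmaps
```

Each column of `Wm`, reshaped to `H × W`, plays the role of a concept heatmap, while the corresponding row of `Hm` records which channels that concept activates.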
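And a hypothetical overlay step matching the heatmap visualization described in the first hunk, continuing from `heatmaps` above; the zero-filled image and the blending weights are placeholders for the app's real input and styling:

```python
import cv2
import numpy as np

hm = heatmaps[0].astype(np.float32)                  # one concept's low-res map
hm = (hm - hm.min()) / (np.ptp(hm) + 1e-8)           # normalize to [0, 1]
hm = cv2.resize(hm, (224, 224))                      # match the input resolution
color = cv2.applyColorMap((hm * 255).astype(np.uint8), cv2.COLORMAP_JET)
img = np.zeros((224, 224, 3), dtype=np.uint8)        # stand-in for the BGR input image
overlay = cv2.addWeighted(img, 0.6, color, 0.4, 0)   # blended heatmap for display
```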