sulpha committed
Commit 8e7a3f8 • 1 parent: 0fb6216

Update app.py

Files changed (1):
  1. app.py (+8, -13)
app.py CHANGED

@@ -2,22 +2,17 @@
 ===========================================================================
 Gradio Demo to Plot Ridge coefficients as a function of the regularization
 ===========================================================================
-
 Shows the effect of collinearity in the coefficients of an estimator.
-
 .. currentmodule:: sklearn.linear_model
-
 :class:`Ridge` Regression is the estimator used in this example.
 Each color represents a different feature of the
 coefficient vector, and this is displayed as a function of the
 regularization parameter.
-
 This example also shows the usefulness of applying Ridge regression
 to highly ill-conditioned matrices. For such matrices, a slight
 change in the target variable can cause huge variances in the
 calculated weights. In such cases, it is useful to set a certain
 regularization (alpha) to reduce this variation (noise).
-
 When alpha is very large, the regularization effect dominates the
 squared loss function and the coefficients tend to zero.
 At the end of the path, as alpha tends toward zero
@@ -28,6 +23,7 @@ in such a way that a balance is maintained between both.
 
 # Author: Fabian Pedregosa -- <fabian.pedregosa@inria.fr>
 # License: BSD 3 clause
+# Demo Author: Syed Affan
 
 import numpy as np
 import matplotlib.pyplot as plt
@@ -74,17 +70,12 @@ model_card=f"""
 ## Description
 This interactive demo is based on the [Plot Ridge coefficients as a function of the regularization](https://scikit-learn.org/stable/_downloads/9d5a4167bc60f250de65fe21497c1eb6/plot_ridge_path.py) example from the popular [scikit-learn](https://scikit-learn.org/stable/) library, which is a widely-used library for machine learning in Python.
 This demo demonstrates the effect of collinearity in the coefficients of an estimator by plotting the regularization selected against the coefficients that are learnt by the model.
-
 It also shows the usefulness of applying Ridge regression to highly ill-conditioned matrices. For such matrices, a slight change in the target variable can cause huge variances in the calculated weights. In such cases, it is useful to set a certain regularization (alpha) to reduce this variation (noise). You can play with the range of `Alpha` values and the `Training Size`
-
 When alpha is very large, the regularization effect dominates the squared loss function and the coefficients tend to zero. At the end of the path, as alpha tends toward zero and the solution tends towards the ordinary least squares, coefficients exhibit big oscillations. In practise it is necessary to tune alpha in such a way that a balance is maintained between both.
-
 ## Model
 currentmodule: [sklearn.linear_model](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model)
-
 class:`Ridge` Regression is the estimator used in this example.
 Each color represents a different feature of the coefficient vector, and this is displayed as a function of the regularization parameter.
-
 """
 
 with gr.Blocks(title=title) as demo:
@@ -98,12 +89,16 @@ with gr.Blocks(title=title) as demo:
     d0 = gr.Slider(1,101,value=10,step=10,label='Select Size of Training Set')
     with gr.Column():
         with gr.Tab('Select Alpha Range'):
-            d1 = gr.Slider(-20,20,value=-10,step=1,label='')
+            d1 = gr.Slider(-20,20,value=-10,step=1,label='Creates an array of regularization values which are fed to the model and plotted against the returned weights')
             d2 = gr.Slider(-20,20,value=-2,step=1,label='')
 
-    btn = gr.Button(value = 'Submit')
+    o1 = gr.Plot()
+    #btn = gr.Button(value = 'Submit')
+    d0.change(fn=make_plot,inputs=[d0,d1,d2],outputs=[o1])
+    d1.change(fn=make_plot,inputs=[d0,d1,d2],outputs=[o1])
+    d2.change(fn=make_plot,inputs=[d0,d1,d2],outputs=[o1])
 
-    btn.click(make_plot,inputs=[d0,d1,d2],outputs=[gr.Plot()])
+    #btn.click(make_plot,inputs=[d0,d1,d2],outputs=[gr.Plot()])
 
 demo.launch()
 
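For context, here is a minimal sketch of what the make_plot function wired above might look like. Its body is not part of this diff, so everything below is a hedged reconstruction: it mirrors the upstream scikit-learn plot_ridge_path example, and it assumes d0 is the training-set size and d1/d2 are the log10 bounds of the alpha grid, as the slider defaults (-10 and -2, matching the upstream example) suggest.

# Hypothetical reconstruction of make_plot -- the real body is not shown
# in this diff. It follows the upstream plot_ridge_path example.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model

def make_plot(d0, d1, d2):
    # Ill-conditioned (Hilbert-like) design matrix with d0 samples and 10
    # features, plus a constant target, as in the scikit-learn example.
    X = 1.0 / (np.arange(1, 11) + np.arange(0, int(d0))[:, np.newaxis])
    y = np.ones(int(d0))

    # Log-spaced alpha grid from 10**d1 to 10**d2.
    alphas = np.logspace(d1, d2, 200)
    coefs = []
    for a in alphas:
        ridge = linear_model.Ridge(alpha=a, fit_intercept=False)
        ridge.fit(X, y)
        coefs.append(ridge.coef_)

    # One colored line per coefficient as a function of alpha.
    fig, ax = plt.subplots()
    ax.plot(alphas, coefs)
    ax.set_xscale("log")
    ax.set_xlim(ax.get_xlim()[::-1])  # reverse axis: large alpha on the left
    ax.set_xlabel("alpha")
    ax.set_ylabel("weights")
    ax.set_title("Ridge coefficients as a function of the regularization")
    return fig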
 
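One note on the design change in the last hunk: replacing the Submit button with per-slider change events makes the figure re-render live, at the cost of one full refit of the alpha grid per slider tick. Since the three handlers are identical, the same wiring can also be written as a loop; a sketch assuming the make_plot above, with illustrative stand-in labels for the alpha sliders:

import gradio as gr

with gr.Blocks(title="Ridge coefficients as a function of the regularization") as demo:
    d0 = gr.Slider(1, 101, value=10, step=10, label="Select Size of Training Set")
    with gr.Column():
        with gr.Tab("Select Alpha Range"):
            d1 = gr.Slider(-20, 20, value=-10, step=1, label="log10 of smallest alpha")
            d2 = gr.Slider(-20, 20, value=-2, step=1, label="log10 of largest alpha")
    o1 = gr.Plot()

    # Live updates: any slider change re-renders the plot.
    for s in (d0, d1, d2):
        s.change(fn=make_plot, inputs=[d0, d1, d2], outputs=[o1])

demo.launch()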