"""
===========================================================================
Gradio Demo to Plot Ridge coefficients as a function of the regularization
===========================================================================

Shows the effect of collinearity in the coefficients of an estimator.

.. currentmodule:: sklearn.linear_model

:class:`Ridge` Regression is the estimator used in this example.
Each color represents a different feature of the
coefficient vector, and this is displayed as a function of the
regularization parameter.

This example also shows the usefulness of applying Ridge regression
to highly ill-conditioned matrices. For such matrices, a slight
change in the target variable can cause huge variances in the
calculated weights. In such cases, it is useful to set a certain
regularization (alpha) to reduce this variation (noise).

When alpha is very large, the regularization effect dominates the
squared loss function and the coefficients tend to zero.
At the end of the path, as alpha tends toward zero
and the solution tends towards the ordinary least squares, coefficients
exhibit big oscillations. In practice it is necessary to tune alpha
in such a way that a balance is maintained between both.
"""

# Author: Fabian Pedregosa -- <fabian.pedregosa@inria.fr>
# License: BSD 3 clause
# Demo Author: Syed Affan


import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
import gradio as gr

def make_plot(size_X, min_alpha, max_alpha):
    # X is the size_X x size_X Hilbert matrix, H[i, j] = 1 / (i + j + 1),
    # a classic example of a badly conditioned matrix.
    X = 1.0 / (np.arange(1, size_X + 1) + np.arange(0, size_X)[:, np.newaxis])
    y = np.ones(size_X)

    # Compute paths: alphas spans 10**min_alpha to 10**max_alpha on a log scale.
    fig = plt.figure()
    n_alphas = 200
    alphas = np.logspace(min_alpha, max_alpha, n_alphas)

    # Fit one Ridge model per alpha and record its coefficient vector.
    coefs = []
    for a in alphas:
        ridge = linear_model.Ridge(alpha=a, fit_intercept=False)
        ridge.fit(X, y)
        coefs.append(ridge.coef_)

    # Display results
    ax = plt.gca()

    ax.plot(alphas, coefs)
    ax.set_xscale("log")
    ax.set_xlim(ax.get_xlim()[::-1])  # reverse axis: alpha decreases left to right
    plt.xlabel("alpha")
    plt.ylabel("weights")
    plt.title("Ridge coefficients as a function of the regularization")
    plt.axis("tight")
    return fig
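
# Illustrative sketch (not part of the original scikit-learn example): the
# Hilbert matrix used above is famously ill-conditioned, which is why the
# low-alpha (near-OLS) end of the path oscillates. This helper is not called
# by the demo; run it manually to see the condition number blow up and a tiny
# perturbation of the target swing the least-squares weights.
def show_ill_conditioning(n=10, eps=1e-6, seed=0):
    X = 1.0 / (np.arange(1, n + 1) + np.arange(0, n)[:, np.newaxis])
    y = np.ones(n)
    print("condition number of X:", np.linalg.cond(X))
    rng = np.random.default_rng(seed)
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    w_noisy = np.linalg.lstsq(X, y + eps * rng.standard_normal(n), rcond=None)[0]
    print("max weight change from eps-sized noise:", np.abs(w - w_noisy).max())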

title = 'Plot Ridge coefficients as a function of the regularization'

model_card = """
## Description
This interactive demo is based on the [Plot Ridge coefficients as a function of the regularization](https://scikit-learn.org/stable/_downloads/9d5a4167bc60f250de65fe21497c1eb6/plot_ridge_path.py) example from [scikit-learn](https://scikit-learn.org/stable/), a widely used machine-learning library for Python.

The demo shows the effect of collinearity on the coefficients of an estimator by plotting the regularization strengths against the coefficients the model learns. It also shows the usefulness of applying Ridge regression to highly ill-conditioned matrices: for such matrices, a slight change in the target variable can cause huge variances in the calculated weights, so it is useful to set a certain regularization (alpha) to reduce this variation (noise). You can play with the range of `Alpha` values and the `Training Size`.

When alpha is very large, the regularization effect dominates the squared loss function and the coefficients tend to zero. At the other end of the path, as alpha tends toward zero, the solution tends towards ordinary least squares and the coefficients exhibit big oscillations. In practice it is necessary to tune alpha so that a balance is maintained between the two.

## Model
The estimator used in this example is [`Ridge`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) from [sklearn.linear_model](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model).
Each color represents a different feature of the coefficient vector, displayed as a function of the regularization parameter.
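
For reference, a minimal standalone sketch of the path computation this demo performs (only NumPy and scikit-learn are assumed):

```python
import numpy as np
from sklearn import linear_model

n = 10
X = 1.0 / (np.arange(1, n + 1) + np.arange(0, n)[:, np.newaxis])  # Hilbert matrix
y = np.ones(n)

alphas = np.logspace(-10, -2, 200)
coefs = [linear_model.Ridge(alpha=a, fit_intercept=False).fit(X, y).coef_ for a in alphas]
```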
"""

with gr.Blocks(title=title) as demo:
    gr.Markdown('''
            <div>
            <h1 style='text-align: center'>Plot Ridge coefficients as a function of the regularization</h1>
            </div>
        ''')
    gr.Markdown(model_card)
    gr.Markdown("Author: <a href=\"https://huggingface.co/sulpha\">sulpha</a>")
    d0 = gr.Slider(1, 101, value=10, step=10, label='Select Size of Training Set')
    with gr.Column():
        with gr.Tab('Select Alpha Range'):
            gr.Markdown('The two sliders set the exponents of a log-spaced array of regularization values, which is fed to the model and plotted against the returned weights.')
            d1 = gr.Slider(-20, 20, value=-10, step=1, label='Minimum alpha exponent (10^x)')
            d2 = gr.Slider(-20, 20, value=-2, step=1, label='Maximum alpha exponent (10^x)')

    o1 = gr.Plot()

    # Re-render the plot whenever any input changes, and draw it once on page load.
    d0.change(fn=make_plot, inputs=[d0, d1, d2], outputs=[o1])
    d1.change(fn=make_plot, inputs=[d0, d1, d2], outputs=[o1])
    d2.change(fn=make_plot, inputs=[d0, d1, d2], outputs=[o1])
    demo.load(fn=make_plot, inputs=[d0, d1, d2], outputs=[o1])

demo.launch()
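# Note: the plain launch() above serves the app locally (or on the hosting
# Space); demo.launch(share=True) would additionally create a temporary
# public link when running on your own machine.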