sulpha committed on
Commit df54b25 • 1 Parent(s): 578504d

Update app.py

Files changed (1): app.py (+4 -4)
app.py CHANGED
@@ -72,15 +72,15 @@ title='Plot Ridge coefficients as a function of the regularization'
 
 model_card=f"""
 ## Description
-Shows the effect of collinearity in the coefficients of an estimator.
+This interactive demo is based on the [Plot Ridge coefficients as a function of the regularization](https://scikit-learn.org/stable/_downloads/9d5a4167bc60f250de65fe21497c1eb6/plot_ridge_path.py) example from the popular [scikit-learn](https://scikit-learn.org/stable/) library for machine learning in Python.
+It demonstrates the effect of collinearity on the coefficients of an estimator by plotting the selected regularization values against the coefficients learnt by the model.
 
-This example also shows the usefulness of applying Ridge regression to highly ill-conditioned matrices.
-For such matrices, a slight change in the target variable can cause huge variances in the calculated weights. In such cases, it is useful to set a certain regularization (alpha) to reduce this variation (noise).
+It also shows the usefulness of applying Ridge regression to highly ill-conditioned matrices. For such matrices, a slight change in the target variable can cause huge variances in the calculated weights. In such cases, it is useful to set a certain regularization (alpha) to reduce this variation (noise). You can play with the range of `Alpha` values and the `Training Size`.
 
 When alpha is very large, the regularization effect dominates the squared loss function and the coefficients tend to zero. At the end of the path, as alpha tends toward zero and the solution tends towards ordinary least squares, the coefficients exhibit big oscillations. In practice, alpha must be tuned so that a balance is maintained between the two.
 
 ## Model
-currentmodule: sklearn.linear_model
+currentmodule: [sklearn.linear_model](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model)
 
 class:`Ridge` Regression is the estimator used in this example.
 Each color represents a different feature of the coefficient vector, and this is displayed as a function of the regularization parameter.
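The coefficient path described in the model card can be sketched as follows. This is a minimal sketch, assuming a 10×10 Hilbert matrix as the ill-conditioned design and the alpha range used in the upstream scikit-learn example; the actual data, alpha range, and training size in app.py are controlled by the demo's widgets and may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Ill-conditioned design matrix: a 10x10 Hilbert matrix, X[i, j] = 1 / (i + j + 1).
X = 1.0 / (np.arange(1, 11) + np.arange(0, 10)[:, np.newaxis])
y = np.ones(10)

# Sweep alpha from very weak to stronger regularization and record
# the coefficient vector the model learns at each value.
alphas = np.logspace(-10, -2, 200)
coefs = []
for a in alphas:
    ridge = Ridge(alpha=a, fit_intercept=False)
    ridge.fit(X, y)
    coefs.append(ridge.coef_)

coefs = np.array(coefs)  # shape (n_alphas, n_features)
```

Plotting each column of `coefs` against `alphas` on a log-scaled x-axis reproduces the figure: one colored line per feature, oscillating wildly near the ordinary-least-squares end (tiny alpha) and shrinking toward zero as alpha grows.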