linear_model.LassoLars()

class sklearn.linear_model.LassoLars(alpha=1.0, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.2204460492503131e-16, copy_X=True, fit_path=True, positive=False) [source]

Lasso model fit with Least Angle Regression a.k.a. Lars

It is a Linear Model trained with an L1 prior as regularizer.

The optimization objective for Lasso is:

(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

Read more in the User Guide.
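
As a rough illustration that is not part of the original reference, the sketch below plugs the fitted coefficients into this objective on a tiny made-up dataset. fit_intercept and normalize are disabled so that the raw X and y match the quantities in the formula.

>>> import numpy as np
>>> from sklearn.linear_model import LassoLars
>>> X = np.array([[-1., 1.], [0., 0.], [1., 1.]])
>>> y = np.array([-1., 0., -1.])
>>> alpha = 0.01
>>> clf = LassoLars(alpha=alpha, fit_intercept=False, normalize=False).fit(X, y)
>>> residual = y - np.dot(X, clf.coef_)  # y - Xw
>>> objective = (residual ** 2).sum() / (2 * len(y)) + alpha * np.abs(clf.coef_).sum()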

Parameters:

alpha : float

Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to ordinary least squares, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised; prefer the LinearRegression object instead.

fit_intercept : boolean

Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. the data is expected to be already centered).

positive : boolean (default=False)

Restrict coefficients to be >= 0. Be aware that you might want to disable fit_intercept, which is set to True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.

verbose : boolean or integer, optional

Sets the verbosity amount

normalize : boolean, optional, default True

If True, the regressors X will be normalized before regression. This parameter is ignored when fit_intercept is set to False. Normalizing the regressors makes the learnt hyperparameters more robust and almost independent of the number of samples; the same property does not hold for merely standardized data. If you wish to standardize instead, use preprocessing.StandardScaler before calling fit on an estimator with normalize=False (see the sketch after this parameter list).

copy_X : boolean, optional, default True

If True, X will be copied; else, it may be overwritten.

precompute : True | False | 'auto' | array-like

Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' the estimator decides. The Gram matrix can also be passed as an argument (see the sketch after this parameter list).

max_iter : integer, optional

Maximum number of iterations to perform.

eps : float, optional

The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.

fit_path : boolean

If True, the full path is stored in the coef_path_ attribute. If you compute the solution for a large problem or many targets, setting fit_path to False will lead to a speedup, especially with a small alpha.
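
The following sketch, an illustrative example rather than part of the original reference, shows the two usage patterns mentioned above: standardizing with preprocessing.StandardScaler instead of relying on normalize=True, and passing a precomputed Gram matrix via precompute. In the Gram-matrix case the intercept and normalization are turned off so that the Gram matrix of the raw X stays consistent with the data the estimator actually uses.

>>> import numpy as np
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import LassoLars
>>> X = np.array([[-1., 1.], [0., 0.], [1., 1.]])
>>> y = np.array([-1., 0., -1.])
>>> # standardize explicitly rather than using normalize=True
>>> scaled = make_pipeline(StandardScaler(), LassoLars(alpha=0.01, normalize=False))
>>> scaled = scaled.fit(X, y)
>>> # pass a precomputed Gram matrix (and, optionally, Xy to fit)
>>> gram = np.dot(X.T, X)
>>> reg = LassoLars(alpha=0.01, fit_intercept=False, normalize=False, precompute=gram)
>>> reg = reg.fit(X, y, Xy=np.dot(X.T, y))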

Attributes:

alphas_ : array, shape (n_alphas + 1,) | list of n_targets such arrays

Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features, or the number of nodes in the path with correlation greater than alpha, whichever is smaller.

active_ : list, length = n_alphas | list of n_targets such lists

Indices of active variables at the end of the path.

coef_path_ : array, shape (n_features, n_alphas + 1) or list

If a list is passed it's expected to be one of n_targets such arrays. The varying values of the coefficients along the path. It is not present if the fit_path parameter is False.

coef_ : array, shape (n_features,) or (n_targets, n_features)

Parameter vector (w in the objective formula above).

intercept_ : float | array, shape (n_targets,)

Independent term in decision function.

n_iter_ : array-like or int.

The number of iterations taken by lars_path to find the grid of alphas for each target.

Examples

>>> from sklearn import linear_model
>>> clf = linear_model.LassoLars(alpha=0.01)
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
LassoLars(alpha=0.01, copy_X=True, eps=..., fit_intercept=True,
     fit_path=True, max_iter=500, normalize=True, positive=False,
     precompute='auto', verbose=False)
>>> print(clf.coef_) 
[ 0.         -0.963257...]
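
As a small, illustrative continuation of the example above (not part of the original doctest), the path computed during fit can be inspected through the attributes described earlier whenever fit_path=True (the default):

>>> path_alphas = clf.alphas_    # maximum absolute covariance at each step
>>> path_coefs = clf.coef_path_  # coefficients along the path, shape (n_features, n_alphas + 1)
>>> active = clf.active_         # indices of active variables at the end of the path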

Methods

decision_function(*args, **kwargs) DEPRECATED: will be removed in 0.19.
fit(X, y[, Xy]) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Returns the coefficient of determination R^2 of the prediction.
set_params(**params) Set the parameters of this estimator.
__init__(alpha=1.0, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.2204460492503131e-16, copy_X=True, fit_path=True, positive=False) [source]
decision_function(*args, **kwargs) [source]

DEPRECATED: will be removed in 0.19.

Decision function of the linear model.

Parameters:

X : {array-like, sparse matrix}, shape = (n_samples, n_features)

Samples.

Returns:

C : array, shape = (n_samples,)

Returns predicted values.

fit(X, y, Xy=None) [source]

Fit the model using X, y as training data.

Parameters:

X : array-like, shape (n_samples, n_features)

Training data.

y : array-like, shape (n_samples,) or (n_samples, n_targets)

Target values.

Xy : array-like, shape (n_samples,) or (n_samples, n_targets), optional

Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.

Returns:

self : object

returns an instance of self.

get_params(deep=True) [source]

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.

predict(X) [source]

Predict using the linear model

Parameters:

X : {array-like, sparse matrix}, shape = (n_samples, n_features)

Samples.

Returns:

C : array, shape = (n_samples,)

Returns predicted values.

score(X, y, sample_weight=None) [source]

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters:

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True values for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:

score : float

R^2 of self.predict(X) wrt. y.
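
For illustration only (the helper name below is made up, not part of the API), the definition above can be written out by hand; for a fitted estimator reg, r2_manual(y, reg.predict(X)) should agree with reg.score(X, y):

>>> import numpy as np
>>> def r2_manual(y_true, y_pred):
...     u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
...     v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
...     return 1.0 - u / v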

set_params(**params) [source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:

self
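
As an illustrative sketch (the pipeline and its step name below are examples, not part of this class), nested parameters use the <component>__<parameter> form:

>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import LassoLars
>>> pipe = make_pipeline(StandardScaler(), LassoLars())
>>> pipe = pipe.set_params(lassolars__alpha=0.1)  # step names are lowercased class names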