sklearn.linear_model.lasso_path()

sklearn.linear_model.lasso_path(X, y, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, **params) [source]

Compute the Lasso path with coordinate descent.

The Lasso optimization function varies for mono- and multi-output tasks. For mono-output tasks it is:

    (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

For multi-output tasks it is:

    (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21

where ||W||_21 = sum_i sqrt(sum_j w_ij^2), i.e. the sum of the norms of each row.
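A minimal sketch of calling lasso_path; the dataset and grid size here are illustrative choices, not part of the reference:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lasso_path

X, y = load_diabetes(return_X_y=True)

# alphas: decreasing grid of regularization strengths;
# coefs: coefficients along the path, shape (n_features, n_alphas)
alphas, coefs, dual_gaps = lasso_path(X, y, eps=1e-3, n_alphas=50)
```

As alpha shrinks toward alphas[-1], more coefficients become non-zero, which is the usual way the path is plotted.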

sklearn.linear_model.lars_path()

sklearn.linear_model.lars_path(X, y, Xy=None, Gram=None, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.2204460492503131e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False) [source]

Compute the Least Angle Regression or Lasso path using the LARS algorithm [1].

The optimization objective for the case method='lasso' is:

    (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

In the case of method='lar', the objective function is only known in the form of an implicit equation (see the discussion in [1]).
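A sketch of computing the Lasso path via LARS on the same illustrative dataset; with return_path=True the function returns the alpha grid, the indices of active variables, and the coefficient path:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lars_path

X, y = load_diabetes(return_X_y=True)

# method='lasso' makes LARS return the Lasso solution path
alphas, active, coefs = lars_path(X, y, method='lasso')
```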

sklearn.learning_curve.validation_curve()

Warning: DEPRECATED

sklearn.learning_curve.validation_curve(estimator, X, y, param_name, param_range, cv=None, scoring=None, n_jobs=1, pre_dispatch='all', verbose=0) [source]

Validation curve.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.validation_curve instead.

Determine training and test scores for varying parameter values. Compute scores for an estimator with different values of a specified parameter. This is similar to grid search with one parameter; however, it also computes training scores and is merely a utility for plotting the results.
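Since the sklearn.learning_curve module was removed, a sketch using the sklearn.model_selection replacement named in the deprecation note; estimator, parameter, and range are illustrative:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_range = np.logspace(-3, 2, 5)

# One row of scores per parameter value, one column per CV fold
train_scores, test_scores = validation_curve(
    SVC(kernel='rbf'), X, y,
    param_name='gamma', param_range=param_range, cv=3)
```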

sklearn.learning_curve.learning_curve()

Warning: DEPRECATED

sklearn.learning_curve.learning_curve(estimator, X, y, train_sizes=array([ 0.1, 0.33, 0.55, 0.78, 1. ]), cv=None, scoring=None, exploit_incremental_learning=False, n_jobs=1, pre_dispatch='all', verbose=0, error_score='raise') [source]

Learning curve.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.learning_curve instead.

Determines cross-validated training and test scores for different training set sizes. A cross-validation generator splits the whole dataset k times into training and test data; subsets of the training set with varying sizes are then used to train the estimator, and a score for each training subset size and for the test set is computed.
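A sketch using the sklearn.model_selection replacement; the estimator and the train_sizes fractions are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_iris(return_X_y=True)

# train_sizes given as fractions of the maximum training set size;
# returned train_sizes holds the resulting absolute sample counts
train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=[0.3, 0.6, 1.0], cv=3)
```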

sklearn.isotonic.isotonic_regression()

sklearn.isotonic.isotonic_regression(y, sample_weight=None, y_min=None, y_max=None, increasing=True) [source]

Solve the isotonic regression model:

    min sum w[i] (y[i] - y_[i]) ** 2

    subject to y_min <= y_[1] <= y_[2] <= ... <= y_[n] <= y_max

where:
    y[i] are the inputs (real numbers),
    y_[i] are the fitted values,
    w[i] are optional strictly positive weights (default to 1.0).

Read more in the User Guide.

Parameters:

    y : iterable of floating-point values
        The data.

    sample_weight : iterable of floating-point values, optional
        Weights on each point of the regression; if None, all weights are set to 1.0.
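A small worked example with equal weights: pool-adjacent-violators merges the violating runs (4, 2) and (6, 5) into their means, yielding a non-decreasing fit:

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

y = np.array([4.0, 2.0, 3.0, 6.0, 5.0])

# (4, 2) pools with the following 3 to the block mean 3.0;
# (6, 5) pools to 5.5 -> fitted values [3, 3, 3, 5.5, 5.5]
y_fit = np.asarray(isotonic_regression(y, increasing=True))
```

Note that the block means preserve the overall mean of y (4.0 here), a standard property of the equal-weight solution.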

sklearn.grid_search.fit_grid_point()

Warning: DEPRECATED

sklearn.grid_search.fit_grid_point(X, y, estimator, parameters, train, test, scorer, verbose, error_score='raise', **fit_params) [source]

Run fit on one set of parameters.

Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.fit_grid_point instead.

Parameters:

    X : array-like, sparse matrix or list
        Input data.

    y : array-like or None
        Targets for input data.

    estimator : estimator object
        An object of that type is instantiated for each grid point.
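Both fit_grid_point variants were eventually removed from scikit-learn, so rather than call either, here is a minimal sketch of what one grid point amounts to: clone the estimator, set one parameter combination, fit on the train split, and score on the test split (data, split, and parameter values are illustrative):

```python
from sklearn.base import clone
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

parameters = {'C': 1.0, 'kernel': 'linear'}   # one point of the grid

# Clone so the original estimator object is left untouched
est = clone(SVC()).set_params(**parameters)
est.fit(X_tr, y_tr)
score = est.score(X_te, y_te)
```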

sklearn.feature_selection.mutual_info_regression()

sklearn.feature_selection.mutual_info_regression(X, y, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source]

Estimate mutual information for a continuous target variable.

Mutual information (MI) [R173] between two random variables is a non-negative value which measures the dependency between the variables. It is equal to zero if and only if the two random variables are independent, and higher values mean higher dependency.

The function relies on nonparametric methods based on entropy estimation from k-nearest-neighbor distances.
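A sketch on synthetic data where the target depends on only the first feature, so its MI estimate should dominate the two noise features:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.RandomState(0)
X = rng.uniform(size=(500, 3))

# y is a noisy function of feature 0 only
y = X[:, 0] + rng.normal(scale=0.1, size=500)

mi = mutual_info_regression(X, y, random_state=0)
```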

sklearn.isotonic.check_increasing()

sklearn.isotonic.check_increasing(x, y) [source]

Determine whether y is monotonically correlated with x.

y is found increasing or decreasing with respect to x based on a Spearman correlation test.

Parameters:

    x : array-like, shape=(n_samples,)
        Training data.

    y : array-like, shape=(n_samples,)
        Training target.

Returns:

    increasing_bool : boolean
        Whether the relationship is increasing or decreasing.

Notes

The Spearman correlation coefficient is estimated from the data, and the sign of the resulting estimate is used as the result.
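A sketch with two toy sequences, one rising and one falling, showing the boolean that comes back:

```python
from sklearn.isotonic import check_increasing

inc = check_increasing([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])   # positive trend
dec = check_increasing([1, 2, 3, 4, 5], [10, 8, 6, 4, 2])   # negative trend
```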

sklearn.feature_selection.mutual_info_classif()

sklearn.feature_selection.mutual_info_classif(X, y, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source]

Estimate mutual information for a discrete target variable.

Mutual information (MI) [R169] between two random variables is a non-negative value which measures the dependency between the variables. It is equal to zero if and only if the two random variables are independent, and higher values mean higher dependency.

The function relies on nonparametric methods based on entropy estimation from k-nearest-neighbor distances.
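The classification counterpart, sketched on synthetic data where the class label is determined by the first feature alone:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.RandomState(0)
X = rng.uniform(size=(500, 3))

# Binary label depends only on feature 0
y = (X[:, 0] > 0.5).astype(int)

mi = mutual_info_classif(X, y, random_state=0)
```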

sklearn.feature_selection.f_classif()

sklearn.feature_selection.f_classif(X, y) [source]

Compute the ANOVA F-value for the provided sample.

Read more in the User Guide.

Parameters:

    X : {array-like, sparse matrix}, shape = [n_samples, n_features]
        The set of regressors that will be tested sequentially.

    y : array, shape = [n_samples]
        The target vector (class labels).

Returns:

    F : array, shape = [n_features,]
        The set of F values.

    pval : array, shape = [n_features,]
        The set of p-values.

See also

    chi2 : Chi-squared stats of non-negative features for classification tasks.
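A sketch on the iris dataset: one F statistic and one p-value come back per feature, which is how f_classif is typically fed into SelectKBest:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)

# F: per-feature ANOVA F statistics; pval: the matching p-values
F, pval = f_classif(X, y)
```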