linear_model.PassiveAggressiveRegressor()

class sklearn.linear_model.PassiveAggressiveRegressor(C=1.0, fit_intercept=True, n_iter=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False) [source]

Passive Aggressive Regressor. Read more in the User Guide.

Parameters:

C : float
    Maximum step size (regularization). Defaults to 1.0.
epsilon : float
    If the difference between the current prediction and the correct label is below this threshold, the model is not updated.
fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
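A minimal usage sketch on synthetic data (the data and parameter values below are illustrative, not recommendations):

import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

# Synthetic regression data: y is a noisy linear function of X
rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = X.dot(np.array([1.5, -2.0, 0.0, 0.5])) + 0.1 * rng.randn(100)

reg = PassiveAggressiveRegressor(C=1.0, epsilon=0.1, random_state=0)
reg.fit(X, y)
print(reg.predict(X[:3]))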

linear_model.PassiveAggressiveClassifier()

class sklearn.linear_model.PassiveAggressiveClassifier(C=1.0, fit_intercept=True, n_iter=5, shuffle=True, verbose=0, loss='hinge', n_jobs=1, random_state=None, warm_start=False, class_weight=None) [source]

Passive Aggressive Classifier. Read more in the User Guide.

Parameters:

C : float
    Maximum step size (regularization). Defaults to 1.0.
fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
n_iter : int
    The number of passes over the training data (aka epochs). Defaults to 5.
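Because this is an online learner, it can also be trained incrementally via partial_fit. A minimal sketch on synthetic data (values illustrative):

import numpy as np
from sklearn.linear_model import PassiveAggressiveClassifier

# Synthetic binary classification data
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = PassiveAggressiveClassifier(C=1.0, random_state=0)
# partial_fit must be told the full set of classes on the first call
clf.partial_fit(X[:100], y[:100], classes=np.array([0, 1]))
clf.partial_fit(X[100:], y[100:])
print(clf.score(X, y))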

linear_model.OrthogonalMatchingPursuitCV()

class sklearn.linear_model.OrthogonalMatchingPursuitCV(copy=True, fit_intercept=True, normalize=True, max_iter=None, cv=None, n_jobs=1, verbose=False) [source]

Cross-validated Orthogonal Matching Pursuit model (OMP).

Parameters:

copy : bool, optional
    Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway.
fit_intercept : boolean, optional
    Whether to calculate the intercept for this model.
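A minimal sketch showing cross-validation picking the number of non-zero coefficients on synthetic sparse data (values illustrative):

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuitCV

# Sparse ground truth: only 3 of 20 features are informative
rng = np.random.RandomState(0)
X = rng.randn(100, 20)
w = np.zeros(20)
w[[2, 7, 11]] = [1.0, -2.0, 3.0]
y = X.dot(w) + 0.05 * rng.randn(100)

omp_cv = OrthogonalMatchingPursuitCV(cv=5)
omp_cv.fit(X, y)
print(omp_cv.n_nonzero_coefs_)  # sparsity level selected by CV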

linear_model.OrthogonalMatchingPursuit()

class sklearn.linear_model.OrthogonalMatchingPursuit(n_nonzero_coefs=None, tol=None, fit_intercept=True, normalize=True, precompute='auto') [source]

Orthogonal Matching Pursuit model (OMP).

Parameters:

n_nonzero_coefs : int, optional
    Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features.
tol : float, optional
    Maximum norm of the residual. If not None, overrides n_nonzero_coefs.
fit_intercept : boolean, optional
    Whether to calculate the intercept for this model.
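A minimal sketch with the sparsity level fixed by hand rather than cross-validated (same synthetic data setup as the CV sketch above; values illustrative):

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.RandomState(0)
X = rng.randn(100, 20)
w = np.zeros(20)
w[[2, 7, 11]] = [1.0, -2.0, 3.0]
y = X.dot(w) + 0.05 * rng.randn(100)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)
omp.fit(X, y)
print(np.flatnonzero(omp.coef_))  # indices of the selected features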

linear_model.MultiTaskLassoCV()

class sklearn.linear_model.MultiTaskLassoCV(eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=1, random_state=None, selection='cyclic') [source]

Multi-task L1/L2 Lasso with built-in cross-validation.

The optimization objective for MultiTaskLasso is:

(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21

Where:

||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row. Read more in the User Guide.
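A minimal sketch on synthetic multi-output data (values illustrative); the CV object searches the alpha path and exposes the selected value as alpha_:

import numpy as np
from sklearn.linear_model import MultiTaskLassoCV

# 3 tasks sharing the same 2 informative features
rng = np.random.RandomState(0)
X = rng.randn(100, 10)
W = np.zeros((10, 3))
W[[1, 4], :] = rng.randn(2, 3)
Y = X.dot(W) + 0.1 * rng.randn(100, 3)

model = MultiTaskLassoCV(n_alphas=50, cv=3)
model.fit(X, Y)
print(model.alpha_)  # regularization strength chosen by CV
print(np.flatnonzero(model.coef_.any(axis=0)))  # features kept in any task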

linear_model.MultiTaskLasso()

class sklearn.linear_model.MultiTaskLasso(alpha=1.0, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic') [source]

Multi-task Lasso model trained with an L1/L2 mixed-norm as regularizer.

The optimization objective for Lasso is:

(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21

Where:

||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row. Read more in the User Guide.

Parameters:

alpha : float, optional
    Constant that multiplies the L1/L2 term. Defaults to 1.0.
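A minimal sketch with alpha fixed by hand (same synthetic multi-task data as above): the L1/L2 mixed-norm selects features jointly, so a feature is either active for all tasks or dropped for all of them.

import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.RandomState(0)
X = rng.randn(100, 10)
W = np.zeros((10, 3))
W[[1, 4], :] = rng.randn(2, 3)
Y = X.dot(W) + 0.1 * rng.randn(100, 3)

mtl = MultiTaskLasso(alpha=0.5)
mtl.fit(X, Y)
# coef_ has shape (n_tasks, n_features); whole feature columns are zeroed jointly
print(mtl.coef_.shape)
print(np.flatnonzero(mtl.coef_.any(axis=0)))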

linear_model.MultiTaskElasticNetCV()

class sklearn.linear_model.MultiTaskElasticNetCV(l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=1, random_state=None, selection='cyclic') [source]

Multi-task L1/L2 ElasticNet with built-in cross-validation.

The optimization objective for MultiTaskElasticNet is:

(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||^2_Fro

Where:

||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row. Read more in the User Guide.
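A minimal sketch on the same style of synthetic multi-task data (values illustrative); passing a list of l1_ratio values makes the CV search over both alpha and l1_ratio:

import numpy as np
from sklearn.linear_model import MultiTaskElasticNetCV

rng = np.random.RandomState(0)
X = rng.randn(100, 10)
W = np.zeros((10, 3))
W[[1, 4], :] = rng.randn(2, 3)
Y = X.dot(W) + 0.1 * rng.randn(100, 3)

model = MultiTaskElasticNetCV(l1_ratio=[0.3, 0.5, 0.9], cv=3)
model.fit(X, Y)
print(model.alpha_, model.l1_ratio_)  # both selected by CV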

linear_model.MultiTaskElasticNet()

class sklearn.linear_model.MultiTaskElasticNet(alpha=1.0, l1_ratio=0.5, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic') [source]

Multi-task ElasticNet model trained with an L1/L2 mixed-norm as regularizer.

The optimization objective for MultiTaskElasticNet is:

(1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||^2_Fro

Where:

||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row. Read more in the User Guide.
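A minimal sketch with both hyperparameters fixed by hand (synthetic data; values illustrative):

import numpy as np
from sklearn.linear_model import MultiTaskElasticNet

rng = np.random.RandomState(0)
X = rng.randn(100, 10)
W = np.zeros((10, 3))
W[[1, 4], :] = rng.randn(2, 3)
Y = X.dot(W) + 0.1 * rng.randn(100, 3)

model = MultiTaskElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, Y)
print(model.coef_.shape)  # (n_tasks, n_features)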

linear_model.LogisticRegressionCV()

class sklearn.linear_model.LogisticRegressionCV(Cs=10, fit_intercept=True, cv=None, dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=1, verbose=0, refit=True, intercept_scaling=1.0, multi_class='ovr', random_state=None) [source]

Logistic Regression CV (aka logit, MaxEnt) classifier.

This class implements logistic regression using the liblinear, newton-cg, sag or lbfgs optimizer. The newton-cg, sag and lbfgs solvers support only L2 regularization.
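A minimal sketch on the iris dataset (values illustrative): cross-validation selects the regularization strength C, exposed per class under the OvR scheme as C_.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

iris = load_iris()
X, y = iris.data, iris.target

clf = LogisticRegressionCV(Cs=10, cv=5, multi_class='ovr')
clf.fit(X, y)
print(clf.C_)  # best C per class, chosen by cross-validation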

linear_model.LogisticRegression()

class sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1) [source]

Logistic Regression (aka logit, MaxEnt) classifier.

In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the cross-entropy loss if the 'multi_class' option is set to 'multinomial'.
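A minimal sketch on the iris dataset (values illustrative):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X, y = iris.data, iris.target

clf = LogisticRegression(C=1.0, solver='liblinear', multi_class='ovr')
clf.fit(X, y)
print(clf.predict(X[:3]))
print(clf.predict_proba(X[:3]))  # per-class probabilities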