linear_model.MultiTaskElasticNet()

class sklearn.linear_model.MultiTaskElasticNet(alpha=1.0, l1_ratio=0.5, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic') [source]

Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.

The optimization objective for MultiTaskElasticNet is:

    (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
    + alpha * l1_ratio * ||W||_21
    + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2

Where:

    ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row of W.
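A minimal usage sketch in Python; the input arrays below are arbitrary toy values chosen for illustration:

    import numpy as np
    from sklearn.linear_model import MultiTaskElasticNet

    # Toy multi-output regression data: each sample has 2 features and 2 targets.
    X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
    Y = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])

    clf = MultiTaskElasticNet(alpha=0.1)
    clf.fit(X, Y)
    print(clf.coef_)       # shape (n_tasks, n_features); rows share a common support
    print(clf.intercept_)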

ensemble.IsolationForest()

class sklearn.ensemble.IsolationForest(n_estimators=100, max_samples='auto', contamination=0.1, max_features=1.0, bootstrap=False, n_jobs=1, random_state=None, verbose=0) [source]

Isolation Forest Algorithm.

Return the anomaly score of each sample using the IsolationForest algorithm. The IsolationForest 'isolates' observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. Since recursive partitioning can be represented by a tree structure, the number of splittings required to isolate a sample is equivalent to the path length from the root node to the terminating node.
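A minimal usage sketch; the training and outlier samples below are arbitrary illustrative data:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.RandomState(42)
    X_train = 0.3 * rng.randn(100, 2)                     # inliers clustered near the origin
    X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))

    clf = IsolationForest(random_state=rng)
    clf.fit(X_train)
    print(clf.predict(X_outliers))             # -1 for anomalies, +1 for inliers
    print(clf.decision_function(X_outliers))   # lower scores are more anomalous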

pipeline.FeatureUnion()

class sklearn.pipeline.FeatureUnion(transformer_list, n_jobs=1, transformer_weights=None) [source]

Concatenates results of multiple transformer objects.

This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine several feature extraction mechanisms into a single transformer. Parameters of the transformers may be set using its name and the parameter name separated by a '__'. A transformer may be replaced entirely by setting the parameter with its name to another transformer, or removed by setting it to None.
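A minimal sketch combining two decomposition transformers; the choice of PCA and TruncatedSVD and the component counts are illustrative:

    from sklearn.pipeline import FeatureUnion
    from sklearn.decomposition import PCA, TruncatedSVD
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)

    # Run PCA and TruncatedSVD in parallel and concatenate their outputs column-wise.
    union = FeatureUnion([("pca", PCA(n_components=2)),
                          ("svd", TruncatedSVD(n_components=1))])
    X_combined = union.fit_transform(X)
    print(X_combined.shape)   # (150, 3): 2 PCA components + 1 SVD component

    # Nested parameters use the "<name>__<param>" convention.
    union.set_params(pca__n_components=1)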

lda.LDA()

Warning: DEPRECATED

class sklearn.lda.LDA(solver='svd', shrinkage=None, priors=None, n_components=None, store_covariance=False, tol=0.0001) [source]

Alias for sklearn.discriminant_analysis.LinearDiscriminantAnalysis.

Deprecated since version 0.17: This class will be removed in 0.19. Use sklearn.discriminant_analysis.LinearDiscriminantAnalysis instead.

Methods

decision_function(X) - Predict confidence scores for samples.
fit(X, y[, store_covariance, tol]) - Fit LinearDiscriminantAnalysis model according to the given training data and parameters.
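A minimal sketch using the recommended replacement class directly; the iris dataset is used only as illustrative input:

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)

    clf = LinearDiscriminantAnalysis()   # the estimator that sklearn.lda.LDA aliases
    clf.fit(X, y)
    print(clf.predict(X[:3]))            # class predictions
    print(clf.decision_function(X[:3]))  # per-class confidence scores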

naive_bayes.GaussianNB()

class sklearn.naive_bayes.GaussianNB(priors=None) [source]

Gaussian Naive Bayes (GaussianNB).

Can perform online updates to model parameters via the partial_fit method. For details on the algorithm used to update feature means and variance online, see the Stanford CS tech report STAN-CS-79-773 by Chan, Golub, and LeVeque: http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf

Read more in the User Guide.

Parameters:

priors : array-like, shape (n_classes,)
    Prior probabilities of the classes. If specified, the priors are not adjusted according to the data.
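A minimal usage sketch with toy two-class data, also showing the incremental partial_fit path:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    y = np.array([1, 1, 1, 2, 2, 2])

    clf = GaussianNB()
    clf.fit(X, y)
    print(clf.predict([[-0.8, -1]]))   # predicts class 1

    # partial_fit allows online updates on successive batches;
    # the full list of classes must be given on the first call.
    clf_pf = GaussianNB()
    clf_pf.partial_fit(X, y, classes=np.unique(y))
    print(clf_pf.predict([[-0.8, -1]]))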

random_projection.SparseRandomProjection()

class sklearn.random_projection.SparseRandomProjection(n_components='auto', density='auto', eps=0.1, dense_output=False, random_state=None) [source]

Reduce dimensionality through sparse random projection.

A sparse random matrix is an alternative to a dense random projection matrix that guarantees similar embedding quality while being much more memory efficient and allowing faster computation of the projected data. If we note s = 1 / density, the components of the random matrix are drawn from:

    -sqrt(s) / sqrt(n_components)   with probability 1 / (2s)
     0                              with probability 1 - 1 / s
    +sqrt(s) / sqrt(n_components)   with probability 1 / (2s)
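A minimal sketch projecting random high-dimensional data; the data shape is illustrative, and with n_components='auto' the target dimension is derived from n_samples and eps:

    import numpy as np
    from sklearn.random_projection import SparseRandomProjection

    rng = np.random.RandomState(42)
    X = rng.rand(100, 10000)             # 100 samples in a very high-dimensional space

    transformer = SparseRandomProjection(random_state=rng)
    X_new = transformer.fit_transform(X)
    print(X_new.shape)                   # reduced dimensionality chosen automatically
    print(transformer.density_)          # density of the sparse projection matrix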

mixture.GMM()

Warning: DEPRECATED

class sklearn.mixture.GMM(*args, **kwargs) [source]

Legacy Gaussian Mixture Model.

Deprecated since version 0.18: This class will be removed in 0.20. Use sklearn.mixture.GaussianMixture instead.

Methods

aic(X) - Akaike information criterion for the current model fit and the proposed data.
bic(X) - Bayesian information criterion for the current model fit and the proposed data.
fit(X[, y]) - Estimate model parameters with the EM algorithm.
fit_predict(X[, y]) - Fit and then predict labels for data.
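A minimal sketch using the recommended replacement class; the two synthetic blobs below are arbitrary illustrative data:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.RandomState(0)
    X = np.concatenate([rng.randn(100, 2), 5 + rng.randn(100, 2)])   # two blobs

    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(X)                  # EM estimation of the mixture parameters
    print(gmm.means_)           # estimated component means
    print(gmm.predict(X[:5]))   # hard cluster assignments
    print(gmm.bic(X))           # Bayesian information criterion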

svm.NuSVC()

class sklearn.svm.NuSVC(nu=0.5, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=None, random_state=None) [source]

Nu-Support Vector Classification.

Similar to SVC but uses a parameter to control the number of support vectors. The implementation is based on libsvm.

Read more in the User Guide.

Parameters:

nu : float, optional (default=0.5)
    An upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1].
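A minimal usage sketch with toy two-class data:

    import numpy as np
    from sklearn.svm import NuSVC

    X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
    y = np.array([1, 1, 2, 2])

    clf = NuSVC()                      # nu=0.5 by default
    clf.fit(X, y)
    print(clf.predict([[-0.8, -1]]))   # predicts class 1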

ensemble.GradientBoostingClassifier()

class sklearn.ensemble.GradientBoostingClassifier(loss='deviance', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_split=1e-07, init=None, random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, presort='auto') [source]

Gradient Boosting for classification.

GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage n_classes_ regression trees are fit on the negative gradient of the binomial or multinomial deviance loss function.
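A minimal usage sketch; the dataset, split sizes, and hyperparameters below are illustrative choices:

    from sklearn.datasets import make_hastie_10_2
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_hastie_10_2(random_state=0)
    X_train, X_test = X[:2000], X[2000:]
    y_train, y_test = y[:2000], y[2000:]

    clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
                                     max_depth=1, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))   # held-out accuracy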

1.9. Naive Bayes

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of independence between every pair of features. Given a class variable y and a dependent feature vector x_1 through x_n, Bayes' theorem states the following relationship:

    P(y | x_1, ..., x_n) = P(y) P(x_1, ..., x_n | y) / P(x_1, ..., x_n)

Using the naive independence assumption that

    P(x_i | y, x_1, ..., x_{i-1}, x_{i+1}, ..., x_n) = P(x_i | y)

for all i, this relationship is simplified to

    P(y | x_1, ..., x_n) = P(y) \prod_{i=1}^{n} P(x_i | y) / P(x_1, ..., x_n)

Since P(x_1, ..., x_n) is constant given the input, we can use the following classification rule:

    P(y | x_1, ..., x_n) \propto P(y) \prod_{i=1}^{n} P(x_i | y)

    \hat{y} = \arg\max_y P(y) \prod_{i=1}^{n} P(x_i | y)

and we can use Maximum A Posteriori (MAP) estimation to estimate P(y) and P(x_i | y).
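A small worked sketch of the MAP classification rule above, with made-up priors and per-feature likelihoods (the labels "spam"/"ham" and all probabilities are hypothetical):

    # Two classes, two binary features; scores follow arg max_y P(y) * prod_i P(x_i | y).
    priors = {"spam": 0.4, "ham": 0.6}
    likelihoods = {
        "spam": [0.8, 0.3],   # P(x_1=1 | spam), P(x_2=1 | spam)
        "ham":  [0.2, 0.6],   # P(x_1=1 | ham),  P(x_2=1 | ham)
    }

    x = [1, 0]   # observed feature vector

    scores = {}
    for label in priors:
        score = priors[label]
        for xi, p in zip(x, likelihoods[label]):
            score *= p if xi == 1 else (1 - p)   # P(x_i | y) under the independence assumption
        scores[label] = score

    print(max(scores, key=scores.get), scores)   # "spam" wins: 0.224 vs 0.048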