sklearn.datasets.make_s_curve()

sklearn.datasets.make_s_curve(n_samples=100, noise=0.0, random_state=None) [source]

Generate an S curve dataset. Read more in the User Guide.

Parameters:

n_samples : int, optional (default=100)
    The number of sample points on the S curve.
noise : float, optional (default=0.0)
    The standard deviation of the Gaussian noise.
random_state : int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
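A minimal usage sketch (the parameter values are illustrative):

import numpy as np
from sklearn.datasets import make_s_curve

# X holds the 3-D points on the S curve, t the univariate position along it
X, t = make_s_curve(n_samples=100, noise=0.05, random_state=0)
print(X.shape)  # (100, 3)
print(t.shape)  # (100,)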

Support Vector Regression using linear and non-linear kernels

Toy example of 1D regression using linear, polynomial and RBF kernels.

print(__doc__)

import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt

# Generate sample data
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()

# Add noise to every fifth target
y[::5] += 3 * (0.5 - np.random.rand(8))

# Fit one regression model per kernel
svr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_rbf = svr_rbf.fit(X, y).predict(X)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)
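The example then plots the three fits against the noisy data; a minimal sketch of that step (labels and styling here are illustrative):

plt.scatter(X, y, color='darkorange', label='data')
plt.plot(X, y_rbf, label='RBF model')
plt.plot(X, y_lin, label='Linear model')
plt.plot(X, y_poly, label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()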

neighbors.NearestCentroid()

class sklearn.neighbors.NearestCentroid(metric='euclidean', shrink_threshold=None) [source]

Nearest centroid classifier. Each class is represented by its centroid, with test samples classified to the class with the nearest centroid. Read more in the User Guide.

Parameters:

metric : string, or callable
    The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter.
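A minimal usage sketch (the toy data is illustrative):

import numpy as np
from sklearn.neighbors import NearestCentroid

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])
clf = NearestCentroid()
clf.fit(X, y)
print(clf.predict([[-0.8, -1]]))  # [1], the nearest centroid is class 1's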

Demo of affinity propagation clustering algorithm

Reference: Brendan J. Frey and Delbert Dueck, "Clustering by Passing Messages Between Data Points", Science, Feb. 2007.

print(__doc__)

from sklearn.cluster import AffinityPropagation
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs

# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=300, centers=centers,
                            cluster_std=0.5, random_state=0)

# Compute Affinity Propagation
af = AffinityPropagation(preference=-50).fit(X)
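The clusters can then be read off the fitted estimator's documented attributes; a brief sketch:

cluster_centers_indices = af.cluster_centers_indices_
labels = af.labels_
n_clusters_ = len(cluster_centers_indices)
print('Estimated number of clusters: %d' % n_clusters_)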

Feature agglomeration vs. univariate selection

This example compares two dimensionality reduction strategies: univariate feature selection with ANOVA, and feature agglomeration with Ward hierarchical clustering. Both methods are compared in a regression problem using a BayesianRidge as the supervised estimator.

# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# License: BSD 3 clause

print(__doc__)

import shutil
import tempfile

import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg, ndimage

from sklearn.feature_extraction.image import grid_to_graph
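A condensed sketch of the two strategies on synthetic data (the data, cluster count, and k below are illustrative, not the example's own):

import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import BayesianRidge
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.randn(50, 100)
y = X[:, :5].sum(axis=1)  # target depends only on the first 5 features

# Strategy 1: univariate (ANOVA) feature selection, then Bayesian ridge
anova_ridge = Pipeline([('select', SelectKBest(f_regression, k=10)),
                        ('ridge', BayesianRidge())])
anova_ridge.fit(X, y)

# Strategy 2: Ward feature agglomeration, then Bayesian ridge
ward_ridge = Pipeline([('agglo', FeatureAgglomeration(n_clusters=10)),
                       ('ridge', BayesianRidge())])
ward_ridge.fit(X, y)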

sklearn.feature_extraction.image.grid_to_graph()

sklearn.feature_extraction.image.grid_to_graph(n_x, n_y, n_z=1, mask=None, return_as=sparse.coo_matrix, dtype=int) [source]

Graph of the pixel-to-pixel connections. Edges exist if 2 voxels are connected.

Parameters:

n_x : int
    Dimension in x axis.
n_y : int
    Dimension in y axis.
n_z : int, optional, default 1
    Dimension in z axis.
mask : ndarray of booleans, optional
    An optional mask of the image, to consider only part of the pixels.
return_as : np.ndarray or a sparse matrix class, optional
    The class to use to build the returned adjacency matrix.
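A minimal usage sketch (the grid dimensions are illustrative):

from sklearn.feature_extraction.image import grid_to_graph

# Connectivity graph of a 4x4 pixel grid: one node per pixel
A = grid_to_graph(n_x=4, n_y=4)
print(A.shape)  # (16, 16) sparse adjacency matrix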

sklearn.datasets.make_sparse_coded_signal()

sklearn.datasets.make_sparse_coded_signal(n_samples, n_components, n_features, n_nonzero_coefs, random_state=None) [source]

Generate a signal as a sparse combination of dictionary elements. Returns a matrix Y = DX, such that D is (n_features, n_components), X is (n_components, n_samples) and each column of X has exactly n_nonzero_coefs non-zero elements. Read more in the User Guide.

Parameters:

n_samples : int
    Number of samples to generate.
n_components : int
    Number of components in the dictionary.
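A minimal usage sketch showing the returned shapes (the sizes are illustrative):

from sklearn.datasets import make_sparse_coded_signal

Y, D, X = make_sparse_coded_signal(n_samples=10, n_components=30,
                                   n_features=20, n_nonzero_coefs=5,
                                   random_state=0)
print(Y.shape)  # (20, 10): the signal, Y = DX
print(D.shape)  # (20, 30): the dictionary
print(X.shape)  # (30, 10): sparse codes, 5 non-zeros per column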

Lasso on dense and sparse data

We show that linear_model.Lasso provides the same results for dense and sparse data, and that in the case of sparse data the speed is improved.

print(__doc__)

from time import time
from scipy import sparse
from scipy import linalg

from sklearn.datasets.samples_generator import make_regression
from sklearn.linear_model import Lasso

# The two Lasso implementations on dense data
print("--- Dense matrices")
X, y = make_regression(n_samples=200, n_features=5000, random_state=0)
X_sp = sparse.coo_matrix(X)
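The comparison then fits the same model on both representations and times each; a hedged sketch (alpha and max_iter values are illustrative):

alpha = 1
sparse_lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=1000)
dense_lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=1000)

t0 = time()
sparse_lasso.fit(X_sp, y)
print("Sparse Lasso done in %fs" % (time() - t0))

t0 = time()
dense_lasso.fit(X, y)
print("Dense Lasso done in %fs" % (time() - t0))

# The two solutions should agree to numerical precision
print("Distance between coefficients: %s"
      % linalg.norm(sparse_lasso.coef_ - dense_lasso.coef_))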

sklearn.utils.check_random_state()

sklearn.utils.check_random_state(seed) [source]

Turn seed into a np.random.RandomState instance. If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded with seed. If seed is already a RandomState instance, return it. Otherwise raise ValueError.

Examples using sklearn.utils.check_random_state: Isotonic Regression; Face completion with multi-output estimators; Empirical evaluation of the impact of k-means initialization.
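A minimal sketch of the three accepted seed types:

import numpy as np
from sklearn.utils import check_random_state

rng = check_random_state(0)      # int: a new RandomState seeded with 0
same = check_random_state(rng)   # RandomState instance: returned as-is
glob = check_random_state(None)  # None: np.random's global RandomState
assert same is rng
print(rng.rand())  # reproducible draw from the seeded generator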

linear_model.PassiveAggressiveClassifier()

class sklearn.linear_model.PassiveAggressiveClassifier(C=1.0, fit_intercept=True, n_iter=5, shuffle=True, verbose=0, loss='hinge', n_jobs=1, random_state=None, warm_start=False, class_weight=None) [source]

Passive Aggressive Classifier. Read more in the User Guide.

Parameters:

C : float
    Maximum step size (regularization). Defaults to 1.0.
fit_intercept : bool, default=True
    Whether the intercept should be estimated or not. If False, the data is assumed to be already centered.
n_iter : int, optional
    The number of passes over the training data (aka epochs). Defaults to 5.
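A minimal usage sketch (the synthetic data is illustrative):

from sklearn.datasets import make_classification
from sklearn.linear_model import PassiveAggressiveClassifier

X, y = make_classification(n_samples=100, random_state=0)
clf = PassiveAggressiveClassifier(C=1.0, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # mean accuracy on the training data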