neighbors.BallTree

class sklearn.neighbors.BallTree(X, leaf_size=40, metric='minkowski', **kwargs)

BallTree for fast generalized N-point problems.

Parameters:

X : array-like, shape = [n_samples, n_features]
    n_samples is the number of points in the data set, and n_features is the dimension of the parameter space. Note: if X is a C-contiguous array of doubles then data will not be copied. Otherwise, an internal copy will be made.

leaf_size : positive integer (default = 40)
    Number of points at which to switch to brute-force.
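A minimal usage sketch, assuming random sample data (the array shape, leaf_size and k values below are illustrative, not taken from the reference):

import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.RandomState(0)
X = rng.random_sample((10, 3))       # 10 points in 3 dimensions
tree = BallTree(X, leaf_size=2)      # build the tree
dist, ind = tree.query(X[:1], k=3)   # 3 nearest neighbours of the first point
print(ind)   # indices of the neighbours
print(dist)  # distances to the neighbours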

Nearest Neighbors regression

Demonstrates the resolution of a regression problem using k-Nearest Neighbors, interpolating the target with both barycenter and constant weights.

print(__doc__)

# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#         Fabian Pedregosa <fabian.pedregosa@inria.fr>
#
# License: BSD 3 clause (C) INRIA

# Generate sample data
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors

np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
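The example goes on to fit a KNeighborsRegressor with both weighting schemes; the continuation below is a sketch, with the noise level, evaluation grid and n_neighbors chosen for illustration:

T = np.linspace(0, 5, 500)[:, np.newaxis]   # evaluation grid
y = np.sin(X).ravel()
y[::5] += 1 * (0.5 - np.random.rand(8))     # add noise to every 5th target

n_neighbors = 5
for weights in ['uniform', 'distance']:
    knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
    y_ = knn.fit(X, y).predict(T)
    plt.plot(T, y_, label="prediction (weights='%s')" % weights)
plt.scatter(X, y, c='k', label='data')
plt.legend()
plt.show()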

Nearest Neighbors Classification

Sample usage of Nearest Neighbors classification. It plots the decision boundaries for each class.

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets

n_neighbors = 15

# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features. We could
                      # avoid this ugly slicing by using a two-dim dataset
y = iris.target
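A sketch of the remaining steps, fitting a KNeighborsClassifier and drawing its decision regions on a mesh (the step size and colour map are assumptions, not the example's exact values):

clf = neighbors.KNeighborsClassifier(n_neighbors, weights='uniform')
clf.fit(X, y)

# predict over a grid spanning the two features and plot the decision regions
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                     np.arange(y_min, y_max, 0.02))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.pcolormesh(xx, yy, Z, cmap=ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF']))
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k')
plt.title("3-Class classification (k = %i)" % n_neighbors)
plt.show()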

Nearest Centroid Classification

Sample usage of Nearest Centroid classification. It plots the decision boundaries for each class.

Out (the shrink_threshold value followed by the accuracy on the training data):

None 0.813333333333
0.2 0.82

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.neighbors import NearestCentroid

n_neighbors = 15

# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features. We could
                      # avoid this ugly slicing by using a two-dim dataset
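A sketch of the classifier usage behind those numbers, looping over the two shrink_threshold values (the accuracy is measured on the training data, as assumed above):

y = iris.target

for shrinkage in [None, 0.2]:
    clf = NearestCentroid(shrink_threshold=shrinkage)
    clf.fit(X, y)
    y_pred = clf.predict(X)
    print(shrinkage, np.mean(y == y_pred))   # e.g. None 0.813..., then 0.2 0.82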

naive_bayes.MultinomialNB()

class sklearn.naive_bayes.MultinomialNB(alpha=1.0, fit_prior=True, class_prior=None)

Naive Bayes classifier for multinomial models.

The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts; however, in practice, fractional counts such as tf-idf may also work. Read more in the User Guide.

Parameters:

alpha : float, optional (default=1.0)
    Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
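A minimal fit/predict sketch on random integer counts (the data below is synthetic and only illustrative):

import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(1)
X = rng.randint(5, size=(6, 100))   # e.g. word counts for 6 documents
y = np.array([1, 2, 3, 4, 5, 6])

clf = MultinomialNB(alpha=1.0)
clf.fit(X, y)
print(clf.predict(X[2:3]))   # predicts the class of the third document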

naive_bayes.GaussianNB()

class sklearn.naive_bayes.GaussianNB(priors=None)

Gaussian Naive Bayes (GaussianNB).

Can perform online updates to model parameters via the partial_fit method. For details on the algorithm used to update feature means and variances online, see the Stanford CS tech report STAN-CS-79-773 by Chan, Golub, and LeVeque: http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf Read more in the User Guide.

Parameters:

priors : array-like, shape (n_classes,)
    Prior probabilities of the classes.
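A short sketch of fit and of the partial_fit online update mentioned above (the toy data is made up for illustration):

import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])

clf = GaussianNB()
clf.fit(X, y)
print(clf.predict([[-0.8, -1]]))

# online updates: pass the full set of classes on the first call
clf_pf = GaussianNB()
clf_pf.partial_fit(X, y, classes=np.unique(y))
print(clf_pf.predict([[-0.8, -1]]))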

naive_bayes.BernoulliNB()

class sklearn.naive_bayes.BernoulliNB(alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None)

Naive Bayes classifier for multivariate Bernoulli models.

Like MultinomialNB, this classifier is suitable for discrete data. The difference is that while MultinomialNB works with occurrence counts, BernoulliNB is designed for binary/boolean features. Read more in the User Guide.

Parameters:

alpha : float, optional (default=1.0)
    Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
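A minimal sketch with random binary features (synthetic data, illustrative only):

import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.RandomState(1)
X = rng.randint(2, size=(6, 100))   # 6 samples with 100 binary features
y = np.array([1, 2, 3, 4, 4, 5])

clf = BernoulliNB(alpha=1.0, binarize=0.0)
clf.fit(X, y)
print(clf.predict(X[2:3]))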

multioutput.MultiOutputRegressor()

class sklearn.multioutput.MultiOutputRegressor(estimator, n_jobs=1)

Multi target regression.

This strategy consists of fitting one regressor per target. It is a simple strategy for extending regressors that do not natively support multi-target regression.

Parameters:

estimator : estimator object
    An estimator object implementing fit and predict.

n_jobs : int, optional, default=1
    The number of jobs to run in parallel for fit. If -1, the number of jobs is set to the number of cores.
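A sketch of the one-regressor-per-target strategy; GradientBoostingRegressor is used here only because it does not support multi-target y natively (the data and estimator choice are assumptions):

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

X, y = make_regression(n_samples=100, n_features=10, n_targets=3, random_state=0)

# one GradientBoostingRegressor is fitted per column of y
regr = MultiOutputRegressor(GradientBoostingRegressor(random_state=0), n_jobs=1)
regr.fit(X, y)
print(regr.predict(X[:2]).shape)   # (2, 3): one prediction per target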

multioutput.MultiOutputClassifier()

class sklearn.multioutput.MultiOutputClassifier(estimator, n_jobs=1)

Multi target classification.

This strategy consists of fitting one classifier per target. It is a simple strategy for extending classifiers that do not natively support multi-target classification.

Parameters:

estimator : estimator object
    An estimator object implementing fit, score and predict_proba.

n_jobs : int, optional, default=1
    The number of jobs to use for the computation. If -1, all CPUs are used. If 1 is given, no parallel computing code is used.
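A sketch of the one-classifier-per-target strategy, with two synthetic binary targets stacked into a 2D Y (the data and estimator choice are assumptions):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

X, y1 = make_classification(n_samples=100, n_features=20, random_state=0)
y2 = np.random.RandomState(0).randint(2, size=100)   # a second, unrelated target
Y = np.vstack((y1, y2)).T                            # shape (100, 2): one column per target

clf = MultiOutputClassifier(RandomForestClassifier(random_state=0), n_jobs=1)
clf.fit(X, Y)
print(clf.predict(X[:3]))   # one column of predictions per target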

Multilabel classification

This example simulates a multi-label document classification problem. The dataset is generated randomly based on the following process:

1. pick the number of labels: n ~ Poisson(n_labels)
2. n times, choose a class c: c ~ Multinomial(theta)
3. pick the document length: k ~ Poisson(length)
4. k times, choose a word: w ~ Multinomial(theta_c)

In the above process, rejection sampling is used to make sure that n is more than 2 and that the document length is never zero. Likewise, we reject classes which have already been chosen.
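A sketch of how such a dataset can be generated and classified in scikit-learn, using make_multilabel_classification and a one-vs-rest linear SVC (the parameter values are illustrative, not the example's exact settings):

from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# random multi-label dataset following a process like the one described above
X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True, random_state=1)

classif = OneVsRestClassifier(SVC(kernel='linear'))
classif.fit(X, Y)
print(classif.predict(X[:5]))   # label indicator matrix: one column per class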