cross_validation.PredefinedSplit()

class sklearn.cross_validation.PredefinedSplit(test_fold)

Warning: DEPRECATED. Deprecated since version 0.18: this module will be removed in 0.20. Use sklearn.model_selection.PredefinedSplit instead.

Predefined split cross-validation iterator. Splits the data into training/test set folds according to a predefined scheme. Each sample can be assigned to at most one test set fold, as specified by the user through the test_fold parameter. Read more in the User Guide.

Parameters:
test_fold : array-like, shape (n_samples,)
    The entry test_fold[i] represents the index of the test set that sample i belongs to. Setting test_fold[i] to -1 excludes sample i from every test set (it is then part of every training set).
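A minimal sketch of the predefined-split scheme, written against the sklearn.model_selection replacement recommended above (not part of the original reference entry):

import numpy as np
from sklearn.model_selection import PredefinedSplit

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
# test_fold[i] is the index of the test fold sample i belongs to;
# -1 keeps sample i in every training set.
test_fold = [0, 1, -1, 1]
ps = PredefinedSplit(test_fold)
for train_idx, test_idx in ps.split():
    print(train_idx, test_idx)
# Fold 0: test = [0]; fold 1: test = [1, 3]; sample 2 is never tested.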

sklearn.datasets.fetch_california_housing()

sklearn.datasets.fetch_california_housing(data_home=None, download_if_missing=True)

Loader for the California housing dataset from StatLib. Read more in the User Guide.

Parameters:
data_home : optional, default: None
    Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
download_if_missing : optional, True by default
    If False, raise an IOError if the data is not locally available instead of trying to download it from the source site.
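A short usage sketch (assumes network access on the first call; the data is then cached locally):

from sklearn.datasets import fetch_california_housing

# Downloads to ~/scikit_learn_data on first use, then reads from cache.
housing = fetch_california_housing()
print(housing.data.shape)    # (20640, 8)
print(housing.target.shape)  # (20640,)
print(housing.feature_names)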

exceptions.DataDimensionalityWarning

class sklearn.exceptions.DataDimensionalityWarning

Custom warning to notify potential issues with data dimensionality. For example, in random projection, this warning is raised when the number of components, which quantifies the dimensionality of the target projection space, is higher than the number of features, which quantifies the dimensionality of the original source space, to imply that the dimensionality of the problem will not be reduced.

Changed in version 0.18: Moved from sklearn.utils.
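A sketch of the random-projection case described above (not from the reference entry), deliberately asking for more components than features so the warning fires:

import warnings
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

# 10 features projected onto 20 components: the "reduced" space is
# larger than the original, which triggers DataDimensionalityWarning.
X = np.random.RandomState(0).rand(5, 10)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    GaussianRandomProjection(n_components=20).fit(X)
print([w.category.__name__ for w in caught])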

exceptions.DataConversionWarning

class sklearn.exceptions.DataConversionWarning

Warning used to notify implicit data conversions happening in the code. This warning occurs when some input data needs to be converted or interpreted in a way that may not match the user's expectations. For example, this warning may occur when the user:
- passes an integer array to a function which expects float input and will convert the input;
- requests a non-copying operation, but a copy is required to meet the implementation's data-type expectations;
- passes an input whose shape can be interpreted ambiguously.

Changed in version 0.18: Moved from sklearn.utils.validation.
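One common trigger, sketched for illustration (not from the reference entry): passing a column-vector y where a 1-D array is expected, which most estimators convert with this warning:

import warnings
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([[0], [0], [1], [1]])  # column vector; a 1-D array is expected

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    KNeighborsClassifier(n_neighbors=1).fit(X, y)
print([w.category.__name__ for w in caught])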

exceptions.ConvergenceWarning

class sklearn.exceptions.ConvergenceWarning

Custom warning to capture convergence problems.

Changed in version 0.18: Moved from sklearn.utils.

Examples using sklearn.exceptions.ConvergenceWarning: Sparse recovery: feature selection for sparse linear models.
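A sketch of catching this warning (not from the reference entry; the exact estimators and messages that emit it vary by scikit-learn version). Here an MLP is given too few iterations to converge:

import warnings
from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=100, random_state=0)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ConvergenceWarning)
    # max_iter=1 stops the optimizer before convergence.
    MLPClassifier(max_iter=1).fit(X, y)
print(any(issubclass(w.category, ConvergenceWarning) for w in caught))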

sklearn.feature_extraction.image.reconstruct_from_patches_2d()

sklearn.feature_extraction.image.reconstruct_from_patches_2d(patches, image_size)

Reconstruct the image from all of its patches. Patches are assumed to overlap and the image is constructed by filling in the patches from left to right, top to bottom, averaging the overlapping regions. Read more in the User Guide.

Parameters:
patches : array, shape = (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
    The complete set of patches. If the patches contain colour information, channels are indexed along the last dimension: RGB patches would have n_channels=3.
image_size : tuple of ints (image_height, image_width) or (image_height, image_width, n_channels)
    The size of the image that will be reconstructed.

Returns:
image : array, shape = image_size
    The reconstructed image.
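A round-trip sketch (not from the reference entry) pairing this function with extract_patches_2d; since the patches are unmodified, averaging the overlaps recovers the image exactly:

import numpy as np
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

image = np.arange(16.0).reshape(4, 4)
patches = extract_patches_2d(image, (2, 2))   # all overlapping 2x2 patches
reconstructed = reconstruct_from_patches_2d(patches, image.shape)
print(np.allclose(image, reconstructed))      # True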

sklearn.pipeline.make_union()

sklearn.pipeline.make_union(*transformers)

Construct a FeatureUnion from the given transformers. This is a shorthand for the FeatureUnion constructor; it does not require, and does not permit, naming the transformers. Instead, they will be given names automatically based on their types. It also does not allow weighting.

Returns:
f : FeatureUnion

Examples
>>> from sklearn.decomposition import PCA, TruncatedSVD
>>> from sklearn.pipeline import make_union
>>> make_union(PCA(), TruncatedSVD())
FeatureUnion(...)
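A sketch of the automatic naming in practice (not from the reference entry); the union concatenates each transformer's output columns:

import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.pipeline import make_union

X = np.random.RandomState(0).rand(10, 5)
union = make_union(PCA(n_components=2), TruncatedSVD(n_components=2))
# Names are derived from the class names: 'pca', 'truncatedsvd'.
print(union.fit_transform(X).shape)             # (10, 4): 2 PCA + 2 SVD columns
print([name for name, _ in union.transformer_list])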

sklearn.preprocessing.normalize()

sklearn.preprocessing.normalize(X, norm='l2', axis=1, copy=True, return_norm=False)

Scale input vectors individually to unit norm (vector length). Read more in the User Guide.

Parameters:
X : {array-like, sparse matrix}, shape [n_samples, n_features]
    The data to normalize, element by element. scipy.sparse matrices should be in CSR format to avoid an unnecessary copy.
norm : 'l1', 'l2', or 'max', optional ('l2' by default)
    The norm to use to normalize each non-zero sample (or each non-zero feature if axis is 0).
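A small sketch of both axes (not from the reference entry):

import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[3.0, 4.0], [1.0, 0.0]])
# Each row (sample) scaled to unit L2 norm.
print(normalize(X, norm='l2'))          # [[0.6, 0.8], [1.0, 0.0]]
# axis=0 normalizes each feature (column) instead of each sample.
print(normalize(X, norm='l1', axis=0))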

sklearn.preprocessing.binarize()

sklearn.preprocessing.binarize(X, threshold=0.0, copy=True)

Boolean thresholding of array-like or scipy.sparse matrix. Read more in the User Guide.

Parameters:
X : {array-like, sparse matrix}, shape [n_samples, n_features]
    The data to binarize, element by element. scipy.sparse matrices should be in CSR or CSC format to avoid an unnecessary copy.
threshold : float, optional (0.0 by default)
    Feature values below or equal to this are replaced by 0, above it by 1. Threshold may not be less than 0 for operations on sparse matrices.
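A quick sketch of the thresholding rule (not from the reference entry):

import numpy as np
from sklearn.preprocessing import binarize

X = np.array([[1.5, -0.5, 2.0], [0.0, 0.3, -1.0]])
# Values strictly above the threshold become 1, everything else 0.
print(binarize(X, threshold=0.5))
# [[1. 0. 1.]
#  [0. 0. 0.]]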

sklearn.preprocessing.add_dummy_feature()

sklearn.preprocessing.add_dummy_feature(X, value=1.0)

Augment dataset with an additional dummy feature. This is useful for fitting an intercept term with implementations which cannot otherwise fit it directly.

Parameters:
X : {array-like, sparse matrix}, shape [n_samples, n_features]
    Data.
value : float
    Value to use for the dummy feature.

Returns:
X : {array, sparse matrix}, shape [n_samples, n_features + 1]
    Same data with dummy feature added as first column.

Examples
>>> from sklearn.preprocessing import add_dummy_feature
>>> add_dummy_feature([[0, 1], [1, 0]])
array([[ 1.,  0.,  1.],
       [ 1.,  1.,  0.]])
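A sketch of the intercept use case mentioned above (not from the reference entry): the constant first column lets a model fitted without an intercept learn one as an ordinary coefficient.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import add_dummy_feature

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 3.0, 5.0])          # exactly y = 2x + 1
X_aug = add_dummy_feature(X)           # prepend a column of ones
model = LinearRegression(fit_intercept=False).fit(X_aug, y)
print(model.coef_)                     # ~[1., 2.]: first coef acts as the intercept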