sklearn.covariance.empirical_covariance()
  • References/Python/scikit-learn/API Reference/covariance

sklearn.covariance.empirical_covariance(X, assume_centered=False)
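
A minimal usage sketch, assuming only the signature above (the toy data is illustrative):

import numpy as np
from sklearn.covariance import empirical_covariance

# Four 2-D observations; empirical_covariance returns the
# maximum-likelihood estimate of the covariance matrix.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
cov = empirical_covariance(X, assume_centered=False)
print(cov)  # 2x2 covariance matrix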

preprocessing.Imputer()
  • References/Python/scikit-learn/API Reference/preprocessing

class sklearn.preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0, verbose=0, copy=True)
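
A minimal sketch of this (legacy) API, following the signature above; Imputer was later replaced by sklearn.impute.SimpleImputer:

import numpy as np
from sklearn.preprocessing import Imputer

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])
# axis=0 imputes column-wise: each NaN becomes its column's mean.
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
print(imp.fit_transform(X))  # NaN in column 0 replaced by (1+7)/2 = 4.0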

2.2. Manifold learning
  • References/Python/scikit-learn/Guide

Manifold learning is an approach to non-linear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high.

2.2.1. Introduction

High-dimensional datasets can be very difficult to visualize. While data in two or three dimensions can be plotted to show the inherent structure of the data, equivalent high-dimensional plots are much less intuitive. To aid visualization of the structure of a dataset, the dimension must be reduced in some way.
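
A minimal sketch of the kind of embedding this chapter covers, using Isomap (one of the manifold algorithms discussed later) on the digits dataset:

from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

X, _ = load_digits(return_X_y=True)            # 64-dimensional points
X_2d = Isomap(n_components=2).fit_transform(X) # non-linear 2-D embedding
print(X_2d.shape)                              # (1797, 2), ready to scatter-plot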

Kernel PCA
  • References/Python/scikit-learn/Examples/Decomposition

This example shows that Kernel PCA is able to find a projection of the data that makes data linearly separable.
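
A minimal sketch of the idea, assuming the usual concentric-circles toy data rather than the example's exact setup:

from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05)
# An RBF kernel lets KernelPCA "unfold" the circles; in the projected
# space the two classes become linearly separable.
kpca = KernelPCA(kernel='rbf', gamma=10, fit_inverse_transform=True)
X_kpca = kpca.fit_transform(X)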

FastICA on 2D point clouds
  • References/Python/scikit-learn/Examples/Decomposition

This example visually compares, in feature space, the results of two different component analysis techniques: PCA and FastICA.
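
A minimal sketch of that comparison on synthetic heavy-tailed data (the mixing matrix is illustrative):

import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(42)
S = rng.standard_t(1.5, size=(2000, 2))   # independent non-Gaussian sources
A = np.array([[1.0, 1.0], [0.0, 2.0]])    # mixing matrix
X = S @ A.T                               # observed mixed signals

pca_axes = PCA().fit(X).components_       # orthogonal max-variance directions
ica_axes = FastICA(random_state=42).fit(X).mixing_.T  # independent directions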

sklearn.linear_model.orthogonal_mp()
  • References/Python/scikit-learn/API Reference/linear_model

sklearn.linear_model.orthogonal_mp(X, y, n_nonzero_coefs=None, tol=None, precompute=False, copy_X=True, return_path=False, return_n_iter=False)
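
A minimal sketch: recovering a known sparse coefficient vector with orthogonal matching pursuit (the toy problem is illustrative):

import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.RandomState(0)
X = rng.randn(50, 20)
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 0.8]     # 3 non-zero coefficients
y = X @ w_true

w_hat = orthogonal_mp(X, y, n_nonzero_coefs=3)
print(np.flatnonzero(w_hat))              # expected: [ 2  7 11 ]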

gaussian_process.kernels.ConstantKernel()
  • References/Python/scikit-learn/API Reference/gaussian_process

class sklearn.gaussian_process.kernels.ConstantKernel(constant_value=1.0, constant_value_bounds=(1e-05, 100000.0))
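
A minimal sketch: ConstantKernel commonly scales another kernel, here the signal variance of an RBF in a Gaussian process (the data is illustrative):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

kernel = ConstantKernel(constant_value=1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0)

X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.sin(X).ravel()
gpr.fit(X, y)
print(gpr.kernel_)   # kernel with hyperparameters fitted to the data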

L1 Penalty and Sparsity in Logistic Regression
  • References/Python/scikit-learn/Examples/Generalized Linear Models

Comparison of the sparsity (percentage of zero coefficients) of solutions when L1 and L2 penalties are used for different values of C. We can see that large values of C give more freedom to the model; conversely, smaller values of C constrain the model more.
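
A minimal sketch of the comparison (the digits data and the liblinear solver are assumptions, not necessarily the example's exact setup):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X = (X > X.mean()).astype(float)          # coarse binarization

for C in (100.0, 1.0, 0.01):
    for penalty in ('l1', 'l2'):
        clf = LogisticRegression(C=C, penalty=penalty, solver='liblinear')
        clf.fit(X, y)
        print(f"C={C}, {penalty}: {100 * np.mean(clf.coef_ == 0):.1f}% zero coefficients")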

Feature transformations with ensembles of trees
  • References/Python/scikit-learn/Examples/Ensemble methods

Transform your features into a higher dimensional, sparse space. Then train a linear model on these features. First fit an ensemble of trees (totally random trees, a random forest, or gradient boosted trees) on the training set.
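
A minimal sketch of that pipeline with a random forest (the dataset and sizes are illustrative):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, random_state=0)
# Separate sets for the ensemble and the linear model to avoid overfitting.
X_tree, X_lin, y_tree, y_lin = train_test_split(X, y, test_size=0.5, random_state=0)

rf = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0)
rf.fit(X_tree, y_tree)

enc = OneHotEncoder()                      # one indicator per leaf per tree
enc.fit(rf.apply(X_tree))                  # rf.apply() -> leaf index per tree
lin = LogisticRegression(max_iter=1000)
lin.fit(enc.transform(rf.apply(X_lin)), y_lin)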

2.7. Novelty and Outlier Detection
  • References/Python/scikit-learn/Guide

Many applications require being able to decide whether a new observation belongs to the same distribution as existing observations (it is an inlier), or should be considered as different (it is an outlier).
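
A minimal sketch of the task using OneClassSVM, one of the estimators this chapter covers (the data is illustrative):

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_train = 0.3 * rng.randn(100, 2)            # "normal" observations
X_new = np.array([[0.1, -0.2], [4.0, 4.0]])  # one inlier, one outlier

clf = OneClassSVM(nu=0.1, gamma=0.1).fit(X_train)
print(clf.predict(X_new))                    # expected: [ 1 -1 ] (inlier, outlier)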
