Features 1 and 2 of the diabetes dataset are fitted and plotted below. It illustrates that although feature 2 has a strong coefficient on the full model, it does not give us much regarding y when compared to just feature 1.
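A minimal sketch of the underlying fit, assuming an ordinary least squares model on the first two diabetes features (the example's plotting code is omitted):

    from sklearn import datasets, linear_model

    # load the diabetes data and keep only the first two features
    X, y = datasets.load_diabetes(return_X_y=True)
    X = X[:, :2]

    ols = linear_model.LinearRegression().fit(X, y)
    print(ols.coef_)  # compare the two coefficient magnitudes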
Comparison of the sparsity (percentage of zero coefficients) of solutions when L1 and L2 penalty are used for different values of C. We can see that large values of C give more freedom to the model; conversely, smaller values of C constrain the model more, and in the L1 case this leads to sparser solutions.
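A condensed sketch of that comparison, assuming the digits data binarized into two classes (the liblinear solver supports both penalties):

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    X, y = load_digits(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    y = (y > 4).astype(int)  # binarize: small vs. large digits

    for C in (1.0, 0.1, 0.01):
        for penalty in ("l1", "l2"):
            clf = LogisticRegression(C=C, penalty=penalty, solver="liblinear").fit(X, y)
            sparsity = np.mean(clf.coef_ == 0) * 100
            print(f"C={C}, {penalty}: {sparsity:.1f}% zero coefficients")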
Transform your features into a higher dimensional, sparse space. Then train a linear model on these features. First fit an ensemble of trees (totally random trees, a random forest, or gradient boosted trees) on the training set; each leaf then becomes a one-hot feature in the new space.
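A minimal sketch using RandomTreesEmbedding (an unsupervised, totally-random-trees embedder) feeding a logistic regression; the dataset and parameters here are illustrative assumptions:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomTreesEmbedding
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=1000, random_state=0)

    # one-hot leaf indicators form a sparse, high-dimensional representation
    model = make_pipeline(
        RandomTreesEmbedding(n_estimators=10, max_depth=3, random_state=0),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X, y)
    print(model.score(X, y))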
We show that linear_model.Lasso provides the same results for dense and sparse data, and that in the sparse case the fit is faster.
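A small sketch of the equivalence check, with made-up data; the two coefficient vectors should agree up to numerical tolerance:

    import numpy as np
    from scipy import sparse
    from sklearn.linear_model import Lasso

    rng = np.random.RandomState(42)
    X = rng.randn(200, 50)
    X[np.abs(X) < 1.5] = 0.0   # zero out most entries so sparsity pays off
    y = rng.randn(200)

    coef_dense = Lasso(alpha=0.1).fit(X, y).coef_
    coef_sparse = Lasso(alpha=0.1).fit(sparse.csc_matrix(X), y).coef_
    print(np.allclose(coef_dense, coef_sparse))  # same solution either way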
Here a sine function is fit with a polynomial of order 3, for values close to zero. Robust fitting is demoed in different situations: no measurement errors, only modelling errors (fitting a sine with a polynomial); measurement errors in X; and measurement errors in y.
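A compressed sketch of one such situation (corrupted y values), comparing plain least squares to a robust Huber fit; the data generation here is an illustrative assumption:

    import numpy as np
    from sklearn.linear_model import HuberRegressor, LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.RandomState(42)
    X = rng.uniform(-1.5, 1.5, size=(100, 1))
    y = np.sin(X).ravel()
    y[::10] += 3.0  # corrupt every 10th target with a large error

    for est in (LinearRegression(), HuberRegressor()):
        model = make_pipeline(PolynomialFeatures(degree=3), est).fit(X, y)
        # score against the clean sine values; the robust fit should win
        print(type(est).__name__, model.score(X, np.sin(X).ravel()))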
This example shows the use of a multi-output estimator to complete images. The goal is to predict the lower half of a face given its upper half.
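A minimal sketch of the setup, assuming the Olivetti faces (64x64 pixels, downloaded on first use) and an extra-trees regressor as the multi-output estimator:

    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.ensemble import ExtraTreesRegressor

    faces = fetch_olivetti_faces()          # 400 faces, 4096 pixels each
    X = faces.data[:, : 64 * 32]            # upper half of each image
    Y = faces.data[:, 64 * 32 :]            # lower half: 2048 outputs per sample

    est = ExtraTreesRegressor(n_estimators=10, max_features=32, random_state=0)
    est.fit(X[:-10], Y[:-10])               # train on all but the last 10 faces
    completed = est.predict(X[-10:])        # predicted lower halves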
This is an example showing how scikit-learn can be used to cluster documents by topics using a bag-of-words approach. This example uses a scipy.sparse matrix to store the features instead of standard numpy arrays.
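A condensed sketch, assuming the 20 newsgroups corpus (downloaded on first use) and TF-IDF-weighted bag-of-words features; the feature matrix stays scipy.sparse end to end:

    from sklearn.cluster import KMeans
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = fetch_20newsgroups(subset="train",
                              remove=("headers", "footers", "quotes")).data
    X = TfidfVectorizer(max_features=10000, stop_words="english").fit_transform(docs)

    km = KMeans(n_clusters=20, n_init=1, random_state=0).fit(X)  # X is sparse
    print(km.labels_[:10])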
This example illustrates how sigmoid calibration changes predicted probabilities for a 3-class classification problem. Illustrated is the standard 2-simplex, where the three corners correspond to the three classes.
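A minimal sketch of sigmoid calibration on a 3-class problem; the base classifier and data here are illustrative assumptions:

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_blobs
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_blobs(n_samples=1000, centers=3, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

    base = RandomForestClassifier(n_estimators=25, random_state=42)
    calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=3).fit(X_tr, y_tr)
    proba = calibrated.predict_proba(X_te)   # each row sums to 1 over the 3 classes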
This example illustrates the predicted probability of GPC for an RBF kernel with different choices of the hyperparameters: arbitrarily fixed values versus the values that maximize the log-marginal likelihood (LML).
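A minimal sketch contrasting fixed and LML-optimized hyperparameters on toy 1-D data (the data here is an illustrative assumption):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 5, 50)[:, np.newaxis]
    y = (X[:, 0] > 2.5).astype(int)

    kernel = 1.0 * RBF(length_scale=1.0)
    fixed = GaussianProcessClassifier(kernel=kernel, optimizer=None).fit(X, y)
    tuned = GaussianProcessClassifier(kernel=kernel).fit(X, y)  # maximizes the LML

    for gpc in (fixed, tuned):
        print(gpc.kernel_, gpc.log_marginal_likelihood(gpc.kernel_.theta))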
Face, a 1024 x 768 image of a raccoon face, is used here to illustrate how k-means can be used for vector quantization.
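A minimal sketch of the quantization step, assuming the raccoon image shipped with SciPy and a KMeans-based codebook (the example's exact code may differ):

    import numpy as np
    from sklearn.cluster import KMeans

    try:
        from scipy.datasets import face   # SciPy >= 1.10 (may require pooch)
    except ImportError:
        from scipy.misc import face       # older SciPy

    img = face(gray=True)                 # 2-D grayscale raccoon image
    X = img.reshape(-1, 1).astype(float)  # one sample per pixel

    km = KMeans(n_clusters=5, n_init=4, random_state=0).fit(X)
    quantized = km.cluster_centers_[km.labels_].reshape(img.shape)  # 5 gray levels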