Sometimes looking at the learned coefficients of a neural network can provide insight into the learning behavior. For example, if the weights look unstructured, maybe some were not used at all; if very large coefficients exist, maybe regularization was too low or the learning rate too high.
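A minimal sketch of this kind of inspection, assuming an `MLPClassifier` fit on the digits dataset (the dataset, layer size, and iteration count are illustrative choices, not part of the original example):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
# Small hidden layer and few iterations to keep the sketch fast
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=50, random_state=0)
clf.fit(X, y)

# coefs_[0] maps the 64 input pixels to the 16 hidden units
first_layer = clf.coefs_[0]
print(first_layer.shape)          # (64, 16)
# Unusually large magnitudes can hint at too little regularization
print(np.abs(first_layer).max())
```

Each column of `coefs_[0]` can be reshaped to the input image size and plotted to check whether the hidden units learned spatially structured filters.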
Comparison of different linear SVM classifiers on a 2D projection of the iris dataset. We only consider the first 2 features of this dataset: Sepal length, Sepal width.
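A condensed sketch of the comparison, assuming `LinearSVC` and `SVC(kernel="linear")` as the two classifiers (the regularization value `C=1.0` is an illustrative choice):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC, LinearSVC

iris = load_iris()
X = iris.data[:, :2]  # sepal length and sepal width only
y = iris.target

models = {
    "LinearSVC": LinearSVC(C=1.0, max_iter=10000),
    "SVC (linear kernel)": SVC(kernel="linear", C=1.0),
}
scores = {}
for name, clf in models.items():
    clf.fit(X, y)
    scores[name] = clf.score(X, y)
    print(name, scores[name])
```

The two models optimize slightly different objectives (squared hinge vs. hinge loss, different multiclass strategies), so their decision boundaries differ even though both are linear.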
This example shows how quantile regression can be used to create prediction intervals.
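One way to sketch this technique is with `GradientBoostingRegressor` and its `loss="quantile"` option, fitting one model per quantile (the synthetic sine data and the 5%/95% bounds are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)

# One model per quantile: lower bound, median, upper bound
preds = {}
for alpha in (0.05, 0.5, 0.95):
    gbr = GradientBoostingRegressor(
        loss="quantile", alpha=alpha, n_estimators=100, random_state=0
    )
    preds[alpha] = gbr.fit(X, y).predict(X)

# Fraction of targets inside the [5%, 95%] interval; should be near 90%
coverage = np.mean((y >= preds[0.05]) & (y <= preds[0.95]))
print(coverage)
```

The pair of quantile models brackets the median prediction, giving a per-sample interval rather than a single point estimate.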
class sklearn.model_selection.StratifiedKFold(n_splits=3, shuffle=False, random_state=None)
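A small usage sketch showing the stratification behavior (the toy labels and `n_splits=4` are illustrative, not defaults):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((12, 2))
y = np.array([0] * 8 + [1] * 4)  # imbalanced 2:1

skf = StratifiedKFold(n_splits=4)
for train_idx, test_idx in skf.split(X, y):
    # Each test fold preserves the 2:1 class ratio: [2 1]
    print(np.bincount(y[test_idx]))
```

Unlike plain `KFold`, every fold here contains both classes in the original proportion, which matters for small or imbalanced datasets.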
Plot the classification probability for different classifiers. We use a 3-class dataset, and we classify it with a Support Vector classifier, and L1- and L2-penalized logistic regression.
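A sketch of the probability computation for the two logistic regression variants, assuming the iris dataset as the 3-class problem (the solver choices are assumptions needed to support each penalty):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# liblinear supports the L1 penalty; lbfgs handles L2
probas = {}
for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    clf = LogisticRegression(penalty=penalty, solver=solver, max_iter=1000)
    probas[penalty] = clf.fit(X, y).predict_proba(X)
    print(penalty, probas[penalty].shape)  # (150, 3): one column per class
```

Each row of `predict_proba` sums to 1, so the output can be plotted directly as per-class probability surfaces.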
sklearn.feature_selection.mutual_info_classif(X, y, discrete_features='auto', n_neighbors=3, copy=True, random_state=None)
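A brief usage sketch on the iris dataset (the dataset choice is illustrative; the parameters match the signature above):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)
# One non-negative dependency score per feature; 0 means independence
mi = mutual_info_classif(X, y, n_neighbors=3, random_state=0)
print(mi)
```

On iris the petal features typically score highest, matching their known discriminative power; the scores can feed directly into `SelectKBest`.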
This example demonstrates the Spectral Co-clustering algorithm on the twenty newsgroups dataset. The 'comp.os.ms-windows.misc' category is excluded because it contains many posts containing nothing but data.
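To keep a sketch self-contained without downloading the newsgroups corpus, the same algorithm can be demonstrated on synthetic data with planted biclusters (the synthetic setup replaces the text data and is entirely an assumption):

```python
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters
from sklearn.metrics import consensus_score

# Plant 3 biclusters in a 30x30 matrix, then shuffle rows and columns
data, rows, cols = make_biclusters(
    shape=(30, 30), n_clusters=3, noise=0.05, random_state=0
)
model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(data)

# consensus_score compares recovered biclusters against the planted ones
score = consensus_score(model.biclusters_, (rows, cols))
print(score)  # approaches 1.0 when the planted structure is recovered
```

On the newsgroups data the same `fit` call runs on a TF-IDF document-term matrix, co-clustering documents and words simultaneously.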
This example illustrates the need for robust covariance estimation on a real data set. It is useful both for outlier detection and for a better understanding of the data structure.
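A contamination sketch contrasting the empirical estimator with the robust Minimum Covariance Determinant, assuming synthetic correlated data in place of the real dataset:

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.RandomState(0)
# 100 points from a correlated Gaussian (true off-diagonal covariance 0.8)
X = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=100)
X[:10] = rng.uniform(-10, 10, size=(10, 2))  # contaminate 10% with outliers

emp = EmpiricalCovariance().fit(X)
mcd = MinCovDet(random_state=0).fit(X)

# The empirical variance is inflated by the outliers; MCD downweights them
print(emp.covariance_)
print(mcd.covariance_)
```

The Mahalanobis distances computed from the MCD fit then flag the contaminated points, which is the basis of its use in outlier detection.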
class sklearn.preprocessing.FunctionTransformer(func=None, inverse_func=None, validate=True, accept_sparse=False, pass_y=False)
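A small usage sketch wrapping a log transform (the `log1p`/`expm1` pairing is an illustrative choice):

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# log1p forward, expm1 inverse: a lossless round trip for non-negative data
transformer = FunctionTransformer(func=np.log1p, inverse_func=np.expm1)
X = np.array([[0.0, 1.0], [2.0, 3.0]])
Xt = transformer.transform(X)
print(np.allclose(transformer.inverse_transform(Xt), X))  # True
```

Because the result is a standard transformer, the wrapped function can be placed inside a `Pipeline` alongside estimators.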
Partial dependence plots show the dependence between the target function and a set of 'target' features, marginalizing over the values of all other features (the complement features).