sklearn.utils.estimator_checks.check_estimator()

sklearn.utils.estimator_checks.check_estimator(Estimator) [source]

Check if estimator adheres to scikit-learn conventions. This function runs an extensive test suite for input validation, shapes, etc. Additional tests for classifiers, regressors, clustering or transformers are run if the Estimator class inherits from the corresponding mixin in sklearn.base.

Parameters:
Estimator : class
    Class to check. Estimator is a class object (not an instance).
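A minimal usage sketch (assuming this API version, where the class itself, not an instance, is passed):

from sklearn.svm import LinearSVC
from sklearn.utils.estimator_checks import check_estimator

# Runs the full check suite; raises an AssertionError if any check fails.
check_estimator(LinearSVC)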

sklearn.utils.check_random_state()

sklearn.utils.check_random_state(seed) [source]

Turn seed into a np.random.RandomState instance.

If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded with seed. If seed is already a RandomState instance, return it. Otherwise raise ValueError.
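A short sketch covering the three accepted seed types:

import numpy as np
from sklearn.utils import check_random_state

rng = check_random_state(42)            # int -> new RandomState seeded with 42
assert isinstance(rng, np.random.RandomState)
assert check_random_state(rng) is rng   # RandomState -> returned unchanged
global_rng = check_random_state(None)   # None -> the np.random singleton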

sklearn.tree.export_graphviz()

sklearn.tree.export_graphviz() [source]

Export a decision tree in DOT format.

This function generates a GraphViz representation of the decision tree, which is then written into out_file. Once exported, graphical renderings can be generated using, for example:

$ dot -Tps tree.dot -o tree.ps (PostScript format)
$ dot -Tpng tree.dot -o tree.png (PNG format)

The sample counts that are shown are weighted with any sample_weights that might be present.

Read more in the User Guide.
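A minimal sketch of the export workflow (file names here are illustrative):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# Write the GraphViz representation to tree.dot, then render from a shell:
#   $ dot -Tpng tree.dot -o tree.png
export_graphviz(clf, out_file='tree.dot', feature_names=iris.feature_names)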

sklearn.svm.libsvm.predict_proba()

sklearn.svm.libsvm.predict_proba()

Predict probabilities.

svm_model stores all parameters needed to predict a given value. For speed, all the real work is done at the C level in function copy_predict (libsvm_helper.c). We have to reconstruct model and parameters to make sure we stay in sync with the Python object.

See sklearn.svm.predict for a complete list of parameters.

Parameters:
X : array-like, dtype=float
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}

Returns:
dec_values
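This low-level binding is normally reached through the public API; a sketch using the public wrapper sklearn.svm.SVC, which delegates to these routines internally:

from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
# probability=True enables probability estimates so predict_proba is available.
clf = SVC(kernel='rbf', probability=True).fit(iris.data, iris.target)
proba = clf.predict_proba(iris.data[:3])  # shape (3, n_classes)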

sklearn.svm.libsvm.predict()

sklearn.svm.libsvm.predict()

Predict target values of X given a model (low-level method).

Parameters:
X : array-like, dtype=float, size=[n_samples, n_features]
svm_type : {0, 1, 2, 3, 4}
    Type of SVM: C SVC, nu SVC, one class, epsilon SVR, nu SVR.
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}
    Type of kernel.
degree : int
    Degree of the polynomial kernel.
gamma : float
    Gamma parameter in RBF kernel.
coef0 : float
    Independent parameter in poly/sigmoid kernel.

Returns:
dec_values
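As with the other low-level routines, the public wrapper is the usual entry point; a minimal prediction sketch with sklearn.svm.SVC:

from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
# degree and coef0 mirror the polynomial-kernel parameters listed above.
clf = SVC(kernel='poly', degree=3, coef0=0.0).fit(iris.data, iris.target)
y_pred = clf.predict(iris.data[:5])  # predicted target values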

sklearn.svm.libsvm.fit()

sklearn.svm.libsvm.fit()

Train the model using libsvm (low-level method).

Parameters:
X : array-like, dtype=float64, size=[n_samples, n_features]
Y : array, dtype=float64, size=[n_samples]
    Target vector.
svm_type : {0, 1, 2, 3, 4}, optional
    Type of SVM: C_SVC, NuSVC, OneClassSVM, EpsilonSVR or NuSVR respectively. 0 by default.
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}, optional
    Kernel to use in the model: linear, polynomial, RBF, sigmoid or precomputed. 'rbf' by default.
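The svm_type values map onto the public estimators (SVC, NuSVC, OneClassSVM, SVR, NuSVR), which call this routine internally; a minimal training sketch via the public API:

from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
# kernel and gamma mirror the low-level parameters documented above.
clf = SVC(kernel='rbf', gamma=0.1, C=1.0).fit(iris.data, iris.target)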

sklearn.svm.libsvm.decision_function()

sklearn.svm.libsvm.decision_function()

Predict margin (the libsvm name for this is predict_values).

We have to reconstruct model and parameters to make sure we stay in sync with the Python object.
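A sketch of the corresponding public call on a fitted estimator:

from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
clf = SVC(kernel='linear').fit(iris.data, iris.target)
margins = clf.decision_function(iris.data[:3])  # margin values per sample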

sklearn.svm.libsvm.cross_validation()

sklearn.svm.libsvm.cross_validation()

Binding of the cross-validation routine (low-level routine).

Parameters:
X : array-like, dtype=float, size=[n_samples, n_features]
Y : array, dtype=float, size=[n_samples]
    Target vector.
svm_type : {0, 1, 2, 3, 4}
    Type of SVM: C SVC, nu SVC, one class, epsilon SVR, nu SVR.
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}
    Kernel to use in the model: linear, polynomial, RBF, sigmoid or precomputed.
degree : int
    Degree of the polynomial kernel.
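In user code, cross-validation is usually done through scikit-learn's generic helpers rather than this binding; an analogous sketch (cross_val_score lives in sklearn.model_selection in recent releases, sklearn.cross_validation in older ones):

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

iris = load_iris()
scores = cross_val_score(SVC(kernel='rbf'), iris.data, iris.target, cv=5)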

sklearn.svm.l1_min_c()

sklearn.svm.l1_min_c(X, y, loss='squared_hinge', fit_intercept=True, intercept_scaling=1.0) [source]

Return the lowest bound for C such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty. This applies to l1 penalized classifiers, such as LinearSVC with penalty='l1' and linear_model.LogisticRegression with penalty='l1'.

This value is valid if the class_weight parameter in fit() is not set.

Parameters:
X : array-like or sparse matrix, shape = [n_samples, n_features]
    Training vector.
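A minimal sketch on the iris data:

from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC, l1_min_c

iris = load_iris()
c_min = l1_min_c(iris.data, iris.target, loss='squared_hinge')
# Any C above c_min is guaranteed to yield at least one non-zero coefficient.
clf = LinearSVC(C=10 * c_min, penalty='l1', dual=False).fit(iris.data, iris.target)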

sklearn.random_projection.johnson_lindenstrauss_min_dim()

sklearn.random_projection.johnson_lindenstrauss_min_dim(n_samples, eps=0.1) [source]

Find a 'safe' number of components to randomly project to.

The distortion introduced by a random projection p only changes the distance between two points by a factor (1 ± eps) in a Euclidean space with good probability. The projection p is an eps-embedding as defined by:

(1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2

where u and v are any rows taken from a dataset of shape [n_samples, n_features].
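A minimal sketch:

from sklearn.random_projection import johnson_lindenstrauss_min_dim

# Minimum number of components needed to embed 10,000 samples while
# distorting any pairwise distance by at most a factor (1 ± 0.1).
n_components = johnson_lindenstrauss_min_dim(n_samples=10000, eps=0.1)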