sklearn.utils.check_random_state()

sklearn.utils.check_random_state(seed)

Turn seed into a np.random.RandomState instance. If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded with seed. If seed is already a RandomState instance, return it. Otherwise raise ValueError.

Examples using sklearn.utils.check_random_state: Isotonic Regression; Face completion with multi-output estimators; Empirical evaluation of the impact of k-means initialization.
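A minimal usage sketch (the seed values and the draw at the end are arbitrary illustrations):

    import numpy as np
    from sklearn.utils import check_random_state

    rng = check_random_state(None)                      # RandomState singleton used by np.random
    rng = check_random_state(0)                         # new RandomState seeded with 0
    rng = check_random_state(np.random.RandomState(0))  # an existing instance is returned as-is
    values = rng.uniform(size=3)                        # use it like any RandomState instance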

sklearn.utils.resample()

sklearn.utils.resample(*arrays, **options)

Resample arrays or sparse matrices in a consistent way. The default strategy implements one step of the bootstrapping procedure.

Parameters:

*arrays : sequence of indexable data-structures
    Indexable data-structures can be arrays, lists, dataframes or scipy sparse matrices with consistent first dimension.
replace : boolean, True by default
    Implements resampling with replacement. If False, this will implement (sliced) random permutations.
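A short sketch of both strategies, assuming small illustrative arrays:

    import numpy as np
    from sklearn.utils import resample

    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([0, 0, 1, 1])

    # One bootstrap step: draw with replacement, keeping X and y aligned.
    X_boot, y_boot = resample(X, y, replace=True, random_state=0)

    # replace=False gives a (sliced) random permutation instead.
    X_perm, y_perm = resample(X, y, replace=False, n_samples=3, random_state=0)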

sklearn.tree.export_graphviz()

sklearn.tree.export_graphviz()

Export a decision tree in DOT format. This function generates a GraphViz representation of the decision tree, which is then written into out_file. Once exported, graphical renderings can be generated using, for example:

    $ dot -Tps tree.dot -o tree.ps   (PostScript format)
    $ dot -Tpng tree.dot -o tree.png (PNG format)

The sample counts that are shown are weighted with any sample_weights that might be present. Read more in the User Guide.
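A minimal export sketch; the dataset, output file name and the optional feature/class names are illustrative choices:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_graphviz

    iris = load_iris()
    clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

    # Write the tree in DOT format, then render it with e.g. `dot -Tpng tree.dot -o tree.png`.
    export_graphviz(clf, out_file="tree.dot",
                    feature_names=iris.feature_names,
                    class_names=iris.target_names)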

sklearn.svm.libsvm.predict_proba()

sklearn.svm.libsvm.predict_proba()

Predict probabilities. svm_model stores all parameters needed to predict a given value. For speed, all real work is done at the C level in function copy_predict (libsvm_helper.c). We have to reconstruct the model and parameters to make sure we stay in sync with the Python object. See sklearn.svm.predict for a complete list of parameters.

Parameters:

X : array-like, dtype=float
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}

Returns:

dec_values
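This binding is normally reached through the high-level estimators rather than called directly; a sketch using sklearn.svm.SVC, which wraps it, with an arbitrary dataset:

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # probability=True enables the per-class probability estimates computed by libsvm.
    clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
    proba = clf.predict_proba(X[:5])  # shape (5, n_classes)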

sklearn.svm.libsvm.decision_function()

sklearn.svm.libsvm.decision_function()

Predict margin (the libsvm name for this is predict_values). We have to reconstruct the model and parameters to make sure we stay in sync with the Python object.
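The margins, too, are usually obtained through the high-level wrapper; a brief sketch of the corresponding call on a fitted SVC:

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    clf = SVC(kernel="linear").fit(X, y)
    margins = clf.decision_function(X[:5])  # signed distances to the separating hyperplanes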

sklearn.svm.libsvm.fit()

sklearn.svm.libsvm.fit()

Train the model using libsvm (low-level method).

Parameters:

X : array-like, dtype=float64, size=[n_samples, n_features]
Y : array, dtype=float64, size=[n_samples]
    Target vector.
svm_type : {0, 1, 2, 3, 4}, optional
    Type of SVM: C_SVC, NuSVC, OneClassSVM, EpsilonSVR or NuSVR respectively. 0 by default.
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}, optional
    Kernel to use in the model: linear, polynomial, RBF, sigmoid or precomputed. 'rbf' by default.
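The binding itself is rarely called from user code; a sketch of the equivalent high-level fit, whose fitted attributes expose the arrays this routine computes (the toy data and hyperparameters are arbitrary):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
    y = np.array([0, 0, 1, 1])

    # svm_type=0 (C_SVC) with an RBF kernel corresponds to SVC(kernel='rbf').
    clf = SVC(C=1.0, kernel="rbf", gamma="auto").fit(X, y)
    print(clf.support_vectors_)  # support vectors selected by libsvm
    print(clf.dual_coef_)        # dual coefficients (sv_coef)
    print(clf.intercept_)        # intercept term
    print(clf.predict(X))        # predictions from the trained model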

sklearn.svm.libsvm.predict()

sklearn.svm.libsvm.predict()

Predict target values of X given a model (low-level method).

Parameters:

X : array-like, dtype=float, size=[n_samples, n_features]
svm_type : {0, 1, 2, 3, 4}
    Type of SVM: C SVC, nu SVC, one class, epsilon SVR, nu SVR.
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}
    Type of kernel.
degree : int
    Degree of the polynomial kernel.
gamma : float
    Gamma parameter in RBF kernel.
coef0 : float
    Independent parameter in poly/sigmoid kernel.

sklearn.svm.libsvm.cross_validation()

sklearn.svm.libsvm.cross_validation()

Binding of the cross-validation routine (low-level routine).

Parameters:

X : array-like, dtype=float, size=[n_samples, n_features]
Y : array, dtype=float, size=[n_samples]
    Target vector.
svm_type : {0, 1, 2, 3, 4}
    Type of SVM: C SVC, nu SVC, one class, epsilon SVR, nu SVR.
kernel : {'linear', 'rbf', 'poly', 'sigmoid', 'precomputed'}
    Kernel to use in the model: linear, polynomial, RBF, sigmoid or precomputed.
degree : int
    Degree of the polynomial kernel.
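A hedged sketch of the usual way to cross-validate a libsvm-backed estimator, via the high-level API rather than this binding (dataset and fold count are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(SVC(kernel="rbf", gamma="auto"), X, y, cv=5)  # one score per fold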

sklearn.svm.l1_min_c()

sklearn.svm.l1_min_c(X, y, loss='squared_hinge', fit_intercept=True, intercept_scaling=1.0)

Return the lowest bound for C such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty. This applies to l1-penalized classifiers, such as LinearSVC with penalty='l1' and linear_model.LogisticRegression with penalty='l1'. This value is valid if the class_weight parameter in fit() is not set.

Parameters:

X : array-like or sparse matrix, shape = [n_samples, n_features]
    Training data.
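A sketch of using the returned bound to build a grid of C values for an l1-penalized model; the grid spacing, dataset and solver are arbitrary choices:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import l1_min_c

    X, y = load_iris(return_X_y=True)

    c_min = l1_min_c(X, y, loss="log")   # smallest C for which the model is not empty
    cs = c_min * np.logspace(0, 3, 10)   # grid of C values above that bound

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=cs[3]).fit(X, y)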

sklearn.preprocessing.scale()

sklearn.preprocessing.scale(X, axis=0, with_mean=True, with_std=True, copy=True)

Standardize a dataset along any axis. Center to the mean and component-wise scale to unit variance. Read more in the User Guide.

Parameters:

X : {array-like, sparse matrix}
    The data to center and scale.
axis : int (0 by default)
    Axis used to compute the means and standard deviations along. If 0, independently standardize each feature, otherwise (if 1) standardize each sample.
with_mean : boolean, True by default
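A minimal sketch with a small illustrative array:

    import numpy as np
    from sklearn.preprocessing import scale

    X = np.array([[1.0, -1.0,  2.0],
                  [2.0,  0.0,  0.0],
                  [0.0,  1.0, -1.0]])

    X_scaled = scale(X)                  # standardize each column (axis=0)
    col_means = X_scaled.mean(axis=0)    # approximately 0
    col_stds = X_scaled.std(axis=0)      # approximately 1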