tools.numdiff.approx_hess1()

statsmodels.tools.numdiff.approx_hess1(x, f, epsilon=None, args=(), kwargs={}, return_grad=False)

Calculate the Hessian with a finite-difference derivative approximation.

Parameters:
    x : array_like
        Value at which the function derivative is evaluated.
    f : function
        Function of one array, f(x, *args, **kwargs).
    epsilon : float or array_like, optional
        Stepsize used; if None, the stepsize is chosen automatically as EPS**(1/3)*x.
    args : tuple
        Arguments passed to f.
    kwargs : dict
        Keyword arguments passed to f.
    return_grad : bool
        Whether to also return the gradient.

Returns:
    hess : ndarray
        Array of partial second derivatives (the Hessian); the gradient is also returned if return_grad is True.
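As a sketch of the kind of scheme this function implements, a forward-difference Hessian can be written in plain NumPy. The step-size rule below is an illustrative assumption, not necessarily the exact rule statsmodels uses:

```python
import numpy as np

def hess_fd(f, x, eps=None):
    # Forward-difference Hessian:
    # H[i, j] ≈ (f(x+h_i+h_j) - f(x+h_i) - f(x+h_j) + f(x)) / (h_i * h_j)
    x = np.asarray(x, dtype=float)
    if eps is None:
        # EPS**(1/3)-scaled step, guarded away from zero (assumption)
        eps = np.finfo(float).eps ** (1 / 3) * np.maximum(np.abs(x), 1e-3)
    h = np.diag(eps)
    n = x.size
    f0 = f(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (
                f(x + h[i] + h[j]) - f(x + h[i]) - f(x + h[j]) + f0
            ) / (eps[i] * eps[j])
    return H

# f(x, y) = x**2 * y has Hessian [[2y, 2x], [2x, 0]]
H = hess_fd(lambda v: v[0] ** 2 * v[1], np.array([1.0, 2.0]))
```

At (1, 2) the exact Hessian is [[4, 2], [2, 0]], which the sketch recovers to a few decimal places.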

tools.numdiff.approx_fprime_cs()

statsmodels.tools.numdiff.approx_fprime_cs(x, f, epsilon=None, args=(), kwargs={})

Calculate the gradient (or Jacobian) with a complex-step derivative approximation.

Parameters:
    x : array
        Parameters at which the derivative is evaluated.
    f : function
        f(*((x,)+args), **kwargs) returning either one value or a 1d array.
    epsilon : float, optional
        Stepsize; if None, the optimal stepsize EPS*x is used. See note.
    args : tuple
        Tuple of additional arguments for the function f.
    kwargs : dict
        Dictionary of additional keyword arguments for the function f.

Returns:
    partials : ndarray
        Array of partial derivatives (gradient or Jacobian).

Notes:
    The complex-step approximation involves no subtraction, so there is no cancellation error and epsilon can be taken very small; f must accept complex-valued arguments.
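The complex-step trick itself is compact enough to sketch directly; the helper below is hypothetical, but the formula Im(f(x + i*eps*e_i))/eps is the standard complex-step derivative this function is based on:

```python
import numpy as np

def fprime_cs(f, x, eps=1e-20):
    # Complex-step derivative: f'(x_i) ≈ Im(f(x + i*eps*e_i)) / eps.
    # There is no subtraction, hence no cancellation error, so eps can
    # be tiny. Requires f to be analytic and to accept complex input.
    x = np.asarray(x, dtype=float)
    grad = np.empty(x.size)
    for i in range(x.size):
        xc = x.astype(complex)
        xc[i] += 1j * eps
        grad[i] = f(xc).imag / eps
    return grad

# f(x, y) = sin(x) * y; at (0, 3) the gradient is [cos(0)*3, sin(0)] = [3, 0]
g = fprime_cs(lambda v: np.sin(v[0]) * v[1], np.array([0.0, 3.0]))
```

Note the step 1e-20 would be hopeless for ordinary finite differences; it is fine here precisely because no subtraction occurs.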

tools.numdiff.approx_fprime()

statsmodels.tools.numdiff.approx_fprime(x, f, epsilon=None, args=(), kwargs={}, centered=False)

Gradient of a function, or its Jacobian if f returns a 1d array.

Parameters:
    x : array
        Parameters at which the derivative is evaluated.
    f : function
        f(*((x,)+args), **kwargs) returning either one value or a 1d array.
    epsilon : float, optional
        Stepsize; if None, the optimal stepsize is used: EPS**(1/2)*x for centered == False and EPS**(1/3)*x for centered == True.
    args : tuple
        Tuple of additional arguments for the function f.
    kwargs : dict
        Dictionary of additional keyword arguments for the function f.
    centered : bool
        Whether a centered (rather than forward) difference is used.

Returns:
    grad : ndarray
        Gradient or Jacobian.
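A minimal NumPy sketch of the two difference schemes, assuming the step-size rules quoted above (the guard term `np.maximum(..., 1.0)` is an illustrative assumption):

```python
import numpy as np

def fprime_fd(f, x, centered=False):
    # Forward difference: (f(x+h) - f(x)) / h, step ~ EPS**(1/2)*x.
    # Central difference: (f(x+h) - f(x-h)) / (2h), step ~ EPS**(1/3)*x;
    # one extra evaluation per coordinate buys O(h**2) truncation error.
    x = np.asarray(x, dtype=float)
    EPS = np.finfo(float).eps
    h = (EPS ** (1 / 3 if centered else 1 / 2)) * np.maximum(np.abs(x), 1.0)
    grad = np.empty(x.size)
    for i, e in enumerate(np.eye(x.size)):
        if centered:
            grad[i] = (f(x + e * h[i]) - f(x - e * h[i])) / (2 * h[i])
        else:
            grad[i] = (f(x + e * h[i]) - f(x)) / h[i]
    return grad

# f(x, y) = x**2 + 3y; exact gradient at (2, 1) is [4, 3]
g = fprime_fd(lambda v: v[0] ** 2 + 3 * v[1], np.array([2.0, 1.0]), centered=True)
```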

tools.eval_measures.vare()

statsmodels.tools.eval_measures.vare(x1, x2, ddof=0, axis=0)

Variance of the error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    vare : ndarray or float
        Variance of the difference along the given axis.

Notes:
    If x1 and x2 have different shapes, they need to broadcast. The inputs are converted with numpy.asanyarray; whether the result is as desired depends on the array subclass, e.g. numpy matrices can silently produce an incorrect result.
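The measure reduces to the variance of the elementwise difference; a plain NumPy sketch with illustrative sample arrays:

```python
import numpy as np

# illustrative forecast vs. observed values
x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.5, 3.5, 3.5])

# variance of the error along axis 0, with the default ddof=0
v = np.var(x1 - x2, ddof=0, axis=0)
# errors are [-0.5, 0.5, -0.5, 0.5], so v == 0.25
```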

tools.eval_measures.stde()

statsmodels.tools.eval_measures.stde(x1, x2, ddof=0, axis=0)

Standard deviation of the error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    stde : ndarray or float
        Standard deviation of the difference along the given axis.

Notes:
    If x1 and x2 have different shapes, they need to broadcast. The inputs are converted with numpy.asanyarray; whether the result is as desired depends on the array subclass, e.g. numpy matrices can silently produce an incorrect result.
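Equivalently, in plain NumPy (sample arrays are illustrative):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.5, 3.5, 3.5])

# standard deviation of the error along axis 0, default ddof=0
s = np.std(x1 - x2, ddof=0, axis=0)
# errors are [-0.5, 0.5, -0.5, 0.5], so s == 0.5
```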

tools.eval_measures.rmse()

statsmodels.tools.eval_measures.rmse(x1, x2, axis=0)

Root mean squared error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    rmse : ndarray or float
        Root mean squared error along the given axis.

Notes:
    If x1 and x2 have different shapes, they need to broadcast. The inputs are converted with numpy.asanyarray; whether the result is as desired depends on the array subclass, e.g. numpy matrices can silently produce an incorrect result.
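A NumPy sketch of the computation (sample arrays are illustrative):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.5, 3.5, 3.5])

# root of the mean squared elementwise difference along axis 0
r = np.sqrt(np.mean((x1 - x2) ** 2, axis=0))
# every error is ±0.5, so r == 0.5
```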

tools.eval_measures.mse()

statsmodels.tools.eval_measures.mse(x1, x2, axis=0)

Mean squared error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    mse : ndarray or float
        Mean squared error along the given axis.

Notes:
    If x1 and x2 have different shapes, they need to broadcast. The inputs are converted with numpy.asanyarray; whether the result is as desired depends on the array subclass, e.g. numpy matrices can silently produce an incorrect result.
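The same computation in plain NumPy (sample arrays are illustrative):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.5, 3.5, 3.5])

# mean of the squared elementwise difference along axis 0
m = np.mean((x1 - x2) ** 2, axis=0)
# every error is ±0.5, so m == 0.25
```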

tools.eval_measures.medianbias()

statsmodels.tools.eval_measures.medianbias(x1, x2, axis=0)

Median bias (median error).

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    medianbias : ndarray or float
        Median bias, i.e. the median difference along the given axis.

Notes:
    If x1 and x2 have different shapes, they need to broadcast. The inputs are converted with numpy.asanyarray; whether the result is as desired depends on the array subclass, e.g. numpy matrices can silently produce an incorrect result.
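Unlike the squared or absolute measures, this one keeps the sign of the error, so symmetric over- and under-prediction cancel. A NumPy sketch (sample arrays are illustrative):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.5, 3.5, 3.5])

# median of the signed elementwise difference along axis 0
b = np.median(x1 - x2, axis=0)
# errors [-0.5, 0.5, -0.5, 0.5] are symmetric about zero, so b == 0.0
```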

tools.eval_measures.medianabs()

statsmodels.tools.eval_measures.medianabs(x1, x2, axis=0)

Median absolute error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    medianabs : ndarray or float
        Median absolute difference along the given axis.

Notes:
    If x1 and x2 have different shapes, they need to broadcast. The inputs are converted with numpy.asanyarray; whether the result is as desired depends on the array subclass, e.g. numpy matrices can silently produce an incorrect result.
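A NumPy sketch of the computation (sample arrays are illustrative):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.5, 3.5, 3.5])

# median of the absolute elementwise difference along axis 0
ma = np.median(np.abs(x1 - x2), axis=0)
# every absolute error is 0.5, so ma == 0.5
```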

tools.eval_measures.meanabs()

statsmodels.tools.eval_measures.meanabs(x1, x2, axis=0)

Mean absolute error.

Parameters:
    x1, x2 : array_like
        The performance measure depends on the difference between these two arrays.
    axis : int
        Axis along which the summary statistic is calculated.

Returns:
    meanabs : ndarray or float
        Mean absolute difference along the given axis.

Notes:
    If x1 and x2 have different shapes, they need to broadcast. The inputs are converted with numpy.asanyarray; whether the result is as desired depends on the array subclass, e.g. numpy matrices can silently produce an incorrect result.
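A NumPy sketch of the computation (sample arrays are illustrative):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.5, 3.5, 3.5])

# mean of the absolute elementwise difference along axis 0
ea = np.mean(np.abs(x1 - x2), axis=0)
# every absolute error is 0.5, so ea == 0.5
```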