statsmodels.discrete.discrete_model.Poisson.fit
Poisson.fit(start_params=None, method='newton', maxiter=35, full_output=1, disp=1, callback=None, **kwargs)
Fit the model using maximum likelihood.
The rest of the docstring is from statsmodels.base.model.LikelihoodModel.fit.

Fit method for likelihood-based models.
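As a minimal sketch of a typical call (the seed, design matrix, and coefficients below are made up for illustration):

    import numpy as np
    import statsmodels.api as sm

    # Synthetic count data: a constant plus two regressors (values are arbitrary).
    rng = np.random.default_rng(12345)
    X = sm.add_constant(rng.normal(size=(500, 2)))
    y = rng.poisson(np.exp(X @ [0.5, 0.3, -0.2]))

    model = sm.Poisson(y, X)
    res = model.fit()            # defaults: method='newton', maxiter=35, disp=1
    print(res.params)            # estimated coefficients
    print(res.mle_retvals)       # solver diagnostics, kept because full_output=1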
Parameters:

start_params : array-like, optional
    Initial guess of the solution for the loglikelihood maximization. The default is an array of zeros.
method : str, optional
    The method determines which solver from scipy.optimize is used, and it can be chosen from among the following strings:
    - 'newton' for Newton-Raphson
    - 'nm' for Nelder-Mead
    - 'bfgs' for Broyden-Fletcher-Goldfarb-Shanno (BFGS)
    - 'lbfgs' for limited-memory BFGS with optional box constraints
    - 'powell' for modified Powell's method
    - 'cg' for conjugate gradient
    - 'ncg' for Newton-conjugate gradient
    - 'basinhopping' for global basin-hopping solver
    The explicit arguments in fit are passed to the solver, with the exception of the basin-hopping solver. Each solver has several optional arguments that are not the same across solvers. See the Notes section below (or scipy.optimize) for the available arguments and for the list of explicit arguments that the basin-hopping solver supports.
maxiter : int, optional
    The maximum number of iterations to perform.
full_output : bool, optional
    Set to True to have all available output in the Results object's mle_retvals attribute. The output is dependent on the solver. See LikelihoodModelResults notes section for more information.
disp : bool, optional
    Set to True to print convergence messages.
fargs : tuple, optional
    Extra arguments passed to the likelihood function, i.e., loglike(x, *args).
callback : callable callback(xk), optional
    Called after each iteration, as callback(xk), where xk is the current parameter vector.
retall : bool, optional
    Set to True to return the list of solutions at each iteration. Available in the Results object's mle_retvals attribute.
skip_hessian : bool, optional
    If False (default), then the negative inverse hessian is calculated after the optimization. If True, then the hessian will not be calculated. However, it will be available in methods that use the hessian in the optimization (currently only with 'newton').
kwargs : keywords
    All kwargs are passed to the chosen solver with one exception. The following keyword controls what happens after the fit:
    warn_convergence : bool, optional
        If True, checks the model for the converged flag. If the converged flag is False, a ConvergenceWarning is issued.
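To sketch how these kwargs interact, the snippet below deliberately caps maxiter far too low so the solver stops early; because warn_convergence defaults to True, a ConvergenceWarning is emitted and the converged flag in mle_retvals is False (data and seed are invented for the example):

    import warnings
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tools.sm_exceptions import ConvergenceWarning

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(200, 1)))
    y = rng.poisson(np.exp(X @ [0.2, 0.5]))

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        # maxiter=1 is far too small for 'bfgs' to converge.
        res = sm.Poisson(y, X).fit(method='bfgs', maxiter=1, disp=0)

    print(res.mle_retvals['converged'])                                     # False
    print(any(issubclass(w.category, ConvergenceWarning) for w in caught))  # True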
Notes

The 'basinhopping' solver ignores the maxiter, retall, and full_output explicit arguments.

Optional arguments for solvers (see returned Results.mle_settings):
'newton'
    tol : float
        Relative error in params acceptable for convergence.
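For instance, a tighter Newton-Raphson tolerance could be requested like this (synthetic data; the tolerance value is just illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    # 'tol' is forwarded to the Newton-Raphson solver as its convergence criterion.
    res = sm.Poisson(y, X).fit(method='newton', tol=1e-10, maxiter=100, disp=0)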
'nm' -- Nelder-Mead
    xtol : float
        Relative error in params acceptable for convergence.
    ftol : float
        Relative error in loglike(params) acceptable for convergence.
    maxfun : int
        Maximum number of function evaluations to make.
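A sketch of passing these Nelder-Mead options through fit (synthetic data; option values chosen only for illustration):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    # xtol/ftol/maxfun are forwarded to scipy's Nelder-Mead implementation.
    res = sm.Poisson(y, X).fit(method='nm', xtol=1e-6, ftol=1e-6,
                               maxfun=5000, maxiter=2000, disp=0)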
'bfgs'
    gtol : float
        Stop when the norm of the gradient is less than gtol.
    norm : float
        Order of norm (np.Inf is max, -np.Inf is min).
    epsilon
        If fprime is approximated, use this value for the step size. Only relevant if LikelihoodModel.score is None.
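For example, a stricter gradient-norm stopping rule for BFGS might look like this (synthetic data; the gtol value is illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    # gtol is forwarded to scipy's BFGS routine; epsilon is not needed here
    # because Poisson provides an analytic score.
    res = sm.Poisson(y, X).fit(method='bfgs', gtol=1e-9, disp=0)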
'lbfgs'
    m : int
        This many terms are used for the Hessian approximation.
    factr : float
        A stop condition that is a variant of relative error.
    pgtol : float
        A stop condition that uses the projected gradient.
    epsilon
        If fprime is approximated, use this value for the step size. Only relevant if LikelihoodModel.score is None.
    maxfun : int
        Maximum number of function evaluations to make.
    bounds : sequence
        (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.
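A sketch of box-constrained estimation with 'lbfgs', constraining the slope to be nonnegative (synthetic data; the bound is purely illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    # One (min, max) pair per parameter: the constant is unbounded,
    # the slope is restricted to [0, inf).
    res = sm.Poisson(y, X).fit(method='lbfgs', bounds=[(None, None), (0, None)],
                               pgtol=1e-8, maxfun=5000, disp=0)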
'cg'
    gtol : float
        Stop when the norm of the gradient is less than gtol.
    norm : float
        Order of norm (np.Inf is max, -np.Inf is min).
    epsilon : float
        If fprime is approximated, use this value for the step size. Can be scalar or vector. Only relevant if LikelihoodModel.score is None.
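Usage with the conjugate-gradient solver follows the same pattern (synthetic data; gtol is illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    res = sm.Poisson(y, X).fit(method='cg', gtol=1e-8, maxiter=500, disp=0)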
'ncg'
    fhess_p : callable f'(x, *args)
        Function which computes the Hessian of f times an arbitrary vector, p. Should only be supplied if LikelihoodModel.hessian is None.
    avextol : float
        Stop when the average relative error in the minimizer falls below this amount.
    epsilon : float or ndarray
        If fhess is approximated, use this value for the step size. Only relevant if LikelihoodModel.hessian is None.
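Analogously for Newton-conjugate gradient, where avextol controls the stopping rule (synthetic data; the value is illustrative; fhess_p is not needed because Poisson supplies an analytic Hessian):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    res = sm.Poisson(y, X).fit(method='ncg', avextol=1e-8, disp=0)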
'powell'
    xtol : float
        Line-search error tolerance.
    ftol : float
        Relative error in loglike(params) acceptable for convergence.
    maxfun : int
        Maximum number of function evaluations to make.
    start_direc : ndarray
        Initial direction set.
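A sketch of the Powell solver with its own tolerances (synthetic data; values are illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    # xtol/ftol/maxfun are forwarded to scipy's Powell routine.
    res = sm.Poisson(y, X).fit(method='powell', xtol=1e-6, ftol=1e-8,
                               maxfun=10000, disp=0)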
'basinhopping'
    niter : integer
        The number of basin-hopping iterations.
    niter_success : integer
        Stop the run if the global minimum candidate remains the same for this number of iterations.
    T : float
        The "temperature" parameter for the accept-or-reject criterion. Higher "temperatures" mean that larger jumps in function value will be accepted. For best results, `T` should be comparable to the separation (in function value) between local minima.
    stepsize : float
        Initial step size for use in the random displacement.
    interval : integer
        The interval for how often to update the `stepsize`.
    minimizer : dict
        Extra keyword arguments to be passed to the minimizer `scipy.optimize.minimize()`, for example 'method' - the minimization method (e.g. 'L-BFGS-B'), or 'tol' - the tolerance for termination. Other arguments are mapped from the explicit arguments of `fit`:
        - `args` <- `fargs`
        - `jac` <- `score`
        - `hess` <- `hess`
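A sketch of a basin-hopping fit; note that maxiter, retall, and full_output are ignored by this solver, while the local minimizer is configured through the `minimizer` dict (synthetic data; niter, stepsize, and the local method are illustrative choices):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    X = sm.add_constant(rng.normal(size=(300, 1)))
    y = rng.poisson(np.exp(X @ [0.1, 0.4]))

    res = sm.Poisson(y, X).fit(method='basinhopping', niter=20, stepsize=0.25,
                               niter_success=5,
                               minimizer={'method': 'L-BFGS-B'}, disp=0)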