Discrete Choice Models Overview

In [1]:
from __future__ import print_function
import numpy as np
import statsmodels.api as sm

Data

Load data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).

In [2]:
spector_data = sm.datasets.spector.load()
spector_data.exog = sm.add_constant(spector_data.exog, prepend=False)

Inspect the data:

In [3]:
print(spector_data.exog[:5,:])
print(spector_data.endog[:5])
[[  2.66  20.     0.     1.  ]
 [  2.89  22.     0.     1.  ]
 [  3.28  24.     0.     1.  ]
 [  2.92  12.     0.     1.  ]
 [  4.    21.     0.     1.  ]]
[ 0.  0.  0.  0.  1.]
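
For reference, these columns are GPA, TUCE, and PSI (a dummy for exposure to the new teaching method), plus the constant we appended. Assuming the usual statsmodels dataset attributes, the variable names can be checked directly:

print(spector_data.exog_name)   # e.g. ['GPA', 'TUCE', 'PSI']
print(spector_data.endog_name)  # e.g. 'GRADE'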

Linear Probability Model (OLS)

In [4]:
lpm_mod = sm.OLS(spector_data.endog, spector_data.exog)
lpm_res = lpm_mod.fit()
print('Parameters: ', lpm_res.params[:-1])  # exclude the appended constant
Parameters:  [ 0.46385168  0.01049512  0.37855479]
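
A well-known drawback of the linear probability model is that its fitted values are not constrained to the unit interval. A quick illustrative check (not part of the original example):

# Count LPM fitted "probabilities" that fall outside [0, 1].
fitted = lpm_res.fittedvalues
print('below 0:', (fitted < 0).sum(), 'above 1:', (fitted > 1).sum())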

Logit Model

In [5]:
logit_mod = sm.Logit(spector_data.endog, spector_data.exog)
logit_res = logit_mod.fit(disp=0)
print('Parameters: ', logit_res.params)
Parameters:  [  2.82611259   0.09515766   2.37868766 -13.02134686]
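
The logit coefficients are on the log-odds scale; exponentiating them gives odds ratios, which are often easier to interpret:

# exp(beta): multiplicative change in the odds of success per
# unit increase in each regressor.
print(np.exp(logit_res.params))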

Marginal Effects

In [6]:
margeff = logit_res.get_margeff()
print(margeff.summary())
        Logit Marginal Effects
=====================================
Dep. Variable:                      y
Method:                          dydx
At:                           overall
==============================================================================
                dy/dx    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             0.3626      0.109      3.313      0.001         0.148     0.577
x2             0.0122      0.018      0.686      0.493        -0.023     0.047
x3             0.3052      0.092      3.304      0.001         0.124     0.486
==============================================================================
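
By default, get_margeff returns average marginal effects (at='overall', method='dydx'). Marginal effects evaluated at the means of the regressors can be requested instead:

# Marginal effects at the sample means rather than averaged over observations.
print(logit_res.get_margeff(at='mean').summary())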

As with all of the discrete choice models presented here, we can print a full summary of the results:

In [7]:
print(logit_res.summary())
                           Logit Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                   32
Model:                          Logit   Df Residuals:                       28
Method:                           MLE   Df Model:                            3
Date:                Tue, 02 Dec 2014   Pseudo R-squ.:                  0.3740
Time:                        12:51:46   Log-Likelihood:                -12.890
converged:                       True   LL-Null:                       -20.592
                                        LLR p-value:                  0.001502
==============================================================================
                 coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             2.8261      1.263      2.238      0.025         0.351     5.301
x2             0.0952      0.142      0.672      0.501        -0.182     0.373
x3             2.3787      1.065      2.234      0.025         0.292     4.465
const        -13.0213      4.931     -2.641      0.008       -22.687    -3.356
==============================================================================

Probit Model

In [8]:
probit_mod = sm.Probit(spector_data.endog, spector_data.exog)
probit_res = probit_mod.fit()
probit_margeff = probit_res.get_margeff()
print('Parameters: ', probit_res.params)
print('Marginal effects: ')
print(probit_margeff.summary())
Optimization terminated successfully.
         Current function value: 0.400588
         Iterations 6
Parameters:  [ 1.62581004  0.05172895  1.42633234 -7.45231965]
Marginal effects:
       Probit Marginal Effects
=====================================
Dep. Variable:                      y
Method:                          dydx
At:                           overall
==============================================================================
                dy/dx    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             0.3608      0.113      3.182      0.001         0.139     0.583
x2             0.0115      0.018      0.624      0.533        -0.025     0.048
x3             0.3165      0.090      3.508      0.000         0.140     0.493
==============================================================================
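
Because the logistic distribution has a larger variance than the standard normal, logit coefficients tend to be roughly 1.6 to 1.8 times the corresponding probit coefficients, even though the implied marginal effects are very close, as the tables above show. A rough check:

# Element-wise ratio of logit to probit coefficients.
print(logit_res.params / probit_res.params)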

Multinomial Logit

Load data from the American National Election Studies:

In [9]:
anes_data = sm.datasets.anes96.load()
anes_exog = anes_data.exog
anes_exog = sm.add_constant(anes_exog, prepend=False)

Inspect the data:

In [10]:
print(anes_data.exog[:5,:])
print(anes_data.endog[:5])
[[ -2.30258509   7.          36.           3.           1.        ]
 [  5.24755025   3.          20.           4.           1.        ]
 [  3.43720782   2.          24.           6.           1.        ]
 [  4.4200447    3.          28.           6.           1.        ]
 [  6.46162441   5.          68.           6.           1.        ]]
[ 6.  1.  1.  1.  0.]
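
Assuming the usual statsmodels dataset attributes, the variable names can be inspected the same way as before:

print(anes_data.exog_name)   # names of the five regressors
print(anes_data.endog_name)  # party identification, coded 0 through 6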

Fit MNL model:

In [11]:
mlogit_mod = sm.MNLogit(anes_data.endog, anes_exog)
mlogit_res = mlogit_mod.fit()
print(mlogit_res.params)
Optimization terminated successfully.
         Current function value: 1.548647
         Iterations 7
[[ -0.01153597  -0.08875065  -0.1059667   -0.0915567   -0.0932846
   -0.14088069]
 [  0.29771435   0.39166864   0.57345051   1.27877179   1.34696165
    2.07008014]
 [ -0.024945    -0.02289784  -0.01485121  -0.00868135  -0.01790407
   -0.00943265]
 [  0.08249144   0.18104276  -0.00715242   0.19982796   0.21693885
    0.3219257 ]
 [  0.00519655   0.04787398   0.05757516   0.08449838   0.08095841
    0.10889408]
 [ -0.37340168  -2.25091318  -3.66558353  -7.61384309  -7.06047825
  -12.1057509 ]]
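
Each column of the parameter array contrasts one outcome category (y=1 through y=6) against the base category y=0. The fitted model can also produce predicted probabilities; a short sketch:

# Predicted probabilities for the first five observations;
# each row sums to one across the seven outcome categories.
print(mlogit_res.predict(anes_exog[:5]))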

Poisson

Load the RAND Health Insurance Experiment data. Note that this example is similar to Cameron and Trivedi's Microeconometrics, Table 20.5; the results differ slightly because of minor changes in the data.

In [12]:
rand_data = sm.datasets.randhie.load()
# The exog is a structured (record) array; view it as a plain 2-D float array.
rand_exog = rand_data.exog.view(float).reshape(len(rand_data.exog), -1)
rand_exog = sm.add_constant(rand_exog, prepend=False)

Fit Poisson model:

In [13]:
poisson_mod = sm.Poisson(rand_data.endog, rand_exog)
poisson_res = poisson_mod.fit(method="newton")
print(poisson_res.summary())
Optimization terminated successfully.
         Current function value: 3.091609
         Iterations 12
                          Poisson Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                20190
Model:                        Poisson   Df Residuals:                    20180
Method:                           MLE   Df Model:                            9
Date:                Tue, 02 Dec 2014   Pseudo R-squ.:                 0.06343
Time:                        12:51:48   Log-Likelihood:                -62420.
converged:                       True   LL-Null:                       -66647.
                                        LLR p-value:                     0.000
==============================================================================
                 coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.0525      0.003    -18.216      0.000        -0.058    -0.047
x2            -0.2471      0.011    -23.272      0.000        -0.268    -0.226
x3             0.0353      0.002     19.302      0.000         0.032     0.039
x4            -0.0346      0.002    -21.439      0.000        -0.038    -0.031
x5             0.2717      0.012     22.200      0.000         0.248     0.296
x6             0.0339      0.001     60.098      0.000         0.033     0.035
x7            -0.0126      0.009     -1.366      0.172        -0.031     0.005
x8             0.0541      0.015      3.531      0.000         0.024     0.084
x9             0.2061      0.026      7.843      0.000         0.155     0.258
const          0.7004      0.011     62.741      0.000         0.678     0.722
==============================================================================
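
Since the Poisson model is log-linear, the exponentiated coefficients can be read as incidence rate ratios, i.e. multiplicative effects on the expected count:

# exp(beta): multiplicative change in the expected count per
# unit increase in each regressor.
print(np.exp(poisson_res.params))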

Negative Binomial

The negative binomial model, which allows for overdispersion through the extra alpha parameter, gives slightly different results.

In [14]:
mod_nbin = sm.NegativeBinomial(rand_data.endog, rand_exog)
res_nbin = mod_nbin.fit(disp=False)
print(res_nbin.summary())
                     NegativeBinomial Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                20190
Model:               NegativeBinomial   Df Residuals:                    20180
Method:                           MLE   Df Model:                            9
Date:                Tue, 02 Dec 2014   Pseudo R-squ.:                 0.01845
Time:                        12:51:49   Log-Likelihood:                -43384.
converged:                      False   LL-Null:                       -44199.
                                        LLR p-value:                     0.000
==============================================================================
                 coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.0580      0.006     -9.517      0.000        -0.070    -0.046
x2            -0.2678      0.023    -11.802      0.000        -0.312    -0.223
x3             0.0412      0.004      9.937      0.000         0.033     0.049
x4            -0.0381      0.003    -11.219      0.000        -0.045    -0.031
x5             0.2690      0.030      8.981      0.000         0.210     0.328
x6             0.0382      0.001     26.081      0.000         0.035     0.041
x7            -0.0441      0.020     -2.200      0.028        -0.083    -0.005
x8             0.0172      0.036      0.477      0.633        -0.054     0.088
x9             0.1780      0.074      2.397      0.017         0.032     0.324
const          0.6636      0.025     26.787      0.000         0.615     0.712
alpha          1.2930      0.019     69.477      0.000         1.256     1.329
==============================================================================

/home/skipper/statsmodels/statsmodels/statsmodels/base/model.py:466: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)

Alternative solvers

The default method for fitting discrete data MLE models is Newton-Raphson. Other solvers can be selected with the method argument:

In [15]:
mlogit_res = mlogit_mod.fit(method='bfgs', maxiter=100)
print(mlogit_res.summary())
Warning: Maximum number of iterations has been exceeded.
         Current function value: 1.548649
         Iterations: 100
         Function evaluations: 111
         Gradient evaluations: 111
                          MNLogit Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  944
Model:                        MNLogit   Df Residuals:                      908
Method:                           MLE   Df Model:                           30
Date:                Tue, 02 Dec 2014   Pseudo R-squ.:                  0.1648
Time:                        12:51:49   Log-Likelihood:                -1461.9
converged:                      False   LL-Null:                       -1750.3
                                        LLR p-value:                1.826e-102
==============================================================================
       y=1       coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.0117      0.034     -0.340      0.734        -0.079     0.056
x2             0.2975      0.094      3.177      0.001         0.114     0.481
x3            -0.0250      0.007     -3.829      0.000        -0.038    -0.012
x4             0.0824      0.074      1.119      0.263        -0.062     0.227
x5             0.0051      0.018      0.291      0.771        -0.029     0.040
const         -0.3689      0.630     -0.586      0.558        -1.603     0.866
------------------------------------------------------------------------------
       y=2       coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.0889      0.039     -2.269      0.023        -0.166    -0.012
x2             0.3911      0.108      3.613      0.000         0.179     0.603
x3            -0.0229      0.008     -2.897      0.004        -0.038    -0.007
x4             0.1808      0.085      2.119      0.034         0.014     0.348
x5             0.0478      0.022      2.147      0.032         0.004     0.091
const         -2.2445      0.763     -2.941      0.003        -3.740    -0.749
------------------------------------------------------------------------------
       y=3       coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.1058      0.057     -1.856      0.063        -0.218     0.006
x2             0.5736      0.159      3.618      0.000         0.263     0.884
x3            -0.0149      0.011     -1.315      0.189        -0.037     0.007
x4            -0.0075      0.126     -0.060      0.952        -0.255     0.240
x5             0.0575      0.034      1.712      0.087        -0.008     0.123
const         -3.6614      1.156     -3.166      0.002        -5.928    -1.395
------------------------------------------------------------------------------
       y=4       coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.0915      0.044     -2.088      0.037        -0.177    -0.006
x2             1.2827      0.129      9.938      0.000         1.030     1.536
x3            -0.0085      0.008     -1.009      0.313        -0.025     0.008
x4             0.2013      0.094      2.137      0.033         0.017     0.386
x5             0.0851      0.026      3.241      0.001         0.034     0.137
const         -7.6593      0.960     -7.981      0.000        -9.540    -5.778
------------------------------------------------------------------------------
       y=5       coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.0935      0.039     -2.376      0.018        -0.171    -0.016
x2             1.3462      0.117     11.490      0.000         1.117     1.576
x3            -0.0180      0.008     -2.362      0.018        -0.033    -0.003
x4             0.2165      0.085      2.547      0.011         0.050     0.383
x5             0.0808      0.023      3.520      0.000         0.036     0.126
const         -7.0479      0.844     -8.350      0.000        -8.702    -5.394
------------------------------------------------------------------------------
       y=6       coef    std err          z      P>|z|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.1410      0.042     -3.347      0.001        -0.224    -0.058
x2             2.0688      0.143     14.433      0.000         1.788     2.350
x3            -0.0094      0.008     -1.160      0.246        -0.025     0.007
x4             0.3214      0.091      3.528      0.000         0.143     0.500
x5             0.1090      0.025      4.309      0.000         0.059     0.159
const        -12.0966      1.059    -11.418      0.000       -14.173   -10.020
==============================================================================

/home/skipper/statsmodels/statsmodels/statsmodels/base/model.py:466: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
