statsmodels.regression.linear_model.OLSResults.el_test

OLSResults.el_test(b0_vals, param_nums, return_weights=0, ret_params=0, method='nm', stochastic_exog=1)

Test single or joint hypotheses using Empirical Likelihood.
Parameters:
b0_vals : 1darray
The hypothesized value(s) of the parameter(s) to be tested.
param_nums : 1darray
The parameter number(s) to be tested.
return_weights : bool
If true, returns the weights that optimize the likelihood ratio at b0_vals. The default is False.
ret_params : bool
If true, returns the parameter vector that maximizes the likelihood ratio at b0_vals. Also returns the weights. The default is False.
method : str
The optimization method used to optimize over the nuisance parameters. Can be either ‘nm’ for Nelder-Mead or ‘powell’ for Powell. The default is ‘nm’.
stochastic_exog : bool
When True, the exogenous variables are assumed to be stochastic. When the regressors are nonstochastic, moment conditions are placed on the exogenous variables. Confidence intervals for stochastic regressors are at least as large as those for nonstochastic regressors. The default is True.
Returns:
tuple
-2 times the log-likelihood ratio and the corresponding p-value for the hypothesized values (in that order, as in the example below).
Examples
>>> import statsmodels.api as sm
>>> data = sm.datasets.stackloss.load(as_pandas=False)
>>> endog = data.endog
>>> exog = sm.add_constant(data.exog)
>>> model = sm.OLS(endog, exog)
>>> fitted = model.fit()
>>> fitted.params
array([-39.91967442,   0.7156402 ,   1.29528612,  -0.15212252])
>>> fitted.rsquared
0.91357690446068196
>>> # Test that the slope on the first variable is 0
>>> fitted.el_test([0], [1])
(27.248146353888796, 1.7894660442330235e-07)
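The sketch below continues the session above and is not part of the original documentation: it illustrates a joint hypothesis on two slope coefficients and a call with return_weights=1. The hypothesized values 0.7 and 1.3 are chosen for illustration only, and the contents of the returned tuple with return_weights=1 follow the parameter description above.

>>> import numpy as np
>>> # Joint hypothesis: the first and second slopes equal 0.7 and 1.3 simultaneously
>>> fitted.el_test(np.array([0.7, 1.3]), np.array([1, 2]))
>>> # Request the observation weights that optimize the likelihood ratio;
>>> # per the return_weights description above, they are returned in addition
>>> # to the test statistic and p-value
>>> res = fitted.el_test(np.array([0.7]), np.array([1]), return_weights=1)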