Prediction (out of sample)

[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt

import statsmodels.api as sm

plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)

Artificial data

[3]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1-5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
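
Note that no random seed is set, so the draws (and the numbers printed
below) change from run to run. For reproducible output you could seed
NumPy's generator before sampling, e.g. (a minimal sketch):

np.random.seed(12345)  # fix the seed before drawing the noise term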

Estimation

[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.991
Model:                            OLS   Adj. R-squared:                  0.991
Method:                 Least Squares   F-statistic:                     1741.
Date:                Thu, 05 Nov 2020   Prob (F-statistic):           2.42e-47
Time:                        07:28:38   Log-Likelihood:                 16.213
No. Observations:                  50   AIC:                            -24.43
Df Residuals:                      46   BIC:                            -16.78
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          4.9818      0.062     80.129      0.000       4.857       5.107
x1             0.4999      0.010     52.131      0.000       0.481       0.519
x2             0.5705      0.038     15.136      0.000       0.495       0.646
x3            -0.0196      0.001    -23.240      0.000      -0.021      -0.018
==============================================================================
Omnibus:                        1.466   Durbin-Watson:                   2.544
Prob(Omnibus):                  0.481   Jarque-Bera (JB):                1.270
Skew:                          -0.221   Prob(JB):                        0.530
Kurtosis:                       2.357   Cond. No.                         221.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
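
The individual estimates are also available programmatically; a quick
sketch using the standard results attributes:

print(olsres.params)  # estimated coefficients, in the same order as the table
print(olsres.bse)     # their standard errors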

In-sample prediction

[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.49265385  4.99974377  5.46310707  5.85164932  6.14549789  6.33926701
  6.44294259  6.48024159  6.48471541  6.49423745  6.54478062  6.66450713
  6.8691415   7.15938703  7.52081031  7.92621256  8.34009829  8.72450714
  9.04525068  9.27752939  9.41000725  9.44667415  9.40619038  9.31882081
  9.22146243  9.15158231  9.14106186  9.21095878  9.36804736  9.60370549
  9.89533122 10.21005623 10.51014599 10.75919941 10.92812949 10.99994216
 10.97252845 10.85901089 10.68558759 10.48722792 10.30192717 10.16446402
 10.10068647 10.12326517 10.22961157 10.40230328 10.61194648 10.82200528
 10.99480442 11.09771924]
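
Calling predict with no argument reuses the design matrix from
estimation, so the values above match the fitted values; a minimal
check using standard results attributes:

assert np.allclose(olsres.predict(), olsres.fittedvalues)  # in-sample predictions == fitted values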

Create a new sample of explanatory variables Xnew, predict, and plot.

[6]:
x1n = np.linspace(20.5,25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n-5)**2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew)  # predict out of sample
print(ynewpred)
[11.09713875 10.94752801 10.67126182 10.31932948  9.95885089  9.65664319
  9.46286165  9.39871988  9.451296    9.57669639]
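
If you also want uncertainty around the out-of-sample predictions,
get_prediction returns the point predictions together with confidence
and prediction intervals; a short sketch using the results API:

pred = olsres.get_prediction(Xnew)
print(pred.summary_frame(alpha=0.05))  # mean, mean_ci_*, and obs_ci_* columns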

Plot comparison

[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, 'o', label="Data")
ax.plot(x1, y_true, 'b-', label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), 'r', label="OLS prediction")
ax.legend(loc="best");
[Figure: data points, true curve, and in- and out-of-sample OLS predictions]

Predicting with Formulas

Using formulas can make both estimation and prediction a lot easier.

[8]:
from statsmodels.formula.api import ols

data = {"x1" : x1, "y" : y}

res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()

We use I() to indicate the identity transform; i.e., we do not want any formula expansion magic from using **2. Inside I(), the expression is evaluated as plain Python (see the sketch below).
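
As a small illustration (assuming patsy, the formula engine used by
statsmodels, is available), the column produced by I() is just the
plain Python evaluation of the expression:

from patsy import dmatrix

# "- 1" drops the intercept so only the transformed column remains
print(dmatrix("I((x1 - 5)**2) - 1", {"x1": x1[:3]}))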

[9]:
res.params
[9]:
Intercept           4.981789
x1                  0.499863
np.sin(x1)          0.570549
I((x1 - 5) ** 2)   -0.019565
dtype: float64

Now we only have to pass the single variable x1, and we get the transformed right-hand-side variables automatically.

[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0    11.097139
1    10.947528
2    10.671262
3    10.319329
4     9.958851
5     9.656643
6     9.462862
7     9.398720
8     9.451296
9     9.576696
dtype: float64
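
get_prediction works with the formula interface as well, accepting the
same dict of raw variables; a short sketch:

pred = res.get_prediction(exog=dict(x1=x1n))
print(pred.summary_frame(alpha=0.05))  # point predictions with interval columns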