
Formulae

Version: 2025-03-16

Introduction

We report all the formulae used in the main computations carried out in the package, as a reference for users and developers.

Basic model for the HP filter with jumps

The basic model is based on the state-space representation
$$
\begin{aligned}
y_t &= \alpha^{(1)}_t + \varepsilon_t, &\qquad \varepsilon_t &\sim \mathrm{NID}(0, \sigma^2_\varepsilon),\\
\alpha^{(1)}_{t+1} &= \alpha^{(1)}_t + \alpha^{(2)}_t + \eta_t, &\qquad \eta_t &\sim \mathrm{NID}(0, \sigma^2_{\eta,t}),\\
\alpha^{(2)}_{t+1} &= \alpha^{(2)}_t + \zeta_t, &\qquad \zeta_t &\sim \mathrm{NID}(0, \sigma^2_{\zeta,t}),
\end{aligned}
$$
with initial conditions
$$
\begin{bmatrix} \alpha^{(1)}_1 \\ \alpha^{(2)}_1 \end{bmatrix} \sim
N\!\left(
\begin{bmatrix} a^{(1)}_1 \\ a^{(2)}_1 \end{bmatrix},
\begin{bmatrix} p^{(11)}_1 & p^{(12)}_1 \\ p^{(12)}_1 & p^{(22)}_1 \end{bmatrix}
\right).
$$
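
For concreteness, here is a minimal NumPy sketch that simulates from this model (the function name and parameter choices are ours, for illustration only; they are not part of the package). A single spike in $\sigma^2_{\eta,t}$ induces a level jump in the trend:

```python
import numpy as np

def simulate_hpj(n, sigma2_eps, sigma2_eta, sigma2_zeta, seed=0):
    """Simulate y_1..y_n from the model; the two state-variance arguments are length-n arrays."""
    rng = np.random.default_rng(seed)
    alpha1, alpha2 = 0.0, 0.05          # arbitrary starting trend level and slope
    y = np.empty(n)
    for t in range(n):
        y[t] = alpha1 + rng.normal(0.0, np.sqrt(sigma2_eps))
        alpha1, alpha2 = (alpha1 + alpha2 + rng.normal(0.0, np.sqrt(sigma2_eta[t])),
                          alpha2 + rng.normal(0.0, np.sqrt(sigma2_zeta[t])))
    return y

n = 200
sigma2_eta = np.zeros(n)
sigma2_eta[100] = 25.0                  # one level jump half-way through the sample
y = simulate_hpj(n, 1.0, sigma2_eta, np.full(n, 1.0 / 1600))  # lambda = 1600
```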

Scalar Kalman filtering recursions

The Kalman filtering recursions, written in scalar form (to gain computational speed and insight), are the following; for generality, we also let the variance of the measurement error vary over time.

The initial innovation, its variance, and the Kalman gains are
$$
\begin{aligned}
i_1 &= y_1 - a^{(1)}_1,\\
f_1 &= p^{(11)}_1 + \sigma^2_{\varepsilon,1},\\
k^{(1)}_1 &= (p^{(11)}_1 + p^{(12)}_1)/f_1,\\
k^{(2)}_1 &= p^{(12)}_1/f_1.
\end{aligned}
$$
For $t = 1, 2, \ldots, n-1$ the recursions are
$$
\begin{aligned}
a^{(1)}_{t+1} &= a^{(1)}_t + a^{(2)}_t + k^{(1)}_t i_t,\\
a^{(2)}_{t+1} &= a^{(2)}_t + k^{(2)}_t i_t,\\
p^{(11)}_{t+1} &= p^{(11)}_t + 2p^{(12)}_t + p^{(22)}_t + \sigma^2_{\eta,t} - k^{(1)}_t k^{(1)}_t f_t,\\
p^{(12)}_{t+1} &= p^{(12)}_t + p^{(22)}_t - k^{(1)}_t k^{(2)}_t f_t,\\
p^{(22)}_{t+1} &= p^{(22)}_t + \sigma^2_{\zeta,t} - k^{(2)}_t k^{(2)}_t f_t,\\
i_{t+1} &= y_{t+1} - a^{(1)}_{t+1},\\
f_{t+1} &= p^{(11)}_{t+1} + \sigma^2_{\varepsilon,t+1},\\
k^{(1)}_{t+1} &= (p^{(11)}_{t+1} + p^{(12)}_{t+1})/f_{t+1},\\
k^{(2)}_{t+1} &= p^{(12)}_{t+1}/f_{t+1}.
\end{aligned}
$$
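
The following Python sketch implements these scalar recursions (a hypothetical helper, not the package's API; the name `kalman_filter_hpj` and the dictionary output are our choices). All three variance arguments are length-$n$ sequences, and NaN values are treated as missing observations using the rules given in the next subsection:

```python
import numpy as np

def kalman_filter_hpj(y, sigma2_eps, sigma2_eta, sigma2_zeta,
                      a1=0.0, a2=0.0, p11=1e8, p12=0.0, p22=1e8):
    """Scalar Kalman filter; stored a/p values are one-step-ahead predictions."""
    n = len(y)
    out = {k: np.zeros(n) for k in ("a1", "a2", "p11", "p12", "p22", "i", "f", "k1", "k2")}
    for t in range(n):
        out["a1"][t], out["a2"][t] = a1, a2
        out["p11"][t], out["p12"][t], out["p22"][t] = p11, p12, p22
        if np.isnan(y[t]):                     # missing observation (next subsection)
            i, f, k1, k2 = 0.0, np.inf, 0.0, 0.0
        else:
            i = y[t] - a1                      # innovation
            f = p11 + sigma2_eps[t]            # innovation variance
            k1 = (p11 + p12) / f               # Kalman gains
            k2 = p12 / f
        out["i"][t], out["f"][t], out["k1"][t], out["k2"][t] = i, f, k1, k2
        # state prediction
        a1, a2 = a1 + a2 + k1 * i, a2 + k2 * i
        # MSE prediction (the k*k*f terms vanish when the observation is missing)
        kf = 0.0 if np.isinf(f) else f
        p11, p12, p22 = (p11 + 2 * p12 + p22 + sigma2_eta[t] - k1 * k1 * kf,
                         p12 + p22 - k1 * k2 * kf,
                         p22 + sigma2_zeta[t] - k2 * k2 * kf)
    return out
```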

Modifications when missing observations are present

When one or more values of $y_t$ are missing, the only modifications to the above recursions are the following (for each missing $y_{t+1}$):
$$
i_{t+1} = 0, \qquad f_{t+1} = \infty \ \text{(so that } 1/f_{t+1} = 0\text{)}, \qquad k^{(1)}_{t+1} = 0, \qquad k^{(2)}_{t+1} = 0.
$$
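
A quick check of these rules, reusing the `kalman_filter_hpj` sketch from above: after masking one observation, the innovation and the gains at that index are zero, and $1/f$ is zero.

```python
import numpy as np

n = 50
y = np.cumsum(np.random.default_rng(1).normal(size=n))
y[20] = np.nan                                    # one missing observation
kf = kalman_filter_hpj(y, np.ones(n), np.zeros(n), np.full(n, 1.0 / 1600))
assert kf["i"][20] == 0.0 and kf["k1"][20] == 0.0 and kf["k2"][20] == 0.0
assert 1.0 / kf["f"][20] == 0.0                   # f is stored as infinity
```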

Diffuse initial conditions

Since the two state variables are nonstationary, their initialization should be diffuse:
$$
\begin{bmatrix} \alpha^{(1)}_1 \\ \alpha^{(2)}_1 \end{bmatrix} \sim
N\!\left(
\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix} v & 0 \\ 0 & v \end{bmatrix}
\right),
\qquad v \to \infty.
$$

As will be clear from the computations below, when $v$ is infinite, the mean squared errors of $a^{(1)}_t$ and $a^{(2)}_t$ and the variances of the innovations are infinite for $t=1,2$, while from $t=3$ on they are finite.

Let us carry out the computations for $t=1,2,3$ and then take the limit for $v \to \infty$.

t=1

$$
\begin{aligned}
a^{(1)}_1 &= 0, & a^{(2)}_1 &= 0, & p^{(11)}_1 &= v, & p^{(12)}_1 &= 0, & p^{(22)}_1 &= v,\\
i_1 &= y_1, & f_1 &= v+\sigma^2_{\varepsilon,1}, & k^{(1)}_1 &= \frac{v}{v+\sigma^2_{\varepsilon,1}}, & k^{(2)}_1 &= 0.
\end{aligned}
$$

t=2

$$
\begin{aligned}
a^{(1)}_2 &= \frac{v}{v+\sigma^2_{\varepsilon,1}}\,y_1 \to y_1\\
a^{(2)}_2 &= 0\\
p^{(11)}_2 &= 2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}\\
p^{(12)}_2 &= v\\
p^{(22)}_2 &= v+\sigma^2_{\zeta,1}\\
i_2 &= y_2-\frac{v}{v+\sigma^2_{\varepsilon,1}}\,y_1 \to y_2-y_1\\
f_2 &= 2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}\\
k^{(1)}_2 &= \frac{3v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}}{2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}} \to 2\\
k^{(2)}_2 &= \frac{v}{2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}} \to 1
\end{aligned}
$$

t=3

$$
\begin{aligned}
a^{(1)}_3 &= \frac{v}{v+\sigma^2_{\varepsilon,1}}\,y_1 + \frac{3v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}}{2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}}\left(y_2-\frac{v}{v+\sigma^2_{\varepsilon,1}}\,y_1\right) \to 2y_2-y_1\\
a^{(2)}_3 &= \frac{v}{2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}}\left(y_2-\frac{v}{v+\sigma^2_{\varepsilon,1}}\,y_1\right) \to y_2-y_1\\
p^{(11)}_3 &= 5v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\zeta,1}+\sigma^2_{\eta,2}-\frac{\left(3v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}\right)^2}{2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}} \to \sigma^2_{\varepsilon,1}+4\sigma^2_{\varepsilon,2}+\sigma^2_{\eta,1}+\sigma^2_{\eta,2}+\sigma^2_{\zeta,1}\\
p^{(12)}_3 &= 2v+\sigma^2_{\zeta,1}-\frac{v\left(3v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}\right)}{2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}} \to \sigma^2_{\varepsilon,1}+2\sigma^2_{\varepsilon,2}+\sigma^2_{\eta,1}+\sigma^2_{\zeta,1}\\
p^{(22)}_3 &= v+\sigma^2_{\zeta,1}+\sigma^2_{\zeta,2}-\frac{v^2}{2v+\sigma^2_{\eta,1}-\frac{v^2}{v+\sigma^2_{\varepsilon,1}}+\sigma^2_{\varepsilon,2}} \to \sigma^2_{\varepsilon,1}+\sigma^2_{\varepsilon,2}+\sigma^2_{\eta,1}+\sigma^2_{\zeta,1}+\sigma^2_{\zeta,2}\\
i_3 &\to y_3-2y_2+y_1\\
f_3 &\to \sigma^2_{\varepsilon,1}+4\sigma^2_{\varepsilon,2}+\sigma^2_{\varepsilon,3}+\sigma^2_{\eta,1}+\sigma^2_{\eta,2}+\sigma^2_{\zeta,1}\\
k^{(1)}_3 &\to \frac{2\sigma^2_{\varepsilon,1}+6\sigma^2_{\varepsilon,2}+2\sigma^2_{\eta,1}+\sigma^2_{\eta,2}+2\sigma^2_{\zeta,1}}{\sigma^2_{\varepsilon,1}+4\sigma^2_{\varepsilon,2}+\sigma^2_{\varepsilon,3}+\sigma^2_{\eta,1}+\sigma^2_{\eta,2}+\sigma^2_{\zeta,1}}\\
k^{(2)}_3 &\to \frac{\sigma^2_{\varepsilon,1}+2\sigma^2_{\varepsilon,2}+\sigma^2_{\eta,1}+\sigma^2_{\zeta,1}}{\sigma^2_{\varepsilon,1}+4\sigma^2_{\varepsilon,2}+\sigma^2_{\varepsilon,3}+\sigma^2_{\eta,1}+\sigma^2_{\eta,2}+\sigma^2_{\zeta,1}}
\end{aligned}
$$
In the limit, $f_3$ is just the variance of $i_3 \to y_3-2y_2+y_1 = \zeta_1-\eta_1+\eta_2+\varepsilon_1-2\varepsilon_2+\varepsilon_3$: the first two observations are entirely absorbed in estimating the two diffuse states.
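
These limits are easy to check numerically by running the exact recursions with a large but finite $v$ (again reusing the `kalman_filter_hpj` sketch from above; the variance values and tolerances are arbitrary):

```python
import numpy as np

v = 1e8                                  # large but finite "diffuse" variance
s2e = np.array([1.0, 2.0, 3.0])          # sigma2_eps,t for t = 1, 2, 3
s2h = np.array([0.5, 0.7, 0.0])          # sigma2_eta,t
s2z = np.array([0.2, 0.3, 0.0])          # sigma2_zeta,t
y = np.array([1.0, 2.0, 4.0])

kf = kalman_filter_hpj(y, s2e, s2h, s2z, p11=v, p12=0.0, p22=v)
f3_limit = s2e[0] + 4 * s2e[1] + s2e[2] + s2h[0] + s2h[1] + s2z[0]
assert np.isclose(kf["i"][2], y[2] - 2 * y[1] + y[0], rtol=1e-5)   # i_3
assert np.isclose(kf["f"][2], f3_limit, rtol=1e-5)                 # f_3
```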

Smoothing

The smoothing recursions start from $t=n$ and work backwards down to $t=1$. The following quantities are auxiliary for computing the smoothed values of $\alpha^{(1)}_t$ and their MSEs:
$$
\begin{aligned}
r^{(1)}_{n+1} &= 0, & r^{(2)}_{n+1} &= 0, & n^{(11)}_{n+1} &= 0, & n^{(12)}_{n+1} &= 0, & n^{(22)}_{n+1} &= 0,\\
e_n &= i_n/f_n, & d_n &= 1/f_n.
\end{aligned}
$$
For $t = n, n-1, \ldots, 1$, compute
$$
\begin{aligned}
r^{(1)}_t &= i_t/f_t + (1-k^{(1)}_t)r^{(1)}_{t+1} - k^{(2)}_t r^{(2)}_{t+1}\\
r^{(2)}_t &= r^{(1)}_{t+1} + r^{(2)}_{t+1}\\
n^{(11)}_t &= (1-k^{(1)}_t)^2 n^{(11)}_{t+1} - 2(1-k^{(1)}_t)k^{(2)}_t n^{(12)}_{t+1} + k^{(2)}_t k^{(2)}_t n^{(22)}_{t+1} + 1/f_t\\
n^{(12)}_t &= (1-k^{(1)}_t)\big(n^{(11)}_{t+1}+n^{(12)}_{t+1}\big) - k^{(2)}_t\big(n^{(12)}_{t+1}+n^{(22)}_{t+1}\big)\\
n^{(22)}_t &= n^{(11)}_{t+1} + 2n^{(12)}_{t+1} + n^{(22)}_{t+1}\\
e_{t-1} &= i_{t-1}/f_{t-1} - k^{(1)}_{t-1}r^{(1)}_t - k^{(2)}_{t-1}r^{(2)}_t\\
d_{t-1} &= 1/f_{t-1} + k^{(1)}_{t-1}k^{(1)}_{t-1}n^{(11)}_t + 2k^{(1)}_{t-1}k^{(2)}_{t-1}n^{(12)}_t + k^{(2)}_{t-1}k^{(2)}_{t-1}n^{(22)}_t
\end{aligned}
$$
The smoothed values of $\alpha^{(1)}_t$, that is, the Hodrick-Prescott filtered time series, and their mean squared errors are given by
$$
a^{(1)}_{t|n} = a^{(1)}_t + p^{(11)}_t r^{(1)}_t + p^{(12)}_t r^{(2)}_t,
\qquad
p^{(11)}_{t|n} = p^{(11)}_t - p^{(11)}_t p^{(11)}_t n^{(11)}_t - 2p^{(11)}_t p^{(12)}_t n^{(12)}_t - p^{(12)}_t p^{(12)}_t n^{(22)}_t.
$$
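
A Python sketch of these backward recursions (a hypothetical helper consuming the output of the `kalman_filter_hpj` sketch above; at missing observations $1/f_t = 0$, so they contribute nothing):

```python
import numpy as np

def kalman_smoother_hpj(kf):
    """Backward recursions; kf is the dictionary returned by kalman_filter_hpj."""
    n = len(kf["i"])
    r1 = r2 = n11 = n12 = n22 = 0.0   # r_{n+1} and the n_{n+1} terms are zero
    out = {k: np.zeros(n) for k in
           ("r1", "r2", "n11", "n12", "n22", "e", "d", "a1s", "p11s")}
    for t in range(n - 1, -1, -1):
        i, k1, k2 = kf["i"][t], kf["k1"][t], kf["k2"][t]
        finv = 0.0 if np.isinf(kf["f"][t]) else 1.0 / kf["f"][t]
        # e_t and d_t use r_{t+1} and n_{t+1}, i.e. the values before the update
        out["e"][t] = i * finv - k1 * r1 - k2 * r2
        out["d"][t] = finv + k1 * k1 * n11 + 2 * k1 * k2 * n12 + k2 * k2 * n22
        r1, r2 = i * finv + (1 - k1) * r1 - k2 * r2, r1 + r2
        n11, n12, n22 = ((1 - k1) ** 2 * n11 - 2 * (1 - k1) * k2 * n12
                         + k2 * k2 * n22 + finv,
                         (1 - k1) * (n11 + n12) - k2 * (n12 + n22),
                         n11 + 2 * n12 + n22)
        out["r1"][t], out["r2"][t] = r1, r2
        out["n11"][t], out["n12"][t], out["n22"][t] = n11, n12, n22
        # smoothed trend (the HP-filtered series) and its MSE
        p11, p12 = kf["p11"][t], kf["p12"][t]
        out["a1s"][t] = kf["a1"][t] + p11 * r1 + p12 * r2
        out["p11s"][t] = p11 - p11 * p11 * n11 - 2 * p11 * p12 * n12 - p12 * p12 * n22
    return out
```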

Weights for computing the effective degrees of freedom

Since the smoother is linear in the observations, the vector of smoothed $\alpha^{(1)}_t$, say $\mathbf{s}$, is just a linear transformation of the vector of observations $\mathbf{y}$: $\mathbf{s} = W\mathbf{y}$. The number of effective degrees of freedom is the trace of the weighting matrix $W$ (cf. Hastie, Tibshirani and Friedman, 2009, The Elements of Statistical Learning, Section 5.4.1). The formulae for computing such weights in a general state-space form can be found in Koopman and Harvey (2003), Journal of Economic Dynamics and Control, vol. 27. In our framework, the diagonal elements of the matrix $W$ are given by
$$
\begin{aligned}
w_{tt} = {}& p^{(11)}_t\Big(1/f_t + k^{(1)}_t k^{(1)}_t n^{(11)}_{t+1} + 2k^{(1)}_t k^{(2)}_t n^{(12)}_{t+1} + k^{(2)}_t k^{(2)}_t n^{(22)}_{t+1} - k^{(1)}_t n^{(11)}_{t+1} - k^{(2)}_t n^{(12)}_{t+1}\Big)\\
&- p^{(12)}_t\Big(k^{(1)}_t\big(n^{(11)}_{t+1} + n^{(12)}_{t+1}\big) + k^{(2)}_t\big(n^{(12)}_{t+1} + n^{(22)}_{t+1}\big)\Big).
\end{aligned}
$$
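
A sketch of the corresponding computation, reusing the two hypothetical helpers above (note that the $n^{(\cdot)}$ terms are taken at $t+1$, with $n^{(\cdot)}_{n+1}=0$):

```python
import numpy as np

def effective_dof(kf, ks):
    """Trace of W from the filter (kf) and smoother (ks) output dictionaries."""
    n = len(kf["i"])
    w = np.zeros(n)
    for t in range(n):
        finv = 0.0 if np.isinf(kf["f"][t]) else 1.0 / kf["f"][t]
        k1, k2 = kf["k1"][t], kf["k2"][t]
        # n terms at t+1 (zero for t = n)
        n11, n12, n22 = ((ks["n11"][t + 1], ks["n12"][t + 1], ks["n22"][t + 1])
                         if t < n - 1 else (0.0, 0.0, 0.0))
        w[t] = (kf["p11"][t] * (finv + k1 * k1 * n11 + 2 * k1 * k2 * n12
                                + k2 * k2 * n22 - k1 * n11 - k2 * n12)
                - kf["p12"][t] * (k1 * (n11 + n12) + k2 * (n12 + n22)))
    return w.sum()
```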

Analytical scores

The log-likelihood must be maximised with respect to a very large number of parameters (n+3). Thus, providing the numerical optimiser with analytical scores is important for stability and speed. Since all of our parameters are related to quantities in the disturbance covariance matrices, we can adapt the results in Koopman and Shephard (1992, Biometrika vol. 79).

Recall that our (slightly re-parametrised) model is
$$
\begin{aligned}
y_t &= \alpha^{(1)}_t + \varepsilon_t, &\qquad \varepsilon_t &\sim \mathrm{NID}(0, \sigma^2_\varepsilon),\\
\alpha^{(1)}_{t+1} &= \alpha^{(1)}_t + \alpha^{(2)}_t + \eta_t, &\qquad \eta_t &\sim \mathrm{NID}(0, \sigma^2_t),\\
\alpha^{(2)}_{t+1} &= \alpha^{(2)}_t + \zeta_t, &\qquad \zeta_t &\sim \mathrm{NID}(0, \sigma^2 + \gamma^2\sigma^2_t),
\end{aligned}
$$
where the parameters to estimate are $\sigma_\varepsilon$, $\sigma$, $\gamma$, and the sequence $\{\sigma_t\}_{t=1,\ldots,n}$, all of which are non-negative. Notice that in this parametrisation $\lambda = \sigma^2_\varepsilon/\sigma^2$.
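
In code, the mapping from these parameters to the disturbance variances of the basic model is immediate (a sketch with illustrative names, not the package's):

```python
import numpy as np

def variances_from_params(sigma_eps, sigma, gamma, sigma_t):
    """Map (sigma_eps, sigma, gamma, sigma_t) to the variances of the basic model."""
    n = len(sigma_t)
    sigma2_eps  = np.full(n, sigma_eps ** 2)              # var(eps_t)
    sigma2_eta  = sigma_t ** 2                            # var(eta_t)
    sigma2_zeta = sigma ** 2 + gamma ** 2 * sigma_t ** 2  # var(zeta_t)
    return sigma2_eps, sigma2_eta, sigma2_zeta            # lambda = sigma_eps**2 / sigma**2
```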

λ free

If $\lambda$ is not fixed and $\ell(\boldsymbol{\theta})$ represents the log-likelihood function, with $\boldsymbol\theta$ the vector of all the parameters, then
$$
\begin{aligned}
\frac{\partial \ell}{\partial \sigma_\varepsilon} &= \sigma_\varepsilon \sum_{t=1}^n (e_t e_t - d_t)\\
\frac{\partial \ell}{\partial \sigma} &= \sigma\sum_{t=1}^n (r^{(2)}_t r^{(2)}_t - n^{(22)}_t)\\
\frac{\partial \ell}{\partial \gamma} &= \gamma\sum_{t=1}^n (r^{(2)}_t r^{(2)}_t - n^{(22)}_t) \sigma^2_t\\
\frac{\partial \ell}{\partial \sigma_t} &= \Big(r^{(1)}_t r^{(1)}_t - n^{(11)}_t + (r^{(2)}_t r^{(2)}_t - n^{(22)}_t)\gamma^2\Big)\sigma_t
\end{aligned}
$$
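
Given the sequences $e_t$, $d_t$, $r^{(\cdot)}_t$ and $n^{(\cdot)}_t$ produced by the smoother sketch above, the scores are one-liners (again a sketch following the formulae above, not the package's code):

```python
import numpy as np

def scores_lambda_free(ks, sigma_eps, sigma, gamma, sigma_t):
    """Analytic scores (lambda free); ks is the kalman_smoother_hpj output."""
    de  = ks["e"] ** 2 - ks["d"]             # e_t e_t - d_t
    dr1 = ks["r1"] ** 2 - ks["n11"]          # r1_t r1_t - n11_t
    dr2 = ks["r2"] ** 2 - ks["n22"]          # r2_t r2_t - n22_t
    return np.r_[sigma_eps * de.sum(),                    # dl/dsigma_eps
                 sigma * dr2.sum(),                       # dl/dsigma
                 gamma * (dr2 * sigma_t ** 2).sum(),      # dl/dgamma
                 (dr1 + gamma ** 2 * dr2) * sigma_t]      # dl/dsigma_t, t = 1..n
```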

Generally, constrained optimisation problems also need the derivatives of the constraining function, which in our case is $g(\boldsymbol{\theta}) = \sum_{t=1}^n \sigma_t$: the solution to the regularised maximum likelihood problem must satisfy $g(\boldsymbol{\theta}) \leq M$. The derivatives are trivial:
$$
\frac{\partial g}{\partial\sigma_\varepsilon} = 0, \quad \frac{\partial g}{\partial\sigma} = 0, \quad \frac{\partial g}{\partial\gamma} = 0, \quad \frac{\partial g}{\partial\sigma_t} = 1.
$$
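
For illustration, this is how the analytic gradient and the constraint (with its trivial Jacobian) could be passed to an off-the-shelf optimiser such as SciPy's SLSQP; the objective below is a toy stand-in, not the actual log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

n, M = 20, 5.0
theta0 = np.full(n + 3, 0.1)                      # (sigma_eps, sigma, gamma, sigma_1..n)

def neg_loglik(theta):                            # toy stand-in for -l(theta)
    return 0.5 * np.sum((theta - 1.0) ** 2)

def neg_score(theta):                             # toy stand-in for the analytic scores
    return theta - 1.0

cons = {"type": "ineq",                           # SLSQP wants fun(theta) >= 0
        "fun": lambda th: M - th[3:].sum(),       # M - g(theta)
        "jac": lambda th: -np.r_[0.0, 0.0, 0.0, np.ones(n)]}
res = minimize(neg_loglik, theta0, jac=neg_score, method="SLSQP",
               constraints=[cons], bounds=[(0.0, None)] * (n + 3))
```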

λ fixed

If $\lambda$ is fixed, then $\sigma^2_\varepsilon = \lambda\sigma^2$ and, in the log-likelihood function $\ell(\boldsymbol{\theta})$, the vector of parameters $\boldsymbol\theta$ contains neither $\lambda$ nor $\sigma^2_\varepsilon$. The derivatives are now
$$
\begin{aligned}
\frac{\partial \ell}{\partial \sigma} &= \sigma\sum_{t=1}^n (r^{(2)}_t r^{(2)}_t - n^{(22)}_t) + \sigma\lambda\sum_{t=1}^n (e_t e_t - d_t) \\
\frac{\partial \ell}{\partial \gamma} &= \gamma\sum_{t=1}^n (r^{(2)}_t r^{(2)}_t - n^{(22)}_t)\sigma^2_t\\
\frac{\partial \ell}{\partial \sigma_t} &= \Big(r^{(1)}_t r^{(1)}_t - n^{(11)}_t + (r^{(2)}_t r^{(2)}_t - n^{(22)}_t)\gamma^2\Big)\sigma_t
\end{aligned}
$$
The derivatives of the constraining function are
$$
\frac{\partial g}{\partial\sigma} = 0, \quad \frac{\partial g}{\partial\gamma} = 0, \quad \frac{\partial g}{\partial\sigma_t} = 1.
$$