
State Space Time Series Analysis Siem Jan Koopman http://staff.feweb.vu.nl/koopman

Department of Econometrics VU University Amsterdam Tinbergen Institute 2011

State Space Time Series Analysis – p. 1

Classical Decomposition

A basic model for representing a time series is the additive model

yt = µt + γt + εt ,    t = 1, . . . , n,

also known as the Classical Decomposition: yt = observation, µt = slowly changing component (trend), γt = periodic component (seasonal), εt = irregular component (disturbance). In a Structural Time Series Model (STSM) or Unobserved Components Model (UCM), the components on the right-hand side are modelled explicitly as stochastic processes.


Local Level Model

• Components can be deterministic functions of time (e.g. polynomials) or stochastic processes;
• Deterministic example: yt = µ + εt with εt ∼ NID(0, σε2).
• Stochastic example: the Random Walk plus Noise, or Local Level model:

yt = µt + εt ,    εt ∼ NID(0, σε2),
µt+1 = µt + ηt ,    ηt ∼ NID(0, ση2).

• The disturbances εt and ηs are independent for all s, t;
• The model is incomplete without a specification for µ1 (note the non-stationarity): µ1 ∼ N(a, P).


Local Level Model

yt = µt + εt ,    εt ∼ NID(0, σε2),
µt+1 = µt + ηt ,    ηt ∼ NID(0, ση2),
µ1 ∼ N(a, P).

• The level µt and the irregular εt are unobserved;
• Parameters: σε2, ση2;
• Trivial special cases:
  ◦ ση2 = 0 =⇒ yt ∼ NID(µ1, σε2) (white noise with a constant level);
  ◦ σε2 = 0 =⇒ yt+1 = yt + ηt (pure random walk);
• The Local Level model is a model representation for EWMA forecasting.
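These recursions are straightforward to simulate. Below is a minimal sketch in Python/NumPy; the function name and its parameters are illustrative, not from the slides:

```python
import numpy as np

def simulate_local_level(n, sigma_eps2, sigma_eta2, mu1=0.0, seed=0):
    """Simulate y_t = mu_t + eps_t with mu_{t+1} = mu_t + eta_t."""
    rng = np.random.default_rng(seed)
    mu = np.empty(n)
    mu[0] = mu1
    eta = rng.normal(0.0, np.sqrt(sigma_eta2), n)
    for t in range(n - 1):
        mu[t + 1] = mu[t] + eta[t]          # random-walk level
    eps = rng.normal(0.0, np.sqrt(sigma_eps2), n)
    return mu + eps, mu                     # observations, latent level

# Special case sigma_eta2 = 0: the level stays at mu1 (white noise around it)
y, mu = simulate_local_level(200, sigma_eps2=1.0, sigma_eta2=0.0, mu1=5.0)
assert np.allclose(mu, 5.0)
```

With ση2 = 0 the simulated level never moves, matching the first trivial special case above.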


Local Linear Trend Model

The LLT model extends the LL model with a slope:

yt = µt + εt ,    εt ∼ NID(0, σε2),
µt+1 = βt + µt + ηt ,    ηt ∼ NID(0, ση2),
βt+1 = βt + ξt ,    ξt ∼ NID(0, σξ2).

• All disturbances are independent at all lags and leads;
• Initial distributions for β1 and µ1 need to be specified;
• If σξ2 = 0, the trend is a random walk with constant drift β1 (for β1 = 0 the model reduces to the LL model);
• If additionally ση2 = 0, the trend is a straight line with slope β1 and intercept µ1;
• If σξ2 > 0 but ση2 = 0, the trend is a smooth curve, also called an Integrated Random Walk.


Trend and Slope in LLT Model

[Figure: simulated level µ (top panel) and slope β (bottom panel), t = 0, . . . , 100.]

Trend and Slope in Integrated Random Walk Model

[Figure: simulated level µ (top panel) and slope β (bottom panel), t = 0, . . . , 100.]

Local Linear Trend Model

• The reduced form of the LLT model is ARIMA(0,2,2);
• The LLT model provides a model for Holt-Winters forecasting;
• The smooth LLT model provides a model for spline-fitting;
• Smoother trends: higher-order Random Walks, ∆^d µt = ηt.
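The ARIMA(0,2,2) reduced form can be checked by simulation: the second differences of an LLT series are stationary with autocorrelation that vanishes beyond lag 2. A hedged sketch (parameter values and helper name are my own):

```python
import numpy as np

# Simulate an LLT series and inspect the ACF of its second differences.
rng = np.random.default_rng(42)
n = 100_000
eta = rng.normal(0, 0.5, n)   # level disturbance
xi = rng.normal(0, 0.2, n)    # slope disturbance
eps = rng.normal(0, 1.0, n)   # irregular
beta = np.cumsum(xi)          # beta_{t+1} = beta_t + xi_t
mu = np.cumsum(beta + eta)    # mu_{t+1} = mu_t + beta_t + eta_t
y = mu + eps
d2y = np.diff(y, 2)           # second differences: an MA(2) process

def acf(x, k):
    x = x - x.mean()
    return (x[k:] * x[:-k]).sum() / (x * x).sum()

# lags 1 and 2 are non-zero; from lag 3 onwards the ACF should be near zero
assert abs(acf(d2y, 3)) < 0.02
assert abs(acf(d2y, 4)) < 0.02
```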


Seasonal Effects

We have seen specifications for µt in the basic model yt = µt + γt + εt. Now we consider the seasonal term γt. Let s denote the number of ‘seasons’ in the data:
• s = 12 for monthly data,
• s = 4 for quarterly data,
• s = 7 for daily data when modelling a weekly pattern.


Dummy Seasonal

The simplest way to model seasonal effects is by using dummy variables. The effect summed over the seasons should equal zero:

γt+1 = −(γt + γt−1 + · · · + γt−s+2).

To allow the pattern to change over time, we introduce a new disturbance term:

γt+1 = −(γt + γt−1 + · · · + γt−s+2) + ωt ,    ωt ∼ NID(0, σω2).

The expectation of the sum of the seasonal effects is zero.
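The zero-sum constraint can be verified directly: starting the recursion from s − 1 initial effects, any s consecutive deterministic seasonal effects sum to zero. A small illustrative sketch (the helper name is hypothetical):

```python
import numpy as np

def dummy_seasonal(init, n):
    """Deterministic dummy seasonal: gamma_{t+1} = -(sum of previous s-1 effects).
    init holds the s-1 initial effects gamma_1, ..., gamma_{s-1}."""
    s = len(init) + 1
    gamma = list(init)
    while len(gamma) < n:
        gamma.append(-sum(gamma[-(s - 1):]))
    return np.array(gamma[:n])

g = dummy_seasonal([1.0, -0.5, 2.0], 40)   # s = 4, i.e. quarterly data
sums = np.array([g[t:t + 4].sum() for t in range(36)])
assert np.allclose(sums, 0.0)              # every window of s effects sums to zero
```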


Trigonometric Seasonal

Defining γjt as the effect of season j at time t, an alternative specification for the seasonal pattern is

γt = Σ_{j=1}^{[s/2]} γjt ,

γj,t+1 = γjt cos λj + γ∗jt sin λj + ωjt ,
γ∗j,t+1 = −γjt sin λj + γ∗jt cos λj + ω∗jt ,
ωjt , ω∗jt ∼ NID(0, σω2),    λj = 2πj/s.

• Without the disturbances, the trigonometric specification is identical to the deterministic dummy specification.
• The autocorrelation in the trigonometric specification lasts through more lags: changes occur in a smoother way.
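Without disturbances, each harmonic recursion is a plane rotation by λj, so the pattern repeats with period s and sums to zero over a full cycle. A sketch under these assumptions (function name and initial values are illustrative):

```python
import numpy as np

def trig_seasonal(s, init, n):
    """Deterministic trigonometric seasonal; init holds (gamma_j, gamma*_j)
    pairs for j = 1, ..., floor(s/2)."""
    h = s // 2
    g = np.array([p[0] for p in init], float)
    gs = np.array([p[1] for p in init], float)
    lam = 2 * np.pi * np.arange(1, h + 1) / s
    out = []
    for _ in range(n):
        out.append(g.sum())                      # gamma_t = sum over harmonics
        g, gs = (g * np.cos(lam) + gs * np.sin(lam),
                 -g * np.sin(lam) + gs * np.cos(lam))
    return np.array(out)

gam = trig_seasonal(4, [(1.0, 0.3), (0.8, 0.0)], 24)
assert np.allclose(gam[:4], gam[4:8])            # pattern repeats with period s
assert abs(gam[:4].sum()) < 1e-9                 # sums to zero over a full cycle
```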


Seatbelt Law

[Figure: time series plot, 1970-1985; values between about 7.0 and 7.9.]

State Space Model

The Linear Gaussian state space model (LGSSM) is defined in three parts:

→ State equation: αt+1 = Tt αt + Rt ζt ,    ζt ∼ NID(0, Qt),
→ Observation equation: yt = Zt αt + εt ,    εt ∼ NID(0, Ht),
→ Initial state distribution: α1 ∼ N(a1, P1).

Notice that
• ζt and εs are independent for all t, s, and independent from α1;
• the observation yt can be multivariate;
• the state vector αt is unobserved;
• the matrices Tt, Zt, Rt, Qt, Ht determine the structure of the model.
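The three parts translate directly into a simulation routine. A minimal sketch for time-invariant system matrices; the function name and its interface are my own:

```python
import numpy as np

def simulate_lgssm(n, T, Z, R, Q, H, a1, P1, seed=0):
    """Draw y_1..y_n and alpha_1..alpha_n from the LGSSM with
    time-invariant system matrices T, Z, R, Q, H."""
    rng = np.random.default_rng(seed)
    alpha = rng.multivariate_normal(a1, P1)          # alpha_1 ~ N(a1, P1)
    ys, alphas = [], []
    for _ in range(n):
        alphas.append(alpha)
        eps = rng.multivariate_normal(np.zeros(Z.shape[0]), H)
        ys.append(Z @ alpha + eps)                   # observation equation
        zeta = rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
        alpha = T @ alpha + R @ zeta                 # state equation
    return np.array(ys), np.array(alphas)

# degenerate local level as a check: Q = 0 and P1 = 0 pin the state at a1
y, a = simulate_lgssm(50, T=np.eye(1), Z=np.eye(1), R=np.eye(1),
                      Q=np.array([[0.0]]), H=np.array([[1.0]]),
                      a1=np.zeros(1), P1=np.zeros((1, 1)))
assert a.std() == 0.0                                # the state never moves
```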

State Space Model

• The state space model is linear and Gaussian: therefore the properties and results of the multivariate normal distribution apply;
• The state vector αt evolves as a VAR(1) process;
• The system matrices usually contain unknown parameters;
• Estimation therefore has two aspects:
  ◦ measuring the unobservable state (prediction, filtering and smoothing);
  ◦ estimation of unknown parameters (maximum likelihood estimation);
• State space methods offer a unified approach to a wide range of models and techniques: dynamic regression, ARIMA, UC models, latent variable models, spline-fitting and many ad hoc filters;
• Next, some well-known model specifications in state space form ...


Regression with Time Varying Coefficients

General state space model:

αt+1 = Tt αt + Rt ζt ,    ζt ∼ NID(0, Qt),
yt = Zt αt + εt ,    εt ∼ NID(0, Ht).

Put the regressors in Zt and set Tt = I, Rt = I. The result is a regression model with coefficient vector αt following a random walk.


ARMA in State Space Form

Example: AR(2) model yt+1 = φ1 yt + φ2 yt−1 + ζt in state space:

αt+1 = Tt αt + Rt ζt ,    ζt ∼ NID(0, Qt),
yt = Zt αt + εt ,    εt ∼ NID(0, Ht),

with 2 × 1 state vector αt and system matrices:

Zt = [1 0],    Ht = 0,
Tt = [φ1 1; φ2 0],    Rt = [1; 0],    Qt = σ2.

• Zt and Ht = 0 imply that α1t = yt;
• The first state equation implies yt+1 = φ1 yt + α2t + ζt with ζt ∼ NID(0, σ2);
• The second state equation implies α2,t+1 = φ2 yt.
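That this companion form reproduces the AR(2) recursion can be checked numerically by feeding the same disturbances through both representations; a sketch with illustrative parameter values:

```python
import numpy as np

phi1, phi2, sigma = 0.5, 0.3, 1.0
Z = np.array([1.0, 0.0])
T = np.array([[phi1, 1.0], [phi2, 0.0]])
R = np.array([1.0, 0.0])

rng = np.random.default_rng(1)
n = 200
zeta = rng.normal(0.0, sigma, n)

# state space recursion: y_t = Z alpha_t, alpha_{t+1} = T alpha_t + R zeta_t
alpha = np.zeros(2)
y_ss = np.empty(n)
for t in range(n):
    y_ss[t] = Z @ alpha
    alpha = T @ alpha + R * zeta[t]

# direct AR(2) recursion y_{t+1} = phi1 y_t + phi2 y_{t-1} + zeta_t,
# with start-up matching the zero initial state
y = np.zeros(n)
y[1] = zeta[0]
for t in range(1, n - 1):
    y[t + 1] = phi1 * y[t] + phi2 * y[t - 1] + zeta[t]

assert np.allclose(y_ss, y)   # both representations generate the same series
```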

ARMA in State Space Form

Example: MA(1) model yt+1 = ζt + θζt−1 in state space:

αt+1 = Tt αt + Rt ζt ,    ζt ∼ NID(0, Qt),
yt = Zt αt + εt ,    εt ∼ NID(0, Ht),

with 2 × 1 state vector αt and system matrices:

Zt = [1 0],    Ht = 0,
Tt = [0 1; 0 0],    Rt = [1; θ],    Qt = σ2.

• Zt and Ht = 0 imply that α1t = yt;
• The first state equation implies yt+1 = α2t + ζt with ζt ∼ NID(0, σ2);
• The second state equation implies α2,t+1 = θζt.


ARMA in State Space Form

Example: ARMA(2,1) model yt = φ1 yt−1 + φ2 yt−2 + ζt + θζt−1 in state space form:

αt = [yt; φ2 yt−1 + θζt],
Zt = [1 0],    Ht = 0,
Tt = [φ1 1; φ2 0],    Rt = [1; θ],    Qt = σ2.

All ARIMA(p, d, q) models have a (non-unique) state space representation.


UC models in State Space Form

State space model: αt+1 = Tt αt + Rt ζt ,    yt = Zt αt + εt.

LL model ∆µt+1 = ηt and yt = µt + εt:

αt = µt ,    Tt = 1,    Rt = 1,    Zt = 1,    Qt = ση2,    Ht = σε2.

LLT model ∆µt+1 = βt + ηt ,    ∆βt+1 = ξt and yt = µt + εt:

αt = [µt; βt],    Tt = [1 1; 0 1],    Rt = [1 0; 0 1],
Zt = [1 0],    Qt = [ση2 0; 0 σξ2],    Ht = σε2.
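The LLT system matrices can be checked by running the state recursion and the model equations side by side; a small sketch with my own variable names and parameter values:

```python
import numpy as np

# LLT system matrices from the slide: alpha_t = (mu_t, beta_t)'
T = np.array([[1.0, 1.0], [0.0, 1.0]])
R = np.eye(2)
Z = np.array([1.0, 0.0])

rng = np.random.default_rng(3)
n = 100
eta = rng.normal(0, 0.5, n)
xi = rng.normal(0, 0.2, n)

# state recursion: alpha_{t+1} = T alpha_t + R (eta_t, xi_t)'
alpha = np.zeros(2)
mu_ss = np.empty(n)
for t in range(n):
    mu_ss[t] = Z @ alpha
    alpha = T @ alpha + R @ np.array([eta[t], xi[t]])

# direct LLT recursions: mu_{t+1} = mu_t + beta_t + eta_t, beta_{t+1} = beta_t + xi_t
mu = np.zeros(n)
beta = 0.0
for t in range(n - 1):
    mu[t + 1] = mu[t] + beta + eta[t]
    beta = beta + xi[t]

assert np.allclose(mu_ss, mu)   # identical trend paths
```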


UC models in State Space Form

State space model: αt+1 = Tt αt + Rt ζt ,    yt = Zt αt + εt.

LLT model with season (shown for s = 4): ∆µt+1 = βt + ηt ,    ∆βt+1 = ξt ,    S(L)γt+1 = ωt and yt = µt + γt + εt:

αt = [µt βt γt γt−1 γt−2]′,

Tt = [1 1  0  0  0;
      0 1  0  0  0;
      0 0 −1 −1 −1;
      0 0  1  0  0;
      0 0  0  1  0],

Rt = [1 0 0; 0 1 0; 0 0 1; 0 0 0; 0 0 0],
Qt = [ση2 0 0; 0 σξ2 0; 0 0 σω2],
Zt = [1 0 1 0 0],    Ht = σε2.


Kalman Filter

• The Kalman filter calculates the mean and variance of the unobserved state, given the observations.
• The state is Gaussian: the complete distribution is characterized by the mean and variance.
• The filter is a recursive algorithm; the current best estimate is updated whenever a new observation is obtained.
• To start the recursion, we need a1 and P1, which we assumed given.
• There are various ways to initialize when a1 and P1 are unknown, which we will not discuss here.


Kalman Filter

The unobserved state αt can be estimated from the observations with the Kalman filter:

vt = yt − Zt at ,
Ft = Zt Pt Zt′ + Ht ,
Kt = Tt Pt Zt′ Ft−1 ,
at+1 = Tt at + Kt vt ,
Pt+1 = Tt Pt Tt′ + Rt Qt Rt′ − Kt Ft Kt′ ,

for t = 1, . . . , n, starting with given values for a1 and P1.

• Writing Yt = {y1, . . . , yt}: at+1 = E(αt+1 |Yt),    Pt+1 = var(αt+1 |Yt).
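The recursions above translate line by line into code. A hedged sketch for scalar observations; the function and its interface are my own, and a1, P1 are assumed given as on the slide:

```python
import numpy as np

def kalman_filter(y, Z, H, T, R, Q, a1, P1):
    """Kalman filter recursions from the slides, scalar observations.
    Shapes: Z (m,), H scalar, T (m,m), R (m,r), Q (r,r), a1 (m,), P1 (m,m)."""
    n = len(y)
    a, P = a1.astype(float), P1.astype(float)
    v = np.empty(n)
    F = np.empty(n)
    apred = np.empty((n, len(a1)))
    for t in range(n):
        apred[t] = a                        # a_t = E(alpha_t | Y_{t-1})
        v[t] = y[t] - Z @ a                 # prediction error
        F[t] = Z @ P @ Z + H                # prediction error variance
        K = T @ P @ Z / F[t]                # Kalman gain
        a = T @ a + K * v[t]
        P = T @ P @ T.T + R @ Q @ R.T - np.outer(K, K) * F[t]
    return v, F, apred

# sanity check on the local level model with H = 0: the gain is 1,
# so the one-step prediction a_{t+1} equals y_t exactly
y = np.array([1.0, 3.0, 2.0, 5.0])
v, F, apred = kalman_filter(y, Z=np.ones(1), H=0.0, T=np.eye(1), R=np.eye(1),
                            Q=np.eye(1), a1=np.zeros(1), P1=np.eye(1))
assert np.allclose(apred[1:, 0], y[:-1])
```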


Kalman Filter

State space model: αt+1 = Tt αt + Rt ζt ,    yt = Zt αt + εt.

• Writing Yt = {y1, . . . , yt}, define at+1 = E(αt+1 |Yt), Pt+1 = var(αt+1 |Yt);
• The prediction error is vt = yt − E(yt |Yt−1) = yt − E(Zt αt + εt |Yt−1) = yt − Zt E(αt |Yt−1) = yt − Zt at;
• It follows that vt = Zt (αt − at) + εt and E(vt) = 0;
• The prediction error variance is Ft = var(vt) = Zt Pt Zt′ + Ht.


Lemma

The proof of the Kalman filter uses a lemma from multivariate Normal regression theory.

Lemma. Suppose x, y and z are jointly Normally distributed vectors with E(z) = 0 and Σyz = 0. Then

E(x|y, z) = E(x|y) + Σxz Σzz−1 z,
var(x|y, z) = var(x|y) − Σxz Σzz−1 Σxz′.


Multivariate local level model Seemingly Unrelated Time Series Equations model: yt = µt + εt ,

εt ∼ N ID(0, Σε ),

µt+1 = µt + ηt ,

ηt ∼ N ID(0, Ση ).

• The observations yt are p × 1 vectors;
• The disturbances εt and ηs are independent for all s, t;
• The p different time series are related through correlations in the disturbances.

For a full discussion, see Harvey and Koopman (1997). A scanned pdf version is available at http://staff.feweb.vu.nl/koopman under section “Publications”, subsection “Published articles as contributions to books”.


Multivariate LL Model The multivariate LL model is given by yt = µt + εt ,

εt ∼ N ID(0, Σε ),

µt+1 = µt + ηt ,

ηt ∼ N ID(0, Ση ).

• First difference ∆yt = ηt−1 + ∆εt is stationary; • Reduced form: ∆yt is VMA(1) or VAR(∞);


Multivariate LL Model

• The stochastic properties are the multivariate analogues of the univariate case:

Γ0 = E(∆yt ∆yt′) = Ση + 2Σε ,
Γ1 = E(∆yt ∆yt−1′) = −Σε ,
Γτ = E(∆yt ∆yt−τ′) = 0,    τ ≥ 2;

• The unrestricted vector MA(1) process has p2 + p(p + 1)/2 parameters; the SUTSE model has p(p + 1);
• Such multivariate reduced form representations can also be established for more general models.
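These moment expressions are easy to verify by Monte Carlo; a sketch with illustrative parameter values (the simulation exploits ∆yt = ηt−1 + ∆εt directly):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 2, 400_000
Se = np.array([[1.0, 0.3], [0.3, 0.5]])    # Sigma_eps
Sh = np.array([[0.4, 0.1], [0.1, 0.2]])    # Sigma_eta

eps = rng.multivariate_normal(np.zeros(p), Se, n)
eta = rng.multivariate_normal(np.zeros(p), Sh, n)
dy = eta[:-1] + eps[1:] - eps[:-1]         # Delta y_t = eta_{t-1} + Delta eps_t

G0 = dy.T @ dy / len(dy)                   # sample E(dy_t dy_t')
G1 = dy[1:].T @ dy[:-1] / len(dy)          # sample E(dy_t dy_{t-1}')

assert np.allclose(G0, Sh + 2 * Se, atol=0.05)
assert np.allclose(G1, -Se, atol=0.05)
```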


Homogeneous Multivariate LL Model

The homogeneous multivariate LL model is given by

yt = µt + εt ,    εt ∼ NID(0, Σε),
µt+1 = µt + ηt ,    ηt ∼ NID(0, qΣε),

where q is a non-negative scalar, so that Ση = qΣε.

• The model is restricted: all series in yt have the same dynamic properties (the same acf).
• Not so relevant in practical work apart from forecasting: it is the model representation for exponentially weighted moving average (EWMA) forecasting of multiple time series.
• This can be generalised to more general components models.
• Easy to estimate: only a set of univariate Kalman filters is required.

Common Levels The common local level model is given by yt = µt + εt ,

εt ∼ N ID(0, Σε ),

µt+1 = µt + ηt ,

ηt ∼ N ID(0, Ση ),

where rank(Ση ) = r < p. • the model can be described by r underlying level components, the common levels; • Ση = AΣc A′ , A is p × r, Σc is r × r of full rank; • interpretation of A: factor loading matrix.


Common Levels The common local level model yt = µt + εt , µt+1 = µt + ηt ,

εt ∼ N ID(0, Σε ), ηt ∼ N ID(0, AΣc A′ ),

can be rewritten in terms of underlying levels: yt = a + Aµct + εt , µct+1 = µct + ηtc ,

ηtc ∼ N ID(0, Σc ),

so that µt = a + Aµct ,

ηt = Aηtc .


Common Levels For the common local level model yt = µt + εt , µt+1 = µt + ηt ,

εt ∼ N ID(0, Σε ), ηt ∼ N ID(0, AΣc A′ ),

notice that
• the decomposition Ση = AΣc A′ is not unique;
• identification restrictions: Σc diagonal, a Choleski decomposition, or principal components (possibly based on economic theory);
• a more interesting interpretation can be obtained by factor rotations;
• the model can be interpreted as dynamic factor analysis, see later.
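One concrete way to obtain a (non-unique) decomposition Ση = AΣc A′ is an eigendecomposition of Ση, keeping only the r non-zero eigenvalues. A sketch with a made-up rank-1 example:

```python
import numpy as np

# build a rank-deficient Sigma_eta from a known loading vector
A_true = np.array([[1.0], [0.5], [2.0]])
Sh = A_true @ A_true.T                    # rank(Sigma_eta) = 1 < p = 3

w, V = np.linalg.eigh(Sh)                 # eigenvalues in ascending order
r = int(np.sum(w > 1e-10))                # numerical rank
assert r == 1

A = V[:, -r:]                             # loadings: eigenvectors of the
Sc = np.diag(w[-r:])                      # non-zero eigenvalues
assert np.allclose(A @ Sc @ A.T, Sh)      # exact reconstruction of Sigma_eta
```

Any rotation A → AG with GG′ = I gives another valid decomposition, which is why identification restrictions or factor rotations are needed.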


Common components

Common dynamic factors:
• are useful for interpretation → cointegration;
• have consequences for inference and forecasting (the dimension of the parameter space is reduced);
• the common local level model can generally be represented as a VAR(∞) or VECM model; details can be provided upon request.


Multivariate components • So far, we have concentrated on multivariate variants of the local level model; • Similar considerations can be applied to other components such as the slope of the trend, seasonal and cycle components and other time-varying features in the multiple time series. • Harvey and Koopman (1997) review such extensions. • In particular, they define the similar cycle component, see Exercises.


Common and idiosyncratic factors

Multiple trends can also be decomposed into one common factor and multiple idiosyncratic factors:

yt = µt + εt ,    εt ∼ NID(0, Σε),
µt+1 = µt + ηt ,    ηt ∼ NID(0, Ση),

where Ση = δδ′ + Dη with vector δ and diagonal matrix Dη. This implies that the level can be represented by

µt = δµct + µ∗t ,    ηt = δηtc + ηt∗,

with common level (scalar) µct and “independent” levels µ∗t generated by

∆µct+1 = ηtc ∼ NID(0, 1),    ∆µ∗t+1 = ηt∗ ∼ NID(0, Dη).


Multivariate Kalman filter

The Kalman filter is valid for the general multivariate state space model, but it becomes computationally inconvenient when p is large. Each step of the Kalman filter requires the inversion of the p × p matrix Ft. This is no problem when p = 1 (univariate), but when p > 20, say, it slows down the Kalman filter considerably. However, we can treat each element of the p × 1 observation vector yt as a single realisation; in other words, we can “update” each element of yt separately within the Kalman filter. The arguments are given in the DK book §6.4. The same applies to smoothing.


Univariate treatment of the Kalman filter

• Consider the standard model yt = Zt αt + εt and αt+1 = Tt αt + Rt ηt , where Var(εt) = Ht is diagonal.
• The observation vector yt = (yt,1, . . . , yt,pt)′ is treated element by element: we view the observation model as a set of pt separate equations.
• We then have yt,i = Zt,i αt,i + εt,i with αt,i = αt for i = 1, . . . , pt.
• The associated transition equations become αt,i+1 = αt,i for i = 1, . . . , pt and αt+1,1 = Tt αt,pt + Rt ηt for t = 1, . . . , n.
• This disentangled model can be treated by the Kalman filter and smoother equations straightforwardly.
• Innovations are now relative to the past and the “previous” observations inside yt!
• A non-diagonal matrix Ht can be treated by a data transformation or by including εt in the state vector αt.
• More details in the DK book §6.4.
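The equivalence of the joint update and element-by-element updating (for diagonal Ht) can be checked for a single time step. A sketch in the spirit of DK §6.4, with my own variable names and random system matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 3, 2
Z = rng.normal(size=(p, m))
H = np.diag([0.5, 1.2])                    # diagonal H, as required
a = rng.normal(size=m)
P0 = rng.normal(size=(m, m))
P = P0 @ P0.T + np.eye(m)                  # positive definite state variance
y = rng.normal(size=p)

# joint update: one p-dimensional observation, invert the p x p matrix F
F = Z @ P @ Z.T + H
K = P @ Z.T @ np.linalg.inv(F)
a_joint = a + K @ (y - Z @ a)
P_joint = P - K @ Z @ P

# sequential scalar updates: one element of y_t at a time, no matrix inversion
a_s, P_s = a.copy(), P.copy()
for i in range(p):
    z = Z[i]
    f = z @ P_s @ z + H[i, i]              # scalar prediction error variance
    k = P_s @ z / f
    a_s = a_s + k * (y[i] - z @ a_s)
    P_s = P_s - np.outer(k, k) * f

assert np.allclose(a_s, a_joint)           # same filtered mean
assert np.allclose(P_s, P_joint)           # same filtered variance
```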


Exercise 1

Consider the common trends model of Harvey and Koopman (1997, §§9.4.1 and 9.4.2).
1. Put the common trends model with (possibly common) stochastic slopes, based on equations (21)-(23), in state space form.
2. Put the common trends model with (possibly common) stochastic slopes, based on equations (24)-(26), in state space form. Define all vectors and matrices precisely.
3. Discuss the generalisation to Ση ≠ 0 and the consequences for the state space formulation of the model as in 2.


Exercise 2

Consider the multivariate trend model of Harvey and Koopman (1997).
1. Consider a multiple data set of N time series yt. The aim is to decompose the time series into trend and stationary components. It is further required that the multiple trend can be decomposed into a single common trend (common to all N time series) and idiosyncratic trends (specific to the individual time series).
• Formulate a model for such a decomposition.
• Discuss the identification of the different trends.
• Express the model in state space form.
2. Once multiple trend models are expressed in state space form, we need to estimate the parameter coefficients of the model. Briefly describe some relevant issues of maximum likelihood estimation. Is it feasible? What problems can you expect? Any recommendations for a successful implementation?


Exercise 3

Consider the similar cycle model of Harvey and Koopman (1997) with observation equation

yt = ψt + εt ,    εt ∼ N(0, Σε),

where yt is a 3 × 1 observation vector and the cycle ψt represents a common similar cycle component of rank 2.
1. Provide the state space representation of this model.
2. Comment on the restrictive nature of the similar cycle model.
3. How would you modify the similar cycle model so that each time series in yt has a different cycle frequency λ?
4. Can you apply the univariate Kalman filter of DK §6.4 when Σε is diagonal? What if Σε is not diagonal? Give details.

