Hi otb,
otb wrote:Let me ask some related questions, and maybe I can get answers to my previous questions as well.
Should the moments generated from a single simulation approach the true theoretical moments as we increase periods?
Yes, this should be true if the DGP is stationary.
otb wrote:If so, why do some papers generate moments from multiple simulations each of which have a small number of periods (typically equal to the number of periods in the data)?
Let us consider a simple example. Suppose that our DGP is the following AR(1) stochastic process:
- Code: Select all
y_t = c + \rho y_{t-1} + \epsilon_t
where |\rho|<1 and {\epsilon_t} is a zero-mean white noise with variance \sigma^2, so that {y_t} is asymptotically second-order stationary. One can easily show that the asymptotic first- and second-order moments (the theoretical moments, in Dynare's vocabulary) are:
- Code: Select all
E_{\infty}[y_t] = \frac{c}{1-\rho}
V_{\infty}[y_t] = \frac{\sigma^2}{1-\rho^2}
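These asymptotic moments can be checked against a single long simulation. A minimal Python sketch, with illustrative parameter values and Gaussian innovations (both are assumptions added here; the argument above only requires white noise):

```python
import random

# AR(1): y_t = c + rho*y_{t-1} + eps_t, with illustrative parameters
# (c, rho, sigma are assumptions, not values from the discussion above).
c, rho, sigma = 1.0, 0.5, 1.0
random.seed(0)

T = 200_000  # one long simulation
y, samples = 0.0, []
for _ in range(T):
    y = c + rho * y + random.gauss(0.0, sigma)
    samples.append(y)

mean_hat = sum(samples) / T
var_hat = sum((s - mean_hat) ** 2 for s in samples) / T

mean_inf = c / (1 - rho)           # asymptotic mean  c/(1-rho)
var_inf = sigma**2 / (1 - rho**2)  # asymptotic variance  sigma^2/(1-rho^2)
```

With a sample this long, mean_hat and var_hat should land close to mean_inf and var_inf.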
Now suppose there is an initial condition, say y_0 = 0. The moments will then differ from the asymptotic moments because they depend on the initial condition. To see why, note that:
- Code: Select all
E[y_1] = c + \rho E[y_0] = c
V[y_1] = \rho^2 V[y_0] + \sigma^2 = \sigma^2
- Code: Select all
E[y_2] = c + \rho E[y_1] = (1 + \rho) \times c
V[y_2] = \rho^2 V[y_1] + \sigma^2 = (1 + \rho^2) \times \sigma^2
and more generally
- Code: Select all
E[y_t] = (1 + \rho + ... + \rho^{t-1}) \times c
V[y_t] = (1 + \rho^2 + ... + \rho^{2(t-1)}) \times \sigma^2
or equivalently:
- Code: Select all
E[y_t] = \frac{1-\rho^t}{1-\rho} \times c
V[y_t] = \frac{1-\rho^{2t}}{1-\rho^2} \times \sigma^2
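These finite-t formulas can be checked by Monte Carlo: simulate many short paths starting from y_0 = 0 and look at the cross-sectional moments at date t. A sketch with illustrative parameter values (Gaussian innovations are an extra assumption):

```python
import random

# Cross-sectional moments at date t across many simulations started at
# y_0 = 0 should match E[y_t] and V[y_t], not the asymptotic moments.
c, rho, sigma = 1.0, 0.9, 1.0  # illustrative values (assumptions)
t, N = 5, 100_000
random.seed(1)

draws = []
for _ in range(N):
    y = 0.0  # fixed initial condition
    for _ in range(t):
        y = c + rho * y + random.gauss(0.0, sigma)
    draws.append(y)

mean_hat = sum(draws) / N
var_hat = sum((x - mean_hat) ** 2 for x in draws) / N

mean_t = (1 - rho**t) / (1 - rho) * c                 # E[y_t]
var_t = (1 - rho**(2 * t)) / (1 - rho**2) * sigma**2  # V[y_t]
mean_inf = c / (1 - rho)                              # asymptotic mean
```

With these values the date-t mean (about 4.1) is still far from the asymptotic mean (10), which is exactly the initial-condition effect discussed above.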
Obviously, when t goes to infinity these moments converge to the asymptotic moments. The speed of convergence depends on the value of the autoregressive parameter: the closer \rho is to one, the slower the convergence to the asymptotic moments.
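The dependence on \rho can be made concrete: with y_0 = 0 the relative distance between E[y_t] and the asymptotic mean is exactly \rho^t, so the number of periods needed to bring it below a given tolerance explodes as \rho approaches one. A small sketch (the function name and the 1% tolerance are illustrative choices):

```python
def periods_to_converge(rho, tol=0.01):
    # With y_0 = 0, E[y_t] = (1 - rho**t)/(1 - rho) * c, so the relative
    # distance to the asymptotic mean c/(1 - rho) is exactly rho**t.
    t = 1
    while rho**t >= tol:
        t += 1
    return t

# rho = 0.5 needs only a handful of periods, rho = 0.99 needs hundreds.
```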
One can check that the formula for the expectation changes if we change the initial condition. For instance, if we set y_0 equal to E_{\infty}[y_t] instead of 0, the expectation becomes time invariant. More generally, if the initial condition is a random variable (with non-zero variance), the formula for the second-order moment is also affected.
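This can be verified numerically: when the innovations are Gaussian (an extra assumption here), the ergodic distribution is Gaussian with the asymptotic mean and variance, and drawing y_0 from it makes the moments of y_t time invariant. A sketch with illustrative parameters:

```python
import math
import random

c, rho, sigma = 1.0, 0.8, 1.0  # illustrative values (assumptions)
mean_inf = c / (1 - rho)
var_inf = sigma**2 / (1 - rho**2)
N, t = 100_000, 3
random.seed(2)

draws = []
for _ in range(N):
    # Initial condition drawn from the ergodic (stationary) distribution.
    y = random.gauss(mean_inf, math.sqrt(var_inf))
    for _ in range(t):
        y = c + rho * y + random.gauss(0.0, sigma)
    draws.append(y)

mean_hat = sum(draws) / N
var_hat = sum((x - mean_hat) ** 2 for x in draws) / N
# mean_hat and var_hat match the asymptotic moments already at small t.
```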
To sum up, simulated moments differ from asymptotic moments because of the influence of the initial condition on the moments (and also, obviously, because of sampling error).
Averaging over simulated paths starting from different initial conditions is one way to overcome this difficulty. Ideally we would draw the initial conditions from a distribution with expectation E_{\infty}[y_t] and
variance V_{\infty}[y_t], but this is not possible since we do not know these moments in the first place.
A single simulation with an infinite number of periods will give results different from the mean of an infinite number of simulations of length t < \infty (because the process is only
asymptotically second-order stationary), unless the initial conditions are drawn from the ergodic distribution of the stochastic process (which is unknown). A single simulation with an infinite number of periods will converge in probability to E_{\infty}[y_t]
and V_{\infty}[y_t], while the mean of an infinite number of simulations of length t will converge in probability to E[y_t] and V[y_t] (moments that depend on the assumption made about the distribution of the initial condition).
Best Regards,
Stéphane.