oo_.mean vs. converged unconditional mean in simulation
Posted: Mon Jan 18, 2016 7:06 pm
Hi all,
I thought oo_.mean was the unconditional mean of a stochastic model. Another way to get the unconditional mean should be to simulate a long time series, whose sample average converges to the unconditional mean. Is this right?
But how is oo_.mean actually calculated? If the two are the same thing, why is getting oo_.mean very quick while simulating a long series is very time consuming?
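For concreteness, the comparison I have in mind is roughly the following (just a sketch; mymodel.mod and the option values are placeholders, and I may not be using the relevant options):

% Run 1: theoretical moments. With the default periods=0, stoch_simul
% reports theoretical moments and stores the mean in oo_.mean.
% In mymodel.mod: stoch_simul(order=2, irf=0);
dynare mymodel noclearall
theoretical_mean = oo_.mean;                % one entry per endogenous variable (declaration order)

% Run 2: same model, but with periods>0 so that stoch_simul simulates it.
% The simulated series end up in oo_.endo_simul (variables in rows, time in columns).
% In mymodel.mod: stoch_simul(order=2, periods=100000, drop=1000, irf=0);
dynare mymodel noclearall
simulated_mean = mean(oo_.endo_simul, 2);   % time average of each simulated variable

disp([theoretical_mean simulated_mean])     % compare the two side by side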
Also, when I vary one parameter over a range of values, oo_.mean changes smoothly, while the end-of-simulation value varies jaggedly, and the two patterns differ.
It seems I cannot reconcile my understanding with what I got. Can anyone tell me what is wrong? For example, is it because I did not simulate long enough (I use irf=1000)? Is it a discrepancy between theory and approximation? Or some more fundamental mistake? Many thanks!
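The parameter sweep I describe is set up roughly like this (again only a sketch; rho is a placeholder parameter name, the mod file's stoch_simul is assumed to have periods>0 so a simulated path exists, and the stoch_simul(var_list_) calling convention is the Dynare 4.x one, which may differ in other versions):

% After an initial "dynare mymodel" run, loop over parameter values,
% re-solve the model, and record both oo_.mean and the final point of the simulation.
rho_grid   = 0.5:0.05:0.95;
n          = length(rho_grid);
mean_path  = NaN(M_.endo_nbr, n);
sim_ends   = NaN(M_.endo_nbr, n);

for i = 1:n
    set_param_value('rho', rho_grid(i));    % overwrite the parameter in M_.params
    info = stoch_simul(var_list_);          % re-solve and re-simulate (Dynare 4.x)
    if info(1) == 0                         % only store results if no error occurred
        mean_path(:, i) = oo_.mean;                  % mean reported by Dynare
        sim_ends(:, i)  = oo_.endo_simul(:, end);    % last point of the simulated path
    end
end

In my runs, mean_path moves smoothly across the grid while sim_ends is jagged.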