Dear Michel,
Btw, we're amazed by the quality and speed of the replies; they are a tremendous help!
Our question:
We run simul on a deterministic model with temporary shocks. Our model is already linearized in percent deviations from the steady state, so the steady-state values of all variables are zero.
We ran across three related phenomena (a minimal sketch of the kind of setup we mean is included at the end of this message):
1) If we specify X simulation periods, we get X+2 simulated values.
2) If we specify fewer simulation periods than the system seems to need to return "naturally" to the steady state, a kink appears in the simulated values in the last few periods, where the values drop to the steady state. Is this some sort of forced convergence?
3) If we first specify X simulation periods and then Y simulation periods (with Y > X), the value of a given variable in a period common to both samples differs across the two runs (it takes a higher value in the run with Y periods).
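
For concreteness, here is a minimal sketch of the kind of setup we have in mind. It is only a stylized placeholder, not our actual model: the variable names, equations, parameter values, and shock size below are made up for illustration; only the general structure (linearized model, zero steady state, temporary shock, simul with a given number of periods) matches what we do.

// Placeholder .mod file: linearized model in percent deviations
// from steady state, hit by a temporary shock in period 1.

var y c;        // endogenous variables, percent deviations from steady state
varexo e;       // temporary exogenous shock

parameters rho beta;
rho  = 0.9;
beta = 0.5;

model(linear);
y = rho*y(-1) + e;      // backward-looking block
c = beta*c(+1) + y;     // forward-looking block
end;

initval;                // steady state is zero for all variables
y = 0;
c = 0;
e = 0;
end;

steady;

shocks;                 // temporary shock, period 1 only
var e;
periods 1;
values 0.1;
end;

simul(periods=40);      // "X" periods; we then rerun with a larger
                        // number of periods ("Y") and compare the paths

For point 3, what we compare is, for example, the path obtained with simul(periods=40) against the first 40 periods of the path obtained with simul(periods=100), period by period.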
Thanks for your help.