Policy functions with third-order approximation
Posted: Tue Oct 25, 2016 5:35 pm
Hi,
I am running a DSGE model with a third-order approximation (Dynare 4.4.3). After obtaining the policy functions,
I simulate the variables with intentionally provided shocks (e.g., a one-time shock at time 0) using the simult_ function.
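To show what I mean, here is roughly what my simulation call looks like (the variable names such as nperiods are mine, and I may be misusing the interface, so please correct me if the call itself is already wrong):

  % roughly my simulation code, run after solving the .mod file at order 3
  % M_, oo_ and options_ are the structures Dynare leaves in the workspace
  nperiods = 200;                                 % simulation length (arbitrary here)
  ex_ = zeros(nperiods, M_.exo_nbr);              % shock matrix: all zeros ...
  ex_(1, 1) = 1;                                  % ... except a one-time shock to the first exogenous variable at t = 1
  y0 = oo_.dr.ys;                                 % start from the deterministic steady state
  y_ = simult_(y0, oo_.dr, ex_, options_.order);  % simulate with the order-3 decision rule

Setting ex_ entirely to zeros is what I call the "no shocks" case below.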
One problem I encountered is that the simulated series do not converge to their steady-state values.
Even stranger, a simulation with no shocks provided (just like the deterministic case)
does not even stay at the steady state, although in theory it should.
After quite a bit of investigation, it seems this problem could be due to the g_0 term in the policy function,
which captures the uncertainty correction.
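To be explicit about the term I mean, I think of the approximated decision rule as something like the following (my own generic notation, not Dynare's internal names):

  y_t = \bar{y} + g_0(\sigma) + g_x (x_{t-1} - \bar{x}) + g_u u_t + \text{second- and third-order terms in } (x_{t-1} - \bar{x}) \text{ and } u_t

where \bar{y}, \bar{x} denote the deterministic steady state and g_0(\sigma) is the constant correction for future uncertainty (it is zero at first order). If I understand correctly, because g_0 is nonzero at orders 2 and 3, iterating this rule with u_t = 0 settles at a fixed point that differs from \bar{y}, which seems to match what I observe.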
My question is:
Is there a way to adjust the mean of the simulated series to the steady state?
By the way, shifting the simulated series down by the gap between the steady state and the mean of the simulated series does not solve the problem.
Any help will be appreciated, and please let me know if I need to clarify the problem in more detail.