by jpfeifer » Wed Oct 22, 2014 5:16 pm
There are two things to distinguish.
1. The approximation error of using decision rules that are valid close to the steady state (which determines rho) for simulating time series far away from the steady state. This is the thing you seem to be concerned about. It is not clear a priori how severe this issue is. If your true policy function is a second order polynomial, a first order approximation will be poor once you move away from the approximation point. However, the second order approximation will recover the whole policy function and will be accurate everywhere. Approximating at the steady state won't be a problem in this case. The rho would be correct.
2. I was pointing out that in addition to having the "wrong" rho when looking at a different point in the state space, you are also missing additional higher-order terms that you are most probably interested in.
Bottom line: you need to consider both approximation errors. Conceptually, the second one is more problematic for the question you seem to be asking, but admittedly the first one could be more severe in practice.