You are confusing something here. The moments are reported in model units. If you perform a linearization rather than a log-linearization, everything is in absolute values. Say output is measured in apples and you get a standard deviation of 0.01: this means output fluctuates by 0.01 apples. If you did a log-linearization instead, everything is in log units, so an output standard deviation of 0.01 means 0.01 log points, and since exp(0.01) ≈ 1.01 that is approximately a 1% deviation.
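In Dynare terms the difference is purely in how you enter the model. Here is a minimal toy sketch (not your model: y, alpha, and the production function are made up for illustration, while z, eps_z, and rho follow the process discussed below). Because the logs are entered via exp(), the first-order solution is a log-linearization and the reported moments are in log points; declaring Y and Z in levels and writing Y = Z^alpha; instead would give moments in level units ("apples").
Code:
// Toy example, purely illustrative: y and z are the LOGS of output and TFP.
var y z;
varexo eps_z;
parameters rho alpha;
rho   = 0.9;     // persistence of log TFP (made-up value)
alpha = 0.3;     // curvature of the toy production function (made-up value)

model;
  exp(y) = exp(z)^alpha;        // production function, variables entered in logs
  z = rho*z(-1) + eps_z;        // AR(1) for log TFP
end;

steady_state_model;
  z = 0;
  y = 0;
end;

shocks;
  var eps_z; stderr 0.01;       // 0.01 log points, i.e. about 1 percent
end;

stoch_simul(order=1, irf=0);    // theoretical moments, reported in log points

With this file, the standard deviation of y printed by stoch_simul can be read directly as (approximately) a percentage deviation from steady state.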
That is the short answer. The longer story is that most people use first-order approximations, where the coefficients of the decision rules are invariant to the size of the shocks. If you specify a log process for TFP
Code:
z = rho*z(-1) + eps_z;   // law of motion for log TFP
and want the TFP shock to have a standard deviation of 1%, the correct specification is 0.01 log points:
Code:
shocks;
var eps_z; stderr 0.01;   // 0.01 log points, i.e. about 1 percent
end;
This will be correct at all orders of approximation. Say this shock size leads to an output standard deviation of 1%. If you perform a log-linearization, this shows up as a standard deviation of 0.01 log points.
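For instance, continuing the toy file above (a sketch; these are standard stoch_simul options), nothing about the shock calibration needs to change when you raise the approximation order:
Code:
stoch_simul(order=1, irf=0);   // first order: log-linear solution, moments in log points
stoch_simul(order=2, irf=0);   // second order: the stderr 0.01 calibration is still correctly sized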
But at first order, due to certainty equivalence, you can also say
Code:
shocks;
var eps_z; stderr 1;
end;
and interpret this as 1 percent. This change in the shock's standard deviation scales everything up by a factor of 100: the output standard deviation will now be 1 log point. But if you interpret everything in percent, this is still correct: a TFP shock with a standard deviation of 1% leads to an output standard deviation of 1%. You have only changed the interpretation of the numbers. However, at higher orders this shortcut is wrong, because certainty equivalence no longer holds and the actual size of the shock variance matters for the solution.
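As a concrete check (again a sketch based on the toy file above; the overwrite option of the shocks block is available in recent Dynare versions), replace the end of that file with the following. At first order, every standard deviation reported in the second run is exactly 100 times the one in the first run, so reading the second run "in percent" gives the same answer:
Code:
shocks;
  var eps_z; stderr 0.01;       // 1% TFP shock, stated in log points
end;
stoch_simul(order=1, irf=0);    // baseline: standard deviations in log points

shocks(overwrite);              // discard the previous shocks block
  var eps_z; stderr 1;          // the "1 means 1 percent" shortcut
end;
stoch_simul(order=1, irf=0);    // every standard deviation is exactly 100 times larger

// At order=2 and above this is no longer just a relabeling: the shock variance
// enters the decision rules, so only the stderr 0.01 calibration gives
// correctly sized results.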