second moments


second moments

Postby DW916 » Sat Mar 12, 2016 9:50 am

Dear Johannes,

May I ask a question about the second moments of observed variables after estimation?
My model is a one-sector model without a specified trend. To remove the trend from trending variables such as output, I tried both a first-difference filter and a one-sided HP filter, separately.
For the first-difference filter, I demean the data, so output, consumption, investment and hours all have mean 0.
After estimation, the standard deviations of the observables implied by the model are much higher than the standard deviations in the data, although the relative standard deviations are similar. This confuses me a lot. Do you have any suggestions on this issue?

Thanks in advance,
Catherine

Re: second moments

Postby jpfeifer » Sun Mar 13, 2016 3:07 pm

Did you express everything in percentage deviations? Or did you keep the levels of the variables?
------------
Johannes Pfeifer
University of Cologne
https://sites.google.com/site/pfeiferecon/

Re: second moments

Postby DW916 » Mon Mar 14, 2016 1:37 am

jpfeifer wrote:Did you express everything in percentage deviations? Or did you keep the levels of the variables?


Hi, Johannes,

I log-linearized the model by hand, so every variable in the code represents a percentage deviation from its steady state (which is 0 for all variables).
The measurement equations for the demeaned growth-rate data are:
Code: Select all
y_obs=y-y(-1)+y_ME; // output growth, with measurement error y_ME
c_obs=c-c(-1);      // consumption growth
i_obs=i-i(-1);      // investment growth
h_obs=h;            // hours, log deviation from steady state


The measurement equations for the one-sided HP-filtered data (cyclical component) are:
Code: Select all
y_obs=y+y_ME; // output cyclical component, with measurement error y_ME
c_obs=c;      // consumption cyclical component
i_obs=i;      // investment cyclical component
h_obs=h;      // hours cyclical component


In both cases, the model second moments are much larger than the corresponding data second moments.
Many thanks,
Catherine

Re: second moments

Postby jpfeifer » Mon Mar 14, 2016 8:19 pm

How big are the differences? And how many shocks do you have? And are the data also percentage growth rates?
------------
Johannes Pfeifer
University of Cologne
https://sites.google.com/site/pfeiferecon/

Re: second moments

Postby DW916 » Mon Mar 14, 2016 11:44 pm

jpfeifer wrote:How big are the differences? And how many shocks do you have? And are the data also percentage growth rates?


I use four shocks: a sunspot shock, a transitory technology shock, a preference shock, and measurement error on output.
The data are percentage growth rates, for example
y_obs(t) = log(Ypc(t)) - log(Ypc(t-1)) - mean(log(Ypc(t)) - log(Ypc(t-1)))
h_obs(t) = log(Hpc(t)) - log(mean(Hpc))
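In MATLAB this corresponds roughly to the following (just a sketch; Ypc and Hpc are placeholder names for my per-capita output and hours series, and the factor 100 is only there if the series are expressed in percent):
Code: Select all
% sketch of the data construction; Ypc and Hpc are placeholder names
dly   = diff(log(Ypc));                    % log growth rate of per-capita output
y_obs = 100*(dly - mean(dly));             % demeaned growth rate, in percent
h_obs = 100*(log(Hpc) - log(mean(Hpc)));   % log hours relative to sample mean, in percent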

After estimation, for example, the standard deviation of y_obs is 0.87 in the data but 1.62 in the model, and the standard deviation of h_obs is 4.91 in the data but 15.87 in the model.

Re: second moments

Postby jpfeifer » Tue Mar 15, 2016 7:43 am

Unfortunately, this is not entirely unheard of. In Born/Peter/Pfeifer (2013): Fiscal news and macroeconomic volatility, we use the
Code: Select all
endogenous_prior
option for this reason. You should try whether this helps. If yes, the actual estimation might be fine.
------------
Johannes Pfeifer
University of Cologne
https://sites.google.com/site/pfeiferecon/

Re: second moments

Postby DW916 » Wed Mar 16, 2016 6:09 am

jpfeifer wrote:Unfortunately, this is not entirely unheard of. In Born/Peter/Pfeifer (2013): Fiscal news and macroeconomic volatility, we use the
Code: Select all
endogenous_prior
option for this reason. You should try whether this helps. If yes, the actual estimation might be fine.


Hi Johannes,

After using this option, my problem has been solved. Thank you so much!
Could you explain a little more about endogenous_prior? What is the mechanism by which it decreases the second moments, and is there any disadvantage to using it?
I also find that with this option the mode_check plots look somewhat different, especially for the persistence of the technology shock and the preference shock, as shown in the attached file.

Many thanks,
Catherine
Attachment: CheckPlots1.pdf

Re: second moments

Postby jpfeifer » Wed Mar 16, 2016 8:52 am

See the manual on this option and the reference therein. The Christiano/Trabandt/Walentin endogenous prior is based on "Bayesian learning". It updates a given prior using the first two sample moments.
------------
Johannes Pfeifer
University of Cologne
https://sites.google.com/site/pfeiferecon/

Re: second moments

Postby DW916 » Sat Mar 19, 2016 2:04 am

jpfeifer wrote:Unfortunately, this is not entirely unheard of. In Born/Peter/Pfeifer (2013): Fiscal news and macroeconomic volatility, we use the
Code: Select all
endogenous_prior
option for this reason. You should try whether this helps. If yes, the actual estimation might be fine.


Dear Johannes,

Could I ask two more questions related to this issue?

1. I see you answered someone else that "for the endogenous prior you need to specify a full prior", which I do not quite understand. I just want to make sure: should I use endogenous_prior like this?

Code: Select all
estimation(endogenous_prior, datafile=.....);

2. To avoid the measurement error hitting its upper bound, I increased the fourth parameter (the upper bound), so it no longer hits the bound, but at the expense of the measurement error accounting for too much of the variance decomposition. Is this worse than hitting the upper bound?
Many thanks.
Catherine

Re: second moments

Postby jpfeifer » Sat Mar 19, 2016 8:28 am

1. This refers to the fact that the endogenous prior option updates the normal prior you specify in the estimated_params-block using the data moments. For that reason you need to have both a full estimated_params-block and
Code: Select all
estimation(endogenous_prior, ...

2. There is no general rule for this. If you consider a high variance share of the measurement error to be a priori unlikely, I would use an informative prior for it: do not use an upper bound, but e.g. an inverse gamma distribution for the measurement error.
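As a rough sketch of what such a prior could look like (the numbers are purely illustrative, not a recommendation):
Code: Select all
estimated_params;
// measurement error std on output; prior mean and std below are only illustrative
stderr y_ME, inv_gamma_pdf, 0.005, inf;
// ... the remaining parameters, each with a full prior ...
end;

estimation(endogenous_prior, datafile=.....);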
------------
Johannes Pfeifer
University of Cologne
https://sites.google.com/site/pfeiferecon/

Re: second moments

Postby DW916 » Tue Mar 22, 2016 8:14 am

jpfeifer wrote:1. This refers to the fact that the endogenous prior option updates the normal prior you specify in the estimated_params-block using the data moments. For that reason you need to have both a full estimated_params-block and
Code: Select all
estimation(endogenous_prior, ...

2. There is no general rule for this. If you consider a high variance share of the measurement error to be a priori unlikely, I would use an informative prior for it: do not use an upper bound, but e.g. an inverse gamma distribution for the measurement error.


Hi Johannes,

I use the endogenous_prior option and set the fourth parameter (the upper bound) for the standard deviation of the output measurement error y_Me to 100% of the data standard deviation (people generally set 25% or 33%), with a uniform_pdf prior. The estimation results show that the posterior mean of the standard deviation of y_Me is about 50% of the corresponding data standard deviation, so I expected its share in the variance decomposition after stoch_simul to be about 0.5^2 = 25%. However, the command window shows it is actually 7%. Is this strange?

Many thanks,
Catherine

Re: second moments

Postby jpfeifer » Tue Mar 22, 2016 10:11 am

I don't know exactly what you are doing, but the variance decomposition is presumably the one at horizon infinity. In that case, persistence of the endogenous variables might play a role. Measurement error is typically i.i.d.
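If you want to see the measurement error share at short horizons, where the persistence of the endogenous variables matters less, you could also request a conditional variance decomposition, along the lines of (the horizons are just an example):
Code: Select all
stoch_simul(order=1, conditional_variance_decomposition=[1 4 8 40]) y_obs c_obs i_obs h_obs;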
------------
Johannes Pfeifer
University of Cologne
https://sites.google.com/site/pfeiferecon/

Re: second moments

Postby DW916 » Mon Mar 28, 2016 5:28 am

Dear Johannes,

The endogenous_prior option indeed helps the model second moments match the data second moments (both in growth rates). However, when I use one-sided HP-filtered or band-pass-filtered data, the model standard deviations are still larger (about 2-4 times) than the data standard deviations. Do you know why there is such a difference?

Thanks a lot!

Catherine

Re: second moments

Postby jpfeifer » Tue Mar 29, 2016 8:56 am

Which objects are you comparing? The theoretical model moments to filtered data? If yes, that comparison is wrong. You would have to look at filtered model variables as well.
More fundamentally, what you ask of your model is hard. You want it not only to match particular moments, but also in a particular frequency band. This is a tall order that most probably the model is not really capable of.
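A rough sketch of the type of comparison I mean (one_sided_hp_filter is just a placeholder for whatever filtering routine you applied to the data):
Code: Select all
% after e.g. stoch_simul(order=1, periods=10000);
iy    = strmatch('y_obs', M_.endo_names, 'exact');  % row of y_obs in the simulation
y_sim = oo_.endo_simul(iy, :)';                     % simulated series for y_obs
y_cyc = one_sided_hp_filter(y_sim, 1600);           % SAME filter as applied to the data (placeholder routine)
std(y_cyc)                                          % compare with the std of the filtered data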
------------
Johannes Pfeifer
University of Cologne
https://sites.google.com/site/pfeiferecon/

Re: second moments

Postby DW916 » Wed Mar 30, 2016 7:07 am

jpfeifer wrote:Which objects are you comparing? The theoretical model moments to filtered data? If yes, that comparison is wrong. You would have to look at filtered model variables as well.
More fundamentally, what you ask of your model is hard. You want it not only to match particular moments, but also in a particular frequency band. This is a tall order that most probably the model is not really capable of.


Thank you, Johannes. Your answer is very helpful; without it I would not have known that the comparison was wrong.

Could I ask a further question about this?
As far as I know, people generally compare either simulated model moments (obtained with the stoch_simul command and periods set to some integer after estimation) or theoretical model moments to filtered data moments.

If I would like to compare theoretical model moments to filtered data:
1. If I use one-sided HP-filtered data, should I use
Code: Select all
estimation(datafile=..., filtered_vars);
to compare the filtered variables' moments with the data moments,

or should I not use the filtered_vars option, but instead use
Code: Select all
stoch_simul(order=1, hp_filter=1600);
after estimation to get the model moments? (But the data are filtered with a one-sided filter, while the hp_filter option is two-sided?)

2. If I use band-pass-filtered data for the estimation, which you do not recommend, should I use the filtered_vars option, or should I instead use
Code: Select all
stoch_simul(order=1, periods=<same number of periods as in the data>);
after estimation, then band-pass filter the simulated data and compare its moments with the real data moments?

Many thanks,
Catherine
