2. Choosing a good prior distribution is an art. Ideally, you want to let the data speak as freely as possible by choosing a non-informative prior. However, the data (likelihood) often does not contain much information about a parameter. In this case, the ability to choose an informative prior is an important advantage of Bayesian methods, because it still allows estimating the model by incorporating prior information. There are attempts to give guidelines on choosing priors, e.g. http://www.econ.upenn.edu/~schorf/papers/dummyprior.pdf, but in general it is hard to give definite advice. Most importantly, the posterior assigns zero probability to parameter regions that are not in the support of the prior. So one should be careful about restricting the prior space unless there is good reason to. If you do not know the range of a parameter, choose a wide prior. In contrast, if you know that alpha is always between 0.25 and 0.4, there is no reason to use a wider prior.
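For illustration, a minimal sketch of how this might look in an estimated_params block (the parameter names, prior shapes, and numbers here are purely illustrative, not taken from your model):
- Code:
estimated_params;
// wide prior when little is known about the plausible range (values illustrative)
rho, beta_pdf, 0.5, 0.2;
// prior restricted to [0.25, 0.4] when alpha is known to lie in that range
alpha, uniform_pdf, , , 0.25, 0.4;
end;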
3. Sensitivity analysis would proceed in the way you describe. However, I would recommend using the syntax of the estimated_params block to specify starting values, e.g.:
- Code:
gamma, INITIAL_VALUE, , , inv_gamma_pdf, 2.9, 0.3;
Regarding the estimation command: you should not set mh_replic=10000, as this is the number of MCMC draws, and 10,000 is rather small if you want a sensible burn-in (Dynare drops 50% of the draws by default). Moreover, you do not want to have 5 separate chains starting at the same values; to test whether they converge to the same posterior, you want several chains starting from different values.
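As a sketch, an estimation call could look like the following (the data file name my_data is hypothetical and the option values are only illustrative):
- Code:
estimation(datafile=my_data, mode_compute=4, mh_replic=200000, mh_nblocks=3, mh_drop=0.5, mh_jscale=0.8);
Here mh_nblocks sets the number of parallel chains; if I recall correctly, Dynare draws the starting value of each chain from an overdispersed distribution around the posterior mode (governed by mh_init_scale), which provides the different starting points needed for the convergence diagnostics.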
4. Your statement is wrong. Having the posterior diverge from the prior means that your data (the likelihood in Bayes' rule) is very informative about the parameter (provided the posterior distribution is not too wide around the mode). If all your posterior estimates coincided with the prior means, there would be no reason for estimation, as the data would not add any new insights. Things are only different if you are trying to replicate a study and your prior mean is already the posterior of an estimated model (not a calibrated one!). Then, getting different results may signal trouble.
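To see why, recall Bayes' rule for a generic parameter vector θ and data Y:
p(θ | Y) ∝ p(Y | θ) p(θ)
If the posterior differs markedly from the prior p(θ), it must be the likelihood p(Y | θ) that pulls the estimates away from the prior, i.e. the data are informative about that parameter.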
NB: You may want to have a look at http://lehre.wiwi.hu-berlin.de/Professuren/vwl/wipo/team/former/kriwoluzky/intro_dynare_handout.pdf