Hi Joao,
The marginal density is known to be highly sensitive to the choice of the prior. It is defined as the integral of the product of the likelihood and the prior density, so if you change the prior density you change the weights given to the likelihood, and hence the marginal density.
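In symbols, with a generic parameter vector \theta and data y, the marginal density is

p(y) = \int p(y \mid \theta) \, p(\theta) \, d\theta

so the prior density p(\theta) acts as the weighting function on the likelihood.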
The most famous case is the uniform prior. If you widen the uniform prior of, say, parameter \rho from U[0,1] to U[0,10], you automatically decrease the marginal density (the prior density on the support drops from 1 to 1/10, so the weights given to the likelihood are lowered), even though this leaves the posterior distribution of \rho essentially unchanged as long as the likelihood puts almost no mass outside [0,1]. That's why this kind of prior is a bad idea if your first concern is model comparison. In this situation you could (this is not advice) "artificially" increase the marginal density by shrinking the support of the uniform prior around the mode of the likelihood (if the likelihood is single-peaked along this direction).
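A quick way to see the mechanism is a minimal numerical sketch in Python (not Dynare); the Gaussian likelihood for \rho below is made up purely for illustration:

import numpy as np

# Toy likelihood in rho: a Gaussian bump centred at 0.5 with std 0.05.
# (Illustrative only; it stands in for the model's likelihood along rho.)
def likelihood(rho):
    return np.exp(-0.5 * ((rho - 0.5) / 0.05) ** 2) / (0.05 * np.sqrt(2 * np.pi))

def log_marginal(prior_upper, n=200_000):
    # Marginal density under a U[0, prior_upper] prior (density 1/prior_upper
    # on its support): integrate likelihood * prior numerically on a grid.
    rho, step = np.linspace(0.0, prior_upper, n, retstep=True)
    return np.log(np.sum(likelihood(rho) / prior_upper) * step)

print(log_marginal(1.0))   # ~  0.0 : the likelihood mass lies inside [0, 1]
print(log_marginal(10.0))  # ~ -2.3 : lower by log(10), although the posterior
                           #          of rho is unchanged

Widening the prior by a factor of 10 lowers the log marginal density by about log(10), exactly the weighting effect described above.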
Best,
Stéphane.
jmadeira wrote: Hi,
When I run my mod file, I usually get very similar estimation results across different attempts (and always with low standard deviations, close to zero).
However, despite the parameter estimates being very similar, the log data density results vary quite a lot (usually between 1000 and 2000).
Does anyone know why this happens? What should I do in order to be more confident about the true value of the log data density (should I just use the parameter estimates obtained with the highest result...)?
Best,
Joao