reubenpjacob wrote: Hi Jiho,
The Kalman filter only serves to evaluate the likelihood (and hence the posterior) at each draw; it is a 'filter' because it filters out the effect of the statistically unobservable variables in the model.
The resulting posterior value in turn determines the acceptance ratio and guides the MCMC.
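If it helps, here is a minimal NumPy sketch of what I mean by 'evaluating the likelihood': the standard prediction-error-decomposition likelihood of a linear state-space model. The matrix names (T, Z, Q, H) and the initialization (a0, P0) are generic textbook notation, not Dynare's actual code.

```python
import numpy as np

# Generic textbook Kalman filter, not Dynare's implementation.
def kalman_loglik(y, T, Z, Q, H, a0, P0):
    """Log-likelihood of y (n_obs x n_y) for the linear state space
         state(t) = T state(t-1) + eta(t),  eta ~ N(0, Q)
         y(t)     = Z state(t)   + eps(t),  eps ~ N(0, H)
    via the prediction-error decomposition."""
    a, P = a0.copy(), P0.copy()
    loglik = 0.0
    for t in range(y.shape[0]):
        # predict: E[state(t)|data(t-1)] and its variance
        a = T @ a
        P = T @ P @ T.T + Q
        # prediction error for the new observation and its variance
        v = y[t] - Z @ a
        F = Z @ P @ Z.T + H
        loglik += -0.5 * (len(v) * np.log(2 * np.pi)
                          + np.log(np.linalg.det(F))
                          + v @ np.linalg.solve(F, v))
        # update: the Kalman gain folds data(t) into the state estimate
        K = P @ Z.T @ np.linalg.inv(F)
        a = a + K @ v          # E[state(t)|data(t)]
        P = P - K @ Z @ P
    return loglik
```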
The symmetric and reversible jumping distribution, which is a Normal in Dynare, ensures that with a large number of iterations the simulated distribution converges to the target distribution, the 'true' posterior. So we are not really picking from the posterior itself.
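And here is the corresponding random-walk Metropolis loop, again just a sketch (log_posterior stands in for the prior times the Kalman-filter likelihood above, and the proposal covariance prop_cov is a placeholder you would have to tune):

```python
import numpy as np

# Sketch of random-walk Metropolis, not Dynare's actual MH routine.
def rw_metropolis(log_posterior, theta0, prop_cov, n_draws, seed=0):
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(prop_cov)       # Normal jumping distribution
    theta, logp = np.asarray(theta0, float), log_posterior(theta0)
    draws = np.empty((n_draws, len(theta)))
    for i in range(n_draws):
        proposal = theta + chol @ rng.standard_normal(len(theta))
        logp_new = log_posterior(proposal)
        # symmetric proposal => acceptance ratio is just the posterior ratio;
        # ratio > 1 always accepts, otherwise compare with a uniform(0,1) draw
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        draws[i] = theta                      # rejected steps repeat the draw
    return draws
```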
cheers
reuben
Dear Reuben and Stéphane,
As far as I understand, the Kalman filter works through the following mechanism (written out in standard notation below):
E[state(i)|data(i-1)] -> observe data(i)
-> obtain new parameters which maximize the likelihood given data(i)
-> with the transition matrix expressed in terms of the new parameters, update the previous prediction to E[state(i)|data(i)]
-> predict E[state(i+1)|data(i)] using the Kalman gain matrix
So we get n values for a parameter, where n is the number of observations.
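For one fixed parameter draw, the predict/update recursion I have in mind is, in the usual textbook notation:

```latex
\begin{aligned}
\text{predict: } & a_{t|t-1} = T\,a_{t-1|t-1}, \qquad P_{t|t-1} = T\,P_{t-1|t-1}T' + Q \\
\text{update: }  & v_t = y_t - Z\,a_{t|t-1}, \qquad F_t = Z\,P_{t|t-1}Z' + H, \\
                 & K_t = P_{t|t-1}Z'F_t^{-1}, \qquad a_{t|t} = a_{t|t-1} + K_t v_t
\end{aligned}
```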
In MCMC:
draw a parameter from the prior distribution
-> sample a proposal parameter from the jumping distribution
-> if the ratio of the posterior density at the proposal to the density at the current draw is greater than 1, accept the proposal (the full acceptance rule is spelled out below)
... -> at the end of the day we get, for example, 20,000 draws for each parameter.
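Just to fix notation, the acceptance rule I mean is the standard Metropolis one (with \theta^{*} the proposal and \theta the current draw, each density evaluation requiring one full Kalman-filter pass over the whole sample):

```latex
\alpha \;=\; \min\!\left(1,\;
  \frac{p(\mathrm{data}\mid\theta^{*})\,p(\theta^{*})}
       {p(\mathrm{data}\mid\theta)\,p(\theta)}\right)
```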
Then, in MCMC, should I think of all the data as being observed at the outset?
I mean that there is no prediction-and-updating mechanism in MCMC.
Whenever we draw, we just compare densities based on the likelihood; if the ratio is less than 1, we compare it with a draw from a uniform(0,1).
So I can see the comparison between maximized likelihoods.
However, I cannot see any updating due to the newly observed data.
To my knowledge, the Kalman filter sequentially predicts and updates the unobservable state vector whenever new data are observed.
Best regards,
Jiho