Finding the posterior mode: Which optimization technique?
Posted: Fri Aug 16, 2013 5:06 pm
Hello!
I have 2 questions regarding the theory behind DSGE model estimation:
1. Since for most DSGE models the posterior is not analytically tractable, numerical methods have to be applied. There exist several methods for this, e.g. direct sampling or Newton-type optimization.
Which method does Dynare use to maximize the posterior kernel and to calculate the mode?
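Just to make question 1 concrete, here is a minimal sketch of what I mean by numerically maximizing the posterior kernel (in Python rather than Dynare's MATLAB; the toy model, the prior, and the BFGS optimizer are purely my own illustration, not necessarily what Dynare does):

[code]
import numpy as np
from scipy import optimize, stats

# Artificial data for a toy model: y_t = mu + e_t, e_t ~ N(0, 1)
rng = np.random.default_rng(0)
y = 2.0 + rng.standard_normal(200)

def neg_log_posterior_kernel(theta):
    mu = theta[0]
    log_lik = np.sum(stats.norm.logpdf(y, loc=mu, scale=1.0))  # log-likelihood
    log_prior = stats.norm.logpdf(mu, loc=0.0, scale=10.0)      # log prior on mu
    return -(log_lik + log_prior)                               # kernel = likelihood x prior

# Quasi-Newton (BFGS) search for the posterior mode; the inverse Hessian at the
# mode is typically reused to scale the proposal density of the MCMC step.
res = optimize.minimize(neg_log_posterior_kernel, x0=np.array([0.0]), method="BFGS")
print("posterior mode:", res.x, "inverse Hessian at the mode:", res.hess_inv)
[/code]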
2. I have been thinking a lot about the purpose of applying the Kalman filter. Although I recognize its essential role in estimation, namely the construction of the likelihood, I still lack an intuitive understanding of it.
If I understand it correctly, the Kalman filter addresses the fact that most data consist of both a “signal” and a “noise” component. In this context, the (normally distributed) “noise” component arises because the model has only limited power to explain the data. Roughly speaking, the Kalman filter operates as a linear one-step-ahead predictor for the data using the state-space form of the model. Through iterative correction of the forecast errors, the predictions are gradually improved and the signal component of the data is extracted more accurately.
Is this intuition correct?
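To make that intuition concrete, here is a rough sketch of the recursion as I picture it (again in Python, entirely my own toy example and not Dynare code): the filter is run on a linear state-space form, and every one-step-ahead forecast error both corrects the state estimate and contributes a term to the log-likelihood.

[code]
import numpy as np

def kalman_loglik(y, A, C, Q, R, x0, P0):
    """Log-likelihood of y (T x n_obs) via the prediction-error decomposition."""
    x, P = x0, P0
    loglik = 0.0
    for y_t in y:
        # One-step-ahead prediction of the state and the observables
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        y_pred = C @ x_pred
        # Forecast error ("innovation") and its variance
        v = y_t - y_pred
        F = C @ P_pred @ C.T + R
        # This period's contribution to the log-likelihood
        loglik += -0.5 * (len(v) * np.log(2 * np.pi)
                          + np.log(np.linalg.det(F))
                          + v @ np.linalg.solve(F, v))
        # Update step: correct the state estimate with the forecast error
        K = P_pred @ C.T @ np.linalg.inv(F)
        x = x_pred + K @ v
        P = P_pred - K @ C @ P_pred
    return loglik

# Toy example: an AR(1) state observed with measurement noise
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.5]]); R = np.array([[0.1]])
rng = np.random.default_rng(0)
x_true, data = 0.0, []
for _ in range(100):
    x_true = 0.9 * x_true + rng.normal(scale=np.sqrt(0.5))
    data.append([x_true + rng.normal(scale=np.sqrt(0.1))])
print(kalman_loglik(np.array(data), A, C, Q, R,
                    x0=np.array([0.0]), P0=np.array([[1.0]])))
[/code]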
Thanks in advance for your help!
Greetings