
Computing Log Likelihood and Optimization

PostPosted: Tue Oct 12, 2010 12:08 pm
by bigbenmike
Hi, guys:

I have a very practical question about computing the log-likelihood. I am trying to follow the procedure in An and Schorfheide (2005, ER) step by step, but I am stuck at searching for the posterior mode / maximizing the log-likelihood. Although my code is not in Dynare, I would really appreciate any help or hints.

I believe the way to compute the likelihood (and hence the posterior mode) is to use the Kalman filter. I write my state-space representation in an {A,B,C,D} fashion, as in Fernandez-Villaverde. The problem is that the search algorithm, csminwel, cannot really find the minimizer; instead it stops at some random place and spits out unreasonable results. However, the Dynare version of the code works pretty well. Since the solutions and IRFs from the two versions of the code are identical, I guess the problem is in my Kalman filter. The part of the Dynare code that computes the log-likelihood is too complicated for me to follow, so I wonder if someone can give me a reference on how to compute the log-likelihood, or use the Kalman filter, in a general DSGE model. My code to compute the Kalman filter is attached.
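In outline, the recursion I am trying to implement is the textbook one. Here is a stripped-down sketch (not my actual code; the function name, the measurement-error variance H, and the initialization at the unconditional moments are my own choices, and I ignore any correlation between the state and measurement innovations):

    % Minimal Kalman-filter log-likelihood for the state space
    %   s_t = A*s_{t-1} + B*e_t,   y_t = C*s_t + v_t,
    % with e_t ~ N(0,I) and v_t ~ N(0,H), e and v uncorrelated.
    function ll = kf_loglik(Y, A, B, C, H)
        [ny, T] = size(Y);              % Y is ny x T
        ns = size(A, 1);
        s  = zeros(ns, 1);              % unconditional mean of the states
        % unconditional state variance (requires stable A):
        P  = reshape((eye(ns^2) - kron(A, A)) \ reshape(B*B', ns^2, 1), ns, ns);
        ll = 0;
        for t = 1:T
            u  = Y(:, t) - C*s;         % one-step-ahead forecast error
            F  = C*P*C' + H;            % forecast-error variance
            ll = ll - 0.5*(ny*log(2*pi) + log(det(F)) + u'*(F\u));
            K  = A*P*C'/F;              % Kalman gain
            s  = A*s + K*u;             % next-period state forecast
            P  = A*P*A' + B*B' - K*F*K';
            P  = 0.5*(P + P');          % keep P symmetric despite rounding
        end
    end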


Thank you for your input.

Re: Computing Log Likelihood and Optimization

PostPosted: Sat Oct 16, 2010 6:08 am
by bigbenmike
It seems that nobody has picked up the question; probably it is too dumb a question to ask. Another related question is whether we can get a positive log-likelihood, i.e., an implied density greater than one. Conceptually a probability above one is impossible, but in practice we compute the period-t log-likelihood as ll_t = -1/2*log(det(D_t)) - 1/2*u_t'*inv(D_t)*u_t (up to the constant -n/2*log(2*pi)), where D_t is the Kalman-filter estimate of the variance matrix of the observable vector Y_t. If D_t is very small, the first term can be a large positive number, and therefore the log-likelihood can be positive. If the Kalman filter gives me such an outcome, does it mean something is wrong? If so, can someone kindly give me a reference on how to calculate the log-likelihood with the Kalman filter in practice?

Best

Re: Computing Log Likelihood and Optimization

PostPosted: Sat Oct 16, 2010 6:55 am
by jpfeifer
Checking the correctness of your filter is a lot of work. I would recommend simply cross-checking it against other code available on the internet. Gianni Amisano, for example, has code for computing the likelihood of DSGE models on his homepage.
Regarding your second question, you are correct. What enters the likelihood is the log of the density of a normal distribution. Imagine the variance goes to 0: all the mass is concentrated at the mean, i.e., the PDF at the mean goes to infinity, so its log is obviously larger than 0. If you sum up such values, the sum may be positive. Only with discrete distributions must the log-likelihood always be smaller than 0.
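For example, in MATLAB:

    sigma = 0.1;
    f = 1/(sigma*sqrt(2*pi))    % density of N(0, sigma^2) at its mean: about 3.99
    log(f)                      % about 1.38, i.e. a positive log density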

Re: Computing Log Likelihood and Optimization

PostPosted: Tue Oct 19, 2010 7:51 am
by StephaneAdjemian
Hi,

If you want to compare your code for the likelihood with other code (Dynare's, for instance), you have to compare the evaluation of the likelihood at a single point (in the parameter space), not the results of the optimization. If these evaluations match, the differences (after the optimization process) most likely come from the penalties we have in the Dynare code. When the parameters are not in the prior domain, the Blanchard and Kahn conditions are not satisfied, or the steady state is not defined, Dynare penalizes the likelihood with an endogenous penalty (so that the optimizer is given information about the escape route). This is missing in your code and may explain the results.
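A stylized version of such a penalized objective looks like this (this is not Dynare's actual code; get_bounds and solve_model are placeholders for the corresponding steps of your own program, and kf_loglik is the filter sketched in the first post):

    % Stylized penalized minus-log-likelihood (not Dynare's actual code).
    function minus_ll = penalized_objective(theta, Y)
        base_penalty = 1e8;
        [lb, ub] = get_bounds();               % placeholder: prior support
        if any(theta < lb) || any(theta > ub)
            % the penalty grows with the size of the violation, so the
            % optimizer is told in which direction to escape
            minus_ll = base_penalty + sum(max(lb-theta,0).^2 + max(theta-ub,0).^2);
            return
        end
        [A, B, C, H, bk_ok] = solve_model(theta);  % placeholder: model solver
        if ~bk_ok                              % Blanchard-Kahn conditions fail
            minus_ll = base_penalty;
            return
        end
        minus_ll = -kf_loglik(Y, A, B, C, H);  % the ordinary likelihood
    end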

Concerning the second question: the likelihood is the density of the sample conditional on the model and its parameters. The likelihood is not a probability. Consequently, the likelihood can be greater than one and the log-likelihood can be positive. Obviously, probabilities may be defined from the likelihood function. For instance, if your sample is just one observation of one variable, say Y, one can evaluate (in principle) the probability, conditional on the model and its parameters, of Y being less than some value y.

Best, Stéphane.

Re: Computing Log Likelihood and Optimization

PostPosted: Tue Oct 26, 2010 11:38 am
by bigbenmike
Hi, guys:

Thanks for the responses; they are helpful. I have now sort of fixed the convergence issue. In the previous version of my code, I checked whether the real parts of the eigenvalues of the P matrix (the variance of the forecast errors of the state variables) were greater than zero and, if not, reset P to the identity matrix. After I removed this part, the computed likelihood value went up from about -5*10^4 to -2*10^3, which is close to the value computed by Dynare. I don't know whether this move is legitimate. I have this concern because removing the check causes another problem: the parameter values sometimes run into regions where the computation returns complex values or even NaN. I don't know how to control the iteration at that end, or whether it indicates something is wrong.
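Would something like the following be a reasonable guard instead (just a sketch of what I have in mind)? Inside the filter loop I would symmetrize P and hand a penalty back to csminwel instead of a complex value or NaN:

    % inside the filter loop, after updating P:
    P = 0.5*(P + P');              % remove rounding-error asymmetry
    [V, E] = eig(P);
    P = V*max(real(E), 0)*V';      % clip tiny negative eigenvalues to zero
    % and after computing F = C*P*C' + H:
    if ~all(isfinite(F(:))) || det(F) <= 0
        ll = -1e8;                 % return a penalty instead of complex/NaN
        return
    end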

Best

Re: Computing Log Likelihood and Optimization

PostPosted: Fri Apr 08, 2011 8:31 am
by bigbigben
Hi, guys:

Another question related to the likelihood in Dynare: what exactly is the likelihood function value reported by Dynare? The procedure in Dynare seems to include two steps: the first searches for a minimum point (the posterior mode), and the second uses the Metropolis-Hastings algorithm to run the MCMC simulation and compute the posterior. My question is whether the prior density matters in the first step, i.e., whether the objective is the log prior plus the log data density or simply the log data density.
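In symbols, the two candidate objectives for the first step would be:

    obj1(theta) = log p(Y | theta)                    % log data density only
    obj2(theta) = log p(Y | theta) + log p(theta)     % log posterior kernel

Which of the two is the value Dynare reports?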