Dear Jpfeifer,
My coauthor and I have been estimating a DSGE-VAR, and we have noticed a 'weird' relationship between lambda and the likelihood that we find hard to understand (in fact, we are trying to trace out how the likelihood varies as lambda changes). We wondered if you might know what could be causing it. (We explain the problem below; thanks in advance!)
Problem (not a bug, so there is no error message; the code runs fine):
When we ESTIMATE lambda as part of the model parameters, the optimal lambda is 0.3321; the corresponding marginal (log) likelihood is 1550.
When we instead SET lambda to higher values (0.5, 0.75, 1, etc.), the likelihood falls, which is what we expected.
What strikes us as strange is that when we set lambda to its lower bound (0.3276), the likelihood is 1553.
So why does the optimal lambda not deliver the highest likelihood?
When we estimated lambda, we did set the lower bound to 0.3276. If lambda = 0.3276 really does yield the highest likelihood, why does the algorithm fail to find this value and instead suggest a slightly higher one (0.3321)? At first we thought this might be due to randomness, so we reran the whole procedure a second time, but we got exactly the same result. Do you have any idea what is happening here? We are really struggling to understand why the estimated optimal lambda does not imply the highest likelihood.
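For concreteness, the two setups we are comparing look roughly like the lines below. This is only a sketch, not the exact code from the attached .mod file; the prior shape, the bounds on dsge_prior_weight, the data file name, and the other estimation options are placeholders for what we actually use.

    // (a) lambda estimated: dsge_prior_weight appears in estimated_params
    estimated_params;
      dsge_prior_weight, uniform_pdf, , , 0.3276, 2;  // lower bound 0.3276; upper bound illustrative
      // ... other estimated parameters ...
    end;
    estimation(datafile=mydata, dsge_var, mode_compute=4, mh_replic=0);

    // (b) lambda calibrated: a fixed value is passed via dsge_var=VALUE
    //     (dsge_prior_weight is then not listed in estimated_params)
    estimation(datafile=mydata, dsge_var=0.3276, mode_compute=4, mh_replic=0);

In case (b) we simply change the value after dsge_var= (0.3276, 0.5, 0.75, 1, ...), rerun the estimation, and compare the reported (log) marginal likelihoods.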
We really appreciate your advice. Thanks a lot for your time!
Kind regards,
Zhirong
PS: We have attached the .mod file and the data just in case.