
Strategies for Solving Large Scale Models

Posted: Tue Aug 09, 2016 1:04 pm
by twin
Hello all,

I'm currently solving a large model and would really appreciate advice about the best way to handle it. The model has around 60 variables, 2500 parameters, and fairly complicated expressions. I would like to compute a second order, or ideally even a third order, approximation of the model.
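For concreteness, the relevant call in my .mod file is just the usual one (the only option that matters for this question is order):

stoch_simul(order=2);

and ideally I would like to be able to switch this to order=3.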

The main obstacle seems to be the way Dynare computes the derivatives of the equilibrium conditions. When I attempt to compute a second order approximation, I get the error message

Error using feval
The current workspace already has too many variables; there is no room for "T854468".

Error in stochastic_solvers (line 107)
[junk,jacobia_,hessian1] = feval([M_.fname '_dynamic'],z(iyr0),...

Error in resol (line 141)
[dr,info] = stochastic_solvers(dr,check_flag,M,options,oo);

Error in stoch_simul (line 82)
[oo_.dr,info,M_,options_,oo_] = resol(0,M_,options_,oo_);

Error in aggregateDynamics (line 8010)
info = stoch_simul(var_list_);

Error in dynare (line 223)
evalin('base',fname) ;


The error comes after preprocessing is completed and the steady state is successfully computed using an external _steadystate.m file. So I think what must be going on is that the preprocessor creates a lot of temporary terms as part of the differentiation routine, and when Matlab attempts to evaluate the _dynamic.m file it runs out of room in the workspace.

I can compute a first order approximation of the model in about 30 seconds (about 16 of which are spent on the steady state). I can also compute a second order approximation of a smaller version of the model in about 50 seconds, including the steady state. So I think that if I can just successfully evaluate the derivatives in Matlab, solving for a second order approximation of the full model should be feasible.

If anyone has other ideas about how best to proceed, I would really appreciate it! I'm hoping that someone else has dealt with a large-scale model like this and can put me on the right path, so that I don't waste time trying a bunch of hopeless things. Any feedback on what I have already tried (listed below) would also be useful.

(1) I can successfully compute a second order expansion of the model with the use_dll option, which compiles the model files instead of using the Matlab _dynamic.m version. However, it takes about 2 hours to run, which is obviously not ideal. I understand that this option can increase run time, especially for large models, but the fact that it takes so much longer than the second order approximation of the smaller version of the model (again, 50 seconds) may indicate I'm doing something incorrectly.
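For reference, the way I am enabling this is just by adding the option to the model block declaration, i.e. something like:

model(use_dll);
...
end;

with everything inside the block left unchanged.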

(2) I tried using the "notmpterms" option when invoking Dynare, but the code has now been running for over four hours without finishing. Given that use_dll finishes in half that time, this does not seem competitive.
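Concretely, the call was along the lines of (the file name matches the one in the error trace above):

dynare aggregateDynamics notmpterms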

(3) I could potentially port everything over to Dynare++, but I would prefer not to, because Dynare++ syntax is much more restrictive than Dynare's and would require a lot of workarounds. Is it even clear that Dynare++ would be advantageous in this case, i.e., have a run time significantly shorter than two hours?
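If I did port it, I assume the call would be something like the following (I am writing the --order flag from memory, so treat this as a sketch):

dynare++ --order 2 aggregateDynamics.mod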

Re: Strategies for Solving Large Scale Models

Posted: Tue Aug 09, 2016 1:33 pm
by jpfeifer
Could you please provide me with the codes to look into this? Email is fine.