
Dynare with MPI and Octave

PostPosted: Wed Oct 20, 2010 8:16 am
by MichaelCreel
Hello to the forum, this is my first post. First of all, thanks very much to the Dynare community: the software is a pleasure to use, and the documentation is excellent. I've been using Dynare to solve a model many times over, drawing the parameter values randomly from a prior. To speed up the process, I'm running Dynare from Octave on a cluster, using the MPI extensions for Octave. Basically, there are 32 .mod files, each of which loads its own parameters from a file, and each parameter file is updated as the run proceeds. This all works fine, but there are two interesting things I have observed.
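
For concreteness, the per-rank loop is along these lines (a minimal sketch, not my actual code; prior_draw, n_reps and the file and model names are placeholders):

% Each rank repeatedly draws a parameter vector from the prior, saves it
% where its .mod file will load it, and calls Dynare to solve the model.
for rep = 1:n_reps
  theta = prior_draw();                  % hypothetical prior sampler
  save('-mat', 'params1.mat', 'theta');  % read back by the .mod file
  dynare model1;                         % solve at this parameter draw
end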

1. The first is that Dynare seems to leave the random number generator in the same state, regardless of the state it started with. To see this, here are some results using 2 MPI ranks. The first column is the MPI rank that Dynare is running on; the second can be ignored; the third is a draw from a uniform density (not U(0,1), but that's not important). The first 3 lines are 3 sequential results from MPI rank 2, and the next 3 lines are from MPI rank 1. Note that the random number in line 1, column 3 differs from the one in line 4, column 3. This is normal, because the two Octave instances start up with different states. However, the random numbers in lines 2 and 3 are the same as those in lines 5 and 6: the RNG state has become synchronized across the MPI ranks! (A sketch of this check follows the table.)

2.00000 0.00000 0.15911
2.00000 0.00000 0.26969
2.00000 0.00000 0.33744
1.00000 0.00000 0.32654
1.00000 0.00000 0.26969
1.00000 0.00000 0.33744
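
The check itself is trivial; something like the following sketch on each rank (myrank stands for the rank reported by the Octave MPI extension, and model1 is a placeholder):

% Draw before and after calling Dynare on each rank. The 'before' draws
% differ across ranks, but the 'after' draws coincide, showing that
% Dynare left the generator in the same state everywhere.
printf('%d before %.5f\n', myrank, rand);
dynare model1;
printf('%d after  %.5f\n', myrank, rand);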

I have a workaround for this, which is simply to write the RNG state out to disk before calling Dynare, and then to read it back afterwards. However, this generates additional disk I/O, which slows things down, and that brings me to the second interesting point.
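
In Octave the workaround looks roughly like this (a sketch; rngstate.mat is a placeholder name). The state goes through disk rather than a workspace variable so that it survives the clear all in the driver file Dynare generates:

% Save the RNG state before the call and restore it afterwards, so that
% Dynare's internal re-seeding does not synchronize the ranks.
state = rand('state');                 % capture the current generator state
save('-mat', 'rngstate.mat', 'state');
dynare model1;                         % the RNG gets reset somewhere in here
load('rngstate.mat');                  % read the saved state back from disk
rand('state', state);                  % restore it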

2. The second thing to note is that Dynare creates files and writes to disk during the solution process. This is no big deal when computing a single solution, but it slows things down considerably when computing many. To see this, you could look at http://158.109.174.23/ , which shows the activity of a cluster working on the problem I described. Note that pel2, pel3 and pel4 are using only about 20% of CPU capacity; pel1 is a little higher because it also gathers the results and writes them to disk. To minimize this effect, I direct all of Dynare's disk activity to a ramdisk on each node of the cluster. It seems to me that if the I/O could be reduced, CPU utilization would probably increase, so an option to turn off the writes to disk would be helpful in a situation like mine.
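
Concretely, the ramdisk arrangement amounts to something like this sketch (the tmpfs path and myrank are placeholders): each rank works in its own directory on the ramdisk, so the files Dynare creates never touch the physical disk.

% Run each solve from a per-rank directory on tmpfs; Dynare writes its
% files next to the .mod file, so they all land on the ramdisk.
rundir = sprintf('/dev/shm/dynare_rank%d', myrank);
mkdir(rundir);
copyfile('model1.mod', rundir);
cd(rundir);
dynare model1;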

If anyone would like to see the code I'm using, I'd be happy to post it. By the way, I'm using Dynare v4.1.2 and Octave 3.2.4 on Debian GNU/Linux.

Best regards,
Michael

Re: Dynare with MPI and Octave

PostPosted: Mon Nov 22, 2010 11:43 am
by SébastienVillemot
Just for your information, concerning the first point: we plan to introduce a better way of managing the RNG seed in the next version of Dynare.