This page documents the parallelization system developed by Marco Ratto for Dynare.

The idea is to provide a general framework in Dynare for parallelizing tasks which require very little inter-process communication.

The implementation is done by running several MATLAB or Octave processes, either on local or on remote machines. Communication between master and slave processes is done through SMB on Windows and SSH on UNIX. Input and output data, as well as some short status messages, are exchanged through network filesystems.

Currently the system works only with homogeneous grids: only Windows or only Unix machines.

Routines currently parallelized:

  • the Metropolis-Hastings algorithm (implemented in random_walk_metropolis_hastings.m)

  • the independent Metropolis-Hastings algorithm (implemented in independent_metropolis_hastings.m)

  • the Metropolis-Hastings diagnostics (implemented in McMCDiagnostics.m)

  • pm3.m (plotting routine)

  • Posterior_IRF.m

  • prior_posterior_statistics.m

1. Requirements

1.1. For a Windows grid

  1. a standard Windows network (SMB) must be in place
  2. PsTools must be installed in the path of the master Windows machine

  3. the Windows user on the master machine must be a user of every other slave machine in the cluster; that user will be used for the remote computations.
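As a quick sanity check of points 2 and 3, one can try to run a trivial command on a slave via PsExec (the machine name, user and password below are placeholders to be replaced with your own):

```
psexec \\RemotePCName -u DEPT\JohnSmith -p **** hostname
```

If this prints the slave's hostname, the SMB network, PsTools and the user account are set up correctly.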

1.2. For a UNIX grid

  1. SSH must be installed on the master and on the slave machines
  2. SSH keys must be installed so that the SSH connection from the master to the slaves can be done without passwords, or using an SSH agent (see SshKeysHowto)
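For instance, a key pair can be set up along these lines (the username and host are illustrative; see SshKeysHowto for details):

```
ssh-keygen -t rsa                        # generate a key pair (empty passphrase, or use an SSH agent)
ssh-copy-id JohnSmith@name.domain.org    # install the public key on each slave
ssh JohnSmith@name.domain.org hostname   # should now work without a password prompt
```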

2. Usage

The parallelization mechanism is triggered by the use of options_.parallel. By default, this option is equal to zero, i.e. no parallelization is used.

To trigger the parallelization, this option must be filled with a vector of structures. Each structure represents a slave machine (possibly using several CPU cores on the machine).

The fields are:

  • Local: equal to 0 or 1. Use 1 if this slave is the local machine, 0 if it is a remote machine
  • PcName: for a remote slave, name of the machine. Use the NETBIOS name under Windows, or the DNS name under Unix

  • NumCPU: a vector of integers representing the CPU cores to be used on that slave machine. The first core has number 0; so, on a quad-core machine, use [0:3] here to use all four cores
  • user: for a remote slave, the username to be used. On Windows, the group also needs to be specified, as in DEPT\JohnSmith, i.e. user JohnSmith in Windows group DEPT

  • passwd: for a remote slave, password associated to the username
  • RemoteDrive: for a remote Windows slave, letter of the remote drive (C, D, ...) where the computations will take place

  • RemoteFolder: for a remote slave, path of the directory on the remote drive where the computations will take place

There is currently no interface in the preprocessor to construct this option structure vector; this has to be done by hand by the user in the MOD file.

2.1. Example syntax for Windows and Unix, for local parallel runs (assuming quad-core)

All fields are empty, except Local and NumCPU:

options_.parallel = struct('Local', 1, 'PcName','', 'NumCPU', [0:3], 'user','','passwd','', ...
'RemoteDrive', '', 'RemoteFolder','', 'MatlabOctavePath', '', 'DynarePath', '');

2.2. Example Windows syntax for remote runs

  • the Windows password (passwd) has to be typed explicitly!
  • RemoteDrive has to be typed explicitly!

  • for user, the group ALSO has to be specified, as in DEPT\JohnSmith, i.e. user JohnSmith in Windows group DEPT

  • PcName is the name of the computer in the Windows network, i.e. the output of hostname, or the full IP address

options_.parallel = struct('Local', 0, 'PcName','RemotePCName','NumCPU', [4:6], 'user', ...
'DEPT\JohnSmith','passwd','****', 'RemoteDrive', 'C', 'RemoteFolder','dynare_calcs\Remote', 'MatlabOctavePath', 'matlab', 'DynarePath', 'd:\dynare\matlab');

2.2.1. Example using several remote PCs to build a grid

A vector of parallel structures has to be built:

options_.parallel = struct('Local', 0, 'PcName','RemotePCName1','NumCPU', [0:3], ...
'user', 'DEPT\JohnSmith', 'passwd','****', 'RemoteDrive', 'C', 'RemoteFolder','dynare_calcs\Remote', 'MatlabOctavePath', 'matlab', 'DynarePath', 'c:\dynare\matlab');

options_.parallel(2) = struct('Local', 0, 'PcName','RemotePCName2','NumCPU', [0:3], ...
'user', 'DEPT\JohnSmith','passwd','****', 'RemoteDrive', 'D', 'RemoteFolder','dynare_calcs\Remote', 'MatlabOctavePath', 'matlab', 'DynarePath', 'c:\dynare\matlab');

options_.parallel(3) = struct('Local', 0, 'PcName','RemotePCName3','NumCPU', [0:1], ...
'user','DEPT\JohnSmith','passwd','****', 'RemoteDrive', 'C', 'RemoteFolder','dynare_calcs\Remote', 'MatlabOctavePath', 'matlab', 'DynarePath', 'c:\dynare\matlab');

options_.parallel(4) = struct('Local', 0, 'PcName','RemotePCName4','NumCPU', [0:3], ...
'user','DEPT\JohnSmith','passwd','****', 'RemoteDrive', 'C', 'RemoteFolder','dynare_calcs\Remote', 'MatlabOctavePath', 'matlab', 'DynarePath', 'c:\dynare\matlab');

2.2.2. Example of combining local and remote runs

options_.parallel=struct('Local', 1, 'PcName','','NumCPU', [0:3], ...
'user','','passwd','','RemoteDrive', '', 'RemoteFolder','','MatlabOctavePath', '', 'DynarePath', '');

options_.parallel(2)=struct('Local', 0, 'PcName','RemotePCName','NumCPU', [0:1], ...
'user','DEPT\JohnSmith','passwd','****', 'RemoteDrive', 'C', 'RemoteFolder','dynare_calcs\Remote', 'MatlabOctavePath', 'matlab', 'DynarePath', 'c:\dynare\matlab');

2.3. Example Unix syntax for remote runs

  • no passwd or RemoteDrive needed!

  • PcName: the full DNS address or IP address

2.3.1. Example with only one remote slave

options_.parallel=struct('Local', 0, 'PcName','name.domain.org','NumCPU', [0:3], ...
'user','JohnSmith','passwd','', 'RemoteDrive', '', 'RemoteFolder','/home/rattoma/Remote','MatlabOctavePath', 'matlab', 'DynarePath', '/home/rattoma/dynare/matlab');

2.3.2. Example of combining local and remote runs (on Unix)

options_.parallel=struct('Local', 1, 'PcName','','NumCPU', [0:3], ...
'user','','passwd','','RemoteDrive', '', 'RemoteFolder','','MatlabOctavePath', '', 'DynarePath', '');

options_.parallel(2)=struct('Local', 0, 'PcName','name.domain.org','NumCPU', [0:3], 'user','JohnSmith','passwd','', 'RemoteDrive', '', 'RemoteFolder','/home/rattoma/Remote','MatlabOctavePath', 'matlab', 'DynarePath', '/home/rattoma/dynare/matlab');

3. Information for Dynare developers

3.1. General architecture of the system

The generic parallelization system is organized around five routines: masterParallel.m; slaveParallel.m and fParallel.m; fMessageStatus.m; closeSlave.m.

  • masterParallel is the entry point to the parallelization system.

    • It is called from the master computer at the point where the parallelization system should be activated. Its main arguments are the name of the function containing the task to be run on every slave computer, the inputs to that function stored in two structures (one for local and the other for global variables), and the configuration of the cluster; the function exits when the task has finished on all computers of the cluster, and returns the output in a structure vector (one entry per slave);
    • all file exchange through the filesystem is concentrated in masterParallel: it prepares and sends the input information for the slaves, retrieves from the slaves the information about the status of remote computations stored on the remote machines by the slave processes, and finally retrieves the outputs stored on the remote machines by the slave processes;

    • there are two modes of parallel execution, triggered by option parallel_slave_open_mode:

      • when parallel_slave_open_mode=0, the slave processes are closed after the completion of each task, and new instances are initiated when a new job is required; this mode is managed by fParallel.m;

      • when parallel_slave_open_mode=1, the slave processes are kept running after the completion of each task, and wait for new jobs to be performed; this mode is managed by slaveParallel.m;

  • slaveParallel.m and fParallel.m are the top-level functions to be run on every slave. Their main arguments are the name of the function to be run (containing the computing task) and some information identifying the slave. These functions use the input information previously prepared and sent by masterParallel through the filesystem, call the computing task, and finally store the outputs locally on the remote machines, so that masterParallel can retrieve them back to the master computer;

  • fMessageStatus.m provides the core for simple message passing during slave execution: using this routine, slave processes can store, locally on the remote machine, basic information on the progress of their computations; this information is retrieved by the master process (i.e. masterParallel.m), allowing it to echo the progress of remote computations on the master. fMessageStatus.m is also the entry point where an interruption signal sent by the master can be checked and executed; this routine typically replaces calls to waitbar.m;

  • closeSlave.m is the utility that sends a signal to the remote slaves to close themselves. In standard operation, this is only needed when parallel_slave_open_mode=1, and it is called when DYNARE computations are completed: at that point, slaveParallel.m gets a signal to terminate and no longer waits for new jobs. However, this utility is also useful in any parallel mode if, for any reason, the master needs to interrupt the remote computations that are running;

The parallel toolbox also includes a number of utilities:

  • AnalyseComputationalEnvironment.m: a testing utility that checks that the cluster works properly and echoes error messages when problems are detected;

  • InitializeComputationalEnvironment.m: initializes some internal variables and remote directories;

  • distributeJobs.m: uses a simple algorithm to distribute jobs evenly across the available CPUs;

  • a number of generalized routines that properly perform delete, copy, mkdir and rmdir commands through the network filesystem (i.e. used from the master to operate on slave machines); these routines adapt to the actual environment (Windows or Unix):

    • dynareParallelDelete.m: generalized delete;

    • dynareParallelDir.m: generalized dir;

    • dynareParallelGetFiles.m: generalized copy FROM slaves TO master machine;

    • dynareParallelMkDir.m: generalized mkdir on remote machines;

    • dynareParallelRmDir.m: generalized rmdir on remote machines;

    • dynareParallelSendFiles.m: generalized copy TO slaves FROM master machine;
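The even-split logic behind distributeJobs.m can be sketched as follows (an illustrative re-implementation, not the actual Dynare routine; the function and variable names are made up):

```matlab
% Sketch: split nBlocks jobs as evenly as possible across nCPU cores.
% Returns the number of blocks assigned to each CPU actually used.
function [nBlockPerCPU, totCPU] = distribute_jobs_sketch(nBlocks, nCPU)
  totCPU = min(nCPU, nBlocks);                   % never use more CPUs than jobs
  nBlockPerCPU = floor(nBlocks/totCPU)*ones(1, totCPU);
  r = mod(nBlocks, totCPU);
  nBlockPerCPU(1:r) = nBlockPerCPU(1:r) + 1;     % first r CPUs get one extra block
end
```

For example, 10 blocks on 4 cores gives [3 3 2 2].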

A more complete developer documentation (though a bit outdated) is available in parallel.pdf.

3.2. Improvements to be made (by decreasing order of importance)

  • Improve the way we deal with MATLAB's native multithreading, which was introduced in MATLAB 7.4 and enabled by default since MATLAB 7.6 (see MatlabVersionsCompatibility). The default behavior of the parallel toolbox should be to disable that feature, which can be done using maxNumCompThreads or -singleCompThread depending on the MATLAB version. An option of the parallel code should exist for giving control of the number of threads back to MATLAB.

  • Rename internal options to reflect the names of options in the config file (done!)
  • Implement console mode for MATLAB (already done for Octave), by testing options_.console_mode

  • Allow for the possibility of specifying a weight for each slave in the cluster, to take into account heterogeneous performance; slaves with a low weight would be allocated fewer blocks. [NOTE: this will require a generalization of the routine distributeJobs.m, plus a syntax to provide weights (i.e. a new field CPUWeight in the options_.parallel structure)];

  • Network performance: let the master download files from the slaves continuously, instead of waiting for the slaves to finish their computations, in order to minimize transfer time. [NOTE: here the infrastructure built with fMessageStatus.m can be used: slave processes can provide information about the files they have produced; masterParallel.m can retrieve this information and fetch the files while remote computations are still running].
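The multithreading control discussed in the first point above could be sketched as follows (illustrative only; the guard around maxNumCompThreads is an assumption, since availability of that function depends on the MATLAB version):

```matlab
% Sketch: disable MATLAB's native multithreading from within the toolbox.
% On versions without maxNumCompThreads, the -singleCompThread startup
% flag is the alternative (see the note above).
if exist('maxNumCompThreads')
    maxNumCompThreads(1);    % force single-threaded computation
end
```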

3.3. Interface

  • The general idea is to put all the configuration of the cluster in a config file different from the MOD file, and to trigger the parallel computation with option(s) on the “dynare” command line

  • Configuration file:
    • In a standard location ($HOME/.dynare under Unix, C:\Documents and Settings\<username>\Application Data\dynare.ini on Windows)

    • Should have provisions for other Dynare configuration parameters unrelated to parallel computation
    • Should allow specifying several clusters, each one associated with a nickname
    • For each cluster, specify a list of slaves with the following options for each slave [if not explicitly specified by the configuration file, the preprocessor sets the options to their defaults]:

Node Options

| Option | Type | Default | Meaning | Req. Local Win | Req. Remote Win | Req. Local Unix | Req. Remote Unix |
|---|---|---|---|---|---|---|---|
| Name | string | empty | name of the node | * | * | * | * |
| CPUnbr | integer | Linux: all available CPUs; Windows: needs to be set by the user | number of CPUs to be used on that computer | * | * | * | * |
| ComputerName | string | localhost (implies options_.parallel(?).Local=1) | computer name on the network or IP address |  | * |  | * |
| UserName | string | empty | required for remote login |  | * |  | * |
| Password | string | empty | required for remote login (only under Windows) |  | * |  |  |
| RemoteDrive | string | empty | drive to be used on the remote computer |  | * |  |  |
| RemoteDirectory | string | empty | directory to be used on the remote computer |  | * |  | * |
| DynarePath | string | empty | path to the matlab directory within the Dynare installation directory |  |  |  |  |
| MatlabOctavePath | string | empty | path to the MATLAB or Octave executable |  |  |  |  |
| SingleCompThread | boolean | true | disable MATLAB's native multithreading? |  |  |  |  |

Cluster Options

| Option | Type | Default | Meaning | Required |
|---|---|---|---|---|
| Name | string | empty | name of the cluster | * |
| Members | string | empty | list of members in this cluster | * |
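Under this proposal, a configuration file could look roughly like the following (an illustrative sketch only; the exact syntax, section names and values shown are assumptions, not the implemented format):

```
[cluster]
Name = MyCluster
Members = n1 n2

[node]
Name = n1
ComputerName = localhost
CPUnbr = 4

[node]
Name = n2
ComputerName = name.domain.org
CPUnbr = 4
UserName = JohnSmith
RemoteDirectory = /home/JohnSmith/Remote
MatlabOctavePath = matlab
DynarePath = /home/JohnSmith/dynare/matlab
```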

  • Command line options:
    • conffile=<path>: specify the location of the configuration file if it is not standard

    • parallel: trigger the parallel computation using the first cluster specified in config file

    • parallel=<clustername>: trigger the parallel computation, using the given cluster

    • parallel_slave_open_mode: use the leaveSlaveOpen mode in the cluster

    • parallel_test: just test the cluster, don’t actually run the MOD file
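Putting these options together, the proposed invocations would look like this (the MOD file name, cluster name and path are illustrative):

```matlab
dynare mymodel parallel                        % use the first cluster in the config file
dynare mymodel parallel=MyCluster              % use the cluster named MyCluster
dynare mymodel conffile=/home/JohnSmith/my.ini parallel   % non-standard config file location
dynare mymodel parallel parallel_test          % only test the cluster, do not run the MOD file
```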

  • options and commands triggered by the preprocessor:
    • options_.parallel: all the fields are filled in by the preprocessor; if some field is not explicitly defined in the configuration file, the preprocessor sets it to its default value;
    • options_.parallel_info: the field options_.parallel_info.leaveSlaveOpen is set to 1 if parallel_slave_open_mode is set in the configuration file; the default value (=0) is set in global_initialization.m;
    • with the parallel_test option, the preprocessor simply triggers:

InitializeComputationalEnvironment();
AnalyseComputationalEnvironment(options_.parallel);

  • after options_ is set, the processed MATLAB main file must contain two configuration commands, one before computations start and the second just before completing Dynare computations:
    • InitializeComputationalEnvironment generates the temporary remote subfolder where remote jobs are run;

    • closeSlave closes all slave MATLAB instances and does a clean-up of working directories;

    • in the end, the processed MATLAB main script should be structured as follows:

preamble, options_ definitions, global initializations, etc...

InitializeComputationalEnvironment();

... all dynare computations ...

if options_.parallel_info.leaveSlaveOpen == 1
    closeSlave(options_.parallel, options_.parallel_info.RemoteTmpFolder);
end

diary off

...

DynareWiki: ParallelDynare (last edited 2012-05-09 10:05:10 by HoutanBastani)