The Dynare Test Suite is run every time a new commit is made on the master or distribution branches. To add a new test to the test suite, do the following:
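Before pushing, you can also run the suite locally. A minimal sketch, assuming an already-configured Automake build tree (the job count passed to -j is arbitrary):

    cd tests
    make -j8 check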
For a .mod file
Open tests/Makefile.am
Add your .mod file to the MODFILES variable, following it with a \ unless it is the last .mod file in the list.
Example
Add test.mod to the test suite:
    MODFILES = \
            estimation/fs2000_mc4.mod \
            estimation/fs2000_mc4_mf.mod \
            gsa/ls2003.mod \
            ...
            path/to/test.mod \
            ...
            smoother2histval/fs2000_smooth_stoch_simul.mod \
            reporting/example1.mod
For a .m file
Open tests/Makefile.am
Add the .m file to the end of the line that begins M_TRS_FILES +=, adding the .trs ending to the .m file. So, if the test file is called test.m, you would add test.m.trs to the end of this line.
Do the same for the line that begins O_TRS_FILES += so that the test is also run under Octave, this time using the .o.trs ending: for the same file, you would add test.o.trs.
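For illustration, the two lines might then look like the following (the entries other than test.m.trs and test.o.trs stand in for whatever each line already contains; the Octave file names here are illustrative):

    M_TRS_FILES += run_block_byte_tests_matlab.m.trs run_reporting_test_matlab.m.trs test.m.trs
    O_TRS_FILES += run_block_byte_tests_octave.o.trs run_reporting_test_octave.o.trs test.o.trs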
Add the .m file to the EXTRA_DIST variable. So, here, you would add test.m to this variable, following it with a \ if it is not on the last line of this variable. NB: sometimes the Octave entry will refer to a different file, because a .m file won't necessarily work the same way in MATLAB as in Octave.
Ensure that your .m file produces the correct output file at the end of its execution; follow the example block shown below. To do this, in your test file, you will need to keep track of:
- The number of tests run
- The number of tests failed
- The names of the failed tests
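A minimal sketch of that bookkeeping, using the same variable names as the output block shown below; the quantity being checked and the tolerance are purely hypothetical:

    num_tests = 0;
    failed_tests = {};

    % One check: count it, and record its name if it fails
    num_tests = num_tests + 1;
    if abs(computed_value - expected_value) > 1e-8  % hypothetical check
        failed_tests{end+1} = 'test.m: computed_value check';
    end

    num_failed_tests = length(failed_tests);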
Example
Add test.m to the test suite:
    M_TRS_FILES += run_block_byte_tests_matlab.m.trs run_reporting_test_matlab.m.trs run_all_unitary_tests.m.trs test.m.trs

    ...

    EXTRA_DIST = \
            read_trs_files.sh \
            run_test_matlab.m \
            run_test_octave.m \
            test.m \
            ...
test.m should then end with a block of the following form, which creates a .trs file containing the results of the test:
    cd(getenv('TOP_TEST_DIR'));
    fid = fopen('test.m.trs', 'w+');
    if num_failed_tests > 0
        fprintf(fid,':test-result: FAIL\n');
        fprintf(fid,':number-tests: %d\n', num_tests);
        fprintf(fid,':number-failed-tests: %d\n', num_failed_tests);
        fprintf(fid,':list-of-failed-tests: %s\n', failed_tests{:});
    else
        fprintf(fid,':test-result: PASS\n');
        fprintf(fid,':number-tests: %d\n', num_tests);
        fprintf(fid,':number-failed-tests: 0\n');
    end
    fclose(fid);
    exit;
Order of Execution of .mod files
As the test suite can be run in parallel, tests that depend on the prior execution of another .mod file can run into problems. Hence, to impose an order on the execution of .mod files, add dependencies like the following to tests/Makefile.am:
    arima/mod1a.m.trs: arima/mod1.m.trs
    arima/mod1b.m.trs: arima/mod1.m.trs
    arima/mod1c.m.trs: arima/mod1.m.trs
Here, you see that mod1a.m.trs, mod1b.m.trs, and mod1c.m.trs will not be created until mod1.m.trs has been created. Hence, mod1a.mod, mod1b.mod, and mod1c.mod will not be run until mod1.mod has finished execution.
Cleaning
The standard output files (.m, .log, _static.m, _dynamic.m, .trs, etc.) are already automatically cleaned by the instructions in clean-local. However, if the test file you added creates some non-standard output, make sure to add an rm rule for it in this section of Makefile.am. Say, for example, your test file creates a file with extension .abcd. Then, you would add the following line to the clean-local target:
    clean-local:
        rm -f $(M_TRS_FILES) \
            ...
        rm -f path/to/*.abcd