Although GAMS is usually run from its IDE, it also has a command line interface. This can be used when calling GAMS from a batch (.cmd) file.
Recently I was asked to speed up running a number of scenarios by exploiting an 8-core machine. As the NLP (actually MCP) models used serial solvers, we cannot simply use THREADS=8 as we would with MIP solvers such as Cplex or Gurobi. One way to handle this is to write some batch files:
batch1.cmd:

    gams nlpmodel.gms --scenario=1 o=m1.lst lo=2 lf=m1.log
    gams nlpmodel.gms --scenario=2 o=m2.lst lo=2 lf=m2.log
    gams nlpmodel.gms --scenario=3 o=m3.lst lo=2 lf=m3.log
Then call these batch files from another batch file:
runall.cmd:

    start batch1.cmd
    start batch2.cmd
    start batch3.cmd
    start batch4.cmd
The START command launches the specified batch file in a new window and returns immediately. As a result the batch files batch1.cmd, batch2.cmd, … will run in parallel.
To make this a little bit more configurable we can write a GAMS file that generates these files.
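A generator along these lines can be written with the put facility. This is only a sketch: the scenario count, job count, file names, and the round-robin assignment below are illustrative assumptions, not the original code.

    set j 'parallel jobs' /job1*job7/;
    set s 'scenarios'     /scen1*scen21/;
    set map(j,s) 'round-robin assignment of scenarios to jobs';
    map(j,s) = yes$(mod(ord(s)-1, card(j))+1 = ord(j));

    file f /dummy.txt/;
    loop(j,
    *  redirect the put stream to batch<n>.cmd
       put_utility f 'ren' / 'batch':0 (ord(j)):0:0 '.cmd':0;
       loop(s$map(j,s),
          put 'call runmodel ':0 (ord(s)):0:0 /;
       );
    );
    *  finally write runall.cmd, which starts the batch files in parallel
    put_utility f 'ren' / 'runall.cmd':0;
    loop(j,
       put 'start batch':0 (ord(j)):0:0 '.cmd':0 /;
    );
    putclose f;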
As we can see, the GAMS syntax around its put facility is very ugly (this is one of the parts of GAMS that urgently needs a redesign). Ignoring the ugly syntax, however, it generates batch files batch1.cmd, batch2.cmd, …, each containing a series of call runmodel commands, plus a runall.cmd that starts them all.
We use 7 CPUs and thus 7 parallel jobs here (the idea is to keep one CPU available for normal work). It remains to implement runmodel.cmd:
runmodel.cmd:

    gams solve.gms r=save1 lo=2 lf=model%1.log o=model%1.lst --scenario=%1
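The restart file referenced by r=save1 has to be produced first by a base run that does the declarations and data work, so that solve.gms only performs the scenario-specific solve. Assuming that setup work lives in a file prep.gms (a hypothetical name), the base run would look like:

    gams prep.gms s=save1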
Notes:
1. When running under the IDE you may get a message about the LST (listing) file being used by another process. This is under investigation.
2. GAMS has some new asynchronous calling mechanisms. These do not really improve the above code (adding yet another put_utility call is never going to look good). For completeness:
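With the exec.async put_utility key the jobs can be launched directly from GAMS and tracked via job handles. Again a sketch only: the scenario count and file names are assumptions, and the busy-wait loop is schematic (a real script would pause between polls; check the put_utility documentation for the exact jobStatus codes).

    set s 'scenarios' /scen1*scen8/;
    parameter h(s) 'job handle per scenario';
    scalar running;

    file f /dummy.txt/;
    loop(s,
    *  launch each scenario as an asynchronous job
       put_utility f 'exec.async' / 'gams solve.gms r=save1 --scenario=':0 (ord(s)):0:0;
       h(s) = jobHandle;
    );
    *  poll until no job reports status 1 (still running)
    repeat
       running = sum(s, 1$(jobStatus(h(s))=1));
    until running = 0;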
Since you are writing a wrapper, you might as well do it in Python. Python has much more powerful tools, like multiprocessing, which are easy to use and more readable.
http://docs.python.org/library/multiprocessing.html
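A minimal sketch of that suggestion, using a multiprocessing.Pool with 7 workers. The command line mirrors runmodel.cmd above; the actual subprocess call is commented out so the snippet is self-contained, and the scenario range is an illustrative assumption.

```python
import subprocess  # used for the real GAMS call, see comment below
from multiprocessing import Pool

def run_scenario(n):
    """Run one GAMS scenario; the command line mirrors runmodel.cmd."""
    cmd = ["gams", "solve.gms", "r=save1", "lo=2",
           "lf=model%d.log" % n, "o=model%d.lst" % n, "--scenario=%d" % n]
    # In a real setup: subprocess.run(cmd, check=True)
    return n  # completion marker for this sketch

if __name__ == "__main__":
    # 7 worker processes: keep one CPU free for normal work
    with Pool(processes=7) as pool:
        finished = pool.map(run_scenario, range(1, 22))
    print(sorted(finished))
```

The Pool hands out scenarios to workers as they become free, so the 7-way throttling that the batch files implement by hand comes for free here.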