For a very large MCP model (a square system of nonlinear equations with complementarity conditions) we observed that it became unsolvable, even with a very good starting point. The reason was that the model had been kept as small as possible in terms of equations and variables. As a result the model was much denser and more nonlinear (in terms of nonlinear nonzero elements) than needed. Basically, modeling systems and solvers that exploit sparsity should be fed large, sparse models rather than the corresponding smaller, denser models. After substituting out some common expressions into separate equations (hence increasing the size in terms of equations and variables) we could solve the larger models (≈400K variables and equations) fairly easily.
A linear example of such a problem is shown here: http://yetanothermathprogrammingconsultant.blogspot.com/2010/04/l1-portfolio-formulation.html.
An example of a nonlinear construct that should raise a red flag is something like:
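A sketch of this kind of construct, with hypothetical names: a nonlinear function applied directly to a long linear sum.

```gams
* Red flag: a long linear sum inside a nonlinear function.
* Every variable x(j) appearing in the sum becomes a nonlinear
* nonzero in the Jacobian row of this equation.
e(i).. f(i) =e= sqr(sum(j, c(j)*x(j)));
```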
In the model I was looking at, a combination of these two situations occurred: a large linear expression appeared inside a nonlinear expression, and that nonlinear expression was repeated several times in the model. This caused extra nonzero elements, and those nonzeroes were also highly nonlinear.
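A minimal sketch of the repair, using hypothetical names: move the common linear expression into its own variable with a defining (linear) equation, so the expression is generated only once and the nonlinear functions each touch a single variable.

```gams
* Before (dense): the linear sum is repeated inside several
* nonlinear equations, producing many nonlinear nonzeroes.
eq1(i).. g(i) =e= sqr(sum(j, a(i,j)*x(j)));
eq2(i).. h(i) =e= exp(sum(j, a(i,j)*x(j)));

* After (sparse): one extra linear equation and variable per i;
* the nonlinear functions now see only the single variable y(i).
defy(i).. y(i) =e= sum(j, a(i,j)*x(j));
eq1s(i).. g(i) =e= sqr(y(i));
eq2s(i).. h(i) =e= exp(y(i));
```

The model grows by one block of (linear) equations and one block of variables, but both the density and the degree of nonlinearity of the Jacobian drop substantially.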
For a smaller version of the model, the difference in statistics is striking (I was unable to get the large version to solve at all with the original formulation):
Old model:

   BLOCKS OF EQUATIONS      25     SINGLE EQUATIONS     106,219
   GENERATION TIME      =   52.151 SECONDS
   EXECUTION TIME       =   52.463 SECONDS
   RESOURCE USAGE, LIMIT    281.806     1000.000

New formulation:

   BLOCKS OF EQUATIONS      26     SINGLE EQUATIONS     107,605
   GENERATION TIME      =    7.566 SECONDS
   EXECUTION TIME       =    7.972 SECONDS
   RESOURCE USAGE, LIMIT     57.216     1000.000
Unfortunately, solvers and modeling systems don’t give much feedback on these issues. Such a diagnosis requires some modeling knowledge and experience (one reason why I am sometimes hired to help out).