I was given a very large convex, linearly constrained model (exceeding half a million variables). The "standard" NLP solvers (Conopt, Minos, Snopt) were very slow on this model, not even coming close to optimality after hours of work. Mosek solved the model very fast, but not to optimality: Solution status : NEAR_OPTIMAL.
Unfortunately, this near-optimal point could not be improved quickly by other solvers. In the end we solved a few QP approximations of the form:
min g(x) = f(x₀) + ∇f(x₀)ᵀ(x − x₀) + ½ (x − x₀)ᵀ ∇²f(x₀) (x − x₀)
s.t. Ax = b
Here x₀ is the current value of x.
Basically this is a second-order Taylor series approximation of the objective around x₀. This actually converged rapidly to an optimal solution we could verify with Conopt (Conopt complained about some reduced gradients being too large, but could not improve the solution further). What this algorithm lacks in refinement it gains in simplicity: just a loop around a solve statement.
Implementing such a problem-specific algorithm is easy to do in GAMS using its procedural features (the complete code in this case is less than 20 lines), and for the most difficult problems this can often help. We further benefited from having access to multiple NLP solvers (in this case Mosek and Conopt).
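For illustration, here is a minimal, self-contained sketch of such a loop in GAMS, applied to a made-up toy problem (minimize the convex function f(x) = Σᵢ exp(xᵢ) subject to a single linear constraint). All names are hypothetical and this is not the original model; it just shows the shape of the algorithm.

* hypothetical sketch: sequential QP approximation of a convex NLP
set i /i1*i5/, iter /it1*it10/;
parameter x0(i)   'current point'
          fx0     'f evaluated at x0'
          grad(i) 'gradient of f at x0';
x0(i) = 0;

variables x(i), z;
equations obj 'quadratic Taylor approximation of f around x0'
          con 'the linear constraint Ax=b (here: sum of x = 1)';
* the Hessian of this toy f is diagonal with entries exp(x0(i))
obj.. z =e= fx0 + sum(i, grad(i)*(x(i)-x0(i)))
          + 0.5*sum(i, exp(x0(i))*sqr(x(i)-x0(i)));
con.. sum(i, x(i)) =e= 1;
model taylorQP /obj, con/;

loop(iter,
*  re-evaluate f and its gradient at the current point
   fx0     = sum(i, exp(x0(i)));
   grad(i) = exp(x0(i));
   solve taylorQP minimizing z using qcp;
*  move to the new point
   x0(i)   = x.l(i);
);
display x0;

In practice one would also add a convergence test (e.g. stop when the step x.l(i) − x0(i) becomes small) and, as described above, verify the final point with a second NLP solver.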