Wednesday, December 14, 2011

Logs in NLP

Someone seems to claim that:

x.lo = 0.001

is not good modeling practice. I don’t agree: the better NLP solvers do not evaluate nonlinear functions outside their bounds. And from my more than 20 years of experience in nonlinear modeling: I have used this construct all the time.
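
To make the construct concrete, here is a minimal sketch in GAMS (the variable and equation names are made up for illustration): the lower bound keeps log(x) well defined at every point the solver is allowed to evaluate.

variables x, z;
equations obj;

* keep x away from zero so log(x) is defined at every trial point
x.lo = 0.001;
* give the solver an interior starting point instead of the default of 0
x.l = 1;

obj.. z =e= log(x) - x;

model m / all /;
solve m using nlp maximizing z;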

A large percentage of NLPs that fail can be fixed by specifying (a small GAMS sketch follows the list):

  • better bounds
  • better starting point
  • and better scaling
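
Continuing the sketch above, these three items map onto GAMS variable and model attributes roughly as follows (the values are purely illustrative; note that .scale is only honored when the model’s scaleopt attribute is switched on):

* better bounds: stay away from regions where the functions are undefined
x.lo = 0.001;
x.up = 1000;

* better starting point: an interior value instead of the default of 0
x.l = 1;

* better scaling: bring x to order 1
x.scale = 100;
m.scaleopt = 1;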

The comment below talks about IPOPT. No, even IPOPT will not evaluate nonlinear functions outside their bounds (come on, it is an interior point code!!). A slight complication is the bound_relax_factor option, but in practice that is not a problem. Maybe the poster is confusing infeasibility with being interior: a point can be inside the bounds but still be infeasible with respect to the constraints.
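
For completeness: if you really want the bounds honored exactly under GAMS/IPOPT, bound relaxation can be switched off through a solver option file. The option name is IPOPT’s; the surrounding statements are just the usual GAMS option-file mechanics and assume the model m from the sketch above.

$onecho > ipopt.opt
bound_relax_factor 0
$offecho

m.optfile = 1;
option nlp = ipopt;
solve m using nlp maximizing z;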

Apart from this remark: indeed, IPOPT is an excellent solver. It is a fantastic complement to the well-known active-set solvers like CONOPT, MINOS and SNOPT. If you have large models with many superbasic variables, those active-set solvers tend to have problems, while an interior point algorithm does not really care.