Wednesday, December 14, 2011

Logs in NLP

In http://www.or-exchange.com/questions/4214/entropy-maximization-log0-ampl-tricks someone seems to claim that:

y=log(x),
x.lo = 0.001

is not good modeling practice. I don’t agree: the better NLP solvers do not evaluate nonlinear functions outside the variable bounds. And from my more than 20 years of experience in nonlinear modeling: I have used this construct all the time.
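To make this concrete, here is a minimal, self-contained GAMS sketch of the construct (all names are made up for illustration):

* tiny NLP illustrating the log construct (hypothetical names)
variables x, y, z;
equations deflog, defobj;

deflog.. y =e= log(x);
defobj.. z =e= sqr(y - 1);

* small positive lower bound keeps log() away from zero
x.lo = 0.001;
* start inside the bounds
x.l = 1;

model m /all/;
solve m using nlp minimizing z;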

A large percentage of NLPs that fail can be fixed by specifying:

  • better bounds
  • a better starting point
  • better scaling (see the sketch after this list)
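In GAMS all three are just variable and model attributes. A small sketch of what I mean, again with made-up names:

* tiny NLP showing the bound / level / scale attributes (hypothetical names)
set i /i1*i5/;
variables x(i), z;
equations defobj;
defobj.. z =e= sum(i, sqr(log(x(i)) - ord(i)));

* better bounds
x.lo(i) = 0.001;
x.up(i) = 1e4;

* better starting point (inside the bounds)
x.l(i) = 1;

* better scaling: tell the solver x is expected to be of order 10
x.scale(i) = 10;

model m /all/;
m.scaleopt = 1;

solve m using nlp minimizing z;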

The comment below talks about IPOPT. No, even IPOPT will not evaluate nonlinear functions outside their bounds (come on, it is an interior point code!). See http://drops.dagstuhl.de/volltexte/2009/2089/pdf/09061.WaechterAndreas.Paper.2089.pdf. A slight complication is the bound_relax_factor option, but in practice that is not a problem. Maybe the poster is confusing infeasibility with being in the interior: a point can be strictly inside the bounds but still violate the nonlinear constraints, i.e. be infeasible.
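If you really want to be strict about it, you can switch this relaxation off through an options file. Under GAMS the sketch below should do it (bound_relax_factor is a documented IPOPT option with a tiny default of 1e-8; the model and variable names are again made up):

* GAMS side: select IPOPT and point it to an options file
option nlp = ipopt;
m.optfile = 1;
solve m using nlp minimizing z;

contents of ipopt.opt:

# do not relax the variable bounds at all (default: 1e-8)
bound_relax_factor 0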

Apart from this remark: yes, IPOPT is an excellent solver. It is a fantastic complement to the well-known active-set solvers like CONOPT, MINOS and SNOPT. If you have a large model with many superbasic variables, those active-set codes tend to have problems with that, while an interior point algorithm does not really care.
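Trying this out is cheap: under GAMS, switching between an active-set code and an interior point code is just one option statement (again with a hypothetical model m and objective z):

* solve the same model with an active-set solver and with IPOPT
option nlp = conopt;
solve m using nlp minimizing z;

option nlp = ipopt;
solve m using nlp minimizing z;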