Wednesday, December 14, 2011

Logs in NLP

Someone seems to claim that:

x.lo = 0.001

is not good modeling practice. I don’t agree: the better NLP solvers don’t evaluate nonlinear functions outside their bounds, and in my more than 20 years of experience in nonlinear modeling I have used this construct all the time.
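A minimal sketch of the idea, using scipy rather than the GAMS/solver stack discussed in the post (the objective and the bound value are my own illustrative choices): a bound-constrained method such as L-BFGS-B only evaluates the objective at points within the variable bounds, so a lower bound of 0.001 protects log(x) from nonpositive arguments, just as `x.lo = 0.001` would.

```python
# Illustrative sketch: a lower bound guards log(x), analogous to the
# GAMS statement x.lo = 0.001.  L-BFGS-B keeps all iterates within
# the variable bounds, so log() never sees a nonpositive argument.
import numpy as np
from scipy.optimize import minimize

def objective(v):
    x = v[0]
    # Safe: the solver never hands us an x below its lower bound.
    return x - np.log(x)

res = minimize(objective, x0=[0.5], method="L-BFGS-B",
               bounds=[(0.001, None)])
print(res.x[0])  # the unconstrained minimum of x - log(x) is at x = 1
```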

A large percentage of NLPs that fail can be fixed by specifying:

  • better bounds
  • better starting point
  • and better scaling
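To make the scaling point concrete, here is a hypothetical illustration (the numbers and the scaling factor are my own): a variable that lives around 1e6 can be rewritten in units where the variable, the gradients, and the starting point are all O(1), which is far kinder to the solver.

```python
# Illustrative sketch of the scaling advice: rescale a variable that
# is on the order of 1e6 so the solver works with O(1) quantities.
from scipy.optimize import minimize

# Original model: f(x) = (x - 2e6)^2 with x around 1e6; gradients
# and function values are then enormous in these units.
# Rescaled model: y = x / 1e6, so g(y) is effectively (y - 2)^2.
g = lambda y: (1e6 * y[0] - 2e6) ** 2 / 1e12

res = minimize(g, x0=[1.0], method="BFGS")
x_opt = 1e6 * res.x[0]  # recover x in the original units
print(x_opt)
```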

The comment below talks about IPOPT. No, even IPOPT will not evaluate nonlinear functions outside their bounds (come on, it is an interior point code!). A slight complication is the bound_relax_factor option, but in practice that is not a problem. Maybe the poster is confusing infeasibility with interior points: a point can be inside the bounds but still be infeasible.

Apart from this remark, IPOPT is indeed an excellent solver. It is a fantastic complement to the well-known active-set solvers like CONOPT, MINOS and SNOPT: those tend to have problems with large models that have many superbasic variables, while an interior point algorithm does not really care.


  1. "the better NLP solvers don’t evaluate nonlinear functions outside their bounds"

    I'm not sure that is true. IPOPT is an excellent infeasible path solver which uses a filter line search that is based on minimizing constraint violations. Perhaps your experience has primarily been with feasible path solvers like CONOPT? Then yes, that would be true. There is also the other issue of nonlinearity. The gradient of log(x) changes very rapidly for values of x near 0, while the gradient for x=exp(y) is more benign for x near 0 (from a numerical scaling standpoint, the absolute ranges are much smaller).
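    The commenter's nonlinearity point is easy to check numerically (a small sketch of my own, not from the post): the derivative of log(x) is 1/x, which explodes as x approaches 0, whereas after the substitution x = exp(y) the term log(x) becomes simply y, whose derivative with respect to y is the constant 1.

    ```python
    # Compare d/dx log(x) = 1/x near zero with the substituted form:
    # after x = exp(y), log(x) = y, so d/dy is 1 everywhere.
    grads = []
    for x in (1e-1, 1e-3, 1e-6):
        grads.append(1.0 / x)  # blows up as x -> 0
    print(grads)  # the substituted form's gradient stays at 1.0
    ```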

  2. You are quite right -- my comment was an utterly ignorant one. The filter method elects to decrease the constraint violation in the equality constraints c(x) only. Interior point methods always stay within the bounds of inequality constraints because of the log-barrier term. I retract my ignorant statement unreservedly.

  3. @Erwin: I agree with you, provided that (a) we can be confident that the a priori bound does not cut off the optimum, and (b) it is not so close to zero that a little rounding error could land you in hot water.
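    Caveat (a) is worth a quick demonstration (a hypothetical example of my own, with a made-up objective): if the true minimizer lies below the a priori bound, the solver silently returns the bound instead of the optimum.

    ```python
    # Illustrative sketch of caveat (a): the true minimizer of
    # (x - 0.0005)^2 is x* = 0.0005, which the lower bound x >= 0.001
    # excludes, so the solver stops at the bound.
    from scipy.optimize import minimize

    res = minimize(lambda v: (v[0] - 0.0005) ** 2, x0=[1.0],
                   method="L-BFGS-B", bounds=[(0.001, None)])
    print(res.x[0])  # the bound 0.001, not the true optimum 0.0005
    ```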