Wednesday, August 27, 2008

Smooth approximation for ifthen()

The construct ifthen(x1 > x2, y, z) can be approximated by the smooth function:

z + (y-z)*0.5*(1+tanh(k*(x1-x2)))

where k is a large constant (e.g. k=1000). This will allow you to use the NLP and CNS model types instead of DNLP.
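As a sketch of this approximation in Python (the function name and the default k=1000 are mine, not GAMS syntax):

```python
import math

def smooth_ifthen(x1, x2, y, z, k=1000.0):
    """Smooth approximation of ifthen(x1 > x2, y, z) via tanh.

    For large k the result is close to y when x1 > x2 and close to z
    when x1 < x2; at x1 == x2 it returns the midpoint (y+z)/2.
    """
    return z + (y - z) * 0.5 * (1.0 + math.tanh(k * (x1 - x2)))
```

Note that a larger k gives a sharper switch but a more ill-conditioned gradient near x1 = x2, so k is a trade-off.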

See also: http://en.wikipedia.org/wiki/Heaviside_step_function




The logistic function (http://en.wikipedia.org/wiki/Logistic_function) can be expressed in two ways:
  1. f(x,k) = (1+tanh(k·x))/2
  2. f(x,k) = 1/(1+exp(-2k·x))

The exp() version can overflow more easily (for sufficiently negative k·x). Newer versions of GAMS implement a version of exp() that does not overflow (by truncation), so either choice should work.
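A quick Python check (function names are mine) confirms the two forms agree and illustrates the overflow issue with the exp() form:

```python
import math

def logistic_tanh(x, k):
    # Form 1: (1 + tanh(k*x))/2 -- tanh saturates, so it never overflows.
    return 0.5 * (1.0 + math.tanh(k * x))

def logistic_exp(x, k):
    # Form 2: 1/(1 + exp(-2k*x)) -- exp() overflows for sufficiently
    # negative k*x (Python raises OverflowError; GAMS truncates instead).
    return 1.0 / (1.0 + math.exp(-2.0 * k * x))
```

With k = 1000, `logistic_exp(-1.0, 1000.0)` already asks for exp(2000) and overflows, while `logistic_tanh(-1.0, 1000.0)` simply returns a value close to 0.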

This approximation can also be used for max(x,y) and min(x,y), since:

  1. max(x,y) = ifthen(x>y, x, y)
  2. min(x,y) = ifthen(y>x, x, y)
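Substituting these identities into the tanh formula gives smooth max and min directly; a Python sketch (function names are mine):

```python
import math

def smooth_max(x, y, k=1000.0):
    # max(x, y) = ifthen(x > y, x, y), smoothed with tanh
    return y + (x - y) * 0.5 * (1.0 + math.tanh(k * (x - y)))

def smooth_min(x, y, k=1000.0):
    # min(x, y) = ifthen(y > x, x, y), smoothed with tanh
    return y + (x - y) * 0.5 * (1.0 + math.tanh(k * (y - x)))
```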

6 comments:

  1. Which approximation is better, this one or the one suggested by the CONOPT manual for approximating max?

    This is the one I'm talking about:

    for max(f(x),g(y)),
    (f(x)+g(y)+SQRT(SQR(f(x)-g(y)) + SQR(delta)))/2

    Thanks.

  2. The behavior near the kink is slightly different. E.g. F(x) = abs(x) = max(-x,x). Then for x=0, y=-x we have (f(x)+g(y)+SQRT(SQR(f(x)-g(y)) + SQR(delta)))/2 = delta/2. If you want to require F(0) = 0, then the version y + (x-y)*0.5*(1+tanh(k*(x-y))) may be more appropriate, as it is exactly 0 for x=0, y=-x.

    See http://yetanothermathprogrammingconsultant.blogspot.com/2008/09/smooth-approximations.html
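For a concrete comparison at the kink, a small Python sketch (function names are mine): taking abs(x) = max(-x, x) at x = 0, the sqrt-based form returns delta/2 while the tanh-based form returns exactly 0.

```python
import math

def sqrt_max(f, g, delta=1e-3):
    # CONOPT-manual-style smooth max; at f == g it returns
    # (f + g + delta)/2, i.e. it overshoots the true max by delta/2.
    return (f + g + math.sqrt((f - g) ** 2 + delta ** 2)) / 2.0

def tanh_max(x, y, k=1000.0):
    # tanh-based smooth max from the post; exact at x == y.
    return y + (x - y) * 0.5 * (1.0 + math.tanh(k * (x - y)))

# abs(0) = max(-0, 0): sqrt version gives delta/2, tanh version gives 0.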

  3. Thanks for the reply.

    Yes, I can imagine the difference, but which one is better (or more suitable)? Is there some kind of rule of thumb for choosing which approximation suits me better?

    Because when I implemented yours in my program, its convergence percentage decreased significantly. Obviously this depends on the case, so I'm still not clear about this matter.

    Thanks again, and sorry for my poor English.

  4. That is quite possible. Maybe the curvature makes your original version better for your case. You may want to try a MINLP formulation to make the NLP part easier. You may also want to check whether you can exploit any convexities present in your model. Sorry: there is very little one can say in the abstract; I would really need to study the actual model to give better advice. Apart from that, you should of course keep the approximation that works best.

  5. I see. Actually, I used a MINLP formulation before trying to reformulate to an NLP with a smooth approximation.

    Thanks for your advice.
