Monday, May 6, 2024

Rounding inside an optimization model

In [1], the question was asked: how can I round to two decimal places inside an optimization model? I.e., \[\color{darkred}y_{i,j} = \mathbf{round}(\color{darkred}x_{i,j},2)\] To get this off my chest first: I have never encountered a situation that needs this. Rounding to two decimal places is something for reporting, not something we typically want inside model equations. That said, let me look into this modeling problem a bit more as an exercise. 

Of course, the first thing we can do is to drop the indices (as mathematicians would say: WLOG, Without Loss Of Generality). So we have as our problem:  \[\color{darkred}y = \mathbf{round}(\color{darkred}x,2)\] 

Rounding to integer (MIP)

Let's first concentrate on a simpler problem: \[\color{darkred}y = \mathbf{round}(\color{darkred}x)\] In a MIP solver, we can reformulate this as: \[\begin{align} & \color{darkred}y - 0.5 \le \color{darkred}x \le \color{darkred}y + 0.5 \\ & \color{darkred}y \>\mathbf{integer} \end{align}\] or, if you prefer, \[\begin{align} & \color{darkred}x - 0.5 \le \color{darkred}y \le \color{darkred}x + 0.5 \\ & \color{darkred}y \>\mathbf{integer}\end{align}\]This says: \(\color{darkred}x\) and \(\color{darkred}y\) should be within a distance of 0.5 of each other. This approach has some ambiguity when the fractional part is exactly 0.5. Resolving that ambiguity, e.g. by replacing one of the \(\le\) signs by a \(\lt\): \[\color{darkred}y - 0.5 \le \color{darkred}x \lt \color{darkred}y + 0.5 \] is, in my opinion, not worthwhile: the absence of strict inequalities in optimization, floating-point inaccuracies, and solver feasibility tolerances make such an attempt largely futile.

Sometimes I see people tinkering with \[\color{darkred}y - 0.5 \le \color{darkred}x \le \color{darkred}y + 0.5 - \color{darkblue}\varepsilon \] (so that \(n+0.5\) is rounded to \(n+1\)), but that requires some thinking about the value of \(\color{darkblue}\varepsilon\gt 0\), and it also creates small holes in the domain of \(\color{darkred}x\). Another complication is that we need to define what happens when \(\color{darkred}x\lt 0\) (e.g., consider the rounding scheme "round away from zero"). Furthermore, I think it is better not to let some small tolerance (possibly within numerical noise) dictate the optimal value of \(\color{darkred}y\): we may miss a better solution for no good reason. Again, I would not bother with all that and would use my more ambiguous approach. 
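To make this concrete, here is a minimal sketch of the first formulation using Python with the PuLP package (the toy objective and bounds are my own invention, just to have a complete model; any MIP modeling tool would do):

```python
import pulp

# Toy model: maximize x subject to y = round(x), with y integer.
prob = pulp.LpProblem("round_to_integer", pulp.LpMaximize)

x = pulp.LpVariable("x", lowBound=0, upBound=9.2)                # continuous
y = pulp.LpVariable("y", lowBound=0, upBound=10, cat="Integer")  # the rounded value

prob += x                      # stand-in objective

# y - 0.5 <= x <= y + 0.5: x and y within distance 0.5 of each other
prob += y - 0.5 <= x
prob += x <= y + 0.5

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(x.value(), y.value())    # expected: x = 9.2, y = 9
```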


Rounding to two decimal places (MIP)

Using the above approach, we can apply scaling to round to two decimal places: \[\begin{align} & \color{darkred}z - 0.5 \le 100 \color{darkred}x  \le \color{darkred}z + 0.5 \\ & \color{darkred}y = \color{darkred}z/100 \\ & \color{darkred}z \>\mathbf{integer} \end{align}\] Here the integer variable \(\color{darkred}z\) represents \(100\color{darkred}y\): we round \(100\color{darkred}x\) to the nearest integer and then scale back. 
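A quick sanity check of this construct, again as a PuLP sketch (fixing \(x\) to a known value is just for the demo; in a real model \(x\) would be determined by the rest of the problem):

```python
import pulp

prob = pulp.LpProblem("round_to_2dp", pulp.LpMinimize)

x = pulp.LpVariable("x", lowBound=0, upBound=1)
y = pulp.LpVariable("y", lowBound=0, upBound=1)
z = pulp.LpVariable("z", lowBound=0, upBound=100, cat="Integer")  # z stands for 100*y

prob += y                      # placeholder objective; z is already pinned down

prob += x == 0.1234            # fix x for the demo
prob += z - 0.5 <= 100 * x     # z - 0.5 <= 100 x
prob += 100 * x <= z + 0.5     # 100 x <= z + 0.5
prob += y == 0.01 * z          # y = z/100

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(y.value())               # expected: 0.12
```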


Rounding to two decimal places (NLP)

In an NLP model, we could directly write \(\color{darkred}y = \mathbf{round}(\color{darkred}x,2)\) as a constraint. However, this is a dangerous idea: this function is discontinuous (and non-differentiable). Using it directly may cause all kinds of problems. In GAMS, we are warned about this: the error message will tell us to use a DNLP model instead. (That is just the same as an NLP model, but with these warnings turned off.) Using the MIP approach described above inside a nonlinear model, we end up with a MINLP model. Note that most global NLP solvers are also MINLP solvers. 
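A tiny Python experiment illustrates why derivative-based NLP solvers have trouble here: the function jumps at each rounding boundary and has a zero derivative everywhere else, so there is no useful gradient information to follow.

```python
# round(x, 2) is a step function: a change of 0.0002 in x moves the result by 0.01
for x in (0.1149, 0.1151):
    print(x, "->", round(x, 2))   # 0.1149 -> 0.11, 0.1151 -> 0.12
```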

References
