#### Introduction

The problem of modeling \(y=\min(x_1,x_2)\) in optimization models is not completely trivial. Here are some notes.

#### Small example model

To experiment, here is a small test model:

MINLP Model |
---|

\[\begin{align} \max\>& \color{darkred} z= \color{darkred}x_1 + 2\color{darkred}x_2\\ & 2\color{darkred}x_1 + \color{darkred}x_2 = 5 + \min(\color{darkred}x_1,\color{darkred}x_2)\\ & \color{darkred}x_1,\color{darkred}x_2 \in [0,4] \end{align}\] |

#### Global MINLP solvers

Interestingly, the global MINLP solvers Baron, Couenne, and Antigone (under GAMS) have no direct facilities for this function:

Solver | Error message
---|---
Baron | `*** Cannot handle function 'min'`
Couenne | `Gams function min not supported.`
Antigone | `ERROR: Unsupported function min in equation e0`

This is unfortunate. I think it would be better if this function were supported directly. Of course, we are not defeated that easily. As the solvers do support the \(\mathrm{abs}()\) function, let's see if we can use that instead. A possible reformulation is to replace \(\min(x_1,x_2)\) by \[\min(x_1,x_2)=\frac{x_1+x_2-|x_1-x_2|}{2}\] The derivation is as follows: \[\frac{x_1+x_2-|x_1-x_2|}{2} = \begin{cases}\displaystyle\frac{x_1+x_2-x_1+x_2}{2}=x_2 & \text{if $x_1\ge x_2$}\\ \displaystyle\frac{x_1+x_2+x_1-x_2}{2}=x_1 & \text{if $x_1\lt x_2$}\end{cases}\] Using this we see:

```
                LOWER     LEVEL     UPPER

---- VAR z      -INF      9.0000    +INF
---- VAR x1       .       1.0000    4.0000
---- VAR x2       .       4.0000    4.0000
```
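As a quick sanity check, the abs-based identity can be verified numerically. This is just an illustration in Python, not part of the GAMS model:

```python
# Sanity check of the identity min(x1, x2) = (x1 + x2 - |x1 - x2|) / 2.
import itertools

def min_via_abs(x1, x2):
    """Reformulated min using only the abs() function."""
    return (x1 + x2 - abs(x1 - x2)) / 2

# Check on a grid of points in [0, 4], including the solution above (x1=1, x2=4).
grid = [i / 2 for i in range(9)]          # 0, 0.5, 1, ..., 4
for x1, x2 in itertools.product(grid, grid):
    assert min_via_abs(x1, x2) == min(x1, x2)

print(min_via_abs(1, 4))  # 1.0
```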

#### MIP Formulation

The construct \(y=\min(x_1,x_2)\) can be interpreted as \[\begin{align} & y\le x_1 \> \mathbf{and}\> y\le x_2\\ & y\ge x_1 \>\mathbf{or}\> y\ge x_2 \end{align}\] With this, we can form the set of constraints:

MIP big-M reformulation of min function |
---|

\[\begin{align} & \color{darkred} y \le \color{darkred}x_1 \\ & \color{darkred} y \le \color{darkred}x_2\\ & \color{darkred} y \ge \color{darkred}x_1 - \color{darkblue}M \color{darkred}\delta \\ & \color{darkred} y \ge \color{darkred}x_2 - \color{darkblue}M (1-\color{darkred}\delta)\\ & \color{darkred}\delta \in \{0,1\} \end{align}\] |

The main problem here is that we need a good value for \(M\). Here, \(M\) should be at least as large as the largest difference between \(x_1\) and \(x_2\) we can observe. In our case \(x_1,x_2\in[0,4]\), so a good value is \(M=4\). The solution can look like:


```
                  LOWER     LEVEL     UPPER    MARGINAL

---- VAR z        -INF      9.0000    +INF       .
---- VAR x1         .       1.0000    4.0000     .
---- VAR x2         .       4.0000    4.0000    1.0000
---- VAR y          .       1.0000    +INF       .
---- VAR delta      .        .        1.0000    EPS
```
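The results above can be reproduced outside GAMS. As a sketch, the following Python code enumerates the single binary variable \(\delta\) and solves an LP for each value with `scipy.optimize.linprog` (this by-hand enumeration stands in for what a MIP solver would do automatically):

```python
# Big-M reformulation of y = min(x1, x2), solved by enumerating delta in {0, 1}.
from scipy.optimize import linprog

M = 4.0  # largest possible |x1 - x2| given x1, x2 in [0, 4]
best = None
for delta in (0, 1):
    res = linprog(
        c=[-1, -2, 0],                 # minimize -(x1 + 2*x2); vars = [x1, x2, y]
        A_eq=[[2, 1, -1]], b_eq=[5],   # 2*x1 + x2 - y = 5
        A_ub=[
            [-1, 0, 1],                # y <= x1
            [0, -1, 1],                # y <= x2
            [1, 0, -1],                # x1 - y <= M*delta
            [0, 1, -1],                # x2 - y <= M*(1 - delta)
        ],
        b_ub=[0, 0, M * delta, M * (1 - delta)],
        bounds=[(0, 4)] * 3,
    )
    if res.success and (best is None or -res.fun > best[0]):
        best = (-res.fun, res.x)

z, (x1, x2, y) = best
print(round(z, 4), round(x1, 4), round(x2, 4), round(y, 4))  # 9.0 1.0 4.0 1.0
```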

It may not always be so easy to derive good values. In that case, an alternative is to use SOS1 variables (SOS1 stands for Special Ordered Set of type 1). This type of variable is supported by most MIP solvers (but not all).

MIP SOS1 reformulation of the min function |
---|

\[\begin{align} & \color{darkred} y \le \color{darkred}x_1 \\ & \color{darkred} y \le \color{darkred}x_2\\ & \color{darkred} y \ge \color{darkred}x_1 - \color{darkred}s_1 \\ & \color{darkred} y \ge \color{darkred}x_2 - \color{darkred}s_2\\ & \color{darkred}s_1,\color{darkred}s_2 \ge 0 \\ & (\color{darkred}s_1,\color{darkred}s_2) \in SOS1\end{align}\] |

The SOS1 condition says that at most one of the variables \((s_1,s_2)\) can be non-zero (note that both can be zero). This means: if one of the variables is non-zero, the other one will be zero. As you can see, no bounds on \(s_i\) are needed, so this formulation is useful when we cannot find a good value for \(M\) for the previous formulation.
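To illustrate, the branching a MIP solver performs on an SOS1 pair can be emulated by hand: in one branch \(s_1\) is fixed to zero, in the other \(s_2\). A Python/scipy sketch (again just an illustration; a real solver handles this internally):

```python
# Emulate SOS1 branching on (s1, s2): fix one member to zero in each branch.
from scipy.optimize import linprog

best = None
for fix_s1 in (True, False):
    res = linprog(
        c=[-1, -2, 0, 0, 0],                # vars = [x1, x2, y, s1, s2]
        A_eq=[[2, 1, -1, 0, 0]], b_eq=[5],  # 2*x1 + x2 - y = 5
        A_ub=[
            [-1, 0, 1, 0, 0],               # y <= x1
            [0, -1, 1, 0, 0],               # y <= x2
            [1, 0, -1, -1, 0],              # x1 - y <= s1
            [0, 1, -1, 0, -1],              # x2 - y <= s2
        ],
        b_ub=[0, 0, 0, 0],
        bounds=[(0, 4), (0, 4), (0, 4),
                (0, 0 if fix_s1 else None),  # branch 1: s1 = 0, s2 free
                (0, None if fix_s1 else 0)], # branch 2: s2 = 0, s1 free
    )
    if res.success and (best is None or -res.fun > best[0]):
        best = (-res.fun, res.x)

z = best[0]
x1, x2, y = best[1][:3]
print(round(z, 4), round(x1, 4), round(x2, 4), round(y, 4))  # 9.0 1.0 4.0 1.0
```

Note that no big-M constant appears anywhere: the branching itself enforces that at most one \(s_i\) is non-zero.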

I want to mention that some solvers have a built-in function for \(\min()\). That makes life easier, as there is no reformulation required (the solver will do this behind the scenes).

#### How not to handle the min function

I often see the following suggestion for handling the above model:

Don't do this |
---|

\[\begin{align} \max\>& \color{darkred} z= \color{darkred}x_1 + 2\color{darkred}x_2 + \color{darkblue} \alpha \cdot\color{darkred}y \\ & 2\color{darkred}x_1 + \color{darkred}x_2 = 5 + \color{darkred}y \\ & \color{darkred}y \le \color{darkred}x_1 \\ & \color{darkred}y \le \color{darkred}x_2\\ & \color{darkred}x_1,\color{darkred}x_2 \in [0,4] \end{align}\] |

Here \(\alpha\gt 0\) is a constant. The idea is that the objective will now push \(y\) upwards, so we no longer need to worry about \[y \ge x_1 \> \mathbf{or} \> y \ge x_2\] The problem is that we are changing the problem. Indeed, the results (with \(\alpha=10\)) look like:

```
                LOWER     LEVEL     UPPER    MARGINAL

---- VAR z      -INF     32.5000    +INF       .
---- VAR x1       .       2.5000    4.0000     .
---- VAR x2       .       2.5000    4.0000     .
---- VAR y        .       2.5000    +INF       .
```

This means the original objective \(z_0=x_1+2x_2\) equals 7.5, which is less than the objective we found earlier (we had \(z=9\)). So this trick is just not the right thing to do.

There are values for \(\alpha\) that actually work. Using \(\alpha=0.1\) we observe:

```
                LOWER     LEVEL     UPPER    MARGINAL

---- VAR z      -INF      9.1000    +INF       .
---- VAR x1       .       1.0000    4.0000     .
---- VAR x2       .       4.0000    4.0000    0.9000
---- VAR y        .       1.0000    +INF       .
```

This corresponds to the solution we found earlier. With some experimentation it looks like we have: \[\begin{cases} \alpha \le 1 \Rightarrow x_1=1,x_2=4\\ \alpha \gt 1 \Rightarrow x_1,x_2=2.5\end{cases}\] I am not sure if in general there is an easy way to find the threshold or even if there is always some value of \(\alpha\) that works. I don't particularly like this approach as it really changes the problem. As a result, I never use this method.
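The experiment with different values of \(\alpha\) is easy to repeat. A Python/scipy sketch of the penalized LP, reporting the "true" objective \(z_0=x_1+2x_2\) without the penalty term (the model and the \(\alpha\) values follow the text; the GAMS runs above are the original source):

```python
# Penalized LP: max x1 + 2*x2 + alpha*y  s.t.  2*x1 + x2 = 5 + y, y <= x1, y <= x2.
from scipy.optimize import linprog

def true_objective(alpha):
    res = linprog(
        c=[-1, -2, -alpha],                 # vars = [x1, x2, y]
        A_eq=[[2, 1, -1]], b_eq=[5],        # 2*x1 + x2 = 5 + y
        A_ub=[[-1, 0, 1], [0, -1, 1]],      # y <= x1, y <= x2
        b_ub=[0, 0],
        bounds=[(0, 4), (0, 4), (None, 4)],
    )
    x1, x2, y = res.x
    return x1 + 2 * x2                      # z0, without the alpha*y term

print(true_objective(0.1), true_objective(10))  # 9.0 vs 7.5
```

Consistent with the listings above, small \(\alpha\) recovers the correct optimum \(z_0=9\), while \(\alpha=10\) drifts to the inferior point \(x_1=x_2=2.5\) with \(z_0=7.5\).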

#### Comments

Your small example model is a bit too simple. Rearranging terms, the constraint can be written as \(\max(x_1+x_2,2x_1) = 5\). Because of the objective function, you can replace "=" with "\(\le\)" (rearranging terms was not necessary, but makes it easier to see that you can turn the equality into an inequality). Since the constraint is convex, you can do the reformulation with the auxiliary variable \(y\) without adding an extra penalty to the objective function (\(\alpha=0\) gives the optimal solution).

Thanks. Slightly better than "right example, wrong idea", but I'd better fix this.
