Sunday, June 17, 2018

select weights to maximize count

In [1] we find a problem that looks simple at first sight. Looking a bit further, there are some interesting angles.

The example data set is:


----     27 PARAMETER a  

            j1          j2          j3

i1       0.870       0.730       0.410
i2       0.820       0.730       0.850
i3       0.820       0.370       0.850
i4       0.580       0.950       0.420
i5       1.000       1.000       0.900

The idea is that we can apply weights \(w_j\) to calculate a final score for each row:\[F_i = \sum_j w_j a_{i,j}\] Weights obey the usual constraints: \(w_j \in [0,1]\) and \(\sum_j w_j=1\). The goal is to find optimal weights such that the number of records with final score in the bucket \([0.9, 1]\) is maximized.

Looking at the data, \(w=(0,1,0)\) is a good choice. This gives us two \(F_i\)'s in our bucket (records i4 and i5). Let's see if we can formalize this with a model.
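
As a quick sanity check, here is a tiny numpy fragment that evaluates this guess (numpy is just my choice for illustration; the actual model later in this post is a GAMS model):

```python
import numpy as np

# example data from the table above
a = np.array([[0.87, 0.73, 0.41],
              [0.82, 0.73, 0.85],
              [0.82, 0.37, 0.85],
              [0.58, 0.95, 0.42],
              [1.00, 1.00, 0.90]])
w = np.array([0.0, 1.0, 0.0])            # guessed weights
F = a @ w                                # final scores F_i
print(F)                                 # [0.73 0.73 0.37 0.95 1.  ]
print(((F >= 0.9) & (F <= 1.0)).sum())   # 2 records land in the bucket [0.9, 1]
```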

MIP Model


Counting is done with binary variables. So let's define\[\delta_i = \begin{cases} 1 & \text{if $L\le F_i\le U$}\\ 0 & \text{otherwise}\end{cases}\] I used \(L\) and \(U\) to indicate the bucket.

A first model can look like:\[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align} \max & \sum_i \delta_i \\  & F_i = \sum_j w_j a_{i,j}\\& \sum_j w_j = 1\\ & L - M(1-\delta_i) \le F_i \le U+M(1-\delta_i)\\ & \delta_i \in \{0,1\} \\ & w_j \in [0,1]\end{align}}\] Here \(M\) is a large enough constant.

The sandwich equation models the implication \[\delta_i=1 \Rightarrow L\le F_i \le U\] The objective will make sure that \(L\le F_i \le U \Rightarrow \delta_i=1 \) holds for the optimal solution.

This is the big picture. The model works: it finds an optimal set of weights such that as many \(F_i\) as possible end up in \([L,U]\). In the rest of this post I dive into some nitty-gritty details. No elegant math, just largely boring stuff. However, this is typically what we need to do when working on non-trivial MIP models.
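
To make the formulation concrete, here is a minimal sketch in Python with PuLP. It only shows the structure of the model: the results reported below come from a GAMS model, and \(M=1\) is used because it is safe for this data (see the next section).

```python
import pulp

a = [[0.87, 0.73, 0.41], [0.82, 0.73, 0.85], [0.82, 0.37, 0.85],
     [0.58, 0.95, 0.42], [1.00, 1.00, 0.90]]
I, J = range(len(a)), range(len(a[0]))
L, U, M = 0.9, 1.0, 1.0                                    # bucket and big-M

mdl = pulp.LpProblem("max_count", pulp.LpMaximize)
w = pulp.LpVariable.dicts("w", J, lowBound=0, upBound=1)
f = pulp.LpVariable.dicts("f", I)
delta = pulp.LpVariable.dicts("delta", I, cat="Binary")

mdl += pulp.lpSum(delta[i] for i in I)                     # objective: count selected records
mdl += pulp.lpSum(w[j] for j in J) == 1                    # weights add up to one
for i in I:
    mdl += f[i] == pulp.lpSum(a[i][j] * w[j] for j in J)   # final score F_i
    mdl += f[i] >= L - M * (1 - delta[i])                  # sandwich, left branch
    mdl += f[i] <= U + M * (1 - delta[i])                  # sandwich, right branch

mdl.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(mdl.objective))                           # 2 for this data set
```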

Big-M


The data seem to suggest \(0 \le a_{i,j} \le 1\), which means \(0 \le F_i \le 1\). This follows from \[\min_j a_{i,j} \le F_i \le \max_j a_{i,j}\] (the weights form a convex combination). We can also assume \(0 \le L \le U \le 1\). We can conclude that the largest possible difference between \(F_i\) and \(L\) (or between \(F_i\) and \(U\)) is one. So, in this case an obvious value for \(M\) is \(M=1\). More generally, if we first make sure \(U \le \max\{a_{i,j}\}\) and \(L \ge \min\{a_{i,j}\}\) by the preprocessing step: \[\begin{align} & U := \min\{U, \max\{a_{i,j}\}\}\\ & L := \max\{L, \min\{a_{i,j}\}\}\end{align}\] we have: \[M = \max \{a_{i,j}\} - \min \{a_{i,j}\}\] We can do even better by using an \(M_i\) for each record instead of a single, global \(M\). We will get back to this later.
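
A tiny numpy fragment makes this preprocessing explicit (this only illustrates the calculation; the value it produces shows up again in the big-M listing further below):

```python
import numpy as np

a = np.array([[0.87, 0.73, 0.41], [0.82, 0.73, 0.85], [0.82, 0.37, 0.85],
              [0.58, 0.95, 0.42], [1.00, 1.00, 0.90]])
L, U = 0.9, 1.0
U = min(U, a.max())      # U := min{U, max a}  -> 1.0
L = max(L, a.min())      # L := max{L, min a}  -> 0.9
M = a.max() - a.min()    # global big-M: 1.00 - 0.37 = 0.63
```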

More preprocessing


We can optimize things further by observing that we do not always need both "branches" of \[L - M(1-\delta_i) \le F_i \le U+M(1-\delta_i)\] With our small example we have \(U=1\), but we already know that \(F_i \le 1\). So in this case we only need to worry about \(L - M(1-\delta_i) \le F_i\).

We can generalize this as follows. First calculate bounds \(\ell_i \le F_i\le u_i\): \[\begin{align} & \ell_i = \min_j a_{i,j}\\ & u_i = \max_j a_{i,j}\end{align}\] Then generate constraints: \[\begin{align} & L - M(1-\delta_i) \le F_i && \forall i | L > \ell_i\\ & F_i \le U+M(1-\delta_i) && \forall i | U < u_i\end{align}\]

Even more preprocessing


The first three records in the example data set are really not candidates for having \(F_i \in [0.9, 1]\). The reason is that for those records we have \(u_i \lt L\). In general we can skip all records with \(u_i \lt L\) or \(\ell_i \gt U\). These records will never have \(\delta_i=1\).

Combining things


I extended the \(a\) matrix with the following columns:

  • lo: \(\ell_i=\min_j a_{i,j}\).
  • up: \(u_i = \max_j a_{i,j}\).
  • cand: boolean, indicates if this row is a candidate. We check: \(u_i \ge L\) and \(\ell_i \le U\).
  • chkL: boolean, indicates if we need to check the left "branch". This is equal to one if \(\ell_i\lt L\).
  • chkU: boolean, indicates if we need to check the right "branch". This is equal to one if \(u_i\gt U\). No records have this equal to 1, so the column does not show up below. (A small sketch of how these columns can be computed follows this list.) 
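
A minimal numpy sketch of how these columns can be computed (the actual bookkeeping was done in GAMS; this fragment only mirrors the logic):

```python
import numpy as np

a = np.array([[0.87, 0.73, 0.41], [0.82, 0.73, 0.85], [0.82, 0.37, 0.85],
              [0.58, 0.95, 0.42], [1.00, 1.00, 0.90]])
L, U = 0.9, 1.0

lo = a.min(axis=1)             # ell_i
up = a.max(axis=1)             # u_i
cand = (up >= L) & (lo <= U)   # record can possibly land in the bucket
chkL = lo < L                  # left branch may be needed
chkU = up > U                  # right branch may be needed (all False for this data)
# the left-branch constraint is generated only where cand and chkL both hold,
# the right-branch constraint only where cand and chkU both hold
```
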
The matrix now looks like:


----     41 PARAMETER a  

            j1          j2          j3          lo          up        cand        chkL

i1       0.870       0.730       0.410       0.410       0.870                   1.000
i2       0.820       0.730       0.850       0.730       0.850                   1.000
i3       0.820       0.370       0.850       0.370       0.850                   1.000
i4       0.580       0.950       0.420       0.420       0.950       1.000       1.000
i5       1.000       1.000       0.900       0.900       1.000       1.000

The last record is interesting: it has \(\mathit{chkL}=\mathit{chkU}=0\). This is correct: it will always allow \(\delta_{i5}=1\), no matter what weights we choose. Note that the constraint \(L - M(1-\delta_i) \le F_i\) is only generated when both \(\mathit{cand}=1\) and \(\mathit{chkL}=1\).

For a small data set this all does not make much difference, but for large ones, we make the model much smaller.

Results


The optimal weights for this small data set are not unique: the solver returns different weights than our guess \(w=(0,1,0)\), but of course with the same optimal number of selected records:


----     71 VARIABLE w.L  weights

j1 0.135,    j2 0.865


----     71 VARIABLE f.L  final scores

i1 0.749,    i2 0.742,    i3 0.431,    i4 0.900,    i5 1.000


----     71 VARIABLE delta.L  selected

i4 1.000,    i5 1.000


----     71 VARIABLE z.L                   =        2.000  objective

Big-M revisited


I did not pay too much attention to my big-M's. I just used the calculation \(M = \max \{a_{i,j}\} - \min \{a_{i,j}\}\), which yielded:


----     46 PARAMETER M                    =        0.630  big-M

We can use a tailored \(M^L_i, M^U_i\) for each inequality. For the lower bounds our equation looks like \[L - M^L_i(1-\delta_i) \le F_i\] This means we need \(M^L_i \ge L - F_i\). We have bounds on \(F_i\) (we stored these in \(a_{i,lo}\) and \(a_{i,up}\)). So we can set \(M^L_i = L - a_{i,lo}\). This gives:


----     78 PARAMETER ML  

i4 0.480

Similarly for the \(U\) inequality, where we can use \(M^U_i = a_{i,up} - U\). For our small example this is not so important, but for larger instances with a wider range of data this may be essential.
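
The record-dependent big-M's are then a one-liner (again numpy, only as an illustration):

```python
import numpy as np

a = np.array([[0.87, 0.73, 0.41], [0.82, 0.73, 0.85], [0.82, 0.37, 0.85],
              [0.58, 0.95, 0.42], [1.00, 1.00, 0.90]])
L, U = 0.9, 1.0
lo, up = a.min(axis=1), a.max(axis=1)

ML = L - lo             # tailored big-M for the left branch (used where cand and chkL hold)
MU = up - U             # tailored big-M for the right branch (used where cand and chkU hold)
print(round(ML[3], 2))  # i4: 0.9 - 0.42 = 0.48, matching the listing above
```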

Larger problem


For a larger random problem:


----     43 PARAMETER a  data

             j1          j2          j3          lo          up        cand        chkL        chkU

i1        0.737       0.866       0.893       0.737       0.893                   1.000
i2        0.434       0.654       0.606       0.434       0.654                   1.000
i3        0.721       0.987       0.648       0.648       0.987       1.000       1.000
i4        0.800       0.618       0.869       0.618       0.869                   1.000
i5        0.668       0.703       1.177       0.668       1.177       1.000       1.000       1.000
i6        0.656       0.540       0.525       0.525       0.656                   1.000
i7        0.864       1.037       0.569       0.569       1.037       1.000       1.000       1.000
i8        0.942       1.003       0.655       0.655       1.003       1.000       1.000       1.000
i9        0.600       0.801       0.601       0.600       0.801                   1.000
i10       0.619       0.933       1.114       0.619       1.114       1.000       1.000       1.000
i11       1.243       0.675       0.756       0.675       1.243       1.000       1.000       1.000
i12       0.608       0.778       0.747       0.608       0.778                   1.000
i13       0.695       0.593       1.196       0.593       1.196       1.000       1.000       1.000
i14       0.965       1.001       0.650       0.650       1.001       1.000       1.000       1.000
i15       0.951       1.040       0.969       0.951       1.040       1.000                   1.000
i16       0.514       1.220       0.935       0.514       1.220       1.000       1.000       1.000
i17       0.834       1.130       1.054       0.834       1.130       1.000       1.000       1.000
i18       1.161       0.550       0.481       0.481       1.161       1.000       1.000       1.000
i19       0.638       0.640       0.691       0.638       0.691                   1.000
i20       1.057       0.724       0.657       0.657       1.057       1.000       1.000       1.000
i21       0.814       0.775       0.448       0.448       0.814                   1.000
i22       0.800       0.737       0.741       0.737       0.800                   1.000
i23       0.573       0.555       0.789       0.555       0.789                   1.000
i24       0.829       0.679       1.059       0.679       1.059       1.000       1.000       1.000
i25       0.761       0.956       0.687       0.687       0.956       1.000       1.000
i26       0.870       0.673       0.822       0.673       0.870                   1.000
i27       0.304       0.900       0.920       0.304       0.920       1.000       1.000
i28       0.544       0.659       0.495       0.495       0.659                   1.000
i29       0.755       0.589       0.520       0.520       0.755                   1.000
i30       0.492       0.617       0.681       0.492       0.681                   1.000
i31       0.657       0.767       0.634       0.634       0.767                   1.000
i32       0.752       0.684       0.596       0.596       0.752                   1.000
i33       0.616       0.846       0.803       0.616       0.846                   1.000
i34       0.677       1.098       1.132       0.677       1.132       1.000       1.000       1.000
i35       0.454       1.037       1.204       0.454       1.204       1.000       1.000       1.000
i36       0.815       0.605       0.890       0.605       0.890                   1.000
i37       0.814       0.587       0.939       0.587       0.939       1.000       1.000
i38       1.019       0.675       0.613       0.613       1.019       1.000       1.000       1.000
i39       0.547       0.946       0.843       0.547       0.946       1.000       1.000
i40       0.724       0.571       0.757       0.571       0.757                   1.000
i41       0.611       0.916       0.891       0.611       0.916       1.000       1.000
i42       0.680       0.624       1.111       0.624       1.111       1.000       1.000       1.000
i43       1.015       0.870       0.823       0.823       1.015       1.000       1.000       1.000
i44       0.587       0.866       0.691       0.587       0.866                   1.000
i45       0.789       1.090       0.649       0.649       1.090       1.000       1.000       1.000
i46       1.436       0.747       0.805       0.747       1.436       1.000       1.000       1.000
i47       0.791       0.885       0.723       0.723       0.885                   1.000
i48       0.718       1.028       0.869       0.718       1.028       1.000       1.000       1.000
i49       0.876       0.772       0.918       0.772       0.918       1.000       1.000
i50       0.498       0.882       0.599       0.498       0.882                   1.000

we see that we can remove a significant number of records and big-M constraints. The model solves instantaneously and shows:


----     74 VARIABLE w.L  weights

j1 0.007,    j2 0.792,    j3 0.202


----     74 VARIABLE f.L  final scores

i1  0.870,    i2  0.643,    i3  0.917,    i4  0.670,    i5  0.798,    i6  0.538,    i7  0.942,    i8  0.933
i9  0.760,    i10 0.967,    i11 0.695,    i12 0.771,    i13 0.715,    i14 0.930,    i15 1.025,    i16 1.158
i17 1.112,    i18 0.541,    i19 0.650,    i20 0.713,    i21 0.710,    i22 0.738,    i23 0.602,    i24 0.757
i25 0.900,    i26 0.705,    i27 0.900,    i28 0.625,    i29 0.576,    i30 0.629,    i31 0.739,    i32 0.667
i33 0.836,    i34 1.102,    i35 1.067,    i36 0.664,    i37 0.660,    i38 0.665,    i39 0.923,    i40 0.609
i41 0.909,    i42 0.722,    i43 0.861,    i44 0.829,    i45 0.999,    i46 0.764,    i47 0.851,    i48 0.994
i49 0.802,    i50 0.822


----     74 VARIABLE delta.L  selected

i3  1.000,    i7  1.000,    i8  1.000,    i10 1.000,    i14 1.000,    i15 1.000,    i16 1.000,    i17 1.000
i25 1.000,    i27 1.000,    i34 1.000,    i35 1.000,    i39 1.000,    i41 1.000,    i45 1.000,    i48 1.000


----     74 VARIABLE z.L                   =       16.000  objective

Conclusion


A simple model becomes not that simple once we start "optimizing" it. Unfortunately this is typical for large MIP models.

References


  1. Simulation/Optimization Package in R for tuning weights to achieve maximum allocation for groups, https://stackoverflow.com/questions/50843023/simulation-optimization-package-in-r-for-tuning-weights-to-achieve-maximum-alloc

Saturday, June 9, 2018

Sparsest solution: a difficult MIP

The problem of finding a solution vector that is as sparse as possible can be formulated as a MIP.

I was looking at solving an underdetermined problem \[Ax=b\] i.e., the number of rows \(m\) is less than the number of columns \(n\). A solution \(x\) has in general \(m\) nonzero elements. 

The actual system was: \[Ax\approx b\] or to be more precise \[-\epsilon \le Ax-b\le \epsilon \] By adding slack variables \(s_i\) we can write this as: \[\begin{align}&Ax=b+s\\&-\epsilon\le s \le \epsilon\end{align} \] Now we have more wiggle room, and can try to find the vector \(x\) that has as many zero elements as possible.

Big-M formulation


The MIP model seems obvious: \[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align} \min & \sum_j \delta_j \\ & Ax=b+s \\ & -M\delta_j \le x_j \le M\delta_j \\ & s_i \in [-\epsilon,\epsilon]\\ & \delta_j \in \{0,1\}\end{align}}\] This turns out to be a very problematic formulation. First, we have no good a priori value for \(M\). I used \(M=10,000\), but that causes some real issues. Furthermore, the performance is not good. For a very small problem with \(m=20, n=40\) we already see a solution time of about 20 minutes. With just 40 binary variables I expected something like a minute or two. But that is only a minor part of the problem. The results with Cplex look like:


----     82 VARIABLE x.L  

j3  -0.026,    j4   0.576,    j7   0.638,    j11  0.040,    j12 -0.747,    j14  0.039,    j15 -0.169,    j19 -0.088
j23  0.509,    j31 -0.475,    j34 -0.750,    j35 -0.509


----     82 VARIABLE delta.L  

j3  2.602265E-6,    j4        1.000,    j7        1.000,    j11 4.012925E-6,    j12       1.000,    j14 3.921840E-6
j15       1.000,    j19 8.834778E-6,    j23       1.000,    j31       1.000,    j34       1.000,    j35       1.000


----     82 PARAMETER statistics  

m               20.000,    n               40.000,    epsilon          0.050,    big M        10000.000
iterations 5.443661E+7,    nodes      5990913.000,    seconds       1004.422,    nz(x)           12.000
obj              8.000,    gap                EPS

Well, we have some problems here. The optimal objective is reported as 8, so we have 8 \(\delta_j\)'s that are equal to one. But at the same time the number of nonzero values in \(x\) is 12.  This does not match. The reason is: we have a few \(\delta_j\)'s that are very small (approx. \(10^{-6}\)). Cplex considers these small values as being zero, while the model sees opportunities to exploit these small values for what is sometimes called "trickle flow". These results are just not very reliable. We cannot just ignore the small \(\delta_j\)'s and conclude that the optimal number of non-zero elements in \(x\) is 8 (we will see later it is 11). The underlying reason is a relatively large value for \(M\) combined with a Cplex integer feasibility tolerance that is large enough to create leaks.

There are a few things we can do to repair this:

  • Reduce the value of \(M\), e.g. reduce it from \(M=10,000\) to \(M=1,000\). 
  • Tighten the integer feasibility tolerance. Cplex has a default tolerance epint=1.0e-5 (it can be set to zero).
  • Use SOS1 sets instead of big-M's. Special Ordered Sets of Type 1 allow us to model the problem in a slightly different way than with binary variables. 
  • Use indicator constraints instead of big-M's. Indicator constraints can implement the implication \(\delta_j=0 \Rightarrow x_j=0\) directly. 
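
Before turning to the alternative formulations, here is a minimal sketch of the big-M model above in Python with PuLP. The data is a placeholder (the original \(A\) and \(b\) are not shown, and the actual runs used GAMS with Cplex); the fragment only illustrates the structure, with the reduced \(M=1,000\).

```python
import numpy as np
import pulp

np.random.seed(42)
m_, n_ = 20, 40
A = np.random.normal(size=(m_, n_))                   # placeholder data
x0 = np.zeros(n_)
x0[np.random.choice(n_, 12, replace=False)] = np.random.normal(size=12)
b = A @ x0                                            # b generated from a sparse x0
eps, M = 0.05, 1000.0

mdl = pulp.LpProblem("sparsest", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", range(n_), lowBound=-M, upBound=M)
s = pulp.LpVariable.dicts("s", range(m_), lowBound=-eps, upBound=eps)
d = pulp.LpVariable.dicts("delta", range(n_), cat="Binary")

mdl += pulp.lpSum(d.values())                         # minimize the number of nonzeros
for i in range(m_):
    mdl += pulp.lpSum(float(A[i, j]) * x[j] for j in range(n_)) == float(b[i]) + s[i]
for j in range(n_):
    mdl += x[j] <= M * d[j]                           # delta_j = 0  =>  x_j = 0
    mdl += x[j] >= -M * d[j]

mdl.solve(pulp.PULP_CBC_CMD(msg=False))               # warning: may run for a very long time
print(sum(round(d[j].value()) for j in range(n_)))
```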

SOS1 formulation


We can get rid of the big-M problem by using SOS1 variables: \[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align} \max & \sum_j y_j \\ & Ax=b+s \\ & (x_j,y_j) \in \text{SOS1} \\ & s_i \in [-\epsilon,\epsilon]\\ & y_j \in [0,1]\end{align}}\] The SOS1 sets say: \(x_j = 0\) or \(y_j=0\) (or both); in other words, at most one of \(x_j,y_j\) can be nonzero. The objective tries to make as many elements of \(y\) as possible equal to one. The corresponding elements of \(x\) will be zero, making the \(x\) vector sparser. This SOS1 approach is more reliable, but it can be slower. When we try this on our small problem, we see:


----    119 VARIABLE x.L  

j4   0.601,    j7   0.650,    j11  0.072,    j12 -0.780,    j15 -0.170,    j19 -0.117,    j23  0.515,    j31 -0.483
j34 -0.750,    j35 -0.528,    j36 -0.031


----    119 PARAMETER statistics  

m               20.000,    n               40.000,    epsilon          0.050,    iterations 4.787142E+7
nodes      1.229474E+7,    seconds       3600.156,    nz(x)           11.000,    obj             29.000
gap              0.138

This model could not be solved to optimality within one hour! We stopped on a time limit of 3600 seconds. Remember, we only have 40 discrete structures in this model. The gap is still 13.8% after running for an hour. Otherwise, the results make sense: the objective of 29 corresponds to 11 nonzero values in \(x\).
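
As an aside, declaring the SOS1 pairs is straightforward in most APIs. A hypothetical gurobipy fragment (an assumption on my part: the actual model was a GAMS/Cplex model, and PuLP has no convenient SOS1 support) could look like this, reusing the placeholder data from the PuLP fragment above:

```python
import gurobipy as gp
from gurobipy import GRB

# A, b, eps, m_, n_: the same placeholder data as in the PuLP fragment above
mdl = gp.Model("sparsest_sos1")
x = mdl.addVars(n_, lb=-GRB.INFINITY, name="x")
y = mdl.addVars(n_, lb=0.0, ub=1.0, name="y")
s = mdl.addVars(m_, lb=-eps, ub=eps, name="s")
mdl.addConstrs(gp.quicksum(float(A[i, j]) * x[j] for j in range(n_))
               == float(b[i]) + s[i] for i in range(m_))
for j in range(n_):
    mdl.addSOS(GRB.SOS_TYPE1, [x[j], y[j]], [1, 2])   # at most one of x_j, y_j is nonzero
mdl.setObjective(y.sum(), GRB.MAXIMIZE)
mdl.optimize()
```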

Indicator constraint formulation


Finally, a formulation using indicator constraints can look like: \[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align} \min & \sum_j \delta_j \\ & Ax=b+s \\ & \delta_j = 0 \Rightarrow x_j=0 \\ & s_i \in [-\epsilon,\epsilon]\\ & \delta_j \in \{0,1\}\end{align}}\] This formulation does not use a big-M coefficient either, and the results are consistent.
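
Again only as an illustration (the actual runs used GAMS with Cplex), a hypothetical gurobipy version states the implication directly:

```python
import gurobipy as gp
from gurobipy import GRB

# A, b, eps, m_, n_: the same placeholder data as before
mdl = gp.Model("sparsest_indicator")
x = mdl.addVars(n_, lb=-GRB.INFINITY, name="x")
s = mdl.addVars(m_, lb=-eps, ub=eps, name="s")
d = mdl.addVars(n_, vtype=GRB.BINARY, name="delta")
mdl.addConstrs(gp.quicksum(float(A[i, j]) * x[j] for j in range(n_))
               == float(b[i]) + s[i] for i in range(m_))
for j in range(n_):
    mdl.addGenConstrIndicator(d[j], False, x[j] == 0.0)   # delta_j = 0  =>  x_j = 0
mdl.setObjective(d.sum(), GRB.MINIMIZE)
mdl.optimize()
```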

Performance-wise, this does not help much:


----    178 VARIABLE x.L  

j1   0.158,    j2  -0.459,    j13 -0.155,    j14  0.470,    j17 -0.490,    j22  0.269,    j23  0.211,    j32  1.164
j33  0.147,    j38 -0.604,    j39 -0.563


----    178 VARIABLE delta.L  

j1  1.000,    j2  1.000,    j13 1.000,    j14 1.000,    j17 1.000,    j22 1.000,    j23 1.000,    j32 1.000
j33 1.000,    j38 1.000,    j39 1.000


----    178 PARAMETER statistics  

m               20.000,    n               40.000,    epsilon          0.050,    iterations 7.127289E+7
nodes      1.523042E+7,    seconds       3600.203,    nz(x)           11.000,    obj             11.000
gap              0.273

We are hitting our time limit. The gap is 27.3%.

Optimal solution


The optimal solution is indeed 11 nonzero elements in \(x\). It took me 4 hours to prove this. Here are the MIP bounds:


The optimal solution was found quite early, but proving optimality took a long time. The final jump in the best bound (red line: bound on the best possible integer solution) can be explained as follows. The objective (blue line) can only assume integer values, so it jumps by one. You can see this happening early on, when the objective jumps from 12 to 11. This also means we are optimal as soon as the best bound (red line) reaches a value larger than 10. At that point we know 11 is the optimal objective value.

NB. To be complete: this run used a big-M formulation with a reduced value of \(M=1,000\) and an integer feasibility tolerance (option epint) of zero. Note that such a value really puts us in "paranoia mode" and may result in somewhat longer solution times. Not all solvers allow an integer feasibility tolerance of zero. In addition, I used the option mipemphasis=2 to tell Cplex we want optimality.

Conclusion


Here is a very small MIP model with only 40 binary variables that is just very difficult to solve to optimality. Although we can find the optimal solution relatively quickly, proving optimality proves to be very challenging. This is not the usual case: for most models with 40 binary variables we can expect a very quick turnaround. This model is unusually difficult.

In addition, this model nicely illustrates the dangers of big-M formulations. Even for modest values, like \(M=10,000\), we can be in real trouble.

This is a good example that should give us some pause. 


Friday, June 1, 2018

The real optimization problem

Dear Erwin,

Good Morning!!.

I am doing my Master's Degree in Computer Science.

Currently, I am doing some research on Facility Location Problem Optimization. I came across your answers on stackoverflow and your blog as well. Your findings are really helpful, I appreciate you for this. Really well structured and clearly understandable. Mathematical models are well explained.

I kindly request you, If you provide small implementation code in JAVA/Python it will be very helpful. So that I can understand the concept deeply.

Thanks for your time and consideration.

Kindest Regards,

This question is about [1]. I did not use Java or Python to solve the models discussed there. So my answer was easy: sorry.

However, I am a bit uncomfortable with questions like this. Is "research" really nothing more than emailing and copy/pasting the replies? Should a CS student doing his Master's thesis not be able to translate a detailed (but simple) mathematical model into working code? Would you not learn much more by actually implementing the model yourself?

Often, I find that reproducing the results in a paper is a very good way to get a good understanding of the issues. I prefer to work from a mathematical model, or pseudo code if an algorithm needs to be implemented. Just copying and pasting is hardly ever a good idea.

I have the feeling the real underlying optimization problem is: minimize effort. Or to make it sound better: maximize efficiency.

I am probably a bit too harsh.

References


  1. Solving a facility location problem as an MIQCP, http://yetanothermathprogrammingconsultant.blogspot.com/2018/01/solving-facility-location-problem-as.html