Friday, February 26, 2010

Generating all non-dominated solutions in a multi-objective integer programming model

Sylva and Crema (reference below) show a simple MIP-based scheme that enumerates all non-dominated solutions of a multi-objective integer programming model. As usual, it is instructive to see if we can reproduce the results. It was not difficult to verify them by coding the algorithm in GAMS: the additional constraints are easy to add using dynamic sets. I am sure this is much more compact (and easier to write) than the C code mentioned in the paper.

In this experiment we test the simple MOIP:

max z1 = x1 − 2x2
max z2 = −x1 + 3x2
subject to x1 − 2x2 ≤ 0
x1, x2 ∈ {0, 1, 2}


$ontext

   This algorithm finds all non-dominated solutions of a
   multi-objective integer programming model.

   Erwin Kalvelagen, feb 2010

   Reference:
     Sylva, Crema: "A method for finding the set of non-dominated
     vectors for multiple objective integer linear programs",
     EJOR 158 (2004), 46-55.

$offtext

set
   i 'variables'  /i1,i2/
   k 'objectives' /k1,k2/
;

parameter
   f(k)      'minimal improvement' /k1 1, k2 1/
   lambda(k) 'initial weights'     /k1 4, k2 3/
   M(k)      'big-M'               /k1 4, k2 2/
   xup(i)    'upperbounds'         /i1 2, i2 2/
;

table c(k,i) 'cost coefficients'
         i1  i2
   k1     1  -2
   k2    -1   3
;

integer variables x(i);
x.up(i) = xup(i);

variables zw, z(k);

equations
   obj(k) 'objective functions'
   e      'constraint'
   wobj   'weighted objective'
;

obj(k).. z(k) =e= sum(i, c(k,i)*x(i));

e.. x('i1') - 2*x('i2') =l= 0;

wobj.. zw =e= sum(k, lambda(k)*z(k));

* this mip is just for checking (we don't use it in the algorithm)
option optcr=0;
model m1 /obj,wobj,e/;
solve m1 using mip maximizing zw;

parameter sols(*,*) 'non-dominated solutions';
sols('MIP',i) = x.l(i);
sols('MIP',k) = z.l(k);
display sols;

* here we use the algorithm: find solutions that are not worse in all objectives
* than previously found solutions.

set tall 'maximum number of iterations' /t1*t10/;
set t(tall) 'dynamic set';

binary variable y(tall,k);
equations
   nondominance(tall,k) 'solution not dominated by earlier solutions'
   county(tall)         'count number of y-s'
;
parameter zmem(tall,k) 'memory';

nondominance(t,k).. z(k) =g= (zmem(t,k)+f(k))*y(t,k) - M(k)*(1-y(t,k));
county(t)..         sum(k, y(t,k)) =g= 1;

model m2 /obj,wobj,e,nondominance,county/;

t(tall) = no;
zmem(tall,k) = 0;

option limrow=0;
option limcol=0;

alias (iter,tall);
scalar ok /1/;
loop(iter$ok,
   solve m2 using mip maximizing zw;
   if (m2.modelstat = 1,
      t(iter) = yes;
      zmem(iter,k) = z.l(k);
      sols(iter,i) = x.l(i);
      sols(iter,k) = z.l(k);
   else
      ok = 0;
   );
);
display sols;

The five solutions found correspond to the results reported in the paper:

----    114 PARAMETER sols 

             i1          i2          k1          k2

MIP       2.000       2.000      -2.000       4.000
t1        2.000       2.000      -2.000       4.000
t2        2.000       1.000                   1.000
t3        1.000       2.000      -3.000       5.000
t4        1.000       1.000      -1.000       2.000
t5                    2.000      -4.000       6.000

(The line with the MIP results can be ignored; it is not part of the algorithm.)
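The instance is small enough to verify this table by brute force. Here is a quick Python check (independent of GAMS and of any MIP solver) that enumerates the feasible grid and filters out the dominated objective vectors:

```python
# Brute-force verification: enumerate the feasible grid of the small MOIP
# and keep the non-dominated (Pareto-maximal) objective vectors.
from itertools import product

def objectives(x1, x2):
    # z1 = x1 - 2*x2 (max), z2 = -x1 + 3*x2 (max)
    return (x1 - 2 * x2, -x1 + 3 * x2)

# feasible points: x1, x2 in {0,1,2} with x1 - 2*x2 <= 0
feasible = [(x1, x2) for x1, x2 in product(range(3), repeat=2)
            if x1 - 2 * x2 <= 0]
points = {objectives(*x) for x in feasible}

def dominates(a, b):
    # a dominates b (maximization): a >= b everywhere, a > b somewhere
    return a != b and all(ai >= bi for ai, bi in zip(a, b))

pareto = sorted(p for p in points if not any(dominates(q, p) for q in points))
print(pareto)  # the five non-dominated objective vectors (k1, k2)
```

The five vectors printed match the (k1, k2) columns of the display above.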

With a good MIP solver this method can handle moderately sized problems.

Some notes:

  • We can streamline this a bit for zero-one models, especially in combination with cuts that exclude previously found solutions.
  • In that case we can just use
    • z(k) ≥ z*(t,k) − M(k)·(1 − y(t,k))
    • ∑k y(t,k) ≥ 1
    • plus a cut of the form ∑i∈P(t) xi − ∑i∈N(t) xi ≤ |P(t)| − 1, where P(t) and N(t) are the index sets of the variables at one and at zero in solution t
    This is somewhat more natural and intuitive.
  • This would also relax the assumption in the paper that all cost coefficients are integer valued (c(k,i) ∈ ℤ).
  • This approach seemed to work ok for some larger test data with more objectives.
  • The M(k)'s can be found automatically by solving a few auxiliary models that establish a bounding box on the objectives.
  • Although this method uses weights, they are not very important as long as they are nonzero; e.g. a weight vector of all ones is perfectly appropriate for this algorithm.
  • A modeling language that allows loops and multiple solves is at an advantage here. A system that can only express a single instance of a model will have trouble implementing these “mini-algorithms” in a clean way.
  • A subsequent paper by the authors describes a way to generate not all, but well-dispersed non-dominated solutions. For larger problems, where finding all non-dominated solutions quickly is not possible, this may be a good approach.
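For completeness, the whole loop can be sketched in plain Python, with brute-force enumeration of the tiny feasible grid standing in for the MIP solver. The big-M branch of the non-dominance constraint is omitted because the chosen M(k)'s make it non-binding on this instance:

```python
# Sketch of the Sylva-Crema enumeration loop on the small example,
# with brute force replacing the MIP solve at each iteration.
from itertools import product

F = [(x1, x2) for x1, x2 in product(range(3), repeat=2)
     if x1 - 2 * x2 <= 0]           # feasible grid: x1 - 2*x2 <= 0
C = ((1, -2), (-1, 3))              # cost coefficients c(k,i)
lam, f = (4, 3), (1, 1)             # weights lambda(k), minimal improvement f(k)

def z(x):
    # objective vector: z(k) = sum_i c(k,i)*x(i)
    return tuple(sum(c * xi for c, xi in zip(row, x)) for row in C)

def admissible(x, zmem):
    # non-dominance cuts: for every stored solution t there must be at
    # least one objective k improved by at least f(k) (i.e. y(t,k)=1);
    # the big-M branch for y(t,k)=0 is non-binding here and left out
    return all(any(z(x)[k] >= zt[k] + f[k] for k in range(2)) for zt in zmem)

zmem, sols = [], []
while True:
    cand = [x for x in F if admissible(x, zmem)]
    if not cand:                    # subproblem infeasible: all solutions found
        break
    xbest = max(cand, key=lambda x: sum(l * v for l, v in zip(lam, z(x))))
    sols.append(xbest)
    zmem.append(z(xbest))
print(sols)                         # one solution per loop iteration
```

Ties in the weighted objective may be broken differently than in the MIP run, so the order of the solutions can differ from the GAMS output; the set of five solutions is the same.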

Wednesday, February 24, 2010

CONOPT: Convergence problem

Occasionally the NLP solver CONOPT will terminate with:

  Iter Phase Ninf     Objective     RGmax    NSB   Step InItr MX OK
   823   3        1.6227533982E+01 3.0E-02   407 0.0E+00      F  F
   824   3        1.6227533982E+01 3.0E-02   401 0.0E+00      F  F
   825   3        1.6227533982E+01 3.0E-02   405 0.0E+00      F  F
   826   3        1.6227533982E+01 3.0E-02   407 0.0E+00      F  F
   827   3        1.6227533982E+01 3.0E-02   407

** Feasible solution. Convergence too slow. The change in objective
   has been less than 4.8683E-11 for 20 consecutive iterations

In some cases we can reach a (local) optimum just by solving again:

solve ramsey maximizing AggW using nlp;
solve ramsey maximizing AggW using nlp;

Indeed, the trick works in this case: the second solve finishes quickly and terminates with:

  Iter Phase Ninf     Objective     RGmax    NSB   Step InItr MX OK
   193   4        1.6948156578E+01 2.7E+00   342 2.0E+00    1 F  T
   194   4        1.6948221678E+01 1.4E-03   441 1.0E+00    1 F  T
   195   4        1.6948492083E+01 1.9E-03   441 1.0E+00    6 T  T
   196   4        1.6949366978E+01 1.3E-03   440 1.0E+00   24 F  T
   197   4        1.6950307538E+01 3.1E-03   440 1.0E+00  113 F  T
   198   4        1.6950345143E+01 1.4E-04   440 1.0E+00  171 F  T
   199   4        1.6950345450E+01 1.6E-05   440 1.0E+00  136 F  T
   200   4        1.6950345453E+01 1.6E-06   440 1.0E+00  101 F  T
   201   4        1.6950345453E+01 1.0E-07   440 1.0E+00    7 F  T
   202   4        1.6950345453E+01 5.6E-08   440

** Optimal solution. Reduced gradient less than tolerance.

Friday, February 19, 2010

Performance of similar MIP models

This is a table with the running times to solve a MIP model with 3000 binary variables (we stop as soon as the gap is below 0.05%). The models differ only in a single parameter λ, which changes the objective. The table shows how much the performance can change just by going from λ=0.83 to λ=1: changing one number in an otherwise identical model can turn an easy model into a very difficult one.

λ      objective   nodes     iterations   time (s)
0% 19.15 29 477 5.12
4% 19.03 45 426 2.16
8% 18.91 64 513 3.63
13% 18.82 76 611 3.40
17% 18.76 18 657 2.78
21% 18.75 13 549 2.95
25% 18.76 17 763 4.74
29% 18.80 28 830 4.39
33% 18.92 21 704 4.46
38% 19.04 168 1298 6.70
42% 19.25 234 1234 7.12
46% 19.56 642 3686 17.69
50% 19.94 886 4601 16.12
54% 20.64 2292 21567 29.26
58% 21.42 1216 13781 16.64
63% 22.31 3233 34883 34.76
67% 23.47 1873 23906 29.30
71% 24.75 7286 54077 78.80
75% 26.31 5939 54634 82.18
79% 28.04 9486 85297 112.49
83% 29.78 7701 62220 97.78
88% 31.66 26102 202141 332.58
92% 33.65 83790 849745 987.04
96% 35.84 102327 911254 1377.03
100% 38.26 822222 6998739 11749.83

Thursday, February 18, 2010

Threads question

So I have this new laptop which has a quad-core i7 CPU with hyper-threading enabled. That means programs see 8 cores: the task manager, for instance, displays eight CPUs.
When running MIPs with Cplex and Gurobi, the question now is: what is the best setting to solve models as quickly as possible?

  1. Keep hyper-threading and run with threads=4
  2. Keep hyper-threading and run with threads=8
  3. Turn off hyper-threading and run with threads=4
Assume the model is small enough that memory limits are not an issue. My guess is that it does not really matter in a significant way. Of course, in a single run the performance may be very different because the solver takes a different path. One would need to test a large number of models to make any statements about the best settings.

Sunday, February 7, 2010

GAMS: EPS in nonlinear equations

In GAMS the special value EPS is treated as zero when evaluated numerically. There can be an issue when data with the value EPS appears in a nonlinear equation. My intuition said: no problem, it will be considered a zero during model generation. But that is not what actually happens:

scalar d /0/;
d = eps;

variables x1,x2,y1,y2,dummy;

* initial values (levels for x2, y1, y2 set here; numbers not shown)

equations e, edummy;

e.. x1 =e= x2*(y2/y1-1)*d ;

edummy.. dummy =e= 0;

model m /all/;
option nlp=conopt,iterlim=0;
solve m using nlp minimizing dummy;

This should generate e.. x1 =e= 0; but it does not. The listing file shows very strange things:

---- e  =E= 

e..  x1 + (EPS)*x2 + (EPS)*y1 + (EPS)*y2 =E= 0 ; (LHS = EPS)

                           LOWER          LEVEL          UPPER         MARGINAL

---- EQU e                   .          -171.1647          .             EPS    INFES

I.e. GAMS does not get rid of the EPS; in fact the partial derivatives evaluate to EPS, which is strange. Secondly, GAMS considers this equation feasible at the initial point, but CONOPT does not: when it evaluates the equation at the initial point it finds it infeasible by 171.1647. It looks like something is really wrong here.

When we look at what Convert generates:

Variables  x1,x2,x3,x4,x5;

Equations  e1,e2;

e1.. -(-1 + x4/x3)*x2 + x1 =E= 0;

e2..    x5 =E= 0;

we see that the multiplication by d is dropped. This does not look correct to me and is potentially a serious problem; I start to suspect that GAMS generates wrong code here.

This is not an issue for 99.99% of models, as they have no EPS values in their data. I used EPS to flag data coming from empty cells in a spreadsheet, so I could distinguish such cells from genuine zeros in certain operations.

As this code is generated by a tool that translates Excel formulas, I can easily generate the following workaround:

e.. x1 =e= x2*(y2/y1-1)*(d$(d<>0)) ;

When this is done for all references to data it looks really ugly, but at least it makes sure the correct function evaluation code is generated: in GAMS the condition $(d<>0) evaluates to false for d=EPS.
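Outside GAMS, the same bookkeeping can be done with an explicit sentinel in the data layer: keep "empty spreadsheet cell" distinguishable from a real zero, but hand a plain 0.0 to any (nonlinear) expression. A small Python sketch of that idea (this illustrates the bookkeeping only, not GAMS's EPS semantics; the names are made up):

```python
# Distinguish "empty spreadsheet cell" from a genuine zero in the data
# layer, but always pass an ordinary 0.0 into expression evaluation.
EMPTY = object()  # sentinel playing the role GAMS EPS played in the post

def read_cell(raw):
    # empty cells stay distinguishable from an explicit 0 in the data
    return EMPTY if raw in ("", None) else float(raw)

def as_number(v):
    # anything entering a (nonlinear) expression must be a plain float
    return 0.0 if v is EMPTY else v

cells = ["", "0", "2.5"]
data = [read_cell(c) for c in cells]
print([v is EMPTY for v in data])    # [True, False, False]
print([as_number(v) for v in data])  # [0.0, 0.0, 2.5]
```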