Thursday, October 21, 2021

A multi-objective nonlinear integer programming problem

Picture of the efficient points


In [1], a small multi-objective non-convex MINLP problem is stated:


Let's see how one could approach a problem like this.

First of all, multi-objective problems don't have an obvious single optimal solution. Instead, we often generate a (subset of) the points that are non-dominated. These points form the efficient frontier. As this is a pure integer problem, the efficient frontier is not a line (surface) but just a collection of points.
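For a minimization problem, one point dominates another if it is no worse in every objective and strictly better in at least one. A minimal Python sketch of that test:

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized):
    no worse in every component, strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

print(dominates((1, 2), (1, 3)))  # True: equal in f1, strictly better in f2
print(dominates((1, 2), (2, 1)))  # False: neither point dominates the other
```

The non-dominated points are exactly those for which no other feasible point passes this test.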

First, let's clean up the model a little bit. Objective 1 cannot be evaluated for \(\color{darkred}x_1=0\), so we introduce a lower bound of 1 on \(\color{darkred}x_1\). 

Here I want to demonstrate two very different approaches. The first is just to enumerate all feasible points \((\color{darkred}x_1,\color{darkred}x_2)\), evaluate the objective functions, and then filter out dominated solutions by a Pareto sorting algorithm. The second approach is to have an optimization model generate one Pareto-efficient point at a time, using a weighted objective function. Then we add constraints so that we don't generate new points that are dominated. We solve this model several times until the model becomes infeasible.

The solution sets found by these two methods are identical, as they should be: both enumerate the complete Pareto-optimal set.


Approach 1: No optimization


We start with a method that does not require optimization at all. The number of possible combinations of the \(x\) variables is small, so we can enumerate them all. Here is some Python code that does just that:


import pandas as pd
pd.set_option("display.max_rows", 10)

#
# just some values
#
c0 = 1.4; c1 = 1; c2 = 2; c3 = 3; c4 = 4; c5 = 5
d1 = 1; d2 = 2; d3 = 3; d4 = 4; d5 = 5
e1 = 1
a1 = 12; a2 = 8; a3 = 15

# list of lists
L = []

def evalf(x1,x2):
  f1 = c0 + (c1+c2*x2+c5*x2**2)/float(x1) + c3*x1 + c4*x2
  f2 = d1*x1 + d2*x2 + d3*x1**2 + d4*x2**2 + d5*x1*x2
  f3 = -e1*x2 
  return f1,f2,f3

for x1 in range(1,a1+1):
  for x2 in range(0,min(a2,a3-x1)+1):
      f1,f2,f3 = evalf(x1,x2)
      L.append([x1,x2,f1,f2,f3])

# store in data frame
df = pd.DataFrame(L,columns=['x1','x2','f1','f2','f3'])
df


The results are just 93 records:

Feasible solutions


A sorting algorithm can be used to retain only the non-dominated solutions. I used the code from [2]:

from pareto import eps_sort  # the code from [2]

pd.DataFrame(eps_sort([L], [2,3,4]), columns=['x1','x2','f1','f2','f3'])

This yielded:



Non-dominated solutions

This approach is very fast.
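If the library from [2] is not at hand, a small \(O(n^2)\) filter written directly in Python gives the same result on a problem of this size (a sketch, assuming all objectives are minimized; the function name is mine):

```python
def nondominated(rows, obj_cols):
    """Keep only rows whose objective values (in columns obj_cols)
    are not dominated by any other row; all objectives minimized."""
    def dom(a, b):
        # a dominates b: no worse everywhere, strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))
    objs = [tuple(r[c] for c in obj_cols) for r in rows]
    return [r for r, o in zip(rows, objs)
            if not any(dom(o2, o) for o2 in objs)]
```

Applied to the list of lists L from the enumeration code above, nondominated(L, [2, 3, 4]) retains the same non-dominated points as eps_sort.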


Approach 2: optimization model to generate non-dominated points


Here, an optimization model finds a single new non-dominated point at a time. We solve it in a loop until the model becomes infeasible; at that point we know that all points on the efficient frontier have been found.

For this, we first develop a base model. It looks like:

Base model
\[\begin{align}\min&\sum_k \color{darkblue}w_k \cdot\color{darkred}f_k \\ & \color{darkred}f_1 = \color{darkblue}c_0 + (\color{darkblue}c_1+\color{darkblue}c_2\cdot \color{darkred}x_2+\color{darkblue}c_5\cdot\color{darkred}x_2^2)/\color{darkred}x_1 + \color{darkblue}c_3\cdot \color{darkred}x_1 + \color{darkblue}c_4\cdot \color{darkred}x_2 \\ & \color{darkred}f_2 = \color{darkblue}d_1\cdot \color{darkred}x_1 + \color{darkblue}d_2\cdot \color{darkred}x_2 + \color{darkblue}d_3\cdot \color{darkred}x_1^2 + \color{darkblue}d_4\cdot \color{darkred}x_2^2 + \color{darkblue}d_5\cdot \color{darkred}x_1\cdot \color{darkred}x_2 \\ & \color{darkred}f_3 = -\color{darkblue}e_1\cdot \color{darkred}x_2 \\ & \color{darkred}x_1 + \color{darkred}x_2 \le \color{darkblue}a_3 \\ & \color{darkred}x_1 \in \{1,\dots,\color{darkblue}a_1\} \\ & \color{darkred}x_2 \in \{0,\dots,\color{darkblue}a_2\}\end{align}\]

First, we want to find the bounding box of the \(\color{darkred}f\)-space. We do this by solving the model with different values for the weights \(\color{darkblue}w_k\). The results are:

 
----     92 PARAMETER fbox  objective box

              lo          up

obj1       5.400     373.400
obj2       4.000     706.000
obj3      -8.000


----     95 PARAMETER M  big-M

obj1 368.000,    obj2 702.000,    obj3   8.000
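As a sanity check, the box corners can be reproduced with the evalf function from the first approach. Reading the corner points off the enumeration results: the minima of \(f_1\) and \(f_2\) are both attained at \((x_1,x_2)=(1,0)\), and their maxima at \((1,8)\) and \((7,8)\) respectively.

```python
# coefficients and evalf as in the enumeration code of approach 1
c0 = 1.4; c1 = 1; c2 = 2; c3 = 3; c4 = 4; c5 = 5
d1 = 1; d2 = 2; d3 = 3; d4 = 4; d5 = 5
e1 = 1

def evalf(x1, x2):
    f1 = c0 + (c1 + c2*x2 + c5*x2**2)/x1 + c3*x1 + c4*x2
    f2 = d1*x1 + d2*x2 + d3*x1**2 + d4*x2**2 + d5*x1*x2
    f3 = -e1*x2
    return f1, f2, f3

print(evalf(1, 0))  # f1 = 5.4, f2 = 4.0: the lower-left corner of the box
print(evalf(1, 8))  # f1 = 373.4: the maximum of f1 (f3 = -8 is its minimum)
print(evalf(7, 8))  # f2 = 706.0: the maximum of f2
```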


From the box coordinates we can calculate big-M values, \(\color{darkblue}M_k = \color{darkblue}{\mathit{fbox}}_{k,up} - \color{darkblue}{\mathit{fbox}}_{k,lo}\), for the constraints I'll discuss next. (The GAMS display suppresses zeros: the missing upper bound for obj3 is 0.)

The main ideas for tracing the frontier are:
  • A point found using positive weights \(\color{darkblue}w_k\) will be on the efficient frontier. That means: it will never be dominated by other feasible points.
  • I used weights \(\color{darkblue}w_k=1\).
  • We add constraints to make sure that a new point is different from, and not dominated by, earlier points. For this, we require that at least one objective improves. We can write this as a set of constraints: \[\begin{align} &  \color{darkred}f_k \le \color{darkblue}{\mathit{points}}_{p,k} - \color{darkblue}\epsilon + (\color{darkblue}M_k + \color{darkblue}\epsilon) \cdot (1-\color{darkred}\delta_{p,k}) && \forall p,k\\ &\sum_k\color{darkred}\delta_{p,k} \ge 1 && \forall p \\ & \color{darkred}\delta_{p,k}\in \{0,1\}\end{align}\] Here \(\color{darkblue}{\mathit{points}}_{p,k}\) are the objective values of previously found points, and \(\color{darkblue}\epsilon\gt 0\) is a small tolerance that forces a strict improvement; without it we may find the same point again. I used \(\color{darkblue}\epsilon=0.01\). These constraints express precisely what it means for a point to be non-dominated, which makes them a useful modeling device to know.
  • The algorithm will stop when there are no more non-dominated points to be found.
  • I used Baron to solve the MINLP subproblems.
  • This approach can only be used for fairly small problems. Finding thousands of points would be too expensive.
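The cuts can also be verified outside the solver. Ignoring the big-M linearization (which only serves to switch individual constraints on and off via \(\color{darkred}\delta_{p,k}\)), a candidate point satisfies the cuts iff, for every earlier point, at least one objective improves by at least \(\color{darkblue}\epsilon\). A small Python sketch of that test (function and variable names are mine, not from the model):

```python
EPS = 0.01  # the tolerance epsilon used in the text

def allowed(f, points, eps=EPS):
    """True if candidate objective vector f satisfies the cuts:
    for every earlier point, at least one objective improves by
    at least eps. (Big-M is only needed in the MIP formulation.)"""
    return all(any(fk <= pk - eps for fk, pk in zip(f, p))
               for p in points)

points = [(5.4, 4.0, 0.0)]                   # objectives of the first point (x1=1, x2=0)
print(allowed((16.4, 15.0, -1.0), points))   # True: f3 improves by at least 0.01
print(allowed((5.4, 4.0, 0.0), points))      # False: the same point is cut off
```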

The output looks like:


 
----    159 PARAMETER solutions  non-dominated solutions

                 x1          x2        obj1        obj2        obj3

point1        1.000                   5.400       4.000
point2        1.000       1.000      16.400      15.000      -1.000
point3        2.000       1.000      15.400      30.000      -1.000
point4        1.000       2.000      37.400      34.000      -2.000
point5        2.000       2.000      27.900      54.000      -2.000
point6        3.000       2.000      26.733      80.000      -2.000
point7        1.000       3.000      68.400      61.000      -3.000
point8        2.000       3.000      45.400      86.000      -3.000
point9        3.000       3.000      39.733     117.000      -3.000
point10       4.000       3.000      38.400     154.000      -3.000
point11       2.000       4.000      67.900     126.000      -4.000
point12       1.000       4.000     109.400      96.000      -4.000
point13       3.000       4.000      56.067     162.000      -4.000
point14       4.000       4.000      51.650     204.000      -4.000
point15       2.000       5.000      95.400     174.000      -5.000
point16       3.000       5.000      75.733     215.000      -5.000
point17       1.000       5.000     160.400     139.000      -5.000
point18       5.000       4.000      50.200     252.000      -4.000
point19       4.000       5.000      67.400     262.000      -5.000
point20       2.000       6.000     127.900     230.000      -6.000
point21       3.000       6.000      98.733     276.000      -6.000
point22       5.000       5.000      63.600     315.000      -5.000
point23       1.000       6.000     221.400     190.000      -6.000
point24       4.000       6.000      85.650     328.000      -6.000
point25       6.000       5.000      62.067     374.000      -5.000
point26       2.000       7.000     165.400     294.000      -7.000
point27       5.000       6.000      79.000     386.000      -6.000
point28       3.000       7.000     125.067     345.000      -7.000
point29       7.000       5.000      61.829     439.000      -5.000
point30       4.000       7.000     106.400     402.000      -7.000
point31       6.000       6.000      75.567     450.000      -6.000
point32       1.000       7.000     292.400     249.000      -7.000
point33       5.000       7.000      96.400     465.000      -7.000
point34       2.000       8.000     207.900     366.000      -8.000
point35       3.000       8.000     154.733     422.000      -8.000
point36       7.000       6.000      73.971     520.000      -6.000
point37       4.000       8.000     129.650     484.000      -8.000
point38       6.000       7.000      90.733     534.000      -7.000
point39       5.000       8.000     115.800     552.000      -8.000
point40       8.000       6.000      73.525     596.000      -6.000
point41       1.000       8.000     373.400     316.000      -8.000
point42       7.000       7.000      87.543     609.000      -7.000
point43       6.000       8.000     107.567     626.000      -8.000
point44       8.000       7.000      85.900     690.000      -7.000
point45       7.000       8.000     102.543     706.000      -8.000


We see that we can reproduce the results of the first approach. It always feels good when solutions are reproducible, especially when they are produced in very different ways.

Of course, many variants of this model are possible. For instance, we can explicitly prevent repeated points with no-good cuts.

References

  1. How to linearize a non-convex optimization objective function, https://or.stackexchange.com/questions/7013/how-to-linearize-a-non-convex-optimization-objective-function

Appendix: GAMS model


$ontext

  https://or.stackexchange.com/questions/7013/how-to-linearize-a-non-convex-optimization-objective-function

  find efficient frontier point by point

$offtext


*------------------------------------------------------------------------------
* data
*------------------------------------------------------------------------------

scalars
  c0 / 1.4 /
  c1 / 1 /
  c2 / 2 /
  c3 / 3 /
  c4 / 4 /
  c5 / 5 /
  d1 / 1 /
  d2 / 2 /
  d3 / 3 /
  d4 / 4 /
  d5 / 5 /
  e1 / 1 /
  a1 / 12 /
  a2 / 8 /
  a3 / 15 /
;


set dummy /x1,x2/;

*------------------------------------------------------------------------------
* base model
*------------------------------------------------------------------------------

integer variables x1,x2;

set k /obj1*obj3/;
alias (k,k2);

parameter w(k) 'weights';

variables
   f(k)  'objectives'
   f0    'weighted sum'
;

equations
   obj1
   obj2
   obj3
   xsum
   fsum
;

x1.lo = 1;
x1.up = a1;
x2.up = a2;


obj1..  f('obj1') =e= c0 + (c1+c2*x2+c5*sqr(x2))/x1 + c3*x1 + c4*x2;
obj2..  f('obj2') =e= d1*x1 + d2*x2 + d3*sqr(x1) + d4*sqr(x2) + d5*x1*x2;
obj3..  f('obj3') =e= -e1*x2;

xsum..  x1+x2 =l= a3;

fsum..  f0 =e= sum(k2,w(k2)*f(k2));

option optcr=0, minlp=baron;

model base /all/;

*------------------------------------------------------------------------------
* determine fbox: min and max f(k)
*------------------------------------------------------------------------------
base.solprint = %solprint.Silent%;

parameters
   fbox(k,*) 'objective box'
   M(k)      'big-M'
;

loop(k,
   w(k) = 1;
   solve base minimizing f0 using minlp;
   fbox(k,'lo') = f.l(k);
   solve base maximizing f0 using minlp;
   fbox(k,'up') = f.l(k);
   w(k) = 0;
);
display fbox;

M(k) = fbox(k,'up') - fbox(k,'lo');
display M;


*------------------------------------------------------------------------------
* non-dominated constraints
*------------------------------------------------------------------------------
sets
   p     'points' /point1*point1000/
   pf(p) 'found points'
;
pf(p) = no;

parameter solutions(p,*) 'non-dominated solutions';
solutions(p,k) = 0;

binary variable
   df(p,k) 'non-dominated check'
;

equations
   fbetter1(k,p) 'obj(k) is better compared to point p'
   fbetter2(p)
;

fbetter1(k,pf).. f(k) =l= solutions(pf,k) - 0.01 + (M(k)+0.01)*(1-df(pf,k));
fbetter2(pf)..   sum(k,df(pf,k)) =g= 1;

model ndom /all/;

*------------------------------------------------------------------------------
* big loop
*------------------------------------------------------------------------------


ndom.solprint = %solprint.Silent%;
w(k) = 1;

loop(p,

   solve ndom minimizing f0 using minlp;
   break$(ndom.modelstat <> %modelStat.optimal%);

   solutions(p,'x1') = x1.l;
   solutions(p,'x2') = x2.l;
   solutions(p,k) = f.l(k);

   pf(p) = yes;
);

display solutions;

parameter sums(*) 'checksum';
set c /x1,x2,obj1, obj2, obj3/;
sums(c) = sum(pf,solutions(pf,c));
display sums;
