Thursday, October 17, 2024

Equity in optimization models

In optimization models, we often use an aggregate measure in the objective function, such as total profit, the total tardiness of jobs, or countrywide GDP. This can lead to particularly bad results for some individuals or groups.

Here is an example I have used on several occasions. 

Problem Statement

We have \(P\) persons. They must be assigned to \(M\) groups or teams. For simplicity, we assume \(P\) is a multiple of \(M\), so each group has size \[\frac{P}{M}\] Each person \(p_1\) can specify a preference to be placed in the same group as person \(p_2\). A negative preference indicates that \(p_1\) would rather be in a different group than \(p_2\). Find an optimal assignment taking these preferences into account. 


Data

To make this a bit more explicit, I generated some random data.

We have 36 persons to be arranged into 6 teams of 6 persons. 

The preferences are defined by: \[\color{darkblue}{\mathit{pref}}_{p,p'} = \text{preference of person $p$ to be in the same group as $p'$}\] A positive preference means: "I want to work with this other person". Similarly, a negative preference means: "I prefer not to work with this other person".

As we will work with sums of preferences in the models, these numbers represent a cardinal utility (as opposed to an ordinal utility).
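
For reference, the sparse random preferences were generated in GAMS roughly as follows. This fragment is taken from the full model in the appendix; `ne(p1,p2)` is the set of off-diagonal pairs.

set p 'persons' /person1*person36/;
alias (p,p1,p2);

set ne(p1,p2) 'off-diagonal pairs';
ne(p1,p2) = ord(p1)<>ord(p2);

* roughly 15% of the off-diagonal cells get a random preference between -3 and 3
parameter pref(p1,p2) 'preferences of person p1';
option seed = 45634;
pref(ne)$(uniform(0,1)<0.15) = uniformint(-3,3);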


----     53 PARAMETER pref  preferences of person p1

person1 .person8  -1,    person1 .person16  2,    person1 .person21  3,    person1 .person25  3
person1 .person30 -3,    person2 .person16 -3,    person2 .person23  1,    person2 .person24  1
person2 .person26  3,    person2 .person33 -2,    person2 .person35  3,    person3 .person2  -3
person3 .person5   3,    person3 .person10  3,    person3 .person12 -1,    person3 .person16  3
person3 .person18 -2,    person3 .person22 -1,    person4 .person1   2,    person4 .person8  -3
person4 .person10  2,    person4 .person16 -1,    person4 .person28 -3,    person5 .person11 -3
person5 .person22 -2,    person5 .person30 -3,    person6 .person8   2,    person6 .person9  -3
person6 .person13  2,    person6 .person23 -3,    person6 .person28 -3,    person6 .person30  2
person7 .person5   1,    person7 .person9   2,    person7 .person16 -1,    person7 .person20 -1
person8 .person14  2,    person8 .person22 -2,    person8 .person36 -1,    person9 .person10 -1
person9 .person19  1,    person9 .person20  3,    person9 .person22  1,    person9 .person23 -2
person9 .person27 -3,    person10.person11  1,    person10.person25  2,    person11.person12 -1
person11.person20  3,    person11.person22  3,    person11.person24  1,    person11.person28  3
person11.person33  3,    person12.person1  -1,    person12.person19 -2,    person12.person21 -3
person12.person28  2,    person13.person19  3,    person13.person24  1,    person13.person29  3
person13.person36 -2,    person14.person11 -1,    person14.person12  2,    person14.person17 -3
person14.person34  1,    person15.person6  -1,    person15.person10 -2,    person15.person16 -3
person15.person25  3,    person16.person6  -1,    person16.person12 -2,    person16.person22 -1
person16.person27 -2,    person17.person1  -2,    person17.person15  2,    person17.person23 -1
person17.person24 -3,    person17.person26 -2,    person17.person27  1,    person18.person8  -3
person18.person9  -3,    person18.person11  1,    person18.person14 -3,    person18.person15 -3
person18.person23 -2,    person18.person28  1,    person19.person16 -2,    person19.person30  2
person20.person11  2,    person20.person22 -3,    person20.person25  1,    person20.person32  2
person20.person33 -2,    person20.person34 -1,    person21.person2  -2,    person21.person24 -2
person21.person31 -1,    person21.person32 -3,    person21.person36  2,    person22.person3   1
person22.person9   3,    person22.person20  3,    person22.person21 -3,    person22.person33 -3
person23.person8  -1,    person23.person18 -1,    person23.person29 -2,    person23.person30  2
person23.person32  3,    person23.person35 -3,    person24.person5   3,    person24.person17  3
person24.person21  3,    person24.person28  1,    person24.person29 -3,    person24.person33 -1
person24.person35 -3,    person25.person6  -2,    person25.person19 -1,    person25.person26 -3
person26.person2  -3,    person26.person4   2,    person26.person31  1,    person26.person35  2
person28.person10 -2,    person28.person13 -1,    person28.person15  2,    person28.person21 -1
person28.person23 -3,    person28.person24 -3,    person28.person29  3,    person29.person11 -3
person29.person23  1,    person31.person6  -1,    person31.person7   1,    person31.person18  1
person31.person22  2,    person31.person33 -2,    person31.person36 -3,    person32.person4   3
person32.person12 -3,    person32.person17  1,    person32.person18 -3,    person32.person21  2
person32.person24  3,    person32.person25  3,    person32.person28 -1,    person32.person36 -1
person33.person11  1,    person33.person16 -3,    person33.person19  3,    person33.person20 -2
person33.person22  2,    person33.person25 -3,    person33.person30  2,    person34.person1   2
person34.person2  -3,    person34.person11 -1,    person34.person17  1,    person34.person20  3
person34.person21 -3,    person34.person24 -2,    person35.person2  -2,    person35.person10 -3
person35.person18 -1,    person36.person6   3,    person36.person8  -1,    person36.person9  -3
person36.person24  3,    person36.person28 -2,    person36.person33 -3


Note that this is a square, sparse, and asymmetric matrix. Preferences are not symmetric: person1 may dislike working with person2, while person2 need not have the same dislike for person1.

In this data set, all groups are of the same size. It is not very difficult to generalize the models below to handle groups of different sizes.

Model 1: max sum preferences

In this section, we will focus on overall system efficiency. We maximize the total sum of achieved preference scores. A first model can look like:

Model 1: MIQP Model
\[\begin{align}\max & \sum_{p,p',g}\color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot\color{darkred}{\mathit{assign}}_{p',g} \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 & \forall p &\>\> \text{place person in exactly one group} \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} & \forall g & \>\>\text{sizing of groups} \\& \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \end{align}\]

Notes: 

  • This is a non-convex MIQP model.
  • Some solvers will linearize this automatically. Somewhat buried in the Cplex log you may see:
        Classifier predicts products in MIQP should be linearized.
  • The model has a lot of symmetry. As all groups are identical, we can fix person1 to group1 and restrict person2 to group1 or group2, person3 to one of groups 1 to 3, etc. This can be implemented simply as \(\color{darkred}{\mathit{assign}}_{p,g}=0\) for \(g \gt p\). 
  • We can make the matrix \(\color{darkblue}{\mathit{pref}}_{p,p'}\) strictly triangular, making the preferences even sparser. This triangularization only works for objectives with a quadratic form like \(x'Qx\). This trick is sometimes used in portfolio models. We apply: \[\color{darkblue}{\mathit{pref}\,'}_{p,p'} = \color{darkblue}{\mathit{pref}}_{p,p'}+\color{darkblue}{\mathit{pref}}_{p',p} \>\forall p\lt p'\] and then use \(\color{darkblue}{\mathit{pref}\,'}\) in the model. A GAMS sketch of both tricks follows this list.
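
In GAMS, both tricks amount to a few lines (this fragment is essentially what the appendix model does; `assign` is the binary assignment variable):

* triangularization: fold pref(p2,p1) into pref(p1,p2) for p1 < p2
set lt(p1,p2) 'strictly triangular pairs';
lt(p1,p2) = ord(p1)<ord(p2);

parameter prefTri(p1,p2) 'triangularized version of pref';
prefTri(lt(p1,p2)) = pref(p1,p2) + pref(p2,p1);

* symmetry breaking: person1 can only go to group1, person2 to groups 1 or 2, etc.
assign.fx(p,g)$(ord(g)>ord(p)) = 0;
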
When we apply the simple symmetry and triangularization tricks, we achieve a small performance improvement:

----    145 PARAMETER report  performance statistics

               model 1     model 1
            MIQP/basic    MIQP/adv

Variables      217.000     202.000
Equations       43.000      43.000
Nonzeros       649.000     604.000
Objective       82.000      82.000
Time             2.859       2.235
Nodes           27.000     322.000
Iterations   11788.000   32877.000


Cplex automatically linearizes this model, but we can also perform the linearization ourselves at the model level. This is an interesting exercise and not totally trivial. The basic approach for the multiplication of two binary variables is \[y_{i,j}=x_i\cdot x_j \iff \begin{align}&y_{i,j} \le x_i \\ & y_{i,j} \le x_j \\& y_{i,j} \ge x_i+x_j-1 \\ & y_{i,j}\in\{0,1\} \end{align} \] where we can relax the \(y\) variables to be continuous between 0 and 1: \(y_{i,j}\in [0,1]\). A basic linearization can look like:

Model 1: Basic MIP Model
\[\begin{align}\max & \sum_{p,p',g}\color{darkblue}{\mathit{pref\,'}}_{p,p'}\cdot\color{darkred}{\mathit{assign'}}_{p,p',g} \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \le \color{darkred}{\mathit{assign}}_{p,g} && \forall p,p',g \\& \color{darkred}{\mathit{assign'}}_{p,p',g} \le \color{darkred}{\mathit{assign}}_{p',g} && \forall p,p',g \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \ge\color{darkred}{\mathit{assign}}_{p,g} + \color{darkred}{\mathit{assign}}_{p',g} - 1 && \forall p,p',g \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 && \forall p  \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} && \forall g  \\& \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \in [0,1]  \end{align}\]
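
In the appendix model, this basic linearization is written with equations lin1, lin2, and lin3, defined over dynamic sets uplin and lolin that select which pairs get which constraints. In this basic version, both sets simply contain all off-diagonal pairs:

positive variable assign2(p1,p2,g) 'assign combo (p1,p2) to group';
assign2.up(p1,p2,g) = 1;

* basic version: all off-diagonal pairs get all three linearization constraints
uplin(p1,p2) = ne(p1,p2);
lolin(p1,p2) = ne(p1,p2);

lin1(uplin(p1,p2),g).. assign2(p1,p2,g) =l= assign(p1,g);
lin2(uplin(p1,p2),g).. assign2(p1,p2,g) =l= assign(p2,g);
lin3(lolin(p1,p2),g).. assign2(p1,p2,g) =g= assign(p1,g)+assign(p2,g)-1;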

The results are a bit disappointing:

----    184 PARAMETER report  performance statistics

               model 1     model 1     model 1
            MIQP/basic    MIQP/adv   MIP/basic

Variables      217.000     202.000    7777.000
Equations       43.000      43.000   22723.000
Nonzeros       649.000     604.000   54379.000
Objective       82.000      82.000      82.000
Time             2.859       2.235      33.187
Nodes           27.000     322.000     647.000
Iterations   11788.000   32877.000  309695.000

We need to pay more attention to the size of the model. As \(\color{darkblue}{\mathit{pref\,'}}_{p,p'}\) is very sparse, it helps a lot to generate the linearization only for nonzero \((p,p')\) combinations. An additional optimization is that the sign of \(\color{darkblue}{\mathit{pref\,'}}_{p,p'}\) lets us drop either \(y_{i,j} \le x_i, y_{i,j} \le x_j \) or \(y_{i,j} \ge x_i+x_j-1\). If \(\color{darkblue}{\mathit{pref\,'}}_{p,p'}\gt 0\), the objective drives the corresponding product variable upwards, so we don't need the \(\ge\) constraint, and vice versa. A somewhat more sophisticated linearized model can be as follows.

Model 1: Advanced MIP implementation
\[\begin{align}\max & \sum_{p,p',g}\color{darkblue}{\mathit{pref\,'}}_{p,p'}\cdot\color{darkred}{\mathit{assign'}}_{p,p',g} \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \le \color{darkred}{\mathit{assign}}_{p,g} && \forall p,p',g|\color{darkblue}{\mathit{pref\,'}}_{p,p'}\gt 0\\& \color{darkred}{\mathit{assign'}}_{p,p',g} \le \color{darkred}{\mathit{assign}}_{p',g} && \forall p,p',g|\color{darkblue}{\mathit{pref\,'}}_{p,p'}\gt 0 \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \ge\color{darkred}{\mathit{assign}}_{p,g} + \color{darkred}{\mathit{assign}}_{p',g} - 1 && \forall p,p',g|\color{darkblue}{\mathit{pref\,'}}_{p,p'}\lt 0 \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 && \forall p  \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} && \forall g  \\& \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \in [0,1]  \end{align}\]

Note that this linearization ignores all cases where \(\color{darkblue}{\mathit{pref\,'}}_{p,p'}= 0\). 
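
In the GAMS model, switching from the basic to this advanced linearization is just a matter of repopulating the sets uplin and lolin (and the preference parameter) before the next solve; the equations themselves do not change:

* advanced version: only generate a constraint where the (triangularized)
* preference coefficient can push the product variable in the wrong direction
uplin(p1,p2) = prefTri(p1,p2) > 0;
lolin(p1,p2) = prefTri(p1,p2) < 0;
upref(p1,p2) = prefTri(p1,p2);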

The results are better, but not much better than the quadratic formulations.

----    197 PARAMETER report  performance statistics

               model 1     model 1     model 1     model 1
            MIQP/basic    MIQP/adv   MIP/basic     MIP/adv

Variables      217.000     202.000    7777.000    1117.000
Equations       43.000      43.000   22723.000    1333.000
Nonzeros       649.000     604.000   54379.000    4423.000
Objective       82.000      82.000      82.000      82.000
Time             2.859       2.235      33.187       2.468
Nodes           27.000     322.000     647.000     263.000
Iterations   11788.000   32877.000  309695.000   42328.000

We really need to pay attention to the sparsity pattern in the preference matrix. The automatic MIQP → MIP transformations inside the solver do this for us, and we don't have to worry. When doing this by hand, we have to be very careful. 

When solving this model, we see the following results. 

----    197 VARIABLE assign.L  assign person to group

              group1      group2      group3      group4      group5      group6

person1            1
person2                        1
person3                                    1
person4            1
person5                                    1
person6                                                            1
person7                                                1
person8                        1
person9                                                1
person10           1
person11                                               1
person12                       1
person13                                                           1
person14                       1
person15                                                                       1
person16                                   1
person17                                                                       1
person18                                                           1
person19                                                           1
person20                                               1
person21                                   1
person22                                               1
person23           1
person24                                   1
person25           1
person26                       1
person27                                                                       1
person28                                                                       1
person29                                                                       1
person30                                                           1
person31                                               1
person32           1
person33                                                           1
person34                                                                       1
person35                       1
person36                                   1

This report is not so interesting: it just shows that the assignment of persons to groups worked. More interesting is how each person is doing with respect to his or her preferences. Below, we sum up the realized preferences for each person: 

----    202 PARAMETER SumPreferences  Final sum of preferences for each person

             model 1

person1        3.000
person2        6.000
person3        6.000
person4        4.000
person6        4.000
person7        1.000
person8        2.000
person9        4.000
person10       2.000
person11       6.000
person13       3.000
person14       2.000
person17       3.000
person19       2.000
person20      -1.000
person22       6.000
person23       3.000
person24       6.000
person26      -1.000
person28       5.000
person31       3.000
person32       6.000
person33       5.000
person34       1.000
person35      -2.000
person36       3.000
sum           82.000
min           -2.000
max            6.000

Our total objective is 82. Persons 20, 26 and 35 are doing poorly, with a net negative score. Some persons are doing very well. Note that the zero entries are not printed. For example, person 5 has a net score of 0. 

We maximized the overall sum of achieved preferences. This can hurt some individuals who are forced to take one for the team. In the next sections, we discuss how we can repair this. 

What have we learned here?

  1. Proper linearization of a quadratic model is not very easy. 
  2. The problem of equity: a model that optimizes overall performance may excessively hurt some individuals.


Model 2: maximize the minimum score 

In this section, we focus on one thing: preventing very low scores. We can do this by maximizing the minimum score. The high-level model is:


Model 2: MAXIMIN Model
\[\begin{align}\max\> & \min_p \> \sum_{p',g}\color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot\color{darkred}{\mathit{assign}}_{p',g}  \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 && \forall p  \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} && \forall g  \\& \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \end{align}\]


We maximize the minimum score over all persons \(p\). This is sometimes called a maximin objective. A standard MIQCP formulation is just the epigraph version of this:


Model 2: MIQCP Model
\[\begin{align}\max\> & \color{darkred}z \\ &\color{darkred}z \le \sum_{p',g}\color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot\color{darkred}{\mathit{assign}}_{p',g} && \forall p \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 && \forall p  \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} && \forall g  \\& \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \end{align}\]


This is no longer a traditional MIQP model, as the quadratic terms are in the constraints. It is now an MIQCP (Mixed-Integer Quadratically Constrained Program) model. 
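
In GAMS, the epigraph constraint is a one-liner; this is the maxminqobj equation from the appendix (note that it uses the original asymmetric preferences, not the triangularized version):

variable z2 'objective variable model 2';
equation maxminqobj(p) 'maximize the minimum net score of each person';

maxminqobj(p1).. z2 =l= sum((p2,g), pref(p1,p2)*assign(p1,g)*assign(p2,g));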

The quadratic constraint is non-convex, so hopefully, the solver will linearize this. Of course, we can also do this linearization at the model level: 


Model 2: MIP Model
\[\begin{align}\max\> & \color{darkred}z \\ &\color{darkred}z \le \sum_{p',g}\color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign'}}_{p,p',g} && \forall p \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \le \color{darkred}{\mathit{assign}}_{p,g} && \forall p,p',g|\color{darkblue}{\mathit{pref}}_{p,p'}\ne 0\\& \color{darkred}{\mathit{assign'}}_{p,p',g} \le \color{darkred}{\mathit{assign}}_{p',g} && \forall p,p',g|\color{darkblue}{\mathit{pref}}_{p,p'}\ne 0 \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \ge\color{darkred}{\mathit{assign}}_{p,g} + \color{darkred}{\mathit{assign}}_{p',g} - 1 && \forall p,p',g|\color{darkblue}{\mathit{pref}}_{p,p'}\ne 0 \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 && \forall p  \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} && \forall g  \\& \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \\ & \color{darkred}{\mathit{assign'}}_{p,p',g} \in [0,1]  \end{align}\]


Not all the modeling tricks we applied in the linearization of Model 1 can be reused here. For instance, we can't drop inequalities based on the sign of the preferences or use the triangular preference matrix (exercise: explain why). We exploit, however, the sparsity in the preferences.

----    237 PARAMETER report  performance statistics

               model 1     model 1     model 1     model 1     model 2     model 2
            MIQP/basic    MIQP/adv   MIP/basic     MIP/adv       MIQCP         MIP

Variables      217.000     202.000    7777.000    1117.000     217.000    1243.000
Equations       43.000      43.000   22723.000    1333.000      78.000    3156.000
Nonzeros       649.000     604.000   54379.000    4423.000    1698.000    8676.000
Objective       82.000      82.000      82.000      82.000
Time             2.859       2.235      33.187       2.468       0.703       0.109
Nodes           27.000     322.000     647.000     263.000     388.000
Iterations   11788.000   32877.000  309695.000   42328.000   15038.000     265.000


We see that the optimal objective is 0, indicating that all persons achieved a non-negative net preference score. Indeed:

----    239 PARAMETER SumPreferences  Final sum of preferences for each person

             model 1     model 2

person1        3.000
person2        6.000
person3        6.000
person4        4.000
person6        4.000       2.000
person7        1.000
person8        2.000
person9        4.000
person10       2.000
person11       6.000       3.000
person13       3.000       1.000
person14       2.000
person17       3.000
person18                   1.000
person19       2.000
person20      -1.000       3.000
person22       6.000
person23       3.000
person24       6.000
person26      -1.000       2.000
person28       5.000
person31       3.000       1.000
person32       6.000
person33       5.000
person34       1.000
person35      -2.000
person36       3.000
sum           82.000      13.000
min           -2.000
max            6.000       3.000


Apart from improving these few bad cases, the solution looks terrible. Our overall sum of achieved preferences ended up at 13, a big drop from 82. The reason is that this model only focuses on preventing bad outcomes and does not care at all about individuals with a score higher than the minimum of zero. It confirms that we cannot make the minimum better than zero, but that is all it says.

We need to combine models 1 and 2 to find solutions that both maximize the minimum score and maximize the overall performance.


Model 3: Preemptive Multi-objective Model

The two single-objective models (maximize the total sum and the maximin model) each deliver unattractive solutions. Taking a step back, we have a multi-objective problem: 

Multi-objective model
\[\begin{align}\max\> & \color{darkred}z_1 = \sum_{p,p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot \color{darkred}{\mathit{assign}}_{p',g} \\ \max\> & \color{darkred}z_2 = \min_p \sum_{p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot \color{darkred}{\mathit{assign}}_{p',g} \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 & \forall p \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} & \forall g \\ & \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \end{align}\]


There are two popular ways to attack a problem like this:

  • Preemptive or lexicographic
    Optimize \(\color{darkred}z_1\). Turn objective 1 into a constraint with \(\color{darkred}z_1=\color{darkblue}z^*_1\) fixed to its optimal value. Then optimize \(\color{darkred}z_2\). Of course, we can also do this the other way around: optimize \(\color{darkred}z_2\) and then \(\color{darkred}z_1\). The latter version is what we try next.
  • Weighted sum.
    Combine both objectives into one by forming a linear combination using some weights. 
In this section, we work on the preemptive model:
  1. Solve model 2 from the previous section. We did this already, so this step is done. Denote the optimal objective value by \(\color{darkblue}z^*_2\). 
  2. Replace the objective \[\max\> \color{darkred}z_2 = \min_p \sum_{p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot \color{darkred}{\mathit{assign}}_{p',g}\] by a constraint: \[\color{darkblue}z^*_2 = \min_p \sum_{p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot \color{darkred}{\mathit{assign}}_{p',g}\] 
  3. Solve with objective: \[\max\> \color{darkred}z_1 = \sum_{p,p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot \color{darkred}{\mathit{assign}}_{p',g}\]
We can streamline the last step a bit by introducing the variable \(\color{darkred}z_p\), which holds the score of person \(p\):

Model 3: Quadratic Preemptive Model
\[\begin{align}\max& \sum_p \color{darkred}z_p \\ & \color{darkred}z_p = \sum_{p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot \color{darkred}{\mathit{assign}}_{p,g}\cdot\color{darkred}{\mathit{assign}}_{p',g} \\ & \color{darkred}z_{model2}^* \le \color{darkred}z_p & \forall p\\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 & \forall p \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} & \forall g \\& \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\}\end{align}\]

The constraint \(\color{darkred}z_{model2}^* \le \color{darkred}z_p\) can be implemented by setting a lower bound on the variable \(\color{darkred}z_p\); a GAMS fragment showing this is given after the notes below. Interestingly, Cplex did linearize the MIQP models of the previous sections but refuses to do so for this model. I threw this model into a global MINLP solver. Notes:

  • Gurobi handles this better than Cplex and accepts this model as is.
  • We can trick Cplex into accepting the model by using explicit constraints \(\color{darkred}z_{model2}^* \le \color{darkred}z_p\). However, as Cplex does not linearize this, the resulting model is very slow. 
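
The relevant fragment from the appendix model: the score per person is introduced as a variable zp(p), and the optimal maximin value from model 2 (available in z2.l after that solve) becomes its lower bound:

variable zp(p) 'sum preferences for each person';
equation zpdef(p) 'sum of realized preferences of each person';

zpdef(p1).. zp(p1) =e= sum((p2,g), pref(p1,p2)*assign(p1,g)*assign(p2,g));

* preemptive step: the model 2 objective value becomes a lower bound on each score
zp.lo(p) = z2.l;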

Of course, we can linearize the model in a similar way as was shown before.

The results are:

----    289 PARAMETER SumPreferences  Final sum of preferences for each person

             model 1     model 2     model 3

person1        3.000                   3.000
person2        6.000
person3        6.000                   6.000
person4        4.000                   4.000
person6        4.000       2.000       6.000
person7        1.000                   2.000
person8        2.000
person9        4.000                   1.000
person10       2.000                   2.000
person11       6.000       3.000       3.000
person12                               2.000
person13       3.000       1.000       3.000
person14       2.000                   2.000
person17       3.000                   1.000
person18                   1.000       1.000
person19       2.000                   2.000
person20      -1.000       3.000       1.000
person22       6.000                   3.000
person23       3.000                   3.000
person24       6.000                   6.000
person26      -1.000       2.000       3.000
person28       5.000                   5.000
person31       3.000       1.000       3.000
person32       6.000                   6.000
person33       5.000                   5.000
person34       1.000                   3.000
person35      -2.000
person36       3.000                   3.000
sum           82.000      13.000      79.000
min           -2.000
max            6.000       3.000       6.000


Here, we see that imposing the condition that all individuals should see a non-negative net score is not as harmful to the overall objective as we feared after running Model 2. Now, we only have a decrease in overall attained preferences of 3: the objective goes from 82 to 79. 

We see that we really should look at this as a multi-objective problem. Both the single-objective models (model 1 and model 2) yield unsatisfactory solutions. Using a preemptive approach, we can properly get a handle on the trade-offs between equity and overall performance.


Model 4: weighted sum approach


In the previous section, we used a preemptive or lexicographic approach to tackle the multi-objective problem. Here, we use a weighted sum method. We should be able to find our preemptive solution using a suitable set of weights [2]. As we want to put a lot of weight on the model 2 objective, I chose as weights \(\color{darkblue}w_2 = 0.99\) and \(\color{darkblue}w_1 = 0.01\). Note that there are algorithms to find such weights [3,4], but here I just chose a "large" and a "small" weight. The weighted sum model looks like:


Model 4: Weighted Sum Multi-objective Model
\[\begin{align}\max\> & \color{darkblue}w_1\cdot\color{darkred}z_1 +\color{darkblue}w_2\cdot\color{darkred}z_2 \\ &\color{darkred}z_1 = \sum_{p,p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot \color{darkred}{\mathit{assign}}_{p',g} \\ & \color{darkred}z_2 = \min_p \sum_{p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot\color{darkred}{\mathit{assign}}_{p,g}\cdot \color{darkred}{\mathit{assign}}_{p',g} \\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 & \forall p \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} & \forall g \\ & \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \end{align}\]

We can streamline this a bit by introducing the score per person \(\color{darkred}z_p\):


Model 4: Reformulated Weighted Sum Model
\[\begin{align}\max\> & \color{darkblue}w_1\cdot\color{darkred}z_1 +\color{darkblue}w_2\cdot\color{darkred}z_2 \\ &\color{darkred}z_1 = \sum_{p} \color{darkred}z_p \\ & \color{darkred}z_2 \le \color{darkred}z_p & \forall p \\&\color{darkred}z_p =\sum_{p',g} \color{darkblue}{\mathit{pref}}_{p,p'}\cdot \color{darkred}{\mathit{assign}}_{p,g}\cdot\color{darkred}{\mathit{assign}}_{p',g}\\ & \sum_g \color{darkred}{\mathit{assign}}_{p,g} = 1 & \forall p \\ & \sum_p \color{darkred}{\mathit{assign}}_{p,g} = \color{darkblue}{\mathit{groupSize}} & \forall g \\ & \color{darkred}{\mathit{assign}}_{p,g} \in \{0,1\} \end{align}\]
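
In the appendix model, the weighted objective and the bounding constraint on the per-person scores look like this (with w = 0.01 the weight on the overall sum):

scalar w 'weight for obj z1 (overall sum)' /0.01/;

variable z4 'model 4 objective';
equations
   obj4     'weighted sum objective'
   zpbnd(p) 'z2 is a lower bound on each zp(p)'
;

obj4..     z4 =e= w*z1 + (1-w)*z2;
zpbnd(p).. z2 =l= zp(p);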


We can linearize this using the techniques discussed earlier. The results are similar to model 3 from the previous section:


----    326 PARAMETER SumPreferences  Final sum of preferences for each person

             model 1     model 2     model 3     model 4

person1        3.000                   3.000       3.000
person2        6.000
person3        6.000                   6.000       6.000
person4        4.000                   4.000       4.000
person6        4.000       2.000       6.000       4.000
person7        1.000                   2.000       2.000
person8        2.000                               2.000
person9        4.000                   1.000       1.000
person10       2.000                   2.000       2.000
person11       6.000       3.000       3.000       3.000
person12                               2.000       2.000
person13       3.000       1.000       3.000       6.000
person14       2.000                   2.000       2.000
person17       3.000                   1.000       1.000
person18                   1.000       1.000       1.000
person19       2.000                   2.000       2.000
person20      -1.000       3.000       1.000       1.000
person22       6.000                   3.000       3.000
person23       3.000                   3.000       3.000
person24       6.000                   6.000       6.000
person26      -1.000       2.000       3.000       3.000
person28       5.000                   5.000       2.000
person31       3.000       1.000       3.000       3.000
person32       6.000                   6.000       6.000
person33       5.000                   5.000       5.000
person34       1.000                   3.000       3.000
person35      -2.000
person36       3.000                   3.000       3.000
sum           82.000      13.000      79.000      79.000
min           -2.000
max            6.000       3.000       6.000       6.000

 



Conclusions

  1. MINLP formulations can be a powerful tool. They can be more expressive and significantly simpler than their linearized MIP versions. Some solvers provide automatic reformulations. Somewhat surprisingly, current modeling tools do not provide much in this respect. Automatic reformulations are very helpful here, as proper linearizations may require much attention to detail.
  2. Equity or fairness is an interesting aspect of optimization models. Often, this is ignored. Thinking about equity is not just a technical issue but is also a more conceptual exercise. It often helps to look at this from a multi-objective perspective.   


References

  1. Violet Xinying Chen & J. N. Hooker, 2023. "A guide to formulating fairness in an optimization model," Annals of Operations Research, Springer, vol. 326(1), pages 581-619, July.
  2. Sherali, H.D., Soyster, A.L. Preemptive and nonpreemptive multi-objective programming: Relationship and counterexamples. J Optim Theory Appl 39, 173–186 (1983)
  3. Hanif D. Sherali, Equivalent weights for lexicographic multi-objective programs: Characterizations and computations, European Journal of Operational Research, Volume 11, Issue 4, 1982, Pages 367-379
  4. James P. Ignizio, Lyn C. Thomas, An enhanced conversion scheme for lexicographic, multiobjective integer programs, European Journal of Operational Research, Volume 18, Issue 1, 1984, Pages 57-61.

Appendix: GAMS Model


$onText

   Assign persons to groups according to stated preferences

   Model 1: maximize overall sum
            may disadvantage some individuals
   Model 2: max min sum per person
            best lower bound per person, but bad solution otherwise
   Model 3: preemptive (lexicographic) model
            first optimize min sum per person (obj from model 2)
            (we skip this step as we already did this previously)
            then optimize overall sum (obj from model 1)
   Model 4: weighted sum model
            weights  0.01 for model 1 obj
                     0.99 for model 2 obj

$offText

option mip=cplex, miqcp=cplex;

*-----------------------------------------------------------------------
* problem size
*-----------------------------------------------------------------------

Set
   p 'persons' /person1*person36/
   g 'groups'  /group1*group6/
;

abort$(card(p)/card(g)<>round(card(p)/card(g)))
    "number of persons should be a multiple of number of groups";

alias (p,p1,p2);

sets
   ne(p1,p2) 'off diagonal'
   lt(p1,p2) 'strict triangular'
;
ne(p1,p2) = ord(p1)<>ord(p2);
lt(p1,p2) = ord(p1)<ord(p2);

scalar groupsize 'number of persons in each group';
groupsize = round(card(p)/card(g));

*-----------------------------------------------------------------------
* random sparse preferences
*-----------------------------------------------------------------------

option seed = 45634;
parameter pref(p1,p2) 'preferences of person p1';
pref(ne)$(uniform(0,1)<0.15) = uniformint(-3,3);
option pref:0:0:4;
display pref;

*-----------------------------------------------------------------------
* reporting macros
*-----------------------------------------------------------------------

parameter report(*,*,*) 'performance statistics';
option report:3:1:2;

$macro statistics(m,name,desc) \
assign.l(p,g) = round(assign.l(p,g));  \
option assign:0; display assign.l;     \
report('Variables',name,desc)  = m.numvar;  \
report('Equations',name,desc)  = m.numequ;  \
report('Nonzeros',name,desc)   = m.numnz;   \
report('Objective',name,desc)  = m.objval;  \
report('Time',name,desc)       = m.resusd;  \
report('Nodes',name,desc)      = m.nodusd;  \
report('Iterations',name,desc) = m.iterusd; \
display report;

set inSameGroup(p1,p2);
parameter SumPreferences(*,*) 'Final sum of preferences for each person';

$macro reporting(name) \
inSameGroup(ne(p1,p2)) = sum(g,assign.l(p1,g)*assign.l(p2,g));  \
SumPreferences(p1,name) = sum(inSameGroup(p1,p2),pref(p1,p2));  \
SumPreferences('sum',name) = sum(p1,SumPreferences(p1,name));   \
SumPreferences('min',name) = smin(p1,SumPreferences(p1,name));  \
SumPreferences('max',name) = smax(p1,SumPreferences(p1,name));  \
display SumPreferences;

*-----------------------------------------------------------------------
* Model 1: max sum preferences
* MIQP basic version
*-----------------------------------------------------------------------

parameter upref(p1,p2) 'used preferences in objective';

binary variable assign(p,g) 'assign person to group';
variable z1 'model 1 obj';

equations
    qsumobj     'quadratic objective: maximize sum of preferences'
    eassign1(p) 'each person is assigned to exactly one group'
    eassign2(g) 'size of group'
;

* objective: sum of preferences (quadratic)
qsumobj.. z1 =e= sum((p1,p2,g),upref(p1,p2)*assign(p1,g)*assign(p2,g));

* assignment constraints
eassign1(p).. sum(g,assign(p,g)) =e= 1;
eassign2(g).. sum(p,assign(p,g)) =e= groupsize;

* use default preferences
upref(p1,p2) = pref(p1,p2);

model m1_q /all/;
solve m1_q maximizing z1 using miqcp;
statistics(m1_q,'model 1','MIQP/basic')

*-----------------------------------------------------------------------
* Model 1: max sum preferences
* MIQP more advanced version
*   1. make preferences triangular
*   2. fix some variables (symmetry)
* basically small potatoes
*-----------------------------------------------------------------------

* make the preferences triangular
* this can be beneficial for very large problems as we make
* things sparser. here it is largely a gimmick
parameter prefTri(p1,p2) 'triangularized version of pref';
prefTri(lt(p1,p2)) = pref(p1,p2)+pref(p2,p1);

* use triangular preferences
upref(p1,p2) = prefTri(p1,p2);

* symmetry
* wlog we can require:
*   person1 -> group1
*   person2 -> group1 or group2
*   person3 -> group1 or group2 or group3
* etc
assign.fx(p,g)$(ord(g)>ord(p)) = 0;

* remove fixed variables from model so we can
* see a change in the number of variables
m1_q.holdfixed=1;

solve m1_q maximizing z1 using miqcp;
statistics(m1_q,'model 1','MIQP/adv')

* this version is marginally better than the previous one.

*-----------------------------------------------------------------------
* Model 1: max sum preferences
* linearized MIQP model, basic version
* does not properly account for sparseness in pref()
*-----------------------------------------------------------------------

Sets
   uplin(p1,p2) '(p1,p2) combos used in <= linearization constraints'
   lolin(p1,p2) '(p1,p2) combos used in >= linearization constraints'
;
* in this basic version use all offdiagonal linearizations
uplin(p1,p2) = ne(p1,p2);
lolin(p1,p2) = ne(p1,p2);

* use default preferences
upref(p1,p2) = pref(p1,p2);

positive variable assign2(p1,p2,g) 'assign combo (p1,p2) to group';
assign2.up(p1,p2,g) = 1;

equations
    linsumobj     'linearized sum pref objective'
    lin1(p1,p2,g) 'basic linearization of assign(p1,g)*assign(p2,g)'
    lin2(p1,p2,g) 'basic linearization of assign(p1,g)*assign(p2,g)'
    lin3(p1,p2,g) 'basic linearization of assign(p1,g)*assign(p2,g)'
;

linsumobj.. z1 =e= sum((p1,p2,g),upref(p1,p2)*assign2(p1,p2,g));

lin1(uplin(p1,p2),g).. assign2(p1,p2,g) =l= assign(p1,g);
lin2(uplin(p1,p2),g).. assign2(p1,p2,g) =l= assign(p2,g);
lin3(lolin(p1,p2),g).. assign2(p1,p2,g) =g= assign(p1,g)+assign(p2,g)-1;

model m1_lin /linsumobj,lin1,lin2,lin3,eassign1,eassign2/;
solve m1_lin maximizing z1 using mip;
statistics(m1_lin,'model 1','MIP/basic')

*-----------------------------------------------------------------------
* Model 1: linearized MIQP model, sparse version
* use sparseness and signs of prefTri to generate fewer
* variables and constraints
*-----------------------------------------------------------------------

uplin(p1,p2) = prefTri(p1,p2)>0;
lolin(p1,p2) = prefTri(p1,p2)<0;
upref(p1,p2) = prefTri(p1,p2);

solve m1_lin maximizing z1 using mip;
statistics(m1_lin,'model 1','MIP/adv')

* this version is much better than the previous one
* i.e. we need to pay attention to the details

reporting('model 1')

abort$(SumPreferences('min','model 1')>=-1.1) "want negative min sum";

*-----------------------------------------------------------------------
* Model 2: max min prefs model MIQCP model
*-----------------------------------------------------------------------

variable z2 'objective variable model 2';
equations maxminqobj(p) 'maximize the minimum net score of each person';

* don't use the triangular version of pref
* we need the original asymmetric version
maxminqobj(p1).. z2 =l= sum((p2,g),pref(p1,p2)*assign(p1,g)*assign(p2,g));

model m2_q /maxminqobj,eassign1,eassign2/;
solve m2_q maximizing z2 using miqcp;
statistics(m2_q,'model 2','MIQCP')

*-----------------------------------------------------------------------
* Model 2: max min pref model MIP model
*-----------------------------------------------------------------------

* don't use the triangular version of pref
* we need the original asymmetric version
equations maxminlinobj(p) 'maximize the minimum net score of each person';
maxminlinobj(p1).. z2 =l= sum((p2,g),pref(p1,p2)*assign2(p1,p2,g));

* we need the constraints used in the linearization for all
* pref(p1,p2)<>0
uplin(p1,p2) = pref(p1,p2);
lolin(p1,p2) = pref(p1,p2);

model m2_lin /maxminlinobj,lin1,lin2,lin3,eassign1,eassign2/;
solve m2_lin maximizing z2 using mip;
statistics(m2_lin,'model 2','MIP')

reporting('model 2')

*-----------------------------------------------------------------------
* Model 3: preemptive multi obj approach
* fix model 2, then apply model 1 obj
* Quadratic model
*-----------------------------------------------------------------------

* cplex refuses this model (Gurobi works fine)
option miqcp = scip;

variable zp(p)  'sum preferences for each person';
equations
   zpdef(p)     'quadratic constraint: sum preferences each person'
   m3obj2       'second obj: maximize overall sum preferences'
;

* fix min zp(p) using previous result
zp.lo(p) = z2.l;
* for Cplex: to make this model accepted by Cplex use explicit constraints
* instead of bounds. But then Cplex starts to use Quadratic relaxations.
* That is very, very slow on this model. The qtolin option has no effect on
* constraints.

zpdef(p1).. zp(p1) =e= sum((p2,g),pref(p1,p2)*assign(p1,g)*assign(p2,g));

m3obj2.. z1 =e= sum(p,zp(p));

model m3_q /zpdef,m3obj2,eassign1,eassign2/;
solve m3_q maximizing z1 using miqcp;
statistics(m3_q,'model 3','MIQCP')

*-----------------------------------------------------------------------
* Model 3: preemptive multi obj approach
* fix model 2, then apply model 1 obj
* Linear model
*-----------------------------------------------------------------------

equation zpdef2(p) 'linearized version of zpdef';

zpdef2(p1).. zp(p1) =e= sum((p2,g),pref(p1,p2)*assign2(p1,p2,g));

* note: we still have:
*   zp.lo(p) = z2.l;

model m3_lin /zpdef2,m3obj2,lin1,lin2,lin3,eassign1,eassign2/;
solve m3_lin maximizing z1 using mip;
statistics(m3_lin,'model 3','MIP')

reporting('model 3')

*-----------------------------------------------------------------------
* Model 4: weighted sum multi obj approach
* Quadratic model
*-----------------------------------------------------------------------

* reset lower bounds from model 3
zp.lo(p) = -INF;

* put most weight on the second objective
* 0.01 on obj1 (overall sum)
* 0.99 on obj2 (max min sum)
scalar w 'weight for obj z1' /0.01/;

variable z4 'model 4 objective';
equations
   obj4     'weighted sum objective'
   zpbnd(p) 'z2 is bound on zp'
;

obj4.. z4 =e= w*z1 + (1-w)*z2;
zpbnd(p).. z2 =l= zp(p);

model m4_q /obj4,zpdef,m3Obj2,zpbnd,eassign1,eassign2/;
solve m4_q maximizing z4 using miqcp;
statistics(m4_q,'model 4','MIQCP')

*-----------------------------------------------------------------------
* Model 4: weighted sum multi obj approach
* linear model
*-----------------------------------------------------------------------

model m4_lin /obj4,zpdef2,m3Obj2,zpbnd,lin1,lin2,lin3,eassign1,eassign2/;
solve m4_lin maximizing z4 using mip;
statistics(m4_lin,'model 4','MIP')

reporting('model 4')
