## Monday, November 11, 2019

### Interesting constraint

#### Problem statement

In [1] the following problem is proposed:

I'm trying to solve a knapsack-style optimization problem with additional complexity.
Here is a simple example. I'm trying to select 5 items that maximize value. 2 of the items must be orange, one must be blue, one must be yellow, and one must be red. This is straightforward. However, I want to add a constraint that the selected yellow, red, and orange items can only have one shape in common with the selected blue item.

#### Data

The example data looks like:

 item   color     shape  value
A    blue    circle   0.454
B  yellow    square   0.570
C     red  triangle   0.789
D     red    circle   0.718
E     red    square   0.828
F  orange    square   0.709
G    blue    circle   0.696
H  orange    square   0.285
I  orange    square   0.698
J  orange  triangle   0.861
K    blue  triangle   0.658
L  yellow    circle   0.819
M    blue    square   0.352
N  orange    circle   0.883
O  yellow  triangle   0.755


This last constraint is a bit difficult to grasp right away. The two solutions listed below may help clarify how I interpreted it.

#### Derived data

Let's see if we can model this. First we slice and dice the data to make the modeling easier. Here is some derived data:

----     65 SET i  item

A,    B,    C,    D,    E,    F,    G,    H,    I,    J,    K,    L,    M,    N,    O

----     65 SET c  color

blue  ,    yellow,    red   ,    orange

----     65 SET s  shape

circle  ,    square  ,    triangle

----     65 SET ICS  (i,c,s) combinations

              circle      square    triangle

A.blue           YES
B.yellow                     YES
C.red                                    YES
D.red            YES
E.red                        YES
F.orange                     YES
G.blue           YES
H.orange                     YES
I.orange                     YES
J.orange                                 YES
K.blue                                   YES
L.yellow         YES
M.blue                       YES
N.orange         YES
O.yellow                                 YES

----     65 SET IC  (i,c) combinations

         blue      yellow         red      orange

A         YES
B                     YES
C                                 YES
D                                 YES
E                                 YES
F                                             YES
G         YES
H                                             YES
I                                             YES
J                                             YES
K         YES
L                     YES
M         YES
N                                             YES
O                     YES

----     65 SET IS  (i,s) combinations

       circle      square    triangle

A         YES
B                     YES
C                                 YES
D         YES
E                     YES
F                     YES
G         YES
H                     YES
I                     YES
J                                 YES
K                                 YES
L         YES
M                     YES
N         YES
O                                 YES

----     65 SET blue  blue colored items

A,    G,    K,    M

----     65 PARAMETER value  value of item

A 0.454,    B 0.570,    C 0.789,    D 0.718,    E 0.828,    F 0.709,    G 0.696,    H 0.285,    I 0.698,    J 0.861
K 0.658,    L 0.819,    M 0.352,    N 0.883,    O 0.755


I often introduce derived data in more convenient formats. This helps in writing the model in a more readable way. We can develop and debug this derived data before we work on the model equations themselves.

Note that the set CS(c,s) is complete. However, I will assume that there is a possibility that this set has some missing entries. In other words, I will not assume that all combinations of colors and shapes exist in the data.
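To make this concrete, the same derived sets can be sketched in plain Python from a list of records (a sketch using the example data; the names mirror the GAMS sets):

```python
# Item data from the example: (item, color, shape, value).
data = [
    ("A", "blue",   "circle",   0.454), ("B", "yellow", "square",   0.570),
    ("C", "red",    "triangle", 0.789), ("D", "red",    "circle",   0.718),
    ("E", "red",    "square",   0.828), ("F", "orange", "square",   0.709),
    ("G", "blue",   "circle",   0.696), ("H", "orange", "square",   0.285),
    ("I", "orange", "square",   0.698), ("J", "orange", "triangle", 0.861),
    ("K", "blue",   "triangle", 0.658), ("L", "yellow", "circle",   0.819),
    ("M", "blue",   "square",   0.352), ("N", "orange", "circle",   0.883),
    ("O", "yellow", "triangle", 0.755),
]

# Derived data, mirroring the GAMS sets ICS, IC, IS and blue.
ICS = {(i, c, s) for (i, c, s, v) in data}
IC = {(i, c) for (i, c, s) in ICS}
IS = {(i, s) for (i, c, s) in ICS}
blue = {i for (i, c) in IC if c == "blue"}
value = {i: v for (i, c, s, v) in data}

print(sorted(blue))  # ['A', 'G', 'K', 'M']
```

Just like in GAMS, this derived data can be printed and checked before any model equation is written.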

#### High-level model I

Let's introduce the following zero-one variables:\begin{align} & x_i = \begin{cases} 1 & \text{if item i is selected}\\ 0 & \text{otherwise}\end{cases} \\ & y_{c,s} = \begin{cases} 1 & \text{if items with color c and shape s are selected}\\ 0 & \text{otherwise} \end{cases}\end{align}

My high-level model is:

High-level Model 1
\begin{align}\max & \sum_i \color{darkblue}{\mathit{Value}}_i \cdot \color{darkred}x_i \\ &\sum_i \color{darkred}x_i = \color{darkblue}{\mathit{NumItems}}\\ &\sum_{i | \color{darkblue}{\mathit{IC}}(i,c)} \color{darkred}x_i = \color{darkblue}{\mathit{NumColor}}_c && \forall c\\ & \color{darkred}y_{c,s} = \max_{i|\color{darkblue}{\mathit{ICS}}(i,c,s)} \color{darkred}x_i && \forall c,s|\color{darkblue}{\mathit{CS}}(c,s)\\ & \color{darkred}y_{\color{darkblue}{\mathit{blue}},s} = 1 \Rightarrow \sum_{c|\color{darkblue}{\mathit{YRO}}(c)} \color{darkred}y_{c,s} \le 1 && \forall s \\&\color{darkred}x_i, \color{darkred}y_{c,s} \in \{0,1\} \end{align}

The max constraint implements the definition of the $$y_{c,s}$$ variables: if any of the selected items has color/shape combination $$(c,s)$$ then $$y_{c,s}=1$$ (otherwise it stays zero). The implication constraint says: if the selected blue item has shape $$s$$, then at most one other color may use shape $$s$$. The model is a bit complicated with all the subsets and indexing because I wanted to be precise. No hand-waving. This helps when implementing it.

#### MIP Model I

This is not yet a MIP model, but the translation of the above model into a normal MIP is not too difficult.

Mixed Integer Programming Model 1
\begin{align}\max & \sum_i \color{darkblue}{\mathit{Value}}_i \cdot \color{darkred}x_i \\ &\sum_i \color{darkred}x_i = \color{darkblue}{\mathit{NumItems}}\\ &\sum_{i | \color{darkblue}{\mathit{IC}}(i,c)} \color{darkred}x_i = \color{darkblue}{\mathit{NumColor}}_c && \forall c\\ & \color{darkred}y_{c,s} \ge \color{darkred}x_i && \forall i,c,s|\color{darkblue}{\mathit{ICS}}(i,c,s)\\ & \sum_{c|\color{darkblue}{\mathit{YRO}}(c)} \color{darkred}y_{c,s} \le 1 + \color{darkblue}M(1-\color{darkred}y_{\color{darkblue}{\mathit{blue}},s}) && \forall s \\&\color{darkred}x_i, \color{darkred}y_{c,s} \in \{0,1\} \end{align}

The max constraint has been replaced by a collection of simple inequalities $$y_{c,s} \ge x_i$$. This works here because we are only interested in the $$y$$'s that are forced to one: those are the ones that activate the blue shape constraint. This type of bounding is quite common, but it always requires a bit of thinking through in order to establish whether it is allowed for the specific case. The blue shape implication itself is rewritten as a big-M inequality. A good value for $$M$$ is not very difficult to establish: the number of non-blue colors minus one, i.e. $$M=2$$ here.
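As a quick sanity check on this big-M value (a small sketch, not part of the model code), we can enumerate all 0-1 assignments of the $$y$$ variables for a single shape and verify that the big-M inequality with $$M=2$$ accepts exactly the same points as the implication:

```python
from itertools import product

# For one shape s, check that the big-M inequality accepts exactly the same
# 0-1 points as the implication  y[blue]=1  =>  y[yellow]+y[red]+y[orange] <= 1.
M = 2  # number of non-blue colors minus 1
for y_blue, y_yellow, y_red, y_orange in product((0, 1), repeat=4):
    yro = y_yellow + y_red + y_orange
    implication = (y_blue == 0) or (yro <= 1)
    big_m = yro <= 1 + M * (1 - y_blue)
    assert implication == big_m
print("big-M inequality matches the implication on all 16 points")
```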

#### MIP Model 2

We can actually bypass the variable $$y$$ and directly formulate the limit on selected shapes in terms of variables $$x$$. For this we develop a set SameShape(i,j) with:

1. i is an item with color blue
2. j is an item with color not blue
3. i and j have the same shape
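These three rules translate almost literally into a Python set comprehension (a sketch using the example data):

```python
# Color and shape of each item (from the example data).
color = {"A": "blue", "B": "yellow", "C": "red", "D": "red", "E": "red",
         "F": "orange", "G": "blue", "H": "orange", "I": "orange",
         "J": "orange", "K": "blue", "L": "yellow", "M": "blue",
         "N": "orange", "O": "yellow"}
shape = {"A": "circle", "B": "square", "C": "triangle", "D": "circle",
         "E": "square", "F": "square", "G": "circle", "H": "square",
         "I": "square", "J": "triangle", "K": "triangle", "L": "circle",
         "M": "square", "N": "circle", "O": "triangle"}

# SameShape(i,j): i is blue, j is not blue, and they share a shape.
SameShape = {(i, j) for i in color for j in color
             if color[i] == "blue" and color[j] != "blue"
             and shape[i] == shape[j]}

# e.g. the partners of blue circle A:
print(sorted(j for (i, j) in SameShape if i == "A"))  # ['D', 'L', 'N']
```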

The set SameShape looks like:

----     71 SET SameShape  blue i and non-blue j with same shape

                  B     C     D     E     F     H     I     J     L     N     O

A                             YES                                 YES   YES
G                             YES                                 YES   YES
K                       YES                                 YES               YES
M                 YES               YES   YES   YES   YES


The first row means: if blue circle A is selected, then the non-blue circles to watch are items D, L and N. With this we can write:

Mixed Integer Programming Model 2
\begin{align}\max & \sum_i \color{darkblue}{\mathit{Value}}_i \cdot \color{darkred}x_i \\ &\sum_i \color{darkred}x_i = \color{darkblue}{\mathit{NumItems}}\\ &\sum_{i | \color{darkblue}{\mathit{IC}}(i,c)} \color{darkred}x_i = \color{darkblue}{\mathit{NumColor}}_c && \forall c\\ & \sum_{j|\color{darkblue}{\mathit{SameShape}}(i,j)} \color{darkred}x_j \le 1 + \color{darkblue}M(1-\color{darkred}x_i) && \forall i | \color{darkblue}{\mathit{Blue}}_i \\&\color{darkred}x_i \in \{0,1\} \end{align}

#### Solution without "blue shape" constraint

For comparison, let's first run the model without the complex "blue" shape constraints (the last two constraints). That gives:

----     90 VARIABLE z.L                   =        4.087  obj

----     90 VARIABLE x.L  select item

E 1.000,    G 1.000,    J 1.000,    L 1.000,    N 1.000

----     90 SET selected

              circle      square    triangle

E.red                        YES
G.blue           YES
J.orange                                 YES
L.yellow         YES
N.orange         YES


We see that the shape of the selected blue item (circle) also shows up among the selected orange and yellow items.

#### Solution with "blue shape" constraint

With the additional constraints, we see:

----     95 VARIABLE z.L                   =        4.049  obj

----     95 VARIABLE x.L  select item

E 1.000,    J 1.000,    K 1.000,    L 1.000,    N 1.000

----     95 SET selected

              circle      square    triangle

E.red                        YES
J.orange                                 YES
K.blue                                   YES
L.yellow         YES
N.orange         YES


Now the selected blue item is a triangle, and only one other selected item (an orange one) is a triangle.

As a sanity check, the objective decreases a bit when we add this constraint.
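With only $$\binom{15}{5}=3003$$ candidate subsets, both objective values can be double-checked by brute force. The sketch below uses the item-level reading of the blue shape constraint (as in MIP Model 2): at most one selected non-blue item may share the selected blue item's shape.

```python
from itertools import combinations

items = {  # item: (color, shape, value)
    "A": ("blue", "circle", 0.454), "B": ("yellow", "square", 0.570),
    "C": ("red", "triangle", 0.789), "D": ("red", "circle", 0.718),
    "E": ("red", "square", 0.828), "F": ("orange", "square", 0.709),
    "G": ("blue", "circle", 0.696), "H": ("orange", "square", 0.285),
    "I": ("orange", "square", 0.698), "J": ("orange", "triangle", 0.861),
    "K": ("blue", "triangle", 0.658), "L": ("yellow", "circle", 0.819),
    "M": ("blue", "square", 0.352), "N": ("orange", "circle", 0.883),
    "O": ("yellow", "triangle", 0.755),
}
required = {"orange": 2, "red": 1, "blue": 1, "yellow": 1}

def feasible(sel, with_blue_constraint):
    # Color count constraints.
    counts = {}
    for i in sel:
        counts[items[i][0]] = counts.get(items[i][0], 0) + 1
    if counts != required:
        return False
    if with_blue_constraint:
        # At most one non-blue selected item with the blue item's shape.
        b = next(i for i in sel if items[i][0] == "blue")
        same = sum(1 for j in sel
                   if items[j][0] != "blue" and items[j][1] == items[b][1])
        if same > 1:
            return False
    return True

def best(with_blue_constraint):
    return max(sum(items[i][2] for i in sel)
               for sel in combinations(items, 5)
               if feasible(sel, with_blue_constraint))

print(round(best(False), 3))  # 4.087
print(round(best(True), 3))   # 4.049
```

Both values agree with the MIP solutions above.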

#### Appendix: Model formulated in GAMS

The complete GAMS model looks like:

$ontext

  I'm trying to solve a knapsack-style optimization problem with additional
  complexity. Here is a simple example. I'm trying to select 5 items that
  maximize value. 2 of the items must be orange, one must be blue, one must
  be yellow, and one must be red. This is straightforward. However, I want
  to add a constraint that the selected yellow, red, and orange items can
  only have one shape in common with the selected blue item.

$offtext

set
   i 'item' /A*O/
   c 'color' /blue,yellow,red,orange/
   s 'shape' /circle,square,triangle/
;

parameters
    data(i,c,s) 'value' /
         A . blue   .  circle     0.454
         B . yellow .  square     0.570
         C . red    .  triangle   0.789
         D . red    .  circle     0.718
         E . red    .  square     0.828
         F . orange .  square     0.709
         G . blue   .  circle     0.696
         H . orange .  square     0.285
         I . orange .  square     0.698
         J . orange .  triangle   0.861
         K . blue   .  triangle   0.658
         L . yellow .  circle     0.819
         M . blue   .  square     0.352
         N . orange .  circle     0.883
         O . yellow .  triangle   0.755
         /
    NumItems 'number of items to select' /5/
    NumColor(c) 'required number of each color' /
         orange 2
         red    1
         blue   1
         yellow 1
        /
;

alias(i,j);

sets
  ICS(i,c,s) '(i,c,s) combinations'
  IC(i,c)    '(i,c) combinations'
  IS(i,s)    '(i,s) combinations'
  blue(i)    'blue colored items'
;
parameter
  value(i)   'value of item'
  M          'big-M'
;
ICS(i,c,s) = data(i,c,s);
IC(i,c) = sum(ICS(i,c,s),1);
IS(i,s) = sum(ICS(i,c,s),1);
blue(i) = IC(i,"blue");
value(i) = sum((c,s),data(i,c,s));
display i,c,s,ICS,IC,IS,blue,value;

* big-M: up to NumItems-NumColor("blue") non-blue items can be selected
M = NumItems - NumColor("blue") - 1;

set SameShape(i,j) 'blue i and non-blue j with same shape';
SameShape(i,j)$(blue(i) and not blue(j)) = sum(s, IS(i,s) and IS(j,s));
display SameShape;

binary variable x(i) 'select item';
variable z 'obj';

equations
   obj 'objective'
   count 'count number of selected items'
   countcolor(c) 'count selected items for each color'
   limit(i) 'blue shape constraint'
;

obj.. z =e= sum(i, value(i)*x(i));
count.. sum(i, x(i)) =e= numitems;
countcolor(c).. sum(IC(i,c), x(i)) =e= numcolor(c);
limit(i)$blue(i).. sum(sameshape(i,j), x(j)) =l= 1 + M*(1-x(i));

option optcr=0;

set selected(i,c,s) 'reporting';

model m1 /obj,count,countcolor/;
solve m1 maximizing z using mip;
selected(i,c,s)$ICS(i,c,s) = x.l(i)>0.5;
display z.l,x.l,selected;

model m2 /all/;
solve m2 maximizing z using mip;
selected(i,c,s)$ICS(i,c,s) = x.l(i)>0.5;
display z.l,x.l,selected;

A second exercise would be to write this in Python using PuLP.

#### Appendix: PuLP model

import io
import pandas
import pulp

# -------- data ----------------
df = pandas.read_csv(io.StringIO('''
item,color,shape,value
A,blue,circle,0.454
B,yellow,square,0.570
C,red,triangle,0.789
D,red,circle,0.718
E,red,square,0.828
F,orange,square,0.709
G,blue,circle,0.696
H,orange,square,0.285
I,orange,square,0.698
J,orange,triangle,0.861
K,blue,triangle,0.658
L,yellow,circle,0.819
M,blue,square,0.352
N,orange,circle,0.883
O,yellow,triangle,0.755
'''))
print(df)

numItems = 5
numColor = {'orange':2, 'red':1, 'blue':1, 'yellow':1}

# --------- derived data ------------
Items = df["item"]
Colors = df["color"].unique()
Shapes = df["shape"].unique()
YRO = Colors[Colors != 'blue']
value = dict(zip(Items,df["value"]))
color = dict(zip(Items,df["color"]))
shape = dict(zip(Items,df["shape"]))
# big-M: up to numItems-numColor['blue'] non-blue items can be selected
M = numItems - numColor['blue'] - 1
SameShape = { (i,j) for i in Items for j in Items
              if (color[i]=='blue') and (color[j]!='blue') and
                 (shape[j]==shape[i]) }
print('SameShape:',SameShape)

# ----------- Model -----------
p = pulp.LpProblem("MIP problem", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x",Items,0,1,pulp.LpInteger)
p += pulp.lpSum([x[i]*value[i] for i in Items])
p += pulp.lpSum([x[i] for i in Items]) == numItems
for c in Colors:
    p += pulp.lpSum([x[i] for i in Items if color[i]==c]) == numColor[c]
for i in Items:
    if color[i]=="blue":
        p += pulp.lpSum([x[j] for j in Items if (i,j) in SameShape]) <= 1 + M*(1-x[i])
p.solve(pulp.PULP_CBC_CMD(msg=1))

#----------- results --------------
print("Status:", pulp.LpStatus[p.status])
print("Objective:",pulp.value(p.objective))
selected = []
for i in Items:
    if pulp.value(x[i]) > 0.5:
        selected += [[i, color[i], shape[i]]]
selected = pandas.DataFrame(selected, columns=['item','color','shape'])
print(selected)

Note that SameShape is a Python set. We could have dropped SameShape and implemented its logic directly inside the blue shape constraint. However, I believe it is often good to keep constraints as simple as possible: complex constraints are more difficult to debug. The set SameShape can easily be inspected with a print statement, even before we have written the rest of the MIP model.

The output looks like:

   item   color     shape  value
0     A    blue    circle  0.454
1     B  yellow    square  0.570
2     C     red  triangle  0.789
3     D     red    circle  0.718
4     E     red    square  0.828
5     F  orange    square  0.709
6     G    blue    circle  0.696
7     H  orange    square  0.285
8     I  orange    square  0.698
9     J  orange  triangle  0.861
10    K    blue  triangle  0.658
11    L  yellow    circle  0.819
12    M    blue    square  0.352
13    N  orange    circle  0.883
14    O  yellow  triangle  0.755
Welcome to the CBC MILP Solver
Version: 2.9.0
Build Date: Feb 12 2015

command line - D:\Python\Python37\lib\site-packages\pulp\solverdir\cbc\win\64\cbc.exe 3c0bdd177daf444ba924d26442c171ba-pulp.mps max branch printingOptions all solution 3c0bdd177daf444ba924d26442c171ba-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 14 COLUMNS
At line 108 RHS
At line 118 BOUNDS
At line 134 ENDATA
Problem MODEL has 9 rows, 15 columns and 48 elements
Coin0008I MODEL read with 0 errors
Continuous objective value is 4.049 - 0.00 seconds
Cgl0004I processed model has 9 rows, 15 columns (15 integer (15 of which binary)) and 48 elements
Cutoff increment increased from 1e-005 to 0.000999
Cbc0038I Initial state - 0 integers unsatisfied sum - 0
Cbc0038I Solution found of -4.049
Cbc0038I Before mini branch and bound, 15 integers at bound fixed and 0 continuous
Cbc0038I Mini branch and bound did not improve solution (0.01 seconds)
Cbc0038I After 0.01 seconds - Feasibility pump exiting with objective of -4.049 - took 0.00 seconds
Cbc0012I Integer solution of -4.049 found by feasibility pump after 0 iterations and 0 nodes (0.01 seconds)
Cbc0001I Search completed - best objective -4.049, took 0 iterations and 0 nodes (0.01 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Cuts at root node changed objective from -4.049 to -4.049
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                4.04900000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.02
Time (Wallclock seconds):       0.02

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.07   (Wallclock seconds):       0.07

Status: Optimal
Objective: 4.0489999999999995
item   color     shape
0    E     red    square
1    J  orange  triangle
2    K    blue  triangle
3    L  yellow    circle
4    N  orange    circle


Some notes:

• This does not look so bad. The PuLP model follows the mathematical model quite closely.
• The main differences with the GAMS model: we need explicit for loops here and there (GAMS has implicit loops), and variables are dense. I don't think PuLP has direct facilities for sparse variables. In this case that is not a problem.
• The CBC version that comes with PuLP is rather old.
• Having said that, CBC does pretty well on this model: 0 nodes and 0 simplex iterations.
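The density issue is usually easy to work around: build the sparse index set first and key everything by it. The sketch below shows the idea in plain Python (with PuLP, the same key list could be passed to `pulp.LpVariable.dicts`, since it accepts an arbitrary list of hashable keys):

```python
# Sparse index set: only the (item, color) pairs that actually occur,
# instead of all 15*4 = 60 combinations.
IC = [("A", "blue"), ("B", "yellow"), ("C", "red"), ("D", "red"),
      ("E", "red"), ("F", "orange"), ("G", "blue"), ("H", "orange"),
      ("I", "orange"), ("J", "orange"), ("K", "blue"), ("L", "yellow"),
      ("M", "blue"), ("N", "orange"), ("O", "yellow")]

# A dict keyed by the sparse set plays the role of a sparse variable;
# here None is a placeholder where PuLP would store a variable object.
x = {key: None for key in IC}

print(len(x), "entries instead of", 15 * 4)
```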

## Thursday, November 7, 2019

### CVXPY matrix style modeling limits

CVXPY [1] is a popular Python-based modeling tool for convex models. It rigorously checks that the model is convex, which is very convenient: many convex solvers get thoroughly confused when passed a non-convex model. This is a bit different from, say, passing a non-convex model to a local NLP solver: that solver will accept it and try to find a local solution.

In addition, CVXPY provides many high-level functions (e.g. for different norms) and a very compact matrix-based modeling paradigm. Although matrix notation can be very powerful, there are limits: when overdone, the notation becomes less intuitive.

Below I will try to formulate four models in CVXPY and see how the matrix notation will work for these examples:

1. Transportation Model. Very simple but nevertheless interesting to look at.
2. A linearized non-convex quadratic model with binary variables. The matrix notation becomes a bit more cumbersome.
3. A sparse network problem. This is not very easily expressed in CVXPY. CVXPY does not support sparse variables (only sparse data).
4. A matrix balancing problem. This model is very difficult to deal with in CVXPY.

All these models are candidates for CVXPY: they are linear or convex, and they use only vectors and matrices.

#### Example: Transportation model

In this section I'll discuss some modeling issues when implementing a simple transportation model in CVXPY, and compare this to a standard GAMS implementation.

As an example consider the standard transportation model.

Transportation Model (indexed form):
\begin{align}\min&\sum_{i,j}\color{darkblue}c_{i,j}\color{darkred}x_{i,j}\\ & \sum_i\color{darkred}x_{i,j} \ge \color{darkblue} d_j && \forall j\\ &\sum_j\color{darkred}x_{i,j}\le \color{darkblue} s_i && \forall i \\ & \color{darkred}x_{i,j} \ge 0 \end{align}

Transportation Model (matrix form):
\begin{align}\min\>\>&\mathbf{tr}(\color{darkblue}C^T \color{darkred}X) \\ &\color{darkred}X^T \color{darkblue}e \ge \color{darkblue} d \\ &\color{darkred}X \color{darkblue}e \le \color{darkblue}s\\ & \color{darkred}X\ge 0\end{align}

Here $$e$$ indicates a column vector of ones of appropriate size, and tr denotes the trace of a matrix: the sum of the diagonal elements. The matrix-based model is identical to the indexed, equation-based model. It is more compact, but arguably a bit more difficult to read for most readers (me included).

One important way to make matrix models more readable is to add a summation function. Since a matrix model has no indices, we need another way to indicate what to sum over: the (optional) axis argument. With this, the model above can be expressed in CVXPY as:

import numpy as np
import cvxpy as cp

#----- data -------
capacity = np.array([350, 600])
demand = np.array([325, 300, 275])
distance = np.array([[2.5, 1.7, 1.8],
                     [2.5, 1.8, 1.4]])
freight = 90
cost = freight*distance/1000

#------ set up LP data --------
C = cost
d = demand
s = capacity

#---- matrix formulation ----

ei = np.ones(s.shape)
ej = np.ones(d.shape)

X = cp.Variable(C.shape,"X")
prob = cp.Problem(
    cp.Minimize(cp.trace(C.T@X)),
    [X.T@ei >= d,
     X@ej <= s,
     X >= 0])
prob.solve(verbose=True)
print("status:",prob.status)
print("objective:",prob.value)
print("levels:",X.value)

#---- summations ----

prob2 = cp.Problem(
    cp.Minimize(cp.sum(cp.multiply(C,X))),
    [cp.sum(X,axis=0) >= d,
     cp.sum(X,axis=1) <= s,
     X >= 0])
prob2.solve(verbose=True)
print("status:",prob2.status)
print("objective:",prob2.value)
print("levels:",X.value)


The matrix model follows the mathematical model closely. The second model, with the summations, requires some explanation. In Python, * is elementwise multiplication and @ is matrix multiplication. In CVXPY, however, @, * and matmul all denote matrix multiplication, while multiply is elementwise multiplication. A sum without an axis argument sums over all elements; axis=0 sums over the first index and axis=1 over the second index.
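The axis convention can be illustrated without CVXPY, in plain Python, on the distance matrix from above (a small sketch):

```python
# 2 plants x 3 markets, as in the transportation example.
D = [[2.5, 1.7, 1.8],
     [2.5, 1.8, 1.4]]

total = sum(sum(row) for row in D)     # like cp.sum(X): over all elements
axis0 = [sum(col) for col in zip(*D)]  # like cp.sum(X, axis=0): per column
axis1 = [sum(row) for row in D]        # like cp.sum(X, axis=1): per row

print(total, axis0, axis1)
```

So axis=0 collapses the rows (one number per market) and axis=1 collapses the columns (one number per plant), matching the demand and supply constraints.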

The data for this model is from [2]. The lack of explicit indexing has another consequence: all data is identified by position. I.e., we need to remember that position 0 means the canning plant in Seattle or the demand market in New York.

I am not so sure about the readability of these two CVXPY models. I think I still prefer the indexed model.

CVXPY uses OSQP [3] as its default LP and QP solver. Here is the log:

-----------------------------------------------------------------
OSQP v0.6.0  -  Operator Splitting QP Solver
(c) Bartolomeo Stellato,  Goran Banjac
University of Oxford  -  Stanford University 2019
-----------------------------------------------------------------
problem:  variables n = 6, constraints m = 11
nnz(P) + nnz(A) = 18
settings: linear system solver = qdldl,
eps_abs = 1.0e-05, eps_rel = 1.0e-05,
eps_prim_inf = 1.0e-04, eps_dual_inf = 1.0e-04,
sigma = 1.00e-06, alpha = 1.60, max_iter = 10000
check_termination: on (interval 25),
scaling: on, scaled_termination: off
warm start: on, polish: on, time_limit: off

iter   objective    pri res    dua res    rho        time
1  -2.4113e+00   3.32e+02   7.31e+00   1.00e-01   5.47e-05s
200   1.5368e+02   2.77e-03   2.44e-07   1.23e-03   4.30e-02s

status:               solved
solution polish:      unsuccessful
number of iterations: 200
optimal objective:    153.6751
run time:             4.31e-02s
optimal rho estimate: 2.60e-03

status: optimal
objective: 153.67514323621972
levels: [[ 3.17321312e+01  3.00004719e+02  2.71849569e-03]
[ 2.93267895e+02 -2.77023874e-03  2.74995427e+02]]


The levels indicate OSQP is pretty sloppy here: we see some negative values on the order of -1e-3. The true optimal objective is 153.675000 (so even though the solution is slightly infeasible, this did not lead to a better objective). Sometimes OSQP does better: when the solution polishing step works. Polishing is a bit like a poor man's crossover: it tries to guess the active constraints. In this case polishing did not succeed.

The number of iterations is large. This is normal: OSQP is a first-order algorithm, and such methods typically require many iterations.

For completeness, the original GAMS model looks like:

Set
   i 'canning plants' / seattle,  san-diego /
   j 'markets'        / new-york, chicago, topeka /;

Parameter
   a(i) 'capacity of plant i in cases'
        / seattle    350
          san-diego  600 /

   b(j) 'demand at market j in cases'
        / new-york   325
          chicago    300
          topeka     275 /;

Table d(i,j) 'distance in thousands of miles'
              new-york  chicago  topeka
   seattle         2.5      1.7     1.8
   san-diego       2.5      1.8     1.4;

Scalar f 'freight in dollars per case per thousand miles' / 90 /;

Parameter c(i,j) 'transport cost in thousands of dollars per case';
c(i,j) = f*d(i,j)/1000;

Variable
   x(i,j) 'shipment quantities in cases'
   z      'total transportation costs in thousands of dollars';

Positive Variable x;

Equation
   cost      'define objective function'
   supply(i) 'observe supply limit at plant i'
   demand(j) 'satisfy demand at market j';

cost..      z =e= sum((i,j), c(i,j)*x(i,j));

supply(i).. sum(j, x(i,j)) =l= a(i);

demand(j).. sum(i, x(i,j)) =g= b(j);

Model transport / all /;

solve transport using lp minimizing z;

display x.l, x.m;


The main differences with the CVXPY model are:

• GAMS indexes by names (set elements), CVXPY uses positions in a matrix or vector
• GAMS is equation based while CVXPY uses matrices
• For this model, the CVXPY representation is very compact but depending on the formulation it requires more than casual familiarity with matrix notation.
• Arguably, the most important feature of a modeling tool is readability. I have a preference for the GAMS notation here: it is closer to the original mathematical model, in a notation I am used to.
• When printing the results, GAMS is a bit more intuitive:

GAMS:

----     66 VARIABLE x.L  shipment quantities in cases

             new-york     chicago      topeka

seattle        50.000     300.000
san-diego     275.000                 275.000

Python:
[[ 3.17321312e+01  3.00004719e+02  2.71849569e-03]
[ 2.93267895e+02 -2.77023874e-03  2.74995427e+02]]


#### Example: non-convex binary quadratic optimization

Consider the binary quadratic programming problem (in indexed format and in matrix notation):

\begin{align}\min&\sum_{i,j} \color{darkred}x_i \color{darkblue}q_{i,j}\color{darkred}x_j\\ & \color{darkred}x_i \in \{0,1\} \end{align}

\begin{align}\min\>\>& \color{darkred}x^T\color{darkblue}Q\color{darkred}x \\ &\color{darkred}x \in \{0,1\} \end{align}

We don't assume the Q matrix is positive (semi-)definite or even symmetric. This makes the problem non-convex. Interestingly, when we feed this problem to solvers like Cplex or Gurobi, they have no problem finding the optimal solution. The reason is that they apply a trick to make the problem linear.

We can linearize the binary product $$y_{i,j} = x_i x_j$$ by \begin{align} & y_{i,j} \le x_i \\& y_{i,j} \le x_j \\& y_{i,j} \ge x_i + x_j -1 \\ & x_i, y_{i,j} \in \{0,1\}\end{align} If we want we can relax $$y$$ to be continuous between 0 and 1.
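The correctness of this linearization is easy to verify exhaustively (a quick sketch): for every binary $$(x_i, x_j)$$, the only binary $$y_{i,j}$$ satisfying all three inequalities is the product $$x_i x_j$$.

```python
from itertools import product

# Exhaustively verify that the three inequalities force y = x1*x2
# when x1, x2 and y are binary.
for x1, x2 in product((0, 1), repeat=2):
    ys = [y for y in (0, 1)
          if y <= x1 and y <= x2 and y >= x1 + x2 - 1]
    assert ys == [x1 * x2]   # exactly one feasible y: the product
print("linearization is exact on all binary points")
```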

After applying this linearization, we have:

\begin{align}\min&\sum_{i,j} \color{darkblue}q_{i,j}\color{darkred}y_{i,j}\\ & \color{darkred}y_{i,j} \le \color{darkred}x_i\\ & \color{darkred}y_{i,j} \le \color{darkred}x_j\\ & \color{darkred}y_{i,j} \ge \color{darkred}x_i+ \color{darkred}x_j - 1\\ & \color{darkred}x_i,\color{darkred}y_{i,j} \in \{0,1\} \end{align}

\begin{align}\min\>\>& \mathbf{tr}(\color{darkblue}Q^T\color{darkred}Y) \\ & \color{darkred}Y \le \color{darkred}x \cdot\color{darkblue}e^T \\ & \color{darkred}Y \le \color{darkblue}e \cdot\color{darkred}x^T \\& \color{darkred}Y \ge \color{darkred}x \cdot\color{darkblue}e^T + \color{darkblue}e \cdot\color{darkred}x^T - \color{darkblue}e \cdot \color{darkblue}e^T\\ &\color{darkred}x, \color{darkred}Y \in \{0,1\} \end{align}

The matrix form of the objective is similar to the one we saw in the section on the transportation problem. This part can be written with a summation. The constraints are a little bit more complicated due to the outer products. I don't think there is a much easier and more intuitive way to express these outer products than with matrix multiplications. If you are not used to these matrix expressions, this may not be so readable.
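The outer products are less exotic than they may look. In plain Python terms (a sketch), the matrix $$x \cdot e^T$$ simply repeats $$x$$ in every column, so $$Y \le x\cdot e^T$$ is the elementwise condition $$Y_{i,j} \le x_i$$:

```python
n = 3
x = [1, 0, 1]   # a small binary vector
e = [1] * n     # vector of ones

# (x e^T)[i][j] = x[i] * e[j] = x[i]: every column is a copy of x.
xeT = [[x[i] * e[j] for j in range(n)] for i in range(n)]

# Hence Y <= x e^T is exactly the indexed constraint y[i,j] <= x[i].
assert all(xeT[i][j] == x[i] for i in range(n) for j in range(n))
print(xeT)  # [[1, 1, 1], [0, 0, 0], [1, 1, 1]]
```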

Although solvers like Cplex and Gurobi can solve the quadratic formulation directly, CVXPY will complain with the message:

cvxpy.error.DCPError: Problem does not follow DCP rules. Specifically:
The objective is not DCP. Its following subexpressions are not:x * [[-6.56505736e+00  6.86533416e+00  1.00750712e+00 -3.97724192e+00
-4.15575766e+00 -5.51894266e+00 -3.00338992e+00  7.12540694e+00
-8.65772554e+00  4.21338000e-03]
[ 9.96235254e+00  1.57466756e+00  9.82266078e+00  5.24500934e+00
-7.38615034e+00  2.79437518e+00 -6.80964272e+00 -4.99838934e+00
3.37857218e+00 -1.29287238e+00]
[-2.80599468e+00 -2.97117264e+00 -7.37016820e+00 -6.99796424e+00
1.78227300e+00  6.61785624e+00 -5.38368524e+00  3.31468920e+00
5.51715212e+00 -3.92683046e+00]
[-7.79015418e+00  4.76973200e-02 -6.79654476e+00  7.44924622e+00
-4.69770910e+00 -4.28371356e+00  1.87911844e+00  4.45438142e+00
2.56497354e+00 -7.24042700e-01]
[-1.73386012e+00 -7.64609286e+00 -3.71575466e+00 -9.06896972e+00
-3.22899456e+00 -6.35800814e+00  2.91454254e+00  1.21491094e+00
5.39923440e+00 -4.04388272e+00]
[ 3.22212522e+00  5.11643348e+00  2.54894998e+00 -4.32271604e+00
-8.27150752e+00 -7.94970662e+00  2.82502302e+00  9.06189960e-01
-9.36950296e+00  5.84721284e+00]
[-8.54466004e+00 -6.48677902e+00  5.12652260e-01  5.00415338e+00
-6.43752572e+00 -9.31718028e+00  1.70262346e+00  2.42459968e+00
-2.21276200e+00 -2.82571694e+00]
[-5.13930766e+00 -5.07156922e+00 -7.38994394e+00  8.66899440e+00
-2.40124188e+00  5.66800922e+00 -3.99931484e+00 -7.49033556e+00
4.97748210e+00 -8.61535074e+00]
[-5.95968886e+00 -9.89868284e+00 -4.60773896e+00 -2.97050000e-03
-6.97428262e+00 -6.51661090e+00 -3.38724532e+00 -3.66187892e+00
-3.55826090e+00  9.27953282e+00]
[ 9.87204410e+00 -2.60193890e+00 -2.54222866e+00  5.43956660e+00
-2.06631716e+00  8.26192650e+00 -7.60844540e+00  4.70957778e+00
-8.89163050e+00  1.52599610e+00]] * x


CVXPY is doing rigorous checks to make sure the problem is convex, and refuses models that are found to be non-convex.

The linearized formulation can look like:

import numpy as np
import cvxpy as cp

# --------  data ---------

Q = np.array([
[-6.56505736,  6.86533416,  1.00750712, -3.97724192, -4.15575766, -5.51894266, -3.00338992,  7.12540694, -8.65772554,  0.00421338],
[ 9.96235254,  1.57466756,  9.82266078,  5.24500934, -7.38615034,  2.79437518, -6.80964272, -4.99838934,  3.37857218, -1.29287238],
[-2.80599468, -2.97117264, -7.3701682 , -6.99796424,  1.782273  ,  6.61785624, -5.38368524,  3.3146892 ,  5.51715212, -3.92683046],
[-7.79015418,  0.04769732, -6.79654476,  7.44924622, -4.6977091 , -4.28371356,  1.87911844,  4.45438142,  2.56497354, -0.7240427 ],
[-1.73386012, -7.64609286, -3.71575466, -9.06896972, -3.22899456, -6.35800814,  2.91454254,  1.21491094,  5.3992344 , -4.04388272],
[ 3.22212522,  5.11643348,  2.54894998, -4.32271604, -8.27150752, -7.94970662,  2.82502302,  0.90618996, -9.36950296,  5.84721284],
[-8.54466004, -6.48677902,  0.51265226,  5.00415338, -6.43752572, -9.31718028,  1.70262346,  2.42459968, -2.212762  , -2.82571694],
[-5.13930766, -5.07156922, -7.38994394,  8.6689944 , -2.40124188,  5.66800922, -3.99931484, -7.49033556,  4.9774821 , -8.61535074],
[-5.95968886, -9.89868284, -4.60773896, -0.0029705 , -6.97428262, -6.5166109 , -3.38724532, -3.66187892, -3.5582609 ,  9.27953282],
[ 9.8720441 , -2.6019389 , -2.54222866,  5.4395666 , -2.06631716,  8.2619265 , -7.6084454 ,  4.70957778, -8.8916305 ,  1.5259961 ]])

n = Q.shape[0]

# ---- linearized model, matrix format -----

x = cp.Variable((n,1),"x",boolean=True)
Y = cp.Variable((n,n),"Y")
e = np.ones((n,1))

prob = cp.Problem(cp.Minimize(cp.trace(Q.T@Y)),
                  [Y <= x@e.T,
                   Y <= e@x.T,
                   Y >= x@e.T + e@x.T - e@e.T,
                   Y >= 0,
                   Y <= 1])
prob.solve(solver=cp.GLPK_MI,verbose=True)
print("status:",prob.status)
print("objective:",prob.value)
print("levels:",x.value)


Notes:

• The objective can be replaced by cp.sum(cp.multiply(Q,Y)),
• I relaxed the Y variables,
• We use glpk as the MIP solver,
• CVXPY comes with an integer solver called ECOS_BB. This solver seems to choke on this problem,
• I was not able to enter this model directly: I had to use a piece of paper first to get from the indexed version to the matrix version. There was an extra manual translation step needed. This step, of course, can easily lead to errors.
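One way to guard against such translation errors is a brute-force check on a small instance: for binary $$x$$ the constraints above force $$Y=xx^T$$, so the linearized objective must agree with $$x^TQx$$ on every 0-1 vector. A quick sanity check (my own, using a small random $$Q$$):

```python
import itertools
import numpy as np

rng = np.random.default_rng(123)
n = 4
Q = rng.uniform(-10, 10, (n, n))

for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits).reshape(n, 1)
    Y = x @ x.T                        # what the constraints enforce for binary x
    quad = float(x.T @ Q @ x)          # original quadratic objective
    lin = np.trace(Q.T @ Y)            # linearized objective
    assert abs(quad - lin) < 1e-9

print("linearization agrees on all {} binary vectors".format(2**n))
```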

The original GAMS model for this problem was as follows:

set i /i1*i10/;
alias(i,j);

parameter q(i,j);
q(i,j) = uniform(-10,10);

binary variable x(i);
variable z;

equation obj;

obj.. z =e= sum((i,j), x(i)*q(i,j)*x(j));

model m /obj/;
option miqcp=cplex,optcr=0;
solve m minimizing z using miqcp;
display z.l,x.l;


Cplex will automatically linearize this model.

#### CVXPY Sparse Variables

CVXPY has some severe limitations on what variables can look like.

First we cannot use three (or more) dimensional variables. So something like x[i,j,k] is not supported. A declaration like:

X = cp.Variable((n,n,n),"X")

gives:

ValueError: Expressions of dimension greater than 2 are not supported.

This is a rather severe restriction. Many practical models have variables exceeding 2 dimensions. Of course, matrix notation becomes impractical for symbols with more than 2 dimensions, which is probably the reason why CVXPY only wants to handle scalars, vectors and matrices.

Furthermore, sparse variables are not supported either. Everything is fully allocated. As an example, consider the toy model:

n = 100
X = cp.Variable((n,n),"X")
prob = cp.Problem(
cp.Minimize(0),
[X[0,0]==1])

Solver Log:
problem:  variables n = 10000, constraints m = 1


My hope was that I could just declare a large variable matrix and that CVXPY would export only the used variables to the solver. Instead, rather than seeing a model with one variable, the solver receives a model with 10,000 variables.

#### Example: a Sparse Network Model

A linear programming formulation for a max-flow network problem can look like:

Max-flow Sparse Network Model
\begin{align}\max\>\>&\color{darkred}f\\ & \sum_{j|\color{darkblue}{\mathit{arc}}(j,i)}\color{darkred}x_{j,i} = \sum_{j|\color{darkblue}{\mathit{arc}}(i,j)}\color{darkred}x_{i,j} + \color{darkred}f\cdot \color{darkblue}b_i && \forall i \\ & 0 \le \color{darkred}x_{i,j} \le \color{darkblue}{\mathit{capacity}_{i,j}} && \forall i,j|\color{darkblue}{\mathit{arc}}(i,j) \end{align}

Here $$arc(i,j)$$ indicates whether link $$i \rightarrow j$$ exists. The sparse data vector $$b_i$$ is defined by $b_i = \begin{cases} -1 & \text{if node i is the source node}\\ +1 & \text{if node i is the sink node}\\ 0 & \text{otherwise} \end{cases}$

This mathematical model translates directly into a GAMS model:

$ontext

  max flow network example

  Data from example in
    Mitsuo Gen, Runwei Cheng, Lin Lin
    Network Models and Optimization: Multiobjective Genetic Algorithm Approach
    Springer, 2008

  Erwin Kalvelagen, Amsterdam Optimization, May 2008

$offtext

sets
i 'nodes' /node1*node11/
source(i)   /node1/
sink(i)    /node11/
;

alias(i,j);

parameter capacity(i,j) /
node1.node2   60
node1.node3   60
node1.node4   60
node2.node3   30
node2.node5   40
node2.node6   30
node3.node4   30
node3.node6   50
node3.node7   30
node4.node7   40
node5.node8   60
node6.node5   20
node6.node8   30
node6.node9   40
node6.node10  30
node7.node6   20
node7.node10  40
node8.node9   30
node8.node11  60
node9.node10  30
node9.node11  50
node10.node11 50
/;

set arcs(i,j);
arcs(i,j)$capacity(i,j) = yes;
display arcs;

parameter rhs(i);
rhs(source) = -1;
rhs(sink) = 1;

variables
   x(i,j) 'flow along arcs'
   f      'total flow'
;
positive variables x;
x.up(i,j) = capacity(i,j);

equations flowbal(i) 'flow balance';

flowbal(i)..  sum(arcs(j,i), x(j,i)) - sum(arcs(i,j), x(i,j)) =e= f*rhs(i);

model m /flowbal/;
solve m maximizing f using lp;


The GAMS model exploits that GAMS stores data sparsely. The variables x(i,j) are only allocated when they are used inside the equations. This usage is restricted to cases where arcs(i,j) exist. I.e., the number of variables x(i,j) is 22 instead of $$11 \times 11$$.

As we discussed in the previous section, CVXPY does not support sparse variables like GAMS does. So instead of variables x(i,j) we'll use x[k], where k indicates the arc number. CVXPY supports sparse data matrices through scipy.sparse. In the code below we set up a sparse matrix A with entries as follows:

• A[i,k] = -1 if arc $$k$$ represents an outgoing link $$i \rightarrow j$$
• A[j,k] = +1 if arc $$k$$ represents an incoming link $$i \rightarrow j$$

With this we can formulate the model:

Matrix based max-flow model
\begin{align}\max\>\>&\color{darkred}f\\ & \color{darkblue}A \cdot \color{darkred}x = \color{darkred}f \cdot \color{darkblue} b \\ & 0 \le \color{darkred}x \le \color{darkblue}{\mathit{capacity}} \end{align}

The network logic is hidden in the sparse matrix $$A$$. From a modeling point of view, this model is at too high an abstraction level: we have lost all the structure we want to see and reason about.
The CVXPY implementation is as follows:

import numpy as np
import scipy.sparse as sparse
import cvxpy as cp

# ------ data --------

data = {
  'nodes':['A','B','C','D','E','F','G','H','I','J','K'],
  'from':['A','A','A','B','B','B','C','C','C','D','E',
          'F','F','F','F','G','G','H','H','I','I','J'],
  'to':  ['B','C','D','C','E','F','D','F','G','G','H',
          'E','H','I','J','F','J','I','K','J','K','K'],
  'capacity': [60,60,60,30,40,30,30,50,30,40,60,20,30,40,30,20,40,30,60,30,50,50],
  'source' : 'A',
  'sink' : 'K'
}

numnodes = len(data['nodes'])
numarcs = len(data['capacity'])
print("Number of nodes: {}".format(numnodes))
print("Number of arcs: {}".format(numarcs))

# ------ lp data --------

# map node name to index
map = dict(zip(data['nodes'],range(numnodes)))

# coefficients
irow = np.zeros(2*numarcs,int)
jcol = np.zeros(2*numarcs,int)
val = np.zeros(2*numarcs)

# arc k: i->j has coefficient -1 in row i, column k
#                        and +1 in row j, column k
for k in range(numarcs):
    i = map[data['from'][k]]
    j = map[data['to'][k]]
    kk = 2*k
    irow[kk] = i
    jcol[kk] = k
    val[kk] = -1
    kk = 2*k+1
    irow[kk] = j
    jcol[kk] = k
    val[kk] = 1

A = sparse.csc_matrix((val,(irow,jcol)))

b = np.zeros(numnodes)
b[map[data['source']]] = -1
b[map[data['sink']]] = 1

cap = data['capacity']

# ------ lp model --------

x = cp.Variable(numarcs,"x")
f = cp.Variable(1,"f")
prob = cp.Problem(cp.Maximize(f),
                  [A@x == f*b,
                   x >= 0,
                   x <= cap])
prob.solve(verbose=True)
print(prob)


With this experiment, we have confirmed that sparse data matrices work just fine with CVXPY. However, this makes the model not that straightforward anymore. This is more complicated than the corresponding GAMS model. I am able to write down the GAMS model immediately, but this CVXPY implementation required some real work (and some thinking). See [4] for an alternative approach.

#### Example: Matrix Balancing

In this example we want to estimate the inner part of a matrix subject to row- and column-total constraints. This problem is frequently encountered in economic modeling.
An additional constraint is that we want to maintain the sparsity pattern of the matrix. Basically the model is:

Matrix Balancing Model
\begin{align}\min\>\>&\mathbf{dist}(\color{darkred}A,\color{darkblue}A^0)\\ & \sum_i\color{darkred}a_{i,j} = \color{darkblue} v_j && \forall j\\ &\sum_j\color{darkred}a_{i,j} = \color{darkblue} u_i && \forall i \\ & \color{darkblue}a^0_{i,j}=0 \Rightarrow\color{darkred}a_{i,j} = 0 \end{align}

There are different possibilities for the distance function. E.g. it can be a quadratic function $\mathbf{dist}(A,A^0) = \sum_{i,j} (a_{i,j}-a^0_{i,j})^2$ or, as in this case, an entropy function $\mathbf{dist}(A,A^0) =\sum_{i,j} a_{i,j}\log\left(\frac{a_{i,j}}{a^0_{i,j}}\right)$

Often the implication is enforced by just ignoring or skipping all elements $$a_{i,j}$$ where $$a^0_{i,j}=0$$. This leads again to a sparse representation of the variables. In GAMS this is quite easy.

$ontext

Example from

Using PROC IML to do Matrix Balancing
Carol Alderman, University of Kansas
Institute for Public Policy and Business Research
MidWest SAS Users Group MWSUG 1992

$offtext

sets
  p 'products' /pA*pI/
  s 'salesmen' /s1*s10/
;

table A0(*,*) 'estimated matrix, known totals'

             s1    s2    s3    s4    s5    s6    s7    s8    s9   s10  rowTotal
  pA        230   375   375   100         685   215          50            2029
  pB        330   405   419   175    90   504   515         240   105      2798
  pC        268   225   242          30   790   301    44   100            1998
  pD        595   380   638   275    30   685   605    88   100   160      3566
  pE        340   360   440   200    30   755   475    44   150            2794
  pF        132   190   200               432   130                        1071
  pG        309   330   350   125         612   474          50    50      2305
  pH        365   400   330   150    50   575   600    44   150   110      2747
  pI        210   250   308   125         720   256         100    50      2015
  colTotal 2772  2910  3300  1150   240  5760  3526   220   950   495
;

alias (p,i);
alias (s,j);

variables
   A(i,j) 'new values'
   z      'objective (minimized)'
;

equations
   objective
   rowsum(i)
   colsum(j)
;

objective..  z =e= sum((i,j)$A0(i,j), A(i,j)*log(A(i,j)/A0(i,j)));
rowsum(i)..  sum(j$A0(i,j), A(i,j)) =e= A0(i,'rowTotal');
colsum(j)..  sum(i$A0(i,j), A(i,j)) =e= A0('colTotal',j);

A.L(i,j) = A0(i,j);
A.lo(i,j)$A0(i,j) = 0.0001;

model m /all/;
solve m minimizing z using nlp;

display A.L,z.l;


When solved with the general purpose NLP solver CONOPT, we get the following results:

----     58 VARIABLE A.L  new values

s1          s2          s3          s4          s5          s6          s7          s8          s9

pA     229.652     374.869     375.099     100.028                 686.238     212.542                  50.572
pB     330.587     406.192     420.492     175.626      94.300     506.575     510.790                 243.545
pC     267.313     224.685     241.809                  31.297     790.595     297.246      44.017     101.037
pD     595.262     380.610     639.416     275.614      31.391     687.580     599.252      88.299     101.341
pE     339.659     360.058     440.341     200.158      31.346     756.750     469.809      44.086     151.793
pF     130.330     187.815     197.821                             427.953     127.080
pG     309.478     330.895     351.165     125.418                 614.984     470.016                  50.727
pH     360.594     395.631     326.596     148.455      51.665     569.947     586.867      43.598     150.111
pI     209.123     249.246     307.260     124.701                 719.378     252.398                 100.874

+         s10

pB     109.894
pD     167.233
pG      52.318
pH     113.535
pI      52.019

----     58 VARIABLE z.L                   =      -15.769  objective (minimized)


CVXPY does not allow us to skip elements like GAMS does. Well, unless we build the whole model in scalar mode: element by element. That is not very attractive, so let's try another way. It will be a bit of a struggle.

Let's define a (data) matrix D as $d_{i,j} = \begin{cases} 1 & \text{if } a^0_{i,j} \ne 0\\ 0 & \text{if } a^0_{i,j} = 0\end{cases}$

This matrix can be used to skip linear terms that are not needed. The objective is another story. CVXPY has the function entr() which is defined by $$-x\log(x)$$. We expand the objective as: $\sum_{i,j} a_{i,j}\log\left(\frac{a_{i,j}}{a^0_{i,j}}\right) = \sum_{i,j} - \mathbf{entr}(a_{i,j}) - a_{i,j}\log(a^0_{i,j})$

Finally we insert $$d$$ to ignore the zeros: $\min \sum_{i,j} - d_{i,j} \mathbf{entr}(a_{i,j}) - d_{i,j} a_{i,j}\log(a^0_{i,j}+1-d_{i,j})$

The term $$1-d_{i,j}$$ turns the zero entries of $$A^0$$ into ones, so the logarithm stays well-defined. This is quite some gymnastics to shoehorn our model into an acceptable CVXPY format.
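A quick numeric check of this rewrite (my own, on random positive data) confirms the identity before handing it to CVXPY:

```python
import numpy as np

rng = np.random.default_rng(7)
a  = rng.uniform(0.5, 2.0, 10)
a0 = rng.uniform(0.5, 2.0, 10)

entr = lambda x: -x*np.log(x)             # CVXPY's entr() for positive x

lhs = np.sum(a*np.log(a/a0))              # original entropy objective
rhs = np.sum(-entr(a) - a*np.log(a0))     # rewritten form

assert abs(lhs - rhs) < 1e-10

# with d = 0 (zero entry in A0) the log argument becomes a0+1-d = 1,
# so the term contributes log(1) = 0 and stays well-defined
assert np.log(0.0 + 1 - 0.0) == 0.0
print("identity holds")
```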

import numpy as np
import cvxpy as cp

# --------  data ----------
A0 =  [[ 230 , 375 , 375 , 100 ,   0 , 685 , 215 ,   0 ,  50 ,   0 ],
[ 330 , 405 , 419 , 175 ,  90 , 504 , 515 ,   0 , 240 , 105 ],
[ 268 , 225 , 242 ,   0 ,  30 , 790 , 301 ,  44 , 100 ,   0 ],
[ 595 , 380 , 638 , 275 ,  30 , 685 , 605 ,  88 , 100 , 160 ],
[ 340 , 360 , 440 , 200 ,  30 , 755 , 475 ,  44 , 150 ,   0 ],
[ 132 , 190 , 200 ,   0 ,   0 , 432 , 130 ,   0 ,   0 ,   0 ],
[ 309 , 330 , 350 , 125 ,   0 , 612 , 474 ,   0 ,  50 ,  50 ],
[ 365 , 400 , 330 , 150 ,  50 , 575 , 600 ,  44 , 150 , 110 ],
[ 210 , 250 , 308 , 125 ,   0 , 720 , 256 ,   0 , 100 ,  50 ]]

u = [2029,2798,1998,3566,2794,1071,2305,2747,2015]
v = [2772,2910,3300,1150,240,5760,3526,220,950,495]

m = len(u)
n = len(v)

# --------  model ----------

D = np.sign(A0)

Dloga0 = D * np.log(A0+np.ones_like(D)-D)

A = cp.Variable((m,n),"A")

obj = cp.Minimize(cp.sum(cp.multiply(D,-cp.entr(A)) - cp.multiply(A,Dloga0)))
cons = [cp.sum(cp.multiply(D,A),axis=1)==u,
        cp.sum(cp.multiply(D,A),axis=0)==v,
        A >= 0]
prob = cp.Problem(obj,cons)
prob.solve(solver=cp.SCS,verbose=True,max_iters=200000)


Solving this tiny model is very difficult. It uses exponential cones, which are not implemented very robustly in the solvers. With SCS and a lot of iterations, we finally see:

101400| 7.74e-05  9.08e-05  7.16e-04 -1.56e+01 -1.56e+01  3.30e-11  1.14e+02
101500| 7.72e-05  9.10e-05  5.74e-04 -1.54e+01 -1.54e+01  3.30e-11  1.14e+02
101600| 7.69e-05  9.13e-05  5.30e-04 -1.51e+01 -1.51e+01  7.77e-11  1.14e+02
101700| 7.66e-05  9.17e-05  4.02e-04 -1.49e+01 -1.49e+01  5.64e-11  1.14e+02
101800| 7.62e-05  9.22e-05  3.96e-04 -1.46e+01 -1.46e+01  3.30e-11  1.14e+02
101900| 7.59e-05  9.28e-05  2.10e-04 -1.44e+01 -1.44e+01  3.30e-11  1.14e+02
101980| 7.56e-05  9.33e-05  3.35e-05 -1.42e+01 -1.42e+01  1.17e-11  1.14e+02
----------------------------------------------------------------------------
Status: Solved
Timing: Solve time: 1.14e+02s
Lin-sys: nnz in L factor: 1089, avg solve time: 1.87e-05s
Cones: avg projection time: 1.06e-03s
Acceleration: avg step time: 2.62e-07s
----------------------------------------------------------------------------
Error metrics:
dist(s, K) = 4.8795e-06, dist(y, K*) = 0.0000e+00, s'y/|s||y| = -5.3311e-12
primal res: |Ax + s - b|_2 / (1 + |b|_2) = 7.5554e-05
dual res:   |A'y + c|_2 / (1 + |c|_2) = 9.3272e-05
rel gap:    |c'x + b'y| / (1 + |c'x| + |b'y|) = 3.3548e-05
----------------------------------------------------------------------------
c'x = -14.2093, -b'y = -14.2103
============================================================================


The objective is in the neighborhood of what CONOPT found for the GAMS model (CONOPT required just a few iterations).

This model is not exactly a showcase for CVXPY.

#### Conclusion

CVXPY can be very convenient for certain classes of models: convex, structured, and easy to write in matrix notation. For other models it is in all likelihood not very suitable. We saw some examples where things become really hairy when using CVXPY while the underlying mathematical model is quite simple. CVXPY should really be considered a special-purpose modeling tool, and you may want to consider other tools when the model does not really fit CVXPY's matrix philosophy. This also shows how an equation-based modeling tool like AMPL or GAMS is more versatile: we can express many more models in a natural way without much trickery. Furthermore, we saw that pushing matrix notation too far can lead to models that are less readable.