## Saturday, September 28, 2019

### Duplicate constraints in Pyomo model

#### Introduction

Pyomo [1] is a popular Python-based modeling tool. In [2] a question is posed about a situation where a certain constraint takes more than 8 hours to generate. As we shall see, the reason is that extra indices are used.

#### A simple example

The constraint $$y_i = \sum_j x_{i,j} \>\>\>\forall i,j$$ is really malformed. The extra $$\forall j$$ is problematic. What does this mean? There are two obvious interpretations:

1. One could say this is just wrong. Below we will see that GAMS and AMPL follow this approach.
2. We can also interpret this differently. Assume the inner $$j$$ is scoped (i.e. local). Then we could read this as: repeat the constraint $$y_i = \sum_j x_{i,j}$$, $$n$$ times. Here $$n=|J|$$ is the cardinality of set $$J$$. This is what Pyomo is doing.

#### LaTeX: no checks

Journal papers are often typeset using LaTeX. Not much checking there, and as a result authors are free to write equations like the one in [3], which exhibits the same problem: a summation over $$i$$ combined with a $$\forall i$$.

#### GAMS: syntax error

The GAMS fragment corresponding to this example shows that GAMS will object to this construct:

```
  11  equation e(i,j);
  12  e(i,j)..  y(i) =e= sum(j, x(i,j));
****                          $125
**** 125  Set is under control already
  13
**** 1 ERROR(S)   0 WARNING(S)
```

#### AMPL: syntax error

AMPL also gives a syntax error. The message could be improved a bit:

```
D:\etc>ampl x.mod
d:x.mod, line 8 (offset 155):
        syntax error
context:  e{i in I, j in J}: Y[i] = sum{j  >>> in  <<< J} X[i,j];
```

#### Pyomo: create duplicate constraints

The Pyomo equivalent can look like:

```python
def eqRule(m, i, j):
    return m.Y[i] == sum(m.X[i, j] for j in m.J)

model.Eq = Constraint(model.I, model.J, rule=eqRule)
```
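This fragment assumes a model with sets and variables already declared. A minimal setup could look like the following; the set elements are assumptions chosen to match the `pprint` output shown further down, and are not part of the original question:

```python
from pyomo.environ import ConcreteModel, Set, Var, Constraint

# assumed example data: 2 elements in I, 3 in J (chosen to match the output below)
model = ConcreteModel()
model.I = Set(initialize=['i1', 'i2'])
model.J = Set(initialize=['j1', 'j2', 'j3'])
model.X = Var(model.I, model.J)
model.Y = Var(model.I)
```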

The `eqRule` fragment is a bit more difficult to read than the mathematical notation, largely due to syntactic clutter. But in any case: Python and Pyomo accept this constraint as written. To see what is generated, we can use:

```python
model.Eq.pprint()
```

This will show something like:

```
Eq : Size=6, Index=Eq_index, Active=True
    Key          : Lower : Body                                     : Upper : Active
    ('i1', 'j1') :   0.0 : Y[i1] - (X[i1,j1] + X[i1,j2] + X[i1,j3]) :   0.0 :   True
    ('i1', 'j2') :   0.0 : Y[i1] - (X[i1,j1] + X[i1,j2] + X[i1,j3]) :   0.0 :   True
    ('i1', 'j3') :   0.0 : Y[i1] - (X[i1,j1] + X[i1,j2] + X[i1,j3]) :   0.0 :   True
    ('i2', 'j1') :   0.0 : Y[i2] - (X[i2,j1] + X[i2,j2] + X[i2,j3]) :   0.0 :   True
    ('i2', 'j2') :   0.0 : Y[i2] - (X[i2,j1] + X[i2,j2] + X[i2,j3]) :   0.0 :   True
    ('i2', 'j3') :   0.0 : Y[i2] - (X[i2,j1] + X[i2,j2] + X[i2,j3]) :   0.0 :   True
```

We see that for each $$i$$ we have three duplicate constraints. Instead of 6 constraints, we should have just 2. The way to fix this is to remove the function argument `j` from `eqRule`:

```python
def eqRule(m, i):
    return m.Y[i] == sum(m.X[i, j] for j in m.J)

model.Eq = Constraint(model.I, rule=eqRule)
```

After this, `model.Eq.pprint()` produces:

```
Eq : Size=2, Index=I, Active=True
    Key : Lower : Body                                     : Upper : Active
     i1 :   0.0 : Y[i1] - (X[i1,j3] + X[i1,j2] + X[i1,j1]) :   0.0 :   True
     i2 :   0.0 : Y[i2] - (X[i2,j3] + X[i2,j2] + X[i2,j1]) :   0.0 :   True
```

This looks much better.

#### The original problem

The constraint in the original question was:

```python
def period_capacity_dept(m, e, j, t, dp):
    return sum(a[e, j, dp, t]*m.y[e, j, t] for (e, j) in model.EJ) <= K[dp, t] + m.R[t, dp]

model.period_capacity_dept = Constraint(E, J, T, DP, rule=period_capacity_dept)
```

Using the knowledge from the previous paragraph, we know this should really be:

```python
def period_capacity_dept(m, t, dp):
    return sum(a[e, j, dp, t]*m.y[e, j, t] for (e, j) in model.EJ) <= K[dp, t] + m.R[t, dp]

model.period_capacity_dept = Constraint(T, DP, rule=period_capacity_dept)
```

Pyomo mixes mathematical notation with programming. I think that is one of the reasons this bug is more difficult to see. In normal programming, adding an argument to a function has an obvious meaning. However, in this case, adding `e,j` means in effect: $$\forall e,j$$. If $$e$$ and $$j$$ belong to large sets, we can easily create a large number of duplicates.

#### Notes: Debugging Pyomo models

When debugging Pyomo models there are a few tricks to know about:

• Have a small data set available. Debugging with large data sets is very difficult and plain unpleasant.
• Use `model.symbol.pprint()`.
• Create an LP file of the model. When using the `pyomo` script, use the `convert` option (don't forget the `--symbolic-solver-labels` flag). From Python code, use `model.write(filename, io_options={'symbolic_solver_labels': True})`, as in the sketch below.
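To illustrate the last bullet, here is a minimal sketch; the tiny model is purely an example:

```python
from pyomo.environ import ConcreteModel, Var, Objective, Constraint

# a tiny example model (purely illustrative)
model = ConcreteModel()
model.x = Var()
model.y = Var()
model.e = Constraint(expr=model.x + model.y >= 1)
model.obj = Objective(expr=model.x + 2*model.y)

# write an LP file using the original symbolic names
# instead of generated ones like x1, x2, c1, ...
model.write("debug.lp", io_options={"symbolic_solver_labels": True})
```

Inspecting such an LP file is often the quickest way to spot duplicated or missing constraints.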

## Wednesday, September 18, 2019

### Demo problem with constraint on standard deviation

In [1] a hypothetical demo problem is shown. I don't think it is a real problem, but rather one contrived as an example. Nevertheless, there are things to say about it.

The problem is:
Original Problem
\begin{align}\min\>&\sum_i \color{darkred}x_i\\ & \mathbf{sd}(\color{darkred}x) \lt \color{darkblue}\alpha\\ & \color{darkred}x_i \in \{0,1\}\end{align}

Notes:

• Here $$\mathbf{sd}$$ is the standard deviation.
• We assume $$x$$ has $$n$$ components.
• Of course, $$\lt$$ is problematic in optimization. So the equation should become a $$\le$$ constraint.
• The standard formula for the standard deviation is: $\sqrt{\frac{\sum_i (x_i-\bar{x})^2}{n-1}}$ where $$\bar{x}$$ is the average of $$x$$.
• This is an easy problem. Just choose $$x_i=0$$.
• When we use $$\max \sum_i x_i$$, things are equally simple. In that case choose $$x_i = 1$$.
• There is symmetry: $$\mathbf{sd}(x) = \mathbf{sd}(1-x)$$.
• A more interesting problem is to have $$\mathbf{sd}(x)\ge\alpha$$.

#### Updated problem

A slightly different problem, and somewhat reformulated is:

MIQCP problem
\begin{align}\min\>&\bar{\color{darkred}x} \\ & \bar{\color{darkred}x}= \frac{\sum_i \color{darkred}x_i}{\color{darkblue}n}\\ & \frac{\sum_i (\color{darkred}x_i - \bar{\color{darkred}x})^2}{\color{darkblue}n-1} \ge \color{darkblue}\alpha^2 \\ & \color{darkred}x_i \in \{0,1\}\end{align}

First, we replaced $$\lt$$ by $$\ge$$ to make the problem more interesting. Furthermore, I got rid of the square root. This removes a possible problem with non-differentiability at zero. The remaining problem is a non-convex Mixed-Integer Quadratically Constrained Problem (MIQCP). The non-convexity implies we want a global solver.

This model solves easily with solvers like Baron or Couenne.
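For reference, a Pyomo version of this model could look as follows. This is a minimal sketch: the values of $$n$$ and $$\alpha$$ are arbitrary examples, and the (commented-out) solver call is just one possible choice:

```python
from pyomo.environ import (ConcreteModel, RangeSet, Var, Constraint,
                           Objective, Binary, minimize, SolverFactory)

n, alpha = 10, 0.4   # example values (assumptions)

m = ConcreteModel()
m.I = RangeSet(n)
m.x = Var(m.I, domain=Binary)
m.xbar = Var(bounds=(0, 1))

# definition of the average
m.defbar = Constraint(expr=m.xbar == sum(m.x[i] for i in m.I)/n)
# squared standard deviation must be at least alpha^2
m.sd = Constraint(expr=sum((m.x[i] - m.xbar)**2 for i in m.I)/(n - 1) >= alpha**2)
m.obj = Objective(expr=m.xbar, sense=minimize)

# requires a global solver because of the non-convex quadratic constraint, e.g.:
# SolverFactory('couenne').solve(m, tee=True)
```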

#### Integer variable

When we look at the problem a bit more, we see we are not really interested in which $$x_i$$'s are zero or one. Rather, we only need to worry about their number. Let $$k = \sum_i x_i$$. Obviously $$\bar{x}=k/n$$. But more interestingly: $\mathbf{sd}(x) = \sqrt{\frac{k (1-\bar{x})^2+(n-k)(0-\bar{x})^2}{n-1}}$ The integer variable $$k$$ is restricted to $$k=0,1,\dots,n$$.
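A quick numerical check of this closed form against a direct computation; the values of $$n$$ and $$k$$ are arbitrary examples:

```python
import math
import statistics

n, k = 10, 3                    # arbitrary example values
x = [1]*k + [0]*(n - k)         # any 0/1 vector with sum k
xbar = k/n
sd = math.sqrt((k*(1 - xbar)**2 + (n - k)*(0 - xbar)**2)/(n - 1))
print(sd, statistics.stdev(x))  # both print 0.48304...
```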

Thus we can write:

MINLP problem
\begin{align}\min\>&\color{darkred}k \\ & \frac{\color{darkred}k (1-\color{darkred}k/\color{darkblue}n)^2+(\color{darkblue}n-\color{darkred}k) (\color{darkred}k/\color{darkblue}n)^2}{\color{darkblue}n-1} \ge \color{darkblue}\alpha^2 \\ & \color{darkred}k = 0,1,\dots,\color{darkblue}n \end{align}

The constraint can be simplified to $\frac{k-k^2/n}{n-1}\ge \alpha^2$ This is now so simple that we can solve the problem by enumerating $$k=0,\dots,n$$, checking the constraint, and picking the best feasible $$k$$.

Because of the form of the standard-deviation curve (note the symmetry), we can specialize the enumeration and restrict the loop to $$k=1,\dots,\lfloor n/2 \rfloor$$: pick the first $$k$$ that does not violate the constraint and exit the loop when found. For very large $$n$$ we can use something like bisection to speed things up even further.
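A sketch of this enumeration scheme, using the simplified constraint from above; the values of $$n$$ and $$\alpha$$ in the example call are assumptions:

```python
def min_k(n, alpha):
    """Smallest k = sum(x_i) with sd(x) >= alpha (None if infeasible)."""
    # sd^2 = (k - k^2/n)/(n-1) is symmetric around n/2 and increasing
    # on [0, n/2], so the first feasible k in 1..n//2 is optimal
    for k in range(1, n//2 + 1):
        if (k - k*k/n)/(n - 1) >= alpha**2:
            return k
    return None

print(min_k(10, 0.4))   # example values: prints 2
```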

So this example optimization problem does not really need to use optimization at all.

#### References

1. Constrained optimisation with function in the constraint and binary variable, https://stackoverflow.com/questions/57850149/constrained-optimisation-with-function-in-the-constraint-and-binary-variable
2. Another problem that minimizes the standard deviation, https://yetanothermathprogrammingconsultant.blogspot.com/2017/09/minimizing-standard-deviation.html