Sunday, May 1, 2022

Getting rid of non-linearities

In [1], an attempt was made to implement the following non-linear constraint: $$\frac{\sum_t \max\left\{\sum_i CF_{i,t} q_i - L_t,\, 0\right\}}{\sum_t L_t} \le 0.015$$ Obviously, code like:

 
cfm = quicksum(max(quicksum(cf[i][t] * q[i] - L[t] for i in range(I)), 0) for t in range(T)) / \
      quicksum(L[t] for t in range(T)) <= 0.015

is really not what we want to see. It is best not to start up your editor right away and write Python code like this. Rather, get a piece of paper and focus on the math.

There are two non-linearities: a division and a max() function. Note that I am not sure which symbols are variables and which are parameters. I'll try to explain some possible cases below.


Division


Exogenous divisions of the form $\frac{x}{a}$, where $a$ is a constant, are not an issue. This is really just a linear coefficient $1/a$. If you have $\frac{x}{\sum_i a_i}$ in the constraints, I would advise calculating $\alpha := \sum_i a_i$ in advance, so we can simplify things to $$\frac{x}{\alpha}$$ It is a good habit to keep constraints as simple as possible, as they are more difficult to debug than parameter assignments.
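As a minimal sketch, here is what this looks like in Python, assuming gurobipy (a guess, since the snippet above uses quicksum); the parameter data and the variable x are hypothetical:

import gurobipy as gp

# Hypothetical parameter data: compute the constant denominator up front,
# outside the model, as an ordinary parameter assignment.
a = [2.0, 3.0, 5.0]
alpha = sum(a)

m = gp.Model("exogenous_division")
x = m.addVar(name="x")

# x / alpha is just the linear coefficient 1/alpha applied to x.
m.addConstr((1.0 / alpha) * x <= 0.015, name="ratio")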

If we have something like $\frac{x}{y}$ (an endogenous division), we need to be careful and worry about division by zero (or, more generally, that the expression can assume very large values if $y \approx 0$). If $y$ is a positive variable, we can protect things a bit by using a lower bound, say $y \ge 0.0001$. This is difficult to do if $y$ is a free variable (we would want a hole in the middle of its domain). If we have $\frac{x}{\sum_i y_i}$ it may be better to write this as: $$\frac{x}{z},\quad z = \sum_i y_i,\quad z \ge 0.0001$$ at the expense of an extra variable and constraint. It has the additional advantage of making the model less non-linear (fewer non-linear variables, a smaller non-linear part of the Jacobian). Note that we assumed here that $\sum_i y_i \ge 0$. If this expression can assume both negative and positive values, this will not work.
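A minimal sketch of this reformulation, here using Pyomo (my choice, since a general NLP framework is needed for the remaining division; the post does not prescribe a tool) with hypothetical variables x and y:

import pyomo.environ as pyo

model = pyo.ConcreteModel()
N = range(3)                               # hypothetical index set

model.x = pyo.Var(within=pyo.NonNegativeReals)
model.y = pyo.Var(N, within=pyo.NonNegativeReals)
model.z = pyo.Var(bounds=(0.0001, None))   # lower bound protects the division

# z = sum_i y[i]: the division now involves a single, bounded variable
model.z_def = pyo.Constraint(expr=model.z == sum(model.y[i] for i in N))

# example use of the ratio x/z in a constraint
model.ratio = pyo.Constraint(expr=model.x / model.z <= 0.015)

An NLP solver such as Ipopt can then handle the remaining, much better-behaved division.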

If the division is not embedded in other non-linear expressions, we can often multiply things out. E.g., if we have the constraint $$\frac{x}{\sum_i y_i} \le 0.015$$ then we can write: $$x \le 0.015 \sum_i y_i$$ This is now linear. This is what we can apply to our original constraint: $$\sum_t \max\left\{\sum_i CF_{i,t} q_i - L_t,\, 0\right\} \le 0.015 \sum_t L_t$$
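For the generic case, the multiplied-out constraint is straightforward to state in code (again a sketch assuming gurobipy; x, y, and n are hypothetical):

import gurobipy as gp

m = gp.Model("multiply_out")
n = 4                                      # hypothetical index set size
x = m.addVar(name="x")
y = m.addVars(n, name="y")

# x / sum(y) <= 0.015 becomes the linear constraint x <= 0.015 * sum(y)
m.addConstr(x <= 0.015 * gp.quicksum(y[i] for i in range(n)), name="ratio")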


Max function


We can feed $\max\{x,0\}$ directly to a general-purpose NLP solver. However, this is dangerous: although this function is continuous, it is non-differentiable at $x=0$.

In the special case $\max\{x,0\} \le y$ we can write: $$x \le y,\quad 0 \le y$$ If $y$ represents an expression, you may want to use an extra variable to prevent duplication.
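In code this is nothing more than a variable bound plus a linear constraint (a sketch assuming gurobipy; x and y are hypothetical):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("max_special_case")
x = m.addVar(lb=-GRB.INFINITY, name="x")   # free variable
y = m.addVar(lb=0.0, name="y")             # 0 <= y via the variable bound

# together with y >= 0 this enforces max{x, 0} <= y
m.addConstr(x <= y, name="max_le")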

If we have $\sum_i \max\{x_i,0\} \le y$ we can do: $$\begin{aligned} & z_i \ge 0 \\ & z_i \ge x_i \\ & \sum_i z_i \le y \end{aligned}$$ Note that the variables $z_i$ are a bit difficult to interpret. They can be larger than $\max\{x_i,0\}$ if the inequality is not tight. The reason I use $z_i \ge \max\{x_i,0\}$ is that it is sufficient for our purpose and that $z_i = \max\{x_i,0\}$ is just way more difficult to represent.

This is the reformulation we can apply to our original problem: $$\begin{aligned} & z_t \ge \sum_i CF_{i,t} q_i - L_t \\ & z_t \ge 0 \\ & \sum_t z_t \le 0.015 \sum_t L_t \end{aligned}$$ This is now completely linear: we have removed the division and the $\max(\cdot,0)$ function.
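Putting everything together, a self-contained sketch of this linear formulation could look as follows. I assume gurobipy (because of quicksum in the original snippet); the dimensions and the data cf and L are hypothetical stand-ins for the data in [1]:

import gurobipy as gp

I, T = 2, 3                                    # hypothetical problem dimensions
cf = [[1.0, 2.0, 0.5],                         # cf[i][t]: hypothetical cash flows
      [0.8, 1.5, 1.2]]
L = [10.0, 12.0, 9.0]                          # L[t]: hypothetical liabilities

m = gp.Model("linear_reformulation")
q = m.addVars(I, name="q")                     # original decision variables
z = m.addVars(T, lb=0.0, name="z")             # z[t] >= 0 via the lower bound

# z[t] >= sum_i cf[i][t]*q[i] - L[t]
m.addConstrs(
    (z[t] >= gp.quicksum(cf[i][t] * q[i] for i in range(I)) - L[t] for t in range(T)),
    name="excess")

# sum_t z[t] <= 0.015 * sum_t L[t]  (the division has been multiplied out)
m.addConstr(gp.quicksum(z[t] for t in range(T)) <= 0.015 * sum(L), name="cap")

As noted above, z[t] may end up larger than the true max in a solution where the inequality is not tight, but that is harmless for this constraint.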


References


