There are two possible formulations for linearizing the product z = x·y of two binary variables x and y.

The first one uses two inequalities:

  z ≥ x + y - 1
  z ≤ (x + y)/2

I usually use a different one, with three inequalities:

  z ≤ x
  z ≤ y
  z ≥ x + y - 1

The last one is stronger: with x=0, y=1 the first formulation only gives z ≤ 0.5 in the LP relaxation, while the second formulation forces z = 0.
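To make this concrete, here is a minimal sketch of both formulations using gurobipy (purely illustrative; any modeling layer will do, and the model and variable names are mine):

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("binary_product")
x = m.addVar(vtype=GRB.BINARY, name="x")
y = m.addVar(vtype=GRB.BINARY, name="y")

# Formulation 1: two inequalities (weaker LP relaxation)
z1 = m.addVar(vtype=GRB.BINARY, name="z1")
m.addConstr(z1 >= x + y - 1)
m.addConstr(2 * z1 <= x + y)      # i.e. z1 <= (x + y)/2

# Formulation 2: three inequalities (this is the convex-hull, i.e. tightest, formulation)
z2 = m.addVar(vtype=GRB.BINARY, name="z2")
m.addConstr(z2 <= x)
m.addConstr(z2 <= y)
m.addConstr(z2 >= x + y - 1)

m.setObjective(z1 + z2, GRB.MAXIMIZE)
m.optimize()
```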
I noticed that, internally, Gurobi uses the first formulation for some of its linearizations of quadratic programs. See the interesting video at http://www.gurobi.com/resources/seminars-and-videos/gurobi-quadratic-constraints-webinar and, for an explanation, the comment from Ed Rothberg below. Basically, the idea is to use the reformulation that adds the fewest equations and nonzero elements to the model, and then let the cut generator expand this into something tighter when needed. That is pretty clever. Nevertheless, at least in some cases it still makes sense to reformulate models by hand. Here is an example:
Of course a single model is not enough to draw any strong conclusion (the performance differences may just be the result of a different solution path).
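A sketch of how such a comparison can be set up, assuming gurobipy and a small made-up model with pairwise products of binaries (this is not the model from the example above): the quadratic version hands the products x[i]·x[j] directly to Gurobi and lets it do the linearization, while the manual version adds the three-inequality reformulation explicitly.

```python
import random
import gurobipy as gp
from gurobipy import GRB

random.seed(1)
n = 30
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
w = {p: random.uniform(-1.0, 1.0) for p in pairs}   # made-up pairwise weights

def build(manual):
    m = gp.Model("pairwise")
    x = m.addVars(n, vtype=GRB.BINARY, name="x")
    if manual:
        # hand reformulation: z[i,j] = x[i]*x[j] via the three inequalities
        z = m.addVars(pairs, vtype=GRB.BINARY, name="z")
        m.addConstrs((z[p] <= x[p[0]] for p in pairs))
        m.addConstrs((z[p] <= x[p[1]] for p in pairs))
        m.addConstrs((z[p] >= x[p[0]] + x[p[1]] - 1 for p in pairs))
        obj = gp.quicksum(w[p] * z[p] for p in pairs)
    else:
        # quadratic objective: leave the linearization of the binary products
        # to Gurobi (the PreQLinearize parameter controls this in presolve)
        obj = gp.quicksum(w[p] * x[p[0]] * x[p[1]] for p in pairs)
    m.setObjective(obj, GRB.MAXIMIZE)
    return m

for manual in (False, True):
    m = build(manual)
    m.optimize()
    print("manual" if manual else "quadratic", m.Runtime, m.ObjVal)
```

Comparing runtimes and node counts of the two runs on your own model is the quickest way to check whether the hand reformulation pays off.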
Note that cover cuts will dynamically generate the second from the first, but only when the second is violated. That often leads to a smaller overall formulation.
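One way to see how a cover cut recovers the stronger constraints: complementing x and y in the weak inequality 2z ≤ x + y turns it into the knapsack constraint

  2z + (1-x) + (1-y) ≤ 2

The pair {z, 1-x} is a cover (its weights 2 + 1 exceed the capacity 2), so the cover inequality z + (1-x) ≤ 1, i.e. z ≤ x, is valid; the same argument with {z, 1-y} gives z ≤ y.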
This is actually a special case of 'constraint disaggregation':

  sum_j aj xj ≤ M b    (aj ≥ 0, xj ≥ 0, b binary, M ≥ sup(aj xj))
You can replace this with a tighter set of constraints:
  xj ≤ M b    (for each j)
Empirically, the latter representation produces worse results in most cases, but as you found with your test, the exception is the rule in MIP.
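A small gurobipy sketch of that pattern (made-up data; the natural per-variable bounds u[j] play the role of the big-M in the disaggregated rows, just as the bound 1 does in z ≤ x above):

```python
import gurobipy as gp
from gurobipy import GRB

# made-up data: the xj can only be positive when the binary b is switched on
a = [3.0, 5.0, 2.0]                      # aj >= 0
u = [10.0, 10.0, 10.0]                   # upper bounds on the xj
M = sum(a[j] * u[j] for j in range(3))   # valid big-M for the aggregated row

m = gp.Model("disaggregation")
b = m.addVar(vtype=GRB.BINARY, name="b")
x = m.addVars(3, ub=u, name="x")

# aggregated: one row, fewer nonzeros, weaker LP relaxation
# m.addConstr(gp.quicksum(a[j] * x[j] for j in range(3)) <= M * b)

# disaggregated: one row per j, tighter LP relaxation
m.addConstrs((x[j] <= u[j] * b for j in range(3)))

m.setObjective(x.sum() - 5 * b, GRB.MAXIMIZE)
m.optimize()
```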