This is a very small MIP:
Starting Gurobi...
Optimize a model with 24 rows, 54 columns and 132 nonzeros
Presolve removed 6 rows and 6 columns
Presolve time: 0.11s
Presolved: 18 rows, 48 columns, 114 nonzeros
Variable types: 12 continuous, 36 integer (0 binary)
so I expected this to run very quickly on a fast 8-core Xeon machine. The time limit of 9999 seconds should be no problem, even with an OPTCR (relative gap tolerance) of 0.0. But I see:
 835689342 7572026  infeasible   63        0.00062    0.00000   100%   1.0 9980s
 836077602 7576642    0.00000   65   14    0.00062    0.00000   100%   1.0 9985s
 836454289 7582365    0.00000   56   15    0.00062    0.00000   100%   1.0 9990s
 836850333 7587882    0.00000   45   15    0.00062    0.00000   100%   1.0 9995s
Cutting planes:
  Gomory: 1
  MIR: 2
Explored 837155963 nodes (841647794 simplex iterations) in 9999.00 seconds
Thread count was 8 (of 8 available processors)
Time limit reached
Best objective 6.179540598832e-04, best bound 0.000000000000e+00, gap 100.0000%
MIP status(9): Optimization terminated due to time limit.
This is not a Gurobi problem: Cplex is just as bad. Apparently these branch-and-bound methods are just not very well suited for this problem. I have solved some huge problems with hundreds of thousands of binary variables, but here we see some evidence that not all problems are that easy.
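In Gurobi terms, the time limit and OPTCR correspond to the TimeLimit and MIPGap parameters; a minimal gurobipy sketch of the same settings (reading the MPS version of the model linked in the comments below; the actual runs were driven from GAMS):
# Reproduce the settings above: a 9999 second time limit and a relative gap tolerance of 0.
from gurobipy import read
m = read("metolius12.mps")      # MPS file linked in the comments
m.setParam("TimeLimit", 9999)   # the 9999 second time limit
m.setParam("MIPGap", 0.0)       # OPTCR = 0.0: prove optimality
m.optimize()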
Recently I saw someone writing:
gurobi> m.setParam('Threads',80)
You may all drool now.
Guess we need something like that to solve my tiny 36-integer-variable problem…
Notes:
- The OPTCR (gap tolerance) setting is not an issue: as it turns out, the best bound remains at 0.0, so a gap of 100% is reported.
- Gurobi is solving the nodes pretty efficiently: approx. 1 simplex iteration per node and > 80k nodes per second (see the quick check below).
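A quick check of these numbers against the final log lines above:
# Rates and gap implied by the Gurobi log at the time limit.
nodes = 837155963               # nodes explored
iterations = 841647794          # simplex iterations
seconds = 9999.0                # wall-clock time
print(iterations / nodes)       # about 1.005 simplex iterations per node
print(nodes / seconds)          # about 83,700 nodes per second
best_obj, best_bound = 6.179540598832e-04, 0.0
print(abs(best_obj - best_bound) / abs(best_obj))   # 1.0, i.e. the reported 100% gap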
Update
Some Cplex runs solve this fast (see comments below), but on some machines I see the same problem:
Cplex 12.3.0.0
Reading data...
Starting Cplex...
Tried aggregator 2 times.
MIP Presolve eliminated 1 rows and 1 columns.
MIP Presolve modified 36 coefficients.
Aggregator did 6 substitutions.
Reduced MIP has 18 rows, 48 columns, and 114 nonzeros.
Reduced MIP has 0 binaries, 36 generals, 0 SOSs, and 0 indicators.
…
Elapsed real time = 9880.28 sec. (tree size = 1251.36 MB, solutions = 70)
Nodefile size = 1123.85 MB (527.18 MB after compression)
 478309344 6481362      0.0000    15        0.0012      0.0000 5.26e+008  100.00%
 479049990 6483695      0.0000    15        0.0012      0.0000 5.27e+008  100.00%
 479749224 6499596      0.0000    15        0.0012      0.0000 5.27e+008  100.00%
 480524806 6502892  infeasible              0.0012      0.0000 5.28e+008  100.00%
 481284148 6500253      0.0000    16        0.0012      0.0000 5.29e+008  100.00%
 482022525 6501343      0.0000    16        0.0012      0.0000 5.30e+008  100.00%
 482764367 6511470      0.0000    16        0.0012      0.0000 5.31e+008  100.00%
Mixed integer rounding cuts applied: 2
Zero-half cuts applied: 2
Gomory fractional cuts applied: 2
Root node processing (before b&c):
  Real time = 0.03
Parallel b&c, 4 threads:
  Real time = 9998.98
  Sync time (average) = 420.45
  Wait time (average) = 45.69
  -------
Total (root+branch&cut) = 9999.01 sec.
MIP status(107): time limit exceeded
Fixing integer variables, and solving final LP...
Parallel mode: deterministic, using up to 2 threads for concurrent optimization.
Tried aggregator 1 time.
LP Presolve eliminated 25 rows and 55 columns.
All rows and columns eliminated.
Fixed MIP status(1): optimal
Resource limit exceeded.
MIP Solution:  0.001230 (530937464 iterations, 482992718 nodes)
Final Solve:   0.001230 (0 iterations)
Best possible: 0.000000
Absolute gap:  0.001230
Relative gap:  1.000000
This is a worse solution than Gurobi found. But on a different machine using 8 threads:
Cplex 12.3.0.0
Reading data...
Starting Cplex...
Tried aggregator 2 times.
MIP Presolve eliminated 1 rows and 1 columns.
MIP Presolve modified 36 coefficients.
Aggregator did 6 substitutions.
Reduced MIP has 18 rows, 48 columns, and 114 nonzeros.
Reduced MIP has 0 binaries, 36 generals, 0 SOSs, and 0 indicators.
…
  129959  4046      0.0003    10        0.0031      0.0003   113627   91.54%
* 165794  5594    integral     0        0.0027      0.0005   146606   81.09%
* 170030  4859    integral     0        0.0019      0.0005   149634   72.63%
* 193953  3702    integral     0        0.0007      0.0006   171409   15.85%
* 195840   418    integral     0        0.0006      0.0006   172394    2.03%
Mixed integer rounding cuts applied: 3
Gomory fractional cuts applied: 1
Root node processing (before b&c):
  Real time = 0.12
Parallel b&c, 8 threads:
  Real time = 4.10
  Sync time (average) = 0.39
  Wait time (average) = 0.02
  -------
Total (root+branch&cut) = 4.23 sec.
MIP status(101): integer optimal solution
Fixing integer variables, and solving final LP...
Parallel mode: deterministic, using up to 2 threads for concurrent optimization.
Tried aggregator 1 time.
LP Presolve eliminated 25 rows and 55 columns.
All rows and columns eliminated.
Fixed MIP status(1): optimal
Proven optimal solution.
MIP Solution:  0.000616 (175023 iterations, 203294 nodes)
Final Solve:   0.000616 (0 iterations)
Best possible: 0.000616
Absolute gap:  0.000000
Relative gap:  0.000000
So we have some very different behavior here: optimal in just 4 seconds.
Looks really surprising indeed - these solvers should be able to tackle that pretty easily. It may be the case that your branching got stuck in local minimum because of a few variables. At least in CPLEX you can try setting priorities for branching which can be helpful here; unfortunately, Gurobi is not as good at customization AFAIK.
Can you share the model? I like these examples. Non-binary MIP can be pretty hard.
"It may be the case that your branching got stuck in local minimum because of a few variables."
To be honest, I have no idea what this means.
"At least in CPLEX you can try setting priorities for branching which can be helpful here".
Maybe. It is not so common that a fixed, user-provided branching order will beat Cplex.
"Can you share the model?"
Here you go:
http://amsterdamoptimization.com/models/metolius12.mps
Sorry if I didn't make it clear. What I meant is that some of your variables may have such a heavy impact on the solution that not branching on them early may cause the solver to explore parts of the space where no big improvement is possible.
I'm not experienced with CPLEX, nor do I know the details of the problem we're discussing here, so it's just guessing on my end whether customizing branching rules and priorities is worth the hassle in this software. But since choosing nodes/branching and tight bounds are, in general, the essence of B&B, when the default heuristic fails or takes much more time than is allowed, these areas probably offer the most room for experimentation.
Now, all that probably sounded trivial to you, but I'm still new to this, so this is non-trivial and cool to me :)
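For the mechanics, here is a minimal gurobipy sketch of assigning branching priorities via the per-variable BranchPriority attribute; the priority rule below is purely illustrative, since which variables actually deserve a high priority is exactly the open question here:
# Illustrative only: branch earlier on integer variables with larger objective coefficients.
from gurobipy import read, GRB
m = read("metolius12.mps")
for v in m.getVars():
    if v.VType == GRB.INTEGER:
        v.BranchPriority = int(100 * abs(v.Obj))   # default priority is 0; higher = branch first
m.optimize()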
"gap = 9.08439e-07, 0.15%"
Looks like it is not completely there yet...
With Cplex I see different results on different machines (with different thread counts). I'll show the log for 4 threads.
Thanks Erwin. Interesting model. I can see why it might be hard.
ReplyDeleteMy intuition is that it comes from the level of precision in the instance data. (To me, it seems to share some similarities to some very hard knapsack-type problems). FWIW, I "solved" it in 1 sec on my laptop by changing all coefficients and RHS to have only 2 coefficients.
gurobi> m = read("approx.lp")
Read LP format model from file approx.lp
Reading time = 0.00 seconds
(null): 24 rows, 54 columns, 132 nonzeros
gurobi> m.optimize()
Optimize a model with 24 rows, 54 columns and 132 nonzeros
Presolve removed 6 rows and 6 columns
Presolve time: 0.02s
Presolved: 18 rows, 48 columns, 114 nonzeros
Variable types: 12 continuous, 36 integer (0 binary)
Found heuristic solution: objective 39.2500000
Found heuristic solution: objective 35.1200000
Root relaxation: objective 0.000000e+00, 30 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.00000 0 16 35.12000 0.00000 100% - 0s
H 0 0 1.4000000 0.00000 100% - 0s
0 0 0.00000 0 14 1.40000 0.00000 100% - 0s
0 0 0.00000 0 14 1.40000 0.00000 100% - 0s
0 0 0.00000 0 14 1.40000 0.00000 100% - 0s
0 0 0.00000 0 14 1.40000 0.00000 100% - 0s
* 42 0 36 0.4000000 0.00000 100% 2.2 0s
* 318 193 38 0.3600000 0.00000 100% 1.7 0s
* 403 259 32 0.3500000 0.00000 100% 1.7 0s
* 423 213 27 0.2200000 0.00000 100% 1.7 0s
H 505 248 0.2000000 0.00000 100% 1.6 0s
H 528 240 0.1800000 0.00000 100% 1.6 0s
H 748 319 0.1700000 0.00000 100% 1.6 0s
* 959 354 37 0.1400000 0.00000 100% 1.5 0s
* 3841 1437 86 0.1200000 0.00000 100% 1.6 0s
H 4098 1547 0.1100000 0.00000 100% 1.6 0s
* 4109 1483 82 0.1000000 0.00000 100% 1.6 0s
H 4603 1584 0.0800000 0.00000 100% 1.7 0s
H 7431 2438 0.0700000 0.00000 100% 1.7 0s
* 9920 2920 65 0.0500000 0.00000 100% 1.7 0s
H 9958 2759 0.0400000 0.00000 100% 1.7 0s
H11572 2579 0.0200000 0.00000 100% 1.7 0s
H15613 2474 0.0100000 0.00000 100% 1.6 0s
H26945 17 0.0000000 0.00000 0.0% 1.6 0s
Cutting planes:
Gomory: 2
MIR: 3
Explored 27147 nodes (44625 simplex iterations) in 0.97 seconds
Thread count was 4 (of 4 available processors)
Optimal solution found (tolerance 1.00e-04)
Best objective 0.000000000000e+00, best bound 0.000000000000e+00, gap 0.0%
I'm an engineering, so this is good enough for me. :-)
Ugh. I should have said "all coefficients and RHS to have only 2 *decimal places*." And I'm *in* engineering.
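For the record, a rough gurobipy sketch of that kind of experiment (rounding the matrix coefficients, right-hand sides, and objective of the posted MPS file to 2 decimal places; the original experiment may have differed in its details):
# Round all problem data to 2 decimal places and re-solve.
from gurobipy import read
m = read("metolius12.mps")
constrs, variables = m.getConstrs(), m.getVars()
# Grab the current data first, then write back the rounded values.
rhs = [c.RHS for c in constrs]
obj = [v.Obj for v in variables]
coef = [[m.getCoeff(c, v) for v in variables] for c in constrs]
for i, c in enumerate(constrs):
    c.RHS = round(rhs[i], 2)                         # right-hand sides
    for j, v in enumerate(variables):
        if coef[i][j] != 0.0:
            m.chgCoeff(c, v, round(coef[i][j], 2))   # constraint matrix coefficients
for j, v in enumerate(variables):
    v.Obj = round(obj[j], 2)                         # objective coefficients
m.optimize()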
Is that just luck, or is there a good reason why this rounding makes the problem easier? My intuition (often wrong) says this is just luck.
In order to prove optimality, the solver has to definitively rule out the possibility of better solutions than the one it's got. When you have more decimal places in the input, this can in some cases be harder to do, since there may be competing solutions that are "really darn close". Truncating may make it easier to sort things out (loosely speaking - I am in software so I get to do that).
Yes -- finding a solution is often just luck. But I don't think it is completely luck in this case. With all those extra decimal places, it is just *waaay* too easy for the LP to 'cheat' and find relaxations of value 0. By limiting the decimal places, you remove some of the LP's flexibility for getting solutions of value 0, so that maybe the branching will 'hit' on one more easily. This is essentially what Nate suggests as well. And I also get to speak loosely, since I am an engineer.
That said, if you have many similar instances in this family, and you rounded them all, I'd still be willing to bet that the rounded ones are quite a bit easier.