Monday, September 19, 2011

Cplex MIP and time limits

Looks like Cplex tries to do something smart just before it hits a time limit: see the logs below. There you see that the gap is reduced just before the time limit kicks in, almost independently of the value of the time limit itself.


Above: run with a 100-second time limit; below: with a 1000-second time limit.



You would almost think it would be beneficial for Cplex to trigger this end-of-time behavior every now and then during a normal run.

Note: these runs are on one thread (thus hopefully deterministic, except perhaps for the exact iteration at which the time limit kicks in). The statement “The answer to this is an implementational detail: when you use time limits, the solving process becomes non-deterministic close to the end of the time limit.” does not completely satisfy me. I would like a better understanding of what is happening here.
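For reference, a single-thread, time-limited run like the ones above can be set up in the CPLEX interactive optimizer with the `threads` and `timelimit` parameters; a sketch (the model filename is hypothetical, and the time limit value is of course the quantity being varied here):

```
CPLEX> read model.lp
CPLEX> set threads 1
CPLEX> set timelimit 100
CPLEX> optimize
```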


  1. Having a time limit makes it non-deterministic as Tobias says in his reply.

  2. I just didn't understand whether this runs after the time limit is met, or whether there is an estimated time for this "intelligence" so that it starts before the time limit is hit (avoiding going over time).
    I have found that post-solve can sometimes improve MIP solutions to some extent, but not as dramatically as you show in the graphs.
    Nice stuff!

  3. I noticed this ad for CPLEX 11 a few days back in an old issue of OR/MS, and this blog post reminded me of it: "Leverage your multi-core machine and the new deterministic parallel MIP mode to get repeatable invariant solution paths."

  4. Hi, I encountered a similar issue and decided to come back to this post that I read a while ago.

    I have a set of instances that cplex cannot solve in, say, 10 minutes, but the story is different when I introduce a time limit of, say, 10 seconds: boom! I have the optimal solution.

    Now, that makes benchmarking very difficult when comparing two solvers. The runtime for cplex will depend on the time limit that I introduce.

    So my question is: what is good practice when benchmarking? Do you have any suggestions? I can introduce a time limit of 10 seconds or 1000 seconds, and the runtimes will be very different.

    Also, I wonder how many papers that I read in the past were subject to this artifact without it being noticed.
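On the benchmarking question, one common practice is to fix a single time budget for all solvers and compare solution quality at that budget (incumbent, best bound, and relative gap) instead of comparing runtimes that depend on the chosen limit. A minimal sketch of the CPLEX-style relative MIP gap computation (the 1e-10 term guards against division by zero; the example numbers are made up):

```python
def relative_mip_gap(best_integer: float, best_bound: float) -> float:
    """CPLEX-style relative MIP gap:
    |best_bound - best_integer| / (1e-10 + |best_integer|)."""
    return abs(best_bound - best_integer) / (1e-10 + abs(best_integer))

# Hypothetical run: incumbent 105.0, best bound 100.0 at the time limit.
print(round(relative_mip_gap(105.0, 100.0), 4))  # ≈ 0.0476, i.e. about a 4.8% gap
```

Reporting this gap at a common budget makes two runs comparable even when one of them happens to hit the limit-triggered improvement described in the post.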