I was working on a MIQP model in finance: the optimal allocation of contracts to client accounts so that each client observes an average price that is as close as possible to the average contract price. The model was implemented in Solver Foundation (for the client) and GAMS (for prototyping).
With one data set we observed that Gurobi was returning slightly sub-optimal solutions: an objective of 1.396560e-5 instead of the 0 found by other solvers. Most likely this is a tolerance issue. I tried tightening the optimality tolerance (from its default of 1e-6 to 1e-8 and then 1e-9), but that did not help: that option only applies to the LP/QP subproblems. What actually did the trick was multiplying the objective by 1000.
Apparently this factor is not removed by the solver's internal scaling (somewhat to my surprise), which lets us nudge the solver not to stop too early.
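A back-of-the-envelope illustration of why this can work (a sketch in Python; `eps` here is a hypothetical absolute cutoff, not an actual Gurobi parameter, and the real solver internals are more involved):

```python
# Sketch: why scaling the objective can defeat an absolute tolerance.
# eps stands in for some absolute "good enough" threshold inside the solver.
eps = 1e-4

obj = 1.396560e-5   # sub-optimal objective reported by the solver
true_obj = 0.0      # objective of the true optimum

# Unscaled: the distance to the true optimum is below eps,
# so the solver can justifiably stop at the sub-optimal point.
assert abs(obj - true_obj) < eps

# Scaled by 1000: the same solution now sits ~1.4e-2 away from the
# scaled optimum, which exceeds eps, so the solver must keep improving.
assert abs(1000 * obj - 1000 * true_obj) > eps
```

Any tolerance applied in an absolute sense becomes effectively 1000 times tighter relative to the original objective.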
I also checked: resetting the relative gap tolerance from its default of 1.0e-4 to 0 did not help, in either the GAMS or the Solver Foundation version of the model.
- On some new data sets we still see somewhat unstable behavior: different solutions with different objective values after merely reordering the data.
- Gurobi solves the KKT conditions, so we may need to tighten the feasibility tolerance. This helps somewhat, but not completely. Question: I suppose this introduces some danger of models being declared infeasible.
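For reference, the tolerances discussed above can be set like this through Gurobi's Python API (a config fragment, assuming gurobipy and a hypothetical model file `model.lp`; in the GAMS and Solver Foundation versions the same solver options are passed through their respective option mechanisms):

```python
import gurobipy as gp

# Hypothetical model file; the actual model lived in GAMS / Solver Foundation.
m = gp.read("model.lp")

m.Params.FeasibilityTol = 1e-9   # default 1e-6; tightening may risk spurious infeasibility
m.Params.OptimalityTol = 1e-9    # applies to the LP/QP subproblems only
m.Params.MIPGap = 0.0            # relative MIP gap; default 1e-4

m.optimize()
```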
- I get better stability when the model is formulated as a MIP (using an absolute-value formulation).
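The usual way to get absolute values into a linear model is the split-constraint trick: replace a minimized term |d| by a new variable e with e ≥ d and e ≥ -d, which is tight at the optimum. A small sanity check in Python (the names `d` and `e` are mine, not from the model; binaries may still be needed elsewhere for the allocation decisions):

```python
# Standard linearization of an absolute-value term in the objective:
# minimize |d|  ->  minimize e  subject to  e >= d  and  e >= -d.
def linearized_abs(d):
    # The smallest e satisfying both constraints e >= d and e >= -d.
    return max(d, -d)

for d in [-2.5, 0.0, 1.396560e-5, 3.0]:
    e = linearized_abs(d)
    assert e >= d and e >= -d   # both constraints hold
    assert e == abs(d)          # and they are tight at the optimum
```

This avoids the quadratic objective entirely, which may explain the better numerical stability.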