Often I see people who have read or heard about NP-completeness dismiss a MIP formulation before even trying it. This is a misunderstanding of complexity theory. To be clear:
Complexity theory says (almost) nothing about how fast a particular MIP solver can solve your particular problem on your particular dataset.
I think we can even drop the "almost." So let's not fall into this trap, and just try things out!
In [1] a simple problem is stated:
Given \(n=40,000\) numbers, select a subset that sums up to 100,000 or as closely as possible.
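To make the problem statement concrete, here is a tiny brute-force illustration (not the MIP approach, and hopeless at \(n=40{,}000\), but fine for seeing what "as closely as possible" means). The instance size and target are scaled-down stand-ins for the real data:

```python
import itertools
import random

random.seed(42)
n, target = 10, 50  # tiny stand-ins for n=40,000 and target=100,000
v = [random.randint(1, 20) for _ in range(n)]

# Enumerate all 2^n subsets and keep the one whose sum is closest to target.
best = min(
    (s for r in range(n + 1) for s in itertools.combinations(range(n), r)),
    key=lambda s: abs(sum(v[i] for i in s) - target),
)
deviation = sum(v[i] for i in best) - target
print("values:", v)
print("chosen indices:", best, "deviation:", deviation)
```

Enumeration is \(O(2^n)\), which is exactly why a MIP solver, with its branching and cutting machinery, is the interesting tool here.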
A small MIP model can do the job:
| MIP model |
|---|
| \[\begin{align}\min\>&\left|\mathit{\color{darkred}{Deviation}}\right| \\ & \sum_i \color{darkblue}v_i \cdot \color{darkred}x_i = \mathit{\color{darkblue}{Target}} + \mathit{\color{darkred}{Deviation}}\\ & \color{darkred}x_i \in \{0,1\}\end{align}\] |
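Note that the absolute value in the objective is not linear as written. One standard way to make this a proper MIP (a sketch; the variable \(d\) is introduced here for illustration) is:

\[\begin{align}\min\>& \color{darkred}d \\ & \color{darkred}d \ge \mathit{\color{darkred}{Deviation}},\quad \color{darkred}d \ge -\mathit{\color{darkred}{Deviation}}\\ & \sum_i \color{darkblue}v_i \cdot \color{darkred}x_i = \mathit{\color{darkblue}{Target}} + \mathit{\color{darkred}{Deviation}}\\ & \color{darkred}x_i \in \{0,1\}\end{align}\]

Because we are minimizing, \(d\) will settle at \(\left|\mathit{Deviation}\right|\) in any optimal solution. Most modeling systems can also do this reformulation for you automatically.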