Global optimization of packing problem by multi-start NLP
In [1] the following non-convex model is used to pack \(n=50\) circles with given radii \(r_i\) into a square that is as small as possible:
\[\begin{align} \min_{x,y,S}\>&S\\ &(x_i - x_j)^2 +(y_i - y_j)^2 \ge (r_i+r_j)^2 &\forall i \lt j\\ & r_i \le x_i \le S- r_i &\forall i\\ & r_i \le y_i \le S - r_i &\forall i\end{align}\]
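For reference, here is a minimal GAMS sketch of this model; the identifiers and the radii data are my own inventions for illustration. Note that the lower bounds \(r_i \le x_i, y_i\) can be stated as variable bounds, while the upper bounds involve the variable \(S\) and therefore become constraints:

```
* sketch of the packing model; identifiers and radii data are illustrative
set i 'circles' /c1*c50/;
alias (i,j);
parameter r(i) 'given radii';
r(i) = uniform(0.02,0.05);

variables x(i), y(i), S 'size of the square';
equations nooverlap(i,j), xup(i), yup(i);

nooverlap(i,j)$(ord(i)<ord(j))..
   sqr(x(i)-x(j)) + sqr(y(i)-y(j)) =g= sqr(r(i)+r(j));
* upper bounds contain the variable S, so they must be constraints
xup(i).. x(i) =l= S - r(i);
yup(i).. y(i) =l= S - r(i);
* lower bounds are simple variable bounds
x.lo(i) = r(i);
y.lo(i) = r(i);

model pack /all/;
solve pack using nlp minimizing S;
```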
This is a difficult model for global solvers. In the animation above we solve \(N=50\) problems, each with a different random starting point, using SNOPT (a local NLP solver). The random starting points we generate are not feasible, but SNOPT seems to have no problem finding a feasible and locally optimal configuration. The results look quite good. Sometimes a really low-tech approach is quite appropriate.
Speeding up the solve loop in GAMS
Here we do some experiments with a loop of \(N=50\) tries, each packing \(n=50\) circles.
- A simple loop statement is the most obvious approach (see the sketch after this list). On my machine this takes 73 seconds.
- Speed up the loop by loading the solver as a DLL instead of calling an executable (m.solvelink=5), and by reducing output in the listing file (m.solprint=2). The elapsed time is now 40 seconds.
- Run all 50 problems in parallel using solvelink.async, GAMS's asynchronous grid facility (we kept m.solprint=2). The total time reduces to 16 seconds.
- Add extra logic to run at most 4 jobs at a time (my laptop has 4 physical cores). This does not help: it increases the run time to 40 seconds. The effort to start a new job once a running job finishes adds some idle time. It looks like it is better to saturate the machine and simply launch all jobs at once. This assumes we have enough memory to keep all jobs in RAM.
- Scenario solver. This multi-start algorithm can be viewed as solving \(N\) independent scenarios, which makes it a good candidate for the GAMS scenario solver [2]. The scenario solver can solve independent scenarios very efficiently by updating the problem directly in memory. Unfortunately, this tool is not flexible enough to handle a multi-start approach: we cannot initialize the levels before each solve.
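Below is a sketch of the sequential loop and of the asynchronous (grid) variant. It reuses the pack model from the earlier sketch; the bookkeeping set k and the parameters obj and h are hypothetical names:

```
set k 'tries' /t1*t50/;
parameter obj(k) 'objective found in each try';

* sequential loop with a DLL solver link and reduced listing output
pack.solprint  = 2;   * less output in the listing file
pack.solvelink = 5;   * load the solver as a DLL instead of an .EXE
loop(k,
   x.l(i) = uniform(0,1);   * random (not necessarily feasible) start
   y.l(i) = uniform(0,1);
   solve pack using nlp minimizing S;
   obj(k) = S.l;
);

* asynchronous variant: submit all jobs first, then collect results
parameter h(k) 'grid handles';
pack.solvelink = %solvelink.AsyncGrid%;
loop(k,
   x.l(i) = uniform(0,1);
   y.l(i) = uniform(0,1);
   solve pack using nlp minimizing S;
   h(k) = pack.handle;
);
repeat
   loop(k$handlecollect(h(k)),
      obj(k) = S.l;
      display$handledelete(h(k)) 'could not remove handle';
      h(k) = 0;
   );
   display$sleep(card(h)*0.2) 'waiting for jobs';
until card(h) = 0;
display obj;
```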
Stopping rule
In this experiment we simply ran \(N=50\) jobs. A more sophisticated approach would be to use a probabilistic stopping rule [3]: stop when the probability of finding a better solution becomes small.
Looking at the objectives in the plot above, it looks like they may be normally distributed. Apart from the outlier (the last solve), this indeed appears to be the case. The idea is to track the probability of finding an objective that is better than the current best one:
The blue area in the density plot is the probability we can improve the objective
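A minimal GAMS sketch of this test, assuming the objectives collected so far are approximately normal; the data below is made up, and errorf is GAMS's cumulative standard normal:

```
* hypothetical stopping test based on a normal fit of the objectives
set k 'tries done so far' /t1*t20/;
parameter obj(k) 'objectives found so far';
obj(k) = uniform(0.26,0.29);   * made-up data for illustration

scalars mu, sigma, best, pImprove;
mu    = sum(k, obj(k))/card(k);
sigma = sqrt(sum(k, sqr(obj(k)-mu))/(card(k)-1));
best  = smin(k, obj(k));

* for a minimization problem, the estimated probability that a new
* solve beats the incumbent is Phi((best-mu)/sigma), where errorf
* is the standard normal CDF
pImprove = errorf((best-mu)/sigma);
display mu, sigma, best, pImprove;
* in a multi-start loop we would stop once pImprove drops below
* some small threshold, say 0.01
```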
Multi-start and global solvers
The global solver Baron actually runs a few multi-start local searches internally to try to improve the initial solution:
Starting multi-start local search
Done with local search
In this case it did not improve anything. Besides this, we can run our own multi-start algorithm first; Baron automatically picks up a good solution from the initial values:
Starting solution is feasible with a value of 0.269880944180E-001
A related problem
A similar problem is the "hostile brothers problem": given a plot of land (here \([0,1]\times [0,1]\)), place \(n\) points such that the minimum pairwise distance is maximized. The model can look like:
\[\begin{align} \max_{x,y,d^2}\>&d^2\\ &d^2 \le (x_i - x_j)^2 +(y_i - y_j)^2 &\forall i \lt j\\ & x_i, y_i \in [0,1]\end{align}\]
Again, good solutions can be found using a simple multi-start strategy:
Hostile Brothers Problem solved with a multi-start algorithm
In this case it is very easy to generate a feasible starting point:\[\begin{align}&x_i \sim U(0,1)\\&y_i \sim U(0,1)\\& d^2 := \max_{i\lt j} \left\{ (x_i - x_j)^2 +(y_i - y_j)^2 \right\} \end{align}\]
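A GAMS sketch of this model and starting point may look as follows (the identifiers are my own):

```
* sketch of the hostile brothers model; identifiers are illustrative
set i 'points' /p1*p50/;
alias (i,j);
positive variables x(i), y(i);
variable d2 'minimum squared distance';
x.up(i) = 1;
y.up(i) = 1;

equation mindist(i,j) 'd2 is below every pairwise squared distance';
mindist(i,j)$(ord(i)<ord(j))..
   d2 =l= sqr(x(i)-x(j)) + sqr(y(i)-y(j));

model hostile /all/;

* a random starting point is feasible by construction
x.l(i) = uniform(0,1);
y.l(i) = uniform(0,1);
d2.l = smin((i,j)$(ord(i)<ord(j)), sqr(x.l(i)-x.l(j)) + sqr(y.l(i)-y.l(j)));

solve hostile using nlp maximizing d2;
```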
References
1. Knapsack + packing: a difficult MIQCP, http://yetanothermathprogrammingconsultant.blogspot.com/2018/05/knapsack-packing-difficult-miqcp.html
2. Gather-Update-Solve-Scatter (GUSS), https://www.gams.com/latest/docs/S_GUSS.html
3. C.G.E. Boender and A.H.G. Rinnooy Kan, Bayesian stopping rules for multistart global optimization methods, Mathematical Programming, 37:59-80, 1987
4. Stopping criteria for meta-heuristics, http://yetanothermathprogrammingconsultant.blogspot.com/2015/01/stopping-criteria-for-meta-heuristics.html