In http://yetanothermathprogrammingconsultant.blogspot.com/2015/01/the-hostile-brothers-problem-1.html we discussed how to find locally optimal solutions for a simple geometric problem: place n points in the unit square so that the minimum distance between any two of them is maximized. Using a local NLP solver, we use:
where model m2 uses the second (smooth) variant of the objective:
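The relation between the two objective variants can be sketched in a few lines of Python (a hypothetical rendering for illustration, not the actual GAMS models m1/m2): m1 maximizes the non-differentiable minimum of the pairwise distances directly, while m2 maximizes a smooth auxiliary variable delta that is bounded above by each squared pairwise distance, so at the optimum sqrt(delta) equals the m1 objective.

```python
import random
from itertools import combinations

def obj_m1(xs, ys):
    # m1-style objective: the non-differentiable min of the pairwise distances
    return min(((xs[i] - xs[j]) ** 2 + (ys[i] - ys[j]) ** 2) ** 0.5
               for i, j in combinations(range(len(xs)), 2))

def obj_m2(xs, ys):
    # m2-style objective: maximize an auxiliary delta subject to
    # delta <= (xi-xj)^2 + (yi-yj)^2 for every pair; each constraint is smooth.
    # At the optimum delta equals the smallest squared distance.
    delta = min((xs[i] - xs[j]) ** 2 + (ys[i] - ys[j]) ** 2
                for i, j in combinations(range(len(xs)), 2))
    return delta ** 0.5  # sqrt(delta) recovers the m1 objective value

random.seed(1)
xs = [random.random() for _ in range(5)]
ys = [random.random() for _ in range(5)]
print(obj_m1(xs, ys), obj_m2(xs, ys))  # identical values
```

The point of m2 is that the min() disappears from the objective and moves into a bundle of smooth inequality constraints, which is what a derivative-based NLP solver needs.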
Multi-start Algorithm
A simple multi-start algorithm would just slap a loop around this. Indeed we can find slightly better solutions with this:
| | obj | min distance | time (s) |
| --- | --- | --- | --- |
| local solve (CONOPT) | 0.01694 | 0.13016 | 1 |
| multi-start sequential (N=50, CONOPT) | 0.01744 | 0.13208 | 93 |
| multi-start sequential (N=50, SNOPT) | 0.01744 | 0.13205 | 39 |
| multi-start parallel (N=500, 4 threads, SNOPT) | 0.01748 | 0.13219 | 113 |
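The sequential scheme can be sketched as follows (a toy Python stand-in: the local-improvement routine, its names, and its parameters are mine and merely play the role of a real NLP solve with CONOPT or SNOPT):

```python
import random
from itertools import combinations

def min_dist2(pts):
    # squared minimum pairwise distance (the quantity being maximized)
    return min((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
               for a, b in combinations(pts, 2))

def local_solve(pts, iters=500, step=0.05):
    # Toy local improvement: perturb one point at a time, keep improving moves.
    # Stands in for one call to a real NLP solver.
    best = min_dist2(pts)
    for _ in range(iters):
        i = random.randrange(len(pts))
        old = pts[i]
        pts[i] = (min(1.0, max(0.0, old[0] + random.uniform(-step, step))),
                  min(1.0, max(0.0, old[1] + random.uniform(-step, step))))
        cur = min_dist2(pts)
        if cur > best:
            best = cur          # accept the improving move
        else:
            pts[i] = old        # reject and restore
    return best, pts

def multi_start(n_points=10, n_starts=50, seed=0):
    # The multi-start loop: solve from n_starts random points, keep the best.
    random.seed(seed)
    best_obj, best_pts = -1.0, None
    for _ in range(n_starts):
        start = [(random.random(), random.random()) for _ in range(n_points)]
        obj, pts = local_solve(start)
        if obj > best_obj:
            best_obj, best_pts = obj, pts
    return best_obj ** 0.5, best_pts

d, _ = multi_start()
print(f"best minimum distance: {d:.5f}")
```

The outer loop is the whole algorithm: everything interesting happens inside the local solve, and the loop only supplies fresh random starting points and records the incumbent.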
As the subproblems are independent of each other, it is easy to solve them in parallel. Here we used a scheme where four problems are solved in parallel: as soon as one finishes, a new one is started, so all four threads stay busy (no starvation). The code managing all this is written in GAMS using a few loops; being able to write little algorithms in a modeling language can be an important feature. Using the parallel algorithm we can solve 500 problems on 4 worker threads in under 2 minutes.
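The keep-the-threads-busy scheme can be sketched with a worker pool (again hypothetical Python rather than the actual GAMS loop code): all jobs are submitted to a pool of four workers, and whenever a worker finishes one job it immediately picks up the next pending one.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed
from itertools import combinations

def min_dist2(pts):
    return min((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
               for a, b in combinations(pts, 2))

def solve_one(seed, n_points=10, iters=500, step=0.05):
    # One randomly started toy local solve (stand-in for a single NLP solver call).
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_points)]
    best = min_dist2(pts)
    for _ in range(iters):
        i = rng.randrange(n_points)
        old = pts[i]
        pts[i] = (min(1.0, max(0.0, old[0] + rng.uniform(-step, step))),
                  min(1.0, max(0.0, old[1] + rng.uniform(-step, step))))
        cur = min_dist2(pts)
        if cur > best:
            best = cur
        else:
            pts[i] = old
    return best

# Four workers; as soon as one job finishes, the pool hands that worker the
# next pending job, so no worker starves while work remains -- the same
# scheduling idea as the GAMS loops. (For CPU-bound pure-Python work a
# ProcessPoolExecutor would give true parallelism; the thread pool here
# just illustrates the scheduling pattern.)
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(solve_one, seed) for seed in range(40)]
    best = max(f.result() for f in as_completed(futures))

print(f"best squared minimum distance over 40 starts: {best:.5f}")
```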
Local Solver
Interestingly, the heuristic solver LocalSolver prefers the first formulation, model m1. This formulation is more compact but non-differentiable; LocalSolver does not care that the objective function is not smooth.
```
LocalSolver       24.4.1 r50296 Released Dec 20, 2014 WEI x86 64bit/MS Windows
LocalSolver 5.0 (Win64, build 20141219)
Searching for equations that define variables... found 0

    time | iterations | objective

Stopped due to time limit.
```
For this particular problem the multi-start + local NLP solver seems more efficient.
Next: more experiments.
Update: LocalSolver statistics
The counts for the expressions and operands are not immediately clear. To investigate, we use n=3 points. The log file shows:
```
Instance: 6 Variables
```
With an option we can write out an .LSP file containing LocalSolver function code:
The .LSP file, with my comments inline:

```
function model() {
  // n0, n1: bounds on each variable
  n2  := x1 in [0,1]
  n4  := -x1
  n5  := x2 in [0,1]
  n6  := (-x1) + x2
  n7  := (x2-x1)^2
  n8  := y1 in [0,1]
  n9  := -y1
  n10 := y2 in [0,1]
  n11 := (-y1) + y2
  n12 := (y2-y1)^2
  n13 := (x2-x1)^2 + (y2-y1)^2
  ...
  // n30: mindist := min(n13, n22, n29)
  // n31-n33: related to the dummy objective function row
  n33 := mindist
  // n34: not used?
  maximize mindist
}
```
Indeed, the generated code is very low level and certainly does not look optimal; let's say there is room for improvement. I suspect that smarter code would only increase the number of iterations per second (the search space is determined by the number of float variables, which stays the same). Update: see the comment below, where some experiments show no performance degradation for this code.