Tuesday, December 2, 2014

Big Iron vs my laptop

In this post a question is asked about running MIP solvers on a cluster. In my answer I note that I usually prefer to be able to run models comfortably on my laptop (it is somewhat souped up, e.g. with 32 GB RAM). When comparing running models on your own machine vs. a compute cluster, we can distinguish four cases:

  1. Model runs fast on both. This is what I like most. In this case running locally is of course often easier: the big iron does not really help. This is a large class of problems (helped by good solvers and lots of modeling experience). 
  2. Model runs slow on the laptop and slow on the big machine. The expensive hardware does not help here. This also happens quite frequently. Solution: rework the problem (e.g. reformulations, aiming for good solutions instead of proven optimality, heuristics, etc.), hopefully moving the problem to category 1.
  3. Model runs slow on my laptop but fast on the big machine. This actually does not happen very often; the number of models that fall into this category is surprisingly small.
  4. Model runs fast on my laptop and slow on the big machine. This should not happen, but it actually does. Often my optimization problems run surprisingly slowly on expensive hardware. Some reasons: the cluster has expensive hardware from a few years ago (my laptop is new every year), there may be latencies in starting jobs on a cluster, it may have been too expensive to license the best solvers on the big machines (I think IBM counts VPUs, which can add up), or massive parallelization simply does not pay off. Big caches may help less than expected for some of the sparse data structures used in modeling systems and solvers. Another common reason is a lack of memory: running n parallel jobs may require n times the memory. I have seen clients being somewhat underwhelmed or even disappointed by performance on what was sold as very expensive (and fast) hardware. 
Sometimes big hardware can help when evaluating many scenarios, or when different users want to solve models at the same time (e.g. behind some web service).
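The scenario-parallel case can be sketched in a few lines. This is a minimal illustration, not a real solver call: `solve_scenario` is a hypothetical stand-in for an actual MIP solve. Note that with a process pool each worker holds its own copy of the model data, which is exactly why n parallel jobs can need roughly n times the memory of one job:

```python
from multiprocessing import Pool

def solve_scenario(demand):
    # Hypothetical stand-in for a real MIP solve: pick the cheapest
    # capacity level that covers the given demand scenario.
    capacities = [10, 20, 40, 80]
    costs = {10: 1.0, 20: 1.8, 40: 3.2, 80: 5.5}
    feasible = [c for c in capacities if c >= demand]
    best = min(feasible, key=lambda c: costs[c])
    return demand, best, costs[best]

if __name__ == "__main__":
    scenarios = [7, 15, 33, 61]
    # Each worker process duplicates the problem data, so memory use
    # scales with the number of parallel workers.
    with Pool(processes=4) as pool:
        results = pool.map(solve_scenario, scenarios)
    for demand, cap, cost in results:
        print(f"demand={demand:2d} -> capacity={cap}, cost={cost}")
```

This embarrassingly parallel pattern is where a big machine genuinely shines; a single hard MIP instance usually is not.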


  1. I currently run my OptaPlanner benchmarks on a 4 year old Xeon desktop (4GB RAM) and surprisingly, it's faster(!) than on my 1 year old laptop (8GB RAM). I still haven't found a good explanation for that, although I suspect CPU cache size is involved.

    We're doing experiments with big iron too, which led me to the same conclusion in the past: throwing hardware at a problem doesn't help much. Better algorithms/models indeed do.

  2. Memory latency and memory throughput are both important and could be worse on your laptop. Laptops, especially ultrabooks, sometimes reduce the processor speed to prevent overheating.

    Somewhat related: http://www.quora.com/Which-optimization-algorithms-are-good-candidates-for-parallelization-with-MapReduce