Tuesday, December 2, 2014

Big Iron vs my laptop

In this post a question is asked about MIP solvers running on a cluster. In my answer I note that I usually prefer to be able to run models comfortably on my laptop (it is somewhat souped up, with e.g. 32 GB RAM). When comparing running models on your own machine versus a compute cluster, we can distinguish four cases:

- Model runs fast on both. This is what I like most. In this case, running locally is of course often easier: the big iron does not really help. This is a large class of problems (helped by good solvers and lots of modeling experience).
- Model runs slow on my laptop and slow on the big machine. The expensive hardware does not help here. This also happens quite frequently. Solution: rework the problem (e.g. reformulations, aiming for good solutions rather than proven optimal ones, heuristics, etc.), hopefully moving the problem to category 1. A small sketch of the "good enough" approach follows this list.
- Model runs slow on my laptop but fast on the big machine. This actually does not happen very often; the number of models that fall into this category is surprisingly small.
- Model runs fast on my laptop and slow on the big machine. This should not happen, but it does. Quite often my optimization problems run surprisingly slowly on expensive hardware. Some reasons: the cluster has expensive hardware from a few years ago (my laptop is new every year), there may be latencies in starting jobs on a cluster, it may have been too expensive to get the best solvers on the big machines (I think IBM counts VPUs, which can add up), or the extra parallelization does not pay off. Big caches may help less than expected for some of the sparse data structures used in modeling systems and solvers. Another frequent reason: not enough memory. Running n parallel jobs may require n times the memory (see the back-of-envelope sketch after this list). I have seen clients somewhat underwhelmed or even disappointed by the performance of what was sold as very expensive (and fast) hardware.
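For case 2, "aiming for good solutions" often just means telling the solver when it is allowed to stop. Below is a minimal sketch of that idea using PuLP and its bundled CBC solver (my choice purely for illustration; the post does not prescribe a tool, and PuLP >= 2.x is assumed for the gapRel/timeLimit arguments). The solver returns the best incumbent found within a 1% relative gap or 60 seconds, whichever comes first.

```python
# Minimal sketch (illustration only): accept a "good" solution instead of a
# proven optimum by giving the MIP solver a relative gap and a time limit.
# Assumes PuLP >= 2.x with the bundled CBC solver.
import pulp

# Tiny knapsack-style model, just to have something to solve.
items = {"a": (10, 4), "b": (7, 3), "c": (5, 2), "d": (9, 5)}  # value, weight
capacity = 8

prob = pulp.LpProblem("knapsack", pulp.LpMaximize)
x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in items}
prob += pulp.lpSum(v * x[i] for i, (v, w) in items.items())            # objective
prob += pulp.lpSum(w * x[i] for i, (v, w) in items.items()) <= capacity  # weight limit

# Stop at a 1% relative gap or after 60 seconds, whichever comes first.
solver = pulp.PULP_CBC_CMD(msg=False, gapRel=0.01, timeLimit=60)
prob.solve(solver)

print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```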
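To make the memory point in case 4 concrete, here is a back-of-envelope sketch. The 6 GB per job and the 32 GB machine size are made-up numbers for illustration, not measurements from any particular model.

```python
# Back-of-envelope sketch with made-up numbers: running n copies of a solve
# in parallel multiplies the peak memory requirement by roughly n.
mem_per_job_gb = 6    # assumed peak memory of a single solve
machine_gb = 32       # e.g. the RAM of my laptop

for n in (1, 2, 4, 8, 16):
    needed = n * mem_per_job_gb
    verdict = "fits" if needed <= machine_gb else "does NOT fit"
    print(f"{n:2d} parallel jobs: ~{needed:3d} GB needed -> {verdict} in {machine_gb} GB")
```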
I currently run my OptaPlanner benchmarks on a 4-year-old Xeon desktop (4 GB RAM) and, surprisingly, it's faster(!) than on my 1-year-old laptop (8 GB RAM). I still haven't found a good explanation for that, although I suspect CPU cache size is involved.
We're doing experiments with big iron too, which led me to the same conclusion in the past: throwing hardware at a problem doesn't help much. Better algorithms/models indeed do.
Memory latency and memory throughput are both important and could be worse on your laptop. Laptops, especially ultrabooks, sometimes reduce the processor speed to prevent overheating.
Somewhat related: http://www.quora.com/Which-optimization-algorithms-are-good-candidates-for-parallelization-with-MapReduce