The greatest lie ever told: Premature Optimisation is the root of all evil!
It’s a lie because it implies there is a ‘premature’ phase of writing code, a phase of the research or project where you shouldn’t worry about performance or power.
It’s simply not true: there are times when performance isn’t the greatest priority, but thinking about how your code affects these areas is never wasted.
At some fundamental level, software is something that turns power (electricity) into maths. Any time you don’t do that optimally, you waste power. The only time that doesn’t matter is when you have a small problem and plenty of power; in practice, few interesting problems are lucky enough to get away with that.
If you work in machine learning or big data, the likely limit to what you can do is how many processors (whether they are CPUs, GPUs or FPGAs) you can throw at the problem. Assuming you can’t scale infinitely, the results you can get out of a finite set of HW will largely be determined by how performant your code is.
When you’ve designed your latest ANN that will scale across a few hundred GPUs, figuring out how to maximise memory usage or processing power can yield dramatic savings. Even 10% per machine adds up to a lot of extra performance, and that means bigger problems can be solved.
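A quick back-of-envelope sketch (the numbers are hypothetical, just to make the arithmetic concrete): a per-machine efficiency gain multiplies across the whole cluster.

```python
def extra_capacity(num_gpus, per_machine_gain):
    """Extra throughput from a per-machine efficiency gain,
    expressed in GPU-equivalents."""
    return num_gpus * per_machine_gain

# 300 GPUs, each squeezing out 10% more work: the same cluster now
# behaves as if it had gained 30 extra GPUs' worth of throughput.
print(extra_capacity(300, 0.10))  # 30.0
```

Put another way, a 10% optimisation on a 300-GPU cluster is worth roughly 30 machines you didn’t have to buy, power, or cool.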
Many problems are of the type: perform N operations, where N is large. To really get the best result, those operations need to be designed with performance as a priority; there is no premature optimisation there! Whilst there is plenty of code where it isn’t as important, the core code should be thought about in terms of optimisation from the earliest phase.
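As a tiny illustrative sketch (not from the original piece) of why the per-operation cost matters when N is large: any work that is invariant across the loop should be hoisted out of it, because its cost is otherwise paid N times.

```python
def scale_slow(data, scale):
    """Naive version: recomputes scale ** 2 on every one of N iterations."""
    out = []
    for x in data:
        out.append(x * (scale ** 2))
    return out

def scale_fast(data, scale):
    """Hoisted version: the invariant scale ** 2 is computed exactly once."""
    s2 = scale ** 2
    return [x * s2 for x in data]

# Same result either way; only the per-iteration cost differs,
# and for large N that difference is what you pay for in power.
assert scale_slow([1, 2, 3], 3) == scale_fast([1, 2, 3], 3) == [9, 18, 27]
```

The saving here is trivial for three elements, but when N is in the billions it is exactly this kind of core-loop thinking that the essay argues should never be deferred.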
At the extreme, that even means custom hardware: optimising the HW that will run the core algorithms so that it runs just those algorithms very well. In deep learning, the ‘accelerators’ (clusters of GPUs or FPGAs) are vital, and it seems likely that at some point custom ASICs will enter the datacentre just to run the AI code at the heart of many big data algorithms.