It is becoming more and more apparent to me that hardware – computers, mobile phones, home servers, embedded machines, and cloud computing – is little more than a commodity these days. Most people have smartphones now, and storage is cheap and plentiful.
We are up to our eyeballs in cheap CPUs: ARM processors, Intel chips – 32-bit processors are everywhere. A CPU isn't designed for a single purpose – it is designed to be as generic as possible so that it can accomplish as many varied tasks as possible.
One CPU can do many, many different things. Thus for one CPU there is an almost limitless range of software that can be written to utilise it.
Whilst it is true that one can make a processor that specialises in a particular task (graphics, signal processing, etc.), we have access to very generic and capable processors in commodity quantities and at commodity prices.
Software Is Not A Commodity
Software is not, yet, a commodity. With such flexible hardware out there – one chip doing many things – we still do not have the same breadth of flexibility in software (one program doing many things). About the closest things we have are spreadsheets and web browsers – and even spreadsheets must be programmed, and web browsers require websites to be written for them.
There will only be an ever-increasing demand for software engineers and programmers in the years to come. As our world becomes flooded with generic all-purpose CPUs (they will only become more prolific in the household), there will be an ever-increasing demand for those CPUs to be put to use.
The gap between software possibility, actual implementation, and hardware capability
In my personal view of the world, hardware capability has followed, more or less, Moore's Law – which describes exponential growth, a doubling of transistor counts roughly every two years (it looks like a straight line only when plotted on a logarithmic scale). And I believe that the potential utilisation of a CPU in software grows faster still – the gap between what could be written for the hardware and what the hardware itself provides keeps widening.
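To make the exponential nature of that growth concrete, here is a toy calculation. The two-year doubling period is the commonly quoted figure for Moore's Law, not something taken from this article:

```python
# Rough illustration of Moore's Law-style doubling.
# The two-year doubling period is the commonly quoted figure.
def moores_law_factor(years, doubling_period=2):
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over two decades, transistor counts grow roughly a thousandfold:
print(moores_law_factor(20))  # → 1024.0
```

Ten doublings in twenty years is a factor of 1024 – which is why the growth only looks like a straight line on a log-scale chart.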
When you consider what a Commodore 64 was able to do – with 64KB (kilobytes) of memory – it really is mind-blowing. Of course, back then games were often written in assembler – manually optimised, with every instruction carefully considered. Yet games had sound, responsiveness, and playability. Extrapolate that to multi-core processors with gigabytes of memory – and solid-state drives – and the mind boggles at what we could be capable of if we optimised our applications as thoroughly.
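The scale of that extrapolation is easy to underestimate. A quick back-of-the-envelope calculation (8 GiB is an assumed figure for a typical modern machine, not a number from the text above):

```python
# How much more memory a modern machine has than a Commodore 64.
# 8 GiB is an assumed figure for a typical modern desktop.
c64_memory = 64 * 1024          # 64 KiB in bytes
modern_memory = 8 * 1024 ** 3   # 8 GiB in bytes

print(modern_memory // c64_memory)  # → 131072
```

A hand-optimised C64 game squeezed sound, responsiveness, and playability out of one 131,072th of the memory a modern machine carries.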
Of course, commercial realities prevent software from being all it could possibly be. Alongside the incredible growth in hardware capabilities has come the realisation that we can afford to be lazier in software development. With strongly typed languages, garbage collection, and numerous other protections, our less efficient software can give us what we need, albeit somewhat slower than its potential. Hence the actual utilisation of hardware by software tracks what the hardware provides far more closely than it tracks the theoretical potential.
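That trade – convenience and safety over peak hardware utilisation – shows up even within a single high-level language. A minimal sketch in Python: two ways of writing the same computation, neither of which comes close to hand-tuned native code, because the interpreter, dynamic typing, and garbage collector all take their cut:

```python
import timeit

# The same computation written two ways in a high-level language.
# Both are quick to write and safe to run – which is the trade
# described above – but neither approaches hand-tuned native code.

def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

assert sum_squares_loop(1000) == sum_squares_builtin(1000)

# Timings vary by machine; the point is only that convenience,
# not peak hardware utilisation, drives how we usually write code.
print(timeit.timeit(lambda: sum_squares_builtin(1000), number=100))
```

Either version is "fast enough" on modern hardware, which is precisely why the optimisation effort that a Commodore 64 demanded is rarely spent today.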
What this leaves is a vacuum of potential in the software world. And given where we are today in terms of hardware capability, we may never fill this gap, in spite of all the effort in the world. Hardware manufacturers will keep growing their capabilities – but it only takes a small number of hardware engineers to do this. Those few engineers design a chip that factories pump out in the multi-millions, and the software industry then has the job of taking all those chips and all that processing power and putting them to use – and that's not a small job!