Bob “SmoothSpan” Warfield calls it the “Multicore Crisis”. That’s the situation in which the effects of Moore’s Law have changed: while the doubling of transistors per chip continues, the by-product is no longer faster processor speeds but more cores per chip instead. He illustrates this point with the graph below:
(The graph appears on Warfield’s original post.)
It’s easy to see from the chart above that relatively little progress has been made since the curve flattened out around 2002. Here we are five years later, in 2007. At Moore’s-Law pace, the 3 GHz chips of 2002 should have doubled about three times by now, to roughly 3 × 2³ = 24 GHz, but in fact, Intel’s latest Core 2 Extreme runs at about 3 GHz. Doh! I hate when this happens! In fact, Intel announced back in 2003 that it was shifting its focus from raising clock speeds to adding more cores. Four cores are available today, and soon there will be 8, 16, 32, or more cores.
(Warfield’s written more in another article, You’ve Already Had a Multicore Crisis and Just Didn’t Realize It!)
The problem with multicore processors is that most software isn’t written to take advantage of them. Jeff “Coding Horror” Atwood demonstrates this in an article where he compares a 3.0 GHz dual-core machine against a 2.4 GHz quad-core on a number of tasks. In most cases, the faster dual-core machine performed better. At the end of the article, he wrote:
It’s possible software engineering will eventually advance to the point that clock speed matters less than parallelism. Or eventually it might be irrelevant, if we don’t get to make the choice between faster clock speeds and more CPU cores. But in the meantime, clock speed wins most of the time. More CPU cores isn’t automatically better. Typical users will be better off with the fastest possible dual-core CPU they can afford.
Software — or more accurately, its programmers and architects — will have to adjust to this shift in the effects of Moore’s Law. Intel fellow Shekhar Borkar puts it quite well:
The software has to also start following Moore’s law. Software has to double the amount of parallelism that it can support every two years.
This means that programming will have to change. Parallel programming — typically a sidebar course in most undergraduate computer science curricula and a sidebar feature in mainstream programming languages — will have to become a fundamental part of software development. Approaches that lend themselves well to parallelization — Google’s MapReduce is a notable example — will have to become part of our daily lexicon. We’ll have to move from primitive parallel paradigms, like the way threading is done in mainstream programming languages, to better ones, such as the Erlang model, where you think in terms of isolated processes that pass messages to one another. As a lazy programmer, I hope some really clever programmers out there will write compilers and interpreters capable of finding the parallelizable stuff in my code and doing the dirty work for me, but I’ve picked up Programming Erlang just in case that doesn’t work out.
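To make the MapReduce idea concrete, here’s a toy word-count sketch in Python — my own illustration, not Google’s implementation, and the function names are mine. The map phase turns each document into per-word counts and is farmed out across all cores with `multiprocessing.Pool`; the reduce phase merges the partial counts into one total.

```python
# Toy MapReduce-style word count (illustrative sketch, not Google's MapReduce).
from collections import Counter
from multiprocessing import Pool

def map_words(document):
    """Map phase: turn one document into a bag of (word, count) pairs."""
    return Counter(document.lower().split())

def reduce_counts(partials):
    """Reduce phase: merge the per-document counts into a single total."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    docs = [
        "the quick brown fox",
        "the lazy dog",
        "the fox and the dog",
    ]
    # The map step is embarrassingly parallel: each document is independent,
    # so Pool.map can spread the work across every available core.
    with Pool() as pool:
        partials = pool.map(map_words, docs)
    totals = reduce_counts(partials)
    print(totals["the"])  # -> 4
```

The point isn’t the word counting — it’s that once a problem is phrased as independent map steps plus an associative merge, throwing more cores at it is trivial.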
I hope that Adam Beberg, who’s quoted in the O’Reilly Radar article Google’s Folding@Home on the “Multi-Core Crisis”, is right:
Yes we will solve them [the problems arising from the fact that “Many of our economically important algorithms (value > $1 Billion) do not scale above 16-20 cores”], but we have to change our algorithms from what most people are used to and this will take time. The same methods we use for distributed folding also seem to translate to a wide variety of other domains, so I see no hard walls on the horizon. I really do hope we find a wall soon so I can climb it and I’m crossing my fingers for a surprise at a billion.
The current generation of programmers is learning in a world of multi-core, and from what I have seen they have zero if any trouble dealing with it. Once they get some experience, we’ll wonder why this was ever considered hard.