Who remembers SMP?
That's SMP as in symmetric multiprocessing.
IF (and it's a huge if) your workload is SMP-compatible, and your CPU (alone) is the bottleneck, the time taken to process a given set of work is halved when you double the number of processors.
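(Worked example, ideal case only, figures mine: a job that takes 80 seconds on one CPU takes 40 seconds on two and 20 on four. Real workloads rarely oblige, for reasons that come up below.)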
SMP was industry-standard technology and terminology back in the 80s and 90s, but it seems to have been forgotten (or is being ignored). The people now coming out of IT college and/or going into IT management are, seemingly, Intel-sponsored, Microsoft-brainwashed fashion victims, as illustrated by the VMware-style hype in the base article here.
Multicore is the same as SMP, just cheaper, smaller, and faster than back then. Back then, 80 CPUs in an SMP box would generally, and with good reason, have been laughed at by those with a clue, and so it should be today, outside of certain niches.
...
"the software industry finds writing multithreaded applications hard largely because engineers don't go seeking out the tools for the job."
'Engineers'? You've already let the cat out of the bag. Software 2.0 is about prettified GUIs and a "business model" that makes money for the various players regardless of the quality of the end product or service. IT's not about engineering Stuff That Works.
"no one is taught multithreaded software design anymore."
Absolutely. Communicating Sequential Processes is indeed what they need. That's from when, 1978? Not hip, not trendy, nothing to sell (except a mind-numbingly simple, sensible design concept), move on.
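For anyone too young to have met CSP: the idea survives more or less intact in Go, whose channels are openly CSP-inspired. A minimal sketch of the concept (the process and channel names are mine, purely for illustration): sequential processes that share no state and talk only by passing messages.

    // A CSP-style pipeline: each process is plain sequential code;
    // communication happens only over channels, never shared memory.
    package main

    import "fmt"

    // square reads numbers from in, writes squares to out, then closes out.
    func square(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)

        go square(in, out) // one communicating sequential process

        go func() { // another: the producer
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()

        for result := range out { // main acts as the consumer
            fmt.Println(result) // prints 1 4 9 16 25
        }
    }

Thirty years old, fits on a postcard, and no locks in sight. Not much of a product to sell, though.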
If civil engineers built buildings the way most software people design systems, they'd be reinventing a different kind of girder/welder/rivet every couple of weeks and spewing rubbish about it in every available industry journal. And a lot more bridges would collapse.
The rest makes a great deal of sense too. Where do you work, have you any jobs going?
...
"Of course databases like Oracle (or even MySQL) can benefit from a large number of cores, as they are highly multi-threaded applications."
Bollocks factor: 75%. The database might be multiprocess or multithreaded, but sooner or later in any worthwhile real-world application (eg anything with transactions) there will be one or more serialisation/synchronisation points where everything has to stop to get in sync before progressing further. And that's in addition to the memory and disk bandwidth issues others have already mentioned.
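To put an illustrative number on that (this is just Amdahl's law; the 5% figure is my own): speedup on n processors = 1 / ((1 - p) + p/n), where p is the parallelisable fraction. If even 5% of the work is stuck at those serialisation points (p = 0.95), then 80 cores deliver 1 / (0.05 + 0.95/80), roughly 16x, not 80x, and an infinite number of cores tops out at 20x.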
...
"Don't teach BASIC to the kids anymore, teach parallel C."
Indeed. Except Parallel C (and Parallel Fortran, for that matter) has been around since the days of BigSMP (see above) and DSPs and so on (and indeed the Transputer); say around the late 80s, for (eg) Parallel C from 3L in Edinburgh. Where has Parallel C been for the last twenty years? Waiting for Visual Studio support?
...
A domestic or business-class desktop PC can use a couple more cores than it's got hard drives. If you've got a core per hard drive, you can parallelise the virus scan *and* maybe do some work/have some fun while it's scanning. Anything beyond a few cores isn't going to help, regardless of how many threads are sitting idly waiting for something to do.
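If anyone wants the toy version of that in code, here's a sketch in Go (the drive names and scanDrive are invented for illustration): one worker per drive, and no point spawning more, because the spindles, not the cores, are the bottleneck.

    // Toy sketch: one scanner per drive, running in parallel.
    // Adding workers beyond the number of drives buys nothing,
    // since each scan is bound by its drive's I/O, not by CPU.
    package main

    import (
        "fmt"
        "sync"
    )

    // scanDrive stands in for the real I/O-bound scanning work.
    func scanDrive(drive string) string {
        return drive + ": clean"
    }

    func main() {
        drives := []string{"C:", "D:"} // one worker per spindle
        results := make(chan string, len(drives))
        var wg sync.WaitGroup

        for _, d := range drives {
            wg.Add(1)
            go func(d string) {
                defer wg.Done()
                results <- scanDrive(d)
            }(d)
        }

        wg.Wait()
        close(results)
        for r := range results {
            fmt.Println(r)
        }
    }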
...
Summary: massively parallel processing hasn't been, isn't, and won't be relevant to anything except a tiny proportion of the IT problem space. And even in the problem space where it is potentially relevant, other constraints (memory bandwidth, disk bandwidth, scheduling issues) often render massive parallelism relatively pointless.
If you want to know more, ask someone who remembers the late 80s/90s, do not ask someone who thinks Visual Studio is a software development tool.