Whatever personal computer performance we have...
Some crappy, pointless, JavaScript-overkill website will bring it grinding to a halt.
When I first saw an image of the 'wafer-scale engine' from AI hardware startup Cerebras, my mind rejected it. The company's current product is about the size of an iPad and uses 2.6 trillion transistors that contribute to over 850,000 cores. I felt it was impossible. Not a chip – not even a bunch of chiplets – just a monstrous …
You're speaking from my soul there!
Somehow, it feels as if the increases in the "power at your fingertips" we've seen in recent decades have largely been used to ... make the eye candy a little sweeter still. Advances in usability? Word'365, or whatever it's called today, is not so far ahead of what Winword'95 could do; user interfaces haven't become more obvious or snappier either (though thanks to the huge increases in compute power, also not slower, even though every mouse-pointer move in your Electron-framework app pushes around 100 REST API requests through a number of grpc-to-json-to-yaml-to-binary-to-toml data conversions. Oh, and I forgot about that old Java component in there that's using a REST-to-XMLRPC bridge thing. Anyway ...).
Mobile apps have seen advances; the interface on current Android or iOS devices beats '00s-era S60 or Windows Mobile any day, for sure. Desktop apps have stagnated, though, and for a long time. Still no voice dictation. Still no talking to my computer. It's not even taking advantage of my two widescreen displays ... still puts taskbar, menu bars, toolbars, ribbon bars ... vertically on top of each other, shrinking even further the space that already shrank from the screens getting wider ... so I can now see four top halves of A4 pages next to each other, alright. Thanks Microsoft, but please train the monkey brigade doing your UI reviews a bit more in practical tasks.
Still, much of the time it feels like modern computing is like modern cars - only available as a huge SUV or an ultra-huge, hyper-luxurious Super-SUV, with fuel efficiency quite a bit worse than an early-1980s Corsa, but oh yeah, it'll have air conditioning inside the tires as well as an exhaust-pipe camera livestreaming the glow of the catalytic converter to the cloud.
Spin one of those screens through 90 degrees, and then you'll have a decent document viewer.
Soon MS will spot the Adobe-style "detachable menu" and decide that's a great idea; then three years later they'll stop doing it, then rinse and repeat.
They just changed the UI for track changes such that even when my wife had figured out *how* to turn it off and on, and was pointing me at the change that indicated it... it still took me ~30 seconds to see what the indicator was. And I could only tell by repeatedly turning it off and on and watching for the change.
It used to be a little slider button that was green when it was on (right hand side), and grey when it was off (left hand side). Now the icon *background* changes between two subtly different greyscales...
*Still no voice dictation. Still no talking to my computer*
One of my greatest bugbears. For all the local power available, Google, Apple and Amazon* insist on transmitting audio to their servers, where it is not only processed but also harvested and stored**. Most other dictation/voice-command apps use a third-party cloud provider.
Yet surely there is enough local power, even on a mobile, to do the processing locally. And if it's 92% accurate instead of 97%, I can live with that.
*and other culprits
**in whatever form they can get away with
"One of my greatest bugbears. For all the local power available, Google, apple and amazon* insist on transmitting audio to their servers, where it is not only processed but also harvested and stored**."
I don't know about storage, but most of those places offer offline dictation and have for some time:
Apple: On macOS, go to the dictation settings and select the offline option. Download each language file you are interested in. On iOS it's less clear, but they claim that if you select languages under Settings -> General -> Keyboard, the processing will be offline. It works when I set my phone to airplane mode.
Google: On Android, go to Settings -> System -> Language and Input -> Google keyboard and select offline languages to download.
Amazon: I don't know about their tablets, but for Alexa devices, you're out of luck.
Most of the people who are smart enough to communicate with a computer directly with language are already doing so. They're called computer programmers, and they have to be specially educated to do so. Most humans don't have sufficient grasp of the syntax of their native language to communicate clearly with one another, let alone with a strictly logical computer.
Maybe with another 20 years of development of quantum computing and AI, we will have a computer capable of understanding typical human babble.
"Still no voice dictation. Still no talking to my computer."
I'm confused by this. We have dictation. In my experience it works rather well for typing, though like everything else you have to check it for the mistakes it will eventually make. Of course, I know some people whom the software seems to hate and frequently misunderstands, but there's a reasonable chance you're not one of them. We have had dictation software for some time now. If you meant conversational dictation, where the computer talks back, we don't really have that: the computer doesn't understand and construct responses, but it can listen and write down what you said just fine.
Windows speech recognition has been "standard" since Windows XP Tablet Edition, which wasn't quite standard, and has been built in from Windows Vista onwards. You could also get it with Microsoft Office. Windows XP tablets weren't especially powerful, though.
You do have to find it, activate it, and train it, and avoid being steered into using the cloud-processing version if you prefer not to. The first part is "key Win+U then select Speech" as of Windows 10.
Catt certainly held a number of key patents back then.
I remember talking to him about the cooling problems, particularly when you link wafers together to form a Transputer-style grid computer. I forget the numbers, just that we managed to cram something ridiculous into the space of an upright piano; the only problem was getting rid of circa 8 kW of heat.
The other area of concern was getting data on and off the wafer.
But then Catt was more focused on small-footprint CPUs and memory, like the Transputer, not a large single CPU and large memory (40 GB+) on a single wafer.
Somewhere, if I dredge the depths of the storage boxes in my attic, I still have a working Sinclair QL, complete with the original user manual. I suspect the Microdrive cartridges are long dead though - they never were the most reliable of things.
I must dig it out sometime and see if it'll still power on....
Absolutely agree; the use case for the recent Mac Studio Ultra is pretty much niche.
Wafer fabs need to shrink and ship large quantities of wafers to stay viable. Hardware vendors need to trump the other hardware vendors for sales. Users don't need the extra compute. The result must be that things last longer. I'm writing this on a four-year-old machine and feel no need to change it. Once every house is saturated and there is no new app, there will be a culling.
Agreed. I only last month replaced my 16-year-old desktop, and that only because the 4 GB RAM maximum just wasn't cutting it for a box that needs to visit mainstream websites in a mainstream browser.
So I replaced it with... a 10-year-old refurbished desktop, and immediately maxed out the RAM to 16 GB. DDR3 isn't going to get any cheaper or easier to find, you know. That ought to be good for a few years.
We have to remember that technological 'innovation' is driven by vendor interest, not customer demand. Raw capacity (i.e. ever bigger numbers) leads to constant churn, which is what has kept the industry rolling for so long. If everyone's kit were felt to be adequate for its functional lifetime, the revenue stream would dry up. I still have a perfectly adequate and reliable system with a single-core Athlon processor which has been driving a SCSI professional graphics scanner since 2005, but obviously that represents lots of lost opportunities for vendors to take money over the intervening 17 years.
my next MacBook Pro will be bigger and heavier than my 2015 model
Based on what evidence? The notebook's weight is now largely dependent upon the screen and the battery. Apple has kept the weight of the MBP constant since it ditched the DVD drive, and the ARM-based ones use the extra space for more battery: few users complain about better battery life. Not that I'm about to rush out and buy one, but I do appreciate what they've done.
As for larger wafers/dies, this is simple physics: communication on the die is faster than communication with anything connected to it. Apple has done this with memory on the M1 and shouts the numbers at anyone who'll listen: memory shared by the CPU and GPU makes some operations a lot faster.
Hardware running ahead of software is hardly new (sometimes it's the other way round and we're waiting for hardware capable of putting The New Shiny in software onto the desktop).
Give it a year or two and somebody out there will find a use for it. Of course, it's possible we'll then all wish they hadn't, but once again, "plus ça change, plus c'est la même chose" (the more things change, the more they stay the same) and all that…
Isn't the real problem that PC (and processor) manufacturers have just been trapped in a cycle of having to constantly stay at the high end of the latest and greatest chips, because that's what people have been taught to buy to run up-to-date software?
How many office workers need a 4 GHz processor in order to run the software they use? Almost none, but that's what you'll have to get if you buy a new one. Why aren't 2 GHz machines still available, and at a much lower price point?
If you want a new car, you don't have to buy a Ferrari, you can buy a modestly performing one for much less money to meet your more modest needs, surely this should be the case too for PCs.
The CPU is but one part of a desktop or laptop computer. Change it and you only change the cost of that one part, ergo you would very quickly hit the point where that computer is not profitable to sell.
The only way round that is to reduce the quality of the remaining parts, and looking at some of the fragile examples out there now I don't think we have too far to go before they only last three years and ........oh!
I disagree completely. I'm a programmer. There are never enough compute cycles, cores, and IOs for doing modern web development, where you have to run multiple servers/services and multiple nodes, sometimes running as a local test implementation of a fault-tolerant network.
But for the workloads I'm talking about, it is the core count that matters, or at least the full hardware threads, not raw CPU.
The one limitation that continues to haunt processing is the plethora of single-threaded utilities and tasks that run on a standard system, severely limiting throughput. Take a kernel update, for example: instead of spawning multiple kernel link cycles in parallel, Linux does them one at a time, in sequence, even on a 12-core box!
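Not that any distro actually does it this way today, but as a toy sketch of the point: if the per-kernel link/regenerate steps really are independent, they could be fanned out across cores with something as simple as the Python below (the version strings and the echo command are placeholders, not real update tooling).

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    # Hypothetical list of installed kernels whose module/link step needs redoing.
    KERNEL_VERSIONS = ["5.15.0-91", "5.15.0-94", "6.1.0-17"]

    def relink(version: str) -> int:
        # Placeholder command; a real updater would invoke its actual link tooling here.
        return subprocess.run(["echo", f"relinking for {version}"]).returncode

    if __name__ == "__main__":
        # One job per kernel, run concurrently instead of strictly in sequence.
        with ProcessPoolExecutor() as pool:
            for version, rc in zip(KERNEL_VERSIONS, pool.map(relink, KERNEL_VERSIONS)):
                print(version, "exit code", rc)

Whether that's safe in practice depends on whether the steps really share no state, which is exactly the kind of question single-threaded tools get to ignore.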
Please don't write articles that suggest everyone shares the same requirements. There are plenty of professional users who can use all the computing power they can get from these machines (and more). I'm working in such an environment myself and have been frustrated for years by the achingly slow development from Intel. Apple has lit a fire under them, and not a moment too soon.
The old saying
... that 90% of users use 10% of the software's functions ...
The only reason this Windows 10 PC leaps into life faster than my oldest Win95 PC is the SSD bolted to the motherboard.
If it loaded from an HDD, it would take just as long.... 25 years of 'progress'?... 25 years of bloat, more like.
And let's face it..... most workers could get by on Office 2003/Win XP with no problem and no disruption.
At this point in time, that is just not true anymore. The bulk of sales are midrange systems that the sales droids tell them will play games - usually an i5 or AMD equivalent with a paltry 8-16 GB of RAM and a 250 GB SSD.
The bulk of business sales is low-end machines for the masses of staff; that is where things get skewed, because those same businesses are responsible for the bulk of the high end purchases as well. Those aren't really "Windows boxes" in the sense you or I think of them, though - they're just glorified modern green screens for the servers that the staff use. If the staff weren't so insistent on it being "usable", most of their work could still be done on dumb terminals.
At least with physical hardware there is a good, actual return.
Windows 3.11 would run fine on 4 MB of memory, with 16 MB feeling superb.
And the install would deplete your hard drive by a massive 15 MB.
I feel that on the software side we have had minimal return since Windows 3.11, the primary reason for future Microsoft bloat being to replenish sales of both hardware and software. And what have we inherited from this shifty behaviour?
Yes, 1000s and 1000s of new avenues for malware.
> I feel that on the software side we have had minimal return since Windows 3.11, the primary reason for future Microsoft bloat being to replenish sales of both hardware and software
Linux Mint requires 20 GB of hard drive space.
macOS requires at least 35 GB of space.
Even the Pi's Raspbian distro takes up 4.7 GB of space.
Yes, there's arguably a lot of pointless bloat in modern OSes, and if you're brave and/or have lots of time, you can no doubt manually trim them down significantly.
But it's not a purely "Microsoft" issue.
THE GOOD
I supported a biotech research computing environment for more than three decades, and in my experience there were occasions when the availability of bigger systems enabled new research. For example, the emergence of TiB-scale RAM systems (SGI UV) enabled the de novo sequencing of very large genomes (of plants), which, up to then, was impractical. Available software was written with much smaller genomes and smaller hardware platforms in mind, and it was inefficient and wobbly, but it worked. Phew!
Also, some researchers might not attempt ambitious computational tasks if they (mistakenly) believe it's not possible, when those of us in support groups can say "of course we can try, we just need to make the box bigger".
THE NOT YET PERFECT
Inefficient code delivered with cutting-edge lab equipment required unnecessarily large resources to run. Some years ago, I spec'd and installed a multi-cabinet HPC/HTC cluster solution for the initial processing of data from a cutting-edge high-throughput sequencing instrument ... only for the equipment vendor to later spend a bit of time optimising the code, which meant it would now run on a single quad-core desktop! This is the nature of cutting-edge laboratory equipment coupled with the cutting-edge software needed to handle it. The cluster capacity was then swallowed up by the unexpected volume of work to analyse the avalanche of (complex) processed data.
THE BAD
A lot of open-source, research-grade software is written by people who are trying to prove a research objective, and who will choose whatever language, frameworks and platform will just get the job done. Once that's achieved, there is precious little time or resource available to make the workflow or the code run efficiently, or even to rewrite it. This means a lot of HPC/HTC cluster time is lost to inefficient software and workflows ... and researchers use each other's code a lot, so useful (if inefficient or wobbly) software rapidly spreads within a global research community.
CONCLUSION
If we were to ask science to progress more slowly and spend more time and money on software engineering, we could make much better use of existing hardware, but I'll end my comment with a defence of science: it would be wrong to do this in many cases. Sometimes the pace of change in science is so fast that there is no time or funding available to optimise the many hundreds of apps used in research, much less to keep pace with advances in code libraries, hardware, storage, networking etc. I feel the benefits of rapid advances in biological science often (not always) far outweigh the apparent background wastage of CPU cycles and IOs. Bigger is better if we want science to move ahead ... especially so when we need it to move fast (remember COVID?).
@philstubbington .... I agree, yes: profiling has become something of a lost art, or at least a less prevalent one in some respects. We created tools to assist with profiling, but the barrier has always been the sheer weight of new and updated apps, the rapid churn of favoured apps, and the availability of time, money and researcher expertise to actually do this.
Any software tool implemented in an HPC/HTC environment will perform differently at different institutions, on different architectures and, more importantly, in different data centre ecosystems where scheduling, storage and networking can vary considerably. Software tools are rarely used in isolation and there is normally a workflow comprising several different/disparate tools and data movements. Ideally then, tools and entire workflows need to be re-profiled, and this is not cost effective if a workflow is only going to be used for one or two projects.
We had over 300 software items in our central catalogue, including local code, though most of it was created elsewhere, plus an unknown number sitting in user homedirs, and there was no realistic way to keep on top of all of them. There are one or two companies out there who have specialised in profiling bioinformatics codes, and this works well for a lab that is creating a standardised workflow, e.g. for bulk production sequencing or similar over a significant time period, e.g. many months or years. We had a different problem, where a lot of the research was either cutting-edge or blue-skies discovery, so nearly all workflows were new and experimental, and then soon discarded as the science marched forwards.
Bioinformatics codes are generally heavy on IO, so one of the quickest wins could be achieved by looking at the proximity of storage placement of input data, and the output location, and how to then minimise the number of steps required for it to be ingested by the next part of the workflow.
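As a crude sketch of that last point (paths and tools here are invented, not any real pipeline): stage the input on node-local scratch and stream one step straight into the next, so intermediate data never takes a round trip through shared storage.

    import subprocess

    # Hypothetical two-step workflow: decompress the locally staged input and feed it
    # directly to the next tool via a pipe, writing only the final result to disk.
    with open("/scratch/local/result.txt", "w") as out:
        step_one = subprocess.Popen(
            ["gzip", "-dc", "/scratch/local/reads.fastq.gz"],  # input staged on local scratch
            stdout=subprocess.PIPE,
        )
        step_two = subprocess.Popen(
            ["wc", "-l"],  # stand-in for the real filter/aligner in the workflow
            stdin=step_one.stdout,
            stdout=out,
        )
        step_one.stdout.close()  # let step one see SIGPIPE if step two exits early
        step_two.wait()
        step_one.wait()

The same idea scales up: the fewer times a multi-TB intermediate crosses the network to shared storage and back, the less IO the cluster burns on plumbing rather than analysis.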
The maximum sustained power consumed by the Mac Studio with the M1 Ultra CPU is 370 W. Quite a bit more than the most efficient, full-sized laptops, but quite a bit less than a lot of full-sized desktops. Don't mistake efficient thermal design for wasteful power consumption (e.g. Xeons).
"the 17-year cicada strategy, applied to computing – availability in such overwhelming numbers that it simply doesn't matter if thirty per cent of capacity disappears into the belly of every bird and lizard within a hundred kilometers."
That is known in the wafer-fab world as "yield". Usually we bin the ones that don't make it; with a new bleeding-edge product, this is often around 80% at launch and falls slowly as the fab beds in and we all learn to do our latest jobs properly. Wafer-scale hinges on the idea that it is more efficient to leave the duds in and wire around them. It is a meme that (like broadband over the power grid) recurs regularly but, as others have pointed out, has yet to prove itself deserving of Darwinian survival since the days of Ivor Catt and Clive Sinclair (better known as the 1980s).
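A back-of-the-envelope illustration of why wiring round the duds can pay off (the defect rate below is invented, not Cerebras's): if every one of the ~850,000 cores has to be good, the chance of a usable wafer collapses; tolerate a few dozen dead cores and, with these made-up numbers, it recovers.

    import math

    def usable(cores: int, defect_rate: float, spares: int) -> float:
        # Probability that at most `spares` cores are defective (simple binomial model).
        return sum(
            math.comb(cores, k) * defect_rate**k * (1 - defect_rate) ** (cores - k)
            for k in range(spares + 1)
        )

    CORES = 850_000        # roughly the core count quoted above
    DEFECT_RATE = 1e-5     # made-up per-core defect probability

    print(usable(CORES, DEFECT_RATE, 0))    # every core must work
    print(usable(CORES, DEFECT_RATE, 20))   # tolerate up to 20 dead cores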
P.S. Our hack is clearly no gamer. "a monstrous 1000mm2 die"? That's only just over 30 mm on a side. The idea that the CPU is a monster and will catch fire unless extreme cooling is employed is a fundamental axiom of said community.