Late next year?!
So we won't see competitors catch up with Apple's A7 until late next year, and even then it might be a 32-bit-only implementation? Apple will be releasing the A8 by then. What happened?
Forget the smartphone horse race. Although Apple introduced 64-bit processors to smartphones with its "forward-looking" iPhone 5S, mobile processor maven ARM says it's "no biggie": by late next year, smartphones based on its 64-bit Cortex-A53 processor core should be widely available. It was expected that the next smartphone …
What happened? The world went meh!
Consumers don't care how many bits. I have a 64-bit version of Windows sitting on this laptop, and the benefit to me? Bugger all. All it means to me is making sure I install the correct version of the small number of applications that come in both 32-bit and 64-bit versions.
What users care about is faster and the same or lower power consumption. Apple went 64-bit with the A7 because it was faster on the same power budget (the ARMv8 instruction set is more efficient). For the same reasons, Android phones will be moving that way too, as per the article. Until then, Fandroids will say "meh" purely because it's something they don't have.
"I have a 64-bit version of Windows sitting on this laptop, and the benefit to me? Bugger all."
I find that being able to address more than 4GB of memory is invaluable; without it, my virtual machines would have cripplingly small amounts of RAM and be basically unusable, and large media file playback (and other very high resolution graphical operations) would be significantly affected.
Maybe you just use your laptop for email and Minesweeper and don't need so much addressable memory, but an awful lot of other people do.
Maybe you use your Windows box for massive-memory applications, but the number of desktop/mobile applications which benefit from well over 2GB of virtual or physical memory is small, and the proportion of punters who will benefit from 64-bit (putting to one side any incidental benefits which both ARM and AMD brought with their 64bit product introductions) is very, very small indeed.
I see a significant risk of a polarisation in the market - high volume unexpandable product (cheap, disposable, one per family member rather than one per household?) based on ARM/Linux for email and Minesweeper and iPlayer and and and, these will increasingly replace low AND mid range PCs in the volume market.
Consequently, the shrinkage of the volume PC market in general will accelerate, and high-end PCs will get increasingly expensive as there is less and less opportunity for manufacturers to build on shared designs (and volume discounts and "incentives" such as the Dell/Intel deal) across the range.
Enjoy your Xeon workstations while they're relatively affordable.
If you want a laugh, have a read of The Guardian's (and in particular, Charles Arthur's) explanation of what 64bit was about when Apple introduced it:
That's all very nice, but while you're (rightly) making fun of a dippy Guardian journo, you're doing something very similar yourself: believing that RAM has anything to do with the use of ARM64. For the moment, it doesn't - but ARM64 is faster and lower power than ARM32 anyway. See here for how:
"believing that RAM has anything to do with the use of ARM64."
ARM64 (or AArch64, to give it its official name) hits the press because of 64-bit addressing. 64-bit addressing doesn't really *need* more RAM, but more RAM can be helpful for performance for apps that need it. Obviously.
And ARM64 does, as you quite rightly point out, bring other architectural stuff too - the doubling of the number of registers, for example. I know this because I first read it in that very same MikeAsh post back in September (thank you) and then went digging in ARM land.
Long before September, the same kind of change and benefit applied to AMD64 vs previous x86 - twice as many registers is beneficial, with or without bigger address space.
Hence my throwaway about "incidental benefits which both ARM and AMD brought with their 64bit product introductions" but I guess it wasn't clear?
If you plot how much RAM successive models of iPhone have possessed since the first model, 4GB of RAM might be on the cards in a couple of years... by introducing a 64bit iOS version now, 3rd party developers can get used to it before the majority of the iDevice range implements it.
"by introducing a 64bit iOS version now, 3rd party developers can get used to it before the majority of the iDevice range implements it."
The leading edge of the rest of the iDevice range does have it now (or will once the new iPad mini is launched in November - though I'm excluding iPods, the 5c and older models).
There is very little for developers to get used to. For anyone who hasn't written any C code or defined C-level data structures, switching over is just a matter of hitting the compile button, running your tests to check all is OK, and resubmitting the app archive to the App Store. If C code or data structures have been written, there may be some small additional changes and checks required. But even for complex apps this shouldn't take a coder more than a couple of hours.
Not a 32-bit implementation - the silicon will be full 64-bit. It's just that the software running on it might only run in 32-bit mode.
No, it will be the other way around: you will have a 64-bit core inside the physical package and pinout of a 32-bit processor. Much like the Intel 386SX was a 32-bit core on a 16-bit external bus back in the late 80s.
So the software can make use of all the nice 64bit operations and registers, even if the hardware on the outside is still lagging behind a step.
64-bit external hardware will only start making sense once smartphones with more than 4GB of memory arrive, but having a 64-bit core, even with 32-bit externals, means you will be able to upgrade your OS and apps for much longer.
You don't need 64-bit registers to address more than 4GB. Cortex-A15 has a 40-bit physical address space (via LPAE) but uses only 32-bit registers. This means that a single program using more than 4GB would need the MMU to switch between banks of memory (much like old 8-bit computers used banked memory to have more than 64K total). But phones and tablets usually run many programs at once, and each of these can use its own 4GB out of a larger total RAM without needing to do fancy tricks -- the OS does the bank switching.
On a longer time scale, I don't see much future in huge, flat address spaces: The future is parallel, so instead of a few cores sharing a large flat memory space, you have many cores each with their own memory communicating over channels. So 64 bit registers for memory addressing is not all-important. You can more easily operate on large integers, though, which is an advantage for cryptographic applications and a few other things. So it is by no means irrelevant, but the importance has been hyped a bit when Apple released their new phone.
If anything it was under-hyped, with multiple stories about how 64-bit makes no difference on a smartphone with limited memory. That's fairly true if the only difference is doubling the size of the registers (and the corresponding caches), but there are many subtle implementation details in what Apple have done which mean the 64-bit implementation does in fact provide a substantial performance gain. The instruction set is enhanced for AArch64 and is more efficient (this is not inherent to "64-bitness" but is true of the ARM AArch64 instruction set in particular). Also look up what Apple have done with tagged pointers, as that change requires 64 bits and is bringing a very real and significant performance gain for iOS applications.
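A rough sketch of the tagged-pointer trick; the bit layout below is invented for illustration, since the layout in Apple's actual 64-bit Objective-C runtime is an undocumented implementation detail:

```c
#include <assert.h>
#include <stdint.h>

/* Real heap objects are aligned, so a genuine pointer's low bit is
 * always zero.  A "tagged pointer" sets that bit as a flag and packs a
 * small value (a small NSNumber, say) directly into the remaining 63
 * bits - no heap allocation, no refcounting traffic at all. */

#define TAG_BIT 1ULL   /* hypothetical flag bit */

/* uint64_t stands in for a pointer word on a 64-bit target */
static uint64_t make_tagged_int(int64_t value)
{
    return ((uint64_t)value << 1) | TAG_BIT;  /* payload in bits 1..63 */
}

static int is_tagged(uint64_t p)
{
    return (int)(p & TAG_BIT);
}

static int64_t tagged_value(uint64_t p)
{
    /* arithmetic right shift restores the sign; strictly
     * implementation-defined in C, but true on gcc/clang */
    return (int64_t)p >> 1;
}
```

With 32-bit pointers there are barely any spare bits left once you want a useful payload, which is part of why this only becomes attractive at 64 bits.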
"The future is parallel, so instead of a few cores sharing a large flat memory space, you have many cores each with their own memory communicating over channels. "
How much am I bid for my Occam/Transputer stuff then? 
Parallel is only important in PCs because Intel say so (because they are maxed out on clock speed).
Parallel is interesting in mobile devices for different reasons: maybe because of the big.LITTLE style opportunities (a big and a little processor working on the same stuff at different times, depending on activity levels).
Parallel is important in HPC and related fields because it's the only way to do some particular kinds of numbercrunching in sensible timescales.
So yes, there's a small set of workloads where parallel processing can increase throughput in the real world, but parallel is almost always technically a second choice versus the tried and tested approach of a single, equivalently powerful, faster CPU.
 Wrt "communicating over channels.": Hoare's excellent "Communicating Sequential Processes" book from 2004 is now available for legitimate free download. Start at
And don't forget the Nintendo 64 in '96
Fun fact: the Nintendo 64 was actually designed and completed in 1964. The 64-bit chip powering it is just a coincidence. The truth is that Nintendo are so far ahead of the curve they hold back their own consoles so as not to have governments ripping off their chip designs.
The Wii U was actually built in the late 80s. Sadly, in recent years other companies have caught up by stealing Nintendo's designs, so their original 50-year roadmap is now out of date. Being Japanese, they refuse to deviate from their current plans; on the bright side, by their current internal roadmap they're only 23 years away from releasing their full VR console (think NerveGear from Sword Art Online).