5G will give cloud AI abilities to low power devices.
ARM will give onboard AI abilities to low power devices.
Phone manufacturer: No need to disagree. We will add shovelware for both!
Arm has set out its stall for the first major new version of its instruction set architecture – Armv9 – in about a decade, and promised compatible chips will have improved machine-learning and security capabilities. Previous versions of the architecture introduced support for things like virtualization and SIMD; the last major …
They are using realms. Anyone who has used TrustZone knows that the chip manufacturers have made life exceedingly hard for hardware and software engineers: each part has its own variation on which bits to flip for a particular function, which features are provided, and what those features actually mean.
Using Arm’s software layer locks you down.
Just looked at one processor today with TrustZone and no RNG, no ECC/RSA, half a secure boot, etc. Add to that the chip vendor convincing my hardware engineers that it is good enough for secure boot and security.
Moving software from one chip to their next generation requires a whole redo of the OTP maps and of how the system even boots.
I just see a mess.
Well... The core is still RISC, and it's all about modules added to it. The instruction sets of each module may also be relatively reduced in size. Maybe we need a new term: Modular Instruction Set Computer - MISC.
Intel keeps adding instructions to the main instruction set (but pretends to be modular by giving each addition a new label), because backwards compatibility. ARM is just pick and mix - make it as reduced or as complex as you feel like. I think RISC-V adopts this philosophy too.
One could think of this as having a lot more coprocessors than just a math one.

If it's not coprocessors but block-handling functions that can make use of extant hardware with minimum on-chip hokey pokey, then why not? It's a bit like taking simple maths and, rather than a * b, we have matrix A*B: it's just the instruction domain that has been extended rather than more instructions bolted on. I've been impressed by the SIMD performance on my Raspberries, and that to my mind is just adding arrays to int16, int32 etc. Moving the looping from code into hardware obviously offers fantastic speedups over the code version, and I doubt the CPU additions are large - just additional counters/registers for most of it.
Er, yes. As soon as chip designers bake in sekuritty that's a given.
The reason? It's difficult, if not impossible, to update and fix the holes when, not if, it's compromised.
Because you can cater for all the penetration techniques around now, but you cannot, without a reliable crystal ball, guard against those yet to be developed.