"may signal a coming surge in RISC-V processor shipments"
May also signal concerns about the future ownership of Arm.
Seagate says it has, after several years of effort, designed two custom RISC-V processor cores for what seems to be a range of functions including computational storage. The disk drive maker told us one of the homegrown CPUs is focused on high performance, and the other is optimized for area, i.e. it's less powerful though smaller …
Apple moved from (6502 to) 68k to PowerPC to x86 to Arm while migrating software via emulation layers, so why couldn't they move to RISC-V (a much smaller change than any previous transition) and effectively take the entire CPU in-house (no need for an architecture licence ... actually, they probably have no need for RISC-V and could design their own architecture if they wanted). So, why should they pay Arm any "significant" licence/royalty fees when they have very little dependency on Arm? Their ecosystem targets whatever architecture they define, which just happens to be Arm at the moment.
So, a billion CPUs a year .... even if Arm only gets 1c per CPU in royalties, you can probably run a decent CPU design team for less than that, knowing that the whole software tool side is available "for free" (OK, you can probably afford to put a bit towards that as well, but you don't have to) ... not looking good for Arm (or is that Arm-Nvidia).
N.b. 20-25 years ago, when Arm was the "up and coming architecture", other processor architectures had to deal with the "it's just a processor and Arm looks cheap, plus our new grads used it at uni" argument and mostly lost ... looks like the boot is on the other foot now!
FWIW I asked the RISC-V Int'l directors about patents, and they were pretty sure anything they spec out that Arm could claim ownership of could be traced back to pre-Arm days, or would be entirely new and novel.
I think there's going to be a patent royal rumble at some point. One side - and not just Arm or a RISC-V member - is going to crack and it's going to kick off, and we'll find out that once again IBM has the patent on adding 4 to the program counter each cycle.
...but at least unlikely to apply to the Risc-V instruction set.
Of course if you want to connect your Risc-V core to some RAM, you'll need to license the RAM controller. Or if you want HDMI. Or play video. Or a GPU that is vaguely compatible with existing programs.
You can probably figure out a power supply, though.
When SCSI first replaced SASI (about 40 years ago) I proposed disk drives with embedded SQL processors, so, instead of looking up your info via a file system, you searched for it in a database. In other words, instead of telling the drive where to look for the data, you tell it what you want, and ask it to go and find it.
Of course, this costs computing power, but if you are getting the computing power for the price of half a teaspoonful of sand (the incremental cost of RISC-V), then why not use it to offload processing from your (Beowulf cluster of) mainstream CPUs? PostgreSQL is pretty cheap these days, as it does not need to fund any AC75 yachts. (If you need any good indexing algorithms for text, PM me - I want an AC75 for myself).
(At the time I was proposing to use an array of pipelines of Transputers, but Mrs Thatcher quietly took the Transputer round the back of the shed and shot it).
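The "tell it what you want, not where to look" idea above can be sketched in a few lines. This is purely illustrative pseudocode, not any real drive interface: `BlockDevice`, `SmartDrive`, and the record format are invented for the example.

```python
# Illustrative sketch only: contrasts address-based retrieval (the host
# says WHERE to read) with content-based retrieval (the host says WHAT
# it wants). All names here are made up, not a real drive API.

class BlockDevice:
    """Conventional drive: the host must already know the address."""
    def __init__(self, blocks):
        self.blocks = blocks          # raw records, addressed by LBA

    def read(self, lba):
        return self.blocks[lba]       # host supplies the location


class SmartDrive(BlockDevice):
    """Drive with an embedded query engine: filtering happens on the
    drive, and only the hits cross the bus back to the host."""
    def query(self, predicate):
        return [rec for rec in self.blocks if predicate(rec)]


drive = SmartDrive(["alice:admin", "bob:user", "carol:admin"])
hits = drive.query(lambda rec: rec.endswith(":admin"))
# hits == ["alice:admin", "carol:admin"]
```

The point of the SQL-on-the-drive proposal is that `query` runs on the drive's own processor, so the host never pays the bus and memory cost of the misses.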
Another project / product in this area was CAFS, the Content Addressable File Store, which searched directly in the drive. I knew someone who had worked on it, who said it used a microcoded processor to do regexp searches as the data passed the heads --- none of this cumbersome "reading it into memory first" stuff.
I worked on and with CAFS and it had its place and I think similar systems would have their place today.
The main advantage was freeing up your expensive mainframe from the laborious, time-consuming and CPU-intensive task of searching through gobbets (technical term) of data, finding the nuggets that were then passed on for further processing. CAFS searches were loaded into the disc controllers and retrieval kicked off. Only hit records were returned. A great boon if the ratio of data to be searched to the number of hit records was large.
Its other main advantage was that it could be loaded with what amounted to free-text searches of arbitrary fields. Thus, way back in the day, what amounted to unstructured big data could be searched efficiently for matches against whatever criteria your big data analysts, DBAs, or even end users wanted.
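The CAFS behaviour described above — regexp matching applied to records as they stream past, with only hits returned to the host — can be mimicked in a few lines. A hedged sketch, with an invented record format and pattern; real CAFS did this in microcode at the controller, not in Python:

```python
import re

# Sketch of the CAFS idea: filter records as they stream past the
# "heads", yielding only the hits. The host never buffers the misses.
# record_stream and the fixed-field record layout are invented examples.

def cafs_search(record_stream, pattern):
    """Yield only records matching `pattern` from a streaming source."""
    rx = re.compile(pattern)
    for record in record_stream:
        if rx.search(record):
            yield record

records = iter([
    "1962 SMITH  J  LONDON",
    "1975 JONES  A  LEEDS",
    "1980 SMITH  P  YORK",
])
hits = list(cafs_search(records, r"SMITH"))
# two hit records returned; the JONES record never reaches the host
```

The payoff is exactly the ratio argument above: when hits are a tiny fraction of the data scanned, almost nothing crosses the channel to the mainframe.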
The closer to the data you do the filtering, the less the data has to move about, and the more efficient the process is. That is, assuming you are keeping the rest of the system busy doing useful work and not just heating up the atmosphere.
> When SCSI first replaced SASI (about 40 years ago) I proposed disk drives with embedded SQL processors, so, instead of looking up your info via a file system, you searched for it in a database
Back then, any such solution would have been proprietary - as I recall, at one point Oracle even offered its own file system for people wanting to maximise performance.
But either way, I'm really not sure about the benefit of this approach. Because fundamentally, any system advanced enough to perform complex activities on your data will be complex in and of itself.
E.g. a SQL "processor" would need code to parse and process the SQL language. It would then need memory, so it can store the data while processing it. And so on, until what you effectively end up with is a full-blown server in its own right.
And that's before you consider how many extra layers of abstraction we have these days. E.g. if a company has any sense, any sensitive data will be encrypted at rest. And/or the fun which lies in debugging issues when you have a black-box design like this.
Arguably, that's the sort of thing which services like Elasticsearch are trying to address - in an era where processor cycles are cheap and disk space is cheaper, it makes sense to build a secondary platform containing denormalised data which you can quickly search before going back to your main datastore (assuming you can't just pull what's needed from the secondary platform).
But embedded processors at the disk hardware level? That's at completely the wrong layer.
No, IMO the drive is not the place to do content searching. The reason being that either the entire drive would have to be searched for every query, which would take far too much time, or the drive would only work with a very specific database format - and even then would need a large amount of RAM to store the hashing tables and caches. This would be too inflexible for use as a general-purpose HDD, and in any case not much cheaper, smaller or lower-power than a conventional HDD attached to a separate CPU system or full-fledged server PC.
Even making a drive that has a built-in file system, so that it accepts file-based commands directly, is not a particularly good idea IMO, because it greatly decreases the HDD's flexibility while providing little advantage (if any) to whatever system it is attached to.
I once asked a Microsoft guy what happened to the database filesystem that was meant to replace NTFS. He said "reality happened".
That's quite succinct, considering it took them four attempts to realise this. (OFS, the first, is available in some betas of NT, according to YouTube.) As for the rest... I think there might've been a Vista beta with WinFS in it, but who knows???
Filesystems are not databases, even if parts look superficially similar.
You know this.... I know this..... (I mean FOUR attempts..... I'd still not try it now and computers are much quicker.....)
Mind you, the Be File System had metadata-level indexing, which was supposed to be quite good, but still, that's a far cry from trying to fit a DB in there.
it shipped about a billion CPU cores last year in its storage products, this development may signal a coming surge in RISC-V processor shipments
And does that matter to anyone? The fabs likely don't care about the architecture. If it's a fairly small number of designs, it doesn't even suggest a large number of RISC-V jobs in the job market, either.
Uh yeah, it matters a lot. It's a validation of a design, for one thing. You're right that the fabs -- and by that, we mean TSMC, etc -- don't care about the architecture. That's not their job.
But people further up the chain considering using the architecture will think it matters. 'Can we trust that this tech works?' 'Well, Seagate just put XYZm into production.'
Interesting stuff, but I'm not really sure what the benefit is to the end-user.
I mean, they're talking like you could offload certain computational tasks to your storage device much like you can do with GPUs now....but given the...err...temporary nature of storage devices in large clusters what happens if a drive fails but it's currently running a task the system needs?
I suppose the main advantage would be the ability to handle tasks like drive-level encryption entirely in hardware rather than relying on the OS to do it.
It means products will become even cheaper or will have even more computing power at the same price point.
ARM processors are already dirt-cheap (you can get a powerful MCU for just a few bucks) but they will become even cheaper (less than a dollar each) if RISC-V takes off.
MCUs are already so cheap that I wonder if it's even useful to make them any cheaper. Anyone can slap something together using today's MCUs at a price that's very wallet-friendly. By making them any cheaper, they will simply become throwaway parts.
And M0 cores are actually quite powerful, at least 10x faster than 8-bit AVR parts.
In fact some of these new parts are *so* powerful I'm actually at a loss what to do with all that horsepower. The RISC-V MCU used in the SiFive SBC is 40 times (!) faster than an AVR 8-bit MCU.
The offloading of tasks they are talking about aren't general purpose system tasks, but tasks related to the data on the device where if the device failed the data would be gone, so it wouldn't matter the task running on the device fails/can't complete.
Examples beyond what you already mentioned, like drive-level encryption (which, btw, already exists: self-encrypting drives, of which there are many HDD and SSD examples with the onboard processing to do exactly that), could be things like offloading a 'find' command to the device, or data-processing tasks such as those scientific data-gathering systems would like. Massive sensor arrays like radio telescopes (e.g. the SKA) generate petabytes of data a day. You could have some initial filtering of that data occur on the drives: as the raw data is streamed to the drive for storage, filters applied in this offloaded processor could detect and discard 'junk' raw data, so it never has to be written to disk and never has to be processed by the supercomputer doing the more detailed analysis, since the on-drive filtering eliminated it already.
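The write-path filtering described above can be sketched as follows. This is a hypothetical illustration: `is_junk`, the sample format, and the thresholds are all invented, and a real computational drive would run this in firmware on its embedded cores, not in Python.

```python
# Hypothetical sketch of write-path filtering on a computational drive:
# junk samples are dropped before they ever hit the platter, so neither
# the disk nor the downstream supercomputer sees them. The sample
# format and junk criteria below are invented for illustration.

def is_junk(sample):
    # e.g. a radio telescope channel flagged for interference,
    # or a reading below the noise floor
    return sample["rfi_flag"] or sample["power"] < 0.01

def write_with_filter(samples, storage):
    """Store only useful samples; count what was kept vs discarded."""
    written = dropped = 0
    for s in samples:
        if is_junk(s):
            dropped += 1          # discarded on-drive, never stored
        else:
            storage.append(s)     # stands in for a write to the media
            written += 1
    return written, dropped

storage = []
samples = [
    {"power": 5.0,   "rfi_flag": False},   # useful signal
    {"power": 0.001, "rfi_flag": False},   # below threshold -> junk
    {"power": 2.0,   "rfi_flag": True},    # interference -> junk
]
written, dropped = write_with_filter(samples, storage)
# written == 1, dropped == 2
```

At petabytes per day, even a modest junk fraction discarded on-drive saves a corresponding amount of bus bandwidth, media wear, and downstream compute.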
Drive-level encryption has been around for decades. Many (most?) SSDs are also SEDs, although few people seem to use the feature.
But it sounds to me that the main reason for the hardware change is to allow more precise (and perhaps faster) servo control, which will mean more storage in the same area of disk, and maybe faster seek times. These are things that cannot be done in an external CPU.
I'm amazed how fast the usage of RISC-V is growing. I'm certain the sale of ARM has accelerated this and will continue to accelerate it even further in the near term. Currently there are almost weekly reveals of new RISC-V designs, both in the U.S. and China. (I haven't seen many European companies jumping on board).
I'm interested in how fast the tool chain, which is said to be one of Arm's distinguishing strengths, is able to catch up.
China's focus seems to be on x86 (via VIA's license) and MIPS (Loongson). As part of their 'China 2025' initiative, both have received massive government support.
You can even buy a Loongson MIPS-based laptop in China, e.g.: https://www.tomshardware.com/news/chinese-laptop-featuring-new-14nm-loongsoon-3a4000-cpu-appears
The big iron and PC market will only switch to RISC-V or ARM if one of them can break the 5GHz barrier.
The cost of switching platforms and architectures is huge and people and companies will not want to swallow that cost unless there's a clear advantage.
5GHz is a physics problem, not particularly related to the instruction set.
At the end of the day, CPUs switch transistors. Instruction sets are just a big lookup table to control that process; part of a chain of encodings starting in the middle of your GCC/LLVM compiler...
Biting the hand that feeds IT © 1998–2022