Nvidia may be mulling lopping Arm off Softbank: GPU goliath said to have shown interest in acquiring CPU design house

Nvidia is reportedly pondering snapping up Arm, going as far as approaching the Softbank-owned microprocessor designer to talk about a potential takeover. Arm licenses chip blueprints, which range from individual CPU cores to graphics and machine-learning accelerators to entire instruction set architectures, to system-on-chip …

  1. Duncan Macdonald Silver badge

    What is the point ?

For Nvidia, if all they want is custom ARM designs, then buying a suitable licence from ARM would be far cheaper than buying the company. As ARM have recently brought out their semi-custom Cortex-X1, which makes it easier for customers to get higher performance systems, there would seem to be little point in Nvidia trying to buy the whole company.

    1. diodesign (Written by Reg staff) Silver badge

      Re: What is the point ?

      > For Nvidia, if all they want is custom ARM designs then buying a suitable license from ARM would be far cheaper than buying the company

      I think they perhaps don't want it to die or be asset stripped under Softbank or as a public company.


      1. Glen 1 Silver badge

        Re: What is the point ?

        ..Or want to be the ones doing the asset stripping.

Lots of brains at ARM have the skills that could boost Nvidia's ranks, and that's *before* we start talking about the tech licensing.

        1. diodesign (Written by Reg staff) Silver badge

Re: What is the point ?

          Fair point. It does feel like Arm's backed into a corner.


          1. SteveK

            Re: What is the point ?

            Having its arm twisted you mean?

            1. Korev Silver badge

              Re: What is the point ?

              Yeah, they have a chip on their shoulder...

          2. I ain't Spartacus Gold badge

            Re: What is the point ?

            The problem is, Softbank are likely to not want to lose an embarrassing amount of money on the deal. So it's going to cost an arm and a leg.

            1. Anonymous Coward
              Anonymous Coward

              Will Softbank get their leg over?

              Overvalued by one leg?

        2. werdsmith Silver badge

          Re: What is the point ?

Right now, with Apple having announced that their MacBooks will use their own ARM-based CPUs, I'm fairly sure that other PC makers will follow, and ARM's progress out of small-device territory and into productivity devices will be followed by makers of other computers for Windows, Linux etc. They will use SoC components from companies like NVidia, who have the existing skills to supply and adapt GPU drivers.

          1. Porco Rosso

            Re: What is the point ?

Isn't Bill Gates or Microsoft the biggest shareholder of Nvidia?

So it's basically Microsoft buying ARM ...

Nvidia is a better candidate than a Chinese state-owned conglomerate.

Even better would be a European private one (Nokia, Ericsson, or the car boys Bosch, HERE!) from my European point of view ..

For once we have a jewel in IT; better keep it ..

            1. katrinab Silver badge

              Re: What is the point ?



              Fidelity is the largest shareholder, and a 0.95% stake would put you in the top 10.

              1. Porco Rosso

                Re: What is the point ?

You have a point.

        3. maffski

          Re: What is the point ?

          'Lots of brains at ARM have the skills that could boost Nvidia's ranks'

It could go the other direction as well. NVidia are extremely good at market segmentation, and they might think that, while Softbank's 'just charge more for everything' approach will drive customers from ARM, some careful technology segmentation might see specialist industries paying up.

    2. Throatwarbler Mangrove Silver badge

      Re: What is the point ?

      Also, there's tidy licensing income to be collected.

      1. Schultz Silver badge

        "Also, there's tidy licensing income to be collected."

        .. which may or may not be sufficient to pay the interest on the purchase price.

Based on the valuation Softbank put on Arm in 2016 ($32 billion), the current licensing income (a flat $1.6 billion in 2019) may not be sufficient to keep things running. So the question becomes whether they can grow significantly (hard when you already dominate the market), collect more rent from the licensees (a question of alternatives and, possibly, monopoly laws), or whether some bidder sees additional strategic value.
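A rough back-of-the-envelope on that point, using the two figures above and a cost of capital that is purely an assumption for illustration:

```python
# Back-of-the-envelope: financing cost of the 2016 purchase price vs.
# Arm's licensing income. The 5% cost of capital is an assumed figure.
purchase_price = 32e9      # SoftBank's 2016 valuation, USD
licensing_income = 1.6e9   # reported flat licensing income, 2019, USD
cost_of_capital = 0.05     # assumed for illustration

annual_financing_cost = purchase_price * cost_of_capital
coverage = licensing_income / annual_financing_cost
print(f"financing cost ${annual_financing_cost / 1e9:.1f}bn/yr, "
      f"coverage {coverage:.2f}x")
```

At that assumed rate the entire licensing income is eaten by financing the purchase price, which is the nub of Schultz's argument.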

        The challenge with the strategic value bit is that ARM's past success is based on equal technology access for all their customers. If you move the best developers to your own in-house development projects, your customers may become unhappy. They can't compete if you curtail their access to the latest technology. But if you don't, then where is your strategic value? There is a clear conflict of interest for any buyer who wants ARM for 'strategic advantage'.

        1. bazza Silver badge

          Re: "Also, there's tidy licensing income to be collected."

Yep, I agree with all that. I always felt that an independent ARM trod a very fine line between asking too much for licences and not asking enough. With a $32 billion acquisition hanging round the business's neck, well.

Personally speaking, I think there's room for growth yet for ARM with their current business model and pricing. Apple's move, and Windows 10's availability for ARM, might be about to kick off an ARM revolution in desktops, simply because app developers are getting lots more prodding to support multiple architectures (as the OSes do / will). This would be at the expense of Intel. It'd be ironic if Softbank sold up and then missed out on that.

          1. gnasher729 Silver badge

            Re: "Also, there's tidy licensing income to be collected."

            Remember that Apple doesn't really intend to switch from Intel to ARM's processors. They are switching to Apple's processors, which are ARM compatible but currently significantly more powerful.

            1. PeeKay

              Re: "Also, there's tidy licensing income to be collected."

I would agree that Apple design their own Arm-core-based processors, but I also believe this would sting Apple, as they currently have some kind of beef with Nvidia (see:

              Apple don't like to play nice with companies that they feel aggrieved by (which is business I guess).

              1. Anonymous Coward
                Anonymous Coward

                Re: "Also, there's tidy licensing income to be collected."

                It's rumoured that Apple may be aggrieved by El Reg :-)

          2. I ain't Spartacus Gold badge

            Re: "Also, there's tidy licensing income to be collected."

            On the other hand, Softbank lost $100bn last year. And I'm pretty sure that's a pre-Covid 19 number too. Although quite a few tech companies have done quite well out of lockdown and working from home - just not, I suspect, Uber and WeWork.

            So although Softbank bought it for $32bn, they may well be willing to just add a few more numbers to their losses. And then they'll have lovely cash to invest more into the glorious future prospects of Uber and WeWork!

            Personally I'd just save the effort and burn it...

    3. This post has been deleted by its author

    4. Charlie Clark Silver badge

      Re: What is the point ?

      Nvidia's main interest might actually be to get hold of ARM's GPU team and integrate it and close down Mali. But there is a huge risk for any chipmaker thinking about buying ARM because it immediately makes the licence business less attractive for the competitors.

      Not sure if Nvidia could afford to keep ARM if Softbank is selling at anything near its purchase price so some kind of strip and flip would be the outcome.

      Softbank is probably hoping for a bidding war, but might find itself disappointed.

  2. sw guy

    GPU ?

ARM also has a GPU line.

I'll stop here; up to you to think about what could happen there...

  3. Mark192 Bronze badge

    This could make sense

In addition to the ideas in the article, with the expected increase in ARM chips, at the expense of x86, in laptop and desktop computers, Nvidia could be in early while also leveraging its graphics branding in the mobile space.

This could be massive for Nvidia. The question is whether the price Nvidia offers will be enough to solve SoftBank's current problems, and whether investors in SoftBank will see a sale at less than the original purchase price as evidence that SoftBank's piss-poor investments have effectively turned it into a giant Ponzi scheme (many of SoftBank's investment choices were made a good few years too early, so will be expensive failures).

    Nvidia has the option of waiting for SoftBank's demise and picking it up cheap but by then the talent will have left ARM.

    1. werdsmith Silver badge

      Re: This could make sense

      Talent could be leaving ARM anyway, these people who mess around with the future of companies are often oblivious to the unsettling effect on staff caused by uncertainty.

      1. jotheberlock

        Re: This could make sense

        And, y'know, the layoffs.

        1. Anonymous Coward
          Anonymous Coward

          Re: This could make sense

Actually, ARM still need to recruit around 700 people in the UK by sometime next year to meet the legally binding undertakings that SoftBank made to the UK government when it bought ARM, so there may not be any lay-offs ... well, not for another 12 months!

  4. Anonymous Coward
    Anonymous Coward

    ARM & Nvidia

    I worked at ARM in Cambridge over a decade ago. A very interesting bunch of people. Extremely "Cambridge" in its culture. Lots of people knew each other (myself included) from previous "Silicon Fen" companies. When SoftBank bought ARM a few years ago I really did wonder what exactly it thought it was buying. I'd be interested to see if there is a synergy or clash of cultures between ARM and Nvidia. At least Nvidia actually *do* stuff I guess.

    Oh well, pint for the chaps at ARM I occasionally bump into at the Cambridge Beer Festival (well, not this year due to death plague).

    1. Warm Braw Silver badge

      Re: ARM & Nvidia

      Can't help feeling it would have been a more appropriate Dominic Cummings acquisition than OneWeb for that very reason.

  5. StargateSg7 Bronze badge

A pity to ARM, Intel, AMD, IBM, Samsung and Huawei that WE here in Vancouver, Canada are the designers and manufacturer of the world's ONLY FULLY ITAR-free 128-bits wide combined CPU/GPU/DSP superchip that is on a GaAs substrate AND runs at 60 GHz for a now increased 575 TeraFLOPS of sustained 24/7/365 simultaneous 128-bit Signed and Unsigned Floating Point, Fixed Point, Integer and RGBA/YCbCrA/HSLA/CMYK pixel computation.

    A pity that we are about to CRUSH ALL CPU COMPETITION !!!!

    It's coming and IT'S INSANELY GREAT !!!!


    1. GrumpenKraut Silver badge

      > Signed and Unsigned Floating Point

      Yeah, rrrright. Funny how The Capitalization gives it away already.

      1. DarkwavePunk

        Got to admit that StargateSg7 has an amazing posting history though. Truly remarkable for those with a morbid curiosity for such things.

        1. GrumpenKraut Silver badge

          Morbid curiosity, oh yes. From his early art:

          "It's gotten SO BAD with Linux and C/C++ our company said screw this and went ahead and developed it's own Windows 10-like OS shell and remade the command line interface of Linux to PROPER ENGLISH! We remade LAMP into custom Windows 2016 server-like environment with a decent Active Directory WAN/ALN management system analogue!"


          "We also REPROGRAMMED ALL OF LINUX using a customized version of EASY-TO-READ PASCAL SOURCE CODE that is FULLY COMMENTED !!!!!!!!!!!!!!!!!!!!!!!"

For a short time I thought he was nonsensical on purpose.

          1. Anonymous Coward
            Anonymous Coward

            I'd say that I want some of what she's smoking, but on second thoughts I really don't...

        2. Anonymous Coward
          Anonymous Coward


          OK, hands up, which of the Goa'uld is going to confess to carrying out unethical embryonic gene-splicing experiments using cells from Bombastic Bob and A Man From Mars, then?

      2. StargateSg7 Bronze badge

Actually, we DO HAVE Signed and UnSigned Floating Points and Fixed Points for those times where we want 0.0-to-positive-upper-limit and/or negative-lower-limit-to-positive-upper-limit real numbers!

        These are the current custom data type definitions in our assemblers, compilers and hardware:

        SI (Signed Integers) at 4-bit, 8-bit, 16, 24, 32, 48, 64, 96 and 128-bits wide

        Example: My_Signed_Integer_Number = SI_128_Bit;

        UI (UnSigned Integers) at 4-bit, 8-bit, 16, 24, 32, 48, 64, 96 and 128-bits wide

        Example: My_UnSigned_Integer_Number = UI_128_Bit;

        SFXP (Signed Fixed Points) at 4-bit, 8-bit, 16, 24, 32, 48, 64, 96 and 128-bits wide

        where half the bits are used for integer portion and half for the fractional portion.

        Example: My_Signed_Fixed_Point_Number = SFXP_128_Bit;

        UFXP (UnSigned Fixed Points) at 4-bit, 8-bit, 16, 24, 32, 48, 64, 96 and 128-bits wide

        where half the bits are used for integer portion and half for the fractional portion.

        Example: My_UnSigned_Fixed_Point_Number = UFXP_128_Bit;

        SFLP (Signed Floating Points) at 16, 24, 32, 48, 64, 96 and 128-bits wide

        with large and small negative to positive floating point real number values.

Example: My_Signed_Floating_Point_Number = SFLP_128_Bit;

        UFLP (UnSigned Floating Points) at 16, 24, 32, 48, 64, 96 and 128-bits wide

        with large and small positive-only floating point real number values.

Example: My_UnSigned_Floating_Point_Number = UFLP_128_Bit;

        RGBA Pixel (Web and Still Photo-oriented Red, Green, Blue, Alpha components)

        at 16, 32, 64, 128 bits wide

        YCbCrA Pixel (Video-oriented Luminance, Chroma Blue, Chroma Red, Alpha components)

        at 16, 32, 64, 128 bits wide

        HSLA Pixel (Still and Video-oriented Hue, Saturation, Luminance, Alpha components)

        at 16, 32, 64, 128 bits wide

        CMYK (Print-oriented Cyan, Magenta, Yellow, Black components)

        at 16, 32, 64, 128 bits wide (Alpha is handled separately as a bitmap)

        We also have HUGE NUMBER VALUE BCD (Binary Coded Decimal strings at 4 bits per value and 8-bits per value) for physics and astronomy purposes that need HUGE numbers.
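Setting the hardware claims aside, the SFXP/UFXP layout described above (half the bits for the integer part, half for the fraction) is just a standard symmetric Q-format fixed-point encoding; a minimal sketch in Python, with hypothetical helper names:

```python
# Hypothetical SFXP-style signed fixed point: an N-bit value whose top
# N/2 bits hold the integer part and bottom N/2 bits hold the fraction.
def to_sfxp(x: float, bits: int = 32) -> int:
    frac_bits = bits // 2
    raw = round(x * (1 << frac_bits))            # scale into fixed point
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    if not lo <= raw <= hi:
        raise OverflowError("value out of range for this width")
    return raw & ((1 << bits) - 1)               # two's-complement encode

def from_sfxp(raw: int, bits: int = 32) -> float:
    frac_bits = bits // 2
    if raw >= 1 << (bits - 1):                   # sign-extend
        raw -= 1 << bits
    return raw / (1 << frac_bits)

print(from_sfxp(to_sfxp(-3.25)))  # -3.25
```

With 16 fraction bits, any multiple of 2⁻¹⁶ in range round-trips exactly; everything else is rounded to the nearest representable step.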

Built-in Hardware-based 128-bits per channel DSP such as Hi-Pass, Lo-Pass, 2D-XY/3D-XYZ SOBEL edge detection, notch filters, and other hardware-based audio/video/vision recognition-specific filters and effects, and also pixel lines to vector-spline-curve conversion and object recognition, and other application-specific hardware DSP acceleration. (We also put in AMAZING 128-bits wide high precision ultra low-noise ADC/DAC (Analog to Digital and Digital to Analog Converters) of 128 general purpose input and output channels at up to 60 GHz bandwidth! It means we can do ONBOARD 3G/4G/5G SDR Software Defined Radio!)

        We also have Extended State Boolean Type operations for Expert Systems and Neural Net A.I. development with extended Boolean logic states such as:


        We also have many Custom integer and pixel-specific Bitwise operations such as Shift Bits Left/Right, Rotate Bits, Spin Bits, Reverse Bits, Set Specific Bits, Protect Bits, XOR, OR, AND, NOT, Set-Bits-Only-If-Other-Bits-Are-ON-or-OFF, and a lot more low-level bitwise ops.

        There is hardware accelerated ANSI/ASCII and UNICODE text search and replace, multi-language wildcard-based string handling, auto-hash tag creation and VERY high quality text-speech and speech-to-text built-in!

        And of course there is extended error trapping and fail-gracefully error processing along with HUGE onboard L1 Multi-Gigabyte-sized SRAM-like cache and L2/L3 RAM caches up to the MULTI-TERABYTE+ RANGE! plus some user-settable GROUP NETWORK and LOCAL MACHINE SHARED MEMORY, EXCLUSIVE-to-APP memory and scheduled unused/unfreed memory garbage collection!

        Encryption is Anti-Quantum Computing-based (invariate, one-time-pad, multi-curved, etc) for both local RAM and shared global memory AND all read/write disk-operations which is ALWAYS ON.

        It REALLY IS the most advanced super-chip ever created at 575 TeraFLOPS SUSTAINED during 24/7/365 operations!

        We also have a TWO TERAHERTZ version (i.e. 2 THz) in current development which is GaAs (Gallium Arsenide) on Cooled Sapphire for extra speed of cooling!

        That version will be 20+ PETAFLOPS on a single chip! That means you only need 50 of them to hit ONE EXAFLOP (we already have a 119 ExaFLOP supercomputer in current operation which IS absolutely the world's FASTEST SUPERCOMPUTER PERIOD!)

        We're going for YottaFLOP Supercomputing which will TRULY HIT that long awaited A.G.S.I. (Artificial General Super Intelligence) levels because we can then do a truly molecular scale EMULATION / SIMULATION of the ENTIRE human body with all its electro-chemical interactions!

        We've been Under-the-Radar for over a DECADE NOW !!!

        And we are supported by a local maverick nerd/tech sub-billionaire who has our engineers custom research, design and build EVERYTHING IN-HOUSE so we don't have to follow patent licences since we do it all ourselves and purposefully AVOID infringing/using other companies intellectual property using ONLY Canadian designed/built/sourced personnel and technology (i.e. is FULLY ITAR-free sellable to EVERYONE without interference from any U.S., European or Asian government agency).


        1. adam 40


          Just in case it becomes true, then I will be on the bleeding edge of this new (possibly alien) tech!

    2. Anonymous Coward
      Anonymous Coward

      Gibberish tossed (word) salad.

      Thankfully pseudoengineering doesn't get the traction pseudoscience seems to get.

    3. Anonymous Coward
      Anonymous Coward

      So, stargate, how involved were you in this project?

      1. StargateSg7 Bronze badge

        I personally designed and coded the 2D-XY and 3D-XYZ SOBEL edge detection system, pixels-to-vector lines/B-splines convertor and it's real-time object recognition database library of 200,000 3D-XYZ objects, terrain, vehicles, vessels, buildings, personnel, etc. so that the four times 8192 by 4320 at 10,000 fps video frame autonomous image recognition system works JUST FINE in terms of automatically finding, recognizing, tracking and targeting up to 65,535 objects PER SECOND per camera in real time at four times 10,000 fps input video frame rates!!!! (i.e. four cameras where each camera stream is recording at 10,000 fps at DCI 8K 8192 by 4320 pixel resolution at 64-bits wide -- aka 16 bits per colour and alpha channel -- per RGBA/YCbCrA/HSLA pixel)

        I also directed the hardware engineers on HOW to convert my object-Pascal source code into hard-coded microcircuits for this section of the GPU portion of the combined CPU/GPU/DSP super-chip so ALL of this is now fully hardware accelerated for REAL-TIME operations!
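For reference, the Sobel operator itself is a standard and very simple edge detector, quite independent of any of the hardware claims here; a minimal NumPy sketch of the 2D case:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
KY = KX.T

def sobel_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude at each interior pixel (borders left at 0)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = float((patch * KX).sum())
            gy = float((patch * KY).sum())
            out[y, x] = (gx * gx + gy * gy) ** 0.5
    return out

img = np.zeros((5, 5))
img[:, 3:] = 1.0                   # vertical step edge
print(sobel_magnitude(img)[2, 2])  # 4.0 at the edge
```

Real implementations vectorise the convolution, but the per-pixel loop makes the operator's arithmetic explicit.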


        1. Anonymous Coward
          Anonymous Coward

          More gibberish.

          1. StargateSg7 Bronze badge

            Been doing this for 25+ years now, so please do tell me what's gibberish about it!

            I've designed and coded autonomous flight control systems and multi-camera vision recognition systems for HALE (High Altitude, Long Endurance) UAVs (FL3000 aka 300,000 feet) AND high orbit SPACECRAFT!

            What have YOU done?


            1. Anonymous Coward
              Anonymous Coward

              I have 30+ years in silicon design and manufacture. With a small amount of GaAs fabrication. I have never read so much delusional claptrap. Ever. You string technical words together without even a basic understanding of what they mean. As well as using incorrect terminology. Pseudoengineering.

              1. StargateSg7 Bronze badge

                I'm NOT the electrical engineer! I am the GRAPHICS / VISION SYSTEMS PROGRAMMER !!!

                However, we DO HAVE many Ph.Ds and MSc.EEs who DO KNOW what they are doing and they do take my vision systems Object Pascal source code and IMPLEMENT that into actual GaAs-based hardware! Which is WHY THEY get paid the big bucks and I live in a shoebox!

                Please do tell me WHAT SPECIFIC WORDS that I am stringing together that is claptrap. They are seemingly quite clear to me! The higher-ups don't care that the chip substrate is so large with the line etchings at 400 nm wide, ergo the size of n-device vs p-device size is basically a non-existent problem so they simply don't care about the 9:1 difference in size ratio.

                AND they are doing thin film vapour deposition of Aluminum Oxide Ceramic as the Insulating layer and with the sheer number of vacuum chambers in the facility, they simply DON'T CARE about the time it takes to do that thin film deposition! They already know the defect rate is quite high when you're doing that type of layering BUT AGAIN the processes are so automated here that they just don't care!

I've BARELY DABBLED using automated design tools to get at least SOME working circuits, so I understand the chipmaking process itself and can write source code that has the least number of operations, which SIMPLIFIES circuit layout, but of course I leave all that to the actual electrical engineers!

                Anyways, we STILL got 128-bits wide Floating Point, Signed Integer, Fixed Point and RGBA, YCbCrA, HSLA, CMYK pixel array processing at 575 TeraFLOPS sustained.

                We also have a very nice on-board ADC/DAC/DSP system independent of the main circuits which has 128 input channels and 128 output channels where EACH channel samples/outputs at up to 60 Gigasamples per second (at 128-bits wide!)

                You CAN use GaAs for general purpose CPU manufacture. It's not just used for monolithic microwave circuits anymore!

                Ergo, it's a DAMN FAST SYSTEM !!!


                1. StargateSg7 Bronze badge

                  Correction: it's the p vs n devices that has the 9:1 ratio to balance the current properly through them and not n vs p as above.

                  Anyways, i'm NOT the actual MSc.EE --- I just do graphics/visions systems that gets translated to hardware.

    4. Steve Todd Silver badge

      Shame you have no grasp of the physics involved

      A GeAs chip would have CMOS pairs 3-4 times larger at the same minimum feature size than the equivalent Si chip (because GeAs is not good for hole mobility the PMOS side needs to be 9 times larger than the NMOS, as opposed to 2:1 for Si). Even if the process technology were able to handle GeAs on the same scale as Si, the resulting chips would not have the space for the complex circuits of a modern CPU, and would lack much of the optimisations because of that.

      Even if they could reach 60GHz, they would be nowhere near 10 times the speed of current designs, and certainly they wouldn’t be 128 bits wide.
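The area argument sketches out with simple arithmetic, normalising the NMOS width to 1 and sizing the PMOS to match drive current:

```python
# Relative area of a CMOS inverter pair at the same minimum feature size,
# using the mobility-driven width ratios quoted above.
nmos = 1
si_pmos = 2     # Si: hole mobility roughly half electron mobility
gaas_pmos = 9   # GaAs: poor hole mobility, so PMOS ~9x NMOS

ratio = (nmos + gaas_pmos) / (nmos + si_pmos)
print(f"{ratio:.1f}x")  # 3.3x -- the "3-4 times larger" figure
```

This ignores wiring and layout overheads, which is why the comment gives a range rather than an exact number.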

      1. Steve Todd Silver badge

        Re: Shame you have no grasp of the physics involved

        Slight brain fart, that should be GaAs, not GeAs, otherwise all stands as is.

        1. StargateSg7 Bronze badge

          Re: Shame you have no grasp of the physics involved

          P.S. I very much KNOW the physics involved in making super-chips!

          We have MANY Ph.Ds and MSc.EEs on-hand to do our actual engineering and our superb multi-degreed materials scientists are at the FOREFRONT of their fields in terms of substrate materials engineering systems and microcircuit etching processes. Since we're now running 400 nanometre wide line traces, it's QUITE a bit easier to do than Intel/AMD/ARM's sub-22 nm CPU manufacturing processes!

          Actually, make that IT'S A HECK OF A LOT EASIER in terms of quality assurance and ease-of-manufacture when we use 400 nanometre line trace etching processes for GaAs (Gallium Arsenide) than what Intel/AMD/ARM is doing at sub-22 nm CMOS!

          Our engineers have SOLVED ALL of the main substrate doping and breakdown issues that have previously kept high-speed large-circuit GaAs chips from commercial production.

Again, these are INSANELY GREAT super-chips and are coming down the pipeline for IMMEDIATE public consumption ASAP!


      2. StargateSg7 Bronze badge

        Re: Shame you have no grasp of the physics involved

        YES! I was wondering about that GeAs notation! Germanium Arsenide is something we haven't tried yet!

        Our line traces were originally 280 nanometres wide and the chip itself is physically larger than an A4 size piece of paper BUT even at 280 nm, basic electrical stability WAS DEFINITELY an issue, so our current chips are now using 400 nanometre wide line traces, hence the huge physical size of the chip even though it is a STACKED LAYER CPU with multiple line traces/microcircuits on separate layers all running from a master 60 GHz clock.

Our circuits are also MUCH SIMPLER than CISC (Complex Instruction Set Computing) CPUs, using RISC (Reduced Instruction Set Computing) layouts. It's basically a massively-scaled GPU with very simple numeric array processing circuits, MUCH SIMPLER than any ARM or AMD or INTEL general purpose CPU. Our supporting operating system is also a very simple RTOS (Real Time Operating System) with very few items in the kernel itself. Every upper layer feature EXCEPT low level numeric and pixel array processing is an upper-level re-entrant task. Even the graphics display tasks are re-entrant (i.e. interruptible).

        GaAs can goto 2 THz (Two Terahertz) when the lower substrate is on cooled Sapphire plate which is what our future versions will be running on.

        YES! WE ACTUALLY ARE at 128 bits wide for all our Floating Point, Fixed Point, Integer and Pixel Processing AND the current version's general purpose ADC/DAC/DSP system uses 128-bits per sample PER CHANNEL for all 128 channel inputs and 128-channel outputs SIMULTANEOUSLY at a full 60 GHz clock rate which is WHY we need TERABYTES of sampling memory. We can do MULTIPLE operations per clock tick which is WHY we can do 60 BILLION 128-bit SAMPLES PER SECOND !!!

We normally Nyquist resample the 128-bits wide ADC samples down to 64-bits per sample as a natural form of antialiasing before we present to the upper level signals processor API, BUT the user can specify receiving the original 128-bits wide ADC sample. When you're sampling 128 channels of incoming analog signals at 60 Gigasamples Per Second at 128-bits wide each sample and simultaneously OUTPUTTING 128 channels of 60 Gigasamples of 128-bits wide, the memory requirements ARE STUPENDOUS !!! It's the WORLD'S BEST analogue signal sampler and DSP EVER CREATED, usable for audio, video, realtime metadata AND SDR (Software Defined Radio)

We'll be introducing it publicly soon enough. We are in the midst of doing a FULL PRODUCTION RUN FIRST so when we do introduce the combined CPU/GPU/DSP super-chip, we will ALREADY have a few BILLION of them on hand RIGHT AWAY IN STOCK in Vancouver, Canada for the general public to purchase right away !!!

        Price-wise, it's been all over the map during internal discussions BUT BECAUSE we design and build EVERYTHING in-house, we are STILL making a great profit margin if we sell the superchips at $1111 U.S. (about 957 Euros!) with PST/GST/VAT taxes extra.

        INTEL, ARM, AMD, NVIDIA, IBM, TI, SAMSUNG, HUAWEI are basically TOAST in terms of CPU, GPU and DSP chip making once these superchips come out on public sale!

        Since there is MANY TERABYTES of local main system RAM and more than a few Multi-Terabytes of flash-like long term non-volatile storage built-in (i.e. a built-in flash-memory-like hard drive), these chips are entire computers in themselves that only need a power supply, video display lines (i.e. HDMI/DisplayPort/Lightning ports) and HID (Mouse, keyboard, touchscreen, RJ-45 or Optical network ports) peripherals and devices to work right out of the box!

        These are INSANELY GREAT systems

        and they are

        Coming Soon To A Retailer Near You!


      3. StargateSg7 Bronze badge

        Re: Shame you have no grasp of the physics involved

        I think the engineers have addressed every issue you have espoused including getting defects out of insulating layers, large line widths and spacing in-between circuit traces to allow for increased current and PROBABLY to get the insulating layers to "stick" to the substrate properly mitigating defects as much as possible.

        This chip is larger than an A4 size sheet of paper! They're literally using super-smoothed bathroom tile as the base layer with a built-in micro-channel-based liquid-cooling heat-wicking system in it and etching/layering GaAs circuits into that! And they're 3D stacking those "tiles" to form the MLCM box (Multi-Layered Chip Module)

        It is what it is! The Materials Scientists and Electrical Engineers are the Genius eggheads -- I just do Graphics -- BUT -- I do know it's coming out a LOT SOONER than most people think!



  6. Greybearded old scrote

    Worried about it being NVidia

    They don't play by GPL rules on their GPU drivers, a point that has always given me grief keeping an NVidia graphics card working on Linux. (Oh, the kernel's updated. No acceleration for you until you've done the non-free driver dance. (Again!)) See Linus' opinion from his 'tell it how I see it' days.

    So I worry that non-free code will be required to use Arm processors at some point in the future.

    1. Anonymous Coward
      Anonymous Coward

      Re: Worried about it being NVidia

      > They don't play by GPL rules on their GPU drivers,

      Their drivers don't contain any GPL code, so they don't have to follow any GPL rules.

      1. Greybearded old scrote

        Re: Worried about it being NVidia

        They don't have to, but it means that changes to the kernel ABI bite them now and then. The drivers that are free code get updated as part of the kernel development process.

        My experience could be out of date mind, I had enough trouble that I've just avoided their products for many years now.

        My conclusion is still the same, they might not support free as in freedom on ARM, because they never have on their existing product line.

    2. Anonymous Coward
      Anonymous Coward

      Re: Worried about it being NVidia

      Which distribution do you use? I've been fiddling with linux off and on for about 10 years (and I'm NOT an expert, just a user) and originally AMD cards just didn't work at all. So I used NVidia, and the closed-source driver.

      All I can say is that on Mint I've never had to manually reinstall it.

      1. Greybearded old scrote

        Re: Worried about it being NVidia

        Mint is nice, but I've mostly used Debian. I've tried its derivatives now and again, but there's always something that drives me back again. Afterwards I can never remember what it was.

        It's about 10 years since I last tried NVidia. They ran out of forgiveness tokens from me.

        But again, I favour free code and they don't. So I'm concerned.

      2. Jim-234

        Re: Worried about it being NVidia

        If you use the Full Disk Encryption option on Linux Mint installs, after any kind of change, you'll often be greeted with a black screen and nothing else thanks to Nvidia drivers or lack of the ones it wants. Especially in the RTX series.

  7. Anonymous Coward
    Anonymous Coward

    Waiting for RISC-V

I am waiting for nice RISC-V boards. I have played with the slow 100 MHz single-core sample ones and they are quite nice to program. It will take somebody with deep pockets and a large customer base to turn these out in volume though. Maybe Alibaba....

  8. Lee D Silver badge

    Merging of CPU/GPU looks to be the only real reason.

    AMD is basically ATI/AMD.

    nVidia is out there, usually paired with Intel.

    Apple is incorporating everything onto an ARM chip and abandoning Intel.

    Seems like nVidia/ARM could well be a very powerful combination, bringing proper GPUs to computers and well-established ARM to the fore.

    I doubt they can afford it or it would work, but nVidia/ARM (strong GPU, strong CPU) against Intel (rubbish GPU, strong CPU) and AMD (strong GPU, strong CPU) seems to be about the only thing to stop them eventually becoming irrelevant - especially in the mobile/tablet area which they only dabble in (nVidia Shield being their most successful?).

And I say that as someone who just bought an nVidia/Intel gaming laptop that I'm in love with.

    1. Zippy´s Sausage Factory

      Yes but don't Apple hate nVidia, after the debacle a few years back when nVidia chips were going wrong and they kept blaming Apple?

      It makes me think Apple might wake up and make an offer for ARM now.

      That said, they may not want the hassle of so doing, of course - it might never get approved under anti-trust laws, there would be licensing deals, Android slingers would be less willing to trust ARM if Apple owned them, etc.

      Still, I fully intend to sit back, open some popcorn, and enjoy the show...

  9. Anonymous Coward
    Anonymous Coward


    Why do I feel a strange disturbance, as if Microsith may be slowly edging forward, about to make a move?

    Of late, they've lured many and varied things into their Sarlacc trap, Nokia, LinkedIn, GitHub, there's even the very surreal experience (must be the effect of the digestive neurotoxins) of there being official deb packages for certain Micros~1-ware now…

    They certainly have the money in pocket change, it would add another block to their defensive moat, and it would allow them to continue to feed off other parts of the IT universe.

    Of these two possible outcomes, I think I'd rather Apple bought ARM. It would be pleasantly amusing if the roles reversed and then much of the IT industry had to work in collaboration with Apple; hopefully those there who are involved with open source and who remember 70s hippy California would ensure that they wouldn't do anything too evil with it…?


Biting the hand that feeds IT © 1998–2020