Arm at 30: From Cambridge to the world, one plucky British startup changed everything

British chip designer Arm turned 30 last Friday. This is an auspicious occasion. The microprocessor technology drafted by this Cambridge-headquartered outfit forms the basis of nearly all smartphones and tablets in circulation, dominates the Internet-of-Things and embedded electronics space, and is appearing in an increasing …

  1. A Non e-mouse Silver badge
    Pint

Beer to those at Acorn who took a massive leap of faith.

    1. Fruit and Nutcase Silver badge

      Out of Acorn has grown a mighty oak

      Just keep the dry rot at bay

    2. Anonymous Coward
      Thumb Up

      A small BBC Model B diversion, courtesy of Intel's oddness and the Internet's genius

      Apropos of very little, but here's an absolutely fantastic piece of reverse engineering (that I had nothing to do with) of the Intel 8271 - the FDD controller used in the BBC B.

    At the time we all thought it was an odd choice and very expensive. And it was on both counts; but it was also a dual-core general-purpose CPU with an event-driven task dispatcher and yield-based API built in, and utterly unlike anything seen before or after at Intel. Deeply odd.

      Anyway - this makes for an excellent afternoon read:

      https://scarybeastsecurity.blogspot.com/2020/11/reverse-engineering-forgotten-1970s.html

      1. John Smith 19 Gold badge
        Unhappy

        Anyway - this makes for an excellent afternoon read:

        Indeed it did.

        Just astonishing. Two processors and a Java-style event execution model (in hardware) in the late 70s. You've impressed me.

        Nothing to do with Arm though.

        1. Anonymous Coward
          Anonymous Coward

          Re: Anyway - this makes for an excellent afternoon read:

          Agreed! Nothing at all.

          1. druck Silver badge

            Re: Anyway - this makes for an excellent afternoon read:

            You say that, but it was Acorn's aim that the ARM processor and its supporting chip family (IOC, MEMC and VIDC) would eliminate the need for costly third-party chips such as the 8271, and enable them to own the entire hardware ecosystem of the new 32-bit computers. In 1995 Acorn combined all those chips in the ARM7500 SoC, and that's pretty much what Apple has done with the M1 SoC, giving them complete control over the new Mac ecosystem.

            1. druck Silver badge

              Re: Anyway - this makes for an excellent afternoon read:

              I forgot that Acorn created their first SoC back in 1992 with the ARM250 for the A3010 computer, 3 years before the ARM7500.

    3. MOV r0,r0

      Some say there was an obligation to jump courtesy of the company accountants! Props to Malcolm Bird for the first pass of the business plan and Sir Robin Saxby for the second - and for implementing it so well.

  2. Anonymous Coward
    Anonymous Coward

    Excellent

    Lovely piece on a great company. Let's hope it stays that way.

  3. anthonyhegedus Silver badge

    I wonder how ‘reduced’ the instruction set is now on an M1 chip compared to mainstream processors of back in the day. From 25,000 transistors to 20 billion transistors is just an amazing amount of progress.

    1. A Non e-mouse Silver badge

      The difference between RISC & CISC chips has blurred enormously. I don't think any chip that started as "RISC" is really RISC any more (well, not in the original ideals of what a RISC architecture was, anyway), and CISC chips have borrowed ideas from RISC too.

      1. Michael Wojcik Silver badge

        FWIW, Phil Hester - one of the principal designers of the IBM ROMP, the first commercial RISC CPU - was already redefining "RISC" to fit CPUs with larger instruction sets back in 1990, for the first generation of RIOS / POWER chips. So even some CPU designers had discarded the idea that RISC meant a small instruction set, three decades ago.

        And that was only 15 years since the IBM 801 project started. So one could argue that the "original ideals" of RISC architecture only lasted for a third of the history of RISC to date.

        For the RIOS, Hester redefined RISC to mean something like "minimizing the cycle time / path length product" (I don't have the article I'm thinking of handy - it's at the other house). The RIOS architecture was based around an orthogonal, consistent load/store instruction set with predictable cycle counts so deep pipelines and speculative execution could be used without incurring a lot of stall penalties. It still follows the basic RISC premise of optimizing for the CPU and letting compilers implement higher-level functionality, rather than optimizing for people writing assembly by hand (as with CISC CPUs like the VAX, with its array of fancy opcodes - which were fun to use but not so practical for working in high-level languages).
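A back-of-envelope sketch of the "cycle time / path length product" framing described above, with entirely invented numbers (not taken from the RIOS or any real CPU): total execution time is instructions executed × cycles per instruction × cycle time, so a "reduced" design can come out ahead even while executing more instructions.

```python
# Illustrative sketch only: the figures below are made up to show the
# shape of the tradeoff, not measurements of any actual processor.
def exec_time_ns(path_length, cpi, cycle_ns):
    """Total time = instructions executed * cycles/instruction * cycle time."""
    return path_length * cpi * cycle_ns

# Hypothetical CISC: fewer, fatter instructions, but more cycles each
# and a slower clock. Hypothetical RISC: ~30% more instructions, but a
# predictable, short cycle count and a faster clock.
cisc = exec_time_ns(path_length=1_000_000, cpi=4.0, cycle_ns=10)
risc = exec_time_ns(path_length=1_300_000, cpi=1.2, cycle_ns=5)
print(risc < cisc)  # True: the RISC wins despite the longer path length
```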

    2. Anonymous Coward
      Anonymous Coward

      Going from 1 core to 4, 6 or 8 soon adds up, even more when a large chunk of each core's transistors probably goes on a hardware multiplier and floating-point unit.

    3. Torben Mogensen

      You can't use the number of transistors to measure RISC vs. CISC. The majority of transistors in modern CPUs are used for cache, branch prediction, and other things that don't depend on the size or complexity of the instruction set.

    4. StrangerHereMyself Silver badge

      Most of those 20 billion transistors make up cache memory. The other parts are mainly subsystems such as the MMU, bus control and peripherals. There are also many CPU cores on the die and not a single one as in the ARM1.

    5. DS999 Silver badge

      Depends on what you mean by "reduced"

      Some people think it means "smaller number of instructions" which was never necessarily true and isn't true of any modern CPU (since they include so many specialized vector instructions)

      The idea in RISC was to reduce the complexity of the instruction set, to make decode easier and allow it to run faster. So instead of having an instruction that would, say, increment a memory location at an address modified by an index, you'd add the index to the address to get the address you wanted to operate on, load that into a register, add one to that register, then store it. For something like that it seems like RISC needs a lot more instructions for the same operation, but for most things the difference is small. Due to various features of ARMv8, Mac ARMv8 code is actually SMALLER than Mac Intel x64 code!
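The memory-increment example above can be sketched as a toy interpreter (register names and the mnemonics in the comments are invented for illustration, not real ARM encoding): one CISC-style "increment mem[base+index]" becomes four simple RISC-style steps.

```python
# Toy sketch of the RISC decomposition described in the comment above.
def risc_increment(mem, regs, base, index):
    """Equivalent of a CISC 'increment mem[base+index]' as four simple ops."""
    regs["r0"] = base + index        # ADD r0, base, index  ; form the address
    regs["r1"] = mem[regs["r0"]]     # LDR r1, [r0]         ; load
    regs["r1"] = regs["r1"] + 1      # ADD r1, r1, #1       ; increment
    mem[regs["r0"]] = regs["r1"]     # STR r1, [r0]         ; store

mem, regs = {100: 41}, {}
risc_increment(mem, regs, base=96, index=4)
print(mem[100])  # 42
```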

      The other side of making decode simple was to have fixed length instructions. On Intel x86 you can have anywhere from a single byte to a dozen or so be an "instruction". On ARMv8 all instructions are 32 bits. As you can imagine, decoding an "instruction" is a lot harder if you don't know how many bytes it contains until you've already begun decoding the first part!

      1. Torben Mogensen

        Re: Depends on what you mean by "reduced"

        "As you can imagine, decoding an "instruction" is a lot harder if you don't know how many bytes it contains until you've already begun decoding the first part!"

        Even worse, you can't begin decoding the next instruction until you have done a substantial part of the decoding of the current instruction (to determine its size). Decoding the next N instructions in parallel is easy if they are all the same size, but difficult if they are not. You basically have to assume that every byte boundary can be the start of an instruction and start decoding at all of them, throwing away a lot of work when you later discover that these were not actual instruction starts. This costs a lot of energy, which is a limiting factor in CPUs, and getting more so over time.

        You CAN design multi-length instructions without this problem, for example by letting each 32-bit word hold either two 16-bit instructions or a single 32-bit instruction, so you can decode at every 32-bit boundary in parallel. But this is not the case for x86, which has grown by bits and pieces over time, so it is a complete mess and you need to do speculative decoding, most of which is discarded.
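The decode difference discussed above can be sketched with a toy encoding (entirely made up: here the first byte of each "instruction" gives its total length in bytes). With fixed-width instructions every start offset is known up front, so N decoders could work in parallel; with variable widths each start is only known after inspecting the previous instruction's length field.

```python
# Toy sketch of the fixed- vs variable-width decode problem. The
# encoding is invented: byte 0 of each instruction is its length.
def decode_fixed(code, width=4):
    """Fixed-width: all start offsets are known up front, so the chunks
    could be handed to N decoders in parallel."""
    return [code[i:i + width] for i in range(0, len(code), width)]

def decode_variable(code):
    """Variable-width: each start offset depends on (partially) decoding
    the previous instruction - inherently serial."""
    insns, i = [], 0
    while i < len(code):
        n = code[i]                  # must read this byte to find the size
        insns.append(code[i:i + n])
        i += n
    return insns

stream = bytes([2, 7, 3, 1, 1, 2, 9, 1])
print(len(decode_fixed(stream)))     # 2 (two 4-byte chunks)
print(len(decode_variable(stream)))  # 4 (lengths 2, 3, 2, 1, found serially)
```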

      2. Majikthise

        Re: Depends on what you mean by "reduced"

        As a greybeard FORTRAN wrangler explained to me, way back when...

        The *real* point of RISC was that it worked round the memory bandwidth problem.

        Processor speed and memory size were increasing exponentially but memory bandwidth could never keep up. A simulation which ran acceptably fast in 1986 could, by 1988, run on a machine with 2x clock speed and 2x memory, so one might innocently assume that by '88 a model 2x as detailed could complete in the same amount of time, yes? Except no, because the memory bus would also need to be 2x faster (that's bandwidth AND latency). You'd be doing well if your new system's memory bus was any faster at all than a couple of years ago; 20% improvement was quite something. So a CISC op directly on memory might have been: 20 cycles waiting for the read, one cycle processing, 20 cycles waiting for the write. Before long it would be 40 cycles read, one compute, 40 cycles write...

        In RISC architectures, logic / maths ops are all register to register, with separate load / store ops to transfer between registers and memory. RISC processors did indeed decode instructions faster, but by removing the fancy addressing modes and building in many more (usually 16) registers, efficient code could read from memory into registers - and do useful work while waiting - then run a sequence of register-to-register ops on what it just read in, finally writing out the result while getting on with something else again, which kept processor and memory as busy as possible.

        Unfortunately for a generation of assembly programmers, interleaving the ops effectively and reliably was much harder than on CISC. My mid-80s CS degree used M68K and my first job was VAX assembly; both had easy-to-use ops with all sorts of convenient addressing modes and both were clearly Not The Future. To use RISC properly, you need optimising compilers to munge FORTRAN (all the compute bods are ever bothered about) into performant code.

        1. Torben Mogensen

          Re: Depends on what you mean by "reduced"

          "The *real* point of RISC was that it worked round the memory bandwidth problem."

          That too, but mostly the load-store architecture prevented a single instruction from generating multiple TLB lookups and multiple page faults. On a Vax, a single instruction could (IIRC) touch up to four unrelated addresses, which each could require a TLB lookup and each cause a page fault. In this respect x86 isn't all bad, as most instructions only touch one address each (though they may both load from and store to this address).

          On the original ARM, a load/store multiple registers could cross a page boundary, which actually caused faulty behaviour on early models.

          A load-store architecture requires more registers, which is why ARM had 16 registers from the start, which x86 only got in the 64-bit version. In retrospect, letting one register double as the PC (a trick they got from the PDP-11) was probably a mistake, as it made the pipeline visible, which caused complications when the pipeline was lengthened (as it was in the StrongARM).

          1. Majikthise

            Re: Depends on what you mean by "reduced"

            The x86 CISC architecture is effectively byte code interpreted by the "Core" RISC processor under the covers, so I'm sure Intel put in sufficient physical registers to cope. :-)

            The important point is that RISC was the right development for several reasons, not just the "fast decoding" meme which seems to dominate "Why RISC?" explanations. The acronym is usually read to mean a reduced set of instructions - and while designs like RISC-V are minimal (noting Dave Patterson's idea that you can tell it's RISC when a booklet of opcodes requires no staple), one can also read it as a set of reduced instructions - which become practical to use when optimisers can emit efficient code.

  4. macjules

    Err

    No mention of Sophie Wilson?

    1. diodesign (Written by Reg staff) Silver badge

      Sophie Wilson

      Click the second link in the article, and you'll have a nice surprise.

      Also, the article's mainly about Arm the company (from 1990), not the original Arm team. Sophie, IIRC, remained at Acorn all the way to the Element-14 days, working on things like Acorn Replay (tho consulted for Arm Ltd).

      Trust me, we've covered her -- see the linked-to articles.

      C.

      1. Tessier-Ashpool

        Re: Sophie Wilson

        In large part I owe my career in IT to the brilliance of Sophie or, as she was then known, Roger.

        In the early 80s, there were a ton of IT books available encouraging people to learn how to program. Among those was a superbly crafted manual on how to write assembler for the 6502, and another that contained a detailed disassembly of the BBC BASIC ROM. It was awe-inspiring stuff, seeing how so much functionality had been squeezed into 16KB. The last 5 bytes of the ROM were devoted to the character codes for 'R', 'O', 'G', 'E', 'R' (I forget the exact casing - it's been 40 years!).

        I'd have never been able to defeat the copy protection on Elite without this grounding!

  5. AlanSh

    I remember installing a Microvax at Acorn back in 1986 using 70 floppies to load the O/S. They had a great culture, even back then.

    1. BebopWeBop
      Happy

      Crikey - 70 floppies, that was a big distribution

      1. Stoneshop

        70 floppies, that was a big distribution

        400k per disk, so you were looking at 28MB (max) but probably more like 25MB, though that would still expand a bit because of compression. It'd take about half of an RD53.

        Jockeying 70 floppies might still have been faster than loading VMS off a "my, those glaciers sure are frisky in comparison" TK50, but likely a little more cumbersome. Because once you finally had the console incantation right you could then sit back with a cuppa. Or three.

        1. AlanSh

          Re: 70 floppies, that was a big distribution

          I sat back for a while - the 42nd floppy failed and I had to go back to Newmarket and get another distribution set. But they were very good about it all.

          No option for TK50 then.

          Alan

      2. Gene Cash Silver badge

        I remember a roommate installing Linux (a version before 0.99pl13 but I don't remember which) and X11 with a similar size stack of floppies.

        1. J.G.Harston Silver badge

          I've still got my 32-disk set of Windows-95(Chinese). That was a three-cuppa install. ;)

    2. oknop

      VMS came on 40 RX50 floppies and an additional 10 for the mandatory update.

      All floppies were read twice. This was because of the verify pass.

      Oswald

  6. Dabooka

    Doesn't time fly

    I recall asking my teacher at school about what RISC was all about when I first read about these. Best guess it would have been around the 1990 mark and probably through one of the many PC mags which littered the house at that time.

    I guess it is inevitable in this day and age for a company such as Arm to be used and abused by global tech and venture capitalists, but still such a shame.

  7. Dr Fidget

    British?

    Maybe it was a British startup and maybe it's mostly based in the UK, but it's actually owned by the Japanese conglomerate SoftBank Group.

    1. werdsmith Silver badge

      Re: British?

      STFO award has been won.

      1. Greybearded old scrote Silver badge

        Re: British?

        Also the Jingoistic Pillock Award.

    2. Greybearded old scrote Silver badge

      Re: British?

      Formally British, yes. That company founder who thinks he should still have a say shouldn't have sold it, if that's how he feels.

      1. Anonymous Coward
        Anonymous Coward

        Re: British?

        Bit like Cadbury's then?

        Sorry, bit hungry at the mo!

    3. Adam Foxton
      FAIL

      Re: British?

      You've also won the Gold Medal of RTFA.

    4. Anonymous Coward
      Anonymous Coward

      Re: British?

      Irrelevant - British company / British DNA / British headquarters. Who cares who is gambling the stock on the markets at the current time? That said, perhaps NVidia will provide technical direction / input - but not Soft*ank.

      1. Michael Wojcik Silver badge

        Re: British?

        I agree with the sentiment, but does this metaphorical use of "DNA" mean anything? Arm is certainly British in various senses, both historically and currently. The article does a fine job of describing several of those. Could we just omit the weird, nonspecific, vaguely-metaphorically-ethnocentric DNA metaphors?

        This figurative use of "DNA" has become very popular in the last few years, and as far as I can see it's just a substitute for the equally non-substantive term "soul", which presumably fell out of favor due to its religious connotations. References to the "DNA" of a company (or product, or anything else that's not an organism) just mean "I'm going to ascribe some vague set of qualities to this thing, but I can't be bothered to enumerate them or justify that claim in any way". It's even worse than the execrable misuse of "powered" as in "Powered by Intel".

        There, that's my usage rant for today.

    5. diodesign (Written by Reg staff) Silver badge

      Re: British?

      It's still headquartered in Cambridge, UK. It's acknowledged in the piece that it's owned by Japan's Softbank.

      FWIW, an Arm PR once punched me in the arm - how apt - after we called Arm a Japanese chip designer in an opening sentence in a Register story. That jab didn't lead to us calling Arm a British company this week, but it reinforces my feeling that we sufficiently made the point of its foreign ownership.

      Arm was created in Britain, bankrolled by non-British entities, now owned by a non-British entity, but still headquartered in the same city it grew up in. It's British by nature, Japanese owned.

      Basically, we didn't say Arm is British-owned. We said it's British. And that's something we decided ourselves.

      C.

  8. werdsmith Silver badge

    I had always assumed they were a Science Park company, but a couple of weeks ago I went into Cambridge on Fulbourn Road and was surprised to find them there. I was a little bit in awe.

  9. Torben Mogensen

    Who killed MIPS?

    The article states that Arm killed off its RISC rival MIPS. I do not believe this to be true. IMO, it was Intel's Itanium project that killed MIPS: Silicon Graphics, which at the time had the rights to MIPS, stopped development of it to join the Itanium bandwagon, long before any hardware was available. Hewlett-Packard (which had their own PA-RISC architecture) did the same, as did Compaq, who had recently acquired the Alpha architecture from DEC. So, effectively, Itanium killed three of the four dominant server RISC architectures (the fourth being Sun's SPARC architecture, later acquired by Oracle), and that was solely based on wildly optimistic claims about future performance made by Intel. MIPS continued to exist as an independent company for some years, but never regained its position. It was eventually open-sourced and used as the basis of some Chinese mobile-phone processors, but these were, indeed, swamped by Arm. Itanium didn't affect Arm much, except that Intel stopped producing their StrongArm (acquired from DEC) and the successor XScale.

    So, while Itanium itself was a colossal failure, it actually helped Intel gain dominance in the server market -- with x86 -- as it had eliminated potential competitors in the server market. Now, it seems Arm is beginning to make inroads on this market.

    1. Anonymous Coward
      Anonymous Coward

      Re: Who killed MIPS?

      That's an interesting take on Itanium. I'd always thought of it as a catastrophe of such magnitude that only a company the size of Intel could have survived it. But perhaps, by driving out almost all the other server-processor vendors, it was, in the long run, good for them. Intel then drove SPARC effectively (or actually: is there still a SPARC roadmap?) out of existence which you could argue was because of the dominance they achieved and so also was because of Itanium.

      The whole ARM thing is a slow game of leapfrog - x86 drove ARM out of desktop machines in the late 80s I think, later x86/x64 drove MIPS (and everyone else) out of workstation/server processors, ARM then drove MIPS and I guess a bunch of others out of device processors, ARM is now fairly likely to drive x64 out of laptop processors for sure and may well drive it out of server processors as well.

      1. Anonymous Coward
        Anonymous Coward

        Re: Who killed MIPS?

        "x86 drove ARM out of desktop machines" - IBM launched the PC in 1981, Mac was not much later, Archimedes was 1986 so it was more a case that ARM failed to drive the others out of the desktop.

          As Robert X. Cringely wrote in "Accidental Empires" (1992), "In the personal computer business today, about 85 percent of the machines sold are IBM compatible, and 15 percent are Apple Macintoshes. Sure, there are other brands - Commodore Amigas, Atari STs, and weird boxes built in England that function in ways that make sense only to English minds".

        1. Anonymous Coward
          Anonymous Coward

          Re: Who killed MIPS?

          IBM launched the PC in 1981, Mac was not much later, Archimedes was 1986 so it was more a case that ARM failed to drive the others out of the desktop.

          What I meant, really, was that it was not clear that x86 or 68k had a future in the mid 80s: everyone assumed that the future was 32-bit machines (which it was, until it was 64-bit machines) and almost certainly RISC machines (which it only sort-of was). I forget what people thought would happen to the PC world, but I guess it was 'taken over by some kind of 32-bit RISC system'. Well, 68k indeed didn't have a future, but it turned out that Intel could turn x86 into something fast enough that it drove out all the things people would have called RISC at that point, however hideous it still was.

          I forget which edition of Hennessy & Patterson it's in, but there's a bit in it where they describe the x86 with some kind of horror, but accept that despite that, it's now very quick.

        2. Dan 55 Silver badge

          Re: Who killed MIPS?

          I'm not sure if the PC made sense in anyone's mind, it was like Frankenstein's monster bolted together by DOS and Windows. The only thing it had going for it was that it got cheap once the clones came out, and once it got cheap it took off.

          The platform was like Trigger's broom so it couldn't be killed off. The Mac/ST/Amiga/weird boxes built in England eventually couldn't compete against a generic PC with decent graphics and sound expansion cards made by specialist companies who could survive thanks to the market being huge.

      2. Stoneshop

        Re: Who killed MIPS?

        x86 drove ARM out of desktop machines in the late 80s I think, later x86/x64 drove MIPS (and everyone else) out of workstation/server processors

        And AMD gave Intel a good kick in the danglies when they launched X86_64 where Intel were pushing Itanic as the 64 bit Industry Standard (harhar) arch. Didn't quite kill Itanic, but severely cut short its predicted dominance, which was already hampered by at best lukewarm uptake because of non-existent x86 compatibility.

    2. DarkwavePunk

      Re: Who killed MIPS?

      Whilst what you say is true about the MIPS server market, they were most definitely in the embedded scene with an overlap with ARM offerings. They were still considered a rival (albeit relatively small fry) during the period I spent at ARM.

      On a tangent, the article does bring back some nostalgia for the place. Pub lunches at the Robin Hood in Cherry Hinton (or Hairy Chin Town as t'was known by some) being often a highlight.

      1. dharmOS

        Re: Who killed MIPS?

        MIPS is still around. It was owned briefly by Imagination Technologies, before Imagination's near-death experience - Apple dumping their GPUs - forced the divestment of the company.

    3. Charlie Clark Silver badge

      Re: Who killed MIPS?

      Itanium was HP's last throw of the dice. The real damage to alternative architectures had been done by Microsoft's shafting of Windows NT for the DEC Alpha. The Alpha was so much better than x86 at the time that Intel really was worried.

      By the time it came to the Itanium, fabs were getting so expensive - and TSMC et al. weren't yet able to step in - that HP had no choice but to go with Intel, who managed to get enough IP out of the deal to stick in future less-x86 x86s.

      1. Anonymous Coward
        Anonymous Coward

        Re: Who killed MIPS?

        I do remember running Windows 2000 Beta on DEC Alpha, then like you said - shafted!

        DEC even had some translation software that allowed you run x86 programs on a DEC Alpha CPU -

        FX!32 I think it was called?

      2. Kristian Walsh Silver badge

        Re: Who killed MIPS?

        Odd how you blame Microsoft for Alpha's demise when Intel is clearly the villain in this story. "WinTel" was never the kind of close cartel that the Linux and Mac fan communities painted it; if it were, Microsoft would not have tried to get NT running on so many architectures. Truth was, Microsoft wanted to break its own dependency on Intel at a time when CISC looked like yesterday's technology and Intel was seen as clinging to the past while everyone else embraced the RISC future. NT launched with MIPS, Alpha and x86, then PowerPC was added when that hardware was launched later*.

        When pressure was brought to bear to avoid Alpha (and others), it was on the hardware manufacturers. NT was cross-platform and very easy to port, so Microsoft really would not care what CPUs the hardware makers were going to use, so long as they bought NT licences for that hardware when they sold it; it was Intel that had something to lose. But Intel was also the one that had real direct leverage over those hardware vendors. Companies like Compaq and HP were faced with decisions that could have effects on the pricing of key parts for their booming x86 desktop sales, and that doubt was often all that was needed to keep them on x86.

        Microsoft dropped support for NT architectures when sales no longer warranted the cost of qualification. Alpha systems did not sell in enough numbers to justify the expense of testing, qualifying and supporting a build - dropping Alpha support in NT was an effect, not a cause, of the architecture's demise. NT workstations with PowerPC also didn't make much impact, and while Motorola did take over qualification of NT updates in order to support its existing customers, that ended before NT4.

        In short, all the evidence says that the plot to kill Alpha was hatched in Santa Clara, not Redmond, as Alpha, MIPS and PPC were no threat to Microsoft's business.

        __

        * Apple's PowerPC systems could not run NT because Apple never produced a machine that was compliant with the PowerPC Reference Platform; all of the first series PowerMacs used Apple-proprietary support and I/O chips for which no public driver sources were available (actually, there were some small parts of the Mac ROMs at this time for which no source-code at all was available: the ROM image contained a couple of binary merges from known-working driver builds which could not be re-created from any archived source-code).

        1. Charlie Clark Silver badge

          Re: Who killed MIPS?

          NT was cross-platform and very easy to port

          That was the idea, but it was only really possible for low-level parts of the system and nothing that relied on MFC. This is why versions of NT for the Alpha were always late, and why DEC also invested in providing x86 support on the chip.

          Later on, things got even worse as the kernel was optimised for x86 quirks, which is why Microsoft struggled with the x86_64 transition and later with the move to ARM. It had a definite interest in supporting as few architectures as possible and Intel kept promising that the next generation of chips would be faster… But how much was really a plot and how much was just "stuff" we'll never know. In the end, a bit like VHS versus Beta or VESA local bus versus PCI, the better technology looks like it will win.

    4. Anonymous Coward
      Anonymous Coward

      Re: Who killed MIPS?

      IIRC - didn't Intel "borrow" parts from the DEC Alpha CPUs?

      1. Stoneshop

        Re: Who killed MIPS?

        IIRC - didn't Intel "borrow" parts from the DEC Alpha CPUs?

        Not really. HP got, via Compaq, a good part of DEC engineering and software development. When Itanic was taken over by Intel they got the compiler group, which makes sense as that is intimately tied to the CPU arch; fabbing became Global Foundries and hardware engineering for the most part went to AMD; several chipset subsystems for x86_64 strongly resemble those of Alpha.

        1. druck Silver badge

          Re: Who killed MIPS?

          Don't forget that the cache technology of the Alpha was combined with the ARM ISA to form the Pentium-beating (for a time) StrongARM. Intel inherited this and created the XScale range of chips, which were used in PDAs and I/O controllers, before losing interest and selling off the division to Marvell. Given that Arm chips are now encroaching on Intel's data centre markets and Intel has lost Apple's custom to ARM, they will have plenty of time to regret that.

          Incidentally, the Alpha lead designer went on to form PA Semi, which was bought by Apple initially to work on CPUs for the iPhone, and now of course the Mac range is moving to their M1 chip, which should really be called the StrongARM Mk2.

    5. Stoneshop
      Trollface

      Re: Who killed MIPS?

      and that was solely based on wildly optimistic claims about future performance made by Intel.

      I have to say it's one of the best space heaters I know.

  10. ForthIsNotDead
    Thumb Up

    Awesome!

    I just wish Inmos could have made it, too. Their product was ahead of its time.

  11. StrangerHereMyself Silver badge

    Unwritten law

    I believe the sale of ARM to Softbank and then Nvidia is a failure of the capitalist system, where shareholders are always clamoring for 'MO MONEY!'

    There had been an unwritten law that ARM should remain an independent licensing company, not a plaything of big corporate giants. The move that started all this was the listing of ARM on the LSE in the 1990s. Many shareholders were delighted that ARM's valuation rose and were keen on cashing in on the company's meteoric rise in smartphones.

    Now that the golden rule has been broken, ARM will become a plaything of corporations and will slowly become irrelevant as customers head for the exit labelled RISC-V. Considering that Nvidia is paying $40 billion for ARM, it's almost an inevitability that they will want to use it to crush their competitors, no matter what regulators do to try to prevent this.

    1. Anonymous Coward
      Pirate

      Re: Unwritten law

      I don't think this is a 'failure of the capitalist system', but it does smell worrying, I agree. I don't know much about NVIDIA, but there is an obvious nasty case where your competitor owns the IPR on which you are relying and that case could easily happen for ARM (Apple could buy NVIDIA, say, or Google could). Presumably if you've been competent your existing licenses are all fine, but the new owner of the IPR might decide that it's no longer in their interest to license new designs.

      I think (I am not a lawyer) that the answer to cases like this is anti-monopoly legislation, and the enforcing of such rather than 'capitalism doesn't work'. Unfortunately enforcing of anti-trust laws seems to be a bit unfashionable now.

      1. StrangerHereMyself Silver badge

        Re: Unwritten law

        The problem with anti-trust law is that it's too slow: by the time a case has moved through the courts, your business will have all but evaporated.

        So I don't see this as a realistic remedy. Only the independence of ARM would've been sufficient.

        BTW: I think the new name (Arm) is stupid.

  12. Binraider Silver badge

    I'd rather see Nvidia have it. It's a company that actively develops products to sell, rather than SoftBank, who exist to buy IP and force up consumer pricing (or not). RISC-V is a warning that you can't afford to alter ARM's pricing structure too much.

    M1's success and the rise of ARM server hardware suggest to me that a more general CPU change is potentially on the cards, with Nvidia having the clout to be behind such a change. Rosetta 2 shows that suitable emulation layers are not only possible but actually very good, which is all the more necessary to persuade businesses that dropping x86 is realistic.

    So long as I can get TIE fighter up on dosbox; and linux; I'll use whatever works for me!

    1. StrangerHereMyself Silver badge

      Guarantee

      I can almost guarantee you that Nvidia will push up licensing costs enormously to pay for the acquisition, since they know many licensees will not be able to switch quickly to RISC-V, and they will be hoping the momentum of the ARM business holds.

      Yes, many will jump ship, but not before NVidia has taken them to the cleaners first.

    2. mdubash

      Agree. Anything but Softbank.

      I worked for a company that was bought by Softbank. Masayoshi Son was full of - shall we say politely - unfeasibly high expectations, boasts and claims, none of which had the remotest chance of coming true. He eventually sold the company, having sucked the profit out of it by loading the purchase borrowing onto the bottom line.

      Result: nothing new got done, shareholders got a lot richer. We grunts on the ground just got frustrated and a lot of good people departed. I don't think that fundamental biz model has changed much.

  13. Stuart Halliday
    Pint

    Good times

    Is my original ARM (Advanced RISC Machines) mug and dangly label worth anything yet?

    1. J.G.Harston Silver badge

      Re: Good times

      I left my Acorn mug in Hong Kong. I can't remember the exact details, but I think they were handed out with the launch of the A5000.

      So many photos I never took. :(

      1. Yet Another Anonymous coward Silver badge

        Re: Good times

        There is an ARM baseball cap on top of a mountain in Canada that I got with my A310 and had for 20 years until it blew away.

  14. Alistair Dabbs

    When I met Steve Furber

    An opportune moment to remind readers of this interview I did with Prof Steve Furber a few years ago at a stand-up lunch while he was trying to eat his sandwiches.

  15. Nick Sticks

    It's about this time in the comments that someone needs to reminisce about their old Acorn computers.....

    I'll be that person.

    I still have my boxed BBC B, BBC Master and my Acorn Archimedes in the attic.

    Great computers and many a happy hour spent learning BBC BASIC and assembler and of course playing games.

    1. mdubash

      Ah yes, the BBC Micro. The first computer I ever bought, learnt to program on and - most importantly - learnt to play Elite on.

      Sigh...

  16. Adelio

    Arm for the UK

    I know that this ship has sailed but I just wish we could keep SOME companies as UK owned.

    We are always too eager to sell off our crown jewels.

    The UK should have a greater say in what happens to major British companies being bought externally.

    1. Anonymous Coward
      Anonymous Coward

      Re: Arm for the UK

      Brexitters will blame the EU for that.

    2. Binraider Silver badge

      Re: Arm for the UK

      I regret that as long as expenditure exceeds income, one has to sell the crown jewels (and maybe make some more) to square off the balance sheet.

      Drip-feed income for the next 50 years isn't as appealing as a $27bn shot in the arm, if you're in the brigade that believes in planning only for surviving the next election. That would be the two biggest political parties in the UK, then.

    3. MOV r0,r0

      Re: Arm for the UK

      "keep"? ARM's founding capital was 100% from overseas: Apple and VLSI (Acorn put IP and people in, not money) and even by share ownership it was largely a "foreign" company (I prefer "international") as Acorn were majority owned by Olivetti. Arm was still minority UK-owned (around 40%) when Softbank bought it.

      Semiconductors had been a global industry for a couple of decades prior to ARM's inception and big money had been global for longer - any legislation back then requiring ARM to be British-owned would have smothered it at birth and any restrictions now could impact on the future success of British innovation.

  17. nxnwest
    Facepalm

    Embedded

    There should be a special circle in hell for those who embed acronyms in acronyms. ARM almost forms an oxymoron when spelled out: "Advanced Reduced"...

  18. Michael Wojcik Silver badge

    Olivetti

    I wasn't aware of Olivetti's relationship with Acorn until I read this article. Given Olivetti's lack of success in the US with their own brand of PCs (including rebadges such as the AT&T 6300 series), and the briefness of their success in Europe, it's strange and kind of nice to think that they contributed to what's now the most successful PC CPU line of all time - counting smartphones, tablets, and the new Macs.

    The lion's share of the credit goes to Acorn and then Arm, of course, but I remember reading about the Olivetti M19 and M24 machines in magazine articles back in the day, and the thought that this is in some way their legacy too is pleasing.
