Happy birthday, ARM1. It is 35 years since Britain's Acorn RISC Machine chip sipped power for the first time

Did the reminder on your smartphone go off over the weekend? It's been 35 years since the first Arm processor was powered up. "At 1pm on April 26th 1985," recalled Prof Steve Furber, "the first ARM microprocessors arrived back from the manufacturer – VTI [VLSI Technology, Inc]. They were put straight into the development …

  1. Red Ted


    I'll drink to that.

    If I recall correctly the first one appeared to draw no power at all, and it turned out the power supply wasn't wired to the chip correctly. However it could draw enough power through inputs that were driven high to operate.

    1. Will Godfrey Silver badge

      Re: Cheers...

      What gave the game away was that it would instantly crash if all data and address lines went to zero

      1. Yet Another Anonymous coward Silver badge

        Re: Cheers...

        Call it "aggressive sleep mode" in the documentation

        1. Lee D Silver badge

          Re: Cheers...

          Spotted the engineer.

  2. monty75

    Somewhere in your phone, something is humming 'Happy Birthday to me'

    Presumably it's washing the hands on the end of its Arms

  3. Dwarf


    I recall playing this many times on a borrowed Archimedes; very impressive graphics and game performance for its day.

    1. ThomH

      Re: Zarch

      Yes, check out the Zarch optimisation project, which extends the landscape to fill the entire display and a large distance further back, all while still running speedily on an original 8MHz ARM2.

  4. whitepines

    I'd think the threat from RISC-V is lower than that from OpenPOWER, but then again the threat from both combined is more significant than from either one alone.

    Not sure why El Reg keeps only highlighting RISC-V as the One True ARM/x86 Killer, it's not the only open ISA out there, and in point of fact it's currently the one with the least amount of usable open-friendly silicon available for purchase, and also the one where most of the existing open source software simply won't run. And with those trends not changing very quickly, if at all, it's not currently a serious threat to all but the smallest ARM designs in reality.

    1. diodesign (Written by Reg staff) Silver badge

      "Not sure why El Reg keeps only highlighting RISC-V as the One True ARM/x86 Killer"

      It's not going to kill x86, let's be honest, nor is it going to outright kill Arm. It poses a threat to the latter's dominance, though.

      We mention RISC-V because: Arm's CEO once said, in a meeting in which this hack was present, that RISC-V keeps Arm "on its toes." Arm has responded to RISC-V with various licensing programs that reduce the upfront cost, and also briefly tried to smear RV with a weird attack website. That, to us, signals it's a headache for Arm.

      There are other open CPU architectures, sure, but look, OpenRISC for whatever reason didn't excite the industry nor did OpenPower.

      RISC-V is backed by Google, Nvidia, Samsung, Western Digital, and more. They are all using it in chips where they could have used Arm. That's why we mention RISC-V. And I speak as someone who is fond of all open architectures, not just RV, and had a soft spot for Arm BITD.

      "it's not currently a serious threat to all but the smallest ARM designs in reality."

      The RISC-V implementations coming out of China, at least, are Cortex-A5x or Cortex-A7x-grade, if the numbers are to be believed. SiFive's U and E-series RV implementations are not competing against "the smallest" Arm designs, either: the U-series features a quad-core 64-bit SoC (with a management CPU core) capable of running desktop Linux.


      1. whitepines

        Re: "Not sure why El Reg keeps only highlighting RISC-V as the One True ARM/x86 Killer"

        capable of running desktop Linux.

        All issues with management blobs etc. aside, this is a bit debatable IMO. Looking at the Debian archive build status * (which unfortunately is b0rked at the moment) RISC-V has a lot of software that just doesn't build. It's in the ports tree, not main Debian, and frankly with the lack of standardization on what the ISA actually does or does not include it's not going to be an easy thing to support in the larger software packages. Best case I expect it to break apart into several flavors (as ARM once did with the original Raspberry Pi -- remember Raspbian's specially built packages?), worst case it may have enough incompatible hardware in the wild not to gain traction in the major distros for many, many years.

        Most of the cited examples have one thing in common. They are all largely embedded devices (even if fairly powerful) where the vendor has control from cradle to grave, so it doesn't really matter if binaries for one implementation run on another. This is how ARM development was done for years before SBSA and similar initiatives, and I just have no appetite for that on desktop personally. I tried ARM on desktop in that timeframe, and it just got to be too much of a pain in the rear to continue -- having to compile almost everything outside of the kernel and base system components means you end up running outdated insecure software in the long run. I can apt-get / dnf anything I need from the main distros on x86, ARM, and OpenPOWER -- RISC-V is just not there yet and it has a long road to go.

        I want an open system to succeed, but I just don't see RISC-V being a viable desktop or server option without some serious re-thinking of how they approach ISA design and ecosystem maintenance. The fact that the majority of RISC-V chips by volume ship in closed, locked products is something of a reinforcement from industry of that viewpoint.


        1. diodesign (Written by Reg staff) Silver badge

          "All issues with management blobs etc. aside, this is a bit debatable IMO"

          There aren't any management blobs I'm aware of for off-the-shelf implementations: all the bootloader stuff is open-source.

          As for desktop Linux - I said it was capable of running the OS, not that it's perfected it. Here's a way to get a system running with PCIe. You can boot a terminal-level Linux on lots of available RV soft and hard cores.

          If you think I'm ignorant of RV's issues, you're mistaken, sadly. I can list a few. The dev boards right now are relatively expensive for anything greater than a microcontroller, and your best bet is a soft core on an FPGA. The extension system is dangerously close to going down the route of MIPS with lots of wacky variants. There is no common ecosystem a la Arm Linux. The ISA isn't perfect: I've written RV32/64 assembly code, so I'm aware of the awkwardness at times. Swapping endianness in a 32-bit word, for example, requires a surprising number of instructions.
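
          For a feel of the awkwardness: here's a sketch (mine, with illustrative register names) of a 32-bit byte swap using only base RV32I-style operations, modelled in Python. Once the two masks are materialised the real sequence runs to around ten instructions, versus a single rev8 with the Zbb extension.

```python
M = 0xFFFFFFFF  # 32-bit register width

def bswap32_rv32i(a0):
    """Byte-reverse a 32-bit word using only RV32I-style shifts,
    masks and ORs (no rotate instruction in the base ISA)."""
    t0 = (a0 << 24) & M           # slli t0, a0, 24  (low byte to top)
    t1 = a0 >> 24                 # srli t1, a0, 24  (top byte to bottom)
    t2 = (a0 >> 8) & 0x0000FF00   # srli + and (mask needs lui/addi to build)
    t3 = (a0 << 8) & 0x00FF0000   # slli + and (mask needs a lui to build)
    return t0 | t1 | t2 | t3      # three or instructions

print(hex(bswap32_rv32i(0x12345678)))  # 0x78563412
```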

          It's a young architecture that has various kinks to work out. However, it took Arm an age to get to desktop level, and with standards on the server side, so I'm willing to see how this specification and ecosystem grows.

          Is RISC-V going to take over the world right now? No. Could it later? Possibly.


          1. whitepines

            Re: "All issues with management blobs etc. aside, this is a bit debatable IMO"

            You're correct, for the SiFive stuff it's open source(ish?). For a while it wasn't, and I'll admit to not knowing if the underlying issue was fixed or if they just managed to hide the blob somewhere else that hasn't been looked into. If they managed to finally open source the entire firmware, that's good.

            All I'm really trying to say is that as an OpenPOWER user on desktop I can't imagine going backward to where RISC-V is at, and it frustrates me to no end that RISC-V keeps getting the spotlight when it's actually quite inferior as it stands today. I know some of this is RISC-V was first to the gate as an open ISA, but from a technical perspective it's always been a bit of a mess.

            The simple fact is, if I have $3,000 USD to spend I can either get:

            * A RISC-V system with a few PCIe Gen 2 lanes and an ARM class CPU that can't run a lot of my software without fiddling around with it, plus doesn't have any real distro support and (one of the main reasons I won't touch it) could become nearly worthless with a future update to the ISA

            * An OpenPOWER desktop system with near-x86 class cores and a bunch of PCIe Gen 4 I/O, that runs standard Linux distros and the vast majority of existing Linux software out of the box, and has an established backward compatibility track record. And uses a standard form factor to boot.

            Both are open ISA, yet one seems on paper to be an objectively better choice. Given the options, why doesn't OpenPOWER get the same interest as RISC-V at this point? I'm genuinely confused...

            1. bazza Silver badge

              Re: "All issues with management blobs etc. aside, this is a bit debatable IMO"

              I’m a fan of all things POWER too, though currently, sadly, not to the point of having an OpenPower desktop.

              I think that the reason RiscV has garnered so much attention is that it's an academic project, and has always been free. I think that the university background has lent it an air of newness and cool, which helps it get noticed regardless of whether or not it's any good. I've certainly noticed an air of "oh that's interesting" amongst fellow engineers, all of whom have nonetheless diligently avoided it when choosing a part for an actual design job.

              OpenPower has previously cost money to join the club and actually get the specs, though not nowadays, and has a big corporate background of possibly the dullest origins imaginable, IBM. That's not helpful in communicating its coolness, which is a pity because these OpenPower desktops are pretty phenomenally amazing. Open source all the way down to the PCB and silicon masks. More or less.

              POWER doesn’t have the same micro to CPU scalability as RiscV but I think that’s largely a distraction. The history of embedded electronics is one of a march from the very primitive (PICs?) towards embedded Linux. I’ve seen some ludicrous things in my time; eg a microcontroller running a drinks machine interfaced to a Raspberry Pi stumping up a web interface. The ludicrous part was that an ARM SBC could easily have run the drinks machine too, saving the hassle of integrating it with a microcontroller, and benefiting from the tighter coupling between hardware and web interface. Anyway my point is that RiscV microcontrollers are probably just not a worthwhile aspect of the family.

              Another aspect is that I don’t think that people really care much about open source chips, they’re much more interested in cheap chips. And ARM based chips, even though the manufacturers pay ARM a license fee, are cheap enough. That’s where ARM innovated the most - make it cheap, wait, don’t gouge the market more than it will bear.

              1. don't you hate it when you lose your account

                I have no issues with your knowledge but

                My ZX Spectrum is way better than your Open Source cpu

          2. Joe Montana

            Re: "All issues with management blobs etc. aside, this is a bit debatable IMO"

            It didn't take ARM an age to get to desktop level, it took them an age to get back.

            The earliest ARM chips were used in desktops, and those machines were more than performance competitive with the common x86 and m68k designs of the time.

  5. Martin an gof Silver badge

    My Risc PC still gets powered up every day

    It does have a StrongARM but it started life with an ARM610. One day it'll retire, but not just yet.

    Just playing with my first ARM-based "Arduino"-alike. The 48MHz Cortex M0 it uses could be compared with the 30MHz 610 originally in the RiscPC, though it uses a different instruction set (those condition codes were such a revelation) and would probably beat it in a head-to-head, though 32k RAM doesn't even come close :-)


    1. Soruk

      Re: My Risc PC still gets powered up every day

      A RiscPC610 is also my mum's daily driver, and after a recent HDD failure it runs off an SD card, blisteringly fast. It's mainly used for PMS, a music typesetting program that was later released as PMW for Unix/Linux and open-sourced.

    2. steelpillow Silver badge

      Re: My Risc PC still gets powered up every day

      I still have my working RISC PC but TBH I moved across to Linux/x386 because things like network interfaces, decent graphics and software upgrades were so darn expensive and getting hard to find. Also, killer apps like Sibelius and Xara had moved across.

      Still, with RISC OS now available open-source, maybe one day I will have a decent RISC V/RISC OS workstation (with unpwnable OS in ROM) alongside my AMD64/Devuan.

  6. Will Godfrey Silver badge

    Thanks for this.

    I went along for the ride right from the (BBC B) start, and now have a Raspberry Pi with RISC OS installed on it.

    1. Tom 7

      Re: Thanks for this.

      RISC-OS is very impressive - even on the original Pi. Shame about the ecosystem.

      1. Sorry that handle is already taken. Silver badge

        Re: Thanks for this.

        Shame about the ecosystem
        Simsuty 2000 was available for RiscOS, what more do you need?!

        1. Sorry that handle is already taken. Silver badge

          Re: Thanks for this.

          Uhh... SimCity 2000? No idea how I managed that. SimCity 2000.

          1. diodesign (Written by Reg staff) Silver badge

            Re: Re: Thanks for this.

            SimCity 2000 for Acorn/RISC OS has the best music of all the ports. It just blows everything else out of the water.


            1. Sorry that handle is already taken. Silver badge

              Re: Thanks for this.

              I spent a lot of time on it in my school's Acorn lab but I think I missed out on the music.

              Yes, we had Acorns in Australia!

      2. werdsmith Silver badge

        Re: Thanks for this.

        RISC-OS is very impressive - even on the original Pi. Shame about the ecosystem.

        Runs very well on the original Pi with its single core. Runs well on later Pis (not sure about latest) but still only uses one core.

    2. Dan 55 Silver badge
  7. This post has been deleted by its author

  8. Malcolm Weir Silver badge

    Anyone know if Furber's brain-mimicking thing has a future, given that the Human Brain Project was an EU thing, and obviously Manchester isn't in the EU anymore?

  9. Phil Endecott


    Thanks for sharing the ARM @ Apple video, it has brought back some memories.

    1. diodesign (Written by Reg staff) Silver badge

      Re: Video

      The first guy presenting, Mike, is Mike Muller – co-founder and, up until very recently, chief technology officer of Arm. He collared me mid-pint at an event in Silicon Valley a couple of years back, and yes, it was about a flippant Register headline about Arm. At least he saw the funny side of it.


      1. Phil Endecott

        Re: Video

        And the second guy is Al Thomas (I hope the surname is right), who very sadly took his own life probably not very long after that event.

  10. Maximus Decimus Meridius

    Douglas Adams saw the future

    "Incidentally, the first ARM1 chips required so little power, when the first one from the factory was plugged into the development system to test it, the microprocessor immediately sprung to life by drawing current from the IO interface – before its own power supply could be properly connected."


    "And to this end they built themselves a stupendous super-computer which was so amazingly intelligent that even before its data banks had been connected up it had started from I think therefore I am and got as far as deducing the existence of rice pudding and income tax before anyone managed to turn it off."

  11. Torben Mogensen

    Sophie Wilson post about early ARM history from 1988

    I saved a USENET post that Sophie Wilson made in November 1988 about the early ARM days. Since USENET is publicly archived, I don't think there are any IP issues with showing this. I think many of you will find the following interesting:


    Newsgroups: comp.arch

    Subject: Some facts about the Acorn RISC Machine

    Keywords: Acorn RISC ARM

    Message-ID: <543@acorn.UUCP>

    Date: 2 Nov 88 18:03:47 GMT

    Sender: andy@acorn.UUCP

    Lines: 186

    There have now been enough partially correct postings about the Acorn RISC Machine (ARM) to justify semi-official comment.

    ARM is a key member of a 4 chip set designed by Acorn, beginning in 1984, to make a low cost, high performance personal computer. Our slogan was/is "MIPs for the masses". The casting vote in each design decision was to make the final computer economic.

    The chips are (1) ARM: a 32 bit RISC Microprocessor; (2) MEMC: a MMU and DRAM/ROM controller; (3) VIDC: a video CRTC with on chip DACs and sound; and (4) IOC: a chip containing I/O bus and interrupt control logic, real time clocks, serial keyboard link, etc.

    The first ARM (that referred to by David Chase @ Menlo Park) was designed at Acorn and built using VLSI Technology Inc's (VTI) 3 micron double level metal CMOS process using full custom techniques; samples, working first time, were obtained on 26th April 1985. The target clock was 4MHz, but it ran at 8. The timings that David gives are for the ARM Evaluation System, where ARM was run at 3.3MHz and 6.6MHz (20/3) for initial and page-mode DRAM cycles, respectively. The ARM comprises 24,000 transistors (circa 8,000 gates). Every instruction is conditional, but there are neither delayed loads/stores nor delayed branches (sorry, Martin Hanley). Call is via Branch and Link (same timing as Branch). All instructions are abortable, to support virtual memory.

    The first VIDC was obtained on 22nd Oct 1985, the first MEMC on 25th Feb 1986, and the first IOC 30th Apr 1986. All were "right first time".

    We then redesigned ARM to make it go faster (since, by this time, Acorn had decided roughly what market to aim the completed machines at and 8MHz minimum capability was required - but we did continue to develop software on the 3 micron part!). Some more FIQ registers were added, bringing the total to 27 (some of our "must go as fast as possible for real time reasons" code didn't manage with the smaller set). A multiply instruction (2 bits per cycle, terminate when multiplier exhausted so that 8xn multiply takes 4 cycles max) and a set of coprocessor interfaces were added. Scaled indexed by register shifted by register (i.e. effective address was ra+rb<<rc) was removed from the instruction set (too hard to compile for) [scaled indexed by register shifted by constant was NOT removed!].

    The new, 2 micron ARM was right first time on 19th Feb 1987. Its peak performance was 18MHz; its die size 230x230 mil^2; 25,000 transistors.

    VTI were given a license to sell the chips to anyone. They renamed the chips: VL86C010 (ARM), VL86C110 (MEMC), VL86C310 (VIDC), VL86C410 (IOC).

    Acorn released volume machines "Acorn Archimedes" in June 1987. Briefly:

    A305: 1/2 MByte, 1MByte floppy, graphics to 640x514x16 colours
    A310: ditto, 1MByte
    A310M: ditto with PC software emulator (circa a PC XT, if you're interested)
    A440: 4MByte, 20MByte hard disc, 1152x896 graphics also.

    All machines have ARM at 4/8MHz (circa 5000 dhrystones 1.1), 8 channel sound synthesiser, proprietry OS, 6502 software emulator, software.... Prices between 800 and 3000 pounds UK with monitor and mouse and all other useful bits. Not available in the US, but try Olivetti Canada.

    VTI make ARM available as an ASIC cell. Sanyo have taken a second source license (in April 1988) for the chip set, and make a 32 bit microcomputer (single chip controller). In "VLSI Systems Design" July 1988, the following statements are made by VTI: ARM in 1.5 micron (18-20MHz clock), 180x180 mil^2; future shrink to 1 micron (they are expecting "perhaps 40MHz" and 150 mil square with the price dropping from $50 to $15); expected sales in 1988 90-100,000 units.

    Contact Ron Cates, VTI Application Specific Logic Products Division, Tempe, Arizona for details (e.g. the "VL86C010 RISC Family Data Manual").

    Plug in boards for PCs are available. A controller for Laser printers with ARM, MEMC, VIDC and 4MBytes DRAM has been sold to Olivetti [Acorn's parent company as of 1985-6] (contact if you want to know more).

    In the Near Future:

    We have a Floating Point Coprocessor interface chip working "in the lab" - the fifth member of the four chip set. It interfaces an ATT WE32206 to ARM's coprocessor bus. It benchmarks at 95.5 KFlops LINPACK DP FORTRAN Rolled BLAS (slowest) (11KFlops with a floating point emulator) on an A310. Definitely have to make our own, some time...

    Acorn is about to release UNIX 4.3BSD including TCP/IP, NFS, X Windows and IXI's X.desktop on the A440. Contact or for more info (and to be told that it isn't available in the US {yet}).

    Operating Systems:

    Acorn's proprietry OS "Arthur" is written in machine code: it fills 1/2MByte of ROM! (yes, writing in RISC machine code is truly wonderful as others have noted on comp.arch). Its main features are windows, anti-aliased fonts (wonderful at 90 pixels per inch - I use 8 point all the time) and sound synthesis. It runs on all Archimedes machines. A 2nd release is due real soon now and features multitasking, a better desktop and a name change to RISC OS. VTI are porting VRTX to the ARM; Cambridge (UK) Computer Lab's Tripos has been ported to A310/A440. UNIX has been ported by Acorn: see above. There are MINIX ports everywhere one looks (try querying the net...).

    C Compiler: ANSI/pcc; register allocation by graph colouring; code motion; dead code elimation; tail call elimination; very good local code generation; CSE and cross-jumping work and will be in the next release. No peepholing (yet - not much advantage, I'm afraid). Can't turn off most optimisation features.

    Also FORTRAN 77, ISO PASCAL, interpreted BASIC (structured BBC BASIC, very fast), Forth, Algol, APL, Smalltalk 80 (as seen at OOPSLA 88: on an A440 it approximates a Dorado) and others (LISP, Prolog, ML, Ponder, BCPL....).

    Specific applications for Archimedes computers are too numerous to mention! (though the high speed Mandelbrot calculation has to be seen to be believed - one iteration of the set in 28 clock ticks [32 bit fixed point] real time scroll across the set [calculate row/column in a frame time and move the

    There is a part of the net that talks about Archimedes machines:

    Random Info:

    Code density is approximately that of 80x86/68020. Occasionally 30% worse (usually on very small programs).

    The average number of ticks per instruction is 1.895 (claims VTI - we've never bothered to measure it).

    DRAM page mode is controlled by the MEMC, but there is a prediction signal from the ARM saying "I will use a sequential address in the next cycle" which helps the timing a great deal! S=125nS, N=250nS with current MEMC and DRAM (see David Chase's article for instruction timing). Static RAM ARM systems have been implemented up to 18MHz - S=N=1/18 with these systems.

    Approximately 1000 dhrystones 1.1 per MHz if N=S; about 1000/1.895 dhrystones per MHz if N=2S (i.e. 5K dhrystones for a 4/8MHz system; 18K dhrystones for an 18/18MHz system).

    Most recent features: Electronic Design Jul 28 1988, VLSI Systems Design July

    We had a competition to see who would use "ra := rb op rc shifted by rd" with all of ra, rb, rc and rd actually different registers, but the graphics people won it too easily!

    ARM's byte sex is as VAX and NS32000 (little endian). The byte sex of a 32 bit word can be changed in 4 clock ticks by:

    EOR R1,R0,R0,ROR #16
    BIC R1,R1,#&FF0000
    MOV R0,R0,ROR #8
    EOR R0,R0,R1,LSR #8

    which reverses R0's bytes. Shifting and operating in one instruction is fun.

    Shifted 8bit constants (see David Chase's article) catch virtually everything.

    Major use of block register load/save (via bitmask) is procedure entry/exit. And graphics - you just can't keep those boys down. The C and BCPL compilers turn some multiple ordinary loads into single block loads.

    MEMC's Content Addressable Memory inverted page table contains 128 entries. This gives rather large pages (32KBytes with 4MBytes of RAM) and one can't have the same page at two virtual addresses. Our UNIX hackers revolted, but are now learning to love it (there's a nice bit in the standard kernel which goes "allocate 31 pages to start a new process"....)

    Data types: byte, word aligned word, and multi-word (usually with a coprocessor e.g. single, double, double extended floating point).

    Neatest trick: compressing all binary images by around a factor of 2. The decompression is done FASTER than reading the extra data from a 5MBit

    Enough! (too much?) Specific questions to me, general brickbats to the net.

    .....Roger Wilson (

    DISCLAIMER: (I speak for me only, etc.) The above is all a fiction constructed by an outline processor, a thesaurus and a grammatical checker. It wasn't even my computer, nor was I near it at the time.
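
    An aside (mine, not part of the quoted post): the four-instruction byte-reversal sequence is easy to sanity-check with a small Python model of the register operations.

```python
MASK = 0xFFFFFFFF  # ARM registers are 32 bits wide

def ror(x, n):
    """32-bit rotate right, as produced by ARM's barrel shifter."""
    return ((x >> n) | (x << (32 - n))) & MASK

def reverse_bytes(r0):
    """Model Wilson's four-instruction byte reversal of R0."""
    r1 = r0 ^ ror(r0, 16)     # EOR R1,R0,R0,ROR #16
    r1 &= MASK ^ 0x00FF0000   # BIC R1,R1,#&FF0000
    r0 = ror(r0, 8)           # MOV R0,R0,ROR #8
    r0 ^= r1 >> 8             # EOR R0,R0,R1,LSR #8
    return r0

print(hex(reverse_bytes(0x11223344)))  # 0x44332211
```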

    1. Dave559

      Re: Sophie Wilson post about early ARM history from 1988

      Ah, good old usenet, back in the days when it was useful, and was perhaps the original global online social network (possibly Fidonet fans might disagree). A real shame that the gradual increase in spam and gradual tailing off of new users largely put an end to it (I know there are still a few groups that are, somehow, still thriving even now, although most of it is now a spambot-infested post-apocalyptic wasteland, sadly).

      Impressed that way back then (wow, the 80s!), Acorn had a domain name and email.

      And loving the uucp usenet transmission, too... Wanders off, muttering something about bang paths...

    2. Martin an gof Silver badge

      Re: Sophie Wilson post about early ARM history from 1988

      That brings back memories. I remember someone posting it on the polytechnic's message board system (whatever it was called - the poly had a VAX8600 IIRC as well as a whole bunch of smaller ones). Must have been shortly after it was posted to usenet judging by that date - I didn't have usenet in those days.

      I remember being terribly impressed by the poster - surely he must have had personal contact with Roger! Turns out not, but Jan Paxton had actually had a part in writing some commercial terminal software for the Archimedes, which I later went on to buy when I got my modem and was able to remote-in to the poly's computer in order to work on assignments from home, on mum & dad's phone bill :-)

      It was Jan who introduced me to the Archimedes demo scene. I spent hours watching those things!

      I also remember the time - not long afterwards - when I got *very* confused when someone insisted on referring to Sophie Wilson and assumed I knew who she was.

      I still think the sale to Softbank was wrong...


  12. BebopWeBop

    Ahh, fond memories of working with an Archimedes (we used them in the labs - a cheap and very flexible alternative to the PC) - and being a bit of a packrat, I still have a (working) machine - along with my ZX80 and other bits of arcana.

  13. Ragarath

    My Beloved Electron

    My beloved Acorn Electron never gets a mention. Yes, we could not afford a BBC, but it did spark my love of all things computers.

    1. Dan 55 Silver badge

      Re: My Beloved Electron

      It did a while back, as did the first ARM computer.

      I think it must be about time to revisit all these 80s machines in another set of articles...

      1. Tom 7

        Re: My Beloved Electron

        As a child I liked knowing how things worked, and we are now so far from the ground in IT that it's hard to show kids how a modern computer works. I have spent some time with a few kids and an 8080 simulator to take them through a few simple steps, and when I've had access to the internet the Visual6502 is a way of showing them the silicon running the code along with the code itself. I wonder if a Knuth revisited on a real machine might make a good educational toy.

        1. Martin an gof Silver badge

          Re: My Beloved Electron

          I suppose it depends on what you think they need to know. I used to love tinkering with 6502 assembler on my BBC Micro, but apart from a few times when I used it to speed up some process that was taking far too long in BASIC, I stuck with BASIC.

          When I started my first job I used some of my newfound wealth to buy a 6502, some static RAM, some EEPROM, address decoders, breadboard etc. and was intending to build my own computer for no reason other than I could*...

          ...the bits are still in a box in the attic somewhere, partly assembled.

          These days my children have access to Raspberry Pi, Arduino and BBC Micro:Bit as well as more "normal" computers. I've just bought an "enviro" board for the micro:bit and using it is as easy as dragging the "Read Temperature in °C" block to the build area. I need a temperature sensor for another application on Arduino and there is a vast array of devices, all with lovely libraries which let you do something similar. When you read the data sheets, you understand why. Bit-banging the 1-wire protocol does not look like fun.

          Question then - do children really need to be able to know how to communicate "raw" with a remote sensor, or is dragging the "Read Temperature" block enough, because it's the outcome of creating a temperature display that's the real goal?

          I have a project of my own that needs a temperature sensor - doesn't need to be terribly accurate and doesn't need to be at all fast. I'm actually probably going to be using a device with an analogue output. It will be on the end of about 10m of cable, maybe more, and busses such as I²C can't cope with long wires.


          *I knew I could, because I'd spent a very happy "sandwich year" building from the ground up a series of MCS96-based electronic devices, prototyping them in wire-wrap. The development system (compiler) was way out of my domestic budget so at home I was intending to use the BBC Micro's compiler :-)

  14. anthonyhegedus Silver badge

    The 6502 was so simple and yet so effective in those early computers. Did some of that simplicity rub off on the designers of the ARM1? I was at university studying computer science at the time, and I remember learning about RISC being touted as the future of microcomputing. The first Archimedes was blisteringly fast for the time.

    1. druck Silver badge

      Yes, one of the design goals was a 32 bit processor with as close to the simplicity of the 6502 as possible. So it had a very orthogonal instruction set (very few special cases), three operand instructions, register to register only ALU operations, and status flags as part of the program counter for easy stacking.

      Compared to the contemporary 16 bit instruction sets (68K, 286, etc) it was an absolute joy to program in assembler, and resulted in blisteringly fast code. That initial advantage did come back to bite in the end, as most of RISC OS and quite a few large applications were written in assembler, and proved far more difficult to port to later ARM versions, particularly the move to a 32bit PC in ARMv5, than if everything had been written in a high level language.
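
      The conditional-execution point is easiest to see with the textbook example, Euclid's GCD, where the predicated SUBs remove both branches inside the loop. A quick Python model (the ARM listing in the comments is the classic illustration, quoted from memory rather than from this thread, and assumes positive inputs):

```python
def gcd(r0, r1):
    # Models the classic ARM conditional-execution idiom:
    #   gcd: CMP   r0, r1      ; set the flags
    #        SUBGT r0, r0, r1  ; executes only if r0 > r1
    #        SUBLT r1, r1, r0  ; executes only if r0 < r1
    #        BNE   gcd         ; repeat until the registers are equal
    # Only the loop branch remains; the two subtractions never branch.
    while r0 != r1:
        if r0 > r1:
            r0 -= r1
        else:
            r1 -= r0
    return r0

print(gcd(1071, 462))  # 21
```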

    2. ThomH

      I'm pretty sure some of the design ethic did; my understanding is that a visit to MOS by Acorn provided the necessary evidence that you can design a processor on a shoestring with a tiny group of people.

      1. Tom 7

        TBF, at the time most 8 bit machines were largely the work of one person; the Z80 was pretty much a one man design. Interestingly, these things are at a headful level: a top engineer could in a couple of years design the logic, the circuits and the layout of something of that size. Once you get much more complicated than that you need to start working in teams.

  15. Victor Ludorum

    That brings back memories

    I remember the A310 being installed in the computer room at school, and everyone being amazed at how quickly this ran in BBC BASIC:

    10 T%=TIME
    20 FOR X%=1 TO 10000 : NEXT
    30 PRINT TIME-T%

    Just tried it with the BBC BASIC 'demo' edition on my i7 laptop - 1 to 30,000,000 takes one second...

  16. Simon Harris

    Wilson had created a simulation of the 32-bit mpu's instruction set in 808 lines of BBC BASIC

    It was very forward thinking of Acorn to make their BASIC integers 32-bit right from the start (even before BBC BASIC), rather than the 16-bit integers most other 8-bit BASICs used at the time. Just that feature must have helped facilitate simulating a 32-bit CPU in such a relatively small number of program lines.
