Mighty oaks
Out of little acorns grew.
Ever since the silicon blueprints of the ARM1 – the grandfather of today's smartphone processors – were recovered in November, hardware guru Ken Shirriff has been poring over the layout and reverse-engineering the landmark chip. The ARM1, designed in 1985 by Acorn engineers, was a prototype 32-bit RISC CPU with 25,000 …
The simple RISC architecture of the ARM1 makes the circuitry of the processor easy to understand, at least compared to a [contemporary] chip such as the 386.
The ARM processor was a 32-bit clean-sheet design, with no legacy baggage to support, whereas the i386 has a lot of backwards compatibility to maintain, being able to trace its roots back to the 16-bit 8086.
And the 8086 was designed to make porting code from the 8080 relatively straightforward, so all the applications (WordStar, dBase, etc.) that ran on CP/M were quickly ported to CP/M-86 and MS-DOS/PC-DOS computers.
For some reason the software on the Archimedes seemed to be far more advanced than that for Win/x86 machines. E.g. ImageFS - open an image file as a folder and drag and drop out any of the alternative formats that appear to be present inside; SparkFS - another installable file system that made Zip archives behave like folders; ArtWorks - real-time updates of graduated fills as you dragged a pointer around the screen; TechWriter - takes too much description, but people seeing how it works would invariably get angry and say "why doesn't my computer do that!"
> For some reason the software on the Archimedes seemed to be far more advanced than that for Win/x86 machines
RISC OS is open these days - I'm involved with RISC OS Open Limited, which manages it. Emulators are available for those who want the nostalgia, and it'll run on things like the Raspberry Pi for those who want something a bit more practical :-)
https://www.riscosopen.org/
Strangely enough, the 8008 architecture is a copy of the processor in the Datapoint 2200 programmable terminal, which contained a CPU built from 7400-series TTL chips. The 8008 was intended to replace this hardwired processor, but ended up being sold as a separate processor. For some reason, Intel doesn't publicize this much. You can trace Intel's little-endian architecture back to the serial processor in the Datapoint 2200 - when you're adding one bit at a time, you want to start with the lowest. That's also why the 8008 and descendants generate a parity flag - it's very useful for a terminal.
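A minimal sketch of that bit-serial point (illustrative C, nothing from the actual hardware): a serial adder consumes and produces bits least-significant-first because carries only ever move upward, so storing the low-order part first lets the ALU start work as soon as the first bits arrive.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of a bit-serial adder: operands arrive one bit per "clock",
 * least significant bit first, as a serial ALU would see them. Because
 * carries only propagate from low bits to high bits, each result bit can
 * be emitted the moment its operand bits arrive -- which is why a serial
 * design naturally wants the low-order byte stored first (little-endian). */
static uint16_t serial_add(uint8_t a, uint8_t b)
{
    uint16_t sum = 0;
    unsigned carry = 0;

    for (int i = 0; i < 8; i++) {              /* one bit per clock tick */
        unsigned abit = (a >> i) & 1;
        unsigned bbit = (b >> i) & 1;
        unsigned s = abit ^ bbit ^ carry;      /* full-adder sum bit */
        carry = (abit & bbit) | (carry & (abit ^ bbit));
        sum |= (uint16_t)s << i;               /* emit result bit immediately */
    }
    sum |= (uint16_t)carry << 8;               /* final carry out */
    return sum;
}

int main(void)
{
    printf("0x%04x\n", serial_add(0xC7, 0x65)); /* 199 + 101 -> prints 0x012c */
    return 0;
}
```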
The 4004 has a totally different architecture, and the 8008 is not based on it at all. The 8008 of course led to the 8080, 8085 and Z-80. Going to 8086 was a bigger jump - as an earlier post mentioned, 8080 assembly could be sort of translated to 8086 code. One interesting thing about the 8085 is it implemented instructions that were never documented because Intel decided they didn't want to have to support them in the 8086. Well, enough random chip facts.
IIRC, ARM2 was fully static, so you could single-step through an instruction sequence or stop the clock indefinitely. So I was surprised to hear that the flags in ARM1 used dynamic logic.
If the flag logic was the only dynamic logic on the chip, it makes good sense to change that in the redesign to get single-step capability, so it is perfectly plausible that this happened.
Not just single step. I suspect a major disadvantage of the ARM1 was that the range of clock speeds was limited by the need to keep the dynamic cells alive.
In embedded designs it's very common to vary the clock speed according to the work being done, as this reduces power consumption. This is now, of course, standard operating technique in phones and laptops. For instance, one design I did back in the late 70s used an RCA processor which normally ticked over at about 10kHz waiting for keypad input. When it sensed a key it went to 2MHz until processing was complete. The whole unit ran for weeks off 3 C cells.
Going to static for the ARM2 would have vastly increased the possible range of applications.
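A minimal sketch of that idle-slow/sprint-fast pattern, assuming hypothetical clock-control and keypad routines (stubbed here so it compiles; on real hardware they would poke whatever clock and I/O registers the part provides):

```c
#include <stdio.h>

/* Sketch of the "tick over slowly, sprint when there's work" pattern
 * described above. A fully static CPU can have its clock slowed or even
 * stopped arbitrarily, which is what makes this trick possible. */

#define IDLE_CLOCK_HZ    10000UL     /* ~10 kHz while waiting for input */
#define ACTIVE_CLOCK_HZ  2000000UL   /* 2 MHz while actually processing */

/* Hypothetical stand-ins, stubbed for the demo. */
static void set_clock_hz(unsigned long hz) { printf("clock -> %lu Hz\n", hz); }
static int  key_pressed(void)              { return 1; }   /* pretend a key arrived */
static void handle_key(void)               { printf("handling keypress\n"); }

int main(void)
{
    for (int presses = 0; presses < 3; presses++) {   /* bounded loop for the demo */
        set_clock_hz(IDLE_CLOCK_HZ);    /* crawl: minimal power drawn        */
        while (!key_pressed())
            ;                           /* wait for a keypress               */

        set_clock_hz(ACTIVE_CLOCK_HZ);  /* sprint: finish the work quickly,  */
        handle_key();                   /* then drop back to the idle clock  */
    }
    return 0;
}
```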
> For instance, one design I did back in the late 70s used an RCA processor which normally ticked over at about 10kHz waiting for keypad input. When it sensed a key it went to 2MHz until processing was complete. The whole unit ran for weeks off 3 C cells.
The RCA 1802? Great chip. The original low-power micro. It got used in all sorts of things - cruise missiles (where I think they did an entire terrain-following radar guidance system on it, which would have been a monumental achievement) and British Telecom payphones.
Another thing I miss is 4000 series CMOS logic. Want to run it off 24V? No problem. Fiddly to use (don't dare leave an input undefined), not fast, but great power consumption and good noise immunity. I don't think the 1802 went quite that high, but it was good noise immunity that made it suitable for amateur satellites back in the 1970s.
Indeed. Slow but reliable as anything. However - 24V 4000 series logic? Nothing I ever used was safe above 15V. The 1802 had a version where the core logic ran at 10V and the interfaces ran at 5, and when I enquired what it was used for the sales guy declined to answer.
While we're completely off topic, let me tell you a story (shut up grandpa). On a system I worked on there was a logic board which kept bringing down its 15V power line. There were, predictably, no schematics. And it did seem to draw a lot of power, with mysterious spikes.
So although it was working we extracted it and worked out the schematic. It was a state machine implemented in 15V 4000 series CMOS. It was a very clever design. And every single unused input had been left open circuit. The board had then been varnished, so each unused input was a little capacitor. It took me a whole day to work out which inputs should be tied to +15V and which should be tied to ground. After which it drew 90% less power and the glitch disappeared.
In some circles 4000 series had a terrible reputation for unreliability, and I often wondered how much of this was simply failure to terminate properly.
> In some circles 4000 series had a terrible reputation for unreliability, and I often wondered how much of this was simply failure to terminate properly.
My vote is on the latter. I never had any issues with the 4000 series. In fact the wide supply voltage range and high fan-out were very useful.
They did tend to be somewhat sensitive to ESD, so perhaps the reputation in some circles was due to a combination of bad design and improper handling.
Better than that ugly mess called Intel x86. Just a shame that it was Intel, and not ARM or Motorola, that won on the desktop.
I recall that when they first tested the ARM they forgot to connect the power line to it, but miraculously it was working: it was being powered by the logic signals from the host BBC Micro (it was developed by fitting it as a CPU expansion to the BBC Micro). It was power efficient even then, although that wasn't a design goal.
Also, give Apple credit for seeing what a great CPU it was when they were looking for a processor for the Newton. It was the deal with Apple that led to the formation of ARM as a separate company, since Apple was not going to buy its processors from a competitor in the computer market.
It's always been possible to power CMOS processors (and CMOS logic) from glue logic via their I/O port ESD protection diodes; it's just a consequence of how ESD protection works in CMOS chips, so it's not specifically a shining example of low-power chip design.
Especially when you're hooking them up to logic chips capable of supplying tens of milliamps per pin, like those used in the Beeb; even the venerable 6502 could be run that way if by some oversight you managed to forget to connect the power pin.
"Companies stock normally increases in value when the company has predicted growth so there is a link to the real world."
Might be that, might be "illogical exuberance" (helped by share buybacks and such, funded by money printed at taxpayers' expense).
> Companies stock normally increases in value when the company has predicted growth so there is a link to the real world
Please provide evidence of a statistically significant correlation, in general, between predictions of growth and subsequent real-world results.
I'm surprised there was no mention of: http://www.theregister.co.uk/2015/11/28/arm1_visualized/
"In the case of the ARM1, to celebrate the 25th anniversary of Brit chip architects ARM, the team have managed to lay their hands on the original designs of the 32-bit RISC processor core, and visualized it for the web" http://visual6502.org/sim/varm/armgl.html
Yes, the Visual 6502 team should definitely get credit for obtaining the ARM1 layout and building a simulator - my reverse engineering is based on their data. Dave Mugridge has also been doing a lot of ARM1 reverse engineering, and his website (daveshacks.blogspot.com) is worth visiting.
In the early days - when there were fewer than about 5k devices on a chip - there was not a lot of automation available, many hours were spent laying things out by hand, and you would learn what the various bits looked like. Worse than that - I used to lay out NMOS on a Tektronix storage scope, which wrote on a green screen with a very bright stream of electrons that left a glow a bit brighter than the dark green background. After a long day on one of these you could stand in the pub and whole parts of the circuit would literally flash before your eyes!
Towards the end of the eighties a huge amount of CAD had been developed, and you would only hand-craft (and hence recognise) the repeated parts, to get the maximum utilisation of space - and those parts would stay with you for a long time.
I worked on some (for the time) ultra-high-speed bipolar, and the relatively high power consumption meant chips didn't have too many components and were largely pad limited. The pads the connecting wires were attached to sat on a 100u grid, so you could spend a month or more trying to shrink some part of the circuit by 10 or 20u so that the whole chip dimension would drop by 100u and you could get another 15 devices from a 4" wafer. When you spend that amount of time on something you remember it.
Not sure what it's like these days - I wanted to get into it again, but they wanted £40k just to see the process details, FFS.
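For a rough feel for the pad-limited arithmetic above, here is a minimal sketch. The 5 mm die edge is an assumed illustrative figure (the comment doesn't give real dimensions), and the calculation ignores edge loss, scribe lanes and yield, so treat the numbers as ballpark only.

```c
#include <stdio.h>

/* Rough gross-die estimate: how many extra chips a 100u edge shrink buys
 * on a 4-inch (~100 mm) wafer, assuming an illustrative 5 mm square die. */
static double gross_dies(double wafer_diameter_mm, double die_edge_mm)
{
    const double pi = 3.14159265358979;
    double radius = wafer_diameter_mm / 2.0;
    double wafer_area = pi * radius * radius;
    return wafer_area / (die_edge_mm * die_edge_mm);
}

int main(void)
{
    double before = gross_dies(100.0, 5.0);   /* 5.0 mm die edge              */
    double after  = gross_dies(100.0, 4.9);   /* 100 um shaved off the edge   */
    printf("before: %.0f, after: %.0f, gain: %.0f\n",
           before, after, after - before);    /* roughly a dozen extra dies   */
    return 0;
}
```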
Even an undergrad student who's done a little chip-design work in class can often recognize some common structures in relatively simple architectures - things like barrel shifters (note the one in the ARM1 picture in the article). I remember looking at chip layouts with friends in the CS and EE programs, guessing what various sections were.
Someone with more experience can no doubt do better.
But it's just large-scale stuff. It's like looking at an aerial shot of a city: you can say, oh, here's a park, and here's a high-rise office district, and this is a residential neighborhood. But you can't read the street signs without getting a closer look.
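As an aside on why a barrel shifter is so easy to spot: it is a handful of identical mux stages laid out as a dense, regular array. A rough C model of that log-stage structure (a generic sketch, not the ARM1's actual shifter, which also handles rotates and arithmetic shifts) looks like this:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of a 32-bit barrel shifter built from five mux stages
 * (shift by 1, 2, 4, 8, 16). Each stage either passes its input through
 * or shifts it by a fixed power of two, selected by one bit of the shift
 * amount. In silicon each stage is a row of identical multiplexers,
 * which is why the block shows up as a large regular array in a die shot. */
static uint32_t barrel_shift_left(uint32_t value, unsigned amount)
{
    for (unsigned stage = 0; stage < 5; stage++) {
        unsigned step = 1u << stage;      /* 1, 2, 4, 8, 16             */
        if (amount & step)                /* this stage's select bit    */
            value <<= step;               /* shift by its fixed amount  */
        /* else: pass through unchanged */
    }
    return value;
}

int main(void)
{
    printf("0x%08x\n", barrel_shift_left(0x000000FFu, 12)); /* 0x000ff000 */
    return 0;
}
```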
An experienced engineer can learn a lot by looking at a chip plot, less by looking at a die photo - and much less these days, with dizzying numbers of layers. It is not unknown to have "traps": structures that look like one sort of transistor but in fact (due to implant or doping) behave differently than expected. The chip plot will probably be more "honest", but also less likely to be available to an adversary.
Now, if the question is about non-adversarial (e.g. scholarly) study: again, the increasing complexity of modern chips means things don't just "jump out at you", and much of a modern (or not so modern) chip is generated rather than hand drawn. One would need some familiarity with the idioms of the designer, or of the compiler.
I managed to get a look at some of the Acorn internal memos concerning the ARM2 and its support chips in the original Archimedes design. Very interesting. Discussions about what audio capabilities (stereo output and/or input) to put on the IO chip, and "how much sand it would require" :)
I found the ARM1 to be a great chip to use on my BBC Micro and went on to create stuff on the ARM2 and 3 on my Archimedes.