It blew the Apple, Amiga and ST away tech-wise in many ways.
Good to see an open source version of RiscOS is being ported to the Raspberry Pi.
More than two decades after the alliance of Intel and Microsoft drove ARM from the battleground of personal computing, Microsoft is warmly embracing the low-power processor designer for Windows 8. ARM was squeezed out of the then emerging and subsequently dominant platform of the time, the desktop PC, as computer makers …
Archimedes = Very good CPU, average chipset.
ST = Average CPU, average chipset.
Amiga = Average CPU, above average chipset.
Apple = Average CPU, average chipset.
It was horses for courses; if you wanted to do 3D rendering then the Arc was king. The fast CPU allowed you to do a lot of things in software too, no need for hardware assistance.
But the Arc was too expensive, £799 at launch or £875 if you wanted 1MB. I remember paying around £500 or so for my A500 back then.
Acorn's Archimedes series of machines were wonderful all-rounders and so easy to work with. Some of the games we converted (for example Pacmania, which I did myself) were by far the best versions, in fact. For games players, we supported it like no other games company did, converting a number of major Amiga titles, such as the Lemmings series (which I also did) etc. I often wonder whether, if it wasn't for our efforts on the platform, the format might have died a lot earlier, along with the ARM processor. What we achieved certainly made it a more acceptable computer for the kids to own.
Perhaps phones might all have had Intel devices in them and still be big and fat if it wasn't for us lol!
...and all this was in an age where optimisation meant changing the program so it ran faster, not buying newer, faster hardware. Newer versions of software also tended to come with software optimisations and new features, not just bloat that ran slower on the same hardware but ran roughly the same speed on much faster new hardware.
It was an art form back then tweaking your system, usually to make it faster, for example to boot in less than 5 seconds. I remember getting a system to the desktop (graphical shell) in under 2 seconds; and this was a fully usable desktop, not the sham tweak from Windows XP onwards that may display a desktop relatively quickly but you still have to wait another 30 seconds for it to be usable. I don't especially miss the tweaking of systems to make them run faster, and definitely don't miss the IRQ table and memory allocation juggling that came with DOS, but considering the sheer processing power available just on a basic PC, it's disappointing just how slow they are.
That's because those machines were from a time before Microsoft's monopoly damaged computing and set it back 5 - 10 yrs.
An Amiga 1200 (from 1992) compared to a Windows PC from 1996:
The Amiga ran at 14MHz and had 2MB RAM; the Windows PC I had ran at 166MHz and had 32MB RAM
- The Amiga booted up faster (seconds for a HDD boot into workbench)
- The Amiga had a more responsive desktop
- The Amiga had a more stable desktop (only games ever caused crashes...)
- The Amiga had sound that didn't stutter ALL THE GOD DAMN TIME
- The Amiga's graphics, although they lacked 3D capabilities, seemed far faster - and didn't stutter ALL THE GOD DAMN TIME - in 1996 I saw nothing from the 'state of the art' PC that was as impressive
- Everything cost far more on Windows than on the Amiga; there was loads of software on the Amiga that just didn't exist for Windows (e.g. music production), and the equivalents that did exist often cost 2 - 10X the amount (for less reliable software)
Going from the Amiga 1200 -> Win 95 was like a step into the Dark ages.
Thank god for Linux.
"RISC OS, which bore little resemblance to what students would see and use at work when they left school."
Absolute rubbish. Windows 95 bore far more resemblance to RISC OS than to Windows 3.1.
The kids who were moved from RISC OS to Windows 3.1 would have encountered Windows 95 when they left school.
(And apparently the official launch of RISC OS for the Raspberry Pi is this weekend).
Well, I never had to get all techy on Acorns, as they were in my junior school's IT room... However I remember it being straightforward enough for all us 12-year-olds to do DTP and word processing on them, and on the customary last-lesson-of-term free play, we would marvel at the games.
At home, I was having to mess around with IRQ numbers, Autoexec.bat, config.sys, extended, expanded etc, to run games on a PC with beepy sound and so-so graphics. Using it for homework meant an ASCII-based GUI called MS Works, and it just seemed very unpolished.
With the senior school came Mac LCIIIs (IIRC) and despite most assignments being completed on them, we rarely saw signs of slow down. Hours of fun making Macromedia Director animations, too. Our own quota of network storage, and a laser printer. I think the transition was fairly smooth for most pupils, despite the disappearance of two mouse buttons.
By this point, at home, I must have had a 386 SX... Windows was optional... I still had to reboot after connecting a peripheral... but later Doom and Carmageddon would cheer me up!
Acorns and Apples then were much more like the Windows PCs I use now.
"I knew this would rile ROS fans, but y'know that's how Bill called it. ADFS::HardDisc4.$ compares to C:\ how?"
But how often do you actually need to type in ADFS::HardDisc4.$.? Virtually never, though you can if you want to. If you need a full pathname you can just shift-drag the icon.
Oh, and RISC OS keeps the file type separately from the file name.
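That out-of-band typing can be sketched in a couple of lines of Python. The two filetype values below are real RISC OS ones (&FFF is Text, &AFF is DrawFile); the catalogue path is a made-up example:

```python
# RISC OS keeps a 12-bit filetype out of band, not as a name suffix.
# The two type values are real RISC OS filetypes; the path is made up.
FILETYPES = {0xFFF: "Text", 0xAFF: "DrawFile"}

# Toy catalogue: path -> filetype, stored separately from the name.
catalogue = {
    "ADFS::HardDisc4.$.ReadMe": 0xFFF,    # hypothetical example file
}

def describe(path):
    # The name carries no extension; the type comes from the catalogue.
    return FILETYPES[catalogue[path]]

assert describe("ADFS::HardDisc4.$.ReadMe") == "Text"
```

So renaming a file can never change its type, unlike extension-based schemes.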
Re Acorn file paths.
Instead of names like HardDisc4
:0, :1 etc. could be used for floppy disks.
:4, :5 etc. could also be used for hard drive devices as I recall, as well as the device names or name of the individual floppy disk. If you didn't have a needed named floppy in the drive, then you'd be prompted to insert it.
But you could also use <Obey$Dir> for the path of your app just clicked on, and just go down from there for your own resources in your app folder.
All easy peasy actually.
How can you call RISC OS idiosyncratic? It's intuitive. Drag and drop filing is so much easier than having to navigate Windows's or MacOS's filing systems. It had the icon bar long before both Microsoft and Apple pinched the idea. It has the ability to put windows to the back, and to have focus on a window without automatically bringing it to the front thereby covering up the other windows you wanted to see. In short, RISC OS is so much more user-friendly that it's worth putting up with a few drawbacks like the absence of a fully-featured web browser.
I think that at that time (until Windows 95 was launched), Windows was the red-headed stepchild of WIMP interfaces. RiscOS, Acorn, Atari, and Mac had similar, and by similar I mean usable, GUIs. The three-button mouse was RiscOS's little innovation at that time which is now considered normal.
No. Here's the situation as I understand it:
Windows RT (the operating system) is pretty much just the ARM build of Windows 8, with various features turned off (and a confusing name).
Note that Windows Phone 7 & 7.5 were (I believe) built on top of Windows CE, but Windows Phone 8 is built largely on Windows 8 code (e.g. the kernel, networking stack, some of the UI stack, etc). That means this new round of OS updates (including the ability to build "proper Windows" on ARM) may be what finally gets rid of Win CE.
From a marketing perspective, yes. CE has run on ARM for years, and it has more in common with the Windows NT family (2000, XP, Vista, 7, 8) than the Windows-on-DOS family (3.x, 9x, ME).
But CE's resource/process limits are static, hard-coded for tiny RAM and ROM. Win NT (Win 8) uses dynamic allocation, better suited to the ARM in smartphones, which have more physical resources than a 2001 x86 laptop!
$DEITY knows the competition has used sometimes outright ludicrous marketing claims. As those go, this one was relatively benign. In fact, it was, and is, mostly true as long as you read "x86" for "CISC", i.e. that which they were up against in that market.
Even if x86 has a RISC-type core underneath, it still means they have a layer of CISC on top, and that layering itself adds complexity. The point of RISC is exactly to spurn complexity, something a CISC-over-RISC design doesn't quite do. Handily defeats, one might say. Meaning you might as well not have mooted that argument.
The problem with acorn wasn't with their marketing slogans, but the dominance of redmond and its marketeering. And of course its chum intel, whose segmented CPUs didn't win out because of their nice and elegant architectures. How is itanic doing these days, or am I not allowed to ask that?
In fact, that very success with wintendo could be called something of a curse in disguise. For comparison, apple has switched chip architectures in their desktops twice now, while maintaining reasonable backward compatibility. So I expect them to understand how to keep their independence, meaning that should a better architecture float by for the mobile market, they can switch over there too. This means that ARM will have to continue to deliver more than intel had to redmond. In addition, apple has enough know-how to keep the pressure on.
On the other hand, redmond didn't quite manage anything close to that. winnt on alpha, anyone? Instead their offerings seemed designed to require upgrades, driving sales for intel, and by neatly hooking into that cycle, themselves, on the desktop. In "mobile", they did do wince, and so on, and it supports multiple architectures, but it's clunky to write for in a way entirely different yet curiously similar to their other software offerings. But I suspect that may not help them.
It's certainly audacious to try and unify mobile and desktop, but where they've antagonised the desktop with the user interface, they may have scuttled their mobile efforts before they have good and well started by sheer greed. That is, the differently-branded arm variant requires the device to be cryptographically dedicated to them for all eternity, and all you will most likely get is a few choice bits of software they want you to have, when the competition will happily let you tinker. It may turn out to be another zune because of that.
And sticking with intel isn't really an option as long as they fail to get that mips per watt thing sorted. Which they may not be able to do at all, should it turn out the "architecture" simply has become too bloated to compete. Turning parts of the chip off will only help you so much, however cleverly done.
Not that I particularly want them to improve. The two of them have hoarded end user computing for long enough that I don't particularly mind them scuttling themselves. Now if only android could stop hogging so many cycles....
"On the other hand, redmond didn't quite manage anything close to that. winnt on alpha, anyone? Instead their offerings seemed designed to require upgrades, driving sales for intel,"
When you realise that MS "friends" are *hardware* suppliers who make money when you buy more/higher-spec'd bits of kit.
Consider what a Win98 PC *did* back then in *user* terms.
What does it do *now* from the *user* PoV?
You've got to be pretty "creative" to waste that big a speed up and capacity increase in the OS.
The most important point about RISC isn't arguments about instructions per second, but reduced die size. The original ARM-2 had 30,000 transistors, roughly the same as the 8-bit 6502. That makes it (a) cheap (b) low power (c) easily testable and (d) easily integrated.
While I am not familiar with the processors you guys are discussing, I do find this article - which, IMHO is otherwise very good - lacking in that it failed to discuss die size and unit cost to manufacture.
A discussion on the other relative technical merits (instruction set, heat/efficiency, etc) is all well and good, but at the end of the day I think it could be argued that ARM's success over the last ~20 years was primarily driven by the fact that a licensee could pump ARM chips out at a fraction of the cost of a chip from Intel. The ability to customize, the efficiency - those are crucial factors as well, no doubt... but without that low marginal cost, they wouldn't have had the same success in the embedded market, and possibly wouldn't have survived to see the rise of smartphones, tablets, and what appears to be a round 2 vs. Wintel brewing.
wrong cpu to compare with
The idea is to compare the transistor count of the Z80 or 68000; the 6502 was 'RISCy' in itself...
Bit confusing to make the claim that 'risc takes more instructions' by talking about loading and storing, when ARM has far more registers so doesn't need to be moving in and out of memory all the time.
In fact, looking at most code, ARM is often shorter than x86.
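The register argument above can be made concrete with a toy cost model. This is a hypothetical Python sketch, not real instruction counting; the per-iteration numbers are illustrative assumptions only:

```python
# Toy model of the register-pressure argument: executing
# "acc += data[i]" for n iterations.  Numbers are illustrative only.
def mem_accesses(n, locals_fit_in_registers):
    if locals_fit_in_registers:
        # ARM-style: acc, i and the base pointer all live in registers,
        # so each iteration touches memory once, for data[i] itself.
        per_iteration = 1
    else:
        # Register-starved machine: acc must also be reloaded from and
        # spilled back to memory every iteration.
        per_iteration = 3
    return n * per_iteration

assert mem_accesses(100, True) == 100
assert mem_accesses(100, False) == 300
```

The point being that a large register file cuts memory traffic, regardless of how many instructions each individual operation takes.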
Both MIPS and SuperH, the two architecture rivals mentioned, are also RISC machines. SuperHs were only made by Hitachi (later Renesas), but MIPS designs were licensed to all and sundry. They predate ARM in desktops too. Why is ARM prevailing over MIPS? MIPS had hardware floating point early on and a 64-bit core available two decades ago. (One could argue they are actually nicer to code assembly on than ARM, but that's bordering on heresy.) I think it must ultimately boil down to licensing costs, or else ARM have just been really lucky.
MIPS got big by being the CPU of choice within Silicon Graphics, once a name to reckon with. A vast portion of the game console and PC video hardware that appeared in the 90s had a history in what SGI did in the 80s. Consequently this meant a lot of business for MIPS.
After SGI stopped being so influential and stopped driving innovation there was less interest in using MIPS processors in new designs. Competitors were pushing forward hard and taking away business. Especially in the emerging market for mobile devices. MIPS hadn't much focus on the needs of mobile compared to ARM and that cost them dearly in the long term. There were any number of MIPS Windows CE PDAs and such but by the turn of the century it was plain that ARM was the company to beat for mobile designs. MIPS didn't have as good a handle on power issues and its advanced features didn't matter much for mobile. In the non-x86 workstation market IBM and Motorola were advancing the PowerPC platform strongly.
After the Sony Playstation 2 it's hard to recall a major design win for MIPS. I suppose it may have come down to a lack of good leadership to seek out and develop the next market.
"That makes [RISC processors] (a) cheap (b) low power (c) easily testable and (d) easily integrated."
Also, the simpler instruction set lends itself well to architecture-level optimizations (e.g. pipelines). That's why Intel went to the trouble of making their chips RISC-like on the inside even though they still show a CISC facade to the world outside.
Back in the early 90's the ARM wasn't the only RISC micro available. What ARM did right was to recognize this and offer their solution with a low licensing cost and quality support, making it better and cheaper and lower risk to go with their licensed design than any other solution.
"but then require complex combinations of mouse buttons and dragging just to save a file. Its menu-driven, drag-n-drop-based user interface was alien to anyone used to Microsoft's Windows."
There speaks somebody who is familiar with Windows and doesn't really know RISC OS that well, else you would know...
* Saving is a different operation, yes, but the RISC OS API has its own good points such as two-dimensional scrolling at the same time, the ability to give input focus to a window that isn't topmost (that pop to top behaviour is annoying).
* Wanna compare boot speeds?
* Full proper anti-aliasing on-screen in the late '80s, none of this CoolType stuff.
* The Windows contemporary with RISC OS in the beginning was version 3.something which was all sorts of horrid. My eyes hurt looking at it, and I frequently found dropping to DOS quicker than the Windows klutzy API.
* Check your dates. RISC OS, 1987. Windows 3.0, May 1990. Before Windows took on ground, kids were being taught stuff like WordPerfect 5.1. FFS, my Acorn had a fully WYSIWYG DTP package and multitasking GUI. No comparison, really.
Shouldn't we be teaching how to use computers rather than how to use specific ones? The company I work for recently changed to Ubuntu and it was a headache for those "programmed" to use MS Office...
Only if they can get it recompiled as a <not Metro> app and approved for the store. AFAIK there is no side-loading of apps allowed for RT (like iOS).
That said, I am curious what sort of performance hit software emulation for x86 would cause. Somewhat related: I heard a rumor that one of the server ARM ventures was getting hardware based x86 emulation. Interesting times we live in : )
The ARM chips in the first generation of Windows RT devices, mainly Tegra 3 chips, are pretty pokey in sheer performance compared to current low-end x86 CPUs. I'd expect any major chunk of Windows x86 software running under emulation on a Tegra 3 to be painfully slow. It would complicate things a great deal without delivering much of use.
Microsoft has some experience with x86 emulation. You may recall they acquired Connectix, who were once the leader in PC emulation on 680x0 and PowerPC Macs, for their expertise in that area and some related needs. Some of those personnel wrote the software for letting the PowerPC Xbox 360 run a big portion of the original x86 Xbox's game library.
So, I'm pretty sure they gave serious consideration and found it just wasn't going to be workable. The other big problem is one of the big reasons they introduced the new UI: battery life. It's the same reason Apple doesn't give iOS the OS X desktop. That kind of windowing, multi-tasking environment is a big power draw. Running Windows desktop apps under emulation, even if it was usably fast, would really kill the battery life on a Windows RT tablet.
"QEmu runs on ARM and can emulate x86 and x86_64 amongst others. Won't this be available as a package for Windows RT in pretty short order once the devices start to appear?"
You can bet Redmond factored that into their OS design.
Making sure that it will be either a) impossible, or b) *just* unreliable enough to discourage people (which, when someone starts digging, will turn out to be down to some API calls having been "mysteriously" tweaked for no apparent reason).
In my estimation, it was not the unfamiliarity of the OS that led to the failure of Acorn. It was a mixture of price (the cheapest RISC OS computer was much more expensive than the cheapest Wintel PC) and lack of software. While there were excellent applications for most tasks, there were applications on Windows and MacOS that had no decent equivalent on RISC OS -- the market would simply be too small. This led to a downwards spiral where developers would move from RISC OS to Windows to get more customers and this would reduce the uptake of RISC OS computers and so on.
In retrospect, Acorn should have done as Microsoft did: licensed their OS so anyone could have made hardware using it. This would have given people who felt that Acorn's prices were too high a cheaper entry and, possibly, kept the market large enough to attract software developers. Apple was able to survive on a non-licensing policy and high prices, but that was because they were dominant in a market that was willing and able to pay a premium: graphic design and publishing. Acorn's main market was education, a market that is neither willing nor able to pay a premium. Their secondary market (hobbyists) has a segment that will and can pay a premium, but this was not enough to keep Acorn alive. And after the death of Acorn, development of RISC OS was much too slow, and was hampered by ARM's shift in focus from performance to low power, which meant that ARM-based computers could no longer compete with x86 ditto.
As for ARM vs x86 instruction sets, ARM is in the CISCy end of RISC and x86 in the RISCy end of CISC, so you should not use them as defining instances of these terms. Rather, you should compare the instruction sets on their own merits. IMO, ARM assembly language is much easier to program than x86 ditto and also a lot easier to compile to. x86 code is slightly more compact, but with the Thumb extension ARM got the advantage here again. ARM suffered for a long time in not having a unified floating-point instruction set across all processors but, again, that problem is solved. Now the main failing of ARM is the lack of a true 64-bit processor, but even that is coming soon. And in any case, the advantage of 64 bits is mainly that a single program can easily use more than 4GB of RAM, but that is (as yet) not a problem for personal users. Saying it never will be is, however, as silly as saying that 640K is enough for everyone.
Acorn had a captive deal with the BBC and wanted to retain that niche status.
IOW they wanted to be Apple.
They were not. Had they accepted their cash flow was *temporary* and leveraged it to move into the mass market they *could* have been a contender.
Where can I locate a Bill Gates shaped punching bag? There *has* to be a market for that.
Risc OS 2.0 was blinding fast and pretty much all assembler.
Risc OS 3.0 started to get patched and less efficient.
Cannot imagine having a common source base will lead to optimum performance.
8Mhz ARM 2 / 1MB RAM. (A3000).
AFAIK whether something is RISC or CISC is only to do with the instruction set. (Makes no difference what happens internally).
WinRT is just a cut-down version of Windows which somehow manages to be even less useful than regular Windows.
The start of the real revolution is devices like the Raspberry Pi, which are more or less ARM PCs. You have your operating system, which can be Linux or *BSD or whatever, on a removable storage medium; you can connect a monitor, a keyboard and a mouse and there you go, the full power of a workstation, but running on a tiny little ARM. It's not an appliance, but a full-blown workstation capable of doing any kind of data processing imaginable.
This, I suspect, is the reason for confining Win RT to Metro and the online shop, and not providing x86 emulation (remember that most Windows desktop programs except games just wait for events).
If you could run Win RT in desktop mode, have as much freedom to load programs as you do with Win x86, and it could emulate the existing software base, Intel would immediately go down like the Titanic.
Well there are other points for Intel, but yes they'd take a huge hit.
If I was Microsoft I'd find a way to run legacy win32 applications on small screens, by re-compositing the gui. Since the GUI on Windows is managed by the operating system, this should be possible.
Translating those x86 op codes into something RISC-like costs Intel an awful lot of transistors. That's why they struggle to compete on Watts with ARM. Sure, Intel compilers can second guess that translation and line up opcodes in a convenient order, but those transistors still have to be there getting hot.
Whereas with ARM there is no opcode translation to do at all!
"Whereas with ARM there is no opcode translation to do at all!"
No, but there is still a decoding stage, so there's still something there to "get hot". As ARM CPUs have a fairly orthogonal instruction set, though, this stage is vastly less complex than the decode stage on CISC CPUs. This also allows, for example, reserving a few bits to encode for conditional execution and a few more for whether and how to rotate one of the instruction operands. These features are available with most, if not all instructions and effectively come "for free" from the programmer's perspective.
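A toy Python sketch of that condition field, assuming the standard classic-ARM encoding (the EQ/NE/GE/LT/AL values below are the real ones; the rest of the model is simplified for illustration):

```python
# Toy decode of the classic ARM condition field (bits 31..28 of every
# instruction).  The five condition encodings below are real ARM ones;
# everything else here is simplified for illustration.
def passes(cond, n=False, z=False, c=False, v=False):
    return {
        0b0000: z,            # EQ: equal (Z set)
        0b0001: not z,        # NE: not equal (Z clear)
        0b1010: n == v,       # GE: signed greater than or equal
        0b1011: n != v,       # LT: signed less than
        0b1110: True,         # AL: always (the usual case)
    }[cond]

# 0x1A000005 is a real BNE encoding: the top nibble 0b0001 selects NE.
word = 0x1A000005
cond = (word >> 28) & 0xF
assert passes(cond, z=False) is True    # Z clear: the branch executes
assert passes(cond, z=True) is False    # Z set: squashed to a no-op
```

Because every instruction carries this field, short if/else bodies can be predicated instead of branched around, which is exactly the "for free" feature described above.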
As you might have guessed, I quite like the ARM architecture. It's one of the nicest CPUs I've coded for, though 68000 is really nice too, and I've also got a soft spot for the PS3's Cell architecture. All of these are a complete joy to write for compared to the abomination that is the x86 architecture!
Not really an issue in this day and age.
Long ago, there was a lot of conjecture about what was going to bring down the curtain on x86. Intel itself thought this was looming, which is why they had products like the 80860 and Itanium. BYTE magazine first discussed the 80860 under the headline "Cray on a chip", and they were used as accelerator boards to run custom code in a PC ISA card slot. Ultimately, their biggest use was as controllers for laser printers.
One thing they thought was going to kill x86 was the cost in transistors to decode the x86 instruction set. The 486 design had a good chunk given over to this. But this failed to consider Moore's Law, which was much more obscure back then and process nodes weren't discussed much. But not only did Moore's Law continue, it accelerated. The pace of new, smaller process nodes picked up and it wasn't long before the decode stage for an x86 processor was a trivial bit of real estate on the chip.
The same thing happened with video chips. When DVD was the next big thing, the big concern was how much a dedicated decoder for MPEG-2 cost. CPUs couldn't do decent playback unaided, but adding parts of a decoder to offload work from the CPU was going to make for pricey video chips. This ended up only being a genuine problem for about 18 months. Again, the transistor real estate involved became a trivial item as chip transistor counts soared with much smaller and cheaper transistors.
Imagine having a quad processor workstation in the Pentium II era. The sheer physical size of the processors in their packaging would have been an issue all by itself. Now we have quad-core CPUs in phones.
"So, for example, a CISC processor will have a single instruction that swaps the contents of a specific register with a 32-bit value stored in a specific memory location."
BAD example ... ARM added that back in the ARM3 (the ARMv2a architecture, which added caching to the ARMv2) as SWP, since atomically exchanging a register and memory location's contents is frequently needed for multi-processor applications.
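Why an atomic exchange is enough for multi-processor work can be sketched in a few lines: it suffices to build a spinlock. This is a hypothetical Python stand-in for the hardware operation, assuming only what's stated above (an indivisible register/memory exchange):

```python
# Sketch of why SWP earns its keep: an atomic swap is enough to build
# a spinlock.  On real silicon the load and store below happen as one
# indivisible bus operation; Python here is illustration only.
LOCKED, FREE = 1, 0

def swp(mem, addr, new_value):
    old = mem[addr]           # atomic in hardware, not in Python
    mem[addr] = new_value
    return old

def try_lock(mem, addr):
    # Swap LOCKED in unconditionally; if FREE comes back, we own it.
    return swp(mem, addr, LOCKED) == FREE

mem = {0x100: FREE}           # 0x100: hypothetical lock word address
assert try_lock(mem, 0x100) is True     # first caller acquires
assert try_lock(mem, 0x100) is False    # second caller must spin
```

Without an atomic exchange, two CPUs could both read FREE before either writes LOCKED, and both would believe they hold the lock.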
It's a shame - and odd - that Microsoft has chosen to put a sabotaged subset of Windows on ARM for the moment: I do wish they'd gone for parity, putting a proper Windows on there, allowing regular software to be ported normally instead of insisting on Metro^WWTF apps only. After all, even a basic "tablet" now has more power in every respect than anything out when Windows XP launched!
I do hope I can get a blank ARM system. WinRT looks like a real steaming pile, I do not want Windows, Windows 8 looks even worse and Windows RT looks yet worse than that. An ARM computer, on the other hand, would be lovely.
I actually shoehorned a Debian install onto my Droid 2 Global (which is now out of commission with a cracked screen). Obviously, the screen was too small, but I ran some apps remotely to see how they'd run, and even OpenOffice was quite snappy; not only was it snappy but the CPU usage was single-digit percent even whipping through menus and such to try to drive the load up (and 0 otherwise; blinking the cursor in a word processor shouldn't use noticeable CPU time, and it doesn't). The newer ARM designs are even faster per MHz, and have dual and quad cores available to accommodate additional workload.
For that matter, I've read that ARM virtualization has even been sorted out, so you'll be able to have a VirtualBox-like or VMWare-like setup that runs ARM VMs at near-native speed like is available on x86. (Of course, x86 VMs would have to be run under CPU emulation.)
Just saying, the ARM doesn't require some stripped and crippled environment like Microsoft feels like providing; it can run a FULL desktop, and I think this is what a lot of people would prefer. If I want a stripped environment I can get an Android tablet.
I think Microsoft is playing a dangerous game here, trying to market a stripped and crippled tablet OS as a portable PC replacement. And trying to claim equivalence to the desktop by ALSO tacking on a stripped and crippled desktop environment? Madness. Perhaps when people decide these are pants, I'll pick up a used one (after making sure the OS can be wiped and replaced.)
"I do hope I can get a blank ARM system"
Absolutely zero chance. Precisely because of the licensed/customized design of ARM chips, so unlike with x86 no two ARM devices are necessarily the same and each requires a custom OS (which is why Windows RT is limited to a subset of ARM devices). It's not all win-win as the author suggests. Nearest you'll get is something like Raspberry Pi.
Also, for the record, Windows RT does have a desktop and you can fiddle in Explorer et al to your hearts content. You won't be able to run third party desktop applications, but that's not (quite) the same thing as no desktop at all.
In fact with RT Microsoft seems to be trying to create the same kind of tightly controlled environment for ARM-based tablets as iOS, where everything will come either from Microsoft or via their "Store" and nowhere else... It certainly seems that independently built desktop apps won't be allowed, even from the Store.
Why is that ?
I hope it is a dismal failure. I certainly won't buy one that's for sure.
Pretty soon it seems we may have to rely on Linux for the freedom to run exactly what we want on OUR hardware. Thank goodness it is there for us.
It's a pity the original Linux licence didn't prevent it being morphed into repressive, highly controlled touch-screen operating systems.
Both Amiga and Commodore boxes had *substantial* ASICs to off-load tasks like video, memory management and sound (as did the Archimedes). Anyone remember what *standard* sound support in a PC was in the late 80s? Not much in the way of "off loading" going on there. Even the Mac had a fair bit of ASIC support, *despite* the way Apple liked to talk about it as "a processor and a bitmap".
I don't think Motorola *ever* supplied an MMU to support the baseline M68000.
ARM's designers were able to check out most of the high-end processors of the day by hanging boards off the "Tube" interface in the BBC (2nd processor bus). Their conclusion: the money being charged just did not give you the kind of performance boost they *expected* for the clock rate, and the 16-bit 6502 they wanted was *years* late. They'd some experience of VLSI's design tools and reckoned they were up to the challenge.
The CISC/RISC example in the article is *very* poor. Historically RISC has been *strong* on *internal* data movement between registers, limiting I/O to a few *specific* instructions.
CISC normally used the idea of "microcoding", where a "short" instruction is actually the start *address* of a short program (in on-chip memory) in a simpler processor whose instruction set width is *much* wider. The Alto workstation's main memory (which has a nice report available about it) was 16 bits, but its *microcode* was 32, and partly writeable (if you were, err, "bold" enough to do so). The Transputer *deliberately* split instructions into bytes so (in principle) a 32-bit Transputer got a 4-instruction look-ahead buffer for free.
The Z80 was also a microcoded design.
A RISC goal was *direct* interpretation: instruction set bits *directly* routing hardware within the CPU. This was difficult (BTW the 6502 did this and it was laid out by *hand*. The delay to the 16-bit version in the Apple IIGS demonstrated what a monumental PITA this is without software tools). IIRC most went with some direct routing and other functions being controlled by signals derived from the bits in the instruction set being fed into a Programmable Logic Array. VLSI supplied tools to help take logic equations and do the layout automatically.
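The microcoded-vs-direct contrast described above can be sketched in Python. All opcodes, micro-op names and bit fields here are made up purely for illustration:

```python
# Toy contrast of the two control styles.  Everything here is invented
# for illustration; no real instruction set is being modelled.

# CISC-style microcode: the opcode is really an index into an on-chip
# control store holding a short micro-program.
CONTROL_STORE = {
    0x01: ("read_mem", "alu_add", "write_mem"),   # hypothetical ADD-to-memory
    0x02: ("read_mem", "alu_sub", "write_mem"),   # hypothetical SUB-from-memory
}

def microcoded_step(opcode):
    # The micro-sequencer walks the stored micro-program step by step.
    return list(CONTROL_STORE[opcode])

# RISC-style direct interpretation: fields of the instruction word
# route the hardware directly, with no stored program to look up.
def direct_step(word):
    alu_op = (word >> 4) & 0xF    # these bits feed the ALU control lines
    dest   = word & 0xF           # these bits select the register port
    return alu_op, dest

assert microcoded_step(0x01) == ["read_mem", "alu_add", "write_mem"]
assert direct_step(0x35) == (3, 5)
```

The microcoded path trades an extra level of indirection (and an on-chip ROM) for a compact external instruction; the direct path spends instruction bits instead, which is why RISC encodings tend to be wide and fixed-length.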
Another goal was an architecture that compiler *writers* could get the best out of. So developers would want to migrate *onto* the architecture, and could do so easily. IIRC *all* the core MSDOS programs that were successes were assembler-coded, things like Lotus 123 (remember that?). They thought developer costs were going to rise, so better to write clean code in an HLL and have the compiler do the heavy lifting, the mad fools.
*All* processor architectures *evolve* over time. Original RISCs have sprouted FPU support, special data type support etc. Intel has *internally* been re-architectured to run *common* instructions *much* faster, making optimisation rules of thumb obsolete on new versions (yes, in some cases your code runs *slower*). At its core is still the random logic replacement device developed for traffic light control.
The hits you take reading and writing to/from the outside world are the reason *all* RISCs have big register sets (SPARC anyone?).
RISC CPUs stressed "orthogonality", where *all* instructions handle *all* data types (typically 8, 16 and 32 bits; possibly 4 if BCD is accepted as well). No tricky op codes to benefit just 1 type that *might* be used sometime (Decimal Adjust anyone?). Likewise *one* instruction format, IE *all* 1 word long, not a mix of 8, 16, 24 or 32 (or more) at random.
Mfgs bang on about how many more transistors they can stuff on a chip, but if you've doubled the transistor count without *halving* the power level (or *better*), your power consumption is only going one way.
IIRC Dick Pountain's article on the ARM 1 (can't be a**sed to dig it out) said it was something like 2 micrometres and 25k transistors and about the size of the die for a 6502 (as reported up the thread).
It shouldn't replace the current Windows save dialog, but there's plenty of room for a draggable icon somewhere in the window. On the occasion that an operation needs you to open files one at a time in folder C:\stuff\like\this\X then make a simple change and save them in C:\stuff\like\that\Z, the ability to just drag and drop the icon to an explorer window instead of having to select the folder every single time would save endless time. Even more if you needed to save into two different folders depending on file contents.
In OS X, you can sort of do that by right clicking/two-finger tapping on the file's icon at the top of the window and dropping into the finder. Drag the icon in the finder where you want it to go et voila. No nonsense about the file currently being in use. You can even rename open files this way, without using Save As (which is good because the duplicate thing is handy, but why not have both Save As and Duplicate?????. Sorry rant over).
I went from a BBC B to a BBC Master to an Archimedes 310 to a RISC PC
Eventually I had a x86 PC card in my RISC PC so that I had Windows compatibility too. It was interesting to compare the two processor cards at that time. The x86 bulky with a fan and heatsink, ARM all slim and sleek with just the bare silicon. It was obvious which was the better design.
Nowadays my Risc PC lives on in the form of the redsquirrel emulator.