Linux kernel 6.11 lands with vintage TV support
io_uring is getting more capable, and PREEMPT_RT is going mainstream

A patch to add a new display driver for Linux is being reviewed. What's unusual is that it's for a machine released 30 years ago. In the words of developer Geert Uytterhoeven: "This RFC patch series adds a DRM driver for the good old Atari ST/TT/Falcon hardware." Uytterhoeven is the maintainer of Linux/m68k, the Motorola …
It's lovely to see the Atari ST family getting a bit of love.
There seems to be quite a lot written about the Amiga, but the ST seems to have been forgotten by most people. I had a couple of Atari 1040STs, one of which got fried by a current surge during a thunderstorm. I loved the capabilities of the machine and invested in a SCSI hard drive with a huge 20MB of storage, and an SC1224 monitor. It worked well and really sped up the machine, making it useful for things other than games.
Mine's still in the loft somewhere, but I don't suppose it will work after being exposed to extreme levels of heat and cold.
I loved the ST and having to jump to Windows 3.11 was a real shock.
Joking aside, I originally had an Atari 800XL (updated version of the Atari 800) and counted myself as an Atari fan back then!
I did get an Atari ST at one point, but ended up selling it to buy the Amiga. If anything, while I might have considered myself an "Atari" fan, I'd never really have said I became a Commodore fan (which to me was more associated with the C64), just a fan of the Amiga.
The Atari 800 and the Atari ST were really products of two different companies anyway, though. The Atari 800 was developed by the original Atari Inc. and was state of the art, with many custom chips, but also very expensive when first released.
The ST was developed by Tramiel's Atari Corp. under his distinctly different "Power Without The Price" philosophy (after he'd bought out Atari Inc's computer business and existing models) after he'd got rid of most of the original Atari engineers and was far more off-the-shelf, but also affordable when new.
So, in hindsight (more than I realised at the time), being an "Atari" fan on the basis of one wasn't really the same as the other.
Ironically, the Amiga was far more the spiritual heir to the original Atari 800. It was originally independently designed (before Commodore bought it out) by a company sharing many of the same designers and with the same state-of-the-art, custom chip approach, and even had similarities in the graphics architecture, which had in turn been inherited - and advanced - by the 800 from the original Atari VCS.
The Amiga was also (like the Atari 800) very expensive when new, which is why the more affordable ST dominated early on, until the Amiga 500 came out later, and rapidly eclipsed the ST after it fell to £400 and its superior graphics and sound became affordable.
Atari 400 here. (Star Raiders anyone?)
With musicians at the time the Amiga was much in favour - which I've never understood. The Atari's MIDI ports alone had the Amiga pipped IMO [edit: and ST had an awesome sound chip]. I know studios used the ST for the MIDI, but for those playing at home at the time I expect the music/synth software was the winner for the Amiga.
> With musicians at the time the Amiga was much in favour
I've never known that to be the case. The Atari ST *was* always more popular than the Amiga for MIDI-based sound production, and not just with studios.
Personally, I *did* use my Amiga for MIDI, but that's because I'd already ditched my ST and wasn't remotely a serious "musician".
The ST- and its ecosystem- always had way better support for MIDI production and was always clearly the "real" musicians' choice.
The ST remained popular in that niche long after its mainstream popularity had declined. (Why- as the article mentions- do you think that C-Lab was manufacturing the ST's successor- the Falcon- under license as a specialised music-oriented computer as late as 1996?)
> edit: and ST had an awesome sound chip
Hard disagree on this one too, I'm afraid!
Despite its popularity in the music field, the original ST's built-in 3-channel squarewave sound chip was definitely *not* "awesome"...! Quite the opposite, it was generally considered a weak point.
It was a derivative of the AY-3-8910 found in countless 8-bit machines like the Oric-1 and Amstrad CPC. (*) Whereas the Amiga *did* have fantastic sound for the time- four-channel hardware support for 8-bit (256-level) sampled audio.
Compare the "Lotus II" music on a regular ST to the same music on the Amiga... this speaks for itself(!)
So why was it the ST that became popular for music production if the Amiga had a "better" spec?
Most likely- in part- because the original Amiga 1000 was *way* more expensive. (The ST line started with the lower-end 520ST, the Amiga started with the c. $2000 Amiga 1000).
And also because- I'd guess- no-one was using internal sound for music production anyway. That was the whole point of MIDI after all, to control your nicer-sounding synth.
The Amiga was a better machine in general, but not in areas that would have been relevant to MIDI production. And its built-in sample playback- amazing as it was by mid-80s computer standards- was still only 8-bit and probably not *quite* good enough for use in "real" music production (which was moving towards 16-bit resolution by then).
The ST had built-in MIDI, built-in disk drives, a good enough 16-bit processor able to drive a usable GUI and would have been a convenient all-in-one semi-portable unit.
By the time the Amiga was within range, the ST was the de facto standard for MIDI regardless.
(*) The Atari STe did have hardware support for sample playback, but that came much later, was hobbled by limited fixed playback rates and was ignored due to stupid, short-sighted marketing by Atari.
The ST was made to a budget, and its sound and graphics were on a par with previous 8-bit machines (the ST had a high-resolution monochrome option, but that locked out all the other modes and the monitor was expensive).
What it had going for it was the 68000 and GEM (which for the time was pretty cutting edge). And it was cheap - hence it was an ST not an Amiga under the tree that year
The STE and later Falcon fixed a lot of the issues by adding more colours, a blitter and better sound... but it was too late, because by the time they appeared they were competing directly with the now lower-priced Amiga.
If the STe had replaced the original when it came out and- importantly- at the same price (c. £299 for the 520STFM), it would have provided a more compelling defence against the Amiga.
But they got greedy and charged more, around the time the Amiga 500 fell to the same price, so why bother? And anyone who still cared about the £100 saving would have had to settle for the STFM, entrenching that as the base model and (vicious circle) reinforcing the lack of support for the STe and of any reason to buy one.
Not that the extra palette was much use because you still only had 16 colours on screen at once, which was always the limiting factor on the original ST.
Falcon looked nice, but- as I said- way too late and by that time even the Amiga was being nudged off its throne and the ST line was already a has-been.
For some reason I bought a Falcon, thus joining an *extremely* select group of owners with comfortably my worst computer purchase ever.
Atari shipped it (or sent as a free add-on shortly after launch, can't quite remember) with a thing called 'MultiTOS', which was a multitasking version of GEM running over the preemptive MiNT kernel. MiNT had always been pre-emptive and provided an efficient, very Unix-like command line environment, and on the 68030 gained memory protection (I think?). However the MultiTOS desktop was abysmally, almost unusably, slow. There were much better desktops available though.
Fairly quickly I started playing with Linux - early days with none of the niceties of actual distributions etc.. It was the first port of Linux to a non-x86 architecture if I remember correctly. With very little disk space available after partitioning, I ended up using the UMSDOS Linux filesystem instead - this layered longer filenames, permissions etc. over a FAT filesystem to make it almost feel like ext2, so I could boot Linux off a folder of my main TOS disk drive. It eventually worked passably and I could even run X Windows with a very light window manager.
However, the UMSDOS kernel filesystem code assumed little endianness, so didn't work initially. Developing a patch was ... painful. Rebuilding the kernel took over a day. No debugging. Comfortably the most annoying coding I've ever done.
Moving to a 486 and Slackware a couple of years later was a breath of fresh air - everything felt like it happened almost instantly. Probably the zenith of responsiveness of any computer I've ever owned. Since then the rapid growth of compute power has been more than offset by the massive growth of crud.
Greetings fellow Falcon owner! I got mine from Atari, for a development project that ultimately came to nothing - much like the Falcon market in that respect, sadly...
The CPU in the Falcon was a 68LC030, so it had a couple of limitations versus the "full fat" 68030 CPU. First, the data bus was 16 bits wide (no, you wouldn't have heard that at the time - it was a surprise to me when I found out too!), and second, I believe there was no MMU included.
(I could be wrong about the MMU - the EC030 used by Apple in cheaper Macs didn't have one, but I don't know about the LC030, and at this remove it's hard to find a datasheet quickly)
640x200 at 2bpp and 320x200 at 4bpp are both a step up from the 8-bit machines; fairly predictably for machines sold on the size of their data bus, they're approximately double the bandwidth of anything on an 8-bit micro at the time.
E.g. the C64’s 320px mode is attribute based, much like a ZX Spectrum, and the CPC’s is 2bpp.
The MSX 2 and Master System both muddy the water… but both postdate the ST.
The Sam Coupé had 256x192 with 4bpp, but it ran too slow to do anything useful with it even though the Z80B ran at 6MHz. The best it could manage for games which required scrolling was an MSX-like 256x192 with 8x1 colour attributes.
8-bit computing didn't go out with a bang but a whimper.
There is no doubt the ST was the standard, mostly because of Cubase and the midi ports on board. Combine with an Akai sampler rack or two, maybe a drum ROMpler and a couple of synths. Good enough to reach the charts. Many bands with that setup, did.
On the Amiga you could get an external (serial? parallel?) MIDI adapter for little expense. MED and Soundtracker were both fun and very accessible, but hardly "pro". Driving external MIDI devices from a tracker is also a bit naff. There were apps like Bars and Pipes Pro, a bit more suited to the Cubase role, but they were expensive and not particularly nice to use. Certainly not a patch on Steinberg's offerings.
Music careers have been forged on both formats of course; Calvin Harris's first album (love it or hate it) was composed entirely on an Amiga, even post-2000.
Me, well, I'm just a rank amateur that's played around with this hardware for nigh on three decades; but I would not dare expose anyone else to the ear bleeding awful racket I tend to do... Though there is that niche of the world that likes that sort of thing.
"The Amiga was a better machine in general, but not in areas that would have been relevant to MIDI production."
I've read that, for technical reasons I don't remember, MIDI timing on the Amiga was a PITA too. It was never intended to do that, and it was more of a third-party afterthought, while Atari built it in from the start. So yes, for home use without extra synths etc, the Amiga was the king of the hill for sound back then. Just not in the studio. Of course, the Amiga was aimed much more at the graphics end of the market and lived on for years after its demise thanks to its easy integration with the broadcast video standards of the time.
Oh, yeah, that "timing" issue rings a bell too, yes. Was it due to the Amiga's OS or hardware? Possibly the multitasking, if the former? (The ST didn't have multitasking- a few kludges aside- so that would have kept it simpler for them).
IIRC, MIDI on the Amiga was via a cheap adaptor- at least that's what *I* used- that connected via one of the ports. Couldn't remember whether it was parallel or serial, a quick check suggests the latter, which would make sense for MIDI. No idea if that was the issue?
But yeah, the ST was always *the* machine for MIDI, not the Amiga.
The ST was popular with professional/semi-pro musicians who used it for its midi capabilities. The Amiga was popular with the demo scene, coders and others who wanted to just use the computer to make music.
I recall going to a music studio that had a very expensive mac setup for sampling, but they still had a 1040ST for midi. They claimed its timing was more accurate than the mac.
Another former Atari 800XL owner here:
Loved the machine, had a pair of 5.25" floppy drives (one needed 5 minutes to warm up before working!) and a 256kB RAM expansion pack, hooked a Brother thermal printer up and had lots of fun.
Sold the lot for a good price just before the market in them crashed to get my first PC (IBM model 30).
Would have loved to have had an ST but could never quite afford it at the time.
Yeah, I loved my STs too, especially the Mega 2 (£800 at a trade show!). At our local Atari club, playing MIDI Maze! Ah, those days!
GFA Basic was really good, and so was 68000 assembler. Getting to college on a PDP 11/73 and finding it was much the same assembler kicked COBOL into touch pretty quickly!
"...Someday we'll be programming computers that use a strict FLAT MEMORY MODEL, which is a fancy way of saying we'll be able to throw that segment stuff out the door and get back to the basics of giving every memory location a sequential address starting at 0 and continuing until the memory runs out."
Quote from a book on C programming written in 1993, the year of the first 80586-cum-Pentium (you know, so that Intel could trademark that name, to prevent anyone else from selling a clone of an Intel 80586; the poor gits would have to be content selling '586s, '686s... which simply had to be much less powerful because they were not called "Pentium").
Not just you, but a large contingent of IBM engineers who were involved--at Boca Raton--in the design of the IBM PC. Their main argument?--the 68000, with its memory-mapped architecture is consistent with traditional computer design--as regards both hardware and software design (it is a "real computer"); the 8086, with its register-based design, is more like a hardware kluge--designed by people who really didn't understand computer design--and would, they correctly predicted--be much harder to program and interface to.
The bean-counters won--the 68000 was only available (within IBM's time-frame) in a 64-pin package, and the 8086 was further 'kluged' and crippled by Intel by cutting its external data bus to 8 bits (along with the requisite labor-creating workarounds) so the resulting "8088" would fit in a much cheaper 40-pin package. The bean counters declared a "no contest", and we've all been paying the price ever since.
There was also the "advantage" that it could run CP/M and the huge library of software already out there. At least that was the plan. ISTR upgrading a Tandy 1000 with an NEC V20 - an 8088 clone - that did a better job of running old CP/M stuff than the native 8088/86 could (and actually ran about 10% faster at the same clock speed too, so win/win at the time!)
While typing that, tiny nuggets of memory are coming back. IBM PCs could run CP/M-86, but with the V20 CPU you could also run 8-bit CP/M stuff, as the CPU emulated an 8080 internally too. Or something like that.
The prediction that it would be harder to program and interface to turned out to be false, and the 8086 also had the advantage of an easier assembly language and structure.
Where it fell down was in not being like the processors C and Unix were designed for, which turned out to be a critical point when CS schools moved to cheap multi-user Unix minicomputers to teach OS design.
I've spent most of my career interfacing hardware to cheap 8-bit microprocessors, starting with the Z80, but I've done some 16-bit stuff as well, and of course now it's 32-bit ARM. Out of all of it, the 8086 family was the fastest to develop in and the easiest to debug. The tiny register set of the 8086 was just one of the advantages.
People trying to port compilers from a different architecture felt differently: it's easier to build compilers and optimisers for a regular, less register-starved design. In my world, there were far more people doing programming and interfacing than building compilers, and the 8086 had the advantage over 68000 designs.
A very strong case of confirmation bias exists in this comment; that is acceptable as long as it is recognized and acknowledged.
It is a fact that--upon their introduction--Assembly-Language programming of the I8086/8088 and the MC680x0 was taught by almost every institution which teaches computer engineering and/or computer science.
When the Intel devices matured to the 80286, one could no longer find--nor were they being written--any textbooks on 80x86 Assembly-Language programming, nor any courses on the subject (and, of course, IBM's ROM-based debugger, dbug, fell by the wayside; it simply was no longer usable).
The complexities associated with the upgrading an inherently inefficient design were staggering, and being made painfully obvious.
No similar phenomenon occurred regarding the MC680x0.
------------------------------------------
One does not, of course, have to use Assembly Language when programming a machine.
The situation, which is a direct reflection on the quality of the original design, is that one absolutely cannot use Assembly Language on any of the 8086's successors.
“By understanding a machine-oriented language [i.e. Assembly Language], the programmer will tend to use a much more efficient method; it is much closer to reality.”--Donald Knuth
“People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like [provided, inherently, by the use of Assembly Language]. Otherwise the programs they write will be pretty weird.”--Donald Knuth
>In my world, there were far more people doing programming and interfacing than building compilers
That is always the case, those writing compilers and other essential development tools always will be a small minority.
The IBM-PC/8086 was a bugger to write a decent C-compiler's code generator for...
From memory, many early C compilers for the IBM-PC/8086 just restricted themselves to generating small-memory-model code (code in a single segment, data in a single segment). The better ones also supported some other models, such as small code/large data.
Can't comment on other CPUs, but reputedly the Motorola 680x0 and the National Semiconductor Series 32000 architectures were more high-level-language and compiler friendly.
This is as close as I've come to confirmation of the informed speculation (of the day) that, because the 680X0 was a microprogrammed design--as opposed to the random-logic design of the 8086--IBM was in talks with Motorola to create an IBM 360/370-on-a-chip, based on the 680X0.
Makes eminent sense with a 4 GB flat address space, and (the 68020's) 32-bit ALU and data bus. From the 68020 onward, it really is a 32-bit machine.
Contrast this with Intel's bastardized segment:offset method of memory addressing which was--and IS--so convoluted that, by the time the 80286 arrived, Intel had to resort to its memory addressing being relegated to emulation in software.
Seems as though IBM should have listened to its engineers, and not to its bean-counters.
This insistence that you do not want a segmented model flies in the face of almost all modern processors.
This is because a segment model still exists in almost all modern processors, they're just not crippled like the 80286's model was.
The 80286 had, if I understand it correctly, a single segment register that had to be set up each time you needed to move to another part of the address space.
But comparing this to, say, Power, VAX, PDP11, IBM/370, Sparc and I believe 68010, the memory management of all of these still has the concept of memory segments. Indeed, they're fundamental to the implementation of virtual address spaces that modern multi-processing capable processors provide.
The difference is that there are multiple segment registers in all of the architectures that I've listed that allow the complete program address space to be mapped simultaneously. The virtual address spaces are implemented by setting up the segment registers to map the contiguous virtual address space into different addresses in the real address space of the processor. So a program thinks it's running in a flat 32 or 64 bit address space while the actual memory could be scattered all over the physical address space, even out of sequence, and with 'holes' that aren't backed up by physical memory.
One advantage of this is that the system can preserve the complete address space of a program on a context switch by saving the contents of the segment registers when pre-empted, and restoring when you get back to execute. And systems like the 370, PDP11 and VAX had a different set of segment registers when in supervisor or privileged mode meaning that you didn't even have to save the segment registers when you crossed into kernel space to handle an interrupt or service a system call, and the Sparc had register rings to do something similar.
And in the case of the PDP11, which had a 16 bit process address space mapped into a physical 18 or 22 bit address space (depending on the model), it allowed the system to have more memory than a process could use, and I'm sure that the PDP11 was not the only processor that implemented this trick. In fact, I believe that late model 32 bit Pentium processors also used something similar with PAE. And now you have multiple registers sets and register renaming, including the memory segment registers that can be used to provide even more sets without having to save registers.
In most cases, this allows you to treat the virtual address space as a flat address space for the purposes of writing programs, but even in architectures like Power, there are a range of addressing modes that work within a single segment, meaning that it's not flat for all addressing modes (in fact IBM used this in-segment offset addressing mode to allow shared libraries to be position independent in AIX, at least in the 32 bit Power implementations).
When working in IBM's AIX System Support Centre shortly after the RS/6000 was launched, I remember taking a call from a developer who started by saying "Is the RS/6000 processor still a segmented architecture?", to which I started answering "Well, yes, but it's not like the Intel 80286 because..." at which point the person at the other end said "Well that's just rubbish!" and hung up the phone, without bothering to listen to why it was different. This has stuck in my memory for over thirty years. I often wonder whether the anonymous person (because he didn't even say who he was, or what company he was from) ever got to the point where he realized he had drawn a hasty conclusion.
But getting back to when the IBM PC was being designed: even if they had chosen a 68000, they could not have had virtual address spaces, because the 68000 did not have a built-in memory management unit. That was provided by the 68451, which came a few years later than the 68000 (and was so flawed that Sun built their own memory management unit for the early 68000-based Sun-1 and Sun-2 systems).
Not surprising.
Looking into the history of Intel's successful processors (8085 onward) you see they didn't design them.
They were implementing someone else's design (in that case for a point-of-sale terminal).
Then the deal fell through and they got to keep the processor design for free.
IIRC the 8088 was another "inherited" design.
Basically Intel are a foundry that was able to move up the value chain. It took till the 386 before they could cope with context switching.
I'd call everything else "lipstick on a pig", but that's just my rather jaundiced view of them and their relationship with their fellow co-dependent enabler.
68K's were everywhere, and probably still are, in all kinds of hidden, realtime and embedded stuff. I spent about 15 years of my life, through various jobs, college courses and careers bumping into them. From railway stations departure board management systems (main board, and all the platform monitors, including the automated stored voice announcements), water and sewerage treatment works/systems, power stations. You name it, if it was an industrial system in the 80's/90's it probably had a 68K in there somewhere.
I keep an old 1990's car in my garage, and last summer I was doing a bit of work under the dashboard and decided to drop the injection ECU out to see how clean it was, etc. Popped the lid off and lo and behold, there is a 68030 in there, along with a 27C256! I really should try and find a PROM blower and make a copy of it before it's too late...
I seem to remember reading on an Amiga forum that there is a market for "second user" 680x0s out of China, where they are stripping "useful" parts from old/dead kit. IIRC it also came with a warning that whatever they're advertised as - and even whatever the markings on the chip say - they may not be quite what you are expecting, eg various different versions of the 68040 were produced and these chips will often be marked as the "better" version even if they aren't, since they can sell them for more.
Yup, most of the "new" higher-end CPUs (particularly 68060) that come onto the second-hand market these days are stripped out of stuff like old Nortel kit. As you note, the problem then is that for these embedded uses it was more often the LC or EC variants which got used, as there wasn't a need for the MMU or FPU, whilst for Amiga use it would normally (for the 68060 at least) be the fully-featured variant that was expected to be present.
There's also a separate issue that chips manufactured using later mask revisions could run at higher clock speeds than the original 50MHz parts, so in addition to unscrupulous dealers remarking LC/EC parts to make them look like full-fat ones, they might also remark older revisions to make them look like later ones...
Fortunately I don't have to deal with any of that hassle - bought my '060 towards the end of 1995 when they were still hot off the production line :-)
+1 GFA Basic was really good, and 68000 assembler.
Somewhere in a box should be my GFA Basic manual - it was printed on red colour paper with black ink to defeat photocopying.
Along with the assembler (Kuma's K-SEKA), rounding off the toolkit was the essential reference - "Atari ST Internals"...
IIRC my GFA Basic manual was printed normal black on white, spiral bound. I've still got a copy of "Internals" in the attic (or is that the Antic? <sorry!>). I might have a dig around to confirm whether or not I've got a memory underflow.
I remember the black on red for anti-copying though - a true PITA typing in codes from that, lots of squinting at tiny characters under a lamp needing loads of head tilting and page wiggling to see just one char, only to be wrong!
I got my ST out of the attic a couple of years ago, and after sourcing a nice new RGB Din plug to SCART cable it booted straight to TOS, first time used in over 25 years! Unfortunately, the floppy drive had failed, but finding a replacement wasn't too hard.
It was very nostalgic and everything, but also a fairly bleak reminder of how far we've come. Back in 1991 I was amazed at the loading speed of floppy disks, but then my previous computer was an Oric Atmos with a cassette tape interface.
Anyone else remember The Carebears demo discs?
Bought a Gotek 2 weeks ago with exactly that in mind.
Came across this recently...
ACSI2STM Atari ST/E Hard Disk Interface - for a Micro SD Card
£25 in 1987 is the equivalent of today's £66, according to the BoE's inflation calculator.
So I was about to say yes, you'd definitely be able to afford an RPi. Then I looked on Amazon UK to see what they're currently selling for over there. Holy crap. For that money, forget the Pi, buy yourself a day's worth of central heating instead.
I remember hearing about the Falcon circa late 1992. It sounded interesting, but even then I dismissed it as doomed, since the ST line it was the successor to was already in terminal decline and Jack Tramiel's Atari Corp. (*) couldn't- and wouldn't- market their way out of a paper bag. (By that point even the ST's nemesis, the Amiga, was starting to lose its place as *the* machine everyone wanted- at least in Europe- to the unstoppable rise of the PC on one side and the Mega Drive and SNES on the other).
And indeed, as far as the mainstream was concerned, the Falcon may as well not have existed, and I assumed it had disappeared into obscurity after Atari Corp. stopped making computers in favour of their equally ill-fated Jaguar console.
I'd heard of the C-Lab licensed versions a few years ago, but I hadn't realised they'd kept it going quite so long (and that they considered it worth their time to do so), even after Atari themselves exited the computer market. It shows just how successful and established the ST was in its niche of MIDI music production, even after the mainstream had abandoned it.
Also interesting that newer versions of TOS can run on the Amiga (sacrilege!), since - while TOS is much improved since then - back in the day the Amiga was the one that had the better OS (AmigaOS), with true pre-emptive multitasking (which even Windows 3.1 couldn't match) compared to the limited early TOS, and I doubt anyone would have wanted it back then!
In more recent years I'd considered the possibility that there was no real reason the ST couldn't have run the original AmigaOS; although the Amiga was generally the better machine hardware-wise, most of the features of AmigaOS didn't rely on that and most of it could otherwise have been ported to the ST (i.e. same 68000 CPU at the same speed, comparable amounts of RAM).
Nowadays, the diehard enthusiasts are running new versions of AmigaOS on "official" hardware that isn't even compatible with most of the low-level "bare metal" programming that the majority of its games relied on.
(*) Atari Corp- formed when Tramiel bought the former home computer/console division of the original Atari Inc.- was a very different creature in terms of style and approach- and budget!- to its predecessor.
> In more recent years I'd considered the possibility that there was no real reason the ST couldn't have run the original AmigaOS; although the Amiga was generally the better machine hardware-wise, most of the features of AmigaOS didn't rely on that and most of it could otherwise have been ported to the ST (i.e. same 68000 CPU at the same speed, comparable amounts of RAM).
There would have been no DMA to free the CPU while the floppy drive was being read though, you would have just one screen and you wouldn't have been able to slide screens up and down, and the sound, well, probably just easier for the OS not to make any sound instead of dealing with the Yamaha waveform chip. Also no Zorro Autoconfig add-ons.
I appreciate that, but there's no reason that the ST couldn't have implemented many- if not most- of the other core parts of the OS. Especially its much-praised pre-emptive multitasking, which I don't believe was reliant upon any Amiga-specific features.
Though I do agree that the square wave sound chip in the original ST was rubbish! :-)
The ST, and I think the Amiga, had a 68000, and this did not support preemptive multitasking; this was fixed in the 68010 onwards.
I agree about the sound chip in the ST - poor at best.
There was a non-pre-emptive multitasking upgrade for the ST written by someone with no connection to Atari - I can't remember who wrote it, but it was a one-man job in about 1990. I just about remember his (Usenet) post when he said that it was free to the world. Can anyone remember who this was? I used his multitasking for a few years and it was pretty good considering the hardware.
I’m going to politely venture to disagree with you. AIUI, the preemptive multitasking was a feature of the OS and was present even on the 68000.
What that processor lacked, and what you’re maybe thinking of, was an MMU. The lack of hardware memory protection meant that a single malfunctioning application on the Amiga could - and invariably, did - bring down the entire OS. Ah, the dreaded Guru Meditation, oh how little we miss ye.
Indeed, you can do pre-emptive multi-tasking on pretty much any processor with a timer interrupt (so long as you have the RAM for the stacks and process control list - and there aren't any weird bits of state you can't read and/or write in the switcher). Fun factor goes up if you want IPC and can't find any atomic instructions...
Having an MMU makes it robust(er).
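To make that claim concrete, here's a toy round-robin scheduler sketch in Python (my own illustration, nothing 68000-specific): generators stand in for per-process stacks, a deque stands in for the process control list, and a fixed tick budget stands in for the timer interrupt.

```python
# Toy round-robin pre-emptive scheduler. Generators play the role of
# processes (their frames hold the saved "stack" state), and a fixed
# tick budget plays the role of the timer interrupt that forces a
# context switch.
from collections import deque

def make_process(name, steps):
    """A 'process' that does `steps` units of work, one per yield."""
    for i in range(steps):
        yield f"{name} step {i}"

def run(processes, quantum=2):
    """Run until all processes finish, pre-empting every `quantum` ticks."""
    ready = deque(processes)          # the process control list
    trace = []
    while ready:
        proc = ready.popleft()        # dispatch the next runnable process
        for _ in range(quantum):      # let it run until the "timer" fires
            try:
                trace.append(next(proc))
            except StopIteration:     # process exited; don't re-queue it
                break
        else:
            ready.append(proc)        # pre-empted: back of the queue
    return trace

trace = run([make_process("A", 3), make_process("B", 3)], quantum=2)
print(trace)
```

It's only a model, of course - the generators hand control back voluntarily at each yield, whereas a real timer interrupt seizes it - but the scheduling logic (dispatch, tick budget, re-queue on pre-emption) is the same shape as what a 68000 with a timer interrupt can do.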
You couldn't do paged memory on 68000 (the chip couldn’t retry a failed instruction), nor could you do hardware protection on supervisor vs user-mode memory access: while the CPU did indicate what kind of access was being done (using pins FC0,FC1,FC2), the ISA had no way of saying "reading from user space, writing to supervisor space". Fixes for these issues are the main difference between 68000 and 68010.
(That said, AmigaOS never used Supervisor mode at all: the whole OS ran in User mode, thus sidestepping the difficulties of memory protection by not even trying to do it..)
> What that processor lacked, and what you’re maybe thinking of, was an MMU.
Most precisely, the stack frames it produces upon a bus error aren’t sufficient to know how complete an instruction was when it threw, and no support is offered for restarting a half-complete instruction.
So you can’t bolt on an MMU in a general-purpose fashion because any instruction that causes a page fault will have done an unknowable amount of its operation — and side effects — before throwing but can only be restarted from the beginning.
[Author here]
> So you can’t bolt on an MMU in a general-purpose fashion
Thanks for the clarification. I was considering writing one 'til I got to your comment. :-) Nicely done.
IIRC Apollo fixed this in DomainOS by simply having two 68000s, the second following the first until the first threw a page fault. An odd way to do it, but economical given the cost of the CPU relative to the whole system.
What I've read of DomainOS sounds fascinating. HP didn't know what it had acquired and cut most of it.
The flipside of the Amiga and AmigaOS coin is more serious, though. Because the hardware lacked memory protection, the OS did all that clever stuff in software. Meaning that when the hardware caught up and it was present, the OS couldn't use it without breaking compatibility... and so the OS designers' cleverness became a crippling restriction.
Apple faced much the same problem. Lisa OS had multitasking, but it was stripped out when making MacOS. Which became a crippling restriction, but luckily for Apple, it survived long enough to just "embrace and extend" its way out of the problem.
[1] Buy a Unix vendor with a strong UI focus and SOTA dev tools
[1a] Get original cofounder back again into the deal!
[2] Give acquired OS a facelift to resemble the old one
[3] Stick the entire old OS into a VM for backwards compatibility
Amiga didn't last long enough to do that. It had 2 potential killer OSes, first QNX and then Taos, but the kit was not yet powerful enough to stick legacy AmigaOS into a VM and run it that way.
By the time it was, Amiga Inc was gone. Hyperion still sells a modernised AmigaOS but I don't think it will ever be relevant again.
Your Apple story has points [1] and [1a] reversed. The decision to buy NeXT was almost entirely due to the chance of getting Steve Jobs back to run the company. The other acquisition candidate, Be, Inc.’s BeOS, was better in every respect except for the personalities involved at the top. The CEO of Be, Jean-Louis Gassée was also ex-Apple, and was generally well-regarded by Apple’s engineering staff (of whom I was one at this time, but Gassée was gone a while before I started), but nobody could out-figurehead Steve Jobs...
FWIW the Amiga 1000 already had pre-emptive multitasking - it was in fact one of its marvels: https://www.popularmechanics.com/technology/gadgets/a27437/amiga-2017-a1222-tabor/
"The Amiga was decades ahead of its time—look no further than preemptive multitasking," says Perry Kivolowitz, a professor of computer science at Carthage College. "The Mac, until OS X, was a cooperative multitasking machine—you've experienced the handicap of this if you've ever seen the Spinning Beach Ball of Death. In a cooperative environment if any task hangs, the computer hangs. On a preemptive system, any hung task slows the machine a small bit but doesn't kill it."
The Amiga had pre-emptive multitasking right from the start, although they should have used the 020 and above in the big-box Amigas to bring memory protection and virtual memory to the OS, with perhaps some option to keep compatibility with older software.
But R&D at Commodore always ran on fumes; they never did enough, and that, together with the terrible marketing, finished them off.
No, as several others have already noted, the Amiga certainly *did* have pre-emptive multitasking, and that was one of its much-vaunted features. Even Windows 3.1's multitasking several years later was inferior, i.e. co-operative multitasking that could- and did- get stuck with badly-behaved programs.
I mentioned the possibility of running the original Amiga OS on a (contemporary) Atari ST as, while a couple of aspects may have been tied to the Amiga hardware, I suspect the majority probably weren't and that the ST (with the same CPU and comparable RAM) could theoretically have implemented the same pre-emptive multitasking as the Amiga.
(Anyone want to agree or disagree with that suggestion?)
The CPU could obviously handle it. ST was actually clocked slightly higher than Amiga - noticeable in frame rates in e.g. Frontier:Elite 2. However you are correct to call out the hardware dependency for certain features. The graphics hardware in particular had major differences hard-coded with impact on the O/S (think implementation of the Blitter, and multi-screen overlays).
Audio handling, not important to the OS, is also very different: the Amiga being a PCM type of thing, whereas the ST was an FM synth.
I would be inclined to argue 3.1's major technical advantage over Amiga was the 640x480 being available on "vanilla" hardware, and most PCs shipping with nice crispy monitors by default. Makes it very easy to sell a business application having that, irrespective of how horrible the memory model was behind 3.1.
Whereas on the Amiga one would have to actively think about expanded hardware, or look at one of the very premium machines (think A3000), to get equivalent capability. A 386 with 4MB RAM, a hard drive, an HD floppy and SVGA could be had for less.
In 1987, Amiga was obviously leaps and bounds ahead. They didn't really move or capitalise on that edge. And thus Commodore (amongst hundreds of other daft decisions) more or less destroyed itself.
Agree with pretty much everything you said, though the ST's audio chip wasn't even FM (which would have been quite impressive by 1985 standards), it was a variant of the same AY-3-8912 square-wave chip already used in numerous 8-bit computers.
I get the impression that Commodore *did* have some talented people working for them (e.g. Dave Haynie), but ultimately yeah, they quite clearly didn't put enough resources and effort into building on the fantastic start the original Amiga provided, and instead sat on that advantage until it wasn't one any more.
The Amiga 3000 was- as far as I can tell- probably the biggest improvement of the original design, but it was expensive and the changes didn't filter down to the mass-market models. The Amiga 1200 and 4000 had some noticeable improvements, but they should have come out earlier and- by then- were simply keeping up with the PC clones that were already starting to overtake the Amiga.
Indeed. I remember my excitement when I got a 2.04 Kickstart ROM and plugged it into my A500, turning it into (more or less) an A500+.
I'd spent an unhealthy amount of time before that tweaking my Workbench 1.3 to make it resemble 2.0, but it was very satisfying to have the real thing.
Back in the days when a new OS version was exciting, and you didn't have to ask questions like "yes, new capabilities, but it'll harvest all my data and spam me with adverts... is the trade-off worth it?"
> There would have been no DMA to free the CPU while the floppy drive was being read though
The floppy drive and ACSI (i.e. proto-SCSI) port are the two things other than video in an ST that have a DMA interface to RAM.
The ST doesn't even then have to do any further decoding, as it receives the original bytes already decoded, whereas the Amiga receives a raw MFM stream and uses the Blitter to decode it further.
I looked at those in our Uni's bookstore, but the spongy, mushy keyboard action, coupled with the bizarre, style-o-riffic sharply right-slanted function keys, killed them for me. A good keyboard is a must for any computer I use. (The Atari 800 and Commodore-64 had good, though not great keyboards.)
Thanks for the tip re: OberonEmulator Liam.
I've not used Oberon before, so I'll give it a go. The first thing I code in any new language is a prime number generator (Ich bin ein Nerd!), which brings us nicely back to 68K assembler on the ST. I thought I'd made a mistake because it appeared nothing had happened - no run time detected. Wrong! "Primes to 1 million in what!?"
[TBH I don't recall if the first limit was a million, I still had plenty of opportunity to catch flies tho!]
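For anyone wondering how short that classic "first program on a new machine" really is, here's a sieve-of-Eratosthenes sketch in Python (my own illustration - the language and limit are my choices, not the poster's 68K original):

```python
# Sieve of Eratosthenes: return all primes up to a limit.
def primes_up_to(limit):
    if limit < 2:
        return []
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            # Cross off every multiple of n, starting at n*n
            # (smaller multiples were crossed off by smaller primes).
            sieve[n * n :: n] = [False] * len(sieve[n * n :: n])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

print(primes_up_to(30))   # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Even in interpreted Python on modern hardware this finds the primes to a million in well under a second, which puts the "no run time detected" surprise above into perspective.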
The year was 1984: a brand new college opens in Swindon, ironically called "New College", and the primary computer on campus was the BBC Model B, attached to the Econet network and sharing a 20MB hard disk. Pretty much every classroom had one, along with the 20-odd in the computer room.
The secondary computer room, however, was fitted with half a dozen Atari STs, which turned into my fave computer of that time. They were so new that the OS was still on bootable floppy and the 3.5" floppy discs were single-sided. It took a few months before upgrades included the OS on ROM.
After leaving college in 1986 and entering the real world of a career in IT, my first pay packet went on a 520ST. Subsequent upgrades were to replace the floppy disk drive with a double-sided 1.4MB version, and Evesham Micros sold a memory upgrade kit. This required the soldering of 8 memory chips and 8 capacitors direct to the motherboard, after you had used a solder sucker to clear the holes for the chips' legs to sit in.
Sadly, I'm of an age now that a task such as that requires a large magnifying glass, a lot of light and a very steady hand.
In case you wondered, "Why the qemu target?" It's for cross-compiling. It's possible to run an m68k cross-compiler on your x86-64 box or whatever, but Debian (for one) has a build system where, to make sure the headers (.h files) are right, to be able to run the tests (to make sure a program built properly), etc., they favour running a build for some CPU type on that CPU type. For m68k, these builds are almost always emulated (they still have genuine 68k-powered hardware, but building on something like a 25MHz CPU is much slower than building on a qemu emulation that can probably run at 20x that speed). So why run on a qemu-emulated system with 1GB RAM emulating 1980s-era peripherals, when you can run on a qemu-emulated system with more RAM and the (in this case) Android-provided modern-style peripherals?
That’s what I was thinking. Although my preferred *nix for those old Macs is A/UX (which I have installed on a Quadra 650, an SE/30 and a IIci). I have Tenon MachTen on an LCIII, which also dual boots into Linux (slowly!). The LCIII also has an Atari ST emulator, MagicMac, installed on it.
To complete my retro fix, I’m writing a game using a lot of 68k assembly language on it. Well, when my son isn’t using it for Lemmings, Prince of Persia or Monkey Island!
Indeed. I had a friend at university (circa 1995) who bought himself a 1GB hard drive, and we all gathered round to worship it and bask in its capacious magnificence.
I'd have given my left nut - I certainly wasn't using it at that time, unfortunately, despite my hapless Richie-and-Eddie-esque pickup attempts - to have a GB of disk myself, never mind RAM.
I remember paying an extra hundred quid to get my A4000 factory-fitted with a 120MB hard drive rather than the default 80MB, and considering that to have been money well spent at the time. These days I find myself price comparing multi-TB NAS-rated drives and wondering if this one is really worth an extra 10 quid compared to that one...
For a system of that era, it's beyond imagination. I still remember the day when I watched one of my postgrad colleagues boot up his PC and feeling my jaw drop as the POST screen proudly declared it having 80MB of memory, given that this was in the days (1995) when if you wandered into your local electronics shop to buy a PC, it'd most likely come with 8MB unless you were a flash git and opted for a 16MB model. Meanwhile, my A4000 back home was doing just fine with 6MB, although by the time I stopped throwing money at it about 5 years later, it'd gained a further 60MB...
These days I'm running W11 on an 11th gen i7 with 32GB, M2 SSD etc., and wondering why it's so bloody slow at times and STILL manages to show evidence that MS don't really understand what pre-emptive multitasking is all about. Progress eh.
It got updated subsequently, but for a long time Sysinfo, one of the standard Amiga benchmark programs, would freak out when run under emulation. After running its (simple, whetstone-based? IIRC) speed-test, it would print a suitable comment - "Yawn", "Now we're talking", "Cooking!" etc.
And when run under emulation: "Call me NOW!!!!!!".
I often wondered whether to call the author and tell him how it was that I had an Amiga running at 35x the speed of an A4000/040, but I was always put off by the hassle. Calling Australia circa 1993, from Swindon circa 2010, was more trouble than it was worth, and the long-distance charges would be horrendous...
> Calling Australia circa 1993, from Swindon circa 2010
If you still want to make that call, by 2023 Swindon should have just caught up with 1993, so only a few weeks left to wait. Can't do much about the long-distance, but if you like I'm sure we can find enough volunteers to carry the town over to Oz for you.
I think I still have my Atari Mega ST 4 somewhere - it was the peak of the 68000-based ST range. I upgraded to this from a 1040STFM.
Had many good times with this machine, and after my BBC Micro it was a huge step up. In the last few years of using it I ran Minix as an OS, which led on to SCO's OS (a Unix variant with a bad track record) and then Linux.
I learnt a lot from this machine.
I’ve a Falcon030 here (it was one of the first ones, which also makes it one of the last ones, I suppose...), I still need to get around to re-capping the power-supply before I start it up again, though.
I've also got to make up a “weird D-sub Falcon video-port to VGA” cable, then find a “VGA to DisplayPort” adaptor before I can plug it into anything!
And then you find out the aspect ratio is all wrong and try to justify the money for one of these (12 days to go!)