Re: Why retro?
They optimized to the limit because they had to, not because it was good practice. If that meant cutting corners in all sorts of areas, that was the only choice, because not cutting corners meant the computer would cost twice as much to get the same performance. Cutting corners, in this case, meant everything from omitting convenient features to shipping massive security holes, by the time there was any security at all.
Here's an example. Remember when Microsoft Word was a tiny binary? Remember when the Office file format changed with every release? Did you know those two things were closely connected? Sure, part of it was Microsoft wanting people to buy new Office versions, but not all of it. Many structures in the old Office formats weren't parsed when a file was opened, because parsing would eat up cycles. Instead, they were simply copied out of RAM onto disk when you saved, and back from disk into RAM when you loaded. That made loading a lot faster. What else did it do? Well:
1. If you weren't that program, compatibility with that format was a much bigger task, because someone else's in-memory struct dumped to disk isn't much use if it's not your struct.
2. It made backward compatibility harder for the same reason. Microsoft had the old code and kept including it, but carrying a loader for every old format made Word bigger with each release, instead of designing the formats for forward compatibility in the first place.
3. It made cross-platform compatibility harder: if everything's a 386, your structures always come back into RAM the same way, but on a processor architecture with a different byte order or different struct alignment rules, they don't.
4. You're copying arbitrary data straight into memory. If someone crafts a file right, they can crash or even take over your word processor just by you opening it. That's much harder to do with a modern docx file, because it's parsed and validated rather than blitted straight into RAM.
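The save/load shortcut described above can be sketched in a few lines of C. This is a toy illustration, not the actual Word code; the struct layout and field names are invented:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* A hypothetical in-memory document record, roughly how an early word
 * processor might lay one out. All names here are invented. */
struct doc_header {
    unsigned short version;     /* format version */
    unsigned short para_count;  /* number of paragraphs */
    unsigned int   body_offset; /* byte offset of the text in the file */
    char           title[32];
};

/* "Save" by blitting the struct's bytes straight to disk: fast, no
 * per-field serialization, but the file now mirrors this compiler's
 * exact padding and this CPU's byte order. */
int save_doc(const char *path, const struct doc_header *h) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t n = fwrite(h, sizeof *h, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}

/* "Load" by copying the bytes back into RAM. Note there is no
 * validation at all: whatever is in the file becomes the struct. */
int load_doc(const char *path, struct doc_header *h) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(h, sizeof *h, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}
```

On the same build this round-trips perfectly, which is the whole appeal. But a compiler with different padding, or a CPU with different byte order, reads garbage from the same file, and nothing stops a crafted file from putting whatever it likes in `body_offset`, which the rest of the program will then trust.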
That's leaving out all the things those operating systems didn't do that modern ones do. On the systems you're talking about, audio was a single supported device with a basic interface or, later, a separate board you added yourself, with hard limits on how many audio sources or devices it could handle. Now the OS has a sound stack that does all of that for you: it can detect the hardware you have, deal with hotplugged interfaces, and mix sources in a variety of ways. One of the costs is that it has to look at what hardware is plugged in when you boot and build structures to handle it, because unlike those efficient systems, it doesn't get to hardcode a single answer.

Video and most other hardware work the same way. Part of the typical boot process is copying large amounts of data into RAM, but another part is looking at the hardware, finding the code that drives it, loading that code, and having all those resources available when a program wants them. That change mattered, because a program no longer has to ship with support compiled in for every piece of hardware it might use. Not only will the OS expose the hardware in a generic way, it will figure out what to offer and handle the configuration of it.
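The probe-then-bind idea at boot can be sketched too. A minimal toy version, with invented device IDs and driver names, nothing like a real OS sound stack: the OS defines one generic interface, scans for what's present, and binds whichever driver matches, so programs never compile in hardware-specific code.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* One generic audio interface: programs only ever see this shape,
 * regardless of which hardware is behind it. */
struct audio_driver {
    const char *matches; /* hardware ID this driver handles (invented) */
    void (*play)(const short *samples, int count);
};

static void play_beeper(const short *s, int n) {
    (void)s; printf("beeper: %d samples\n", n);
}
static void play_sb16(const short *s, int n) {
    (void)s; printf("sb16: %d samples\n", n);
}

/* The drivers the OS shipped with, keyed by the hardware they drive. */
static const struct audio_driver drivers[] = {
    { "pc-beeper", play_beeper },
    { "sb16",      play_sb16   },
};

/* "Boot-time probe": given the hardware actually detected, pick the
 * matching driver. Nothing is hardcoded; a machine with no known
 * audio hardware simply gets no audio device. */
const struct audio_driver *probe_audio(const char *detected_id) {
    for (size_t i = 0; i < sizeof drivers / sizeof drivers[0]; i++)
        if (strcmp(drivers[i].matches, detected_id) == 0)
            return &drivers[i];
    return NULL;
}
```

The old systems effectively inlined one of those `play_*` functions into every program; the modern approach pays the probe cost at boot so that every program gets the same generic handle.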