DON'T MIND ME. CARRY ON WITH WHATEVER YOU WERE DOING. I HAVE A BOOK.
Linux, Windows, macOS, FreeBSD, and some implementations of Xen have a design flaw that could allow attackers to, at best, crash Intel and AMD-powered computers. At worst, miscreants can, potentially, "gain access to sensitive memory information or control low-level operating system functions," which is a fancy way of saying …
1) No documentation. You know you're on your own.
2) Documentation which just lists functions or commands or instructions or whatever on separate pages, without giving you an overview of how things fit together, so you walk straight into a bear trap, just like every OS vendor on the planet did here.
I disagree. Bad, error-filled documentation is worse than no documentation. With no documentation you see how the thing behaves and treat it accordingly. With bad documentation you assume you've done something wrong and spend ages trying to get it to work.
Neither situation is ideal, obviously, but I prefer working it out over reading instructions that are wrong.
"With no documentation you see how the thing behaves and treat it accordingly."
With computers this is often much harder to determine than you think, i.e. there are more unknown unknowns than with poor documentation, where at least the liability is clearer. But it quickly becomes sophistry to discuss this without a specific example.
"16 instructions, and a debate over whether to include Multiply..."
I never thought it was as few as 16 instructions, but I think you might be right if you don't count all the condition codes, other flag bits and intrinsic shifts which on many other CPUs would be separate instructions.
As for multiply, all commercial ARMs had it, no debate! It was divide that didn't come until later.
You're right. There were (from memory - it's been a _long_ time) 16 basic operations, and each one could be run conditionally based on the status register (allowing you to inline a few instructions that you'd normally have to JMP over), and there was a flag to have the instruction _set_ the status registers too. It wasn't mandatory. So, whilst there were many permutations of these options, it all came down to 16 simple instructions (which was ideal for learning assembly).
Yep, all commercial ARMs had MUL, but I'm pretty sure I recall there being a debate at the time about whether they would. The thinking was that it might be too CISC-y, and you could multiply in software. Compared to other instructions on the chip it took a long time, too.
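The encoding being recalled above can be sketched in a few lines: a classic 32-bit ARM data-processing instruction carries a 4-bit condition field, a 4-bit opcode field (hence exactly 16 basic operations), and the optional S ("set flags") bit. A minimal Python illustration, with the field layout per the standard ARM encoding (the helper names are mine):

```python
# Decode a 32-bit ARM data-processing instruction word, illustrating the
# 16 basic opcodes, the 4-bit condition field, and the optional S bit.
# Mnemonics are the standard ones; everything else is an illustrative sketch.

OPCODES = ["AND", "EOR", "SUB", "RSB", "ADD", "ADC", "SBC", "RSC",
           "TST", "TEQ", "CMP", "CMN", "ORR", "MOV", "BIC", "MVN"]

CONDITIONS = ["EQ", "NE", "CS", "CC", "MI", "PL", "VS", "VC",
              "HI", "LS", "GE", "LT", "GT", "LE", "AL", "NV"]

def decode(insn):
    """Decode one data-processing instruction into (condition, op, sets_flags)."""
    cond = CONDITIONS[(insn >> 28) & 0xF]   # bits 31-28: condition field
    op = OPCODES[(insn >> 21) & 0xF]        # bits 24-21: one of 16 operations
    sets_flags = bool((insn >> 20) & 1)     # bit 20: update the status flags?
    return cond, op, sets_flags

# 0xE0910002 encodes ADDS r0, r1, r2: always executed, sets flags.
print(decode(0xE0910002))  # -> ('AL', 'ADD', True)
```

Every instruction gets the condition field, which is what allowed inlining a few conditional instructions instead of branching over them.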
I miss my Archimedes.
Say what you will about companies like IBM and DEC - they produced extremely clear, comprehensive, professional documentation.
I used to know a DEC technical writer who knew so much about the VMS file system that the developers used to consult her when they were in doubt as to just how something worked.
Sort of the exact opposite of this present situation.
"But if the documentation was as good as you say why would they need to ask?"
Because otherwise they'd have to read? :-) Documentation fails in many places: with the people who don't want to write it, the people who don't want to maintain it, and the people who don't want to read it.
If you want documentation to work, you have to show the benefits. I tend to find in corporate-level IT (rather than the IT industry) that nobody really gets the benefits; everyone works with isolated knowledge and little desire to share. This seems to be because their perceived value to the company is their limited skill-set, rather than their ability to adapt to, learn, and apply new technology. Documentation exposes this limitation as people become less depended upon, so it is shunned.
"But if the documentation was as good as you say why would they need to ask?"
Because the existence of good and accurate documentation does not imply the existence of a developer sufficiently intelligent and wide-ranging to understand it without further help.
It's a kind of Watchmaker Fallacy in reverse: the existence of a design manual does not, in fact, imply the existence of a designer.
Disclaimer: I am terrible at understanding documentation without sample code.
Easy, most people are too lazy to actually read the documentation.
It's a bit more justifiable when you know it is documented somewhere, but not which particular document set you have to look in.
And yes, documentation writers (and some test teams) are often the people who get the biggest picture of how a complex system/application works. Most other people are either too low-level (concentrating on one particular component) or too high-level (they understand the architecture but not the implementation details).
But if the documentation was as good as you say why would they need to ask?
Also, it depends on the question.
With an easy question, great documentation is all you need, especially if it's searchable. How does this one command work? Easy: I type it in and the info comes up. Now what if you know you want to do something, but don't know the command name? As long as you know the right terminology, you should be able to find that within 3 or 4 searches. So maybe that takes ten times as long to find, but it's still quick once you find the info you need.
What if your question is about how five different commands interact with a particular system or sub-system (and each other)? Then you need to read the documentation on all 5 of those, plus other stuff. At this point you need a much deeper understanding of what's going on. And that's where human help is useful.
Good (technical) product documentation is rare, nowadays. It takes a certain type of person to write it, and they need time to write it.
However, companies nowadays see this effort as overhead and ripe for cutting. (I've even used a product where the supplier said they refused to write any documentation!)
I remember the first time I encountered protected mode assembler on a '386... it only took a couple of days to get a grip on the changes to the instructions set from 8086 style real mode stuff.
The problem was it then took *months* to fully get your head around the vast changes in architecture and how they all fitted and played together. The documentation of the day was a single 3/4"-thick Intel programmer's reference guide (small print, thin paper!) that was pretty dense and hard going.
The segmentation alone is vastly different and more sophisticated - but you could see that lots of it was engineered to get you from a place you would rather not start at (i.e. DOS programs all hitting the hardware directly for maximum performance) and allow a transition to a system that could run several such programs concurrently and not have them fight to the death.
Not to say that the 386 wasn't needed, but I'm pretty sure a 286 could have run a pre-emptive multitasking OS (without memory management) and had a hardware abstraction layer; it's just that everybody ran DOS, so hardware was used to solve the problem. Somebody's even managed it with a Z80.
x86 is far too complicated and over-engineered for what it delivers and this is why it's creaking at the seams now.
"but I'm pretty sure a 286 could have run a pre-emptive multitasking OS"
There's nothing magical about preemptive multitasking. All you need is a timer: dump the registers, switch stacks, restore the previous registers and resume. We used to do it on 8 bit machines. You could do it on DOS if you didn't reenter the OS.
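The tick-driven mechanism just described can be sketched as a toy round-robin scheduler (all names here are hypothetical; a real implementation is a few lines of interrupt-handler assembly, not Python):

```python
# Toy round-robin scheduler: on each (simulated) timer tick, dump the
# current task's registers, switch to the next task, restore its saved
# registers, and resume. The register set is deliberately tiny.

class Task:
    def __init__(self, name):
        self.name = name
        self.registers = {"pc": 0, "sp": 0, "acc": 0}  # saved CPU state

cpu = {"pc": 0, "sp": 0, "acc": 0}     # the single set of "live" registers
tasks = [Task("editor"), Task("spooler")]
current = 0

def timer_tick():
    """What the timer interrupt handler does on each tick."""
    global current
    tasks[current].registers = dict(cpu)      # dump the registers
    current = (current + 1) % len(tasks)      # pick the next task
    cpu.update(tasks[current].registers)      # switch stack, restore registers
    return tasks[current].name                # ...and resume it

cpu["acc"] = 42          # "editor" computes something before its slice ends
print(timer_tick())      # -> spooler  (editor's state is saved away)
print(timer_tick())      # -> editor
print(cpu["acc"])        # -> 42  (editor resumes exactly where it left off)
```

The only hardware you strictly need is the periodic interrupt; everything else is bookkeeping, which is why it was doable even on 8-bit machines.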
Protected mode was implemented to allow multitasking - with hardware support. It introduced virtual address spaces, and many other features the 8086 lacked, to support multiple concurrent processes in a secure environment. It's not over-engineered - it introduced advanced security features that mostly went unused, for speed and compatibility reasons. For example, a call gate means you can't jump to an arbitrary address.
The 286 had memory management; it was just implemented at the segment level. With 64K maximum segments, that looked feasible. Just like pages, segments can map virtual memory to physical memory (with less granularity). Of course, it's not good for larger segments.
When a segment is accessed, the CPU checks whether the segment is in memory (only the segment descriptor needs to stay in memory; the referenced memory doesn't). If it is not, an exception is raised. The exception handler can allocate the space, load the memory contents from external storage, swap other memory out to make space, etc., and then execution can resume.
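That descriptor-and-fault dance can be sketched as a toy simulation (the structures and names are illustrative, not the real 286 descriptor layout):

```python
# Sketch of segment-not-present handling: only the descriptor stays
# resident; touching a swapped-out segment raises a fault, the handler
# loads the contents back, and the access is retried.

class SegmentNotPresent(Exception):
    pass

class Descriptor:
    def __init__(self, limit):
        self.present = False   # "P" bit: is the segment in physical memory?
        self.limit = limit     # max 64K on the 286
        self.data = None

backing_store = {}             # "disk" contents, keyed by segment selector

def load_segment(sel, descriptors):
    """Fault handler: bring the segment in from backing store."""
    d = descriptors[sel]
    d.data = bytearray(backing_store.get(sel, bytes(d.limit)))
    d.present = True

def read(sel, offset, descriptors):
    d = descriptors[sel]
    if offset >= d.limit:
        raise MemoryError("protection fault: offset beyond segment limit")
    if not d.present:
        raise SegmentNotPresent(sel)
    return d.data[offset]

descriptors = {0x08: Descriptor(limit=0x1000)}
backing_store[0x08] = b"hello world"

try:
    read(0x08, 0, descriptors)
except SegmentNotPresent as fault:                 # the CPU raises the fault...
    load_segment(fault.args[0], descriptors)       # ...the OS handler swaps it in

print(chr(read(0x08, 0, descriptors)))  # -> h  (execution resumes)
```

The limit check is the same mechanism that gives you a protection fault instead of a silent overrun, which pages alone don't give you at byte granularity.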
A "Hardware Abstraction Layer" is not something a hardware device like a CPU can implement - but in a multitasking OS you need to protect shared resources, like I/O ports and physical memory addresses such as the screen buffer, from concurrent, uncoordinated accesses. Protected mode lets you set which privilege levels can execute IN/OUT instructions and map specific physical addresses - usually the kernel, or anyway code running at a higher privilege level than user applications. You get hardware checks, so a rogue app can't easily create havoc.
It's the OS that needs to implement a "HAL", so applications don't need to access the hardware directly.
The trouble was that DOS applications were written to access memory and I/O ports directly, and would not have worked easily under a 286 protected-mode OS, because it was tricky to trap those accesses and manage them.
That's why Intel had to introduce Virtual 8086 mode - in this mode the CPU explicitly traps those attempts and lets the OS handle them transparently.
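The trap-and-emulate idea can be sketched like this (the port number and handler names are made up for illustration; the real mechanism involves the IOPL field and fault handlers, not Python exceptions):

```python
# Sketch of trap-and-emulate: a direct OUT to a port by an unprivileged
# task raises a fault, and the monitor emulates the access on its behalf
# instead of letting the program touch the hardware.

emulated_devices = {0x3F8: []}   # pretend this is a serial port's output queue

def out_instruction(port, value, privilege_level):
    """Model of executing OUT: privileged code goes straight through."""
    if privilege_level != 0:
        raise PermissionError(port, value)       # the CPU traps the access
    emulated_devices[port].append(value)         # "hardware" actually written

def run_dos_program():
    # A DOS-era program banging the hardware directly, at user privilege.
    try:
        out_instruction(0x3F8, ord("A"), privilege_level=3)
    except PermissionError as trap:              # the monitor's fault handler
        port, value = trap.args
        out_instruction(port, value, privilege_level=0)  # emulate the access

run_dos_program()
print(emulated_devices[0x3F8])   # -> [65]
```

The program is none the wiser; from its point of view the OUT simply worked, which is exactly how unmodified DOS software kept running under V86 monitors.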
In the end, in 286 times few users had more than 1MB of RAM, which is what would have made a real multitasking OS useful. The few who had it were happy enough to use EMS or XMS to allow for bigger spreadsheets. When the 386 came out, it had far superior features and it was time for GUI systems.
Did Lotus 1-2-3 run on the Amiga? It was one of the few successful applications that could often require more than 640K, and it spawned the need for memory-expansion add-on boards and, later, software ways to access more memory. But PC memory was expensive in those days, many business applications used most of the available memory, and swapping on those slow disks would have been painful...
But no OS can prevent bypassing security on any Intel processor. The fact you could run a pre-emptive multitasking OS on a 286 does not mean it would be secure. To be secure, you need a guard register at both ends of memory for each process, and to prevent any cross-over access. Intel does not provide hardware support for that, because guard registers are for segmentation, and Intel uses the word segmentation for their bizarre form of overlapped paging.
but I'm pretty sure an 286 could have run a pre-emptive multitasking OS
It did (sort of) - it was called QEMM (later QEMM/386). My old PS/2 50z with an expanded RAM card did it quite happily. Enabled me to run Ultima (6?) while the IBM 3270 emulator sat in the background (and it was pretty finicky about being able to respond to incoming events..)
The old IBM sort, not the new-fangled games machine. Had a 10MHz 286 chip with *zero* wait states for the memory. What a beast it was. Could run OS/2 (the early versions - not Warp) and was used for travel agency machines.
Could we please go back to IBM/370 Assembler? That was vaguely understandable
And, even more importantly (under TPF anyway) didn't use stacks..
Of course, what it *did* use (a dedicated 4K block that every programme segment in the chain had access to) was just as bad. You put some data into your reserved address (EBW000+150), only to find that some numpty down the chain was also using it (but hadn't told anyone), and so when control gets passed back to you your data is essentially randomised.
That's why good mainframe shops have QA departments with real teeth - to stop idiocy like that.
I didn't like it then, I don't now.
It had nothing to do with performance or features, and everything to do with keeping a stranglehold on the market with "backwards compatibility".
We're starting to see the silicon equivalent of antibiotic resistance, as all those cumulative trade-offs make it impossible to secure a processor.
"It wasn't easy nor cheap to add all the required silicon structures to implement a full 32 bit architecture."
I'm sympathetic to this. I spent enough time programming 8086 assembly that I have a soft spot for all its quirks. But the MC68000 arrived a year after the 8086 and showed what could be done.
"8 bit CPUs used even weirder ways to access more than 256 bytes..."
The ones I used in anger all had 16 bit address registers and 16 bit address buses. Though most had legacies of darker days.