Every major OS maker misread Intel's docs. Now their kernels can be hijacked or crashed

Linux, Windows, macOS, FreeBSD, and some implementations of Xen have a design flaw that could allow attackers to, at best, crash Intel and AMD-powered computers. At worst, miscreants can, potentially, "gain access to sensitive memory information or control low-level operating system functions," which is a fancy way of saying …

      1. Anonymous Coward

        Re: So....

        DON'T MIND ME. CARRY ON WITH WHATEVER YOU WERE DOING. I HAVE A BOOK.

        1. Norman Nescio

          Re: So....

          DON'T MIND ME. CARRY ON WITH WHATEVER YOU WERE DOING. I HAVE A BOOK.

          A pony on Binky each way in the Gold Cup, then please.

      2. Primus Secundus Tertius

        Re: So....

        After you are dead, you cannot say you did not expect that.

    1. Fruit and Nutcase Silver badge
      Joke

      Death and Taxes

      "...in this world nothing can be said to be certain, except Death and Taxes"

      "...in this world nothing can be said to be certain, except Death, Taxes and Computer Bugs"

      Benjamin Franklin

  1. Dan 55 Silver badge

    Which is worse?

    1) No documentation. You know you're on your own.

    2) Documentation which just lists functions or commands or instructions or whatever on separate pages, without giving you an overview of how things fit together, so you walk straight into a bear trap, just like every OS vendor on the planet did here.

    1. Charlie Clark Silver badge

      Re: Which is worse?

      No documentation is always worse. Insufficient documentation leads to different errors and shows a distinct lack of interest by the authors in the subject. Maybe someone needs to mention liability to them.

      1. sabroni Silver badge

        Re: No documentation is always worse

        I disagree. Bad, error-filled documentation is worse than no documentation. With no documentation you see how the thing behaves and treat it accordingly. With bad documentation you assume you've done something wrong and spend ages trying to get it to work.

        Neither situation is ideal, obviously, but I prefer working it out over reading instructions that are wrong.

        1. Charlie Clark Silver badge

          Re: No documentation is always worse

          With no documentation you see how the thing behaves and treat it accordingly.

          With computers this is often much harder to determine than you think, i.e. there are more unknown unknowns than you have with poor documentation, and liability is clearer. But it quickly becomes sophistry to discuss this without a specific example.

          1. Norman Nescio

            Re: No documentation is always worse

            Hmm, an old programming team leader once told me documentation was like sex. When it's good, it's very, very good, and when it's bad, it's better than nothing.

  2. Pen-y-gors Silver badge

    I'm impressed

    by any mekon-brain who can understand all this sort of hyper-low-level stuff. Could we please go back to IBM/370 Assembler? That was vaguely understandable (and the quick reference guide fitted onto one folding card).

    1. defiler Silver badge

      Re: I'm impressed

      Could we please go back to IBM/370 ARM2 Assembler

      There - FTFY. 16 instructions, and a debate over whether to include Multiply...

      1. Anonymous South African Coward Silver badge
        Trollface

        Re: I'm impressed

        Could we please go back to IBM/370 ARM2 Assembler

        There - FTFY. 16 instructions, and a debate over whether to include Multiply...

        Z80 CPUs in parallel, rather?

      2. Wilseus

        Re: I'm impressed

        "16 instructions, and a debate over whether to include Multiply..."

        I never thought it was as few as 16 instructions, but I think you might be right if you don't count all the condition codes, other flag bits and intrinsic shifts which on many other CPUs would be separate instructions.

        As for multiply, all commercial ARMs had it, no debate! It was divide that didn't come until later.

        1. defiler Silver badge

          @Wilseus Re: I'm impressed

          You're right. There were (from memory - it's been a _long_ time) 16 basic operations, and each one could be run conditionally based on the status register (allowing you to inline a few instructions that you'd normally have to JMP over), and there was a flag to have the instruction _set_ the status registers too. It wasn't mandatory. So, whilst there were many permutations of these options, it all came down to 16 simple instructions (which was ideal for learning assembly).
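          For a flavour of what that looks like, here's a minimal C sketch of the 4-bit condition field that sits in the top bits of every ARM instruction word (the condition names are the standard ARM encoding; the decoder and the example encodings are mine, purely for illustration):

            #include <stdint.h>
            #include <stdio.h>

            /* Every 32-bit ARM instruction carries a condition in bits 28-31,
             * so any instruction - not just branches - can be skipped based on
             * the status flags. Condition 14 (AL) means "always execute". */
            static const char *arm_condition(uint32_t instruction)
            {
                static const char *names[16] = {
                    "EQ", "NE", "CS", "CC", "MI", "PL", "VS", "VC",
                    "HI", "LS", "GE", "LT", "GT", "LE", "AL", "NV"
                };
                return names[instruction >> 28];
            }

            int main(void)
            {
                /* ADDEQ R0, R1, R2 encodes as 0x00810002: condition 0 (EQ),
                 * i.e. executed only when the Z flag is set. */
                printf("%s\n", arm_condition(0x00810002u)); /* prints EQ */
                /* Plain ADD R0, R1, R2 is the same instruction with AL. */
                printf("%s\n", arm_condition(0xE0810002u)); /* prints AL */
                return 0;
            }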

          Yep, all commercial ARMs had MUL, but I'm pretty sure I recall there being a debate whether they would at the time. The thinking was that it might be too CISC-y, and you could multiply in software. Compared to other instructions on the chip it took a long time too.

          I miss my Archimedes.

    2. Archtech Silver badge

      Re: I'm impressed

      Say what you will about companies like IBM and DEC - they produced extremely clear, comprehensive, professional documentation.

      I used to know a DEC technical writer who knew so much about the VMS file system that the developers used to consult her when they were in doubt as to just how something worked.

      Sort of the exact opposite of this present situation.

      1. Doctor Syntax Silver badge

        Re: I'm impressed

        "I used to know a DEC technical writer who knew so much about the VMS file system that the developers used to consult her when they were in doubt as to just how something worked."

        But if the documentation was as good as you say why would they need to ask?

        1. Anonymous Coward

          Re: I'm impressed

          "But if the documentation was as good as you say why would they need to ask?"

          Because otherwise they'd have to read? :-) There are many places documentation fails: the people who don't want to write it, the people who don't want to maintain it, and the people who don't want to read it.

          If you want documentation to work, you have to show the benefits. I tend to find in corporate-level IT (rather than the IT industry) that nobody really gets the benefits; everyone works with isolated knowledge and little desire to share. This seems to be because their perceived value to the company is their limited skill-set, rather than their ability to adapt to, learn and apply (new) technology. Documentation exposes this limitation as people become less depended upon, so it is shunned.

        2. Anonymous Coward

          Re: I'm impressed

          "But if the documentation was as good as you say why would they need to ask?"

          Because the existence of good and accurate documentation does not imply the existence of a developer sufficiently intelligent and wide-ranging to understand it without further help.

          It's the Watchmaker Fallacy in reverse, of a sort; the existence of a design manual does not in fact imply the existence of a designer.

          Disclaimer: I am terrible at understanding documentation without sample code.

          1. oldcoder

            Re: I'm impressed

            DEC documentation quite often included the example code - with before and after samples of what every instruction did.

        3. The Mole

          Re: I'm impressed

          Easy, most people are too lazy to actually read the documentation.

          It's a bit more justifiable when you know it is documented somewhere, but not which particular document set you have to look in.

          And yes, documentation teams (and some test teams) are often the people who get the biggest picture of how a complex system/application works. Most other people are either too low-level (concentrating on one particular component) or too high-level (understanding the architecture but not the implementation details).

          1. CrazyOldCatMan Silver badge

            Re: I'm impressed

            most people are too lazy to actually read the documentation

            AKA - "I'm calling support because I want you to do my thinking for me"..

        4. I ain't Spartacus Gold badge

          Re: I'm impressed

          But if the documentation was as good as you say why would they need to ask?

          Also, it depends on the question.

          With an easy question, great documentation is all you need. Especially if it's searchable. How does this one command work? Well easy, I type it in and the info comes up. Now what if you know you want to do something, but don't know the command name? Well as long as you know the right terminology, you should be able to find that with 3 or 4 searches. So maybe that takes ten times as long to find, but still quick, once you find the info you need.

          What if your question is about how five different commands interact with a particular system or sub-system (and each other)? Then you need to read the documentation on all 5 of those, plus other stuff. At this point you need a much deeper understanding of what's going on. And that's where human help is useful.

      2. A Non e-mouse Silver badge

        @Archtech Re: I'm impressed

        Good (technical) product documentation is rare, nowadays. It takes a certain type of person to write it, and they need time to write it.

        However, companies see this effort as overhead and ripe for cutting. (I've even used a product where the supplier said they refused to write any documentation!)

        1. stiine Silver badge
          Thumb Up

          Re: @Archtech I'm impressed

          Absolutely. I just posted a similar comment on Ars: good tech writers are expensive, and for good reason. Twenty years ago, a good tech writer could get over $120/hr.

    3. John 48

      Re: I'm impressed

      I remember the first time I encountered protected mode assembler on a '386... it only took a couple of days to get a grip on the changes to the instruction set from the 8086-style real mode stuff.

      The problem was it then took *months* to fully get your head around the vast changes in architecture and how they all fitted and played together. The documentation of the day was a single 3/4"-thick Intel programmer's reference guide (small print, thin paper!) that was pretty dense and hard going.

      The segmentation alone is vastly different and more sophisticated - but you could see that lots of it was engineered to get you from a place you would rather not start at (i.e. DOS programs all hitting the hardware directly for maximum performance) and allow a transition to a system that could run several such programs concurrently and not have them fight to the death.

      1. Dan 55 Silver badge

        Re: I'm impressed

        Not to say that the 386 wasn't needed, but I'm pretty sure a 286 could have run a pre-emptive multitasking OS (without memory management) and had a hardware abstraction layer; it's just that everybody ran DOS, so hardware was used to solve the problem. Somebody's even managed it with a Z80.

        x86 is far too complicated and over-engineered for what it delivers, and this is why it's creaking at the seams now.

        1. Anonymous South African Coward Silver badge

          Re: I'm impressed

          Wow... Symbos.de just blew my mind.

          Now that is proper programming within the hardware limits...

        2. Brewster's Angle Grinder Silver badge

          Re: I'm impressed

          "but I'm pretty sure an 286 could have run a pre-emptive multitasking "

          There's nothing magical about preemptive multitasking. All you need is a timer: dump the registers, switch stacks, restore the previous registers and resume. We used to do it on 8 bit machines. You could do it on DOS if you didn't reenter the OS.
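          For the curious, here's a minimal sketch of that idea in C, using POSIX ucontext and an interval timer. (Calling swapcontext from a signal handler isn't strictly sanctioned by POSIX, so treat this purely as an illustration of "timer fires, save the registers, switch stacks, resume", not production code.)

            #include <signal.h>
            #include <stdlib.h>
            #include <sys/time.h>
            #include <ucontext.h>
            #include <unistd.h>

            #define NTASKS 2
            #define STACK_SIZE (64 * 1024)

            static ucontext_t tasks[NTASKS];
            static volatile int current = 0;

            /* The "timer": on every tick, save the running task's registers and
             * stack, then restore the next task's and resume where it left off. */
            static void tick(int sig)
            {
                (void)sig;
                int prev = current;
                current = (current + 1) % NTASKS;            /* round-robin */
                swapcontext(&tasks[prev], &tasks[current]);
            }

            static void worker(int id)
            {
                for (;;) {
                    write(STDOUT_FILENO, id ? "B" : "A", 1); /* show who's running */
                    for (volatile long i = 0; i < 5000000; i++)
                        ;                                    /* busy work, never yields */
                }
            }

            int main(void)
            {
                for (int i = 0; i < NTASKS; i++) {
                    getcontext(&tasks[i]);
                    tasks[i].uc_stack.ss_sp = malloc(STACK_SIZE);
                    tasks[i].uc_stack.ss_size = STACK_SIZE;
                    makecontext(&tasks[i], (void (*)(void))worker, 1, i);
                }

                signal(SIGALRM, tick);
                struct itimerval slice = { { 0, 10000 }, { 0, 10000 } }; /* 10ms */
                setitimer(ITIMER_REAL, &slice, NULL);

                setcontext(&tasks[0]);   /* never returns; output alternates A/B */
                return 0;
            }

          Real OSes add the bookkeeping and, on protected-mode chips, the hardware checks to stop tasks trampling each other - but the core loop really is just that.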

        3. Anonymous Coward

          "286 could have run a pre-emptive multitasking OS"

          Protected mode was implemented to allow multitasking - with hardware support. It introduced virtual address spaces and many other features the 8086 lacked to support multiple concurrent processes in a secure environment. It's not over-engineered - it introduced advanced security features that went mostly unused, for speed and compatibility reasons. For example, a call gate means you can't jump to an arbitrary address.

          The 286 had memory management; it was just implemented at the segment level. With segments capped at 64K, that looked feasible. Just like pages, segments can map virtual addresses to physical memory (with coarser granularity). Of course, it doesn't work as well for larger segments.

          When a segment is accessed, the CPU checks whether the segment is in memory or not (only the segment descriptor needs to stay in memory; the referenced memory doesn't). If it is not, an exception is raised. The exception handler can allocate the space, load the contents from external storage, swap other memory out to make space, etc., and then execution can resume.
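          In C-struct terms it looks roughly like this, assuming the standard 286 descriptor layout (the struct and helper names here are made up for the example):

            #include <stdint.h>

            /* An 80286 segment descriptor is 8 bytes; the access byte holds the
             * Present bit, the descriptor privilege level (DPL) and the type. */
            struct descriptor_286 {
                uint16_t limit;      /* segment size minus one, 64K max       */
                uint16_t base_low;   /* bits 0-15 of the 24-bit base address  */
                uint8_t  base_high;  /* bits 16-23 of the base address        */
                uint8_t  access;     /* P | DPL(2) | S | type(4)              */
                uint16_t reserved;   /* must be zero on the 286               */
            };

            /* Loading a segment whose descriptor has P clear raises the
             * "segment not present" fault (vector 11); the handler can fetch
             * the segment from disk, set P, and restart the instruction. */
            int segment_present(const struct descriptor_286 *d)
            {
                return (d->access & 0x80) != 0;   /* bit 7 = Present */
            }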

          "Hardware Abstraction Layer" is not something a very hardware device like a CPU can implement - just in a multitasking OS you need to protect shared resources like I/O ports and physical memory addresses like the screen buffer from concurrent, non coordinated accesses. Protected mode allows to set which privilege levels can access IN/OUT instructions, and map specific physical addresses - usually the kernel, or anyway code running at an higher privileged level than user applications. You get hardware checks, so a rogue app can't easily create havoc.

          It's the OS that needs to implement a "HAL", so applications don't need to access the hardware directly.

          The trouble is, DOS applications were written to access memory and I/O ports directly, and would not have worked easily in a 286 protected-mode OS, because it was tricky to trap those accesses and manage them.

          That's why Intel had to introduce Virtual 8086 mode - in this mode the CPU explicitly traps those attempts and lets the OS handle them transparently.

          In the end, in 286 times few users had more than 1MB of RAM that would have made a real multitasking OS useful. The few who did were happy enough to use EMS or XMS to allow for bigger spreadsheets, and when the 386 came out it had far superior features and it was time for GUI systems.

          1. Dan 55 Silver badge

            Re: "286 could have run a pre-emptive multitasking OS"

            few users had more than 1MB of RAM that would have made a real multitasking OS useful

            You can always tell who those who never used an Amiga are...

            1. anonymous boring coward Silver badge

              Re: "286 could have run a pre-emptive multitasking OS"

              Indeed. Useful applications would have been in the kB to tens-of-kB range in those days. Multitasking would have been very useful indeed! (A real, pre-emptive one, that is.)

            2. Anonymous Coward

              "You can always tell who those who never used an Amiga are..."

              Did Lotus 1-2-3 run on the Amiga? It was one of the few successful applications that could often require more than 640K, and it spawned the need for memory-expansion add-on boards and, later, software ways to access more memory. But PC memory was expensive in those days, many business applications used most of the available memory, and swapping on those slow disks would have been painful...

              1. Dan 55 Silver badge

                Re: "You can always tell who those who never used an Amiga are..."

                If Maxiplan or Superbase wasn't good enough for you, you could run DOS.

                1. TchmilFan

                  Re: "You can always tell who those who never used an Amiga are..."

                  Superbase!

                  There’s a Proustian rush I wasn’t expecting today.

                  1. kirk_augustin@yahoo.com

                    Re: "You can always tell who those who never used an Amiga are..."

                    The Amiga is still my favorite computer.

          2. Fruit and Nutcase Silver badge
            Thumb Up

            Re: "286 could have run a pre-emptive multitasking OS"

            @AC

            "Protected mode was implemented to allow multitasking - with hardware support."

            Exactly - the 286 was the target processor for OS/2 1.x

            1. kirk_augustin@yahoo.com

              Re: "286 could have run a pre-emptive multitasking OS"

              But no OS can prevent bypassing security on any Intel processor. The fact that you could run a pre-emptive multitasking OS on a 286 does not mean it would be secure. To be secure, you need to have a guard register at both ends of memory for each process, and prevent any cross-over access. Intel does not provide hardware support for that. That is because guard registers are for segmentation, and Intel uses the word segmentation for their bizarre form of overlapped paging.

        4. CrazyOldCatMan Silver badge

          Re: I'm impressed

          but I'm pretty sure a 286 could have run a pre-emptive multitasking OS

          It did (sort of) - it was called QEMM (later QEMM/386). My old PS/2 50z[1] with an expanded RAM card did it quite happily. Enabled me to run Ultima (6?) while the IBM 3270 emulator sat in the background (and it was pretty finicky about being able to respond to incoming events..)

          [1] The old IBM sort, not the new-fangled games machine. Had a 10MHz 286 chip with *zero* wait states for the memory. What a beast it was. Could run OS/2 (the early versions - not Warp) and was used for travel agency machines.

    4. CrazyOldCatMan Silver badge

      Re: I'm impressed

      Could we please go back to IBM/370 Assembler? That was vaguely understandable

      And, even more importantly (under TPF anyway) didn't use stacks..

      Of course, what it *did* use (a dedicated 4k block that every programme segment in the chain had access to) was just as bad. You put some data into your reserved address (EBW000+150), only to find that some numpty down the chain was also using it (but hadn't told anyone) and so when control gets passed back to you your data is essentially randomised.

      That's why good mainframe shops have QA departments with real teeth - to stop idiocy like that.

  3. Steve Button

    Most importantly...

    what's it CALLED? If it don't got a name, we ain't tekkin it serius.

    1. Tomato42
      Pint

      Re: Most importantly...

      called? Ha! If a vulnerability doesn't come with an interpretative dance now, it's not worth your time!

      1. Bronek Kozicki Silver badge

        Re: Most importantly...

        ... and does it have a logo?

        1. defiler Silver badge

          Re: Most importantly...

          ...and a dramatic theme tune?

    2. Anonymous Coward

      Re: Most importantly...

      Failed user code known execution design up privilege

      1. Fatman
        Thumb Up

        Re: Most importantly...

        "Failed User Code Known Execution Design Up Privilege"

        I like the acronym.

  4. JimmyPage
    Alert

    Segmentation ...

    I didn't like it then, I don't now.

    It had nothing to do with performance or features, and everything to do with keeping a stranglehold on the market with "backwards compatibility".

    We're starting to see the silicon equivalent of antibiotic resistance, as all those cumulative trade-offs make it impossible to secure a processor.

    1. Brewster's Angle Grinder Silver badge

      Re: Segmentation ...

      "It had nothing to do with performance or features"

      Did anybody ever say that? It was a hack to allow a 16-bit architecture to use 20-bit addressing without implementing a full 32-bit architecture.
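      For anyone who never had the pleasure, the whole trick fits in one line of C: the physical address is just the 16-bit segment shifted left four bits plus the 16-bit offset, giving a 20-bit address space (the helper name below is made up for the example):

        #include <stdint.h>
        #include <stdio.h>

        /* Real-mode 8086 addressing: physical = segment * 16 + offset.
         * Many different segment:offset pairs alias the same byte. */
        static uint32_t real_mode_phys(uint16_t segment, uint16_t offset)
        {
            return ((uint32_t)segment << 4) + offset;
        }

        int main(void)
        {
            /* B800:0000 and B000:8000 both hit the colour text buffer. */
            printf("%05X\n", (unsigned)real_mode_phys(0xB800, 0x0000)); /* B8000 */
            printf("%05X\n", (unsigned)real_mode_phys(0xB000, 0x8000)); /* B8000 */
            return 0;
        }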

      1. Anonymous Coward

        "without implementing a full 32 bit architecture"

        It was neither easy nor cheap to add all the silicon structures required to implement a full 32-bit architecture.

        8-bit CPUs used even weirder ways to access more than 256 bytes...

        1. Brewster's Angle Grinder Silver badge

          Re: "without implementing a full 32 bit architecture"

          "It wasn't easy nor cheap to add all the required silicon structures to implement a full 32 bit architecture."

          I'm sympathetic to this. I spent enough time programming 8086 assembly that I have a soft spot for all its quirks. But the MC68000 arrived a year after the 8086 and showed what could be done.

          "8 bit CPUs used even weirder ways to access more than 256 bytes..."

          The ones I used in anger all had 16 bit address registers and 16 bit address buses. Though most had legacies of darker days.

          1. This post has been deleted by its author
