Re: x86 versus x86-32
The point was that most of the "modern" Unix derivatives were not ported to the 8086 or the 80286 (Xenix/SCO was an exception). Lack of memory management precluded porting Unix to the 8086, though Intel did build a few prototype machines with memory management (one of my classmates had one). The 80286 could support code originally written for the PDP-11, as both were 16 bit machines with tricks to extend the address space (16 bit segments in the case of the 80286). But the versions of BSD being ported to the Intel architecture had been developed on the Vaxen at UCB, and the VAX had a 32 bit address space. The 80386 was the first in the x86 series to allow 32 bit segments and thus was more in tune with the way 32 bit Unix memory management worked. I would not want to subject any development effort to a requirement of using 80286 code for x86 software.
As far as "x86" code goes, the 8086 was designed so that 8080/8085 assembly code could be easily ported to the 8086. The 8080, in turn, was designed to allow easy translation of 8008 assembly code, and the 8008 dates back to 1971.
The 80286 was designed back when Unix was still a niche OS, and it had some very impressive options for multi-tasking and multi-user systems. IIRC, OS/2 v1.x was designed around the 80286, and dynamic memory allocation involved the hardware more directly than on more modern systems. One result was that if 1K of memory was allocated, any attempt to access beyond that 1K would generate a fault.
A BSD tidbit. The CS department at Cal bought a PDP-11 to replace the loss of the CDC6400 "B" machine, which was running the KRONOS time-sharing OS (I got a couple of exposures to Kronos during my first quarter at Cal). That PDP-11 was rotated among 3 different OS's, one being an experimental OS from Bell Labs that prompted comments such as "If Bell Labs hadn't invented the transistor, The Phone Company would still be using vacuum tubes" and descriptions of the "C" language as "An abomination in the eyes of the Lord".