The list of things that make something look like Unix needs to include Unix system calls.
Forgetting the history of Unix is coding us into a corner
There are vital lessons to be learned from the history of Unix, but they're being forgotten. This is leading to truly vast amounts of wasted effort. This article is the second in a series of pieces based on The Reg FOSS desk's talk at FOSDEM 2024. FOSDEM is a conference all about free and open source software, and these days …
COMMENTS
-
Friday 16th February 2024 17:16 GMT Anonymous Coward
Ummm Wayland...
I've a number of use cases that I can easily get to work on windows and x11 that are just not possible with Wayland...
I doubt that Wayland will ever support them as they are all considered security issues.
I run multiple processes that interact with each other. One application controls window layering and they all use global shortcuts. They are all graphics-intensive applications. There are good reasons for not creating a huge monolithic application... The old approach of do one thing well...
I could probably do something with each window manager but... why?
-
Friday 16th February 2024 20:28 GMT l8gravely
Plan 9
I've got a treasured copy of Plan9 NiB sitting on my shelves, with manuals and everything. I really wish I could find the time to take it out and use it. The big gotcha today is a web browser honestly.
Having used various job scheduling tools, gsh (global shell), shell splitters to push stuff out to clients (tried cfengine, using ansible in a few places...) and all those other tools over the years, I have to say Plan 9 is seductive in a lot of ways.
And maybe it's time to bring it back, now that we can fire up VMs/containers and such easily for testing and playing around. The GUI will be a problem, but maybe just abstracting it on top of Wayland, where you have a single window that takes the full screen and Plan 9 just runs its magic inside it, would be a way to play?
-
Friday 16th February 2024 21:00 GMT HuBo
It's clearly the only way to prevent humanity's doomsday-oriented OS development from destroying the known computational universe (as unearthed by Bruce Ordway's prescient comment, further down this page)! A bit odd that Bell Labs extraterrestrials had to resurrect hordes of zombie machines to achieve that (or so I hear), but well, ... Ed Wood. Nils Holm's Scheme 9 from Empty Space could feel right at home in there ( hopefully )!
-
-
Monday 19th February 2024 05:52 GMT Ilgaz
a wish regarding wayland
Unless Wayland is the main point of the second article, I wish it hadn't been mentioned. It will create "noise", a lot of noise.
Hopefully Plan 9 will be mentioned; that strange-looking OS and its ideas could become the next macOS. MRAM and thousand-plus cores may need the true UNIX v2.
-
Saturday 17th February 2024 18:03 GMT Proton_badger
Re: Also...
Yes, the UNIX philosophy is about writing programs that do one thing and do it well, and having programs that interface well with each other. It's about modularity and composition, the art of dividing a huge piece of software into small parts connected through clean interfaces.
Systemd is a number of separate modules, in separate processes, many of them optional, and those that need to interact do so through well-defined interfaces. Others are simply managed as services doing their own thing. I would argue Systemd follows the UNIX philosophy. I understand some people don't like it (on this forum, nearly everyone) but there must be other and better arguments against it.
-
Saturday 17th February 2024 20:45 GMT Anonymous Coward
Re: Why is SystemD hated?
May I humbly suggest that you look at some of the output that the [redacted] POS puts into the system message log (dmesg). All sorts of bovine excrement appear on a regular basis.
So much for "rule 1 of OS design", i.e. the OS is to get out of the way and not consume CPU cycles when not needed, so the applications can do their job.
I have an ancient PDP-11/73 system in my garage. Its main job is to run my NC lathe using RT-11. I was showing my grandson what it could do when running a general-purpose OS like RSX-11M. He is studying CS at university. He was impressed at how little RAM was taken up by a multi-user OS.
I can, if pressed, let it loose with UNIX, and then it flies. For a CPU clocked at less than 1 MHz you can sure get a lot going in 256 KB of RAM.
Yes, these OSes aren't safe enough to connect to the mess of crap that is the Internet.
-
Monday 19th February 2024 09:19 GMT Anonymous Coward
Re: Also...
Yeah, people like to screech about systemd a lot because it isn’t a pile of shell scripts. All the pieces are separate. The first piece is just the highly featured init replacement. Not all the other pieces are good, but many of them are okay. The init system is better than anything Linux had previously.
-
Monday 19th February 2024 22:02 GMT ldo
Re: I would argue Systemd follows the UNIX philosophy
People use systemd not because of any “philosophy”, but because it makes their job easier.
Consider how much easier it is to write a short .service file, versus all the boilerplate you have to stick into every sysvinit script. Here’s a simple example, taken from these very forum pages.
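Roughly this sort of thing, say (a made-up unit for a hypothetical daemon, not the specific example linked):

    [Unit]
    Description=Scribble daemon (hypothetical example)
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/scribbled --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Ten-odd lines, every one of which says something, versus the LSB header, PID-file handling and start/stop/status case statement you would have to write into the equivalent sysvinit script.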
-
-
Monday 19th February 2024 21:46 GMT ldo
SystemD: Worse Than SystemA, B or C!!!
The TRUE HORROR of systemd has YET TO BE EXPOSED!!! You have NO IDEA of WHAT ITS REALLY ABOUT!!!
All is revealed HERE!!! The Tragedy Of systemd
-
Friday 16th February 2024 13:13 GMT Androgynous Cupboard
Not *everything* is a file
Surely it's that everything *required for inter-process communication* is a file? What you're describing sounds like internal state to me.
I don't know Wayland at all, but I'm fairly certain that if there is a way for unrelated processes to communicate with the Wayland compositor, it's done by means of a UNIX socket, TCP socket, pipe, pseudo-filesystem, shared memory mapping or message queue, because (from memory) those are all the options for inter-process communication on UNIX. And they all live behind a file-like interface (definition stretched slightly for TCP sockets). Did I miss any?
-
Friday 16th February 2024 14:12 GMT MarkMLl
Re: Not *everything* is a file
I think the original idea was that everything was represented by a name somewhere in the tree of mounted filesystems, and was manipulated by one of a small number of system calls.
Unfortunately, things like sockets on top of IP don't have names. Network devices don't have names (relative to / ). USB devices don't have names... the list goes on, although in fairness the /state/ of many of those is often accessible via /sys or /proc (with, in the case of Linux, a layout decided entirely by the whim of the kernel module author).
And again unfortunately, there's a vast number of devices which have their own APIs funneled through ioctl() or accessible only via a private kernel-level API.
So in short: everything is a file (provided that it's a file).
The challenge for a putative C21st UNIX replacement would be generalising all possible devices to have a consistent minimal API, generalising all streams interacting with a device to have a consistent minimal API, and so on. And what I've seen of the industry over the last 50ish years doesn't encourage me to hold my breath waiting.
-
-
This post has been deleted by its author
-
-
Friday 16th February 2024 14:55 GMT bombastic bob
Re: Not *everything* is a file
just a minor correction...
with respect to the network layer, the basic I/O works exactly as it does for a serial device, from the standpoint of reading, writing, fcntl, ioctl, polling, etc..
A program that accesses a serial port can just as easily work with a socket rather than a file descriptor, so long as you do not expect to change the baud rate, etc..
Other than that the socket API functions (like send, recv) are supersets of read/write (etc.), and you have connections for TCP. Yet the basic I/O looks the same from inside as well as outside.
So "everything is a file" still applies to sockets and networking.
In X11 the basic communication is client/server through sockets. I am not sure how Wayland does it, but there is NO "DISPLAY" environment variable to indicate which socket to communicate on, such as X11 would use for multiple displays and remote execution.
-
Friday 16th February 2024 15:38 GMT the spectacularly refined chap
Re: Not *everything* is a file
Unfortunately, things like sockets on top of IP don't have names.
That's the way things have evolved. AT&T Research Unix introduced STREAMS, which worked well both as a generalisation and in allowing some neat tricks. It would probably have taken off had it come five years earlier, but by then the sockets model was too ingrained, thanks to the BSDs and Sun in particular.
-
-
Friday 16th February 2024 16:28 GMT Tom7
Re: Not *everything* is a file
It's slightly complicated by the fact that the Wayland protocol doesn't specify the IPC mechanism it uses, the compositor and the client have to agree on it. Weston, the reference implementation, uses UNIX domain sockets but there's no particular reason that a different IPC mechanism couldn't be used.
-
Sunday 18th February 2024 08:29 GMT Kevin McMurtrie
Re: Not *everything* is a file
"Everything is a file" has similar problems to "Everything is a URL path" in REST.
The file representation only works with tree-like data and simple concurrency requirements. It already starts to get a little weird with some devices having a hardware GUID, an assigned GUID, and a name all at the same time. What if you need to perform an atomic operation but the data is split across multiple paths? COW the base path? That wouldn't work at all.
I'm thankful that the abstractions aren't taken too far.
-
Monday 19th February 2024 21:54 GMT ldo
Re: Not *everything* is a file
On Linux, this principle is generalized slightly. You get three variants:
* Everything is a file
* Everything is a file descriptor
* Everything is a file system
The first one is the traditional Unix idea, with device files, Unix-family sockets and named pipes.
The second one started there (pollfd), but has been taken further on Linux (e.g. eventfd, signalfd).
The third one was also present somewhat on Unix systems (procfs). But again, Linux has taken it much further (sysfs, tmpfs, configfs, securityfs, cgroups and a host of others).
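A small Linux-only sketch of the second flavour (eventfd is not in POSIX): the kernel object never appears in the filesystem at all, yet it is still driven with plain read() and write() on a descriptor.

    /* Linux-only sketch: an eventfd counter is just another file descriptor */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int main(void)
    {
        int efd = eventfd(0, 0);            /* initial counter 0, no flags */
        if (efd < 0) { perror("eventfd"); return 1; }

        uint64_t val = 3;
        write(efd, &val, sizeof val);       /* add 3 to the counter */
        write(efd, &val, sizeof val);       /* add 3 more */

        uint64_t got;
        read(efd, &got, sizeof got);        /* returns 6 and resets the counter */
        printf("counter was %llu\n", (unsigned long long)got);

        close(efd);
        return 0;
    }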
-
-
Friday 16th February 2024 13:16 GMT MarkMLl
What is unix anyway?
"That's why several versions of IBM z/OS are on the Open Group List. That seems strange because they are not Unix-like at all."
Which has always struck me as very, very odd. It's like saying that OS/2 or Windows "is A Unix" by virtue of their compatibility layer. Or that a proprietary hypervisor becomes "A Unix" because it can run UNIX in a VM.
Or - and purists will find this really contentious - Linux becomes "A Unix" by virtue of running UNIX in a VM.
I have enormous respect for the classic IBM mainframe designs, and for the architecture of their OS, which was astounding once they finally got it working: a worthy challenger to established products such as Burroughs's MCP. But many of its strengths that, for example, allow seamless process migration between members of a cluster, are fundamentally incompatible with UNIX: claiming z/OS is "A Unix" does justice to neither.
-
Friday 16th February 2024 14:00 GMT Doctor Syntax
Re: What is unix anyway?
It's like saying that OS/2 or Windows "is A Unix" by virtue of their compatibility layer.
I was wondering about the reverse of that prompted by the end of W10.
It would be possible, if there were sufficiently free space available, for a Linux installer to compact and shrink an NTFS partition and create a new partition into which to install a bootable Linux. The user's data files could be linked in to the new home directory.
But would it be possible to retain in place any Windows applications that couldn't be substituted, run them via Wine and reassure them that they were still on the same Windows machine on which they'd been registered?
-
Friday 16th February 2024 14:26 GMT MarkMLl
Re: What is unix anyway?
That doesn't sound too difficult, and could basically be achieved by using different code and data search paths depending on the OS requirement of a particular application program.
The sticking point would be libraries and support programs that came with Windows, and couldn't- at least legitimately- be run on top of some other foundation OS.
However it does remind me of something that happened to a user on CIX (anybody remember CIX?) a considerable number of years ago, when WINE was much less mature than it is today. He came across something that he strongly suspected carried a (non-bootsector) virus, and idly ran it: it infected files in his application search path, which meant that malware-carrying files could be referenced directly by a carelessly-entered shell command.
Of course, Windows has improved enormously since that happened, albeit more by the inclusion of virus scanners and mandatory code signing than by inherent good design. Which makes me wonder how much of that "medicine" would work properly if the foundation OS was something other than Windows.
-
Friday 16th February 2024 15:15 GMT Doctor Syntax
Re: What is unix anyway?
"The sticking point would be libraries and support programs that came with Windows, and couldn't- at least legitimately- be run on top of some other foundation OS."
Wine can run Windows applications without Windows being present so it must be (not!) emulating them already. What I'm pondering is whether it would pick up applications already installed in Windows as if it had installed them itself. If so then, at least at first sight, there should be no difference between Windows being there and not being there except for the paths where it finds them.
On the other hand the original installation might be directing the executables to use the installed Windows libraries. So do the "libraries and support programs" come with Windows or are they part of it? There would still be a registered* copy of Windows installed. If the libraries are part of that then, while they're executing, Windows would be executing. And how would that differ from running the original installation in a VM, which would be an alternative way of going about it?
Much the same applies to any support programs from the original installation although there would be a greatly reduced dependency on them as the Linux installation would largely take over their roles.
* Assuming it was legit in the first place unless Microsoft were to actually cancel licences when a version goes out of support.
-
-
Sunday 18th February 2024 02:09 GMT alkasetzer
Re: What is unix anyway?
I did something like you are describing: a dual-partition setup (Win10 + Linux) using VirtualBox raw disk access. When I was under one OS I just executed the other in a VM, and could then run applications from within the VM OS (using VirtualBox seamless integration).
Then Microsoft released WSL. Afterwards Windows performance got worse due to Defender and so on, so I just ditched the Windows partition and started using Office 365.
Wine is great and all, but things don't always work as they should, and it's sometimes easier to just migrate to another application than to work through all that pain.
-
-
Friday 16th February 2024 14:47 GMT MarkMLl
Re: What is unix anyway?
Incidentally, if anybody is as perplexed by IBM's z/Wotzit range as I am, I can recommend the PDF at
https://www.ksp.kit.edu/site/books/m/10.5445/KSP/1000034624/
as a particularly good read. It goes into a lot of detail about IBM's mainframes up to the early 2010s, and does so in the context of "industry dominant" terminology: i.e. it describes a sysplex in the wider context of clustered systems and so on.
-
Monday 19th February 2024 21:23 GMT bazza
Re: What is unix anyway?
It's in the article: Unix is a standard for what API calls are available in an operating system, what kind of shell is available, etc. Unix is what POSIX is now called. It's a notional operating system that closely resembles a software product that was called Unix.
POSIX was created by the US DoD to make sure that software, ways of doing things, scripts, etc could be ported from one OS to another with minimal re-work. They also demanded open-standards hardware, for exactly the same reason. This is still in play today, and there's an awful lot of VME / OpenVPX-based hardware in the military world that is also used in other domains. The motivation was to get away from bespoke vendor lock-in for software / hardware, and it has worked exceptionally well in that regard. It's also the reason some OSes grew POSIX compat layers; DoD wouldn't procure anything that wasn't capable of POSIX (though they relaxed that a lot for corporate IT - Windows / Office won).
If one casts a wider net than the article does, one can see that OS/2 or Windows being considered "a Unix" is not that odd. There are operating systems like VxWorks and INTEGRITY that also offer POSIX environments, and yet have their own APIs too. The OSes that are commonly perceived to be *nix are simply those that offer only the POSIX API. Trouble is, even that's a bit uncertain. For example, at least some versions of Solaris had their own API calls for some things beyond POSIX (I seem to recall alternative APIs for threads and semaphores; it's a long time ago). Is Solaris a *nix? Most would say yes, but it wasn't just POSIX, in a similar way to OS/2 being not just POSIX. Linux is extensively diverging from plain POSIX - SystemD is forcing all sorts of new ways of doing things into being. Do things like name resolution the SystemD way (basically a D-Bus service call instead of a glibc function call) and you end up with non-POSIX-compatible source code.
-
Friday 16th February 2024 14:59 GMT Liam Proven
Re: Further reading?
> Does this story continue somewhere
I have not yet adapted part 3 into article form, but the very first paragraph of the story links to both part 1 and to the FOSDEM talk it is based on. You can download the script and read it now if you're impatient, but note, it is essentially speaker's notes and not an article aimed at anyone else, just me.
-
Friday 16th February 2024 13:56 GMT umouklo
I was thinking this was leading to a discussion on Plan 9
I actually downloaded Plan 9 at some point and gave it a try. I admired it from a tech perspective but wasn't sure how it could be useful at that time ... USL was actually across the street in Summit, New Jersey from where I worked for a few years ...
Best,
Lorie
-
-
Friday 16th February 2024 15:05 GMT bombastic bob
Re: One thing that has/should change is...
"the user experience"
That phrase and "UX" vs "UI" always makes me *cringe*
It also reminds me of a joke: How many people in Silicon Valley does it take to change a light bulb? Three. One to physically change the bulb, and 2 to "share in the experience".
(that is assuming that anyone in Silly Valley even KNOWS what a light bulb IS...)
-
Monday 19th February 2024 16:56 GMT Greg D
Re: One thing that has/should change is...
Don't get me started on the UI. Dumbing it down has been universally bad for every echelon of technology. How are files and directories THAT difficult to wrap your head around?
At this point, I'm not even sure the UI engineers know what they are doing at any of the major software houses.
-
-
Friday 16th February 2024 14:42 GMT Anonymous Coward
Linux used to be Unix
I no longer consider Linux to be Unix. I have worked with Unix on and off since 1988, and full time since 1997. For the last 8-10 years, Linux has become "Unix-like", abandoning many Unix philosophies. It has become more and more complex like Windows. Complexity is the bane of operating systems. Everyone seems to think they can do a subsystem better and forks the code. I do like some of the improvements made in Linux. And Linux has helped proprietary Unixes to improve their DE. Making Linux to look like Windows won't significantly increase its desktop presence.
-
Friday 16th February 2024 18:50 GMT MarkMLl
Re: Linux use to be Unix
Which takes us back to "stuff that Poettering hasn't touched."
I'm not necessarily saying that all his ideas are bad, but he has repeatedly demonstrated reckless disregard for the various layers which conventionally make up "A Unix" - kernel, libraries, daemons, application-level APIs and so on - to the point that de facto Linux is more defined by systemd than it is by the strict Linux kernel.
-
-
Friday 16th February 2024 15:08 GMT Phil O'Sophical
It took until the late 1980s for equipment like inexpensive 32-bit computers with onboard graphics, reasonably fast expansion buses (and thus, reasonably fast networking as a fairly cheap option) to start to be mainstream. Then Unix acquired networking support, as it still has.
I'd date Unix networking more from the late 1970s when UUCP became standard, or perhaps around 1975 with RFC681.
-
Friday 16th February 2024 15:34 GMT Throatwarbler Mangrove
What Unix cost us
You can find a contrasting (not necessarily conflicting) viewpoint here, and here, wherein the author challenges the notion that the Unix way is the best way, the idea that "everything is a file" is inherent to the fuzzy-headed concept of the Unix philosophy, and, for good measure, makes a case for systemd. I know that latter point will garner me a lot of downvotes from the initd zombies along with some long-winded angry screeds; rest assured that while you're frothing angrily at your keyboard, I'm sitting back and experiencing amusement.
-
-
Saturday 17th February 2024 22:25 GMT Throatwarbler Mangrove
Re: What Unix cost us
"Just puzzlement as to why you feel the need to be so unpleasant."
I will acknowledge that sentiment as being reasonable, and even though it's unlikely you'll read this comment, I'll respond for posterity's sake. The reason for my unpleasantness is that I have a great deal of pent-up disgust and frustration with the dogmatic and close-minded nature of many of the commentards here and in the wider Linux community, and it brings me a certain pleasure to lambast their perspectives. On the other hand, I suppose that being uncivil does somewhat undermine my persuasiveness, so fair point to you for calling me out on it.
-
Sunday 18th February 2024 16:17 GMT Anonymous Coward
Re: What Unix cost us
Well, lambast me away of my comfort zone I say! Swashbuckle me some stoutly bitterness for heaven's sake! Angerify the spiceless dish that some might pass as civil! Suplex-me out of comatose! That I can feel alive again, with all the contradictions this entails, the joy, and, the pain!
Sheeps are for making condoms!
-
Saturday 17th February 2024 10:50 GMT Pete Sdev
Re: What Unix cost us
One of the joys of using UNIXy systems is that, thanks to the userspace tools, many tasks are easy.
When the boss asked "what recipients has the contact form recently sent to?" I could quickly provide the answer with a one-line shell command (grep, awk, uniq).
Note that the UNIX philosophy isn't just "everything is a file", but also "use text files wherever possible", because we already have the tools. One of the (many) reasons systemd gets stick is that it goes against this philosophy.
The other part of the philosophy is that tools should be small and do one job well.
-
Tuesday 20th February 2024 00:19 GMT ldo
Re: systemd gets stick
Actually, it does use text files for all its config. And they are nice, typically short files in classic .INI format, where every directive means something, so you don’t end up copying and pasting a lot of boilerplate like you do with sysvinit scripts.
And it offers nice tools for managing these unit files, including systemd-delta so you can immediately spot where your system setup differs from the distro default.
Yes, the system log (journal) is a binary file. That’s actually useful, too. Because internally, all the timestamps are in UTC, and journalctl can display them in whatever timezone you wish. Very useful in these Internet-centric days, where your Linux server is in some colo facility in a different timezone from you, and a customer trying to use it might be in yet another timezone. So when they report a problem and the time (for them) that it happened, you can check the logs and relate that back to your own time, and see if (ahem) something you were doing at that time was the cause of the trouble.
-
-
Friday 16th February 2024 15:46 GMT StrangerHereMyself
We need a new Unix
I personally think we need a new Unix, not based on the original ideas but on new ones. Where networking and GUI's are first class clients and not some afterthought shoehorned in.
I'm a fan of microkernels so I'd like to point out that another microkernel is missing from the list: MMURTL. It's described in a PDF book which can be freely downloaded and has some interesting takes, such as *no dynamically linked libraries* (the dreaded DLLs under Windows). Dynamic libraries are the instigators of misery in operating systems and doing away with them would save a world of pain.
-
Friday 16th February 2024 16:21 GMT Doctor Syntax
Re: We need a new Unix
Draw up and implement your set of new ideas. What happens when a new new idea emerges? The test of the quality of your design is how well it gets accommodated.
The original Unix ideas were few and flexible so that networking and GUIs could be built on top. They didn't have to be built in as first-class clients (whatever that means).
If you start out by saying "this needs to be built in and that needs to be built in as special cases" then you're making assumptions about what belongs in there. Assumptions have a habit of becoming limitations.
If the new new idea violates the assumptions, you have to take the back off the system and build in another extra case. That way lies bloat.
The original ideas of extensible simplicity have lasted for decades and been scaled from some controller based on a Pi to supercomputers. That seems a good indication that they got things right.
If there's a case for a new Unix it has to be for removing cruft and getting back to the minimalism.
-
Friday 16th February 2024 21:10 GMT StrangerHereMyself
Re: We need a new Unix
Unix got a lot of things wrong, like security. Read up on the "confused deputy problem" and Capability-based computing.
Just because they shoehorned a GUI and networking into Unix doesn't mean it's flexible. You can shoehorn almost anything into any operating system if you want. It's just that it's not a nice fit.
-
Friday 16th February 2024 19:00 GMT Ken Hagan
Re: We need a new Unix
There is no moral difference between a service provided by DLL and one provided by a separate process. In both cases you have the benefit of being able to patch all clients simultaneously and the risk of breaking them all if you are careless about defining your interface.
-
Friday 16th February 2024 21:03 GMT Richard 12
Re: We need a new Unix
The only technical difference between IPC and a DLL is that the DLL and host process can automatically share all data and state.
This makes DLLs far more performant than IPC, because there is no serialisation or setup costs for individual calls.
For a Windows example, compare COM with DLLs. COM is both far harder to use and considerably slower.
There are advantages to IPC that can make it worthwhile in some circumstances - the 4GB barrier used to be a very big point in favour, for example.
-
-
Friday 16th February 2024 21:07 GMT doublelayer
Re: We need a new Unix
I'll have to read what they want to do instead, but while DLLs can cause a lot of problems, there's a reason they're often used. Nobody has to use one. You can statically link everything, or you can implement every library as its own program and communicate between it and something else. Each approach fixes some problems introduced by the concept, although usually opposing ones. They introduce new problems instead. Maybe those problems are easier to deal with or just better, but that is not guaranteed and it depends a lot on how you use your computer.
-
Sunday 18th February 2024 00:11 GMT J.G.Harston
Re: We need a new Unix
I think conflating the GUI and the operating system confuses things. The GUI shouldn't be seen as, or be part of, the operating system; the GUI is the graphical user interface with which the human user accesses the functionality of the operating system. People have been sucked in by pointing at the Windows GUI and saying "that's Windows", when Windows is the operating system, merely named after the functionality of the GUI that users use to drive it.
-
Monday 19th February 2024 04:37 GMT David Newall
No DLLs
I don't get it. I hark from the time when a.out ruled and shared libraries were unknown. Although everything was much smaller then, statically linking libraries was seen as wasteful of space, and shared libraries were the answer.
This "version hell" is a made up problem because shared libraries include version numbers (in their names) and ELF list the needed shared libraries by version. If you don't delete "obsolete" versions of a library, the loader will find whatever versions are needed, load and link, and you're running. So, yes, you can have a program then needs v1.6 running at the same time as a different program that needs v2.0.
-
Monday 19th February 2024 07:09 GMT J.G.Harston
Re: No DLLs
The problem is the plethora of programs that not only need v1.6 but need *exactly* v1.6 and die on a later version. Functionality should be >=needed, not =needed. A program that requires v1.6 should - nay, *MUST* - work on v2.0. A program that depends on bugs in a particular version of support code, and dies when the support code is improved, is broken code; likewise, a later version of support code that kills working functionality of its earlier version is similarly broken.
-
-
Tuesday 20th February 2024 00:12 GMT ldo
Re: the dreaded DLL's under Windows
If your idea of “dynamic libraries” is Windows DLLs, no wonder you hate them.
Note that shared objects on Linux have both file-level versioning and symbol-level versioning. This is why you don’t get “DLL hell” on Linux.
Windows could learn from that.
Oh wait, no it can’t, because they would have to throw away the current DLL scheme and start again.
-
Saturday 17th February 2024 14:36 GMT MarkMLl
Re: What is unix?
I broadly agree, but that needs closer examination.
"Back in the day", Digital Research sold CCP/M (aka MDOS etc.) with the claim that it could do anything that unix could do. However what that actually boiled down to was that /some/ programs originally written for unix could be recompiled for CCP/M, since they only used standard library facilities which could be emulated adequately by a decent compiler: Lattice C springs to mind.
fork(), as a system call, has well-defined semantics to which anything that isn't "a unix" struggles to conform.
fork(), as a library routine, is more tolerant provided that you don't start looking too closely at the memory semantics.
Any OS which claims to provide a good imitation of something more mature becomes a support nightmare, as more and more people uncover marginal behaviour. OTOH, if its emulation survives for more than a few years it provides an incentive for people to write and test their code for at least some degree of cross-platform portability.
DR's "Better DOS than DOS (and as good as any unix)" phase lasted perhaps five years. OS/2's "Better Windows than Windows (and as good as any unix)" for perhaps the same sort of time. Ditto for Linux's claims to have a subsystem that would run other unixes' binaries. But all of them were strong when running code specially written for their native APIs.
Which I suppose means that the days of Linux pretending to be "a unix" are actually long past, and that for the last five years or so almost everybody has been more inclined to treat various distreaux as "a Linux": with, like it or not, systemd, Wayland and the rest.
-
This post has been deleted by its author
-
Monday 19th February 2024 07:17 GMT J.G.Harston
Re: Fork
I'm not sure what you mean by "didn't share address space", but very definitely in early Unix your current process saw its own memory as starting at 00000, and fork() created a new process which likewise saw its own memory as starting at 00000. Something like for (char *a = 0; a < (char *)1024; a++) { printf("%02X ", *a); } would display the process's code in memory from 00000 upwards.
-
Monday 19th February 2024 16:12 GMT rivimey
Re: Fork
The special bit about vfork was that it deferred the allocation of physical memory to the new process and used CoW semantics to optimise future page allocations. Consequently, using vfork made much better use of memory for some use-cases, because only memory that had to be allocated was allocated.
Both fork and vfork provided the process with the appearance of a new address space. Fork did so from the get-go, creating a physical copy of the original process's data and code; vfork simulated this using the MMU.
In both cases, while the new process could read the old process's data, the moment either process wrote to a page the address spaces diverged.
The main benefit of vfork within Unix at the time was that in most cases a fork was followed almost immediately by an exec, which replaces all code and data with the new program, leaving only the process table entries (file descriptors and so on). Consequently old-fork was very wasteful, and vfork was used whenever possible. ISTR some instances where vfork semantics were not sufficiently similar, hence retaining both calls, but I forget what those were.
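The classic shape of that pattern, roughly (a minimal sketch, error handling trimmed): the child exec()s almost immediately, so any pages copied eagerly by old-style fork would have been thrown away anyway.

    /* sketch of the fork-then-exec pattern described above */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: replaces itself straight away */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");        /* only reached if exec fails */
            _exit(127);
        }
        /* parent: just waits for the child */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with %d\n", WEXITSTATUS(status));
        return 0;
    }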
-
-
-
Friday 16th February 2024 17:14 GMT Anonymous Coward
If you decide not to choose
you will still have made a choice
One of the things I have seen waste more time and resources (i.e. "money") in my experience is where outfits forget why they decided something a decade ago, and therefore are totally and utterly incapable of knowing if they should continue or move away.
"Yes, 10 years ago we went with "X" and we still review that decision against the original criteria" is not something you hear.
Ever.
-
Saturday 17th February 2024 02:28 GMT Benegesserict Cumbersomberbatch
Re: If you decide not to choose
Within a corporate entity, maybe.
In science, it's called a literature or journal article. You can't write one without citing similar pieces of work that establish the premises of your conclusions. And when you make them, your peers get open access to critique your conclusions. You therefore have a chain of trust. If it takes a hundred years for an error to be discovered, follow your citations to determine what you need to revise.
If you want a model for institutional memory that doesn't even depend on the discoverer being alive, you already have one.
-
Friday 16th February 2024 17:26 GMT Anonymous Coward
Objects are superior to files
Windows has (almost) reached three decades of consistent backwards compatibility guaranteed within user space. The NT (or VMS) tradition of very carefully abstracting everything away as securable objects, accessed via carefully defined userspace APIs, has resulted in a software stack where individual utilities can change radically without negatively impacting anything else in the system. By comparison, UNIX only ever guaranteed compatibility between kernel and userspace, with the exact same mistake being repeated in both macOS and Linux. The NT approach has given users free performance boosts such as GPU-accelerated GDI calls, ever-improving encryption and audio/printing/imaging enhancements for all their already-purchased software, often without any recompilation, updates or application modifications necessary. The UNIX approach by comparison has yielded the opposite: a culture of reckless abandonment and seemingly endless code wastage.
If we want to fix the complexity problem, it's time to embrace "everything as an object" and start stabilising API/ABI compatibility properly, since "everything as a file" just doesn't cut it anymore.
-
Friday 16th February 2024 17:30 GMT abufrejoval
That's a very long and windy buildup for Plan9
I've struggled for many years trying to understand and explain how Unix could survive for so long, given its utterly terrible shortcomings.
For starters, please remember that the very name "Unix" or "Unics" was actually a joke on Multics, an OS far more modern and ambitious and a fraction of the current Linux kernel in code size and finally open source today.
Everything *had* to be files in Unix, because the PDP only had a couple of kwords of magnetic core memory and no MMU, while Fernando Corbató made everything memory on Multics, a much more sensible approach driven further by single-level store architectures like the i-Series.
I love saying that Unix has RAM envy, because it started with too short an address bus :-)
And I was flabbergasted when Linus re-invented Unix in about the worst manner possible: there was just everything wrong about the initial release! I was busy writing a Unix emulator for a distributed µ-kernel inspired by QNX at the time (unfortunately closed source) so I could run X (the window system, not the anti-social cloaca) on a TMS 34020 graphics accelerator within the SUPRENUM project. I had access to AT&T Unix and BSD source code, so I wasn't going to touch his garbage even with a long pole...
...for many years, by which time none of his original code bits survived; but his social code, his excellent decision-making capabilities, had shown their value in accelerating the Linux evolution via developer crowd scale-out, far beyond what the best husband-and-wife team (Bill and Lynne Jolitz) could do.
I've always thought that the main reason why the Unix inventors came up with Plan 9 was, that they didn't want to be remembered for the simpleton hack they produced when they came up with Unix to make use of a leftover PDP that would have been junk otherwise. They felt they could do much, much better if they had the opportunity to turn their full attention to an OS challenge!
In a way it's like the Intel guys, who hacked the 4004, 8008, 8080 and 8086 but wanted to show the world that they could do *real* CPUs via the iAPX 432, i860 or Itanium.
So why did those clean sheet reinventions all fail?
The short answer is: because evolution doesn't normally allow for jumps or restarts (isolations can be special). It will accelerate to the point where the results are hard to recognize as evolution, but every intermediate step needs to pay in rather immediate returns.
(And if in doubt, just consider the body you live in, which is very far from the best design even you could think of for sitting in front of this screen)
Once Unix was there and had gained scale, nothing fundamentally better but too different had a chance to turn the path.
I've tried explaining this a couple of times; you be the judge of whether I got anywhere close.
But I've surely used many words, too.
https://zenodo.org/records/4719694
https://zenodo.org/records/4719690
A little more on the cost of code evolution:
https://zenodo.org/records/4719690
or the full list via https://zenodo.org/search?q=Thomas%20Hoberg&l=list&p=1&s=10&sort=bestmatch
-
Friday 16th February 2024 17:51 GMT martinusher
Step away from the keyboard......
Our notions of what a computer is, and by extension what an operating system is, are clouded because everyone knows that a computer has a screen, a keyboard (or other HID), a shell or other UI software and so on. This notion has done immense damage to the field of computing, because the systems we interact with as people are just a subset of computers as a whole -- yes, it's a big, important subset, but it's still a subset.
To answer the question about what UNIX is, or was, we might want to ask ourselves about the business the Bell System (which became AT&T) was in. Ultimately their core business was telephones -- point-to-point communication -- and to manage that they needed phone switches ('exchanges' in UK parlance). US switches were designed as a crossbar matrix, which is best controlled by a small computer, one that runs a quite complex algorithm to determine the best path a circuit might take through the matrix (or, more accurately, matrices). This, plus countless other diagnostics, accounting and management functions, is a complex task that's just not suited to ad-hoc software or the kind of batch systems that characterized business computers. A system like UNIX is an obvious fit -- it's both modular and flexible; everything about it just logically flows from the requirements of a computer that has to do work in the real world.
So my guess about what UNIX is could be "An operating system designed from the requirements down that's rooted (non-filesystem sense) in the real world".
-
Friday 16th February 2024 18:37 GMT PRR
Re: Step away from the keyboard......
> everyone knows that a computer has a screen, a keyboard (or other HID), a shell or other UI software and so on.
Long ago a computer (no longer a low-paid math student) was a mysterious machine in a faraway office that sent us utility and credit bills on "do not spindle" punchcards.
And MADE MISTAKES!!!! Or at least we blamed "the computer" for all bureaucratic problems.
Oh, and blinkin lights and constantly-spooling tape drives.
I don't think the unix folks were directly involved in telephone exchanges (AT&T/Bell was a VERY big operation). The ESS project was very special, made most computers look like toys. (look-up) Ah, the 3B20, running a unix, added features the ESS hadn't contemplated.
-
Monday 19th February 2024 17:42 GMT A.P. Veening
Re: Step away from the keyboard......
everyone knows that a computer has a screen, a keyboard (or other HID), a shell or other UI software and so on
In that case the seven servers I have running aren't computers. And neither is that modem/router thingy. Come to think of it, my (smart) TV doesn't have a keyboard either though it obviously has a screen.
-
-
Friday 16th February 2024 18:06 GMT mark l 2
My first experience of QNX was around the turn of the millennium: they had a demo with a whole GUI, TCP/IP stack, web browser, text editor and file explorer, which fit on just one floppy disk.
You could also download free ISOs of QNX 6.1, and I remember burning one to a CD-R (I still probably have the disc somewhere) and trying it on a Pentium 3. It was a fully functional OS that you could use as a daily driver, although the number of applications was limited, as I think it was mainly aimed at developers looking to write programs for QNX.
-
Friday 16th February 2024 19:30 GMT josiegross
Nice article
Really enjoyed reading this article.
I worked in USL (Unix Systems Labs) as a MTS (Member of the Technical Staff) during the 80's and into the early 90's and one small correction is that Research Unix was not a product of USL. Research UNIX was actually a product of "Department 452" within Bell Laboratories and although many of the innovations that were part of that distribution made their way into the USL product, they were distinct organizations.
I look back at my time there very fondly, working with some of the smartest and nicest people.
-
Friday 16th February 2024 19:55 GMT Vomlehn
The *three* things that make a microkernel.
The article makes the claim that two things make a microkernel: scheduling and memory management. It then mentions, in a backhanded way, interprocess communication, but this is actually the third and necessary leg of a microkernel. Not only that, but it is precisely IPC that gives monolithic kernels the performance edge they have over microkernels and which has kept microkernels on the periphery. There *are* ways to address this, at some cost in hardware, but this does not appear to be an area of active research.
-
Saturday 17th February 2024 13:12 GMT Julz
Re: The *three* things that make a microkernel.
ICL's Goldrush used a Chorus microkernel with a special high-speed IPC system that was implemented in both software and hardware. Basically it cut out any (well, most) buffer copying. This meant that calls coming and going across the userland/kernel boundary were not copied (fancy memory-protection hardware), and between-node IPC calls (fancy low-latency network hardware) were addressed as if local. Goldrush was a true UNIX, implementing a version of SVR4. Seems to be forgotten now.
-
Tuesday 20th February 2024 09:22 GMT JParker
Re: The *three* things that make a microkernel.
Based on my experience with large in-memory database development (DataBlitz, aka Dali) and the performance increases that can be obtained by not copying data (both disk/memory and memory/memory), I've wondered whether it is perhaps a fundamental mistake to use messaging as the fundamental IPC mechanism rather than something akin to memory-mapped files.
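A toy sketch of the idea (nothing to do with DataBlitz itself; Linux/BSD-style MAP_ANONYMOUS assumed): a shared mapping set up before fork(), so the two processes exchange data by writing into the same pages rather than copying messages through the kernel.

    /* toy sketch: shared-memory IPC via mmap instead of message passing */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* one page shared between parent and child; no copies on access */
        char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }

        if (fork() == 0) {
            strcpy(shared, "written in place by the child");
            _exit(0);
        }
        wait(NULL);                     /* parent waits, then reads the same pages */
        printf("parent sees: %s\n", shared);
        munmap(shared, 4096);
        return 0;
    }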
-
Monday 19th February 2024 07:47 GMT Bebu
Suggestion for title for systemd haters' tome
《Thinking about it, would there be a market for 1400 page tome about systemd? Any suggestions for a title?》
And in the Darkness Bind Them
With the cover encircling the title with the full verse in the original Tengwar.
For me the main thing that Unix originally did differently was to provide mechanisms for the separation of policy from implementation to some extent. I recall at the time I was pleasantly surprised that you could actually write your own cli (shell) with Unix which wasn't really a thing with the DEC PDP10 or PDP11 operating systems (or later VMS either I think.)
The relatively clean and abstract Unix system call interface was refreshing after learning assembly on a PDP10 where the system service interfaces seemed fairly ad hoc and inconsistent.
I always thought the "everything is a file" abstraction was more that every "thing" (object in the everyday sense) in the kernel was represented, or named, in the same namespace as traditional files (which are themselves fairly deep abstractions of often quite horrid block device hardware, e.g. the PC floppy).
A comment mentioned Unix STREAMS, which appeared in System V (and SunOS) and I think was implemented by DMR, but I think I read he later wrote that the scheduling between connected stream modules was a real problem, presumably with single-threaded kernels. I imagine implementing streams in a multithreaded kernel would be simpler.
-
Monday 19th February 2024 22:16 GMT ldo
Re: write your own cli (shell)
That was another innovation that came from Unix. An inseparable part of that was the fact that the shell ran as just another user process, and would spawn yet more processes to run your programs. And typically when a program terminated, its process terminated too.
In order for this to work well, process creation had to be cheap. Or at least, cheaper than on other OSes.
You can see why other OS designers looked askance at this idea: they saw it as wastefully inefficient. But it was, and still is, a remarkably powerful idea.
And yes, Windows is, to this day, one of those “other” OSes where process creation is expensive and something to be avoided.
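To make the point concrete, the skeleton of such a shell is tiny (a toy sketch: no quoting, no pipes, no job control - just the cheap-process idea):

    /* toy shell: read a line, split on whitespace, fork, exec, wait */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            printf("$ ");
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin))
                break;

            char *argv[32];
            int argc = 0;
            for (char *tok = strtok(line, " \t\n"); tok && argc < 31;
                 tok = strtok(NULL, " \t\n"))
                argv[argc++] = tok;
            argv[argc] = NULL;
            if (argc == 0)
                continue;

            pid_t pid = fork();            /* one cheap process per command */
            if (pid == 0) {
                execvp(argv[0], argv);     /* the program replaces the child */
                perror(argv[0]);
                _exit(127);
            }
            waitpid(pid, NULL, 0);         /* the shell just waits, then prompts again */
        }
        return 0;
    }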
-
Tuesday 20th February 2024 11:43 GMT R Soul
Re: Suggestion for title for systemd haters' tome
"Unix STREAMS which appeared in System V (and SunOS) I think was implemented by DMR"
It wasn't. He wrote and designed Streams at Bell Labs. That somehow got handed over to USL. Who turned DMR's elegant, clean code into a heap of shit. Which went into the even bigger heap of shit that was SysVR4. That Sun adopted for a while and rebadged as Solaris.
IIUC DMR's Streams never made its way out of Bell Labs or into a mainstream flavour of Unix.
-
Sunday 18th February 2024 06:27 GMT Anonymous Coward
UNIX and particularly Linux attitude
There is also the huge problem in the Linux world of its leadership's attitude.
Usually this goes along the lines of "I must always be right because I'm an arsehole with a colossal ego". Torvalds, Poettering and Drepper, for instance. They are all very good at saying that everyone else is wrong. Humility and admitting that they are wrong seems to be something that they are incapable of (at least to the outside world; Torvalds does seem to f-bomb anyone within the project, himself included, if he considers that they aren't up to snuff).
-
-
Friday 16th February 2024 22:45 GMT steelpillow
Not really
Interesting as ever, lots of stuff I didn't know. But also as ever, not quite as I recall it. Icon for the appropriate discussion venue. Meanwhile:
First, Liam's favourite "Linux is a UNIX" trope. This is nonsense; by that argument, Android is a UNIX too. Just because a handful of distros swim, quack and walk like a duck does not mean they all do. I mean, reality check here, please. The most we might want to claim is that those few Linux distros are Unices. But even there, one might suggest they just happen to look and fly like unices, in the same way that a replica De Havilland Comet Racer is not a real one, even if it is a thoroughbred bitch in the stall.
Then, if we want a Swiss Army knife of an OS, do we actually want networking and GUIs in the OS core as such? RISC OS put the GUI in the core and it eventually became clunky and outdated; design choices such as the three-button mouse failed the test of time but proved too deep in to fix. Baked-in networking is just a massive security risk. In both cases, choice is critical in tailoring the system to your needs. Baking them into the OS is not offering choice.
Somewhere I lost the core thread - where exactly in the tale did we end up coding ourselves into a corner? Now, that is probably because I am old and ugly and only have three cylinders still firing, but I'd still like it in a pithy one-liner.
-
-
Saturday 17th February 2024 17:56 GMT steelpillow
Re: Not really
Nope. Android is at best a UNIX replica. It is not a real UNIX. It's not even a very good replica; you certainly can't install the average UNIX app such as Oracle for UNIX and expect it to work. FFS, stop changing the meanings of words to suit your desired sophistry. As any programmer knows, that game doesn't work with strongly-typed languages, and it creates havoc with loosely-typed languages like English.
-
Saturday 17th February 2024 22:52 GMT Anonymous Coward
Re: Not really
> the average UNIX app such as Oracle for UNIX
For "a certain value of average"?
That is itself a novel way of defining UNIX - it has to be able to run my particular favourite, pretty dang large, application. Pretty sure that "definition" would force most (damn near all of the ones discussed in the article) of the UNIX implementations to leave the room, heads hanging in shame.
-
Saturday 17th February 2024 23:05 GMT Anonymous Coward
Re: Not really
> design choices such as the three-button mouse failed the test of time
Ah, that explains why this cheap generic mouse only has two buttons - oh, wait, it has a wheel which can also be clicked, which makes, let me see, three buttons!
Now you are going to reply that the *OS* doesn't enforce a *specific* meaning on the third button[1], therefore it isn't really useful and is still a failure.
-
Sunday 18th February 2024 20:42 GMT steelpillow
Re: Not really
>sigh!< The RISC OS system was three buttons for Select, Menu and Adjust. Adjust was usually a second menu with more arcane options (If you really hate yourself, the RISC OS Style Guide tells all). Do, please, show me the contemporary apps which use the mouse wheel for a third, Adjust menu in this way. No, on second thoughts, keep it for my deathbed.
-
Monday 19th February 2024 21:44 GMT Anonymous Coward
Re: Not really
>> Now you are going to reply that the *OS* doesn't enforce a *specific* meaning on the third button[1], therefore it isn't really useful and is still a failure.
> Do, please, show me the contemporary apps which use the mouse wheel for a third, Adjust menu in this way.
Yup, called it.
>sigh!< Steelpillow is the one who gets to define what "Unix" is. Steelpillow says that the middle button (*button*, not *wheel* btw - get it right) is a complete failure because the OS doesn't enforce that it *always* bring up an Adjust menu.
Is there anything else in the computing world where we are so clearly in the wrong, Oh You Mite Brain?
PS
Windows leaves the middle button (and, if you happen to have it, the fourth X button) for use by the app - WM_MBUTTONUP etc has been in windows.h for a long time. That rarest of apps, the web browser (you may have come across it?) gives purpose to the middle button click - it even changes it depending upon where you click! Ooops, no, sorry, that isn't *precisely* the way that RISC OS used it, so it doesn't count.
PPS
You are clearly Old and Wise and have The Knowing of the Ancient RISC OS, but you may like to look more deeply into the Archives of Computing and not just what *you* happen to have stumbled across. Such as, the three button mouse predates RISC OS: for example, Smalltalk's "red (left), yellow (middle), blue (right)" buttons, used for "select", "invoke action" and "act on view / meta / other UI" - still honoured by Smalltalk. Oops, sorry, clearly nobody cares about Smalltalk, so that didn't count.
PPPS
> keep it for my deathbed
No, no, must resist the temptation to use the obvious retort there.
-
Tuesday 20th February 2024 17:14 GMT steelpillow
Re: Not really
The important point is that RISC OS system did not catch on. You can logic-chop your way through a thousand piles of bent words but it won't make the RISC OS a success. At least my brain is a mite bigger than yours.
You say that Android is a UNIX. Has anybody ever put that to the test suite and seen it through to certification? I do like evidence-based discussion.
-
Monday 29th April 2024 16:40 GMT FeRDNYC
Re: Not really
Long prior to the advent of the wheelie-mouse,¹ Sun workstations all shipped with three-button mice. (Three-button laser-optical mice; also a rarity for the time. You needed a special mirrored mousepad to use it, and when that got scratched up it was a nightmare, but it still beat de-gunking your balls.)
Now, one can't really make arguments about Solaris "catching on" or not, considering its users didn't really choose the OS — it "caught on" for Sun boxen the same way Windows did for PCs. But it's still around today.
Notes
1. The wheelie-mouse is hands down one of the greatest advances in computing I've personally experienced. I mean that with complete sincerity, BTW. The scroll wheel, in combination with the contemporary advent of the tabbed browser interface, completely transformed the way I interact with information in the digital space. Being able to scroll without having to click anything or move the mouse pointer? And, being able to follow document links by middle-click "branching" into background tabs, without abandoning the context of my current document? Revolution!
-
Saturday 17th February 2024 01:40 GMT FeRDNYC
Only Wayland?
So any Wayland evangelists out there, tell us: where in the file system can I find the files describing a window on the screen under the Wayland protocol? What file holds the coordinates of the window, its place in the Z-order, its colour depth, its contents?
Do those parameters exist somewhere on the filesystem when running an Xorg server?
-
Saturday 17th February 2024 14:51 GMT MarkMLl
Re: Only Wayland?
While I agree, things like the position on a screen or its Z-order should be reflected by metadata in the directory structure rather than file content.
That sort of thing was completely overlooked in UNIX's design, which is why there were attempts like Alpha-Windows on character terminals and why window position and furniture are separate from window content on X11.
But even if we ignore the positioning aspect, we have to ask ourselves how best to represent process output as a file. Is it going to be a stream with embedded positioning commands etc.? Or is it going to be a sequence of rows each with a fixed number of columns representing what is actually presented to a user?
The first of those might be appropriate if the output device is a glass teletype, but once one starts considering any sort of smart terminal or form-based display one has to wonder whether IBM were actually right when they defined that fileset metadata included block and record sizes.
-
Saturday 17th February 2024 23:18 GMT Anonymous Coward
Re: Only Wayland?
> That sort of thing was completely overlooked in UNIX's design, which is ... why window position and furniture are separate from window content on X11.
So overlooking a design feature led to an obviously good result (window content should always be separated from its position - the known usefulness of the view/window transform predates the availability of the interactive UI, let alone the GUI)? While we're at it, separating the furniture and the content is laudable as well.
We can only hope for more such fortuitous overlooking in our designs.
-
Saturday 17th February 2024 16:36 GMT Mage
Did I blink and miss it?
One of the reasons for GNU, BSD etc. was the conflict between AT&T / Bell and the universities. A lot was done by universities and others that wasn't under an AT&T contract or paid for by them. But Bell/AT&T turned round and said that AT&T owned it all.
Linux is strictly speaking just the kernel, and lots of the OS are GNU-family tools: simply reinventing and rebuilding what AT&T said was theirs. Very sad.
However, great article.
-
Saturday 17th February 2024 23:21 GMT Persona
Man pages
For it to be UNIX it has to have man pages. In the days before the internet there was nowhere else to go for help. If you wanted to learn Unix you read the manual pages. So if it doesn't have man pages, then for me it's not Unix. The reverse however is not true as non Unix systems have copied that rather excellent feature.
-
-
Tuesday 20th February 2024 18:08 GMT Persona
Re: Man pages
Nope, with a 10 MB hard drive there wasn't room. They came on paper, as in "pages", and with the Ironics system I used first (probably Unix Version 8) they even came in half-letter binders. I read the manuals cover to cover, which was hard as they were alphabetically arranged. One of the first system calls I read about was "accept(3)", and that was my introduction to socket programming. I had never even heard of "sockets" before. It was utterly meaningless, and so was bind(3) a few pages later. Things got a bit clearer with listen(3), but it was not till nearer the end of the manual, when I got to socket(3), that I had all the bits (and having read read(3) I could guess there would be a write(3)). A day or so later, when I was in my car, it all clicked into place in my mind and I knew how to use sockets. Reading that manual a second time was so much easier.
When I moved on to Sun systems they were still shipping the manuals on paper. It was quite a while before they started shipping them on CD ROM.
-
Monday 19th February 2024 09:09 GMT geoff61
Linux certified as UNIX
> in the Open Group register of UNIX® certified products, there used to be two different Linux distros, Huawei EulerOS and Inspur K/UX, both Chinese CentOS Linux derivatives. Like it or not, this means that Linux isn't "a Unix-like OS" any more. It is a Unix.
No, it means that those two specific Linux distros are (or were) UNIX. Huawei and Inspur would have had to make a huge amount of changes to their systems in order to pass the certification tests, similar to what was done by Apple to OS X (as described by Terry Lambert in a post to Quora). In fact, it could be argued that those changes mean EulerOS and K/UX are no longer Linux, since Linus Torvalds famously refused to make at least one of the changes that would be needed to make the kernel conform to POSIX. (The one I'm sure of is that POSIX requires kill(-1, sig) to include the calling process in the set of processes the signal is sent to, but Linux doesn't include it; I vaguely recall there were more such cases.)
-
Monday 19th February 2024 22:09 GMT ldo
Re: Linux certified as UNIX
Mr Proven has this bee in his bonnet about insisting on pushing the term “Unix” beyond its trademark usage.
This is why we normally say “*nix” to describe the way a decent system should behave. That is certainly what we think when we say “Unix”, but we are not allowed to call that “Unix”.
Or, you know, give up on the name “Unix” altogether, and just say “Linux”. Because that is the standard *nix system now. And the only other ones left, the BSDs, are also slowly, grudgingly, adopting some Linux features too. So they recognize Linux as the de facto standard too.
And yes, that includes systemd.
-
-
Monday 19th February 2024 09:25 GMT thvv
so... Multics is a Unix
almost, anyway
- hierarchical file system, but the root is '>' rather than '/'
- shell, check
- case sensitive file system, check
- Commands like ls, cat, echo, mkdir, rmdir, touch: Multics has list (ls), print (pr), echo, createdir (cd), deletedir (dd), touch .. close enough.
- Plain text files, check
- commands connected with redirection and pipes: redirection, yes. the pipe symbol was not added to Multics till the 80s in imitation of Unix.
(the initial Multics I/O system was mostly designed by Bell Labs folks)
additional details: https://multicians.org/unix.html
-
Tuesday 20th February 2024 04:12 GMT Herby
Certification of Unix
Having been on the front line of this process, I can relate that it is a trying exercise. You test ALL sorts of things that make things Unix. The shell, the library, the system calls, and (last but not least) the C compiler. There are verification suites for all of these things, and a "simple" run can take over a day (at least an overnight exercise).
Others have opined that macOS is case insensitive. This is not exactly true. The latest file system promulgated by Apple makes case insensitivity a file-system parameter. Turn it on or off at will.
Of course things can get tricky at times. You get a C compiler that emits code with "speculative executions", and then you write a routine (Gamma functions come to mind) that exercises this, and the speculative execution generates "errors", which you don't really want, and it fails. Ooops!!