Igalia again?
I'm not sure what they're putting in the water there, but it's clearly working.
Chimera Linux is a new distro under construction that is not only systemd-free, it's GNU-free as well. Its creator hopes to reach alpha testing this spring. The project began in mid-2021 but has already made considerable progress. Its solo developer is Czech programmer Daniel "q66" Kolesa, who gave a …
I have been doing some numerical (ODE/chaos) work, using exclusively long doubles (please don't attempt to persuade me they are unnecessary!), but found that on aarch64 (which I run on my Raspberry Pi 400s) the trig functions are merely double precision. I can't find out whether this is ever going to be fixed…
From Wikipedia
"In C and related programming languages, long double refers to a floating-point data type that is often more precise than double precision though the language standard only requires it to be at least as precise as double . As with C's other floating-point types, it may not necessarily map to an IEEE format."
There is nothing in the C specification that requires a long double to be more precise than a plain double; in fact, the standard specifies very little about the real sizes of data types. This has been a long-standing feature of the integer and floating-point types, all the way back to the original K&R.
It is also possible that the underlying hardware does not provide quadruple-precision floating-point arithmetic (come on, we're talking about a Raspberry Pi here!), so if it were actually implemented, it might have to be done (slowly) in software.
So no conflict with the standard, it's not a defect, and won't be 'fixed' any time soon.
Was that an attempt? ;) Doesn't matter. I want them, and I can get them on all GNU/Linux systems (and probably on BSD; I haven't looked).
Technically, the extended precision is based on 80-bit hardware floating point on "intel" systems, 64-bit hardware floats on 32-bit ARM, and 128-bit software floats on aarch64.
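For anyone wanting to check what a given platform actually provides, here is a minimal C sketch (the macros are standard <float.h> fare):

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* LDBL_MANT_DIG is the mantissa width in bits: 53 means plain double,
       64 means x87 extended precision, 113 means IEEE quad (binary128). */
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    printf("mantissa bits:       %d\n", LDBL_MANT_DIG);
    printf("decimal digits:      %d\n", LDBL_DIG);
    return 0;
}
```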
MUSL on aarch64 does 128-bit floats fine inherently; it is just the math library that clobbers it.
Perhaps you will now want to explain to me that SW floats are slow? I still don't care - I want them ;)
I misunderstood your comment. I had not realised that you were saying that although MUSL does software 128-bit floating-point arithmetic correctly on AArch64, some of the other maths functions in the library don't work to this precision. That does seem a bit of an omission.
I was commenting on something much more generic with C data types. Sorry.
NP. I'm not trying to argue here - no apology necessary, I was just making a point. Incidentally, I did go on IRC to ask the MUSL guys; they confirmed my observations and said they could fix it, but it was a low priority. I think they would need to port a lot of stuff from (I think) libquadmath, and it would be a lot of work replacing stubs, or something like that.
long double is 64-bit on musl in general; i doubt this is going to be fixed, and IMO it's a good thing (as you can never really be sure what long double is going to be, it's never guaranteed to be better than double precision)
you can use explicit quad-precision types if necessary, at least then you have a guarantee that they are indeed quad-precision (though of course, you cannot use standard c library with it for math ops, so a separate library would be needed, gcc provides libquadmath but llvm does not, but perhaps this could be addressed at some point)
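For illustration, the explicit quad-precision route with GCC's libquadmath looks something like this (GCC-specific; as noted above, LLVM ships no equivalent; link with -lquadmath):

```c
#include <stdio.h>
#include <quadmath.h>   /* GCC's quad-precision math library */

int main(void) {
    /* __float128 is guaranteed IEEE binary128 (113-bit mantissa),
       unlike long double, whose meaning varies by platform. */
    __float128 x = sqrtq(2.0Q);

    char buf[128];
    quadmath_snprintf(buf, sizeof buf, "%.33Qg", x);
    printf("sqrtq(2) = %s\n", buf);
    return 0;
}
```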
I built a floating point emulator at AMD in the '90s. I've seen things you people would not believe.
There is a LOT of seriously ugly behavior past that 53rd bit. I don't know your education or background, but please look VERY carefully at the code being generated to ensure that you understand what you are getting. i86/i64-based extended precision is one thing. Double-double is something else entirely. Which gives more precision depends ENTIRELY on how double-double is actually implemented, and where you are in the operand space.
Seriously, while I'm well known here for "use the **** library" when it comes to crypto, here I'm the opposite: the libraries that I have seen have VERY disappointing characteristics. If you are technically competent enough to demonstrate that 53 bits is not enough, you can probably develop a library that is reasonably fast and does what you need.
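To make the double-double point concrete: the technique is built out of error-free transformations such as Knuth's two-sum, and how carefully a library chains them determines what you actually get across the operand space. A minimal sketch of the basic building block:

```c
#include <stdio.h>

/* Knuth's TwoSum: computes s = fl(a + b) plus the exact rounding error e,
   so that a + b == s + e holds exactly. Double-double libraries chain
   operations like this to reach roughly 106 bits of precision. */
static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double bv = *s - a;
    *e = (a - (*s - bv)) + (b - bv);
}

int main(void) {
    double s, e;
    two_sum(1.0, 1e-30, &s, &e);
    printf("s = %.17g, e = %.17g\n", s, e);   /* e recovers the lost 1e-30 */
    return 0;
}
```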
i don't like Stallman, but that's hardly a reason in itself to avoid GNU things
but then, the project doesn't actually avoid GNU things (as a matter of fact, many of them are packaged and anyone can install them); it's effectively free of GNU things in its core (as in you can boot a system that does not have GNU components in it), but that's more of a side effect of various decisions than anything else
i still think that "avoid GNU stuff" is on its own too shallow of a goal for a project to hold up in the long term, and to most people pretty meaningless; but that's fine, because for chimera it's a means to an end, the BSD components were simply the best and most fitting choice at the time (just like the LLVM toolchain was the technically superior choice, for example)
GNU tools are GPL licensed, which a lot of companies hate because they can't just take it. You'll see a lot of people trying to replace GPL with MIT licenses.
I'm not sure that running the BSD userspace on Linux means much. I assumed that most of it could be compiled on most Unix systems already (and that would definitely include Linux).
In his presentation he says one reason is to prove that Linux does not equate to GNU/Linux.
Good enough reason for me, but then he describes one of the biggest pains for those who need to modify a distribution: the init system. For embedded, single-board people, it really sucks being forced into using Ubuntu if your SOM maker only supports it (and has proprietary dependencies in their variant of it). I agree with his assessment that systemd was needed but just isn't that great; it's got problems, like all init systems. I'm excited by his init system being designed to be modifiable down to bare initramfs yet able to work with GNOME (not that I require GNOME usually; it's just a great validation step/forcing function for development). systemd just gets more and more brittle, and it feels like even with plenty of experience plumbing with it and reading the documentation, it can still break unexpectedly from subtle changes.
From my point of view, the GNU toolset is going in a direction that I don't like.
I was annoyed when they pulled netstat, route, ifconfig et al. from the TCP/IP packages, and I was extremely cross when they pulled pg from coreutils (I work on traditional UNIX systems for my job, and muscle memory is persistent). It seems that some decisions are made on the whim of the package maintainers, who won't listen to counter-arguments.
So, yes, having a free platform with alternate tooling is of interest to me, although I would ask: if you want the FreeBSD tooling, why not just run FreeBSD?
The reason why I'm currently sticking with mainstream Linux distros is the scope of the pre-built repositories, although the above changes and systemd are challenging that decision.
netstat, route, ifconfig et al. on Linux were never provided by GNU; they are part of net-tools (which is likely still available for installation on your distribution). The biggest reasons most distros switched from net-tools to iproute2 were maintainership (net-tools had very little activity at one point, and all user-space support for cool new kernel networking features was going straight into iproute2), and for whatever reason net-tools broke the output format of ifconfig, which "forced" many distros to carry old net-tools versions.
net-tools is marked as deprecated by many distributions, but I knew that it could be installed if it was not in the standard set for a distro. Unfortunately, I don't have admin access to all of the Linux systems I work with, and do not have total freedom on some of the ones I do, so it's difficult on those systems.
I had assumed that, because it was part of the fundamental networking in older GNU/Linux systems, it was part of the GNU-provided toolset. I must re-educate myself about exactly how much of the toolset is actually provided by the FSF.
The sockets API that everyone decided to use came from Berkeley and is known as BSD sockets.
But the implementations are not derived from BSD code. Infamously around 2000 Windows was using a BSD-derived TCP/IP stack, but that was rewritten for Vista. There’s probably the odd utility they haven’t bothered to rewrite, but the core of Windows or Linux networking is not BSD.
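For context, the "BSD sockets" interface is the socket()/bind()/listen() family that every mainstream OS still exposes, whatever code sits underneath. A minimal sketch of the classic Berkeley sequence (standard POSIX calls; the port number is an arbitrary example):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    /* The classic Berkeley sequence: socket -> bind -> listen. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port        = htons(8080);   /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }
    if (listen(fd, 16) < 0) { perror("listen"); return 1; }

    puts("listening on 127.0.0.1:8080");
    close(fd);
    return 0;
}
```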
Disclaimer: I'm not claiming the scenario I'm about to propose is the (or even a) reason this project exists, but it is one reason why someone would create such a project:
Because of the (L)GPLv3 (although getting rid of as much GPLv2 as you can is also a plus for this scenario). Unlike the GPLv2, the GPLv3 requires hardware manufacturers distributing GPLv3-covered software to allow users to run said software on the hardware they sold, to prevent what TiVo did (they shipped all the GPLv2 bits, but you couldn't run your own freshly baked OS image because the hardware would only load signed images). IIRC, GPLv3 also closed a loophole regarding build systems (IIRC, you can arguably legally distribute your modified version of any GPLv2 software stripped of its build system, or "forgetting" to add the build-system plumbing for your modifications).
Note that they didn't get here first; Android has been (L)GPLv3-free from day one, with its own ground-up implementation of the Linux userspace (busybox is pretty much the only base component not written by Google). Before that, macOS stuck to the last stable GPLv2 versions of gcc, binutils/coreutils, bash, and samba until Apple had its own replacements (it invested heavily in LLVM, got coreutils equivalents from BSD, adopted zsh to replace bash, and wrote its own ground-up SMB implementation).
PS: Yes, Linux is GPLv2, but license-enforcement efforts on the kernel have been few, and there is seemingly a lack of interest in enforcement within the project.
it's definitely not a reason, as this is a free software project and therefore concerns of random corporates about the gpl don't matter to me whatsoever
i do prefer less restrictive licensing for my own code, but there is absolutely nothing blocking (l)gpl3 stuff going to chimera (and there is plenty); the only real concern is agpl-licensed stuff because of potential legal concerns, but this is also of concern to mainstream distros
I can understand removing SystemD but why the push to remove GNU?
I suppose it just shows how far detached Linux has become from its roots. 30 years ago, "open" referred to Open Systems, not Open Source: the idea that you could freely migrate between suppliers and implementations. Much recent development in the Unix world has thoroughly shat on that idea, beginning with the assumptions that Unix = Linux, make = GNU Make and cc = gcc; then that X must be running one of a limited number of desktop environments; then that it uses a particular messaging system not specified in any standard (D-Bus); and now that it uses a particular init system.
Make no mistake - that is still a walled garden vendor lock in, the fact the specific implementation is "open source" means diddly squat as far as interoperability is concerned. Those who forget history are doomed to repeat it and all that.
Because Gnu's not cool anymore... and commercial players are more interested nowadays, thus there is greater interest in licenses that don't bind you like the GPL does. That, and they resent having to say "Gnu/Linux".
I still prefer Gnu's tools and use them where still possible. If something will configure with GNU autoconf, compile with gcc, link with GNU ld... that's what I'm using. I would not install a Linux system that is not GNU based (i.e. glibc, libstdc++ from gcc etc.). I'm not going to be dependent on a distributor for patched sources for things because they won't compile against the shit they have chosen.
> I would not install a Linux system that is not GNU based
That's your choice. Why do you want to deny others the same choice?
And yes, often the Linux option is the shit one. I was about to describe a couple of specific examples, but I have had bad experiences flagging up issues with prima donna authors who can do no wrong. Yes, that goes for libraries that are essentially universal on any Linux system.
One consequence is that providers of proprietary software/systems are better able to use Linux, with fewer concerns about the GPL.
So this may chip away at FreeBSD, which sometimes gets picked because it is GPL-free. But the Linux kernel is better optimised for today's CPUs, so Chimera could be a best-of-both solution for some.
The Avro Shackleton is widely known to aviation buffs as "50,000 rivets flying in close formation" and is presumably the source of today's Linux wit.
Curiously, there is indeed a "Shackleton OS", though it appears to be no more than a concept inspired by a mashup of Linux and iOS >shudder!<
[Author here]
> the source of today's Linux wit.
To be completely honest, I read this somewhere, long ago, and it stuck with me. I was unable to find an origin of the line, and most of what I find when I search is various instances of myself using it over the years in various places.
So, I absolutely do not claim it is original, and I think you're right about its ultimate origin, but I don't remember where I got it from, I'm afraid.
I'm pretty sure I've heard the same description applied to the Lancaster, but I can't provide a source for that.
My favourite quote which is definitely about the Avro Shackleton was from a documentary about them which was filmed shortly before they were retired, in which a pilot was asked whether he liked flying them and replied something like "we've got leather seats and Rolls Royce engines, it doesn't get much better than this!"
[Author here]
> "Gnome on Wayland" :( Why?!
I think I said why, didn't I?
Anyway, the architect and maintainer is "q66", and I see he is commenting here; he has pointed out to me that the installation image does contain other GUIs, including Enlightenment and some tiling WMs.
I guess that's because some distros required us to jump through hoops to install another WM or DE or GUI... (I know I had trouble, but that was a while ago). Anyway, good luck! I might check it out, just out of curiosity. Binary compatibility sure helps a lot there (I could build, but... nah, too lazy at the moment - or rather, another focus).
Around 20 years ago, I went to see Stallman give a talk in London. To get a sense of the time, the crowd was full of black t-shirts including a selection from ThinkGeek, a large contingent of "Perl Monks", and a couple involving the BSD daemon apparently buggering Tux the penguin. If you've ever been to a Stallman talk you'll know that he has a Q&A at the end, during which there are usually a couple of edgelords who try to ask awkward questions. On this occasion a guy stood up and asked what Stallman thought about a project at the time that sought to replace all GNU software in Linux with BSD equivalents.
Stallman's response was "I'd say, what an enormous waste of effort."
"My people have already done it, so no one else needs to worry their pretty little head about it" does rather seem to sum up Stallman's worldview.
Once Henry Ford started mass-producing automobiles using an assembly line, we really didn't need any other automobile manufacturers. What an enormous waste of effort, right? Really, why do anything?
I know some corporates, e.g. Google, will not allow GPLv3 inside their code. It's difficult to speculate why; perhaps it creates a hardware creep akin to Ballmer's description of Linux, which seems to suggest GPLv3 does what it says on the tin, and if there's no need to pick up the tin then don't.
Chimera is a pre-alpha by one person (see the appropriate XKCD). It might or might not be interesting; I've got no idea. Good luck to him scratching his itch.
I suppose the number of people involved will multiply based on the number of comments on this thread.
Perhaps one way to discover if any of this really matters is when Chimera gets to an RC and there are several developers.
why is "make a good operating system" not an acceptable answer? do i even need a reason to start a project?
i don't think there will ever be a simple answer anyway, as it wouldn't make sense; there are too many smaller goals that may not count as super strong individual reasons, but the whole is greater than the sum of its parts
From a security point of view it is interesting. GNU has become quite big in LOC terms and does questionable things (e.g. glibc's NSS mandating dynamic linking), although GNU has a reputation for being well implemented; at least 20 years ago, its bugginess was drastically lower than that of its Unix competitors. Now many wheel-reinventors purport some (very subjective, IMHO) "cleanliness". I never liked that term; it sounds like the authors have a compulsion to go around the house, always looking for the last speck of dust to wipe off.
> The talk has a lot of discussion of hardening and error checking, as one might expect from someone whose day job involves developing mobile web browsers and the like.
There are quite a few specialisms where I would expect that, but none of them are anything to do with mobiles or web browsers.
I’m merely a dabbling non-geek in the *nix world, but that comment/connection struck me as a bit odd too.
Chimera certainly sounds like a distribution I'd like to try when it's stable enough for non-geek use and (hopefully) available with a lightweight desktop such as XFCE.
Kudos to @q66
Beginner question: I notice that the OS is binary compatible with Linux, but uses a different compiler and library.
In my (8-bit embedded app) world, using a different compiler and library winds up with me not having a binary-compatible object. Not that it makes any difference either way, but how does it work for LLVM, musl and Linux?
that's actually a little bit complicated
things are directly compatible at kernel level, therefore container-based solutions (e.g. flatpak) will work out of the box
as far as running binaries directly within the host userland goes - right now it's only 100% compatible with binaries compiled against musl *and* the compiler-rt runtime (from llvm)
however, the builtins library from compiler-rt as well as the llvm libunwind implementation are technically abi-compatible with libgcc(_s), just under different names and with different linkage, so it should be possible (and not extremely hard) to create a very thin shim that would allow running any binaries compiled for any other musl environment directly, including those built with gcc and against the gcc runtime - even right now one can just symlink libunwind to libgcc_s and a lot of binaries will just run (though it's not guaranteed, as libunwind alone is a subset)
as for directly running stuff compiled for glibc, that's a lot more complex; there are efforts like gcompat, but their actual compatibility is limited - if the gcompat shim managed to map the glibc ABI fully against the musl environment, in theory you could run anything, but that's a complicated matter
so yeah, best stick to flatpak and/or other container solutions for running foreign things
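One related gotcha when building for both libc worlds: glibc identifies itself with a compile-time macro, while musl deliberately defines none, so the usual (imperfect) probe looks like this sketch:

```c
#include <stdio.h>

int main(void) {
    /* glibc defines __GLIBC__; musl intentionally defines no identifying
       macro, so "not glibc" is the best a portable probe can say. */
#ifdef __GLIBC__
    printf("built against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
#else
    printf("not glibc (possibly musl, which has no identifying macro)\n");
#endif
    return 0;
}
```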
I guess that the only downside of using apk over pkg is that packages from apk are likely to take a massive dump in your /etc, /bin, /usr/bin and other directories, whereas I like how FreeBSD generally sticks to putting package-manager-installed software in sane locations rather than in the base OS locations.
Linuxland doesn't really observe any distinction between the base OS and subsequently installed software; probably because of the author's description of it as lots of individual parts flying in close formation - so what does it matter if you start adding more and more parts to the formation, right?
Why is duplicating the same functionality in the GNU and BSD toolchains a waste of effort? By that argument, duplicating the kernel's function in both Linux and BSD is a waste of effort. They are released under different licenses, and having the choice of license is a GOOD THING.
The talk was "highly technical"? I guess that reflects most Linux kids these days... they don't know what's going on under the hood at all. I also find that echoed in people asking "Why do this?" It's called messing with technology, like when Linus wanted something better than Minix on cheap PCs. I.e., do something other than consume.
not knowing what's under the hood is fine, and for most people the talk is going to be fairly technical, but it's supposed to be
the "why do this, this is pointless" people are another matter, they tend to have their mouth full of how linux is about choice and start whining once somebody does not support their system, but when somebody else expands the choices beyond what they approve of, they are like "no, not like that"
it's a rather hypocritical stance IMO, and reflects how people tend to care only once it restricts them personally - everything else is subject to contempt
Perhaps there was never an intention that Chimera should be a long-term distro as opposed to a one-off making a point, but maintaining a distro is a lot of work and this simply won’t survive for more than a few months if it’s a solo project. Perhaps a few hardy techies will try it out of curiosity but it would be a brave person who decided to do anything important with Chimera.
Because Linux itself was launched by a consortium of major computer vendors with a long list of paying customers?
The origin of the Linux kernel doesn't prove that Chimera will last, but it certainly disproves that Chimera can't last, doesn't it?