While many Ubuntu remixes just switch the desktop or replace a few default apps, Zinc changes some of the fundamentals. The result is impressive. Teejeetech is a small computer consultancy in Kerala, India, run by programmer Tony George. Zinc isn't the company's first distro, nor is this the company's first mention on The …
[Author here]
It doesn't matter. It's Ubuntu. It will get updates for as long as Xubuntu does, and the core OS for longer.
If TG never does another version, it will be useful at least until 24.04 comes out, and if you're worried, you can reproduce the additions and customisations he's made on your own installation.
As an example, inspired by Zinc, I have installed `nala` and `deb-get` on my daily workhorse machine, installed native Debian versions of my key containerised apps -- Slack, Zoom, Spotify and so on -- and then removed Snap and Flatpak support entirely from my machine.
It now boots up rather faster and has more free disk space. :-)
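If you want to reproduce that, the rough shape of it is sketched below. The package names for the proprietary apps, and whether `deb-get` actually carries each of them, are assumptions on my part, so check `deb-get list` and the project README rather than taking this verbatim.

```sh
# nala, the friendlier front end to apt, is in the universe repos on recent Ubuntu releases
sudo apt update && sudo apt install nala

# deb-get bootstraps itself; check https://github.com/wimpysworld/deb-get for the current command
curl -sL https://raw.githubusercontent.com/wimpysworld/deb-get/main/deb-get | sudo -E bash -s install deb-get

# Pull native .deb builds of the big proprietary apps
# (names are illustrative; confirm them with `deb-get list` first)
deb-get install slack-desktop zoom spotify-client

# Once nothing you care about is a snap or a flatpak any more
# (remove any remaining snaps with `snap remove` first), drop both frameworks
sudo apt purge snapd flatpak
```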
"Unlike U-Mix, Zinc is a free download."
For now. So was the same developer's Mainline program for easily installing mainline kernels from the Ubuntu PPA, but now it's not. He apparently wasn't getting enough donations to make maintaining Mainline worth the effort as a free product. It's presumably for the same reason that Timeshift (to which I have donated) was abandoned: something he no longer had the time to maintain for free.
It's a recurring dilemma in FOSS. On the one hand, there are all the reasons FOSS is good, which need not be repeated here, as I am sure we are all well aware of them. On the other hand, people have a limited number of hours in a day and have to make a living. If I was short on money and I had to work more hours to make ends meet, those hours would have to come at the expense of something that was not bringing in any money.
We have come to a point where any attempt to monetize a given program will meet with scorn from the community, whether that be via promotional tie-ins, installation bundling, advertising within the program, data collection and resale, or really anything else. We've become so tired of the shady and greedy behavior of the likes of Microsoft (which is trying to monetize a commercial product that is no cheaper than when it was non-monetized) that each of these things now looks suspect. Mozilla has tried some promotional tie-ins with Pocket and iRobot, along with sponsored content optionally present in the default start page, and now a lot of people look down on Mozilla as being no different from Microsoft or Google, both of which are extremely rich megacorporations that don't have to worry about keeping the lights on, unlike Mozilla.
From the user's perspective, it's good to remember that developers of no-cost software can use some help, but on the other side... which ones? As a user of a Linux distro, I am using nearly 2700 FOSS packages, nearly every one of which has developers who might be thankful for a donation (and better yet, recurring donations over time).
One of those is the aforementioned Firefox, my primary browser of two decades. Mozilla's efforts to stay afloat by monetizing Firefox have not been well received, and evidently donations have not been nearly enough to pay for what needs to be paid for, so Mozilla is left with monetizing its default search engine, which happens to be (of course) a product of the same megacorporation that is Mozilla's biggest competitor, with literally 22 times the market share Mozilla has, in what could be described as the mother of all conflicts of interest on Google's part.
It would really be nice if Mozilla did not have to rely on Google for their income, as that limits the ability of Mozilla to stand toe to toe with Google and tell them NO, we are not going to follow your plans for the web. But if all of Firefox's users donated all we could comfortably afford to Mozilla, that would leave nothing for thousands of other dev teams who may not be any better off than Mozilla.
[Author here]
No, not at all.
Firstly, if it was just another re-skinning effort, I wouldn't bother to cover it. If something is small or niche and I write about it, that means it's impressed me in some way. (Well, or badly.)
Secondly, I learned stuff from this distro.
So, as I said above, inspired by it, I have installed `nala` and `deb-get` on my daily workhorse machine, installed native Debian-packaged versions of my key containerised apps -- Slack, Zoom, Spotify and so on -- and then removed Snap and Flatpak support entirely from my machine.
It now boots up rather faster and has more free disk space. :-)
And how long did that take you?
And how long will it last?
My point is, why do ‘they’ keep reinventing stuff, instead of choosing a jumping-off point and continually improving it?
And yes, for my Linux requirements I have done what has been suggested: Mint and Raspberry Pi OS. And very satisfied I am too.
When I install Linux on a new machine, I format the disk with swap at the start and at the end, and an EXT4 partition in the middle. My vague notion was that this would optimize disk head movement, but I have no idea if that's true.
But it did save me the time I was creating a bootable USB dongle and typed /dev/sda instead of /dev/sdb. I hit Ctrl-C before the writes reached the root partition, and only had to rebuild the boot sector and partition table.
[Author here]
I presume that you mean `sda1` rather than `sda2` or something like that?
The stuff about location of swap on a spinning hard disk is a canard.
About 25 years ago, I wrote a piece in PC Pro magazine explaining that using PartitionMagic you could carve a lump of 200MB off the end of the then-standard 1.2GB drive, reducing the main partition to under 1GB and thus reducing the cluster size to 8kB instead of 16kB.
The result is that the drive now held *more* and you got a free 200MB partition, ideal for Windows 95's swapfile.
A prominent UK system builder complained at me, and said putting the swapfile on the end would make the machine slower.
So I benchmarked with and without to falsify this statement: it made no measurable difference to 2 decimal places.
These days it's on SSD so there isn't even that excuse, really.
It's fake.
What makes far more difference is compressed swap.
Stick `zswap.enabled=1` on the end of the kernel parameters in `/etc/default/grub`, then run `update-grub`.
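On a stock Ubuntu-family install, that amounts to something like the following; the existing contents of `GRUB_CMDLINE_LINUX_DEFAULT` will differ from machine to machine, so treat this as a sketch rather than a copy-and-paste recipe.

```sh
# In /etc/default/grub, append zswap.enabled=1 to the existing parameters, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash zswap.enabled=1"

# Regenerate the GRUB config and reboot
sudo update-grub
sudo reboot

# After rebooting, confirm that zswap is active (should print Y)
cat /sys/module/zswap/parameters/enabled
```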
On this old and relatively underspecced machine, this reduced swap usage by 3-10×, making a pleasant speed increase.
The bit about sequential throughput on a rust spinner dropping as the head gets closer to the center of the disk is real. One revolution of the platters takes the same time as it would if the active head was reading/writing at the outside of the disk, but with fewer sectors moving under the head in that time as the heads move inward. If you benchmark a drive's throughput from sector 0 all the way to the last, you will see that the sequential throughput is at its greatest at the start of the test and considerably lower by the time the end is reached.
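If you want to see that for yourself, a crude read-only check with `dd` is sketched below. `/dev/sdX` is a placeholder for a spinning disk, and the `skip` value is an assumption sized for roughly a 500 GB drive; adjust it so the second read lands near the final tracks of whatever you are testing.

```sh
# Sequential read near the start of the disk (outer tracks)
sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct

# The same read near the end of the disk (inner tracks); skip is in units of bs,
# so 465000 x 1M puts it close to the end of a ~500 GB drive
sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 skip=465000 iflag=direct
```

On a healthy spinning drive the first figure will typically be noticeably higher than the second; an SSD will show no such gradient.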
That does not mean that having a swap file at the end will always slow things down. Workloads that consist of a lot of small reads or writes (especially if they are small enough to fit within the hardware or software caches) will depend more on seek time than sequential throughput potential.
That seems to be what Steve Graham was referring to above. If you put the swap partition close to where the heads will spend most of their time during normal operation, the time it takes the heads to move between the two will be reduced.
Of course, it would be hard to justify using a rust spinner if any kind of performance is desired in 2022.
This is what makes -- made, for me -- benchmarking tricky.
There absolutely are effects, in all manner of aspects of computer systems, that measurably affect performance.
The problem is finding whether they affect the OS or the apps.
And of course that depends on what the OS and the apps are.
For *most* end-user apps, more CPU cores do not really improve performance. Writing multithreaded code is hard. Only BeOS on the desktop really embraced this; no other mainstream OS does. Most, of course, are still in C, and C has no direct internal support for threading and so on; that is why using C++ lent a big advantage to BeOS and EPOC32/Symbian: their APIs could embrace this stuff.
See my earlier story on why C is an IDL these days.
Even for threadable tasks, more cores don't always help, just as more programmers don't help a programming project. Read _The Mythical Man-Month_ for why. Better still: buy 4 copies so you can read it in 1/4 of the time! ;-)
Worse still, Amdahl's Law means that even if your task _is_ amenable to multi-threading, adding more cores quickly tops out and performance stops improving.
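For anyone who hasn't met it, the usual textbook form of Amdahl's Law is

$$ S(n) = \frac{1}{(1 - p) + \frac{p}{n}} $$

where p is the fraction of the work that can run in parallel and n is the number of cores. Plugging in an illustrative p = 0.8: two cores give about a 1.7× speedup, four give 2.5×, and even infinitely many cores can never exceed 5×. The exact numbers depend entirely on p, but the shape of the curve is why piling on cores runs out of steam so quickly.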
So, a multicore PC does well on media encoding and things, but doesn't run ordinary apps any better. That's hard to measure.
A multicore PC with lots of single-threaded apps but a multithreaded OS will feel more responsive. That's almost impossible to measure and hard to demonstrate.
By contrast, adding more L1 or L2 cache speeds up all operations, and that's easy to measure.
I wrote recently about the busmastering disk driver for Intel Triton chipsets. (See the HDD Clicker article.) That driver made no effective difference to Win9x, as its kernel is single-threaded. It really helped NT, though, even on a single core.
The thing is that while the effect of file location on spinning media might be measurable in a synthetic benchmark aimed at measuring that and nothing else, in real life, on a real OS with a clutter of apps and files spread all over the disk, fragmented etc., the effect is not measurable on end-user app performance. It is lost in the noise. It is a purely theoretical effect and in real life it makes no measurable difference.
*However* putting the swap file on a dedicated partition, in the long run, after lots of use, *reduces* disk fragmentation. That is a win. It's hard to demonstrate but it's a win.
Secondly, on FAT16, 8kB clusters are 20% or so more efficient overall. Win95 contained a lot of small files. Most OSes do.
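As a rough illustration of the slack-space arithmetic (the file count below is an illustrative guess, not a measurement): on average each file wastes about half a cluster, so

$$ \text{expected slack} \approx \frac{\text{cluster size}}{2} \times \text{number of files} $$

With 16 kB clusters and, say, 10,000 files, that is roughly 80 MB of wasted space; halving the cluster size to 8 kB halves the waste to about 40 MB, which was a meaningful chunk of a 1.2 GB drive.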
Repartitioning produced measurable, demonstrable improvements in storage efficiency, reduction in slack space, and so on.
So one cost is not measurable; it's too small to see. But the benefit is directly measurable and demonstrable. That means it wins. And a 2nd benefit will only be seen after years of use, but that is valuable too.
That is why system benchmarking is an art as well as a science, and it needs a lot of skill and judgement.
And modern benchmarks that play up the effects of multiple cores are, in real life, a lie. They are a marketing measure only.
Multicore desktops in mainstream OSes are mostly a waste of time. Two cores is slightly better than one, 4 isn't much better at all (which is why AMD sold 3-core chips), and >4 is a waste of transistors.
While that is demonstrably true, it sounds like heresy in an industry that runs on selling people new chips.
Real-world performance stopped doubling every ~18 months after about 2007 or so. Sledgehammer was great; Core 2 was a good jump over that; Core iX a small one over that. Since then it's been incremental, unless you're a rich gamer, and the increments are small. Instead of doubling every $MOORES_PERIOD, it now gains 10-15% every $MOORES_PERIOD instead.