I can make my car faster by:
- Removing all the air bags
- Removing all the padding in the cabin
- Removing the seat belts
- Replacing the padded seats with bare-bones bucket racing seats.
- etc.
Doesn't mean I should...
A cryptic website with a single line of text promises to make your Linux box more responsive – if you are willing to accept some risk. Another day, another bleeding vulnerability. New speculative-execution attacks keep being discovered, and OS kernel developers keep finding ways to block them – at the cost of some CPU …
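For the curious, the kernel reports which mitigations it is currently applying under sysfs, so you can see what you would actually be switching off (assuming a reasonably recent kernel):

    # one line per known vulnerability, with the mitigation in effect
    grep -r . /sys/devices/system/cpu/vulnerabilities/

The cryptic one-liner is presumably just a set of kernel boot parameters; on recent kernels mitigations=off is the catch-all.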
only for machines that aren't connected at all
Well, aren't connected when you boot like that, at least. If you have a big CPU-intensive job like an image render, or some video processing, you may want to get everything downloaded & set up, unplug the network, and reboot into a fast unprotected mode while running that one job. Then back to 'normal' & reconnect.
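If anyone fancies trying that workflow, a minimal sketch, assuming a Debian/Ubuntu-style GRUB setup (other distros differ):

    # 1. add mitigations=off to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
    # 2. regenerate the config and reboot, offline, for the big job
    sudo update-grub && sudo reboot
    # 3. afterwards remove the parameter, run update-grub again, reboot, reconnect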
You're right.
It is generally servers which run controlled workloads rather than clients. Lots of these bleed attacks have been demonstrated from JS, so most web browsing is out.
On the other hand, lots of servers never have any "logged in users", so there is no chance of extra SW being installed and run unless some other bug in your apps can be exploited, by which time you're largely dead anyway.
The days when "server" meant hundreds of users with login access are long gone.
> On the other hand lots of servers never have any "logged in users" so there is no chance of extra SW being installed and run unless some other bug in your apps can be exploited by which time you're largely dead anyway.
In a properly laid out zero-trust architecture the effect of an exploit can be limited by employing rigid process isolation and an RBAC least-privilege scheme. For those protections to be meaningful you depend on kernel services; therefore the kernel must be fortified.
The usual form of process isolation is to put the service in its own container, e.g. one HTTP server process handling all requests, privileged and unprivileged alike. An unprivileged request carrying the exploit could still gain access to privileged requests. To limit an exploit's reach, separate HTTP server processes for the different privilege levels would be required.
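A rough sketch of that split using systemd's sandboxing; the httpd binary, ports, config paths and user names here are hypothetical placeholders for whatever service you actually run:

    # one transient service per privilege tier, each under its own locked-down user
    sudo systemd-run --unit=web-public -p User=www-public -p NoNewPrivileges=yes -p ProtectSystem=strict \
        /opt/myapp/httpd --listen 8080 --config /etc/myapp/public.conf
    sudo systemd-run --unit=web-admin -p User=www-admin -p NoNewPrivileges=yes -p ProtectSystem=strict \
        /opt/myapp/httpd --listen 8443 --config /etc/myapp/admin.conf

An exploit landing in the public instance then can't rummage through the admin instance's memory or credentials, and the sandboxing directives are ultimately kernel features (namespaces, capabilities, seccomp), which is exactly why the kernel needs to stay fortified.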
> As the article says, it's only for machines that aren't connected at all. How much use is one of those?
For sure. Possibly a task that benefits from the user ignoring the outside world, such as writing a novel, illustrating, playing a single player game.
But yeah. Maybe like the above sports car with the headlights removed - legal on a closed track, fun even, but not useful in the sense of nipping to the shops on the public highway network.
We Linux guys live in the true land of the free, unlike those poor souls oppressed in a Stalinistic dictatorship like Windows or Apple land.
We Linux guys think for ourselves, do not need some corporate bot determining what we should or should not do.
If we want safe, we make it safe; if fast is needed - for instance a database server on a secure network with no internet access - we make it fast.
We are free, and we keep the money for expensive licenses in our pockets. What can be better?
> that's what you want: a computer that is sealed off to the outside world.
That's what I want for my Windows machines.
It's (not) surprising how much faster they run without any anti-virus software. And there is joy to be found in not being nagged every five minutes to apply this update or that. Although the scourge of seemingly continuous updates has now infected my Linux machines, too.
Define "crud" for a CPU. Generally, if you can get something to run directly in hardware, it will be significantly faster than doing it in software: encryption, vector operations, transcoding, etc. And that means more complex chips and instruction sets.
Some of the other stuff, especially speculative branch prediction, is trickier but almost essential to get things to go fast.
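Whether your chip actually has those dedicated instructions is easy to check from the advertised CPU flags (x86 feature names shown; other architectures differ):

    # look for a few hardware-acceleration features in the CPU flags
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(aes|avx2|sha_ni)$'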
> wouldn't it be nice if someone designed a new architecture
OMG no! Not only would that cost an awful lot of R&D money, it would also make all the existing stock* obsolete. It's way more profitable to keep using the old architecture.
To make it simple: Old architecture = bonus, new architecture = getting fired...
* You don't replace your whole line of products in one fell swoop: Last year's high-end CPUs become next year's mid level offerings, and so on.
>P.S. wouldn't it be nice if someone designed a new architecture that was simple and fast without all the crud?
And it would be simple and fast. For ten minutes, by which time someone will have found a flaw that needs patching. Design a better mousetrap and the universe conspires to throw better mice at you.
There are attempts at writing formally verified code - code proven to match its spec, just as two plus two equals four can be proved - but it's very hard work.
That would be the 80186 architecture. Simple, fast, and without all the crud.
That is, without all the parallel threads, pipelining, branch prediction, and caching, so it runs very fast at the speed of your RAM - not like your existing computer, which runs hundreds of times faster than your RAM, due to parallel threads, pipelining, branch prediction, and caching.
“… then it probably occasionally swaps to disk…”
I have run my Linux box with swap disabled for ages. These days with gobs of RAM, I see no point at all in having swap enabled on a personal machine (and I’m not even convinced most servers need swap any more - though obviously some do). If I ever get to the point where I’m running out of RAM then I either have a virus or I’m doing something really stupid.
You have to remember what swap was invented for - a backstop when RAM was small and very expensive. If you are going to run out of memory, then swap really just delays the inevitable. The size/cost issue just doesn't exist any more.
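For anyone checking whether they're in the same position, a quick sketch with the usual util-linux tools:

    swapon --show     # list active swap devices/files and how much is in use
    free -h           # overall RAM and swap usage
    sudo swapoff -a   # disable swap for this boot
    # to make it permanent, also comment out the swap entry in /etc/fstab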
Swap is not just about being the equivalent of extra RAM on disk.
For example, in the FreeBSD world (and some other UNICES), if you have a swap partition (not a swap file) and you have a kernel panic, that panic can be saved to the swap partition for later debugging. You can't do the same on a file system during a panic, at least it is not safe to, as the kernel may have an invalid view of the file system. It has no such problem with a swap partition.
I don't know how Linux handles kernel panics (don't tell me they never happen).
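For reference, the FreeBSD knobs for that are minimal (and Linux typically goes a different route, kexec-ing into a reserved kdump crash kernel rather than writing to swap):

    # FreeBSD: dump kernel panics to the swap partition
    sysrc dumpdev=AUTO   # sets dumpdev in /etc/rc.conf; dumpon(8) picks the swap device
    # after a panic, savecore(8) writes the dump to /var/crash on the next boot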
I usually keep a classic swap partition just in case. With an 8 GB machine it is almost never used (one can check this with "top", for example), and consequently does not cause any performance degradation.
However, there is one situation where it is useful: You have some modern piece of desktop bloatware with lots of data open, and you wish to use something else for a while with the intention of getting back. So you launch the other program, things are sluggish for a while (swapeti-swap) but then you can do your thing in the other program and then return to the original bloatware. Again things are slow for a while, but you do get eventually to a state where you get work done, after the necessary pages have been restored from swap.
In other words, you swap from one big task to another and back. Like the name "swap" says. If I did not have a swap partition, the dreaded OOM of Linux might have decided to kill the first program. (Or something else).
Of course, the idea of using swap to run two active, huge programs in an interleaved fashion does not work nowadays; you might as well be running Babbage's mechanical computer.
Swap lets the kernel push unused data out of RAM so it doesn't compete with more useful caches. It's generally best to leave it on. You can tune the "swappiness" parameter if the balance between data and cache isn't where you want it.
My personal server usually has a few GB of cruft that pages out over a period of several days and never pages back in.
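For anyone wanting to try that tuning, a quick sketch (the default is usually 60; lower values make the kernel more reluctant to swap out anonymous pages):

    sysctl vm.swappiness               # show the current value
    sudo sysctl -w vm.swappiness=10    # change it until the next reboot
    # to persist it, put "vm.swappiness = 10" in a file under /etc/sysctl.d/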
You're defeating the most significant purpose of swap. It's not so much for giving yourself "extra memory" nowadays (if your working set is actually paging in and out of swap, that's not viable; a big compile job can take days, for example).
The most significant use of swap is to aid memory management. Without swap, the kernel can't free up anonymous pages at all: it has no way of knowing whether that data is still needed by a process or effectively orphaned, and since it isn't backed by anything on disk it can't simply be dropped.
Spoken like a true Linux user.
We get it, Linux has traditionally been crap and buggy at swapping, but it's much better these days. Here is an article on why swap is good, from a Linux perspective: https://chrisdown.name/2018/01/02/in-defence-of-swap.html
Swap space usage is good. All proper systems use it. Don't confuse that with frequent swapping, which is obviously bad.
If it ain't broke, don't fix it.
Hacking around with the kernel can be great fun, IF YOU KNOW WHAT YOU'RE DOING / DON'T MIND REINSTALLING EVERYTHING IF IT FAILS.
And we should have all had plenty of experience warning our less technical friends to never, ever do what they read in a "here's a magic trick which will enormously speed up your computer" article - because too many of them are ways of actively inflicting harm.
This article obviously isn't trying to do that, it's for interested techies. However, proper caveats should still be given, even though nearly all Linux users here will not need them.
I don't know if it's a skill or a curse that I can still remember how to do such optimisation - and more to the point have had cause to do it in the last couple of years for anything other than historical curiosity.
Legacy DOS, Autoexec.bat and config.sys still linger in the PLC world for various reasons.
Minimally-sized drivers for mouse and CD-ROM are easy starting points. A super-compact soundcard driver is the remaining item on the wish list when trying to squeeze everything into the 640K.
I mentioned this in my FIRST reply to the FIRST article that came out about SPECTRE. The drip, drip, drip of these exploits is precisely why NIST recommended (for twelve hours) turning speculative execution off. These partial mitigations add up, and in a big way.
I've interviewed with multiple companies who have gone this route for protected database servers and the like. The cost savings are enormous, and, as alluded to above, with proper least-access privileges implemented network-wide, quite safe.
What surprises me is that a mere kernel option can really do it. To really see the benefit, you need to turn off things like retpoline when you compile your applications and libraries. I would strongly urge something like Yggdrasil if a company I was advising was considering going this route.
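On the userspace side, retpolines are a compile-time choice anyway; as far as I know GCC on x86 only emits them if explicitly asked, e.g.:

    # build WITH retpoline thunks; omit these flags to build without them
    gcc -O2 -mindirect-branch=thunk -mfunction-return=thunk -o app app.c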
On an old laptop with 4 GB RAM and a spinning-rust hard drive running Mint 20.3 Cinnamon, I just enabled the zswap change (2nd-to-last paragraph) with 2x compression (not 3x), plus switched the Firefox cache from disk to memory, and the difference is ... phenomenal! Cool!
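For anyone wanting to replicate it, zswap can be inspected and flipped via sysfs or kernel boot parameters; a sketch, where the "2x vs 3x" is the zbud vs z3fold allocator choice (two or three compressed pages packed per physical page):

    # current zswap settings
    grep -r . /sys/module/zswap/parameters/
    # enable at runtime
    echo 1 | sudo tee /sys/module/zswap/parameters/enabled
    # or persistently via kernel boot parameters, e.g.:
    #   zswap.enabled=1 zswap.zpool=zbud zswap.compressor=lz4

The Firefox half, if memory serves, is the browser.cache.disk.enable / browser.cache.memory.enable pair in about:config.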