MS Azure Hypervisor
Microsoft has some fairly generous licensing for its Hyper-V-based on-prem solution (Azure Hypervisor, or something like that), with MS licenses included per core. I think it's probably going to take over just because of that.
201 publicly visible posts • joined 4 Dec 2017
It's getting to the point that upgrading Windows really isn't buying anything new; it really hasn't since Windows 7. I've been running KDE Plasma on Debian stable for the last several years, and I'm not finding any real downsides, except:
It seems Intel cannot create any new hardware platform without requiring brand-new drivers for every bit of chippery in the box, from the GPU, to the NIC, to the Wi-Fi, to the sound system, to the CPU, to the USB chips, to the ... Sadly, a modicum of stability on a platform seems to require the latest version of Windows and a slew of bleeding-edge drivers, or experimental kernels on various Linux platforms. If AMD can ever get any market share, its compatibility seems to be much better.
Further, this obscene need to have "firmware" built into a device driver is pathetic. ANY chip that leaves the factory should have a ROM with a basic firmware that is modestly functional and can act as a default. This doesn't mean you can't have dynamically loadable, more up-to-date "firmware", but the chip should at least operate without some customized upload code feeding it bits at boot. It is pathetic and stupid.
(Off Topic)
Systemd is growing on me. Early implementations were abysmal. The general network configuration stuff is pretty bad, and I would say the state of network configuration in Linux in general went from simple to abysmal starting with 'predictable' interface names, which are just the opposite, solving a problem that did not exist (net.ifnames=0). RH variants with /etc/sysconfig, Slackware with its script file, Debian and ifup-down, netplan.io... NetworkManager solved a problem for mobile/desktop, but I find it a tad arcane. I figured out how to create a portable bond/bridge/vlan configuration with systemd after getting rid of the bizarre 'predictable' names. Want to talk complicated? From a single file with a handful of stanzas under ifup-down to a bowl of vegetable soup. Then an upgrade broke it, I fixed it, an upgrade broke it again, and I fixed it again.
To configure 4 interfaces bonded, with 5 VLANs on bridges, to run with KVM/QEMU, you'll need to build this stupidity (we won't even go into the contents of the files):
$ ls
bond1.1099.netdev bond1.1101.network bond1.4001.netdev br0.1099.netdev br0.1101.network br0.4001.netdev
bond1.1099.network bond1.1102.netdev bond1.4001.network br0.1099.network br0.1102.netdev br0.4001.network
bond1.1100.netdev bond1.1102.network bond1.br0.network br0.1100.netdev br0.1102.network br0.netdev
bond1.1100.network bond1.3001.netdev bond1.netdev br0.1100.network br0.3001.netdev br0.network
bond1.1101.netdev bond1.3001.network bond1.network br0.1101.netdev br0.3001.network
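For the curious, each of those files is small and dumb. A hedged sketch of the shape of a few of them, written to a scratch directory rather than the real /etc/systemd/network/, with NIC names, the bond mode, and the VLAN ID (1099) all made up for illustration:

```shell
# Sketch of the systemd-networkd file set for one bond carrying one
# VLAN tied to one bridge. Real files go in /etc/systemd/network/;
# this writes to a throwaway directory so nothing is touched.
dir=$(mktemp -d)

# 1. Define the bond device itself.
cat > "$dir/bond1.netdev" <<'EOF'
[NetDev]
Name=bond1
Kind=bond

[Bond]
Mode=802.3ad
EOF

# 2. Enslave the physical NICs (hypothetical names) to the bond.
cat > "$dir/bond1.network" <<'EOF'
[Match]
Name=enp1s0f*

[Network]
Bond=bond1
EOF

# 3. Define a VLAN on top of the bond...
cat > "$dir/bond1.1099.netdev" <<'EOF'
[NetDev]
Name=bond1.1099
Kind=vlan

[VLAN]
Id=1099
EOF

# ...define the bridge for that VLAN...
cat > "$dir/br0.1099.netdev" <<'EOF'
[NetDev]
Name=br0.1099
Kind=bridge
EOF

# ...and tie the VLAN interface into the bridge.
cat > "$dir/bond1.1099.network" <<'EOF'
[Match]
Name=bond1.1099

[Network]
Bridge=br0.1099
EOF

ls "$dir"
```

Multiply the VLAN files by five and add the bridge-side .network files, and you arrive at the soup above.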
OTOH, it did actually solve existing problems, and they took a somewhat modular, if rather complex, approach with it. Not thrilled with the journald logging either. Systemd-resolved/resolvectl is very good; hopefully they won't break it. I try to use the parts that work the best, but the biggest problem I have is they keep breaking things. 'systemctl edit' is a nice idea, but they keep changing how the overrides work, and at the moment I'm not sure if it adds on to a section or completely replaces it. It has done both in the past.
The problem is VMware is alienating smaller customers. Over time, smaller customers become larger customers who are not using their products; over time, the free and less expensive products become more robust, and then larger customers start looking at the less expensive alternatives. Microsoft did this to IBM, Sun, SVR4, and the mini/midrange/mainframe market, and we are going to see another cycle, probably much faster, with VMware, since the core of the technology stack (virtualization) is a pure software play.
"Under the rules, any electronic device priced between $50 and $99.99 would need to have parts available for five years, and devices that cost in excess of that would have to have parts available for seven years. "
Any electronic device priced between $50 and $100 should be E-wasted if it breaks. It's not worth $50 to crack it open.
The geek who was the DBA was fired or moved on. His assistant didn't really have a handle on everything but took over at half the salary cost; management was happy. A year or so later the new guy realizes he's grossly underpaid, asks for more money, is denied, and finds another gig elsewhere. The clerk who always wanted to be in IT takes over; fortunately everything is in good shape, so things continue to run smoothly even though there is not a good understanding of all the processes. Over time, minor changes cause a disruption, nobody with the skills to notice, and it blows up.
Or maybe, new DBA was hired to replace the guy who left, but has no idea the process exists, and nobody is complaining, and the error messages were being sent to an email address that no longer exists. . .
Or it was outsourced to India, and we are in the process of training up a 4th set of contractor folk who are going to leave in a year or so. . .
This crap happens all the time. Welcome to IT.
"NFS is probably the only protocol in the unix world, that is worst that its Microsoft counter part.
It's always been shite, performance and reliability wise."
--Flame Bait--
If you're trying to say NFS is worse than SMB/CIFS, I beg to differ. NFSv3 performance is very good, albeit single-threaded. NFSv3 also works reasonably well over higher-latency connections; SMB/CIFS falls over. That being said, Microsoft's NFS implementation is abysmal. If you are running a *nix box, run NFS; if you are running Windows, run CIFS, though both can be abysmal under Windows. I've used NFS extensively since
I note NetApp still uses NFS as the core protocol for VMware storage connectivity, and it routinely out-performed iSCSI in years past. I know more recently they support multi-threaded NFSv4, though I haven't followed the benchmarks recently.
--
TSMC's Central Taiwan Science Park, for example, uses 3.3% of its daily allocation, or around 4.9 acre-feet of water per year. The Southern Taiwan Science Park and the Hsinchu Science Park use 5 acre-feet of water and 5.7 acre-feet of water daily, respectively.
The reason behind the minimal consumption is thanks to recycled water.
--
I own an EV. Generally I charge at home. Traveling occasionally, I have to charge with these sorry CCS cables. It was 30°F outside and I had to wrestle that boa constrictor into the socket. I'm stunned I didn't crack the plastic. My friend has a Tesla. The Tesla cable is vastly superior. I would convert the socket on my Niro EV to a Tesla socket if I could.
Running a RISC-V core? ARM? GPU specs? Vector specs? Gallium arsenide stuff has been around since the '80s: fast and hot, but fabbing chips with a new process at the densities of today's CPUs would mean billion-dollar fabs and esoteric processes nobody has ever seen or heard about in Can-aid-i-a. We'll believe it when we see it. In the meantime TSMC is building a "multi-billion-dollar fab" up the road a piece, which should usher in some 3 and 4 nm parts from Arizona in a year or two.
"ICs weren't developed because the authorities considered discrete components too cumbersome."
Yes they were. (V)LSI and component hardening were funded in large part by DARPA. As was the internet. It's always been both. NASA also funded tons of research into materials and such. Don't be silly.
Now if you want to make an efficiency argument . . .
Likely sodium and aluminum in the short term. An outfit in Australia has a prototype factory running for the latter; they seem to be on target for initial production of pouch batteries. Another company has some improvements around stuff blowing up with current tech that increase energy density by 25% or so, but that would not be relevant if you are in a shipping container at a power plant.
Frankly, aluminum, if they can get it going, is a no-brainer, and highly recyclable.
No, I do not want F1 to open a help window for the terminal emulator; I want it to perform what the F1 label says in the window. And if you want to emulate a VT100 (vs. a Wyse 50/60 or an ADM-3 <grin>), then give me a keyboard without a Backspace key, and I want the DIP switches underneath. Early VT terminals were total crap (the ADM-3A was cool-looking; Ctrl-H to backspace). The Wyse 60 was sweet, and had a 132-column mode.
I think termcap is better than terminfo, and curses is aptly named.
If you want maximum speed, drop to layer 2. As long as you are at L4, you have to endure more overhead. If all your traffic is in the same layer 2... Early on, in the IP vs. IPX wars... There is already a bunch of L2 protocol management going on as well: LLDP, spanning tree. To get data from point A to point B you need a handle on both ends; MAC addresses with Ethernet. Not sure how you are gonna improve on switching that with ASICs. Never did understand why USB was so popular for peripherals when a much faster Ethernet chip cost the same; 8-port switches were $15, build it in. ATA-over-Ethernet never took off, but iSCSI did. Designing a new use-specific lightweight L3 would be OK, but why stick with IP at all? Customizing a new L2 seems dumb (can you say Fibre Channel?). Then again, 802.1x...
This whole thing is a bunch of hummice homa.
I've got a PinePhone. I've thrown pretty much all of them on it. Ubuntu Touch has the disadvantage of not working all that well in general. postmarketOS and Arch and Debian and the other projects are so much farther along, I think I'd put the effort into Plasma Mobile or Phosh, which seem to be mostly working. What we really need are phone apps, not effort trying to weld ancient desktop things onto a phone display. Further, the drivers for the cellular modems are still rather 'alpha' quality. We really need to figure out ModemManager and oFono, and all the low-level driver cruft is targeted at the latest kernels, which need Linux 5.x. It's miles better than it was 2 years ago, but still pretty awful.
-- "It’s an electric plug. FFS."
No it's not. It's a Universal Serial Bus that happens to have power on it to power remote devices; it always has, since 1.0: 5 V at up to 0.5 A. Just because the spec allows for everything does not mean that the 100 W PD brick you bought would implement anything beyond the charging conversation. And the arguments here about malware are borderline silly. If you use a USB-C hub with HDMI and Ethernet, it has to have drivers and such on both ends. I have updated firmware on a number of USB-C docking stations; I doubt that would be implemented on a power brick.
USB is a BUS with some evolving standards around power distribution on the bus, you know, like the power definitions on a PCI bus or any other similar computer bus. We are talking about standardizing the connector around the USB-C connector spec. Just as micro-USB slowly became a 5V/2A "standard" for small-device power and charging because it was ubiquitous, so will USB-C supplant it, being backwards compatible with tiny adaptors while allowing for drastically more power. Arguing the esoterics of a specification is just wasteful. We are already seeing stable ASICs that just work, and the price premium for them dropping.
The market will make USB-C the standard without gummit' intervention. You don't have to carry around 20 different chargers today; not so true 5 years or so ago. Further, the arguments are somewhat silly. First of all, power requirements are basically much higher for the average device today. You can hop on Amazon and buy some USB-C to micro-USB charge adapters for around US$5 in a 5-pack, with a little chain you can clip around the cable. I don't think there are any non-Apple mobile devices today that don't use USB-C. For the handful of things that still use micro-USB, the adapters work just fine (the Samsung smartwatch wireless charger, for example).
Most of the current crop of inexpensive charging blocks are shipping with at least a mix of USB-A and C connectors and support PD. The last piece of the puzzle is PD on the laptop side. Dumping 100 watts over #28-and-lighter wire means the voltage has to go up to compensate for the lack of current. Assuming around 3 A on #28, you need to push roughly 33 V to get 100 W without the wire melting. 'PD', aka the Power Delivery protocol, is what makes this possible.
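The arithmetic behind that is just V = P / I. A quick sanity check on the numbers in the paragraph above (the 100 W and 3 A figures come from there; the rest is plain division):

```shell
# Volts needed to push a given wattage at a given current: V = P / I.
# 100 W at the ~3 A a #28 conductor can carry comes out near 33 V,
# which is why PD negotiates the voltage up rather than the current.
watts=100
amps=3
awk -v p="$watts" -v i="$amps" 'BEGIN { printf "%.1f V\n", p / i }'
# prints "33.3 V"
```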
Finally, I have noted improvements in the physical connectors on both the male and female side. The last few cables I got seem to be much more 'snug' and secure when pressed in. YMMV.
Totally disagree. I was touch-scrolling thru this forum on my OneMix netbook running Plasma, then I dropped to the keyboard to type this response. I've found that when doing work that requires a keyboard, I want a mouse, but when just scrolling around in, say, Emby or Plex in a browser, or reading thru a forum like this, I much prefer to use the touchscreen.
And any argument for weight gain or power usage from touch hardware is silly.
You make the assumption that Applied Materials cannot expand production if you throw copious amounts of money at it. The 20% after year one might be accurate, but I would bet that with a concerted effort and a trillion dollars or so they can produce plenty of state-of-the-art wafer polishers and whatnot. Be real.
Intel could be interesting. My problems with Intel date back to the 8080. NONE of their chips ever seemed to meet their specs. There were hardware workarounds, software workarounds; they were actually so awful it spurred the 'chipset wars'. Almost forgot the math problems; hell, even their current crop of Wi-Fi chips have issues that seem to plague Linux and Windows. I think in the past things have been hyper-tuned for Intel to work around stuff. Graphics issues could take things to a totally new problem level, but at least you might see a reasonably working open driver with manufacturer support for Linux.
Ditto.
Plus time spent downloading and compiling and whatnot, and DKMS problems, hyper-kernel-version-specific code, and more downloading. I've never had a Linux/NV box stay even remotely stable through more than a handful of kernel updates.
AMD/ATI just pretty much worked, though I do get spurious ECC errors supposedly linked to an issue with the new AMD stuff. Not sure, but the system doesn't crash.
Defamation does indeed hinge on presenting untruthful information. It gets a bit grey when politicized. A politician here was accused in a hit piece of having come out against contraception, and of having written a paper (when he was 19) that was pro-Nazi. This was picked up by a few national media outlets and published as 'fact'.
To the credit of some in the local media here, we find that apparently the paper (which the gentleman stood by) used the Nazi party as an example of riling up the people with a cause to create a drastic political shift. There was no record or incidence of him ever speaking against contraception; in fact, he was on the record as just the opposite.
But the damage was done. Any corrections are on page 9, or an honorable two-second mention at the end of a broadcast. He's suing for defamation. I agree with the sentiment of the suit, but on one hand the hit-piece folks could claim it as opinion, and the national media outlets will just say "oops" and print a retraction on page 9. Just try and stay informed. The minute someone uses the term "Orange" or "O-buma", it's high-index talk designed to inflame; you should skip the comment.
"The Wall" was being built. "The Wall" is not really the Great Wall of China. In fact, up until about a week into the current administration, sections of wall were still being put up in key areas along the border, and then the materials were simply abandoned. Arizona was trying to obtain the material to continue the project but was thwarted by President Biden's administration. Ditto in Texas. Ms Sinema knows she doesn't stand an ice cube's chance in hell of getting re-elected if she supports some of the idiotic things our current administration is doing, so while I disagree with much of her politics, I find she apparently does care about the state she grew up in, for which I am grateful.
I love it when someone who lives in another country, or behind fenced and guarded neighborhoods with security, wants to bash "The Wall". For the former: carry your ass down here and live on someone's property near the border for a week with them. For the latter: you are some of the biggest hypocrites, and it's a shame that you and your friends in the media who live with you have such loud, lying voices that you convince or manipulate people to your way of thinking to screw them over.
"if things are really that bad where you live either get off your fat asses and vote in a government that'll fix the problems, or fucking move!" -- Really?
Move where exactly, into what exactly, for how much approximately? You see, the politicians want to spend the money on 'light rail' and 'mass transportation' projects for all of us who don't live in gated communities with armed security. At least I don't live in Seattle. So they drive their Teslas and Audis and E-Classes back and forth from their secure compounds to their secure work buildings with more armed guards and fences, and tell me to ride the bus/rail.
Somehow these clowns use the media to convince the sheeple of the unwashed masses that they are doing them favors, taking their money, while the bus stops and rail stations are swimming in homeless people, many of whom are aggressive, to say the least. There is a homeless enclave under a bridge/overpass close enough to throw a baseball at from the side of the building where I work, a stone's throw from the cars in the parking lot. People are literally pouring over the border; if you don't believe it, just drive down near Tucson/Nogales or Yuma, out in the desert. I suggest you be armed, and have a buddy or two.
I don't have $1M to pour into a safe place, behind bars, with $500+/mo dues. I carry either a SCCY CPX-9 or a Springfield XPS-45ACP. AR-15s are for enthusiasts; you can get a nice 223/556 for reasonable money, but ammo is too expensive and harder to reload. I have a TNW Survival (45 ACP) for if I ever need to skedaddle to the hills. Nothing over $1K, but I'm not judgmental. I'd love to move, and I'm looking, but by keeping interest rates artificially low for the last 20 years, investors are buying up all the housing and then leasing at exorbitant rates. Tell me about the job you need to rent an 800 sq ft apt for $2500 a month in the hood.
I would expect to see a two-fold (maybe three) increase in density in 5 years. Several tech companies are claiming to have lab'ed new battery tech. One out of Australia is shipping prototypes, and ramping up for pouch batteries. I expect some mass production hurdles, but like the flat screen LCD panel, there is a huge market for several big players, and the technology is evolving rapidly on several fronts. Expect to see a reduction in the need for 'Rare Earth's as things progress as well.
Lithium-ion is the near-term solution, and will likely be on-ramping some of the aforementioned items like safety and longevity before sodium and aluminum et al. technologies become viable. It's going to be interesting, I think.
China, Russia, and N Korea can (and do) produce x86-ISA or ARM-ISA parts as well. Hell, I could probably cobble together some (painfully slow, single-threaded) silicon using discrete parts that would honor an x86 ISA. RISC-V is an ISA, an Instruction Set Architecture, i.e. a list of low-level primitive commands about slinging data between registers and memory, and maybe an I/O bus if it's not memory-mapped.
Making silicon CPUs that properly implement the ISA, and offer performance, and pipelining, and branch prediction, and... that is a whole other ballgame. The best 'Western technology' is actually pretty much in Taiwan, in the Far East, at this point, but I digress.
It's the implementation of the ISA that matters, not the instructions themselves.
I'd go more with: someone decided to create a language that would (eventually) do everything you could possibly ever want to do with a system or database by adding 'modules', and rolled it into a big giant mess that makes Perl look good. This is similar to what IBM did when it went from OCL on the 34/36 to CL, and then on to compiling CL for 'performance', and now I can have input forms in my CL! Every time I've tried to do something with PowerShell I've had to get a module or something, and then everything is an object, so often the result you get is not what you expect, or requires further parsing, or... It's just very darn inconsistent from function to function, and then the function you found that does what you want is only for Azure, not a domain controller, and ...
Sorry, but I'm not overly impressed with PowerShell to this day. Dumping Write-Hosts to see things in loops often fails to reveal the actual object content that breaks somewhere else, blah, blah. Working with objects is great as long as you have a thorough understanding of each and every object you use. You are in a maze of twisty little objects, all different.
I think shells and shell scripting should really be GLUE logic for weaving things together. If you want to get serious write some real code, in a language designed to accomplish the task, and use your shell to weave it all together.
And I'm happy to help tar and feather the idiot who decided names should be quoted and escaped by default in a directory listing on modern Unix, breaking scripts (ls -1 | ...). The minute you make a 'shell' that is all things for all people, what you end up with is a 'thing' that is a necessary evil for a lot of people.
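For what it's worth, on GNU coreutils (8.25 and later, if memory serves) the quoting only kicks in by default when stdout is a terminal, and it can be switched off. A quick sketch, using a throwaway directory:

```shell
# GNU ls quotes awkward filenames by default when printing to a TTY;
# piped output is literal, but if a script does hit the quoting,
# -N / --quoting-style=literal restores the old behavior.
d=$(mktemp -d)
touch "$d/plain" "$d/has space"
ls --quoting-style=shell-escape "$d"   # the TTY default: 'has space'
ls -N "$d"                             # literal names: has space
```

(`--quoting-style` and `-N` are GNU options; BSD ls behaves differently.)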
Of course opinions are like assho...
It's much easier to shout down ideas you disagree with than to support an argument to dispute them. What you need to do is label every word 'racist' and 'homophobic'. After all, everyone knows that unless you agree that the earth is going to melt, and it's man's fault, and we are all gonna die unless we stop using oil and nuclear power and live in an organic farm commune, then you are obviously a racist homophobe and your opinions have no merit. Sorry, tight asses.
Stalinists, not Marxists. If someone has an opinion you don't agree with, you shut them down. Or just kill them. Or ridicule them. My guess is most of the anti-free-speech rhetoric in this comment section is from trolls and such. It's much easier to shout down or censor an opinion you don't like than to make an argument against it.
Produce a Word document and you know it will look just the same in Tokyo, Belgrade, or New Delhi. Likewise, Excel files will display correctly. OO/LO does an excellent job.
Produce a PDF maybe...
It's all about the fonts. Of course Suzy found this really neat font that looks stunning, 'Whispering Forest', and used it in the slide template on the PowerPoint deck. "Your email must have messed it up in transit." "It must be the TV you are presenting on; it's just fine here."
Use the corefonts for everything and this will generally be true.
DavMail will handle the bulk of it with Thunderbird. TbSync has a couple of tools that fill in some gaps. Teams for Linux works pretty well, though it still has audio problems. Zoom and BlueJeans are just fine. MFA gets fun.
The funny thing I find with Win/AD is: first we give you a laptop that can do all these neat things and integrate seamlessly; then we use AD/group policy to turn it all off or make it unavailable. No USB sticks, no saving anything in Chrome, ...
Not this time, methinks. I just don't see a reduction in demand from this point forward. Further, fabs have to be re-architected every few years. I think the unseen problem was the demand boom coming from the micro-controller-type markets. Demand for the small cheap stuff exploded, then dipped with the Covid nonsense, then re-exploded. A bunch of the new, low-news-profile fab capacity is going to fill some of that need. The latest 1 nm fab gets the headlines to make the next crazy-fast mining chip, but the shortage is because everyone fabbed up to make all this high-end, high-profit stuff and ignored all the low-end markets. They will come online, ramp up, and prices may fall briefly, until the word gets out that you can make that widget we wanted to build now, and then things will stabilize.

Keep in mind even an SoC has a bunch of support components to go with it; most of that crap still comes out of China. All the high-end fabbing going in will pretty much just keep pace with retiring old fabs and expanding capacity to level out current demand. Intel is well aware of the cycle and has done a reasonable job of getting new fabbing online as old tech is retired. Micron and Samsung and so forth are in the same boat: they are constantly getting new fabbing in the pipeline, but they have to amortize out the costs sanely.
I offset the downvote; why would you do that to provable fact? Makes no sense. I've often wondered, since amps determine wire gauge, how you step 12 V at 50 A down to 1 or 2 V at 200+ amps and don't need double-aught wire...
P = E * I
600 W = 12 V * I, so I = 600/12 = 50 A
600 W = 1.2 V * I, so I = 600/1.2 = 500 A
"Current" (ugh) chips generally run around 1.2 V, so that is a lot of electrons generating heat in a space the size of a pinky nail.
"But no, the architecture is not the point here. The point is that IPC, library calls, and system calls all inherit the insecurities of C, even where no actual C code is involved in either side, because the C conventions are the de facto standard that all programming languages adhere to for interoperability. This is how an Ada process reads an error text from another Ada process as an interger and Ariane 5 explodes, because C inhereted type unsafety from BCPL."
You just made my point... -Architecture- IS IPC, library and system calls, and all that nasty talking-back-and-forth stuff, and device driver interfaces, and shared memory, and ... Like D-Bus on a desktop is architecture, not language.
The article implies C is somehow awful because someone built a poorly defined architecture with it, to the point that when the underlying hardware became more robust, the architectural definitions became perverted. The fact that nobody has redesigned the underlying architecture to allow for a more robust and well-defined "MI" layer to handle all this is not a problem with the C language; it's a problem with not designing a robust, extensible architecture to compile your programs against.
Again, IBM solved this "interface definition" problem in the '80s so that their systems could become 100% CPU/hardware agnostic. It seems Apple has defined its systems architecture well enough to change CPU architectures four times over its lifetime; moving from big- to little-endian is decidedly non-trivial. The Wintel world was a free-for-all based on the original IBM PC and Intel chips. The fact that nobody deigned to design a proper "MI" layer for commodity PC hardware has nothing to do with C as a language.
"Rust and Swift cannot simply speak their native and comfortable tongues – they must instead wrap themselves in a grotesque simulacra of C's skin and make their flesh undulate in the same ways it does." ®
Paints an interesting picture. Create an MI layer, write the horizontal and vertical microcode, likely lots of assembler. Make it extensible, define everything you can think of, memory-map all your I/O and IPC. Then write your OS against the MI without 'undulating' in a 'grotesque simulacra' of any kind. It's been done before. I've been mildly surprised some university never made a project out of it.
Whenever I read stupid sh*t like the above statement it makes me want to puke. You want to do IPC differently from 'C' with the latest language of the hour, shut the f*ck up and write it, and nail the API down. The bottom line on all this boils down to assembly language on your cpu of choice.
Sorry, the author is confusing a tool (C) with a systems architecture. I love C; I don't use it much anymore, preferring to write most things I do in shell or PHP... YMMV.
As for a fully structured platform architecture, IBM solved this problem in the '80s with the AS/400 (now iSeries). They created a fully functional MI layer, layered on top of HMC/VMC layers, creating a 64-bit canvas with fully memory-mapped I/O and tightly defined data types. It was closed, but WAY ahead of its time. It tended to be rather sluggish as well. They published a number of papers, but you basically compile everything to MI (Machine Instruction) and follow the guidelines, and the tools all keep you honest. Everything works and talks in exactly the same way. The security models were interesting as well: everything was an object, and it didn't support a treed file structure... You could restrict any object many different ways: with ACLs, membership, location, ... At the time I thought it was arcane. Looking back, I get it now.
Frankly, it's sad nobody ever adopted these ideas into an open source OS. As a ground-up approach, it completely hides everything underneath and makes all your programming 100% agnostic to the underlying hardware. Unfortunately, the overhead on '80s hardware had everyone bypassing the BIOS and scribbling directly on hardware for performance, dooming us to the current plights.
Doppler shift. Really? So somehow because my car (or golf cart or ...) is electric, it ceases to make *ANY* noise? Sorry, but electric motors make noise, and tires make noise when they roll, sometimes on crappy roads or gravel an order of magnitude more dB than any car noise. I have several deaf friends, no blind ones, but I used to chat not infrequently with a guy who rode my city bus to work with a seeing-eye dog. He actually mentioned one time that he was annoyed by some of the over-loud backup beeps on trucks because he couldn't always tell where they were. I'm not blind, so I can't speak past that.
So you put forth a premise that 'Doppler shift' in the sound of a modern ICE vehicle helps blind people identify moving vehicles and keeps them from getting run over, vs. an electric car. Do you have a reference or study materials on this? It's a lot like 'masks prevent Covid because your spit doesn't travel as far': 'Well, it has to do some good', except in the real world you just can't actually connect the dots. Lots of opinions, lots of seemingly valid premises (gee, that sounds right), but the pudding ain't got no proof.
The other part is ear 'training'. Since most vehicles have been rather noisy on the outside what you have is a recognized sound. Electric cars also make plenty of sounds without the stupid ESS boxes, but they are simply not as familiar / recognized.
Hmmm. I just dumped my 92-year-old father's Windows 10 box that was a disaster and replaced it with Debian 11, running KDE Plasma, Firefox (which has "all his programs on it"), LibreOffice (he cut his teeth on WordPerfect; this was a little painful), and Thunderbird (which he has used forever). He was stunned at how much faster it was than his old system. I set it up to "auto-VPN" over to one of my servers, and now I can just "RDP" over whenever I need to, or use KDE screen sharing. Zoom works as well; don't even need TeamViewer anymore.
I cut the GF over to Kubuntu 18 LTS years ago. It is safe to say she has had ZERO problems "harming herself thru mistakes she has made". I'm here to tell you, for most home desktops, 99% of the time is spent in a browser. Frankly, the only reason Windows is still relevant is because it is the path of least resistance for the manufacturers.
I might mention all these boxes connect to my OpenLDAP server for authentication, and use KeePass backed by a private Nextcloud server to store all their passwords. Now, OpenLDAP is decidedly non-trivial, and setting up the nslcd and sssd frontends on the machines is not a 'user' process, but at this point the only reason you don't see more Linux on desktops is pure momentum, and the fact that way too many Windows admins get lost if they can't click a checkbox. I've written more PowerShell as the Linux guy than all of our Windows 'administrators' combined. And I can't stand PowerShell. YMMV.
Static compile? Why? Shared libraries solve so many problems: if something is broken, you just fix the shared library and all the programs that use it are instantly fixed.
No Really! (ROTFL, my sides will be hurting soon).
One of the reasons I can tolerate AppImage is that it's basically a static compile. My problem with today's 'static compile' is that, unlike the static compiles of old, where you used 'ranlib' and 'ar' files and only included relevant portions of the library, today's static compile just includes the whole thing every time. So it's basically an AppImage. Which could easily be done by putting all your crap in a folder tree somewhere, using (say) AppArmor to restrict its access to resources outside that folder, and setting your library path local for specific copies of any highly volatile shared libraries you might be needing.
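The folder-tree idea above can be sketched in a few lines of shell. Everything here is hypothetical (the app name "myapp", the paths, the layout), and an AppArmor profile confining the tree would be layered on separately:

```shell
# Sketch: ship the app with private copies of its volatile shared
# libraries and a wrapper that points the loader at them first.
# Built in a throwaway directory; a real install would live under
# /opt or similar. "myapp" is a made-up name.
root=$(mktemp -d)
mkdir -p "$root/myapp/bin" "$root/myapp/lib"

cat > "$root/myapp/bin/myapp.sh" <<'EOF'
#!/bin/sh
# Resolve our own install prefix, then prefer the bundled libraries
# over the system copies before launching the real binary.
here=$(dirname "$(readlink -f "$0")")
export LD_LIBRARY_PATH="$here/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$here/myapp.bin" "$@"
EOF
chmod +x "$root/myapp/bin/myapp.sh"
```

Drop the fragile .so files into myapp/lib, symlink the wrapper into your PATH, and you have most of what an AppImage buys you, minus the squashfs.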
Kubernetes and containers just create another layer of bulls... to debug. I just had an argument with a fellow at work who was struggling to define what a 'container' buys me over a tiny VM guest. Apparently another layer of management, with some flaky tooling, is a "Good Thing". Apparently this somehow saves overhead, but I'll be damned if I see it in practice. It's basically a chroot-type jail (which I'm not a big fan of either) built into a virtualized "container" that depends on the virtual machine, that depends on the hypervisor host, that depends on the hardware underneath.
IMNSHO, if it needs that level of isolation, build a tiny Alpine install (2 CPUs, 1 GB RAM, 1-2 GB disk) and run the application. Why invent a new wheel? Build a better guest management and deployment framework for KVM/VMware/whatever. Quit overthinking the problem. Stop solving problems we don't have with a new idea everyone should use.