Re: Buggritt
Got back home, there were 3:
https://www.theregister.com/2008/10/02/eee_girl_saga_continues
But one will not accept the pint; they'd rather have wine.
471 publicly visible posts • joined 31 Jan 2024
If you search my past comments you will discover that:
1.) I am a Venezuelan (born and raised) living in Venezuela
2.) I refurbish(ed) very ancient laptops with Linux (even Pentium 4-class laptops) to donate to Venezuelans and Cubans (from the medical mission here) whose only other option was no laptop at all.
Yes, I am not starving, but many people here are. I may be many things, but condescending towards Venezuelans or Cubans is not one of them.
IIRC Intel also had i586- and i686-class rad-hard devices.
There are a couple of 5.x and 6.x kernels in the CIP SLTS branch that will have support for 6 to 8 more years.
But in the halcyon years of i486, i586 and i686, NASA tended to use VxWorks, QNX and other stuff, and NOT Linux. So, not an issue.
Linux !IN SPACE! is a quite "recent" phenomenon in the grand scheme of things.
Tell me you jumped straight to the comments section without reading the article, without saying "I came to the comments section without reading the article".
The article quotes Ingo Molnar saying: "We have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very very few people are using with modern kernels," Molnar explained. "This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things."
The same applies to i586.
If you substitute "AI-generated bug report" with "automated fuzzing tools", it's like reading an article from circa 2002.
The workload of software maintainers increased significantly, because researchers used automated tools to find significantly more bugs...
They survived the onslaught then, and the maintainers of today will survive this onslaught too...
Why find an old XP machine and air-gap it, instead of finding a Win10 machine and air-gapping that?
You said it yourself. The XP era drivers work on win10
As a matter of fact, depending on the type of contract your company has with Microsoft, you may not need to air-gap until Oct 26, or Oct 28, or Jan 31...
Easier to source compatible modern machines with less/no use.
Easier to find people able to maintain said machine
Fewer (known) security holes in Win10 than in XP...
Well, you will get an AI chatbot, or an AI chatbot with speech recognition and text-to-speech to "talk" with you.
But, for high rollers, or when the AI chatbot gets utterly confused, you will get some folk in Puerto Rico, Louisiana, Mississippi, New Mexico, West Virginia, Kentucky, Oklahoma, Arkansas, New York and Tennessee, the 10 poorest states in the USoA.
I guess for people in those states this call-center onshoring will be kind of a silver lining. And for people in the USoA who hate to speak with a non-USoAn, you got your wish; let's just hope the companies use an all-American model, and not DeepSeek, Mistral or DeepMind...
People in Puerto Rico in particular can do double duty, manning phones in English AND Spanish.
Reference for the list of 10 states: https://www.fcnl.org/updates/2024-09/top-10-poorest-states-us
What about starting services concurrently, in parallel, to boot faster? For desktop, embedded and traditional server use it's a footnote.
For OpenStack (what used to put bread on my table) and elasticity in the cloud it makes all the difference in the world.
Sys-V init and its predecessors could not do it. Systemd, runit and launchd can.
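To make the parallel-start point concrete, here is a toy sketch in Python (service names and the supervisor logic are hypothetical, not how systemd or runit actually work internally): everything is launched at once, and ordering comes only from the dependency graph, so independent services come up concurrently instead of one after another like sequential rc scripts.

```python
# Toy dependency-graph supervisor: all services are submitted at once;
# each one waits only for ITS declared dependencies, so independent
# services (network, syslog) start concurrently. Names are made up.
import threading
from concurrent.futures import ThreadPoolExecutor

DEPS = {            # service -> services that must be up first
    "network": [],
    "syslog": [],
    "database": ["network"],
    "webapp": ["network", "database"],
}

events = {name: threading.Event() for name in DEPS}
start_order = []
lock = threading.Lock()

def start(name):
    for dep in DEPS[name]:
        events[dep].wait()          # block only on our own deps
    with lock:
        start_order.append(name)    # "service is now up"
    events[name].set()

with ThreadPoolExecutor(max_workers=len(DEPS)) as pool:
    for name in DEPS:
        pool.submit(start, name)    # everything launched in parallel

# network always comes up before the things that depend on it
assert start_order.index("network") < start_order.index("webapp")
```

A sequential Sys-V-style boot would run the whole list in one fixed order; with the graph, only the chains that genuinely depend on each other serialize.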
No. There is still plenty to improve in init-land
In theory, it can. But in practice, unless you are into VERY RETRO Windows gaming, it's of no use.
This distro has a very specific use case in mind (low-end hardware) as well as a very strong ideological stance, so the hoops you have to jump through to get some software (like closed-source drivers, firmware, codecs, DRM and plug-ins) are well above the Linux average.
If you have hardware capable enough to run modern games, you do not need an "Ultra Lightweight" distro like AntiX, instead, go for something like Mint or Zorin, which, while still lightweight, will make your life much easier.
No. MX Linux WAS a cooperative development between MEPIS Linux (hence the M) and AntiX Linux (hence the X). MEPIS died, so AntiX is the only "surviving parent" of MX.
As a matter of fact, MX Linux still pulls a lot from AntiX. The most recent pull being the capability to offer multiple init systems, which MX was having problems with and was about to let go. As per the MX people themselves:
«The current system is thanks to @ProwlerGr and their “init-diversity” work with the antiX distribution.»
https://mxlinux.org/blog/mx-25-1-infinity-beta-1-isos-now-available-for-testing-purposes/
Perhaps Liam can chip in on this conversation.
I'd not say it's four too many. They are moving from Sys-V init as default to the more modern runit as default. The other three are experimental. So maybe, at some point, they may ditch Sys-V init and the failed experiments, and offer only one or two main ones, plus newer experiments.
They have a long tradition of multi-init support, so it's both a differentiating factor for them and fertile ground for init testing and improvement.
What are the chances that Sys-V init was the pinnacle of init? Or that systemd is the pinnacle of init tech? In both cases I'd say "slim to none". So having a place where it's easy to test, develop and compare multiple init systems is important.
Yay for AntiX.
Installed the beta of AntiX 26 on really old machines. We are talking 32-bit-only P4 laptops. It was the only way to make them "usable" for donation (for some Venezuelans, and all Cubans, those laptops are very welcome indeed).
Coming from sysadmin and OpenStack work, I do not care much about the "init culture wars". If anything, I have some sympathy for systemd, so the only things that factored into the decision were 32-bit support and lightness, and AntiX delivered.
For more powerful 32-bit-only machines, Mageia it is!
A mini PC running Mint and a browser like you suggest would probably run DOSBox or a VM with FreeDOS, so you can run WordStar/WordPerfect, dBase ][+ and Lotus 1-2-3 just fine, negating the need for an inefficient and power-hungry XT on the side.
My response was to the idea that you can run an XT, not to the choice of software on said XT. Even with a mini PC, it makes no sense to run a two-computer setup, even if you still want to use WordStar and Lotus 1-2-3.
PS: In my Linux VMs I use Joe's Own Editor; muscle memory from my WordStar and Borland days.
Even with FreeDOS and Dillo, your cunning plan of running a household or small business on an XT crashes against the fact that the world around that XT changed, so you need a modern browser to interact with the government, banks, suppliers and customers.
Try the argument again with a P2 with 300MB of RAM running Tiny Core Linux and some light word processor and spreadsheet, and maybe, just maybe, the argument may fly.
You must be confused. More than 60% of RAM worldwide comes from South Korea (Samsung and SK Hynix) and about 30% from the USoA (Micron). Taiwan's participation in the global DRAM market is negligible. Even China produces more RAM than Taiwan.
You must be thinking of advanced logic chips, which are very important indeed, but off-topic for the article at hand.
All these tycoons wanting to put (AI) datacenters into space is a strong indicator that:
Moore's law, while not dead, has slowed to a crawl.
There are no more big architectural gains to be had on the hardware side.
Otherwise, it makes no sense to put an (AI) DC up there just to de-orbit/burn it after 4 years.
So we can expect that cutting-edge hardware progress will stall in the coming years.
The computing resources used to run DLSS (tensor cores) are as different from the resources used for rendering (CUDA cores) as the resources used for rendering (CUDA cores) are from the resources used to run the game logic (ALU units in the CPU cores).
By the time the tensor cores were introduced (RTX 20x0), architectural progress in rendering technology had slowed significantly (10~20% gen-on-gen, including both architectural improvements AND process shrinks); meanwhile, since tensor cores were a newer tech, architectural improvements in the successive generations were much more significant.
As a developer (games or otherwise), you would be wise to actively leverage the parts of the hardware stack that are seeing the bigger generational performance improvements... We saw it in games when the faster FPU of the Pentium* was leveraged for games like Quake, then again when the 3D-accelerated GPUs were used instead of the FPU + software rendering. We saw it also in both gaming AND non-gaming workloads with SIMD, when SIMD stored procedures in databases significantly accelerated certain calculations. We saw it again in non-gaming workloads, when a bunch of operations were moved from FPU/SIMD in the CPU to OpenCL. And now, part of the rendering workload is being moved from CUDA cores to tensor cores.
It's not an "all or nothing" type of situation.
* The FPU of the Pentium was faster both in terms of raw calculation speed, and because the CPU could issue one integer instruction in parallel with one FP instruction.
With my P1 90MHz, 16MB RAM and WfW 3.1, I "discovered" that if I made a 4 or 8 MB RAM disk, compressed it with DoubleSpace and pointed TEMP, TMP and the Windows pagefile to it, the machine was greatly accelerated (especially compile jobs) AND could fit bigger software than would be possible otherwise.
Sadly, Win95 put an end to it, as the swap bypassed DoubleSpace via APIs and stored everything uncompressed.
One of the reasons I delayed the upgrade as much as feasible.
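As a rough modern illustration of why the compressed RAM disk held more (the sample data is made up; zlib here is just standing in for DoubleSpace): text-heavy temp files, like the intermediates a compiler spits out, typically compress several-fold, so an N MB compressed volume behaves like a much larger one.

```python
# Illustration only: repetitive, text-heavy data (think compiler temp
# files) compresses well, which is why a DoubleSpace-compressed RAM
# disk could effectively hold more than its raw size.
import zlib

# Hypothetical temp-file contents: repetitive source-like text
temp_file = b"#define BUFFER_SIZE 4096\nint buffer[BUFFER_SIZE];\n" * 2000

compressed = zlib.compress(temp_file)
ratio = len(temp_file) / len(compressed)
print(f"raw: {len(temp_file)} bytes, compressed: {len(compressed)} bytes")
assert ratio > 2  # text-heavy data usually does far better than 2:1
```

The flip side, as noted, is that anything that bypasses the compression layer (like the Win95 swap) loses the whole benefit.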
ATi was the bigger company, yet, post-acquisition, all the executives were ArtX people.
Here it's the same: the people who will direct the company post-acquisition are the Google ones.
My educated guess is that this has a BIG acqui-hire component, with the board of Astound dissatisfied with the performance of Astound's upper management.
In a past life I was a technical trainer for OpenStack. Systemd solved MANY pain points for us sysadmins, doubly so in OpenStack.
Y'all can argue until you are blue in the face about whether some of those pain points should have been resolved _outside_ of the init subsystem, or whether the solution was good or not so good.
But the fact remains that, at the time of its enshrinement, systemd was the best fix available for all those pain points.
If only because they get to the next version of Debian sooner.
Debian is slow to release as it is; making it even slower for remixes is not good for my use cases (simulating many GUI end-user machines in virtualized networks for teaching).
For ultralight desktops for actual human use (donation), I tend to go with AntiX, as the UI is more Windows-like, and the donation recipients prefer that.
For small distros for specialized tasks, also AntiX (but sans the UI), as I got familiar with it via the donation stuff.
You are thinking of it from an individual user's point of view. Many an enterprise/education customer has more than one person using the same desktop (think operators in shifts). Or machines get reassigned. Or internal training and support materials are developed with nice screenshots to accompany them.
For those customers (who are the customers MS is interested in) it's better to standardize on one "true" UI.
It also helps that Microsoft has to spend less money on development and ongoing support (both customer support, as well as patching and stuff) if there is only one non-customizable interface.
Once you associate a pictogram with a given function, finding the exact button you want is easier with a pictogram than with a text-only button.
The best interfaces were done in the late 90s, when we had combined text + icon buttons.
The text helped with discoverability when you were not yet familiar with the program, while the icons helped you locate the buttons faster once you familiarized yourself with it.
A smaller amount of RAM is one thing. Less expensive RAM is another.
Each (LP-)DDR4 chip is less expensive than a same-size (LP-)DDR5 chip, and (LP-)DDR3 chips of the same size are even cheaper, but you can not go from one to the other without changing the mobo (and sometimes the processor too).
Meanwhile, each DDR5 chip costs the same no matter if you put it in one 8GB (SO-)DIMM or one 16GB (SO-)DIMM, but buying a machine with 8GB will cost less. You can go from 8 to 16GB of DDR5 without changing the mobo or processor.
107% agree, but with a caveat. Give a Windows refugee the distro as-is OotB. Do not customize the distro. Otherwise, as soon as a reinstall is needed, or an update/upgrade changes stuff, they will either be up shift creek without a paddle or come back to you for re-customization.
If they do customize it on their own, that's up to them.
That's why Zorin and Mint rule.
I'd also tell you about the time I was a contractor for Huawei (2012-2016). The spreadsheets with timesheets for students, exam results, as well as my expense reports, had macros that did not play nice with LibreOffice.
Word documents (exams and technical documentation) and PPTs that Huawei provided, which I had to edit or translate to Spanish, had their formatting borked by LO.
Can I do a presentation in LO? As a matter of fact, during that time I also taught at the uni (in Venezuela, so the salary was junk), and all the paperwork and presentations I did in LO + yEd, so yes, I could "do stuff". But I needed Office to operate on the documents Huawei insisted I use.
My options at the time: refuse to work with Huawei for not being open enough and not put food on my table (salaries in Venezuela were crap at the time, so the sporadic work with Huawei in Mexico, Colombia and Brazil was what really paid the bills). Try to convince the whole Huawei training division to migrate to LO (good luck with that). Use LO to maintain my FOSS purity, do a lot of extra and unpaid work reformatting stuff, and take extra risk by re-implementing the macros. Or, you know, use Windows/macOS + Office.
Well, macOS + Office it was!
I thought I gave clear enough examples in the OG comment. But I overestimated part of the audience.
Probably the impedance thing, combined with the need for low power dissipation per pin (driving socketed HBM produces lots of heat because of the sheer size of the bus concentrated in such a small space).
Also, there is no memory controller prepared to use socketed HBM.
Something similar happened with LP-DDR: from its inception it was designed to be soldered. It took YEARS for some bright sparks at Dell to come up with CAMM, which opened the door to socketed LP-DDR.
They are investing, but not massive amounts.
Actually, some of them are disguising investments decided and/or made before this crunch as reactions to the crunch, and commingling (is that a proper verb?) them with investment directly related to this bubble.
So yes, the actions of the big three memory manufacturers point to a 2-3 year bubble of memory demand.
AI uses HBM memory. That CAN NOT be socketed, so it's always soldered, so it goes to the scrap pile.
HBM production lines can be retooled to make GDDR, DDR or LP-DDR, but that costs money AND the line is out for weeks or months, so sometimes manufacturers opt to mothball the lines instead.
Datacenters use Fully Buffered ECC DIMMs. Even if the memory controller of your processor supports them, you need special mobos to use them.
Also, there are modules that allow you to take your older FB-ECC-DIMMs and use them as CXL memory in more advanced servers that would not take the older modules directly.
Those two factors combined mean: do not expect a flood of FB-ECC-DIMMs on the used market either.
Datacenters do not use normal DIMMs, SO-DIMMs or CAMM2. So again, nothing of that sort flooding the used market.
Come 2028, the only thing we may expect to reach the second-hand consumer market out of the datacenters is the upcoming (but imminent) SO-CAMM2 LP-DDR modules...
Zorin and Mint are the best distros for "Windows Refugees".
People who DO NOT want to leave Windows, but circumstances force them to leave Windows.
People who DO NOT want to use Linux, but circumstances force them to use Linux.
Asking those people to use Fedora or Ubuntu will lead to frustration and lower productivity. I've seen it with my own eyes.
That's why, whenever I prepare a machine for donation, I go with Zorin CE if possible, Mint otherwise. Mageia or AntiX for 32-bit-only machines, depending on power.
The OS's job is to let you run the apps you want/need/have/are forced to run and get out of the way.
If said apps are not available on said OS, the OS is useless. This is not a statement about the quality and technical merits of said OS; this is just the way the world works.
If you want/need/have/are forced to use Final Cut Pro (say, because the customer requests it), then Linux or Windows will do you no good.
If you want/need/have/are forced to use Excel (say, because there are some macros in the spreadsheet your employer uses that do not play nice with LibreOffice), then Linux will do you no good.
I use Linux for donated laptops, and it's very good at that. But it's no panacea. And for some people, free hardware with Linux on top is of no use.
No one is patching security bugs
No one is patching non-security bugs.
The software world outside the project (GTK2 in this case) keeps evolving, which means that, sooner or later, there will be interop issues, and no one will be fixing those.
When it stops being maintained, it's dead...
"He's dead Jim"
Memory is getting more expensive, and storage is getting more expensive, so they are trying to lower costs somewhere else; cost-saving initiatives that normally would be overlooked suddenly become interesting...
Be it using more JIT, going to cheaper logistics firms, or using an SMT resistor that is 0,001¢ cheaper: normally you couldn't care less, but with memory and storage prices hurting margins (because you can not pass 100% of the increase to consumers), volumes (because you are selling fewer units due to high prices) and possibly market share (because, maybe, your competitors also see a contraction, but they contract less than you), every little bit of cost savings counts.
And no, savings in other parts of the chain will not completely offset the memory and storage price hikes, so the prices of the machines will still go up, but every bit helps.
Not necessarily. There are three main companies making DDR4, DDR5, LP-DDR4 and LP-DDR5 memory chips.
But for the DIMMs, SO-DIMMs and CAMM2 modules that HP desktops and laptops actually use, well, there are plenty of companies making them to go around, and they all need to be qualified. HP does not buy memory chips, design and manufacture its own PCBs for modules, and solder the chips to the modules...
It just buys the modules already made from a third party. Again, as said, there are plenty of those, and they need to be qualified.
Also, with the uncertainty about CXMT being (or not being) on the entity list, I am not sure HP can even work with CXMT towards qualification, even if, after qualification, they do not buy a single stick with CXMT chips.
PS: Yes, I know that, until recently, the only way to use LP-DDR was to solder it directly to the board, and therefore HP had to buy THOSE chips directly, but with the advent of CAMM2, and the imminent arrival of SO-CAMM2, coupled with this memory crisis, there is renewed interest in laptops with upgradeable (LP-DDR) memory.