> "by popular demand," it is now possible to check which model was used to run a prompt.
Jeez, really? It's a popular requirement to be able to at least tell which one of the black boxes produced the output?
Wow! Who could've foreseen that!
> I was, however, restrained - I delivered my opinion in a private office with the door closed, and not in the open-plan office space in which I first saw the code submission.
Thing is, the code that goes into the Linux kernel ends up on tens of billions of devices, from supercomputers all the way down to smart fridges. That's why these discussions are in the open, NEED to be open, on a mailing list, and not behind closed doors. The world depends on Linux, and by extension on kernel hackers to not drop the ball.
The people writing that code literally make the software that keeps civilisation spinning.
> As others have said, ChromeOS became widespread because it came preinstalled on cheap laptops, not because it's so locked down.
This. Windows isn't winning on desktop computers because it's better than Linux. It wins because it's what's on the machine when people buy it.
"If only there was a simpler OS"
"If only it had a more accessible appstore"
"If only the Desktop was better"
"If only it was as simple as Chrome OS"
...if only [argument-goes here] changed about the FOSS OS offerings, surely the masses would flock to it, right? Right?
No. No, they won't.
This isn't about the OS, folks. It's not about the appstores. It's not about the usability. Ask yourselves: how many of those non-tech people using Windows have EVER installed Windows themselves? Any guesses? Hmm? BARELY ANYONE. How did they set up their computer then? Easy: by virtue of it coming pre-installed from the store.
This is the secret sauce. This is the reason. And while the answer to "What does this come installed with?" is "Windows _", Windows will remain the dominant desktop operating system. Period, end of sentence.
Windows doesn't win this because it's so much simpler to use than, e.g., Linux. At the surface level, sure. Anything beyond that, and a modern Linux desktop environment wins easily. Linux also wins on ease of install... I know it's en vogue to criticise Manjaro, but have you tried the installer of their KDE version? It's pure bliss, and so simple my granny could set up LUKS herself with it.
And even at the surface level, a well put-together KDE desktop, or dare I say it, even GNOME these days, is easily as simple to handle as Windows, or even simpler. I mean, short of actually inventing J.A.R.V.I.S. from the MCU, how much simpler than desktop metaphor + Start menu + clickable-everything + searchable settings menu + appstore + preinstalled everything + super-easy installer can it get?
But all that is completely beside the point. People don't use Windows because it's so great. People use Windows because it's what was on the machine when they bought it. Most people aren't in tech, and doing an "OS install" is, to most people, a much scarier prospect than changing the tires on their car... despite the latter being a far more difficult and dangerous activity.
So, if the FOSS world wants to change that, it needs to change how people's access to consumer hardware works.
Shit like that is part of the reason why many companies' cybersecurity is a shambles.
When a company's (and yes, HR is the face of the company as far as employees are concerned) first reaction to a cybersecurity problem is to blame the messenger, guess what: the messengers will stop coming.
Browsers are probably one of the most critical pieces of software ever. They are networked super-applications, with capabilities that rival those of an OS. They have to execute code from random sources on purpose, and do so in a secure way. And we trust them with the most intimate and critical details of our lives.
Ain't no way in hell I'm gonna exacerbate the risks involved in that by also running 3rd party code, that automatically eats changes done by god-knows-who, and has essentially privileged access to the Browser, on top of that.
Yes, that costs me functionality. Boohoo. There is always a trade-off between comfort and security. I choose the latter.
The biggest stumbling block is gravity.
Let's be veeeery science-fictional for a second and assume we had really large (think: container-ship large), really cheap, really efficient super-hyper-mega rockets to ferry mass around the solar system with. Then bringing in organic nitrogen compounds in large quantities at least starts sounding kinda sorta doable.
But we cannot increase a planet's gravity. Period, end of sentence, that's all she wrote. And while we don't know exactly what would happen, it's a pretty safe bet, backed up by 22 million years of hominid evolution, not to mention 518 million years of vertebrate evolution, that our bodies wouldn't like living at 1/3rd of Earth's gravity for long.
> For example, the aim with the Linux kernel
Not every software is as modular as the Linux kernel.
> However, there has been significant pushback to that plan
Yes, and part of that pushback has to do with the exact problems outlined in the above post. Writing NEW stuff in another language is cool and exciting. Rewriting old stuff, or changing old stuff to accommodate the new stuff, is hard, tedious, risky work. But someone would need to do that, or the new stuff won't happen either. And someone will need to maintain that limbo state in the old stuff until the transition is complete. Which is a tall order, when the deadline for that happening might be: never.
THANK YOU!
Software isn't gonna be rewritten, not on a large scale at least, and most certainly not at the infrastructure level. Some new stuff might get built in Rust or another memory-safe language. But not at scale, and not the core pieces of our software ecosystems. That's a fact, and the sooner people realize that, the sooner we can move on from this endless discussion.
Something else: even IF we could rewrite every browser, database, OS, driver, network equipment firmware, etc., we would not make things "safer"... quite the opposite, in fact. We may fix memory-safety issues, sure. But something people tend to forget is that these decades-old codebases also contain endless fixes for logic errors, security loopholes that have nothing to do with memory safety, authentication mishaps, patching problems, and so on and so forth. If we rewrite all that from scratch, what do we get? Bingo: codebases that will rediscover many, many of the errors we already fixed! And it would again take decades to get them to the point our existing software is at now.
Yeah, I kinda doubt that.
Because, if there is one thing I am 542% certain about, it's the fact that no LLM, no matter who offers it, no matter what the nice smiling salesperson promises, and no matter what cool acronyms the thingamabob comes packaged up in...
...will ever get any access to a shell on any machine I am responsible for...
...unless there is an agreement, in writing, signed by upper management, cosigned by the Pontifex Maximus, on thrice-consecrated vellum, sealed and sworn to before the eyes of men and the gods old and new, that I am not only not responsible for ANYTHING that happens on that machine, but that I am also in no way shape or form required to fix anything that happens as a consequence of such action, in perpetuity, throughout the known universe, unknown universe, any fictional universe, and all known and unknown dimensions, timelines and realities, past, present, future, yet to be discovered or hallucinated.
Many such proposals make the assumption that governments will always and ever work for the good of the people.
Gentle reminder: that can change with just one bad decision by the electorate.
If there are structures in place to essentially kick people off services they need to function in daily life, the question is not IF some government would eventually end up using that to go after people like protesters, "uncooperative" journalists, political opponents, vulnerable groups, etc.
It's only a question of WHEN that will happen.
And besides: VIN numbers exist, and yet cars are still being stolen. And unless someone has a good proposal how to make essentially the entire world cooperate on this issue, there will pretty much always be some market for stolen phones somewhere in the world, where carriers simply ignore a hypothetical global IMEI-banlist.
This.
The REAL problem for people transitioning to Linux was never usability. People always go on about "Linux Desktop has come a long way"... yeah, newsflash: so has Windows. I used Windows 3.1 and 95, early NT and 2000, and guess what: non-IT people were about as comfortable in those interfaces as they would have been in early versions of Linux desktop environments.
There is a reason Apple became popular among creative types who aren't in IT, long before the hardware itself became a lifestyle brand: Apple's interfaces were, to many people, more intuitive and easier to navigate than the alternatives.
Something lots of people forget when they (justifiably) fondly remember those older Windows interfaces is that they remember them *as IT people*. To us, those uncluttered, no-nonsense, no-frills interfaces of early Windows versions, where things were where you expected them and information density was high, were good, they were solid. We like all these things.
But to the average user, they weren't. They were scary. Anything beyond the "Start"-Menu, and even half the stuff in there, and the 2-3 Folders IT pre-set for people to appear as Links on their Desktops, was a no-go-area for most non-IT users.
So no, what limited Linux Adoption was, in my opinion, never how user-friendly the desktop was.
What limits the adoption boils down almost entirely to the fact that *MOST PEOPLE NEVER SET UP THEIR OWN OS FROM SCRATCH ON BLANK HARDWARE*, and Windows comes pre-installed on almost any desktop machine under the sun.
> Located outside Boxtown, a historically Black neighborhood in South Memphis, the datacenter's extensive use of mobile gas turbines
https://www.aljazeera.com/news/2024/11/6/us-election-2024-results-how-black-voters-shifted-towards-trump
https://www.theguardian.com/environment/2025/may/01/trump-air-climate-pollution-regulation-100-days
> Located outside Boxtown, a historically Black neighborhood in South Memphis, the datacenter's extensive use of mobile gas turbines
Question: can you imagine such a facility being built near, say, the villas of the rich and famous in LA? Or anywhere close to the spots where billionaires have their mansions?
Replying to my own post here as an addendum.
Even if we ignore all I covered above, there is yet another, bigger-picture question to answer:
WHAT'S THE POINT?
What's the point in drilling for water on Mars? We cannot "terraform" it, it's impossible. Even IF there is that much water on Mars, and even IF we could somehow get it all out... what then? Let me tell you what happens next: the water immediately boils off into the thin atmosphere. Okay, water vapour... doesn't really do anything for us. So let's say that we can somehow separate a planet's worth of water into H and O (we can't, but let's just assume for a moment). Let's further assume that we can somehow magic away the hydrogen.
Now what? We would create an oxygen-only atmosphere, which is highly toxic to most life, including humans. Earth's atmosphere is mostly nitrogen (about 78%). Where does that come from on Mars?
But it gets worse, because the atmosphere wouldn't be stable. Mars has no active core dynamo, meaning no global magnetic field to speak of, meaning no protection from solar winds. Remember that I said Mars has about 0.6% of the atmospheric pressure of Earth? Why is that, when it has 1/3rd the gravity? Because, without a magnetic field protecting a planet from them, solar winds simply shred its atmosphere away piecemeal, like a cosmic rasp.
So even IF there is that much water, and even IF we could get it out, and even IF we could transform it to Oxygen, and even IF we also found a way to get lots of Nitrogen or other inert gas...
...we still couldn't build an atmosphere, because restarting a planet's core is simply not something a species that hasn't even reached level 1 on the Kardashev scale is capable of.
> then wouldn’t it be easier to drill than Earth?
No, not at all.
First off, the drilling equipment would need to be designed to operate in a near-vacuum environment (Mars's atmospheric pressure is ~0.6% that of Earth's). Meaning any liquid (like lubricants or hydraulic fluid) will be hell to handle. Drilling equipment tends to need A LOT of those. Good luck drilling anything if your equipment's lubricant film just evaporates away constantly.
Second, the cold itself is going to be a problem; the mean temperature on Mars is about -63 °C. So your drilling equipment will *also* need to be designed to operate at essentially polar temperature levels. Which, again, plays hell with a lot of systems, including lubrication, motors and circuitry, and it affects how materials behave.
Third, Mars is covered not in sand, but in powdered regolith. Think dust-sized tiny shards of glass and asbestos, with razor-sharp edges. And since it is also freeze-dried, it easily gets statically charged, clinging to absolutely every surface. Remember how military vehicles constantly experience failures in deserts? Like that, but a million times worse.
Number four: energy. Drilling requires A LOT of it, and whatever energy source it uses needs a high energy density as well. On Earth, we solve this by burning fossil fuels. There are none on Mars.
And lastly, yes, it is a problem of logistics, but that problem is simply not solvable. Imagine what drilling gear to reach such depths requires in material to build. Now imagine all the scaffolding, machines, tools, storage, spare parts, etc. required to build the thing. Now imagine all the people required, and the food, water and toilet paper they need. Now imagine all the gear and materials required to build the energy infrastructure for the drilling, and maintaining that.
We are talking about millions of tons of payload here.
> whereby other countries then come to the table and drop their already-existing tariffs on US goods
That's not gonna happen for 2 reasons:
1) These other countries have ZERO incentive to lower their tariffs in response, for economic reasons... that would just give American companies a competitive advantage in those countries' home markets.
2) These countries have every geopolitical reason to instead INCREASE their tariffs in response. If they didn't, that would signal to the world that bullying works against them.
The US government believes that other countries will negotiate with them, on US terms, and isolated one by one... because that's how MAGA perceives trade: as a zero-sum game, where every win has to be someone's loss. A world where cooperation generates wealth, thereby making everyone win, is not on their bingo card. Consequently, that's also how MAGA perceives international politics; they believe every country is in it always and only for itself at the expense of everyone else... which is why they are constantly flummoxed when groups of countries, such as the EU or Mexico+Canada (or even unlikely allies such as China+Japan+Korea), band together to implement retaliatory tariffs in response to those from the US.
> Wouldn’t raising U.S. tariffs encourage other countries to negotiate and lower theirs in response,
No...because that's not how tariffs work?
If country A raises tariffs on goods from country B unilaterally, country B is pretty much forced to impose additional tariffs on country A, lest it endanger the competitiveness of its own industries.
> The newscycle has now reached picohertz frequencyies
Hertz, the SI unit of frequency, is defined as "the number of periodically repeating events in the span of 1 second".
The prefix "pico" denotes 10 to the negative 12th power or 1E-12
Therefore, a "picohertz" would denote 1E-12 events per second, or roughly 1 event every 31709 years, give or take.
I think news cycles are a bit quicker than that ;-)
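For anyone who wants to check the arithmetic, here's a quick sanity check (assuming a plain 365-day year, which is why the figure wobbles by a few years depending on the calendar you pick):

```python
# Sanity check: the period of a 1 picohertz "news cycle".
pico = 1e-12                    # the "pico" prefix: 10^-12
seconds_per_year = 365 * 86400  # plain 365-day year
period_seconds = 1 / pico       # 1e12 seconds between events
period_years = period_seconds / seconds_per_year
print(round(period_years))      # ~31710 years between news items
```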
> It nay be rather better on Mars when Earth enters the next ice age in 10000 years.
No, it would not.
During the last ice age, the average temperature on Earth was about 6°C colder than it is today: https://en.wikipedia.org/wiki/Last_Glacial_Maximum
On Mars, the average temperature is minus 63 °C. To put that number into perspective: the average winter temperature at Earth's South Pole is minus 49 °C.
And no matter how cold it gets during an ice age, Earth still has:
- an atmosphere that we can breathe
- a gravity well that doesn't destroy our circulatory system and bone density
- an active core generating a magnetic field that protects us from cosmic radiation
- liquid water
- soil
- plant life
- organic nitrogen compounds
- a microbiome
...and all the other gizmos required for this nice, cozy biosphere that makes our lives possible.
> Speaking more generally about space colonization than just Mars, as the saying (well, T-shirt) indicates it's not wise to keep all of a species' eggs in one basket.
Yeah, that narrative doesn't work though, because here is the thing:
It doesn't matter if we establish settlements on the Moon, Mars or wherever... none of these outposts will ever be self-sustaining. There is 1, ONE planet in the solar system with a biosphere capable of supporting human life: Earth. Everything we need to survive originates here, and if we want to go somewhere else and stay there, we have to ship supplies for as long as we want to be there.
And no, we do not have the technology to terraform other planets. If we did, we would not currently be at a loss over how to deal with global warming, and that is here on Earth, where we have all our resources and toys freely available to us. We also don't have the tech to make artificial self-sufficient habitats on other planets. We know that because when we tried (Biosphere 2), it turned out we cannot even do that here on Earth. And mind you, that experiment had all the infrastructure and resources of an established industrial society to set everything up before the "biospherians" arrived.
So whatever outpost is established on Luna, Mars, some moon of Jupiter, or wherever, will live exactly as long as the supplies from Mother Earth keep coming, and then its inhabitants will die of hunger or thirst, or suffocate when the airlock gives out because Earth no longer sends spare parts for maintenance.
So unless we discover how to do FTL travel, and find other Earth-Like planets, we better get comfortable with protecting the planet we have, because right now, if this little blue-green marble goes bye-bye, so does humanity.
We ... are developing this technology. All those robots didn't get up there on thoughts and prayers.
And the tech required to keep humans alive up there is useless in any other context. It is also neither glorious nor particularly interesting; it's essentially space plumbing.
Fact is, Mars isn't for humans. Keep sending more and better robots, in those conditions they are simply better at the job than people. And developing THAT tech absolutely benefits our total technological progress.
Now I'm curious: What kind of "failure" would make Earth less hospitable to human life than Mars exactly?
Even after a global thermonuclear war, it would still be easier to keep humans alive here, cosplaying as IRL Fallout 4 settlers, than on Mars: an irradiated, freeze-dried nightmare with zero soil, no nitrogen to be found, half the sunlight, covered in microabrasive dust, and an atmospheric pressure of about 0.6% that of Earth.
And besides: No, there will not be a "self-sustaining" mars colony. Ever.
Any settlement up there will either be supported by regular delivery and crew rotations from earth...or it dies. Horribly.
So if Earth goes bye-bye, so does anything we built on mars.
Sorry, not sorry, but anyone willing to go through the pain of using Windows 7 is already well equipped to use a much more accessible, user-friendly, easier-to-install, updatable and easier-to-configure penguin pet of one's choosing.
And if you absolutely want to run Win7... why not do so inside a handy VirtualBox VM, which can be set up in a GUI with a few clicks? At least if that goes belly-up from some malware, you can just delete the box or reset it to a snapshot.
I respectfully disagree.
Many engineers have told management for YEARS that this would be an issue at some point. What did they get as responses (if any)?
- "Why would we need networking, we are cloud based"
- "This will easily be outsourced"
- "AI will do it"
etc. etc.
So no, I don't think engineers have any responsibility here. The decisions, including things such as how career progression works, were not theirs to make... that's the job that managers get paid really well to do. The job of engineering is to provide all necessary technical information so that management can make informed decisions. If management chooses to ignore this information and thinks they know better, that's not on the engineers.
If and when corporations inevitably come to the conclusion that, surprise surprise, computers meant to do anything useful in the 21st century do, in fact, need networks, and those networks, no matter how well designed, do, in fact, need someone knowledgeable to maintain them, there will be 2 types of companies:
Those that listened to the advice of people who tried to tell them, and those who didn't. The former will be fine, and the latter can't say they weren't told.
> Until the Linux community can display more welcome
There are very, very, very few communities in the world that are more welcoming than the Linux community. Tens of thousands of volunteers are writing patches, documentation, blogs, and are active in support forums.
> and less nit-picking and internecine squabbling over minor technical details
This "nit-picking" and "squabbling", is exactly the reason why Linux users have choice and options to pick from, instead of being presented with corporate monoliths, subscription-fees and enshittification.
> will remain a niche endeavor
If it does, so what?
Linux development efforts never were and never will be dependent on market share. If other people want to pay some big corporation for generously smearing AI all over everything and then raising their prices for this marvelous service while invalidating older hardware, that's fine by me.
> but as Blashki noted: "It's the difference between climbing a mountain and taking a helicopter to the top. Sure, you get the view either way, but one experience builds strength, resilience, and pride – the other is just a free ride."
Especially because we're not talking about smooth mountains with large flat surfaces suitable for landing.
No.
As soon as we go beyond the simplest of tasks, we're talking storm-, cloud- and lightning-covered hellholes that will smash any chopper getting too close to bits. These are mountains where the heli cannot help you, where you have only yourself, a sherpa, and your prayers to get you through.
God help you and whoever financed your expedition when you try to climb one of those when all your prior experience on easier mountains was to board a helicopter.
If you gentlemen will now excuse me, I'm gonna set up camp for the night and brew some tea, for tomorrow I am going after an especially elusive yeti variant called "sudden-inexplicable-packet-loss".
> As previously announced, unauthenticated netizens using the service will be limited to 10 image pulls per hour, per IPv4 address or IPv6 /64 subnet. According to Oro, that only limits about seven percent of Docker users.
Please, do explain what is being enshittified here?
10 pulls per hour is still an insanely high amount to get for free, considering the size and thus bandwidth required for some images. What do people do to ever get close to that limit? Disable their caches and let their build-pipelines run on loop?
Even most businesses will probably not exceed that limit unless they are doing something very very very wrong in their CI/CD pipelines.
And as for businesses that DO exceed that limit, well, there is no such thing as a free lunch.
And BTW, if for some inexplicable reason someone really needs to pull fresh for every build: running your own Docker registry is really, really easy. Why, there is even a Docker image for it.
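To put hypothetical numbers on how hard it is to hit that limit with a working cache (every figure below is invented purely for illustration):

```python
# Back-of-the-envelope estimate: Docker Hub pulls per hour for a busy
# CI setup that uses a warm cache or registry mirror.
# All numbers are made up for illustration.
builds_per_hour = 30    # a fairly busy pipeline
images_per_build = 3    # e.g. base image, test runner, linter
cache_miss_rate = 0.05  # a warm cache serves 95% of requests locally

pulls_per_hour = builds_per_hour * images_per_build * cache_miss_rate
print(round(pulls_per_hour, 1))  # 4.5 -- under half the free allowance
```

With any sane caching, even a busy team stays comfortably inside the limit; only a pipeline that pulls fresh on every build blows through it.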
> Why not just exploit the temperature difference between the surface and deep in the earth's crust? Maybe use thermoelectric generators, rather than turbines.
Marvelous idea.
Small problem though: the efficiency of a TEG is between 5 and 8%. That of a steam turbine generator is between 44 and 49%. So, what else you got?
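Using the midpoints of those quoted efficiency ranges, the gap is easy to quantify (a rough illustration, not a plant design):

```python
# Heat input needed for 1 MW of electrical output, comparing a
# thermoelectric generator (TEG) with a steam turbine generator,
# using the midpoints of the efficiency ranges quoted above.
eta_teg = 0.065    # midpoint of 5-8%
eta_steam = 0.465  # midpoint of 44-49%
target_mw = 1.0    # desired electrical output

heat_teg_mw = target_mw / eta_teg      # ~15.4 MW of heat
heat_steam_mw = target_mw / eta_steam  # ~2.15 MW of heat
print(round(heat_teg_mw / heat_steam_mw, 1))  # a TEG needs ~7x the heat
```

Seven times the boreholes, heat exchangers and pumping for the same output is why nobody builds grid-scale TEG plants.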
> Distribution cables to move it to the dark side of the globe could well be cheaper than thousands of fusion power plants.
First of all, you don't need "thousands of plants". France, which is a really big country, has 18 nuclear power plants, housing over 50 fission reactors, which are much weaker than fusion reactors would be. It covers more than 70% of its electricity requirements from those 18 sites.
And now to why this "global cabling" thing doesn't work...
a) Cables have electrical resistance. If you run cabling halfway around the globe, a huge share of the power is lost before it reaches the other side. Years ago, planning was underway for giant solar farms in the Sahara desert to cover the EU's power usage. The cabling was one of the major reasons why those projects went nowhere.
b) No nation on earth would make its nightly energy budget dependent on the goodwill of some country on the other side of the earth
c) How is the "sunny side" supposed to manage its own energy budget if it has to provide power for the other side at the same time. Solar already cannot provide enough power to cover the basic usage (which is precisely why we have nuclear power plants, and coal, and gas, and hydro, etc.)
> How about putting the same research effort into drastically reducing our energy needs?
Again, this research is already being done. We are developing more energy efficient tech all the time.
But we are also getting more humans every year, and more tech every year. A technological civilization, unless it is in decline, will ALWAYS grow its power demands; that's the basis of the Kardashev scale.
And what is your point exactly?
You complained that we still use phase transitions of water to generate power. Okay.
Question 1: What exactly is the problem with that method in your opinion?
Question 2: Do you have a better approach?
Question 3: What makes you believe that scientists are not doing this research already? Just because you haven't seen any results yet doesn't mean the research isn't being done. And that, my friend, isn't scientists' fault, but rather the idiotic process of science communication that society imposed on itself due to, *drumroll*, capitalism! Because NEGATIVE results, even though scientifically speaking they are easily as important as positive ones, don't get people published, don't result in funding, don't result in jobs.
> and as a result have shouldered the burden of keeping Rust bindings separated
How nice of them. Too bad that's not how kernel development works.
https://docs.kernel.org/process/handling-regressions.html
One of the core rules of Linux Kernel development is "We do not cause regressions.". Also known as "you break it, you fix it".
And this is hard enough as it is when the kernel uses only one language. Now we have an ENTIRELY DIFFERENT language. This new language now writes stuff that relies on existing kernel code. This new language also requires APIs to be built to accommodate it. Guess what, these APIs can now break when the upstream code changes, meaning suddenly maintainers of purely C-based components need to invest time and effort to maintain that API.
And refusing to do that, is not "downright obstructive".
It's what happens when people, many of whom invest their free time, don't want to have their own job made harder by an addition they have nothing to do with.
Well, since ASR is basically a sequence-to-sequence task, nowadays mostly done by transformers, which are an ML model architecture... yes, speech recognition is very much AI.
And the above post also isn't about the speech recognition itself, but about the summarization of an entire meeting's worth of transcribed audio into minutes, which is a summarization task, and that is definitely done using LLMs these days.
> Pseudo-AI is useless in customer-facing jobs
A lot of customer service time is spent cosplaying as an FAQ system, basically reading back to customers what they could have found themselves if they'd bothered to open the company website and locate the "Questions" sub-page. LLM-based systems can take a lot of this boring work off service agents, allowing them to reduce waiting times for customers who actually require their assistance.
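As a toy illustration of that deflection idea (every question and answer below is invented, and a real system would use embeddings or an LLM rather than plain string similarity):

```python
# Toy FAQ deflection: answer a query from a canned FAQ if it is close
# enough to a known question, otherwise hand it off to a human agent.
# All questions and answers here are invented examples.
import difflib

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where can i find my invoice": "Invoices are under Account > Billing.",
    "how do i cancel my subscription": "Go to Account > Subscription > Cancel.",
}

def answer(query, cutoff=0.6):
    """Return a canned answer for a near-match, or None to escalate."""
    hits = difflib.get_close_matches(query.lower().rstrip("?"), FAQ, n=1, cutoff=cutoff)
    return FAQ[hits[0]] if hits else None

print(answer("How do I reset my password?"))  # matched -> canned answer
print(answer("My datacenter is on fire!"))    # None -> route to a human
```

The interesting design decision is the escalation path: anything the matcher isn't confident about goes straight to a person, which is exactly the "free the agents for the hard cases" argument above.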
Excuse me...what?
Fusion Power?
You mean the energy source that has been researched since the '50s? For which we are still building the first few PURELY EXPERIMENTAL reactors, which are glad if they can maintain a plasma for more than a few seconds? Never mind actually producing power?
Is that the "fusion power" we are talking about here?
> AI tools, the team suggests, should incorporate mechanisms to support long-term skill development and encourage users to engage in reflective thinking when interacting with AI-generated outputs.
So we should teach AI tools to teach people to think?
Am I the only one who sees an unnecessary middle-man in this configuration?
> Users are now receiving notifications regarding their Microsoft 365 subscriptions and must take action if they wish to avoid Copilot and its extra charges.
Luckily, I took that action many, many moons ago. It's called "using Linux".
Oh, and I still get to use LLMs when I want, including virtual agents. The difference is: they run on my machine, they run when and for as long as I tell them to, they run open-source weights, their RAG storage lives on my local fileserver, I know exactly how it all works, and it's integrated into exactly the systems I want it integrated into.