* Posts by containerizer

60 publicly visible posts • joined 23 Jun 2023


Linux royalty backs adoption of Rust for kernel code, says its rise is inevitable

containerizer

Re: Veteran C and C++ programmers, however, are understandably worried...

> From what I read, Rust is a far cry from a Java-like garbage collector

Rust is a programming language, not a memory allocation technique.

Nothing stops someone from implementing a garbage collector in Rust, or for that matter in C or C++. Horses for courses.

> which essentially allows memory to leak until you run out of memory then makes everything wait while it tidies up

That's not fair. Modern GC implementations do a lot of clever stuff in the background. They can't reduce the delay to zero, but you can tune high-volume applications so that the delays are measured in single-digit milliseconds.

It's a skillset in itself to understand GC and learn how to measure and tune its behaviour, so I won't downplay the fact that it's overhead. But there's a different skillset needed for managing memory in C. Assuming that you don't have bugs that leak memory, you still need to consider things like memory fragmentation. The memory allocation routines are not instantaneous - they do have to spend CPU cycles looking for blocks of memory in the heap that match the size you requested. Linux and friends let you sidestep some of this by allowing essentially unbounded heaps and falling back on the virtual memory subsystem to deal with it, which works well but is wasteful.

> On the other hand, Rust fans seem to sweep bounds checking (that is also an aspect of Rust) under the rug or insist that everyone else does it and you should just get with the program.

I've no idea what this means, but Rust does enforce bounds checking - at compile time where the compiler can prove the index is valid, and with a runtime check (a panic rather than a silent overrun) otherwise - so I suspect your research is deficient.

> I will say that multiple layers shouldn't exist in a driver that is written for performance. Calling a subroutine imposes a huge performance penalty.

Rust doesn't add extra "subroutine calls" (and since when was calling a subroutine a huge performance penalty ?). The ownership and borrow checking is done at compile time, so it has no runtime cost.
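To make the bounds-checking point concrete, here's a rough sketch of what it looks like in practice (plain Rust, nothing kernel-specific, names invented for illustration) :

    fn main() {
        let v = vec![10, 20, 30];
        let i = 5; // deliberately out of range

        // Writing v[i] here would not silently read adjacent memory as C
        // would; it panics at runtime if the index can't be proven valid.
        // The non-panicking form makes the check explicit:
        match v.get(i) {
            Some(x) => println!("got {x}"),
            None => println!("index {i} is out of bounds"),
        }
    }

For a fixed-size array indexed by a constant the compiler refuses to build it at all; for everything else the check is emitted at runtime and frequently optimised away.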

containerizer

Re: re: Rust may be modern, but so is Windows 11…

> The formal language definition is a key design deliverable for any “well designed” programming language.

No true Scotsman ..

'Maybe the problem is you' ... Linus Torvalds wades into Linux kernel Rust driver drama

containerizer

Re: Not a trivial problem.

> If Rust was seamlessly able to work with C's ABI, adding Rust code would not be difficult.

Pedantic nitpick : Rust code does seamlessly work with the C ABI.

The problem is the kernel API, or more specifically, the fact that it very deliberately does not have a stable one.

If you try to extend C to make it give the sort of guarantees that Rust has, you would end up with something that looks like Rust and has the same integration issues.

containerizer

Re: Fair comment by Linus

It does.

If someone submits a change that breaks another component, they are expected to include fixes as part of their submission.

If the "other component" they break is something to do with Rust, then they need to know Rust in order to fix it. In other words, kernel developers would be required to master two languages, not just one.

Remember that a lot of devs are volunteers. They're doing this because they enjoy it. Forcing things on them that they don't see the point of is a good way to get them to leave.

containerizer

Re: Fair comment by Linus

The problem with this is that there is no stable "C API" within the kernel. There never has been.

when kernel developers make changes to subsystems that break other things, they are expected to update those other things at the same time.

How does that approach work if the thing they break is the Rust layer ?

The solutions are limited :

- introduce a stable C API in the kernel. Most of the kernel developers won't tolerate that as it adds overhead.

- all kernel developers must learn Rust to avoid breaking the Rust layer. Ditto.

- accept that Rust will be broken from time to time. I don't think the Rust guys would accept this - they'd constantly be playing catchup.

Trump’s tariffs, cuts may well put tech in a chokehold, say analysts

containerizer

OK, so we've established that you lied about Trump not being invited and now you've pivoted to the idea that the party whose nomination he sought and won was "hostile".

> Look at how Moscow Mitch has done his level best to sabotage Trump.

Yup, refused to lead his party to convict him in the Senate following his impeachment, and cleared the way for his Supreme Court picks. Sabotage takes many forms, it seems.

> The everyday people are sick and tired of being ruled from on high by the billionaire bankster class, the life long politicians who are in their pockets and their cronies.

You're defending a billionaire who appointed several other billionaires to his administration. Is it too much to ask that you are at least consistent ?

containerizer

> Why would he sign a pledge to an organisation that is openly hostile to him ?

What's hostile about pledging to respect the outcome of a vote ?

containerizer

> after not being invited to the first 4 (cos the RNC didn't want him)

This is a flagrant misrepresentation. Trump refused to sign a pledge where he would support whichever candidate won the nomination. Why do you think he refused to agree to this simple condition ?

containerizer

> You mean like the debates he wasn't invited to?

No, I mean the debates he could have attended if he signed up to the conditions of the debate. Which he refused to do.

> The people voted in the various primaries and voted for Trump. This is democracy in action. If they wanted someone else they could have voted for someone else.

Thank you for explaining the concepts of democracy and the party nomination process to me, but I was already aware.

> Its not like the RNC just decided that Trump is the candidate and it doesn't matter what the party at large wanted

I didn't say the RNC decided anything. I said that Trump mounted a reverse takeover of the GOP and ended up dominating it to such an extent that he thumbed his nose at their own nomination process and still won.

> I mean, something like that would NEVER happen with the DNC now would it? *cough* Hillary *cough* Harris *cough*. Heaven forbid the party ruling elite just pick who they want!

I think you are confusing me with someone who is defending the Democrats.

containerizer

As things stand, neither party has this kind of coherent discipline. They don't have the same culture of whips/party line that we do. Heck they had to get George Clooney in to get Biden to step down.

I think it's because the US is so geographically distributed. It has also had, until recently, a political culture built around slowing down politicians and political institutions. In addition, the party's elected representatives have limited control over who gets nominated for the presidency, which is why Trump (historically a registered Democrat) was able to mount what was effectively a reverse takeover of the Republican Party. His takeover was so complete that he didn't even bother going through the motions of campaigning to be the presidential nominee.

If there is any justice, what follows will be a rout of the Republican Party which will lock them out of power for several decades. This happened in the 1930s following the Great Depression.

containerizer

Re: That's the point

> For some ridiculous reason banks have continued to give him loans (I have my guesses - evil).

After his string of casino bankruptcies, the banks stopped lending him money. That's why he had to go to the Russians. Eric Trump admitted this in an interview some time ago.

containerizer

Re: Shaking

what do you think Biden should have done ?

Mixing Rust and C in Linux likened to cancer by kernel maintainer

containerizer

Re: "it would suck"

I appreciate the lesson in structured computer organisation, but I am still struggling with whatever your point is.

If you have two languages, one is safe, one is less safe, and you run them both on the same hardware, isn't the safe one less likely to have bugs ?

(FWIW I've been programming embedded systems for > 20 years; I've written the odd value to a memory location in my time .. )

containerizer

Re: Operating systems always "unsafe"

> You have to develop your own safety for an operating system,

No, you don't. That's the whole point.

> C is perfectly safe as long as you know what you are doing.

This is true of a lot of things considered dangerous for humans.

> but there is no way to do all the things that have to be done and still have a compiler run the safety

Excepting memory-mapped hardware, have you got any examples ?

> Safety has to be hardware dependent.

No it doesn't. There are many widely used memory-safe programming languages.

containerizer

Re: Not magical thinking

They are static, but they are also things which are only necessary in a memory-unsafe language. You don't need address checking in a language that validates address accesses at compile time. You don't need undefined behaviour checking in a language that has no undefined behaviour.

The second sentence is effectively suggesting that Rust's capabilities are not proven to work. Given the widespread adoption and the fact that Linus Torvalds thinks it is worth at least trying it in the kernel, I think the burden of proof is upon you to show that all these very clever people are wrong about Rust, not anyone else.

containerizer

Re: Can have it both ways

>The whole reason people are using rust in the kernel is because it makes also makes some kernel code easier to maintain.

It makes some kernel code easier to maintain for some people, and makes life harder for others. It means that instead of having to learn one language, kernel devs have to learn two, because now any breaking changes they make could affect the Rust layer as well as the C codebase.

containerizer

Re: This is the scariest part of all this, IMO

> it's an announcement that he plans to sabotage someone else's work, not because it has a technical flaw, but because he doesn't like it.

This is a mischaracterisation of what Hellwig said.

He didn't say he didn't like Rust. He said that maintaining two codebases in the same kernel would be too difficult and would threaten the future of the kernel.

I don't think he should be disciplined for saying this.

containerizer

Re: Not magical thinking

> I dislike the syntax and the borrow checker adds not much to UBSan/ASan/Valgrind

You're comparing static to dynamic analysis. Valgrind/etc won't spot anything unless you have a test case that triggers the problem.
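Here's a rough sketch of the difference (hypothetical function, not from any real codebase) :

    // The function below is rejected by rustc on every build, whether or not
    // any test ever exercises this path. The C equivalent - returning a
    // pointer to a stack variable - compiles cleanly and only shows up under
    // Valgrind or ASan if some test actually calls it at runtime.
    //
    // fn dangling() -> &'static i32 {
    //     let x = 42;
    //     &x // compile error: cannot return a reference to a local variable
    // }

    fn main() {
        println!("the borrow checker runs before the program ever does");
    }

That's the whole point of static analysis : the class of bug is ruled out at build time, independent of test coverage.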

containerizer

Re: "it would suck"

> Software is the state of the hardware, and hardware has bugs.

Surely the idea is to have fewer bugs overall ?

Tiny Linux kernel tweak could cut datacenter power use by 30%, boffins say

containerizer

Re: Confused

> THAT is exactly the point I was making - it is the application that must make this decision. Not the kernel.

I don't understand. What decision ?

select/poll/epoll do not make any decision. They simply cause the application to block until certain conditions (eg arrival of data) arise. Whether or not that condition is detected via the arrival of an interrupt or via polling is hidden from the application.

containerizer

Re: Have They Measured the Whole Problem?

Slight nitpick on an excellent comment : enterprise NICs (and most of the good cheap ones) do DMA, so there is no issue with getting packets out of the NIC's internal buffers. This is usually set up as a ring buffer and you can usually configure the size. Of course, you absolutely can get issues if you fail to service the ring buffer quickly enough.

Avoiding dropped packets then becomes a case of tuning the ring buffer size and the parameters which flip between interrupt and polling mode.

No, you definitely do not want PREEMPT_RT in most enterprise settings where maximising workload per watt is prioritised. PREEMPT_RT significantly reduces throughput in order to improve latency. This is rarely what people want.

containerizer

Re: Confused

Applications indicate that they are waiting for more data by calling select()/poll()/epoll(). These mean "stop me, then wake me up when more data is available". The details of how the kernel discovers whether or not data is available are hidden, so the application doesn't need to care; it just needs to use the API correctly. Note that the poll() and epoll() calls are not a direction to the kernel to use polling instead of interrupts.

Imagine it's a web browser waiting for a response from a server. Simplified, the application opens a socket and sends data to the server and waits for the response by calling poll() on the socket.

When that happens, the kernel kicks in. If there is no data on the socket, the kernel won't reschedule the application. Later, the server responds - an interrupt occurs, the kernel reads the data and then places it in the application's socket buffer. The application is then rescheduled and can immediately read the data as soon as it is resumed.

In the alternative world with polling, the application does not change. Instead of waiting for an interrupt, a timer inside the kernel periodically wakes up and checks for data. If data is there, the kernel follows the same series of steps that it did when it got an interrupt.

Some network cards, typically the higher-end ones, already effectively support this with a feature called "interrupt coalescing" where they will wait for their buffers to fill up, or for a timer to expire, before notifying the CPU.

The approach mentioned in the article is likely to be beneficial in high throughput scenarios, but not all. There is a crossover point; if your I/O is frequent and regular, polling is more efficient than interrupts due to the extra interrupt servicing overhead. If the I/O is more patchy, polling may waste CPU cycles through doing polling work which rarely finds available data, and it may also introduce latency as a packet will have to wait for the next polling interval before being serviced. According to the article they're adopting a hybrid approach to switch between polling and interrupts depending on the conditions, which is clever. Tune that right and I'd say this feature will end up being enabled by default for most deployments.

Interrupts are great if your I/O comes and goes at random, relatively infrequent intervals, which might be the case for compute-bound workloads.
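To illustrate the "application does not have to change" point, here's a minimal sketch in Rust (the kernel side of this thread is obviously C, but keeping one language for the examples; std doesn't expose poll() directly, so this uses a plain blocking read, which goes through the same wait-and-wake machinery, and example.com is just a placeholder host) :

    use std::io::{Read, Write};
    use std::net::TcpStream;

    fn main() -> std::io::Result<()> {
        let mut sock = TcpStream::connect("example.com:80")?;
        sock.write_all(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")?;

        // This read blocks: the kernel parks the thread until data arrives
        // on the socket. Whether the kernel learned of that data from an
        // interrupt, from interrupt coalescing on the NIC, or from a polling
        // timer is invisible here - the application code is identical in all
        // three cases.
        let mut buf = [0u8; 1024];
        let n = sock.read(&mut buf)?;
        println!("got {n} bytes");
        Ok(())
    }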

Fedora Asahi Remix 41 for Apple Macs is out

containerizer

Re: Mmmmmm

the cores may or may not be the same, but it probably does not matter given that they are implemented to comply with an ARM spec.

I believe the other hardware inside the SoC does change around. All the audio, video, network/wifi drivers, graphics etc are on-chip and I imagine they're continuously updated.

Public developer spats put bcachefs at risk in Linux

containerizer

Re: Are we reaching a monolithic limit?

> But they are trying to do something no other Unix project has ever done: the aim is that you can stick a HAMMER2 volume on a shared connection and multiple independent OS instances can all mount it at once.

It sounds like you're describing a cluster filesystem. This has been done many times in the UNIX world - Veritas, GFS[2], GlusterFS etc.

Fedora 41: A vast assortment, but there's something for everyone

containerizer

The IBM POWER workstation/server line (formerly known as RS/6000) would be my guess ..

The US government wants developers to stop using C and C++

containerizer

Re: It's not the language, it's just the way it's "talking"

> The fact that rust has this "unsafe" directive (or what ever it is called) means that the language designers absolutely know that the language cannot do what people want to do with in in a memory-safe manner.

I can think of very few cases where this kind of operation would be necessary. The one obvious one is where you have to manipulate memory that isn't really memory, such as when you're accessing memory-mapped hardware within a kernel, or certain other low-level operations where something else is handling memory for you.

This does not mean the language is bad. At the very least it means that unusual memory usage which cannot be tracked by the compiler is explicitly marked and auditable, and the compiler can emit diagnostic warnings when such blocks are used (or forbid their use outright). In C there is no standard way to do this.

You cannot build a programming language which is foolproof in all scenarios. You can, however, build one which minimises bugs caused by human error. I think that's arguably a win over a language which pointedly does not.
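For anyone who hasn't seen what this looks like, a rough sketch of the memory-mapped hardware case (the register address is purely illustrative - real ones come from the device's datasheet or the device tree, and poking this one from a normal process would just fault) :

    use core::ptr;

    // Hypothetical status register address, for illustration only.
    const STATUS_REG: usize = 0x4000_0000;

    fn read_status() -> u32 {
        // The compiler can't reason about memory that is really a hardware
        // register, so the access has to be wrapped in `unsafe`. The escape
        // hatch is explicit and greppable, which is what makes it auditable;
        // in C every pointer dereference looks the same.
        unsafe { ptr::read_volatile(STATUS_REG as *const u32) }
    }

    fn main() {
        // Not actually called - this is a sketch of driver-style code, not
        // something to run on a desktop.
        let _ = read_status;
    }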

containerizer

Re: It's not the language, it's just the way it's "talking"

> The precise issue is that C the language is not "memory unsafe" because it doesn't do memory management, the libraries and apps do that.

I am sorry, but you are very wrong.

Managing the stack is a form of memory management and the C language does this. If you declare a variable or an array etc on the stack, or take a pointer to it or dereference it, the compiler generates code for this purpose. Off-by-one errors and other stack-related bugs are a very common cause of instability or security bugs. That's a fundamental feature of C.

The use of malloc() and free() is discussed in the original K&R book, and they have been part of the language standard since ANSI C89 (and every ISO C spec since, including C99). They may not be operations that are directly converted into machine code by the compiler, but that doesn't mean you can get away with saying they are not part of the language.

If you want to talk about losing credibility, scoring pedantic points by trying to suggest that the language and the support library which forms part of the language specification are not intrinsically linked seems like a good way to do that.

It's about time Intel, AMD dropped x86 games and turned to the real threat

containerizer

It would take several pages to explain in detail all the different ways where you are utterly wrong. But in summary :

- I really do not know where to start with the notion that only CISC CPUs can perform "complex mathematical and algebraic operations" with "extreme precision and efficiency". SPARC, MIPS and PowerPC have a lengthy track record here, being used for CGI in films, by the oil & gas industry, the financial services sector etc etc.

- A RISC is not inherently a "low power part for small form factor devices". The earliest RISC CPUs were used to build servers, workstations and mainframes. IBM dominated enterprise computing with the RS/6000 workstation, and its AS/400 midrange line ended up implementing its CISC instruction set on top of PowerPC-based (RS64) processors.

- CISC does not mean "able to do complicated things". CISC means "I have a complicated instruction set whose instructions may take several clock cycles to execute and which you may never use".

- I have no idea why you think the inability to emulate another instruction set at full speed rules out an architecture as being viable.

- the Motorola 68K and Itanium are not RISC architectures. 68K is "dead" because it can't run Windows, and Itanium was simply a poor design.

I remember life 25-30 years ago. Nobody in their right mind would have deployed x86 in the enterprise server space, it simply was not done. Every RISC CPU wiped the floor with x86 at the time. They lost because x86 was cheaper and could run Windows, and Intel were eventually able to hotrod their rubbish architecture to make it run fast.

These days, the CISC vs RISC thing does not matter. It was important in the 1980s/90s when chip real estate was at a premium, and RISC could use the space vacated by complex instructions to make simple instructions run much faster. Nowadays, everything including x86 is implemented on a RISC core with the higher level CISC instructions microcoded.

Upgrading Linux with Rust looks like a new challenge. It's one of our oldest

containerizer

Re: Why a new language?

I don't recognise the idea that memory safety only recently became important. People have been banging on about C's limitations in this regard since the language first became available.

It's not like the industry has been frozen in aspic. Outside of specialist fields (kernels/device drivers, low-latency software) C/C++ have been replaced with Java, C-Sharp and Python, and Go is making some inroads in the systems programming side.

The remaining software which continues to be in C is that which cannot be easily migrated to any of these languages. For most cases, the cost of continuing to use C is lower than the cost of replacing codebases with Rust. The workarounds for C - static analysis, stack smashing protection tools etc - are deemed "good enough" most of the time.

I can well understand why some kernel developers might see that all of this is a solution in search of a problem. And the Rust crew aren't the first to make this case - there have been attempts to push C++ on the kernel in past years too, and the same kind of reaction when it was blocked - that the devs are a bunch of luddites trying to hold back advancement. It's not that simple.

If the world really is right, and the kernel devs are wrong, then this problem will solve itself in a different way : people will write a from-scratch, Rust-only kernel which is compatible with Linux at the system call layer and can therefore be dropped in to existing distributions. If that idea seems too far-fetched, then that is telling us something about the cost/benefit of using Rust.

containerizer

Re: Why a new language?

The whole point of using C is the compactness and efficiency of the code. If you start bolting stuff like this on, it defeats the purpose. And unit testing can't give you what compile-time memory tracking gives you.

containerizer

Re: Why a new language?

> In my experience, Rust code is rarely shorter or easier to write than its C counterpart. It takes a lot of discipline to write anything sufficiently worthwhile in it.

This statement is probably not true if the definition of "sufficiently worthwhile" includes a memory safety guarantee.

containerizer

Re: Why a new language?

> Now had the issue of memory safety, etc, in C (or the lack thereof) been addressed by creating a sub-set of C and/or a few features to mitigate some issues like use-after-free then it would have been fairly trivial to do this

"Come on boffins, get of your backsides and make C safe!"

Google says replacing C/C++ in firmware with Rust is easy

containerizer

Re: Wanna give some examples?

> There is no downside to it other than having to learn Rust, some toolchain issues in some embedded environments, and of course shaking your cane at everyone on your lawn.

But two of the three mentioned are pretty big issues.

C/C++ compilers have had decades of refinement behind them, work everywhere, and loads of people know how to program in them competently. Even if you managed to persuade people of the technical merits of Rust, the inertia is always going to be there.

With respect, saying "some toolchain issues" is a bit flippant. It looks like only x86 and ARM-64 are well supported ("tier 1"). Many embedded platforms won't have that hardware - I'd expect ARM32 to remain popular for a while yet. Other RISC architectures seem to be withering on the vine - MIPS seems to be dead, although I'd expect there's still a lot of PowerPC in telecoms-focussed SoCs. And SPARC/s390x remain small, but important in key enterprise markets.

A lot of that problem would go away if they'd switch the focus to adding a GCC frontend, where almost every other major language and target architecture is supported.

Raspberry Pi 4 bugs throw wrench in the works for Fedora 41

containerizer

Re: WTF ?

> And the reason for that is - HomeAssistant *hates* any other OS

Even Home Assistant OS ?

Red Hat middleware takes a back seat in strategic shuffle

containerizer

Most of this is .. actually quite sensible

Don't want to be the "nothing to see here" guy, and layoffs are never good. But some of this actually makes sense.

RH have a long history of maintaining their own builds of things, for example the JDK or Spring, where they add little in the way of extra capability. This made sense back in the days of yore, when community projects tended to be less concerned with LTS builds. RH added value by maintaining such builds.

OSS projects seem, in my perception, to have become much more mature of late, often maintaining their own LTS builds (corporate sponsorship plays its role here). Inevitably, rolling one's own build therefore accomplishes less.

I think it was a couple of years ago that RH announced that the Temurin JDK build would be fully supported on Openshift, for example. There's a win-win thing here; RH backs, and presumably helps fund, a community-led build, so they don't need to have a separate team themselves. The community gets the sponsorship.

Debian preps ground to drop 32-bit x86 as separate edition

containerizer

Re: Good thing too

I'm sure there are embedded projects crazy enough to (a) use an x86 and (b) use a Debian distro for their platform, but I'm going to guess they're probably few and far between!

containerizer

Re: It's our gift to you this Xmas

it's not the hard leap that's the problem, it's confusing people with newly invented terminology.

Anyway, I'm off to get a life now. Happy Christmas.

containerizer

"x86-32" huh ?

I can find no reference, anywhere, for the term x86-32. Did you guys just make it up ?

The 32-bit variant is pretty much universally known as simply x86, or sometimes ia32.

The 64-bit version has been variously branded amd64, intel 64, x86-64 or x64.

There's also a thing called x32, which was an attempt to have a hybrid. It runs on amd64 but limits itself to 32-bit pointers.

Introducing yet more nomenclature is not at all helpful.

Will anybody save Linux on Itanium? Absolutely not

containerizer

Re: Branch seems resasonable

You don't even need to do any branching. Just use kernel 6.6, which is going to be supported in LTS form for another 3+ years. Then you can branch it.

But of course branching should not be a problem, as these folks will presumably already have branched their own compilers and distributions all of which dropped support for this arch long before the kernel did ..

OpenELA flips Red Hat the bird with public release of Enterprise Linux source

containerizer

Re: Mean while Alma

> There isn't (now?) an arm version of RHEL9.

I doubt there ever will be. I think I read that some distributions have announced deprecation of 32-bit x86, never mind ARM.

containerizer

Re: Does not compute!

> Oracle and SUSE want customers who are willing to pay for support. The $$$$$$$ focus is behind this move.

The problem that Red Hat had, and which this new group will have, is the cost of building and maintaining the thing used by all the people who are not willing to pay for support.

containerizer

I don't think they will, any more than the Americans were unhappy when the Soviet Union invaded Afghanistan. If this group are serious, it means they're going to burn a lot of cash building a stable enterprise OS that they will then give away to people who are completely opposed to having to pay for it. They can't do that forever any more than Red Hat could.

If I were Red Hat, I'd be quite pleased. Go ahead, knock yourselves out burning cash and giving stuff to people who won't pay.

Oracle pours fuel all over Red Hat source code drama

containerizer

When you say "the community" who do you mean ? Taking the Linux kernel, from what I can tell it's pretty much other large corporations - AMD, Intel, Google, alongside IBM/RH. Strip out contributions made by those who are on their employer's clock and what have you left ? I'm not saying this to diminish the core contributions made by volunteers and enthusiasts, but proportionately, most of the work is done by corporate sponsorship.

I'll bet that "the community" who are really concerned with running a CentOS like OS for free are almost exclusively businesses who are annoyed at the idea they might have to pay for something that is of value to them (noting that academia is being caught in the crossfire). I'm sure there are exceptions to this, but why would the sort of enthusiasts who contribute to OSS routinely want to run a boring, enterprise focussed OS that is basically out of date on the day it is released ? I appreciate that for many people there is a principle at stake here, but how many enthusiasts and volunteers are really effected by this ?

Corporations around the world, many of them not directly involved in IT services, are making money off the back of open source contributors all the time. They contribute nothing back to the community - they're not asked to do so - but sell their products and services using open source frameworks. Across the world, businesses use tools such as the Linux OS, databases, editors, compilers and drivers to generate what must be $trillions in revenue without anyone ever complaining that they're making money on the backs of volunteers. Then Red Hat come along, contribute a ton of stuff upstream, ask for payment for their stabilised, boring downstream distribution, and suddenly they're worse than Genghis Khan.

Go figure, as they say.

containerizer

Re: Opensolaris anyone? @containerizer

wasn't the "cumbersome LPAR system" basically a port of the long-established virtualization tech from their mainframe line ? I can see why it would have made sense if their enterprise customers loved it (which they did/do).

containerizer

Re: Opensolaris anyone?

I disagree. The market killed SPARC, along with MIPS and POWER. I think that leaves IBM as the only vendor selling enterprise IT platforms based off a proprietary architecture (which I understand is basically a heavily modified POWER arch under the hood?). We'll see how long that lasts ..

containerizer

Re: OpenSolaris had a better X11

Hmm. "it worked well with my graphics card" is great and all, but I rather suspect that Linux at that time worked with a far wider range of hardware!

Much as I'd happy to assign blame to Oracle, it was all over before they came along. The Solaris engineering team stiffly resisted licensing the code under the GPL, forcing them to come up with this CDDL daftness. I also don't think it's Oracle's fault that Sun screwed up the x86 side of things. As for X12, I am sure it would have ended up looking something like Wayland; if the issue is community leadership I am not sure Sun's involvement would have improved matters much ..

containerizer

Re: Opensolaris anyone?

Can't dispute the truth of any of that, it's a top notch enterprise OS. In my first job we used Solaris on SPARC and it was utterly unbreakable - but expensive. A big part of that, alongside the very sound design, was tight integration between the hardware and the software. But when we got to the point where Linux distributions were being certified on x86 servers, it was all over.

Agree completely with your second paragraph. They could have saved both Solaris and SPARC by supporting x86 as a gateway drug.

Bosch goes all-in on hydrogen with €2.5B investment by 2026

containerizer

Re: I was expecting

Er, the government have been going pretty hard on backing hydrogen lately.

I don't understand why anyone could think that a hydrogen boiler is better than a heat pump. If the goal is to end dependency on gas and produce clean energy, the only way to produce hydrogen right now is via electrolysis of water. If you've already got electricity available, why not use it directly instead of losing energy by converting it to a dangerous, invisible, explosive gas ?

Rocky Linux details the loopholes that will help its RHEL rebuild live on

containerizer

Re: making them Red Hat customers, at least briefly

That's unambiguously illegal under the GPL. Anyone who gets the binary must be allowed to get the source.

Fedora Project mulls 'privacy preserving' usage telemetry

containerizer

Re: "Fedora Workstation the premier developer platform for cloud software development."

declaration : I'm probably a critical RHEL fanboy, I think they do a lot more good than harm.

What's the prob with Fedora ? Been using it in anger a couple of years now. It's come a hell of a long way since I first tried it - back in the days of Fedora 8 or 9 (can't remember, must've been well over a decade ago). Just a few months back, I installed F37 on my Thinkpad X1. Runs buttery smooth, not a single hitch beyond a little fiddling needed with bluetooth. The usual handy dev tools are all pre-installed, or are a dnf install away. It's an excellent platform for software development, noticeably slicker than working with Windows on the same hardware. I was particularly impressed when I did an online upgrade to F38. Not a single beat skipped.

I've messed with Ubuntu - not very recently but a few years back. I think it's intended to make Linux accessible to novices and is fairly successful in that. But it doesn't feel like it is intended for power users; tries to hide too much from you. It's possible that opinion is out of date.

As to the decision to stop maintaining LibreOffice - can't say I noticed. Last time I tried it, it was rubbish. I use Google docs for all my personal stuff - limited, but typically sufficient. For work, MS Office is a staple. Maybe there's a big, active community of LibreOffice users out there I've not heard of ..
