What's the point of a distro if it isn't to abstract things like that?
It's too hard for us, but don't worry, here's some documentation.
Fedora 34, a feature-packed new release of Red Hat's leading-edge Linux distribution, was released today, though the main Java package maintainer has quit, urging "affected maintainers to drop dependencies on Java." Fedora 34 is used by Red Hat to try out new features that are likely to end up first in CentOS Stream and then …
It is a nightmare, what everybody is using as a "package manager" when none of them deserve the title. You have the Python universe, the PHP universe, the JavaScript/Node universe. Then you have Ruby's way, and another, and another. Let's not talk about the "where must it be installed" dependencies, the "I cannot move it to another directory" problem, or the rule that only one single version of package X can be on a machine or else doom happens.
A nightmare.
One thing most Linux distros previously did well was central package management. That is a very important feature, and it has been consistently undermined by the WCDIB(*) crew because they often cannot see beyond their own little universe. Sigh.
(*) WCDIB: We Can Do It Better
I agree with this. It could be to do with the fact that these languages all need many bindings against C (which isn't just a language, it is the *entire* computing platform).
So, because of all the binding dependencies, the individual language developers come up with their own solution to generate them (and automate the system C compiler to hide this messy business from the user). And then, because this machinery is in place, they start to leverage it more and more, until it is just a dependency-collection cluster fsck.
I don't think not using rpm for Java libraries will affect anyone much.
I have never seen a Java app that uses that.
In theory you can, but because it's nobody's primary use case, I am not surprised package maintainers care little about rpm builds in Fedora compared to the versions in Maven Central.
Java has always been, and always will be, resource intensive: a bit of HD space in ./libs is the least of your worries.
With Python I have not had a problem, ever since I started spinning up throwaway virtual environments. Expecting the OS packagers to maintain all sorts of esoteric packages seems harsh.
Yes, I am told it has problems, but my fairly naive use seems to pose none.
Could never wrap my head around Java, though.
After a few 1TB venvs for a bunch of simple apps, you might start to question the logic of this deployment method. Not to mention the logistics of updating package X in every venv when you find out there's a vulnerability that needs patching.
I guess it's better than a docker container for /bin/true, but it's still pretty inefficient.
For your little local problem and throwaway environment, well, yes, you may get away with it. In a production environment you will pay, because you are accumulating tech debt at a high rate.
The real problem is when you are doing a large-scale project that is supposed to be flexible, maintainable, buildable and runnable on a diverse set of environments. It is better to look for a supportable solution from vetted system and stable repositories than to take the "oh, that is a nice library" approach, often duplicating something in the process.
The next problem is the concept of code/data/config separation. Supporting multiple instances of something requires a lot of thought. That is something the (real) system packagers normally support and think about, whereas many of the other package systems thrive on duplication. Yes, they say, storage and memory are cheap... but good design is always cheaper and better in the long run. But then you'd have to plan for more than a hacked-together application.
To be fair, Java developers are very similar to Python, Perl, JavaScript and Rust developers. They drag in far too many dependencies for trivial things.
They rack up technical debt that simply isn't worth the effort. It seems to be a concerning culture for "non-C" languages.
And I personally blame the language-based package managers: CPAN, Crates.io, NPM, etc. They all add a big mess to projects.
The fact that so many issues can be shaken out at compile time in Java, and that compiling is fast, is a big factor. You can break APIs freely in Java if you know the consequences will be compile-time errors or caught by JUnit tests.
As a Java dev I do that all the time with my own code. Eclipse supports this as a development technique; you can sometimes break an API and fix all the use sites in a single operation. IntelliJ is not as good at it, but you're never more than one compile run away from highlighting what needs to be done.
C requires more diligence in maintaining API compatibility.
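A minimal sketch of the technique (all names here are invented for illustration, not from any real project): change a signature on purpose, recompile, and let javac enumerate every stale call site.

```java
public class ApiBreakDemo {

    // v1 of the API took a raw String: static String render(String template)
    // v2 deliberately switches to a small wrapper type:
    static final class Template {
        final String text;
        Template(String text) { this.text = text; }
    }

    static String render(Template template) {
        return template.text.toUpperCase();
    }

    public static void main(String[] args) {
        // A stale v1 call site no longer compiles:
        // render("hello");  // error: incompatible types: String cannot be converted to Template
        System.out.println(render(new Template("hello")));
    }
}
```

The IDE refactorings mentioned above automate the fix-ups, but even plain javac hands you the complete list of breakages up front.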
I'll chuck R in as well (although they are working on a way of pulling out dependencies required for a given script as a snapshot).
Icon: It's 2037 and Mildred is doing her PhD, trying to make sense of a mix of Python and R scripts used to process data back in the '20s...
For R you can use Microsoft R Open, which they finally updated to v4 after looking like they'd abandoned it at v3.5, and make use of MRAN (their CRAN snapshot). This ensures that anyone using version X of Microsoft R Open will get the same version of package Y when they install it.
Obviously that's all here and now, and not much use in 20 years' time after MS have abandoned it. Not sure Docker would be any better in that regard.
You could also use RStudio Package Manager, which supports snapshots.
"the first time since GNOME 3.0 came out that there's a real rethinking of the basic desktop experience."
As an onlooker from a safe distance I wonder to what effect? That last real rethink prompted two new desktop projects to reinstate the original desktop experience. Would it actually be something that would lure me away from KDE?
At least in the Linux/BSD world we don't have to just get on with what a vendor chooses to inflict on us.
Did you ever use KDE 3.5?
If not, it is likely you would much prefer that to the current state of KDE.
Every open-source desktop has regressed. It is actually becoming a real problem.
The fact that a single person can't maintain Xfce, compared with, say, CDE, also suggests that modern approaches to packaging and dependency management are broken.
And now no one dares risk starting a new replacement, because Wayland is still too young and Xorg is perceived to be too old. So we are at an absolute stalemate until Wayland finally disappears.
"If not, it is likely you would much prefer that to the current state of KDE."
To the first question, yes.
The second is more complex. The one thing I missed in 4 and still miss in 5 is the ability to confine the unhiding of an auto-hidden panel to a corner rather than to the whole edge. And 5 certainly wasn't ready for the big time when it was incorporated into Debian, and via that into Devuan (and it wasn't an LTS version either). The current version (as in Mint), however, seems fine.
An exception is that Gwenview seems to have acquired some misfeature that I take to be an attempt to respond to gestures; when scrolling through an image it will suddenly decide that what I really intended was to switch to the next image, even though I hadn't used the specific button provided for that. However, that hasn't been enough to prompt me to look very hard to see if it can be turned off. On the whole it's still a better option for me than, say, Cinnamon, which would be my second choice. And "choice" really is the relevant word here.
* No top bar.
* No hot corners.
* No auto-maximise.
* No stupid thing on the side.
* No buttons in the window menu bar.
* No removing perfectly good menus with stupid random buttons everywhere (see: gedit).
...
* Make the slide-bar wider.
* Put the small arrows back on the slide-bar (have you tried viewing a 30,000 line terminal output with these stupid slide-bars?).
...
Do I need to go on? Why, when something has worked perfectly well for 20 years does some smart arse choose to fuck it up?
This isn't just a Linux thing--Apple has the same issue. And if you look closely, so does u$.
In my mind, the real issue is that every distribution has at least two user groups: the end users, and the developers. To the extent that these groups are identifiable, there will be differences, and these differences mean conflict.
Even better, to both of these groups the distribution is substrate. This came as a shock to me when I realized it at AMD. Here we were, developing astounding technology, and all the user cared about was their apps. Everything else--the OS and all of the hardware--is just overhead for the app. I still remember how angry I was when I switched over to Ubuntu and found out I had to install build-essential.
So end users care about their apps running. To ensure that, the OS the apps run on needs stability. Meanwhile, the developers of new apps want the new shiny. The only solution is that the devs end up with a system with up-to-date versions of their tools, and distribution-set versions of everything else.
The maze of rats' nests mentioned above is the inevitable result of this situation. Feel free to suggest a better way, but I'm not hopeful.
Former Gentoo Java lead here. I totally feel his pain as I burned out some years ago. It's even worse when you're trying to allow users to build the stuff from source. I'm familiar with many languages, including those that are known to be particularly troublesome for distros (like Rust and Ruby) but trust me, Java is the worst of all worlds.
One reason for this is its approach to optional dependencies. Take log4j 2, for instance. When I last looked, it had about two mandatory dependencies but tens of optional ones, most of which hardly anyone would care about. That's fine if you're grabbing the precompiled jars with Maven or Gradle or whatever. Grab just the ones you need. Or hey, just grab them all, it's only a few more KB to download. If you need to build from source though, as Fedora policy dictates, you're screwed. Although it's possible, no one uses a preprocessor with Java so all those dependencies that are optional at runtime suddenly become mandatory at build time. And guess what, those dependencies have more dependencies and so on and so on, and before you know it, you've had to package and build half the Internet. Maintaining a single distro package, particularly in Gentoo, carries significant overhead that just doesn't scale in the context of the Java ecosystem.
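To make that build-time/runtime asymmetry concrete, here's a hedged sketch: a hypothetical appender loosely modelled on the log4j 2 pattern, not its actual source.

```java
import java.util.Properties;

// Hard compile-time references to the "optional" Kafka client library:
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// At runtime this class is only loaded if a user actually configures a
// Kafka-backed appender, so kafka-clients.jar is genuinely optional then.
// javac has no such escape hatch: without kafka-clients.jar on the build
// classpath, this file fails to compile, and the whole source build with it.
public final class KafkaLogAppender {
    private final KafkaProducer<String, String> producer;

    public KafkaLogAppender(Properties config) {
        this.producer = new KafkaProducer<>(config);
    }

    public void append(String topic, String message) {
        producer.send(new ProducerRecord<>(topic, message));
    }
}
```

Multiply that by every optional integration a project like log4j 2 offers, then recurse into each of those libraries' own dependencies, and you arrive at packaging half the Internet.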
Gentoo is not as strict about building from source as Fedora is, so I considered just using precompiled jars where possible. You then have to ask what the point of packaging Java stuff is at all, though. There are some small benefits, but I didn't feel it was worth my time, so I moved on to other things. I'm now the Gentoo Games lead. That's much more fun!
Absolutely feel your pain here.
I've never - ever - used the libraries packaged by the OS in a project, for the same reason. Most API developers are extremely careless when it comes to backwards compatibility, which makes versioning a nightmare. I say this with some authority, as we've been shipping our own API for 20 years, so I'm familiar with my own errors from the past and with how I need to think when designing new features to ensure they can evolve.
If you're building on others' APIs, outsourcing the version management of those APIs to anyone else is just asking for trouble. And yes, I suspect it's particularly troublesome for Java, which sits in that awkward spot between "application failed to load because it couldn't link" and "language is dynamically typed, so I can duck-type around missing features".
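For what it's worth, when you control the API there are defensive patterns for exactly this. A minimal illustrative sketch (invented names) using Java's default methods to evolve an interface without breaking existing implementors:

```java
public interface Exporter {
    // Present since v1 of the API.
    void export(String record);

    // Added in v2 as a default method: implementations compiled against v1
    // inherit this body instead of becoming abstract (a compile-time break)
    // or failing to link when loaded against the new jar (a runtime break).
    default void exportAll(Iterable<String> records) {
        for (String record : records) {
            export(record);
        }
    }
}
```

Additive changes like this are what let an API evolve under its consumers' feet; the moment you remove or retype something, every downstream repackager inherits the problem.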
btrfs?
Hmm... I tried it years back; it seemed unstable and I suffered some data loss from it. I tried it again about a year ago, since people report it's nice and all that. I ran a deduplicator on some of my stored stuff, and compression on some of it. It would run fine as long as you had 100% uptime. Oh, you had a power cut or something? btrfs still has no strategy to recover from that kind of thing; it'll detect issues and go read-only, and if you're lucky enough not to lose access to any files at that point (I had files or directories just go at that point), it can tell you that a generation of stuff is corrupted... ok, it tells you the most recent generation. fsck doesn't help. Rolling back a generation fixes problems there, then it goes read-only again because the generation count doesn't line up somewhere else in the directory tree. Seriously, it was crap unless you have flawless hardware. rsync and VirtualBox both seemed to have remarkably poor performance on btrfs.
Plain ext4, never a problem -- worst case after a power-off is an empty file if you were in the middle of copying one over. But no deduplication, no compression.
s3ql lets you have a filesystem mount with deduplication and compression, with the actual data stored on your ext4 filesystem. I had a dodgy USB drive for a while, so I can tell you it's pretty fault tolerant. It has a proper fsck command that usually worked; once or twice it complained about the database being corrupt (which it backs up regularly, so you don't have a total loss if it's irrecoverable -- you just use one of the dozen or so backup copies). I was able to run a sqlite3 .recover on it, and after an fsck it had everything but whatever I'd copied in within the last minute or so (which it stuck in lost+found). The performance is quite good; I back up a bunch of junk into s3ql and can also run VirtualBox out of it (I doubt the .ova files shrink much since they're probably already compressed, but the live .vdis sure do).
Just more proof that ALSA has failed.
Why don't Linux people swallow their pride and finally implement OSS properly? It's the nearest thing to a standard Unix audio system out there and, implemented properly as in the BSDs, it doesn't require all sorts of sound-server hacks.