
Extra points for systemd making this damned easy.
Admins of the Ubuntu Store have pulled all apps from a developer who signed himself "Nicholas Tomb" and who, going by his e-mail signature, apparently wanted to crypto-mine himself into a Ferrari. Mr Tomb's "2048buntu" and "Hextris" applications are now absent from the store, with their removal sparked by a GitHub comment about the …
"Either systemd is evil because it makes managing services needlessly complicated. Or because it makes it too simple (like here). Choose one."
The two are not necessarily mutually exclusive. In this instance it appears to have made a particular type of abuse easier, but there are plenty of situations where it makes other things unnecessarily complicated. Recognising that doesn't demonstrate a lack of knowledge, rather the opposite.
If Ubuntu really wanted to make it easier to install packages, they'd drop -dev packages and put the -dev files into the main package. The historical reasons for separating out -dev files are long gone; yet as long as this situation persists, building from source code is unnecessarily complicated.
"The historical reasons for separating out -dev files are long gone"
Nope, they are not. See openssl 1.0 vs openssl 1.1 as a fine example. Ditto for any and every other case where you need version coexistence. Getting the libs to coexist is usually trivial - the dynamic loader loads the right ones if present. Getting the full development environment, headers and so on to coexist requires patching everything, which is just not worth it in 99% of cases.
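To put it concretely (a sketch on a recent-ish Debian/Ubuntu; exact package names vary by release, so check your own box with dpkg):

$ dpkg -l 'libssl1*'
# typically shows the 1.0 and 1.1 runtime libraries installed side by side, quite happily
$ sudo apt-get install libssl1.0-dev libssl-dev
# apt treats these as conflicting - both want to own /usr/include/openssl - so only one can be present at a time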
I know that. You know that. An inexperienced user doesn't know that.
If you just follow the typical installation instructions - make sure you have both foo and bar installed, then type ./configure - your build will probably fail for want of foo-dev and bar-dev. This gives people a terrible first impression of the process of compiling a package from source code. They end up thinking it's hard, and possibly even going back to a proprietary OS.
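The failure mode looks something like this (foo and bar are placeholders, obviously):

$ sudo apt-get install foo bar           # runtime libraries only; the headers live elsewhere
$ ./configure                            # fails with something like "checking for foo.h... no"
$ sudo apt-get install foo-dev bar-dev   # the step the README never mentions
$ ./configure && make && sudo make install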
"If Ubuntu really wanted to make it easier to install packages, they'd drop -dev packages and put the -dev files into the main package."
It would substantially increase the size of the installation, which could be a problem on resource-constrained (e.g. embedded) systems. The decision to separate them was made by Debian, not Ubuntu, who only follow Debian in this regard. There are other distributions, such as Slackware, to choose from if you want -dev files included by default.
Ubuntu is not a lightweight distro for resource-constrained systems. It's a shameless full-fat distro for laptops and desktops on the end of a broadband connection.
The fact is, the dynamic has changed since Debian was created. There are more non-specialist users out there. And while the Debian and Ubuntu people are pretty good with putting packages into APT, there will always be something you need that isn't there yet.
I think it's more reasonable to expect that anyone advanced enough to be building an embedded system will know enough about what they are doing to simply delete the files they don't need. It's still less of an inconvenience for an advanced user to have to do this than for a beginner to have to learn about -dev packages.
I suppose the "proper" way to do it would be to have an option in APT to always download -dev packages alongside the "main" package (and to fetch any missing -dev packages for already-installed software when you switch the option on).
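In the meantime, the closest thing APT already has is build-dep, which pulls in the headers and the rest of the build dependencies for anything that has a source package (you need deb-src lines enabled in /etc/apt/sources.list):

$ sudo apt-get build-dep somepackage     # 'somepackage' stands in for whatever you are rebuilding
$ apt-cache showsrc somepackage          # or inspect the Build-Depends field first

It does nothing for software that was never packaged, which is exactly the case being complained about above, but it spares you the foo-dev/bar-dev guessing game for everything else.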
He is at least limiting resources based on core count. I hit a simple web page one day and my fans started spooling up higher and higher by the second. That page took my browser's process from around 200MB to over 11GB.
Also, I'm with Jack of Shadows on this one. Ubuntu is bad enough*, but systemd is just ***** ******* *******.
*Never saw a Linux distribution swap before Ubuntu, let alone suck up memory like Windows. If you need a Windows-like OS, stick with Windows!
AZump said: "never saw a Linux distribution swap before Ubuntu, let alone suck up memory like Windows"
I'm more than a little sceptical about that claim. I'm writing this on an Ubuntu 16.04 desktop that has been doing software development and web browsing all day long. Amount of swap being used - zero. That is typical for a system that I use on a daily basis, and I see the amount of usage regularly as it is displayed incidentally to certain tests I run as part of software development.
About the only thing that pushes it into using swap is heavy memory usage from applications that are allocating a large proportion of available memory (e.g. very large arrays in the software I am working on). And that is exactly what happens in every other modern operating system since that is why swap exists in the first place.
If you want to make comments like that, I would suggest doing it on a non-IT related web site where you are less likely to run into people who actually use this stuff on a daily basis.
"I'm writing this on an Ubuntu 16.04 desktop"
Maybe that's your problem right there. For me, at least, the crazy memory leak started with 17.04, or was it 10? Can't remember, but I think it was 17.10. Well, whatever the version it was that first ditched Unity as the default.
For some reason, Gnome 3 leaks a bunch of memory. The browsers (and it does not matter whether I'm running Firefox or Chromium) leak even more, to the point of swapping on my 6 GB RAM home computer.
My work computer, running 16.10, has no troubles. The browsers just crash. All. The. Time. Some system library, I suppose, because when one browser starts crashing, the other starts as well. But that is (?) another problem.
Can Canonical push an update? Surely there's no way of informing the victims.
And do Canonical know enough about the identity and location of this individual for some chance of prosecution under the Computer Misuse Act or similar? If not, then why not?
Installing software shouldn't necessarily be easy, as software can be dangerous. Therefore you should check where your software is coming from and use as little as possible. Every line of code is a risk you take.
That's why there is a security rule to separate data from code. Data should always be safe to use so you can get it freely. Code on the other hand is something that needs to be checked, something that is dangerous.
Unfortunately we now have new commercial platforms, which like most commercial platforms are unfit to deal with data by default. Therefore every trivial task needs an "app", which then often just displays a web page in a browser. Web browsers lack basic functionality like being able to sort an HTML table, so web developers augment browsers in the worst possible way: by writing client-side code to run in them.
I particularly love how the new Samsung S8 doesn't even come stock with a text editor.
And companies wonder why people install over 100 apps: not because they like it but because they have to!
Alternatively it was just a soccer mom playing the latest variants of Candy Smash and Flippin' Birds.
That was one of the things I miss most about PalmOS. You just knew that you would have a notes application, a very functional calendar application, a calculator application and a contacts application (which was integrated into the phone on Treo devices).
They were always in the same place, always worked the same, and the data was portable between devices without having to hand the data over to Google, Apple, Microsoft, or your 'phone vendor when you upgraded your device.
Even with the web or internet based sync tools, I've always found problems going from one Android device to another.
Do I have the solution for you:
CalDAV/CardDAV for portable, accessible calendars and contacts.
Install Radicale, the simplest DAV server that just works, in a new Python virtual environment on your local server. If your server is Internet-facing you can sync over TLS with a bit of extra setup; otherwise sync only locally to avoid data leaks. Next, add new contact and calendar containers to the instance. Install DAVdroid on your phone and move all your contacts and calendars over to the CalDAV/CardDAV containers you just made. Assuming you are using your phone's default apps, and as long as they conform to AOSP expectations of functionality, you should be able to do this easily.
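For anyone wanting to follow along, the server side amounts to a handful of commands (a sketch assuming Radicale 2.x; the paths and LAN address here are made up for illustration):

$ python3 -m venv ~/radicale-venv
$ ~/radicale-venv/bin/pip install radicale

# minimal ~/.config/radicale/config, bound to the LAN only:
[server]
hosts = 192.168.1.10:5232
[storage]
filesystem_folder = ~/radicale-data/collections

$ ~/radicale-venv/bin/python -m radicale --config ~/.config/radicale/config

Point DAVdroid at http://192.168.1.10:5232/ and it should discover whatever calendar and address-book collections you create.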
Loose file syncing.
Install Syncthing on that same server and then put it on your phone to sync other files, such as documents and pictures. That's right, no Flickr or Google Drive. You don't even need a NAS, as you can run Syncthing just about anywhere, even on a desktop. You can easily add new devices to the pool, which will create a mesh network of file redundancy. Unlike Radicale, Syncthing offers security by default built into the protocol, so it's safe to use outside of your local network.
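The server end is similarly undramatic (a sketch for a Debian/Ubuntu box; other distros package it too, and upstream ships binaries for most everything else):

$ sudo apt-get install syncthing
$ systemctl --user enable --now syncthing.service
# the package ships a per-user unit; failing that, just run 'syncthing' in a session
# then browse to the web UI at http://127.0.0.1:8384, add the phone's device ID and share a folder with it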
App data backup.
When it comes to app data backup, I myself just directly copy /data/data/* and other related paths to my new devices, or export within the app where possible. I don't really have another option for this step since most apps are pretty proprietary. If anything, just use fewer apps, if at all possible. The whole app ecosystem is a disaster anyway.
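Roughly what that copy looks like, for the curious (it needs root on the device, and the package name is only an example):

$ adb root                                             # only works on rooted or userdebug builds
$ adb pull /data/data/org.example.app ./backup/org.example.app
$ adb backup -f org.example.app.ab org.example.app     # the older adb backup route, where the app still allows it

Neither is pretty, which rather proves the point about the ecosystem.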
If the problem you are having is a lack of standard functionality, consider flashing AOSP or Lineage, and stick with the apps provided by F-Droid and XDA Labs. I haven't used the stock, proprietary Android versions that any of my phones have shipped with, nor their infestations of Google products and services, and I have not had any want for them; this has the added benefit of a huge decrease in bloat and resource usage.
Now, I've heard of web pages that mine while you are there. Then I thought: that, if done legally and with consent from the user (full transparency, of course), could be an interesting form of micropayment. And now I see that the same could go for free (be it as in beer or speech) software.
Say the page/program asks the user to donate the use of a CPU core and some memory (I don't know how much would be necessary, I don't do mining, so that might make it prohibitive) in return for the information/code. In programs, it could be for a set period, say one year after installation, or a certain number of hours of use.
If you don't want to donate that CPU time and memory, then click "no" and be redirected to another page, or be unable to run the program.
Any thought for/against this possibly stupid idea, oh wise El Reg community? Has anyone out there actually tried doing this?
I recall a link-sharing service, I believe it was, that asked to mine for a few seconds before fetching the link. The user gets what they want faster than with the "Please wait 30 seconds to download this file" prompts, doesn't see ads, and doesn't have to disable their ad blocker (unless you block first-party scripts and XHR). It seems like a win for everyone involved.