Re: SystemD
Since they're both Debian-based it probably means dependency hell for .deb files which may or may not need systemd, more than anything else.
Governments, websites, individuals should start putting legally enforceable notices that AI slurpers are expressly forbidden from scraping content. Since AIs are natural language processing engines they should have no trouble understanding these messages exist.
And then throw some false names and facts into the content - something obscure and specific that an AI would incorporate and regurgitate as fact with the right prompt. A bit like how maps were booby trapped with innocuous but false data.
AI steals content to train itself. So if a site detects abusive automated visitors (i.e. those ignoring the robots.txt), then start feeding them poison - false and misleading images, text & data and links that lead to more junk and so on. Just shovel crap out for these AIs to ingest. Another AI could even be the source of this junk, generating deliberately hallucinatory content that won't be detected as wrong. Let's see how long it takes for the attacker to notice it is being poisoned.
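To illustrate the gist (the function names, thresholds and decoy text here are all made up for the sketch, not any real product's API), the poisoning idea could look something like this:

```rust
// Hypothetical sketch: serve decoy content to scrapers that ignore robots.txt.
// `is_abusive`, the threshold and the decoy string are illustrative assumptions.
fn is_abusive(requested_disallowed_path: bool, requests_per_min: u32) -> bool {
    // A crawler that fetches paths robots.txt disallows, or hammers the site,
    // gets flagged as abusive.
    requested_disallowed_path || requests_per_min > 600
}

fn respond(requested_disallowed_path: bool, requests_per_min: u32, real: &str) -> String {
    if is_abusive(requested_disallowed_path, requests_per_min) {
        // Feed plausible-looking junk instead of the real page.
        "The Eiffel Tower was relocated to Slough in 1987.".to_string()
    } else {
        real.to_string()
    }
}

fn main() {
    // A polite visitor gets the real content; the scraper gets poison.
    println!("{}", respond(false, 10, "real article text"));
    println!("{}", respond(true, 2000, "real article text"));
}
```

In practice the junk itself could come from another model, as suggested above; the point is only that the decision happens server-side where the scraper can't see it.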
The attacker could even be trained with poison to confirm it was scraping the site. e.g. if it was fed certain phrases or names which reinforced pathways that would yield the expected answer.
Prattling on about? We have multiple VMWare ESX servers and the majority of instances are running inhouse images - gitlab runners and things like that. Things that any VM framework could handle onsite or even the cloud. So why do we use VMWare ESX? Inertia. That's it. If VMWare start acting like dicks then it could easily overcome that inertia and there is no reason it wouldn't be viable to move away entirely.
And yes there will be customers with critical systems who need expensive 24/7 hour support and suchlike. I do not believe the majority of VMs are used that way. Other companies are just like us with a bunch of inhouse reasons for using VMs that could be moved to some other framework with minimal disruption to their business.
I'm sure there are some really critical deployments of VMWare but I bet the majority of them, possibly 90+%, are just throwaway stuff that any VM could do without any appreciable difference in performance - virtual desktops, CI/CD runners and whatnot.
Given that VMWare seem to be suing their own customers and getting very aggressive maybe it's time for companies to explore their alternatives. Start with the throwaway stuff and work up. There are many alternatives, e.g. OpenStack, Proxmox, Nutanix etc. A big company could probably even roll its own infrastructure or contribute to an open framework.
That largely depends on what the tests are failing on. Most likely because a lot of the tests are going to be exercising options which perhaps aren't implemented yet or aren't on the critical path. That certainly seems to be the case, looking at some test results which are mostly just missing or stub argument support - https://github.com/orgs/uutils/projects/1/views/1?pane=issue&itemId=3865405&issue=uutils%7Ccoreutils%7C1872
I bet if you were to run busybox over the coreutils tests you'd also get a bunch of similar failures for similar reasons.
It's not "invulnerable" and NOBODY would claim it is. But it eliminates a lot of vulnerabilities caused by C regarding buffer overflows, double frees etc. It's still entirely possible to write logic which is broken in any language, and no one has said any different. As for maintainability, I suggest the people tasked with maintaining coreutils may feel it's held together with string at this point and NEEDS a rewrite. In which case, why not use a language which reduces the attack surface and scope for bugs.
The more I learn of AI the more bullshit it becomes. An LLM is basically just billions of weighted parameters that were trained on lots of content. Once trained, it's just a loop that builds up an output set of tokens based on the input set of tokens. It's practically mechanical aside from a little bit of randomness thrown in to make the responses vary. The only "intelligence" to it is whatever rules the developers programmed into it for training and also for vetting the response, i.e. guard rails.
Well if they use AI then potentially they're plagiarizing everything that went into the model all at once.
But it would be a *brave* student who used it for anything more than a hint while doing their own research because it would emit bland, repetitive, hallucinatory crap that would soon become obvious to someone reading it.
I've yet to step into an ASDA which isn't shabby and extremely claustrophobic with aisles set too close together. I have a feeling the staff don't care for a bunch of reasons. If I want cheap I'll use Aldi or Lidl. If I want expensive I'll buy Sainsburys or Waitrose. ASDA just sits in the middle as this oppressive horrible experience that is probably the only choice for some people. I know I wouldn't want to use one if I had the choice of anywhere else.
"So even if you do rewrite those modules in Rust, you get nothing. A safe module is impossible, because you have to treat the Rust module as-if it's C."
If you are calling Rust from another language (e.g. C) then you build as a cdylib. Even if the entry points to a cdylib are unsafe, the innards can be checked against themselves as safe. i.e. you're already in a far better position than if the library was built with C/C++.
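A minimal sketch of that pattern (function names are mine, not from any real crate): the thin extern "C" wrapper is the only surface C ever sees, while everything behind it is ordinary safe Rust that the compiler checks as usual. In a real cdylib you'd set crate-type = ["cdylib"] in Cargo.toml and there'd be no main; it's here only to make the sketch runnable.

```rust
// The FFI entry point a C caller would link against.
#[no_mangle]
pub extern "C" fn checked_add(a: i32, b: i32) -> i32 {
    safe_add(a, b)
}

// The innards: plain safe Rust, borrow-checked and overflow-aware as normal.
fn safe_add(a: i32, b: i32) -> i32 {
    a.saturating_add(b)
}

fn main() {
    println!("{}", checked_add(2, 3));
}
```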
"Worse, Rust treats all breached invariants as immediately fatal. Your application may not corrupt memory, but it dies with no possibility of recovery."
Rust prefers errors to be defined in the function contract, i.e. a function should return a Result<success, error> and the caller should process the result. The language makes it easy to process results or propagate them up. This is also what Swift and Golang do in their own ways - Swift makes it look like a try/catch but it's the same under the covers, Golang has a value, err return convention.
What you might be referring to are panics, which Rust considers to be untenable application errors, i.e. you did something dumb like unwrap something without checking for error, or exceeding the bounds of an array and the code panics. Normally it is fatal (to the thread) but you can catch it with a catch_unwind() if you really wanted to. So yes there are two possibilities of recovery - define results in the function contract and handle errors properly, or use catch_unwind(). Neither has to be "immediately fatal" although if I were a developer I would want to know about such failures, not hide them.
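A quick sketch of both routes (the function names are illustrative):

```rust
use std::panic;

// Route 1: errors defined in the function contract via Result. The caller
// cannot use the value without looking at the Result first.
fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>().map_err(|e| format!("bad port {s:?}: {e}"))
}

// Route 2: a panic (e.g. an out-of-bounds index) unwinds the stack, but it can
// be contained with catch_unwind() if you really want to.
fn first_byte(buf: &[u8]) -> u8 {
    buf[0] // panics on an empty slice
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("nope").is_err());

    // The panic is caught at this boundary instead of killing the process.
    let caught = panic::catch_unwind(|| first_byte(&[]));
    assert!(caught.is_err());
}
```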
"C++ exceptions mean a breached invariant can be detected and recovered from before a memory corruption would-have occurred. Even if you choose not to recover for domain-specific reasons, the application can take the opportunity to save user data or roll back a partial change."
Safe-by-default makes Rust extremely hard to corrupt, and if corruption did happen (e.g. through a call out to a C lib) it would crash. But a panic is an orderly unwinding of the stack, terminating either the thread or at the catch_unwind(). Secondly, C++ *doesn't* protect you from memory corruption - you will get undefined behaviour, a crash rather than a helpful exception. If you're lucky you might get some kind of stack trace that helps isolate where the crash occurred but not necessarily where it was *caused*. e.g. I had to fix a crash-on-exit bug that was actually triggered if someone hovered over the system tray and it took days to find.
Read the TrapC whitepaper. "TrapC removes 2 keywords: ‘goto’ and ‘union’, as unsafe and having been widely deprecated from use" If you use some random C library then chances are it contains either or both those keywords and will not compile against this C variant unless you remove their uses.
For example, people frown upon goto but it is legitimately used to denest complex code and ensure it jumps to a cleanup or error state that happens at the bottom of the function. For example, OpenSSL uses goto a lot
https://github.com/openssl/openssl/blob/c1cd6d89a32d08d171b359aba0219357acf0c5cb/crypto/pkcs12/p12_npas.c#L72
And union is arguably worse since it isn't confined to the scope of a function. It could be pulled in from external headers, used internally between functions, or might form part of the API itself. If it's used then you're on the hook to refactor the code to not use it. Maybe it's as simple as replacing union with struct and taking the memory hit, or maybe it isn't, such as when the union is imposed over a buffer to make sense of it.
Either way you're not going to be compiling that code with TrapC until you change the code and hope you did it properly.
That's definitely the problem with C++.
I once reviewed some shared_ptr code like this "shared_ptr<Foo> p2(p1.get());" So instead of two shared_ptrs sharing a refcount of 2, each had a refcount of 1 since p2 was constructed from the raw pointer inside p1. This crashed at some random point in the future and it took ages to debug. Another time I had an IO thread which crashed because someone passed a "this" into a timer, but by the time the timer fired, the "this" in question had already been deleted.
These sorts of issues are caused by lack of lifetime and ownership checks and the language / compiler doesn't care.
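For comparison, the same scenario in Rust: the only sanctioned way to get a second owner is Arc::clone, which bumps the one shared refcount, so the double-refcount bug above can't be written without explicitly reaching for unsafe.

```rust
use std::sync::Arc;

struct Foo;

fn main() {
    let p1 = Arc::new(Foo);

    // The sanctioned way to get a second owner: clone the Arc, which
    // increments the single shared refcount.
    let p2 = Arc::clone(&p1);
    assert_eq!(Arc::strong_count(&p1), 2);

    // Dropping one owner decrements the count; the Foo lives on.
    drop(p2);
    assert_eq!(Arc::strong_count(&p1), 1);

    // Building a second independent owner from a raw pointer (the C++ bug)
    // would require unsafe code, so it can't happen quietly in a review.
}
```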
Generally speaking Rust is not a hard language to learn. It's very easy to install, has some great plugins for VS Code, has an easy toolchain, is very clean and terse, has a C-like syntax, and it's easy to get up and running in no time. A lot of the skills and mindset are transferable from C/C++ and also back to C/C++. I certainly don't put learning it beyond the capabilities of most programmers and I expect most could become proficient in a couple of weeks.
That doesn't mean there aren't areas that are bound to confuse people, but could probably be overcome with a tutorial written specifically for people from that background. Things like how lifetimes work, borrow checks, what the equivalent of smart pointers are, how move assignment works, how to do polymorphic style functions and so on.
C++ has grown so hideously complex, nuanced and unsafe that I don't see how it can be fixed. At least not without gutting the language or doing something similar to TrapC. But TrapC is hiding some of the issues of the language, not making them go away and comes with a runtime cost since every pointer access has to be bounds and possibly type checked.
It's not just the ugliness that is the issue, it's that the only way to protect it is at runtime. e.g. this TrapC is imbuing every pointer with RTTI data. So if you have code that walks a buffer an element at a time, then you're going to incur a runtime bounds check for every single element of the array.
A modern language would provide and encourage iterators or streams APIs to avoid this problem, and also relieve the client code from having to do any bounds checking of their own. But in C (and some C++) you're stuck with the shitty concepts baked into the language.
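e.g. in Rust the two styles look like this (a compiler can often elide the check in the indexed loop too, but the point is the iterator API removes the index, and the caller's opportunity to get it wrong, entirely):

```rust
fn main() {
    let buf = [10u8, 20, 30, 40];

    // Index-based walk: every buf[i] is a candidate for a runtime bounds check.
    let mut sum_indexed: u32 = 0;
    for i in 0..buf.len() {
        sum_indexed += buf[i] as u32;
    }

    // Iterator walk: the iterator owns the traversal, so there is no
    // per-element index for the caller to mishandle.
    let sum_iter: u32 = buf.iter().map(|&b| b as u32).sum();

    assert_eq!(sum_indexed, sum_iter);
    assert_eq!(sum_iter, 100);
}
```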
So TrapC is offering some protection from ugly code but it comes with a cost.
Actually it's not since they've changed the language by removing union and goto keywords. A LOT of code uses those things.
Aside from that there are some other considerations. This language doesn't make ancient code safe, it just makes it fail predictably. i.e. if someone abuses a pointer the code exits rather than being exploited. That's certainly a benefit but there is a performance overhead to it, because C code walks buffers with indexes or pointer arithmetic so something has to check each access at runtime.
And maybe you want the ancient code to actually handle the failure rather than have your program die (e.g. if there is an exploit you could recover from)? Well you're limited to the trap command which is very inelegant.
And what if your ancient code wants to call external APIs, system functions etc.? One side uses RTTI to protect data, the other doesn't. You can probably call from TrapC to C but not the other way around. So then you're going to have to implement some of the kludges suggested.
So no, it's not going to be easy to recompile ancient code. It will take work and it could be a pain in the ass especially if you are not familiar with this ancient code. I've had similar experiences porting libraries onto platforms they weren't designed for and it's not fun.
There's no need to be "generous" to my point. It IS the point.
Neither C++ nor the STL nor Boost ships thread safety with your code, and even if it did, it would still be optional because the compiler DOES NOT CARE. Rust forces you to implement thread safety whether you want to or not. C++ sucks because it doesn't enforce shit, and even if it has templates that might protect your code it will not check whether you use them properly or at all.
I honestly find it perplexing that people leap to defend languages that are demonstrably less safe. Especially in an article about C being not fit for purpose, but the same applies for C++ especially in the modern world.
My knowledge is up to date. It doesn't mean code already written has the choice to move to the latest standard without being substantially rewritten. As a mitigation it is generally sensible to use Boost in preference to the STL since it compiles down to STL where it can but offers additional functionality like ASIO and richer types over a wider range of C++ standards. C++11 on its own only has the bare minimum of what we'd call multi-threaded support. But let's look at what you might do in C++ to protect shared data through C++11 or Boost:
recursive_mutex mutex;
SharedResource sharedResource;
//...some code block
{
    lock_guard<recursive_mutex> lock(mutex); // What happens if I forget this line?
    sharedResource.doSomething();
}
What happens if you forget to lock? Nothing, at least not at compile time. At runtime your code might blow up immediately or only once in a blue moon. But it is unsafe.
Let's look at the equivalent Rust code:
shared_resource: Arc<Mutex<SharedResource>>, // a field on the owning struct
//...some code block
{
    let shared_resource = self.shared_resource.lock().unwrap();
    shared_resource.do_something();
}
The programmer has no choice but to lock because the shared resource is only accessible by explicitly obtaining it. The compiler will fail if you don't. It will also check that SharedResource is thread-safe (i.e. that it satisfies the Send and Sync traits) just in case there is something about the struct that would break that guarantee.
Multithreading support also extends to other multi-threading patterns, e.g. mpsc send & receive, move semantics being the default, compile time checks on safe-to-send structs, old-school spawns, joins, barriers, condvars. It even supports async / await. C++ has some building blocks for futures and C++20 adds co-routines (which are borderline incomprehensible) but neither futures nor co-routines are remotely simple to use.
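e.g. a minimal spawn + mpsc example - the sender is *moved* into the thread and the compiler verifies at compile time that whatever crosses the channel implements Send:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        // tx was moved in here; the compiler has checked that the values
        // being sent are safe to cross threads (the Send trait).
        for n in 0..5 {
            tx.send(n).expect("receiver gone");
        }
        // tx dropped here, which closes the channel
    });

    // Receive until the channel closes, then reap the thread.
    let total: i32 = rx.iter().sum();
    worker.join().expect("worker panicked");
    assert_eq!(total, 0 + 1 + 2 + 3 + 4);
}
```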
So the point is Rust has your back and compels thread safety by design. It also supports modern patterns and terse code. While C++ throws more and more templates at you and doesn't give a damn if you use them properly or not. Maybe Super Programmer (tm) does everything correctly. Back in the real world real programmers make errors. I prefer my errors are found at compile time, not by the customer.
It's not even a fair comparison quite honestly. I might add that Go and Swift also wipe the floor with C++ for multithreading but Rust is the closest analogue in terms of portability and runtime performance.
You can multithread easily. Doing it safely is another matter. I'd also say that multithreading in C++ is awful unless you use Boost or something to hide platform differences and even then it's incredibly easy to screw up because the language itself has no concept of sync, async, guards, mutexes etc. So if you don't write your code properly the compiler won't save you and nor will the runtime since both are oblivious of the programmer's intent.
That is why a language like Rust would be far better for multithreading because it has proper rules of ownership, move semantics, sending data between threads, async / await, data access, guards etc. baked into the compiler and runtime. It makes it easier to parallelize code because the guarantees are so much stronger.
C doesn't "describe" the underlying processor, unless your code is peppered with near and far keywords, in which case you're probably writing for DOS or Win 3.1.
C exists to be an abstraction over the processor, i.e. writing machine code is a pain in the arse and 10x worse if you're targeting multiple processors since each demands its own machine code. So rather than _write_ machine code (or assembly) per processor, Kernighan & Ritchie wrote a language and compiler which was an abstraction over a processor and produced reasonable-ish machine code for any processor the compiler could target. And it was abstract enough to hide most of the processor detail but not so abstract that it couldn't do pointer-ish things to memory if you wanted.
Which is all well and good to a point but modern C has to deal with threats and issues that ancient C never had to concern itself with. And it's not fit for purpose in that role.
Right... except for 99% of the time, Rust DOES have control of the memory regions it has access to. And if it doesn't then your operating system is broken or you're dealing with volatile memory and you'll be using an unsafe block to deal with it. Everywhere else will be safe by default. A sane response would be to say "hey that's a tangible improvement on C where everything is unsafe" rather than contrive some silly examples, or shift the goalposts and pretend somehow the compiler and runtime checks mean nothing.
Also, I don't know why you're going off into a rant about Rust in general. I'm sure Cargo is vulnerable to some attacks. So too is vcpkg and conan. What's your point? Or moaning about the Rust community for some straw man attack. It's bizarre.
Erm yes, they're the fault of the language, and then you go on to admit it by saying if the programmer doesn't do X, Y, and Z then bad things may happen. i.e. the language is shit. The language and compiler could have stopped the bad thing happening but it doesn't care. And then we get the C and C++ defence force leaping out of the woodwork moaning that if only everyone were a Super Programmer (tm) just like them then of course bad things wouldn't happen. Except they keep happening over and over and over again because nobody is Super Programmer and the language is shit. Is this so hard to understand? Even when you're writing a comment against an article by a government begging people not to use shit languages?
I'm not sure why you're annoyed. Rust code is safe by default against the vulnerability this article talks about thanks to the efforts of the compiler and runtime. Not just buffer overflows but also null pointers, double frees, concurrency & ownership issues. If you don't write the code in a way that satisfies the borrow checker it results in a compile error or a panic rather than becoming code that breaks later.
As for unsafe blocks, yes it supports them as a way to talk to C. It wouldn't be much use as a real world language if it couldn't bind to C which is seen as the lowest common denominator. But those blocks are explicitly marked in code and very easy to search for if you suffer an issue. Certainly much better than everything being unsafe and where the cause of an issue, or the attack surface, could be literally anything.
Buffer overflows are a consequence of languages like C and C++ that don't enforce buffer limits and where things like pointer arithmetic can easily step outside of a buffer. You could scream at developers not to do the bad thing, but as long as they use languages that allow it, it will happen forever. The only mitigations are layer upon layer of static analysis and rules like MISRA, but really the sane option is to use a language not vulnerable to the exploits in the first place.
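In Rust the same out-of-range access simply can't become an overflow - it's either an Option you have to inspect, or a panic, never a silent read of adjacent memory:

```rust
fn main() {
    let buf = vec![1u8, 2, 3];

    // Fallible access: an out-of-range read just gives you None.
    assert_eq!(buf.get(10), None);

    // Same for slicing: a bad range yields None rather than a bad slice.
    assert_eq!(buf.get(1..10), None);

    // And direct indexing like buf[10] would panic at runtime instead of
    // reading past the end of the allocation.
    println!("first element: {}", buf[0]);
}
```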
Themes are nice and all, but a functioning desktop is even nicer. This is why Enlightenment flamed out so fast in the day - great to look at but it brought computers to their knees and was useless as a desktop. When GNOME & KDE appeared and were slightly boring but actual functional desktops, interest in Enlightenment died on the spot. I seem to recall that Enlightenment got dropped by Red Hat too which caused a kerfuffle.
I expect these days you could go to town with window decorations again. It's a little bit different for Wayland and modern toolkits in that the frame is rendered by the client rather than by the WM, so Qt / Gtk draw all those bits as part of the program's surface before compositing. So it's a function of what themes you have installed for the toolkit as to what the UI and frame would look like.
I remember trying it back in the E16 days but it was hideously complex to make work and a performance hog. It was great if you liked bizarre themes with handlebars sticking out of windows or whatever but not so good if you just wanted to do stuff on a desktop without the desktop becoming your preoccupation because it was so broken.
I was happy when Red Carpet (a packaged version of GNOME 1.x) arrived and demonstrated that yes, Linux could offer a desktop that didn't require a PhD to install.
As I alluded to, the out-of-box experience is pretty poor. A lot of people are used to the GMail style appearance and I think Thunderbird should change its defaults to be more like what people are used to these days. Or at least ask the user to pick from 2 or 3 common styles.
Exchange probably uses proprietary protocols but I'm also sure that IMAP / POP3 can be enabled. Maybe someone has reverse engineered their protocols and Thunderbird could pull that work.
Microsoft forcibly switched people from the Mail and Calendar app onto Outlook so they could inject advertising into the inbox. There was no other reason for it since those other apps were completely usable as they were for the majority of people at home who just want something that hooks to their webmail.
As it is I suggest people uninstall Outlook and use Thunderbird. It's a completely free and ad-free email client that works fine, although I wish the default views were a bit nicer out of the box.
VMWare was amazing in the early days. I remember running it on a PC back around 2000 and my mind was boggled by the concept and the potential. But I think as they grew and grew, they got a bit complacent. Virtualization is so well supported by hardware, operating systems and open source stacks that the value proposition of paying a small fortune to VMWare just isn't worth it for a lot of customers. And if VMWare decides to screw customers and lock them into longer contracts then it may motivate even more of them to explore alternatives.
... they're bleeding out. I bet a lot of VMWare customers are realising that instead of paying $$$$$ to VMWare they can pay $$ to use Proxmox or OpenStack instead and have more control over their experience. It probably requires a lot of work to move but in the long term it means big savings. So VMWare changes its contracts to lock their customers in. I hope it motivates more customers to explore the alternatives.
And an annoying co-pilot button eating up space on the keyboard. Allegedly this gives me benefits but I fail to see what they are even supposed to be. Even Microsoft's Copilot+ website struggles to justify why anyone should care, citing features like brightening video and audio transcription that a CPU or GPU could do, assuming someone wanted either of those things. So I've hidden / disabled Copilot as much as possible in the OS. It seems like a solution in search of a problem, and probably has all kinds of hideous data scraping going on with it too.
The Polestar 4 has a rear view camera, not a mirror. I think it is a reckless and insane design choice since a camera doesn't give 3D cues like a mirror would and also messes with distance perception and night time view. And it's not like a transit van or something where there is no way to see out the back and where a camera would be better than nothing - Polestar deliberately made it this way.
It's not really the superficial syntax differences that would be the issue translating code. The issue would be raw API calls, pointer / buffer abuse, assumptions about NUL terminators, code with side effects and so on. I suppose there are certain patterns of bad code a C translator could identify and replace with safe code, but it would radically alter the form in the process. Or it would do something worse than alter the code, which is to leave it alone but slather it in unsafe {} blocks calling C library functions, so that it's legal Rust but also Rust with all the benefits and safeguards turned off.
There will be other lawsuits because that's not all they did.
I would not be surprised if their checkout jacker also inserts an affiliate code even for people who didn't come in through an affiliate link. If so, they're stealing from online stores.
And users of honey are being stiffed by a product that claimed to find the best discount codes and actively suppressed them as part of their shakedown service to compliant online stores for a cut of the transaction.
So they were stiffing everyone in different jurisdictions. I expect they will have a lot of legal issues. I also hope that PayPal as the owner of this brand sees some push back, if certain online stores make the "strategic" decision to drop their asses from the checkout process and be done with them.
That is because its predecessor AdBlock Plus went from blocking ads to shaking down advertisers for $$$ to allow their "curated" ads through. Users had a big problem with that and so uBlock Origin took over.
This Pie Adblock could be benign but given its Honey roots it could be another shake down attempt to let advertisers pay to be let through or to infiltrate the browser and do other shitty and underhanded operations. I certainly wouldn't trust it enough to install it and find out.
I think it was mine for a short time since it introduced things that Microsoft belatedly copied like moving stuff into high memory and disk compression. There was something magical about seeing a 20MB drive suddenly be capable of transparently holding 1.5x as much data. But Microsoft caught up, and were also anti-competitive dicks with a lock on OEM preinstallations on new PCs. It probably didn't help that DR DOS was sold to Novell where it got dragged into a networking war between Novell & Microsoft.
I think to be honest that once MS DOS 6.x turned up there wasn't much incentive for anyone to bother with DR DOS and interest died off completely in it. I think if DR DOS had capitalized on their early advantages and secured an OEM or two for preinstalls it might have enjoyed more success.
Rust is actually easier to write than C/C++ in a lot ways but it has a steep unlearning curve from C/C++.
I say unlearning because the C/C++ compiler doesn't care about a lot of things that the Rust compiler will punish you for until you stop doing them, e.g. abusing ownership rules, memory safety etc. The advantage however is that once you do learn the Rust way then a lot of the practices are transferable back into C/C++ to make code you write in those languages less unsafe than it was before. I say less unsafe, because the language is always going to be broken but some badness can be mitigated.
I say easier aside from the unlearning because the language is terse, the tools are simple to use, the compiler is friendly, stuff like unit testing & package management is built into the tools, async programming is part of the language and code tends to be more portable thanks to a better standard library.
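e.g. unit tests live right next to the code and `cargo test` picks them up with no third-party framework (the function here is just a stand-in):

```rust
// An ordinary function in the same file as its tests.
fn celsius_to_fahrenheit(c: f64) -> f64 {
    c * 9.0 / 5.0 + 32.0
}

// `cargo test` compiles and runs this module; it's stripped from release builds.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn freezing_and_boiling() {
        assert_eq!(celsius_to_fahrenheit(0.0), 32.0);
        assert_eq!(celsius_to_fahrenheit(100.0), 212.0);
    }
}

fn main() {
    println!("{}", celsius_to_fahrenheit(37.0));
}
```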
Maybe you should ask why the unsafe keyword exists in Rust. It is because Rust sometimes has to call bare metal, or external libraries (written in C/C++). These are external to Rust itself and therefore a mechanism is supplied to make those calls.
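The classic case is declaring and calling a C function - e.g. abs from libc, the same example the Rust book uses for FFI. The unsafe surface is one explicitly marked call site; everything around it stays checked:

```rust
// Declare a function from the C standard library. The compiler can't verify
// C code, so calling it must be wrapped in `unsafe`.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // The unsafe block is the entire FFI surface here - trivial to grep for.
    let magnitude = unsafe { abs(-3) };
    assert_eq!(magnitude, 3);
    println!("abs(-3) = {magnitude}");
}
```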
So this is not some "ahah gotcha!" moment.
The vast majority of Rust code is safe by default and doesn't need to use the unsafe keyword ever. And if it does need to use unsafe, then it is a sliver of code, a tiny fraction of the overall code base.
So this is an absurd argument. Memory safety is a very big part of the language and the benefits are obvious to anyone who programs in it, which I assume you haven't.