* Posts by Psamathos

11 publicly visible posts • joined 7 Jul 2021

Ubuntu 25.10 plans to swap GNU coreutils for Rust

Psamathos

Re: So far, the Rust coreutils pass approximately 500 out of 600 GNU tests

There seems to be a little cognitive dissonance in your reply. Your response to my suggestion that there is a big difference between the two classes of bug is first to ask "Is there?" and then to say that one is "orders of magnitude worse" than the other, which rather suggests that there is.

The comment to which I was replying said "A 17% failure rate strikes me as being much too high ... Especially as Rust is promoted as something that reduces errors, by design." My point was that what's causing these tests to fail has little to do with the errors that Rust reduces by design. As I said, a 17% test failure rate in the functional compatibility testing is not acceptable as a final state, for precisely the reasons that you state. That's why the project explicitly treats any incompatibility as a bug and is diligently tracking its progress towards removing them all.

I'm unsure how you read into my reply any suggestion that it's acceptable for core tools to behave slightly differently, especially when I had just stated that it's not, but your final snark perhaps gives a clue. Many of the comments one reads about Rust seem fixated on the idea that Rust programmers are obsessed with memory bugs to the exclusion of all else. I'm not a habitual Rust programmer, but I know many, and for the most part their goals (obsessions if you want) are around reliability, maintainability, portability and reusability, and they happen to find Rust a good tool for that particular job. The source code to the GNU utils is just as arcane as the shell scripts that you mention. A rewrite whose explicit goal is 100% compatibility while being more portable and maintainable, and which happens to be immune from certain classes of bug, seems like a worthwhile cause.

Psamathos

Re: So far, the Rust coreutils pass approximately 500 out of 600 GNU tests

This isn't about rewriting the code for the sake of the "trend-of-the-week"; it's about rewriting the code in a more robust and maintainable form that can't suffer from some of the classes of bug that the GNU utils have suffered from in the past. In the process many of the tools are also getting faster.

You're correct that a 17% test failure rate is not acceptable as a final state, but it's a little ridiculous to say that incomplete feature compatibility of a work in progress somehow reflects badly on the language because "Rust is promoted as something that reduces errors, by design". There's a big difference between failing a test for edge cases in command-line parameter parsing and having a use-after-free memory error. If you go look at the issue tracker you'll find the open items are in the former category. Canonical are not looking to make this change today, and if you look at their test tracker [*] you'll see that they've been making steady progress. As the article rightly points out, the increased scrutiny from a widely used distro switching to them is only likely to speed up the fixing.

[*] https://uutils.github.io/coreutils/docs/test_coverage.html

Bitcoin creator suspect says he is not Bitcoin creator suspect

Psamathos

Not the suspect, or not Satoshi?

He says that "he is not Bitcoin creator suspect", but did he ever say he wasn't Satoshi? I think we should be told.

Microsoft seeks Rust developers to rewrite core C# code

Psamathos

Re: well, at least...

> [This trend towards garbage-collected memory management things has no Moore's Law to compensate for the inefficiency any more]

It's worth noting that while C# uses garbage collection, Rust does not. The Rust compiler instead does compile-time analysis of object ownership to achieve some of the same safety goals, but with deterministic behaviour and typically significantly better performance.
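
As a minimal sketch of what that means in practice (my own illustration, not anything from Microsoft's codebase): in Rust, a value is freed at the precise point its single owner goes out of scope, with no collector running in the background.

```rust
// Minimal sketch: Rust deallocates when the owner leaves scope,
// at a point fixed at compile time; no garbage collector involved.
struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs deterministically at the closing brace of the owner's
        // scope, not whenever a collector decides to wake up.
        println!("freeing {} bytes", self.data.len());
    }
}

fn main() {
    {
        let b = Buffer { data: vec![0u8; 1024] };
        println!("buffer holds {} bytes", b.data.len());
    } // <- `b` is dropped exactly here
    println!("buffer has already been freed");
}
```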

Going green Hertz: Rental giant axes third of EV fleet over lack of demand

Psamathos

Good experiences with renting EVs

I have rented Teslas from Hertz several times, including the last five times I've had cause to rent a car, and my experience has generally been excellent. For context, all of these have been rentals in southern California, where urban sprawl generally means that one is car-dependent and things are spread out enough that Uber/Lyft gets expensive, but where I haven't needed to do hundreds of miles of driving each day on a long road trip.

In terms of charging, when you rent a Tesla from Hertz it comes with a subscription to their "supercharger" network, which gets billed back to your credit card with no mark-up. The navigation system in the car is good at pointing you to the nearest charger if you need one and they are common enough down there that they are usually nearby and not busy. Furthermore, I have found that if I stay at a hotel that has valet parking and I show up in an EV then they will generally charge it overnight for "free", which is very handy. For clarity, Hertz DO NOT require you to return the car fully charged. Until recently they asked you to return it at least 80% full; now they have switched to letting you return it at any charge level but if it's less than 80% then they charge you (at standard supercharger prices) to charge it back up to 95%.
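
For the sake of illustration, the return-charge rule works out like this (a rough sketch; the battery capacity and per-kWh price below are my assumptions, not Hertz's actual tariff):

```rust
// Rough sketch of the return-charge rule described above.
// Battery capacity and price per kWh are illustrative assumptions.
fn top_up_fee(return_level: f64, battery_kwh: f64, price_per_kwh: f64) -> f64 {
    if return_level >= 0.80 {
        0.0 // returned at 80% or above: nothing to pay
    } else {
        // below 80%: billed, at standard supercharger prices,
        // for the energy needed to bring the pack back up to 95%
        (0.95 - return_level) * battery_kwh * price_per_kwh
    }
}

fn main() {
    // e.g. a 75 kWh pack returned at 50%, with electricity at $0.48/kWh
    println!("top-up fee: ${:.2}", top_up_fee(0.50, 75.0, 0.48));
}
```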

Aside from the charging, the rental Teslas from Hertz seem to have a pretty good level of spec compared to other cars in their fleet at a similar price. The acceleration from having an electric drivetrain is better than you'll get from anything else in a reasonable price category. Hertz still try to charge extra to book a car that will have GPS, but all the Teslas have it so there's no need to pay extra. The interior is as good as that of any random Ford/Chrysler/Toyota around the same price and the stereo is generally better. Having a BEV lets you use the carpool lane, which is a boon in that part of the world.

All in all, despite not owning a BEV, renting one has become my default option. Of course, as the saying goes: "your mileage may vary".

Avoiding AI-capable PCs will be impossible by 2027

Psamathos

A contrary view

OK, with all the negativity being hurled at AI here, I'm going to make a case for how this might be useful for something other than letting the OEMs shift more gear.

The machine learning systems that get hyped as AI at the moment have a stack of major problems, but high up the list are three: they are typically poorly tuned to individual users, they typically need you to be on-line to use them and (as a result) they are terrible for privacy. With a capable, reasonably efficient neural network accelerator on your laptop, it becomes possible not only to run decent models while off-line, but also to do local, privacy-preserving training, and the burden of fine-tuning for personalisation can be distributed across users' own machines, which makes it more feasible at scale.

I don't know if there will ever be a "killer app" for AI, but there are plenty of things for which it's rather useful. Generative AI for chat is at best a gimmick and at worst a fountain of BS, but generative language models are rather good at grammar checking, paraphrasing and writing style adjustment. Transformer language models have allowed a step change in the quality of speech-to-text systems (go track down Whisper.cpp on GitHub; it's better at speech-to-text than most commercial systems and the code and models are open source). Neural networks are great at network and process anomaly detection, which can make malware detection more responsive.

Yes, there's a boatload of hype for AI at the moment, and the "AI PC" is currently a solution in search of a problem, but I think that there are plenty of problems out there for which it can help. I don't know if the rest of you are just going to sit around and complain; I for one am going to go write some code in anticipation of having a 45 teraflop accelerator in my laptop.

OpenAI's ChatGPT has a left wing bias – at times

Psamathos

Notable flaws in the paper

Having read the paper, there seem to be a number of notable flaws in it.

The most glaring flaw is that the baseline that they use has nothing to do with real humans; the baseline for comparison was created by ChatGPT itself. They are not comparing the results of ChatGPT to the results of people who were surveyed, but instead comparing its results to results that it generated when asked to impersonate humans with certain positions. In the absence of a human baseline you could just as well read these results as "ChatGPT is better at impersonating right-wing positions than left-wing positions".

Related to the issue of not checking the answers against real humans is the idea that its answers are "biased" relative to the median position of people at large, rather than relative to the positions of political parties. When the authors write that they "... believe the bias stems from either the training data or ChatGPT's algorithm", they are implying that the training data was not representative. There is a large body of evidence to suggest that it's the political parties' positions that are not representative. A survey of surveys a few years back (https://prospect.org/power/americans-liberal-even-know/) showed that when asked about specific policy positions, people consistently poll more "liberal" than they vote. It's quite possible that the training input is completely representative of the population at large, but that the population at large holds positions that are not aligned with the parties' policy positions. In recent years the right has increasingly compensated for this by being more "populist", but it's worth noting that the left has done that in the past too.

When ascribing bias to the algorithm itself, it's worth pointing out that there's a huge body of work showing that people with wider exposure to ideas and cultures, and who have read more extensively, tend to be more liberal. This is a natural consequence of the very nature of conservatism; if your central tenets are accepting and preserving traditional values, rather than questioning and evolving your positions, then the more alternatives you are presented with, the less likely you are to stay that way. ChatGPT has "read" a large fraction of the content of the internet, making it better read, and exposed to more, than any "lefty" academic. It should be no surprise if any system trained on that much data turns out to take an expansive, relativistic position rather than a traditional one. In that respect it's no more biased than the human brain.

Actual real-life hoverbike makes US debut at Detroit Auto Show

Psamathos

Re: What is “internal combustion with batteries “?

Don't you remember? Samsung tried that with the Galaxy Note 7, but then ended up recalling them all.

Heresy: Hare programming language an alternative to C

Psamathos

Missing memory safety

It seems that the authors spent two and a half years building a new programming language designed to replace C and yet didn't address the biggest failings of C. Hare does offer bounds-checked arrays to stop you writing beyond the ends, and tagged unions to give a little help in avoiding writing a pointer as one type and reading it back as another, but it still has manual memory allocation and manual memory ownership management. This means memory leaks, use-after-free bugs and stack corruption are all still possible.
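
To make the contrast concrete, here's a minimal sketch (my own illustration, nothing to do with the Hare project's code) of how compile-time ownership tracking eliminates the use-after-free class of bug:

```rust
// Minimal sketch: ownership moves are checked at compile time, so a
// use-after-free cannot make it into the binary at all.
fn main() {
    let s = String::from("hello");
    let t = s; // ownership of the heap buffer moves from `s` to `t`

    // println!("{}", s); // rejected by the compiler:
    //                    // "borrow of moved value: `s`"

    println!("{}", t); // the current owner is still fine to use
} // `t` is freed exactly once here: no leak, no double free
```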

Switching programming languages takes effort, and people don't do it without a compelling reason, usually that the new language addresses some existing pain point. Memory-related bugs are one of the biggest sources of pain when writing C. I'm not sure people will be keen to move to Hare, as opposed to, say, Rust, if it does nothing to ease that pain.

macOS Server discontinued after years on life support

Psamathos

Push email for IMAP

Just about the only unique feature that macOS Server offered was the ability to run your own IMAP mail server and have it send push notifications to the Mail app on iOS. The protocol used was public, since Apple released their patches to the Dovecot mail server, and other people have written patches for other sorts of IMAP server, but you needed a digital certificate that seemed to be available only through the certificate request system in Apple's own server code. It would be great if, now that macOS Server is dead, Apple would provide an official process for getting the necessary certificates.

Nvidia launches Cambridge-1, UK's most powerful supercomputer, in Arm's neighbourhood

Psamathos

A 48-year-old design?

I seem to recall that Sinclair already produced a "Cambridge 1" back in 1973. It certainly had an excellent price-performance ratio back then, but I'd hardly call it a supercomputer; hopefully NVIDIA have updated it somewhat.

http://www.vintagecalculators.com/html/cambridge_models.html