* Posts by AdamWill

1524 posts • joined 4 Nov 2010

Arrogant, subtle, entitled: 'Toxic' open source GitHub discussions examined


Re: Eh?

It's the part where people who are using software that someone else is writing without paying them, or having any kind of formal support arrangement whatsoever, think they can arrogantly dictate design decisions.

Me deciding how I want to design some software that I am then nice enough to publish for you to use is not me "forcing you to use it" the way I designed it. You are entirely free not to use it. But yes, if someone opened an issue like this on something I'd written, I'd close it too.

Misguided call for a 7-Zip boycott brings attention to FOSS archiving tools


Re: I like 7Zip.

Well, first of all, not "the West". The UK, US and Australia. Everyone else ducked the heck out of that particular mess.

Second, the answer is "quite a lot, which was very bad, but what exactly does it have to do with the price of fish?"

Note the response to that particular debacle in "the West": Tony Blair - voted out. George W. Bush - voted out. Whichever of the revolving cast of Australian PMs was in power at the time - probably backstabbed by his/her colleagues before s/he could be voted out. Public inquiries up the wazoo.

When can we expect Putin to be voted out in a free and fair election, then?

SpaceX reportedly fires staffers behind open letter criticising Elon Musk


Re: A close reading...

Update: per this reporting - https://www.theverge.com/2022/6/17/23172913/spacex-complaint-letter-firing-elon-musk - "repeatedly" seems to mean "they posted it to a handful of Teams channels".


Re: Unclear on this whole employment thing

"Maybe its a modern thing but someone should have explained to these people some of the basics about employment. One is that you don't make (critical) statements about your employer in public."

It has not, AFAIK, been reported that they did this. This was an "open" letter in the sense that it was circulated *within the company*, it was not officially released to the public.

It then got leaked to the press, but none of the reporting I've read so far states that the authors of the letter were also the ones that leaked it.

The mail from SpaceX management also does not appear to say that people were fired for posting the letter publicly or leaking it.


Re: A close reading...

That's what SpaceX management would *like* you to think, sure. Of course they're going to paint it in as bad a light as they can manage. An *even closer* reading, though, suggests they're being vague about it. So they sent it to "thousands" of people? Well, yeah, if you want to circulate an open letter, you send it to everyone you can. Why not? It's an *open* letter. That's sort of the point. Not much point in writing it and sending it to nobody.

So they sent it "repeatedly"? What does that mean, exactly? Would it count as "repeatedly" if, for instance, they sent it to a mailing list, there was some discussion, and they replied to it? Would it count as "repeatedly" if someone sent it twice? Is that a firing offence, do you reckon? Does it count as "repeatedly" if two different people sent it?

Airbus flies new passenger airplane aimed at 'long, thin' routes


Hah. Have you checked how much first class costs these days?

Somewhere over the last two or three years, business and first migrated right out of the realm of "manageable as a splurge for the fortunate but not utterly loaded" into "Kardashians only" territory.

When I flew transatlantic a couple of months back, business class was 4x the cost of economy. First class? Don't even ask. "Premium economy" cost what business class used to cost. Ugh.


Re: Why

"If my seat is comfortable"

Don't worry, it won't be.

Ubuntu releases Core 22: Its IoT and edge distro


Re: Security isn't simple

IoT / edge devices are generally not expected to be multiuser. They're often set up such that there's no user account at all.

Bill Gates says NFTs '100% based on greater fool theory' amid crypto cataclysm


Re: NFTs have no intrinsic value whatsoever, but have sold for multiple millions.

As of right now, probably not, no, because there is basically no legislation or case law suggesting that an NFT can actually *be* a "digital title". Just because one says it is, doesn't mean it is.

If such legislation and/or case law were to be established in enough places, maybe your digital title would wind up being worth something. It likely wouldn't work in the way web 3 purists want it to work, though, because in the real world things are messy. If you somehow contrive to steal the title to my house, a court would not then hold that you own my house. These are the sorts of safeguards that people turn out to quite like in reality. It's astonishingly unlikely that any legislature or court is going to sign off on a 'digital title' concept in which control of the NFT "digital title" alone determines legal ownership, just as they don't consider physical possession of a title deed to determine legal ownership of the property in any and all cases.


Re: New approach to an old piece of fun?

...although the photographer probably still owned the copyright...

Apple’s M2 chip isn’t a slam dunk, but it does point to the future



"The company did claim that the M2's CPU is 1.9x faster than Intel's 10-core Core i7-1255U while using the same amount of power, but while this may be a more appropriate comparison, the fact is that Apple doesn't have a CPU for ultra-thin laptops that is as powerful as Intel's best."

Well, that's kinda arguable. It's not as powerful *at peak power consumption*, indeed. But it seems unlikely that a CPU in any "ultra-thin" chassis is going to be able to consume 50+W of power for any significant amount of time.

You'd have to run extensive real-world benchmarks to be sure, but I would be pretty surprised if an ultra-thin laptop with the i7-1255U actually outperforms an ultra-thin M2 laptop in a long-running, high CPU intensity test. I'd expect, rather, that the i7-1255U one would blaze out of the gates for ten minutes, then fairly quickly be throttled down to a substantially lower speed to prevent overheating.

You can run a CPU at 50+W in a laptop for a long time, but it won't be an ultrathin one. It'll be a much thicker workstation-type one.

For good performance in an ultra-thin, you *really really need* high performance-per-watt, because it's just not possible to clear a lot of heat out of a chassis that thin.
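The burst-then-throttle dynamic is easy to see with a toy model. Every number below is invented purely for illustration - this is not a benchmark of any real chip:

```python
# Toy thermal model: a thin chassis can only shed so much heat, so a
# chip that needs more watts per unit of work gets clamped once the
# burst window (thermal headroom) is used up. All figures are made up.

def sustained_work(peak_watts, perf_per_watt, dissipation_watts,
                   burst_seconds, total_seconds):
    """Total work over a long run: full peak power during the burst
    window, then clamped to what the chassis can dissipate."""
    work = peak_watts * perf_per_watt * burst_seconds
    steady_watts = min(peak_watts, dissipation_watts)
    work += steady_watts * perf_per_watt * (total_seconds - burst_seconds)
    return work

# Hypothetical ultra-thin chassis that can sustain ~15 W. One invented
# chip is fast but thirsty, the other has twice the perf-per-watt:
hot_chip = sustained_work(50, 1.0, 15, 600, 3600)   # blazes, then throttles
cool_chip = sustained_work(20, 2.0, 15, 600, 3600)  # lower peak, efficient

assert cool_chip > hot_chip  # efficiency wins over a long run
```

The efficient chip loses the first ten minutes but wins the hour, which is the whole point about perf-per-watt in thin chassis.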

EU lawmakers vote to ban sales of combustion engine cars from 2035


Re: In other words...

The way you achieve a radical change in the economics is to do exactly this.

Car makers will bitch and moan about this for a couple of years, and then - as long as the governments don't give in - will give up and get used to it. 2035 is 13 years away. That's 13 years for them to deal with the supply chain issues. Which is enough time. When they realize their choices are a) figure out a way to build enough EVs or b) don't sell any Vs, they will get a) done.

They will also figure out the economics, because if they can't sell you (someone with a hard price ceiling) a V, they will figure out a way to sell you an EV. The economies of scale will also kick in very fast when everyone involved in the manufacturing knows they have no choice but to figure it out.

The best way to keep EVs rare and expensive is *not* to have rules like this. The longer you say "well, it's fine to make dinosaur burners if you really must", the less incentive the industry has to put their big boy pants on and figure out how to scale EV production.

I love the Linux desktop, but that doesn't mean I don't see its problems all too well


Fragmentation is a red herring

I disagree that fragmentation is really "the problem", if problem there is. And ironically, the article's text sort of makes that point for me.

Let's consider some of its arguments:

"We have many excellent Linux desktop distros, which means none of them can gain enough market share to make any real dent in the overall market."

OK, sure, but then...if you add up the market share of *all* of them, it's still not much.

It's also worth noting there have been points in the history of Linux on the desktop where 'fragmentation' appeared much less significant. For a few years after Ubuntu emerged, love it or hate it, it was clearly and by a long chalk both the most popular and the most buzzy desktop Linux. That lasted for quite some time. But even still, its market share never got very high, and Canonical never made any money out of it.

"None of the major Linux distributors – Canonical, Red Hat, and SUSE – really care about the Linux desktop. Sure, they have them. They're also major desktop influencers. But their cash comes from servers, containers, the cloud, and the Internet of Things (IoT). The desktop? Please. We should just be glad they spend as many resources as they do on them."

This is true, and it's a good point, but *it has nothing to do with fragmentation*, does it? It would be true even if there were only one commercial Linux distributor. They still wouldn't be able to make much money off the end-user desktop.

And in the end, that's the real problem: money. Exactly as you say, nobody has ever figured out the magic way to make money off desktop Linux (with the possible exception of ChromeOS; I dunno how the financials on that look, honestly, not sure if Google breaks it out). At this point, given how many people have tried, that probably means it isn't going to happen. And without a large and self-replenishing pile of money, shifting a behemoth like Windows is just not realistically going to happen.

There are, rough guess (I'm too lazy to go look up the commit logs), probably less than 50 people in the world who are paid to work, full-time, on a *nix desktop environment (again excepting ChromeOS). That's *all* of them put together. So is fragmentation really the issue? Even if you got rid of all the desktops but one and put them all to work on that, 50 full-time devs is still not a large number. How many full-time devs do you think there are working on the Windows or macOS shells? It's a hell of a lot more than that.

Ditto for the F/OSS graphics stack, only if anything it's probably worse. Ditto audio. Ditto bluetooth. Ditto, let's be honest here, basically any part of the OS that doesn't make Red Hat, Google, Facebook or (a bit less significantly, since they have much less in the way of resources) SUSE or Canonical some money.

Given how few resources we (I'm the Fedora QA lead) have for this stuff, honestly, it's pretty amazing - and a tribute to the folks involved and the community volunteers who help - that Linux on the desktop works as well as it does. If you gave Microsoft or Apple the resources we have they probably wouldn't cut another release for a decade.

That's the real issue, not fragmentation. Fragmentation is, if anything, more of a consequence. If you think about it, when it comes to the areas where Linux actually is a serious, money-generating player - in the cloud, on-prem in the enterprise, Android, maybe ChromeOS - there is *much less* effective fragmentation in those areas. Where there's money to be made, a dominant player or a handful of them somehow emerge naturally. There's one serious player in Linux phones - Google with Android. ChromeOS is also all Google. There's a handful of serious players in the enterprise and the cloud - Red Hat, SUSE, Canonical, plus Debian for non-commercial cases. There aren't dozens and dozens of serious enterprise distros. If there was real money in the Linux desktop, the same pattern would likely emerge.

Multiplatform Linux kernel 'pretty much done' says Linus Torvalds


Re: Can anyone provide more context on "multiplatform"?

AIUI it's more or less the opposite: it's about having most ARM devices be able to boot a 'generic' kernel, like x86_64 systems generally can, rather than different ARM platforms needing their own custom kernel builds. https://lwn.net/Articles/496400/ is an article on it from 2012.

Sick of Windows but can't afford a Mac? Consult our cynic's guide to desktop Linux


Disgusted of Tunbridge Wells

Your explanation of why all the distros I don't like suck is great, but your explanation of why the distro I do like sucks is terrible and wrong and unfair. I protest!

Logitech's MX Mechanical keyboard, Master 3S mouse



"Some users prefer the short key-travel of a modern laptop-style keyboard while others would only allow their ancient IBM Model M keyboards to be prised from their cold, dead hands – regardless of the deafening clattering of keys being struck."


New audio server Pipewire coming to next version of Ubuntu


Re: Another Sound Server

This...just isn't really right, though.

We had sound servers before PA, as the article notes. GNOME used esd and KDE used arts. Every distribution shipped them. This wasn't Lennart's idea. PA was written, and generally adopted, in an attempt to replace and improve on both. It was generally considered to have achieved this, which is why the desktops and distributions adopted it.

People who encounter bugs with sound servers generally get frustrated at their existence, but people trying to build desktops and applications that play sound are unanimous in the opinion that a sound server is necessary to do it properly. If neither PA nor Pipewire existed, another one would promptly be invented, GNOME and KDE would adopt it, and all distributions would ship it.

Sound servers don't exist because nobody is "fixing the underlying software", really. They're just a layer of abstraction that's generally agreed to be useful. There's nothing exactly "broken" about ALSA, but it is by design written at a pretty low level of abstraction, and app and desktop developers don't like writing to that level.

I don't think it's really true to say that "audio on Linux definitely suffers from the constant churn of new sound servers", either. I'd say on the whole it *benefits* from constant churn in sound servers. PA wasn't perfect but it was, on the whole, an improvement on esd/arts. Pipewire isn't perfect but so far it seems to be, on the whole, an improvement on PA. If neither PA nor Pipewire had been written, and we were still using esd and arts, would things be better? I question that.
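The abstraction is easiest to see with mixing: the hardware takes one stream, but several apps want to play sound at once, and *somebody* has to sum them. A deliberately tiny, entirely made-up Python sketch of the one job every sound server fundamentally does:

```python
# Toy illustration of why a sound server exists: each app submits its
# own sample buffer, and the server mixes them into the single stream
# the hardware layer (think ALSA) actually accepts. Names are invented;
# this is not any real server's API.

def mix(streams, length):
    """Sum per-app sample buffers into one device buffer, clamping
    the result to the valid [-1.0, 1.0] sample range."""
    out = [0.0] * length
    for samples in streams.values():
        for i, s in enumerate(samples[:length]):
            out[i] += s
    return [max(-1.0, min(1.0, s)) for s in out]

streams = {
    "music-player": [0.5, 0.5, 0.5, 0.5],
    "notification": [0.0, 0.7, 0.7, 0.0],
}
device_buffer = mix(streams, 4)
print(device_buffer)  # [0.5, 1.0, 1.0, 0.5]
```

Real servers also handle resampling, per-stream volume, routing and latency, but it's all the same idea: apps write at a convenient level of abstraction, and one process owns the low-level device.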


Re: Will it finally fix the bluetooth HSP/HFP issue

I mean, it's very likely the case behind the scenes in Windows too. Microsoft has Bluetooth devs and audio devs.

You just don't get to see inside the sausage factory in that case.

Apple and Microsoft also have a hell of a lot more resources to fund paid developers on these things, and the makers of the hardware generally don't care whether it works on Linux and so don't work to fix it if it doesn't, leaving it all up to the maintainers of the stacks to figure out the problems and fix them.

Workstation, server, IoT? No worries. Fedora 36 is out – all 13 editions of it


Re: Ready for your grandmother

Yet again, it has nothing to do with IBM. I dunno why people are so keen to believe IBM, having bought an extremely large and successful business, would immediately start interfering with every aspect of how it works, but it really isn't doing that. IBM hasn't said squat about what filesystem it wants RH to use. I doubt IBM cares very much, so long as the money keeps coming in.

I obviously can't say too much about internal matters, but, AIUI it boils down to this: RH has a storage team that makes these kinds of decisions, and after quite a lot of research and discussion, they made the call to pursue the Stratis strategy rather than going with btrfs. I'm not part of that team or much of an expert on filesystems, so I can't really judge whether that was the right call or not for myself, but that's the call they made. AFAIK it was purely a call made on their interpretation of the technical merits.

It'd be odd to frame it as an "admission" that btrfs "has failed", since RH doesn't control btrfs' development or anything. To RH it's a technology we could choose to use or not use, like hundreds of other F/OSS technologies. And of course decisions aren't set in stone; of course in future RH might decide to change course, like we have in the past on other similar technologies.



"The Linux community wants widespread/mainstream adoption of Linux as a desktop OS - in use by general Tom, Dick & Henrietta users for whom a computer is just a means of doing their daily job, hobbying, etc - they don't care about the OS."

I'd say your premise there is a bit shaky. Especially these days. The desktop wars were kind of a 90s/00s thing. It feels a bit weird these days when desktop/laptop computers are less of a big deal overall anyway.

I suspect quite a large chunk of "the Linux community" doesn't actually give much of a toss what anybody else runs on their computers, they just like Linux. I don't really care what Tom, Dick or Henrietta is doing any more.

Things are similar for the big Linux companies. None of them make money from desktop users. All of them tried to before, and found it doesn't work. The other commercial Linux companies of the past also tried it, and went broke. The money turns out to be in the enterprise, and not the enterprise desktop. RH, Canonical and SUSE don't really care what Joe Public runs on their desktop or laptop computers, if they even have one any more.

"Has anyone considered that if the community put all of that effort into making a distro which is as good as it possibly can be, rather than effectively duplicating effort across multiple distros, the less tech-savvy people might embrace LInux?"

Only about ten zillion internet commenters. But, again, there are some problems here. "Too many cooks spoil the broth" is a real thing.

In theory, you certainly could make one bigger better distro by combining all the work that goes into several different ones. But there are some devils in the details. The work that would really help out the bigger distros is probably not the work that people who maintain smaller distros want to do. It's often kind of thankless behind-the-scenes stuff like rote testing of updates or maintaining lesser-used packages or updating documentation or doing user support. You're not necessarily going to get anywhere by saying to people working on Mint "hey, how about you stop doing the fun stuff on Mint and start updating documentation for Ubuntu instead?", and so on.

Basically, people aren't just pawns to be moved around a chess board at will, especially when a lot of them aren't getting paid.

At the level of people getting paid, the major players paying people to work on Linux - Red Hat, Canonical, SUSE - are essentially competitors and have zero motivation to combine into a super-distro, or make a serious move on the consumer desktop space. It's just not going to happen.


Re: Ready for your grandmother

I think you must've misunderstood something - I don't think we ever had May 24th down as a release date for 36. It was actually delayed a couple of times, the original "well...we *might* hit it..." early target date was 2022-04-19 , and the more serious "we really *want* to hit it" date was 2022-04-26. We missed that by two weeks and released on 2022-05-10 instead. See https://fedorapeople.org/groups/schedule/f-36/f-36-key-tasks.html .

Glad you're enjoying it, though!


Well, we definitely do not advise anyone to run it in production, and stuff can still go wrong. But in truth we (and I personally - as noted above, I work for RH on Fedora as the QA team lead) have been working quite hard for a few years to make Rawhide much less scary than it used to be, and most of the time it is. I run it on my personal system sometimes (my policy is to run "the next version", so after a stable release I upgrade to Rawhide, then when the next release branches off from Rawhide I follow that branch until release, then go back to Rawhide) and it's fine most of the time. We have quite comprehensive automated testing of each Rawhide compose, which you can refer to to see if anything blew up in the latest compose, and avoid upgrading if it did - see https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=Rawhide&build=Fedora-Rawhide-20220512.n.0&groupid=1 for today's test results for instance.

My latest effort is to enable automated testing not just for the daily *composes* of Rawhide, but for individual Rawhide updates, as we already do for stable and branched release updates. Ultimately I'd like to have those updates gated on the testing, again as they are for stable and branched releases, so updates that break core functionality would never make it to a compose. The Rawhide update tests have been running in the test instance of the QA system for the last couple of weeks, and have already helped us identify the package responsible for one critical bug - https://bugzilla.redhat.com/show_bug.cgi?id=2082359 - and find and fix a subtle bug in a recent systemd release candidate - https://bugzilla.redhat.com/show_bug.cgi?id=2083374 . Once we get to the stage of gating Rawhide updates on these tests, those issues would never make it to a Rawhide user.

So, Rawhide is still officially scary and you really shouldn't use it on a critical system, but in practice it's already less explode-y than it was a few years ago, and we're actively working to make it ever less explode-y over time!


Re: Ready for your grandmother

"Multiple ways to confuse people. Fedora has its own branch of YALD."

We (I work for RH on Fedora, as the Fedora QA team lead) do try hard to avoid this, which is why if you go to the download page - https://getfedora.org/ - you are directed prominently to just the three core Editions, for very distinct purposes: Workstation, Server, and IoT. Other flavors of Fedora are intentionally de-emphasized a bit and shown lower down.

On btrfs, we do actually get some benefit from it; since Fedora 34, zstd-based transparent compression has been enabled by default, which usually saves quite a lot of space. The folks pushing the incorporation of btrfs in Fedora also have grander plans down the line, we're just moving carefully on it to ensure folks don't get bitten by the vaunted "btrfs stability problems".

Stratis is not actually a next-gen filesystem effort, that's slightly inaccurate (though to be fair RH docs do refer to "Stratis filesystems", which makes this a bit unclear). Stratis is actually a management layer on top of a bunch of pre-existing technology - LVM, device-mapper, and the xfs filesystem. It aims to provide next-gen filesystem-like functionality by marshaling all those technologies together. See https://stratis-storage.github.io/ for more on this.

Microsoft points at Linux and shouts: Look, look! Privilege-escalation flaws here, too!



Note, this flaw is in a third-party addon for systemd-networkd (called networkd-dispatcher) which most distributions don't even package, let alone install by default. It seems like it's installed by default on Ubuntu and Mint - at least in some configurations - for some reason, and Mint also has some quirks which might make exploiting it easier (from what I read it's likely quite hard to exploit on Ubuntu even if it's technically vulnerable).


It's a third-party addon that sits on top of systemd-networkd, a network manager. It had a vulnerability because its author didn't think hard enough about the possibility of malicious strings being sent to it via DBus. None of this has anything to do with systemd, per se. The same vulnerability could equally well have existed in the NetworkManager equivalent, NetworkManager-dispatcher, which does the same thing on top of NetworkManager; it just happens not to, because it wasn't written the same way.
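For flavor, here's the general shape of that bug class - trusting an attacker-influenced string when building a filesystem path. This is a made-up Python sketch, not the actual networkd-dispatcher code; the directory layout and function names are hypothetical:

```python
import os
import re

SCRIPT_ROOT = "/etc/networkd-dispatcher"  # hypothetical script layout

def script_dir_unsafe(state):
    # Naive version: trusts the DBus-supplied state string. A "state"
    # like "../../tmp/evil" walks right out of the script root.
    return os.path.join(SCRIPT_ROOT, state + ".d")

def script_dir_safe(state):
    # Validate against a strict allow-list before touching the path.
    if not re.fullmatch(r"[a-z-]+", state):
        raise ValueError(f"rejecting suspicious state: {state!r}")
    return os.path.join(SCRIPT_ROOT, state + ".d")

evil = "../../tmp/evil"
print(os.path.normpath(script_dir_unsafe(evil)))  # /tmp/evil.d
```

The fix is boring input validation at the trust boundary, which is exactly why the same class of bug could appear in any dispatcher-style tool regardless of which network manager sits underneath.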

Fedora starts to simplify Linux graphics handling


Keep your pants on

Thanks for the write-up, Liam, but if I could just ask for one thing for Christmas, it would be for people to understand the difference between a *proposed* Change and an *accepted* Change. I do recognize this isn't super obvious, and I'm going to propose we clarify this in six foot tall letters at the top of the page (only a minor exaggeration).

The "remove fbdev" Change is an *accepted* Change for Fedora 36, and it's been implemented already, and it's definitely going to be in (or, you know...*not* in, depending on how you look at it?) Fedora 36, coming soon.

The Legacy Xorg Driver Removal change (for removing the vesa and fbdev X drivers) and DeprecateLegacyBIOS change (for...deprecating BIOS) are *proposed* Changes. They aren't accepted. That means they might not happen. Proposed Changes get posted to the public devel@ list for discussion, and ones like these get a lot of it. Then, eventually, FESCo votes on whether to accept them or not. Changes *do* get rejected. Just because a Change has been proposed doesn't mean it's going to happen. Changes that involve not supporting hardware any more are particularly likely to get rejected. So far the response to DeprecateLegacyBIOS has a strong "bit too soon" flavor to it, and I would not be at all surprised if that one got rejected for now at least.

It'll happen *sometime*, most likely. BIOS ain't gonna live forever. But it may well not happen in Fedora 37, or at any definite time soon.

The Souls noob's guide to Elden Ring


Re: Fun to watch, not to play

I accidentally found a handy trick for hammer guy...grapple yourself onto a doorframe and he'll just stand under you, swinging and missing, and you can kill him at your leisure.

I didn't like that fight because the game gave me an autosave point right when the boss spawned, and I happened to have a terrible choice of weapons. Trying to pick up new ones while he's splatting you is no fun. But then I accidentally wound up on the doorframe while running away from him and that solved that problem...

We take Asahi Linux alpha for a spin on an M1 Mac Mini


Re: Deleting macOS

You missed the "high performance, charge lasts 14 hours" bit. That's quite important.

I'm no Apple fan, but by all accounts this is extremely good silicon, much better for non-gaming tasks than any x86 hardware. I'd be super interested in running Linux on it if the HW support gets good enough.

FIDO Alliance says it has finally killed the password



""....A smartphone is something that end-users typically already have...."

Errrr, nope, not here, and I think they had better do some wider research - article in the news today that usage of dumb phones is on the increase; more than doubled in the last two years whilst use of 'smart' phones has dropped back. It is reported that 1 in 10 mobile phone users in the UK are using dumb phones (I am not alone!)"

Er. So, 9 in 10 mobile users *are* using smart phones? So, in other words, you could say..."a smartphone is something that end-users typically already have"? Since, you know, "typically" is not a synonym for "always".


This is kind of a subtle point. For 'true' two factor security, yes, you shouldn't be able to log in using only a single device. Your phone can be a second factor when logging in with something else, but it can't be a second factor when logging in on the phone. You must have a, uh, second second factor for that.

And FIDO knows this, and has *already* produced standards for that scenario. You can buy a Yubikey with NFC support for exactly this purpose, in fact.

The problem is, we live in the real world, and - as the white paper points out - we have several years of evidence to prove that people just aren't doing this. Even for highly-sensitive things like bank accounts, in most of the world, true 2FA is not being implemented. Banks aren't sending out hardware tokens to their customers. A *minority* of banks in a *minority* of places are doing this, but in most places, banks have only just got around to doing SMS-based "2FA" - which, as you point out, isn't 2FA at all if you're logging in on the phone to which the text is being sent.

The system described in the white paper doesn't meet the standards of "true" 2FA, but that's kind of intended: the idea is that we acknowledge that in most cases, "true" 2FA is just sufficiently cumbersome that it's not going to be used. We need an alternative that's as secure as possible, but lets you log in on a smartphone without needing any other hardware.

The answer proposed is basically "secure storage on the phone, usually with biometric authentication". PIN authentication is an alternative, but since that has most of the same drawbacks as passwords, isn't preferred. In general, when you go to log in to your bank, you'll have to pass fingerprint ID or face recognition on the phone; in the background this allows a token to be sent from the phone's secure store to the bank. It's not 2FA, but - as long as the biometric implementation is solid, and the secure store isn't hacked - means not just any Marlon Rando who finds your phone can log in to your account. It's not perfect, and the white paper doesn't pretend it's perfect; it specifically says that the existing FIDO standards for true 2FA are stronger and should be used where both parties are willing to deal with the inconvenience.

This is something that's *already happening* in a not-so-standardized way, BTW. My bank's phone app requires authentication each time it's run to access anything important; optionally this can be via phone biometrics rather than entering the account password.


Re: Who are these people?

Yes, but *only the organization knows the association between the token and the identity*. And they already *know* your identity. So the token isn't causing any net increase in loss of privacy. The token is not a problem. If the organization wasn't using this form of authentication at all, they would still have all the same personal information about you.

It's possible to use this kind of authentication system to authenticate to an account with ExtremelyAnonymousCo which has absolutely no personal information about you at all, or to an account with ExtremelyInvasiveCo which knows your name, phone number, address, social insurance number, bank account details, and inseam measurements. The token system doesn't care, and does not know any of those details, and does not provide a mechanism by which any of those details can be leaked from ExtremelyInvasiveCo to anybody else. It's your lookout whether you sign up for an account with ExtremelyInvasiveCo or not, and it doesn't have anything to do with this authentication system.


Re: What's the fallback mechanism?

I'm not saying the scheme is perfect. But I *am* saying it's very hard to come up with a system that provides a decent level of security and convenience for most people, and it's very *easy* for commentards to laze around on comment threads poking holes.

The system we have now is *definitely not it*. For most people, passwords do not provide sufficient security, because they use bad passwords, and reuse them; and they are not convenient, because typing in passwords sucks, especially on phones. Current widely-implemented 2FA schemes add substantially to the inconvenience, while still having significant security issues (especially SMS-based 2FA). Something better is desperately needed. At least FIDO is *trying*, and on the whole, there's a lot of good ideas in the things it has come up with so far.


Re: What's the fallback mechanism?

I think you (and several others) are missing the bit where you can't just use the tokens from any phone you get your hands on. You have to pass the authentication - biometric or PIN - to access them. Unless someone finds a flaw in the secure storage, of course, which is one of the bigger potential weaknesses in the overall system. But then, no system is perfect.


Re: Who are these people?

Nothing FIDO maintains "identifies" anyone. The standards are all about the interchange of tokens. Very simply put, you create an account with SomeOrg and agree with SomeOrg that "this magic token" is associated with that account. The token itself doesn't identify you, or SomeOrg. It's just a thing you've both agreed on. All FIDO standards cover is the process of generating and exchanging those tokens.

These standards also don't leak information between organizations: even if you use webauthn with the same authenticator to sign in to both SomeOrg and SomeOtherOrg, neither org gets any information about you that is held by the other.

If you're worried about privacy, FIDO-defined standards are a better option than giving SomeOrg your phone number (a very sensitive identifier) so they can text you an authentication code.
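To illustrate the no-cross-site-linkage property described above, here's a rough sketch (this is *not* the actual FIDO/CTAP algorithm, and all the names are made up for illustration): an authenticator holds one secret that never leaves the device and derives an independent credential per relying party, so no two sites can correlate the tokens they see.

```python
import hashlib
import hmac


def derive_credential(device_secret: bytes, relying_party: str) -> str:
    """Derive a per-site token from the device secret and the site's ID.

    Hypothetical sketch: each site sees only its own token, which reveals
    nothing about the secret or about tokens issued to other sites.
    """
    return hmac.new(device_secret, relying_party.encode(), hashlib.sha256).hexdigest()


secret = b"this-never-leaves-the-device"

token_a = derive_credential(secret, "someorg.example")
token_b = derive_credential(secret, "someotherorg.example")

# SomeOrg and SomeOtherOrg each store their own token; the two tokens share
# nothing either site could use to link your accounts.
assert token_a != token_b
```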

I do wish people would do like five minutes of research before jumping into the comments with their tin foil hats on. FIDO's work isn't perfect, but it's a hell of a lot better than the alternative, which would be that Google or Apple just make this stuff up themselves.


Re: What's the fallback mechanism?

Upvoted for the kangaroo joke.

As to the question...the answer seems to be a bit complex. In the case where you replace an Apple phone with another Apple phone (or a Google phone with another Google phone, or a Samsung phone with another Samsung phone...), the answer is mostly "it's up to Google/Apple/Samsung to determine you're the same person and sync the tokens to the new device". How they do this is also left up to them. The paper implies this is already something they have processes for; I don't know if that's true.

In the case where you switch vendors, the answer seems to be "you keep the old device and sync the tokens from it to the new device via Bluetooth". This does seem to leave a rather large gap where the old device is lost or broken and you don't want to get a new device from the same company.

Another answer is sort of implied but not stated: keep an old device with the tokens on it around as a backup. Not everyone has two 'devices', of course, but lots and lots of people do. If this setup gets any momentum, as time goes on, it'll be more and more likely that you have an old device you can keep as a backup to seed a new device from if you lose or break your 'active' device. That's a lot of usable phones lying around in drawers just as identity backups when they could've been sold second hand and reused, though, I guess.

There's also a wrinkle to both answers, which is "if the app/site/whatever that's trying to authenticate the user doesn't want to just trust the process of syncing the tokens, it can choose not to: it will be told when the user is trying to authenticate from a device they haven't authenticated from before, and can then add its own steps to verify the user's identity, whatever they may be".
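That last wrinkle amounts to a simple server-side policy decision. A minimal sketch, with entirely hypothetical function and variable names, of how a site might implement it:

```python
def authenticate(user: str, device_id: str, known_devices: set) -> str:
    """Hypothetical relying-party policy: accept synced tokens, but demand
    extra verification the first time a given device is seen."""
    if device_id not in known_devices:
        # Token synced to a device we haven't seen before: the site picks
        # its own extra step (email link, security questions, support call...).
        return "require_additional_verification"
    return "authenticated"


known = {"old-phone"}
assert authenticate("alice", "old-phone", known) == "authenticated"
assert authenticate("alice", "new-phone", known) == "require_additional_verification"
```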


Re: The way I read this...

You're understanding it wrong, because this is not about protecting you, Apparent International Super Spy, from an adversary who would go to the extreme lengths of kidnapping and torturing you. It's about protecting J. Random Person In The Street from using the same bad password on all 1500 accounts they have, and getting phished.

This is a solid effort to try and improve security and convenience in the vast majority of real-world applications for real-world people. Who do *not* use password managers, or safe passwords, who hate passwords, who constantly forget them, and who do not have adversaries who would expend huge amounts of resources to compromise them. For such a person, "we'll authenticate you using a system where your phone checks your biometrics then sends out a safely-stored key" is both more secure and more convenient than "we'll authenticate you with that terrible password you use for both your bank account and fifty shoddily-coded PHP web forums, plus *maybe* a 2FA code we'll send you via a protocol known to be insecure and easily redirected, if you're lucky".

If you actually are an International Super Spy, this is not the standard for you. There are others (including ones set by FIDO) for your use case. By all means use them.

How not to attract a WSL (or any) engineer


Re: This process is widespread at Canonical

I mean, to be fair, that seems to be a fairly high-level position and most of those questions strike me as reasonable and well-thought-out ones for somebody interviewing for that position.

The "Education" ones, though, are just weird. I don't understand what possible use any of those answers would be in evaluating somebody for such a position, given that you'd have all their answers to the much more useful questions from later sections too.

Microsoft proposes type syntax for JavaScript


"The real fun comes when one of these programmers is tasked to write some code that does something that there's no library for -- they can put a graphical front end on something but they're absolutely clueless about what that 'something' is."

Well, sure, but all you're really saying here is, "get someone who knows how to do X to do X". Putting a graphical frontend on something is a useful skillset. Sometimes you need a graphical frontend put on something. If that's what you need, someone who knows how to do that is more valuable to you than somebody who can write a really efficient sort algorithm.

Sometimes you need the sort algorithm person, sometimes you need the person who can write a graphical frontend. The field is much too large these days for anybody to be great at all of it.

Fedora inches closer to dropping x86-32 support



There almost certainly aren't, because - as the article says - it hasn't been possible to do this for several releases. We stopped shipping i686 images years ago. We still build packages for i686 in order to allow 32-bit only third-party applications to run on 64-bit installs. This proposal is essentially about trying to strip that set of packages down to only the ones that are necessary for widely-popular 32-bit only third-party apps (like wine).


Not decided yet

"Following discussion on the mailing list, the Fedora team is taking another small step away from x86-32 support, with developers urged to stop building i686 versions of "leaf packages" – in other words, packages that nothing else depends upon."

This is not quite right, due to a misunderstanding of the Change process. Fedora is a very open project, so major proposals like this have to be publicly proposed and discussed. That's the point this Change is at. It has been proposed, and now is under discussion. That's why the mailing list thread title has "proposal" in it. It has not yet been approved by FESCo (which is the body that approves or rejects Changes). Nobody is actually being "urged" to do this as of yet; they only would be if the Change is approved.

IBM investors staged 2021 revolt over exec pay


Re: Double standards

Well, this particular case tends to suggest that IBM believed Mr. Whitehurst was extremely competent. If you boil out all the weasel words it comes down to "we were paying him a gigantic sum of money to sign a non-compete". i.e., IBM was just *that* terrified of him going somewhere else and competing with them.

If IBM thought he was rubbish, they wouldn't be bothered about backing up the money truck to get him to sign a non-compete agreement...

Deere & Co won't give out software and data needed for repairs, watchdog told


Re: Seriously…

Well no, but the definition of "news" is not "something that surprises you". So, what's the point you were trying to make?

Comparing the descendants of Mandrake and Mandriva Linux


Re: Mandrake Demonstrated How to Implode a Distribution by Corporatizing

"All was well in the Mandrake world - until it envisioned becoming a money making corporate enterprise. That single act, coupled with the inability (or incompetence) to balance a fiercely free and open-source minded community and its corporate goals led Mandrake into a death spiral from which it never recovered."

You're misremembering. Mandrake was always (trying to be) "a money making corporate enterprise", from the beginning. It was a for-profit company and its only business was making a Linux distribution. If it didn't make any money it would've failed much faster.

What caused all the trouble and ultimately killed Mandr* was, in three words: Mark bloody Shuttleworth. Before Ubuntu came along, Mandrake was the premier "user friendly" Linux, and in commercial terms looked pretty generous: the other major commercial distro was SUSE, and our (I used to be the community manager, and did some packaging work too) terms were rather more generous. There was a free edition of Mandrake but no free SUSE at the time, for instance. The major free options were Debian, which is of course great but not a new user friendly choice (even less so at the time), and Fedora, which we still had solid competitive advantages over at the time - better package manager, nicer installer, better proprietary driver support (which was an anti-feature for Fedora but something Mandrake users appreciated).

Then Uncle Bleeding Moneybags showed up with his business model of "let's do everything Mandrake does, only I'll pay for everything so it'll be free". How exactly is a business that doesn't have a multimillionaire sugar daddy supposed to compete? With bloody difficulty, that's how.

That's what led to the Club (especially the earlier, more exclusive incarnation) and some other desperate money making schemes you probably remember. Nobody was willing to pay us sixty bucks for a distro in a box any more when Shuttleworth would mail you one for free (this was still in the days when many people couldn't easily just download ISOs...), so we had to try and get creative.

Didn't do a blind bit of good in the end, of course. We never had a shot after Ubuntu showed up. The end was dragged out by a few rounds of financing from people we managed to convince, god, I have no idea how, and some EU financing, but it never covered losses.

Am I still bitter? Yup! Yup I am.

Interpol: Policing model needs to change with cybercrime


That Interpol?

This would be the Interpol that's run by a murderer, right?

Securing open-source code isn't going to be cheap


Re: It's not an open source problem - you forgot only

I mean, it's not really *that* simple. Random Developer X "just writing something" is pretty likely to "just write something" with security vulnerabilities.

Shared code on the whole, in the long run, when done properly, tends to *decrease* security issues. If we didn't have widely-used SSL libraries we'd certainly have more code with security vulnerabilities due to bad home-grown SSL implementations.

If you're thinking about things like leftpad, well, there's a few other factors in play there, like Javascript not having a standard library. Languages with sensible standard libraries are less subject to the problem of projects either constantly needing to reinvent the wheel, or needing to pull in tons of dependencies for fairly trivial operations.

Cringe: Salesforce latest megacorp to jump on non-fungible tokens bandwagon


cargo culting

You know what, I think I figured out what's going on with NFTs.

It's a form of cargo culting.

Some people noticed that a bunch of Highly Active twitter types who had got lucky and made money on crypto also owned stupid monkey NFTs. So, the thinking goes, the way to be a rich crypto bro is to own a bunch of stupid monkey NFTs.

And...that's about it. You set up a collection of NFTs because owning some themed NFT is the signifier of a cool rich crypto person. You buy them because...see above.

I don't think any of them even *really* expect their weird pixel art NFT to be worth any money in the long term. It's just that owning a weird pixel art NFT is the thing you do to be a part of this thing you want to be a part of...

Australian court finds Facebook 'divorced from reality' as it tried to define doing business down under



Facebook lawyer: "With respect, your honour, I think you'll find it's reality that's divorced from Facebook".

Google's DeepMind says its AI coding bot is 'competitive' with humans


Re: Googled the answer?

God, I hope not, cos if it knows how to do that then it really *is* coming for my job...

Machine learning the hard way: IBM Watson's fatal misdiagnosis


questions? hah.

"ironic, since the game of Jeopardy at which it excelled is all about deducing questions from data"

Nah, Jeopardy is all about answering questions, but with things stupidly and tortuously phrased to fit into a gimmick which stopped being cute about forty years ago. Good lord, I wish they'd give up on it already.

Toaster-friendly alternative web protocol Gemini attracts criticism for becoming exclusive clique


Re: simple websites

Heh, true. Raw link, if you want: https://pagure.io/fedora_nightlies/raw/main/f/fedora_nightlies.py

