The Register Home Page

* Posts by bazza

3992 publicly visible posts • joined 23 Apr 2008

Boeing deliveries soar past Airbus for the first time in years, but this is no time to unbuckle your seat belt

bazza Silver badge

Re: If it's Boeing

That's how 777s originally were. A 9-across 777 was a nice plane to be on. The 787 is similar: designed for 8 across, flown at 9 across. When the 787 was first launched it was pitched as the big step in comfort that passengers had been waiting for. Early journalist flights resulted in reports of amazing quietness and comfort. The rumour is that the demo aircraft had all the optional sound deadening fitted, while production aircraft have been bought without it (in economy) and mostly flown at 9 across. Why have airlines squeezed the passengers in? It's the usual story: money. Some airlines still operate the 777 at 9 across, but not many. Most 787 operators with 8-across seating have switched to 9.

The trick Airbus seems to be able to pull off is designing planes such that the operators don't need to squeeze in another seat per row. Here are the comparisons in external fuselage width (as best as I can find):

A330neo: 18 feet 6 inches

787: + 5 inches

A350: + 7 inches

777: + 9 inches

A380 (lower deck): + 14 inches

It's easy to see why the A380 at 10 across is super comfortable, whilst the 777 (some 5 inches narrower, per the figures above) is not. It's interesting that these fuselage widths all sit within a few inches of each other, with the Airbus ones landing on the right widths and the Boeing ones in between. Even the A320 is about 6 inches wider than a 737, and feels a lot nicer for it. On such fine margins are the fortunes of two enormous companies balanced.

bazza Silver badge

Re: If it's Boeing

It's certainly a slogan that's come back to bite them where it hurts...

But to really make a risk assessment one has to look at what's actually going on. And, after the door blow out, it seems that a large slice of compulsory Quality Control effort has been injected by the FAA themselves (no more self-signing...).

777X

The 777X in particular has been through the wringer, and is still going through it; there's a reasonably good chance that it will turn out to be a safe and reliable aircraft. That would be a remarkable turn-around. Back in the bad old days you'd see ugly court cases where the company was being sued by ex-employees who'd raised concerns only to be fired for their trouble. It's gone from a programme widely rumoured to be riddled with questionable design decisions to one that's had a thorough going-over by a lot of non-Boeing eyes, and a lot of changes.

There is one thing that troubles me; it's a bit like walking into a software project: you can pretty quickly tell what sort of job the team has been doing. Sometimes it's obviously been done well, with quality at the heart of the project; everyone's been doing a good job, and you know the software is going to meet its design goals.

Sometimes, you can tell straight away that the project is an absolute stinker, with a lot of question marks over many, many parts. You know that this one's going to succeed only with an awful lot of review, extra test and a ton of debug. Quality is going to have to be hammered into it, and there's a risk that not all bad bits will be found.

Guess which one the 777X looks like to me! I was especially alarmed by the thrust link failure, because they found it by luck (that aircraft could easily have crashed on the next take-off, and the fault was spotted only because some unrelated servicing happened to put a sharp-eyed worker in the right place). They weren't looking out for it. They'd gone past the stage where they thought there were no more surprises and everything was back on course (but it wasn't).

FAA, US Gov, and Politics

Certainly, the future of Boeing rests very much in the hands of the FAA. They're deeply involved in the prevailing QA/QC work, and that's likely making a huge difference. This is not necessarily a good thing overall for Boeing.

The 777X is only worth anything at all if it's allowed to fly. The FAA can say "yes, it can fly over the USA". But they can't say that for anywhere else on earth. In Europe it's EASA that says "it can fly over Europe", the CAA in the UK, the CAAC in China, etc. Those regulators give permission largely on the basis of trust; "if the FAA say it's good enough, then that's good enough for me" is how it's supposed to go.

The trouble is that the FAA's reputation took a massive knock with the MAX crashes. It was the rest of the world that grounded the MAX, not the FAA. The reason was that the FAA's assurances on the MAX simply did not have any credibility. That's not because the people in the FAA were making it up out of malice or incompetence; it's because the FAA had been denuded through reduced budgets under successive administrations for decades, to the point where the coverage the agency's staff could provide had become ineffectual. This was a ghastly example of ignorant people saying "Why do we need to do this? Nothing ever goes wrong".

The issue for overseas regulators is that, to trust the FAA, they have to know what sort of condition the Agency is in: whether its foundations, funding, mission, and freedom to act are secure. Those all start to become very political judgements. And if an overseas regulator were to decide that the FAA isn't to be trusted, and therefore that new Boeings aren't to be trusted, the consequences would become big geopolitical problems within about 30 minutes. And if they ever had to reach that conclusion at the moment, the Orange Fake Jesus gets annoyed, leading to who knows what.

It doesn't even have to get to the point where the FAA is judged to be unfit. If the US politicians / administration are judged to be acting irrationally over the funding and independence of the FAA, it becomes pretty hard for overseas regulators to continue to have faith in it. And the problem right now is that there are already rumblings of political interference in the FAA. It really wasn't helped by the likes of Elon Musk clearly using his former position as a way of (as he would see it) reining in the FAA so that he could do whatever he likes with SpaceX.

If Boeing were building the 777X under the auspices of EASA, there'd be no problem; that Agency has a good reputation and it's mostly independent of meddling politicians. And yes, it's perfectly possible for US manufacturing to come under an overseas regulator. The aircraft would still need FAA permission to take off and overfly US territory to leave the USA, but otherwise it'd be good to go wherever the overseas regulator's word is accepted. It's a lot more expensive for Boeing to be regulated that way (they'd be paying for EASA staff to travel to the USA a lot).

There's also the hybrid approach; this happened with the 747-400: the CAA in the UK was involved in the FAA's effort somehow, and managed to spot a structural problem that everyone in the USA had missed (related to the rearward extension of the hump, I think).

TLDR

Anyway, that's a very long-winded way of saying that if you don't think US politics is delivering an economic / regulatory environment in which manufacturing tends towards "quality" instead of "profit", then don't fly on one of their aeroplanes. Many have already made their decision on that front.

Personally I think the 777X will turn out to be safe, but at the end of the day it will have 10-across seating in a cabin designed for 9 across. Boeing say they've tinkered with the wall thickness to try and liberate a few more interior inches, but it remains to be seen whether this has made any noticeable difference or introduced some other passenger discomfort. In contrast, all of Airbus's planes (and especially the A380) give a high-quality ride even in economy class; guess what I choose to fly on.

bazza Silver badge

Re: How many of those orders will get dropped?

The article is about deliveries, not orders. By definition, they cannot be dropped because they've actually happened!

If anything, despite all that's going on in the world, the recipients are likely grateful for them, as they'd be replacing older, less fuel-efficient aircraft.

Ex-Microsoft engineer believes Azure problems stem from talent exodus

bazza Silver badge

Cloud is….

…Someone else’s computer, being run by someone else’s staff. If those staff aren’t incentivised in a manner compatible with one’s own business goals, It’s The Wrong Cloud (tm).

And given that most of the major cloud providers seem to be pretty bad wrt staff and expertise retention, they’re all hazardous.

Microsoft veteran says some 'broken by update' PCs were already doomed

bazza Silver badge

Re: Idiots never reboot

Agreed, though (at least for an internal combustion engine, and especially diesels) they benefit from being used regularly and driven hard every now and then, to get good and hot. Left off, all manner of seals dry out, fluids go gooey, etc. A car just left sitting will develop problems without any apparent change in entropy.

Turns out that car engineers are very good at designing cars for a specific lifecycle / workload, to a price!

bazza Silver badge

Re: Idiots never reboot

Er, I don’t think you know much about electronics or power control in modern computers.

For a start, it’s perfectly normal for a computer or CPU to turn bits of itself off and on just in normal operations without the user ever being aware. And if one is to consider wear and tear on parts, you deal with dopant migration across silicon semiconductor junctions by turning them off as much as possible.

What tends to fail is capacitors and fans. Spinning fans less lengthens their life.

Liquid electrolyte capacitors are a bit odd; they prefer being kept powered up, preferably somewhere close to their operating voltage. Though good modern ones are far better than the old ones and don’t dry out anything like as quickly. It used to be a common design mistake to fit, say, 63V caps then operate at 12V.

I don’t think anyone is using thermionic valves in modern computers.

HDDs are about the only thing I know of that - if old with many operating hours on the clock - can be a bit grumpy about spinning up from cold. I have on occasion revived such drives with a light tap from a hammer to get the bearings unstuck. But if your drive is in that condition it is EOL anyway.

We know what day it is but these Raspberry Pi price hikes are no joke

bazza Silver badge

As a hard-nosed, old-fashioned, don't-like-to-waste-resources programmer, I'm wondering how all those folk who've gotten lazy are getting on. Bloaty languages, inefficient data representations, large runtime environment memory requirements; they're all looking costly at the moment.

There once was an idea that it's OK to have containers replicating most of the same dependencies, both in storage and in RAM. That's now looking a bit daft. There's a reason OS developers made sure that things like shared libraries were shared, both in RAM and on disk...

Probably folk could save a ton of money by looking at some glibc tweaks. The allocator in it, like many these days, likes to ask for and hang on to more memory than the program has actually asked for; that's good for ultimate speed. However, there may be some merit these days in reining such allocator behaviour in. If one did that for all processes in an OS, that'd probably automatically cut RAM consumption for everything (even Java, C#, and other higher-level languages).

I know that some versions of VmWare can de-dupe RAM. Neat trick. Do any other hypervisors do the same thing, or is that a Broadcom exclusive?

bazza Silver badge

Re: The inversion of capitalism

It is pretty crazy.

It may balance out in the end; there'd be a lot of bankrupt companies and second-hand hardware fire sales, and new prices would become cheap again. We kinda saw that when everyone stopped using GPUs for Bitcoin mining...

Contracts are in C++26 despite disagreement over their value

bazza Silver badge

Re: Making it Work for Real

>The only options are "ignore", "diagnostic", std::terminate(), std::abort() or "instant kill".

>That means a contract failure is either ignored, ends the program with no chance of recovery, or ends the program with no chance of even logging that it happened.

That's unhelpful for programs that have to achieve some sort of runtime resiliency to error. Not everything can run in a disposable container...

Google is to journalism what Vikings were to monks. Now their man will run the BBC

bazza Silver badge

Re: What's really sad is...

They also had the scoop on the MMR vaccine being a cause of autism. They pushed that one with excessive enthusiasm. You could say they did it to death…

The BBC never actually properly apologised for that ridiculous episode, defending themselves as having to be balanced. Prof Brian Cox laid into the Beeb following that, rather awkwardly pointing out that it was not balanced to give the same airtime to a lunatic Andrew Wakefield as to someone representing the huge mass of peer reviewed scientific literature showing it to be safe.

Wakefield went on to become Trump’s adviser on vaccines…

Nanny state discovers Linux, demands it check kids' IDs before booting

bazza Silver badge

Important Edge Cases Forgotten (as usual)

There's an awful lot of operating system installations that are not on the Internet. If a jurisdiction passes a law requiring that all OSes check in with some sort of central age verification service, what are non-Internet installations going to do?

This kind of law risks accidentally making it illegal to not wire up a computer to the Internet. That would be a bad thing.

The kind of folk who don't wire up computers to the Internet does include some quite important use cases: secure government networks, embedded OSes (e.g. in lab or medical equipment), military systems, rural users who have no connection option in the first place and don't like Elon Musk, etc. It'd be kinda ridiculous if a nation's army woke up one morning to an OS update that then demanded to know who you are and what your age is and to please keep your tank connected to the Internet so that it can check.

Sensible law makers would realise this and come up with some sort of means for exceptions. But I can't see how such exceptions could be achieved without becoming very, very messy.

Whitehall seeks lone C++ coder to keep airport passenger model flying

bazza Silver badge

It does sound like C++/CLI.

Which - actually - is moderately OK. You can just treat it as C++. If you feed it plain C++ source code, you get a plain C++ program. It's not really as bad as people make out. All the C++ stuff is normal. The .NET syntax is fairly minimal, with things like gcnew and references. The .NET library is the same as in any other .NET language.

The bit I like is that it's far, far easier to call native Win32 stuff from C++/CLI than it is from C# with P/Invoke (or whatever). For example, I've written a device viewer (as in a read-only Device Manager, but one that tells you a lot more about devices than Device Manager does, partly by parsing some of the weird binary stuff into what it actually means). I wrote this in C#, but to actually get the device information one has to go down to Win32, and some pretty dark corners of it at that. So out comes C++/CLI, and an interface DLL written with very little difficulty, really. I wanted to have C# in there somewhere because that makes more sense in WPF-land.

The actual original motivation was to allow a C# program to accurately determine which COM port was which; one deals with COM ports by their name (COM1, COM2, etc), but you get some pretty random numbers for these when you start plugging in USB RS232 interfaces. By generating a tree structure in C# that represented the machine hardware, it was then trivial to 1) find COM devices by their specific connection (which remained unchanged no matter what else was plugged in or not), 2) find the corresponding COM port number each had been assigned (because that was part of the data given for relevant devices), and 3) tell the user that the USB hardware is not plugged in and to please plug it in before proceeding.

Maybe I should apply...

Ig Nobel Prize flees US for Switzerland after 35 years over safety concerns

bazza Silver badge

The little-girl speech-limiting strategy worked absolutely fine. One winner found that she could not be bribed even with hard cash. The moment was hilarious! A true giant of science!

Brit dual nationals grounded by border digitization drive

bazza Silver badge

Re: unsure whether she would be able to return to the UK

They do ask. The passport swaperoo doesn't work in Japan anymore. I know of Japanese citizens who have lost their Japanese citizenship by accidentally using their Japanese passport to enter the country. The problem comes when you leave, because the airline wants to see proof that you can get into and stay in the UK, which requires your Brit passport, at which point your dual nationality is proved. Whereupon one ceases to be Japanese. Ouch.

bazza Silver badge

Some other countries don’t play nicely with dual nationality. Being British but travelling on the other passport is sometimes the only option for visiting one’s other country.

Having to have a "you're British" stamp in that passport, or having a British passport to travel back, risks losing your other nationality. Ask someone from Malaysia, Japan, Brunei, etc.

Orbital datacenters are a pie-in-the-sky idea: Gartner

bazza Silver badge

My favourite interview question for software developers is, “tell me how air conditioning works”.

One shouldn’t be allowed to write software unless on is also handy with thermodynamics…

bazza Silver badge

Some of the concepts floating around are laughable. There's one promising 4GW of solar panels to feed an orbiting data centre, but they haven't thought about what 4GW of battery or radiator looks like…

bazza Silver badge

Re: To be expected

Not so sure; if they'd stuck to using just kelvin, most of their readers would not understand.

But with that in mind it would probably have been better to stick to Celsius or Fahrenheit

Hard drives already sold out for this year – AI to blame

bazza Silver badge

Tape

I hope that tape doesn't become attractive to the AI outfits.

I'm seriously wondering if one can contrive a liveable RAID set-up with multiple tape drives. Individual file access times would be terrible, but with enough drives it could be liveable. (Looks at how many files Windows holds open, checks out the volume of a tape drive, looks at room left in the house, sighs...)

bazza Silver badge

Gloomy economic outlook

This AI bubble’s consumption of all the IT basics is going to have knock-on consequences for the rest of the economy. Wanna launch a startup? That’s going to cost more. EPOS till broken and needs replacement? Guess what that’s going to cost. Phone breaks? Bad news.

None of this will be good for economic figures. Those have barely begun to register the impact because of the sudden onset of demand. Inflation figures in a year’s time could be painful.

I wonder if we’ll get to the point where the few companies doing this get labelled as a cartel?

Notepad++ declares hardened update process 'effectively unexploitable'

bazza Silver badge

Re: Mention of the installer's libcurl dependency raises a question.

Given how Windows stashes DLLs separately for separate programs, and how Snaps / containers work in Linux, the idea of not statically linking everything seems nuts.

We used to be encouraged not to statically link due to the wasted space. Now that OSes work to waste the space anyway, statically linking seems like a valid response.

bazza Silver badge

Re: I look forward to the security bounty announcement

They probably would, but I'd guess it's a money thing. Microsoft, Google, et al. offer substantial rewards, I think. If one is going to enter into that kind of thing, you'd have to attract the participants with something valuable. My guess is that the Notepad++ project doesn't have pockets as deep.

There is the altruistic approach, which in this case (coz Notepad++ has been a generous friend to many of us for years) may succeed. It could take little more than a message on a webpage guiding whitehats on how to make contact.

bazza Silver badge

Re: author claims makes the "update process robust and effectively unexploitable."

Oh, it's highly exploitable. It's only as strong as the credentials used to access the source end of the update mechanism. One lazy password, one carelessly delayed machine patch, one zero-day on that machine or any of the others involved in building and delivering that binary, one illicit access to the source code repo, and the now oh-so-reassuringly-strong-and-much-more-trusted update mechanism is delivering tainted code as before, likely with less vigilance on the part of the recipients due to the misplaced confidence.

The same applies to end-to-end encryption systems being unbreakable. The intended mathematical design may well be hard to break by a man-in-the-middle. But what counts is the as-delivered implementation, which depends on a pretty wide set of persons (consider all the dependencies) and their log-in credentials being trusted and secure. We saw this kind of vulnerability nearly succeed against sshd globally a couple of years ago, when the xz compression library repo got compromised by a long-running social engineering effort against the original developer. Had that attacker been just a little more careful, they'd have succeeded.

How the GNU C Compiler became the Clippy of cryptography

bazza Silver badge

Re: Can someone please explain to me ...

Pretty sure that AES is simply a data transformation function: it doesn't make any decisions based on the inputs to the algorithm, and will take the same amount of time for equal-length inputs. Corrections welcome, if this is wrong!

The wider system that uses it may well be time variant on different inputs.

bazza Silver badge

Re: The problem here

That doesn’t help if the machine is expected to process millions of such checks per second. Isochronous (constant-time) code is intended to be as fast as the machine can achieve and no slower; a timer means a compromise.

Also, there are consequences in cache usage which another process can probably sense; the process that goes to sleep having completed its function would have to find some do-nothing work to occupy itself, with the problem that the compiler may strip that out...

bazza Silver badge

Re: Only one answer

Can one have one’s G-Cake and eat it?

bazza Silver badge

> To reiterate, this is *not* a bug in the compiler. If anything, it's a bug in the language which explicitly permits your code to be transformed in ways you don't expect.

I've got no problems with compilers spotting and stripping out pointless code that has no side effects or consequences for the final state of the program. The problem is when they generate code that produces a program with a different final state to that specified by the C source code. That's when the compiler is no longer a C compiler.

I have come across this in some compilers. The one in IAR Workbench for AVRs was at one point producing broken code if one turned on optimisation.

I've no idea if -O3 in gcc falls into this category. I can see why a "myth" could build up that it does (especially as it starts moving code order around), but the gcc folk are generally pretty careful.

> The C spec has nothing to say about timing of the generated assembly language

Indeed so. Though it's interesting: C started off back in the 1970s intending to be just a thin veneer atop assembly code, and therefore gross optimisations (like chopping out side-effect-less code) are somewhat contrary to this philosophy (an assembler wouldn't carry out such optimisations). In this regard, C is no longer a "systems" language, at least not as today's compilers implement it; one cannot guarantee in source code alone what the system behaviour will be. System behaviour depends more than ever on how the code has been built. That's a perversion of what was intended. The side effect is that stuff like security (which is pretty important) can get broken, and is vulnerable to being silently broken despite zero code / build script changes.

It's all very well saying "but the standard says this is OK", the problem is that there were some things so intrinsic in expectations that no one thought to put them into the standard. Simply hiding behind the standard is akin to "just following orders" responsibility denial, especially given that the compiler authors are pretty mixed up in the standards creation process itself.

Perhaps the standard should be updated, so that where such things matter the source code can insist on specific system outcomes.

bazza Silver badge

-O3 being dangerous strikes me as somewhat absurd. If the compiler is building code that does not implement the as-written source's functionality, then it's not acting as a C compiler. Instead, it's acting as a nearly-but-not-quite C compiler that goes wrong in exciting and arcane ways.

Worse still, it's misbehaviours like this that make life a lot harder for developers. If one cannot trust the compiler to build correct code no matter what the build options are, then the build system's configuration and binary testing becomes a whole other thing to worry about, on top of whether or not the source code is correct.

Meusel and colleagues are to be deeply congratulated for being alive to the problem and having a means of noticing the change (even if it was luck, or curiosity!). Setting up a build / test system that can reliably spot when critical execution-time characteristics have been changed by the compiler (when nothing but the compiler version has changed) is a ton of effort to make work reliably.

Another interesting problem is that it's not just the compiler. All a compiler does is produce op-codes. These days the op-codes get interpreted by the instruction decoder pipeline and broken down into micro-ops the core will actually run, and it's the timing of those that actually matters. Really, the only way forward with instruction decoders is to make them do more and more complex analyses, much like compiler optimisations. If they start getting good enough to chop out nugatory sections of code...

Operating systems like INTEGRITY are quite interesting. The versions of that which I have used execute on a single core only, and dole out execution time to processes on a fixed-allocation basis. That's probably the only way to be totally sure of not having timing side channels.

Notepad++ update service hijacked in targeted state-linked attack

bazza Silver badge

Short Version

So is this the deal:

If you’ve not been using the auto update but have been fetching the installer direct from the notepad++ website, you’ve been getting the proper software and no hidden nasties?

If you have been using the auto-update, you may already be screwed? But that would have required the attacker to be able to fiddle with your ISP connection somehow, and is unlikely to have happened?

Is that about it?

One outstanding point. What about if you’ve downloaded and installed a plug in?

Musk distracts from struggling car biz with fantastical promise to make 1 million humanoid robots a year

bazza Silver badge

Not sure if the article / headline is an accurate report of what's been proposed. However, as a shareholder I'd be more interested if he'd promised to sell 1 million robots. Making 1 million robots doesn't count!

GNOME dev gives fans of Linux's middle-click paste the middle finger

bazza Silver badge

Re: I have dumped Gnome a long, long time ago.

Even Windows displays a directory tree. And that "confuses" (or, seemingly, not) people by having drive letters... Gnome (and probably systemd too) of course tries to recreate the consequences of drive letters ("this is a completely separate device that in no way joins into any other part of your file system"), whilst making it annoying to find the mount point...

Nvidia spends $5B on Intel bailout, instantly gets $2.5B richer

bazza Silver badge

I doubt that Nvidia would benefit from an Intel foundry. Even Intel are getting TSMC to make their best chips these days...

bazza Silver badge

There are other concerns that the FTC could be interested in. Such a purchase is some sort of formal tie-up between the two companies which could grow, and there's also the matter of whether or not they're forming a cartel. Of course, a cartel can form without a share purchase, but this way there's a formal governmental "it's all OK".

Possibly the companies will move closer together, and wanted to get this off to a good, officially approved start.

Europe's cloud challenge: Building an Airbus for the digital age

bazza Silver badge

Re: "Digital Sovereignty" -- More Misdirection

>They have to cooperate, whether they like it or not and they cannot tell their customers about it.

There's degrees of cooperation. MS put up quite a stout legal fight against the US law enforcement request for access to an email account hosted in Ireland. They lost in the end. Whether or not the time bought was significant or not, I've no idea. But that there and then was the writing on the wall; an international hosting company (such as MS) is always going to be vulnerable to inter jurisdictional pressures.

Even a purely EU one will be vulnerable to some degree. Europe is not one single law enforcement jurisdiction. Data hosted in a foreign country is always vulnerable to the whims of that foreign country. The only way one can ensure* due process applies to government access to your data is to host it in your home country's jurisdiction, or to encrypt (using tools one has some control over) before the data crosses a border.

* for some measure of "ensure".

bazza Silver badge

Re: "Digital Sovereignty" -- More Misdirection

That's nothing more than saying that if you get wet, you're wet, and dismissing as useless all the various clothing options that exist to prevent one getting wet.

bazza Silver badge

Re: So Airbus Builds Out It's Own "Cloud" Provider In Europe......

Who knows.

AWS isn't great for straightforward commercial control; copying it may not float anyone's boat. So far as I can tell, if one goes all-in with AWS you end up with a bunch of lambdas that work only on AWS, with all your data on AWS. One may have a good commercial relationship with AWS, but if Amazon itself gets into trouble, one runs out of options pretty quickly. If one has a bad commercial relationship with Amazon, things could be considerably worse. Outright copying that feels like it would be missing an opportunity.

One thing the EU has been quite good at is standardisation. Coming up with a standard for "cloud" (whatever that is) and then mandating it creates a more vibrant ecosystem. This is what happened in mobile telephony; the US let companies (Qualcomm) create their own standard, Europe created GSM as an open and complete standard. The result was that the whole world ended up on GSM and a rich choice of phones, whilst the US didn't. Europe / the UK went further, making it trivial to swap providers whilst keeping one's phone number. Now everyone in telecoms understands the commercial benefits of a "standard", so we have 4G and 5G these days. The same could be made to happen in Cloud; there'd be US providers, but if you want actual resiliency you'd pick the Euro-standard suppliers and then be able to pick and choose which provider one actually used.

What such standardisation has done in mobile telephony is make sure that the access providers are profitable, but not too profitable, and prices are kept keen. Cloud as currently formulated by AWS and others is somewhat proprietary, and has all the possibilities of price gouging (I'm not saying they're gouging, but there's no technological / legal guarantee that any attempt at such a practice would be trivially thwarted by customers simply moving elsewhere overnight; it's a lot of work to re-Cloud applications, etc). It would be far better if Cloud followed the telephony business model. It would also make hybrid models (some self-hosting, some cloud) more achievable, which is the model most companies would like to follow.

bazza Silver badge

Re: air gap

Google (of all companies) announced a while back they were starting a project to get dev teams working "off-internet". I've no idea how that's going, but as they're probably one of the "usual suspects" it may be that they've got a trick or two up their sleeves to meet such a requirement.

bazza Silver badge

Re: "Digital Sovereignty" -- More Misdirection

Whoa!

There's a vast gulf between a data breach / loss brought about by the hosting company actively cooperating with an aggressive or suddenly hostile foreign government, and one brought about as a result of a security lapse. I'll give you a clue; the former involves a massive betrayal of trust, a contractual breach, and possibly broken laws in a legal jurisdiction, and may require a war to reverse, whilst the latter is simply a common or garden hazard of doing business on the Internet against which it is possible to take precautions.

Air gaps and private cabling are well known solutions which are already in use by organisations with specific needs. And, contrary to your assertions, they too are not immune to problems.

Pen testers accused of 'blackmail' after reporting Eurostar chatbot flaws

bazza Silver badge

This all seems a bit casual by Eurostar.

Given the nature of Eurostar's business, they’d fall under the Data Protection Act (or whatever it’s called these days). I should think that the company Information Officer would prefer not to have to explain to the Information Commissioner why a disclosed flaw met with this level of indifference, should they in fact get rolled over and a data breach occur.

I’d be interested to learn of my fellow commentators’ views on the idea of making such disclosures to the company information officer as well as (or instead of) to any vulnerability disclosure form. I suspect that the latter often gets dumped into the IT department somewhere (where it may fester, as happened here), whereas the IO is likely more interested because they’re the one who owns the consequences of inaction.

Obviously it’s not the pen tester’s job to sort out internal comms problems in dysfunctional companies! But it’s interesting to consider what the best disclosure route actually is.

Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030

bazza Silver badge

Re: Frangipani insane

To be fair, I think that even MS hold this to be an aspiration. I mean, if it were one engineer, 1 million lines a month *done well* would be a heck of an accomplishment. 1,000 lines per day may be realistic for fairly basic code. I very much doubt that they're actually going to churn through 1 million lines of code per engineer per month and ship the result regardless...

What I think is interesting is that, now, Rust appears to be the way of the future for MS. Are they the first major house to declare such an "all-in" on Rust? Makes me wonder what this'll do to the wider industry. Will Rust get ISO standardised? If Windows makes the transition and starts seeing real reductions in CVEs as a result, would the Linux kernel project start thinking about wider adoption? Who knows?

The US gov has been making strong noises about software getting written in safe languages. That might get more prescriptive if there's a major OS that has been re-written. Linux is a big deal now in server land, but if Uncle Sam starts insisting on its business being conducted on Windows because it's been rewritten in a memory safe language (national security, etc), that could put the cat amongst the pigeons.

bazza Silver badge

Re: "It's new and shiny - it must be better!"

The GNU coreutils seem to have benefitted from a re-write in Rust (fewer bugs, including some long-standing ones discovered along the way, and better speed).

Rust's sneaky party trick is that you don't have to really decide. If you've got a big pile of C (e.g. a kernel) and you want to add a new module, you can write that module in Rust and have a proper calling interface with the existing C. You can do the re-write at leisure.
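As a concrete sketch of that calling interface: a Rust function can be exported with the C ABI so existing C code links against it unchanged. The function name and checksum logic below are purely illustrative, not anything from a real project.

```rust
// Safe Rust implementation; all the usual memory-safety checks apply here.
fn checksum_impl(data: &[u8]) -> u32 {
    data.iter().map(|&b| b as u32).sum()
}

// C-callable wrapper. `#[no_mangle]` keeps the symbol name stable and
// `extern "C"` gives it the C calling convention, so the existing C side
// can declare it as: uint32_t checksum(const uint8_t *data, size_t len);
// Trusting the caller's raw pointer is the one unavoidable `unsafe` bit.
#[no_mangle]
pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
    if data.is_null() {
        return 0;
    }
    let slice = unsafe { std::slice::from_raw_parts(data, len) };
    checksum_impl(slice)
}

fn main() {
    // Exercise the wrapper the way C would: raw pointer plus length.
    let buf = [1u8, 2, 3, 4];
    println!("{}", checksum(buf.as_ptr(), buf.len())); // prints 10
}
```

The point is that the boundary is explicit: everything inside `checksum_impl` gets the full borrow checker, and only the thin FFI shim needs auditing.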

Though one of the aspects that worries me is that one might end up with a bunch of Rust modules calling each other as if they were C (probably with much use of the unsafe keyword), whereas if the whole code base were pure Rust you'd not have that.

Interfaces are something that most software / software ecosystems do very badly. The worst is command line interfaces; fine if used by a human in an intended way on a terminal, but absolutely terrible as a means of getting data from one program into another. We need fewer ways of interfacing. Anyway, my point is that just because a code base has been converted from one language to another, that's not necessarily the end of it. Getting rid of the old way of interfacing modules and adopting the new language's way is an essential part of the task.

bazza Silver badge

I know someone (younger than myself) who deliberately launched themselves off into COBOL. They're very busy and successful.

bazza Silver badge

Re: Death March

Rust came into being at Mozilla specifically to support re-writing parts of Firefox from C++ to Rust! And they've done a fair bit of that now.

Indeed, the value is in tested, known-to-work code. However, a code re-write in a language that is at a higher level than the original does make sense, and has been done comparatively often. The higher level in theory means that there's less testing / debugging to do, which can be essential if a piece of software is to expand whilst keeping the overall testing burden of such an expansion manageable.

This is precisely why compilers were written in the first place; if not, we'd all still be using assembler.

bazza Silver badge

Re: Not the holy grail

Myriad studies have found that a large proportion of bad bugs in software are related to memory mis-use. Using an alternative to C/C++ that is memory safe just makes sense, because of the ease with which such bugs are eliminated. Even code as venerable as the GNU coreutils was found to not be bug free (after all this time) when someone started a reimplementation in Rust.
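To illustrate the class of bug that gets eliminated: in C, indexing past the end of a buffer silently reads adjacent memory; in safe Rust the same access either panics deterministically or, via `get`, hands back an `Option`. The helper name below is made up for the example.

```rust
// Out-of-bounds access in safe Rust never becomes a wild read:
// `get` returns None instead of touching memory outside the slice.
fn read_at(buf: &[u8], idx: usize) -> Option<u8> {
    buf.get(idx).copied()
}

fn main() {
    let buf = [10u8, 20, 30];
    println!("{:?}", read_at(&buf, 1)); // Some(20)
    println!("{:?}", read_at(&buf, 9)); // None -- the C equivalent is undefined behaviour
}
```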

I'm firmly in the camp that Rust Makes Sense, if you can have the inclination to use it.

However, I do have some reservations about what MS are doing. There's a lot more to Rust than simply "it's memory safe". Unlike C/C++, it has built-in CSP-style channels (and there are crates for the Actor Model); CSP is what makes Golang so appealing. If you take C/C++ and just translate it, what they'll end up with is a Rust translation of their existing C/C++. That would probably miss the opportunity to re-think what the code is actually trying to do, and to use language features that Rust has and C/C++ do not. Granted, their "AI" approach may be smart enough to do that re-thinking, but that feels like a bit of a stretch.

I'd also be interested to see whether they lean on Rust's "fearless concurrency".
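For anyone unfamiliar, "fearless concurrency" just means the compiler rejects data races at build time: ownership moves into each thread, and results come back over channels. A minimal sketch (the function and data are illustrative only):

```rust
use std::sync::mpsc;
use std::thread;

// Sum several chunks in parallel. Ownership of each `chunk` moves into
// its thread, so no other thread can touch it -- a data race on that
// data wouldn't even compile.
fn sum_in_parallel(chunks: Vec<Vec<u64>>) -> u64 {
    let (tx, rx) = mpsc::channel();
    for chunk in chunks {
        let tx = tx.clone();
        thread::spawn(move || {
            let partial: u64 = chunk.iter().sum();
            tx.send(partial).unwrap();
        });
    }
    drop(tx); // close the channel so the receiving iterator terminates
    rx.iter().sum()
}

fn main() {
    let total = sum_in_parallel(vec![vec![1, 2, 3], vec![4, 5], vec![6]]);
    println!("{total}"); // prints 21
}
```

A mechanically translated C code base full of shared mutable state wouldn't get any of this for free; the concurrency structure has to be re-thought to benefit.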

What the Linux desktop really needs to challenge Windows

bazza Silver badge

What's the Best Bigger Picture?

That's the question that the article doesn't expand out into. And that question is a bit of a toughie to answer. But it helps understand why Linux hasn't taken off.

On the one hand, wouldn't it be great if there were a single desktop / mobile / server OS that we all used and liked, and one set of cloud services for us to use? Training would be easy, it'd generate the most vigorous economic activity possible as everyone's software would be accessible to the whole market, etc, etc.

On the other hand, one critical flaw impacts the entire planet. That'd be too tempting a target for bad actors, obnoxious states.

For many decades now it has been clear that there's room for about two of everything. Apple / Mac is one, Windows / Android is the other. All are backed by corporations chasing the vast consumer market; everyone else gets forgotten about. It seems that industry is not going to grow a viable third mass-market alternative by itself, and it seems especially optimistic to think that any of the large companies in the Linux world (such as RedHat) will ever have any interest whatsoever in going out of their way to unify efforts on the Linux desktop. All the myriad projects and ventures in the Linux world add up to an appalling mess for the average Joe to navigate. It's a mess even for seasoned Linux users to navigate. You know you've got a mess on your hands when different suppliers of basically the same OS have to re-build everyone's software themselves and package it up for distribution independently of the software developer.

The reason why two is the magic number is because governments minded to let the market choose are generally happy enough with duopolies, and not with monopolies. If there's two of something, regulators rapidly lose interest and there's no pressure for the introduction of a third choice. It also suits governments because the industry then isn't so fragmented as to actively hinder an economy, thereby not necessitating government intervention to bring about much needed consolidation. As Apple and Microsoft were the ones with the biggest desktop dreams, they won.

Other things

<pedant mode: apologies on>

From the article:

Unix died because of endless incompatibilities between versions.

It hasn't died as such, it's simply transformed into a specification. Many OSes - including Windows (if one loads it up with WSL v1) are largely compliant with that specification. The big old Unix corps got eaten on the desktop as Windows grew in capability, and in server land by the hardware manufacturers doing x86/64 hardware that was viable for production use in data centres with Linux being just about good enough to be the OS. Linux's dominance of the data centre would not have happened if no one had manufactured an x86 server with an open specification for hardware, boot environment, etc. Much of the credit for that goes actually to Microsoft, who refined the concept of "IBM Compatible" down to an actual published standard that others could write OSes against with confidence.

Also from the article:

Just look at Android, he argued. Linux won on smartphones because, while there are different Android front ends, under their interfaces, there's a single, unified platform with a unified way to install programs.

Whilst that's true, Android is not and has not been the only Linux-based mobile phone OS; Tizen and Ubuntu Phone spring to mind, and they were unsuccessful. Android's win in the Linux-based mobile phone OS market came about through big corporate backing, with control achieved through forced adoption of one company's services (Google's) in an illegal way very reminiscent of the bad practices we used to accuse Microsoft of following. Despite some promising tech from various other stables (I still miss the tech perfection of BlackBerry10), we're now left with two of something, which looks like persisting forever.

</pedant mode>

NIST contemplated pulling the pin on NTP servers after blackout caused atomic clock drift

bazza Silver badge

From the article:

This incident therefore shouldn’t trouble the prudent,

The prudent are a vanishingly rare species. They're never accountants, rarely policy makers, and seldom shareholders. Critical infrastructure is critical no matter how it's funded or how resilient it is, and too often the incautious learn far too late. The only difference between any past disasters and future ones is that our high-tech world means we have fewer backup systems. The classic one is chimneys; houses don't have them anymore. In the old days if the mains or gas went off you could always burn something to keep a home warm. Now, you cannot!

The lunacy of those in financial control is that, at the same time as they withhold funding from properly beefing up critical infrastructure, they're probably personally diversifying their financial interests, investments, etc. None of that diversification is worth a damn if the modern technological world suffers a major shock (e.g. a Carrington event) and whole economies evaporate in a puff of high energy protons.

React2Shell exploitation spreads as Microsoft counts hundreds of hacked machines

bazza Silver badge

Re: Several hundred...

You mean ones where a miscreant has exploited the flaw, got in, cleaned up the traces and have a hidden presence on that server or network?

Probably quite a lot…

bazza Silver badge

Re: Javascript on the server

Yep. The (deliberate?) inability to control what code runs on that server means that you may not own that server, even when said code is sandboxed as well as a JavaScript interpreter can manage.

In this case, it’s another example of folk considering style to be more important than function.

Apple blocks dev from all accounts after he tries to redeem bad gift card

bazza Silver badge

Re: He isn't learning

OSS tends to come from someone else's computer too, and is withdrawn, broken, or changed surprisingly often. It's really hard to be fully independent of someone else's views and plans, even with OSS.

Eg don't like what Gnome is doing with GTK? Tough.

With OSS, one doesn't have a contract to fall back on...

This doesn't generally result in losing access to data, but it can result in the inability to view or process data. If a program is abandoned by a developer and becomes unusable as distros change, the effect can be the same.

Affection for Excel spans generations, from Boomers to Zoomers

bazza Silver badge

Liked by the Financial Industry, Hated by Compliance Departments

From the article:

"According to a Datarails report, more than half (54 percent) of 22 to 32-year-old finance professionals say they outright "love" Excel, up from 39 percent among the older generation."

I knew someone working in the compliance department of a large financial company, and they described Excel as their absolute worst nightmare. It was the job of the financial whizzkids to cook up some cunning scheme for making money, providing a service, etc. It was the job of the compliance department to vet the scheme for legal compliance issues. It was then supposed to be passed over to the softies to develop the code to run the scheme, which also had to be reviewed and checked for compliance in its own right.

Where Excel came in was that it allowed the financial whizzkids to run their dreamt up scheme without needing the software to be written by the softies, and without troubling the compliance department at all. They could bypass all those boring checks, balances, processes and just wing it in a spreadsheet on their desktops...

A big part of the compliance department's job was scouring through company laptops actively hunting out such spreadsheets, and finding them to be in plentiful and continuous supply...