* Posts by bazza

3962 publicly visible posts • joined 23 Apr 2008

GNOME dev gives fans of Linux's middle-click paste the middle finger

bazza Silver badge

Re: I have dumped Gnome a long, long time ago.

Even Windows displays a directory tree. And that "confuses" (or, seemingly not) people by having drive letters... Gnome (and probably SystemD too) of course tries to recreate the consequences of drive letters ("this is a completely separate device that in no way joins into any other part of your file systems"), whilst making it annoying to find the mount point...

Nvidia spends $5B on Intel bailout, instantly gets $2.5B richer

bazza Silver badge

I doubt that Nvidia would benefit from an Intel foundry. Even Intel are getting TSMC to make their best chips these days...

bazza Silver badge

There are other concerns that the FTC could be interested in. Such a purchase is some sort of formal tie-up between the two companies which could grow, and there's also the matter of whether or not they're forming a cartel. Of course, a cartel can form without a share purchase, but this way there's a formal governmental "it's all OK".

Possibly the companies will move closer together, and wanted to get this off to a good, officially approved start.

Europe's cloud challenge: Building an Airbus for the digital age

bazza Silver badge

Re: "Digital Sovereignty" -- More Misdirection

>They have to cooperate, whether they like it or not and they cannot tell their customers about it.

There's degrees of cooperation. MS put up quite a stout legal fight against the US law enforcement request for access to an email account hosted in Ireland. They lost in the end. Whether the time bought was significant or not, I've no idea. But that there and then was the writing on the wall; an international hosting company (such as MS) is always going to be vulnerable to inter-jurisdictional pressures.

Even a purely EU one will be vulnerable to some degree. Europe is not one single law enforcement jurisdiction. Data hosted in a foreign country is always vulnerable to the whims of that foreign country. The only way one can ensure* due process applies to government access to your data is to host it in your home country's jurisdiction, or to encrypt (using tools one has some control over) before the data crosses a border.

* for some measure of "ensure".

bazza Silver badge

Re: "Digital Sovereignty" -- More Misdirection

That's nothing more than saying that if you get wet, you're wet, and dismissing as useless all the various clothing options that exist to prevent one getting wet.

bazza Silver badge

Re: So Airbus Builds Out It's Own "Cloud" Provider In Europe......

Who knows.

AWS isn't great for straightforward commercial control; copying it may not float anyone's boat. So far as I can tell, if one goes all-in with AWS one ends up with a bunch of lambdas that work only on AWS, with all one's data on AWS. One may have a good commercial relationship with AWS, but if Amazon itself gets into trouble, one runs out of options pretty quickly. If one has a bad commercial relationship with Amazon, things could be considerably worse. Outright copying that model feels like it would be missing an opportunity.

One thing the EU has been quite good at is standardisation. Coming up with a standard for "cloud" (whatever that is) and then mandating it creates a more vibrant ecosystem. This is what happened in mobile telephony; the US let companies (Qualcomm) create their own standard, Europe created GSM as an open and complete standard. The result was that the whole world ended up on GSM with a rich choice of phones, whilst the US didn't. Europe / the UK went further, making it trivial to swap providers whilst keeping one's phone number. Now everyone in telecoms understands the commercial benefits of a "standard", so we have 4G and 5G these days. The same could be made to happen in Cloud; there'd be US providers, but if you wanted actual resiliency you'd pick the Euro-standard suppliers and then be able to pick and choose which provider you actually used.

What such standardisation has done in mobile telephony is make sure that the access providers are profitable, but not too profitable, and prices are kept keen. Cloud as currently formulated by AWS and others is somewhat proprietary and has all the possibilities of price gouging (I'm not saying they are gouging, but there's no technological / legal guarantee that any attempt at such a practice could be trivially thwarted by customers simply moving elsewhere overnight; it's a lot of work to re-Cloud applications, etc). It would be far better if Cloud followed the telephony business model. It would also make hybrid models (some self hosting, some cloud) more achievable, which is the model most companies would like to follow.

bazza Silver badge

Re: air gap

Google (of all companies) announced a while back they were starting a project to get dev teams working "off-internet". I've no idea how that's going, but as they're probably one of the "usual suspects" it may be that they've got a trick or two up their sleeves to meet such a requirement.

bazza Silver badge

Re: "Digital Sovereignty" -- More Misdirection

Whoa!

There's a vast gulf between a data breach / loss brought about by the hosting company actively cooperating with an aggressive or suddenly hostile foreign government, and one brought about as a result of a security lapse. I'll give you a clue; the former involves a massive betrayal of trust, a contractual breach, possibly broken laws in a legal jurisdiction, and may require a war to reverse, whilst the latter is simply a common or garden hazard of doing business on the Internet against which it is possible to take precautions.

Air gaps and private cabling are well known solutions which are already in use by organisations with specific needs. And, contrary to your assertions, they too are not immune to problems.

Pen testers accused of 'blackmail' after reporting Eurostar chatbot flaws

bazza Silver badge

This all seems a bit casual by Eurostar.

Given the nature of Eurostar’s business, they’d fall under the Data Protection Act (or whatever it’s called these days). I should think that the company Information Officer would prefer not to have to explain to the Information Commissioner why a disclosed flaw met with this level of indifference, should they in fact get rolled over and a data breach occur.

I’d be interested to learn of my fellow commentators’ views on the idea of making such disclosures to the company information officer as well as (or instead of) to any vulnerability disclosure form. I suspect that the latter often gets dumped into the IT department somewhere (where it may fester, as happened here), whereas the IO is likely more interested because they’re the one who owns the consequences of inaction.

Obviously it’s not the pen tester’s job to sort out internal comms problems in dysfunctional companies! But it’s interesting to consider what the best disclosure route actually is.

Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030

bazza Silver badge

Re: Frangipani insane

To be fair, I think that even MS hold this to be an aspiration. I mean, if it was one engineer, 1 million lines a month *done well* that'd be a heck of an accomplishment. 1000 lines per day may be realistic, for fairly basic code. I very much doubt that they're actually going to churn through 1 million lines of code per engineer per month and ship the result regardless...

What I think is interesting is that, now, Rust appears to be the way of the future for MS. Are they the first major house to declare such an "all-in" on Rust? Makes me wonder what this'll do to the wider industry. Will Rust get ISO standardised? If Windows makes the transition and starts seeing real reductions in CVEs as a result, would the Linux kernel project start thinking about wider adoption? Who knows?

The US gov has been making strong noises about software getting written in safe languages. That might get more prescriptive if there's a major OS that has been re-written. Linux is a big deal now in server land, but if Uncle Sam starts insisting on its business being conducted on Windows because it's been rewritten in a memory safe language (national security, etc), that could put the cat amongst the pigeons.

bazza Silver badge

Re: "It's new and shiny - it must be better!"

The GNU coreutils seem to have benefitted from a re-write in Rust (fewer bugs, including some in the originals that were discovered along the way, and quicker code).

Rust's sneaky party trick is that you don't have to really decide. If you've got a big pile of C (e.g. a kernel) and you want to add a new module, you can write that module in Rust and have a proper calling interface with the existing C. You can do the re-write at leisure.
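As a rough sketch of that incremental approach (the function name, signature and checksum logic here are all invented for illustration), a new module written in Rust can expose a plain C ABI, so the existing C code base calls it without ever knowing Rust is involved:

```rust
// New module in Rust, callable from the existing C code base as:
//   uint32_t checksum(const uint8_t *data, size_t len);
// The names and logic are made up for illustration.

#[no_mangle]
pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
    // The unsafe is confined to the FFI boundary; once the slice is
    // constructed, everything below is ordinary safe Rust.
    let bytes: &[u8] = if data.is_null() {
        &[]
    } else {
        unsafe { std::slice::from_raw_parts(data, len) }
    };
    bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
}

fn main() {
    // Exercised from Rust here; in real use the caller would be C.
    let v = [1u8, 2, 3];
    assert_eq!(checksum(v.as_ptr(), v.len()), 6);
    println!("checksum ok");
}
```

The point of confining the unsafe to the boundary like this is that the new module's internals get the full borrow-checker treatment, while the old C side carries on unchanged.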

Though one of the aspects that worries me is that one might end up with a bunch of Rust modules calling each other as if they were C (probably with much use of the unsafe keyword), whereas if the whole code base were pure Rust you'd not have that.

Interfaces are something that most software / software ecosystems do very badly. The worst is command line interfaces; fine if used by a human in an intended way on a terminal, but absolutely terrible as a means of getting data moved from one program into another. We need fewer ways of interfacing. Anyway, my point is that just because a code base has been converted from one language to another, that's not necessarily the end of it. Getting rid of the old way of interfacing modules and adopting the new language's way is an essential part of the task.

bazza Silver badge

I know someone (younger than myself) who deliberately launched themselves off into COBOL. They're very busy and successful.

bazza Silver badge

Re: Death March

Rust came into being specifically to support a re-write of Firefox from C++ to Rust! And they've done a fair bit of that now.

Indeed, the value is in tested, known-to-work code. However, a code re-write in a language that is at a higher level than the original does make sense, and has been done comparatively often. The higher level in theory means that there's less testing / debugging to do, and that can be essential if a piece of software is to expand while keeping the overall testing burden of the expansion manageable.

This is precisely why compilers were written in the first place; if not, we'd all still be using assembler.

bazza Silver badge

Re: Not the holy grail

Myriad studies have found that a large proportion of bad bugs in software are related to memory mis-use. Using an alternative to C/C++ that is memory safe just makes sense, because of the ease with which such bugs are eliminated. Even code as venerable as the GNU coreutils was found to not be bug free (after all this time) when someone started a reimplementation in Rust.

I'm firmly in the camp that Rust Makes Sense, if you have the inclination to use it.

However, I do have some reservations about what MS are doing. There's a lot more to Rust than simply "it's memory safe". Unlike C/C++, it has a built-in CSP mechanism (and an Actor Model, or at least there's a crate for such a thing); the former is what makes Golang so appealing. If you take C/C++ and just translate it, what they'll end up with is a Rust translation of their existing C/C++. That would probably miss the opportunity to re-think what the code is actually trying to do, and potentially miss out on using language features that Rust has and C/C++ do not. Granted, their "AI" approach may be smart enough to do that re-thinking, but that feels like a bit of a stretch.

I'd also be interested to see whether they embrace Rust's "fearless concurrency".
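For what it's worth, the CSP-ish machinery in the standard library is std::sync::mpsc channels; a minimal sketch (the sum-of-squares workload is just a placeholder):

```rust
// CSP-style message passing with std::sync::mpsc: values are *moved*
// between threads over a channel, so the borrow checker rules out the
// shared-mutable-state bugs that plague equivalent C/C++.

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for i in 0..5 {
            // Ownership of each message transfers to the receiver;
            // the sender cannot touch it afterwards.
            tx.send(i * i).expect("receiver gone");
        }
        // tx is dropped here, which closes the channel.
    });

    // Receiving ends cleanly once the sender hangs up.
    let total: i32 = rx.iter().sum();
    worker.join().unwrap();
    assert_eq!(total, 0 + 1 + 4 + 9 + 16);
    println!("sum of squares via channel: {total}");
}
```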

What the Linux desktop really needs to challenge Windows

bazza Silver badge

What's the Best Bigger Picture?

That's the question that the article doesn't expand out into. And that question is a bit of a toughie to answer. But it helps understand why Linux hasn't taken off.

On the one hand, wouldn't it be great if there were a single desktop / mobile / server OS that we all used and liked, and one set of cloud services for us to use? Training would be easy, it'd generate the most vigorous economic activity possible as everyone's software would be accessible to the whole market, etc, etc.

On the other hand, one critical flaw would impact the entire planet. That'd be too tempting a target for bad actors and obnoxious states.

For many decades now it has been clear that there's room for about 2 of everything. Apple / Mac is one, Windows / Android is the other. Both are backed by corporations chasing the vast consumer market; everyone else gets forgotten about. It seems that industry is not going to grow a viable 3rd mass market alternative by itself, and it seems especially optimistic to think that any of the large companies in the Linux world (such as RedHat) will ever have any interest whatsoever in going out of their way to unify efforts on Linux/Desktop. All the myriad projects and ventures in the Linux world add up to an appalling mess for the average Joe to navigate. It's a mess for seasoned Linux users to navigate. You know you've got a mess on your hands when different suppliers of basically the same OS have to re-build everyone's software themselves and package it up for distribution independently of the software developer.

The reason why two is the magic number is because governments minded to let the market choose are generally happy enough with duopolies, and not with monopolies. If there's two of something, regulators rapidly lose interest and there's no pressure for the introduction of a third choice. It also suits governments because the industry then isn't so fragmented as to actively hinder an economy, thereby not necessitating government intervention to bring about much needed consolidation. As Apple and Microsoft were the ones with the biggest desktop dreams, they won.

Other things

<pedant mode: apologies on>

From the article:

Unix died because of endless incompatibilities between versions.

It hasn't died as such, it's simply transformed into a specification. Many OSes - including Windows (if one loads it up with WSL v1) - are largely compliant with that specification. The big old Unix corps got eaten on the desktop as Windows grew in capability, and in server land by the hardware manufacturers doing x86/64 hardware that was viable for production use in data centres, with Linux being just about good enough to be the OS. Linux's dominance of the data centre would not have happened if no one had manufactured an x86 server with an open specification for hardware, boot environment, etc. Much of the credit for that actually goes to Microsoft, who refined the concept of "IBM Compatible" down to an actual published standard that others could write OSes against with confidence.

Also from the article:

Just look at Android, he argued. Linux won on smartphones because, while there are different Android front ends, under their interfaces, there's a single, unified platform with a unified way to install programs.

Whilst that's true, Android is not and has not been the only Linux-based mobile phone OS. Tizen and Ubuntu Phone spring to mind. They were unsuccessful. Android's win in the Linux-based mobile phone OS market came about through big corporate backing, with control brought about through forced adoption of one company's services (Google's) in an illegal way very reminiscent of the bad practices we used to accuse Microsoft of following. Despite some promising tech from various other stables (I still miss the tech perfection of BlackBerry 10), we're now left with 2 of something, which looks like persisting forever.

</pedant mode>

NIST contemplated pulling the pin on NTP servers after blackout caused atomic clock drift

bazza Silver badge

From the article:

This incident therefore shouldn’t trouble the prudent,

The prudent are a vanishingly rare species. They're never accountants, rarely policy makers, and seldom shareholders. Critical infrastructure is critical no matter how it's funded or how resilient it is, and too often the incautious learn far too late. The only difference between past disasters and future ones is that our high tech world means we have fewer backup systems. The classic one is chimneys; houses don't have them anymore. In the old days, if the mains or gas went off you could always burn something to keep a home warm. Now you cannot!

The lunacy of those in financial control is that, at the same time as they withhold funding from properly beefing up critical infrastructure, they're probably personally diversifying their financial interests, investments, etc. The lunacy is that none of that diversification is worth a damn if the modern technological world suffers a major shock (e.g. a Carrington event) and whole economies evaporate in a puff of high energy protons.

React2Shell exploitation spreads as Microsoft counts hundreds of hacked machines

bazza Silver badge

Re: Several hundred...

You mean ones where a miscreant has exploited the flaw, got in, cleaned up the traces and now has a hidden presence on that server or network?

Probably quite a lot…

bazza Silver badge

Re: Javascript on the server

Yep. The (deliberate?) inability to control what code runs on that server means that you may not own that server, even when said code is sandboxed as well as a JavaScript interpreter can achieve.

In this case, it’s another example of folk considering style to be more important than function.

Apple blocks dev from all accounts after he tries to redeem bad gift card

bazza Silver badge

Re: He isn't learning

OSS tends to come from someone else's computer too, and is withdrawn, broken, or changed surprisingly often. It's really hard to be fully independent of someone else's views and plans even with OSS.

Eg don't like what Gnome is doing with GTK? Tough.

With OSS, one doesn't have a contract to call back on...

This doesn't generally result in losing access to data, but it can result in the inability to view or process data. If a program is abandoned by a developer and becomes unusable as distros change, the effect can be the same.

Affection for Excel spans generations, from Boomers to Zoomers

bazza Silver badge

Liked by the Financial Industry, Hated by Compliance Departments

From the article:

"According to a Datarails report, more than half (54 percent) of 22 to 32-year-old finance professionals say they outright "love" Excel, up from 39 percent among the older generation."

I knew someone working in the compliance department of a large financial company, and they described Excel as their absolute worst nightmare. It was the job of the financial whizzkids to cook up some cunning scheme for making money, providing a service, etc. It was the job of the compliance department to vet the scheme for legal compliance issues. It was then supposed to be passed over to the softies to develop the code to run the scheme, which also had to be reviewed and checked for compliance in its own right.

Where Excel came in was that it allowed the financial whizzkids to run their dreamt up scheme without needing the software to be written by the softies, and without troubling the compliance department at all. They could bypass all those boring checks, balances, processes and just wing it in a spreadsheet on their desktops...

A big part of the compliance department's job was scouring through company laptops actively hunting out such spreadsheets, and finding them to be in plentiful and continuous supply...

Death to one-time text codes: Passkeys are the new hotness in MFA

bazza Silver badge

Re: Don't let perfection be the enemy of progress

Passkeys are problematic if you lose the device on which they exist. That's simply an opportunity for a different form of phishing, given what else has likely been lost along the way (email, any recovery password, you name it).

FreeBSD 15 trims legacy fat and revamps how OS is built

bazza Silver badge

I think the Wiki page on LLVM is clear enough. Whilst it's written in C++ and could be built using gcc, there's no exclusivity there.

There was no usable version of gcc based on the Pastel compiler. You have selectively quoted the para from the history section of that Wikipedia page, which goes on to say that none of the Pastel compiler code ended up in GCC. And your last para literally confirms that gcc was bootstrapped from another C compiler. And there was no exclusivity as to which C compiler was used to bootstrap gcc either.

bazza Silver badge

Hello Liam, Cloudflare has been doing some unaccustomed rate limiting (dunno why, this is Firefox on iOS and no unusualness my end) and a stumbling finger poking my mobile screen may have pressed an as yet undrawn button on yours or another post. From the brief pop up it may have been a “report problem” or some such (it mentioned whipping developers, I think). Anyway, I had no intention of reporting a problem with anyone’s post, if that’s what’s happened.

bazza Silver badge

Even a cursory reading of the history of LLVM would teach that LLVM may have used gcc, but was not dependent on gcc. It started off as a research project into dynamic compilation, not a language suite, and could have used another C++ compiler for its code.

And if it comes to that, gcc was itself bootstrapped with an existing VAX or Unix C compiler. It is far from being a point of genesis.

Seven years later, Airbus is still trying to kick its Microsoft habit

bazza Silver badge

Re: The limitations of spreadsheets

I'm not in the aviation industry, but I'd guess that any in-service changes of parts are logged separately (probably on paper, with a signature, kept in a log back in the hangar). So the current state of the aircraft would be the spreadsheet + all the separately recorded works done. It'd be a piece of work indeed to put that all together, but having to do so would be a rare event (e.g. there'd been a crash). That way the spreadsheet BOM for the as-built aircraft would be immutable. Having a big spreadsheet that's read-only is a reasonably good use of a spreadsheet; navigable, importable, searchable, viewable.

bazza Silver badge

Re: Airbus??? Excel??? Twenty Million Cells???

One of the party tricks someone had for Lotus 1-2-3 back in the day was a full FEA of the structure of a super tanker ship (actually doing the calcs in the spreadsheet too). It was quite good, and shockingly impressive at the time.

I'd have assumed that these days your average professional CAD package, such as Catia, would just do all the FEA / CFD / visualisation for you. Resorting to Excel sounds, well, old-fashioned! There is the point that not everyone wants to buy Catia or its equivalent, and for a lot of structural stuff it all used to be hand-calculated on paper anyway, so why "over-computerise" the job?

It's one of the frustrating things about Excel - it's good enough for quite a few purposes, and becomes monumentally dangerous when over-used. The line between the two states is invisible, fast moving, and difficult to cross both ways!

bazza Silver badge

Re: The volatility of Microsoft Word

Like I said, boundaries and expectations are blurred.

Also, people quite often have unrealistic expectations; they pick up the first handy tool and generally assume that it'll do exactly what they want without ever reading the product manual or doing any such investigation. Anyone relying on a .doc or .docx being adequate for immutable archival purposes is perhaps asking to be disappointed, despite the spec for .docx being administered by the Library of Congress.

Whereas anyone relying on a .doc or .docx being infinitely mutable, and therefore willing to put in the font-updating (or at least font-embedding) and repagination effort, has - so far - not been totally let down; it's still a live, usable format.

Interestingly, the British Library's safest archival format is raw ASCII. It's no good for pictures of course, but they settled on that for text as being the one format that is likely infinitely reinterpretable, if need be through frequency analysis alone.

bazza Silver badge

Re: Airbus??? Excel??? Twenty Million Cells???

Probably a Bill of Materials. Quite a good use for a spreadsheet. These big flying ****ers have a lot of parts, so 20,000,000 cells is no surprise. Having a spreadsheet generated per-aircraft built with a complete BOM for that specific aircraft sounds like a good idea. It would be sound and independent backup for whatever was in the CAD db that existed when the plane was built. It could even be just a CSV file.

bazza Silver badge

Ah, the mistake you've made is to think of Word as a Publishing Application, which it is not and never has been. It's a Word Processor.

The whole point of a Word Processor is to render words / images as best fits the medium, printer, etc; it's supposed to change word flow with paper size, etc. Take one of the most extreme examples, LaTeX: you supply it with words, images and rendering instructions, and it interprets those instructions in different ways depending on whether it's going to PDF, or printer, or screen, etc. Word is like LaTeX, except that you have a quasi-WYSIWYG view of the rendering as you enter the words, pictures and instructions.

Whereas a Publishing Application is supposed not to do that.

The distinction might seem spurious, but it is a different goal. The boundaries are now very blurred, as are expectations, but that's the origin of Word's behaviour. It's by design. Other WPs do the same.

I've not used a publishing application since Ventura Publisher; that was (for the day) very good, and no matter what, you got the same output on different printers.

bazza Silver badge

Re: Regulatory Compliance

There are other fields where long term availability of literally everything related to a product must be maintained. Medical devices is one. Cars, probably. Financial records demand long term retention / access too.

It's a problem that the technology world doesn't care to solve. The "average" user doesn't need this, and so the "average product" doesn't even begin to satisfy such long-lived requirements.

Extending the thought, "displayable as they would at the time of creation": that in itself doesn't make much sense anymore. Sure, there's a strong likelihood that a component's design was looked at (even on a computer screen) by engineers and manufacturing technicians at the time the plane was built. But, for quite a long time now, that is partly irrelevant. It's what the CNC (or 3D printer) actually did with the design that matters, and also what the automated manufacturing QA/QC tools did with it too. Keeping all that automated machinery alive, examinable, usable, testable for decades is even harder.

If regulations aren't asking for that, then they at least need a re-think.

One can go further; what about raw materials; 50 years down the line, can one really get material just as it was?

The Informationeer? A Re-Design Engineer?

There's probably a job for what I'd call an informationeer or a re-design engineer; someone who is specifically trained and qualified in updating the representation of information and can sign off on a conversion from old to new as being "equivalent".

For example, someone who has the tricks up their sleeve to be able to independently assess that a set of CAD drawings converted from one program to another reproduces the original engineer's design intentions cast into the context of modern manufacturing. This would extend beyond just seeing if the file loads, and wouldn't necessarily mean that the part looks the same afterwards... The important part would be that they've signed off the conversion; there is a strong traceability between the modern design representation and the original (much as aircraft maintenance logs get signed off by those doing the work).

It feels like a job that's different to the design engineer. A design engineer takes requirements and produces a design. An informationeer simply ensures that the design as cast into more modern representations is the same "design". That would mean that if the design engineer had made a mistake, the informationeer / re-design engineer would correctly carry that mistake forward [though if they detected a mistake, obvs there'd be a means to notify the design owners].

Though I'd hate to be a spreadsheet informationeer. Likely as not, one would find all the horrific mistakes made in the originals, making it difficult to discern what the original intent actually was...

Old Problem

We've been here before. When they (fairly recently) built a Babbage Difference Engine from Babbage's drawings, a major problem was understanding the nomenclature on the drawings; those old Victorian draughtsmen didn't draw things in the way they are in the later 20th / 21st century.

HPC won't be an x86 monoculture forever – and it's starting to show

bazza Silver badge

Fugaku…

…is still growing I think.

I like Fugaku because it’s pure CPU/Vector processing, and a lot of its performance comes from the very efficient Tofu Interconnect fabric welded into the CPUs. It can really shift data like nothing else.

Atlassian ran a tabletop DR simulation that revealed it lived in dependency hell

bazza Silver badge

This is all pointless. Whilst they still have one circular dependency - as the article's tail end reports - they cannot bring it all up from cold.

Not impressed.

There are ways in which all of this can be avoided. For example, if one adopts Communicating Sequential Processes as the basis for one's system architecture, and then one uses the algebra Tony Hoare created for it, you can algebraically prove system correctness (lack of livelock, deadlock, etc) before ever cutting code.

This isn't especially hard, though of course such design formalism and analysis is anathema to many. In this case, Atlassian are spending a lot of money despaghettifying their own mess and haven't actually achieved anything substantive in doing so. Whereas some comparatively cheap design analysis up front would have saved them all of this.
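To illustrate the sort of thing such analysis catches, here's a minimal sketch (in Rust, with channels standing in for CSP events; the services and timeout are invented for illustration, and the timeout merely detects at runtime what a CSP refinement checker would prove statically from the algebra): two services that each wait for the other's "ready" signal before starting, i.e. a circular cold-start dependency.

```rust
// Two services, each refusing to start until the other says it is ready:
// a circular dependency that deadlocks a cold start. The recv_timeout is
// only here so the demo terminates; CSP analysis would find this on paper.

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (a_ready_tx, a_ready_rx) = mpsc::channel::<()>();
    let (b_ready_tx, b_ready_rx) = mpsc::channel::<()>();

    let a = thread::spawn(move || {
        // Service A waits for B before announcing itself: deadlock half 1.
        let got_b = b_ready_rx.recv_timeout(Duration::from_millis(100));
        if got_b.is_ok() {
            a_ready_tx.send(()).unwrap();
        }
        got_b.is_ok()
    });

    let b = thread::spawn(move || {
        // Service B waits for A before announcing itself: deadlock half 2.
        let got_a = a_ready_rx.recv_timeout(Duration::from_millis(100));
        if got_a.is_ok() {
            b_ready_tx.send(()).unwrap();
        }
        got_a.is_ok()
    });

    // Neither service ever starts: the cold-start deadlock.
    assert!(!a.join().unwrap());
    assert!(!b.join().unwrap());
    println!("cold start deadlocked: neither service came up");
}
```

Trivial at this scale, obviously; the value of doing it algebraically is that the same conclusion falls out of a specification of hundreds of interacting services without running anything.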

Their current mess translates into business risk, increased costs, and ultimately a suppressed share price.

Microsoft's fix for slow File Explorer: load it before you need it

bazza Silver badge

From the article:

Microsoft is tackling File Explorer's sluggish launch times - not by stripping out the bloat or optimizing code, but by preloading the application in the background.

Genius.

I say, bring back Sidekick.

De-duplicating the desktops: Let's come together, right now

bazza Silver badge

Re: Don't need to exchange lots of rich, complex messages.

>If you use a text based format you remove the problem of representation from encapsulation and transport (you're just sending utf-8 text), which allows you to have much richer object representations that are less rigid (You can ignore new fields if desired) and not constrained by the underlying binary format. (How do you represent a number, what byte order do you use, etc etc).

There is nothing special about text representations that binary representations cannot match. Text is a binary representation, it's just a very inefficient one. Binary serialisation is a thing, and has been a thing for far longer than today's popular text serialisations (ASN.1's BER goes back to the 1980s I think). If you've not heard of it, it's because it comes from the ITU and ITU technologies are often shunned by the "software" world. However, the ITU has maintained and extended it, so it's bang up to date with things like XML and JSON wireformats.

If one is doing "interfacing" by writing code according to one's own interpretation of a text stream, that invites all manner of bugs. One can improve on that with an ICD, but to be honest the easiest solution is a proper schema describing in detail what is and what is not valid data on the interface. There's a few of these around. XML schemas can express what valid XML is for the application, including the important part of constraints (value constraints and length constraints). JSON Schema can also define shape and constraints. Both are moderately successful, but do not begin to offer enough functionality for interfacing done properly (and are let down by poor tooling). Google Protocol Buffers does some of this - binary wireformat, schema language - but falls short by having no mechanism to define constraints, and having weird rules about what valid wireformat data is (e.g. ambiguous decodings of data, the best example being that a "oneof" does not in fact mean "only one of these can be validly sent"). Google made some peculiar design choices when they did GPB, throwing away some of the advantages of schema-driven code generation.

The granddaddy of them all is ASN.1, with its multiple wireformats (which include numerous binary representations for different purposes, as well as text representations like JSON and XML (several dialects)), its rich constraints, enforcement and extensibility features, as well as the ability for an ASN.1 schema to define values / instances of objects and not just object shape. There's reasons why the entire telephony (mobile and fixed line) machinery uses ASN.1. They really care about thoroughly defined and policed interfaces and no one wants to waste time implementing those interfaces or checks by hand, and nor do they want to waste expensive network bandwidth on bloaty data representations. You get the right tools (good ASN.1 code generators cost a bit of money), the schema, build it and *boom* your interfaces are done. I've wiped out months of system integration time from projects using it, and had a level of design agility that's really hard to get any other way.

>You also have a human readable format, which as the person who's often tasked with fixing obscure bugs in this kind of stuff, isn't an insignificant benefit in reality.

Having a human readable format is a useful debugging trait. But it's a mistake to turn that into "therefore all data that is exchanged should be human readable", because that's the worst possible format for proper definition by a developer and interpretation of data by a machine. One of the nice things about ASN.1 is that one can freely translate between data representations - e.g. sending a compact binary representation of an object via a constrained bandwidth link, whilst (for one line of code) sending a text version of the same object to logging. Even Google Protocol Buffers can do this to some extent (GPB objects can be rendered as JSON).
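A rough Python sketch of that idea (the record layout is invented; a real ASN.1 toolchain generates all of this from the schema): one object, a compact binary rendering for the link, and a human readable rendering for the log:

```python
import json
import struct

# One record, two renderings of it.
record = {"id": 42, "temperature": 21.5}

# Compact binary for the constrained link: big-endian unsigned 16-bit id
# plus a 32-bit float -- 6 bytes on the wire.
wire = struct.pack(">Hf", record["id"], record["temperature"])

# Human readable text of the very same object, for the log.
log_line = json.dumps(record)

print(len(wire))    # 6 bytes on the link
print(log_line)     # {"id": 42, "temperature": 21.5} in the log

# The decode side must agree on the layout exactly -- which is the job a
# shared schema does for you, rather than hand-written pack/unpack calls.
rid, temp = struct.unpack(">Hf", wire)
assert rid == 42
```

The point being that the choice of representation is per-destination, not baked into the object's definition.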

The problem with most serialisation technologies is that they're concrete - whatever it is you do to define the shape of an object (e.g. in an XML schema, or in source code class decoration) has a 1:1 mapping to what the object looks like in wireformat (XML). What really helps with system implementation is if the serialisation technology one uses is more "abstract", in that it does not define specifically how objects are represented, allowing for that representation to be flexed trivially throughout the system according to needs. The 'A' in ASN.1 is "abstract".

NHS left with sick PCs as suppliers resist Windows 11 treatment

bazza Silver badge

Re: Question is

I remember that episode well.

Its origins go back to how hard it was even then to assemble a good POSIX / Unix dev team, especially when "graphics" at the time on such platforms were mediocre, whereas Windows was a lot more sorted (though it still had shortcomings). Such systems (any that have anything to do with weapons targeting and release) are quasi-safety-critical, which cuts out an awful lot of established normalcy.

In modern parlance, frameworks like Electron would be a complete no-no, as were the equivalents of the day. Incidentally, you can see the impact of such restrictions today; the GUIs in the cockpit of an airliner or any military jet are pretty primitive.

As you've related, windows for warships turned out not to be a great success. However, I'm not sure we'd be any better off today. MacOS? Nope. Windows? Nope (see above). Android? Nope. Linux? Also probably no, at least not a mainstream distro.

In reality, a good choice is one driven by a careful and complete analysis of the requirements, and deciding whether or not one has to tackle unknown / untrusted data from external networks. A closed-off non-networked system with no USB ports can afford to carry a lot of security related bugs and simply never get updated. A networked system either has to be very particular indeed in its implementation to be thoroughly hardened to attack, or have an active monitoring, update and patching system in place.

bazza Silver badge

Re: Question is

Yes one can embed Linux and trim it down to size. But despite the advances it’s still a faff. Yocto is pretty good but you still need to be reasonably expert. Whereas if a machine starts off as a PC on to which one can just install a regular PC OS then you’re less dependent on such expertise.

At least that’s how many people (CFO’s, project managers) see it. And they’re poorly equipped to see the whole cost. One has to have a really good reason to be able to persuade such people to invest properly.

There’s tools and OSes like VxWorks and Integrity that make it very easy to minimise the code around the needs of the application, and are way less faff than yocto. But they cost money, and CFO’s and project managers can’t see beyond the end of the development phase to appreciate the cost savings in support. Nor do they want to keep an expensive dev team on when the development is “complete”.

So far as I can see, it’s game over for embedded OSes in devices where a screen is a necessary component, and unfortunately it’s not anything to do with good technical choices.

For medical devices the fact that the FDA’s rules on software certification are nuts (related to the recertification costs for rolling out patches) simply translates into a ripe commercial opportunity for the manufacturers that isn’t their fault. (Caveat: it’s been a while since I was in that line of work and the regulatory rules may be more relaxed). The regulatory environment wasn’t ready for the advent of putting Windows, networks and USB inside and around medical devices.

Yocto itself is running out of steam I think. RPi has shown that actually a full fat Linux can be laid down on an embedded system fairly easily. And if you think the Pi is good, NXP do a Debian distribution for their range of Arm devices. One of those, a Layerscape something or other, is 8mm x 8mm and can run a full 64 bit Linux with 2 GB of RAM, takes 1 Watt. Why bother with Yocto?

Major AWS outage across US-East region breaks half the internet

bazza Silver badge

Who Needs Hackers...

...when Amazon pull the biggest D.o.S. there's ever been?

Governments

If things become too cloudy, you've then got whole economies at the mercy of the cloud owner or their mistakes. You'd think that this event would wake up governments as to the grave perils that lie in proprietary clouds that lock customers in. I'm not holding my breath on that one...

Literal crossed wires sent cops after innocent neighbors in child abuse case

bazza Silver badge

Re: MAC addresses

A MAC address is an Ethernet concept. If the connection is PPP over something serial (which, often, your internet connection is that), there's no MAC address for your end.

Same for dial-up; you're making a call from phone number, not a MAC address, and spoofing the phone number requires fiddling about with wiring not with the content of some register on a modem.

There is a problem here. To "prove" that an IP address is associated with a property, you ideally need to be on the network in the property pinging out to a "what's my IP address" service. I'm not really sure of a way of determining for sure from outside the property that an IP is at a physical location. You might have to tap the wire at the property boundary (or nearest equivalent place) to be totally sure.

Otherwise you'd have to audit thoroughly the wiring from the property to the computer on the ISP's network that's issuing the home with an IP address, which is what they had to do in this case to find the actual culprit.

bazza Silver badge

No, it's the responsibility of the police to verify evidence. That's literally the core part of their job. If prosecutions were launched on the back of unverified evidence, you'd be able to get someone into a lot of trouble by simply selling the police a tall story.

If they had got as far as pressing charges, and in the court case the defence said "not our chap's IP address", they'd have had serious egg on face for not having verified that simple, basic fact, and would likely be up on a perjury charge (having sworn an oath as to the correctness of their evidence).

BT screwing up is bad too, but hey they do that all the time.

This could now be a problem for previous cases. If it turns out that the police as a matter of routine have not been verifying an accused person's IP address really was what the police said it was, there could be hell to pay. In cases where other evidence was found, no problem. But if a case had come down to nothing more than "the traffic came from or went to your property", that could be a problem.

SpaceX's Starship: Two down, Mons Huygens to climb

bazza Silver badge

Re: SpaceX's approach is to deal with each problem as it arises

NASA didn’t wing it with Apollo and Saturn V. They went through a whole lot of design concepts but then settled on one and very carefully built it (albeit by the cheapest contractor). A lot of systems engineering practices got evolved there as a result.

The only reason why they could pivot LM engines from one supplier to another was because of the care they took in their process to ensure that the requirement was well defined. This meant that the pivot was not full of cascading consequences throughout the entire vehicle design.

Who gets a Mac at work? Here's how companies decide

bazza Silver badge

Re: Horses for Courses...

It is indeed horses for courses. Though, Macs are no longer suitable for long RAM-hungry courses with their unupgradability.

WSL is working well here, lots of different distros on tap and it's easy to have multiple versions of the same one. I've never felt the need for a Mac for its unixness; it's been either Solaris or Linux on a Sparc or PC, or just a VM on a PC. For dev in particular, I find the ability to freely mix Windows and Linux tools with WSL particularly handy. There's just some tools that Windows has (like Scooter Software's excellent Beyond Compare) that have no equal on Linux (e.g. Meld tries to match Beyond Compare but - in my ironic opinion - it's no comparison). Notepad++ isn't equalled on Linux, and MobaXterm is entirely unmatched on Linux. Easy integration of Windows tools and Linux files (and vice versa) is a useful mix.

bazza Silver badge

Upgradable RAM

Surprisingly little mention thus far about the impossibility of upgrading RAM on Apple’s machines.

For some of the things I do, a Mac is a non-starter owing to the lack of memory. If you want to get down to some proper number crunching, it's Windows or Linux or FreeBSD on non-Apple hardware.

Intel's open source future in question as exec says he's done carrying the competition

bazza Silver badge

Re: If AI vibe coding...

No need to foresee it, the shortage is already here!

bazza Silver badge

I'm not sure what's actually going on here. Intel's reason to contribute so heavily to Linux was to ensure that Linux worked well on their hardware. They wanted that because Linux was becoming highly important in the server market. Wind forward to today, and Linux is even more entrenched. You'd think it was even more important. Intel's hardware is fundamentally uncompetitive; it's been a long time now since they led anything. Making their hardware more expensive to run software on, or harder to use, or just a nuisance of license admin is not going to attract users to their hardware.

Whereas the ARMs and AMDs of the world, who do make competitive hardware, will see this as "well, that's them gone (or at least, going)".

Intel can make this work for them, but they have to produce some super-competitive hardware that makes us all want to pay to use it. I just can't see that happening, not given Intel's recent employment history or the long-standing norms of the US economy.

Starlink is burning up one or two satellites a day in Earth’s atmosphere

bazza Silver badge

>Satellites, on the other had, are made from high-strength aerospace alloys, semiconductors, rocket fuel, etc. The chemical composition is wildly different.

>Things burning up in the upper atmosphere are rarely in their elemental form, and "burning up" is also a pretty misleading term; "heated by friction until it evaporates, is torn apart, or burns" is probably closer to what is actually going on.

Indeed so. But the same thing happens to meteorites. Most things that enter the atmosphere are stripped back to bare ions in the plasma generated on re-entry (a cloud of ions is pretty much the definition of a plasma), so it then matters little whether the origin of those ions is natural (meteorites) or artificial (satellites). All that could matter is whether we're introducing ions into the atmosphere that meteorites never have. Given the tonnage of natural debris that falls every day, I suspect that even if an element is found only in trace quantities naturally it may well still exceed our contribution.

Having said that, getting an estimate of our contribution of halogen ions into the atmosphere is probably a good idea, just in case.

bazza Silver badge

Satellite reentries are a rounding error. Also, Earth's atmosphere has been soaking up meteors for aeons, and it's probably an essential part of the ecosystem.
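As a back-of-envelope check on the rounding-error claim (both figures are rough assumptions, not measurements: natural meteoroid influx is commonly quoted in the tens of tonnes per day, and a Starlink satellite masses somewhere around 800 kg):

```python
# Rough comparison of artificial vs natural mass entering the atmosphere.
# All three inputs are assumed round numbers for illustration only.

natural_influx_t_per_day = 50.0   # tonnes/day, a commonly quoted rough figure
satellites_per_day = 2            # upper end of "one or two a day"
satellite_mass_t = 0.8            # ~800 kg per satellite, assumed

artificial_t_per_day = satellites_per_day * satellite_mass_t
ratio = artificial_t_per_day / natural_influx_t_per_day

print(f"artificial: {artificial_t_per_day:.1f} t/day")    # artificial: 1.6 t/day
print(f"fraction of natural influx: {ratio:.0%}")         # fraction of natural influx: 3%
```

A few percent of the natural influx, on those assumptions - small, though not so small that monitoring is pointless.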

There is the point that satellites might be introducing elements not generally dumped by meteors. However, it's hard to see why meteors would be totally devoid of some element or other and satellites rich in them.

There are potentially heavy elements like uranium and plutonium which we do put into space, but that's comparatively rare and far smaller in volume than what was put into the atmosphere by all that bomb testing. And, generally, "heavy elements" and "satellites" are not combined in substantial quantities, if the engineers can avoid it. Heavy elements also don't stay in the atmosphere as aerosols for very long I should think.

Having said all that, it's probably a good idea to monitor the situation. That depends on having already done a lot of monitoring already to get a view of the natural steady-state, because we (us earthlings) are about to conduct a very large practical experiment on the matter. Observing the impact - good, neutral or bad - is important all ways round. The trick will be to avoid leaping to conclusions prematurely.

OpenAI and AMD link arms for AI buildout: It's a power-for-equity swap

bazza Silver badge

Re: Well

At least AMD are making things people want to buy…

Windows 11 25H2 is mostly 24H2 with bits bolted on or ripped out

bazza Silver badge

Re: Sorry, probably stupid question

Of course, it could be that Rufus just fixed all the installer problems along with working other small miracles....

bazza Silver badge

Re: Sorry, probably stupid question

I think there was a foul-up of sorts that stopped various installations getting the update beyond 23H2 automagically. Some of mine had the same problem and had to be ISO whacked.

In fact, I recall that the only Windows 11 machine of mine that smoothly upgraded to 24H2 was the one that doesn't have a TPM and Windows 11 only installed on it at all because I turned on a lot of the options in Rufus. Why that machine upgraded cleanly and the "legit compatible hardware" ones did not, I don't know.

To dive down a rabbit hole - I was pretty impressed with Windows 11's tolerance of abuse. I tried installing a Rufus'ed installation with everything ticked on a really quite primitive machine. The installation worked, but the first boot was a nope (the CPU was missing an opcode MS have decided to put into Win11). On reboot, the installation wound itself all the way back to the pre-existing Windows 10 installation, which was seemingly utterly unscathed or altered by the near-death experience. So it's a pretty robust installer in some ways, and strangely fickle in others.

Linux's love-to-hate projects drop fresh versions: systemd 258 and GNOME 49

bazza Silver badge

Re: Who controls Linux?

No, he doesn't. At least, not for RHEL / Fedora. RedHat control that kernel, because it's they who build and release it. Linus has no ability to tell them to stop doing that. And at least for RHEL they're being a bit shy about releasing their source code. It may currently be a downstream derivative of the kernel Linus oversees, but it's not the same kernel.

People only use a distro at all if it's viable for their needs. When it comes to things like bug fixes, feature releases, those happen only if there's a large pool of developers writing the code. If, suddenly, those developers are withdrawn from freely distributed projects (such as systemD and Gnome currently are) that most now depend upon, every distro reliant on them is going to start becoming less viable, and less relevant. Linus and crew may still be updating their kernel, but it would become an irrelevance.

Asking whether this will happen or not is a bit like asking "will the sun set?", because this is a US company we're talking about. IBM may be playing a slightly long game, waiting for Linus to retire (whereupon chaos may ensue), then making their move and installing themselves as the custodian of the only Linux kernel and init/desktop that's getting developed and bug fixed.