* Posts by bazza

3501 publicly visible posts • joined 23 Apr 2008

curl vulnerabilities ironed out with patches after week-long tease

bazza Silver badge

Security Overblown?

Well, remote code execution is a pretty bad thing, even if someone has to "contrive" the circumstances to make it work. If an attacker wanted to get code execution on your system, they'd be doing their best to contrive the right circumstances, sure enough.

New information physics theory is evidence 'we're living in a simulation,' says author

bazza Silver badge

Re: A simple analogy.

Except that's not how evolution has actually worked. For example, whales are descended from the same land-based ancestor as sheep. Some proto-sheep decided to forsake grass / legs / wool / bleating for the very different attractions of krill / flippers / blubber / ocean-spanning song. And that proto-sheep itself had an ancestor that'd decided "to hell with living in the sea, let's see what's beyond the beach".

If one goes looking for micro-evolutionary changes, one will find them. If one goes looking for macro-evolutionary changes, one will find them too, but one has to look harder. It's all about what evolutionary opportunities existed in the environment. Even within just mammals, there's a vast variety that exploded out into the world following the demise of the dinos.

Fresh curl tomorrow will patch 'worst' security flaw in ages

bazza Silver badge

No one ever uses it with root privileges, right?

Blockchain biz goes nuclear: Standard Power wants to use NuScale reactors for DCs

bazza Silver badge

Re: How long has Bitcoin mining got?

It's a futile activity. When fully mined out (as Bitcoin one day effectively will be), the only reason for anyone to keep validating the block chain is to participate in the voting for what the correct version of the block chain is. In effect, it'd be a very expensive way of offering free banking services to those minded to move money outside of government control, for no returned benefit to oneself apart from the dubious pleasure of opening the monthly electricity bill.

I've yet to hear of a bank that's operated on a charitable basis, never mind one that's also decided to take on the liabilities of a burned-out personal nuclear reactor sat in the car park...

bazza Silver badge

>This is just getting ridiculous. Maybe it's time to decide energy shouldn't be wasted in this way.

Absolutely agreed.

Missing the Whole Point of Distributed Ledgers / Block Chains

Worse, it's defeating the whole point of proof of work block chains anyway. The whole idea is that there's a large number of separate participants, all doing the work, all agreeing on what the block chain content actually is. There is a majority vote amongst participants; that's how bad actors are detected and defeated. Trouble is that the vote is valid if, and only if, one assumes that the participants are fully independent.

If you go and lump most of the miners all together into (for example) just one or two data centres, nuclear powered or not, then in effect there's only one or two voters. Each is perhaps composed of myriad instances all working on and agreeing with the block chain content. However, if they're denied the opportunity to vote (because, ultimately, their ability to vote is in the gift of the data centre housing them and the internet connection it controls), then they no longer count. Instances in another data centre can vote a different way, changing the "majority view" of the block chain content.

The danger inherent in nuclear powered data centres for block chain mining instances is that they could make it commercially uneconomic to host miners anywhere else. There's never going to be lots of nuclear powered data centres. Thus, the effective number of mining instances able to vote against a malicious change in the block chain could be severely impacted if one of the data centres goes offline.

All someone needs to do to take over the block chain is to knock out those few data centres' internet connections for long enough that their mining instances drop out of contention for voting on the block chain content. When they do come back online, it could be to find that the block chain has been altered to their disadvantage and the rest of the world has moved on.
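
To illustrate the point, here's a toy model (entirely my own, purely illustrative, nothing like a real mining protocol): treat each data centre as one effective voter weighted by its hash power, and see which fork holds the majority once the big sites lose connectivity.

```python
# Toy model of "effective voters": each data centre attests to one fork of
# the chain, weighted by hash power. Knock the big sites offline and the
# majority flips, even though nothing about the attacker has changed.
def winning_fork(sites: dict[str, tuple[str, float]], offline: set[str]) -> str:
    """sites maps name -> (fork it attests to, share of total hash power)."""
    tally: dict[str, float] = {}
    for name, (fork, share) in sites.items():
        if name not in offline:
            tally[fork] = tally.get(fork, 0.0) + share
    return max(tally, key=tally.get)

sites = {
    "nuke_dc_1": ("honest", 0.45),   # hypothetical nuclear-powered DC
    "nuke_dc_2": ("honest", 0.35),   # ditto
    "attacker":  ("forged", 0.20),
}
print(winning_fork(sites, offline=set()))                        # honest
print(winning_fork(sites, offline={"nuke_dc_1", "nuke_dc_2"}))   # forged
```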

Political Analogy

For an analogy based on the democratic process: relying on nuclear powered data centres for proof of work mining is a bit like putting the majority of voters in one particular district, holding a general election, burning the votes cast in that district, and letting the election be decided by the minority of voters in other districts.

Elon Musk's ambitions for Starship soar high while reality waits on launchpad

bazza Silver badge

Re: As much as I'd love to see Elon Musk permanently leave this planet. . .

Pretty sure that, as with most other bodies in the solar system, its inclination will get flattened out by the gravitational attraction of the planets. That's why the solar system has an orbital plane.

bazza Silver badge

Re: They don't dump the FTS

There's a big difference between a blob of plastic with a match put to it, and the same stuff contained in a linear cutting charge with a detonator embedded in it getting very hot.

bazza Silver badge

Re: Launch Pad Fixed?

> NASA was under a lot of pressure.

They were, which rather illustrates the bad set up for How Space Is Done in the USA (political motivations...).

The more sensible thing would have been to turn to the Europeans and get them to do a crew capsule. Ariane 5 was crew-rated (though never flown as such), and the ATV was also crew-rated, lacking only seats, a conical shape and a heat shield to become an up-and-down capsule. All the necessary avionics / thrusters were already done (and those are the hard, expensive bits), and it could autonomously dock with the ISS (unlike Dragon).

I guess it was a case of it had to be a US solution, and it was better that it was 4 years late than foreign!

I think part of SpaceX being 4 years late was that you have to do an awful lot of qualification for a crew-rated vehicle. And the qualification is not just for the vehicle; it's the fitness-for-purpose of the organisation doing the work that gets qualified first (i.e. is its approach to QC and assurance right?). That sounds a bit meta, but one will not successfully build a crew-rated vehicle unless the organisation is right. After all, look at what one (deliberate?) lapse did to Boeing with the 737 MAX.

I suspect it took quite a while for SpaceX to adjust from the Muskian gung-ho approach to the necessarily mature approach NASA needed to see before they'd let anyone get inside one of their products.

>Even with NASA's help, the first descent of a manned Dragon capsule almost lost the heat shield.

I hadn't heard that. Crew inside it? I think I'd have got pretty cross if I'd been onboard that one.

bazza Silver badge

Re: They don't dump the FTS

Falcon 9 is totally irrelevant. Stage 1 never gets fast enough for atmospheric heating to be an issue. Stage 2 gets disposed of.

Name an explosive that is guaranteed not to detonate when heated to 2,500°C (the temperature it would have to survive, still attached to the outside of StarShip on reentry).

bazza Silver badge

Re: As much as I'd love to see Elon Musk permanently leave this planet. . .

It's probably too late. The Tesla Roadster they launched ended up in a Mars-crossing orbit. It was supposed to reach just short of Mars's orbit.

Basically, the treaties these days add up to this: anything capable of reaching Mars's orbit or travelling beyond is supposed to be sterile to a very high standard. SpaceX / Musk went ahead without that, as it wasn't supposed to reach Mars's orbit - they had underestimated the performance of the first Falcon Heavy (despite knowing full well the performance of its constituent parts, Falcon 9s). Thus SpaceX put the US in the embarrassing position of not having lived up to its treaty obligations...

Eventually, that thing will crash on Mars, potentially contaminating it, unless it hits something else first. Its aphelion only just exceeds Mars's orbit, so I suspect that if it ever did hit Mars it'd do so quite gently (so far as these things are concerned). Possibly it won't burn up completely in Mars's thin atmosphere, so arguably the chances of contamination are maximised...

bazza Silver badge

Re: All not good

If the first flight was just a vanity shot, it could end up backfiring spectacularly. There's a good chance that all those failures - particularly the FTS - add up to the FAA not permitting a second flight.

For me, the FTS is a real danger to the whole project. If the FTS has to become linear cutting charges to open the vehicle up along its entire length, Star Ship likely cannot also reenter the atmosphere with that still attached. I don't know how you attach linear cutting charges such that they're not going to fall off during the launch, but can still be jettisoned when they're no longer required. Re-entering the atmosphere with explosives still on the outside sounds like a non-starter.

The whole concept is novel. There's never been a re-entry vehicle that's also had to have a launch FTS attached to it. Shuttle's FTS was on the SRBs and external tank; the Shuttle itself didn't need any (because it carried no fuel as such). Apollo - none (Saturn V - yes), and so on. Star Ship carries large tonnages of fuel up into orbit, and therefore needs a launch FTS, which might mean it cannot safely re-enter.

bazza Silver badge

Launch Pad Fixed?

They've not done a full-power test of it, so by definition it has not been proven fixed.

Every rocket engineer knows how to launch a rocket such as this without destroying the pad or damaging the rocket. That they've not done so is, probably, asking for significant and on-going regulatory difficulties.

>For all his faults he has still managed, with a lot of help, to turn the space launch industry on its head.

Ha. Falcon 9 became reliable only after the child in the room was sidelined and the grown-ups started listening to what NASA was saying about mandatory QC, if they ever wanted to get it rated for crewed flight. The credit really should go to NASA, who didn't actually have to help SpaceX fix Falcon 9, but did indeed help.

bazza Silver badge

Re: A brief look back

>Spacex have also proved the reliabilty of their rockets as they've lost 2 in 200+ launches (one to a helium tank failure and 1 to a manufacturing fault in the helium tank asembly)

That was back in the days when Elon Musk had more influence, and didn't care to pay for quality control. The first - when the helium tank came loose in flight - was because it turned out SpaceX weren't bothering to do any QC on the struts that attached the tank to the walls. The second - when the thing exploded on the pad during fuelling - happened because they were filling it with super-chilled O2 (to get more O2 on board), but had never bothered testing the CF-wrapped helium tank immersed in such cold LOX. Turned out that the coating failed, pure O2 started infiltrating the carbon fibre weave, and kablooie. The fact that they initially blamed that particular incident on a sniper operating from the rooftop of a ULA building tells you a lot about the maturity of the company at the time.

What happened since was that, to get rated for crewed flight, NASA obliged SpaceX to do their QC homework properly. This came about following what was reported to be something of a train wreck of a meeting between SpaceX and NASA, in which the company rolled out Elon's vision of "reliability demonstrated by means of having launched lots of them successfully". NASA said nope, and kept saying nope. NASA's input into the Falcon 9 programme is what resulted in it becoming a highly reliable launcher.

>As for starship, all they've proved with that is that it can fly with multiple engine failures, an APU failure, a flight control failure,its strong enough to survive the flight termination system, and it would be great for digging big holes really quickly if you got it to hover at 40 foot from the ground.

That is being generous! That first flight is still an on-going disaster for SpaceX, though few realise it.

Firstly, to get a license for the second launch the company is having to persuade the FAA that they know what they're doing. Thing is, it's the same people who claimed to know what they were doing first time round. The FAA, rightly, can ask "so, what's changed to make it reasonable for us to believe you?".

Secondly, the FAA itself. Their job is to vet license applications, and act as technical experts of last resort to ensure that companies really do know what they're doing. And, as the litany of things that went really badly wrong with the first launch shows - particularly the failure of the FTS to disintegrate the vehicle - the FAA failed in their role as tech experts of last resort; they believed SpaceX. The questions they should be asking themselves are: what went wrong in our assessment of the first launch license application, and how have we (the FAA) changed to ensure that we don't get it wrong a second time?

This second one really, really matters because, with a vehicle like this and an outcome like the first launch had, there is a very real possibility that a future launch could go just as badly wrong and end up wiping out Port Isabel or some other urban conurbation. It's the FAA's job to prevent that. If the FAA decides that it cannot be competent enough to guarantee that, then it can't grant a license.

Thirdly, the design itself. The failure of the flight termination system means they're going to need a bigger, better one. Thing is, Star Ship itself sitting on top also needs an FTS, and this too failed (despite Star Ship being fully loaded with LOX and liquid methane - you'd think it'd have gone up immediately, but it didn't). Also, Star Ship cannot afford to carry this FTS up into space. It needs to be dumped before reaching orbit, because Star Ship cannot have lumps of explosives strapped to the outside when it re-enters the atmosphere for a landing. But if a larger, more comprehensive FTS means that dumping it becomes difficult, then the whole Star Ship concept could be toast.

I think there is a real possibility of this: they used point charges to punch holes through the tank walls, that failed to set anything off, and Star Ship did not disintegrate on command. Point charges can themselves be easily detached. If instead they're forced to have linear cutting charges strapped up the length of Star Ship, to be able to cut the thing open end to end, I don't see how they can then also detach those. If that's what they have to have, and they cannot detach them, then Star Ship won't be able to reenter the atmosphere and land without blowing itself to smithereens in the process. So it won't be re-usable. So the launch tempo would depend on manufacturing, not simply on refurbishing / refuelling it. So the rate at which Starlink V3 can be deployed is limited and more costly. Which probably risks the entire show.

This analysis shows just how fragile a position SpaceX is probably in, and how dependent they are on the good will of the FAA to say "yes". Insulting the FAA is simply going to incline them to say "No", when there's probably still / already a ton of technical reasons to say "No" anyway.

Musk in hot water with SEC for failure to comply with subpoena

bazza Silver badge

Might He Get Deported?

One does get the impression that the federal authorities are being incredibly, back-bendingly patient with this twerp.

Musk is a US citizen only by naturalisation. In theory, if he commits a sufficiently severe crime against the USA (fraud is mentioned as being one such crime) he can be denaturalised. Some of the things he's (potentially) done (market manipulation, etc) might be starting to get severe enough to count.

If he does end up being denaturalised, deported and excluded he's going to lose an awful lot...

bazza Silver badge

There would still have to be due process to determine whether or not the penalty is applied, just in case it turns out there's a good reason why compliance was not possible.

Things like Grandmother's funeral, family medical emergency, etc., coupled with apologies and active engagement to re-arrange, probably count as "good reasons". I'm not sure that "I don't want to" amounts to a "good reason". I guess we're going to find out!

And now for something completely different: Python 3.12

bazza Silver badge

After all these decades there's still a GIL...
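
As a minimal sketch of what that means in practice (standard CPython assumed): CPU-bound work gains nothing from threads, because only one thread may execute Python bytecode at a time.

```python
# Minimal demonstration of the GIL's effect (CPython assumed): two threads
# of pure-Python work take roughly as long as doing the work sequentially.
import threading
import time

def burn(n=10_000_000):
    while n:
        n -= 1

start = time.perf_counter()
burn(); burn()
print(f"sequential: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=burn) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"threaded:   {time.perf_counter() - start:.2f}s  (no speedup under the GIL)")
```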

Musk's first year as Twitter's Dear Leader is nigh

bazza Silver badge

Re: Going downhill fast, and so is Twitter

It could happen. I expect DoD would be itching to get some control over Starlink and SpaceX. They may choose their moment...

Operating under a US licence, SpaceX is ultimately under the control of the US gov. Generally the laws around such things set out how a commercial operator has to operate for the government to be confident of meeting treaty obligations on how space is shared and managed internationally. Ultimately it's the government that carries the can, so the laws are generally structured so that government can step in and take control. Any hint that SpaceX are operating in a way that results in the US not meeting treaty obligations, the government is pretty much obliged to step in.

bazza Silver badge

Not really. He is sinking his own money into it, and he has a large supply of that. And he's also not paying his bills, another way of trimming costs. He can probably sustain Twitter for a very long time, though at some point the courts and creditors will get their way and it'll start costing him more than at present.

So, impressive only in the sense that it's the most costly demonstration ever of the maxim "a fool and his money will soon be parted". A wiser person would have cut their losses and binned the company entirely by now. A wise person would never have made such a rash offer in the first place.

UTM: An Apple hypervisor with some unique extra abilities

bazza Silver badge

Fabrice Bellard...

...Writes excellent stuff. I once found a bug in a piece of his code, fed it back, fix forthcoming. I actually felt useful that day!

US Trademark Office still wants to keep faxes, but is willing to try this cloud thing

bazza Silver badge

Re: Fax is still a fascinating punchbag

An online service still counts, provided it's giving the sender the receipts / acknowledgements the sender wants. An online fax sending service that didn't confirm delivery back to oneself as the sender is, effectively, no different to sending an email!

Email is certainly the more sane way to go, and works far better when things are going smoothly. I guess there's just some very fastidious people out there who want "certainty".

You probably can still connect a fax machine to the phone line you've got. AFAIK, when telephony moves from analogue / copper to VoIP, you end up with a box in one's house that supports plugging in a telephone. The router I have here in the UK from the ISP has that. And, if the VoIP codec is a lossless one (as is quite common, it seems), the fax machine will work just fine over that.

bazza Silver badge

Re: not quite simultaneous

Agreed, but you do still have to file at some point. Being able to say exactly when you did file is a big part of proving that you did indeed file, and gives the recipient very little room to manoeuvre if they've lost it!

bazza Silver badge

Re: not quite simultaneous

And if it's the recipient's fax machine that's stored the fax and then delayed / lost the print-out, or failed to forward the email, that's the recipient's problem. The sender has their copy and their proof of sending and reception.

bazza Silver badge

That's, well, odd. Fax was extremely well standardised; I wonder what was going wrong? Could be the pharmacy's phone line was really poor quality?

Reading around a few forums where die-hard dial-up BBS users hang around (there's more of those than there are dial-up ISPs), the move away from analogue to VoIP can actually be a boon, so long as the telephony provider uses a lossless codec (and some of them do). Apparently, if the right one is used, you can get the legendary, never-seen-in-the-wild, genuine-article 56k connection speeds. Happy days.

bazza Silver badge

Re: Fax is still a fascinating punchbag

PDFs sent via WhatsApp or Signal are about as close as I think it gets, without the verifiable proof of sending.

That's the beauty of circuit-switched networks: someone else (the provider) is willing to say that the circuit was indeed created and used for x minutes. Packet-switched networks cannot do the same thing; proof of sending requires cooperation from the recipient.

bazza Silver badge

Timing, and proof of it, could well be a key thing. Sending and receiving are indeed simultaneous, and what's more the sender can prove it. Being able to prove submission date and time independently is a big plus over asking for an email read receipt.

Ubuntu's 'Mantic Minotaur' peeks out of the labyrinth

bazza Silver badge
Pint

Yay, ZFS!

A lovely fs.

Still not sold on Wayland though. Or snap. Currently running an apt version of Firefox, direct from the horse's mouth (so to speak).

I've had no end of grief going from 20.04 (Nvidia drivers, everything working a treat) to 22.04 (Nvidia drivers, suddenly it's all gone unstable!). It seems to have settled down now, but I was sorely vexed for quite a while.

Intel spices up its FPGA game with open source and RISC-V freebies

bazza Silver badge

Re: Giving Away Free Stuff?

And I have no doubt that RISC-V in a soft core incarnation will be just as slow as Nios II was.

I'm not sure that you'll find that many FPGAs in those market sectors, especially these days. They're the kind of market sector the sales literature likes to quote, but there's not really a need for a big chunky FPGA in any medical or industrial application. You'd be amazed at how many aerospace applications are actually done in software.

Also, FPGAs are not "massively parallel", they're "fairly parallel", and they eventually run out of on-chip memory. As soon as the dataset becomes too big for their on-chip RAM, they need to rely on DDR RAM of one sort or another. The problem there is that FPGAs tend not to have many DDR interfaces, and quite often they're slow. This ends up being the bottleneck.

bazza Silver badge

Re: Giving Away Free Stuff?

Why did they do that? To slow it down? I jest.

But, it's worth pondering how many FPGAs your employer's application needs, and then compare that to the volumes of CPUs / GPUs sold every year. This has always been the problem with FPGAs. They don't get the sales volumes. They therefore tend not to be at the cutting edge in terms of cost per performance, and even if they are faster for an application they won't stay that way for long. Plus, the dev time is painful. By the time the firmware for an FPGA is finished, a few lines of, for example, CUDA on NVIDIA's latest GPU blows it out of the water. Or to put it another way, I can compile and start a program running on a CPU long, long before the FPGA build tools have completed place-and-route. No point being the hare, if the tortoise can get a day's head start.

At various points in my career, making choices between FPGAs and CPUs for processing, not once has an FPGA actually come out as "faster" or "cheaper". It's just too, too easy to pile up CPUs until you have enough of them, and software is a whole lot easier to write (plus, doing other stuff too is easy). FPGAs suit only niche applications, and the problem then is that there's not a very large market for them.

I've also seen numerous projects stung by FPGA snake oil. Xilinx used to be particularly bad. There was one mess which was the result of a die shrink done by Xilinx; for some reason, one single transistor wasn't shrunk, and this oversized transistor got lithographed over the top of dozens of others. Needless to say, that part of the chip didn't work. Trouble was, Xilinx kept schtum. When customers eventually concluded that they weren't going mad and that there had to be a problem with the chip and not their code, Xilinx simply said "sorry" and handed out a long-overdue errata sheet. There have also been problems with crosstalk between different I/O subsystems on some FPGAs. They may have got better at avoiding such mistakes, but they lost my trust decades ago.

bazza Silver badge

Giving Away Free Stuff?

Hmmm, are they having trouble selling these devices?

I know that Intel and AMD have both bundled into the FPGA business, buying Altera and Xilinx respectively, but for the life of me I can't think why. There's just not that many good applications for them. The current tech hotness - AI - is clearly best served either by GPUs, or by bespoke silicon doing all the adding up in analogue. FPGAs are just far too slowly clocked to be competitive, and aren't as parallel as some of the monster GPUs that NVIDIA is churning out these days.

FPGAs are an expensive way of doing things at scale; they're only "cheap" for specific niche applications, none of which appears to have ever been of interest to Intel or AMD before. Such applications tend to be ones where, ideally, one would be building bespoke silicon but can't afford to do so. However, such niche applications are evaporating; modern CPUs have such enormous grunt, and are available in all sorts of sizes, that there's increasingly little point suffering the complexity and inflexibility of an FPGA.

Airbus takes its long, thin, plane on a ten-day test campaign

bazza Silver badge

Re: An interesting experiment

It is as you say.

However, it's pretty likely that if Airbus still made it, and offered RR's Ultrafan on it as an engine option, Emirates would be ordering them.

And the way the airline industry is headed, the need for an updated A380 is only growing, not declining. Hub and spoke is still growing, and what is being found is that the "lots of long, thin routes" model is leading to more "long, fat" routes instead. For example, here in the UK there's A380s operating out of all of Gatwick, Heathrow, Birmingham, Manchester and Glasgow, several times a day in most cases. There's a really good breakdown in Aviation Week. There's some really important, profitable airports being served by the A380, and it's not possible to fly more aircraft into those airports, so the most profitable growth lies in more, bigger aircraft.

bazza Silver badge

Re: An interesting experiment

One of the main reasons is to squeeze Boeing into a corner. The 737 has no more growth in it, A320 does. Airbus are able to stretch the A320 into 757 territory.

For Boeing this is an absolute nightmare, because they wanted to do a new midsized aircraft. However, a stretched long-range A321XLR eats away at the purpose of the new Boeing at the bottom end. Meanwhile the A330neo - biggish though it is - is actually very good and low cost; it eats away at the top end of the new Boeing's market, whilst also being close enough to the 787 in performance to make selling 787s at a high profit difficult. So Boeing has an incredibly small specification window to aim at that is not covered by Airbus, so the market size is doubtful.

And the market has spoken. Airbus has 400 orders, for very little development cost. Boeing would never get those orders now for a new midsized aircraft, and has the full development costs to bear. No wonder NMA got canned.

bazza Silver badge

Re: MAX anyone?

One of Airbus's big successes has been in making all their aircraft basically the same from a pilot point of view. They even handle more or less the same (the fly by wire system is programmed to make that so). Airlines with Airbuses have a lot of crew flexibility, it being easy to have pilots and cabin crew qualified on all Airbus types. Fleet upgrades are also very easy to integrate with existing crew.

Boeing? Less so. Attempting to emulate that crew continuity in the 737 family led to the MAX, MCAS and a lot of dead people.

Pulitzer Prize winning author Michael Chabon and others sue OpenAI

bazza Silver badge

Fair Use? I Think Not

Look at it this way. If I, a human intelligence, read a book, I'd then know the content of that book and could also stump up summaries.

If I then did start spouting large sections of that in a blog, or wrote my own book but simply changed the names a little bit, I'd probably be in breach of copyright, committing plagiarism, or something.

Okay, so say I just put up a little quote from the book in a blog posting. Fair use? Yes, probably. But do the same thing week in, week out, moving on a page each week, and I'd be abusing fair use to republish the book by instalment. Even if a different reader read each blog post, it's still not fair use.

Basically, it depends. It also depends on how I'd acquired the book. Having a physical copy is one thing. Copying and pasting from a pirate site is another.

A good way of testing such cases is to replace "AI" with "human, but a very quick one", and see what existing law / precedent says. Probably, just by being quick and a large scale service, the things we accept humans are allowed to do do not translate over to a machine doing the same things.

The future of the cloud sure looks like it'll be paved in even more custom silicon

bazza Silver badge

So, what's new? Silicon designers have been baking specific functions into silicon forever, in pursuit of better performance. For example, one didn't need an 8087 maths coprocessor to do maths but it's a lot faster with one.

The real problems come when "custom" gets used to lock up a market, create monopolies. If an AWS customer gets hooked on some bespoke Amazon AI accelerator they can't use market competition to get a fair price.

The only good thing about what's going on is that with all these vendors putting so much effort into making it easy for customers to build ideas into product, it becomes easier to reimplement that product from scratch on some other hosting platform offering similar capabilities. Hopefully the hosting platform providers will overdo what they provide and make this too easy. It then becomes normal to develop your cloud app for, say, AWS and get it working there, and then redo it on Azure, Google, whomever, and keep those in one's back pocket. If AWS becomes too expensive, fire up the Azure version of your cloud app, shift the data over, bye bye Bezos.

That's basically what happened with cars. By deskilling driving, suddenly there's no reason to be brand loyal. Cloud app development is not yet at that level, but by trying to make it "easy", they're headed that way.

My fear is that they'll realise this, and work against it in subtle ways, e.g. by making it expensive to bulk-export one's data, or being shit at DNS cache cleaning, or something like that. For example Google, running their own DNS servers, could "accidentally" make your life hell if you'd had the cheek to move your website off their servers to Amazon's. "That's a nice web address you have got there; want it to continue to resolve properly to your AWS microservices?"

With a lot of these providers also being active in the search and advertising market too, the opportunities for them to suppress competition in Cloud are immense. Google are well known for exploiting their search muscle to force companies to advertise with them. Unrestrained, they and the others will do the same with hosting and AI.

Marketing call

"Advertise with us, like your competitor does, and maybe you'll get an AI as smart as the one they're running. No? Oh dear, your AI has just lost a few neurons, got a bit thick as it were, senility is such a sad thing to see in a machine as wonderful as yours... ".

Linux 6.6's in-kernel SMB networking server graduates

bazza Silver badge

Age Old Architectural Mistake Coming Home to Roost?

The only reason to have an SMB server in-kernel is to speed things up. The only reason it gets sped up, being in the kernel, is because that is where the networking is. If the networking were predominantly out in user space - like it is in FreeBSD, Windows, Mac, literally everything else - they wouldn't have to keep shoe-horning such things into the kernel.

The way this will end is with the Linux kernel being bloated and full of vulnerabilities, whilst network applications / servers not yet ported into it will remain encumbered. If there were one thing to do to guarantee the future value and correctness of the kernel, it's moving the networking out to userland now, before it's too late (if it's not already).

bazza Silver badge

Huh?

From the article:

As one comment on Hacker News said "Unless this is formally proven or rewritten in a safer language, you'll have to pay me in solid gold to use such a CVE factory waiting to happen."

Well, I know what they mean, but being paid in solid gold is only of any additional remunerative value if one is given sufficient ounces to be worth more than one's salary!

GNOME 45 formalizes extensions module system

bazza Silver badge

Re: A desktop...

And, even if a web-tech-based desktop were a good idea, JavaScript is the absolute worst choice of all of them... I mean, not even TypeScript? And where will they be if it becomes clear that things like WebAssembly are actually the way to go, and it's time to stop using tired old JavaScript?

The idea of a standardised plug-in interface suddenly becoming viable, because JavaScript has recently standardised modules, does rather point to how poor a choice JavaScript has been. With native code, we've had the concept of a shared library (or DLL, in Windows lingo) for literally decades. Almost the very purpose of a shared library / DLL is to allow extensions to code, and not necessarily by the original authors.
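
To labour the point with a sketch (Python via ctypes, assuming a Unix-ish system where libm can be found): resolving a symbol out of a shared library at runtime is all a native plug-in mechanism fundamentally is.

```python
# Loading code at runtime from a shared library - the decades-old native
# extension mechanism mentioned above. Assumes a Unix-ish system with libm.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double      # declare the C signature...
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))                    # 1.0 - a symbol from a .so, not Python
```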

And, oh, the heavyweight nature of running JavaScript, the lack of threads (at the time they chose it?), the lack of code rigour, the opportunities for hidden, found-at-runtime-in-unusual-circumstances bugs. Just the worst possible choice for a fundamental component of a desktop operating system.

Musk's mighty missile is ready for launch once FAA says OK

bazza Silver badge

Re: I doubt the FAA

The FAA does care about pad destruction, because it pertains to the possibility (and, in the first launch, certainty) that the booster gets damaged as a result.

The difficulty is that they can carry out a launch safety analysis against only the design of the rocket, but if the rocket gets damaged by a disintegrating pad it's no longer as per the design. Thus the thing flying is now different to the thing that was assessed, and your safety case has no validity from that point onwards. You are then trusting to luck.

For example, if a lump of debris took out the radio antenna for the flight termination system, it would no longer be possible to remotely destruct the vehicle. You now have a rocket flying that you may very much want to stop, but you cannot. If it's also started heading off towards, say, Port Isabel, perhaps because a bunch of engines on one side of the rocket got taken out, you're going to wind up with 5,000 dead people.

This is why "Foreign Object Damage" (FOD) is such a big deal in aerospace. One tiny piece of the wrong material in the wrong place can cause disaster. Lumps of launch pad flying around the place at high velocity are almost certainly going to result in FOD on a truly grand scale, with a high potential for major problems as a result.

Other things going wrong (e.g. an engine explosion) can be modelled, assessed, allowed for in the design and deemed safe. Though it looks like they got quite a lot of that kind of thing wrong last time too.

bazza Silver badge

Re: Premature Stackulation?

>Sadly, I know the FAA representative who's signature is at the bottom of the last authorization. She was great to work with when I was in aerospace and she was part of the group that kept our group from behaving badly. It makes me think she's being handed decisions from on high rather than it being her discretion.

If that's the situation she is in, and that's what's happening to her, she is in an impossible situation. It's also something that other world regulators would like to know about, in some detail. Because if that is the way the FAA is operating, then their reputation around the world is toast. One can imagine what that'd do to Boeing's prospects...

My advice to engineers in a position of undue and dangerous managerial pressure is to print out all the emails (including headers, delivery receipts, read receipts if available), especially the one saying "if you make this happen it will kill people" (or its equivalent), lodge them with a lawyer immediately, witnessed as such by the lawyer, and only then quit. That way if something bad does happen, you have the evidence you need to save your own neck, under your lawyer's control, and admissible as evidence in a court of law. I would do that with another lawyer too for certainty, and keep copies oneself just in case. You are then not dependent on your former boss / employer to stump up that evidence on your behalf at a time when their own necks are on the line.

If one wants to continue to strive to prevent a disaster, you need to get legal guidance, and that possibly then results in a joint approach to law enforcement. Never go to the police / press without a lot of top cover.

The alternative - just quitting - is dangerous, because whilst you might eventually be exonerated by a court case, it's going to be one hell of a rough ride getting there.

Far better to have incontrovertible exonerating evidence on hand to show investigators, police, because that way one is very likely to avoid arrest should a disaster occur. And if they do arrest you, then it's probably a wrongful arrest (which has its compensations).

bazza Silver badge

Re: Premature Stackulation?

Regulatory capture is a problem to solve, though I think that since the 737-MAX the FAA, US Gov, and for the moment US politicians (who fund the FAA) have realised the importance of an independent regulator (and, more importantly, one that is very evidently independent) to the reputation of the USA and its industries.

The MAX crashes and the independent actions of regulators globally was effectively the rest of the world deciding - officially, at national leader level - that "Made in the USA" was not to be trusted.

A Problem that is Uniquely Unfixable in the USA?

The rest of the world fully understands how the FAA got captured by Boeing, and part of it is related to how the FAA is funded (by US politicians), and how the politicians in both parties down the decades consistently cut the budget, staffing, and scope of the FAA's work. However, the means by which the FAA's budget is set (and therefore its capabilities) has remained unaltered. The politicians still hold the purse strings. The politicians will change, forget, and so on.

We're probably (at best) only 10 years away from US politicians once again saying, "why do we put so much money into the FAA when there are no crashes happening?", and once again cutting the budget. The meta-issue going forward is that the rest of the world sees this, sees no changes to protect the FAA from politicians, and knows that the FAA is still vulnerable to a reduction in funding / scope. Other world regulators have to report to their lords and masters that there is no guarantee coming from the USA that effective regulation continues indefinitely.

In other words, "Made in the USA" still cannot be fully trusted. And given that this is essentially a political problem and the USA seems to have a lot of paralysis over such matters, there seems little prospect of change.

Geopolitics

In my view, this is a huge geopolitical / industrial problem for the USA going forward. The implication is that the US's former dominance in aviation came into being despite the political set-up in the USA, not because of it. If a political system is effectively loaded against the success of an industry, that industry will, long term, cease to be.

It has other consequences. With rockets, ineffective regulation by one country might have disastrous consequences for another. One country's failed experimental rocket launch can look to another country like a deliberate act of war, especially if it keeps happening. This very thing is going on at the moment with one of China's launchers, where they simply abandon the booster to fall back to earth in some random, uncontrolled location.

To date, with rockets launched from the USA on the eastern seaboard, so far as I can tell launches are either in the drink or safely in orbit long before they could trouble countries on the other side. Similarly any worst case hypothetical re-entry mishap would result in minimum damage caused by an errant capsule, or a Shuttle gliding in to open countryside somewhere. It's very easy for other countries to be relaxed about it.

However, with StarShip, the USA is in a particularly interesting position in that StarShip is intended to re-enter the atmosphere still carrying a load of fuel and LOX to be used for landing. It's not called a bomb, no one is intending it to be a bomb, it's not got "bomb" painted on the side. But, coming back into the atmosphere uncontrolled and detonating on impact, it would definitely go off like a pretty convincingly big bomb.

By "big", well; Wikipedia lists it as having a capacity for 1,200 tons of fuel/LOX, some of which will have been used up getting into orbit. But, it's easy to see it still having a couple of hundred tons after re-entry for landing. If that detonated on crash landing, it'd not be on quite the same scale as the explosion in Beirut explosion in 2020, but it'd be up that way (depending on the explosive equivalence to TNT of a methane/LOX explosion).

So the USA and the FAA are on the hook to make certain that StarShip re-entry accidents do not ever occur. Especially not in other countries' cities. Can you imagine the ramifications if that happened in, say, Beijing, or Moscow? How comfortable can we be, knowing the fragility of the FAA's funding / effectiveness, and that the company boss seems to have a "send it" approach to getting things right? The USA and the rest of us are depending on the FAA to ensure that SpaceX / Musk doesn't cause a major geopolitical incident, either by accident or by design. It's one of those low-probability, high-impact events that people like to dismiss, but probably shouldn't.

bazza Silver badge

Re: Premature Stackulation?

I had to look that up. Yes, that'd do the job.

Though I think there might be some people concerned about the idea of a privately launched rocket with a nuke inside it. Especially if Musk has got anything to do with it.

bazza Silver badge

Re: Premature Stackulation?

>While I understand what you're saying, if your staff have got it wrong once, and are afraid of getting it wrong again, perhaps you might need new staff.

Absolutely so. Thing is, the FAA is not exactly staffed by amateurs, and it's hard to imagine where better, more capable people would come from. If they say "we cannot make a reasonable assessment of this launch request", then that's probably that. No launch permission can be given.

>But when you're done that, it's time to light the blue touchpaper and see what happens. Being totally risk averse achieves nothing; the benefits come from trying, failing, learning and trying again.

There's nothing wrong with trying, failing, and having another go. Except that, if the backup safety measures are not assured to work (specifically, the flight termination system), then you should not be allowed to try and try again. What was so concerning about the first flight is that, had it deviated off towards Port Isabel, it turns out that they had no way of destructing the rocket before it would have hit Port Isabel.

The FTS is quite intriguing on Super Heavy and, in particular, on Star Ship. Star Ship is the first proposed orbital vehicle that is intended to take a lot of fuel up into orbit, keep it, and bring it back into the atmosphere for a powered vertical landing. It has to have a flight termination system, because it is in itself quite a chunky rocket carrying a lot of fuel / LOX. That FTS is always going to be some sort of explosive device on the outer skin of the rocket. Yet, they're going to have to ditch that FTS before reaching orbit, and certainly before re-entry into the atmosphere. Having explosives on the outside during reentry is not a good idea.

I think the FTS on the first launch failed because they used point charges (which didn't do enough damage), and they used point charges because these are items that can be detached from the rocket and dumped. If they're forced to switch to linear cutting charges running the whole length of StarShip (much like the Shuttle tank and SRBs had), then that sounds like something a lot, lot harder to detach from the rocket and dump. It's just possible that there isn't a viable, detachable dumpable FTS design. If that proves to be the case, then StarShip is an impossible concept.

bazza Silver badge

Re: environmentalist wackos throwing a sueball

It's not really a matter that anyone needs a court to comment on. They had a process, it came up with an answer, and it was widely seen by the millions watching to have been the wrong answer. Any other assurance based on the same process (e.g. mitigation of worse harms) is also weakened.

The FAA does need to tread fairly carefully. There's more than just environmental safety at stake, there's also human safety. Although the rocket itself is uncrewed, it's more than capable of flying to Port Isabel in only a few seconds and blowing the whole place - and everyone living there - to smithereens. The events of the last launch showed that whatever process was followed to assure launch safety was wrong - the loss of directional control soon after launch and the failure of the FTS to T the F meant they'd been relying on pure luck for that one. If they just follow the same old process, and come up with another wrong conclusion, well that could be a very bad day for everybody. There's 5000+ people over there who are counting on the FAA ensuring that they come to no harm. Less proximate, but still of concern, there's lots of other people down range who are kind of hoping that an errant, out-of-control rocket with a defective FTS doesn't come landing on their heads.

bazza Silver badge

Premature Stackulation?

Hmmmm, are we going to be treated to another "the FAA is in my way" tirade? Musk has previously just gone ahead and rolled something out to the launch pad without bothering to wait for FAA clearance, and used that as a PR lever to bash the government regulator. After the many, many things that went wrong on the first launch, I'd not be surprised if SpaceX and the FAA have had and continue to have differences of opinion about "corrective actions".

From the FAA's point of view, the first launch's safety ended up relying on pure luck - the uncontrolled rocket chose not to fly off towards Port Isabel - and that is something that really, really cannot be allowed to happen again. For the first launch, they'd assessed material / evidence from SpaceX to demonstrate the launch's safety, and reached the wrong conclusion about the reliability of that material / evidence. That's got to have raised the bar for how convincing material / evidence from SpaceX has to be for the FAA to, once again, accept it.

It has also got to have raised questions inside the FAA about their own ability to accurately assess such material / evidence. They made an assessment, and got it wrong. What have they got to change to be sure that their next assessment, for the second launch, is more reliable? It's possible that the FAA cannot reach a conclusion about the safety of the second launch, for it to happen in the way that SpaceX want it to. This could mean that, no matter what material / evidence SpaceX has presented, the FAA may decline to make a pronouncement on it.

The tricky thing for SpaceX is that, if the FAA reaches such a position, it's not as if there isn't another way. For example, SpaceX can demonstrate the robustness of the launch facilities without actually launching a booster. They could fuel one up, clamp it down, fit it with the Mother of All Destruct Systems (i.e. a lot more explosives than last time) in case it breaks loose, and do a full-power firing for an extended period of time. If the launch pad survives undamaged, then the FAA could be convinced that an actual launch would take place without the rocket getting damaged by a disintegrating launch pad. I know they have recently done one small test along those lines. Whether or not it was a "big" enough test for the purpose of fully de-risking the launch pad is something I don't know. Arguably, even if it was, doing that test only once is probably not adequate for certainty. Pressing for an actual launch after just one test forces the FAA to consider the possibility of damage occurring during the launch, which after the last debacle is possibly asking too much.

From the FAA's point of view, if they permit a second launch and that too goes wrong and causes injury (say, Port Isabel gets wiped out this time), they'd have totally failed in their statutory role. People would ask, "why did you believe them the second time?". Personally speaking, if I were in that position, I'd want a lot of other people looking at the vehicle, launch site and launch design before I'd even think about putting my name to a second launch. I'd also be reluctant to put my staff in the position of - having got it wrong once - having to put their names (and only their names) to new assessments.

ArcaOS 5.1 gives vintage OS/2 a UEFI facelift for the 21st century

bazza Silver badge

Re: OS/2 and IBM?

>It was the right move, sadly, and it was a solid business decision.

Agreed, but I'd argue that IBM's lack of vision created the circumstances in which killing it was the best move. I know there's a complex early relationship between IBM and MS with OS/2, but had IBM decided early enough "this is a war we're going to win" and put in the resources and imagination to win it, they could have. By "early enough", I mean at the time they decided to bork OS/2 by supporting the 286. Had they been resolute in heading towards the 386 and 32-bit, saying "this is the future" (a bit like Win NT did), things now could have been very different. But instead, the boring, business-orientated thinking within IBM killed off any prospect of that; talk about missed opportunities. OS/2 never properly recovered from having supported the 286.

bazza Silver badge

Re: Looking forward to that "more in-depth review"!

1992 was the pivotal year - that's when I first got it!

>> I used to embed OS/2 headless on VME boards. It was actually pretty easy to do this oneself.

>Gosh. Never tried that. I never tried building a GUI-less OS/2 but I did evaluate OS/2 1.0 which didn't have one yet.

I recall (dimly now) that I looked at how the OS started up the WPS desktop, and edited it (a text file somewhere) to start my (console only) application instead. It was pretty easy, and worked a treat. Having a proper, pre-emptively scheduled 32bit OS with excellent compilers (Watcom) was something of a luxury. I wrote a ton of code for OS/2 for this kind of thing.

>So, kudos to you if you stuck with it through the whole 1990s. I am afraid I didn't.

It wasn't easy towards the end, especially with the Web taking off in a big way, and by the time Win2k came along I was definitely, absolutely and totally ready to get off OS/2. By then the antiquity of the software I'd got on OS/2 (probably all for Win3.1 if it was closed source) meant that starting afresh on Win2K wasn't too painful.

Win2k was a relief as much as anything else, though HP announced they couldn't be bothered to do Win2k drivers for the inkjet printer I'd got, despite it being nearly brand new. Haven't bought anything from them since!

The embedded stuff I was doing I transitioned to "proper" embedded OSes such as VxWorks, which was veeeeeery nice, definitely worth the money. I wouldn't have been able to convince the bean counters of the worth of things like VxWorks had it not been for the earlier work done on, and success experienced with, OS/2. Using "cheap" OS/2 for a while also allowed for the settling out of the market in embedded OSes. By the time it came to move off OS/2, early candidates like OS-9 were definitely on the wane. So that saved taking a wrong step in "stepping up" our game.

bazza Silver badge

Re: Looking forward to that "more in-depth review"!

I went from OS/2 to Win2k, that being the first version of Windows that was half decent. But I miss OS/2.

I used to embed OS/2 headless on VME boards. It was actually pretty easy to do this oneself. At a time when Linux was not ready for anything, really, and other embeddable OSes were very expensive, OS/2 was a pretty good option.

What happens when What3Words gets lost in translation?

bazza Silver badge

From the article:

"If W3W is widely adopted by emergency services it must be subject to rigorous evaluation," he explained in a preprint paper titled: "A Critical Analysis of the What3Words Geocoding Algorithm."

The Register contacted Arthur to talk about his findings but he declined, stating that he hopes the paper will be published in an academic journal and that he would rather wait until the peer review process is complete before discussing the work.

Hmm, well, if he didn't want to discuss it prior to publication, why put it up on arxiv?

I'm a little confused as to what the point of the discussion is anyway. So far as I can tell, W3W are saying that for locations near, for example, Crewe there are unlikely to be clashes / confusions and you'd double check anyway (i.e. correct for gross, wrong-continent errors). Whereas everyone else (such as this preprint?) seems to be considering the matter globally? It seems fairly reasonable that, in most applications, you'd anyway double check that when someone says "over.ages.they" they do actually mean "West Street in Crewe", after which you can be confident that they're outside the Crewe Heritage Centre.

I can understand problems occurring when a phone's location services are a bit unsure and are just reporting a position guesstimated from what cell towers it can see. You can see that on Google Maps sometimes, when it paints an enormous circle to say "you're somewhere in here, maybe".

That's not to say that W3W is the best way of verbally conveying accurate location. As the article mentions, the OS Grid is pretty much ideal for this kind of application here in the UK.

Or another way is to simply assume that whatever emergency service you're calling can guess your lat / long to within a degree anyway, just through which cell tower you're using, and you just quote the eight digits that come after the two decimal points, to four digits each (which is pretty accurate). Those could even be sent down the phone call as DTMF tones. (patent pending)
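
A minimal sketch of that idea (entirely hypothetical, as is the patent): strip the integer degrees, keep four decimal places of each coordinate (roughly 11 m of latitude), and emit eight digits that DTMF can carry.

```python
# Hypothetical sketch: encode the fractional parts of a lat/long fix as an
# 8-digit string for DTMF. Assumes the integer degrees (and the signs) are
# already known at the far end, e.g. inferred from the serving cell tower.
def to_dtmf_digits(lat: float, lon: float) -> str:
    frac = lambda x: abs(x) % 1.0
    return f"{round(frac(lat) * 10_000):04d}{round(frac(lon) * 10_000):04d}"

print(to_dtmf_digits(53.0979, -2.4324))  # a fix near Crewe, say -> "09794324"
```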

UK air traffic woes caused by 'invalid flight plan data'

bazza Silver badge

Re: Validating the input...

Certainly, having a way to at least junk old, irrelevant, would-have-happened-in-the-past data would make sense. If a flight plan refers to a previous date / time, it's never going to happen.

Clearly there is an old and carefully thought-out design decision lurking in the background to this.

I've no idea what data and what format is in a flight plan, but I suspect that it's designed around a text file in an agreed format. That is a way in which the general / private aviation community can formulate a flight plan by hand and actually get up in the sky without the need for some specialised software. Clearly, some specialised software and thoroughly validated data exchange with organisations like NATS would be more reliable with less human overhead, but it would be bound to price the general / private aviator out of the sky.

No doubt there is much talk of "is it time for a change?" going on, but I kinda hope that they stay as is and don't let the interests of big business result in small businesses / private individuals being excluded.

bazza Silver badge

Re: Resiliency – we've heard of it

If you write an XML schema properly, and use proper tools that fully understand XML Schema (a rarity), then you can define valid content in the schema itself. For example, you can constrain the valid values of an integer, or the length of a list, and parsers / serialisers generated by proper tools will generate code that checks that such constraints are honoured.

So far, so good. As I hinted, most XML XSD code generators I've come across ignore the constraints and generate no code for them. Examples include Microsoft's xsd.exe, with a lame excuse (I found their reasoning buried deep in some docs) that amounted to "computers do not constrain the values of integers or the lengths of lists". Well duh, that's why we have software, and write it to impose constraints when we want values and lengths constrained.
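
Not every toolchain ignores the facets, mind. A minimal sketch with the Python xmlschema package (my choice of example tool, not something mentioned above), showing a value constraint actually being enforced:

```python
# A facet-constrained element, validated with the "xmlschema" package -
# here the constraint written into the XSD really is checked at runtime.
import xmlschema

XSD = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="channel">
    <xs:simpleType>
      <xs:restriction base="xs:integer">
        <xs:minInclusive value="0"/>
        <xs:maxInclusive value="15"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>"""

schema = xmlschema.XMLSchema(XSD)
print(schema.is_valid("<channel>7</channel>"))   # True
print(schema.is_valid("<channel>99</channel>"))  # False - violates maxInclusive
```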

However, so far as I know, XML XSD is far from "complete". I'll explain what I mean. Whilst you can express constraints in an XSD file, you get nothing in the generated code to tell the program that uses it what the constraints actually are. So, touching on your "that data-program may crash because you have coded a stack overflow" point, it's hard for the program or developer to know, for example, how much memory to allow for the length of a list when parsing some XML data. Perhaps not a problem in a server / desktop application where memory is bountiful and memory allocation is automatic in the generated code; more of a problem in an embedded system where perhaps the developer has had to decide how much heap a process will need.

A serialisation technology that is "complete" is ASN.1, though not all the ASN.1 tools fulfil the whole standard. With ASN.1 you can define messages, setting value and length constraints as you do so, just like XML XSD or JSON schemas. What is unusual is that you can also define static constants alongside messages, including integer constants, which can be used as the constraints in the definition of other messages, and these static constants also show up in generated code. So for the developer, the generated code contains 1) parsers / serialisers for objects (messages), 2) automatic constraint checking whilst parsing / serialising, and 3) constants that can be used to understand the extents of constraints, which can then be used for all sorts of purposes in the code the developer writes.

One such purpose might be iterating over the length of a received list. If the list is defined as containing 10 entries, then the for loop can run from 0..listlen-1, where listlen is a constant that comes from the schema, not from the developer.
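
To sketch the pattern (all the names below are invented; a real ASN.1 compiler emits its own constants, in whatever target language you asked for):

```python
# --- what a schema compiler might emit alongside the parsers (hypothetical) ---
TRACK_LIST_MAX_LEN = 10   # from something like "SEQUENCE (SIZE(0..10)) OF Track"
CHANNEL_MAX = 15          # from "INTEGER (0..15)"

# --- application code written against the generated constants ---
def preallocate_track_buffer() -> list:
    # Buffer sizing comes from the schema, not a developer-chosen magic number;
    # change the schema, regenerate, and this resizes itself.
    return [None] * TRACK_LIST_MAX_LEN

def check_channel(ch: int) -> None:
    if not 0 <= ch <= CHANNEL_MAX:
        raise ValueError(f"channel {ch} outside schema constraint 0..{CHANNEL_MAX}")
```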

The consequences can be quite profound. System behaviours related to the constraints on valid / invalid data can all be driven from the schema, not from developer-written source code. This means that all such constants have a single definition - irrespective of programming languages used across the whole system. Change the schema, rebuild the system, and the entire system is updated with the new constants and thus the new behaviour.

This can have profound consequences for how you run projects. If you have a risk of stack overflow due to the amount of data to be received, you can have the stack size driven by the constants in the schema. It either works (there is enough memory), or the code throws an exception when it can't get enough stack in advance of needing it. If you need to extend the length of a list to contain more items, and you need programs to generate / process the extra items, then so long as they're using the constraint constants from the schema-derived generated code, a single change in the schema brings about the required code change system-wide.

That means you no longer need a developer to make the change; the schema author can safely make it, at any point in the project life cycle, even quite late. You can be agile with the definition of messages in the system right throughout the project development cycle, because changes to message definitions do not have to result in any re-work.

Pretty neat for a useful, old technology. Especially as it can emit / consume binary formats as well as JSON and XML...

It's possible that JSON Schema can, in some circumstances, pull off the same trick (I'm less familiar with JSON schemas, but I know that JSON is essentially executable JavaScript, so who knows what can be done!). However, when I survey the vast array of serialisers out there, it's remarkable how bad a lot of them are in terms of what they can actually do for developers. For example, Google Protocol Buffers is much lauded and widely used, but it does absolutely nothing to help developers validate inputs; there are no constraints (apart from an independent alpha/beta-quality extension), so developers (if they bother to validate messages at all) have to communicate by email, or a Word document, or comments in the .proto file to understand what the valid range in a message can be. Most serialisers out there have not considered the role of serialisers in project development, or in project management.