* Posts by bazza

3457 publicly visible posts • joined 23 Apr 2008

The end of classic Outlook for Windows is coming. Are you ready?

bazza Silver badge

Re: PST files

This is an aspect of corporate email that most do not appreciate. Email can be a substantive record admissible in court. However, to be admissible, the record needs securing. A way of doing that is, as you have said, to export a PST.

The file can be archived onto media, and the person doing so / looking after the media can easily attest in court that the PST file is a complete contemporaneous record, unaltered since. The media can even be put in an evidence bag and sealed.

You can't do that with email stuck in a server. In fact, the emails are probably not admissible at all, as it's likely difficult for anyone to swear that the server content "now" is a complete record of what the content was "then".

An unviable alternative is to print everything.

Trying out Microsoft's pre-release OS/2 2.0

bazza Silver badge

Re: Pints' on me Brian

>Am I the only one who remembers OS/2 as "Oh Shit 2"?

I used it a lot, clung on to it as my primary desktop for far too long.

It was quite good for embedded work too. I used to install it headless (with a bit of manual trickery) on x86 VME cards. Bear in mind that at the time Linux wasn't a thing, and "standard" full fat 32 bit multiprocessing multithreaded OSes for VME cards were kinda pricey. OS/2 was a pretty good option.

Sandra Rivera’s next mission: Do for FPGAs what she did for Intel's Xeon

bazza Silver badge

Re: Dead End

Pretty sure they're not going to have $55billion's worth of advantage over CPUs.

As for latency, there's nothing in particular about FPGAs as such that gives them an advantage. They do as well as they do largely because interfaces such as ADCs are there on chip, rather than being at the end of a PCIe bus. If one put the ADC on a CPU, hot-wired into its memory system, that too would have a lower latency. CPUs these days also have a ton of parallelism and a higher clock rate.

As ever, selection is a design choice in response to requirements. In 30+ years I've yet to encounter a project that definitively needed an FPGA, that definitively could not be done on a CPU. I've seen an awful lot of projects fail, often badly, where the designers chose to use FPGAs.

To give an idea, a modern CPU with hundreds of cores and something like AVX-512 available can execute 8,960 32-bit floating point computations in the time it takes an FPGA running at a slower clock rate to clock just once. Given that things like an FFT cannot be completed until the last input sample has arrived, there's a good chance a CPU with an integrated ADC would beat the FPGA with an integrated ADC.
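For what it's worth, here's one entirely illustrative route to a figure in that ballpark. The core count and clock rates below are my own assumptions for the sake of the arithmetic, not measurements of any particular CPU or FPGA:

/* Back-of-envelope sketch only: all figures are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double cpu_clock_hz  = 4.0e9;  /* assumed CPU core clock           */
    const double fpga_clock_hz = 0.5e9;  /* assumed FPGA fabric clock        */
    const int    cores         = 70;     /* assumed busy CPU cores           */
    const int    fp32_lanes    = 16;     /* one 512-bit AVX-512 op per cycle */

    /* CPU cycles elapsed per single FPGA clock tick */
    const double cycles_per_fpga_clock = cpu_clock_hz / fpga_clock_hz;

    /* 70 cores x 16 lanes x 8 cycles = 8960 FP32 operations */
    printf("%.0f FP32 ops per FPGA clock\n",
           cores * fp32_lanes * cycles_per_fpga_clock);
    return 0;
}

Count FMA as two operations, or assume a second vector port per core, and the CPU's number only gets bigger.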

bazza Silver badge

Dead End

An addressable market of $55billion? Pull the other one, it's got bells on. Xilinx were pulling in revenues of just over $3billion until they were bought by AMD, and I doubt Altera under Intel's stewardship has reversed their trailing market position. I'd be stunned if between them they were pulling in more than $7billion revenue.

The reason why there's an inventory correction going on is, I think, that a certain amount of AI Kool-Aid was drunk concerning FPGA's role in tech's latest bubble.

One really hard question both Xilinx and Altera have to face is, just how big is the market really? Taping out a new part these days is a very expensive business. Getting a large complex part into production on the best silicon process costs several $billions. I don't think the FPGA market is too far from the point where the cost of production set-up exceeds the total market size. Xilinx, being part of AMD, is perhaps a bit immune in that AMD has some weight to exploit when it comes to getting time on TSMC's fabs. A newly independent Altera could really struggle. It feels to me like the whole technology is edging towards being unsustainable in the marketplace.

We shouldn't be surprised if that happens. It's happened plenty of times before. There's many a useful / niche technology that's not been able to fund upgrades and has been swamped by alternative technologies that enjoy mass market appeal. Anyone remember Fibre Channel? Serial RapidIO? Both replaced by Ethernet.

FPGAs are troublesome, difficult, hard to program for, worst-of-all-worlds devices, the kind of thing one uses only if one absolutely has to. Thing is, there simply aren't that many such roles left where they're actually necessary. CPUs are very capable, and if for some reason the performance of many CPU cores all featuring their own SIMD / Vector units isn't enough, it's pretty simple to plug in a few teraflops of GPU. Even for the highly vaunted "they're good for radio" type work, FPGAs are often used simply to pipe raw signal data to / from a CPU where the hard work is done. I've seen projects go from blends of FPGA / CPU to just CPU, because the workload for which an FPGA was well suited is now a fraction of a CPU core's worth of compute. And with radio standards like 5G being engineered specifically to be readily implemented on commodity CPU hardware, the future looks bleaker, not brighter.

At the lower end of the market, the problem is that it's actually pretty cheap to get low-spec ASICs made (if you're after millions). So even in lower-tech devices FPGAs will struggle, because if the product they're used in is successful in the mass market, it's worth ditching the FPGA, getting an ASIC made instead, and making more money. So FPGAs are useful only to product lines that are not run-away successes, which doesn't sound like the kind of product line that's going to return $billions.

72 flights later and a rotor blade short, Mars chopper loses its fight with physics

bazza Silver badge

Re: "nothing short of jaw-dropping"

Many in NASA didn’t want the helicopter. It took the unignorable pressure from a Senator with the purse strings in his hands to get it included in the trip. It’s a tremendous success for him and for the engineers who did it, but it was not a glorious episode for some echelons in NASA who repeatedly tried to stop it happening, at least in the earlier days of planning this mission.

Starting over: Rebooting the OS stack for fun and profit

bazza Silver badge

Re: Replacing one set of falsehoods with a new set of falsehoods

Indeed. I was going to mention expanded memory from the old days of DOS, which I guess is a form of bank switching for PCs.

The one thing that might do something in this regard is HP's memristor. There were a lot of promises being made, but it did seem to combine spaciousness with speed of access and no wear-out. Who knows if that is ever going to be real.

Files are Useful

I think another aspect overlooked by the article is the question of file formats. For example, a Word document is not simply a copy of the document objects as stored in RAM. Instead, MS goes to the effort of converting them to XML files and combining them in a Zip file. They do that so that the objects can be recreated meaningfully on, say, a different computer like a Mac, or in a different execution environment type altogether (a Web version of Word).
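To illustrate the general idea (a minimal C sketch only; the element names are made up and it's nothing like the real OOXML schema, which also wraps several XML parts and their relationships in a Zip container), the serialisation is an explicit step the software has to perform:

/* Minimal sketch: turn an in-memory object into a portable textual form.
 * The struct layout in RAM means nothing to another machine or program;
 * the XML-ish output below can be recreated anywhere. */
#include <stdio.h>

struct paragraph {
    const char *style;
    const char *text;
};

static void save_paragraph(FILE *out, const struct paragraph *p)
{
    /* Real code would escape <, > and & inside the text. */
    fprintf(out, "<p style=\"%s\">%s</p>\n", p->style, p->text);
}

int main(void)
{
    struct paragraph p = { "Normal", "Hello, world" };
    FILE *out = fopen("document.xml", "w");
    if (out == NULL)
        return 1;
    save_paragraph(out, &p);
    fclose(out);
    return 0;
}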

If we did do things the way the article implies - just leave the objects in RAM - then suddenly things like file transfer and backup become complicated and interoperability becomes zero. The Machine and network couldn't do a file transfer without the aid of the software doing a "file save as" first.

And if the software has been uninstalled, the objects are untransferrable.

If one still saves the objects serialised to XML/Zip, one has essentially gone through the "file save and store" process, which then may as well go to some other storage class for depositing there. There is then no point retaining the objects in RAM, because one has no idea whether the file or the in-RAM objects are newer.

bazza Silver badge

Replacing one set of falsehoods with a new set of falsehoods

The article seems to be based on the assumption that modern architectures are headed from a model of two storage classes to one.

Except that it then brushes over the fact that in a new world we'd still have two different storage classes, despite briefly mentioning it. If you've got a storage class that's size constrained and infinitely re-writeable, and another that's bigger but has wear life issues, volatility makes no difference; one is forced to treat the two classes differently, and use them for different purposes. The fact that both storage classes can be non-volatile doesn't really come into it.

And also except that one is never, ever going to get large amounts of storage addressed directly by a CPU. RAM is fast partly because it is directly addressed - an address bus. Having such a bus is difficult, and the address decoding logic becomes exponentially more difficult if you make it wider still. If you wanted to have a single address space spanning all storage, there'd be an awful lot of decoding logic (and heat, slowness, etc). That's why large-scale storage is block addressed.

And whilst one storage class is addressed directly, and another is block addressed, they have to be handled by software in fundamentally different ways.

One might have the hardware abstract the different modes of addressing. This kind of thing already happens; for example, if you have multiple CPUs you have multiple address buses. Code running on one core on one CPU wanting to access data stored in RAM attached to another CPU makes a request to fetch from a memory address, but there's quite a lot of chat across an interconnect between the CPUs. So, why not have the hardware also convert from byte-address fetch requests to block addressed storage access requests? Of course, that would be extremely slow! It would be very poor use of limited L1 cache resources.

Forgetting the history of Unix is coding us into a corner

bazza Silver badge

Re: What is unix anyway?

It's in the article: Unix is a standard for what API calls are available in an operating system, what kind of shell is available, etc. Unix is what POSIX is now called. It's a notional operating system that closely resembles a software product that was called Unix.

POSIX came about largely at the behest of the US DoD, to make sure that software, ways of doing things, scripts, etc could be ported from one OS to another with minimal re-work. They also demanded open-standards hardware, for exactly the same reason. This is still in play today, and there's an awful lot of VME / OpenVPX-based hardware in the military world that is also used in other domains. The motivation was to get away from bespoke vendor lock-in for software / hardware, and it has worked exceptionally well in that regard. It's also the reason some OSes grew POSIX compat layers; DoD wouldn't procure anything that wasn't capable of POSIX (though they relaxed that a lot for corporate IT - Windows / Office won).

If one casts a wider net than the article does, one can see that OS/2 or Windows being considered "a Unix" is not that odd. There are operating systems like VxWorks and INTEGRITY that also offer POSIX environments, and yet have their own APIs too. The OSes that are commonly perceived to be *nix are simply those that do only the POSIX API. Trouble is, even that's a bit uncertain. For example, at least some versions of Solaris had their own API calls for some things beyond those of POSIX (I seem to recall alternative APIs for threads and semaphores; it's a long time ago). Is Solaris a *nix? Most would say yes, but it wasn't just POSIX, in a similar way to OS/2 being not just POSIX. Linux is extensively diverging from just POSIX - SystemD is forcing all sorts of new ways of doing things into being. Do things like name resolution the SystemD way (basically a dBus service call instead of a glibc function call) and you end up with non-POSIX compatible source code.

Tesla's Cybertruck may not be so stainless after all

bazza Silver badge

So, strictly speaking, there's no such thing as a non-detergent soap? Has Tesla said "wash your vehicle with something that doesn't exist"?

bazza Silver badge

What on earth is a non detergent soap? Alkali based, not acid?

Microsoft might have just pulled support for very old PCs in Windows 11 24H2

bazza Silver badge

Re: Linux's moment

Hmmm, I wouldn't count on Linux being a long term alternative to Windows for this reason.

If Microsoft has finally started making SSE4.2 a core requirement for Windows, one reason to have done so will be that the OS performs faster as a result. And that makes sense; SSE4.2 has some nice instructions in it, and Intel (and AMD) went to some effort to design it so that it could be used to make your software faster.
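To give a flavour of the sort of thing SSE4.2 brought, here's a minimal sketch using its CRC32 instructions via compiler intrinsics (the runtime check is a GCC/Clang builtin, and you'd build with something like -msse4.2):

/* Minimal sketch: CRC-32C a buffer using SSE4.2's CRC32 instruction. */
#include <nmmintrin.h>   /* SSE4.2 intrinsics */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++)
        crc = _mm_crc32_u8(crc, (uint8_t)buf[i]); /* one instruction per byte */
    return ~crc;
}

int main(void)
{
    const char *msg = "The quick brown fox";
    if (__builtin_cpu_supports("sse4.2"))
        printf("crc32c = 0x%08x\n", crc32c(msg, strlen(msg)));
    else
        puts("No SSE4.2 here - fall back to a table-driven CRC");
    return 0;
}

Hardware CRC and fast string scanning are exactly the sort of thing an OS, its filesystems and its network stack lean on a lot.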

Linux too does care about performance. Linux too has been dropping older CPUs, albeit getting only so far as chopping i386 off its supported list. At some point, they too are going to want to start making use of SSE4.2 instructions. Whilst the Linux kernel project does not directly care about competing with Windows, a lot of its customers do care about speed / efficiency. If Windows started looking substantially faster, well, it would be embarrassing. Linux is potentially quite vulnerable to such pressure: its network stack is in the kernel, so there's quite a lot of pressure for cryptography to happen in the kernel too; WireGuard is already there. And if the kernel doesn't go that way, libraries might easily do so.

Blame Intel

If you want the real reason why this is a problem, look no further than Intel's haphazard approach to SSE over the decades. They did MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, FMA3, etc. This was lunacy.

Whilst they tentatively wobbled towards all of those up to FMA3 (kind of the bare minimum standard that's actually useful to software developers using this kind of thing), other architectures went straight there far earlier. For example, PowerPC got Altivec, which went from zero to pretty much everything you need in one mighty leap. I can't speak for ARM, but I'd hazard a guess that they've not cocked about as much with what's in their SIMD extensions.

Had Intel actually sat down and thought it through properly and gone straight there, none of this would now be a problem (or, an excuse). One of the reasons FMA3 took so long to introduce on x64 was because Itanium had it, and Intel were using its presence there and absence on x64 to try and drive up demand for Itanium in the HPC community.

Their haphazard approach to this has pretty much meant no one has used SSEx in any software ever. And, if one considers how old SSE4.2 now actually is and how much older Altivec is (another 10 years or so), you can see how badly Intel has managed all this.

Chrome engine devs experiment with automatic browser micropayments

bazza Silver badge

Re: Flip Side

>I assume they're also planning to expand the system so that websites automatically micro-pay me every time they use the data they've taken from me to serve the ads.

No. No hint whatsoever of them doing that. Not even slightly. Not ever in the time taken for a sparrow to wear down a mountain by rubbing its beak on it once per year (thank you D. Adams).

bazza Silver badge

Re: Micropayment middleman

Being the middleman for such a payment system could be very profitable. And I think Google want to be that middleman.

As you've pointed out, the transaction size is troublesome. No credit card company is going to entertain such small payments. So, the way it would have to work is that some middleman - Google - would have to make a transaction against your registered means of payment, hold those funds, and then meter payment out to websites. And in fact, they can be in charge of when the money is delivered to the websites. They would probably stack it up into monthly batches. That way they would be holding billions x not-very-much-money, which is still $billions, and would be earning interest on it. The reason the microtransactions themselves aren't an energy / cost burden on them is that they control the client (Chrome), which runs on the person's premises using their electricity, while the website runs on its owner's electricity bill.

Been There, Seen That

It's exactly the game that UK banks used to play; you'd initiate a transaction to some other account (say, in another bank). It'd take 3 days for the money to show up, even though it'd been deducted from source immediately. They made 3 days' interest on the transaction. That got banned in the modern age where the full transaction could readily be instantaneous, and now we all enjoy money arriving immediately.

Google are simply trying to re-invent that, in another area of business, under the guise of "benefitting others".

Work to resolve binary babble from Voyager 1 is ongoing

bazza Silver badge
Pint

Letter to the Voyager Team

There's no shame in losing it; you've been pulling off minor miracles for decades, and supplied far more data for, well, all humanity than was ever planned, with far more impact than was ever imagined. So a big thank you from all of us regardless of the outcome of your latest endeavours, beers owed regardless -->

Arm share price bulges after AI pumped revenue to a new record

bazza Silver badge

Re: Time to change my user ID to Nobody

They’re selling shovels, which their customers are finding have a knack of turning up gold. That’s the very best type of shovel to be selling, unless one decides to enter the mining business directly.

Microsoft embraces its inner penguin as sudo sneaks into Windows 11

bazza Silver badge

Rather wondering how that saying should be restated, for a *nix that's busily swinging over to re-inventing more and more of Windows (a SystemD ref, just in case it's needed)...

Should anyone wish to contribute such a restatement, perhaps it's best expressed as the regular expression required to amend the old to the new.

You're not imagining things – USB memory sticks are getting worse

bazza Silver badge

Re: QLC wears out after 1600 cycles...

I used to think that the weakness of floppy disks would be that the plastic film would eventually degrade. This has happened to old magnetic tape - it either goes sticky or brittle. But floppies seem impressively far more resilient. I guess the fact that they're coated on both sides and the disk is a lot thicker than tape all helps.

IBM briefly talked about a storage medium based on just polythene. Microscopic dents melted into the polythene for 1's, unmelted for 0's. Its lifetime was thought to be near infinite (given the stability of polythene, so long as it doesn't get hot!).

bazza Silver badge

>With quad level cells (QLC), for example, four bits are stored per cell, which means that 16 states have to be distinguished.

I was preparing to lambast such a sentence and ridicule it to pieces; 4 "levels" of voltage would be able to store 2 bits, not 4. However, it seems that by "level" the industry does indeed mean "bits", according to the Wikipedia page on the matter. I fear I have done El Reg a temporary disservice.

Now, why can't they just call it as it is, either 4 bit cells or sexdeclevel cells, or hexadecalevel cells?
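The arithmetic, for anyone else who tripped over it, is simply 2^bits distinguishable charge states per cell; a trivial sketch:

/* Bits stored per cell versus charge states the cell must distinguish. */
#include <stdio.h>

int main(void)
{
    const char *names[] = { "SLC", "MLC", "TLC", "QLC" };
    for (int bits = 1; bits <= 4; bits++)
        printf("%s: %d bit(s) per cell -> %u states\n",
               names[bits - 1], bits, 1u << bits);
    return 0;
}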

Aircraft rivet hole issues cause delays to Boeing 737 Max deliveries

bazza Silver badge

Re: you can't put a hole where a hole don't belong

Don't know. Rear bulkhead repairs on a 747 that were done badly led to a Japan Airlines 747 flying into the side of a mountain, so it's definitely something to be done properly.

It would be pretty bad for Boeing if it were found that in-service aircraft had actually been damaged beyond repair during construction in the factory. I suspect that, knowing that, Boeing will find a way of repairing them no matter what.

This happened with the 787 too. Remember all those battery problems? Not long afterwards an Ethiopian Airlines 787 caught fire at Heathrow airport, thanks to a lithium battery (in the emergency locator transmitter) that let go and set fire to the rear of the aircraft. It looked terrible, like a complete write-off. However it seemed that Boeing were totally determined not to suffer a hull-loss to a failed battery, and did what is probably one of the biggest carbon fibre repairs ever. Bear in mind that the 787 fuselage is made out of enormous one-piece CF barrels, and the rear one of these had burned away at the top/rear. Not just a simple patching job.

bazza Silver badge

Re: And then there's the engine inlet problem...

This Simple Flying article has a nice graphic. A330/350 are nearly identical.

You need only do a short 13-day CCQ course to go between any of the Airbus aircraft types. This makes it practical to have a large fraction of one's pilot staff qualified on multiple Airbus types, able to fly an A320 one day and an A380 the next. This is useful for all-Airbus airlines (it's the ultimate in flexibility). It also makes recruitment simpler - any Airbus-qualified pilot is a serious contender, no matter what they've flown and what your company operates.

The other (often overlooked) aspect is cabin crew training; there's far, far more to it than knowing how to pour a decent G&T. I think that here too Airbuses are substantially the same, meaning that your cabin crew can be flexibly deployed too.

It probably means that airlines are also somewhat flexible on delivery date. As with any conversion training, there's no point doing it too early because it'll have been forgotten. If the conversion course were lengthy, there'd be a greater emphasis on the training completion coinciding with aircraft being delivered. However, if the conversion training is short, it doesn't matter as much if there is a delivery delay or advance. It's not like you've had an expensive cadre of pilots going through months of wasted training if the delivery is badly delayed. Similarly, your new plane isn't going to be stuck on the runway pilotless for very long, if it turns up early.

It's a matter of speculation, but this probably gives Airbus some wiggle room in adjusting deliveries quite late on if a burning need arises elsewhere. For example, Airbus can pay a customer to take a delivery delay, it's not a massive issue for the customer in terms of staff training (if any), and Airbus can sell planes earlier at higher prices to other desperate airlines like United or American who now want to get Airbuses, any Airbuses, at any price ASAP instead of waiting for Boeing to get their act together. The delayed customer is then earning money (at United's expense) from an aircraft they haven't even got yet. Neat trick. Those long established Airbus customers who have long established positions on the delivery schedule are essentially sitting on a gold mine before their plane even exists. It would be a lot harder to do this if there wasn't cockpit / cabin commonality.

It's this kind of thing that probably makes it very easy for airlines who have been Airbus customers once to remain so.

The A220 is indeed different, though I have read that because that too is an FBW system a lot of what you learn qualifying for that is transferable (and vice versa). The same can probably be said for 787, 777, but most definitely not for 737.

bazza Silver badge

Re: How can a company be that bad...

I think that you're mistaking "Boeings generally do not crash" for "Boeing has a good record".

Reading between the lines of many reports of problems, it seems that an awfully large number of operational problems have been nipped in the bud by airline operators' own engineers, post delivery. It seems fairly well established practice that the first thing a lot of airlines do on receipt of a new Boeing aircraft is send their own engineers crawling all over it, looking for FOD and problems in general. Boeing does have a reputation for leaving everything ranging from metal shavings all over wiring, to partially eaten hamburgers, to whole step ladders inside brand new aircraft's fuel tanks, all of which has the potential to bring an aircraft down either immediately or eventually. There's been several reports of customers (ranging from airline companies to the US Army) that have sent brand new aircraft straight back, having had a look at what had just been delivered.

That Boeing's crash record isn't worse is probably mostly down to the diligence of its customers, not the efficacy of Boeing's QA processes of late.

The thing that should be very concerning is that faults like mis-drilled holes in pressure bulkheads are just as likely to cause a plane to crash (if it's been done just a little bit too badly), but are far harder to pick up in post delivery inspections. Worse, such faults generally aren't looked for very often during operational use; one cannot easily see such things without stripping out a lot of stuff at the back of the aircraft, so it's only looked at as per the inspection schedule. This is drawn up on the assumption that the aircraft was built perfectly; bit of a shame if it wasn't. So for faults like this, that customer inspection at the last-chance saloon before a plane enters service is missing.

Given the multiplicity and wide extent of Boeing's QA failings, does the fact that no one has looked at it since it was drilled make one feel safe?!

bazza Silver badge

...and political lobbying. Thanks to that, both political parties have at various times been more than content to reduce the FAA's funding, reining in its ability to supervise Boeing.

I don't think the USA / US politicians quite realise what damage they've inflicted on their own country. When an overseas airline buys a Boeing product, it's only of any use to them if their own regulator accepts the FAA's word that it's fit to fly. It doesn't matter one jot what the FAA says; if your own regulator says "can't fly" you've got tons of costly aluminium sat on a runway not earning its keep. For overseas regulators, the MAX crisis was the last straw, with all but the FAA taking the decision to ground the aircraft. I don't think the FAA ever actually did ground it - the Presidential administration ordered it out of the skies first.

And with the latest door blow out - well, I'd not be surprised if a few overseas regulators had phoned up the FAA and asked, "what gives?" in a pointed manner.

bazza Silver badge

Re: And then there's the engine inlet problem...

Now that it seems that the US flying public (and therefore US airline execs) have woken up to the problems MAX has, it seems like some of those big customers are beginning to vote with their cheque books and are actively looking elsewhere. It's a major screw up by Boeing.

One can see why Boeing would have been anxious to pander to a couple of big customers' desires to avoid pilot re-training. Had they crossed the "training required" rubicon, that would have then opened up the possibility that airlines like SWA would take a look at Airbus. The problem with that is that, seemingly, once an airline gets a taste for Airbuses the cockpit commonality that Airbus already has across their entire line up and the FBW flight characteristics all being the same (A318 all the way up to A380) tends to result in airlines buying more Airbuses.

The real answer is that Boeing should have gone for cross-fleet cockpit commonality long, long ago, back in the 1990s when it first became evident that Airbus were doing that and what it would bring. That would likely have meant scrapping the 737 long ago (the cockpit is pretty narrow, apparently). That would have been a pretty big hit, but nothing like as big a hit if they tried to do it nowadays.

bazza Silver badge

Re: How can a company be that bad...

>To be fair, even with these fuckups, the planes still have a great record.

Not sure that Lion Air or Ethiopian Airlines and 346 dead would agree...

I'm not sure that many in the aviation engineering / maintenance world would agree either. There's been a background hum of reports of FOD and defects on Boeings for a long time now. I'm not sure the USAF or the US Army would agree either. The problems they have had with new Boeing products in recent times have been widely reported. These include reports of a whole batch of brand new Apache helicopters sent back over build issues. The KC-46A has been extremely problematic to bring into service, with at least one exceptionally dangerous in-flight failure that could have killed an entire crew and whoever they would have crash landed on (the hold cargo clamps suddenly released the hold cargo; fortunately, they were straight / level at the time, spotted it, and managed to fly on without the cargo shifting). And so forth. The replacement Air Force One has been a farcical procurement.

>Airbus is snowed under with orders and would probably face similar problems if it tried to scale up production to get even more.

Actually, Airbus has successfully scaled up several times already, and managed to avoid such fuckups. I'm sure they had teething troubles in scaling up, but they've managed to keep those from entering into service. They have FALs all over Europe, in Mobile Alabama, and in China. The quality from all of them has been pretty good. Getting the Chinese one working as well as it does is a major accomplishment for both Airbus and the workers there, and is actually a major threat to Boeing. It's creating a generation of Chinese workers who thoroughly understand the point of good QC / QA, and may take that expertise into COMAC. Airbus's production rate for just A320 family jets is insanely high, 2.5 a day, every day of the year.

>Consolidation and a duopoly were considered the only way to preserve <so>profit margins</so> jobs and US airlines were strongarmed into continuing to buy Boeing.

The mistake Boeing made was to focus on profits, not market share. The company has been "successful" in the eye of investors because they've made profits. However, it's easy to make profits in a growing market where one's competitor has insufficient capacity to supply the whole market. John Leahy at Airbus took a different approach; he chased market share as the key metric for company success. He's the one who made Airbus equip itself to be able to take a large market share off Boeing. His point was that it would be easier to sell lots of jets if everyone thought Airbus could actually make lots of jets. Conversely, it would be hard to sell any jets at all, if customers thought they'd be delivered very slowly. And, he was right.

Looked at from a market share point of view, Boeing's performance since 1990 has been woeful. The mess they're in today is also traceable to short term profiteering; the surprise announcement of the A320neo bumped them into launching the MAX program instead of concentrating on a new design. What a mistake to make.

The result is that Airbus has now got them penned in from all angles; A320neo seems to have 737MAX licked (simply by not crashing a lot). A321XLR and A330neo have removed any market for Boeing's aborted NMA. 787 is reputed to be struggling to achieve good sale prices as the A330neo is easily cost competitive with it and costs Airbus a whole lot less to manufacture. The A350 is doing well at the top end of the market whilst 777X (which is rumoured to be a whole 40 tons heavier) is still not in service. And even the "pointless White Elephant" A380 seems to have come roaring back into large scale use, and now everyone is wondering when (not if) Airbus will be obliged to deliver an upgraded model.

Plus, Airbus now has the A220 program for $1, possibly the finest small jet that has ever existed, a hit with the passengers and airlines, and has a whole market segment all to itself that Boeing hasn't even begun to think about filling. Considering that Bombardier looked initially to Boeing to partner on it, one can only say that this was a significant miss by Boeing.

Researchers remotely exploit devices used to manage safe aircraft landings and takeoffs

bazza Silver badge

Re: “NSAllowsArbitraryLoads”?

It's a source of disappointment that you can't easily get just a plain old tablet with a generic POSIX operating system for apps such as an EFB. There's no need to give an EFB access to the entire Apple ecosystem (or Google's for that matter), no reason for it to be used for anything other than an EFB. If you want a tablet-shaped device for anything like reasonable money, it's either an iPad or some generic Android thing, with all the horrors and complications that come along with those platforms.

If one could buy a tablet device that ran vanilla Linux or especially QNX (which actually has a pretty good touch / graphics layer from its days as BlackBerry's OS), and you could just write / compile up an app for that and load it in, that'd be about perfect for a number of single function applications. Short of getting a PC-based device, such a thing doesn't really exist, more's the pity.

The closest I've seen to this is the tablets in Japanese sushi / yakiniku restaurants. You use these to place your order, and your morsels arrive forthwith. They're clearly an Android tablet based thing, but someone has clearly gone to the effort of cutting it down to "just run the menu app, nothing else". I find it peculiar that the Japanese catering industry seems to be able to rustle up something dedicated to the purpose of ordering a meal, whereas the global aviation industry has decided that a stock iPad will do!

Microsoft seeks Rust developers to rewrite core C# code

bazza Silver badge

Having a language that guarantees memory safety helps with multithreaded code. The Rust compiler can tell you if you have a data race across threads (through its ownership mechanism). C# doesn’t, I think (corrections welcome!).
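For illustration, a deliberately broken C sketch (C rather than C# or Rust) of the sort of thing meant: two threads hammering one unprotected counter. C compiles it without a murmur; translate it directly into Rust, with two threads mutably borrowing the same variable, and the borrow checker refuses to build it until you reach for a Mutex or atomics.

/* Deliberately racy sketch: two threads update a shared counter with no
 * synchronisation. Build with -pthread. The result is usually wrong. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared and unprotected: data race */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* unsynchronised read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (rarely the expected 2000000)\n", counter);
    return 0;
}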

bazza Silver badge

One of the issues with C# (and other languages like it) is load time. Rust starts a lot quicker.

This kind of thing matters for some applications. If you’re constantly starting services on demand, the load time can be a significant overhead. Horses for courses…

Oracle quietly extends Solaris 11.4 support until 2037

bazza Silver badge

>(snip) just for an antique OS, (snip)

Looks at Wikipedia's "Comparison of Operating Systems" page and struggles to find an OS that is 1) new, 2) cheaply supported, 3) mainstream. Finds.... nothing!

It's quite an interesting list to look at, sorted by "Initial Public Release". The newest, truly fresh and potentially very significant OS going forward is Redox - a from-scratch OS written in Rust. In case you've not seen it, it's worth taking a look - it might be "The Future".

What has impressed me the most about that project is just how quickly it went from nothing to a working graphical desktop and applications. It really hasn't taken them very long at all; I reckon they've written an awful lot of code really quite rapidly. That probably means Rust is pretty efficient in terms of developer time, which in turn probably indicates a future direction for all on-the-metal software projects; C/C++ are slower and riskier to develop for.

Considering that several existing OSes (Windows, MacOS, even Linux) look to be in the process of getting Rustier, in time it's possible that several antique but very mature and widespread OSes could eventually wind up being majorly reimplemented for the modern age.

The Land Before Linux: Let's talk about the Unix desktops

bazza Silver badge

Re: Standards

That's the possibility. It probably won't be a black'n'white, one-small-deprecation-cuts-out-RedHat thing. For example, changing init is something that other Unixes have done (e.g. Solaris) to no ill effect. But there is a tolerance threshold, and RedHat / IBM are moving towards it and not staying still with respect to it.

It all smacks of IBM / RedHat management having no idea what their developers are really doing, no idea exactly how much this stuff matters to users, no idea why some of their most influential customers care about this kind of stuff and are probably of the quiet-until-provoked variety. The management surely appreciate that DoD and pals are a big potential customer for their services, but have totally failed to connect the dots between their developers' arrogant "We know best" stance and making life harder (and not easier) for customers from one of the most monied government work areas there is.

For me the warning signs started when RedHat started price gouging the fees they charged for RedHat MRG (remember that?). They made Oracle and Microsoft look like rank amateurs. I ditched RedHat at that point in time and haven't gone back.

bazza Silver badge

Re: Standards

(except that the binary you download is also looking to execute in a POSIX compliant environment...)

bazza Silver badge

Standards

I think the article misunderstands the purpose of the Unix software standard that did succeed - POSIX.

POSIX came about largely because of the edict of the US DoD that it would not accept procurement bids from companies unless the design complied with open standards for both hardware and software. VME got chosen as the hardware standard (and is still alive, supported and functional today, though there is now also the far quicker OpenVPX too). POSIX became the software standard. The reason the DoD gave this edict is that, previously, it had been paying extremely large support costs for bespoke processing systems for things like radar, communications, etc.

Whether or not the article is correct in saying that POSIX is "too general" to have succeeded by the article writer's terms, POSIX most certainly did succeed from the Department of Defence's point of view. The vast majority of technology-based systems across NATO are based on VME / OpenVPX, and POSIX. Software can be ported from generation to generation with minimal effort (compared to before POSIX). The price of development and support paid by the DoD for its very complex systems dropped very significantly.

And, if you can believe it, the risk in procurement has dropped. Essentially it is easy for equipment to at least pass the environmental testing it'll be subject to. The hardware manufacturers have become good at designing for the military environment. The DoD's and MoD's engineering standards have excellent data on what different environments are like in terms of temperature, shock / vibe and electrical supply, so it's been possible to make sure that the component parts survive. It doesn't mean the whole system works, but it should at least not fall to pieces!

If I were to guess, the problem being faced by a lot of these military systems is the slow demise of Xorg. There's quite a few military systems based on XServer.

Windows!

Surprisingly, the open standards hardware that the DoD mandated opened the window for Windows to play a part. The hardware manufacturers that glued down Intel chips onto VME cards, or OpenVPX cards, essentially chose to make them PC compatible. So it became possible to install Windows. There are a fair few systems based on Windows, largely because of the availability of developer resources and Microsoft (by then) having a well deserved reputation for backward compatibility.

The irony of course is that Windows itself is now an excellent platform on which to run POSIX compliant software, in WSL. WSL is interesting because you can in principle run an old / out of date Linux plus software combo, with security handled by fully-patched Windows. Like it or not, Linux has become one of the key POSIX platforms in military systems, but is now being dragged in all sorts of unhelpful directions by RedHat (systemd, gnome, etc), and increasingly the best option for long-lived Linux software systems that do not want to upgrade every 3 months is to run inside WSL.

Future Direction, Unintended Consequences of RedHat's Trajectory

DoD still mandates POSIX, and increasingly Linux isn't POSIX compliant (thanks to SystemD).

For example, for decades C code has done name resolution by making a few library calls, and these library calls are the same on Linux, *BSD, Unix, VxWorks, INTEGRITY and other militarily significant operating systems. SystemD has introduced an alternative that involves making a request via dBus. Now, for the moment, SystemD has not displaced those well understood library functions; the dBus route for name resolution is an option. But for how much longer? They're already re-routing conventional library call DNS requests to resolveD by messing with the default configuration files.
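For reference, this is the sort of bog-standard, portable name resolution being talked about; a minimal C sketch using the POSIX getaddrinfo() call (one of the library routines in question):

/* Minimal sketch: resolve a host name with the POSIX getaddrinfo() call
 * and print the numeric addresses it returns. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct addrinfo hints, *res, *p;
    char host[NI_MAXHOST];

    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6, whatever's there */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo("example.com", "https", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next)
        if (getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof host,
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("%s\n", host);
    freeaddrinfo(res);
    return 0;
}

No dBus, no daemons the application needs to know about; how the lookup is actually performed is the platform's business, which is rather the point.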

Given the attitude of RedHat / IBM, and their SystemD / Gnome teams, I would not put it past them to deprecate the library calls, and use their weight within the Linux distro world to make that stick.

If SystemD does start gutting Linux's compliance with POSIX at the software API level, this will cause military equipment / system providers a bit of a problem; they really cannot go that way. So there could be some very monied companies looking for a Linux alternative, with the motivation to put money into it. FreeBSD strikes me as a very strong candidate going forward.

This could hurt Linux badly as there might be strong demand for things like FreeBSD instances on AWS. Someone has already tried that I gather. And if there is plentiful supply of non-Linux based resources out there in the world, there may be others keen to get away from systemD. Certainly, with RedHat's current messing around with licenses causing no end of anguish, one has to consider the consequences of RedHat's grip on things like SystemD / Gnome. If RedHat were to buy Ubuntu (not impossible), possibly that'd be Linux in effect becoming owned by RedHat. There may still be a Linux kernel project, but if RedHat has bought Ubuntu then there'd not be many distros out of their control, and the only Linux kernel anyone is running comes from RedHat.

If RedHat are motivated to lock software and users in to their version of Linux (their corrupted version of POSIX), it won't be just a few OSS enthusiasts unhappy about that. It will be the military-industrial complex too and, indirectly, Uncle Sam.

For a moment there, Lotus Notes appeared to do everything a company needed

bazza Silver badge

Re: The problem with Notes

>Sharepoint, eat your heart out.

Document management systems seem to be out of fashion. In this day and age, driven by Google I'd say, "Search" is the way you organise documents these days, not some sort of structured index. Sharepoint to me seems awful because its search just isn't that great.

For many organisations, the "search" approach to document management is good enough (or at least the key person in the organisation thinks it is), and it's these organisations that are driving the software market. For those that really do need strongly organised, properly indexed and guaranteed findable document storage, the mainstream software world seems not to care anymore. Of course, there's plenty that opt for Sharepoint or similar, and get into deep, deep do-do. Such systems are immediately found to be awful as soon as the company gets involved in a court case; you cannot be sure that a search has found every single relevant document.

Part of the problem I'm sure is that every organisation's needs are different. No one product from, say, Microsoft or anybody else is going to do just what is required out of the box. Every organisation is more or less doomed to having to do some development for itself.

Another part of the problem now is that workers arrive pre-conditioned against document management. They grow up as kids / students with "search" totally and thoroughly drummed into them by every device, OS and service they're likely using from childhood upwards. They arrive in a workplace where "search" is inadequate, and encounter a Document Management System. The whole concept is totally alien to them, and the work taken to learn and properly drive a DMS seems like something that matters little to them and more bother than it's worth. Couple that with a godawful user interface... So, getting people to properly engage with a DMS is now a real challenge. And if they're entering crap into DMS fields (e.g. keywords), the DMS is only as good as that data. Worse, attempts at encouraging compliance are pretty tricky; like as not they'll quit for some other organisation that doesn't care so much.

How governments become addicted to suppliers like Fujitsu

bazza Silver badge

Re: Lack of expertise in the Civil Service

I'm convinced that, in any subcontracting process, be it commercial or government, you run an almighty risk of it going wrong if you / your organisation is itself incapable of doing the thing that it is contracting for. If you can't build it yourself, you're not likely to have the expertise to supervise some other organisation building it for you. It's even worse if the contractor hasn't built one themselves before.

With government projects, this is quite often the case. No one really knows how to buy the thing, really understands what it is that should be built, or understands the consequences of this lack of knowledge. It's a serious problem.

The best answer for government is that government should do more things for itself, in house. The costs of outsourcing badly (i.e. wastefully) would more than pay for the dev teams to be able to do things for themselves. In particular, MoD contracts these days are a nightmare because MoD itself doesn't really know. Go back 40 years to when MoD had its own research establishments and knowledgeable people, and things went a lot better I'm sure.

Biggest Linux kernel release ever welcomes bcachefs file system, jettisons Itanium

bazza Silver badge

> Oh, gods, what have I done?

Also, you've told us about some nice new features in Linux. I would have hurd it on the grapevine eventually, but it's nice to know early, to get ahead of the stampede, maybe steer corporate strategy a little.

bazza Silver badge

> So I guess you'll need to decide if you want to run bcachefs or the udder one..

Certainly something to ruminate on. Cud be a good thing.

File system transitions are tricky things to do, not something you want to burger up. Get it wrong and the boss gets properly cheesed off. Get it right and we’re all in clover. To skip it probably needs a good alibi. “Son,” says the boss, “don’t let me down now”…

Personally speaking I’d be sorry to see a decline in ZFS, which is a properly good piece of tech. Still, it may be time to moove on to pastures new, where the grass is greener.

bazza Silver badge

> Oh, gods, what have I done?

You’ve caused us to put a “file system transition day” in our dairy.

Open source's new mission: To boldly go where no software has gone before

bazza Silver badge

Re: Exploitation

It also doesn't mention that this situation has been coming for years (accelerated perhaps by Perens highlighting it some years ago in his opinion piece about GR Security), and that there's been little attempt to pre-empt the situation.

It is possible to do to RedHat what they're doing to their customers. Stop distributing the kernel source code to RedHat / IBM or their staff. It'd take a lot of organisation and some big changes in how certain GPL OSS projects are run (they'd be a lot less open-to-the-public), but it could be done.

Just standing still, letting this happen, is asking for worse situations to develop. For example, IBM buys Ubuntu. That, pretty much, would mean IBM effectively owns the Linux kernel (at least, the one that everyone is using). That is the situation that is coming, and should be prepared for.

Nearly 200 Boeing 737 MAX 9 airplanes grounded after door plug flies off mid-flight

bazza Silver badge

And SpaceX's attacks on the FAA have to be taken into account abroad, too. If the FAA is brow-beaten into permitting SpaceX operations, it's other countries that are in the line of fire and potentially suffer the consequences.

Whether or not an overseas regulator trusts the FAA is not for the US Congress or law courts to decide. If SpaceX's attacks on the FAA cause another regulator to decide that the FAA is toast, it could be Boeing that cops the consequences. This might be as a result of it becoming difficult for outsiders to be confident of where the political rot stops, whether it does or does not affect any one particular program. If that's what they decide, then any FAA certification becomes questionable.

Boeing, more than any other company in the USA, really really needs the FAA to be seen as healthy and effective, otherwise its overseas sales are off. Rather than lobbying for an exemption for the -7 and -10, they should be lobbying for SpaceX to be put in their place.

bazza Silver badge

It is Congress that has granted the exemption. The FAA seems to have been put into a position of powerlessness on this matter.

I don't think that the US Congress quite realises what this means for Boeing's international market. Just because the US Congress has passed a law hamstringing the FAA, that has zero bearing on the UK's CAA, Europe's EASA, China's CAAC, etc. Worse, it puts overseas regulators in a real bind. They could simply roll with it and let the MAX7/10 fly. However, if anyone asks the awkward question "Please explain your thinking around the evidence that this aircraft is safe to operate over our heads", the only answer they've got is "the US Congress says Boeing don't have to do it".

That's not really fulfilling the role of a national / regional aviation safety assurance agency. Furthermore, if anything actually went wrong the persons involved could find themselves in some personal legal / career difficulty. Especially in China.

Properly speaking, overseas regulators should be reacting to this politically granted exemption, along the lines of determining that the USA is bullshitting everyone else on safety. That doesn't mean that US built aircraft shouldn't fly in, say, the UK. However, overseas regulators are perfectly justified in requiring Boeing to put their new products through their full certification process instead of just taking the FAA's word for it. The danger for Boeing is that the -7 and -10 could very well become restricted to US operations only, or they're forced to do the work properly for aircraft sold overseas.

bazza Silver badge
Pint

Re: What is/was that saying?

That is a masterpiece, especially so early in the year!

What if Microsoft had given us Windows XP 2024?

bazza Silver badge

Re: Micros~1 needs to hire that guy...

Was wondering if anyone else would say as such, and I heartily agree!

There's probably not so much difference in the work needed to create the video, and the work needed to actually create the OS UI. They've practically done it all already, just hire them immediately.

Japanese earthquake disrupts chip industry operations

bazza Silver badge

Earthquake Demographics and Tech

It seems fairly likely from Japanese coverage that a large amount of infrastructure has been damaged. There's a lot of closed roads, power lines down, etc. It's going to be a lot of hard work to put everything right. Meanwhile a lot of people have to get their lives back together, get a new house, etc.

All areas such as this in Japan are struggling with demographic changes. Work and people are heading gradually towards the Tokyo region. A big earthquake in an area such as this may result in a lot of people voting with their feet, and reestablishing themselves elsewhere. This also happened after the 2011 earthquake.

This can have a knock-on impact on tech companies. If there is a company whose key staff have lost their homes to the earthquake and are in an area where the roads have been ruined, etc, and it's going to take years to get the place nice again, those key staff may already be moving out for good. Depending on what the staff / company do, there could be a big enduring impact, even if the factory itself is intact.

This happened to us a long time ago when a company in Japan was shut down by a smaller scale natural disaster, but it was the only one making plastic film for backup tapes. Suddenly, you couldn't buy a DDS cassette anywhere...

What comes after open source? Bruce Perens is working on it

bazza Silver badge

Re: Just enforce the existing license

Trouble with intent is that the GPL says you can make money and charge costs. It makes no mention of rights to future versions of the binary and that source code. There's a very strong possibility, I think, that a judge would confirm RedHat as acting within the intent of the license, which many people (including myself) would consider to be something of a bad outcome.

Probably better to assume they are within the license, and use the same trick to deprive RedHat of access to the source code. The mere threat of that could by itself make them think again, as being deprived of access to new versions of the kernel could destroy their business within a few chip generations (their kernel would not be taking advantage of new chips' features). It does mean adopting the very things generally abhorred by the OSS community, but the only other thing to do is stand by and let them get away with it.

bazza Silver badge

Re: dual open source licensing

Yes, I forgot to add the notion that such an example project included third party GPL code. In which case, the project is a derivative work, not an original wholly owned work. If one had published a derivative work, and the license obliged you to release all future versions of the included third party work, how would you ever be able to stop?!

bazza Silver badge

A loop-hole is defined as "an ambiguity or inadequacy in a system, such as a law or security, which can be used to circumvent or otherwise avoid the purpose, implied or explicitly stated, of the system" (Wikipedia's definition).

The problem with identifying this treatment of GPL2 code as a "loop hole" is that the implication is that one is compelled to behave in a way that contradicts other laws. What most people see GPL2 as doing is that it obliges people / companies who have distributed code once to continue to do so, whenever there is a new version. However, that would never stick.

If the license did try and say, "distribute the source, and source for all future versions for free", consider the scenario where the Debian project skipped a version of Linux because they needed a rest. Or they simply wanted to retire. What then? Have they broken the license? Haven't they? Who decides? The answer is that you cannot compel someone or some company to do arbitrarily defined future work for no recompense; that would get close to a definition of slavery. There is a grey area between zero distribution of future versions (Debian taking a holiday) and partial distribution of future versions (what RedHat are doing). But there is no hint whatsoever in the GPL text about other versions of the same code, so there is no concrete basis for saying that license compliance involves future versions too. And grey areas lead to difficulties complying.

Worse, as the GPL makes it perfectly clear that you can make money out of this distribution process (you can sell a binary for any price you like, and if you do so you can charge reasonable costs for sending the source code too), it is easy to argue that cost recovery / cost minimisation / profit making is a firm and un-ignorable intent of the license. And it genuinely was an intent, to encourage software adoption, to incentivise companies to pick up code. "Why", RedHat could reasonably ask, "shouldn't we make a profit when the license says we can?".

So, whilst most people see GPL2 as obliging people who have distributed code once to distribute all future versions too, they also implicitly acknowledge that someone / some company has the freedom to stop doing just that if they want to, and is allowed to make money in doing so. The problem faced in taking RedHat or GR Security to court on this is that a judge will not / cannot ignore the "freedom to stop doing that" aspect or the "you can make money" aspect; all law applies, the matter cannot be settled simply within a partial analysis of just the license text. As soon as one considers "intent" in a court case, you don't get to pick and choose what aspects of "intent" the case should cover.

There's no need to get lawyered up sufficiently to take on RedHat / IBM: GR Security are a much smaller company, have been doing the same thing for years, and have not yet been sued. There seems to be a lack of confidence in winning such a case. The general doom and gloom around the whole situation is derived from that.

My view is that the best way to change behaviour is to exploit the license in the same way as RedHat are; don't distribute the code to them in the first place. What's more realistic? Accepting that big corporations will take what they can and it's best to stop them doing so, or hoping that somehow they will behave nicely?

bazza Silver badge

Re: MIT/BSD anybody?

>Linus rejected that for his kernel,

To be fair, GPL3 (and many other licenses) did not exist when Linus first released his source code. Having released his source code and subsequently incorporated contributions from others, he then didn't have sole ownership of the source code and so could not unilaterally re-license the whole thing. In theory, all the contributors (or their inheritors) could agree to a license change, but getting that unified consent would be a mammoth task. I think it quite reasonable (especially given all the work he does anyway) that Linus decided he'd got better things to do with his time!

There are mercenary approaches open to GPL projects, such as using RedHat's tactic against RedHat. If a project like the Linux kernel project cared enough to be motivated to stop RedHat doing what they're doing, it is in their power to do so: fork, and refuse to distribute future binaries (and thence future versions' source) to RedHat. It's not just RedHat who can choose not to distribute a binary. The same goes for any of the userland projects. Imagine where RedHat would be if (for example) the bash project stopped giving them access to bash binaries, and thence the source. However, given that the main fuss seems really to be about the demise of CentOS, perhaps the best way forward is for people simply to migrate away from RedHat to a distro that is managed in a less commercially minded way.

Consequences of Acquiescence

The tricksy thing with that let-them-stew-in-their-own-juice approach is that other distros are not invulnerable. For example, if everyone shifted from RedHat to Ubuntu, what's to stop RedHat / IBM acquiring Ubuntu? And then Debian? SUSE? And so on? Really, we're talking about the acquisition of people's time, not necessarily the company / foundation titles themselves. Buying a company is just a way of getting the people. For example, a Debian project with no active members ceases to exist. The best thing about recruiting the people is that you're engaging in the open employment market, not subject to anti-trust actions.

So, what we may be seeing is the first action in a slow move to consolidate all the major distros under one Red roof. That, to me, seems a more dangerous threat than simply the loss of CentOS. It would be a take-over of Linux in its entirety. The Linux kernel project itself would become pointless and irrelevant; it would be the version that's controlled, compiled and distributed by the winner (e.g. RedHat) that people would actually be running.

So perhaps the real question is, can the Linux kernel project afford to be quiescent in this at all? Or is this kind of outcome acceptable to them? I don't really mind either way; if those who have contributed to Linux in the many ways they have down the years hung up their keyboards and let a big corporate take over - fair play to them. But if they don't want that to happen, they probably do need to act.

Or at least, they probably need to show some resolve to act. If the Linux kernel project did look like it was going to cut RedHat's source access off at the knees with a popular mega-fork that excluded RedHat, backed by Intel, AMD, etc., then RedHat's version of Linux would face becoming obsolete junk within a few chip generations. The threat of achieving that could be sufficient to make RedHat / IBM change their mind. It would also make many RedHat users jump ship to a better behaved distro ahead of time, and that would hurt RedHat before any change had actually come into being. If RedHat did cave in, then perhaps the move gets postponed.

bazza Silver badge

>The worst loophole is what Redhat is exploiting

I do wish people would stop describing it as a loophole. However unpopular their behaviour, we all have the freedom to not do something. It'd be a pretty dystopian world if anyone or any organisation could be compelled by someone else to continue acting in a particular way forever, simply because they had chosen to act that way once.

Just imagine the consequences. You write some software for a lark, and publish it just once as OSS under some license or other that "solves this weakness" in GPL. You then decide not to continue public work on it. You then get sued, because some member of the public had picked it up and wanted the source code for the new version that you'd given only to your mates.

"All" RedHat are doing is deciding who their "mates" are by means of monetary exchange, something that GPL explicitly permits. I put "All" in quotes because it is a pretty large dynamic range between software written for a laugh and a seriously large enterprise undertaking, but it is all on the same spectrum. If you want the laws of the land to protect oneself from undue abuse at the small end of the spectrum, you have to accept that they apply at the other end of the spectrum too.

Fortunately, the safeguards in our modern liberal democracies mean that no one can be obliged by anyone else to keep doing future work. You don't even have to act according to a judicial order, if due process ever got that far; you can choose to take the consequences instead.

I also note that Perens seems to have dropped the "opinion" he gave that sparked off the fuss with GR Security, which ultimately seems to have done them no harm whatsoever. I always thought it ridiculous that he took a swipe at them. It seems he's unwilling to take on IBM's bigger legal budget.

If there is fault in any of this, it lies with "experts" who for years have been saying that GPL is solid, have actively encouraged software developers to release their code under the "safe" GPL, but have turned out to be wrong. This has no doubt occurred due to flawed engagement with lawyers, a failure to explain to them adequately that "software" means more than just one specific version of the source code text.

Solutions

There probably aren't any bullet-proof, enforceable copyright-based license solutions that don't involve contracts and monetary exchange.

The best way to go is to take away RedHat's market.

The way to do this is for the Linux kernel project to fork RedHat's contributions, back a different distro (Debian?) as the official "this is Linux" combination, decline to distribute a binary of the result to any *.redhat.com or *.ibm.com domain, and ask that any remaining down-streamers refrain from distributing to those domains too. That will mean forking systemd, GNOME and other such things too. Beat RedHat at their own game. If no one distributes binaries built from this mega-fork to RedHat, RedHat has no rights to the modified source code even if it contains some elements of their own work. If a down-streamer does distribute to RedHat, cut them off from new versions too.

RedHat's strong position today exists because they unofficially, and without opposition, put themselves into the role of "this is proper Linux for the serious minded". If the community doesn't like what they're now doing, the community does actually have the ability to appoint itself to that role. Not doing so is simply acquiescing to what RedHat is doing. Offering a strongly coordinated "official" alternative is the best way of getting users to migrate away from RedHat's flavour of Linux/userland and taking their market away from them.

Why This Could Work

RedHat would still be able to claim "ours is the best Linux", but if the kernel project were closed off to them it'd become increasingly difficult to sustain that line.

For example, where would a new lump of code to support the latest x64 chips go? The official kernel project, or RedHat? That would be the choice of the person(s) writing that code (or their employer), but at the moment no one is asking any such people/companies to make that choice. It would be a battle of wills, but if successful then within only one or two chip generations RedHat's Linux could become massively obsolete, slow and inefficient.

Admittedly, organising all that and convincing everyone to go through with it would be a big challenge, and a bit of a gamble. But if it worked, RedHat could be out of business within 3 years. The mere threat of this outcome might be sufficient to amend their behaviour.

Kaspersky reveals previously unknown hardware 'feature' exploited in iPhone attacks

bazza Silver badge

Re: 'Security through obscurity' just doesn't cut it anymore.

One can improve even on the abacus. Look up Anzan: https://en.wikipedia.org/wiki/Mental_abacus. You yourself would have to have an unknown hardware feature for that to be hacked.

bazza Silver badge

Re: 'Security through obscurity' just doesn't cut it anymore.

Agreed, an excellent article!

Technically speaking, even strong encryption is security through obscurity; it's just that the degree of obscurity is extremely high, and there's a well understood mathematical definition of exactly how much obscurity there is.
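To put a number on that, here's a purely illustrative back-of-the-envelope sum (the attacker's rate of 10^12 key guesses per second is my own assumption, not anything from the article): brute-forcing a 256-bit key is expected to take around half the key space, so

\[
\frac{2^{255}\ \text{keys}}{10^{12}\ \text{keys/s}} \approx 5.8\times10^{64}\ \text{s} \approx 1.8\times10^{57}\ \text{years}.
\]

The obscure thing is just the key rather than the design, and the maths tells you precisely how obscure it is.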

A lot of this reminded me of the days when people would poke around inside Z80 CPUs looking for the extra instructions that manufacturers were prone to slipping in.

There's also plenty of scope for this to be a simple screw-up. When designing a device you might start off with an assumption of how much memory-mapped IO it's going to need, so you slap down that many memory cells in the design. Then, as the design matures, you run out of ideas for what registers to have on the device and end up with loads left over. No one ever reviews that part of the design, the docs are written, the bonus is earned, and some fab somewhere starts stamping out chips with more addresses than anyone knows what to do with.
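For anyone wondering what "poking around" looks like in practice, here's a minimal sketch of dumping a block of memory-mapped registers from Linux user space to see what's actually sitting at those addresses beyond what the datasheet documents. The base address, register count and use of /dev/mem are all hypothetical choices of mine, it needs root, and reading arbitrary device registers can upset or hang real hardware, so treat it purely as an illustration rather than anything specific to the hardware in the article.

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define REG_BASE  0xF0000000UL   /* hypothetical device register block */
#define REG_COUNT 64             /* number of 32-bit registers to dump  */

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    /* Map the register block read-only into our address space. */
    volatile uint32_t *regs = mmap(NULL, REG_COUNT * sizeof(uint32_t),
                                   PROT_READ, MAP_SHARED, fd, REG_BASE);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Dump every slot, documented or not; any "spare" registers show up too. */
    for (int i = 0; i < REG_COUNT; i++)
        printf("offset 0x%03x = 0x%08x\n",
               (unsigned)(i * sizeof(uint32_t)), (unsigned)regs[i]);

    munmap((void *)regs, REG_COUNT * sizeof(uint32_t));
    close(fd);
    return 0;
}

Pointed at a real device's register window, the leftover undocumented slots would appear in the dump alongside the documented ones, which is precisely the kind of thing researchers (and attackers) go looking for.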

How thermal management is changing in the age of the kilowatt chip

bazza Silver badge

I recall that Crays were liquid cooled (freon?), and occasionally plumbed into a building's heating system... Room getting chilly? Run some CFD or something, that'll warm us up...