
Re: There is absolutely no problem here.
Ah well, 0 is simply an unnecessary number. All we use it for is to denote things or terms in equations that never existed or no longer exist. It’s not really a number at all, never mind an integer number!
>It may be lazy developers who point the 'update url' in their app to a named
>S3 instance much as they could to any domain name. Once the domain
>name has lapsed you can buy it and do much the same.
And it's kinda crazy that the same problem exists with domain names too. But it's particularly nuts with S3 buckets. There's no reason why the primary identifier for an S3 bucket as used in the API couldn't be a UUID / GUID, that a developer could get hold of when first accessing the bucket (i.e. when they know for sure who is providing the bucket). Given that Amazon could easily make it very difficult for a customer to get an S3 bucket with an arbitrary UUID / GUID, it would be very difficult for someone to come along later and squat on the bucket.
It's a bit harder with domain names, for sure, as people generally have to interact with them and long hexadecimal identifiers are not user friendly. But that's partly why websites have certificates...
>as wide as 886 standard four-inch cans of spam laid end to end
This opens up the option for a measurement of length, the pork-parsec: the distance at which a 4" can of spam subtends an angle of 1 arcsecond. It works out at about 13 miles, which is curious because an astronomical parsec is about 2x10^13 miles.
That means that an astronomical parsec is (if I put on my South African accent for a moment) 2 times tin to the power pork-parsec miles. I find that worryingly coincidental.
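For anyone wanting to check the back of my envelope (taking the can as exactly 4 inches):

```latex
% Pork-parsec: distance at which a 4" can subtends 1 arcsecond
\[
d = \frac{4\,\mathrm{in}}{\tan(1'')}
  \approx \frac{4\,\mathrm{in}}{4.848\times10^{-6}}
  \approx 8.25\times10^{5}\,\mathrm{in}
  \approx 13\,\mathrm{miles},
\qquad
1\,\mathrm{pc} \approx 1.9\times10^{13}\,\mathrm{miles}.
\]
```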
No problems with excellent nitpicking :-)
I'm well aware of PREEMPT_RT's lack of appropriateness for enterprise settings! I have used it extensively for real time signal processing applications, with excellent results. It's quite good fun loading up a machine to 95% CPU utilisation, leaving it there for a few years, never missing a beat, and still be able to log in via ssh and do things without breaking it. I don't know if one still has to run the application as root to be able to have it real-time scheduled and to set it to an appropriate priority (e.g. more important than all the device driver kernel threads).
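For anyone curious what "coding to make use of it" looks like, here's a minimal sketch. As far as I know you no longer need full root if the process has CAP_SYS_NICE or a suitable rtprio rlimit; the priority value of 80 is just an arbitrary choice for illustration.

```c
/* Minimal sketch: requesting real-time scheduling under PREEMPT_RT.
 * Needs root, CAP_SYS_NICE, or an rtprio rlimit high enough to allow it. */
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 }; /* 1..99, higher = more urgent */

    /* Lock memory so page faults can't introduce latency spikes. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* SCHED_FIFO: run until we block or a higher-priority task appears. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    /* ... real-time signal processing loop would go here ... */
    return 0;
}
```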
I remember C++ compilers of that style. Alas I didn't get into C++ until long after "real" C++ compilers came along, and thought it a pity that I didn't pay attention earlier. Seeing how C++ renders down to C is probably an excellent way for C developers to thoroughly understand what C++ is actually all about. Just what is a v-table and why would I iterate over it? Well, here it is!
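To illustrate the point, a rough hand sketch (not actual cfront output) of how a virtual call renders down to C: the object carries a hidden pointer to a table of function pointers, and a virtual call is just an indirect call through that table.

```c
#include <stdio.h>

struct Shape;                                   /* the "class" */
struct Shape_vtable {
    double (*area)(const struct Shape *self);   /* virtual double area() */
};
struct Shape {
    const struct Shape_vtable *vptr;            /* hidden vtable pointer */
    double w, h;
};

static double rect_area(const struct Shape *s) { return s->w * s->h; }
static const struct Shape_vtable rect_vtbl = { rect_area };

int main(void)
{
    struct Shape r = { &rect_vtbl, 3.0, 4.0 };
    /* what r.area() in C++ becomes: */
    printf("%f\n", r.vptr->area(&r));
    return 0;
}
```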
Possibly the same would be true of Rust. It could be interesting in the C vs Rust debate if Rust could be illustrated as "here's the equivalent C", to see what people make of it.
The patch docs talk about NICs coalescing interrupts anyway, and explain that as the NIC knows nothing about the application's behaviour it's "guessing", and therefore suboptimal.
The mechanism that's been introduced in effect allows an application to give a firm hint as to what degree of coalescing does make sense from the application's point of view, with the side effect that (if judged well) the application will always be asking for data just as the NIC / kernel was beginning to think about raising an interrupt.
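As I read the patches, the application side ends up looking like an ordinary epoll batch loop; the new irq_suspend_timeout and the busy-poll knobs themselves are configured out of band (netlink / socket options), which I haven't shown, so treat this as a sketch of the shape of it rather than a recipe.

```c
#include <sys/epoll.h>

/* Sketch: a batch-processing epoll loop of the kind the new mechanism is
 * aimed at. While the application is actively pulling batches like this,
 * the NIC's IRQ can stay suspended; if the app takes longer than
 * irq_suspend_timeout to come back for more, the kernel re-arms the
 * interrupt so the NIC isn't left holding packets indefinitely. */
int serve(int sock_fd)
{
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, sock_fd, &ev);

    for (;;) {
        struct epoll_event events[64];
        int n = epoll_wait(ep, events, 64, -1);
        for (int i = 0; i < n; i++) {
            /* recv() and process this batch; ideally the whole batch takes
             * just under irq_suspend_timeout to get through. */
        }
    }
}
```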
It's an interesting idea, but I do wonder.
One of the reasons to respond to NIC IRQs is to get that packet in, in memory somewhere, fast, so that more packets arriving on the network can all make their way through the limited buffers within the NIC. If the traffic load is such that the application isn't really keeping up, and one is now polling for available packets, it seems to me that there's the potential for network packets to get dropped. There is a parameter irq_suspend_timeout involved, which seems to be a timeout to ensure that the OS will start paying attention to the NIC if the application has taken too long to ask for more data. The suggestion is that this is tuned by the app developer "to cover the processing of an entire application batch"; but, what about the NIC's ability to continue to absorb packets in the meantime?
The patch documentation doesn't mention packet loss, drop, etc at all, so I'm presuming that actually that's covered off somehow, hence no need to explain the risk of it.
The thing is, dropped network packets will start having a big impact on the amount of energy consumed by the network itself. It costs quite a lot of power to fire bits down lengths of fibre or UTP, and the energy cost of dropped packets starts being more than a doubling of that power (because there's more network traffic than just re-sending the dropped ones). So that's why I'm interested in whether or not they have got packet dropping covered off somehow.
However, on the whole, a clever idea and well worthwhile!
Tuning Architectures?
Ultimately, if more network traffic is being fired at a host than the host can consume, then the architecture is perhaps wrong, or wrongly scaled. Our networks effectively implement Actor Model systems, which are notable in that a lack of performance gets hidden in increased system latency, because it muddles through the data backlog eventually (or at least, that's the hope). Thus, it's tempting to write off the increased latency as "who cares", and move on. That is often entirely acceptable (which is why all networking and nearly all software kinda works that way).
However, if one adopts a more Communicating Sequential Processes view of networking (think Golang's goroutines / channels, but across networks instead, or an HTTP PUT), this has the trait that if a recipient of data isn't keeping up, the sender knows all about it (send / receive block until the transfer is complete - an execution rendezvous). There's no hiding a lack of performance in buffers in NICs or networks, because there aren't any (not ones that count, anyway). It sounds like a nightmare, but actually it's quite refreshing; inadequate performance is never hidden, and you know for sure what you have to do to address it. However, if you do get the balance of data / processing right, all the "reading" is started just as the next "send" happens, and the intervening data transport shouldn't find a need to buffer or interrupt anything.
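For anyone who hasn't met the idea, here's a toy, hand-rolled illustration of that rendezvous (Go's unbuffered channels do this for you): the send doesn't complete until the receiver has actually taken the value, so there's nowhere for a backlog to hide.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int slot, full = 0;

static void channel_send(int v)
{
    pthread_mutex_lock(&m);
    slot = v;
    full = 1;
    pthread_cond_signal(&cv);
    while (full)                    /* block until the receiver has taken it */
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
}

static int channel_recv(void)
{
    pthread_mutex_lock(&m);
    while (!full)
        pthread_cond_wait(&cv, &m);
    int v = slot;
    full = 0;
    pthread_cond_signal(&cv);       /* release the blocked sender */
    pthread_mutex_unlock(&m);
    return v;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++)
        printf("got %d\n", channel_recv());
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);
    for (int i = 0; i < 3; i++)
        channel_send(i);            /* each send completes only on hand-off */
    pthread_join(t, NULL);
    return 0;
}
```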
This new mechanism brings the opportunity to kinda blend both Actor and CSP. It's "Actor", in that data could build up in buffers, but if an application / system developer did tune their architecture scale just right in relation to processing performance, the packets would just keep rolling in and be consumed immediately with barely an interrupt in sight, as if it were a CSP system but without any explicit network transfer to ensure the synchronisation of sending and receiving.
Of course, achieving that in real life is hard for many applications. There are some with constant data rates, e.g. I/Q data streaming from a software defined radio (well, the ADC part of it anyway). This new mechanism, paired with the fact that PREEMPT_RT has just become a first-rank kernel component (another hooray for that!), does some interesting things to the performance that could be achieved.
What rot.
Here we are, not even 1 year on from the moment when someone very nearly succeeded in achieving global backdoor access through OpenSSH on all Linuxes by attacking the build system for liblzma, and already there are statements like this being spouted again. No one has actually fixed the general class of problem that enabled that attack.
Linux is only "as intended" if one accepts some wafer-thin trusts.
As with many of these things, organisations tend to think that because they are not handling anything obviously critical to anyone else, they are not at very high risk.
What they always forget is that what their IT is handling is their own business, and that that is very critical to themselves. Looked at that way, one generally becomes a bit more careful.
Yes indeed, it is md (or at least, the results of ps -ef show a lot of familiar looking kernel threads).
They layer a management system on top that's pretty effective (from the point of view of the end user looking for simplicity, low skill and not much reading). If interested take a look at Synology Hybrid RAID. Obviously it's not meant to delight and entertain a Unix / Linux purist, but it's an effective consumer product. One can opt for whatever RAID system one likes and manage it oneself if one wishes.
I've heard bad things about some aspects of btrfs, and so have avoided using it directly. However, I also run a Synology box, and that's using btrfs and I've had zero problems with that.
I vaguely recall that Synology have been careful to avoid some of the more tricksy aspects of btrfs. And - for a home NAS - I have to say that it's all pretty slick and easy. Swapping out drives for bigger drives is very simple (one simply needs patience).
I think the world of file systems is extraordinary. Clearly, it's perfectly possible to put a huge amount of effort into developing one and wind up with a lemon. Good ones seem to be as rare as hen's teeth. They're ferociously complicated, and getting more complex, when a naive view is typically "it's just files, what's the big deal".
I can remember years ago when HP were still making a big deal about memristor, and how it'd replace all storage (volatile and non-volatile) because it was superior in all ways to both. I remember thinking then "well, that's the end of filesystems because everything will simply be a memory allocation by an OS". I think I also thought the opposite, that memory allocators would die out because finally everything truly was just a file. We didn't get memristor of course... And then there's the hoary old question of why isn't a file system a database, and why isn't a database a file system.
But the lines are somewhat blurred these days. It's going to be interesting to see whether we're still using all of SQL, file I/O and malloc / free in the future.
I think Visual Studio may struggle to run in Wine. Caveat - I’ve not tried. However when you install it there’s a few things seemingly added to the OS (all sorts of debugging malarkey). Whilst these aren’t needed to edit and build, the installer may barf and not complete.
If you’ve not used Visual Studio, it’s worth a test drive. I use it for C++ development on Linux and it’s pretty good at that (all done remotely via ssh and gdbserver. It’s surprisingly effective for this use case). I also write in C#, building and testing on Windows but running on Linux. This is also surprisingly effective.
It is big and bloated but it’s pretty good.
Apparently it came in with AMD's Barcelona, and Intel's Nehalem / Haswell architectures. It's not part of SSE4a / SSE4.2, but came in at the same time those SIMD extensions did (which Win11 also requires).
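If you want to check a given machine before attempting the upgrade dance, GCC/Clang can tell you at runtime. This is just a quick sketch using compiler builtins, not how Windows itself does the check:

```c
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();
    if (__builtin_cpu_supports("popcnt"))
        printf("popcnt available: popcount(0xFF) = %d\n",
               __builtin_popcount(0xFF));
    else
        printf("no popcnt -- Win11 24H2 will not run here\n");
    return 0;
}
```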
Linux "occasional minor issues"? The last version upgrade I did between the last Ubuntu LTS edition and the current LTS one was an unmitigated disaster. There was nothing special about the machine or the installation - basically vanilla with no tricksy setup. And it borked itself.
24H2 has been particularly naff though. The only machine it's cleanly upgraded on for me is the one that isn't supported hardware (no TPM, too old a CPU but it does have popcnt). Once I found out how to trigger the upgrade on such ancientness, no problems, it's running 24H2 like a champ.
I'll put in a reply here even though it's most unlikely to be stumbled across by my fellow El Reg regular commentators.
I finally got them all fixed.
One was successfully upgraded by preparing a USB stick using Microsoft's media creation tool, and running the setup from there. I'm guessing that bypassed whatever upgrade data was in place in the existing installation. It successfully retained all apps and data. This was to overcome an error code of 0x800736cc when it tried to update itself to 24H2.
Another was a bad driver in the 23H2 install that the 24H2 update was trying to bring in, resulting in a 0x80070002 - 0x20007 error. I found a post in Microsoft's own community pages here by 'youaremine', advising looking at "$Windows.~BT\Sources\Panther\setupact.log" after the failed installation. This will be from the root of c: when you reboot (c:\$Windows.~BT...) and it's hidden (so turn on viewing hidden files). At the bottom of that file you'll find the driver in the existing 23H2 installation that it's trying to install into its 24H2 self and failing. All I did was take ownership of the corresponding folder in c:\Windows\system32\driverstore, and move it elsewhere (instead of just deleting it, in case something went wrong). Running the 24H2 upgrade again (from the USB stick) resulted in a clean upgrade. Note that whilst the setupact.log file might be talking about a driver in d:\windows\system32\driverstore, the d: is simply because that's how the drive was enumerated whilst the upgrade was in progress and failing.
In my case it was an Oxford Semi eSATA filter driver causing the problem, for hardware my laptop definitely doesn't feature.
So, thank you 'youaremine'.
Sigh. The only hardware on which I have successfully upgraded to 24H2 is officially unsupported. It started off ages ago as a Rufus'ed installation to get past the lack of TPM and the antiquity of the CPU (though it does support the necessary SSE4.2). But it's now running 24H2 like a champ.
Just as well. Meanwhile, on my officially supported Dell and Lenovo hardware? No joy. Mysterious error codes. No information. At least they're reverting to their previous state.
The problem with Boeing’s backlog is that it’s not big enough to pay for the company’s debt. Probably.
From an investor point of view it should be seen as a zombie company. The only reason it’s not totally folded is because there are too many big players, including the US Government, who would be in serious trouble if the company did actually fold. The problem is that a Plan B is unthinkable, but the longer Boeing limp on, the more likely it is that the world needs a Plan B.
The only easy and viable Plan B is to allow Airbus to win and take over. That’s especially unthinkable to Airbus, who are thought to be desperate not to become a global monopoly because of the trouble that’ll cause them. Hence they’ve not moved to wipe out the 777X when they easily could, etc.
No, anger is pretty much the mood at the moment.
With Boeing in particular the issues all stem from the senior management, the very people that the customers deal with directly. So it feels more like a personal thing than it ever did before.
You can sense this from some of the actions that the airlines have taken. Ryanair and Emirates both sent their own engineers into Boeing factories to do their own audits. If that isn’t an “I don’t trust you personally anymore” statement, I don’t know what is.
Boeing’s historical approach to apologies hasn’t helped either. They’ve basically said “what are you going to do about it then?” because they know that their customers can’t easily go elsewhere. Following the door blowout some of those customers have decided that they do have a choice and that the one or two aircraft they can get out of Airbus early are worth fighting for.
What this has done is marketise A320 production slots. There are airlines with long-established orders now selling them to United at a profit, plus a price discount from Airbus for taking a later slot. Making money out of a plane that’s not even been delivered to you yet is - in the airline business - a smart move and a good result.
P&W’s woes are leaving the market ripe for RR to do a spin of their new UltraFan for engines in that class. That could cause CFM a lot of trouble too.
I’m sure the only reason why they haven’t yet is because P&W might fix their issues. But the longer it goes on the more chances there are that Airbus will ask RR to do it…
For those airlines that picked A320 and CFM engines like EasyJet, well they’re laughing all the way to the bank.
The A220 figures - small though they are - are bigger than Bombardier’s original capacity. Supply chain size is probably the fundamental limiting factor, but with Airbus backing it and sales looking good there’s every reason for the supply chain to increase its capacity.
Boeing’s re-acquisition of Spirit may help, as Airbus is getting the Spirit-owned Belfast plant that makes the wings, so Airbus is then able to scale that operation up.
And it’s to Toulouse Blagnac that you have to go to plead to be allowed to place an order.
British Airways had to do that once. They’d annoyed John Leahy of Airbus because BA were simply using Airbus quotes to drive down Boeing prices. So John Leahy stopped giving quotes and BA stopped asking for them. BA later got to the point where they finally decided they absolutely had to have some A320s and emailed Leahy for a quote. No reply. A letter. No reply. A hand written letter. No reply. A phone call. Not returned. And so on. It ended up with the boss of BA having to go to Blagnac personally, wait in the outer office for an inordinate amount of time, and then finally being seen by Leahy who gave him a full price no discount take it or leave it deal. BA signed on the spot.
Yes it’s worked fine for me too.
It worked (in a sense) even when trying to upgrade a very old machine from 10 to 11 with a Rufus’ed 11 installer set to disable the hardware check. All went well until the first boot, which crashed. The CPU was too old and lacked an opcode Windows 11 is compiled to use (POPCNT?) so it crashed with an illegal instruction error. Oh dear.
However the machine rebooted, the installer automatically rewound the upgrade and returned me a machine with Win 10 as was on it originally and a note of apology that something had gone wrong. And it was as clean as a whistle, no errors had occurred in the rewind.
Ok it was a pity that it wouldn’t run on that CPU, but I was impressed at how it handled a situation way outside the norm.
It'd be interesting to know exactly what went wrong. For this kind of system, there is no reason for the system to work in "natural" time at all. One picks an epoch, and counts seconds from that point in time. One might even deign to use International Atomic Time (TAI) (if one wants to be rate-compatible with systems like GPS). If at any point in the system this system time needs translating into something friendly to humans, you write a function to provide that conversion, but you do all the calculations and storage in TAI.
There are some very good libraries for doing this accurately, available from the International Astronomical Union - the SOFA library (Standards of Fundamental Astronomy). This will do accurate conversions between timescales such as International Atomic Time (TAI) and UTC (and a load more timescales) allowing for leap seconds, the lot. You have to update it when a new leap second is published, but leap seconds have been suspended for the moment.
It's quite interesting because it highlights the impossibility of accurately having a Julian date/time in UTC. Which is a pretty good hint that one shouldn't use UTC (and therefore, most computers and most software libraries) for systems like this.
Linux is actually quite favourable for such systems these days. With Linux, gpsd and a GPS receiver, Linux can maintain an accurate representation of TAI (even though the primary system clock is running on the traditional incomplete implementation of UTC), and one can program against TAI (e.g. you can ask for the TAI current date / time, instead of the local date/time). It even deals properly with leap seconds, in that when one occurs the TAI timescale remains correct.
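A quick sketch of what that looks like in practice. CLOCK_TAI has been in the kernel for ages, but it only reads correctly once something (gpsd plus chrony/ntpd in my setup) has told the kernel the current TAI-UTC offset; otherwise it just shadows CLOCK_REALTIME.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec utc, tai;
    clock_gettime(CLOCK_REALTIME, &utc);  /* UTC-ish system clock */
    clock_gettime(CLOCK_TAI, &tai);       /* TAI, if the offset is set */
    printf("UTC: %lld.%09ld\n", (long long)utc.tv_sec, utc.tv_nsec);
    printf("TAI: %lld.%09ld (TAI-UTC is currently 37 s when set)\n",
           (long long)tai.tv_sec, tai.tv_nsec);
    return 0;
}
```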
If the hardware starts lasting longer because Google are keeping the updates running longer, that's going to screw the manufacturers. The pricing / performance was based on rapid obsolescence, i.e. your Chromebook customer was going to be a repeat customer within 3 years.
Now, once everything has averaged out, there's every possibility that they won't be back to buy again for 10 years. On average that's a drop of 66% in anticipated revenue for manufacturers' Chromebook product lines.
There's also a front-loading to this; sales for the next 7 years are likely to decline even more sharply as today's buyers will not be buying again for a long time to come.
And one then has to question, can they still be profitably sold at current prices at 33% (or less) of the established production volume going forward? Probably not. The margins can never have been great (given the low retail price), and low margins work only if the volume is high.
So far as I can see, the only way manufacturers can continue to make money on Chromebooks is if the prices go up. The problem with that is purchasers might then ask: if one is buying a Chromebook at full laptop prices, is there any value in having a Chromebook at all?
Many may decide to buy a laptop. If they did, that'd be the end of Chromebook as a viable platform.
Agreed.
There is an uncanny relationship between Rust and Go (well, Go's CSP) that I think most have not spotted. Rust's fearless parallelism is based on its compile-time knowledge of object ownership. By knowing what is owned by whom and when, it can deduce when it's safe to pass off some functionality to another thread.
The relationship comes from how that's done. I imagine that at present the Rust compiler simply starts a thread and shares the data address with it (safe in the knowledge that the main thread isn't going to access it). However, that is not the only possible implementation. It could just as easily serialise the data, pass it to another thread as a copy via (for example) a pipe, and accept the result in turn in the same fashion.
Thing is, if I rename the "pipe" a "channel", as such things are termed in Go, you have a multiprocessing architecture implemented much as a Go programmer would have implemented it. Except the Rust compiler would have done it for itself.
The implication of that becomes very profound when one considers what that might allow hardware to become. At present, the Rust compiler is assuming that the code it generates runs on an SMP hardware environment - one global memory equally accessible to all CPU cores. However, if it could also serialise and copy data down a "channel" instead, what happens if that "channel" is a hardware inter-core link? The cores wouldn't have to be able to access all the memory in the system, just their own; they'd be exchanging data through these links, not via shared memory.
The profound bit is that to the developer, they're still writing what looks ostensibly like single threaded code, but Rust would be auto-parallelising it. And the machine itself would not be having to implement an SMP environment. And at the moment, it's having to implement an SMP environment for C that is causing all the problems with cache and CPU security such as Meltdown and Spectre (and derivatives), so getting rid of that problem wholesale would be a good move.
The implication of that is that one could write software in Rust, and build it either for an SMP environment (such as it runs on today), or build it for purely NUMA hardware. So if our hardware platforms had to transition from SMP to NUMA to continue to improve speeds and/or to lose problems like Meltdown for good, Rust has the potential for making that transition painless (or less painful); the source code would not have to change.
I know there'd be a lot of minutiae to sort out, but a surprising number of these have already been done. For example, what I've outlined implies that there's a large number of hardware cores, one for every thread started by every process running on the machine. Modern hardware has a lot of cores, but not that many. However, the old Inmos Transputer had (in effect) hardware virtualisation of cores; multiple "threads" could run on a single Transputer, the hardware scheduling between them however many there were, and each thread behaved as if it had sole access to the core's resources such as inter-processor links (channels); multiple threads' traffic was multiplexed over the hardware links.
The Future
So far as this old coding dog is concerned, that's where the future lies. Rust is accidentally a pivotal language in the future of computing and computer science.
But as this old coding dog knows all too well from years in the business, the software development industry seems remarkably resistant to new ideas, better ways of doing things, tools that eliminate work, and remarkably dismissive of things that help improve rigour for no-effort.
This makes it very off putting for hardware manufacturers to try something new, because they know it's incredibly unlikely that they'd bring the software industry with them. Just look at Itanium's fate...
We shall see. It's interesting that companies like MS seem willing to put effort into adopting Rust for their OS. This kind of organisation has got the money and the power to re-write its most sacred software, and is also big enough to go play in the hardware space. If companies like Microsoft or Apple worked out the future like this, it could happen. One way they could do it would be to continue to support SMP code such as C, but only as a software emulation. Whereas Rust (or even Go) built and ran a whole lot faster being native for the hardware.
Some might suggest that this smacks of a 1980s coding dog calling and wanting their beloved 1980s hardware back. But my response is, if the ideas are so antique and useless, why do modern generations keep reinventing it and appreciating it? For CSP in particular, I've seen that be invented, die, re-invented and die, and finally re-invented yet again. Maybe Rust is the thing that finally makes it stick both in software and in hardware by hiding it from the developer!
Thing is, whilst the world does indeed run on top of C (and C++ to some extent), there's been an awful lot of CVEs attributed to careless use of those languages in writing bits of operating system, system service, etc.
Most such faults have occurred because, whilst there's plenty of knowledge as to what one is supposed to do when writing code in such languages (e.g. validating inputs before processing them), those things have not been done by the developer and their omission has not been spotted in a review. For example, the Heartbleed bug was entirely due to inputs not being validated on an interface specified by an English language RFC and implemented badly in hand-written C. This was particularly poor because we've had the tools and technology to specify interfaces in schema with input validation defined, and automatically implement them in a language of your choice, for decades. ASN.1 has been around forever; XML and JSON are newer ideas with the same capabilities.
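For the avoidance of doubt about what "inputs not being validated" means in practice, here's the Heartbleed failure mode in miniature (a sketch of the class of bug, not OpenSSL's actual code): the peer states a payload length, and the broken version simply believes it.

```c
#include <stdlib.h>
#include <string.h>

/* Broken: trusts the length field supplied by the peer. */
unsigned char *echo_payload_broken(const unsigned char *payload,
                                   size_t claimed_len, size_t actual_len)
{
    (void)actual_len;                          /* never checked! */
    unsigned char *reply = malloc(claimed_len);
    if (reply)
        memcpy(reply, payload, claimed_len);   /* reads past the buffer */
    return reply;
}

/* Fixed: validate the claimed length against what was actually received. */
unsigned char *echo_payload_fixed(const unsigned char *payload,
                                  size_t claimed_len, size_t actual_len)
{
    if (claimed_len > actual_len)
        return NULL;
    unsigned char *reply = malloc(claimed_len);
    if (reply)
        memcpy(reply, payload, claimed_len);
    return reply;
}
```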
Rust is proving capable largely because the amount of review effort required is significantly reduced, especially for anything slightly complex or multithreaded. A developer presenting a pile of Rust for review that is devoid of the unsafe keyword that compiles and runs is already a long way ahead in the review process. By comparison, C/C++ that compiles and even runs has to be treated with the utmost suspicion by anyone reviewing it or relying on it fresh out the box, even if it does lint cleanly.
The amount of time spent by the two developers getting to compiling and running code may not be so very different. However, as it's clear that it's everyone else's time thereafter that matters most, Rust wins on that basis. To argue it more bluntly, picking C / C++ over Rust for a new project today, if Rust is a genuine alternative, is deliberately making the job of review and delivery harder and slower, which is not nice for the reviewers or the people waiting for the end result. Of course, there are lots of valid reasons why Rust may still not be a genuine alternative for a new project (e.g. availability of developers), but one really should test one's assumptions these days. If Rust does explode in popularity, one could find one's fresh, shiny C project devoid of developers in only a few years' time.
And this is showing up in projects.
There's the Redox OS, written in Rust. OK, that's not production ready. However, one has to look at the size of the development team and their productivity. They went from nothing to a running and fairly complete OS with a graphical desktop in an astonishingly small amount of time (3 years). This was far more code written and got running than - say - Linux required, because Linux was merely a kernel on top of which the GNU userland was deployed with X11 (these two already existed and had taken years to get together).
Some guy at Mozilla has reimplemented the GNU coreutils (ls, echo, cat, etc) for Linux in Rust. Along the way a number of latent bugs were found, and the end result runs faster. For a "mere" re-write to be finding bugs in code as old and as depended upon as this, and to find ways of speeding it up (look up Rust's "Fearless Parallelism"), is somewhat surprising. But, there you go. I think quite a lot of distros now make this available as an alternative package.
>RISC-V isn't about saving a few cents on a chip, it's about 100 companies having freedom to innovate and compete, not just Intel. AMD, Arm, Qualcomm, and Apple.
That can be looked at in a different way. ARM providing a chip manufacturer a ready-to-go core (which implicitly includes all the software dev tools for that core - complete, finished, supported, documented, etc) allows the chip manufacturer to innovate in ways that matter more to their customers; namely, what sort of peripherals, what memory there is, etc on the device.
Ultimately a core is just a core, a way of running software, and a ready-made core that's extremely well understood is a valuable thing for a chip manufacturer. They can more cheaply participate in the market.
Sure, RISC-V allows chip manufacturers, for no license fee, to start tinkering with the ISA. However, that's a very complex undertaking. To differentiate themselves in any meaningful way they've got to take on an awful lot of extra expertise and a whole lot of extra work. It's very unlikely that extra effort will amount to a world-beating difference.
>You can already get multiple laptops using RISC-V processors, including a main board for the high quality Framework Laptop 13. At present they are slow, a similar speed to a late Pentium III or a very early Core 2 (e.g. original MacBook Air). They'll be hitting mid-life Core 2 Quad speeds sometime this year, and early Core i7 (maybe Sandy Bridge-ish) next year, maybe Zen2 / Apple M1 in 2027.
Well, to make RISC-V performance competitive with today's best is going to mean booking a large production run on TSMC's line, with a RISC-V design dotted with pipelines, caches and memory controllers sufficient to exploit the performance of TSMC's finest transistors. That's not going to happen unless some major player decides to abandon the ARM ecosystem (which has served them very well) and go it alone. Thing is, they'd also have to persuade peripheral vendors to re-write all the device drivers for the peripherals they want to glue into systems. That might be non-trivial, if the peripherals are not their own and are not blessed with OSS drivers (see a lot of WiFi, graphics, touchscreen devices). And then they'd have to persuade software / application vendors to support it too...
That's kinda ARM's strength. They've made it easy for everyone to use ARM, and there's a lot of OSS and proprietary software inertia (especially in the mobile space). They don't bite the hand that feeds them - in fact, they barely nibble - so there's not a lot of motivation to divert that inertia.
Never bet against free is certainly good advice, but then it can be pretty difficult to distinguish between "free" and "very cheap", especially as the "free" part of RISC-V gets lost in the fact that one still has to pay for the silicon regardless of the cost of the ISA.
ARM has been uncannily good at pricing its products so that its other functions - i.e. controllers of what the ISA is - become important. If you write software for a particular flavour of ARM, you know it's going to work on any chip that claims to implement that flavour. It's a bit more complex on RISC-V, but that seems to be enough to keep ARM firmly in play.
Plug and Play: there's ARMs aplenty that support PCIe, and I presume that firmware and OSes that support that on ARM are perfectly capable of detecting what devices are actually plugged in. It's been a mixed bag in terms of non-PCIe peripherals on the chip, but that seems to have been getting sorted out with a lot of work in the Linux kernel project aimed at simplifying how ARM peripherals are described to a kernel.
"If I couldn't write a mouse driver, I didn't deserve a mouse".
I took the opposite approach. Change the electronics, instead of writing a device driver.
I came across a surplus Royal Navy trackball - a giant yellow plastic ball with considerable inertia, but a really nice and slick action. Mechanically, it was superb, which is why I wanted to use it. BTW the size was due to the need for it to be usable by a crew member wearing bulky anti-flash protective gear.
However, to get it working with a PC I decided to tinker with the electronics inside to make it compatible with something that already existed (an early MS serial mouse if memory serves), rather than write software to make it work as-is. Worked a treat!
Thing is, a lot of folk's interpretations of various OSS licenses have little in common with what the licenses actually say.
For example, those banks and their use of OSS software, with no give back. The only valid expression of the software authors' intentions is the license. If that license says "you can use that software for free", there is no other possible reading of it. And whilst many others may assume that the authors are looking for some sort of quid pro quo, some sort of give back, that's not actually what the license requests. It's not for third parties to say what the software authors meant to say instead of what they actually said in the license they stuck on the front of their code.
One cannot be both obligated and free of obligation at the same time. And, the established reality of the Western world's capitalist democracies is that if a company is not obligated to do something, the shareholders are allowed to get properly grumpy if the company then does burn shareholder money on something they don't have to do. Arguably, releasing software for free into such an environment in the hope that major corporations will shower your project foundation with funds or code donations is somewhat naive. And where it gets very complex is that the companies "free loading" may well be part owned by the pension scheme the software author is a member of...
There are - amazingly - companies that'd rather pay for software (and get support) than use free software. The problem these days is that, in quite a few fields, there is no commercial option; it's OSS or nothing.
There are other business cultures. In Japan companies exist as much to be socially useful as to be profitable. This is why Japanese company execs are on TV bowing deeply in apology when the company screws up; it's a personal social issue for themselves (and the share price crash is a mere secondary consideration).
The ability to safely transfer security assets such as PassKeys from one device to another seems to me to be an essential component of any security management tool. What's irritated me about things like the popular OTP apps on smartphones is that - basically - you cannot do so. You have to rely on the OS and back end cloud backing up your device and restoring to another. That's not fine, because then you're locked in.
So it seems pointless building PassKeys into an operating system as core functionality, because that's simply going to make life for users extremely difficult at some point in their lives. Unless there is also a standardised means of transferring PassKeys around devices that all can agree to and is also safe, the software vendors should not be allowed to build them in to their OSes. It becomes another form of device lock in.
KeePass and its derivatives are the best for storing security assets, simply because one can then move one's KeePass file from device to PC to Mac, etc.
The inclusion of PREEMPT_RT is excellent news. I salute all those who have worked to make this happen!
You can do some really cool things with PREEMPT_RT, if you code to make use of it and need something more real time than simply playing back audio or video. It does mean one is likely coding in C or C++, which is becoming less popular. I’ve had excellent results with it, and now that it’s readily mainstream it’s easy to get. It’ll trickle down to Android, and things like ICE GUIs could become super slick.
It should put a dent into Red Hat. They did (do?) a spin of RHEL called MRG, the R being a PREEMPT_RT kernel. They charged a ridiculously large fee for this. Now it’s a free thing.
CANDU's use of heavy water was (is?) a bit of a bonus for the UK's Rutherford Appleton Lab's ISIS particle accelerator. Heavy water being disposed of by the Canadians was really good for target cooling in the ISIS particle accelerator.
Trouble was, after a tour in a CANDU reactor, the heavy water would be lousy with tritium (T2O, or DTO, or THO). So, the ISIS facility had to install a load of Tritium detectors in case of leaks (you really don't want to ingest Tritium), and because (in theory) leaks didn't happen these were on something of a hair trigger.
The trouble was that, at the time, next door, there was the UKAEA and its reactors, and because they had reactors of a particular sort they were prone to guffing off vast clouds of tritium such as you'd never believe (and were licensed to do so). And it'd waft over the fence, blow in through gaps in windows, etc. and set off the tritium detectors in ISIS. Every single time there'd have to be a check, a search, etc., just in case, but of course nothing was ever found to be amiss.
This was way back in - I guess - the late 1980s, very early 1990s. I don't know if they use heavy water today in ISIS; they're not using uranium targets like they used to, so perhaps the D2O has gone too.
Is it just me, or does anyone else think that the expenditure of 4GW on "AI" (for any purpose) is a complete and utter ****ing waste of resources?
To put that into context, so far as I can tell a ton of H2O produced from seawater using reverse osmosis requires 2.98 kWh (see this presentation, slide 19). 4 GWh could produce 1.3 million tons of fresh water, every hour. That would be quite a lot of irrigation water for farmers, or an awful lot of clean water for houses.
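For what it's worth, the arithmetic behind that figure:

```latex
\[
\frac{4\,\mathrm{GWh}}{2.98\,\mathrm{kWh/ton}}
  = \frac{4\,000\,000\,\mathrm{kWh}}{2.98\,\mathrm{kWh/ton}}
  \approx 1.34\times10^{6}\ \mathrm{tons\ of\ fresh\ water\ per\ hour}.
\]
```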
They're getting harder to avoid, as are Fedora's equivalent.
The fragmentation in Linux of "How to Distribute Software" is the biggest barrier to adoption outside the limited world of enthusiasts / experts. Windows and Mac just sorted it out, once, and haven't had to think about it in years and years and years.