* Posts by MarkMLl

145 publicly visible posts • joined 29 Jun 2019

A path out of bloat: A Linux built for VMs

MarkMLl

BTDT

> How hard can you cut down Linux if you know it will never run on bare metal? Further than any distro vendor we know of has tried to go.

Demonstrated adequately by User Mode Linux (UML), which has been a standard build target for a considerable number of years.

The major thing that it couldn't do, when I last looked, was run a 32-bit guest on a 64-bit host. Which is a great pity, since if a guest is only going to be used for work which is not memory-intensive there's no real reason for it to carry around the burden of 64-bit pointers etc.

City council megaproject mulls ditching Oracle after budget balloons to £131M

MarkMLl

Re: What would it cost ...

Contract out to... ICL? Sorry, Fujitsu.

The idea of having a preferred supplier for HM's DP Systems is not inherently bad, particularly if they genuinely have proper staff training schemes etc. (i.e. like Marconi had when they were the preferred supplier for certain types of telephone system).

But it needs politicians with some minimal level of cluefulness to get it to work.

MarkMLl

So whose bright idea was it in the first place?

I've seen comment in the past that Oracle tends to pitch its products directly to prospective customers' board members, cutting out the specialists who really know what's going on.

Has anybody looked carefully at the procurement process in the case of Birmingham?

I don't mean whether the tendering process was done by the book, I mean who started pushing the idea that relying on Oracle was a fundamentally sound bet?

The successor to Research Unix was Plan 9 from Bell Labs

MarkMLl

Re: So...

But be realistic: if you're the sort of person who still considers Slackware then you got a lot of that good attitude from your parents.

A thoroughly above-average family :-)

(The main reason I left Slackware for Debian was support for SPARC etc.)

MarkMLl

Re: OK, but is "multiuser" relevant any more?

But the Samba backend process runs- AIUI- as its own user (i.e. "samba" in group "samba"). It assumes responsibility for checking who is trying to access a file (i.e. which user on a client system) but once it's done that it still uses samba:samba at the library and kernel level.

I think I'd prefer that protection to be enforced at a lower level, in the same way that cross-system requests are enforced on many capability-based distributed OSes.

MarkMLl

Re: So...

> I believe Rob Pike studied under Wirth.

There's definitely a "citation required" tag on that. Wirth's connection with the ALGOL lineage- including Ada- is unassailable, and most modern languages seem at least tolerant of his ideas regarding strong typing. And one of the first APL implementations was done under his supervision at Stanford.

Robert Griesemer provides a link between Wirth and Google including Go, but so far I've seen no definite confirmation that Wirth had a personal connection with any of the Bell researchers- or for that matter anybody on the USA's East Coast i.e. IBM and the rest.

> Plan 9 is a successor to a multiuser OS with strong security, authentication, etc.

OK, but is "multiuser" relevant any more? Multi-*tasking* is definitely relevant, as is strong security between processes which might be briefly processing data on behalf of a known or anonymous user. Most desktop systems are now strictly single-user, and anything larger seems to have reverted to the 1970s mainframe model: a frontend would enqueue a query (etc.) on behalf of a user, and a backend would dequeue and action it with appropriate access permissions.

Focusing on "multiuser" as the prime requirement and running backends with their own fixed identity might actually be undesirable: the industry needs a good hardware-enforced security model applicable primarily to daemons, and if it turns out that that can also support a traditional multiuser architecture that's just icing on the cake.

Forgetting the history of Unix is coding us into a corner

MarkMLl

Re: Only Wayland?

While I agree, things like a window's position on the screen or its Z-order should be reflected by metadata in the directory structure rather than file content.

That sort of thing was completely overlooked in UNIX's design, which is why there were attempts like Alpha-Windows on character terminals and why window position and furniture are separate from window content on X11.

But even if we ignore the positioning aspect, we have to ask ourselves how best to represent process output as a file. Is it going to be a stream with embedded positioning commands etc.? Or is it going to be a sequence of rows each with a fixed number of columns representing what is actually presented to a user?

The first of those might be appropriate if the output device is a glass teletype, but once one starts considering any sort of smart terminal or form-based display one has to wonder whether IBM were actually right when they defined that fileset metadata included block and record sizes.

MarkMLl

Re: What is unix?

I broadly agree, but that needs closer examination.

"Back in the day", Digital Research sold CCP/M (aka MDOS etc.) with the claim that it could do anything that unix could do. However what that actually boiled down to was that /some/ programs originally written for unix could be recompiled for CCP/M, since they only used standard library facilities which could be emulated adequately by a decent compiler: Lattice C springs to mind.

fork(), as a system call, has well-defined semantics to which anything that isn't "a unix" struggles to conform.

fork(), as a library routine, is more tolerant provided that you don't start looking too closely at the memory semantics.

Any OS which claims to provide a good imitation of something more mature becomes a support nightmare, as more and more people uncover marginal behaviour. OTOH, if its emulation survives for more than a few years it provides an incentive for people to write and test their code for at least some degree of cross-platform portability.

DR's "Better DOS than DOS (and as good as any unix)" phase lasted perhaps five years. OS/2's "Better Windows than Windows (and as good as any unix)" for perhaps the same sort of time. Ditto for Linux's claims to have a subsystem that would run other unixes' binaries. But all of them were strong when running code specially written for their native APIs.

Which I suppose means that the days of Linux pretending to be "a unix" are actually long past, and that for the last five years or so almost everybody has been more inclined to treat various distreaux as "a Linux": with, like it or not, systemd, Wayland and the rest.

MarkMLl

Re: Linux use to be Unix

Which takes us back to "stuff that Poettering hasn't touched."

I'm not necessarily saying that all his ideas are bad, but he has repeatedly demonstrated such reckless disregard for the various layers which conventionally make up "A Unix"- kernel, libraries, daemons, application-level APIs and so on- that de-facto Linux is now more defined by systemd than by the strict Linux kernel.

MarkMLl

Re: Not *everything* is a file

And /you/ try netstat -an, which includes all of the ports your system exposes, none of which appear directly in /dev.

MarkMLl

Re: What is unix anyway?

Incidentally, if anybody is as perplexed by IBM's z/Wotzit range as I am, I can recommend the PDF at

https://www.ksp.kit.edu/site/books/m/10.5445/KSP/1000034624/

as a particularly good read. It goes into a lot of detail about IBM's mainframes up to the early 2010s, and does so in the context of "industry dominant" terminology: i.e. it describes a sysplex in the wider context of clustered systems and so on.

MarkMLl

Re: What is unix anyway?

That doesn't sound too difficult, and could basically be achieved by using different code and data search paths depending on the OS requirement of a particular application program.

The sticking point would be libraries and support programs that came with Windows, and couldn't- at least legitimately- be run on top of some other foundation OS.

However it does remind me of something that happened to a user on CIX (anybody remember CIX?) a considerable number of years ago, when WINE was much less mature than it is today. He came across something that he strongly suspected carried a (non-bootsector) virus, and idly ran it: it infected files in his application search path, which meant that malware-carrying files could be referenced directly by a carelessly-entered shell command.

Of course, Windows has improved enormously since that happened, albeit more by the inclusion of virus scanners and mandatory code signing than by inherent good design. Which makes me wonder how much of that "medicine" would work properly if the foundation OS was something other than Windows.

MarkMLl

Re: Not *everything* is a file

I think the original idea was that everything was represented by a name somewhere in the tree of mounted filesystems, and was manipulated by one of a small number of system calls.

Unfortunately, things like sockets on top of IP don't have names. Network devices don't have names (relative to / ). USB devices don't have names... the list goes on, although in fairness the /state/ of many of those is often accessible via /sys or /proc (with, in the case of Linux, a layout decided entirely by the whim of the kernel module author).

And again unfortunately, there's a vast number of devices which have their own APIs funneled through ioctl() or accessible only via a private kernel-level API.

So in short: everything is a file (provided that it's a file).

The challenge for a putative C21st UNIX replacement would be generalising all possible devices to have a consistent minimal API, generalising all streams interacting with a device to have a consistent minimal API, and so on. And what I've seen of the industry over the last 50ish years doesn't encourage me to hold my breath waiting.

MarkMLl

What is unix anyway?

"That's why several versions of IBM z/OS are on the Open Group List. That seems strange because they are not Unix-like at all."

Which has always struck me as very, very odd. It's like saying that OS/2 or Windows "is A Unix" by virtue of their compatibility layer. Or that a proprietary hypervisor becomes "A Unix" because it can run UNIX in a VM.

Or- and purists will find this really contentious- Linux becomes "A Unix" by virtue of running UNIX in a VM.

I have enormous respect for the classic IBM mainframe designs, and for the architecture of OS, which was astounding once they finally got it working: a worthy challenger to established products such as Burroughs's MCP. But many of its strengths- those that, for example, allow seamless process migration between members of a cluster- are fundamentally incompatible with UNIX: claiming z/OS is "A Unix" does neither justice.

Drowning in code: The ever-growing problem of ever-growing codebases

MarkMLl

Re: m4 macro processor

However, the real strength of a macro stage is when it can determine the type of its parameters: /this/ is an lvalue to which a 64-bit real can be assigned, /that/ is an expression that evaluates to a real, and so on.

Lack of a decent frontend which can handle that sort of thing is one of Pascal's many flaws. However when combined with a decent RAD IDE (i.e. Delphi or Lazarus) it still has quite a lot going for it: but I'm no longer sure that I'd recommend it to somebody seeking a language for in-depth study.

MarkMLl

Re: "Late in Wirth's career, he became a passionate advocate of small software."

"everything that was necessary and nothing that was not" is fine, /provided/ that the underlying platform (i.e. language, OS or whatever) is extensible.

For a language to not have a decent macro-expanding frontend is a major omission (and for the language's custodians to point to some favoured facility that can do many, but not all, of the same tricks is inexcusable aggression).

For an OS not to be extensible by device drivers is a major omission (and the same applies to loadable network protocol plugins etc.).

The devil take any developer that assumes that he understands all possible use cases to which his product may be applied, and eschews conventional wisdom and expectations.

MarkMLl

Simplify and add lightness...

I'm all for eliminating cruft and keeping systems- hardware, OS and apps- as small as possible.

The problem is, however, that if you simplify a computer (hardware and OS) too far, then it will no longer be able to support the mix of functionality that almost everybody insists is essential.

You can obviously get round that to some extent by having more physically-separated computers. But as you do that the number of communications links goes up roughly as the square of the number of systems, followed by the number of opportunities for race conditions and inconsistencies.

As should be obvious, in retrospect, to absolutely anybody who has had anything to do with a system such as the UK Post Office's "Horizon": which remains in service with a very worried population of captive users.

MarkMLl

Re: A Few Ironies

Yes, but there /is/ a fundamental problem here: if you revert to a simpler OS then you're going to have to throw stuff away. And nobody can agree what's to go.

MarkMLl

Re: UI standardised ?

Agreed. A major issue is that now that programs don't- as a rule- have menus, the UI has lost the ability to "pop up" a hint telling the user what he's about to do.

RIP John Walker, software and hardware hacker extraordinaire

MarkMLl

Xanadu, and other SF

One of my favourite short stories is Walker's "We'll Return, After This Message" which he dates December 1989 https://www.fourmilab.ch/documents/sftriple/gpic.html

It includes the words "Back in '97 Xanadu still wasn't finished...", which suggest that despite his optimistic description in 1988 he very rapidly started having second thoughts.

Clarke in "Odyssey Two" (1982) seems to have swallowed the underlying idea, with just about everything (including electricity bills) stored in a handful of DP vaults scattered around the planet: strongly suggested to be a shared resource. And I think it was in "The Fountains of Paradise" (1979) where he implied that the pinnacle of computer proficiency was being able to generate imaginative SQL one-liners.

Since then we've had the Semantic Web, not to mention indisputable duds like Chandler.

And everybody's gone back to the DP bureau model, where they hire time from Google or Facebook without looking too closely at the fine print.

Raspberry Pi Pico cracks BitLocker in under a minute

MarkMLl

It took the /Pico/ less than a minute. The point being made- and I think it's fair- is that it didn't involve hours or days of computation to decrypt a key, or some substantial amount of memory to build up rainbow tables.

MarkMLl

Hardly surprising...

There have been multiple people looking at making use of the LPC bus- which is what accesses the TPM when it's not integrated into the CPU- for a couple of years: some of whom use the Pico.

https://hackaday.com/2023/06/13/bios-post-card-built-using-raspberry-pi-pico/

So the real issue here is that BitLocker uses the key from the TPM without combining it with "something /else/ you have" or "something /else/ you know": i.e. a swipeable card or a passphrase to be entered.

None of which would help in the "Dread Pirate Roberts" scenario where both he and his laptop were taken live, or... (obligatory XKCD) https://xkcd.com/538/

Fairberry project brings a hardware keyboard to the Fairphone

MarkMLl

Re: Not the way I would do this

Search for "Blackberry Q10 keyboard" on AliExpress: there's a lot around, either as unused spares or salvaged from e-waste.

Designing something like a keyboard is difficult, and refining it to the point where it is as well-regarded as the Blackberries' is well-nigh impossible. Not to mention MOQs for the specialist parts which are probably in the thousands.

MarkMLl

Fiddly bits

Apropos components, it appears to be only the Hirose connector which has to be sourced and soldered by the purchaser, and it probably wouldn't take many people clubbing together (order of 100?) to get JLCPCB to add it to their list in short order.

And they do have competitors...

More of an issue is the final housing. 3D printing- /particularly/ if it's FDM (Fused Deposition Modelling)- benefits enormously from manual finishing which is not a million miles removed from high-end car bodywork. Filler-sand-repeat several times, then paint: relatively few people have the patience to do a good job of it.

Tech billionaires ask Californians to give new utopian city their blessing

MarkMLl

Same old...

California, not to mention the rest of the USA, seems totally unable to get any form of "improvement" right:

https://www.nytimes.com/2022/10/09/us/california-high-speed-rail-politics.html

RIP: Software design pioneer and Pascal creator Niklaus Wirth

MarkMLl

Re: dangling else problem

Yes, which is my point. Have you actually read the ALGOL W source with an eye to working out how to make a significant change in the syntax?

MarkMLl

Re: Wirth and John McCarthy

According to Waychoff ** IBM had made a significant donation to Stanford in the early 1960s (i.e. pre-McCarthy) which they'd used to build a computer centre... and then put a Burroughs mainframe in it. Also according to the same narrative McCarthy was unhappy that Burroughs wouldn't give him a way round their memory protection so that he could take over memory management with LISP. It's very easy to interpret that as McCarthy wanting to get rid of Burroughs, with IBM prepared to bend over backwards to keep him sweet. (I'd note here that SLAC kept their Burroughs, and it contributed to the design of the Starwars-era S1 supercomputer which had an OS implemented in an extended Pascal.)

There was, without any doubt whatsoever, an enormous amount going on behind the scenes which might never be disentangled. We know that Wirth and Dijkstra were friends, we know that Dijkstra was very public in his disdain for IBM, we know that Wirth spent at least one sabbatical on the US West Coast but there's little indication that he visited Stanford.

So while it's very common to see accounts of there being a Europe vs USA battle centred around ALGOL/Pascal vs FORTRAN/COBOL, it might actually have been Dijkstra/Wirth vs IBM and, at a deeper level, IBM doing everything it could to disparage "The BUNCH": i.e. Burroughs et al.

** http://archive.computerhistory.org/resources/text/Knuth_Don_X4100/PDF_index/k-8-pdf/k-8-u2779-B5000-People.pdf which is Donald Knuth's copy, hence the curious annotation on the front page (which has never been elaborated, including in Knuth's oral history where he has nothing but praise for Burroughs). I believe that copy came from Bill Parker who was a Burroughs FE in the UK, and had trained on the next generation Burroughs mainframes in November '79; I suspect it got to Knuth via Warwick University which was another Burroughs site despite its proximity to an IBM campus.

MarkMLl

Re: dangling else problem

I am fully aware of what it means. The problem persisted in Pascal implementations: it is more than just a specification issue.

It was fixed in Modula-2, which Wirth designed at his relative leisure after- I believe- switching to recursive descent which is easier to maintain.

It was fixed in Ada, where Wirth- and other authors of the ALGOL-68 Minority Report- served on the HOLWG.

But most importantly it was fixed in ALGOL-68 which required an explicit FI etc., hence Wirth was aware that it was a significant issue before embarking on Pascal.

MarkMLl

Wirth and ALGOL-68

The thing that has long surprised me is the complete lack of support that Wirth appeared to get from his former boss at Stanford, John McCarthy. The quip about "knuthput" in the "Final Word" cited suggests that he felt that the entire Stanford department had become hostile to him.

Not having McCarthy's support, and probably correctly feeling that he was too junior an academic to stand up to somebody of McCarthy's stature, he resigned from the committee in May 1968 ** and spent the next few months modifying his ALGOL W compiler to implement a new language for (I speculate) the Autumn postgrad intake to work with. As it was, the delay before an intelligible ALGOL-68 specification was available probably meant that he needn't have rushed.

At my most charitable, I think that Wirth was too inexperienced to realise that he had to cultivate his former colleagues and explain what he was doing and why.

Somewhat less charitably, accounts elsewhere suggest that McCarthy had lost enthusiasm for ALGOL by about 1965, when he had Stanford's Burroughs (i.e. ALGOL-based) mainframe replaced by an IBM.

Perhaps cruelly, there's even a possibility that McCarthy intentionally sabotaged ALGOL-68 to remove a threat to LISP.

** https://www.tomandmaria.com/Tom/Writing/DijkstrasCrisis_LeidenDRAFT.pdf

MarkMLl

Re: dangling else problem

"fixed in later languages, including Pascal"

No it wasn't. Wirth was in a rush, he based his earliest Pascal compiler on ALGOL W which used recursive ascent (rather than descent) and the changes required would have taken more time than he thought was available.

In practice, because ALGOL-68 was delayed, he could have slowed down a bit and done the job properly.

Apropos the semicolon, being a /separator/ rather than /terminator/ it couldn't appear before an ELSE (and some Pascal implementations were picky if it appeared before e.g. UNTIL).

But after various people- UCSD, Borland- had "improved" the language we've ended up with a mixture of structures which handle single statements (e.g. if-then-else) and structures which require END (try-finally-end).

The comparative lack of success of Modula-2, which tidied this stuff up, is unfortunate. But Wirth no longer had Kathleen Jensen to help him make it intelligible.

War of the workstations: How the lowest bidders shaped today's tech landscape

MarkMLl

Re: Good article, if a bit too wide ranging

I agree: so wide ranging it makes comment difficult and risks having somebody who has good reason to disagree with one point say things based on an imperfect understanding of others.

Having said that, I think the priesthoods that surrounded the major mainframe architectures were so intent on protecting their precincts by means of arcane incantations that their successors really hadn't got the faintest idea what they were talking about.

It took me years to find a coherent explanation of what a "parallel sysplex" is, and how it compares with an SMP system, or NUMA, or a cluster.

MarkMLl

Re: Sorry Liam, Not Even Wrong...

"I'd heard his reason for leaving MS was his insistence on keeping the GUI out of the kernel vs. Gates' insistence it run in privileged mode for visual performance reasons"

I'd say that history has proven Gates wrong. MS was very much into "let's improve the user experience", but the amount of productivity sacrificed (and crime perpetrated) as a result of that vastly outweighs the benefits.

MarkMLl

Re: Sorry Liam, Not Even Wrong...

There is no way the PC shipped with a sound chip: it had 1-bit output to a speaker and you had to hardcode almost everything.

MarkMLl

Re: Disagree on a few points

"Yes, a concept carried over by Commodore on the original PET and it's floppy disk unit. Maybe others did it to, but none I'm aware of."

In that case you've really not looked very hard. From the early 80s onwards anything using a SCSI bus had a significant amount of processing power in each device (i.e. peripheral), and the situation has continued to the present day. Haven't you ever stopped to wonder just what's inside that fancy printer on your USB bus, with a display that would put the original PC to shame?

MarkMLl

Re: Correctness and Simplicity

I believe that the original definition was from John McCarthy (MIT and later Stanford), who compared a certain type of programmer with a "ski bum" intent on shaving a fraction of a second off his downhill run time.

MarkMLl

Re: What Is A “Workstation”?

Interestingly (at least IMO), Sun's enterprise-grade machines and the Cray CS6400 were all based on an "Artificial Intelligence Workstation" architecture developed at Xerox PARC, which was then reworked by a collaborative team of Sun and Xerox engineers to use SPARC processors (the seminal papers have a couple of dozen authors from the two companies).

If nothing else, this suggests that physical size and the number of simultaneous users are barely relevant when it comes to firming up the definition: it mostly boils down to "what does this company see itself as selling?".

MarkMLl

Re: Disagree

The bigger problem with the dynamic languages is the lack of internal protection: an ordinary user can make breaking changes to the underlying structure of the system.

I'm fully behind "It's my PC, and I'll pry if I want to". However I think it's indefensible for a user to (a) make some arbitrary change to a network-facing computer and then claim it's unmodified or (b) make a change to some component which he does not "own" (i.e. take full responsibility for) and then blame everybody else for the resultant problems.

Smalltalk (or for that matter Lisp) with some form of object/class-based ownership and protection mechanism would be very interesting indeed. But AFAIK, such a thing does not exist.

Kernel kerfuffle kiboshes Debian 12.3 release

MarkMLl

Re: Oh flippin' great...

Got it thanks.

66-1 = 67.

MarkMLl

Oh flippin' great...

Spotted the upgrade on Sunday (10th) evening, installed it yesterday (Monday 11th) morning and it's left me with kernel 6.1.66-1 (2023-12-09) and no indication of any available replacement.

However /etc/debian_version tells me I'm on 12.4, which is presumably OK.

Still better than my memories of OS/2...

MarkMLl

Revival of Medley/Interlisp: Elegant weapon for a more civilized age sharpened up again

MarkMLl

Re: You /what/ Liam?

So in Smalltalk you have left-to-right, in APL you have right-to-left, and Lisp does its own thing. Then you've got RPN as used by Forth, HP et al.

All of which has left me with the definite feeling that we owe an enormous amount to ALGOL as originally defined, which despite the fact that few people at the time knew how to write a compiler did its best to stick to normal algebraic evaluation.

MarkMLl

Re: Downvote magnet ...

> But thats standard when writting any sort of code... comment the hell out of it.

No. Read the Blue Book and you will see that the language- not support software added later- contained what were basically project-management facilities.

MarkMLl

Re: (Potentially-) Evil Programming Constructs

"Untagged variant records" in more recent Wirth-style parlance. A lot of machines were like that, and while I've not gone into that facet of history in detail my suspicion is that the original intention of types as conceived by Hoare/Wirth was to handle those, with extension to more complex data structures etc. following.

MarkMLl

Re: Excellent Work !

Burroughs ALGOL-based mainframe emulation: https://retro-b5500.blogspot.com/

MarkMLl

Re: Excellent Work !

George 3: https://en.wikipedia.org/wiki/GEORGE_(operating_system)#Emulation https://www.rs-online.com/designspark/running-george-3-on-a-raspberry-pi

plus many others.

MarkMLl

Re: Downvote magnet ...

But one of the most significant things about Smalltalk as described in the "Blue Book" was that it went to significant lengths to encourage commenting etc. to make large-scale programming manageable.

Kay, unlike many who attempt to design a language, was intimately familiar with the state of the art: warts and all.

MarkMLl

Re: You /what/ Liam?

I'm disappointed that your editorial staff aren't sufficiently on the ball to have picked up the lisp/list inconsistency.

The more time I spend with software projects' excuse for documentation, the more I respect the rigour that the Ada community attempted.

I've dug around the history a bit, and in actual fact the Strawman requirements did mention LISP's indeterminate-length lists as something useful to have. Other than that it was ignored, and in the end they explicitly based the language on Pascal (i.e. as distinct from ALGOL-60 etc.).

And then it appears that the DoD's HOLWG actually hired Dijkstra, Hoare and Wirth as consultants: they being the prime movers behind the "Minority Report" which pointed out flaws in ALGOL-68 as first defined.

All of which could be very easily interpreted as an aggressive dismissal of a whole bunch of ivory tower academics including John McCarthy and van Wijngaarden's coterie.

But the bottom line is that neither Ada nor ALGOL-68 had an easy/cheap implementation which allowed an engineer or project specifier to take a copy home or run it standalone on his office workstation. And that's probably why C (and, in its day, Turbo Pascal etc.) outsold it something like 5,000-to-1. Hell, I've seen more copies of LISP sold than Ada...

MarkMLl
Coat

You /what/ Liam?

Interlisp or Interlist? Parehelion... who the heck were they?

And if LISP really was so great, how come it was almost totally ignored when the DoD was looking for a foundation for Ada?

MarkMLl

Polishing off a printer with a flourish revealed not to be best practice

MarkMLl

How the Hell did he manage that???

The TC500 decoders were sealed units with the electromagnetic clutches and brakes running in an oil bath.

If a rag got into it... well, all I can say is that https://xkcd.com/463/ applies.

System76 teases features coming in homegrown Rust-based desktop COSMIC

MarkMLl

Yes, yes, yes... but when are we going to get a half-decent Rust-based RAD with IDE, form builder, integrated debugging etc. to rival NeXTSTEP, Delphi or FPC/Lazarus?

openSUSE makes baseline CPU requirements a little friendlier than feared

MarkMLl

Re: I don't see an explicit "version2" or "architecture version number" there...

Hmm. So /very/ approximately:

v2 sse4_2 (the psABI level also requires ssse3, sse4_1, popcnt and cx16)

v3 avx2 (plus avx, bmi1/2, fma and movbe)

v4 avx512 (specifically the F, BW, CD, DQ and VL subsets)

...with a lot of scope for chip variants to have some enhancement of (in particular) AVX-512 to suit the jeu du jour.