* Posts by MarkMLl

107 publicly visible posts • joined 29 Jun 2019


Public developer spats put bcachefs at risk in Linux

MarkMLl

Not quite how I read it.

"...executive summary might be that the Rust folks are proposing changing how the C parts of the kernel work slightly in pursuit of cleaner, more reliable code."

I'm not sure that's entirely accurate. Having read the transcripts shortly after that particular spat broke out, the complaint was that there were a lot of poorly-documented edge-cases in an inconsistent API, which demanded poorly-documented use practices to deliver reliable results.

The proposal was not for the API to be changed to suit Rust. It was for the API to be cleaned up and- among other things- documented, which would have obvious advantages for all.

The existing maintainer reacted like a true prima donna, which was not to his credit and reflected badly on the entire community.


UK farm ministry offers £27M to support legacy systems for another three years

MarkMLl

OK, but can't you at least tell us what these "legacy systems" are?

Are they frontended by PC desktop apps?

Is the backend on a single server site or is it (in principle) distributed and fault-(in)tolerant?

What's the backend: PC-based servers or Sun Enterprise? Or SGI because of the mapping involvement? Or (Lord help us) S/360 running on three layers of emulation?

Help! My mouse climbed a wall and now it doesn't work right

MarkMLl

That was such a common occurrence that one learned to query any order for a 9-pin serial or PS/2 extension cable.

Of course, any user with the original MS mouse that plugged into a custom ISA card with a proprietary connector didn't even get that far.

Linux kernel 4.14 gets a life extension, thanks to OpenELA

MarkMLl

So what can only be installed on v4 kernels?

I've not gone checking version numbers, but, as a specific example, HP (nee Compaq) used to distribute RAID etc. monitoring software in linkable form. All it would take would be one important user of old machines, like the USA's Veterans Administration or some other organ of government.

Alternatively, various American organisations were big users of 32-bit SPARC systems: their manufacturer doesn't want to maintain them and they sure as Hell aren't going to get much support from the Open Source community after they cut off support of the GNU Utilities porting effort.

And for that matter, in around the v4 era somebody developed a nice way of patching a running kernel (which might not work on later versions). It was bought by Oracle...

Judge demands social media sites prove they didn't help radicalize mass shooter

MarkMLl

Re: IANAL

You can't connect to Discord without enabling Javascript in your browser, i.e. messages are filtered through Discord's client-side code.

It's down to Discord to /prove/ that they don't, ever, filter messages in any way.

Same applies to everybody else, but if they can't do that then they're not a "common carrier" and are liable for content.

MarkMLl

Re: IANAL

In short, server + proprietary_protocol + in-browser_custom_client = product.

I don't like to sound like an open-source zealot, but if somebody- and this includes the likes of Discord- is filtering messages through their custom code, and by use of a proprietary protocol are at least potentially preventing people from getting at the /actual/ postings on the server, then they're going to have a Hell of a hard time demonstrating that they're a "common carrier".

A path out of bloat: A Linux built for VMs

MarkMLl

Re: BTDT

I was either fed dud information on the forum by the developers, or the situation has improved over the last few years.

If ARCH=um, then it's possible to select either a 64- or 32-bit guest kernel build, either from the menu system or by using SUBARCH=i386: the result is the same.

On an x86-64 host, I can build either a 64- or 32-bit kernel+modules set which appears to be consistent.

However the overall setup and boot have probably grown in complexity over the years: it used to be that you could fire up e.g. a UML kernel and immediately load from ISO media (e.g. Slackware from a CD or disc image), but despite spending a day tinkering I wasn't able to get anything working.

Revisiting it after a few days, I find that the best documentation comes from Debian's "man linux.uml", which is an extended derivative of https://docs.kernel.org/next/virt/uml/user_mode_linux_howto_v2.html (not very well indexed or linked to).

Following that carefully, I was able to boot UML and get a root login, which leaves me reasonably confident that I could rebuild an arbitrary kernel version.

I've not tried comparing operation speed with Qemu/KVM or Docker etc., but this is probably something which is still useful if one needs a specific kernel version, not just a particular set of libraries etc. as one gets with most container environments.

MarkMLl

BTDT

> How hard can you cut down Linux if you know it will never run on bare metal? Further than any distro vendor we know of has tried to go.

Demonstrated adequately by User Mode Linux (UML), which has been a standard build target for a considerable number of years.

The major thing that it couldn't do, when I last looked, was run a 32-bit guest on a 64-bit host. Which is a great pity, since if a guest is only going to be used for work which is not memory-intensive there's no real reason for it to carry around the burden of 64-bit pointers etc.
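To put a rough number on that burden, here's a trivial C illustration (the struct and field names are mine, purely for demonstration):

    #include <stdio.h>

    /* On an LP64 build every pointer costs 8 bytes; on ILP32 only 4.
       For pointer-heavy structures that can double the per-node cost. */
    struct node {
        struct node *next;
        int          payload;
    };

    int main(void)
    {
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

Built 64-bit that reports 8 and 16; built 32-bit it reports 4 and 8, i.e. half the memory for exactly the same linked list.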

Starting over: Rebooting the OS stack for fun and profit

MarkMLl

Re: No. Sorry, just /no/.

> B) this is not something for the language to handle, it would be for the system

You're overlooking the fact that in Smalltalk and Lisp environments there is no significant distinction between the language and the OS.

MarkMLl

Re: Winding paths of history...

"I always thought Multics approach of making everything a segment was an idea worth doing properly - 64kb segments are just plain silly but 2^64 byte segments could fly."

The fundamental problem was that even after x86 segments were expanded to a maximum of 4GB each, the descriptor tables- GDT and LDT- were still each restricted to 64KB, i.e. 8K eight-byte entries. If one wrote an OS as Intel (originally) suggested, and if one actually had more than one descriptor per segment (e.g. one to allow code to be executed, and another to allow a debugger to examine it) that 8K was ridiculously inadequate.
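For anyone who wants the arithmetic spelled out, a minimal C sketch- the 8-byte size is the architecture's, the field names are mine:

    #include <stdint.h>
    #include <stdio.h>

    /* The classic x86 segment descriptor is 8 bytes, a size fixed by
       the architecture, so a descriptor table capped at 64KB can never
       hold more than 65536 / 8 = 8192 entries. */
    struct seg_descriptor {
        uint16_t limit_low;        /* segment limit, bits 0-15  */
        uint16_t base_low;         /* base address, bits 0-15   */
        uint8_t  base_mid;         /* base address, bits 16-23  */
        uint8_t  access;           /* type, DPL, present bit    */
        uint8_t  limit_high_flags; /* limit bits 16-19, G/D flags */
        uint8_t  base_high;        /* base address, bits 24-31  */
    };

    int main(void)
    {
        printf("descriptor size:      %zu bytes\n",
               sizeof(struct seg_descriptor));
        printf("max GDT/LDT entries:  %zu\n",
               (size_t)65536 / sizeof(struct seg_descriptor)); /* 8192 */
        return 0;
    }

8192 descriptors shared between every segment in the system, with several descriptors per segment, runs out very quickly indeed.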

MarkMLl

No. Sorry, just /no/.

First: "You load a program, then once in memory and running, you can then load a file into it to work on it. But you must remember to save regularly or you might lose your work. If your computer crashes it has to reboot and reload the OS. This resets it and whatever you had open is lost."

Look me in the eye, Liam, and tell me that you've never had a program crash for no obvious reason. If you don't have a copy of your magnum opus on secondary storage then the only thing you can do is revert to the same memory image: which is likely to be inconsistent, hence your work is unsalvageable.

Second, if Smalltalk and Lisp are so great 'ow come the DoD considered neither as a foundation for Ada? (In the interest of full transparency, I wrote my MPhil on Smalltalk and related matters: I wot of which I speak).

Third, environments such as Smalltalk and Lisp suffer from not having per-object ownership and access rights. One of the early PARC books (can't remember whether it's the Blue or the Green) makes the point that if the system owner decides to rewrite the definition of (all) arrays so that they are zero- rather than one-based... well, be it on his own head. And reviewers of the earliest published version in the Green Book make the point that while great fun such a thing is virtually unsupportable since it's impossible to work out what state a user's system image is in: the one thing you can't do is say "reboot and try it again".

Look, I'm a firm believer in "it's my PC, and I'll pry if I want to" but without internal protection Smalltalk (and Lisp) are basically unusable: something I've pointed out repeatedly to Liam over the last ten years or so.

In addition to that, "If you find that you need different languages for your kernel and your init system and your system scripts and your end-user apps and your customisation tools, that indicates that there's something wrong with the language that you started in." Sorry, I disagree: the languages used at the lowest levels of implementation /have/ to be able to do things that are anathema at the application level, the canonical example being handling hardware-level pointers (i.e. physical memory addresses) and [horror] performing arithmetic on them.

In addition, the lowest levels of system implementation usually (a) include the code that implements at least one heap and (b) carefully avoid using such things because the risk of indeterminacy is too great. By way of contrast, the highest levels usually /demand/ dynamic storage on the heap, plus either reference counting or garbage collection. And $DEITY help programmers with a bit of app-level experience who think they are competent to write low-level microcontroller code, and that it's appropriate to have garbage-collected strings in critical code...
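By way of illustration, a minimal sketch of the sort of thing I mean- a hypothetical memory-mapped UART, with the physical address, register offsets and status bit all made up:

    #include <stdint.h>

    /* Sketch only: raw pointer arithmetic on a made-up physical
       address, routine at this level and anathema in application code. */
    #define UART_BASE   ((volatile uint8_t *)0x10000000u)
    #define UART_DATA   0   /* transmit/receive register (illustrative) */
    #define UART_STATUS 5   /* line status register (illustrative) */

    void uart_putc(uint8_t c)
    {
        /* busy-wait until the (made-up) transmit-ready bit is set */
        while ((*(UART_BASE + UART_STATUS) & 0x20u) == 0)
            ;
        *(UART_BASE + UART_DATA) = c;
    }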

And so on...

City council megaproject mulls ditching Oracle after budget balloons to £131M

MarkMLl

Re: So whose bright idea was it in the first place?

> It would be a blind tender initially and Oracle will have ticked all the boxes.

Would it be excessively cynical to suggest that Oracle have specialists to assist potential clients to prepare request-to-tender documents?

MarkMLl

Re: What would it cost ...

Contract out to... ICL? Sorry, Fujitsu.

The idea of having a preferred supplier for HM's DP Systems is not inherently bad, particularly if they genuinely have proper staff training schemes etc. (i.e. like Marconi had when they were the preferred supplier for certain types of telephone system).

But it needs politicians with some minimal level of cluefulness to get it to work.

MarkMLl

So whose bright idea was it in the first place?

I've seen comment in the past that Oracle tends to pitch its products directly to prospective customers' board members, cutting out the specialists who really know what's going on.

Has anybody looked carefully at the procurement process in the case of Birmingham?

I don't mean whether the tendering process was done by the book, I mean who started pushing the idea that relying on Oracle was a fundamentally sound bet?

The successor to Research Unix was Plan 9 from Bell Labs

MarkMLl

Re: So...

But be realistic: if you're the sort of person who still considers Slackware then you got a lot of that good attitude from your parents.

A thoroughly above-average family :-)

(The main reason I left Slackware for Debian was support for SPARC etc.)

MarkMLl

Re: OK, but is "multiuser" relevant any more?

But the Samba backend process runs- AIUI- as its own user (i.e. "samba" in group "samba"). It assumes responsibility for checking who is trying to access a file (i.e. which user on a client system) but once it's done that it still uses samba:samba at the library and kernel level.

I think I'd prefer that protection to be enforced at a lower level, in the same way that cross-system requests are enforced on many capability-based distributed OSes.

MarkMLl

Re: So...

> I believe Rob Pike studied under Wirth.

There's definitely a "citation required" tag on that. Wirth's connection with the ALGOL lineage- including Ada- is unassailable, and most modern languages seem at least tolerant of his ideas regarding strong typing. And one of the first APL implementations was done under his supervision at Stanford.

Robert Griesemer provides a link between Wirth and Google including Go, but so far I've seen no definite confirmation that Wirth had a personal connection with any of the Bell researchers- or for that matter anybody on the USA's East Coast i.e. IBM and the rest.

> Plan 9 is a successor to a multiuser OS with strong security, authentication, etc.

OK, but is "multiuser" relevant any more? Multi-*tasking* is definitely relevant, as is strong security between processes which might be briefly processing data on behalf of a known or anonymous user. Most desktop systems are now strictly single-user, and anything larger seems to have reverted to the 1970s mainframe model: a frontend would enqueue a query (etc.) on behalf of a user, and a backend would dequeue and action it with appropriate access permissions.

Focusing on "multiuser" as the prime requirement and running backends with their own fixed identity might actually be undesirable: the industry needs a good hardware-enforced security model applicable primarily to daemons, and if it turns out that that can also support a traditional multiuser architecture that's just icing on the cake.

Forgetting the history of Unix is coding us into a corner

MarkMLl

Re: Only Wayland?

While I agree, things like a window's position on the screen or its Z-order should be reflected by metadata in the directory structure rather than by file content.

That sort of thing was completely overlooked in UNIX's design, which is why there were attempts like Alpha-Windows on character terminals and why window position and furniture are separate from window content on X11.

But even if we ignore the positioning aspect, we have to ask ourselves how best to represent process output as a file. Is it going to be a stream with embedded positioning commands etc.? Or is it going to be a sequence of rows each with a fixed number of columns representing what is actually presented to a user?

The first of those might be appropriate if the output device is a glass teletype, but once one starts considering any sort of smart terminal or form-based display one has to wonder whether IBM were actually right when they defined that dataset metadata included block and record sizes.
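To make the two representations concrete, a quick C sketch- sizes and field names are purely illustrative:

    /* Two candidate "file" representations of a terminal's display. */
    #define ROWS 24
    #define COLS 80

    /* Representation 1: a byte stream with in-band positioning,
       fine for a glass teletype. */
    typedef const char *screen_stream;   /* e.g. "\x1b[2;5HHello" */

    /* Representation 2: fixed records, one per row, describing what is
       actually presented to the user- much closer to IBM's
       block/record-size metadata. */
    struct cell {
        char          ch;     /* character shown at this position */
        unsigned char attrs;  /* bold, reverse, colour... */
    };

    struct screen_grid {
        struct cell grid[ROWS][COLS];
    };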

MarkMLl

Re: What is unix?

I broadly agree, but that needs closer examination.

"Back in the day", Digital Research sold CCP/M (aka MDOS etc.) with the claim that it could do anything that unix could do. However what that actually boiled down to was that /some/ programs originally written for unix could be recompiled for CCP/M, since they only used standard library facilities which could be emulated adequately by a decent compiler: Lattice C springs to mind.

fork(), as a system call, has well-defined semantics to which anything that isn't "a unix" struggles to conform.

fork(), as a library routine, is more tolerant provided that you don't start looking too closely at the memory semantics.
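The memory semantics in question are easy to demonstrate: after a true fork() the child holds a logically independent copy of the entire address space, which is exactly what the imitations struggled to fake.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int counter = 1;
        pid_t pid = fork();        /* duplicate the whole address space */

        if (pid == 0) {            /* child: writes only its own copy */
            counter = 99;
            printf("child sees  %d\n", counter);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent sees %d\n", counter);  /* still 1: the copies
                                                 diverged at the fork */
        return 0;
    }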

Any OS which claims to provide a good imitation of something more mature becomes a support nightmare, as more and more people uncover marginal behaviour. OTOH, if its emulation survives for more than a few years it provides an incentive for people to write and test their code for at least some degree of cross-platform portability.

DR's "Better DOS than DOS (and as good as any unix)" phase lasted perhaps five years. OS/2's "Better Windows than Windows (and as good as any unix)" for perhaps the same sort of time. Ditto for Linux's claims to have a subsystem that would run other unixes' binaries. But all of them were strong when running code specially written for their native APIs.

Which I suppose means that the days of Linux pretending to be "a unix" are actually long past, and that for the last five years or so almost everybody has been more inclined to treat various distreaux as "a Linux": with, like it or not, systemd, Wayland and the rest.

MarkMLl

Re: Linux use to be Unix

Which takes us back to "stuff that Poettering hasn't touched."

I'm not necessarily saying that all his ideas are bad, but he has repeatedly demonstrated such reckless disregard for the various layers which conventionally make up "A Unix"- kernel, libraries, daemons, application-level APIs and so on- that de-facto Linux is now more defined by systemd than it is by the strict Linux kernel.

MarkMLl

Re: Not *everything* is a file

And /you/ try netstat -an, which includes all of the ports your system exposes, none of which appear directly in /dev.

MarkMLl

Re: What is unix anyway?

Incidentally, if anybody is as perplexed by IBM's z/Wotzit range as I am, I can recommend the PDF at

https://www.ksp.kit.edu/site/books/m/10.5445/KSP/1000034624/

as a particularly good read. It goes into a lot of detail about IBM's mainframes up to the early 2010s, and does so in the context of "industry dominant" terminology: i.e. it describes a sysplex in the wider context of clustered systems and so on.

MarkMLl

Re: What is unix anyway?

That doesn't sound too difficult, and could basically be achieved by using different code and data search paths depending on the OS requirement of a particular application program.

The sticking point would be libraries and support programs that came with Windows, and couldn't- at least legitimately- be run on top of some other foundation OS.

However it does remind me of something that happened to a user on CIX (anybody remember CIX?) a considerable number of years ago, when WINE was much less mature than it is today. He came across something that he strongly suspected carried a (non-bootsector) virus, and idly ran it: it infected files in his application search path, which meant that malware-carrying files could be referenced directly by a carelessly-entered shell command.

Of course, Windows has improved enormously since that happened, albeit more by the inclusion of virus scanners and mandatory code signing than by inherent good design. Which makes me wonder how much of that "medicine" would work properly if the foundation OS was something other than Windows.

MarkMLl

Re: Not *everything* is a file

I think the original idea was that everything was represented by a name somewhere in the tree of mounted filesystems, and was manipulated by one of a small number of system calls.

Unfortunately, things like sockets on top of IP don't have names. Network devices don't have names (relative to / ). USB devices don't have names... the list goes on, although in fairness the /state/ of many of those is often accessible via /sys or /proc (with, in the case of Linux, a layout decided entirely by the whim of the kernel module author).

And again unfortunately, there's a vast number of devices which have their own APIs funneled through ioctl() or accessible only via a private kernel-level API.

So in short: everything is a file (provided that it's a file).
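Easily demonstrated on Linux, assuming /proc is mounted: ask the kernel what a socket descriptor "is" and you get a pseudo-name, not a path anywhere under /.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        char path[64], target[128];

        snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
        ssize_t n = readlink(path, target, sizeof target - 1);
        if (n >= 0) {
            target[n] = '\0';
            /* prints something like "/proc/self/fd/3 -> socket:[123456]" */
            printf("%s -> %s\n", path, target);
        }
        close(fd);
        return 0;
    }

A name of sorts, but not one that lives anywhere in the tree of mounted filesystems.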

The challenge for a putative C21st UNIX replacement would be generalising all possible devices to have a consistent minimal API, generalising all streams interacting with a device to have a consistent minimal API, and so on. And what I've seen of the industry over the last 50ish years doesn't encourage me to hold my breath waiting.

MarkMLl

What is unix anyway?

"That's why several versions of IBM z/OS are on the Open Group List. That seems strange because they are not Unix-like at all."

Which has always struck me as very, very odd. It's like saying that OS/2 or Windows "is A Unix" by virtue of their compatibility layer. Or that a proprietary hypervisor becomes "A Unix" because it can run UNIX in a VM.

Or- and purists will find this really contentious- Linux becomes "A Unix" by virtue of running UNIX in a VM.

I have enormous respect for the classic IBM mainframe designs, and for the architecture of OS: which was astounding once they finally got it working, a worthy challenger to established products such as Burroughs's MCP. But many of its strengths that, for example, allow seamless process migration between members of a cluster, are fundamentally incompatible with UNIX: claiming z/OS is "A Unix" does neither justice.

Drowning in code: The ever-growing problem of ever-growing codebases

MarkMLl

Re: m4 macro processor

However, the real strength of a macro stage is when it can determine the type of its parameters: /this/ is an lvalue to which a 64-bit real can be assigned, /that/ is an expression that evaluates to a real, and so on.
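C11's _Generic gives a flavour of what I mean at the expression level, although it's a far cry from a true macro frontend:

    #include <stdio.h>

    /* C11 _Generic: the closest standard C gets to a "macro" that can
       see the type of its argument and dispatch accordingly. */
    #define describe(x) _Generic((x), \
        double:  "a 64-bit real",     \
        float:   "a 32-bit real",     \
        int:     "an integer",        \
        default: "something else")

    int main(void)
    {
        printf("3.14  is %s\n", describe(3.14));  /* a 64-bit real */
        printf("3.14f is %s\n", describe(3.14f)); /* a 32-bit real */
        printf("42    is %s\n", describe(42));    /* an integer */
        return 0;
    }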

Lack of a decent frontend which can handle that sort of thing is one of Pascal's many flaws. However when combined with a decent RAD IDE (i.e. Delphi or Lazarus) it still has quite a lot going for it: but I'm no longer sure that I'd recommend it to somebody seeking a language for in-depth study.

MarkMLl

Re: "Late in Wirth's career, he became a passionate advocate of small software."

"everything that was necessary and nothing that was not" is fine, /provided/ that the underlying platform (i.e. language, OS or whatever) is extensible.

For a language to not have a decent macro-expanding frontend is a major omission (and for the language's custodians to point to some favoured facility that can do many, but not all, of the same tricks is inexcusable aggression).

For an OS not to be extensible by device drivers is a major omission (and the same applies to loadable network protocol plugins etc.).

The devil take any developer that assumes that he understands all possible use cases to which his product may be applied, and eschews conventional wisdom and expectations.

MarkMLl

Simplify and add lightness...

I'm all for eliminating cruft and keeping systems- hardware, OS and apps- as small as possible.

The problem is, however, that if you simplify a computer (hardware and OS) too far, then it will no longer be able to support the mix of functionality that almost everybody insists is essential.

You can obviously get round that to some extent by having more physically-separated computers. But as you do that the number of communications links goes up roughly as the square of the number of machines (n boxes can need up to n(n-1)/2 point-to-point links), followed by the number of opportunities for race conditions and inconsistencies.

As should be obvious, in retrospect, to absolutely anybody who has had anything to do with a system such as the UK Post Office's "Horizon": which remains in service with a very worried population of captive users.

MarkMLl

Re: A Few Ironies

Yes, but there /is/ a fundamental problem here: if you revert to a simpler OS then you're going to have to throw stuff away. And nobody can agree what's to go.

MarkMLl

Re: UI standardised ?

Agreed. A major issue is that now that programs don't- as a rule- have menus, the UI has lost the ability to "pop up" a hint telling the user what he's about to do.

RIP John Walker, software and hardware hacker extraordinaire

MarkMLl

Xanadu, and other SF

One of my favourite short stories is Walker's "We'll Return, After This Message" which he dates December 1989 https://www.fourmilab.ch/documents/sftriple/gpic.html

It includes the words "Back in '97 Xanadu still wasn't finished...", which suggest that despite his optimistic description in 1988 he very rapidly started having second thoughts.

Clarke in "Odyssey Two" (1982) seems to have swallowed the underlying idea, with just about everything (including electricity bills) stored in a handful of DP vaults scattered around the planet: strongly suggested to be a shared resource. And I think it was in "The Fountains of Paradise" (1979) where he implied that the pinnacle of computer proficiency was being able to generate imaginative SQL one-liners.

Since then we've had the Semantic Web, not to mention indisputable duds like Chandler.

And everybody's gone back to the DP bureau model, where they hire time from Google or Facebook without looking too closely at the fine print.

Raspberry Pi Pico cracks BitLocker in under a minute

MarkMLl

It took the /Pico/ less than a minute. The point being made- and I think it's fair- is that it didn't involve hours or days of computation to decrypt a key, or some substantial amount of memory to build up rainbow tables.

MarkMLl

Hardly surprising...

Multiple people have been looking at making use of the LPC bus- which is what accesses the TPM when it's not integrated into the CPU- for a couple of years, some of them using the Pico.

https://hackaday.com/2023/06/13/bios-post-card-built-using-raspberry-pi-pico/

So the real issue here is that BitLocker uses the key from the TPM without combining it with "something /else/ you have" or "something /else/ you know": i.e. a swipeable card or a passphrase to be entered.

None of which would help in the "Dread Pirate Roberts" scenario where both he and his laptop were taken live, or... (obligatory XKCD) https://xkcd.com/538/

Fairberry project brings a hardware keyboard to the Fairphone

MarkMLl

Re: Not the way I would do this

Search for "Blackberry Q10 keyboard" on AliExpress: there's a lot around, either as unused spares or salvaged from e-waste.

Designing something like a keyboard is difficult, and refining it to the point where it is as well-regarded as the Blackberries' is well-nigh impossible. Not to mention MOQs for the specialist parts which are probably in the thousands.

MarkMLl

Fiddly bits

Apropos components, it appears to be only the Hirose connector which has to be sourced and soldered by the purchaser, and it probably wouldn't take many people clubbing together (order of 100?) to get JLCPCB to add it to their list in short order.

And they do have competitors...

More of an issue is the final housing. 3D printing- /particularly/ if it's FDM (Fused Deposition Modelling)- benefits enormously from manual finishing which is not a million miles removed from high-end car bodywork. Filler-sand-repeat several times, then paint: relatively few people have the patience to do a good job of it.

Tech billionaires ask Californians to give new utopian city their blessing

MarkMLl

Same old...

California, not to mention the rest of the USA, seems totally unable to get any form of "improvement" right:

https://www.nytimes.com/2022/10/09/us/california-high-speed-rail-politics.html

RIP: Software design pioneer and Pascal creator Niklaus Wirth

MarkMLl

Re: dangling else problem

Yes, which is my point. Have you actually read the ALGOL W source with an eye to working out how to make a significant change in the syntax?

MarkMLl

Re: Wirth and John McCarthy

According to Waychoff ** IBM had made a significant donation to Stanford in the early 1960s (i.e. pre-McCarthy) which they'd used to build a computer centre... and then put a Burroughs mainframe in it. Also according to the same narrative McCarthy was unhappy that Burroughs wouldn't give him a way round their memory protection so that he could take over memory management with LISP. It's very easy to interpret that as McCarthy wanting to get rid of Burroughs, with IBM prepared to bend over backwards to keep him sweet. (I'd note here that SLAC kept their Burroughs, and it contributed to the design of the Starwars-era S1 supercomputer which had an OS implemented in an extended Pascal.)

There was, without any doubt whatsoever, an enormous amount going on behind the scenes which might never be disentangled. We know that Wirth and Dijkstra were friends, we know that Dijkstra was very public in his disdain for IBM, we know that Wirth spent at least one sabbatical on the US West Coast but there's little indication that he visited Stanford.

So while it's very common to see accounts of there being a Europe vs USA battle centred around ALGOL/Pascal vs FORTRAN/COBOL, it might actually have been Dijkstra/Wirth vs IBM and, at a deeper level, IBM doing everything it could to disparage "The BUNCH": i.e. Burroughs et al.

** http://archive.computerhistory.org/resources/text/Knuth_Don_X4100/PDF_index/k-8-pdf/k-8-u2779-B5000-People.pdf which is Donald Knuth's copy, hence the curious annotation on the front page (which has never been elaborated, including in Knuth's oral history where he has nothing but praise for Burroughs). I believe that copy came from Bill Parker who was a Burroughs FE in the UK, and had trained on the next generation Burroughs mainframes in November '79; I suspect it got to Knuth via Warwick University which was another Burroughs site despite its proximity to an IBM campus.

MarkMLl

Re: dangling else problem

I am fully aware of what it means. The problem persisted in Pascal implementations: it is more than just a specification issue.

It was fixed in Modula-2, which Wirth designed at his relative leisure after- I believe- switching to recursive descent which is easier to maintain.

It was fixed in Ada, where Wirth- and other authors of the ALGOL-68 Minority Report- served on the HOLWG.

But most importantly it was fixed in ALGOL-68 which required an explicit FI etc., hence Wirth was aware that it was a significant issue before embarking on Pascal.

MarkMLl

Wirth and ALGOL-68

The thing that has long surprised me is the complete lack of support that Wirth appeared to get from his former boss at Stanford, John McCarthy. The quip about "knuthput" in the "Final Word" cited suggests that he felt that the entire Stanford department had become hostile to him.

Not having McCarthy's support, and probably correctly feeling that he was too junior an academic to stand up to somebody of McCarthy's stature, he resigned from the committee in May 1968 ** and spent the next few months modifying his ALGOL W compiler to implement a new language for (I speculate) the Autumn postgrad intake to work with. As it was, the delay before an intelligible ALGOL-68 specification was available probably meant that he needn't have rushed.

At my most charitable, I think that Wirth was too inexperienced to realise that he had to cultivate his former colleagues and explain what he was doing and why.

Somewhat less charitably, accounts elsewhere suggest that McCarthy had lost enthusiasm for ALGOL by about 1965, when he had Stanford's Burroughs (i.e. ALGOL-based) mainframe replaced by an IBM.

Perhaps cruelly, there's even a possibility that McCarthy intentionally sabotaged ALGOL-68 to remove a threat to LISP.

** https://www.tomandmaria.com/Tom/Writing/DijkstrasCrisis_LeidenDRAFT.pdf

MarkMLl

Re: dangling else problem

"fixed in later languages, including Pascal"

No it wasn't. Wirth was in a rush: he based his earliest Pascal compiler on ALGOL W, which used recursive ascent (rather than descent), and the changes required would have taken more time than he thought was available.

In practice, because ALGOL-68 was delayed, he could have slowed down a bit and done the job properly.
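For anybody who hasn't met it, the classic case looks like this in C, which inherited the same grammar ambiguity:

    #include <stdio.h>

    /* The dangling else: the compiler binds the else to the nearest
       unmatched if, whatever the indentation suggests. */
    void report(int a, int b)
    {
        if (a)
            if (b)
                puts("a and b");
        else                  /* actually binds to 'if (b)'... */
            puts("not a");    /* ...so this runs when a && !b */
    }

    int main(void)
    {
        report(1, 0);   /* prints "not a", counter-intuitively */
        return 0;
    }

ALGOL-68's mandatory FI made the construct impossible to write ambiguously, and Modula-2's mandatory END did the same.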

Apropos the semicolon, being a /separator/ rather than /terminator/ it couldn't appear before an ELSE (and some Pascal implementations were picky if it appeared before e.g. UNTIL).

But after various people- UCSD, Borland- had "improved" the language, we've ended up with a mixture of structures which handle single statements (e.g. if-then-else) and structures which require END (try-finally-end).

The comparative lack of success of Modula-2, which tidied this stuff up, is unfortunate. But Wirth no longer had Kathleen Jensen to help him make it intelligible.

War of the workstations: How the lowest bidders shaped today's tech landscape

MarkMLl

Re: Good article, if a bit too wide ranging

I agree: so wide ranging it makes comment difficult and risks having somebody who has good reason to disagree with one point say things based on an imperfect understanding of others.

Having said that, I think the priesthoods that surrounded the major mainframe architectures were so intent on protecting their precincts by means of arcane incantations that their successors really hadn't got the faintest idea what they were talking about.

It took me years to find a coherent explanation of what a "parallel sysplex" is, and how it compares with an SMP system, or NUMA, or a cluster.

MarkMLl

Re: Sorry Liam, Not Even Wrong...

"I'd heard his reason for leaving MS was his insistence on keeping the GUI out of the kernel vs. Gates' insistence it run in privileged mode for visual performance reasons"

I'd say that history has proven Gates wrong. MS was very much into "let's improve the user experience", but the amount of productivity sacrificed (and crime perpetrated) as a result of that vastly outweighs the benefits.

MarkMLl

Re: Sorry Liam, Not Even Wrong...

There is no way the PC shipped with a sound chip: it had 1-bit output to a speaker and you had to hardcode almost everything.

MarkMLl

Re: Disagree on a few points

"Yes, a concept carried over by Commodore on the original PET and it's floppy disk unit. Maybe others did it to, but none I'm aware of."

In that case you've really not looked very hard. From the early 80s onwards anything using a SCSI bus had a significant amount of processing power in each host (i.e. peripheral), and the situation has continued to the present day. Haven't you ever stopped to wonder just what's inside that fancy printer on your USB bus, with a display that would put the original PC to shame?

MarkMLl

Re: Correctness and Simplicity

I believe that the original definition was from John McCarthy (MIT and later Stanford), who compared a certain type of programmer with a "ski bum" intent on shaving a fraction of a second off his downhill run time.

MarkMLl

Re: What Is A “Workstation”?

Interestingly (at least IMO), Sun's enterprise-grade machines and the Cray CS6400 were all based on an "Artificial Intelligence Workstation" architecture developed at Xerox PARC, which was then reworked by a collaborative team of Sun and Xerox engineers to use SPARC processors (the seminal papers have a couple of dozen authors from the two companies).

If nothing else, this suggests that physical size and the number of simultaneous users are barely relevant when it comes to firming up the definition: it mostly boils down to "what does this company see itself as selling?".

MarkMLl

Re: Disagree

The bigger problem with the dynamic languages is lack of internal protection: an ordinary user can make breaking changes to the underlying structure of the system.

I'm fully behind "It's my PC, and I'll pry if I want to". However I think it's indefensible for a user to (a) make some arbitrary change to a network-facing computer and then claim it's unmodified or (b) make a change to some component which he does not "own" (i.e. take full responsibility for) and then blame everybody else for the resultant problems.

Smalltalk (or for that matter Lisp) with some form of object/class-based ownership and protection mechanism would be very interesting indeed. But AFAIK, such a thing does not exist.

Kernel kerfuffle kiboshes Debian 12.3 release

MarkMLl

Re: Oh flippin' great...

Got it, thanks.

66-1 = 67.

