* Posts by MarkMLl

138 publicly visible posts • joined 29 Jun 2019


The elusive goal of Unix – or Linux – simplicity

MarkMLl

Re: "Advocacy..."

Most PDAs and featurephones had some sort of pointing device: a four-way rocker or similar, and multiple buttons.

What is really so difficult about the idea that screen areas that /can/ be clicked on should be visually distinct, and should have popup hints telling the user what he is about to do, at least until he is familiar with the UI?

Smartphone screens are- and always have been- capable of far more resolution than early Macs, Windows or GEM systems: and arguably outperform the Xerox workstations on which the WIMP metaphor started off. There's really no excuse for walking away from well-established design principles that allow anybody familiar with one family of systems to quickly adjust to some other.

When Windows started pushing a common user interface in the early '90s there was a lot of hot air asserting that it would "stifle innovation". I was no lover of Microsoft (having dealt with them commercially) but I certainly never promoted that viewpoint, and I think that history demonstrates that a system based on menus and a two- (or possibly three-) button pointing device is vastly superior to one in which every application program requires the operator to memorise an arcane list of key-combinations that grew larger with every release.

I remember a specialist wordprocessor called the Redactron, from a company led by a woman and promoted as freeing women from drudgery. But /boy/: despite having a fancy keyboard the poor girl operating it had to memorise a truly obscene number of shortcuts.

By all means: /allow/ keyboard shortcuts in the design philosophy. By all means, /allow/ fancy context-sensitive areas of the screen (multi-finger zoom etc.). But for people who do not use that particular piece of software dozens of times a day, provide the universally-understood menu system as a fallback.

However, I have to admit at this point that perhaps I am being reactionary, and perhaps I am advocating a "traditional" solution because I am unfamiliar with the design guides published by the various 'phone OS suppliers (Apple, Google) and the people who would like their app to look like it works on a 'phone even if running on the desktop or in a browser.

But I'm still left bothered by the suspicion that most smartphones are used only as terminals to Facebook and Twitter, so really those are the only UIs that the vast majority of users need to be familiar with.

MarkMLl

"Advocacy..."

Half the problems start when one or more of the people involved in the debate argues from a position of incomplete information and limited experience. They get worse when those people insist that their viewpoint is so important that they are entitled, or even obliged, to argue from a position of ignorance.

I remember a friend who worked at- IIRC- Red Hat, who started discussing user interfaces with some colleagues and was surprised that none of them had heard of the CUA (Common User Access) guidelines: which had of course been invented by their corporate parent IBM, and for a long time were broadly respected by Microsoft.

Now that obviously was ancient history, and he's observed in the past that many of the currently-active "professionals" are actually younger than Windows 95 and NT, which form the foundations of "Windows as we know it". But one would have thought that /somewhere/ there would be a record of hacks that had been found to work and idioms which were universally understood, and that this would be taught.

Which leads us to the demise of the organised WIMP user interface, in favour of the utter mess which- IMO- we have when considering Android etc. Why did it happen: was it simply because nobody recognised that the UI worked and was worth respecting? Was it simply because nobody had managed to /explain/ to the new generation that it worked? Was it because corporates felt that they couldn't attract new blood if they weren't given groundbreaking work to do?

And I wonder how many times this has happened in the past. Mainframes were swept away (until people realised that they needed centralised databases). Minis were swept away (until people realised that they needed some sort of multitasking and interprocess communications). Windows- at least in its classic form- has been swept away, and replaced by "UI du jour" either implemented directly on the screen or in a browser. Script-based unix startup has been swept away and replaced by systemd...

And every damn time, there are people eager to argue the notional advantages of the new system. Whether or not they understand the old one.

Liz Warren, Trump admin agree on something: Army should have right to repair

MarkMLl

> The rule of thumb is that you can expect the printed part to only have 30% of the strength of the same part in the same plastic, but injection molded.

That rule of thumb needs a further term, even for plastic. "You can expect the printed part to only have 30% of the strength of the same part to the same design in the same plastic, but injection molded."

Using 3D printing you can add strength members to a part in locations and at angles that would be completely unattainable in a (single-part) moulding, and fill relatively-unstressed parts of the body with a structural foam of appropriate (and varying) density.

So in practical terms you might be able to make something which fits into the same volume, but by reconfiguring reinforcing ribs etc. is not only of comparable strength but is also significantly lighter.

MarkMLl

Leaving aside the current crap about 3D printers, predatory behaviour from John Deere and the like: this is something that's been brewing for years, ever since the army brought certain facts to the attention of a Congressional committee.

Basically, they'd been put in a position where they were obliged to buy off-the-shelf kit, which invariably had a "return to supplier" repair policy.

The example given at the hearing was that of portable generators failing in IIRC Afghanistan, which the army's own mechanics were entirely capable of repairing but weren't allowed to.

Frankly, I'm surprised it's taken this long to sort itself out.

A new Lazarus arises – for the fourth time – for Pascal programming fans

MarkMLl

Re: I might give this a spin

Definitely worth investigating. You will find a high level of compatibility with the various "standard" Pascal implementations (ISO, Turbo, Delphi and so on), and it's fairly trouble-free particularly on Windows or Linux.

And the Lazarus IDE gives you very good debugging facilities.

There are mailing lists and a forum, which are generally helpful until some retard starts off yet another "Why isn't Pascal more popular when it's better than everything else?" thread.

My position is that it will probably "see me out". But I'm not entirely happy with the bloat that's crept into the language, or- as I've said somewhere above- the documentation situation.

MarkMLl

Re: No OOP in the new book?

> And if there is something OOP makes harder, is reading code, since you have to follow inheritance.

You clearly haven't tried reading code heavily based on generics :-(

MarkMLl

Raspberry Pi and ARM

Actually, FPC and Lazarus have supported all Raspberry Pis from v1 onwards, and before that other ARM platforms like the NSLU-2.

They've never supported IBM mainframes, and a number of others- notably the Itanic- haven't got very far, but apart from that their platform coverage is comprehensive.

MarkMLl

Re: No OOP in the new book?

> I need to mention here that the book does not go into Windows programming, OOP, software components, or the Lazarus GUI builder.

More seriously, it explicitly says that it is omitting all consideration of the RTL (FPC standard libraries) and FCL (FPC Class Libraries).

These are, by now, utterly immense, and suffer from patchy documentation (much machine-generated), sparse indexing, and members of the user community who tell newcomers that they should be using some facility that is completely unfindable unless you know where to look.

GCC 15 is close: COBOL and Itanium are in, but ALGOL is out

MarkMLl

Re: ALGOL-68 is out

> I have in front of me the 1973 "Algol Primer for Burroughs B6700" by de Souza and Manley of Otago University, and the I/O doesn't strike me as strange. There's no memory access as such; that definitely existed in ESPOL but I don't have an ESPOL manual.

Even at the application level, Burroughs ALGOL was a bit odd because they'd borrowed FORTRAN-style format notation. And code written by Burroughs themselves often grouped all the formatting in a table at the start of the deck, which could make it very difficult to follow.

There's ESPOL documentation at Bitsavers.

MarkMLl

Re: ALGOL-68 is out

> Boroughs algol was always a direct algol 60 descendent. Not a hint of '68 to it.

But that does not necessarily apply to whatever's currently being shipped by Unisys, which is why I checked.

MarkMLl

Re: ALGOL-68 is out

> I once had a job writing a translator from Burroughs Algol to PL/1. As I remember it (it was a long time ago) the Algol dialect was somewhat weird, with instructions for extracting bits from the OS' memory, and a very strange IO system.

That sounds suspiciously like the B5700, which had a completely separate set of "stream instructions" for character processing which- apart from anything else- bypassed all memory protection.

I take it that you're aware of Paul Kimpel's emulator.

MarkMLl

Re: ALGOL-68 is out

> I'm fairly sure it was never a real influence on Ada and that like, or they would have been the better for it.

The authors of the "Minority Report" on ALGOL-68- i.e. Wirth et al.- were contracted by the HOLWG to advise on the early stages of Ada development.

As such, even if they didn't like what ALGOL-68 matured into, they were well-informed on the issues that needed to be addressed by any successor to ALGOL-60.

MarkMLl

ALGOL-68 is out

ALGOL, i.e. -60, was never in: and the two are very different.

However, an interesting question is whether the Unisys (nee Burroughs) implementations of ALGOL hew to the -60 or the -68 language: I've got limited time to delve into the manuals right now.

I was, however, looking at a 1970ish Burroughs ALGOL compiler a couple of days ago and noticed that they'd not adopted a trivial tweak to the language to eliminate the notorious "dangling else" issue (see the sketch at the end of this post): and that tweak was published in the CACM as the "Revised Report on the Algorithmic Language ALGOL 60" in 1963.

So extrapolating from that unfortunate example I'd expect them to still be using ALGOL-60 "warts and all", and a very quick perusal of their ALGOL manual dated 2023 appears to confirm that.

https://public.support.unisys.com/aseries/docs/ClearPath-MCP-21.0/86000098-519/86000098-519.pdf 4-61 p259

with their systems programming language NEWP being structurally similar.

So while ALGOL-68 remains of broad interest to language history nerds, only its influence- on Ada, PL/SQL and so on- is really relevant.
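
For anyone who hasn't met it, the wart in question is easy to demonstrate. Here's a minimal Free Pascal sketch (Pascal, like unrevised ALGOL-60, pairs an else with the nearest unmatched then; the 1963 Revised Report instead forbade a bare conditional directly after then, forcing explicit bracketing):

  program DanglingElse;
  var
    a, b: Integer;
  begin
    a := 1; b := -1;
    { To which 'if' does the 'else' belong? The compiler pairs it with the
      nearest unmatched 'then', which is not always what the layout suggests. }
    if a > 0 then
      if b > 0 then
        WriteLn('both positive')
      else
        WriteLn('a positive, b not');  { pairs with 'if b > 0', not 'if a > 0' }
  end.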

Free95 claims to be a GPL 3 Windows clone, but it's giving vaporware vibes

MarkMLl

Text mode?

Appears to be hardcoded to drive VGA at 80x25.

If somebody just wanted text-mode Win-32, they could do far worse than revisit Sanos which could at least run its own toolset.

Type-safe C-killer Delphi hits 30, but a replacement has risen

MarkMLl

Re: The bottom line...

> Rust will just be VS Code and an ever growing choice of extensions, which is how it’s done now.

"slow compilation and debugging difficulty remain big challenges."

https://devclass.com/2025/02/18/state-of-rust-survey-2024-most-rust-developers-worry-about-the-future-of-the-language/, cited by El Reg 20th February.

However history shows that you can't just stick unrelated plugins into a tool, and expect them to work together. As a specific example it's not too difficult to have a form designer and it's not too difficult to have a debugger, but there are very few IDEs which have managed to integrate a debugger into a form's event handler: and most of those either came from or were heavily influenced by Borland.

MarkMLl

Re: What a trollish subheading.

Why should that be trollish? It's a statement of fact: it was basically Pascal which introduced the idea of a distinction between floats and integers, and moved the idea of records from being solely related to I/O to being a core part of the language.
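
A trivial Free Pascal sketch of both points, for anyone who hasn't seen them spelled out:

  program TypesDemo;
  type
    TPoint = record       { a record as a first-class data type, not an I/O layout }
      X, Y: Double;
    end;
  var
    Count: Integer;
    P: TPoint;
  begin
    Count := 10;
    P.X := Count / 4;     { '/' always yields a real result... }
    P.Y := 0.0;
    WriteLn(Count div 4); { ...while integer division is spelled 'div': prints 2 }
    WriteLn(P.X:0:2);     { prints 2.50 }
  end.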

I'm up to here ^ with arguing the merits of different languages, but the fact remains that most of the ideas that were introduced by Wirth in Pascal (and tightened up in Modula-2 etc.) have subsequently been embraced by most other languages- or their successors, or by things that claim to be the same language while in fact being unrecognisable (I'm looking at you, free-form FORTRAN).

We don't have to like Pascal or use it, but that doesn't mean we shouldn't respect it and recognise its place in history.

MarkMLl

Re: Compared to Delphi, FPC and Lazarus are a joke

> Compared to Delphi, FPC and Lazarus are a joke

>

> They are mostly stuck at the Delphi 7 era of many eons ago.

In that case make a one-time posting to the forum or mailing list, telling the developers what's gone wrong.

I use FPC/Lazarus, but that does not necessarily make me a shill for it.

MarkMLl

Re: A colleague of mine uses Delphi/Lazarus

> (/me not knowing Object Pascal or Lazarus): does Object Pascal and/or Lazarus not let you use the forward declaration of regular Pascal to work around that problem?

Yes.
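
For anyone wondering what that looks like in practice, a minimal Free Pascal sketch (the names are mine, purely for illustration):

  program ForwardDemo;
  type
    PNode = ^TNode;            { a pointer type may name TNode before it is defined }
    TNode = record
      Next: PNode;
      Data: Integer;
    end;

  procedure Consume(n: PNode); forward;  { body supplied further down }

  procedure Produce(n: PNode);
  begin
    Consume(n);                { legal: the forward declaration makes it known here }
  end;

  procedure Consume(n: PNode);
  begin
    WriteLn(n^.Data);
  end;

  var
    Node: TNode;
  begin
    Node.Next := nil;
    Node.Data := 42;
    Produce(@Node);            { prints 42 }
  end.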

MarkMLl

The bottom line...

...is that "more sexy" languages have nothing remotely comparable with the Lazarus IDE, which as well as including a form designer etc. has fully-integrated debugging.

Rust, in particular, is going to find that a very high bar to clear.

MarkMLl

Re: Not

> It did install, but after seeing a warning about a non-existent lazarus directory, I bailed and uninstalled.

That was probably the warning it gives you when it's about to set up a local configuration directory (~/.lazarus or similar) to hold the IDE's state.

Most people would consider that a courtesy...

However I would say that most users get it from the Lazarus repo rather than from Debian/Ubuntu, since that way you've got some level of confidence that you've got a version with bugfixes etc.

Legacy systems running UK's collector are taxing – in more ways than one

MarkMLl

Yes, but WHAT SORT of legacy systems? Are we talking about an ICL mainframe, something more recent by Fujitsu or ICL, multiple racks full of x86 servers...

Or is it unmaintainable because somebody doesn't want to touch the software any more (Solaris on SPARC anybody)?

'Maybe the problem is you' ... Linus Torvalds wades into Linux kernel Rust driver drama

MarkMLl

Re: Fair comment by Linus

> ...if the C API developers change their C API that breaks the Rust wrappers, the kernel will fail to compile because another component in the kernel - the Rust wrappers - are broken now and won't compile. Therefore either the C API developers will have to fix the Rust wrappers themselves to allow a full kernel compilation, or take steps to exclude a part of the kernel - the Rust wrappers...

If an API is changed in a way not detectable by the toolchain then any code not aware of that change will break.

The kernel community as a whole ought to be welcoming any methodical attempt to codify the API between subsystems: not necessarily in Rust, but in something testable. And they should definitely be welcoming any attempt to avoid the sort of edge cases that were aired when this stuff was discussed in the context of bcachefs.

Because as things stand I'd say that there's a real risk that somebody- probably Poettering- will fork the kernel in order to be able to favour Rust, and that any distro that is interested in commercial acceptance will use the one better able to withstand regulatory scrutiny.

How Windows got to version 3 – an illustrated history

MarkMLl

Re: Terminal network...

There's a considerable degree of internal support- "hooks"- in DOS from at least v3 onwards, as you can see from the data structures etc. in Ralf Brown's Interrupt List.

I can't speak for MS-DOS, but you could certainly get a networking layer from IBM easily and cheaply, and that gave you streams (?) and mailslots which were compatible with those later implemented by Windows for Workgroups and Workgroups for DOS: I've programmed them for interprocess communications, e.g. to write an SMS server.

What DOS lacked was server capability, for which you originally had to go to some poorly-understood (to the average sales/support people) product from 3Com+MS. However the significance here is that it was this combination that became LAN Manager, and to at least some extent it was the LAN Manager API which gave both OS/2 and Win-32 their filehandling and interprocess communications APIs... I'm a little unclear whether that includes the "godothisoncompletion()" callback but that was certainly in place by OS/2 v1.

Another significant strand is whether the DLL structure came from OS/2, LAN Manager, or something even older: i.e. "European" multitasking DOS-4. I've actually come across a development tool from that era which backported DLLs onto straight DOS in lieu of using overlays.

So irrespective of the extent to which IBM and MS were able (through wisdom or luck) to plan things to each other's disadvantage, some of the ancient history might have roots even deeper than Nina Kalinina realises.

The latest language in the GNU Compiler Collection: Algol-68

MarkMLl

Re: I would try it, but...

Actually, APL was much easier when it used a printing terminal with overstrikes: memorising the 30ish base characters was relatively easy.

MarkMLl

The seminal structured languages

In any event, and irrespective of their relative standing as the heir to ALGOL-60, the really important thing is that ALGOL-68 and Pascal reflect a recognition of the importance of types and data structures in a language.

Many machines of the day actually treated integers as a subset of floating point numbers, and most of them supported the concept of a record as the fundamental structure of input and output. But a robust codification of type and record handling was novel, and it is worth remembering- and hopefully teaching, if only in passing- where such things originated.
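
A couple of those codifications sketched in today's Free Pascal, since a short example says more than a paragraph:

  program Codified;
  type
    TDay   = (Mon, Tue, Wed, Thu, Fri, Sat, Sun);  { an enumeration: no magic integers }
    TMonth = 1..12;                { a subrange: the legal values are part of the type }
  var
    D: TDay;
    M: TMonth;
  begin
    D := Sat;
    M := 12;
    if D in [Sat, Sun] then
      WriteLn('weekend, month ', M);
    { M := 13; or D := 7; would be rejected at compile time rather than
      silently accepted as "just numbers" }
  end.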

MarkMLl

Re: Lead to a bunch of stuff at what was RSRE Malvern

> ... should have been combined as they are in other languages like Java or Pascal.

Although regrettably today's leading Pascal implementation (Free Pascal, with the Lazarus IDE) still stubbornly refuses to provide any facility to embed SQL etc. inline.

Having read a substantial amount of an ALGOL-60 dialect that relies excessively on tables declared at the start of the card deck, I find that such separation makes me cringe.

MarkMLl

Re: I would try it, but...

> why are there no programmer focused keyboards out there on the market?

You mean like https://web.archive.org/web/20200217111612/http://www.aplusdev.org/keybdBW.html ?

MarkMLl

Re: Alghoul-68

> Why did Van Wijngaarden still call it Algol and not Mindfsck or something more appropriate?

Because one of the luminaries was McCarthy, who by then had pinned his colours to LISP and had an interest in seeing ALGOL fail.

MarkMLl

Re: Lead to a bunch of stuff at what was RSRE Malvern

> Wirth was on the algol X working group ~1966?

Wirth- and almost half of the other committee members- /resigned/ because they didn't like what they were being railroaded into.

One of the people doing the railroading was John McCarthy, whom Wirth couldn't easily criticise since McCarthy had been his boss at Stanford.

Wirth went on to knock Pascal together as a rush job, intentionally breaking the declaration syntax to make it incompatible with ALGOL.

And together with other ex-committee members (Hoare, Dijkstra et al.) he went on to do early work on Ada.

$800 'AI' robot for kids bites the dust along with its maker

MarkMLl

Re: Alternatives?

Their webpage https://moxierobot.com/ is still active, although the purchases page does have the grace to say "Sold out".

There's an interesting review of how widely the accumulated data has been shared at https://foundation.mozilla.org/en/privacynotincluded/moxie-robot/ . Suffice to say that it includes Google and OpenAI among others, and by now those have so much experience on geolocation etc. that I don't think assurances that queries are de-identified and the originals deleted are worth very much.

MarkMLl

Alternatives?

Somebody's apparently working on an alternative https://www.reddit.com/r/MoxieRobot/comments/1ha2b7s/currently_working_on_a_replacement_ai_for_when/ although I don't know his "geek cred" rating.

I obviously throw what little weight I have behind the argument that any "cloud-based" device should have its support software sources held in escrow somewhere. However the real issue in the current case is the accrued "personality" that an instance of one of these things develops talking to a "neuroatypical" child, and whether there are any circumstances in which the custodian of the servers should be allowed to pass this on to a successor: no matter how (apparently) well-intentioned.

Public developer spats put bcachefs at risk in Linux

MarkMLl

Not quite how I read it.

"...executive summary might be that the Rust folks are proposing changing how the C parts of the kernel work slightly in pursuit of cleaner, more reliable code."

I'm not sure that's entirely accurate. Having read the transcripts shortly after that particular spat broke out, the complaint was that there were a lot of poorly-documented edge-cases in an inconsistent API, which demanded poorly-documented use practices to deliver reliable results.

The proposal was not for the API to be changed to suit Rust. It was for the API to be cleaned up and- among other things- documented, which would have obvious advantages for all.

The existing maintainer reacted like a true prima donna, which was not to his credit and reflected badly on the entire community.

MarkMLl

UK farm ministry offers £27M to support legacy systems for another three years

MarkMLl

OK, but can't you at least tell us what these "legacy systems" are?

Are they frontended by PC desktop apps?

Is the backend on a single server site or is it (in principle) distributed and fault-(in)tolerant?

What's the backend: PC-based servers or Sun Enterprise? Or SGI because of the mapping involvement? Or (Lord help us) S/360 running on three layers of emulation?

Help! My mouse climbed a wall and now it doesn't work right

MarkMLl

That was such a common occurrence that one learned to query any order for a 9-pin serial or PS/2 extension cable.

Of course, any user with the original MS mouse that plugged into a custom ISA card with a proprietary connector didn't even get that far.

Linux kernel 4.14 gets a life extension, thanks to OpenELA

MarkMLl

So what can only be installed on v4 kernels?

I've not gone checking version numbers, but, as a specific example, HP (nee Compaq) used to distribute RAID etc. monitoring software in linkable form. All it would take would be one important user of old machines, like the USA's Veterans Administration or some other organ of government.

Alternatively, various American organisations were big users of 32-bit SPARC systems: their manufacturer doesn't want to maintain them and they sure as Hell aren't going to get much support from the Open Source community after they cut off support of the GNU Utilities porting effort.

And for that matter, in around the v4 era somebody developed a nice way of patching a running kernel (which might not work on later versions). It was bought by Oracle...

Judge demands social media sites prove they didn't help radicalize mass shooter

MarkMLl

Re: IANAL

You can't connect to Discord without enabling Javascript in your browser, i.e. messages are filtered through Discord's client-side code.

It's down to Discord to /prove/ that they don't, ever, filter messages in any way.

Same applies to everybody else, but if they can't do that then they're not a "common carrier" and are liable for content.

MarkMLl

Re: IANAL

In short, server + proprietary_protocol + in-browser_custom_client = product.

I don't like to sound like an open-source zealot, but if somebody- and this includes the likes of Discord- is filtering messages through their custom code, and by use of a proprietary protocol are at least potentially preventing people from getting at the /actual/ postings on the server, then they're going to have a Hell of a hard time demonstrating that they're a "common carrier".

A path out of bloat: A Linux built for VMs

MarkMLl

Re: BTDT

I was either fed dud information on the forum by the developers, or the situation has improved over the last few years.

If ARCH=um, then it's possible to select either a 64- or 32-bit guest kernel build, either from the menu system or by using SUBARCH=i386: the result is the same.

On an x86-64 host, I can build either a 64- or 32-bit kernel+modules set which appears to be consistent.

However the overall setup and boot has probably grown in complexity over the years: it used to be that you could fire up e.g. a UML kernel and immediately load from ISO media (e.g. Slackware from a CD or disc image), but despite spending a day tinkering I wasn't able to get anything working.

Revisiting it after a few days, I find that the best documentation comes from Debian's "man linux.uml", which is an extended derivative of https://docs.kernel.org/next/virt/uml/user_mode_linux_howto_v2.html (not very well indexed or linked to).

Following that carefully, I was able to boot UML and get a root login, which leaves me reasonably confident that I could rebuild an arbitrary kernel version.

I've not tried comparing operation speed with Qemu/KVM or Docker etc., but this is probably something which is still useful if one needs a specific kernel version, not just a particular set of libraries etc. as one gets with most container environments.

MarkMLl

BTDT

> How hard can you cut down Linux if you know it will never run on bare metal? Further than any distro vendor we know of has tried to go.

Demonstrated adequately by User Mode Linux (UML), which has been a standard build target for a considerable number of years.

The major thing that it couldn't do, when I last looked, was run a 32-bit guest on a 64-bit host. Which is a great pity, since if a guest is only going to be used for work which is not memory-intensive there's no real reason for it to carry around the burden of 64-bit pointers etc.

Starting over: Rebooting the OS stack for fun and profit

MarkMLl

Re: No. Sorry, just /no/.

> B) this is not something for the language to handle, it would be for the system

You're overlooking the fact that in Smalltalk and Lisp environments there is no significant distinction between the language and the OS.

MarkMLl

Re: Winding paths of history...

"I always thought Multics approach of making everything a segment was an idea worth doing properly - 64kb segments are just plain silly but 2^64 byte segments could fly."

The fundamental problem was that even after x86 segments were expanded to a maximum of 4Gb each, the descriptor tables- GDT and LDT- were still each restricted to 64Kb i.e. 8K entries. If one wrote an OS as Intel (originally) suggested, and if one actually had more than one descriptor per segment (e.g. one to allow code to be executed, and another to allow a debugger to examine it) that 8K was ridiculously inadequate.
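
The arithmetic behind that, as a back-of-envelope Free Pascal sketch:

  program DescriptorMath;
  const
    TableLimit     = 65536;                          { a GDT or LDT is at most 64Kb }
    DescriptorSize = 8;                              { bytes per x86 segment descriptor }
    MaxDescriptors = TableLimit div DescriptorSize;  { = 8192, the "8K" above }
  begin
    WriteLn(MaxDescriptors);         { 8192 }
    WriteLn(MaxDescriptors div 2);   { 4096 segments if each needs two descriptors }
  end.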

MarkMLl

No. Sorry, just /no/.

First: "You load a program, then once in memory and running, you can then load a file into it to work on it. But you must remember to save regularly or you might lose your work. If your computer crashes it has to reboot and reload the OS. This resets it and whatever you had open is lost."

Look me in the eye, Liam, and tell me that you've never had a program crash for no obvious reason. If you don't have a copy of your magnum opus on secondary storage then the only thing you can do is revert to the same memory image: which is likely to be inconsistent, hence your work is unsalvageable.

Second, if Smalltalk and Lisp are so great 'ow come the DoD considered neither as a foundation for Ada? (In the interest of full transparency, I wrote my MPhil on Smalltalk and related matters: I wot of which I speak).

Third, environments such as Smalltalk and Lisp suffer from not having per-object ownership and access rights. One of the early PARC books (can't remember whether it's the Blue or the Green) makes the point that if the system owner decides to rewrite the definition of (all) arrays so that they are zero- rather than one-based... well, be it on his own head. And reviewers of the earliest published version in the Green Book make the point that while great fun such a thing is virtually unsupportable since it's impossible to work out what state a user's system image is in: the one thing you can't do is say "reboot and try it again".

Look, I'm a firm believer in "it's my PC, and I'll pry if I want to" but without internal protection Smalltalk (and Lisp) are basically unusable: something I've pointed out repeatedly to Liam over the last ten years or so.

In addition to that, "If you find that you need different languages for your kernel and your init system and your system scripts and your end-user apps and your customisation tools, that indicates that there's something wrong with the language that you started in." Sorry, I disagree: the languages used at the lowest levels of implementation /have/ to be able to do things that are anathema at the application level, the canonical example being handling hardware-level pointers (i.e. physical memory addresses) and [horror] performing arithmetic on them. In addition, the lowest levels of system implementation usually (a) include the code that implements at least one heap and (b) carefully avoid using such things, because the risk of indeterminacy is too great. By way of contrast, the highest levels usually /demand/ dynamic storage on the heap, plus either reference counting or garbage collection. And $DEITY help programmers with a bit of app-level experience who think they are competent to write low-level microcontroller code, and that it's appropriate to have garbage-collected strings in critical code...
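
For concreteness, here's a Free Pascal fragment of the kind of thing I mean, with an ordinary buffer standing in for a memory-mapped device region so that it's safely runnable anywhere:

  program LowLevel;
  {$POINTERMATH ON}            { FPC directive: C-style arithmetic on typed pointers }
  var
    Buf: array[0..15] of Byte; { stands in for a memory-mapped device region }
    P: PByte;
  begin
    P := @Buf[0];
    P := P + 4;                { arithmetic on an address: four bytes past the base }
    P^ := $FF;                 { "poke a register": routine in a driver, anathema in an app }
    WriteLn(Buf[4]);           { 255 }
  end.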

And so on...

City council megaproject mulls ditching Oracle after budget balloons to £131M

MarkMLl

Re: So whose bright idea was it in the first place?

> It would be a blind tender initially and Oracle will have ticked all the boxes.

Would it be excessively cynical to suggest that Oracle have specialists to assist potential clients to prepare request-to-tender documents?

MarkMLl

Re: What would it cost ...

Contract out to... ICL? Sorry, Fujitsu.

The idea of having a preferred supplier for HM's DP Systems is not inherently bad, particularly if they genuinely have proper staff training schemes etc. (i.e. like Marconi had when they were the preferred supplier for certain types of telephone system).

But it needs politicians with some minimal level of cluefulness to get it to work.

MarkMLl

So whose bright idea was it in the first place?

I've seen comment in the past that Oracle tends to pitch its products directly to prospective customers' board members, cutting out the specialists who really know what's going on.

Has anybody looked carefully at the procurement process in the case of Birmingham?

I don't mean whether the tendering process was done by the book, I mean who started pushing the idea that relying on Oracle was a fundamentally sound bet?

The successor to Research Unix was Plan 9 from Bell Labs

MarkMLl

Re: So...

But be realistic: if you're the sort of person who still considers Slackware then you got a lot of that good attitude from your parents.

A thoroughly above-average family :-)

(The main reason I left Slackware for Debian was support for SPARC etc.)

MarkMLl

Re: OK, but is "multiuser" relevant any more?

But the Samba backend process runs- AIUI- as its own user (i.e. "samba" in group "samba"). It assumes responsibility for checking who is trying to access a file (i.e. which user on a client system) but once it's done that it still uses samba:samba at the library and kernel level.

I think I'd prefer that protection to be enforced at a lower level, in the same way that cross-system requests are enforced on many capability-based distributed OSes.

MarkMLl

Re: So...

> I believe Rob Pike studied under Wirth.

There's definitely a "citation required" tag on that. Wirth's connection with the ALGOL lineage- including Ada- is unassailable, and most modern languages seem at least tolerant of his ideas regarding strong typing. And one of the first APL implementations was done under his supervision at Stanford.

Robert Griesemer provides a link between Wirth and Google including Go, but so far I've seen no definite confirmation that Wirth had a personal connection with any of the Bell researchers- or for that matter anybody on the USA's East Coast i.e. IBM and the rest.

> Plan 9 is a successor to a multiuser OS with strong security, authentication, etc.

OK, but is "multiuser" relevant any more? Multi-*tasking* is definitely relevant, as is strong security between processes which might be briefly processing data on behalf of a known or anonymous user. Most desktop systems are now strictly single-user, and anything larger seems to have reverted to the 1970s mainframe model: a frontend would enqueue a query (etc.) on behalf of a user, and a backend would dequeue and action it with appropriate access permissions.

Focusing on "multiuser" as the prime requirement and running backends with their own fixed identity might actually be undesirable: the industry needs a good hardware-enforced security model applicable primarily to daemons, and if it turns out that that can also support a traditional multiuser architecture that's just icing on the cake.

Forgetting the history of Unix is coding us into a corner

MarkMLl

Re: Only Wayland?

While I agree, things like a window's position on the screen or its Z-order should be reflected by metadata in the directory structure rather than file content.

That sort of thing was completely overlooked in UNIX's design, which is why there were attempts like Alpha-Windows on character terminals and why window position and furniture are separate from window content on X11.

But even if we ignore the positioning aspect, we have to ask ourselves how best to represent process output as a file. Is it going to be a stream with embedded positioning commands etc.? Or is it going to be a sequence of rows each with a fixed number of columns representing what is actually presented to a user?

The first of those might be appropriate if the output device is a glass teletype, but once one starts considering any sort of smart terminal or form-based display one has to wonder whether IBM were actually right when they defined that fileset metadata included block and record sizes.
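
To make the second option concrete, a Free Pascal sketch of the "rows of fixed-width columns" representation (the type names and the 80x25 geometry are mine, purely for illustration); note how close it is to a fileset with fixed block and record sizes:

  program ScreenModel;
  type
    TCell = record                 { one character cell as actually presented }
      Ch:   Char;
      Attr: Byte;                  { colour, intensity, protected-field flag... }
    end;
    TScreen = array[1..25, 1..80] of TCell;  { the geometry is part of the type,
                                               much as block and record sizes are
                                               part of an IBM fileset's metadata }
  var
    S: TScreen;
  begin
    S[1, 1].Ch := 'H';             { content is addressed by position, not
                                     embedded in a command stream }
    S[1, 1].Attr := 0;
    WriteLn(S[1, 1].Ch);
  end.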
