* Posts by Peter Gathercole

4189 publicly visible posts • joined 15 Jun 2007

TrueNAS CORE 13 is the end of the FreeBSD version

Peter Gathercole Silver badge

Re: Some backtracking here, maybe

If you look back at my posts, the fact that BSD never took the UNIX branding is a point that I make frequently, but when I say that, I often get shouted down!

I have often taken exception to your assertion that Huawei and Inspur putting their flavours of Linux through UNIX™ branding means that Linux in general has UNIX™ branding. It doesn't, because you don't know how much work Huawei and Inspur did to their versions to achieve their acceptance, nor do you know how much of that was accepted back into mainstream GNU/Linux. This is actually the other side of the double-edged sword that is Free Software: people can change it to suit their needs.

In my view Linux is not UNIX, and as I have said, is moving away from UNIX at an ever increasing rate, although in those two cases I accept that I may have to make a minor, and time-limited exception.

Peter Gathercole Silver badge
Facepalm

Some backtracking here, maybe

Liam, you've previously proclaimed UNIX dead, and reiterated this belief as recently as last week.

I think you would be a little two-faced if you didn't include FreeBSD in the group of systems that are under the UNIX banner.

Suck it up. According to you Unix is dead and Linux is the new UNIX!

Linux for older phones postmarketOS changes its init system

Peter Gathercole Silver badge

Re: what's left of the commercial Unix world [...] Solaris 10

Up to about 10 years ago, I would have agreed.

But things change: Linux (the kernel) gets new system calls and new features, and in the GNU and other non-kernel parts, many of the things we understand as making up UNIX which are not in any standard (like the init system or the commands to manage network interfaces) get changed, deprecated or completely removed.

Whilst it may still be mostly possible to lift some source code from a UNIX system and compile it up on a Linux system, the reverse is no longer really true unless the code is restricted to POSIX functionality, and not much new code is.

And you know what? I don't really care any more. I'm tired of trying to keep up, and once I finish working for a living, I'll stay in my niche, and use whatever I can for my personal computing, even if that makes me a dinosaur.

Peter Gathercole Silver badge

Re: what's left of the commercial Unix world [...] Solaris 10

It depends what you mean by a 'point release'.

AIX 7.3 was released about 2 years ago. The latest TL for that (sometimes written as 7.3.2, aka 7.3 TL2) was released a mere 3 months ago, and AIX 7.2 is still receiving TLs as well.

TLs, or Technology Levels are released to support new features and new systems.

There are more to come. TL 3 is due around the end of this year, and additional TLs will be released later for Power 11 and Power 12 systems. Will 7.3.3 count as a point release (although I admit that in IBM's VRMF convention, it is a Maintenance Level)? If so, I look forward to pictures of you munching some paper.

IBM looks like it has moved away from changing the first number in the AIX version. There was some thinking that AIX 7.3 might have been called 8.1, because it introduced live kernel updates, but it remained in the 7.x numbering sequence.

One of the advantages of a 'legacy' OS is that it is stable. AIX is now old enough that it has all of the features that most people need, and not so many new bugs, although I admit that the division between AIX and Linux distros is widening, mainly for the reasons that Jake quotes about systemd. The result is that you don't introduce major change unless you absolutely need it, something that people who develop for Linux appear to have forgotten. And I often wonder how many of the new 'features' in GNU/Linux systems actually benefit customers.

Peter Gathercole Silver badge

Re: what's left of the commercial Unix world [...] Solaris 10

Indeed. Power 11 and 12 are on the roadmap, with AIX still there until 2030 or even later, and AIX is still getting major release updates (at least in terms of their numbers!)

I regard AIX as the last Genetic UNIX standing (although how much original AT&T code is in there is debatable).

There is no doubt that it is in decline, but as long as it is easier (and in most cases cheaper, including hardware costs, once you factor in the potential cost of risks) for banks, insurance companies and other people with enough money to keep applications on updated Power hardware and AIX, IBM will continue pushing out the updates and features. In fact, they're using AIX to push hardware sales: I recently got a message saying that Spectrum Protect won't work on Power 7 servers after a certain update, so update your hardware now! Power 7 systems went out of hardware support about 4 years ago.

The problem is that Linux is NOT UNIX (and getting less and less so as time goes by), so it is no longer possible (if it ever really was) to do a lift-and-shift to Linux, even if enterprise-grade servers with the right RAS features were available. It's certainly not easily possible to port modern software written on Linux back onto AIX, BSD and other UNIX-like OSes, because of dependencies on systemd and other facilities that were developed natively on Linux.

I don't think AIX will become a fully legacy platform like the Mainframe. It will come to an end. Just not quite yet.

I would wager real money that Systemd won't ever make it into AIX (it probably can't. No dbus support.) Other init systems? Who knows and in a few years I will no longer care.

At one time, I thought that the Red Hat purchase was to integrate RH Linux onto Power with the same Enterprise features AIX had, giving AIX customers a different path while keeping Power. Having seen almost no evidence of that happening over the last few years, I'm a little sceptical now.

Font security 'still a Helvetica of a problem' says Australian graphics outfit Canva

Peter Gathercole Silver badge

Re: KISS

Argh!

Not Gnomeprint! It was Gimp-Print, part of the GIMP package that was split off because it was so useful for other things than GIMP!

Peter Gathercole Silver badge

Re: KISS

Wordstar was like most of the wordprocessors of the time, and relied on the font support in the printer itself.

To do this, there were internal device independent markers embedded in the document that selected various things like superscript, subscript, bold, underline, strikethrough and different fonts.

To implement these for a particular printer, you had a driver file that you loaded as part of your wordprocessor profile, typically at startup.

These driver files contained things like the escape sequences to turn each of the features on and off, together with a description of the fonts. Most dot-matrix printers of the FX-80 era only had fixed-width fonts, so the driver only really needed to know the character width and line spacing, but contemporary daisy-wheel printers could have proportional fonts loaded. For these, the print driver also had to know the full font metrics for the font wheel installed, so these would also be in the driver. (Incidentally, nroff on UNIX, which was normally used to drive fixed-width-only printers, could also drive printers with proportional fonts, not that many people used it for those.)

Printers later than the FX-80 (such as the LQ-80) started adding proportionally spaced fonts, but again, the software worked within the capabilities of the printers, just turning things on and off by escape codes, although like daisy-wheel printers, the metrics for the full font would have to be included.
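To give a flavour of what such a driver table contained, here is a minimal sketch using the standard Epson ESC/P codes of the FX-80 generation (the exact sequences varied from printer to printer, which is why the driver files existed at all):

```c
/* A sketch of the kind of on/off escape-code table a WordStar-era
 * driver file held, using standard Epson ESC/P codes. */
#include <stdio.h>

static void emit(const unsigned char *seq, size_t len)
{
    fwrite(seq, 1, len, stdout);
}

int main(void)
{
    const unsigned char bold_on[]  = { 0x1B, 'E' };    /* ESC E  : emphasised on    */
    const unsigned char bold_off[] = { 0x1B, 'F' };    /* ESC F  : emphasised off   */
    const unsigned char ul_on[]    = { 0x1B, '-', 1 }; /* ESC - 1: underline on     */
    const unsigned char ul_off[]   = { 0x1B, '-', 0 }; /* ESC - 0: underline off    */
    const unsigned char sub_on[]   = { 0x1B, 'S', 1 }; /* ESC S 1: subscript on
                                                          (ESC S 0 is superscript)  */
    const unsigned char scr_off[]  = { 0x1B, 'T' };    /* ESC T  : cancel super/sub */

    emit(bold_on, sizeof bold_on);
    fputs("bold", stdout);
    emit(bold_off, sizeof bold_off);
    fputs(" H", stdout);
    emit(sub_on, sizeof sub_on);
    fputs("2", stdout);
    emit(scr_off, sizeof scr_off);
    fputs("O ", stdout);
    emit(ul_on, sizeof ul_on);
    fputs("underlined", stdout);
    emit(ul_off, sizeof ul_off);
    putchar('\n');
    return 0;
}
```

Pipe the output to an ESC/P-compatible printer and the markers do their job; send it to a different printer and you get garbage, which is exactly why every wordprocessor shipped a stack of driver files.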

There were very early attempts to render a page in the computer itself, and transfer the page as a bitmap image using the printer's graphics capability. This started with very early desk-top publishing programs, but I don't know exactly when the mainstream wordprocessing packages started using these types of capabilities. When using a printer like this, there had to be a rendering engine in either the OS or the package itself; this appeared to have been incorporated into the OS in early versions of Windows, with GDI in Windows 3.1. It was only when this started happening that the OS needed to know much more about fonts than their metrics.

UNIX printing pre-CUPS was a very hit-and-miss affair. The software (for example troff) needed to be able to drive the printer's capabilities itself, making the OS print system mainly a pass-through. With the advent of PostScript things became a bit easier, but the rendering was often still done in the printer (although PostScript does allow embedding fonts into a print job if they were not one of the standard fonts that PostScript printers shipped with).

Just prior to CUPS, people started putting together rendering systems using GhostScript (an open source PostScript implementation) to render the image before sending it to a printer using print filters in the System V style print system (a strange beast I'm happy to forget as much as possible). This allowed different types of printer to print graphics-rich pages, and all the application had to do was generate PostScript. This happened around the time ink-jet printers arrived (I played with it on early Redhat systems and Epson Stylus 400 printers). It became formalised when CUPS was released into the open, and this uses either GhostPrint or GutenPrint (formerly Gnomeprint) to render the page. (And now CUPS is transitioning to prefer IPP in the printers; strangely circular, as PDF is an evolution of EPS, or PostScript, moving much of the rendering back into the printer!)

With all of these client-side rendering systems, there needs to be font handling in the client computer. The tools for handling scalable fonts are as vulnerable to code problems as anything else.

But getting back to the point, configuring Wordstar for an FX-80 (although why you would need to, because it was one of the standard printers that everything shipped a driver for) is as much like driving a modern printer as an ox-cart is like a Bugatti Veyron.

HP print rental service seeks more users to become subscription addicts

Peter Gathercole Silver badge

Re: A fool and his money

What spoils this is Amazon Prime delivery.

If I need to find cartridges for even my oldest HP printer (I've got an Officejet G55 and an Officejet 5610) then I can, and get next day delivery. If I keep one set ready to go, and order again when I use the ones I have on the shelf, I am rarely in a position where I cannot print. I have to order them, of course, but mostly that's an "Order again" from my Amazon purchase history.

Of course finding HP originals is difficult for these older printers, but in general I find I can use re-manufactured ones at a fraction of the price of the HP ones (even taking into account that some don't work out of the box), and of course, as the print head is in the cartridge, I don't have to worry about clogged heads.

The other advantage of using older printers is that there is no nag-ware, and as I am in a mostly Linux environment, I don't have a problem with the fact that HP withdraw the drivers for older printers. Of course, CUPS switching to IPP by default has been a little bit of an issue, but it seems to be working itself out now. I generally have the printers hung off NAS devices, or always-on small Linux systems, so the fact that they are not network attached is not an issue.

Updates are plenty but fans are few in Windows 11 land

Peter Gathercole Silver badge

Re: Be less intrusive, less pushy, less blocking work,

Nah. There's no alliteration. Would have to be something like Intrusive Icefish, but we're a long way from I at the moment, having passed it at Impish Indri in 21.10, and the next name being Noble Numbat at 24.04 next month.

Peter Gathercole Silver badge

Re: We only just got Windows 10 settled....

Well, it used to be that after the 5 years[1], you could do an in-place update, and besides the default window background changing, things looked much the same.

Not so much now, with changes a-plenty between LTS releases. Systemd, Wayland, Mir, Gnome 2/3/4 and many, many other things under the covers.

But there is now a Ubuntu Pro offering, that will (for a price) provide additional security updates for up to 10 years from the release date of the LTS. Not that I intend to fork out for it.

[1] Ubuntu user since Warty Warthog (4.10) - daily driver since Dapper Drake LTS (6.06)

Microsoft's February Windows 11 security update unravels at 96% for some users

Peter Gathercole Silver badge

Re: "Something didn't go as planned. No need to worry – undoing

Ah. So you're a waffle man!

Starting over: Rebooting the OS stack for fun and profit

Peter Gathercole Silver badge

Re: Windows NT

Precedes Linux by more than a decade. mmap() was a feature of SunOS and SVR4, although the interface to it was described (but not implemented) in BSD4.2.

According to Wikipedia, the memory-mapped file concept was first seen in TOPS-20.
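For anyone who has not used it, a minimal sketch of the classic mmap() usage, with nothing beyond standard POSIX calls: map a file into the address space and then read it as ordinary memory, with no read() calls in sight.

```c
/* Map a file read-only and count its newlines by walking the
 * mapping like an ordinary in-memory buffer. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { puts("0 lines"); return 0; }   /* mmap of 0 bytes fails */

    /* The kernel pages the file in on demand; no explicit I/O needed. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long lines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (p[i] == '\n')
            lines++;
    printf("%ld lines\n", lines);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```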

Peter Gathercole Silver badge

Re: Hit-and-Miss

Versioning in VMS (and RSX-11 and RSTS) was a feature of the Files-11 filesystem, with enough in the OS to allow you to manage it. It was just a file generation thing, not any form of change control on the data in the files.

Peter Gathercole Silver badge

Re: case-sensitive filesystems and other thoughts

I fail to understand the thinking that case-sensitive filesystems (really you mean file and directory naming in filesystems) are really a bad thing.

The only thing I can think of is that the DOS/WINDOWS world (and extending back into VMS, RSX and the mainframe world) is so ingrained in this thinking that anything else is unthinkable.

But case-sensitive file naming is not difficult to understand, and when you extend this to other written languages, it's absolutely essential that you can handle more than case-insensitive English. Filesystems need UTF-8 support in modern computing, and if you have that wealth of characters available, surely it's less than intuitive to make "a" equivalent to "A" just for English. It becomes an arbitrary mapping that users of other character sets will regard as a weird and unusual quirk.

I have said before that the past English-language dominance in computing, particularly US-flavour English, is a terrible arrogance, almost a cultural imperialism, of the English-speaking world. Yes, that world pretty much invented modern computing, but we need to get past that.

I admit that I find it hard to cope with Kanji or Simplified Chinese file names on CDs sourced in the Far East, and there are huge numbers of other languages that are not based on the Roman alphabet. When (not if) non-Roman characters become a significant part of the World Wide Web, life will become much different (we may need a translation service for filenames!)

In a world of usable computing devices, where not everyone understands this technical debt or even English as a language, things have to become more diverse. If you want the world to settle on a lingua franca, there is no reason to think that it will be English over, say, Simplified Chinese.

On the subject of a change in the OS paradigm with regard to storage, you have to extend your thinking to an even larger name-space that includes what we currently regard as network attached storage. There is no reason, if you are designing a new operating model, to limit your thinking to just local storage. I can envisage a world where there are few to no barriers to the way storage is presented to a user. Of course, you then run into problems with permissions and ownership of data. This cannot just be handled by visibility, so it is important that you retain the concept of identity in the OS, although maybe not the multi-user model that we have at the moment (although there are good things as well as bad in that).

I did comment on this a long time ago when Optane and Memristor were in the news (I think it was on an HP "The Machine" article) but I can't find the post at the moment.

One of the problems with a 'flat' access model that you may end up with using persistent storage is that you need to have some form of index to find data. This is effectively what a 'filesystem' is: in most current cases, a non-binary tree index of the data stored in the files. I know I'm getting old, and have not embraced the cloud, but I find cloud object storage difficult to use, because there is no obvious structure imposed on the data objects by the storage mechanism. You still need some form of index/database to find the data, and making that more opaque may not hinder applications designed for this storage method, but it increases the complexity of the system rather than making it simpler.

Current filesystem design does provide a solution to that problem, and may not be the hindrance you think it is. It may persist (possibly just as an extra index) well into the age of the 'flat' storage model.

It is a bird, a plane or a Chinese spy balloon? None of the above

Peter Gathercole Silver badge

@IGotOut

More than that: the UK Trident missiles are leased from the US (obviously with some high final payment to terminate the lease!), and have to be periodically shipped to the US for maintenance.

The warheads are UK-manufactured at AWE Aldermaston, and married to the missiles in the UK at HMNB Clyde at Faslane, I believe.

If we plug this in without telling anyone, nobody will know we caused the outage

Peter Gathercole Silver badge

Re: Ethernet AUI were a pita too.

Fixed with so-called "snagless" hoods that prevented the tab catching on other cables as you tried to untangle the knitting at the back of switches and servers.

Of course, sometimes these prevented the connectors fitting into recessed ports on the back of some servers.

I hated the AUI slide on older systems. Where ports were close together, you needed a screwdriver or some other stiff, thin object to get to the slide to undo it. I'm sure that some of them 'fell' off because someone couldn't get their fingers in, and just pulled.

In case you had forgotten, the cables that came out of the AUI were very heavy. They needed a quite substantial slide clip to prevent the weight of the cable pulling them out of the back of the system!

Some Intel Core chips keep crashing, game devs complain

Peter Gathercole Silver badge

Re: Golem.de looked closer than ElReg...

I think that you may find that some resistive load heaters have thermal controls to prevent them getting hot enough to melt the enclosure. So while the device is rated 1500W maximum, if you were to run it for an extended time T, the overall heat output would be significantly less than T × 1500W.

These controls masquerade as air temperature thermostats, supposedly to not keep consuming electricity when the air temperature reaches a set level, but also to prevent the device putting the full whack of heat out so it doesn't melt.

Preview edition of Microsoft OS/2 2.0 surfaces on eBay

Peter Gathercole Silver badge

Re: Worth noting the discovery that made OS/2 1 redundant

"because IBM didn't want OS/2 to be a multi-processor"

Do you mean multiple CPUs in a single system, or did you mean more than one processor platform?

I ask this because OS/2 3.x was ported to PowerPC by IBM (although there were some problems), and from what I remember from when I worked in the AIX Systems Support Centre in the early '90s, there was actually going to be some synergy between OS/2 and AIX on PowerPC hardware, with common hardware and LPAR support (running OS/2 and AIX on the same physical system concurrently) in the roadmap. I even heard rumours of a common kernel, possibly even written as a micro-kernel, with AIX and OS/2 personality layers on top.

I no longer have any copies of the documents I saw, so it is all from memory, but I did see OS/2 running on a pre-production 7020 40P (actually, although the hardware was mostly the same, it was probably a PowerSeries 440, but apart from the covers, it looked like the 40P that I had in my herd of systems).

OpenAI tries to trademark 'GPT'. US patent office says nope

Peter Gathercole Silver badge

GPT?

For me, GPT still means GEC Plessey Telecommunications, who made the System X telephone exchanges, but I guess that they completely disappeared around the turn of the century when they were absorbed into Siemens.

The rise and fall of the standard user interface

Peter Gathercole Silver badge

Re: Motif?

Your comment about the timing of the VT100 and ANSI X3.64 is interesting. I had not looked into the dates, but I thought that the "<ESC> [" CSI was just so bizarre that it must have come from a committee (the ANSI X3L2 committee, to be precise) rather than a single manufacturer. This committee seems to have been working before the introduction of the VT100.

At the time that the VT100 came out, I was still at university, and did not actually use real VT100 terminals at all. Most of my experience came later, and from VT compatibles. The reason I wrote a VT52 emulator was that the SYSTIME IV terminals on my SYSTIME 5000E (as in "the one I looked after", running RSX-11M and UNIX Edition 6 and 7) were merely VT52 supersets (they had additional functions, some of which I also implemented). This enabled me to keep track of what was going on from my office (where I had both a BBC Micro and a serial connection to a CAMTEC PAD which had a line to the system) without tying up one of the small number of terminals available for students in the lab itself.

The 'killer apps' that needed compatibility were EDT (as you pointed out) and a transaction monitor called SYSTEL which was being used to teach commercial application design.

In my next job, I came across all sorts of different terminals, from Wyse, Lear Siegler, HP, AT&T, IBM and several cheaper VT2x0 compatibles (as well as real VT220s), and I became the go-to person for anything terminal related, and I've sort of kept that reputation wherever I've been since.

Peter Gathercole Silver badge

Re: Efficient interface

Hmm. It seems that the 8085 was not as common as I thought. The 8051 and its later variants were used by Wyse, and probably by other manufacturers as well.

It just happened that I saw the 8085 in a couple of terminals I saw in bits in the '80s, so I assumed it was the common processor.

Peter Gathercole Silver badge

Re: Heh, youngsters nowadays...

I really don't want to say how much younger than me he is, but I was using video terminals on UNIX systems (and also MTS via PDP11 terminal concentrators) in 1978 at Uni. Back then they were really glass teletypes, although we did have Queen Mary's Editor for Mortals (em) working on UNIX V6/V7 on pretty dumb Newbury terminals, which allowed you to open a line and do insert/delete processing using cursor movement on a line-by-line basis. But I still remember how to use ed even today.

But I know that there are even older people than me knocking around here.

Peter Gathercole Silver badge

Re: Efficient interface

In the UK, Plessey sold a VT100 look-and-work-alike which at first glance looked a lot like a VT10x, but was a lot cheaper. They were very common in UK education. DEC terminals, like most computer manufacturers' own terminals, were quite pricey, but this was partly because of the way they were constructed. The cheaper terminals were effectively programmed single-board Intel 8085 computers, with the programming in ROM, whereas DEC VT52 and VT10x terminals, although they became microprocessor based, tended to be multi-board implementations in a backplane. This made sense for maintainability and upgradeability (things like the Advanced Video Option (AVO) and ReGIS graphics were optional plug-in boards to a base VT100), but not necessarily for price. Later DEC terminals, like the price-reduced VT102 and the VT220 onward, followed the single-board model to bring the price down (a bit).

I was never very fond of Wyse terminals: although they were cheap and functional, I seem to remember them having a bad reputation for reliability (and their native command set was awkward to write termcap/terminfo entries for). Of course, for Wyse 75s and later terminals, you normally operated them in emulation mode, not native.

Prior to cheap PCs like the Amstrad 1512, unless someone had a PC on their desk for another reason, providing one just to run a terminal emulator seemed a lot of money for a terminal replacement. Once PCs could be bought for less than a decent terminal, the serial terminal industry died a death. Wyse and some others tried to keep it going by diversifying into X-terminals, Winterms and Thin Clients, but these could not really compete either.

But it was not just IBM PCs that got terminal emulation software. I wrote my own 98% VT52-compatible terminal emulator (it was not timing-compatible, and only implemented XON/XOFF, not hardware flow control, but was otherwise complete) for the BBC Micro before the IBM PC was common (and I also did a minimal Tek4010 emulation), when I worked at Newcastle Polytechnic. I stopped short of doing an ANSI/VT100 emulation, partly because commercial emulations came along, such as Computer Concepts' Communicator and Acornsoft's Termulator, and partly because it would have been much more complex. And of course Kermit was freely available.

I did sketch out an MSc proposal for myself to implement a WIMP-managed serial terminal on the BBC Micro using an AMX mouse, but I realised things were moving so fast that it would have been obsoleted by commercial offerings before I finished it!

Peter Gathercole Silver badge

Re: This far down in the comments

It depended on your printer. nroff generally used whatever fonts the printer provided. Troff, in particular Device Independent Troff (di-troff), was actually quite flexible. It was only the original troff, which could only drive a Linotype phototypesetter, that was limited, and I think even that had different fonts available.

I used di-troff on Xerox 9700 laser printers (although that di-troff backend may only have been in AT&T's R&D UNIX), and that had different font sets you could load, although you did have to have a font metrics file for each of the fonts.

And later, I also used a di-troff Postscript backend driving a DEC LN03 laser printer, with all of the standard Adobe fonts available.

It was possible to drive printers with nroff doing such things as different-sized and proportionally spaced fonts, but it was a pain in the neck to create the font metrics file (I did it for an OKI laser printer once, and it took weeks for me to be happy with the result).

But back then, every word processor relied on the fonts that the printer provided. It was only with the advent of GDI for Windows, GS/Gutenprint for Linux, and possibly PostScript as an intermediate graphics language, that rendering-to-pixmap in the PC became a thing, allowing the application to use any font that the rendering engine knows about.

Peter Gathercole Silver badge

Re: Motif?

If you look into the command sequences for the VT52 and VT100, you will see that the VT100 was not an evolution of the VT52. It was a completely different terminal, not even sharing the same form (the VT52 did not have a separate keyboard). It did have a VT52 mode and understood some of the commands of the older terminal in VT100 mode, but the structure of the command set was completely different. The command sets of the VT52 and ADM-31 looked much more similar.
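To make the structural difference concrete, a small sketch in C (the sequences are from the published VT52 and ANSI X3.64/VT100 documentation) doing the same two operations in each scheme:

```c
/* VT52 commands are bare ESC plus a letter, with cursor addressing
 * as raw offset bytes; ANSI X3.64 routes everything through the
 * "ESC [" CSI with decimal parameters. */
#include <stdio.h>

static void vt52_home_clear_move(void)
{
    printf("\033H\033J");                 /* ESC H: home, ESC J: erase to end of screen */
    printf("\033Y%c%c", 32 + 9, 32 + 19); /* ESC Y <row+32> <col+32>, rows/cols 0-based */
}

static void ansi_home_clear_move(void)
{
    printf("\033[H\033[2J");              /* CSI H: home, CSI 2 J: erase whole screen */
    printf("\033[10;20H");                /* CSI <row>;<col> H, 1-based decimal params */
}

int main(int argc, char **argv)
{
    (void)argv;
    if (argc > 1)                         /* pass any argument on a real VT52 */
        vt52_home_clear_move();
    else
        ansi_home_clear_move();
    puts("hello");
    return 0;
}
```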

But the thing that made the VT100 (well, actually probably the VT102, which was taken as the base for xterm) the reference was that it was the first commonly available terminal that implemented the ANSI X3.64 terminal specification. This, along with the fact that the VAX (and PDP-11) had a large market share, meant that most other terminal manufacturers made their terminals so they would work under VMS, RSTS and RSX to maximise their potential sales. As far as I am aware, VMS did not have a terminal abstraction layer like termcap/curses (and I'm certain that RSX-11M didn't), so if you wanted to use a terminal from another manufacturer, you had to get one that worked like a DEC VT terminal.

Other mini-computer manufacturers tried to do similar things (I shudder whenever I think of an IBM 3151 in native mode), but DEC won the day to become the one everyone copied.

Peter Gathercole Silver badge

Re: Efficient interface

Before the days of shared libraries and dynamic linking, vi was compiled as a fully statically linked binary, and would work as long as you had access to the executable file and a copy of termcap or terminfo.

But shared libraries with dynamic linking were seen as the way forward, so vi stopped working if /usr/lib was not available. I remember back in the day recovering AIX systems booted from three floppy disks while using nothing other than what would fit on a 1.44MB floppy, which only just contained a shell and ed, plus a handful of other commands.

I think that ed is still linked as a single fully statically linked binary on real UNIX systems, although on RH Linux it appears to be dynamically linked.

Peter Gathercole Silver badge

Re: Efficient interface

If I remember correctly, termcap (and curses) was effectively the code that allowed vi to function on different terminals, ripped out to make a more generic terminal-handling library using the termcap file and the libcurses library. I remember reading about this, and doing a bit of testing when I got a 2.something BSD release for Edition 6 and 7 (unfortunately on a PDP11/34, which was too small to run vi), obtained because it had the Ingres RDB system on it, which was wanted for teaching relational databases to students.

I do not know whether terminfo was a NIH thing that AT&T did, but I do remember that it was more functional than termcap, and because it used compiled, hierarchical files containing the essential features needed to drive a terminal, it loaded faster than parsing the large, single file of terminal descriptions that was termcap. I think I remember some comments in the termcap file shipped with BSD which suggested that you create a cut-down version just containing the terminals that you had on your system, for the sake of speed.

Part of the code shipped with AT&T UNIX releases included a captoinfo command which would take a termcap entry for a terminal, and create a terminfo source file for tic.
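For anyone who hasn't poked at it, this is roughly how a program consults the compiled terminfo database through the low-level API that ships with ncurses: a minimal sketch, not production code (link with -lncurses or -ltinfo).

```c
/* Look up capabilities from the compiled terminfo entry named by
 * $TERM and emit one, which is what replaced parsing the flat
 * termcap file by hand. */
#include <stdio.h>
#include <curses.h>
#include <term.h>

int main(void)
{
    int err;
    /* Load the compiled terminfo entry for the current terminal. */
    if (setupterm(NULL, fileno(stdout), &err) != OK) {
        fprintf(stderr, "no terminfo entry (err=%d)\n", err);
        return 1;
    }

    char *clear_seq = tigetstr("clear");      /* clear-screen capability */
    if (clear_seq && clear_seq != (char *)-1)
        tputs(clear_seq, 1, putchar);         /* emit with any required padding */

    printf("terminal is %d columns wide\n", tigetnum("cols"));
    return 0;
}
```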

Another thing that BSD included was a shell called vsh, or the Visual Shell. This provided an environment a bit like Midnight Commander (itself heavily influenced by Norton Commander) on text-only terminals. I think that vsh might actually have pre-dated Norton Commander, so it is possible that vsh influenced Norton Commander, rather than the other way round!

I'm sure the code for vsh is available in the TUHS archives, so it may be an interesting project to resurrect it to see how it compared.

Peter Gathercole Silver badge

Re: Efficient interface

The difference here is that with vi, you tend to look for ways to do things globally, often using regular expressions. For example, a global find-and-replace is trivial in vi if you know how (:%s/old/new/g does it across the whole file), but doing it from a GUI is often much more convoluted.

Maybe this is because I still think in terms of ed commands myself. In GUI editors, I've seen people do a find, move the cursor within the matched string, delete, type the replacement and repeat, even when the GUI actually does allow global find-and-replace, but just makes it more difficult to apply. In vi, you can even do a "repeat the last complex operation" (the "." command), rather akin to an automatic macro facility that remembers the last operation you did.

And vi (the original vi, not any of the re-implementations) has many quite obscure features, like tagging, block moves, and multiple named buffers for text moves. It is much more than simple "move a cursor, make a change" editing.

And, even though back in the 16-bit minicomputer days vi was too big to run on some systems (PDP11s without separate I&D space, like the 11/34 and 11/40), these days it's absolutely tiny compared to most things on other OSes.

If you run the version of vi that came with BSD, rather than vim or any of the other implementations, you're using what was effectively a prototype. Anecdotally, Bill Joy lost the code for a later version he was working on and had to include his working copy in the BSD releases. I often wonder how much better that lost version would have been than the one we use.

Self-taught-techie slept on the datacenter floor, survived communism, ended a marriage

Peter Gathercole Silver badge

Re: Daily!?! RFC begs to differ

That is what the RFC says, but when it talks about transmitting a mail, it is talking about an MTA (Mail Transfer Agent) talking to another MTA, and the issue you then have is what constitutes an acknowledgement that a mail has been received.

Generally, this means that the sending MTA has engaged with a receiving MTA, and has successfully completed all the elements of a mail transfer, including the final termination. If that has been successful, then the sending MTA will stop retrying, because as far as it is concerned, the mail has been delivered to the next-hop agent.

The retransmission specified in the RFC is there to allow for a receiving MTA being off-air for a while, something that used to happen a lot more than it does now.

This story appears to hinge on a locally run MTA that had received mails, but had not sent them on so that the MDA (Mail Delivery Agent) could show them to the recipient.

It is quite normal for there to be several MTAs in an organization, with one sending and receiving email to/from the Internet (or another mail network), and one (or more) handling all of the internal email. Working in collaboration, they route the mail appropriately. So the external-facing one could have received the mail, but not passed it on to the internal one.

Sometimes, it's instructive to turn on the view of the SMTP headers in an email just to see how many MTAs are involved between you and the sender! (Hint. It's normally a lot more than you would expect.)

Angry mob trashes and sets fire to Waymo self-driving car

Peter Gathercole Silver badge

Yes. That was a wild flight of fancy rationalized into something that could make sense. One of his earliest and most radical ideas, and one that cemented his reputation as a Master of Scientifiction (sic).

Within the expanded Foundation stories not completely written by Asimov, it is suggested that the idea for Psychohistory was originally invented by R. Daneel, who introduced it to Hari Seldon for him to take forward. I've always been a little uncomfortable about that particular thread in the story, but it was one, again, that was hinted at by Asimov himself in "Robots and Empire", which other authors ran with, filling in the gaps in the Foundation prequel books.

People criticise Asimov's books for being written in an impersonal style, never really expanding on the main characters much more than making them players in the story. But I find the prequel books, written in collaboration, a little too personal, with too much characterization. I guess that is why I never read them as completely or as frequently as the ones penned just by Isaac.

In retrospect, I think that modern generations will never read the "Foundation" trilogy. It's written in a measured, slow style that will not grab a modern reader.

I've yet to see the Apple TV adaptation, but when it is described as "based on...", I think that it will bear as much resemblance to the books as the Robin Williams film did to the original "Bicentennial Man" short story, or even worse, "I, Robot", which was a travesty.

Peter Gathercole Silver badge

Re: I wonder what Psychohystory would have to say about that event

I think that the later Hari Seldon books were all collaborations, and that dear old Isaac's input was probably down to signing off on the books not breaking the overall story arc.

But he did explore the effect of robots (or maybe computers) controlling human society in the short story "The Evitable Conflict" originally in "Astounding Science Fiction" in 1950 (before my time) and then published in the collection "I, Robot" (where I first read it in the '70s) and re-published in other collections elsewhere.

And I believe that he also expanded on this in one of the Elijah Bailey/R. Daneel Olivaw novels, and then again in his later merged Robots and Foundation books, culminating in "Robots and Empire".

But in these stories, it is the computers and robots that decide to return control to humans, because they decide that maintaining control themselves is damaging to humanity as a whole (effectively the zeroth law, which they write themselves).

IBM pitches bite-sized $135k LinuxONE box for smaller biz types

Peter Gathercole Silver badge

Re: Ummm.....about the rest of the "high availability" configuration......

I don't understand. If you're talking about the POP as being your drawback, then you're really talking about geo-distributed systems, which was not what I was talking about.

Peter Gathercole Silver badge

Re: Ummm.....about the rest of the "high availability" configuration......

It's quite possible to provide a resilient network for these systems. You just have to be prepared for the higher number of network cards required to provide two (or more) 'legs' on separate network switches. It's not that difficult, and in the higher-end midrange systems, we've been doing just that for decades. And of course almost all major components of these systems will be 'hot' swappable.

Also, remember that the I/O is virtualized, so not all the LPARs have their own network cards; it is done through the Hypervisor to dedicated I/O servers (in Power land we call them VIOS, or Virtual I/O Servers), and using multiple instances of these, you can come up with some hugely resilient solutions.

Of course, they don't give you physical or geographic isolation, but that is another question entirely.

Peter Gathercole Silver badge

Re: Savings modes

I don't think that it is 8-12 times the power in terms of raw compute, but one of the things you may note with many VM technologies is that you end up with a lot of the CPU sitting idle a lot of the time. Most stand-alone, and many VM, systems over-provision each VM to make sure that they can service the peaks in demand.

The VM and Linux implementation on IBM Mainframes is very, very good at making use of currently unused capacity, as long as you mix your workloads accordingly. It is even better than IBM's PowerVM implementation, which allows unused CPU resource in one LPAR to be 'borrowed' by other LPARs when needed. With Mainframe, the CPU time is not even provisioned to the LPAR unless it is needed.

This means that you can essentially thin-provision your systems, but still get the same amount of work done.

It seems to me that the current wider thinking in system design is that you don't alter resource allocation to VMs on demand, but that you just spin up more VMs to cope with the load. Well, the Linux VM mainframe implementation can do this with astonishingly fast start-up speeds, but it is more normal there to allow the VM itself to have more resource allocated dynamically at peak demand. This hugely simplifies the application design, as you can keep to a single-server application deployment, eliminating all of the load balancing and synchronisation technologies that you need with a distributed implementation, and saving the resource that they require. And with the high availability, you don't even need to go wide to provide resilience.

The one area where this dynamic resource allocation falls down is that per-CPU software licensing struggles to cope with such a model.

NASA, Lockheed Martin reveal subtly supersonic X-59 plane

Peter Gathercole Silver badge

Re: Semitic Plane

I am no aerodynamics expert, but I remember seeing pictures of some of the shock-wave patterns that were coming out of wind tunnel experiments in the mid-'60s when they were doing research work on Concorde.

One of the physics text books I used at school, "Ordinary Level Physics" by Abbott (a big blue or green book, depending on the edition), had a picture of the shock-wave pattern of a model of Concorde on the cover. (It was topical; I was using the book from 1971, between Concorde's first flight and the first commercial flight.)

What this shows is that Concorde, like most supersonic aircraft, has one major shock-wave which has the nose of the aircraft as its origin (there are actually other, smaller shock-waves from the wing roots and tail fin).

What I believe that this exercise by NASA is trying to do is 'smear' the shock-wave along the length of the nose, so that instead of one sharp shock-wave, there will be a myriad of smaller ones, spread out over a longer period of time and distance. This is why it will appear as a 'thump' rather than a 'bang'. There is probably the same energy in the shock-wave, but it is spread out. I don't fully understand why this would be the case, but these people are much cleverer than me.

One thing that many people (but not here, I'm sure) get wrong about the 'sonic boom' is that there is not just one bang as the aircraft transitions from sub-sonic to super-sonic; the boom follows the aircraft all the time that it is flying over the speed of sound. You only hear it as a single boom as the shock-wave passes you (although you may also hear echoes of it).
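For the quantitatively minded, this is the standard textbook Mach-cone relation (not anything from the original wind-tunnel work): the shock trails the aircraft as a cone whose half-angle depends only on the Mach number,

```latex
% Mach-cone half-angle \mu for an aircraft at Mach number M:
\sin\mu = \frac{1}{M}
% e.g. at Concorde's cruise of M \approx 2,
% \mu = \arcsin(1/2) = 30^{\circ}
```

so the boom carpet sweeps along the ground behind the aircraft for the whole supersonic leg of the flight, which is exactly why it follows the aircraft rather than happening once.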

The other thing is that, as far as I am aware (I've never travelled in a supersonic aircraft), nobody on the aircraft can hear it (something Thunderbirds got really wrong), because the aircraft out-runs the bang; it's travelling faster than the sound. Anything you may hear on the aircraft is more likely to sound like a continuous roar, but most of the energy will be projected outward.

One person's shortcut was another's long road to panic

Peter Gathercole Silver badge

Re: “..” is vital

I contend that they are not redundant in UFS, as they are a critical part of the design of UFS and derived filesystems. You could not remove them without fundamentally breaking UFS. What you say may be true for non-UFS derived systems, but that is another argument.

It seems to me that if you didn't have the concept of a link to the directory above your current directory, you would have to keep a record of the full path to the current directory in every process. If you didn't, finding out where you are on a hierarchical filesystem would be pretty difficult without either making processes knowledgeable about the device their current directory is sitting on (together with being able to read information about that device from a user-land process), or scanning the entire directory tree whenever you need to know where you are.

Of course these things can be hidden in the syscalls, or maybe in the path resolution code, but for the time, keeping links to the directory above was an elegant and simple solution that kept the code less complex.

Another point at which ".." was useful (again, an archaic argument) is that it actually allows you to piece a filesystem back together more easily in the case of filesystem corruption. If you have a back pointer, then if a directory gets unlinked from its parent (for example by the parent directory file being corrupted/deleted), it becomes easier to look in the orphaned directory, get the inode number of the parent directory, and at least link it back in, even if you don't know the full name that it used to go by.

Again, in these days of more robust filesystems, this type of repair is less likely to be needed.

I know I'm talking like a dinosaur here, but you have to remember that when this was invented, the UNIX kernel had to fit in under 56KB of memory, and a similar restriction existed for individual processes. And changing it now would break things, even if you did do as you suggest, and change the path resolution code in the filesystem handling routines.

To me, your arguments sound pretty petty. It's not a huge cost, and if you don't want to use it, you can happily ignore it.

Peter Gathercole Silver badge

Re: old .* gotcha

I can understand that you may find "." a little useless, but ".." is vital. Without it, you would not be able to go up a directory in the directory structure in the shell. It will be used under the covers in all manner of other situations: as I said in one of my other posts, in ksh, typing "pwd" actually chases all the way back up to the top of the root filesystem, one directory at a time, using "..", identifying the name of each directory as it goes [actually, that probably uses the "." entry to get the inode number of the directory, to be able to obtain its name].
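A minimal sketch of that pwd walk in C, for the curious (illustrative only: a real implementation also compares device numbers when crossing mount points, and restores the process's original directory afterwards):

```c
/* The classic pwd algorithm: stat "." for its inode, then scan ".."
 * for the entry with the matching inode to learn our name, prepend
 * it, and repeat until "." and ".." are the same object (the root). */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    char path[4096] = "";

    for (;;) {
        struct stat here, up;
        if (stat(".", &here) < 0 || stat("..", &up) < 0) return 1;

        /* At "/", "." and ".." are the same inode: we're done. */
        if (here.st_ino == up.st_ino && here.st_dev == up.st_dev)
            break;

        DIR *d = opendir("..");
        if (!d) return 1;

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_ino == here.st_ino) {      /* our name, seen from the parent */
                char tmp[4096];
                snprintf(tmp, sizeof tmp, "/%s%s", e->d_name, path);
                strcpy(path, tmp);
                break;
            }
        }
        closedir(d);

        if (chdir("..") < 0) return 1;          /* walk up via ".." */
    }

    printf("%s\n", path[0] ? path : "/");
    return 0;
}
```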

But thinking this through, even when in the shell, I use the "." entry quite frequently. If you don't have "." on your path (as you shouldn't for security reasons), you can run a script in your current directory by using ./script_name. I use that literally all the time!

If you actually bother to dig into how the UFS filesystem works, having entries that point up in the directory structure is vital to its function by design. It's also worthwhile knowing how the link count shown by ls is affected by having sub-directories in a directory.

Once you get away from UFS and other POSIX-compliant filesystems, things may work differently, but the UNIX filesystem design was very influential on a number of different OS's filesystem designs over the years.

One interesting note. For a slightly obscure but quite well documented experimental distributed filesystem back in the early '80s called the "Newcastle Connection", also known as "Unix United", the developers invented another entry, "...", normally in the top-level directory "/", which allowed you to do something like "cd /.../machine/<path>"; this took you up to the super-root of the network, then back down into the filesystem of another system on the network. It was interesting in that it could be added to a system without any kernel modifications, just by replacing the C library which contains the stub code for the system calls, and linking your programs to that library.

I remember going to the computer lab in Claremont Tower at Newcastle University where they were developing it, and seeing my contact there write to a tape on another machine just by specifying a path to its tape drive entry in /dev through this mechanism (I also remember the complaints from other people in the lab, because the transfer dominated the Cambridge Ring network that was linking the systems together). It was like magic, and something that has only been possible with NFS since version 4, so much later (although AT&T RFS would allow something similar).

Peter Gathercole Silver badge

Re: moving back up across the mount point

But that is the point. Moving down into the filesystem, the permissions on the TLD allowed the access. But moving back up into the parent directory on another filesystem used the permissions on the underlying mount point, which had not been checked when entering the filesystem, and so denied the movement (you need "x" on a directory to move through it, of course).

As I said, it's a long time ago (more than 30 years), and my memory is more than a little hazy, but I'm pretty sure I remember the circumstances correctly.

It's not worth trying to replicate it, although I probably could see whether Bell Labs UNIX Edition 7 on my PiDP11 demonstrates the problem.

Peter Gathercole Silver badge

Re: where the NFS mount had failed

There used to be a rather strange behaviour in AT&T UNIX SVR2 (and possibly other versions) whereby descending down into a filesystem across its mountpoint through its top-level directory would use the permissions of the TLD of the filesystem, but moving back up across the mount point would use the permissions of the directory that was mounted over.

So, if the underlying directory had permissions of 0664 (drw-rw-r--) before the filesystem was mounted, but the top-level directory was 0775 (drwxrwxr-x), once mounted you could change directory into the directory at the top of the filesystem, but if you then did a "cd .." from the top-level directory, it would give you a "permission denied" error. This actually created some problems with a few library routines that chased the directory structure back up to work out the fully qualified path of a file or directory.

I actually had access to the source, and so I traced it through to work out what it was doing, and it was actually working as written. When I questioned my escalation path (I was working in AT&T at the time, but it was a long time ago), I was told it was working as designed. I think I was told a reason, but I really can't remember it now. As a result, to this day, I still make sure that the permissions on the mount point and the top level directory of a filesystem match, and give the desired permissions. I do not actually know whether this behaviour is still the case in either genetic UNIXes or Linux, but I do it anyway.

Linus Torvalds flames Google kernel contributor over filesystem suggestion

Peter Gathercole Silver badge

Re: VFS/Vnodes

I was an AT&T RFS user back in the late '80s, so I know that this plugged into the File System Switch (or, as Maurice J. Bach referenced it, the File System Abstraction, as FSS was used as the acronym for the Fair Share Scheduler in his book "The Design of the UNIX™ Operating System", and also in the AT&T Research and Development UNIX documentation). But I do remember an internal wall poster produced by one of the OS groups at either Indian Hill or Murray Hill in AT&T that showed the FSS.

I knew it existed, but as RFS was a UNIX-to-UNIX filesharing protocol, I always thought of the Switch as just a way of effectively adding remote references to the inodes from remote filesystems, rather than a full abstraction layer for different filesystem types. That may have been a misunderstanding on my part, because it also looks like it contained references to alternate system calls to handle remote filesystems. They certainly weren't called VFS and Vnode in Bach's book, and the term "Generic Inode" is used for what we now call a Vnode.

But when I read the VFS/Vnode description in AIX 3.1 documentation, and in the AIX internals course I took, this looked significantly different, and I was at the time talking to a very experienced Sun OS administrator and programmer (who had joined IBM about the same time I did) who implied very strongly that what IBM implemented was very much like the Sun OS implementation.

When the RISC System/6000 was first introduced, with AIX 3.1, the most common UNIX systems in the wild were from Sun, and IBM tried everything they could to make AIX interoperate with other UNIX systems, particularly Sun.

Peter Gathercole Silver badge

VFS/Vnodes

I remember when I first came across the concept of VFS and vnodes, when AIX 3.1 came out on the first RS/6000s. I *think* it was Sun who invented them (I may have seen them mentioned in the SVR4 developer documentation) to make NFS easier to implement on different system architectures, but at the time those systems were still mainly UNIX, spanning various UNIX filesystem types and mainly implementing UNIX/POSIX file semantics.

I thought at the time that it was an elegant way of abstracting different filesystem implementations, and I'm sure that it made things like AFS and DCE/DFS easier to implement, but it's understandable that things move on, and some of the complex object types in persistent storage do not fit comfortably in a traditional VFS/Vnode implementation, particularly many object storage systems in Cloud infrastructure.
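As a sketch of why it is elegant, the whole idea boils down to a per-filesystem table of function pointers that the generic layer dispatches through. The names below are illustrative only, not any particular kernel's real API:

```c
/* Schematic of the VFS/vnode abstraction. Each filesystem type
 * supplies a table of operations; the generic layer calls through
 * whichever table a given vnode points at, so UFS, NFS and friends
 * all look the same from above. */
#include <stddef.h>
#include <sys/types.h>

struct vnode;
struct vattr;                           /* file attributes, shape elided here */

struct vnode_ops {
    int     (*lookup)(struct vnode *dir, const char *name, struct vnode **out);
    ssize_t (*read)  (struct vnode *vp, void *buf, size_t len, off_t off);
    ssize_t (*write) (struct vnode *vp, const void *buf, size_t len, off_t off);
    int     (*getattr)(struct vnode *vp, struct vattr *va);
};

struct vnode {
    const struct vnode_ops *ops;        /* installed by the owning filesystem   */
    void *fs_private;                   /* e.g. a UFS inode or an NFS file handle */
};

/* The generic layer neither knows nor cares what is underneath. */
ssize_t vn_read(struct vnode *vp, void *buf, size_t len, off_t off)
{
    return vp->ops->read(vp, buf, len, off);
}
```

The point of the design is that adding a new filesystem type means supplying a new table, not touching the generic file-handling code at all.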

For decades, it has made UNIX and UNIX-like OSes work and feel very similar, but I now find that Linux is evolving towards something that is not really UNIX any more.

I definitely can see that it may make abstracting non-filesystem data into something like a file more complicated, but getting things in chronological order, those things have happened since the VFS concept was invented.

Peter Gathercole Silver badge

Re: One of these virtual ones @abend0c4

That's a good summary of how I understand it. Thanks.

At last: The BBC Micro you always wanted, in Mastodon form

Peter Gathercole Silver badge

Re: BASIC

Hmm. I did see that, but had forgotten.

Peter Gathercole Silver badge

Re: BASIC @Mage

I'm not sure that there really was an alternative to Basic in the '80s, especially on microcomputers.

I don't think that you really wanted to teach the likes of Fortran or Algol to anybody other than those who needed it to support other things, and if you ever used a compiler on an early PC without a hard disk, you will probably remember it being a frustrating process that often involved swapping floppies in and out of disk drives. Basic, often in ROM, was just so much more accessible.

Yes, there were better teaching languages around, but for a microcomputer aimed at low-ish spec use at home and school, you needed something quick to learn, without barriers to learning like compilers, but powerful enough to do 'interesting things' to engage merely curious people.

BBC Basic was better than most, although it could have done with WHILE/(W)END in addition to REPEAT/UNTIL, and also code blocks that you could condition more easily (both rectified in later versions of BBC Basic that weren't so limited by space in the interpreter). What it didn't really teach was efficient memory management and strict data typing in programs, but then a lot of languages intended for teaching glossed over these things to make programming more accessible.

I was involved in teaching Pascal to students at around the same time as the BBC became popular, and the kids really disliked it because of its constraints, which made writing pretty much anything in strict ISO Pascal a real pain in the neck. Strict Pascal was deliberately intended as a language that enforced good programming style at the expense of ease, which really turned off a bunch of students who were presented with it as a first language to learn.

I think the best teaching language I saw was PL/C, which was a teaching compiler for a cut-down version of PL/1. It explained where a syntax error was and even attempted to fix it for you (sometimes with amusing results), which was useful in a batch environment with a turnaround time of hours for jobs. This is what I learned on in 1978 when I went to Uni, but that was on mainframes, not microcomputers.

Peter Gathercole Silver badge

Re: CUB Monitor

Looking at the screen of a BEEB on a 26" LG HD TV through an RGB-to-SCART adapter shows that the picture quality of the BEEB can be very clear. It really feels wrong to see the characters so big!

Peter Gathercole Silver badge

Re: BASIC

Never saw a C compiler for the BEEB (well, I may have seen a TinyC compiler, but I can't remember much about it). I have two Pascal systems and a Forth (but not Acornsoft Forth) in my BEEB, or at least as ROM images for Sideways RAM. Other languages I saw were Comal and Lisp, and I'm sure there were more.

There is a thriving retro community producing SD card interfaces and Sideways RAM and Flash addons, and even emulations of second processors using a Raspberry Pi through the Tube.

I was involved in using BBC Micros for education in higher education in the '80s, and had some really wonderful 'toys' which attached to the BBC micro. My favourite was the Bitstik with a 6502 second processor, which allowed you to play a mean game of Elite. The Bitstik put the throttle on the twist of the joystick, which made interesting things possible.

Peter Gathercole Silver badge

Re: I still have the real thing

Still got my model B, quite early Issue 3 board. Don't have a Cumana drive, mine is a Viglen 40/80 drive.

I re-capped the power supply a few years ago, so I know that it still works.

For a moment there, Lotus Notes appeared to do everything a company needed

Peter Gathercole Silver badge

Re: Fully Loaded Goats

The whole concept of working offline while remote, and then syncing the databases when you could, was something fairly unique that I think Notes had at the time, and it is now no longer necessary because of always-on connectivity.

But when remote communications were typically a V.32 (or earlier) modem, with no ISPs giving Internet access, being able to dial up periodically through the day to synchronise the databases, which sent and received mail as part of the sync, was invaluable. You could set up an automatic replication to occur when you shut down the client, which mostly fixed the type of problem that you had, and if you had a good always-on connection, you could configure Notes to use the remote primary database as the sole repository and ditch the local copy.

As a user, I think Notes is functional, just different. When the world started implementing Outlook, Notes was pretty established, but people who came across Outlook first thought Notes was odd, whereas you ought to look at it as not-Outlook, and accept the differences.

As my use of email goes back a long way before any GUI mail agent (I was still using mailx - OK, the AT&T Toolchest mailx that handles attachments - into this millennium, not even using Pine or Elm), everything looked different, so I never really felt fully comfortable in any GUI mail reader.

I was using Notes up until about 2 years ago when my engaging client finally ditched it (in fact I still do have one environment that is still using Notes), and being a Linux user, I find that having to use Web Outlook is not something I enjoy. I have Evolution syncing with the cloud-based Microsoft mail server that this client is still using, but the authentication is tricky to set up, and frequently breaks when passwords are changed as it's tied into the SSO solution that the client uses which requires regular changes.

While we fire the boss, can you lock him out of the network?

Peter Gathercole Silver badge

Re: Dead mans shoes career progression

And he ended up being Emperor anyway....

Respect to all of the B5 actors who are no longer with us. You are missed.