* Posts by Peter Gathercole

3718 posts • joined 15 Jun 2007

Graphical desktop system X Window just turned 38

Peter Gathercole Silver badge

Re: Baby and bathwater come to mind.

I think you need to watch this in order to understand what the developers of Wayland were thinking.

I'm not going to watch it again, because it really sums up the attitude of the developers, and every time I watch it, I feel my blood pressure start to rise.

Peter Gathercole Silver badge

Re: Baby and bathwater come to mind.

I ought to point out that using Web technologies to render graphical applications is also one of the items on my hate list, alongside Wayland, PulseAudio and, dare I say it, systemd.

The browser has come a long way from its origins, but it was meant to handle largely static, mainly one-way communication. Extending that into an application display framework always seemed like an ugly fudge to me, and the bloat and inefficiency of our modern web browsers, which are now probably the biggest resource hogs on any workstation, just show how poor a decision that was.

Just imagine if X had been kept modern, with proper access control and encryption while keeping the efficient graphics transport across the network: application rendering in web pages would not have been necessary. And I believe that we would all have been much better off.

Peter Gathercole Silver badge

Re: What I like about X

Well, I'm going to argue against form here.

Most applications do not deal with X directly now (and they really haven't since the early days of widget sets). They use various toolkits and graphics libraries that isolate them from the underlying rendering infrastructure.

Provided that you can do a complete re-implementation of the toolkits to use Wayland, in theory you don't need to change the applications. Just re-link the application objects with the new toolkits, and off you go. You may even be able to do this with dynamic linking at run-time, rather than compile time.
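As an illustrative sketch of that last point (not from the original post: GDK_BACKEND is GTK's real run-time backend switch, but the "application" below is just a hypothetical shell function standing in for a toolkit-linked binary), the rendering backend can be chosen at launch with no relink at all:

```shell
#!/bin/sh
# GTK reads GDK_BACKEND (Qt reads QT_QPA_PLATFORM) when the toolkit
# initialises, so one dynamically linked binary can target either
# display system. run_app is a stand-in that just reports the choice.
run_app() {
    echo "rendering via: ${GDK_BACKEND:-default}"
}

GDK_BACKEND=x11     run_app
GDK_BACKEND=wayland run_app
```

The application code is identical in both runs; only the toolkit's start-up decision differs, which is exactly the isolation being described.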

I know this is an idealistic argument, and that some applications may still need to be a little knowledgeable about the way that X works, but for most desktop-type applications, that is probably not the case, and other, more graphics-intensive applications use a variant of GL to render complex objects anyway.

I'm not arguing for Wayland here, just pointing out that it is possible.

Peter Gathercole Silver badge

Re: Secure network connection @steelpillow

Well, you've got the client/server bit the right way around, but generally speaking, a client on the remote system opens a session to the server, not the other way around.

Of course, you may be thinking of starting applications from the window manager, but it is possible to run the window manager itself on the remote system (as is often the case if you are running an X-terminal or an X Station). It's even possible to run a remote window manager in a managed window inside your local X server (as a nested window manager).

X does not have a mechanism for the X server itself to open a new session. A client can (and the window manager is just a client, after all), and that other client has to have a mechanism outside of X to do some form of RPC if it runs on a different machine. The X11 protocol is just a transport, not an RPC mechanism.

When I was working in a purely X environment, it was normal for a menu item in the local window manager to run a command on a remote machine using SSH, or some kerborised RPC, and fire up a client session on the remote system back to the local X server. It's as secure as the RPC tool you use. I presume that in your scenario, the admin had a secure method of getting from their workstation to the remote server as well.
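A minimal sketch of that pattern with today's tooling ('buildhost' is a placeholder hostname, not from the original post): ssh both runs the remote command and provides the encrypted tunnel back to the local X server.

```shell
# Start an X client on a remote machine, displaying on the local X
# server through an encrypted tunnel ('buildhost' is a placeholder):
#   ssh -X buildhost xterm
#
# -G prints the options ssh would use without actually connecting,
# which confirms what -X switches on (should show "forwardx11 yes"):
ssh -X -G buildhost | grep -i '^forwardx11 '
```

On the far end, ssh sets DISPLAY to point at the tunnel, so the remote client needs no configuration at all.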

X Windows was written in a more trusting age. There were glaring security holes. Xauth and cookie support was added to get some degree of access security, but for years now, people have tended to use X11 sessions through SSH tunnels, so the X protocol is encrypted across the network and SSH performs the authentication.

It would have been possible to extend cookie support to include better cryptographic authentication, and an encryption layer in the X protocol itself would have been pretty easy to add, but it never was, presumably because SSH tunnels seemed good enough.

After all, it fitted into the UNIX way quite well.

Peter Gathercole Silver badge

Re: "Wayland's privacy controls..."

I think it means that you can't just plug in between the application and the display, as you easily can in X11 (reparenting just one window, or even the whole screen by inserting between the display manager and the root window), although this always was a bit of a security issue if done incorrectly.

This might make screen sharing a little more difficult, but any application will still be able to record its own session in the application itself; that won't change.

Peter Gathercole Silver badge

Baby and bathwater come to mind.

"One of the central functions of X is that it works over a network connection, something that Wayland by design does not do"

This is THE deal-breaker for me. Beyond just using a local display, this is the core X11 function that I use, and I use it quite frequently (last weekend, I actually had VirtualBox running on a remote system, using X to display it where I happened to be). It still works reasonably well, but some applications wanting to talk to the display via D-Bus or one of the other in-system communication methods do not work (so why require it in the applications?)

XWayland and the various types of VNC or RDP do work, but I find them obnoxious: jerky, prone to blocking, and, as they are basically remote-control applications for the console of the remote box, just a broken idea for multi-user systems. I've not tried waypipe, but if it works like SSH tunnelling of X, it may be OK. IMHO, for everything I do, X11 works far better, and can cope with different display resolutions on server and client (whichever way round you want to see this). I also work with non-Intel legacy systems, and for those X11 is still a must.

This really is a case of throwing the baby out with the bath water.

Back in the last century, I worked in an environment where we had powerful centralized servers with IBM X Stations on people's desks. It worked really well for the majority of what needed to be done, and passably well for most of the other tasks, with the exception of running Windows applications (which was not a huge issue at the time). It was easy to administer (just a handful of servers serving a community of 100+ workstations), and meant that people could pretty much work on any desk without having to move any hardware around.

I regard the move to desktop Windows workstations and now laptops as a really regressive move.

Original Acorn Arthur project lead explains RISC OS genesis

Peter Gathercole Silver badge


The two things that RiscIX needed that the A310 did not have were a hard disk and more memory. I think it had all of the rest of the required facilities (I don't think that Ethernet was essential).

These were things that most UNIX systems needed, although the amount of memory required would have been more on a RISC processor than a CISC. At this time, MC680X0 based UNIX workstations would probably have had 1-2MB.

It would not surprise me if, using a SCSI podule and disk and aftermarket memory expansion boards (the A310 had space for two podules, I think, as long as the riser was fitted, so Ethernet could have been provided as well), it could have been made to run, but it's unlikely anybody would try in this day and age.

Totaled Tesla goes up in flames three weeks after crash

Peter Gathercole Silver badge

Re: Deja vu again @nobody

And they will then want to pass the costs on and charge a disposal fee for scrapping vehicles.

Scrap yards are businesses. The scrap value of the materials must exceed the costs of breaking them for the businesses to be viable. If the value of the scrap materials does not, they will want to charge for the disposal of the vehicle.

And if you can't afford to scrap a car, I can see a lot of accidental vehicle fires at the end of many vehicles' lives.

Record players make comeback with Ikea, others pitching tricked-out turntables

Peter Gathercole Silver badge

Re: Stop wasting you’re time! @Fender strat

Unfortunately, I can't actually afford the Les Paul Custom I would like, but, like you, I do have a Columbus knock-off of one.

It's really a bit of a decoration now, as I am mainly a classical player when I do play, but I did have a bit of a go at improving it as best I could, as something to do. You can't really put lipstick on that pig, though (it has a ply body with a moulded, not carved, top, and a bolt-on neck).

I stopped when I realized that I would have to reposition the bridge to be able to get it to tune properly (on later-model Les Pauls, Gibson angled the Tune-o-matic bridge to effectively lengthen the lower strings, which then allows the correct fine adjustment using the bridge adjusters), but this knock-off has a bridge parallel to the nut. It plays OK, but it's not great, and the bottom E is never perfectly in tune at the higher frets. Lighter strings and reducing the action helped, but it just won't tune perfectly.

When I was at college, I played a 12-string in a band where the main guitarist played a Columbus Les Paul Custom, the same model (but not the same instrument, unless by incredible coincidence), which suffered the same problem. I understood guitar set-up less well then, but now I know what the problem was.

Peter Gathercole Silver badge

Re: That vinyl sound

If you can learn to listen for the sonic artifacts, then that means that they are there, and the music is not the same.

Seems to me you've argued against yourself there.

Peter Gathercole Silver badge

Re: It gets more fun... @AC

I can tell where you're coming from. I've seen so many sets of speakers where the suspension of the drivers has rotted away over time, and now the cones are almost free-floating, only held in place by the secondary suspension close to the voice coil.


I have three pairs of vintage speakers: Keesonic Kubs, Mission 760s and Wharfedale Diamonds (not sure which version, but quite early ones), and none of these have physically damaged cones or suspension, and I've previously had others as well. The Keesonics I've owned from new (bought 1979), and they still sound good to my ears, even compared to the more modern speakers that I've bought and since got rid of.

I've not tested the rigidity of the cones, but they still seem pretty stiff.

Not all power supplies in vintage amps suffer the problems you talk about. Many of them (such as my NAD's) were designed to supply high currents in bursts to allow for low-impedance speakers, and anyway, I never listen at antisocial volumes that challenge even the modest 20 watts RMS the amp's rated at. I wonder whether modern switch-mode power supplies, although on paper technically better, can actually do the same.

It's also been re-capped.

The effect of time on semiconductors seems to generate much discussion, but from what I read, provided the transistors are not run at the edges of their thermal envelope, they should perform pretty well for several decades. It actually seems that there may be a faster degradation of modern devices because of the tight integration, packaging and being run much closer to their limits because we 'understand' the material properties better now.

So don't discard the best of the vintage stuff. It still can be good.

Peter Gathercole Silver badge

Re: Digital transmission?

Damned auto-correct. Rumble is inaudible, i.e. can't be heard!

Peter Gathercole Silver badge


Techmoan on YouTube has reviews of several linear-tracking turntables.

Peter Gathercole Silver badge

Re: Stop wasting you’re time! @Fender strat

With the deliberate intention of starting a completely different flame war, would that be an early-'60s Strat, one of the absolutely terrible ones from the '70s, or one of the posey signature editions that they like turning out now?

And is a US-made one better than a Mexican one? Or how about a Japanese one vs. a Chinese one (all made by Fender)?

My vote would be for a Gibson Les Paul Custom from the late '50s.

Peter Gathercole Silver badge

Re: Digital transmission?

This is a very nuanced area.

I have 45 year old vinyl that has been kept reasonably well, and it still plays fantastic.

My HiFi is fairly vintage, and all pretty budget (but good budget), although components and parts have been replaced over the years.

The turntable started life as a Pro-Ject Debut II in about 2001, but has had a new motor suspension which seriously reduced the rumble (made it almost inaudible at normal volumes), has had the arm from a Debut III fitted, and the cartridge replaced with my vintage Ortofon VMS20E (unfortunately using a quality aftermarket stylus, as the original is no longer made). The most recent change was replacing the heavy rubber mat I used with a 6mm acrylic disc, which has made a huge difference to the accuracy of the bass.

For CD, I used to use a Technics CD changer, and at that time, I felt that vinyl copies of the same album were clearer than the CD. But I discovered two things. When I replaced the CD player with a vintage Marantz one, with a Cambridge Audio external DAC, the sound playback from the CDs jumped in quality using the external DAC, even compared to the built-in DAC of the Marantz. The other thing is that modern CD pressings, especially 'remastered' ones often sound terrible compared to the vintage CD pressings. And many modern albums sound really bad as well, mainly because the levels are set so high that sometimes they clip, and they rarely use the full, much greater dynamic range of CD (everything is loud, nothing is quiet).

Yes, vinyl is a flawed medium. Yes, the quality of the turntable is important, and the cartridge and stylus even more so (finer styli sit more deeply in the groove, and are more immune to surface scratches, but suffer more from debris in the bottom of the groove). Yes, badly kept vinyl suffers from dirt and damage. Yes, the dynamic range of vinyl is lower than CD's. Yes, there is distortion caused by the non-linear path of pivoted tone arms. Yes, the tracking speed of vinyl changes from the outer to the inner grooves.

But even given all of this, vinyl can still sound superb, and many respected brand CD players can mangle the music, and modern audio engineering and production can misuse the supposed better capabilities of CD just as badly.

BTW, I bought one of the cheap (£89) Dual manual turntables from Lidl a while back, just to see what it was like (it's not the Dual of old, however, merely a badged Chinese TT, which is available under several names), and I was pleasantly surprised with the quality of playback, especially when the felt mat was replaced by my heavy rubber one. The original cartridge was quite good (an Audio-Technica AT3600L, which has had very good reviews for a rock-bottom budget cartridge), and it takes better cartridges quite well too. Only the poor initial set-up of the arm and the built-in phono pre-amp let it down. Turning the pre-amp off and feeding it into the phono input of my NAD amp cured nearly all of the problems. This shows that there is still a cheap way into vinyl.

Bipolar transistors made from organic materials for the first time

Peter Gathercole Silver badge

Re: Gatekeeping @My-Handle

When I first read your use of 'scale', I was thinking volume production.

But it is really size that is the issue.

It is possible to create single devices (transistors or diodes) using technology that is available in the home; after all, the labs that originally created transistors weren't that much more sophisticated than a modern-day school physics lab.

But these devices will be slow, unreliable, and very large, just as they were when they were first made.

Even the scale of integration that went into 7400 series TTL (commonly used chips from the '60s, and still used in some forms today) would be beyond all but the most dedicated home enthusiast. The requirements to mask, dope and sputter the gates in silicon, even at the scale of the original TTL, require you to work on dies a few millimetres across. It's not impossible, but well beyond most home enthusiasts. And those devices typically have a couple of dozen transistors and a very simple design with only a few layers.

When I was studying electronics at university in the early '80s, we had the facilities to build chips in the semiconductor fabrication lab there, but the process was quite involved, using optical lithography, and could only build devices with a few thousand gates. We undergraduates did not get access unless we did a relevant final-year project.

Now look at more complicated chips, with millions of gates, and either the area you need to build the gates becomes way too big to handle conveniently, with the associated signal propagation, power and heat problems, or you have to start working at a much smaller scale.

Even if we were able to print organic semiconductors, you would need something much better than your average ink-jet printer to deposit the materials, although that is what they do for large panel OLED screens nowadays, if I understand the technology correctly.

Know the difference between a bin and /bin unless you want a new doorstop

Peter Gathercole Silver badge

Re: tar(macadam) joy

I don't believe this. Tar will overwrite files that match a file in the archive, but won't remove any files that aren't in it.

If you back up an empty directory, on restore, it will check whether the directory named in the archive exists, and if it does, it may change the permissions back to what is in the archive, but it won't delete it.

If you used rsync, things may have been different, but tar? No.
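A quick sketch (hypothetical filenames) that demonstrates the behaviour being described: extracting an archive overwrites matching files but leaves files that are absent from the archive untouched.

```shell
#!/bin/sh
# Show that tar extraction overwrites matching files but never
# deletes files that are not in the archive.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir data
echo v1 > data/kept
tar cf snapshot.tar data          # archive knows only about data/kept
echo extra > data/not-in-archive  # created after the backup
echo v2 > data/kept               # modified after the backup
tar xf snapshot.tar               # "restore"
cat data/kept                     # back to v1 (overwritten)
ls data                           # not-in-archive is still there
```

An rsync with `--delete`, by contrast, really would remove `data/not-in-archive`, which is the distinction the post is drawing.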

Peter Gathercole Silver badge


And what do you do about case sensitivity?

I once had to stage an AIX service pack through a Windows server (using Windows as your gateway system is a REAL PAIN), and it squashed the case of the file names, so that when they arrived on the destination AIX system, not only did the files have the wrong case in their names and wouldn't install, but two of the files differed only in the case of some characters, and one overwrote the other!

Nowadays, I tar or PAX the files together, bsplit (or split -b) the resultant huge files (did I say that I use Linux on my workstations?), and send them through like that (as long as you make the files a multiple of 4 characters long - some Windows comms packages [like the Windows FTP server] silently pad files out to the nearest 4 characters - did I say using Windows as your gateway system was a pain?), and then re-assemble the files at the other end and unpack.
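A minimal sketch of that workflow (hypothetical filenames; fixed 32k chunks also keep every part a multiple of 4 bytes, sidestepping the padding issue):

```shell
#!/bin/sh
# Stage a large tar through a hop that mangles big files: split into
# fixed-size chunks, ship them, then concatenate and unpack remotely.
set -e
work=$(mktemp -d); cd "$work"
mkdir pkg
dd if=/dev/urandom of=pkg/fileset bs=1024 count=100 2>/dev/null
tar cf pkg.tar pkg
split -b 32k pkg.tar chunk.       # chunk.aa, chunk.ab, ...
mkdir remote && cp chunk.* remote/ && cd remote
cat chunk.* > rebuilt.tar         # suffix order preserves byte order
cmp rebuilt.tar ../pkg.tar && echo "reassembled identically"
tar xf rebuilt.tar
```

The suffixes split generates sort in the right order, so a plain `cat chunk.*` reassembles the archive byte-for-byte.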

Now some of you may say "Why are UNIX-like OSs case sensitive?", to which I reply, why is Windows not? In my view, having case-insensitive file names is the real limitation, not the other way round (I had this discussion with D. Evnulll, the pseudonym of the writer of the Hands On UNIX column in Personal Computer World magazine, years ago, and he really showed his colours as a migrant from another OS).
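For illustration (hypothetical filenames, assuming a case-sensitive Linux filesystem), here is the clobbering failure mode from the Windows gateway story reproduced in a few lines:

```shell
#!/bin/sh
# On a case-sensitive filesystem these are two distinct files; a
# case-squashing hop makes one silently clobber the other.
set -e
d=$(mktemp -d); cd "$d"
echo first  > Install.sh
echo second > install.sh
ls *.sh | wc -l     # 2 distinct files on a UNIX-like system

mkdir hop           # simulate the case-squashing transfer
for f in *.sh; do
    cp "$f" "hop/$(echo "$f" | tr 'A-Z' 'a-z')"
done
ls hop | wc -l      # only 1 file survives the hop
```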

Peter Gathercole Silver badge

Re: We can do better than 8.3 these days, can't we?

As Adrian said above, the slots in the directory file were 8 characters and 3 characters, and the dot was not stored but implied by the OS.

If you look at that other commonly used operating system of the time, UNIX, you could have extensions on file names, but there was nothing special about the extension: it was just part of the file name, and the dot was just another character in it. It was normal to use things like .c, .o etc., and you could have more than one dot as well (like the directory link ".."). Only a small subset of ASCII characters, including "/", were not allowed to be part of a name. Spaces, punctuation, non-printing characters et al. could be in file names, and have been causing problems for UNIX users for years, but I would not have changed it. One learned about the "-b" flag in ls.

IIRC, the UNIX filesystem shipped with Edition 7 of UNIX only allowed 14 characters in a file or directory name (an entry in a directory file was limited to 16 bytes, with two bytes used for the 16-bit inode number). That changed sometime before UFS on System V, but I can't remember when.

Filesystems based on the Berkeley Fast Filesystem also had the 14-character limitation lifted.

To check, I would have to dig through the code on tuhs, but I can't be bothered at the moment.

Peter Gathercole Silver badge

Re: We can do better than 8.3 these days, can't we?

Actually, as the dot was implied on the MS-DOS and DEC Files-11 filesystems, it only took 11 characters.

Peter Gathercole Silver badge

Re: Surprised that you weren't commenting on the limey use of "the boot"

In the context of computers, boot is a contraction of booting, which is itself a contraction of "pulling itself up by its bootstraps".

It's basically a chain of small programs, each loading a larger one, ending with the operating system itself.

I suspect, but I don't know for certain, that the term boot for a car actually originates in the term caboose, which was a storage compartment at the back of a horse-drawn carriage or coach (as well as other things).

I'm prepared to be shouted down about that.

But why trunk? That is a piece of luggage, an elephant's nose, or, when paired, worn by men as swimming apparel.

Atos, UK government reach settlement on $1 billion Met Office supercomputer dispute

Peter Gathercole Silver badge

Re: Weather Forecasting

Back in the '80s, ECMWF used IBM mainframes as the systems to collect the data. In 1978, I was at a presentation where I was told that NUMAC's IBM 370-168 was the same as the system they used as a FEP to the Cray 1 at ECMWF.

For the seven years I worked at the Met Office in Exeter, they upgraded their mainframes from a sysplexed z990 to a sysplexed z196 to a cluster of two z14s.

They use mainframes as data gathering and forecast distribution systems, not because of the computing power, but because they don't crash.

During a critical power problem while I was there, we had to do a load-shedding operation in order to prolong the critical environment as the UPS ran out of power (I can't remember why the diesel generators wouldn't start up, but we were just on the UPS). Surprisingly, the supercomputers were the first to be shut down, in the sequence test, collaboration, non-forecasting and then the forecasting computer (they used to keep two systems, one running the live forecasts, and the other able to pick up the live forecast but normally running research work - it's a bit different now since they installed the current Crays). The last system to be shut down was to be the mainframe.

The reason for keeping the mainframe, storage and comms systems running was that some data cannot be gathered later if it is missed.

Concerns that £360m data platform for NHS England is being set up to fail

Peter Gathercole Silver badge


Love your references to ITIL and PRINCE2.

Very good set of practices, very often paid lip service to, even though they were 'required' for government projects.

And I believe that both were defined after the mid-'90s that you use as a break point between good and not-so-good.

NATS is an interesting example. It's not fair to say that the system in use at that time could not be replaced. A like-for-like replacement could have been done quite easily. But the scope of the new system was so much greater than the original, and this is often the case with replacement projects. This, and the fact that the integration problems of the US based software with the different areas of ATC in the UK were more extensive than originally expected, plus the issue of the contract being moved from one contractor to another twice during the implementation all contributed to the late delivery.

Meteoroid hits main mirror on James Webb Space Telescope

Peter Gathercole Silver badge

Re: Disappointing

Given the speed and energies they're talking about, I really doubt that a shield would have been particularly effective. I'm sure that this was less of an impact, and more of a through-and-through, with the meteoroid passing straight through the mirror.

I'm wondering whether the L2 point will actually be busier than open space. After all, if it's a point of gravitational neutrality, will debris have gathered there since the formation of the Earth and Moon?

If there is more debris, maybe the life of this telescope will be short.

I love the Linux desktop, but that doesn't mean I don't see its problems all too well

Peter Gathercole Silver badge

Re: Choosing to choose

It really depends on how you define a 'workstation'.

It began to be used as a term for the so-called 3M systems defined at CMU, and became applied to Sun systems, which had large graphical screens, local disk and significant processing power. But even in the '80s, they weren't all UNIX. The Xerox Star systems weren't, and some of the machines from US educational establishments were non-UNIX workstations based around Lisp and Smalltalk.

Even the Apollo workstations did not run UNIX, although DomainOS was rather UNIX-like.

DEC's VAXstations appeared towards the end of the '80s, and if you discount the more powerful IBM PCs, IBM's main workstation offering was actually UNIX-based: the 6150 running AIX.

Most of the other workstation vendors, like Whitechapel, SGI, Torch, NeXT, Evans and Sutherland, and others, have disappeared from memory, followed by DEC, and Sun and HP are, or soon will be, ex-workstation manufacturers.

Peter Gathercole Silver badge

Snaps not (yet) essential.

Currently, you can still turn off Snap, at least up to about 20.04, and download the normal debs, but that will persist only as long as Canonical keep the debs in the repositories.

I dislike snaps intensely, mainly because I would like the output from "mount" to fit on one screen, and although I would accept them for niche or specialist application software, I can't abide them being used for packages that run the system, like GNOME or Firefox.

I know what they're for. I know that it makes it easier for software vendors to ship packages for a smaller number of target systems, and I applaud that facility for non-system packages, so Linux has another opportunity to pick up paid-for applications that may make it more acceptable for end users.

But for parts of the distro itself, they are completely inappropriate.
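For what it's worth, the mount-table flood can at least be filtered out of view (a workaround, not a fix - each snap is a squashfs loop mount, which is what these commands exclude):

```shell
# Hide the per-snap squashfs loop mounts that swamp the output of a
# bare "mount" on a snap-using system:
df -x squashfs            # disk usage without the snap images
mount | grep -v squashfs  # the mount table without them
```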

Oxidation-proof copper could replace gold, meaning cheaper chips, says prof

Peter Gathercole Silver badge

Love your off-the-wall posts

This was a while back now, I know, but I've been trawling your posts, and this one had to have a reply.

If you use very thin film aluminium interconnects, you will have to keep it totally oxygen free for the entire lifetime of the device, because although metallic aluminium conducts well, aluminium oxide is a good insulator.

If you use a thin film in the presence of oxygen, it could not be thinner than the depth of penetration of oxygen into the surface, otherwise the trace would become totally non-conductive!

Are you another instance of the amanfrommars AI, or AI simulation?

Wordle recreated in Pascal for the Multics operating system

Peter Gathercole Silver badge


The original standards-based Pascals, such as ISO and ANSI, did not have a real "string" type. They only had fixed-length character variables (which I believe could be addressed as one-dimensional arrays - I'd have to look it up to check) and individual character types. It was always difficult to handle what you would now call a variable-length character string, and there was literally no concept of a null-terminated string.

Many implementations of Pascal added these things, but the original standards were (deliberately) very limited.

The next time your program is 'not responding,' (do not) try these steps

Peter Gathercole Silver badge


But you've forgotten that vi is ex with a visual bolt-on (in fact on many real UNIX systems, ex and vi are the same binary!)

The fundamentalists use ed.
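For the fundamentalists, a scripted ed session (hypothetical filenames; the line-addressed s/// commands are the very ones ex, and hence vi's ':' prompt, inherited):

```shell
#!/bin/sh
# ed edits in batch from stdin; -s suppresses the byte-count chatter.
f=$(mktemp)
printf 'helo world\ngoodbye world\n' > "$f"
ed -s "$f" <<'EOF'
1s/helo/hello/
w
q
EOF
cat "$f"
```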

Sick of Windows but can't afford a Mac? Consult our cynic's guide to desktop Linux

Peter Gathercole Silver badge

Re: Touchscreens, Windows drivers (via headless ReactOS?), raising bugs and improvements.

Ah. Sorry. Rereading your first post, I see I misinterpreted what you were asking.

We get an appreciable number of posters here who take the attitude "Linux doesn't do exactly what I want it to, so I'm saying it's no good for anyone", and I tend to write replies like the one above to them.

As I said, I believe that Ubuntu, at least, should contain what you need for touchscreen systems, as long as the basic touchscreen driver is available, though I can't speak for Toughbooks in particular. But if you read my other posts, it's my position that device vendors should provide driver support, or at least documentation. You can't expect the Linux community to provide drivers for every piece of hardware, because there are more obscure pieces of hardware out there than there are Open Source developers to reverse engineer them and write the drivers.

As for using Windows drivers, the problem is that the OS API is significantly different between Windows and Linux. Although it is possible to encapsulate Windows drivers in an emulation layer to translate between the different APIs, there will be a degree of inefficiency when you do that. Whether it's possible to extend the Linux device API to make it more Windows-like, I don't know, but I suspect that it would be difficult to implement the Windows driver model natively in the Linux kernel, bearing in mind that its history is very different from Linux's.

Peter Gathercole Silver badge

Re: Touchscreens, Windows drivers (via headless ReactOS?), raising bugs and improvements.

Ubuntu has Onboard as an on-screen keyboard out of the box, but it may be that you have to turn it on to come up whenever there is an application that requires input (I'm looking at it running on a 20.04 system at the login screen on another computer as I type on this one).

I'm pretty sure that I used it on some strange HP tablet system that used a Transmeta Crusoe about 8 years ago, and it worked quite acceptably (although the rest of the system was too slow to be useful). It was at the time using Gnome 2 as the UI.

Once logged in, the touchscreen worked like any pointing device, although on that system, I did have some problems calibrating it.

I'm a bit uncertain about docking stations. Mine always worked when I used them, but I was not in the habit of removing and replacing the system on the docking station while it was powered on; I nearly always turned it off. Modern USB-C port replicators would not pose much of a problem, I wouldn't have thought.

Launchpad is a quite mature bug-reporting system, although it is really intended for reporting real bugs (rather than the usage problems that many new users think are bugs), and is not really aimed at new-user support at all. Unfortunately, many reports are closed without being solved, because they are neither replicated nor reported by other users. But if a problem is not affecting many users, it's probably not worth significant effort trying to replicate it, and there are normally comments about how others see the problem against the report.

BTW. have you, as an end user, tried reporting a Windows bug to Microsoft? Do you have any means of doing that?

The driver issue is probably one that affects almost all Linux distros, but there's probably not a lot of hardware that does not work at all using the drivers in the repositories. I don't think I've really found that much that hasn't worked for me (OK, there was one Pinnacle video capture system that I came across, but the vendor didn't provide Windows 7 drivers for that either!) What are you wanting? To use the drivers shipped with Windows? Have you tried NDISwrapper? It's sort of patchy, but can work on Intel-based systems. Using native drivers is best, though, and OK for the vast majority of ordinary users.

I think that your usage case might be a little different from that of most normal users. Linux really can work as a daily driver, as more and more people are finding out.

Peter Gathercole Silver badge

Re: Humorously Scare People Away

My wife has two systems with Ubuntu (currently 18.04) installed on them (a laptop that she uses for casual use, and a desktop with two screens for genealogy work). They both are configured for her to use Cinnamon, with a Windows XP skin on it (this goes back to when I first moved her off of her Windows XP system). The only time I get bothered by her not being able to use them is when they drop off the Mesh WiFi (which happens more often than I can explain!)

She doesn't really care what they are running as long as she can get to the Internet, read her mail, and sometimes print the odd house brochure (she thinks we're going to move sometime, I've not yet disillusioned her about why we can't afford the houses she's looking at).

Peter Gathercole Silver badge

Re: Control Your Own Upgrades

Are you just counting the ones that are on?

In this room, I have 4 on right now, another 2 in standby, a stack of 4 laptops that are usable but turned off at the moment, my PiDP11 and a BBC Micro ready to be turned on, and a new firewall with PfSense made more usable by installing many of the missing bits of FreeBSD waiting to be deployed. And I think there's a working Archimedes A3020 somewhere in here!

IBM ends funding for employee retirement clubs

Peter Gathercole Silver badge

Re: Saga

The clubs are subsidized by IBM, not owned by it. This is even true of the Hursley club (which maintains cricket and football pitches and tennis courts that are used by the village teams). The mailing list is subject to GDPR, and again is not owned by IBM.

The retirees clubs are probably set up broadly the same as any number of social clubs around the country, as private organizations, sometimes registered as charities, with the normal articles of association for a private club.

It's the entrance criteria that differentiate them from other clubs.

Strictly speaking, the Hursley club is an employees club, not just a retirees one, although people who were employed by IBM but have since left are eligible to join, regardless of when they left.

During the lockdown, the catering facilities for the whole Hursley site were shutdown. For the essential workers who were on site to keep the lights on for the systems there, the Hursley clubhouse was the only place to get food, so it was filling an important role that IBM UK as a whole benefited from.

Keeping your head as an entire database goes pear-shaped

Peter Gathercole Silver badge

Re: Backups

I briefly worked as a relief system admin while they recruited someone permanent in a company where the manager did that. It wasn't just turning a system off, he'd also sometimes just unplug random cables.

Fortunately, his idea was to test how the environment coped, and also to see how long it took us to identify what he'd done, in order to get it working again, rather than a full system restore test.

All-AMD US Frontier supercomputer ousts Japan's Fugaku as No. 1 in Top500

Peter Gathercole Silver badge

Misunderstood the fundamental concept of the Top 500 list

Ah. But that is where you made your mistake. The Top 500 is only the *known* systems.

If you choose to remain under the covers (at least, not submitting your results to the list maintainers), then you'll never be #1. Or anywhere on the list, really.

Even if you exist!

Is "Gwaii Haanas" a reserved section of your system for wildlife research, by any chance?

When management went nuclear on an innocent software engineer

Peter Gathercole Silver badge

Re: Next time

Those commando sheep were not clever. They were dumb. They tried crossing the cattle grid, and fell into it. Once stuck, one of the others (most likely a lamb) took the opportunity to walk over the top, and the others, being sheep, followed.

Sheep are stimulus driven. Take away a stimulus that causes one behaviour, and they will stop doing it. Like eating if they don't get the nerve messages to say that they are hungry.

I was once farm-sitting for my father-in-law while he went on holiday. We left one of the flocks in a field with clover in the grass for just one day, as per instructions. Two of the sheep ate and ate, and eventually got so bloated that they literally exploded (well, ruptured their abdomens) and died. Wool everywhere, although that may have been the crows attacking the carcasses. Very gross.

You tell me sheep aren't stupid!

'Sharp' chip inventory correction looms on horizon, warns investment banker

Peter Gathercole Silver badge

Re: Good news?

Components may become cheaper, but I doubt that finished goods will, or if they do, it won't be to do with the cheaper components, but with poor consumer demand (companies will really want higher margins on lower volumes, but may drop prices just to maintain share).

We've seen this type of thing before. Manufacturers have bumper years because of some external event, and then expect that subsequent years will be the same even after the event stops. We saw this with flat-screen TVs when TV switched from analogue to digital, and we're seeing it again because of the skewed market Covid caused.

It's interesting looking at sites like ebay to see what bargains can be had. Companies appear to be dumping systems into the second-user market because they've suddenly got more IT equipment of various types than they need. I've recently bought a couple of Windows terminal/lightweight clients (to become low-power *ix systems) and some monitors at knockdown prices.

Original killer PC spreadsheet Lotus 1-2-3 now runs on Linux natively

Peter Gathercole Silver badge

Re: Lotus 1-2-3 for UNIX

No. It was after the IBM buyout of Lotus.

Peter Gathercole Silver badge

Lotus 1-2-3 for UNIX

The list there is interesting. What is listed there are the IBM product numbers for the saleable products that a customer could purchase or license.

On it are various versions of Smartsuite for Windows, NT and OS2, but I suspect that as Lotus 1-2-3 for UNIX was deleted as a product long ago, it does not appear on that list, and Smartsuite was never available on UNIX. So it might be worth speculating that IBM may not have sold the rights for Lotus 1-2-3 for UNIX. There is something called "Smartsuite OEM subscription", but I guess I'd have to dig into archived ULETs and PLETs (IBM keeps them seemingly forever) to work out what that was.

Ah, there is also "ORG 1.1 & 123/W 4.01 RSVP". I suppose this could be it.

Of course, a lot depends on the small(er) print in the contract, because it may contain terms like "preceding products", or "associated works", but the devil is in the detail.

There was actually a version of 1-2-3 available for AIX, sold as a product and with Level 1 support from the support centre I worked in, but pretty much nobody bought it in the UK (I don't think I kept a copy of the installable package). It was predominantly text based, and ran on a terminal or terminal window, but I'm pretty certain that when running in X-Windows, there was a documented way of graphing data in another window (none of the embedded graphics that we now expect in things like LibreOffice Calc or Excel).

But this was fairly standard for spreadsheets on UNIX. In the late '80s I used Access Technologies 20/20 and Uniplex, and they were all pretty much terminal based, with extensions for charts and graphs.

With 20/20 (the spreadsheet), I used the multiple terminal emulations of Falco terminals with a Tek. 4014 emulation to be able to draw graphs on serial terminals. It seemed clever for the time, but would appear crude by today's standards.

Peter Gathercole Silver badge

Re: Word Star

AX25 confused me. I am well versed in serial terminals, and didn't recognize it. It appears to be an amateur radio variant of X.25, so I don't think that's what you meant.

As far as I am aware, DEC terminals mostly used RS-232, although it would not surprise me if there was a current-loop version sold sometime.

I think you may have been referring to the connector, but that would be a DB-25 connector, as that is what most VT100 and similar terminals had as their connector. DE-9 (what most people think as an RS-232 connector) only really became popular when it was adopted as the serial plug on PCs, and I think that was so they could fit it onto the backplate of an ISA card with a DB-25 for a parallel port.

It does appear that AX.25 ports may also have used DB-25 connectors, so that may be why you're confusing the two.

Microsoft sounds the alarm on – wait for it – a Linux botnet

Peter Gathercole Silver badge

Re: knock, knock.

The reason I said "even if that is possible" is because if the firewall/router uses port 22 itself, it will not be possible for another device to use uPNP to direct that to itself.

As I understand it, what happens with most IoT devices is that they open an outbound bidirectional session to a well-known server on the Internet, which becomes the conduit for communication to the IoT device. This is not a route into the IoT device from other servers, although it does rely on the function and security of that server on the Internet. This is not uPNP.

Similarly, using a VPN tunnel is not uPNP, and relies on additional security like a shared secret of some sort and some cryptography, and it requires an open port from the Internet through the firewall, possibly using uPNP. But this is extremely unlikely to be port 22, unless the tunnel is implemented using SSH, and even then it would not be sensible.

The reason why it would be nuts is because port 22 is the normal port for SSH connections, so is an obvious port to try to connect for an attack. Most times, uPNP ports that are used are non-privileged ports often numbered over 10,000, so unless the attacker knows the number in advance, the first thing they have to do is a port scan to identify open ports, and then decide on the protocol in use.
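As a sketch of what that "guess-the-port" reconnaissance step looks like, here is a minimal TCP connect scan in Python. The host, port list and timeout are all just illustrative; nothing here is specific to any particular router or botnet.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    This is the crude probe an attacker has to run when a service is
    hiding on a non-standard port instead of the well-known one.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Even at several probes a second, walking the tens of thousands of non-privileged ports takes a while per target, which is about the only protection an obscure port number actually buys you.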

Peter Gathercole Silver badge

Re: Dear Microsoft.

I would say that this is nothing to do with OS wars, and more to do with the attitude Microsoft has regarding their position in the market.

They have shown in the past that they've been prepared to spread FUD to try to dissuade people from using anything other than Windows. I know it came from the Ballmer era, but anybody who doesn't remember the "The Truth About Linux" advertising campaign in the noughties should probably refresh their memories.

Microsoft have form, and although they've been a little more accepting about Linux recently, the adage is that it is difficult for a leopard to change its spots.

So, yes, thank you Microsoft. But please try to give the full context and some comparisons to avoid FUD in future.

Peter Gathercole Silver badge

Re: knock, knock.

I keep getting the capitalization of uPNP wrong. I just seem to have a mental block on it!

Peter Gathercole Silver badge

Re: knock, knock.

Most of the IoT devices running old versions of Linux will be attached to LANs behind a NAT router. This makes it impossible for someone on the Internet to even get to them to try to brute force the root password.

The only exception to this is if the IoT device uses UPnP to knock holes in the firewall and NAT protection that the router provides. But they would be NUTS to open port 22 via UPnP, even if it were possible.

I suppose that it may be possible that they run SSH on a non-standard port, and ask that to be opened via uPnP, but I would be surprised if they even did that, and if they did, it would be a case of guess-the-port before you even start the attack.

Anyway, all sensible people turn UPnP off on their router, don't they?

I monitor inbound intrusion attempts on my home network (I have a full-port redirect to a Linux firewall, which has password login disabled in the SSH config, in case you ask). Just after the new year, I noticed an uptick of login attempts (at its peak about 100 a minute, from about half a dozen different source addresses) using a variety of user IDs and passwords. As a precaution, I switched off the port redirect, and I've not needed to turn it back on. But there was definitely something going on. Not sure if I was being specifically targeted, but that seems a little unlikely.
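For anyone wanting to disable password login the same way, the relevant lines in sshd_config look something like the following (the `prohibit-password` value is a matter of taste; key-based authentication must already be set up):

```
# /etc/ssh/sshd_config -- key-only logins, no password-guessing surface
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```

Remember to restart sshd afterwards, and to confirm a key-based login works from a second session before closing the one you changed it in.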

Logitech Pop: Stylish, portable, but far from the best typing experience

Peter Gathercole Silver badge

Re: Lego?

It's not a PC or Mac keyboard, but this Lego project really got my attention.

It's amazing what people will do to generate content for YouTube.

The sad state of Linux desktop diversity: 21 environments, just 2 designs

Peter Gathercole Silver badge

Not that long ago, I found the source for vtwm (twm with a virtual desktop), which I used to use in the early '90s, and compiled it up.

All I can say is that we really have grown to rely on some of the desktop integration stuff that modern desktops provide! I had to start moving the network configuration away from NetworkManager and back to the OS just to get the system onto the network (Hint. The trick is to make the network configuration available for all users).
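For the record, "back to the OS" here meant classic ifupdown-style configuration. A minimal static setup looks something like this (the interface name and addresses are placeholders for whatever your system uses):

```
# /etc/network/interfaces -- read at boot, up before anyone logs in,
# no desktop session or NetworkManager required
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```

The NetworkManager alternative is to mark the connection as available to all users, which brings the interface up before anyone logs in rather than tying it to a particular login session.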

When did we let the network and filesystem configuration be taken away from the OS?

GPL legal battle: Vizio told by judge it will have to answer breach-of-contract claims

Peter Gathercole Silver badge

Re: Buy monitors not TVs ? @man_iii

Probably not the case. There is probably an embedded microcontroller or an SOC in there somewhere. Remember, HDMI, especially with HDCP is a digital communication protocol.

Does it have on-screen menus? If so, it almost certainly has some form of digital controller.

So there will be an OS, but it is very likely that it is an extremely simple, event driven thing, but it will be an OS none the less.

And, of course, there is the digibox possibly gathering information on you...

Peter Gathercole Silver badge

Re: There Oughta Be a Law

I think that the probable indicator is whether a TV had a moderately capable remote control.

I've certainly owned TVs bought in the '80s where the channel selector was still pretty much mechanical, with latching buttons for each of the channels, thumbwheel tuners, and physical potentiometers for volume, brightness and colour.

But I would say that probably any TV with non-latching push-button adjusters, on-screen menus for tuning and picture controls, and remote controls which allowed all of the settings to be changed probably either had a microprocessor, or maybe an embedded microcontroller in the chipset.

But this does not mean that there was much of an OS. The controlling software was probably in real ROM, probably not even EEPROM. This means that to all intents and purposes, these TVs could be regarded as hardware.

I think that the real advent of TVs becoming computers in a screen started with flat-panel TVs, especially when digital TV became a thing. Because of the evolving DTV standards, it made every sense to make the TV a 'soft' device which could accept firmware updates, and there were probably multiple processors in the TV. The final clincher was when TVs became network connected devices.

Although it is almost certain that you can find exceptions, I would say that this first started in the early 2000s.


Biting the hand that feeds IT © 1998–2022