* Posts by Peter Gathercole

4213 publicly visible posts • joined 15 Jun 2007

HP: That print-free-for-life deal we promised you? Well, now it's pay-per-month to continue using your printer ink

Peter Gathercole Silver badge

Re: print-free-for-life plan was "an introductory offer,"

I have an Epson Stylus Photo 1290 (which replaced an 1160 because that had a failed power supply, which replaced an 880 which suffered permanently blocked nozzles) being used the same way, except that it's attached to the USB port of a NAS device rather than a router.

Only problem is that the Windows drivers are getting more difficult to install, but the ESC/P language is sufficiently generic to allow it to work with other Epson drivers that will still install, and most of my printing is done from Linux anyway.

Does make it difficult for my kids' Windows systems, though.

Think I'll look at setting up a Linux system, maybe a Raspberry Pi or even my old ASUS EeePC 700, as a PostScript print server to allow generic printing. That had previously been used to drive an HP LaserJet 1000 (a disgusting low-cost laser printer without its own imaging engine), so it probably still has most of the needed stuff installed.

X.Org is now pretty much an ex-org: Maintainer declares the open-source windowing system largely abandoned

Peter Gathercole Silver badge

Re: Then there's running an X session remotely.....

Maybe we spoke. I was in the ADG group of the AIX Systems Support Centre, although if you were using PEX and XGKS, you probably talked to Tim D. if you called in. He tended to grab those calls.

I remember that the accelerated line drawing of the Sabine did not match the strict pixel-by-pixel X11 specification.

Peter Gathercole Silver badge

Re: Then there's running an X session remotely.....

The problem with modern X applications working by throwing bitmaps around is actually a problem with the way that people write X client programs today, rather than a problem with X.

They've forgotten all of the shortcuts and efficiencies that were built into X to allow it to work efficiently over bandwidth-limited networks, as was the case when 10Mb networks were all you would expect to see.

The original font handling code, for example. It had its limitations, but by using the font definitions stored on the X server it allowed you to transmit text into remote windows extremely efficiently (a few bytes per character compared to full bitmaps for each character), especially once they built scalable font support into X. But the problem is that you were restricted to the fonts that the server had available locally. The official solution was to create font servers, so that when you wanted to use a font that the X server didn't have, it knew how to find and download it (although there were licensing issues with making fonts generally available on a network, but that's another story).

Instead, what people ended up doing was to compose a pixmap of the entire window locally on the client, so only the client had to have the font definitions, and then push the whole pixmap for the window across the network.

This is inefficient (actually X even has some efficiencies that allow you to just send the deltas), and was only acceptable because of the increase in network speeds.
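To make the contrast concrete, here's a minimal Xlib sketch (a throwaway illustration of my own, assuming libX11 and its headers are installed, built with something like cc demo.c -lX11). The XDrawString request carries only the coordinates and a handful of bytes of text, and the server rasterises it with its own font; the pixmap-pushing style noted in the comment would instead ship every pixel of the window.

/* Minimal sketch: server-side text drawing versus pushing whole pixmaps. */
#include <X11/Xlib.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* honours $DISPLAY, so the server can be remote */
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 400, 100, 1,
                                     BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, BlackPixel(dpy, scr));

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            /* Efficient path: the protocol request is only the string and
             * its position; the server draws it using its own font data.  */
            const char *msg = "Hello over a thin pipe";
            XDrawString(dpy, win, gc, 20, 50, msg, (int)strlen(msg));

            /* The bitmap-shovelling alternative would render the window
             * client-side and ship width x height x depth of pixels with
             * XPutImage() instead.                                        */
        }
        if (ev.type == KeyPress)
            break;                        /* any key quits the demo */
    }
    XFreeGC(dpy, gc);
    XCloseDisplay(dpy);
    return 0;
}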

Obviously, there are problems with highly graphics-intensive applications, such as video handling, but by running client and server on the same system and using shared-memory communication between them, this could be fixed. But then the 'let's throw the baby out with the bath water' brigade decided that the whole X protocol thing running locally was a waste of resource, and promptly started ripping large parts of the function out for the sake of efficiency, rather than making it work better.

Of course, abstracting the graphics acceleration layer on the local system to allow programs to directly use hardware acceleration is always going to be more efficient, but using an intermediary interface such as OpenGL - which the X.org implementation can use - can make this even more efficient.

And yes, there are security issues, but networking security issues have been fixed for other things, and would have been (and some have been) fixed as time passed. The computing world used to be a much more trusting environment than it is now, but people took the time to fix it.

X is an old protocol, but as has been pointed out, does things that the replacement systems simply can't do. I keep trying to explain to my wife that just because I'm not sitting in front of a system that is on in one of the other rooms in the house, it does not mean that I'm not working on that system. Most people just see the down-sides, and never the upsides.

I'm an old hack. I used to be one of the people handling X support for AIX in the UK, and I still use it frequently today, even between Linux systems. And I mean today, as I've been doing remote work on one Linux system and an AIX system at my home from my Linux laptop, and also using VNC to control a Windows system (I don't have RDP on the system, at least I don't think I have), and I can tell you the way X works is much more flexible and efficient, and more pleasant to use than VNC.

Linux 5.10 to make Year 2038 problem the Year 2486 problem

Peter Gathercole Silver badge

Re: Sigh... the K notation again.

Things like capacitors are now often marked as 4u7 (using the letter as a decimal point as well as a scaling factor), something I didn't realize until I started using miniature bead capacitors and surface mount components.

Having said that, I'm looking at the schematic for a NAD7020 HiFi receiver (circa 1977-1984), and I see C421 having a value of 2n2 (2.2nF), and R425 as 4K7 (4.7K Ohm), so I guess it has been used for a while. But it looks like different parts of the schematic were prepared by different people, because it's not consistent!
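For anyone who hasn't met the letter-as-decimal-point style, here's a toy decoder (a quick sketch of my own, not anything from a real CAD package; the function names are made up) that turns 4K7 into 4700 ohms and 2n2 into 2.2nF:

/* Toy decoder for the letter-as-decimal-point component notation. */
#include <ctype.h>
#include <stdio.h>

static double rkm_multiplier(char c)
{
    switch (c) {
    case 'p': return 1e-12;
    case 'n': return 1e-9;
    case 'u': return 1e-6;
    case 'm': return 1e-3;              /* lower case m is milli...      */
    case 'M': return 1e6;               /* ...capital M is mega          */
    case 'R': case 'r': return 1.0;     /* plain ohms, e.g. 0R47         */
    case 'K': case 'k': return 1e3;
    case 'G': return 1e9;
    default:  return 0.0;               /* unknown letter                */
    }
}

/* Returns the value in base units (ohms or farads), or -1 on error. */
static double rkm_value(const char *s)
{
    double whole = 0.0, frac = 0.0, scale = 0.1, mult = 0.0;
    int seen_letter = 0;

    for (; *s; s++) {
        if (isdigit((unsigned char)*s)) {
            if (!seen_letter)
                whole = whole * 10.0 + (*s - '0');
            else {
                frac += (*s - '0') * scale;   /* digits after the letter */
                scale /= 10.0;
            }
        } else {
            if (seen_letter || (mult = rkm_multiplier(*s)) == 0.0)
                return -1.0;                  /* one letter only         */
            seen_letter = 1;
        }
    }
    return seen_letter ? (whole + frac) * mult : whole;
}

int main(void)
{
    const char *examples[] = { "4K7", "2n2", "4u7", "0R47" };
    for (int i = 0; i < 4; i++)
        printf("%s -> %g\n", examples[i], rkm_value(examples[i]));
    return 0;
}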

Peter Gathercole Silver badge

Re: Linux kernel @Henry

Actually, it depends on what you define as POWER. The original RIOS processors released in 1989/1990 were 32 bit, and 64 bit was introduced with the PowerPC processor architecture extensions, with the PowerPC 620 and RS64 processors (as well as the APACHE processor from the AS400 people in Rochester) being the first 64 bit processors in the extended family.

The mainstream processors that have only ever been 64 bit since their inception were the DEC Alpha and Intel Itanium (although was Itanium really mainstream?)

But my point is that any code that has been or will be re-compiled on a more modern system before 2038 will have 64 bit time_t (and the associated C library calls) by default, unless someone takes great pains to prevent it. It is likely to only be binaries that have not been compiled which will have problems.

Of course, there may well be code that instead of using the system definitions of various structures and types, define them themselves, but that would have been poor programming that probably will rattle itself out whenever the code is ported between systems. It's always been poor practice since the very early days of UNIX systems to hard code system properties in your code rather than using the system defined types.
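A trivial illustration of the difference (my own sketch, not from any real codebase; the struct names are invented):

/* Use the system's time_t and the field widens automatically when the
 * code is recompiled on a 64 bit platform; hard-code a width and the
 * 2038 limit travels with the code forever.                            */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct good_record {
    time_t  created;    /* whatever the platform says time_t is          */
};

struct bad_record {
    int32_t created;    /* frozen at 32 bits: overflows on 19 Jan 2038   */
};

int main(void)
{
    printf("time_t here: %zu bytes; the hard-coded field: %zu bytes\n",
           sizeof(time_t), sizeof(int32_t));
    return 0;
}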

I mis-spoke about no 32 bit processors in 2038, but I would like to point out that many embedded processors probably couldn't give a hoot about whether they have the correct date and time. Not sure what would happen during the actual rollover, though.

It is interesting. I had an IBM 6150 AIX system (the one before the RS/6000), whose support ended in the mid-1990s, and I ran some quick checks on it in 1999, and I found that the only thing that didn't work properly was actually setting the date with the date command. Even the RTC that the system had worked properly. I don't think it would have coped with the 2038 rollover, but that system is now long gone.

Peter Gathercole Silver badge

Re: Linux kernel

I agree, which is what the last sentence was all about, and I also agree about the space in assigned structures.

But when it comes to filesystems, for example, there's been a bit of a tweak that allows the mounter code to identify whether the filesystem was created using 32 bit or 64 bit time_t. Provided you go through the OS to acquire the info contained in things like the inodes, it's possible to allow the system call to decide how to identify and present the data, keeping the function in the core part of the OS.

Anything that directly accesses this data without the OS's involvement would need special attention, however, as would any code managing its own datafiles.

But in the next 18 years, we will not be running 32 bit processors (support for 32 bit Intel is due to be removed from the kernel quite soon, if it hasn't been already), and I would be surprised if any system, or even code, now running will still be running when the time comes without at least a recompilation. It would be really clumsy system management not to have re-created filesystems before then either.

Because of the nature of the system call interface being changed and the way that dynamic linking works, x86 Linux is not quite as tolerant when running old code (if the version of a shared library changes on a system, quite often old binaries fail to load and execute) as some other UNIX variants (I ran a binary I compiled in 1995 on a 32 bit AIX 4.1.2 system on a 64 bit system running AIX 5.3 a few years back, and it still ran perfectly).

I worked through the 1999-2000 transition on UNIX systems, and know that in my first job in 1981/2 (not on UNIX), some of the code I created definitely would not cope with the two-digit year rollover (I did point it out, but I was just a junior programmer). I would be interested in knowing whether anybody had any problems with parking ticket fines in the Borough of Rushmoor around Y2K, because that is the main system I worked on (although I did also work on DLO) in the fairly miserable year I was there.

I will be retired by 2038, but I hope to be mentally able enough (and still interested) to be able to say "I told you so!"

Peter Gathercole Silver badge

Linux kernel

After much digging through the Linux include files, you can see that the time_t type on 64 bit kernels is defined as __SYSCALL_SLONG_TYPE, which appears to be a signed long integer. On x86_64, this is 8 bytes, or 64 bits.

It's been like this in the kernel for a long time (can't be arsed to go back through the kernel history).

On AIX (legacy UNIX, so who would patch that?), time_t has been directly defined as a long int since about AIX 5.1 (available before Y2K), and I'm pretty certain they carried that through into the filesystem code (this tends to happen automagically when the source is recompiled on a 64 bit system, unless explicitly turned off) by the types being defined in system-wide #include files.

So the kernel has been fixed on Linux and many UNIXes for a long time. There's been a range of tricks deployed to allow running 32 bit binaries to still pick up 32 bit time_t. This code will still break, but who is likely to be running binaries compiled for 32 bit systems in 2038? That would be real legacy code.
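If you want to check what your own system does, a throwaway test like this (my own sketch, nothing to do with the kernel sources themselves) will tell you:

/* Check the width of time_t on the build host, and what the C library
 * makes of a timestamp one second past the signed 32 bit limit.        */
#include <stdio.h>
#include <time.h>

int main(void)
{
    printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));

    /* 0x80000000 seconds after the epoch is just past the 19 Jan 2038
     * rollover of a signed 32 bit time_t; with 64 bit time_t it is
     * simply another date.                                              */
    time_t after_2038 = (time_t)0x80000000;
    struct tm *tm = gmtime(&after_2038);
    if (tm) {
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", tm);
        printf("0x80000000 as a date: %s\n", buf);
    } else {
        printf("gmtime() could not represent that timestamp here\n");
    }
    return 0;
}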

UK taxman waves through £168.8m Fujitsu contract because no one else can hold up 30-year-old infrastructure

Peter Gathercole Silver badge

Re: ICL

Ah. Thanks for the background. I knew I recognized VME, but I had forgotten that ICL effectively became the services arm of Fujitsu.

VME apparently runs on x64 servers, so I would not expect to see any Series 39 servers anywhere.

Excel is for amateurs. To properly screw things up, those same amateurs need a copy of Access

Peter Gathercole Silver badge

Re: Using a computer where pen and paper would have sufficed!

There is another side to this. Most companies do not have a general purpose database system lying around for users to use.

I've often wanted to create a small database at many of the organizations that I've worked at, mainly for use on one of my tasks, but sometimes requiring it to be shared. If we had had an Oracle, DB2 or other mainstream database instance that allowed users to create their own relatively small DBs, then I would have used it.

Being on the UNIX side of things (and never getting on with MS applications anyway), I've often ended up writing my own series of scripts on a GP UNIX system, using things like cut, join, sort, uniq, comm, and most importantly awk, when I would have been much better off with half a dozen tables in a proper DB, with SQL as the query language.

But the PTB (or should that be PHB) would never see the value in having a small real database installed in their standard image for the systems (even the Linux ones where it could be done for free), nor would they commit to a general purpose proper database on any shared system (in fact, since the advent of the PC, they've not even seen the benefit of a shared general purpose system!)

So I can see why some people would turn to Access and even Excel to meet their requirements.

Selling hardware on a pay-per-use or subscription model is a 'lie' created by marketing bods

Peter Gathercole Silver badge

Radio Rentals!

Radio Rentals' business model originally worked because radios, TVs and video players were expensive, and people could pay small amounts of money regularly over a long time to have what they could not afford to buy. In addition, valve TVs and early video players went wrong often, and the rental model also provided maintenance built in. People were happy renting things they could not afford to buy.

Their business model broke down when things that were expensive became cheap, these cheap devices didn't need constant maintenance, and people could borrow money and eventually own what they wanted rather than paying a similar amount and never owning it.

RR tried to diversify into white goods, but the same thing happened, and eventually there was no money in it and they shut down.

We've seen this in the computer world. Computers were expensive, and they were leased, rented or timeshared. They became cheap and small and companies could buy them outright.

Computing appears to be going the other way now. But as an analogy, so are cars (PCP), phones (plans paying for calls with the phone part of the deal), and even media (think: we don't buy CDs and DVDs any more, now we have Spotify, Netflix, Amazon Prime et al.)

Some of us don't want to own things. So why is it a problem for companies to want the same!

After ten years, the Google vs Oracle API copyright mega-battle finally hit the Supreme Court – and we listened in

Peter Gathercole Silver badge

Re: © me 2020

I think you might have to fight Brian Kernighan and the estate of Dennis Ritchie for that one.

Or was it in BCPL?

Peter Gathercole Silver badge

Re: wholly negative implications

Just a couple of points of clarity.

BSD used the Bell Labs. academic license internally to develop BSD (which was originally a series of add-ons and utilities to be merged into a Bell Labs. UNIX distribution). This meant that they would only distribute their modifications to organizations which already had a Bell Labs. UNIX license, which meant that in reality it was just academic institutions, as Bell Labs. and later AT&T did not want to be in, and later were prohibited from, commercial distribution of UNIX.

Where it went sour was when UCB started distributing complete UNIX systems (I guess this was with the 32 bit release, BSD 3.0 or 3.1 or whatever it was), and they tried to break the requirement for the recipient to have a Bell Labs./AT&T license, and AT&T took exception to this. In the resultant series of court cases for this and other things, AT&T appear to have become able to distribute commercial UNIX licenses.

Also, IBM had (and still has, as far as I believe, as they were purchased in perpetuity) AT&T UNIX SVR2 source and distribution licenses which allow derivative works, and I believe that they had also updated this for SVR4 some time back. This means that the older AIX versions 1.x and 2.x were ports rather than re-implementations, so would not be affected by any change. The same could be said for HP/UX, Solaris and Tru64 UNIX or whatever it is called now. There are many orphaned licenses from any number of now defunct UNIX system providers around somewhere.

I had sight of the AIX 3.1 (first release on the original POWER systems) source code at one time, and from what I saw, there was still significant AT&T code in it.

Lots has been re-written since then, but the fact that IBM have a license removes any ambiguity.

Oh how ironic it would be if there was an uptick in Solaris and AIX implementations if Linux validity came into question!

Back before I joined IBM in 1989, they were widely considered the enemy of UNIX, and I was very heavily criticized by my colleagues (I was working for AT&T at that time) for leaving the 'good guys'.

I also find it incredibly ironic that after all this time, IBM appears to be the 'last man standing' in the genetic UNIX field. Of course, Oracle and HP are still keeping a toe in the water, but only just.

Peter Gathercole Silver badge

Re: wholly negative implications

IIRC stdio.h appeared either in PWB or Edition 7 of UNIX (it certainly wasn't in Edition 6). Thus it was Bell Labs./AT&T code. But the re-implementation of it was decided in the AT&T/BSD cases, and exists in the various BSD-derived implementations, including Linux and GNU, because it had been deemed free of AT&T's copyright, even if it performed the same task.

Of course, the Oracle/Google case originally revolved around the literal copying of some or part of the files defining the API. If we take the UNIX cases as a reasonable precedent, then provided Google has made reparations for the initial literal copying of the files, and now uses its own compatible but different definitions, i.e. not a direct copy, I don't see Oracle's case.

I don't know how a Supreme Court ruling is likely to affect what in technology terms is ancient history, but if Oracle wins, and defines a precedent that can be applied to previous cases, then the AT&T/BSD case could be overturned, and the entire UNIX/BSD/Linux ecosystem could be thrown into turmoil.

I really don't know who owns this part of UNIX IP now. It would have been Novell, but their demise was sufficiently complicated that it could be Attachmate (or whoever bought them), MicroFocus or even (gasp!) Microsoft.

Teracube whips out cheap, fixable phone with removable battery and four-year warranty

Peter Gathercole Silver badge

Understand Louis's standpoint

"Right-to-repair advocates like Louis Rossman have long argued that many consumer technology repairs are pointlessly wasteful".

This is misleadingly vague. Louis argues that devices should be fixable, and that parts and schematics should be available to anybody who wants to fix devices. What he complains about is the way that people like Apple and more recently other device makers have made it deliberately difficult/impossible for devices to be repaired (and even in some cases designed to fail - or at least design defects not fixed and carried forward into later models), meaning that devices have to be replaced rather than repaired, even when repairs could be relatively simple.

Complexity has broken computer security, says academic who helped spot Meltdown and Spectre flaws

Peter Gathercole Silver badge

Re: Hmm... @doublelayer

Your comment about single task operating systems was correct about CP/M, mostly correct about RT-11, but completely wrong about RSX-11M. It was a multi-user, multi-tasking operating system, complete with memory address space isolation and a pre-emptive scheduler, together with real-time features.

But it really does help when the people designing the OS also had quite a lot of input to the hardware as well.

However, even RSX/11M was not immune from errors. It certainly was not 100% secure or bug free. The huge pile of patch bulletins that we had for the system I looked after definitely proved that.

BT cutting contractors' rates by a fifth and halving notice period because 'coronavirus'

Peter Gathercole Silver badge

Re: So do I re-negotiate my BT Broadband

My father was told that he had to accept an upgrade from copper ADSL to Fibre to the Premises, because they were upgrading the exchange and could not offer ADSL anymore. The upgrade was "free" (although strangely he ended up paying more per month). He really didn't need it, he was a very light user.

He then had to move into a care home, and when we asked about moving his phone line into the care home (it's something they're prepared to do as long as it's in the same exchange), they said that he could not have FTTP in the care home (Huh? Wasn't the exchange being upgraded?), and because they had to downgrade back to ADSL, this counted as a contract amendment, and Hey! that requires a 2 year minimum contract.

BT are quite happy about forcing contract changes when it suits them to the detriment of the other party.

It all became a bit moot, as he didn't live out the two-year minimum contract period.

Ancient telly borked broadband for entire Welsh village

Peter Gathercole Silver badge

Re: More to the point

I take your point about this not necessarily being a point source, but I believe that old televisions have transformers and inductors pretty much all over the place, so I suspect that it would be unlikely to be on the power side of things. I suppose it could be on the aerial input side of the TV, with a back transmission from the tuner, but this would be unlikely, as a really old TV would have to have a set-top box for DVB Freeview (no analogue TV transmission any more), so I would be surprised if the TV was still directly connected to the antenna.

If the TV was poorly earthed, then I suppose that the chassis could be acting as a transmitter, but not like an infinite plane.

So I would suspect that it would approximate a point source, at least once you get a distance away from it.

Peter Gathercole Silver badge

Re: More to the point

I have to admit that I find it difficult to believe that BT's broadband infrastructure could be knocked out by a burst of RF interference. I know that the overground wires are probably poorly shielded or even unshielded, so act as an aerial, but radiated RFI power falls off with the square of the distance.

Unless the whole village's infrastructure goes through a pinch point just outside of the house, it should not be enough to disrupt ADSL (I suppose that if there was a microwave point to point link, this could be more easily disrupted, but there is not one mentioned).

If they were talking about the WiFi being used to distribute the service around individual houses, then I might believe that could be easily disrupted.

Contractor convicted of pinching supercomputer cycles to mine cryptocurrency

Peter Gathercole Silver badge

Re: Economics 101

Really don't know. When the systems went out of the loading bay, my role and interaction with that team came to an end, and I never got to talk to them again.

Bit of a shame, they were really decent people to know, and literally the cream of IBM hardware talent with more than a century of design experience between them.

As a result, they've probably been RA'd by now!

Peter Gathercole Silver badge

Re: Economics 101

When I was involved with the support of the UK Met. Office's IBM Power 775 systems, and their eventual decommissioning about 5 years ago, there were several weeks where the systems had to be kept operational and fully functional as a backup for their replacement, but no forecasting jobs were being scheduled on the systems.

The system manager gave me carte blanche to run any jobs I wanted.

Unfortunately, being on the support rather than the application side, I did not have anything more than jobs to test various parts of the scheduler, but in theory I could have had significant time on two top 200 (they were still very powerful) and two smaller supercomputers to mine cryptocurrency.

The decommissioning was a wonder to behold. Apart from some initial preparation work scrubbing the volume data storage, all 4 supercomputers were still in an operational state at 08:00 on the Monday, and completely removed from the machine halls and in trucks by about 19:00 on the Thursday evening, with a day's contingency. All that was left was the cooling water pipework and power sockets.

And the decommissioning crew consisted of an IBM Fellow and more Distinguished Engineers than you could shake a stick at plus some field engineers, in total more than half of the engineering team for these systems from the US.

I think they had an incentive to complete the job early, as their flights back to the US were not until the weekend, and they ended up with an extra day doing the tourist thing!

Das Keyboard 4C TKL: Plucky mechanical contender strikes happy medium between typing feel and clackety-clack joy

Peter Gathercole Silver badge

Re: Not so surprising @dajames

My post talked about people entering numbers frequently as the first group that do use the numeric keypad.

When I was working at a council back in the early 1980's I worked next to the data prep. team that keyed data in from various slips and returns, so I know that people do use these keys. But they are probably now a minority of computer users.

Peter Gathercole Silver badge

Re: There's a word for that ...

Strangely, the number of keys removed appears to be 17, not 10 (although numlock still sort of exists as a shifted key).

So the name "tenkeyless" taken literally is inaccurate.

Peter Gathercole Silver badge

Numeric key pad?

I've never been sure quite how many people actually use the numeric keypad in day-to-day use.

I know I've seen people who deal with numbers all day long use them for rapid number entry, but that's about it (oh, and some games use them in cursor key mode to get diagonal movement), but I largely ignore the extra keys, and am quite happy working on keyboards without them (like now). Some programmers seem to like them. Do any of the IDEs have functions mapped to the numeric keypad, like EDT did on VT terminals on DEC minis?

Actually, the keyboard I'm using right at this moment is an IBM SpaceSaver II keyboard from a Series z9 HMC (when the mainframe was being decommissioned, they were going to throw it out, so I nabbed it!). This is like a normal full travel keyboard with arrow keys and the 6 keys above them in the right place, but with the numeric keypad completely hacked off. Oh, and a trackpoint. Not a buckling spring keyboard, but pretty good.

Peter Gathercole Silver badge

@ Mr Cumberdale Re: Perixx

Looked at these when I was trying to find space-saver keyboards for my little desk at home, but the lack of a UK Enter key and the location of "#" were the deal breakers.

Still not found a good compromise, but I'm using cheap reduced size (no bezel around the keys) 'boards until I do.

Ideally, I'd like to KVM between the two systems I want to use and put my Model M in the middle of the desk, but unfortunately work security rules prevent any devices being shared between the two systems I need to use (although they are both plugged into the network, but one has a dedicated VPN for all network access other than the encrypted traffic to the router.)

Alibaba wants to get you off the PC upgrade treadmill and into its cloud

Peter Gathercole Silver badge

Re: so, a 'Network Computer'?

Well, I was thinking about something a little more than a dumb (or even slightly intelligent) ASCII or EBCDIC terminal.

All of the examples I quoted allowed for some measure of overlapping windows with multiple sessions and some graphics capability, much as people do nowadays (although the Mac generation appear to like everything running fullscreen).

For a remote access terminal, you could go back to teletypewriters hooked up using EIA current loop, but these were hardcopy devices. I think that CRTs were being adapted as display devices in the 1950s, typified by the one which ran Spacewar! on the PDP-1 at MIT in 1961, but not very useful for text work.

Peter Gathercole Silver badge

Re: so, a 'Network Computer'?

So this would be the Acorn Network computer, yes (and running an ARM chip)?

But there is prior art!

NCD had X terminals in about 1987 (which may or may not have had the Display Manager running locally, depending on how they were configured), and it is not too far a stretch to get to the AT&T Blit in about 1983, although that was neither cheap nor compact.

It would not surprise me to find something from Xerox PARC knocking around at about the same time as well.

He was a skater boy. We said, 'see you later, boy' – and the VAX machine mysteriously began to work as intended

Peter Gathercole Silver badge

Re: Yes, those were the days - NOT

One of these days I will learn to type and spell correctly. Not holding my breath, though.

Peter Gathercole Silver badge

Re: solution?

At Claremont Tower in Newcastle Uni. in the late 1970s, their solution was to put yellow tape on the floor around the Ampex memory cabinet on the S/370, with dire warnings in big letters that anybody going into the exclusion zone would be ejected from the data centre sharpish.

I do not know whether it was physical movement or static, but the 370 did not like losing 4MB of its 6MB memory.

Actually, thinking about it, not that different from the social distancing measures being taken now.

Peter Gathercole Silver badge

Re: Yes, those were the days - NOT

In the 1980s, I looked after a Systime 5000E (repackaged PDP11/34A with 22 bit addressing bolted on) which had CDC SMD drives rather than DEC drives.

We paid an external data company to clean the platters of all of the disk packs about once every 6 months (normally during the academic vacations). This was mainly because we would regularly switch disk packs, as we ran 2 OSs (RSX-11M, and UNIX Edition 6 and later 7) at different times of the week.

The guy came in with a machine that would not only allow him to clean (with solvent and special large lint-free, giant-cotton-bud-like tools) but also check the balance and warp of the platters to try to stop head crashes. I think it earned us a slight discount on the system maintenance cost.

When they came in, they wanted the space they would work in to be deep cleaned, and insisted that the doors and windows were shut while they were working.

Our biggest bug-bear was heat. It was meant to be an office environment machine, but we found it required more ventilation than we could give it, and we anxiously watched the thermometers during the summer. Difficult decision to either open the windows and let dust in, or keep them shut and watch the temperature. The bean counters would not allow us to buy a window-mounted AC unit, which would have been the best solution.

Ireland unfriends Facebook: Oh Zucky Boy, the pipes, the pipes are closing…from glen to US, and through the EU-side

Peter Gathercole Silver badge

Re: About time too @Doctor Syntax

I'm a good IT contractor. I'm a lousy company administrator and director, as was proved by the 10+ years of trying and getting fined regularly by the Revenue and previously Customs for filing my returns late (I know, you can now find big panel accountants who will do most of the work, but that hasn't always been the case).

But around 10 years ago, I found that I could forego the marginal tax and NI savings (I was actually opposed to tax avoidance measures anyway, which is why my accountants didn't like me), and found it was just easier to be the employee of an umbrella. I'm a cop-out contractor, I know. My decision.

And now, with IR35 looming again, I'm smiling inwardly at the predicament of all of the contractors I know whose anxiety is ramping up again pending next April. I've already seen many friends leave contracts that they could have extended because they're so worried about what the Revenue will do, and the way that the IR35 change was crassly delayed last March was a joke.

Peter Gathercole Silver badge

Re: About time too

I was particularly annoyed when the umbrella company I use outsourced the authentication of their web portal to Facebook.

What this means is that I have another Facebook account (besides the bare account I keep to allow me to access services from companies that think FB is the only way to interact with their customers on the Internet) that I know very little about. I don't know exactly what is stored under it. I'm also a little uncertain about how outsourcing the authentication to a third party actually fits in with GDPR, and I don't remember explicitly agreeing to have the data transferred to FB, and I normally read the T&Cs (difficult when they're so long and boring) when I'm asked to. Maybe I should put a data protection request in to see.

Nvidia to acquire Arm for $40bn, promises to keep its licensing business alive

Peter Gathercole Silver badge

Re: Nvidia and Linux? @macjules

Yes, I know that Nvidia have in the past been a bit of a problem with regard to their graphics processors, but that particular clip is from 2012. Some things have changed: at least for some of their older GPU architectures they are providing some documentation, and they do have half-decent binary drivers now (their offerings used to be crap, and well out of date).

But ARM is a completely different market. They have a high mark up on their GPUs, and need to protect their revenue stream. They cannot take the same model and apply it to ARM designs.

Firstly, they do not currently control the manufacture of the devices, and they only have the initial use license fee and a very small per-core license fee.

Secondly, there are already licensees who have the rights in perpetuity to take their existing designs and re-implement and modify them. This means that even if they decide to take future core designs private, that will not stop existing designs evolving. If they do this, they run the risk of fracturing the market, and they need high volumes to be able to continue to get revenue from the low per-core license fee.

Thirdly, if they decide to limit or increase the cost of new licenses and license renewals, this will give the chip companies a reason to invest in other architectures like RISC-V and even MIPS (companies like themselves!)

Remember, the only thing that really makes ARM processors stand out is their low component count and power, licensing terms and ubiquity. The architecture has always been relatively simple, even with some of the newer designs. There is no reason at all to suppose that given the right impetus, other simple designs could not be produced. ARM have a head start, but there are a lot of people out there who could devote a lot of resources to try to catch up using already existing or new work. It's just that at the moment it's not worth it.

Nvidia will not want to take the technology private. It's not worth $40bn in cash and stock just to have another private design. The value is in volume and market penetration.

Peter Gathercole Silver badge

Re: Nvidia and Linux? @macjules

That's a radical statement. Care to elucidate?

They say that the licensing model will persist, and besides that, chip makers already have licenses, several of which are architecture licenses which allow them to extend the architecture, and are in perpetuity.

This means that Arm devices are here to stay, and in case you didn't notice, already have Linux ported to them.

As an example, look at Raspbian running on a Pi 4. I could quite happily use that as a desktop system.

What may change is the cost of non-perpetual licenses. Some companies may find the cost of their license renewals increasing or becoming unavailable, but too much of the latter will kill the business.

ByteDance rebuffs Microsoft's TikTok purchase proposal

Peter Gathercole Silver badge

Re: Microsoft ensuring security?

ctrl-x ctrl-c

Gartner on cloud contenders: AWS fails to lower its prices, Microsoft 'cannot guarantee capacity', Google has 'devastating' network outages

Peter Gathercole Silver badge

Re: Cloud. @AC

Why will they not be your competitors? Because you'll be going out of business when you can't compete!

China blocks access to website hosting code-for-kids tool Scratch and its forums

Peter Gathercole Silver badge

"low cost"

Sometimes I wonder just how much of the manufacturing that happens in China is actually economical and profit generating.

I know that the postage rates are skewed against other countries' postal systems, but when you can buy cheap tat direct from China with free postage at less than the price it would cost to merely ship it within the UK, you have to wonder whether there is some Chinese subsidy in the system, either explicit or implicit, designed to make sure that low-cost manufacturing is killed in other countries.

In the past, when China was emerging from its shell and did not allow currency exchange, I always had the opinion that it was being done to enable China to get foreign currencies. You know, pay the workers and materials in Yuan, which have no value outside of China, and sell in dollars and pounds. But surely we're way past that point now.

Digital pregnancy testing sticks turn out to have very analogue internals when it comes to getting results

Peter Gathercole Silver badge

Re: Low tech is too old tech @Dave

Unfortunately, decreasing infant mortality does not in itself reduce population growth unless it is tied to education that a large number of children is not desirable.

Many people living in marginal environments have to have a lot of children because not all of them survive, and their children are their pension. But they don't necessarily make the link between these things themselves; it's ingrained in their society. In these societies, having a large number of surviving children is a statement of high status, and this will not change overnight.

Just allowing more of the children to live is not going to immediately reduce the family size, at least not until the 3rd or 4th generation. By this time, exponential growth as a result of more children reaching childbearing age will make the situation worse.

In the long term, I do agree, but it's not enough in itself.

AWS unleashes a new homegrown Linux that's good enough to bottle

Peter Gathercole Silver badge

Re: Missing tools? @ac

If you eliminate the containers, you don't even have those duplicated files in the first place!

I am aware of how the union filesystems work (and have been for many, many years), but on a single OS image, you do not need to even have this complexity.

I was not really serious about eliminating containers, because they do provide some isolation from the underlying hosting OS, allowing applications from different OSs to sit on a single system without the overhead of a different full OS image under a hypervisor. But I was partly serious about moving everything back to a single OS, although some of the resource isolation features may need to remain to guarantee minimum resource allocation.

Peter Gathercole Silver badge

Re: Missing tools? @Pascal

I know that "everything to run will be in the container", and have even been playing about a bit with things like Docker.

I know that you are supposed to spin up the container running as few processes as possible (although thank heaven the original "one process per container" idea seems to have been dropped), but many existing applications are not written to work like this.

The article says that it is a kernel (and presumably sufficient libraries), but also says that the toolset is written in Rust (to eliminate security holes and memory leaks, apparently). Has the full GNU toolset been ported to Rust? I think not.

When I think what is happening, I feel that what we have with containers is a shift up the virtualization stack. We had an OS which ran applications and processes. They then put in Hypervisors above the OS, to allow isolation between different OS images. We've now moved down a level, so the hosting OS becomes the Hypervisor, the container becomes the OS, and the applications are... still applications.

I wonder how long it will be before someone suggests radically revisiting the process-to-process isolation, and deleting the containers as wasteful, so we then go back to properly isolated processes running on a secure OS. Round in a full circle.

Peter Gathercole Silver badge

Missing tools?

Ah. So this is a Linux, not a GNU Linux. I wonder how many applications will need significant work to allow them to fit in if the normal GNU toolset is missing?

I know that I'm an old fogey, but a Linux system without the familiar admin tools just won't be like Linux to me.

Hidden Windows Terminal goodies to check out: Retro mode that emulates blurry CRT display – and more

Peter Gathercole Silver badge

Bash

You are aware that bash (at least on Linux) is a shell, not a terminal emulator.

The difference is that shells process commands, and the terminal emulation handles the presentation. This allows you to keep the same terminal emulation while changing the shell you want to use.

I know I'm an old fogey, but this type of confusion between components on systems is part of the root of many of the problems with modern CLIs.

You use something like Putty or an xterm to get access to the system, and you then run a shell such as bash or ksh through that access to run commands. This allows you to separate the terminal emulation from the command processor. So the terminal emulator handles driving the screen, handling keys and doing the copy/paste, and the shell runs commands.

It fits in with the Unix ethos: do one thing, and do it well.

I know this is conflated by the monolithic commands that developers appear to like developing now, where a single tool does everything, and that has its place for some types of applications, but basic OS commands should IMHO be independent of the terminal access.

This is something that I don't believe Windows has ever done properly. The command.com window was seen as just for legacy DOS type programs. Maybe this is a move in the right direction.

Relying on plain-text email is a 'barrier to entry' for kernel development, says Linux Foundation board member

Peter Gathercole Silver badge

The Linux Foundation

The Linux Foundation is an organizing group that looks to standardize and co-ordinate Linux development. It appears that all you need to do to become a platinum member (necessary to get a seat on the board) is donate $500,000 per year, and have some interest in Linux development.

Microsoft has an interest in Linux development. They are building Linux based technology into Windows, and they have to support Linux in their Azure cloud, because customers want it.

I believe that there are quite a lot of Microsoft contributions to the Linux kernel. According to some articles, in the last year they have been the fifth largest contributor. I have not looked myself, but they are almost certain to have put code in to ease Azure compatibility, and the Windows Subsystem for Linux will almost certainly have some code in the kernel tree to smooth out the interface with Windows. And it would not surprise me if they have more in there as well.

I am suspicious about Microsoft involvement, but it may be that there is a place in the Foundation for them. But I would be wary of it, if I was another board member.

Peter Gathercole Silver badge

Re: "they don't know how to send a plain-text email"

Other people have hinted at what I'm about to say, but I don't think anybody has explicitly said it.

Using a tool that is not open source (say Teams or Slack) puts the project at risk of changes in those tools. The good thing about email, and its lowest-common-denominator 7 bit ASCII, is that email is a distributed system that would continue to operate even if Microsoft or Google or any other mail provider disappeared overnight.

Even if GitHub were to disappear, as long as there was a copy of the source in another git repository, it could be re-built relatively easily.

If there were a non-proprietary, open source, distributed collaboration system available, that may be better. This is not the way the computing ecosystem is going, as these systems need to be paid for by someone, and for a non-profit that does not charge for what it produces, finding money to pay for something is difficult.

Everyone has email servers, or at least access to them, so Linux kernel development is piggybacking on something that is being provided for other reasons.

Peter Gathercole Silver badge

Re: "plain old ASCII text is a barrier to communications"

I think when it comes to the character set that the kernel is programmed in, you have to look at the computer language that it is mostly written in. That is C. C is a computer language that assumes a certain character set, and that will almost certainly be similar to or the same as 7 bit ASCII, or one of the related supersets in the ISO or UTF 8 bit character sets.

You can almost certainly write comments or function or variable names using whatever is allowed by the superset that a compiler will accept, but the basic keywords are defined in English.

I'm sure that there is some code written in Cyrillic, or in the compound ideograms used by East Asian languages, but as soon as you start doing this, you close the code off from the rest of the world. You already see this when looking at, say, electronic datasheets on the Internet, as those written in Chinese for devices that originate in China are incomprehensible to the majority of the world.

This may be cultural imperialism, because English speaking countries were so dominant during the development of the basic computing infrastructure, but I believe that writing compilers and complex systems in something like zhōngwén is complicated and unlikely to happen.

Clarke's Third Law: Any sufficiently advanced techie is indistinguishable from magic

Peter Gathercole Silver badge

Not a printer, but still a sunlight issue

Back in the early '80s I worked at a Polytechnic, and was responsible for spending a chunk of the contingency fund at the end of one financial year, with the aim of producing a teaching room for what was called "Computer Appreciation". The purpose was to have lots of different types of hardware available so people who did not know anything about computers could see what they could do. The systems were BBC micros, of course.

One of the devices we found was a robot arm with 6 axes of movement that was very functional for the price. To keep the costs down, they used normal rotary motors, with shaft encoders made up of IR transceiver devices, with a 4-quadrant rotating reflector on each shaft to 'bounce' the IR back from the transmitter to the receiver. It was all very elegant, and worked really well.

That is, until I tried using one in direct sunlight one day. Unfortunately the design did not have light covers on the encoders. As soon as I issued the return-to-home command to the controller to calibrate the position, it started all the motors (something quite interesting, because all the motor channels could run simultaneously, unlike most of the other non-industrial teaching robots), and proceeded to pull itself off the desk as all of the moving parts moved to the physical end-stops, causing the arm to contort in a way it was never really designed for.

We worked out that the bright sunlight swamped the IR detectors, so even though the motors were running, it could not detect the movement of the rotary shafts. I suspect that there was also a bug in the controller software that needed to see the shaft rotate before it would look for them to stop to detect the end stops, but I never knew that for certain.

The fall damaged some parts, so we went back to the manufacturer, and asked whether they could provide spares. We explained how the problem happened, and they said that they would see about designing some light tight covers, but although we got the parts to repair the robot (they came in kit form anyway, so replacing some of the parts was not a problem), we never saw any covers from them.

Shame really. They were the best educational robot arms I ever saw, so long as you used them out of direct sunlight. It's a shame that some of the HND students never saw fit to do their projects with the kit available in the lab, as we had hoped. I often wonder what happened to all of the 'toys' when the lab was replaced (I left before the lab was dismantled), but I guess that some of the lecturers with their own BBC micros gave them a home.

Softbank confirms talks to offload Arm as it posts rebound profit

Peter Gathercole Silver badge

Re: What's going on?

The thing about ARM and their processor designs is not that they are particularly special as far as the techniques are, but that no other chip manufacturer has managed to produce a chip that provided performance at a very low power budget.

Even Chipzilla gave up the fight, although they were trying to reduce the power consumption of their existing family of processors (because they thought compatibility would be a big selling point), not trying to produce a new design from the ground up.

There have been and are other processors that could steal ARM's crown. RISC-V is the most obvious candidate, but I'm sure that the MIPS processor designs could be re-engineered to produce low power designs, and I'm watching whether some of the micro-controllers from people like Microchip Inc. or NXP may be enhanced with more capable instruction sets, but they seem to have selected ARM for their more capable offerings.

But ARM have a major head start, with proven designs available at reasonable licensing costs, available to build into SOC designs. And some of the features, like big.LITTLE are very clever at producing designs that are very low power, but can scale up to higher performance very rapidly.

What will decide whether ARM remains a dominant design is probably whether the people who end up owning the IP want to try to increase revenue by upping the license costs (like Softbank said they were going to try to do). Once ARM licenses are seen to be more expensive than the necessary investment to make RISC-V or another design bear fruit, the advantages of ARM will be lost.

EY to outsource compute function, sending 800 staff into the loving arms of... IBM

Peter Gathercole Silver badge

Re: "...and more relevant and specialized opportunities to support their future career growth"

I know people who were brought into IBM as part of a TUPE, and remained there for significant amounts of time (more than 10 years), and some until they retired.

But it depends on the skills and usefulness. I suspect that any of the EY staff that have any Cloud or recent security experience will be OK, but probably not those who are involved with rather more traditional technologies.

When it comes to hacking societies, Russia remains the master at sowing discord and disinformation online

Peter Gathercole Silver badge

Re: Pot Kettle Blackhat .... and a Foretaste of A.N.Other Shade of Foreshadow* to Favour ....

Jake,

Were you serious about the PDP-11? What browser are you using? I would have thought that even an 11/93 or 94 would struggle with the most lightweight of browsers, unless you're still keeping Lynx or Lyx going, and if so, do many websites still provide text-only rendering of web pages? Does X11 even work?

Or do you have one of the Mentec upgrades or the ASIC re-implementations? But still, the memory restrictions inherent in the architecture would provide serious problems for modern browsers even with 22-bit separate I&D systems.

Maybe I'll give it a go, but it would have to be in emulation for me.

Of course, to counter the people who wonder why it is not vulnerable, the PDP-11 never included any speculative prefetch features, so by definition cannot suffer from any of these issues.

Microsoft runs a data centre on hydrogen for 48 whole hours, reckons it could kick hydrocarbon habit by 2030

Peter Gathercole Silver badge
Thumb Down

Picture

The picture at the top of the referenced article has clearly been Photoshopped!

The left hand set of tanks clearly 'chop' the top of the right hand yellow bollard. And when you look closer, the shadows on the three sets of tanks are not consistent with each other and the building on the left!

Shoddy work by Microsoft PR.