That threw me for a bit until I realised that a modern RPi is at least as powerful as an old fashioned ‘mini computer’.
Raspberry Pi said yesterday it would be pushing to get its miniature computers into more shops across Africa, admitting that its presence on the continent was limited to a single approved reseller with commercial ops in a few countries in southern Africa. Writing on the company blog, Ken Okolo said he had been recently …
It's orders of magnitude faster than any old minicomputer. While Wikipedia is vague about what exactly qualifies, it does say that the term was last really in use in the mid-to-late 1980s. The IBM AS/400 would be a good comparison, but Wikipedia lacks specs for it for some reason. It does say that the CPU was clocked at 22 MHz, though. Even the lowest-end Pi outstrips that clock rate significantly (the Raspberry Pi Zero runs at 1 GHz), and although architectural differences mean performance isn't directly proportional to clock, that still means at least twenty times faster. Memory speed and size are also significantly larger.
I did find enough specs to compare the Pi against a different computer of the era, although it's not quite a mini: the Cray X-MP from 1988. That was more a supercomputer than a mini, but even it lags behind the lowest-end of Pis. That's on every front: CPU performance, memory speed and quantity, storage speed and quantity, to say nothing of the multimillion-dollar price of one versus the $5 price of the other. Now add in the fact that the newest Pi out there has four cores at 1.5 GHz and 8 GB of memory. It's amazing how much such things have improved in the last three decades.
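For what it's worth, the clock-rate arithmetic above checks out. A quick back-of-the-envelope script, using only the figures quoted above (clock speed is a crude proxy, as noted; IPC, memory and I/O differ wildly):

```python
# Rough clock-rate comparison using the figures quoted in the thread.
as400_mhz = 22      # quoted clock for the early AS/400 CPU
pi_zero_mhz = 1000  # Raspberry Pi Zero, 1 GHz single core
pi4_mhz = 1500      # Raspberry Pi 4, per core
pi4_cores = 4

# Roughly 45x on raw clock alone, so "at least twenty times faster"
# is a conservative claim even allowing for architecture.
print(pi_zero_mhz / as400_mhz)

# Aggregate clock across the Pi 4's four cores: roughly 270x.
print((pi4_mhz * pi4_cores) / as400_mhz)
```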
And you can support 25 SSH connections on a Pi with no difficulty. The terminals will probably be more powerful than it is, but you can do it. A text user interface is really cheap now. I don't know what the problem is with your laptop, but if it's just someone going overboard with the JS, that's not really the fault of the other links in the chain.
Well pedanted on the units. I'd like to blame my shift key, but really don't know how I missed that.
Yes, but boost that up to 100 connections or more, like old mini computers could handle...
In pure processing terms, a Pi runs rings around old minicomputers. On the other hand, raw processor speed was only a small part of what made them fast: the supposedly "much faster" PCs of the time crawled in comparison.
A lot of people don't seem to understand that most large-scale commercial applications don't involve much processing of data, but they do involve a lot of moving data about, and there I/O is the important factor.
That's why mainframes refuse to die: they're not that fast computationally, but they can move huge amounts of data.
I suppose the gold standard for a minicomputer in the 70s was the DEC PDP series. These were closest in general architecture to early microprocessor systems, being based around a backplane with plug-in cards. The systems had up to 256K words of 16-bit memory and probably clocked no faster than 25 MHz. A Pi Zero would wipe the floor with one.
As an aside, I worked at a company in the late 80s that had a VAX (that's the 32-bit DEC mini). This system originally managed terminals but had become the server for our desktop PCs. It was replaced one day with a PC, a 386 system. The result was sad: where once a bustling system with a chattering chain printer stood in its dedicated environment, with individual A/C and raised floors (a proper computer room), there was now a dark, empty alcove with a single PC sitting in the corner.
It isn't just the clock speed; it is also I/O and other things that were optimized for multi-user and multi-process use.
The VAX 11/780 had a 5 MHz processor and ~4 MB RAM, but could cope with over 100 concurrent users. Today, you would be hard put to find a PC that could handle 100 concurrent users. I worked for an oil exploration company in the mid-80s. They had dozens of geologists working on seismic plots and writing FORTRAN code to clean up the raw data, A0 digitising tables for marking areas of interest on printouts, etc.
They even had a 2400dpi laser plotter (which used an Olivetti mini-computer as a dedicated print server).
The same for the PROTOS2000 ERP system, running on a VAX. We had that at a manufacturing company with hundreds of users in the back office and on the shop floor entering data or analysing results.
Even considering that was mostly 80x25 or 132x25 character terminal displays, that is still a lot of capture, storage, retrieval and processing of information that it had to do with those 5 MHz. It had to run a separate process for each terminal, for example.
Modern computers have much more processing power, but I honestly don't know if they could cope with hundreds of users hanging off them and using them to process information simultaneously. Getting hold of such an optimised operating system, and a way of connecting a couple of hundred serial terminals to a Pi, would be near impossible these days (though you could probably do 100+ Ethernet-based terminal sessions).
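On the Ethernet side at least, the session-count part isn't the hard bit. A minimal sketch (hypothetical port and trivial echo logic, nothing like a real terminal server or ERP session) using Python's asyncio, which will happily juggle a few hundred concurrent line-mode TCP sessions even on a Pi-class machine:

```python
import asyncio

async def handle_session(reader, writer):
    # One coroutine per "terminal"; echoes lines back, a trivially
    # simple stand-in for a line-mode session. A real session would
    # do screen handling and database work here.
    while data := await reader.readline():
        writer.write(b"ok: " + data)
        await writer.drain()
    writer.close()

async def main():
    # Port 2323 is an arbitrary choice for this sketch.
    server = await asyncio.start_server(handle_session, "0.0.0.0", 2323)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The interesting difference from the VAX era is that each session here is a coroutine, not a full OS process, which is a big part of why one small box can carry so many of them.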
Even the first PCs were, theoretically, "faster" than a minicomputer of that age in raw MHz terms, but they struggled when you tried connecting more than half a dozen terminals to the back of them. I had to support a 286 running Xenix and an accounting package, with a dozen terminals hung off a multi-port serial card. It was very slow, even though its processor was faster than a VAX's in terms of clock speed. What the VAX could achieve in one clock cycle, on the other hand...
All constantly running their own individual applications and hammering the I/O for data?
A website is very different to the typical load a minicomputer would run. It can cache most of the data, for a start. And it isn't doing constant processing of data, such as looking for oil in a seismic survey or handling manufacturing loads; it is serving up the same pages with the same information time and again, and not constantly: people load a page, digest, load a page, digest... On a mini, it is controlling the whole session.
A terminal server, with the ERP system running directly on the TS, along with its database, would be a better example. If you throw a big enough server at it, it is possible, but a Pi or even a desktop PC wouldn't be able to cope.
The latest Pi has multiple I/O methods to move data around more quickly. You could run a lot of SSH connections to it. I don't know how many you can do before it slows down, but thirty works at least. Of course, if you're trying to run GUIs over those connections, or if they're running complex software, that would soon make it sluggish. With relatively simple programs for data entry and manipulation, though, I don't think it would be a problem. Don't get me wrong, the efficiency of old computers is very impressive, but if we limit ourselves to the activities they performed, our modern computers can demonstrate their advancement.
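If you want to probe that limit yourself, a rough sketch of the experiment: spawn N sessions and count how many come up. The helper just counts running subprocesses; the ssh command shown in the comment is a hypothetical example (a host called pi.local with key-based auth set up), so adjust it to your own setup.

```python
import subprocess

def open_sessions(cmd, n):
    """Spawn n copies of cmd and report how many are still running."""
    procs = [subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
             for _ in range(n)]
    alive = sum(p.poll() is None for p in procs)
    return procs, alive

# Hypothetical usage against a Pi called pi.local (assumes key-based
# auth); each remote command just sleeps so the session stays open
# long enough to be counted:
#
#   procs, alive = open_sessions(
#       ["ssh", "-o", "BatchMode=yes", "pi@pi.local", "sleep 60"], 30)
#   print(alive, "sessions started")
#   for p in procs:
#       p.terminate()
```

Watching load average and memory on the Pi while ramping N up is the quickest way to see where it actually starts to hurt.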
That doesn't hurt, I'm sure, but depending on their goal, it's not quite enough. For education, you need some method of I/O, and most screens need mains power. Even one of the tiny HDMI screens that run off USB power is going to kill any battery you try to run it from. The Pi is great when power is available but limited; if you don't have consistent power at all, it's not well suited. Devices like a laptop, which can run from an internal battery, are better at that, but they're also a lot more expensive.