War of the workstations: How the lowest bidders shaped today's tech landscape

Digging into stories of 1980s OSes, a forgotten war for the future of computing emerges. It was won by the lowest bidders, and then the poor users and programmers forgot it ever happened. Retrocomputing is a substantial, and still growing, interest for many techies and hobbyists, and has been for a decade or two now. There's …

  1. Throatwarbler Mangrove Silver badge
    Thumb Up

    Survival characteristics

    Great article!

    I think the phrase "survival characteristics" is the key to the whole of it. The operating systems and programming languages which have survived and thrived are the ones which were most adaptable to the needs of the moment rather than the ones which fit some Platonic ideal. Dinosaurs died out because they were no longer suited to their environment, leaving us with their descendants in the form of modern-day birds and making room for the evolution of mammals. Of course, one can argue that poor business decisions were made by the companies who owned the intellectual property of those older computers, but from an evolutionary perspective, commercial forces are part of the overall ecosystem and, in all reality, a more important part than mere technical excellence.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Survival characteristics

      [Author here]

      > Dinosaurs died out because they were no longer suited to their environment,

      They didn't, you know.

      They died out because their environment suddenly and dramatically changed so that they didn't fit it any longer. They were superbly adapted to a variety of different environments and thrived for 5x longer than the mammals have done... but they could only adapt so fast.

      When things changed faster, they couldn't, and game over.

      For what it's worth, humans are the opposite: we evolved for a narrow temporary stable climatic band... and we've ended it. We are approaching saturation point, where over 95% of land biomass is us and our crops. We have no predators and no very successful pathogens.

      We are about to re-enact the St Matthew Island experiment, this century.

      https://en.wikipedia.org/wiki/St._Matthew_Island

      I hope a few of us survive, as we did last time...

      https://en.wikipedia.org/wiki/Toba_catastrophe_theory

      1. Throatwarbler Mangrove Silver badge
        Mushroom

        Re: Survival characteristics

        It's all relative. When your environment suddenly changes, you either adapt, or you're no longer suitable for it and you die off. I completely agree, otherwise. I was going to say something similar, in fact, but I know how tetchy the other commentards get when one mentions anthropogenic climate change.

        DAMN YOU, GRETA! DAMN YOU TO HELL!

      2. Doctor Syntax Silver badge

        Re: Survival characteristics

        "no very successful pathogens"

        A successful pathogen is one that doesn't kill its host, neither the individual nor the host species. If it does, the pathogen also dies, individually or as a species. We have lots of successful pathogens.

        1. Anonymous Coward
          Anonymous Coward

          Re: Survival characteristics

          We have lots of successful pathogens.

          Indeed ...

          Quite so.

          Although not so successful*, the Tory party is one of the most notable strains.

          .

          * it is slowly but steadily killing both hosts and individuals.

        2. ldo Silver badge

          Re: A successful pathogen

          Counterexample: anthrax does have to kill its host, so that it can spread from the bleeding carcass back into the soil, where it can complete its lifecycle and get ingested by the next victim.

          Moral: beware of biological analogies. In fact, beware of analogies in general.

          1. Doctor Syntax Silver badge

            Re: A successful pathogen

            Killing all the hosts would kill the pathogen. Eventually, given its persistence as spores.

            1. HuBo Silver badge
              Alien

              Re: A successful pathogen

              But only while waiting for a return trip to earth ...

            2. TDog

              Re: A successful pathogen

              An optimaxed pathogen would

              * be mild and not affect the host greatly

              * be integrated into the host in every place

              * make the host totally dependent on it

              * give evolutionary advantages that outweigh the disadvantages to the host

              * be easily spread and reproducible

              * NOT BE YELLOW (shout out to explosions and ire)

              (Ignoring the yellow bit, sort of looks like mitochondria and possibly many other inclusion organelles). Or even more interestingly cowpox, which supplanted smallpox cos it found an evolutionary niche that precluded smallpox from entering the host.

              Another optimaxed pathogen that was hostile to the host would

              * not kill the host until the R factor was significantly greater than 2


              1. Michael Wojcik Silver badge

                Re: A successful pathogen

                sort of looks like mitochondria and possibly many other inclusion organelles

                Yes. Unfortunately your definition of "successful pathogen" rather stretches the concept of "pathogen", since at this point the invasive (proto-)organism is arguably no longer causing any pathology. In other words, a truly successful pathogen is no longer a pathogen at all, but a symbiote.

      3. doublelayer Silver badge

        Re: Survival characteristics

        "They died out because their environment suddenly and dramatically changed so that they didn't fit it any longer. They were superbly adapted and thrived for 5x longer than the mammals have done to a variety of different environments... but they could only adapt so fast."

        Dinosaurs as a group did, but that's a pretty broad group. We might as well say that dinosaurs are still surviving because birds are basically the same, right? In reality, there were lots of types of dinosaurs that didn't require a catastrophic change to die. Many went extinct anyway, slowly. Most of the dinosaur species that existed at some time were extinct by the time an asteroid caused some trouble for the rest.

        The analogy is not exact, but it fits the computers. Lisp machines died out, and Lisp as a common language has been dwindling, but a lot of the concepts that made Lisp what it is were included in other languages. I can argue that Lisp is still around in lots of places, or rather that it isn't, because the languages that took much of their structure from Lisp were the more influential ones. Your article effectively draws a lot of lines between two computers and says "these are different. This one was good and died. This one is worse and survived". As demonstrated by your examples, those lines aren't as clear as you've painted them, and when the comparisons get stacked on top of one another, they stop making sense.

      4. smart4ss
        Trollface

        Re: Survival characteristics

        "....and no very successful pathogens"

        Is this a ChatGPT fact?

      5. TheMeerkat Silver badge

        Re: Survival characteristics

        > humans are the opposite: we evolved for a narrow temporary stable climatic band...

        Only someone who has no actual knowledge but is completely captured and brainwashed into environmentalism can say that.

        Humans are the only animals on earth who can adapt to any climate, because we have brains. We can survive and adapt in any climate because, instead of being perfectly suited to the current one we happen to live in (like the dinosaurs were), we can change our environment artificially to support our lives in any of the natural ones.

    2. ecofeco Silver badge

      Re: Survival characteristics

      No, the tech that survived was forced on us by marketing and decades of illegal business practices.

      That is not hyperbole, but documented fact.

      1. Doctor Syntax Silver badge

        Re: Survival characteristics

        Unix and its derivatives have survived remarkably well. The supercomputer in your pocket runs one of them. Not only was Unix not marketed by AT&T, marketing it was illegal. It spread on its merits in spite of the lack of marketing - or possibly because of it, the forbidden fruit syndrome.

        1. keithpeter Silver badge
          Windows

          Re: Survival characteristics

          Thanks to Liam Proven for a thought provoking essay which I have just caught up with on a wet Wednesday morning.

          I would argue that UNIX had superb and imaginative marketing.

          A small group of people in one branch of Bell Labs managed to market their operating system to Bell's bureaucracy very effectively, and through publishing technical reports and papers managed to market their system to Universities in many countries within a matter of years. Joseph Ossanna appears from what I have read to be the kind of genius middle manager you dream of working under, and it was a shame that he died so young.

          This observation brings me to a wider point: the changes in organisations that occurred in step with IT capabilities. Those flea-powered 'personal computers' when running Visi-Calc and successors enabled middle managers to argue back to central MIS using evidence. Not to be underestimated, that.

          Yes, a LISP machine could have been used in the same way but the 'right cast of mind' referred to in the OA was very rare. Hacking up a bit of a spreadsheet (as many here know to their cost having had to clean up the messes) has a much lower barrier to entry.

          1. Michael Wojcik Silver badge

            Re: Survival characteristics

            Those flea-powered 'personal computers' when running Visi-Calc and successors enabled middle managers to argue back to central MIS using evidence.

            Yes. There's plenty of evidence that PCs did not make managers more productive — quite the opposite, in fact, since dictating correspondence to experienced secretaries and having it produced by a dedicated typing pool is almost always going to be quite a lot faster than writing it in something like Outlook or Word, for example. What they did do was provide managers with new rhetorical affordances.

            The major weakness of this (generally quite enjoyable, even if it's on familiar ground) article, I think, is that Liam doesn't define "better". He's praising failed evolutionary branches such as LISP machines [1] for certain engineering attributes, including a species of elegance, and that's fine. But "better" can also be conceived in economic terms, and in that sense the "winners" of this particular struggle were certainly "better". They offered more net utility to more potential users. I'd wager that was true of the Mac relative to the Newton, too. When the Newton appeared, the potential base of users who would have found considerable net utility in it was too small — much smaller than that of the Mac, which turned out to be well-suited for certain markets such as designers and education. [2]

            [1] And I'll note that I quite like LISP, and offshoots such as Scheme; as I do languages in the ML family. I don't care for Smalltalk, which in its original form contains an idiotic decision to color-code source code, and consequently never spent much time with its later incarnations.

            [2] Personally, I've never liked the Mac, or anything Apple's put out since the Apple //e. But I won't deny that they've found some markets.

            1. doublelayer Silver badge

              Re: Survival characteristics

              "dictating correspondence to experienced secretaries and having it produced by a dedicated typing pool is almost always going to be quite a lot faster than writing it in something like Outlook or Word, for example."

              This might be just because I'm young enough that typing pools were gone by the time I started working and dictation is done with a speech recognition program, but this doesn't seem true to me. If I dictate into a basic audio device, I can't edit anything without either verbal backspacing "Let's review this ... discuss this ... just use discuss this" or figuring out what I want to say and then reading out in one go. Either way is slower than just using the backspace key to erase the words until the document only contains the words I want. Part of it is probably that I can type quite quickly and the previous managers could not, but even so I'd imagine that they could figure it out without having to wait for their dictation to be sent to someone to type it up, a paper copy produced, them to review that, and it to be given to someone else to drive it further along. Which brings up the other speed advantages of computers which could, once a document was produced, transmit it to places faster than previous methods. I'm thinking of sending a ten-page document to ten people over one fax machine, for instance, versus sending an email to ten addresses and going back to some other work while the early network transmitted it.

            2. keithpeter Silver badge
              Pint

              Re: Survival characteristics

              "There's plenty of evidence that PCs did not make managers more productive — quite the opposite, in fact, since dictating correspondence to experienced secretaries and having it produced by a dedicated typing pool is almost always going to be quite a lot faster than writing it in something like Outlook or Word, for example."

              I'm just old enough to have worked as a messenger boy in an office where managers would often simply ask the secretary to send a 'hurry up' letter or a 'very sorry' letter for routine communications. These letters were so standard that the secretary just bashed one out on her Selectric and the manager signed it. So yes I take your point for written stuff.

              I was more on about the spreadsheet allowing data based challenges to the centralised MIS systems of the 1960s and 1970s. Perhaps because my later working life included that.

              I also take your point about productivity decline. I have a feeling that we now measure and attempt to interpret a much larger volume of numerical data than previously simply because we can. I also have a feeling a lot of that data is quite noisy so people are trying to control random fluctuations and that does not work.

              Icon: Happy new year all, I'm off out on a slightly less rainy day.

            3. timrowledge

              Re: Survival characteristics

              “I don't care for Smalltalk, which in its original form contains an idiotic decision to color-code source code”

              No it didn’t. The original Smalltalk couldn’t have, since it ran monochrome. Stop bullshitting.

              Many modern Smalltalk systems *can* do syntax colouring if you want them to. And if you don’t, then set the preferences to not do it.

            4. Liam Proven (Written by Reg staff) Silver badge

              Re: Survival characteristics

              [Author here]

              > Liam doesn't define "better"

              This is partly due to two things.

              [1] I borrowed the term from Dr Gabriel's celebrated "Worse is Better" talk. Part of the point of that, visible in the title, is the inversion of expected definitions.

              [2] This article is based on a 2018 talk I wrote and presented at the FOSDEM conference in Brussels. There was more to the talk, and then 2 further talks. I hope and plan to return to the theme again in future.

            5. Antipode77

              Re: Survival characteristics

              Don't forget the lowering of the learning curve.

      2. Ian Johnston Silver badge

        Re: Survival characteristics

        But Linux is - objectively speaking - terrible (too), and neither marketing nor illegal business practices have forced us to aim that particular gun at our feet and pull the trigger.

        1. Doctor Syntax Silver badge

          Re: Survival characteristics

          A bit like democracy being the worst system of government apart from all the others.

        2. probgoblin

          Re: Survival characteristics

          >and neither marketing nor illegal business practices have forced us to aim that particular gun at our feet

          They kinda have though. As more and more things began to revolve around the internet, more and more people had to move their businesses online. Even before "the cloud" was a thing, if you had a business you wanted a web presence and that meant a web page and that meant hosting. You could either host on a Windows box for a premium or get the same service for slightly less on a Linux box.

          "Free as in free beer" is some powerful marketing.

          Then, as we hit the cloudy era and you suddenly had to figure out how to scale things to potentially hundreds of "boxes" to meet temporary demand and then scale back to save money when possible, the big hosting solutions began to offer their own flavors of free beer that were brewed to their specific tastes. Why use what you want when AWS has its own custom distro that should work better than free will?

          They may not have forced us to point the gun at our feet, but they certainly put the target there.

          1. smart4ss
            Childcatcher

            Re: Survival characteristics

            "They kinda have though...if you had a business you wanted a web presence... they certainly put the target there."

            Even if you had no choice of implementation, nobody can force you to have a business. As for the tools and technologies required to run the business, those that provide them aren't forcing you to use them. An economy where one is truly "forced" is more like communism.

        3. ldo Silver badge

          Re: Linux Is Terrible

          Linux is the best that a bunch of self-motivated hackers has been able to do. Windows is the best that a huge, well-funded corporation is able to do.

          Linux runs rings around Windows, and I’m sure that the best that a bunch of armchair critics can do could be even better. But they have to get around to doing it first.

      3. smart4ss
        Go

        Re: Survival characteristics

        "...decades of illegal business practices....documented fact."

        Where can I read up on that?

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Survival characteristics

          [Author here]

          > Where can I read up on that?

          Well this was my contribution:

          https://www.theregister.com/2013/06/03/thank_microsoft_for_linux_desktop_fail/

          This caused great consternation and anger and denial across the Linux world at the time. In retrospect, that's a success. :-)

  2. Chris Gray 1
    Stop

    Disagree

    The problem with those dynamic languages is that using them doesn't scale well. The human mind can only encompass so much at a time, and what would have been produced in those languages would not have been maintainable - complex programs were basically only comprehensible for a short time by the mind that produced them.

    Where programming languages are going now is towards helping the human brain deal with complexity. Issues like the low-level problems when programming with simple C are pretty much solved - it just takes time for the solutions to spread out and replace previous tools. And, the new solutions don't introduce much overhead - far less so than doing everything dynamically.

    All IMHO of course!

    1. ldo Silver badge

      Dynamic Languages

      Dynamic languages do indeed scale. In Fred Brooks’ The Mythical Man Month, he predicted that the only way to achieve another order-of-magnitude improvement in productivity over the languages of the time was to resort to “metaprogramming”, which was his term for going to yet-higher-level languages that would effectively be orchestrators of pieces written in lower-level languages. While his unfortunate choice of AppleScript as an example has not stood the test of time, the concept has certainly been vindicated in the form of Perl, Python, JavaScript et al.

      LISP and Smalltalk may not be fashionable nowadays, but they are still useful examples of the same sort of thing. Homoiconicity, for example, is being recognized as a useful language characteristic nowadays. And the Smalltalk syntax seems to lend itself very well to defining DSLs.

      1. timrowledge

        Re: Dynamic Languages

        Smalltalk is the only language good enough to be worth the effort of critiquing. The rest are just dead text in dreary files. Don't waste your time; you only have so much of it.

    2. An_Old_Dog Silver badge

      Instant Mutability of Software Systems

      From TFA: On the LISP machines, your code wasn't trapped inside frozen blocks. You could just edit the live running code and the changes would take effect immediately. You could inspect or even change the values of running variables, as the code ran.

      Well, gee, you can do that with BASIC interpreters, and with Forth systems. With compiled code, you'd need to edit-and-recompile, but under a debugger such as gdb you could dynamically inspect and change values.

      The issue with the interpreters is no version control beyond SAVE MYPRG001.BAS, then SAVE MYPRG002.BAS, then ... Can you save a LISP "world" without shutting down and re-powering the machine? How are they named? How do you compare two different "worlds"? How do you specify which one to boot? You conceivably could do version control in Forth, if you had a layer over the standard save command, save-buffers (which saves not named files but numbered file blocks; and doing a flush from within Forth will overwrite the last-saved "version" of the blocks with the text of the most-current dirty blocks).

      I'm not anti-LISP, nor anti-Forth, nor even anti-BASIC. I'm just pointing out their limitations. Giving people the ability to instantaneously-mutate the "operating system" software of a multiuser system seems a recipe for disaster.
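
      To make that concrete, here's a minimal sketch in plain Python (standard library only, and nothing Lisp-machine-specific) of what "changing the code while it runs" amounts to in any dynamic language:

        # live_patch.py -- minimal sketch: rebinding a function while the program runs.
        # Plain CPython, standard library only; this shows the general "dynamic runtime"
        # idea, not anything specific to Lisp machines.
        import threading
        import time

        def greet():
            return "old behaviour"

        def worker():
            # The loop looks the name up on every call, so a rebinding made while it
            # runs is picked up on the next iteration, with no restart.
            for _ in range(6):
                print(greet())
                time.sleep(0.5)

        t = threading.Thread(target=worker)
        t.start()

        time.sleep(1.3)

        def greet():    # "editing the live code": rebind the name mid-run
            return "new behaviour"

        t.join()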

      1. AndrueC Silver badge
        Meh

        Re: Instant Mutability of Software Systems

        C# / .NET supports modify on the fly. Okay so Visual Studio has an annoying habit of refusing to resume the application just when it's getting interesting but still. Set a breakpoint, inspect and/or modify variables, change the code and resume is supported.

        1. ldo Silver badge

          Re: Instant Mutability of Software Systems

          That’s nice as far as it goes. I think an even better example of “soup” programming is a Jupyter notebook. And that has the advantage that the notebook format is plain text (JSON, in fact). So you can actually put it under version control and circulate patches etc.

        2. DrBobK

          Re: Instant Mutability of Software Systems

          A long time ago (Sun3 era) I had a C++ environment called Sabre C++ (there was a name change at some point, I can't remember the newer name). It was very expensive, but worth it. You could compile your C++ in a debugger, set breakpoints and so on, but when you reached a breakpoint you could set the system to switch to its C++ interpreter, which, of course, allowed you to modify not only variables, but also code on the fly. You could step through like this until you reached a point where you resumed the compiled code. All quite amazing. I've never seen anything like it again. I think it died of expense and people buying cheap hardware and software.

          1. DrBobK

            Re: Instant Mutability of Software Systems

            It became CenterLine and then ObjectWorks. There are some good screenshots at:

            https://www.softwarepreservation.org/projects/interactive_c/saberc/saberc

      2. lispm

        Re: Instant Mutability of Software Systems

        > issue with the interpreters is no version control beyond SAVE MYPRG001.BAS, then SAVE MYPRG002.BAS,

        The Lisp Machine had a versioned file system. You edit myprog.lisp.1 and the next save creates myprog.lisp.2. You can also edit myprog.lisp.newest and it will open the file with the highest version number.

        > Can you save a LISP "world" without shutting down and re-powering the machine?

        Yes

        > How are they named?

        You give it a name. It's a file on a disk.

        > How do you specify which one to boot?

        It's a parameter in a boot file to the boot command. You can also type the boot command into a boot loader command line.
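
        For anyone who hasn't used a versioned file system, the naming convention is easy to mimic; a rough Python sketch of the idea (an illustration only, not the actual Lisp Machine file system):

          # versioned_save.py -- sketch of the numbered-version convention described
          # above: saving myprog.lisp creates myprog.lisp.1, then myprog.lisp.2, and
          # "newest" resolves to the highest existing version number.
          from pathlib import Path

          def versions(stem: str, directory: Path = Path(".")):
              found = []
              for p in directory.glob(stem + ".*"):
                  suffix = p.name.rsplit(".", 1)[-1]
                  if suffix.isdigit():
                      found.append((int(suffix), p))
              return sorted(found)

          def save(stem: str, text: str) -> Path:
              existing = versions(stem)
              next_version = existing[-1][0] + 1 if existing else 1
              target = Path(f"{stem}.{next_version}")
              target.write_text(text)
              return target

          def open_newest(stem: str) -> str:
              return versions(stem)[-1][1].read_text()

          save("myprog.lisp", '(defun hello () "v1")')
          save("myprog.lisp", '(defun hello () "v2")')
          print(open_newest("myprog.lisp"))   # contents of myprog.lisp.2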

      3. nijam Silver badge

        Re: Instant Mutability of Software Systems

        Of course we have "instant mutability of software systems" more-or-less everywhere now. It's better known as malware, but hey...

    3. MarkMLl

      Re: Disagree

      The bigger problem with the dynamic languages is the lack of internal protection: an ordinary user can make breaking changes to the underlying structure of the system.

      I'm fully behind "It's my PC, and I'll pry if I want to". However I think it's indefensible for a user to (a) make some arbitrary change to a network-facing computer and then claim it's unmodified or (b) make a change to some component which he does not "own" (i.e. take full responsibility for) and then blame everybody else for the resultant problems.

      Smalltalk (or for that matter Lisp) with some form of object/class-based ownership and protection mechanism would be very interesting indeed. But AFAIK, such a thing does not exist.

      1. Jou (Mxyzptlk) Silver badge

        Re: Disagree

        You could replace the two mentioned languages with ChatGPT and Bard, and your posting would reflect the current world. So much has not changed :D.

  3. ldo Silver badge

    What Is A “Workstation”?

    To me, the term “workstation” denotes a category of machine that crosses the boundary between “desktop” and “server”. Look at how people using their own Unix workstations back in the 1990s were able to collaborate by sharing code and data, for example.

    Dedicated Smalltalk and LISP machines gave us a certain kind of power, but at a cost: when you have no source code files, you cannot share patches to those source code files. All you can do is exchange VM snapshot blobs in their entirety. This makes it difficult to collaborate on common projects.

    Is there a way to retrofit something like Git onto the “soup” concept? I don’t think there is.

    1. Anonymous Coward
      Anonymous Coward

      Re: What Is A “Workstation”?

      This makes it difficult to collaborate on common projects.

      Version control and Team Development for Smalltalk from way back...

      ENVY/Developer / ENVY/Smalltalk

      http://www.edm2.com/index.php/ENVY/Developer

      1. ldo Silver badge

        Re: What Is A “Workstation”?

        ENVY looks like too little, too late.

        I was watching a series of walkthroughs on YouTube of an actual resurrected 1970s-era Smalltalk workstation, presented by someone who helped develop them. It was great being able to edit components of the live system on-the-fly and see your changes take place instantly. Except the downside was, if you screwed something up, like a core UI component which handled low-level mouse clicks or something, you could render your entire system unusable. And of course, rebooting didn’t help, because the change was permanently made to your system VM image. The only way out was to throw out the bad VM snapshot and restore a known-good one.

        Imagine if you had version control integrated at the lowest level, so that you could choose, at boot time, from different versions of the components you had edited, with all your changes nicely organized in reverse chronological order, so you could revert that last screwup without throwing everything else out. But of course ENVY could never offer anything that low-level, even when it finally came into existence, could it?

        Also, remember that modern (post-SVN) version control can treat changes as atomic across multiple files. This way, if for example you move a function definition from one file to another, by recording both file changes as part of the same commit, you cannot accidentally restore to the previous version of one file without reverting the other as well; because if you did, you would end up with code which either duplicated that function in both files or lost it from both, and that likely would not even compile.
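
        As a trivial sketch of that atomicity (the file and function names are made up), the whole move gets staged and recorded as a single commit, so a later revert touches both files together:

          # atomic_move.py -- sketch: record a cross-file refactor as ONE commit.
          # Assumes a git repository in which a function has already been moved from
          # old_module.py to new_module.py; the names are illustrative only.
          import subprocess

          def git(*args: str) -> None:
              subprocess.run(["git", *args], check=True)

          # Stage both halves of the move and commit them atomically.
          git("add", "old_module.py", "new_module.py")
          git("commit", "-m", "Move parse_config() from old_module to new_module")

          # Undoing it later also touches both files at once:
          #   git revert <that commit>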

        1. Chris Gray 1
          Angel

          Re: What Is A “Workstation”?

          I was thinking I hadn't ever worked on a system that worked that way, and then realized that I had. I was in a classroom with an Amiga screen projected on a monitor. I had my Amiga (3000?) there running an AmigaMUD server. Craig was my volunteer running a client connected over a serial port. We were interacting for a while. Then something occurred to me, and I live-edited the scenario programming to reverse the direction of his movement arrow keys. Heh! Since AmigaMUD automatically persisted everything, any screwup could have made the game unplayable. However, I think you could always get into "god"/"wizard"/"programmer" mode and then fix things.

        2. timrowledge

          Re: What Is A “Workstation”?

          "And of course, rebooting didn’t help, because the change was permanently made to your system VM image."

          No; just no. That's not what happened then, and not what happens now.

    2. Vometia has insomnia. Again.

      Re: What Is A “Workstation”?

      IMHO a "workstation" is the sort of thing currently cluttering up my hallway: I brought a pair of VaxStations in from the garage so I can check their CMOS batteries haven't started leaking. A small PC-sized thing with a large CPU in it; in their case, small Vax processors (a KA43 and KA46 I think), 32 bit processing with lots of memory high-speed 3D graphics from an age when PCs were 286 and 386 with VGA and not much RAM. They ran VMS, not sure if Ultrix was ever supported, though they can probably run NetBSD now; I hope, because finding a VMS licence for Vax processors is next to impossible these days.

      I always thought it was interesting comparing the standard of manufacture with PCs then and now and everything in between. PCs were and still are awkward fiddly affairs made of flimsy metal with razor-sharp edges everywhere; in contrast these are made of big, hefty sheets of stainless steel with all the edges milled away and large, knurled captive screws that won't roll off the desk/workbench/etc and be lost forever.

      Obvs DEC weren't the only manufacturer, I just have these examples as they were bequeathed to me when I left. I should've asked for "my" Vax 4000 too but although comparatively small they're still actually quite big and would've just been cluttering the house for decades.

      1. ldo Silver badge

        Re: What Is A “Workstation”?

        Those companies (DEC, Sun, HP etc) sold both “servers” and “workstations”, but it was the same OS on both—the only real difference was in the hardware configuration.

        I did Unix admin work for a University department that was quite keen on DEC Alphas, while they lasted. These were mainly being used as compute and storage servers for a small group of researchers and grad students. As I recall, we would buy “workstation” rather than “server” boxes—there was no difference in the OS functionality per se.

        Then Microsoft came along and offered two separate OS products in the form of Windows NT “workstation” versus “server”. Only their idea of “workstation” was really what we would call “desktop”, in that its functionality was deliberately crippled to avoid encroaching too much into “server” territory. If you wanted actual “server” functionality, you had to pay extra.

        And the customers bought into it. They gave up their higher-function Unix workstations in favour of the less-capable Windows variety. Why? Probably because they were cheaper. And somehow Microsoft was able to persuade them that they didn’t need all that extra functionality on everybody’s desktop. And so all the products that could call themselves “Unix” went extinct.

        Luckily, today, we still have at least one full “workstation”-class OS easily available, and that is Linux.

        1. Vometia has insomnia. Again.

          Re: What Is A “Workstation”?

          I worked for DEC at the time and we'd often end up doing the same thing because they wouldn't allocate Alphas of any sort (not even for internal IT to use for the "superclusters" that they bragged to customers about being Alpha-based: they were all ageing Vax 6000s with pre-NVAX processors, i.e. slow, and hugely overloaded) and management said if we needed them we'd have to lease them from a third party. Yeah, DEC under Greasy Bob's rule was always destined for greatness. :| Anyway, we didn't have the budget for that either, so some of us got newer PCs instead. I remember the joy of finding a discarded 486 that I could play Doom on only to have it seized by someone who needed it for actual work.

        2. smart4ss
          Unhappy

          Re: What Is A “Workstation”?

          "They gave up their higher-function Unix workstations in favour of the less-capable Windows variety. Why?"

          NT Admin ----------> $$ less functional, easier training, larger worker pool

          Unix Admin---------> $$$ more functional, long training, small worker pool

          I used to be a Unix Admin and it was like Varsity while NT was like Junior Varsity. A Unix Admin could enable more computing power for more people, but managers felt NT would give them more business options. These days a start-up can get quite far with Cloud Computing. No need to hire expensive Admins.

      2. MarkMLl

        Re: What Is A “Workstation”?

        Interestingly (at least IMO), Sun's enterprise-grade machines and the Cray CS6400 were all based on an "Artificial Intelligence Workstation" architecture developed at Xerox PARC, which was then reworked by a collaborative team of Sun and Xerox engineers to use SPARC processors (the seminal papers have a couple of dozen authors from the two companies).

        If nothing else, this suggests that physical size and the number of simultaneous users are barely relevant when it comes to firming up the definition: it mostly boils down to "what does this company see itself as selling?".

    3. Roland6 Silver badge

      Re: What Is A “Workstation”?

      I would say (ignoring Unix/Xenix) that NT and its successors turned the Microsoft PC into a workstation. Obviously, WfWg (once it got TCP/IP and a few other bits) was a good initial attempt…

    4. lispm

      Re: What Is A “Workstation”?

      > Dedicated Smalltalk and LISP machines gave us a certain kind of power, but at a cost: when you have no source code files

      Lisp Machines have source code files. Even versioned.

      > All you can do is exchange VM snapshot blobs in their entirety.

      That was never the case.

    5. Paolo Amoroso

      Re: What Is A “Workstation”?

      > Dedicated Smalltalk and LISP machines gave us a certain kind of power, but at a cost: when you have no source code files, you cannot share patches to those source code files.

      Medley Interlisp, the system software of Xerox Lisp Machines, largely overcomes this issue as it's not a pure image-based environment. It's a "residential environment", a hybrid where you still develop in the image, but save the code to files that are treated more like code databases than source files. In fact, the Medley Interlisp revival project stores the full sources on GitHub and the maintenance and development work is based on pull requests.

    6. timrowledge

      Re: What Is A “Workstation”?

      Y'what? That is total and utter nonsense. Smalltalk has *always* been able to pass code around. From simple text files, to imagesegments, to Monticello packages, to the git based system Pharo uses, Envy, Metacello... How the hell did you think we share and collaborate?

  4. yetanotheraoc Silver badge

    Correctness and Simplicity

    That piece by Gabriel is a classic. Since I first came across it, it has changed my own coding to be much more incremental. In my view correctness is above all else. Mostly correct might be good enough for the immediate task, but when you try to build on it then the bugs will creep out. Proving correctness depends on simplicity. If the system is sufficiently complex, it becomes a black box. At that stage, it is only possible to disprove correctness. If we have a system that *might be* correct, but we can't prove it, then the most that can be claimed is we don't yet know what bugs are in it.

    The problem with simplicity is that it's subjective. What an educated person finds simple an uneducated person might reject as impossible. What an uneducated person finds simple the educated person might reject as way too many unnecessary steps. The mathematical idea that sums this up is _elegance_. To grasp the elegant solution requires either a high degree of training or a high degree of genius. Both are in rather short supply, and in practice the elegant solution just becomes, for the vast majority of people, a different style of black box.

    1. Daedalus

      Re: Correctness and Simplicity

      "If your code is complex you've either made a mistake, or you're about to make one"

      That's a quote from, er, me.

    2. ecofeco Silver badge

      Re: Correctness and Simplicity

      The guys at MIT in the early 1960s called it "bumming code".

      This means to make it as efficient, small and elegant as possible.

      1. MarkMLl

        Re: Correctness and Simplicity

        I believe that the original definition was from John McCarthy (MIT and later Stanford), who compared a certain type of programmer with a "ski bum" intent on shaving a fraction of a second off his downhill run time.

        1. Michael Wojcik Silver badge

          Re: Correctness and Simplicity

          Yes. "Bumming code" was about minimalism: fastest execution, smallest source or object code, etc. It was never about elegance. That was Djikstra's objection to APL: that it was designed for code bums.

    3. _andrew

      Re: Correctness and Simplicity

      Once upon a time "correct" used to mean "can't possibly fail". Ship in ROM. In the earlier days of auto-updating apps and web apps it seemed to mean "worked once in the lab and the executive demo was OKed". Ship on cron-job. In these days of nation-state hackers and crypto-jacking I get the sense that it's swinging back slightly towards the former sense. Think of the hacking/spying industry as a free universal fuzz-testing service?

      The US and EU have just almost outlawed new C and C++ code. That day is still a little way off, I think, but perhaps the Lisps and Smalltalks of the world will get more of a look-in?

    4. doublelayer Silver badge

      Re: Correctness and Simplicity

      I've always had a problem with that essay. It basically doesn't explain any details, and not only sets up strawman arguments (as admitted), but doesn't even clarify what they're supposed to represent. I think my objections can be summed up in the following quote:

      I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

      That's not a valid argument. It's as if I used the following setup in an argument about political systems:

      The Welsh approach

      * People should have choices about how their government operates and what it does.

      * Torturing people is just wrong.

      The English approach:

      * People should have choices about how their government operates and what it does unless they're causing problems.

      * Torturing people is wrong, unless those people are causing problems.

      It doesn't actually tell you what the English system is, but it is pretty clearly saying that, if you agree with any other point I might have thrown into the English system's list, then you like torturing people and that's terrible. It misconstrues the options available, simplifies things into two not necessarily opposed alternatives, and it doesn't even have the courtesy to make this fallacious argument well by telling people what the right option is. I've generally summarized this essay as "There's some software I don't like and unfortunately others have used it more than the alternative I like better, so we're stuck with it now", which is a style of essay I've seen lots of times and written myself, but this one pretends to be making a wider point that I don't think it does.

      1. smart4ss

        Re: Correctness and Simplicity

        "...pretends to be making a wider point that I don't think it does."

        I think it does make a wider point, but maybe you missed it?

        "...and unfortunately others have used it more..."

        The simple fact that others use an IT technology more gives it an advantage, and the first-mover principle means the advantage will likely be insurmountable. That is "unfortunate" if you champion the also-ran. Proven's article takes a long way around but gets there in the end with the "fungible cog" reference, and that essay blames "deskilling", which is a derogatory term for innovation. Innovations are never universally hailed as good, because they disrupt, and the disrupted don't like it.

        The most ridiculous thing, though, is that using computers in business is an innovation that disrupted a lot. So complaining about disruption within the computing industry is myopic.

        1. doublelayer Silver badge

          Re: Correctness and Simplicity

          If your comment was meant to explain what I missed, I still don't get it. Yes, one of the reasons people may have used the option the writer doesn't like could be that that one came first. Or that that one is cheaper. Or that that one was faster. Or that that one is better. Or something else entirely. Either way, it still boils down to the author complaining that people don't use the thing they wanted.

          Here's an example. I don't much like Javascript because writing it is painful, it lacks a lot of stuff that a proper programming language would have, it's inefficient, and calling into anything else is a mess of incompatible standards into which companies like Google want to cram everything. Unfortunately, it is the only feasible option for client-side web scripting, so we're stuck with it. The above two sentences are a better argument than that essay was, because I at least told you what the system I don't like is and why, albeit in very little detail. The famous essay doesn't do either. Neither of the approaches refer to a real system with real complaints, but to theoretical systems with complaints that have been deliberately exaggerated.

          My two sentences are also a bad essay, because if I'm actually going to complain about web scripting, then I need to acknowledge the also-ran, which would involve me explaining why Javascript is certainly a lot better than client-side Java was, and it beats Flash, and Silverlight is a word that nobody wanted to see in this sentence. In short, to admit why Javascript is where it is today. I think it is possible to create a better scripting language than JS, but that doing so is not justified because it is too popular for an alternative to be adopted, but I could specify what characteristics a replacement should have. An essay like that would make a point that others could debate. If I instead chose to make a fake language that could be JS or maybe not, then explained that everyone who was involved in it was sloppy and undisciplined, but that the good people couldn't succeed because the sloppy people got to market first, then I'm doing my own argument a disservice by ignoring reality in two different ways.

  5. Jou (Mxyzptlk) Silver badge

    Disagree on a few points

    "usually completely ignoring all the lessons learned in the previous generation" - no. Many mainframe and supercomputer concepts ended up in mainstream CPUs.

    Multiple CPUs? Since 286 generation, and became widely available with Windows NT 4.0 - albeit I find no official information which minimum CPU was needed for multiprocessor support. I only saw Pentium boxes with NT 4.0 and two CPUs. I am ignoring Windows NT 3.1 / 3.5 which could use multiple CPUs TOO, but those machines are in the "not widely available" price range. I could not get two visible CPUs in my Hyper-V VM, despite changing to multiprocessor kernel due to no matching hal which boots NT 3.51, I tried all.

    CPU cache? Mainstream since at least 486, but was possible with some versions of 386.

    Cray technique "single command on a lot of input data", i.e. give one command and the CPU with do a math operation of two input arrays, saving a few cycles. Since Pentium MMX, required by Quake 2. Later expanded with SMID / AVX for more complex stuff.

    Virtualization with hardware support instead of 100% software? AMD and Intel around 2005.

    Extended virtualization support to make Type 1 hypervisors possible at a relatively low price? AMD beginning in 2008, Intel followed half a year later.

    ECC for consumer CPUs? AMD since the 2011 "Bulldozer" consumer CPUs, like the AMD FX-4170; Intel only on server/workstation parts. For server CPUs at least since the 486 generation (could not find any 386 ECC information, but such servers might exist).

    And there is SO much former mainframe or dedicated multi-million-$-computer technology which is available in the cheapest currently available x86 CPUs.
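
    To make the "one command on a lot of input data" point concrete at a high level, a rough NumPy sketch (the real MMX/SSE/AVX instructions sit a level below this, but NumPy's whole-array operations are the same idea and are typically vectorised down to exactly those instructions):

      # simd_sketch.py -- loose illustration of "one command, many data" at the
      # language level. NumPy applies the addition to the whole arrays in one call;
      # on current x86 builds the inner loop is itself vectorised with SSE/AVX.
      import numpy as np

      a = np.arange(1_000_000, dtype=np.float32)
      b = np.ones(1_000_000, dtype=np.float32)

      c = a + b           # one expression, a million additions
      print(c[:5])        # [1. 2. 3. 4. 5.]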

    1. ldo Silver badge

      Re: Disagree on a few points

      Mainframes and supercomputers were built on entirely different concepts.

      The “mainframe” is founded very much on the assumption that the CPU is a scarce and expensive resource. Hence the devolution of so much intelligence to the peripherals, so they could perform long sequences of I/O operations in-between having to go back to the CPU to pester it for more instructions. And the use of batch-oriented operating systems, with the emphasis on high throughput at the expense of high latency. The idea of using a computer interactively, where the machine would be spending a lot of its time waiting for the user to type the next keystroke, just seemed like a ridiculously wasteful idea. But when minicomputers (and later micros) became cheap enough for this to become entirely practicable, the effect on user productivity was absolutely massive.

      Naturally, the whole “mainframe” concept has been obsolete for many decades. Which is why the only mainframe computer company still in that business—namely, IBM—is but a shadow of its former self.

      1. _andrew

        Re: Disagree on a few points

        This image of the "microprocessor" model of processing, with a single shared bus and interrupt handlers that pushed the data around, hasn't really been true for a very long time. Modern "micro" systems use all of the low-latency batch-processing IO-offload tricks in the book. Everything happens through DMA based on intelligent processors in the peripheral systems that execute chains of IO requests that they DMA straight out of RAM. Interrupts are fully vectored and only used once the entire buffer of IO requests is empty (if then). The bottom levels of the OSes are all asynchronous batch systems...

        And yes, they still spend much of their time sitting around, waiting for the user to push a button, but when they're playing a video game they're doing a lot more work than any mainframe or supercomputer from even twenty years ago was capable of.

        We've just found a lot more "work" for them all to do (for small values of "work").

      2. John Brown (no body) Silver badge

        Re: Disagree on a few points

        "The “mainframe” is founded very much on the assumption that the CPU is a scarce and expensive resource. Hence the devolution of so much intelligence to the peripherals, so they could perform long sequences of I/O operations in-between having to go back to the CPU to pester ask for more instructions."

        Yes, a concept carried over by Commodore on the original PET and its floppy disk unit. Maybe others did it too, but none I'm aware of.

        1. MarkMLl

          Re: Disagree on a few points

          "Yes, a concept carried over by Commodore on the original PET and it's floppy disk unit. Maybe others did it to, but none I'm aware of."

          In that case you've really not looked very hard. From the early 80s onwards anything using a SCSI bus had a significant amount of processing power in each host (i.e. peripheral), and the situation has continued to the present day. Haven't you ever stopped to wonder just what's inside that fancy printer on your USB bus, with a display that would put the original PC to shame?

          1. Anonymous Coward
            Anonymous Coward

            Re: Disagree on a few points

            Haven't you ever stopped to wonder just what's inside that fancy printer on your USB

            We used to have RML 380z systems in the office, running under CP/M. They had a single Z80 processor.

            The terminals we connected to them had a Z80 in the video monitor to handle the display, and an 8080 in the keyboard.

      3. Michael Wojcik Silver badge

        Re: Disagree on a few points

        The idea of using a computer interactively, where the machine would be spending a lot of its time waiting for the user to type the next keystroke, just seemed like a ridiculously wasteful idea.

        Sigh. Just recite a bunch of myths, won't you?

        IBM first released TSO — you know, Time Sharing Option — in 1971. VM came out a year later, and CP/CMS dates all the way back to 1967.

        IBM mainframes had interactive OSes available since shortly after the 360 was introduced.

        I've never used Burroughs mainframes, but various online sources say MCP supported multitasking (it definitely provided multiprogramming with preemption), and MCP was first released in 1961.

        As others have noted, modern "micros" offload a great deal of processing, so that distinction is also rubbish.

    2. Fruit and Nutcase Silver badge

      Re: Disagree on a few points

      ECC for server CPUs at least since the 486 generation - IIRC the IBM PS/2 Model 90/95.

      Torvalds on ECC...

      "The "modern DRAM is so reliable that it doesn't need ECC" was always a bedtime story for children that had been dropped on their heads a bit too many times. Yes, I'm pissed off about it. You can find me complaining about this literally for decades now. I don't want to say "I was right". I want this fixed, and I want ECC. And AMD did it. Intel didn't."

      https://www.phoronix.com/news/Linus-Torvalds-ECC

      1. Jou (Mxyzptlk) Silver badge

        Re: Disagree on a few points

        And he is right. With my switch to the Ryzen 2700X I got ECC memory, DDR4 2400. I did not care about the speed, I wanted ECC, period. Could overclock it up to 2800 before ECC errors occurred. Same mainboard, same RAM, Ryzen 3900X: RAM could go to 2933. Same mainboard, same RAM, Ryzen 5950X: RAM now at 3066. Those are the safe values where I have no ECC error in my Windows log over the course of months with varying heavy CPU load. Without ECC I have to guess what caused the problem. With ECC the machine suddenly gets very slow, possibly due to not-as-fast-as-server-CPU ECC correction logic, but it keeps going and I get an "Event id 47, WHEA-Logger, component memory, corrected hardware" error. In the XML view of that event I even see the exact address where it occurred, albeit missing the "which module" information for obviously-not-server reasons.

        DDR5 has, by default, ECC built in, albeit not the cut-down Hamming(127) code (known as 64/72) but rather a much bigger Hamming code word, so fewer ECC bits are needed for that extra tiny control chip. I suspect ECC with silent correction is needed, else it could not run stably at those speeds :D. But I want the real ECC, exposed to the CPU, which is available too. But the speed step from the 5950X to the newer CPUs is not big enough, especially since there is no Ryzen 7950X with 3D cache on BOTH chiplets, which my tasks could make use of. Several benchmarks (with CPU affinity set to compare with vs. without that extra cache in the 7950X3D) show a considerable advantage for my workloads.
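
        Back-of-the-envelope, the check-bit arithmetic behind "bigger code word, proportionally fewer ECC bits" looks like this (a sketch assuming plain single-error-correcting Hamming codes; real DDR5 on-die ECC details vary by vendor):

          # hamming_overhead.py -- back-of-envelope check-bit arithmetic. A single-error-
          # correcting Hamming code needs the smallest r with 2**r >= m + r + 1 check
          # bits for m data bits; the classic (72,64) DIMM code spends one extra bit on
          # double-error detection.
          def sec_check_bits(m: int) -> int:
              r = 1
              while 2 ** r < m + r + 1:
                  r += 1
              return r

          for m in (64, 128):
              r = sec_check_bits(m)
              print(f"{m} data bits -> {r} check bits ({r / m:.1%} overhead)")
          # 64 data bits -> 7 check bits (10.9% overhead)
          # 128 data bits -> 8 check bits (6.2% overhead)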

        Aw, I got into late night blabbing again :D.

    3. Sandtitz Silver badge

      Re: Disagree on a few points

      "Multiple CPUs? Since 286 generation"

      Really? I thought 386 was the first with SMP. Or was this some sort of asymmetric 286 CPU cards on a backplane thingy?

      "Windows NT 4.0 - albeit I find no official information which minimum CPU was needed for multiprocessor support"

      NT4 required a 486 and I think it had several HALs included: the Compaq SystemPro comes to mind. Not that I ever had the pleasure of using SMP 386/486.

      "I could not get two visible CPUs in my Hyper-V VM, despite changing to multiprocessor kernel due to no matching hal which boots NT 3.51, I tried all."

      Hyper-V is poor for running historic OS's. NT3.1 SMP has been done on VirtualBox.

      1. Jou (Mxyzptlk) Silver badge

        Re: Disagree on a few points

        > I thought 386 was the first with SMP. Or was this some sort of asymmetric 286 CPU cards on a backplane thingy?

        This is something I cannot find clear information on. If you run a custom build with a program which uses the second CPU directly, bypassing DOS, I see no problem. There were even C64 "versions" with a second CPU custom-hacked in. I saw them double the speed of the "little apple man" (Mandelbrot) calculations.

        > NT3.1 SMP has been done on VirtualBox.

        Thank you for that link! Especially since he states "I had no luck with NT 3.5 and 3.51 SMP in a VM." So it is probably not a Hyper-V specific problem, and he is at the same "don't care enough to try further" spot :D.

        1. Richard 12 Silver badge
          Boffin

          Re: Disagree on a few points

          The 8-bit micros had multiprocessing, though usually more of an IPC than SMP.

          The BBC had The Tube, which allowed a coprocessor to be used as an accelerator.

          BBC coprocessors were often a different architecture, eg Z80. To some extent I suppose it was more like PCI-E than SMP.

          1. Vometia has insomnia. Again.

            Re: Disagree on a few points

            Mine has a 6502 co-processor, which brings extra MHz and RAM to the computer; the first 6502 effectively becomes an IO processor and GPU I think. Obvs. software is expected to use OS calls rather than trying to twiddle with hardware itself, though as there's no protected mode there's not much that can be done to ensure software authors play nice. But of those who did, IIRC the software could use the external CPU without modification.

            Definitely not SMP, but still MP. And then there's the DECSYSTEM-2020 which was an 8080-based minicomputer with a 36-bit KS10 co-processor. So kinda a microcomputer in a minicomputer cabinet with a mainframe co-processor. Poor thing must've had an identity crisis.

            1. Tony Gathercole ...

              Re: Disagree on a few points

              >>> And then there's the DECSYSTEM-2020 which was an 8080-based minicomputer with a 36-bit KS10 co-processor. So kinda a microcomputer in a minicomputer cabinet with a mainframe co-processor. Poor thing must've had an identity crisis.

              Ah ... No. The KS10 in the DECSYSTEM-2020 was always the primary processor - the 8080 had (partially) a similar relationship to the KS10 as the PDP-11/40 did to the KL10 in the various KL10-based DECsystem-10 and DECSYSTEM-20 models. That is to say, the boot controller and hardware diagnostic manager, but unlike the KL10s it didn't control any IO other than the primary async console / RDC connections and access to the boot devices. No operating system or user code ever ran in either the 8080 or 11/40.

              The 2020 may have had an identity crisis when forced to run TOPS-10 but that's a story for some other time. ADP may however disagree ...

              (Was responsible for several KL-based systems at the peak of my career but the onsite 2020 had been decommissioned by the time I inherited responsibility for it and I don't think it was ever powered on again.)

              1. Vometia has insomnia. Again.

                Re: Disagree on a few points

                I was (unsuccessfully!) stealing someone else's joke about the 8080 being the main CPU! The 8080, PDP-11 and whatever it was the big Vaxes used (LSI-11 I think, at least the earlier ones) replaced all the switches and flashing lights. Bah.

                I'm still sad that my only direct experience of PDP-10s was at college, Hatfield's imaginatively-named BLUE and ORANGE (a 109X and 2020 respectively) but I still remember even at the time a lot of people had a great deal of fondness for them. I've long had a rather belated fascination with them and appreciate the elegance of their instruction set now I've read more about them. It's nice that at least emulators exist and distributions of the main operating systems are readily available to run on them.

          2. J.G.Harston Silver badge

            Re: Disagree on a few points

            Even without the Tube the BBC has multiprocessing. A rich interrupt-driven background environment: almost everything a foreground application needs to do is "fire and forget". Send a byte to the serial driver, it gets plonked in a buffer and control goes straight back to the caller. An interrupt will be along to deal with it. Want some serial input? Ask to look at the serial input buffer. The background deals with all the hardware. Send a network packet? Pass a pointer to the networking system, go back to work, interrupts feed it to the wire while you're doing something else. Sound? Plonk it in a buffer. Printer? Plonk it in a buffer. Hand it over, get on with your work, gets done in the background.
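
            For anyone who never met that model, here's a minimal Python sketch of the idea - purely illustrative, with a queue and a worker thread standing in for the OS buffer and the interrupt handler (the names send_byte and background_handler are made up; this is nothing like real BBC MOS code):

                import queue
                import threading
                import time

                output_buffer = queue.Queue()        # stand-in for the OS's serial output buffer

                def send_byte(b):
                    """Fire and forget: plonk the byte in the buffer and return immediately."""
                    output_buffer.put(b)

                def background_handler():
                    """Stand-in for the interrupt routine that feeds the hardware."""
                    while True:
                        b = output_buffer.get()      # wait until there is something to send
                        time.sleep(0.01)             # pretend the UART is slow
                        print(f"sent {b:#04x}")

                threading.Thread(target=background_handler, daemon=True).start()

                for b in b"HELLO":                   # the foreground code carries straight on
                    send_byte(b)

                time.sleep(0.1)                      # give the "interrupt" time to drain the buffer

            The foreground loop finishes instantly; the buffer gets drained in the background, which is the whole point.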

            Going from that environment to the PC in the late '80s was staggering. "How on earth is this pile of crap so much crappier than what I started on eight years ago? You mean *I* have to frob the serial hardware *MYSELF*, like people?"

            1. Vometia has insomnia. Again.

              Re: Disagree on a few points

              I had much the same experience of the IBM PC: I encountered my first several years after becoming familiar with the BBC Micro and then the college minis and mainframes, and I was underwhelmed to put it mildly. At the same time, Acorn's Sophie Wilson was struggling to find a suitable CPU to run the successor to the BBC's OS and when she couldn't find one she created her own: the ARM was born. Yeah I know, everyone here knows that story, it's just hard to reconcile stuff like that happening at the same time as my first encounter with the PC: it already felt like it actually pre-dated yesterday's technology, yet people were saying "this is the future!" :|

              1. Michael Wojcik Silver badge

                Re: Disagree on a few points

                It was the future for an economic reason, not an engineering one. Large businesses were willing to buy a lot of IBM PCs, partly because they came from IBM, partly because they looked "businesslike" (large, steel, boring, like Steelcase desks and filing cabinets), and partly because IBM marketed to companies as replacements for the 3270 terminals they had sold to those customers a few years ago; with a suitable adapter card (such as the IRMA), the PC could take over those 3270 duties and also run PC software such as spreadsheets and word processors.

                IBM tried to lean on the last a bit harder a few years later with the 3270 PC, but by then most of the 3270 market had either migrated already or decided they weren't going to. Then TCP/IP connections to mainframes became more popular and TN3270 software implementations made dedicated 3270 hardware (original flavor or PC-based) much less appealing.

                1. Vometia has insomnia. Again.

                  Re: Disagree on a few points

                  I remember a few PCs with IRMA cards; but at our site, most desks were inhabited by VT220-compatibles, so one of my first jobs was a crash course in figuring out what on earth SNA was so I could automate logins for managers who were much too important to type, ending up with some half-scripted, half interactive 3270 gubbins connecting our Unix systems via SDLC to the mainframes' FEP. This was back in the day when our mainframers were suspicious and unconvinced that ethernet was relevant so to send emails we had a cobbled together system where Unix mail was sent to the mainframe-based VISTA and PROFS systems by JCL (which turned out to be expensive and I got told off for using it too often) over the same SDLC, and they replied by sending a telex which one of our Unix boxes reformatted into email and squirted into uucp. Ugh.

                  Er anyway, yeah, I get your point about the target market, and in fact remember one of those computer mags saying the same thing: they said as much as they weren't impressed at the complexity and price for something whose performance was so underwhelming, it didn't really matter as those three magic letters guaranteed it would succeed and would probably steamroller everything else in the process. While the latter took a bit longer to happen than anticipated (especially once the Amiga and ST appeared on the scene) they were right about it making massive headway just because IBM said to existing customers, "you want this. You need this. Buy lots of them."

    4. Anonymous Coward
      Anonymous Coward

      Re: Disagree on a few points

      > Virtualization with Hardware support instead 100% software? AMD and Intel around 2005.

      ICL VME, from 1981-ish

      1. Jou (Mxyzptlk) Silver badge

        Re: Disagree on a few points

        > > Virtualization with Hardware support instead 100% software? AMD and Intel around 2005.

        > ICL VME, from 1981-ish

        You are right, they were available before. But throughout the posting I specifically talked about widely available hardware, affordable for consumers or cheap servers, which inherited techniques that were previously available only on $100,000 or million-dollar machines (in 1981 dollars), so what is your point? I didn't think I would need to repeat this for every.single.line.of.my.posting just to catch that one AC, with emphasis on the [verybold]C[/verybold], who picks one line out of context to spew his "wisdom" on. OK... NOW this thread has reached "AOL entered the internet" level. Thank you AC!

      2. Michael Wojcik Silver badge

        Re: Disagree on a few points

        IBM CP, from 1968, using the 360-67's DAT Box.

  6. _andrew

    A lot of design points were being explored at that time

    The transition wasn't just Mainframe->mini->workstation->PC. As the article mentioned, Ethernet came out at this time (and token ring, over there in IBM land) and that led to all sorts of interesting experiments in sharing. Big (Unix or VMS) servers networked to diskless workstations. Multiple servers clustered together, sharing disks at the physical level. Plan 9 network servers, blit terminals. X Windows with dedicated X terminals accessing logins on multiple servers, with unified user storage on NFS servers (my personal favourite at the time). A lot of these only "worked" as the result of complexity inflections: the serious graphics folk insist that X11 doesn't work once you move to 3D rendering and compositing, hence the (slow) move back to integrated models (Wayland, Quartz).

    For all that they had networking, those ancient Lisp and Smalltalk workstations had limitations: they were single-processor and single-threaded systems without the sort of memory management or user/system protections that we're used to today. Despite being programmed in Pascal, the early Mac System had a slightly similar run-time structure: data and graphics were shared between all levels, and any bug would crash the whole system, or any long disk or network activity would (or at least could) hang everything until completed. We forget how common system crashes were back then. (Although both Smalltalk and Lisp were much less able to die from an array-bounds error than later C-based systems, and the single-threaded software stack did avoid all of the shared-memory failure modes that came later.)

    Crashing the system didn't really wreck much because there was only really one thing going on at a time. At least the compiled-Pascal approach of MacOS solved the third-party application development and distribution problem in a way that the Lisp and Smalltalk machines couldn't. The whole system was one VM image there, and so the only way to switch to a program developed by someone else was essentially to re-boot into that other VM.

    1. PghMike

      Re: A lot of design points were being explored at that time

      They also had absolutely no security. If you put one of them on today's Internet, they'd be hijacked for crypto mining in 3 minutes.

      1. DanCrossNYC

        Re: A lot of design points were being explored at that time

        That'd be a pretty slow miner.

    2. lispm

      Re: A lot of design points were being explored at that time

      > those ancient Lisp and Smalltalk workstations had limitations: they were single-processor and single threaded systems

      They were not single threaded. Lisp Machines had multiple threads. Each window had its own thread. Even the mouse handler had its own thread.

      > or any long disk activity or network activity would (or at least could) hang everything until completed.

      That was not a problem on a Lisp Machine. My Symbolics Lisp Machine had a really capable process scheduler.

      1. swm

        Re: A lot of design points were being explored at that time

        I used both Smalltalk-76 and Interlisp at Xerox. Smalltalk-76 had the best debugger I have ever seen: you could see all of the class variables and instance variables and execute statements in the context of either. The stack trace highlighted the actual expression of the call in the source code and the code was editable dynamically. Smalltalk also had multiple "threads".

        Interlisp had a debugger that also was powerful but required more knowledge of the system to use.

        They were both a joy to use.

        1. timrowledge

          Re: A lot of design points were being explored at that time

          You should perhaps take a look at the debuggers we have now.

    3. timrowledge

      Re: A lot of design points were being explored at that time

      "those ancient Lisp and Smalltalk workstations had limitations: they were single-processor and single threaded "

      Single processor, yes mostly - there were exceptions. Single threaded - nonsense. Smalltalk has had multi-threaded execution since.. well almost forever.

  7. ecofeco Silver badge

    Well written

    Well written article.

    I first became aware of mini computers and then PCs in the mid 1970s. I did not learn how to use them until the mid 1980s. It was far too expensive for me at the time and I had to catch as catch can.

    But every word of this article is the truth. I saw it in real time. We now use crippled, overpriced tech every day and nobody knows any different. And the brand stan-bois are the worst.

    And it just gets worse every year. Which means this article will eventually disappear down the memory hole, and outside of some of us who may remember it in the future, it will be gone, and the morons will keep marching while looking at those of us who know as some kind of cranks.

    Much like the article I read about learned helplessness in software writing these days. I can no longer find the article (see reason above), but it detailed how and why software is utter shite these days. This El Reg article makes a great addendum to that lost article.

    1. Ian Johnston Silver badge

      Re: Well written

      We now use crippled, overpriced tech every day and nobody knows any different. And the brand stan-bois are the worst.

      I produced the 200 pages of my first thesis in LaTeX using an Atari Mega ST2. That's 2MB of RAM, no HD. A friend has just installed LaTeX on his Windows machine. It's a 7GB download now.

      1. ldo Silver badge

        Re: Latex On Windows

        I wonder where the blame for that lies?

        I was looking around the Debian package repo, trying to figure out how large LaTeX is on there, but it's broken up into what looks like dozens of packages. Maybe if you install them all it might add up to 7GB. But at least you have the choice of just putting in the bits you need ...

      2. J.G.Harston Silver badge

        Re: Well written

        A few months ago I found the first report I wrote when I moved on from my trusty typewriter to EDWORD on a Beeb. About 12K available when in MODE 3.

    2. doublelayer Silver badge

      Re: Well written

      "Much like the article I read about learned helplessness in software writing these days. I can no longer find the article"

      Could it possibly be Learned Helplessness in Software Engineering? It seems to fit your description, and it wasn't too hard to find.

      I think it makes some relevant points, in that the problems we're complaining about are more our fault due to our expectations. If we really think a certain system is better, we can try to get others to use it. If they don't, there may be more than we considered that goes into assessing its quality. Sometimes, the things we thought about which could make a system bad or good are not the only problems with it, and something we didn't consider will be the reason people use something we think inferior.

      1. ecofeco Silver badge
        Pint

        Re: Well written

        Holy crap! That's the one!

        Have as many as you want on me! ----------------------->>

        I've been looking for that article for the last 3 years! Lost the bookmark in a data shuffle.

  8. F. Frederick Skitty Silver badge

    "He left behind the amazing rich development environment, where it was objects all the way down."

    Not a Jobs fan, but there is a reason he left that bit unimplemented in the Lisa and Mac - the hardware would have been astronomically expensive, just as it was in the Xerox machines. When relatively affordable hardware became available, Jobs oversaw the implementation of just such a rich development environment - NeXTSTEP, with its Smalltalk-inspired Objective-C tooling.

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > there is a reason he left that bit unimplemented in the Lisa and Mac - the hardware would have been astronomically expensive, just as it was in the Xerox machines.

      I disagree.

      My source for him simply overlooking this stuff is _the man himself_. He *said* that he simply overlooked it.

      https://www.youtube.com/watch?v=nHaTRWRj8G0

      It's only 57sec long. Watch it.

      The other thing you overlook is: the initial Apple GUI machine, the Lisa, *was* astronomically expensive. It needed a meg of RAM and a hard disk, and in 1982 that meant it cost $10,000.

      Thus, the Mac: a Lisa with no multitasking, no snazzy office suite built in, no snazzy template-driven GUI, just single-tasking and conventional files and programs... but it ran in 128kB of RAM on a single floppy.

      Just barely, but it worked. And it still cost $2,500 in 1984.

  9. PghMike

    You must be kidding

    *Everything* in this article is wrong.

    My MacBook is much easier to use for a normal person than the systems I used at MIT in the 1970s. If you told someone what it was like to use MIT's ITS (Incompatible Timesharing System), they'd think I'm making a (stupid) joke. To login as "ota" you'd type "ota$u" -- no password required. And then you were logged into the DDT debugger instead of a shell. The whole thing was written in PDP-10 assembly.

    The differences between these languages are invisible to me -- Dylan looks like virtually every object-oriented language I've ever seen; it's hardly amazing.

    "Rich file systems, with built-in version tracking, because hard disks cost as much as cars: gone. Clustering, enabling a handful of machines costing hundreds of thousands to work as a seamless whole? Not needed, gone. Rich built-in groupware, enabling teams to cooperate and work on shared documents? Forgotten. Plain-text email was enough."

    *Seamless clustering* in the 1980s? What planet are you talking about?

    Rich built-in groupware? I have no idea what you could be talking about. Certainly Xerox Altos didn't have that, and they were the most advanced systems from back then. PDP-10s didn't even have a GUI, and the less said about the user experience on OS/360 the better.

    People loved Lisp machines because they were some of the first systems that had a GUI at all. You had to program them in Lisp, which, in case you missed it, was a pretty unreadable language. It had some clever parameter binding rules that let you create closures, but otherwise it was a pretty low level language. C++ typing is much easier to use. Modern programming languages exist to help large teams to work on the same projects. You need clean interfaces with enforced module isolation and modern C++ versions exist to do just that.

    Editing the operating system you're running as you're running it -- what could possibly go wrong?

    "Firstly, because the layers are not sealed off: higher-level languages are usually implemented in lower-level ones, and vulnerabilities in those permeate the stack."

    No, sorry, C++ compilers are written in C++. I haven't heard of a high level language compiler written in assembly for decades.

    You want some good system design advice from that era? Here, read Butler Lampson's "Hints for Computer System Design". It's on the web here: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/acrobat-17.pdf Lampson seems to think that virtual memory is much harder to deal with than it is, but aside from that, his advice has aged well.

    1. ldo Silver badge

      Different Varieties Of OO

      Worth noting that the difference between programming languages can be subtle.

      For example, Dylan invented C3 linearization which, in retrospect, seems like the only rational way to handle multiple inheritance. Python has adopted the same system. C++ has not. I think this gives Python an advantage over languages which avoid multiple inheritance altogether (like Java and C#) because they assume it’s just too complicated to handle.
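
      For what it's worth, here's a quick Python illustration of what C3 buys you (the class names are just made up for the example): a diamond of multiple inheritance resolves to a single, unambiguous method resolution order, and super() walks it, visiting each class exactly once.

          # Classic diamond: D inherits from B and C, which both inherit from A.
          class A:
              def who(self):
                  return "A"

          class B(A):
              def who(self):
                  return "B -> " + super().who()

          class C(A):
              def who(self):
                  return "C -> " + super().who()

          class D(B, C):
              def who(self):
                  return "D -> " + super().who()

          # The C3 linearization gives one well-defined order...
          print([cls.__name__ for cls in D.__mro__])   # ['D', 'B', 'C', 'A', 'object']
          # ...and super() follows it.
          print(D().who())                             # D -> B -> C -> A

      Java and C# sidestep the question by banning multiple inheritance of implementation; C++ leaves the ambiguity for the programmer to resolve.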

    2. Ian Johnston Silver badge

      Re: You must be kidding

      *Seamless clustering* in the 1980s? What planet are you talking about?

      By the late 80s I could log into my then university's VAXcluster with no need to specify (and I think no way to know) which of the four machines making it up I was actually using. Otherwise, I'm with you.

      1. ldo Silver badge

        Re: Seamless Clustering

        I used a VAXcluster, too, back in the day. While the different nodes were able to share the same filesystems, you were certainly under no illusion that it was all one big computer: you had to choose which node to login to, and processes on one node could not directly see what was on other nodes.

        Compare this with a modern Linux supercomputer. On the Top500 list you will see many examples with a million nodes each. And each one does indeed behave like one big computer.

        That’s progress, eh?

        1. Phil O'Sophical Silver badge

          Re: Seamless Clustering

          you had to choose which node to login to, and processes on one node could not directly see what was on other nodes.

          Not so. You could set up a network address for the cluster, and the load balancer would assign your login to the least-loaded node. It was also streets ahead of most modern clusters, with things like clusterwide semaphores (you could create a semaphore, wait on it from any cluster node, and set/clear it from any node), a cluster filesystem with full shared locking, etc.

          Compare this with a modern Linux supercomputer. On the Top500 list you will see many examples with a million nodes each. And each one does indeed behave like one big computer.

          That's a totally different concept, more like SMP than a cluster of distinct independent nodes. VAXen had early SMP, for example 8 4-processor 8800s could be clustered into one 8-node (so 32-processor) cluster. Each mechanism had different characteristics.

          1. ldo Silver badge

            Re: Seamless Clustering

            Yeah, on the LAT server we could just say “c[onnect] «clustername»” instead of specifying a node name, and it would choose which node to log us into. But we still couldn’t see processes on other nodes. They shared the filesystem, but not the process space, or network interfaces, or logical names, or IPC, or anything else.

            The “totally different concept” is precisely the point: it rendered VMS-style clustering obsolete.

            1. Phil O'Sophical Silver badge

              Re: Seamless Clustering

              They shared the filesystem, but not the process space, or network interfaces, or logical names, or IPC, or anything else.

              Not at all, there was certainly no shared process space, but each node could see processes on other nodes, and there certainly was clusterwide IPC, clusterwide logical names, redundant clusterwide access to devices (including network), distributed lock manager, etc as well as a shared filesystem.

              The “totally different concept” is precisely the point: it rendered VMS-style clustering obsolete.

              I disagree. The VMS clustering model was designed to handle business applications in a high-availability environment, supporting hundreds of users accessing files, databases, etc. and maintaining full service even in the presence of faults. It's also been implemented in various Unixes (Solaris, HP-UX). Linux has several implementations of varying degrees of sophistication, and it's still far from obsolete in business environments.

              The million-node HPC model you describe is aimed at the High Performance Computing market, as the name suggests. It's more intended for non-interactive compute-intensive parallel processing, with little need for things like shared file & DB access. The two approaches are complementary, not hierarchical.

              1. ldo Silver badge

                Re: Seamless Clustering

                There was no clusterwide network interface. For example, the VAXcluster I mentioned had the BIND DNS server running simultaneously on its two main nodes. Both were configured as primary servers, reading from the same config files. Each one had its own IP address. That wouldn’t have worked if there was any commonality of network interface.

                Think of a VAXcluster as a substitute for the fact that DEC had no concept of a file server, so it went for a sector-level disk server instead, and you will get the idea.

                Linux is very much an interactive OS, even on a supercomputer. You can log in and use standard commands like “lspci” and “lscpu” to examine the hardware, and you get output not too different from that on a regular PC, except it goes on and on ... and on and on.

                1. Sir Lancelot

                  Re: Seamless Clustering

                  Seems to me your knowledge and understanding of VMS Cluster technology is rather limited.

                  1. ldo Silver badge

                    Re: Seamless Clustering

                    No need to take my word for it. Go to Bitsavers and read about it yourself. Half their entire documentation collection is about DEC. There’s plenty there on VMS clustering and other topics, described in exhaustive detail.

        2. Steve Davies 3 Silver badge

          Re: Seamless Clustering

          Plus this made me cringe

          Unfortunately, though, in the course of being shrunk down to single-user boxes, most of their ancestors' departmental-scale sophistication was thrown away. Rich file systems, with built-in version tracking, because hard disks cost as much as cars: gone. Clustering, enabling a handful of machines costing hundreds of thousands to work as a seamless whole?

          My workstation for 3-4 years was a diskless MicroVAX that was part of a VAXcluster. It was a great day when I managed to scrounge a small HDD to fit into it. While diskless worked, coax Ethernet was slow by modern standards, so making the MicroVAX a full cluster member was a big step forward.

          Naturally all Vax nodes had a version tracking filesystem as standard. That is something that we have lost. Why? Doh! MS rules ok.

          1. ldo Silver badge

            Re: VMS File Versions

            VMS file versioning was not something you wanted to rely on as any kind of history mechanism. It was best thought of as a short-term fallback, in case you should screw up in making some change to a file, you still had at least one older version to revert to—if you remembered in time. This was because older versions were so easy to make disappear with a single wave of a PURGE command. That, and being able to enforce version limits on individual files or the contents of entire directories, meant that the lifetimes of those older file versions were ephemeral at best.

            This limited use is probably why the idea has never been more widely adopted. Even Dave “Mr VMS” Cutler didn’t seem to want to carry it over to Windows NT.

            Fun fact: the original Macintosh File System (the one that was used on the 400K single-sided floppies), had a one-byte “version” field allocated in the directory entry for each file. But that was never given any value other than zero.

            1. Phil O'Sophical Silver badge

              Re: VMS File Versions

              The ISO 9660 filesystem for CDROM also supports file name versioning, although the only time I've ever seen it appear was when mounting a CDROM on a VMS system. I'm not sure what happens if a CDROM with multiple versions of a file is mounted on a Windows or Linux box, I must try it sometime.

              1. Jou (Mxyzptlk) Silver badge

                Re: VMS File Versions

                In Windows, ISO 9660 will always show the latest version. You can even manually produce such a CD/DVD with normal burning tools, as long as you don't finalize the CD/DVD. I never tried it, but using UltraISO or similar tools I might be able to access the previous versions too, since I doubt that functionality was or ever will be implemented in Windows Explorer. As for Linux: you made me curious :D.

    3. Anonymous Coward
      Anonymous Coward

      Re: You must be kidding

      Perhaps the "rich built-in groupware" refers to something from the IBM mainframe? Was one of those suites of software called "PROFS?"

      VM/CMS on the IBM mainframe was also pretty decent to use and gave the user the impression that the environment they were using was "personal." My first experiences with FORTRAN were with VM/CMS.

    4. lispm

      Re: You must be kidding

      > You had to program them in Lisp, which, in case you missed it, was a pretty unreadable language. It had some clever parameter binding rules that let you create closures, but otherwise it was a pretty low level language.

      The Lisp Machine was programmed in a very capable object-oriented Lisp (with mixins, multiple inheritance, :before/:after methods, ...), down to the metal. Everything was completely object-oriented, incl. a lot of the low-level operating system code, incl. file systems, network stack, graphics, ...

  10. Bebu
    Windows

    Should be part of cs100 :)

    Forgetting history etc

    Or ghosts of Christmas Past?

    The crucial part about (biological) evolution is successful reproduction. The individuals best adapted to their prevailing environment are a dead end if their progeny (if any) don't themselves successfully reproduce.

    Probably why the world is full of rapidly reproducing vermin and there aren't any libido-less superwombles.

    The obvious analogy in the technological world is: if your product doesn't sell in enough volume or with sufficient margins, begetting the next one is problematic. In the late '70s even the least expensive digital computer was beyond most people - even with the advent of the first eight-bit processors (especially outside North America).

    My first machine was a Tandy Color Computer (M6809), which then (outside the US) cost, in today's money, the equivalent of a decent consumer notebook. The next was a consumer CP/M 2 & 3 machine, because it was half the price of an Apple II and had a lot of development software (free/shareware). Later, when the choice was between an Amiga and a PC/AT, the price difference for the Amiga was even greater. At that time I was attracted to the Amiga's hardware and software (TRIPOS) design.

    If at the beginning someone had given me an Alto my future might have been rather different (I had read and owned all the PARC Smalltalk books - Adele Goldberg(?)). Eventually I ran Digitalk Smalltalk/V on a PC/XT and mucked about with XLisp and Scheme (and Forth and, later, Actor). All eventually the impotent superwombles.

    Concurrently I ran various Unix clones (Minix 1.0, Coherent 3, etc.) and worked in Unix minicomputer bofhland, where Ethernet was starting to become the norm but serial (RS-232) terminals were still the desktop standard. Even then it was pretty clear the network was "the thing" and the future.

    I would not underestimate the evolutionary fitness of open source (and free ~ libre) software. The technological explosion from the 1990s into the last few decades has deep roots in that fertile soil. Even at the highest quality, all proprietary closed-source software eventually dies and becomes extinct, and not even belated open sourcing can normally rescue it.

    As an afterthought ETH's Niklaus Wirth's Oberon system and other work probably also deserves an honourable mention here.

  11. Bitsminer Silver badge

    Yes but no

    (There are some interesting comments on HN and rebuttals by Liam. But here is my take.)

    usually completely ignoring all the lessons learned in the previous generation

    Umm, not really. RSX-11 on the PDP-11 was a decidedly calculated subset of OS/360 on the competition's machine. It had file IO added (and the whole almost-flat filesystem) but the rest is fairly simple process management etc. IBM's designs were good enough to copy. And added to the 16-bit cheap-ish CPU that was a follow-on to numerous other designs by a very experienced software and hardware design crew at DEC.

    The notion that lessons were not learned I think is incorrect. The lessons were very much learned: Choose the best two-thirds of your competition's product, add your own third. Sell and sell hard. (And, as always, beating IBM on price is child's play.)

    The Symbolics stuff was legendary (at the time) for complexity and cost. And uselessness -- were there any actual software products sold requiring a Symbolics machine? Nobody buys $100k machines to heat up a room with the wasted power. You have an expensive user because they have skills (orbital calculations, microprocessor design, chemical plant optimization) so you buy them an expensive tool. Who bought a Symbolics workstation for an end-user? Anybody? Bueller?

    Another example: the IBM 360 series had the optional model 2250 display. It was early days and used a light-pen to detect the flash on the CRT but it sold well because there was design software to go with it. That was the beginning of the workstation era -- build or design things quicker with interactive computing. Only affordable by mega-corps but the market was then "proven". And competition quickly appeared.

    I don't claim the notion that a skilled thinker could manage to produce useful products with LISP-like languages is wrong. Arguably the equivalent modern example is the construction of WhatsApp by a handful of people using Erlang. Now that is an exotic language.

    These are cases where the product filled needs of the market. Sometimes the market didn't even know it needed them.

    Arguing over New Jersey/Stanford or MIT "approaches" is irrelevant. Designers do what designers do and adding post-facto explanations by technology historians is not adding anything useful.

    There were only these two schools of thought you say? Oh.

    1. ldo Silver badge

      Re: RSX-11 ≠ OS/360

      Hard to see the connection between the two. RSX was, from the beginning, an interactive OS. OS/360 was a mainframe, batch-oriented OS. On RSX, you had an interactive command line (MCR), which was never further than a CTRL/C away. OS/360 had nothing like that.

      1. Michael Wojcik Silver badge

        Re: RSX-11 ≠ OS/360

        OS/360 was not the only OS for the 360.

    2. lispm

      Re: Yes but no

      > Who bought a Symbolics workstation for an end-user?

      A lot of the high-end animation industry bought them in the late 80s. A lot of the 2D and 3D animation on TV was done with the Symbolics Graphics suite. You'll find demo reels on YouTube of TV commercials from that time. For example, Apple animated an ad for the introduction of the Mac IIfx.

      iCAD started parametric CAD on the Lisp Machine. CDRS from Evans and Sutherland was deployed on them in the early days, used in the automotive industry for car design. American Express used a bunch of them as application servers for credit card transaction checking. Swiss Airlines used them to optimize bookings of airline seats, connected to a mainframe. Some power plants used them to schedule operations. NASA had Symbolics machines overseeing Space Shuttle launches with HDTV cameras. For scheduling the Hubble Space Telescope, a system called Spike was developed. The military had large troop training simulators with 3D worlds (SIMNET); the troop training worlds were generated by a Lisp Machine. TI used them in chip design. Ford used them for manufacturing scheduling. There was a language translation application (METAL) where the translation software ran on a Lisp Machine and the users used PC frontends to edit the text. And so on...

    3. MonkeyJuice Bronze badge

      Re: Yes but no

      > The Symbolics stuff was legendary (at the time) for complexity and cost. And uselessness -- were there any actual software products sold requiring a Symbolics machine? Nobody buys $100k machines to heat up a room with the wasted power. You have an expensive user because they have skills (orbital calculations, microprocessor design, chemical plant optimization) so you buy them an expensive tool. Who bought a Symbolics workstation for an end-user? Anybody? Bueller?

      https://en.wikipedia.org/wiki/Mirai_(software) - notably used to model the original Nintendo 64 Mario characters. Now rewritten and available as the excellent and FOSS Wings3d.

      The Genera color machines were also used to render and composite the effects in the movie Free Willy.

      The Boids algorithm, widely used in the VFX industry even today, was developed by Craig Reynolds on a Symbolics machine.

      Symbolics Genera lifted the presentation based user interface system from CONS/CADR lispm work. This pissed off RMS so much that he founded GNU and the GPL.

      Genera was the Jupyter Notebooks of its time (CLIM 2, for all its warts, was the ultimate outgrowth of this): it was fairly easy to knock up quick, usable interfaces that people, even the suits, could quickly use once familiar with its idioms.

      Its price point was so high it assumed you were going to read the 12-volume bookshelf, and it did not hold your hand. The original 3600 required 3-phase power and would crush a washing machine.

      The user interface for the lispms is clunky as all hell, and while I appreciate Emacs's power, it feels extra clunky unless you swap Caps Lock with Tab on a modern keyboard, because _keyboards have changed over the last 40 years_.

      If you read RPG's "Lisp: Good News, Bad News, How to Win Big", the WIB/TRT dichotomy is really quite tongue in cheek, but notice that most of the Lisp features he laments as missing have slowly appeared in modern languages.

      Your mac is easy to use, because it's built on the dead bodies of those who came before it, billions of dollars expended in UX design, and decades of trial and error. Claiming otherwise is like pointing and laughing at Newton because he mixed his own piss with lead attempting to make gold back when chemistry was called alchemy and phlogiston was vital for combustion.

      Was it The Right Thing? Probably not.

      It was just better than the Worse options at the time for a large subset of problems that you'd throw Python at today.

  12. cdrcat

    Remove those retro rose-coloured glasses

    There are great reasons why those good ideas died.

    "Save the world" was awful as soon as there was more than one developer. I played with IBM smalltalk - ugggh. I delivered commercial software written in an object oriented DB (similar problems with delivering updates and versioning as smalltalk).

    Programming languages use text-based source files not because the solution is pretty, but because the alternatives worked out to be hideous.

    LISP is amazing *if* the developer is exceptionally highly skilled. Every person who demonstrates the productivity of LISP has good taste well above the average developer. Mere mortal developers create spaghetti - insanely bad DSLs and totally opaque macros. Maybe one person can maintain their own code, but nobody else will ever be able to touch it, because every codebase becomes so esoterically customised. There's a reason valuable LISP code can never be passed on. There is a bad reason that there are so many incompatible LISP dialects and every LISP author writes their own library code. Certain types of productivity lead to unproductive behaviours in the large. The same problems can occur with modern programming languages, but successful languages and communities have norms against custom complexity.

    1. ldo Silver badge

      Re: Remove those retro rose-coloured glasses

      LISP code can be written in quite a readable fashion. The secret is to avoid the traditional “parenthesis pileup” layout, and use something more like this:

      (funcall delete-between ; Get rid of commented-out sections
          (lambda () (search-forward "<!--" nil t))
          (lambda () (search-forward "-->" nil nil))
      )
      (dolist (tag '("head" "style" "script"))
          ; Gobble entire content for these tags, up to and including closing tags
          ; (luckily they don't nest).
          (funcall delete-between
              (lambda () (re-search-forward (concat "<" tag "\\b[^>]*>") nil t))
              (lambda () (re-search-forward (concat "</" tag "\\b[^>]*>") nil nil))
          )
      ) ; dolist

      What do you think?

      1. Jou (Mxyzptlk) Silver badge

        Re: Remove those retro rose-coloured glasses

        But at the time LISP was created you had the 80-column limit on the display. And you have already broken it. If you use two spaces instead of four as the indent, your simple example fits.

        Luckily, having 160 or 200 columns is no problem today, and is sometimes actually needed.

        1. ldo Silver badge

          Re: Remove those retro rose-coloured glasses

          I normally have my editor windows set to 100 columns wide.

        2. Michael Wojcik Silver badge

          Re: Remove those retro rose-coloured glasses

          having 160 or 200 columns is no problem today

          For many people it is a problem: following text lines that long is difficult or fatiguing for them. Code readability by other programmers is very important for anything that will be maintained.

          1. Jou (Mxyzptlk) Silver badge

            Re: Remove those retro rose-coloured glasses

            Readability and long lines don't have to exclude each other, at least not when only some lines are that long and not 20% (or more) of them. You can write in an obfuscated programming style with short lines too.

            Here is a typical PowerShell switch statement where I see no reason to split the "+"/"-" cases across multiple lines, because the code is obvious and readability does not suffer here; it is just the Write-Host statement which makes the line long. EDIT: those two "+"/"-" evaluation statements are line-broken by El Reg, not in my code :D. (The snippet is from my script which encodes any input video to AV1 with multithreading, by sending video snippets, from scene change to scene change, to ffmpeg.exe decode | SvtAv1EncApp.exe encode or | rav1e-ch.exe encode in parallel instead of using tiles, while keeping X% of the CPU free.)

            switch -regex ($key.Key) {
                Escape { return $true }
                P { return $true }
                Q { return $true }
                X { return $true }
                "(OemMinus|Subtract|\-)" { $script:MinFreeCPU-- ; Write-Host -NoNewline -BackgroundColor DarkGreen -ForegroundColor Yellow " MinFreeCPU changed to $($script:MinFreeCPU) "; return $false }
                "(OemPlus|Add|\+)" { $script:MinFreeCPU++ ; Write-Host -NoNewline -BackgroundColor DarkGreen -ForegroundColor Yellow " MinFreeCPU changed to $($script:MinFreeCPU) "; return $false }
                default { write-host -NoNewline " $($key.Key) "; return $false }
            }

          2. ldo Silver badge

            Re: Long Source Lines

            What I found is that my source lines, at least the parts of them that are not indentation, are not that long. So the wide editing window is there to give me room for multiple levels of indentation, so my code stays readable.

      2. HuBo Silver badge
        Facepalm

        Re: Remove those retro rose-coloured glasses

        Hmmmmm. And should we, perchance, also re-circularize and solidify Salvador Dali's melting clocks? Or de-cubify Pablo Picasso's depiction of Dora Maar? Should we now play "the notes you don't play" in Miles Davis' purple Jazz, with green breath?

        Diagonalism is a differentiating aesthetic characteristic of LISP code that conveys a sense of freedom to the reader. The code has the inspiring allure of multiple ski jumps, flowing from a mountain slope generously oriented from top-left to bottom-right. A structure that is also evocative of The Great Wave off Kanagawa.

        The jump-off points of the downhill skiing allegory, and the wave's break-off into the likes of dragon-claw sprays, evidence the crucial role of tail calls, from which the equivalence of iterative and recursive philosophies seamlessly emerges.

        Redressing the structural aesthetics of such code would be like taking the eros out of Marcuse's civilization, turning the wondrous tail call into a vulgar booty call!

      3. lispm

        Re: Remove those retro rose-coloured glasses

        Few professional programmers do that. If you look at parentheses to read Lisp code, then you're doing it wrong.

        Keeping the list balanced, etc. is done by the source editor. It also counts parentheses for you. It swaps list elements, inserts new ones, deletes some, shows arguments, etc.

        1) One reads Lisp code by looking at the operators, which usually have names and are the first element of a list.

        2) One reads Lisp code by the indentation patterns.

        To an experienced Lisp programmer, the code you show looks horrible. Never put a closing parenthesis on its own line. Never write end comments.

        One learns to read Lisp code by writing, manipulating and reading it.

        One big difference between Lisp and, say, C, Java and many other languages: the source code is actually data and will often be formatted by the Lisp system, outside of an editor. Even in an editor I would have Lisp format the code for me, including the code layout. In the Lisp Machine editor this was called "grind", and at the Lisp data level it is called pretty printing. Thus one learns how the Lisp system formats code according to rules.
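
        Mainstream languages have only a pale imitation of this, but here's a rough sketch of the "code is data" idea in Python (the scruffy snippet is just an example): the source is parsed into a data structure, can be inspected or rewritten like any other data, and is printed back out in a canonical layout - roughly what grind / pretty printing did, minus the deep integration.

            import ast

            # Source code as a plain string, badly laid out on purpose.
            messy = "def add( a,b ):\n    return (a+   b  )"

            # Parse it into a data structure (an abstract syntax tree)...
            tree = ast.parse(messy)

            # ...which can be walked and inspected like any other data...
            print(ast.dump(tree, indent=2))

            # ...and printed back out in a canonical layout (Python 3.9+).
            print(ast.unparse(tree))

        The unparse step spits out a cleaned-up version of the function, formatted by rule rather than by hand.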

  13. cjcox

    Used TI Explorers

    Never used a Symbolics, but I did use TI Explorers. The ability to write easily partitionable workloads that allowed many Explorers to work together simultaneously (in my case for auto-routing) was amazing. I mean, a Sun box may have been "faster" at the time, but not vs. 10 TI Explorers, and you just couldn't glue those Suns together that simply.

    The data is the code.... pretty cool stuff.

  14. Ian Johnston Silver badge

    "We took about ten to 12 man-years to do the Ivory chip, and the only comparable chip that was contemporaneous to that was the MicroVAX chip over at DEC. I knew some people that worked on that and their estimates were that it was 70 to 80 man-years to do the microVAX. ..."

    When different people tell you that they can achieve such a huge differential in productivity – one tenth of the people taking one tenth of the time to do the same job – you have to pay attention.

    One tenth of the people taking one tenth of the time would reduce 70 to 80 man-years to 0.7 to 0.8 man years. A reduction to 10 to 12 man-years would need about 40% of the people for 40% of the time.
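
    Spelled out, taking 75 man-years as the midpoint (a trivial check, in Python only because it's handy):

        # One tenth of the people for one tenth of the time is a 100x cut in effort:
        print(75 * 0.1 * 0.1)   # 0.75 man-years, not 10-12
        # Getting from ~75 man-years down to ~12 is roughly
        # 40% of the people for 40% of the time:
        print(75 * 0.4 * 0.4)   # 12.0 man-years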

  15. Ian Johnston Silver badge

    Curiously enough, I have just been re-reading the "UNIX-HATERS Handbook". It reminded me of just what a malign influence RMS has been, not only in his frenzied attempts to restrict what people can do with software but also in his - and, to be fair, his acolytes' - determination to preserve for ever the clunkiness of a late-70s minicomputer OS.

    1. ldo Silver badge

      You make him sound like a one-man Microsoft corporation, able to force the entire computing industry to do things his way ...

    2. Anonymous Coward
      Anonymous Coward

      @Ian_Johnston

      Quote: ".....malign influence....."

      Ah yes....malign influence....Bill Gates, BASIC, Windows, Charles Petzold, Cisco Systems, Fort Meade, "Section 230", Mark Zuckerberg.......

      ....and you think RMS is malign?

  16. Doctor Syntax Silver badge

    "On the Lisp machines, your code wasn't trapped inside frozen blocks. You could just edit the live running code and the changes would take effect immediately. "

    Can you think of a more productive environment for malware? If we have a malware problem now, just think what it would be like in Lisp.

    1. MonkeyJuice Bronze badge

      Oh the lisp machines were _riddled_. It was a more innocent time!

  17. Duncan Macdonald

    Multiple languages were and are needed

    In the 1970's and 1980's I worked on process control using PDP-11 computers. With their low speed and limited memory, device drivers were normally written in assembler for minimum memory and maximum speed. However assembler is only suitable for smaller jobs - the application programs were written in higher level languages to make better use of the one part of computing that has not improved in performance - the human brain. At any one time a human can reasonably work on a module of at most a few hundred lines of code (one of the biggest reasons for the development of subroutines was to cordon off sections of code so that they could be developed independently). A hundred lines of a high level language (eg C, FORTRAN, CORAL-66, PL/1 etc) can easily be the equivalent of over a thousand lines of assembler.

    One project I worked on was the monitoring system for the Dinorwig power station - the system had a single PDP-11/34 (1/3 of a MIP, 248kB RAM and 4.8MB of disk storage) to handle about 5000 plant inputs, 6 line printers and 3 displays. This required tight coding to make everything fit and run fast enough - when it was replaced about a decade later with a DEC Alpha system (over 100 MIPS, 32MB RAM and 200MB of disk storage) there was no longer the tight coding requirement, and the 12 man-year initial development was reduced to a small fraction of the time.

    With current computers the only places where assembler code is still used are in the early stages of startup (setting up hardware parameters and loading microcode), some shared high-speed maths libraries and some computer viruses(!!). Because of the HUGE speed improvement and the MASSIVE price drop since the early days, the best way to do computing projects has changed - a cheap £60 Android phone has a thousand times the speed, memory and storage of the Dinorwig PDP-11 system, which cost the equivalent of well over £200,000 in today's money. The most expensive component is now programmer time. The FAST, CHEAP, GOOD - choose any two - trade-off has firmly gone to FAST and CHEAP for almost all systems.

    1. Dan 55 Silver badge

      Re: Multiple languages were and are needed

      Present-day systems may be cheap but I remain to be convinced about whether they are fast. There's never been more bloat than now.

      1. doublelayer Silver badge

        Re: Multiple languages were and are needed

        And in most cases, the bloat doesn't much hurt you. In most devices, the fact that the operating system uses a few more gigabytes than it would need to isn't a problem because your disks are so massive and RAM is so cheap. If it is a problem, of course, you can start to strip out pieces to make it run more smoothly, but most people don't have to. The hardware is faster, so the software doesn't have to be really efficient to run at the same speed anymore. If you need a program to run really fast, chances are that you don't have to try running it close to bare metal to get what you need.

        It's similar to our bodies. There are various things we eat which are not digested, and even those things we can digest are not completely turned to useful energy. That's not a problem; we have no need for a digestive system that can usefully consume paper and turn it into energy, and most of us live in an environment where consuming more than we need is the bigger problem. The inefficiencies of our digestive system have, so far, not been a problem justifying extra effort to improve them. Similarly, although developers could probably go to every program and strip down its memory usage, that would take more developer time than the efficiency justifies. This is especially true when we compare this software to software from decades past, where one resource would usually be spent to avoid having to use another: for example, compressing data in memory so you didn't need much RAM meant the CPU had to do more work to get at the data, so the program itself ran more slowly. That is much less common now that we can keep all of the uncompressed data there.

        We also take fewer shortcuts. For example, the versioned filesystems that Liam bemoaned the loss of in the article. We didn't have those on early personal computers because the disks were tiny and more files needed to be stored on them. Running a versioned filesystem would have used up the disk way too fast. A server had an admin who could probably make a dent in that, which is why those made some sense, and servers were probably also not storing the sort of large files which personal computers eventually did. Because they had to be cheap enough to use, the feature was too expensive for users to have. While they may not come out of the box today, we can have a versioned filesystem if we want it just by configuring one. So what if it uses more storage; we can afford a bigger disk, right? This is what we mean by "fast": you get what you want out of the computer with speed that's a lot faster than you need it to be.

    2. Nbecker

      Re: Multiple languages were and are needed

      One reason assembly is less needed today that was omitted, completed have gotten much better. Completed used to be stupid, just translating high level to low level by rote. Clever programmers could try to hand optimize. Now completed can optimize better than humans in most circumstances.

      1. Jou (Mxyzptlk) Silver badge

        Re: Multiple languages were and are needed

        > completed

        > Completed

        Whichever OS or program you were running that corrected "compilers" and "Compilers" to "complete*": it is time to switch to a less invasive one, with less of the "I know better than the user" attitude.

  18. Anonymous Coward
    Anonymous Coward

    Sorry Liam, Not Even Wrong...

    There are so many fundamental technical errors in this article that it is very obvious that you were A) not around at the time; B) never actually used any of the machines / tech you mentioned in any meaningful way. Professionally.

    Just to pick a few totally wrong statements at random.

    The original IBM PC, the 5150, shipped with a CGA graphics card and a sound chip. Read the BYTE articles for the rest of 1981 after launch for details. VGA came later. It did not ship with "no graphics and no sound". The original Apple II in 1977 had semi memory mapped graphics and a pretty good sound chip although it was Apple II Plus in 1979 that made nice looking memory mapped graphics a lot easier.

    You most definitely had to boot Lisp machines. They were not like magnetic-core-memory minis like the DEC PDP-8s, where you could power off / power on and everything in application memory was still there. Unchanged. We ran PDP-8s for months between reboots with no ROMs, HDs, etc. Just turning power off and then back on again. But power off something like a Symbolics 3600 and it most definitely did have to boot up again on power on. Then your application / cons space etc. was reloaded to the state at power off. That's all. Just like hibernate in recent Win32 OSes.

    Dylan was a total mess of a functional language that was a huge resource hog and never worked properly on the Newtons. It was the Objective C of functional languages. The moment I heard the Newton was going to run a functional language I immediately wrote it off as a failure. An opinion totally confirmed when I saw the first device in action. The guys at Palm did not make such an amateur hour mistake. Which is why the PalmPilot was a huge success and the Newton an industry joke from the get go.

    And I am afraid the rest of the article is full of equally serious technical errors. That's not how it actually was. How it played out. Or why. To those of us who were around at the time. And actually used most of these technologies. Professionally.

    Almost all of these technologies that failed, failed for very good reasons. Sometimes technical, but usually business. They were either too expensive (more than customers were willing to pay), late to market, never worked properly, or were just sold by people who did not know what they were doing - easy pickings for those who did know how to run a business.

    In a fight between a shark and a dolphin guess who wins? And when it came to most of these technologies the sharks won. And not just the business sharks either. The "political" ones too. Dave Cutler, Bjarne Stroustrup, Richard Stallman, and Linus Torvalds etc comes to mind.

    1. Vometia has insomnia. Again.

      Re: Sorry Liam, Not Even Wrong...

      DC in particular seems to get way too much credit. It's also interesting to see how much of a contrarian he was: he hated Unix, he hated DEC's large systems, he even hated DEC's medium systems that weren't his baby, and ISTR he trod on a lot of people's toes with RSX (probably still giving him too much credit; I mainly remember its naming, the letters "sounded cool", revisited in the '90s by the woefully inept marketing for EVAX, or VAXng, Alpha, or AXP, or whatever the fuck they ended up calling it). He was happy to be credited for VMS when it was more a case of the big boys taking over something DEC wanted in order to straddle its mainframe and mini lines. Seems that since then he's been a bit of a prima donna whose approach has always been his way or no way.

      That's probably a bit scathing, but the way some people would have it, so many that I'm worried it's just become accepted "fact", is that DC was DEC's lone tech genius and DEC died because he left and went on to create VMS's "natural successor", Windows NT. None of that is true.

      1. Roo
        Windows

        Re: Sorry Liam, Not Even Wrong...

        I haven't worked with Dave Cutler, but I have spent a fair amount of time using, and judging him by, his works - or at least the stuff he is credited for. The words that come to mind for the products attributed to DC are "baroque", "half-baked", "flakey", "obtuse" and "sophisticated". This might be unfair on Dave Cutler, because he was part of a team of people that produced these products. My guess is that no one else in the teams wanted to "own it" having seen the result of their labours - so they were only too happy for their work to be attributed to the man.

        "Inside Windows NT" by Helen Custer & David N Cutler illustrated the *massive* gulf between DC's "vision" for NT vs what it actually was. I was probably the only person to actually read that book, and then actually measure it up to the grim reality of mid-late 90s vintage Windows NT. Suffice to say Windows NT 3.51 fell some way short of the "vision". The common narrative is that Microsoft somehow double crossed DC so he took his ball home, leaving the product as a half-baked pile of not very good OS. Having compared the "vision" presented in the book vs the grim reality of mid-late 90s Windows NT that story sounds plausible, but the take away point remains that Dave Cutler was given a stratospheric budget & timescale (compared to say RSX or VMS) and still delivered a half-baked pile of not very good OS.

        TL;DR : I think DC should have taken a leaf out of Linus' book and spent more time writing his OS instead of writing books about his OS. Case in point: an early cut of Redhat Linux (ie: $0 budget, amateur developers) beat NT 3.51 hands down in every department: hardware support, technical support (Microsoft's support amounted to "pay us $128 to file a bug report that we will ignore"), networking, reliability, performance and scalability.

        1. Vometia has insomnia. Again.

          Re: Sorry Liam, Not Even Wrong...

          None of that really surprises me. I'd heard his reason for leaving MS was his insistence on keeping the GUI out of the kernel vs. Gates' insistence it run in privileged mode for visual performance reasons but "I'd heard" was more hearsay than anything. MS were probably at their peak of double-crossing people at the time having just done the same with IBM (again) and DEC too, as well as countless others. A lot of stuff they got from DEC they failed to implement; I remain astonished that they never used the clustering technology.

          My first impression of NT was similarly underwhelming. By that time I'd heard endless hype about how it was VMS for the modern generation and it just totally wasn't. At all. I only once had to develop something on it and the experience was one of the very lowest points of my career.

          I've also heard stuff about DC moving on from NT to work on one of the popular games consoles at the time which a gamer friend describes in pretty much the words you describe above. It was apparently a bit of a disaster.

          He's probably good at being a talking head giving his opinions about stuff, which he was always enthusiastic about, but apparently not so much to work with.

          1. MarkMLl

            Re: Sorry Liam, Not Even Wrong...

            "I'd heard his reason for leaving MS was his insistence on keeping the GUI out of the kernel vs. Gates' insistence it run in privileged mode for visual performance reasons"

            I'd say that history has proven Gates wrong. MS was very much into "let's improve the user experience", but the amount of productivity sacrificed (and crime perpetrated) as a result of that vastly outweighs the benefits.

            1. Vometia has insomnia. Again.

              Re: Sorry Liam, Not Even Wrong...

              I completely agree. As much as I may think that people give DC way more credit than is due (something I was also corrected about long ago) I'm certainly not going to the other extreme of saying he didn't know what he was talking about: Gates (or whoever was responsible) was absolutely wrong about this and it's indicative of the sort of decision-making that's always prevailed at MS. As long as it's shiny enough to guarantee sales, other people can deal with the problems it causes. Horrible way to write software and do business, sad that it keeps on working out for them.

        2. Duncan Macdonald

          Re: Sorry Liam, Not Even Wrong... -- Dave Cutler

          I have done device driver work on RSX-11M - the kernel code that Dave Cutler wrote was very well written (and unusually even better commented!!). Getting a multitasking multiuser protected mode OS to fit into a memory footprint of under 32kBytes and making it one of the most stable operating systems I have ever used was a superb coding job. All the kernel code for RSX-11M was written in MACRO-11 assembler.

          The only times that I have ever seen RSX-11M crash was due to hardware faults (or once someone pulling the mains plug out!!!).

          The later VMS operating system for the VAX (and later Alpha) computers was if anything more robust. One VMS cluster that I was using had the leads from one computer to the disk controller pulled out by accident - the computer that had lost its direct disk access saw that another member of the cluster still had a connection to the disk controller so it rerouted the disk traffic over the Ethernet network to the other computer and thence to the disks. No user interaction was required - the only observed effect was a minor slowdown until the leads were reconnected at which point VMS resumed using the direct connection.

          Many VMS clusters had uptimes of multiple years despite hardware faults and upgrades, computer replacements and OS upgrades.

          If Dave Cutler had not joined M$, OS/2 might well have taken the place of Windows NT.

          1. Roo
            Windows

            Re: Sorry Liam, Not Even Wrong... -- Dave Cutler

            "I have done device driver work on RSX-11M - the kernel code that Dave Cutler wrote was very well written (and unusually even better commented!!)"

            That adds weight to my assertion that Cutler should have spent more time writing code and less time writing books / whatever it was he was doing that wasn't writing code. The history of RSX-11 is actually quite interesting as it turns out, with multiple (independent?) strands of development, and it originated as a port from a PDP-15 OS (RSX-15). It's incredible how many OSes DEC produced - they were developing & supporting TOPS-10, TOPS-20, Ultrix, RSX-11* (several distinct variants) and VMS at one point - so they must have had a *lot* of decent system programmers working at t'Mill.

            FWIW I used VMS for a decade or so and found it to be a reliable if eccentric friend. That said I really don't want to see DCL & TPU again. VAX MACRO was kinda fun though - especially having come from 6502 machine code & assembler. :)

    2. Dan 55 Silver badge

      Re: Sorry Liam, Not Even Wrong...

      The original PC shipped with MDA as the base specification, and MDA had no bitmap graphics. It had a beeper like the Spectrum, though, but the PC's interrupt setup was better than the Spectrum's so it could drive the beeper while doing other things.

      1. Anonymous Coward
        Anonymous Coward

        Re: Sorry Liam, Not Even Wrong...really?

        It's always interesting to be "corrected" by someone who I strongly suspect had not even been born when I first sat down with the 5150 technical manuals in 1981.

        I never saw non-CGA video card 5150's in real life although I know a lot of MDA config machines must have been shipped to someone. Based on how many MDA cards you could still find in Weird Stuff a decade later. But as both video cards used the MC6845 video controller they most certainly could do (low res) bitmap graphics. If you knew what you were doing. But the MC6847 was a lot easier to work with than the 6845. And by the time the AT came along in 1984 the only time you saw mono was because mono (green) monitors were so much cheaper. Not that it really mattered as by 1983 the Hercules video card had become a de facto standard on every IBM PC that was not being used for data entry. So we had "VGA" bitmap graphics on 5150's way before VGA. In 1982.

        As for sound on the 5150 you had an 8255 to play around with. Which, as anyone who worked on early audio knew, was not great, but it was amazing what some developers could get out of that (plus a timer) on a crap speaker. Not long after we had the first proper audio chips with multiple VCO's, VCF's, VCA's, mixers etc and 8 bit polyphonic sound was born. In stereo.

        The 8 bit audio rendition of Stairway to Heaven etc was a recent retro development, but very much in the spirit of the early microcomputer audio developers from the mid 1970's to mid 1980's. Very ingenious, very creative, and ever so slightly nuts.

        1. PRR Silver badge

          Re: Sorry Liam, Not Even Wrong...really?

          Most business PCs and early XTs got the MDA video for clear text. CGA made awful text, so was found in home machines (games), especially where cost forced use of a TV set for an (even more terrible for text) monitor.

          Yes, I had the complete 5150 tech ref with schematics and listings by my reading chair. I implemented a hypertext announcements board on a 5150 PC with CGA frills (eventually ray-traced scalable fonts). Yes, as clones came a LOT of Hercules cards appeared; did Herc have poor IP protection? Still monochrome, but that suited the CRT market of the day.

          > for sound on the 5150 you had a 8255

          Calling an 8255 a "sound card" is quite a stretch. Yes, a few hardcore coders could do "Wow!" works, eventually, with significant limitations. Many of the non-IBM game machines had far better audio chips.

          1. Anonymous Coward
            Anonymous Coward

            Re: Sorry Liam, Not Even Wrong...really?..again?

            From 1981 onwards never used any IBM PC / clone using RF with a TV set. That was an Apple II in 1978 kind of thing. Or saw one either for that matter. Developing MS/DOS business software mainly. MDA was never a factor in my part of the software world. Everyone was CGA. Because single mode 25x80 MDA text really does not cut it apart from simple text entry. Even on a VT100 25x80 was a pain. Which is where I expect all those Weird Stuff MDA video cards came from. Some big corp IBM PC data entry terminals. I remember boxes full of them in the Aladdin's Cave that was Weird Stuff in Sunnyvale in its heyday.

            There again no one really cared from what I remember because once the Herc card came out soon after the first solid clones arrived that's what we all got. By 1983 it was mostly Victor/Sirius clones with Hercs as the preferred MS/DOS PC. By 1984 I had shifted to commercial MacOS software dev so when the AT shipped it was a real "who cares". Like anyone was going to develop Windows 1.0 software for it. Or VisiOn or any of the others for that matter. It was only with Win3.x from 1990 onwards that MS/DOS dev started to decline. And after 1984 there was nothing more technically boring than the MS/DOS dev world.

            Never said the 5150 had a sound card. Just said it had sound and could produce more than "beep". Lots of very creative sound outputs were produced in the early days (since 1976) by single channel output before the first proper standalone audio chips started to be added to the motherboards in the early 1980's. On the lower end micros mostly. ISA cards like SoundBlaster were more of a 1990's thing. With the high end DAW audio cards being more of a MacOS thing. NuBus could do more (much more) heavy lifting in the early days.

            1. Roo
              Windows

              Re: Sorry Liam, Not Even Wrong...really?..again?

              Here in Blighty the PC was pretty expensive relative to the competition - and was primarily bought & sold as a business computer to run business software. MDA was the cheapest option - and the most common in the early 80s (at least where I lived). Most *business* software was written for that baseline too - you know the stuff: text entry, word processing, spreadsheets, dBase II, etc. I still remember the feeling of acute disappointment on encountering my first PC: the graphics and sound were piss poor relative to a BBC Micro or a C64, and you actually had to spend *extra* money to get sound and graphics that were usable... :)

              1. Vometia has insomnia. Again.

                Re: Sorry Liam, Not Even Wrong...really?..again?

                My starting point is that I remember thinking the list price of the TRS-80 was "optimistic" and the asking price for an Apple 2 was outrageous: at the time Acorn were resisting dropping the price of the BBC Model B from £400 by bundling more and more stuff with it, an absolutely bare-bones Apple 2 (16K? Don't remember offhand) without even decent graphics had a price tag of £650. That was the only time I ever saw one in a shop, and nobody I knew owned one. The PC was already available to buy by that time but apart from businesses there were no takers that I knew of because even the basic version was 2-3 times the price of the already exorbitant Apple 2. Even the computer mags at the time were distinctly unimpressed, describing how it used so many more components than any other micro to do much less, and all for a price tag that only IBM could get away with. They did like the keyboard, though, albeit in retrospectively unconvincing terms like "as good as the BBC Micro's" (though that's another debate).

                Most people had to connect whatever computer they had to their TV; few had monitors, though that number increased once Amstrad released the CPC464 with apparently decent monitors for probably less than the usual asking price of a dedicated computer monitor. IBM eventually cottoned on but kinda seriously fumbled by creating the rather duff PCjr, which was more like the home computers that'd been so popular only a bit more crap, a lot more expensive, and late; and IIRC not entirely compatible with the actual PC. And a rubbish keyboard whose novelty feature was unreliable wireless.

              2. Anonymous Coward
                Anonymous Coward

                Re: Sorry Liam, Not Even Wrong...really?..again?...same in US..

                Well if you care to look at the software available for the Apple II by 1979 (despite Apple's "Ladies Home Journal" style ads) it was almost all business software. Even in the US. The first application that turned micros like the Apple II from a reasonably successful electronic novelty into a huge success was VisiCalc. The very opposite of "home computer software". The ads in Byte from 1979 from retailers like Lifeboat Associates give a very good idea of what percentage of mid / high end micros sold to businesses / not home users. Most of them.

                Same story when you looked at who actually went into a Radio Shack to buy a Trash 80. Far more likely to be bought to run business accounting software than Space Invaders. The true "home computer" mass market only started even in the US when sales of low cost machines like the Atari 400 and Vic 20 started taking off in 1980.

                So no different from UK. Except UK was a few years later on the curve.

                Outside of UK schools BBC Computers were as rare as hen's teeth in my experience. They were not that much cheaper than an Apple II by the time you had bought all the required kit and then there was no business software to run on it. Which was most of the market. So people bought an Apple II. And VisiCalc etc. Even in the UK. A quick look at any issue of Practical Computing / PCW from say 1982 will give a very good idea of what micros were actually being bought and used at the time. Outside of UK schools. And by the time Acorn shipped cheaper machines the market had moved on.

                The big bifurcation in the home PC market between the US and UK / Europe only really started in the late 1980's / early 1990's when it was fairly common to find Macs and IBM PC clones as home computers in the US but very rare in the UK and Europe, where the rare computer found in the home was mostly machines like the C64 and the Lada of PC's - the Amstrad. Due to very different disposable incomes for most people between the US and UK/Europe.

                So back then the only micros I saw connected through an RF converter to a telly were everything from Oric's and ZX 80's to Ohio Scientific Challenger 1P boards. Stuff like Apple II's and Exidy Sorcerers had monitors. As did the Commodore PETs and TRS 80's. And the last time I fiddled around with an RF converter and a telly was with a Spectrum. In 1983.

                1. Jou (Mxyzptlk) Silver badge

                  Re: Sorry Liam, Not Even Wrong...really?..again?...same in US..

                  > it was fairly common to find Macs and IBM PC clones as home computers in the US but very rare in the UK and Europe

                  Erm, no, not for Germany. My two uncles, and a friend of my sister's, had them very early on, around the time I was ten. I.e. around 1984. My mother worked on them in her job around that time too, being one of those who could handle them better and helping others at work. My big brother was on the Atari 520ST trail at the beginning of 1987 right when it appeared in Germany (extended to be a 1040ST a year later I think). I, of course, was on a cheaper trail with my C16 with datasette and later Plus/4 with floppy by that time, but got an AMD 80286 with Hercules monochrome around the time I was 15. Some neighbour kids with more money had Amigas.

                  At least in my wider environment computers were always common, including Schneider CPC, C64, C128 and so on. And being near a University exposed me to them early on too. We even had PCs in school, by that time running Ashton Tate Framework III to learn spreadsheet and word processor basics still valid today.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: Sorry Liam, Not Even Wrong...really?..again?...anecdotes

                    Those are just personal anecdotes dear boy. I'm talking actual sales. Based on actually working in the business at the time. You know, writing software for companies that was sold to people who bought the hardware and software. For money. In very large numbers. From 1982 onwards. So I actually do know what was sold and what was installed at the time. In the UK, US, France, Germany etc. And even Japan after 1983.

                    Just because you knew people with micros/pc's at the time means damn all. I remember the actual market penetration numbers at the time. Amply confirmed by what I saw in the general population. In the US, UK and the three European countries I have immediate family in. My immediate family was PC'ed up from the get go; everyone else, not so. That only really started to change in the mid 1990's.

                    If you want to play that game, secondary schools had minicomputers in 1974. Because mine did. People had Apple II's in 1977. Because I spent Christmas 1977 learning my way around one of the very first Apple II's in Europe. Brought in as hand luggage (as a "typewriter") after being bought in Sunnyvale. And in 1986 people were on the internet because I had to dial into a site over ARPANET. They even had a DNS address. And so on...

                    So no, just because you knew people with micros/pcs (and I do remember the German market numbers from the early / mid 1980's, and Chip magazine back then too) means diddly squat in the bigger scheme of things. If you were wondering, the PC hardware / software sales revenue split by 1984 was 50% US, 20% Japan, 30% Europe/UK. With the 30% split 70/30. UK/Germany/France 70. The rest 30. And the UK, France, Germany revenue numbers were about the same. In total. This was micros/PC's. Not the rest of the market. Minis/Mainframes etc. Where the German revenue share was a good bit larger than the UK / France share.

                    That's how it actually was. Back then. And not just how you remember it was in your neck of the woods at the time.

                    1. ianbetteridge

                      Re: Sorry Liam, Not Even Wrong...really?..again?...anecdotes

                      "Those are just personal anecdotes dear boy. I'm talking actual sales."

                      If you still have the Visicalc spreadsheets to prove that, I'd be impressed ;)

                2. druck Silver badge

                  Re: Sorry Liam, Not Even Wrong...really?..again?...same in US..

                  Outside of UK schools BBC Computers were as rare as hen's teeth in my experience. They were not that much cheaper than an Apple II by the time you had bought all the required kit and then there was no business software to run on it. Which was most of the market. So people bought an Apple II. And VisiCalc etc. Even in the UK.

                  BBC Micros weren't as popular as the much cheaper Spectrum, but there were vast numbers sold to both home and business users, and there was plenty of business software such as Wordwise, View and Viewsheet - which had the advantage of being ROM based and loading instantly.

            2. Michael Wojcik Silver badge

              Re: Sorry Liam, Not Even Wrong...really?..again?

              From 1981 onwards never used any IBM PC / clone...

              Here, let me help you. Just write "my anecdotal experience outweighs everyone else's" and be done.

              Yeah, I was there too, and my experience contradicts much of the crap you've posted. So what?

        2. Dan 55 Silver badge

          Re: Sorry Liam, Not Even Wrong...really?

          Hercules when it first came out was hi-res two colour, fine for business software displaying something on the screen which looks similar to what it's going to print on paper but not great for a multicolour GUI or games.

          Until VGA came out the de facto standard which actually gave you a decent palette comparable to Mac, ST and Amiga was Tandy and until Soundblaster came out the de facto standard was once again Tandy.

          I suppose that was a strength of a PC, third parties could step in where IBM failed until IBM could pull its finger out and catch up. Sucked for you if you were a PC user and didn't choose Tandy though.

          1. Liam Proven (Written by Reg staff) Silver badge

            Re: Sorry Liam, Not Even Wrong...really?

            [Author here]

            > Until VGA came out the de facto standard which actually gave you a decent palette comparable to Mac, ST and Amiga was Tandy and until Soundblaster came out the de facto standard was once again Tandy.

            Tell me, @Dan 55, are you by any chance American?

            Tandy PCs were a thing in the British Isles in the 1980s, but barely.

            Here, the standard for hires colour on 8086 PCs was if anything the Amstrad PC1512/PC1640... because they were half or a third of the price of American PCs.

            And that's _why_ the PC industry had to wait on official "standards" from IBM itself: VGA, PS/2 ports, etc. Because smaller vendors did improve on elements of the design, but in proprietary ways that never went mainstream.

            The Amstrads had "CGA" with 640*200x16 colours... but in a nonstandard way. Only GEM supported it. They had mice, as standard on XT class kit... using a proprietary port. They had no PSU because it was in the bundled screen. They had separate keyboards... but using a proprietary port.

            But this stuff enabled Amstrad to sell an 8086 PC for £600 when a dirt cheap US clone was nearer £2000.

            So, maybe sadly, no, Tandy graphics weren't standard everywhere.

            1. Vometia has insomnia. Again.

              Re: Sorry Liam, Not Even Wrong...really?

              AFAIK the only Tandy computers to make much headway in the UK were the TRS80s, mostly model 1s, and not many of them either due to the "optimistic" pricing; and though I saw the Color Computer in shops once or twice, again they didn't sell many as the Dragon was practically identical, much cheaper with twice the RAM as standard (I think) and had a much better keyboard: as much as the computer magazines of the times maligned it endlessly, it had the same switches as at least one revision of the endlessly-praised BBC Micro and nice heavy double-shot keys (I didn't care for the stepped profile compared to sculpted, but that's a matter of taste); the 51(?)-key layout was restrictive but few if any commented about that, and was the same as the CoCo's. Which was a "chicklet" keyboard. Not as bad as the Spectrum's, but not great. Don't think I ever saw a Tandy PC; IIRC even their catalogues seemed less than enthusiastic about it.

              Amstrad's version is classic Amstrad: buy a job lot of something that costs next to nothing because nobody else wants it and make it work, somehow. Same thinking that brought 3" floppies to their CPC66x computers: nobody really cared that 3½" would be The Standard™ (eventually; took quite a few years for 5¼" to stop being the most commonplace let alone die out), they were more interested in no longer having to bugger about with cassettes for what was at the time a weirdly small price premium. Their hifi systems may have always been a bit hmm but the CPC series was incredibly good; even the software (in terms of BASIC and OS) was surprisingly good and IIRC Locomotive BASIC was one of the best dialects of its day, both in terms of features and speed.

              But I digress. Again. D:

        3. ldo Silver badge

          Re: Sorry Liam, Not Even Wrong...really?

          IBM never believed in monochrome graphics. Their Monochrome Display Adapter was text-only. That left a gap that the Hercules Graphics Adapter filled, offering bitmap graphics to those who couldn’t afford colour.

        4. HuBo Silver badge
          Windows

          Re: Sorry Liam, Not Even Wrong...really?

          That's what we had, Hercules graphics for the small monochrome (amber) CRT on the IBM PC clone at home, and CGA for the brand-name IBM PCs running AutoCAD at the Uni's computer lab, with their color monitors. That was 38.5 years ago!

    3. HuBo Silver badge
      Gimp

      Re: Sorry Liam, Not Even Wrong...

      Right on!

      "In a fight between a shark and a dolphin [...] the sharks won."

      That's pretty much what Richard P. Gabriel concluded ... which is also what this here article chooses to end with ... the shark (worse-is-better) has better survival characteristics than the dolphin (the-right-thing), unfortunately for us all (especially the bipolar Lisp programmer and other non-fungibles!)!

    4. Jou (Mxyzptlk) Silver badge

      Re: Sorry Liam, Not Even Wrong...

      > In a fight between a shark and a dolphin guess who wins?

      Your comparison is flawed. In reality sharks avoid dolphins. Simple reason: sharks are mostly solitary, or form loose and not well organized groups. Dolphins are among the masters of tactics, especially in groups, and they support each other with their sense of community. There are many more additional small factors on the dolphin side, but in the end: statistically sharks lose by an order of magnitude.

      1. ldo Silver badge

        Re: Sharks vs Dolphins

        Not sure how they do one-to-one, but in a battle between a school of sharks and a school of dolphins, the dolphins tend to win. The strategy is simple: keep headbutting the sharks until they fall unconscious. They stop swimming and, being denser than water, they sink. And because the oxygen-carrying water stops flowing over their gills, they also drown.

      2. Anonymous Coward
        Anonymous Coward

        Re: Sorry Liam, Not Even Wrong...nope dolphins and sharks

        A perfect example of taking a throwaway analogy and beating it to death.

        Software and hardware companies very much fall into the dolphins / sharks categorization. The dolphin companies produce a high quality product which shows much intelligence, treat their customers well, and are cooperative and (mostly) ethical. Then we have the shark companies.

        Over the last five decades let's see which companies actually survived and prospered. And with them the products and technologies they sold. The sharks or the dolphins?

        Need I say more.

        In (economically speaking) Ill Informed Markets, which mass market tech customers most certainly qualify as, Gresham's Law applies. The Bad Technology will drive out the Good Technology over time. Because the Bad Technology is always sold by shark companies.

        The processor in this laptop and the OS it is running being a casebook example. The least best tech won. As usual.

        1. Michael Wojcik Silver badge

          Re: Sorry Liam, Not Even Wrong...nope dolphins and sharks

          A perfect example of taking a throwaway analogy and beating it to death.

          Much as dolphins do to sharks?

          It was a lousy analogy, at least for the argument it was originally intended to support. In fact, its poor showing as a rhetorical device highlights the weaknesses of that argument: it's simplistic and over-generalized.

          The processor in this laptop and the OS it is running being a casebook example. The least best tech won.

          And yet in the largest category of end-user computing devices, by far, ARM CPUs have "won".

          (Incidentally, orcas, which are a genus in the dolphin family, routinely attack and kill sharks.)

          1. HuBo Silver badge
            Gimp

            Re: Sorry Liam, Not Even Wrong...nope dolphins and sharks

            ... and Flipper gangs are notoriously evil incarnate ... ( https://slate.com/human-interest/2009/05/the-dark-secrets-that-dolphins-don-t-want-you-to-know.html )

    5. MarkMLl

      Re: Sorry Liam, Not Even Wrong...

      There is no way the PC shipped with a sound chip: it had 1-bit output to a speaker and you had to hardcode almost everything.

      1. Anonymous Coward
        Anonymous Coward

        Re: Sorry Liam, Not Even Wrong...eh?... "Sound Chip"?

        Who said it shipped with a "sound chip"? It shipped with an 8255. Which if you had been around at the time (which you obviously were not) would have been a very familiar Intel I/O chip. For making sounds, controlling things, and doing lots of other weird and wonderful things. Like moving radio telescopes. Or providing the cassette interface for the IBM PC. We read the schematic diagrams for the motherboards of all new micros that came out back then. No NorthBridge / SouthBridge. Just lots of jellybean 74LS chips and the CPU and CPU support chips. So chips like the 8255 and what they did would have been very familiar by 1981.

        Old issues of Byte, Kilobaud etc from those days 1976-1982 are full of articles about hardware projects using PIO etc chips like the 8255 to do lots of strange things. You will find the old magazines in the Internet Archive and other places online. Back when men were real men, women were real women and small furry creatures from Alpha Centauri still had not been written into a punchline yet. That was not till 1978.

        1. Dan 55 Silver badge

          Re: Sorry Liam, Not Even Wrong...eh?... "Sound Chip"?

          Who said it shipped with a "sound chip"?

          You said the PC shipped with a sound chip in your original post. I'm afraid you're getting befuddled by your own anecdotes.

          Also even supposing you did rig up another kind of sound output with an 8255 and strings and yoghurt pots slightly better than the standard PC speaker, it had no mainstream software support so such an expansion was pretty limited in application.

          The previous poster is 100% right: you had to bit-bang the PC speaker if you wanted anything other than a simple beep - something which you can't do now on a modern CPU and Windows, as userland software is not allowed to do that.
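
          For anyone curious what that looked like, here is a minimal sketch of the usual trick: programming 8253 timer channel 2 to gate the speaker into a square-wave tone (true sample playback meant toggling the 8255's speaker bit yourself in a timed loop). It assumes a DOS-era Borland-style compiler providing outportb()/inportb() in <dos.h>; the port numbers are the standard ones on the original PC, and audible frequencies are assumed so the divisor fits in 16 bits.

          /* Minimal PC speaker tone sketch for a DOS-era Borland-style compiler. */
          #include <dos.h>

          #define PIT_CTRL  0x43   /* 8253 control word register             */
          #define PIT_CH2   0x42   /* 8253 channel 2 data port               */
          #define PPI_PORTB 0x61   /* 8255 port B: bits 0-1 gate the speaker */

          void tone_on(unsigned int hz)
          {
              unsigned int divisor = (unsigned int)(1193182UL / hz); /* PIT clock ~1.19318 MHz */

              outportb(PIT_CTRL, 0xB6);                  /* channel 2, lo/hi byte, square wave */
              outportb(PIT_CH2, divisor & 0xFF);         /* low byte of divisor  */
              outportb(PIT_CH2, (divisor >> 8) & 0xFF);  /* high byte of divisor */
              outportb(PPI_PORTB, inportb(PPI_PORTB) | 0x03);  /* enable timer gate + speaker */
          }

          void tone_off(void)
          {
              outportb(PPI_PORTB, inportb(PPI_PORTB) & ~0x03); /* silence */
          }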

    6. lispm

      Re: Sorry Liam, Not Even Wrong...

      > Dylan was a total mess of a functional language that was a huge resource hog and never worked properly on the Newtons. ... An opinion totally confirmed when I saw the first device in action.

      Where did you see Dylan on a Newton? The actual product never used Dylan. They were programmed in C / C++ and NewtonScript. No shipped Newton ran any form of Dylan.

      1. Anonymous Coward
        Anonymous Coward

        Re: Sorry Liam, Not Even Wrong...well those ATG releases..

        So someone read the wiki article I see..

        Well some of us had been getting and working on beta / prototype machines since the 512k Mac. Those boxes with the smooth cases. When they actually had full cases. Which the first Mac II prototypes did not. I found a letter recently among a pile of stuff in a box (with the loose-leaf and the telephone book Inside Mac) from Alain Rossmann when he was just one of the Mac evangelists. That came with some ROM's and a system disk for the latest prototype Mac. We got to play with lots of fun cutting edge toys back then. Supplied by Apple. Gratis.

        And some of us had been getting alpha / beta releases of internal builds of everything from MPS, MPW, MacApp etc through to (later) some of the interesting ATG stuff. I remember Dylan as just like the Apple Smalltalk80 kicking around in 1985. Something that turned up on a bunch of disks. Docs and software. Was installed to see if it was what I thought it was. And it was. I might add my review was based on having written a large chunk of a (shipped) functional language compiler (with full MacOS calls integration into the Lisp class frameworks) almost a decade before. In asm. So, yes, I know exactly how this stuff works all the way down to the bare iron. Can still remember (mostly) the clock cycle counts for the 68K instruction sets. 00,20,30,40. Apple never shipped a 68010 box. It was a real pity they never shipped the '060 box. But that would have sunk the AIM/CHRP PowerPC marketing camel even before it got going...

        And NewtonScript was just as big a mess as Dylan. Had a look at that as well. Would have been around early '93. Magic Cap was also a performance pig but it was a lot more coherent and elegant. As expected. I quite like Swift. Although when the alternative is Objective C pretty much every language of the last 50 years would look good in comparison. Although I would draw the line at Modula II.

        You know some of us actually did work at the sharp pointy end of the business and at that level back in the old days. On this kind of software. And on this kind of hardware. God it's boring now. When the last really interesting piece of hardware was the Cell processor in the PlayStation 3. Amazing technology which no one knew how to use to its full power. Although the Sony SCEA guys in Foster City did try.

        Anyone know if you can get a standard cell Transputer T400 / T800 that you could stick in multi meg cell FPGA's. Now that would be an interesting piece of kit for end user software. GPU's are still little more than SIMD processors from 30 years ago. So you can do just a few very simple things very very fast. If the code is amenable to parallelization. Which most isn't. So not a lot you can do with the hardware. Outside of shaders and suchlike. A real pity. As there are so many IPU's not doing much most of the time in even a low end GPU.

        1. Roo
          Windows

          Re: Sorry Liam, Not Even Wrong...well those ATG releases..

          "Anyone know if you can get a standard cell Transputer T400 / T800 that you could stick in multi meg cell FPGA's."

          Fairly sure the answer to that is no because the T4/T8 were put together using a custom CAD system (Fat Freddy's Cat) and targeted a bespoke process. INMOS did produce the Reusable Micro Core (IIRC - might have that wrong) in the early 90s - which ended up as the ST20 series. That *might* have made it into macrocell form. In the mid 2000s Jon Jakson produced the R16 Transputer design for FPGAs hooked up to LPDDR memory, and wrote a paper on it - it was a neat bit of work; IIRC it's a more conventional RISC core than a stack machine like the T4/8. I see that there have been a few papers written on the same lines since then too... Have a google. :)

  19. Doctor Syntax Silver badge

    "In Jobs' own words, he saw three amazing technologies that day, but he was so dazzled by one of them that he missed the other two. He was so impressed by the GUI that he missed the object-oriented graphical programming language, Smalltalk, and the Alto's built-in ubiquitous networking, Ethernet."

    Wisely he passed on another technology, the underlying hardware. There's a series on YT about reconditioning an Alto which drives down to the H/W implementation, which was a mass of TTL (obviously that could have been integrated for mass production) and the memory chips of the day, probably 1Kbit. The basis was a clock ticking round a ring of separate H/W functions - disk controller, Ethernet, display, etc. all implemented in H/W - no smart peripherals. One of the H/W functions was the actual programmable machine that ran the S/W. Was it some implementation of object oriented H/W to underlie the OO S/W? No, it was an emulation of an existing mini, the Nova.

  20. The H'wood Reporter

    Been there done that

    I suspect I'm one of very few old enough on here to have actually worked through the entire process and come back around to the realization that there are many, many ways to do things, and a whole lot of them are totally, utterly, rubbish!

    I've done 8008 assembly, AMD bit slice coding, 68000, TI 9900, Zetalisp, Common Lisp, Fortran, Pascal, Algol, Swift, Json, Basic, I digress. I've designed real time operating systems for aluminum smelters, air-to-air tracking radars, robot grippers, as well as PLCs.

    Zetalisp was my favorite OS. Squeak comes in a close second. Would I like to live with them in the malware society of today? Hell no! VMS clusters ran the world's wire transfers for decades. You kept them in locked rooms with extremely tight access. Why? Because with physical access you owned the world! Otherwise they were as bulletproof as anything today.

    When DEC was pitching NT clustering, I had the privilege of pitching it to a major international bank IT department. The answer to the question, "Who uses DEC clustering anyway?," was, "You do in the wire transfer department." Response, "They don't let us in there." I stayed silent and didn't even smirk.

  21. Howard Sway Silver badge

    Good article, if a bit too wide ranging

    The only machine still commonly in current use which comes close to the concepts described in the article is the IBM i series (formerly AS/400), which is mentioned at the end, but not in the article itself. However, having used this completely object based system a bit, the drawbacks soon become apparent. Much of your time working on it still involves working on 2 types of objects: FILE and JOB. Does that start to sound a bit similar to unix based systems? In fact the system even provides a unix shell, as many people find that's the most efficient way to work on these things....

    However, it is still very different, and I always recommend a look for people who complain about the limitations and drawbacks of the unixy way of doing things (every approach has some of those).

    As for smalltalk and lisp, both great concepts, but the fact is that writing complex applications in them becomes difficult in terms of readability - smalltalk is just too object orientated to get a clear overview of whole applications, and as for lisp, well the example given shows what a syntax nightmare it quickly becomes. Both suffer from the same "everything must be a ...." problem that's at the root of a lot of what the article's complaining about, whether that's a file, object or list. At the end of the day, computers move numbers around and do calculations with them, which is why C and its descendants are still so prevalent.

    1. MarkMLl

      Re: Good article, if a bit too wide ranging

      I agree: so wide ranging it makes comment difficult and risks having somebody who has good reason to disagree with one point say things based on an imperfect understanding of others.

      Having said that, I think the priesthoods that surrounded the major mainframe architectures were so intent on protecting their precincts by means of arcane incantations that their successors really hadn't got the faintest idea what they were talking about.

      It took me years to find a coherent explanation of what a "parallel sysplex" is, and how it compares with an SMP system, or NUMA, or a cluster.

  22. AndrueC Silver badge
    Meh

    The article is both wrong and right. Yes, what initially took over the world was less functionally capable but that was because of economies of scale and what people needed to do with computers. Clustering, complex filesystems, sophisticated languages. All good stuff but they required too much hardware, too much skill and far too much money. I wasn't there for all of it (I started in computing in the early 1980s with a Sinclair Spectrum) but I've seen enough to be able to comment on the computing (r)evolution.

    Other commentards have already pointed this out: Survival of the fittest. A 1970s mainframe might have had as much functionality as a modern workstation but I didn't have access to mainframes. I never have had access to them. They've forever been locked away in universities and large corporations. If I'd had to wait until things got to the point where I could experience all the wonderful stuff the author talks about I'd never have had a career as a computer programmer. It would've taken several decades before that stuff trickled down into the real world.

    The article smacks to me of typical 'ivory tower thinking'. The author apparently is unaware of how the real world works. I was using a word processor and had access to spreadsheets in my flat in the 1980s when I was at polytechnic. I was writing software commercially (for a small company based in North Wales) well before that decade was out. That would never have happened if the IT industry had refused to lower its sights and produce simpler, less powerful computers. That is not (or shouldn't be) a mystery. That's why we called them 'mini computers' and 'micro computers'. We knew they were cut down versions of 'the real thing'. That was deliberate and it didn't matter. What mattered was that we had access to computers no matter how small the company we worked for and that we had access at home.

    It's called being pragmatic.

    1. gernblander

      You said it well and I agree. I went to college in the early 90s and Unix workstations and IBM PCs were in our computer lab. They were cheap and accessible. Also, Linus Torvalds created Linux based on the POSIX (Unix) standard. It was accessible to him and the university was using Unix. The universities and tech programs define the future.

  23. Candy
    WTF?

    Highly enjoyable read

    Thoroughly enjoyed this article even though I take issue with some of the conclusions and explanations.

    What really has astonished me is how it has provoked so much critique that has remained civil and reasoned.

    I don't know what's going on any more...

    1. Jou (Mxyzptlk) Silver badge

      Re: Highly enjoyable read

      > What really has astonished me is how it has provoked so much critique that has remained civil and reasoned.

      Sounds like a lot of commentards here knew the Internet before AOL and still behave accordingly when it comes to things from that time :D.

      1. HuBo Silver badge
        Devil

        Re: Highly enjoyable read

        Oh stop it! You're making me blush ...

      2. Androgynous Cupboard Silver badge

        Re: Highly enjoyable read

        Seconded. Interesting article, even more interesting responses. To the many Graybeards that posted here, even though you are clearly far, far too old to be of any practical use, you’ve got some good stories :-)

        1. Chris Gray 1

          Re: Highly enjoyable read

          Hey, I resemble that comment!

          (How many recall that usage? :-) )

          Even though I don't have a beard - they itch too much.

  24. tyrfing

    The history of commercial computing appears to be going from expensive shared stuff (shared because it's expensive) to cheaper stuff that's less capable, but not shared, or shared by fewer people.

    Because people want to do what they want with computers, and not what others say they can do. Or what their budget shows they can charge.

    Accountants really wanted the microcomputers for the electronic spreadsheet application. Why? They bought the machine and the application, and *they could use it as much as they wanted*.

    No opex monthly billing, no matter how many times they had to run the spreadsheet.

    Cloud computing is a step back - we're back to paying someone else (too much) for every CPU cycle and bit of storage.

    1. Dan 55 Silver badge

      Reducing capex to as little as possible has become a dogma, even if it makes absolutely no sense and the business would end up spending more money.

  25. Jou (Mxyzptlk) Silver badge

    Anybody remember hot-plug PCI(e)/MCA/EISA/ISA etc?

    While we are at the "old computers had more capability" run: I am old enough to have experienced, but not administered, hot-plug exchanging one of two SCSI controllers while a Windows NT 3.something server was running. The tech guy used the utility to switch to the second built-in controller, and then live unplug-swapped the controller, being careful with the attached SCSI cables and everything else for obvious reasons. Similar for network cards: open the box, do the swap procedure at "open heart". For other OSes even CPUs were hot-plug. THAT is all gone.

    The last time I saw x86 servers with actual PCI Express hot plug capabilities supported was around 2012 I think. And in all that time I never needed that capability.

    For a good reason: for the cost of having one machine with all those capabilities you can have several machines which run as a cluster. The engineering required to make it work with one machine isn't worth it any more when the simpler solution is not only cheaper, it is even faster, easier to maintain, can be extended by another node easily, and offers the possibility of separation by several meters or another room. It is simply a much better way of avoiding a single point of failure, which the first mentioned machine cannot offer since no matter how much redundancy you put into one box, it is still that one box that can trip, be under water, be on fire or be mistreated in some other way.

  26. Andy 73 Silver badge

    History is written by the winners, then argued by the most stubborn losers.

    The problem with any discussion on languages and architectures is that you will always get people who've devoted their lives to understanding some arcane system and will not let go. That applies both to the ones that have been lost to obscurity, and the newest and shiniest - there is always someone who will come along and insist that they are simply *better* than any other alternative.

    The bottom line is that the best computing device/OS/software stack is the one that ships (in a state capable of delivering what the user bought it for). There are a remarkable number of systems, languages and even whole machines that didn't do that. And the best thing about a failed machine is that you can always claim that if only the dumb users had stuck with it long enough, it would have evolved into the perfect tool.

    As an aside, Cambridge saw the development of Magik in the late 80's - a Smalltalk inspired language that ran under a VM that saved the world state. And what a state the world could get into when whole teams of developers could duct-tape half baked Friday afternoon ideas into the system, and then just disable the bits that didn't quite work. One attempt at cleaning up a client's mission critical system saw two thirds of the running environment removed (at great length and cost) to no discernible change in function. It was (and still is) a very interesting language, but it's far easier to forgive the flaws in a language that you're not bound to use and get nostalgic about the cute ideas that came and went.

  27. HuBo Silver badge
    Joke

    Microsoft's multi-user Windows

    I think I get this nifty Dollo devolution concept. Looking at Microsoft's original plan for the Surface, it was pretty big and lofty, a flat, table-sized computer, to do everything (2010): https://www.theregister.com/2011/11/22/microsoft_and_samsung_touch_up_multimedia_table/ . By 2012, they'd dumbed this down to just a tablet, running Windows RT ("Not everybody's [...] Microtard", as discussed here: https://www.theregister.com/2012/06/26/brics_love_android/ ), and then scaled that up and sideways, some, to the Surface Laptops and tablets of today.

    Had they stuck with their original, and dare I say revolutionary, Surface Table concept instead, and scaled that up despite the many technological hurdles, we'd probably have the first truly multi-user operating system from Microsoft by now, as needed to actuate the equisized plug-in keyboard, kindly provided by Alienware: https://www.tweaktown.com/news/94219/alienware-builds-the-worlds-largest-mechanical-keyboard-and-mouse/index.html !

  28. aerogems Silver badge

    Part of the problem

    Custom silicon like that is expensive. I remember for a time Sun Microsystems playing with the idea of creating a Java chip, which would run Java bytecode natively. It ended up costing so much they scrapped the whole idea. If you have the resources and talent, Apple has shown us a bit of what can be done with custom built silicon, but a lot of places don't have that kind of money. Just the equipment needed to design a chip is prohibitively expensive for a lot of companies, never mind actually finding someone to design it and fab it. Then, the amount of time it would take to add any new functionality would be measured most likely in years, and you'd have to buy all new hardware. If the CPU is basically acting as a generic interpreter as they do today, you can add all kinds of new functionality to a language and it's available pretty much instantly to everyone. You can even create whole new languages if you don't like any of the current crop.
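
    To make the "generic interpreter" point concrete, here's a toy sketch in C (nothing to do with Sun's actual picoJava work - the opcodes are invented purely for illustration) of a little stack-machine interpreter. Adding a new "instruction" is one more case in a switch statement, shipped as a software update, rather than a multi-year silicon respin.

    /* Toy stack-machine interpreter: invented bytecodes, executed in software. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[64], sp = 0, pc = 0;

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];          break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]);     break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4 */
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                             OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(prog);   /* prints 20 */
        return 0;
    }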

    1. MonkeyJuice Bronze badge

      Re: Part of the problem

      Absolutely the case. For many years the "language so advanced you need a custom chip to run it" carried a certain cachet. Cachet we would today call "Technical Debt". Fortunately compiler theory has advanced an incredible amount, and it is possible to obtain hardware for users to actually execute the software that is written.

    2. druck Silver badge

      Re: Part of the problem

      ARM actually made chips which could run Java bytecode with the Jazelle extension; it was used in lots of phones in the late 90s and early 2000s.

  29. Doctor Syntax Silver badge

    AFAICS there have been two technical trends, pushing for maximum power and pushing for minimum cost per user.

    Initially there was only the mainframe, expensive and defining power. The second trend started by connecting dumb terminals to the mainframe so the expensive computing power could be brought to many users. Then, as mainframes became bigger and hairier the minis appeared, workstations, desktop PCs*, laptops, tablets and phones. In the meantime the drive for more computing power evolved into the hyper-scale server and the supercomputer. In between these two diverging trends there was room for all manner of variations, some successful long term, some not.

    Alongside these trends has been a to and fro battle for control between centralised management and users and another between selling services (bureaux and cloud) and box-shifters. Both will likely continue.

    * No, the IBM PC and its descendants are not the only PCs. They weren't even the first.

    1. Michael Wojcik Silver badge

      IBM mainframes, at least, use smart (page-oriented) terminals, not dumb ones.

  30. HuBo Silver badge
    Holmes

    Metacircular reflection

    A most wonderful piece, on how it may be best not to overly hang on to nostalgia, fondly remembering the ephemeral beauty of the past, the roads less traveled, what might have been, the past promises of a most awesome of futures, if only ... (if only I read correctly!).

    Whenever I get this feeling, it deeply cheers me up (or not?) to remember Edsger W. Dijkstra's famous 1972 words ( https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340.html ):

    "LISP has jokingly been described as “the most intelligent way to misuse a computer”. I think that description a great compliment because it transmits the full flavour of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."

    Hardware-wise though (IMHO), there's no better illustration in my mind of the worse-is-better effect than the devolution of the Berkeley RISC. RISC II started a revolution (and Sun Microsystems) [1]. RISC III, SOAR, broadened its reach with Smalltalk on a chip [2]. RISC IV, SPUR, expanded it to multiprocessing LISP [3]. RISC V devolved it all right back to MIPS [4, especially Chapter 18, p. 117]. It's the talk of the town these days ... but my grandma could verilog it unto an FPGA, in her sleep (eh-eh-eh!)!

    [1] Patterson, D.A., 1985. Reduced Instruction Set Computers. https://dl.acm.org/doi/pdf/10.1145/2465.214917

    [2] Ungar, D., 1984. Architecture of SOAR: Smalltalk on a RISC. https://dl.acm.org/doi/pdf/10.1145/773453.808182

    [3] Hill, M.D., 1985. SPUR: A VLSI Multiprocessor Workstation. https://www2.eecs.berkeley.edu/Pubs/TechRpts/1986/CSD-86-273.pdf

    [4] https://github.com/riscv/riscv-isa-manual/releases/download/Ratified-IMAFDQC/riscv-spec-20191213.pdf

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Metacircular reflection

      [Author here]

      > the roads less traveled

      It may please you to learn that much of the text of this story was derived from a FOSDEM talk I delivered 6Y ago called "the circuit less travelled". :-)

      There were 2 sequel talks, as well, and I hope to return to the themes and develop them on the Reg.

  31. J.G.Harston Silver badge

    "Then those were improved, usually completely ignoring all the lessons learned in the previous generation, until the new generation gets replaced in turn by something smaller, cheaper, and far more stupid."

    Every few years I find another generation of programmers struggling with serial comms, to which the answer is always "FLOW CONTROL!!!!!". How on earth are we in a universe where simple, basic, fundamental parts of our engineering craft are so completely omitted from people's learning, again and again and again?

    Take the Post Office Horizon thing. Validating two-way message communication has been a known problem since CAESAR! I picked up an understanding of it in the early '80s before I'd even written a byte of networking code. Yet people were writing code, AND BEING PAID for it, while omitting a fundamental piece of architectural structure, just like omitting cement from concrete, and not even being aware of the concept of needing cement in the first place.

    1. ldo Silver badge

      Re: FLOW CONTROL!!!!!

      Yes, but which kind of flow control? If one end is doing XON/XOFF while the other end is expecting RTS/CTS, you’re not going to have much fun at all.

      Yes, there was a (brief) time when I was earning part of my living from being able to wield a breakout box ...
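
      And you still have to pick one scheme and configure both ends the same way. A minimal sketch of doing that with POSIX termios (CRTSCTS is a common but strictly non-POSIX flag, and /dev/ttyS0 in the comment is just a placeholder device):

      /* Open a serial port with either RTS/CTS or XON/XOFF flow control. */
      #include <fcntl.h>
      #include <termios.h>
      #include <unistd.h>

      int open_serial(const char *dev, int use_rtscts)
      {
          int fd = open(dev, O_RDWR | O_NOCTTY);
          if (fd < 0)
              return -1;

          struct termios tio;
          tcgetattr(fd, &tio);
          cfsetispeed(&tio, B9600);
          cfsetospeed(&tio, B9600);
          tio.c_cflag |= CLOCAL | CREAD;

          if (use_rtscts) {
              tio.c_cflag |= CRTSCTS;              /* hardware RTS/CTS handshaking */
              tio.c_iflag &= ~(IXON | IXOFF);      /* ...and no in-band XON/XOFF   */
          } else {
              tio.c_cflag &= ~CRTSCTS;
              tio.c_iflag |= IXON | IXOFF;         /* software XON/XOFF            */
          }

          tcsetattr(fd, TCSANOW, &tio);
          return fd;     /* e.g. open_serial("/dev/ttyS0", 1) for RTS/CTS */
      }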

  32. chuckamok

    Massachusetts and New Jersey

    Even though the Jersey school won the market, their stuff was built on the PDP and VAX bones of Massachusetts (if not MIT) products. Ken Olsen was an MIT alumnus.

    My first IT job was working for AT&T. My office building was a former DEC office.

  33. Anonymous Coward
    Anonymous Coward

    One Nightmare -- But Otherwise Very Interesting

    Quote (about Lisp machines): "You could just edit the live running code and the changes would take effect immediately."

    Also: "...it was objects all the way down..."

    Liam:

    Today, even on my pathetic Intel N5000 laptop, there can be multiple applications running at once:

    (1) Chromium (email and research windows)

    (2) Gedit (programming)

    (3) glade (GUI maintenance)

    (4) gcc (see items #2 and #3)

    I wonder whether the two quotes from your article describe something I would want today:

    - The end-user "just edits the running code". What happened to the idea of version control and testing?

    - What is the hapless maintenance person to do when something bad happens?

    - And does the hapless maintenance person (and the wider organisation) really want to deal with a different environment on each and every end-user machine?

    Many of the points in the article are valid, but I'm afraid the two represented in the quotes depict a nightmare!

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: One Nightmare -- But Otherwise Very Interesting

      [Author here]

      > but I'm afraid the two represented in the quotes depict a nightmare!

      I think other earlier comments -- you did read the history before commenting, right? -- addressed this well.

      These are, frankly, given the big picture and the already overlong article, trivial implementation details, and there are already technical solutions to them.

      Yes, these need more development, but then, the whole point of the piece was to talk about other ways to do things which the "lowest bidder" approach drove into near extinction.

  34. robmacl

    Blast from the past

    Kept reading because I wondered where you are going, but I certainly didn't expect to end up at Dylan. I worked on Dylan back in the day, and before that on CMU Common Lisp, which was a public domain implementation of CL, in the end running on conventional hardware. We aspired to have a Lisp Machine experience.

    It's hilarious these days when I see Chrome using 64 meg to implement an empty tab, and back then we were really self conscious about our unreasonable use of 4 meg for an entire IDE.

    It was a highly productive environment at the time, and you could certainly spin alternative histories around it. As to what happened, Gabriel's good news/bad news paper is a solid take. At the highest scale economics is the best lens.

    What you don't mention is that each wave of crappier computers was 3x-100x cheaper than what had come before, and consequently so many more were made and used. A crappy computer is so much better than no computer that it spread like crazy.

    The rise of Unix workstations was in parallel with PCs, and initially PCs were not powerful enough and software not sophisticated enough to give a mainframe replacement experience. But the explosion of the PC dramatically shifted the center of gravity of software development away from the universities and research labs by the late 80's.

    In the trenches, as an implementor, the arrival of popular GUI frameworks was a big problem. There was just too much going on out there for us to hook it up to Lisp in an artful way. We made tools that users could use to do the hookup, but the GC memory model created problems, and the APIs were big and ever-growing. There just weren't enough people who cared and knew how to do it.

    See also "The Cathedral and the Bazaar" by Eric S. Raymond. Ultimately the self-reinforcing network effects of greater market share and mind share are too much to overcome. Too much is happening somewhere else, and you are stuck in a backwater.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Blast from the past

      [Author here]

      > I worked on Dylan back in the day

      Oh, splendid. I am delighted that it's made that connection. :-)

      I feel that there must be some other paths forward possible, and I am working towards some future articles developing this theme.

  35. MonkeyJuice Bronze badge

    Top series!

    Absolutely loving these misty-eyed reads. It's delightful to see the language holy wars are just as polarising amongst those still around, so we commentards can reopen old wounds in these comments. The endgame, Common Lisp, is truly a bizarre artifact, with crazy idiosyncrasies I wouldn't wish on anyone today, but my gosh it just felt so nice for many years, and performed _so_ well when compiled, in a world of Perl, Tcl, and other similarly kludgy interpreted languages.

  36. Jou (Mxyzptlk) Silver badge

    All that knowledge in those comments...

    You make me feel young!

    1. AndrueC Silver badge
      Unhappy

      Re: All that knowledge in those comments...

      And the (un)funny thing is how old you can sometimes feel. I remember the 80s fairly well. They were after all my formative years. I'm sure I owe my political development to the Thatcher years. But my strongest memories are of the 90s. The glory years. The world was becoming sane and the future seemed bright.

      I remember the 00s clearly but to me that's the start of the modern world with all its problems and troubles. And yet.. I like watching true crime documentaries and I've noticed that cold cases are now dealing with incidents that happened in the 00s. That is such a strange feeling. To realise that what I consider 'the modern world' is actually so long ago.

      The financial crisis of the late '00s feels to me like a recent event..

      I need a drink.

      1. J.G.Harston Silver badge

        Re: All that knowledge in those comments...

        My local newspaper has a "Down Memory Lane" page... which often covers events from 2010-2012!

  37. Anonymous Coward
    Anonymous Coward

    Lisp and RISC nostalgia...

    First RISC.

    It was a total failure when you look at the original reasons given for it in the Hennessy/Patterson books et al.

    The MIPS chips carved out a small niche in embedded (a really nice instruction set), but by the time the big-time "RISC" processors arrived in the mid-1990s, like the PowerPC, you had an instruction set that was larger and more complex than the CISCs it was supposed to replace. And the two main CISC CPU architectures, 68K and x86, had already implemented all the core RISC processor "features" in new CPUs (the 68060 and AMD K5), thereby negating pretty much all the real-world advantages of RISC. Which is why the '060 outperformed the PPC 601 at any given clock speed, for example.

    So, all these decades later, at least for general-purpose CPUs, CISC still rules: ARM and x64. And for special-purpose CPUs it's SIMD. The one area where RISC has a real presence is in embedded processors with the PowerPC. And I challenge anyone who has ever had to write PPC assembler to claim that it is "simpler", "less complex" or "more orthogonal" than ARM or any other CISC instruction set out there. Because it never was. PPC is my second least favorite instruction set. After x86/64, of course.

    And Lisp.

    A great language to know, and it might have some use for quick let's-try-something throwaway code, but that's it. There is a very good reason why Lisp (and all other functional languages, except when very well camouflaged) has failed for the last 50 years, except as a very niche language. Very niche. It's just not very good for writing software for other people to use. In large numbers. With a maintainable code base.

    For personal projects, if that's what floats your boat, great. Go ahead. For those who love mathematical puzzle-type problems it can give a lifetime of amusement. I learned to code in Lisp the hard way: I had to write a compiler for it, for paying customers. Not only was it a fantastic education in the nuts and bolts of the language, but I also learned that A) not one single Lisp fanatic seems to know how the language actually works (if they did, they would be a lot less extravagant in the claims they make for it), and B) exactly why Lisp has failed and will always fail as a usable language for writing large, complex end-user application codebases.

    I would highly recommend learning Lisp or some other functional language as a way of making oneself a better programmer, to learn how to map problems and solutions into a very different way of writing programs. I know it made me a much better C/C++ programmer, especially learning OO programming in a very different programming universe, which C++ most certainly does not provide. OO plus functional languages is a heady mix once you get the hang of it. I still miss some of the features we had almost 40 years ago that made throwing together non-trivial proof-of-concept applications in a Lisp dev environment so quick and easy. Hours or days of work in traditional IDEs often reduced to just minutes. Then again, there never was any simple (and bulletproof) way of actually delivering any of these Lisp applications to users (and often machines) other than the one they were written on. And there still isn't. Really.

    And that's just one of the several very serious problems with functional languages like Lisp. But just as with logic languages like Prolog, it's a very good use of a professional programmer's time to learn a language which allows a completely different way of writing software. It makes one a much better programmer. And they are a lot of fun to learn too. But that's as far as their utility actually goes when it comes to developing shippable software.

  38. smart4ss
    Mushroom

    The desire for "fungible cogs" is a strong business motivator. The linked essay blames "deskilling", which is a derogatory term for "innovation". Innovations are disruptive, and those disrupted usually don't like it. But using computers in business has been quite disruptive to other industries. Before the '90s, most workers wouldn't have touched a computer. Today most jobs require computer interaction at some point. The newspaper printing industry has been decimated, and the retail industry is still being disrupted. Anybody who points out that the changes aren't all good is usually dismissed as stuck in the past. It's kinda silly for an IT worker to complain about being disrupted, considering IT's own ongoing disruption of everything else.

    Except that now IT is disrupting more than industries or shifting the job skill landscape. It is disrupting our socialization. It is disrupting our thinking. It is disrupting our systems of trust. It is disrupting our political institutions. It is just beginning to disrupt our biology.

  39. Anonymous Coward
    Anonymous Coward

    Yup...Toxic Waste Dump...But We Need Answers...Not Critiques!!!!

    Quote from Liam (elsewhere):

    "The real point here is working out what are the 1970s assumptions buried in existing languages and OSes, removing them, and building clean new systems with those assumptions removed.

    So for a start, that means throwing out absolutely everything built with C, burning it in a fire, and scattering its ashes on the wind, because C is a toxic waste dump of 50 years of bad ideas."

    Liam:

    All of the above might be true.....but WE ARE WHERE WE ARE!!

    What do you suggest?

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Yup...Toxic Waste Dump...But We Need Answers...Not Critiques!!!!

      [Author here]

      > What do you suggest?

      Hahaha! Is that from the Reddit thread? Wow did those kids really not get it.

      My suggestions formed the basis of the 3rd of my FOSDEM talks. This article was carved from the script for the first.

      But as a hint: just as you can now go to an artisanal barber shop and have a hand-done haircut from a chap who gives you his own personal card and uses £50/jar hand-crafted wax on your barnet at the end, I think the potential is opening up for artisanal hand-crafted bespoke or small-batch software.

      If it works for clothing, food, drink, and furniture, it can work for software, too.

  40. Michael Wojcik Silver badge

    Not all micros

    Every computer in the world today is, at heart, a "micro."

    I really don't understand what Liam is trying to say here.

    The vast majority of "every computer in the world" are embedded systems. While a great many of those use CPU cores descended from families popularized by microcomputers, they're not "micros" in any useful sense, since the whole point of the microcomputer was to be user-interactive.

    At the other end of the scale, we still most definitely have mainframes, which are in no way "micros". Not at all. And we have supercomputers, which also aren't "micros" under any sensible definition — not even "at heart", not even if they use Intel or AMD CPUs. And we have China building Loongson-based MIPS-derived supers, while there's never been a MIPS-based micro. Yes, SGI sold MIPS-based workstations, and those were often primarily single-user machines, but they were not in any useful sense microcomputers.

    The only way this claim could be true is tautologically, if you defined "micro" as, essentially, any CPU; and in that case the statement says nothing.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Not all micros

      [Author here]

      > I really don't understand what Liam is trying to say here.

      That's OK. This article was carved out from a modest part of a talk I delivered 6Y ago. I had to cut probably over 50%. What remains is incomplete, but I hope to return to it.

      > The vast majority of "every computer in the world" are embedded systems.

      [...]

      > While a great many of those use CPU cores descended from families popularized by microcomputers, they're not "micros" in any useful sense, since the whole point of the microcomputer was to be user-interactive.

      [...]

      > At the other end of the scale, we still most definitely have mainframes, which are in no way "micros". Not at all. And we have supercomputers, which also aren't "micros" under any sensible definition — not even "at heart", not even if they use Intel or AMD CPUs. And we have China building Loongson-based MIPS-derived supers, while there's never been a MIPS-based micro. Yes, SGI sold MIPS-based workstations, and those were often primarily single-user machines, but they were not in any useful sense microcomputers.

      OK, look, if you are going to argue over definitions, then we have to start with agreeing what we are trying to define and what words we are defining them in.

      So, step 1: let us approach this problem from 2 different angles and see if we can enclose it.

      In biology, in taxonomics, there are two types of analysis and people doing it: "clumpers" and "splitters".

      Splitters want to subdivide clades, or groups, by their differences. Clumpers seek to group them by their similarities.

      Let's try both.

      From there... step 2: the "clumper" approach. Lines of descent.

      Is an embedded system a computer? Is a washing machine with a microchip a computer? I'd say no.

      Is its embedded controller a computer? Maybe. Probably no. Is it a general purpose machine used by a human to run programs? No. Is it reprogrammable? No. Is it general purpose? No. Is it interactive? Itself, with humans? No.

      So, it's not a computer and we can ignore it.

      *But* is it derived from and closely enough related to conventional general-purpose interactive computers that it can run software developed and tested on them? Is it a generic CPU design running a generic OS?

      In many cases, yes.

      Then while it may not itself be "a computer" in any recognisable sense, it's descended from and closely related to things that *are* computers, and if those are micros, then it's a micro.

      Step 3: the "splitter" approach:

      There were, historically, 3 types of computer: (a) mainframes, (b) minicomputers, & (c) microcomputers.

      [a]

      Mainframes are room-sized, cost millions, and were primarily batch-oriented, not interactive. (Interactivity was bolted on later.)

      I doubt any embedded systems are based on mainframe designs. I welcome correction. They are not mainframes.

      [b]

      Minicomputers are extinct. (Some software survives.) Minicomputers were inherently multiuser, inherently interactive, desk-side in scale, shared by departments. They did not have their own keyboards or displays: they were driven from terminals.

      This category of machine is extinct, but the designs of their OSes dominate all computing today.

      [c]

      Microcomputers. Driven by a single-chip processor, designed to be used by a single person. Generally inherently single-tasking.

      They are everywhere, but their OSes are today mostly based on minicomputer OSes. But the OSes are, mostly, compatible with earlier OS designs, even if that is disappearing.

      64-bit Windows can't run DOS apps, but 32-bit Windows can, and 32-bit Windows is intercompatible with 64-bit. It's the same family, same UI, same filesystem, etc. It is recognisably a DOS descendant.

      It's a micro OS even if its design owes much to VMS, a mini OS.

      Windows PCs are microcomputers.

      Does your notional embedded chip run Windows? Then it's a micro. Does it use a micro architecture, from Z80 to 8086 to MIPS? Then it's a micro.

      Mainframes are still around. They are still binary compatible with 1960s kit. They are still mainframes. End of discussion.

      Supercomputers built from clusters of microcomputers are built from microcomputers therefore they are microcomputers.

      Any single user general-purpose PC, be it Arm or MIPS or whatever, is a microcomputer built from tech that evolved from microcomputers.

      Therefore they are microcomputers.

  41. Herby

    Life goes on...

    I wonder how things will be in (say) 40 years from now. About the only thing that I know for sure we will need to worry about is Y2038. Everything else will be a big guess. Computers are "tools" that can be used in various and strange ways, and will continue to be. How they do that will be a continuing question. I'm sure that languages and operating systems will abound, and most likely morph into something we don't recognize. Could we have predicted the present from 1973? I hardly think so. Will things evolve? Yes, they will. Who will "win"? Hard to tell. I keep reminding myself that if IBM hadn't chosen an Intel chip for its PC, Intel would still be making memories (or possibly dust).
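
    For the youngsters: Y2038 is when a signed 32-bit time_t runs out of seconds. A minimal C sketch of the arithmetic:

        /* The last second a signed 32-bit seconds-since-1970 counter can hold
         * is 2^31 - 1 = 2147483647, i.e. 03:14:07 UTC on 19 January 2038;
         * one tick later it wraps to a large negative number (December 1901). */
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            int32_t last_second = INT32_MAX;            /* 2147483647 seconds after the epoch */
            time_t  as_time     = (time_t)last_second;  /* widen to the platform's time_t     */

            char buf[64];
            strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&as_time));
            printf("Last representable 32-bit time_t: %s\n", buf);
            printf("One second later it wraps to %ld\n", (long)INT32_MIN);
            return 0;
        }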

    Put this article and its comments in a time capsule to be opened in 2063. Have fun, I won't be here then.

  42. Numpty

    I certainly didn't expect to be reading about the Linn Rekursiv on my first day back at work. That was partially developed by a professor at Strathclyde University shortly before I did my CS degree there, and they still had a few prototype boards kicking around.

  43. Blackjack Silver badge

    Lisp sounds like something I would have liked to learn if I had been born about two decades earlier than I was. Then again, my interest in computers started with videogames, and I think that if I had been born two decades earlier I would have got way more into pinball machines than computers, because money, dear boy.

    Unix got beaten by Linux because you can't get cheaper than free. Oh, you can try, but actually paying people to use your stuff never works long term.
