Version 100 of the MIT Lisp Machine software recovered

The LM-3 project has announced version 100, the last ever release of the system software for the MIT CADR Lisp Machine. So, both a new release and a very old one all at once. News of this release came yesterday in a LinkedIn post by IBM's Eric Moore, who helped in the recovery effort. A more detailed post from the restoration …

  1. NoneSuch Silver badge
    Trollface

    It was recovered to a ZIP drive.

  2. elDog

    No love for Forth (or assembler in 10,000 variations?)

    I cut my teeth on BAL on an early IBM 360 and haven't regretted a single new language learned since (excepting COBOL and RPG).

    Also, don't forget JOVIAL (Jules' Own Version of the International Algebraic Language), which may have pre-dated Pascal (not sure). We had to do our software prototypes in JOVIAL when the I-86 Ada compiler from DEC wasn't ready yet. Those were the days....

  3. that one in the corner Silver badge

    The Forgotten Fifth Generation

    Another attempt to take the path started by the LISP machines - the Japanese "Fifth Generation Computing Machine" project (approx 1982 to 1992, funded by MITI) - is worth a mention in passing, at least. Even if its greatest legacy seems to be the stupid name[1] applied to database languages from 1983 onwards!

    As with LISP machines, the intent was to create hardware that could run a language suitable for (among other things) creating small programs that can integrate with one another and work on top of a rich environment: as the article notes, an approach that Smalltalk also allows. The difference is that the language chosen, Prolog, was also deemed to be one that would work well across a multiprocessor environment, as the (putting it crudely[2]) search tree could be run in parallel along many branches at once, as well as working with a distributed set of "database clauses/statements"[3].
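
    To give a flavour of what "run the search tree in parallel" means here, a toy sketch - invented for this comment, in Python rather than KL1 or any real parallel Prolog, with made-up family facts: each matching branch of the proof gets its own worker, and whichever branch proves the goal wins.

      # Toy OR-parallelism: explore the branches of a Prolog-style proof
      # concurrently instead of backtracking through them one at a time.
      from concurrent.futures import ThreadPoolExecutor, as_completed

      # The "database clauses": parent(X, Y) facts, i.e. Y is a child of X.
      PARENTS = {"tom": ["bob", "liz"], "bob": ["ann", "pat"], "pat": ["jim"]}

      def descendant(ancestor: str, target: str) -> bool:
          """Prove descendant(ancestor, target), branching in parallel."""
          children = PARENTS.get(ancestor, [])
          if target in children:
              return True
          # The OR-parallel step: each child's subtree is an independent
          # branch of the search tree, so hand each one to its own worker.
          with ThreadPoolExecutor() as pool:
              futures = [pool.submit(descendant, c, target) for c in children]
              return any(f.result() for f in as_completed(futures))

      print(descendant("tom", "jim"))   # True: tom -> bob -> pat -> jim

    The same shape works with the facts spread over several machines, which is roughly what the distributed "database clauses" idea was after.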

    5G machines are less remembered than the LISP boxes (perhaps because they never[4] were actually sold, so few people ever saw them - or just because they didn't produce anything as memorable as the Space Cadet Keyboard!). Prolog has faded back into its niche, despite its moment of public glory (MicroProlog on the BBC[4] and Spectrum, even Borland had TurboProlog[4] for the PC).

    [1] you know, calling them 4GLs, because the marketing twonks saw the slogan "Prolog, the Language of the Fifth Generation", thought it said "Prolog is the fifth generation of languages" (confusing s/w with h/w) and decided that the rash of languages for PC databases in the 80's had to be the generation behind... (Please, don't try to fling the Wikipedia article as a rebuttal!)

    [2] very crudely, and let us ignore 'cut' for the sake of a brief comment!

    [3] crude, I said crude!

    [4] the beta ROMs on the BBC ruined my maze-solver for weeks, because we'd not noticed the "debugging aid" that was the top left character changing to show the garbage collector state, which ruined the Mode 7 graphics!

    [5] TurboProlog was - quirky - compared to, well, every other Prolog; then again, it was released by Borland, so no-one was surprised.

    1. Sceptic Tank Silver badge
      Unhappy

      Re: The Forgotten Fifth Generation

      Borland quirky? That's not how I remember their products. Coding using Turbo Pascal and Turbo C++ using an IDE was a heck of a lot better than anything else I knew of at the time. Turbo Vision: loved it!

      1. that one in the corner Silver badge

        Re: The Forgotten Fifth Generation

        Quirks aren't necessarily bad, although they are things you have to be aware of in order to get the best out of whatever it is that is quirky.

        For example, to make the best use of an old timey DOS PC, Turbo Pascal has the ability to do overlays - which you certainly didn't read about in the "Pascal User Manual and Report". You could quite easily learn all about the Fascinating World of Thrashing Disks using overlays, especially fun if you have a hard drive and your Users have dual floppies. Not unique to TP but it made them so easy you could just sort of wander into using them.

        Even if you were, at the time, used to big multi-pass compilers that would eat your sources and spit out an error listing twice as long as your code, with every single missing semicolon mercilessly highlit, switching over to TP's "halt on the first typo" did require a change in attitude. True, TP compiles really fast, but one of the ways it does that is to not be able to continue past that typo.

        I tend to think of the various Turbo products as being pragmatic more than standards-based, so you definitely needed to run through the manuals to find the odd bits that allow you to write code that'll give access to the PC's whole feature set.

        1. doublelayer Silver badge

          Re: The Forgotten Fifth Generation

          "TP compiles really fast, but one of the ways it does that is to not be able to continue past that typo."

          I see that as a bit of an asset. When I was learning, I made such typos more often, and I found that compilers weren't very good at identifying what to do after pointing out a typo. It would probably be fine with a semicolon, but if there was a missing parenthesis, it would likely generate hundreds of spurious errors that would go away as soon as I put the parenthesis in the right place. I tended to run the error output through a script that would identify the first three errors then cut off the output.
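
          (For the curious, the filter was something like this - a from-memory reconstruction in Python, not the original script; the name first_errors.py and the cut-off of three are just for illustration:)

            # Pass compiler output through, but stop after the first few
            # error lines - everything after them is usually cascade noise.
            import sys

            MAX_ERRORS = 3

            def main() -> None:
                errors = 0
                for line in sys.stdin:
                    print(line, end="")
                    if "error" in line.lower():
                        errors += 1
                        if errors >= MAX_ERRORS:
                            print(f"... cut after {MAX_ERRORS} errors ...")
                            break

            if __name__ == "__main__":
                main()

          Used as something like: cc broken.c 2>&1 | python3 first_errors.py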

          1. Anonymous Coward
            Anonymous Coward

            Re: The Forgotten Fifth Generation

            Late 1980's: Turbo Pascal compiles complex CAD program in <5 secs on '386 (or was it a '286?)

            Early 2022: C++ and friends take 130 secs to build trivial program for my phone on a 3GHz, 8 core processor.

            1. Zippy´s Sausage Factory
              Windows

              Re: The Forgotten Fifth Generation

              I could actually see that as a meme in my head.

              May I also add:

              Late 1980's: entire programming language and OS lives in 8K ROM

              Early 2022: Windows needs 10GB just to cache its updates

    2. hugo tyson
      Go

      Re: The Forgotten Fifth Generation

      Thumbs up for point [4] - that's the way to do it!

      1. that one in the corner Silver badge

        Re: The Forgotten Fifth Generation

        [4] sigh.

    3. Liam Proven (Written by Reg staff) Silver badge

      Re: The Forgotten Fifth Generation

      [Author here]

      > anything as memorable as the Space Cadet Keyboard!

      Oh blast. I knew there was something else I meant to mention...

      1. Arthur the cat Silver badge

        Re: The Forgotten Fifth Generation

        I used to love my Symbolics keyboard. I occasionally think about finding a hardware keyboard maker and funding a joint project.

        1. Zippy´s Sausage Factory

          Re: The Forgotten Fifth Generation

          That's not a bad idea. I might even be tempted myself, especially if I can assign those keys to functions in various programs. (Why spend all that time picking up a mouse when you can remember a handy key combination like ^Q^Q^B?)

          (No icon because there's no "I'm a greybeard" one)

    4. Arthur the cat Silver badge

      Re: The Forgotten Fifth Generation

      > Prolog has faded back into its niche, despite its moment of public glory

      Prolog has faded, but Erlang, which was based on the same principles as Prolog, has ticked over for years, and Elixir, which is (sort of) Ruby re-engineered for the BEAM machine that runs Erlang and (sort of) Erlang under a new coat of syntax, is growing popular and is probably the best way of handling massive parallelism available today.

    5. Hoagiebot
      Boffin

      Re: The Forgotten Fifth Generation

      First of all, there was a computer from the Japanese ICOT "Fifth Generation Computer Systems (FGCS)" project that was commercially sold. Mitsubishi sold a couple versions of the ICOT "PSI" ("Personal Sequential Inference") computer. Its operating system was written in a language called "KL0", which was a derivative of Prolog that was expanded so that it could also be used as a kernel development language. (That is where the "KL" in the name came from-- it was "Kernel Language 0". As an aside, there was also a KL1 language that was concurrent so that it could run on massively parallel ICOT research inference machines known as "PIM"s.) The KL0 language's code execution was sped up by being run on a custom microcoded accelerator chip in the system designed specifically to execute KL0. This custom processor was more or less an extended "Warren Abstract Machine (WAM)" implemented in hardware. Anyway, the Mitsubishi PSI was targeted specifically towards natural language processing and automatic document language translation, and was sold to Japanese multinational companies for that purpose. If I remember correctly, only a few hundred Mitsubishi PSI's were sold, so they were not common machines.

      With that said, the history portion of Liam Proven's article was an absolute mess. The rise of the specialty LISP and Prolog machines of the 1980's was a result of the keen interest in expert systems at the time. Expert systems were considered to be bleeding-edge A.I. in those days (and a third of Fortune 500 companies were using them), but if you gave the expert systems more than a few hundred rules to follow in their decision trees they became painfully slow at returning answers. Thus several research efforts around the world started up to find ways to speed them up, so that they could make inferences from exponentially more rules, be used in broader and broader knowledge domains, and be ever more flexible, usable, and powerful. This research led to the development of the afore-mentioned Warren Abstract Machine and Prolog compilers based on it at the University of Edinburgh, *many* custom LISP and Prolog accelerator chips from many different universities, and the Japanese FGCS project, which hoped to both use hardware accelerators *and* run them in parallel for an even bigger speedup.

      What killed specialty hardware systems like the Symbolics LISP machine and the Mitsubishi PSI was Moore's Law and bigger, better-funded companies making huge advances in developing high-performance RISC CPU's. UNIX workstations in particular took over this space because UNIX systems were very general-purpose and could be used for lots of things, not just specialty A.I. applications, and by the early 90's Sun SPARCstations and Digital DECstations with their SPARC and MIPS RISC CPU's, respectively, had become *faster* at running Prolog and LISP than the specialty machines with the hardware LISP and Prolog accelerators were. (There is a great PDF report written by DEC about this here: https://www.info.ucl.ac.be/~pvr/PRL-TR-36.pdf ) What were you going to spend your tens of thousands of dollars on if you were a company? A specialty hardware Prolog or LISP machine, or a far more useful and flexible UNIX workstation that could also run LISP or Prolog faster than the specialty machine? It was a no-brainer decision. Even the Japanese FGCS project saw that it was futile to keep producing their custom inference machines when competing against the ever-rising performance of SPARC CPU's, and created a UNIX KL1 compiler so that they could start porting their inference machine software over to SPARC machines to continue their research.

      So it wasn't so much a "MIT/Stanford" style of programming vs. a "New Jersey" style of programming war, as the author tried to portray it. Instead, it was that better-funded large CPU companies were able to increase the performance of their CPU's faster than university researchers and tiny niche workstation companies could increase the performance of their custom single-use special-purpose chips. Just like the Commodore Amiga with its custom chipset, the Symbolics LISP machine and the Mitsubishi PSI had the performance edge in their particular niche at first, but their specialty custom nature and disadvantage in R&D funding eventually allowed more standardized, general purpose, and better-funded platforms to steamroll over them and crush them in the market.

      1. that one in the corner Silver badge

        Re: The Forgotten Fifth Generation

        > First of all, there was a computer from the Japanese ICOT "Fifth Generation Computer Systems (FGCS)" project that was commercially sold...

        Thanks for that info.

        That particular footnote [4] was supposed to say "as far as I'm aware, updates most welcome" but you may have noticed I was having trouble keeping the footnotes in line.

        > , but if you gave the expert systems more than a few hundred rules to follow in their decision trees they became painfully slow at returning answers.

        Can you provide any citations for this? Serious question, not snark: in about 1985 we had an Expert System Shell applying simple Bayesian stats and I don't recall ever having to worry about it becoming painfully slow at calculating as you increased the rule set. It might start to ask more questions to help it traverse the data, but generally IIRC it organised the rules along the lines of a decision graph and walked from one side to the other, choosing the next node in the path based upon the data & weightings so far: a "wide but thin" set of k nodes would be solved faster (as in, it asked fewer questions before finding a sufficiently likely node that couldn't be disproved, aka "the answer") than a "narrow but thick" arrangement of the same number of nodes.
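
        (To make that shape concrete, a toy sketch - the rules, weights and names below are all invented for this comment, and bear no resemblance to the shell we actually had:)

          # Walk a weighted decision graph, asking one question per node:
          # the cost is the path length, i.e. questions asked, so a
          # wide-but-thin graph is cheap however many nodes it holds.
          GRAPH = {
              # node: (question, {answer: (next_node, weight)})
              "start":   ("Does it turn on?", {"yes": ("powered", 0.3),
                                               "no":  ("power_fault", 0.8)}),
              "powered": ("Any error beeps?", {"yes": ("ram_fault", 0.7),
                                               "no":  ("disk_fault", 0.6)}),
          }
          LEAVES = {"power_fault", "ram_fault", "disk_fault"}

          def consult(answers: dict):
              node, likelihood, asked = "start", 1.0, 0
              while node not in LEAVES:
                  question, branches = GRAPH[node]
                  asked += 1                 # stand-in for asking the User
                  node, weight = branches[answers[question]]
                  likelihood *= weight       # crude stand-in for the
                                             # Bayesian weighting
              print(f"{asked} question(s) asked")
              return node, likelihood

          print(consult({"Does it turn on?": "no"}))
          # -> ('power_fault', 0.8) after one question; a deeper graph of
          #    the same size would need more questions to get there.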

        But there are other ways of working an XPS (e.g. it need not be interactive with the User - ours was and so any calculation time was peanuts compared to waiting for them to type) and it is never too late to learn about them.

        What helped kill off XPS (in the public eye) from our p.o.v. was mainly not showing a good enough cost/benefit: creating and validating rule sets is expensive and takes time away from your human expert. Simple XPS that could be created cheaply, such as an initial triage scheme, could be done as fast by the human operator once they'd run through the system a few times (i.e. the humans were trainable! Who knew?). Large rulesets (from our set of clients at least) were used too infrequently compared to the creation costs (and we didn't get into any of the big funded research studies). And some fields just learnt embarrassing lessons: like Stock Market trading. They can afford the set up costs, given the potential returns, right? But remember all those trials that showed that a monkey, an octopus or just throwing a die tended to give better returns on the market than the high-paid, so-called "expert traders"? And have you *tried* to get an octopus to work alongside a Knowledge Engineer? Or vice versa, for that matter (the suckers! The horrible, horrible suckers!).

        And back in the day there was something called "ethics" that controlled the use of computer models released into the wild: the XPS could show how it worked out a path to its answers, but if there wasn't a human around who could validate that when someone disagreed, the ethical liability was too high - and your human experts were not wild about being tied forever to that project, especially if they were planning to retire...

        Luckily, all the current crop of LLMs and GANs etc are totally incapable of being queried for how they reached a result, so there are no ethical restrictions on just dumping them into the wild.

        > more standardized, general purpose, and better-funded platforms to steamroll over them and crush them in the market.

        True.

        1. yetanotheraoc Silver badge

          Re: The Forgotten Fifth Generation

          `And back in the day there was something called "ethics" that controlled the use of computer models released into the wild...`

          It seems to me this fits right in with the worse is better mentality.

          1. Anonymous Coward
            Anonymous Coward

            Re: The Forgotten Fifth Generation

            The only way is Ethex - that's the problem when you have a lithp.

        2. DoctorPaul

          Re: The Forgotten Fifth Generation

          I did my PhD in expert systems in the late 80s at the University of Brighton (as opposed to Sussex where I did my degree), anyone remember POP11?

          I added a capability to RBFS, the Rule Based Frame System, which I still think is the best named software I've ever developed - it was the Truth Maintenance System.

          1. Michael Wojcik Silver badge

            Re: The Forgotten Fifth Generation

            The Truth Maintenance System seems to be broken now. Could you take a look?

  4. doublelayer Silver badge

    Good or just entertaining

    When the winners forget they've won, or that they were fighting, that means that the losing side get to write some of the best summaries of the war. One famous account is a 1991 article called Lisp: Good News, Bad News, How to Win Big, which says:

    The two philosophies are called The Right Thing and Worse is Better.

    And an entertaining article that is too, but it's not a good summary of anything. Right from the start, from the quote I use, it's obvious that they're setting up a straw man as the method they don't like. They weren't being subtle about that either, and I'm almost entirely certain that they knew it and assumed readers did as well (the alternative is that they were some of the most irritating and self-deluded people in all of computer science, and I trust that they knew what they were doing).

    This makes the article fun to read as someone who came along too late to participate in the war. It's also not compatible with their being right about what would have been better. If I ever use hyperbole to complain about things I don't like*, I'm doing it to make my comments less dry but I still have a reason why I didn't like it and that reason might be justified. However, by taking this attitude, it ends up lacking a lot of important context if you're trying to understand or summarize what really happened. If you say your thing is good compared to an alternative you've just made up, it does not show the reader why it is better than the real alternative that exists. I contend that the description is wrong. It's a great piece, but it is not one of the best summaries.

    * For example, I could say something like "JavaScript's developers looked at the idea of error handling and decided that they didn't want to do it and they didn't want anyone else to either". This is not true. There are error handling methods in the JS specifications and people use them. I just don't like them compared to those used by other languages, and I've seen way too much JS code that doesn't use them and needs to. Treating the quoted statement as a fact would demonstrate misunderstanding of the language, but readers can understand that I mean it as a lighthearted way of expressing a less extreme point that, at least in my opinion, is still valid.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Good or just entertaining

      [Author here]

      > It's a great piece, but it is not one of the best summaries.

      Thank you. I'll take it, because TBH it wasn't really trying to be a summary.

      Long time readers of my tech blog (of which there are some, going by the comments when I moved it to Dreamwidth, but I have no way of tracking them) may recall that I had a period of writing a lot about Lisp and Lisp Machines.

      This wasn't a summary, or even an attempt at one, really. It was sort of a...

      "This is the story of two sisters..."

      "Confused? You will be, after this week's episode of... Lisp!"

      All these questions and more will be answered next week. Or possibly some point thereafter.

      The whole subject is way too big to cover in one short news piece, but it's not very often that I get to write news stories about Lisp Machines, so I attempted to carpe the jolly old diem.

      1. doublelayer Silver badge

        Re: Good or just entertaining

        I didn't mean your article; that was a good description and covered interesting aspects. I was referring to the 1991 article you linked to, which I think is a great piece if you want to hear about views and opinions but doesn't provide much reliable context for understanding what happened.

        1. yetanotheraoc Silver badge

          Re: Good or just entertaining

          You wrote: "it's obvious that they're setting up a straw man as the method they don't like. They weren't being subtle about that either"

          So unsubtle that they even used the term strawman. I don't think you are correct that they don't like the worse is better method.

          They wrote: "I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach."

          "However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach."

          1. yetanotheraoc Silver badge

            Re: Good or just entertaining

            Oh, wait. Did you mean they set up the right thing as a straw man?

            I'm not that bright, sometimes I need things spelt out for me.

            1. Michael Wojcik Silver badge

              Re: Good or just entertaining

              No, no. Gabriel says in "Worse is Better" that he's deliberately setting WiB up as a strawman but then arguing that in fact it's more successful.

              > I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

              > However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

              In effect: "Here's the approach I worked under, and here's the competition. I like mine better, but the competition is more successful and arguably better for developing software". Then he argues that latter point by noting how easily C and UNIX spread (the "virus" argument), and how that in turn attracts resources to improve them.

    2. that one in the corner Silver badge

      Re: Good or just entertaining

      > it's obvious that they're setting up a straw man as the method they don't like

      Did you read the second URL, https://www.dreamsongs.com/WorseIsBetter.html ? Possibly not, as the article only describes it as an excerpt of the longer piece.

      However, it is actually a fun meta-piece about the "Worse is Better" idea, where the author describes how, over time, he flips between both points of view, including writing (and having published in the same organ) articles where he argues for both sides (battling between his own name and a pseudonym)!

    3. doublelayer Silver badge

      Re: Good or just entertaining

      I come back to reply to this comment and notice I made a typo - that classic kind that involves a few characters and totally reverses the meaning:

      I said: "It's also not compatible with their being right about what would have been better."

      I meant to say: "It's also not incompatible with their being right about what would have been better."

    4. abend0c4 Silver badge

      Re: Good or just entertaining

      What is quite stark about the 1991 article is that it actually says almost nothing, in principle, about the potential merits of functional programming vs procedural programming (except in generic remarks about consistency). It's really all about implementation details: how better compilers have allowed the performance of Lisp and C to become similar, and how better development tools have given C programmers some of the debugging tools that were once the preserve of the Lisp environment. Implementation details are not fundamental differences.

      Indeed, it just seems to be a carefully-constructed false dichotomy largely based on institutional rivalry. If the intention had ever really been to have a radical alternative to procedural bit-twiddling, fundamental constructs of the language (car and cdr) wouldn't have been named after arcane architectural features of the IBM 704.

      1. Gene Cash Silver badge

        Re: Good or just entertaining

        > fundamental constructs of the language (car and cdr) wouldn't have been named after arcane architectural features of the IBM 704.

        Thank you. Finally somebody said it.

      2. Michael Wojcik Silver badge

        Re: Good or just entertaining

        Your argument assumes LISP was designed ab initio under a deliberate "the right thing" approach. Obviously that's not true. It wasn't even originally intended to be implemented, and the REPL came along later more or less on a whim. But that doesn't prove that subsequent LISP development wasn't largely guided by a TRT philosophy.

        The histories of LISP variants on the one hand, and of UNIX and C on the other, are of course far more complex than Gabriel's article even touches on. His piece is at best an interpretation of certain prominent aspects of those histories - and at worst an utter fiction; I'm not interested in trying to support any position along that spectrum. But in any case, the existence of anecdotal nits like the etymology of car and cdr (or other LISP awkwardness, like some of the special forms or the over-reliance on arbitrary punctuation or some of the infelicities Gabriel himself mentions) is a feeble counter to his argument.

  5. Anonymous Coward
    Anonymous Coward

    Forgotten Projects In Glasgow, UK..........

    @Liam_Proven

    In the late 1980's Ivor Tiefenbrun and David Harland (of Linn Products, hi-fi manufacturers in Glasgow)......well, they were developing a machine called Rekursiv.

    The idea was to have a machine which implemented object-oriented code at the hardware level. The prototype was an add-in board implementing Rekursiv in a Sun workstation.

    The project was shelved in 1988 or 1989.

    So far so interesting. Object-oriented.......1988.....Glasgow.......

    .......but where are the similar visionary projects in the UK in 2023?..........even if they have failed (or will).....

    Commentards need to tell!

    1. Arthur the cat Silver badge

      Re: Forgotten Projects In Glasgow, UK..........

      > well, they were developing a machine called Rekursiv.

      To be programmed in a language called Lingo(*), IIRC. I may even still have a book on it stashed away somewhere. It's not obvious on my office shelves though.

      (*) Not the John H Thompson one, but if you look at the Wikipedia article on that you'll find it mentioned as the second entry in the "Other Languages" section.

      There's also a Wikipedia entry for Rekursiv.

      And a Hacker News thread about it from 2019.

  6. Some Random Kiwi

    Interlisp-D

    There's also https://interlisp.org/ - recovering and modernizing Xerox PARC's Interlisp-D system. It's running quite nicely on modern hardware, and many many times faster than on the original D-machines.

  7. Anonymous Coward
    Anonymous Coward

    Off Topic? Well....Not Quite!!

    I've always wondered why the truly excellent design of the Motorola 68000 chip never went any further...

    .....while the appalling x86 architectural mess turned out to rule the world.

    Recently I was talking to a friend who spent his life in the semiconductor business (1970 to 2010). His take on Intel's success was simple. He suggested that in the late 1970's (4040, 8080 time) Intel had mastered the chip business and were getting MUCH better chip yields than anyone else in the business. In summary, better chip yields at Intel trumped better architecture elsewhere!

    What a pity!

    1. _andrew

      Re: Off Topic? Well....Not Quite!!

      Intel was definitely winning on process technology, but that's not the whole story.

      Perhaps by good design or just good luck, the x86 architecture did not have much in the way of memory-indirect addressing modes. The 68k, like the VAX, PDP-11 and 16032, did have memory-indirect addressing modes (and lots of them: they all tended to have little mini-instruction sets built into the argument-description part of the main instruction set, to allow all arguments and results to be in registers, or indirect, or indexed, or some combination). This seems like a nice idea from a high-level language point of view: you can have pointer variables (in memory), and data structures with pointers in them (CONS, in the Lisp context, for example).

      The trouble is that this indirection puts main memory access into the execution path of most instructions. This wasn't such a problem in the early days, when there was no cache and memory wasn't much slower than the processor's instruction cycle. But that changed: memory got progressively slower compared to CPU cycles, CPUs developed pipelining, and that pretty much put the kibosh on indirect addressing modes, and on the processor architectures that supported them. RISC designs were the extreme reaction to this trend, but there was sufficient separation between argument loading or storing and operations in the x86 instruction set to allow it to follow RISC down the pipelining route.
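
      (A toy illustration of that critical-path point - this models no real instruction set, everything below is invented:)

        # Count the dependent memory reads each addressing mode needs to
        # deliver one operand. Each extra read is a load the pipeline
        # must wait for before the instruction proper can execute.
        MEMORY = {0x100: 0x200, 0x200: 42}   # fake RAM: address -> contents
        REGS = {"r2": 0x100}                 # a register holding an address

        def fetch(mode: str, reg: str):
            """Return (value, dependent_memory_reads) for one operand."""
            if mode == "register":           # operand already in a register
                return REGS[reg], 0
            if mode == "register_indirect":  # register points at the value
                return MEMORY[REGS[reg]], 1
            if mode == "memory_indirect":    # register -> pointer -> value:
                ptr = MEMORY[REGS[reg]]      # the second read cannot start
                return MEMORY[ptr], 2        # until the first completes
            raise ValueError(mode)

        for mode in ("register", "register_indirect", "memory_indirect"):
            value, reads = fetch(mode, "r2")
            print(f"{mode:18} value={value:<4} dependent reads={reads}")

      With no cache and fast memory those extra reads were nearly free; once CPU cycles raced ahead of memory, every one of them became a stall.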

      You can still buy 68000 derivatives, by the way. Motorola semi became Freescale became NXP, and they still sell cores called "ColdFire", which are mostly 68k, but with a lot of the extreme indirection that came in with the 68020 to 68040 streamlined off. They're essentially microcontrollers now though.

    2. Arthur the cat Silver badge

      Re: Off Topic? Well....Not Quite!!

      > I've always wondered why the truly excellent design of the Motorola 68000 chip never went any further...

      > .....while the appalling x86 architectural mess turned out to rule the world.

      I'd argue that the NatSemi 32k architecture was even more elegant than the M68k one, but the chips arrived very late to the party and never got the traction.

      1. prandeamus

        Re: Off Topic? Well....Not Quite!!

        As I recall, the NatSemi CPUs were buggy and didn't match their quoted performance (https://en.wikipedia.org/wiki/NS32000), and by the time they sorted that out, anyone who wanted a VAX-ish CISC architecture had already settled on the 68K line. I had a work colleague who bought a BBC Micro co-processor with the 16032, and I remember it running Ackermann's function awfully quickly, but there wasn't enough momentum, no "killer app" or sexy hardware platform.

  8. Red Ted

    RISC Machines

    Nice article. I did have to check part way through that it wasn't one of the better April fools.

    The other famous vendor of RISC architecture to rise during the period was…

    Acorn RISC Machines!

    1. Michael Wojcik Silver badge

      Re: RISC Machines

      Yes. And it's a bit odd to say "RISC is enjoying a renaissance" without noting that the vast majority of personal computers - smartphones, of which there are ~10 times as many as PCs of various stripes - are all using RISC CPUs.

      And, of course, x86 CPUs have RISC cores, but Liam more or less alluded to that.

      1. doublelayer Silver badge

        Re: RISC Machines

        This depends on how complex something has to be before it jumps from RISC to CISC. ARMv9, with over a thousand instructions, or ARMv8, which has most of those plus a bunch of others because it also supports all the 32-bit instructions, are pretty complex. It's not just load, store, and some mathematical operations these days. There are a lot of instructions that execute complex operations or that work on multiple pieces of data in one go. Is that still as RISCy as RISC-V, which has about 60 instructions in the base set? If not, is RISC-V still RISC when it's used along with a bunch of extensions?

  9. Anonymous Coward
    Anonymous Coward

    This war was, like most, between two rival camps.

    Unlike the movies, the Triffids win out over the Boffins due to the power of mass production.

  10. spold Silver badge

    Showing my age...

    One of my first languages was Algol 60 on an ICL 1902 (along with Basic, Fortran, and COBOL) - I was happy when Pascal was able to run on my Apple II.

    (missing semicolon at line 5)

    ;;;;;;;

  11. -v(o.o)v-

    Worse-is-better lives on

    It is obvious that worse-is-better is alive and well: look no further than the current "most popular" programming language, Python. An absolute train wreck being shoehorned into massive projects with the support of frameworks like Django.

    Oh the horror.

  12. masinter

    Lisp Machine revival

    For another Lisp machine revival, check out the work we're doing to revive Interlisp at https://Interlisp.org

  13. disgruntled yank

    To quote Ken Thompson

    """Yeah, yeah. I knew all those guys. It thought it was a crazy job. I didn't think that Lisp was a unique enough language to warrant a machine. And I think I was proved right. All along I said, "You're crazy." The PDP-11's a great Lisp machine. The PDP-10's a great Lisp machine. There's no need to build a Lisp machine that's not faster. There was just no reason ever to build a Lisp machine. It was kind of stupid."

    Interviewed in Coders at Work by Peter Seibel. A little above this, by the way, Thompson professes not to have read "Worse is Better" and goes on to say "I think MIT has always had an inferiority complex over UNIX."

  14. joeldillon

    'Today, though, thanks to Apple Silicon Macs on the high end and RISC-V on the low end, RISC is enjoying a renaissance.' - errrr. I think you'll find there's a bloody ton more ARM chips out there, and they've never gone away in an embedded context. Not everything is a PC, and PCs are not being built with RISC-V.

  15. tfknight

    Garbage Collection and Memory Tags

    This posting misses entirely the two major differences between the Lisp-oriented languages and architectures and the C-oriented world. Lisp pushed automatic memory management with garbage collection. Major advances in GC technology, such as real-time collection, followed. The C world pushed back energetically, and resisted GC technology for decades. But not now.

    Lisp, of course, relied on type coercion, and allowed any variable to contain any type. The best implementations, such as the CONS/CADR/LMI/Symbolics machines, used tags associated with each memory word to execute this code efficiently. Today, similar techniques are being actively worked on for information security issues. Types allowed array limit enforcement, for example.
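
    (A rough sketch of the tagged-word idea - a toy in Python, not the actual word format or microcode of any of those machines:)

      # Every memory word carries a type tag alongside its data bits, and
      # the tags are checked on each access - in parallel with the ALU on
      # the real machines; here we just raise on a mismatch.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Word:
          tag: str    # e.g. "fixnum", "array-header"
          data: int

      def add(a: Word, b: Word) -> Word:
          if a.tag != "fixnum" or b.tag != "fixnum":
              raise TypeError(f"type trap: add on {a.tag}, {b.tag}")
          return Word("fixnum", a.data + b.data)

      def aref(memory: list, base: int, index: Word) -> Word:
          # Array limit enforcement: the header word holds the length, so
          # every indexed access can be bounds-checked as it happens.
          header = memory[base]
          if header.tag != "array-header" or not 0 <= index.data < header.data:
              raise IndexError("bounds trap")
          return memory[base + 1 + index.data]

      mem = [Word("array-header", 2), Word("fixnum", 10), Word("fixnum", 20)]
      print(add(aref(mem, 0, Word("fixnum", 0)),
                aref(mem, 0, Word("fixnum", 1))).data)   # -> 30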

    Someday we may get architectures that help with these problems, but not yet.

    Tom Knight - CONS/CADR architect
