I would love a 1984 Subaru GL 4WD,
emacs not so much
Xerox's pioneering graphical Lisp workstation operating system is not only alive, well, and MIT-licensed, but running in the cloud as well as on modern OSes. Last week, the British Computer Society hosted a talk about the Medley Interlisp Restoration Project by Steve Kaisler of George Washington University, who literally …
What's the problem?! It has four seats! Two of them face backwards and sit in the load bed -- those are the kids' seats. Whipping along in the fresh, cold air, bouncing over ruts and potholes ... kids love that stuff! They've got serious handles to hold onto. And the back seats comprise the "smoking section" for any adult who desires to light one up along the way.
I like this new approach that doesn’t just hide the annoying parentheses with indentation. A big improvement in readability.
> “Rhombus is a new language that is built on Racket. It offers the same kind of language extensibility as Racket itself, but using traditional (infix) notation. Although Rhombus is far from the first language to support Lisp-style macros without Lisp-style parentheses, Rhombus offers a novel synthesis of macro technology that is practical and expressive.[…]”
Rhombus: A New Spin on Macros Without All the Parentheses (SPLASH 2023 - OOPSLA) - SPLASH 2023
https://2023.splashcon.org/details/splash-2023-oopsla/52/Rhombus-A-New-Spin-on-Macros-without-All-the-Parentheses
presentation by Matthew Flatt at SPLASH’23
https://youtu.be/c7S5WPsw_gM?si=ki-PmtCSuVaGp3Eb&t=4371
Just indent your Lisp code correctly, so the parameters to the function calls line up. Then it's easy. Lisp is a great language. Its simple syntax is a plus, and used properly it is extremely flexible, with so many other languages using ideas that were first demonstrated in Lisp.
It won't go away, and for good reason. But because it makes you think about programming differently, it is often seen as difficult to learn. Once you get it, though, it's easy.
Well, I was ultra-surprised to find (20 minutes ago) a JavaScript edition of Abelson and Sussman's "Structure and Interpretation of Computer Programs" (abbrev. SICP JS), as adapted by Martin Henz and Tobias Wrigstad (2022). This should help raise the level of JS programmers worldwide (or broaden their perspectives), and will make a most superb end-of-year present for many!
Unfortunately this first sets up a scheme-like library where cons is possible and then uses this exclusively. It is nothing like standard JavaScript and does not provide much insight into either native JavaScript programming or how you would use native JavaScript to write code like that found in SICP.
One is better off using standard SICP to learn the ideas presented there in a general sense, then attempting to apply them in JavaScript.
Ah-ha! Nice try! Prompted me to go over to the OpenAccess version ( https://sourceacademy.org/sicpjs/index ) and check the contents. For example:
1.3.4 Functions as Returned Values -- nice (all JS)
3.5.1 Streams Are Delayed Lists -- cool (all JS)
4.1 The Metacircular Evaluator -- evaluator for JavaScript implemented in JS
5.4 The Explicit-Control Evaluator -- regs/stacks for funcalls/arg-passing in JS
5.5.1 Structure of the Compiler -- generate machine code instructions for JS, in JS
After reading that, and Andy Wingo's blog, kids should be ready to hack into SpiderMonkey and V8 (or make their own JS engine for ARM microcontrollers!)! ... (unless I've missed something of course)
One amusing thing I heard here recently was "what are all the fingernail clippings in the code?" which I'm wholeheartedly stealing.
I have used EMACS for 30 years and I've still never got the hang of LISP. I just use snippets I find on Altavista or Google or whatever.
I guess I used up all my brainpower memorizing EMACS keystrokes so I have none left for LISP.
All those parentheses are not that much of an issue with modern editors. However, one early version of microcomputer LISP (old enough to be built for CP/M) hit on the idea of using the square bracket ']' as a "close everything" parenthesis. It was useful and really just a typing shortcut; it didn't alter the syntax of the language.
BTW -- My problem with LISP is that I can never think of anything to do with it. I'm also aware that all this list processing involves indirect references, something that's 'mostly harmless' for older computer systems where the memory speed is similar to the processor clock, but awkward with modern processors. But then most of my work's been with real time / embedded systems where elegance of the top level code takes a back seat to determinism and efficiency. (The idea of "Embedded LISP" -- possible.....but seriously?)
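The "close everything" bracket mentioned above is easy to sketch as a tiny pre-processing pass. Here is a minimal illustration in Python (the function name is made up, and this simplified version closes *all* open parens, whereas real Interlisp's ']' closed back to a matching '[' when one was present):

```python
def expand_super_brackets(source: str) -> str:
    """Rewrite each ']' as enough ')' to close every open paren.

    A sketch of the old 'close-everything' shortcut: purely a typing
    convenience, translated away before anything parses the code.
    """
    out = []
    depth = 0
    for ch in source:
        if ch == "(":
            depth += 1
            out.append(ch)
        elif ch == ")":
            depth -= 1
            out.append(ch)
        elif ch == "]":
            out.append(")" * depth)  # close all open parens at once
            depth = 0
        else:
            out.append(ch)
    return "".join(out)

print(expand_super_brackets("(defun sq (n) (* n n]"))
# -> (defun sq (n) (* n n))
```

As the poster says, this is a typing shortcut, not a syntax change: the reader of the expanded code never sees the bracket.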
My problem with LISP is that I can never think of anything to do with it.
Just a bit of fun. :-)
If you have 20 horses in a race, what is the number of ways you can choose first, second and third? Well, the first could be any one of 20. The second, any one of 19 and the third any one of 18. So 20*19*18 ways
or 20!/17![1] = 20!/(20 - 3)! = 6840
And in Lisp:
$ clisp
Type :h and hit Enter for context help.
[1]> (defun fact (n) (if (< n 1) 1 (* n (fact (- n 1)))))
FACT
[2]> (/ (fact 20) (fact (- 20 3)))
6840
[3]> (quit)
$
Recursive functions: aren't they beautiful?
[1] And this is read, of course, as 20 bang over 17 bang. :-)
> Interlisp or Interlist? Parehelion... who the heck were they?
Whoops. My bad. I have pointed these out to TPTB and they will be fixed forthwith.
> And if LISP really was so great, how come it was almost totally ignored when the DoD was looking for a foundation for Ada?
I find Ada quite readable, myself.
Whereas I recently had strips torn off me on Mastodon for calling Lisp unreadable -- by Piers Cawley, renowned (retired) Perl guru, who ought to know about unreadable languages and who has clearly not read any of my tech blog, ever.
I think things like RPN, prefix and postfix notation, Forth and Postscript and Lisp, are readable to a certain kind of mind, but will remain forever opaque to most people. And programmers are just people. While C notation is easy enough to construct and Java can be thrown together by untrained non-domain-experts and will work fine, in a language designed for safety, a simple, clear, Pascal-like syntax was probably the right choice to make.
FYI that was an exceptionally tasty article, just in time for Thanksgiving today.
I do like seeing other ways of doing it. One of my college roommates was a big LISP Machine user.
I think the biggest fault of LISP was car & cdr. If they'd called 'em "first" & "rest" I think there would have been far more acceptance.
Thank you!
(edit: "oh yeah, CAR is contents of address register and CDR is contents of decrement register" "Seriously? WTAF is what the actual fuck. Are you really expecting that to make sense to new coders?")
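The renaming wished for above did eventually happen: Common Lisp standardised FIRST and REST as synonyms for CAR and CDR. A rough Python analogue of cons cells makes the point (illustrative only, not a real Lisp implementation):

```python
# A cons cell as a 2-tuple; car/cdr and their friendlier synonyms.
def cons(head, tail):
    return (head, tail)

def car(cell):   # "contents of address register" -- the head
    return cell[0]

def cdr(cell):   # "contents of decrement register" -- the tail
    return cell[1]

first, rest = car, cdr  # the names many wish Lisp had led with

# Build the list (1 2 3) as nested cons cells, with None as nil.
lst = cons(1, cons(2, cons(3, None)))
print(first(lst))        # -> 1
print(first(rest(lst)))  # -> 2
```

Same operations, but "first of the rest" reads aloud in a way "car of the cdr" never did for anyone who hadn't met the IBM 704's registers.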
> "oh yeah, CAR is contents of address register and CDR is contents of decrement register" "Seriously? WTAF is what the actual fuck. Are you really expecting that to make sense to new coders?"
In the '50s, '60s, and into the '70s, serious computer instruction started with the mechanics. Registers, addresses, and decrements were fully understood concepts by the time the student was writing machine code, even pseudo-code for assembler. I always "see" a CPU as a 12 to 48 bit Main Register, like an abacus, usually with several other registers. Do they even teach that way anymore?
When I was taking my first Pascal course, back in uni, we "diverted" a few weeks from learning Pascal to learning, reading, and writing programs in "Gear's Assembly Language", which was for a virtual CPU emulated on our CDC 3300 mainframe.
I'd love to see any old docs on G.A.L. and/or its interpreter, or even a copy of the source, but I can't find what I want via Google. Google won't ignore keywords I put in the Advanced Search's "none of these words" field, and shows me tons of irrelevant hits to machine tooling sales sites. :-(
I never learned much LISP, but I thought (incorrectly?) that CAR stood for "Content Address Register", and "CDR" stood for "Content Data Register".
(Icon for old guy.)
Probably not. Older architectures are useful for teaching machine-language concepts because they are simple compared to modern CPUs.
Have you ever seen a "programmers' card" for any amd64 CPU? I'd bet not.
The errata docs alone for Intel's current 64-bit CPU are more than 100 pages, which was about the size of the entire (and excellent) manual for the Motorola 6800.
The 6800 had six addressing modes, which I can still remember:
* implied (target register number encoded into the instruction; consumes one byte)
* direct [aka zero-page] (addresses $00FF ~ $0000) (one-byte page zero address concatenated to the 1-byte instruction, for a total of 2 bytes)
* extended [aka absolute] (the address of the operand [addresses $FFFF ~ $0000] concatenated to the 8-bit instruction, for a total of 3 bytes)
* immediate (one-byte operand concatenated to the instruction for a total of 2 bytes)
* indexed (one-byte unsigned offset added to the 16-bit index register to form the operand address; consumes a total of 2 bytes)
* relative (uses current location with 1-byte offset of +127 ~ -128; consumes a total of 2 bytes)
"Do they even teach that way anymore?"
I asked one of our new hires that exact question. He confirmed that they do... in his Computer Engineering course, they still design and build CPUs, interface them to memory, peripherals, etc., and design microcoded instruction sets. Instead of doing it on paper, though, like we did in the 70s, they do them in VHDL or Verilog and implement them in FPGAs.
I was encouraged.
I'm disappointed that your editorial staff aren't sufficiently on the ball to have picked up the lisp/list inconsistency.
The more time I spend with software projects' excuse for documentation, the more I respect the rigour that the Ada community attempted.
I've dug around the history a bit, and in actual fact the Strawman requirements did mention LISP's indeterminate-length lists as something useful to have. Other than that it was ignored, and in the end they explicitly based the language on Pascal (i.e. as distinct from ALGOL-60 etc.).
And then it appears that the DoD's HOLWG actually hired Dijkstra, Hoare and Wirth as consultants: they being the prime movers behind the "Minority Report" which pointed out flaws in ALGOL-68 as first defined.
All of which could be very easily interpreted as an aggressive dismissal of a whole bunch of ivory tower academics including John McCarthy and van Wijngaarden's coterie.
But the bottom line is that neither Ada nor ALGOL-68 had an easy/cheap implementation which allowed an engineer or project specifier to take a copy home or run it standalone on his office workstation. And that's probably why C (and, in its day, Turbo Pascal etc.) outsold it something like 5,000-to-1. Hell, I've seen more copies of LISP sold than Ada...
neither Ada nor ALGOL-68 had an easy/cheap implementation which allowed an engineer or project specifier to take a copy home or run it standalone on his office workstation
Algol 68 rather predated home computers and office workstations. I learnt it (Algol 68R dialect at RRE Malvern) on an ICL 1907F, and I don't think anybody had one of those at home in the early 70s.
I find Ada quite readable, myself.
I've often thought that, in another world where Ada really caught on, I'd've quite liked to have specialised in that instead of C. But I know that in reality I'd've hated it, precisely because it would stop me doing all the evil things that C allows (or even encourages).
I also had the good fortune to work with Piers many years ago. Very bright, and not someone I should've tried to argue with about technical matters, but probably did anyway.
Pascal for the CDC 6600 had an obscure feature called, I think, "non-discriminated type unions". It let you choose, on-the-fly, what type the compiler should "think" a particular variable had. The CDC 6600 had 60-bit words. Each word could hold: A floating-point number, or an integer, or a string of 10 six-bit characters, or a Boolean, or a Pascal "set" (max 60 members). If you were writing, say, an assembler, it was handy to be able to re-interpret a 10-character identifier string as (after appropriate munging) an integer, which you then could use as a key into a hash table.
"Untagged variant records" in more recent Wirth-style parlance. A lot of machines were like that, and while I've not gone into that facet of history in detail my suspicion is that the original intention of types as conceived by Hoare/Wirth was to handle those, with extension to more complex data structures etc. following.
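The CDC trick of viewing one word's bits as different types can be imitated in Python with the struct module (a sketch only: the 6600's 60-bit word and six-bit characters are approximated here with an ordinary 64-bit word and ASCII):

```python
import struct

def ident_as_int(name: str) -> int:
    """Reinterpret the bytes of a short identifier as one integer.

    Roughly what the untagged-union trick did on the CDC 6600: same
    bits, different type, directly usable as a hash-table key.
    """
    padded = name.encode("ascii").ljust(8, b"\0")[:8]  # fit one 64-bit word
    (value,) = struct.unpack(">Q", padded)             # same bits, now an integer
    return value

key = ident_as_int("COUNT")
print(hex(key))     # -> 0x434f554e54000000
bucket = key % 97   # e.g. an index into a 97-slot hash table
print(bucket)
```

The compiler on the 6600 emitted no conversion code at all, of course; Python has to round-trip through bytes, but the idea (reusing the identifier's bit pattern as the hash key) is the same.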
It's interesting reading the C created by a dyed-in-the-wool but poorly trained COBOL programmer. Procedures many pages long, often just one of them.
I think my first manager may have been a COBOL programmer. He knew nothing about C, but the point is that he knew that he knew nothing about C so his approach to writing C was to tell me to do it. Clueful, and one of the best managers I've worked for.
This post has been deleted by its author
But one of the most significant things about Smalltalk as described in the "Blue Book" was that it went to significant lengths to encourage commenting etc. to make large-scale programming manageable.
Kay, unlike many who attempt to design a language, was intimately familiar with the state of the art: warts and all.
Quote:
"But one of the most significant things about Smalltalk as described in the "Blue Book" was that it went to significant lengths to encourage commenting etc. to make large-scale programming manageable."
But that's standard when writing any sort of code... comment the hell out of it.
I liked Smalltalk the year I used it ... but it seemed to involve frogs and toads and lilypads (if you get the reference...)
I may be weird, but the nice clear syntax of Smalltalk beats everything else I have ever seen for readability. Although it is possible, as with anything, to write it badly, the syntax would fit on a postcard. The only gotcha is the left-to-right (no-precedence) evaluation of binary expressions, but the only problem that has ever caused me is making me think twice about "Facebook maths test" problems.
If you want to have a quick play, I heartily recommend pharo
I started with C, then tried to learn Pascal in my spare time - absolutely hated it. After a few years of C under my belt, a new job introduced me to Smalltalk - I took to the language, concepts and IDE like a duck to water. Then I moved to Java when moving jobs, and had the opportunity to go back to Smalltalk a couple of times. Back now on Java, but the Smalltalk experience influences me heavily.
So in Smalltalk you have left-to-right, in APL you have right to-left, and Lisp does its own thing. Then you've got RPN as used by Forth, HP et al.
All of which has left me with the definite feeling that we owe an enormous amount to ALGOL as originally defined, which despite the fact that few people at the time knew how to write a compiler did its best to stick to normal algebraic evaluation.
I think things like RPN, prefix and postfix notation, Forth and Postscript and Lisp, are readable to a certain kind of mind, but will remain forever opaque to most people.
Do you have any evidence for that, or is it simply a guess? If there is evidence, how much is attributable to the fact that infix is taught first?
I'm suspicious of any claims about "X is difficult for most people" without support from methodologically-sound research. We already know about a whole host of things which have strong support for such claims; there's little value in making up others.
Well, for one thing, infix is hard to parse, like: 9 + 3 * x^2 / 4, for which one has to find individual operators and operands, and then build an evaluation tree for the expression that respects operator precedence rules, prior to figuring out the value of the expression.
Prefix is easier to parse by machine because the operators (or functions) are always at the same place (1st position in the list), and are followed by the args (that may be sub-expressions), for example: (+ 9 (/ (* 3 (square x)) 4)). Hencewiseforthwith the evaluation tree is pre-built in the prefix-style code.
Postfix (Forth) is the easiest in this respect as it needs no parsing at all, essentially (one token at a time): x dup * 3 * 4 / 9 + Here, the full responsibility for everything rests in the head of the reverse polish thinker who programmed this.
Infix is more humane, possibly (as you suggest) because we learn it in grade school arithmetic. But it should pay great dividends to be a polyglot in this, even if that is only to better enjoy Yoda speech such as: "market to the going I am" for the more conventional "I am going to the market"! (IMHO).
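The point about prefix and postfix needing almost no parsing can be made concrete with a toy evaluator for the thread's expression, 9 + 3x²/4. A sketch in Python (operator set, token format, and the spelling of "dup" as a repeated variable are all illustrative choices, not any particular Lisp's or Forth's):

```python
# Evaluate 9 + 3*x^2/4 in prefix and postfix form, to show how little
# parsing either notation needs compared to infix.

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def eval_prefix(tokens, env):
    """Operator first, then operands: the expression tree is already explicit."""
    tok = next(tokens)
    if tok in OPS:
        a = eval_prefix(tokens, env)   # recurse once per operand
        b = eval_prefix(tokens, env)
        return OPS[tok](a, b)
    return env[tok] if tok in env else float(tok)

def eval_postfix(tokens, env):
    """No recursion at all: a bare value stack, as in Forth."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(env[tok] if tok in env else float(tok))
    return stack.pop()

env = {"x": 2.0}
# (+ 9 (/ (* 3 (* x x)) 4)) flattened, with square spelled out:
print(eval_prefix(iter("+ 9 / * 3 * x x 4".split()), env))  # -> 12.0
# "x dup *" spelled "x x *" here, since this sketch has no dup:
print(eval_postfix("x x * 3 * 4 / 9 +".split(), env))       # -> 12.0
```

Note how the prefix version is a straight recursive walk and the postfix version just a loop over a stack, while an infix evaluator would additionally need precedence-aware parsing before either could run.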
It is interlisp. I used it for many years at Xerox as my desktop. It had mail, text editing, a couple of lisp structure editors, etc. in fact, everything you needed to get your work done. If it crashed (e.g., due to building power failure), when it came up all of your windows were just where you left them.
I still miss it (and Smalltalk 76).
This post has been deleted by its author
Great article, I've poked around with this a few times since I was always interested in Lisp machines when I was younger. Compiles and runs quite well on Manjaro ARM. That they're trying to create documentation for new users is also impressive, comparable to the project to revive MIT ITS and provide info for new users.
Now, if somebody could also take care of
+ ALGOL Mainframes from ICL, Unisys and Moscow Precision*, a precursor of the JVM runtime and Memory Safe Kernels
+ Wirth's systems from Modula-2 to Oberon
+ Smalltalk-based systems of many shades.
Why ? Because these systems are still leading-edge in some aspects, especially security and simplicity.
The second reason is that the hard work, which went into these systems, should be somehow preserved for future engineers and scientists to study. Maybe there is even some social-science knowledge to be found in the future. Currently we preserve Greek scriptures, but we throw away important inventions of just a few decades ago. That's sad.
* now MCST
Oberon: https://github.com/pdewacht/oberon-risc-emu
Lilith: http://pascal.hansotten.com/niklaus-wirth/lilith/emulith/
Xerox Alto: https://github.com/livingcomputermuseum/ContrAlto
Xerox Star/Dandelion: https://github.com/livingcomputermuseum/Darkstar
And for fun, some LISP machine emulators: http://unlambda.com and https://lisp-machine.org/ and https://github.com/jjachemich/linux-vlm
I learned to program on a BBC Micro. First BASIC (which, as a PhD student, I was teaching to university undergrads - I was one or two weeks ahead of them), then 6502 assembler (because I had a couple of psychology research projects involving control and data collection from multiple units of custom hardware which couldn't be run any other way - I have written thousands of lines of assembler), then LISP (for fun, but I learned enough to get it to do useful things and *it made me change the way I thought about programming*).
I only learned C after all of those, and it seems to me that that was a good order for learning languages. I wrote a lot of things in C, but I suspect with quite a bit of LISP inspiration. After that I did some maintenance on some code written in FORTRAN-2, some newer stuff written in FORTRAN-IV (no chance of any style in either of these), quite a lot of C++ (neural network simulations), some C# (as the language you had to use with Unity), and tons and tons of MATLAB (we had equipment which only came with libraries for MATLAB). BTW, I'm not a programmer, I'm a University Professor, and this journey through programming started when I was a PhD student over 40 years ago (BBC LISP definitely played an important role, even though I never really used it in anger).
Replies to comments:
* Lisp parentheses are less annoying when you use Medley's SEdit structure editor and managed files.
* It's a different style of development -- you don't start with a text editor typing in your program, you grow software incrementally. You don't have to have your program completed before you can start running it. There are files, but the files are more like backup/checkpoint than source truth.
* The "sweet spot" for the system was exploratory programming, incremental development, rapid prototyping, for researchers in AI, GUIs, linguistics, etc. The users weren't primarily concerned with software engineering, modularity, security, at least until much later.
* 'first' and 'rest' are part of Common Lisp, and mean the same thing as CAR and CDR. In Interlisp since the early '70s, you can define a "record" format and hide all of those calls.
* WebAssembly: there's a port, but some details need to be worked out, like what to do for a file system. Way back when, files lived on file servers, and we're looking at that as a possible (retro) architecture for Medley in the browser.
We're far from done. We're doing this for fun.
--
https://larrymasinter.net https://interlisp.org
Absolutely. Smalltalk and its surrounding tooling has very much the same vibe; it's an all-encompassing environment, and one uses it very differently to almost all current systems. "Debugging a program into existence" is very much a thing, where the debugger fires up on doesNotUnderstand: calls and you write the code that wasn't there yet.
Keep on doing it for fun :-). When do you reckon you'll be able to remote-control a coffee pot with it?
Cool thing about WebAssembly (for this) is that it targets a "portable virtual stack machine", which has philosophical similarities to your (1980) optimizing InterLISP compiler that targeted an "abstract stack machine" (0-address arch; no register-allocation torture).
Next steps might be to run those VMs on-the-metal, and develop OS and apps on top (eg. your suggestion: "If we could make a computer with an operating system written in Lisp, can't we build a phone OS written in JavaScript?")!
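A zero-address stack machine of the kind both WebAssembly and that abstract compiler target use is small enough to sketch in a few lines. A toy illustration in Python (the instruction names and tuple encoding are invented for this sketch):

```python
# A toy 0-address stack machine: every instruction implicitly works on
# the top of one operand stack, so the compiler emitting this code
# never has to do register allocation.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown op: {op}")
    return stack.pop()

# (2 + 3) * 4, compiled with no registers in sight:
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))
# -> 20
```

The appeal for a portable target is exactly what the comment suggests: the stack discipline is the same on every host, and it is the host's job (a JIT, or a native code generator) to map stack slots onto real registers.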
I’m a big fan of retro-computing, but it would be a mistake to think lisps were an evolutionary dead end. After Common Lisp was standardised in the '90s, the Scheme community continued to innovate. In that tradition I highly recommend trying out a modern lisp that is still actively developed, has a great native code compiler, a compiler to Javascript, advanced metaprogramming, is cross platform, open source, has many tools and libraries, lots of documentation and books and a vibrant community: https://racket-lang.org/
(It’s the VW Multivan of modern lisps)
Was it really a war?
There was a substantial mindset difference between traditional Von Neumann architectures and the procedural programming that went with them, and what the AI researchers were doing with neural networks, non-procedural programming etc. To me back then it was obvious: the simplicity of the Von Neumann architecture suited the types of problems computers were being brought in to address.
I suggest the lessons from this work are very applicable to the new AI "coprocessors" such as the Neural Processing Units (NPUs) that Intel are integrating into their CPUs. So what we actually had was a technology research stream that was decades ahead of the world, which in some respects you want research to be about.
One of the many "odd" skills in my resume is: can write all Common Lisp prims in assembly language. Which I did for a commercial Lisp compiler in 1986. Can still pretty much recite the Guy Steele book from memory. Even then Interlisp had lost the war, being just an academic installed-base afterthought. Interlisp compatibility or porting never once came up with our customers as a serious question. And we had some pretty "interesting" customers. All the usual suspects.
Rooms / Notes as "Hypercard before Hypercard"? I don't think so. Not once when Hypercard came out in 1987 did anyone say: hey, it's just like Notes on Interlisp. Honestly I think the only time I even saw a screenshot of Rooms/Notes was in one of those "Collection of Papers" books that you would see downstairs in the Stanford Bookstore in the immense Comp Sci section. Back in the good old days when the Stanford Bookstore was the best academic bookstore in the world, bar none. Sadly destroyed in the late 1990s to be replaced by a very bad Borders clone, which soon got rid of all those pesky books for those much more lucrative Stanford sweatshirts.
Lisp code was very easy to edit and navigate once the code editor window supported blinking matching parens and keyword highlighting. Something we had from the beginning. And you would not believe the source level debugger we had either. It was not till later versions of CodeWarrior in the late 1990's that other languages started catching up.
As for the parens: just like braces in JS. After you have looked at enough source code they just kinda disappear as you read it. As for the guy who mentioned FORTH: if you have to write a large enough FORTH codebase you end up with some kind of structural partitioning that functions just like parens / braces. Those dictionary entries don't organize themselves. But most FORTH code was just the low-level version of high-level APL code: write only, then immediately forget how it works. Saying that, almost all large Lisp codebases ended up write-only too. Which is why they died after the original developers moved on. No one else could work out how they worked.