RISC OS
There's some bits written in C. Some.
But the system API and the huge majority of, well, everything is hand crafted ARM code.
Start here (main OS startup, after HAL init).
Believe it or not, not everything is based on C. There are current, shipping, commercial OSes written before C was invented, and now others in both newer and older languages that don't involve C at any level or layer. Computer hardware is technology yet very few people can design their own processor, or build a graphics card. …
As I recall, RT-11 (DEC PDP-11) is also written entirely in assembler. It's quasi-open-source in that you can compile in your own device drivers along with the standard ones. As a runtime OS it was designed to control equipment, and so I understand the 'Unibus' and 'Q-bus' architectures were designed with custom peripheral boards in mind. RT-11 had foreground and background capability and, with memory management hardware, could swap blocks of memory in and out of visibility. Anyway, experiment with simh if you are so inclined (I have). Built the kernel a couple of times from source even, making fixes here and there for various things. But some of the userland programs may have been written in FORTRAN or some other lingo. Still kinda fun to toy with. And then when you see PDP-11 op codes doing post-increment and pre-decrement you can easily see how that ended up in the C language.
I started my computing career on a DEC-10, which was all written in Macro-10 assembler and had shared libraries within an application (HIGH SEG). Then it was a mixture of CP/M on an RML 380Z, whatever the PET ran, BBC Basic, and a very early Microsoft Basic on a Challenger II.
I used RSTS/E back in the early 80's and most of the core tools were written in DEC Basic-Plus with the low level kernel in Macro-11 assembler. It also had shared libraries.
Then it was OS4000, all written in Babbage, at the start of my university degree, where I finally met Unix, and so the first OS written in C, some 10 years after I started using computers.
I guess the C centric OS view really only works for the youngsters.
RISC OS is an impressive bit of software, but the fact that it is written in 32-bit ARM assembler (which was a somewhat sensible decision back in the 1980s, when it was written) makes it a dead end. It can't even run on the 64-bit-only ARM variants, which are becoming increasingly common.
AFAIR, RISC OS was not Acorn's first go at an OS for their ARM-based machines. There was a project written in (IIRC) Modula2, but that was far from ready by the time the first Archimedes computer hardware was finished, so Acorn made ARTHUR (some jokingly said it was short for "ARM OS by Thursday"), which took many elements from the OS from the BBC Micro (which was written in 6502 assembly language) and added a very limited GUI. This was after a year or so replaced by RISC OS, which was clearly an extension of the code used in ARTHUR. After another year (about 1990, IIRC), this was upgraded to RISC OS 2, which had many advanced features that would not appear in Windows until Win95, and some that haven't made it there yet. At the time I loved it, but in spite of an active fan base, it will not have a long-term prospect unless rewritten in a high-level language that can be compiled to multiple platforms. Rust would be ideal, but it is probably hard to retrofit RISC OS to the Rust memory model. Maybe it is better to write a new OS from scratch that takes the best elements of RISC OS and adds support for new stuff such as multicore processing and UNICODE.
RISC OS 2 followed Arthur, and the advanced one was RISC OS 3.
Yes, it was fairly common to write stuff in assembler back then, but in this day and age it limits you to compatible processors. Thus the only two options are either to devise some sort of emulation to carry on running it as it is, or to recreate something new that uses concepts of what made RISC OS unique.
Pretty much the entire underlying API would need to be thrown out, though. It is based around the behaviour of the ARM's SWI instruction, which is unlikely to map onto other sorts of processors. Time, perhaps, for something a little less bare metal?
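For anyone who hasn't seen it, here is a minimal sketch of what that SWI-based API looks like from C, assuming (if memory serves) the usual kernel.h veneer from the RISC OS shared C library; it only builds on RISC OS and is purely illustrative:

/* Every OS service, from writing a character to window management, is
 * reached by loading registers and issuing a SWI. OS_Write0 (SWI &02)
 * writes the NUL-terminated string whose address is in R0.            */
#include <stdio.h>
#include "kernel.h"          /* RISC OS only: _kernel_swi(), _kernel_swi_regs */

#define OS_Write0 0x02

int main(void)
{
    _kernel_swi_regs regs;
    _kernel_oserror *err;

    regs.r[0] = (int)"Hello from the SWI interface\r\n";
    err = _kernel_swi(OS_Write0, &regs, &regs);
    if (err != NULL)
        printf("SWI failed: %s\n", err->errmess);
    return 0;
}

The point being that the register-level, SWI-shaped contract is the API, which is exactly why it doesn't transplant to other architectures.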
The proposed original OS was ARX, I think it was called. Needed a 4MB machine and a harddisc and started swapping after no time. Could have been impressive a decade later, but was a complete dead end in 1987. Look at the prices of harddiscs and memory back then.
RISC OS, on the other hand, could manage a rudimentary but functional DTP package on a 1MB machine.
The old Apollo workstations & servers during the 1980s (before Apollo was absorbed by HP) ran an OS that was written in a version of Pascal. An incredible system; at one time we ran up a lab of about 20 diskless nodes from a single disked workstation with no appreciable performance headaches.
I believe that Cray also used Pascal-based OSes. However I'd caution that it's likely that the implementation language was a long way from (standard) ISO Pascal or (de-facto standard) Turbo Pascal, in the same way that Burroughs' implementation language for the MCP (ESPOL) was distinct from the ALGOLs that they used for application programming.
Multics was written in (a version of) PL/I with the added challenge of having to develop the compiler in parallel with the operating system.
IBM has written chunks of its various operating systems using PL/S and its successors.
VAXELN, a bit of a curiosity in the Digital Equipment Corporation range of operating systems, was written in Pascal. Chunks of VMS were written in BLISS-32 (though a lot of it was Macro-32).
Yep. I wrote a new OS directive in Macro-32 after attending a kernel & device driver course. It would give the user full system privs if you executed a program that called it.
Yes, I know that it was a huge great security hole but I only ever installed it on a machine that was used for testing. I just wanted to prove that it could be done.
DEC operating systems at that time were not the most secure. RSTS/E had a program called 'init' that would often be publicly runnable and (necessarily) with the privilege bit set. As a college student I experimented with it and discovered that you could send it commands like "LOGIN KBxx" and it would log KBxx in with whatever account you specified, and NO password required. Ooops. Ginormous security crater built right in. (This program runs the RSTS/E startup sequence - another simh exercise and you'll see it in action - somewhere there is an RSTS/E image you can DL and run under simh; I found one.)
A lot of RSTS/E was written in BASIC and compiled into a kind of P-code. But I am pretty sure the kernel was done in the MACRO assembly language.
When DEC ported VMS from the 32-bit CISC VAX architecture to the 64-bit RISC Alpha, they wrote a compiler to translate the MACRO assembly language into native Alpha machine instructions, treating it as a high-level language. (Some of it even had block structure, using the powerful macros that the assembler did indeed have.)
Our VAX 11/780 came with a complete set of source code... on microfiche. I spent many an hour studying the BLISS and Macro. (Many files signed by the legendary Dave Cutler.)
The VAX instruction set is probably the most elegant CISC set that I've ever programmed in. The National Semiconductor 32000-series was directly inspired by it, so I've always thought that it was a pity that the bastard x86 won.
Ah, VAX assembly. All those handy CISC opcodes, and the system macro libraries... I had a school assignment where we had to use a macro (SYS$UNWIND or something like that?) to implement exception handling. Very cool.
The VAX assembly course tied with the LISP and Scheme courses for being the most fun just to write code.
> IBM has written chunks of its various operating systems using PL/S and its successors.
Yep. IIRC the VMM (monitor) that underpinned AIX 2 on the RT PC was written in PL/M, and the OS/400 kernel in PL.8 (which used the "." rather than "/" in its name for some reason). PL/I variants were all the rage at IBM for a while.
I think it was "VRM", or Virtual Resource Monitor or Manager.
VMM is the Virtual Memory Manager from AIX 3.1 and later, and is definitely part of AIX.
VRM was a very interesting abstraction layer that presented AIX with a full 32-bit address space, even though the RT maxed out at around 24MB (I think, it's a long time ago). This meant that AIX did no paging itself, and any that happened was actually done by the VRM.
Multics was the best, most influential OS ever written and I still miss it.
If you are interested there is now an emulator for the DPS8M mainframe it ran on and you can run Multics on that (it took all of 5 minutes to download and get running).
There was indeed no PL/1 compiler in existence when the project started and the company that was contracted to supply one were extremely late, so a subset (EPL for Early PL/1) was created and Multics written in that before the first full compiler was finished. I forget which version (I think maybe the V1 PL/1 compiler) was up and running before IBM even had a compiler in alpha test.
Eventually the V2 PL/1 compiler was complete and was a full implementation of the language and arguably the best PL/1 compiler on any system. It was generally reckoned back in the early 80s that it produced more efficient object code than anyone but the very best assembler programmers could produce.
Some notable OS's have been written in Assembler but tend to get forgotten.
My personal favourite is RSX-11-M/M-Plus/S
I had great fun modifying the boot code for 11S so that some extra custom features were enabled. Those were the days when IT was fun and there was always something new happening.
Yes, MS-DOS was a bought-in reverse engineering of CP/M-86, which was assembler. Much of CP/M-86, and many CP/M-86 or DOS apps, were auto-translated from 8080 code to 8086 by an Intel tool. The 8088/8086 was really a superset of the 8080, hence the awful 64K segments and no 16-bit flat addressing like all the true 16-bit CPUs of the early 1980s.
Early C was barely more than a macro assembler, and the most common C bugs were/are (a short example follows this list):
Unexpected expansion of the macros
Array bounds violations, because there was no compile-time or run-time checking: purely NUL terminations. Both Modula-2 and VB6 had a far better way of doing strings. C++ also had a better way of doing strings from about 1988, but it was mostly ignored.
Libraries buggy or misunderstood.
No strong typing, issues with parameters, inappropriate casts (Solved and ignored in C++, solved better in Modula-2).
Inappropriate use of pointers (dereferencing pointer to unallocated RAM, arithmetic etc).
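As a deliberately buggy sketch of that bounds/termination point (not from any real codebase):

/* Nothing stops the copy from running past the end of dst, and everything
 * relies on src actually carrying a terminating NUL.                      */
#include <string.h>
#include <stdio.h>

int main(void)
{
    char dst[8];
    const char *src = "definitely more than eight bytes";

    strcpy(dst, src);                 /* classic: silent buffer overflow   */
    printf("%s\n", dst);

    /* The "fix" many codebases reached for is only half a fix: strncpy()
     * stops at n bytes but does not NUL-terminate on truncation.          */
    strncpy(dst, src, sizeof dst);
    dst[sizeof dst - 1] = '\0';       /* the terminator must be added by hand */
    printf("%s\n", dst);
    return 0;
}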
Big failing of C++ was AT&T's insistence on backward compatibility. Stroustrup didn't want it and it has been crippling ever since.
Backward compatibility is a strength of C++, not a weakness.
@Dan 55 I wholeheartedly agree.
Thanks for the video link. The bit where he talks about code from 1996 made me smile: we still have code from 1995 in our codebase, which has been compiled nightly through multiple iterations of our C++ compiler without requiring change.
But when changes have to be made (not because the C++ version is newer) in code that has not been touched in over 20 years, you are totally on your own, as none of your workmates have a clue about it.
>The 8088 / 8086 was really a superset of 8080, hence the awful 64K segments and no 16 bit flat addressing like all the true 16 bit cpus in early 1980s.
Blame the IBM PC, which used the Intel 8086 in preference to Motorola or National Semiconductor (whose architecture and instruction set was perhaps the best of the three), and the rest is history.
Agreed, you are technically correct: the 8088 was the 8-bit-bus, 8080-chipset-compatible version of the 8086.
Funny to think that back then the upgrade from 8-bit to 16-bit architecture was as big a jump in circuit board complexity and cost as moving from 16 to 32-bit and then 32 to 64-bit.
However, the point is IBM chose Intel...
Although the Motorola 68000 was available around the same timeframe as the 808x processors from Intel, the corresponding peripheral chips weren't. And the 68008 (used in the QL), which allowed the use of some 6800 peripheral chips, was a few years later still.
The team creating the IBM PC were trying to work under the covers, because the IBM leadership were worried about a functional desktop computer undermining the sales of their other systems. So the PC team were not able to leverage IBM's name to make chip suppliers change their delivery schedules.
As a result, they went with the 8088, which allowed them to use a lot of existing, off-the-shelf 8-bit peripheral support chips from both the 8080 and Z80 families, which were available, cheap, and widely understood. And you have to remember that the systems they thought they would be selling against were the Apple ][ and its follow-up systems.
It made sense to them at the time, even though in hindsight it was the wrong decision.
Of course early C compilers and interpreters were also coded in assembly, which was still pervasive for another couple decades. And why not? It was useful to have a portability layer, but those layers didn't strictly NEED to be implemented portably themselves.
Assembly is still useful to know, but a bit masochistic if you are not hacking away at bits of existing code, or flashing chips IMO.
"early C compilers and interpreters were also coded in assembly".
Nope. The earliest C compilers were written in C. It's possible, but highly unlikely, a few masochists chose to write these in assembly because they didn't know any better.
Read this article by the bloke who invented the language (amongst one or two other things):
https://www.bell-labs.com/usr/dmr/www/chist.html
Not strictly true. If you think of C as a sort of upscale macro assembler then you can bootstrap C by writing assembler macros. This was the way that the original compilers were made -- implement a subset using whatever means were to hand and then use that subset to implement a more complete compiler. Rinse and repeat until you have the complete compiler.
Once you've got Compiler Version One then its straightforward.
Whether that's necessary depends on your definition of "compiler".
You write a minimal compiler in a subset of C for the same subset of the language, hand translate that to assembly — it can be quite straightforward — and assemble that.
That bootstraps the minimal compiler. Compile it with itself and confirm the assembly it produces is valid.
Now add more features, still using the subset in the compiler itself, and compile it with itself again. And so on.
Is the human who translates the initial compiler also a compiler? Or do you use that term only for a program? It's a question of definition, and people disagree. But you certainly don't need to create a compiler for the entire C language in assembly or some other HLL in order to create your first C compiler. You can start with a very small subset of the language. If you're willing to allow certain deviations from the standard, you can start with even less.
Then, of course, once you have a working compiler you can split it into a portable front end and just write the code generator for each new platform, as first pcc and later GCC did (as well as various others).
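To make that "start with a tiny subset and grow it" idea concrete, here's the flavour of a stage-one bootstrap: a toy "compiler" that turns expressions like 1+2+3 into stack-machine ops, itself written using only the kind of C subset (ints, chars, if/while, plain functions, character I/O) that a hand-translated first pass could plausibly support. Purely illustrative; not any historical Bell Labs code.

#include <stdio.h>

int look;                              /* one character of lookahead        */

int next(void) { look = getchar(); return look; }

void emit_push(int digit)
{
    printf("    PUSH %d\n", digit);    /* "object code" for a stack machine */
}

void compile_expr(void)                /* digits chained with '+' only;     */
{                                      /* no error handling, like many a    */
    if (look >= '0' && look <= '9') {  /* stage-one compiler                */
        emit_push(look - '0');
        next();
    }
    while (look == '+') {
        next();
        emit_push(look - '0');
        next();
        printf("    ADD\n");
    }
}

int main(void)
{
    next();
    compile_expr();
    printf("    PRINT\n");
    return 0;
}

Feed it 1+2+3 and it emits PUSH 1, PUSH 2, ADD, PUSH 3, ADD, PRINT. Hand-translate something of that shape to assembler once, and every later, richer stage can be compiled by its predecessor.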
I believe it was initially written in "New B", which was written in B.
I can't remember whether B had its own bootstrap compiler written in assembly or relied on BCPL for that, but I think it did because, from my hazy memories of what I read, B was created to fit a BCPL derivative into the cramped memory of the PDP-7.
You do realize that every OS has bits written in assembler, yes? For example, the entire arch directory of the Linux source tree. This is because there are ISA features, particularly supervisor-mode instructions, for which the high-level language has no matching construct. There are still many hardware features that aren't implemented in the language, but are rather handled by intrinsic functions, for example SIMD instructions. So, actually it is still ESSENTIAL to know, at least for those not working exclusively with user-mode code.
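For instance, a minimal sketch of what "handled by intrinsic functions" looks like, assuming x86 with SSE and the standard <immintrin.h> intrinsics:

/* C has no "add four floats at once" construct, so the compiler exposes
 * the SSE instruction through named intrinsics instead.                  */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    __m128 va = _mm_loadu_ps(a);       /* four floats into one XMM register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb);    /* one ADDPS instruction, four adds  */
    _mm_storeu_ps(r, vr);

    printf("%.0f %.0f %.0f %.0f\n", r[0], r[1], r[2], r[3]);
    return 0;
}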
I don't see any indication of intended humour, so I'll feed the troll...
APL is darn near the highest level of the programming languages from long ago. One character to multiply matrices. (OT: I worked with Ken Iverson's nephew for a few years.)
I also wrote a RAM disk driver for CP/M. You do that by taking the assembler source to the existing disk driver for your system and modifying it as needed. I don't recall if the assembler source for the rest of the system was available from Digital Research or not.
CP/M was written in PL/M, which suggests it was developed to be a micro-computer version of PL/1. The original PL/M compiler was written in FORTRAN, and the source code was put in the public domain a few years ago.
A lot of early MS-DOS applications were written in Pascal, with the compiler running on a VAX. I remember a review of a circa 1982-83 MS Macro Assembler where the reviewer said it seemed to be more Pascal-oriented than 8086 assembly code oriented.
The 8086 was designed to make it easy to translate 8080/8085 assembly language to 8086 assembly language. The 8080 was designed to make it easy to re-use 8008 source code, so the latest and greatest Intel processors are still carrying baggage from the 8008, announced in 1972.
>>the latest and greatest intel processors are still carrying baggage from the 8008
Wasn't the 8008 an 8 bit version of the 4004? So the baggage actually comes from an even more ancient CPU... though I did read once that the 8008/4004 microcode has been expunged from modern Intel CPUs.
This approach (maintaining old tech in newer iterations) isn't limited to CPU design though - I learned that Mitel MCD (as it was called when I was installing it) was actually a wrapper around a virtualisation of their earliest (physical hardware) switch - the hardware worked so why change it? convert it to a VM and what could possibly go wrong...
That approach explained the programming interface basically being a Kermit session to the now-virtual hardware... which made replicating programming between remote switches rather pedestrian.
From the article:
This is not intended to be a comprehensive list. There are too many such efforts to count, and that's even with intentionally excluding early OSes that were partly or wholly written in assembly language.
I think they know about them (and so do we now, thank you!), but them there presses were sitting impatient to go.
My first job after graduation was doing RSX-11M/M+/S kernel software development in PDP11 assembler for DEC's small development team in the UK. Fortunately, after a year or so I moved to VMS kernel code - not only was MACRO32 much more powerful than MAC11 but almost all new VMS kernel development was happening in BLISS32 by then.
However, my first programming job (in my year off before university) was in APL! Now there was a unique language. Although that wasn't kernel code.
>Those were the days when IT was fun and there was always something new happening.
Such negativity. Why, these days after decades of research and hard work we have mandatory activation, always-on DRM, the ability to patch applications and OSs whenever the developer feels like it without having to wait on the pesky user... almost every day, we get a whole new radical paradigm in UI design, whether it's flat, flatter, monochrome, greyscale, slightly-less flat... heck, we've even managed to accelerate and streamline applications by renaming them "apps". What's not fun about all that, you cynic??
Have a beer and join me building my time portal back to about 1992, won't you?
An OS written on Prolog would just be, well, "diseased" is the kindest word I can think of!
Take a look at the Japanese Fifth Generation project. The idea was that the resulting machines would run Prolog natively.
I was at the conference in Bristol in the early 80s when they did their big presentation over here, and got to talk with the people who'd actually be doing the research(*). I was fairly cynical about the idea so asked them if they thought it would work. Their reply was on the lines of "Who cares? We've got 5 years of big funding and will get another 5 years unless we totally screw up in the first 5. It's a government initiative so will never be deemed a failure."
(*) As opposed to the nominal heads who were basically political academic players and government placemen, and too busy glad-handing their UK and European opposite numbers to speak to low-lifes like me.
>The idea was that the resulting machines would run Prolog natively.
Back in the 80's people weren't so fixated on one chip architecture, so you had chip designers building chips to support high-level languages, including the "AI" languages, which naturally had a different architecture to chips designed to run conventional languages like C.
Obviously, if you look at the development of the Intel x86 family you see in each generation it provided better support for structured stack-oriented procedural languages (i186), OS's (i286) etc.
Linking to the related article on C and Rust/Swift, it would seem that Rust/Swift et al need to get closer to the chip and ensure the silicon and microcode directly supports key language features.
Rust is about as close to the metal as C is. It just imposes more restrictions on the programmer, and provides more syntactic sugar and abstractions at the library level. But it runs comfortably in the same environments as C does.
For an OS, the reduced need for serialization of execution provided by Rust's object-ownership guarantees (which are imposed at build time, not run time) should overcome any incremental overhead introduced by good Rust programming practices.
For end-user applications, any performance difference is almost certainly irrelevant, because the vast majority of them will be I/O-bound.
For stuff in the middle -- execution environments and libraries and subsystems and whatnot -- some of it will be CPU-bound and performance-sensitive, but again most of those things will get more back from the reduction in locking than they'd lose from Rust's level of abstraction, which is not, after all, much higher than that of C, much less that of C++.
Fucking Modula 2. It's soooo close to Pascal, but there are little quirky differences, you know? In the early 90's, I was doing anything and everything in TurboPascal on DOS. It was my mother tongue and I had near total mastery of the language. Then in the mid 90's, I took two Modula 2 classes at Uni, and that was enough to corrupt my previously pristine Pascal knowledge. Suddenly my old Turbo friend was making hurtful comments about my code, implying I had done something syntactically wrong, and refusing to compile. It was that bloody Modula 2 that had wormed its way into my brain, and would peek-out at unexpected times and places in my code. I never did fully recover from that and wound up chained to the VB grindstone for a couple of years.
I mean, Modula-2 is a great successor to Pascal, but it does make things harder if you try to go back to Pascal.
Hum, Modula 2 was Pascal done proper. Real I/O and the ability to pass functions (and other stuff) as arguments, plus a whole load of cleaning up of the syntax. One of my favorite languages. What do we (almost) all use now? Oh yes, pathetic JavaScript and PHP, the twins from hell. Now there's progress...
The block structure of Modula-2 is still my favorite. No BEGIN or { except for function- and module-level blocks. A minor flaw was the adding of the function/module name to the function/module END keywords. That made refactoring needlessly complicated, and setting them apart for better error generation could also have been achieved with END FUNCTION / END MODULE.
My main grudge was the lacklustre string support. It was as bad as C. Static arrays or pointer to chars via a library construct. Ugh. Fine for embedded work, but for anything up the application tree, it was cumbersome
I agree. Basically, Pascal was a rush job designed in between April when Wirth threw his toys out of the pram and resigned from the ALGOL-68 committee and the intake of new graduates in the Autumn.
But it made too much of a splash, and Wirth was never able to really popularise the "done right" version that was Modula-2.
What is not generally known is that Borland had an 8-bit Modula-2 that they decided not to sell themselves but licensed to IIRC Echelon (Ciarcia's company). However they didn't license the documentation, which meant that it was essentially unsellable.
The 16-bit implementation became TopSpeed, published by JPI and later Clarion.
It ran Forth Programs on a Z80, but I don't know what its minimalist OS was written in.
I wrote an almost-OS in Modula-2 for x86 to run a game engine. It only used DOS to load, and had simple co-operative multitasking using coroutines. I wrote drivers in M2 for keyboard, CGA, Hercules, EGA, VGA, SoundBlaster PCM & MIDI and audio on the PC speaker. It could do MIDI music, sound effects on the PC speaker, SB audio, scroll the background, move foreground tiles, animate sprites, handle multi-key input and read the HDD all at the same time. Performance was as good as or better than assembler-coded games. Though only using 8086 compilation, it needed at least IBM AT-compatible hardware rather than XT, for various HW timers, DMA etc.
Sadly getting artwork done was too much work.
It was, and probably still is used to control large telescopes.
And the BBC micro had a full fledged version:
https://www.retrogames.co.uk/016223/Other-Formats/Forth-On-The-BBC-Microcomputer
These guys still support it with some high quality, native compilers:
https://www.mpeforth.com/software/
> I don't know what its minimalist OS was written in.
In Z80 assembler, like its Sinclair cousins.
I say cousins because it was designed by Richard Altwasser (hardware) and Steven Vickers (software) after they finished working for Sinclair.
A fully fledged FORTH implementation does everything including its own disk I/O.
I wrote a FORTH interpreter/compiler whatever you call it for the 8086 in assembler and it came in at under 8KB - which at the time was a big plus when memory was more expensive than programmers.
FORTH - quirky definitely, elegant yes in lots of places, but maintainable, usable .... not so sure :)
I wrote most of Kuma Computers FORTH implementations for things like Sharp MZ80K MZ80B, Newbrain, Tatung Einstein, Osborne 1, Tulip computer, Atari ST, Commodore Amiga and several other Forths for 68k series boards and some military machines as well as for IBM PC.
All were written in Forth (using inbuilt compiler and assembler or even cross-assembler when moving to a new platform).
Biggest mission critical application was my Walton Hindsight RARE/ARR radar recorder in use at CAA for many years in the 1990s and 2000s used to record air traffic movement and Comms. Ran on multiple 68k machines and can be seen working at the National Computer Museum at Bletchley Park.
Not joking, the early versions of PR1MOS, the OS for Prime machines, were written in Fortran 4 (with a few system-ish hacks like a predefined common block that was all of memory, and assignments missing left or right sides to set or get the accumulator). Later versions were in PL/P, a systems programming dialect of PL/I.
There's a reference on Wikipedia to an operating system written in COBOL. While I can find other sources for the existence of the product and its focus on running COBOL applications, I'd be interested to see some confirmation of the extent that COBOL featured in the OS, particularly given the variety of peripheral hardware it apparently supported: I don't fancy writing a device driver in COBOL...
"As for C well Primes did not have byte addressing, Prime ASCII had the top bit set and null pointers were not 0."
Byte addressing was possible. The original architecture had 32-bit pointers which addressed down to the 16-bit "half-word", and 48-bit pointers where the additional 16-bits contained a 0 or a 1, to reference a byte within a half-word (!)
As best I remember (it's a while ago) there was a new addressing scheme introduced - specifically for the later versions of the C compiler - which allowed for byte addressing in 32-bits. I forget the fine details, but they can probably be found on one of the sites run by ex-pr1me-mates.
The null pointer thing is perfectly legitimate in C, but made porting C programs which tried to treat pointers as integers a little difficult.
Other challenges included segmented memory where addresses wrapped at segment end rather than rolling into the next segment, and lack of native byte-based I/O (again, the natural unit for any manipulation in Primos tended to be the 16-bit half-word).
Porting C to Primos was generally hard and often unsatisfactory as I recall - it was a good test of whether your C was actually clean...
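For example, here's a sketch of the kind of code that broke. C only promises that a constant 0 in pointer context means "the null pointer"; it does not promise that the null pointer is represented by all-zero bits, which is exactly the assumption a machine like the Prime punished:

#include <stdio.h>
#include <string.h>

struct node {
    int value;
    struct node *next;
};

int main(void)
{
    struct node n;

    /* Fine on any conforming implementation: assigning and comparing with 0. */
    n.next = 0;
    if (n.next == 0)
        puts("comparison with 0 works everywhere");

    /* Not portable: memset() produces an all-zero bit pattern, which on a
     * machine where null is not all-zero bits is NOT a null pointer.        */
    memset(&n, 0, sizeof n);
    if (n.next == 0)
        puts("only true where null happens to be all-zero bits");

    return 0;
}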
PL/P and SPL I remember as being quite nice - PL/P was a little challenging as it was intended for kernel use, so had some interesting limitations (and a quirky compiler based on old technology), while SPL (more for user-space code) was almost identical to PL/1-G (and I believe used the same compiler technology) but using library calls rather than language syntax for I/O.
I never saw any of the FORTRAN code, which is probably a good thing. Later on, some system code was written in MODULA-2. I don't believe C was ever used significantly internally - it was provided for porting software (for example to make Oracle available on Prime) and for customer use.
When I was working at a large teaching hospital, I managed a lab with a DEC ecosystem (started with RT-11, then some sort of multi-user version of RT, then RSX-11m), and did most of my coding in FORTRAN (and assembler). A friend ran a similar (larger) lab that ran UNIX on a VAX. His productivity was about 3x mine, writing mostly in C.
Then I found out about RATFOR, a C-like syntax, with a pre-compiler, for FORTRAN. I got a version at one of the many DECUS meetings (ah - the days before software patents!!) when we all shared code, and it greatly sped up my productivity - and code readability!
One of the many varieties of OS / language from the 60s, 70s, had to win, it just happened it was C and UNIX.
What really cemented it though was the US DoD, in the 1980s, effectively plumping for the UNIX and C with POSIX. That cast an official US governmental seal of approval on C and *nix style operating systems, and that was that. At the time it was rapidly becoming the de facto choice anyway, but after that there was no money to be made in developing something radically new and trying to sell it to what was at the time one of the biggest customers in the market.
Actually, the US DoD plumped for Ada in the 1980s. C was a decade earlier.
GNU, BSD etc, and later Linux, were born because the developers of UNIX thought AT&T unfairly held the copyright. A lot of it wasn't paid for or developed by AT&T.
POSIX is a separate thing from UNIX.
DoD didn't mandate Ada until 1991, but granted a ton of exceptions. Not many projects actually went ahead with Ada, being based on C / Posix instead. The "Ada" mandate lasted 6 years only.
In 1988 POSIX effectively defined the standard against which one had to write C code to be portable as per DoD's requirements (which is what the DoD actually wanted). By 1992 POSIX was also specifying command shell and command line utilities. POSIX and The Single Unix Specification merged in 2001. At the end of the 1980s DoD also mandated open hardware standards for various classes of system, plumping for VME (which is still supported to this day). ADA, where it was used, had to fit in on top of all this.
It depends on what you mean by "Unix". These days, something can officially be labelled "UNIX V7 (tm)" if it passes the Open Group's compatibility tests against POSIX:2008 2013 edition, aka Single Unix Specification v4. Note that OSes can meet these requirements even if their native shells, APIs, etc. are not unix-like in anyway whatsoever.
Pretty sure Linus did Linux for fun / university, originally, not because he was passionate about the corporate doings of AT&T. FreeBSD was the open re-write project that was motivated to wrest "Unix" out of proprietary control. It came out at about the same time as Linux. Both ended up achieving the same end result.
That's just not true.
Unix and C won in the 1980s because (a) they were far superior to the alternatives; (b) vendors could port Unix and C to their hardware far more quickly and cheaply than developing and maintaining their own OS and compilers; (c) the market was moving to desktop workstations and these mostly ran some variant of Unix; (d) customers and developers really liked the concept of open systems (=> no vendor lockin through custom APIs and supposed ease of migration between vendor platforms); (e) Unix IPR was open to any vendor on equal terms because it was owned by AT&T which wasn't allowed to be in the computer business and therefore couldn't use Unix to eliminate competitors.
The DoD did insist on POSIX compliance - sort of. But they didn't enforce this. Not that (lack of) POSIX compliance hindered sales to the DoD from Microsoft, DEC, IBM, DG, HP, Burroughs etc. They still bought shitloads of non-Unix mainframes, PCs and minis from those vendors.
Back in the 80s, it was considered witchcraft that the same source code could compile and run on different hardware and software from the likes of DEC, Sun, HP, IBM and the rest. Today we take that for granted. Unix and C made that possible.
The DoD mandated POSIX (C is part of that, UNIX isn't) This is why MS gave Windows NT a POSIX-compliant API.
Heh, I do agree but...
"bring out your dead, bring out your dead"
I can't quite yet. We still need our dead to run the entire industry ;)
Not to mention, dead stuff is always more stable. I.e you wouldn't exactly build a house out of *living* people would you?
I wouldn't build a house out of dead people either. It would get rather smelly and unhygienic.
Mind you, I suppose that invites the philosophical viewpoint that when you die, your body doesn't... it's just becomes a somewhat different sort of "living" - more of an ecosystem.
This article is the equivalent of the era of bubonic plague(*).
In some cases it's more a case of:
Your father's OS. An elegant system for a more … civilised age(**).
Multics, Burroughs' MCP and Genera all had features that today's mainstream OSes lack or do badly.
(*) First known bubonic plague outbreak was ~3000 BCE, it's still around today. Long era.
(**) ObXKCD.
"all had features that today's mainstream OSes lack or do badly."
Once upon a time it was a fairly easy job to undelete a file, just as long as you had your "oh shit" moment right away.
All these fancy dodahs these days, nuke a file and it's gone for good. Delete a folder by mistake and everything vanishes in a microsecond with no way of undoing it (and since you don't have low level access to anything without jailbreaking, you can't even try to retrieve whatever might remain).
Yup, progress.
"For fun and profit" is an old trope(*), much used. Pr1me's manual for their text formatter used to have examples involving the "Raniburger Corp" handbook "Frog breeding for fun and profit".
(*) And much debated, see Stack Exchange's thread on it.
Xerox Parc machines could and did run Smalltalk-80 all the way down to the bare metal.
For a brief moment HP (nearly) had a pure Smalltalk-80 box running on a NatSemi 32032. I had a prototype to play with under NDA, and for the time it was a rather fine, if specialist, workstation. However, it developed a hardware fault, so they took it back on a Friday and by Monday HP had always been at war with Eastasia, had never had a Smalltalk machine and had never even considered developing such a thing. If it hadn't been for the h/w fault I might have managed to keep a non-existent HP ST-80 box as a home computer.
I had an early NS32032 machine ~1983, though I never got Smalltalk running on it (I wrote a very simple 3D interactive solid modeller on it, terrible performance).
On the other hand, I had Smalltalk running on ARM before more than a couple of hundred people knew about ARM. And ~25 years ago I was working on a Smalltalk medium-hard real-time OS on custom ARM hardware. It worked quite well, and was intended for a consumer network/media system that would have given us all the useful parts of IoT in the mid 90s. If only Bill Gates hadn't stuck his foot in it...
Others have made Smalltalk sorta-OS systems too. So much nicer.
I think the core Lisps were generally built around SECD machines, which I guess you could easily implement in machine code (CAR and CDR were taken from the machine code of the IBM 704), or equally simply create a hardware implementation of. Symbolics made Lisp machines with that in mind but never quite got round to doing it fully in silicon.
The scariest ever and now dead OS from the 80s was the thing that ran on Symbolics Lisp workstations.
Genera was wonderful, a really good OS that I loved(*) with a proper space cadet keyboard too. Shame about the hardware's need for cooling though, not so much fan noise as fan jet noise.
(*) Happy fun times back then, I had a Sun workstation, a Symbolics and the non-existent Smalltalk machine(**) on my desk(s).
(**) See other comment for details.
Maurice Wilkes used to introduce his LISP lectures saying it stood for "Lots of Irritating Superfluous Parentheses".
All (untyped) languages aspire to the condition of Lisp.
Historically I'd say that Lisp was the test bed in which we as a profession worked out exactly what the hell it was we were actually doing when we designed and used computer languages. If you look at all the debates which came up in the Lisp community - scope, dynamic vs static binding, downwards funargs, upwards funargs, closures, single vs dual namespaces, continuations, macro hygiene, message sending, purely functional vs stateful programming - they reflected design and implementation choices that occur in all languages, but which could be tested rapidly in Lisp because it is such a simple and flexible language. As such it's in many ways the Ur-language from which all other languages have sprung even if it's always been a minority sport in real world use.
Lisp is the only truly beautiful programming language I've found. It's pure elegance. In the last year since starting to use it, I've learnt so much about being a better programmer from it that I missed in years of C-like languages, and I'm not even using it in an especially idiomatic way.
What really shocked me about it was finding out it originates from the 1950s. It's the polar opposite of Fortran, the only other language of the time. You're right that it was used as a test bed for just about everything modern languages have, and it had an alarming amount of it from early on. It's hard to think that garbage collection was being added to Lisp only a year or two after the modern "if" was invented (in Lisp, naturally). It brought so much innovation to the table all at once. Like the Citroen DS of programming languages.
I'm surprised AI has moved away from it. Its effortless self-modification capabilities would lend itself well to modelling ML concepts would it not? Not my field, not my problem.
I'm fortunate I'm able to use it at work, and it's been a real blessing. It's a shame I probably won't be able to use it more directly in my later career.
You can create OS and applications in any language really.
But some lack the dirty aspects of memory management (and even lower like CPU protection switching) so they need bits in assembler or something C-like. Equally such low-level code has enormous potential to screw the system.
Horses for courses really, and most decisions come down to what will do this job well enough to get by.
Lawrence Livermore needed an operating system with more features than what was offered by the vendors of supercomputers at the time - classified level support, time-sharing, etc. So they rolled their own. LTSS for the CDC and Cray machines, CTSS for Cray hardware, and later a new version of LTSS - NLTSS for Crays. All of these systems, utilities, support libraries were written in the local dialect of Fortran-77, later Fortran-90.
All of the systems used concepts developed for Multics, but were heavily modified to suit the needs of the laboratory computing community. One notable feature was that the mapping of program memory to disk (aka a core dump) was also used for swapping out when the time slice expired, AND was restartable.
Eventually these systems were replaced by Unix derivatives - notably Linux.
Please tell me more. A friend worked at Lawrence Livermore and once mentioned an OS, written in Fortran, for a ModComp computer. Would that have been related?
I do know that he worked in the not-networked systems, which were used for unclassified and highly classified jobs. Not at the same time, of course. Context switches involved physical removal of all storage media, and armed guards.
Yeah, right. My ass.
Here's the thing: people who write C or C++ software don't feel the need to spam the world with announcements, pronouncements and self-promoting spam. They just do shit, quietly. It happens to be shit of consequence.
But, there's a small community of Rust fanbois who are just very loud. And that's because that part of the world where consequential stuff happens doesn't pay much attention to them.
So, Rust wants to stick its tail into the Linux kernel. Wants to hang out with the bro's. Respect Mah Authoritaaaaay!!! [0]
How did that go? Whoops! Thar she blows. Something about operating systems, ABI's and calling conventions.
Rust's reaction? Fuck C!!
And it's not even because Rust has something concrete to offer to the Linux kernel. The prevailing advantage of Rust over C that's being pushed around by fanbois is: it's safer to write in Rust than in C. Translation: Rust makes an inept C pointer boob less of an inept boob. And that's somehow reassuring?
It's just an ego trip. Rust wants to be in the Linux kernel because it thinks the Linux kernel is cool, so that's where Rust wants to hang out. With the cool kids.
Know thy limits, Rust. You're just a pile of marketing talk being pushed by Microsoft and Google, in their quest for the complete monkeyfication of the software engineering profession.
-----
[0]: Cartman. See: South Park
Have a beer sir. Sorry about the icon but it was the nearest I could get to oxidisation.
When I was reading the article I just knew there'd be a 'rust angle' in it somewhere. It wasn't _really_ about C was it.
I think we're going to need another icon before too long, rust gets way too much air time here, it's in danger of becoming the new systemd.
I've been (slowly) reading the Rust doc. I just want an understanding of what it does at the lower levels - haven't got far enough yet. What *is* "borrowing"? The language appears to not have any kind of struct with named elements - perhaps that's added with their macro stuff. They seem to think that declaring a local variable of a given type a few lines after a previous declaration with a different type, thus shadowing the old one, is a *good* thing. WTF???
> What *is* "borrowing"?
Semantically, it's locking a compile-time reader-writer lock that gets completely eliminated during the compilation process. (`&` is shared reader, `&mut` is exclusive writer)
Implementation-wise, it's taking a pointer to something.
> The language appears to not have any kind of struct with named elements - perhaps thats added with their macro stuff.
Could you clarify what you mean by "struct with named elements"?
Here's some runnable example code for the cases I imagined you might be referring to.
> They seem to think that declaring a local variable of a given type a few lines after a previous declaration with a different type, thus shadowing the old one, is a *good* thing. WTF???
It's a trade-off they considered acceptable, given how strict the type system is, in order to enable things like:
let req = HttpRequest::new();
let req = req.begin_streaming(); // `req` no longer has methods to set headers
(The "typestate pattern", where changes in type are used to implement a compile-time verified state machine and the borrow checker is used to prevent you from holding and using references to stale states.)
That has to be the silliest straw man I've read for some time.
Rust can call C and C can call Rust. It's not even hard to do. There are even explicit language constructs for it so that function names aren't mangled and structs are packed the same way. It can even be used in kernels and against bare metal. That is not "fuck C" by any stretch, since if you have C code that works you can just call it. If you have new code in Rust you can call it from C. I call OpenSSL all the time from my Rust code. No biggie.
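To illustrate, here's the C side of that kind of call. The function name rust_add is made up for the example; the point is that the Rust side would declare it roughly as #[no_mangle] pub extern "C" fn rust_add(a: i32, b: i32) -> i32 (and use #[repr(C)] on any shared structs), so the symbol is unmangled and the calling convention and layout match C:

#include <stdint.h>
#include <stdio.h>

int32_t rust_add(int32_t a, int32_t b);   /* resolved at link time against
                                             the Rust static/shared library */

int main(void)
{
    printf("%d\n", rust_add(2, 3));       /* prints 5, assuming the Rust
                                             function really does add       */
    return 0;
}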
So if you love C so much, then that's fine, please continue using it. Nobody is forcing you to write in Rust, even if the reasons others may do so are obvious and self-evident - i.e. fewer bugs in compiled code, more scalable thread-safe code.
Just keep writing C. Or you could go on a ridiculous ill-informed tirade every time Rust is mentioned. Somehow I expect it'll be the latter.
> Translation: Rust makes an inept C pointer boob less of an inept boob.
Translation: Rust significantly reduces the likelihood of the class of errors which has given rise, and continues to give rise, to almost all security vulnerabilities in software systems.
Apparently this is a bad thing.
> Apparently this is a bad thing.
It's not a bad thing -- for copy-pasta types who fashion themselves as "programmers" ... no, that is too generous ... "coders" who will never ascend, nor even aspire to ascend, to any sort of level of expertise or excellence in the field.
c.f. web programmers (not that your bog-standard web programmer could be arsed to learn Rust).
Ouch "C fans tend to regard BCPL as an unimportant transitional step"! BCPL has more expressive flow control than C, it's compiler is designed for easy portability to new architectures, and it's closer to the machine for systems programming. Back in the day I've worked on compiler and run-time library ports across multiple target platforms that allowed a single application code base to work seemlessly across them all. Ah, nostalgia ain't what it used to be...
Here's an interview with one of the developers of TRIPOS, written in BCPL, which he ported to the Amiga (i.e. AmigaDOS). It also goes over what he did afterwards.
I've never programmed in BCPL but code samples look like it would be easy to parse & tokenize into an AST and spit out machine code. Far more so than C. That's probably why it enjoyed some early success, especially when C was kind of an ad hoc mess until ANSI C put some semblance of portability onto it.
Once that happened the reason for BCPL kind of died with it. I know AmigaOS 2.0 began the migration to C and I wonder what compiler they used. I remember using Lattice/SAS C on the Amiga which was pretty good but there was also an Aztec C which wasn't so great.
IBM's mainframes (VS1, MVS, VSE, the original DOS, z/VM) running on 360/370/390 hardware were not written in C. 40 years ago they were written in assembler and PL/S (a language with PL/1-like syntax). This evolved into PLX, which is used these days.
These days application type stuff like web servers (and TCP/IP) may be written in C and Java. Even if you wrote in "Metal C" which was closer to the hardware, you still have to drop into assembler to do the hard stuff.
C has a rich library of functions like printf(); the assembler and PLX had equivalents, but they were not so easy to use.
Hi Colin,
You commented whilst I was adding mine.
Metal C basically spits out assembler, which gets assembled. We've thought about it, due to the increasing number of instructions that appear with every iteration of IBM hardware which the Metal C guys (and gals) save you the bother of thinking about.
I write a whole load of C for mainframe these days, but remain convinced that the string functions are the work of the devil in terms of good practice. Also, I imagine that any reputable software house had some analogue of most of the functions in the C runtime 30 years ago. I'm still using a printf/sprintf()-like thing that I knocked up 20 years ago on OS/390 run under the Hercules emulator.
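For what it's worth, such things usually boil down to something like this once you have a vsnprintf() to lean on (the name log_printf and the buffer size are illustrative, not my actual routine):

#include <stdarg.h>
#include <stdio.h>

static void log_printf(const char *fmt, ...)
{
    char buf[256];
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(buf, sizeof buf, fmt, ap);  /* format into a bounded buffer */
    va_end(ap);

    fprintf(stderr, "LOG: %s\n", buf);    /* route it wherever you like   */
}

int main(void)
{
    log_printf("job %d ended with rc=%d", 42, 0);
    return 0;
}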
The big problem in the interesting end of the mainframe world is that the people doing it are all getting old, and not getting replaced.
Just the other week, I wrote a routine in assembler, to add a little function as a product enhancement to things running on an OS mainly written in assembler.
I didn't see it mentioned in either the article or the comments when I posted, and it might not be the most common in terms of the number of machines that run it but it is a common thing when you consider how many financial transactions run through it.
Ladies and Gentlemen, I give you - z/OS.
MPE was a very successful business (as opposed to scientific and technical) computer operating system.
https://en.wikipedia.org/wiki/HP_Multi-Programming_Executive
http://www.robelle.com/smugbook/classic.html
MPE had a rough start, but after a couple of years it developed into a rock-solid OS, loved by loyal customers of the corporate world.
It was in fact a kind of economic mini-mainframe which could connect thousands of end user terminals for transaction processing, email, order processing, manufacturing management and the like.
It was implemented in a kind of Pascal.
MPE would still be in use if customers had to make the call, because it was so reliable and secure.
There's a passing mention of the HP 3000 series in there, under things Burroughs inspired, but I'm afraid I just know next to nothing about them.
I'm not claiming that entire piece was written from memory -- I did a lot of fact-checking -- but most of it springs from my own research over the last few years.
I'm very glad to see that people seem to be enjoying it.
The MPE operating system of HP3000 minicomputers was written in SPL. This was a perfect match to the stack-based architecture, so it was easy to write compact, fast code commensurate with the limited RAM available in those days. Hewlett-Packard also made the HP1000, which on paper was faster hardware but, running their version of Unix (HPUX) and programmed in C, was a dog compared to the HP3000. My language of choice is now C++.
I loved writing stuff in SPL for MPE V on my Series 39. The intrinsics were very well documented. Not quite the "DEC grey wall" but not bad documentation.
I never did try MPE XL as all my HP9000 PROMs are configured for HPUX. There is a way to unlock the PROM so you can boot either one allegedly.
Happy days wrestling with my first multitasking operating system. A quantum leap [pun intended] in possibilities for me having just progressed from a hand-coded assembler 'monitor' program running on a home-brew Z80 machine.
I was super impressed but didn't get too far; however, I believe a smarter guy than me did:
If you're going to go down the 'this is a C++ operating system route' based on some parts of the OS, Windows is also a C++ operating system.
Sure, the Windows kernel API is C based, but if you want to write interface components or drive various data querying and transport objects you'll fire up a C++ compiler if you have any sense. That's how the interfaces are specified.
The difficulty I have with Rust and others is the fundamental question 'is it doing anything cool?'. The answer to the end user is generally : no. Yes, security professionals and some programmers love it for excellent reasons, but a quick look at the available Rust OS offerings shows they seem to be stuck very firmly in architecture research rather than day to day usability.
BeOS was a terrible OS[1], but at least it tried to do some cool things at the time, and definitely managed to do so with the BeBox.
It shouldn't be news that security, backup, and networking that doesn't fall over tend not to be selling points. What's on offer has to be a tangible improvement, and in general the record of C based OS on security and memory protection has improved significantly from the past.
[1] At the time it came out for x86 I was running OS/2, which had multitasking, better networking, printing that actually worked, multimedia codecs that were somewhat better than those in BeOS, hardware support that wasn't completely desperate, and even OpenGL.
The Win32 APIs are C, not C++. I've written plenty of software where it's basically processing WM_ messages from a loop.
The likes of MFC, ATL, QT, wxWidgets etc. are C++ but they're still using Win32 underneath it all.
These days Microsoft have semi-deprecated Win32 for UWP. And while you can still write C++ with UWP it has a mess of extensions to the language to do so. So IMO it's not really C++ per se.
As for Rust, there is huge interest in it from bare metal development all the way up to servers. I'm only aware of one kernel in Rust, Redox, but the progress made on it demonstrates it is more than capable in that role. It's also used for webassembly development, webservers, databases, crypto, embedded devices, IoT etc. etc. Probably its weakest area at the moment is actually in UI development. There are plenty of frameworks (eGUI is very cool), but nothing that comes close to something like QT.
Here are two questions to which I've never received a clear and concise answer and it relates to this article.
First, how is a new CPU or other processor first programmed? Going back in time, how did Intel get the 4004 or 8008 work if nothing like them had ever existed before?
Second, how is a language made to function if it's the first of its type? How did Xerox or Bell get their respective languages to work if they were built from scratch?
To me it's all a variation of the "chicken or the egg" question. How do you know if a chip will function if there's no language for it? How do you know the language will work if there's no chip for it?
Assembler. And before that, nightmare territory of having to program directly at the per-bit level, by "literally" throwing the switches.
The earliest concepts of what one might think of as an assembler appeared around 1947 on the EDSAC.
https://www.davidsalomon.name/assem.advertis/asl.pdf#page=16
All the power we have on tap today of course means an assembler's role is somewhat diminished by brute-force capability. But there's a lot to be said for being able to operate right down at the hardware level. How many attack vectors are symptoms of design decisions in higher abstraction layers? Too many.
Fully accept practicality of writing a full blown OS on todays myriad and modular hardware is a tough ask. Some people have tried to do it.
> Assembler. And before that, nightmare territory of having to program directly at the per-bit level, by "literally" throwing the switches.
I loved those switches. Hand translating your code, loading it into the console via the switches and run.
Later once you had a tape you would manually enter the boot loader and run. Happy, Happy Joy, Joy. Real hacking.
> And before that, nightmare territory of having to program directly at the per-bit level, by "literally" throwing the switches.
There was a stage before switches. I never did it but an older friend had the experience of "programming" hardware by plugging brass and ferrite slugs into a pinboard that then got inserted into an array of induction coils. Brass (non-magnetic) was 0, ferrite (magnetic) was 1.
"How is a new CPU or other processor first programmed?"
Usually an *emulator* is written for the target machine, which runs on an existing machine. Instructions are just 1s and 0s, so all you need is the definition of what they are (the *architecture*) and you can write a program to do the same thing on any computer.
For example, when DEC developed the VAX they wrote an emulator for it that ran on their much larger PDP-10 mainframe. VMS booted on the emulator before any physical VAX existed. This allowed operating system and hardware development to proceed in parallel.
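To make the emulator idea concrete, here is a toy fetch-decode-execute loop for a made-up three-instruction machine; the ISA is invented for illustration, not any real DEC or Intel architecture:

#include <stdio.h>

enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_PRINT = 3 };

int main(void)
{
    /* "Machine code": each instruction is opcode, dest-reg, operand. */
    int program[] = {
        OP_LOADI, 0, 40,     /* r0 = 40        */
        OP_LOADI, 1, 2,      /* r1 = 2         */
        OP_ADD,   0, 1,      /* r0 = r0 + r1   */
        OP_PRINT, 0, 0,      /* print r0       */
        OP_HALT,  0, 0
    };
    int reg[4] = {0};
    int pc = 0;

    for (;;) {               /* the whole "CPU" is this loop */
        int op = program[pc], a = program[pc + 1], b = program[pc + 2];
        pc += 3;
        if (op == OP_LOADI)      reg[a] = b;
        else if (op == OP_ADD)   reg[a] = reg[a] + reg[b];
        else if (op == OP_PRINT) printf("%d\n", reg[a]);
        else break;          /* OP_HALT */
    }
    return 0;
}

A real emulator does the same thing with the published instruction formats of the target machine, which is why OS and hardware development can proceed in parallel.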
"How is a language made to function if it's the first of its type?"
You can find this discussed in a compiler textbook under the name "bootstrapping". There are several different strategies, depending on what you have available and whether you are designing a new language, a new CPU, or both.
If it's just a new language L, you can write the first compiler for it in some other language Q. Then you write *another* compiler for L, this time in L itself, and use the compiler you wrote in Q to compile that. At this point your language is what is called "self-hosting" and you can throw the old implementation in Q away.
On the other hand, if your language is already implemented on architecture W and you want to get it running on architecture X, you first modify the compiler you have on W to still run on W but output a program that runs on X. You then tell it to compile itself to X (still running on W), and copy the executable file from W to X.
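A toy sketch of that retargeting step, in C: one tiny "code generator" that can be told to emit instructions either for the machine it runs on ("W") or for the new one ("X"). The target names and mnemonics are made up for illustration and not taken from any real compiler.

/* Toy retargetable code generator: same front end, two back ends. */
#include <stdio.h>
#include <string.h>

enum target { TARGET_W, TARGET_X };

/* Emit "add the constant n to the accumulator" for the chosen target. */
static void emit_add_const(enum target t, int n)
{
    if (t == TARGET_W)
        printf("    ADDI  A, #%d\n", n);       /* W's imaginary syntax */
    else
        printf("    add   r0, r0, #%d\n", n);  /* X's imaginary syntax */
}

int main(int argc, char **argv)
{
    /* Running on W; pass "x" to cross-compile for X instead. */
    enum target t = (argc > 1 && strcmp(argv[1], "x") == 0) ? TARGET_X : TARGET_W;
    emit_add_const(t, 1);
    emit_add_const(t, 41);
    return 0;
}

Point a compiler built along these lines at its own source while still running on W, collect the X output, carry it across, and from then on X can compile for itself.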
The first machines were coded in binary. There was a website where you could run a Baby (the first modern stored-program computer!), though the code it was written in no longer runs in a browser. Basically the binary code was set on switches and loaded into memory, then the next bit of binary code was loaded and stored, and once the program was loaded you set the start address and hit the run button. I would say it's worthwhile for any programmer to try to get a grip on how a simple CPU works (or worked). Basically it's quite a simple arrangement of switches and registers and a bit of logic.

Laziness being the mother of invention, the next thing that came along was assembly language, where mnemonics replaced the binary codes and were then translated into binary by hand and entered. Then someone came up with the idea of an assembler that took a text file, converted it into binary and ran it. Once we got to that point (allowing for defines, macros, including other files etc.) we were pretty much where we are today - computer languages are really just smarter, machine-independent assemblers with attitude.
http://www.visual6502.org/ can show you how the code works in silicon! But there are other 8080 virtual machines that you can easily run to see how assembler works!!
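The mnemonics-for-binary step described above really is just a lookup table at heart. Here's a minimal sketch in C, reusing the invented opcodes from the toy emulator earlier; a real assembler adds operands, labels, macros and an object-file format on top.

/* Toy "assembler": translate made-up mnemonics to made-up opcodes. */
#include <stdio.h>
#include <string.h>

struct op { const char *mnemonic; unsigned char opcode; };

static const struct op table[] = {
    { "LOAD", 0x01 }, { "ADD", 0x02 }, { "STORE", 0x03 }, { "HALT", 0x00 },
};

int main(void)
{
    const char *program[] = { "LOAD", "ADD", "STORE", "HALT" };

    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++) {
        for (size_t j = 0; j < sizeof table / sizeof table[0]; j++) {
            if (strcmp(program[i], table[j].mnemonic) == 0) {
                printf("%-6s -> 0x%02X\n", program[i], (unsigned)table[j].opcode);
                break;
            }
        }
    }
    return 0;
}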
Cross-compilers.
For example, the Burroughs Large System (B6700 etc, which has evolved to Clearpath), was always written in Algol. There was never an assembler for it. A compiler which emitted B6700 code was implemented on a B5000 (different architecture, different instructions), and the output of that compiler was used to boot the B6700. And of course, the compiler was written in Algol, so once the B6700 was running, subsequent compilation could be done on the new system.
Looking at the source code for the MCP, and the patch history of it, there were in fact some lines which dated from the very first iteration.
Learned a bit about the Burroughs systems after the merger with Sperry. Apparently some people had a hard time believing there wasn't an assembler.
It can take a bit of time to wrap your head around that. But hey, the compiler writers just need to know what binary to put out. Not much different than spitting out assembler code.
As far as I know, every "modern" language and framework ends up being written in C. I don't know of even a single modern language that is not implemented using C. Most compilers are still implemented in C to avoid the higher-level language becoming bloatware and to speed up just-in-time execution. When object-oriented programming came along to replace procedural languages like C and Pascal, solving their shortcomings with things like automatic memory management and object-oriented paradigms, those languages were not written directly in assembly; they just used C libraries under the hood for compiling and generating executables, and reusable native C memory-management code was used to provide those capabilities to higher-level languages like Java. Even the celebrated C++ used C heavily in its implementation in the early days. Most probably the Go runtime is also written in C, with reusable memory-management and thread-synchronization libraries. In fact, C is very much alive and present everywhere; it's just that the generation of programmers used to high-level languages like Java, Python, etc. doesn't see the connection.
That's likely true, but is not required. Any language (which may have been bootstrapped via C) that allows the needed lower-level operations can be used to bootstrap from. So, it ought to be possible to bootstrap your way from Modula-2, for example. Hmm. There wasn't a decent C for the IBM mainframes, so my sequence there was AlgolW => QD => QC => ALAI. Don't bother trying to look up any but the first - you won't find anything.
The rustc frontend is written in Rust and was originally written in OCaml. It currently uses LLVM as a backend, but they're working on an alternative backend for debug builds that's faster than LLVM but less optimizing. That's also written in Rust.
Free BASIC appears to also be written in itself, as are Haskell and the PyPy implementation of Python. (PyPy's "translator" and JITing runtime use a subset called RPython which allows static types to be determined at compile time.)
The term is self-hosting and it's considered a requirement for a language to be taken seriously in various circles.
Free Basic can generate assembler on x86 and generates C for gcc for the rest.
Free Pascal writes ELF .o directly, but still uses LD on popular *nix targets.
On more static targets like Windows and go32v2/dos it has a complete stack (assembler, archiver, linker, resource compiler and, finally, in the development version, a debugger).
Wow this is impressively wrong.
Take SBCL (a Common Lisp implementation; it runs on *nix and Windows, mostly x86/x64 now by the look of it, but the compiler also targets, or has targeted, many other architectures). The current git repo has 571,925 lines of Lisp code (including the whole compiler, of course, complete with 9 backends), 49,174 lines of C, all of it runtime support, and 3,673 lines of assembler.
Or Clozure Common Lisp: 417,298 lines of Lisp, 40,649 lines of C and related.
For these languages and many others C is used as the necessary glue to stick the language to the platform, since the platform's native interfaces are all defined in C. And that is essentially all. Sometimes it is also used for low-level memory management (GC for instance).
Hey,
Do you think Mr. Putin is aware of this process and of how long the journey really is, bootstrapping a yet-to-be-designed programming language compiler upwards? It may take a while to complete the whole software stack, from next-to-the-metal up to the cloud of clouds we have today, using solely in-country developed stuff. No Linux there, unless you write your own compiler for C in your own programming language, itself compiled by your own programming language, bootstrapped to life somehow...
Still, the newly minted law prohibits the use of foreign software after 2025...
Per Brinch Hansen?
https://en.wikipedia.org/wiki/Per_Brinch_Hansen
http://pascal.hansotten.com/per-brinch-hansen/
My contention is that his Concurrent Pascal of c. 1977 was a microkernel in disguise, with a bundle of user-level operating system personalities, though not much like what we'd call OSes these days.
Classic MacOS originally was in Pascal (up to V8 ?)
Microsoft Windows 1.0 was said to also be in Pascal, though it was more a shell than an OS?
IIRC the Modula2 OS was Lilith, and Modula2 was used a lot in embedded applications. Standalone application as a system language, but maybe doesn't count as OS?
Noob here, first comment, name's Bob (68yo retired programmer). My favorite OS, and the best overall IMO, was one called RTE--for 'RealTime Executive'--on HP1000 minis (descendant of the HP2100). HP got into the OS business because they needed a computer/system to manage all the test and measurement equipment they built (signal generators--including their first product, an audio oscillator used in the sound work for Disney's 'Fantasia'--spectrum analyzers, O-scopes, gas chromatographs ... you get the idea). Also, to communicate with all their devices on a single bus, HP invented the HP Interface Bus, or 'HPIB', later to become the IEEE488 parallel bus.
The word 'realtime' gets bandied about, but there are actual criteria; basically, the machine's response has to be deterministic at all times. Obviously, responses cannot occur at the same instant as a stimulus, but in order to be considered 'realtime' the machine has to guarantee a response within a specified maximum time. Since the 1000 was an I/O processor extraordinaire, this was accomplished by giving each I/O type--serial, parallel, etc.--its own Z80-based processor, essentially a microcomputer with its own RAM, microcode, backplane hardware and specialized chips (e.g. UARTs). The CPU would specify an input's parameters, the I/O board would collect the input until the request was complete, DMA the data to main memory, then fire an interrupt to tell the CPU the data was ready to process once the DMA was complete. Depending on the process's priority, the CPU would save its state, service and process the interrupt, then restore its previous state and continue until a higher-priority process needed servicing.

By contrast, the later RISC-based HP3000 was strictly a polling architecture; the CPU would query each I/O subsystem in turn no matter its 'priority' (it was a round-robin process handler; higher-'priority' processes would merely get a larger time slice). The theory was that the RISC processor was so speedy it didn't need true realtime capability, but in testing the HP1000 'mini'--early versions used 4 x 4-bit microprocessors daisy-chained together for a 16-bit system--would routinely blow the RISCy 3000s away.

When I was at HP, 1983 to 1996, one of the 1000's applications was as the signal processing unit for AWACS early-warning aircraft, and HP was obliged to continue supporting the 1000 into the 2000s for that use. I worked a contract programming a bank of HP instruments on IEEE488, initially in HP Pascal, later on a micro using LabVIEW (ugh), for one of the GOES weather satellites' comm systems.
I had fun in school with LISP--one of the Linux text editors is written in it IIRC--and Forth but the strangest language I encountered was an oddity from IBM called APL (for 'A Programming Language'). Commands parsed right-to-left and an assignment character was a left-facing arrow; there were specialized keyboards available for command symbols resembling Greek letters but us mortals had to use clunky ASCII transliterations. But, it had a single character command that would perform a full matrix inversion. Wonder what happened to it; was probably too far out of the norm for many to use.
APL was invented by Ken Iverson (actually as a notation before a programming language) and ran on more than IBM, although the most common implementation was on OS/360. I know it was still staggering along in 1980, being used for turnkey stuff in finance by a Canadian company called I.P. Sharp.
The Lisp-based editor you're thinking of is emacs ("Eight Megabytes and Constantly Swapping", at a time when 4MB was a largish mainframe memory) and it has little to do with Linux. It arose from the MIT AI Lab and ran on pre-Unix operating systems like ITS. Then Richard Stallman reimplemented it with a Lisp interpreter written in C, from where it went everywhere. Up until a couple of years ago you could find it installed on every Mac.
Actually, EMACS (Editor MACroS) was originally a bunch of - surprise! - macros written in the TECO editor by various people and collected and made more uniform by Stallman at MIT.
The next implementation was by Bernie Greenberg on Multics using MACLISP, which inspired Stallman to use LISP for GNU Emacs.
You can read Bernie's account of all this here: https://multicians.org/mepap.html
The HP1000 was an awesome machine, although I had quite limited exposure to it. It was used a lot in process control etc. due to its realtime capabilities; I mainly came across it in conjunction with data acquisition and instrumentation.
I have a pre-RISC (labeled "classic" after the introduction of the PA-RISC based machines) HP3000 running MPE V. Everything apart from serial I/O is based on HP-IB: a 7970E tape drive and 7911 winchesters (sans QIC option).
Macintosh OS 1.0 to 7.x was written in Pascal, and all interfaces were Pascal, no matter what programming language you used.
OS X was a brand new OS.
In the early 1980s I wanted the white Forth computer (the Jupiter Ace), which looked like a ZX-81½ but with rubber keys, like those that later came to the ZX Spectrum.
I doubt the ZX-81, with its 1 KB of RAM, had a C-based OS.
I have been programming in assembler, BCPL, C, C++, Pascal, Modula-2 and many more. I think I was introduced to Oberon at university. But programming languages come and go. C has stuck around forever.
Go back far enough and purchasing an operating system for your computer was optional. The manuals described the hardware architecture, processor registers, i/o processing, data storage formats, switch functions etc. You could purchase an o/s or write your own in any language you could write a compiler for.
Fundamentally, at a low level, the o/s language needs to be able to cater for operations directly on CPU registers with specific hardware instructions, shared memory segments, the ability to read from / write to direct memory addresses, indivisible (test+set) instructions, interrupts, and volatile latches or semaphores at known (fixed) addresses. Most languages these days try to abstract all this away from the possibility of harm. As CPUs gain more high-level instructions (e.g. AES, TLS), the number of languages able to support it all shrinks.
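For what it's worth, here is roughly what a couple of those facilities look like from C today. The device register address is entirely made up, and the test-and-set uses the GCC/Clang __atomic builtins - other toolchains spell this differently, and plenty of higher-level languages cannot express it at all.

/* Sketch of memory-mapped I/O and an indivisible test-and-set lock in C.
 * The MMIO address is hypothetical; the builtins assume GCC or Clang. */
#include <stdint.h>
#include <stdbool.h>

#define UART_STATUS ((volatile uint32_t *)0x4000F000u)  /* made-up device register */
#define TX_READY    (1u << 5)

/* lock word; a real system would pin this to a known (fixed) address via the linker */
static volatile unsigned char uart_lock;

void lock_acquire(void)
{
    /* indivisible test-and-set: spin until we are the one who flips 0 -> 1 */
    while (__atomic_test_and_set(&uart_lock, __ATOMIC_ACQUIRE))
        ;
}

void lock_release(void)
{
    __atomic_clear(&uart_lock, __ATOMIC_RELEASE);
}

bool uart_can_send(void)
{
    /* volatile read straight from the device's status register */
    return (*UART_STATUS & TX_READY) != 0;
}

int main(void)
{
    /* On a hosted system we only exercise the lock; dereferencing the
     * made-up MMIO address would fault, so uart_can_send() isn't called. */
    lock_acquire();
    lock_release();
    return 0;
}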
Yes, I wondered if that would get a mention; I was there a few years ago. God, I get everywhere, don't I?
Anyhow, even that wasn't pure Java (Java SE on a phone); there were sections of "native" code written in C, typically for device drivers etc., to get the performance up.
But yes, to the user it appeared to be a Java O/S.
Has anyone here taken a deep dive into the ARM architecture to determine whether its instruction set is biased towards C or some other language, and towards non-Unix-like OSes?
I ask because, from the discussion so far, this platform hasn't been mentioned, perhaps because it is designed for Unix/Linux and C.
RISC OS was written specifically for the original ARM architecture, with a mix of assembler, C, C++ and even elements in BBC BASIC! I imagine what we think of as the kernel would all have been assembler in those earlier versions.
C-based toolchains were developed from the get-go on ARM. The current RISC OS Open source (a cursory glance made in a few minutes - not a proper dive) looks like it's largely C, with C++ too. (Quite well-annotated code too - if with a few too many macros for my liking, though in something as big as a kernel I understand the decision to use them.)
As has been noted elsewhere, C as a translation layer to Assembler on a PDP-11 was very direct. I don't think this is particularly the case with any other arch since then.
The first versions of RISC OS used no C whatsoever; IIRC, a C compiler only came later. ARTHUR, the predecessor to RISC OS, did indeed have parts of the GUI written in BBC BASIC, but when RISC OS replaced it, the GUI was all assembly language.
As for C influence on the ARM ISA, I doubt there was any. Yes, there are autoincrement/decrement loads and stores, but I believe these were inspired more by PDP11 and VAX, and were more general than the C ++/-- operators. Essentially, after you calculate an address (which can include a scaled register offset), you can write that back to the base address register.
en.wikipedia.org/wiki/ARM_architecture_family#Design_concepts
I'm somewhat irked by the fact you guys forgot SerenityOS, a highly usable and advanced operating system written in C++. Its web browser recently passed the Acid3 web compatibility test, which is quite a feat.
Mind you, Firefox is still building on code written more than 25 years ago, whilst SerenityOS's web browser was written in just a few years.
Honeywell's CP-6 OS, the successor to Xerox's CP-V (itself following from the BPM/BTM/UTS OSes on Scientific Data Systems, later Xerox, Sigma series hardware), was implemented almost entirely in a new designed-for-purpose high-level language, PL-6.
As the CP-6 preliminary design review (http://www.bitsavers.org/pdf/honeywell/cp-6/CP-6_Preliminary_Design_Review_Sep77.pdf) described it, PL-6 was:
* PL/1 LIKE SYNTAX
* BLOCK STRUCTURED
* SIMPLE DATA TYPES
* MINIMAL RUN-TIME ROUTINES
* NO HIDDEN OVERHEAD
* INTERFACES TO SYSTEM SERVICES
* FACILITATES CODING IN NSA ENVIRONMENT
* USES CAPABILITIES OF L66 INSTRUCTION SET
It lasted for at least a decade, but I doubt there are any systems extant.
And bloody annoying it was to program in if you were used to PL/1.
Way back when, Aberdeen University were looking to replace their system and wanted Multics; unfortunately, Honeywell had cancelled Multics and sold them CP-6 instead (features included a flat file system - no directories, just a bunch of files per user - and the world's worst text editor - no buffers; changes were made *immediately* in the original file, which was really fun if you were editing code over a noisy phone line).
So I ended up as part of the team implementing the "Rainbow books" (aka "Coloured books" https://en.wikipedia.org/wiki/Coloured_Book_protocols) used by JANET (the Joint Academic NETwork) in PL/6.
Some of the Multics PL/1 code was ported - which was no easy task. Fortunately I got to write the connection manager (conman) from scratch.
Not an experience I'd care to repeat.
The first versions of the variously-named OS'es on Prime computers, apart from the necessary assembly portions, were written in FORTRAN, which seemed weird to me even at the time but became useful as we cycled through four different instruction sets. Primos added PLP (an in-house cut-down version of PL/1) for new stuff; I still have a listing of my PLP Apple LaserWriter driver. Then the language gurus came up with what they called SPL, which was an over-complicated version of PLP, but I think it was mainly a vanity project intended to exercise a new compiler-compiler technology. It sucked, but soon afterwards it disappeared along with the rest of the company.
>> After his first sabbatical in Palo Alto in 1976-1977, once Wirth got back home
>> to ETH Zürich in 1977, he and his team designed and built the Lilith workstation
>> as a cheaper replacement for the $32,000 Alto. Its object-oriented OS, Medos-2,
>> was entirely built in Modula-2.
I was part of the development team at RSE, led by Brian Kirk, that developed MOSYS for Motorola 68000-based workstations in the UK; we licensed the base Modula-2 compiler for Lilith from ETH. MOSYS was completely separate from Medos-2. It saw limited usage in the UK and was not a commercial success. The work was partially re-used for 68000-based embedded systems.
I later moved to GEC Avionics where, among other things, I helped port the BCPL compiler to GEC4000 machines, which had an OS written in Coral-66 (also influenced by ALGOL) and assembly code. The Coral-66 compiler was written in BCPL, which is why BCPL needed porting to the GEC4000 (from the VAX).
I was very interested to hear about your work on MOSYS. I initially read about it in the August 1984 issues of BYTE magazine (misspelt as MOSES!) and SAGE News. Do you know if any copies still exist? I would love to try it out on one of my SAGE II/IV systems.