VT (Video Terminal) escape sequences, based on an ancient system for formatting text by including codes prefixed with the Esc character
Is there anyone here who remembers the VT52 or VT100? Or the horrors of /etc/termcap?
Microsoft's new terminal app is now available in the Windows Store - so naturally your vultures took it for a spin. What's the point of the new Windows terminal? There are a few things. One obvious one is multi-tab support. You can click + to add a tab. There is also a drop-down menu that lets you select which command line …
VT (Video Terminal) escape sequences, based on an ancient system for formatting text by including codes prefixed with the Esc character
Is there anyone here who remembers the VT52 or VT100? Or the horrors of /etc/termcap?
And the horrors of curses, ncurses, etc. and all the language settings and character sets. It is still a horrible mess. However, the *nix philosophy is still better than many other things seen or created.
And, with MS implementing the "ancient ways", it surely looks like they are admitting that their approach was not the best way to go.
So, when will they implement fork() and forget?
I haven't been overly bothered by fork() as yet, but a working select() would be nice. MS's function cannot wait across a set of fds of different types (such as a serial port and a network socket). In the end I gave up and used the cygwin platform, whose working select() was enough to do the job of talking to a telephone exchange via RS232 and fielding network requests at the same time. The unix philosophy that, at the end of the day, everything is a file pays huge dividends. Using cygwin also gave me a nice simple tcgetattr()/tcsetattr(), which allowed serial port setup in a tiny amount of code compared to the sheer Lovecraftian horror that is the Windows API for such things.
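Something along these lines, in other words - a rough sketch from memory rather than the actual code, assuming POSIX (Linux, BSD or cygwin); the /dev/ttyS0 path and port 5000 are just placeholders:

```c
/* Rough sketch, not the actual code. Assumes POSIX (Linux, BSD or cygwin);
 * /dev/ttyS0 and port 5000 are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* Serial setup: a handful of termios calls, versus pages of
     * SetCommState/SetCommTimeouts boilerplate on the Win32 side. */
    int serial = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (serial < 0) { perror("open serial"); return 1; }

    struct termios tio;
    tcgetattr(serial, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tcsetattr(serial, TCSANOW, &tio);

    /* A listening TCP socket for the network requests. */
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 4);

    /* Everything is a file descriptor, so one select() covers both. */
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(serial, &rfds);
        FD_SET(listener, &rfds);
        int maxfd = (serial > listener ? serial : listener) + 1;

        if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(serial, &rfds)) {
            char buf[256];
            ssize_t n = read(serial, buf, sizeof buf);
            if (n > 0)
                printf("exchange sent %zd bytes\n", n);
        }
        if (FD_ISSET(listener, &rfds)) {
            int client = accept(listener, NULL, NULL);
            if (client >= 0)
                close(client);   /* field the request here */
        }
    }
    return 0;
}
```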
Not necessarily a worse way, just a non-standard way, and now they are moving back to the standard way, which they abandoned when they moved from MS-DOS to Windows...
As they are trying to integrate *NIX use into Windows, it makes sense to use the same methods on all terminals, and as trying to get Linux to switch to a Win32 API on the Linux terminal would probably provoke an apoplectic fit, it is probably safer and easier for Microsoft to re-adopt the escape sequences.
Xenix was actually licensed by Microsoft from AT&T in 1979. It was the exact same bog-standard PDP-11 Version 7 Unix that I had access to at UCB. Microsoft never actually coded anything[0] for Xenix, rather they sub-licensed the AT&T source code to third parties, who did the actual coding and porting.
For example, it was SCO who ported it to the IBM PC's 8086/8088 architecture in roughly 1983. Yes, the very same machine that shipped with MS-DOS. Most of us yawned[1] ... although looking back, it was a pretty good hack by SCO![2][3] Hindsight's 20/20 ...
The name Xenix came about because Ma Bell couldn't (or didn't want to) let them use the UNIX name. The claimed reason for jealously guarding the trademarked UNIX name was that Ma Bell was regulated and wasn't allowed to get into the retail trade, although that always rang a trifle hollow to me.
Before SCO's port was released, there was a TRS-68000 version, a Zilog Z8001 port, and an Altos 8086 version (not necessarily in that order, my mind is concatenating time). There were several others. Microsoft didn't write any of them, rather the third-party companies in question did the coding.
A version of SCO Xenix is available for download here: ftp://www.tuhs.org/UnixArchive/Distributions/Other/Xenix/ ... Don't blame me for the www in that URL.
[0] Unless you consider adding Redmond copyright crap to a few header files "coding".
[1] Those of us working on BSD at the time looked on Xenix as BSD's somewhat insane & slightly neurotic little brother.
[2] A while back when I posted something along these lines, I asked if anyone could remember who ported Xenix to Apple's Lisa. Turns out it was SCO ... I have a copy, my Lisa looks a lot happier running a un*x than the OS she came with. (Don't worry, all you purists, I have the stock software for her, too.)
[3] Not the SCO of recent (ongoing?) litigation. Not by a long shot.
It doesn't really matter who the original author was. Do you think OSX was entirely written by Apple? MS could have used Xenix as the basis for a proper PC unix OS, which would probably have meant Linux and FreeBSD never existing, or at least staying as small research projects no one had heard of. Whether that would have been good for us is another discussion, but it would have been very good for MS. Instead they spent over 30 years putting ever more lipstick on the Windows pig, and it still goes Oink on a daily basis.
> MS could have used Xenix as the basis for a proper PC unix OS
It was "a proper PC unix OS", it was AT&T UNIX edition 7 ported to the 8086 by SCO. Later when SCO bought the rights (partly funded by Paul) it was updated to later x86 and to System III and System V and was renamed to Open Server.
> have meant linux and freeBSD never existing
BSD (1977) existed _before_ Xenix (1980) and was free.
If Xenix/OpenServer continued being sold by Microsoft then Linux probably would have taken off sooner.
"it was AT&T UNIX edition 7 ported to the 8086"
God knows how, it didn't have an MMU - even Linux requires a minimum of a 386. I'm guessing it required extra hardware on the board. Anyway, what I meant by proper was something more consumer-friendly, like OSX. The early X Windows window managers were, to say the least, minimalist, and having to edit config files just to change the desktop or add items to menus was a complete non-starter for Joe Average.
"BSD (1977) existed _before_ Xenix (1980) and was free"
And ran on a PDP-11, not x86.
"If Xenix/OpenServer continued being sold by Microsoft then Linux probably would have taken off sooner."
No idea how you worked that one out.
> God knows how, it didn't have an MMU
I have the Bell System Technical Journal July-August 1977 here, the Unix issue (and a collector's item). One article is 'UNIX on a Microprocessor', which describes implementing Unix (edition 6, I think) on a DEC LSI-11 microcomputer with 20K words (16-bit words) and no MMU. Granted, that was single-user.
An 8086 can address 1 megabyte and is perfectly capable of running multiuser systems. I have some here: ICL PC2s - 8086 with 1MB running Concurrent CP/M-86 (though not switched on for a couple of decades).
> even linux requires a minimum of a 386
Yes, and that was deliberate, because Linus said: "It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386."
However, ELKS* is a fork of Linux for lesser CPUs such as the 8086.
> And ran on a PDP-11, not x86.
Unix ran on a large number of different processors. BSD (Berkeley Software Distribution) was a distribution of Unix, based on actual Unix source code, with some changes and additions. In fact 1BSD was just the additions.
> No idea how you worked that one out.
No, you don't, do you.
* https://en.wikipedia.org/wiki/Embeddable_Linux_Kernel_Subset
> what I meant by proper was something more consumer friendly like OSX.
OSX was nearly 20 years after Xenix and relied on hardware (and its relatively low costs) that was inconceivable in the early 1980s. It was also BSD based and didn't have to pay royalties to AT&T as Xenix had to. NeXT (who wrote the core OS) and Apple were primarily hardware manufacturers and didn't have to make a profit from software.
As I previously said: at the time there were expensive workstations such as the Star and PERQ (and Lisa) running graphical Unix (or Unix-like) systems, but the profit was in the hardware, which MS did not make, and the market was very small.
Windows on the PC may have been crap but it was cheap enough to sell in huge numbers. DRI's GEM* had sold a million copies by the time Windows 1 was released, and that was the market that Bill wanted. The Star and PERQ sold in the hundreds, not the millions.
* GEM was also the basis of Atari's TOS which was 5 years later (than Xenix) and also made its profit from the hardware sales.
> Microsoft never actually coded anything[0] for Xenix,
That is not quite true. Xenix included additional code that was owned by MS, though they may have paid SCO to actually write it. For example, there was record locking, which was not in Unix, and this continued to exist in Open Server.
> it was SCO who ported it to the IBM PC's 8086/8088 architecture in roughly 1983.
Yes, SCO, a father and son team, did the work but they were paid by Microsoft under contract and did it on Microsoft's development machines (DEC VAX) in-house. It was actually released in 1980, before the IBM PC and before MS bought a licence to use QDOS/SCP-DOS to make PC-DOS and MS-DOS.
Later SCO did buy the rights to Xenix and developed it further but still had to pay licence fees to MS for code that MS owned (but may have been written originally by SCO).
> Yes, the very same machine that shipped with MS-DOS.
Actually it was used on machines quite unlike the 5150 IBM PC. Altos 8086 boxes ran an 8086, had a full megabyte of RAM, and ran multiuser with serial terminals. Until MS-DOS 2 (and the PC-XT) in 1983, MS-DOS couldn't support hard disks, while Xenix required one. So, no, they weren't the same machines.
I expect there are quite a few of us, but what bothers me is that I thought how clever curses/ncurses was - a library designed to give you API-based access to the screen and hide all the termcap/terminfo mess. We seem to have come full circle somehow...
However, having a protocol for remote display / network transparency of terminal apps is a good thing. I hadn’t realised some of the remote access services were scraping hidden console windows to make some apps remotely accessible.
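For anyone who never touched it, this is roughly what curses bought you - a minimal ncurses sketch of my own, nothing to do with any particular app:

```c
/* A minimal ncurses sketch, just to show the idea: the library reads TERM,
 * loads the terminfo entry and hides the raw escape codes behind an API.
 * Build with: cc demo.c -lncurses */
#include <ncurses.h>

int main(void)
{
    initscr();                 /* set up the screen from the terminfo entry */
    cbreak();
    noecho();
    mvprintw(2, 4, "Hello from curses on a '%s' terminal", termname());
    mvprintw(4, 4, "Press any key to exit");
    refresh();
    getch();
    endwin();                  /* put the terminal back the way it was */
    return 0;
}
```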
My config.sys always included device=ansi.sys in earlier versions of Windows.
Unless I'm misremembering, the VT100 and VT2** supported ANSI escape sequences, but the VT52 didn't. Some of the sequences would control lights on the keyboard - I remember maintaining a program where some idiot flashed the lights to draw attention to an error.
Unless I'm misremembering
You're perfectly correct, the VT52's escape codes were all followed by a single letter (and then a cursor address in one case) whereas the VT100 and later used the more complex ANSI codes (though could emulate VT52s).
At one time in the distant past I had a VT52 at home attached to what was then an extremely advanced 1200 baud modem for remote working.
Perhaps as a result, I'm not overjoyed at the prospect of erasing* 40 years of UI development in favour of ASCII text, however brightly coloured it might be.
*Which presumably would be either Esc[2J or EscHEscJ depending...
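To spell that footnote out - a throwaway illustration, nothing more, and you'd flip the flag to suit whichever terminal you were actually sitting at:

```c
/* The ANSI clear-screen versus the VT52 one (Esc H homes the cursor,
 * Esc J erases to the end of the screen). Purely illustrative. */
#include <stdio.h>

int main(void)
{
    int ansi = 1;                              /* set to 0 for a real VT52 */

    if (ansi)
        fputs("\x1b[2J\x1b[H", stdout);        /* ESC [ 2 J, then ESC [ H  */
    else
        fputs("\x1bH\x1bJ", stdout);           /* ESC H, ESC J             */

    fflush(stdout);
    return 0;
}
```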
Is there anyone here who remembers the VT52 or VT100? Or the horrors of /etc/termcap?
Yes, I remember it well. I still use the current incarnation in the guise of terminfo. The thing it is great for is running something like vim/emacs on a remote machine over a slow ssh connection, when anything GUI would be impossibly slow. Actually, I also do it on my local machine: not having to worry about the mouse is great - touch typing makes doing things fast.
I would like to know what I should set my TERM variable to if I wanted to use this MS terminal, hopefully they have adopted something along the lines of 'xterm' as that encodes things like shift-function key (& a few like that).
Have they produced a terminfo entry for this?
Also: does it understand UTF-8 encoded characters? Anything else is now obsolete.
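For the curious, this is roughly what a terminfo entry gets you at the C level - a rough sketch using the ncurses terminfo API (link with -lncurses or -ltinfo); the kf13-means-shift-F1 bit is the usual xterm convention, not a universal rule:

```c
/* Ask for this terminal's capabilities by name instead of hard-coding
 * escape sequences. Build with: cc demo.c -lncurses */
#include <stdio.h>
#include <stdlib.h>
#include <curses.h>
#include <term.h>

int main(void)
{
    int err;
    if (setupterm(NULL, 1, &err) != OK) {         /* driven by $TERM */
        fprintf(stderr, "no usable terminfo entry (err=%d)\n", err);
        return 1;
    }

    printf("TERM=%s, colours=%d\n", getenv("TERM"), tigetnum("colors"));

    char *sf1 = tigetstr("kf13");                 /* often shift-F1 on xterm */
    if (sf1 && sf1 != (char *)-1)
        printf("shift-F1 arrives as: %s\n", sf1); /* raw bytes, ESC included */

    char *clear = tigetstr("clear");              /* clear-screen capability */
    if (clear && clear != (char *)-1)
        putp(clear);
    return 0;
}
```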
OK, I'll bite ... it's not infrequent that I use vim from WSL (at work - at home I can use gvim). In general, I can open up vim, change code, recompile & get the changes pushed before Visual Studio's even got around to opening. So terminfo's still very important for me :). Excellent point about a terminfo entry for the shiny new terminal (ROFL - MS've caught up with the noughties. Next thing you know, we'll even be able to split it into a grid! OTOH, why you'd want to try doing anything serious on Windows eludes me anyway unless you've got a sadistic employer.)
Icon because it's what I look forward to after a week of fighting Windows at work.
Those "in the know" preferred NANSI.SYS or NNANSI.SYS or even ZANSI.SYS..
Indeed. I have never understood how Microsoft managed to make the bundled ansi.sys of MS-DOS so slow. This was one of the reasons why all cool programmers avoided using it and poked the display directly (killing portability).
There was also a commercial FANSI.SYS ("Fancy") which was sold back in the 1980s.
I was always amazed that the default IBM/MS ANSI driver had a direct video connection to the display yet was only marginally faster than 1200 baud dial-up speeds. The number of DOS apps that wrote directly to video to bypass the lethargic driver was legendary, and caused no end of problems once windowing systems started to become popular.
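From memory, the difference looked roughly like this - a museum-piece sketch assuming a 16-bit DOS compiler (Turbo/Borland-style far pointers and MK_FP from dos.h); it won't build on a modern toolchain and is only here to show why the direct poke won:

```c
/* The two ways a DOS program could put a white-on-blue 'X' at row 10,
 * column 20. Assumes a 16-bit Turbo/Borland C environment. */
#include <stdio.h>
#include <dos.h>

void via_ansi_sys(void)
{
    /* Goes through DOS and ANSI.SYS, which parses the sequence byte by
     * byte and then calls the BIOS for every character it prints. */
    printf("\x1b[10;20H\x1b[37;44mX");
}

void via_direct_poke(void)
{
    /* One 16-bit store straight into the CGA text buffer at B800:0000:
     * low byte is the character, high byte the attribute (1Fh = white on
     * blue). No driver, no BIOS, hence the speed - and the breakage once
     * windowing systems wanted to own the screen. */
    unsigned int far *vram = (unsigned int far *)MK_FP(0xB800, 0);
    vram[9 * 80 + 19] = (0x1F << 8) | 'X';    /* row 10, col 20, zero-based */
}

int main(void)
{
    via_ansi_sys();
    via_direct_poke();
    return 0;
}
```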
> we would load ANSI.SYS in DOS
MS-DOS, up to MS-DOS 5, could run on many different machines - as long as they used an 8086/8088 or similar. Some of these used serial terminals (SCP and LDP S100-bus machines, for example), others had display adaptors that were not IBM compatible (DEC Rainbow, Wang PC).
PC-DOS only ran on IBM PC and clones.
Most early software could be configured to use whatever terminal or adaptor was available. Borland's Turbo Pascal 3, for example, came in four versions: CP/M, CP/M-86, MS-DOS, and PC-DOS. The MS-DOS version could be configured (as could the CP/M versions) to use various terminals, including ANSI, which was suitable for the IBM PC or clones. The PC-DOS version only bit-banged the mono or colour adaptors (or Hercules).
It still has a local console for when the worst happens - or the modern equivalent of the big red switch...
In our case, we use the Hyper-V or VMWare viewers for direct access and either try and fix the problem or just reboot the server, if all else fails.
Sure, but if you get only a far more limited local console, it's not fun when you have an issue that requires local access - whatever "local" means (physical or virtual console) - and you have to work with only very limited tools.
Sometimes just rebooting may not help at all, it could even make things worse....
IIRC, the local console in the Core edition of Windows Server is a PowerShell window. It's not exactly limited, except that you can't run/spawn a GUI. We don't use it here for various and sundry reasons (most of which is that our vendors have no clue what to do if confronted with a server running Core. *wry grin*)
Requires version 19H1 (18362.0) or later, which is pretty much the absolute bleeding edge...
I'm running 1809 (17763.557), which is a relatively up-to-date version of W10...
Why the f*** does the store offer applications that I can't install? It's not like it doesn't already know which version I am currently running.
It's a little while since I used Linux for anything, but I still have nightmares about trying to find the right libraries and tools to get MAKE to run so that I could recompile something-or-other to get it to recognise my (common) graphics card so that the window manager server would run.
I'm sure it's better these days, but of course, the question is how much better?
Linux's advantage is its flexibility, but it sacrifices simplicity. Operating systems are tools, to do a job, and most people prefer their tools to be easy to understand and use, with little to no expert knowledge required. To stretch the analogy, Windows is a pre-assembled multi-tool, but with limited customisability. Linux is a box of parts, which can be combined to make any number of different power tools, with instructions for how to put some of the bits together, where someone could be bothered to write them. They might even be in English.
Linux's advantage is its flexibility, but it sacrifices simplicity.
You say that ... but I've always found it simpler than Windows. These days it's better documented, too (though, sadly, that says more about the state of Windows documentation than it does about that of Linux documentation).
Operating systems are tools, to do a job, and most people prefer their tools to be easy to understand and use, with little to no expert knowledge required.
If only!
To stretch the analogy, Windows is a pre-assembled multi-tool, but with limited customisability. Linux is a box of parts, which can be combined to make any number of different power tools, with instructions for how to put some of the bits together ...
... but a Linux distro is a pre-assembled multi-tool with considerable customizability and a container-load of pre-prepared attachments (applications) thrown in. The installer and the package manager do all the heavy lifting, leaving an experience that is simpler and less frustrating than Windows.
Yeah, sure, Linux has some rough edges ... but so do all OSes.
> I still have nightmares about trying to find the right libraries and tools to get MAKE to run so that I could recompile something-or-other to get it to recognise my (common) graphics card so that the window manager server would run.
That must have been a looong time ago. At least for the past 0x10 years, all major Linux distributions Just Work on common graphics cards, hardware detected automatically.
> To stretch the analogy, Windows is a pre-assembled multi-tool, but with limited customisability. Linux is a box of parts,
But actually most people run pre-built Linux distributions, which are just as much "pre-assembled multi-tools" - with the difference that you are free to take them apart and tinker if you like.
If you're going to post about how difficult Linux is to use, which I certainly found it to be in the 90s, you could do yourself, and all those people who donate their time and knowledge, a favour by updating your experience with a popular contemporary distro. Earlier Windows versions needed a pile of driver discs for very common bits of hardware, and some issues couldn't get resolved. Microsoft tend to resolve such issues by ignoring them and leaving it to the user.
Linux's advantage is its flexibility, but it sacrifices simplicity.
All depends on how simple you want it. Which is more useful?
[programname]: error while loading shared libraries: libxxx.so.x: cannot open shared object file: No such file or directory
which at least lets you know where to look, or
Uh Oh! Something went wrong. Hold on while we try to fix it for you
which tells you fsck all about anything?
Windows Terminal now uses GPU-based text rendering (DirectWrite and DirectX), which means high quality fonts as well as emoji and so on if you want them.
Are we really now at the stage where users can't survive without pretty fonts and emoji?
FFS, it's a terminal window for typing commands and running scripts, not a fucking chat app!
What your predecessor meant was that "full UTF-8" support also means "support for emoji", since emoji are just Unicode code points that UTF-8 encodes like any others. So if you talk to the Direct* text APIs you get ClearType rendering and scaling for all kinds of resolutions AND emoji support =D =D
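To make that concrete - a throwaway C example of my own; an emoji is just four UTF-8 bytes, provided the terminal decodes UTF-8 and has a font that can render it:

```c
/* U+1F600 GRINNING FACE is just four UTF-8 bytes. Any terminal that
 * decodes UTF-8 and has the glyph will show it; nothing emoji-specific
 * is needed in the program. */
#include <stdio.h>

int main(void)
{
    fputs("smile: \xF0\x9F\x98\x80\n", stdout);   /* UTF-8 for U+1F600 */
    return 0;
}
```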
Now that brought back memories from way back when I wrote a PDP-11/70 program to change the font / font size on an Epson dot matrix printer connected to the serial port on a VT52, then a VT220, and another I called 'lpr', which turned on the printer port, dumped a file to it, and turned it off again. Them were the days.
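Something like this, if my memory of the media-copy sequences (CSI 5 i on, CSI 4 i off) holds - a back-of-envelope reconstruction in C, definitely not the original PDP-11 program:

```c
/* On a VT220-class terminal the ANSI media-copy sequences route the data
 * stream to the printer port. Sequence numbers from memory. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: lpr file\n"); return 1; }

    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror(argv[1]); return 1; }

    fputs("\x1b[5i", stdout);                 /* printer controller on  */
    int c;
    while ((c = getc(in)) != EOF)             /* dump the file through  */
        putchar(c);
    fputs("\x1b[4i", stdout);                 /* printer controller off */

    fclose(in);
    fflush(stdout);
    return 0;
}
```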