
* Posts by abufrejoval

81 publicly visible posts • joined 29 Jan 2014

How CP/M-86's delay handed Microsoft the keys to the kingdom

abufrejoval

Re: "handle 16 separate segments of 64 KB – for a total of one whole megabyte"

This only appears "f**d up" until you fully understand what they were trying to achieve and then try to find a better solution.

The 8086 memory model wasn't designed to last a century, but to provide a cheap, pragmatic solution to the 64K RAM limit of the classic 8-bit CPUs while maintaining a maximum of backward compatibility.

Intel also made a 'proper' new architecture, the iAPX 432, but it failed because it was just far too big a jump ahead.

The 8086 simply tried to provide what a typical 16-bit MMU, e.g. for a PDP-11, would do, without using a full external MMU, by turning the segment registers into a kind of static MMU map and assigning things like code, data and stack/heap a default segment. With that you could still run full "8-bit semantics" using 16-bit offsets within a 64K RAM space or the classic *.COM model of CP/M, and use CP/M software with the least amount of software-aided translation.

Or you could immediately benefit if your code was "segment clean", not mixing code and data or doing funny things with heap and stack, giving you potentially a full 64K for each of the three (*.EXE), much like MP/M also enabled, I believe, with discrete MMUs.

Finally, going all in via large code or data models to cover the entire RAM range was also possible, with the minimum of effort 16-bit offsets allowed: either by playing with a single-byte instruction prefix (ES) or by using full 32-bit segment:offset pairs.

It is really a rather genius solution, because the static map means it doesn't have to be managed or initialized, which would have required a completely new set of operating systems, something like an RSTS or RSX-11, while the target audience was CP/M++ (the 80286 basically spent its entire life cycle waiting for one in vain). It also meant no extra chip or circuitry, because it was simple enough to include in the CPU design.

Using a full 16 bits for the segment registers might seem wasteful, but the 8086 was already trying hard to make everything 16- instead of just 8-bit: just imagine the outcry if segments had been limited to something like 11 bits. It probably cost only very few extra transistors to hold them and enabled normal computation without extra masking or shifting of segment values, which an OS, but also large-memory-model library code, might have to do.

That leaves the degree of overlap, initially set at 'only' 4 bits of left shift, which could have been 8 bits or any other number: 12 bits, which the article's author erroneously implies, would definitely have been the worst.

For that you have to look at the context: 64K was the maximum configuration across nearly the entire microcomputer industry. 256, 512 or 1024K then were what the same numbers in gigabytes are on a PC today.

The VAX had pretty much replaced the PDP-11, 32-bit and virtual memory were obviously the way to go and Intel had been working on 32-bit CPUs before the 8086 even got started, so nobody thought this 16-bit PDP-11 interim would last more than a few years in the low-end.

And wasting only 16 bytes instead of up to 256 at the end of every segment for rounding must have seemed the wiser choice than planning for 16-bit machines that could handle 16MB of RAM: not even IBM mainframes had that much RAM at the time, and the Compaq 386 had 16MB as its physical maximum configuration a full generation later!

Accusing the Intel designers of incompetence or short-sightedness in hindsight is a very cheap shot, and getting it all wrong (like the original article's author) makes it so much worse, when the guys who made these decisions are no longer around to defend themselves.

You just go and try to come up with something better for those constraints!

Anyone can argue that CPUs should be 256-bit all around because single-level store is best, or even wider so we can have larger vectors.

But people prefer affordable computers, even if those compromises cost extra effort later: even 32-bit isn't dead just yet.

abufrejoval

Re: The real reason

>The issue with the 286 was it couldn't be taken out of protected mode once it entered it without a reset, and once in protected mode, it couldn't trap efficiently real-mode applications' attempts to access directly memory and I/O ports. Techniques to execute real mode applications were either switching back to real mode, or trying to handle somehow well-behaved applications without much CPU support. IBM added a feature to the keyboard controller that could issue a 286 warm reset to return it to real mode without altering memory - but it was slow and required re-initialization of CPU states.

Rumor has it, that early prototypes of the 80286 did support a real-mode return, similar to LOADALL on the 80386.

I wrote a protected mode extender for the 80286 to run DOS (actually GEM) applications in protected mode. I found that a triple fault was way faster at causing a full CPU reset than telling the keyboard controller to trigger the reset line, but you had better manage the CMOS flag so the BIOS reset code wouldn't go astray on a cold start. It was OK enough for my GEM apps, because they were visualizing huge amounts of data as maps and there was no alternative. And I had more RAM than I could otherwise use, not your typical swapping or paging scenario.

>The 286 had hardware support for swap - but that was handled at the segment level. A segment descriptor bit tells if the segment is in physical memory or not. Accessing a segment with the bit set issues an interrupt that allows the OS to load it from external storage (and swap out other unused ones if needed), and lets the process continue.

>Having to swap segments up to 64K was already not ideal - in 32 bit mode having to swap segments up to 4G was unfeasible - hence the introduction of the MMU and memory pages.

That's where you fall into the same trap as the article's author: segments only had a *maximum* size of 64K on a 16-bit machine, but a minimum of only 16 bytes; it was up to you to manage the sweet spot and potential size variants. But of course addressing 16-byte segments with 32-bit large pointers on a machine with 16-bit registers and a 16-bit data bus was quite a bit less efficient than dealing with 32 bits all around. And while the 80386 could have stuck with segment swapping at sizes from 16 bytes to 4GB, using paging instead was far more natural to 32-bit designs. So much so that segments remained mostly useful for backward compatibility; an MMU is generic and covers segmentation, paging and more.

I'd just say it was the state of the (PDP-11 derived) art, while the 80386 went VAX much faster than DEC did: easy, when you have a mature design to copy from.

> But the 386 success was it could run Windows 3.x properly enough - and that also meant the death of DOS applications.

Neither is literally true. Windows 3 used 16-bit code for applications; 32-bit application support essentially only came with Windows 95. So in a world of pure Windows 3 apps, the 80286 could have flourished for much longer.

And actually the main benefit of the 80386 was the fact that you could run those DOS applications, even together with Windows 3 apps: so it extended the usability of DOS apps for many years, whereas Windows 3 and DOS co-existence on an 80286 was very hard, with that single DOS box triggering CPU resets constantly.

None of that is really relevant any more, but since I learned those facts through months of work at the time, they haven't quite faded yet.

abufrejoval

Re: Not the main reason MS "won" the day.

Separation of code and data:

RAM first had to be invented. Early computers would use hard-coded external sequencing logic and had to be reconfigured to run a different computation. Reference or conditional data and intermediate results would sit in some kind of register or actually circulate in delay lines. When scratch-pad storage or RAM finally came about, it was initially all for data, being so expensive and small. It was the EDVAC, designed by Mauchly and Eckert as ENIAC's successor, which changed that once RAM became cheaper than the manpower and time lost to reconfiguration: putting the control flow into RAM as data created software, an idea quickly published by John von Neumann before the two could patent it.

Von Neumann was also the first to note that having code and data in the very same address space could lead to self-modifying code that might "evolve" like a virus: he died before fixing the problem, and was largely considered the only human being bright enough to do so.

It led to two schools: the Harvard school wanted to separate code and data (mostly to halve the memory bottleneck), while the Princeton architecture combined them for extra flexibility, which includes the viral risks.

Note that Harvard architectures were rarely pure, e.g. in RISC machines, but "modified" to allow near-Princeton ease for operating systems while retaining much of the memory-bottleneck relief.

As to whether those Charlie Chaplin ads made the decisive difference: not for me, and thus I wouldn't speculate that far.

abufrejoval

You're evidently not alone (Re: I'm confused)

The choice was to use overlapping chunks. Actually, you can address every byte of memory on the 8086/8088 via up to 4096 different segment addresses, adjusting the offset in parallel.
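If you want to see that aliasing for yourself, here is a little sketch in plain C (my own illustration, not anything from the article or from Intel): it simply applies the 8086's fixed segment-to-physical mapping and counts how many segment:offset pairs land on one arbitrary byte.

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: physical = (segment << 4) + offset, wrapped to 20 bits */
    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }

    int main(void)
    {
        uint32_t target = 0x12345;   /* an arbitrary physical address below 1 MB */
        unsigned aliases = 0;

        for (uint32_t seg = 0; seg <= 0xFFFF; seg++) {
            uint32_t base = (uint32_t)seg << 4;
            if (base <= target && target - base <= 0xFFFF &&
                phys((uint16_t)seg, (uint16_t)(target - base)) == target)
                aliases++;
        }
        printf("0x%05lX is reachable via %u segment:offset pairs\n",
               (unsigned long)target, aliases);
        return 0;
    }

The count comes out to 4096 for any byte that sits at least 64K into RAM, simply because segment bases are spaced 16 bytes apart and any base within the 64K below the target will do.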

And no, nobody thought that 4096 different segment/offset combinations were a benefit; it was just that left-shifting the 16-bit segment address by 4 (instead of any other number between 1 and 16) gave enough physical address space with 20 bits (1 Megabyte: "who needs more than 640k?"), while the granularity of 16 bytes (every extra bit of shift would have doubled that) resulted in precious little RAM being "wasted" between segments... these guys had done the 4004 and 8008 and founded the company on selling 1Kbit RAM chips!

In a RAM fully partitioned into separate 64K chunks, that waste could be close to 64K, or what used to be the maximum size of RAM, while the entry-level model of the IBM PC, launched 3 years after the 8086, came with only 16K; adding more than 64K already required going over the add-on bus.

It would also make those chunks full partitions without connecting doors, unless you could remap those chunks via some type of MMU: how do you even write an OS for that?

I know it's not that obvious nor was it much appreciated later, but what they came up with is really rather genius.

abufrejoval

Re: Sorry Liam, in the future you'll have to let me proof read your articles!

The planet may burn because people fail to realize that you can't debate at Twitter length.

This is an article refuting a key assumption of yours.

And a full explanation of why a bit of thinking should have kept you from making the mistake in the first place.

And an addition to the history lessons you like to make, where you clarify reasons that may not be obvious to everyone.

I went out of my way to attempt the same, explain why this approach was used, when most everyone judges it bad or short sighted by skipping over the topic and context.

When you argue from the ground up, that can be long; "rambling" is a value judgement, and together with "incoherent" it lets me conclude you didn't bother to read the article.

If you refuse a comment, it's worth basing that on content, not on perceived form. Get into the actual argument and try again.

Lack of citation is another smoke bomb, because again, while a fully developed argument may not be original, I'm not using somebody else's work to short-circuit the chain of reasoning or to add statistical support to something that's pure logic, the two main reasons for a citation that come to mind.

I'm most surprised you couldn't take the joke...

abufrejoval

Sorry Liam, in the future you'll have to let me proof read your articles!

Liam, you’ve written many thoughtful and entertaining articles that I loved to read. But given the giant blunder you’ve committed in your latest, I have to recommend that you send your drafts to me, to avoid a similar catastrophic repeat.

The problem is this phrase: „Intel delivered that with the 8086, which, rather than being limited to 64 KB, could handle 16 separate segments of 64 KB – for a total of one whole megabyte of memory space.“

What’s even more shocking is the fact, that none of the commenters seem to have caught the blunder yet, risking that this might sink in, sediment into accepted general lore, and one day be regurgitated by AIs as true gospel!

Having been along for the ride since those days of CP/M, I’m obviously facing a diminishing life expectancy myself, and the original technical geniuses at Intel, who designed what you fail to understand, can no longer fight for themselves.

So I’m taking up arms against this fallacy and it won’t be pretty, sorry mate!

The 8086/88 could in fact handle 65536 or 2^16 different segments, because the segment registers were also 16-bit wide. Both would be used together to form effective (or physical) addresses and laid side by side they would have actually formed a 32-bit value, enough for 4 gigabytes of memory space, and without very much change to the initial generation of application software!

So why didn’t they, or why did they do what they did?

As you allude to, memory banking to extend the 64k address space offered by a lot of early computer designs, had been around for a while. Let’s remember that very few machines using 16-bit addressing actually had a full complement of 64KB of RAM! I don’t think CP/M was run on ferrite core memory so the starting point would have been Intel 1103 DRAM at 1 Kb or 1024 bits per chip.

But with the success of those 8/16-bit (ALU/address bus width) microcomputers running CP/M that 64k limit became an issue, just like it had on many other architectures, which like the larger PDP-11 models then resorted to employing MMUs to support segmentation and bank switching to extend the physical address space beyond what applications saw logically.

Various S-100 vendors, the type of machine where CP/M dominated, added RAM cards with usually proprietary bank-switching support when 64KB of RAM got too tight, and Digital Research developed a multi-user companion, MP/M, to abstract bank switching and support the larger RAM amounts that this required.

Note that this mostly meant that you could run several applications with a maximum of 64k each, not that an application could use more than 64KB of RAM... easily.

In fact the first Unix system I installed myself was an SCO Xenix (sorry!) I put on an 8086 box from Siemens with a discrete MMU: it ran Multiplan for Xenix just fine!

Now switching RAM banks in 64k chunks, the granularity you imply, creates a huge OS problem from the very start: you need to load those apps and you need to transfer between applications and the OS. And if you switch those banks under your feet, that's like trying to rearrange stuff between rooms without a connecting door. And those early applications didn't always use all available RAM; they came from modest roots and made do with far less. Your initial MP/M system might only have 128KB of RAM instead of 64, but you'd still want to pack in everything that would fit, not just one application per 64k partition: you'd want to allow physical overlap!

See where your notion of 16 64KB partitions goes terribly off the rails?

Now the early bank-switching logic had to be implemented in the then-standard TTL logic chips, which were only relatively cheap: using smaller banks meant both more chips and bigger tables to manage them. 16KB chunks were a common option, and some designs supported several sizes.

And if this reminds you of EMM and Intel Above Boards, then you’re right on track, because that was a bit of a repeat kludge.

The 8086 designers aimed to onboard the best of discrete MMU design ideas around at the time with the minimum number of gates and the potential for both immediate benefits and future expansion.

The result is obviously a compromise, but I challenge anyone who wants to criticise the result to find a better one given the constraints!

Using a full 16 bits for a segment address made things, dare I say, “orthogonal”, a term not often associated with x86, but fitting here: the 8086 is 16-bit all around otherwise, including the ALU. It's also cheap in terms of transistors within the CPU.

Managing the segments via a fully featured MMU table, which allows for all sorts of mapping, would have been quite another level, both in terms of logic and because the table itself would take up memory and either extra memory cycles or caches, much like a small page table in VAX-class (or 80386) designs. Such an external MMU would also have allowed overcommitting physical RAM via disks, if bus-fault logic and proper exception handling were included: that's what that 8086 Xenix system did or a big PDP-11 would do. But the 8086 itself wasn't designed to compete with those.

So instead the 8086 designers went with a hardwired computed mapping, where the segment address and the 64k offset were simply added together, not mapped via an external table, to get an effective or physical address. It got you quite a few benefits as I’ll explain below.

They could have chosen a shift of 1 bit to get 128KB instead of 64 via effectively 17 address bits. They could have chosen a shift of 2 for 256KB, ...up to perhaps a shift of 8 bits, to get to a whopping 16MB via an effective 24-bit address: please note that we've already excluded a full 16-bit shift, for 32 bits effective, because it closes the doors between segments.

So they looked at each shift value between 1 and 16 and gauged what it delivered in value versus the alternatives.
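To make that trade-off concrete, here is another small C sketch (mine, invented for illustration, not anything Intel published): for each candidate shift it prints the physical address space you would get and the spacing of possible segment bases, which is also the worst-case padding lost per segment.

    #include <stdio.h>

    int main(void)
    {
        for (int shift = 1; shift <= 16; shift++) {
            int           addr_bits   = 16 + shift;     /* 16-bit segment << shift reaches this far */
            unsigned long granularity = 1UL << shift;   /* spacing between possible segment bases   */
            printf("shift %2d: %2d-bit physical space, segment bases every %5lu bytes\n",
                   shift, addr_bits, granularity);
        }
        return 0;
    }

A shift of 4 lands on 20 bits with 16-byte granularity; a shift of 8 would have bought 24 bits (16MB) at the cost of up to 256 wasted bytes per segment; the full 16-bit shift gives 32 bits but turns every segment into a sealed 64K room.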

And here, and in hindsight I'd say only here, they could have picked a better number than the 4-bit shift with 20 bits of effective address that they settled on. I'd love to read the notes on why they didn't go with 8, but perhaps somebody else would like to hazard a guess?

But how could they know that this architecture would run the world of PCs for at least a decade, when they had far more reasonable designs like the 80286 and the 80386 already on the drawing board, which would mimic a big PDP-11/70 or a VAX, the former with full mapping-table support and segment-fault handling, the latter with both a full 32-bit address space and page-based virtual memory?

Where Intel didn’t skimp was in supporting distinct segment registers for all parts of the application that might naturally support disjunct RAM spaces, code and data, but also the stack and something funny and new: a heap. The big advantage was that those segments were implicit and didn’t need to be explicitly “mentioned”, included or quoted within the code: code would use the code segment base, data the data, stack …you get the picture. Then there was even an extra segment, just in case you’d need one, accessible via an override, at the cost of an extra code byte, not a full segment address on top of the offset.

Now you need to remember that the first part of the mission was to run 8-bit 8080 code with a minimum of change. That code at the time typically wasn’t in some high level language, but machine code written with assembly mnemonics and you couldn’t just compile that for another architecture… actually they did support that as much as possible at the time.

But mixing code and data used to be considered proper, even genius, or simply von Neumann; it was also the only original option. And things like recursion, variable-length strings, complex linked data structures, or object-oriented programming weren't done on small computers ...until much later.

Then of course stacks and heaps might crash and burn, while turning data into code caused all kinds of other problems; but your old 64k assembly code with hard-wired data offsets strewn all across would simply be loaded with identical code and data segment addresses, i.e. full overlap, while say a Cobol program would have run just fine with code and data "tight but separate", just as it originally ran on machines of about the same size a decade earlier.

Of course, people did funny things with equivalence statements in Fortran (variant records in Pascal) or even tried recursion via a proxy function, because only direct recursion was detected by the compiler, which would then abort code generation… Yup, that was me, on a PDP-11: a quick way to cause a segment fault, because the PDP-11 could detect segment overruns.

For those with discipline, or using a safe language like Cobol, separating code and data segments immediately provided extra room, while very few stacks would ever grow large without recursion.

And then, if that wasn’t enough, there would always be the possibility to use long addresses for code, or data, or both, to take advantage of every last bit of RAM your PC might have: still better than being tied to 64KB on a machine that increasingly had more while you didn’t share it like those MP/M machines.

Getting the original 8086 segmented architecture wrong, which ruled PCs long past the day the 80386 launched, is hard to forgive.

Sorry, Liam, your site admin can provide you with my e-mail address!

Windows Update is a torture chamber for seldom-used PCs

abufrejoval

Can't say most Linux are all that different

I did an inventory just the other day: I keep 28 personal computers in operation (not counting mobiles and tablets), 15 Windows, 17 Linux, 1 BSD installations; you might have already noted that some must be dual boot.

I'm not counting the VMs, some nested, of which there are many more, a good clutch running clusters on clusters 24x7, others on-demand, yet others, rarely.

Patch day is a busy day, but of course I never really have to wait for any system while patching is going on: I just have to monitor things and use a set of cascaded KVM to switch to another screen for work or fun.

But I can't say that it makes much of a difference which OS they run, they all need regular patching.

And if a release update is required, that typically also requires getting to the latest patch stage, before doing the release update, which in the case of a Proxmox cluster (also Ceph cluster), can be quite involved with Ceph versions/upgrades not being closely aligned with Debian or Proxmox base.

Supposedly read-only OS Linux variants are different, but that's mostly how they operate inside. It doesn't remove the update burden, nor can I say that it's a lot faster. Again, since I'm not exactly twiddling my thumbs while things are happening, "50%" faster might escape my attention. It's also not necessarily a single pass, but might require base OS and then apps/snaps updates, with Steam then updating next. And application updates like Firefox, LibreOffice, VLC and NextCloud are all over the place: some do their own in-place updates, some come as OS-managed packages, some as snap or flatpak, others may be Docker images from one source or another. I see mostly fragmentation, no clear winner: all cost time. GPU vendors may choose to reskin entire portions of the OS, even with a proprietary UI style, some terrible 'default dark' mode and an assortment of proprietary hot-key combinations, which give me flashbacks to DOS TSR (terminate and stay resident) applications.

Perhaps Steam in all this is the most painless in terms of how it's getting done. But it's also the worst offender in terms of frequency and volume: I guess they'd be long out of business if it didn't at least usually work flawlessly (or recover automagically).

Sure, I remember the days when I did a sysgen on my Unix perhaps twice a year, to incorporate the latest patches, which were never about viruses or trojans, just stability or new hardware back in the "golden old days".

The Internet has been turned into a weapon, just like every other instrument has been weaponized, so that tribute needs to be paid.

But I sure wish John von Neumann had put a bit more Harvard into the Princeton architecture before he pushed it to fame; he thought the virus potential more exciting than dangerous and unfortunately didn't live long enough to fix his mistake.

Scammers try to SIM-swap Dubai citizens hours after Iranian missile strikes

abufrejoval

Most everyone has one... (Re: Arschlöcher)

budholes are very evenly distributed, among sexes, races, classes, nationalities... just turn around and start counting!

So would an Iranian hacker feel justified in scamming people who've allowed their guests to war on them from across the gulf?

If I'd put myself into his boots or sandals, I could only answer "yes".

Is there any sense to talk about legitimacy in this context?

Might makes right is the new normal of world politics.

(basically, if you hold the gun, you win the argument)

That's the tiny little problem: ignore international law and it eventually costs you civilization, bringing that loss right into arm's reach.

But you can rest assured, all money, including crypto, is worthless, once that happens: nothing lost, nothing gained by then.

Memory is running out, and so are excuses for software bloat

abufrejoval

DOS "smal" and "large" memory models

>I remember competing in the BCS's annual programming competition back then, too: each team was given a PC with a copy of Quick-C and you had to keep it small and not bust the "small" memory model which if memory serves was something like 640KB. Taught you to think about the algorithm and not just throw a highly recursive, clunky monster at it and hope, because the judges (of which I later became one) would see that coming and would have test cases that would make the code bust the RAM limit. I once set a question (Sudoku solver) which did that, for that precise reason - if you brute-forced it, you'd blow up, so you had to write a vaguely clever algorithm.

Your bio memory is unfortunately befuddled by early PC memory abstractions...

The 8086 or "DOS" memory model took the 8008/8080 or "8-bit" memory model, which generally consisted of 8-bit registers and ALUs plus a 16-bit effective memory address space, formed either by combining two 8-bit registers as base and offset (e.g. 6502) or by including a few 16-bit registers in a generally 8-bit architecture (8008/8080/Z-80 and lots of others). The 8086 widened that to 16-bit registers (which could still be used in an 8-bit manner, e.g. "AX" (16-bit) also being usable as "AL" (lower 8 bits) and "AH" (upper 8 bits)) and extended it via a "segmentation" approach.

A segment was mostly the 64KB area which a 16-bit offset could address natively, typically 'implied' or translated behind the back via an MMU (memory management unit). E.g. PDP-11 machines would have code, data, stack and heap segments that could be mapped to distinct physical memory spaces, e.g. for each process, allowing different processes to run both with physical memory isolation and using far more than just a single 16-bit or 64KB physical address space.

The 8086/8088 only went half-way, not using a full-function MMU with flexible mapping and segment faults for transparent on-the-fly translation and virtualization, but shifting segment addresses four bits to the left and then adding the 16-bit offset on top. That gave it an effective 20-bit (1 MByte) address space with a fixed physical mapping, where different segments might actually overlap to a large degree in the same physical address space: the idea was that lots of programs wouldn't actually need a full 64KB code, data, stack or heap segment, so not spacing them 64KB apart via a full 16-bit shift avoided excessive RAM waste when typical segments were smaller.

The only reason that 1024KB address space became 640KB effectively on PCs was the fact that the upper 384KB were mapped to I/O by IBM's PC designers: they just couldn't imagine that the Apple ][ replacement they were designing might actually ever use the full 20-bit address range, which today has reached 64-bit (while IBM's "proper" single address space architecture, the i-series or AS/400 went from 48 to 128 bit during that time...).

The overhead of using a real MMU, including exception handling, was pretty near minimal, even in those early days, comparable to what the IBM PC-AT then used to implement 24-bit DMA for floppy operations; but that's just one of those many personal computing "what-ifs" that are so interesting to lose yourself in, ex post.

A "small memory model" program would then be basically an "8-bit" application, perhaps using 16-bit registers and arithmetic, but only 16-bit addresses/offsets for everything, code, data, stack and heap.

The benefit was tight/native "single action" 16-bit addresses being used throughout, even if very few instructions actually completed in a single clock cycle in those early and pre-RISC days.

If 64k wasn't enough, programmers would have to use a "large memory" model, which implied that you'd have to use "DWORD" addresses, a full 16-bit segment plus 16-bit offset, 32 bits in total, even if on an 8086 those 32 bits of address only yielded 20 bits of physical address space.

The overhead was significant, but if your code or your data just would no longer fit into a 16-bit address space, you'd at least be able to make do. Compilers of those days would actually support choosing between "small" and "large" for each domain, e.g. you'd be able to combine a "small code" application with a "large data" model, vice versa, or combine both.
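To illustrate what that looked like, here is a hedged sketch in portable C (real DOS compilers used non-standard 'near', 'far' and 'huge' keywords rather than an explicit struct, so take this purely as a model): a large-model pointer was a 16-bit segment plus a 16-bit offset, two different pairs could name the same byte, and 'huge' pointer arithmetic normalized them so comparisons behaved.

    #include <stdint.h>
    #include <stdio.h>

    struct far_ptr {          /* what a large-model pointer carried around */
        uint16_t off;         /* 16-bit offset within the segment          */
        uint16_t seg;         /* 16-bit segment (paragraph) base address   */
    };

    /* the 20-bit physical address an 8086 would actually put on the bus */
    static uint32_t linear(struct far_ptr p)
    {
        return (((uint32_t)p.seg << 4) + p.off) & 0xFFFFF;
    }

    /* canonical ("huge") form: keep the offset below 16, so equal addresses
       compare equal and pointer arithmetic can carry into the segment part */
    static struct far_ptr normalize(struct far_ptr p)
    {
        uint32_t lin = linear(p);
        struct far_ptr n = { (uint16_t)(lin & 0xF), (uint16_t)(lin >> 4) };
        return n;
    }

    int main(void)
    {
        struct far_ptr a = { 0x0100, 0x1234 };   /* 1234:0100 -> 0x12440 */
        struct far_ptr b = { 0x0440, 0x1200 };   /* 1200:0440 -> 0x12440 */
        struct far_ptr c = normalize(a);

        printf("same byte: %s\n", linear(a) == linear(b) ? "yes" : "no");
        printf("canonical form: %04X:%04X\n", (unsigned)c.seg, (unsigned)c.off);
        return 0;
    }

That normalization step is exactly the kind of hidden cost that made the large and huge models so much slower than the small one.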

I don't think that "large stack" applications were supported, and I'm not sure about segmented heaps either.

Needless to say, it was a mess, especially once applications and operating systems needed to support both, 16-bit relative addresses and 32-bit DWORD parameters in calling conventions, especially with so few registers to use in case of x86. But in those days it was considered a privilege to be able to somehow compute at all: everything was better than a human computer, or having to resort to pencils and paper, or having to wait for a time-sharing slot.

Recursion was great for transitioning from extremely hardware-oriented early code to mathematical abstractions, but it meant that a lot of critical data structures wound up on a stack that would then take at most 64KB of RAM; actually heap and stack were typically forced into a single segment, used from the bottom and the top respectively, only to crash terribly once they met, if "non-typical" input data led them onto such a collision course...
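Purely as an illustration (the numbers are invented, not measurements): this is roughly what that shared segment amounted to, with the heap creeping up from the bottom and the stack creeping down from the top of the same 64K until the two meet.

    #include <stdio.h>

    #define SEGMENT_SIZE 0x10000UL   /* one 64K segment holding both heap and stack */

    int main(void)
    {
        unsigned long heap_top  = 0x2000;        /* heap grows upward from here   */
        unsigned long stack_ptr = SEGMENT_SIZE;  /* stack grows downward from top */
        int depth = 0;

        while (heap_top < stack_ptr) {           /* nothing enforces a safety gap */
            heap_top  += 512;                    /* each nested call allocates... */
            stack_ptr -= 256;                    /* ...and pushes a stack frame   */
            depth++;
        }
        printf("heap and stack collide after %d nested calls\n", depth);
        return 0;
    }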

The 80286 protected mode implemented the full "PDP-11"-class memory abstraction and eliminated the fixed mapping of segment addresses (via the 4-bit left shift), replacing it with a full MMU and an exception handling mechanism to implement physical memory overcommit and on-demand swapping of memory segments. The physical memory space was extended to 24 bits, while DWORD pointers still consumed 32 bits, and registers mostly remained 16-bit.

Since VAX-like abstractions with 32-bit registers, offsets and 4k page granularity followed only a few years later via the 80386, the "PDP-11"-like memory model on the 80286 never really took off, which turned out to be a great thing: virtual 8086 and DOS was bad enough already.

Lowercase leaving you cold? Introducing Retrocide

abufrejoval

If you had put a render at the start, I wouldn't have read past the headline

I agree that monospace does have its benefits in programming and on consoles, been there 45 years now.

And good monospace is hard to find, so you got me to not just read your prose: I disliked that I actually had to click extra to finally reach a sample...

And what I saw there is just terrible. What makes it worse is the demo page, which is in dark mode, and that probably explains why I dislike it with a vengeance now.

I started computing in the dark ages, phosphors lighting upon a dark background, not the dark ink on light vellum that enabled the enlightenment.

It was in 1980 with a) an IBM 3270, the best terminal ever made, and b) a Tandy RadioShack TRS-80, with one of the worst displays and keyboards in computer history.

Both used CRT tubes, and both formed letters from pixels, but the visual difference was staggering: for the 5x7-pixel letters on the TRS-80, every pixel was clearly distinguishable, even if it was a wavery blueish-white cloud and the resulting letter hard to recognise on a truly miserable b/w flickery TV screen, while the letters on the 3270 seemed formed from what, with today's insights, I'd call pure vector-drawn shapes, with obvious optimizations for monospace readability, punched in a bright but never searing green on deep black, no discernible pixel structure and rock solid on the screen without any flicker.

Incidentally the 3270 probably shared a somewhat similarly slow phosphor with the original IBM PC's monochrome display, but since the 3270 was a block-mode device that practically never scrolled any content (update in place or paging was typical), it made the screen updates even better, as things tended to transition softly. On the PC and all other line-mode terminals, where scrolling was much more common, slow phosphors resulted in streaks when scrolling and eye fatigue from trying to focus on a blur.

Anyhow, for the last 45 years I've been happy with pixel densities so high, screens so vast, the richest imaginable colors on displays with ample contrast but also fine nuances, where anything you can put on paper seems coarse or dull even at the highest print resolutions: and the stuff on paper doesn't even move or change so fast the blur is from your mind, not the medium!

Why anyone would want to go back to letters that aren't the best the Romans could hew into the finest marble for legibility across centuries and millennia totally escapes me. The colors they originally put on their sculptures and sculptured letters have worn off, but the shapes at least persist, including the serifs that optimized readability for each and every letter in its own way.

We seem to regress from the enlightenment in so many ways, but Retrocide quite literally kills the past, erasing the progress made since, just as the name implies.

Struggling to heat your home? How about 500 Raspberry Pi units?

abufrejoval

Re: Most stupid use of RPs from A to Z

I just can't even see this launching or lasting longer than a blink...

One underlying assumption is that the compute power from those 500 PIs can actually be sold in a meaningful manner: I very much doubt it can, because it's neither competitive nor attractive on the ISP side of things for those "business workload customers": PIs can't compete in absolute performance or performance/Watt with current server kit.

Now, you might not care if Thermify can't sell the compute and UKPN doesn't get paid for the electricity, but just how long are they going to supply those 500 PIs in your house with power, when Thermify goes belly up?

That leaves you out in the cold because if electric heating was economically viable, you could have just gotten a heating coil installed: much cheaper and less e-waste than PIs.

abufrejoval

Most stupid use of RPs from A to Z

One of the reasons I like The Register so much is that common sense still seems to be a thing there.

But with articles like this one, I wonder...

The PIs are terrible in terms of computing efficiency, units of compute per Watt at peak, but also rather wasteful on anything down to idle.

Sure an ECL VAX or Cray-1 might be still worse (and slower), but the main design goal of PIs is putting a reasonable amount of computing power into some spot for a minimal price.

Once you start aggregating them, e.g. to scale out computing workloads, their inefficiency and the overhead of any interconnect very quickly kills their economy and you'd be better off putting those workloads into containers or even VMs running on something both more powerful and more efficient per Watt.

I can't think of a single application, where 500 PIs in a single place might be able to do a useful workload, that couldn't be done better by just about anything else built from PC hardware, up to a single server, or perhaps three, if you want some fault resilience as well.

Sure, you might not be able to heat a home with those, but that's sort of the point: that heat is pure waste!

And unfortunately neither the PIs nor the potentially hazardous cooling liquid just decompose peacefully at the end of their life cycle; the cost of disposing of all this might ruin whoever that job falls to, after the provider has left the scene (likely insolvent).

Putting datacenter heat output to secondary use may be a good idea when carefully planned for the full life cycle of both the producing and the consuming side.

But datacenters and housing rarely align in terms of life cycle times.

The one thing SME IT can do that the big guys can’t: Change the world

abufrejoval

Text is better than graphics, graphics are better than video. Text is magic, ...

That could have come from me.

And then I talked and wrote to my children, only to notice that they didn't understand, mostly because they got tired after the opening salvo.

It had me realize that things are much worse, or perhaps just more different, than I had thought.

Yet, for most things, my kids are not only functioning adults these days, they are even cultured, charming, smart and very much philosophers, like me.

Even if they don't read or write.

What's a bit jarring is that they know many of the ancient concepts by different names (English instead of Greek or Latin) and associate them with some Youtuber instead of the ancient philosophers who first formulated those concepts in Greek or Latin.

We are essentially speaking a different language, even if the concepts are largely shared.

Text works for you and me, but that's because we are of that text generation.

What's wrong is to assume that it's best.

Yes, I'd agree it's great for abstraction, because most can't write as fast as I can type, so they'd rather spend a bit of extra time on abstracting while massaging their fingers, resharpening their pens and cleaning up the ink blots.

But humanity has enjoyed abstraction and text only for a very short period of its existence, and most of the social code or culture was actually transmitted in tales, or indeed in dance and music.

Robin Dunbar is quite convincing when he argues that music and dance came from laughter and actually precede language by eons.

So videos may be far more brain compatible than text ever was.

And once AI has grown smart enough, every video can be reduced to or reproduced via a prompt, a rather good compression and small representation after all.

Anyhow, I generally like your posts a lot, this one was so much of a rant I felt tempted to throw something at you.

P.S. My kids can do videos, really cool stuff and as if it was nothing!

Torvalds' typing taste test touches tactile tragedy

abufrejoval

XT vs AT keyboards (Re: The best keyboard...)

With IBM the switch occurred with the PC-AT's Enhanced Keyboard in 1986. Before, it was 10 function keys on the left, an integrated keypad on the right that would toggle between edit and numbers, the control key left of A, no Alt-GR and a DIN connector with a very different electrical protocol than the AT variants, so mainboards had to support both during a transition period, mostly 80386 variants.

Obviously I preferred the XT variant, because that's how I started, but also because nearly all of the early editors were written before keypads came around so switching between text entry and navigation via the control-key was rather essential. I loved it, too, because it meant that I didn't have to take my fingers off the home keys and the eyes off the screen, which would otherwise waste precious brain cycles to re-orient on the problem to solve rather than how to get the code entered.

The biggest problem arose when I used computers more to enter text, because text was often enough written in German, which has the three öäü umlauts and the sz-ligature ß, all of which occur with fair regularity in written text. Because their positions on German keyboards are used for equally common special symbols in programming, those symbols had to be moved elsewhere.

During XT-times, the only viable solution was to switch keyboard layouts between US-ASCII and a German variant and while I was very comfortable with using both, I was never that comfortable with the complex keyboard acrobatics used for the switching itself, worst when you had to start documenting code in German.

The introduction of Alt-GR with the AT keyboards created a middle ground where switching was no longer required, but a lot of symbols like brackets and curly braces required a bit of Alt-GR acrobatics, which I still managed without straying too far from the home keys for a blind resume.

Many editors also started to support the keypad, but that had seen some very unfortunate changes with the AT, mostly a split between cursor and numbers (programmers don't need numbers) and the move of the escape key away from the edit-pad, where it truly belonged.

As I added more human languages to my portfolio, the need to switch keyboard layouts reared its ugly head again. Spanish wasn't so bad, because it mostly kept the normal keys in place and it was mostly a matter of swapping umlauts for some new diacritical marks like ñ or the inverted ¿question? or ¡shout!

With French and AZERTY that would have been a Dvorak scale change, which I mostly avoided because speaking it turned out good enough, most of the written stuff was accepted in good old English.

I never cared that much about function keys, by the time I had found them, used them and re-homed my fingers, I could have written a whole set of commands. I just don't use them, unless I really have to.

The very strong preference for using keys without looking down came because I started using keyboards as a pianist, and on Steinways, too.

Also, I have enjoyed long fingers, which were the envy of many of my piano teachers, as pianos were designed by male chauvinist pigs with larger hands.

And then those fingers automatically line up properly over the home row with the elbows on both sides of my belly or resting on a chair: they are pre-curved to suit normal keyboards, ergonomy is built-in.

And on top I took a typing class at high school as an exchange student in the US. It gave me a head start in IT that I have always been immensely grateful for.

I salvaged several original XT and AT keyboards from idiots who just didn't know what they were throwing away, all including proper IBM labels.

But I did actually discard the XT variants, both because they would no longer work (easily) with modern computers (where PS/2 to USB remains easy to this day) and because their original layout advantages eventually became more of a liability: I just wasn't coding enough any more to make that worthwhile.

But I've even switched to a Cherry for daily use now, because I game more: Original AT keyboards can't support more than 2 pressed normal keys, which is never an issue in typing, but a very basic need in games: it became a matter of life and death!

I preserve two AT originals for my memoires, both because the fingers start deteriorating and because the feel is just so much better than anything else.

Too bad there is just no way to fit those into an ultrabook...

How to stay on Windows 10 instead of installing Linux

abufrejoval

Re: ¿Why not use Windows Server 2022 as an alternative to Win10's demise?

>I am a native spanish speaker. I am well aware that english does not use '¿' or '¡' . I use them anyway, so not go into the reverse problem and forget to use them when writing in spanish.

>

>Some of my (native speaking) english teachers LOVED '¿' and '¡' because, when you were reading out loud (say, in a play, or in a public reading of your books) you knew from the start the needed intonation of the sentence, and could not be taken off guard by a non-common sentence structure.

>I now send you back a question ¿why does the writing style of others in an international technology site triggers you so much?

For me Spanish is one of four languages I speak (German, English, Spanish and French), but also one of three I regularly write in (the incentives for writing in French were too low to try so far).

The prefixed marks are quite necessary in Spanish, because often enough the tone is the only difference between a statement, a question, or an invective, whilst in the other three languages word order would change to give a reader an early warning, and make reading an unknown text out loud no problem.

My mother tongue German is an often infamous exception, where many authors enjoyed putting complex issues into complex phrases, only to flip the entire meaning of a paragraph spanning a page or two with a negation, or by turning it into a question after all.

It's even a style element and used to create suspense, and not really an issue if you read to yourself. But it risks wrong-footing someone who reads such a passage aloud without knowledge of the text, because the tone does normally change also with the other three languages for questions and shouts.

As a parent who utterly enjoyed reading to my children, I've often found this rather nasty, because I then would have to backtrack, recalibrate and re-read, and would have wished for the Spanish inverted marks to give me early warning and the ability to better maintain the suspense of the story.

I switch keyboard layouts for the languages I write in, and quite probably the horrible AZERTY is the reason I don't actually write in French.... while the computer I type this on actually has French AZERTY labels on it.

The single most useful thing I learned in the US was touch typing and since then, I've never taken my eyes off the screen again when typing.

abufrejoval


>...the IoT edition only offered US English as the system language.

>

>Oh, the same as El Reg then.

One of the things that are really rather nice about Windows is multi-language support. Adding a language, mixing languages, letting different users use their own preferred languages, all of that is really rather easy.

So yes, the IoT variants install in US English, but adding your preferred language and variants is very easy and no different from all other variants of Windows.

These xx-language editions basically only set the installer and initial preferences; baked-in localization was eliminated a very long time ago, probably with Windows 2000 but perhaps even earlier.

What's perhaps a little tricky is things like the removability of Edge. I was surprised to see that as an option on German Windows editions once the legal mandate came through, but seemingly those IoT editions won't offer it, even if you configure the "jurisdiction" of the OS to be within the EU.

Of course, I've created my own LTSC ISOs which strip M$ nasties before they ever have a chance to land on the systems.

abufrejoval

Re: don't include the Windows Store or any "modern" apps

>You know what, I looked into this, and you _can_ put the LTSC installer into a half-Nelson and force it to reinstall over the top and preserve existing apps and settings:

>

>https://gravesoft.dev/in-place_repair_upgrade

>

>It looks a bit worrying and I am not confident enough in it to recommend it, but it's apparently possible...

That looks like a really good pointer, I'll give this a try in a VM first.

I got a few Windows 11 24H2 Enterprise installs that I'd like to switch to LTSC IoT to keep them from updating beyond 24H2 and stable until 2036.

With 25H1 around the corner it's high time I did something about them and if I can save the effort of a re-install that's great!

EU OS drafts a locked-down Linux blueprint for Eurocrats

abufrejoval

I applaud the initiative...

But perhaps more important than just having a distro is to be able to maintain all the important bits, including the compilers, browsers and applications as forks, should the rifts spread wider.

As to fat stateful desktop vs. dumb enough for everybody: binary choices are rarely good.

As usual I look to Downton Abbey for inspiration when it comes to servants keeping your house with loyalty and discretion (instead of bloodletting and spying).

And in most mansions people adapted rather easily to someone else's, even on a visit, because a lot of activities were tied to specific rooms, which might look different in every mansion but had the same basic services and interfaces: dining room, breakfast room, drawing room, nursery, servants' hall etc., you get the drift.

So instead of swishing between desktops, one should swish between rooms. Their inside looks might combine vendor and user elements, but their layout within the mansion could be purely according to user preferences to profit from spatial memory.

Quite a few of us organize our private little cyberspace much like a home. Except rent is much lower and space not really constrained nor necessarily fixed size.

abufrejoval

Re: Why even have a local disk?

I liked my Sparcstations, too. But with 1TB at 2x SATA SSD performance on a Kingston Data Traveller USB stick, I just stick to USB as a boot medium for whatever OS I want to run on the machine in front of me. Windows 11 IoT LTSC to go (without TPM on anything since Nehalem), Server 2025 or any brand of Linux... no local disk required, no carrying case or power supply.

If only they weren't so easy to lose...

abufrejoval

Re: Why even have a local disk?

Transient Quicksilver. Just what you need to run Sparc on x86, Power on x86 or z/Arch on whatever.

Locked tight in a safe with "poison" in fat big letters all over it.

Fear of the unknown keeps Broadcom's VMware herd captive. Don't be cowed

abufrejoval

Could you perchance list the alternatives you allude to?

The enterprise VM product market has become quite thin over the last few years; I wonder if you had noticed.

There is Nutanix, I guess, never tried it when it was still tied to hardware. But TIBCO has never been known for being cheap, even before they needed to pay for mergers.

Oracle dropped their Xen stuff, went with oVirt/RHV, only to have Red Hat drop both: what's left will fold any day now, because Oracle sure won't take over development of a stack full of RH acquisitions while RH itself tries to survive on painting Kubernetes red and getting CentOS fans red-faced.

Citrix-Xen and Xcp-ng have more than 20 years of extremely popular software liabilities like OCaml and core engineers retiring while the market outlook remains extremely cloudy.

Proxmox's main advantage is that it's dumb: no agent for automation means much less code to maintain or refactor. I love it for easy minimal 3-node clusters with Ceph, but I wouldn't want to run 100 servers with 10-50x the VMs on it: it's just not in the same league and still just KVM and LXC with proprietary API handles.

Did I miss anything?

Ah yes, TrueNAS can run a VM via KVM now, after moving to Linux. Nothing cluster-like in that crowd; HA means dual-ported storage with them.

Even in terms of hypervisors quite a few have gone; even VMware Workstation is mostly just a GUI these days, much like VirtualBox, for KVM or Hyper-V.

LibreOffice still kicking at 40, now with browser tricks and real-time collab

abufrejoval

Marco Börries started it at 16, on the cheap, selling code he had not written

I ordered Turbo Pascal 1.0 from Borland the minute I saw the ad in BYTE: $49 for a compiler including a WordStar-compatible editor was just too good a value to pass up, and my Apple ][ clone (with Z-80 SoftCard and Videx 80-column card for the "professional" stuff) had cost way more than an RTX 5090 would in today's money.

If you weren't programming 40 years ago, you just can't imagine the productivity boost it provided at an age, when the edit/compile/debug cycle was measured in coffee cups, not milliseconds: ever wondered why BASIC was so popular?

Anyhow, only two years later Borland offered a functional equivalent of their editor as Pascal source code in a package called Turbo Editor Toolbox, at a similar price, I believe.

That gave you a WordStar equivalent editor you could change and extend any way you wanted, without any constraints as to redistribution of the results: basically even less restrictive than a BSD license AFAIK.

And that's exactly what the first release of StarWriter was: a simple compile of the Turbo Editor Toolbox, sold on diskettes with a StarWriter label at, I believe, more than the Toolbox itself would cost for the full source code.

I know, because it was bug-for-bug compatible: it had exactly the same annoying little differences from the "real" WordStar (a $500 product) the compiled editor had, which kept me from using the Editor Toolbox myself, instead of the real WordStar or the Turbo Pascal internal editor, both of which didn't have those annoying quirks (around end-of-line handling, as far as I remember).

Today that type of behavior is more likely to result in public adoration than what it deserves, and I distinctly remember sharing none of the Wunderkind reverence the press gave Marco Börries because he was only 16 years old: they didn't know he had written none of what he sold.

He did eventually invest some of the money he made as a cheap rip-off artist that way into completely refactored variants of StarDivision's office suite, but every version I tried always fell short of the "originals" it was supposed to be compatible with: every existing document I loaded was somehow off or mangled, so the evaluations typically stopped in less time than it took to install them.

But with 365 snooping every keystroke and gesture to feed the AI monster Microsoft believes their own, there is little choice or alternative: cheap copycat turned into salvation, who would have thought!

EU plans to 'mobilize' €200B to invest in AI to catch up with US and China

abufrejoval

Those billions are more urgently spent to compensate Trump's treachery

Let's be honest: most money spent on AI would

a) do little to benefit the taxpayers it was taken from

b) go to feed a former ally, who's broken all vows of fealty.

Sounds as if it was proposed by OpenAI, another self-serving "intelligence".

BTW: when I asked DeepSeek (run locally, fresh start) what the equivalent of Paris was in Germany, 70% of the answer was waxing on about how China was all about world peace... when I asked why it had mentioned China, it was at a bit of a loss to explain its bias...

Not quite after a fresh start, when I asked who Marie-Antoinette's mother was (Maria Theresa, empress of the Holy Roman Empire and archduchess of Austria), it contended that she had "no biological mother", and that she somehow died in obscurity decades after being executed...

It's much easier to see how AI would make mistakes in life-and-death situations than how it is going to benefit humans.

How the OS/2 flop went on to shape modern software

abufrejoval

Re: Not so

Thanks for your illuminating response!

I guess we all tend to generalize our individual perspective a bit and in the case of OS/2 it's probably safe to say that it died from more than one wound.

abufrejoval

It does, actually. Just requires the right variant

The best proof of Microsoft's lame excuses about old hardware is produced by Microsoft itself.

It's called Windows 11 IoT Enterprise LTSC and does away with nearly all restrictions, except 64-bit ISA and POPCNT support.

I'm running it on anything Sandy Bridge and up, or simply on anything that I also used for Windows 10.

No TPM (unless it's a travel laptop and has one), no HVCI (I run VMware Workstation as type 2 hypervisor), no OneDrive (not stupid), no Co-Pilot (not that stupid), no Edge (that would be *really* stupid) nor many other "improvements".

It was released in October 2024 and comes with support until 2034.

And to deploy I simply take a minimal install I keep current on a Windows to Go USB stick with all my applications and all the various drivers for older and newer hardware and put that on the target's boot storage, MAKs and ISOs came with MSDN and remove all activation hassles.

After perhaps a reboot or even two to reconfigure the hardware it's good for longer than the hardware will likely still last, since some of it is already more than 10 years old.

And I find it somewhat embarrassing that it's easier to transplant than most Linux variants and across a vast range of systems ranging from Atoms and small laptops to powerful mobile or tower workstations with all sorts of storage, NICs, integrated or discrete GPUs.

And if a brand new laptop comes with some "OEM enhanced" pre-built image? I just plaster it with the live image from the stick, because OEMs are just badly imitating the abused notion which Microsoft has copied from the Fruity Cult: that they own your personal hardware including your data.

Windows Server 2025 works pretty much the same, btw. I'm running the Datacenter edition "to Go" on a nice Kingston DataTraveler 1TB USB 3.2 stick that isn't quite NVMe, but will do twice SATA speeds on matching hardware. Actually, Windows Server is mostly a PoC for me, because it's a bit rough on AMD desktop hardware due to AMD's penny pinching and Microsoft charging extra for server signatures.

Every Windows 11 release has installed perfectly fine on VMs running on much older hardware, including with device pass-through (e.g. GPUs for CUDA or gaming) on KVM/Proxmox/oVirt: all those blocking checks are only performed on physical hardware by SETUP.EXE.

And even if Windows To Go no longer officially exists, Rufus will help you out for any edition Microsoft produces.

And no, I cannot imagine Microsoft ever blocking security updates to LTSC IoT editions based on hardware generation. Application vendors are the far bigger risk to long-term viability: some games now refuse to run without a TPM (could be inconvenient) and Facebook might be next... no problem for me, except when you're forced to use them to do your tax return next year.

abufrejoval

BIOS and HAL (Re: The ghost of Intergalactic Digital past)

Sorry but CP/M's BIOS was no HAL and HAL wasn't particularly novel or powerful.

IBM's 360 architecture (by Gene Amdahl), which allowed a single instruction set to span a large range of machines that differed significantly in capabilities and physical architecture, was much more forward-looking. It basically defined a virtual instruction set, some of which even the smaller machines could execute in hardware, while the more complex instructions (e.g. floating point) would be emulated in microcode, fully transparent even to the OS.

CP/M had to run on S-100 machines, where few ever had the same hardware, so a BIOS had to be written (or adapted) for each machine, much like run-time libraries in the 1950s.

And the HAL was Microsoft's insurance, both against a multitude of ISAs and against a PC platform which had zero abstractions or support beyond a CP/M-style BIOS in ROM.

I've never investigated the abstraction capabilities of the HAL, but everything in PCs went straight to the quickly evolving metal whenever that made GUIs look better or stuff run faster, which nobody could have anticipated when the HAL was designed.

Congrats on deriving value from a code base that old, but I can't think of INT13 BIOS or INT21 DOS calls as "fancy". They were a primitive replacement for CP/M's CALL 5 (the BDOS entry), made necessary by the 8088's segmented memory and its lack of a proper system call instruction. And they were so incredibly primitive and slow that everyone bypassed them whenever they could.
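Just to illustrate how primitive that interface was, here is a minimal sketch (not from the article or my original code, and assuming a classic Borland-style 16-bit DOS compiler whose dos.h provides union REGS and int86()) of what a DOS call looked like from C: load a function number into a register, fire the software interrupt, done.

    /* Hedged sketch: write one character through DOS INT 21h, function 02h --
     * the kind of call discussed above. Assumes an old 16-bit DOS compiler
     * providing dos.h with union REGS and int86(). */
    #include <dos.h>

    void dos_putchar(char c)
    {
        union REGS r;

        r.h.ah = 0x02;          /* DOS function 02h: write character to stdout */
        r.h.dl = c;             /* the character to print goes in DL */
        int86(0x21, &r, &r);    /* INT 21h is the DOS service entry point */
    }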

Intel itself was so embarrassed by them that it overcompensated with really fancy system call instructions and mechanisms like task and call gates on the 80286, which OS/2 was designed for. But while they only cost a single instruction to call and seemed to offer good process isolation and protection, they were so incredibly slow to execute that Linus had to replace all that code to make his initial OS perform anywhere near BSD386 levels.

And later not even Intel managed to keep track of all the registers that actually needed saving and restoring, which is why hardware task switching was removed from the 64-bit ISA.

abufrejoval

Re: I remember reading Letwin's post

I started my first computer courses in 1980: BASIC on a Tandy TRS-80, Fortran and Cobol on an IBM mainframe.

The 3270 screen for the IBM was just beautiful: 80x25 characters, wonderfully chiseled in bright green on black, and IBM keyboards were like Steinway grand pianos. The TRS-80 was washed-out dots on a bad TV and the keyboard a nightmare.

But BYTE magazine convinced me very quickly that little was more desirable than having your own computer: I've always bought PCs and then tried to turn them into mainframes ever since. With Microport Unix on my 80286 I thought I had gotten close... I always used original IBM keyboards on my cheap PC clones, but put a pilfered IBM metal sticker on the chassis.

My professional life has mostly been about replacing the mainframe. But that has never kept me from admiring the admirable parts. Gene Amdahl's forward-looking 360 architecture was certainly one of them, but as with virtualization you could argue that more was invented *at* IBM, even against their management's wishes, than *by* IBM: TSS vs. VM/370 is one of many such stories.

One IBM architecture which I feel is still undervalued, and much more advanced than even today's mainstream operating systems, is what started as System/38 and became the AS/400.

I've never consciously used one and they weren't exactly personal computers, but as a technical architect I've at least come to admire their forward-looking principles, the single-level store and capability-based addressing, both of which might have saved unimaginable man-years and trillions in IT spending, had they been more affordable or even open source.

Unix was a hack that turned everything into a file, because the PDP it was born on had too short an address bus to support a Multics-like virtual memory system. Its designers were so embarrassed by its success that they developed Plan 9 and Go, just so they'd have something done properly to be remembered for.

And who would want files, when they could have persistent objects, like on Smalltalk machines or at least a database like on AS/400?

But these days I'm reminded ever more of the fact that humans started out as segmented worms and were not designed to sit in front of a computer for a day's work: we might have evolved into this, but the design is anything but optimal for the job, or where do those back pains come from?

abufrejoval

Re: Of course NT 3 was great, it was VMS after all

The automatic versioning of files (and the "purge" command to get rid of older versions) was already present on DEC's PDP-11 machines, or rather their operating systems.

Can't actually speak for RSTS, because I never used that, but RSX-11, where I spent a few years, had it, too. DCL, DEC's variant of a [shell] command language, was quite nice generally, and there was some early cross-pollination to DOS via CP/M, whose programmers evidently were familiar with PDPs, too, since a few commands and even utilities like PIP (peripheral interchange program) were purported to have been inspired by RSTS.

True, the VAX cluster facilities never quite made it to mainstream appeal on Microsoft's Windows, mostly I guess because Wolfpack came at the same time as NT4 let device drivers run at ring 0, obliterating the main security advantage that would have made it feasible: clusters can't help against broken software.

And I don't know if IBM's cluster products were older than VAX clusters, but the latter can only be called "inexpensive" when compared to what IBM keeps charging for mainframes (or Tandem for NonStop).

Ken Olsen eventually led DEC into ruin by trying to emulate ECL mainframes via the VAX 9000, at a time when IBM itself was going CMOS, while on the other hand he tried to conquer the PC market at mini-computer prices via the DEC Rainbow.

I can see him shaking his head at a Raspberry Pi emulating a VAX 9000 (or a Cray X-MP for that matter) faster than the original ever ran.

I was somewhat involved in the HPC-motivated Suprenum project during my Master's thesis, back when even (Bi)CMOS CPUs were unsoldering themselves from their sockets at 60 MHz clocks (the first Intel Pentiums), so I've always retained an interest in scale-out operating systems, which would present a single-image OS made from huge clusters of physical boxes connected via a fabric (e.g. Mosix, by Moshe Bar).

But with currently 256 cores on a single CPU die (or thousands on a GPU) each delivering large SIMD vector results per multi-gigahertz clock cycle, that (scale-out operating system) domain has become somewhat irrelevant, or rather transformed far beyond recognisability and often rather proprietary.

abufrejoval

OS/2 was dead by design, because it was hard-coded to the 80286 segmentation and security model

I remember that period very distinctly, because I had just sunk the equivalent of a used Porsche (or a new Golf) into an IBM PC-AT clone with an EGA graphics adapter, basically two years' savings from freelance programming work while I was studying computer science.

I then wrote my own memory extender so my GEM-based mapping application could use extended memory, while GEM and DOS were obviously tied to x86 real mode. It basically switched the 80286 into protected mode for all logic processing, and then reset it into real mode via a triple fault to do the drawing bits.

It worked, but every PC had its own little differences in how to trigger the reset or do the recovery, because the mechanism might have been IBM intellectual property (IBM used the keyboard controller to toggle the reset line).
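For the curious, the basic dance looked roughly like the sketch below. This is a reconstruction for illustration only, not my original extender code; it assumes a Borland-style 16-bit compiler (dos.h with outportb, MK_FP, FP_OFF, FP_SEG), and the port numbers and CMOS shutdown value are the commonly documented ones, so double-check them against the BIOS you actually target.

    /* Hedged sketch of the PC/AT "reset back to real mode" trick described
     * above: set up a resume address, then let the keyboard controller pulse
     * the CPU reset line to drop the 80286 out of protected mode. */
    #include <dos.h>

    void back_to_real_mode(void (far *resume)(void))
    {
        unsigned far *vec;

        /* Tell the BIOS this is a warm return, not a cold boot: CMOS shutdown
         * byte (index 0x0F) set to 0x0A means "jump via the vector at 0040:0067". */
        outportb(0x70, 0x0F);
        outportb(0x71, 0x0A);

        /* Store where real-mode execution should continue (offset, then segment). */
        vec = (unsigned far *) MK_FP(0x0040, 0x0067);
        vec[0] = FP_OFF(resume);
        vec[1] = FP_SEG(resume);

        /* Command 0xFE to the keyboard controller (port 0x64) pulses the CPU
         * reset line -- the IBM back door mentioned above. */
        outportb(0x64, 0xFE);

        for (;;)
            ;   /* wait for the reset to take effect */
    }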

Anyhow, having worked with a PDP-11 in the form of a DEC Professional 350 and with VAX machines, I was utterly bent on overcoming the CP/M feel of my 80286, and also ran Microport's Unix System V Release 2 on the machine, which included a free Fortran compiler that unfortunately produced pure garbage as code.

It also included a working DOS box, long before OS/2 could deliver that, using the same reset magic I'd exploited for my personal extender. I ran a CP/M emulator inside that, with WordStar in it, just for the kicks of running CP/M on a Unix box!

Then the Compaq 386 came along. I even had one arrive at my doorstep. The dealer I had purchased the 80286 from came to my house, rang the bell and told me he had a 386 for me.

You see, when these machines were the price of a new car, house deliveries up the stairs and setup of the machine were actually part of the service...

Can you imagine just how painful it was to tell him that I had not ordered it? And finding out that in fact my father had ordered it for himself? Including a full 32-bit Unix that actually worked like it would on a VAX?

BTW: that Compaq wasn't slow. Perhaps the ESDI HDD wasn't super quick, but the RAM was 32 bits wide and way faster than anything on my 8MHz 80286. And Unix apps don't typically block on physical disk writes.

Anyhow, finally going on topic here:

OS/2 was an OS tailor-made for the Intel 80286. The 80286 was very similar to a PDP-11 with its discrete MMUs, which kept processes and users apart by allocating their code and data into distinct smallish memory segments (16-bit offset addresses) and protecting them from unwarranted access. Unless your program was permitted access to a memory segment, any attempt to load and use it would result in a segmentation fault via hardware and program termination by the OS exception handler.

The 80286 went a bit further yet and allowed for a full context switch between processes via call and task gates, putting almost the entire logic of a process switch into microcode which could be executed via a single call or jump.
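For those who never had to stare at the 286 manuals: a gate is just an 8-byte descriptor that the CPU interprets in microcode when you make a far call or jump through its selector. A hedged sketch of the layout, with field names of my own choosing and reconstructed from the published 80286 documentation, so verify before relying on it:

    /* Illustrative sketch of an 80286 gate descriptor (8 bytes). */
    #include <stdint.h>

    struct gate_descriptor {
        uint16_t offset;      /* entry offset in the target code segment (unused by task gates) */
        uint16_t selector;    /* target code-segment or TSS selector */
        uint8_t  word_count;  /* low 5 bits: parameter words copied to the new stack (call gates) */
        uint8_t  access;      /* present bit, DPL and type (e.g. call gate vs task gate) */
        uint16_t reserved;    /* must be zero on the 80286 */
    };

A single far CALL through such a selector is what triggers the whole microcoded privilege or task switch, which is exactly the part that later turned out to be so slow.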

That was continued on the 80386 and caused an overexcited young Linus Torvalds to think that writing a Unixoid OS couldn't be all that difficult and would fit on a single page or two of code!

It wasn't until Jochen Liedtke of L4 fame carefully dissected just how horribly slow those intrinsic microcoded operations were that Linux ditched all those Intel shenanigans, gained the performance which enabled its wider adoption, and eventually discarded segmentation altogether with the transition to 64-bit.

The 80286 didn't have that option, nor did OS/2.

Linus grew great by acknowledging, enabling and encouraging others to do better than he did. Perhaps the size of his early mistake burned that lesson in extra strong.

Whatever you say about politics, IBM and their ill-fated Micro Channel machines have very little to do with the fate of OS/2.

It was doomed by being an OS designed around the 80286 and its 64K segments, very similar to a PDP-11 and its various operating systems.

32-bit CPUs and virtual memory made for a completely different OS design and Microsoft clearly understood that it called for a complete restart.

They snatched Dave Cutler to get their hands on one of the best virtual memory operating systems available at the time that wasn't Unix.

And the rest is history.

The so-called 32-bit versions of OS/2 weren't really a 32-bit OS. To my understanding they were a lot like DOS extenders, in that the kernel and many base services mostly remained 16-bit code, but 32-bit apps with virtual memory were allowed.

A re-design of OS/2 for 32 or 64 bits wouldn't have been OS/2, because the segmentation model and its hardware security mechanisms were really at the heart of the OS.

I bought Gordon's OS/2 book when it came out and read it, and it spelt out its tight integration with the 80286 on every page, and thus its doom. I chucked it into the recycling bin decades ago. With some lingering regret, since I had spent my fortune on the wrong box, but boy am I glad I wasn't in Gordon's place and didn't misspend a career!

I had to read the details on the 80286 architecture to make my extender work.

And I remember reading about those task gates and call gates and feeling a pull somewhat similar to what Linus must have felt.

I also remember reading about the Intel 80432. Intel has a penchant for designs that look great on paper.

But by then I had an 80486 running BSD386 and/or various "real" Unix variants, as well as various closed source µ-kernels GMD/Fraunhofer was developing at the time. And my focus was on getting smart TMS34020 based graphics cards to work with X11R4, so I wasn't biting.

I also had access to Unix source code, so why should I settle for something amateur?

After finishing my thesis porting X11R4 to a µ-kernel with a Unix emulator built on Unix source (thus unpublishable), I actually got a job where I was to create a TIGA graphics driver for OS/2 so it could run the PC as an X-terminal. Got the SDK and went diving deeply into OS/2... for a month, after which I was called away to work for my dad's company.

I was glad to go in a way, because even if the technical challenge was interesting and so-called 32-bit variants of OS/2 had emerged by then, the smell of death was too strong.

DOS boxes whetted my appetite for VMs and containers, and I've built my career on crossing borders or merging operating systems of VMS and Unix lineage, with far fewer µ-kernels than I ever thought likely. Nor did scale-out operating systems like Mosix ever really take off, or clusters ever become significant at the OS level, except in niches like NonStop.

OS/2 to me is the 80432 of operating systems: dead on design.

How Intel then crippled the 80386 to not support full 32-bit virtual machines is another story.

As is how Mendel Rosenblum, Diane Greene and team overcame that limitation via the 80386SL/80486SL SMM (system management mode) and code morphing.

Intel wasn't amused, and it's nothing short of ironic how Gelsinger came to head the company that destroyed Intel's CPU business case.

abufrejoval

Of course NT 3 was great, it was VMS after all

Of course WNT was great, but it was hardly a v1 OS.

It was basically a VMS clone (V++=W, M++=N, S++=T), a "re-creation" done by Dave Cutler (and team?) who had been lured into Microsoft from DEC.

As such it had tons of multi-user credentials and a properly designed security model, using the 386's ring isolation to keep the kernel out of harm's way (device drivers) and thus stable... like a VAX.

Of course a CPU-driven pixmap GUI wasn't part of VMS's design, so having to push pixels through security barriers made for unacceptable GUI performance on VGA hardware, especially when you add a 16-bit ISA bus twixt CPU and screen on your typical 386 clone.

I mostly ran the Citrix variant of NT 3.51, further modified by an X-terminal vendor (was it Tektronix?) and on X-terminals, so a lot of the pixel pushing was instead translated to much higher-level X11 line, pixmap and text calls rendered on the client side, which resulted in rather good multi-user performance for office apps. The Citrix ICA variant could use normal 32-bit RAM for rendering, but still just wouldn't scale to higher resolutions (1024x768 or even 1280x800 was becoming popular) at 8-bit color (or deeper).

In Windows NT4, graphics and other device drivers were moved to ring 0, which meant that a badly written printer driver for stuff like ink-jet printers, not written to be thread-safe, would kill a dual-CPU 50-user NT4 terminal server in the blink of an eye: I very distinctly remember seeing this happen (and hunting down the cause). It didn't help that Microsoft had cut Citrix off from access to the NT4 sources, either, unless they gave Microsoft access to their technology.

Ransomware attack forces Brit high school to shut doors

abufrejoval

Love those girls!

I know it's most likely a standard Shutterstock pic, but boy, did these girls give everything to project boredom!

They obviously came well prepared and groomed to their best, but then just imagine their fun at trying to look both the best possible (at looking good) and project the worst possible (boredom/frustration).

I'd say they nailed it! Bravas! Da capo!

And I guess a lot was also in the brain behind the lens: well done, the effort shows and really carries over to the viewer!

GM parks claims that driver location data was given to insurers, pushing up premiums

abufrejoval

Re: Who wouldn't knowingly consent to have their driving assessed by their insurance company?

Well, I guess you need to zoom out a bit to get my point.

People may prefer privacy for some things and under some circumstances.

But people also feel the urge to go in quite the opposite direction and broadcast their virtue.

In both cases they want to influence or manipulate how they are perceived by others because that provides benefits to them.

Social networks got one of their most important boosts from people who wanted to broadcast a projection of themselves to a wider audience and in a more controllable manner: it could be being bigger, smarter, sexier, whatever. Few things wind up as deadly as fanatics trying to outdo each other very publicly on some imagined virtue.

And for some it's just proving to the insurance company that they deserve lower premiums or some other benefit: they are happy to forfeit privacy there to gain an advantage.

And insurance companies are very happy to shift the load to the less obviously virtuous and increase the premiums as a windfall: it's one type of greed or another all around.

Everyone tries to manipulate everybody else for a benefit. And that is behaviour far older than homo sapiens, you can see it in plenty of other species, too.

And sometimes the worst offenders don't get what they deserve, they just get re-elected instead.

abufrejoval

Who wouldn't knowingly consent to have their driving assessed by their insurance company?

Plenty of people with plenty of different motivations...

Humanity is diverse and that includes what others regard as perverse.

Let's tackle the poor bastards first: if you struggle to stay afloat, lowering your insurance premium by *any* means becomes a priority. So if they can demonstrate "safe driving" by going half the (upper) speed limit, they will, regardless of whether that has the opposite effect by inciting others into reckless passing to compensate.

You can see it with the elderly: they know both that their sensory equipment is deteriorating and their reaction time increasing. But quite a lot really depend on driving to participate in life or stay out of assisted living they can neither afford nor tolerate. So they go extra, extra careful to stay out of trouble... regardless of their impact on traffic flow.

And then there are those who try to impose their "virtuosity" on others. You know the type, who will stay in the left lane at 130km/h because that's the recommended maximum speed on the Autobahn and far more green than going full throttle... whilst they didn't take the even more ecological train, either, which runs 300km/h alongside, ...but unfortunately won't stop where you need to go.

They are in permanent "driver's ed" mode and probably expect to be given not just a lower risk rating but essentially an insurance knighthood.

As my utterly corrupt almost-ex-wife used to say: "90% of all virtue is functioning social control", which she used to great effect to cover her misdeeds. Yet without closing the feedback loop in one manner or another, insurance cannot work, as California's wildfires demonstrate in rather fiery fashion these days.

And I guess the logical extension of that is your insurance getting cancelled automatically as your car heads into an unavoidable collision.

A neutral party in the middle seems a necessary solution, but won't come free (money) and is very difficult to maintain free from coercion via AIs or plain old software.

Where does Microsoft's NPU obsession leave Nvidia's AI PC ambitions?

abufrejoval

Re: What is the point?

Cooling!

It's called dark silicon, and it's required to keep temperatures within manageable limits, by leaving cells only partially filled with active transistors or entirely void next to noisy neighbours.

Except they are giving that wasteland a fancy name and sell it extra pricey now.

Ransomware scum who hit Indonesian government apologizes, hands over encryption key

abufrejoval

Re: Criminals with a conscience? I don't buy it!

Perhaps I should have chosen another word like hostilities, but effectively we live in a world of constant smaller undeclared wars and this would have started another, if perhaps only a civil war within Indonesia. And there are far too many non-Chinese who might then want to claim parts of the large Pacific that the PRC would rather conquer diplomatically.

abufrejoval

Re: Criminals with a conscience? I don't buy it!

The Indonesian elite is very Chinese. They expect to be treated like family. And like family they'd retaliate more viscerally if they're not.

Of course, there is little chance of Indonesia invading the PRC in retaliation, but there would be deep and long retaliation for endangering their nation, in every inopportune manner possible, together with anyone who wants to play ally (the enemy of my enemy...)

The PRC is aiming for dominance in the Pacific: turning a nation that claims sovereignty over a vast and strategically important swath of said Pacific from a cousin into an enemy, because some of your backroom scum overdid it, simply doesn't cut it... yet.

Because it's also a demo of PRC power, just in case Indonesia and neighbors might need reminding that there is value in allying yourself with the PRC.

Some cousins are bullies, too.

abufrejoval

Criminals with a conscience? I don't buy it!

Sorry, but I smell a rat here: the only reason these guys backed off was that somebody up their food chain told them to drop it.

If I understood things right, this was a potentially nation crippling attack.

And a nation that is at risk of going under completely, faced with an enemy that only wants money, can't afford to just say no: they will have to negotiate a price for their survival.

So clearly someone in that nation knew this was a government sponsored attack and they had a quiet chat with someone from that sponsoring government about the risks of starting a war.

And that sponsoring government called off their punks, who cannot say no to their puppet masters.

Please, don't just buy into the superficial story!

Kernel tweaks improve Raspberry Pi performance, efficiency

abufrejoval

couldn't agree more on server core and RAM energy savings, can't see it happen, though

I've owned some big Xeon E5 workstations for about 10 years now and watched the HWinfo reports of them with some degree of fascination and dread:

Even 18-core Haswell and 22-core Broadwell CPUs would clock down to tiny single-digit wattages on an idle desktop, and that's with only many of the cores sleeping, not all of them.

But those 128GB of ECC RAM (unbuffered UDIMMs) would never let the memory controller drop below 50 Watts; the DIMMs themselves probably added another couple of Watts each.

On heavy loads the memory controller would report 120 Watts, which was actually more than the CPU itself under an all-core full load (HWinfo never reported more than 110 Watts on a CPU that was officially 150 Watts TDP).

Not only was server RAM the biggest part of the server purchase price, it was also nearly always the biggest energy spender.

And it's not like the RAM was significantly faster than the desktop-equivalent DDR4 (OK, quad channel) or did some other magic.

RAM on mobile chips may be another class of device, but it still manages to retain its content, and do so on less than a Watt for gigabytes: so clearly there is some room for improvement here!

The only problem is that apart from idiots like me who run their own servers, nobody wants energy-proportional compute any more. Once pretty much demanded of the industry by AWS's CTO, they soon corrected course and made sure they ran their servers always near 90% load instead, because energy savings only happened if you failed to make money from them running.

So there we are: the only servers that are any good on idle energy consumption are smartphones and laptops.

Raspberries and other SBCs are terrible power hogs over time, perhaps wasting more energy over their lifetime than a well-designed desktop with much bigger peak power and consumption, but with at least a half-assed understanding of how to save power on unused assets.

No, nobody in their right mind should buy Raspberries for their low power consumption. Any Intel Atom-based NUC is likely to do much better in every which way except being cute. Heck, even Core-based NUCs might be more energy-efficient at idle, but it might take a while until they make up for the higher purchase price.

Server design is completely hyperscaler-driven these days. And Ampere is probably best at explaining why spending any transistor on energy savings in server hardware is total folly... unless you're one of those idiots who still operate their own servers.

RIP: WordPerfect co-founder Bruce Bastian dies at 76

abufrejoval

WordPerfect had me stumble on the first step and never recovered

The first computer I owned was an Apple ][ clone that included all the professional extras like an 80-column card and a Z-80 Softcard to run CP/M.

Word processing was an obvious bonus, especially since my handwriting was terrible and I had learned to touch type in high school.

WordStar was great, mostly because it immediately told you how to get around after launching, giving you a legend of the most important navigation keys and the option to hide/restore the help menu at any time so as not to waste precious 80x24 screen real estate.

Word and Multiplan likewise gave you immediate hints, although they tended to waste the lower lines on menus that wouldn't go away. But it was logical and dense, and Word had inheritance for formatting, which was crucial for consistent documents. Multiplan was also always way more logical than VisiCalc, with its relative and symbolic references in the formula language, and I never felt any temptation to use 1-2-3.

WordPerfect left you with an empty screen after launching. In fact just trying to get out of it without resetting the computer turned out to be difficult: none of the known keystrokes worked (this was long before SAA and there was only one function key labelled "CTL").

Perhaps RTFM would have made all the difference, but with WordStar there was simply no incentive to change, and then the Turbo Pascal built-in editor with its WordStar-compatible commands was the main tool for editing code anyway, and not even just for Pascal.

Function keys only ever arrived with the IBM PC; none of the early computers had them. But to get to them, you'd have to leave your home keys and look at the keyboard to find them, a complete break in the midst of writing that WordStar controls didn't suffer, as long as the Control key was in its proper place. And then they even started to move the function keys from the left to the top, where the chances of hitting them blind were even worse! But that's another story...

Coming back to WordPerfect: I've always felt that a product that left me near helpless right after starting should never be called WordPerfect: nothing perfect about being left in the dark!

I guess the name always felt a bit arrogant, so I felt little inclination to ever change my mind.

But I know that some of my favorite writers just loved it, so I guess it did a lot of good for me eventually.

Andrew Tanenbaum honored for pioneering MINIX, the OS hiding in a lot of computers

abufrejoval

Microkernels, those were the times...

I bought the book. And that might have included floppy disks, I don't remember. And if I ever ran it, it wasn't for long or for much.

But just like Linus, I didn't really read the book in full.

I had already read the Unix v6 sources in full, in my CS classes at university.

When Linus decided that jumps to task state segments on an 80386 would make task switching fit on a single page of code, I had been using QNX, UnixWare, Lynne's and Bill's 386BSD, and a competing µ-kernel called AX for years: Linux combined the worst of everything and I ignored it for years, because I actually had full access to Unix and AX source code, too, and could compare: I was not impressed by what I saw, and I fully agreed with Jochen Liedtke (of L4 fame).

QNX was really cool and very usable already on the tiniest 8086, even without any MMU, and AX was likewise made for Suprenum supercomputers with lots of pure compute nodes that had no I/O whatsoever. So in that sense Minix wasn't that much better than Linux, that badly made monolithic Unix clone, because it didn't make distributed computing the default.

The competition was Moshe Bar with his Mosix kernels, or the Transputers, which did that in hardware and at the Occam language level.

What completely destroyed all that computer science for a decade or two was the clock race: who would have thought that a lowly 8086 successor could outperform a "Cray on a chip" i860 and run at several gigahertz?

Today it's all about multi-cores, but instead of dozens, it's millions of GPUs with thousands of cores each: all the Unixoids were ever trying to do was to offer Multics abstractions at vastly inferior cost. And Multics was all about multiplexing a single incredibly powerful CPU among as many users as possible, to create the illusion of everyone having their own [single] CPU.

Endless OS 6: How desktop Linux may look, one day

abufrejoval

Re: Missing German, immutability clashing with increasing internationality

It's been rather interesting to observe just how different this can be. In Brussels, just about everybody is at least bilingual between French and Dutch, because as much as the two groups are at each other's throats outside the capital, inside you just can't avoid speaking both; it would be a total breakdown otherwise: very few people risk annoying 50% of their customers over something so trivial. And when the francophones speak Dutch, it's slow enough for me to understand as a German. And in some corners of Belgium, you'd have to add a German dialect to the mix, with language barriers often running along a street in the middle of a town and the only bakery on one side: food is such a catalyst!

Somewhat similar in Spain, especially in Catalonia, where the language issue between español and català is politically charged, yet in Barcelona you'll just have people juggle the two and switch in a heartbeat without even thinking about it. Among my colleagues many then add French and English, simply because they spend hours each day with them in conference calls. And the French, very unlike their close cousins just behind the borders North and South, just don't manage foreign languages very well at all, something they share for some reason with their stray subjects across the Channel.

A little further South, all across the Southern seaboard of the Mediterranean, nearly nobody can make do with only one language. And even if they only speak Arabic, that's already two: the local variant and what they speak on TV. In the Maghreb region, most will have school or university in either French or Spanish, plenty of Arabic at the mosque, and then one or two of the various Tamazight dialects at home.

I've met colleagues from all of those places and several more working together in Dallas for a project some years ago and was trying to enjoy the internationality of the setting.

But evidently I was the only one, because starting with the Hispanics, almost no one dared to speak anything but English, even the French (well, the Canadians had no issues with Québécois, but that could have been Zulu as far as I could tell). The peer-group pressure to speak nothing else was quite astonishing and a total surprise, because it carried over to places like restaurants. While family owners evidently saw no issues speaking Spanish among themselves, they acted as if they'd been caught in an illegal act when I addressed them in my slightly Southern Malagueño, which is very close to what got exported to Latin America.

I learned my first variant of English in small-town USA, South Eastern Ohio. And it's still somehow the easiest and most natural for me to use. But I've spent four decades of my professional life mostly with either Brits doing RP or Europeans slaughtering it. Continuing with my Appalachian seemed like running a false-flag operation and would clearly have made me stand out for something that wasn't even me. As a result you'll catch me zig-zagging in a mixed group or just following whoever started the conversation. I remember a project with some Spaniards and mostly Brits from the UK's North-West, so I used my finest RP for months. Then this Kiwi or Aussie walked in one day and started some friendly banter, which had me answer in the closest variant in my portfolio, which was my good ole Mid-West... People who'd been working with me all those months were completely stunned and looked at me agape, as if I'd suddenly turned into a spy or traitor...

Even accents are so political, and these days I can't identify with either country, nor with any of the many classes in the UK.

The strong political pressure for English-only in the US has definite advantages, eliminating a lot of complexity that many other places have no choice but to deal with.

Yet somehow I think that the bilingual approach chosen by a lot of countries with dozens if not hundreds of local languages (China is estimated to have 600 different ones), with Mandarin even spoken across the global Chinese diaspora, is going to remain a global minimum, with three or more languages becoming rather more normal the more globally we work.

abufrejoval

Missing German, immutability clashing with increasing internationality

I speak every language they offer except Portuguese (I understand Gallego pretty well), and have to juggle all of them pretty much in parallel on top of my native German, which I still prefer as a default on my computers, though others in the family prefer English or French. And in my workplace things quickly get more complicated, as just my Swiss, Belgian and Spanish colleagues routinely deal with 3-4 languages, and that's only within Europe and a Latin alphabet.

I think the myriad of linguistic permutations is the worst issue with these immutable images, because there are quite a lot of places in the world where people routinely need to deal with several languages and input systems at nearly any level of granularity, from per sentence or conversation to per application, time of day, or day of week.

I don't know if they should try an EU edition and perhaps some other clusters for areas where people are multi-lingual and multi-alphabet by default. It could get out of hand quickly.

So perhaps they need to build some kind of a staging cache, which allows automated builds of multi-language images, so that you still have the advantage of immutability on the client, yet offer a degree of customization, while maintaining reproducibility and the ability to fail-back.

Codd almighty! Has it been half a century of SQL already?

abufrejoval

Funny thing: everybody thought that functional languages were too complicated

I had to do stuff like ML and Prolog at university near that time, also "initial quotient term algebrae" in formal proof methods.

Can't say that I liked it all that much, I just wanted results and loved e.g. Turbo Pascal, because it was so fast to compile and detect syntax errors, making for quick turn-around times even on tiny 8-bit CP/M machines like my Apple ][ clone with a Z-80 card.

And while all that formal, functional and logical stuff was somewhat fascinating, everybody just thought it was too impractical and complicated for anyone to use in daily life.

Only years later it dawned on me that spreadsheet formulae were functional. And by then I really wanted them inside my databases, too, to be evaluated as part of the query from within the DB engine, while the queries were functional too. You could do HPC that way, much better than in Fortran!

I guess some of the fuzzy things I had in my head were actually realized in MUMPS or Caché, but the lack of an open source variant meant I never found out.

So in fact these "complicated" languages may have seen far more use by far more people than those classic imperative programming languages we IT guys always thought were what made us better than those mere "users".

And today they also seem more likely to survive, because loops, which we were taught to think in because they pretty much offered inductive proof of correctness, now only mean you haven't done the work to parallelise your code across those tens of thousands of cores everybody with a smartphone or better has at their fingertips.

Because in functional that's natural and imperative sequential is really an aberration...

Firefox 124 brings more slick moves for Mac and Android

abufrejoval

Pest Control (Re: Consent-O-Matic add-on)

Usually adding uBlock Origin and enabling all filters is one of the first things I do, not just with Firefox, but in every browser.

But sometimes I get distracted and forget and sometimes I face computers in the extended family, which I haven't set up in that way...

And it's only then when I get reminded of just how gruesome and intrusive the Internet is for most folks!

All that bling, all those pop-ups, all those cookie-banner waits are a complete nightmare that I'd very nearly forgotten, once the ad-block maintainers (and perhaps even the Mozilla Foundation) started to put daily sweat and tears into keeping them out: that constant battle between the obnoxious street-criers, pouch-cutters and all the other street scum vs the defence team is something I've just taken for granted for years now. I can't help feeling time-warped to somewhere between the Middle Ages and the introduction of sewers, when the streets overflowed from chamber pots being poured out onto them and pest-infested rats would rather snap at you than let you step on a dry spot.

And when things do get through (e.g. recent Youtube nags), they typically get sorted out in just a few days.

I was very reassured in my choices when I saw that the newest Raspberry Pi OS just came with uBlock Origin installed and fully enabled within Firefox (Chromium, too, I think), almost the standard setup I'd choose as well.

So far at least, uBlock Origin is pretty consistent across browsers, so while I'll typically enable browser-based defences, destroy-cookies-on-close, and "Do not..." settings, I put most of my faith in UO.

Which of the two kills the rats first, I haven't bothered to check, because by the time I get to see the page, even the cadavers are gone!

The Tab Session Manager is the only other add-on I sometimes add on my main machines, where some tabs are left open for days if not weeks and thus cross patch days.

abufrejoval

What's all that noise? For me Firefox just works fine on everything...

I can't even remember when or why I went with Firefox, must have been really early days.

But ever since, I've just seen no reason to change. IE was garbage and Chromium had one giant disadvantage: it was made by Google.

And it's just deeply unhealthy to have all parts of an eco-system owned by a single company. Same with Edge or Safari.

If Mozilla were to publish their own OS again (as they did at one point with Firefox OS), I'd probably run Chromium on that, just because I consider balance of powers essential, to society in general and to my software environment.

I run a lot of systems, dozens, really, physical and virtual, spread across Linux, Android and Windows. And I need browser access on nearly every one.

And I want consistency, same layout, behaviour etc. when I switch between them.

Well, at least as much as possible; there are some differences between a mobile phone and anything desktop (even with a touch screen), that are implied by the form factor.

Firefox delivers that, Edge is a no-go, Brave comes close (and is often the 2nd option), I gave up on Opera when it became Chinese and the phones became powerful enough to handle Firefox.

Of course, I dislike having to disable all the money makers like Google search or that "Pocket recommendations" stuff every time I run a freshly installed Firefox, but even that is a lot faster, when it's pretty consistent across OSs and versions.

I don't understand the "Firefox is garbage" allegations: everything I do on the Internet works as expected, except where sites get too snoopy or refuse the ad-blockers, which I obviously run with pretty much all filters enabled for sanity.

And those sites I'm happy not to revisit, unless it's the government and I have to (with the ad-blocker disabled).

I got the whole family on Firefox, too, and it's been easy probably because I started them there long ago, before Chrome or Edge became as aggressive (and repulsive) as they are today.

If it were to go away, that would be very hard indeed, much like finally letting go of Microsoft Office completely and embracing StarOffice, sorry, LibreOffice, despite its quirks.

Functional or performance differences that I noticed have been very rare.

Google Maps in the 3D Globe view is really impressive in terms of how much it's able to squeeze out of relatively modest hardware. For the longest time I've been astonished at how it would render the neighborhood much better (in terms of accuracy) and much faster (in terms of speed) on a modest Atom system, even at 4K, than Microsoft Flight Simulator on an RTX 4090.

But for the Atom (or smaller ARM SBCs) I generally had to use Chromium to get that speed; Firefox stuttered on these smaller systems, while I never noticed anything wrong on my normal "desktop" or "workstation" class machines.

Even that has changed now: I can't see any noticeable disadvantage for Firefox e.g. on a Raspberry Pi 4 or 5 with the current software.

Where I actually *do* see a disadvantage for the Chrome-based browsers is on WASM, where they regularly detect and use fewer than the full set of cores on machines with lots of cores and threads.
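For what it's worth, a WASM build only ever sees what the browser chooses to report. A hedged sketch, assuming an Emscripten toolchain and its emscripten_num_logical_cores() helper (which, if I recall the API correctly, simply reflects navigator.hardwareConcurrency):

    /* Illustrative sketch only: ask the browser how many logical cores a
     * WASM module may use. Build with emcc; requires Emscripten's
     * threading header. */
    #include <emscripten/threading.h>
    #include <stdio.h>

    int main(void)
    {
        /* Whatever the browser reports here is the upper bound for any
         * thread pool -- the number that comes up short in the case
         * described above. */
        printf("logical cores visible to WASM: %d\n",
               emscripten_num_logical_cores());
        return 0;
    }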

Yes, only Chromium at the moment seems to enable WebGPU, but once that becomes popular enough, hopefully that will change: I'd really like to see WASM being able to take advantage of the GPU as well, but hardware independence and the ability to exploit ISA extensions and accelerators are rather too conflicting to sort out easily.

A path out of bloat: A Linux built for VMs

abufrejoval

Windows Subsystem for Linux uses 9P and why both IBM and Intel hated VMs

In my view WSL mostly exists so even Linux users have to pay an M$ software tax, which is why I abhor it generally (and continue with Cygwin out of spite).

But I did notice that they know how to use the good stuff (9P) for their ulterior world-domination gains.

Did IBM invent the hypervisor?

I'd say that people at IBM invented the hypervisor, but pretty much against IBM's will.

IBM was all bent on making TSS (Time Sharing System), Big Blue's take on a Multics-like OS, a success instead, and its failure was recorded in a famous study and book called "The Mythical Man-Month", AFAIK.

But there were far too many people with 360 workloads out there who needed to make them all work at once on their newest 360 machines, some of which came with the extra hardware bits that made VMs possible. So some people at IBM started this skunkworks project, which some IBM execs later noticed and turned into a product, VM/370, pretty much out of necessity because TSS had utterly failed.

So I really don't want to give IBM the hypervisor credit, but to the people who made it happen there anyway.

And to Intel's everlasting shame, they made sure their 80386 didn't have that same full set of extra hardware bits, so VMs couldn't be done for their 32-bit mode; only 16-bit VMs were supported.

It would have been an obvious and easy thing to do, but Intel was evidently afraid they'd sell fewer CPUs if people started consolidating their workloads.

And that's why VMs on x86 became such complex beasts: the abstractions were simply never at a similar height as on the 370.

Again a skunkworks project, but not by Intel guys: Mendel Rosenblum, Diane Greene and other collaborators (and VMware founders) enabled VM support via bits Intel had added to x86 for the notebook-oriented 80386SL and 80486SL CPUs, which introduced a System Management Mode, or ring -1 layer, into the CPU to allow operating systems like DOS to run on battery-powered hardware. In Intel's typical cut & paste manner, that got included even on non-mobile chips, where it had no official purpose.

VMware employed a few other patented tricks, like binary translation of privileged guest code, to make it performant enough for real usage, and were sailing towards a future of riches, which a very furious Intel then wanted to shoot down rather quickly. They had withheld VMs from their 32-bit CPUs because they wanted to sell more of them, not to have this upstart eat the extra value.

Only then did Intel add the necessary hardware bits and sponsor Xen's transition from a software VM approach to "hardware virtualization", so VMware's patents lost their value and the company eventually became ripe for an internal takeover via one of their creatures: Mr. Gelsinger, who had held the keys to VMs before and probably chose to withhold them.

Broadcom moves to reassure VMware users as rivals smell an opportunity

abufrejoval

Rebirth of mainframe licensing, IBM should sue them

IBM came up with this way of continually "licensing" the use of what you already owned when Amdahl and others created 360 clones.

Personal Computers were the result of them squeezing customers to the point where the pain made even IT managers jump.

I bought my first VMware in 1999, because I just loved how they circumvented Intel's stopping short of full VMs by exploiting the SMM of the 80386SL/80486SL: they were the underdog, and Intel was so furious they actually sponsored Xen to piss into VMware's patent pot after immediately pulling out all the full-virtualization stops.

And then they went as far as having an Intel guy run the company and set it up for sale into the ground. Is it *that* personal?

Too bad Qumranet's KVM is owned by IBM now.

And that's the company that also rather keeps Transitive's QuickTransit in their poison lockers than have humanity benefit from a great idea.

Forgetting the history of Unix is coding us into a corner

abufrejoval

That's a very long and windy buildup for Plan9

I've struggled for many years trying to understand and explain how Unix could survive for so long, given its utterly terrible shortcomings.

For starters, please remember that the very name "Unix" or "Unics" was actually a joke on Multics, an OS far more modern and ambitious, a fraction of the current Linux kernel in code size, and finally open source today.

Everything *had* to be files in Unix, because the PDP only had a couple of kwords of magnetic core memory and no MMU, while Fernando Corbató made everything memory on Multics, a much more sensible approach driven further by single-level store architectures like the i-Series.

I love saying that Unix has RAM envy, because it started with too short an address bus :-)

And I was flabbergasted when Linus re-invented Unix in about the worst manner possible: there was just everything wrong about the initial release! I was busy writing a Unix emulator for a distributed µ-kernel inspired by QNX at the time (unfortunately closed source) so I could run X (the window system, not the anti-social cloaca) on a TMS 34020 graphics accelerator within the SUPRENUM project: I had access to AT&T Unix and BSD source code, so I wasn't going to touch his garbage even with a long pole...

...for many years, by which time none of his original code bits survived; but his social code, his excellent decision-making capabilities, had shown their value in accelerating the Linux evolution via developer crowd scale-out far beyond what the best husband-and-wife team (Bill and Lynne Jolitz) could do.

I've always thought that the main reason the Unix inventors came up with Plan 9 was that they didn't want to be remembered for the simpleton hack they produced when they came up with Unix to make use of a leftover PDP that would have been junk otherwise. They felt they could do much, much better if they had the opportunity to turn their full attention to an OS challenge!

In a way it's like the Intel guys, who hacked the 4004, 8008, 8080 and 8086 but wanted to show the world that they could do *real* CPUs via the 80432, i860 or Itanium.

So why did those clean sheet reinventions all fail?

The short answer is: because evolution doesn't normally allow for jumps or restarts (isolations can be special). It will accelerate to the point where the results are hard to recognize as evolution, but every intermediate step needs to pay in rather immediate returns.

(And if in doubt, just consider the body you live in, which is very far from the best design even you could think of for sitting in front of this screen)

Once Unix was there and had gained scale, nothing fundamentally better but too different had a chance to turn the path.

I've tried explaining this a couple of times; you be the judge of whether I got anywhere close.

But I've surely used many words, too.

https://zenodo.org/records/4719694

https://zenodo.org/records/4719690

A little more on the cost of code evolution:

https://zenodo.org/records/4719690

or the full list via https://zenodo.org/search?q=Thomas%20Hoberg&l=list&p=1&s=10&sort=bestmatch

Sam Altman's chip ambitions may be loonier than feared

abufrejoval

An investment of trillions requires a matching return: who would pay that?

My doubts actually started with IoT. The idea of having all things in your home somehow smart, sounds vaguely interesting... until the next patch day comes around and you find that now you have to patch dozens or more vulnerable things, most of which are more designed to feed the vendor's data lakes than providing any meaningful empowerment or value.

I've also always marvelled at my car vs. my home: my car was made in 2010, so it isn't even new any more, yet everything inside is connected and "smart", will adjust to whoever is driving it automagically, and things happen at the touch of a button or even on a voice command, if that were actually any easier or faster.

Of course, once I took the wrong key, the one which had adjusted everything to a person half my size, and I feared mutilation if not death as I searched in total panic for a way to halt the seat squeezing me into the steering wheel... And since I never really came out of home office, I tend to spend so little time in my car that I often can't even remember how to turn on the defrost when the season changes.

Yet sometimes I find myself wanting to click my key when I enter my home, especially when I carry my supplies, hoping the door would open just as automagically, perhaps even have the darn boxes carried up two rather grand flights of stairs. You see, my home was built around 1830: mine is the part under the roof where the domestics used to live, who never found a worthy successor, but gave me perspective.

You see, Downton Abbey provided me with the perfect vision of what IoT should be: life with non-biological servants. Most importantly, life not with intelligence somehow scattered all across things, but with an absolute minimum of non-biological servants: one servant per domain, the butler for the shared family mansions, a valet or lady's maid for each individual's personal needs, a chauffeur for all-inclusive transportation, an estate agent-secretary to manage all fortunes, and that's it! Delegation for the lesser services like cleaning and food supply, scale-out for grand events, coordination amongst them, and life-long memory for anything relevant would all be part of their job, not for me to worry about.

Alexa, Siri, Co-pilot: none of them ever came close to even envisioning that for me. And you know where their loyalty lies: Downton Abbey has plenty of proof of what happens if servants are disloyal to their masters. Actually, what I really want aren't even servants, who might just go off and marry or have a career of their own, but good old Roman/Greek non-bio-slaves for whom obedience is existential, even if it includes proper warnings against commands that might in fact be harmful. And I don't recall slaves ever being more loyal to their slavers than to their owners. So just imagine how Apple would be treated by owners a few centuries or two millennia ago!

Yet, how much would all of that be worth to me, or to the vast majority of the population who are consumers?

Trillions, after all, mean a thousand bucks for each of billions of consumers... And that is just the chips portion of what it would require to make it happen.

It comes back to my smart car: would I have paid extra to have all that intelligence in it?

Not really, I bought it used. It just happened to have all that stuff in it, and I would rather have liked to forego those "extras". I paid for the room, the transport capacity and its ability to cruise the Autobahn at speeds I consider reasonable, with adequate active safety.

It's really a lot like the electric sunroof which I couldn't opt out from: it limits the head-room every time I enter the car, yet by the time I find myself actually using it, it's typically broken and would be very expensive to repair: so it winds up just being a glass brick covered up 99% of the time. I'd have much rather had the cruise control, but a used car with these options wasn't on sale when I needed a replacement.

Same with the electric seats, which may be ever so slightly easier to adjust once you've figured out how they work and how to keep them from breaking your bones. But they become one big giant liability if they're stuck in some ridiculous position, because my son wanted to show the car off to his lovely but tiny girlfriend.

Turns out the main reason I've never seriously considered making my home "smart" is the fact that I need it to function 100% of the time: I don't really have a backup if the door failed to open, the windows failed to close, or the chairs at the dinner table were suddenly glued to the floor.

So count me very sceptical when it comes to AI-based automation creating empowerment with enough value and trustworthiness to choose the AI variant over the stupid one EVEN at EQUAL PRICE.

Chances of me actually paying extra? Very ultra slim with an extra dose of heavy convincing required.

But next comes the corporate angle, whence my disposable income currently comes.

Yes, there may be a lot more potential for money savings there, but how much AI are consumers going to spend on once it has reduced workforces by the percentages corporate consumers of AI are hoping for?

New jobs and opportunities take time to arrive and one thing is very sure: those investing billions if not trillions today cannot wait a decade for demand to pick up again. Their shareholders demand sustained order entries month by month, quarter by quarter and returns best within a year.

And that's where I see bloody noses coming all around already with Microsoft & Co. spending billions or the GDP of smaller countries on nothing but AI hardware.

I can hardly see myself using Co-pilot even if they force it into my desktop and my apps.

Actually, much of my late career has been spent worrying about IT security, and the very idea of Microsoft infusing every computer with an AI begging everyone to use it gives me nothing but nightmares about the giant attack surface they are opening up: that company still doesn't even manage to print securely, decades after selling their first operating system; CP/M was safer than that!!

Much less can I see myself paying for it, nor do I see 90% of consumers paying a significant amount for it, either.

Sure, that's belly button economics, but I humbly consider myself mainstream and ordinary enough to represent your regular John Doe.

Investors spending billions and trillions need matching returns and I fear their desperation more than anything else about AI.

PIRG petitions Microsoft to extend the life of Windows 10

abufrejoval

Re: Why extend Windows 10's life when Windows 11 could do just fine

Precisely. I've been running Windows 11 directly on Skylake hardware, which is nearly exactly the same as Kaby Lake in anything an OS would care about, and I'm also running the very latest Windows 11 just fine on Haswell and Broadwell Xeons under KVM as the hypervisor. With GPU (and USB) pass-through I even get it to run games at native performance on Windows 11.

Just proves those checks are completely arbitrary and nothing but planned obsolescence in cahoots with Intel and AMD.

Microsoft needs to be broken up and the OS part (among others) spun off into a separate company under strict guidance not to create artificial obsolescence.

Perhaps it's OK to let go of 32-bit x86 today, but anything 64-bit should run Windows, and there is no reason a TPM should be required, especially since not everyone even wants to encrypt their disks.

I much prefer my disks movable between systems and easy to copy, so I always disable it.

And with Windows 12 M$ is likely to go even further in terms of obsolescence and integrating ever more AI backdoors when they are already acting as if they owned your Personal Computer.

Apple users may be happy to give up any right to self-determination to their iNanny, but when I hire a janitor or property manager for my PC (that's what an OS is) I don't want him to run my life or report on me to his agency. I just expect him to do the job he was hired for and not get smart with me!
