* Posts by abufrejoval

60 publicly visible posts • joined 29 Jan 2014

LibreOffice still kicking at 40, now with browser tricks and real-time collab

abufrejoval

Marco Börries started it at 16, on the cheap, selling code he had not written

I ordered Turbo Pascal 1.0 from Borland the minute I saw the ad in BYTE: $49 for a compiler including a WordStar compatible editor was just too good a value to pass up. My Apple ][ clone (with Z-80 SoftCard and Videx 80-column card for the "professional" stuff) had cost way more than an RTX 5090 would in today's money.

If you weren't programming 40 years ago, you just can't imagine the productivity boost it provided in an age when the edit/compile/debug cycle was measured in coffee cups, not milliseconds: ever wondered why BASIC was so popular?

Anyhow, only two years later Borland offered a functional equivalent of their editor as Pascal source code in a package called Turbo Editor Toolbox, at a similar price, I believe.

That gave you a WordStar equivalent editor you could change and extend any way you wanted, without any constraints as to redistribution of the results: basically even less restrictive than a BSD license AFAIK.

And that's exactly what the first release of StarWriter was: a simple compile of the Turbo Editor Toolbox, sold on diskettes with a StarWriter label for, I believe, more than the Toolbox itself cost with full source code.

I know, because it was bug-for-bug compatible: it had exactly the same annoying little differences from the "real" WordStar (a $500 product) that the compiled editor had, which kept me from using the Editor Toolbox myself instead of the real WordStar or the Turbo Pascal internal editor, neither of which had those annoying quirks (around end-of-line handling, as far as I remember).

Today that type of behavior is more likely to result in public adoration than in what it actually deserves, and I distinctly remember sharing none of the Wunderkind reverence the press gave Marco Börries for being only 16 years old: they didn't know he had written none of what he sold.

He did eventually invest some of the money he made as a cheap rip-off artist into completely refactored variants of StarDivision's office suite, but every version I tried fell short of the "originals" it was supposed to be compatible with: every existing document I loaded was somehow off or mangled, so the evaluations typically stopped in less time than it had taken to install them.

But with 365 snooping every keystroke and gesture to feed the AI monster Microsoft calls its own, there is little choice or alternative: the cheap copycat turned into salvation, who would have thought!

EU plans to 'mobilize' €200B to invest in AI to catch up with US and China

abufrejoval

Those billions would be more urgently spent compensating for Trump's treachery

Let's be honest: most money spent on AI would

a) do little to benefit the taxpayers it was taken from

b) go to feed a former ally, who's broken all vows of fealty.

Sounds as if it was proposed by OpenAI, another self-serving "intelligence".

BTW: when I asked DeepSeek (run locally, fresh start) what the German equivalent of Paris was, 70% of the answer was waxing on about how China was all about world peace... When I asked why it had mentioned China at all, it was at a bit of a loss to explain its bias...

Also not long after a fresh start, when I asked who Marie-Antoinette's mother was (Maria Theresa, the Habsburg empress who ruled Austria), it contended that she had "no biological mother", and that she somehow died in obscurity decades after being executed...

It's much easier to see how AI would make mistakes in life-and-death situations than how it's going to benefit humans.

How the OS/2 flop went on to shape modern software

abufrejoval

Re: Not so

Thanks for your illuminating response!

I guess we all tend to generalize our individual perspective a bit and in the case of OS/2 it's probably safe to say that it died from more than one wound.

abufrejoval

It does, actually. Just requires the right variant

The best proof that Microsoft's excuses about old hardware are lame is produced by Microsoft itself.

It's called Windows 11 IoT Enterprise LTSC and does away with nearly all restrictions, except 64-bit ISA and POPCNT support.

I'm running it on anything Sandy Bridge and up, or simply on anything that I also used for Windows 10.

No TPM (unless it's a travel laptop and has one), no HVCI (I run VMware Workstation as type 2 hypervisor), no OneDrive (not stupid), no Co-Pilot (not that stupid), no Edge (that would be *really* stupid) nor many other "improvements".

It was released in October 2024 and comes with support until 2034.

And to deploy, I simply take a minimal install that I keep current on a Windows To Go USB stick, with all my applications and all the various drivers for older and newer hardware, and put that on the target's boot storage; MAKs and ISOs came with MSDN and remove all activation hassles.

After perhaps a reboot or even two to reconfigure the hardware it's good for longer than the hardware will likely still last, since some of it is already more than 10 years old.

And I find it somewhat embarrassing that it's easier to transplant than most Linux variants and across a vast range of systems ranging from Atoms and small laptops to powerful mobile or tower workstations with all sorts of storage, NICs, integrated or discrete GPUs.

And if a brand new laptop comes with some "OEM enhanced" pre-built image? I just plaster it with the live image from the stick, because OEMs are just badly imitating the abused notion which Microsoft has copied from the Fruity Cult: that they own your personal hardware including your data.

Windows Server 2025 is and works pretty much the same, btw. I'm running the Datacenter edition as "to Go" on a nice Kingston DataTraveler 1TB USB 3.2 stick that isn't quite NVMe, but will go at 2x SATA speeds on matching hardware. Actually Windows Server is mostly a PoC, because it's a bit rough on AMD desktop hardware due to AMD's penny pinching and Microsoft charging extra for server signatures.

Every Windows 11 has always installed perfectly fine without any issues on VMs running on much older hardware, including things like device pass-through (e.g. GPUs for CUDA or gaming) on KVM/Proxmox/oVirt: all those blocking checks are only performed on physical hardware by SETUP.EXE.

And even if Windows to Go also no longer officially exists, Rufus will help you out for any edition Microsoft produces.

And no, I cannot imagine Microsoft ever blocking security updates to LTSC IoT editions based on hardware generation. Application vendors are the far bigger risk to long-term viability: some games now refuse to run without TPM (could be inconvenient) and Facebook might be next... no problem for me, except when you're forced to use them to do your tax returns next year.

abufrejoval

BIOS and HAL (Re: The ghost of Intergalactic Digital past)

Sorry but CP/M's BIOS was no HAL and HAL wasn't particularly novel or powerful.

IBM's 360 architecture (by Gene Amdahl), which allowed a single instruction set to span a large range of machines that differed significantly in terms of capabilities and physical architecture, was much more forward looking. It basically had a virtual instruction set, some of which even the smaller machines could execute in hardware, while the more complex parts (e.g. floating point) would be emulated in microcode, fully transparent even to the OS.

CP/M had to run on S-100 machines, where few ever had the same hardware so a BIOS had to be written (or adapted) for each machine, much like run-time libraries in the 1950's.

And HAL was Microsoft's insurance, both against a multitude of ISAs, but also against a PC platform which had zero abstractions or support beyond a CP/M style BIOS in ROM.

I've never investigated the abstraction capabilities of HAL, but everything in PCs went straight to the quickly evolving metal whenever that made GUIs look better or stuff run faster, which nobody could have anticipated when HAL was designed.

Congrats on deriving value from a code base that old, but I can't think of INT13 BIOS or INT21 DOS calls as "fancy". They were a primitive replacement of CP/M's CALL 5 (BDOS) entry point, which was necessary because of the 8088's segmented memory and because it lacked a proper system call instruction. And they were so incredibly primitive and slow that everyone who could bypassed them whenever they could.
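
Just to illustrate how primitive that interface was, here is a minimal sketch (mine, not anything from the article, and it assumes a 16-bit DOS compiler in the Turbo C mould with <dos.h> providing union REGS and int86()): writing a single character meant loading a function number into AH and issuing a software interrupt, the direct descendant of CP/M's function number in register C followed by CALL 5.

```c
#include <dos.h>   /* Borland/Watcom style header; assumed, won't exist on modern toolchains */

/* Print one character via DOS function 02h (display output), INT 21h.
 * Under CP/M the equivalent was BDOS function 2: character in E,
 * function number in C, then CALL 5. */
void dos_putchar(char c)
{
    union REGS regs;

    regs.h.ah = 0x02;            /* DOS function: display output */
    regs.h.dl = c;               /* character to print */
    int86(0x21, &regs, &regs);   /* raise INT 21h */
}
```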

Intel itself was so embarrassed by them that they overcompensated with really fancy system call instructions and mechanisms like task and call gates for the 80286, which OS/2 was designed for. But while they only cost a single instruction to call and seemed to offer good process isolation and protection, they were so incredibly slow to execute that Linus had to replace all that code to make his initial OS perform anywhere near BSD386 levels.

And later not even Intel managed to keep track of all the registers that actually needed saving and restoring, which is why hardware task switching was dropped from the 64-bit ISA.

abufrejoval

Re: I remember reading Letwin's post

I started my first computer courses in 1980: BASIC on a Tandy TRS-80, Fortran and Cobol on an IBM mainframe.

The 3270 screen for the IBM was just beautiful: 80x25 characters that were just wonderfully chiseled in bright green on black and IBM keyboards were like Steinway grand pianos. The TRS-80 was washed out dots on a bad TV and the keyboard a nightmare.

But BYTE magazine convinced me very quickly that little was more desirable than having your own computer: I've bought PCs and then tried to turn them into mainframes ever since. With Microport Unix on my 80286 I thought I had gotten close... I always used original IBM keyboards on my cheap PC clones, but put a pilfered IBM metal sticker on the chassis.

My professional life has mostly been about replacing the mainframe. But that has never kept me from admiring the admirable parts. Gene Amdahl's forward-looking 360 architecture was certainly one of them, but as with virtualization you could argue that more was invented *at* IBM, even against their management's wishes, than *by* IBM: TSS vs. VM/370 is one of many such stories.

One IBM architecture which I feel still undervalued and much more advanced than even today's mainstream operating systems is what started as System/38 and became AS/400.

I've never consciously used one and they weren't exactly personal computers, but as a technical architect I've at least come to admire their forward looking principles, the single level store and capability based addressing, both of which might have saved unimaginable man years and trillions of IT spending, had they been more affordable or even open source.

Unix was a hack that turned everything into a file, because the PDP it was born on had too short an address bus to support a Multics-like virtual memory system. Its designers were so embarrassed by its success that they developed Plan 9 and Go, just so they'd have something done properly to be remembered for.

And who would want files, when they could have persistent objects, like on Smalltalk machines or at least a database like on AS/400?

But these days I'm reminded ever more of the fact that humans started out as segmented worms and were not designed to sit in front of a computer for a day of work either: we might have evolved here, but the design is anything but optimal for the job, or where do those back pains come from?

abufrejoval

Re: Of course NT 3 was great, it was VMS after all

The automatic versioning of files (and the "purge" command to get rid of older versions) was already present on DEC's PDP-11 machines, or rather their operating systems.

Can't actually speak for RSTS, because I never used that, but RSX-11, where I spent a few years, had it, too. DCL, DEC's variant of a [shell] command language, was quite nice generally and there was some early cross-pollination to DOS via CP/M, whose programmers evidently were familiar with PDPs, too, since a few commands and even utilities like PIP (peripheral interchange program) were purported to have been inspired by RSTS.

True, the VAX cluster facilities never quite made it to mainstream appeal on Microsoft's Windows, mostly I guess because Wolfpack came at the same time as NT4 let device drivers run at ring 0, obliterating the main security advantage that would have made it feasible: clusters can't help against broken software.

And I don't know if IBM's cluster product were older than VAX clusters, but the latter can only be called "inexpensive" when compared to what IBM keeps charging for mainframes (or Tandem for NonStop).

Ken Olsen eventually led DEC into ruin by trying to emulate ECL mainframes via the VAX 9000 at a time when IBM itself was going CMOS on the one hand, and by trying to conquer the PC market at mini-computer prices via the DEC Rainbow on the other.

I can see him shaking his head at a Raspberry Pi emulating a VAX 9000 (or a Cray X-MP for that matter) faster than the real thing ever ran.

I was somewhat involved in the HPC motivated Suprenum project during my Master's thesis, back when even (Bi)CMOS CPUs were unsoldering themselves from their sockets at 60 MHz clocks (first Intel Pentiums), so I've always retained an interest in scale-out operating systems, which would present a single-image OS made from huge clusters of physical boxes connected via a fabric (e.g. Mosix, by Moshe Bar).

But with currently 256 cores on a single CPU die (or thousands on a GPU) each delivering large SIMD vector results per multi-gigahertz clock cycle, that (scale-out operating system) domain has become somewhat irrelevant, or rather transformed far beyond recognisability and often rather proprietary.

abufrejoval

OS/2 was dead by design, because it was hard-coded to the 80286 segmentation and security model

I remember that period very distinctly, because I had just sunk the equivalent of a used Porsche (or a new Golf) into an IBM PC-AT clone with an EGA graphics adapter, basically two years' savings from freelance programming work while I was studying computer science.

I then wrote my own memory extender so my GEM based mapping application could use extended memory, while GEM and DOS were obviously tied to x86 real-mode. It basically switched the 80286 into protected mode for all logic processing, and then reset it into real-mode via a triple fault to do the drawing bits.

It worked, but every PC had its own little differences in how to trigger the reset or handle the recovery, because the mechanism might have been IBM intellectual property (IBM used the keyboard controller to toggle the reset line).
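
For anyone who never had to do this, here is a minimal sketch (my reconstruction, not the original extender code) of the generic PC/AT way back from protected to real mode, assuming a 16-bit DOS compiler in the Turbo C mould with <dos.h> providing outportb() and MK_FP(). The 80286 simply could not leave protected mode, so you parked a resume address where the BIOS would find it and forced a CPU reset.

```c
#include <dos.h>   /* Borland-style 16-bit compiler assumed */

/* Ask the BIOS to resume us in real mode after a CPU reset. */
void return_to_real_mode(void (far *resume)(void))
{
    /* 1. Tell the BIOS why it is about to be reset: CMOS register 0x0F is
     *    the shutdown status byte; code 0x0A means "skip POST and far-jump
     *    via the vector stored at 0040:0067". */
    outportb(0x70, 0x0F);
    outportb(0x71, 0x0A);

    /* 2. Store the real-mode address we want to continue at. */
    *(void (far * far *)MK_FP(0x0040, 0x0067)) = resume;

    /* 3. Trigger the reset: the documented path pulses the reset line via
     *    the keyboard controller (command 0xFE on port 0x64); the quicker,
     *    machine-dependent trick was a triple fault (load an IDT with
     *    limit 0, then cause any interrupt). */
    outportb(0x64, 0xFE);

    for (;;) ;   /* wait for the reset to take effect */
}
```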

Anyhow, having worked with PDP-11 in the form of a DEC Professional 350 and with VAX machines, I was utterly bent on overcoming the CP/M feel of my 80286, and also ran Microport's Unix System V release 2 on the machine, which included a free Fortran compiler that unfortunately produced pure garbage as code.

It also included a working DOS box, long before OS/2 could deliver that, using the same reset magic I'd exploited for my personal extender. I ran a CP/M emulator inside it with WordStar, just for the kicks of running CP/M on a Unix box!

Then the Compaq 386 came along. I even had one turn up at my doorstep: the dealer I had purchased the 80286 from came to my house, rang the bell and told me he had a 386 for me.

You see, when these machines were the price of a new car, house deliveries up the stairs and setup of the machine were actually part of the service...

Can you imagine just how painful it was to tell him that I had not ordered it? And finding out that in fact my father had ordered it for himself? Including a full 32-bit Unix that actually worked like it would on a VAX?

BTW: that Compaq wasn't slow. Perhaps the ESDI HDD wasn't super quick, but the RAM was 32 bits wide and way faster than anything on my 8MHz 80286. And Unix apps don't typically block on physical disk writes.

Anyhow, finally going on topic here:

OS/2 was an OS tailor-made for the Intel 80286. The 80286 was very similar to the PDP-11s with their discrete MMUs, which kept processes and users apart by allocating their code and data into distinct, smallish memory segments (16-bit offset addresses) and protecting them from unwarranted access. Unless your program was permitted access to a memory segment, any attempt to load and use it would result in a segmentation fault via hardware and program termination by the OS exception handler.

The 80286 went a bit further yet and allowed for a full context switch between processes via call and task gates, putting almost the entire logic of a process switch into microcode which could be executed via a single call or jump.

That was continued on the 80386 and caused an overexcited 21-year-old Linus Torvalds to think that writing a Unixoid OS couldn't be all that difficult and could fit on a single page or two of code!
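
For the curious, here is roughly what that looked like, as a minimal sketch of my own (assuming 32-bit x86, a GDT entry for a TSS already set up, GCC-style inline assembly, and a made-up selector value): a single far jump to a TSS selector and the CPU saves and reloads the whole register state in microcode. Very early Linux did essentially this in its switch_to() before the mechanism was measured, found slow and ditched.

```c
/* Sketch only: a hardware task switch in the 286/386 protected-mode model.
 * TSS_SELECTOR is a hypothetical GDT selector pointing at a valid task
 * state segment; this has to run at ring 0 with that GDT in place. */
#define TSS_SELECTOR 0x28   /* example GDT slot, assumption */

static inline void hardware_task_switch(void)
{
    struct {
        unsigned int   offset;    /* ignored when the target is a TSS */
        unsigned short selector;  /* the TSS descriptor to switch to */
    } __attribute__((packed)) target = { 0, TSS_SELECTOR };

    /* One instruction: the CPU stores the outgoing task's registers in its
     * TSS and loads the incoming task's complete state from the new one. */
    asm volatile ("ljmp *%0" : : "m" (target) : "memory");
}
```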

It wasn't until Jochen Liedtke of L4 fame carefully dissected just how horribly slow those intrinsic microcoded operations were that Linux gained the performance which enabled its wider adoption, by ditching all those Intel shenanigans and eventually discarding all segmentation use with the transition to 64-bit.

The 80286 didn't have that option, nor did OS/2.

Linus grew great acknowledging, enabling and encouraging others to do better than he did. Perhaps the size of his early mistake burned that lesson in extra strong.

Whatever you say about politics, IBM and their ill-fated Micro Channel machines have very little to do with the fate of OS/2.

It was doomed by being an OS designed for the 80286 and its 64K segments, very similar to a PDP-11 and its various operating systems.

32-bit CPUs and virtual memory made for a completely different OS design and Microsoft clearly understood that it called for a complete restart.

They snatched Dave Cutler to get their hands on one of the best virtual memory operating systems available at the time that wasn't Unix.

And the rest is history.

The so-called 32-bit versions of OS/2 weren't really a 32-bit OS. To my understanding they were a lot like DOS extenders, in that the kernel and many base services mostly remained 16-bit code, but 32-bit apps with virtual memory were allowed.

A re-design of OS/2 for 32 or 64 bits wouldn't have been OS/2, because the segmentation model and its hardware security mechanisms were really at the heart of the OS.

I bought Gordon Letwin's OS/2 book when it came out and read it: it spelt out the tight integration with the 80286 on every page, and thus its doom. I chucked it into the recycling bin decades ago, with some lingering regret, since I had spent my fortune on the wrong box, but boy am I glad I wasn't in Gordon's place and hadn't misspent a career!

I had to read the details on the 80286 architecture to make my extender work.

And I remember reading about those task gates and call gates and feeling a pull somewhat similar to what Linus must have felt.

I also remember reading about the Intel iAPX 432. Intel has a penchant for designs that look great on paper.

But by then I had an 80486 running BSD386 and/or various "real" Unix variants, as well as various closed source µ-kernels GMD/Fraunhofer was developing at the time. And my focus was on getting smart TMS34020 based graphics cards to work with X11R4, so I wasn't biting.

I also had access to Unix source code, so why should I settle for something amateur?

After finishing my thesis porting X11R4 to a µ-kernel with a Unix emulator built on Unix source (thus unpublishable), I actually got a job where I was to create a TIGA graphics driver for OS/2 so it could run the PC as an X-terminal. Got the SDK and went diving deeply into OS/2... for a month, after which I was called away to work for my dad's company.

I was glad to go in a way, because even if the technical challenge was interesting and so called 32-bit variants of OS/2 had emerged by then, the smell of death was too strong.

DOS boxes whetted my appetite for VMs and containers, and I've built my career on crossing borders or merging operating systems with VMS and Unix lineage, and on far fewer µ-kernels than I ever thought likely. Nor did scale-out operating systems like Mosix ever really take off, or clusters ever become significant at OS level, except in niches like NonStop.

OS/2 to me is the iAPX 432 of operating systems: dead by design.

How Intel then crippled the 80386 to not support full 32-bit virtual machines is another story.

As is how Mendel Rosenblum, Diane Greene and team overcame that limitation via the 80386SL/80486SL SMM (system management mode) and code morphing.

Intel wasn't amused and it's nothing short of irony how Gelsinger came to head the company that destroyed Intel's CPU business case.

abufrejoval

Of course NT 3 was great, it was VMS after all

Of course WNT was great, but it was hardly a v1 OS.

It was basically a VMS clone (V++=W, M++=N, S++=T), a "re-creation" done by Dave Cutler (and team?) who had been lured into Microsoft from DEC.

As such it had tons of multi-user credentials and a properly designed security model, using the 386 ring isolation to keep the kernel out of harm's way (from device drivers) and thus stable... like a VAX.

Of course a CPU-driven pixmap GUI wasn't part of VMS' design, so having to push pixels through security barriers made for unacceptable GUI performance on VGA hardware, especially when you add a 16-bit ISA bus twixt CPU and screen on your typical 386 clone.

I mostly ran the Citrix variant of NT 3.51, but again modified by an X-terminal vendor (was it Tektronix?) and on X-terminals, so a lot of the pixel pushing was instead translated to much higher level X11 line, pixmap and text calls rendered on the client side, which resulted in rather good multi-user performance for office apps. The Citrix ICA variant could use normal 32-bit RAM for rendering, but still just wouldn't scale to the higher resolutions that were becoming popular (1024x768 or even 1280x800) at 8-bit color (or deeper).

In Windows NT4 graphics and other device drivers were moved to ring 0, which meant that a badly written printer driver for stuff like ink-jet printers would kill a dual-CPU NT4 50-user terminal server in the blink of an eye because it wasn't written to be thread-safe: I very distinctly remember seeing this happen (and hunting down the cause). It didn't help that Microsoft had cut Citrix off from access to the NT4 sources either, unless they gave Microsoft access to their technology.

Ransomware attack forces Brit high school to shut doors

abufrejoval

Love those girls!

I know it's most likely a standard Shutterstock pic, but boy, did these girls give it everything to project boredom!

They obviously came well prepared and groomed to their best, but then just imagine their fun at trying to look their possible best (at looking good) and to project the possible worst (boredom/frustration).

I'd say they nailed it! Bravas! Da capo!

And I guess a lot was also in the brain behind the lens: well done, the effort shows and really carries over to the viewer!

GM parks claims that driver location data was given to insurers, pushing up premiums

abufrejoval

Re: Who wouldn't knowingly consent to have their driving assessed by their insurance company?

Well, I guess you need to zoom out a bit to get my point.

People may prefer privacy for some things and under some circumstances.

But people also feel the urge to go in quite the opposite direction and broadcast their virtue.

In both cases they want to influence or manipulate how they are perceived by others because that provides benefits to them.

Social networks got one of their most important boosts from people who wanted to broadcast a projection of themselves to a wider audience and in a more controllable manner: could be being bigger, smarter, sexier, whatever. Few things wind up as deadly as fanatics trying to outdo each other very publicly on some imagined virtue.

And for some it's just proving to the insurance company that they deserve lower premiums or some other benefit: they are happy to forfeit privacy there to gain an advantage.

And insurance companies are very happy to shift the load to the less obviously virtuous and increase the premiums as a windfall: it's one type of greed or another all around.

Everyone tries to manipulate everybody else for a benefit. And that is behaviour far older than homo sapiens, you can see it in plenty of other species, too.

And sometimes the worst offenders don't get what they deserve, they just get re-elected instead.

abufrejoval

Who wouldn't knowingly consent to have their driving assessed by their insurance company?

Plenty of people with plenty of different motivations...

Humanity is diverse and that includes what others regard as perverse.

Let's tackle the poor bastards first: if you struggle to stay afloat, lowering your insurance premium by *any* means becomes a priority. So if they can demonstrate "safe driving" by going half the (upper) speed limit, they will, regardless of whether that has the opposite effect by inciting others into reckless passing to compensate.

You can see it with the elderly: they know both that their sensory equipment is deteriorating and that their reaction times are increasing. But quite a lot really depend on driving to participate in life or stay out of assisted living they can neither afford nor tolerate. So they go extra, extra careful to stay out of trouble... regardless of their impact on traffic flow.

And then there are those who try to impose their "virtue" on others. You know the type, who will stay in the left lane at 130km/h because that's the recommended maximum speed on the Autobahn and far more green than going full throttle... while they didn't take the even more ecological train, either, which runs at 300km/h alongside ...but unfortunately won't stop where you need to go.

They are in permanent "driver's ed" mode and probably expect to be given not just a lower risk rating but essentially an insurance knighthood.

As my utterly corrupt almost-ex-wife used to say: "90% of all virtue is functioning social control", which she used to great effect to cover her misdeeds. Yet without closing the feedback loop in one manner or another, insurance cannot work, as the California wildfires demonstrate in rather fiery fashion these days.

And I guess the logical extension of that is your insurance getting cancelled automatically as your car heads into an unavoidable collision.

A neutral party in the middle seems a necessary solution, but won't come free (money) and is very difficult to maintain free from coercion via AIs or plain old software.

Where does Microsoft's NPU obsession leave Nvidia's AI PC ambitions?

abufrejoval

Re: What is the point?

Cooling!

It's called dark silicon and it's required to keep temperatures within manageable limits, by leaving cells only partially filled with active transistors or entirely void next to noisy neighbours.

Except they are now giving that wasteland a fancy name and selling it at a premium.

Ransomware scum who hit Indonesian government apologizes, hands over encryption key

abufrejoval

Re: Criminals with a conscience? I don't buy it!

Perhaps I should have chosen another word like hostilities, but effectively we live in a world of constant smaller undeclared wars and this would have started another, if perhaps only a civil war within Indonesia. And there are far too many non-Chinese who might then want to claim parts of the large Pacific that the PRC would rather conquer diplomatically.

abufrejoval

Re: Criminals with a conscience? I don't buy it!

The Indonesian elite is very Chinese. They expect to be treated like family. And like family they'd retaliate all the more viscerally if they're not.

Of course, there is little chance of Indonesia invading the PRC in retaliation, but there would be deep and long retaliation for endangering their nation, in all inopportune manners possible, together with anyone who wants to play ally (the enemy of my enemy...)

The PRC is aiming for dominance in the Pacific: turning a nation that claims sovereignty over a vast and strategically important swath of said Pacific, from a cousin into an enemy, because some of your backroom scum overdid it, simply doesn't cut it ....yet.

Because it's also a demo of PRC power, just in case Indonesia and neighbors might need reminding that there is value in allying yourself with the PRC.

Some cousins are bullies, too.

abufrejoval

Criminals with a conscience? I don't buy it!

Sorry, but I smell a rat here: the only reason these guys backed off was that somebody up their food chain told them to drop it.

If I understood things right, this was a potentially nation crippling attack.

And a nation that is at risk of going under completely, faced with an enemy that only wants money, can't afford to just say no: they will have to negotiate a price for their survival.

So clearly someone in that nation knew this was a government sponsored attack and they had a quiet chat with someone from that sponsoring government about the risks of starting a war.

And that sponsoring government called off their punks, who cannot say no to their puppet masters.

Please, don't just buy into the superficial story!

Kernel tweaks improve Raspberry Pi performance, efficiency

abufrejoval

couldn't agree more on server core and RAM energy savings, can't see it happening, though

I've owned some big Xeon E5 workstations for about 10 years now and watched the HWinfo reports of them with some degree of fascination and dread:

Even 18-core Haswell and 22-core Broadwell CPUs would clock down to tiny single-digit wattages on an idle desktop, and that's with only many of the cores sleeping, not all.

But those 128GB of ECC RAM (non-registered UDIMMs) would never drop below 50 Watts for the memory controller; the DIMMs themselves probably added another couple of Watts each.

Under heavy load the memory controller would report 120 Watts, which was actually more than even an all-core full load on the CPU (HWinfo never reported more than 110 Watts of consumption on a CPU that was officially 150 Watts TDP).

Not only was server RAM the biggest part of the server purchase price, it was also nearly always the biggest energy spender.

And it's not like the RAM was significantly faster (ok, quad channel), than the desktop equivalent DDR4 or did some other magic.

RAM on mobile chips may be another class of device, but it still manages to retain content and do so on less than a Watt for gigabytes: so clearly there is some room for improvement here!

The only problem is that apart from idiots like me who run their own servers, nobody wants energy-proportional compute any more. Once pretty much demanded from the industry by AWS's CTO, they soon corrected course and made sure they ran their servers always near 90% load instead, because energy savings only happened if you failed to make money from them running.

So there we are: the only servers that are any good on idle energy consumption are smartphones and laptops.

Raspberries and other SBCs are terrible power hogs over time, perhaps wasting more energy over their lifetime than a well-designed desktop with much bigger peak power and consumption, but with at least a half-assed understanding of how to save power on unused assets.

No, nobody in his right mind should buy Raspberries for their low power consumption. Any Intel Atom based NUC is likely to do much better in every which way except being cute. Heck, even Core based NUCs might be more energy efficient on idle, but it might take a while until they make up for the higher purchase price.

Server design is completely hyperscaler driven these days. And Ampere is probably best at explaining why spending any transistor on energy savings in server hardware is total folly... unless you're one of those idiots who still operate their own servers.

RIP: WordPerfect co-founder Bruce Bastian dies at 76

abufrejoval

WordPerfect had me stumble on the first step and never recovered

The first computer I owned was an Apple ][ clone that included all the professional extras like an 80-column card and a Z-80 Softcard to run CP/M.

Word processing was an obvious bonus, especially since my handwriting was terrible and I had learned to touch type in high school.

WordStar was great mostly, because it immediately told you how to get around after launching, giving you a legend of the most important navigation keys and the option to hide/restore the help menu at any time to not waste precious 80x24 screen real-estate.

Word and Multiplan likewise gave you immediate hints, although they tended to waste the lower lines on menus and those wouldn't go away. But it was logical, dense, and Word had inheritance for formatting, which was crucial for consistent documents. Multiplan was also always way more logical than VisiCalc, with its relative and symbolic references in the formula language, and I never felt any temptation to use 1-2-3.

WordPerfect left you with an empty screen after launching. In fact just trying to get out of it without resetting the computer turned out to be difficult: none of the known keystrokes worked (this was long before SAA, and the keyboard had no function keys, just the one key labelled "CTRL").

Perhaps RTFM would have made all the difference but with WordStar there was simply no incentive to change and then the Turbo Pascal built-in editor with WordStar compatible commands was the main tool for editing code anyway, and not even just for Pascal.

Function keys only ever arrived with the IBM-PC, none of the early computers had them. But to get to them, you'd have to leave your home keys and look at the keyboard to find them, a complete break in the midst of writing, that WordStar controls didn't suffer, as long as the Control-key was in its proper place. And then they even started to move the function keys from the left to the top, where chances of hitting them blind were even worse! But that's another story...

Coming back to WordPerfect: I've always felt that a product that left me near helpless right after starting should never be called WordPerfect: nothing perfect about being left in the dark!

I guess the name always felt a bit arrogant, so I felt little inclination to ever change my mind.

But I know that some of my favorite writers just loved it, so I guess it did a lot of good for me eventually.

Andrew Tanenbaum honored for pioneering MINIX, the OS hiding in a lot of computers

abufrejoval

Microkernels, those were the times...

I bought the book. And that might have included floppy disks, I don't remember. And if I ever ran it, it wasn't for long or for much.

But just like Linus, I didn't really read the book in full.

I had already read the Unix v6 sources in full, in my CS classes at university.

When Linus decided that jumps to task state segments on an 80386 would make task switching fit on a single page of code, I had been using QNX, UnixWare, Lynne and Bill Jolitz's 386BSD, and a competing µ-kernel called AX for years: Linux combined the worst of everything and I ignored it for years, because I actually had full access to Unix and AX source code, too, and could compare. I was not impressed by what I saw, and I fully agreed with Jochen Liedtke (of L4 fame).

QNX was really cool and very usable already on the tiniest 8086, even without any MMU, and AX was likewise made for Suprenum supercomputers with lots of pure compute nodes that had no I/O whatsoever. So in that sense Minix wasn't that much better than Linux, a badly made monolithic Unix clone, because it didn't make distributed computing the default.

The competition was Moshe Bar with his Mosix kernels, or Transputers, which did that in hardware and at the Occam language level.

What completely destroyed all that computer science for a decade or two was the clock race: who would have thought that a lowly 8086 successor could outperform a "Cray on a chip" i860 and run at several gigahertz?

Today it's all about multi-cores, but instead of dozens, it's millions of GPUs with thousands of cores each: all Unixoids were ever trying to do was to offer Multics abstractions at vastly inferior cost. And Multics was all about multiplexing a single incredibly powerful CPU among as many users as possible to create the illusion of everyone having their own [single] CPU.

Endless OS 6: How desktop Linux may look, one day

abufrejoval

Re: Missing German, immutability clashing with increasing internationality

It's been rather interesting to observe just how different this can be. In Brussels, just about everybody is at least bilingual between French and Dutch, because as much as the two groups are at each other's throats outside the capital, inside you just can't avoid speaking both, it would be a total breakdown otherwise: very few people risk annoying 50% of their customers over something so trivial. And when the francophones speak Dutch, it's slow enough for me to understand as a German. And in some corners of Belgium, you'd have to add a German dialect to the mix, with language barriers often running along a street in the middle of a town and the only bakery on one side: food is such a catalyst!

Somewhat similar in Spain, especially in Catalonia, where the language issue between español and català is politically charged, yet in Barcelona you'll just have people juggle between those two and switch in a heartbeat without even thinking about it. Among my colleagues many then add French and English, simply because they spend hours each day with them in conference calls. And the French, very unlike their close cousins just behind the borders North and South, just don't manage foreign languages very well at all, something they share with their stray subjects across the Channel for some reason.

A little further South, all across the Southern seaboard of the Mediterranean, nearly nobody can make do with only one language. And even if they only speak Arabic, that's already two, the local variant and what they speak on TV. In the Maghreb region, most will have school or university in either French or Spanish, plenty of Arabic at the mosque and then one or two of the various Tamazight dialects at home.

I've met colleagues from all of those places and several more working together in Dallas for a project some years ago and was trying to enjoy the internationality of the setting.

But evidently I was the only one, because starting with the Hispanics, almost no one dared to speak anything but English, even the French (well, the Canadians had no issues with Québécois, but that could have been Zulu as far as I could tell). The peer group pressure to speak nothing else was quite astonishing and a total surprise, because it carried over to places like restaurants. While family owners evidently saw no issue speaking Spanish among themselves, they acted as if they'd been caught in an illegal act when I addressed them in my slightly Southern Malagueño, which is very close to what got exported to Latin America.

I learned my first variant of English in small town USA, South Eastern Ohio. And it's still somehow the easiest and most natural for me to use. But I've spent four decades of my professional life mostly with either Brits doing RP or Europeans slaughtering it. Continuing with my Appalachian seemed like running a false flag operation and clearly would have had me stand out for something that wasn't even me. As a result you'll catch me zig-zagging in a mixed group or just following whoever started the conversation. I remember a project with some Spanish and mostly Brits from the UK's North-West, so I used my finest RP for months. Then this Kiwi or Aussie walked in one day and started some friendly banter, which had me answer in the closest variant in my portfolio, which was my good ole Mid-West... People who'd been working with me all those months were completely stunned and looked at me agape, as if I'd suddenly turned into a spy or traitor...

Even accents are that political, and these days I can't identify with either country, nor with any of the many classes in the UK.

The strong political pressure for English-only in the US has definite advantages, eliminating a lot of complexity that many other places have no choice but to deal with.

Yet somehow I think that the bilingual approach a lot of countries with dozens if not hundreds of local languages have chosen (China is estimated to have 600 different ones, with Mandarin even spoken in the global Chinese diaspora) is going to remain a global minimum, with three or more becoming rather more normal the more globally we work.

abufrejoval

Missing German, immutability clashing with increasing internationality

I speak every language they offer except Portuguese (I understand Gallego pretty well), and have to juggle all of them pretty much in parallel on top of my native German, which I still prefer as a default on my computers, though others in the family prefer English or French. And in my workplace things quickly get more complicated: just my Swiss, Belgian and Spanish colleagues routinely deal with 3-4 languages, and that's only within Europe and a Latin alphabet.

I think the myriad of linguistic permutations are the worst issue with these immutable images, because there are quite a lot of places in the world, where people routinely need to deal with several languages and input systems at near any level of granularity, from per sentence or conversation, to per application, time of day, or day of week.

I don't know if they should try an EU edition and perhaps some other clusters for areas where people are multi-lingual and multi-alphabet by default. It could get out of hand quickly.

So perhaps they need to build some kind of a staging cache, which allows automated builds of multi-language images, so that you still have the advantage of immutability on the client, yet offer a degree of customization, while maintaining reproducibility and the ability to fail-back.

Codd almighty! Has it been half a century of SQL already?

abufrejoval

Funny thing: everybody thought that functional languages were too complicated

I had to do stuff like ML and Prolog at university near that time, also "initial quotient term algebras" in formal proof methods.

Can't say that I liked it all that much, I just wanted results and loved e.g. Turbo Pascal, because it was so fast to compile and detect syntax errors, making for quick turn-around times even on tiny 8-bit CP/M machines like my Apple ][ clone with a Z-80 card.

And while all that formal, functional and logical stuff was somewhat fascinating, everybody just thought it was too impractical and complicated for anyone to use in daily life.

Only years later it dawned on me that spreadsheet formulae were functional. And by then I really wanted them inside my databases, too, to be evaluated as part of the query from within the DB engine, while the queries were functional too. You could do HPC that way, much better than in Fortran!

I guess some of the fuzzy things I had in my head were actually realized in MUMPS or Caché. I guess the lack of an open source variant meant I never found out.

So in fact these "complicated" languages may have seen far more use by far more people than those classic imperative programming languages we IT guys always thought were what made us better than those mere "users".

And today they also seem more likely to survive, because loops, which we were taught to think in because they pretty much offered inductive proof of correctness, now only mean you haven't done the work to parallelise your code across those tens of thousands of cores everybody with a smartphone or better has at their fingertips.

Because in functional that's natural and imperative sequential is really an aberration...

Firefox 124 brings more slick moves for Mac and Android

abufrejoval

Pest Control (Re: Consent-O-Matic add-on)

Usually adding uBlock Origin and enabling all filters is one of the first things I do, not just with Firefox, but in every browser.

But sometimes I get distracted and forget and sometimes I face computers in the extended family, which I haven't set up in that way...

And it's only then when I get reminded of just how gruesome and intrusive the Internet is for most folks!

All that bling, all those pop-ups, all those cookie banner waits are a complete nightmare that I'd very nearly forgotten, once those ad-block maintainers (and perhaps even the Mozilla Foundation) started to put daily sweat and tears into keeping them out: that constant battle between the obnoxious street-criers, cutpurses and all the other street scum vs the defense team is something I've just taken for granted for years now. I can't help feeling time-warped into somewhere between the middle ages and the introduction of sewers, when the streets overflowed from chamber pots being poured out onto the street and plague-infected rats might rather snap at you than let you step on a dry spot.

And when things do get through (e.g. recent Youtube nags), they typically get sorted out in just a few days.

I was very reassured in my choices when I saw that the newest Raspberry Pi OS just came with uBlock Origin installed and fully enabled within Firefox (Chromium, too, I think), almost the standard setup I'd choose as well.

So far at least, uBlock Origin is pretty consistent across browsers, so while I'll typically enable browser-based defences, destroy-cookies-on-close, and "Do not..." settings, I put most of my faith in uBlock Origin.

Which of the two kills the rats first, I haven't bothered to check, because by the time I get to see the page, even the cadavers are gone!

The Tab Session Manager is the only other add-on I sometimes add on my main machines, where some tabs are left open for days if not weeks and thus cross patch days.

abufrejoval

What's all that noise? For me Firefox just works fine on everything...

I can't even remember when or why I went with Firefox, must have been really early days.

But ever since, I've just seen no reason to change. IE was garbage and Chromium had one giant disadvantage: it was made by Google.

And it's just deeply unhealthy to have all parts of an eco-system owned by a single company. Same with Edge or Safari.

If Mozilla were to publish their own OS (as they did at one point), I'd probably run Chromium on that, just because I consider balance of powers essential, to society in general, and to my software environment.

I run a lot of systems, dozens, really, physical and virtual, spread across Linux, Android and Windows. And I need browser access on nearly every one.

And I want consistency, same layout, behaviour etc. when I switch between them.

Well, at least as much as possible; there are some differences between a mobile phone and anything desktop (even with a touch screen), that are implied by the form factor.

Firefox delivers that, Edge is a no-go, Brave comes close (and is often the 2nd option), I gave up on Opera when it became Chinese and the phones became powerful enough to handle Firefox.

Of course, I dislike having to disable all the money makers like Google search or that "Pocket recommendations" stuff every time I run a freshly installed Firefox, but even that is a lot faster, when it's pretty consistent across OSs and versions.

I don't understand the "Firefox is garbage" allegations: everything I do on the Internet works as expected, except where sites get too snoopy or refuse the ad-blockers, which I obviously run with pretty much all filters enabled for sanity.

And those sites I'm happy not to revisit, unless it's the government and I have to (with the ad-blocker disabled).

I got the whole family on Firefox, too, and it's been easy probably because I started them there long ago, before Chrome or Edge became as aggressive (and repulsive) as they are today.

If it were to go away, that would be very hard indeed, much like finally letting go of Microsoft Office completely and embracing StarOffice, sorry, LibreOffice, despite its quirks.

Functional or performance differences that I noticed have been very rare.

Google Maps in the 3D Globe view is really impressive in terms of how much it's able to squeeze out of relatively modest hardware. For the longest time I've been astonished at how it would render the neighborhood much better (in terms of accuracy) and much faster (in terms of speed) on a modest Atom system even at 4k than Microsoft Flight Simulator on an RTX 4090.

But for the Atom (or smaller ARM SBC) I generally had to use Chromium to get that speed; Firefox stuttered on these smaller systems, while I never noticed anything wrong on my normal "desktop" or "workstation" class machines.

Even that has changed now, I can't see any noticeable disadvantage for Firefox e.g. on Raspberries 4 or 5 with the current software.

Where I actually *do* see a disadvantage for the Chrome-based browsers is on WASM, where they regularly detect and use less than the full set of cores on machines with lots of cores and threads.

Yes, only Chromium at the moment seems to enable WebGPU, but once that becomes popular enough, hopefully that will change: I'd really like to see WASM being able to take advantage of the GPU as well, but hardware independence and the ability to exploit ISA extensions and accelerators are rather too conflicting to sort out easily.

A path out of bloat: A Linux built for VMs

abufrejoval

Windows Subsystem for Linux uses 9P and why both IBM and Intel hated VMs

In my view WSL mostly exists so even Linux users have to pay an M$ software tax, which is why I abhor it generally (and continue with Cygwin out of spite).

But I did notice, that they know how to use the good stuff (9P) for their ulterior world domination gains.

Did IBM invent the hypervisor?

I'd say that people at IBM invented the hypervisor, but pretty much against IBM's will.

IBM was all bent on making TSS (Time Sharing System), Big Blue's take on a Multics-like OS, a success instead, and its failure was recorded in a famous study and book called "The Mythical Man-Month", AFAIK.

But there were far too many people with 360 workloads out there who needed to make them all work at once on their newest 360 machines, some of which came with the extra hardware bits that made VMs possible. So some people at IBM started this skunkworks project, which some IBM execs later noticed and turned into a product, VM/370, pretty much out of necessity because TSS had utterly failed.

So I really don't want to give IBM the hypervisor credit, but to the people who made it happen there anyway.

And to Intel's everlasting shame they made sure their 80386 didn't have that same full set of extra hardware bits, so VMs couldn't be done on their 32-bit CPU, only 16-bit VMs were supported.

It would have been an obvious and easy thing to do, but Intel was evidently afraid they'd sell fewer CPUs if people started consolidating their workloads.

And that's why VMs on x86 became such complex beasts: the abstractions were simply never at a similar height as on the 370.

Again it was a skunkworks project, but not by Intel people: Mendel Rosenblum, Diane Greene and other collaborators (and VMware founders) enabled VM support via bits Intel had added to x86 for the notebook-oriented 80386SL and 80486SL CPUs, which had introduced System Management Mode, a ring -1 layer in the CPU, to allow operating systems like DOS to be run on battery-powered hardware. In Intel's typical cut & paste manner, that got included even on non-mobile chips, where it had no official purpose.

VMware employed a few other patented tricks, like binary translation of privileged guest code, to make it performant enough for real usage and were sailing towards a future of riches, which a very furious Intel then wanted to shoot down rather quickly. They had withheld VMs from their 32-bit CPUs because they wanted to sell more of them, not to have this upstart eat the extra value.

Only then did Intel add the necessary hardware bits and sponsor Xen's transition from a software VM approach to "hardware virtualization", so VMware's patents lost their value and the company eventually became ready for an internal takeover via one of their creatures: Mr. Gelsinger, who had held the keys to VMs before and probably chose to withhold them.

Broadcom moves to reassure VMware users as rivals smell an opportunity

abufrejoval

Rebirth of mainframe licensing, IBM should sue them

IBM came up with this way of continually "licensing" the use of what you already owned when Amdahl and others created 360 clones.

Personal Computers were the result of that squeezing reaching the point where the pain made even IT managers jump.

I bought my first VMware in 1999, because I just loved how they circumvented Intel stopping short of full VMs by exploiting the SMM of the 80386SL/80486SL: they were the underdog, and Intel was so furious it actually sponsored Xen to piss into VMware's patent pot after immediately pulling out all the full-virtualization stops.

And then they went as far as having an Intel guy run the company and set it up for sale into the ground. Is it *that* personal?

Too bad Qumranet's KVM is owned by IBM now.

And that's the company that would also rather keep Transitive's QuickTransit in their poison locker than have humanity benefit from a great idea.

Forgetting the history of Unix is coding us into a corner

abufrejoval

That's a very long and windy buildup for Plan9

I've struggled for many years trying to understand and explain how Unix could survive for so long, given its utterly terrible shortcomings.

For starters, please remember that the very name "Unix" or "Unics" was actually a joke on Multics, an OS far more modern and ambitious and a fraction of the current Linux kernel in code size and finally open source today.

Everything *had* to be a file in Unix, because the PDP only had a couple of kwords of magnetic core memory and no MMU, while Fernando Corbató made everything memory on Multics, a much more sensible approach driven further by single-level store architectures like the i-Series.

I love saying that Unix has RAM envy, because it started with too short an address bus :-)

And I was flabbergasted when Linus re-invented Unix in about the worst manner possible: there was just everything wrong about the initial release! I was busy writing a Unix emulator for a distributed µ-kernel inspired by QNX at the time (unfortunately closed source) so I could run X (the window system, not the anti-social cloaca) on a TMS 34020 graphics accelerator within the SUPRENUM project: I had access to AT&T Unix and BSD source code, so I wasn't going to touch his garbage even with a long pole...

...for many years, by which time none of his original code bits survived; but his social code, his excellent decision-making capabilities, had shown their value in accelerating Linux's evolution via developer crowd scale-out, far beyond what the best husband-and-wife team (Bill and Lynne Jolitz) could do.

I've always thought that the main reason why the Unix inventors came up with Plan 9 was, that they didn't want to be remembered for the simpleton hack they produced when they came up with Unix to make use of a leftover PDP that would have been junk otherwise. They felt they could do much, much better if they had the opportunity to turn their full attention to an OS challenge!

In a way it's like the Intel guys, who hacked the 4004, 8008, 8080 and 8086 but wanted to show the world that they could do *real* CPUs via the iAPX 432, i860 or Itanium.

So why did those clean sheet reinventions all fail?

The short answer is: because evolution doesn't normally allow for jumps or restarts (isolations can be special). It will accelerate to the point where the results are hard to recognize as evolution, but every intermediate step needs to pay in rather immediate returns.

(And if in doubt, just consider the body you live in, which is very far from the best design even you could think of for sitting in front of this screen)

Once Unix was there and had gained scale, nothing fundamentally better but too different had a chance to turn the path.

I've tried explaining this a couple of times, you be the judge whether I got anywhere close.

But I've surely used many words, too.

https://zenodo.org/records/4719694

https://zenodo.org/records/4719690

A little more on the cost of code evolution:

https://zenodo.org/records/4719690

Or the full list via https://zenodo.org/search?q=Thomas%20Hoberg&l=list&p=1&s=10&sort=bestmatch

Sam Altman's chip ambitions may be loonier than feared

abufrejoval

An investment of trillions requires a matching return: who would pay that?

My doubts actually started with IoT. The idea of having all things in your home somehow smart, sounds vaguely interesting... until the next patch day comes around and you find that now you have to patch dozens or more vulnerable things, most of which are more designed to feed the vendor's data lakes than providing any meaningful empowerment or value.

I've also always marvelled at my car vs. my home: my car was made in 2010 so it isn't even new any more, yet everything inside is connected and "smart", will adjust to whoever is driving it automagically, and things happen at the touch of a button or even on a voice command, if that were actually any easier or faster.

Of course, once I took the wrong key, the one which had everything adjusted to a person half my size, and I feared mutilation if not death as I searched in total panic for a way to halt the seat squeezing me into the steering wheel... And since I never really came out of home office, I tend to spend so little time in my car that I often can't even remember how to turn on the defrost when the season changes.

Yet sometimes I find myself wanting to click my key when I enter my home, especially when I'm carrying my supplies, hoping the door would open just as automagically, perhaps even carry the darn boxes up two rather grand flights of stairs. You see, my home was built around 1830: mine is the part under the roof where the domestics used to live, who never found a worthy successor, but gave me perspective.

You see, Downton Abbey provided me with the perfect vision of what IoT should be: life with non-biological servants. Most importantly, life not with intelligence somehow scattered all across things, but with an absolute minimum of non-biological servants: one servant per domain, the butler for the shared family mansions, a valet or lady's maid for each individual's personal needs, a chauffeur for all-inclusive transportation, an estate agent-secretary to manage all fortunes, that's it! Delegation for the lesser services like cleaning and food supply, scale-out for grand events, coordination amongst them, and life-long memory for anything relevant would all be part of their job, not for me to worry about.

Alexa, Siri, Co-pilot, none of them ever came close to even envisioning that for me. And you know where their loyalty lies: Downton Abbey has plenty of proof of what happens if servants are disloyal to their masters. Actually, what I really want aren't even servants that might just go off and marry or have a career of their own, but good old Roman/Greek non-bio-slaves where obedience is existential, even if it includes proper warnings against commands that might in fact be harmful. And I don't recall slaves ever being more loyal to their slavers than their owners. So just imagine how Apple would be treated by owners a few centuries or two millennia ago!

Yet, how much would all of that be worth to me or the vast majority of the population, which are consumers?

Trillions, after all, means a thousand bucks for each individual with billions of consumers... And that is just the chips portion of what it takes to make it happen.
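
Just to make that arithmetic explicit, here is a back-of-the-envelope sketch in Python; the capex figure, consumer count and payback period are round numbers assumed purely for illustration, not anyone's published plans:

    # Back-of-the-envelope: what each consumer would have to hand over
    # for a given AI build-out to pay back. All numbers are illustrative.
    capex = 2e12        # assume $2 trillion sunk into chips and data centres
    consumers = 2e9     # assume 2 billion paying consumers
    years = 5           # assume investors want their money back within 5 years

    per_head_total = capex / consumers                 # ~$1,000 per consumer
    per_head_monthly = per_head_total / (years * 12)   # roughly $16-17 per month

    print(f"${per_head_total:,.0f} per consumer overall")
    print(f"${per_head_monthly:,.2f} per consumer per month, before any margin")

And that is before a single cent of profit, energy or depreciation on top.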

It comes back to my smart car: would I have paid extra to have all that intelligence in it?

Not really, I bought it used. It just happened to have all that stuff in it, and I would have rather liked to forego those "extras". I paid for the room, the transport capacity and its ability to cruise the Autobahn at speeds I consider reasonable with adequate active safety.

It's really a lot like the electric sunroof which I couldn't opt out of: it limits the headroom every time I enter the car, yet by the time I find myself actually wanting to use it, it's typically broken and would be very expensive to repair: so it winds up just being a glass brick covered up 99% of the time. I'd have much rather had the cruise control, but a used car with these options wasn't on sale when I needed a replacement.

Same with the electric seats, which may be ever so slightly easier to adjust, once you've figured out how they work and how to keep them from breaking your bones. But they become one giant liability if they're stuck in some ridiculous position, because my son wanted to show it off to his lovely but tiny girlfriend.

Turns out the main reason I've never seriously considered making my home "smart" is the fact that I need it to function 100% of the time: I don't really have a backup if the door failed to open, the windows failed to close, or if the chairs at the dinner table were suddenly glued to the floor.

So count me very sceptical when it comes to AI-based automation creating empowerment with enough value and trustworthiness to choose the AI variant over the stupid one EVEN at EQUAL PRICE.

Chances of me actually paying extra? Very ultra slim with an extra dose of heavy convincing required.

But next comes the corporate angle, whence my disposable income currently comes.

Yes, there may be a lot more potential for money savings there, but how much AI are consumers going to spend money on once it has reduced workforces by the percentages corporate consumers of AI are hoping for?

New jobs and opportunities take time to arrive, and one thing is very sure: those investing billions if not trillions today cannot wait a decade for demand to pick up again. Their shareholders demand sustained order entries month by month, quarter by quarter, and returns ideally within a year.

And that's where I already see bloody noses coming all around, with Microsoft & Co. spending billions, or the GDP of smaller countries, on nothing but AI hardware.

I can hardly see myself using Co-pilot even if they force it into my desktop and my apps.

Actually, much of my late career has been spent worrying about IT security, and the very idea of Microsoft infusing every computer with an AI begging everyone to use it gives me nothing but nightmares about the giant attack surface they are opening up: that company still doesn't even manage to print securely, decades after selling their first operating system; CP/M was safer than that!

Much less can I see myself paying for it, nor do I see 90% of consumers paying a significant amount for it either.

Sure, that's belly button economics, but I humbly consider myself mainstream and ordinary enough to represent your regular John Doe.

Investors spending billions and trillions need matching returns and I fear their desperation more than anything else about AI.

PIRG petitions Microsoft to extend the life of Windows 10

abufrejoval

Re: Why extend Windows 10's life when Windows 11 could do just fine

Precisely. I've been running Windows 11 directly on Skylake hardware, which is nearly exactly the same as Kaby Lake in anything that an OS would care about, and I'm also running the very latest Windows 11 just fine on Haswell and Broadwell Xeons under KVM as a hypervisor. With GPU (and USB) pass-through I even get it to run games at native performance on Windows 11.
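
For reference, that KVM setup boils down to something like the minimal sketch below (Python is only used to assemble the QEMU command line). It assumes an IOMMU-capable host with the dGPU already bound to vfio-pci; the PCI addresses, USB IDs and disk image name are placeholders, not my actual configuration:

    import subprocess

    # Minimal QEMU/KVM guest with GPU and USB pass-through (illustrative only).
    # The GPU (video + audio function) must already be bound to vfio-pci on the host.
    qemu_cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",                    # use KVM as the hypervisor
        "-machine", "q35",
        "-cpu", "host",                   # expose the host CPU features to the guest
        "-smp", "8", "-m", "16G",
        "-drive", "file=win11.qcow2,if=virtio",
        "-device", "vfio-pci,host=0000:01:00.0,multifunction=on",  # GPU video function
        "-device", "vfio-pci,host=0000:01:00.1",                   # GPU audio function
        "-usb",
        "-device", "usb-host,vendorid=0x046d,productid=0xc52b",    # example USB receiver
    ]
    subprocess.run(qemu_cmd, check=True)

Once the whole card is handed to the guest, Windows drives it with its native driver, which is why games run at essentially native speed.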

Just proves those checks are completely arbitrary and nothing but planned obsolescence in cahoots with Intel and AMD.

Microsoft needs to be broken up and the OS part (among others) spun off into a separate company under strict guidance not to create artificial obsolescence.

Perhaps it's ok to let go of 32-bit x86 today, but anything 64-bit should run Windows, and there is no reason a TPM should be required, especially since not everyone even wants to encrypt their disks.

I much prefer my disks movable between systems and easy to copy, so I always disable encryption.

And with Windows 12, M$ is likely to go even further in terms of obsolescence and integrate ever more AI backdoors, when they are already acting as if they owned your Personal Computer.

Apple users may be happy to give up any right to self-determination to their iNanny, but when I hire a janitor or property manager for my PC (that's what an OS is) I don't want him to run my life or report on me to his agency. I just expect him to do the job he was hired for and not get smart on me!

Get 'em while you can: Intel begins purging NUCs from inventory

abufrejoval

At current prices you can not only enjoy their quality, they become a steal!

Originally I hated the NUCs, because they had everything I wanted, except the Mini-ITX form factor I was after. I wanted the ability to control their noise via self-selected (Noctua) fans and the extra expansion space and flexibility Mini-ITX offers.

But nobody sold mobile chips in that form factor and eventually even Atoms became Unobtainium.

I really wanted low-idle-consumption µ-servers, which could deliver a bit of a punch at peak on a budget: is that too much to ask?

I eventually got one of each, a NUC8, NUC10 and NUC11 each with the top-of-the-line i7, added 64GB of DDR4 RAM, NVMe and a 10Gbit NIC on the Thunderbolt port to create a RHV/oVirt HCI cluster.

But when the prices on the enthusiast variants started dropping near their iGPU cousins, it got my attention!

Currently the Tiger Lake i7-1165G7-based NUC11 Enthusiast (NUC11PHKi7) with a 6GB RTX 2060m included sells at the same €450 price as the same without one, or its Alder Lake brethren. That's a DLSS 2-capable dGPU which runs every game up to Cyberpunk 2077 at ultra settings in FHD near 60Hz or better.

And even as a NUC it's so much better than the Panther Canyon NUC11 (NUC11PAH), which I also have, because it's much quieter, even when you push it to 64 watts of near-permanent PL2.

The classic Furmark+Prime95 worst case runs both fans, but they're hard to notice and never a bother.

The BIOS of all these NUCs, btw., is much more generous than Intel generally is with overclocking: PL1, PL2 and TAU as well as fan curves can be set to anything physically possible on every NUC since Gen8. But on the older NUCs I did have to play with these settings to achieve a similar level of unnoticeability to make them home-lab compatible, 24x7.

I got my Serpent Canyon NUC12 for €777, but it currently sells at €600 with German VAT included, and at that price it was already too much to resist at least giving it a 14-day trial: I didn't return it, because again, it's an i7-12700H-based µ-server with a plethora of ports and internal options with an Arc A770M thrown in for free!

You basically get a 16GB VRAM RTX 4060 Ti equivalent with a full system included... except that XeSS isn't quite DLSS nor will it do CUDA: but I got an RTX 4090-based workstation for that.

Serpent Canyon has the same loving attention to physical details, e.g. the screws you need to loosen to add RAM and NVMe drives won't come out but hang on, so they can't get lost (too bad that's impossible for the M.2 drive). From Gen8 to Gen12 you can just see how the engineers tirelessly tried to improve every aspect of the design.

In my testing the Serpent Canyon was good enough to drive a 27" 2560x1440 curved and game-optimized Samsung display near its 144Hz limit with practically everything except Cyberpunk, where it at least exceeded the 60Hz baseline beyond which I mostly don't care.

But with 100 watts more to cool, the two small fans which did such a great job on its predecessor are much harder to ignore when and if it gets near the limits of the 300-watt power brick, which seems to weigh more than the system.

Still, it's hard to build a more powerful system for less money, and near impossible to build one with as little idle power and noise, when used as a µ-server or office desktop.

At list prices they were hard to recommend, at the current prices you almost can't go wrong: you get a fully competent gaming computer at the price of a console.

I've tried other NUC-alikes like BRIX, and they were never nearly as good and feature-complete as Intel's NUCs: fan control, BIOS options, sockets and ports, they'd "economize" everywhere and the customer value suffered more than they seemed to save.

Mummy and Daddy Musk think Elon's cage fight against Zuck is a terrible idea

abufrejoval

Please let Putin join and have them bash their brains in all together

there are probably a few more names to add to the list, but I believe the world would be better off if none of them came out fit to continue.

Probably more eco than a joint one-way to Mars...

Here's how the data we feed AI determines the results

abufrejoval

Works as designed

Thanks for that nice article!

It confirms both my personal bias and my extensive testing, not with the publicly available models, mind you, but with the ones I could operate somewhat more safely on my own kit.

Now as to whether you should qualify The Pile et al. as "garbage", I don't know. But it's absolutely biased towards what humans will react more strongly to.

And with computers helping, along with cognitive science and the stashes of data the likes of Meta have at their disposal, that bias will be used to influence, but generally only towards where people want to go anyway: it's not enough to make rivers flow backwards.

We can observe that rather well these days, because try as he might, it's not enough to turn Meta's Zuckerberg-navelverse into a roaring success, which I consider a giant relief and would like to see play out to utter agony.

But it will continue to dis-empower the underdogs, because it's humanity that creates them, no matter how loud they're yapping!

Happy coronation to all who consider themselves doing better!

HPC's lost histories will power the future of tech

abufrejoval

Too big for gaming, too small for GAFA ML--who are facing dire straits

I don't quite know what's worse: that these GPUs (and CPUs) are getting too powerful for any practical gain, or the fact that some overhyped games are still so utterly bad: Microsoft's Flight Simulator still sucks, even on Raptor Lake and an RTX 4090, because it's pretty much single-threaded and does a terrible job at merging real-world map data and generated terrain.

But when it comes to machine learning, really the more important market that the GPU makers want to address with this kit, where consumers are really just meant to provide the scale benefits, these GPUs are becoming much too small and difficult to scale out.

Cerebras and Graphcore solve the scaling issues for another 100:1, but they don't have consumers to fall back on if GAFA stumbles or fails.

HPC may survive because it's paid from taxes not sales. But apart from mass surveillance I don't see that much machine learning governments might sponsor (unless you include recommendation engines from politicians).

Underwater datacenter will open for business this year

abufrejoval

Re: WTF

Pretty sure wave friction is heating the planet significantly. So if you use tidal power generators and use that to run the data centers, net net it would be ok.

Except for the pollution and the fact that nobody will lift and remove a DC that stopped earning money.

abufrejoval

Re: How do you keep the looters away?

Short term, I wouldn't be worried.

Eventually, I'm sure either DC eating bacteria (that ran out of plastic) or lobsters that grew incredibly powerful claws as a consequence of high-frequency inductive electrical muscle therapy might evolve.

abufrejoval

How do you keep the looters away?

I can see that the Coast Guard or Navy might keep foreign marauders away from phishing data containers where they lie within coastal zones. And I guess there are legal precedents for keeping thieves from open-water salmon farms, based on historical fishery law. But anything unmanned left lying on the seabed looks awfully much like flotsam to me, free for anyone to salvage and take possession of. I'm thinking that harvesting open-water wind turbines isn't more popular mostly because they seem to be rather firmly anchored into the ground.

And then, once this becomes mainstream, how do you govern and manage the space? Evidently coastal property doesn't automatically extend into the water; in many more civilized places the coast itself is public land, and on top of that the sea is both shifty and three-dimensional: top positions could wind up getting slow-cooked by ground feeders in a race to the [ocean] bottom.

I've always been wondering about the legalities around marine cables, especially since there seem to be specialists who dig them up and ...steal[??] them: how can they be stolen, when they have been abandoned in international waters? How does the repair ship even stop a theft in progress, other than by superior force?

And should there be cabling conflicts due to cross-overs and repairs required: how is that being managed?

Once you add server farms to sea cables, complexity is sure to follow them closely.

Concentrated mass deployments could imply heat pollution, which is already a big issue at many of the power plants that use lakes and rivers for cooling. That's why a combination of tidal power generators with a DC (data centre) add-on seems much ...cooler, especially with DC (direct current) power.

My biggest worry is that those open sea data center containers will "unfortunately" get lost at sea, once they've outlived their usefulness (or the owning shell company has sunk).

abufrejoval

"Maintenance was delayed by a denial of surface attack..."

...because Ever Given needed a new business model to return to profitability

Why you should start paying attention to CXL now

abufrejoval

And how does a DOS like Linux know how to handle a dynamic heterogeneous fabric...

...when it's never even understood networks and treats GPUs like any other dumb device?

Unix was born a DOS (disk operating system) and a God: sharing, delegation, coordination, submission, social interaction with other instances are entirely alien concepts: just try how accommodating it is by pulling some CPUs and RAM, or switching out GPUs! There is a reason VMs are much easier to migrate as a whole than in bits that match fluctuating resource demands.

Yes, some cloud vendors will eventually be able to make that work with their kit; they already have so much bypass code that the Linux kernel is only used for boot I/O. But once scheduling has also gone library, the first memory to reclaim would be the Linux bootstrapper.

Not that any other popular OS is naturally social....

Windows to become emulation layer atop Linux kernel, predicts Eric Raymond

abufrejoval

Re: Yes, of course

I'm afraid you misunderstand.

Of course Google/Facebook/Amazon etc. won't use Microsoft's Linux, just like they don't use RedHat or Ubuntu either.

But every corporate, governmental and private user will find it much harder to use anything but the Linux that Microsoft publishes, which unfortunately only works when you pay a Microsoft tax, directly or indirectly, for its dependence on Azure.

It's precisely the Chromebook/G-Suite, iPhone/AppStore, Android/PlayStore approach, because copycat is what Microsoft has always excelled at.

SoftBank: Oi, we paid $32bn for you, when are you going to strong-Arm some more money out of your customers?

abufrejoval

Pretty sure China will take it and return the Huawei favor!

little more to add, really.

Deepfake 3.0 (beta), the bad news: This AI can turn ONE photo of you into a talking head. Good news: There is none

abufrejoval

Are *you* real, Katyanna?

Given that one or the other editor here at The Register seems to have picked up a foreign language or deux, I cannot help noticing that "Quach" sounds very much like "Quatsch", which means 'nonsense' to Germans... which begs the question above, Alias o'Dabbs?

Story is too surreal for a fake and too well done for a run of word2vec inference.

What's this under the Christmas tree? A gift-wrapped Mellanox, for Microsoft? Say it ain't so

abufrejoval

Need a new type of anti-trust

Of course, Microsoft won't dominate the network market after that acquisition. But as a web hoster you can now much too easily find yourself in a position where your ability to compete with Azure is impaired by using Mellanox switches and NICs.

I don't like this one bit better than Bigfoot being bought by Amazon, Google or Facebook (nor RedHat by IBM for that matter).

Mark Zuckerberg did everything in his power to avoid Facebook becoming the next MySpace – but forgot one crucial detail…

abufrejoval

Actually, most people enjoy being lied to... until they find out

In Germany's "Guardian" these days (http://www.spiegel.de/international/zeitgeist/claas-relotius-reporter-forgery-scandal-a-1244755.html) we see proof of the opposite: the best lies get awards, because they make such beautiful stories, much better than the soul-numbing prose the real world etches into the lines below our eyes. The truth is so difficult and so complex, we simply yearn for something beautiful and simple.

That's why populists flourish and Facebook became an Internet supernova. Just because populists typically hang after a bang and supernovas leave nothing worth looking at doesn't mean that history won't repeat.

I'll make sure to copy all the great tech stuff they open source and publish these days before they're gone; something else will crop up: always does.

Memo to Microsoft: Windows 10 is broken, and the fixes can't wait

abufrejoval

Re: Am I missing something here?

Windows NT 3.51 was ok, especially the multi-user variants from Citrix and NCD (X11 support).

But NT 4 was a nightmare: Any cheap printer driver that wasn't thread-safe could crash a terminal server with 50 users on it just because they decided to go against everything Dave Cutler had been preaching and put device drivers at ring 0 to make them fast enough to beat Apple.

abufrejoval

Re: At least the Stasi had ...

No they didn't, they had shitty communist stuff. I am ever so glad they didn't, and I keep waking up drenched in cold sweat, imagining "what if?" they had today's technology at their disposal. As with the Nazis, it wasn't the brightest who ruled at the top of this repressive organization, if only because Mielke was little more than a shifty bastard and brute.

abufrejoval

Re: "Sell Office on Steam, make sure it runs on Linux, too"

Edge, Chrome etc.:

I guess what I dislike most about Edge is that it's Windows-only. If it were simply another browser besides Firefox, Opera and Chrome it would be worth a try, but tying it to Windows and pushing it the way they do is just not doing anyone a favor. Every time I switch the preferred browser to Firefox, I have to click extra and confirm that I really am not interested in even trying Edge: I won't try, because they don't make it a simple choice. And I won't try, because they overwrite that setting on every upgrade for every user: an upgrade is supposed to maintain the previous settings, but they overwrite them every time. It shows a lack of respect for user choices and I won't even consider using one of their two browsers precisely for that reason.

Of course I am also not using Chrome if I can avoid it, for the exact same reason: they make it hard to do what I want. I want to delete all cookies when I close the browser. Chrome makes that extra difficult and you're left thinking that "delete all" actually means "delete all non-Google cookies" to them.

That's at least lack of respect if not downright fraud, so I treat Chrome with the respect it deserves.

Everybody has a bias, but I tend to use what fits best. I do prefer running my desktop on Windows over running it on Linux, because it tends to be snappier and I am quite simply more used to it. In fact I like it so much, I'd love to run Linux Docker containers on Windows without having to switch the OS. They come with a Linux base, because that's what developers use and because it does a rather good job at most things servers do: even Microsoft seems to agree. Does that make me a Linux dreamer? Not in my book.

I own Crossover Linux and regularly try running Windows applications on Linux as well, just to see how or if things are progressing. Typically that doesn't last very long and I am back to Windows. Actually these days I even prefer RDP over X11, even if X Windows originally (except perhaps for SunView or NeWS) was the only proper remote GUI environment and much better than the first MS terminal servers.

Have you tried Microsoft Office for Android? I cannot see it being any worse than the Windows variant. And there is plenty of other software out there which gives a much better desktop experience than some of the 'native' Linux apps. I run PhoenixOS, an Android-x86 variant for PCs, as one of the many operating systems I regularly track for their evolution. It's perhaps the best desktop OS I have found for low-power Atom computers: much snappier and more flexible than any CentOS/Ubuntu/FreeBSD/PC-BSD/Hackintosh or Windows.

I actually run ext4 on Windows via a Paragon Systems add-on. It's just that they tried to position ReFS against ZFS and Btrfs and failed somewhere mid-way, wasting tons of engineering time they could have spent on QA. AFAIK file systems can be dynamically loaded on NT and thus not risk violating the GPL. NT at its base was very much designed by Dave Cutler to be a multi-kernel-API OS, supporting OS/2, Posix, Win32 and NT from what I remember.

I have an MSDN subscription so I typically run Windows server editions on my machines, if only because that way the store and all the data forwarding are disabled by default. I like any-2-any RDP, NFS services and some other stuff the server editions activate, but I hate drivers which fail because they won't support 'servers' that are actually also workstations: either way there are annoying restrictions which are all politics.

And unfortunately Hyper-V is about the worst hypervisor, while VirtualBox is wonderfully consistent across Windows/Linux; it will actually use KVM as the hypervisor on a Linux host, having dropped whatever hypervisor it originally had. I guess if Hyper-V as a type 1 hypervisor could be used with VirtualBox the way KVM is on Linux, I would prefer that to using VirtualBox as type 2 as I do now, because I move VMs between Windows and Linux hosts quite regularly.

Did I say that I need access to Nvidia GPUs in the Docker containers for CUDA applications? Not sure that's anywhere close to working on Windows.

Vulkan is a standardized API. If Microsoft had serious concerns about the quality of the API, I am quite sure the Khronos Group would welcome their contributions, especially for a new ray-tracing extension or augmented reality.

But instead Microsoft is pushing their proprietary derivatives, the way they have always done. And I give them the respect they deserve for that.

abufrejoval

Here are some tips on how to reduce the testing workload

Slowing down is not really an option; slimming down should help make the workload manageable.

Short version: Concentrate on the Operating System, not an ecosystem of vendor lock-in that nobody wants

Detailed version for things to kill:

- The Microsoft shop or store or whatever it's called: Never used it, never needed it, deactivated it. Nobody wants a Microsoft software tax on applications. Sell Office on Steam, make sure it runs on Linux, too

- Edge, Internet Explorer: You are not a browser company, but more importantly: Nobody wants you to be. How many more decades do you need to understand that it's not a good thing to do what nobody wants you to do?

- Anything Xbox: Steam works better, Uplay and Origin are ok, nobody wants yours!

- Stop this editions crap: S, Home, Professional, Ultimate, Enterprise, Server, Client... Just create a single server edition, eliminate all that license checking stuff, because it breaks things

- Sell the OS at a reasonable price per user, independently of computers: Don't penalize people who run several, perhaps even a dozen, different physical/virtual computers or just OS images that get moved/swapped between PCs. The ease with which a single SSD can be booted on a handful of systems is one of the major advantages Windows 10 currently has, even over any Linux, and something I have come to enjoy (with VLK enterprise editions). Look at Android (any number of devices) or Steam (no concurrent use) for how not to penalize buying more hardware, when only one machine is ever in use at any given time.

- stop trying to play catchup with Apple: Why would anyone want to sink that low?

- stop collecting user data

- stop sending collected data to Microsoft servers

- stop Cortana and this Microsoft-specific, OS-embedded AI stuff: Create usable AI API frameworks which allow users to choose Cortana, Alexa, Siri or whatnot if they want, but don't try to make it the new MediaPlayer, InternetExplorer etc.: You're evidently too small a company to do that properly

There are also things to add:

- support running Android applications, including Play Store, seamlessly

- support running Linux applications, including native Linux kernel API docker containers, seamlessly

- native Linux file system support

I got really big machines with dozens of cores, hundreds of gigabytes of RAM, Atoms and many things in between: Every month I am banging my head on the table when I see how slowly patches get installed, while nothing, absolutely nothing is going on on these machines: One core is burning hot, no network or storage I/O of any kind, just some code ruminating on: "To copy, or not to copy this file, that is the question..." Pitiful!

Unforgivable sins:

- Knowing "better": At one point in time, Microsoft decided that users who click "shutdown" on their computers would rather 'hibernate' their systems, even if that is a different button on my Classic Shell (without which Windows 10 would be unusable). So when I then take that SSD and start it on a different computer, it loses all the data and changes in the hibernate file, because the new computer has different hardware and cannot just blindly resume a suspended image. I knew that this would be the case, which is why I hit "shutdown". But Microsoft knew better, and after a couple of swaps forth and back I finally figured out I had to hit a greyed-out option somewhere deep in the energy settings...

That's how engineers just following managers' orders get shot on their way home

- Forced Windows upgrade etc.

Generally:

- Don't go for world domination, try being better than the competition for a change, that might just be enough to ensure a leading position

- Concentrate on slimming down

- If you really think the world needs a new file system, make sure it also works with Linux

- work with open standards e.g. Vulkan instead of DX12. If Vulkan is worse, make it better

'Prodigy' chip moonshot gets hand from Arm CPU guru Prof Steve Furber

abufrejoval

I was thinking MIPS and Mill/Belt architecture

My first reaction to moving smarts from hardware to software is "MIPS learned the hard way".

And when it comes to a promising new architecture fulfilling the aforementioned goals, the last rather exciting thing I saw was the Mill Computing belt architecture (https://millcomputing.com) fabulously expounded by Ivan Godard in sessions easy to find on YouTube.

Voyager 1 fires thrusters last used in 1980 – and they worked!

abufrejoval

Re: Code does not deteriorate

even bit rot requires a radiation source to provide the energy and there is only one, very nearby. So since bit rot would be self-inflicted I assume they designed the nuclear battery and the storage to stay out of each other's hair.

abufrejoval

Even wear and tear...

...requires something to play with and there isn't

anything

or

any

one

out

.

.

.

there
