* Posts by _andrew

118 publicly visible posts • joined 7 Dec 2019


Guide for the perplexed – Google is no longer the best search engine

_andrew

Suppose that it depends on which "us" you're talking about

For those of us there at the time, no forcing was required to switch from AltaVista to Google. Towards the end it seemed as though AltaVista would give you a list of 10000 answers (URLs) in no particular order. Yes, the right one was probably in there somewhere, but you'd be better off trawling through Yahoo lists or asking someone than trying each of them in turn. Google had ranking that "worked". That was an edge that has since been both gamed (SEO) and internally corrupted.

Apple macOS 15 Sequoia is officially UNIX. If anyone cares...

_andrew

Re: But Toronto sucks

I had the impression that QNX was originally written in Canada. Now there's a properly advanced operating system! Unix-feel, but properly microkernel and also real-time, with network-abstracted device drivers, so you could mount any peripheral from any other QNX system on the network and use it as though it was local.

AMD aims latest processors at AI whether you need it or not

_andrew

Re: Flavor of the week

More likely, it will be like cars: sooner than you would like it will be impossible to buy one with a manual transmission and without a permanent internet connection in support of various "smart" features, but really there for the surveillance and post-sales monetization. Just like TVs.

Switching customers from Linux to BSD because boring is good

_andrew

Re: FreeBSD predates Mac OS X

And FreeBSD itself has a history that traces back through PatchKit on top of 386BSD (Jolitz/Dr Dobbs) and thence to the Net/2 tapes and 4.3RENO repo-edited/patched with 4.4 after the AT&T case.

To be fair, 386BSD was '92, which still doesn't predate the NeXT demos, but you could have read about the latter on Usenet from a SunOS system based on 4.2 or an Ultrix box running 4.3 and felt very much the same, and I'm pretty sure that I did.

_andrew

Re: I love FreeBSD for its reliability

I've been rebuilding from source for longer than I remember: can probably count the number of cold-reinstalls since the PatchKit on the fingers of one hand.

I manage a portmaster run (to upgrade installed ports) and a buildworld/buildkernel every week and it usually only takes a couple of hours. Sometimes a bit longer if some system or database has aged out and the upgrade doesn't go smoothly, but that's rare. My "file server" is an 8-core Zen1 system at the moment though: dimensioned according to the compile times, rather than how much CPU is needed for samba, apache and mysql or whatever. I wouldn't be able to do this on a Raspberry Pi, I think.

I do like the sense of security that comes from knowing that I have the source code to everything that is running on the system, and that I can work at fixing anything that breaks. That's the key beauty of FreeBSD, I think: there's nothing mysterious or magical. Does what it's told.

Google's Rust belts bugs out of Android, helps kill off unsafe code substantially

_andrew

Re: 70% the norm

Percent of "vulnerabilities" caused by memory safety bugs.

Quoting the cited blog article: "The percent of vulnerabilities caused by memory safety issues continues to correlate closely with the development language that’s used for new code. Memory safety issues, which accounted for 76% of Android vulnerabilities in 2019, and are currently 24% in 2024, well below the 70% industry norm, and continuing to drop."

The 70% figure has been published many times, by many software security types. It isn't hard to find.

This one (cited by the predecessor article to the one cited in this article) is an example, and it cites a dozen other reports: https://alexgaynor.net/2020/may/27/science-on-memory-unsafety-and-security/

GNU screen 5 proves it's still got game even after 37 years

_andrew

Re: There are options to Terminal

Multiple windows over one SSH connection: I never understood why this was useful, as there isn't a limit to the number of concurrent ssh sessions that you can have. I don't muck about with learning the escape sequences to do multiple panes on terminal interfaces: just open another terminal window and another ssh session.

_andrew

Alacritty is pretty nice

https://github.com/alacritty/alacritty/blob/master/README.md

I don't know whether "fastest" is true or even important. I like it because it does all of the right things and knows about modern "all-of-the-colours" escape sequences that lots of the cool kids seem to be using in their editor colour profiles.

Also a vote for mosh. Mosh is like an ssh upgrade mixed with screen. It uses ssh to establish a connection and then switches to its own protocol over UDP, which is why it doesn't care if the connection goes away and comes back later. It doesn't do tiling like screen, but it does do the server-side render+diff-update thing, so if you go away and come back you see the latest state of the remote process, which did not get paused on disconnect. This also means that you don't get scroll-back. Swings and roundabouts. Also: the predictive echo seems to work very nicely and does a good job of hiding line latency.

What is this computing industry anyway? The dawning era of 32-bit micros

_andrew

Re: ARMed and Ubiquitous

Nice as the StrongARM was, for its time, I don't think that it ever had a floating-point unit. Even the integer multiplier only handled 12 bits at a time. Which is all fine for what Arms were used for at the time: hand-held 2D GUI devices. MIPS, PowerPC, and SPARC (and Alpha) all had good floating point, and "PC"-class systems, even in laptop form factor, needed that for spreadsheets and 3D graphics and (eventually) JavaScript.

Version 256 of systemd boasts '42% less Unix philosophy'

_andrew

That running root process that run0 talks to?

It's running somewhere other than $CWD and without this shell's stdout.

I don't know why more isn't made of this. Running things "as root" is very rarely the end of the story. Usually I want to run something as root (or some other user, usually "www") in a particular place, and then do something with the output, via pipe or file. This doesn't sound as though either of those things will be possible or easy. Having all of the admin stuff actually run by a single, central process sounds a lot like DOS.

IMO doas is almost as broken. I tried it, but went back to sudo when I discovered that it does some weirdness to force output to the controlling terminal, breaking piping and output redirection.

Presumably systems that run systemd and therefore run0 will still be able to install the sudo package and proceed as normal. Having fork inherit the parent process' environment is the Unix philosophy in question.
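
For the record, the pattern I mean is nothing exotic; here's a rough Rust sketch of what sudo-style delegation gives you for free (the "www" user and the command are just examples):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // The child inherits this process's working directory, and output() wires
    // up a pipe so the privileged command's stdout comes back to be reused.
    let out = Command::new("sudo")
        .args(["-u", "www", "ls", "-l"])
        .output()?;
    println!("{}", String::from_utf8_lossy(&out.stdout));
    Ok(())
}
```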

The chip that changed my world – and yours

_andrew

That's not true at all. The Z8000 was completely different to the Z80: fully 16-bit, with instructions and addressing modes to match, and it was paired with a memory management unit that did protected segmented memory. The first Unix system that I touched was some sort of Z800[01] box running Xenix, I think. The problem, as I've since read, was that they used a hand-designed (and hand-laid-out) network of discrete logic for instruction decoding, instead of a more structured microcode ROM, and they never got all of the bugs out of it. I'm not sure how that manifested (probably instructions that ought to work but didn't?), but it probably contributed to the demise. The 286 did its segmentation and memory management on the one chip (only the FPU was an add-on) and, what's even better, you could bypass it if you wanted to run an unprotected operating system like MSDOS.

DEC's LSI-11 chip(set) was contemporary with the Z8001, and it didn't get very far in PC-land either, although it wound up in quite a few industrial control systems.

Forget the AI doom and hype, let's make computers useful

_andrew

Re: Statistical models

Perhaps I was a little abrupt: I didn't mean the comparison as a put-down. Most computer systems "barely work", largely because they're all design-driven and have had precious little exposure to the real world and its teeming data. A bit of stamp collecting is no bad thing.

I'm going to reserve judgement for a while on how much science is involved in the design of the DNNs. My experience, and what I get from reading the papers, is very much of the flavour of "I tried this tweak to last month's best design and it got better results (on the usual published test case)". Mostly the dimensions and "hyperparameters" are dictated by the size of the hardware that can be afforded, rather than any particular insight into the information density or structure of the problem at hand. Meta's latest model was still "learning" (loss function decreasing) at the point where they said "ship it", mostly because they'd run out of internet to train it on. The last time a human had read everything that had ever been written (it's said) was in the 14th century. There's still a lot to actually be learned about the process of learning, IMO.

_andrew

Re: Statistical models

To my mind the "Deep Learning" school of AI is the revenge of the "stamp collecting" side of science, where all previous approaches to computation had been of the physics (mathematics) side. The popular press likes to call them "algorithms", but really they are anti-algorithms. They operate on the basis of what is (or was), rather than what a designer of code supposes underlies all, axiomatically. They solve problems without requiring the problem to be understood. They optimize the scientist out of the solution.

Google all at sea over rising tide of robo-spam

_andrew

Kagi for search and RSS for news - works for me

This is basically following a couple of the suggestions mentioned as "no hopers", but where's the fail? Work from a large enough set of self-curated feeds and anything that's genuinely interesting will be mentioned by one of them and interesting new out-links can be added to the feed. No need to keep the bubble static.

Kagi's no-ads search results and down-rating of spammy sites seem to be working pretty well too. Well, it's working for me. YMMV.

Malicious SSH backdoor sneaks into xz, Linux world's data compression library

_andrew

Re: GitHub CoPoilot…

A valid point in general (probably why studies show so many security vulnerabilities in chat-generated code), but in this case there's an extra wrinkle: the exploit was _not_ on github. It was only inserted into the "upstream tarball" that the packagers depended on, rather than cloning and building from source themselves.

_andrew

Re: Haters Should Be In The Headline, Not systemd

Not exiting with an error code (and error log) used to be the perfectly acceptable record of success. It's what happens on other (non-systemd) systems. Why would you want a daemon that started, failed to initialize as instructed in its config file (or otherwise) and what, just hung around? Exit on failure (with a documented error code) is a fine protocol.

_andrew

Re: Systemd should be in the headline, not `xz` or `liblzma`.

There has always been a perfectly serviceable mechanism for services (daemons) to notify the system of a failure to start properly: exit codes. That's literally what they're for. You can also throw in some logging to syslog on the way out, if you want to get fancy.
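
Something like this is all it takes; a minimal Rust sketch, with a made-up daemon name, config path and sysexits-style code:

```rust
use std::process::exit;

fn main() {
    let config = "/usr/local/etc/mydaemon.conf"; // hypothetical config file

    match std::fs::read_to_string(config) {
        Ok(_conf) => {
            // parse the config, daemonise, start serving...
        }
        Err(e) => {
            // Say why on the way out, then exit non-zero so the rc script or
            // init system records the failure.
            eprintln!("mydaemon: cannot read {config}: {e}");
            exit(78); // EX_CONFIG from sysexits(3)
        }
    }
}
```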

UXL Foundation readying alternative to Nvidia's CUDA for this year

_andrew

Re: Oh, No, Not Again

They're in there, apparently. Layers upon layers.

Google's AI-powered search results are loaded with spammy, scammy garbage

_andrew

Re: Internet search is broken and has been for a long time

I've been using Kagi for a while. Seems pretty great to me so far. They added Stephen Wolfram to the board and Wolfram Alpha to their search data (and use that for the Google-style quick-facts column), which I think is a good move. Alpha is a huge, mostly curated "real data" database and logic-engine-based derivation system that has been available by subscription for years, and is now also rolled into Kagi. Yes, there are a few "AI" features now too: page summaries and so on, but they're user-driven. Write the search terms as a question (with a question mark) and you'll get an AI answer. Otherwise not. The big feature of course is the complete absence of advertising in results and the down-rating of SEO-infested pages.

Beijing issues list of approved CPUs – with no Intel or AMD

_andrew

Those Chinese Linux distributions are still Linux, right?

I suspect the description "Only Chinese code is present" is somewhat hyperbolic. There's about 28 million lines just in the kernel, much written by people from other countries.

Starting over: Rebooting the OS stack for fun and profit

_andrew

I like the 90s-vintage QNX demo disk: a single 1.44M floppy that booted into a full multi-tasking Unix clone with (very nice) GUI. Didn't have many spurious device drivers on there, but it was a demo, after all.

Speaking of lots of RAM: the RAM in my current desktop (64G) is larger than any hard disk that you could buy in the 80s. The RAM (1 to 4M) in the diskless Unix workstations that I used in the 80s-90s is smaller than the cache on just about any processor that you can buy today.

So you very likely could run a system entirely out of Optane or similar, and rely on cache locality to cut down on your wear rates. I think that you'd want some DRAM anyway, though: there are things like the GPU frame buffer, network and disk buffers and so on, that are caches anyway and have no need to persist across reboots.

As has been mentioned before: Android (and much modern software) manages the appearance of persistent storage quite adequately, and it does it through application-level toolkits and patterns that encourage deliberate persisting of "user state" to various database mechanisms, because the application execution model does not guarantee that anything will be preserved between one user interaction and the next. It isn't an execute-main()-until-exit model, with all of the attendant persistence of state.

_andrew

Re: Y2K times a million

Indeed. We are ever so close now to the internet being our storage architecture, with our data locked up not in "apps" as the mobile devices would have it, but in "apps" that are services running on some cloud somewhere. Liam can reinvent the low-level pieces in Smalltalk or whatever he likes, but as soon as he builds a web browser on top, he can immediately do everything that a Chromebook can, or indeed most of what anyone with an enterprise IT department and its smorgasbord of single-sign-on web apps can, and no-one would notice.

_andrew

Re: Chickens and swans: they are harder to tell apart than one might think

Surely even your command-line has the recent history of what you were doing, and might even start in the same directory that you were previously in. Command-line users aren't savages either.

_andrew

Re: Other smalltalks/lisps

On the other smalltalks/lisps, one might want to consider Racket (racket-lang.org), a Scheme. It used to be built on C underpinnings, but it has recently been restumped on top of a native Scheme compiler/jitter/interpreter, so it's Scheme all the way down now. It also comes with an object system and its own IDE/GUI, and can do neat things like use graphics inside the REPL. It also has a neat mechanism to re-skin the surface syntax in a variety of ways, which is how it supports both backwards compatibility with older Schemes and other languages like Datalog. There's an Algol, apparently.

Drowning in code: The ever-growing problem of ever-growing codebases

_andrew

Re: A Few Ironies

Memory size was probably a bigger barrier than complexity. After all, the LaserWriter and PostScript were more-or-less contemporaneous with the early Macintosh, and they both just had 68000 processors in them. Once you've rendered your PostScript fonts to a bitmap cache, it's all just blitting, just like your single-size bitmap fonts. (Complexity creeps in if you want sub-pixel alignment and anti-aliasing, but neither of those was a thing in those early 1-bit pixmap graphics systems.)

Sun eventually had Display PostScript, but you're right that that came quite a bit later, and used quite a bit more in the way of resources, than the likes of Oberon. PostScript itself is not so big or terrible though.

_andrew

Re: m4 macro processor

Have to call "nonsense" on that one. A language without a macro level is a language fixed in time that can't learn new idioms or constructions. It's one of the things that the Lisp family totally got right: a macro mechanism that exists as a means to extend both the language syntax and the compiler. Rust has one too. I'd be surprised if any more languages were developed that did not have a way to write code that operated "at compile time".

_andrew

If your zoom/teams/whatever client is doing background replacement or blurring, then it isn't the web-cam doing the video compression (well, it might be, but then it has to be decoded into frames, processed, and then re-encoded to be sent). What's worse, most of the video systems are now using "deep learning" video filters to do the edge detection that determines where the subject and background meet, and that's probably leaning on some of that "AI" hardware that you mentioned, as will be the (spectacularly effective, these days) echo and background-noise suppression on the audio signals (which have their own compressions and decompressions going on too, of course).

I just had to retire the older of my 5K iMac systems, despite the screen still being lovely and the machine being perfectly capable for most office applications, because it absolutely could not keep up with the grinding required to do Zoom calls.

It might not have made it into the likes of the geekbench-style benchmarking suites, but time-in-video-conference is a key measure in the battery-consumption reports for modern laptop systems.

_andrew
FAIL

Re: Thank you Liam

The thing about libraries, in the original construction, was that you only had to link in the 50% (or whatever) that you actually used: the rest stayed on the shelf, not in your program. The ability to do that went away with the introduction of shared libraries.

Shared libraries were originally considered to be a space-saving mechanism, so that all of the programs that used, say, the standard C library did not have to carry a copy of it. Even more so when you get to GUI libraries, it was thought. But it was always a trade-off, because as you mentioned, every application only used its favourite corner of the library anyway: you needed to be running quite a few applications to make up the difference.

But now it's even worse, because lots of applications can't tolerate having their (shared) dependencies upgraded, and so systems have developed baroque mechanisms like environments (in Python land) and containers and VMs so that they can ship with their own copies of the old and busted libraries that they've been tested against. So now every application (or many of them) hangs on to a whole private set of shared libraries, shared with no other applications, rather than just statically linking to the pieces that they actually wanted. The software industry is so useless at looking at the big picture...

The successor to Research Unix was Plan 9 from Bell Labs

_andrew

Re: ...files...

Bit of a shame that IP and its numbered connections fit so badly into that scheme. The way Bell Labs networking was going, with abstractions over multiplexing of communication connections, seems much more scalable and "container-able", but I don't think that there's any way to get there from here now.

War of the workstations: How the lowest bidders shaped today's tech landscape

_andrew

Re: Correctness and Simplicity

Once upon a time "correct" used to mean "can't possibly fail". Ship in ROM. In the earlier days of auto-updateing apps and web apps it seemed to mean "worked once in the lab and the executive demo was OKed". Ship on cron-job. In these days of nation-state hackers and crypto-jacking I get the sense that it's swinging back slightly towards the former sense. Think of the hacking/spying industry as a free universal fuzz-testing service?

The US and EU have just almost outlawed new C and C++ code. That day is still a little way off, I think, but perhaps the Lisps and Smalltalks of the world will get more of a look-in?

_andrew

Re: Disagree on a few points

This image of the "microprocessor" model of processing, with a single shared bus and interrupt handlers that pushed the data around hasn't really been true for a very long time. Modern "micro" systems use all of the low-latency batch-processing IO-offload tricks in the book. Everything happens through DMA based on intelligent processors in the peripheral systems that execute chains of IO requests that they DMA straight out of RAM. Interrupts are fully vectored and only used once the entire buffer of IO requests are empty (if then). The bottom levels of the OSes are all asynchronous batch systems...

And yes, they still spend much of their time sitting around, waiting for the user to push a button, but when they're playing a video game they're doing a lot more work than any mainframe or supercomputer from even twenty years ago was capable of.

We've just found a lot more "work" for them all to do (for small values of "work").

_andrew

A lot of design points were being explored at that time

The transition wasn't just Mainframe->mini->workstation->PC. As the article mentioned, Ethernet came out at this time (and token ring, over there in IBM land) and that led to all sorts of interesting experiments in sharing. Big (Unix or VMS) servers networked to diskless workstations. Multiple servers clustered together, sharing disks at the physical level. Plan 9 network servers, blit terminals. X Windows with dedicated X terminals accessing logins on multiple servers with unified user storage on NFS servers (my personal favourite at the time). A lot of these only "worked" as the result of complexity inflections: the serious graphics folk insist that X11 doesn't work once you move to 3D rendering and compositing, hence the (slow) move back to integrated models (Wayland, Quartz).

For all that they had networking, those ancient Lisp and Smalltalk workstations had limitations: they were single-processor, single-threaded systems without the sort of memory management or user/system protections that we're used to today. Despite being programmed in Pascal, the early Mac System had a slightly similar run-time structure: data and graphics were shared between all levels, and any bug would crash the whole system, or any long disk or network activity would (or at least could) hang everything until it completed. We forget how common system crashes were back then. (Although both Smalltalk and Lisp were much less able to die from an array-bounds error than later C-based systems, and the single-threaded software stack did avoid all of the shared-memory failure modes that came later.)

Crashing the system didn't really wreck much because there was only really one thing going on at a time. At least the compiled-Pascal approach of MacOS solved the 3rd-party application development and distribution problem in a way that the Lisp and Smalltalk machines couldn't. The whole system was one VM image there, and so the only way to switch to a program developed by someone else was essentially to re-boot into that other VM.

Portable Large Language Models – not the iPhone 15 – are the future of the smartphone

_andrew

The flaw in the targeted advertising theory

I've always been liberal with Google and tracking cookies for that exact reason: the argument that they would show me useful stuff is superficially compelling. Doesn't work: all of the ads are still worse than useless. It turns out that all of the stuff that I'm interested in, I know where to find already. Ergo, no-one needs to pay to put it in front of me, so they don't. The only people who will pay to put things in front of me "by force", are those flogging things (or ideas) that I'm not interested in, and don't want. So IMO the answer isn't ever-better targeted advertising, it's paying for services. If the Reg had a reasonable "no ads" subscription option, I'd pay it.

_andrew

Re: Sure, it's possible, but why would you want it?

Orac was more omniscience in a box than just AI, and even he/it convinced Blake and the crew to do things occasionally just because it wanted to see what would happen. But Orac also had infallible accuracy on his side: LLMs are just stochastic echoes in idea space: when they come up with the correct answer it's pure chance. That makes them much closer to useless, in my book.

Never did see Star Cops, I'm afraid. Auntie must not have picked up the rights, here in the antipodes.

_andrew

Sure, it's possible, but why would you want it?

Just about every single "AI in the machine" story ever written has been a cautionary tale. From the Cirrius Cybernetics Corporation onwards. No one (besides, perhaps, Ian Banks, perhaps Asimov) has posited an AI sidekick actually doing something useful and productive, let alone not turning on the user. I have no use-case for such a thing. What would you do with it?

FreeBSD can now boot in 25 milliseconds

_andrew

Re: VM vs Process

Many (many) years ago I made good use of Wine to run a Windows ('95-vintage, I think) command-line assembler for a Motorola DSP. Worked really well, surprisingly, and allowed me to use "real" make and work in a bash shell and other development niceties, without having to maintain and feed an actual Windows box.

AVX10: The benefits of AVX-512 without all the baggage

_andrew

Re: flags

One of the other new instruction set changes is an "architecture version" capability tasting mechanism, which certainly seems like a step in the direction of making things more uniform and easier to target.

Or you could look at it as yet another thing that needs to be tested for, if you want to address older processors that don't have it.

_andrew

My reading of the released docs suggests that they _still_ haven't addressed the compatibility problem, at least not in the way that Arm SVE and RISC-V V do. Yes, AVX10 has versions that work with 256-bit and with 512-bit vectors, and code written for the 256-bit version will work on 512-bit hardware, but only 256 bits at a time. And if you're silly enough to write code that uses the 512-bit registers, it will only work on the (high-end) processors that have them. So you're back at processor capability tasting and multi-versioning, as sketched below. (And of course there are still millions of older systems that only have SSE and AVX{,2}.)
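
The capability-tasting dance, roughly, in Rust; a hedged sketch that only uses AVX2 (the same pattern just multiplies once you add 512-bit or AVX10 variants):

```rust
fn sum(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        // Runtime check: only take the wide-vector path if the CPU reports it.
        if is_x86_feature_detected!("avx2") {
            return unsafe { sum_avx2(xs) };
        }
    }
    sum_scalar(xs) // everything else gets the plain version
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    // Compiled with AVX2 enabled, so the loop may be auto-vectorised 256 bits
    // at a time. Only safe to call after the runtime check above.
    xs.iter().sum()
}

fn sum_scalar(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

fn main() {
    let v: Vec<f32> = (0..1024).map(|i| i as f32).collect();
    println!("{}", sum(&v));
}
```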

Meanwhile Arm and Apple are keeping up throughput-wise by cranking up the number of NEON execution units and scheduling multiple SIMD instructions every cycle...

Soft-reboot in systemd 254 sounds a lot like Windows' Fast Startup

_andrew

Re: faster

The latest FreeBSD quarterly report mentioned that the boot-speedup work has got a (presumably VM) kernel boot down to 12ms. Since booting real hardware is something that happens as close to "never" as kernel upgrades allow, the VM use-case appears to be the only instance where boot speed is of any concern at all. Or perhaps embedded systems: it's useful if they just "turn on", but they're usually sufficiently simpler that they can. The need for VMs to boot quickly is because they have become the new "program", as the process model has failed to keep up with the complexity of software distribution and the reality of anti-dependencies. I blame shared libraries and perhaps the IP model of network communication.

_andrew

Since Gnome comes from the house-of-systemd, there have long been GUI applications that consider that not working on anything other than Linux is not their concern. It's one of the reasons why there are now BSD-flavoured/specific desktop systems (although I'm sure that a lot of the older ones still work fine too). The Linuxization of the desktop GUI and (especially) graphics card drivers was what motivated me to switch my desktop activity from FreeBSD to macOS years ago. FreeBSD still runs my servers. I have a suspicion that the situation is improving though, and I have been tempted to try putting a FreeBSD desktop together again. We'll see.

It's 2023 and memory overwrite bugs are not just a thing, they're still number one

_andrew

Re: Aren't there other options?

Sure: Ada, Pascal, Modula-[23], Go, Swift, Java, C#, JavaScript, Python, all of the Lisps, all of the MLs: the list is long. Really, memory unsafety is a pretty unusual property for a 3rd-gen programming language. It's just that a couple of examples have historically been really, really popular (C, C++).

_andrew

Re: Memory Errors: 70% of CVE

And yet log4j/log4shell was written in a type-safe, memory-safe language, involved no memory errors of the sort described, and is still not "solved" or fully patched: it's everywhere. Because someone implemented an algorithm with more flexibility than it needed. The code was/is performing as designed; it was just unsafe as designed...

As memory-safe languages (and memory-safe subsets of C++) are increasingly used, especially for network-facing code, that 70% number will inevitably go down (IMO). Writing the wrong algorithm, and accidentally violating security, or providing unexpected behaviour is going to be with us forever.

_andrew

Re: RE: Cough, cough, Rust

I think that tide is finally turning. Not necessarily towards Rust: there are arguably simpler languages (the garbage-collected sort) that lots of applications are now being built in.

However it is well to remember that the CVEs mentioned in the last few paragraphs, authentication errors and the like, can be written in any language. No amount of type-checking or fencing will protect against implementing the wrong algorithm. Things like that are also often difficult to test: tests are good for verifying that the functionality you intended is there and behaves as expected. It is much harder to check the negative space: that there isn't some other, unintended functionality, like the ability to log in with an empty password...
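
To be concrete about the negative space: the test you want is the one that asserts the behaviour isn't there. A Rust sketch, with a purely hypothetical `login`:

```rust
// Hypothetical authentication stub, just to hang the test off.
fn login(user: &str, password: &str) -> bool {
    !user.is_empty() && !password.is_empty() && check_credentials(user, password)
}

fn check_credentials(_user: &str, _password: &str) -> bool {
    false // placeholder for a real credential-store lookup
}

#[cfg(test)]
mod tests {
    use super::*;

    // The "negative space" test: logging in with an empty password must fail,
    // whatever else the code happens to do.
    #[test]
    fn empty_password_is_rejected() {
        assert!(!login("admin", ""));
    }
}
```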

Bosses face losing 'key' workers after forcing a return to office

_andrew

Re: Its all a matter of what you call "work"

Not just that, but you no longer need to "book a meeting room" (unless being joined by people in the office). When most meetings involve people from other countries and timezones anyway, it's usually _more_ convenient to participate in meetings from home.

Rocky Linux claims to have found 'path forward' from CentOS source purge

_andrew

Re: "Certified"

You're welcome! I've been a happy BSD user since about '86 and never felt the need to change. Obviously some Linux is unavoidable, but mostly just observed for interest.

_andrew

"Certified"

I've watched and used Linux since it was released, but never Red Hat. (Mostly used the BSDs, which seem to be more of a literary tradition than the Linux tied-to-executable form.) I find it hard to understand who is inconvenienced by these moves. The closest I've read is people who are clients of (expensive) proprietary software that is only "certified" to work on specific builds of Red Hat. And all you get for that is a specific set of versions and build configurations (bugs, if you like) for software that is readily available elsewhere.

Seems to me that the complaint must be with these proprietary software providers, for the lack of faith in their product, or lack of testing against other distributions. If the software is so expensive and so singular, I expect that most of the clients do indeed spring for matching Red Hat licenses. So again, I'm missing who's actually inconvenienced here.

Why you might want an email client in the era of webmail

_andrew

Re: About that local-storage advantage...

Exactly: IMAP enables the multi-device use-case, but in many people's mind it means "mail is on the server". That is not a problem when the server is your own. (You can also make it part of your backup schedule, then). Really not an easy problem to solve, I think, because providing a vendor-agnostic personal IMAP storage application is not really in the interests of any of the current players, who get significant lock-in stickiness by having you use theirs.

_andrew

About that local-storage advantage...

One of the arguments against webmail presented here is that all of your mail stays on the server, which has various risks associated with it. That's true, but it is not unique to webmail clients: the k9mail/Thunderbird-for-Android client does that too. It's also possible to configure several other local clients to do the same thing. Is this bad? I find that there are good reasons not to keep local copies. In the case of k9 (which I use), it makes it lighter and faster: my "home" folder wouldn't fit on my phone anyway. I get around this conflict by running my own mail server, using fetchmail to sweep messages from my ISP's inbox. That's a lot more effort than most people would be prepared to go to though. I don't know of any "nicely packaged" way to achieve this setup, where I control and possess my mail archive while also allowing access from many lightweight client systems.

I'm curious about the hate for IMAP that always comes up in these discussions. What bad experiences can have led to that? In my experience it simply works, really well, and is to my mind the obviously correct answer. (Well, if you were doing it again, perhaps some sort of REST-style thing would be set up instead, but it comes from an earlier era and still gets the job done.)

Intel says AI is overwhelming CPUs, GPUs, even clouds – so all Meteor Lakes get a VPU

_andrew

Re: Chicken or Egg?

Narrow use-case: IMO yes. And if the photo and its caption is to be believed, they're supporting this supposed use case with the largest of the three dies on the carrier. I think that there's a chunk of the story missing.

Version 100 of the MIT Lisp Machine software recovered

_andrew

Re: Off Topic? Well....Not Quite!!

Intel was definitely winning on process technology, but that's not the whole story.

Perhaps by good design or just good luck, the x86 architecture did not have much in the way of memory-indirect addressing modes. The 68k, like the VAX, PDP-11 and 16032, did have memory-indirect addressing modes (and lots of them: they all tended to have little mini-instruction sets built into the argument-description part of the main instructions, to allow all arguments and results to be in registers, or indirect, or indexed, or some combination). This seems like a nice idea from a high-level-language point of view. After all, you can have pointer variables (in memory), and data structures with pointers in them (CONS, in the Lisp context, for example). The trouble is that this indirection puts a main-memory access into the execution path of most instructions. That wasn't such a problem in the early days, when there was no cache and memory wasn't much slower than the processor's instruction cycle. But that changed: memory got progressively slower compared to CPU cycles, CPUs developed pipelining, and that pretty much put the kibosh on indirect addressing modes and the processor architectures that supported them. RISC designs were the extreme reaction to this trend, but there was sufficient separation between argument loading or storing and operations in the x86 instruction set to allow x86 to follow RISC down the pipelining route.

You can still buy 68000 derivatives, by the way. Motorola semi became Freescale became NXP, and they still sell cores called "ColdFire", which are mostly 68k, but with a lot of the extreme indirection that came in with the 68020 to 68040 streamlined off. They're essentially microcontrollers now though.
