* Posts by _andrew

104 publicly visible posts • joined 7 Dec 2019


Malicious SSH backdoor sneaks into xz, Linux world's data compression library

_andrew

Re: GitHub Copilot…

A valid point in general (probably why studies show so many security vulnerabilities in chat-generated code), but in this case there's an extra wrinkle: the exploit was _not_ on github. It was only inserted into the "upstream tarball" that the packagers depended on, rather than cloning and building from source themselves.

_andrew

Re: Haters Should Be In The Headline, Not systemd

Not exiting with an error code (and an error log) used to be a perfectly acceptable record of success. It's what happens on other (non-systemd) systems. Why would you want a daemon that started, failed to initialize as instructed in its config file (or otherwise), and then just hung around? Exit on failure (with a documented error code) is a fine protocol.

_andrew

Re: Systemd should be in the headline, not `xz` or `liblzma`.

There has always been a perfectly serviceable mechanism for services (daemons) to notify the system of a failure to start properly: exit codes. That's literally what they're for. You can also throw in some logging to syslog on the way out, if you want to get fancy.
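
For what it's worth, a minimal C sketch of that protocol (the daemon name, config path and exit-code value are invented for illustration; a real daemon documents its codes in its man page):

    #include <stdio.h>
    #include <stdlib.h>
    #include <syslog.h>

    /* Hypothetical documented exit code. */
    #define EXIT_BAD_CONFIG 2

    int main(void)
    {
        openlog("exampled", LOG_PID, LOG_DAEMON);

        /* Opening the config file stands in for whatever initialisation
           the daemon does; on failure, say why and get out of the way. */
        FILE *cfg = fopen("/etc/exampled.conf", "r");
        if (cfg == NULL) {
            syslog(LOG_ERR, "cannot open /etc/exampled.conf, exiting");
            closelog();
            return EXIT_BAD_CONFIG;  /* init, rc or systemd can all see this */
        }

        /* ... parse config, open sockets, daemonise, serve ... */

        fclose(cfg);
        closelog();
        return EXIT_SUCCESS;
    }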

UXL Foundation readying alternative to Nvidia's CUDA for this year

_andrew

Re: Oh, No, Not Again

They're in there, apparently. Layers upon layers.

Google's AI-powered search results are loaded with spammy, scammy garbage

_andrew

Re: Internet search is broken and has been for a long time

I've been using Kagi for a while. Seems pretty great to me so far. They added Stephen Wolfram to the board and Wolfram Alpha to their search data (and use that for a Google-style quick-facts column), which I think is a good move. Alpha is a huge, mostly curated "real data" database and logic-engine-based derivation system that has been available by subscription for years, and is now also rolled into Kagi. Yes, there are a few "AI" features now too, page summaries and so on, but they're user-driven: write the search terms as a question (with a question mark) and you'll get an AI answer; otherwise not. The big feature, of course, is the complete absence of advertising in results and the down-rating of SEO-infested pages.

Beijing issues list of approved CPUs – with no Intel or AMD

_andrew

Those Chinese Linux distributions are still Linux, right?

I suspect the description "Only Chinese code is present" is somewhat hyperbolic. There are about 28 million lines in the kernel alone, much of it written by people from other countries.

Starting over: Rebooting the OS stack for fun and profit

_andrew

I like the 90s-vintage QNX demo disk: a single 1.44M floppy that booted into a full multi-tasking Unix clone with a (very nice) GUI. Didn't have many spurious device drivers on there, but it was a demo, after all.

Speaking of lots of RAM: the RAM in my current desktop (64G) is larger than any hard disk that you could buy in the 80s. The RAM (1 to 4M) in the diskless Unix workstations that I used in the 80s-90s is smaller than the cache on just about any processor that you can buy today.

So you very likely could run a system entirely out of Optane or similar, and rely on cache locality to cut down on your wear rates. I think you'd still want some DRAM, though: things like the GPU frame buffer and the network and disk buffers are really just caches, and have no need to persist across reboots.

As has been mentioned before: Android (and much modern software) manages the appearance of persistent storage quite adequately, and it does it through application-level toolkits and patterns that encourage deliberate persisting of "user state" to various database mechanisms, because the application execution model does not guarantee that anything will be preserved between one user interaction and the next. It isn't an execute-main()-until-exit model, with all of the attendant persistence of state.

_andrew

Re: Y2K times a million

Indeed. We are ever so close now to the internet being our storage architecture, with our data locked up not in "apps" as the mobile devices would have it, but in "apps" that are services running on some cloud somewhere. Liam can reinvent the low-level pieces in Smalltalk or whatever he likes, but as soon as he builds a web browser on top, he can immediately do everything that a Chromebook can, or indeed most of what anyone whose enterprise IT department provides a smorgasbord of single-sign-on web apps does, and no-one would notice.

_andrew

Re: Chickens and swans: they are harder to tell apart than one might think

Surely even your command-line has the recent history of what you were doing, and might even start in the same directory that you were previously in. Command-line users aren't savages either.

_andrew

Re: Other smalltalks/lisps

On the other smalltalks/lisps, one might want to consider Racket (racket-lang.org), a Scheme. It used to be built on C underpinnings, but it has recently been restumped on top of a native Scheme compiler/jitter/interpreter, so it's Scheme all the way down, now. It also comes with an object system and its own IDE/GUI, and can do neat things like use graphics inside the REPL. It also has a neat mechanism to re-skin the surface syntax in a variety of ways, which is how it supports both backwards compatibility with older Schemes and other languages like Datalog. There's an Algol, apparently.

Drowning in code: The ever-growing problem of ever-growing codebases

_andrew

Re: A Few Ironies

Memory size was probably a bigger barrier than complexity. After all, the LaserWriter and PostScript were more-or-less contemporaneous with the early Macintosh, and they both just had 68000 processors in them. Once you've rendered your PostScript fonts to a bitmap cache, it's all just blitting, just like your single-size bitmap fonts. (Complexity creeps in if you want sub-pixel alignment and anti-aliasing, but neither of those was a thing in those early 1-bit pixmap graphics systems.)

Sun eventually had Display PostScript, but you're right that that was quite a bit later, and used quite a bit more resources, than the likes of Oberon. PostScript itself is not so big or terrible though.

_andrew

Re: m4 macro processor

Have to call "nonsense" on that one. A language without a macro level is a language fixed in time that can't learn new idioms or constructions. It's one of the things that the Lisp family totally got right: a macro mechanism that exists as a means to extend both the language syntax and the compiler. Rust has one too. I'd be surprised if any more languages were developed that did not have a way to write code that operated "at compile time".

_andrew

If your zoom/teams/whatever client is doing background replacement or blurring, then it isn't the web-cam doing the video compression (well, it might be, but then the stream has to be decoded into frames, processed, and re-encoded to be sent). What's worse, most of the video systems are now using "deep learning" video filters to do the edge detection that determines where the subject and background meet, and that's probably leaning on some of that "AI" hardware that you mentioned, as will be the (spectacularly effective, these days) echo and background-noise suppression on the audio signals (which have their own compressions and decompressions going on too, of course).

I just had to retire the older of my 5K iMac systems, despite the screen still being lovely and it being perfectly capable for most office applications, because it absolutely could not keep up with the grinding required to do zoom calls.

It might not have made it into the likes of the Geekbench-style benchmarking suites, but time-in-video-conference is now a key measure in the battery-life reports for modern laptop systems.

_andrew
FAIL

Re: Thank you Liam

The thing about libraries, in the original construction, was that you only had to link in the 50% (or whatever) that you actually used: the rest stayed on the shelf, not in your program. The ability to do that went away with the introduction of shared libraries.

Shared libraries were originally considered to be a space-saving mechanism, so that all of the programs that used, say, the standard C library did not have to have a copy of it. Even more so when you get to GUI libraries, it was thought. But it was always a trade-off, because, as you mentioned, every application only used its favourite corner of the library anyway: you needed to be running quite a few applications to make up the difference.

But now it's even worse, because lots of applications can't tolerate having their (shared) dependencies upgraded, and so systems have developed baroque mechanisms like environments (in Python land) and containers and VMs so that they can ship with their own copies of the old and busted libraries that they've been tested against. So now every application (or many of them) hangs on to a whole private set of shared libraries, shared with no other applications, rather than just statically linking to the pieces that they actually wanted. The software industry is so useless at looking at the big picture...

The successor to Research Unix was Plan 9 from Bell Labs

_andrew

Re: ...files...

Bit of a shame that IP and its numbered connections fit so badly into that scheme. The way Bell Labs networking was going, with abstractions over multiplexing of communication connections, seems much more scalable and "container-able", but I don't think that there's any way to get to there from here, now.

War of the workstations: How the lowest bidders shaped today's tech landscape

_andrew

Re: Correctness and Simplicity

Once upon a time "correct" used to mean "can't possibly fail". Ship in ROM. In the earlier days of auto-updating apps and web apps it seemed to mean "worked once in the lab and the executive demo was OKed". Ship on cron-job. In these days of nation-state hackers and crypto-jacking I get the sense that it's swinging back slightly towards the former sense. Think of the hacking/spying industry as a free universal fuzz-testing service?

The US and EU have just almost outlawed new C and C++ code. That day is still a little way off, I think, but perhaps the Lisps and Smalltalks of the world will get more of a look-in?

_andrew

Re: Disagree on a few points

This image of the "microprocessor" model of processing, with a single shared bus and interrupt handlers pushing the data around, hasn't really been true for a very long time. Modern "micro" systems use all of the low-latency batch-processing IO-offload tricks in the book. Everything happens through DMA, driven by intelligent processors in the peripheral systems that execute chains of IO requests which they DMA straight out of RAM. Interrupts are fully vectored and only used once the entire buffer of IO requests is empty (if then). The bottom levels of the OSes are all asynchronous batch systems...

And yes, they still spend much of their time sitting around, waiting for the user to push a button, but when they're playing a video game they're doing a lot more work than any mainframe or supercomputer from even twenty years ago was capable of.

We've just found a lot more "work" for them all to do (for small values of "work").

_andrew

A lot of design points were being explored at that time

The transition wasn't just Mainframe->mini->workstation->PC. As the article mentioned, Ethernet came out at this time (and Token Ring, over there in IBM land) and that led to all sorts of interesting experiments in sharing. A big (Unix or VMS) server networked to diskless workstations. Multiple servers clustered together, sharing disks at the physical level. Plan 9 network servers and blit terminals. X Windows with dedicated X terminals accessing logins on multiple servers, with unified user storage on NFS servers (my personal favourite at the time). A lot of these only "worked" as the result of complexity inflections: the serious graphics folk insist that X11 doesn't work once you move to 3D rendering and compositing, hence the (slow) move back to integrated models (Wayland, Quartz).

For all that they had networking, those ancient Lisp and Smalltalk workstations had limitations: they were single-processor, single-threaded systems without the sort of memory management or user/system protections that we're used to today. Despite being programmed in Pascal, the early Mac System had a slightly similar run-time structure: data and graphics were shared between all levels, and any bug would crash the whole system, or any long disk or network activity would (or at least could) hang everything until it completed. We forget how common system crashes were back then. (Although both Smalltalk and Lisp were much less able to die from an array-bounds error than later C-based systems, and the single-threaded software stack did avoid all of the shared-memory failure modes that came later.)

Crashing the system didn't really wreck much because there was only really one thing going on at a time. At least the compiled-Pascal approach of MacOS solved the 3rd-party application development and distribution problem in a way that the Lisp and Smalltalk machines couldn't. The whole system was one VM image there, and so the only way to switch to a program developed by someone else was essentially to re-boot into that other VM.

Portable Large Language Models – not the iPhone 15 – are the future of the smartphone

_andrew

The flaw in the targeted advertising theory

I've always been liberal with Google and tracking cookies for that exact reason: the argument that they would show me useful stuff is superficially compelling. Doesn't work: all of the ads are still worse than useless. It turns out that all of the stuff that I'm interested in, I know where to find already. Ergo, no-one needs to pay to put it in front of me, so they don't. The only people who will pay to put things in front of me "by force", are those flogging things (or ideas) that I'm not interested in, and don't want. So IMO the answer isn't ever-better targeted advertising, it's paying for services. If the Reg had a reasonable "no ads" subscription option, I'd pay it.

_andrew

Re: Sure, it's possible, but why would you want it?

Orac was more omniscience in a box than just AI, and even he/it convinced Blake and the crew to do things occasionally just because it wanted to see what would happen. But Orac also had infallible accuracy on his side: LLMs are just stochastic echoes in idea space: when they come up with the correct answer it's pure chance. That makes them much closer to useless, in my book.

Never did see Star Cops, I'm afraid. Auntie must not have picked up the rights, here in the antipodes.

_andrew

Sure, it's possible, but why would you want it?

Just about every single "AI in the machine" story ever written has been a cautionary tale, from the Sirius Cybernetics Corporation onwards. No one (besides, perhaps, Iain Banks, or maybe Asimov) has posited an AI sidekick actually doing something useful and productive, let alone not turning on the user. I have no use-case for such a thing. What would you do with it?

FreeBSD can now boot in 25 milliseconds

_andrew

Re: VM vs Process

Many (many) years ago I made good use of Wine to run a Windows ('95-vintage, I think) command-line assembler for a Motorola DSP. Worked really well, surprisingly, and allowed me to use "real" make and work in a bash shell and other development niceties, without having to maintain and feed an actual Windows box.

AVX10: The benefits of AVX-512 without all the baggage

_andrew

Re: flags

One of the other new instruction set changes is an "architecture version" capability tasting mechanism, which certainly seems like a step in the direction of making things more uniform and easier to target.

Or you could look at it as yet another thing that needs to be tested for, if you want to address older processors that don't have it.

_andrew

My reading of the released docs suggests that they _still_ haven't addressed the compatibility problem, at least not in the way that Arm SVE and RISC-V V do. Yes, AVX10 has versions that work with 256-bit and with 512-bit vectors, and if you write code for the 256-bit ones it will work on 512-bit hardware, but only 256 bits at a time. And if you're silly enough to write code that uses the 512-bit registers, it will only work on the (high-end) processors that have them. So you're back at processor capability tasting and multi-versioning. (And of course there are still millions of older systems that only have SSE and AVX{,2}.)
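
To be concrete about what "capability tasting and multi-versioning" means in practice, here's a rough sketch of the GCC-on-x86 flavour of it (the function is invented, and this shows avx2/avx512f dispatch rather than the AVX10-specific targets):

    #include <stddef.h>

    /* GCC (and recent Clang) will emit one clone per listed target and
       pick the best at load time via an ifunc resolver; "default" is the
       fallback for older SSE/AVX-only machines. */
    __attribute__((target_clones("avx512f", "avx2", "default")))
    void add_arrays(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];   /* each clone gets vectorised differently */
    }

    /* Or taste the hardware by hand at run time: */
    int have_512bit_vectors(void)
    {
        __builtin_cpu_init();
        return __builtin_cpu_supports("avx512f");
    }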

Meanwhile Arm and Apple are keeping up throughput-wise by cranking up the number of NEON execution units and scheduling multiple SIMD instructions every cycle...

Soft-reboot in systemd 254 sounds a lot like Windows' Fast Startup

_andrew

Re: faster

The latest FreeBSD quarterly report mentioned that the boot-speedup work has got a (presumably VM) kernel boot down to 12ms. Since booting real hardware is something that happens as close to "never" as kernel upgrades allow, the VM use-case appears to be the only instance where boot speed is of any concern at all. Or perhaps embedded systems: it's useful if they just "turn on", but they're usually sufficiently simpler that they can. The need for VMs to boot quickly is because they have become the new "program" as the process model has failed to keep up with the complexity of software distribution and the reality of anti-dependencies. I blame shared libraries and perhaps the IP model of network communication.

_andrew

Since Gnome comes from the house-of-systemd, there have long been GUI applications that consider that not working on not-Linux is not their concern. One of the reasons why there are now BSD-flavoured/specific desktop systems (although I'm sure that a lot of the older ones still work fine too). The Linuxization of the desktop GUI and (especially) graphics card drivers was what motivated me to switch my desktop activity from FreeBSD to macOS years ago. Still runs my servers. I have a suspicion that the situation is improving though, and have been tempted to try putting a FreeBSD desktop together again. We'll see.

It's 2023 and memory overwrite bugs are not just a thing, they're still number one

_andrew

Re: Aren't there other options?

Sure: Ada, Pascal, Modula-[23], Go, Swift, Java, C#, JavaScript, Python, all of the Lisps, all of the MLs: the list is long. Really, memory unsafety is a pretty unusual property for a 3rd-gen programming language. It's just that a couple of examples have historically been really, really popular (C, C++).

_andrew

Re: Memory Errors: 70% of CVE

And yet log4j/log4shell was written in a type-safe, memory-safe language, involved no memory errors of the sort described, and is still not "solved" or fully patched: it's everywhere. Because someone implemented an algorithm with more flexibility than it needed. The code was/is performing as designed, it was just unsafe as designed...

As memory-safe languages (and memory-safe subsets of C++) are increasingly used, especially for network-facing code, that 70% number will inevitably go down (IMO). Writing the wrong algorithm, and accidentally violating security, or providing unexpected behaviour is going to be with us forever.

_andrew

Re: RE: Cough, cough, Rust

I think that tide is finally turning. Not necessarily towards Rust: there are arguably simpler languages (the garbage-collected sort) that lots of applications are now being built in.

However, it is well to remember that the CVEs mentioned in the last few paragraphs, authentication errors and the like, can be written in any language. No amount of type-checking or fencing will protect against implementing the wrong algorithm. Things like that are also often difficult to test: tests are good for verifying that the functionality you intended is there and behaves as expected. It's much harder to check the negative space: that there isn't some other, unintended functionality, like the ability to log in with an empty password...
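
A contrived C sketch of the sort of thing I mean, which is perfectly memory-safe as far as any checker (or safer language) is concerned, and still lets anyone in with an empty password:

    #include <string.h>

    /* Returns 1 if the supplied password matches the stored one.
       The bug: the comparison length comes from the *supplied* string,
       so an empty password compares zero bytes and always "matches".
       No memory error, no undefined behaviour, just the wrong algorithm. */
    int password_ok(const char *supplied, const char *stored)
    {
        return strncmp(supplied, stored, strlen(supplied)) == 0;
    }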

Bosses face losing 'key' workers after forcing a return to office

_andrew

Re: Its all a matter of what you call "work"

Not just that, but you no longer need to "book a meeting room" (unless being joined by people in the office). When most meetings involve people from other countries and timezones anyway, it's usually _more_ convenient to participate in meetings from home.

Rocky Linux claims to have found 'path forward' from CentOS source purge

_andrew

Re: "Certified"

You're welcome! I've been a happy BSD user since about '86 and never felt the need to change. Obviously some Linux is unavoidable, but mostly just observed for interest.

_andrew

"Certified"

I've watched and used Linux since it was released, but never Red Hat. (Mostly used the BSDs, which seem to be more of a literary tradition than the Linux tied-to-executable form.) I find it hard to understand who is inconvenienced by these moves. The closest I've read is people who are clients of (expensive) proprietary software that is only "certified" to work on specific builds of Red Hat. And all you get for that is a specific set of versions and build configurations (bugs, if you like) for software that is readily available elsewhere.

Seems to me that the complaint must be with these proprietary software providers, for the lack of faith in their product, or lack of testing against other distributions. If the software is so expensive and so singular, I expect that most of the clients do indeed spring for matching Red Hat licenses. So again, I'm missing who's actually inconvenienced here.

Why you might want an email client in the era of webmail

_andrew

Re: About that local-storage advantage...

Exactly: IMAP enables the multi-device use-case, but in many people's mind it means "mail is on the server". That is not a problem when the server is your own. (You can also make it part of your backup schedule, then). Really not an easy problem to solve, I think, because providing a vendor-agnostic personal IMAP storage application is not really in the interests of any of the current players, who get significant lock-in stickiness by having you use theirs.

_andrew

About that local-storage advantage...

One of the arguments against webmail presented here is that all of your mail stays on the server, which has various risks associated with it. That's true, but it is not unique to webmail clients: the k9mail/Thunderbird-for-Android client does that too, and it's possible to configure several other local clients to do the same thing. Is this bad? I find that there are good reasons not to keep local copies. In the case of k9 (which I use), it makes it lighter and faster: my "home" folder wouldn't fit on my phone anyway. I get around this conflict by running my own mail server, using fetchmail to sweep messages from my ISP's inbox. That's a lot more effort than most people would be prepared to go to, though. I don't know of any "nicely packaged" way to achieve this setup, where I control and possess my mail archive while also allowing access from many lightweight client systems.

I'm curious about the hate for IMAP that always comes up in these discussions. What bad experiences can have led to that? In my experience it simply works, really well, and is to my mind the obviously correct answer. (Well, if you were doing it again, perhaps some sort of REST-style thing would be set up instead, but it comes from an earlier era and still gets the job done.)

Intel says AI is overwhelming CPUs, GPUs, even clouds – so all Meteor Lakes get a VPU

_andrew

Re: Chicken or Egg?

Narrow use-case: IMO yes. And if the photo and its caption are to be believed, they're supporting this supposed use case with the largest of the three dies on the carrier. I think that there's a chunk of the story missing.

Version 100 of the MIT Lisp Machine software recovered

_andrew

Re: Off Topic? Well....Not Quite!!

Intel was definitely winning on process technology, but that's not the whole story.

Perhaps by good design or just good luck, the x86 architecture did not have much in the way of memory-indirect addressing modes. The 68k, like the VAX, PDP-11 and 16032, did have memory-indirect addressing modes (and lots of them: they all tended to have little mini-instruction sets built into the argument-description part of the main instructions, to allow arguments and results to be in registers, or indirect, or indexed, or some combination). This seems like a nice idea from a high-level-language point of view. After all, you can have pointer variables (in memory), and data structures with pointers in them (CONS, in the Lisp context, for example).

The trouble is that this indirection puts main-memory access into the execution path of most instructions. That wasn't such a problem in the early days, when there was no cache and memory wasn't much slower than the processor's instruction cycle. But that changed: memory got progressively slower compared to CPU cycles, CPUs developed pipelining, and that pretty much put the kibosh on indirect addressing modes and the processor architectures that supported them. RISC designs were the extreme reaction to this trend, but there was sufficient separation between argument loading or storing and operations in the x86 instruction set to allow it to follow RISC down the pipelining route.
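
You can see the same effect from C today: in the list walk below, every load's address depends on the previous load, so memory latency sits on the critical path of every iteration, much as it did with a memory-indirect addressing mode, while the array walk lets the loads be issued ahead, prefetched and vectorised.

    #include <stddef.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Pointer chasing: each step must wait for the previous load before
       it even knows which address to fetch next. */
    long sum_list(const struct node *n)
    {
        long sum = 0;
        for (; n != NULL; n = n->next)
            sum += n->value;
        return sum;
    }

    /* Addresses known up front: loads can be overlapped and pipelined. */
    long sum_array(const int *a, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }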

You can still buy 68000 derivatives, by the way. Motorola semi became Freescale became NXP, and they still sell cores called "ColdFire", which are mostly 68k, but with a lot of the extreme indirection that came in with the 68020 to 68040 streamlined off. They're essentially microcontrollers now though.

Requiem for Google Reader, dead for a decade but not forgotten

_andrew

Re: RSS-feeds are valuable!

The Feedly interface is pretty easy, with the ability to search for feeds by name rather than having to find and paste in a feed URL. It also has a pretty nice grouping and management interface. Not perfect, but pretty good.

Also: there aren't all that many sites (certainly only one or two that I care about) that don't have an RSS feed. I put that down to 50% of web sites running on WordPress, and WordPress has RSS turned on by default. It comes for free and brings traffic to the site, so why would anyone turn it off?

The Stonehenge of PC design, Xerox Alto, appeared 50 years ago this month

_andrew

First? The Xerox and Symbolics and TI Lisp machines were contemporaneous

They also had very similar (to the Alto, not necessarily the Apple follow-ons) windowing GUIs, mice, networking. Single user machines. Common Lisp and Interlisp had object systems. All put in a corner after the (first) AI winter.

Taking notes from AWS, Google prepares custom Arm server chips of its own

_andrew

Re: Google and AWS can where Sun couldn't?

The end of Dennard scaling, and Intel fumbling the 10nm transition. Intel used to wield an unbreakable year-or-two process-technology lead over all of the other fabs, and did not fabricate for other designers: they had to use the merchant fabricators (IBM/AMD, which became GlobalFoundries, which gave up at 14nm, plus Samsung, TSMC, UMC and others). Only TSMC and Samsung are still on the bleeding edge, and they will fab parts for anyone who pays.

The other factor is that previously only Intel, MIPS, DEC (Alpha) and IBM were designing big, wide, out-of-order speculative architectures. Speculation wastes work, and so runs hotter than you want for battery-powered devices. Well, that was the case until the speculation became really good, and the battery-powered devices needed to run desktop-style workloads. Now there are big, wide, speculative, out-of-order designs for all of them.

_andrew

Re: re: implementing proprietary code to keep you locked into their service?

The cloud lock-in is already far stronger through software than through processor architecture. All of the clouds provide different APIs for databases and search and other "helpful" facilities, and those are the things that software has to adapt to. What instruction set the compilers spit out, in order to run, almost doesn't matter. Heck, lots of the cloudy software is JavaScript or Java (or Perl or Python) anyway, and that already shields you from the processor architecture. Most of the rest is open-source C or C++, and that, for the most part, also doesn't care what processor it's running on (yes, existing processor-specific optimizations bite you here).

So of course there's lock in: friction equates to margin. It's not so much in the processor architectures though. It likely could be in the attached accelerator architectures (that the applications come to depend on). FPGA vs TPU vs GPU etc...

GCC 13 to support Modula-2: Follow-up to Pascal lives on in FOSS form

_andrew

Re: I call bullshit

I recognize myself in that last paragraph. Many years ago, as an undergraduate in a CS graphics course, we were given an assignment that required the use of some graphics libraries that were available in Modula-2, and were pointed towards the Modula-2 compiler and documentation. Everyone else just did the assignment in Modula-2, somehow. In the first couple of days I found that the bug I was chasing was a compiler code-generation bug, so I immediately switched my design to use two processes: one was a thin wrapper around the Modula-2 graphics library that took simple drawing commands over stdin; the other was the main logic of the assignment, which I wrote in C, writing drawing commands to stdout. Compiler bug safely contained, I got the assignment working on time...
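
The shape of that split, sketched in C (the one-letter command format is invented; the real protocol is long forgotten): the assignment logic just prints drawing commands, and the thin Modula-2 wrapper reads them from stdin and makes the actual library calls.

    #include <stdio.h>

    /* The "main logic" side: one drawing command per line on stdout,
       piped into the wrapper around the graphics library. */
    static void draw_line(int x0, int y0, int x1, int y1)
    {
        printf("L %d %d %d %d\n", x0, y0, x1, y1);
    }

    int main(void)
    {
        draw_line(0, 0, 100, 100);
        draw_line(100, 100, 200, 50);
        fflush(stdout);   /* make sure the wrapper sees them promptly */
        return 0;
    }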

The only use of Modula-3 that I was ever aware of was the "cvsup" tool that FreeBSD used, back when the repository was in CVS. It was a remarkable, multi-threaded high efficiency piece of kit that the author claimed relied heavily on the support that Modula-3 provided for writing safe multi-threaded code. While I admired the code, it did suffer a bit from being written in "not C", which always made it difficult to maintain, as they needed to maintain both the code and the toolchain. Git seems to have solved that problem to everyone's (?) satisfaction.

Qualcomm talks up RISC-V, roasts 'legacy architecture' amid war with Arm

_andrew

Re: No better time to start a new mobile OS

Exactly correct, with a side-order of "check out the history of all of the great-but-failed mobile OSes that fell by the wayside while the Android/iOS duopoly was establishing itself". (MeeGo, Tizen, Sailfish, BlackBerry/QNX, Windows (three attempts), KaiOS, ...)

Oddly enough, the most likely path to a working alternative to Android might just come from Google themselves (Fuchsia).

Voice assistants failed because they serve their makers more than they help users

_andrew

Re: "... they serve their makers more than they help users"

> you are not using the internet... you are a product OF the internet.

The internet isn't an entity though. It's a bunch of independent entities forming something that behaves a bit like a coherent whole by dint of mutually agreeing to run the same set of protocols, and some of the same applications.

Never confuse the internet with the web, nor the web with any particular corporate web-facing product or service. Category error.

Arm says its Cortex-X3 CPU smokes this Intel laptop silicon

_andrew

Re: Girding of Loins

I tried to buy a reasonable Alpha-based system for most of that decade, even after Samsung produced a couple of lines with the usual PC-style PCI peripheral interfaces in them. Lots of press releases and magazine articles, but would anyone sell me one? No, they would not. Pretty hard to gain market share when you won't sell a system to anyone with money in their pocket.

I'm in much the same situation right now with Arm: there are some pretty nice "server" parts that have been announced and are supposedly going into racks in data centers, but has it occurred to anyone to plop one on a Mini-ITX motherboard and sell it to me? Gigabyte claimed to have such a thing a few years ago, but no response to enquiry. As expected.

FreeBSD 13.1 is out for everything from PowerPC to x86-64

_andrew

That depends on whether you consider that NetBSD forking (founding) itself from the patchkit work (ostensibly to focus on support for non-PC hardware) makes it older than the project that it forked from. That will depend on whether you think of FreeBSD as starting when the name was changed (a recognition that 386BSD was never going to incorporate the patchkit into a new release) or before then, when the project of supporting and developing 386BSD started.

(I had a nice little 386BSD 80486 workstation at the time, forked to NetBSD for a while, before switching back to FreeBSD a bit later. The rest of the research group were Decstations, X-terminals and a couple of big Sun and Sony boxes, while a few weird, feeble networked PCs running DOS or Windows were starting to invade.)

Jeffrey Snover claims Microsoft demoted him for inventing PowerShell

_andrew

Re: You don't have to like it to appreciate it.

It pays to keep in mind that the "traditional" unix tools are all about flat/regular files embedded in a hierarchical file system. They don't do things like JSON queries well, but that's OK: there wasn't any JSON when they were pulled together. *

Today there's jq, and we can get on with pulling information out of JSON files in simple one-liners as the gods of REST intended.

(*): Call me old-fashioned and a slave to my favourite tools, but I've never met a JSON file that I prefer to the obvious alternative collection of flat multi-column files. Still, it's all the rage, and there's likely no going back. "Self-documenting", they say, as though that was a virtue.

_andrew

Re: Serious question @KSM-AZ

I've been using Unix for more than 30 years, and at first I had no idea what this comment and its parent were talking about. So I googled, and discovered something new. Thanks to you both! The gist is:

https://www.gnu.org/software/coreutils/quotes.html

which describes how GNU broke ls in 2016, apparently in order to make it easier to cut-and-paste with a mouse.

I'm especially interested that it has apparently been a thing for eight years, and even though I use Linux and the gnu utilities quite often, they are not my "primary" unix environment, so "hmm, that's odd" hadn't percolated to the level of working out what was wrong, until now. As explained: the decoration is only applied if stdout is the terminal, not a pipe.
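
The terminal test itself is a one-liner, which is why so much tooling (quoting, colour, pagination) behaves differently in a pipe; a minimal sketch of the pattern:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Decorate output only when a human is looking at a terminal;
           keep it plain and script-friendly when stdout is a pipe. */
        if (isatty(STDOUT_FILENO))
            printf("'file name with spaces'\n");  /* quoted, for eyeballs */
        else
            printf("file name with spaces\n");    /* plain, for pipes */
        return 0;
    }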

Regarding the comment "aren't many other UNIXes around anymore": well, there's still macOS and the BSDs, which is where I do most of my command-line work.

macOS Server discontinued after years on life support

_andrew

Re: Push email for IMAP

Isn't IDLE the right way to do that for IMAP? The IMAP servers that I know (I use Dovecot myself) have done IDLE for years, but I've never seen Apple Mail take advantage of it.

Apple's Mac Studio exposed: A spare storage slot and built-in RAM

_andrew

RAM upgrades? More than twenty years since that was a thing.

I don't get the hand-wringing about RAM upgrades. I think that the last time I actually upgraded DRAM on a system was mid-90s, and that was because I had under-spec'd it in the first place, due to being a poor student at the time. Ever since then I've found that by the time a machine started to feel as though more DRAM might be a good idea the technology had changed to the point where a motherboard and CPU replacement would also be necessary.

These days, DRAM is big enough. 32G is bigger than the hard drives you could buy back in the 90s, and since close-storage is flash and really, really quick itself, you don't really need to have all your files cached. So buy the 64G model and be happy.

The only workloads I know of that really chew through RAM are virtual machines (because they're stupidly inefficient that way), and these M1 systems can't do VMs anyway, so that's not a problem.

Meet Neptune OS, an attempt to give seL4 a Windows personality transplant

_andrew

Windows suits seL4 security model

The reason that seL4 doesn't already have a working Unix/POSIX-shaped microkernel arrangement (like Minix or Genode or GNU) is that its security thinking, of restricted communicating processes, really doesn't like the "inherit all the state" forking model of Unix. The "create a new process that just does this thing, with access to only these things" model is closer to the Windows process model.

It's also the model used by Fuchsia, apparently, so they could have just gone with that? Not that Fuchsia has much in the way of application ecosystem yet.

Of course Unix is growing capability mechanisms too (such as Capsicum from Cambridge University, now in FreeBSD, and CHERI, now getting hardware support from Arm), but I suspect that it's a bit of an uphill battle.
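
For the curious, the Capsicum flavour of "access to only these things" looks roughly like this on FreeBSD (a sketch: acquire what you need first, then enter capability mode, after which reaching out to global namespaces fails):

    #include <sys/capsicum.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Acquire everything this process will ever need... */
        int fd = open("/var/log/messages", O_RDONLY);
        if (fd < 0)
            err(1, "open");

        /* ...then give up the right to acquire anything else. */
        if (cap_enter() < 0)
            err(1, "cap_enter");

        /* The already-open descriptor still works. */
        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf));
        printf("read %zd bytes\n", n);

        /* But e.g. open("/etc/passwd", O_RDONLY) would now fail
           with ECAPMODE. */
        return 0;
    }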
