* Posts by Nate Amsden

2607 publicly visible posts • joined 19 Jun 2007

FCC looks to torch Biden-era cyber rules sparked by Salt Typhoon mess

Nate Amsden Silver badge

this administration

would try to convince people that drinking water is bad for your health if Biden had come out and told people to "drink plenty of water"

(of course if you drink too much you will drown...)

Microsoft's first Windows 10 ESU Patch Tuesday release fails for some

Nate Amsden Silver badge

I'll wait

I'm patient. Give them a year to figure out this ESU stuff and maybe I'll sign up then..

Server virtualization market heats up as VMware rivals try to create alluring alternatives

Nate Amsden Silver badge

Re: Ditch the emotion

I agree to some extent. Prior to Broadcom, vSphere Enterprise+ was about the same price as it was 15 years ago, aside from the 32-core-per-license limit (?). CPUs have gotten so much more powerful in that time, not to mention inflation.

I think even if you had doubled the Enterprise+ license it would be worth it. But they went far beyond that, of course.

Nate Amsden Silver badge

Re: Ditch your viewpoint

I was willing to do just this for VVF. Not VCF. The only thing I care about is ESXi and vCenter. At current VVF pricing I could eat the higher costs, given the massively slashed core counts vs the old servers I have now. The cost would still be good. But VMware refused to quote more than 1 year, and actually wouldn't even sell VVF by itself; they said the only option is to upgrade some existing licenses. If they'd do that and give me 3 to 5 years of licensing up front, I'd go for it. But they won't.

They also specifically said VCF prices are going up this month (new fiscal year), so I'm waiting to see what that looks like (won't change my mind, just curious).

Nate Amsden Silver badge

Re: I'm hopeful

Damn, sorry to hear that. Appreciate the info though. I had a feeling I'm going to miss VMFS more than I thought... hopefully with customers working through issues they can figure things out over the next few months.

Nate Amsden Silver badge

I'm hopeful

for HPE VM Essentials. Though they have been terrible to date at releasing much of any info. On one of my fishing expeditions to find out more, I came across an excellent PDF document on some server in Italy. It answered most of my questions, almost all of them; the only remaining ones are around "how well does it work...". My biggest questions are around Fibre Channel storage with GFS2 (not as if GFS2 is a new filesystem, it's apparently 20 years old). Have read mixed results on that for other platforms at least. HPE supports it out of the box (unlike Proxmox, last I checked). I've got ~300TB of HPE 3PAR Fibre Channel flash storage across multiple arrays that I don't plan to stop using anytime soon.

But this document is awesome (27 pages long, it even has a feature-by-feature breakdown comparing them vs vSphere), though I found out it is apparently an internal-only document not meant to be released; they have since removed it from the site in Italy. I actually found it doing web searches trying to find info about GFS2 and KVM. I sent HPE as strongly worded a message as I could that they need to get this info out there. The doc answered a ton of my other questions and really made me feel good about using it. The only paranoid bit: are they not saying all this stuff publicly yet because it doesn't work well? Maybe..

I haven't spent TOO much time looking at other alternatives but from what I have seen, at least for my use cases(not ripping out my storage), nothing seems to come close to VM Essentials (not even Ubuntu's LXD product)

Also helps that Ubuntu 24 (which VM Essentials runs on top of) is my standard OS anyway. 90%+ of what would be deployed on it is other linux VMs.

I plan to evaluate it in 4-5 months, giving it more time to mature.

Canonical pushes Ubuntu LTS support even further - if you pay

Nate Amsden Silver badge

hopefully individuals can pay too

Unlike MS where they make it pretty much impossible for individuals to get LTSC legally

AI companies keep publishing private API keys to GitHub

Nate Amsden Silver badge

move fast and break often

Clearly breaking so often that nobody has time to care about security. If someone nabs their cloud keys and starts stealing resources, they'll probably think it's normal activity (the bill is so high already, what's another million between enemies).

Google’s Ironwood TPUs represent a bigger threat than Nvidia would have you believe

Nate Amsden Silver badge

cost

Surprised the article doesn't mention cost; maybe there just isn't any info. But the vertical integration has got to provide some pretty huge savings to Google vs using a 3rd party for their chips. It also appears they use their own networking as well, which will save a bunch more. I suspect similar will be true for Amazon/Meta/Microsoft (who all have already made, or I believe are making, their own "AI" chips). Add in China making their own stuff, and Nvidia is probably going to be in a world of hurt (relative to today) in the not too distant future (especially once the current generation of accelerators ages out, over whatever effective lifetime they are expected to have). I'm also sort of assuming most/all of the non-Nvidia players won't be (or maybe/probably can't be) using CUDA? Having so much of the market not leveraging Nvidia's tech will shake 'em up quite a bit.

Microsoft's lack of quality control is out of control

Nate Amsden Silver badge

As has been shown/exploited countless times over the past couple of decades, it's easier to ask for forgiveness than to ask for permission.

Nate Amsden Silver badge

Re: The Legendary Legend

Lack of stability in NT is what drove me to Linux in 1998. I was using DOS/Win3, then, as I was in the pirate scene at the time, I used many betas etc. of Win95, and not long after the release of 95 I got sick of it and switched to NT 3.51 Server (a friend worked at MS and sent me a real install CD for it). Then NT4 came out and I moved to that (I had pretty good hardware at the time; I recall I was an "official" beta tester for NT4 and they sent me a set of CDs with the betas at one point). My memory is hazy on how much I liked (or not) NT 3.51, but NT4 wasn't great for long, and I was already dual booting Linux and Windows, then decided to just dump Windows entirely. Ironically(?) earlier this year I decided to formalize some of my older hardware as retro gaming type things, setting up fresh SSDs on two older laptops with XP/Win7/Win10, and setting up my Ryzen system, which till then was only used for video encoding, as a dual boot Linux/Win11 box to run games in Win11.

In an era where gaming on Linux is maturing quite well, I took the opposite approach. I was playing Unreal Tournament and other Loki games 25 years ago on Linux (and other games with Cedega years later), but now I don't bother with games on Linux computers. The main reason is I don't want to risk stability on my main system, which goes months between reboots. Second is I have the other hardware already, so I can just set it up the way I want (and finally have a decent desk setup with lots of monitors and switches for HDMI and audio). There isn't really any personal data on any of them, so MS can spy all they want on the systems they still support, I don't care. I even bought fresh copies of Win10 Pro for one laptop, and Win11 Pro for my Ryzen system, direct from MS, the first MS operating systems I have paid money for (as standalone products) since Win95 (obviously got copies of other MS OSs through computer purchases etc).

Rideshare giant moves 200 Macs out of the cloud, saves $2.4 million

Nate Amsden Silver badge

Re: ...unless you have no other option

I realize of course Apple probably doesn't care, but I do believe they have a large data center footprint for some of their things. I also believe they do spend a lot on public cloud providers (their mindset is "we have so much money we don't care[1]", and that's fine...).

But my point is they probably have a good opportunity to build a data center with their hardware (probably changing the form factor so it is more rack friendly etc., like they had with their rackmount servers years ago) for the sole purpose of remote development of apps for their platforms (hosting it themselves and selling access to others). It may even benefit their own employees (for all I know they probably already do this to some degree internally). With their vertical integration they can probably do a ton of stuff from an (Apple) hardware perspective that other cloud players can't/won't do.

[1] I think this is their mindset due to something I read years ago where the claim was Apple would say something like this to people who interview there "I don't care if you can save us money, I care if you can make us more" (something like that), and obviously they have tons of cash.

Nate Amsden Silver badge

Re: Yet another case..

I can certainly agree there are likely other use cases out there where public cloud makes sense; in my 25-year career those have been very few and far between, and many of the cases I see of companies moving in or out of cloud line up with my own experience.

The one example of very low requirements is a decent one, I suppose; there is a whole "VPS" hosting market out there catering to those kinds of folks. Myself, I used to pay for web hosting space on small ISPs back in the 90s. Just pulling a number out of my ass here of course, but if your IaaS spend is more than say $5-10k/mo then you should look closer at how things are being used.

It's jaw-droppingly astonishing how many organizations spend $100k/mo+ on public cloud stuff without blinking. This goes back to my first public cloud hosting 15 years ago, where the company's bills were upwards of $500k/mo for a SMALL startup. Completely clueless doesn't even begin to explain things. The costs were in part driven by very high turnover and no documentation; we literally had 100, maybe 200, cloud servers that nobody even knew how to access (I worked on a project to try to identify unused things and shut them down to cut costs), and nobody knew if they were being used by anything, so everyone was scared to turn stuff off.

At the company I helped move out of the cloud in 2012, my manager told me years later he had to put his job on the line in order to get the board of the company to approve the plan. We were spending ~$90k/mo in AWS for a few months. He hired me explicitly to move us out (he had hired me at the previous company, the one spending $500k/mo at times), so he knew what I could do. The board was skeptical; fortunately we had a very supportive CTO, and my boss told him he guaranteed this would work or he'd quit (or something). They agreed and it was a massive success, even on the first day. The performance improvements on day 1 were dramatic (honestly more than I expected); I still have the email the CTO sent to everyone on that day. Over the following decade, as management changed, I went through several more rounds of "I want to use public cloud", and then I'd show them what the costs would be and they'd just be shell-shocked. So. many. times. Even at my current org I just went through that again in the past few weeks (technically I just gave the data to a manager who went and got the pricing, which was 4-8x more depending on the cloud provider; I try not to waste my time with direct involvement).

Years later, our new CIO at that company left for another company, and as layoffs mounted others joined him. They were in a data center running old stuff; I don't know many details other than they decided to go to Google Cloud. Costs were huge, and the funniest part was they spent 4 years trying to move stuff and never got everything moved, so they were paying both bills at the same time. Leadership refused to commit to "reserved instance" pricing to cut costs, out of paranoia(???), while simultaneously refusing to reverse course on the cloud migration - eventually they ran low on cash and started laying people off themselves, eventually being acquired, and I think most of the staff were axed?

Another similar example from early on: take a look at the image showing this company's budget/spending in this news article from 11 years ago https://www.geekwire.com/2014/moz-posts-2013-5-7m-loss/

You can ignore the text in the article (or don't..) because it does not mention their move out of AWS. I am friends with a guy who knew the CEO there at the time, and he told me the CEO was incredibly angry after realizing how much $$ they were wasting. Beyond moving out, I have no idea what happened in the years that followed.

But yeah, if you are spending $5k or so a month or less perhaps my posts aren't as relevant to you(?)

But seeing so many clueless idiots doing this stuff over the past 15 years has, sorry, just driven me crazy. Because even though I host my own stuff and have for a long time, the marketing crap that people eat up still comes back to me and drives needless stress (at times). So I feel compelled to try to fight back/inform where I can; even if it's a futile effort, at least I tried.

Nate Amsden Silver badge

Re: Yet another case..

Sure thing. Ironically enough, yesterday I purchased the domains cultofthecloud.com and cultofthe.cloud with the idea that I want to write some of these thoughts out and put them on that site (I host my own sites on my own server in a co-location facility) in a more organized fashion, rather than try to come up with the stuff on the fly every time I go to write a comment. Not sure when I will get around to actually doing the rest of it, but maybe soon...

Nate Amsden Silver badge

Re: Yet another case..

I am unsure what you are trying to say here.

I happily admit there are some great use cases for public cloud; they involve extreme elasticity with a low amount of time spent at the high end. For example, you need 2,000 CPU cores for no more than ~60 hours per month: 60 hours once a month, or say an average of 2 hours per day over a given month. I haven't run the exact math, but hopefully you get the idea: buying/running infrastructure 24x7 for something that only runs for a short time doesn't make a lot of sense (again, haven't run the math, so maybe it still does).
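Roughly, the back-of-envelope math looks like this (the per-core prices below are made-up placeholders just to show the shape of the calculation, not quotes from anyone):

cores = 2000
burst_hours_per_month = 60
cloud_per_core_hour = 0.05      # invented on-demand rate; substitute a real quote
onprem_per_core_month = 10.00   # invented amortized hardware/power/colo cost per core

cloud = cores * burst_hours_per_month * cloud_per_core_hour   # about $6,000/mo, only while bursting
onprem = cores * onprem_per_core_month                        # about $20,000/mo, paid whether used or not
print(f"cloud ${cloud:,.0f}/mo vs on-prem ${onprem:,.0f}/mo")

With those invented numbers the burst-only case favours cloud by a wide margin; set burst_hours_per_month to 720 (steady state) and it flips hard the other way.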

Another use case I have seen here on El Reg (can't find the article) is similar, just larger scale: one-off super-scale tests. The article I'm thinking of was an HPC test of some kind where they spun up a few thousand systems to run a benchmark to show the scalability of some software stack. Maybe it cost them $250k or something to run the test, a lot cheaper than leasing/buying a bunch of hardware to do the same thing. Assuming, of course, you don't run such things often (maybe once every few years or something).

Of course, such workloads are few and far between.

Pretty much anything that has some kind of steady-state workload will cost a lot more to run in a public cloud IaaS. Even with "bursty" workloads you should do the work to determine exactly how "bursty" your stuff is; in the last 25 years of my career I have not worked with a company whose "bursts" would come even remotely close to needing public cloud (talking on the scale of, say, ~100 CPU cores bursting to several thousand). Also keep in mind it is a BAD IDEA in almost all situations to burst from a data center to another data center (including public cloud). Latency between the two locations will often seriously hurt performance (you could get around this to some degree if you were to locate your primary facility very close to your cloud provider). Too often people think "oh my traffic surges 500% so I want to burst to cloud", not taking into account how many resources that 500% surge actually requires. In all cases in MY experience that surge is nothing to get excited about capacity-wise. If you are talking a 100,000% surge then yeah.. but your bigger concern is probably application scalability at that point anyway.

I came to a new realization recently, though: many organizations are filled with "cloud people" (I am starting to use the term "Cult of the Cloud") who have bought the marketing BS hook, line and sinker over the past decade and, as mentioned in a previous comment, don't really care about the costs until they are forced to. Look at examples like SAP (https://www.theregister.com/2025/09/04/sap_sovereign_cloud/), Geico (https://www.thestack.technology/warren-buffetts-geico-repatriates-work-from-the-cloud-continues-ambitious-infrastructure-overhaul/), 37signals (https://www.theregister.com/2023/09/18/37_signals_cloud_repatriation_savings/), Dropbox (https://www.trgdatacenters.com/resource/dropbox-left-the-cloud-in-2015-and-never-went-back/), and my personal example in the small business space: moving my previous employer out of AWS in 2012 with a 7-month ROI, saving $10M+ over the following decade and providing BETTER availability than Amazon did in the us-east-1 region. Ask yourself why it took Geico many years to realize public cloud was so expensive, same for SAP and others.. there is only one answer.

I can't count how many conversations/chats/threads I have had over the past 15 years with people in the "Cult of the Cloud"; I swear it's like asking a "MAGA" person how they think Trump is doing as president. Since the recent AWS outage that number has spiked quite a bit as I try to clarify things for people on LinkedIn. I was in one thread with a person who is deep in the cult. At one point she said she had "1 million endpoints, how can I deal with that many without auto scaling?". My answer was simple: without knowing more I can't really say for sure, but I can say I worked for an advertising company where we served several billion requests a day (very lightweight requests, mind you) without a lot of infrastructure (I think it was about 30-40 servers in 3-4 data centers running active-active). She was convinced you can't build a CDN without leveraging public cloud, and I provided evidence that most of the CDNs do not use public cloud; you just have to do a WHOIS on their CDN endpoint to see who owns the IP (in a completely unrelated thread a few days later, a Cloudflare employee specifically stated to me that they do not leverage public cloud for anything; I had sort of assumed they might "burst" into public cloud for their Workers, but he said that is not the case). She said public cloud really pays off when you are at high scale, and said I don't work on high-scale stuff so I wouldn't know. I happily admit I don't want to work for a big company. I gave her examples of Geico and SAP spending hundreds of millions of dollars on cloud and now moving out. She stopped replying at that point (this was about 7-8 days ago). You can present these people with overwhelming evidence and, for whatever reason, in many cases they just can't believe it; it really is like a cult.

If you (or anyone) are happy with what your public cloud provider gives you at whatever price you pay for it, that's great, keep using them. Just don't pretend you are saving any money (I'll admit again there are use cases where you would save, but they are few and far between). There are certainly technologies out there that I am willing to pay a premium for myself.

Nate Amsden Silver badge

Yet another case..

of a company filled with cloud folks who don't care about costs... until they are forced to care... just need to keep on repeating this again and again across more organizations.

Microsoft: Don't let AI agents near your credit card yet

Nate Amsden Silver badge

Probably just common sense

Those with common sense probably know not to do this anyway; those without common sense (likely the majority) probably won't care even after being advised explicitly not to, sadly.

Boffins: cloud computing's on-demand biz model is failing us

Nate Amsden Silver badge

IaaS was always flawed to begin with

It's bad by design, whether you look at cost, efficiency, availability, or complexity. A bunch of shiny marketing BS selling it to people is really the only thing it has going for it. I realized this myself fifteen years ago (won't link to my blog post yet again..); the situation is unchanged.

People are shocked: "why isn't XYZ multi-region or multi-cloud?" The obvious answer is cost, and complexity. Have little doubt that most orgs are well aware of the risks; they just choose not to do anything about them (looking at you, Amazon Alexa). Look at South Korea and their lost data: they said "there was too much data to back up", meaning nothing more than "we weren't given the budget to do it right".

I'm sure scientists were all giddy about getting access to a cloud dashboard thing and being able to spin things up whenever they want. They didn't think or care (initially) about the costs associated with it (nor do many orgs), but now that they are being forced to face it, they cry.

Look at companies like Geico (which apparently spent a decade moving into public cloud, spending $300M/year, realized it was 2.5X more costly, and is now moving back), and SAP (probably burning untold billions), which last month announced a $20 billion investment in their own facilities over the next decade: companies filled with people who flat out didn't care about the costs...

until they were forced to care.

If I can save a small company $1M+/year ($10M+ over a decade, probably closer to $12M really), and Geico can save $120M a year (who knows how much SAP will save), you can save a ton of money getting out of public cloud IaaS whether you are small or super huge. I actually had someone ask me recently whether the money saved included the "extra costs for staff" to operate equipment. I sort of laughed. I told them the truth: the same people that ran the cloud stuff ran the infrastructure, no staff changes. Don't trust me? Look at the well-documented 37signals move out of AWS over the past year, where their CEO said exactly the same thing (on LinkedIn anyway; unsure if he mentioned it in his blog posts). Even if it hadn't included the extra employee costs, the savings at my small company were in excess of $1M per year; you could hire a few more people if you really needed to and still be saving a ton.

Me? I care about the costs a bit; I mean, it's the easiest way to justify things to non-technical folks. But for me, really, at the end of the day moving out of IaaS was more about control, availability, and peace of mind: systems running for months and years without issue. Not having to rebuild a server due to some failure in over a decade of operation (that was a semi-regular process while using AWS). My oldest flash storage array still in production just entered its TWELFTH YEAR OF CONTINUOUS AVAILABILITY. 12 #$#@$ years, that is insane, and being that it's flash, and has 4 storage controllers, it's still damn fast and works perfectly fine (I added 3 more refurb quad-controller arrays to distribute the risk two years ago), and of course I have four-hour on-site hardware (no software) support. So far beyond my expectations, and only a single component failure in that time. I have network switches that just passed FIVE THOUSAND DAYS OF OPERATION, no faults (technically I have had replacements for them on site for 2 years, just haven't gone on site yet to deploy them; planning on next year, last time I was on site I ran out of time).

SonicWall fingers state-backed cyber crew for September firewall breach

Nate Amsden Silver badge

Told it to their reps

Earlier this year I had a conf call with them where they were pitching replacing my existing SonicWalls with newer kit and pushing their cloud management stuff. This was long before this breach. I told them then that I really don't trust any org with cloud management of network gear, for security reasons and control reasons too.

I've been using SonicWall successfully as a layer 4 firewall and site-to-site VPN for over a decade and have never used their cloud backup; I wrote scripts that log in to the firewalls and tell them to upload their config to a local server instead, and integrate with internal monitoring as well. I also never deployed their SSL VPN on any of my firewalls, because it is simply a bad product (first discovered this on their Gen5 products; they are on Gen7 today and functionally it's still not a good product), always has been (which they probably admit, as they have a dedicated SSL VPN client product line as well, which I have never used; evaluated it briefly a long time ago but immediately ruled it out as it could not fully integrate with Duo inline enrollment at the time).
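The scripts themselves are nothing clever; the shape is roughly this (a sketch, not my actual code: the hostnames, export command and paths are placeholders, and whether a given firmware's CLI behaves over a plain SSH exec is something you'd need to verify):

#!/usr/bin/env python3
# Sketch of a nightly "pull the firewall config to a local server" job.
# Monitoring only needs the exit code: nonzero means a backup failed.
import datetime, pathlib, sys
import paramiko  # assumes paramiko is installed

FIREWALLS = ["fw1.internal.example", "fw2.internal.example"]  # placeholder names
EXPORT_CMD = "show current-config"  # placeholder; use your firmware's actual export command
BACKUP_DIR = pathlib.Path("/srv/firewall-backups")

def backup(host):
    try:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="backup", key_filename="/etc/fw-backup/id_ed25519", timeout=30)
        _, stdout, _ = client.exec_command(EXPORT_CMD, timeout=120)
        config = stdout.read()
        client.close()
        if not config.strip():
            return False
        (BACKUP_DIR / f"{host}-{datetime.date.today().isoformat()}.cfg").write_bytes(config)
        return True
    except Exception:
        return False

if __name__ == "__main__":
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    results = [backup(h) for h in FIREWALLS]
    sys.exit(0 if all(results) else 1)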

BUT as a layer 4 firewall and site-to-site IPsec VPN they've been pretty rock solid, very few issues over the past decade-plus, and I plan to continue to use them for those purposes.

Their response was the typical "everyone wants cloud stuff so we think you'll like it too" something like that ...

When Debian won't do, Devuan 6 'Excalibur' Linux makes the grade

Nate Amsden Silver badge

Re: Cooperation ?

I just finished my last upgrades to Devuan 5 a few weeks ago.. I have one 32-bit-only Devuan system I built 15 years ago, back when my VM server had only 4G of RAM and I was looking to limit memory usage as much as possible. But at least there are a few years left before that goes EOL and I have to rebuild it fresh in 64-bit. It currently seems to be using less than 512M of memory (though it has 2G allocated to it; 15 years ago I think I had it limited to 768MB).

Myself, I mainly care about systemd (not a fan) when I need to mess with things, which is on my servers (I run 17 instances of Devuan, either in VMs or LXC, for personal use); I don't need to mess with things on all servers, but I prefer sysvinit anyway.

I do have to deal with systemd at work, where I have about 650 Ubuntu systems. The original migration from Ubuntu 12 (sysvinit) to 16 (systemd) took well over 100 hours of work fixing things up to work right (spread over several months), though it has been mostly smooth since, with issues still coming up here and there. One recent one I ran into: apparently systemd newer than something like 254 (which is included in Ubuntu 24) prevents LXC containers from starting without a special AppArmor workaround (https://wiki.debian.org/LXC/SystemdMountsAndAppArmor#Permissive_AppArmor_profile ) -- note that running Ubuntu 20 containers on an Ubuntu 24 host requires no special step, by contrast. I put the workaround in place so I won't have the issue anymore; still annoying to have to do it at all though.
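(For anyone hitting the same thing: the targeted fix is the permissive profile from that wiki page. The blunt alternative, at the cost of dropping AppArmor confinement for that container entirely, is a single line in the container's config:)

# in /var/lib/lxc/<container>/config -- blunt alternative to the wiki's permissive profile;
# this simply disables AppArmor confinement for that one container
lxc.apparmor.profile = unconfined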

Though on my desktop/laptop I don't really need to mess with the system startup stuff at all so systemd is ok for me there(I have run Mint/MATE LTS as my desktop of choice since Ubuntu 10.04 LTS went EOL).

Deploying to Amazon's cloud is a pain in the AWS younger devs won't tolerate

Nate Amsden Silver badge

noped out in 2012 for IaaS

Saved my last org over $10M in the decade that followed, while providing better availability, performance and peace of mind at the same time.

Even my personal stuff sits at a colo. A total of 19 physical and virtual systems in a quarter rack consuming less than 200W, for 200 bucks a month with unlimited 100meg bandwidth, and about 5 years since the last power outage (no redundant power in that 30-year-old facility). 15 years and no critical hardware failures. My personal VMware hosts may be EOL but they've been running for 5 years continuously without the slightest hiccup (not even a reboot). Don't mind EOL for personal stuff. Been hosting my stuff at various places, including my home, since 1996.

Of course my professional gear is all highly available in a top tier facility.

IaaS is broken by design as I wrote about 15 years ago. Still broken today as the design is unchanged.

Alaska Air phones a friend to find out what caused massive October outage

Nate Amsden Silver badge

Probably too much complexity

The first SaaS company I worked for, 2 decades ago, had a lot of reliability issues with their app stack, so many outages... a couple of years after I left they moved to a redundant data center setup, though due to latency requirements the facilities had to be pretty close to each other. I assume they were doing real-time replication, perhaps with Oracle, as they did run the largest single OLTP database in the world (by that point probably around 60T, which was due to bad app design). I remember messaging a former co-worker at that point randomly, just asking how things were going, and his response was something like "we're 9 hours into a hard downtime on our five 9s setup". Couldn't help but laugh... I don't think I ever found out (or even asked) what the root cause of the issue was. But it really goes to show, at least in that case, that having such tight linkage between active/backup systems can in itself cause problems.

In a completely different situation at a big company (Fortune 500, one that I had no direct relation to), they had deployed multiple storage systems in a high availability configuration, I think 4 (large) systems total. They apparently ran into a nasty bug at one point that took all 4 systems down at the same time (I only knew because I had insider info from the vendor). Not sure how long they were down for or if any data was lost (or what the impact was). I think the bug was related to replication. Myself, I have worked through 3 different primary storage system failures in my career, each taking a minimum of 3-5 days to fully recover from (with varying amounts of data loss); in all cases the org did not want to invest in a secondary storage system (nor did they want to invest in one after the incidents occurred, at least not right away).

My storage systems are oblivious to each other, as is my storage network. I try to keep the balance towards far more simplicity (a trade-off vs features and automation) rather than more fancy integrated things. A case in point for my network: the core runs on a protocol (ESRP) that has been on the market for probably 25 years, providing sub-second layer 2 loop prevention and layer 3 high availability. No MLAG, no TRILL, no SDN, no dynamic routing protocols; don't need/want the complexity. I'm sure more complexity is suitable for certain orgs, just not for the ones I have worked for in the last 25 years. If I were building a new network today I'd use the same method as I have for the last 20 years, provided the requirements don't shift dramatically (if they did then I'd consider alternatives, assuming the requirements were realistic).

Debian demands Rust or rust in peace for legacy ports

Nate Amsden Silver badge

Re: What a jerk

What dependency hell? Hasn't been a problem for quite a while. I've been using Debian (more recently Debian-derived, in the form of Devuan and Mint) since Debian 2.0 (apt came out in 2.2). The only time I used testing was with Debian 3; never used unstable, though I have built many packages from it over the years.

I did recently decide to abandon the desktop Linux Nextcloud file sync software in favor of Syncthing, suspecting newer versions will be more difficult to run on my Mint 22 system, which I plan to run till 2030 or 2031. The dependency list on that app is absurd and I have no interest in running their container things. But that's just a side effect of trying to shoehorn newer stuff onto older systems. Packages in the distribution repos should have no dependency issues.

I have managed to compile the GNOME utility brightside, which is critical to my daily workflow. It hasn't seen an upstream release since 2004, and was last included in Ubuntu 16. Along with 17 other packages as dependencies, I got it to build cleanly, which should last me another 5 years.

I don't really have an opinion either way on the Rust stuff at this point. Scrapping small ports doesn't seem too bad; it's not as if that hardware is getting maintenance anymore, just run an older version of the distro or use NetBSD or something. Windows 7 still runs fine today and even XP runs fine for retro gaming (on appropriate hardware; for me that is a Toshiba Tecra A11 laptop I bought in 2010). You don't need the latest for everything.

If you want the latest, run more up to date hardware.

ISPs more likely to throttle netizens who connect through carrier-grade NAT: Cloudflare

Nate Amsden Silver badge

blocking by IP hasn't really been effective for a long time

(I'm sure Cloudflare knows this of course already...)

Blocking by IP made sense in the 90s and maybe up till the late 00s, but cloud providers, botnets, etc. have massively reduced its effectiveness. Add to that GeoIP services that have bad information on wide swaths of IPs (including the IP I am posting with; for example you can use https://www.maxmind.com/en/locate-my-ip-address ). The IP I am coming from has been present at its current address for, I'd bet, at least 20 years. If you do a WHOIS on the IP it literally shows you the correct street address where the IP is located. Yet MaxMind has no idea other than somewhere in the U.S.

I briefly fought a credential stuffing attack in 2023 with a similar situation: over 10,000 unique IP addresses trying to get through the service, mostly from the "Republic of Seychelles", across five /16s and ten /24s, and the GeoIP databases (for the most part) had no idea where those IPs were coming from. I found a service at the time that would run geo lookups against multiple providers and it was comical to see such radically different responses, thousands of miles apart depending on the IP space/GeoIP provider queried. WHOIS information appeared to be accurate (confirmed with traceroute in a few cases I tested at the time).

I fired off an email to Maxmind(who was/is the Geo provider for the CDN my org uses) at the time with a list of example subnets, asking for clarification of their data vs WHOIS data, but they never responded.

The clock's ticking for MySQL 8.0 as end of life looms

Nate Amsden Silver badge

Migrated to Maria 10

From Percona 5.x (multi-master replication), seems like almost a decade ago, for a bunch of apps (including two internally built e-commerce platforms); zero issues on anything that I can recall. The MariaDB systems have been running Galera clustering (which itself has required some minor changes to some queries at times, due to how clustering works; the same queries run against standalone instances were never an issue). Perhaps the apps are just super simple, not sure.

Was a happy Percona customer for a couple of years at least; then one year they jacked the prices up by something like 8-10X, so we cancelled at that point.

This security hole can crash billions of Chromium browsers, and Google hasn't patched it yet

Nate Amsden Silver badge

Re: That's a bummer

Doesn't seem to affect ESR? I have Firefox ESR 140.3.1 on Linux with processes dating back to Sept 27. I run 4 different isolated Firefox browsers simultaneously on Linux, each running under a different username, for better isolation (I still have 104G/128G of memory available so RAM is not a problem).

I checked a Win10 system that I use less frequently; it is running Firefox ESR 128.13.0. I ran a PowerShell command ((Get-Process firefox).StartTime | % {New-TimeSpan -Start $_}) and it says some of the Firefox processes have been running for 75.4 days. I haven't used that system in a few weeks at least, I think. The browser is sitting there with 3 tabs open on basic websites, nothing fancy (cygwin.com being one of them).
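The rough Linux equivalent, if anyone wants it (a sketch assuming the psutil package is installed; plain old "ps -o lstart,etime -C firefox" does the job too):

import datetime
import psutil

now = datetime.datetime.now()
for p in psutil.process_iter(["name", "create_time"]):
    name = (p.info["name"] or "").lower()
    if "firefox" in name:
        started = datetime.datetime.fromtimestamp(p.info["create_time"])
        print(p.pid, p.info["name"], "up", now - started)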

Microsoft Azure challenges AWS for downtime crown

Nate Amsden Silver badge

Re: Homogeneous Hybrid+Multicloud is the answer

That sounds like a lot of work and cost to try to make things seamless. Just keep it simple: if your critical stuff is on-prem, keep the non-critical stuff there too. It won't cost much more, and you'll probably end up saving a bunch anyway because you won't need all that extra work to wrangle multiple clouds and complex things like OpenStack. Keep in mind you can't oversubscribe in any of the hyperscale IaaS clouds: you pay for what you provision, not what you use. That hurts unless you have a really good handle on provisioning stuff and deprovisioning it when it is not in use (most don't; hell, there are often dedicated roles for people who do nothing more than cost analysis/management for public cloud at some companies). On-prem you can just let stuff sit idle; if CPU/disk isn't used much, that capacity can be used elsewhere, though memory is still used to some extent.

At the end of the day it depends on the situation; no company I have worked at in the last 25 years would benefit from anything other than on-prem. Though there have been PLENTY of people at different companies that WANTED to use public cloud, really for no other reason than they thought it was cool and "on trend" (same sort of folks pushing for Kubernetes, which solves problems that we didn't have). Others WANTED to use public cloud because they thought it would be cheaper, but in the end they were proven wrong (by a hysterical amount of money).

Ubuntu Unity hanging by a thread as wunderkind maintainer gets busy with life

Nate Amsden Silver badge

Unity drove me to Mint/MATE

I moved from Debian to Ubuntu probably around 2006 on my main laptops (for better hardware compatibility). Ubuntu 10.04 LTS was the last version I used on my systems, since the next LTS had Unity. I went to Mint/MATE and haven't looked back. I never actually tried Unity, but really had no interest in a significant UI change (even now). Even now on Mint 22 I have managed to compile one of the most critical bits of my MATE experience, an app called brightside, which hasn't seen an upstream release since late 2004 (http://archive.ubuntu.com/ubuntu/pool/universe/b/brightside/). Ubuntu 16, I think, was the last release that shipped with brightside, which provides edge flipping for virtual desktops in GNOME, something I have used going back to around 1998 (originally with AfterStep). Getting it to build cleanly on Mint 22 required about 17 other packages to be built from source as well (fortunately all built cleanly based on the original source deb files, though I did have to hack a few of them up to build).

Kind of ironic, perhaps, if Unity as a desktop environment is struggling when MATE, which maintains a GNOME v1-style (at least, if not more than style) UI, is a much older UI than Unity. Thought it was interesting that Mint 22 seems to ship with pretty much the same version of MATE that Mint 20 did (there is a newer version of MATE out there too). Not that it matters much to me; the version it has works fine, with a few bugs here and there that I have long since figured out how to work around/live with.

EY exposes 4TB+ SQL database to open internet for who knows how long

Nate Amsden Silver badge

"professional and effective"

Hadn't finished reading the article when I posted my last comment.

"professional and effective" makes me sort of laugh given it took them a week to fix the issue, shouldn't have taken more than an hour to fix it once they were aware of the situation.

Nate Amsden Silver badge

web.config

Would be interesting to know how a DB connection string in a web.config file led to anything. Having the password to the DB doesn't help unless you can connect to the DB. So I'm assuming the attacker had other means of connecting to the DB; maybe the DB was exposed externally as well.

Another article on security stuff here made me think briefly back to the SQL Slammer worm. When I looked it up again, Wikipedia I think said there were about 75,000 exposed SQL databases at the time and the worm took just 10 minutes to hit all of them across the globe. This was before the "cloud" era too.. people have been doing stupid things for a long time...

Signal president Meredith Whittaker says they had no choice but to use AWS, and that's a problem

Nate Amsden Silver badge

Re: Depends on their use case specifically

Memory triggered... I remember one time the tech leadership of that social media company was freaking out, claiming someone was attacking our site, and our site was crashing. It was crashing: they were hitting some "special" API endpoint. I don't recall the details other than it was something like not even 3 requests per second. It was a joke, what a terrible code base (in part, again, due to high turnover, stress, death marches etc).

I also recall, a couple of years after I left, I happened to be in Seattle again visiting folks and got a call early in the morning on my cell phone. Someone was trying to get in touch with someone at the company but could not find contact info. The website had nothing, and I guess they weren't trying very hard because they came to me; apparently my contact info was still on their domain even though I had left the company a long time before. It seemed kind of strange... then he eventually came clean saying "I don't want to alarm you, but I am calling from the FBI". Oh, wow, ok. I never learned the cause of them wanting to contact the company (it was legit as far as I know). The caller was in search of log events for something... I was able to contact the company and get him in touch with them. I sort of joked with the company saying "Hey, your Splunk instance is on the internet, you can just give him a login to it". The app stack did support "user generated content", forums, and other things, so I imagine some users posted some illegal content of some kind and that triggered the response. Nobody told me what the end result was, beyond that they were successfully in contact with the FBI.

Nate Amsden Silver badge

Re: Depends on their use case specifically

Viral moments should be cached by the CDN. I worked for 2 social media startups, in 2006-2008 and 2010-2011 (both in Seattle). The latter one used AWS (that's when I wrote that blog post). Their bill at times was in excess of $500,000/mo (I have always suspected that, due to the relationships, they likely did not pay full dollar value on their bills, but I have no proof either way). Not because they had tons of users, but because things were in such a chaotic state with high turnover. They did have bursts of traffic, but in the grand scheme of things it was not a lot of traffic. I had a plan with a six and a half month ROI for bringing stuff in house. I didn't like the company much, so I spent WAY TOO MUCH time on that presentation and research and stuff (I enjoyed it). (The executive slideshow was only 15 pages, including a few pages with mostly images; the full technical slideshow covering every aspect of things was a full 170 pages.)

Everyone in the company was on board, from my manager to the CTO, the CEO, the software developers, everyone. The board shot the plan down and wanted to re-evaluate in a year or so. I left within a week of that. My manager resigned the day after I left, and a bunch more left soon after. My hiring manager at THAT company hired me at the next company, where I spent over 10 years (that manager left after 2-3 years).

I know AWS' support is better now, but as an example from the time: my (then) new manager had a decade of experience working at Amazon (we had many ex-Amazon employees, including our CTO). Our CEO was the sister of the head of AWS. We were in the same city as AWS. My manager reached out and in a kind way said basically "everyone at my company hates your product, non-stop problems. We must be doing something wrong; can you come on site and talk to us about what is going on? Knowing we spend a lot of $$ in your cloud and we have a lot of relationships with your leadership". Their answer? (Something along the lines of) "Tough shit, that's not our model, you figure it out". Even my manager was floored at the response. At an earlier company I was at, Oracle flew people on site on one occasion (for multiple days) to deal with problems we were having, and we were spending a FRACTION on Oracle DB compared to what my social media company was spending on AWS. My (then) manager later went to work for Oracle cloud for a few years till he retired (he tried to hire me several times); another person on my team at that social media company still works for Oracle cloud as a tech architect of some kind (very smart guy, I didn't know him well).

Nate Amsden Silver badge

Depends on their use case specifically

and their requirements etc. Someone on LinkedIn mentioned to me last week something along the lines of "if you're going to build a CDN you have to use cloud", which is a line of BS; they thought pretty much all CDNs used public cloud (and I have no doubt many/most/all CDNs probably have some aspect of public cloud usage). CDNs are global, real-time mass comms platforms as well, and all of the major ones and probably most/all of the minor ones use their own infrastructure for their edge. Not only for cost reasons but also (more important for them) routing/traffic control reasons (less important for an app like Signal).

You can see pretty easily whether a CDN node (the most important part of a CDN) is using a public cloud or their own stuff by just looking at its IP: if the WHOIS info for the IP reveals a public cloud provider then that is clearly cloud; if it does not, then most likely it is their own infrastructure.
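If you want to script that check rather than do it by hand, a quick-and-dirty sketch (assumes the system whois client is installed; the org/netname field names vary by registry, and the hostname below is just an example):

import socket
import subprocess

def owners(hostname):
    # resolve the endpoint, then ask whois who the address space belongs to
    addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(hostname, 443)})
    for addr in addrs:
        out = subprocess.run(["whois", addr], capture_output=True, text=True).stdout
        org = [l.strip() for l in out.splitlines()
               if l.lower().startswith(("orgname", "org-name", "netname", "descr"))]
        print(addr, "->", "; ".join(org[:3]) or "no org lines found")

owners("cdn.example.com")  # substitute whichever CDN endpoint you're curious about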

I know, for example, when Snapchat went public there was an article here on El Reg (https://www.theregister.com/2017/02/03/snap_files_for_ipo/) where Snapchat said they had committed to spending $400 MILLION PER YEAR with Google for their cloud stuff. Sorry, it's going to be hard to convince me that they can't build their own global network for a lot less than $400M per year... Snapchat has a similar model to Signal I think...? (Never having used Snapchat, though I do use Signal.)

To me, one of the best (on paper) use cases for public cloud is having to go from, say, ~100 CPU cores to 5,000 CPU cores for a max of 2 hours per day (averaged over a month, so say a max of 60 hours per month). Building infrastructure for ~60 hours a month of usage probably doesn't make sense (though I haven't run the numbers specifically). Another really good use case for public cloud is one-off things; I think I have seen at least one article here on El Reg about some group doing some kind of HPC test on cloud where they spun up a few thousand servers or something to do one test, then spun them down (never to be needed again). Obviously such situations are few and far between.

(Again on LinkedIn) there was a clueless tech leader dude from State Farm who wrote a dumb post saying everyone should use cloud; at their scale they want their business not to be focused on computers etc. (typical outsourcing BS). Anyway, I found it kind of ironic that more recently another person posted about how Geico (same industry as State Farm, insurance) spent a decade moving into public cloud, spending $300M/year, only to find out (why did it take a decade to find out?) that it cost them 2.5X more, and now they have reversed course.

But most anything with a real steady-state load, in 95%+ of use cases, doesn't make sense to have on public cloud.

THAT SAID - if you are happy overpaying your public cloud provider and don't care about the costs, you are just a happy customer and that is fine; continue to use them, just don't pretend that you are saving any money.

IaaS is broken by design, something I first wrote about 15 years ago and posted a link to here on El Reg; here is the link again:

http://www.techopsguys.com/2010/10/06/amazon-ec2-not-your-fathers-enterprise-cloud/

Some backstory to that: at the time, the CEO of the company I was at was/is the sister of the head of Amazon cloud (who is now the CEO of Amazon). I actually met with him and his chief scientist back in 2010 to complain about their bad service, and he spent a bunch of time apologizing for it. But that's not the real story. The real story is that even though I sent that link to my boss on that same day, he read it, and he thought it was a well-thought-out, balanced post, someone over at Amazon got into a hissy fit and that came down on my employer (before noon on the same day I posted it), who then gave me legal threats to take the post down (BS reasons). They threatened me again when I left the company (my departure triggered a mass exodus from the tech team; about a dozen came to the next company). I complied and hid the post for a few years; they eventually went out of business and I put it back up online about a decade ago.

I've started to think I will refer to these people (like that State Farm person above) as members of the "Cult of the Cloud" (for whatever reason I came up with that, named similarly to the "Cult of the Dead Cow"). They can be faced with so many different facts and figures and they are so brainwashed that they just can't believe their eyes/ears (similar to "MAGA" folks). The same sort of thing applies to so many folks pushing Kubernetes as well (and "IaC" to a lesser extent). All complicated coping mechanisms to try to tame "the cloud". Make it simpler: don't use it. (I happily admit there are use cases for all of these things; they just don't apply to everyone (don't apply to most, really), and many of these folks think they should apply to everyone.)

The post is still valid today, as the flawed design of IaaS remains unchanged.

I moved my last org out of AWS in early 2012 with a 7-month ROI, followed by a decade of flawless operation.

OpenBSD 7.8 out now, and you're not seeing double, 9front releases 'Release'

Nate Amsden Silver badge

Have never upgraded OpenBSD

Given that I have used it on my firewalls for about 17 years (FreeBSD before that), that seems kind of weird to say, huh... well, I guess I haven't upgraded mostly out of fear of something breaking during the upgrade. So instead, I replace the system entirely. I started doing this with Soekris systems originally, but for about the past 9 years I have used PC Engines APU2s (which are EOL now, I think). I have 3 of them: one is my home firewall, one is a firewall at a co-location facility where I host some personal servers, and the 3rd is a spare.

When I want to "upgrade", I take my spare, and build it to replace my home firewall, same configs, same IPs everything the same. Then I just switch the hardware out, and hope for the best, worst case if there is an issue that I can't fix(so far hasn't happened, but still takes me an hour or three to get everything setup the way I think I want it) then I switch back to the other hardware, minimal downtime and risk. Then I take my recently replaced home firewall and configure it for my co-location firewall and do the same, then the co-location firewall turns into my spare firewall (Fortunately haven't had any of them fail). Works pretty good. Currently still on OpenBSD 7.5, which I think I upgraded to in part for (what I assumed would be) better wireguard support (vs whatever OpenBSD I was using before 6.something), though it's been a year and a half and still haven't touched wireguard yet.. (still using openvpn though on Linux for my site to site vpn as openvpn on openbsd on APU2 is quite slow, still using openvpn on openbsd for remote access vpns as performance is not a concern there)....someday I hope to get around to trying wireguard.

Everything else I run is Debian-based Linux distros. I haven't tried OpenBSD outside of a firewall configuration myself, but I have always liked "pf" (though I often go long periods without touching it so I often forget the syntax). Haven't used Linux as a network firewall probably since 2003ish.

Trump's workforce cuts blamed as America's cyber edge dulls

Nate Amsden Silver badge

I have a solution

Don't make any recommendations; then you don't need to worry about folks not implementing them. Works for everything else Trump is doing, right? If we're not tracking people who are not getting enough food, it seems we don't need to track this other stuff either, as it is less important.

A single DNS race condition brought Amazon's cloud empire to its knees

Nate Amsden Silver badge

Re: is that how people normally do load balancing?

Depending on the situation, yes. Most often DNS is used for load balancing across multiple sites, though more commonly such load balancing is used for geo traffic distribution. Internal load balancing with DNS only is relatively rare, but some things do leverage it. Amazon has a history of layering DNS on top of DNS to mitigate(?) issues with them wanting to rotate IP addresses on some things (most commonly an issue with ELB (ironically? you can find old articles where customers got flooded with traffic for other customers due to DNS cache issues after AWS changed their ELB IPs), and I think RDS). Fortunately I haven't had to deal with AWS myself in over a decade, so stress levels are much lower since I run my own stuff that is super stable.

Nate Amsden Silver badge

So maybe that's why

Some folks have DNS outages: automation messing things up.

All critical DNS entries on my systems have always been manually managed, and the majority of them can go for many years without needing any changes. IPs are statically assigned across the board, and lifetimes of systems are again measured in years. Slower change = fewer chances for things to go wrong. There is some automation so that folks who wish to create new systems can have DNS entries created automatically (I don't leverage this myself but others use it), but there has never been anything that modifies existing DNS entries (worst case you get a duplicate DNS entry if DNS/IPAM don't agree when building a new system, but that only impacts that one new system).
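For anyone curious what "create but never modify" looks like in practice, a rough sketch (not my actual tooling; it assumes dnspython plus a BIND-style nsupdate with a key file, and the server/key/host names are made up):

import subprocess
import dns.resolver  # dnspython

def add_host(fqdn, ip, server="ns1.example.internal"):
    # refuse to touch anything that already resolves; creation only
    try:
        dns.resolver.resolve(fqdn, "A")
        raise SystemExit(f"{fqdn} already exists, not touching it")
    except dns.resolver.NXDOMAIN:
        pass  # nothing there yet, safe to create
    script = f"server {server}\nupdate add {fqdn} 3600 A {ip}\nsend\n"
    subprocess.run(["nsupdate", "-k", "/etc/ipam/update.key"], input=script, text=True, check=True)

add_host("newbox01.example.internal", "10.20.30.40")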

Some people strive to automate everything; with cloud the automation needs are quite a bit higher as there is far more complexity. I prefer simpler systems and generally have less automation. My mantra is more: if your automation saves a bunch of time in the long run then that's fine, it could be good to do it. But if creating, maintaining and testing that automation consumes as much time as doing it manually (or even close to as much), then don't bother.

Dropping Nvidia for Amazon's custom chips helped gene therapy startup Metagenomi cut AI bill 56%

Nate Amsden Silver badge

can't help but laugh

Not at the random 404s el reg is throwing for comment pages this morning...

Not at this article specifically

But at Nvidia kicking and screaming, so sad they couldn't sell more GPUs to China before they were shut out. They'll be shut out of Amazon, Google, MS, and Meta soon enough (at least to a much larger degree than their market share today) in the not too distant future due to their custom chips as well. Perhaps only Oracle would be left not making any custom AI chips (or are they?).

And they try to spin the reason why they should be allowed to sell to China (assuming China wants them, which it appears most do not) in the most pathetic way. As if China was not going to develop their own chips regardless; the timeline was just accelerated.

Nate Amsden Silver badge

makes me laugh

Not at the random 404s that El Reg is throwing on comment pages, which ate my comment a moment ago, so here it is again, shortened.

Not at this article specifically

But at Nvidia being butthurt they were locked out of the China market for GPUs and now being mad that China is making their own chips. The way they spin the story is pathetic, as if China wasn't going to develop their own chips regardless; the timeline just got accelerated.

Nvidia will be locked out of Amazon, Google, MS, and Meta soon enough, with all of them making custom chips as well. That maybe leaves Oracle? (Along with on-prem customers.) At the least they will lose a ton of market share vs what they have now.

Amazon brain drain finally sent AWS down the spout

Nate Amsden Silver badge

Re: The great Godaddy outage of 2012

No. At the time, in 2011, we did use GoDaddy but were hosted in Amazon's cloud (moved out early 2012), and due to cloud wonkiness we sometimes had to point DNS at a different address and wanted a low TTL for that. I don't recall specifically why now.

I had previously used Dyn at an earlier company (again, did not use dynamic DNS, just their enterprise stuff and geo load balancing). So when management asked about the TTL I suggested Dyn, and we stuck with them till the end (eventually migrated to OCI DNS, which was Dyn on the backend, but I left the company not too long after that).

When we moved out of cloud the TTL thing was no longer an issue.

I do miss Dyn though, the real Dyn. Specifically the ability to update DNS records and then later publish them in bulk (reviewing changes during publish), rather than updating one entry at a time in real time. It wasn't cheap though; OCI DNS was about 99.5% cheaper. The company struggled with costs as time went on, but for whatever reason nobody ever questioned the DNS bill or the Internet provider bill.

Current org uses GoDaddy as well but uses DNS Made Easy as its DNS provider (both established before I joined). It works fine, I'm just not fond of the DNS update process compared to Dyn's. (I only used OCI for a tiny bit; I don't remember the interface.)

Nate Amsden Silver badge

That poisonous environment sometimes follows employees after they leave. I encountered a few situations where ex-Amazon folks wanted to overhaul companies in the Seattle area, often with disastrous results. I lived in the area from 2000 till 2011, and one of the reasons for leaving that region was to get away from those folks. By contrast, I never had an issue with ex-MS folks.

The NYT had a good article about their culture about 7 to 9 years ago. It explained a lot to me about co-workers I had back then.

Nate Amsden Silver badge

Re: The great Godaddy outage of 2012

Being a DNS registrar and providing authoritative DNS are quite different things. At the end of the day it's the root and TLD servers that point you to whoever is authoritative, not the registrar. (Of course the registrar is responsible for pushing nameserver changes up to the registry, but that's pretty rare during a domain's life.)

The company I was at in 2012 was a GoDaddy customer on the registrar side, though we used Dyn for DNS hosting, initially because GoDaddy wouldn't allow a TTL of 60s or something. I don't recall any issues with them at any point, but it was a long time ago...
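
You can see the split for yourself with a couple of queries; a quick sketch using the dnspython library (my choice here, nothing GoDaddy or Dyn provided), with example.com standing in for the real domain:

    # Delegation (the NS records the registrar passes up to the registry) vs the
    # answers served by the authoritative DNS host. Requires: pip install dnspython
    import dns.resolver

    for ns in dns.resolver.resolve("example.com", "NS"):
        print("delegated to:", ns.target)

    answer = dns.resolver.resolve("example.com", "A")
    print("A record TTL:", answer.rrset.ttl)  # whatever TTL the DNS host lets you set, e.g. 60s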

Nate Amsden Silver badge

last dns outage I had

Was about 5 years ago due to this bug

https://kb.isc.org/docs/aa-01315

Which caused several brief DNS outages lasting a few seconds apiece, randomly over several months (with all 8 of my recursive resolvers freezing up at the same time, always auto-recovering). It was super difficult to trace the cause. Annoyingly, someone had already reported it to Ubuntu but they did not fix it for some time.
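
The kind of crude probe I wished I'd had running sooner: hit every resolver every few seconds and log when they all stop answering at once. A sketch using dnspython; the resolver addresses and test name are placeholders:

    # Poll each recursive resolver and note when queries time out.
    import time
    import dns.resolver

    RESOLVERS = ["10.0.0.1", "10.0.0.2"]   # placeholders; I had 8 of them
    TEST_NAME = "www.example.com"

    while True:
        failed = []
        for ip in RESOLVERS:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [ip]
            r.lifetime = 2  # seconds before giving up
            try:
                r.resolve(TEST_NAME, "A")
            except Exception:
                failed.append(ip)
        if failed:
            print(time.strftime("%Y-%m-%d %H:%M:%S"), "no answer from:", ", ".join(failed))
        time.sleep(5)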

Before that the last dns outage I can recall was in 2016

https://en.m.wikipedia.org/wiki/DDoS_attacks_on_Dyn

I remember it was one of the few times I was on the data center floor working on things during that attack. People were calling me saying our sites were down when they were not; it took a couple of minutes to determine it was Dyn that was having issues.

Not sure what causes others to have outages related to DNS; it's a pretty simple service.

Chamber of Commerce sues over Trump's $100K H-1B paywall

Nate Amsden Silver badge

Re: POTUS has not exceeded his powers though

In this case he has, as Legal Eagle explains: https://www.youtube.com/watch?v=-kz-Hn1k0Lk I saw it a few weeks ago so I don't recall all of the details, but it is an interesting watch.

If there were a situation where the fees etc. were not explicitly stated in law already, the president might have a way to do what he is doing, but in this particular case the fees are explicitly set, so he does not have the authority to change them without a new law being passed.

'Highly sophisticated' government goons hacked F5, stole source code and undisclosed bug details

Nate Amsden Silver badge

Hard to steal the nginx code

When they release it freely: https://github.com/nginx/nginx though I'm sure there are some closed-source add-on bits that F5 makes. It will certainly be a tough situation in the near future for F5 load balancer security. Citrix (who make NetScaler, probably F5's main competitor features/performance-wise) got hacked as well at some point, though I don't think any source code was claimed to be taken, only business data.

Stargate is nowhere near big enough to make OpenAI's tie-ups with AMD and Nvidia work

Nate Amsden Silver badge

Re: big donut

Absolutely, this is clearly a government conspiracy to distract people away from the real Stargate program in Cheyenne mountain!

Nate Amsden Silver badge

Re: There are clear signs

I was thinking recently that the Oracle/OpenAI $300B deal ("moment") felt sort of like when AOL acquired Time Warner. I didn't recall exactly when that happened, but I just checked: the announcement was January 2000, filed in Feb 2000. The Nasdaq topped out on March 10, 2000.

Microsoft CTO says he wants to swap most AMD and Nvidia GPUs for homemade chips

Nate Amsden Silver badge

meta only one left?

I read or heard something recently comparing Google's AI efforts to Meta's, saying something like Meta had deployed on the order of 1000x more Nvidia GPUs than Google yet their AI wasn't as good as Google's, while failing to mention that Google has its own AI hardware too.

Doing a quick web search seems to indicate that Meta does have their own AI chips, and may start testing the first systems in production later this year, apparently ordering up to 6,000 racks of them.

Nvidia is going to need all the (other) customers they can get; no wonder they are itching so hard to sell to China. Once these hyperscalers get going, they will for sure dramatically scale down purchases of Nvidia hardware.

Pop! System76's 24.04 beta is here – complete with a beta of polarizing COSMIC

Nate Amsden Silver badge

I don't upgrade anymore

Probably not since I first moved from Ubuntu 10.04 LTS to Mint 16 (I think that's what it was at the time). Maybe two months ago I completed a migration from Mint/MATE 20 to 22. I install the OS on a new partition and set everything up again from scratch to keep it clean (even a clean home dir, though I do copy some things over, just not in bulk).

Given I only use LTS releases this doesn't happen often, maybe 4 times in the past 9 years (before 2 months ago, the last time was when I got a new laptop in 2022). I can still boot into Mint 20 if I need anything (fortunately I haven't had to more than a couple of times early on), and of course all of my old data is on the old root volume, which I can mount and access anytime. This time around it was surprisingly smooth. My biggest concern was getting an old GNOME app, Brightside, to work, but once I compiled that in a VM ahead of time (along with about 17 other dependencies) and tested it out, it only took a few hours to install and set up everything else (over the span of a few days). A good chunk of my work stuff runs in a Windows VM that lives on another filesystem, so I didn't have to make any changes to that of course. The VM runs LTSC 1809 so no worries about major changes before 2029.

My Mint 20 system had 2,452 packages and my Mint 22 system has 2,205.
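
(Counted by dumping the package list on each install, with something like dpkg-query -W -f='${binary:Package}\n' redirected to a file, then comparing; a rough Python sketch, with file names of my own choosing:)

    # Compare two dumped package lists and show what changed between installs.
    old = set(open("mint20-packages.txt").read().split())
    new = set(open("mint22-packages.txt").read().split())

    print(len(old), "packages on Mint 20,", len(new), "on Mint 22")
    print("dropped:", sorted(old - new)[:10], "...")
    print("added:  ", sorted(new - old)[:10], "...")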

I don't anticipate having to go through this process again before 2030; my 8-core Xeon laptop with 128G of ECC RAM should be more than enough for the next 5 years if not longer. I used to like to tinker with stuff in Linux in my early days (~1996 till ~2004; I still have my Corel Linux inflatable penguin), but things have been "good enough" for me for probably close to 20 years now, and it's super rare for me to need to go outside the repos to get something.