* Posts by Nate Amsden

2511 publicly visible posts • joined 19 Jun 2007

Microsoft mystery folder fix might need a fix of its own

Nate Amsden

Re: Quality control - yes we’ve heard of it

Doubtful so many computers will go to landfill immediately following Win10's expiration (non-LTSC). Many will just continue to use their OS without new patches for a while. Most software will continue to get updates, probably the most important of which is the browser. Chrome stopped supporting Win7 about 3 years past EOL. I think Firefox supported Win7 through Firefox ESR for at least another year, so 4 years past EOL. Steam stopped supporting Win7 in early 2024, it looks like. I suspect Win10 will have an extended life much like Win7 did. I know some more resourceful folks got extra updates out of Win7 by changing to an embedded license or something (I never tried that).

While not used for anything too serious, I still have at least 4 or 5 systems at home that run Win7, and yes I do do internet things on them occasionally. Most of the time they are off as I don't need them often. I actually bought licenses for Bitdefender earlier this year to install on a couple of Win7 systems for retro gaming (mainly paranoid about "no CD/DVD" cracks of games I own; I also used VirusTotal to scan those).

My main daily driver system of course is Linux, and has been since 1998.

IBM orders US sales to locate near customers or offices

Nate Amsden

Don't forget Red Hat Linux is IBM

Though I haven't used Red Hat since RHEL 2.1

Google, AWS say it's too hard for customers to use Linux to swerve Azure

Nate Amsden

how many

How many Google and Amazon software products/services can you run in/from Azure at all? (NOTE: that doesn't mean making remote calls to services, it means running their software IN Azure.) I wouldn't count Amazon's Linux distribution (or other re-badged software that was otherwise open source under another name) as there are many free sources of Linux systems. I'm talking about software written from the ground up by Google/Amazon.

I don't have skin in this game; I don't like any of the IaaS clouds. I'm all on prem (even for my own personal DNS/web/email/file hosting I run my own systems in a colo).

I suspect other big software companies that provide cloud services are similar; I'm guessing Oracle provides added discounts for running Oracle products in Oracle cloud (or otherwise hosted by Oracle). Probably others too (Salesforce? SAP? IBM?).

AWS claims 50% of Azure workloads would jump ship if licensing costs allowed

Nate Amsden

I looked up Azure Hybrid Benefit (using Bing..) and the first hit was from MS, which says

"Azure Hybrid Benefit is an Azure offer that helps organizations reduce expenses during their migration to the cloud. By providing Azure discounts on Windows and SQL Server licenses, and Linux subscriptions, it supports infrastructure modernization and a cloud-first strategy."

(key point is providing discounts, which is what my assumption was originally)

They also mention discounts for Red Hat Linux, SuSE Linux, and Nutanix, none of which of course are MS products.

I dug up the SQL Server 2019 licensing doc (the version my org uses), and on page 30 it seems to indicate what I expected; it specifically mentions non-Azure clouds.

"In the case where you are using AHB to license your primary database running on shared hardware in the non-Azure cloud, you may run the two passive SQL Server instances (one for HA and one for DR) in a separate OSE running in the cloud on shared hardware to support failover events."

They have a chart showing 36 total cores, 12 "active" and 24 standby, and only 12 cores need to be licensed. I don't know what "AHB" is; that is the only mention of that term in the document. Though the version of the doc I have is copyright 2019, I found a newer version copyright 2022 (page 26 on the newer doc) which changes the language for this section to be Azure cloud, but the below section remains the same.

Then below that they have another scenario with 24 cores in use, 12 active and 12 in a non-Azure cloud. You only have to pay for 12 core licenses.

"Primary server licenses covered with SA include support for one Disaster Recovery secondary server only (outside Azure), and any additional secondary Disaster Recovery servers must be licensed for SQL Server. Note: The rights to run a passive instance of SQL Server for failover support are not transferable to other licensed servers for purposes of providing multiple passive secondary servers to a single primary server" (which seems reasonable to me)

Pretty confusing I guess in any case... keep it simple & on prem is my strategy (though I deal with Windows maybe 10% of the time), which seems to be the most cost effective/most reliable for me. Moved out of cloud in 2012 and have never looked back (though that hasn't stopped others in the orgs from trying, but then they give up when they see how much more expensive it is).

Oracle is certainly tricky to deal with too; I haven't had to deal with them in a while. I do remember educating the sales/audit team back in 2007 about their own CPU licensing (specifically, Standard edition had no limit on cores per socket at the time, which they didn't know) when the org I was at was undergoing another Oracle audit.

Nate Amsden

Sorry for the stupid question, but what is the cost difference? My assumption was that if, for example, you have SQL Server licensed on prem and have a Software Assurance contract, you could put that anywhere for the same cost (provided you maintain the Software Assurance).

I have to guess that perhaps MS gives discounted prices for situations where the customer is hosting in their cloud only (vs the model in the first paragraph)?

If so then MS is taking a hit on their income to attract those customers. Amazon and Google at least have tons of cash and could do the same (eat the cost of the difference), if they want.

New SSL/TLS certs to each live no longer than 47 days by 2029

Nate Amsden

Re: Yeah but most of that is trivial.

Perhaps for larger scale orgs. I don't do Windows stuff, mainly Linux. No company I've ever worked for had Windows staff manage SSL anything (they were always fine with all self-signed device certs/default certs, or in the case of anything using LDAP they just used plaintext LDAP not LDAPS); they've never had CAs. Maybe it's easy to set that stuff up, I'm not sure.

Only reason I did it like that is because if I didn't do it, it wouldn't get done. I'm not about to touch anything with domain policies/GPO or anything.

Nate Amsden

Re: Oh, fuck off

The first SaaS company I worked at two decades ago used tons of certs, 99% server certs (I remember I had a special portal with Verisign so I could issue my own certs), and even some client certs. Cert problems were almost constant.

One time I recall reporting a cert issue; I found the source and was going to fix it, but one of the higher tier support people was confused because he didn't see any issues. So I went to his desk and walked him through the workflow.

His browser popped up the expired cert warning and he clicked through it without even realizing it. I said THAT'S THE PROBLEM THERE. Took him a few seconds to realize what he had done. It was pretty funny; you get so used to errors you just click through without thinking about it.

Nate Amsden

Re: Yeah but most of that is trivial.

Windows is a pain to do manually at least. I haven't done it many times, but take for example installing a cert for IIS, or installing a cert on Active Directory for LDAPS: using openssl to convert the cert/key/CA to a PKCS#12 keystore, then importing it in the right place and switching over to it. It can be quite a process, and scary for first timers, or if you haven't done it in a while.
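Roughly the openssl step I mean, as a sketch; the file names are just placeholders for whatever your key/cert/CA chain happen to be called:

# bundle the private key, server cert and CA chain into a PKCS#12 keystore
# (it prompts for an export password, which Windows asks for again on import)
openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -out server.pfx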

With IIS I recall when I install a new cert that has the same name, it installs fine, but then you go to the IIS config to tell it which cert you want to use, and the drop-down menu has duplicate entries, one for each cert. So if you have 5 certs in there with the same common name (even if say 3 are expired), it will show 5 options and you sort of have to guess which one to use. Maybe the newest is on top or bottom, I am not sure. It would be nice if it showed the cert dates, or at least whether or not a cert was expired.
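One way to tell them apart, assuming the certs landed in the usual LocalMachine\My store, is to list them with PowerShell and match the expiry date/thumbprint to the one you just imported:

# list certs in the machine's Personal store with their expiry dates
Get-ChildItem Cert:\LocalMachine\My | Sort-Object NotAfter | Format-Table Subject, NotAfter, Thumbprint -AutoSize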

With LDAPS certs you have to put the cert in the special place and LDAPS picks it up automatically and starts using it. But it's still a funky process.

https://learn.microsoft.com/en-us/troubleshoot/windows-server/active-directory/enable-ldap-over-ssl-3rd-certification-authority

I have since modified the process to not create the cert request from the server itself and just do everything end-to-end on a Linux-based CA, then import the resulting keystore at the end; much easier/faster.

Nate Amsden

Re: hoping this is only external certs

You certainly don't have to use an internal CA. If I left and nobody did anything, worst case is the certs expire and they either issue new certs through the CA, or issue certs another way, or go back to device self-signed certs, or just leave them expired. Most things with the certs on them don't care if the certs expire (internal websites, management interfaces etc), and the user can just choose to ignore the expired cert and continue using the service.

I've been using internal CAs since around 2007 just to make my experience cleaner, so I'm not prompted by every self-signed internal device in different browsers over time. One of my managers pointed out it's also a nice way to improve security, since you know the certs are trusted, and that's true, though security wasn't the reason for doing it.

Nate Amsden

Re: Why not...

Such a check would be the least of the worries; the bigger worry is the cert install process encountering an unexpected error and leaving the system in a broken state. For basic web servers installing such certs is not too complicated; it becomes more complicated when you have infrastructure services relying on certs. Some systems offer API interfaces to manage that stuff (or a CLI), but that is more complexity, and I'd wager in some cases such interfaces aren't well tested because they aren't frequently used (yet anyway). Many embedded systems have to be rebooted entirely for the new cert to take effect.

Nate Amsden

Re: Why not...

Certs have nothing to do with TLS/SSL versions or the type of encryption used for the connection. The only algorithm tied to the cert is the one it was signed with. There should be nothing stopping anyone from running a cert issued in 2025 on an SSLv3 system, for example (other than some clients may not like it).

The cert is used for identification, nothing more.
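You can see the split with openssl (the hostname below is just an example): the signature algorithm and validity dates live in the cert, while the protocol version is whatever the two endpoints negotiate per connection.

# what the cert itself carries
openssl x509 -in cert.pem -noout -text | grep -E 'Signature Algorithm|Not (Before|After)'

# what the endpoints negotiated for this particular connection
openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'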

Nate Amsden

hoping this is only external certs

I've been self CA-signing internal certs for 800 days for at least 5 years (prior to that it was for 10 years at a time), and have never had a complaint that a browser didn't like it (assuming they installed the CA to trust). Probably 95% of my certs are like that, only a few externally signed. No fancy automation (too many different kinds of systems and processes), other than decent alerting to know when they expire.
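For what it's worth the signing step is nothing exotic, basically just openssl with the validity set to 800 days; the file names below are placeholders, and this assumes the internal CA key/cert already exist:

# sign a CSR with the internal CA, valid for roughly 800 days
# (modern browsers also want a subjectAltName, which needs an -extfile)
openssl x509 -req -in server.csr -CA internal-ca.crt -CAkey internal-ca.key -CAcreateserial -days 800 -sha256 -out server.crt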

Overall seems pretty pointless to me. History shows that attackers maintain access to networks and systems for extended periods (months) on average so they can just grab the newer certs as they get installed.

Not sure about the quantum angle; that makes no sense to me. The cert is about identification, not about securing the data transmission.

Nvidia joins made-in-America party, hopes to flog $500B in homegrown AI supers by 2029

Nate Amsden

guessing they don't have much of a plan

yet on how to get those rare earth metals that China just stopped allowing exports of; maybe they are just hoping that situation will change in time...

Google Cloud’s so-called uninterruptible power supplies caused a six-hour interruption

Nate Amsden

Re: Unbelievable

I'm assuming this is one of Google's own cloud data centers and not a 3rd party colo they are using... but cloud data centers are generally built to be less reliable to save costs, and you get the redundancy by having systems in multiple zones/regions rather than higher availability in a single facility. The article makes it sound like just a single zone went down in the affected region?

I'm not sure how a battery failure could fail to take things down if there was no other form of power at the time. Utility power was gone (perhaps the facility is connected to only a single source of power), and it seems likely they just have a single bank of UPSs that failed. The reason could be anything, including not replacing the batteries when they expired.

Better engineered facilities would fare much better but also cost quite a bit more. Too many people assume that just because it's cloud, everything from the ground up is designed/deployed to be as robust as more traditional facilities/systems.

The facility where I host my personal equipment is old, I think from the 90s: single power feed, single UPS system (as far as I know), no redundant power anywhere in the facility. They have had quite a few power outages over the past decade (more than my home), though none in the last 3 years. It's good enough and pretty cheap, so not a big deal to me...

The facility where I've hosted the gear for the orgs I have worked for over the past decade-plus is by contrast far better, with N+1 everything; they do tons of regular testing and have at least two power feeds to the facility. I can only recall one occasion where they went on generator power. I have seen several notifications around UPS failures here and there, though since everything is N+1, redundancy was never impacted. Never so much as a blip in the power feeds distributed to the racks themselves.

There is a facility in Europe where my previous org hosted stuff; I hated that place so much. At one point they needed to make some change to their power systems, and to do that they had to take half of their power offline for several hours, then a few days later did the same process on the other half. Obviously no N+1 there, as we did lose power on each of our circuit pairs during the maintenance. Though there was no real impact to us as everything was redundant (we did lose a couple of single power supply devices during the outage but there were other units that took over automatically). So many problems with that facility and their staff & policies; I was so happy to move out.

Back in the mid 2000s I was hired at a company that had stuff hosted in a facility that suffered the most outages of any I've ever seen, and the only full facility power outages; bad power system design. Causes included dead UPS batteries that were not replaced, so the UPSs failed when the power cut, as well as a customer at one point intentionally pressing the "emergency power off" button to kill power to the whole facility because they were curious. After that 2nd event all customers had to go through "training" about that button and sign off on getting that training. There were probably 3 power outages in less than 1 year at that facility, and by the time the 3rd one happened I was ready to move out, just needed VP approvals, and the approval came fast when that 3rd outage hit. The facility suffered an electrical fire a few years later and was completely down for about 40ish hours, till they got generator trucks on site to power back up; it took them probably 6 months to repair the damage to the power system. Bad design... though I recall that facility being highly touted as a great place to be during the dot com era.

By contrast I recall reading a news article around that same time about another facility that was designed properly; it had a similar electrical fire in their power system, and it had zero impact on customers. Part of their power system went down but due to the proper redundancy everything stayed online. I recall a comment where they said oftentimes a fire department would require a full shutdown to safely fight the fire, but they were able to demonstrate they had isolated that part of the system so they were not required to power down.

On that note I'd never host in a facility that uses flywheel UPSs, and/or that doesn't have tech staff on site 24/7 to handle basic issues during a power outage (like the automatic switch to generators not working). Flywheel UPSs don't give enough time (usually less than 1 minute) for a human to respond. I would like to see at least 10-15 mins of battery runtime capacity (hopefully you only need less than a minute for the generators to start).

VMware revives its free ESXi hypervisor in an utterly obscure way

Nate Amsden

Same for me too: 6.7U2 on a Dell R630, and I also have a Dell R640 with the same (though I think it can run newer) to keep it consistent. Have them in a colo, no power outages in 1366 days (no redundant power at the facility my personal gear is at). Also have an older Intel NUC NUC8i7BEH there running 6.7U2, which runs my only Windows VMs, also up for 1366 days. Been impressed with how stable the NUC has been; the only issue is the SNMP service seems to hang a few times a year, then LibreNMS starts complaining, till I restart the service.
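(The restart is just bouncing the SNMP agent from the ESXi shell, something like this, assuming the stock service script location:)

# restart the ESXi SNMP agent when it wedges
/etc/init.d/snmpd restart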

Since I saw the news a few days ago about ESXi being free again, I was wondering how software updates may be handled, if any updates are provided at all, given they are behind a support contract wall at this point. Maybe they haven't decided on that bit yet.

VMware sues Siemens for allegedly using unlicensed software

Nate Amsden

Re: On the 72 core minimum..

It also wouldn't surprise me if the 72 core minimum turns out to be per socket. I wouldn't be too surprised to see AMD and/or Intel release 72-core CPUs (since there would be a huge demand for that to maximize license utilization, though Broadcom would have to demonstrate that they won't change the rule to 73 cores a year later). I think similar things have been done in the past, at least regarding Oracle DB licensing, on one or more occasions.

Nate Amsden

Re: Ballsy

Could say the same for Microsoft per my comment above. SQL Server Enterprise is not cheap to license, but if you have the software (available from archive.org), there are no keys to install, no activation, pure honor system. I think that is still how most software operates. Oracle too (last I checked), one of the most expensive off-the-shelf software products available (at one point it cost ~$40,000 per CPU for base Enterprise edition without any addons, don't know about now): free to download, no license keys. Hell, even Oracle Java, which you must pay for in a commercial environment, has no license keys, no activation.

99% of old shareware didn't do any validation other than checking that the key was in a valid form. Keygens were released for countless programs over the years.

VMware was the same: you bought a license, used their portal to issue the key(s) and then used the keys. The software validated the keys were in the right form, that's it. Obviously with old shareware etc. you didn't have to go to a license portal to get the keys.

Nate Amsden

Re: On the 72 core minimum..

They may mean a 72 core minimum per socket (or per server)? At one point I think VMware changed their scheme so 1 socket was considered 32 cores, so if you had a 64 core CPU you needed 2 licenses for that CPU. And if you had a 48 core CPU, you needed... 2 x 32 core licenses (I think...), and I don't think those licenses could span more than 1 socket.

Otherwise, purchasing in 72 core blocks doesn't seem like it would be too huge of a deal in the grand scheme of things, if that license can span sockets and/or servers.

Nate Amsden

Re: Maybe a lesson to learn here

Companies have been exploring alternatives for over a year now... I made a comment last year regarding AT&T, who was at one point a very big supporter of Openstack, then one year they dropped off the list of supporters and apparently moved to VMware (as the news article was about AT&T complaining about VMware support fee costs).

AT&T had invested upwards of a decade of time in Openstack, and probably only they know if they continued to use it even after deploying (more) VMware. But given they dropped off the contributor list, that seemed like a pretty significant signal they weren't too focused on Openstack anymore.

Speaking of which, I just checked their list again, and another super huge company, Verizon, dropped off their list as well, though they were just a corporate sponsor of Openstack (back in 2017 at least), vs AT&T, which was the highest tier "Platinum Member".

One thing Broadcom has done for sure is infuse a ton of motivation and money (whether customer money and/or investments into the software) into the VMware alternatives.

Nate Amsden

Re: This sounds like a bit of a mess

Historically at least, VMware never had any kind of check to ensure anything was in use other than that you had a valid key to install (at least for ESXi+vCenter, and no phone-home stuff either). Perhaps Oracle-like in a way, as it makes it easier to deploy more than what you have licensed and then they can come audit you... I'm sure you can easily buy VMware keys on eBay or other places that would work fine (at least pre-Broadcom); since there was no activation process (pre-Broadcom anyway) there was no way for them to block keys from being reused. Just did a search for esxi keygen and apparently they put this out in the open: https://gist.github.com/DnsKzn/60e190caa4d630afe77c0bc85a5a307b lots of vmware keys.

I'd like to think that the majority of software out there is still like that (preferred to me): no phone home, honor system for licensing. Microsoft is that way for at least Windows Server + SQL. If I have X number of Windows Server licenses, as far as I can tell nothing stops me from re-using them over and over (eventually they may come try to audit). Also MS licenses stuff differently for VMs, and again it is the honor system; it has no way to know if you are deploying Windows VMs on more underlying physical servers or physical CPUs/cores than what you purchased, as technically in some (most?) cases you must license all of the underlying CPUs in your VM environment that run Windows VMs, regardless of whether you are using more than a single core on a single server.

Last I checked there were no license keys needed for SQL Server. Unsure if MS provides ISO images for SQL if you don't have a license, but I just checked the Internet Archive and there is a SQL Server 2022 Standard edition available for download. You can also download SQL Server Developer edition, which is free (I don't think there are any technical differences between that and Enterprise??).

VMware splats guest-to-hypervisor escape bugs already exploited in wild

Nate Amsden

can disable VMCI ?

Says there is no workaround, but I would be curious to understand why disabling VMCI for the VMs wouldn't prevent the VMs from exploiting something that was disabled.

Set this in the vmx file:

vmci0.present = "FALSE"

and restart the VM (I unregistered the VM just to be sure the new vmx got loaded)
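(An alternative to the unregister/re-register dance, if I remember the ESXi shell correctly, is to have the host re-read the vmx directly with the VM powered off; the <vmid> below is whatever ID getallvms reports:)

# find the VM's ID, then reload its configuration file
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/reload <vmid>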

I have never done anything with VMCI before so I ran some searches and came across this

https://knowledge.broadcom.com/external/article/316396/windows-virtual-machine-fail-to-boot-wit.html

which mentions

"Disable vmci by changing the value of vmci0.present to vmci0.present = "FALSE" in the virtual machine vmx file."

I tried it on one of my VMs at home (ESXi 6.7, so no patches) and can see no ill effects; I don't know if anything I've ever done in VMware in the past has ever used VMCI.

I saw another reference to someone mentioning an ESXi 5.x hardening guide suggesting specifically to disable VMCI as well.

I do see that apparently it can/could be used to accelerate traffic between VMs?

SonicWall firewalls now under attack: Patch ASAP or risk intrusion via your SSL VPN

Nate Amsden

Re: The easy way out

If someone has that level of access to AD, VPN logins are pretty low on the concern list at that point to me.

Google: How to make any AMD Zen CPU always generate 4 as a random number

Nate Amsden

if trust is a true issue

Don't use someone else's computer to run your VMs on regardless. There will always be security issues around that. Most people probably don't care. But I would never put much stock in such features myself.

I for one am not a fan of all this newfangled signing and encryption stuff in the name of security that takes control away from the person who owns or operates the equipment.

Even Windows 10 cannot escape the new Outlook

Nate Amsden

kind of confused

I'm more of a Linux person, though I do use Windows 10 LTSC 1809 in a VM as my main Windows desktop, mainly for work stuff.

I looked it up again to confirm: MS says they will support classic Outlook until at least 2029 (I assume for users of LTSC 1809, which has support till 2029 as well?). Users can continue to use/install classic Outlook until the "cutover" phase, which as of Nov 2024 has not yet been specified, and when it is, there will be at least 12 months before it takes effect. Maybe the situation is different for personal accounts (vs accounts associated with a company that has Office 365).

https://techcommunity.microsoft.com/blog/outlook/new-outlook-for-windows-a-guide-to-product-availability/4078895

Kind of ironic that I regularly get warnings in MS Teams saying classic Teams is not supported, so I should upgrade, but the newer Teams is not compatible with LTSC 1809 as far as I know, and the "fine print" (last I checked) said MS will continue to support classic Teams on LTSC platforms until their EOL.

I do see a "try the new Outlook" option in the top right of my Outlook on Win10. I have not tried it; I'm kind of assuming it would not work (like newer Teams won't work) due to the newer system requirements.

Most of my Outlook usage is OWA from my Linux system, but sometimes I do use classic Outlook to do some things.

Zyxel firewalls borked by buggy update, on-site access required for fix

Nate Amsden

Re: Yet another reason not to enable automatic updates

This same kind of thing happened to SonicWall a few years ago, and I was pretty shocked to get hit by it on firewalls that didn't even have that feature licensed... I think the impact was limited to Gen7 firewalls, of which I only had 1 pair at the time; all of the important stuff was/is on Gen6.

SonicWall flags critical bug likely exploited as zero-day, rolls out hotfix

Nate Amsden

Re: SonicWall is still a thing?

I've been using SonicWall as basically L4 firewalls and site to site VPN since early 2012 without much issue. The current firewalls go EOL next year; that would make about 8 years of service for those Gen 6 units. I think their SSL VPN on the firewalls is no good (though usable for the most basic use cases). I remember evaluating their SMA SSL appliances many years ago and ruled them out right away as they lacked the ability to do full Duo prompt integration (SonicWall firewalls can't either). Their early Gen7 stuff was pretty buggy, though it seems better now. Gen5 was OK for me as well (my first exposure to SonicWall).

For me SonicWall was initially only going to be used as a site to site VPN, and speaking of marketing, the VAR I was working with at the time (knowing my use case of IPSec VPN ONLY) was trying to push Palo Alto firewalls to me at probably 4-6x the cost. PAN is a fine product but super overkill for only site to site VPN (and the suggested model had a fraction of the IPSec throughput of the SonicWall). I have since expanded the use cases to layer 4 edge firewalls as well and they work fine in that regard, very few issues. I haven't touched their layer 7 stuff, assuming there are more bugs there.

As for long TCP timeouts, it all depends on how long you want... I don't think I've ever needed to set something for longer than an hour or two. I did work at one place where the network engineer set their Cisco ASAs to have ~1 week timeouts, then struggled with semi-quarterly firewall outages where they had to power cycle both firewalls to get them working again. Neither they nor Cisco support were smart enough to do something as simple as check the state table, then realize hey, those 500k entries are the limit of the hardware, then check the timeouts... After I joined and saw that happen I had him fix it, and started monitoring it; states never went above about ~2000 after that, and nobody complained that I recall. The original reason for the 1 week timeouts, he said, was people complaining their sessions were being killed.
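(On an ASA the check is roughly this, if my memory of the CLI is right: compare the live connection count against the platform's limit, then look at what the idle timeouts are set to:)

! how many connections are currently in the state table
show conn count
! what the configured idle timeouts are (the "timeout conn" line is the TCP one)
show running-config timeout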

Miscreants 'mass exploited' Fortinet firewalls, 'highly probable' zero-day used

Nate Amsden

Re: unpatched zero-day?????

Seems you are?

From what I can see their advisory was posted today, yet the article talks about systems being compromised last month.

It is interesting to note that apparently only their 7.0 build is affected; they seem to have several other 7.x branches that are not.

https://fortiguard.fortinet.com/psirt/FG-IR-24-535

I recall reading comments on several occasions (a few years back at least) where seemingly experienced network engineers would say of Fortinet "find a stable firmware version and don't upgrade" on a fairly regular basis. The same folks often touted Fortinet as a good, lower-cost solution than Palo Alto (which they ranked #1), but with the big caveat that the software wasn't that great (unless you happened to land on a good build of it).

Nate Amsden

Re: switch out modems

Sounds nice, but if they could make that secure then they would have just used that secure VPN for regular stuff instead of the less secure stuff. Really it seems that if you want a "secure" VPN from a vulnerability standpoint, the best solution is to avoid any SSL-based/browser-based VPNs and use another protocol like IPSec. I've never noticed any credential stuffing attacks against my SonicWall IPSec endpoints, but of course the SSL portions get hammered. My most "secure" SonicWall firewalls have never had SSL VPN enabled, and management access, while exposed to the internet, is limited to just a couple of allowed subnets, so I consider that secure. I noticed another person comment that there have been some minor vulnerabilities against Fortinet IPSec, though only denial of service.

Though IPSec VPNs tend to have a lot less functionality vs the pretty SSL VPNs. Can't speak for Fortinet, never used it, but the SonicWall SSL VPN is crap from a few different perspectives (though for me SonicWall makes a solid IPSec site to site system as well as a layer 4 firewall; I haven't touched their layer 7). Citrix and Ivanti both have nice SSL VPNs (at least the basic VPN functionality and access control), though like everyone else they have had their share of security exploits over the years.

I haven't seen a modem used for remote access myself since 2002, and even then I recall turning that 3COM modem thing we had off as we migrated users to some Cisco VPN concentrator appliances.

Nominet probes network intrusion linked to Ivanti zero-day exploit

Nate Amsden

Re: Starter for Ten

There have been stats passed around for years that suggest a large number of intrusions take upwards of 6 months to detect on average. Unless the attackers resort to destructive things right away following intrusion.

Zero-day exploits plague Ivanti Connect Secure appliances for second year running

Nate Amsden

9.1 code base systems NOT AFFECTED by remote code execution bug

Dealt with this yesterday...

Had a pair of recently EOL'd Ivanti/Pulse Secure systems still running the 9.1 code base. I knew they were going end of life, but still assumed they would be supported for critical fixes till the true end of life, which was slated for some time later this year (haven't cared about new features for the past 5 years). Then I dug in my email box and saw that they accelerated the end of life for the software to the end of last year, and they would not release patches (yet they re-iterated end of life is Dec 23, 2025 - this is after they accelerated end of life of the hardware to summer of LAST year). So a bit of panic set in... I was already planning on replacement systems being deployed by the end of the month.

Of course remote code execution is bad, so I had to accelerate plans to replace those systems with my newer Ivanti systems, which I wasn't ready to do yet due to additional work to get the Duo Universal Prompt working; the method I had been using with Duo for the past decade is EOL as well... but I got a workaround in place (Duo without the universal prompt). Why can't I keep using the regular Duo prompt, it works fine...

Then today I noticed in the fine print of the security advisory

----

What versions of Connect Secure do these vulnerabilities impact?

The versions of code that each CVE impacts is reflected in the chart above. The 9.x line of code reached End of Life on December 31, 2024, and will not be receiving a patch for CVE-2025-0283. It is important for customers to know that we are not aware of any exploitation of CVE-2025-0283 in the wild and CVE-2025-0282 does not impact the 9.x line of code.

---

So the super scary remote code execution didn't affect my 9.1 systems after all. The other bug isn't great either, but of course there is a huge difference between the two in severity. I didn't have to rush, but at least it's out of the way, and now I can focus on getting the better Duo support enabled next (https://help.duo.com/s/article/8019?language=en_US).

Sort of ironic to me that Ivanti accelerated the EOL of 9.1 because they said the CentOS that it used was no longer getting updates so they couldn't make it secure any more, and yet the next super critical security bug is only in their NEW system, not the OLD one that they can't update.

Even Netflix struggles to identify and understand the cost of its AWS estate

Nate Amsden

the lengths they go through

To justify continuing to use Amazon is just sad. Goes to show how broken the mindset is for public cloud, especially when that cloud is a major competitor.

Krispy Kreme Doughnut Corporation admits to hole in security

Nate Amsden

Re: "if you're a regular customer, check any credit cards"

Lock your credit reports.

I did that the day Equifax was hacked, which looks like it was in 2017. It's not perfect but goes a long way towards making yourself a less easy target. I've had to unlock my credit reports on just a few occasions in the years since, and I believe all 3 major credit reporting companies' systems are set up so you can unlock for a short period of time and then auto-lock again. There is no cost for this service, but in trying to find the service (for free anyway) you'll probably have to navigate past their advertising for their subscription services.

Nate Amsden

Re: Krispy Kreme still exists?

Guessing this is a joke? (On mobile, can't see icons.) Have seen news of many brands in trouble but not this one. Did a search and only see mentions of closing about a dozen stores each in 2022 and, it seems, 2024 too; a rounding error compared to the 1,400 Wikipedia says they have.

The only Krispy Kreme near me closed maybe 5 years ago. Though multiple supermarkets near me carry their donuts, which I think they claim are fresh daily. I'm not picky and like most any standard flavor donut including Krispy Kreme, though I have never been fond of their classic "airy" texture. Flavor is fine though. My favorite donuts are old fashioned and apple fritter. Now that I said that I'm probably going to get some this morning.

Now if you said you're surprised to hear Quiznos is still in business that'd make a lot of sense by contrast.

Mr Intel leaving Intel is not a great sign... for Intel

Nate Amsden

Random thought...

Maybe if Intel hadn't made Itanium and driven the Alpha processors out of business, thus driving many(?) Alpha engineers to AMD, they wouldn't have come up with the amd64 instructions? Seems there are a lot of similar technologies in the AMD design that came from Alpha. I'm not an expert, maybe it's just coincidence...

1,000s of Palo Alto Networks firewalls hijacked as miscreants exploit critical hole

Nate Amsden

I went through one upgrade on an HA pair of Palo Alto 3000 series firewalls that I briefly inherited at my last company, I think probably in 2019. This had to go through two version jumps.

I had zero PAN experience so I opened a support ticket to verify the process. They were very helpful. They pointed me to their best practice guide, which ended up being completely WRONG. They had a support person on the line in case I had a problem, and they said their guide was right. That person went off shift, saying everything was going OK, so if I needed help I should just call back in.

I followed the process till I got to the point where I had to fail over and the firewalls refused. Called support, waited for an hour (thought it'd be faster). Told them what was going on and at that point they admitted the best practice guide was wrong. The guide specifically said to fully upgrade the first firewall then fail over. I didn't think that sounded right, so before I did that I asked them and they said the guide was right. OK. Well, I ended up having to yank the HA cables out and force things to fail over, then upgrade the other unit and do the same again. What a mess. Fortunately the outages weren't an issue and I was physically on site.

I was kind and pushed Palo Alto to fix their guide; it took them about a year to do so. Even then it really wasn't to my satisfaction, they could have made it more clear, but at that point I stopped caring.

Fortunately nothing else broke that I recall. Their firewall config was pretty basic, no SSL interception, but they were using BGP. The impression I got was the IT network engineer hadn't touched them since they were installed years earlier by a 3rd party. The same person also did not apply updates to their ASA firewalls, which had over 100 vulnerabilities by the time I took over, and they were EOL and Cisco refused to let me buy a support contract to simply download the latest code. I have read some stories about PAN upgrades breaking a bunch of stuff, so it's not uncommon to not upgrade frequently, especially if the customer relies on 3rd party hourly support for day to day maintenance.

Most of my commercial firewall experience over the past 15 years is SonicWall. Just layer 4 and site to site VPN. Never touched their layer 7 stuff, and their end user SSL VPN sucks (but it is cheap; I wouldn't use it though). Never had a major issue with upgrades, but again the config is simple. Despite that I am somewhat terrified to upgrade my external firewalls at my main remote colo because if something goes wrong I lose all connectivity to the site. If we were a significantly larger company it would be easier to justify a back door setup.

Originally that site had a stack of 4 switches that ran everything. I did the stack just to keep it simple for others, and it was small. I never had to roll back a switch OS update in the prior 7 years. I needed to upgrade but wasn't going to do it from 2000 miles away. So I went on site and decided to do it from my hotel room about 20 min from the data center. It just so happened that the upgrade broke compatibility with the vendor SFPs that I was using for the internet uplinks. So when I upgraded I lost all connectivity and had to drive on site to figure it out. Ended up rolling back, and the vendor later acknowledged the compatibility issue. I later changed the architecture with the switches so the setup was more robust, at the cost of complexity. Still simpler than other vendors though.

Moral: perhaps don't take updates lightly, especially on systems that provide core connectivity.

HPE goes Cray for Nvidia's Blackwell GPUs, crams 224 into a single cabinet

Nate Amsden

just go direct

With a system like this who needs a data center, just drop one or two of these racks at your local nuclear power plant and get a network connection to it ...

Windows 10 given an extra year of supported life, for $30

Nate Amsden

nice to see

Hopefully they offer LTSC to consumers in the future as well (though I'm not holding my breath of course). I run LTSC (which was a super annoying process for the company to buy, as their normal reseller didn't know what it was and couldn't figure it out) as my main Win 10 VM on my Linux system; build 1809 is supported till Jan 2029. Super strange to me that LTSC 21H2 (the last LTSC I guess) is only supported till 2027 by contrast (except for IoT, which goes to 2032; why not just make both Enterprise and IoT 2032, they have to make the same fixes anyway, stupid).

https://en.wikipedia.org/wiki/Windows_10_version_history

MS Teams always bugs me saying that classic Teams is unsupported and I should upgrade. Though last I checked I could not upgrade as the newer Teams is not compatible; I also read that classic Teams will continue to be supported on LTSC Windows regardless (I assume till LTSC EOL, but not sure).

It's about time Intel, AMD dropped x86 games and turned to the real threat

Nate Amsden

Re: "amid growing adoption of competing architectures"

I think the future is ARM for those that are vertically integrated; for the rest, not so much, at least in the server space. Very few can afford to be vertically integrated to that level, perhaps at this point mostly companies with trillion dollar market caps. I wouldn't be surprised if in a decade or so RISC-V takes over from ARM as these vertically integrated companies go to take another layer out of the supply chain.

Sysadmins rage over Apple’s ‘nightmarish’ SSL/TLS cert lifespan cuts plot

Nate Amsden

Re: Evidence?

Along those lines I don't recall ever seeing news reports of SSL cert hijacking. I have seen reports of CAs improperly issuing certs, but of course that is an unrelated issue.

I don't doubt SSL cert hijacking exists as an issue, but it doesn't seem to be prevalent to the point where reducing time to expiry would be a useful tool.

I'm most surprised that Apple of all companies seems to be behind it, as they were behind the last round of reductions too.

And I guess everyone gave up on revocation; a nice idea in theory but I guess it almost never gets used.

Also curious if anyone has had experience with SSL cert expiration issues on non-web systems. Take email for example: I haven't heard of any email clients enforcing such limits on certs. Also take server side apps talking to other apps over HTTPS; I'm guessing the SSL libraries don't care either, as I've never heard anyone complain about such things.

I'd be happy to go back to 3 years myself. I've only been on the Internet since 1994, and running Internet facing servers since 1996, so what do I know, apparently nothing. Sigh.

Nate Amsden

sounds terrible

But more to the point, I'd be surprised if it improves security much, given that it's been touted for over a decade that the average intrusion goes something like 6 months or more before being detected.

from 2022

https://www.cybersecurityintelligence.com/blog/how-long-does-it-take-before-an-attack-is-detected-6602.html

"In fact, the average breach lifecycle takes 287 days, with organisations taking 212 days to initially detect a breach and 75 days to contain it. "

Gives the attacker plenty of time to just get the new certs as they are issued if they are expiring every 45 days..

I assume this expiry time doesn't impact self-signed (internal CA) certs; I recall that was the case for the earlier reductions in intervals. My internal certs I set to expire after about 800 days and have yet to have any browsers complain in the past 5-6 years.

AT&T claims VMware by Broadcom offered it a 1,050 percent price rise

Nate Amsden

Openstack

I recall in the earlier days of Openstack, AT&T was a big name in that space. I was surprised to see these articles around VMware and AT&T; though it made sense that AT&T used some VMware, I had no idea they had so many systems on it.

See this from 2016 and 2018

https://about.att.com/innovationblog/openstack_cloud

https://about.att.com/innovationblog/airship_for_openstac

Seems Openstack was deployed at AT&T as far back as 2011

https://www.openstack.org/blog/openstack-deployments-abound-at-austin-meetup-129/

After the last AT&T VMware article I poked around more and was quite surprised to see AT&T seems to have withdrawn from Openstack. Maybe VMware gave them a deal too good to pass up; perhaps that is why their new bill is 1,000% more expensive...

You can see here, in 2020 & 2021 AT&T was a Platinum sponsor of Openstack:

https://web.archive.org/web/20201027213028/https://www.openstack.org/community/supporting-organizations/

https://web.archive.org/web/20211019122932/https://www.openstack.org/community/supporting-organizations/

but as of 2022 they had dropped off the list, and I could not find any other search engine hits as to why AT&T was not on the list anymore:

https://web.archive.org/web/20220515130825/https://www.openstack.org/community/supporting-organizations/

(Disclaimer: I have never used Openstack. I had high hopes for it when it first came out (mainly during the VMware vRAM fiasco), but by the ~2014ish time frame I came to the conclusion it was too complex to be useful for anyone but orgs with large amounts of resources to support it, and could not replace simple VMware deployments, and it seems that hasn't changed. But AT&T has such resources and could do it if they wanted to.)

That doomsday critical Linux bug: It's CUPS. May lead to remote hijacking of devices

Nate Amsden

https://www.theverge.com/2021/7/2/22560435/microsoft-printnightmare-windows-print-spooler-service-vulnerability-exploit-0-day

(note: from 2021)

"Microsoft is warning Windows users about an unpatched critical flaw in the Windows Print Spooler service. The vulnerability, dubbed PrintNightmare, was uncovered earlier this week after security researchers accidentally published a proof-of-concept (PoC) exploit. While Microsoft hasn’t rated the vulnerability, it allows attackers to remotely execute code with system-level privileges, which is as critical and problematic as you can get in Windows."

This CUPS thing is a joke. Here I was worried about things like network switches, storage arrays, firewalls, web servers. My local Linux laptop runs CUPS (though I have no printers); I manage roughly 800 other Linux systems at work and not one of them has ever had CUPS installed.
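(Easy enough to check on a Debian/Ubuntu-style box with something like this:)

# is CUPS installed, and is anything listening on the IPP port?
dpkg -l | grep -i cups
ss -lntp | grep ':631'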

Nate Amsden

this better be in the kernel

I saw someone post in another forum speculating that the issue was with CUPS. Since I saw this last night I've been assuming it's somehow a kernel network exploit. If it's anything but that, this will really end up being another super over-hyped thing.

Saying "every linux system in the past decade", the only thing those have in common is the various kernels. Obviously not an SSH or Apache, or whatever service exploit. If it does end up being with CUPS then that will be one of the biggest security jokes of the past decade, as only a fraction of 1% of linux systems run CUPS.

So, like others, I await the truth to be revealed. Assuming it is a kernel network thing, I have to wonder if such an exploit is mitigated by passing the traffic through another device such as a firewall, especially if that firewall is running a kernel that is NOT Linux.

I do recall, I think in the late 90s, there being one or two or more kernel bugs similar to Windows' "ping of death", though it wasn't an RCE, just a system crash... though maybe I am remembering wrong.

CrowdStrike's Blue Screen blunder: Could eBPF have saved the day?

Nate Amsden

200+ data sources

for Grafana, doesn't sound like much...

I've been using LogicMonitor since 2014 due to its large data source support, and they claim 3000+ integrations.

I remember having talks with Datadog a few years ago; they were convinced they could replace LM for me, but after some talks it was quite a joke what they were offering (they were suggesting I use their generic SNMP templates and build everything manually). My previous org used open source Grafana for a while (for some things, while most core infrastructure was monitored by LM), though I never touched it; it was too complex for me to get interested. LM is super simple and powerful (and I've added tons of custom stuff that they don't support). Though they do sometimes change their UI around (they are in the midst of that now), which is SUPER ANNOYING.

LM isn't cheap though; I'm sure it's more expensive than the Grafana cloud stuff. LM itself is hosted by them (I think partly in public cloud, partly in their own colo(s)); you can't self host. Though I suspect even if they did allow self hosting I may not want it, as I suspect it's too complex and buggy to run myself (I have said this before: I suspect many SaaS apps are just buggy messes that they host themselves because it's easier to have their staff manage them than try to make the software stable enough for customers to run, and the services revenue doesn't hurt either).

91% of polled Amazon staff unhappy with return-to-office, 3-in-4 want to jump ship

Nate Amsden

spacex

With SpaceX being Elon, I would be shocked if they allowed remote work, given his stances on that at Twitter and Tesla at least.

Datacenters bleed watts and cash – all because they're afraid to flip a switch

Nate Amsden

one situation that may be common

I recall about 10 years ago deploying some bare metal systems for an e-commerce site. Our VMware stuff was all max performance and the bare metal was just the default OS settings (which happened to have some power management stuff enabled by default). Ubuntu 12 I think at the time (though that doesn't matter, the situation is the same with Ubuntu 16 and 20; haven't tried 24 yet). Most of the time the servers sat at under 10% CPU usage. It wasn't a huge deal but some folks complained that the performance of those systems was less than the VM-based systems, even though the bare metal systems had 10x the capacity.

It ended up being the power management: given the low utilization, the clock speed was just cranked low and stayed low. I assume if CPU usage went way up then the clock speed would ramp up as well, but those situations were very rare, maybe not even a couple of times a month. (Despite that, the bare metal systems were FAR cheaper than the VM based ones due to the software licensing.) So I hard set the BIOS to max performance; power usage was up a bit but not much, and people stopped complaining.
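(You can see the same thing from the Linux side without touching the BIOS; a rough sketch, assuming the cpufreq sysfs interface is exposed:)

# what governor and clock speed core 0 is currently running at
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq

# force the performance governor on all cores (cpupower ships in the linux-tools packages on Ubuntu)
sudo cpupower frequency-set -g performance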

Maybe newer CPUs work much better in that regard; for example, maybe they have the ability to run a small subset of their cores at max/normal clock speed and the rest at a lower speed, and ideally the OS is capable of scheduling tasks on those higher clocked cores unless load gets pretty high, then distributing the load to other cores. I know Intel has that P-core/E-core thing, though I don't think that is too widely used in Xeons at this point (if at all?), I haven't tried to check. Of course such co-ordination between the CPU and the software introduces extra complexity and could mean more bugs as a result.

Research suggests more than half of VMware customers are looking to move

Nate Amsden

Re: Open source replacements not good enough ?

An easy way around this is just to adopt a new stack that is based on OSS tech, even if it is "commercial". Proxmox is mentioned often; you can pay them for their services, they appear to have a lot of open source stuff, and I assume they probably contribute upstream to projects. Same for Red Hat's Enterprise Virtualization, and I suspect HPE's latest hypervisor is similar. Maybe even Nutanix contributes upstream too for their AHV. (Disclaimer: I've never used any of these products.)

Double Debian update: 11.11 and 12.7 arrive at once

Nate Amsden

Re: Lack of Nvdia 390 and 470 support is a problem - but solutions abound

Fortunately for more experienced users, the Linux kernel is fairly divorced from the distro. You can run older kernels on newer distros without much of an issue in most cases. With Debian/dpkg it's easy to flag a package to "hold" so you don't get the new kernel. On my previous laptop I held my kernel for probably 3 years as I got tired of every 2nd or 3rd kernel update breaking the sound on the laptop in some way.
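(The hold itself is one command; the package name below is just an example, use whatever kernel image package your system actually has installed:)

# pin the kernel image so apt upgrades skip it
sudo apt-mark hold linux-image-amd64

# the older dpkg equivalent
echo "linux-image-amd64 hold" | sudo dpkg --set-selections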

So worst case, you can probably run 6.8 or whatever is being shipped now for the next 5+ years if you REALLY wanted to, and continue to update your distro in other ways and keep your old Nvidia card running (assuming you care about using the proprietary drivers). Of course if there is some super critical security fix that comes out then you may want to reconsider, but for most people the only such situations I'd be concerned with are remote exploits (such as bugs related to handling malformed network packets which could lead to a crash etc, which are super rare).

Nate Amsden

Re: unsure about debian

Now that I'm back home I could poke at my system for more info; it seems this is the legacy driver:

https://packages.debian.org/bookworm/nvidia-tesla-470-driver

" This version only supports GeForce, NVS, Quadro, RTX, Tesla, ... GPUs based on the Kepler, Maxwell, Pascal, Volta, Turing, Ampere, or newer architectures. The (Tesla) 470 drivers are the last driver series supporting GPUs based on the Kepler architecture. Look at the legacy driver packages for older cards. "

The server that I use this on has an "NVIDIA GeForce GT 720" GPU (fanless). Except for laptops, which have fans, the only Nvidia GPUs I've ever used have been fanless, the first being a whitebox TNT2 that I recall purchasing at Fry's Electronics in 1999.

Nate Amsden

unsure about debian

But the latest Devuan has a legacy Nvidia package, which allowed my older card to work fine after upgrading. It wasn't super smooth to figure out, as this is the only time I've run into compatibility issues with a kernel since I started using Nvidia back in 1999. (I tend to be very conservative in my configs; I'm aware of lots of issues I would have hit if I had run more current kernels over the years.)

Since Devuan is based on Debian I assume it's the same there too:

https://packages.debian.org/bookworm/nvidia-legacy-check