* Posts by Nate Amsden

2490 publicly visible posts • joined 19 Jun 2007

SonicWall firewalls now under attack: Patch ASAP or risk intrusion via your SSL VPN

Nate Amsden

Re: The easy way out

If someone has that level of access to AD, VPN logins are pretty low on the concern list at that point to me.

Google: How to make any AMD Zen CPU always generate 4 as a random number

Nate Amsden

if trust is a true issue

Don't use someone else's computer to run your VMs on regardless. There will always be security issues around that. Most people probably don't care. But I would never put much stock in such features myself.

I for one am not a fan of all this new-fangled signing and encryption stuff in the name of security that takes control away from the person who owns or operates the equipment.

Even Windows 10 cannot escape the new Outlook

Nate Amsden

kind of confused

I'm more of a Linux person though I do use Windows 10 LTSC 1809 in a VM as my main windows desktop for work stuff mainly.

I looked it up again to confirm: MS says they will support classic Outlook until at least 2029 (I assume for users of LTSC 1809, which has support till 2029 as well?). Users can continue to use/install classic Outlook until the "cutover" phase, which as of Nov 2024 has not yet been specified, and when it is, there will be at least 12 months' notice before it takes effect. Maybe the situation is different for personal accounts (vs accounts associated with a company that has Office 365).

https://techcommunity.microsoft.com/blog/outlook/new-outlook-for-windows-a-guide-to-product-availability/4078895

Kind of ironic I regularly get warnings in MS teams saying classic teams is not supported, so I should upgrade, but the newer teams is not compatible with LTSC 1809 as far as I know, and the "fine print" (last I checked) said MS will continue to support classic teams on LTSC platforms until their EOL.

I do see a "try the new outlook" option in the top right of my outlook on Win10, I have not tried it, I'm kind of assuming it would not work (like newer teams won't work) due to the newer system requirements.

Most of my outlook usage is OWA from my Linux system but sometimes I do use the classic outlook to do some things.

Zyxel firewalls borked by buggy update, on-site access required for fix

Nate Amsden

Re: Yet another reason not to enable automatic updates

This same kind of thing happened to Sonicwall a few years ago, and I was pretty shocked to get hit by it on firewalls that didn't even have that feature licensed. I think the impact was limited to Gen7 firewalls, of which I only had 1 pair at the time; all of the important stuff was/is on Gen6.

SonicWall flags critical bug likely exploited as zero-day, rolls out hotfix

Nate Amsden

Re: SonicWall is still a thing?

I've been using Sonicwall as basically L4 firewalls and site to site VPN since early 2012 without much issue. Current firewalls go EOL next year, which would make about 8 years of service for those Gen 6 units. I think their SSL VPN on the firewalls is no good (though usable for the most basic use cases). I remember evaluating their SMA SSL appliances many years ago and ruled them out right away as they lacked the ability to do full Duo prompt integration (Sonicwall firewalls can't either). Their early Gen7 stuff was pretty buggy, though it seems better now. Gen5 was ok for me as well (my first exposure to Sonicwall).

For me initially Sonicwall was only going to be used as a site to site VPN, and speaking of marketing, the VAR I was working with at the time (knowing my use case of IPSec VPN ONLY) was trying to push Palo Alto firewalls to me at probably 4-6x the cost. PAN is a fine product but super overkill for only site to site VPN(and the suggested model had a fraction of the IPSec throughput of Sonicwall). I have since expanded use cases to layer 4 edge firewalls as well and they work fine in that regard, very few issues. I haven't touched their layer 7 stuff, assuming there are more bugs there.

As for long TCP timeouts, it all depends on how long you want. I don't think I've ever needed to set something for longer than an hour or two. I did work at one place where the network engineer set their Cisco ASAs to have ~1 week timeouts, then struggled with semi-quarterly firewall outages where they had to power cycle both firewalls to get them working again. Neither they nor Cisco support were smart enough to do something as simple as check the state table, realize those 500k entries are the limit of the hardware, then check the timeouts. After I joined and saw that happen I had him fix it, and started monitoring it; states never went above about ~2000 after that, and nobody complained that I recall. The original reason for the 1 week timeouts, he said, was people complaining their sessions were being killed.
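
For what it's worth, watching that yourself is trivial: poll the state/connection count and compare it against the platform limit. A minimal sketch in Python using the classic pysnmp synchronous API; the OID, host, community string and the 500k limit are all placeholders for whatever your particular firewall actually exposes, not anything specific to the ASA:

    # Minimal sketch: poll a firewall's connection/state count over SNMP and
    # warn when it approaches the hardware limit. OID/host/community/limit are
    # placeholders -- substitute whatever your platform actually exposes.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    FIREWALL = "192.0.2.1"                          # hypothetical management address
    COMMUNITY = "public"                            # read-only community string
    CONN_COUNT_OID = "1.3.6.1.4.1.9.9.999.1.1.1"    # placeholder OID, not a real one
    STATE_LIMIT = 500000                            # platform limit from the datasheet

    errInd, errStat, errIdx, varBinds = next(getCmd(
        SnmpEngine(), CommunityData(COMMUNITY),
        UdpTransportTarget((FIREWALL, 161)), ContextData(),
        ObjectType(ObjectIdentity(CONN_COUNT_OID))))

    if errInd or errStat:
        print("SNMP poll failed:", errInd or errStat.prettyPrint())
    else:
        count = int(varBinds[0][1])
        pct = 100.0 * count / STATE_LIMIT
        print(f"connection states: {count} ({pct:.1f}% of limit)")
        if pct > 80:
            print("WARNING: state table nearing the hardware limit, check timeouts")

Run that from cron (or feed the number into whatever monitoring you already have) and you'd spot a state table creeping toward the limit long before the quarterly "power cycle both firewalls" ritual.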

Miscreants 'mass exploited' Fortinet firewalls, 'highly probable' zero-day used

Nate Amsden

Re: unpatched zero-day?????

Seems you are?

From what I can see their advisory was posted today, yet the article talks about systems being compromised last month.

It is interesting to note that apparently only their 7.0 build is affected; they seem to have several other 7.x branches that are not.

https://fortiguard.fortinet.com/psirt/FG-IR-24-535

I recall reading comments on several occasions (a few years back at least) where seemingly experienced network engineers would say of Fortinet, "find a stable firmware version and don't upgrade," on a fairly regular basis. The same folks often touted Fortinet as a good solution at lower cost than Palo Alto (which they ranked #1), but with the big caveat that their software wasn't that great (unless you happened to land on a good build of it).

Nate Amsden

Re: switch out modems

Sounds nice, but if they could make that secure then they would have just used that secure VPN for regular stuff instead of the less secure stuff. Really it seems if you want a "secure" VPN from a vulnerability standpoint the best solution is to avoid any SSL based/browser based VPNs and use another protocol like IPSec. I've never noticed any credential stuffing attacks against my Sonicwall IPSec endpoints, but of course the SSL portions get hammered. My most "secure" Sonicwall firewalls have never had SSL VPN enabled, and management access, while exposed to the internet, is limited to just a couple of subnets that are allowed, so I consider that secure. I noticed another person comment that there have been some minor vulnerabilities against Fortinet IPSec, though only denial of service.

Though IPsec VPNs tend to have a lot less functionality vs the pretty SSL VPNs. Can't speak for Fortinet, never used it, but the Sonicwall SSL VPN is crap from a few different perspectives (though for me Sonicwall makes a solid IPsec site to site system as well as a layer 4 firewall; haven't touched their layer 7). Citrix and Ivanti both have nice SSL VPNs (at least the basic VPN functionality and access control), though like everyone else they have had their share of security exploits over the years.

I haven't seen a modem used for remote access myself since 2002, and even then I recall turning that 3COM modem thing we had off as we migrated users to some Cisco VPN concentrator appliances.

Nominet probes network intrusion linked to Ivanti zero-day exploit

Nate Amsden

Re: Starter for Ten

There have been stats passed around for years that suggest a large number of intrusions take upwards of 6 months to detect on average. Unless the attackers resort to destructive things right away following intrusion.

Zero-day exploits plague Ivanti Connect Secure appliances for second year running

Nate Amsden

9.1 code base systems NOT AFFECTED by remote code execution bug

Dealt with this yesterday...

Had a pair of recently EOL'd Ivanti/Pulse Secure systems still running the 9.1 code base. I knew they were going end of life, but still assumed they would be supported for critical fixes till true end of life, which was slated for some time later this year (haven't cared about new features for the past 5 years). Then I dug in my email box and saw that they accelerated the end of life for the software to the end of last year, and they would not release patches (yet they re-iterated end of life is Dec 23, 2025 - this is after they accelerated end of life of the hardware to summer of LAST year). So a bit of panic set in... I was already planning on replacement systems being deployed by the end of the month.

Of course remote code execution is bad, so I had to accelerate plans to replace those systems with my newer Ivanti systems, which I wasn't ready to do yet due to additional work to get the Duo Universal Prompt working; the method I had been using with Duo for the past decade is EOL as well. But I got a workaround in place (Duo without Universal Prompt). Why can't I keep using the regular Duo prompt, it works fine...

Then today I noticed in the fine print of the security advisory

----

What versions of Connect Secure do these vulnerabilities impact?

The versions of code that each CVE impacts is reflected in the chart above. The 9.x line of code reached End of Life on December 31, 2024, and will not be receiving a patch for CVE-2025-0283. It is important for customers to know that we are not aware of any exploitation of CVE-2025-0283 in the wild and CVE-2025-0282 does not impact the 9.x line of code.

---

So the super scary remote code execution didn't affect my 9.1 systems after all. The other bug isn't great either, but of course there is a huge difference between the two in severity, so I didn't have to rush. At least it's out of the way, and now I can focus on getting the better Duo support enabled next (https://help.duo.com/s/article/8019?language=en_US).

Sort of ironic to me that Ivanti accelerated the EOL of 9.1 because they said the CentOS it used was no longer getting updates so they couldn't make it secure any more, and the next super critical security bug is only on their NEW system, not on the OLD one that they can't update.

Even Netflix struggles to identify and understand the cost of its AWS estate

Nate Amsden

the lengths they go through

To justify continuing to use amazon is just sad. Goes to show how broken the mindset is for public cloud. Especially when that cloud is a major competitor.

Krispy Kreme Doughnut Corporation admits to hole in security

Nate Amsden

Re: "if you're a regular customer, check any credit cards"

Lock your credit reports.

I did that the day Equifax was hacked; looks like that was in 2017. It's not perfect, but it goes a long way towards making yourself a less easy target. I've had to unlock my credit reports on just a few occasions in the years since, and I believe all 3 major credit reporting companies' systems are set up so you can unlock for a short period of time and then auto-lock again. There is no cost for this service, but in trying to find the service (for free anyway) you'll probably have to navigate past their advertising for their subscription services.

Nate Amsden

Re: Krispy Kreme still exists?

Guessing this is a joke? (On mobile, can't see icons.) Have seen news of many brands in trouble but not this one. Did a search and only see mentions of closing about a dozen stores each in 2022 and, it seems, 2024 too; a rounding error compared to the 1,400 Wikipedia says they have.

The only Krispy Kreme near me closed maybe 5 years ago. Though multiple supermarkets near me carry their donuts, which I think they claim are fresh daily. I'm not picky and like most any standard flavor donut including Krispy Kreme, though I have never been fond of their classic "airy" texture. Flavor is fine though. My favorite donuts are old fashioned and apple fritter. Now that I said that I'm probably going to get some this morning.

Now if you said you're surprised to hear Quiznos is still in business that'd make a lot of sense by contrast.

Mr Intel leaving Intel is not a great sign... for Intel

Nate Amsden

Random thought...

Maybe if Intel hadn't made Itanium and driven the Alpha processors out of business, thus driving many(?) Alpha engineers to AMD, they wouldn't have come up with the amd64 instructions? Seems like a lot of similar technologies in the AMD design came from Alpha. I'm not an expert, maybe it's just coincidence...

1,000s of Palo Alto Networks firewalls hijacked as miscreants exploit critical hole

Nate Amsden

I went through one upgrade on an HA pair of Palo Alto 3000 series firewalls that I briefly inherited at my last company, I think probably in 2019. This had to go through two version jumps.

I had zero PAN experience so I opened a support ticket to verify the process. They were very helpful. They pointed me to their best practice guide which ended up being completely WRONG. They had a support person on the line in case I had a problem and said their guide was right. That person went off shift and said everything is going ok, so if I need help just call back in.

I followed the process till I got to the point where I had to fail over and the firewalls refused. Called support, waited for an hour (thought it'd be faster). Told them what was going on and at that point they admitted the best practice guide was wrong. The guide specifically said fully upgrade the first firewall then fail over. I didn't think that sounded right, so before I did that I asked them and they said the guide was right. Ok. Well, ended up having to yank the HA cables out and force things to fail over, then upgrade the other unit and do the same again. What a mess. Fortunately the outages weren't an issue and I was physically on site.

I was kind and pushed Palo Alto to fix their guide; took them about a year to do so. Even then it really wasn't to my satisfaction, they could have made it more clear, but at that point I stopped caring.

Fortunately nothing else broke that I recall. Their firewall config was pretty basic, no SSL interception, but they were using BGP. The impression I got was the IT network engineer hadn't touched them since they were installed years earlier by a 3rd party. The same person also did not apply updates to their ASA firewalls, which had over 100 vulnerabilities by the time I took over, and they were EOL and Cisco refused to let me buy a support contract to simply download the latest code. I have read some stories about PAN upgrades breaking a bunch of stuff, so it's not uncommon to not upgrade frequently, especially if the customer relies on 3rd party hourly support for day to day maintenance.

Most of my commercial firewall experience over past 15 years is Sonicwall. Just layer 4 and site to site vpn. Never touched their layer 7 stuff and their end user ssl vpn sucks(but it is cheap, I wouldn't use it though). Never had a major issue with upgrades but again config is simple. Despite that I am somewhat terrified to upgrade my external firewalls at my main remote colo because if something goes wrong I lose all connectivity to the site. If we were a significantly larger company it would be easier to justify a back door setup.

Originally that site had a stack of 4 switches that ran everything. I did the stack just to keep it simple for others, and it was small. I never had to roll back a switch OS update in the prior 7 years. I needed to upgrade but wasn't going to do it from 2000 miles away. So I went on site and decided to do it from my hotel room about 20min from the data center. It just so happened that the upgrade broke compatibility with the vendor SFPs that I was using for the internet uplinks. So when I upgraded I lost all connectivity and had to drive on site to figure it out. Ended up rolling back, and the vendor later acknowledged the compatibility issue. I later changed the architecture with the switches so the setup was more robust at the cost of complexity. Still simpler than other vendors though.

Moral, perhaps: don't take updates lightly, especially on systems that provide core connectivity.

HPE goes Cray for Nvidia's Blackwell GPUs, crams 224 into a single cabinet

Nate Amsden

just go direct

With a system like this who needs a data center, just drop one or two of these racks at your local nuclear power plant and get a network connection to it ...

Windows 10 given an extra year of supported life, for $30

Nate Amsden

nice to see

Hopefully they offer LTSC to consumers in the future as well (though I'm not holding my breath of course). I run LTSC (which was a super annoying process for the company to buy, as their normal reseller didn't know what it was and couldn't figure it out) as my main Win 10 VM on my Linux system, build 1809, supported till Jan 2029. Super strange to me that LTSC 21H2 (the last LTSC I guess) is only supported till 2027 by contrast (except for IoT, which goes to 2032; why not just make both Enterprise and IoT 2032, they have to make the same fixes anyway, stupid).

https://en.wikipedia.org/wiki/Windows_10_version_history

MS teams always bugs me saying that teams classic is unsupported and I should upgrade. Though last I checked I could not upgrade as the newer teams is not compatible, also read that classic teams will continue to be supported on LTSC windows regardless (I assume till LTSC EOL but not sure).

It's about time Intel, AMD dropped x86 games and turned to the real threat

Nate Amsden

Re: "amid growing adoption of competing architectures"

I think the future is ARM for those that are vertically integrated, for the rest, not so much at least in the server space. Very few can afford to be vertically integrated to that level, perhaps at this point that is mostly companies with trillion dollar market caps for the most part. I wouldn't be surprised if in a decade or so RISC-V takes over from ARM as these vertically integrated companies go to take another layer out of the supply chain.

Sysadmins rage over Apple’s ‘nightmarish’ SSL/TLS cert lifespan cuts plot

Nate Amsden

Re: Evidence?

Along those lines I don't recall ever seeing news reports of ssl cert hijacking. I have seen reports of CAs improperly issuing certs but of course that is an unrelated issue.

I don't doubt ssl cert hijacking as an issue exists, but it doesn't seem to be prevalent to the point where reducing time to expiry is a useful tool.

Most surprised that Apple of all companies seems to be behind it, as they were behind the last round of reductions.

And I guess everyone gave up on revocation, nice idea in theory but guess almost never gets used.

Also curious if anyone has had experience with ssl cert expiration issues on non web systems. Take email for example: I haven't heard of any email clients enforcing such limits on certs. Also take server side apps talking to other apps over https; I'm guessing the ssl libraries don't care either, as I've never heard of someone complain about such things.

I'd be happy to go back to 3 years myself. I've only been on the Internet since 1994, and running Internet facing servers since 1996, so what do I know, apparently nothing. Sigh

Nate Amsden

sounds terrible

But more to the point, I'd be surprised if it improves security much, being that it's been touted for over a decade the average intrusion is something like 6 months or more before being detected.

from 2022

https://www.cybersecurityintelligence.com/blog/how-long-does-it-take-before-an-attack-is-detected-6602.html

"In fact, the average breach lifecycle takes 287 days, with organisations taking 212 days to initially detect a breach and 75 days to contain it. "

Gives the attacker plenty of time to just get the new certs as they are issued if they are expiring every 45 days..

I assume this expiry time doesn't impact self-signed (internal CA) certs; I recall that was the case for the earlier reductions in intervals. My internal certs I set to expire after about 800 days and have yet to have any browsers complain in the past 5-6 years.
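
For anyone curious what that looks like in practice, here's roughly how I'd mint an internal cert with a long lifetime; a minimal sketch using Python's cryptography package (the hostname is just an example, and a proper internal CA would sign with the CA key rather than self-sign):

    # Minimal sketch: self-signed cert valid for ~800 days, the kind of thing
    # browsers still accept for internal/private-CA use. Hostname is an example.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "wiki.internal.example")])
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)                    # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=800))   # ~800 day expiry
            .add_extension(x509.SubjectAlternativeName(
                [x509.DNSName("wiki.internal.example")]), critical=False)
            .sign(key, hashes.SHA256()))

    with open("internal.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))

The 45 day rules are enforced by the browsers against publicly trusted CAs, so as far as I know a private CA or self-signed cert like this one stays exempt.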

AT&T claims VMware by Broadcom offered it a 1,050 percent price rise

Nate Amsden

Openstack

I recall in the earlier days of Openstack, AT&T was a big name in that space. I was surprised to see these articles around VMware and AT&T, though it made sense that AT&T used some VMware, had no idea they had so many systems on it.

See this from 2016 and 2018

https://about.att.com/innovationblog/openstack_cloud

https://about.att.com/innovationblog/airship_for_openstac

Seems Openstack was deployed at AT&T as far back as 2011

https://www.openstack.org/blog/openstack-deployments-abound-at-austin-meetup-129/

After the last AT&T VMware article I poked around more and was quite surprised to see AT&T seems to have withdrawn from Openstack. Maybe VMware gave them a deal too good to pass up; perhaps that is why their new bill is 1,000% more expensive.

You can see here, in 2020 & 2021 AT&T was a Platinum sponsor of Openstack:

https://web.archive.org/web/20201027213028/https://www.openstack.org/community/supporting-organizations/

https://web.archive.org/web/20211019122932/https://www.openstack.org/community/supporting-organizations/

but as of 2022 they dropped off the list, I could not find any other search engine hits as to why AT&T was not on the list anymore:

https://web.archive.org/web/20220515130825/https://www.openstack.org/community/supporting-organizations/

(Disclaimer: I have never used Openstack. I had high hopes for it when it first came out (mainly during the VMware vRAM fiasco), but by the ~2014ish time frame I came to the conclusion it was too complex to be useful for anyone but orgs with large amounts of resources to support it, and could not replace simple VMware deployments, and it seems that hasn't changed. But AT&T has such resources and could do it if they wanted to.)

That doomsday critical Linux bug: It's CUPS. May lead to remote hijacking of devices

Nate Amsden

https://www.theverge.com/2021/7/2/22560435/microsoft-printnightmare-windows-print-spooler-service-vulnerability-exploit-0-day

(note: from 2021)

"Microsoft is warning Windows users about an unpatched critical flaw in the Windows Print Spooler service. The vulnerability, dubbed PrintNightmare, was uncovered earlier this week after security researchers accidentally published a proof-of-concept (PoC) exploit. While Microsoft hasn’t rated the vulnerability, it allows attackers to remotely execute code with system-level privileges, which is as critical and problematic as you can get in Windows."

this CUPS thing is a joke. Here I was worried about things like network switches, storage arrays, firewalls, web servers. My local linux laptop runs cups (though I have no printers), I manage roughly 800 other linux systems at work and not one of them has ever had CUPS installed.

Nate Amsden

this better be in the kernel

I saw someone post in another forum speculating that the issue was with CUPS. Since I saw this last night I'm assuming it's somehow a kernel network exploit. If it's anything but that, really this will end up being another super over hyped thing.

Saying "every linux system in the past decade", the only thing those have in common is the various kernels. Obviously not an SSH or Apache, or whatever service exploit. If it does end up being with CUPS then that will be one of the biggest security jokes of the past decade, as only a fraction of 1% of linux systems run CUPS.

so, like others, I await the truth to be revealed. Assuming it is a kernel network thing I have to wonder if such an exploit is mitigated by means of passing the traffic through another device such as a firewall, especially if that firewall is running a kernel that is NOT linux.

I do recall, I think in the late 90s, there being one or two or more kernel bugs similar to Windows' "ping of death", though it wasn't an RCE, just a system crash... though maybe I am remembering wrong.

CrowdStrike's Blue Screen blunder: Could eBPF have saved the day?

Nate Amsden

200+ data sources

for Grafana, doesn't sound like much...

I've been using LogicMonitor since 2014 due to its large data source support, and they claim 3000+ integrations.

I remember having talks with Datadog a few years ago; they were convinced they could replace LM for me, but after some talks what they were offering was quite a joke (they were suggesting I use their generic SNMP templates to build everything manually). My previous org used open source Grafana for a while (for some things, while most core infrastructure was monitored by LM), though I never touched it; it was too complex for me to get interested. LM is super simple and powerful (and I've added tons of custom stuff that they don't support). Though they do sometimes change their UI around (they are in the midst of that now), which is SUPER ANNOYING.

LM isn't cheap though, I'm sure it's more expensive than the Grafana cloud stuff. LM itself is hosted by them (I think partly in public cloud, partly in their own colo(s)), can't self host. Though I suspect even if they did allow self hosting I may not want it; I suspect it's too complex and buggy to be able to run myself. (Have said this before: I suspect many SaaS apps are just buggy messes that they host themselves because it's easier to have their staff manage them than to try to make the software stable enough for customers to run; also the services revenue doesn't hurt either.)

91% of polled Amazon staff unhappy with return-to-office, 3-in-4 want to jump ship

Nate Amsden

spacex

With SpaceX being Elon's, I would be shocked if they allowed remote work, given his stances on that at Twitter and Tesla at least.

Datacenters bleed watts and cash – all because they're afraid to flip a switch

Nate Amsden

one situation that may be common

I recall about 10 years ago deploying some bare metal systems for an e-commerce site. Our VMware stuff was all max performance and the bare metal was just the default OS settings (which happened to have some power management stuff enabled by default). Ubuntu 12 I think at the time (though that doesn't matter, situation is the same with Ubuntu 16 and 20, haven't tried 24 yet). Most of the time the servers sat at under 10% cpu usage. It wasn't a huge deal but some folks complained that the performance of those systems was less than the VM-based systems, even though the bare metal systems had 10x the capacity.

Ended up being the power management: given the low utilization, the clock speed was just cranked low and stayed low. I assume if CPU usage went way up then the clock speed would ramp up as well, but those situations were very rare, maybe not even a couple of times a month. (Despite that, the bare metal systems were FAR cheaper than the VM based ones due to the software licensing.) So I hard set the BIOS to be max performance; power usage was up a bit but not much, and people stopped complaining.
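
These days the same thing is easy to spot from the OS side before anyone complains; a quick sketch that reads the standard Linux cpufreq sysfs files (which not every platform exposes, e.g. when the BIOS is already hard set to max performance they may be absent or static):

    # Quick sketch: show each core's cpufreq governor and current clock so you
    # can spot a box idling at its lowest P-state. Relies on the standard Linux
    # /sys/devices/system/cpu/*/cpufreq files; not all platforms expose them.
    import glob

    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        cpu = path.split("/")[-2]
        try:
            with open(path + "/scaling_governor") as f:
                governor = f.read().strip()
            with open(path + "/scaling_cur_freq") as f:
                cur_mhz = int(f.read()) // 1000          # value is in kHz
            with open(path + "/cpuinfo_max_freq") as f:
                max_mhz = int(f.read()) // 1000
        except OSError:
            continue                                      # no cpufreq support here
        print(f"{cpu}: governor={governor} {cur_mhz}/{max_mhz} MHz")

If every core reports something like "powersave" and a clock pinned near the bottom of its range under a real workload, that's the same symptom I hit; forcing the governor (or the BIOS) to performance is the blunt fix.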

Maybe newer CPUs work much better in that regard, for example maybe they have the ability to have a small subset of their cores to run at max/normal clock speed, and the rest of the cores be lower speed, and ideally the OS is capable of scheduling tasks on those higher clocked cores unless load gets pretty high then distribute the load to other cores. I know Intel has that P core / E core thing, though I don't think that is too widely used in Xeons at this point(if at all?) I haven't tried to check. Of course such co-ordination between the CPU and the software introduces extra complexity and could be more bugs as a result.

Research suggests more than half of VMware customers are looking to move

Nate Amsden

Re: Open source replacements not good enough ?

An easy way around this is just to adopt a new stack that is based on OSS tech even if it is "commercial". Proxmox is mentioned often, can pay them for their services, they appear to have a lot of open source stuff, and I assume they probably contribute upstream to projects. Same for Red Hat's Enterprise Virtualization, and I suspect HPE's latest hypervisor is similar. Maybe even Nutanix contributes upstream too for their AHV. (disclaimer I've never used any of these products)

Double Debian update: 11.11 and 12.7 arrive at once

Nate Amsden

Re: Lack of Nvdia 390 and 470 support is a problem - but solutions abound

Fortunately for more experienced users, the linux kernel is fairly divorced from the distro. You can run older kernels on newer distros without much of an issue in most cases. With debian/dpkg it's easy to flag a package to "hold" so you don't get the new kernel. On my previous laptop I held my kernel for probably 3 years as I got tired of every 2nd or 3rd kernel update breaking the sound on the laptop in some way.
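
A minimal sketch of doing that programmatically, in case anyone wants to script it; the kernel package name is a made-up example, and a plain "apt-mark hold <pkg>" (or "echo <pkg> hold | dpkg --set-selections") from a shell does exactly the same thing:

    # Tiny sketch: pin a kernel package so routine upgrades don't replace it.
    # Package name is an example -- use whatever linux-image-* you actually run.
    # 'apt-mark hold' sets the same dpkg "hold" selection mentioned above.
    import subprocess

    PKG = "linux-image-6.1.0-18-amd64"   # hypothetical example package name

    subprocess.run(["apt-mark", "hold", PKG], check=True)
    out = subprocess.run(["apt-mark", "showhold"], check=True,
                         capture_output=True, text=True)
    print("held packages:", out.stdout.strip() or "(none)")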

So worst case, you can probably run 6.8 or whatever is being shipped now for the next 5+ years if you REALLY wanted to, and continue to update your distro in other ways and keep your old Nvidia card running (assuming you care about using the proprietary drivers). Of course if there is some super critical security thing that comes out then you may want to consider that, but for most people, the only such situations I'd be concerned with are remote exploits (such as bugs related to handling malformed network packets which could lead to a crash etc, which are super rare).

Nate Amsden

Re: unsure about debian

Now that I'm back home I could poke at my system for more info; seems this is the legacy driver:

https://packages.debian.org/bookworm/nvidia-tesla-470-driver

" This version only supports GeForce, NVS, Quadro, RTX, Tesla, ... GPUs based on the Kepler, Maxwell, Pascal, Volta, Turing, Ampere, or newer architectures. The (Tesla) 470 drivers are the last driver series supporting GPUs based on the Kepler architecture. Look at the legacy driver packages for older cards. "

The server that I use this on has an "NVIDIA GeForce GT 720" GPU (fanless). Except for laptops, which have fans, the only Nvidia GPUs I've ever used have been fanless, with the first being a whitebox TNT 2 that I recall purchasing at Fry's Electronics in 1999.

Nate Amsden

unsure about debian

But the latest Devuan has a legacy Nvidia package, which allowed my older card to work fine after upgrading. It wasn't super smooth to figure out, as this is the only time I've run into compatibility issues with a kernel since I started using Nvidia back in 1999. (I tend to be very conservative in my configs; I am aware there would have been lots of issues if I had run more current kernels over the years.)

Since devuan is based on debian I assume it's the same there too

https://packages.debian.org/bookworm/nvidia-legacy-check

Admins wonder if the cloud was such a good idea after all

Nate Amsden

different kinds of cloud

Personally I realized the cloud scam (specifically, IaaS) back in 2010, and have been bitching about it ever since, fortunately have not really had to deal with it since 2012.

But there is another kind of cloud that may be worthwhile depending on your needs, and that is SaaS. Even before cloud was a thing, SaaS was in many places, from DNS/email/web hosting to CDNs and things like that. I have personal experience at my first SaaS company back in 2003-2006, which was years before I think I heard the term cloud, where the software was so complex, immature, and unstable that the customers really could not operate it on prem. I was technical lead of a project, in fact, to demonstrate to our largest customer at the time, AT&T, that they could in fact operate it themselves. It involved me setting up the software on their systems and then we sort of trained them to use it from a demo perspective. It never saw any transactions nor had any crashes because there was no activity on it, but even they ran screaming and were happy to have our org continue to operate the software. There were literally hundreds of XML config files, where even an extra space in the config would cause the app to puke. We used client SSL certs for authentication for one part of the system (the only place I've ever used client certs), a super complex Java stack running on Tomcat/Weblogic/Oracle. AT&T paid the company I worked for a $1 million check for successful completion of that particular project. The company was acquired in 2006, a couple of months after I left (fortunately I was still able to go back and buy the rest of my stock options).

Fast forward 5-6 years and the different org I was at decided to use Chef configuration management. Similar situation (though not nearly as bad): operating that on prem looked to be pretty bad just based on comments I was reading at the time, so we opted for their cloud version (I think cost wise it was the same, it was just a matter of preference which you wanted to do). Years after that I recall downloading Gitlab and installing it on prem, took one look at everything that was running and decided to nope out of that. Too complex, made me feel like it would be super fragile. Developers ended up going with Bitbucket at the time.

Companies have an excuse to make their software practically unusable from an on prem perspective by just making it so complicated and fragile, that the only real way to use it is with SaaS. I think that is applying to more and more products out there (at least ones that aren't desktop oriented).

Upside I suppose is for people like me who have been running mission critical internet facing infrastructure for the past 21 years that gives me plenty of opportunities in theory..

But for sure, IaaS, as deployed by all the major clouds is, and has always been quite a scam. Biggest factor is resource utilization, paying for what you provision rather than what you use. Fixed instance sizes, etc etc. Object storage I suppose is one of the few things that is pay for what you use, but even then of course the big clouds are super expensive compared to other (even cloud) options from what I've seen over the years.

But at least in many cases with SaaS the billing model is more clear(often times $ per user account which is easier to budget for and justify yes/no), and unlike IaaS you(customer) don't have to deal with the potential unreliability of the underlying infrastructure or software itself.

Broadcom boss Hock Tan says public cloud gave IT departments PTSD

Nate Amsden

For sure from the 90s. Last I checked (a few weeks ago), on their support site the downloads section etc. is dedicated to mainframe products (according to the web pages themselves; they don't even show me the mainframe products, I assume since I've never had a license for any of them). I can understand not integrating VMware so soon, but even about a year ago I was looking for Fibre Channel switch software and could not find it. Eventually, with HPE's help, opening a ticket with HPE who then opened a ticket with Broadcom, I was able to get what I needed.

Proof-of-concept code released for zero-click critical IPv6 Windows hole

Nate Amsden

what if there is no local IPv6 network?

I mean if IPv6 is not configured(but still enabled on the windows system), and there are no IPv6 routers on the local network, no other devices running IPv6, it's not as if you can just configure IPv6 on a single system and expect to connect to anything using that protocol? That said, on Linux at least I do go out of my way to disable IPv6(kernel option ipv6.disable=1), just to make things simpler because there is no IPv6 network to connect to(don't anticipate that changing in the next 5+ years) and I just prefer it to be more clean. There is inbound IPv6 support to the e-commerce website I support but that is NAT'd to IPv4 at the CDN transparently.
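
If you're wondering whether a given Linux box actually has the IPv6 stack disabled or just unconfigured, a quick sketch: it checks for /proc/net/if_inet6, which isn't present when the stack is disabled with ipv6.disable=1, and lists any addresses if it is there (a host with the stack enabled will usually at least show link-local fe80:: addresses):

    # Quick sketch: report whether the IPv6 stack is present on this Linux host.
    # /proc/net/if_inet6 is absent when booted with ipv6.disable=1; otherwise each
    # line is <address> <ifindex> <prefixlen> <scope> <flags> <ifname>.
    import os

    PATH = "/proc/net/if_inet6"

    if not os.path.exists(PATH):
        print("IPv6 stack disabled (no /proc/net/if_inet6)")
    else:
        with open(PATH) as f:
            for line in f:
                addr, _, _, scope, _, ifname = line.split()
                print(f"{ifname}: {addr} (scope 0x{scope})")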

I remember back in 2001, the Extreme Networks Summit 48 I had for our office at the time had a protocol support thing. Default was all protocols, but then you could restrict it to, I want to say, just IP traffic or something like that? I recall setting that option and everything worked fine, except one or two people that used Macs complained because whatever protocol the Mac was using (forgot the name) was being rejected by the switch, so I went to the switch and just allowed all. Haven't seen that particular feature in switches since (though I'm sure it could be handled manually using ACLs; this was a simple drop down box selection in the web UI).

Intel's Software Guard Extensions broken? Don't panic

Nate Amsden

exactly

Nate Amsden

For one, I'm pretty sure AMD lacks the capacity to produce enough chips to take over a large swath of the market in a short period of time (a few years). Intel has been rethinking their strategy and seems to be going back to their roots, which would probably be a good thing after messing up in the past decade or so. AMD likewise messed up for several years (at least ~2013 till ~2019?) and only recently got back into things, at least on the server side.

For intel's sake I hope they stay the course (despite near term losses) to fix their stuff rather than get cold feet and go back to trying to pump the stock price on short term stuff only.

ARM and RISC-V will likely remain very niche on the server end for a few years at least yet. The ones most likely to benefit from it are the vertically integrated cloud companies designing their own chips. Intel/AMD may very well end up with a very competitive solution both cost/power/performance vs ARM/RISC-V for other situations. Especially when at least the ARM server chips pack tons of cores and use tons of power. Companies messed around with the micro server concept (even AMD bought Seamicro) but came to the conclusion more powerful chips with more cores are better than tons of small powered chips. Those that are making their own chips will have lower costs due to fewer players in their supply chain.

Too much stuff is built on x86. The seeming lack on ARM of the standard interfaces that x86 has (and I assume RISC-V has the same issues) will continue to cause complexities with deployment (this is quite an issue in the mobile space).

'Uncertainty' drives LinkedIn to migrate from CentOS to Azure Linux

Nate Amsden

has moved, not is moving

First reaction was WTF, why are they only deciding to move to another distro now (reading the first line of El Reg's article), after EOL has passed for CentOS 7. But the first line of the blog post says that as of April 2024 almost everything had already been moved.

Skimming the blog post seems most of their challenges would have been the same regardless of what they were migrating to, as CentOS 7 is a 10 year old platform and I assume most updates during that period were just back ported to their own versions, even going to whatever the latest RHEL is would be a similar jump.

Sounds as if Azure Linux is RPM-based. Odd that they felt the need to connect a GUI for their IDE for development. As long as I can recall, the developers I've worked with, going back maybe as far as 2006 or 2007, used Macs for development and that stuff got pushed to Linux servers (the same holds true for the org I am at now). Not many issues, though once Macs went ARM the number of issues for developers went up a bit (but they handle that themselves). Though all of the languages they use are interpreted, nothing C/C++ etc. I did maintain a developer VM of one of the app stacks a previous org ran for several years, using the same OS/configs (as close as possible) as production. Though that certainly goes out the window with developers using ARM and the servers being x86.

Myself, waiting for Ubuntu 24.01 to come out to start testing transitioning from 20.04, haven't seriously run CentOS since CentOS 5. I was RHEL/CentOS only at work from 2003 till about 2010 then Ubuntu since. (and Linux Mint as well as Devuan at home).

AMD won’t patch Sinkclose security bug on older Zen CPUs

Nate Amsden

Having it be super secure is also bad for many. I think most techies would prefer having more control over their devices and not be locked down and somewhat powerless.

It's part of the appeal of open source and linux and PCs in general vs more locked systems like game consoles, mobile phones etc.

It's really hard to provide a (truly) secure, open system. Perhaps impossible.

Never having a known compromise of any system of mine since the 90s and the [STONED] virus, I don't care about this kind of security thing. It's super unlikely that I'd ever be impacted by it. Same reason I didn't apply patches for meltdown/spectre and specifically disable the fixes in the linux kernel.

Don't forget that there will always be some new vulnerability around the corner.

If you are in the unlucky position of being an at risk target then I feel for you.

I suspect the vast majority of servers out there at least are VMs which I assume should lower the attack surface as you'd have to compromise the host in order to do anything with the firmware.

Been running Internet connected services since 1996.

CrowdStrike hires outside security outfits to review troubled Falcon code

Nate Amsden

what happened

I think regardless of the bug, the core problem was pushing obviously very untested code/data to all customers' production systems, even customers that had policies about staging stuff to non critical systems first. But even then, this sounds like a bad enough bug that it should have been caught before going anywhere outside of Crowdstrike. Hell, push it to Crowdstrike's own systems first. Not sure how long it took between the update being installed and a BSOD (never used Crowdstrike or similar tools, "EDR" I think they are called).

Twitter tells advertisers to go fsck themselves, now sues them for fscking the fsck off

Nate Amsden

curious

Never having used Twitter: given Musk controls it and Tesla, are there Tesla ads shown around "objectionable" content on Twitter? I think I saw Musk say Tesla would spend a bunch on ads there, but I haven't heard of incidents with that brand unlike other brands.

More curious if he would even care whether or not Tesla's ads show up alongside such content.

Ransomware gangs are loving this dumb but deadly make-me-admin ESXi vulnerability

Nate Amsden

Ok, so perhaps the issue is more that ESXi may recognize a mailing list distribution group as an access level group and grant admin access based on that? Which seems strange to me; I would have expected that if ESXi requests the group membership, given it is a server, AD would return just the security groups, not distribution lists, as that is a different type of object. But in any case it sounds like a more advanced AD thing, and if that behavior was critical then the AD admins should know about that already, and set things accordingly when they link up ESXi or anything else that authenticates against AD.

I believe in the org I am with there are no users that are allowed to make distribution lists. I have regular discussions with the IT folks and they have frequently mentioned users' requests to have a new distribution group, which IT then evaluates and performs the actions as needed. I think I recall discussions like that at prior companies as well, going back to 2008 if not earlier. I assume there were cases where some users could manage existing distribution lists (perhaps their own group's or something).

Per another user's comment, I use LDAP with my Netscalers as well and was not aware of that nsroot thing. For me I just needed to provide named "admin" accounts for PCI purposes, so I am using LDAP to authenticate those passwords; then I had to "map" each user to the admin role using a local user account (the local user password isn't used, though it still must be defined).

Nate Amsden

Maybe it's just an AD thing, though I suspect many orgs especially smaller ones don't have non admin users that can create security groups. At least I've never heard that as being a practice anywhere I worked at, though I freely admit I'm no AD expert by any means.

Otherwise, how is this any different from mapping a group in LDAP (in my case, OpenLDAP) to an admin group in vCenter? or any other app? That is what most apps do. Though usually you define what group name you want to use.

So in my case for example, if you happen to be in the "ops-prod-vmware" LDAP group, that will map you to full admin access in vCenter on the production cluster. Seems like a perfectly normal thing to do. ESXi was just lazy in that they picked some arbitrary name and said here, use this. I have seen comments from others saying there is a setting or two you can use to change the default, but I assume that setting is not visible in the UI (you may have to go to the advanced settings to find it).
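
For reference, this is the kind of knob I mean; a hedged sketch using pyVmomi to point that default at a group of your own choosing instead. The option key is the one commonly cited for this behaviour, and the host, credentials and group name are made up, so treat it as illustrative and verify against your own ESXi build before trusting it:

    # Hedged sketch: change the AD group ESXi treats as its admin group, via the
    # host's advanced options. Host, credentials and group name are placeholders;
    # verify the option key against your ESXi version before using it for real.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    OPTION_KEY = "Config.HostAgent.plugins.hostsvc.esxAdminsGroup"  # commonly cited key
    NEW_GROUP = "ops-prod-vmware"                                   # example group name

    ctx = ssl._create_unverified_context()     # lab only; validate certs in production
    si = SmartConnect(host="esxi01.example.com", user="root",
                      pwd="example-password", sslContext=ctx)
    try:
        # Direct-to-host connection: rootFolder -> ha-datacenter -> host folder -> HostSystem
        host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
        opts = host.configManager.advancedOption
        current = opts.QueryOptions(name=OPTION_KEY)
        print("current value:", current[0].value if current else "(unset)")
        opts.UpdateOptions(changedValue=[vim.option.OptionValue(key=OPTION_KEY,
                                                                value=NEW_GROUP)])
        print("set", OPTION_KEY, "to", NEW_GROUP)
    finally:
        Disconnect(si)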

Nate Amsden

by design

Noted someone in another forum dug up the ESX 4.1 guide (the last version of ESX I was excited about), and saw this group setup is specifically how you would assign admin access in that initial implementation of AD support.

For me, another nothing burger; as a Linux person, I have never had my hosts connected to AD. I did run AD for a few years for vCenter 5, but once we switched to 6.5, went with LDAP auth against OpenLDAP, which we used in Linux already (and killed the Windows domain). ESXi hosts only use local auth. Almost never have to log in directly to a host anyway, most things are done through vCenter.

CrowdStrike Windows patchpocalypse could take weeks to fix, IT admins fear

Nate Amsden

I'd wager 85% of the time either auto updates are enabled, or updates are completely disabled (or perhaps the software version is obsolete and can't get updates). Which is worse? Most places likely lack the skills to perform proper testing; I know this from experience working for companies that have built their own software for 24 years now. Talking internal QA failures, which of course MS and obviously Crowdstrike have as well. The safest bet is to just delay the update by a bit and see if others get bit by it first.

Problem is, for security software like this I suspect 95%+ is updated from "the cloud" anyway (and likely updates are checked for multiple times a day). Likely large numbers of systems are running in isolated areas not connected to main corporate networks, so there is no easy way to slip in some kind of intermediary update management platform. Not to mention remote employees who may almost never log in to a VPN to do their work. I'm sure there are some systems that can work around those kinds of things, but it adds more cost and complexity, and more often than not the organizations don't want to pay for it (and may not have the staff skilled to handle it, adding to more costs). The same can be said for going "multi cloud" (in the truest sense of the phrase): extra cost, complexity. I recall at the last org I was at they used Sophos for their IT security endpoint solution. I recall at one point I asked Sophos a question about something and they said something along the lines of, "do you know you don't have ransomware protection enabled? you just need to go into this setting and click this check box". The IT staff at the company never paid attention to some of the most basic things, which I think is the norm rather than the exception. After the "network engineer" quit I found out that he had not applied any software updates to their ASA firewalls in ~4 years, and I counted 120+ known security vulnerabilities in them. He never put them on software support because "the devices never fail"...

For me, I feel sorry for those folks impacted; myself, I don't have any real suggestions. Glad I don't really have to deal with corporate IT endpoints, my work has been on internet facing Linux stuff for the past 21 years. Though I do deal with Windows servers as well, it's just a tiny fraction of my routine.

On my Windows 10 1809 LTSC VM that I use for work stuff I only apply updates manually, by using local security policy or whatever it's called to disable the auto updates (apparently disabling the Windows Update service in Win10 wasn't sufficient like it was in Win7, which I used till late 2022). I get updates till 2029 I believe, so I don't have to worry about Win11 for a while; by 2029 even Win11 should be at a decent point of stability. I haven't had a known security incident on my home systems since the [STONED] virus in the early 90s. Though I did have AV software flag some malware in some pirated game stuff I did back in the late 90s (none of it appeared to be actually harmful as far as I could tell).

That and I moved my org out of the cloud 12 years ago, so I don't have to worry about that aspect of things either, my co-lo runs smooth as butter in their super conservative configurations.

Release the hounds! Securing datacenters may soon need sniffer dogs

Nate Amsden

just about 21 years ago...

I visited a real datacenter for the first time: an AT&T datacenter in Lynnwood, WA. The facility is still there, just not operated by AT&T for at least a decade.

Unlike the facility I have used since 2011, this Lynnwood facility had no gates, just security cameras outside. Going inside, the guard checked my ID to see if I was on the list. If so, they gave me the key to our cage. From there I went into a man trap, where I put in my passcode I believe. Then I had my hand scanned. I learned much later that apparently it checked your weight too. Assuming you passed, the trap opened on the other side and you were free to go to your cage.

The weight checking thing was interesting as one of my coworkers was actually too heavy for it. So they had to bypass the mantrap for him. I was on a first name basis with the entire staff there so frequently I wasn't forced to use the man trap especially if I was bringing in equipment.

The more modern QTS datacenter I am familiar with also has multiple man traps for different parts of the 1M sq ft facility. Though there are no weight checks; the traps are regular rooms, maybe 64 sq feet. They used to check a fingerprint to get inside the man trap, then an iris scan to get out of it and onto the datacenter floor. Though the fingerprint scanners were really problematic, so I assume that is why they removed them. Also have a badge for it, no ID checks required if you have a badge. Well, there is at least one more sensitive area of the facility that has a man trap with what appears to be a security guard inside (the man trap door has a small window in that particular case). Twitter is in that facility, don't know if it's for them or some other customer.

Point is of course, having badge only access hasn't been a thing in proper datacenters in decades.

Porting the Windows 95 Start Menu to NT

Nate Amsden

preview

I had a friend who worked at MS at the time and he sent me an NT 3.51 Server CD back then, and that was my main OS until NT 4 came out (was tired of Win9x stability issues). I do recall on MS's ftp site they had an Explorer shell preview for NT 3, though I was always too worried about stability to install it. I was fond of NT4 for a while as well but eventually got tired of stability issues there too (it seemed like I had to re-install Windows about every 6 months to resolve them), and that was my real shift to Linux as my daily driver sometime in 1997, and Linux ever since. Windows certainly has far better stability now vs back then.

Though the workflow I adopted in the late 90s with 16 virtual desktops(on a single monitor) and edge flipping(first with AfterStep(at one point I had probably 60 virtual desktops with that), and past 15 years with Gnome and brightside) isn't compatible with how Windows(or Mac) works (I do use Windows on a regular basis inside of VMware workstation which runs on a dedicated virtual desktop).

AMD predicts future AI PCs will run 30B parameter models at 100 tokens per second

Nate Amsden

flashback

Seeing a 128-bit memory bus touted as leading edge stuff today, I guess, gave me a flashback to one of my early video cards, the Number 9 Imagine 128 Series 2 (from 1996), which I just checked to confirm they claimed at the time had a 128-bit memory bus with 800MB per second of memory throughput.

https://www.dosdays.co.uk/topics/Manufacturers/numbernine/imagine128_s2.php

"Number Nine believes that the Imagine 128, Imagine 128 Series 2 and Series 2e are the only true 128-bit graphics cards in the world, utilizing 128-bit technology in all three major subsystems -- the graphics processor, the internal processor bus and data path to graphics memory."

The one thing I think I recall about the card, was I think *technically* the memory bus was not 128-bit, but rather a dual ported 64-bit? I believe the 4MB VRAM card that I had actually had 8 x 512kB VRAM chips on it. But that is just my speculation, based on the number of memory chips the card had (which I wasn't aware of until I owned the card, if I recall right it had 4 memory chips on each side of the card). But their entire marketing campaign was claiming everything was 128-bit throughout.

I also had the original Imagine 128 as well. I loved those cards, just for the specs really, I had no idea what I was doing at the time they just seemed so cool. While the series 2 had OpenGL 3D acceleration in it, it was not game capable(and I didn't do any 3D CAD stuff). My first 3D card was a PowerVR PCI card(which had no VGA output), another thing that I thought was super cool at the time tech wise, I played a lot of the original Unreal with that combination. I remember using AcceleratedX on Linux to use the Number 9 cards (which I don't believe offered any 3D support, for 3D had to use windows)

I know it's unrelated, just felt an urge to write that once, old memory comes back..

Latest MySQL release is underwhelming, say some DB experts

Nate Amsden

Wonder how many care

I mean for most, myself included, for at least the past 5 maybe 6+ years "mysql" has meant MariaDB. I haven't formally used the "official" MySQL from Oracle in, I don't know, maybe a decade now? Prior to MariaDB the org I was with used Percona MySQL 5.5 (with Percona support; the Percona build had some of their own custom enhancements). It was only after Percona's costs went ballistic one year (up something like 800% YoY for an unlimited site support license, against which we had probably opened one ticket in the previous year) that we cancelled support, and a new DBA pushed us towards MariaDB.

I've never been a formal DBA, though I have used and managed "MySQL" off and on for about 20 years now(including replication, backups, Galera clusters, and custom monitoring), first production stuff I recall being I think on top of Red Hat Enterprise 2.1, or maybe 3.0. I have also been a shotgun DBA (with focus mainly on operations not things like queries/schema/etc) for Oracle for a few years too. I'll never forget the arguments I had with my manager back in 2007 regarding latch contention and bad application design while he and others were trying to blame Oracle for the outages.

I've dabbled a BIT in Postgres over the past year for a couple of different small apps. I don't doubt it's a fine DB, but wow, it is such a PITA to work with operationally after using MySQL for so long (such as creating a new DB, creating a user/pass, granting access to the DB over the network etc; for me today that's many web searches and trial & error). Even Oracle DB seems more friendly in some respects. But hey, as long as there is someone else to manage the DB I don't really care, I just ask them to figure it out. My job isn't DBA at the end of the day.
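
For my own future reference as much as anything, the whole dance boils down to a few statements; a minimal sketch with psycopg2 (names and passwords are examples, and you still need a pg_hba.conf entry plus listen_addresses to actually allow the network connection, which is the part MySQL never made me think about):

    # Minimal sketch: the "new DB + user + network grant" dance in Postgres.
    # Names and passwords are examples. Remote access additionally needs a
    # pg_hba.conf entry and listen_addresses set -- psycopg2 can't do that part.
    import psycopg2
    from psycopg2 import sql

    admin = psycopg2.connect(host="db1.example.com", dbname="postgres",
                             user="postgres", password="example-password")
    admin.autocommit = True      # CREATE DATABASE can't run inside a transaction
    cur = admin.cursor()

    cur.execute(sql.SQL("CREATE ROLE {} LOGIN PASSWORD %s").format(
        sql.Identifier("appuser")), ["example-password"])
    cur.execute(sql.SQL("CREATE DATABASE {} OWNER {}").format(
        sql.Identifier("appdb"), sql.Identifier("appuser")))

    # Schema-level privileges are granted from inside the new DB, not from 'postgres'
    appdb = psycopg2.connect(host="db1.example.com", dbname="appdb",
                             user="postgres", password="example-password")
    appdb.autocommit = True
    appdb.cursor().execute("GRANT ALL ON SCHEMA public TO appuser")

    admin.close()
    appdb.close()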

I don't doubt that if Postgres use in the orgs I work with picked up and I actually used it more frequently I'd become more used to it and it wouldn't be so bad. But you could also say the same thing about other tools, like Chef configuration management, which still annoys me significantly after 12 years of using it on a semi regular basis.

You had a year to patch this Veeam flaw – and now it's going to hurt some more

Nate Amsden

kind of confused

It sounds like the attackers managed to remote desktop into the server running veeam then exploit it locally? Or was this a remote exploit of Veeam itself?

To me if an intruder is able to RDP into the Veeam system, regardless of Veeam patching or not might as well call it game over.

For my most recent Veeam deployment last year, I decided to put all of the VMware, vCenter, Veeam, and backup (StoreOnce, and another Linux system running XFS with reflinks) systems on an isolated VLAN behind a restrictive firewall (restrictions inbound to the VLAN, none on outbound). On top of that, the Veeam server itself is not joined to the Windows domain (which I think is regarded as good practice), and it has Duo 2-factor auth on it as well (which is super easy to set up).

Not a Veeam expert by any means, technically that was my first ever install of Veeam though I have interacted with it in the past when it was setup by other people.

VMware license changes mean bare metal can make a comeback through 'devirtualization', says Gartner

Nate Amsden

Re: started doing this in early 2014

Certainly a fine personal preference. My preference is the opposite; maybe it is just that I'm stuck in my old ways, been doing this for about 27 years now, and I feel the systems I have run really well, so nothing is inspiring me to dramatically change things.

My stuff runs for a long time. A couple of years ago I happened to notice some errors in my personal mail server log from RBLs (Realtime Blackhole Lists), and thought it was ironic/funny/crazy that I had two RBLs in my postfix config that went offline 10+ years ago (yet other than log entries it wasn't causing any issues). My mail server config is fairly unchanged in about 20 years now. I still use a software package called sanitizer, which is still in my distribution, though it hasn't seen a software update since Jan 2006.

Of course regardless not everything can run in a docker-style container so there will be some sort of system needed anyway to manage that stuff.

Last year I finally resolved an annoying docker/kernel/networking issue that had caused major headaches for my org's developers for the past 4-5 years. Another person worked on the issue for hours and hours but found no real fix, just some potential workarounds that helped a bit. The issue was that some software downloading dependencies as part of its build process would error out with cryptic SSL/TLS errors. The light bulb didn't go off for me until I was unable to reproduce the error outside of a docker container. But believe me, I was still throwing shit at the wall to see what would stick (after about 5-6 hours of poking at it); I had no idea, never saw that before (and I'd like to think I've seen a lot in my time). In the end, tuning the "net.core.rmem_max" and "net.core.wmem_max" settings (from the default of 212992 -> 12582912) in the host kernel resolved the issue. Been using Linux since 1996 and never had to touch those settings to resolve a problem (have seen them mentioned many times over the years regarding tuning, but this was a very simple workload on a 1 CPU server). Even now I have no idea why that fixed it (at the time I was just tweaking kernel options and seeing if they had any effect)... but wow, what an annoying issue.
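
For anyone hitting similar weirdness, the actual change was just two sysctls on the container host. A small sketch of applying them programmatically; the values are the ones that happened to work for me, not a general tuning recommendation, and the sysctl.d filename is just an example:

    # Small sketch: bump the socket buffer ceilings that fixed the in-container
    # TLS download failures for me. Values are what worked in my case, not a
    # general tuning recommendation. Run as root on the container *host*.
    SETTINGS = {
        "net.core.rmem_max": 12582912,   # default was 212992
        "net.core.wmem_max": 12582912,
    }

    for key, value in SETTINGS.items():
        path = "/proc/sys/" + key.replace(".", "/")
        with open(path, "w") as f:            # runtime change, lost on reboot
            f.write(str(value))
        print(f"{key} = {value}")

    # To persist across reboots, drop the same keys into /etc/sysctl.d/
    # (example filename):
    with open("/etc/sysctl.d/90-docker-sockbuf.conf", "w") as f:
        for key, value in SETTINGS.items():
            f.write(f"{key} = {value}\n")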

I don't even know how to classify this issue. I want to say it was a docker problem since I could not reproduce it outside of a docker container, but I didn't have to change docker to fix it. I don't want to say it is a networking issue if the issue occurs before the network packet even leaves the network interface. Or is it a kernel problem because I had to tweak the kernel? But if not using docker I would not have had to tweak the kernel.

Nate Amsden

started doing this in early 2014

(VMware customer since 1999.) While investigating ways of improving performance and cost for the org's stateless web applications I decided on LXC on bare metal for them. My more recent LXC servers go a bit further in that they have Fibre Channel cards and boot from SAN; my original systems just used internal storage. I still only use it for stateless systems, that is, systems that can fail and I don't really care. I have read that newer version(s) of LXC and/or LXD allow for more fancy things like live migration in some cases, but I have never looked into that. Management of the systems is almost identical to VMs: everything is configured via Chef, and they all run the same kind of services as the VMs. You wouldn't know you were in a container unless you were really poking around. Provisioning is fairly similar as well (as is applying OS updates), mostly custom scripts written by me which have been evolving bit by bit since around 2007. Fortunately drivers haven't been an issue at all on the systems I have. I recall it being a real PITA back in the early-mid 2000s with drivers on bare metal Linux, especially e1000* and in some cases SATA drivers too (mainly on HP Proliants). I spent tons of hours finding/compiling drivers and inserting them into kickstart initrds which were then PXE booted. Only time in my life I used "cpio".
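
My provisioning is all home-grown scripts plus Chef, so don't read this as what I actually run, but for a flavour of how little there is to a basic LXC container on bare metal, here's a sketch using the python3-lxc bindings (container name, distro, release and arch are just examples):

    # Illustrative sketch only (my real provisioning is custom scripts + Chef):
    # create and start a basic container with the python3-lxc bindings.
    # Container name and distro/release/arch are examples. Run as root.
    import lxc

    c = lxc.Container("web01")                     # example name
    if not c.defined:
        ok = c.create("download", lxc.LXC_CREATE_QUIET,
                      {"dist": "ubuntu", "release": "jammy", "arch": "amd64"})
        if not ok:
            raise SystemExit("container creation failed")

    c.start()
    c.wait("RUNNING", timeout=30)
    print("state:", c.state, "ips:", c.get_ips(timeout=30))

From there Chef (or whatever config management) treats it like any other host, which is why the containers and VMs look the same to everyone else.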

I adopted LXC for my main server at home back in ~2017 as well, which runs 7 containers for various things, but I still have VMware at my personal co-lo with 3 small hosts there with a couple dozen VMs on local storage. Provisioning for home stuff and management there is entirely manual, no fancy configuration management.

I do plan to migrate some legacy MSSQL Enterprise servers to physical hardware as well soon, as the org is opting not to renew Software Assurance, so licensing costs for a VM environment will go way up (SA grants the ability to license just the CPUs for the VMs running SQL, regardless of the number of CPUs in the VM environment, but you lose that when you stop paying for SA); simpler just to consolidate onto a pair of physical servers in a shared nothing cluster. I've never tried boot from SAN with Windows before but from what I read it should work fine (yes, I like boot from SAN; in this case each server will be connected to a different storage array).

I've never personally been interested in docker style stuff so have never gone that route (I do have to interact with docker on occasion for a few things and it's always annoying). A previous org played with Kubernetes for a couple of years at least and it was a practical nightmare as far as I was concerned anyway; I'm sure it has its use cases, but for 95% of orgs it's way overkill and overly complex.

Lenovo claims Dell has run off the VxRails and can't sell hyperconverged VMware

Nate Amsden

as of earlier this year Dell could still sell VxRail

I had a call with a Dell rep (as a small VxRail customer), and we talked about the VMware situation, and at least as of that time (this was AFTER Dell removed VMware from being an option on their servers on dell.com), the rep specifically said that does not affect VxRail. But it's certainly possible something else expired (the contract mentioned) or changed in the months since. Not that I really care either way; I am not fond of VxRail and am in the process of retiring that particular system (in part because more than half the cluster went EOL a year ago hardware wise anyway; Dell says they can still support the hardware, but from a VxRail perspective it is end of life). I put those EOL systems on 3rd party HW-only support last year just in case, though no failures since I joined the company.