* Posts by Nate Amsden

2095 posts • joined 19 Jun 2007

Devuan Beowulf 3.0 release continues to resist the Debian fork's Grendel – systemd

Nate Amsden

I like sysvinit

I had used Debian since v2.0 in 1998, and Slackware before that, on my personal servers. I switched to Devuan on servers (Mint on my laptops since Ubuntu 10.04 went EOL) in the past year or two. Fortunately it was a fairly seamless upgrade: no re-installs, just an apt-get upgrade to Devuan.

Most of my issues with systemd could be easily addressed if it would just co-exist nicely with init scripts. It sort of does, but not nearly far enough. I would be happy with systemd if, when it ran an init script, it just went into "dumb" mode and ran it like any other script. Don't try to be smart, don't try to keep track of state, no timeouts, none of that stuff: just run it.
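For what it's worth, you can push systemd part of the way there by hand. A sketch of a unit (the service name and script path are hypothetical) that hands an existing init script to systemd with the timeouts and babysitting mostly switched off:

```ini
# /etc/systemd/system/legacy-foo.service  (hypothetical name and script)
[Unit]
Description=Run the old /etc/init.d/foo script with minimal systemd smarts

[Service]
Type=oneshot
RemainAfterExit=yes
# No start/stop deadlines: let the script take as long as it takes
TimeoutStartSec=infinity
TimeoutStopSec=infinity
ExecStart=/etc/init.d/foo start
ExecStop=/etc/init.d/foo stop
```

It isn't true "dumb" mode (systemd still records the unit's state), but it at least stops the timeout kills and lets the script do its own thing.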

cmd.exe is dead, long live PowerShell: Microsoft leads aged command-line interpreter out into 'maintenance mode'

Nate Amsden

I miss 4NT

and 4DOS as well.

PowerShell seems so slow; I have seen it take 30+ seconds to launch or to execute a command (especially after initial login to a server). I'm sure PowerShell is a really great tool, though as someone who grew up mostly on Linux (since about 1996 anyway), I am more used to (and so prefer) shells and commands using strings rather than objects. With the registry and other things on Windows, I do realize strings are harder to work with there than with the text-file configs on Linux.

I have used bash from Cygwin with rxvt on Windows for 15+ years now, I think (and still use it almost daily). Unfortunately, a few years ago the Cygwin team retired the native rxvt that I rely on (I had to go to the developers to figure out where it went, as there was no mention of losing it at the time), so I'm using an old Cygwin while I can (which probably doesn't work on the latest Windows). The alternative they offered was running the full X11 rxvt instead of the native one, which isn't what I wanted.

One malicious MMS is all it takes to pwn a Samsung smartphone: Bug squashed amid Android patch batch

Nate Amsden

turn off auto mms download

I did this after the first Android media bug came out many years ago. There have been tons since and probably will be tons more in the future. It doesn't eliminate the vulnerability of course, but it allows you to ignore random MMS from people you don't know, which reduces the likelihood of getting hit by probably 99.999%.

Kind of shocked at this point there isn't more protection or sandbox or something around the messages app. It's probably been at least 4 to 5 years since that first one made big news.

Adobe’s Flash fade may force vCenter upgrades unless you run dodgy browsers

Nate Amsden

Re: This is why

Yes, they should reverse course and go back to the .NET client. The Flash and HTML clients are both incredible downgrades compared to the .NET client. I remember how I hated the reliance upon Windows when I first started using ESX 3 many years ago (as a Linux-on-the-desktop person since the 90s). But then I learned things could be worse, in the form of the Flash, and even the HTML, clients.

I stayed on 5.5 till a bit past EOL; one of the reasons was the .NET client (I know that works in 6.0 as well), so I haven't had the pleasure of using the Flash client too much. The HTML client, while mostly better than the Flash one (but still missing bits the Flash one can do, in 6.7 at least), is still terribly slow compared to the thick client. I ran my .NET thick client over a XenApp connection from a remote facility. It was so fast and easy to use. Even the folks on my team who used Macs could use it easily (my main system is Linux but I do run a Windows VM for work stuff). I had a cheap version of XenApp (Fundamentals) which Citrix stopped supporting a long time ago; it worked great, a 5-user license I think, totally self contained and simple.

Same can be said for Citrix Netscaler. They swore up and down to me years ago that their new HTML client was going to be much better than the Java one, and that I'd love it after an adjustment period. Here we are 5 years later and the situation is the same: I want the old client (SO MUCH FASTER). Since I can't get it, most of my Netscaler admin work has moved to the CLI, because the HTML interface is so slow. I still have to constantly reference the raw config to figure out the syntax, since I don't mess with the Netscaler every day (but still more than everyone else in the org combined). Back in the day I used to run an older Firefox with Java 1.6 for the Netscaler (9.3) Java UI, again on top of XenApp; it worked really well. Managing Netscaler on the CLI probably takes 2-3x longer for many things than the Java UI did, but is still probably faster than the HTML UI (or at least less frustrating).

At least with Flash on VMware, Flash was pretty stable (as in, it didn't change that much) for a long time. HTML technologies are changing quite fast, making compatibility a bigger issue. For Firefox users, the ongoing compatibility issues between Firefox and vCenter Update Manager are one such example.

Doing stuff in the HTML vCenter is probably, on average, 35% of the speed of doing things in the .NET client, the way I had it set up anyway. That's better than Flash, which is probably 25% of the speed of .NET.

That critical VMware vuln allowed anyone on your network to create new admin users, no creds needed

Nate Amsden

More to the point, it seems to only affect 6.7 systems that were upgraded from earlier versions (which is probably many of them, mine included); it doesn't affect new installations.

https://kb.vmware.com/s/article/78543

Oh ... Fudge This Pandemic! Google walks back on decision to switch off FTP in Chrome 81

Nate Amsden

they seem to say the old code is hard to maintain

But I just keep thinking that generic plain-text FTP, which is probably 99% of FTP sites out there, hasn't changed much at all in probably 20+ years, so there shouldn't really be much of anything to maintain.

I don't use ftp too often, and generally when I do I use ncftp.

Checking ncftp's changelog: they released version 3.0.0 in March 2000 and are now at 3.2.6 (from late 2016). For a dedicated FTP client, that's just a point as to how little FTP has changed over the past 20 years.

Samsung's Galaxy S7 line has had a good run with four years of security updates – but you'll want to trade yours in now

Nate Amsden

huge blow?

unlikely.

I ran my Galaxy Note 3 up until less than a year ago (as my daily driver). Still on Android 4.4.x too (Android 5.x was a downgrade in my opinion). It was my first Android device; prior to that I used WebOS. I VERY reluctantly switched to a Galaxy S8 Active (still new) last summer, as the phone I thought sucked "the least" of any phone on the market.

I have two Note 3s and one Note 4.

I'm taking every precaution I can to try to ensure it lasts as long as it can. That includes having a backup phone, having replacement OEM batteries (though I dread the thought of having to get the battery replaced), and investing in two devices called "Chargies", which sync with my phone over Bluetooth to ensure the device doesn't go beyond a given charge level (for me that level is 79%) while it is connected to a charger with the Chargie attached. Add Accubattery to that list as well.

I happily replaced the batteries in my Note 3s every year (one of which is still in daily use, though it runs Android 5 and is really only used to view Slack), because, well, it was easy. I never needed user-replaceable batteries on a daily basis, but being able to safely replace them myself annually was a big selling point (it keeps things fresh).

Plastic back, wireless charging(I have been wirelessly charging since the original Palm Pre), flat screen were the biggest selling points for me on the S8 active. The bigger battery is nice too as that implies the life will be longer.

Currently on Android 9, which is a downgrade over 8, in fact at least for me every newer release of Android takes away more and more things and is a downgrade. If I wanted an iPhone I'd buy an iPhone.

Other than faster performance the S8 Active doesn't really do much that my Note 3 didn't do already(and in many cases better).

I'd happily drop $2k for an up to date Note 3 with plastic back, wireless charging, removable battery, flat screen, with Android 4.4.x look & feel & control.

I'd also happily pay a subscription fee to get those fancy security updates for the older OSs, so I'm not forced into upgrading to a major version of Android to get fixes. But that isn't available either. Given the history of things, I'll take no fixes and a more usable phone 100000000000000x over the latest updates.

I am also careful about what I use my mobile devices for: no social media, no banking, and the only purchases I make are with virtual credit cards generated by my computer. I have also had the Android auto-download of MMS disabled for years, and I do not open MMS from people I don't know (which isn't a guarantee of anything, but goes some way to improving security regarding the exploits in the Android media framework). Not perfect, but at the same time I am not aware of any security incidents on my mobile devices ever (same goes for my desktop and laptop devices, mostly running Linux).

Things I miss about the Note 3 include the stylus, user-replaceable batteries, the IR blaster, the Android 4.4 experience, having more control over the OS (e.g. being able to view CPU usage per task and kill stuff, etc), and MHL (not sure if the S8 Active does that or not). The IR blaster and MHL I only use while traveling, and I still bring at least one Note 3 and one Note 4 when I travel, for connecting to hotel TVs for media viewing (they have big SD cards). I have yet to use the headphone jack on my S8 Active; the only time I've used my phone with headphones was when flying, and I used my Note 3/Note 4 for that. I do not own any Bluetooth headphones.

I'm sure most of the Android fans will hate this post, so I'm expecting lots of down votes. I prefer freedom myself: if you WANT the latest you should be able to get it, and if you DON'T want it, it should not be forced on you. I can't put into words how mad I was when my S8 Active upgraded to Android 9 from 8. I went to great lengths to prevent OS upgrades, and successfully blocked my Note 3 from upgrading to Android 5 for 3-4+ years. The AT&T website even specifically said you had to be connected to wifi to get the upgrade; I was traveling at the time, not connected to wifi, and it downloaded the upgrade anyway somehow. On my home wifi I have the AT&T and Samsung servers blocked at the DNS level, so any attempts to upgrade will fail instantly.

Hoping this S8 active lasts as long as the Note 3 did(it still works fine now just a bit slow).

California emits latest layoff statistics. March's numbers are ugly. It's 19,000 total, including many in tech

Nate Amsden

unemployment systems overloaded

I keep reading reports about how the unemployment systems are overloaded, so the number probably could have been much higher had they been able to process the volume of transactions. To me 19k seems way low; I would have expected easily over 50k. I also find it pretty shocking how many companies are trying to justify themselves as "essential" (GameStop was one of the more talked-about ones, in tech anyway).

Is that a typo? Oh, it's not a typo. Ampere really is touting an 80-core 64-bit 7nm Arm server processor dubbed Altra

Nate Amsden

Re: Late

Wasn't Cloudflare super excited about Qualcomm's ARM CPU? Then that CPU just got dropped entirely.

Kinda curious what might make these ARM server CPUs better, when it seems every ARM server CPU so far has failed (probably half a dozen big attempts? Even AMD was all-in on ARM for servers at one point).

Seems ARM has about as much trouble scaling up (200W+ server CPUs?) as x86 has scaling down.

Nokia said to be considering sale or merger as profits tank

Nate Amsden

How can 5G be displaced so quickly

It doesn't make much sense to me outside of massive equipment failures; the tech is still so new. I also find it consistently sad that I see so many comments here touting Chinese vendors for 5G (specifically for 5G infrastructure, most often referring to European deployments) when you have two 5G companies in Europe (possibly more). It would be nice if they saw (a lot) more local support.

Nate Amsden

Re: Kirk Douglas moment

Maybe because they have always had more than a phone division?

https://en.wikipedia.org/wiki/Nokia

"The company has operated in various industries over the past 150 years. It was founded as a pulp mill and had long been associated with rubber and cables, but since the 1990s has focused on large-scale telecommunications infrastructures, technology development, and licensing."

Wi-Fi of more than a billion PCs, phones, gadgets can be snooped on. But you're using HTTPS, SSH, VPNs... right?

Nate Amsden

Re: "MitM attacks on unencrypted network traffic do happen"

My home wifi broadcasts its SSID. I wanted to disable that, but then read it causes the clients to broadcast instead, at least when they are not connected. I do have MAC filtering enabled. I know it's not difficult to spoof MACs, but it helps with the casual case of someone trying to connect, on top of an OK password (16 letters, 1 number, 1 special char; the rest is average complexity).

Nate Amsden

Re: "MitM attacks on unencrypted network traffic do happen"

True, but even more unlikely. Last I recall there were over 50 SSIDs broadcasting within range of my laptop. I'll add that my home wifi is restricted, similar to a DMZ, with no access to my internal network. I use a nice Asus AP in 'AP' mode which hangs off a port on my OpenBSD firewall, which handles DNS, DHCP, and general network routing.

99.9% of the time my laptop, where I do the bulk of my computing, sits on my desk connected to ethernet. I do make use of a few powerline ethernet adapters that are on my internal network. I feel those are less vulnerable than wifi, but not perfect. They have some limited encryption, but more importantly are protected to some degree since the signal has a hard time crossing an electrical breaker. Add in the unlikely scenario that there is an attacker at all, and I feel pretty safe. Though the thought of locking that network segment down more has crossed my mind.

I'm sure my setup is overkill; I don't have much, if anything, worth trying to steal, so the paranoia is not justified. BUT as a systems and network person for over 20 years, it's not difficult to set up and it runs without trouble for years at a time.

(Posted from my phone on home wifi, about to get out of bed, 6:30am here)

Nate Amsden

Re: "MitM attacks on unencrypted network traffic do happen"

It really seems like the poster is implying the concept is similar and the end result is the same: whether you are using an unencrypted wifi connection or you exploit something that allows you to decrypt the packets, you get the data either way. The likelihood of something like that happening is very low. You should probably be more concerned about connecting to public wifi in general and the infrastructure in place there (the stuff that sees the traffic after it is terminated on the AP, whatever wifi encryption is used, etc).

I go out of my way to avoid public wifi in general, out of just a little paranoia. I'll usually tether to my phone at hotels etc., even if it means a slower experience, unless for whatever reason that is completely unusable (signal-strength wise). I don't do any media streaming, so generally my network data usage is quite low.

Admins beware! Microsoft gives heads-up for 'disruptive' changes to authentication in Office 365 email service

Nate Amsden

People keep saying this but fail to disclose the blast radius of such issues. An org that has a downed Exchange system affects only that org; it doesn't affect dozens, hundreds, or in some cases thousands of orgs.

MS has had a large number of outages in their services over recent years.

One thing that pissed me off most about Office 365 recently is a bug in Outlook Web Access. If I sent plain-text emails (which I did until I gave up on getting the bug fixed), the OWA client would merge all the lines together, making things unreadable in many cases. It looked fine from the "Sent" folder, but received emails were jumbled up. Super easy to reproduce. The IT team informed MS, and we waited for a fix. Meanwhile I was fine staying on the "legacy" UI, which had no such problem. (And yes, the majority of my email is done from Linux on OWA; I do run Outlook 2010 on a Windows VM which is hooked up as well, but that gets a minority of use, mainly for better searching. And yes, I know 2010 will break later this year.)

Fast forward a few months and they turn off the "legacy" UI, and the bug still isn't fixed. So I have to switch to html email.

Similar issues with "cloud" Confluence from Atlassian, they are dramatically changing their UI breaking TONS of things and there is no recourse. It's quite sad the state of software these days and it's getting worse.

Having on prem, or at least self managed, would allow you to wait until you are ready to upgrade.

Nate Amsden

Re: Hmmmm....

One bonus (depending on your point of view) of using the Outlook app over the native client is that the phone likely cannot be remote-wiped by the admins. I didn't fear the admins at my org doing that to me; it was more a fear of a software bug or something tripping that could cause it.

I used Office 365 mail/calendar natively on Android (4.4) up until about July of last year. Newer phone, newer Android, and I've been using the Outlook app since. It's not as nice not being able to use the native calendar app to view things (extra clicks to get to the calendar), but having a much lower-privileged app, versus all the insane permissions the built-in stuff got, is worth the trade-off for me personally. I mean, it's one of the least annoying things about using the newer Android system.

Firefox, you know you tapped Cloudflare for DNS-over-HTTPS? In January, it briefly knackered two root servers at the heart of the internet

Nate Amsden

Re: But

More likely, at least for non-technical users, is that they would switch away from Firefox because it's not working and resort to another browser: IE, Safari, or whatever is default in their OS.

Maybe Firefox could default to evenly distributing the load between all of the DoH providers they support, with the option of using only one if you prefer. Or at least do automatic failover between them.

Not that I intend to use this feature in any case so it wouldn't affect me personally. I've run my own DNS for over 20 years now, and if I am out and about I connect with openvpn to my server at a co-lo facility and proxy through that.

Firefox now defaults to DNS-over-HTTPS for US netizens and some are dischuffed about this

Nate Amsden

the problem is centralization. If your ISP's DNS goes down it only affects the ISP's customers

https://www.theregister.co.uk/2019/07/02/cloudflare_down/

Cloudflare has had a good chunk of outages over the years. Or at least their outages make news here more often than those of any other CDN provider I can recall, by a large margin; I haven't tried to get stats so my impression may be incorrect.

Having DNS ride on top of HTTPS makes things worse outage-wise, I'd expect, as that is a pretty common data path. Cloudflare's CPU flare-up last year, I don't believe, had any impact on their regular UDP/53 DNS hosting services (I was in talks with them at around that time about using them as a DNS provider).

I've run my own DNS both recursive(internal) as well as authoritative for my domains(external) since about 1998.
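Running your own recursive resolver really is a small lift. A minimal sketch of a BIND named.conf options block for a LAN-only recursive resolver (the directory path and network ranges are made-up examples):

```text
// named.conf fragment: LAN-only recursive resolver (example addresses)
options {
    directory "/var/cache/bind";
    recursion yes;
    listen-on { 127.0.0.1; 192.168.1.1; };
    allow-recursion { 127.0.0.0/8; 192.168.1.0/24; };
    allow-query { 127.0.0.0/8; 192.168.1.0/24; };
};
```

The access-control lines matter: an open recursive resolver on the internet is an amplification-attack liability, so restrict it to your own networks.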

I'd be more open to DNS over HTTPS if there were actually a number of DoH resolvers people could run on their own equipment. Last I checked I hadn't seen any (in one recent forum thread on the topic here, someone pointed me to a product, but it ended up being a simple proxy to an already existing DoH provider, not capable of serving DoH from, say, a local BIND installation).

Perhaps that situation has changed in recent months I am not sure.

I do fear the number of end-user issues that will be encountered as a result of split DNS on VPN systems, where resolving some host externally results in a different address than internally, and that behavior is intentional. I did something like this to block access to vulnerable CMS systems several years ago: if you tried to access them externally, you hit an address where the load balancer inspected the request and only allowed very specific URL patterns through; if you wanted to manage the CMS, you had to be on the VPN. If you tried to manage it from outside, you got a big warning page saying you needed to be on the VPN.

Another similar situation came up recently, where I adjusted internal DNS during an extended outage so that users on the VPN could connect to the application while users on the internet got a maintenance page. An alternative solution would have been hosts-file entries for internal users, but that is even more complicated. In fact, I had to help one user deal with hosts-file entries from another similar event 2+ years ago that they never removed, which were interfering with the new hosts-file entries they were trying to use (we resorted to hosts-file entries for that user after all attempts to use DNS failed; only later did they disclose they had other, obsolete entries already in place, so I just had them remove them all).

I'd wager in both of these cases Firefox's behavior (and probably soon Chrome's and others') will cause problems. The article mentions being able to push a policy; well, good luck with that outside of very tightly controlled orgs (I've never worked at such an org in my 22-year career). Just another annoying thing to have to keep in mind when a user has a DNS-related issue.

It's already complex enough to get users to clear their DNS cache and/or browser DNS cache, or restart the browser, to get around DNS caching problems; this will just make it much worse.

oh well, pales in comparison to the headache that will be the new SSL expiry issues, ugh.

Apple drops a bomb on long-life HTTPS certificates: Safari to snub new security certs valid for more than 13 months

Nate Amsden

Re: There is a way around compromised certificates

https://www.engadget.com/2020/02/03/microsoft-teams-expired-certificate/

Not even 1 month ago

Nate Amsden

One can perhaps "hope" it is a certificate error. Having gone through a similar thing with Chrome recently, where it refuses to use certs valid beyond 897 days (I think), I believe it is, and it's not a simple error: it cannot be bypassed by the user (unlike, say, a self-signed cert), which is even more maddening. There should at least be a means to disable this check for internal CAs, some flag in the CA cert itself or something. I'm less upset about commercial internet certs, of which I manage about 100.
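A quick way to check whether a given cert will trip these lifetime limits is to ask openssl for its validity window. A sketch that generates a throwaway self-signed cert valid for 900 days (the CN is a made-up placeholder) and prints its lifetime in days:

```shell
# Make a throwaway self-signed cert valid for 900 days (hypothetical CN)
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -days 900 -subj "/CN=internal.example.test" 2>/dev/null

# Pull the notBefore/notAfter dates and compute the lifetime in days
start=$(date -d "$(openssl x509 -in cert.pem -noout -startdate | cut -d= -f2)" +%s)
end=$(date -d "$(openssl x509 -in cert.pem -noout -enddate | cut -d= -f2)" +%s)
echo "$(( (end - start) / 86400 )) days"
```

Run against a real cert (skip the generation step), anything over the browser's cutoff is going to get rejected regardless of how valid the chain is. Note `date -d` is GNU date syntax.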

Nate Amsden

Re: It's optional

CAs don't love this; they were fighting against the earlier proposals to reduce cert lifetimes.

App vulnerabilities are so much more of an issue than SSL problems. This really does little to improve security. It does a lot to frustrate operators, though.

Nate Amsden

Re: It's optional

This will just encourage Google and Firefox to do the same; everyone knows they want to. Such a pain in the ass. As usual this will impact internal cert authorities as well. The internet continues to go down the toilet. Sigh.

All that Samsung users found on UK website after weird Find my Mobile push notification was... other people's details

Nate Amsden

never logged into samsung

My S8 Active got it as well; I noticed it at around 4am Pacific time in California. My phone has never logged into Samsung's services, their authentication systems are blocked at a DNS level (account.samsung.com), and I have verified Find My Phone was disabled as well (I have never had it enabled on any device). So I can only assume the phones themselves poll the servers occasionally for this kind of thing.
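For anyone wanting to do the same kind of DNS-level blocking, it's a one-liner in dnsmasq (other resolvers like BIND or unbound can do the equivalent with a local zone); this entry sinkholes the hostname mentioned above and everything under it:

```text
# dnsmasq.conf: answer 0.0.0.0 for this name and all subdomains
address=/account.samsung.com/0.0.0.0
```

This only works, of course, for devices that actually use your resolver and aren't hard-coded to an outside DNS server.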

Just another reminder of the lack of control most users have over their own devices. iOS is worse regarding lack of control, and Google is flying as fast as they can to mirror that experience on Android. For me, Android 4.x was the best (I ran it for 6 years on a Note 3, my first Android device; before that I used WebOS).

Microsoft ups the ante with fix-fixing patch that leaves some Windows Server 2008 machines unable to boot

Nate Amsden

Why did MS stop doing service packs?

I mean, they still have them, but they seem to be really rare (and serve more as feature packs). I remember back in the NT days a service pack was typically just a rolled-up collection of fixes in one big file. I think you could even extract the big file into the individual fixes if you wanted (never tried that myself).

I know that a few years ago(?) MS changed their updating scheme to provide more roll-up style patches, where everything is included in one patch, at least for that given patch cycle, in theory.

Since Win7 went EOL recently, in the past week I have been going around my home Windows systems and VMs that run Windows 7 (seven of them, I believe; the only other Windows OS I have is XP on a dual-boot laptop for games that I haven't touched in 18 months). Most of these systems get very minimal use and probably spend 90% of their time turned off, so they don't get patches often. Probably 2-3 hadn't seen a patch in over 2 years.

I patched them all, but on at least one occasion I was quite surprised that the OS felt it was up to date when I knew absolutely it was not. It said there were no updates to apply, even after forcing it to check again. Only after I manually deployed the servicing stack update (from March 2019, I think) did the system realize that, hey, there were more patches after all. There are other patches that fail to install if the servicing stack update is not applied (why the servicing stack update isn't applied automatically in advance, or included with the patches in question, I don't know, since they don't install without it). At the end of the day it didn't matter much whether these got their patches, as the risk is so low (I haven't had a known security incident on my home systems since maybe 1993). The "OCD" in me just wanted to get them their last patches.

I just feel it would have been nice for MS to release a service pack that one could download as one big EXE that had EVERYTHING (for that OS): something that could take a Windows 7 system from any patch level to the most current. Maybe the patch would be 5GB in size, I don't know. But it would be simple. You wouldn't have to wait hours and hours sometimes for Windows Update to look for updates, or track down forum posts to figure out why a weird update error is occurring, or, in cases like the above, why the system says there are no updates when you know there are. The same of course goes for XP and all of their other OSs. It would be nice if there were just a service pack released every 6 months that had everything, then one more at the end of the life cycle. Make the service pack easily searchable and downloadable from Microsoft's site.

Another thing this big service pack would eliminate is the need to install patches, reboot, then look for new patches, install, and reboot again. I wish Windows Update would just download everything it needs in one go, even if it requires multiple reboots to install. I did in fact see more than one Windows 7 machine reboot twice during a patch run, so they have the ability to do that: install everything in one go and just reboot as many times as needed to get it all in one cycle.

In my recent patching I found it odd how Windows Update would say it wanted to install, for example, the April 2019 rollup security patch, and then the next rollup it wanted to install was from Jan 2020. I saw that (the months may have been off) on multiple occasions. Did it actually get all of the patches? It claims to have them all, but I really don't know how to tell for sure.

Linux has been my desktop/server of choice for almost everything since 1997, so windows is of course not my primary OS but I do still use it daily(in VMs).

Not sure when I'll consider moving to Windows 10 again; I have no Win10 systems today, but have poked around with it a bit in the past. I'd be more open to Win10 if I could get a build that would just stay stable for 5 years (they have the long-term support builds, but as far as I know those aren't available for consumers).

Good: IT admins scrambled to patch 80 per cent of public-facing Citrix boxes to close nightmare hijack hole

Nate Amsden

Well I think it's obvious vendors would never reach into a customer's site and patch themselves without the customer agreeing to it in advance.

For a serious issue, the most they could do is release the fix to anyone who has a valid serial number, regardless of support status (or end-of-life status), even if the fix came with disclaimers that it hasn't been tested on a given platform (but "should" work), unless the platform is so old that it can't work. In Netscaler's case I would assume that means really old kit, perhaps 32-bit platforms, or before they went multi-core (the "nc" series); I'm not sure when that was, before I was a customer anyway, which was late 2011.

Last year I tried to get some newer code for Cisco ASAs that a previous network engineer thought there was no need to keep support on, since they "hardly ever fail"; they hadn't had a patch in 4-5 years (I checked: more than 100 security advisories had been released since). Unfortunately they were end of life, and while I did not need "support" and just wanted to download the code that was already there (and would have been happy to pay for it), Cisco said no. Fortunately I was able to replace the ASAs with another product not too long after.

Nate Amsden

Curious what kind of issue came up that required things to be rebuilt? I wouldn't consider myself a Netscaler expert, but I have managed Netscalers for web and MySQL traffic since 2012 (before that I used F5 for many years). Code rollback should be very simple. (Since I have HA, I do all of my upgrades via the CLI; I haven't tried the GUI to upgrade since I set up my first Netscaler.)

I was on the 9.3 code base for many years, well past end of support. In the early days of using it, it seemed every new release broke something minor; fortunately I don't recall anything critical ever being broken (enough to want to roll back anyway, if my memory is right at least). I did hold off on upgrading to 11.x from 9.x due to a MySQL query routing bug that took about 2 years to track down and get resolved. I skipped 10.x entirely, went straight to 11.0 and now 11.1, with no need to go higher as long as 11.1 is supported. I haven't run into any upgrade issues in probably 3-4 years now.

I have not used them for anything like RDP or virtual desktops; strictly http/https, DNS, layer 4 stuff, NAT, MySQL load balancing, and VPN (I'm the only one in the company that still uses the Citrix VPN, everyone else uses Pulse Secure). I originally went for Citrix instead of F5 because I was curious about the integrated VPN and the MySQL load balancing (neither of which F5 had built into LTM at the time; F5 FirePass was a thing, but not part of the load balancer), and I had heard good things about Netscaler in general so wanted to give it a try.

The most annoying issues with Netscaler over the years, for me anyway, were that MySQL query issue ("the NS is not forwarding the PREPARE statement which was sent after the CLOSE statement by the client") and problems with the Mac Access Gateway VPN client. We got Pulse Secure after at least 18 months of support tickets on the Mac VPN client, when we finally discovered design flaws on the client side that really wouldn't get fixed anytime soon (if ever). It worked fine from Windows, though.

I had to replace 2 Netscaler SSDs in the past year, which was a bit annoying as well; the spinning disks in the older platform continue to work fine after almost 8 years.

Nate Amsden

Wonder how many don't even have support

If you don't have a support contract you can't get the fix. There's also an increased possibility that, without a support contract, Citrix won't know to try to contact you about the issue. I bet hundreds of those devices at least are also end of life, so they probably can't get a support contract even if they wanted one. I have a pair of fully functional 7500s that are well past EOL, though I'm sure they would happily run the latest 11.1 code base at least; they have been retired for 18 months because I couldn't find a good purpose for them. Those 7500s were purchased in 2011 and literally wouldn't break a sweat running full production load in 2020 (the current Netscalers I have run at 5% average memory and CPU; I'd wager the 7500s would run the same traffic at 10-20% load, tops).

For netscalers at least the code really isn't platform dependent so if you have a support contract for some other netscaler then you can download any version for any platform.

But I'm sure lots of folks out there have these appliances who don't have support, and some subset of them probably don't care enough to do anything about that.

RIP FTP? File Transfer Protocol switched off by default in Chrome 80

Nate Amsden

Re: File Transfer Potocol

SMB and NFS certainly can't move data at a similar rate over the internet. SMB and NFS are made for local connections, and in general are more vulnerable than FTP.

Don't get me wrong I love NFS(v3 only) for file sharing locally.

Nate Amsden

Re: File Transfer Potocol

FTP is still often easier for uploading files (a proper FTP client, not a web browser). The web is easy for downloading stuff, but finding a good web application to support uploads, setting it up, etc is probably more complicated than FTP. I have used ownCloud for this purpose, but as far as I know even now the community ownCloud has no auditing abilities, so there is no way to see which user is uploading or downloading (the web logs just show the URLs, not the username, which is internal to ownCloud). It was (and probably still is) an option in their enterprise offering though.

FTP is easier still for automated uploads or even automated downloads. I have loved tools like ncftpget, ncftpls, etc for those purposes. Managing uploads and downloads through web apps (javascript etc) via CLI scripts is well beyond my skill set.

sftp works too, though it is usually more complex than FTP. I set up automated sftp downloads for PayPal reports sometime last year and it took quite a bit of effort (relative to the ncftp tools); it didn't help that PayPal's sftp servers would reliably fail at random (at least 25% of the time), and they didn't support key authentication (maybe they do now), so I had to have the script retry automatically up to 5 times on failure.
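The retry logic was nothing fancy. A minimal sketch of the idea (the host, batch file and delay here are made up):

```shell
# Retry a command up to N times, pausing between attempts.
retry() {
    max=$1; shift
    attempt=1
    while true; do
        "$@" && return 0                       # success, stop retrying
        [ "$attempt" -ge "$max" ] && return 1  # give up after N tries
        attempt=$((attempt + 1))
        sleep "${RETRY_DELAY:-30}"             # let the flaky server recover
    done
}

# Hypothetical usage, fetching a report over sftp in batch mode:
#   retry 5 sftp -b get-report.batch reports@sftp.example.com
```

With password-only servers, something like sshpass has to feed the password in as well, which is part of what made it more effort than the ncftp tools.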

I set up an sftp system, made as secure as I could manage, for another organization to upload log files. Had to run an ssh service on a custom port with custom configs, combined with a bind mount to a remote file system to store the data (it allows sftp only, not an ssh shell). It works fine, but is way more complex than regular FTP. At least they could use ssh keys to authenticate.
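The ssh side of that looked roughly like this (a sketch, not the exact config; the port, group name and path are made up, while internal-sftp, ChrootDirectory and ForceCommand are standard OpenSSH directives):

```
# /etc/ssh/sshd_config fragment for an sftp-only upload box
Port 2222
Subsystem sftp internal-sftp

Match Group logupload
    # chroot must be root-owned and not group/world-writable
    ChrootDirectory /srv/sftp/%u
    # sftp only, no interactive shell
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
    # ssh keys only
    PasswordAuthentication no
```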

I don't use chrome so this change doesn't really affect me, and my usage of ftp itself in general is low anyways.

Virtualization juggernaut VMware hits the CPU turbo button for licensing costs

Nate Amsden

not great but relieved it is not much worse

Hopefully this is the last major licensing cost change for at least 5 years or so; I would expect it to be, anyway. Of course I'm relieved it wasn't a hell of a lot worse: actual per-core licensing, or the "vRAM" episode they tried at one point, or worse still per-VM pricing, which I think some/many of their add-ons have as options anyway.

Of course they may jack up the prices again in general when the next vSphere comes out, perhaps in exchange for adding in a bunch of extra features (such as NSX) that you may not need or want.

My vSphere setups have always been basic vSphere (Ent+) + vCenter. I have looked at the add-ons every now and then and never saw anything that was worth the extra money, for me anyway. I priced out the vRealize suite and the cost was high enough that I could have purchased a half dozen extra hosts (I'll take the extra hosts in a heartbeat). The core products in my experience are rock solid stable (assuming you are on certified hardware anyway); I have generally filed less than 1 support ticket per year for an actual problem I needed fixed over the past 9 years now. Currently running about 900 VMs.

The last VMware product I was truly excited about was vSphere 4.0 (just look how packed that release was: https://www.vmware.com/support/vsphere4/doc/vsp_40_new_feat.html ); everything since has been smaller incremental updates, and literally the main thing driving my upgrades was simply support. I ran 4.1 past EOL, and ran 5.5 past EOL. vCenter 6.7 finally has a pretty decent HTML UI, but I'd honestly take the older .NET client in a heartbeat (I say that as someone who passionately hated the .NET client in the early days as a Linux user, which I still am, but I changed my mind after I saw the flash and html clients).

(vmware customer since 1999, ESX/i customer since 2006)

Microsoft: 14 January patch was the last for Windows 7. Also Microsoft: Actually...

Nate Amsden

Re: Klingons

The windows part of my life could probably run happily on Windows 7 for the next several years. My main desktop/laptop has been Linux since about 1997, so my day-in, day-out use of Windows is from VMs. I do have a couple of physical systems with Windows 7 installed but they rarely get turned on.

Windows 10 just seems to get worse(at least for those that want control over their systems) as time goes on and I guess it won't get any better, which is too bad. There are the "long term service" builds of Windows 10 but last I heard those were enterprise only.

But windows 10 isn't alone here, much of the technology industry is working hard to remove control from the users(linux example here is systemd). Many users appreciate that, many others do not.

Still losing sleep over that awful Citrix bug? This scanner is here to help... you realize you've already been pwned

Nate Amsden

this is a good bug

I often comment about how I am not concerned about many of these bugs, especially the Spectre-type information leaking ones, because I believe in the vast majority of cases they are overblown. This Citrix bug is a good one though. Too bad Citrix was not able to respond with a full fix more quickly; I wonder if this bug is how Citrix themselves got hacked a while back. I have been using NetScalers since about 2011 (before that used F5 mainly), and was quick to get the workaround in and then patch the systems I have when the patch came out (using 11.1 code). That Pulse Secure bug last year was a good one too (was affected by that as well).

Leave your admin interface's TLS cert and private key in your router firmware in 2020? Just Netgear things

Nate Amsden

confusing article

Been doing internet stuff for about 25 years now and this article is quite confusing. It sort of implies that Netgear is shipping a certificate authority and private key trusted by browsers in their firmware, which would be bad (I think that is unlikely), but it also sort of implies it is simply shipping a regular SSL cert (signed by a real CA, not self-signed) plus its private key (without the key the cert can't be used to serve data), valid for this "routerlogin" domain, in their firmware. That's probably not the best idea, but to me it's far from something to freak out about. I'd assume the cert is only valid for that "routerlogin" domain, so in order to do something bad with it you'd need to trick your target into thinking your IP is for that domain somehow, in which case you could use just about any valid domain or cert. I can certainly understand why they did it this way vs using a self-signed cert, to make it easier for the users.

I learned just a couple of months ago that Chrome is even more picky about SSL. I have been self-signing internal certs for 15 years now (I don't use Chrome) and saw some screenshots from users recently showing Chrome flagging secure sites as "not secure"; after investigating, I determined that Chrome wanted to see some extra fields populated (authorityKeyIdentifier, basicConstraints, keyUsage, subjectAltName) in order for it to consider a cert "secure". Whatever.

Browsers are entirely too extreme with that kind of thing.
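For anyone hitting the same thing, something along these lines should produce a self-signed cert with the fields Chrome checks (a sketch; it assumes OpenSSL 1.1.1 or newer for -addext, and the hostname is made up):

```shell
# Chrome has ignored the CN and required subjectAltName since Chrome 58,
# so put the hostname in a SAN as well as the subject.
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
    -keyout internal.key -out internal.crt \
    -subj "/CN=internal.example.lan" \
    -addext "subjectAltName=DNS:internal.example.lan" \
    -addext "keyUsage=digitalSignature,keyEncipherment" \
    -addext "extendedKeyUsage=serverAuth"

# Confirm the extensions actually made it into the cert:
openssl x509 -in internal.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The 825 days is because some clients also balk at very long-lived certs, which is a separate fight.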

(For reference my home "router" is a PC Engines apu2 running OpenBSD. I have a Motorola cable modem in bridging mode, and my local wifi is provided by an Asus AP in "AP" mode hanging off one of the ports of my firewall; I guess you could call it a "DMZ" as it has no direct access to my internal network.)

Problems at Oracle's DynDNS: Domain registration customers transferred at short notice, nameserver records changed

Nate Amsden

Re: Yet another reason Oracle sucks.

vmware workstation now allows linux as a host? I'm pretty sure that vmware workstation (before it was called that) ran on linux only originally. I was using pre 1.0 on linux back in '99 at least(maybe earlier I don't recall). And have been running vmware (and workstation after they renamed it) on linux ever since. I even had a "VMware 1.0.2 for linux" CD for a long time, wish I still had it, not sure what happened to it.

I want to say vmware for windows hosts didn't appear until 2.0, but I could be remembering wrong.

just for nostalgia I have kept (almost?) all of my vmware downloads, oldest I have is VMware-2.0.3-799.tar.gz (6MB only!!) from Jan 2001. By contrast vmware workstation v15 for linux is a 511MB download. The build number for v15 is 15018445 vs I assume 799 was the build for 2.0.3. That's pretty insane.

Amazon: Trump photon-torpedoed our $10bn JEDI dream because he hates CEO Jeff Bezos

Nate Amsden

Whole single vendor thing was a scam from the beginning

That BS that the DOD couldn't manage multiple vendors for this project was so weak. The government should lead by example and work with multiple providers together in a standards-compliant way; that would have been better for all. This whole "we have to be locked in or else we can't do it" is just crap. They have billions of dollars for this thing. They should be using multiple independent vendors for all layers of the system.

This whole JEDI idea should be canned.

Not that I am any fan of public cloud; quite the opposite, really. I've even hosted my own email and websites for the past ~22 years on hardware that I own, currently in a co-lo in the bay area.

Apple completes $1bn amputation of Intel's 5G modem biz, Chipzilla out of mobiles for good

Nate Amsden

maybe fair?

Qualcomm tried to get in on the server CPU market only to throw in the towel shortly after making a big splash. Intel competitive pressures? Then Intel tries to get in on the modem market and I guess Qualcomm beat them back. What is sort of strange though is both companies have vast resources, they both gave up too easily. Both sides seemed to have fine working products even if they were not best in class.

Chrome devs tell world that DNS over HTTPS won't open the floodgates of hell

Nate Amsden

Re: Won't be used in upcoming builds..

thanks for the info that does look interesting and has not turned up on my searches before. However I don't think it does what I'd like to think it does. It seems to just be a proxy to forward DoH requests to another DoH host.

according to https://github.com/DNSCrypt/dnscrypt-proxy/wiki/Configuration#an-example-static-server-entry

"It responds to standard DNS queries, and can be thus configured in network settings in place of your router's or your ISP's resolver.

But when it receives a query, it will encrypt and authenticate it before sending it to upstream servers able to understand the encrypted protocol."

key point being "upstream servers able to understand the encrypted protocol".

Unless it can handle DoH and hand it off to a local DNS server speaking regular old DNS on port 53 (which should be just as secure, since the "insecure" traffic is only going from the proxy to localhost at that point), it doesn't do what I'm after.

also looked at https://wiki.archlinux.org/index.php/Dnscrypt-proxy

and it referenced cloudflare as well, no obvious indication it can handle DoH -> regular DNS (which would allow someone to run a local DoH system).

I could be misunderstanding the docs; I just spent a couple of minutes looking.

Nate Amsden

Won't be used in upcoming builds..

"[..] but won't necessarily be automatically used in upcoming builds of the browser"

I'd wager the time frame Chrome devs work with and the one most of the rest of the world works with are very far apart. When someone says the above statement I'd have an expectation of that time frame being at least 2+ years, perhaps even 5 years. I'd be quite surprised if within the next 12 months they don't have it on by default (or at least to the same extent Firefox has it on, anyway). Google and Moz seem to be quite aggressive at pushing this kind of stuff. (Disclosure: I bailed on Firefox as my main browser at v37 I think it was, and have been on Palemoon since. I do have Firefox installed and use it very casually, but it is not my main browser, and I don't use Chrome.)

For me, as someone who has run DNS for more than 20 years, I am still waiting to see non-service-provider implementations of DoH (something I can run myself for my own recursive resolver, and that is reliable), not that I am in any rush. I checked a few months ago and there wasn't much of anything; checking again now, I don't see anything obviously new in this area.

Running on Intel? If you want security, disable hyper-threading, says Linux kernel maintainer

Nate Amsden

makes sense

If you are super paranoid and wanting to be the most secure you can be. Security is like backups. You have to judge for yourself what you are protecting against, and how best to protect that. What are the risks vs costs of the mitigation? For me it's easy, I believe these side channel attacks are WAY overblown and as such have actively avoided firmware updates from my vendors to patch them. At some point I'll buy a new enough system that it will probably include some of those firmware fixes, but so far I have not.

But it's nice the firmware updates are there for those that want it.

I also don't run cloud stuff. I have hosted my own email on my own server for more than 20 years now(same with my websites). I have managed internet facing infrastructure (100s of systems) for more than 20 years without a single known security incident(I have been involved in other security incidents with things that were managed by other teams or people though). So my track record and confidence is high with that in mind. I also don't run things that I'd rate as high risks for attacks, so that factors in as well. I admit that my situation is far from common, but it does annoy me to see so many folks treat security as an absolute. It's either secure or it's not. That's not true at all because there will always be security patches and vulnerabilities. Just because they haven't been publicly disclosed doesn't mean they aren't there, and in many cases doesn't mean they aren't actively being(perhaps highly targeted) exploited. Goes back to risk again of course.

If you're one of those folks that doesn't feel secure unless they have the latest patch then well go do that, but security is a lot more than just making sure you have the latest patch (and in my opinion it's the other bits that count a lot more towards security than having the software up to the latest patch).

Not a good look, Google: Pixel 4 mobes can be face-unlocked even if you're asleep... or dead?

Nate Amsden

Re: Erm

How about if you're dead / hands are cold / no blood flowing? Biometrics in phones is one feature I don't mind; at least I'm not forced to use it (which I don't, and I also don't do much sensitive stuff like banking or shopping on my phone with anything other than temporary credit card numbers, which are generated on my computer).

Think your VMware snapshots are all good? Guess again if you're on Windows Server 2019

Nate Amsden

Re: Backups

You apparently misread my post. My comment regarding 20 years had to do with the need for offsite backups. Ransomware doesn't take down a facility, it just encrypts data.

Snapshots certainly can help recover from such an event, depending on how they are used. For example, if ransomware encrypts a file share, that data is easily recoverable provided you catch it before your snapshot policy starts expiring the last snapshots taken before the ransomware hit. I recall reading about some ransomware attacking Windows VSS; I should clarify that when I talk about storage it's about purpose-built systems, and generally those don't run Windows.

Snapshots aren't for everything certainly, but they can be a powerful tool. I just wish NAS appliances had the ability to do read-write snapshots for data testing (NetApp does, I believe; not aware of any other vendor that can).

As for security intrusions into the network, the best policy for that in my opinion is OFFLINE BACKUPS. In addition to whatever dedupe appliance or cloud backup or whatever, store the data where it requires physical human interaction to get to it (the best example is rotating tape that is physically removed from the drive). Make sure the intruder cannot wipe out your data because they compromised user or admin credentials.

I remember at a previous job that had everything in a public cloud, realizing that with my admin credentials it would literally be just a few commands(probably in a bash for loop) to wipe out all data and all backups. Now think of the news articles where cloud credentials have been leaked online. So keep a copy of your backups offline if you want to protect against that kind of scenario.

Nate Amsden

Re: Backups

Sort of a misleading post..

The snapshots are used as part of the backup process to have a consistent point in time to get the data. In your link specifically says "This is because the snapshot is used as part of the data movement process to a backup file or a replicated VM. "

At the end of the day you have to determine what you are trying to protect against, and then devise a backup strategy if possible to protect from that.

For my Linux VMs (of which I have around 800 in production on vSphere), I don't do any VM-level backups, just back up the data that we need (at present 99% of it via NFS to HP StoreOnce). Actually I've never needed VM-level backups, as I have always felt that is sort of wasteful, especially if you are backing up a bunch of systems that are fairly identical to each other, as in the case of web servers etc. MySQL servers have custom scripts that use Percona XtraBackup to export the data safely to another storage system.
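The XtraBackup part is conceptually simple; the flow is roughly this (an illustrative sketch with made-up paths, not the actual scripts):

```
# Take a consistent hot backup of a running MySQL server
xtrabackup --backup --target-dir=/mnt/backups/mysql/$(date +%F)

# Apply the redo log so the copy is consistent and restorable
xtrabackup --prepare --target-dir=/mnt/backups/mysql/$(date +%F)
```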

Snapshots absolutely can be backups (short term, generally). I rarely use VM-level snapshots (and 99% of the time when I do, I power the VM off first to make a consistent snapshot faster). As can, gasp, RAID be a backup (it protects against disk failure). Storage snapshots (especially file storage) are great for restoring files that were lost accidentally. That is a backup, especially for those time windows where some data could be created and then destroyed between major backup windows: rotating snapshots every 5 minutes for X hours, every hour for X days, etc.

Some folks don't see a backup as a backup unless it is distributed off site(sometimes to more than one site) and at least semi regularly tested. That certainly qualifies as a very good backup.

But very few have the resources and/or budget to commit to that level of assurance (certainly none that I have worked for nor any that my immediate friends have worked for). I have been involved with several near data disasters caused by software and hardware failures, many of which involved more than 24-48 hours of downtime. In every case to-date at the end of the day the companies opted not to invest significantly more(either in software/hardware or in staff time) to make the backups more robust. In most cases there was some data loss as a result of the failures, though never complete data loss.

I have to believe that many of the folks touting extremely robust backup processes that are fully tested, off site, encrypted etc etc etc are most likely dealing with a very small amount of data in a simple environment. Or are in a fortunate position to have a massive budget available for such a system. In either case I'm sure it is a tiny tiny minority of environments out there.

Too much emphasis in my experience gets put on offsite backups, as if a nuke is going to hit the facility that has your data. Or a big flood or something. This is so incredibly rare. The likelihood of a software failure causing massive data loss (perhaps triggered by a hardware failure) by contrast is quite common.

In nearly 20 years of working with data centers I've only been hosted with one that had a full facility outage. There was a fire in the electrical room that took the site down for, I think, almost 72 hours. I wasn't hosted in the facility at the time as it had a previous poor track record for power outages. But the point is, even though the systems were down for 72 hours (they had generator trucks on site for several months following while they rebuilt the electrical systems), the systems weren't lost. They were down for up to 3 days (including "big" name sites like Bing Travel, which had no backups at the time apparently), but they came back. That is also literally the only facility I've worked with that ever had a complete power outage, though where I have the authority I choose good facilities. Having such an outage is terrible of course, but it's not a permanent loss.

By contrast I recall an article here on El reg for a similar fire in the electrical room at another facility, I think it was Terremark at the time. They built a good facility, the article said customers never noticed any issues, and they were able to resolve the issue with the fire department with no impact whatsoever.

Oracle demands $12K from network biz that doesn't use its software

Nate Amsden

Oops, one more update: I remembered why I ordered 20 users. It was to cover all of the admins that would be using vCenter, of which at the time there were just 3, eventually expanding to about 6.

Nate Amsden

Forgot to mention part of the migration to Oracle SE was that some of our dev/test systems ran on single-socket ESX installations (technically VMware did not support single socket at the time); Oracle didn't support their DB on ESX either, I believe (we had to repro any issues on bare metal). Our production OLTP systems ran on bare metal, but everything else was in an "unsupported" configuration, mainly for cost savings (servers were dual socket, we just pulled one CPU out). We didn't have any clusters; each ESX system was stand-alone, no vMotion, no HA, didn't even have vCenter, just ESX Standard edition I think it was called. Never had an issue.

Nate Amsden

Really depends on what you're doing with it, obviously.

I ran an Oracle DB as the back end database for our vCenter 5.x installation for about 7 years (migrated to vCenter 6.5 early this year which came with the embedded DB and built in HA).

Dug up the original quote

"ORACLE DATABASE STANDARD EDITION - NAMED USER PLUS 3 YEAR" plus software updates/support. 20 named users (wouldn't recommend this config for an DB running an internet facing app), was $6,300 at the time. I was (and still sort of am) unsure how many named users I needed, I probably could of gotten away with maybe 2 or 3 given that really nothing other than vCenter (which had 2 DBs one for vCenter itself one for VMware Update manager - so 2 named users?) and nightly datapump job for backups. But I saw the cost of 20 and just said screw it license it for a bit more just in case.

I'm sure our license was too small for Oracle to care, they contacted me at least once or twice a year trying to upsell something. I explained what we use Oracle for and there wasn't any opportunities for upsell in this environment. They always understood(sometimes it took some additional explaining) in the end and left me alone for another 6-12 months.

One crazy bit is for a while they were pestering me about renewing support, support that didn't expire for another 2 years. I never understood that. I see emails from last year reminding me my support is expiring in 2020 and the cost to renew the support is .."USD $3.15" .. eventually those emails stopped.

I haven't dug into Oracle's licensing recently but several years ago standard edition could run on unlimited cores and you generally paid per socket(max of 4 I think). vs the enterprise which has the funky per-core licensing. I think Oracle SE even included RAC licensing at one point anyway.

I went through two Oracle audits with a company, in 2006 (it happened just as I joined the company) and again in 2008. My boss ignored my advice to change to standard edition in 2006 (they were originally licensed for "Standard Edition One", if that version still exists, for a DB on an internet-facing social media site). They had Enterprise edition installed.

They paid hefty fines and were assured everything was OK after the audit in 2006, so my boss ignored my advice. Auditors came around again in 2008 and found lots of new violations; this time they accepted my advice and I went through the process of migrating everything to standard edition (I found it ironic that the Oracle staff were not aware of the per-socket licensing advantages of Oracle SE vs the per-core licensing of Oracle Enterprise), even changing the CPUs from dual core (optimal for per-core licensing on fast cores) to quad core (better for standard edition: more power). HP found out the DL380s they sold us as quad-core capable turned out not to be quad-core capable, and they had to replace the motherboards (some time later they updated their docs to reflect that some early boards could not take quad-core processors).

The migration from Oracle EE to Oracle SE at that company was pretty painless, I mean no app changes, I did all the work. We had a Oracle consulting company that helped manage things and their custom monitoring app required partitions, so it was their standard practice to install Oracle EE with partitioning ($$$), so they had to change their shit around, but they realized they should do that anyway.

At least with Oracle 10g, which is what we had at the time I think, we were still able to leverage Oracle Enterprise Manager with the performance packs and such (against the license); it was easy to wipe the installation from the DB when it came time for the next audit, no issues (we semi-regularly wiped that config anyway due to problems, and didn't care about data retention on that stuff). With newer Oracle I noticed it didn't seem possible to install things that way anymore. Really missed the performance packs; I'm not a DBA, but it was just amazing to see how quickly anyone could track stuff down. MySQL even in 2019 is nowhere close (and the way things have progressed over the past decade, MySQL will probably never get to where Oracle was 10-15 years ago).

Stop us if you've heard this one before: Yet another critical flaw threatens Exim servers

Nate Amsden

Re: The best fix is Postfix.

I was going to ask: why exim? Why not postfix? There must be a reason. I remember the early hatred of sendmail and its m4 files. I started using postfix around 2001; on my personal mail server the config hasn't changed in well over a decade. Hell, I'm still running most of the same regex header filters I wrote in 2002. Postfix is simple to configure, so I am curious: any exim fans want to say what keeps them on exim? Maybe it's better than postfix, I don't know either way.

I don't remember why I chose postfix over the other options at the time. I want to say it was likely recommended to me, perhaps by Sophos, to integrate an antivirus solution, which I think was called amavis at the time, and which I had deployed running both Sophos and McAfee. Looks like amavis is still around, and Wikipedia specifically mentions using it with postfix, so that was probably my reason at the time.
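The header filters mentioned above are just postfix header_checks entries; the format looks like this (illustrative patterns, not my actual rules):

```
# /etc/postfix/header_checks
# (enabled in main.cf with: header_checks = regexp:/etc/postfix/header_checks)
/^Subject: ADV:/                          REJECT unsolicited advertising
/^X-Mailer: .*bulkmailer/                 REJECT bulk mailer
/^Received: .*\.spam-relay\.example/      REJECT blocked relay
```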

Equifax is going to make you work for that 125 bucks it owes each of you: Biz sneaks out Friday night rule change

Nate Amsden

freeze credit?

The day the Equifax news broke I froze my credit with the 3 credit companies. I sent messages to folks I knew advising them to do the same but I don't think anyone did. I've temporarily unfrozen my credit twice since then (which I believe is 2 years now). The temporary unfreeze is automatic: you set the day to unfreeze and the day to re-freeze. Perhaps I'm wrong, but I think freezing credit is preferable to relying on a credit monitoring service (maybe better to do both, I suppose), though I don't see many articles suggesting this, which I find strange. Unless you are someone that regularly performs tasks that access this data, once a year feels like a safer arrangement to me, and there are no monthly or annual fees (though there was a one-time fee with one or two of the companies; I think those fees were eliminated as a result of this breach).

Enjoy the holiday weekend, America? Well-rested? Good. Supermicro server boards can be remotely hijacked

Nate Amsden

SM IPMI still terrible

Shouldn't be surprised I suppose. With my last SM IPMI update (~5 years ago), part of the instructions was to wipe the configuration; I suspected that included the network configuration, meaning it would no longer be accessible on the network after rebooting. But I tried anyway just in case, and sure enough the IPMI went offline at that point and I didn't have connectivity to it again for another couple of years (the next time I went on site; fortunately there were no HW failures in the meantime). Add to that the terrible documentation SM has on when firmware is updated, what is fixed, etc (release notes seem more common for them on their newest stuff, from the looks of it).

I replaced my personal SM server (which otherwise worked OK as in no failures anyway, my SM experience goes back to about 2001) last year with a Dell R230. For work stuff historically I use (since ~2006 anyway) HP, but in this case HP didn't offer a configuration that I wanted so went with Dell. Has worked well so far anyway. My personal server is at a co-lo and runs a half dozen VMs, though maybe will add more VMs got tons of capacity now.

Back to SM..

Since this article mentioned "X10" I wanted to see what the current situation is, so I poked around for an X10 board with IPMI

first web hit was this board:

https://www.supermicro.com/en/products/motherboard/X10SLM-F

seems recent "Single socket H3 (LGA 1150) supports Intel® Xeon® E3-1200 v3/v4, 4th gen. "

downloaded the IPMI firmware package, and at least in this case they give release notes and a list of fixes, but in the "IPMI Firmware Update_NEW.doc" file they still say in big red letters(had higher hopes given the "NEW" in the name)

"NOTE !!! Uncheck preserve configuration box during flashing (very important step for FW to work properly). All settings will be reset to default."

I suppose if you are using Windows, DOS or Linux on the bare metal that may be OK, but for me, running vSphere, there are (as far as I could tell anyway) no vSphere-related tools for IPMI config on Supermicro.

At a bare minimum there should be an option to pre-populate some basic configuration, such as the network settings, so you can connect to the IPMI after it resets. Hard to believe this situation is unchanged years later. I have had seamless upgrades on HP iLO and Dell iDRAC (used Dell at a company back in 2009-2010 too) on every single attempt; not a single issue over the past ~13 years. Before that I was mostly a SM customer (had a few hundred systems at one point) and firmware updates were basically never applied, as the process and documentation were quite scary (I believe it often required a DOS floppy disk on systems that had no floppy drives, and the remote KVM/virtual media abilities did not exist at the time), and SM themselves warn you not to upgrade anyway (they still warn you even today). Their processes and documentation are only marginally better today.
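For what it's worth, when there is OS-level access on the bare metal, putting the BMC network settings back with ipmitool is straightforward (addresses made up; needs the ipmi kernel modules loaded, and of course none of this helps under ESXi):

```
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.1.2.30
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.1.2.1
# confirm before walking away from the site
ipmitool lan print 1
```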

A while back I looked at the IPMI update procedure for Citrix NetScaler, and it was just horrifying (https://support.citrix.com/article/CTX137970); I believe Citrix uses Supermicro as well. I tried once to get IPMI working on a NetScaler but ran into a wall pretty quickly (I think the certs were the same on every device, which caused browsers to freak out; a known issue at the time anyway), so I just use a serial console and a network PDU.

Microsoft's only gone and published the exFAT spec, now supports popping it in the Linux kernel

Nate Amsden

Re: What if ...

You say that as if Linux systems haven't had exFAT options for years, when they have. I don't think it was in the stock kernel; more likely it was via FUSE (see https://www.howtogeek.com/235655/how-to-mount-and-use-an-exfat-drive-on-linux/ ).
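On a Debian/Ubuntu box the FUSE route looked roughly like this (package names as on Debian/Ubuntu, other distros differ; the device name is a placeholder):

```shell
# Install the userspace (FUSE) exFAT driver and filesystem tools
sudo apt-get install exfat-fuse exfat-utils

# Mount an exFAT-formatted partition via the FUSE driver
sudo mkdir -p /mnt/exfat
sudo mount -t exfat /dev/sdb1 /mnt/exfat
```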

I think it was probably a much bigger issue for Android devices, at least those with SD card support. There are lots of old news stories about OEMs having to pay significant licensing fees to MS for Android phones. Though as far as I can recall those agreements were always secret, so I don't know if details ever came out about what specifically was being licensed.

Though I'm sure native exfat in the kernel will provide much better performance than userland drivers.

By contrast, getting exFAT to work on 32-bit XP is far more difficult (I did it a couple of years ago while setting up an XP system to play older games, then ended up losing interest in games again). There was a patch (KB955704) MS released but has long since removed from their site (don't know why; the 64-bit patch is readily available), though there are folks out there who kept copies (including me now).

After reading the GIMP article earlier (and thinking back further, to the master-slave stuff from Python, was it?) I'm sort of waiting for someone to come out and be offended by the word "FAT", maybe saying something like "exFAT is saying I was fat before but not anymore" or something.

(As a fat guy, I am not offended by just about anything.)

Security gone in 600 seconds: Make-me-admin hole found in Lenovo Windows laptop crapware. Delete it now

Nate Amsden

Re: "not uncommon"?!

Certainly is not a hard rule for everyone.

https://www.theregister.co.uk/2019/05/15/may_patch_tuesday/

MS released XP patches recently (and it wasn't the first time).

There was a security bug in a Sonicwall product I manage that went end of life earlier this year and they still patched it.

Cisco released patches for ASA firewalls about 10 months after end of life; one from May 2019 is:

https://www.cisco.com/c/en/us/support/docs/csa/cisco-sa-20190501-asa-ipsec-dos.html

But you need a support contract, and you can't get a new one anymore (I tried earlier this year for an old ASA that is now being retired in the next couple of weeks; the previous network engineer didn't feel support was needed because the units were unlikely to fail, and didn't consider security patches).

I agree what Lenovo did is stupid, but it seems like the software is far from critical and it won't hurt anyone to simply remove it. I'm sure it's on the Windows partition of my P50, but that doesn't get booted often, maybe 2-3 times a year.

Biting the hand that feeds IT © 1998–2020