* Posts by Nate Amsden

2160 posts • joined 19 Jun 2007

Rackspace literally decimates workforce: One in ten staffers let go this week

Nate Amsden

Re: beginning of the end

I'd wager the beginning of the end was probably close to a decade ago. Might have been when they made their initial big investment in OpenStack. Not that OpenStack was a bad idea at the time (it had a lot of promise, though to me that promise has fallen flat over the past 5-7 years, mainly due to complexity), but it was a serious shift in technical strategy.

https://www.theregister.com/2010/07/19/nasa_rackspace_openstack/

Couldn't find the article (I think there is one) about Rackspace essentially pulling out of OpenStack.

I was never a customer of theirs but did price their stuff out on a couple of occasions ~10 years ago; the cost never made sense. Not that public cloud is any better (actually worse in many respects, but some still eat it up because it's sexy I guess). Absolutely astonishing how much money is wasted in public cloud, it just makes me sad.

Akamai Edge DNS goes down, takes a chunk of the internet with it

Nate Amsden

took them a long time to acknowledge it

As an affected customer I can say it took our stuff out at about 8:34am Pacific time. I checked their status page and everything looked fine, but DNS was not working after several manual attempts to query their systems. Tried to call support, queue was full. Tried to do support chat, was immediately disconnected (that surprised me, I expected to be put in a queue even if I was #8590283 in line). Tried to file a support ticket, internal server error. Once I saw that, I hung up the phone; obviously others were already reporting the issue to their support.

Given their support systems were overwhelmed I'm surprised they were unable to update the status page of their site to show an issue was going on.

They have a community support page, and that didn't get a post till about half an hour into the incident, and they didn't even get around to emailing me that there was an issue until two minutes after it recovered (9:39am for us, email came in at 9:41am Pacific time). Same with their status page: the outage had been going for about half an hour before it was updated.

I don't mind the outage, but it would be nice if they could get their status page closer to real time; it should be updated within, say, 5 minutes of a major disruption like this.

If companies really cared about a CDN provider going down (because it does happen), the obvious solution is multiple providers, but not many organizations are up to doing that. Though it's significantly easier than using multiple data centers or, for those in public cloud, multiple cloud providers. Same goes for DNS providers: nobody is forcing you to use a single provider. If it means that much to you then use a 2nd one (or a 3rd); again, it's quite simple (but most orgs don't care enough to do it). I recall noticing for the first time, about 11 years ago now, that Amazon was using Dynect (they were UltraDNS-only before), and my Dyn rep at the time said they signed up one Q4 after UltraDNS had a big outage. Seems like today they still use both of those providers, at least for their main domain. Meanwhile Microsoft is bold enough to rely on their own Azure DNS for their main domain.
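
For anyone curious which providers a given domain delegates to, it's a one-liner (a quick sketch; amazon.com is just the example from above and the answer can change over time):

dig +short NS amazon.com     # list the name servers currently delegated for the domain

If the NS records span two different providers' domains, you're looking at a multi-provider setup.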

White hats reported key Kaseya VSA flaw months ago. Ransomware outran the patch

Nate Amsden

Re: Inside info?

You failed to mention the fact that if such a critical bug had been reported publicly without "responsible disclosure", yes, the bug would have been fixed faster, but it is much more likely that such a bug would be exploited even faster (faster than the bug could be fixed). I have no idea how this VSA software even works, but even if a patch was released fast, would the customers have been quick to patch? (Or are their patches applied automatically?)

You can see this in real time right now with the "print nightmare" stuff from MS. Fumbling about releasing patches that don't fix the issue and cause other major issues(reports of people not being able to print with certain kinds of printers) etc. And in that case the disclosure of the bug if I recall right was an accident with the reporter thinking it was already fixed.

It is very unfortunate though that security is such a low priority in software development for the vast, vast majority of organizations out there. Add to that that security is such a low priority in the operation of such software in the vast majority of organizations: just look at how many times there are reports of compromises because of some issue that had a patch released but never applied, assuming they were even aware such software was in use (if you are running an insecure VPN appliance it should be obvious, obvious meaning patches are available from the vendor, but if you have code running insecure libraries it may not be obvious). Or even worse, organizations that expose systems such as databases directly to the internet, or "cloud" file shares that are meant to be private.

I don't know what the solution is, if there even is one. The cost of security issues hasn't gotten to the breaking point where companies are willing to invest more in security, it seems, anyway.

Intel sticks another nail in the coffin of TSX with feature-disabling microcode update

Nate Amsden

what kind of workloads use/used TSX?

I'm assuming probably greater than 90% of workloads never use TSX, but I'm curious: can anyone name an application or type of workload that did? I came across this blog post that explains what TSX is: https://software.intel.com/content/www/us/en/develop/blogs/transactional-synchronization-in-haswell.html But it doesn't give me any clues toward naming a software application that might take advantage of it.

Some kind of database? HPC maybe? media encoding? super obscure custom in house apps?
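
One way to answer that for a live system is to just measure it; perf exposes transactional-memory events on TSX-capable hardware (a rough sketch, assuming your kernel/perf build and CPU expose these event names):

grep -o -m1 -w 'rtm' /proc/cpuinfo                            # does the CPU even advertise TSX/RTM?
sudo perf stat -a -e tx-start,tx-commit,tx-abort sleep 60     # count transactional regions system-wide for a minute

If those counters stay at zero while your real workload is running, nothing on the box is using it.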

Do you want speed or security as expected? Spectre CPU defenses can cripple performance on Linux in tests

Nate Amsden

disable mitigations in OS - probably doesn't override firmware mitigations?

Most of my systems came from before the Spectre stuff, so I haven't installed the firmware updates that have those fixes in them (I've read some nasty stories about them, most recently the worst of them here: https://redd.it/nvy8ls). I have seen tons of firmware updates for HPE servers that are just updated microcode, or fixes to other microcode, implying some serious stability issues with the microcode.

I have assumed the Linux commands to disable mitigations operate only at the kernel level and are unable to "undo" microcode-level mitigations.
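
For reference, the kernel-level switch is just a boot parameter, and there's a sysfs path to see what the kernel thinks is in effect (a quick sketch; GRUB file locations vary a bit by distro). It doesn't roll back anything already enforced by loaded microcode:

grep . /sys/devices/system/cpu/vulnerabilities/*      # what the running kernel reports per vulnerability
# /etc/default/grub - turn off the optional kernel-side mitigations, then regenerate and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
sudo update-grub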

On top of that, on my VMware systems (ESXi 6.5) I have kept the VIB (package) for microcode on the older version since this started. The risk associated with this vulnerability is so low in my use cases (and in pretty much every use case I've dealt with in the past 25 years) that it's just not worth the downsides at this time. I can certainly understand it if you are a service provider with no control over what your customers are doing.

I can only hope for CPU/BIOS/EFI vendors to offer an option to disable the mitigations at that level, so you can get the latest firmware with the other fixes and just disable that functionality. Probably won't happen, which is too bad, but at least I've avoided a lot of pain for myself and my org in the meantime (pain as in having VM hosts randomly crash as a result of buggy microcode).

I do have one VM host that crashes randomly, 3 times in the past year so far; the only log indicates that it loses power, sometimes 2-3 times in short succession (and there is zero chance of an actual power failure). No other failure indicated, and it isn't workload related. HPE wants me to upgrade the firmware, but I don't think it's a firmware issue if dozens of other identical hosts aren't suffering the same fate. They say the behavior is similar to what they see with the buggy microcode, but that buggy microcode is not on the system. So in the meantime I just tell VMware DRS not to put more critical VMs on that host, as I don't want to replace random hardware until I have some idea of what is failing (or at least can reliably reproduce the behavior; I ran a 72-hour full burn-in after the first crash plus full hardware diagnostics and everything passed). I'm sort of assuming the circuit board between the power supplies and the rest of the system is flaking out, but I'm not sure. The first time, it crashed so hard the iLO itself got hung up (could not log in) and I had to completely power cycle the server from the PDUs (that had never happened to me before); the iLO did not hang on the other two crashes. The server is probably 5 years old now.

Another question: if version "X" of microcode is installed at the firmware/BIOS/EFI level, and the OS tries to install microcode "V" (older), does that work? Or does the CPU ignore it (perhaps silently)? Haven't looked into it but have been wondering that for some time now. I'm not even sure how to check the version of microcode that is in use (haven't looked into that either). Seems like something that should be tracked, though, especially given microcode can come from either a system BIOS/firmware update and/or the OS itself.
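
For what it's worth, the running revision is visible from the OS, and as far as I know the kernel's loader only applies a revision newer than the one already running, so an older OS-supplied microcode package should just be ignored (worth verifying on your own hardware):

grep -m1 microcode /proc/cpuinfo      # microcode revision the CPU is currently running
dmesg | grep -i microcode             # whether the OS loaded a newer revision at boot, and which one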

Microsoft loves Linux so much that packages.microsoft.com has fallen and can't get up

Nate Amsden

never rely on external systems for critical repos

The post shows someone saying they have a customer experiencing a big outage because they can't download these files? Hosting your own repo, even if it is in your "cloud" account, has been a thing for more than 15 years now. I am just at a loss for words as to why people continue to depend on these external sources when they should be mirroring whatever is critical for them inside their perimeter so they have control over it. The sheer laziness of folks is just amazing. Don't get me started on the "oh just run this command that downloads a shell script, pipe it to a shell and run it as root to install the software" people. Shoot me now, please.
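
Mirroring doesn't have to be fancy, either; a cron'd apt-mirror on one box plus clients that only ever point at it covers the basic case (a rough sketch, hostnames made up):

# on the mirror host: crontab entry pulling the upstream repos nightly (apt-mirror reads /etc/apt/mirror.list)
0 3 * * * /usr/bin/apt-mirror > /var/log/apt-mirror.log 2>&1

# on every client: /etc/apt/sources.list points only at the internal mirror, never the internet
deb https://mirror.internal.example.com/ubuntu focal main universe
deb https://mirror.internal.example.com/ubuntu focal-security main universe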

Debian's Cinnamon desktop maintainer quits because he thinks KDE is better now

Nate Amsden

Mate is great

Not sure when I first switched to Gnome 2; I was using AfterStep for many years in the late 90s and early 00s (on Debian), so perhaps I jumped to Gnome 2 from that, mostly on Ubuntu, maybe starting mid 00s. Then Ubuntu and the Gnome team, separately but apparently taking similar drugs, both decided on radical changes. Fortunately there were enough people to start MATE and Mint (not sure when Mint's first version was). I jumped ship from Ubuntu about a year after 10.04 LTS went end of life to Mint 17 MATE, and am now on Mint 20 (installed fresh last year).

Really nice to have had a stable user interface for about the last 15 years now. Though I may only have ~5 years left: one of my key bits of software I use with MATE is called brightside, for edge flipping on virtual desktops (I've never been multi-monitor, always virtual desktops; my regular laptop uses 16). brightside apparently isn't maintained anymore; the last version I could find was for Ubuntu 16.04.

After a couple of hours of work I was able to build it cleanly on Mint 20 (Ubuntu 20) and it works fine, but several of the libraries it uses are past end of life (and I had to hack some stuff into the code/configs to get them to build), and I'm certainly concerned that 5 years down the road when I upgrade again it won't work anymore. Then there's the whole Wayland thing; what will that be like? Guessing brightside from 2014, using the X11 protocol, probably won't work too well on that. Been using edge flipping since my days with AfterStep, which was/is a master at virtual desktops (WindowMaker too I'm sure; I used AfterStep at the time to be different I guess, and later used LiteStep on WinXP for my work system in the mid 00s). MATE works fine without brightside, but I switch virtual desktops often, sometimes several times a minute, so having that functionality is critical. I looked at some alternatives before I went down the road of building brightside myself; none seemed to compare, from what I recall.

The only annoying bit is that the Marco window manager continuously loses the "mouse over activation" ability (another critical bit for me); that started a few years ago and I was hoping it would be fixed in Mint 20, but it is not. I have a little button on the screen that I press to reset Marco (doesn't cause any data loss) and it works again for a random amount of time.

Never was into Cinnamon or Gnome 3, I have had Gnome 3 on my home Debian "server" (only has a GUI to show either calm videos in a loop in VLC or a slideshow) over the years and think it would be too painful to use day to day.

I used KDE back in the 90s for a while; I remember building it and Qt from source many times, pre-1.0 stuff. Not sure why I stopped using it.

Anyways thanks to MATE/Mint folks..going to go donate again now.

Cloudflare network outage disrupts Discord, Shopify

Nate Amsden

Re: CDN useless

Not sure where you are coming from, but it has been common practice for CDNs to terminate SSL for over a decade now (probably much longer). Most (maybe all) of the major CDNs are PCI compliant as well (I contacted several last year as I was expecting to have to jump CDNs again; our previous CDN went out of business early last year). So they have visibility into everything traversing them, from a protocol perspective anyway. Even if you encrypt individual files before transfer they can still be cached in encrypted form, since the CDN sees the raw data after it decrypts the SSL/TLS on top.

Really can't imagine many customers out there not trusting their CDNs to decrypt the traffic. Servers are faster but in my experience at least servers have rarely been the bottleneck when it comes to traffic, servers are eaten up by app transactions. It's origin bandwidth and latency that CDNs help in the most simple use cases. Not too uncommon to get more than a 90% reduction in origin bandwidth with CDN.

But they can do more if your developers are willing to leverage them; one useful function several provide is automatic image resizing. I tried to get the devs to use it at the org I am at for years, but they never wanted to; instead they wanted to store ~15 copies of each image (pre-generated in advance, regardless of whether any of those copies would ever get used) in different resolutions. Just a waste of resources, made worse by seeing some images on the site be super-sized only to be reduced dynamically by image tags in the browser.

CDNs do offer a nice protection from (D)DOS attacks as well at least some varieties of them just because they have such massive capacity.

CDNs certainly can go down, so for those for whom it is super critical that their CDN not go down, use multiple CDNs: either dumb round-robin DNS, or an intelligent DNS provider that can do health checks on the backend and automatically republish DNS entries to point to an alternate provider. (In the past I was at a company that did this, not with CDNs but with our own multiple backend systems; the app stack was entirely transactional, no static content, nothing could be cached. We kept the TTLs to 60s or less, I believe, using an anycast DNS provider; this was ~11 years ago. Prior to that they used BGP to fail over between sites, but that was quite problematic so we changed to DNS failover.)
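
The DNS side of that kind of setup is nothing exotic, just short TTLs on the records the failover logic rewrites (a simplified sketch, names invented):

; zone file fragment: 60-second TTL so a republished answer takes effect quickly
www.example.com.   60   IN   CNAME   site.cdn-provider-a.example.net.
; on failure the managed DNS provider republishes the same name at the alternate provider:
; www.example.com.   60   IN   CNAME   site.cdn-provider-b.example.net.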

Linux 5.13 hits rc5, isn’t yet calm, Linus Torvalds is only mildly perturbed

Nate Amsden

Re: Still brickin'...

Very confused... as a Debian user since 1998 (well, until switching to Devuan), if you are not familiar with Linux and are not looking to get familiar with it, Debian is nowhere near the top of the list of distros you should use. Really, only more technically inclined people would even have heard of it.

Even myself I ran Ubuntu on my laptops for several years until 10.04 went EOL then switched to Mint. I run Debian/Devuan on my personal servers (have about 650-700 Ubuntu servers for work).

So you really set yourself up for failure. That is unless you were looking to dig in and learn about things and fix it or find compatible hardware which it didn't seem like you were in the mood for.

Myself, when I first set up Linux back in 1996 I chose Slackware (3.0 I think?) specifically because it was more involved to use than Red Hat (the most common distro at the time) and I wanted to get into the deep end. And I did, downloading and compiling tons of things from source over the early years, whether it was the kernel, libc, glibc, X11, KDE, Gnome, etc. I learned a lot. I don't do that too much anymore though, and I stay far away from bleeding-edge kernels. The last time I installed a kernel directly from upstream was in the 2.2.x days (back when there were "stable" and "unstable" branches of the kernel; once that stopped, I stopped toying with things at that level).

Hell, I just started trying to dig into why there seem to be some major new memory leaks in Linux 5.4 and 5.8 (Ubuntu 20.04) that didn't exist in Linux 4.4 (Ubuntu 16.04). First time really looking at /proc/slabinfo and /proc/zoneinfo in 20+ years of Linux usage; hopefully something useful comes of it. I have never noticed this kind of memory leak in the kernel before; my use cases are very typical, nothing extreme, so I don't encounter problems often.
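
In case it helps anyone chasing something similar, the slab side is easiest to watch sorted by cache size, and the zoneinfo counters are worth snapshotting over time (a quick sketch):

sudo slabtop -o -s c | head -20        # one-shot list of kernel slab caches, largest cache size first
grep nr_slab /proc/zoneinfo            # per-zone reclaimable/unreclaimable slab page counters to compare day over day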

AWS Free Tier, where's your spending limit? 'I thought I deleted everything but I have been charged $200'

Nate Amsden

downhill

It's getting worse? really? wow.

Quick story: back in 2010/2011 I worked for a small startup in Seattle. The CEO's brother was the head of Amazon's cloud (now the CEO of Amazon, I guess). I met with him with my small team at the time, gave them our list of complaints, and their response was basically "yeah, we know that is a problem and we are working on it" (my manager at the time said it was their typical response). On paper the startup had upwards of a $500k/mo bill with them. I don't know how much, if any, was forgiven on the backend given the close relations with the executives (though they did direct us to cut spending by as much as we could; we got it down to maybe $250k per month, so ~$3M/year). I actually pushed a project to move them out of the cloud, which had a ~6 month ROI, but the company board didn't like it; despite all management including the CTO and CEO being on board, they didn't want to fight the board over it.

Anyway, the core part of the story: my director (a new guy after the original manager left) had a history of working AT Amazon for more than a decade. Everyone at the company (especially me) hated their cloud. Non-stop problems, outages, lies, you name it. So my director reached out to their support (keep in mind they were just a few miles away) and said: HEY, we spend a lot of money with you, have a lot of executive tie-ins between us, and we're in Seattle just like you are. Everyone here hates your cloud. We must be doing something wrong, maybe many things wrong. Can you come on site and help us out?

Their answer? No. Not their model, tough shit. Your problem.

I really struggle to think of any other vendor on the planet that, if you were spending half a million dollars a month and called to complain and ask for help, wouldn't have someone on a plane (if required) the same or next day without question. I remember Oracle flying on site to one of my employers to help diagnose an issue. I recall EMC being a couple of hours away from flying someone on site to that same company to fix another issue (which ended up not being an EMC issue at all but a bug in a script the storage person wrote; I remember that call with EMC, they were practically panicking to get our processes going again after said storage engineer fucked up the script and went on vacation immediately after). That company spent a FRACTION of the $ per month on that stuff.

Amazon told us to fuck off. Kind of needless to say, my director (again, he worked at Amazon for 10+ years, and we had many ex-Amazon employees working there) was quite surprised at their response.

I left not long after, and the company I have been at since (hired by my first manager at the previous startup) has been saving over $1M/year by moving out of Amazon's cloud (the bill was over $100k/mo in late 2011/early 2012 for an app stack that launched from day 1 in their cloud, and we've grown tons since). So easily $10 million in savings over the past ~9.5 years since we moved out. Executives have come and gone and tried to pitch cloud again, but they could never come close to making the costs work.

I have read over the years their support has improved so quite possibly the support response would not be what it was for us back then for that kind of customer. But seeing your comment reminded me of this experience.

VMware reveals critical vCenter hole it says ‘needs to be considered at once’

Nate Amsden

Re: Hey now

Yes, sorry, I forgot to mention HA. vCenter HA's value is questionable to me; it has its own share of issues and the failover times are absolutely terrible (for my simple setups it probably takes a good 6 minutes; I understand why it takes that long, given the design of the apps HA is sort of a bolt-on thing instead of a design thing). Then there are the times when you have to destroy HA to upgrade, with schema changes and stuff. But I hope it is better than nothing... sometimes I wonder though.

Nate Amsden

Re: Hey now

As a Linux user since 1996, count me in the group that really misses the .NET client. I run all my vCenter stuff in VMware Workstation running Windows anyway (Linux host OS). I held onto vCenter 5.5 for as long as I could.

Side note - am installing this on one of my 6.7 vCenter setups and the build number doesn't match, the ISO is VMware-vCenter-Server-Appliance-6.7.0.48000-18010531-patch-FP.iso and the actual build after installation is 18010599 (but it also says 48000 on the login screen) from the command "vpxd -v". Don't recall ever seeing a mismatch like this before myself.

Cisco discloses self-sabotaging SSD bug that causes rolling outages for some Firepower appliances

Nate Amsden

when might this end?

Seems like we have been getting reports for the last ~5 years or more about SSD firmware bugs that brick drives after X period of days from a wide range of manufacturers.

For me these firmware bricking bugs are the biggest concern I have with SSDs on critical systems. Fortunately I have never been impacted (as in, had a drive fail as a result of these) yet. But I have read many reports over the years from others who have.

Even the worst hard disks I ever used (IBM 75GXP back in ~2000 and yes I was part of the lawsuit for a short time anyway) did not fail like this. I mean you could literally have a dozen SSDs fail at exactly the same time because of this. It's quite horrifying.

I have a critical enterprise all-flash array running since late 2014, no plans to retire it, all updates applied (with no expectation of any more firmware updates being made for these drives). The oldest drives are down to 89% endurance left, so in theory, endurance-wise, they could probably go another 20-30 years, though I don't plan to keep the system active beyond say 2026, assuming I'm still at the company, etc.
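
If you want to keep an eye on the same things on plain servers, drive firmware revision and wear are both visible from the OS (a sketch; SMART attribute names vary a lot by vendor, so treat the grep patterns as a starting point):

sudo smartctl -i /dev/sda | grep -i firmware                          # firmware revision actually running on the drive
sudo smartctl -A /dev/sda | grep -iE 'power_on_hours|wear|percent'    # power-on hours and wear-level style attributes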

Cisco: A price rise is coming to a town near you imminently. Blame chip shortages

Nate Amsden

pretty crazy

Came across this a few days ago, comments regarding network equipment lead times - https://www.reddit.com/r/networking/comments/n644ux/lead_times_through_the_roof/

My org hasn't needed to buy much in the past year so haven't been affected by the situation. Most critical network has enough capacity for the next 2-3 years without requiring anything new. Oldest critical stuff(everything is redundant) has been in service 3,348 days(9.4 years today) and goes EOL by the vendor 6/30/22. Probably can get 3rd party support after.

Servers don't have any new needs at this point before 2024 if things keep going the way they have the past ~3 years anyway.

Nothing exciting, but runs super stable almost zero issues on everything.

That Salesforce outage: Global DNS downfall started by one engineer trying a quick fix

Nate Amsden

bad app and DNS

Here's a great example of a bad app: Java. I first came across this probably in 2004. I just downloaded the "recommended" release of Oracle Java for Linux (from java.com), which is strangely 1.8 build 291 (thought there was a Java 11 or 12 or 13 now?). Anyway...

peek inside the default java.security file

(adjusted formatting of the output to make it take less lines)

# The Java-level namelookup cache policy for successful lookups:

# any negative value: caching forever - any positive value: the number of seconds to cache an address for - zero: do not cache

# default value is forever (FOREVER). For security reasons, this caching is made forever when a security manager is set. When a security manager is not set, the default behavior in this implementation is to cache for 30 seconds.

# NOTE: setting this to anything other than the default value can have serious security implications. Do not set it unless you are sure you are not exposed to DNS spoofing attack.

#networkaddress.cache.ttl=-1

I don't think I need to explain how stupid that is. It caused major pain for us back in 2004 (till we found the setting), and again in 2011 (4 companies later; couldn't convince the payment processor to adjust this setting at the time, so we had to resort to rotating DNS names when we had IP changes), and, well, it's the same default in 2021. Not sure what the default may be in anything newer than Java 8. DNS spoofing attacks are a thing of course (I believe handling them in this manner is poor), but it's also possible to be under a spoofing attack when the JVM starts up, resulting in a bad DNS result which never gets expired per the default settings anyway.

At the end of the day it's a bad default setting. I'm fine if someone wants to, for some crazy reason, put this setting in themselves, but it should not be the default, and in my experience not many people know that this setting even exists; they are surprised to learn about it.
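
For anyone bitten by this, it can be overridden without touching the application code (a sketch; the 60-second value and the jar name are just illustrations):

# option 1: set the property in the java.security file quoted above
networkaddress.cache.ttl=60

# option 2: the legacy system property most JVMs still honour (worth verifying on your JVM)
java -Dsun.net.inetaddr.ttl=60 -jar yourapp.jar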

But again, not a DNS problem, bad application problem.

Nate Amsden

wth is it with always dns?

I don't get it? Been running DNS for about 25 years now. It's super rare that a problem is DNS related in my experience. I certainly have had DNS issues over the years, most often the problems tend to be bad config, bad application(includes actual apps as well as software running on devices such as OS, storage systems network devices etc), bad user. In my experience bad application wins the vast majority of times. I have worked in SaaS-style (as in in house developed core applications) since 2000.

But I have seen many people say "it's always DNS"; maybe DNS-related issues are much more common in Windows environments? I know DNS resolution can be a pain, such as dealing with various levels of browser and OS caching around split DNS (where DNS names resolve to different addresses depending on whether you are inside or outside the network/VPN). I don't classify those as DNS issues though; DNS is behaving exactly as it was configured/intended to. It was the unfortunate user who happened to do an action which performed a query, whose results were then cached by possibly multiple layers in the operating system before switching networks, and the cache didn't get invalidated, resulting in a problem.

I know there have been some higher profile DNS related outages by some cloud providers(I think MS had one not long ago) but still seems to be a tiny minority of the causes of problems.

It makes me feel like "it's always DNS" is like the folks who try to blame the network for every little problem when it's almost never the network either(speaking as someone who manages servers, storage, networking, apps, hypervisors etc so I have good visibility into most everything except in house apps).

LG intranet leaks suggest internal firesale of unsold, unreleased smartphones as biz exits the mobile market

Nate Amsden

Re: I had one of the HP Touchpads

I have about 5 or 6. I kept one of my original 16GB Touchpads in the brown box(mailing box, the device box is white) from the firesale never opened. Don't know why just wanted to.

Two of my touchpads get daily use and have for many years as digital picture frames(the others are mainly spares). Paired with the touchstone charging dock they just sit there scrolling through hundreds to thousands of pictures. Have a 3rd with touchstone which I did use as a picture frame too but stopped using it for now as I don't have a good spot to put it where it would actually be noticed often.

Took some time to work around limitations in the software to get a more random selection of pictures, as well as distributing them into directories where the file quantity wasn't too big for the device to deal with. Also cropped the pictures to minimize the CPU required to auto-resize when displaying, as well as limiting pictures to either portrait or landscape to maximize viewing area. I've been quite impressed with how well they have held up, zero failures in a decade. I would have expected at least a screen to die or memory to go bad or something. Their clocks aren't accurate as they are never on the network so there is serious drift, but I don't use them for clocks.

I remember spending at least a couple of hours working through errors on HP's site the day of the fire sale to buy some. I bought four 16GB models at the time (used 2 for gifts and sold one at cost to a friend), and had bought a single 32GB model on day 1 from Best Buy, which later refunded me the difference in cost once the firesale started, I think.

Still have an HP Pre3 as well, though that has mostly sat in a box since I got my Galaxy Note 3 (I've been on an S8 Active for a couple of years, no plans to upgrade anytime soon, maximizing battery life as best I can with chargie.org dongles that limit charging time).

WebOS was pretty neat, though it was clear it was pretty doomed when HP appeared unwilling to invest in it; instead they tossed, what, $10 billion at Autonomy? It was going to take several billion to even consider trying to compete seriously with Android/iOS.

21 nails in Exim mail server: Vulnerabilities enable 'full remote unauthenticated code execution', millions of boxes at risk

Nate Amsden

shocking

Well, maybe I shouldn't be shocked, but I am still. Not at the security issue, but looking at that MX server survey I had no idea that Exim and Postfix combined had that high a market share, and that Sendmail was at 40% ~15 years ago and is now at under 4%. I really expected nobody to have more than say 20-25% market share. Personally I have been using Postfix since about 2001, I think. It was suggested to me for an anti-virus solution I was looking to deploy at the time and I just haven't had a need to look at anything else.

I went off to look at sendmail.org, and wow, they are old school (except they seem to be operating under the "Proofpoint" brand; not sure when that happened). Just read the stuff under the "Contact us" section. Also it's the first reference to an FTP server I have seen on a website in a long time (I have nothing against FTP myself, other than it is funky to work through firewalls).

I still prefer text email myself and my personal email server does strip html off of incoming emails automatically which can sometimes make things difficult (and in very rare occasions impossible as in the entire message is empty) to read. But it certainly brings back memories of an earlier era(an era that was much more fun for me computing wise anyway).

For work my org uses Office 365 (and hosted Exchange at Rackspace prior to that). MS introduced breaking changes in the OWA client, which I use for most of my mail, that make text-based email composing impossible. I reported it almost 2 years ago and last I checked it was still broken (the behavior being that newline characters are broken, making the entire email one long line, in many cases totally unreadable; the message is fine in the "outbox" and only gets mangled once it gets beyond that level).

Working from home is the future, yet VMware just extended vSphere 6.5 support for a year because remote upgrades are too hard

Nate Amsden

Licensing and hardware support would be the biggest reasons (much more so for 7.0 than 6.7). 7.0 dropped official support for tons of hardware (and 6.7 isn't officially supported on a bunch of hardware either). And of course, orgs that have been running VMware for a while may not have maintained their support contracts over the years, which is required to get upgraded 7.0 license keys. On more than one occasion I have gone to upgrade license keys only to get rejected saying the support was expired (support was through HP), and had to go to HP to get them to "sync" the system so the VMware portal would recognize that support is current.

Still running vSphere 6.5 and vCenter 6.7 across my org; nothing in the newer versions is compelling enough to upgrade (vCenter 6.7 only for the newer HTML client). Really miss the .NET client (saying that is weird given I've been Linux on the desktop since 1998 and hated having to use .NET back when I first started with ESX 3 because of it, but it's so much better than the HTML and Flash clients). Except for 4K screen support: I found it unusable on my laptop when I first tried out 4K a few years ago, but worked around it by switching the laptop to 1080p.

Last vSphere release I was super excited about was 4.1. Everything since has been "nice, but not that exciting" which is good I suppose, nice stable product(I file less than 1 ticket/year on vmware issues with ~700-1000 VMs running).

Microsoft customers locked out of Teams, Office, Xbox, Dynamics – and Azure Active Directory breakdown blamed

Nate Amsden

I guess they are going to miss their SLA?

https://www.theregister.com/2021/01/06/four_nines_azure_active_directory_sla/

Google says once third-party cookies are toast, Chrome won't help ad networks track individuals around the web

Nate Amsden

Re: Once upon a time...

Firefox removed the ability to prompt to accept cookies a long time ago (I think it was just after Firefox 33.something). I held onto it as long as I could, then switched to Pale Moon, which eventually had to retire that feature too, probably a year or two ago (because of upstream changes). I tried Waterfox before I decided on Pale Moon, but the feature did not work at all in that browser either at the time.

I still have 37k entries in my moz_perms sqlite database, which I assume Pale Moon still uses (I can right click on a page and see the permissions for that page and they seem to hold up), though I don't have a way to add more entries (easily anyway).

$ sqlite3 permissions.sqlite

SQLite version 3.31.1 2020-01-27 19:55:54

Enter ".help" for usage hints.

sqlite> .schema

CREATE TABLE moz_hosts ( id INTEGER PRIMARY KEY,host TEXT,type TEXT,permission INTEGER,expireType INTEGER,expireTime INTEGER,modificationTime INTEGER,appId INTEGER,isInBrowserElement INTEGER);

CREATE TABLE IF NOT EXISTS "moz_perms" ( id INTEGER PRIMARY KEY,origin TEXT,type TEXT,permission INTEGER,expireType INTEGER,expireTime INTEGER,modificationTime INTEGER);

sqlite> select count(*) from moz_perms;

37009

sqlite> select * from moz_perms where origin like "%thereg%" limit 5;

35|http://www.theregister.co.uk|cookie|2|0|0|1512317577932

36|https://www.theregister.co.uk|cookie|2|0|0|1512317577932

37|http://nir.theregister.co.uk|cookie|2|0|0|1512317577932

38|https://nir.theregister.co.uk|cookie|2|0|0|1512317577932

130|http://forums.theregister.co.uk|cookie|8|0|0|1512317577932

but zero entries for theregister.com; I guess they changed that after I lost access to that feature.
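
Entries can still be added by hand against the schema shown above, with the browser closed and a backup of the file first (a sketch; the permission value 2 mirrors the existing deny-cookie rows, and I'm assuming Pale Moon reads the row back the same way Firefox did):

sqlite3 permissions.sqlite "INSERT INTO moz_perms (origin,type,permission,expireType,expireTime,modificationTime) VALUES ('https://www.theregister.com','cookie',2,0,0,strftime('%s','now')*1000);"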

The wrong guy: Backup outfit Spanning deleted my personal data, claims Cohesity field CTO

Nate Amsden

agree with the other posters

This field CTO is an idiot. I don't care what the EULA says; don't pay such tiny amounts of money for such a massive amount of storage. The math doesn't work out, not even close. It's like people wanting to leverage Google Drive for a few dollars a month and storing hundreds of gigs to tens or hundreds of TBs, and somehow thinking that is a reasonable thing to do. It's so beyond absurd I don't even have words.

And what kind of small business has that much data? Really sounds fishy. Maybe this CTO is trying to disguise his hoarding habits by saying it's for his "small business".

To add insult to injury, the guy is a CTO at a storage company. He has no right to be angry; he should be embarrassed for being so stupid about this, and then doubling down and going public about it. Hell, he could have fired off an email to people on his IT team and said "hey, I'm thinking about using this for X, what do you think?" Or maybe he did and he didn't agree with their response.

Red Hat returns with another peace offering in the wake of the CentOS Stream affair: More free stuff

Nate Amsden

Really, the same situation can arise with any "free" Linux distribution as well; the maintainers could find themselves not interested in doing it anymore for any number of reasons, "forcing" the customer to make a move.

https://en.wikipedia.org/wiki/Scientific_Linux (similar to CentOS)

There was another one(probably more) I think many years ago that was similar to CentOS that quit too saying they were just going to use CentOS. And of course think of how many distributions out there have come and gone over the past 20 years. I thought it was named Scientific Linux which is why I looked it up but seems I remembered wrong, or maybe there was another Scientific Linux in the 2005-2010 time frame.

I haven't used CentOS since ~6.something in a professional environment and haven't used RHEL since RHEL v3 (I remember going from RH 7.2 to RHEL 2.1). Not that I have anything against either distro; it's just that the past two companies I've been at (almost 11 years now) have been Ubuntu based and I haven't felt enough of a need to make that big a change (I did like aspects of CentOS/RHEL after having used them for many years, despite using Debian on my personal stuff since 1998, though all my Debian systems have since gone to Devuan). The whole systemd thing has long soured me on RHEL; not that Ubuntu doesn't have that same issue, but it's just more reason not to bother with moving to another distro.

If you're not willing to pay for support, just be prepared to have to jump distros every now and then.

VMware warns of critical remote code execution flaw in vSphere HTML5 client

Nate Amsden

kill openSLP

FYI you can run this command to see if the SLP service is even being used (at least on vSphere 6): esxcli system slp stats get

That command shows stats about SLP. Assuming it is accurate: when I disabled SLP on my clusters back in October, the stats indicated no hits to the service since the system started, other than what I assume was some kind of health check that ran when the hypervisor booted (the timestamp of the event matches boot time exactly).

VMware suggested in the past to disable SLP if you are not using it, as a "workaround", though they implied (as of Oct 2020; looking now, it looks like they have removed that language; I tried checking archive.org for the older page but it was just a blank page) that it may break stuff, so it's not a long-term solution. For me, based on the stats from that command, it is a long-term solution; I don't think that feature has ever been used at the orgs I have worked at:

https://kb.vmware.com/s/article/76372

as an extra check I ran nmap against the hosts to verify the port was closed after making the change.
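
For reference, the workaround itself (as documented in that KB when I did it; worth re-checking the current KB text before relying on my memory) boiled down to a few commands per host, plus the port check:

/etc/init.d/slpd stop                                  # stop the SLP daemon on the host
esxcli network firewall ruleset set -r CIMSLP -e 0     # block the CIM SLP firewall ruleset
chkconfig slpd off                                     # keep slpd from starting again after reboot
nmap -p 427 <esxi-host>                                # SLP listens on 427; confirm it now shows closed/filtered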

Linux Mint users in hot water for being slow with security updates, running old versions

Nate Amsden

Re: An option for automatic updates would be good

On Mint 17 I went as far as locking my kernel version for something like 2 years because I was tired of sound randomly breaking between versions. Haven't had that issue on Mint 20 yet but my kernel hasn't been updated for a while, build date says April 20 2020 (didn't install Mint 20 till August 2020).

Nate Amsden

Re: Solve the Nvida problem and I'll be on board

I assume you have a super new Nvidia card?

I've been using Nvidia on Linux since probably '98-'99, a Riva TNT or something like that. I haven't had to use Nvidia's drivers from their website (I mean downloading them manually and using their installer; I have relied on the packaged drivers, which seem to do something similar but in a cleaner way) since, I want to say, something like Ubuntu 8 or 9. But I am by no means on the bleeding edge of Nvidia. My main machine runs a Quadro M2000M (Lenovo P50 laptop), though I really don't use its capabilities for much these days.

I remember having to purchase AcceleratedX back in the 90s for Linux not sure if any others around here remember that. I think mainly for my Number Nine(brand) cards which I was a fan of back then.

Nate Amsden

Re: Could they pick a better example than Firefox?

Would be nice if Mint LTS releases, being LTS releases, used Firefox ESR instead of regular Firefox, for exactly this kind of reason.

I upgrade my ESR about once every month or two myself. I haven't run the browsers built into Mint probably since Firefox's version was below 30.x (maybe I was on Ubuntu 10.04 at that time; I don't recall how long ago that Firefox version was).

Nate Amsden

Re: So...perhaps Mint SHOULD have automatic updates turned on.

sure, change the default.

Then the more advanced users who care can easily turn it off if they need to.

(Linux user since 1996, no auto updates, ever)

I ran Mint 17 until not long after Mint 20 came out; the timestamp on my last downloaded ISO of Mint 20 was Aug 15, which sounds about right. I maintain my browsers separately from the OS (Pale Moon, Firefox ESR and SeaMonkey, none of which appear to be in the Mint repos). I also run them as a somewhat more limited user and launch them via sudo, e.g.

sudo -u firefox -H VDPAU_NVIDIA_NO_OVERLAY=1 /usr/local/palemoon/palemoon %u

Could go further of course but haven't bothered to do so. Does make sorting file permissions out funky sometimes.

I just wasn't looking forward to doing the work of the migration until Mint 20 came out. I rely on a GNOME app called brightside, which I guess hasn't been maintained in years, and it took several hours of work to compile it on Mint 20 from Ubuntu 16 sources (it was available in the Mint 17 repos). Plus more hours to get everything set up again (I started from scratch on a new partition rather than try to upgrade the existing OS; I still have Mint 17 installed and can boot to it if needed). I ran Ubuntu 10.04 on my laptop for a good 12-18 months after EOL before installing Mint 17 years ago.

For me personally the security risk is quite low. I suspect for most Linux users the risk is quite low (though mine personally I think is much lower than even that), just by the nature of the type of users most likely to be running Linux; combine that with stats like this:

https://www.theregister.com/2021/02/18/cve_exploitation_2_6pc_kenna_security/

I guess at the end of the day the "install all updates now!" group of people generally come across as saying you will be secure if you have all of the latest updates (that may not be the intention but that's the way it sounds, in my opinion), which of course isn't true. In my opinion, even running older software, you are safer not going to tons of random sites, downloading random things and opening random email attachments than doing all of that with every security update applied. So of course it depends on the user; hence, going back to it, Linux users are more likely not to do that kind of thing.

I have run internet-exposed servers since 1996. I host my own websites, DNS, email etc., all on public IPs (behind an OpenBSD firewall) at a co-location facility (and I even have an FTP server still, for a couple of people that use my systems). My "exposure" there I guess you could say is "high" because the systems are always open from the outside (at least the ports I want opened are). However I have had zero security incidents (that I am aware of) since ~1999 (in that case the incident was caused by a malicious user on the system who was granted legitimate ssh-level access but ended up being untrustworthy).

Microsoft announces a new Office for offline fans, slashes support, hikes the price

Nate Amsden

now if they only let consumers buy LTSC windows

That would be a nice improvement. I know enterprise customers can get LTSC, but consumers should be able to as well. MS is supporting the LTSC windows regardless so it's not as if it would be much effort. Slap a premium price on it, that's fine. I'll pay double, even triple the price for that peace of mind without much hesitation.

I purchased a copy of Windows 10 LTSC for a work VM last year, and the cost was almost $500 (had to buy Win10 Pro then an LTSC upgrade license; note the company paid, not me, of course). It's possible the vendor quoted suboptimal part numbers, I am not sure. Support until 2029, so that's good.

My main desktops/laptops have been linux since 1998, any windows systems at home still run 7(most are off as I don't need them), no plans to upgrade them at this point. AV software still supported, and I haven't had a known security issue with any of my personal systems since the early 90s.

Housekeeping and kernel upgrades do not always make for happy bedfellows

Nate Amsden

don't understand

As someone who manually compiled and installed many kernels from 2.0 until late in the 2.2 series (fond memories of 2.2.19 for some reason), I never, not even one time, had to delete any files as a result of a kernel update (outside of MAYBE the /boot partition if it was low on space). I had to check the article again to make sure it was referring to Linux, and it seems to be. From 2.4.x onwards (basically when they stopped doing the "stable" and "development" kernels) I have relied on the distro packages for kernel updates; again, no files need be deleted for such updates.

Could this story somehow be referring to an OS upgrade rather than a KERNEL upgrade? Of course in Linux land these can be, and typically are, completely unrelated (even in 2021; as terrible as the Linux kernel is about binary compatibility with its own drivers, many people know that whether you run kernel 3.x or 4.x or 5.x, provided it supports your hardware, there isn't much difference for typical workloads). But even with an OS upgrade I don't see a need to delete (m)any files. I also spent hundreds of hours back in the 90s compiling things like Gnome, KDE, X11, even libc5 and glibc, among dozens of other packages I can't remember anymore.

Maybe this was a thing before the 2.0 kernels (my intro to Linux was Slackware 3.0 with 2.0.something, I believe, back in 1996), but I suspect it was not.

this whole story just doesn't make any sense.

Salesforce: Forget the ping-pong and snacks, the 9-to-5 working day is just so 2019, it's over and done with

Nate Amsden

Re: Up yours to HP and Yahoo etc

I remember when HP announced that employees had to come to the office again a few years ago. There were claims that there literally wasn't enough office space available for all employees to come in. Perhaps they corrected that situation, I'm not sure, or reversed course on the concept.

Nate Amsden

Re: WFHSS

Probably much less of a thing than the "stress syndromes" driven by commute times, traffic, open floor office plans(never thought I'd REALLY miss cube farms), etc etc.

So I'd expect it to be a net plus overall for worker health. Certainly not universally but the reverse situation is not universal either. Hopefully employers can figure out the right balance for their employees.

For me personally, going to an office isn't the end of the world but it is more about cost of housing and commute times which really make that unattractive in many situations. I did have a job for a couple of years in a small city(~100k) where I had been living for 9 years, and the new job was literally across the street from my apartment. I had co-workers who parked further away(to avoid parking fees) than I lived. That was an awesome setup. I originally moved to that apartment for another job, which was about 1/2 mile up the street back in 2000.

More patches for SolarWinds Orion after researchers find flaw allowing low-priv users to execute code, among others

Nate Amsden

ServU FTP

Wow, that brings back memories. I knew a guy (online only, never met) back in the late 90s who distributed a "hacked" ServU which was popular with a certain file-sharing crowd back then. People used it because it didn't need a license key, but he also inserted his own backdoor account(s).

Synology to enforce use of validated disks in enterprise NAS boxes. And guess what? Only its own disks exceed 4TB

Nate Amsden

Never used Synology (I do have a 4-bay Terramaster whose software I declared unusable after about an hour; fortunately I was able to easily replace it with Devuan running on a USB-connected SSD, which has been running for a year now at a colo for my personal offsite file storage), but I would imagine they got tired of getting their support burned by customers running SMR drives or something (often through no fault of the customer).

Nate Amsden

Re: Are they going proprietary though?

In what world is Synology an enterprise NAS? They are at best an SMB option; same goes for the TrueNAS/FreeNAS stuff. Whole different sport. Maybe they want to look more enterprise-like by adopting enterprise things like custom firmware?

I can't think of any time a supported enterprise storage system would have any storage in it ever other than from the vendor of the system. Same goes for every other component in the system.

So I can certainly see why users would be upset since Synology is not an enterprise system, never has been, probably never will be. Same for Qnap and others in that group of products(can't name any others since well I don't use 'em). Maybe some think they are enterprise because they provide a rackmount version of their product or something (I'm guessing they do).

IBM cloud tries to subvert subscriptions with pricing plan that stretches some discounts

Nate Amsden

Re: This kind of makes the financial motivation of moving to the cloud moot

Brick and mortar infrastructure: you can(many have since VMware became a thing) consolidate and oversubscribe to slash costs of underused services/servers/storage.

Cloud: you only pay for what you provision (no way to take cpu/memory/disk shares from instance 1 to instance 35 where it can be used).

(This very important distinction is what allowed the org I work for to save more than $1 million/year since I moved them out of public cloud in early 2012. There have been recent cost analyses by people who wanted to move back as a resume bullet point, but they couldn't make the numbers work and aren't with the org anymore. Another comment mentioned ROI - for us the ROI in 2012 was about 8 months.)

I'm assuming most clouds don't offer committed rates on general resources

for example, customer commits to 100 "big" instances each having 10 CPU cores, 64GB of memory, and 1TB of disk.

Which gives a total of 1,000 CPU cores, 6400GB of memory, and 100TB of disk.

While that customer pays for those resources they can then provision any instance sizes (and number) they want, as long as it fits in that total aggregate capacity. They are likely still forced to use pre-defined fixed sizing for those instances (so no fine-tuning the number of CPUs, memory and disk space/type per instance, which leads to more waste).

But I think most public clouds still do fixed instance allocations and pay per instance rather than pay per resource, which leads to massive wasting of resources. Containers help address some of this but there is quite a bit of wastage there too. This is an issue I've been pointing out for about 11 years now.

Severe bug in Libgcrypt – used by GPG and others – is a whole heap of trouble, prompts patch scramble

Nate Amsden

systemd and DNSSEC ?

wtf? I wouldn't be surprised if systemd is doing DNS these days but isn't DNSSEC a server-to-server thing not a client to server thing? If so wtf is systemd doing with it?

on the topic of DNSSEC I came across this blog a while back and found it informative, rips into DNSSEC https://sockpuppet.org/blog/2015/01/15/against-dnssec/

"In fact, it does nothing for any of the “last mile” of DNS lookups: the link between software and DNS servers. It’s a server-to-server protocol."

Been running DNS myself since about 1997(both hosting authoritative BIND9 servers as well as hosting domains with Dyn in the last decade or so), though no DNSSEC.

Linux maintainer says long-term support for 5.10 will stay at two years unless biz world steps up and actually uses it

Nate Amsden

Re: Not a company but as an end-user...

Me too. Hell, on my laptop I ran Ubuntu 10.04 LTS way past end of life (didn't want Unity, eventually installed Mint 17), and only in the past 3 months I think did I install Mint 20 (was on Mint 17 before, so I ran it a good 18 months or so past end of life). So far Mint 20 has more bugs that affect me than 17 did, but whatever, no deal breakers (and really nothing new experience-wise that makes me happy I upgraded). I do maintain my browsers separately (manually) from the OS so I do get updates (running Pale Moon, Firefox ESR and SeaMonkey at the same time for different tasks).

When I was on Mint 17 I actually locked my kernel to an older release (4.4.0-98) and ran that for a solid 3 years because I was tired of shit breaking randomly after upgrades (mainly sound not working after more than one new kernel upgrade, on a Lenovo P50 laptop). I would probably have stuck with the 3.x kernels on Mint 17 but had to upgrade to 4.x to get wifi working (something I didn't realize for the first 6 months, until I traveled, since I never use wifi at home with my laptop, always ethernet).
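
Pinning a kernel like that on Mint/Ubuntu is just a package hold (a quick sketch, using the version mentioned above):

sudo apt-mark hold linux-image-4.4.0-98-generic linux-headers-4.4.0-98-generic    # stop apt from replacing this kernel
apt-mark showhold                                                                 # confirm the hold is in place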

Have been annoyed for so long with Linux's lack of a stable ABI for drivers. I know it'll never get fixed I've been using linux for almost 25 years now but it still annoys me. Fortunately these days server wise most of my servers are VMs. It was so frustrating back in earlier days having to slipstream ethernet and storage drivers into Redhat/CentOS kernels to kickstart systems and having to match up drivers with the kernels(even if it was off by a small revision it would puke). I think that was the last time I used cpio.

Nate Amsden

Re: support life?

Why do you care what kernel is in your TP-Link device, especially if it is a supported version? I just checked one of my $100k storage arrays, which is running supported software from the vendor, on a 2.6.32 kernel (apparently built in 2017). It works fine; it's a pretty locked-down system (I managed to sneak my ssh key onto the system during a recent support call, otherwise customers don't generally have Linux-level access to the system). I don't have concerns. It's by far not the most recent OS release for that platform, but it is technically the most recent recommended release (just had the latest patches applied a few weeks ago) for that generation of hardware (the hardware was released in late 2012; the system was purchased probably in 2015, I think).

I recently reported (again) some bugs with the software (not kernel related, but in the storage functionality). Support said I can upgrade to the next major version to get the fixes, though I deem that too risky myself given the engineering group has told me they don't generally recommend that version for this hardware. It does work, and is supported, but I'm a super conservative person with storage, so I'd rather live with these bugs, which I can work around, than risk different, perhaps worse bugs in the newer version. I'm unlikely to see such bugs given my use cases, but it's just not THAT important to upgrade, so I will run this release of software until well past end of life (late 2022), probably not retiring this piece of equipment before 2024.

(linux user since 1996)

AMD, Nvidia, HPE tapped to triple the speed of US weather super with $35m upgrade

Nate Amsden

Same network speed as current system?

This article says the network links are 200 Gigabits on the new system.

However apparently their current system has 25 GB/s:

https://www2.cisl.ucar.edu/resources/computational-systems/cheyenne

"Partial 9D Enhanced Hypercube single-plane interconnect topology Bandwidth: 25 GBps bidirectional per link"

25 Gigabytes * 8 = 200 Gigabits.

Would have thought things would be faster on the newer system. That "bidirectional" statement may imply the current system is just 100Gbit in each direction and the new system is perhaps 200Gbit in each direction; that would be a good boost.

Give 'em SSPL, says Elastic. No thanks, say critics: 'Doubling down on open' not open at all

Nate Amsden

Re: It's your cash they're after

Curious don't you think they are getting Microsoft Windows and Oracle DB licenses for running those products in their cloud stuff for customers?

https://aws.amazon.com/rds/oracle/

https://aws.amazon.com/windows/products/ec2/

I certainly hope they are paying for them, and I think it's safe to assume those costs are passed onto the customers.

Signal boost: Secure chat app is wobbly at the moment. Not surprising after gaining 30m+ users in a week, though

Nate Amsden

why phone number required

I installed Signal again just now to verify the experience. The first thing it wants to do is send me a text message. It also wants access to contacts. So I deleted it. I don't use the other apps mentioned in the article either.

I think I tried it a couple of years ago with one of those virtual burner phone services but it didn't work.

I don't get why they don't have an email sign up option. I assume if you never grant access to contacts you can manually add your friends.

I do have Line installed. It too wanted a phone number to install. The account was created while I was overseas with another phone and a local sim. I decided to install it on my newer phone in 2019 and it wanted a sim card too. It also didn't work with virtual phone texting, I think. So I bought a prepaid sim, used it to register the app, then switched back to email authentication, removed phone rights from the app and changed back to my normal sim.

With Signal I think (memory is hazy, this was a while ago) I did the same process, but after verifying I could not find any way to switch to email authentication I nuked it before removing the prepaid sim. (It's possible I am confusing Signal with another chat app here; I tested at least a couple.)

I used the Line chat app with nothing but wifi on a dedicated phone for 2 years and it worked fine (it had a sim originally when it was installed). I can search for friends by their username in the app, or if they are with me in person I think there is a QR code function: take a picture with the app and it adds the friend.

Are there chat apps (keeping it simple) that work from signup onwards on a wifi-only device? I just think if you're really concerned about privacy then you'll want the option to use a dedicated device on wifi (cheaper than maintaining a prepaid sim ongoing). Before I moved Line to my main phone, if I traveled I took both phones and used tethering.

I couldn't find any last I checked.

I wouldn't be surprised if Signal is more "secure" than Line, I just don't want to give them my phone number, so I don't use it. I'd like to though, having heard good things.

Android devs: If you're using the Google Play Core Library, update it against this remote file inclusion CVE. Pronto

Nate Amsden

users may be more inclined to update apps

Users might be more inclined to update if developers (of both the OS and apps) released versions with just the security fixes, not new features. Also sorely lacking is the ability to easily roll back.

I was burned too many times early on after switching to Android many years ago, so I have had auto app updates off and only update when I really have to.

Two cases in point. Ironically both Weather apps.

Weather.com, perhaps before their app was sold to IBM, had a pretty good Android app. Then they improved it, I guess, and wrecked it pretty royally in my opinion anyway. Fortunately I had a backup of the older version (v4.2) and it continued to work for several years (some MINOR things broke, but 85% of the app was workable, which was better than the new official version). I was actually quite impressed by how long the older version lasted. I powered up my Note 3 just now with that app and it does not work anymore (no errors, just no weather data), but it did at least up until July 2019, when I moved to a newer phone as my daily driver.

On my newer devices I switched to AccuWeather, which also had a really nice (in my mind anyway) user interface and worked well; I paid for the no-ads version. Then recently they revamped it and wrecked it again (check the Google reviews, MANY complaints). Fortunately, again I had a backup and reverted to the older version. For whatever reason, since I downgraded the notification bar doesn't update automatically anymore no matter what I do; I have to click a little icon on the bar to get it to update. But it works otherwise and again is better than the alternative of using their new app. They started sending popups in the app to get me to upgrade but I have ignored them. Not sure if I will get lucky enough to keep using this older version in the years to come, or if I'll need to find another weather app.
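The "keep a backup of the old version" trick is easy to do with nothing more than adb, by the way. A minimal sketch with a made-up package name; note that putting an older APK back usually means uninstalling the newer one first, which wipes the app's data.

```python
# Sketch: back up an installed APK with adb, and later restore that older copy.
# Assumes adb is on PATH and USB debugging is enabled; the package name below
# is just an example.
import subprocess

PKG = "com.example.weather"   # hypothetical package name

def adb(*args):
    return subprocess.run(["adb", *args], check=True,
                          capture_output=True, text=True).stdout

def backup_apk(pkg, dest="backup.apk"):
    # 'pm path' prints lines like "package:/data/app/<pkg>-xyz/base.apk"
    path = adb("shell", "pm", "path", pkg).splitlines()[0].split(":", 1)[1]
    adb("pull", path, dest)
    return dest

def restore_apk(pkg, apk="backup.apk"):
    # Downgrading in place usually isn't allowed on release builds,
    # so uninstall the newer version first (this wipes the app's data).
    subprocess.run(["adb", "uninstall", pkg])
    adb("install", apk)

if __name__ == "__main__":
    print("saved:", backup_apk(PKG))
```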

OpenZFS v2.0.0 targets Linux and FreeBSD – shame about the Oracle licensing worries

Nate Amsden

zfs in ubuntu since at least 16.04

I'm not sure if it was considered "supported" back in 16.04 (2016), but I am currently running several 16.04 systems with ZFS using packages from Ubuntu.

Perhaps the 19.10 innovation with zfs was installer support? I recall reading news about that but don't remember the version specifically.

https://packages.ubuntu.com/xenial/zfsutils-linux

No special setup, just using ZFS in some cases where the compression is helpful, as the back-end SAN storage is old enough that it doesn't support inline compression (no ZFS on root, just extra filesystems added after the system was installed).

Haven't personally used any Ubuntu that wasn't LTS since 10.04, so I don't know how much further back than 16.04 built-in ZFS goes.
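For anyone curious, the compressed-filesystem-on-old-SAN setup described above only needs a couple of commands once zfsutils-linux is installed. A minimal sketch, with made-up pool, device and mountpoint names:

```python
# Sketch: create a pool on an existing block device (e.g. a LUN from an older
# SAN with no inline compression) and a compressed dataset on top of it.
# Device, pool and mountpoint names are made up for illustration.
import subprocess

def sh(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

DEVICE = "/dev/sdb"        # hypothetical SAN LUN
POOL = "tank"

sh("zpool", "create", POOL, DEVICE)
sh("zfs", "create",
   "-o", "compression=lz4",
   "-o", "mountpoint=/data/archive",
   f"{POOL}/archive")

# Later on, 'compressratio' shows how much the compression is actually saving.
sh("zfs", "get", "compressratio", f"{POOL}/archive")
```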

AWS admits to 'severely impaired' services in US-EAST-1, can't even post updates to Service Health Dashboard

Nate Amsden

Re: what a great day

Actually just retired some of our earliest hardware about 1 year ago: a bunch of DL385 G7s, an old 3PAR F200, and some Qlogic fibre channel switches. I have Extreme 1 and 10 gig switches that are still running from their first start date of Dec 2011 (they don't go EOL until 2022). HP was willing to continue supporting the G7s for another year as well; I just didn't have a need to keep them around anymore. The F200 went end of life maybe 2016 (it was on 3rd-party support since then).

Retired a pair of Citrix NetScalers that were EOL maybe 3 years ago now; the current NetScalers go EOL in 2024 (bought in 2015), and I don't see a need to do anything with them until that time. Also retired some VPN and firewall appliances over the past 2-3 years as they went EOL.

I expect to need major hardware refreshes starting in 2022 and finishing in 2024; most gear that gets refreshed will have been running for at least 8 years at that point. Have no pain points for performance or capacity anywhere. The slowdown of "Moore's law" has dramatically extended the useful life of most equipment, as the advances have been far less impressive these past 5-8 years than they were the previous decade.

I don't even need one full hand to count the number of unexpected server failures in the past 4 years. Just runs so well, it's great.

As a reference point we run around ~850 VMs of various sizes. Probably 300-400 containers now too, many of which are on bare metal hardware. Don't need hypervisor overhead for bulk container deployment.

The cost savings are nothing new; I've been talking about this myself for about 11 years now, since I was first exposed to the possibility of public cloud. The last company I was at was spending upwards of $300k/mo on public cloud. I could have built something that could handle their workloads for under $1M. But they weren't interested, so I moved on and they eventually went out of business.

Nate Amsden

what a great day

I guess that's all I had to say. Moved the org I work for out of their cloud about 9 years ago now, saving roughly $1M/year in the process. Some internal folks over the years have tried to push to go back to a public cloud because it's so trendy, but they could never make the cost numbers come close to making it worthwhile, so nothing has happened.

VMware reveals critical hypervisor bugs found at Chinese white hat hacking comp. One lets guests run code on hosts

Nate Amsden

Most probably aren't affected

It seems, according to the advisory, that a workaround is to remove the USB 3.x (xHCI) controller. As far as I know this is not added by default; none of the ~850 Windows and Linux VMs I manage have it. I had to go and add a USB controller just to see the option appear. Have never needed USB 3 otherwise.
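If you want to double-check your own environment for the vulnerable controller type, something along these lines should do it. A minimal sketch assuming pyVmomi and read-only vCenter credentials; the host, user and password values are placeholders:

```python
# Sketch: list VMs that have a USB 3.x (xHCI) controller attached.
# Assumes pyVmomi is installed; the vCenter connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; validate certs properly in production
si = SmartConnect(host="vcenter.example.com",
                  user="readonly@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        devices = vm.config.hardware.device if vm.config else []
        if any(isinstance(d, vim.vm.device.VirtualUSBXHCIController) for d in devices):
            print("USB 3.x controller present:", vm.name)
finally:
    Disconnect(si)
```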

Even my VMware Workstation at home, which I use every day, is using a USB 1.1 controller.

Score one for good defaults, I suppose.

(vmware customer since 1999)

How Apple's M1 uses high-bandwidth memory to run like the clappers

Nate Amsden

Re: Apple leading the way once more

Several folks seem to think this level of performance will be possible on Windows anytime soon, but MS partnered with Qualcomm for their ARM stuff and it seems weak by comparison. Qualcomm's ARM datacenter chips went nowhere as well. The trend of Apple having higher-performing mobile processors than Android has been going on for a long time. While there are others that make ARM chips for mobile, the general opinion seems to be that Qualcomm is by far the best/fastest when it comes to Android.

Things would be totally different if Apple had any history of licensing their chip designs, or even agreeing to sell their chips to other companies, but they have no interest in doing so (no signs of that changing). Also it's not as if MS (or Google) can encourage Apple financially, given Apple has so much money in the bank.

Apple has certainly accomplished some amazing stuff by vertically integrating all of this, really good work. I'm certainly not their target market so won't be using this myself but for many people it will be good.

Will be interesting to see how this affects market share in these segments; I'm guessing Apple will pick up quite a bit vs Windows. Lots of folks touted OS X as being a great, easy-to-use OS; add to that this new processor and the speed/battery savings it gives, and it's pretty amazing.

If anything this obviously won't inspire significant fear from Qualcomm or other ARM vendors, because of Apple's locked-in ecosystem: they can't sell into iOS/OS X, and vice versa. Just look at the progress of processors in the wearable space for comparison. I have read Apple has made quite a bit of progress there over the years, meanwhile many others either got out of the space or let their designs sit for years without improvements.

Since MS can't go to Apple to buy chips, they are sort of stuck. Same for Google. Sure MS or Google could design their own chips like Apple but it would take many years before they are viable like this (assuming they ever get to that point before being killed off).

HP: That print-free-for-life deal we promised you? Well, now it's pay-per-month to continue using your printer ink

Nate Amsden

no printer at home since 2004

I rarely print. The last time I printed regularly at home was 2003: I would print out my resume along with mini CD labels for business card CDs with a bunch of samples on them, attach the CD to my paper resume and snail mail it to job applications (in addition to applying online). I figured it was a good way to get noticed at the time. Anyway, I got a new job in 2003 and my printing needs sort of stopped. When I needed to print I printed at the office.

Fast forward a few jobs and many years: I shifted to fully remote work in 2016. Was working from home prior to that, but the office was close by (about a mile). In 2016 I moved 90 minutes away.

I started using FedEx Office for my printing needs. Have to drive 15 minutes each way to get the printouts and there's a $1 minimum for submitting jobs through their website, but it works well. I probably go on average 5 or 6 times per year and spend on average $8 to $12 per year for those jobs.

There is a UPS store that is much closer and they claim to do online printing too but last time I checked I could not find a way to submit a simple 1 page job(or a few 1 page jobs). It seemed geared towards project level stuff but maybe that's different now.

Capita still wants to offload education software unit, sale talks ongoing

Nate Amsden

quite confused

The article references a "peak" share price of about 8, but links to a site which shows the current share price in the low 30s.

https://www.investegate.co.uk/CompData.aspx?CPI

Looking at the past few years of performance it seems as if they've probably done several reverse stock splits? It seems back in 2015 they peaked at around $800 according to finance.yahoo.com.

I think the article should be updated to reflect the split adjusted(?) pricing.
