* Posts by agnostic_node1

2 posts • joined 11 Jun 2015

Virty servers' independence promise has been betrayed

agnostic_node1

"So, given these issues, what can you as an admin do? To be honest, your options are limited."

No they aren't. I object, good sir!

"Sure, the odd virtualised cluster is perhaps acceptable but when you start to scale to hundreds of hosts, thousands of guests and several faux clusters these issues start to become a real pain for the administrator who has to work around them."

The answer to that problem is not to scale bad/obsolete practices.

1) You can address apparent virtual clustering difficulties by thinking app-first, rather than infrastructure-first. The author doesn't mention anything about virtual or physical load balancers hosting FQDNs & proxying SSL, for instance. But if he were building HTTP infrastructure for the Register in 2015, it's safe to say he'd be better served parking the register.co.uk FQDN on several high-end physical or virtual load balancers, and hosting his public A records & CNAMEs on a DNS host that offers Anycast DNS, such that American users like myself resolve to the Register's Yankee datacenters rather than making a trip across the pond.

But even if he didn't have those resources, he could pull an IT MacGyver out of his hat and do poor-man's load balancing: if HTTP server A needs rebooting, delete its CNAME from your public DNS host. Once it starts back up, re-create it.
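The drain-then-restore trick above can be sketched in a few lines. This is a minimal, hypothetical illustration: the zone is just a dict standing in for whatever record-management API your actual DNS host exposes, and the hostnames are made up.

```python
# Poor-man's load balancing sketch: before rebooting a web server,
# pull its CNAME from the public zone so new lookups skip it, then
# re-create the record once the server is healthy again.
# The dict below is a stand-in for a real DNS provider's API.

zone = {
    "www1.example.com": "CNAME -> lb.example.com",
    "www2.example.com": "CNAME -> lb.example.com",
}

def drain(zone, name):
    """Return a copy of the zone with the host's record removed."""
    return {k: v for k, v in zone.items() if k != name}

def restore(zone, name, record):
    """Return a copy of the zone with the record re-created."""
    updated = dict(zone)
    updated[name] = record
    return updated

during_reboot = drain(zone, "www1.example.com")
after_reboot = restore(during_reboot, "www1.example.com",
                       "CNAME -> lb.example.com")

print(sorted(during_reboot))  # ['www2.example.com']
print(after_reboot == zone)   # True
```

One caveat the trick depends on: resolvers cache records for the TTL, so in practice you'd lower the record's TTL well ahead of the reboot window or clients will keep hitting the drained server.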

If he has an IT team that uses the RFC-1918 or public IP address to access & manage the application and its infrastructure, beat the team members on the head with a large heavy stick. If they repeat, or start mentioning the IP address to users, beat them until hospitalization is required by law. Follow this up by posting a fatwa banning such practices.

Management's got a point... making hardware much, much less important is what the last 5-8 years in IT have been about.

2) The column doesn't mention the concept of what we in the Microsoft kingdom (others call it something else) think of as availability sets, or what network guys might call failure domains. Essentially, why bother with guest virty clusters if you can guarantee that multiple instances of an app's storage + compute + network resources have no hardware in common and are accessed via FQDN? App1.domain.com is thus in Availability Set 1, which consists of VMs on-prem and in my Azure-west or AWS-west region, and that whole enchilada is replicated active/passive to Availability Set 2 on the East coast, or (super-sexy) is active/active with AS2.
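The "no hardware in common" guarantee is checkable. Here's a hypothetical sketch of that failure-domain audit: the instance inventory and resource names are invented for illustration, not any real API.

```python
# Failure-domain check sketch: an app's instances are safely spread
# only if no two of them depend on the same underlying host, storage
# back-end, or network fabric.

from itertools import combinations

# Invented inventory: each instance maps resource kind -> failure domain.
instances = {
    "app1-vm-a": {"host": "hv-west-01", "storage": "san-west",  "net": "fabric-w1"},
    "app1-vm-b": {"host": "hv-west-02", "storage": "nas-west",  "net": "fabric-w2"},
    "app1-vm-c": {"host": "azure-west", "storage": "azure-disk", "net": "azure-vnet"},
}

def shared_hardware(a, b):
    """Return the resource kinds two instances have in common."""
    return {k for k in a if a[k] == b[k]}

def in_distinct_failure_domains(instances):
    """True if every pair of instances shares no failure domain."""
    return all(
        not shared_hardware(instances[x], instances[y])
        for x, y in combinations(instances, 2)
    )

print(in_distinct_failure_domains(instances))  # True: nothing shared
```

If two VMs land on the same hypervisor or SAN, the check fails, which is exactly the moment guest clustering stops buying you anything: one hardware fault takes out both "redundant" instances.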

"The faux clusters usually utilise shared SCSI bus technology and sit on different hosts. “Big wow,” you may say but it has a direct and detrimental effect on the ability to manage a cluster."

Bingo! +1, retweet, Facebook Like!

3) I don't think about SPC-3 disk reservations, block storage vols as witness/tie-breaker, or RDMs anymore, either on physical or virtual clusters, which did make virty clusters hard. Why? Fault-tolerant SMB 3 file shares fixed all those headaches; I can use \\myawesomefileshare\path1 or an Azure service as a witness to resolve a cluster dispute. I can even host my SQL databases on an SMB share, so who the hell wants the baggage entailed with a block storage volume? It's almost to the point where I only need block vols for boot. I'm sure NFSv4 has similar features; relief from the pains of unique-as-a-snowflake block vols is within reach of the Everyday IT Grunt.
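Why a file-share or cloud witness settles a cluster dispute comes down to vote counting. A minimal sketch of the majority-quorum idea, using a two-node cluster plus one witness (the vote counts are the standard model, the scenario is invented):

```python
# Quorum sketch: in a 2-node cluster each node carries one vote and
# the witness (an SMB share or an Azure service) adds a third.  After
# a network split, only the partition holding a strict majority of
# votes keeps running; the other side stands down, so there is no
# split-brain.

TOTAL_VOTES = 3  # node1 + node2 + witness

def partition_has_quorum(votes_reachable):
    """A partition stays up only with a strict majority of all votes."""
    return votes_reachable > TOTAL_VOTES // 2

# The network splits: node1 can still reach the witness, node2 cannot.
node1_side = 2  # its own vote + the witness vote
node2_side = 1  # its own vote only

print(partition_has_quorum(node1_side))  # True  -> node1 keeps the services
print(partition_has_quorum(node2_side))  # False -> node2 stands down
```

The witness never hosts data; it only breaks the tie, which is why a cheap file share can replace the old reserved block volume.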

4) The author doesn't mention the difference between fault tolerance & high availability, which is important for his readers to understand. The former is hard & expensive to achieve on a physical host & guest; the latter is much easier. Most businesses, when pressed, really only need the latter.

Let me pose a thought experiment. Take a 3,000 seat enterprise. Imagine it is 100% virtualized. Now imagine it runs Active Directory for identity/authentication etc. Now further imagine that all AD Domain Controllers are virtual.

Is that AD domain for that business more like a virtual cluster hosting an application on shared resources and offering HA, or is it more like a distributed application that is fault tolerant? What happens if 30 out of 31 virtual AD servers fall over? Can your users still get a Kerberos ticket, identify themselves to your resources, and do their work?

If your answer is "yes, they can still do their work," then you know what thinking app-first is all about.
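The thought experiment has a back-of-envelope number behind it. Assuming (my figure, not the article's) that each DC is independently up 99% of the time, the odds of all 31 being down at once are astronomically small:

```python
# Availability math for the 31-DC thought experiment.  The 99%
# per-DC uptime is an assumed figure purely for illustration; the
# point is that many independent replicas make "all down at once"
# effectively impossible without any guest clustering.

per_dc_uptime = 0.99   # assumed availability of a single virtual DC
n_dcs = 31

p_all_down = (1 - per_dc_uptime) ** n_dcs   # 0.01 ** 31
service_availability = 1 - p_all_down

print(f"P(all {n_dcs} DCs down) = {p_all_down:.2e}")
print(service_availability > 0.999999)  # True
```

The honest caveat is the independence assumption: if those 31 DCs share a hypervisor, SAN, or power feed, their failures correlate and the math collapses, which is precisely why the availability-set / failure-domain discipline in point 2 matters.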
