So-called "software" doesn't exist.
So-called "software" is merely the current state of the hardware.
Code tells the hardware what state it should be in ...
The software-defined data centre concept has attracted considerable attention and hype, with its promise of reducing hardware costs and automating control of infrastructure. Backers of the idea say the SDDC will enable policy-driven management of resources, allowing applications to be deployed across commodity hardware to suit …
I think the downvote was a bit harsh.
The problem with software-defined everything is that you then need a management policy for everything, and you need to know that it works, because when you configure it, it will cascade through your datacentre. It takes all the complexity and shoves it upstream into fewer systems. Which, in itself, generates more complexity.
What happens when a DDoS attack ramps up the iSCSI requirements for your webservers, which floods a switch, which reroutes traffic via another switch, which also handles traffic for the support VPN for the backend database for the finance system, which happens to be running year-end batch jobs, which now take longer than expected, overlapping the backup windows...? You get the idea. Troubleshooting dynamic systems is a nightmare no-one wants. Non-dynamic systems may be less resilient, but they help limit the cascade failures which have support teams chasing their tails. At least, they would be chasing their tails if the manager hadn't decided that he could configure the policy system himself and fired them all.
True, but operating systems, platforms and run-time environments provide layers of translation between software and hardware. The days when software was written to communicate directly with hardware are, in most cases, long gone, even on the likes of mobile phones or white goods.
The article quotes the likes of Google and Facebook, who make their own systems. They've designed these to work with their particular applications. What works for them isn't necessarily going to work well for a trading application in a bank. For intensive applications where performance is paramount, choosing the correct hardware and software combination, tuning it, and thoroughly testing it are all going to remain vital in the years to come.
The "software-defined" moniker is a load of bollocks anyway. Things have been "defined" by software ever since software was written to run on a common platform across different discrete types of hardware. It's nothing new.
Oh, why bother? The pinnacles of the SD* paradigm have already been achieved.
Software Defined Newsmongering was done in 2003, when Python scripts started to generate media headlines like "Horror in Smallville! Peaceful neighbourhood is menaced by a drunk beaver!". I believe that's been their task ever since. Lately they seem to have written lots of articles about Everything-As-A-Service and Software-Defined-Everything.
Software Defined Commentarding appeared a bit later, starting as short blog comments with a weblink, but they must have evolved into full-blown rant engines by now.
I would say this approach is limited by the lowest common denominator, usually the security and risk audit people, who take the most conservative approach to meeting decade-old regulatory requirements. As with most things cloudy, SDDC is fine for Web 2.0 start-ups built to take advantage of it, but less so for established businesses.
It would completely change how all of the services are billed, what with the outsorcerer making most of its money from being able to charge per server provisioned. Can't see that changing in the current agreement, which has quite a few years left to run.
Will be interesting to see how many new ways there are to charge for all this in the next contract!