The article implies you can make your storage faster by making the pipe (bandwidth) bigger. In my experience, at least, the pipe is almost never taxed (even 4Gb FC). I know there are cases where it is, but I suspect they're in the minority.
Of course experienced tech readers know this already.
I'm more concerned with queue depths at lower speeds (mainly because older gear has smaller queues) than with throughput.
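For anyone who wants to see what depth their gear is actually running at, a quick sketch on Linux (assumes SCSI/FC block devices exposed as `sd*`; the sysfs attribute name is standard but paths vary by distro/kernel):

```shell
#!/bin/sh
# Print the kernel's configured queue depth for each SCSI/FC block device.
# /sys/block/<dev>/device/queue_depth is the standard sysfs attribute on Linux.
for attr in /sys/block/sd*/device/queue_depth; do
  [ -e "$attr" ] || continue   # skip if no matching devices on this host
  echo "$attr: $(cat "$attr")"
done
```

Comparing that number against what the array port advertises is usually more telling than watching link utilization.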
My servers are overkill but the cost isn't high: 2x10G links for VM traffic, 2x10G links for vMotion and fault tolerance, 2x1G links for host mgmt, and 2x8G FC links for primary storage (boot from SAN). With the exception of FC, everything is active/passive for simplicity.
That's 11 cables out of the back of each DL38x system, counting power and iLO. Good thing I have big racks with lots of cable management. Four labels per cable means it takes a while to wire a new box, but we've added boxes at most twice a year over the past 3 years.
Maybe someday I'll have blades.