I am working on an education VDI deployment on a large greenfield campus. The customer's preferred vendor was Dell and their core switching is Brocade.
Pricing up the Dell blades was a strange one, because the M620 comes with 10GbE on the LOM. Additional 10GbE Broadcom cards were the same price as 1GbE cards, and that is before the insane discounts that the education sector gets.
The M1000e was kitted out with 6x IO Aggregators, which have QSFP+ on board for 40Gb uplinks. Unless you start spending silly money trying to convert the QSFP+ to optical, you need an SFP+ or QSFP+ capable switch. The one that made sense was the Force10 S2410, but it doesn't exist any more, so we had to go for the S4810, which comes in at $30k, and then there is the price of the cables. Again, education pricing made these dramatically cheaper.
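For a sense of scale, here is a quick Python sketch of how many switch ports a fully populated chassis could consume under the two cabling options. The two-QSFP+-ports-per-IOA figure is an assumption taken from the IOA's base spec, not from our final design, and in practice we cable far fewer of them.

```python
# Switch-port consumption for one fully populated M1000e chassis.
# Assumption (IOA base spec, not our actual cabling): each of the 6
# IO Aggregators has 2 onboard QSFP+ ports, each usable natively at
# 40Gb or broken out to 4x 10Gb SFP+ with a breakout cable.
ioas_per_chassis = 6
qsfp_per_ioa = 2

uplinks = ioas_per_chassis * qsfp_per_ioa  # 12 QSFP+ uplinks

native_qsfp_ports = uplinks          # 12 QSFP+ switch ports needed
breakout_sfp_ports = uplinks * 4     # 48 SFP+ switch ports needed

print(f"native 40Gb:  {native_qsfp_ports} QSFP+ ports")
print(f"broken out:   {breakout_sfp_ports} SFP+ ports (a full S4810)")
```

Either way, a single fully cabled chassis would swallow more QSFP+ ports than an S4810 has, or its entire SFP+ capacity, which is why the switch choice drove the whole design.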
Where it got really expensive was the Brocade MLXe 24-port SFP+ modules. Unfortunately the Brocade MLXe doesn't support 40Gb QSFP+, although their top-of-rack switch, the ICX 6615, does. Dell were also being really fussy about what they would support, so this became a no-go.
So really, this reiterates what others have said: switch prices and options are the problem. I am struggling to justify having 96 SFP+ ports in a two-switch redundant design, but it was really the only option we were given. Even when the environment is fully kitted out with SANs, uplinks and connections to the blade chassis, we will only use 4 ports per blade chassis, 1 per iSCSI SAN and maybe 4 to the core.
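To put a number on that under-utilisation, here is a quick Python sketch of the port budget. The chassis and SAN counts are hypothetical placeholders, since I haven't said how many of each we are running, but the per-device figures are the ones quoted above.

```python
# Port-budget sketch for the two-switch (2x S4810) design.
# Hypothetical device counts, for illustration only.
TOTAL_SFP_PLUS = 2 * 48           # two S4810s, 48 SFP+ ports each

chassis, sans = 2, 2              # placeholder counts
used = (chassis * 4) + (sans * 1) + 4  # blades + iSCSI + core uplinks

print(f"{used}/{TOTAL_SFP_PLUS} ports used "
      f"({used / TOTAL_SFP_PLUS:.0%} utilisation)")
# e.g. 14/96 ports used (15% utilisation)
```

Even doubling those placeholder counts leaves most of the ports dark, which is the whole problem with the pricing.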
Datacentres have a different problem, but smaller environments really need some smaller switch options, or a lower price point, to justify 10GbE.