Who says UCS means FCoE? All our templates use iSCSI vNICs.
There have been a few mentions here of UCS using FCoE from the fabric interconnects, but the FIs also support iSCSI, which opens up buying disk ~90% cheaper if you want it.
We now buy multiple cheaper NAS/iSCSI SAN units with NAS-grade disk; the prosumer/consumer small-NAS market has done a great job spreading these disks across the market, with features historically reserved for huge spindle arrays.
UCS FIs have 24/32 10Gbit ports; sure, you can use them for FC, but why would you? UCS-hosted hypervisors can multipath back-to-back to multiple 10Gbit Ethernet ports on a storage unit, no switch even required, typically bought at 10% of the cost of the sage brands like EMC/Hitachi/NetApp (by the time you've avoided buying their vendor lock-in disks).
I recently bought two QNAP 2480U-RP units: use whatever SSDs you like for cache, pay a couple of hundred dollars a disk, get 4x 10Gbit Intel ports, and have 100TB in each for £16K (all approved, tested, and it doesn't break your warranty). Or buy two VNXe's, get slower throughput and less extensibility, and pay £160K: 10x the cost. Plus none of those niggles: SFF vs LFF for SSD support/large disks, transceiver compatibility, expensive support, parts lock-in, feature licensing.
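If you want to sanity-check that 10x figure, here's a quick back-of-the-envelope in Python. The prices and capacities are the ones above; the 0.8 usable-capacity factor is my assumption for RAID/filesystem overhead, not a quoted spec:

    # Rough cost per usable TB, using the figures quoted above.
    # usable_factor is an assumed RAID/filesystem overhead, not a vendor spec.
    def cost_per_usable_tb(price_gbp: float, raw_tb: float,
                           usable_factor: float = 0.8) -> float:
        return price_gbp / (raw_tb * usable_factor)

    qnap = cost_per_usable_tb(16_000, 100)   # QNAP: £16K per 100TB unit
    vnxe = cost_per_usable_tb(160_000, 100)  # the £160K alternative
    print(f"QNAP: £{qnap:,.0f}/usable TB")   # £200
    print(f"VNXe: £{vnxe:,.0f}/usable TB")   # £2,000
    print(f"ratio: {vnxe / qnap:.0f}x")      # the 10x

Whatever usable factor you assume, it cancels out of the ratio, so the 10x holds regardless of your RAID layout.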
There's little argument to be had: with virtualisation it's simple to move the storage design into the hypervisor and get resilience and a short MTTR by literally keeping spare kit in the DC itself. Who needs EMC any more for sub-petabyte storage? 40Gbit per head over iSCSI is fast enough for most, an entire spare unit sits on the shelf, and staff are close to the kit again.
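And on "fast enough for most", a similar sketch of what 4x 10Gbit multipathed iSCSI works out to per head; the 0.9 efficiency figure is an assumed allowance for TCP/iSCSI overhead, not a measurement:

    # Approximate usable bandwidth across multipathed 10GbE links.
    # protocol_efficiency is an assumed TCP/iSCSI overhead allowance.
    def usable_gbytes_per_sec(ports: int, gbit_per_port: float = 10.0,
                              protocol_efficiency: float = 0.9) -> float:
        return ports * gbit_per_port * protocol_efficiency / 8  # bits -> bytes

    print(f"{usable_gbytes_per_sec(4):.1f} GB/s per head")  # ~4.5 GB/s

Roughly 4.5 GB/s of storage bandwidth per host is more than most virtualised workloads will ever pull.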