Yes, easier to plug and play. This is a major advantage in some cases.
I'll allow your 4 servers in a back office around the corner, plugging into a switch under the secretary's desk. *grin*
Put 38 servers in a rack and dual-path connect them to the top-of-rack switches, though, and:
a) you need the wider racks just for the cable bulk, which otherwise chokes airflow. Trust me, I have thermal sensor maps that prove it.
b) you need cable management at the servers and at the top-of-rack switches; in some cases you may also want slot bars on the walls of the rack.
c) you need to keep the *substantial* difference in weight in mind once you're past the 3rd or 4th rack.
d) when DACs *do* go south, you need specialized testing tools, which apparently even our 3rd-level vendor support folks don't have.
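Point c) is easy to sanity-check with back-of-envelope arithmetic. A quick sketch of the per-rack cable weight, where every per-cable figure is an assumed illustrative value rather than a measurement:

```python
# Back-of-envelope estimate of per-rack cabling weight, DAC vs fibre.
# All per-cable weights below are assumptions for illustration only.

SERVERS_PER_RACK = 38
PATHS_PER_SERVER = 2                 # dual-path to the two top-of-rack switches
CABLES = SERVERS_PER_RACK * PATHS_PER_SERVER

DAC_KG_PER_CABLE = 0.35              # assumed: ~3 m twinax DAC assembly
FIBRE_KG_PER_CABLE = 0.06            # assumed: ~3 m fibre patch lead
                                     # (transceiver weight not counted)

dac_total = CABLES * DAC_KG_PER_CABLE
fibre_total = CABLES * FIBRE_KG_PER_CABLE

print(f"{CABLES} cables per rack")
print(f"DAC:   {dac_total:.1f} kg of cable per rack")
print(f"Fibre: {fibre_total:.1f} kg of cable per rack")
```

Under those assumptions a dual-pathed 38-server rack carries dozens of kilograms of copper cable alone, which is the kind of thing that sneaks up on you by rack three or four.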
Where DACs have most definitely provided simplification on one level, they *can* require more complex preparation, and they absolutely require changes at the planning, prep, and implementation stages.
I've basically had a bad set of experiences with DACs, bad planning, and really tight, overloaded racks. Mind you, we've also *never* had a bad fibre cable recover after being "cleaned" at all ends. So I may just be a cranky old bastard letting my experiences colour my perspectives.