4 labels per cable
My cables get 4 labels per cable: the outermost label on each end indicates what that end plugs into, and the innermost label on each end indicates what the cable plugs into on the other end. The last systems my co-worker installed took him an average of about 1 hour per server for the cabling/labeling (3x1G cables, 4x10G cables, 2x8G FC cables, 2x power; 11 cables = 44 labels; maybe someday I'll have blades). Fortunately he LOVED the idea of having 4 labels per cable as well and was happy to do the work.
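To make the outer/inner convention concrete, here's a minimal sketch of how I'd generate the 4 labels for one cable (the server/switch names are hypothetical placeholders, not my actual naming scheme):

    # Minimal sketch of the 4-labels-per-cable scheme. The endpoint
    # names below are hypothetical; yours come from your own port map.
    def cable_labels(end_a, end_b):
        """Return the 4 labels for one cable.

        Outer label = what this end plugs into.
        Inner label = what the FAR end plugs into.
        """
        return {
            "end_a_outer": end_a,  # nearest the connector on end A
            "end_a_inner": end_b,  # tells you where the other end goes
            "end_b_outer": end_b,
            "end_b_inner": end_a,
        }

    # Hypothetical example: server NIC port to a top-of-rack switch port
    print(cable_labels("server01 eth0", "sw-tor-1 port 12"))

The payoff is that you can trace either end of a cable without pulling it: the inner label tells you where the other end terminates.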
I also use color-coded cables where possible, at least on the network side. I'm happy my future 10G switches will be 10GBase-T, which will give me a very broad selection of colors and lengths that I didn't have with the passive twinax stuff.
Use good labels too; it took me a few years before I came across a good label maker + labels. Earlier ones didn't withstand the heat (one of my big build-outs in 2005 had us replacing a couple thousand labels, as they all fell off after a month or two, then fell off again). I have been using the Brady BMP21 for about the past 8 years with vinyl labels (looks/feels like regular labels, and I've NEVER had one come off).
Another labeling tip I came across after seeing how on-site support handled things. Even though my 10G cables were labeled properly, it was basically impossible to "label" the 10G side on the servers themselves, with 4x10G ports going to each server (two sets of two, so it still matters which cable goes to which port). I did have a drawing on site that indicated the ports, but the support engineer ended up doing something even simpler that I had not thought of (at one point we had to have all of our 10G NICs replaced due to a faulty design): he labeled them "top left", "top right", "bottom left", "bottom right" for connecting to the servers (these NICs were stacked on each other, so it was a "square" of four ports across two NICs). Wish I would have thought of that! I've adjusted my labeling accordingly now, as in the sketch below.
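For reference, the port map I keep on file now looks something like this minimal sketch, keyed by physical position rather than interface name (all server/switch/port names here are hypothetical):

    # Minimal sketch of a position-keyed port map. Physical positions
    # ("top left" etc.) replace the old drawing; every name is hypothetical.
    PORT_MAP = {
        "server01": {
            "top left":     "sw-a port 1",   # NIC 1, port 1
            "top right":    "sw-b port 1",   # NIC 1, port 2
            "bottom left":  "sw-a port 2",   # NIC 2, port 1
            "bottom right": "sw-b port 2",   # NIC 2, port 2
        },
    }

    # Anyone at the rack can resolve a cable from its physical position:
    print(PORT_MAP["server01"]["top left"])

The nice part is that "top left" means the same thing to a remote-hands tech who has never seen your interface naming.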
Also, I skip the cable management arms on servers since they restrict airflow; I just have cables cut semi to-length so that there is not a lot of slack. Cable management arms are good if you intend to do work on a running system (hot swap a fan or something), but I've never really had that need. I'd rather have better airflow.
Wherever possible I use extra-wide racks too (standard 19" rails but 31" wide overall) for better cable management. In every facility I have been in, power has always been the constraint, so putting two 47U racks in a 4 or 5 rack cage still allowed us to max out the power (I use Rittal racks) and usually have rack space available.
Also temperature sensors: my ServerTech PDUs each come with slots for two temp + humidity probes, so each rack has four sensors (two in front, two in back), hooked up to monitoring.
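The monitoring hookup can be as simple as this minimal sketch, assuming the PDU exposes the probe readings over SNMP and net-snmp's snmpget is installed. The OID and hostname below are placeholders, not the real ServerTech values; you'd pull the actual OID from the vendor's MIB:

    #!/usr/bin/env python3
    # Minimal sketch: poll one PDU temperature probe over SNMP and exit
    # non-zero over a threshold (which most monitoring systems treat as
    # an alert). Requires net-snmp's snmpget on the PATH.
    import subprocess
    import sys

    PDU_HOST = "pdu-rack1-a.example.com"   # hypothetical hostname
    COMMUNITY = "public"
    TEMP_OID = "1.3.6.1.4.1.99999.1.2.3"   # PLACEHOLDER, not a real ServerTech OID
    THRESHOLD_C = 30.0

    def read_temp(host, oid):
        """Fetch one temperature reading (degrees C) via snmpget."""
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", host, oid],
            text=True,
        )
        return float(out.strip())

    if __name__ == "__main__":
        temp = read_temp(PDU_HOST, TEMP_OID)
        print(f"{PDU_HOST}: {temp:.1f} C")
        sys.exit(1 if temp > THRESHOLD_C else 0)

With four probes per rack you'd run one check per probe, which also catches front-to-back temperature deltas, not just absolute heat.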
I also middle-mount all "top of rack" network gear for better cable flow.
Me personally, I have never worked for an organization that came to me and said "hey, we're moving data centers." I've ALWAYS been the key technical contact for any such discussions and have had very deep input into any decisions that were made (so nothing would be a surprise). Maybe it's common in big companies; I've never worked for a big company (and probably never will, who knows).