" I would posit that the understanding of their operation would need to be well in-hand from the start in order for them to exist at all."
Much of current networking theory - like that of most new technology - was developed after the fact, to explain why things didn't behave as expected. A series of RFCs traces the progression of workarounds devised as each new unexpected operational problem was discovered. Van Jacobson's and Phil Karn's TCP retransmission algorithms are good examples that now seem obvious in hindsight.
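Those two algorithms are now codified in RFC 6298: Jacobson's estimator smooths RTT samples into a retransmission timeout, and Karn's rule discards samples from retransmitted segments. A minimal sketch (the constants follow the RFC; the class itself is illustrative):

```python
# Jacobson's RTT estimator with Karn's rule, per RFC 6298.
ALPHA = 1 / 8   # gain for the smoothed RTT
BETA = 1 / 4    # gain for the RTT variation

class RtoEstimator:
    def __init__(self):
        self.srtt = None     # smoothed round-trip time (seconds)
        self.rttvar = None   # round-trip time variation
        self.rto = 1.0       # retransmission timeout, floored at 1 s

    def on_ack(self, sample, retransmitted=False):
        # Karn's rule: ignore samples for retransmitted segments, since
        # the ACK cannot be matched to a single transmission.
        if retransmitted:
            return self.rto
        if self.srtt is None:
            # First measurement (RFC 6298, section 2.2).
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - sample)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * sample
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)
        return self.rto

est = RtoEstimator()
for r in (0.10, 0.12, 0.50, 0.11):   # RTT samples in seconds
    est.on_ack(r)
print(round(est.srtt, 3), est.rto)
```

Note how the 0.50 s outlier inflates `rttvar`, and hence the timeout, far more than it moves the smoothed RTT - exactly the conservatism that stopped spurious retransmissions on congested links.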
Modelling tended to be done after the event. I remember one sage's saying: "if you need Queueing Theory calculations, then the system is under-sized".
Modelling after the event did confirm some surprising answers. IIRC, telephone networks smoothed out random bursts of traffic as independent calls were aggregated - but bursty data-packet behaviour just gets amplified.
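That contrast can be seen in a toy simulation. Below, both traffic mixes have the same average rate, but one aggregates memoryless (Poisson-like) sources, the telephone assumption, while the other aggregates ON/OFF sources with heavy-tailed (Pareto) burst lengths, closer to measured data traffic. The variability of the smooth aggregate collapses over longer windows; the bursty one's does not. The parameters (50 sources, Pareto shape 1.2) are illustrative choices, not from the original post:

```python
import random
import statistics

random.seed(42)

N, T, W = 50, 20000, 100   # sources, time slots, window size (slots)

def windowed_cv(series, w):
    """Coefficient of variation of counts summed over windows of w slots."""
    sums = [sum(series[i:i + w]) for i in range(0, len(series) - w + 1, w)]
    return statistics.stdev(sums) / statistics.mean(sums)

# "Telephone-like" load: N independent memoryless sources, each sending
# one packet per slot with probability 0.5.
smooth = [sum(random.random() < 0.5 for _ in range(N)) for _ in range(T)]

# "Data-like" load: N ON/OFF sources with heavy-tailed (Pareto,
# shape 1.2) ON and OFF periods - same average rate, but long bursts
# survive aggregation.
def on_off_source(t_slots, shape=1.2):
    out, on = [], random.random() < 0.5
    while len(out) < t_slots:
        dur = max(1, int(random.paretovariate(shape)))
        out.extend([1 if on else 0] * dur)
        on = not on
    return out[:t_slots]

sources = [on_off_source(T) for _ in range(N)]
bursty = [sum(s[t] for s in sources) for t in range(T)]

print(f"smooth CV over {W}-slot windows: {windowed_cv(smooth, W):.3f}")
print(f"bursty CV over {W}-slot windows: {windowed_cv(bursty, W):.3f}")
```

The smooth aggregate's window-level variability shrinks roughly as the square root of the window size, which is why Erlang-style provisioning worked for telephony; the heavy-tailed mix stays bursty at every timescale, which is the self-similarity that the post-hoc measurements revealed.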