A’s improvement is not fairly gained
The issue here is that you're essentially relying on the providers of transport stacks to "do the right thing". It's fundamental to connectionless network layers that the endpoints police the rate at which they throw packets into the network, and there's only the most basic feedback (discarded packets or, at best, a congestion flag) to indicate that the rate may be too high relative to other traffic. For any individual endpoint, the best response to that indication may well be to increase its traffic rate by sending out multiple copies of the same packets, raising the chance, relative to well-behaved endpoints, that at least one copy survives the queue drops at the point of congestion. There's no real defence against that.
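To make the incentive concrete, here's a toy calculation. It assumes (unrealistically, but it's enough to show the shape of the problem) that each duplicate faces an independent drop probability at the congested queue:

```python
# Sketch (assumption: each duplicate copy is dropped independently
# at the congested queue): a selfish endpoint that sends k copies of
# every packet improves its own delivery odds at everyone's expense.

def delivery_probability(drop_rate: float, copies: int) -> float:
    """P(at least one of `copies` duplicates survives the queue)."""
    return 1.0 - drop_rate ** copies

# With 50% queue drops, a well-behaved sender gets half its packets
# through; a sender pushing 3 copies of everything gets 87.5%.
well_behaved = delivery_probability(0.5, 1)
selfish = delivery_probability(0.5, 3)
print(well_behaved, selfish)  # 0.5 0.875
```

The selfish sender also adds 3x the load, which makes the drop rate worse for everyone else; that's exactly the tragedy-of-the-commons dynamic the transport stacks are trusted not to exploit.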
Existing congestion algorithms also don't fare well when there are multiple network paths: the connection gets throttled to the speed of the most congested path. And the transport protocol itself doesn't really lend itself to, for example, prioritising latency over reliability (actively encouraging routers to drop rather than queue packets).
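As a rough back-of-envelope illustration of the multipath point, using the classic 1/sqrt(p) approximation for steady-state TCP-like throughput (the loss rates, MSS, and RTT below are hypothetical):

```python
import math

def tcp_like_rate(loss_rate: float, mss: int = 1460, rtt: float = 0.1) -> float:
    """Rough steady-state throughput (bytes/s) from the classic
    1/sqrt(p) approximation; purely illustrative, not a real model."""
    return (mss / rtt) * math.sqrt(1.5 / loss_rate)

# Hypothetical scenario: two paths, one nearly clean, one congested.
paths = {"path_a": 0.0001, "path_b": 0.04}  # assumed loss rates

# A single shared congestion window striped across both paths backs
# off on every loss, so the congested path's losses dominate and the
# whole connection runs at roughly the worst path's rate:
shared = tcp_like_rate(max(paths.values()))

# Independent per-path control could instead aggregate the two:
independent = sum(tcp_like_rate(p) for p in paths.values())

print(f"shared window: {shared/1e6:.2f} MB/s, "
      f"per-path windows: {independent/1e6:.2f} MB/s")
```

The gap between the two numbers is why multipath transports need congestion control that is at least partly decoupled per path, rather than one window reacting to every loss.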
Where you have more control over the behaviour of the network layer (knowledge of, or even control over, bandwidth, latency, etc.) and perhaps some notion of resource reservation, you can clearly do "better" than in the case of the Internet at large - but that may well mean new transport protocols as well as different traffic-control algorithms.
And if you're going that far, some distributed ingress control (to put a lid on DDoS) might be worth considering too.