Well said, Anonymous Coward
As a network engineer in the telecoms industry, let me try to shed some light on a few of the issues that underpin customer dissatisfaction.
Most posts here show a lack of understanding of both 1) the business climate in which the telecoms industry has to survive and turn a profit, and 2) traffic engineering principles in a multi-protocol, multi-service next-generation network.
Presumably, most people here consider themselves at least tech-savvy, if not actual net-heads. With that in mind, the first point is simple. Why do you believe the crap told to you by "ISPs"? Let me clarify that: why do you believe statements written by marketing/sales types that couldn't SPELL IP? The idea that you can get 8 Mbps guaranteed throughput "on the 'net" - which actually means "across continents" - for £9.99 a month? Come on! No engineer that works for a telco would promise you that. But the service definitions are written by Marketing types, NOT engineers. And the statements they make are not based on technical realities, but on "what are the competitors saying, we need to offer more" and "what is the competition charging, we need to charge less". How naive do you have to be to think that such services can be technically realised?
Our core network only spans the UK, and I can tell you that that amount of dedicated, high-class b/w would COST us approx. £800 per month to provide. That's cost to us, not price to you. And we own our own fibre. Would you be willing to pay £900 a month for an 8 Mbps connection? No? Thought not! Bandwidth is a scarce, expensive resource, and the bean-counters FORCE us engineers to design the network to ensure a good Return On Investment. That means upgrading ONLY when we HAVE to. And engineers do NOT make that call; senior managers do, and they make it based on financial return - which often means DELIBERATELY postponing upgrades that are absolutely essential, in other words accepting bad service. In today's tight financial climate, if an upgrade doesn't bring in a good margin, it isn't going to happen. Ergo, we have to design around the problem, because despite what you think, most engineers in this industry want to do a good job, and take pride in technically solving the well-nigh impossible problems handed to us by the marketing dead-heads.
So we deploy techniques like statistical multiplexing and traffic engineering in the core to put off the expensive day when we have to light up more glass or, worse, dig in more cable. These techniques RELY on users sending intermittent traffic streams. When users don't (and P2P apps turn users from intermittent senders into continuous senders), congestion occurs. When congestion occurs, everyone's traffic is hit. In our case, that means hitting customers like banks and power utilities that pay a LOT more dosh than you do to get their traffic delivered.
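To put rough numbers on that, here's a back-of-the-envelope sketch in Python. All the figures (1,000 subscribers sharing a 1 Gbps uplink, 8 Mbps access lines, the activity percentages) are invented for illustration, not our actual dimensioning:

# Rough sketch of why statistical multiplexing only works when senders
# are intermittent. All numbers are made up for illustration.
import random

SUBSCRIBERS = 1000
ACCESS_MBPS = 8          # per-subscriber access rate
LINK_MBPS = 1000         # shared 1 Gbps aggregation link
TRIALS = 10000

def congestion_probability(p_active):
    """Fraction of sampled instants where aggregate demand exceeds the link."""
    congested = 0
    for _ in range(TRIALS):
        active = sum(random.random() < p_active for _ in range(SUBSCRIBERS))
        if active * ACCESS_MBPS > LINK_MBPS:
            congested += 1
    return congested / TRIALS

# Web/email users transmitting maybe 5% of the time: the link is basically never overrun.
print("intermittent senders:", congestion_probability(0.05))
# P2P boxes seeding round the clock (say 60% of the time): the link is overrun constantly.
print("continuous senders:", congestion_probability(0.60))

With intermittent senders the shared link is effectively never overrun; turn the same population into continuous senders and it is overrun essentially all the time. That is the whole point of the paragraph above.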
So, to stop that from happening, we groom the traffic! We cannot do anything else. We could not deploy enough bandwidth to cope even if the bean-counters opened the purse strings. There are so many users out there (millions of 'em!) that the aggregated traffic streams, if they were ALL pumping out 8-20 Mbps, would blow our (or any other) backbone, let alone an Internet peering connection, and just deploying b/w would NOT cure it. We would need a major redesign of the core network. That means an investment of tens of millions of pounds and years of possibly service-interrupting work. It is NOT going to happen! End of story.
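If you want to see the scale of the mismatch, just multiply it out. The subscriber count and core capacity below are hypothetical round numbers, not any real operator's figures:

# Back-of-the-envelope: what "everyone pumping out their full access rate" means.
# Hypothetical round numbers, not any real operator's figures.
subscribers = 5_000_000           # a mid-sized national ISP
access_rate_mbps = 8              # the advertised "up to" rate
core_capacity_gbps = 2_000        # a generously provisioned national core

offered_load_gbps = subscribers * access_rate_mbps / 1000
print(f"Offered load if everyone transmits at once: {offered_load_gbps:,.0f} Gbps")
print(f"Overload factor vs core capacity: {offered_load_gbps / core_capacity_gbps:.0f}x")

That is roughly 40,000 Gbps offered against 2,000 Gbps of core: a 20x overload. You don't close a gap like that with an incremental upgrade; you close it with a redesign, or by grooming the traffic.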
Actually, on a technical point, no matter how much b/w we deploy, if we are going to GUARANTEE that no congestion occurs in our core network (as our corporate customers insist), we *have* to deploy traffic engineering and grooming techniques, because failures in the core, or sudden events, can still create congestion points (unless typical core n/w utilisation is below 10%).
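Here is a simple illustration of the failure case. The topology (N equal-cost core links) and the utilisation figures are invented, but the arithmetic is the point:

# Why "just add bandwidth" still needs traffic engineering: when a core link
# fails, its load reroutes onto the survivors. Topology and numbers are
# invented for illustration.
def post_failure_utilisation(n_links, utilisation):
    """Per-link utilisation on the survivors after one of n_links fails."""
    return utilisation * n_links / (n_links - 1)

for u in (0.3, 0.5, 0.7):
    for n in (2, 3, 4):
        after = post_failure_utilisation(n, u)
        verdict = "DISCARDS LIKELY" if after > 1.0 else "ok"
        print(f"{n} links at {u:.0%} -> survivors at {after:.0%} ({verdict})")

Two links running at a comfortable-looking 70% each hit 140% the instant one of them fails. The size of the individual pipes is irrelevant; only headroom and traffic engineering save you.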
To illustrate: to cope with BBC coverage of an Olympic event, we increased our b/w on a link running at 8% utilisation (i.e. the pipe was damn near empty) by a factor of five (yes, five times the capacity), and we STILL got traffic discards at peak. No commercial concern could conceivably do this on an ongoing, regular basis and stay in business.
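The rough numbers behind that: the link size below is hypothetical, but the 8% baseline and the factor-of-five upgrade are as described above.

# Worked numbers for the Olympics example. The 10 Gbps link size is
# hypothetical; the 8% baseline and factor-of-five upgrade are as above.
link_gbps = 10
baseline_utilisation = 0.08
upgraded_gbps = link_gbps * 5

average_load_gbps = link_gbps * baseline_utilisation     # 0.8 Gbps on average
headroom = upgraded_gbps / average_load_gbps
print(f"Average load: {average_load_gbps:.1f} Gbps")
print(f"Upgraded capacity: {upgraded_gbps} Gbps ({headroom:.0f}x the average load)")

Discards at peak mean instantaneous demand exceeded more than sixty times the average load. Provisioning against averages tells you almost nothing when traffic is that bursty.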
And, for the truly technical among us, the problem is compounded by the fact that Internet traffic is self-similar: packet and flow arrivals follow heavy-tailed, power-law distributions rather than the Poisson distribution that classical telephony engineering assumes. For the non-technical, that means the aggregated traffic doesn't smooth out as you add users; it stays bursty.
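A toy simulation makes the difference visible. The parameters are arbitrary and this is not a model of any real network; it just compares the peak-to-mean ratio of aggregated light-tailed streams with aggregated heavy-tailed ones:

# Toy illustration: aggregation smooths light-tailed traffic but not
# heavy-tailed traffic. Parameters are arbitrary, not a real traffic model.
import random

USERS = 500
INTERVALS = 2000

def peak_to_mean(sample_demand):
    """Sum per-interval demand over USERS streams; return the peak/mean ratio."""
    totals = [sum(sample_demand() for _ in range(USERS)) for _ in range(INTERVALS)]
    return max(totals) / (sum(totals) / len(totals))

def light_tailed():
    return random.expovariate(1.0)      # exponential: tame, Poisson-like tail

def heavy_tailed():
    return random.paretovariate(1.2)    # Pareto: power-law tail

print("light-tailed aggregate peak/mean:", round(peak_to_mean(light_tailed), 2))
print("heavy-tailed aggregate peak/mean:", round(peak_to_mean(heavy_tailed), 2))

The light-tailed aggregate sits just above a ratio of 1, i.e. it has smoothed out; the heavy-tailed aggregate still throws spikes many times its own average, so the network has to be engineered for peaks far above the mean.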
So do the sensible thing and plan your downloads for off-peak hours. That way you have a chance of decent service. Just railing about it and blasting a 24/7 data stream to/from your supplier is plain stupid, not tech-savvy.
Sorry for the rant!