How about this
Everyone runs torrents over uTP, and then all ISPs can just throttle that (or block it completely), and that will kill off P2P without ISPs having to invest vast sums of money in auto-traffic shapers.
The internet's TCP/IP protocol doesn't work very well. As the internet's traffic cop, it's supposed to prevent applications from overloading the network, but it's at a loss when it comes to managing P2P applications. This deficiency, generally known to network engineers but denied by net neutrality advocates, has been a central …
Funny.
I thought UTP meant 'unshielded twisted pair', i.e. networking cable!
Anyway, it's almost certain that this new uTP thing will simply not be recognised by the real workhorses of the internet, your typical router. Routers only deal with the likes of TCP/IP or UDP/IP, plus perhaps some Novell protocols such as IPX/SPX and MS ones such as NetBEUI, which don't get thrown around the internet at all.
You likely won't see real advantages until routers recognise and act on such traffic, which isn't likely.
Routers won't be able to assist at all with traffic flow in such cases; they'll only see it as UDP. The work of properly managing the flow will then be performed only by the endpoint systems, where the advantages are likely to be minimal, especially in light of the traffic-flow logic that Azureus (can't remember its new name) already applies to P2P traffic.
Users will not really enjoy the UDP torrent experience.
Why?
Because without the TCP model, which works well today as a protocol, end users might well experience problems with their existing UDP applications, even if they do use QoS on their own DSL routers and the like. Users can only control outbound QoS effectively; on TCP they have some limited back-off ability inbound (WRED and the like), since dropping inbound packets can affect things like the TCP window size, but that gives only limited control over the results.
But the combination of TCP and the torrent app creates a well-behaved app/protocol where shaping works about as well as you can get, considering you only control a single endpoint.
The ISP issue of throttling traffic is another problem - and let's not confuse the two things right now.
TCP torrent is a very well-behaved and well-tried approach to sharing - I'm not getting drawn into the rights or wrongs, as that is not my point in this comment.
uTorrent - guys, what are you thinking?!
I'm just pointing out that changing to UDP might make a good experience worse, and annoy dedicated users as a result.
Deep inspection of packets is thwarted simply by using a VPN. End of. There's no way around that, so stop farting about with ever more elaborate schemes for deep inspection.
Network managers who blindly degrade all VPN traffic aren't going to make friends, but I suppose they could. Failing that, the only *reliable* information a network has about traffic is the source and destination addresses. (All else can be hidden or obfuscated.)
Fortunately, these are sufficient. If I want *my* traffic to be managed according to type, I should manage it myself with a router in my own network that I control. If I don't, I'm saying that I'm happy for my ISP or any other provider in the chain to filter by source and destination address. Heavy users get capped according to bandwidth, rather than the political correctness of their usage. Networks get to use simple algorithms for traffic control.
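To illustrate, the kind of 'simple algorithm' I have in mind is nothing fancier than a per-source token bucket - a toy Python sketch, not any vendor's actual implementation, and the rates are invented:

    import time
    from collections import defaultdict

    RATE_BYTES_PER_SEC = 250_000   # assumed per-subscriber sustained cap
    BURST_BYTES = 1_000_000        # allow short bursts above the cap

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            # refill according to elapsed time, never beyond the burst size
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False               # over the cap: drop or queue the packet

    buckets = defaultdict(lambda: TokenBucket(RATE_BYTES_PER_SEC, BURST_BYTES))

    def forward(src_addr, nbytes):
        # capping keys off the source address only; the payload is never inspected
        return buckets[src_addr].allow(nbytes)

Note that nothing in there cares whether the bytes are BitTorrent, VoIP or cat pictures.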
And as someone pointed out in the previous thread of this series, we already have DCCP if you actually want a congestion-and-TCP-friendly version of UDP-or-anything-derived-therefrom.
There is no problem here and even if there were then that problem has already been solved.
"Everyone runs Torrents over uTP and then all ISP's can just throttle that (or block it completely) and that will kill off P2P without ISP's having to invest in vast sums of money for auto-traffic shapers."
Well, from the looks of it, uTP is nothing more than an application-layer protocol over UDP; good luck blocking that (along with all the VoIP, WoW, streaming and such).
The service providers oversold their ability to deliver their product and are now crying 'foul!' because some people have found an application with the potential to let them use all of the service they have purchased, and are doing so. Shaping network traffic without agreement from the person buying the service is just as dishonest as blocking the traffic and claiming you aren't. I'm not necessarily objecting to traffic shaping: if the user purchasing the service agrees to it, I've got no problem with it. But it can't simply be imposed by the people selling the service.
These Somalis are EVERYWHERE!
Why not go the whole hog and create PDP (Pirate Datagram Protocol), designed for P2P apps, instead of using UDP?
Incidentally, there is a somewhat clear article on "fair" handling of TCP streams in the Dec. issue of IEEE Spectrum, for those who like to read: http://www.spectrum.ieee.org/dec08/7027
Now, the author needs to retract the article of Dec. 1st, right?
Yeah yeah, all companies that make P2P software are evil, devious bastards who want to kill the internet for everyone. Anyone who uses P2P is an evil pirate, freetard, (insert insult or marginally clever slight here), etc etc etc. Really, guys, aren't you tired of painting everyone who has or is using a P2P application with the same broad, inaccurate and frankly bullshit brush? Wake me when you're done.
While I'm sure the MPAA would love to do that, the word you're after is hackles.
http://idioms.thefreedictionary.com/raise+hackles
A much improved piece. The thing still missing here is that part of the intent of the BitTorrent protocol is to utilise as much bandwidth as possible in delivering content, maximising download speed by using multiple distinct paths. It's inherently aggressive in its bandwidth consumption.
Most P2P users are pirates: for each of you downloading a Linux distro three or four times a year, I know a guy with a terabyte of music and video. That's the person we moan about; that's the person who's making your WoW patch take a week to download.
Fundamentally though, it's about the service providers having sold a package they can't deliver on. Pricing is the best way to fix this.
I've been accused of being too long-winded on your site, Richard, so I'll try and post here.
You gave me a link today to a speech you gave earlier this year, where you mentioned your op-ed on the whole Sandvine thing and commented that you were a witness at the FCC hearing on the subject at Harvard. You even made insinuations about the Sandvine throttling on page 1 of this very piece. However, yesterday, nine to ten months after you testified, you admitted you still didn't even know the details of the Comcast Sandvine implementation.
Quote: "But I've never seen any data that says the management was triggered by 8 seeders, if you have some please share." (http://bennett.com/blog/2008/12/note-about-udp/#comment-427523) - despite it being, as was pointed out to you, in the filings Comcast made. The other problem is that it interferes and blocks without discrimination. I used to be chairman of the US Pirate Party, and another of our officers was on Comcast in Utah. We had videos from Pirate Party conferences he could not seed. There were Steal This Film (1 and 2) and Route Irish that he could not seed. Our material, legally shareable, was blocked because of Comcast's bad decisions. Decisions you have defended for almost a year, despite the fact that until yesterday you still didn't know the details. Perhaps you also forgot that it forged packets to kill the connections of non-BitTorrent applications?
The problem with Sandvine is that it doesn't care what's being transferred, just that the protocol, or connections that look like that protocol, should be terminated. If the police stopped every black man because he was black, or looked like a black man, regardless of any wrongdoing that may have been committed (and maybe phoned ahead to the person's destination, forging their voice, saying they would be late or not coming), would that be acceptable? That's what Sandvine does, with networks.
The solution might then be throttling and management based on content, to reduce the transfer of copyrighted materials, which let's say is half the P2P traffic (for ease of analysis). That would reduce the load on the networks. However, such systems don't work with BitTorrent. Ben Jones over at TorrentFreak (who I've known since the early 90s) has pointed out why systems like CopySense don't work with BitTorrent.
At the end of the day, you, Richard, are arguing that this is bad because it makes things so much harder for the ISPs to manage their networks. I am very sorry that ISPs appear so unable to deal with problems that come up and may complicate their business. It doesn't happen in ANY other business, after all, that something new comes out and complicates things immeasurably. Oh wait: the US auto industry and the oil crisis; airlines and 9/11 (and the sheer incompetence of the TSA); traffic police and radar/laser detectors; map companies and GPS; fixed-line telephone companies and cellular networks; traditional television networks and cable TV (or satellite, depending on locale); I could go on and on. ISPs should see about getting their act together, ESPECIALLY the cablecos with their local monopolies, because right now the real loser is the customer.
I meant to call you out on this one when you published the last article, but in the end couldn't be bothered for the same reason that I often can't be bothered to reply to trolls, but now, since you're repeating it, I guess I can raise the effort.
(I would like to refer back to your previous article, but for some incredibly strange reason there is no link back to it from this story. What an amazing coincidence! That normally never goes wrong!)
First point: your argument is based on a specious dishonesty:
>"The internet's TCP/IP protocol doesn't work very well. As the internet's traffic cop, it's supposed to prevent applications from overloading the network"
Utter garbage, and you know it. The congestion control feature of TCP is a crude hack, designed solely to reclaim just enough bandwidth in congested circumstances to let network management traffic (e.g. ICMP source quench) have a chance to get through and avoid congestive collapse. What it is NOT, and what you persist in misrepresenting it as, is some kind of fair-bandwidth-allocation mechanism for managing users on an ISP-scale network. It wasn't designed for that sort of job and so it's not remotely up to that sort of job - as the problems that followed the widespread deployment of P2P have /demonstrated/. Given that fact, it is a false syllogism on your part to ever infer that working around it is some attempt to avoid fair bandwidth sharing, because TCP CC simply *does* *not* *do* fair bandwidth sharing.
As indeed you have been forced to admit in this week's article, because the consistent stance on your part would be an obvious nonsense: that nobody should ever try and develop new protocols to solve the problems in TCP.
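(To spell out why per-user fairness isn't something TCP CC even attempts: the whole mechanism is a per-connection loop, roughly the Reno-style AIMD sketched below in toy Python - a textbook illustration, not any kernel's actual code. Open forty connections, as any BitTorrent client happily does, and you get forty 'fair' shares.)

    # Textbook loss-driven congestion control (Reno-style AIMD), heavily simplified.
    # One instance of this state exists per TCP connection, not per user or subscriber.

    MSS = 1460  # bytes; typical Ethernet-sized segment

    class AimdConnection:
        def __init__(self):
            self.cwnd = 1 * MSS        # congestion window: how much unacked data may be in flight
            self.ssthresh = 64 * 1024  # slow-start threshold

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += MSS                     # slow start: roughly doubles every RTT
            else:
                self.cwnd += MSS * MSS // self.cwnd  # congestion avoidance: ~1 MSS per RTT

        def on_loss(self):
            self.ssthresh = max(2 * MSS, self.cwnd // 2)
            self.cwnd = self.ssthresh                # multiplicative decrease on packet loss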
>"Morris also told the media this week that TCP only reduces its sending rate in response to packet loss, a common but erroneous belief. Like uTP, Microsoft’s Compound TCP begins to slow down when it detects latency increases."
That's a blatant red herring. Of what relevance is some new and not yet widespread protocol that is just out of the lab? It's not something that BitTorrent could start using just like that. CTCP is not the same thing as TCP-as-deployed, so Morris was completely accurate, and you can only pretend otherwise by falsely conflating two different things.
>"But it’s sensible to explore alternatives to TCP, as we’ve said on these pages many times, and we’re glad BitTorrent finally agrees."
WHAT! How can you expect us to swallow this blatant lie? Only last week you /excoriated/ BitTorrent for doing exactly what you're now congratulating them on. So /that's/ why there's no back-link! But did you really think nobody would be able to remember what you said only a few days ago?
No, really, this is beyond pathetic. Your article is titled "BitTorrent net meltdown delayed". The last one was called "BitTorrent declares war on VoIP, gamers". WTF actually changed between the two? Nothing about the protocol; you just realised you were wrong. This article represents the most mealy-mouthed, weaselly, two-faced, hypocritical "mea culpa" I've EVER read.
And who is this "we"? Have you got a mouse in your pocket? Or are you actually claiming to be stating ElReg editorial policy here?
>"P2P users, most of them pirates"
>"a licensing deal with the MPAA raised the shackles [sic] of private P2P trackers, causing them to temporarily ban uTorrent clients until they could be satisfied that the privacy of those trafficking in stolen content wouldn't be compromised."
As others have said, PPOSTFU. The 1970s called - Sony vs. Universal wants their definition of "substantial non-infringing uses" back.
Your experience with LANs does nothing to make you an authority on wide-area and backbone architecture. And your prejudiced, pseudo-political attitude makes your articles nothing but an exercise in ad hoc retrospective self-justification. Your arguments are uninteresting and lack force when they're merely a reflection of the path you were forced to travel to reach a pre-judged conclusion.
The important thing about uTP is not that it is a UDP-based protocol for moving data. Moving P2P data over UDP is not a terribly new idea (my company, Pando, has been doing P2P over UDP for years, as have others).
The thing that is interesting about uTP is that it's the first step in an effort to create an open industry standard (LEDBAT) for moving bulk data over UDP in a way that is more manageable by ISPs than TCP. This is a good thing, and something I'm extremely supportive of. Interestingly, while some have speculated that this could lead to "net meltdown", the intention of the LEDBAT effort is actually the opposite: to move bulk data over a distinct protocol that allows applications to detect congestion and "back off" so that more time-sensitive data can take higher priority, and so that congested routers can make more intelligent decisions than they can with TCP. This should ultimately be great for all involved.
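To make the "back off" idea concrete, the control loop at the heart of a delay-based scheme like LEDBAT boils down to something like the following - a rough Python sketch of the general approach, with constants of my own invention rather than anything from the draft:

    # Delay-based back-off in the LEDBAT spirit: watch queuing delay against a target
    # and reduce the send rate *before* the queue overflows and packets get dropped.
    # All constants here are illustrative assumptions, not values from any spec.

    TARGET_DELAY_MS = 100.0    # queuing delay we're willing to add to the path
    GAIN = 0.01                # how aggressively the rate tracks the target
    MIN_RATE, MAX_RATE = 10_000, 10_000_000   # bytes/sec bounds

    class DelayBasedSender:
        def __init__(self):
            self.base_delay_ms = float('inf')  # lowest one-way delay seen = uncongested path
            self.rate = 100_000                # current send rate, bytes/sec

        def on_delay_sample(self, one_way_delay_ms):
            self.base_delay_ms = min(self.base_delay_ms, one_way_delay_ms)
            queuing_delay = one_way_delay_ms - self.base_delay_ms
            # positive while the queue is shorter than the target (speed up),
            # negative once it grows past the target (slow down)
            off_target = (TARGET_DELAY_MS - queuing_delay) / TARGET_DELAY_MS
            self.rate *= 1.0 + GAIN * off_target
            self.rate = max(MIN_RATE, min(MAX_RATE, self.rate))

The point is that a TCP bulk flow keeps pushing until the queue overflows and something is lost, while this backs off as soon as the queue starts to build, leaving room for interactive traffic.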
There's a related, parallel effort to optimize P2P traffic by making it more intelligent at the network level, called P4P (http://www.dcia.info/activities/#p4p), and an associated group in the IETF (ALTO). I would invite anyone interested in optimizing P2P traffic to read up on these groups' work - they hold the promise of significantly improving the way that P2P and ISPs work together.
@David Hicks
"Latency MAY be the sign of congestion, or just a long link, doesn't mean you can't still have high transfer rate."
Yes. That's one reason there's so much research into this, instead of it already being finished. I've never heard of M$'s particular TCP, but TCP Vegas was just the beginning - TCP New Reno, TCP BIC, and TCP Cubic have all tweaked the congestion algorithms, and TCP Hybla is explicitly designed for slightly lossy satellite links. "They" are trying the trick of coming up with a TCP variant that responds to latency *increases*, so that it can keep up good speeds on high-latency but high-bandwidth connections (while backing off when the latency is increasing due to network load). The hard part is doing this properly with a normal TCP implementation at the other end (i.e. it's harder to ensure correct behavior from both ends when a connection is "New TCP"<->TCP rather than "New TCP"<->"New TCP").
In this case, it appears uTorrent will just bypass this whole issue by using its own congestion-control algorithms; it should be easier since 1) implementing their own TCP congestion control is right out - that's done in the kernel on most OSes, and I'm not sure a user app would get accurate enough latency info anyway, since it'd see a data stream rather than actual timestamped packets; and 2) it knows it'll be uTP<->uTP, so it shouldn't have to worry about TCP version incompatibilities making the congestion control misbehave.
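Over raw UDP, point 1 largely goes away, because the application sees (and can stamp) every datagram itself. A toy Python illustration of the idea - with a made-up 8-byte header, not uTP's actual wire format:

    import socket, struct, time

    HDR = struct.Struct('!Q')  # 8-byte send timestamp in microseconds (invented header, not uTP's)

    def send_datagram(sock, addr, payload):
        ts_us = int(time.time() * 1_000_000)
        sock.sendto(HDR.pack(ts_us) + payload, addr)

    def recv_datagram(sock):
        data, addr = sock.recvfrom(65535)
        sent_us, = HDR.unpack_from(data)
        now_us = int(time.time() * 1_000_000)
        # The two clocks aren't synchronised, so the absolute value means little,
        # but the way it *changes* over time shows a queue building, which is all
        # a delay-based congestion controller needs.
        delay_us = now_us - sent_us
        return addr, data[HDR.size:], delay_us

    # Both ends are plain UDP sockets, e.g.:
    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

The kernel never needs to be involved beyond ordinary UDP send and receive.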
@Tom 16:38 GMT
@Andrew Norton
et al
Your arguments in defense of the pig that is torrent are like those of little children. "Waah ... Mommy promised me all I can eat, even if that means stealing the food off of everyone else's plate. Mommy promised! She can't break her promise, now! Wahh!"
The best thing ISPs can do at this point is to rewrite every ToS they have to remove this common complaint and add comprehensive throttling language. If total crap programs like BitTorrent can get away with stealing all of the bandwidth from any given loop, then it's clear that ISPs need the ability to throttle torrent users' bandwidth, just to preserve the rights of other subscribers.
Just because Mommy promised doesn't mean that stealing everyone else's food is right. As I have said many times in discussions about torrent users, stop your selfish crying and start behaving like adults in the worldwide community. Have some respect for those around you.
Laird says: "...while some have speculated that this could lead to "net meltdown", the intention of the LEDBAT effort is actually the opposite: to move bulk data over a distinct protocol that allows applications to detect congestion and "back off" so that more time-sensitive data can take higher priority, and so that congested routers can make more intelligent decisions than they can with TCP. This should ultimately be great for all involved."
Effects don't flow directly from intentions; they come from actions. I support the LEDBAT and ALTO efforts, as well as related efforts down through the years such as IntServ, RSVP, DiffServ, ECN, and Re-ECN. But we need a whole lot more data and wider experiments that compare differing approaches before we declare any of these schemes successful.
I believe the principal flaw in uTP is going to turn out to be its exclusive focus on end-point measurement. There's not enough information there to make the decisions it wants to make, and a better approach would allow routers and shapers to communicate with the path selection process, as ALTO does. Not that ALTO is perfect either, mind you; it's a bit too piracy-friendly for my taste.
But as I've said in The Register for more than a year, TCP doesn't do the job for P2P and we need an alternative. Whether that's a combination of existing standards or a whole new thing remains to be seen.
Not that I know of, ratfox. There are studies that show the percentage of total network usage that's attributable to P2P, but they're all produced by, or on behalf of, companies that make network monitoring, network filtering or QoS equipment, or similar. Imagine if GM, Ford or Chrysler funded a study that said 'people need to drive more American cars' - you'd see the problem. Same thing.
As for the percentage that infringes copyright, that's impossible to tell, especially with BitTorrent. This is why CopySense doesn't work, for instance. A BitTorrent packet doesn't tell you whether it's from a Linux ISO, Steal This Film, or The Dark Knight. It's just impossible to tell.
I'm not even going to get into fair-use excerpts, such as a clip of a copyrighted video used in a fair-use-consistent way, whose BitTorrent packets might still be flagged as coming from the source material.
Mr Bennett might have 30 years' experience in general networking, but I've had 10 years' experience in this specific field, at the front lines (I help out with support in the µTorrent IRC channel, amongst other places), where I learn something new every day.
If the internet's infrastructure were designed, or rebuilt as it should be, like water systems in a perfect world, this would all be a very minor issue. Say the backbones can handle 100%, or more, of the bandwidth of at least one large-ish city's worth of users going full steam on their sold bandwidth (250,000 or so at about 7Mbit/s). Unless people are literally downloading multiple movies at once, instead of streaming like most law-abiding users, a normal citizen should only need 5Mbit/s max. And to be honest, even when downloading a large movie legitimately, VERY FEW are used to being able to do so within 30 minutes, or are willing to pay the extra fee for a better connection given how rare those occasions are.
So assuming the infrastructure were set up rationally, even if someone were to use up their entire bandwidth on piracy, it would go unnoticed by most other internet users, since they tend to average less than 500Kbit/s, especially as most internet games are still designed to work over modems - for those just-in-case kinds of customers who have the cash for games but not for the connection, because they live in the middle of the forest. Not to mention that, with how big most web apps have become in the name of security, browsers don't bring pages up nearly as fast as they used to on high-speed connections, and you know what, most users don't give a crap about that either! We've been around long enough to appreciate something coming up quickly enough that we don't feel the need to go do something else while we wait :P
Most of the people who DO complain have a very small understanding of how tech works, and think they know more than they do. The rest of us, who either have a clue or know we don't, you never hear from, because we DON'T CARE. And we're the majority of the consumers, but not the target market, apparently.
Telcos just need to turn on the darn fiber and make sure they are actually limiting users' connection speeds to reasonable levels without overcharging them for what the backbone cannot support. Mind you, I do love seeing my connection peak at around 5.6Mbit/s. It is supposed to be 10, but I actually don't give a crap. Why? Because I either almost never need the speed, OR the other side of the connection (the place I'm getting data from) doesn't have the available bandwidth.
The problem is the backbone and infrastructure, though I'm not denying that the reliance on TCP as a management protocol also slows things down considerably, and that, combined with artificial bottlenecking, often means that people who need connectivity don't always get their chance - such as during peak hours, when everyone has to learn to be patient, which they have. The few who complain are literally that: the few. Telcos/ISPs just need to adopt a more rational business model and improve their infrastructure by - now don't hit me - reinvesting their income in it.
An example for the simple-minded (no offense ;P): if you have a small company that does very little network use internally but conducts a lot of internet traffic, would you give all the workstations in the company gigabit connections while the company only has a T1 connection to the outside world? Well, exaggerated example aside, that's pretty much how the internet is currently set up at the backbone and town/village/city level. My home town has fiber, yeah, but when three people who aren't being throttled and can somehow get 40Mbit/s through their cable modems (one of whom I know personally) do their pirating, how much do you think is left for everyone else? I fear this is not an isolated case either, and since rebuilding the infrastructure requires a huge reinvestment by the companies charging us all they can for scattered reliability, they instead are just making people too afraid to use their network as much as they want to. They also have a lot of "IT" people who shouldn't be in the field in the first place because they're better at BSing their credentials than actually backing them up, and most managers don't have the time or knowledge to know better till it's too late.
And just wait, they'll say "if we open up the backbone wider, spammers and pirates (and probably terrorists too) will go crazy with all the power now available to them." Well, wasn't that what the management software/hardware was supposed to control?
Welcome to the western world, where profits are more important than product quality.
Phew. I wish I could elaborate on my comments a little more, but no one would want to read a five-page paper from a commenter on what is actually going on in more detail. I just hate idiots bitching about things they don't know anything about, or especially those who know just enough to be dangerous but not enough to be useful. (El Reg not included; they at least know when to learn and change tactics because, well, that's what learning is all about: growing and changing, and adjusting yourself and your ideas when you find out that what you thought was true isn't. Also something not that popular, at least here in the US, if you believe the media... unfortunately.) Staying the course is only useful for the short term, and anyone who doesn't know that is an idiot or suicidal!
No one's mummy promised anything. Instead, a company that people pay money to for a service promised this in exchange for the payment. The company promised a service; the customer promised to pay in return. That is what we adults call 'a contract'.
Now, 'oops', the company has oversold and cannot hold up its side of the contract, or is complaining about people holding it to the contract and utilising what has been sold. They want to break the contract. If you can't hold up your end of the deal - say you don't have enough money to go around all your bills - will the ISP take a smaller payment for the month because your funds are oversubscribed? Of course not. You pay your bill or you're cut off, and they will enforce that bill strictly.
I don't live outside my means. If I don't have enough of my commodity (money) to go around, I don't promise it in a contract. If a company doesn't have enough bandwidth to adequately deal with the customers it has, it shouldn't be making things worse by signing up new ones. That is the basis of this discussion.
ISPs oversold capacity to try and make things more affordable. The more they oversold, the cheaper their prices and the greater their profits. Yes, you can have a leased line for $400/month, or a stupidly high contention-rate line for $5/month; there has to be a balancing act. The ISPs like Comcast, arguing for these bandwidth measures, are the ones that have failed to strike a balance. Perhaps plans should be sold not by speed alone, but by an average speed rated as total throughput per month. 100GB-a-month download limit? Sell it as a 0.4Mbit down connection. People can then use their connection flat out all month and not breach the 100GB/month limit that is the ISP's network capacity limit. Sell on capacity, not peak speed.
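The arithmetic behind that figure is easy enough to sanity-check (back-of-envelope Python; I'm assuming a 30-day month and a cap measured in gigabytes):

    cap_gigabytes = 100                  # assumed monthly transfer cap
    seconds_per_month = 30 * 24 * 3600   # 2,592,000 s

    sustained_bits_per_s = cap_gigabytes * 1e9 * 8 / seconds_per_month
    print(round(sustained_bits_per_s / 1e6, 2), "Mbit/s")   # ~0.31 Mbit/s sustained

So 100GB a month is roughly 0.3Mbit/s around the clock; the 0.4Mbit figure above is just that, rounded up a little.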
The only problem with this sort of FAIR package naming is that the ISPs with poor infrastructure (you know, the ones we're meant to feel sorry for, what with all the naughty people using what they were sold) won't have very good plans - low rates and high prices - and still their businesses will suffer.
In technology you have to invest, and the complaining ISPs are not - not where it counts. They'd rather spend $250,000 on network-management tools to get two years of service out of the same pipes than $20M to get ten, plus better customer satisfaction. However, as Mr Bennett said on his blog, "ISPs upgrade their networks as users and markets require. If they spend too much money on upgrades, they get hammered by the stock market; they spend too little, they get hammered by their competitors." The problem is, most ISPs have very few competitors. If you don't like your Comcast connection, is there another cable ISP you can go to? No. Me, I have only one option for broadband, BellSouth/AT&T. There are no competitors. I can get satellite internet or dial-up, but they're not like-for-like products. They can spend as little as they want on infrastructure, as he says, and I'm f*cked. Oh, and remind me just how good the judgement of the morons on Wall Street is, please? Remind me who put us into an economic meltdown. That's right: the people the ISPs are trying to keep happy (rather than their customers).
Mr Bennett... Richard... please, do a little more research and have a little less to drink when you're going to write a piece on networking technologies. That way, you might avoid publishing a piece like this one.
Firstly, Compound TCP is a native part of the Vista and Windows Server 2008 (beta) TCP stack. Take a minute to think about that - Vista and Win2k8 SERVER beta versions... and it's disabled by default in Vista. Oh, and MS has made a hotfix for 64-bit XP... and Win 2k3 Server. Massive industry leaders in consumer computing, all four of them. Huge retail market share. (Don't forget, disabled by default in Vista. The command to enable it, by the way, is "netsh interface tcp set global congestionprovider=ctcp".[1] Which you have to execute with admin privileges in a command shell. Not really something you can type by mistake.)
Now, I've worked as a Cisco engineer for close to ten years and have never heard of Compound TCP. I'm not scared of learning and I'm happy to do follow-up research on stories about new technologies (you can't know everything, as you have demonstrated so spectacularly), so I followed your link; it was rubbish. I did a Google search and, fortunately, Wikipedia saved the day.[2]
Compound TCP is TCP/IP; the clue is in the title. I know it sounds sexy at first, but you have to read all the way to the end. Looking at the descriptions I could find, it seems that the client/host/endpoint (whichever name you prefer) is responsible for throttling the data flow based on timestamps or duplicate ACKs; it is, therefore, completely transparent to the network devices along the path.
TCP congestion avoidance and management will still be the only way to provide traffic management on those devices, and although VPNs and encrypted torrents prevent DPI, and the port customisation inherent in many BitTorrent clients prevents accurate port-based filtering/management, ISPs have very clever ways of determining who is 'hogging' the bandwidth on consumer ADSL links, and they are quite prepared to pare down a particular customer's share where the fair-usage policy has been breached.
IntServ, RSVP, DiffServ, ECN, and Re-ECN all require accurate per-packet marking, which is never going to be reliable on consumer ADSL networks, and ISP-side marking would require DPI, which is unlikely to happen. Add to that the various tunnelling technologies (which require pre-encryption classification) and it becomes unmanageable from a supplier's perspective.
For what it's worth, I agree that massive illegal file sharing needs controlling: there's no reason that some numpty downloading all 500,000 CSI episodes (or whatever) should send my connection to Xbox Live through the floor when we're both paying for reliable (and usable) internet access. Having said that, making sensationalist claims that the decision of one company to switch its base protocol will cause all delay-sensitive traffic to be held, unrecognised, in ever-increasing interface queues is just irresponsible. Writing a follow-up piece less than a week later (with all the hallmarks of a party-political U-turn) is just embarrassing.
Getting away from the technicalities, I would like to ask you about the assertion that you are "a network inventor who helped design THE modern, manageable local area network". "THE"? And how are you defining "modern"? And when did ISPs have much to do with a local area network, or indeed any corporate network that's not run over the internet using consumer ADSL services? So, unless you actually know Bob Metcalfe or David Boggs (and the lack of any piece in El Reg about Ethernet's 25th birthday this year suggests that you probably don't), maybe you should consider adding a qualifying statement to each article. Can I suggest: "he vaguely remembers reading some stuff about a LAN at Networkers once."
I liked the piece, but surely it should have been filed under 'Odds & Sods', or B-Journo-FH perhaps? If you want to write a technical piece, please have a chat with Verity. She, at least, has been through the Open University. For what that's worth.[3]
Jason
[1] Source: http://en.wikipedia.org/wiki/Compound_TCP#Supported_platforms
[2] Source: http://en.wikipedia.org/wiki/Compound_TCP
[3] Source: http://www.theregister.co.uk/2008/10/14/verity_stob_further_eduation
PS: @James Butler: I wasn't going to reply to your comments but, well... look, please, pretty please, do a business course... or finish high school... or (preferably) both. ISPs are there to make money. Simple. End of story. There are so many of them, and you can change ISPs in a week or two. If a customer doesn't like one, they have the choice to move on to another. The best ones are usually the little ones whose brand name isn't strong enough to retain customers on its own. Those are the ones that have to work at it. However, like their larger competitors, they quickly learn that customers will act like spoilt children when they are asked not to abuse the contention-based network that is ADSL. Contention. Key word. Look it up. That's why fair-use policies exist.
I would, however, like to congratulate you on not living outside your means. Well done you. It's very grown up, very 'adult'.
"But it’s sensible to explore alternatives to TCP, as we’ve said on these pages many times, and we’re glad BitTorrent finally agrees."
Not to mention your last piece...
"One thing that is certain is that uTP will not reduce the volume of traffic that P2P moves across the internet, something that would be commercial suicide for a company that depends heavily on aggressive file sharers, and pirates, for its popularity."
No, you still don't get it. It WILL reduce the traffic, for the same amount of useful data effectively transmitted. And your filesharers=pirates insinuation is preposterous. Quite insulting, too. Read more (the comments on your last piece, available here: http://www.theregister.co.uk/2008/12/01/richard_bennett_utorrent_udp/ might be a good start).
"The solution for ISPs is simple as was thought of year ago.
A caching torrent proxy.
It was common with web pages right up until the point that almost every page became dynamic meaning the exact same page was almost never accessed twice."
...despite the fact that almost no web page actually needs to be dynamic. If you want to pick on a legitimate target among "those who broke the internet", try the Web 2.0-tards, who made all text dynamic, all images Flash, and everything about 10 times larger and slower than it needs to be. In particular, can we PLEASE locate the person who first reinvented HTML's <a> tag with a piece of JavaScript, and disembowel them?
So, yes, Nathan, proxying is indeed a sane and rational answer to this problem, so you can be sure that someone somewhere is working on a version of BitTorrent that deliberately makes the segments slightly different - ostensibly to circumvent the evil traffic managers, of course.
Compound TCP is *one example* of a TCP implementation that has a latency-driven congestion avoidance state, and Vegas and Cubic are others. The point is that the alleged innovation of uTP is nothing new.
The modern, manageable local area network is Ethernet over twisted pair and Wi-Fi. And yes, I've had dinner at Bob Metcalfe's house, not that it's particularly relevant to the current discussion.
Before spouting off about petty details that fall outside the scope of your comprehension, try to develop some analytical skills.
"So we're back to the same problem as before " ... By Tom Posted Friday 5th December 2008 16:38 GMT
Well said, Tom. IT Piracy can also be a Healthy Renegade in QuITe Rougish Competition ..... 42 Provide AI Beta Service Product .... Virtual Control Program..... for Governments Hire/Pay as you Go.
Bandwidth Cap
No "fair use" policy, no Sandvine or traffic shaping, no proxying or deep-packet inspection required. Just sell the service as 10/20/50/100/250GB/month over ADSL1/ADSL2+/cable. Split it into peak/off-peak if you want, to help even things out over the day - most torrent programs have bandwidth schedulers. Or, if you want to be nice, have unlimited off-peak traffic with a cap just at peak times.
It's simple, effective and cheap.
Good fences make good neighbours and all that. Just let everyone know where they stand.
Even Paris doesn't know why this is so difficult.
Oh yes, as a consumer I'd like bandwidth-reservation negotiation available for my link so I can prioritise VoIP traffic over the bit of string that is my ADSL connection. It would be rather cool to have my Asterisk/VoIP router reserve bandwidth during a phone call and release it when the call is finished. OK, even just QoS would be nice.
You're absolutely right! You know, all I see on the roads today are big 18 wheeler trucks; you know that almost all of the congestion on the roads is caused by these massive trucks carrying counterfeit goods. And now trucks have this cloaking technology that makes them look like VW Beetles to the police, they're going to be everywhere! Woe is us, we won't be able to use the roads! We'll never be able to visit grandma at home, all because of these stupid trucks, all of them carrying counterfeit goods!
(flipside)
You know, all I see on the roads today are cars; you know that almost all of the congestion on the roads is caused by these massive piles of cars carrying pirates from one place they've raided to the next. And now cars have this cloaking technology that makes them look like 18 wheelers to the police, they're going to be everywhere! Woe is us, we won't be able to use the roads! We'll never be able to deliver our counterfeit baby formula to the stores before it expires, all because of these stupid cars, all of them carrying pirates!
Stop blaming everyone else for your ISP sucking. You know what? Because of your preference for using the internet for everything Teletubbies-related, I can't download my torrents of porn - it's like you're denying my free speech. Tarring everyone but yourself with feathers just makes you look like the odd one out.
"Except last week, when you ranted and raved that bypassing TCP CC would cause the immediate death of the 'net, film at 11. HTH."
Whether a new scheme improves on the current, diverse implementations of TCP CC or degrades them depends completely on how it's done. Field reports on uTP indicate that it bypasses traffic shaping, which is great for the individual pirate and not so great for the VoIP or gaming user. While BT, Inc. say this is an accident, that's really beside the point.
The fact that BT, Inc. has abandoned TCP because it doesn't effectively manage congestion is a very telling admission, because it makes the argument that traffic shaping is an essential part of the Internet and not just an annoying means of monopoly rent-seeking.
It doesn't really matter whether it's water, cack, loo roll or whatever: you can only get so much in a tube at once.
At a guess, most of us have signed up to 50:1 contention at a nominal 8Mb/s, so the most we can expect to download in a month is 86400 * 31 * 8000000 / 50 / 8 bytes, a mere 53.568GB, irrespective of protocol, and without time off to view the downloads.
If you've paid for more, you have a case for moaning; if not, please bear in mind you'll get even less in February.
Paris 'cos she'd be a jolly good overhead.
>"The fact that BT, Inc. has abandoned TCP because it doesn't effectively manage congestion is a very telling admission, because it makes the argument that traffic shaping is an essential part of the Internet and not just an annoying means of monopoly rent-seeking."
'Telling admission'? You're about a year behind on the entire debate; everyone from freetards to Comcast accepts that content-neutral bandwidth sharing is legitimate. The argument was over the targeting of specific applications and the forgery of network packets, which even ISPs now accept was crude, unnecessary and needlessly discriminatory against some customers as compared to others.
(The other argument is about trade descriptions and whether it's legitimate to advertise something as "unlimited" when it is in fact deliberately and by design limited).
"The fact that BT, Inc. has abandoned TCP because it doesn't effectively manage congestion is a very telling admission, because it makes the argument that traffic shaping is an essential part of the Internet and not just an annoying means of monopoly rent-seeking."
It does no such thing. It makes the argument that TCP was never designed to provide "traffic shaping" in the first place; it was designed to provide reliable communication over a loss-prone connection. The fact that some rudimentary congestion control was tacked on after the fact due to a flaw in the design of the internet does not in any way imply that traffic shaping is essential to the internet. The only thing traffic shaping is essential to is the well-being of some ISPs' broken business model, in which they sign contracts with customers to provide services that they lack the resources to deliver.
"The fact that some rudimentary congestion control was tacked on after the fact due to a flaw in the design of the internet does not in any way imply that traffic shaping is essential to the internet"
Oh?
So the congestion control code that was hacked into TCP by Van Jacobson back in 1987 was just for the fun of it and has no operational value to the Internet at all, just as the original method of signalling congestion by IP-level error message was just for kicks?
These Internet protocols are so full of jokes, you never know when you've found one.