HTTP Speed+Mobility
HTTP S&M? The internet wants it faster.
The HTTP protocol - one of the web's foundation specifications - is getting a speed and security revamp. The Internet Engineering Task Force (IETF) is this week holding meetings on what it's calling HTTP 2.0, "a major new version" of the ubiquitous data transfer protocol. The changes apply to HTTPbis, the core of the …
One way to speed up web site loading is to load as few files as possible - so what about adding an extension to HTTP and HTML to allow a web site to specify files within a container (e.g. a tarball, or a ZIP), and to allow the browser to fetch the whole container?
So even on a site like the Reg you could have one container with all the static stuff (images like the boffin graphic on this post), another container with all the JS and CSS (since that might change more often), and then the content. As with any file, your browser would only re-fetch the image and scripting containers if they'd been modified: one fetch, once, and then the content can work with them.
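The container idea could be sketched with an ordinary ZIP. This is a minimal Python illustration, not a real protocol extension; the asset names and bytes below are invented:

```python
import io
import zipfile

# Hypothetical server side: glom the site's static assets into one container.
assets = {
    "img/boffin.png": b"\x89PNG...pretend image bytes...",
    "css/site.css": b"body { margin: 0; }",
    "js/app.js": b"console.log('hello');",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    for name, data in assets.items():
        z.writestr(name, data)

container = buf.getvalue()  # served as a single HTTP response

# Client side: one fetch, then every asset is available locally.
with zipfile.ZipFile(io.BytesIO(container)) as z:
    unpacked = {name: z.read(name) for name in z.namelist()}
```

The browser would then cache the whole container and only re-fetch it when its validator says it changed.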
A lot of modern data formats are compressed about as much as is possible, so zipping wouldn't save much space. Zipping text is great, but how much bandwidth is text compared to images and video?
As for downloading all the images a site is likely to give you so that you don't have to make repeat requests: that seems like a lot of wasted bandwidth. Particularly on mobile, where I want just the bare minimum to get the page to display (given the slow speed of my provider, for one).
I think he meant using zip/tar simply as a method for glomming a bunch of files together. Every request for a new file adds some additional traffic to manage that request, which isn't helping put your webpage up. If you can get a webpage with one request for a large-ish file, instead of a thousand requests for small files, you've saved yourself a decent chunk of data. Each request also takes some time to hit the server and come back, so it would speed things up too.
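The per-request overhead argument can be put in rough numbers. Every figure below is an invented back-of-the-envelope assumption, not a measurement:

```python
HEADER_OVERHEAD = 500   # assumed bytes of request/response headers per fetch
ROUND_TRIP = 0.05       # assumed latency per serial request (50 ms)
N_FILES = 1000

# Header bytes spent fetching a thousand small files versus one large one.
saved_bytes = (N_FILES * HEADER_OVERHEAD) - (1 * HEADER_OVERHEAD)

# Without pipelining, each extra request also pays a full round trip.
saved_seconds = (N_FILES - 1) * ROUND_TRIP

print(f"{saved_bytes} header bytes and {saved_seconds:.2f} s saved")
```

Under these assumptions, glomming saves roughly half a megabyte of headers and the better part of a minute of serial round trips.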
This is not too dissimilar to what actually happens, in effect.
With HTTP pipelining and transparent compression, you end up with a single content stream that has been compressed.
This is totally transparent to the user, website and client side developer tooling.
Plus, it has the added advantage that files are available to render as soon as they have been downloaded.
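The trade-off between per-response compression and one compressed container can be shown with a quick sketch; the tiny payloads are invented for illustration:

```python
import gzip

# Three tiny "files" as they might travel down one pipelined connection.
files = [b"<html>...</html>", b"body { margin: 0; }", b"console.log('hi');"]

# Transparent compression (Content-Encoding: gzip) applied per response:
# each file is usable the moment its bytes arrive.
per_file = sum(len(gzip.compress(f)) for f in files)

# Versus one compressed stream for the whole lot: fewer gzip headers, so
# usually smaller, but nothing renders until the container is unpacked.
stream = gzip.compress(b"".join(files))
```

For files this small the single stream wins on size, at the cost of the early-render advantage the post describes.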
I think the low-hanging fruit are gone.
HTTP is already designed to permit static content to be cached, so the "container" you are looking for already exists and is called a "file". If the prospect of lots of little files annoys you, there's another existing standard (mhtml) for storing multiple elements in a single file, that is supported to some degree by most browsers.
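The "already exists" point is HTTP's conditional GET. A minimal sketch of that server-side logic (dates and body are invented):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(last_modified, if_modified_since=None):
    """Sketch of the conditional GET that HTTP caching already provides.

    Returns (status, body); 304 means the cached copy is still good."""
    if if_modified_since is not None:
        if last_modified <= parsedate_to_datetime(if_modified_since):
            return 304, b""
    return 200, b"...static file contents..."

modified = datetime(2012, 3, 1, tzinfo=timezone.utc)

# First visit: no validator, full response.
status, body = respond(modified)

# Revisit: the browser sends If-Modified-Since; the server answers 304.
later = format_datetime(datetime(2012, 3, 28, tzinfo=timezone.utc))
status2, body2 = respond(modified, later)
```

So unchanged static files already cost one request/response header pair and no body on revisits.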
I also think the high-level fruit are likely to remain unreachable.
Sites that insist on base64-encoding the client's autobiography in the URL, or putting time-dependent trivia on every page, or offering personalised content to each visitor, are broken at the application-level (authorship) and cannot be fixed by changing the transport protocol.
If we are talking about XML's inventors, I think it is only fair to consider the context of the work.
Compared to HTML, XML is a thing of beauty. Its regular structure makes it easier to parse. Its extensible structure means that this easiness ought to persist over a few generations. The fact that it has been abused more than Jimi Hendrix' guitar is no reflection on its original design.
SMTP is way overdue an overhaul. It's 30 years old and, despite a few additions to the protocol made four years ago, is completely open to abuse, relying on after-thoughts such as DomainKeys and SPF in an attempt to check sender authenticity. Neither those nor the SMTP extensions are mandatory for most servers to send, relay or receive email. It needs a complete overhaul that builds sender authentication and other security/encryption features into the protocol itself, with a kill date for SMTP v1.
Within the current SMTP protocol framework there are no obstacles to deploying a global X.509 certificate-authenticated MTA infrastructure using TLS, which is implemented by most popular MTAs, allowing only legit (white-listed) MTAs into the infrastructure.
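The client side of one such certificate-authenticated hop could look like this sketch. The relay host name is invented, and a real deployment would check presented certificates against the white-list:

```python
import smtplib
import ssl

# Require and verify the peer's certificate on every hop.
ctx = ssl.create_default_context()
ctx.verify_mode = ssl.CERT_REQUIRED   # refuse peers without a valid certificate
ctx.check_hostname = True

def relay_message(host, mail_from, rcpt_to, msg_bytes):
    """Hypothetical: hand one message to a white-listed relay over TLS."""
    with smtplib.SMTP(host, 25) as smtp:
        smtp.starttls(context=ctx)    # upgrade the session, verify the cert
        smtp.sendmail(mail_from, rcpt_to, msg_bytes)
```

The pieces (STARTTLS, certificate verification) are all standard; what's missing is making any of it mandatory.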
There's the little practical detail of the current problems with SSL CAs, and the slightly bigger problem that large-scale SSL certificate deployments are stubbornly difficult (or that the deployment of the PKI-understanding mutation throughout the human population is taking longer than expected), but other than that, it's a piece of cake.
What it'll give you, though, until large chunks of the Internet start refusing mail from other large chunks, is authenticated spam. Happy happy happy, joy joy joy. Maybe if harbouring spam senders got a country slapped with sanctions or an invasion it might help, but even then I have my doubts.
Divide and conquer.
If I send all my authenticated mail through a relay (to whom I and lots of others pay a small fee, so it's a viable business model) and countersign it, *recipients* only have to whitelist the relay. (Recipients can complain to the relay people in the reasonable expectation that the relay people will pursue the matter rather than see their whitelisting threatened by a rogue customer.)
For further simplification, relays can aggregate with other relays. Also, I (and they) may have deals with other relays at the same level of the hierarchy, to avoid a single point of failure.
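The countersigning-plus-whitelist idea can be sketched with an HMAC standing in for a real public-key signature; the key and relay names are invented:

```python
import hashlib
import hmac

RELAY_KEY = b"secret shared by the relay and its subscribers"  # invented
WHITELISTED_RELAYS = {"relay.example.net"}                      # invented

def countersign(message):
    """The relay countersigns mail it has vetted from its paying customers."""
    return hmac.new(RELAY_KEY, message, hashlib.sha256).hexdigest()

def accept(relay, message, signature):
    """Recipient side: trust mail only if a white-listed relay signed it."""
    if relay not in WHITELISTED_RELAYS:
        return False
    expected = hmac.new(RELAY_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

msg = b"From: me\r\n\r\nHello"
sig = countersign(msg)
```

Recipients then maintain one whitelist entry per relay rather than one per sender, which is the divide-and-conquer point above.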