Re: Not sure about this
However, I suspect that if HTTP moves to a transport that delivers requests and responses out of order, a fair few applications written against HTTP will break.
The author anticipates this being done as part of any new transport layer as a key requirement. From the article:
RPC needs to handle lost, misordered, and duplicated messages ... and fragmentation/reassembly of messages must be supported
I have to agree with AlanSh here: it doesn't seem appropriate to break layers of abstraction for a single use case. There is an argument that a third transport layer is needed, a reliable message-passing service, but that doesn't need to be intrinsically linked to an RPC mechanism.
Yes, you need error checking and packet fragmentation/reassembly. You also want a fire-and-forget transmission model - "send these few hundred kilobytes to that machine and don't trouble me further unless there is a problem". Whether you layer that on top of or alongside UDP is a judgement call.
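To make the fragmentation/reassembly point concrete, here's a minimal sketch of message framing layered over UDP-sized datagrams. Everything here is invented for illustration - the function names, the 12-byte (msg_id, seq, total) header, and the 1200-byte MTU guess - it's just one plausible shape such a layer could take, tolerating reordered and duplicated datagrams as the article demands:

```python
import struct

MTU = 1200  # assumed conservative payload size, to dodge IP-level fragmentation
HDR = struct.Struct("!III")  # hypothetical header: msg_id, seq, total fragments

def fragment(msg_id: int, payload: bytes, mtu: int = MTU) -> list[bytes]:
    """Split a message into datagrams, each prefixed with (msg_id, seq, total)."""
    chunk = mtu - HDR.size
    parts = [payload[i:i + chunk] for i in range(0, len(payload), chunk)] or [b""]
    total = len(parts)
    return [HDR.pack(msg_id, seq, total) + p for seq, p in enumerate(parts)]

def reassemble(datagrams: list[bytes]) -> bytes:
    """Rebuild a message from datagrams arriving in any order, with duplicates."""
    seen: dict[int, bytes] = {}
    total = None
    for d in datagrams:
        _msg_id, seq, total = HDR.unpack_from(d)
        seen.setdefault(seq, d[HDR.size:])  # duplicated fragments are ignored
    if total is None or len(seen) != total:
        raise ValueError("message incomplete")  # a real layer would retransmit
    return b"".join(seen[i] for i in range(total))
```

Loss handling (timers, acks, retransmission) is exactly the part deliberately left out above, and it's where most of the real design work in any such transport would go.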
Going for a new protocol at the transport layer introduces issues of its own. The first that comes to mind is just how big these messages can get. Potentially you could end up with tens of megabytes of data sitting inside operating system or firewall buffers - an issue that doesn't arise in the byte-stream model, where to a first approximation data is consumed as it is received.
If there's movement towards that, I certainly wouldn't trust Google to design it. History shows they tend to have tunnel vision towards their own use cases. Any solution needs to scale not only to servers and user devices but to small IoT devices with maybe 100 KB of RAM.