Patrick McManus, @mcmanusducksong or pmcmanus@mozilla.com
It turns out the Internet is not worthy of Jon Postel's legacy: the robustness principle is, at best, half right.
// wrap the multiplexed input stream with a buffered input stream, so
// that we write data in the largest chunks possible. this is actually
// necessary to workaround some common server bugs (see bug 137155).
rv = NS_NewBufferedInputStream(getter_AddRefs(mRequestStream), multi,
                               nsIOService::gDefaultSegmentSize);
Please keep up the communication on H2 interop issues and hard fail whenever possible. Consider it herd immunity.
The end-to-end principle is important, and cryptography is its strongest guardian.
— AGL
Too large a portion of the actual work of a browser HTTP engine involves connection scheduling, an implicit negotiation with the path and the peer.
Total Page Load Time is a lousy metric.
Focus on latency to usable. SpeedIndex is great.
Inlining defeats priority. Pipelines too.
Resources have relationships that the browser can understand but that cannot be fully expressed in HTTP, though H2 is better at this.
Parallelism mitigates queueing delays. H2 ftw.
Cross-origin parallelism is the wild west.
Severe tension exists around TCP fairness, even though TCP itself is not fair. Relatedly: the migration from 6 connections in H1 to 1 in H2.
TCP termination is easier than ever before.
Senders are too conservative - is slow start really congestion control? A lot of my data is in slow start.
Senders are too aggressive - even so, we can melt down anyway via parallelism. Witnessed 120 shards at IW > 10 and the induced 90% loss.
Senders are blind - bufferbloat and RT sadness ensue.
Pacing handshakes and HTTP requests absolutely does help at the application layer.
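One way to pace at the application layer is to release queued dispatches on a fixed interval rather than in a single burst. A minimal sketch - this is not Necko's actual scheduler, and the interval and API are illustrative:

```cpp
#include <chrono>
#include <functional>
#include <queue>
#include <thread>

// Minimal request pacer: instead of dispatching all queued work in one
// burst, release it at a fixed interval so the network sees a smoother
// arrival pattern.
class RequestPacer {
public:
  explicit RequestPacer(std::chrono::milliseconds interval)
      : mInterval(interval) {}

  // Queue a dispatch action (e.g. start a handshake or write a request).
  void Enqueue(std::function<void()> dispatch) {
    mQueue.push(std::move(dispatch));
  }

  // Drain the queue, sleeping mInterval between dispatches.
  void Run() {
    while (!mQueue.empty()) {
      mQueue.front()();
      mQueue.pop();
      if (!mQueue.empty())
        std::this_thread::sleep_for(mInterval);
    }
  }

private:
  std::chrono::milliseconds mInterval;
  std::queue<std::function<void()>> mQueue;
};
```

Even a few milliseconds between handshakes or request writes keeps the burst out of the bottleneck queue.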
At the transport layer, I'm ready for something better than TCP that is also deployable, portable, and secure.