HTTP in Practice: From the Browser

SNEAK PREVIEW

  • First, Be Safe To Protect Your Human and the Internet.
  • Second, Be Robust To Interop In the Face of Reality.
  • Third, Be Fast to Enable Delightful Things.

Finale: Browser Telemetry Nugget Trivia

Mission 1: Stay Safe

Oh dear...

Mission 2: Be Robust

Postel's Shadow

It turns out the Internet is not worthy of Jon Postel's legacy. At best, half right.

        // wrap the multiplexed input stream with a buffered input stream, so
        // that we write data in the largest chunks possible.  this is actually
        // necessary to workaround some common server bugs (see bug 137155).
        rv = NS_NewBufferedInputStream(getter_AddRefs(mRequestStream), multi,
                                       nsIOService::gDefaultSegmentSize);
        

Please keep up the communication on H2 interop issues and hard fail whenever possible. Consider it herd immunity.

Bugs Just About Delimiters!

  • CRLF, or LF
  • Missing zero-chunk
  • 204/304 with body
  • Unterminated gzip encoding
  • Content Length off by trailing CR(LF)
  • Redirects with Content Length of Target (seriously)
  • Header Line Continuation

Cannot Deploy

The end-to-end principle is important, and cryptography is its strongest guardian.

Connection Management

Too large a portion of the actual work of a browser HTTP engine involves connection scheduling in an implicit negotiation with path and peer.

  • NAT Timeouts
  • Server Timeouts
  • Client Timeouts
  • 6? Per Origin
  • Pooling
  • Sharding
  • SSE
  • Hanging Get
  • Anonymous
  • TCP KA
  • Congestion
  • Power
  • Late Binding
  • Mobility
  • VPN
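The "6? Per Origin" rule above is where queueing enters the picture. A toy sketch (names are mine, not Necko's) of how a per-origin cap forces requests to wait — this waiting is exactly the HTTP/1 queue delay measured later in the deck:

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <string>

// Sketch of the classic per-origin parallelism rule. Requests beyond
// the cap sit in a per-origin queue until a transaction completes.
class OriginScheduler {
 public:
  explicit OriginScheduler(int maxPerOrigin) : mMax(maxPerOrigin) {}

  // Returns true if the request may run now, false if it must queue.
  bool Dispatch(const std::string& origin, int requestId) {
    if (mActive[origin] < mMax) {
      ++mActive[origin];
      return true;
    }
    mPending[origin].push(requestId);
    return false;
  }

  // Called when a transaction finishes. Promotes one queued request if
  // any (handing it the freed slot) and returns its id, else -1.
  int Complete(const std::string& origin) {
    if (!mPending[origin].empty()) {
      int next = mPending[origin].front();
      mPending[origin].pop();
      return next;  // slot transfers directly to the waiter
    }
    --mActive[origin];
    return -1;
  }

 private:
  int mMax;
  std::map<std::string, int> mActive;
  std::map<std::string, std::queue<int>> mPending;
};
```

With a cap of 6 and a page of 80 objects, the tail of that queue is where multi-second delays come from; H2's single multiplexed connection sidesteps it.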

Mission 3: Be Fast

Priority Matters

Total Page Load Time is a lousy metric.

Focus on latency to usable. SpeedIndex is great.

Inlining defeats priority. Pipelines too.

Resources have relationships that the browser can understand but that cannot be fully expressed in HTTP, though H2 is better at this.

Parallelism Matters

Parallelism mitigates queueing delays. H2 ftw.

Cross-origin parallelism is the wild west.

Severe tension exists around TCP's fairness, even though TCP is not actually fair. Relatedly: the migration from 6 connections in H1 to 1 connection in H2.

TCP termination is easier than ever before

Congestion Control Matters

Senders are too conservative - Is slow start really Congestion Control? A lot of my data is in slow start.

Senders are too aggressive - Even so, we can melt down anyhow via parallelism. Witnessed 120 shards at IW > 10 and the induced 90% loss.

Senders are blind - bufferbloat and RT sadness ensue.

Congestion Control - A New Hope

Pacing handshakes and HTTP requests absolutely does help at the application layer.
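A minimal sketch of that application-layer pacing (illustrative, not a real API): instead of firing all handshakes at once — the 120-shard meltdown above — space the start times by a fixed interval.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: schedule event start times at least intervalMs apart,
// smoothing a burst of handshakes/requests into a paced stream.
class Pacer {
 public:
  explicit Pacer(uint64_t intervalMs)
      : mIntervalMs(intervalMs), mNextAllowedMs(0) {}

  // Given the current clock (ms), return when this event may start.
  uint64_t ScheduleMs(uint64_t nowMs) {
    uint64_t start = nowMs > mNextAllowedMs ? nowMs : mNextAllowedMs;
    mNextAllowedMs = start + mIntervalMs;  // reserve the next slot
    return start;
  }

 private:
  uint64_t mIntervalMs;
  uint64_t mNextAllowedMs;
};
```

A burst of N events thus spreads over roughly N × interval instead of hitting the bottleneck queue simultaneously.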

At the transport layer, I'm ready for something better than TCP that is deployable, portable, and secure.

Data "Nuggets" Trivia

By Ludovic Bertron from New York City

Basic Characterization Info

  1. Median Page Size: ???
  2. Median Page Objects: ???
  3. Percent of HTTPS:// Pages (Navigations): ???
  4. Percent of HTTPS:// Transactions: ???
  5. (This is where you guess the answer.)

Basic Characterization Info

  1. Median Page Size: 1.7MB
  2. Median Page Objects: 80
  3. Percent of HTTPS:// Pages (Navigations): 36
  4. Percent of HTTPS:// Transactions: 60
  5. (This is where I provide answer.)

Proxies and Caches

  1. Browser Cache Hit Rate: ???
  2. Percent of 304 Responses: ???
  3. Percent of Failed Validations: ???
  4. Percent of Explicit Proxy Use: ???

Proxies and Caches

  1. Browser Cache Hit Rate: 31%
  2. Percent of 304 Responses: 4%
  3. Percent of Failed Validations: 4%
  4. Percent of Explicit Proxy Use: The floor is 1.5% HTTP plus .02% SOCKS. But really unknown - both transparent proxies and enterprise/privacy telemetry skew come into play.

HTTP Response Version

  1. Percent H2: ???
  2. Percent SPDY: ???
  3. Percent HTTP/1.1: ???
  4. Percent HTTP/1.0: ???
  5. Percent HTTP/0.9: ???

HTTP Response Version

  1. Percent H2: 9%
  2. Percent SPDY: 3%
  3. Percent HTTP/1.1: 87%
  4. Percent HTTP/1.0: 1%
  5. Percent HTTP/0.9: .001%

TCP Handshake (stand-in for RTT)

  1. 25th Percentile: ???
  2. Median: ???
  3. 75th Percentile: ???

TCP Handshake (stand-in for RTT)

  1. 25th Percentile: 22ms (54ms for mobile)
  2. Median: 53ms (107ms for mobile)
  3. 75th Percentile: 117ms (213ms for mobile)

Connection Reuse

  1. Percent HTTP/1 Connections that serve 1 transaction: ???
  2. Percent H2/ SPDY Connections that serve 1 transaction: ???

Connection Reuse

  1. Percent HTTP/1 Connections that serve 1 transaction: 73%
  2. Percent H2/ SPDY Connections that serve 1 transaction: 7%
  3. RST_STREAM independent of TCP is the most underappreciated feature of H2.

    It does get better from the transaction POV. The average H1 transaction is on a connection w/ >=10 other transactions. 75% of transactions share their connection overhead w/ at least one other transaction. For SPDY/H2 it is 20 or more, and 99% see some sharing.

HTTP/1 Queue Delay (Desktop)

  1. 5th Percentile: ???
  2. 25th Percentile: ???
  3. Median: ???
  4. 75th Percentile: ???
  5. 95th Percentile: ???
  6. This is the amount of time a request spends waiting blocked on HTTP parallelism rules (e.g. the classic 6-connections-per-origin rule). It does not count time making progress toward DNS, TCP setup, TLS, etc.

HTTP/1 Queue Delay

  1. 5th Percentile: 1ms
  2. 25th Percentile: 1ms
  3. Median: 21ms
  4. 75th Percentile: 221ms
  5. 95th Percentile: 3390ms (DANG!)
  6. Now, how about SPDY/H2?

Spdy/H2 Queue Delay

  1. 5th Percentile: 1ms
  2. 25th Percentile: 1ms
  3. Median: 1ms
  4. 75th Percentile: 3ms
  5. 95th Percentile: 17ms (SWEET - w/ Server Based Priority)

Thanks - What do you see?

Red panda (Firefox) Photo by Yortw