QUIC vs TCP


QUIC

QUIC Is Coming – a New Internet Performance Protocol That Can Triple Performance Once Enabled in the Browser.

There are steep security impacts, because QUIC reduces what firewalls can inspect – consider it an “attractive nuisance” until firewalls catch up.

A new Internet performance protocol is coming from Google, Facebook and Internet infrastructure firms such as Fastly and Cloudflare. What began as a weekend experiment by Google engineer Jim Roskind is now advancing through the Internet’s standards process. The top 1% of Internet services – Google Search, YouTube, Gmail, Facebook and others – already serve data over QUIC and eagerly await end-user adoption.

QUIC layers the features and performance optimizations of HTTP/3 and TLS 1.3 on top of TCP/IP’s simpler UDP (User Datagram Protocol), replacing TCP. TCP is not going away! TCP is a fabulous protocol that provides reliable communication for most Internet applications. QUIC is an optimized consolidation of TCP, SSL/TLS and HTTP functions in one new package. Google and Facebook are operational and ready to deploy to billions of people – but the protocol is not yet fully standardized, even though it is working on the top 1% of sites today: Facebook, Google Search, Gmail and YouTube.

Servers benefit greatly from the efficiency and performance gains – but end-user adoption is the key to the big service providers realizing the protocol’s full performance and financial upside.

We have prepared a packet-by-packet detailed analysis providing the data points to help users decide when to shift their browsers to QUIC – it is off by default.

Major Internet browsers support QUIC today via experimental options that are super easy to enable.

The design touts latency benefits, and our detailed forensic benchmark tests bear that out. There are trade-offs to consider, which we discuss below to help you make an informed decision.

QUIC History

QUIC (pronounced “quick”) is an acronym standing for Quick UDP Internet Connections.

Roskind implemented and deployed QUIC at Google in 2012 and described it to the IETF (Internet Engineering Task Force) in 2013; the IETF subsequently took it up as an industry-collaborated standards effort in 2016. Here is a link to the latest draft (Nov 2, 2020): https://datatracker.ietf.org/doc/draft-ietf-quic-applicability/

IETF Standards

IETF Internet standards follow a process with defined steps and names. An Internet-Draft (I-D) may become a Proposed Standard, then a Draft Standard, and finally an Internet Standard. All are published as RFCs, some of which are selectively or experimentally implemented in products. Fundamental RFCs also define the administrative procedures for promulgating standards.

Latency

Click a link and your browser connects to a server, starting a back-and-forth connection process. TCP connections experience latency based on the client’s distance: signals travel at roughly 70% of the speed of light, and with routing stops along the way, the trip takes about 1 millisecond per hundred miles. Connecting from San Francisco to a server in New York might take 30 milliseconds one way. The server takes 1 millisecond to form its reply, which then makes the 30-millisecond return trip, for a total of 61 milliseconds.

Add up 20 of those transactions and you are over 1.2 seconds: 61×20=1,220 milliseconds, and a thousand milliseconds make a second.
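The arithmetic above can be sketched in a few lines of Python. The 1-ms-per-100-miles figure and the 1-ms server processing time are the article’s rules of thumb; the 3,000-mile San Francisco-to-New York distance is a rough assumption.

```python
# Rough round-trip latency estimate, using the article's rule of thumb:
# signals travel at ~70% of the speed of light, about 1 ms per 100 miles.

MS_PER_100_MILES = 1.0       # propagation estimate from the article
SERVER_PROCESSING_MS = 1.0   # time for the server to form a reply

def round_trip_ms(distance_miles: float) -> float:
    """One request-reply volley: trip out, server processing, trip back."""
    one_way = distance_miles / 100.0 * MS_PER_100_MILES
    return one_way + SERVER_PROCESSING_MS + one_way

# San Francisco to New York is roughly 3,000 miles.
single = round_trip_ms(3000)   # 30 + 1 + 30 = 61 ms
twenty = 20 * single           # 1,220 ms, over 1.2 seconds
print(f"one volley: {single:.0f} ms, twenty volleys: {twenty:.0f} ms")
```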

Sliding Window Protocols

Sliding window protocols, like TCP (Transmission Control Protocol), send multiple packets one after another when there are plenty of bytes to send, without waiting for a reply before sending the next packet. If only a small number of bytes needs to be sent – fitting in one packet of 1,500 bytes or less – then each packet is sent and the sender waits for the response before sending the next. That is what we term a “chatty” or “request-reply” transaction: it voids the sliding window of back-to-back bytes that overcome latency. Video and large file transfers are examples of where the sliding window works. If a human is involved, we cannot interact fast enough to create enough bytes to fill a sliding window; but when we open or store a large file, the window slides bytes across the network one after the other. That is why a Citrix terminal can feel slow, one character at a time: sending only a few bytes at a time never engages the sliding window.

Yes, a sliding window overcomes latency, but the latency is still there – it is just hidden. No matter how much money Elon Musk may have, he must still wait a few hundred milliseconds to communicate with his space capsule. It’s a physics thing. The same goes for a server in London.

On top of sliding windows, we use “credit” to let each end send enough bytes to fill the latency of the link, and then we acknowledge those bytes – just as a credit limit on a credit card gives us time to pay at the end of the period. Once bytes are acknowledged, we use that credit again. Too much credit risks losing data, so the protocol manages it, measuring the link’s latency as it operates and adapting to the link’s speed. It all works quite well, although sending bytes to Mars requires a lot of credit, because acknowledgements take a long time to arrive.

TCP Requires a 3-way handshake

TCP requires a handshake to set up sequence and acknowledgement numbers and open a connection, which introduces latency from the back-and-forth volleys. There are intrinsic security benefits to those volleys: firewalls inspect the special control bits and ports, tracking the “state of the connection” and recognizing the start, middle and end of each session. The control bits also let the firewall know when a session is over.
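The handshake cost is easy to observe with Python’s standard `socket` module: `connect()` does not return until the SYN, SYN-ACK and ACK exchange completes, so the elapsed time is one full handshake round trip. A throwaway loopback listener stands in for a real server here, so the time is tiny; across a continent it would be tens of milliseconds.

```python
# Timing TCP's three-way handshake on the loopback interface.
import socket
import time

# A throwaway listener standing in for a real server.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
start = time.perf_counter()
client.connect((host, port))    # blocks until the 3-way handshake finishes
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"handshake took {elapsed_ms:.3f} ms on loopback")
client.close()
server.close()
```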

QUIC over UDP Consolidates the TCP and SSL Handshakes and HTTP Commands

QUIC uses a simpler TCP/IP protocol, UDP (User Datagram Protocol), which has fewer session controls at its layer – those controls must instead be performed above UDP, in the QUIC layer. QUIC’s clever move is combining the TCP-like control functions with the SSL/TLS security functions, both of which must volley across the network during setup. With TCP and SSL those volleys happen serially, doubling the setup latency; QUIC combines them so the latency is paid once per session, reuses the same session for multiple HTTP commands, and with HTTP/3 consolidates functions into fewer transaction volleys.
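The saving can be tallied as round trips before the first request can be sent. The counts below are the commonly cited figures (1 RTT for the TCP handshake, 2 more for a classic TLS 1.2 handshake, 1 for TLS 1.3, and a single combined round trip for a fresh QUIC connection, with 0-RTT on resumption); they are general protocol facts rather than the article’s own measurements, and the 60-ms RTT is an assumption.

```python
# Counting setup round trips before the first HTTP request can be sent.
RTT_MS = 60.0  # assumed coast-to-coast round trip

setups = {
    "TCP + TLS 1.2":           1 + 2,  # TCP handshake, then two TLS volleys
    "TCP + TLS 1.3":           1 + 1,  # TLS 1.3 trimmed crypto setup to one volley
    "QUIC (fresh)":            1,      # transport + crypto combined in one exchange
    "QUIC (0-RTT resumption)": 0,      # resumed sessions send data immediately
}

for name, rtts in setups.items():
    print(f"{name:24s} {rtts} RTTs = {rtts * RTT_MS:.0f} ms before first request")
```

On this sketch, a fresh QUIC session saves two full round trips versus TCP with TLS 1.2 – 120 ms at coast-to-coast distances – before a single byte of the page has been requested.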

Downside – New Firewalls Required for Homes and SMBs

Firewalls use TCP handshake and SSL session-state information for essential security checks. Once QUIC’s encryption is set up, the firewall can no longer evaluate session status, and losing the ability to inspect handshakes and control functions, as TCP/SSL allows, reduces security. Higher-end firewalls with hardware-based deep packet inspection can evaluate the initial QUIC session setup, but those high-end features are costly. Merely updating the software on most firewalls will likely cause performance problems, since inspection then falls on software and the central CPU rather than hardware filter arrays.
