Website Performance: Part 1 - Networks
Why are some websites faster than others? Part of it is how the website is constructed. Part of it is how the data is moved over the internet between your computer and the distant server that hosts it.
For this first article, we look at what goes into moving data over the internet in a consistent, fast and responsive manner.
DNS - Domain to Address
This and all other websites are the result of a web browser sending an initial request for an HTML page across the internet. While the nature of how each website is constructed varies considerably, the display is the result of data being transferred to the web browser from some (usually) distant server.
Before the request for data can be sent, the browser needs to reach out across the internet to an address and knock on the server’s door. The human readable address or domain name (apike.ca, google.com, etc.) we associate with a given website needs to be converted into an Internet Protocol (IP) address. The two commonly used protocols for addressing are IPv4 and IPv6. The former ran out of addresses for all our devices and servers, but clever hacks have allowed us to keep using IPv4. Much like Y2K and vaccinations, the risks of switching to IPv6 are largely overblown. Ultimately, the protocol used to address a website doesn’t play a huge role in its speed.
Note: The IP protocol is more than just an address format. It defines how to move data across the internet from a source to a destination.
So, how do we go from that domain name to an address? The IP address of a Domain Name System (DNS) server is usually automatically configured as part of connecting to a network. The DNS provides a way to look up IPv4 and IPv6 addresses for a domain name. The DNS lookup of the domain will block all progress until the address is returned.
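As an illustrative sketch, Python's standard library exposes this lookup through `socket.getaddrinfo`, which asks the system's configured DNS resolver for both IPv4 and IPv6 addresses of a domain:

```python
import socket

def resolve(domain: str) -> set[str]:
    """Ask the system's configured DNS resolver for the IP addresses of a domain."""
    # This call blocks until the resolver answers, just like the browser's lookup.
    results = socket.getaddrinfo(domain, None)
    # Each result is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP.
    return {sockaddr[0] for *_, sockaddr in results}

# "localhost" resolves without leaving the machine; a real domain would
# trigger an actual blocking DNS lookup over the network.
print(resolve("localhost"))
```

On most systems the example prints the loopback addresses; a real domain typically returns both an IPv4 and an IPv6 address.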
Speed Strategies: A slow DNS will slow the initial connection to a website. Unlike most factors covered here, it isn’t something a website has control over; it is something the user deals with. Most Internet Service Providers (ISPs) provide a speedy DNS that, being close by on the network, is often faster than a 3rd party DNS. Older users might remember manually configuring the DNS to bypass the one from their ISP.
Once we have the IP address of a website, we know where the party is. But a request headed to the party still doesn’t know how to dress up to get what it wants from the server, can’t interpret the server’s response and has no security.
What the user indicates in the web browser’s address bar to request could result in opening up a web page, image, a PDF document or a myriad of other things. So, the protocol for making a request must be extremely flexible and, in the case of a web page, efficient at handling many follow up requests for more data.
The venerable Hypertext Transfer Protocol (HTTP) instructs browsers on how to make a request for data and interpret the results. HTTP has been extensively poked and prodded over its lifetime in search of an optimal solution for transferring data from websites.
Newer browsers and servers support HTTP/2. This replaces the 15-year-old HTTP/1.1, greatly speeds up many things and reduces the number of (expensive to create) connections between the user's browser and the server. HTTP/1.1 is still used primarily because HTTP/2 makes encryption de facto mandatory.
Security is of great importance to users. Adding security to HTTP means adding end-to-end encryption of the communication between browser and server, but it does not alter how browser and server understand the content of that communication. With HTTP/1.1, Transport Layer Security (TLS) adds its own handshake overhead before data can be requested from a server. A benefit of HTTP/2 is that it was designed around this and uses a TLS extension (ALPN) that negotiates the protocol during the handshake itself, avoiding that extra overhead.
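A minimal sketch of the client side of this, assuming Python's `ssl` module: the client offers both HTTP/2 and HTTP/1.1 in a single TLS handshake via ALPN, and the server picks one with no extra round trip.

```python
import ssl

# Build a TLS client context that offers HTTP/2 and HTTP/1.1 via ALPN.
# During the TLS handshake the server picks one, so no additional round
# trip is spent discovering whether HTTP/2 is available.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])  # preference order

# On a real connection, the negotiated protocol is then available:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           print(tls.selected_alpn_protocol())  # "h2" if the server supports it
```

The commented-out connection is left as a sketch because it requires a live server; `host` is a placeholder for any HTTPS site.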
Speed Strategies: Websites should be using HTTP/2 (with encryption) if possible. This will speed up requesting data from the website compared to older versions of HTTP. Websites should use HTTP/1.1 only for unencrypted connections.
Over the Network
Data doesn't travel as an uninterrupted stream of bits over a network. It's broken down into discrete units of data called packets. By interweaving multiple sources of packets, the illusion is created that multiple applications and multiple devices on the same network can send and receive data at the same time.
The Quality of Service (QoS) the network provides for the packets can drastically alter the user's experience. Let's look at some common network properties used to monitor QoS.
Network Properties - Latency
Latency is the network delay to send a request and receive a response. Light is fast, but still needs about 10 ms (milliseconds) to travel 3,000 km, and in optical fiber it travels roughly a third slower than in a vacuum. So, a further away server has a higher minimum latency. It is important to note that this is the minimum theoretical delay. Hardware, software and other conditions play a role.
The data in a browser's request and server's response doesn’t go directly from the user's machine to the destination server. Specialized network equipment called routers (and switches) link each other and the endpoints together with wired or wireless connections. The data hops from machine to machine on its way to the destination. Each hop adds to the latency, so fewer hops are generally better. Each router maintains a set of routing rules that instruct it how to send received data towards its destination. The minimum length path may not be used, whether to avoid congestion, for business reasons, or due to poorly implemented rules.
On each hop, the data is converted from a signal to data, checked for errors, possibly inspected, routed and converted back into a signal to send onwards. The inspection step could include things like looking for suspicious activity, generating statistics (data usage) and prioritizing certain data. An overloaded device, complicated inspection rules or just slow hardware can add milliseconds to the latency.
The latency of a connection is usually calculated by sending a small network request called a ping. The average time taken to receive a response is the latency. This is usually measured in milliseconds (ms).
Note: All requests have a latency. When investigating network properties, a ping is used instead of a more complex request to minimize the delay caused by the server processing the request.
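A real ping uses ICMP packets, which need raw sockets and elevated privileges, but the averaging idea itself is simple. As a sketch, the same measurement can be applied to any repeated probe (the `time.sleep` probe below is a stand-in for a real network round trip):

```python
import time

def average_latency(probe, samples: int = 5) -> float:
    """Time `probe()` several times and return the average delay in seconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        probe()  # e.g. an ICMP echo, or a TCP connect to the server
        total += time.perf_counter() - start
    return total / samples

# A stand-in probe that takes about 2 ms; a real one would cross the network.
print(f"{average_latency(lambda: time.sleep(0.002)) * 1000:.1f} ms")
```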
A latency of around 30 ms or less is ideal for fast paced action the user is expected to react to in realtime. A less demanding activity like submitting a form on a website can work fine with a network latency in the low hundreds of milliseconds.
Speed Strategies: The best location for a website server is where data can travel with as few hops as possible and be as physically close as possible to the user. Thus, a website that is fast for all users needs to be in multiple places at the same time. Few, if any, website operators have the expertise to distribute their own site across the globe. Content Delivery Networks (CDNs) provide the specialized service of geolocating data closer to users across the world.
Network Properties - Packet Delay Variation (Jitter)
Jitter is the variable delay in a series of packets sent across the network. A small amount of jitter is acceptable and expected. Jitter can cause packets to “clump” and be received in bursts. It can also result in packets being delivered out of order.
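One common way to quantify this, shown here as a sketch, is the spread of the gaps between consecutive packet arrivals (arrival times below are in milliseconds and purely illustrative):

```python
import statistics

def jitter(arrival_times: list[float]) -> float:
    """Jitter as the standard deviation of gaps between consecutive arrivals."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return statistics.pstdev(gaps)

# Packets arriving at a perfectly steady 10 ms interval have zero jitter...
print(jitter([0, 10, 20, 30]))   # 0.0
# ...while "clumped" arrivals show up as a larger spread.
print(jitter([0, 10, 35, 40]))
```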
Speed Strategies: Jitter is often a symptom of excessive network traffic at the time. A quality CDN helps by reducing possible congestion points.
Network Properties - Errors and Loss
Most packets make multiple hops before reaching their final destination. On each hop, errors can be caused by hardware or software faults or noise in the transmission. Congestion or taking the wrong route can cause a packet to be dropped by the device inspecting it. Transport protocols like TCP detect errors and loss and retransmit the affected packets. Transport protocols like UDP allow packets to vanish and never arrive.
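With a reliable protocol, loss doesn't make data disappear, it makes data slower: every lost packet must be sent again. As a back-of-envelope sketch, if each transmission attempt is lost independently with probability p, a packet needs on average 1/(1-p) attempts to get through:

```python
def average_attempts(loss_rate: float) -> float:
    """Expected transmissions per packet when each attempt is lost
    independently with probability `loss_rate` (0 <= loss_rate < 1)."""
    return 1 / (1 - loss_rate)

print(average_attempts(0.001))  # wired-like loss: barely any overhead
print(average_attempts(0.05))   # lossy wifi: ~5% extra transmissions, plus timeout delays
```

The retransmission timeouts themselves usually hurt far more than the extra bytes, since the receiver stalls waiting for the missing piece.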
On wired networks, the error and loss rate is usually quite low. On wireless networks, something as immaterial as a cloud can ruin your day. The typical wavelengths used by widely deployed commercial wifi routers are blocked and reflected by walls (video). This is especially true of concrete walls.
Speed Strategies: A website needs a very reliable connection to the internet. Problems on the general internet are usually temporary and resolved quickly. Things under the user's control, like switching the device from a wireless to a wired network connection, can reduce the error rate. Wifi routers can be moved closer to the device and positioned to avoid obstructions like thick walls.
Network Properties - Bandwidth
Bandwidth is the amount of data that can be moved from one device to another per unit time. For connections across the internet, the hop with the lowest bandwidth on the path the data takes determines the maximum possible bandwidth. This maximum will change over time due to congestion. The point with the lowest bandwidth could be the user's connection to their ISP, the user's wifi connection, some internet router or the connection the server has to the internet.
The observed bandwidth is always less than the maximum possible bandwidth. A part of the bandwidth is taken up by the transport protocol used to shuffle the data around. The properties of a connection like network jitter, error rate, loss rate and latency (ping) can also limit the bandwidth used. The black art of how a transport protocol interacts with network properties to determine the bandwidth used is beyond this article.
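One well-known piece of that interaction can still serve as a sketch: a TCP connection can have at most one receive window of unacknowledged data in flight, so its throughput is capped at roughly the window size divided by the round-trip latency, regardless of the link's raw bandwidth:

```python
def tcp_throughput_ceiling(window_bytes: int, rtt_seconds: float) -> float:
    """Rough upper bound on TCP throughput: one receive window per round trip."""
    return window_bytes / rtt_seconds

# A classic 64 KiB window over an 80 ms round trip caps out far below
# what a fast home connection can carry.
ceiling = tcp_throughput_ceiling(64 * 1024, 0.080)
print(f"{ceiling * 8 / 1_000_000:.1f} Mbit/s")  # ~6.6 Mbit/s
```

This is why high-latency links feel slow even when bandwidth is plentiful, and why modern stacks scale the window well beyond 64 KiB.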
If the server can’t compute the data to send fast enough or steadily stream the data, the effective bandwidth will be less than optimal even when the network has a high quality of service (QoS).
Speed Strategies: A website should attempt to maximize the rate it can output data. Theoretically, a website only needs a connection with bandwidth that matches that maximized rate of data output. In practice, a website can never have enough bandwidth.
Maximizing a Network - TCP and UDP
Each IP packet sent on the network contains a transport protocol that then contains the actual data. The transport protocol controls the flow of data between two applications on the different devices. There are two popular protocols used with websites, TCP and UDP.
Transmission Control Protocol (TCP) provides a reliable data stream between two applications (the user’s web browser and the website’s server application) and IP (IPv4 or IPv6) locates the source and destination machines. This combination is often called TCP/IP.
By convention, TCP is used to transport HTTP requests. TCP's guaranteed delivery is very good for most network data. A website with parts missing or parts out of order would be hard to read. It does have some bad properties. Creating and maintaining a two way data stream adds latency and constrains the bandwidth used. Jitter, errors and loss can idle a server or web browser while TCP waits for a packet that completes a request or response. Still, it is the gold standard for reliably streaming data between two devices on the internet.
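A minimal sketch of that reliable stream using Python's socket API, with a throwaway echo server on localhost standing in for a real web server:

```python
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    """Accept one connection and echo whatever bytes arrive back to the sender."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # TCP guarantees this arrives, in order

# Bind to an ephemeral port on localhost so the example is self-contained.
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# The client side: connect, send, and read the reliable, ordered reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET / HTTP/1.1\r\n")  # an HTTP request rides inside the TCP stream
    reply = client.recv(1024)
print(reply)  # b'GET / HTTP/1.1\r\n'
```

The connect call alone costs a full round trip (the TCP handshake) before the first byte of the request can even be sent, which is the connection-setup expense mentioned above.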
User Datagram Protocol (UDP) sends a packet (aka datagram) at a time between an application on the user's machine and on another machine. Like TCP, IP (IPv4 or IPv6) locates the source and destination machines.
Unlike TCP, there's no continued connection, no guarantee of delivery and no guarantee that packets won't arrive out of order. Less overhead means more bandwidth and less latency. This goes a long way to making realtime applications fast and responsive. The consistency is up to the application, which must devise its own methods of dealing with delayed or lost packets. So, UDP is preferred for applications like audio and video conferencing or first person shooters.
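A comparable sketch with UDP: individual datagrams, no connection setup, and (on a real network) no guarantee any of them arrive:

```python
import socket

# A UDP "server" is just a socket bound to a port; there is no connection to accept.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# The sender fires off a standalone datagram: no handshake, no delivery guarantee.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame 42", ("127.0.0.1", port))

# Over loopback this arrives; over the internet it might simply vanish,
# and the application is expected to cope (skip the frame, interpolate, etc.).
data, addr = receiver.recvfrom(1024)
print(data)  # b'frame 42'
sender.close()
receiver.close()
```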
Limiting Request Wait
Once a request has been sent, the user's application waits for the web server to return data. The time to first byte (TTFB) includes the latency and the time the web server required to prepare that first byte.
A web server may have good latency to ping, but it could be unable to quickly respond to requests for actual data. The web server responding may need to wait for the data to be retrieved or various calculations to complete.
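A sketch of measuring TTFB against a throwaway local server that deliberately "thinks" before responding, so the measured time includes both the round trip and the server's preparation time:

```python
import socket
import threading
import time

def slow_server(server_sock: socket.socket, think_time: float) -> None:
    """Accept one connection, simulate server-side work, then send the first byte."""
    conn, _ = server_sock.accept()
    with conn:
        conn.recv(1024)         # read the request
        time.sleep(think_time)  # stand-in for database queries, rendering, etc.
        conn.sendall(b"HTTP/1.1 200 OK\r\n")

server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=slow_server, args=(server, 0.05), daemon=True).start()

start = time.perf_counter()
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET / HTTP/1.1\r\n\r\n")
    client.recv(1)              # block until the very first byte arrives
ttfb = time.perf_counter() - start
print(f"TTFB: {ttfb * 1000:.0f} ms")  # dominated by the 50 ms of server "work"
```

Over loopback the network latency is near zero, so almost all of the measured TTFB here is the server's think time; on a real connection the two add together.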
Speed Strategies: A fast website will reduce this wait time as much as possible. When a substantial delay is unavoidable, the server can still send an immediate response that instructs the browser how to receive the data once it is prepared. The UI can then provide feedback to the user about the process.
Website Networking Conclusions
If you made it all the way here then you've covered the basics of how websites deliver content over the internet in a consistent, fast and responsive manner. The next article will look at how websites are constructed to be fast (or at least feel fast).