Latency for the Layperson: Understanding Network Performance and Why It Matters
In this blog post, I'll explain the basic terminology of network speed and latency, along with the ways we reduce latency at the network level, so you can walk away understanding the foundational concepts and why mitigating latency matters.
This blog will focus on enterprise latency considerations. End users and consumers can also experience sluggishness due to non-network factors, such as a slow internet service (e.g., dial-up, satellite, or ISP traffic management issues) or congestion on their local network from heavy usage, like an entire family streaming video on different devices in the same house.
Defining Latency, Packet Loss, and TCP vs. UDP
You may have heard of network latency before. Put simply, latency is how long it takes data to travel between a source and a destination on a network, plus how long the response takes to return. This measurement is called Round Trip Time (RTT) and is expressed in milliseconds (ms).
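To get an intuitive feel for RTT, here is a minimal Python sketch that times a TCP handshake as a rough proxy for a ping (the handshake takes roughly one round trip). The host and port are placeholders, and real measurements typically use ICMP ping:

```python
import socket
import time

def measure_rtt_ms(host: str, port: int = 443) -> float:
    """Approximate RTT by timing a TCP handshake to the host."""
    start = time.perf_counter()
    # The TCP three-way handshake takes about one round trip,
    # so the connect time is a rough stand-in for RTT.
    with socket.create_connection((host, port), timeout=5):
        return (time.perf_counter() - start) * 1000  # milliseconds

print(f"RTT to example.com: {measure_rtt_ms('example.com'):.1f} ms")
```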
The unit of data is referred to as a packet, i.e., whatever is being transmitted and received. A packet could be an HTTP request between an end user’s browser and their favorite website, the inputs generated by a gamer playing their favorite MMORPG, or any other data transmitted via internet protocols.
Packet loss occurs when one or more packets of data traveling across a computer network fail to reach their destination. It is caused by errors in data transmission or by network congestion, and it's measured as the percentage of packets lost relative to packets sent.
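As a quick worked example of that percentage calculation (the packet counts here are made up):

```python
def packet_loss_pct(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent."""
    return (sent - received) / sent * 100

# If 1,000 packets were sent and 990 arrived, loss is 1.0%.
print(packet_loss_pct(1000, 990))  # 1.0
```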
Different control protocols handle packet loss differently.
Transmission Control Protocol (TCP) handles packet loss and automatically retransmits the missing packets. It also controls the speed of transmission and can slow down traffic flows when loss is detected. All of this is abstracted away from software developers, allowing them to focus on the application itself.
User Datagram Protocol (UDP) takes a hands-off approach to packet loss, allowing the developer to handle how retransmission takes place—if at all.
TCP is appropriate and preferred for most applications, with the exception of streaming media or data (think music, video, gaming, telephony), where it is preferable to lose a few packets rather than pause the stream to wait for retransmission. These kinds of applications tend to leverage UDP and define their own packet loss mitigation approach.
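To make the difference concrete, here is a minimal Python sketch of the two socket types. The hostnames, ports, and payloads are placeholders; the point is that the TCP socket gets reliability from the operating system, while the UDP socket just fires off datagrams and leaves any retransmission logic to the application:

```python
import socket

# TCP: connection-oriented; the OS retransmits lost segments and
# delivers bytes reliably, in order.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))  # three-way handshake
tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
response = tcp.recv(4096)
tcp.close()

# UDP: connectionless; each datagram may be lost, duplicated, or
# reordered, and the application decides whether to care.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"player input, frame 42", ("203.0.113.10", 9000))  # placeholder game server
udp.close()
```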
What’s the Importance of Low Latency?
While one benefit of any high-performance network is low latency, the extent to which a given organization or IT function requires the lowest possible latency differs depending on the applications being deployed. For example, an application that serves web ads needs the lowest possible latency so that valuable time isn't lost loading an ad on a web page. Likewise, a gaming company requires ultra-low latency on its servers so that players have an experience that is as smooth and responsive as possible.
Factors That Impact Latency
Latency is impacted by several factors, including the hardware in the path (the number of devices, link and device utilization, and any network cable or hardware issues) and, most importantly, the geographic distance the packets travel.
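Distance dominates because light in fiber travels at roughly two-thirds the speed of light in a vacuum, or about 200 km per millisecond. Here's a back-of-the-envelope sketch (the New York to London distance is approximate, and real fiber paths are longer than the straight-line distance):

```python
# Light in fiber covers roughly 200 km per millisecond (~2/3 of c).
SPEED_IN_FIBER_KM_PER_MS = 200

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum RTT for a given one-way distance,
    ignoring routing, queuing, and equipment delays."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# New York to London is roughly 5,600 km, so the RTT floor is
# about 56 ms before any network equipment is even involved.
print(f"{min_rtt_ms(5600):.0f} ms")  # 56 ms
```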
Latency and Border Gateway Protocol
Border Gateway Protocol (BGP) routing is used between large ISPs and IT environments so that there is a shared, agreed-upon way to exchange routing information, allowing data to "hop" through each autonomous network to its destination. BGP uses a model of Autonomous System (AS) paths to determine what it believes to be the shortest, fastest path. Unfortunately, AS path length does not take latency into account at all; it only counts the number of discrete organizations (identified by AS numbers, or ASNs) between the source and destination networks.
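Here's a toy illustration of why that matters. The AS numbers (drawn from the private-use range) and RTTs below are invented; the point is that BGP's default path selection counts ASes, not milliseconds:

```python
# Two hypothetical routes to the same destination prefix.
routes = [
    {"as_path": [64501, 64510],        "measured_rtt_ms": 80},  # fewer ASes, slower
    {"as_path": [64502, 64503, 64511], "measured_rtt_ms": 35},  # more ASes, faster
]

# BGP's default behavior prefers the shortest AS path...
bgp_choice = min(routes, key=lambda r: len(r["as_path"]))

# ...even though a latency-aware engine would choose differently.
latency_choice = min(routes, key=lambda r: r["measured_rtt_ms"])

print(bgp_choice["measured_rtt_ms"])      # 80
print(latency_choice["measured_rtt_ms"])  # 35
```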
Previously, we wrote about BGP and how its default settings can lead to high latency. In summary: BGP does not have a built-in way to avoid network congestion and slowdowns, whether due to high traffic, malfunctioning hardware or any other latency-increasing factor.
Reducing Latency, the HorizonIQ Way
HorizonIQ takes great pride in our network and works diligently to ensure customers have the best, lowest-latency experience. We make sure that the network gear we deploy fits its position in the network, with low-latency switching close to customer devices and deep-buffer devices at core locations to absorb traffic microbursts and prevent the packet loss that would otherwise trigger retransmissions.
Finally, and most importantly, HorizonIQ customers can take advantage of Performance IP, our automated route optimization engine. It continuously monitors popular destination IP prefixes and automatically puts our customers' outbound traffic on the fastest, most stable routes and providers. Check out the video below to learn more about how it works or jump straight to a demo.