Jul 19, 2018

Is Default BGP Hurting Your Network Performance?

David Heidgerken, Solutions Engineer

The internet has been called many things: a superhighway, the web, the net … a series of tubes. While the quality of any of these individual metaphors might be up for debate, they all suggest one important point: There are always multiple paths for any network traffic to take on the way between point A and point B.

Border Gateway Protocol (BGP) usually determines which path your traffic will take. Its default configuration picks the path with the fewest “hops” between Autonomous Systems (each identified by an Autonomous System Number, or ASN, a unique identifier for any single network entity connected to the internet).

It’s important to note that ASN hops are not the same as router hops: one provider’s path might cross the fewest autonomous systems, yet traffic could still be passing through many routers, encountering hidden issues along the way.
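To make that distinction concrete, here is a minimal, hypothetical sketch of how a default best-path decision keys on AS-path length while ignoring what actually happens along each path. The routes, AS numbers and latency figures below are invented purely for illustration.

```python
# Hypothetical candidate routes to the same destination prefix.
# as_path lists the autonomous systems crossed; router_hops and
# latency_ms are what a traceroute might actually observe.
candidate_routes = [
    {"next_hop": "198.51.100.1", "as_path": [64500, 64510],        "router_hops": 14, "latency_ms": 48},
    {"next_hop": "203.0.113.1",  "as_path": [64501, 64502, 64510], "router_hops": 6,  "latency_ms": 19},
]

# Default BGP best-path selection (simplified): prefer the shortest AS path.
best_by_bgp = min(candidate_routes, key=lambda r: len(r["as_path"]))

# What a performance-aware decision would prefer instead.
best_by_latency = min(candidate_routes, key=lambda r: r["latency_ms"])

print("BGP default picks next hop:", best_by_bgp["next_hop"])      # fewest AS hops
print("Lowest-latency path is via:", best_by_latency["next_hop"])  # fewer router hops, less delay
```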

How BGP Can Limit Application and Network Performance

Most of the time, default BGP is a good enough way to determine how to get data from one place to another. But of course, there are a couple of major catches.

  1. Performance issues: Small delays add up from end to end, resulting in latency. Problems in the physical layer can cause packet loss. Router resource saturation can “blackhole” traffic and keep two points from communicating, even though BGP’s routing tables still show the path as valid.
  2. Most crucially for many businesses and applications, the default metrics BGP takes into account can’t assess the performance (i.e., latency and packet loss) of any particular path.

These will cause issues for any network, but if your applications or customers aren’t particularly sensitive to network performance, then default BGP routing might be perfectly viable for your business.
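As a rough illustration of the measurements default BGP never sees, the hedged sketch below estimates round-trip latency and a crude loss rate toward a destination using TCP connection attempts. The hostname, port and probe counts are placeholders, not a prescription for how to monitor production paths.

```python
import socket
import time

def probe(host, port=443, attempts=10, timeout=1.0):
    """Estimate average TCP connect RTT (ms) and loss rate toward a host.
    A stand-in for the latency/loss telemetry that BGP itself never collects."""
    rtts, lost = [], 0
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            lost += 1
    avg_rtt = sum(rtts) / len(rtts) if rtts else float("inf")
    return avg_rtt, lost / attempts

# Example with a placeholder hostname; running the same probe through each
# upstream provider shows what each one actually delivers.
# avg_ms, loss = probe("example.com")
# print(f"avg RTT {avg_ms:.1f} ms, loss {loss:.0%}")
```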

Optimizing BGP for Your Application Needs: Your Options

Yet the inability of default BGP to route traffic over the best-performing path quickly becomes a problem for performance-sensitive applications, whether you’re running gaming servers, serving up advertisements or hosting financial transactions. So what are your options?

First, you can do nothing. This is not the most highly recommended option, but it’s a decision that might be the simple reality of too few network engineering resources or staff without sufficient experience. The downsides are obvious, but the upside is that this option is extremely friendly to your budget.

If you are connected to HorizonIQ’s network, however, doing nothing might be OK: You don’t have to do anything at all to receive the benefits of our route optimization because it’s baked into our entire network. Just by being plugged in with HorizonIQ, you are taking advantage of our blend of Tier 1 ISPs, even if your own network is single-homed (or if you don’t have network engineers adjusting BGP).

That brings us to your second option: You can task a team of network engineers with manually managing routing outside of default BGP metrics, optimizing for cost or utilization (on multihomed networks), performance, or both. This involves analyzing traffic in depth and then adjusting settings on the fly. The downside is that it’s time-consuming and resource-heavy. And since traffic is dynamic, routes will always need to be monitored and adjusted on a short time frame to achieve the best results.

But manual tuning will never truly achieve optimal performance, or anything close to it, simply because analyzing thousands of data points and redirecting traffic across an entire network is not feasible for even the most well-staffed and well-resourced teams.

This is why HorizonIQ’s route optimization technology is so powerful: It takes manual manipulation out of the equation, giving you and your team time back while providing better performance.

How BGP Automation Can Cure What Ails Your Network

There is a clear best option for handling performance issues with BGP: automation. This option doesn’t require a team of dedicated network engineers, and it can deliver better performance than any human team ever could. HorizonIQ’s Performance IP® does just that. The technology behind the service is woven into every one of our data centers, products and services, using a layer of intelligent automation to optimize network performance.

Its route optimization engine can make millions of prefix moves within any given Point of Presence (POP), testing routing paths tens of thousands of times (if needed) every 90 seconds to determine the best-performing routes for your traffic. And it does it all without any human intervention or direction.
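The internals of Performance IP® aren’t spelled out here, so the sketch below is only a generic illustration of what an automated route-optimization loop looks like: probe every candidate carrier for each prefix, rank the results, and announce the winner. Every carrier name, prefix and figure is invented, and the measurement is simulated rather than taken from a real network.

```python
import random

def measure(prefix, carrier):
    """Placeholder probe: return (latency_ms, loss_rate) for traffic to
    `prefix` via `carrier`. Simulated here; a real engine would send probes
    through each upstream and record what actually comes back."""
    return random.uniform(10, 80), random.choice([0.0, 0.0, 0.02])

def optimization_cycle(prefixes, carriers, announce):
    """One pass of a simplified optimization loop: probe every prefix/carrier
    pair, then hand the best-performing next hop to the routing layer."""
    for prefix in prefixes:
        results = {c: measure(prefix, c) for c in carriers}
        # Rank by loss first, then latency: a lossy path is never "best."
        best = min(results, key=lambda c: (results[c][1], results[c][0]))
        announce(prefix, best)

# Invented carriers and prefixes; announce() simply prints each decision.
carriers = ["carrier-a", "carrier-b", "carrier-c"]
prefixes = ["192.0.2.0/24", "198.51.100.0/24"]
announce = lambda prefix, carrier: print(f"{prefix} -> via {carrier}")

optimization_cycle(prefixes, carriers, announce)
# A production engine would repeat this continuously, e.g. re-running the
# cycle every 90 seconds rather than once.
```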

You don’t have to settle for poor network performance. A little bit of automation through something like HorizonIQ’s Performance IP® can go a long way in eliminating resource drain, keeping costs reasonable, and—most importantly—freeing up your team to focus on your applications.

Jul 12, 2018

Business Continuity Options for Colocation

INAP

Backup and Disaster Recovery services are probably unique among IT services, in that you have them in the hopes of never needing them. But in an age of sophisticated hackers, increasingly destructive natural disasters and the ever-present risk of human error, the question is not one of if but when you will need business continuity services.

We’ve written about taking a multiplatform infrastructure approach to colocation and cloud and how it’s often the best fit when designing your IT infrastructure from an application-first perspective. But even if you decide to take an all-colo approach to your production deployment, using cloud-based Business Continuity services is still an option you should consider for protecting your critical workloads and data, especially when you can simply add them on to your existing services without having to go to another service provider.

Where to start?

As with any Business Continuity project, you need to first determine your recovery goals. Ask: Do my applications need a zero-downtime solution, or can we tolerate several hours of downtime? Establishing a baseline Recovery Point Objective (RPO, the maximum amount of data loss you can tolerate) and Recovery Time Objective (RTO, the maximum amount of downtime you can tolerate) for critical business systems will create a solid framework to guide your decisions.
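To show how those two numbers drive design decisions, here is a simple, hedged sketch, with every figure invented, that checks whether a given backup schedule and restore estimate can satisfy a stated RPO and RTO.

```python
# Invented figures for illustration: how often backups run, how long a
# restore is expected to take, and the targets the business has set.
backup_interval_hours = 4     # worst-case data loss if a failure hits just before the next backup
estimated_restore_hours = 6   # time to bring the workload back up from the last backup

rpo_target_hours = 1          # maximum tolerable data loss
rto_target_hours = 8          # maximum tolerable downtime

meets_rpo = backup_interval_hours <= rpo_target_hours
meets_rto = estimated_restore_hours <= rto_target_hours

print(f"RPO met: {meets_rpo} (backups every {backup_interval_hours}h vs. {rpo_target_hours}h target)")
print(f"RTO met: {meets_rto} (restore in ~{estimated_restore_hours}h vs. {rto_target_hours}h target)")

# In this example the RTO is satisfied but the RPO is not: the workload needs
# replication (or far more frequent backups) rather than a periodic backup cycle.
```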

Once recovery goals are established, you then need to look at your production workloads. Are you running your critical systems on physical hardware or virtualized infrastructure? Does your recovery solution support physical, virtual or multiplatform environments? Here, a trusted service provider can also come in handy, helping ensure your business continuity solution is flexible enough for your infrastructure and your needs.

Physical

Protecting workloads on physical infrastructure can be challenging, since backup and recovery solutions can only focus on protection at the operating system level, which is bound to the specific underlying hardware. This means that copying the OS to different hardware can cause problems. Should a custom backup and disaster recovery solution be needed, look for a service provider that has additional capabilities to build out application-specific recovery options.

This might take the form of colocating identical storage arrays for array-based replication by using the data replication feature built into many mainstream storage appliances. Or it might be building out custom bare metal infrastructure for Active-Active replication of application data.

Virtual

The benefits of virtualization are obvious for your applications: Instead of having just one application per server, you can run several guest operating systems and a handful of applications on the same physical hardware. In this way, virtualization offers unprecedented ability to scale and distribute workloads across your infrastructure.

These benefits extend to backups and disaster recovery as well because virtualization allows critical VM data to be restored or replicated to another location completely independently of the underlying hardware. In addition to VM backups powered by companies like Veeam, R1Soft and Commvault, there are other disaster recovery options that your provider may offer as a service. INAP offers Standby DRaaS and Dedicated DRaaS, which protect your critical VM data either in a pay-as-you-go standby state or with dedicated cloud resources for organizations with the strictest business continuity needs.

Multiplatform Infrastructure

In an ideal world, all workloads would fit in one bucket or the other. But for most organizations, a multiplatform approach is the best way to achieve the operational and financial goals of a business continuity implementation. For example, a company might have virtualized most of its critical infrastructure but still have a legacy inventory system that needs to stay on physical servers because of technology limitations.

A service provider like INAP can provide the virtual and physical infrastructure, as well as the management services needed, all packaged into a multiplatform Disaster Recovery environment. This is why it’s important to work with a service provider that has a multitude of options and expertise across infrastructure solutions, whether colocation, bare metal or private cloud deployments.
