Unmetered hosting, traffic & bandwidth guide
Unmetered hosting means a hosting plan with unmetered traffic: the price you pay each month does not depend on the amount of traffic (data) sent to and from your server during the month. But unmetered does not mean unlimited.
In practice, unmetered hosting plans can be quite limited in the amount of data you can send and receive, because data is often transferred between your server and the public web at a lower speed. This blog post looks at the terminology of server traffic and bandwidth, and explains how to get a clear picture of exactly what you are buying.
Traffic, bandwidth & throughput
When you buy a server, you can choose specifications relating to the amount of traffic included in the package, whether you're buying a dedicated server, VPS or cloud deployment. With the number and variety of hosting providers in the market, you should be able to find one that gives a clear picture of what to expect in terms of traffic limits, data transfer speed and pricing.
Let’s look at the terminology first, then move on to the calculations and pricing.
Traffic
Traffic, sometimes called data transfer, is the volume of data passed between your server and the public web over a certain period. For ease of comparison, and to fit with billing cycles, the period of calculation is nearly always a month.
Traffic is measured in bytes, usually terabytes (TB). So if you see a server whose traffic (meaning traffic limit) is 30TB, there's a limit of 30TB of data that can pass between your servers and the public web during the month before you start to incur additional fees. These additional fees should be published on the provider's website as well as in the terms of service.
Traffic between your servers and the public web can be categorized as inbound or outbound (sometimes denoted I/O). The public web is “the rest of the Internet”.
For a simple website, the outbound traffic from the website to the public web is the information served in the form of a web page. If people can upload data via your website to your server’s database, that would count as inbound traffic from the public web to your servers.
In the world of hosting, traffic is often incorrectly referred to as bandwidth. You may see an offer for a server with ‘bandwidth 10TB/mo’. In this case, you can tell from the fact it’s measured in TB/mo that the provider means traffic (or data transfer, if you prefer), not bandwidth.
Bandwidth
Bandwidth is actually the maximum speed at which data can be transferred between your servers and the public web.
Bandwidth is measured in megabits per second (Mbps) and sometimes gigabits per second (Gbps). Hosting bandwidth typically ranges between 10Mbps and 1Gbps, with 100Mbps a common bandwidth for high-performance dedicated servers.
Bandwidth is sometimes referred to by the size of the port. For example, a dedicated server with a dedicated 100Mbps port affords you 100Mbps bandwidth.
Throughput
Throughput is the actual rate of data transfer achieved. It will always be less than the bandwidth (which is the capacity, and therefore the upper limit). Like bandwidth, throughput is measured in megabits per second (Mbps).
Actual throughput falls short of bandwidth for several reasons: the network overhead required to transmit and route data, the nature, number and location of users, and the fact that bandwidth is sometimes shared with several other servers. Sharing is very common in VPS (Virtual Private Server) hosting, where several virtual servers exist on one physical server and certain elements, bandwidth among them, are shared between the virtual servers.
Unmetered is not unlimited
When you buy unmetered hosting, especially an unmetered VPS, it's important to understand that you are not buying unlimited traffic. Traffic is always limited, in practice, by the other two factors: throughput and time. Because throughput is measured in megabits per second, divide by eight to convert bits to bytes: Traffic (MB) = Throughput (Mbps) × Time (seconds) ÷ 8.
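To make the conversion concrete, here is a minimal sketch of the calculation in Python (the function name and the 30-day month are our own illustrative choices):

```python
SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds in a 30-day month

def max_monthly_traffic_tb(throughput_mbps: float) -> float:
    """Upper bound on monthly traffic, in TB, at a sustained throughput.

    Divide by 8 to convert megabits to megabytes, then by 1,000,000
    to convert megabytes to terabytes (decimal units, as hosts typically bill).
    """
    megabytes = throughput_mbps * SECONDS_PER_MONTH / 8
    return megabytes / 1_000_000

# A dedicated 100Mbps port, fully saturated for an entire month:
print(f"{max_monthly_traffic_tb(100):.1f} TB")  # ~32.4 TB
```

Even a fully saturated 100Mbps port tops out around 32TB a month, and real traffic never runs flat out around the clock.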
Because not all of this information is always published, it can be difficult to understand how much bandwidth your server will be able to use and how much traffic you can realistically expect to achieve in a month. It is important to find out this information before you buy, and to compare this to your requirements in terms of speed and overall traffic consumption.
For a website, this is essentially a question of how many visitors you expect to receive in a month, and how quickly you and your users need the data to be transferred between your server and the Internet.
Indirect factors in this evaluation will include your users’ network speeds, the weight of the page (how much data needs to be transferred for the page to load) and the expectations of your users as to what is acceptable.
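As a rough back-of-the-envelope sketch (the visit counts and page weight below are hypothetical placeholders, not benchmarks), you can translate visitors and page weight into expected monthly traffic:

```python
def monthly_site_traffic_gb(visits_per_month: int,
                            pages_per_visit: float,
                            page_weight_mb: float) -> float:
    """Rough outbound traffic estimate: page views times average page weight."""
    page_views = visits_per_month * pages_per_visit
    return page_views * page_weight_mb / 1000  # MB -> GB

# 50,000 visits a month, 3 pages per visit, 2MB per page:
print(f"{monthly_site_traffic_gb(50_000, 3, 2.0):.0f} GB")  # ~300 GB
```

Comparing an estimate like this against the ceiling your bandwidth imposes tells you quickly whether an unmetered plan's speed limit will ever bite.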
Unmetered often means slow
Every server and hosting package should be taken on its own merits. If you have a small blog read by a few hundred people, you will find that a cheap hosting package with a reputable company is enough for your needs. You may not even want to find out about the speeds and traffic involved if you trust the provider.
But any kind of application or project that is worthy of investment in a VPS or dedicated server deserves a little more attention to the traffic and bandwidth specifications. Find out from the company what you can expect (or better still what they guarantee), and use the opportunity to gauge the nature of the provider and their customer service from this interaction.
As we have seen, an unmetered VPS is still limited in traffic; it is just limited by speed rather than by traffic pricing. That doesn't necessarily mean the speed is slow, and slow doesn't necessarily mean bad. But it does mean you need to know what you're buying.
Essentially, when you buy a VPS, you usually need to choose between guaranteed bandwidth (speed) and pricing based partly on traffic (volume). Whatever your preference, it shouldn't stop you from inquiring about the bandwidth and traffic you might realistically achieve. This can sometimes be found in the terms and conditions, or by contacting the provider.
An example scenario might be an unmetered VPS with 100 users sharing a 1Gbps port. That would leave you with about 3TB of traffic (or data transfer) per month as an effective ceiling. It also means your bandwidth may be constrained: when many people access your website or application at once, they will experience slower page loads or downloads.
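The arithmetic behind that figure, assuming the port is shared perfectly evenly (a simplification):

```python
port_mbps = 1000       # a shared 1Gbps port
users = 100            # customers sharing the port
share_mbps = port_mbps / users                     # 10 Mbps per user
seconds = 30 * 24 * 60 * 60                        # seconds in a 30-day month
traffic_tb = share_mbps * seconds / 8 / 1_000_000  # Mb -> MB -> TB
print(f"{traffic_tb:.2f} TB per month")            # ~3.24 TB
```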
If traffic to your website or application is ‘bursty’ and inconsistent, you might be better off with a hosting package that has a traffic limit (and a price tied to traffic) than limiting your actual performance at busy times by sharing bandwidth with other customers.
Make an informed choice
When buying any server or hosting plan, it’s all a question of understanding the dynamics of your website or application and knowing what you’re buying. Think about the future too – how scalable is the solution you are considering, and will it be able to handle traffic growth without you needing to migrate to a new server?
And don’t forget the other factors to consider like a provider’s reputation, support and location.
For more information and advice on hosting plans, contact us, or subscribe to our blog.
How to avoid “hybrid washing”
As with most buzzwords, the term “hybrid” is being applied to just about everything, including your infrastructure environment. More vendors are claiming to offer hybridized solutions that mix cloud and physical infrastructure, but this assertion isn’t always true.
Until now, there hasn’t been a standardized definition of the concept, and some providers are employing “hybrid washing” tactics to take advantage of the information gap. Here are a few things you need to know to avoid falling victim to “hybrid washing.”
Hybrid infrastructure is not the same as hybrid cloud
The term “hybrid” is most often used to describe a combination of public and private clouds, but hybrid infrastructure extends beyond this definition. You may be familiar with the National Institute of Standards and Technology (NIST) definition of hybrid cloud, which describes it as a combination of public, private and community clouds “bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).” While this sufficiently addresses a combination of disparate clouds, the definition doesn’t cover the ability to manage and move data and applications across cloud and non-cloud infrastructure environments.
Location isn’t everything
A mix of on-premise and hosted cloud is sometimes mistakenly referred to as a hybridized environment. In some cases, an application may be deployed using colocated servers in a data center, but certain workloads may take place in other hosting environments. This split application architecture is making the delineation between on-premise and off-premise less important. Regardless of physical location, on- or off-premise, private or public cloud, what really matters is the ability to leverage each environment to address your application requirements.
Disparate environments aren’t transparent
Connecting hosting and cloud platforms via unmanaged network links is considered a hybrid environment by some providers. While linking a meet-me room to another cloud, hosting or colocation vendor can connect disparate environments, it does not provide network transparency or the ability to manage, monitor and provision machines across physical and cloud environments.
What is true hybridization?
A comprehensive hybrid solution allows you to move workloads across distinct environments (private cloud, public cloud, hosted, dedicated servers or colocated servers) to achieve greater scalability and flexibility for your applications. Hybridization connects diverse hosting environments via a unified, fully transparent network, and can be managed through a “single pane of glass” with a single point of contact for support and billing.
Infrastructure buyers who can distinguish between truly hybrid solutions and those that are “hybrid washed” will be better positioned to establish an optimized environment that can meet precise business needs.
Three reasons to choose one provider for your hybrid infrastructure
Hybrid infrastructure is a great way for organizations to create a best fit environment, especially for data-intensive applications that require high levels of performance and flexibility. The ability to move workloads across different infrastructure environments, including cloud, hosted, dedicated servers and colocated servers, provides increased responsiveness, scalability, and cost efficiency.
Achieving the benefits of hybrid infrastructure requires careful planning. Organizations often decide to purchase services from multiple vendors to establish the right environment for their unique application requirements. Unfortunately, this approach can result in a patchwork of disparate solutions that can cause major headaches down the road.
Let’s explore three reasons to consider using one provider for your hybrid infrastructure.
1. Unified networking
True hybridization allows you to quickly move workloads across different hosting environments without time-consuming network configuration tasks. A unified network fabric is the foundation that enables persistent connections between legacy and virtualized environments. Using a mix of solutions from multiple providers makes it difficult to put these connections in place and share the same network infrastructure. Different providers usually have different policies and specifications, including the firewalls, load balancers, hardware and software they require, which limits your ability to establish seamless processes and automated deployments.
As your business and customer base grow over time, the lack of automation can increase the risk of mistakes, slow down the speed of your deployments, and create unnecessary burdens on support and engineering staff. A hybrid environment with unified networking allows you to streamline and automate operations without having to compromise your processes based on different providers’ requirements.
2. Single pane of glass
A hybrid infrastructure composed of solutions from multiple vendors may not include a comprehensive management platform with visibility into your overall environment. Viewing and monitoring dedicated servers from one provider and cloud services from another requires you to use separate management interfaces. Choosing one vendor that offers a management platform designed specifically for hybrid infrastructure will give you complete visibility through a single pane of glass, allowing you to programmatically provision, manage and monitor your cloud, colo, and hosting environments. Without the ability to logically provision and monitor machines across your infrastructure, the risk of operational inefficiencies increases and scalability can be difficult.
3. One invoice
Dealing with multiple service providers can be an operational and administrative liability. Choosing one provider for your hybrid infrastructure gives you unified billing and support, which removes the hassle of dealing with multiple account contacts. Having one invoice lets you see exactly which resources are used for specific applications, making it easier to control costs across the organization. A single bill and single point of contact should be an essential part of a hybridized infrastructure.
Reaping the performance and flexibility benefits of hybrid infrastructure requires more than the right combination of hosting, colocation or physical environments. Unified networking, a single pane of glass management platform and one invoice are critical aspects of true hybridized environments. Choosing one service provider that can offer these capabilities is the most efficient way to create a hybrid infrastructure that successfully meets the needs of your applications.
Data center water usage and conservation
Data center water usage and conservation is a critical aspect of green building design and environmental sustainability. Most data centers use large amounts of water for cooling purposes in order to maintain an ideal operating temperature for servers, hardware and equipment. But how do water conservation efforts affect the cost and operational efficiencies of a data center?
While 70 percent of the earth’s surface is covered in water, only 2.5 percent is fresh water, and most of that is currently locked up in ice. Historically, the demand for fresh water has increased with population growth, and the average price has risen around 10-12 percent per year since 1995. By comparison, the price of gold has increased only 6.8 percent per year and real estate 9.4 percent over the same period.
So how much water do data centers use? While the average U.S. household uses 254 gallons per day, a 15 MW data center consumes up to 360,000 gallons of water per day, roughly the daily usage of 1,400 households. As the cost of water continues to rise with demand, the issue becomes one of both economics and sustainability.
How are Data Centers Addressing the Problem?
In order to control costs in the long term, data center operators are finding creative ways to manage water usage. Options include using less freshwater and finding alternative water sources for cooling systems.
- Reduced water usage—Designing cooling systems with better water management, resulting in less water use.
- Recycled water—Developing systems that run on recycled or undrinkable water (i.e., greywater from sinks, showers, tubs and washing machines). INAP’s Santa Clara facility was the first commercial data center in California to use reclaimed water to help cool the building.
- No water—In some regions, air economizers that do not require water can be used year round.
Challenges
While using less freshwater provides long-term cost and environmental benefits, alternative solutions also create new challenges. Recycled water can have negative effects on the materials used in cooling systems, such as mild steel, galvanized iron, stainless steel, copper alloys and plastic. Water hardness (a measure of combined calcium and magnesium concentrations), alkalinity, total suspended solids (TSS, e.g. sand and fine clay), ammonia and chloride can cause corrosion, scale deposits and biofilm growth.
Data center operators must proactively identify susceptible components and determine a proper water treatment system. Implementing a water quality monitoring system can provide advanced warning for operational issues caused by water quality parameters.
With the cost of and demand for freshwater rising, conservation measures are essential to the long-term operation of a data center. Internap is committed to achieving the highest levels of efficiency and sustainability across our data center footprint, with a mix of LEED, Green Globes and ENERGY STAR certifications at the following facilities:
- FLAGSHIP DALLAS DATA CENTER
1221 Coit Road
Plano, TX 75075
- FLAGSHIP LOS ANGELES DATA CENTER
3690 Redondo Beach Avenue
Redondo Beach, CA 90278
- FLAGSHIP DOWNTOWN ATLANTA DATA CENTER
250 Williams Street NW
Atlanta, GA 30303
- FLAGSHIP NEW JERSEY DATA CENTER
1 Enterprise Avenue N
Secaucus, NJ 07094
- FLAGSHIP SILICON VALLEY DATA CENTER
2151 Mission College Blvd
Santa Clara, CA 95054
How Distil Networks combined bare metal and cloud
As companies continue migrating their infrastructure to cloud computing because of scalability and performance benefits, the value of incorporating bare metal servers is often overlooked. When building out Distil Networks, we needed a highly scalable architecture that could handle anything our customers threw at us, and bare metal helped us achieve this.
Distil Networks now serves billions of page views per month, so we have to support traffic spikes from customers with little to no advance warning, which made the cloud seem like a perfect choice. Our initial network ran exclusively on cloud computing, which worked fine except that performance wasn’t as good as we had hoped. You don’t know how your cloud instances are going to perform from day to day: it’s shared computing, the hardware is older, and the hypervisor introduces overhead.
To achieve better performance, we eventually moved to bare metal. But this had drawbacks, too. While the dedicated hardware improved performance, we now lacked the scalability of the cloud. We couldn’t handle unpredictable traffic spikes unless we overprovisioned our servers to accommodate peak demand, which simply isn’t cost-efficient in the long run. We needed the performance of physical servers and the scalability of cloud.
A new approach
We took a step back and revisited our base assumption of no predictability in our traffic, and found that it was flawed. While we didn’t know what the spike in usage and compute needs would be, we noticed that we never dipped below a certain baseline of traffic. This got us thinking — what if we ran that baseline traffic on dedicated hardware, and only scaled when we burst beyond that? Our hybrid model was born.
Scalability
Our customer traffic varies from day to day and customer to customer. If we were to overprovision resources based on peak capacity, a large portion would go unused: the difference between our median traffic and our peak traffic is 2-3x, so on any given day up to 75% of our server resources would sit idle. But under-provisioning could mean outages for customers, which isn’t an option. A hybridized environment combining cloud and bare metal helped us solve this problem.
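As a sketch of the idea (the function and the numbers below are our own illustration, not Distil's production code), the hybrid model splits load like this:

```python
def split_load(current_rps, bare_metal_capacity_rps):
    """Serve traffic on dedicated hardware up to its capacity;
    spill anything above that to on-demand cloud instances."""
    on_metal = min(current_rps, bare_metal_capacity_rps)
    on_cloud = max(0.0, current_rps - bare_metal_capacity_rps)
    return on_metal, on_cloud

# Median day: the baseline fits entirely on bare metal.
print(split_load(current_rps=1000, bare_metal_capacity_rps=1200))  # (1000, 0.0)

# Peak day at ~3x median: the excess bursts to cloud.
print(split_load(current_rps=3000, bare_metal_capacity_rps=1200))  # (1200, 1800)
```

Bare metal is sized for the baseline we never dip below, so it runs near full utilization, while the cloud absorbs only the unpredictable excess.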
Raw application performance
Application speed on our servers was also a challenge. As we introduced more management layers into our environment, performance started to degrade, and we couldn’t afford to take the hit. Especially at high volumes, we had problems with inconsistent performance; by incorporating bare metal into our environment, we now see less volatility and more processing power.
The best of both worlds
While cloud computing will always win the scalability fight, with a few tweaks to your architecture you can set up a hybrid environment that also provides the performance you need. Not only has hybrid hosting allowed Distil Networks to address our challenges of scale and performance, it has also enabled us to grow our business and reduce our infrastructure and engineering costs.
Managed Internet Route Optimizer (MIRO) now fully deployed
We’re proud to announce the complete deployment of Managed Internet Route Optimizer™ (MIRO) to help you fight through network congestion and achieve the best available online performance.
Internet traffic has become increasingly congested as businesses race to meet consumer demand for web-based services. Applications that are data-intensive or sensitive to latency require a fast, highly available global network in order to deliver content to multiple devices around the world.
Now available across Internap’s major global network Points of Presence (POPs), MIRO constantly probes multiple carriers to route traffic across the best-performing path more than 99% of the time.
The power of MIRO
Our tests show that next-generation MIRO routes packets 35 milliseconds faster on average, across all destination prefixes, than suboptimal carrier routes. Compared with any single carrier, MIRO delivers four times less fluctuation in performance, resulting in a more stable, reliable connection and consistent application performance. The addition of IPv6 capabilities, hardware upgrades and a more efficient route-optimization engine ensures optimal network performance for our cloud, hosting and colocation customers. To learn more about what’s new in next-generation MIRO, see our earlier blog post.
The MIRO technology works over our redundant P-NAP infrastructure.
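While we haven't published MIRO's internals, the core idea of probe-and-prefer routing can be sketched in a few lines (the carrier names and latencies below are made up for illustration):

```python
def best_carrier(probe_latencies_ms: dict) -> str:
    """Choose the carrier with the lowest measured latency for a destination prefix."""
    return min(probe_latencies_ms, key=probe_latencies_ms.get)

# Hypothetical probe results for one destination prefix:
probes = {"carrier_a": 82.0, "carrier_b": 47.5, "carrier_c": 61.3}
print(best_carrier(probes))  # carrier_b
```

A production route optimizer also weighs factors such as packet loss and jitter, and repeats this decision continuously for every prefix.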
Worldwide capabilities
To extend the global power of MIRO even further, we’ve just added a new POP in Slough, UK, minutes outside London. As a data center hub for the region, Slough is an ideal location because of the large volumes of Internet traffic in the area. The P-NAP marks our third location in the region, and we plan to add more in the future to accommodate high customer demand for our IP solution.
Globally, the next-generation MIRO algorithm makes millions of route adjustments every hour, improving application performance between our customers and their end users. This successful deployment marks another step in Internap’s commitment to providing Performance Without Compromise.