Apr 29, 2015

New direct-attached storage option for fast, big data applications

INAP

We recently launched a new direct-attached storage option for Internap’s bare-metal configurations. Customers with data-intensive workloads will benefit from Intel’s faster, more robust S3710 series solid-state drives (SSDs), which are now available across all of our bare-metal server locations.

Customers with write-intensive applications, or those who simply need higher IOPS than the Intel DC S3500 series delivers, will find this upgrade very useful. S3710 series SSDs can deliver more than three times the random write IOPS of the S3500s, meaning more transactions for every storage dollar spent.

                                   Intel DC S3500       Intel DC S3710
Endurance Rating (lifetime write)  Up to 880 TBW        Up to 24.3 PBW
Sequential Read                    Up to 500 MB/s       Constant 550 MB/s
Sequential Write                   Up to 450 MB/s       Up to 520 MB/s
Random Read (100% span)            Up to 65,000 IOPS    Up to 85,000 IOPS
Random Write (100% span)           Up to 14,600 IOPS    Up to 45,000 IOPS
Overprovisioning Firmware Fix      No                   Yes

The S3710 is available in four options:

  • 200GB @ $150/SSD/mo
  • 400GB @ $300/SSD/mo
  • 800GB @ $600/SSD/mo
  • 1.2TB @ $900/SSD/mo
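
As a rough illustration of the “more transactions for every storage dollar” point, the short Python sketch below compares the random write figures from the table and divides the S3710’s peak figure by each monthly price. It assumes, purely for illustration, that every capacity reaches the full 45,000 random write IOPS, which smaller drives may not.

```python
# Figures taken from the comparison table and price list above.
S3500_RANDOM_WRITE_IOPS = 14600
S3710_RANDOM_WRITE_IOPS = 45000   # assumed here to apply to every capacity

print(f"Random write uplift: {S3710_RANDOM_WRITE_IOPS / S3500_RANDOM_WRITE_IOPS:.1f}x")

# Monthly price per SSD for each S3710 capacity option.
pricing = {"200GB": 150, "400GB": 300, "800GB": 600, "1.2TB": 900}
for capacity, dollars in pricing.items():
    print(f"{capacity}: {S3710_RANDOM_WRITE_IOPS / dollars:,.0f} random write IOPS per $/month")
```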

The upgrade to the S3710 series SSDs goes a long way toward improving the performance of write-heavy or IOPS-intensive applications.

To order the new S3710 series SSDs, new customers can call 1.877.843.7627 or chat with us, and existing Internap customers can contact their account executives.

Apr 23, 2015

Backup Strategy: What Should I Back Up?

Paul Painter, Director, Solutions Engineering

At HorizonIQ, we know that data protection is essential to make businesses work, and losing critical data can make businesses disappear.

According to PricewaterhouseCoopers, 94% of companies that suffer a catastrophic data loss without backup will go out of business within 2 years. The firm also estimates that 15,000 hard drives around the world fail every day. And, according to web security firm Symantec, the median cost of downtime for a small or medium-size business is $12,500 per day.

Backup can save your company from lost revenue, lost sales, lost development time and lost, well, everything.

But what should you back up? Do you need to back up everything, or just certain files? How can you assess that? In this article, we’d like to offer some tips on how to approach your backup strategy.

Defining file backups and bare-metal backups

Let’s start with two definitions to understand the difference between file backups and bare-metal backups. Then, we will discuss some common use cases.

File backup — By definition, a backup is a copy of computer data, whether a single file or the contents of an entire hard drive. The most common way to take backups is at the file level. Determining the most critical files is the first step in defining your backup strategy.

Bare-metal backup — A bare-metal backup is a full backup of everything needed to restore the server to a previous state. “Everything” in this situation means the operating system, its configuration, the application code, and the application and user data.
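
As a minimal sketch of the file-level approach, assuming placeholder paths you would replace with your own critical files, the snippet below archives a few directories into a timestamped tarball. A bare-metal backup, by contrast, is normally an image of the entire disk taken by your backup software or hosting provider rather than a script like this.

```python
import tarfile
from datetime import datetime
from pathlib import Path

# Placeholder paths: replace with the files and directories you deem critical.
CRITICAL_PATHS = ["/etc", "/var/www", "/home/app/config"]
BACKUP_DIR = Path("/backups")

def file_backup() -> Path:
    """Archive the critical paths into a timestamped, compressed tarball."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"files-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for path in CRITICAL_PATHS:
            if Path(path).exists():
                tar.add(path)  # adds the directory recursively
    return archive

if __name__ == "__main__":
    print(f"Created file-level backup: {file_backup()}")
```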

Backup use cases

We’ve observed that our customers’ backup strategies evolve according to a few factors, including company size, the criticality of their data, and their tolerance for restoration time. Bare-metal backups usually reduce restoration time, since you don’t need to reinstall the operating system, reconfigure it, and reinstall your applications.

Bare metal or file backup for startups
A startup may not have the luxury of backing up everything, because a full backup can involve a lot of data, including the operating system and its configuration, and the cost can be hard to justify. You have to weigh that cost against the time it would take to work overnight re-installing the operating system and re-configuring everything as it was before the disaster. In this scenario, backing up only your critical files may be sufficient.

Backup for SMB & enterprise corporations
As a company grows larger, it usually ends up using both types of backup. Some servers are not worth the expense of bare-metal backups, which can require significant offline time. But if one or two servers are business-critical, you may need bare-metal backups to be able to restore them in a timely manner in the event of a disaster.

Can I still perform some file restoration when taking bare-metal backups?
Usually, yes. By taking bare-metal backups you are also protected against file loss, since the backup solution should be capable of extracting a specific file or set of files from the bare-metal image.

Backup best practices

We’d like to remind you that regardless of which backup strategy you choose, the most often forgotten best practice is also the most important one: testing your backups. We advise you to try restoring your files and your servers on a regular basis. Why test restoration? Because you don’t want to attempt a restoration for the first time during an actual disaster. You want to make sure your backups are not corrupted and that you didn’t miss a critical file when deciding what was essential to back up.
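
One simple way to act on that advice is to compare checksums of restored files against the live originals after a test restore. The sketch below illustrates the idea; the directory paths in the usage comment are placeholders for your own source and restore locations.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir: str, restored_dir: str) -> list[str]:
    """Return the relative paths that are missing or differ after a test restore."""
    source, restored = Path(source_dir), Path(restored_dir)
    problems = []
    for original in source.rglob("*"):
        if not original.is_file():
            continue
        copy = restored / original.relative_to(source)
        if not copy.is_file() or sha256(original) != sha256(copy):
            problems.append(str(original.relative_to(source)))
    return problems

# Example with placeholder paths:
# print(verify_restore("/var/www", "/restore-test/var/www"))
```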

 

Apr 21, 2015

Bare metal: The right infrastructure for big data

Ansley Kilgore

What Are the Key Bare Metal Infrastructure Requirements for Big Data Analytics?

Flexibility and performance are common themes that drive almost every large-scale infrastructure deployment, and big data is no different. The ability to analyze, store, and act on fast, big data is critical, and a reliable, high-performance Internet infrastructure is required to extract valuable insights from large volumes of data.

Scalability – Adjust to changes in demand with a scalable infrastructure that can accommodate sudden increases in volume without negatively impacting performance. On-demand solutions like HorizonIQ’s bare-metal cloud offer the performance of a dedicated server along with the elasticity of cloud.

Low latency – The right servers and specifications must be paired with a reliable, highly available network to deliver ultra-low latency. This is especially important for industries that require transactions and requests to complete in less than 100 milliseconds (a simple way to check this is sketched below).

Performance – Process and analyze massive data sets quickly to meet your real-time analytics needs. Compared with virtualized cloud offerings, bare-metal servers deliver higher CPU, RAM, storage, and internal network performance for your most demanding applications.
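
As a rough way to check the sub-100-millisecond budget mentioned above, the sketch below times a plain TCP connection from your application host. The hostname and port are placeholders, and a TCP connect only approximates one network round trip rather than full request latency.

```python
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the average TCP connect time in milliseconds over a few samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; we only care about the handshake time
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# Placeholder endpoint: swap in the service you actually need to reach.
print(f"average connect time: {tcp_connect_ms('example.com'):.1f} ms")
```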

Hybridizing your colocation, dedicated hosting, and cloud services into one unified environment can meet your big data workload requirements while optimizing for scale, speed, and performance.

Apr 16, 2015

Top 5 ways to avoid cloud security issues

Ansley Kilgore

Cloud security has evolved to provide better data and access management for public cloud delivery models. But even with more control and visibility into the data placed in the cloud, many organizations still have cloud security issues that impact compliance. Certain industries such as finance, healthcare and retail are bound by strict compliance and location regulations around data management and personally identifiable information.

So what are some factors to consider before deploying a cloud solution to make sure that it will meet your compliance requirements?

1. User and access management

Some organizations have multiple individuals and teams that operate portions of their applications in the cloud. The ability to define access rules and grant or deny access for users can be critical to the security of your data.

2. Firewalls and other network access devices


Many environments use hardware or software firewalls and VPN appliances to protect access to server resources or define network access policies across application components.

3. Separation of resources (network, compute, storage)

Often, applications with specific compliance requirements need to employ dedicated hardware or single-tenant environments. These requirements may affect your ability to use public cloud services if the appropriate segmentation is not available.

4. Encryption management tools

Many organizations require encryption for both data in flight and data at rest to protect data in multi-tenant environments. Tools that enable such services, such as key and certificate management, can accelerate cloud adoption for organizations with these security needs.
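
As a small, provider-neutral illustration of data-at-rest encryption with key management kept in your own hands, the sketch below uses the third-party cryptography package’s Fernet recipe. This is our own example rather than any particular cloud provider’s tooling, and in practice the key would live in a key-management service rather than in code.

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# In production the key would come from a key-management service,
# not be generated and held in application code like this.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"cardholder-name=Jane Doe;pan=****-****-****-4242"
token = cipher.encrypt(record)     # ciphertext safe to store at rest
restored = cipher.decrypt(token)   # requires access to the key

assert restored == record
print(f"stored {len(token)} encrypted bytes")
```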

5. Compliance certifications and vendor/buyer responsibilities

Organizations with specific compliance needs, such as PCI DSS or HIPAA, may require certifications or Reports on Compliance (RoC) to determine exactly which requirements are met by the provider and which remain the buyer’s responsibility.

 

Apr 14, 2015

Customer spotlight – engage:BDR reduces latency for advertisers and publishers

Ansley Kilgore

As a digital advertising company that began in 2009, engage:BDR is considered a pioneer in the adtech industry. With a fully managed solution as well as a standalone real-time bidding (RTB) platform, the company covers a broad spectrum of services for advertisers and publishers. By leveraging cross-device and cross-channel capabilities, engage:BDR can render static or dynamic ads, regardless of medium. The end result is that engage:BDR can choose the right ad for the right audience and drive marketing success for customers.

Why is engage:BDR unique?
Based in the heart of West Hollywood, engage:BDR is different from other technology startups in a few ways. First, the company began with no venture capital investment. While this gave the founders a certain amount of freedom, it also created strong motivation to be profitable from day one and keep operating costs low. In addition, while most adtech companies either match the right ad to the right audience, or serve the ad, engage:BDR does both. This requires a low-latency environment to accommodate their RTB exchange and ad platform.

Infrastructure challenges
Like most tech startups, when engage:BDR initially went into business they used Amazon Web Services (AWS) for their infrastructure needs. But around 2011, when the company started venturing into programmatic ad purchasing, their bandwidth usage costs grew exponentially. When their monthly bill suddenly skyrocketed to $80K, they began looking for a more cost-effective alternative. “AWS is easy, but once you start growing, the bandwidth will kill you,” said Kenneth Kwan, CTO.

Reduced latency
In an ad exchange, responses to bid requests need to take place within 75 milliseconds, and lower latency gives more leeway for ad purchasers to respond. After evaluating other solutions, engage:BDR chose Internap’s Performance IP™ service and Content Delivery Network (CDN). While the decision was initially driven by cost, engage:BDR has seen significantly lower latency since making the switch.

Content delivery
To efficiently serve the ads, engage:BDR leverages Internap’s CDN to deliver large files and assets. Serving the images can be bandwidth-intensive, and using a CDN is a more efficient method to accomplish this.

Adtech ecosystem
Additionally, Internap has locations near the major ad exchanges, including San Jose and Los Angeles. This allows engage:BDR to be part of an adtech ecosystem where close proximity of DSPs and SSPs further reduces latency for everyone with a seat on the ad exchange.

One of the metrics that engage:BDR tracks is queries per second, which includes all requests for ads and responses to bids. The company currently processes between 1.5 and 2 billion queries per day. With this staggering volume of requests, a cost-effective IP and CDN solution has a big impact on the bottom line and the efficiency of the ad network and RTB marketplace.
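
For a rough sense of scale, the short calculation below converts 1.5 and 2 billion queries per day into an average rate per second; actual traffic peaks well above a flat average.

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

for queries_per_day in (1.5e9, 2.0e9):
    average_qps = queries_per_day / SECONDS_PER_DAY
    print(f"{queries_per_day:,.0f} queries/day is about {average_qps:,.0f} queries/second on average")
```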

Apr 10, 2015

3 network considerations for cloud solutions

Ansley Kilgore

When evaluating public cloud solutions, the network is a factor that should not be overlooked. The network permeates every layer of your infrastructure, from database and storage to the application and management layers. Without a reliable, high-availability network, your environment can suffer from latency, poor security, and a lack of access control. To provide an optimal end-user experience, your cloud solution requires the right network capabilities. Let’s take a look at some network considerations for cloud and how they can affect your infrastructure.

Performance and scale

First and foremost, performance and scale go hand in hand, and latency and network speed can have a drastic impact on both. A reliable, optimized IP service can reduce response time and ensure more consistent application and workload performance. This is especially important for data-intensive workloads that are sensitive to latency. The right network can help you scale to accommodate increased transaction volume and number of users, without compromising the capabilities of your application.

North-south and east-west traffic

North-south traffic flows between your application and the end user, while east-west traffic flows between servers within your infrastructure. When evaluating the network capabilities of cloud providers, you need optimized IP for both types of traffic, as each can affect performance across your infrastructure.

Location

The right network makes the physical location of your cloud less important. Cloud services are designed to be managed through API or portal interfaces, which reduces the need for physical proximity. With that said, cloud location becomes relevant if you’re trying to target a specific geographic market. Using low-latency Internet transit or additional services such as Content Delivery Network (CDN) and anycast DNS can make the location of the cloud provider less important in these situations.

So what are some specific questions to ask cloud providers regarding network?

  • How is one customer’s traffic segmented from other customers’ traffic?
  • How easy is it to create and manage network segments (e.g., VLANs) on your particular resource pool?
  • What networking speeds are offered or guaranteed between hosts, virtual machines or services (such as compute to block storage)?

 

Apr 8, 2015

DNS Failover: Strengthen your disaster recovery plan

INAP

DNS failover is a must-have for any company that generates revenue from its website. Having a reliable disaster recovery plan in place can prevent negative repercussions on your bottom line. But let’s face it – outages can and do occur. They can happen at any moment and for a variety of reasons including:

  • Hardware failures
  • Malicious attacks (DDoS, hackers)
  • Scheduled maintenance and upgrades
  • Man-made or natural disasters

Today, we announced a new feature for HorizonIQ’s Managed DNS: Active Failover. Backing up your sites and making sure that your DNS service includes failover are important steps in preparing for downtime. Since DNS handles the initial transaction to your servers, it can be a first line of defense to make sure your users have uninterrupted access to your content.

What is a Managed DNS solution?

Internet service providers usually run basic DNS services; however, certain organizations or individuals can greatly benefit from the features and performance of a managed DNS. Managed DNS is a service that allows you to control the DNS of your sites without having to worry about the underlying infrastructure. Furthermore, managed DNS solutions provide advanced features, availability, and redundancy.

What is DNS Failover and how does it work?

DNS Failover is essentially a two-step process. The first step involves actively monitoring the health of your servers, usually with ping (Internet Control Message Protocol, or ICMP) checks to verify that your HTTP server is reachable. Server health can be assessed every few minutes, and more advanced services let you configure the monitoring interval. In the second step, DNS records are dynamically updated to resolve traffic to a backup host if the primary server is down. Once your primary server is back up and running, traffic is automatically directed to its original IP address.
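
To make the two steps concrete, here is a minimal sketch of the idea in Python, with a basic TCP check standing in for the ICMP/ping monitoring described above. The update_dns_record function and the IP addresses are placeholders rather than a real HorizonIQ or Managed DNS API; a managed service performs both steps for you.

```python
import socket
import time

PRIMARY_IP = "203.0.113.10"   # placeholder primary server
BACKUP_IP = "203.0.113.20"    # placeholder backup host
CHECK_INTERVAL = 60           # seconds between health checks

def is_healthy(ip: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Step 1: health check. Can we open a TCP connection to the server?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def update_dns_record(ip: str) -> None:
    """Step 2: point the A record at the given IP.

    Placeholder only: a real implementation would call your DNS
    provider's API, not print a message.
    """
    print(f"would update A record to {ip}")

def failover_loop() -> None:
    current = PRIMARY_IP
    while True:
        target = PRIMARY_IP if is_healthy(PRIMARY_IP) else BACKUP_IP
        if target != current:   # fail over, or fail back once the primary recovers
            update_dns_record(target)
            current = target
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    failover_loop()
```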

DNS Failover is not without limitations. For it to work, you need backup locations for your site and applications. And even if DNS records are updated quickly once an outage is detected, ISPs’ resolvers must refresh their cached DNS records, which normally happens only after the record’s TTL (Time to Live) expires. Until that occurs, some users will still be directed to the downed primary server.

The HorizonIQ solution

Unlike some DNS service providers, HorizonIQ’s Managed DNS performs health checks 24/7 to verify the availability of your website or applications and provide you with the optimal DNS failover solution. In addition, managing and setting up DNS failover for the first time is quick and easy through our intuitive web interface and portal.

Apr 3, 2015

Guide to high-performance hosting

INAP

The original version of this article was published on the iWeb blog. Read it here.

What is high performance in hosting, and which hosting model offers the ‘best’ performance? It’s all a question of priorities. Find out how focusing on the right features can mean choosing the most cost-effective, high-performance hosting solution for your own particular needs.

Shared web hosting is very limited in performance. Beyond a certain level of workload or website traffic, there is no such thing as high performance (shared) web hosting. Even ‘unlimited’ web hosting that offers unlimited capacity and unlimited traffic is not as high-performance as it sounds.

Shared web hosting splits computing, storage and bandwidth capacity between many tenants. That means performance is limited by the amount of CPU, RAM and bandwidth you are allocated at a given moment. Even if there is no stated limit on the bytes of traffic you can serve or the files you can store, you are severely limited by the fact that you are sharing a physical server with many other tenants and their unknown, variable workloads. That said, good web hosting offers perfectly adequate performance for low-traffic, static websites.

So how do you know when a web hosting plan is inadequate for your site?
Websites that use shared hosting will become noticeably slow to load if the server is struggling to keep up with demand. You can also track page load times in Google Analytics – just make sure that your sample size is large enough to base a decision on.

If you expect high levels of traffic, or you are a hosting reseller/agency looking to build a portfolio of websites, consider planning ahead and buying hosting that has enough headroom for you to grow. It could save you the pain of a series of migrations.

Other ways to improve website performance at any level of hosting are to optimize and/or separately host different website elements. For example, you can speed up page loads by consolidating CSS, host your CSS, JavaScript, images and other files separately, and use a Content Delivery Network (CDN) to store and serve static content.

Apr 1, 2015

The essential cloud buyer’s guide

Ansley Kilgore

As more cloud providers race to meet the fast-growing needs of the IT and developer community, the process of evaluating cloud solutions has become more complex. For organizations looking for public cloud Infrastructure-as-a-Service (IaaS), comparing cloud services and features across a level playing field can be difficult. The practice of “cloud washing” contributes to the confusion, as some providers apply the term “cloud” to offerings that are not truly cloud.

Whether your business is a startup developing cloud-native applications or an enterprise organization looking to adopt cloud while leveraging legacy infrastructure, cloud can offer benefits in terms of performance, scalability and efficiency. Ultimately, making the wrong choice can result in problems with scale, performance, cost control, vendor lock-in and unsuccessful cloud migrations.

So what are the key factors that you should consider when evaluating cloud IaaS offerings? To provide guidance around these pain points, we’ve put together a buyer’s guide and cloud checklist to help you navigate the decision-making process.

Performance matters
A top consideration should be the performance of your cloud solution. Even if you’re not processing, analyzing and storing large amounts of data in real time, high-performance cloud solutions such as bare metal can offer cost and efficiency benefits when incorporated into your environment. High-performance capabilities are required for real-time, data-intensive applications such as adtech or online gaming, but they can also deliver powerful compute, storage and network benefits for other use cases.

Choosing a high-performance cloud can also prepare your infrastructure for the future as your business evolves. Consider your current and future requirements, and choose a cloud solution that can grow along with the changing needs of your business.

 
