Month: September 2014
In a recent webinar, Internap and our guest, Forrester Research’s James Staten, reviewed the cloud service provider landscape and discussed how hybridization is shaping cloud deployment strategies.
IT decision makers evaluating cloud service providers need a clear understanding of how cloud can support business drivers, but in many cases, a hybrid approach may be the best choice. Choosing a provider with a breadth of services beyond cloud will give your business more flexibility and control in the future.
“Cloudy” service offerings
As cloud service providers race to meet the growing demands of businesses, the definition of the term “cloud” is becoming blurry. The practice of “cloud washing” – when providers simply add “cloud” to existing service offerings – creates confusion in the marketplace and muddies the waters as to what cloud is and what it isn’t. Enterprises and SMBs are challenged to look beneath the surface and find out if the solution will actually meet their needs.
Evaluating cloud service providers
To further complicate matters, a cloud environment isn’t always the best fit. Even if your infrastructure could migrate completely to the cloud, this option may not be the most cost-effective choice in the long run.
For example, a hybrid environment can be more cost-effective for a typical ecommerce business. Let’s look at a cost analysis for an ecommerce site infrastructure and compare three different environments – public cloud only, colocation only and hybrid (a rough illustrative model follows the comparison below).
In this scenario, the ecommerce site has predictable levels of demand for 95% of the year and higher peak demand for 5% of the year. Assumptions include dedicated firewalls, dedicated load balancers, web servers, application and cache servers, and database servers.
Colocation – Utilization is low and there is a significant amount of unused resources since the infrastructure is built to handle peak demand.
Public Cloud – Utilization is high, since the company pays only for the capacity it uses, but that efficiency comes at a high unit cost.
Hybrid – Combining both colocation and cloud provides high utilization at a lower cost than either option alone.
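To make that comparison concrete, here is a rough, back-of-the-envelope model of the three options in Python. Every rate and server count in it is an illustrative assumption rather than actual provider pricing; the point is simply how the 95/5 demand split drives the economics.

```python
# Illustrative cost model for the ecommerce scenario above.
# All rates and server counts are hypothetical assumptions, not real pricing.

HOURS_PER_YEAR = 8760
PEAK_FRACTION = 0.05                 # high demand for 5% of the year
BASE_SERVERS = 10                    # servers needed for steady-state demand (assumed)
PEAK_SERVERS = 30                    # servers needed at peak (assumed)
CLOUD_PER_SERVER_HOUR = 0.50         # assumed public cloud rate, $/server-hour
COLO_PER_SERVER_YEAR = 2000.0        # assumed amortized colo cost, $/server-year

def cloud_only():
    # Pay hourly for exactly what is used: base load most of the year, peak for 5% of it.
    base_hours = HOURS_PER_YEAR * (1 - PEAK_FRACTION)
    peak_hours = HOURS_PER_YEAR * PEAK_FRACTION
    return (BASE_SERVERS * base_hours + PEAK_SERVERS * peak_hours) * CLOUD_PER_SERVER_HOUR

def colo_only():
    # Build for peak: pay for peak capacity all year, even while most of it sits idle.
    return PEAK_SERVERS * COLO_PER_SERVER_YEAR

def hybrid():
    # Colocation carries the predictable base load; cloud absorbs the 5% burst.
    burst_cost = (PEAK_SERVERS - BASE_SERVERS) * HOURS_PER_YEAR * PEAK_FRACTION * CLOUD_PER_SERVER_HOUR
    return BASE_SERVERS * COLO_PER_SERVER_YEAR + burst_cost

for name, cost in (("public cloud only", cloud_only()),
                   ("colocation only", colo_only()),
                   ("hybrid", hybrid())):
    print(f"{name:>18}: ${cost:,.0f}/year")
```

With these assumptions, the colo-only build pays for idle peak capacity all year, the cloud-only build pays a premium hourly rate around the clock, and the hybrid combination comes out cheapest.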
The bottom line is to do what’s best for the application instead of trying to fit a square peg into a round hole. When evaluating cloud service providers, choose one that encompasses hybrid infrastructure capabilities as well as cloud hosting solutions.
Data center storage requirements are changing quickly as a result of the increasing volumes of big data that must be stored, analyzed and transmitted. The digital universe is doubling in size every two years, and will grow by a factor of 10 between 2013 and 2020, according to the recent EMC Digital Universe study. So, clearly storage needs are skyrocketing.
Fortunately (for all of us buyers out there), the cost per gigabyte of storage is falling rapidly, primarily because disk and solid state drives continue to evolve to support higher areal densities. Alas, the volume of data being stored seems to be outpacing our ability to cram more magnetic bits (or circuits, in the case of flash) into a given unit of surface area.
As a result, storage costs are likely to become a larger component of overall IT budgets in the coming years. Here are five things to consider when planning for your future storage needs.
1. High power density data centers
With increasing storage needs and a greater sophistication of the storage devices in use, power needs for each square foot in a data center are increasing rapidly. As a result, high power density design is a critical component of any modern data center. For example, if an average server rack holds around 42 servers and each of those servers uses 300W of power, the entire rack will require 12-13kW in a space as small as 25 square feet. Some data center cabinets can be packed with even more servers; for example, some blade server systems can now support more than 10x the number of servers that might exist in an average rack. This increasing demand for higher power density is directly related to the need for higher storage densities in data centers.
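As a quick sanity check on those figures, the arithmetic looks like this:

```python
# Back-of-the-envelope rack power density using the numbers above.
servers_per_rack = 42        # a typical full rack of 1U servers
watts_per_server = 300       # per-server draw cited above
rack_footprint_sqft = 25     # floor space cited above

rack_kw = servers_per_rack * watts_per_server / 1000
print(f"Rack load: {rack_kw:.1f} kW")                                             # 12.6 kW, i.e. the 12-13 kW range
print(f"Density: {rack_kw * 1000 / rack_footprint_sqft:.0f} W per square foot")   # ~500 W/sq ft
```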
2. Cost-efficient data center storage
Choosing an energy-efficient data center from the start can help control costs in the long run. Facilities designed for high density power can accommodate rising storage needs within a smaller space, so you can grow in place without having to invest in a larger footprint.
Allocating your storage budget across different tiers is another way to help control costs. Audit your data to determine how it is used and how often particular files are accessed during a given period, then categorize the data into tiers so that each type of data is matched with the appropriate storage type. The most-accessed data will require a more expensive storage option, while older, less-accessed data can be housed in less-expensive storage. Examples of different storage types, from most to least expensive, include RAM, solid state drives, spinning hard disk drives (SATA or SAS) and tape backup.
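As a rough illustration of that tiering approach, the sketch below maps access frequency to a storage tier. The thresholds and example datasets are assumptions for demonstration only; in practice you would tune them to the results of your own data audit.

```python
# Minimal sketch of access-frequency-based storage tiering; thresholds are assumed.
def choose_tier(reads_per_day: float) -> str:
    if reads_per_day > 10_000:
        return "RAM / in-memory cache"
    if reads_per_day > 100:
        return "solid state drive"
    if reads_per_day > 1:
        return "spinning disk (SAS/SATA)"
    return "tape backup / archive"

datasets = [("session cache", 500_000), ("product catalog", 5_000),
            ("order history", 40), ("2009 audit logs", 0.01)]
for name, reads in datasets:
    print(f"{name:>15}: {choose_tier(reads)}")
```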
3. Scalability
Infrastructure should be designed with scalability in mind; otherwise, costs can become unmanageable and performance can suffer, possibly to the point of outages. Scalability allows you to grow your infrastructure at a pace that matches the growth in data, and also gives you the ability to scale back if needed. Distributed or “scale-out” architectures can provide a strong foundation for multi-petabyte storage workloads because of their ability to quickly expand and contract according to compute, storage or networking needs. A hybrid infrastructure that connects different types of environments also lets customers migrate data between cloud and colocation, so that when an unexpected need for storage arises they can shift spend between opex and capex.
4. Security
Strict security or compliance requirements for data, particularly for companies in the healthcare or payment processing industries, can increase the complexity of data management and storage processes. For example, some data must be held in dedicated, third-party-audited environments and/or be fully encrypted at rest and in motion.
5. Backup and replication
When planning your infrastructure, make sure it supports backup and replication in addition to your application requirements. Online backup protects against unpredictable failures like natural disasters, while replication covers predictable events such as hardware failures and planned maintenance. Establishing adequate replication and backup requirements can more than double the storage needs for your application.
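To see how quickly those requirements inflate raw capacity, here is a simple worked example. The replication factor and backup retention are assumptions; your own policies determine the real multiplier.

```python
# How replication and backup multiply the storage footprint (assumed policies).
raw_tb = 100                  # application data, in TB (assumed)
replication_factor = 2        # one live replica of every object
full_backups_retained = 1     # one full backup copy kept offsite

total_tb = raw_tb * replication_factor + raw_tb * full_backups_retained
print(f"{raw_tb} TB of application data -> {total_tb} TB provisioned "
      f"({total_tb / raw_tb:.0f}x the raw footprint)")
```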
Your data center storage needs will continue to increase over time as the digital universe keeps expanding at a Moore’s Law-like exponential pace. Careful planning is required to create a cost-efficient, secure, reliable infrastructure that can keep up with the pace of data growth. Service providers can draw on their experience to help you find the right storage options for different storage needs.
Internap wins Stevie Award for big data solution at 2014 American Business Awards
Internap was honored to receive a bronze Stevie® Award in the New Product or Service of the Year – Software – Big Data Solution category at the 12th Annual American Business Awards on September 12, 2014. The award honors Internap and Aerospike for creating the industry’s first “fast big data” platform, which runs Aerospike’s hybrid NoSQL databases on Internap’s bare-metal servers.
The combined solution from Internap and Aerospike enables developers to quickly deploy applications that demand predictable, high performance in a cost-effective hosted environment. Big data workloads and other computationally intensive applications require higher levels of performance and throughput than traditional virtualized cloud infrastructure can provide.
Benchmark tests comparing similar virtual and bare-metal cloud configurations show Internap’s bare-metal cloud yields superior CPU, RAM, storage and internal network performance. In many cases, organizations require 8x fewer bare-metal servers than virtualized servers, resulting in decreased IT equipment cost, less power usage and a smaller data center footprint.
The eXelate use case
eXelate is the smart data company that powers smarter digital marketing decisions worldwide for marketers, agencies, platforms, publishers and data providers. eXelate’s platform provides accurate, actionable, and agile data and analytics on online household demographics, purchase intent and behavioral propensities.
Aerospike’s hybrid NoSQL database allows eXelate to use at least 12x fewer servers than lower-capacity in-memory database solutions (740GB of storage per server, as opposed to the 64GB available with in-memory solutions). This enables massive-volume, real-time data storage and continual read/write back to the Aerospike cluster.
By running the Aerospike NoSQL databases on Internap bare-metal servers in four data centers around the world, eXelate is able to process 2 TB of data per day and ingest over 60 billion transactions per month for more than 200 publishers and marketers across multiple geographic regions.
Details about The American Business Awards and the lists of Stevie Award winners who were announced on September 12 are available at www.StevieAwards.com/ABA.
When it comes to high power density data centers, all are not created equal. Many customers, particularly those focused on ad tech and big data analytics, are specifically looking for colocation space that can support high power densities of 12+kW per rack. Here at Internap, we have several customers that need at least 17kW per rack, which requires significant air flow management, temperature control and electricity. To put this in perspective, 17kW equates to about 60,000 BTUs per hour – and a gas grill rated at 60,000 BTUs can cook a pretty good steak in about five minutes.
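For reference, the conversion behind that comparison is simple: one kilowatt of IT load dissipates roughly 3,412 BTU per hour.

```python
# Converting rack power draw to heat output.
BTU_PER_HOUR_PER_KW = 3412

for rack_kw in (12, 17):
    print(f"{rack_kw} kW rack -> ~{rack_kw * BTU_PER_HOUR_PER_KW:,} BTU/hr")
# 17 kW -> ~58,000 BTU/hr, i.e. the "about 60,000" figure above
```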
Delivering super high power density that meets customer demands and ensures tolerable working conditions requires careful planning. When designing a high power density data center, there are three essential elements to consider.
1. Hot aisle vs. cold aisle containment.
To effectively separate hot and cold air and keep equipment cool, data centers use either hot aisle or cold aisle containment. With cold aisle containment, all the space outside the enclosed cold aisle is considered hot aisle, and enough cold air must be pumped across the front side of the servers to keep them cool. However, the hot aisles can become too hot – over 90 degrees – which creates intolerable working conditions for customers who need to access their equipment.
As power densities rise, temperature control becomes even more important. Using true hot aisle containment instead of cold aisle containment creates better working conditions for customers and maintains a reasonable temperature across the entire data center floor. With hot aisle containment, there’s still heat coming from the racks, but you only have to deal with the heat coming from the rack you’re working on at the time, instead of getting roasted by all of them at once. This approach helps avoid the “walking up into the attic” effect for data center technicians.
2. Super resilient cooling systems.
In a typical data center, if the computer room air conditioning (CRAC) units go offline, you have about 10-15 minutes to get the chillers restarted before temperatures start to rise significantly. But when equipment is giving off 36,000 BTUs per hour, you don’t have that luxury. To avoid an oven-like atmosphere, cooling systems must be ultra-resilient and designed for concurrent maintainability, including N+1 chillers and separate loops for the entire cooling infrastructure.
Hot aisle containment also makes a cooling outage less painful because the entire data center floor becomes a cool air pocket that can be sucked through the machines, giving you a few extra minutes before things start getting – well, sweaty.
3. Electrical distribution.
Data centers must be designed to support high density power from day one. We have a mobile analytics customer that uses nine breaker positions in a single footprint. You can’t simply add more breaker panels when customers need them; you have to plan ahead to accommodate future breaker requests from the start. Also, breaker positions are used for primary and redundant circuits – more customers than ever are requesting redundant power, so this should also be taken into consideration.
The flexibility of modular design
Internap’s high density data centers are flexible enough to work with custom cabinets if the customer prefers to use their own. As long as the cabinet can be attached to the ceiling and connected to the return air plenum, we can meet the customer’s power density requirements.
Data centers designed to support high power density allow companies to get more out of their colocation footprint. The ability to use rack space more efficiently and avoid wasted space can help address changing needs and save money in the long run. But be sure to choose a data center originally designed to accommodate high power density – otherwise you and your equipment may have trouble keeping cool.