Month: May 2013
Please join us on June 5th, 2013 at 1 p.m. ET for our next webcast, Hybridization: Shattering Silos Between Cloud and Colocation. We’ll discuss how hybridizing cloud and colocation can improve visibility and flexibility for your infrastructure, and present use cases in which colocation and cloud work together efficiently.
Traditional colocation typically lacks the transparency, automation and flexibility inherent in cloud services – making it difficult to gain a holistic view of the environment or to easily address certain use cases (such as scale-out Web applications or “bursty” and unpredictable workloads).
To solve these challenges, colocated enterprises should now consider an “all-cloud” strategy, right? Not quite. Cloud solutions are not appropriate for every application or workload type, especially if latency, security or uptime is of utmost concern.
So, is it possible to have the best of both worlds – the control and reliability offered by colocation and the visibility and flexibility of cloud? Join us for this complimentary webcast in which we’ll explore how you can leverage cloud and colocation hybridization for your IT infrastructure.
Attend this complimentary webcast to learn:
- Advantages of colocation and cloud solutions
- How hybridization of cloud and colocation can result in improved visibility and flexibility for your IT infrastructure
- Real-world use cases in which colocation and cloud efficiently work together
Learn more and register for the webcast.
Big data is the buzzword in the IT industry these days. While traditional data warehousing involves terabytes of human-generated transactional data to record facts, big data involves petabytes of human and machine-generated data to harvest facts. Big data becomes supremely valuable when it is captured, stored, searched, shared, transferred, deeply analyzed and visualized.
The platform most frequently cited as the enabler of all these things is Hadoop, the open source Apache project that has become the major technology movement for big data. Hadoop has emerged as the preferred way to handle massive amounts of not only structured data, but also the petabytes of complex semi-structured and unstructured data generated daily by humans and machines.
The major components of Hadoop are the Hadoop Distributed File System (HDFS) and an implementation of MapReduce. HDFS distributes and replicates files across a cluster of standardized computers/servers. MapReduce splits the data into workable portions across the cluster so they can be processed concurrently, based on a map function configured by the user. Hadoop relies on each compute node to process its own chunk of data, allowing for efficient “scaling out” without degrading performance.
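To make that division of labor concrete, here’s a minimal, framework-free Python sketch of the map/reduce pattern itself (a word count, the canonical example). Hadoop applies the same idea, but with the map and reduce phases distributed across HDFS blocks on many nodes:

```python
# Minimal sketch of the MapReduce pattern: a map function emits
# key/value pairs, the framework groups them by key, and a reduce
# function aggregates each group.
from collections import defaultdict

def map_fn(line):
    # Emit (word, 1) for every word in an input line.
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Sum the occurrences collected for one word.
    return word, sum(counts)

def run_mapreduce(lines):
    groups = defaultdict(list)
    for line in lines:                    # "map" phase
        for key, value in map_fn(line):
            groups[key].append(value)
    # "reduce" phase, one call per distinct key
    return [reduce_fn(k, v) for k, v in groups.items()]

if __name__ == "__main__":
    data = ["big data big analytics", "big clusters"]
    print(run_mapreduce(data))  # [('big', 3), ('data', 1), ...]
```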
Hadoop’s popularity is largely due to its ability to store, analyze and access large amounts of data quickly and cost-effectively across these clusters of commodity hardware. Use cases include digital marketing automation, fraud detection and prevention, social network and relationship analysis, predictive modeling for new drugs, retail in-store behavior analysis and mobile location-based marketing, across an almost endless variety of verticals. Although Hadoop is not considered a direct replacement for traditional data warehouses, it enhances enterprise data architectures with the deep analytics needed to extract the true value of big data.
When building and deploying big data solutions with a scale-out architecture, cloud is a natural consideration. The value of a virtualized IaaS solution, like our own AgileCLOUD, is clear: configuration options are extensive, provisioning is fast and easy, and the use cases are wide-ranging. But shared public cloud architectures usually carry performance trade-offs at scale, such as the I/O bottlenecks that can arise when MapReduce workloads grow. Moreover, virtualization and shared tenancy can impact CPU and RAM performance. Purchasing ever-larger virtual instances or additional services to reach higher IOPS can get expensive and may still not deliver the desired results.
Hence the beauty of on-demand bare-metal cloud solutions for many resource-intensive use cases: disks are local and can be configured with SSDs to achieve higher IOPS, RAM and storage are fully dedicated, and server nodes can be provisioned and deprovisioned programmatically depending on demand. Depending on the application and use case, a single bare-metal server can support greater workloads than multiple similarly sized VMs. Under the right circumstances, combining virtualized and bare-metal server nodes can yield significant cost savings and better performance.
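As an illustration of that programmatic elasticity, here’s a hypothetical Python sketch of demand-based scaling against a provider’s provisioning API. The endpoint, payload fields and thresholds are invented for the example; they are not Internap’s actual API:

```python
# Hypothetical sketch: scaling a bare-metal Hadoop cluster through a
# provider's REST provisioning API. All names below are illustrative.
import requests

API = "https://api.example-provider.com/v1/bare-metal"  # placeholder URL
HEADERS = {"Authorization": "Bearer <token>"}            # placeholder token

def scale_cluster(current_nodes, queue_depth):
    # Naive policy: add a node when the job queue backs up,
    # release one when it drains (keeping a minimum of 3 nodes).
    if queue_depth > 10:
        r = requests.post(API, headers=HEADERS,
                          json={"profile": "hadoop-datanode", "ssd": True})
        r.raise_for_status()
        return current_nodes + [r.json()["server_id"]]
    if queue_depth == 0 and len(current_nodes) > 3:
        victim = current_nodes.pop()
        requests.delete(f"{API}/{victim}", headers=HEADERS).raise_for_status()
    return current_nodes
```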
Here at INAP, we work very hard to maintain the reputation of our network. This includes, of course, quickly and efficiently handling abuse complaints to ensure that our servers and services are not causing problems for anyone else out on the Internet. Our abuse team makes sure that every valid complaint submitted to abuse@inap.com is forwarded on to the appropriate party. With that in mind, here is some helpful information you can use when an abuse complaint lands on your doorstep.
DON’T PANIC!
Seriously, this is the first and most important step. The worst thing you can do is nothing. If we never hear back on the status of a complaint and the malicious content is still there, we are often left with no choice but to shut off the device generating the nasty stuff until we hear from the operator. Make sure the primary contact email on your account has abuse@inap.com whitelisted. This is crucial, as the complaints we forward often contain a snippet of or information about the spam, phishing site, etc., and can get chewed up by your spam filter. Even if you don’t have time or are temporarily unable to handle a complaint, let us know when you can check on it.
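For example, if your mail happens to flow through SpamAssassin, a one-line rule keeps our notifications out of the junk folder (a minimal sketch; the right config file varies by setup):

```
# /etc/mail/spamassassin/local.cf (or ~/.spamassassin/user_prefs)
# Never score abuse notifications from INAP as spam.
whitelist_from abuse@inap.com
```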
Got Management?
If you are unsure how to handle an abuse complaint and you have management for the affected server, you can always submit a support ticket about it. That way, the complaint will be swiftly handled by me or one of our other system administrators, 24×7, rain or shine. For us, it can be rather enjoyable sifting through mail logs, checking timestamps, and tracking down how, when and where spam was sent.
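For the do-it-yourself crowd, here’s a rough Python sketch of that kind of log sifting: tallying outbound messages per authenticated sender in a Postfix-style mail log. Log paths and line formats vary by MTA and distribution, so treat this as a starting point:

```python
# Tally outbound messages per SASL-authenticated sender in a
# Postfix-style mail log; a sudden spike from one account is a
# common sign of a compromised password.
import re
from collections import Counter

SASL = re.compile(r"sasl_username=(\S+)")

def top_senders(logfile, n=10):
    counts = Counter()
    with open(logfile, errors="replace") as f:
        for line in f:
            m = SASL.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for user, total in top_senders("/var/log/maillog"):
        print(f"{total:6d}  {user}")
```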
Resellers, Pay it Forward
If you happen to resell any of our services, it is important to ensure that your clients deal with abuse complaints in a timely manner as well. After all, the buck has to stop somewhere. Many of our larger partners operate abuse departments of their own and work extensively with our abuse team to keep their own networks clean and abuse-free.
Feedback Loops Are Our Friends!
INAP has automated spam reporting using what are called feedback loops. How does it work? We’ve signed up with major mail providers (Google, Microsoft, AOL, Yahoo, etc.) so that they notify us whenever one of their users marks a message that originated on our network as spam. Our system checks the complaint, matches the IP address to the server, and automatically notifies the affected operator. If you see a lot of these during a particular timeframe, it is usually a dead giveaway that something on that IP was sending out spam.
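Under the hood, these complaints typically arrive as ARF reports (RFC 5965). A bare-bones Python sketch of the matching step might just pull the Source-IP field from the machine-readable part; a production pipeline would fully parse the MIME structure and verify the report before acting on it:

```python
# Extract the Source-IP field from an ARF (RFC 5965) feedback-loop
# report so the complaint can be matched to the server using that
# address. Deliberately simplified: real ARF messages are multipart
# MIME and should be parsed and authenticated properly.
import re

def source_ip_from_arf(raw_report: str):
    # The machine-readable part contains fields like
    # "Feedback-Type: abuse" and "Source-IP: 192.0.2.7".
    m = re.search(r"^Source-IP:\s*([0-9a-fA-F.:]+)\s*$", raw_report, re.MULTILINE)
    return m.group(1) if m else None

if __name__ == "__main__":
    sample = "Feedback-Type: abuse\nUser-Agent: ExampleFBL/1.0\nSource-IP: 192.0.2.7\n"
    print(source_ip_from_arf(sample))  # 192.0.2.7
```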
Policy Review
If you do plan on using your server to send out large amounts of (legitimate) mail, please take the time to ensure that your methods are in line with INAP’s Terms of Service and Acceptable Usage Policy, easily reviewable at https://www.inap.com/legal/.
Cleaning Up
Once you have dealt with any abuse issues, it doesn’t hurt to verify that your server’s reputation on the Internet is intact. IP address reputation is largely governed by Realtime Blackhole Lists (RBLs), which are listings of IP addresses accessible via DNS. If one of these RBLs finds that your IPs have been sending too much spam, it will list you, and any mail server on the Internet that uses that RBL will refuse mail from you. Good resources for checking if and where you are listed are http://mxtoolbox.com/blacklists.aspx and http://multirbl.valli.org. If you find yourself listed, the RBL will usually have information on how to get delisted, and we here at INAP are more than glad to assist you in this endeavor.
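Because RBLs are just DNS zones, you can also check a listing yourself. A minimal Python sketch using only the standard library: reverse the IPv4 octets, append the RBL zone, and resolve; an answer means the IP is listed, NXDOMAIN means it is clean. The zones below are common examples; mind each list’s usage policy before querying in volume:

```python
# DNSBL lookup: 1.2.3.4 checked against zone "bl.example" becomes a
# DNS query for 4.3.2.1.bl.example. An A record back means "listed".
import socket

def is_listed(ip: str, zone: str) -> bool:
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")
        return True          # got an answer: listed
    except socket.gaierror:
        return False         # NXDOMAIN (or lookup failure): not listed

if __name__ == "__main__":
    # 127.0.0.2 is the conventional "always listed" test address.
    for zone in ("zen.spamhaus.org", "bl.spamcop.net"):
        print(zone, is_listed("127.0.0.2", zone))
```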
Updated: January 2019
SSAE, SOC 2 & SOC 3 reporting standards
The last time I wrote about SOC 2 reporting, it was still very new. I was still learning about these standards and, as a result, may not have been as exact as you might have wanted. I also may have been a little hard on SSAE reports. And despite my earlier description, there is no “SSAE SOC 2” report; SSAE and SOC 2 are different types of audits.
So now, I thought it might be worth a refresh of some key SSAE, SOC 2, and SOC 3 points, thoughts, and opinions.
What is the difference between SOC 1, SOC 2 and SOC 3?
- SSAE 16, or SOC 1, is essentially a replacement for what was known as SAS 70. With this report, an auditor evaluates controls as defined by the service provider and offers an opinion. Depending on how rigorously the service provider tests, the report may be extremely valuable to the service provider’s customers, or not very helpful at all.
- SOC 2 and SOC 3 are based on the American Institute of Certified Public Accountants’ Trust Service Principles (TSP) of security, availability, processing integrity, confidentiality and privacy. Service providers being audited under SOC 2 and 3 are evaluated against both their own controls and some predefined TSP controls. Because of these predefined standards, these reports are, in my opinion and the opinion of others, more likely to be useful. Note, however, that a service provider is not required to test on all five TSPs, so there may be differences even among SOC 2 or 3 reports from different providers.
- A SOC 2 report contains the auditor’s report and details about the tests performed, the results, and an opinion on the controls. A SOC 3 report contains only the auditor’s report on whether the controls meet the service criteria established under the TSP. Which one is better depends on the level of detail a customer needs.
- The testing for each type of audit can cover a single point in time (Type I) or a specified period (Type II).
- No one gets “certified” by one of these audits. A service provider simply “successfully completes” the audit. To find out how successfully, you need to read the service provider’s report.
SOC 2 Report Summary
Hopefully, the information above is useful and will help you make informed choices. If you want some additional opinion, I am partial to SOC 2 Type II reports; it’s what we do here at Internap. These reports describe operational controls and provide the auditor’s insight into how well those controls work, which seems to be what most of our customers’ auditors want.
But beyond that, these reports are great tools for benchmarking our own performance. For HorizonIQ, it’s not just a marketing gimmick; it’s serious business. And that’s probably as important a reason as any to trust your business with us.
Data center disaster preparedness: disaster-resistant design and infrastructure – part 1 (video)
For IT organizations evaluating data center space or colocation services, it’s important to consider disaster mitigation and recovery capabilities. In this video, Bill Brown, VP of Data Center Operations at Internap, discusses the factors that should be in place to minimize disruptions to your business in the event of a disaster.
Redundancy standards for power and cooling – The data center should be designed to N+1 redundancy standards for both power and cooling. This ensures a level of maintainability as well as resiliency in the event of a loss of utility power or other unexpected event.
Independent electrical grid – This should be “hardened” to withstand outside influences, such as earthquakes or terrorist attacks. Actions as simple as positioning bollards at the entrance to deter a threat or placing a cement wall between transformers to guard against a catastrophic ground fault can reduce risks.
Single points of failure should be identified and eliminated to minimize the risk of disruption and help maintain business continuity.
Before choosing a data center provider, take an in-person tour of the facility to determine if the design and infrastructure will provide the level of redundancy and protection that your business needs.
The physical design of the data center should also have adequate precautions in place to protect against weather-related threats. In certain geographic areas that are prone to earthquakes or other outside influences, preventative measures should be in place to avoid an impact on business operations. Internap’s Houston data center is located in the only downtown building that did not lose power during Hurricane Ike in 2008.
To learn more, download our ebook, Data Center Disaster Preparedness: Six Assurances You Should Look For.
Explore your IT Infrastructure options with Internap’s Solution Builder
Choosing a hosting platform that meets the unique needs of your business can be challenging, and many variables must be considered when making IT infrastructure decisions. Different use cases and workloads can require different infrastructure options, and other factors such as sensitivity to latency and downtime, speed of deployment and security must also be evaluated.
To address this challenge, we’ve created the Solution Builder, an online tool that helps you navigate the myriad infrastructure options available, from colocation and managed hosting to private and public cloud.
The Solution Builder considers a variety of factors, including the nature of your specific project, the technical application or architecture requirements and the geographic location of your deployment to create a personalized recommendation. This interactive tool even provides real-time feedback that shows you how different requirements will alter the type of infrastructure that fits your needs.
Agile hosting – If you require a rapid deployment with no long-term commitment, Internap’s Agile hosting platform may be a good fit.
Managed hosting – For companies that require a more customized solution and prefer to evenly distribute the costs of deployment over the life of the application, a managed hosting solution can provide increased control.
Colocation – For organizations that want to own and manage their own hardware and equipment and take advantage of shared power connections, HVAC systems, physical security and redundant architecture, colocation may be the best option.
Evaluating your infrastructure options? Internap’s Solution Builder can help you find and build the solution that best fits your requirements.
To create a cost-effective infrastructure option for housing big data deployments, many IT organizations have been forced to use a combination of cloud, dedicated hosting and colocation from multiple providers. But the emergence of new colocation services that provide cloud-like flexibility along with secure physical infrastructure helps meet the demands of big data through data center hybridization.
Today, the largest hard drive you can buy provides 3 terabytes of storage. Online gamers playing Battlefield 3 will generate enough data to fill that entire drive in just three days. Twitter generates enough information to fill one up in a mere six hours. Facebook is even more impressive: they collect 500 terabytes of data per day, which means it would take them a scant 8 minutes to fill up the world’s largest hard drive.
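For the skeptics, the arithmetic on that last figure checks out. A quick back-of-the-envelope calculation in Python:

```python
# Time for Facebook's stated daily intake to fill the largest
# (3 TB) consumer hard drive of the day.
DRIVE_TB = 3
FACEBOOK_TB_PER_DAY = 500
minutes = DRIVE_TB / FACEBOOK_TB_PER_DAY * 24 * 60
print(f"{minutes:.1f} minutes")  # ~8.6 minutes, i.e. "a scant 8 minutes"
```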
So where do companies actually put all of that data? If you’re Facebook, Twitter, Google or one of a handful of other industry titans, you build your own data centers and fill them with servers as fast as you can build them. But if you’re not yet a multi-billion dollar corporation, building your own state-of-the-art data center is probably not a viable option. Cloud and dedicated hosting can provide flexible contracts and tools that make hosting and scaling traditional web applications easier, but these options make less sense when you start talking about storing petabytes of big data.
The evolution of colocation
Many companies find that the most cost-effective way to deploy a big data cluster is to put it in someone else’s state-of-the-art data center via colocation. But cost-effectiveness is only part of the equation. Traditionally, colocation hasn’t been able to offer the customizable options of dedicated hosting or the agility and flexibility of cloud. Whether they’re trying to meet seasonal demand, run an intensive one-off report on a big data deployment, or launch a new web application, companies can’t wait four weeks for new capacity in a colocated environment. They need access to on-demand resources.
Hybridization
The ability to connect your colocation and cloud resources can fill this void and help bridge the gap between your big data deployment and the cloud. Internap’s Platform Connect offers true hybridization, allowing you to seamlessly link your colocation, Custom and Agile hosting, and AgileCLOUD environments across the same layer 2 or layer 3 network within any of our Agile-enabled facilities. Launching your new free-to-play game next month and need to make sure your MongoDB deployment can temporarily handle a 500% increase in the amount of data you’re collecting? Need to run a series of reports on your CouchDB deployment that’s already seeing 85% utilization? Want to replicate your 80TB Hadoop deployment for a few weeks so you can see firsthand what effect an HDFS change will have on your queries? No problem.
Multiple services, one provider
Instead of searching for multiple providers – one partner for colocation and IP, one for dedicated hosting and one for cloud – Internap offers multiple services under one roof: award-winning colocation for housing your big data, the industry’s fastest and most reliable IP service, exceptional Custom and Agile hosting for your websites and mission-critical applications, and the fastest and most cost-effective cloud for your on-demand needs. Being able to reach out to a single provider who understands your IT needs at every layer of the OSI model is a small but important step.
Until now, traditional colocation has taken a backseat to dedicated hosting and cloud computing, but it’s emerging as an agile, cost-effective option for housing big data. The ability to hybridize your data center with Platform Connect and apply cloud-like flexibility to physical servers gives IT organizations a best-of-both-worlds approach to big data deployments. (See these capabilities in action at Internap’s New York Metro data center.)
Learn more about how Platform Connect can hybridize your data center and provide a new approach to big data infrastructure challenges.