Month: August 2012
Choosing a data center services provider that has the right people in place is essential in the midst of any kind of emergency. The on-site staff and Network Operations Center staff are your first responders in the event of a disaster, and they are just as important as all the preventative measures mentioned in my previous posts. Here are a few questions to ask yourself and any prospective provider:
- Does your data center provider have the highly-trained and skilled people needed to operate its complex equipment?
- Do they provide 24/7 on-site support?
- Do they invest in ongoing employee education, including industry-recognized certifications?
- Do they have deep expertise not just in facilities management, but other critical areas, such as IP networking, telecoms and customer support?
- Do they collaborate with peers across the enterprise and discuss best practices?
- Does the provider have internal leadership teams where ideas can be discussed and best practices identified that can be implemented across the provider’s portfolio of facilities?
These are critical questions you need answered before selecting a data center provider. Skilled data center engineers should complement facility engineers who are trained in electrical and mechanical disciplines and who carry the appropriate licenses and equipment certifications.
With the right people and the right processes, your provider will be better prepared in the event of a disaster scenario. After selecting a provider that is committed to design, planning, testing, maintenance, communications and people, you can be assured that your data center provider is positioned to support your business — no matter what disaster may be on the horizon.
So based on the title, you’re probably assuming I’m referring to personal preparation – securing your home, buying all the needed pre- and post-storm supplies, gathering the paperwork necessary to contact your insurance company, etc. – to ride out the storm and get back to normal as quickly as possible in the aftermath.
Well, yes. But I’m also referring to your business-critical data and content. I’ve covered this topic in some previous mashups earlier this year – “Ready or not the storm is coming” and “It’s hurricane season – will the wind blow and blow and blow your roof down?” – but it bears repeating. Can you recover quickly should a hurricane or other natural disaster hit your IT infrastructure and put it out of commission? As you can imagine, there is no dearth of information on the matter, and this week’s mashup includes some of the more recent articles on the topic.
- Now is not the time to be creating your disaster recovery plans
- Prepare for Isaac using Internet, apps
- Hurricane Isaac and Disaster Recovery
- Lessons Learned: VARs Ready Should Isaac Head Their Way
- NOAA increases prediction for active hurricane season
Good luck if you are one of those bracing for, or impacted by, Isaac. We’ll all hope for the best. Oh, and for future reference, there’s no need to suffer data loss if you have the right components of a DR plan in place. We’re here to help you ensure business continuity around the clock and through any storm. Also, please take a peek at our Disaster Preparedness eBook.
Optimizing online gaming performance with multiple-cloud and hybrid-cloud strategies — Part two
If you are just joining us, this is the second half of a two-part series from our guest blogger Robert Malnati. Robert is Vice President of Marketing at Cedexis, an Internet performance and monitoring solution provider.
Cloud Strategies for Optimum Gaming
Game performance is affected by many factors, further complicating infrastructure decisions. The free cloud benchmarking that Cedexis provides (along with OpenMix, our paid service) allows gaming companies and other enterprises to begin assessing where and how they can deliver the best “end-user experience” for the dollars spent.
Internap long ago realized that relying on legacy protocols or backward-looking routing latency assessments to achieve optimal network performance was a wild goose chase. Similar to the way Internap’s proprietary routing algorithm, MIRO, gathers massive amounts of data to improve the performance of its IP service, at Cedexis we have learned that effectively using data can yield unexpected performance gains across multiple locations and clouds.
Real-time data allows you to avoid the numerous availability gaps and performance lags that occur throughout a day on any cloud or within any data center’s Internet connectivity. By combining two or more clouds/data centers, your gaming customers can “ride the low-latency curve” of optimized performance.
Comparisons from a Radar Audit – an analysis we perform for interested customers using actual Radar data – show the performance gains they can expect from the cloud vendors they are considering. Importantly, the different colors in the resulting charts display the relative performance of different traffic-routing methodologies: round robin, historical latency data and real-time latency data.
Round robin routing is fine for improving availability by maintaining active-active instances, but it does little to improve performance. Latency routing using historical data yields limited benefit, as users of the daily latency data behind AWS Route 53 have found. Significant gains come from using real-time data, however, providing meaningful latency reduction for your online game participants, whether they are playing single-player or multi-player games (and even during initial downloads).
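As a rough illustration of the difference, here is a minimal Python sketch contrasting round-robin selection with real-time latency-based selection. The endpoint names and latency samples are hypothetical, and this is not Cedexis’s actual OpenMix logic – just the general idea:

```python
import itertools
import random

# Hypothetical pool of cloud/data center endpoints (illustrative names only).
ENDPOINTS = ["nj-cloud", "chicago-dc", "london-cloud", "va-colo"]

# Round robin: rotate through endpoints regardless of current performance.
_round_robin = itertools.cycle(ENDPOINTS)

def pick_round_robin():
    return next(_round_robin)

def pick_lowest_latency(measurements):
    """Pick the endpoint with the lowest real-time latency (in ms).

    `measurements` maps endpoint name -> most recent latency sample,
    e.g. gathered from end-user measurements like those Radar collects.
    """
    return min(measurements, key=measurements.get)

# Simulated real-time samples; in practice these would be refreshed continuously.
samples = {ep: random.uniform(20, 180) for ep in ENDPOINTS}

print("round robin choice :", pick_round_robin())
print("real-time choice   :", pick_lowest_latency(samples), samples)
```

Round robin keeps sending traffic to every endpoint in turn, while the latency-based pick follows whichever endpoint is fastest right now.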
Optimizing Multi-player Game VM Homing – Multi-player online games provide an even more powerful demonstration of the value of a SaaS-based GSLB solution like OpenMix. Selecting the optimum location in which to spin up a multi-player game virtual machine is a great example of the power of Cedexis GSLB (our OpenMix service).
Consider the following use case:
- Three gamers join their weekly multi-player online game from Dallas, London and Berlin (blue icons).
- The game is available from four locations: Chicago Data Center, VA Colocation, New Jersey Cloud and a London Cloud (red icons).
- The optimum location is the New Jersey cloud, based on real-time latency measures for each location and the ISP network serving each player (green icon).
Hence the importance of (1) effectively gathering and assessing real-time latency data and (2) choosing among cloud providers, like Internap’s AgileCLOUD, that have appropriately dispersed cloud nodes and strong system performance.
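To make the homing decision concrete, here is a minimal sketch that picks the location minimizing the worst-case player latency. The location and player names follow the example above, but the latency figures are purely illustrative assumptions:

```python
# Hypothetical per-player latency measurements (ms) to each candidate location.
# Real values would come from real-time measurements of each player's ISP path.
latency_ms = {
    "chicago-dc":   {"dallas": 25,  "london": 110, "berlin": 125},
    "va-colo":      {"dallas": 40,  "london": 90,  "berlin": 105},
    "nj-cloud":     {"dallas": 45,  "london": 75,  "berlin": 85},
    "london-cloud": {"dallas": 115, "london": 15,  "berlin": 30},
}

def best_game_location(latency_ms):
    """Return the location that minimizes the worst-case player latency."""
    return min(latency_ms, key=lambda loc: max(latency_ms[loc].values()))

print(best_game_location(latency_ms))  # -> "nj-cloud" with these sample numbers
```

Minimizing the worst-case (rather than average) latency is one reasonable way to keep every player in the session responsive; other policies are possible.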
The gaming business remains robust, but competitive. Having your game downloads delayed or timing out due to suboptimal routing, or using the wrong combination of cloud node locations/vendors, can be expensive. I hope this note has given you the means to be a better decision maker about cloud hosting solutions as you evaluate them for your infrastructure.
Optimizing online gaming performance with multiple-cloud and hybrid-cloud strategies
As I listened to the Internap online gaming webinar last week, it was encouraging to hear how vibrant and fast-growing all three principal online gaming market segments are: Social, Mobile and Multi-player Online.
Clearly, monetization challenges are intensifying due to the “big gaming companies getting bigger” and app stores drowning in apps. That said, the use of cloud and CDN services, by new and mature games alike, offers a unique combination of customer experience improvement and cap-ex-friendly costs worth looking at by all players in the space.
Clouds Provide Scaling and Costs Flexibility
The use of public cloud and CDN solutions can play many beneficial roles in a game’s lifecycle, allowing for low-cost initial deployment, rapidly deployable “bursting capacity” during heady growth and/or pay-only-for-what-is-used infrastructure during the twilight of popularity.
Selecting the perfect cloud partner and the best region/zone is an ever-evolving target. Game popularity trends differently around the globe and the need to best align platform costs with optimum user experience is an emerging science. So, the challenge is optimizing these many investment options.
Cedexis Radar Provides Free Cloud Performance Reporting
As was mentioned on the webinar, solutions from Cedexis can provide both the free data to evaluate clouds and the intelligent Global Server Load Balancing to most effectively take advantage of your cloud, private data center and CDN infrastructures.
Cloud benchmarking is available for free from the Radar Community. The Cedexis Radar Community is the leading independent authority of cloud and CDN performance, with hundreds of companies around the world deploying the Radar tag on their web properties to help gather approximately one billion end user performance measurements a day.
In addition to unlimited access to the free aggregate cloud benchmarking data, every community member gains visibility into the community’s availability, throughput and latency measurements, as well as the traffic of their own subscribers, allowing Radar Community members to be educated consumers of these services.
Stay tuned for part two of this series tomorrow.
Carter Validus Mission Critical REIT announced earlier this week that it had acquired two Texas data center properties. There’s been some confusion around the reporting of this news, leading many people to inquire whether Internap had sold our flagship Dallas data center (located in Plano, Texas) to Carter Validus. The short answer is that the only change for Internap is that we will be sending our lease payments to a new landlord. The change of ownership on the property has no impact on our Dallas data center lease or our commitment to the Dallas data center market; Internap continues to hold a long-term lease with standard renewal terms on the facility. In fact, we see Carter Validus’ recent acquisitions as a positive validation of the health of the data center industry in general and the Dallas data center market in particular.
Built with scalability in mind, Internap’s state-of-the-art carrier-neutral premium data center in Dallas features high-density power, a concurrently maintainable N+1 architecture, and robust connectivity options, allowing customers flexible deployment configurations with room to grow. To request a tour or learn more about all our Dallas data centers, click here.
Our Dallas data centers are conveniently located in the Dallas-Fort Worth metroplex:
- Flagship Dallas Data Center
1221 Coit Road
Plano, Texas 75075
- Dallas Data Center
1950 N Stemmons Fwy
Dallas, Texas 75207
- Data Center Downtown Dallas POP
400 S Akard Street
Dallas, Texas 75202
More About Our Flagship Dallas Data Center
Our flagship Dallas data center is located at 1221 Coit Road, Plano, TX 75075. This Dallas colocation data center connects to our Atlanta, Silicon Valley, Los Angeles, Phoenix, Chicago and Washington, D.C. data centers via our reliable, high-performing backbone. Our carrier-neutral, SOC 2 Type II Dallas data center facilities are concurrently maintainable, energy efficient and support high-power-density environments of 20+ kW per rack. If you need Dallas colocation services or other data center solutions in the Dallas-Fort Worth metroplex, contact us.
The Internet changes everything. It changes the nature of the content you create and how you distribute that content. Your distribution model becomes a global one as users from around the world demand your content. While there are new opportunities, there are also new challenges. And if you aren’t an IT aficionado, getting that content to your audience while maintaining a positive user experience is a biggie.
One of the ways marketers, e-commerce gurus, gaming companies and IT professionals alike accomplish this goal is through web acceleration technologies. One such technology is called a Content Delivery Network, or CDN for short. If this term is new to you, you are in for a treat. Recently I put our own Pete Mastin, vice president of IP and CDN services, in the hot seat to explain just what a CDN is, how use cases have changed over the years and why it’s important for websites today. Check out the first of our new vblog series below.
Need more detail? Be sure to download a copy of our CDN Buyer’s Guide for more on the intricacies of CDN.
The six keys to disaster preparedness: Communication best practices
Welcome to the fifth installment of a six-part series on disaster preparedness and the importance of choosing a data center services provider that can handle emergencies. We’ve gone through the essentials of disaster-resistant design and infrastructure, documented response plans, mock drills, preventative maintenance and are now moving on to communication best practices.
Command and control is essential in managing any crisis. You should expect to be notified of any potential business-impacting event as well as to receive timely, detailed updates throughout. It’s best to look for a data center provider that has the resources and ability to proactively communicate information relative to their business operations.
It’s also a good idea for your provider to house its Network Operations Centers (NOCs) in geographically separate, redundant locations and have them staffed 24/7 so that all customers can be kept apprised of any situation that might affect their business.
In the event of a disaster, the NOC becomes the focal point for both internal and customer communications, ensuring timely, accurate and ongoing updates regarding any event that may be occurring in data centers around the country.
Customer Communications
Geographically disparate NOCs provide the redundancy to maintain a network should one facility fail during an event that impacts an entire region. The site data center operations team should notify the NOC quickly – ideally within 15 minutes of an event – and provide timely updates via email, SMS and conference bridges, among other methods.
In addition, the NOC should have the ability to monitor the data center’s Building Monitoring System (BMS) so NOC engineers have first-hand knowledge of any equipment alarms that may be occurring. The integration of the BMS with the NOC, the conference bridge for real-time communications and cell phones with SMS capabilities issued to all site operations personnel together provide a strong foundation for customer communications.
Ensure that your provider can facilitate effective communications during an event via a clear chain of command, a solid plan for keeping you up to date, and published escalation procedures. They should be prepared to answer detailed “what, when, where, why and how” questions. Documenting a detailed log of events will be extremely useful for a “postmortem” discussion after the event.
In our next segment on data center disaster preparedness, we’ll discuss the importance of having the right people in place to handle an unexpected failure or emergency.
It’s that time of year to stock up on school supplies and replace those often-worn items that your child has outgrown or destroyed beyond recognition. So, where to go? According to USA TODAY, the convergence of smartphone technology, social-media data and futuristic technology such as 3-D printers is changing the face of retail in a way that experts across the industry say will upend the bricks-and-mortar model in a matter of a few years. While the technology that will make magic mirrors possible (virtual dressing rooms that let you see how something would look on you) is still several years away, other technologies are here today. Two of the hottest topics surrounding mobile app development right now are social media and HTML5. Walletless platforms for smartphones are no longer a figment of our imagination. The list goes on…
And anyone in the business knows the cost of poor website performance, such as latency and jitter. Consider these dire stats: 78% of end users will go to a competitor’s site due to poor performance, and a one-second delay in website performance equals a 7% reduction in customer conversions.
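To make that second stat concrete, here is a minimal, illustrative calculation in Python. The traffic volume, conversion rate and order value below are hypothetical assumptions, not figures from the article:

```python
# Illustrative numbers only: apply the oft-cited "1 second delay ~ 7% fewer
# conversions" rule of thumb to a hypothetical store's traffic.
monthly_visitors = 500_000
baseline_conversion_rate = 0.03      # 3% of visitors buy
average_order_value = 80.00          # dollars
delay_seconds = 1
conversion_loss_per_second = 0.07    # 7% relative drop per second of delay

baseline_orders = monthly_visitors * baseline_conversion_rate
lost_orders = baseline_orders * conversion_loss_per_second * delay_seconds
lost_revenue = lost_orders * average_order_value

print(f"Orders lost per month: {lost_orders:,.0f}")      # 1,050
print(f"Revenue lost per month: ${lost_revenue:,.2f}")   # $84,000.00
```

Even with modest assumptions, a single second of added latency translates into real money left on the table every month.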
Here are some recent articles about retail and its future:
- Recently released market study: Airport Retail Trends in Europe, 2012-2013
- The Future of Retail and Your Money
- Retailers seek to cash in on mobile-payment trend
- Positive Retail Sales Report Isn’t Winning Over Skeptics
- The Best Move to Save This Retail Dinosaur
So retail’s high-tech future may mean no more malls in which to spend the day meandering and purchasing? Well, at least I can still get my retail therapy, just in a completely different way. What are you doing to prepare for the technology that will impact your end users? If you are in the retail business, you may want to read our solutions brief, The Keys to Success for Online Retailers, for ideas to further your business online.
Do I have game? I’d like to think so. Back in the day, I’d dabble a bit at the arcade with some Donkey Kong, Centipede and Q-Bert (that orange guy with the big nose sure was cute). Move to the ’90s and add a little Nintendo action with the Super Mario Brothers and then some Tetris. In the last few years, I jumped on the Guitar Hero and Rock Band tour (I’m even proud to say that I actually won a contest a few years back and could rip a mean solo). Now I’ve got an iPad and iPhone and usually get spanked at Words with Friends, but does that classify me as having game? Sure it does.
Over 72% of American households play computer or video games. The industry has moved from arcade gaming to console gaming, then to PC gaming, and now we’re in the age of online and mobile gaming. In an economic environment where many industries are struggling to grow, the online gaming sector is soaring. Why is that? Could it be that the easy access tablets and smartphones offer is causing everyone to get in the game?
Business Insights forecasts in their 2011 Video Gaming Industry Outlook that online gaming revenues will increase from $13.2 billion in 2009 to $25.3 billion in 2014, posting a CAGR of 13.9%. How are online game developers and publishers keeping up with gamer demands and the brisk pace of innovation?
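That growth rate is easy to sanity-check: a quick back-of-the-envelope calculation from the two revenue endpoints over the five-year span reproduces the 13.9% figure.

```python
# Sanity check of the cited forecast: $13.2B (2009) growing to $25.3B (2014).
start_revenue = 13.2   # $B, 2009
end_revenue = 25.3     # $B, 2014
years = 2014 - 2009    # 5 years

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")   # -> 13.9%, matching the Business Insights figure
```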
Join us tomorrow to check out the latest trends in the gaming industry. Hear from a panel of real “gamers” on what they’re seeing in their businesses as they maximize their IT infrastructure to be high scorers in the market. Gaming experts Kelvin Mok from Gravity Interactive and Monty Sharma of MassDigi will discuss the role of mobile and tablets, optimal environments for online games and more. Join us for our live webcast to be a part of the discussion.
Image courtesy of Hi-Rez Studios.
Moving to the cloud one app at a time
Leading IT organizations are realizing that it can be easier to consume quality infrastructure from vendors than to build out ever-more assets into their own data centers. This isn’t a slight to the work that enterprise IT organizations have done over the past few decades. It’s simply a function of the fact that high-performing, reliable infrastructure is most efficient at scale, and the demand for these services is being better met with each passing day.
So, what are world-class IT organizations doing? They are getting out of the infrastructure business. While wholesale outsourcing of infrastructure still presents a host of challenges, most IT departments are now using Infrastructure-as-a-Service (IaaS) for at least some of their applications. IaaS provides pre-configured hardware and storage over the Internet. This is a cost-effective delivery model, where the IT infrastructure services provider is responsible for owning, hosting, running and maintaining the equipment. In this type of model, organizations typically pay only for what they use.
IaaS IS ready for enterprise applications
Over the past few years, IaaS has rapidly matured to the point that many applications can be hosted reliably — with few or no problems. To be sure, there are those who make a living based on the complexity of enterprise IT environments who will provide well-crafted objections about the shortcomings of current IaaS offerings. In some cases they will be right, but IaaS providers are increasingly addressing concerns about reliability, performance and security, and doing so at a lower cost than internal IT.
Execution is difficult — so you need a good strategy
It’s easy to say that IaaS is ready, right? But you’ve got mission-critical applications with real-world requirements that were not set with the current capabilities of IaaS providers in mind.
For better or worse, expectations have been set by the custom, high-performing environments that the budgets of the last 20 years allowed. Everyone is looking for the benefits of cloud, along with application performance and functionality. Reliable and scalable infrastructure is simply expected. Your reward for building a world-class enterprise infrastructure is to find a way to gradually make it more efficient, reliable and scalable.
So how and where do you get started in building an IaaS migration strategy?
Follow these five steps:
- Understand and prioritize your application portfolio. How critical is this application to the business? How hard would it be to move this application to a new platform? Are there specific performance requirements? (See the prioritization sketch after this list.)
- Reallocate resources and realign goals. You may need to assess how to realign both people and budgets around the application.
- Take a hands-on approach to evaluating your options. Don’t write an RFI/RFP/RFQ; these are all methods of learning about what the post-sale experience will be like. Instead, spin up some infrastructure with a few providers, and turn it off when you’re done.
- Test a “cloud ready” application. After you’ve removed dependencies on bespoke infrastructure, perform a pilot in which you test a cloud-specific application in the cloud. Once stable, follow your existing, proven application release methodology, and move it officially to the cloud environment.
- Just do it. The real key is to start the process and prove to your organization that it can be done, thus setting the stage for a more comprehensive application migration strategy.
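As a companion to step one, here is a minimal, hypothetical sketch of how an application portfolio might be scored for migration priority. The application names, ratings and weighting are illustrative assumptions, not a prescribed methodology:

```python
# Hypothetical scoring of an application portfolio for cloud migration.
# Each app is rated 1-5: business criticality, migration difficulty, and how
# strict its performance requirements are. Lower scores float to the top.
portfolio = [
    {"name": "internal wiki",   "criticality": 2, "difficulty": 1, "perf": 1},
    {"name": "test/dev builds", "criticality": 2, "difficulty": 2, "perf": 2},
    {"name": "customer portal", "criticality": 4, "difficulty": 3, "perf": 4},
    {"name": "core billing",    "criticality": 5, "difficulty": 5, "perf": 5},
]

def migration_priority(app):
    # Simple heuristic: favor apps that are easy to move and tolerant of a
    # new platform, weighing difficulty and performance needs against value.
    return app["difficulty"] + app["perf"] - 0.5 * app["criticality"]

for app in sorted(portfolio, key=migration_priority):
    print(f"{app['name']:<16} score={migration_priority(app):.1f}")
```

The point is not the particular formula but the discipline: rank candidates explicitly so the first workloads you move are the ones with the least risk and the clearest payoff.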
Need more detail? Check out the full-length version by downloading the eBook, “Five Key Steps to a Successful Cloud Takeoff.”