Jul 27, 2011

Top 5 reasons why every website should be using a CDN

INAP

Website operators usually think of a Content Delivery Network (CDN) in terms of what its name implies – rapid “content delivery” of files, websites, games and streaming video.

However, CDNs perform a number of functions that make them increasingly valuable and integral to any successful website and Internet presence. We’ve highlighted the top 5 reasons why every website should use a CDN.

Never get “slashdotted” again!

It’s always great when your efforts get recognized. However, when someone slashdots your website, you are being asked to deal with more traffic than you ever imagined.

Slashdotting can have the very undesirable effect of sending thousands of visitors to your site at once, overloading and crashing your servers, and turning off a whole new set of potential customers along with your existing ones. By caching your site on a CDN you are able to handle the “Slashdot Effect” and enjoy the benefits of so many end users visiting your site.

Protect your site from Distributed Denial of Service (DDoS) attacks

DDoS attacks happen thousands of times a day worldwide, and a successful one can put a website out of business. CDNs help absorb the load and prevent servers from becoming overwhelmed by abnormally high traffic volume. Without a CDN to act as a buffer, cloud servers are left exposed to attack, a particular concern for ecommerce websites whose servers store personal information.

Support new markets for your products, services and information

The Internet gives people the freedom to sell anywhere. In fact, you never know where new customers will come from. Whether your potential new customers are in Hollywood, Hoboken or Hong Kong, if your website performance and content delivery are judged to be slow, you can bet you won’t be closing sales to far-off markets any time soon.

CDNs ensure good performance everywhere. As a result, your content and application traffic are delivered quickly and reliably, no matter where your new customers may be.

Reduce your infrastructure costs (servers, storage, racks, footprint)

Building a worldwide network of data centers is one way to solve the problem of slow content delivery to your potential customers and end users. Of course, for most businesses, this is completely unrealistic. Leveraging someone else’s worldwide infrastructure through a CDN is a much more cost-effective and realistic solution to achieve the same end without the huge capital expenditure.

Make page turns much faster for a consistent user experience

Users won’t be satisfied with website performance if your content doesn’t render instantaneously upon a mouse click. Surveys bear out what happens when users aren’t satisfied:

A one-second delay in website performance will result in a 7% loss in conversions, 11% fewer page views and a 16% decrease in customer satisfaction.*

You must decrease page load times or face losses. CDNs ensure that content is delivered more quickly and consistently to your end users.

Why isn’t your business using a CDN?

For the reasons cited above alone, the question is no longer why you should be using a CDN, but how fast you can enable your websites and Internet presence to reach new customers across the world, leverage new assets and protect your existing ones.

*Forrester Research/Strangeloop

Explore HorizonIQ
Bare Metal

LEARN MORE

Stay Connected

About Author

INAP

Read More
Jul 21, 2011

Have You Ever Been to Expediatravelocityorbitz.com?

INAP

Have you ever been to the website Expediatravelocityorbitz.com?

What about Bestbuytargetwalmart.com?

Of course you are shaking your head “no,” but I bet you have, and many of your customers and would-be customers have, too. I can prove it.

Time and again we hear from customers, we talk to end users and we read reports and surveys that tell the same story about website performance. Look no further than this statistic from the travel industry:

59% of consumers multi-task when waiting for a travel site to load. Nearly one in five open another travel site in a new window when made to wait.

Need more evidence?

Ask yourself how many times you have clicked away from a website when a page took too long to load. Ask yourself how many times you then went to at least two other websites.

Performance does matter. It matters down to the second for many travel companies, online retailers, media companies and even down to the millisecond for software companies, financial companies and gaming companies.

End users expect your website to be highly responsive, interactive and informational. They expect rich graphics and video that show what your products or services can do for them. They want page clicks to be as responsive as turning pages in a digital magazine.

Now consider your business. The impact of a 1-second delay doesn’t just apply to the travel industry. In any industry, if your website typically earns $100,000 a day, a 1-second delay could cost you $2.5 million in sales this year!
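The arithmetic behind that figure is simple, assuming the widely cited 7% loss in conversions per second of added load time:

```python
# Annual cost of a one-second delay, assuming a 7% loss in
# conversions per second of delay (the commonly cited figure).
daily_revenue = 100000    # dollars per day
conversion_loss = 0.07    # 7% of revenue lost per second of delay

annual_loss = daily_revenue * conversion_loss * 365
print(round(annual_loss))  # roughly 2.5 million dollars a year
```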

Performance is critical. Take the steps to make sure the only website your potential customers want is YourBusiness.com.

Jul 8, 2011

Cooking up OpenStack with Swift, Chef and Spiceweasel, Part 2

INAP


We at Voxel are deepening our commitment to infrastructure automation and to the growing set of tools from the OpenStack project and Chef from the Opscode folks.

Where we last left off, I was walking through our manifest files which we use to crank out clusters of OpenStack’s Swift Object Storage Clouds. Now I’ll talk more about what a typical Swift topology looks like, which will help explain what this manifest actually means.

Swift Topology and our Setup Variables: Hardware, Rings, Users

OS and Networking: First off, Swift needs to know the various networks and IP addresses of the different services. It’s a good idea to separate your Swift deployment into a public “proxy” network and a “storage” network. Their network activity profiles are much different, and it will give the network jocks a chance to tune for the traffic patterns they see. It also offers a level of protection against sabotage from the outside. So, have at least two network devices on every single node, one for the storage network, one for the proxy network. Your own basic Chef node configuration should be able to arrange that.

Second, we’re going to make life easy and use Ubuntu 10.04 LTS and the swift-core repo available from the Launchpad Swift site. Those repos are baked into our Swift cookbooks.

Finally, you’ve gotta set up a dedicated storage device of some kind on each of the Swift object datastore servers, not the proxies. You really ought to make this storage simple JBOD (Just a Bunch of Disks). No RAID – the fail-over advantages of RAID are lost because Swift itself replicates the data to other nodes.

Swift Server Chef Roles and the Rings: Based on the “Multi-node setup” of Swift as documented on the OpenStack website, we create two classes of servers, each with a different suite of software and a different hardware profile.

The “swift-proxy” role proxies and load balances the HTTP requests from clients, and also handles authentication. Swift also allows external authentication and authorization services to be configured, but that’s a custom coding job. Proxy nodes with a special setting that allows account maintenance are sub-classed in our Chef roles as “swift-auth” nodes; they manage sensitive information that is best kept away from end users, so keep them on a private network with access to the storage network. Not much disk is required by these servers.

The “swift-storage” nodes are the disk hogs. They handle the life-cycle of the three “rings” – the stored objects ring, the container ring, and the user account data ring. These rings are managed by nodes that I’ve subclassed off the storage nodes and called “swift-util.” On “swift-util” role nodes you run the ring rebuilding, which takes a lot of CPU, as well as other utilities to manage and maintain the rings, like “st.” All the services share some core packages, but each has its own Ubuntu “deb” packages.

With this info in mind, it becomes clear what our manifest will be able to do. Let’s walk through it now.

The Manifest, made manifest

There are two parts to the manifest: the spiceweasel YAML file, and the data bag with the ring and device definitions.

The SpiceWeasel deployment file – mytest_infrastructure.yaml

Here’s a sample spiceweasel YAML file for our purposes.

cookbooks:
- swift:
- apt:

roles:
- swift-storage:
- swift-proxy:
- swift-auth:
- openstack-base:
- swift-util:

data bags:
- mytest_cluster:

nodes:
- voxel 2:
	- role[swift-proxy]
	- --hostname_pattern=proxy --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2
- voxel 2:
	- role[swift-auth]
	- --hostname_pattern=proxy --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2
- voxel 5:
	- role[swift-storage]
	- --hostname_pattern=storage --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2
- voxel 2:
	- role[swift-util]
	- --hostname_pattern=util --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2

Let’s review the above, broken down into sections.

The first stanza calls knife to upload those two cookbooks into the chef server.

cookbooks:
- swift:
- apt:

The second section uploads the <rolename>.json or <rolename>.rb files into the chef server.

roles:
- swift-storage:
- swift-proxy:
- swift-auth:
- openstack-base:
- swift-util:

The third section indicates the data bags to be loaded into the server (much more on this below):

data bags:
- mytest_cluster:

The last section indicates the nodes that will be built via our Voxel automated node creation. Note here that we’re making several groups of servers: two proxy, two auth, five storage and two util nodes. They use our “hostname_pattern” feature, which appends a sequential two-digit number to the hostname and then appends the “domain_name” to create an FQDN. Our servers are offered in configuration sets, documented elsewhere, which define the hardware dedicated to each. ‘facility’ defines the Voxel facility where you’d like the server deployed, ‘image_id’ is from the list of available OS images (this one being Ubuntu 10.04 LTS), and ‘swap’ indicates the amount of swap space that will be reserved on disk.

nodes:
- voxel 2:
	- role[swift-proxy]
	- --hostname_pattern=proxy --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2
- voxel 2:
	- role[swift-auth]
	- --hostname_pattern=proxy --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2
- voxel 5:
	- role[swift-storage]
	- --hostname_pattern=storage --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2
- voxel 2:
	- role[swift-util]
	- --hostname_pattern=util --domain_name=swift.newgoliath.com --config_id_group=4 --facility=lga --image_id=39 --swap=2
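The hostname expansion works roughly like this (a sketch of the naming scheme described above, not Voxel’s actual provisioning code):

```python
def expand_hostnames(pattern, domain, count):
    """Append a sequential two-digit number to the hostname pattern,
    then append the domain, to build an FQDN for each node."""
    return ["%s%02d.%s" % (pattern, i, domain) for i in range(1, count + 1)]

# Five storage nodes: storage01.swift.newgoliath.com ... storage05.swift.newgoliath.com
print(expand_hostnames("storage", "swift.newgoliath.com", 5))
```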

The Swift specific data bag – “mytest_cluster.json”

You use this data bag to kick off the initial configuration of the rings and the devices. It’s then watched by the Swift cookbook, and the rings are rebuilt and distributed through the zones in the order described in the “zone_config_order” setting below. This allows you to maintain a high level of service, while you reconfigure the rings to optimize cluster performance.

/chef-repo/data_bags/testcluster# less conf.json
{
  "id": "conf",
  "ring_common": {
    "zone_config_order": "1,2,3,4,5",
    "account_part_power": 18,
    "account_replicas": 3,
    "account_min_part_hours": 1,
    "container_part_power": 18,
    "container_replicas": 3,
    "container_min_part_hours": 1,
    "object_part_power": 18,
    "object_replicas": 3,
    "object_min_part_hours": 1
  },

  "rings": [

    { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "1", "hostname": "storage01", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "2", "hostname": "storage02", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "3", "hostname": "storage03", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "4", "hostname": "storage04", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "5", "hostname": "storage05", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },

    { "status": "online", "ring_type": "container", "cluster": "testcluster", "zone": "1", "hostname": "storage01", "port": 6001, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "container", "cluster": "testcluster", "zone": "2", "hostname": "storage02", "port": 6001, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "container", "cluster": "testcluster", "zone": "3", "hostname": "storage03", "port": 6001, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "container", "cluster": "testcluster", "zone": "4", "hostname": "storage04", "port": 6001, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "container", "cluster": "testcluster", "zone": "5", "hostname": "storage05", "port": 6001, "device": "xvda4", "weight": 100, "meta": "install" },

    { "status": "online", "ring_type": "object", "cluster": "testcluster", "zone": "1", "hostname": "storage01", "port": 6000, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "object", "cluster": "testcluster", "zone": "2", "hostname": "storage02", "port": 6000, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "object", "cluster": "testcluster", "zone": "3", "hostname": "storage03", "port": 6000, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "object", "cluster": "testcluster", "zone": "4", "hostname": "storage04", "port": 6000, "device": "xvda4", "weight": 100, "meta": "install" },
    { "status": "online", "ring_type": "object", "cluster": "testcluster", "zone": "5", "hostname": "storage05", "port": 6000, "device": "xvda4", "weight": 100, "meta": "install" }
  ]
}

The file has two major parts. First, there’s a short stanza with the common configuration options, which define the basics of the cluster and the rings. Second, there are the details of the rings, with the hardware and zones.

Here’s some detailed explanation:

The first stanza:

"id": "conf",
"ring_common": {
  "zone_config_order": "1,2,3,4,5",
  "account_part_power": 18,
  "account_replicas": 3,
  "account_min_part_hours": 1,
  "container_part_power": 18,
  "container_replicas": 3,
  "container_min_part_hours": 1,
  "object_part_power": 18,
  "object_replicas": 3,
  "object_min_part_hours": 1
},

Most of these settings you’re unlikely to change once your cluster is up and running. Of note here: “zone_config_order” is unique to Voxel’s system, and controls the order in which configuration changes are applied. Configuration changes can be quite costly to the system – they can really gum up IO – so it’s important to take a phased approach to applying ring configuration changes. See the OpenStack Swift documentation for the “part_power”, “replicas” and “min_part_hours” settings.
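For a sense of scale, Swift’s ring maps objects onto 2^part_power partitions, and each partition is stored on “replicas” devices. Standard ring math, computed here for the values in the stanza above:

```python
# Values from the ring_common stanza above.
part_power = 18
replicas = 3

# Swift hashes each object to one of 2**part_power partitions;
# each partition is assigned to `replicas` distinct devices/zones.
partitions = 2 ** part_power
print(partitions)             # 262144 partitions per ring
print(partitions * replicas)  # 786432 partition-replica assignments
```

Because part_power is fixed at ring creation, it is worth sizing it for the cluster you expect to grow into, not the one you start with.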

The second major part of the data bag file gives the details of each of the three rings. We’ll take the “account” ring as our example.

"rings": [
  { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "1", "hostname": "storage01", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
  { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "2", "hostname": "storage02", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
  { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "3", "hostname": "storage03", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
  { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "4", "hostname": "storage04", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },
  { "status": "online", "ring_type": "account", "cluster": "testcluster", "zone": "5", "hostname": "storage05", "port": 6002, "device": "xvda4", "weight": 100, "meta": "install" },

If you’re familiar with the swift-ring-builder tool, you’ll notice all the familiar parameters – but I’ve added two: “status” and “cluster.”

“status” lets you take a config line out of service without abandoning it. You’d use this if you need to do serious work on a piece of hardware or network segment and it will be unreliable for a while. “status”: “offline” allows you to work on the configured item without the ring counting the data on it as a replica. Once you set it back to “status”: “online”, Swift will begin to replicate to it again and the proxies will access it for data.

“cluster” indicates the name of your cluster, so Chef can deploy and manage multiple clusters without the ring config information getting confused.

Note also that I set the “meta” setting to “install,” which you should feel free to change as you modify your cluster and manage it with Chef. For example, you might want to add a new storage node and device once the cluster is up, and you would flag that with a new “meta” value.
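To illustrate how “status” and “cluster” might be consumed (a hypothetical sketch, not Voxel’s actual cookbook code; the function name and command construction are illustrative), a recipe could filter the data bag entries before emitting swift-ring-builder add commands, so offline devices stay configured but are skipped on the next rebuild:

```python
# Hypothetical sketch of consuming the data bag's "rings" entries.
# The dict keys match the data bag format above; everything else is
# illustrative, not Voxel's actual cookbook code.

def ring_builder_commands(rings, cluster):
    """Emit swift-ring-builder 'add' commands for online devices in one cluster."""
    cmds = []
    for dev in rings:
        if dev["cluster"] != cluster or dev["status"] != "online":
            continue  # offline devices stay in the config but are skipped
        builder = "%s.builder" % dev["ring_type"]
        cmds.append(
            "swift-ring-builder %s add z%s-%s:%s/%s %s"
            % (builder, dev["zone"], dev["hostname"], dev["port"],
               dev["device"], dev["weight"])
        )
    return cmds

rings = [
    {"status": "online", "ring_type": "account", "cluster": "testcluster",
     "zone": "1", "hostname": "storage01", "port": 6002,
     "device": "xvda4", "weight": 100, "meta": "install"},
    {"status": "offline", "ring_type": "account", "cluster": "testcluster",
     "zone": "2", "hostname": "storage02", "port": 6002,
     "device": "xvda4", "weight": 100, "meta": "install"},
]
for cmd in ring_builder_commands(rings, "testcluster"):
    print(cmd)  # only the zone-1 online device produces a command
```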

And on to Victory

This concludes our review of the architecture and the “manifest” concept.  Coming up in July, Part 3 will review just how our Chef cookbooks make use of the “manifest” and integrate with some of our local systems – making servicing our customers a breeze!

So long, and thanks for reading!

Jul 6, 2011

Internap Featured on American Public Media

INAP

Internap was featured in a story on American Public Media™ Marketplace® on the increasing power demands in data centers resulting from cloud computing.

You can hear Mike Higgins, our Senior Vice President for Data Center Services, share his insights on just one of the many ways we are helping the environment: lowering power utilization and costs in the ever-expanding data center space.

Marketplace, Marketplace Morning Report®, and Marketplace Money® are heard by an audience of more than 9.1 million unique listeners in the course of a week on 486 public radio stations nationwide! As such, we were honored by the opportunity to discuss such an important topic with the team at American Public Media.

Learn more about Internap’s thinking on future data center design.

Jul 5, 2011

Internap Wins Stevie Award for Best Customer Service Department

INAP

Congratulations to Internap’s Customer Support employees and ‘thank you’ to our customers!

Internap won a Stevie Award for ‘Best Customer Service Department of the Year’ last week at the 2011 American Business Awards. The award recognizes Internap’s excellence in providing outstanding network performance and customer support.

The Stevie Awards honor the achievements and positive contributions of exemplary organizations and business people around the globe, and rank among the world’s most coveted business awards.

Rather than rely on outsourced call centers, Internap engineers answer all customer calls 24x7x365 in its own network operations centers (NOC) – dramatically reducing resolution time. Rapid reaction times combined with proactive communication from an experienced customer service team help make Internap’s support organization world-class. Additionally, in 2010, Internap’s dedicated customer support efforts yielded these outstanding customer benefits:

  • Customers gave an average rating of 4.8 following resolution of open service tickets (based on a scale of 1 – 5, with 5 being “extremely satisfied”).
  • Internap’s Net Promoter Score, a general business customer loyalty metric, rose 20 percentage points in the last year (Net Promoter Scores are measured against a company’s own increase/decrease over time versus against competitor metrics).
  • Internap’s NOC had a 2:1 ratio of outbound to inbound calls, highlighting its proactive customer support.
  • In a testament to the outstanding service of Internap’s support team, out of the hundreds of incoming customer service calls per month:
      • All calls were answered within 10 seconds, on average.
      • The caller abandon rate was an astonishingly low 0.01%.
      • 95% of the time, the Internap NOC engineer answering a call was the same one who resolved the issue, underscoring the deep skill and expertise of the department.

More than 200 executives across the country participated in the judging process to determine the Finalists and Stevie Award winners. More than 2,800 entries from organizations of all sizes and in virtually every industry were submitted for consideration.

Jul 1, 2011

Kindergarten, IT, Navigating “the Cloud,” and Candy

INAP

InfoWorld published a great article highlighting “The 10 worst cloud outages (and what we can learn from them).”

Reading the article, I’m reminded of the book All I Really Need to Know I Learned in Kindergarten by Robert Fulghum. I am sure there is a version designed just for IT and the technology industry, but I haven’t found it. Maybe this will be the start of a lucrative e-book enterprise.

Or not.

Lesson 1 – Technology is never perfect.

Nor were your toys. Remember the cars that would fly off the track of your race car set when they went too fast?

Like everything else in IT, “the Cloud” is inherently imperfect. We expect cloud computing platforms to work flawlessly and never go down. Ironically, while we would all acknowledge that the complexities of a public cloud computing solution are much greater than those of a traditional dedicated solution, we still expect flawless reliability from that cloud platform. Cloud computing is built on much of the same standardized hardware, network and infrastructure components that companies use in their own data center environments, complemented by evolving software to support multitenant environments. These systems can and will fail; that is their inherent imperfection.

The good news? The technology industry is full of examples of learning from mistakes to make better products and services more quickly than ever. Cloud computing will be no exception.

Lesson 2 – Two is always better than one.

Remember Doublemint® gum where there were always two of everything? Just like gum or candy, it’s always better when you have more.

Redundancy will save the day. Just as you build redundant elements into your technology and network infrastructure, cloud providers will continue to build redundancy into theirs. Cloud computing’s perception, value and adoption will rise the closer providers come to “perfect redundancy,” which addresses all single points of failure that can arise from hardware, software, network and human errors. The perception of “flawless” cloud computing will be driven by providers’ ability to attain near-perfect redundancy across those elements.

Lesson 3 – Always have a backup plan.

As a kid, having a backup plan was usually about being sneaky. In IT, it’s about being smart.

Cloud computing should simply be considered a flexible platform option for how your applications are delivered. It doesn’t change who is ultimately responsible for supporting the business.

Backup, Disaster Recovery and Business Continuity plans are no less critical when deploying applications in a cloud environment. The businesses that weathered the outages discussed in the article were those that had plans to address this eventuality.

Lesson 4 – When all else fails, don’t lose the data!

Ever lose a $20.00 bill as a kid? You get the same feeling when you lose data.

Data protection and redundancy go hand in hand. Redundancy isn’t just about applications placed in the cloud computing arena but also for the data these applications generate. Businesses must still deploy a multi-layered data protection regimen to reduce the risk of loss. Cloud storage can be a compelling aspect of this regimen in addition to a combination of on-premise storage mediums such as SAN, NAS, and tape systems.

Conclusion

As a cloud computing provider, we’re learning and evolving too. Vendors are learning to create systems and processes that deliver the reliability businesses really need, and businesses are learning to rely once again on proven, smart approaches that meet their application needs while protecting them from any single point of failure. The intersection of both paths will certainly yield the true vision and promise of cloud computing.
