October 2015
OpenStack Ironic brings us closer to full data center automation
With the addition of Ironic to the list of OpenStack projects finding their way into data centers around the world, the dream of having the entire data center automated and orchestrated by a single entity is becoming possible. OpenStack, with the help of Nova, Cinder, Glance, Swift and others, offers a strong, stable and flexible virtual platform, but data centers also need to provide physical machines to customers.
This important aspect was simply not addressed by OpenStack until recently. After the first iteration with the Nova bare metal project, developers quickly realized that an undertaking of this magnitude needed something more than an extension of Nova. Hence, the creation of a full project, aptly named Ironic, to deliver the opposite of what OpenStack was originally meant to do – deliver physical, not virtual, machines.
OpenStack gets physical
There are plenty of reasons for a customer to request a physical machine. High-performance computing (HPC) comes to mind, where crunching numbers as fast as possible can sometimes be incompatible with virtualization and its many layers of abstraction. But now, there is another reason for this renewed interest in running a machine as close to the metal as possible: the containers (r)evolution.
Containers are a new way of consuming computing resources. They are smaller than classic virtual machines and do not require all the disk space, complex file tree, configuration and computing overhead of a full-fledged operating system. This is a software developer’s dream: much of the complexity drops out of the equation, leaving behind nothing but the beautifully written poetry of their code.
Containers are not necessarily incompatible with the previous paradigm of classic virtual machines, but VMs are becoming more of an unnecessary burden. In complex systems where the KISS rule dominates almost all others, we tend to throw away the unnecessary pretty quickly.
Bringing bare metal to OpenStack
A perfect match for this VM-less container would be a good old bare-metal server, but since OpenStack has been all about virtualization since its inception, there has never been much interest in offering bare-metal orchestration to the customer. The work that has been done (TripleO, for example) was directed towards the undercloud, or the initial provisioning of the cloud (after all, it may be called the cloud, but the reality is that it never really left the metal below).
There is a big difference between automating the provisioning of an undercloud and providing customers with these same servers. The customer servers need to reside in isolated tenant networks, which means we’re not just automating server installation, but automatically configuring networking equipment as well.
Although this last part is not yet part of OpenStack, the Internap engineering team managed to get ahead of the curve and provide just that – tenant network isolation delivered to the customer in a pure physical environment. The Internap team will now work in collaboration with the OpenStack Ironic team to share their experience and help bring OpenStack one step closer to a fully automated data center solution.
Will containers become a driving force for OpenStack to accelerate its bare-metal capabilities in the near future? Absolutely. Are containers the only reason to offer bare-metal servers to customers? Not at all, but from the point of view of a service provider, once the server is delivered to the customer, the actual usage of the machine is not relevant…it’s all about how it got there.
Watch the presentation from the OpenStack Summit Tokyo to learn more about OpenStack Ironic.
While the cloud is still the go-to infrastructure solution for flexibility and scale, today’s applications and systems often require higher performance than traditional virtualized cloud can provide.
Compute-intensive workloads, such as those focused on analytics and big data, typically need high disk I/O, high throughput and extremely low latency to perform efficiently. But the need for increased performance and scale can push the limits of virtual cloud environments, resulting in performance degradation and volatility.
The value of bare-metal IaaS
To address these concerns in a cost-efficient manner, bare-metal Infrastructure-as-a-Service (IaaS) has emerged as a non-virtualized alternative. Bare-metal IaaS provides the performance of dedicated servers with the automation and on-demand capabilities of cloud services.
Since 2012, Internap customers with performance-centric workloads – such as eXelate, Taptica, Distil Networks, CrowdStar and others – have built global, scale-out application environments using AgileSERVER, our automated bare-metal IaaS, as part of their infrastructure.
AgileSERVER on OpenStack
Today, our bare-metal AgileSERVER got even better. We introduced the general availability of AgileSERVER powered by OpenStack. In keeping with our commitment to high performance, this new version of our bare-metal IaaS solution includes significant compute, networking, management, and hybridization capabilities.
Compute
The new version of AgileSERVER gives Internap customers access to PCI Express NVM storage, which operates at the processor level and removes the bottleneck of a SATA/SAS controller. This translates into extremely high IOPS and read/write speeds, which provide significant performance benefits for big data and analytics use cases. It can also reduce costs, since fewer servers are required, and it improves developer productivity.
In addition to the latest generation of storage technology, AgileSERVER 2.0 offers a wide range of memory options, up to 1TB of RAM, to meet high-performance needs.
Networking
Advanced networking features offer deployment flexibility and enhanced network security. With access to 10 VLANs per account, companies that must adhere to strict compliance requirements, such as PCI DSS, can keep sensitive information separate from servers on other VLANs. A payment processor, for example, would likely require segmentation of the cardholder data environment to meet PCI security requirements.
For increased redundancy, NIC bonding supports separation of customer and management networks and removes single points of failure. This allows for seamless failover, avoiding downtime and reducing the impact on application performance.
Each account includes 10TB of Universal Transfer bandwidth per month for IP and CDN traffic, making it easier to manage costs. Because bandwidth is included in the cost of the hardware, it can be shared across environments.
Management
Built on the native OpenStack API, AgileSERVER provides access to the entire OpenStack ecosystem of tools. You can manage bare-metal instances through the OpenStack API, CLI or portal (including the Horizon dashboard), and orchestrate entire stack deployments via Heat and Glance for autoscaling and image management. Provision bare-metal servers in minutes and take advantage of new term-based and usage-based billing down to the second, which lets you meet changing capacity needs on demand in a cost-effective manner.
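As a rough illustration of what API-driven provisioning looks like, here is a minimal sketch using the Python openstacksdk. It follows the standard Nova workflow; the cloud, image, flavor and network names are hypothetical placeholders, and the exact bare-metal flavors a provider exposes will vary.

import openstack

# Credentials are read from clouds.yaml or the OS_* environment variables.
conn = openstack.connect(cloud="envvars")

# Hypothetical names -- substitute the image, flavor and network your provider exposes.
image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("baremetal.large")
network = conn.network.find_network("tenant-net")

# Request the server and wait until it reaches ACTIVE.
server = conn.compute.create_server(
    name="bare-metal-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)

The same request can also be expressed as a resource in a Heat template, which is how full-stack orchestration and autoscaling are typically wired together.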
Hybridization
Create a best-fit environment to meet the unique needs of your applications. Through a common portal, you can manage Internap’s OpenStack-based AgileCLOUD virtual cloud instances, as well as colocation, managed hosting, or on-premise infrastructure. DevOps teams migrating different types of applications to the cloud can mix and match different types of environments to ensure optimal performance without the need for creative, manual networking.
Ultimately, bare-metal AgileSERVER gives our customers better performance at a lower price. The ability to hybridize your infrastructure using OpenStack management tools represents a milestone for DevOps teams that need to build high-performance environments.
Organizations are no longer asking whether they should migrate to the cloud. Instead, the conversation now centers on what to move and how to move it. But building a best-fit environment for today’s applications requires facets of different infrastructure types; in other words, a hybrid approach.
The benefits of hybridization can be overshadowed by the complexity of the migration task. Deciding what to migrate and how to do it is a complicated process. Let’s take a look at a few things to consider before starting to migrate your applications and workloads to a hybrid infrastructure.
Considerations for hybrid cloud migration
Different applications and workloads will have different requirements, but ultimately, your goal is to optimize the infrastructure for the workload. Let’s look at some general examples of different types of applications and factors you should consider before migrating.
Cloud-native applications, dev/test/staging environments
Designed for scale or temporary usage, these types of applications are naturally suited for the public cloud, allowing you to take immediate advantage of automation and rapid-deployment features.
eCommerce/SaaS/Internet-based, revenue-generating applications
Parts of these applications may be suited for hosting, private cloud and public cloud.
Consider the application architecture carefully in order to fully realize the cost and operational efficiencies of the cloud.
Data-driven applications
Analytics, business intelligence and data storage all require additional considerations around access, compliance and growth to determine whether hosting, private or public cloud is a fit.
Enterprise and general business
These systems are often designed around legacy architectures and may have redundancy or high-availability requirements. Hosting or private cloud is typically the best option in this case, but vendor support may be required.
Hybrid infrastructure is a means to an end, and establishing goals for your cloud migration is the most important step.
For a more in-depth discussion of what to consider when moving to the cloud, learn why many companies are moving from public cloud to bare metal.
OpenStack® is one of many cloud platforms that provide elasticity and on-demand usage. But one reason OpenStack is particularly compelling is the high level of flexibility and customization offered by open source cloud solutions.
Customization and extendability
OpenStack cloud offers access to APIs and source code, along with portability and extendability, which enable you to modify the platform to meet the needs of your use case. For example, if you require a high-performance computing cluster with extreme parallelization, you can extend and modify OpenStack and implement it in a way that meets those specific needs. The same goes for web-scale or general business applications: you can build and implement OpenStack in whatever way best suits your requirements.
Flexible implementation
OpenStack offers more flexibility than commercial cloud platforms because it allows you to implement a subset of functionality based on your requirements. This means if you don’t need block storage or the big data service, you don’t have to implement it. In addition, the interoperability offered by OpenStack allows you to move workloads between public and private clouds and even between different providers.
The OpenStack ecosystem
When OpenStack celebrated its fifth birthday back in June 2015, the project was backed by more than 20,000 members across 500 companies. Being part of such a large and dedicated global community brings access to extensive documentation and support that simply isn’t available from other types of cloud solutions.
When comparing open source cloud computing products, the choice ultimately comes down to your technical preference and which one best meets the needs of your use case. The ability to customize your OpenStack implementation is an advantage over commercial platforms that simply hand you a reference architecture and a list of best practices, and then instruct you to implement their solution in a particular way.
Recently, the American Registry for Internet Numbers (ARIN) announced the exhaustion of its free IPv4 address pool. Below is a collection of news articles to keep you updated on this topic.
North America Just Ran Out of Old-School Internet Addresses
The Internet is rapidly running out of the most commonly used type of IP address, known as IPv4. ARIN announced it has run out of freely available IPv4 addresses. While this won’t affect normal Internet users, it will put more pressure on Internet service providers, software companies, and large organizations to accelerate their migration to IPv4’s successor, IPv6. Read entire article.
North America’s IPv4 address supply runs dry
The long-predicted exhaustion of IPv4 addresses has now taken place in North America, with the region’s registry left with no further supply of the 32-bit addresses to issue. In the early days of the internet, the 4.3 billion possible IPv4 addresses appeared adequate. But as early as 1995 the Internet Engineering Task Force (IETF) had named IPv6 as the successor protocol, and people have been warning of the consequences of impending IPv4 address exhaustion for years. Read entire article.
No more IPv4, now what?
After several false scares and years of warnings, it has finally happened: the Internet has run out of IPv4 addresses. Now that all IPv4 addresses have been allocated, anyone looking for new address space must either buy it from someone else or adopt IPv6. Some businesses have decided to purchase more IPv4 addresses on the secondary market instead of switching; in most cases, addresses can be bought for around $15 each. Buying more IPv4 addresses, however, can only delay the inevitable. Read entire article.
ARIN Issues the Final IPv4 Addresses in its Free Pool, Forcing Shift to IPv6
At 128 bits, IPv6 has a much larger address space than the current standard, IPv4, which is facing the threat of address exhaustion because of its small size. IPv6 provides more than 340 trillion, trillion, trillion addresses, compared to the four billion IP addresses that are available with IPv4. IPv6 also provides more flexibility in allocating addresses and routing traffic, eliminating the need for network address translation. Read entire article.
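For reference, the address-space figures quoted above follow directly from the address widths; a quick Python check:

# IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits wide.
ipv4_total = 2 ** 32     # 4,294,967,296 -- roughly 4.3 billion
ipv6_total = 2 ** 128    # about 3.4 x 10^38

print(f"IPv4 address space: {ipv4_total:,}")
print(f"IPv6 address space: {ipv6_total:,}")
print(f"IPv6 offers 2**96 = {ipv6_total // ipv4_total:,} times more addresses")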
As part of our commitment to high-performance infrastructure, we’re pleased to announce the availability of PCI Express Non-Volatile Memory (NVM) storage for AgileSERVER.
NVM is the next generation of SSD storage technology, and NVM Express (NVMe) is designed specifically to unlock the benefits of NVM. The NVMe interface takes advantage of a low-latency path to the processor, removing the bottlenecks of the legacy AHCI interface. As a result, the total throughput of the card is much greater than what a traditional SSD or SATA/SAS HDD can achieve, even the 12Gb/s drives.
What it means: PCI Express NVM Storage
Many of our customers with data-intensive workloads require the latest and most powerful compute and storage technologies for optimal performance. This is especially true for organizations with big data and analytics use cases that demand speed, where the increased disk I/O and reduced latency offered by NVM can have a very favorable impact. One customer using our NVM storage configurations is generating more than 1.2 million IOPS per server today. Other benefits offered by NVM include:
- Increased throughput speed provides better performance with fewer servers and can lower your operating costs.
- Faster creation of virtual machines (VMs), as well as faster execution of workloads within those VMs, makes developers more productive.
- Significantly higher IOPS can be achieved as a result of reduced latency, resulting in better performance and cost savings.
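To put the 1.2 million IOPS figure above in perspective, here is a quick back-of-envelope conversion into throughput; the 4KB I/O size is an illustrative assumption, not a measured value:

# Convert an IOPS figure into approximate throughput.
iops = 1200000
io_size_bytes = 4 * 1024  # assumed 4KB operations

throughput_gb_per_sec = iops * io_size_bytes / 1e9
print(f"~{throughput_gb_per_sec:.1f} GB/s")  # roughly 4.9 GB/s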
The PCI Express NVM Storage Cards (Intel P3608 Series) are now available in all Internap Agile regions.
Contact us to learn more, or existing Internap customers can log in to the portal.