What Not to Bring into the Cloud

As enterprises and service providers look to offer on-demand computing resources and manage large fleets of virtual machines, many companies are turning to the OpenStack cloud computing platform for the flexibility to design their own cloud. OpenStack provides that flexibility without requiring proprietary hardware or software, and it can integrate with legacy systems and third-party technologies.

OpenStack is a free and open-source cloud computing platform that’s primarily deployed as an IaaS (infrastructure as a service) solution to control processing, storage, and networking resources throughout a data center. It can be used for processing big data with tools like Hadoop, for scaling computing to meet demand, and for high-performance computing (HPC) environments that handle diverse and intensive workloads.

We recently deployed a software system for a client on OpenStack to study the feasibility of running it there. The system was a stack of six instances and four networks described by an OpenStack Heat template. It sounded easy enough. But here is the twist: since a non-virtualized version of the system would continue to exist, it was decided that we would deploy an unmodified version to OpenStack, to keep the differences between the two types of deployment to a minimum. Before long, parts of the system started to feel like legacy components being bent to fit a new environment. The system is under active development, but many important design choices were made long before anyone was concerned about virtualization and clouds. Here are a few points that illustrate this.
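
For a sense of what such a deployment looks like, here is a minimal sketch of a Heat Orchestration Template (HOT) with a single network and instance, driven from the shell. The resource names, image, and flavor are illustrative placeholders, not the client’s actual template:

    # Write a minimal HOT template: one network, one subnet, one server.
    cat > minimal-stack.yaml <<'EOF'
    heat_template_version: 2013-05-23
    resources:
      net0:
        type: OS::Neutron::Net
      subnet0:
        type: OS::Neutron::Subnet
        properties:
          network_id: { get_resource: net0 }
          cidr: 10.0.0.0/24
      server0:
        type: OS::Nova::Server
        properties:
          image: uncommon-linux-image   # placeholder image name
          flavor: m1.small
          networks:
            - network: { get_resource: net0 }
    EOF
    # Deploy a stack from the template.
    heat stack-create -f minimal-stack.yaml demo-stack

The real template simply declared more of the same resource types: six servers and four networks.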

The Uncommon Linux Distribution

In our case, the guest OS had to be an uncommon Linux distribution over which we had no control, and it wasn’t cloud-ready like most mainstream Linux distributions. Because of that, our guest OSes had to run without Virtio’s paravirtualization drivers (paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware). In theory, this is not a problem, because OpenStack offers what you need to live without paravirtualization. For example, with some effort, you can live without Virtio disks by using emulated IDE disks. But judging by the number of issues we had to deal with while avoiding Virtio, this is not a well-traveled road, and it took extra effort; we had to dig through OpenStack’s issue tracker to find workarounds. Another issue with our guest OS was its serial console. The serial console is what OpenStack offers so you can log in to an instance without depending on network connectivity to it. Our guest’s serial console wasn’t exactly what OpenStack expected, so it took some more effort to get it working, whereas it worked on the first attempt with mainstream Linux distributions.
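
As a sketch of how one can steer Nova away from Virtio, the standard image metadata properties let you request emulated devices; the image name below is a placeholder:

    # Ask Nova to emulate an IDE disk bus and an e1000 NIC instead of
    # Virtio devices for instances booted from this image. The property
    # names are standard Nova image metadata; the image name is made up.
    glance image-update \
        --property hw_disk_bus=ide \
        --property hw_cdrom_bus=ide \
        --property hw_vif_model=e1000 \
        uncommon-linux-image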

The Forbidden DHCP Server

One of the system’s instances provided a DHCP server, which wasn’t working at first. After some traffic capture and analysis, we quickly saw that OpenStack was dropping the DHCP responses. This is by design, since a DHCP server running inside an instance could interfere with some of OpenStack’s fundamental gears. A quick Internet search shows that there is one workaround: you can change a kernel setting to disable that packet filtering, but it has a few side effects. It disables all of OpenStack’s security group filtering and all Linux bridge filtering on the host, and nobody can say what other side effects will appear in the mid to long term. It wasn’t a perfect workaround, but it let us run on an unpatched OpenStack and explore the feasibility of the project further.
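
For the record, the kernel setting in question is the bridge netfilter hook; assuming Linux bridge-based networking on the compute host, the workaround looks like this, with all the caveats above:

    # On the compute host: stop iptables from filtering bridged traffic.
    # Caution: OpenStack security groups are implemented as iptables
    # rules on these bridges, so this disables them on the whole host.
    sysctl -w net.bridge.bridge-nf-call-iptables=0
    # Persist across reboots (same side effects apply):
    echo 'net.bridge.bridge-nf-call-iptables = 0' >> /etc/sysctl.conf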

The Link Aggregation Emulation

One of the things the system needed in order to work was link aggregation. Without it, some configuration logic within the instances would get confused. It is more accurate to say we needed link aggregation emulation, since all the virtual aggregated links will most likely end up on the same physical cable anyway, negating any advantage link aggregation provides. There is no reason to design this into a cloud system, and it made a few tools unhappy when we tried to create two NICs on the same network for the same instance: “nova boot” and “heat stack-create” refused to do it, while “nova interface-attach” was fine.
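
A sketch of the attach-after-boot workaround; the network ID, image, flavor, and server name are placeholders:

    # Boot the instance with the first NIC; asking "nova boot" for two
    # NICs on the same network was refused.
    nova boot --image uncommon-linux-image --flavor m1.small \
        --nic net-id=$NET_ID bond-server
    # Attach the second NIC on the same network after boot; this path
    # accepted the duplicate network without complaint.
    nova interface-attach --net-id $NET_ID bond-server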

Post Heat Deployment Steps

The deployment was encoded in a Heat template. In the end, we were left with two things that could not be deployed from within that template: an IDE volume (not a Virtio one, as explained earlier) and the ports for the link aggregation (because “heat stack-create” didn’t support them). Although these were pretty simple steps for anybody comfortable with the OpenStack client apps (nova, neutron, cinder …), they added more detail to a deployment procedure that was already lengthy enough.
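
A sketch of those two manual steps, with placeholder names and IDs:

    # 1. Create a volume and attach it to the instance (the device name
    #    is advisory; with the IDE image properties set earlier, the
    #    guest sees it as an IDE disk rather than a Virtio one).
    cinder create --display-name data-vol 10
    nova volume-attach bond-server <volume-id> /dev/hdb
    # 2. Create the extra port for the link aggregation and attach it,
    #    since "heat stack-create" would not create it for us.
    neutron port-create $NET_ID --name lag-port
    nova interface-attach --port-id <port-id> bond-server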

The takeaway: when you move a software system to the cloud, carrying over infrastructure details whose designs predate virtualization adds another source of risk. Whether you can afford to break the dependencies on those infrastructure details is, of course, a topic for another discussion.