OpenStack Simply Explained

Northforge has created a three-part blog series that looks at some of the most interesting and up-to-date information available on the internet regarding the OpenStack software platform.

OpenStack is a collection of open source projects that provide cloud management of network and computing resources. It is backed by a significant number of dominant players in the high-tech industry, such as IBM, HP, Dell, Cisco and AT&T, to name a few.

OpenStack Overview

OpenStack is an open source platform that lets you build an Infrastructure as a Service (IaaS) cloud that runs on commodity hardware and scales massively. [1] The main components of OpenStack are:

OpenStack Platform


      • Object Store (codenamed “Swift”) provides object storage. It allows you to store or retrieve files (but not mount directories like a fileserver).
      • Image (codenamed “Glance”) provides a catalog and repository for virtual disk images. These disk images are most commonly used in OpenStack Compute.
      • Compute (codenamed “Nova”) provides virtual servers upon demand.
      • Dashboard (codenamed “Horizon”) provides a modular web-based user interface for all the OpenStack services. With this web GUI, you can perform most operations on your cloud like launching an instance, assigning IP addresses and setting access controls.
      • Identity (codenamed “Keystone”) provides authentication and authorization for all the OpenStack services. It also provides a service catalog of services within a particular OpenStack cloud.
      • Network (codenamed “Neutron”, known as “Quantum” in earlier releases) provides “network connectivity as a service” between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them.
      • Block Storage (codenamed “Cinder”) provides persistent block storage to guest VMs. [3]


OpenStack Hardware Scaling

Applications that have performance models based on hardware requirements must consider how OpenStack manages CPUs, memory and storage in order to maintain their performance benchmarks in the cloud. The information in this section was collected from the sources listed at the end of this post.

An OpenStack virtual machine is created from a virtual hardware template called a flavor. The flavor defines the number of virtualized CPU cores (each a portion of a physical CPU core assigned to a virtual machine), the amount of memory, and the storage used by the virtual machine. The default OpenStack set of flavors is:

OpenStack Flavors


Additional flavors can be created to match the recommended hardware requirements of an application. For an existing application, in the near term, a flavor can be specified that matches the performance of the dedicated hardware previously used. In the future, an application’s engineering rules can be modified to incorporate smaller units of compute resources, allowing operators to choose OpenStack flavor sizes at a finer granularity. It should be possible to fit an equation to the performance curve so that the size of the OpenStack flavor can be left up to the operator. In either case, an agent could be developed to validate the resource allocation (CPU, RAM, storage, disk swapping, etc.) incorporated into the application.
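As an illustration of matching an application to a flavor, the sketch below selects the smallest flavor that satisfies an application's hardware requirements. The flavor specs shown approximate the historical OpenStack defaults and are illustrative only; actual values vary between deployments.

```python
# Default-style OpenStack flavors: (vCPUs, RAM in MB, root disk in GB).
# Values are illustrative; check your deployment's actual flavor list.
FLAVORS = {
    "m1.tiny":   (1, 512, 1),
    "m1.small":  (1, 2048, 20),
    "m1.medium": (2, 4096, 40),
    "m1.large":  (4, 8192, 80),
    "m1.xlarge": (8, 16384, 160),
}

def smallest_matching_flavor(vcpus, ram_mb, disk_gb):
    """Return the smallest flavor that meets the application's requirements."""
    candidates = [(spec, name) for name, spec in FLAVORS.items()
                  if spec[0] >= vcpus and spec[1] >= ram_mb and spec[2] >= disk_gb]
    # Sorting by the spec tuple picks the smallest adequate flavor.
    return min(candidates)[1] if candidates else None

# An application engineered for 2 cores, 4 GB RAM and 30 GB of disk:
print(smallest_matching_flavor(2, 4096, 30))  # -> m1.medium
```

A validation agent like the one described above could run this kind of check at deployment time to confirm the chosen flavor actually covers the application's engineering rules.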

Non-persistent (ephemeral) storage is ordinary disk space, but if the VM is terminated for any reason that disk space is lost. Care must be taken to ensure persistent data is kept in permanent storage; in OpenStack, persistent block storage is provided by Cinder.

Virtual Instances

Virtual instances are hosted on an OpenStack compute node. Ideally, all virtual instances will be the same flavor if the compute node is dedicated to supporting a single service.

The scheduler will create virtual instances on a compute node until either the virtual CPU core limit or the virtual memory limit is reached on that node. The default CPU overcommit ratio (increasing the number of virtual CPU cores on a compute node at the cost of reduced performance) is 16 virtual cores to 1 physical core, but for CPU-intensive applications a ratio of 4:1 or 2:1 is more applicable, and if warranted it can be set to 1:1. Increasing the number of virtual instances adversely affects the performance of each instance. For an existing application that provides engineering rules with precise performance modeling, a ratio of 1:1 can be mandated for deterministic behavior.

The formula for the maximum number of virtual instances on a compute node is:

(CPU overcommit ratio (virtual cores per physical core) × # of physical cores) ÷ (# of virtual cores per instance)
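As a worked example of the formula above (the node specs are hypothetical):

```python
def max_instances(overcommit_ratio, physical_cores, vcpus_per_instance):
    """Maximum instances a compute node can host, by the vCPU limit alone.

    Implements: (overcommit ratio * physical cores) / (vCPUs per instance).
    """
    return int((overcommit_ratio * physical_cores) // vcpus_per_instance)

# A hypothetical 16-core compute node hosting 4-vCPU instances:
print(max_instances(16, 16, 4))  # default 16:1 overcommit -> 64 instances
print(max_instances(2, 16, 4))   # conservative 2:1 -> 8 instances
print(max_instances(1, 16, 4))   # deterministic 1:1 -> 4 instances
```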

Memory can also be overcommitted. The default memory overcommit ratio is 2:1, but overcommitting memory will also adversely affect instance performance, so do so only after testing your particular use case. As with CPU allocation, a ratio of 1:1 can be mandated for deterministic behavior where warranted.
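Since the scheduler stops placing instances when either the vCPU or the memory limit is reached, a node's effective capacity is the minimum of the two limits. A minimal sketch, with hypothetical node and flavor sizes:

```python
def node_capacity(physical_cores, cpu_ratio, ram_mb, ram_ratio,
                  flavor_vcpus, flavor_ram_mb):
    """Instances per compute node, bounded by both CPU and RAM overcommit."""
    cpu_limit = (cpu_ratio * physical_cores) // flavor_vcpus
    ram_limit = (ram_ratio * ram_mb) // flavor_ram_mb
    return int(min(cpu_limit, ram_limit))

# A 16-core / 64 GB node running a 4 vCPU / 8 GB flavor with the default
# ratios (16:1 CPU, 2:1 RAM) is RAM-bound at 16 instances, not CPU-bound at 64:
print(node_capacity(16, 16, 65536, 2, 4, 8192))  # -> 16
```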

The RAM and CPU overcommit ratios can be set to 1:1 initially, so that a virtual instance (with a flavor that matches the application’s hardware requirements) is the equivalent of, say, a server blade that satisfies the current engineering rules for the application. It is recommended to test slightly higher overcommit ratios to see whether the system can scale to support more virtual instances while still satisfying the application’s performance criteria.

Additional compute nodes can be added easily to OpenStack when it is necessary to scale out horizontally. Ideally, each new compute node should be at least as powerful as the existing ones so that instance capacity increases linearly. Over time, compute hardware becomes more powerful and more virtual instances can be supported; however, care must be taken to ensure application performance is not adversely affected.

The most popular hypervisor used with OpenStack is KVM, though LXC, QEMU, UML, VMware vSphere, Xen, PowerVM, Hyper-V and bare metal are also supported. New OpenStack features are tested and implemented on KVM first, and KVM gets the most support on OpenStack forums. It is best suited to Linux guests. [6]

The physical location of VMs may also affect performance; consider the case where two VMs need to transfer a large amount of data between themselves to accomplish a task. The impact of physical VM location on performance can be summarized as follows:

OpenStack VM Location

Data Storage

Data storage options available to a virtual instance in OpenStack are:

  • Local file system (managed by Nova) on the compute node
  • Block storage volumes (Cinder) on the block storage node
  • Software Image repository (Glance) on an image server

A common bottleneck with virtual machines is local disk I/O through the hypervisor, since all virtual instances have their local file systems on the compute node’s local hard disk. The problem can be mitigated by using an SSD with a high IOPS (I/O operations per second) rate on the compute node and by migrating critical data to a block storage node.
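To see why local disk I/O becomes the bottleneck, one can budget a node's IOPS against the instances it hosts. The figures below are purely hypothetical workload numbers:

```python
def iops_headroom(disk_iops, instances, iops_per_instance):
    """Remaining IOPS on a compute node's local disk; negative means saturated."""
    return disk_iops - instances * iops_per_instance

# A spinning disk (~200 IOPS) vs. an SSD (~50,000 IOPS), each hosting
# 20 instances that average 150 IOPS apiece (hypothetical workload):
print(iops_headroom(200, 20, 150))    # -> -2800: badly oversubscribed
print(iops_headroom(50000, 20, 150))  # -> 47000: comfortable headroom
```

The same arithmetic applies to a dedicated block storage node, which is why the text below recommends it have a high IOPS rate.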

If the compute node goes down and the applications running on its virtual instances must be recovered in their current state, then critical data must be stored outside of the virtual instance on an external block storage node. The block storage is attached to the virtual machine via iSCSI. Having a separate storage node also allows block storage to be scaled and made redundant independently of the compute node. The block storage node should have a high IOPS rate and be reliably accessible over the network.

The Glance component can store snapshots of a virtual instance, which can be used to back up and quickly restore a virtual instance with an application.

A database, such as Oracle RAC, is treated as an external entity to the OpenStack environment and is accessed through standard approaches such as JDBC and ODBC. In the longer term, the database server could be run in a virtual instance.

Part 2 of this blog series will gather some of the most interesting and up-to-date information available on the internet regarding OpenStack performance metrics.

The information for this blog was collected from the following sources:
http://www.infoworld.com/d/cloud-computing – Infoworld’s CloudComputing site.
http://www.openstack.org/ – OpenStack organization’s web site
http://www.mirantis.com/ – Mirantis OpenStack service vendor
http://www.redhat.com/products/cloud-computing/openstack/ – Red Hat’s OpenStack Deployment
http://devstack.org/ – DevStack OpenStack deployment scripts
http://www.morphlabs.com/ – MorphLabs Cloud Consultants
http://www.nagios.org/ – Nagios element and networking monitoring system


[1] http://docs.openstack.org/trunk/openstack-ops/content/index.html
[2] http://www.redhat.com/products/cloud-computing/openstack/
[3] http://docs.openstack.org/folsom/openstack-object-storage/admin/content/components-of-openstack.html
[4] http://docs.openstack.org/trunk/openstack-ops/content/scaling.html#starting
[5] http://www.youtube.com/watch?v=0RRdKknfRUc – OpenStack Capacity Planning (video)
[6] http://docs.openstack.org/trunk/openstack-ops/content/scaling.html


Northforge has combined its technical expertise in cloud computing/SaaS software development with its extensive network infrastructure experience to deliver multiple Cloud and SaaS technology projects. We understand the design, development and UI requirements to take the nebulous out of your next cloud-based project.