Fundamental Services of Red Hat OpenStack

Estimated reading time: 9 minutes.

Objective

Describe the OpenStack services required by typical applications and their virtual machines.

The OpenStack Services Architecture

OpenStack is composed of multiple independent services that can, and usually do, work together. Having services that are independent of each other enables faster delivery of new features and more reliable fixes for bugs and security issues, and is widely recognized as a best practice for software development.


But OpenStack services rarely run standalone, except by developers working on the services themselves. Most OpenStack clusters run a standard set of services, and even when some of those services could be disabled with no loss of functionality for a particular application workload, the standard set is usually left enabled. This means most clusters running Red Hat OpenStack Services on OpenShift run all services included with the product.

An OpenStack Operator focuses on user-facing OpenStack services, meaning services that provide an API for users outside of the cluster. Some OpenStack services exist to provide internal infrastructure for an OpenStack cluster and their APIs are available only from inside a cluster, for use by other OpenStack services and compute nodes. The OpenStack Administration learning journey provides more details about these internal services, and this course focuses on the user-facing services.

OpenStack Services to Manage Applications

The first set of OpenStack services comprises the minimal services required by practically any functional OpenStack cluster. It would be hard to support even the simplest applications without all the services in this category, so we call them fundamental services.

It is common for the OpenStack community to refer to individual services by either their code names or their purposes. The following list presents the fundamental OpenStack services covered in this section:

  • Keystone: Identity service

  • Nova: Compute service

  • Cinder: Block Storage service

  • Neutron: Networking service

  • Glance: Image service


OpenStack Authentication and Service Discovery: Keystone

The first service all OpenStack Operators interact with is Keystone, the Identity service. It is both the authentication entry point for an OpenStack cluster, validating user login credentials, and the discovery entry point for other OpenStack services. All other OpenStack services rely on Keystone to find each other and to validate access tokens.

So OpenStack Operator users (and also Administrator users) first connect to Keystone, and then connect to whatever other service they need. Tools such as the OpenStack client and Horizon make this process transparent, creating the illusion of a unified set of commands or pages for an entire cluster.
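As an illustration, the following Python sketch uses the openstacksdk library to perform the same two steps explicitly: authenticate through Keystone and then discover the public endpoint of the Compute service from the Keystone service catalog. The cloud name "mycloud" refers to an entry in a clouds.yaml file and is an assumption for this example.

    import openstack

    # Authenticate through Keystone by using credentials from clouds.yaml.
    # The cloud entry name "mycloud" is an assumption for this example.
    conn = openstack.connect(cloud='mycloud')

    # Keystone validates the credentials and issues a token.
    token = conn.authorize()

    # Other services are discovered through the Keystone service catalog.
    compute_endpoint = conn.session.get_endpoint(service_type='compute',
                                                 interface='public')
    print(compute_endpoint)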

OpenStack Virtual Machines: Nova

As application workloads run on OpenStack as Virtual Machines (VMs), the first service Operators care about is Nova, the Compute service. It interacts with the local hypervisor on compute nodes to manage VMs and provides advanced features such as live migration and snapshots. Nova uses the server resource type to manage VMs, and it is usual to refer to Nova VMs as "server instances".

If an Operator creates a VM by using only the Nova APIs, that VM has no connectivity to anything else, including other VMs inside the same OpenStack cluster, and all data inside that VM can be lost in events such as the hardware failure of a compute node. So most VMs require that you use Nova together with other fundamental services. But if all you need is access to the virtual console of a VM, to run software installed locally on that VM, then Nova alone is sufficient.

Fortunately, both the OpenStack client and Horizon offer quick commands and workflows to create VMs already attached to virtual storage and connected to virtual networks, performing multiple OpenStack API calls for one user action.
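A minimal sketch of such a high-level call with the openstacksdk Python library follows; it creates a VM from an image and a flavor and connects it to an existing tenant network. The image, flavor, and network names are assumptions for this example.

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Create a server (a Nova VM) from an image and a flavor, connected to an
    # existing tenant network. All names here are assumptions for this example.
    server = conn.create_server(
        name='web01',
        image='rhel-9',
        flavor='m1.small',
        network='tenant-net',
        wait=True,
    )
    print(server.status)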

OpenStack Virtual Disks: Cinder

Application workloads need to store multiple kinds of data, starting with their boot disk and its operating system, which might be customized with many changes to configuration files, added packages, and so on. They might also require multiple data disks to store application data, logs, and other information as operating system files.

The Block Storage service, Cinder, provides virtual disks, called volumes, which can be attached to VMs as needed. After a volume is attached to a VM, you use operating system commands inside that VM to format the volume with a file system and configure directories, file ownership, and file permissions before applications can store data on the volume, just as you would with a physical disk on a physical server.

You can attach multiple volumes to the same VM, and you can move volumes from one VM to another. You can even attach a single volume to multiple VMs at the same time, though this is a dangerous operation: applications not designed for shared disks can easily corrupt data.
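The following openstacksdk sketch illustrates the basic workflow: it creates a 10 GiB volume and attaches it to an existing server. The server and volume names are assumptions for this example.

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Create a 10 GiB data volume; the name is an assumption for this example.
    volume = conn.create_volume(size=10, name='web01-data', wait=True)

    # Attach the volume to an existing server named 'web01' (also assumed).
    server = conn.get_server('web01')
    conn.attach_volume(server, volume, wait=True)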

Cinder does not provide shared filesystems, which some workloads require. Stay tuned for the next section, which describes the right OpenStack service for such applications. The "block" nature of Cinder relates to the network storage protocol, such as iSCSI, Fibre Channel, or Ceph RBD.

Cinder also provides advanced features such as clones and snapshots of volumes, which you use for backup and disaster recovery procedures, or for quick cloning of VMs. The root or boot disk of a VM is usually a Cinder volume because this enables VMs to survive the loss of the compute node running the VM.
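For example, the following openstacksdk sketch takes a point-in-time snapshot of an existing volume, which you might do before a risky application upgrade. The volume and snapshot names are assumptions for this example.

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Look up an existing volume and create a point-in-time snapshot of it.
    # The volume and snapshot names are assumptions for this example.
    volume = conn.get_volume('web01-data')
    snapshot = conn.create_volume_snapshot(volume.id, name='web01-data-snap',
                                           wait=True)
    print(snapshot.status)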

There might be reasons to not use Cinder volumes for VM disks, for example to take advantage of very fast local NVMe flash storage, but then the application itself needs to provide resiliency and fault tolerance for that data. Most times, applications rely on the infrastructure to do it for them, which means Cinder does the job.

OpenStack Virtual Networks: Neutron

Most application workloads require network connectivity to other servers, which can be other VMs inside a cluster or physical servers outside the cluster. As long as there is network connectivity, each side does not know if the other side is a physical or virtual machine or another kind of device.

Neutron is the Networking service of OpenStack. It provides virtual internal networks, which it calls tenant networks, and connectivity to external networks, which it calls provider networks. OpenStack Operators are usually allowed to create multiple tenant networks and configure many related resources, such as security rules (similar to network firewalls), routing rules, and network address translation between tenant or provider networks.
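As an illustration, the following openstacksdk sketch creates a tenant network with a single IPv4 subnet. The network name, subnet name, and address range are assumptions for this example.

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Create a tenant network and an IPv4 subnet on it. The names and the CIDR
    # are assumptions for this example.
    network = conn.create_network(name='app-net')
    subnet = conn.create_subnet('app-net',
                                subnet_name='app-subnet',
                                cidr='192.168.10.0/24',
                                ip_version=4,
                                enable_dhcp=True)
    print(subnet.id)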

It is possible to manage Neutron virtual networks down to the virtual port level, and you can even create VLAN (Virtual Local Area Network) ports and trunks on tenant networks, for applications that you migrate from physical servers and that need to keep their original operating system settings. Provider networks, on the other hand, can abstract data center networking details, such as VLAN IDs, from connected VMs, if you wish.

Notice that, whatever you configure on your Neutron virtual networks, you need consistent configurations in the operating system inside your VMs. OpenStack helps with this task by providing features such as cloud-init, which offers an initial configuration to new VMs, and by providing virtual DHCP services. But if you want to configure static IP addresses and routes inside your VMs, you can. It is also a best practice to configure the internal firewall of your operating system, such as firewalld on RHEL, in addition to the network security resources from OpenStack.
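As an illustration, the following openstacksdk sketch passes a cloud-init configuration file when creating a VM, so that the guest operating system configures itself on first boot. The file path and the image, flavor, and network names are assumptions for this example.

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Read a cloud-init configuration from a local file; the path is an
    # assumption for this example. The file could, for instance, install and
    # enable a web server package on first boot.
    with open('web-server-init.yaml') as f:
        user_data = f.read()

    # Create the VM and pass the cloud-init data; all names are assumptions.
    server = conn.create_server(
        name='web02',
        image='rhel-9',
        flavor='m1.small',
        network='app-net',
        userdata=user_data,
        wait=True,
    )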

There are also some interesting networking capabilities provided by other OpenStack services, such as load balancing, which we will see in the next section.

Operating System Boot Images: Glance

If you create a new VM, you need a bootable operating system disk. It could be an installation disk, designed to be used only for the first boot, which installs an operating system on the root disk of the VM; but usually it is a root disk image that is already preinstalled and preconfigured. Nowadays, all operating system vendors provide both kinds of operating system images, and whichever you wish to use, you need to provide that image to OpenStack.

Glance, the OpenStack Image service, not only provides a choice of such images that Nova can use for the first boot of a VM, or to copy to the root disk of a VM before its first boot, but also provides management of an image catalog, so Operators and Administrators can create and maintain a large set of customized VM images.
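A minimal openstacksdk sketch of uploading a customized image to the Glance catalog follows. The image name, file path, and formats are assumptions for this example; QCOW2 is a common choice for KVM-based clouds.

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Upload a QCOW2 image file to Glance. The image name and file path are
    # assumptions for this example.
    image = conn.create_image(
        name='rhel-9-custom',
        filename='/tmp/rhel-9-custom.qcow2',
        disk_format='qcow2',
        container_format='bare',
        wait=True,
    )
    print(image.id)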

There are many reasons to customize VM images, from preconfiguring operating system settings required by your organization's policies, such as enterprise identity servers, certificate authorities, and agents for antivirus and backup software, to including entire application stacks, such as an online web store that runs as multiple VMs with copies of the same application, possibly in multiple OpenStack clusters.

It is the responsibility of the OpenStack Administrator to provide at least one image in Glance before Operators can create VMs.

Operation and Administration of Fundamental Services

As we saw during the presentation of Neutron and Glance, each service has resources that an Operator would only access, not create or change. An Administrator is required to manage those resources for them. Sometimes it would be possible, and perfectly fine, to let Operators manage a subset of those resources, but an Administrator would still provide a starter set before allowing Operators to access the cluster.

We already saw Neutron provider networks as an example of Administrator-only resources: you do not want to let Operators connect their applications to anything and everything outside your cluster, because there are security policies and boundaries to enforce. Besides, Operators are not supposed to need knowledge of the physical topology and resources of your data center. They are supposed to be concerned only with the virtual resources inside an OpenStack cluster.

As a counter-example, we already saw Glance images: if all Operators need at least one boot image, why not create a shared pool of standard images, instead of letting each Operator manage their own copies of the same images, which boot the same operating system version? However, it is fine to allow Operators to create their own customized images for the application teams they support.