Resources, Projects, and Domains

Estimated reading time: 11 minutes.

Objective

Describe how to organize OpenStack resources in projects and domains for access control and quota management.


Multitenancy with OpenStack

OpenStack clusters typically run many applications and are managed by many Operators. OpenStack does not prescribe a particular way of grouping applications and Operators; instead, it provides a general mechanism, based on domains and projects, for enforcing whatever structure your organization needs.

[Figure 1: domains and projects lecture]

OpenStack services enforce authorization checks on this structure of domains and projects, allowing or denying a user the ability to manage API resources. This enables separation of concerns: for example, an Operator may manage applications from some projects but not from others.

The structure of domains and projects also enables OpenStack services to enforce API resource and compute resource quotas, which prevent a single application from taking up all the compute capacity of a cluster or of a compute node.

Notice that, unlike operating system file quotas, which assign capacity to users and groups, OpenStack quotas assign capacity to applications, by means of their projects. If a user has access to multiple projects, they can create resources which consume quota from all those projects.

OpenStack Domains and Projects

An OpenStack domain is the top-level container for projects, users, groups, and roles. Users, groups, and roles can also exist at the project level, meaning they only have access to the resources of that project, but they usually exist at the domain level so they can access resources from different projects in the same domain.
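
For example, using the OpenStack client, an Administrator could create a domain and then a project inside it. The names below are hypothetical:

    # Create a domain and a project inside it
    openstack domain create engineering
    openstack project create --domain engineering webapp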

OpenStack allows users and groups from one domain to access resources from projects in other domains, which enables an approach of creating two parallel sets of domains:

  1. Resource domains, which include projects with application resources such as servers and networks;

  2. Authentication domains, which include users and groups and usually no projects.

This approach enables you to manage your organizational chart independently of your applications. It also enables managing scenarios such as bringing together two originally independent organizations, like two companies that merged. Each organization can be a distinct domain, and each domain can have its own authentication settings, possibly using different external identity services.
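
As a sketch of this approach, assuming a hypothetical authentication domain named corp-users and a resource domain named engineering, a user from one domain can be granted a role on a project from the other:

    # Grant a user from the authentication domain the member role
    # on a project from a resource domain (hypothetical names)
    openstack role add --user alice --user-domain corp-users \
      --project webapp --project-domain engineering member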

Most OpenStack API resources have to exist inside a project, and some API resource types can be shared, meaning they can be referenced by resource instances from different projects. This enables some Operators to manage resources for use by different projects and teams, for example standard server boot images, by creating projects dedicated to hosting those shared resources.
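
For instance, a Glance image kept in a dedicated project can be shared with selected consumer projects. A minimal sketch, with hypothetical image and project names:

    # Make an image shareable and grant a specific project access to it
    openstack image set --shared golden-rhel9
    openstack image add project golden-rhel9 webapp

Depending on the image-sharing policy in effect, the receiving project may still need to accept the shared image before it appears in its image list.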

OpenStack projects can be hierarchical: a project can have subprojects. Access permissions and resource quotas can be inherited (or not) from parent to child project, which gives large teams some degree of self-management.
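
A child project is created by naming its parent. A minimal sketch with hypothetical names:

    # Create a subproject under an existing project in the same domain
    openstack project create --domain engineering --parent webapp webapp-ci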

OpenStack API Resource Names and IDs

OpenStack API resources do not have globally unique names. For some resource types, names have to be unique at the project or domain level, but resources of the same type from different domains or projects can use the same name. For other resource types, the name is not expected to be unique at all, and multiple instances of the same resource type, in the same project, can use the same name.

OpenStack API resources are uniquely identified by a UUID assigned by the service that manages them. All OpenStack API calls that create new resource instances return the UUID of the new resource, and this is the only guaranteed way of referencing the resource later on. While you can change the name of a resource, you cannot change its UUID.

Most OpenStack client commands accept resource names and look up their UUIDs within the specified project and domain scopes; when they cannot resolve a name unambiguously, you are expected to retry the command with the UUIDs of the API resources you need.
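
For example, if two server instances in a project share the same name, commands that take a name fail as ambiguous, and you fall back to the UUID. A sketch with a hypothetical server name and UUID:

    # List matching servers to discover their UUIDs, then use the UUID
    openstack server list --name web01 -c ID -c Name
    openstack server show 3f2b9c2e-1111-2222-3333-444455556666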

When one API resource references other API resources, it does so by UUID, never by name. This is why API resource names can change at any time without breaking those references.

Some OpenStack API resources also accept zero or more tags, which you use for arbitrary grouping of resources. OpenStack services do not give any meaning to tags, but they help Operators manage clusters with a large number of resources.
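
For example, Neutron networks support tags that can later be used as list filters. A minimal sketch with a hypothetical tag and network name:

    # Tag a network and later filter by that tag
    openstack network set --tag team-a-billing private-net
    openstack network list --tags team-a-billing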

OpenStack Authorization and Roles

In the previous chapter, we learned that Keystone authenticates the identity of a user, but Keystone itself makes no authorization decisions: these belong to individual OpenStack services.

Keystone does populate access tokens with data, such as the originating domain and project of a user, and the list of groups and roles the user is a member of. OpenStack services can use any of that information to make authorization decisions, and they are expected to follow a few conventions, among them:

  • Users defined at a scope (system, domain, or project), or who got permissions from groups and roles defined at a scope, can only access resources at that same scope or at lower scopes.

  • Users with the admin role can perform any operation with any resource, respecting scope restrictions.

  • Users with the member role can perform a restricted set of operations which create, change, or delete resources.

  • Users with the reader role can only list resources and query their attributes, but cannot change or delete resources.

To understand the difference between admin and member roles, consider the conflicting needs of granting some autonomy to OpenStack Operators, while denying them the ability to perform dangerous operations. For example, a user might have permission to create server instances and networks inside a project, but the same user cannot change resource quotas on that project, nor grant access to resources to other users.

Remember that users do not own OpenStack API resources. Servers, networks, volumes, and other API resources belong to projects and domains only.

The admin, member, and reader roles exist by convention. It is up to individual OpenStack services to assign a specific meaning to each of those roles, that is: which APIs they can invoke and with which parameters. For some services, the distinction between admin and member might make no sense and those roles would be equivalent. For other services, the reader role may not be able to view sensitive attributes of a resource instance.

It is possible, but uncommon, to create custom roles in OpenStack clusters. The reason is that you cannot define the permissions (or rules) for custom roles using OpenStack APIs; you would have to manage services directly on the control plane and provide them with policy files. Most organizations stay within the three standard roles.
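
To give an idea of what that involves, a policy file maps API rules to role checks. The excerpt below is a hypothetical policy.yaml fragment for Nova, mapping the server-creation rule to a custom role; exact rule names vary per service and release:

    # policy.yaml fragment (hypothetical custom role "project-operator")
    "os_compute_api:servers:create": "role:project-operator"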

As a best practice, avoid assigning roles directly to users; assign them to groups instead and let users get roles through their group membership.
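
For example, assuming a hypothetical group of cloud operators in the corp-users domain, roles can be assigned to the group rather than to individual users:

    # Create a group and give it the member role on a project
    openstack group create --domain corp-users cloud-ops
    openstack role add --group cloud-ops --group-domain corp-users \
      --project webapp --project-domain engineering member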

OpenStack Users and Groups

It is possible to create and manage users and groups as Keystone API resources, but it is not recommended. Those local users and groups are convenient for small scenarios, like test clusters and proofs of concept, but the recommended approach is to delegate user and group management to external enterprise identity systems based on the LDAP or OIDC standards.
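
For a small test cluster, local Keystone users and group memberships can be created directly. A minimal sketch with hypothetical names in the default domain:

    # Create a local user, a local group, and add the user to the group
    openstack user create --domain default --password-prompt alice
    openstack group create --domain default testers
    openstack group add user testers alice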

One Keystone domain can refer to only one authentication backend. This is one reason some organizations create separate authentication domains and resource domains: they can create multiple authentication domains, each referring to a different identity system.

Traditional data centers may be accustomed to LDAP, used by identity management systems like Microsoft Active Directory (MSAD) and Red Hat Enterprise Linux Identity Management (RHEL IdM), but this technology is losing ground to identity management systems compatible with OpenID Connect (OIDC), which is based on the OAuth standard.

Nowadays, it is advisable to prefer OIDC identity systems over LDAP systems, especially for sharing authentication backends with modern cloud-native and containerized IT infrastructure applications, such as CI/CD systems. Some organizations stick with LDAP because most operating systems cannot authenticate local users using OIDC, but most of the time there is no reason to make OpenStack compute nodes members of your MSAD or RHEL IdM authentication domain.

The System Scope

Keystone and some other OpenStack services recognize the system scope, which refers to the cluster as a whole. OpenStack APIs that operate at the system scope, outside of any domain and project, are rare, but granting a user or group the admin role at the system scope is an effective way of making them superusers over the entire cluster.
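
Granting such a superuser role is done with a system-scoped role assignment. A minimal sketch with a hypothetical user name:

    # Make a user an administrator of the whole cluster (system scope)
    openstack role add --system all --user cloudadmin admin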

For some services there may be a special project and/or a special domain, prepopulated with shared resources, with the admin or member roles restricted to only a few users and the reader role assigned by default to all users. This looks like the system scope, but it is only a domain and project reserved for OpenStack Administrators to share resources with other users and projects. Organizations can use a similar approach to avoid duplicating resources across multiple projects and domains.

OpenStack Resource Quotas

Like access control, resource quotas are set and enforced by each individual OpenStack service. Quota enforcement depends on data managed by Keystone, but each service makes its own enforcement decisions.

OpenStack Nova used to provide per-user quotas, but they are deprecated and no longer available from the OpenStack client and other up-to-date OpenStack management tools.

[Figure 2: domains and projects lecture]

Compute resource quotas can only prevent the creation of, or changes to, API resource instances. They are not designed to enforce dynamic usage quotas, but to ensure that applications get some guaranteed capacity and that compute nodes are not overloaded with more server instances than they can handle.

To understand that, assume that a project has a quota of only 10 vCPUs. If that project already has three server instances, which add up to 8 vCPUs, it is only possible to create two new server instances, each using one vCPU, or one instance using two vCPUs. It does not matter if the existing instances are mostly idle and the cluster has plenty of capacity for running more virtual machines.
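
You can compare a project's quotas with its current consumption using the compute limits API. A sketch assuming a hypothetical project named webapp (querying another project's limits typically requires administrative privileges):

    # Show compute limits (quota versus usage) for a project
    openstack limits show --absolute --project webapp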

API Resource Quotas and Compute Resource Quotas

API resource quotas affect the number of instances of a given API resource type you can create inside a project. That class of quota limits the number of server instances, floating IPs, virtual networks, volumes, and so on.

Compute resource quotas affect the quantity, or capacity, of a compute resource that a project can consume, aggregating the consumption from all server instances in the project. That class of quota limits the total number of vCPUs, the total memory aggregated from all server instances, the total disk space from all volumes, and so on.

Here "compute" means anything required to running applications, and could include storage and network capacity. It does not reffer to API resources from Nova, the Compute service.

Object Storage Quotas

Remember that quotas are defined and enforced by each OpenStack service itself, which allows for some inconsistencies and also for purposeful exceptions. One such exception is Swift, which provides both API resource and compute resource storage quotas per storage account.

Object storage accounts enable external applications to save and retrieve objects without directly authenticating to Keystone. They can represent individual users of Swift object storage instead of individual applications.

Object storage quotas limit the number of containers or buckets, the number of objects, and the total space occupied by all objects owned by the account.
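
These quotas are set as account or container metadata, provided Swift's quota middleware is enabled. A hedged sketch using the swift client, with hypothetical values (setting account quotas requires reseller or administrator privileges):

    # Limit the authenticated account to 10 GiB of object storage
    swift post -m quota-bytes:10737418240
    # Limit a single container to 1000 objects
    swift post -m quota-count:1000 reports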

Compute Resource Overcommit

Because most applications are bursty, meaning their actual usage of compute resources varies over time, typically peaking for short periods that alternate with periods of idleness, OpenStack enables overcommit of compute resources by default.

An OpenStack Administrator can configure different overcommit levels for different classes of compute nodes, but the idea is that, if a compute node has an overcommit factor of 2.0 and 16 cores, it can run server instances adding up to 32 vCPUs.
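
The overcommit factor is a Nova configuration option on the compute node. A sketch of a nova.conf excerpt with a hypothetical ratio (the option name is cpu_allocation_ratio; its placement and default may vary per release):

    # nova.conf excerpt on a compute node: with 16 physical cores,
    # a ratio of 2.0 allows scheduling up to 32 vCPUs on this node
    [DEFAULT]
    cpu_allocation_ratio = 2.0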

It is not common to configure overcommit for other classes of compute resources, such as memory and GPUs, because they tend to be used with a more consistent, non-bursty pattern.