Introduction to MicroShift and Red Hat Device Edge
Estimated reading time: 10 minutes.
Objective
- Understand how MicroShift is similar to and different from Red Hat OpenShift
What is Red Hat Device Edge
Red Hat Device Edge is a software and support subscription that bundles three components:
- Red Hat Enterprise Linux (RHEL), which includes a feature set known as RHEL for Edge.
- Red Hat build of MicroShift.
- Red Hat Ansible Automation Platform.
All three of these components can also be purchased as part of other Red Hat subscriptions. For example, RHEL and Ansible are frequently purchased as standalone subscriptions instead of as part of a bundle, and MicroShift is also available as part of Red Hat OpenShift Container Platform and Red Hat OpenShift Platform Plus.
You might purchase Red Hat Device Edge to use MicroShift over RHEL for Edge on a large number of small edge devices. On the other hand, you could find that your edge machines are "large" and better suited to another subscription that also includes RHEL and other Red Hat products.
You can take advantage of the Red Hat build of MicroShift component with any of the product subscriptions mentioned above. You can also mix entitlements from different subscriptions on the same edge site, for example, some from Red Hat Device Edge and others from regular Red Hat OpenShift, if your site contains multiple kinds of machines.
In this course, we focus on the technical capabilities provided by MicroShift, while other courses in this learning path focus on RHEL for Edge and on Ansible management of edge devices.
What is MicroShift
The Red Hat® build of MicroShift is a lightweight Kubernetes container orchestration solution built from the edge capabilities of Red Hat® OpenShift® and based on the open source community’s project by the same name. (…) MicroShift brings the power and scalability of Kubernetes to the edge, as a natural extension of an OpenShift environment, allowing applications to be written once and run right where they are needed — next to the data source or end user.
The following figure compares MicroShift with the smallest edition of Red Hat OpenShift, which is Red Hat OpenShift Kubernetes Engine:
It shows that, while both the Red Hat build of MicroShift and Red Hat OpenShift include Kubernetes as their container orchestrator, plus security and a number of cluster services, Red Hat OpenShift includes a larger set of cluster services. The figure also shows that Red Hat OpenShift relies on RHEL CoreOS as its cluster node operating system, while MicroShift relies on plain RHEL and on RHEL services, such as the system journal daemon, for tasks that Red Hat OpenShift would defer to its cluster services.
MicroShift uses the same container engine (CRI-O) as Red Hat OpenShift and the same container images for components such as the Ingress controller, the OVN network provider, and the LVM Storage operator.
A number of components of Red Hat OpenShift are absent from MicroShift, among them:
- The Cluster Version Operator, which manages installation and upgrades of Red Hat OpenShift.
- The Machine Config Operator, which manages the configuration of the RHEL CoreOS operating system on OpenShift cluster nodes.
- Most cluster operators, which manage cluster services such as Ingress, OAuth, and Monitoring, and Kubernetes control plane components such as etcd.
It is also expected that Red Hat OpenShift users deploy a number of optional components as add-on operators, through the Operator Lifecycle Manager (OLM), such as:
- OpenShift GitOps
- OpenShift Pipelines
- OpenShift Virtualization
- OpenShift ServiceMesh
- Red Hat DevSpaces
A number of products certified for OpenShift, including storage and network providers, are also deployed as add-on operators. While you can install the OLM in MicroShift, as an optional component, most add-on operators are not currently tested or supported on MicroShift, and many of them would make no sense. For example, you would run OpenShift Pipelines or Red Hat Ansible Automation Platform in a regular OpenShift cluster to produce system images of Red Hat Device Edge for applications that target MicroShift. That OpenShift instance could even manage the final deployment of the application in actual edge devices, or in OpenShift Virtualization VMs for integration testing of such applications on actual RHEL for Edge system images.
Both the Red Hat build of MicroShift and Red Hat OpenShift are certified Kubernetes distributions, so they are expected to be compatible with the same applications. At least, that is the theory. In practice, Kubernetes certification only tests a comprehensive set of core APIs required to support application workloads, and does not validate the infrastructure or operational processes for managing Kubernetes clusters. This leaves room for considerable differentiation between Kubernetes distributions, and applications could depend on a number of extension APIs that may not be available or compatible with every distribution of Kubernetes.
MicroShift Versus OpenShift
The Red Hat build of MicroShift is different from the other components of Red Hat Device Edge in the sense that it's more than just a different SKU to acquire and entitle the same software and capabilities. RHEL for Edge is just RHEL, without any difference in its kernel, binaries, packages, and other components, and Ansible for Edge is just the same Red Hat Ansible Automation Platform, with a set of supported content collections that includes both edge and non-edge collections. MicroShift, by contrast, is a unique build of Red Hat OpenShift: it starts from the same sources but produces different binaries with only a subset of the capabilities of Red Hat OpenShift.
MicroShift is designed to require only a small number of cluster services (and their containers), so that it runs well within the CPU and memory constraints of edge devices. Red Hat OpenShift, on the other hand, requires data center-class server machines even on its single-node cluster topology.
While Red Hat OpenShift runs core Kubernetes components, such as etcd, the Kubernetes scheduler, and the Kubernetes API server, as independent processes in distinct pods managed by OpenShift cluster operators, MicroShift runs all of these components in a single process, managed by systemd. In a sense, MicroShift is an "all-in-one container" build of Kubernetes plus selected pieces of Red Hat OpenShift.
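One quick way to see this single-process design on a running node is to ask systemd about the MicroShift service. The following sketch assumes a RHEL host where the MicroShift RPM is installed and the service is active; the unit name microshift matches the service shipped by the RPM, and everything else is illustrative.

```python
import subprocess

def systemd_property(unit: str, prop: str) -> str:
    """Return a single systemd property value, such as ActiveState or MainPID."""
    result = subprocess.run(
        ["systemctl", "show", unit, "--property", prop, "--value"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    unit = "microshift"  # systemd service shipped by the MicroShift RPM
    state = systemd_property(unit, "ActiveState")
    pid = systemd_property(unit, "MainPID")
    # A single main PID hosts the embedded etcd, scheduler, and API servers,
    # instead of one pod per component as on Red Hat OpenShift.
    print(f"{unit}: state={state}, main pid={pid}")
```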
OpenShift has included, since its 3.0 release, a number of extension APIs, and an OpenShift API server component that handles them. Some of these extension APIs are still essential to the Red Hat OpenShift application platform today, for example, Security Context Constraints, while others, such as Image Streams, are falling out of favor with developers in favor of alternatives such as GitOps. The Red Hat build of MicroShift includes, in its build of the OpenShift API server, only two of the extension APIs that originated in OpenShift 3.0:
- Security Context Constraints
- Routes
And MicroShift runs its OpenShift API server in the same container as its Kubernetes core components.
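Because MicroShift serves the route.openshift.io extension API, a client call or manifest that creates a Route on Red Hat OpenShift should work unchanged on MicroShift. Here is a minimal sketch using the Python kubernetes client; the demo namespace, the Route name, and the backing Service name are placeholder assumptions, not part of the product.

```python
from kubernetes import client, config

# Load credentials from the current kubeconfig; a MicroShift admin kubeconfig
# and a regular OpenShift kubeconfig work the same way here.
config.load_kube_config()

# Routes are an OpenShift extension API served under route.openshift.io/v1.
route = {
    "apiVersion": "route.openshift.io/v1",
    "kind": "Route",
    "metadata": {"name": "hello", "namespace": "demo"},   # placeholder names
    "spec": {
        "to": {"kind": "Service", "name": "hello"},       # assumes this Service exists
        "port": {"targetPort": 8080},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="route.openshift.io",
    version="v1",
    namespace="demo",
    plural="routes",
    body=route,
)
```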
MicroShift also includes selected components of OpenShift unchanged, by running the same container images that provide those components on Red Hat OpenShift:
- Ingress (and Route) controller
- LVM Storage operator and CSI provider
- OVN-Kubernetes CNI provider
- Service certificate management
- CoreDNS
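One way to see these shared components on a running MicroShift node is to list its pods and the container images backing them. The sketch below only assumes a valid kubeconfig for the node; it makes no assumption about namespace or image names, which are whatever your cluster actually runs.

```python
from kubernetes import client, config

config.load_kube_config()  # kubeconfig for the MicroShift node
v1 = client.CoreV1Api()

# Print every pod and its container images; on MicroShift these include the
# same Ingress, OVN-Kubernetes, LVM Storage, and CoreDNS images that
# Red Hat OpenShift uses.
for pod in v1.list_pod_for_all_namespaces().items:
    images = ", ".join(c.image for c in pod.spec.containers)
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {images}")
```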
A few components of Red Hat OpenShift can also be installed in MicroShift from RPM packages, also using the same container images as Red Hat OpenShift, among them:
- OpenShift GitOps pull agent
- Multus secondary networks
- Operator Lifecycle Manager
The OpenShift GitOps package for MicroShift is not a complete OpenShift GitOps operator. It contains only an ArgoCD pull agent to fetch manifests from Git repositories and does not include features such as Argo Rollouts or multi-cluster capabilities. Some additional ArgoCD features might be included with that pull agent, but they are not tested with MicroShift.
Red Hat OpenShift is designed to be compatible with a large number of certified third-party components, from storage and network providers to security agents and DevOps tools. MicroShift is designed to work with a more restricted and opinionated set of components, but also to enable adding optional components, such as GPU enablement, from either RPM packages or add-on operators.
The goal of MicroShift is to provide sufficient compatibility with Red Hat OpenShift so that applications developed and tested on OpenShift can move to edge deployments unchanged, using the same container images and Kubernetes manifests.
If you wish, you can package those applications as Helm charts or add-on operators, and still deploy them on MicroShift the same way you would on OpenShift.
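As a small illustration of that portability, the sketch below applies the same manifest file you would use on OpenShift to a MicroShift node, switching targets only by pointing at a different kubeconfig. The kubeconfig path and manifest filename are placeholder assumptions.

```python
import os
from kubernetes import client, config, utils

# Point at the target cluster: a MicroShift node's kubeconfig here, or a
# regular OpenShift cluster's kubeconfig; the manifests themselves do not change.
kubeconfig = os.path.expanduser("~/.kube/microshift-config")  # placeholder path
config.load_kube_config(config_file=kubeconfig)

k8s_client = client.ApiClient()

# Apply the same Deployment/Service manifests used on OpenShift.
utils.create_from_yaml(k8s_client, "manifests/deployment.yaml")  # placeholder file
```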
MicroShift Cluster Management
You manage Red Hat OpenShift almost entirely from Kubernetes APIs, using either custom resources from OpenShift cluster operators or from add-on operators. Even the operating system on OpenShift cluster nodes is managed using Kubernetes APIs. OpenShift cluster administrators, especially when running OpenShift in IaaS clouds, may never see the need to run Linux commands on their OpenShift cluster nodes. They are advised to NOT open SSH sessions to these nodes for day-to-day tasks.
Managing MicroShift, on the other hand, requires using traditional RHEL tools, such as the DNF package manager, Systemd, and SSH. Red Hat OpenShift is a complete application platform by itself, which provides a cloud-like experience, while MicroShift is closer to "just" running Kubernetes on top of RHEL.
Finally, MicroShift clusters are always single node. If you need HA, you have to consider either RHEL HA services, such as Pacemaker, or switching to Red Hat OpenShift. If you need horizontal scalability across multiple nodes, or vertical scalability to larger servers, you should also consider Red Hat OpenShift. On the other hand, MicroShift integrates well with other features of RHEL for Edge, for example, the Greenboot capability of rolling back system upgrades to a previous known-good image.
You can run MicroShift on cloud instances if you wish, but MicroShift lacks the integration components to use cloud auto-scaling, cloud storage, and cloud load balancers. It is really designed for small physical edge devices and provides all components required by those devices as an integrated product. Upstream Kubernetes can integrate with cloud services as long as you add and configure a number of third-party components, such as network providers, storage providers, ingress controllers, service discovery, and more. Red Hat OpenShift provides those components, while MicroShift doesn't.
What’s Next
The first activities in this course prepare the virtual lab environment for an air-gapped deployment of MicroShift, using either package-based RHEL or RHEL for Edge. They should provide enough information for you to replicate the activities in your own environment, if you prefer, or to try a simpler deployment that is not air-gapped.