Lab: Deploy Kubernetes Applications in MicroShift
Estimated reading time: 10 minutes.
Objective
Deploy single containers and multi-container applications in MicroShift from a remote client.
Before You Begin
You need a test machine where you installed, configured, and verified MicroShift by following the instructions from the first lab of this chapter.
You also need a development machine from which you access the MicroShift instance on your test machine. The previous lab configured this machine with a kubeconfig file that grants access to a pre-provisioned namespace in MicroShift.
Finally, you also need a mirror registry machine, already configured with a mirror registry for Red Hat OpenShift and pre-populated with the container images that MicroShift and the course activities require. Make sure that your mirror registry machine was configured and verified by following the instructions from a previous lab of this course.
These instructions were tested on RHEL 9.5 but should work with minimal or no change on newer and older RHEL 9.x releases.
If you are using the course classroom, log in to the workstation VM, which is your development machine, as the user student with the password student. If not, adapt the instructions to your test environment.
You will perform most steps in this lab on your development machine.
Instructions
- Log in to your development machine and verify that MicroShift is up and running on your test machine.
  - Check that you have a kubeconfig file for an unprivileged service account and a pre-provisioned namespace.
$ export KUBECONFIG=~/remote-user
$ oc whoami
system:serviceaccount:userns:user
$ oc project
Using project "userns" from context named "microshift" on server "https://serverb:6443".
Other oc commands related to projects do not work with MicroShift.

  - Check that the MicroShift service is active, using impersonation to get cluster administration rights.
$ oc --as admin get node
NAME      STATUS   ROLES                         AGE     VERSION
serverb   Ready    control-plane,master,worker   3d21h   v1.30.5
  - List the APIs available on your MicroShift instance.
$ oc api-resources | wc -l
70
$ oc api-resources
NAME                SHORTNAMES   APIVERSION   NAMESPACED   KIND
bindings                         v1           true         Binding
componentstatuses   cs           v1           false        ComponentStatus
configmaps          cm           v1           true         ConfigMap
...
On Red Hat OpenShift 4.17, you would get hundreds of available APIs, as opposed to a few dozen on the Red Hat build of MicroShift.
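If you want to confirm whether a particular API group is available, you can filter the list by group. For example, the following command lists the resources of the OpenShift route API group (output elided):

$ oc api-resources --api-group route.openshift.io
...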
- Deploy a MySQL database with persistent storage, to verify that your MicroShift instance can provide dynamic storage for applications.
  - Create a deployment, and configure it with the environment variables that the MySQL image from Red Hat requires.
$ oc create deployment db --replicas 0 --image servera.lab.example.com:8443/rhel9/mysql-80
deployment.apps/db created
$ oc set env deployment/db MYSQL_DATABASE=test MYSQL_USER=user MYSQL_PASSWORD=redhat123
deployment.apps/db updated
If you replace the registry name and port with registry.redhat.io, these commands also work on a MicroShift instance that is connected to the internet.
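For reference, the oc set env command stores these variables in the pod template of the deployment. The following is a minimal sketch of the resulting section, assuming the container name mysql-80 that oc create deployment derives from the image name:

    spec:
      containers:
      - name: mysql-80
        image: servera.lab.example.com:8443/rhel9/mysql-80
        env:
        - name: MYSQL_DATABASE
          value: test
        - name: MYSQL_USER
          value: user
        - name: MYSQL_PASSWORD
          value: redhat123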
  - Configure the deployment with a PVC, and scale the deployment to create a database server pod.
$ oc set volumes deployment/db --add --name data --claim-name dbdata --claim-size 1G --claim-mode rwo --mount-path /var/lib/mysql/data
deployment.apps/db volume updated
$ oc scale deployment/db --replicas 1
deployment.apps/db scaled
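For reference, this oc set volumes command creates a PVC and mounts it in the pod template of the deployment. The following is a minimal sketch of the declarative equivalent, assuming the names and size from the command:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dbdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G

The command also adds a matching volume and volume mount to the pod template:

      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: dbdata
      containers:
      - name: mysql-80
        # other container fields omitted
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/data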
  - Check that your namespace contains a PVC, and wait until the database pod is ready and running.
$ oc get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
dbdata   Bound    pvc-4091fb84-3804-4ca6-8c83-032235361db9   976564Ki   RWO            topolvm-provisioner   <unset>                 57s
$ oc get deployment
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
db     0/1     1            0           4m22s
$ oc get deployment
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
db     1/1     1            1           45s
$ oc get pod
NAME                  READY   STATUS    RESTARTS   AGE
db-55c5557878-6scc8   1/1     Running   0          18s
  - Create a network tunnel from your development machine to the database pod. Leave this command running as you move to the next step.
$ oc port-forward $(oc get pod -o name -l app=db) 3306:3306
Forwarding from 127.0.0.1:3306 -> 3306
Forwarding from [::1]:3306 -> 3306
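You do not have to look up the pod name with a nested oc get command; oc port-forward also accepts a deployment as the target and then selects one of its pods. A minimal equivalent, assuming the same deployment name (output elided):

$ oc port-forward deployment/db 3306:3306
...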
  - Open another terminal and use it to connect to the database pod by using the MySQL client included in RHEL.
$ sudo dnf install mysql
...
Complete!
$ mysql -h127.0.0.1 -uuser -predhat123 test
...
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create table test (id integer primary key, data varchar(200));
Query OK, 0 rows affected (0.17 sec)

mysql> insert into test values (1, 'one');
Query OK, 1 row affected (0.02 sec)

mysql> insert into test values (2, 'two');
Query OK, 1 row affected (0.01 sec)

mysql> select * from test;
+----+------+
| id | data |
+----+------+
|  1 | one  |
|  2 | two  |
+----+------+
2 rows in set (0.00 sec)

mysql> exit
Bye
  - Return to the first terminal, which runs the oc port-forward command, and terminate the network tunnel with Ctrl+C.
- Switch to your test machine and verify that MicroShift created a new logical volume to store data from the PVC.
$ export KUBECONFIG=~/local-admin
$ PV=$( oc get pvc dbdata -n userns -o jsonpath='{.spec.volumeName}' )
$ echo $PV
pvc-24f9b9f4-e95a-4307-9d2b-5af6230b90bc
$ oc get pv $PV
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS          VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-24f9b9f4-e95a-4307-9d2b-5af6230b90bc   976564Ki   RWO            Delete           Bound    userns/dbdata   topolvm-provisioner   <unset>                          19s
$ LV=$( oc --as admin get pv $PV -o jsonpath='{.spec.csi.volumeHandle}' )
$ echo $LV
bea16431-baad-43fc-800e-f5d9f138e430
$ sudo lvs rhel/$LV
  LV                                   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  bea16431-baad-43fc-800e-f5d9f138e430 rhel -wi-a----- 956.00m
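The PVC was bound through the topolvm-provisioner storage class, as the earlier oc get pvc output showed. If you are curious, you can list the storage classes that your MicroShift instance provides (output elided):

$ oc get storageclass
...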
  - While you are on your test machine, check that port 8080 is not used by any of the RHEL services running on the test machine, especially MicroShift. You expose an application on this port later in this lab.
$ sudo ss -tulnp
Netid   State    Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
udp     UNCONN   0        0        127.0.0.1:323        0.0.0.0:*           users:(("chronyd",pid=856,fd=5))
udp     UNCONN   0        0        0.0.0.0:5353         0.0.0.0:*           users:(("microshift",pid=1528,fd=100))
...
You do not need to open the firewall for applications that you deploy in MicroShift, because OVN configures network flow rules that bypass the system firewall for all services and routes that you create in MicroShift.
- Switch to your development machine and deploy a hello world web application, to verify that you can expose applications in MicroShift for external access by using OpenShift routes.
  - Create a deployment for the hello world application and wait until its pod is ready and running.
$ oc create deployment hello --image quay.io/flozanorht/php-ubi:9
deployment.apps/hello created
$ oc get deployment,pod
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           37s

NAME                         READY   STATUS    RESTARTS   AGE
pod/db-55c5557878-6scc8      1/1     Running   0          6m14s
pod/hello-7fd66dd674-2bnjc   1/1     Running   0          37s
This image comes from quay.io, so this step requires a MicroShift instance that can reach the internet. On a disconnected instance, you would replace the registry name with the name and port of your mirror registry.

  - Create a service and an OpenShift route to expose the hello world application for external access. Notice that, with MicroShift, unprivileged users cannot manage routes.
$ oc expose deployment/hello --port 8080
service/hello exposed
$ oc get service
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
hello   ClusterIP   10.43.136.146   <none>        8080/TCP   10s
$ oc expose service hello
Error from server (Forbidden): routes.route.openshift.io is forbidden: User "system:serviceaccount:userns:user" cannot create resource "routes" in API group "route.openshift.io" in the namespace "userns"
$ oc --as admin expose service hello
route.route.openshift.io/hello exposed
$ oc --as admin get route
NAME    HOST                            ADMITTED   SERVICE   TLS
hello   hello-userns.apps.example.com   True       hello
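For reference, the oc --as admin expose service hello command generates a route similar to the following minimal sketch. MicroShift generates the host name, and the actual resource contains additional fields:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello
  namespace: userns
spec:
  to:
    kind: Service
    name: hello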
  - Edit the /etc/hosts file on your development machine to map the host name of the route to the IP address of your test machine, by appending the following line:

172.25.250.11 hello-userns.apps.example.com
In a real-world scenario, you would configure a DNS server to resolve any host name within the application domain of your MicroShift instance to its IP address, as in the sketch below.
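For example, with dnsmasq, a single wildcard record can resolve every host name under the application domain. The following is a minimal sketch, assuming dnsmasq and the example domain and IP address from this lab; the file name is hypothetical:

# /etc/dnsmasq.d/microshift.conf (hypothetical drop-in file)
# Resolve every host name under apps.example.com to the test machine.
address=/apps.example.com/172.25.250.11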
  - Check that your development machine can access the hello world application by using the host name that MicroShift assigned to the route.
$ curl http://hello-userns.apps.example.com
<html>
<body>
Hello, world!
</body>
</html>
  - Delete the route and the service to prepare for the next step.
$ oc --as admin delete route hello
route.route.openshift.io "hello" deleted
$ oc delete service hello
service "hello" deleted
- Create a load balancer service to expose the hello world application without using an HTTP proxy.
  - Create a service of type load balancer and get its external IP address. That address should match the IP address of your test machine. You could choose any TCP port that is free on your test machine, but for simplicity, this lab uses the same TCP port that the hello world application uses inside its container.
$ oc expose deployment/hello --port 8080 --type LoadBalancer
service/hello exposed
$ oc get service
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello   LoadBalancer   10.43.189.145   172.25.250.11   8080:31736/TCP   28s
The oc expose command configures load balancer services with a node port, which is unnecessary, and the result is that the same service accepts connections on two different ports of the machine that runs MicroShift. Switching to the oc create service loadbalancer command makes no difference; it also configures an unnecessary node port.
  - Check that you can access the hello world application by using the load balancer IP address and port, that is, by using the public host name of your test machine.
$ curl http://serverb.lab.example.com:8080
<html>
<body>
Hello, world!
</body>
</html>
  - To get rid of the node port, run the following commands to patch the service resource.
$ oc patch service hello --type json --patch '[{"op": "replace", "path": "/spec/allocateLoadBalancerNodePorts", "value": false }]'
service/hello patched
$ oc patch service hello --type json --patch '[{"op": "remove", "path": "/spec/ports/0/nodePort"}]'
service/hello patched
$ oc get service
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)    AGE
hello   LoadBalancer   10.43.25.48   172.25.250.11   8080/TCP   13m
Alternatively, you could use the oc edit command to make these changes, or create the load balancer service from a YAML manifest instead of using imperative commands, as in the sketch below.
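The following is a minimal sketch of such a manifest. It assumes the app=hello label that the oc create deployment command assigns to the pods, and it disables node port allocation from the start, so that no patching is needed:

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
  selector:
    app: hello
  ports:
  - name: http
    port: 8080
    targetPort: 8080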
- Delete all resources that you created during this lab.
  - Delete the database deployment and its PVC. Notice that deleting the PVC also deletes its persistent volume, because the storage class creates volumes with the Delete reclaim policy.
$ oc delete deployment db
deployment.apps "db" deleted
$ oc delete pvc dbdata
persistentvolumeclaim "dbdata" deleted
$ oc --as admin get pv
No resources found
  - Delete the hello world deployment and its service.
$ oc delete service hello
service "hello" deleted
$ oc delete deployment hello
deployment.apps "hello" deleted
  - Finally, undo the edits to the /etc/hosts file on your development machine.
With these two test deployments, you verified that your MicroShift instance can provide persistent storage and ingress network connectivity to its applications.