4. Testing Workload Isolation with a Virtual Machine

Workload partitioning is not limited to containerized applications; it also extends to virtual machines managed by OpenShift Virtualization. In this section, we will deploy a VirtualMachine (VM) with 4 vCPUs and verify that it respects the CPU isolation boundaries defined in our PerformanceProfile.
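
Before starting, it can help to recheck the CPU sets the profile defines; a quick query, assuming the PerformanceProfile from the earlier sections is still applied:

# Show the isolated and reserved CPU sets of each PerformanceProfile
oc get performanceprofile -o jsonpath='{range .items[*]}{.metadata.name}: isolated={.spec.cpu.isolated} reserved={.spec.cpu.reserved}{"\n"}{end}'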

Installing Command Line Tools

If you haven’t already installed the virtctl binary, the command-line client for OpenShift Virtualization, follow these steps:

# Get the cluster ingress domain
DOMAIN=$(oc get ingress.config.openshift.io cluster -o jsonpath='{.spec.domain}')

# Download the virtctl binary from the internal cluster endpoint
wget --no-check-certificate https://hyperconverged-cluster-cli-download-openshift-cnv.${DOMAIN}/amd64/linux/virtctl.tar.gz

# Extract and install to a local bin directory
tar zvxf virtctl.tar.gz
mkdir -p ~/.local/bin/
mv virtctl ~/.local/bin/
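
Make sure the install directory is on your PATH for the current session, then confirm the client runs:

# Add ~/.local/bin to the PATH if it is not already there
export PATH="$HOME/.local/bin:$PATH"

# Print the client version (and the server version, if reachable)
virtctl version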

Creating a Virtual Machine

We will create a VM using a DataVolume for its root disk and use cloud-init to inject our SSH public key, enabling key-based SSH access to the VM.
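
The manifest below references a centos-stream9 boot source and a specific storage class; since both names vary between environments, it is worth confirming they exist before applying it:

# List the boot-source DataSources shipped with OpenShift Virtualization
oc get datasource -n openshift-virtualization-os-images

# Confirm the storage class used for the root disk is available
oc get storageclass ocs-external-storagecluster-ceph-rbd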

  1. Generate an SSH key pair:

    # Create a dedicated SSH key for this lab
    ssh-keygen -t rsa -f ~/.ssh/wlp_id_rsa -N ""
    WLP_SSH_KEY=$(cat ~/.ssh/wlp_id_rsa.pub)
  2. Define and create the VM manifest:

    tee student-vm-ssh.yaml << EOF
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: centos9-student
      labels:
        app: centos9-student
    spec:
      runStrategy: Manual
      dataVolumeTemplates:
        - metadata:
            name: centos-student-dv
          spec:
            sourceRef:
              kind: DataSource
              name: centos-stream9
              namespace: openshift-virtualization-os-images
            storage:
              resources:
                requests:
                  storage: 30Gi
              storageClassName: ocs-external-storagecluster-ceph-rbd
      template:
        metadata:
          labels:
            kubevirt.io/domain: centos9-student
        spec:
          domain:
            cpu:
              cores: 4
              sockets: 1
              threads: 1
            resources:
              # requests == limits gives the pod Guaranteed QoS, so the static
              # CPU Manager assigns it 4 exclusive CPUs from the isolated set
              requests:
                memory: 2Gi
                cpu: 4
              limits:
                memory: 2Gi
                cpu: 4
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
                - name: cloudinitdisk
                  disk:
                    bus: virtio
              interfaces:
                - name: default
                  masquerade: {}
          networks:
            - name: default
              pod: {}
          volumes:
            - name: rootdisk
              dataVolume:
                name: centos-student-dv
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #cloud-config
                  user: root
                  password: redhat
                  chpasswd: { expire: False }
                  ssh_pwauth: True
                  disable_root: False
                  ssh_authorized_keys:
                    - ${WLP_SSH_KEY}
    EOF
    
    # Apply the manifest and start the VM
    oc apply -f student-vm-ssh.yaml
    virtctl start centos9-student
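
The disk image import and first boot can take a few minutes. One way to block until the VM is ready (the 10-minute timeout is an arbitrary choice):

# Wait for the VirtualMachine to report Ready, then list the running instance
oc wait vm/centos9-student --for=condition=Ready --timeout=10m
oc get vmi centos9-student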

Running the Stress Test Inside the VM

Wait for the VM to boot, then SSH into it to run a CPU-intensive task.

  1. SSH into the VM:

    # Use virtctl to open an SSH session to the VM instance
    virtctl ssh root@centos9-student --identity-file ~/.ssh/wlp_id_rsa
  2. Install and run stress-ng inside the VM shell:

    # Install stress-ng (dnf resolves the package that provides this path)
    dnf install -y /usr/bin/stress-ng
    
    # Launch a background stress test on 4 CPUs at 100% load
    nohup stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp &
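    
    # Optional sanity check: the stress-ng parent and its four CPU workers
    pgrep -a stress-ng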
    
    # Exit the VM shell
    exit

Verifying Host CPU Usage

Now, we will verify that the virtualization overhead and the VM’s computational work are confined to the Isolated CPU set on the host node.

  1. Access the control plane node:

    NODE_NAME=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')
    oc debug node/$NODE_NAME
  2. Observe the top output on the host:

    # -1 shows per-CPU statistics, the same as pressing '1' in interactive top
    top -1

    In the top output, you will see the QEMU process (which runs the VM) consuming roughly 400% CPU (4 full cores). Crucially, these cycles are drawn from the Isolated set (CPUs 16-19).

    Example top output
    %Cpu0  :  8.1 us,  3.4 sy,  0.0 ni, 81.8 id,  0.3 wa,  1.7 hi,  4.7 si,  0.0 st
    %Cpu1  :  7.8 us,  4.7 sy,  0.0 ni, 84.1 id,  0.0 wa,  1.4 hi,  2.0 si,  0.0 st
    %Cpu2  :  7.7 us,  3.0 sy,  0.0 ni, 86.9 id,  0.3 wa,  1.3 hi,  0.7 si,  0.0 st
    %Cpu3  :  9.4 us,  3.7 sy,  0.0 ni, 84.5 id,  0.0 wa,  1.7 hi,  0.7 si,  0.0 st
    %Cpu4  :  7.4 us,  3.4 sy,  0.0 ni, 87.2 id,  0.0 wa,  1.3 hi,  0.7 si,  0.0 st
    %Cpu5  :  7.0 us,  2.7 sy,  0.0 ni, 88.3 id,  0.0 wa,  1.3 hi,  0.3 si,  0.3 st
    %Cpu6  :  6.4 us,  3.4 sy,  0.0 ni, 87.9 id,  0.3 wa,  1.7 hi,  0.3 si,  0.0 st
    %Cpu7  :  7.4 us,  3.7 sy,  0.0 ni, 86.9 id,  0.0 wa,  1.7 hi,  0.3 si,  0.0 st
    %Cpu8  :  6.7 us,  2.7 sy,  0.0 ni, 88.2 id,  0.0 wa,  1.7 hi,  0.7 si,  0.0 st
    %Cpu9  :  7.7 us,  4.0 sy,  0.0 ni, 86.2 id,  0.3 wa,  1.3 hi,  0.3 si,  0.0 st
    %Cpu10 :  7.1 us,  4.0 sy,  0.0 ni, 87.2 id,  0.0 wa,  1.3 hi,  0.3 si,  0.0 st
    %Cpu11 :  8.7 us,  4.0 sy,  0.0 ni, 85.7 id,  0.0 wa,  1.3 hi,  0.3 si,  0.0 st
    %Cpu12 :  9.6 us,  4.3 sy,  0.0 ni, 84.1 id,  0.0 wa,  1.7 hi,  0.3 si,  0.0 st
    %Cpu13 :  8.7 us,  3.0 sy,  0.0 ni, 86.0 id,  0.3 wa,  1.3 hi,  0.7 si,  0.0 st
    %Cpu14 :  8.7 us,  3.0 sy,  0.0 ni, 86.0 id,  0.0 wa,  1.7 hi,  0.7 si,  0.0 st
    %Cpu15 :  6.7 us,  4.0 sy,  0.0 ni, 87.6 id,  0.0 wa,  1.3 hi,  0.3 si,  0.0 st
    %Cpu16 : 99.7 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu17 : 99.7 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu18 : 99.7 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu19 : 99.7 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu20 :  0.7 us,  0.7 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu21 :  2.3 us,  1.6 sy,  0.0 ni, 95.4 id,  0.0 wa,  0.3 hi,  0.3 si,  0.0 st
    %Cpu22 :  2.3 us,  1.3 sy,  0.0 ni, 96.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu23 :  1.0 us,  0.3 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
    %Cpu24 :  9.9 us,  1.3 sy,  0.0 ni, 88.4 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu25 :  0.3 us,  1.0 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu26 :  1.0 us,  0.7 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu27 :  0.7 us,  0.7 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu28 :  3.0 us,  0.7 sy,  0.0 ni, 96.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu29 :  1.3 us,  0.7 sy,  0.0 ni, 97.4 id,  0.0 wa,  0.3 hi,  0.3 si,  0.0 st
    %Cpu30 :  2.0 us,  0.7 sy,  0.0 ni, 97.0 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
    %Cpu31 :  1.3 us,  0.7 sy,  0.0 ni, 97.4 id,  0.0 wa,  0.3 hi,  0.3 si,  0.0 st
  3. Find the process ID of the qemu-kvm instance on the host:

    ps -ef | grep qemu-kvm
    Example ps output
    107      1216721 1216582 99 15:03 ?        00:05:49 /usr/libexec/qemu-kvm -name guest=demo_centos9-student,.............
  4. Check its CPU affinity:

    # Find the PID of the VM's QEMU process (-o selects the oldest match)
    QEMU_PID=$(pgrep -o qemu-kvm)
    
    # Check the CPU affinity of the QEMU process
    taskset -c -p $QEMU_PID
    Example output
    pid 1216721's current affinity list: 16-19
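  5. Optionally, confirm per-thread affinity. taskset reports the mask of the main QEMU process; the kernel also exposes the allowed-CPU list for each of its threads, including the vCPU threads:

    # Every thread of the QEMU process should report the isolated CPU list
    grep Cpus_allowed_list /proc/$QEMU_PID/task/*/status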

This confirms that the VM’s compute resources are strictly pinned to the designated isolated cores, keeping its load off the reserved CPUs that run the cluster’s control plane and system services.

Cleanup

To release the resources, delete the VM:

oc delete -f student-vm-ssh.yaml
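
Deleting the VM also removes the DataVolume created from dataVolumeTemplates, since the VM owns it. To confirm everything is gone:

# Each resource type should report "No resources found"
oc get vm,vmi,dv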