3. Testing Workload Isolation with Pods
Having confirmed that system components are strictly pinned to the Reserved cores, we will now examine how user-deployed Pods behave. Kubernetes classifies Pods into three Quality of Service (QoS) classes: BestEffort, Burstable, and Guaranteed.
In a partitioned cluster, user Pods run on the Isolated CPU set by default, but whether a Pod is pinned to specific cores depends on its QoS class.
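As a quick reference, the `resources` stanza of a container determines the class. The fragments below are illustrative only, not a complete manifest:

```yaml
# BestEffort: no requests or limits set on any container
resources: {}

# Burstable: some requests or limits set, but not meeting the Guaranteed criteria
# (the common case: requests lower than limits)
resources:
  requests:
    cpu: "1"
  limits:
    cpu: "2"

# Guaranteed: requests equal to limits for both CPU and memory, in every container
resources:
  requests:
    cpu: "2"
    memory: "128Mi"
  limits:
    cpu: "2"
    memory: "128Mi"
```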
Prerequisites: Create a Test Namespace
Before proceeding, create a dedicated namespace for our experiments:
```bash
# Create the demo project
oc new-project demo
```
Test Case 1: BestEffort Pod
A BestEffort Pod is created when no resource requests or limits are defined. These Pods are allowed to use any available CPU cycles within the Isolated set (16-31) but have no guaranteed resources.
Deploy a CPU-intensive BestEffort Pod:
```bash
tee $HOME/pod-besteffort.yaml << 'EOF'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-stress-deployment
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cpu-stress
  template:
    metadata:
      labels:
        app: cpu-stress
    spec:
      volumes:
      - name: temp-space
        emptyDir: {}
      containers:
      - name: stress-ng-container
        image: quay.io/wangzheng422/qimgs:centos9-test-stress-ng-2025.12.24.v03
        volumeMounts:
        - name: temp-space
          mountPath: "/tmp/stress-workdir"
        # No resources defined, resulting in a BestEffort QoS class
        command:
        - "/bin/bash"
        - "-c"
        - |
          echo "Starting stress test on 4 CPUs...";
          stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
EOF

oc apply -f $HOME/pod-besteffort.yaml
```
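Optionally, confirm how Kubernetes classified the Pod. The assigned class is recorded in the Pod's status (a quick check once the Pod is running):

```bash
# Should print: BestEffort
oc get pods -n demo -l app=cpu-stress -o jsonpath='{.items[0].status.qosClass}'
```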
Access the Pod and observe its environment:
```bash
# Identify the pod name
POD_NAME=$(oc get pods -n demo -l app=cpu-stress -o jsonpath='{.items[0].metadata.name}')

# Open an interactive shell in the container
oc rsh -n demo $POD_NAME
```
Run `top` inside the Pod:

```bash
# Start top, then press '1' to see individual CPU details
top
```
Analyze the Results:
Observe the `top` output. You will notice that the CPU load is concentrated on the Isolated cores (e.g., CPU 18, 19, 27, and 31 in the example below), while the Reserved cores (0-15) remain relatively idle.

Example `top` output:

```text
%Cpu0 : 6.0 us, 3.3 sy, 0.0 ni, 84.9 id, 0.3 wa, 1.3 hi, 4.0 si, 0.0 st
%Cpu1 : 7.4 us, 3.0 sy, 0.0 ni, 86.9 id, 0.3 wa, 1.3 hi, 1.0 si, 0.0 st
%Cpu2 : 5.4 us, 2.7 sy, 0.0 ni, 90.2 id, 0.0 wa, 1.0 hi, 0.3 si, 0.3 st
%Cpu3 : 7.2 us, 2.1 sy, 0.0 ni, 89.4 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu4 : 6.8 us, 3.1 sy, 0.0 ni, 88.5 id, 0.0 wa, 1.0 hi, 0.3 si, 0.3 st
%Cpu5 : 7.4 us, 3.0 sy, 0.0 ni, 88.3 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu6 : 7.4 us, 2.7 sy, 0.0 ni, 88.6 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu7 : 6.8 us, 2.7 sy, 0.0 ni, 88.5 id, 0.3 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu8 : 6.1 us, 3.7 sy, 0.0 ni, 88.5 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu9 : 6.8 us, 2.7 sy, 0.0 ni, 89.2 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu10 : 6.1 us, 2.4 sy, 0.0 ni, 89.8 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu11 : 5.4 us, 2.4 sy, 0.0 ni, 90.5 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu12 : 5.8 us, 3.4 sy, 0.0 ni, 89.1 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu13 : 7.7 us, 4.0 sy, 0.0 ni, 86.6 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
%Cpu14 : 7.1 us, 2.4 sy, 0.0 ni, 88.8 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu15 : 5.8 us, 2.4 sy, 0.0 ni, 90.1 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu16 : 0.3 us, 0.0 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu17 : 0.7 us, 0.3 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu18 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu19 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu20 : 0.7 us, 0.0 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu21 : 1.0 us, 0.3 sy, 0.0 ni, 98.0 id, 0.0 wa, 0.3 hi, 0.3 si, 0.0 st
%Cpu22 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu23 : 0.3 us, 0.3 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu24 : 0.3 us, 0.0 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu25 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu26 : 0.3 us, 0.3 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu27 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu28 : 0.7 us, 0.3 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu29 : 0.3 us, 0.0 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu30 : 0.7 us, 0.3 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu31 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
```
Verify Process Affinity:
First, find the PID of the `stress-ng` process:

```bash
ps -ef | grep "stress-ng"
```

Example `ps` output:

```text
1000810+ 1 0 0 14:06 ? 00:00:00 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 2 1 99 14:06 ? 00:00:27 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 3 1 99 14:06 ? 00:00:27 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 4 1 99 14:06 ? 00:00:27 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 5 1 99 14:06 ? 00:00:27 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 8 6 0 14:06 pts/0 00:00:00 grep stress-ng
```

Now, check the CPU affinity of the main process (`pgrep -o` returns the oldest matching PID, which is PID 1 here):

```bash
# Check the cpuset affinity for the main process (PID 1)
taskset -c -p $(pgrep -o stress-ng)
```

Example `taskset` output:

```text
pid 1's current affinity list: 16-31
```

The output should confirm that the Pod is restricted to the Isolated range 16-31.
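If `taskset` is not available in an image, the same information can be read from `/proc` (a generic alternative; inside the container's PID namespace, PID 1 is the `stress-ng` parent):

```bash
# Cpus_allowed_list mirrors the taskset affinity output
grep Cpus_allowed_list /proc/1/status
```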
Exit the Pod:
```bash
exit
```
Test Case 2: Burstable Pod
A Burstable Pod sets resource requests lower than its limits (more generally, it sets some requests or limits without meeting the Guaranteed criteria). Like BestEffort Pods, Burstable Pods in a partitioned cluster run across the entire Isolated CPU set.
Update the Deployment to Burstable QoS:
```bash
oc scale deployment cpu-stress-deployment -n demo --replicas=0

oc patch deployment cpu-stress-deployment -n demo --patch '
spec:
  template:
    spec:
      containers:
      - name: stress-ng-container
        resources:
          requests:
            cpu: "2"
            memory: "64Mi"
          limits:
            cpu: "4"
            memory: "128Mi"
'

oc scale deployment cpu-stress-deployment -n demo --replicas=1
```
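Once the new Pod is running, you can verify the QoS class change from the Pod description (equivalent to the jsonpath check used earlier):

```bash
POD_NAME=$(oc get pods -n demo -l app=cpu-stress -o jsonpath='{.items[0].metadata.name}')

# Should report: QoS Class: Burstable
oc describe pod -n demo $POD_NAME | grep "QoS Class"
```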
Access the Pod and observe its environment:
```bash
# Identify the pod name
POD_NAME=$(oc get pods -n demo -l app=cpu-stress -o jsonpath='{.items[0].metadata.name}')

# Open an interactive shell in the container
oc rsh -n demo $POD_NAME
```
Run `top` inside the Pod:

```bash
# Start top, then press '1' to see individual CPU details
top
```
Analyze the Results:
Observe the `top` output. You will notice that the CPU load is concentrated on the Isolated cores (e.g., CPU 17, 18, 23, and 30 in the example below), while the Reserved cores (0-15) remain relatively idle.

Example `top` output:

```text
%Cpu0 : 7.1 us, 3.4 sy, 0.0 ni, 83.2 id, 0.0 wa, 1.0 hi, 5.1 si, 0.3 st
%Cpu1 : 5.8 us, 3.1 sy, 0.0 ni, 87.5 id, 0.3 wa, 1.4 hi, 2.0 si, 0.0 st
%Cpu2 : 5.4 us, 2.4 sy, 0.0 ni, 90.1 id, 0.0 wa, 1.4 hi, 0.7 si, 0.0 st
%Cpu3 : 7.5 us, 2.0 sy, 0.0 ni, 88.8 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu4 : 5.8 us, 2.7 sy, 0.0 ni, 90.1 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu5 : 6.8 us, 2.7 sy, 0.0 ni, 88.8 id, 0.0 wa, 1.0 hi, 0.3 si, 0.3 st
%Cpu6 : 7.1 us, 3.0 sy, 0.0 ni, 87.5 id, 0.3 wa, 1.3 hi, 0.3 si, 0.3 st
%Cpu7 : 5.8 us, 3.4 sy, 0.0 ni, 89.1 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu8 : 9.7 us, 6.0 sy, 0.0 ni, 82.3 id, 0.3 wa, 1.0 hi, 0.3 si, 0.3 st
%Cpu9 : 7.8 us, 3.1 sy, 0.0 ni, 87.1 id, 0.3 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu10 : 5.4 us, 3.7 sy, 0.0 ni, 88.8 id, 0.0 wa, 1.7 hi, 0.3 si, 0.0 st
%Cpu11 : 5.7 us, 3.7 sy, 0.0 ni, 89.3 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu12 : 4.7 us, 3.1 sy, 0.0 ni, 90.5 id, 0.0 wa, 1.0 hi, 0.3 si, 0.3 st
%Cpu13 : 6.5 us, 4.4 sy, 0.0 ni, 87.4 id, 0.0 wa, 1.4 hi, 0.3 si, 0.0 st
%Cpu14 : 6.1 us, 2.7 sy, 0.0 ni, 89.1 id, 0.0 wa, 1.4 hi, 0.3 si, 0.3 st
%Cpu15 : 7.7 us, 3.4 sy, 0.0 ni, 87.2 id, 0.0 wa, 1.0 hi, 0.7 si, 0.0 st
%Cpu16 : 1.3 us, 0.3 sy, 0.0 ni, 97.7 id, 0.0 wa, 0.7 hi, 0.0 si, 0.0 st
%Cpu17 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu18 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu19 : 1.3 us, 0.3 sy, 0.0 ni, 97.7 id, 0.0 wa, 0.7 hi, 0.0 si, 0.0 st
%Cpu20 : 1.7 us, 0.3 sy, 0.0 ni, 97.3 id, 0.0 wa, 0.3 hi, 0.3 si, 0.0 st
%Cpu21 : 1.7 us, 0.3 sy, 0.0 ni, 97.3 id, 0.0 wa, 0.3 hi, 0.3 si, 0.0 st
%Cpu22 : 1.0 us, 0.3 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu23 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu24 : 1.3 us, 1.0 sy, 0.0 ni, 97.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu25 : 1.0 us, 0.0 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu26 : 0.7 us, 0.3 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu27 : 1.0 us, 0.3 sy, 0.0 ni, 98.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu28 : 2.0 us, 0.3 sy, 0.0 ni, 97.3 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu29 : 1.3 us, 0.0 sy, 0.0 ni, 98.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu30 : 99.3 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.7 hi, 0.0 si, 0.0 st
%Cpu31 : 1.3 us, 0.3 sy, 0.0 ni, 98.0 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
```
Verify Process Affinity:
First, find the PID of the `stress-ng` process:

```bash
ps -ef | grep "stress-ng"
```

Example `ps` output:

```text
1000810+ 1 0 0 14:04 ? 00:00:00 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 2 1 51 14:04 ? 00:00:09 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 3 1 48 14:04 ? 00:00:09 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 4 1 47 14:04 ? 00:00:09 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 5 1 49 14:04 ? 00:00:09 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000810+ 8 6 0 14:04 pts/0 00:00:00 grep stress-ng
```

Now, check the CPU affinity of the main process (`pgrep -o` returns the oldest matching PID, which is PID 1 here):

```bash
# Check the cpuset affinity for the main process (PID 1)
taskset -c -p $(pgrep -o stress-ng)
```

Example `taskset` output:

```text
pid 1's current affinity list: 16-31
```

The affinity list is still 16-31: Burstable Pods share the Isolated pool and are not pinned to specific cores; their CPU limit is enforced by quota rather than by affinity, as shown below.
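While still inside the Pod, you can see how the 4-CPU limit is actually enforced: not by core pinning, but by a CFS bandwidth quota on the container's cgroup. The check below assumes the node uses cgroup v2 (on cgroup v1 the equivalent files are `cpu.cfs_quota_us` and `cpu.cfs_period_us`):

```bash
# cgroup v2 format: "<quota> <period>" in microseconds;
# 400000 100000 corresponds to the 4-CPU limit
cat /sys/fs/cgroup/cpu.max
```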
Exit the Pod:
```bash
exit
```
Test Case 3: Guaranteed Pod
A Guaranteed Pod has CPU and memory requests equal to its limits. For containers that request a whole (integer) number of CPUs, the Kubernetes CPU Manager assigns exclusive cores, dedicated to that container alone.
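On the node side, the kubelet checkpoints exclusive CPU assignments in its CPU Manager state file. Once the Guaranteed Pod below is running, you can inspect that file through a debug pod (a sketch; replace `<node>` with the worker hosting the Pod):

```bash
# The static CPU manager records exclusive allocations here
oc debug node/<node> -- chroot /host cat /var/lib/kubelet/cpu_manager_state
```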
Update the Deployment to Guaranteed QoS:
```bash
oc scale deployment cpu-stress-deployment -n demo --replicas=0

oc patch deployment cpu-stress-deployment -n demo --patch '
spec:
  template:
    spec:
      containers:
      - name: stress-ng-container
        resources:
          requests:
            cpu: "4"
            memory: "64Mi"
          limits:
            cpu: "4"
            memory: "64Mi"
'

oc scale deployment cpu-stress-deployment -n demo --replicas=1
```
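Note that Guaranteed QoS alone is not enough for exclusive pinning: the CPU Manager only dedicates cores to containers that request a whole number of CPUs. A stanza like the following sketch would still yield a Guaranteed Pod, but it would keep the shared 16-31 affinity:

```yaml
# Guaranteed QoS (requests == limits), but a fractional CPU request:
# no exclusive cores are assigned for non-integer CPU counts
resources:
  requests:
    cpu: "500m"
    memory: "64Mi"
  limits:
    cpu: "500m"
    memory: "64Mi"
```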
Access the Pod and observe its environment:
```bash
# Identify the pod name
POD_NAME=$(oc get pods -n demo -l app=cpu-stress -o jsonpath='{.items[0].metadata.name}')

# Open an interactive shell in the container
oc rsh -n demo $POD_NAME
```
Run `top` inside the Pod:

```bash
# Start top, then press '1' to see individual CPU details
top
```
Analyze the Results:
Observe the `top` output. This time the CPU load sits on exactly four Isolated cores (CPU 16, 17, 18, and 19 in the example below), while the Reserved cores (0-15) and the rest of the Isolated set remain relatively idle.

Example `top` output:

```text
%Cpu0 : 6.4 us, 2.4 sy, 0.0 ni, 85.8 id, 0.0 wa, 1.0 hi, 4.4 si, 0.0 st
%Cpu1 : 4.4 us, 1.7 sy, 0.0 ni, 91.6 id, 0.0 wa, 1.0 hi, 1.4 si, 0.0 st
%Cpu2 : 5.4 us, 2.4 sy, 0.0 ni, 91.2 id, 0.0 wa, 0.7 hi, 0.3 si, 0.0 st
%Cpu3 : 4.7 us, 2.4 sy, 0.0 ni, 91.2 id, 0.3 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu4 : 5.4 us, 3.0 sy, 0.0 ni, 89.9 id, 0.3 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu5 : 4.1 us, 3.0 sy, 0.0 ni, 91.6 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu6 : 4.4 us, 2.0 sy, 0.0 ni, 92.2 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu7 : 5.1 us, 2.0 sy, 0.0 ni, 91.6 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu8 : 4.7 us, 2.0 sy, 0.0 ni, 92.2 id, 0.0 wa, 1.0 hi, 0.0 si, 0.0 st
%Cpu9 : 4.8 us, 2.0 sy, 0.0 ni, 91.8 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu10 : 4.0 us, 4.4 sy, 0.0 ni, 89.9 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
%Cpu11 : 4.4 us, 2.7 sy, 0.0 ni, 91.3 id, 0.3 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu12 : 5.1 us, 2.4 sy, 0.0 ni, 91.6 id, 0.0 wa, 1.0 hi, 0.0 si, 0.0 st
%Cpu13 : 5.1 us, 3.0 sy, 0.0 ni, 90.2 id, 0.3 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu14 : 6.1 us, 2.4 sy, 0.0 ni, 90.6 id, 0.0 wa, 1.0 hi, 0.0 si, 0.0 st
%Cpu15 : 5.1 us, 1.7 sy, 0.0 ni, 91.9 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu16 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu17 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu18 : 99.3 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.7 hi, 0.0 si, 0.0 st
%Cpu19 : 99.7 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu20 : 0.3 us, 0.0 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu21 : 0.3 us, 0.3 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu22 : 1.0 us, 0.3 sy, 0.0 ni, 98.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu23 : 0.3 us, 0.0 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu24 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu25 : 0.3 us, 1.0 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu26 : 0.3 us, 0.3 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu27 : 0.7 us, 0.0 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu28 : 0.3 us, 0.3 sy, 0.0 ni, 99.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu29 : 0.3 us, 0.0 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu30 : 0.3 us, 0.3 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.3 hi, 0.3 si, 0.0 st
%Cpu31 : 0.7 us, 0.3 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
```
Verify Process Affinity:
First, find the PID of the `stress-ng` process:

```bash
ps -ef | grep "stress-ng"
```

Example `ps` output:

```text
1000820+ 1 0 0 14:45 ? 00:00:00 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000820+ 2 1 99 14:45 ? 00:01:32 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000820+ 3 1 99 14:45 ? 00:01:32 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000820+ 4 1 99 14:45 ? 00:01:32 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000820+ 5 1 99 14:45 ? 00:01:32 stress-ng --cpu 4 --cpu-load 100 --temp-path /tmp/stress-workdir
1000820+ 18 13 0 14:47 pts/0 00:00:00 grep stress-ng
```

Now, check the CPU affinity of the main process (`pgrep -o` returns the oldest matching PID, which is PID 1 here):

```bash
# Check the cpuset affinity for the main process (PID 1)
taskset -c -p $(pgrep -o stress-ng)
```

Example `taskset` output:

```text
pid 1's current affinity list: 16-19
```

Instead of the broad 16-31 range, you will see a specific set of four cores (e.g., 16-19). These cores are dedicated to this container and are removed from the shared pool used by BestEffort and Burstable Pods.
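To see how the workers are spread across the four exclusive cores, you can also list the CPU each one is currently running on (the PSR column; every value should fall within the assigned set):

```bash
# psr = the processor each task last ran on
ps -eo pid,psr,comm | grep stress-ng
```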
Exit the Pod:
```bash
exit
```