Creating a Linux Bridge Secondary Network for Virtual Machines
Versions tested: OpenShift 4.20, 4.21
A Linux bridge secondary network uses the Kubernetes NMState Operator to create a new Linux bridge device on the cluster nodes,
and the Multus CNI in conjunction with the bridge CNI plugin
to connect pods and VirtualMachines (VMs) to the bridge.
This Linux bridge runs in the nodes' network namespace, and acts as a Layer 2 network switch between pods or VMs running on the same node. Linux bridges can also be linked to a physical host interface to allow connections to external networks, including connections to VMs or pods running on other nodes in the cluster.
In short, you can use a Linux bridge secondary network if the following conditions apply:
- You do not intend to use it as a primary network (not currently possible using the bridge CNI plugin with OpenShift).
- You need to connect VMs to cluster-external networks (or the local/physical network without traversing a NAT gateway).
- You need to connect VirtualMachines to a VLAN. A Linux bridge NetworkAttachmentDefinition is the simplest way to connect VirtualMachines to a VLAN.
Prerequisites
- OpenShift 4.20+ cluster with the OpenShift Virtualization operator installed
- NMState Operator installed on the cluster
- Cluster admin access for creating NodeNetworkConfigurationPolicy and NetworkAttachmentDefinition resources
- CLI tools installed: oc, virtctl
- A physical network interface available on worker nodes (if connecting to external networks)
Steps
The process is broken down into the following steps:
- Create a NodeNetworkConfigurationPolicy to define the Linux bridge on the cluster nodes
- Create a NetworkAttachmentDefinition to expose the bridge network to pods and VMs
- Attach VirtualMachines to the Linux bridge secondary network
Create a Linux Bridge Node Network Configuration Policy
The first step is to define the Linux bridge on the cluster nodes using a NodeNetworkConfigurationPolicy (nncp).
All nncp objects are cluster-scoped, so they do not exist in a namespace.
You can, however, control which nodes the nncp applies to using key-value node labels.
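For example, a nodeSelector could be added to scope the policy to a subset of nodes. The following is a minimal sketch: spec.nodeSelector is part of the nmstate.io/v1 API, and the worker role label shown is the standard OpenShift node label, but adjust it to your own labeling scheme:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: lnbr0-bridge
spec:
  # Apply this policy only to nodes carrying the worker role label.
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces: []  # bridge definition omitted in this sketch
```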
Required Steps:
Copy and paste the following NodeNetworkConfigurationPolicy into a new file named lnbr0-bridge.nncp.yaml.
To connect to external networks, you’ll need to uncomment the last two lines, changing the device name to match the secondary or tertiary (if primary & secondary are bonded) device name on the node(s):
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
name: lnbr0-bridge
spec:
desiredState:
interfaces:
- name: lnbr0
description: layer2-only linux bridge with an optional port
type: linux-bridge
state: up
ipv4:
enabled: false
bridge:
options:
stp:
enabled: false
# Uncomment below and change the interface name
# to match the desired physical host port
#port:
#- name: ens1
- lnbr0 (shorthand for "linux bridge 0"): the interface name of the Linux bridge in this example. The name is otherwise arbitrary.
- ipv4: addressing is disabled, as the bridge does not need its own IP address unless acting as a gateway.
- options.stp.enabled: a boolean value enabling or disabling Spanning Tree Protocol (disabled in this example).
- port (commented): a list of network adapters (typically only one) to bind the bridge interface to.
- name (commented): the name of the physical host device that you're binding the bridge to. We used ens1 in our commented example, but yours may vary.
See the NMState API Guide for a full schema with field descriptions and examples.
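For reference, with the last two lines uncommented to uplink the bridge, the bridge section of the manifest would read as follows (a sketch; ens1 is an assumed device name and will likely differ on your nodes):

```yaml
  bridge:
    options:
      stp:
        enabled: false
    port:
    - name: ens1  # assumed physical host NIC; match your node's device name
```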
Once you’ve saved the file as lnbr0-bridge.nncp.yaml, go ahead and apply the nncp manifest to the OpenShift cluster:
oc create -f lnbr0-bridge.nncp.yaml
After a few moments, each applicable node in the cluster will have a new bridge interface named lnbr0 with the configured connection state.
To view this progress in action, you can view the NodeNetworkConfigurationEnactment (nnce) objects, which report each node's progress in applying the NodeNetworkConfigurationPolicy:
oc get nnce
When running the above command, you will see a status of Progressing for each node in the cluster until the enactment is complete and the status for each node changes to Available.
To confirm that the lnbr0 interface exists on the node(s), you can either view the NodeNetworkState of the cluster (which is akin to running ip addr show on each node), or start a debug session on a node.
To view the NodeNetworkState for all nodes in the cluster, fetch the nns resource, and optionally grep for the bridge interface:
oc describe nns | grep lnbr0
Like nmstate objects (which trigger the NMState Operator deployment), nncp objects are cluster-scoped, so no namespace is required; multiple nncp objects can coexist on the cluster.
Once the existence of the Linux bridge interface is confirmed on the node(s), proceed with creating a NetworkAttachmentDefinition.
Creating a Linux Bridge Network Attachment Definition
The next step is to create a NetworkAttachmentDefinition that provides Layer 2 networking to your pods/VMs.
The NetworkAttachmentDefinition is what allows pods/VMs within a specific project/namespace to connect to a secondary network.
An nncp object defining the bridge interface is not strictly required in order to create a NetworkAttachmentDefinition, but it must exist on the cluster before you attach a VirtualMachine to the bridge network. Otherwise, any VM with a bridge-based NIC will fail to boot.
First, create the namespace for your VMs if it doesn’t already exist:
oc create namespace bridge-demo
Then switch to that namespace:
oc project bridge-demo
Next, copy and paste the following NetworkAttachmentDefinition into a new file named lnbr0-bridge.net-attach-def.yaml.
Change the VLAN ID as required if you chose to bind a physical port in the nncp:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: lnbr0-bridge
namespace: bridge-demo
spec:
config: '{
"name": "lnbr0-bridge",
"cniVersion": "0.3.1",
"type": "bridge",
"bridge": "lnbr0",
"ipam": {},
"vlan": 1,
"macspoofchk": false,
"disableContainerInterface": false,
"portIsolation": false
}'
Each field from the above example is explained below (optional fields are shown with their respective default values):
- metadata.name: Kubernetes name of the NetworkAttachmentDefinition.
- metadata.namespace: should match the namespace that the pods/VMs are running in (bridge-demo in this example), but could exist in another namespace.
- spec.config: accepts a JSON string value with the following sub-fields:
  - "name": name of the configuration, which should ideally match metadata.name.
  - "cniVersion": currently, the only supported CNI version is "0.3.1".
  - "type" (required): must be set to "bridge" to use a Linux bridge interface.
  - "bridge" (required): name of the bridge interface defined on the nodes/hosts via the nncp definition.
  - "ipam" (unsupported): OpenShift Virtualization does not support IP address management using the bridge CNI plugin, so this field should be empty (Layer 2 only).
  - "vlan" (optional): desired VLAN ID of the interface (defaults to 1).
  - "macspoofchk" (optional): whether to enable MAC spoof checking, limiting traffic originating from the container to the MAC address of the interface (defaults to false).
  - "disableContainerInterface" (optional): sets the container network interface (veth peer) state down (defaults to false).
  - "portIsolation" (optional): whether to set isolation on the host interface, preventing containers from communicating with each other and enforcing communication only with the host or through the gateway (defaults to false).
More example configurations of the Linux bridge CNI plugin (including several examples which are unsupported in OpenShift Virtualization) are available in the bridge CNI plugin documentation as well as the official OpenShift documentation.
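For instance, attaching VMs to a tagged VLAN on the uplinked bridge only requires changing the vlan field. The following is a sketch: the VLAN ID 100 and the NAD name are illustrative, and the upstream switch port must trunk that VLAN for traffic to pass:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: lnbr0-bridge-vlan100  # hypothetical name for this variant
  namespace: bridge-demo
spec:
  config: '{
    "name": "lnbr0-bridge-vlan100",
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "lnbr0",
    "ipam": {},
    "vlan": 100
  }'
```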
Once the file is saved as lnbr0-bridge.net-attach-def.yaml (where net-attach-def is the valid k8s short-name), go ahead and apply the network-attachment-definition to the OpenShift cluster:
oc create -f lnbr0-bridge.net-attach-def.yaml
Verify that the network-attachment-definition was successfully created:
oc get net-attach-def lnbr0-bridge
With two of the three components in place, the last step is to attach VirtualMachines to the bridge network.
Attach VirtualMachines to the Secondary Network
In this step, you will create two new VirtualMachines which will be attached to the bridge as a secondary network.
To recap what’s been done so far, we have:
- installed the kubernetes-nmstate-operator and triggered deployment by creating an nmstate object (prior to this guide).
- applied a NodeNetworkConfigurationPolicy to create a Linux bridge device on the OpenShift cluster node(s).
- monitored the bridge deployment by viewing the NodeNetworkConfigurationEnactment while in progress.
- verified that the bridge exists on the nodes by viewing the NodeNetworkState object.
- created a (Multus) NetworkAttachmentDefinition which allows VirtualMachines to connect to the Linux bridge, to use either as a virtual switch or as a literal bridge to the host network (if bound to a physical device).
With the above points covered, proceed to Create the First VirtualMachine.
Create the First VirtualMachine
As before, in this section we will create a manifest for an object (of kind: VirtualMachine) using the provided yaml data, and then apply it to the cluster using oc apply.
Using the VirtualMachine manifest below, copy and paste the yaml contents into a new file named fedora-bridge-demo.vm.yaml:
The cloud-init password in this example is for demo purposes only. Always use a strong, unique password or SSH key authentication in production environments.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: fedora-bridge-demo
namespace: bridge-demo
labels:
app: fedora-bridge-demo
spec:
runStrategy: Always
dataVolumeTemplates:
- metadata:
name: fedora-bridge-demo-root
spec:
sourceRef:
kind: DataSource
name: fedora
namespace: openshift-virtualization-os-images
storage:
resources:
requests:
storage: 30Gi
instancetype:
name: u1.large
preference:
name: fedora
template:
metadata:
labels:
app: fedora-bridge-demo
spec:
domain:
devices:
disks:
- name: fedora-bridge-demo-root
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
- bridge: {}
name: nic-linux-bridge
networks:
- name: default
pod: {}
- multus:
networkName: lnbr0-bridge
name: nic-linux-bridge
volumes:
- name: fedora-bridge-demo-root
dataVolume:
name: fedora-bridge-demo-root
- name: cloudinitdisk
cloudInitNoCloud:
networkData: |
version: 2
ethernets:
enp2s0:
addresses:
- 192.168.50.10/24
userData: |
#cloud-config
user: fedora
password: fedora
chpasswd: { expire: False }
Reviewing specific fields from the above example (all field paths are relative to spec.template.spec):
- domain.devices.interfaces: lists two interfaces, the primary default and the secondary nic-linux-bridge.
  - bridge: one of two ways to connect a VM NIC to the host container (the other is IP masquerade). This uses a vm-to-container bridge, whereas the lnbr0 bridge connects container-to-node (completing the connection).
  - name: an arbitrary identifier which gets referred to later in the networks section.
- networks: list of networks to which the interfaces are attached.
  - name: the name of the bridge interface as defined in the interfaces field.
  - multus.networkName: the name of the NetworkAttachmentDefinition (NAD) providing pods/VMs access to the bridge network.
- volumes: there are two named volumes: one is the root dataVolume disk, and the other is the cloudinitdisk.
  - cloudInitNoCloud: informs KubeVirt that this is a cloud-init disk and to use the NoCloud data source.
    - networkData: configures a static IP (192.168.50.10/24) on the secondary interface (enp2s0).
    - ethernets: the secondary interface name enp2s0 follows KubeVirt's predictable naming convention for the second network interface. The primary interface (enp1s0) uses masquerade and gets its IP from the pod network.
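As an aside, if the bridge is uplinked to a physical network that provides DHCP, the cloud-init networkData could lease an address instead of setting one statically. This is a sketch only, assuming a DHCP server is reachable on the bridged segment:

```yaml
networkData: |
  version: 2
  ethernets:
    enp2s0:
      dhcp4: true  # lease an address from the external network's DHCP server
```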
With the VirtualMachine manifest saved as a file named fedora-bridge-demo.vm.yaml, go ahead and apply it to the OpenShift cluster:
oc apply -f fedora-bridge-demo.vm.yaml
Check the state of the VirtualMachine. It will take some time to clone the underlying DataVolume before the VM can be launched.
You can proceed with creating the second VM rather than wait until the VM reaches the Running state.
oc get vm fedora-bridge-demo
Create a Second VirtualMachine
The following VirtualMachine manifest changes the name (including labels and volume names) and IP address from the first VM, but is otherwise identical.
Copy and paste the provided yaml into a new file named fedora-bridge-demo2.vm.yaml:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: fedora-bridge-demo2
namespace: bridge-demo
labels:
app: fedora-bridge-demo2
spec:
runStrategy: Always
dataVolumeTemplates:
- metadata:
name: fedora-bridge-demo2-root
spec:
sourceRef:
kind: DataSource
name: fedora
namespace: openshift-virtualization-os-images
storage:
resources:
requests:
storage: 30Gi
instancetype:
name: u1.large
preference:
name: fedora
template:
metadata:
labels:
app: fedora-bridge-demo2
spec:
domain:
devices:
disks:
- name: fedora-bridge-demo2-root
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
- bridge: {}
name: nic-linux-bridge
networks:
- name: default
pod: {}
- multus:
networkName: lnbr0-bridge
name: nic-linux-bridge
volumes:
- name: fedora-bridge-demo2-root
dataVolume:
name: fedora-bridge-demo2-root
- name: cloudinitdisk
cloudInitNoCloud:
networkData: |
version: 2
ethernets:
enp2s0:
addresses:
- 192.168.50.20/24
userData: |
#cloud-config
user: fedora
password: fedora
chpasswd: { expire: False }
With the file saved as fedora-bridge-demo2.vm.yaml, apply this second VirtualMachine to the OpenShift cluster:
oc create -f fedora-bridge-demo2.vm.yaml
Verify that both VMs are running, and check the status of the network interfaces by inspecting each VirtualMachineInstance (vmi) directly:
oc get vm -n bridge-demo
oc get vmi fedora-bridge-demo -n bridge-demo -o jsonpath='{.status.interfaces}' | jq .
oc get vmi fedora-bridge-demo2 -n bridge-demo -o jsonpath='{.status.interfaces}' | jq .
With the second VM running and active, proceed with Testing.
Test the Linux Bridge Secondary Network
In our unmodified examples, the Linux bridge secondary network acts as a simple virtual switch, which (when not bridged to a physical node interface) spans across pods, VMs and namespaces within the same node.
The simplest way of testing connectivity across the Linux bridge is to test basic connectivity from one VirtualMachine to the other.
The following resources must exist in the bridge-demo namespace before you can proceed with testing:
- NetworkAttachmentDefinition: lnbr0-bridge
- VirtualMachine: fedora-bridge-demo (secondary NIC IP address: 192.168.50.10)
- VirtualMachine: fedora-bridge-demo2 (secondary NIC IP address: 192.168.50.20)
The simplest way to test is by connecting to the console on one VM (or optionally, both VMs) and pinging the other.
Access the console of the first VirtualMachine using the virtctl command and log in as user fedora with password fedora (hit ENTER if no login prompt appears):
virtctl console fedora-bridge-demo
Check the IP address of the VirtualMachine. The following command should return a static address of 192.168.50.10 for the enp2s0 interface:
ip addr show
Ping the fedora-bridge-demo2 (IP address: 192.168.50.20) VirtualMachine from the console of the first VM. If you see replies, then the secondary network is operational (hit CTRL+C to cancel the ping):
ping 192.168.50.20
To exit the VM console, hit CTRL+].
To clean up all of the resources from this exercise, switch to a different namespace and delete the bridge-demo namespace:
oc project default
oc delete namespace bridge-demo
Summary
In this tutorial, you learned:
- How to create a Linux bridge on cluster nodes using a NodeNetworkConfigurationPolicy with the NMState Operator
- How to create a NetworkAttachmentDefinition using the bridge CNI plugin to expose the bridge network to VMs
- How to configure VirtualMachines with both a primary masquerade interface and a secondary bridge interface
- How to assign static IP addresses on the bridge network using cloud-init networkData
- How to verify Layer 2 connectivity between VMs on the Linux bridge secondary network