Guided solution (page 2)
-
List the available networks in the OpenStack cluster.
oc exec -n openstack openstackclient -- openstack network list

Sample output

[student@workstation ~]$ oc exec -n openstack openstackclient -- openstack network list
+--------------------------------------+-------------------------+--------------------------------------+
| ID                                   | Name                    | Subnets                              |
+--------------------------------------+-------------------------+--------------------------------------+
| 5526bfcf-a164-4a91-ad99-90bb5c41f500 | public                  | b5a9f748-df82-436a-b8e3-14912e258b5d |
| d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce | scenario-bfx021-network | 89db0de7-6d5a-498a-948d-2e1ab26185f6 |
| dded9e02-20c0-49ef-85aa-21b5e091e572 | private                 | 3e903f93-7638-41f0-96f2-538088a16aca |
+--------------------------------------+-------------------------+--------------------------------------+
[student@workstation ~]$
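Optionally, you can store the UUID of the affected network in a shell variable so you do not have to copy it by hand into the grep commands later. This is a convenience sketch only; the NET_ID variable name is arbitrary, and -f value -c id are standard python-openstackclient output formatter options.
# Capture the network UUID for later reuse (optional):
NET_ID=$(oc exec -n openstack openstackclient -- openstack network show scenario-bfx021-network -f value -c id)
echo ${NET_ID}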
-
Now list the ports for the network associated with the instance.
oc exec -n openstack openstackclient -- openstack port list --network scenario-bfx021-network --long

Sample output

[student@workstation ~]$ oc exec -n openstack openstackclient -- openstack port list --network scenario-bfx021-network --long
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------+--------+-----------------+--------------------------+------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                               | Status | Security Groups | Device Owner             | Tags |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------+--------+-----------------+--------------------------+------+
| 5fdc2cfb-64a4-463b-b8b3-442aeed9e913 |      | fa:16:3e:44:dd:4b | ip_address='192.168.140.1', subnet_id='89db0de7-6d5a-498a-948d-2e1ab26185f6'    | ACTIVE | None            | network:router_interface |      |
| e123fcf9-74bc-416e-a627-b11b2b630c62 |      | fa:16:3e:e6:a9:e3 | ip_address='192.168.140.180', subnet_id='89db0de7-6d5a-498a-948d-2e1ab26185f6'  | ACTIVE | None            | compute:nova             |      |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------+--------+-----------------+--------------------------+------+
[student@workstation ~]$
-
List all the ports, including the device owner and device ID columns.
oc exec -n openstack openstackclient -- openstack port list --network scenario-bfx021-network --long -c id -c device_owner -c device_id

Sample output

[student@workstation ~]$ oc exec -n openstack openstackclient -- openstack port list --network scenario-bfx021-network --long -c id -c device_owner -c device_id
+--------------------------------------+--------------------------+-----------+
| ID                                   | Device Owner             | device_id |
+--------------------------------------+--------------------------+-----------+
| 5fdc2cfb-64a4-463b-b8b3-442aeed9e913 | network:router_interface | None      |
| e123fcf9-74bc-416e-a627-b11b2b630c62 | compute:nova             | None      |
+--------------------------------------+--------------------------+-----------+
[student@workstation ~]$
-
You can see that there is no port with ovn_meta as its device ID: the metadata port for this network is missing.
-
Each network needs to have this port; it backs the metadata service for the instances on that network.
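For comparison, you can run the same kind of port listing against a network that is behaving normally, for example the private network from the earlier output (assuming it is unaffected). On a healthy network you would expect an additional entry for the metadata port; the exact values, such as a device ID of the form ovnmeta-<network-uuid>, are the usual ML2/OVN convention and are an assumption here rather than something shown in this exercise.
# Hedged check: what the port list of an unaffected network looks like.
oc exec -n openstack openstackclient -- openstack port list --network private --long -c id -c device_owner -c device_id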
-
To fix this problem, the Neutron API container provides a utility called neutron-ovn-db-sync-util, which compares Neutron's MySQL database with the OVN database and brings the two back in sync.
-
The utility supports multiple sync modes, such as log and repair.
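Once you are attached to the neutron-api pod in the next step, you can confirm which sync modes your version of the tool accepts by checking its built-in help. The --help option is standard for oslo.config-based tools; the exact wording and layout of the help text may differ between releases, so treat this as an optional check.
# Show the description of the sync mode option (optional):
neutron-ovn-db-sync-util --help | grep -A 3 ovn-neutron_sync_mode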
-
Use the OpenShift client to attach to the neutron-api pod.
oc rsh -n openstack $(oc get pods -n openstack -l service=neutron -o name)

Sample output

[student@workstation ~]$ oc rsh -n openstack $(oc get pods -n openstack -l service=neutron -o name)
Defaulted container "neutron-api" out of: neutron-api, neutron-httpd
sh-5.1$
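If you prefer to run single commands without an interactive shell, oc exec against the same pod also works; the -c neutron-api flag selects the API container explicitly instead of relying on the default. This is optional, and the rest of the exercise assumes the interactive rsh shell shown above.
# Optional non-interactive alternative (example command only):
oc exec -n openstack -c neutron-api $(oc get pods -n openstack -l service=neutron -o name) -- ls /etc/neutron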
-
Within the pod, run neutron-ovn-db-sync-util in log mode.
neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode log --debug | tee /tmp/neutron-ovn-db-sync-util.log

Make sure to run the above command within the pod.
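Because --debug makes the log very verbose, it can help to get a quick count of the warnings the sync found before reading the full file; grep -c is standard, and the WARNING string matches the log lines shown in the next step.
# Count warning lines in the sync log (optional):
grep -c WARNING /tmp/neutron-ovn-db-sync-util.log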
-
Check the tool's logs in the pod to look for messages related to the instance's network.
less /tmp/neutron-ovn-db-sync-util.log
grep network-id /tmp/neutron-ovn-db-sync-util.log
Replace network-id in the above command with the UUID of scenario-bfx021-network.

Sample output
sh-5.1$ grep d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce /tmp/neutron-ovn-db-sync-util.log
2025-06-26 10:43:01.258 60 DEBUG neutron.services.segments.db [None req-7e5209ae-f072-4319-a91e-190156dbbdb5 - - - - - -] neutron.services.segments.plugin.Plugin method get_segments called with arguments (<neutron_lib.context.Context object at 0x7f54ab6077c0>,) {'filters': {'network_id': ['d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce']}} wrapper /usr/lib/python3.9/site-packages/oslo_log/helpers.py:65
2025-06-26 10:43:01.635 60 WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync [None req-7e5209ae-f072-4319-a91e-190156dbbdb5 - - - - - -] Missing metadata port found in Neutron for network d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce
sh-5.1$
+ Observe the warning Missing metadata port found in Neutron for network, which confirms that the metadata port is missing.
-
Before running the neutron-ovn-db-sync-util command with the repair option, it is recommended to disable access to the Neutron API. This ensures that no new changes are made to the Neutron database while the neutron-ovn-db-sync-util tool is running. For example, you can achieve this by adding a custom NetworkPolicy.
[student@workstation ~]$ echo "apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress-to-neutron-api
spec:
  podSelector:
    matchLabels:
      service: neutron
  policyTypes:
  - Ingress
  ingress: []
" | oc apply -f -
networkpolicy.networking.k8s.io/deny-ingress-to-neutron-api created
-
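Before starting the repair run, you can verify that the policy exists and that it selects the Neutron pods. oc get and oc describe are standard OpenShift client commands; the -n openstack flag assumes the policy was created in the openstack namespace, which is the current project in this exercise.
# Confirm the NetworkPolicy is in place (optional):
oc get networkpolicy -n openstack deny-ingress-to-neutron-api
oc describe networkpolicy -n openstack deny-ingress-to-neutron-api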
Now run the neutron-ovn-db-sync-util command with the repair option.
sh-5.1$ neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair --debug | tee /tmp/neutron-ovn-db-sync-util.log
-
Check the logs again to confirm that the tool repaired the missing metadata port; look for the message Creating missing metadata port in Neutron and OVN for network.
grep network-id /tmp/neutron-ovn-db-sync-util.log

Replace network-id in the above command with the UUID of scenario-bfx021-network.

Sample output
sh-5.1$ grep d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce /tmp/neutron-ovn-db-sync-util.log
2025-06-26 10:54:48.906 72 DEBUG neutron.services.segments.db [None req-1f2b416c-0b80-4387-9940-b5b14194c650 - - - - - -] neutron.services.segments.plugin.Plugin method get_segments called with arguments (<neutron_lib.context.Context object at 0x7f258e1467c0>,) {'filters': {'network_id': ['d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce']}} wrapper /usr/lib/python3.9/site-packages/oslo_log/helpers.py:65
2025-06-26 10:54:49.210 72 WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync [None req-1f2b416c-0b80-4387-9940-b5b14194c650 - - - - - -] Missing metadata port found in Neutron for network d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce
2025-06-26 10:54:49.211 72 WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync [None req-1f2b416c-0b80-4387-9940-b5b14194c650 - - - - - -] Creating missing metadata port in Neutron and OVN for network d806b6c9-ee50-4cdf-9a79-2f58f3f6b8ce
-
Exit from the pod's shell session to return to the workstation VM's shell.
-
Remove the previously created NetworkPolicy to restore access to the Neutron API.
[student@workstation ~]$ oc delete networkpolicy deny-ingress-to-neutron-api
networkpolicy.networking.k8s.io/deny-ingress-to-neutron-api deleted
-
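With API access restored, you can re-run the port listing from earlier to confirm that the repair created the missing metadata port for the network. The exact device owner and device ID values reported for the new port depend on the release, so they are not reproduced here.
# Re-check the port list of the affected network (optional verification):
oc exec -n openstack openstackclient -- openstack port list --network scenario-bfx021-network --long -c id -c device_owner -c device_id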
Log in to the compute node using SSH.
-
Verify that the metadata resources that were missing are now available after the DB sync.
ip netns
podman ps | grep haproxy
-
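If you want more targeted checks than scanning the full listings, the metadata namespace for the repaired network is normally named ovnmeta-<network-uuid> under ML2/OVN, and podman can filter containers by name. The namespace naming convention is an assumption about this environment rather than something shown above.
# Look for the OVN metadata namespace (naming convention assumed):
ip netns | grep ovnmeta
# List only the haproxy containers that serve metadata:
podman ps --filter name=haproxy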
Restart the instance to fetch metadata on boot.
oc exec -n openstack openstackclient -- openstack server reboot scenario-bfx021-vm

If you did not receive any output in the previous step, re-run those commands after the VM has been rebooted.
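After the reboot, you can confirm from inside the instance that the metadata service is reachable. The 169.254.169.254 address is the standard metadata endpoint; how you get a shell on the instance (console or SSH) depends on the lab setup and is not covered here.
# From a shell inside the instance (access method is an assumption):
curl http://169.254.169.254/openstack/latest/meta_data.json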