Gsd-skill-creator openstack-neutron
OpenStack Neutron software-defined networking service. Provides network abstraction for cloud instances including security groups, floating IPs, DHCP, L3 routing, ML2 plugin architecture with OVN/OVS backends, network namespaces, provider and tenant networks, VXLAN/VLAN/flat network types, and port management. Use for deploying, configuring, operating, and troubleshooting OpenStack networking.
```bash
# Clone the full repo
git clone https://github.com/Tibsfox/gsd-skill-creator

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/Tibsfox/gsd-skill-creator "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/openstack/neutron" ~/.claude/skills/tibsfox-gsd-skill-creator-openstack-neutron \
  && rm -rf "$T"
```
skills/openstack/neutron/SKILL.md

OpenStack Neutron -- Software-Defined Networking
Neutron is OpenStack's networking service and the most complex component in the stack. It provides the network abstraction layer that connects every instance, container, and service in the cloud. Where physical networking uses cables, switches, and routers, Neutron virtualizes all of these into software constructs that operators manage through APIs.
Architecture
Neutron uses the ML2 (Modular Layer 2) plugin architecture, which separates the network model from the mechanism that implements it. The ML2 plugin supports multiple mechanism drivers -- the two primary backends for Kolla-Ansible deployments are:
- OVN (Open Virtual Network): The recommended backend for new deployments. OVN provides distributed virtual routing, native DHCP, and security group implementation without requiring separate agents. It uses a northbound/southbound database architecture for state management.
- OVS (Open vSwitch): The legacy backend that uses separate agents for L3 routing, DHCP, and metadata. Each function runs in its own network namespace. More mature but more complex operationally.
Network types supported: flat (untagged), VLAN (802.1Q tagged), VXLAN (overlay tunnels), GRE (generic routing encapsulation). Single-node deployments typically use flat for provider networks and VXLAN for tenant networks.
The agent model (OVS backend): Neutron runs multiple agents --
neutron-openvswitch-agent (L2 connectivity), neutron-l3-agent (routing and NAT), neutron-dhcp-agent (IP assignment), neutron-metadata-agent (instance metadata). Each agent manages its domain through network namespaces. OVN consolidates these into ovn-controller on each node.
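The agent-to-namespace mapping above is mechanical, which makes it easy to encode. A small sketch (the `ns_owner` helper and the sample namespace IDs are illustrative, not part of any Neutron tooling):

```bash
# Map an OVS-backend namespace name to the agent that owns it, using the
# qdhcp-/qrouter- naming convention described above.
ns_owner() {
  case "$1" in
    qdhcp-*)   echo "neutron-dhcp-agent" ;;   # DHCP namespaces
    qrouter-*) echo "neutron-l3-agent" ;;     # router namespaces
    *)         echo "unknown" ;;
  esac
}

ns_owner "qrouter-3fa85f64"   # -> neutron-l3-agent
ns_owner "qdhcp-9b2c1d0e"     # -> neutron-dhcp-agent
```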
Deploy
Kolla-Ansible Configuration
Key settings in `globals.yml`:

```yaml
# Backend selection (choose one)
neutron_plugin_agent: "ovn"            # Recommended for new deployments
# neutron_plugin_agent: "openvswitch"  # Legacy, more agents to manage

# External network interface (the physical NIC for provider networks)
neutron_external_interface: "eth1"     # Adjust to your hardware

# Provider networks (required for floating IPs and external access)
enable_neutron_provider_networks: "yes"

# DVR (Distributed Virtual Router) -- disable for single-node
enable_neutron_dvr: "no"

# Network types
neutron_tenant_network_types: "vxlan"
neutron_type_drivers: "flat,vlan,vxlan"
```
Network Bridge Configuration
The external bridge (`br-ex`) connects Neutron to the physical network:

```bash
# Verify the bridge exists after deployment
docker exec openvswitch_vswitchd ovs-vsctl show
# Should show br-ex with neutron_external_interface as a port

# For OVN: verify integration bridge
docker exec openvswitch_vswitchd ovs-vsctl show | grep br-int
```
Container Verification
```bash
# List Neutron containers
docker ps --format '{{.Names}}' | grep neutron

# Expected containers (OVN backend):
#   neutron_server, neutron_ovn_metadata_agent
#   Plus OVN containers: ovn_controller, ovn_northd, ovsdb-nb, ovsdb-sb

# Expected containers (OVS backend):
#   neutron_server, neutron_openvswitch_agent, neutron_l3_agent,
#   neutron_dhcp_agent, neutron_metadata_agent

# Check agent status
openstack network agent list
# All agents should show "alive" and "UP"
```
Configure
Provider Networks vs Tenant Networks
- Provider networks are mapped to physical network infrastructure. They provide external connectivity and floating IP pools. Created by admins only.
- Tenant networks are virtual overlay networks (VXLAN) that tenants create for their instances. Isolated from each other by default.
```bash
# Create a provider network (flat, mapped to physnet1)
openstack network create --share --external \
  --provider-physical-network physnet1 \
  --provider-network-type flat \
  provider-net

# Create a provider subnet with allocation pool
openstack subnet create --network provider-net \
  --subnet-range 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --allocation-pool start=192.168.1.100,end=192.168.1.200 \
  --dns-nameserver 8.8.8.8 \
  provider-subnet
```
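A common misconfiguration is an allocation pool that overlaps the gateway or inverts its bounds. A minimal sanity-check sketch for /24 subnets like the one above (`pool_ok` is a hypothetical helper using last-octet arithmetic, not a general CIDR validator):

```bash
# Check that a /24 allocation pool is ordered (start <= end) and that the
# gateway falls outside the pool. Last-octet arithmetic only -- assumes
# all three addresses sit in the same /24.
last_octet() { echo "${1##*.}"; }

pool_ok() {  # usage: pool_ok <gateway> <pool-start> <pool-end>
  local gw start end
  gw=$(last_octet "$1"); start=$(last_octet "$2"); end=$(last_octet "$3")
  [ "$start" -le "$end" ] && { [ "$gw" -lt "$start" ] || [ "$gw" -gt "$end" ]; }
}

pool_ok 192.168.1.1 192.168.1.100 192.168.1.200 && echo "pool ok"
```

Here the gateway (.1) sits below the pool (.100-.200), so the check passes; a gateway of 192.168.1.150 would fail it.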
Subnet Configuration
Key parameters for every subnet: CIDR range, gateway IP, DHCP allocation pool (avoid overlap with static IPs), DNS nameservers, and host routes.
```bash
# Create a tenant subnet with DHCP
openstack subnet create --network tenant-net \
  --subnet-range 10.0.0.0/24 \
  --gateway 10.0.0.1 \
  --dns-nameserver 8.8.8.8 \
  tenant-subnet
```
Router Configuration
Routers connect tenant networks to provider networks and provide SNAT for outbound traffic and DNAT for floating IPs.
```bash
# Create a router and set its external gateway
openstack router create main-router
openstack router set --external-gateway provider-net main-router
openstack router add subnet main-router tenant-subnet
```
Security Groups
Default security group policy: deny all ingress, allow all egress. Every instance gets the default security group unless overridden.
```bash
# Allow SSH and ICMP
openstack security group rule create --protocol tcp --dst-port 22 default
openstack security group rule create --protocol icmp default

# Allow HTTP/HTTPS
openstack security group rule create --protocol tcp --dst-port 80 default
openstack security group rule create --protocol tcp --dst-port 443 default
```
Floating IP Pool
Floating IPs are allocated from the provider network's allocation pool and associated with instance ports for external access.
```bash
# Create and assign a floating IP
openstack floating ip create provider-net
openstack server add floating ip my-instance <floating-ip>
```
MTU Configuration
VXLAN adds 50 bytes of overhead. If your physical MTU is 1500, tenant network MTU must be 1450 or less. Configure jumbo frames (MTU 9000) on the physical network to avoid fragmentation with overlays.
```yaml
# Set global MTU in globals.yml
# neutron_mtu: 1500  # Physical network MTU
# Tenant networks auto-calculate: physical_mtu - overlay_overhead
```
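The auto-calculation reduces to simple subtraction. A sketch using the per-encapsulation overhead figures cited for IPv4 underlays (50 bytes for VXLAN, 42 for GRE; flat and VLAN provider networks keep the physical MTU):

```bash
# Tenant-network MTU per network type, derived from the physical MTU.
phys_mtu=1500
for entry in flat:0 vlan:0 gre:42 vxlan:50; do
  nettype=${entry%%:*}    # network type
  overhead=${entry##*:}   # encapsulation overhead in bytes
  echo "$nettype: $(( phys_mtu - overhead ))"
done
# the vxlan line prints 1450 -- matching the 1500-byte example above
```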
Port Security and Allowed Address Pairs
Port security prevents IP/MAC spoofing. Disable only when required (e.g., for load balancers, VPN gateways).
```bash
# Disable port security on a specific port
openstack port set --no-security-group --disable-port-security <port-id>

# Allow additional IP/MAC pairs on a port
openstack port set --allowed-address ip-address=10.0.0.0/24 <port-id>
```
Operate
Network and Subnet Management
```bash
# List networks and subnets
openstack network list
openstack subnet list

# Show detailed network info
openstack network show <network-name>

# Delete a network (must remove all ports first)
openstack port list --network <network-name>
openstack port delete <port-id>
openstack network delete <network-name>
```
Floating IP Management
```bash
# List floating IPs
openstack floating ip list

# Disassociate and release
openstack server remove floating ip <server> <floating-ip>
openstack floating ip delete <floating-ip>
```
Security Group Management
```bash
# List security groups and their rules
openstack security group list
openstack security group rule list <group-name>

# Create a custom security group
openstack security group create web-servers --description "HTTP/HTTPS access"
openstack security group rule create --protocol tcp --dst-port 80 web-servers
openstack security group rule create --protocol tcp --dst-port 443 web-servers
```
Network Namespace Debugging (OVS backend)
```bash
# List all network namespaces
ip netns list
# Format: qdhcp-<network-id>, qrouter-<router-id>

# Execute commands inside a namespace
ip netns exec qrouter-<router-id> ip addr show
ip netns exec qrouter-<router-id> iptables -t nat -L -n -v

# Check DHCP namespace
ip netns exec qdhcp-<network-id> ps aux | grep dnsmasq
```
Port Diagnostics
```bash
# Show port details including binding status
openstack port show <port-id>

# Check port binding
openstack port show <port-id> -c binding_vif_type -c binding_host_id
# binding_vif_type should be "ovs" or "ovn" -- "binding_failed" means trouble
```
QoS Policies
```bash
# Create a bandwidth limit policy
openstack network qos policy create bw-limiter
openstack network qos rule create --type bandwidth-limit \
  --max-kbps 10000 --max-burst-kbits 1000 bw-limiter

# Apply to a port
openstack port set --qos-policy bw-limiter <port-id>
```
Troubleshoot
Instance Has No Network Connectivity
Symptoms: Instance boots but cannot reach its gateway, no IP assigned, or no connectivity to other instances.
Diagnostic sequence:
- Check port binding: `openstack port list --server <instance>` -- look at `binding_vif_type`. If `binding_failed`, the mechanism driver could not wire the port. Check neutron-server and agent logs.
- Check security groups: `openstack port show <port-id> -c security_group_ids`. Ensure rules allow the traffic. Default denies all ingress.
- Check DHCP: `openstack port show <port-id> -c fixed_ips`. If an IP is assigned but the instance does not have it, DHCP may have failed. Check the DHCP agent or OVN DHCP options.
- Check network namespace (OVS): `ip netns exec qdhcp-<net-id> ping <instance-ip>`. If this works, the issue is between the namespace and the instance (OVS flows).
- Check OVS flows: `docker exec openvswitch_vswitchd ovs-ofctl dump-flows br-int | grep <port-tag>`. Missing flows indicate agent synchronization issues.
- Check OVN (if applicable): `docker exec ovn_northd ovn-nbctl show` and `ovn-sbctl show` to verify logical switch and port bindings.
Floating IP Not Reachable
Symptoms: Floating IP assigned but not pingable or accessible from external network.
Diagnostic sequence:
- Check router gateway: `openstack router show <router>` -- verify `external_gateway_info` is set to the provider network.
- Check SNAT/DNAT rules (OVS): `ip netns exec qrouter-<router-id> iptables -t nat -L -n -v`. Look for DNAT rules mapping the floating IP to the fixed IP.
- Check external bridge: `docker exec openvswitch_vswitchd ovs-vsctl show` -- verify `br-ex` exists and the physical interface is attached.
- Check ARP: From the external network, `arping <floating-ip>`. If no response, the L3 agent is not responding for this IP.
- Check security groups: Floating IP traffic must pass through security group rules on the instance port. Ensure ICMP/SSH/HTTP is allowed.
- Check OVN gateway chassis: `docker exec ovn_northd ovn-nbctl lr-nat-list <router>` to verify NAT rules exist.
DHCP Failures
Symptoms: Instance boots without an IP address or gets the wrong IP.
Diagnostic sequence:
- Check agent status: `openstack network agent list | grep dhcp`. Agent must be alive.
- Check namespace (OVS): `ip netns exec qdhcp-<net-id> ps aux | grep dnsmasq`. The dnsmasq process should be running.
- Check lease file: `ip netns exec qdhcp-<net-id> cat /var/lib/neutron/dhcp/<net-id>/leases`. Verify the instance MAC is listed.
- Check subnet DHCP: `openstack subnet show <subnet> -c enable_dhcp`. Must be `True`.
- Check port DHCP options: `openstack port show <port-id> -c extra_dhcp_opts`.
- OVN DHCP: `docker exec ovn_northd ovn-nbctl list DHCP_Options` to verify DHCP options are programmed.
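When reading the lease file by hand, dnsmasq stores one lease per line as `expiry MAC IP hostname client-id`. A small extraction sketch (`lease_ip` and the sample lease line are illustrative; in practice pipe the real file out of the qdhcp namespace):

```bash
# Print the IP leased to a given MAC from dnsmasq lease data on stdin.
lease_ip() { awk -v mac="$1" '$2 == mac { print $3 }'; }

sample='1700000000 fa:16:3e:aa:bb:cc 10.0.0.5 host-10-0-0-5 *'
echo "$sample" | lease_ip fa:16:3e:aa:bb:cc   # -> 10.0.0.5
```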
Security Group Rules Not Applying
Symptoms: Traffic that should be allowed is blocked, or traffic that should be blocked passes through.
Diagnostic sequence:
- Verify rules: `openstack security group rule list <group>`. Check direction (ingress/egress), protocol, port range, and remote IP prefix.
- Check port security: `openstack port show <port-id> -c port_security_enabled`. If disabled, security groups are bypassed entirely.
- Check OVS flows (OVS backend): `docker exec openvswitch_vswitchd ovs-ofctl dump-flows br-int | grep <port-tag>`. Stale flows may not reflect current rules. Restart the OVS agent to force a resync.
- Check conntrack: Stateful rules track connections. A rule change does not affect existing connections. Restart the instance or flush conntrack entries.
- OVN ACLs: `docker exec ovn_northd ovn-nbctl acl-list <logical-switch>` to verify ACL rules match security group intent.
Network Creation Fails
Symptoms: `openstack network create` returns an error about VLAN IDs, provider network, or type driver.
Diagnostic sequence:
- Check type drivers: Verify `neutron_type_drivers` in `globals.yml` includes the requested type.
- VLAN range exhaustion: Check `ml2_conf.ini` for `network_vlan_ranges`. If the range is exhausted, extend it or clean up unused networks.
- Provider network misconfigured: Verify the `physnet` name matches between `ml2_conf.ini` (`flat_networks`, `network_vlan_ranges`) and `bridge_mappings` in the OVS agent config.
- VXLAN VNI range: Check `vni_ranges` in ML2 config. Default range is large (1:65535) but can be exhausted in heavily used environments.
MTU / Fragmentation Issues
Symptoms: Large packets fail, SSH works but SCP stalls, or HTTP transfers hang after initial handshake.
Diagnostic sequence:
- Check MTU chain: Physical NIC MTU minus VXLAN overhead (50 bytes) must equal tenant network MTU. If physical is 1500, tenant must be 1450.
- Test with ping: `ping -M do -s 1400 <target>` (do not fragment). Reduce size until it works to find the effective MTU.
- Check path MTU discovery: Verify ICMP type 3 code 4 (fragmentation needed) is not blocked by security groups or firewalls.
- Jumbo frames: If the physical network supports MTU 9000, set `neutron_mtu: 9000` in `globals.yml` to eliminate VXLAN overhead issues.
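The ping payload sizes above follow from fixed header arithmetic: `ping -s N` sends N payload bytes, and the on-wire IPv4 packet adds 8 bytes of ICMP header plus 20 bytes of IP header (without options). A quick sketch:

```bash
# On-wire packet size probed by `ping -M do -s $payload`.
payload=1422
icmp_hdr=8
ipv4_hdr=20
wire=$(( payload + icmp_hdr + ipv4_hdr ))
echo "$wire"   # 1450 -- the largest probe that fits a VXLAN tenant MTU
```

So `-s 1472` probes a full 1500-byte physical MTU, while `-s 1422` is the largest probe a 1450-byte VXLAN tenant network can carry unfragmented.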
OVN/OVS Specific Issues
Symptoms: Networking intermittently fails, port binding fails, or flows are stale.
Diagnostic sequence:
- OVS status: `docker exec openvswitch_vswitchd ovs-vsctl show` -- check bridge configuration, port attachments, error states.
- OVN database sync: `docker exec ovn_northd ovn-nbctl show` (northbound) and `docker exec ovn_controller ovn-sbctl show` (southbound). Compare expected vs actual logical topology.
- OVS flow table: `docker exec openvswitch_vswitchd ovs-ofctl dump-flows br-int` -- look for flows with zero packet counts (unused) or unusually high counts (possible loop).
- OVN controller connectivity: `docker exec ovn_controller ovn-appctl connection-status` -- should report `connected`. If not, check ovsdb-server connectivity.
- Database compaction: Large OVN databases can slow operations. Check database size and compact if needed: `docker exec ovn_northd ovsdb-tool compact /var/lib/openvswitch/ovnnb_db.db`.
Integration Points
- Keystone: All Neutron API calls require Keystone authentication. Neutron registers the `network` service and endpoint in the Keystone catalog. Service user `neutron` authenticates against Keystone for internal operations.
- Metadata service: Neutron proxies the metadata service (169.254.169.254) to Nova's metadata API. The metadata agent (or OVN metadata agent) runs in the network namespace and forwards requests.
- Octavia (LBaaS): Load Balancer as a Service uses Neutron networks for VIP allocation, member connectivity, and health monitoring. Octavia creates ports on Neutron networks for its amphora instances.
- VPNaaS: VPN as a Service extends Neutron with IPsec VPN capabilities. It creates router-based VPN connections using the L3 agent infrastructure.
NASA SE Cross-References
| SE Phase | Neutron Activity | Reference |
|---|---|---|
| Phase B (Preliminary Design) | Design network topology: management, tenant, provider, and storage network separation. Select ML2 mechanism driver (OVN vs OVS). Plan VXLAN/VLAN segmentation. Define security group strategy. | SP-6105 SS 4.3-4.4 |
| Phase C (Final Design & Build) | Configure networking parameters. Set up external bridge. Configure MTU and network type drivers. Define provider network mappings. | SP-6105 SS 5.1 |
| Phase D (Integration & Test) | Verify network connectivity end-to-end: instance-to-instance, instance-to-external, floating IP reachability. Verify security group enforcement. Test DHCP assignment. Verify metadata service. | SP-6105 SS 5.2-5.3 |
| Phase E (Operations) | Day-2 network management: create/modify networks and subnets, manage floating IPs, update security groups, monitor agent health, debug connectivity issues, manage QoS policies. | SP-6105 SS 5.4-5.5 |