[SB DB] OVN BGP Agent: Design of the BGP Driver with kernel routing¶
Purpose¶
The addition of a BGP driver enables the OVN BGP agent to expose virtual machine (VM) and load balancer (LB) IP addresses through the BGP dynamic protocol when those addresses are either associated with a floating IP (FIP) or belong to VMs booted or LBs created on a provider network. The same functionality is available on project networks when a special flag is set.
This document presents the design decision behind the BGP Driver for the Networking OVN BGP agent.
Overview¶
With the growing popularity of virtualized and containerized workloads, it is common to use pure Layer 3 spine and leaf network deployments in data centers. This practice reduces scaling complexity, shrinks failure domains, and limits broadcast traffic.
The Southbound driver for OVN BGP Agent is a Python-based daemon that runs on each OpenStack Controller and Compute node. The agent monitors the Open Virtual Network (OVN) southbound database for certain VM and floating IP (FIP) events. When these events occur, the agent notifies the FRR BGP daemon (bgpd) to advertise the IP address or FIP associated with the VM. The agent also triggers actions that route the external traffic to the OVN overlay. Because the agent uses a multi-driver implementation, you can configure the agent for the specific infrastructure that runs on top of OVN, such as OSP or Kubernetes and OpenShift.
Note
Note that this driver is only intended for the N/S traffic; the E/W traffic works exactly the same as before, i.e., VMs are connected through Geneve tunnels.
This design simplicity enables the agent to implement different drivers, depending on which OVN SB DB events are being watched (watcher examples at ovn_bgp_agent/drivers/openstack/watchers/) and which actions are triggered in reaction to them (driver examples at ovn_bgp_agent/drivers/openstack/XXXX_driver.py, implementing ovn_bgp_agent/drivers/driver_api.py).
A driver implements the support for BGP capabilities. It ensures that both VMs and LBs on provider networks, or with associated floating IPs, are exposed through BGP. In addition, VMs on tenant networks can also be exposed if the expose_tenant_networks configuration option is enabled.
To control which tenant networks are exposed, another option can be used: address_scopes. If it is not set, all the tenant networks are exposed, while if it is configured with a (set of) address scopes, only the tenant networks whose address scope matches are exposed.
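For illustration, a minimal bgp-agent.conf excerpt enabling these two options could look as follows (a sketch only; the address scope ID is the example value from the full sample shown in the Agent deployment section):

[DEFAULT]
expose_tenant_networks=True
address_scopes=2237917c7b12489a84de4ef384a2bcae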
A common driver API is defined exposing these methods:
expose_ip and withdraw_ip: expose or withdraw IPs for local OVN ports.
expose_remote_ip and withdraw_remote_ip: expose or withdraw IPs through another node when the VM or pods are running on a different node. For example, this is used for VMs on tenant networks where the traffic needs to be injected through the OVN router gateway port.
expose_subnet and withdraw_subnet: expose or withdraw subnets through the local node.
Proposed Solution¶
To support BGP functionality, the OVN BGP Agent includes a driver that performs the extra steps required to expose the IPs through BGP on the correct nodes and to steer the traffic between those nodes and the OVN overlay. To configure the OVN BGP agent to use the BGP driver, set the driver configuration option in the bgp-agent.conf file to ovn_bgp_driver.
The BGP driver requires a watcher to react to the BGP-related events. In this case, BGP actions are triggered by events on the Port_Binding and Load_Balancer OVN SB DB tables. The information in these tables is modified when VMs and LBs are created and deleted, and when FIPs are associated to and disassociated from them.
The agent then performs a set of actions to ensure those VMs are reachable through BGP:
Traffic between nodes, or BGP advertisement: the actions needed to expose the BGP routes and make sure all the nodes know how to reach the VM/LB IP on the relevant node.
Traffic within a node, or redirecting traffic to/from the OVN overlay: the actions needed to redirect the traffic between a VM and the OVN Neutron networks, either when traffic reaches the node where the VM is located or on its way out of the node.
The code for the BGP driver is located at ovn_bgp_agent/drivers/openstack/ovn_bgp_driver.py, and its associated watcher can be found at ovn_bgp_agent/drivers/openstack/watchers/bgp_watcher.py.
OVN SB DB Events¶
The watcher associated with the BGP driver detects the relevant events on the OVN SB DB and calls the driver functions to configure BGP and Linux kernel networking accordingly. The following events are watched and handled by the BGP watcher:
VMs or LBs created/deleted on provider networks
FIPs association/disassociation to VMs or LBs
VMs or LBs created/deleted on tenant networks (if the expose_tenant_networks configuration option is enabled, or if expose_ipv6_gua_tenant_networks is enabled to expose only IPv6 GUA ranges)

Note
If the expose_tenant_networks flag is enabled, the value of expose_ipv6_gua_tenant_networks does not matter, as all the tenant IPs are advertised.
The watcher defines two base event classes, PortBindingChassisEvent and OVNLBEvent, from which all the events watched for BGP inherit.
The BGP watcher reacts to the following events:
PortBindingChassisCreatedEvent: Detects when a port of type "" (empty double-quotes), virtual, or chassisredirect gets attached to the OVN chassis where the agent is running. This is the case for VM or amphora LB ports on the provider networks, VM or amphora LB ports on tenant networks with a FIP associated, and neutron gateway router ports (cr-lrps). It calls the expose_ip driver method to perform the needed actions to expose it.

PortBindingChassisDeletedEvent: Detects when a port of type "" (empty double-quotes), virtual, or chassisredirect gets detached from the OVN chassis where the agent is running. This is the case for VM or amphora LB ports on the provider networks, VM or amphora LB ports on tenant networks with a FIP associated, and neutron gateway router ports (cr-lrps). It calls the withdraw_ip driver method to perform the needed actions to withdraw the exposed BGP route.

FIPSetEvent: Detects when a Port_Binding entry of type patch gets its nat_addresses field updated (e.g., an action related to FIP NATing). When that happens, and the associated VM port is on the local chassis, the event is processed by the agent: the required IP rule gets created and its IP is exposed through BGP. It calls the expose_ip driver method, including the associated_port information, to perform the required actions.

FIPUnsetEvent: Same as the previous one, but when the nat_addresses field gets an IP deleted. It calls the withdraw_ip driver method to perform the required actions.

SubnetRouterAttachedEvent: Detects when a Port_Binding entry of type patch gets created. This means a subnet is attached to a router. In the expose_tenant_networks case, if the chassis is the one having the cr-lrp port for the router where the port is getting created, then the event is processed by the agent and the needed actions (IP rules and routes, and OVS rules) for exposing the IPs on that network are performed. This event calls the driver API expose_subnet. The same happens if expose_ipv6_gua_tenant_networks is used, but then the IPs are only exposed if they are IPv6 global.

SubnetRouterDetachedEvent: Same as SubnetRouterAttachedEvent, but for the deletion of the port. It calls withdraw_subnet.

TenantPortCreateEvent: Detects when a port of type "" (empty double-quotes) or virtual gets updated. If that port is not on a provider network, and the chassis where the event is processed has the LogicalRouterPort for the network and the OVN router gateway port to which the network is connected, then the event is processed and the actions to expose it through BGP are triggered. It calls expose_remote_ip because in this case the IPs are exposed through the node with the OVN router gateway port, instead of the node where the VM is located.

TenantPortDeleteEvent: Same as TenantPortCreateEvent, but for the deletion of the port. It calls withdraw_remote_ip.

OVNLBMemberUpdateEvent: This event is required to handle the OVN load balancers created on the provider networks. It detects when datapaths are added to or removed from the Load_Balancer entries. This happens when members are added or removed, which triggers the addition or deletion of their datapaths in the Load_Balancer table entry. The event is only processed on the nodes with the relevant OVN router gateway ports, because that is where the traffic needs to be exposed in order to be injected into the OVN overlay. OVNLBMemberUpdateEvent calls expose_ovn_lb_on_provider only when the second datapath is added: the first datapath belongs to the VIP on the provider network, while the second one belongs to the load balancer member. It calls withdraw_ovn_lb_on_provider when the second datapath is deleted, or when the entire load balancer is deleted (event type is ROW_DELETE).

Note
All the load balancer members are expected to be connected through the same router to the provider network.
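As a quick way to inspect the data these events are based on, the monitored OVN SB DB tables can be listed with the standard OVN client tools (assuming ovn-sbctl is installed on the node and pointing at the SB DB):

$ sudo ovn-sbctl list Port_Binding
$ sudo ovn-sbctl list Load_Balancer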
Driver Logic¶
The BGP driver is in charge of the networking configuration that ensures VMs and LBs on provider networks, or with FIPs, can be reached through BGP (N/S traffic). In addition, if the expose_tenant_networks flag is enabled, VMs on tenant networks are reachable too, although they are exposed through one of the network gateway chassis nodes instead of directly on the node where they are created. The same happens with expose_ipv6_gua_tenant_networks, but only for IPv6 GUA ranges. In addition, if the config option address_scopes is set, only the tenant networks with a matching address_scope are exposed.
To accomplish the network configuration and advertisement, the driver ensures:
VM and LB IPs are advertised on a node where the traffic can be injected into the OVN overlay, in this case either the node hosting the VM or the node where the router gateway port is scheduled (see the limitations subsection).
Once the traffic reaches the specific node, the traffic is redirected to the OVN overlay by leveraging kernel networking.
BGP Advertisement¶
The OVN BGP Agent (both SB and NB drivers) is in charge of triggering FRR (IP routing protocol suite for Linux which includes protocol daemons for BGP, OSPF, RIP, among others) to advertise/withdraw directly connected routes via BGP. To do that, when the agent starts, it ensures that:
The FRR local instance is reconfigured to leak routes for a new VRF. To do that it uses the vtysh shell, connecting to the existing FRR socket (--vty_socket option) and executing the next commands, passing them through a file (-c FILE_NAME option):

router bgp {{ bgp_as }}
  address-family ipv4 unicast
    import vrf {{ vrf_name }}
  exit-address-family

  address-family ipv6 unicast
    import vrf {{ vrf_name }}
  exit-address-family

router bgp {{ bgp_as }} vrf {{ vrf_name }}
  bgp router-id {{ bgp_router_id }}
  address-family ipv4 unicast
    redistribute connected
  exit-address-family

  address-family ipv6 unicast
    redistribute connected
  exit-address-family
There is a VRF created (the one leaked in the previous step), by default named bgp-vrf.
There is a dummy interface (by default named bgp-nic) associated to the previously created VRF device (see the sketch after this list).
ARP/NDP is enabled at the OVS provider bridges by adding an IP to them.
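As a minimal sketch, the VRF and dummy device setup is equivalent to the following iproute2 commands, assuming the default bgp-vrf and bgp-nic names and the VRF table id 10 used in the sample configuration later in this document (the agent performs these steps itself):

$ sudo ip link add bgp-vrf type vrf table 10   # VRF device backed by its own routing table
$ sudo ip link add bgp-nic type dummy          # dummy interface where exposed IPs are added
$ sudo ip link set bgp-nic master bgp-vrf      # enslave the dummy interface to the VRF
$ sudo ip link set bgp-vrf up
$ sudo ip link set bgp-nic up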
Then, to expose the VM/LB IPs as they are created (or upon initialization or re-sync), since the FRR configuration has the redistribute connected option enabled, the only action needed to expose an IP (or withdraw it) is to add it to (or remove it from) the bgp-nic dummy interface. The agent then relies on Zebra to do the BGP advertisement, as Zebra detects the addition/deletion of the IP on the local interface and advertises/withdraws the route:
$ ip addr add IPv4/32 dev bgp-nic
$ ip addr add IPv6/128 dev bgp-nic

Note
As we also want to be able to expose VMs connected to tenant networks (when the expose_tenant_networks or expose_ipv6_gua_tenant_networks configuration options are enabled), there is a need to expose the Neutron router gateway port (cr-lrp on OVN) so that the traffic to VMs on tenant networks is injected into the OVN overlay through the node that is hosting that port.
Traffic Redirection to/from OVN¶
Besides the VM/LB IP being exposed on a specific node (either the one hosting the VM/LB or the one with the OVN router gateway port), the OVN BGP Agent is in charge of configuring the Linux kernel networking and OVS so that the traffic can be injected into the OVN overlay, and vice versa. To do that, when the agent starts, it ensures that:
ARP/NDP is enabled on the OVS provider bridges by adding an IP to them.
There is a routing table associated to each OVS provider bridge (an entry is added to /etc/iproute2/rt_tables; see the sketch after this list).
If the provider network is a VLAN network, a VLAN device connected to the bridge is created, and it has ARP and NDP enabled.
Extra OVS flows at the OVS provider bridges are cleaned up.
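A hedged sketch of two of these start-up steps for a provider bridge br-ex (the 201 table id matches the sample agent log shown later in this document; the VLAN id is purely illustrative):

$ echo "201 br-ex" | sudo tee -a /etc/iproute2/rt_tables        # per-bridge routing table
$ sudo ip link add link br-ex name br-ex.100 type vlan id 100   # only for VLAN provider networks
$ sudo ip link set br-ex.100 up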
Then, either upon events or due to (re)sync (regularly or during start up), it:
Adds an IP rule so that traffic to the relevant IPs uses the routing table associated to the OVS provider bridge:

$ ip rule
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
*32000: from all to IP lookup br-ex*    # br-ex is the OVS provider bridge
*32000: from all to CIDR lookup br-ex*  # for VMs in tenant networks
32766:  from all lookup main
32767:  from all lookup default
Adds an IP route at the OVS provider bridge routing table so that the traffic is routed to the OVS provider bridge device:
$ ip route show table br-ex
default dev br-ex scope link
*CIDR via CR-LRP_IP dev br-ex*    # for VMs in tenant networks
*CR-LRP_IP dev br-ex scope link*  # for the VM in tenant network redirection
*IP dev br-ex scope link*         # IPs on provider or FIPs
Adds a static ARP entry for the OVN Distributed Gateway Ports (cr-lrps) so that the traffic is steered to OVN via br-int – this is because OVN does not reply to ARP requests outside its L2 network:
$ ip neigh
...
CR-LRP_IP dev br-ex lladdr CR-LRP_MAC PERMANENT
...
For IPv6, an NDP proxy is added instead of the static ARP entry, for the same reason:
$ ip -6 neigh add proxy CR-LRP_IP dev br-ex
Finally, to properly send the traffic out of the node, from the OVN overlay through kernel networking, the OVN BGP Agent adds a new flow at the OVS provider bridges so that the destination MAC address is changed to the MAC address of the OVS provider bridge (actions=mod_dl_dst:OVN_PROVIDER_BRIDGE_MAC,NORMAL):

$ sudo ovs-ofctl dump-flows br-ex
cookie=0x3e7, duration=77.949s, table=0, n_packets=0, n_bytes=0, priority=900,ip,in_port="patch-provnet-1" actions=mod_dl_dst:3a:f7:e9:54:e8:4d,NORMAL
cookie=0x3e7, duration=77.937s, table=0, n_packets=0, n_bytes=0, priority=900,ipv6,in_port="patch-provnet-1" actions=mod_dl_dst:3a:f7:e9:54:e8:4d,NORMAL
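For reference, an equivalent flow could be added by hand with ovs-ofctl as shown below (a sketch only; the agent installs these flows itself, and the in_port name and MAC address are the illustrative values from the dump above):

$ sudo ovs-ofctl add-flow br-ex "cookie=0x3e7,priority=900,ip,in_port=patch-provnet-1,actions=mod_dl_dst:3a:f7:e9:54:e8:4d,NORMAL"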
Driver API¶
The BGP driver needs to implement the driver_api.py interface with the following functions:
expose_ip: creates all the IP rules and routes, and the OVS flows, needed to redirect the traffic to the OVN overlay. It also ensures FRR exposes the required IP through BGP (see the sketch after this list).
withdraw_ip: removes the above configuration to withdraw the exposed IP.
expose_subnet: adds the kernel networking configuration (IP rules and routes) needed to ensure traffic can go from the node to the OVN overlay, and vice versa, for IPs within the tenant subnet CIDR.
withdraw_subnet: removes the above kernel networking configuration.
expose_remote_ip: exposes through BGP the tenant network IPs of VMs on the chassis hosting the OVN router gateway port for the router the VM is connected to. It ensures traffic destined to the VM IP arrives at this node by exposing the IP through BGP locally. The previous steps in expose_subnet ensure the traffic is redirected to the OVN overlay once on the node.
withdraw_remote_ip: removes the above steps to stop advertising the IP through BGP from the node.
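As a hedged illustration, for a VM with IP 172.24.4.226 on a provider network attached to br-ex (values taken from the examples elsewhere in this document), expose_ip roughly amounts to the following commands, which the agent runs itself:

$ ip addr add 172.24.4.226/32 dev bgp-nic                        # picked up by Zebra and advertised via BGP
$ ip rule add to 172.24.4.226/32 table br-ex priority 32000      # ingress traffic uses the br-ex routing table
$ ip route add 172.24.4.226/32 dev br-ex scope link table br-ex  # route it towards the provider bridge

For expose_remote_ip, roughly speaking, only the first step is performed on the node hosting the OVN router gateway port, since the kernel redirection for the tenant CIDR is already handled by expose_subnet.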
The driver API implements these additional methods for OVN load balancers on provider networks:
expose_ovn_lb_on_provider: adds the kernel networking configuration needed to ensure traffic is forwarded from the node to the OVN overlay and to expose the VIP through BGP.
withdraw_ovn_lb_on_provider: removes the above steps to stop advertising the load balancer VIP.
Agent deployment¶
The BGP mode (for both NB and SB drivers) exposes the VMs and LBs on provider networks or with FIPs, as well as VMs on tenant networks if the expose_tenant_networks or expose_ipv6_gua_tenant_networks configuration options are enabled.
The agent needs to be deployed on all the nodes where VMs can be created, as well as on the networker nodes (i.e., the nodes where OVN router gateway ports can be allocated):
For VMs and Amphora load balancers on provider networks or with FIPs, the IP is exposed on the node where the VM (or amphora) is deployed. Therefore the agent needs to be running on the compute nodes.
For VMs on tenant networks (with the expose_tenant_networks or expose_ipv6_gua_tenant_networks configuration options enabled), the agent needs to be running on the networker nodes. In OpenStack with OVN networking, the N/S traffic to the tenant VMs (without FIPs) needs to go through the networking nodes, more specifically the one hosting the Distributed Gateway Port (the chassisredirect OVN port, cr-lrp) connecting the provider network to the OVN virtual router. Hence, the VM IPs are advertised through BGP on that node, and from there the traffic follows the normal path (the Geneve tunnel) to the OpenStack compute node where the VM is located.
Similarly, for OVN load balancers the IPs are exposed on the networker node. In this case the ARP request for the VIP is answered by the OVN router gateway port, therefore the traffic needs to be injected into the OVN overlay at that point too. Consequently, the agent needs to be running on the networker nodes for OVN load balancers.
As an example of how to start the OVN BGP Agent on the nodes, see the commands below:
$ python setup.py install
$ cat bgp-agent.conf
# sample configuration that can be adapted based on needs
[DEFAULT]
debug=True
reconcile_interval=120
expose_tenant_networks=True
# expose_ipv6_gua_tenant_networks=True
# for SB DB driver
driver=ovn_bgp_driver
# for NB DB driver
#driver=nb_ovn_bgp_driver
bgp_AS=64999
bgp_nic=bgp-nic
bgp_vrf=bgp-vrf
bgp_vrf_table_id=10
ovsdb_connection=tcp:127.0.0.1:6640
address_scopes=2237917c7b12489a84de4ef384a2bcae

[ovn]
ovn_nb_connection = tcp:172.17.0.30:6641
ovn_sb_connection = tcp:172.17.0.30:6642

[agent]
root_helper=sudo ovn-bgp-agent-rootwrap /etc/ovn-bgp-agent/rootwrap.conf
root_helper_daemon=sudo ovn-bgp-agent-rootwrap-daemon /etc/ovn-bgp-agent/rootwrap.conf

$ sudo bgp-agent --config-dir bgp-agent.conf
Starting BGP Agent...
Loaded chassis 51c8480f-c573-4c1c-b96e-582f9ca21e70.
BGP Agent Started...
Ensuring VRF configuration for advertising routes
Configuring br-ex default rule and routing tables for each provider network
Found routing table for br-ex with: ['201', 'br-ex']
Sync current routes.
Add BGP route for logical port with ip 172.24.4.226
Add BGP route for FIP with ip 172.24.4.199
Add BGP route for CR-LRP Port 172.24.4.221
....
Note
If you only want to expose the IPv6 GUA tenant IPs, then remove the expose_tenant_networks option and add expose_ipv6_gua_tenant_networks=True instead.

Note
If you want to filter the tenant networks to be exposed by some specific address scopes, add the list of address scopes to the address_scopes=XXX option. If no filtering should be applied, just remove the line.
Note that the OVN BGP Agent operates under the following assumptions:
A dynamic routing solution, in this case FRR, is deployed and advertises/withdraws routes added to/deleted from certain local interfaces, in this case the ones associated to the VRF created to that end. As only VM and load balancer IPs need to be advertised, FRR needs to be configured with the proper filtering so that only /32 (or /128 for IPv6) IPs are advertised. A sample config for FRR is:
frr version 7.5
frr defaults traditional
hostname cmp-1-0
log file /var/log/frr/frr.log debugging
log timestamp precision 3
service integrated-vtysh-config
line vty

router bgp 64999
  bgp router-id 172.30.1.1
  bgp log-neighbor-changes
  bgp graceful-shutdown
  no bgp default ipv4-unicast
  no bgp ebgp-requires-policy

  neighbor uplink peer-group
  neighbor uplink remote-as internal
  neighbor uplink password foobar
  neighbor enp2s0 interface peer-group uplink
  neighbor enp3s0 interface peer-group uplink

  address-family ipv4 unicast
    redistribute connected
    neighbor uplink activate
    neighbor uplink allowas-in origin
    neighbor uplink prefix-list only-host-prefixes out
  exit-address-family

  address-family ipv6 unicast
    redistribute connected
    neighbor uplink activate
    neighbor uplink allowas-in origin
    neighbor uplink prefix-list only-host-prefixes out
  exit-address-family

ip prefix-list only-default permit 0.0.0.0/0
ip prefix-list only-host-prefixes permit 0.0.0.0/0 ge 32

route-map rm-only-default permit 10
  match ip address prefix-list only-default
  set src 172.30.1.1

ip protocol bgp route-map rm-only-default

ipv6 prefix-list only-default permit ::/0
ipv6 prefix-list only-host-prefixes permit ::/0 ge 128

route-map rm-only-default permit 11
  match ipv6 address prefix-list only-default
  set src f00d:f00d:f00d:f00d:f00d:f00d:f00d:0004

ipv6 protocol bgp route-map rm-only-default

ip nht resolve-via-default
The relevant provider OVS bridges are created and configured with a loopback IP address (e.g., 1.1.1.1/32 for IPv4), and proxy ARP/NDP is enabled on their kernel interface (see the sketch below).
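A hedged example of how this prerequisite could be fulfilled for a provider bridge br-ex (the loopback address is the example value above; the sysctl names are standard kernel knobs):

$ sudo ip addr add 1.1.1.1/32 dev br-ex
$ sudo sysctl -w net.ipv4.conf.br-ex.proxy_arp=1
$ sudo sysctl -w net.ipv6.conf.br-ex.proxy_ndp=1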
Limitations¶
The following limitations apply:
There is no API to decide what to expose: all VMs/LBs on provider networks, or with floating IPs associated with them, get exposed. For the VMs on tenant networks, the address_scopes flag should be used to filter which subnets to expose, and it should also be used to ensure there are no overlapping IPs.
There is no support for overlapping CIDRs, so this must be avoided, e.g., by using address scopes and subnet pools.
Network traffic is steered by kernel routing (IP routes and rules), therefore OVS-DPDK, where the kernel space is skipped, is not supported.
Network traffic is steered by kernel routing (IP routes and rules), therefore SR-IOV, where the hypervisor is skipped, is not supported.
In OpenStack with OVN networking the N/S traffic to the ovn-octavia VIPs on the provider or the FIPs associated to the VIPs on tenant networks needs to go through the networking nodes (the ones hosting the Distributed Router Gateway Ports, i.e., the chassisredirect cr-lrp ports, for the router connecting the load balancer members to the provider network). Therefore, the entry point into the OVN overlay needs to be one of those networking nodes, and consequently the VIPs (or FIPs to VIPs) are exposed through them. From those nodes the traffic follows the normal tunneled path (Geneve tunnel) to the OpenStack compute node where the selected member is located.