BaGPipe-BGP is a component of networking-bagpipe, used on compute nodes alongside the Neutron agent and the bagpipe extension of this agent.
It is a lightweight implementation of BGP VPNs (IP VPNs and E-VPNs), targeting deployments on compute nodes hosting VMs, in particular for OpenStack/KVM platforms.
The goal of BaGPipe-BGP is not to fully implement BGP specifications, but only the subset of specifications required to implement IP VPN VRFs and E-VPN EVIs (RFC4364 a.k.a RFC2547bis, RFC7432/draft-ietf-bess-evpn-overlay, and RFC4684).
BaGPipe-BGP is designed to use encapsulations over IP (such as MPLS-over-GRE or VXLAN), and thus does not require the use of LDP. Bare MPLS over Ethernet is also supported and can be used if compute nodes/routers have direct Ethernet connectivity.
BaGPipe-BGP has been designed to provide VPN (IP VPN or E-VPN) connectivity to local VMs running on an OpenStack compute node.
BaGPipe-BGP is typically driven via its HTTP REST interface, by the OpenStack Neutron agent extensions found in this package.
Moreover, BaGPipe-BGP can also be used standalone (in particular for testing purposes), for instance with VM tap interfaces or with veth interfaces to network namespaces (see below).
If you only want to test how to interconnect one compute node running bagpipe-bgp and an IP/MPLS router, you don’t need to setup a BGP Route Reflector.
However, using BaGPipe-BGP between compute nodes currently requires setting up a BGP Route Reflector (see BGP Implementation and Caveats). Typically, passive mode will have to be used for BGP peerings.
The term “BGP Route Reflector” refers to a BGP implementation that redistributes routes between iBGP peers (RFC4456).
When using bagpipe-bgp on more than one compute node, we thus need each instance of BaGPipe-BGP to be configured to peer with at least one route reflector (see Configuration).
We provide a tool that can be used to emulate a route reflector interconnecting two BaGPipe-BGP instances, typically for test purposes (see Fake RR).
For more than 2 compute nodes running BaGPipe-BGP, you will need a real BGP implementation supporting RFC4364 and BGP route reflection (and ideally also RFC4684); different options can be considered.
The default location of the bagpipe-bgp config file is /etc/bagpipe-bgp/bgp.conf.
It needs to be customized, at least for the following:

local_address
: the local address to use for BGP sessions and traffic encapsulation (can also be specified as an interface, e.g. “eth0”, in which case the IPv4 address of this interface will be used)

peers
: the list of BGP peers; it depends on the BGP setup that you have chosen (see above, BGP Route Reflection)

Example with two compute nodes, relying on the bagpipe fake route reflector:
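An illustrative sketch of the BGP section of bgp.conf for one of the two compute nodes; the IP addresses and AS number below are placeholders, and option names should be checked against the sample config file shipped with the package:

```ini
[BGP]
# address of this compute node (placeholder)
local_address=10.0.0.1
# address of the machine running the bagpipe fake route reflector (placeholder)
peers=10.0.0.3
my_as=64512
enable_rtc=True
```

The second compute node would use the same peers and my_as values, with its own local_address.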
Note well that the dataplane drivers proposed in the sample config file are dummy drivers that will not actually drive any dataplane state. To have traffic really forwarded into IP VPNs or E-VPNs, you need to select real dataplane drivers.
For instance, you can use the ovs dataplane driver for IP VPN, and the linux driver for E-VPN.
Note well that there are specific constraints or dependencies applying to dataplane drivers for IP VPNs:

ovs
: this driver can be used on most recent Linux kernels, but requires an OpenVSwitch with suitable MPLS code (OVS 2.4 to 2.6 was tested); it can do bare MPLS or MPLS-over-GRE (but see Caveats for MPLS-over-GRE); for bare MPLS, it requires the OVS bridge to be associated with an IP address, and VRF interfaces to be plugged into OVS prior to calling the BaGPipe-BGP API to attach them

linux
: this driver relies on the native MPLS stack of the Linux kernel; it currently requires a kernel 4.4+ and uses the pyroute2 module, which allows defining all state via Netlink rather than by executing ‘ip’ commands

For E-VPN, the linux driver is supported without any particular additional configuration; it simply requires a Linux kernel >= 3.10 (linux_vxlan.py).
If systemd init scripts are installed (see samples/systemd), bagpipe-bgp is typically started with: systemctl start bagpipe-bgp
It can also be started directly with the bagpipe-bgp command (use --help to see what parameters can be used).
By default, it outputs logs on stdout (captured by systemd if run under systemd).
If you choose to use our fake BGP Route Reflector (see BGP Route Reflection), you can start it either with the bagpipe-fakerr command, or, if you have startup scripts installed, with service bagpipe-fakerr start. Note that this tool requires the additional installation of the twisted python package.
There isn’t anything to configure; logs will be in syslog.
This tool is not a BGP implementation and simply plugs together two TCP connections face to face.
The bagpipe-rest-attach tool lets you exercise the REST API from the command line, to attach and detach interfaces from IP VPN VRFs and E-VPN EVIs.
See bagpipe-rest-attach --help.
This example assumes that there is a pre-existing tap interface ‘tap42’.
on compute node A, plug tap interface tap42, MAC de:ad:00:00:be:ef, IP 11.11.11.1 into an IP VPN VRF with route-target 64512:77:
bagpipe-rest-attach --attach --port tap42 --mac de:ad:00:00:be:ef --ip 11.11.11.1 --gateway-ip 11.11.11.254 --network-type ipvpn --rt 64512:77
on compute node B, plug tap interface tap56, MAC ba:d0:00:00:ca:fe, IP 11.11.11.2 into an IP VPN VRF with route-target 64512:77:
bagpipe-rest-attach --attach --port tap56 --mac ba:d0:00:00:ca:fe --ip 11.11.11.2 --gateway-ip 11.11.11.254 --network-type ipvpn --rt 64512:77
Note that this is a schoolbook example only; it will not actually work unless you use one of the two MPLS Linux dataplane drivers.
Note also that, assuming that VMs are behind these tap interfaces, these VMs will need a proper IP configuration. When BaGPipe-BGP is used standalone, no DHCP service is provided, and the IP configuration will have to be static.
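Under the hood, bagpipe-rest-attach wraps a call to the local REST API. The sketch below shows what an equivalent call could look like in Python; the endpoint name and payload field names are assumptions modeled on what the tool sends (check the REST API documentation for your version), and the vpn_instance_id value is a placeholder:

```python
import json
from urllib import request

def build_attach_payload(vpn_type, vpn_instance_id, port, mac, ip, gateway, rt):
    # Assumed payload shape for the attach call; field names are
    # illustrative, not guaranteed to match your bagpipe-bgp version.
    return {
        "vpn_type": vpn_type,              # "ipvpn" or "evpn"
        "vpn_instance_id": vpn_instance_id,
        "local_port": {"linuxif": port},   # e.g. a tap or veth interface
        "mac_address": mac,
        "ip_address": ip,
        "gateway_ip": gateway,
        "import_rt": [rt],
        "export_rt": [rt],
    }

def attach(payload, host="127.0.0.1", port=8082):
    # POST the payload to the (assumed) attach endpoint of the local daemon
    req = request.Request(
        "http://%s:%d/attach_localport" % (host, port),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)

# Values taken from the tap42 example above; "example-vrf" is a placeholder
payload = build_attach_payload(
    "ipvpn", "example-vrf", "tap42", "de:ad:00:00:be:ef",
    "11.11.11.1", "11.11.11.254", "64512:77")
# attach(payload)  # requires a running bagpipe-bgp daemon
```
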
In this example, the bagpipe-rest-attach tool will build a network namespace and a properly configured veth pair for you, and will plug one end of the veth pair into the VRF:
on compute node A, plug a netns interface with IP 12.11.11.1 into a new IP VPN VRF named “test”, with route-target 64512:78
bagpipe-rest-attach --attach --port netns --ip 12.11.11.1 --network-type ipvpn --vpn-instance-id test --rt 64512:78
on compute node B, plug a netns interface with IP 12.11.11.2 into a new IP VPN VRF named “test”, with route-target 64512:78
bagpipe-rest-attach --attach --port netns --ip 12.11.11.2 --network-type ipvpn --vpn-instance-id test --rt 64512:78
For this last example, assuming that you have configured bagpipe-bgp to use the ovs dataplane driver for IP VPN, you will actually be able to have traffic exchanged between the network namespaces:
ip netns exec test ping 12.11.11.2
PING 12.11.11.2 (12.11.11.2) 56(84) bytes of data.
64 bytes from 12.11.11.2: icmp_req=6 ttl=64 time=1.08 ms
64 bytes from 12.11.11.2: icmp_req=7 ttl=64 time=0.652 ms
In this example, similarly to the previous one, the bagpipe-rest-attach tool will build a network namespace and a properly configured veth pair for you, and will plug one end of the veth pair into the E-VPN instance:
on compute node A, plug a netns interface with IP 12.11.11.1 into a new E-VPN named “test2”, with route-target 64512:79
bagpipe-rest-attach --attach --port netns --ip 12.11.11.1 --network-type evpn --vpn-instance-id test2 --rt 64512:79
on compute node B, plug a netns interface with IP 12.11.11.2 into a new E-VPN named “test2”, with route-target 64512:79
bagpipe-rest-attach --attach --port netns --ip 12.11.11.2 --network-type evpn --vpn-instance-id test2 --rt 64512:79
For this last example, assuming that you have configured bagpipe-bgp to use the linux dataplane driver for E-VPN, you will actually be able to have traffic exchanged between the network namespaces:
ip netns exec test2 ping 12.11.11.2
PING 12.11.11.2 (12.11.11.2) 56(84) bytes of data.
64 bytes from 12.11.11.2: icmp_req=1 ttl=64 time=1.71 ms
64 bytes from 12.11.11.2: icmp_req=2 ttl=64 time=1.06 ms
The REST API (default port 8082) provides read-only troubleshooting information through the /looking-glass URL.
It can be accessed with a browser: e.g. http://10.0.0.1:8082/looking-glass or http://127.0.0.1:8082/looking-glass (a browser extension to nicely display JSON data is recommended).
It can also be accessed with the bagpipe-looking-glass utility:
# bagpipe-looking-glass
bgp: (...)
vpns: (...)
config: (...)
logs: (...)
summary:
  warnings_and_errors: 2
  start_time: 2014-06-11 14:52:32
  local_routes_count: 1
  BGP_established_peers: 0
  vpn_instances_count: 1
  received_routes_count: 0

# bagpipe-looking-glass bgp peers
* 192.168.122.1 (...)
  state: Idle

# bagpipe-looking-glass bgp routes
match:IPv4/mpls-vpn,*:
  * RD:192.168.122.101:1 12.11.11.1/32 MPLS:[129-B]:
      attributes:
        next_hop: 192.168.122.101
        extended_community: target:64512:78
      afi-safi: IPv4/mpls-vpn
      source: VRF 1 (...)
      route_targets:
        * target:64512:78
match:IPv4/rtc,*:
  * RTC<64512>:target:64512:78:
      attributes:
        next_hop: 192.168.122.101
      afi-safi: IPv4/rtc
      source: BGPManager (...)
match:L2VPN/evpn,*: -
The main components of BaGPipe-BGP are described below.
The engine dispatching events related to BGP routes is designed with a publish/subscribe pattern based on the principles in RFC4684. Workers (a worker can be a BGP peer or a local worker responsible for an IP VPN VRF) publish BGP VPN routes with specified Route Targets, and subscribe to the Route Targets that they need to receive. The engine takes care of propagating advertisement and withdrawal events between the workers, based on subscriptions and BGP semantics (e.g. no redistribution between BGP peer sessions).
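The dispatch logic above can be sketched as a minimal publish/subscribe engine; the class and method names below are illustrative assumptions, not BaGPipe-BGP's actual API:

```python
from collections import defaultdict

class Worker:
    """A route producer/consumer: a BGP peer or a local VRF worker."""
    def __init__(self, name):
        self.name = name
        self.received = []  # route events delivered to this worker

    def on_route_event(self, action, rt, route):
        self.received.append((action, rt, route))

class RouteTableManager:
    """Propagates route events between workers based on RT subscriptions."""
    def __init__(self):
        self.subs = defaultdict(set)  # route target -> set of workers

    def subscribe(self, worker, rt):
        self.subs[rt].add(worker)

    def advertise(self, source, rt, route):
        # deliver to all subscribers of this RT, except the source worker
        for worker in self.subs[rt]:
            if worker is not source:
                worker.on_route_event("advertise", rt, route)

rtm = RouteTableManager()
vrf = Worker("VRF test")
peer = Worker("BGP peer 192.168.122.1")
rtm.subscribe(vrf, "64512:78")
rtm.subscribe(peer, "64512:78")
rtm.advertise(vrf, "64512:78", "RD:192.168.122.101:1 12.11.11.1/32")
# the peer worker receives the event; the source VRF does not
```
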
The core engine does not do any BGP best path selection. For routes received from external BGP peers, best path selection happens in the VRF workers. For routes that local workers advertise, no best path selection is done, because two distinct workers will never advertise routes with the same BGP NLRI.
For implementation convenience, the design choice was made to use native Python threads and Queues to manage the workloads of the API, local workers, and BGP peers.
The BaGPipe-BGP service, as currently designed, does not persist information on VPNs (VRFs or EVIs) and the ports attached to them. On a restart, the component responsible for triggering the attachment of interfaces to VPNs can detect the restart of bagpipe-bgp and re-trigger these attachments.
The BGP protocol implementation reuses code from ExaBGP, but only the low-level classes for message encodings and connection setup.
Non-goals for this BGP implementation:
BaGPipe-BGP was designed to allow for a modular dataplane implementation. For each type of VPN (IP VPN, E-VPN) a dataplane driver is chosen through configuration. A dataplane driver is responsible for setting up forwarding state for incoming and outgoing traffic based on port attachment information and BGP routes.
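The contract between the core engine and a dataplane driver can be sketched as follows; the class and method names are hypothetical illustrations of the responsibilities described above, not the real driver API:

```python
from abc import ABC, abstractmethod

class DataplaneDriver(ABC):
    """Hypothetical per-VPN-type driver contract (names are illustrative)."""

    @abstractmethod
    def vif_plugged(self, mac, ip, linuxif, label):
        """Set up forwarding state when a local port is attached."""

    @abstractmethod
    def setup_dataplane_for_remote_endpoint(self, prefix, nexthop, label):
        """Program a forwarding entry for a received BGP VPN route."""

class DummyDriver(DataplaneDriver):
    """Like the sample config's dummy drivers: records calls, programs nothing."""
    def __init__(self):
        self.log = []

    def vif_plugged(self, mac, ip, linuxif, label):
        self.log.append(("plug", linuxif, label))

    def setup_dataplane_for_remote_endpoint(self, prefix, nexthop, label):
        self.log.append(("route", prefix, nexthop, label))

driver = DummyDriver()
# local port attachment (values from the examples above; label is illustrative)
driver.vif_plugged("de:ad:00:00:be:ef", "11.11.11.1", "tap42", 129)
# forwarding state for a route learned from a remote compute node
driver.setup_dataplane_for_remote_endpoint("11.11.11.2/32", "192.168.122.102", 130)
```

A real driver (ovs, linux) would implement the same two responsibilities by programming OVS flows or kernel MPLS/VXLAN state instead of logging.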
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.