BGP-based IP VPNs are widely used in the industry, especially by enterprises. This project aims at supporting the interconnection between L3VPNs and Neutron resources, i.e. Networks, Routers and Ports.
A typical use case is the following: a tenant already has a BGP IP VPN (a set of external sites) set up outside the datacenter, and wants to be able to trigger the establishment of connectivity between VMs and these VPN sites.
Another similar need is when E-VPN is used to provide an Ethernet interconnect between multiple sites.
This service plugin exposes an API to interconnect OpenStack Neutron ports, typically VMs, via the Networks and Routers they are connected to, with an L3VPN network as defined by [RFC4364] (BGP/MPLS IP Virtual Private Networks). The framework is generic and also supports E-VPN [RFC7432], which relies on the same protocol architecture as BGP/MPLS IP VPNs.
BGP-based VPNs allow a network operator to offer a VPN service to a VPN customer, delivering isolated connectivity between multiple sites of this customer.
Contrary to, for instance, IPsec or SSL-based VPNs, these VPNs are typically not built over the Internet, are most often not encrypted, and their creation is not in the hands of the end user.
Here is a reminder of how connectivity is defined between the sites of a VPN (VRFs).
In BGP-based VPNs, a set of identifiers called Route Targets is associated with a VPN; in the typical case they identify a VPN, but they can also be used to build other VPN topologies such as hub-and-spoke.
Each VRF (Virtual Routing and Forwarding) in a PE (Provider Edge router) imports/exports routes from/to Route Targets. If a VRF imports from a Route Target, the BGP IP VPN routes tagged with this Route Target will be imported into this VRF. If a VRF exports to a Route Target, the routes of the VRF will be associated with this Route Target and announced by BGP.
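To make these import/export semantics concrete, here is a small, purely illustrative Python sketch (the class and helper names are invented for this example and are not part of the plugin): a VRF learns exactly the routes announced with a Route Target it imports from, so two VRFs sharing a Route Target see each other's routes while a VRF of another VPN does not::

    # Illustrative model of Route Target import/export semantics (not plugin code).
    from dataclasses import dataclass, field

    @dataclass
    class Vrf:
        name: str
        import_rts: set = field(default_factory=set)    # Route Targets this VRF imports from
        export_rts: set = field(default_factory=set)    # Route Targets this VRF exports to
        local_routes: set = field(default_factory=set)  # prefixes originated locally

    def advertised_routes(vrfs):
        """Collect (route_target, prefix) pairs announced by all VRFs."""
        announced = set()
        for vrf in vrfs:
            for rt in vrf.export_rts:
                for prefix in vrf.local_routes:
                    announced.add((rt, prefix))
        return announced

    def routes_seen_by(vrf, announced):
        """A VRF imports every route announced with a Route Target it imports from."""
        return {prefix for (rt, prefix) in announced if rt in vrf.import_rts}

    # Two sites of the same VPN share RT 64512:1 and therefore see each other's routes;
    # the VRF of another VPN, using RT 64512:2, remains isolated.
    site_a = Vrf("site-a", {"64512:1"}, {"64512:1"}, {"10.1.0.0/24"})
    site_b = Vrf("site-b", {"64512:1"}, {"64512:1"}, {"10.2.0.0/24"})
    other_vpn = Vrf("other", {"64512:2"}, {"64512:2"}, {"10.3.0.0/24"})

    announced = advertised_routes([site_a, site_b, other_vpn])
    print(routes_seen_by(site_a, announced))  # both 10.1.0.0/24 and 10.2.0.0/24, but not 10.3.0.0/24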
As outlined in the overview, how PEs, CEs (Customer Edge routers) and VRFs map to Neutron constructs depends on the backend driver used for this service plugin.
For instance, with the current bagpipe driver, the PE and VRF functions are implemented on compute nodes and the VMs act as CEs. This PE function will BGP-peer with edge IP/MPLS routers, BGP Route Reflectors or other PEs.
Bagpipe BGP, which implements this function, could also be instantiated on network nodes, at the l3agent level, with a BGP speaker on each l3agent; router namespaces could then be considered as CEs.
Other backends might want to consider the router as a CE and drive an external PE to peer with the service provider PE, based on information received via this API. It is up to the backend to manage the connection between the CE and the cloud provider PE.
Another typical option is for the driver to delegate the work to an SDN controller, which drives a BGP implementation advertising/consuming the relevant BGP routes and remotely drives the vSwitches to set up the datapath accordingly.
BGP VPNs are deployed and managed by the operator, in particular the Route Target identifiers that control the isolation between the different VPNs. Because of this, BGP VPN parameters cannot be chosen by tenants, but only by the admin. In addition, network operators may prefer not to expose actual Route Target values to users.
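As an illustrative sketch of the admin workflow (not a verbatim recipe: the Neutron URL, token, identifiers and Route Target values below are placeholders, and the request follows the BGPVPN API extension exposed by this plugin), an admin could create a BGPVPN resource carrying the operator-chosen Route Targets for a given tenant::

    # Admin workflow sketch: create a BGPVPN resource and set its Route Targets.
    # All values below are placeholders to adapt to the actual deployment.
    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"           # placeholder Neutron endpoint
    HEADERS = {"X-Auth-Token": "<admin-token>",            # placeholder admin token
               "Content-Type": "application/json"}

    bgpvpn_body = {
        "bgpvpn": {
            "name": "vpn-customer-1",
            "type": "l3",                  # l3 for BGP/MPLS IP VPN, l2 for E-VPN
            "route_targets": ["64512:1"],  # used for both import and export
            # "import_targets" / "export_targets" can be set for asymmetric topologies
            # "tenant_id": "<tenant-uuid>",  # placeholder: the tenant allowed to use it
        }
    }

    resp = requests.post(f"{NEUTRON_URL}/bgpvpn/bgpvpns",
                         json=bgpvpn_body, headers=HEADERS)
    resp.raise_for_status()
    print("created BGPVPN", resp.json()["bgpvpn"]["id"])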
The operation left in the hands of a tenant is the association of a BGPVPN resource that it owns with its Neutron Networks or Routers.
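Correspondingly, a minimal sketch of the tenant-side operation, associating a BGPVPN it owns with one of its Networks (a Router association works the same way), again with placeholder identifiers and the same caveats as above::

    # Tenant workflow sketch: associate an existing BGPVPN with a tenant Network.
    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"            # placeholder Neutron endpoint
    HEADERS = {"X-Auth-Token": "<tenant-token>",             # placeholder tenant token
               "Content-Type": "application/json"}

    bgpvpn_id = "<bgpvpn-uuid>"     # placeholder: the BGPVPN created for this tenant by the admin
    network_id = "<network-uuid>"   # placeholder: the tenant Network to interconnect with the VPN

    assoc_body = {"network_association": {"network_id": network_id}}
    resp = requests.post(
        f"{NEUTRON_URL}/bgpvpn/bgpvpns/{bgpvpn_id}/network_associations",
        json=assoc_body, headers=HEADERS)
    resp.raise_for_status()
    print("association id:", resp.json()["network_association"]["id"])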
So there are two workflows, one for the admin, one for a tenant.
Sequence diagram summarizing these two workflows:
This diagram gives an overview of the architecture:
This second diagram depicts how the bagpipe reference driver implements its backend:
[RFC4364] BGP/MPLS IP Virtual Private Networks (IP VPNs), http://tools.ietf.org/html/rfc4364
[RFC7432] BGP MPLS-Based Ethernet VPN (Ethernet VPNs, a.k.a. E-VPN), http://tools.ietf.org/html/rfc7432