Networking Guide
The Tricircle provides networking automation across Neutron servers in multi-region OpenStack deployments, and many cross-Neutron networking modes are supported. This guide describes how to use the CLI to set up typical networking modes.
Networking Terms
There are four important networking terms that will be used in networking automation across Neutron.
- Local Network
Local Network is a network which can only reside in one OpenStack cloud.
The network type can be VLAN, VxLAN or Flat.
If you specify a region name as the value of availability-zone-hint during network creation, then the network will be created as a local network in that region.
If the default network type is configured to “local” in central Neutron, then the network will be a local network whether or not availability-zone-hint is specified, as long as the network is created without an explicitly given non-local provider network type (see the configuration sketch after the example below).
External networks should be created as local networks, which means an external network explicitly resides in one specified region. Each region may provide multiple external networks; there is no limitation on how many external networks can be created.
For example, a local network can be created as follows:
openstack --os-region-name=CentralRegion network create --availability-zone-hint=RegionOne net1
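For illustration, here is a minimal sketch of how the default network type could be set to “local” in central Neutron, assuming these options live in the [tricircle] section of central Neutron's configuration file and that the first entry of tenant_network_types acts as the default tenant network type:
[tricircle]
# assumption: the first entry is tried first, so "local" becomes the default
tenant_network_types = local,vlan,vxlan,flat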
- Local Router
Local Router is a logical router which can only reside in one OpenStack cloud.
If you specify a region name as the value of availability-zone-hint during router creation, then the router will be created as a local router in that region.
For example, a local router can be created as follows:
neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne R1
- Cross Neutron L2 Network
Cross Neutron L2 Network is a network which can be stretched across more than one Neutron server; these Neutron servers may work in one OpenStack cloud or in multiple OpenStack clouds.
The network type can be VLAN, VxLAN or Flat.
During network creation, if availability-zone-hint is not specified, or is specified with an availability zone name, more than one region name, or more than one availability zone name, then the network will be created as a cross Neutron L2 network.
If the default network type is not configured to “local” in central Neutron, then the network will be a cross Neutron L2 network if it is created without a specified provider network type and without a single region name in availability-zone-hint.
For example, a cross Neutron L2 network can be created as follows:
neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint RegionOne --availability-zone-hint RegionTwo net1
- Non-Local Router
Non-Local Router can reside in more than one OpenStack cloud and is internally inter-connected with a bridge network.
The bridge network used internally for a non-local router is a special cross Neutron L2 network.
Local networks or cross Neutron L2 networks can be attached to a local router or to non-local routers if the network can be presented in the region where the router resides.
During router creation, if availability-zone-hint is not specified, or is specified with an availability zone name, more than one region name, or more than one availability zone name, then the router will be created as a non-local router.
For example, a non-local router can be created as follows:
neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne --availability-zone-hint RegionTwo R3
It’s also important to understand that cross Neutron L2 networks, local routers and non-local routers can be created for different north-south/east-west networking purposes.
- North-South and East-West Networking
Instances in different OpenStack clouds can be attached to a cross Neutron L2 network directly, so that they can communicate with each other no matter which OpenStack cloud they are in.
If L3 networking across OpenStack clouds is preferred, local networks attached to a non-local router can be created for instances to attach to.
A local router can have an external network set as its gateway to support north-south traffic handled locally.
A non-local router works only for cross-Neutron east-west networking if no external network is set on the router.
A non-local router can serve as the centralized north-south traffic gateway if an external network is attached to it, and can support east-west traffic at the same time, as the example below shows.
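For example, a non-local router can combine both roles by attaching tenant subnets and setting an external network as its gateway; a sketch reusing R3 from above, where ext-net1 and $subnet1_id are hypothetical names:
neutron --os-region-name=CentralRegion router-interface-add R3 $subnet1_id
neutron --os-region-name=CentralRegion router-gateway-set R3 ext-net1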
Prerequisites
One CentralRegion, in which central Neutron and the Tricircle services are started and central Neutron is properly configured with the Tricircle Central Neutron plugin, and at least two regions (RegionOne, RegionTwo) in which local Neutron is properly configured with the Tricircle Local Neutron plugin.
RegionOne is mapped to az1, and RegionTwo is mapped to az2 by pod management through Tricircle Admin API.
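Such a mapping can be created through the Tricircle Admin API; a sketch using curl, assuming the Admin API listens on its default port 19999 on the local host and $token holds a valid Keystone token:
# register the pods; az_name maps a region into an availability zone
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
    -H "X-Auth-Token: $token" -d '{"pod": {"region_name": "CentralRegion"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
    -H "X-Auth-Token: $token" -d '{"pod": {"region_name": "RegionOne", "az_name": "az1"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
    -H "X-Auth-Token: $token" -d '{"pod": {"region_name": "RegionTwo", "az_name": "az2"}}'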
You can use az1 or RegionOne as the value of availability-zone-hint when creating a network. Although in this document there is only one region in each availability zone, one availability zone can include more than one region in Tricircle pod management. If you specify az1 as the value, the network will reside in az1; since az1 is mapped to RegionOne, the network is created in RegionOne, and if you add more regions into az1, the network can spread into those regions too.
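For example, to create a network that can reside in whichever regions are mapped to az1 (net-az1 is a hypothetical name):
openstack --os-region-name=CentralRegion network create --availability-zone-hint=az1 net-az1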
Please refer to the installation guide and configuration guide for how to set up a multi-region environment with the Tricircle service enabled.
If you set up the environment through DevStack, you can apply the settings used in this document as follows:
Suppose that each node has three interfaces: eth1 is used for the tenant VLAN network and eth2 for the external VLAN network. If you want to verify the data plane connectivity, please make sure the bridges “br-vlan” and “br-ext” are connected to the corresponding interfaces. Use the following commands to connect each bridge to its physical Ethernet interface; as shown below, “br-vlan” is wired to eth1, and “br-ext” to eth2:
sudo ovs-vsctl add-br br-vlan
sudo ovs-vsctl add-port br-vlan eth1
sudo ovs-vsctl add-br br-ext
sudo ovs-vsctl add-port br-ext eth2
Suppose the VLAN range for the tenant network is 101~150 and for the external network is 151~200. On the node which will run central Neutron and the Tricircle services, configure local.conf like this:
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:101:150,extern:151:200)
OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext
TRICIRCLE_START_SERVICES=True
enable_plugin tricircle https://github.com/openstack/tricircle/
On the node which will run local Neutron without the Tricircle services, configure local.conf like this:
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:101:150,extern:151:200)
OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext
TRICIRCLE_START_SERVICES=False
enable_plugin tricircle https://github.com/openstack/tricircle/
You may have noticed that the only difference is whether TRICIRCLE_START_SERVICES is set to True or False. All examples given in this document are based on these settings.
If you also want to configure a VXLAN network, suppose the VXLAN range for the tenant network is 1001~2000, and add the following configuration to the above local.conf:
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
If you also want to configure a flat network, suppose you use the same physical network as the VLAN network, and configure local.conf like this:
Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)
In both RegionOne and RegionTwo an external network can be provisioned; the settings in /etc/neutron/plugins/ml2/ml2_conf.ini will look like this:
network_vlan_ranges = bridge:101:150,extern:151:200
vni_ranges = 1001:2000 (or the range that you configured)
flat_networks = bridge,extern
bridge_mappings = bridge:br-vlan,extern:br-ext
Please be aware that the physical network name for the tenant VLAN network is “bridge”, and the physical network name for the external network is “extern”.
In central Neutron’s configuration file, the default settings look as follows:
bridge_network_type = vxlan
network_vlan_ranges = bridge:101:150,extern:151:200
vni_ranges = 1001:2000
flat_networks = bridge,extern
tenant_network_types = vxlan,vlan,flat,local
type_drivers = vxlan,vlan,flat,local
If you want to create a local network, it is recommended that you specify availability_zone_hint as a region name when creating the network, instead of specifying the network type as “local”. The “local” type has two drawbacks. One is that you cannot control the exact type of the network in local Neutron; it is up to your local Neutron’s configuration. The other is that the segment ID of the network is allocated by local Neutron, so it may conflict with a segment ID that is allocated by central Neutron. Considering such problems, we plan to deprecate the “local” type.
If you want to create an L2 network across multiple Neutron servers, then you have to specify --provider-network-type vlan in the network creation command for the VLAN network type, or --provider-network-type vxlan for the VxLAN network type. Both the vlan and vxlan network types can work as the bridge network. The default bridge network type is vxlan.
If you want to create a flat network, which is usually used as the external network type, then you have to specify --provider-network-type flat in the network creation command.
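For example, a flat external network local to RegionOne could be created as follows, using the “extern” physical network configured above (ext-net1 is a hypothetical name):
openstack --os-region-name=CentralRegion network create --provider-network-type flat --provider-physical-network extern --availability-zone-hint RegionOne --external ext-net1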
You can create L2 networks for different purposes, and the supported network types for each purpose are summarized in the following table.
| Networking purpose | Supported network types |
| --- | --- |
| Local L2 network for instances | FLAT, VLAN, VxLAN |
| Cross Neutron L2 network for instances | FLAT, VLAN, VxLAN |
| Bridge network for routers | FLAT, VLAN, VxLAN |
| External network | FLAT, VLAN |
Networking Scenario
- North South Networking via Direct Provider Networks
- North South Networking via Multiple External Networks
- Multiple North-South gateways with East-West Networking enabled
- North South Networking via Single External Network
- Local Networking
- How to use the new layer-3 networking model for multi-NS-with-EW
Service Function Chaining Guide
Service Function Chaining provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then “stitched” together in the network to create a service chain.
Installation
After installing tricircle, please refer to https://docs.openstack.org/networking-sfc/latest/install/install.html to install networking-sfc.
Configuration
1 Configure central Neutron server
After installing the Tricircle and networking-sfc, enable the service plugins in the central Neutron server by adding them in neutron.conf (typically found in /etc/neutron/):
service_plugins=networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,tricircle.network.central_sfc_plugin.TricircleSfcPlugin
In the same configuration file, specify the driver to use in the plugins.
[sfc]
drivers = tricircle_sfc

[flowclassifier]
drivers = tricircle_fc
2 Configure local Neutron
Please refer to https://docs.openstack.org/networking-sfc/latest/install/configuration.html to configure local networking-sfc.
How to play
1 Create pods via Tricircle Admin API
2 Create necessary resources in central Neutron server
neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan net1
neutron --os-region-name=CentralRegion subnet-create net1 10.0.0.0/24
neutron --os-region-name=CentralRegion port-create net1 --name p1
neutron --os-region-name=CentralRegion port-create net1 --name p2
neutron --os-region-name=CentralRegion port-create net1 --name p3
neutron --os-region-name=CentralRegion port-create net1 --name p4
neutron --os-region-name=CentralRegion port-create net1 --name p5
neutron --os-region-name=CentralRegion port-create net1 --name p6
Please note that network type must be vxlan.
3 Get the image ID and flavor ID which will be used in VM booting. In the following steps, VMs will be booted in RegionOne and RegionTwo.
glance --os-region-name=RegionOne image-list
nova --os-region-name=RegionOne flavor-list
glance --os-region-name=RegionTwo image-list
nova --os-region-name=RegionTwo flavor-list
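The IDs referenced in the next step can be captured into shell variables; for example, a sketch using the openstack CLI's value formatter (repeat for the other ports and for RegionTwo's image):
p1_id=$(openstack --os-region-name=CentralRegion port show p1 -f value -c id)
image1_id=$(openstack --os-region-name=RegionOne image list -f value -c ID | head -n 1)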
4 Boot virtual machines
openstack --os-region-name=RegionOne server create --flavor 1 --image $image1_id --nic port-id=$p1_id vm_src
openstack --os-region-name=RegionOne server create --flavor 1 --image $image1_id --nic port-id=$p2_id --nic port-id=$p3_id vm_sfc1
openstack --os-region-name=RegionTwo server create --flavor 1 --image $image2_id --nic port-id=$p4_id --nic port-id=$p5_id vm_sfc2
openstack --os-region-name=RegionTwo server create --flavor 1 --image $image2_id --nic port-id=$p6_id vm_dst
5 Create port pairs in central Neutron server
neutron --os-region-name=CentralRegion port-pair-create --ingress p2 --egress p3 pp1
neutron --os-region-name=CentralRegion port-pair-create --ingress p4 --egress p5 pp2
6 Create port pair groups in central Neutron server
neutron --os-region-name=CentralRegion port-pair-group-create --port-pair pp1 ppg1
neutron --os-region-name=CentralRegion port-pair-group-create --port-pair pp2 ppg2
7 Create flow classifier in central Neutron server
neutron --os-region-name=CentralRegion flow-classifier-create --source-ip-prefix 10.0.0.0/24 --logical-source-port p1 fc1
8 Create port chain in central Neutron server
neutron --os-region-name=CentralRegion port-chain-create --flow-classifier fc1 --port-pair-group ppg1 --port-pair-group ppg2 pc1
9 Show result in CentralRegion, RegionOne and RegionTwo
neutron --os-region-name=CentralRegion port-chain-list
neutron --os-region-name=RegionOne port-chain-list
neutron --os-region-name=RegionTwo port-chain-list
You will find the same port chain in each region.
10 Check if the port chain is working
In vm_dst, ping p1’s IP address; it should fail.
Enable the forwarding function of vm_sfc1 and vm_sfc2
sudo sh
echo 1 > /proc/sys/net/ipv4/ip_forward
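Equivalently, as a single command:
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'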
Add the following route in vm_sfc1 and vm_sfc2
sudo ip route add $p6_ip_address dev eth1
In vm_dst, ping p1’s IP address again; it should succeed this time.
Note
Not all images will bring up the second NIC, so you can ssh into the VM and use “ifconfig -a” to check whether all NICs are up, and bring up all NICs if necessary. In CirrOS you can type the following command to bring up one NIC.
sudo cirros-dhcpc up $nic_name
VLAN aware VMs Guide
VLAN aware VM is a VM that sends and receives VLAN tagged frames over its vNIC. The main point of this is to overcome the limitations of the current one-vNIC-per-network model. A VLAN (or other encapsulation) aware VM can differentiate between traffic of many networks by different encapsulation types and IDs, instead of using many vNICs. This approach scales to a higher number of networks and enables dynamic handling of network attachments (without hot-plugging vNICs).
Installation
No additional installation is required. Please refer to the Tricircle installation guide to install the Tricircle, then configure the Neutron server to enable the trunk extension.
Configuration
1 Configure central Neutron server
Edit neutron.conf, add the following configuration, then restart the central Neutron server.
| Option | Description | Example |
| --- | --- | --- |
| [DEFAULT] service_plugins | service plugin the central Neutron server uses | tricircle.network.central_trunk_plugin.TricircleTrunkPlugin |
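For example, the resulting entry in central Neutron’s neutron.conf could look like this (a sketch; if you already have other service plugins configured, append to the existing comma-separated list instead):
[DEFAULT]
service_plugins = tricircle.network.central_trunk_plugin.TricircleTrunkPlugin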
2 Configure local Neutron server
Edit neutron.conf, add the following configuration, then restart the local Neutron server.
| Option | Description | Example |
| --- | --- | --- |
| [DEFAULT] service_plugins | service plugin the local Neutron server uses | trunk |
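Likewise, in local Neutron’s neutron.conf (a sketch; append to any existing service plugins):
[DEFAULT]
service_plugins = trunk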
How to play
1 Create pods via Tricircle Admin API
2 Create necessary resources in central Neutron server
neutron --os-region-name=CentralRegion net-create --provider:network_type vlan net1
neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
neutron --os-region-name=CentralRegion port-create net1 --name p1
neutron --os-region-name=CentralRegion net-create --provider:network_type vlan net2
neutron --os-region-name=CentralRegion subnet-create net2 10.0.2.0/24
neutron --os-region-name=CentralRegion port-create net2 --name p2
Please note that the network type must be vlan. The ports p1 and p2, and net2’s provider segmentation_id, will be used in later steps to create the trunk and boot the VM.
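For example, net2’s provider segmentation_id can be captured into the shell variable used in the next step (a sketch using the openstack CLI):
net2_segment_id=$(openstack --os-region-name=CentralRegion network show net2 -f value -c provider:segmentation_id)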
3 Create trunk in central Neutron server
openstack --os-region-name=CentralRegion network trunk create trunk1 --parent-port p1 --subport port=p2,segmentation-type=vlan,segmentation-id=$net2_segment_id
4 Get the image ID and flavor ID which will be used in VM booting. In the following step, the trunk is to be used by the VM in RegionOne; you can replace RegionOne with another region’s name if you want to boot a VLAN aware VM in another region.
glance --os-region-name=RegionOne image-list
nova --os-region-name=RegionOne flavor-list
5 Boot virtual machines
nova --os-region-name=RegionOne boot --flavor 1 --image $image1_id --nic port-id=$p1_id vm1
6 Show result on CentralRegion and RegionOne
openstack --os-region-name=CentralRegion network trunk show trunk1
openstack --os-region-name=RegionOne network trunk show trunk1
The result will be the same, except for the trunk id.