Networking Option 2: Self-service networks
Install and configure the Networking components on the controller node.
Install the components
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-openvswitch ebtables
Configure the server component
Edit the /etc/neutron/neutron.conf file and complete the following actions:

* In the [database] section, configure database access:

      [database]
      # ...
      connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
  Replace NEUTRON_DBPASS with the password you chose for the database.

  Note: Comment out or remove any other connection options in the [database] section.

* In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and the router service:

      [DEFAULT]
      # ...
      core_plugin = ml2
      service_plugins = router
* In the [DEFAULT] section, configure RabbitMQ message queue access:

      [DEFAULT]
      # ...
      transport_url = rabbit://openstack:RABBIT_PASS@controller
  Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

* In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

      [DEFAULT]
      # ...
      auth_strategy = keystone

      [keystone_authtoken]
      # ...
      www_authenticate_uri = http://controller:5000
      auth_url = http://controller:5000
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = Default
      user_domain_name = Default
      project_name = service
      username = neutron
      password = NEUTRON_PASS
  Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

  Note: Comment out or remove any other options in the [keystone_authtoken] section.

* In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:

      [DEFAULT]
      # ...
      notify_nova_on_port_status_changes = true
      notify_nova_on_port_data_changes = true

      [nova]
      # ...
      auth_url = http://controller:5000
      auth_type = password
      project_domain_name = Default
      user_domain_name = Default
      region_name = RegionOne
      project_name = service
      username = nova
      password = NOVA_PASS
  Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

* In the [oslo_concurrency] section, configure the lock path:

      [oslo_concurrency]
      # ...
      lock_path = /var/lib/neutron/tmp
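The steps above can be applied by hand or with a tool such as crudini; as a minimal sketch, the following Python snippet assembles the same settings with the standard-library configparser and writes them to a temporary file rather than the real /etc/neutron/neutron.conf. The password placeholders (NEUTRON_DBPASS, RABBIT_PASS, NEUTRON_PASS, NOVA_PASS) are the ones from the text and must be replaced with real values.

```python
import configparser
import tempfile

# Sketch only: build the server-component settings from this section and
# write them to a temporary file, not the real /etc/neutron/neutron.conf.
# Passwords are the placeholders from the guide, not real credentials.
conf = configparser.ConfigParser()

conf["DEFAULT"] = {
    "core_plugin": "ml2",
    "service_plugins": "router",
    "transport_url": "rabbit://openstack:RABBIT_PASS@controller",
    "auth_strategy": "keystone",
    "notify_nova_on_port_status_changes": "true",
    "notify_nova_on_port_data_changes": "true",
}
conf["database"] = {
    "connection": "mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron",
}
conf["keystone_authtoken"] = {
    "www_authenticate_uri": "http://controller:5000",
    "auth_url": "http://controller:5000",
    "memcached_servers": "controller:11211",
    "auth_type": "password",
    "project_domain_name": "Default",
    "user_domain_name": "Default",
    "project_name": "service",
    "username": "neutron",
    "password": "NEUTRON_PASS",
}
conf["nova"] = {
    "auth_url": "http://controller:5000",
    "auth_type": "password",
    "project_domain_name": "Default",
    "user_domain_name": "Default",
    "region_name": "RegionOne",
    "project_name": "service",
    "username": "nova",
    "password": "NOVA_PASS",
}
conf["oslo_concurrency"] = {"lock_path": "/var/lib/neutron/tmp"}

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    conf.write(f)
    path = f.name

# Read the file back to confirm it parses as expected.
check = configparser.ConfigParser()
check.read(path)
print(check["DEFAULT"]["core_plugin"])  # ml2
```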
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

* In the [ml2] section, enable flat, VLAN, and VXLAN networks:

      [ml2]
      # ...
      type_drivers = flat,vlan,vxlan
* In the [ml2] section, enable VXLAN self-service networks:

      [ml2]
      # ...
      tenant_network_types = vxlan
* In the [ml2] section, enable the Open vSwitch and layer-2 population mechanisms:

      [ml2]
      # ...
      mechanism_drivers = openvswitch,l2population
  Warning: After you configure the ML2 plug-in, removing values in the type_drivers option can lead to database inconsistency.

  Note: The Linux bridge agent only supports VXLAN overlay networks.
* In the [ml2] section, enable the port security extension driver:

      [ml2]
      # ...
      extension_drivers = port_security
* In the [ml2_type_flat] section, configure the provider virtual network as a flat network:

      [ml2_type_flat]
      # ...
      flat_networks = provider
* In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:

      [ml2_type_vxlan]
      # ...
      vni_ranges = 1:1000
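The vni_ranges value defines the pool of VXLAN network identifiers (VNIs) handed out to self-service networks. As an illustration of how a MIN:MAX list like 1:1000 is interpreted, here is a small sketch with a hypothetical parse_vni_ranges helper (not part of Neutron):

```python
def parse_vni_ranges(value: str) -> list[range]:
    """Parse a comma-separated list of MIN:MAX VNI ranges (inclusive).

    Hypothetical helper for illustration; Neutron performs its own parsing.
    """
    ranges = []
    for part in value.split(","):
        lo, hi = (int(x) for x in part.strip().split(":"))
        # VNIs are 24-bit values, so the valid space is 1..16777215.
        if not (0 < lo <= hi <= 2**24 - 1):
            raise ValueError(f"invalid VNI range: {part!r}")
        ranges.append(range(lo, hi + 1))
    return ranges

pool = parse_vni_ranges("1:1000")
print(sum(len(r) for r in pool))  # 1000 identifiers available
```

With vni_ranges = 1:1000, at most 1000 self-service VXLAN networks can exist at once; widen the range if you expect more.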
Configure the Open vSwitch agent
The Open vSwitch agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/openvswitch_agent.ini file and complete the following actions:

* In the [ovs] section, map the provider virtual network to the provider physical bridge and configure the IP address of the physical network interface that handles overlay networks:

      [ovs]
      bridge_mappings = provider:PROVIDER_BRIDGE_NAME
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS
  Replace PROVIDER_BRIDGE_NAME with the name of the bridge connected to the underlying provider physical network. See Host networking and Open vSwitch: Provider networks for more information.

  Also replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node. See Host networking for more information.

  Ensure the PROVIDER_BRIDGE_NAME external bridge is created and PROVIDER_INTERFACE_NAME is added to that bridge:

      # ovs-vsctl add-br $PROVIDER_BRIDGE_NAME
      # ovs-vsctl add-port $PROVIDER_BRIDGE_NAME $PROVIDER_INTERFACE_NAME
* In the [agent] section, enable VXLAN overlay networks and layer-2 population:

      [agent]
      tunnel_types = vxlan
      l2_population = true
* In the [securitygroup] section, enable security groups and configure either the Open vSwitch native firewall driver or the hybrid iptables firewall driver:

      [securitygroup]
      # ...
      enable_security_group = true
      firewall_driver = openvswitch
      #firewall_driver = iptables_hybrid
  If you use the hybrid iptables firewall driver, ensure your Linux kernel supports network bridge filters by verifying that both of the following sysctl values are set to 1:

      net.bridge.bridge-nf-call-iptables
      net.bridge.bridge-nf-call-ip6tables

  To enable networking bridge support, the br_netfilter kernel module typically needs to be loaded. Check your operating system's documentation for additional details on enabling this module.
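Two details above lend themselves to a short sketch: the physnet:bridge syntax of bridge_mappings, and the bridge-netfilter sysctl check needed for the hybrid iptables driver. Both helpers below are hypothetical illustrations rather than Neutron code, and br-provider is only an example bridge name:

```python
def parse_bridge_mappings(value: str) -> dict[str, str]:
    """Parse 'physnet:bridge[,physnet:bridge...]' into a dict.

    Hypothetical helper illustrating the [ovs] bridge_mappings syntax.
    """
    mappings = {}
    for part in value.split(","):
        physnet, bridge = part.strip().split(":")
        mappings[physnet] = bridge
    return mappings

# The two sysctls that must be 1 for the hybrid iptables firewall driver.
REQUIRED_SYSCTLS = (
    "net.bridge.bridge-nf-call-iptables",
    "net.bridge.bridge-nf-call-ip6tables",
)

def bridge_filters_enabled(sysctl_values: dict[str, int]) -> bool:
    """True when every required bridge-netfilter sysctl is set to 1.

    On a real host the values would come from `sysctl -n <name>` or
    /proc/sys/net/bridge/*; here they are passed in explicitly so the
    sketch stays self-contained.
    """
    return all(sysctl_values.get(name) == 1 for name in REQUIRED_SYSCTLS)

print(parse_bridge_mappings("provider:br-provider"))  # {'provider': 'br-provider'}
print(bridge_filters_enabled({name: 1 for name in REQUIRED_SYSCTLS}))  # True
```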
Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
Edit the /etc/neutron/l3_agent.ini file and complete the following actions:

* In the [DEFAULT] section, configure the Open vSwitch interface driver:

      [DEFAULT]
      # ...
      interface_driver = openvswitch
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:

* In the [DEFAULT] section, configure the Open vSwitch interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:

      [DEFAULT]
      # ...
      interface_driver = openvswitch
      dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      enable_isolated_metadata = true
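The layer-3 and DHCP agent edits are small enough to sketch together; the following minimal Python example writes both fragments with configparser to temporary files (not the real /etc/neutron paths) and reads one back as a sanity check:

```python
import configparser
import tempfile

# Sketch only: the l3_agent.ini and dhcp_agent.ini settings from the two
# sections above, written to temporary files instead of /etc/neutron.
fragments = {
    "l3_agent.ini": {
        "DEFAULT": {"interface_driver": "openvswitch"},
    },
    "dhcp_agent.ini": {
        "DEFAULT": {
            "interface_driver": "openvswitch",
            "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",
            "enable_isolated_metadata": "true",
        },
    },
}

written = {}
for name, sections in fragments.items():
    conf = configparser.ConfigParser()
    conf.read_dict(sections)
    with tempfile.NamedTemporaryFile("w", suffix=name, delete=False) as f:
        conf.write(f)
        written[name] = f.name

# Parse one fragment back to confirm the file is well-formed.
check = configparser.ConfigParser()
check.read(written["dhcp_agent.ini"])
print(check["DEFAULT"]["dhcp_driver"])  # neutron.agent.linux.dhcp.Dnsmasq
```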
Return to Networking controller node configuration.