The Quality of Service (QoS) advanced service is designed as a service plugin. The service is decoupled from the rest of the Neutron code on multiple levels (see below).
QoS extends core resources (ports, networks) without using mixins inherited from plugins, but rather through an ml2 extension driver.
Details about the DB models, API extension, and use cases can be found in the qos spec.
The neutron.extensions.qos.QoSPluginBase class uses method proxies for methods relating to QoS policy rules. Each such method is generic in the sense that it is intended to handle any rule type. For example, QoSPluginBase has a single create_policy_rule method instead of separate create_policy_dscp_marking_rule and create_policy_bandwidth_limit_rule methods. The logic behind the proxies allows a call to a plugin's create_policy_dscp_marking_rule to be handled by the create_policy_rule method, which receives a QosDscpMarkingRule object as an argument in order to execute behavior specific to the DSCP marking rule type. This approach allows new rule types to be introduced without requiring plugins to modify their code. As would be expected, any subclass of QoSPluginBase must override the base class's abc.abstractmethod methods, even if only to raise NotImplementedError.
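To illustrate the proxying pattern, here is a minimal, self-contained sketch (not the actual Neutron code; the rule classes, method names and signatures are simplified stand-ins):

    import functools


    class QosDscpMarkingRule(object):
        pass  # stand-in for the real versioned rule object


    class QosBandwidthLimitRule(object):
        pass  # stand-in for the real versioned rule object


    _RULE_TYPES = {
        'dscp_marking_rule': QosDscpMarkingRule,
        'bandwidth_limit_rule': QosBandwidthLimitRule,
    }


    class ProxyingPluginSketch(object):

        def __getattr__(self, name):
            # Route create_policy_<rule_type> calls to the generic
            # handler, binding the matching rule class as an argument.
            for rule_type, rule_cls in _RULE_TYPES.items():
                if name == 'create_policy_%s' % rule_type:
                    return functools.partial(self.create_policy_rule,
                                             rule_cls)
            raise AttributeError(name)

        def create_policy_rule(self, rule_cls, context, policy_id, data):
            # One generic method services every rule type.
            print('creating %s for policy %s'
                  % (rule_cls.__name__, policy_id))

With this, a call to create_policy_dscp_marking_rule(context, policy_id, data) ends up in create_policy_rule with rule_cls set to QosDscpMarkingRule, so introducing a new rule type only requires registering a new rule class.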
Each QoS driver has a property called supported_rule_types, through which the driver exposes the rule types it is able to handle.
For a list of all rule types, see: neutron.services.qos.qos_consts.VALID_RULE_TYPES.
The list of supported QoS rule types exposed by neutron is calculated as the common subset of rules supported by all active QoS drivers.
Note: the list of supported rule types reported by the core plugin is not enforced when accessing QoS rule resources. This is mostly because otherwise we would not be able to create rules whenever at least one of the QoS drivers in the gate lacks support for the rules we're trying to test.
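A hedged sketch of that calculation (the driver classes and rule type names below are illustrative stand-ins):

    class FakeOvsQosDriver(object):
        supported_rule_types = {'bandwidth_limit', 'dscp_marking'}


    class FakeSriovQosDriver(object):
        supported_rule_types = {'bandwidth_limit', 'minimum_bandwidth'}


    def common_rule_types(drivers):
        # Only rule types supported by *every* active driver are exposed.
        sets = [set(d.supported_rule_types) for d in drivers]
        return set.intersection(*sets) if sets else set()


    print(common_rule_types([FakeOvsQosDriver(), FakeSriovQosDriver()]))
    # -> {'bandwidth_limit'}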
The QoS design defines the following two conceptual resources to apply QoS rules to a port or a network:

- QoS policy
- QoS rule
Each QoS policy contains zero or more QoS rules. A policy is then applied to a network or a port, at which point all rules of the policy are applied to the corresponding Neutron resource.
When applied through a network association, policy rules may or may not apply to Neutron internal ports (router, DHCP, load balancer ports, etc.). The QosRule base object provides a default should_apply_to_port method which can be overridden. In the future we may want to have a flag in QoSNetworkPolicyBinding or QosRule to enforce this kind of application (for example, when automatically limiting all ingress of router devices on an external network).
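As a hedged illustration of such an override (the device_owner check below is an assumption made for the example, not the in-tree logic):

    class QosRuleSketch(object):

        def should_apply_to_port(self, port):
            # Default: a network-attached policy applies to every port.
            return True


    class RouterAwareRuleSketch(QosRuleSketch):

        def should_apply_to_port(self, port):
            # Skip Neutron internal ports (routers, DHCP agents, ...),
            # whose device_owner conventionally starts with 'network:'.
            return not port.get('device_owner', '').startswith('network:')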
Each project can have at most one default QoS policy, although it is not mandatory. If a default QoS policy is defined, all new networks created within this project will have this policy assigned, as long as no other QoS policy is explicitly attached during the creation process. If the default QoS policy is unset, no change to existing networks will be made.
From the database point of view, the following objects are defined in the schema:
All database models are defined under:
For QoS, the following neutron objects are implemented:
Those are defined in:
For the QosPolicy neutron object, the following public methods were implemented:
In addition to the fields that belong to QoS policy database object itself, synthetic fields were added to the object that represent lists of rules that belong to the policy. To get a list of all rules for a specific policy, a consumer of the object can just access the corresponding attribute via:
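For illustration, a hedged sketch of such access (get_object follows the general neutron objects API; the synthetic attribute name used here is an assumption for the example):

    from neutron.objects.qos import policy as policy_object

    # Fetch the policy, then read its synthetic rule-list field.
    policy = policy_object.QosPolicy.get_object(context, id=policy_id)
    for rule in policy.rules:
        print(rule.id, type(rule).__name__)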
The implementation is done in a way that allows adding a new rule list field with little or no modification to the policy object itself. This is achieved by introspection of the available rule object definitions and automatic definition of those fields on the policy class.
Note that rules are loaded in a non-lazy way, meaning they are all fetched from the database when the policy is fetched.
For Qos<type>Rule objects, an extendable approach was taken to allow easy addition of objects for new rule types. To accommodate this, fields common to all types are put into a base class called QosRule, which is then inherited by type-specific rule implementations that, ideally, only define additional fields and other minor details.
Note that the QosRule base class is not registered with oslo.versionedobjects registry, because it’s not expected that ‘generic’ rules should be instantiated (and to suggest just that, the base rule class is marked as ABC).
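A hedged sketch of this pattern (field definitions are simplified; the real objects carry more fields plus database hooks):

    import abc

    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields


    class QosRuleSketch(obj_base.VersionedObject, metaclass=abc.ABCMeta):
        # Fields common to every rule type; the class is abstract and
        # deliberately *not* registered with the object registry.
        fields = {
            'id': obj_fields.UUIDField(),
            'qos_policy_id': obj_fields.UUIDField(),
        }


    @obj_base.VersionedObjectRegistry.register
    class QosBandwidthLimitRuleSketch(QosRuleSketch):
        # Only the type-specific fields are added on top of the base ones.
        fields = dict(
            QosRuleSketch.fields,
            max_kbps=obj_fields.IntegerField(nullable=True),
            max_burst_kbps=obj_fields.IntegerField(nullable=True),
        )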
QoS objects rely on some primitive database API functions that are added in:
Details on RPC communication implemented in reference backend driver are discussed in a separate page.
The flow of updates is as follows:
Reference agents implement QoS functionality using an L2 agent extension.
At the moment, QoS is supported by Open vSwitch, SR-IOV and Linux bridge ml2 drivers.
Each agent backend defines a QoS driver that implements the QosAgentDriver interface:
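A hedged sketch of the shape such a driver takes (the per-rule-type create/update/delete naming follows the pattern described above; the backend calls are placeholders):

    class MyBackendQosAgentDriver(object):
        # Illustrative agent-side QoS driver skeleton.

        SUPPORTED_RULES = {'bandwidth_limit'}

        def initialize(self):
            # One-time backend setup (connections, caches, ...).
            pass

        def create_bandwidth_limit(self, port, rule):
            self._apply_limit(port, rule.max_kbps, rule.max_burst_kbps)

        def update_bandwidth_limit(self, port, rule):
            self.create_bandwidth_limit(port, rule)

        def delete_bandwidth_limit(self, port):
            self._apply_limit(port, None, None)

        def _apply_limit(self, port, max_kbps, max_burst_kbps):
            # Backend-specific work goes here (OVSDB, ip link, tc, ...).
            pass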
Table of Neutron backends, supported rules and traffic direction (from the VM point of view)
+----------------------+----------------+----------------+----------------+
| Rule \ Backend | Open vSwitch | SR-IOV | Linux Bridge |
+----------------------+----------------+----------------+----------------+
| Bandwidth Limit | Egress/Ingress | Egress (1) | Egress/Ingress |
+----------------------+----------------+----------------+----------------+
| Minimum Bandwidth | - | Egress | - |
+----------------------+----------------+----------------+----------------+
| DSCP Marking | Egress | - | Egress |
+----------------------+----------------+----------------+----------------+
(1) The max burst parameter is skipped because it's not supported by the ip tool.
Open vSwitch implementation relies on the new ovs_lib OVSBridge functions:
An egress bandwidth limit is effectively configured on the port by setting the port Interface parameters ingress_policing_rate and ingress_policing_burst.
That approach is less flexible than using linux-htb Queues and OvS QoS profiles, which we may explore in the future, but those will need to be used in combination with OpenFlow rules.
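A hedged sketch of what this amounts to (set_db_attribute is an existing ovs_lib helper; the in-tree bandwidth limit functions wrap more logic than shown here):

    # br is an ovs_lib.OVSBridge; rates are in kbps, as in the QoS API.
    br.set_db_attribute('Interface', port_name,
                        'ingress_policing_rate', max_kbps)
    br.set_db_attribute('Interface', port_name,
                        'ingress_policing_burst', max_burst_kbps)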
An ingress bandwidth limit is effectively configured on the port by setting a Queue and an OvS QoS profile of linux-htb type for the port.
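For illustration, the equivalent ovs-vsctl transaction issued from Python (the agent performs this through ovsdb helpers rather than shelling out; names and units are illustrative):

    import subprocess


    def set_ingress_limit(port_name, max_bps, burst_bps):
        # Create a linux-htb QoS record with a single queue capping the
        # rate, and attach it to the port in one OVSDB transaction.
        subprocess.check_call([
            'ovs-vsctl',
            'set', 'port', port_name, 'qos=@newqos', '--',
            '--id=@newqos', 'create', 'qos', 'type=linux-htb',
            'queues:0=@q0', '--',
            '--id=@q0', 'create', 'queue',
            'other-config:max-rate=%d' % max_bps,
            'other-config:burst=%d' % burst_bps,
        ])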
The Open vSwitch DSCP marking implementation relies on the recent addition of the ovs_agent_extension_api OVSAgentExtensionAPI to request access to the integration bridge functions:
The DSCP markings are in fact configured on the port by means of OpenFlow rules.
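A hedged sketch of such a flow (add_flow is an existing OVSBridge method, but the match fields and action string here are simplified relative to the in-tree flows):

    # The ToS byte carries DSCP in its upper six bits, hence the shift.
    int_br = self.agent_api.request_int_br()
    int_br.add_flow(in_port=vif_port.ofport,
                    proto='ip',
                    actions='mod_nw_tos:%d,normal' % (dscp_mark << 2))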
SR-IOV bandwidth limit implementation relies on the new pci_lib function:
As the name of the function suggests, the limit is applied on a Virtual Function (VF).
The ip link interface has the following limitation for bandwidth limits: it uses Mbps as the unit of bandwidth measurement rather than kbps, and does not support floating point numbers. So if the limit is set to something less than 1000 kbps, it is set to 1 Mbps. If the limit is set to a value that does not divide evenly into 1000 kbps chunks, the effective limit is rounded to the nearest integer Mbps value.
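Those rounding rules translate to something like this hedged sketch:

    def kbps_to_ip_link_mbps(max_kbps):
        # ip link accepts only whole Mbps values for VF rates.
        if max_kbps == 0:
            return 0  # 0 means "unlimited" for ip link
        if max_kbps < 1000:
            return 1  # anything below 1 Mbps is bumped up to 1 Mbps
        return int(round(max_kbps / 1000.0))

    # kbps_to_ip_link_mbps(500)  -> 1
    # kbps_to_ip_link_mbps(2300) -> 2 (rounded to the nearest Mbps)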
The Linux bridge implementation relies on the new tc_lib functions.
For egress bandwidth limit rule:
The egress bandwidth limit is configured on the tap port by setting traffic policing on the tc ingress queueing discipline (qdisc). Details about the ingress qdisc can be found in the lartc how-to. The reason why the ingress qdisc is used to configure an egress bandwidth limit is that tc works on traffic as seen from the bridge's perspective: traffic incoming to the bridge via the tap interface is in fact egress traffic from the Neutron port. This implementation is the same as what Open vSwitch does when ingress_policing_rate and ingress_policing_burst are set for a port.
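A hedged sketch of the resulting tc invocations (real code goes through tc_lib; the device name and exact filter options are illustrative):

    import subprocess


    def set_egress_limit(device, rate_kbps, burst_kb):
        # Attach an ingress qdisc: "ingress" from the bridge's view is
        # egress from the Neutron port's view.
        subprocess.check_call(
            ['tc', 'qdisc', 'add', 'dev', device, 'ingress'])
        # Police all traffic entering the bridge through this device.
        subprocess.check_call(
            ['tc', 'filter', 'add', 'dev', device, 'parent', 'ffff:',
             'protocol', 'all', 'u32', 'match', 'u32', '0', '0',
             'police', 'rate', '%dkbit' % rate_kbps,
             'burst', '%dk' % burst_kb, 'drop'])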
For ingress bandwidth limit rule:
The ingress bandwidth limit is configured on the tap port by setting a simple tc-tbf queueing discipline (qdisc) on the port. It requires the value of the HZ parameter configured in the kernel on the host; this value is necessary to calculate the minimal burst value set in tc. Details about how it is calculated can be found here. This solution is similar to the Open vSwitch implementation.
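As a hedged sketch of that relationship (the exact in-tree formula may differ):

    def minimal_burst_bytes(rate_kbps, kernel_hz):
        # tbf needs a bucket holding at least one kernel timer tick's
        # worth of traffic, i.e. rate / HZ.
        bytes_per_second = rate_kbps * 1000 / 8.0
        return bytes_per_second / kernel_hz

    # At 10 Mbps with HZ=250: 10000 * 1000 / 8 / 250 = 5000 bytes.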
The QoS framework is flexible enough to support any third-party vendor. To integrate a third-party driver (one that just wants to be aware of the QoS create/update/delete API calls), one needs to implement neutron.services.qos.drivers.base and register the driver during the core plugin or mechanism driver load; see the neutron.services.qos.drivers.openvswitch.driver register method for an example.
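A hedged sketch of what such an integration could look like (the DriverBase constructor arguments shown are assumptions for illustration; consult neutron.services.qos.drivers.base for the real interface):

    from neutron.services.qos.drivers import base


    class MyVendorQoSDriver(base.DriverBase):
        # Illustrative third-party driver skeleton.

        def create_policy(self, context, policy):
            pass  # push the new policy to the vendor backend

        def update_policy(self, context, policy):
            pass

        def delete_policy(self, context, policy):
            pass


    def register():
        # Call this during core plugin / mechanism driver load.
        return MyVendorQoSDriver(
            name='myvendor',
            vif_types=None,       # no VIF type restriction
            vnic_types=None,      # no VNIC type restriction
            supported_rules={},   # rule types and parameter constraints
            requires_rpc_notifications=False)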
Note
All the functionality MUST be implemented by the vendor; Neutron's QoS framework merely acts as an interface that passes the received QoS API requests through to the driver and handles database persistence for the API operations.
To enable the service, the following steps should be followed:
On server side:
On agent side (OVS):
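For illustration, a hedged example of the usual configuration knobs (option names as used by the reference implementation; verify them against your release):

    # /etc/neutron/neutron.conf (server)
    [DEFAULT]
    service_plugins = qos

    # /etc/neutron/plugins/ml2/ml2_conf.ini (server)
    [ml2]
    extension_drivers = port_security,qos

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (agent)
    [agent]
    extensions = qos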
All the code added or extended as part of the effort got reasonable unit test coverage.
Base unit test classes to validate neutron objects were implemented in a way that allows code reuse when introducing a new object type.
There are two test classes that are utilized for that:
Every new object implemented on top of one of those classes is expected to either inherit the existing test cases as is, or reimplement them if that makes sense in terms of how the objects are implemented. Specific test classes can obviously extend the set of test cases as they see fit (e.g. you may need to define new test cases for additional methods that you add to your object implementations on top of the base semantics common to all neutron objects).
Additions to ovs_lib to set bandwidth limits on ports are covered in:
New functional tests for tc_lib to set bandwidth limits on ports are in:
API tests for basic CRUD operations for ports, networks, policies, and rules were added in: