This spec describes how to implement the QoS extension for networking-midonet. The backend side is covered by another spec. [5]
Use the Neutron QoS plugin as-is and implement a MidoNet-specific notification driver which communicates with the MidoNet API.
[DEFAULT]
service_plugins = qos
[qos]
notification_drivers = midonet,message_queue
setup.cfg:
neutron.qos.notification_drivers =
    midonet = midonet.neutron.services.qos.driver:MidoNetQosServiceNotificationDriver
Note: the message_queue driver [4] is the AMQP RPC [7] based driver for the reference implementation. It isn't necessary for MidoNet-only deployments.
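A minimal sketch of such a driver, assuming the notification driver base class [3]. The MidoNet API client (the "client" argument) and its method names are hypothetical placeholders; the actual client interface is up to the backend spec [5].

from neutron.services.qos.notification_drivers import qos_base


class MidoNetQosServiceNotificationDriver(
        qos_base.QosServiceNotificationDriverBase):
    """Sketch of a driver forwarding QoS policies to the MidoNet API."""

    def __init__(self, client):
        # "client" is a hypothetical MidoNet API client; its actual
        # interface is defined by the backend spec [5].
        self._client = client

    def get_description(self):
        return "MidoNet QoS notification driver"

    def create_policy(self, context, policy):
        self._client.create_qos_policy(policy)

    def update_policy(self, context, policy):
        # Always called with the whole policy, including all of its rules.
        self._client.update_qos_policy(policy)

    def delete_policy(self, context, policy):
        self._client.delete_qos_policy(policy)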
The Neutron QoS plugin [1] has a notification driver mechanism [2], which networking-midonet can use to implement backend notifications.
When the Neutron QoS plugin receives an API request, it updates the corresponding DB rows. After committing the DB changes, it calls one of the following methods of the loaded notification drivers [3]: create_policy, update_policy, or delete_policy.
Note: a request for a rule (e.g. update_policy_rule) ends up with a notification for the entire policy the rule belongs to.
Note: a request for a specific rule type (e.g. update_policy_dscp_marking_rule) is automatically converted to the generic method (e.g. update_policy_rule) by the QoS extension, namely QoSPluginBase. [6]
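A rough, self-contained illustration of that proxying pattern (not the actual QoSPluginBase code, which covers more verbs and also looks up the rule object class):

import functools
import re

# Illustrative only: a per-rule-type method name is matched and redirected
# to the generic method via __getattr__.
_PROXY_RE = re.compile(r'^update_policy_(?P<rule_type>\w+)_rule$')


class RuleProxySketch(object):
    def update_policy_rule(self, context, *args, **kwargs):
        # The generic method; the real plugin updates the DB and then
        # notifies the drivers with the entire policy.
        print('update_policy_rule called for %s' % kwargs['rule_type'])

    def __getattr__(self, name):
        match = _PROXY_RE.match(name)
        if match is None:
            raise AttributeError(name)
        return functools.partial(self.update_policy_rule,
                                 rule_type=match.group('rule_type'))


# The rule-type specific request ends up in the generic method:
RuleProxySketch().update_policy_dscp_marking_rule('ctx')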
Notification driver methods are considered asynchronous and are assumed to always succeed. [9] Currently there's no convenient way to report errors from the backend. While it's possible for a driver to return an error by raising an exception, if multiple drivers are loaded and one of them fails that way, the rest of the drivers are simply skipped. Even in the simplest case where only the MidoNet QoS driver is loaded, there's no mechanism to mark the resource as in error or to roll back the operation. There's an ongoing effort in Neutron [8] in that area, which might improve the situation.
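The skipping follows from the driver manager [2], which simply iterates over the loaded drivers without any error handling. A self-contained illustration with stand-in drivers:

class FailingDriver(object):
    def update_policy(self, context, policy):
        raise RuntimeError('backend error')


class OtherDriver(object):
    def update_policy(self, context, policy):
        print('notified')


# The manager [2] calls the drivers in order; the exception from
# FailingDriver means OtherDriver is never notified.
for driver in [FailingDriver(), OtherDriver()]:
    driver.update_policy(None, None)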
For ML2, the existing QoS extension driver should work.
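For example, it can be enabled via the usual ML2 extension_drivers option:

[ml2]
extension_drivers = qos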
If we want to make this feature available for the monolithic plugins, the equivalent needs to be implemented for them.
Alternatively, instead of the notification driver, we can implement the entire QoS service plugin ourselves.
[DEFAULT]
service_plugins = midonet_qos
setup.cfg:
neutron.service_plugins =
    midonet_qos = midonet.neutron.services.qos.plugin:MidonetQosPlugin
This might fit the current backend design [5] better.
We can re-use the reference QoS plugin and its DB models by inheriting from its class, although that's a rather discouraged pattern these days. This way, the first implementation might be simpler, but it might be tricky to deal with other backends (consider heterogeneous ML2 deployments) and with future enhancements in Neutron.
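A minimal sketch of this alternative, assuming inheritance from the reference plugin [1]. As before, the "client" argument and its methods are hypothetical; the real interface belongs to the backend spec [5].

from neutron.services.qos import qos_plugin


class MidonetQosPlugin(qos_plugin.QoSPlugin):
    """Sketch of a QoS plugin which also drives the MidoNet backend."""

    def __init__(self, client):
        super(MidonetQosPlugin, self).__init__()
        self._client = client  # hypothetical MidoNet API client

    def create_policy(self, context, policy):
        # Re-use the reference DB logic, then notify the backend.
        policy = super(MidonetQosPlugin, self).create_policy(context, policy)
        self._client.create_qos_policy(policy)
        return policy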
[1] https://github.com/openstack/neutron/blob/2be2d97d11719db88537a9664c95f1b6b11d3707/neutron/services/qos/qos_plugin.py
[2] https://github.com/openstack/neutron/blob/2be2d97d11719db88537a9664c95f1b6b11d3707/neutron/services/qos/notification_drivers/manager.py
[3] https://github.com/openstack/neutron/blob/2be2d97d11719db88537a9664c95f1b6b11d3707/neutron/services/qos/notification_drivers/qos_base.py#L18
[4] https://github.com/openstack/neutron/blob/2be2d97d11719db88537a9664c95f1b6b11d3707/neutron/services/qos/notification_drivers/message_queue.py#L40
[5] https://review.gerrithub.io/#/c/289456/
[6] https://github.com/openstack/neutron/blob/2be2d97d11719db88537a9664c95f1b6b11d3707/neutron/extensions/qos.py#L225
[7] http://docs.openstack.org/developer/neutron/devref/quality_of_service.html#rpc-communication
[8] https://review.openstack.org/#/c/351858/
[9] https://bugs.launchpad.net/neutron/+bug/1627749