There are a number of skills that make a great Neutron developer: writing good code, reviewing effectively, listening to peer feedback, etc. The objective of this document is to describe, by means of examples, the pitfalls and the good and bad practices that ‘we’ as a project encounter on a daily basis, and that either slow us down or speed us up while contributing to Neutron.
By reading and collaboratively contributing to such a knowledge base, your development and review cycle becomes shorter, because you will learn (and teach others after you) what to watch out for and how to be proactive in order to prevent negative feedback, minimize programming errors, write better tests, and so on and so forth... in a nutshell, how to become an effective Neutron developer.
The notes below are meant to be free-form and brief by design. They are not meant to replace or duplicate OpenStack documentation, or any project-wide documentation initiative like peer-review notes or the team guide. For this reason, references are acceptable and should be favored if the shortcut is deemed useful to expand on the distilled information. We will try to keep these notes tidy by breaking them down into sections if it makes sense. Feel free to add, adjust, or remove as you see fit. Please do so taking into consideration yourself and other Neutron developers as readers. Capture your experience during development and review, and add any comment that you believe will make your life and others’ easier.
Happy hacking!
Document common pitfalls as well as good practices done during plugin development.
Document common pitfalls as well as good practices done during database development.
first() does not raise an exception when the query matches nothing; it simply returns None, unlike one(), which raises NoResultFound.
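A minimal, self-contained sketch (an in-memory SQLite engine and a made-up Widget model, not Neutron code) showing the difference:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.orm.exc import NoResultFound

    Base = declarative_base()

    class Widget(Base):
        __tablename__ = 'widgets'
        id = Column(Integer, primary_key=True)
        name = Column(String(32))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # first() quietly returns None when nothing matches ...
    assert session.query(Widget).filter_by(name='missing').first() is None

    # ... whereas one() raises when there is no (or more than one) result.
    try:
        session.query(Widget).filter_by(name='missing').one()
    except NoResultFound:
        pass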
Do not fetch an object just to delete it. If you can, call delete() on the query object instead. Read the warnings for more details about in-Python cascades.
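A hedged sketch (reusing the Widget model and session from the example above) of deleting through the query object instead of fetching first:

    # Wasteful: load the object only to delete it (an extra round trip).
    widget = session.query(Widget).filter_by(name='obsolete').first()
    if widget is not None:
        session.delete(widget)

    # Better: issue a single bulk DELETE from the query object. Note that
    # Query.delete() bypasses in-Python cascades and ORM events, which is
    # what the warnings mentioned above are about.
    session.query(Widget).filter_by(name='obsolete').delete()
    session.commit()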
For PostgreSQL, if you’re using GROUP BY, everything in the SELECT list must be an aggregate (SUM(...), COUNT(...), etc.) or appear in the GROUP BY clause.
The incorrect variant:
    q = query(Object.id, Object.name,
              func.count(Object.number)).group_by(Object.name)
The correct variant:
    q = query(Object.id, Object.name,
              func.count(Object.number)).group_by(Object.id, Object.name)
Beware of the InvalidRequestError exception. There is even a Neutron bug registered for it. Bear in mind that this error may also occur when nesting transaction blocks and the innermost block raises an error without a proper rollback. Consider whether savepoints fit your use case.
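A hedged sketch of a savepoint, so a failing inner block does not poison the whole transaction; it reuses the Widget model above but assumes a session bound to a MySQL or PostgreSQL engine, since pysqlite's default settings do not handle SAVEPOINTs reliably:

    try:
        with session.begin_nested():  # emits SAVEPOINT
            session.add(Widget(name='maybe'))
            raise ValueError('inner block failed')
    except ValueError:
        # Only the savepoint was rolled back; the outer transaction and
        # the session itself are still usable.
        pass
    session.add(Widget(name='kept'))
    session.commit()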
When designing data models that are related to each other, be careful about how you model the relationships’ loading strategy. For instance, a joined relationship can be much more efficient than the alternatives in some cases (examples include router gateways or network availability zones).
If you add a relationship to a Neutron object that will be referenced in the majority of cases where the object is retrieved, be sure to use the lazy='joined' parameter to the relationship so the related objects are loaded as part of the same query. Otherwise, the default strategy is 'select', which emits a new DB query to retrieve each related object, adversely impacting performance. For example, see patch 88665, which resulted in a significant improvement since router retrieval functions always include the gateway interface.
Conversely, do not use lazy='joined' if the relationship is only used in corner cases, because the JOIN statement comes at a cost that may be significant if the relationship contains many objects. For example, see patch 168214, which reduced a subnet retrieval by ~90% by avoiding a join to the IP allocation table.
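A minimal sketch (made-up models, not the real Neutron schema) of how the loading strategy is declared on a relationship:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Port(Base):
        __tablename__ = 'ports'
        id = Column(Integer, primary_key=True)
        name = Column(String(32))

    class Router(Base):
        __tablename__ = 'routers'
        id = Column(Integer, primary_key=True)
        gw_port_id = Column(Integer, ForeignKey('ports.id'))
        # Needed on almost every router retrieval, so fold it into the
        # same SELECT with a JOIN rather than a follow-up query.
        gw_port = relationship(Port, lazy='joined')
        # Omitting lazy keeps the default 'select' strategy: related rows
        # are only fetched (with a separate query) when the attribute is
        # actually accessed. Prefer that for relationships that are
        # rarely used or potentially very large.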
When writing extensions to existing objects (e.g. Networks), ensure that they are written in a way that the data on the object can be calculated without an additional DB lookup. If that’s not possible, ensure the DB lookup is performed once in bulk during a list operation. Otherwise a list call for 1000 objects will change from a constant, small number of DB queries to 1000 DB queries. For example, see patch 257086, which changed the availability zone code from the incorrect style to a database-friendly one.
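A hedged sketch of the two styles, using a made-up AZHint model (with network_id and hint columns) and plain dicts for the API objects; the real extension hooks differ, but the query pattern is the point:

    # DB-unfriendly: one extra query per object in the list result.
    def extend_networks_slow(session, networks):
        for net in networks:
            hints = session.query(AZHint).filter_by(
                network_id=net['id']).all()
            net['availability_zone_hints'] = [h.hint for h in hints]

    # DB-friendly: a single query keyed on all the ids, then dict lookups.
    def extend_networks_bulk(session, networks):
        ids = [net['id'] for net in networks]
        rows = session.query(AZHint).filter(AZHint.network_id.in_(ids)).all()
        hints_by_net = {}
        for row in rows:
            hints_by_net.setdefault(row.network_id, []).append(row.hint)
        for net in networks:
            net['availability_zone_hints'] = hints_by_net.get(net['id'], [])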
Sometimes in code we use the following structures:
    def create():
        with context.session.begin(subtransactions=True):
            create_something()
            try:
                _do_other_thing_with_created_object()
            except Exception:
                with excutils.save_and_reraise_exception():
                    delete_something()

    def _do_other_thing_with_created_object():
        with context.session.begin(subtransactions=True):
            ....
The problem is that when an exception is raised in _do_other_thing_with_created_object it is caught in the except block, but the object cannot be deleted in the except section because the internal transaction from _do_other_thing_with_created_object has been rolled back. To avoid this, nested transactions should be used. For such cases the helper function safe_creation has been created in neutron/db/common_db_mixin.py. So, the example above should be replaced with:
    _safe_creation(context, create_something, delete_something,
                   _do_other_thing_with_created_object)
where a nested transaction is used in the _do_other_thing_with_created_object function.
The _safe_creation function can also be passed the transaction=False argument to prevent any transaction from being created, just to leverage the automatic deletion-on-exception logic.
Beware of ResultProxy.inserted_primary_key, which returns a list of last inserted primary keys, not the last inserted primary key:
    result = session.execute(mymodel.insert().values(**values))
    # result.inserted_primary_key is a list even if we inserted a unique row!
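    # To get the single value for a one-column primary key, index into it:
    # pk = result.inserted_primary_key[0]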
Beware of pymysql, which can silently unwrap a single-element list (and hide a wrong use of ResultProxy.inserted_primary_key, for example):
e.execute("create table if not exists foo (bar integer)")
e.execute(foo.insert().values(bar=1))
e.execute(foo.insert().values(bar=[2]))
The second insert should crash (a list is provided where an integer is expected). It crashes at least with the mysql and postgresql backends, but succeeds with pymysql because it transforms the statements into:
    INSERT INTO foo (bar) VALUES (1)
    INSERT INTO foo (bar) VALUES ((2))
Document common pitfalls as well as good practices done when invoking system commands and interacting with linux utils.
Document common pitfalls as well as good practices done when using eventlet and monkey patching.
Document common pitfalls as well as good practices done when writing tests, any test. For anything more elaborate, please visit the testing section.
Prefer low-level testing over full-path testing (e.g. do not test the database via client calls). The former is to be favored in unit testing, whereas the latter is to be favored in functional testing.
Prefer specific assertions (assert(Not)In, assert(Not)IsInstance, assert(Not)IsNone, etc.) over generic ones (assertTrue/False, assertEqual) because they raise more meaningful errors:
    def test_specific(self):
        self.assertIn(3, [1, 2])
        # raises a meaningful error: "MismatchError: 3 not in [1, 2]"

    def test_generic(self):
        self.assertTrue(3 in [1, 2])
        # raises a meaningless error: "AssertionError: False is not true"
Use the pattern “self.assertEqual(expected, observed)”, not the opposite; it helps reviewers understand which is the expected and which is the observed value in non-trivial assertions. The expected and observed values are also labeled in the output when the assertion fails.
Prefer specific assertions (assertTrue, assertFalse) over assertEqual(True/False, observed).
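A trivial sketch combining both conventions (do_something() and is_enabled are made-up placeholders for whatever your test exercises):

    def test_assertion_conventions(self):
        observed = do_something()
        # expected value first, observed value second:
        self.assertEqual('expected-value', observed)
        # dedicated boolean assertion instead of assertEqual(True, ...):
        self.assertTrue(is_enabled)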
Don’t write tests that don’t test the intended code. This might seem silly but it’s easy to do with a lot of mocks in place. Ensure that your tests break as expected before your code change.
Avoid heavy use of the mock library to test your code. If your code requires more than one mock to ensure that it does the correct thing, it needs to be refactored into smaller, testable units. Otherwise we depend on fullstack/tempest/api tests to test all of the real behavior and we end up with code containing way too many hidden dependencies and side effects.
All behavior changes to fix bugs should include a test that prevents a regression. If you made a change and it didn’t break a test, it means the code was not adequately tested in the first place; that is not an excuse to leave it untested.
Test the failure cases. Use a mock side effect to throw the necessary exceptions to test your ‘except’ clauses.
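A minimal sketch of a test method using mock's side_effect (self.plugin, _create_on_backend, create_resource and self.context are made-up names):

    from unittest import mock

    def test_create_propagates_backend_error(self):
        with mock.patch.object(self.plugin, '_create_on_backend',
                               side_effect=RuntimeError('boom')):
            self.assertRaises(RuntimeError,
                              self.plugin.create_resource, self.context)
            # then assert the cleanup/rollback path ran as expected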
Don’t mimic existing tests that violate these guidelines. We are attempting to replace all of these so more tests like them create more work. If you need help writing a test, reach out to the testing lieutenants and the team on IRC.
Mocking open() is a dangerous practice because it can lead to unexpected bugs like bug 1503847. In fact, when the built-in open method is mocked during tests, some utilities (like debtcollector) may still rely on the real thing, and may end up using the mock rather than what they are really looking for. If you must, consider using OpenFixture, but it is better not to mock open() at all.
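If you do need to fake file contents, one hedged alternative is to patch open only where the code under test resolves it, instead of the builtin everywhere (mypackage.config_reader and its load() function are made-up names):

    from unittest import mock

    from mypackage import config_reader  # hypothetical module under test

    def test_load_parses_file(self):
        fake_open = mock.mock_open(read_data='key = value\n')
        # Patch open as seen by the module under test only, so unrelated
        # code (debtcollector, coverage, ...) still gets the real open().
        with mock.patch('mypackage.config_reader.open', fake_open,
                        create=True):
            self.assertEqual({'key': 'value'},
                             config_reader.load('/etc/fake.conf'))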
Document common pitfalls as well as good practices done when extending the RPC Interfaces.
Sometimes we want to refactor things in a non-backward-compatible way. In most cases you can use debtcollector to mark things for deprecation. Config items have deprecation options supported by oslo.config.
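A minimal sketch of both mechanisms; the function and option names are made up:

    from debtcollector import removals
    from oslo_config import cfg

    def new_helper():
        return 42

    # Emits a DeprecationWarning whenever the old name is called.
    @removals.remove(message='Use new_helper() instead.',
                     removal_version='Ocata')
    def old_helper():
        return new_helper()

    # oslo.config has built-in support for deprecating options.
    old_opt = cfg.StrOpt('old_option',
                         deprecated_for_removal=True,
                         deprecated_reason='Replaced by new_option.',
                         deprecated_since='Newton')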
The deprecation process must follow the standard deprecation requirements as they apply to Neutron development.
Document common pitfalls as well as good practices done when writing code that needs to process a lot of data.
Document common pitfalls as well as good practices done when instrumenting your code.
Document common pitfalls as well as good practices done when writing code that is used to interface with other projects, like Keystone or Nova.
Document common pitfalls as well as good practices done when writing docstrings.
Document common nits and pedantic comments to watch out for.
Document common pitfalls as well as good practices done when writing commit messages. For more details see Git commit message best practices. This is the TL;DR version with the important points for committing to Neutron.
Document common pitfalls as well as good practices done when dealing with OpenStack CI.