This document aims to define how we describe features listed in the Feature Support Matrix.
Our users want the features they rely on to be reliable and to keep solving their use cases. When things break, users expect their issues to be fixed quickly. It would be better if those regressions never happened in the first place.
We are taking a two-pronged approach:
The following definitions will help in understanding the rest of the document.
These are the users we will talk about in this document:
In reality, the picture is far more complex. For example, the application developer is likely to be split into separate observer, creator, and admin roles. Similarly, there are likely to be various levels of cloud operator permissions: some read-only, some limited to a subset of tenants, and so on.
Note: this is not attempting to be an exhaustive set of personas covering every facet of the different users. Instead, it aims to be a minimal set of users, so that we can use consistent terminology throughout this document.
To reduce the size of the matrix, we organize the features into groups. Each group maps to a set of user stories that can be validated by a set of scenario tests. Typically, this means a set of Tempest tests.
This list focuses on API concepts, such as attaching and detaching volumes, rather than deployment-specific concepts, such as attaching an iSCSI volume to a KVM-based VM.
A deployment maps to a specific test environment. A full description of the environment should be provided, so it is possible to reproduce the test results that are reported for each of the Feature Groups.
Note: this description includes all aspects of the deployment: the hypervisor, the number of nova-compute services, the storage being used, the network driver being used, the types of images being tested, etc.
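As a rough sketch of what such a deployment description could look like, here is a hypothetical ini fragment; the section name, keys, and values below are illustrative assumptions, not an established schema:

```ini
# Hypothetical deployment description covering the aspects listed above:
# hypervisor, number of nova-compute services, storage, network driver,
# and image types. Names and values are examples only.
[deployment.libvirt-kvm-local]
title = Libvirt/KVM with local storage
hypervisor = KVM (via libvirt)
nova_compute_services = 2
storage = local LVM volumes
network_driver = neutron with Open vSwitch
image_types = qcow2, raw
```

A description at this level of detail is what would let a third party reproduce the reported test results.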
The Feature Group Maturity rating is specific to the API concepts, rather than specific to a particular deployment. That detail is covered in the deployment rating for each feature group.
We are starting with the following Feature Group ratings:
Incomplete features are those that don’t have enough functionality to satisfy real world use cases.
Experimental features should be used with extreme caution. They are likely to have little or no upstream testing. With little testing there are likely to be many unknown bugs.
For a feature to be considered complete, we must have:
There are various reasons why a feature, once complete, becomes required, but currently it is largely when a feature is supported by all drivers. Note that any new driver needs to prove it supports all required features before it is allowed into upstream Nova. Please note that this list is technically unrelated to the DefCore effort, despite there being obvious parallels that could be drawn.
Required features are those that any new technology must support before being allowed into tree. The larger the list, the more features can be expected to be available on all Nova based clouds.
Deprecated features are those that are scheduled to be removed in a future major release of Nova. If a feature is marked as complete, it should never be deprecated. If a feature is incomplete or experimental for several releases, it runs the risk of being deprecated, and later removed from the code base.
The deployment rating is purely about the state of the tests for each Feature Group on a particular deployment.
There will be the following ratings:
The eventual goal is to automate this list from some third-party CI reporting system, but so we can make progress, for now this will be a manual inspection documented in a hand-written ini file. Ideally, this will be reviewed every milestone.
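To make the manual process concrete, a hand-written ini file recording the results of such an inspection might look like the following sketch. All section names, keys, and rating values here are hypothetical, introduced only for illustration:

```ini
# Hypothetical record of a manual inspection of one Feature Group.
# The feature group's maturity rating is independent of any deployment;
# the per-deployment rating reflects only the state of its tests.
[group.attach-volume]
title = Attach and detach volumes
maturity = complete

# Deployment-specific test rating for the group above.
[group.attach-volume.deployment.libvirt-kvm-local]
rating = tested
notes = Validated by the corresponding Tempest scenario tests.
```

Keeping the file in this shape would make it straightforward to later generate the same data automatically from a CI reporting system.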
This is a look at features targeted at application developers, and the current state of each feature, independent of the specific deployment.
Please note: this is still a work in progress!
Key TODOs: