2024.1 Series Release Notes¶
2024.1-eom¶
Bug Fixes¶
In a deployment with wide and symmetric provider trees, i.e. where there are multiple child providers under the same root having inventory from the same resource class (e.g. in case of nova’s mdev GPU or PCI in Placement features), if the allocation candidate request asks for resources from those child RPs in multiple request groups, the number of possible allocation candidates grows rapidly. E.g.:
1 root, 8 child RPs with 1 unit of resource each; a_c requests 6 groups with 1 unit of resource each => 8*7*6*5*4*3=20160 possible candidates
1 root, 8 child RPs with 6 units of resources each; a_c requests 6 groups with 6 units of resources each => 8^6=262144 possible candidates
Placement generates these candidates fully before applying the limit parameter provided in the allocation candidate query, to be able to do random sampling if [placement]randomize_allocation_candidates is True. Placement takes excessive time and memory to generate this many allocation candidates, and the client might time out waiting for the response, or the Placement API service might run out of memory and crash.
To avoid request timeouts or out of memory events, a new [placement]max_allocation_candidates config option is implemented. This limit is applied not after the request's limit parameter but during the candidate generation process itself, so this new option can be used to limit the runtime and memory consumption of the Placement API service.
The new config option defaults to -1, meaning no limit, to keep the legacy behavior. We suggest tuning this config in the affected deployments based on the memory available for the Placement service and the timeout setting of the clients. A good initial value could be around 100000.
If the number of generated allocation candidates is limited by the [placement]max_allocation_candidates config option, then it is possible to get candidates from only a limited set of root providers (e.g. compute nodes), as placement uses a depth-first strategy, i.e. generating all candidates from the first root before considering the next one. To avoid this issue a new config option, [placement]allocation_candidates_generation_strategy, is introduced with two possible values:
depth-first generates all candidates from the first viable root provider before moving to the next. This is the default and it triggers the old behavior.
breadth-first generates candidates from viable roots in a round-robin fashion, creating one candidate from each viable root before creating the second candidate from the first root. This is the new behavior.
In a deployment where [placement]max_allocation_candidates is configured to a positive number, we recommend setting [placement]allocation_candidates_generation_strategy to breadth-first.
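For example, applying the suggested initial limit together with the recommended strategy would mean the following in placement.conf (the limit value is only the suggested starting point from above, not a default):
[placement]
max_allocation_candidates=100000
allocation_candidates_generation_strategy=breadth-first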
10.0.0¶
Upgrade Notes¶
Python 3.6 & 3.7 support has been dropped. The minimum version of Python now supported is Python 3.8.
9.0.0¶
New Features¶
The Placement policies have been modified to drop the system scope. Every API policy is scoped to project. This means that system scoped users will get a 403 permission denied error.
Currently, Placement supports the following default roles:
admin (Legacy admin)
service
project reader (for project resource usage)
For the details on what changed from the existing policy, please refer to the RBAC new guidelines. We have implemented phase-1 and phase-2 of the RBAC new guidelines.
Currently, scope checks and new defaults are disabled by default. You can enable them by switching the below config options in the placement.conf file:
[oslo_policy]
enforce_new_defaults=True
enforce_scope=True
Upgrade Notes¶
All the placement policies have dropped system scope and they are now project scoped only. The scope of a policy is not overridable in policy.yaml. If you have enabled scope enforcement and are using a system scope token to access placement APIs, you need to switch to a project scope token. Enforcing scope is not enabled by default but it will be enabled by default in a future release. The old defaults are deprecated but still enforced by default; they will be removed in a future release.
The placement:reshaper:reshape policy default has been changed to the service role only.
8.0.0¶
Upgrade Notes¶
Python 3.6 & 3.7 support has been dropped. The minimum version of Python now supported is Python 3.8.
Bug Fixes¶
Since microversion 1.34, it has been possible to provide a mappings field when creating new allocations via the POST /allocations or PUT /allocations/{allocation_id} APIs. This field should be a dictionary associating request group suffixes with a list of UUIDs identifying the resource providers that satisfied each group. Due to a typo, this was allowing an empty object ({}). This is now corrected.
7.0.0¶
New Features¶
Microversion 1.39 adds support for the in: syntax in the required query parameter in the GET /resource_providers API as well as in the required and requiredN query params of the GET /allocation_candidates API. It also adds support for repeating the required and requiredN parameters in the respective APIs. So:
required=in:T3,T4&required=T1,!T2
is supported and it means T1 and not T2 and (T3 or T4).
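As an illustration of combining the two forms in a granular request (the resource classes and the T1-T4 trait names are placeholders), the suffixed parameter can also use the in: syntax:
GET /allocation_candidates?resources=VCPU:1&required=T1,!T2&resources1=VGPU:1&required1=in:T3,T4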
The HTTPProxyToWSGI middleware is now enabled in the api pipeline. With this middleware enabled, actual client addresses are recorded in request logs instead of the addresses of intermediate load balancers.
6.0.0.0rc1¶
New Features¶
Microversion 1.38 adds support for a consumer_type (required) key in the request body of POST /allocations, PUT /allocations/{consumer_uuid} and in the response of GET /allocations/{consumer_uuid}. GET /usages requests gain a consumer_type key as an optional query parameter to filter usages based on consumer types. The GET /usages response will group results based on the consumer type and will include a new consumer_count key per type irrespective of whether the consumer_type was specified in the request. If an all consumer_type key is provided, all results are grouped under one key, all. Older allocations which were not created with a consumer type are considered to have an unknown consumer_type. If an unknown consumer_type key is provided, all results are grouped under one key, unknown.
The corresponding changes to POST /reshaper are included.
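As a minimal sketch (the UUIDs and the INSTANCE value are placeholders; other existing fields are unchanged), a PUT /allocations/{consumer_uuid} request body at microversion 1.38 could look like:
{
    "allocations": {
        "<rp_uuid>": {
            "resources": {"VCPU": 1, "MEMORY_MB": 512}
        }
    },
    "project_id": "<project_uuid>",
    "user_id": "<user_uuid>",
    "consumer_generation": null,
    "consumer_type": "INSTANCE"
}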
With the new microversion 1.37, placement now supports re-parenting and un-parenting resource providers via the PUT /resource_providers/{uuid} API.
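For example (the name and UUIDs are placeholders), a provider can be re-parented by updating its parent_provider_uuid, and un-parented by setting that key to null:
PUT /resource_providers/{uuid}
{
    "name": "<provider_name>",
    "parent_provider_uuid": "<new_parent_uuid>"
}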
5.0.0¶
New Features¶
The default policies provided by placement have been updated to add support for read-only roles. This is part of a broader community effort to support read-only roles and implement secure, consistent default policies. Refer to the Keystone documentation for more information on the reason for these changes.
Previously, all policies defaulted to rule:admin_api, which mapped to role:admin. The following rules now default to role:admin and system_scope:all instead:
placement:allocation_candidates:list
placement:allocations:delete
placement:allocations:list
placement:allocations:manage
placement:allocations:update
placement:reshaper:reshape
placement:resource_classes:list
placement:resource_classes:create
placement:resource_classes:show
placement:resource_classes:update
placement:resource_classes:delete
placement:resource_providers:create
placement:resource_providers:delete
placement:resource_providers:list
placement:resource_providers:show
placement:resource_providers:update
placement:resource_providers:aggregates:list
placement:resource_providers:aggregates:update
placement:resource_providers:allocations:list
placement:resource_providers:inventories:create
placement:resource_providers:inventories:delete
placement:resource_providers:inventories:list
placement:resource_providers:inventories:show
placement:resource_providers:inventories:update
placement:resource_providers:traits:delete
placement:resource_providers:traits:list
placement:resource_providers:traits:update
placement:resource_providers:usages
placement:traits:list
placement:traits:show
placement:traits:update
placement:traits:delete
The following rule now defaults to (role:reader and system_scope:all) or role:reader and project_id:%(project_id)s instead:
placement:usages
More information on these policy defaults can be found in the documentation.
The default policy used for the /usages API, placement:usages, has been updated to allow project users to view information about resource usage for their project, specified using the project_id query string parameter. Previously this API was restricted to admins.
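For example, a project user can query their own project's usage with:
GET /usages?project_id=<project_id>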
Upgrade Notes¶
The default value of the [oslo_policy] policy_file config option has been changed from policy.json to policy.yaml. Operators who are utilizing customized or previously generated static policy JSON files (which are not needed by default) should generate new policy files or convert them to YAML format. Use the oslopolicy-convert-json-to-yaml tool to convert a JSON policy file to a YAML formatted policy file in a backward compatible way.
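For example (the file paths are illustrative), a customized placement policy file could be converted with:
oslopolicy-convert-json-to-yaml --namespace placement --policy-file /etc/placement/policy.json --output-file /etc/placement/policy.yaml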
The deprecated placement policy has now been removed. This policy was used prior to the introduction of granular policies in the nova 18.0.0 (Rocky) release.
The deprecated [placement]/policy_file configuration option is removed. Use the more standard [oslo_policy]/policy_file config option. If you do not override policy with custom rules you will have nothing to do. If you do override the placement default policy then you will need to update your configuration to use the [oslo_policy]/policy_file config option.
Deprecation Notes¶
Use of JSON policy files was deprecated by the oslo.policy library during the Victoria development cycle. As a result, this deprecation is being noted in the Wallaby cycle with an anticipated future removal of support by oslo.policy. As such, operators will need to convert to YAML policy files. Please see the upgrade notes for details on migration of any custom policy files.
3.0.0¶
Upgrade Notes¶
Python 2.7 support has been dropped. The minimum version of Python now supported by placement is Python 3.6.
Bug Fixes¶
When a single resource provider receives many concurrent allocation writes, retries may be performed server side when there is a resource provider generation conflict. When those retries are all consumed, the client receives an HTTP 409 response and may choose to try the request again.
In an environment where high levels of concurrent allocation writes are common, such as a busy clustered hypervisor, the default retry count may be too low. See story 2006467.
A new configuration setting, [placement]/allocation_conflict_retry_count, has been added to address this situation. It defines the number of times to retry, server-side, writing allocations when there is a resource provider generation conflict.
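For example (the value shown is illustrative, not the default), a busy deployment could raise the retry count in placement.conf:
[placement]
allocation_conflict_retry_count=20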
2.0.0.0rc1¶
Prelude¶
The 2.0.0 release of placement is the first release where placement is available solely from its own project and must be installed separately from nova. If the extracted placement is not already in use, prior to upgrading to Train, the Stein version of placement must be installed. See Upgrading from Nova to Placement for details.
2.0.0 adds a suite of features which, combined, enable targeting candidate providers that have complex trees modeling NUMA layouts, multiple devices, and networks where affinity between and grouping among the members of the tree are required. These features will help to enable NFV and other high performance workloads in the cloud.
Also added is support for forbidden aggregates which allows groups of resource providers to only be used for specific purposes, such as reserving a group of compute nodes for licensed workloads.
Extensive benchmarking and profiling have led to massive performance enhancements, especially in environments with large numbers of resource providers and high concurrency.
New Features¶
In microversion 1.34 the body of the response to a GET /allocation_candidates request has been extended to include a mappings field with each allocation request. The value is a dictionary associating request group suffixes with the uuids of those resource providers that satisfy the identified request group. For convenience, this mapping can be included in the request payload for POST /allocations, PUT /allocations/{consumer_uuid}, and POST /reshaper, but it will be ignored.
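As a sketch (the _GPU suffix and the UUIDs are placeholders), each entry in allocation_requests may then carry a mapping such as the following, where the empty-string key identifies the unsuffixed request group:
"mappings": {
    "": ["<compute_rp_uuid>"],
    "_GPU": ["<gpu_rp_uuid>"]
}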
From microversion 1.36, a new same_subtree query parameter on GET /allocation_candidates is supported. It accepts a comma-separated list of request group suffix strings ($S). Each must exactly match a suffix on a granular group somewhere else in the request. Importantly, the identified request groups need not have a resources$S. If this is provided, at least one of the resource providers satisfying a specified request group must be an ancestor of the rest. The same_subtree query parameter can be repeated and each repeated group is treated independently.
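For instance (the _COMPUTE and _ACCEL suffixes and the custom trait are illustrative), requiring that the accelerator group be satisfied from the same subtree as the compute group could look like:
?resources_COMPUTE=VCPU:1&required_ACCEL=CUSTOM_FPGA&same_subtree=_COMPUTE,_ACCEL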
Microversion 1.35 adds support for the root_required query parameter to the GET /allocation_candidates API. It accepts a comma-delimited list of trait names, each optionally prefixed with ! to indicate a forbidden trait, in the same format as the required query parameter. This restricts allocation requests in the response to only those whose (non-sharing) tree’s root resource provider satisfies the specified trait requirements.
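For example (the custom trait is illustrative), excluding trees whose root provider carries a Windows licensing trait could look like:
?resources=VCPU:1&root_required=!CUSTOM_WINDOWS_LICENSED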
In microversion 1.33, the syntax for granular groupings of resource, required/forbidden trait, and aggregate association requests introduced in 1.25 has been extended to allow, in addition to numbers, strings from 1 to 64 characters in length consisting of a-z, A-Z, 0-9, _, and -. This is done to allow naming conventions (e.g., resources_COMPUTE and resources_NETWORK) to emerge in situations where multiple services are collaborating to make requests.
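For example (the resource classes and amounts are illustrative), two such named groups could be requested as:
?resources_COMPUTE=VCPU:2,MEMORY_MB:4096&resources_NETWORK=NET_BW_EGR_KILOBIT_PER_SEC:1000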
Add support for forbidden aggregates in the member_of query parameter in GET /resource_providers and GET /allocation_candidates. Forbidden aggregates are prefixed with a ! from microversion 1.32.
This negative expression can also be used in multiple member_of parameters:
?member_of=in:<agg1>,<agg2>&member_of=<agg3>&member_of=!<agg4>
would translate logically to “Candidate resource providers must be at least one of agg1 or agg2, definitely in agg3 and definitely not in agg4.”
We do NOT support ! within the in: list:
?member_of=in:<agg1>,<agg2>,!<agg3>
but we support the !in: prefix:
?member_of=!in:<agg1>,<agg2>,<agg3>
which is equivalent to:
?member_of=!<agg1>&member_of=!<agg2>&member_of=!<agg3>
where returned resource providers must not be in agg1, agg2, or agg3.
Specifying forbidden aggregates in granular requests, member_of<N>, is also supported from the same microversion, 1.32.
Upgrade Notes¶
The Missing Root Provider IDs upgrade check in the placement-status upgrade check command will now produce a failure if it detects any resource_providers records with a null root_provider_id value. Run the placement-manage db online_data_migrations command to heal these types of records.
Deprecation Notes¶
The [placement]/policy_file configuration option is deprecated and its usage is being replaced with the more standard [oslo_policy]/policy_file option. If you do not override policy with custom rules you will have nothing to do. If you do override the placement default policy then you will need to update your configuration to use the [oslo_policy]/policy_file option. By default, the [oslo_policy]/policy_file option will be used if the file it points at exists.
Bug Fixes¶
With the fix for bug story/2005842, OSProfiler support works again in the placement WSGI application.
Limiting nested resource providers with the limit=N query parameter when calling GET /allocation_candidates could result in incomplete provider summaries. This is now fixed so that all resource providers that are in the same trees as any provider mentioned in the limited allocation requests are shown in the provider summaries collection. For more information see story/2005859.
1.0.0.0rc1¶
Prelude¶
The 1.0.0 release of Placement is the first release where the Placement code is hosted in its own repository and managed as its own OpenStack project. Because of this, the majority of changes are not user-facing. There are a small number of new features (including microversion 1.31) and bug fixes, listed below.
A new document, Upgrading from Nova to Placement, has been created. It explains the steps required to upgrade to extracted Placement from Nova and to migrate data from the nova_api database to the placement database.
New Features¶
Add support for the in_tree query parameter to the GET /allocation_candidates API. It accepts a UUID for a resource provider. If this parameter is provided, the only resource providers returned will be those in the same tree as the given resource provider. The numbered syntax in_tree<N> is also supported. This restricts providers satisfying the Nth granular request group to the tree of the specified provider. This may be redundant with other in_tree<N> values specified in other groups (including the unnumbered group). However, it can be useful in cases where a specific resource (e.g. DISK_GB) needs to come from a specific sharing provider (e.g. shared storage).
For example, a request for VCPU and VGPU resources from myhost and DISK_GB resources from sharing1 might look like:
?resources=VCPU:1&in_tree=<myhost_uuid>&resources1=VGPU:1&in_tree1=<myhost_uuid>&resources2=DISK_GB:100&in_tree2=<sharing1_uuid>
A configuration setting [placement_database]/sync_on_startup is added which, if set to True, will cause database schema migrations to be called when the placement web application is started. This avoids the need to call placement-manage db sync separately.
To preserve backwards compatibility and avoid unexpected changes, the default of the setting is False.
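For example, enabling automatic schema migration at web application startup:
[placement_database]
sync_on_startup=True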
A new online data migration has been added to populate missing root_provider_id values in the resource_providers table. This can be run during the normal placement-manage db online_data_migrations routine. See Bug #1803925 for more details.
Upgrade Notes¶
An upgrade check was added to the placement-status upgrade check command for incomplete consumers which can be remedied by running the placement-manage db online_data_migrations command.
0.1.0¶
Upgrade Notes¶
A placement-status upgrade check command is added which can be used to check the readiness of a placement deployment before initiating an upgrade.
Bug Fixes¶
Previously, when an aggregate was specified by the member_of query parameter in the GET /allocation_candidates operation, the non-root providers in the aggregate were excluded unless their root provider was also in the aggregate. With this release, the non-root providers directly associated with the aggregate are also considered. See Bug #1792503 for details.