Plugins Reference¶
Deployment¶
Engines¶
DevstackEngine [Engine]¶
Deploy Devstack cloud.
Sample configuration:
{
"type": "DevstackEngine",
"devstack_repo": "https://example.com/devstack/",
"local_conf": {
"ADMIN_PASSWORD": "secret"
},
"provider": {
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "10.2.0.8"}]
}
}
Namespace: default
Parameters:
- local_conf (dict) [ref]
- localrc (dict) [ref]
- devstack_branch (str) [ref]
- provider (dict) [ref]
- type (str) [ref]
- devstack_repo (str) [ref]
ExistingCloud [Engine]¶
Just use an existing OpenStack deployment without deploying anything.
To use ExistingCloud, you should put credential information into the config:
{
"type": "ExistingCloud",
"auth_url": "http://localhost:5000/v2.0/",
"region_name": "RegionOne",
"endpoint_type": "public",
"admin": {
"username": "admin",
"password": "password",
"tenant_name": "demo"
},
"https_insecure": False,
"https_cacert": "",
}
Or, using keystone v3 API endpoint:
{
"type": "ExistingCloud",
"auth_url": "http://localhost:5000/v3/",
"region_name": "RegionOne",
"endpoint_type": "public",
"admin": {
"username": "admin",
"password": "admin",
"user_domain_name": "admin",
"project_name": "admin",
"project_domain_name": "admin",
},
"https_insecure": False,
"https_cacert": "",
}
To specify extra options you can use the special "extra" parameter:
{
"type": "ExistingCloud",
"auth_url": "http://localhost:5000/v2.0/",
"region_name": "RegionOne",
"endpoint_type": "public",
"admin": {
"username": "admin",
"password": "password",
"tenant_name": "demo"
},
"https_insecure": False,
"https_cacert": "",
"extra": {"some_var": "some_value"}
}
Namespace: default
Parameters:
- https_insecure (bool) [ref]
- endpoint [ref]
- auth_url (str) [ref]
- region_name (str) [ref]
endpoint_type [ref]
Set of expected values: 'admin', 'internal', 'public', 'None'.
- extra (dict) [ref]
admin [ref]
N/a
- https_cacert (str) [ref]
- type (str) [ref]
users (list) [ref]
Elements of the list should follow format(s) described below:
{'$ref': '#/definitions/user'}
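For example, non-admin users can be listed explicitly alongside the admin credential (a sketch that follows the v2.0 credential format shown above; the usernames and tenant names are placeholders):
{
    "type": "ExistingCloud",
    "auth_url": "http://localhost:5000/v2.0/",
    "region_name": "RegionOne",
    "endpoint_type": "public",
    "admin": {
        "username": "admin",
        "password": "password",
        "tenant_name": "demo"
    },
    "users": [
        {
            "username": "not_an_admin",
            "password": "password",
            "tenant_name": "demo"
        }
    ]
}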
LxcEngine [Engine]¶
Deploy with other engines in lxc containers.
Sample configuration:
{
"type": "LxcEngine",
"provider": {
"type": "DummyProvider",
"credentials": [{"user": "root", "host": "example.net"}]
},
"distribution": "ubuntu",
"release": "raring",
"tunnel_to": ["10.10.10.10", "10.10.10.11"],
"start_lxc_network": "10.1.1.0/24",
"container_name_prefix": "devstack-node",
"containers_per_host": 16,
"start_script": "~/start.sh",
"engine": { ... }
}
Namespace: default
Parameters:
start_lxc_network (str) [ref]
Should follow the pattern: ^(\d+\.){3}\d+/\d+$.
- containers_per_host (int) [ref]
tunnel_to (list) [ref]
Elements of the list should follow format(s) described below:
- Type: str.
- release (str) [ref]
- distribution (str) [ref]
- container_name (str) [ref]
- type (str) [ref]
- provider (dict) [ref]
Module: rally.deployment.engines.lxc
MultihostEngine [Engine]¶
Deploy multihost cloud with existing engines.
Sample configuration:
{
"type": "MultihostEngine",
"controller": {
"type": "DevstackEngine",
"provider": {
"type": "DummyProvider"
}
},
"nodes": [
{"type": "Engine1", "config": "Config1"},
{"type": "Engine2", "config": "Config2"},
{"type": "Engine3", "config": "Config3"},
]
}
If {controller_ip} is specified in configuration values, it will be replaced with controller address taken from credential returned by controller engine:
...
"nodes": [
{
"type": "DevstackEngine",
"local_conf": {
"GLANCE_HOSTPORT": "{controller_ip}:9292",
...
Namespace: default
Provider Factories¶
CobblerProvider [Provider Factory]¶
Creates servers via PXE boot from given cobbler selector.
Cobbler selector may contain a combination of fields to select a number of systems. It is the user's responsibility to provide a selector that actually selects something. Since Cobbler stores server passwords encrypted, the user needs to specify the password in the configuration. All selected servers must have the same password.
Sample configuration:
{
"type": "CobblerProvider",
"host": "172.29.74.8",
"user": "cobbler",
"password": "cobbler",
"system_password": "password"
"selector": {"profile": "cobbler_profile_name", "owners": "user1"}
}
Namespace: default
Parameters:
- host (str) [ref]
- password (str) [ref]
- selector (dict) [ref]
- user (str) [ref]
- system_password (str) [ref]
ExistingServers [Provider Factory]¶
Just return endpoints from its own configuration.
Sample configuration:
{
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "localhost"}]
}
Namespace: default
Parameters:
credentials (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "required": [ "host", "user" ], "type": "object", "properties": { "host": { "type": "string" }, "password": { "type": "string" }, "port": { "type": "integer" }, "key": { "type": "string" }, "user": { "type": "string" } } }
- type (str) [ref]
LxcProvider [Provider Factory]¶
Provide lxc container(s) on given host.
Sample configuration:
{
"type": "LxcProvider",
"distribution": "ubuntu",
"start_lxc_network": "10.1.1.0/24",
"containers_per_host": 32,
"tunnel_to": ["10.10.10.10"],
"forward_ssh": false,
"container_name_prefix": "rally-multinode-02",
"host_provider": {
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "host.net"}]
}
}
Namespace: default
Parameters:
start_lxc_network (str) [ref]
Should follow the pattern: ^(\d+\.){3}\d+/\d+$.
- containers_per_host (int) [ref]
- forward_ssh (bool) [ref]
- release (str) [ref]
- distribution (str) [ref]
tunnel_to (list) [ref]
Elements of the list should follow format(s) described below:
- Type: str.
- type (str) [ref]
- container_name_prefix (str) [ref]
- host_provider (dict) [ref]
OpenStackProvider [Provider Factory]¶
Provide VMs using an existing OpenStack cloud.
Sample configuration:
{
"type": "OpenStackProvider",
"amount": 42,
"user": "admin",
"tenant": "admin",
"password": "secret",
"auth_url": "http://example.com/",
"flavor_id": 2,
"image": {
"checksum": "75846dd06e9fcfd2b184aba7fa2b2a8d",
"url": "http://example.com/disk1.img",
"name": "Ubuntu Precise(added by rally)",
"format": "qcow2",
"userdata": "disable_root: false"
},
"secgroup_name": "Rally"
}
Namespace: default
Parameters:
- deployment_name (str) [ref]
- image (dict) [ref]
- auth_url (str) [ref]
- user (str) [ref]
- password (str) [ref]
- tenant (str) [ref]
nics (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "additionalProperties": false, "required": [ "net-id" ], "type": "object", "properties": { "net-id": { "type": "string" } } }
- region (str) [ref]
- wait_for_cloud_init (bool) [ref]
- amount (int) [ref]
- flavor_id (str) [ref]
- secgroup_name (str) [ref]
- type (str) [ref]
- config_drive (bool) [ref]
VirshProvider [Provider Factory]¶
Create VMs from prebuilt templates.
Sample configuration:
{
"type": "VirshProvider",
"connection": "alex@performance-01",
"template_name": "stack-01-devstack-template",
"template_user": "ubuntu",
"template_password": "password"
}
where:
- connection - SSH connection to the VMs host
- template_name - VM image template
- template_user - VM user to launch DevStack
- template_password - VM password to launch DevStack
Namespace: default
Parameters:
- template_name (str) [ref]
connection (str) [ref]
Should follow the pattern: ^.+@.+$.
- template_password (str) [ref]
- type (str) [ref]
- template_user (str) [ref]
Task Component¶
Charts¶
Lines [Chart]¶
Display results as generic chart with lines.
This plugin processes additive data and displays it in HTML report as linear chart with X axis bound to iteration number. Complete output data is displayed as linear chart as well, without any processing.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Additive data as stacked area",
"description": "Iterations trend for foo and bar",
"chart_plugin": "Lines",
"data": [["foo", 12], ["bar", 34]]},
complete={"title": "Complete data as stacked area",
"description": "Data is shown as stacked area, as-is",
"chart_plugin": "Lines",
"data": [["foo", [[0, 5], [1, 42], [2, 15], [3, 7]]],
["bar", [[0, 2], [1, 1.3], [2, 5], [3, 9]]]],
"label": "Y-axis label text",
"axis_label": "X-axis label text"})
Namespace: default
Module: rally.task.processing.charts
Pie [Chart]¶
Display results as pie, calculate average values for additive data.
This plugin processes additive data and calculates average values. Both additive and complete data are displayed in the HTML report as pie charts.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Additive output",
"description": ("Pie with average data "
"from all iterations values"),
"chart_plugin": "Pie",
"data": [["foo", 12], ["bar", 34], ["spam", 56]]},
complete={"title": "Complete output",
"description": "Displayed as a pie, as-is",
"chart_plugin": "Pie",
"data": [["foo", 12], ["bar", 34], ["spam", 56]]})
Namespace: default
Module: rally.task.processing.charts
StackedArea [Chart]¶
Display results as stacked area.
This plugin processes additive data and displays it in HTML report as stacked area with X axis bound to iteration number. Complete output data is displayed as stacked area as well, without any processing.
Keys "description", "label" and "axis_label" are optional.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Additive data as stacked area",
"description": "Iterations trend for foo and bar",
"chart_plugin": "StackedArea",
"data": [["foo", 12], ["bar", 34]]},
complete={"title": "Complete data as stacked area",
"description": "Data is shown as stacked area, as-is",
"chart_plugin": "StackedArea",
"data": [["foo", [[0, 5], [1, 42], [2, 15], [3, 7]]],
["bar", [[0, 2], [1, 1.3], [2, 5], [3, 9]]]],
"label": "Y-axis label text",
"axis_label": "X-axis label text"})
Namespace: default
Module: rally.task.processing.charts
StatsTable [Chart]¶
Calculate statistics for additive data and display it as table.
This plugin processes additive data and composes statistics that are displayed as a table in the HTML report.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Statistics",
"description": ("Table with statistics generated "
"from all iterations values"),
"chart_plugin": "StatsTable",
"data": [["foo stat", 12], ["bar", 34], ["spam", 56]]})
Namespace: default
Module: rally.task.processing.charts
Table [Chart]¶
Display complete output as a table; cannot be used for additive data.
Use this plugin for complete output data to display it in the HTML report as a table. This plugin cannot be used for additive data because it does not contain any processing logic.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
complete={"title": "Arbitrary Table",
"description": "Just show columns and rows as-is",
"chart_plugin": "Table",
"data": {"cols": ["foo", "bar", "spam"],
"rows": [["a row", 1, 2], ["b row", 3, 4],
["c row", 5, 6]]}})
Namespace: default
Module: rally.task.processing.charts
TextArea [Chart]¶
Arbitrary text
This plugin processes complete data and displays it as text in the HTML report.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
complete={"title": "Script Inline",
"chart_plugin": "TextArea",
"data": ["first output", "second output",
"third output"]]})
Namespace: default
Module: rally.task.processing.charts
Contexts¶
allow_ssh [Context]¶
Sets up security groups for all users to access VM via SSH.
Namespace: default
Parameters:
- null [ref]
api_versions [Context]¶
Context for specifying OpenStack clients versions and service types.
Some OpenStack services support several API versions. To recognize the endpoints of each version, separate service types are provided in Keystone service catalog.
Rally has a map of default service names to service types. But since a service type is an entity which can be configured manually by an admin (via the Keystone API) without any relation to the service name, such a map can be insufficient.
Also, the Keystone service catalog does not provide a map of types to names (this statement is true for Keystone < 3.3).
This context was designed for use with non-default service types and non-default API versions.
An example of specifying API version:
# In this example we will launch NovaKeypair.create_and_list_keypairs
# scenario on 2.2 api version.
{
"NovaKeypair.create_and_list_keypairs": [
{
"args": {
"key_type": "x509"
},
"runner": {
"type": "constant",
"times": 10,
"concurrency": 2
},
"context": {
"users": {
"tenants": 3,
"users_per_tenant": 2
},
"api_versions": {
"nova": {
"version": 2.2
}
}
}
}
]
}
An example of specifying API version along with service type:
# In this example we will launch CinderVolumes.create_and_attach_volume
# scenario on Cinder V2
{
"CinderVolumes.create_and_attach_volume": [
{
"args": {
"size": 10,
"image": {
"name": "^cirros.*-disk$"
},
"flavor": {
"name": "m1.tiny"
},
"create_volume_params": {
"availability_zone": "nova"
}
},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 1
},
"context": {
"users": {
"tenants": 2,
"users_per_tenant": 2
},
"api_versions": {
"cinder": {
"version": 2,
"service_type": "volumev2"
}
}
}
}
]
}
Also, it is possible to use the service name as an identifier of the service endpoint, but an admin user is required (Keystone can return a map of service names to types, but this API is permitted only for admins). An example:
# Similar to the previous example, but `service_name` argument is used
# instead of `service_type`
{
"CinderVolumes.create_and_attach_volume": [
{
"args": {
"size": 10,
"image": {
"name": "^cirros.*-disk$"
},
"flavor": {
"name": "m1.tiny"
},
"create_volume_params": {
"availability_zone": "nova"
}
},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 1
},
"context": {
"users": {
"tenants": 2,
"users_per_tenant": 2
},
"api_versions": {
"cinder": {
"version": 2,
"service_name": "cinderv2"
}
}
}
}
]
}
Namespace: default
Parameters:
Dictionary is expected. Keys should follow pattern(s) described below.
- ^[a-z]+$ (str) [ref]
audit_templates [Context]¶
Context class for adding temporary audit template for benchmarks.
Namespace: default
Parameters:
audit_templates_per_admin (int) [ref]
Min value: 1.
params (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "type": "object", "properties": { "goal": { "type": "object", "properties": { "name": { "type": "string" } } }, "strategy": { "type": "object", "properties": { "name": { "type": "string" } } } } }
fill_strategy [ref]
Set of expected values: 'round_robin', 'random', 'None'.
Module: rally.plugins.openstack.context.watcher.audit_templates
ceilometer [Context]¶
Context for creating samples and collecting resources for benchmarks.
Namespace: default
Parameters:
- counter_name (str) [ref]
metadata_list (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "type": "object", "properties": { "status": { "type": "string" }, "deleted": { "type": "string" }, "created_at": { "type": "string" }, "name": { "type": "string" } } }
samples_per_resource (int) [ref]
Min value: 1.
- counter_unit (str) [ref]
resources_per_tenant (int) [ref]
Min value: 1.
counter_volume (float) [ref]
Min value: 0.
batches_allow_lose (int) [ref]
Min value: 0.
timestamp_interval (int) [ref]
Min value: 1.
batch_size (int) [ref]
Min value: 1.
- counter_type (str) [ref]
cluster_templates [Context]¶
Context class for generating temporary cluster model for benchmarks.
Namespace: default
Parameters:
- docker_storage_driver (str) [ref]
- http_proxy (str) [ref]
- docker_volume_size (int) [ref]
- https_proxy (str) [ref]
- no_proxy (str) [ref]
- external_network_id (str) [ref]
- labels (str) [ref]
- dns_nameserver (str) [ref]
- server_type (str) [ref]
- network_driver (str) [ref]
- fixed_network (str) [ref]
- image_id (str) [ref]
- tls_disabled (bool) [ref]
- registry_enabled (bool) [ref]
- coe (str) [ref]
- flavor_id (str) [ref]
- volume_driver (str) [ref]
- master_lb_enabled (bool) [ref]
- public (bool) [ref]
- fixed_subnet (str) [ref]
- master_flavor_id (str) [ref]
Module: rally.plugins.openstack.context.magnum.cluster_templates
clusters [Context]¶
Context class for generating temporary cluster for benchmarks.
Namespace: default
Parameters:
node_count (int) [ref]
Min value: 1.
- cluster_template_uuid (str) [ref]
dummy_context [Context]¶
Dummy context.
Namespace: default
Parameters:
- fail_cleanup (bool) [ref]
- fail_setup (bool) [ref]
ec2_servers [Context]¶
Context class for adding temporary servers for benchmarks.
Servers are added for each tenant.
Namespace: default
Parameters:
servers_per_tenant (int) [ref]
Min value: 1.
- image (dict) [ref]
- flavor (dict) [ref]
existing_network [Context]¶
This context supports using existing networks in Rally.
This context should be used on a deployment with existing users.
Namespace: default
Parameters:
Module: rally.plugins.openstack.context.network.existing_network
flavors [Context]¶
Context creates a list of flavors.
Namespace: default
Parameters:
list [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "additionalProperties": false, "required": [ "name", "ram" ], "type": "object", "properties": { "name": { "type": "string" }, "ram": { "minimum": 1, "type": "integer" }, "ephemeral": { "minimum": 0, "type": "integer" }, "vcpus": { "minimum": 1, "type": "integer" }, "extra_specs": { "additionalProperties": { "type": "string" }, "type": "object" }, "swap": { "minimum": 0, "type": "integer" }, "disk": { "minimum": 0, "type": "integer" } } }
fuel_environments [Context]¶
Context for generating Fuel environments.
Namespace: default
Parameters:
- release_id (int) [ref]
- network_provider (str) [ref]
environments (int) [ref]
Min value: 1.
- deployment_mode (str) [ref]
- net_segment_type (str) [ref]
resource_management_workers (int) [ref]
Min value: 1.
heat_dataplane [Context]¶
Context class for creating stacks from a given template.
This context creates stacks from the given template for each tenant and adds the following details to the context: the stack id, the template file contents, the files dictionary, and the stack parameters.
The Heat template should define a "gate" node which interacts with Rally via SSH and with the workload nodes via any protocol. To make this possible, the Heat template should accept the following parameters:
- network_id: id of the public network
- router_id: id of the external router to connect the "gate" node
- key_name: name of the Nova SSH keypair to use for the "gate" node
Namespace: default
Parameters:
- files (dict) [ref]
- context_parameters (dict) [ref]
- parameters (dict) [ref]
- template [ref]
stacks_per_tenant (int) [ref]
Min value: 1.
image_command_customizer [Context]¶
Context class for generating image customized by a command execution.
Run a command specified by configuration to prepare image.
Use this script e.g. to download and install something.
Namespace: default
Parameters:
- username (str) [ref]
- floating_network (str) [ref]
- userdata (str) [ref]
- internal_network (str) [ref]
workers (int) [ref]
Min value: 1.
port (int) [ref]
Min value: 1.
Max value: 65535.
command [ref]
N/a
- flavor (dict) [ref]
- password (str) [ref]
- image (dict) [ref]
Module: rally.plugins.openstack.context.vm.image_command_customizer
images [Context]¶
Context class for adding images to each user for benchmarks.
Namespace: default
Parameters:
min_disk (int) [ref]
Min value: 0.
- image_container (str) [ref]
- image_url (str) [ref]
image_type [ref]
Set of expected values: 'qcow2', 'raw', 'vhd', 'vmdk', 'vdi', 'iso', 'aki', 'ari', 'ami'.
min_ram (int) [ref]
Min value: 0.
- image_args (dict) [ref]
- image_name (str) [ref]
images_per_tenant (int) [ref]
Min value: 1.
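As an illustration, an images context section could look like the following (a sketch built from the parameters above; the image URL and name are placeholders):
"images": {
    "image_url": "http://example.com/cirros-disk.img",
    "image_type": "qcow2",
    "image_container": "bare",
    "image_name": "rally-benchmark-image",
    "images_per_tenant": 2,
    "min_ram": 0,
    "min_disk": 0
}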
lbaas [Context]¶
Namespace: default
Parameters:
- pool (dict) [ref]
lbaas_version (int) [ref]
Min value: 1.
manila_security_services [Context]¶
This context creates 'security services' for Manila project.
Namespace: default
Parameters:
security_services (list) [ref]
It is expected to be list of dicts with data for creation of security services.
Elements of the list should follow format(s) described below:
- Type: dict. Data for creation of security services.
Example:
{'type': 'LDAP', 'dns_ip': 'foo_ip', 'server': 'bar_ip', 'domain': 'quuz_domain', 'user': 'ololo', 'password': 'fake_password'}
Format:
{ "additionalProperties": true, "required": [ "type" ], "type": "object", "properties": { "type": { "enum": [ "active_directory", "kerberos", "ldap" ] } } }
Module: rally.plugins.openstack.context.manila.manila_security_services
monasca_metrics [Context]¶
Context for creating metrics for benchmarks.
Namespace: default
Parameters:
metrics_per_tenant (int) [ref]
Min value: 1.
value_meta (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "type": "object", "properties": { "value_meta_value": { "type": "string" }, "value_meta_key": { "type": "string" } } }
- name (str) [ref]
- dimensions (dict) [ref]
murano_environments [Context]¶
Context class for creating murano environments.
Namespace: default
Parameters:
environments_per_tenant (int) [ref]
Min value: 1.
Module: rally.plugins.openstack.context.murano.murano_environments
murano_packages [Context]¶
Context class for uploading applications for murano.
Namespace: default
Parameters:
- app_package (str) [ref]
Module: rally.plugins.openstack.context.murano.murano_packages
network [Context]¶
Create networking resources.
This creates networks for all tenants, and optionally creates other resources such as subnets and routers.
Namespace: default
Parameters:
- network_create_args (dict) [ref]
subnets_per_network (int) [ref]
Min value: 1.
- start_cidr (str) [ref]
dns_nameservers (list) [ref]
Elements of the list should follow format(s) described below:
- Type: str.
networks_per_tenant (int) [ref]
Min value: 1.
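For instance, a network context section could be configured like this (a sketch based on the parameters above):
"network": {
    "networks_per_tenant": 2,
    "subnets_per_network": 1,
    "start_cidr": "10.2.0.0/24",
    "dns_nameservers": ["8.8.8.8"]
}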
profiles [Context]¶
Context creates a temporary profile for Senlin test.
Namespace: default
Parameters:
- version (str) [ref]
- type (str) [ref]
- properties (dict) [ref]
quotas [Context]¶
Context class for updating benchmarks' tenants quotas.
Namespace: default
Parameters:
- neutron (dict) [ref]
- cinder (dict) [ref]
- manila (dict) [ref]
- nova (dict) [ref]
- designate (dict) [ref]
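A quotas context section might, for example, unlimit Nova and Neutron resources for the benchmark tenants (a sketch; the accepted quota keys depend on the corresponding service):
"quotas": {
    "nova": {
        "instances": -1,
        "cores": -1,
        "ram": -1
    },
    "neutron": {
        "network": -1,
        "port": -1,
        "subnet": -1
    }
}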
roles [Context]¶
Context class for assigning roles for users.
Namespace: default
Parameters:
list [ref]
Elements of the list should follow format(s) described below:
- Type: str. The name of role to assign to user
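For example, the roles context is simply a list of role names to assign to the generated users (a sketch; the role names must already exist in Keystone):
"roles": [
    "Member",
    "anotherrole"
]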
sahara_cluster [Context]¶
Context class for setting up the Cluster for an EDP job.
Namespace: default
Parameters:
workers_count (int) [ref]
Min value: 1.
- worker_flavor_id (str) [ref]
- use_autoconfig (bool) [ref]
- cluster_configs (dict) [ref]
- enable_proxy (bool) [ref]
- plugin_name (str) [ref]
- floating_ip_pool (str) [ref]
volumes_size (int) [ref]
Min value: 1.
- node_configs (dict) [ref]
- flavor_id (str) [ref]
volumes_per_node (int) [ref]
Min value: 1.
- enable_anti_affinity (bool) [ref]
- hadoop_version (str) [ref]
- auto_security_group (bool) [ref]
security_groups (list) [ref]
Elements of the list should follow format(s) described below:
- Type: str.
- master_flavor_id (str) [ref]
Module: rally.plugins.openstack.context.sahara.sahara_cluster
sahara_image [Context]¶
Context class for adding and tagging Sahara images.
Namespace: default
Parameters:
- username (str) [ref]
- image_uuid (str) [ref]
- hadoop_version (str) [ref]
- image_url (str) [ref]
- plugin_name (str) [ref]
sahara_input_data_sources [Context]¶
Context class for setting up Input Data Sources for an EDP job.
Namespace: default
Parameters:
input_type [ref]
Set of expected values: 'swift', 'hdfs'.
swift_files (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "additionalProperties": false, "required": [ "name", "download_url" ], "type": "object", "properties": { "name": { "type": "string" }, "download_url": { "type": "string" } } }
- input_url (str) [ref]
Module: rally.plugins.openstack.context.sahara.sahara_input_data_sources
sahara_job_binaries [Context]¶
Context class for setting up Job Binaries for an EDP job.
Namespace: default
Parameters:
libs (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "additionalProperties": false, "required": [ "name", "download_url" ], "type": "object", "properties": { "name": { "type": "string" }, "download_url": { "type": "string" } } }
mains (list) [ref]
Elements of the list should follow format(s) described below:
Type: dict. Format:
{ "additionalProperties": false, "required": [ "name", "download_url" ], "type": "object", "properties": { "name": { "type": "string" }, "download_url": { "type": "string" } } }
Module: rally.plugins.openstack.context.sahara.sahara_job_binaries
sahara_output_data_sources [Context]¶
Context class for setting up Output Data Sources for an EDP job.
Namespace: default
Parameters:
output_type [ref]
Set of expected values: 'swift', 'hdfs'.
- output_url_prefix (str) [ref]
Module: rally.plugins.openstack.context.sahara.sahara_output_data_sources
servers [Context]¶
Context class for adding temporary servers for benchmarks.
Servers are added for each tenant.
Namespace: default
Parameters:
servers_per_tenant (int) [ref]
Number of servers to boot in each Tenant.
Min value: 1.
image (dict) [ref]
Name of image to boot server(s) from.
auto_assign_nic (bool) [ref]
True if NICs should be assigned.
flavor (dict) [ref]
Name of flavor to boot server(s) with.
nics (list) [ref]
List of networks to attach to server.
Elements of the list should follow format(s) described below:
{'oneOf': [{'type': 'object', 'properties': {'net-id': {'type': 'string'}}, 'description': 'Network ID in a format like OpenStack API expects to see.'}, {'type': 'string', 'description': 'Network ID.'}]}
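A servers context section could look like the following (a sketch based on the parameters above; the image and flavor names are placeholders):
"servers": {
    "image": {"name": "^cirros.*-disk$"},
    "flavor": {"name": "m1.tiny"},
    "servers_per_tenant": 2,
    "auto_assign_nic": true
}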
stacks [Context]¶
Context class for creating temporary stacks with resources.
The stack generator allows generating an arbitrary number of stacks for each tenant before test scenarios. In addition, it allows defining the number of resources (namely OS::Heat::RandomString) that will be created inside each stack. After test execution the stacks are automatically removed from Heat.
Namespace: default
Parameters:
resources_per_stack (int) [ref]
Min value: 1.
stacks_per_tenant (int) [ref]
Min value: 1.
swift_objects [Context]¶
Namespace: default
Parameters:
object_size (int) [ref]
Min value: 1.
containers_per_tenant (int) [ref]
Min value: 1.
objects_per_container (int) [ref]
Min value: 1.
resource_management_workers (int) [ref]
Min value: 1.
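An illustrative swift_objects context section (a sketch based on the parameters above; the object size unit is not stated here, so 1024 is only an example value):
"swift_objects": {
    "containers_per_tenant": 1,
    "objects_per_container": 10,
    "object_size": 1024,
    "resource_management_workers": 10
}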
users [Context]¶
Context class for generating temporary users/tenants for benchmarks.
Namespace: openstack
Parameters:
user_domain (str) [ref]
ID of domain in which users will be created.
project_domain (str) [ref]
ID of domain in which projects will be created.
user_choice_method [ref]
The mode of balancing usage of users between scenario iterations. Set of expected values: 'random', 'round_robin'.
users_per_tenant (int) [ref]
The number of users to create per one tenant.
Min value: 1.
tenants (int) [ref]
The number of tenants to create.
Min value: 1.
resource_management_workers (int) [ref]
The number of concurrent threads to use for serving users context.
Min value: 1.
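For example (a sketch based on the parameters above):
"users": {
    "tenants": 3,
    "users_per_tenant": 2,
    "resource_management_workers": 20,
    "user_choice_method": "random"
}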
volume_types [Context]¶
Context class for adding volume types for benchmarks.
Namespace: default
Parameters:
list [ref]
Elements of the list should follow format(s) described below:
- Type: str.
volumes [Context]¶
Context class for adding volumes to each user for benchmarks.
Namespace: default
Parameters:
- type [ref]
volumes_per_tenant (int) [ref]
Min value: 1.
size (int) [ref]
Min value: 1.
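An illustrative volumes context section (a sketch; size is the volume size in GB, as used by the Cinder scenarios below):
"volumes": {
    "size": 1,
    "volumes_per_tenant": 2
}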
Hooks¶
fault_injection [Hook]¶
Performs fault injection using os-faults library.
Configuration:
- action - string that represents an action (more info in [1])
- verify - whether to verify connection to cloud nodes or not
This plugin discovers extra config of ExistingCloud and looks for "cloud_config" field. If cloud_config is present then it will be used to connect to the cloud by os-faults.
Another option is to provide os-faults config file through OS_FAULTS_CONFIG env variable. Format of the config can be found in [1].
[1] http://os-faults.readthedocs.io/en/latest/usage.html
Namespace: default
Parameters:
- action (str) [ref]
- verify (bool) [ref]
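As a rough sketch, a fault_injection hook could be attached to a task like this (the action string is a placeholder, see [1] for valid actions; the trigger block follows the general Rally hook format and its exact syntax is an assumption here):
"hooks": [
    {
        "name": "fault_injection",
        "args": {
            "action": "restart keystone service",
            "verify": true
        },
        "trigger": {
            "name": "event",
            "args": {
                "unit": "iteration",
                "at": [5]
            }
        }
    }
]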
SLAs¶
failure_rate [SLA]¶
Failure rate minimum and maximum in percent.
Namespace: default
Parameters:
max (float) [ref]
Min value: 0.0.
Max value: 100.0.
min (float) [ref]
Min value: 0.0.
Max value: 100.0.
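For example, a task's "sla" section could require that no more than 10% of iterations fail (a sketch):
"sla": {
    "failure_rate": {"max": 10}
}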
max_avg_duration [SLA]¶
Maximum average duration of one iteration in seconds.
Namespace: default
Parameters:
float [ref]
Min value: 0.0.
max_avg_duration_per_atomic [SLA]¶
Maximum average duration of one iteration's atomic actions in seconds.
Namespace: default
Parameters:
Dictionary is expected. Keys should follow pattern(s) described below.
. (str)* [ref]
The name of atomic action.
Module: rally.plugins.common.sla.max_average_duration_per_atomic
max_seconds_per_iteration [SLA]¶
Maximum time for one iteration in seconds.
Namespace: default
Parameters:
float [ref]
Min value: 0.0.
outliers [SLA]¶
Limit the number of outliers (iterations that take too much time).
The outliers are detected automatically using the computation of the mean and standard deviation (std) of the data.
Namespace: default
Parameters:
max (int) [ref]
Min value: 0.
min_iterations (int) [ref]
Min value: 3.
sigmas (float) [ref]
Min value: 0.0.
performance_degradation [SLA]¶
Calculates performance degradation based on iteration time
This SLA plugin finds minimum and maximum duration of iterations completed without errors during Rally task execution. Assuming that minimum duration is 100%, it calculates performance degradation against maximum duration.
Namespace: default
Parameters:
max_degradation (float) [ref]
Min value: 0.0.
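Several of the SLA plugins described above can be combined in a single "sla" section of a task (a sketch; the numeric limits are example values):
"sla": {
    "max_seconds_per_iteration": 10.0,
    "outliers": {"max": 1, "min_iterations": 10, "sigmas": 3.0},
    "performance_degradation": {"max_degradation": 50.0}
}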
Scenarios¶
Authenticate.keystone [Scenario]¶
Check Keystone Client.
Namespace: openstack
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_ceilometer [Scenario]¶
Check Ceilometer Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: openstack
Parameters:
repetitions [ref]
Number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_cinder [Scenario]¶
Check Cinder Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: openstack
Parameters:
repetitions [ref]
Number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_glance [Scenario]¶
Check Glance Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated. In the following we check for a non-existent image.
Namespace: openstack
Parameters:
repetitions [ref]
Number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_heat [Scenario]¶
Check Heat Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: openstack
Parameters:
repetitions [ref]
Number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_monasca [Scenario]¶
Check Monasca Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: openstack
Parameters:
repetitions [ref]
Number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_neutron [Scenario]¶
Check Neutron Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: openstack
Parameters:
repetitions [ref]
Number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_nova [Scenario]¶
Check Nova Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: openstack
Parameters:
repetitions [ref]
Number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
CeilometerAlarms.create_alarm [Scenario]¶
Create an alarm.
This scenario tests POST /v2/alarms. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of the alarm
threshold [ref]
Specifies alarm threshold
kwargs [ref]
Specifies optional arguments for alarm creation.
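A task entry for this scenario might look like the following (a sketch; the meter name, threshold and the extra kwargs "type" and "statistic" are placeholder values):
{
    "CeilometerAlarms.create_alarm": [
        {
            "args": {
                "meter_name": "ram_util",
                "threshold": 10.0,
                "type": "threshold",
                "statistic": "avg"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 2,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}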
CeilometerAlarms.create_alarm_and_get_history [Scenario]¶
Create an alarm, get and set the state and get the alarm history.
This scenario makes the following queries:
- GET /v2/alarms/{alarm_id}/history
- GET /v2/alarms/{alarm_id}/state
- PUT /v2/alarms/{alarm_id}/state
Initially an alarm is created, then the state of the created alarm is fetched using its alarm_id, then the history of the alarm is fetched, and finally the state of the alarm is updated to the given state. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed during alarm creation.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of the alarm
threshold [ref]
Specifies alarm threshold
state [ref]
An alarm state to be set
timeout [ref]
The number of seconds for which to attempt a successful check of the alarm state
kwargs [ref]
Specifies optional arguments for alarm creation.
CeilometerAlarms.create_and_delete_alarm [Scenario]¶
Create and delete the newly created alarm.
This scenario tests DELETE /v2/alarms/(alarm_id). Initially an alarm is created and then the created alarm is deleted using its alarm_id. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed during alarm creation.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of the alarm
threshold [ref]
Specifies alarm threshold
kwargs [ref]
Specifies optional arguments for alarm creation.
CeilometerAlarms.create_and_get_alarm [Scenario]¶
Create and get the newly created alarm.
This scenario tests GET /v2/alarms/(alarm_id). Initially an alarm is created and then its detailed information is fetched using its alarm_id. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of the alarm
threshold [ref]
Specifies alarm threshold
kwargs [ref]
Specifies optional arguments for alarm creation.
CeilometerAlarms.create_and_list_alarm [Scenario]¶
Create and get the newly created alarm.
This scenario tests GET /v2/alarms/(alarm_id). Initially an alarm is created and then the created alarm is fetched using its alarm_id. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of the alarm
threshold [ref]
Specifies alarm threshold
kwargs [ref]
Specifies optional arguments for alarm creation.
CeilometerAlarms.create_and_update_alarm [Scenario]¶
Create and update the newly created alarm.
This scenario tests PUT /v2/alarms/(alarm_id). Initially an alarm is created and then the created alarm is updated using its alarm_id. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed during alarm creation.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of the alarm
threshold [ref]
Specifies alarm threshold
kwargs [ref]
Specifies optional arguments for alarm creation.
CeilometerAlarms.list_alarms [Scenario]¶
Fetch all alarms.
This scenario fetches list of all alarms using GET /v2/alarms.
Namespace: openstack
CeilometerEvents.create_user_and_get_event [Scenario]¶
Create a user and get an event.
This scenario creates user to store new event and fetches one event using GET /v2/events/<message_id>.
Namespace: openstack
CeilometerEvents.create_user_and_list_event_types [Scenario]¶
Create user and fetch all event types.
This scenario creates user to store new event and fetches list of all events types using GET /v2/event_types.
Namespace: openstack
CeilometerEvents.create_user_and_list_events [Scenario]¶
Create user and fetch all events.
This scenario creates user to store new event and fetches list of all events using GET /v2/events.
Namespace: openstack
CeilometerMeters.list_matched_meters [Scenario]¶
Get meters that matched fields from context and args.
Namespace: openstack
Parameters:
filter_by_user_id [ref]
Flag for query by user_id
filter_by_project_id [ref]
Flag for query by project_id
filter_by_resource_id [ref]
Flag for query by resource_id
metadata_query [ref]
Dict with metadata fields and values for query
limit [ref]
Count of resources in response
CeilometerMeters.list_meters [Scenario]¶
Check all available queries for list resource request.
Namespace: openstack
Parameters:
metadata_query [ref]
Dict with metadata fields and values
limit [ref]
Limit of meters in response
CeilometerQueries.create_and_query_alarm_history [Scenario]¶
Create an alarm and then query for its history.
This scenario tests POST /v2/query/alarms/history An alarm is first created and then its alarm_id is used to fetch the history of that specific alarm.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of alarm
threshold [ref]
Specifies alarm threshold
orderby [ref]
Optional param for specifying ordering of results
limit [ref]
Optional param for maximum number of results returned
kwargs [ref]
Optional parameters for alarm creation
Module: rally.plugins.openstack.scenarios.ceilometer.queries
CeilometerQueries.create_and_query_alarms [Scenario]¶
Create an alarm and then query it with specific parameters.
This scenario tests POST /v2/query/alarms An alarm is first created and then fetched using the input query.
Namespace: openstack
Parameters:
meter_name [ref]
Specifies meter name of alarm
threshold [ref]
Specifies alarm threshold
filter [ref]
Optional filter query dictionary
orderby [ref]
Optional param for specifying ordering of results
limit [ref]
Optional param for maximum number of results returned
kwargs [ref]
Optional parameters for alarm creation
Module: rally.plugins.openstack.scenarios.ceilometer.queries
CeilometerQueries.create_and_query_samples [Scenario]¶
Create a sample and then query it with specific parameters.
This scenario tests POST /v2/query/samples A sample is first created and then fetched using the input query.
Namespace: openstack
Parameters:
counter_name [ref]
Specifies name of the counter
counter_type [ref]
Specifies type of the counter
counter_unit [ref]
Specifies unit of the counter
counter_volume [ref]
Specifies volume of the counter
resource_id [ref]
Specifies resource id for the sample created
filter [ref]
Optional filter query dictionary
orderby [ref]
Optional param for specifying ordering of results
limit [ref]
Optional param for maximum number of results returned
kwargs [ref]
Parameters for sample creation
Module: rally.plugins.openstack.scenarios.ceilometer.queries
CeilometerResource.get_tenant_resources [Scenario]¶
Get all tenant resources.
This scenario retrieves information about tenant resources using GET /v2/resources/(resource_id)
Namespace: openstack
Module: rally.plugins.openstack.scenarios.ceilometer.resources
CeilometerResource.list_matched_resources [Scenario]¶
Get resources that matched fields from context and args.
Namespace: openstack
Parameters:
filter_by_user_id [ref]
Flag for query by user_id
filter_by_project_id [ref]
Flag for query by project_id
filter_by_resource_id [ref]
Flag for query by resource_id
metadata_query [ref]
Dict with metadata fields and values for query
start_time [ref]
Lower bound of resource timestamp in isoformat
end_time [ref]
Upper bound of resource timestamp in isoformat
limit [ref]
Count of resources in response
Module: rally.plugins.openstack.scenarios.ceilometer.resources
CeilometerResource.list_resources [Scenario]¶
Check all available queries for list resource request.
This scenario fetches list of all resources using GET /v2/resources.
Namespace: openstack
Parameters:
metadata_query [ref]
Dict with metadata fields and values for query
start_time [ref]
Lower bound of resource timestamp in isoformat
end_time [ref]
Upper bound of resource timestamp in isoformat
limit [ref]
Count of resources in response
Module: rally.plugins.openstack.scenarios.ceilometer.resources
CeilometerSamples.list_matched_samples [Scenario]¶
Get list of samples that matched fields from context and args.
Namespace: openstack
Parameters:
filter_by_user_id [ref]
Flag for query by user_id
filter_by_project_id [ref]
Flag for query by project_id
filter_by_resource_id [ref]
Flag for query by resource_id
metadata_query [ref]
Dict with metadata fields and values for query
limit [ref]
Count of samples in response
Module: rally.plugins.openstack.scenarios.ceilometer.samples
CeilometerSamples.list_samples [Scenario]¶
Fetch all available queries for list sample request.
Namespace: openstack
Parameters:
metadata_query [ref]
Dict with metadata fields and values for query
limit [ref]
Count of samples in response
Module: rally.plugins.openstack.scenarios.ceilometer.samples
CeilometerStats.create_meter_and_get_stats [Scenario]¶
Create a meter and fetch its statistics.
A meter is first created and then statistics are fetched for it using GET /v2/meters/(meter_name)/statistics.
Namespace: openstack
Parameters:
kwargs [ref]
Contains optional arguments to create a meter
CeilometerStats.get_stats [Scenario]¶
Fetch statistics for certain meter.
Statistics are fetched for the given meter using GET /v2/meters/(meter_name)/statistics.
Namespace: openstack
Parameters:
meter_name [ref]
Meter to take statistic for
filter_by_user_id [ref]
Flag for query by user_id
filter_by_project_id [ref]
Flag for query by project_id
filter_by_resource_id [ref]
Flag for query by resource_id
metadata_query [ref]
Dict with metadata fields and values for query
period [ref]
The length of the time range covered by these stats
groupby [ref]
The fields used to group the samples
aggregates [ref]
Name of function for samples aggregation
Returns: list of statistics data
CeilometerTraits.create_user_and_list_trait_descriptions [Scenario]¶
Create user and fetch all trait descriptions.
This scenario creates user to store new event and fetches list of all traits for certain event type using GET /v2/event_types/<event_type>/traits.
Namespace: openstack
CeilometerTraits.create_user_and_list_traits [Scenario]¶
Create user and fetch all event traits.
This scenario creates user to store new event and fetches list of all traits for certain event type and trait name using GET /v2/event_types/<event_type>/traits/<trait_name>.
Namespace: openstack
CinderVolumeBackups.create_incremental_volume_backup [Scenario]¶
Create an incremental volume backup.
The scenario first creates a volume and then creates a backup of it; this first backup is a full backup, because an incremental backup must be based on a full backup. Finally, it creates an incremental backup.
Namespace: openstack
Parameters:
size [ref]
Volume size in GB
do_delete [ref]
Deletes backup and volume after creating if True
create_volume_kwargs [ref]
Optional args to create a volume
create_backup_kwargs [ref]
Optional args to create a volume backup
Module: rally.plugins.openstack.scenarios.cinder.volume_backups
CinderVolumeTypes.create_and_delete_encryption_type [Scenario]¶
Create and delete encryption type
This scenario firstly creates an encryption type for a given volume type, then deletes the created encryption type.
Namespace: openstack
Parameters:
create_specs [ref]
The encryption type specifications to add
Module: rally.plugins.openstack.scenarios.cinder.volume_types
CinderVolumeTypes.create_and_delete_volume_type [Scenario]¶
Create and delete a volume Type.
Namespace: openstack
Parameters:
kwargs [ref]
Optional parameters used during volume type creation.
Module: rally.plugins.openstack.scenarios.cinder.volume_types
CinderVolumeTypes.create_and_list_encryption_type [Scenario]¶
Create and list encryption type
This scenario firstly creates a volume type, secondly creates an encryption type for the volume type, and thirdly lists all encryption types.
Namespace: openstack
Parameters:
specs [ref]
The encryption type specifications to add
search_opts [ref]
Options used when search for encryption types
kwargs [ref]
Optional parameters used during volume type creation.
Module: rally.plugins.openstack.scenarios.cinder.volume_types
CinderVolumeTypes.create_and_set_volume_type_keys [Scenario]¶
Create and set a volume type's extra specs.
Namespace: openstack
Parameters:
volume_type_key [ref]
A dict of key/value pairs to be set
kwargs [ref]
Optional parameters used during volume type creation.
Module: rally.plugins.openstack.scenarios.cinder.volume_types
CinderVolumeTypes.create_volume_type_and_encryption_type [Scenario]¶
Create encryption type
This scenario first creates a volume type, then creates an encryption type for the volume type.
Namespace: openstack
Parameters:
specs [ref]
The encryption type specifications to add
kwargs [ref]
Optional parameters used during volume type creation.
Module: rally.plugins.openstack.scenarios.cinder.volume_types
CinderVolumes.create_and_accept_transfer [Scenario]¶
Create a volume transfer, then accept it
Measure the "cinder transfer-create" and "cinder transfer-accept" command performace.
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB)
image [ref]
Image to be used to create initial volume
kwargs [ref]
Optional args to create a volume
CinderVolumes.create_and_attach_volume [Scenario]¶
Create a VM and attach a volume to it.
Simple test to create a VM and attach a volume, then detach the volume and delete volume/VM.
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
image [ref]
Glance image name to use for the VM
flavor [ref]
VM flavor name
create_volume_params [ref]
Optional arguments for volume creation
create_vm_params [ref]
Optional arguments for VM creation
kwargs [ref]
(deprecated) optional arguments for VM creation
CinderVolumes.create_and_delete_snapshot [Scenario]¶
Create and then delete a volume-snapshot.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between snapshot creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
force [ref]
When set to True, allows snapshot of a volume when the volume is attached to an instance
min_sleep [ref]
Minimum sleep time between snapshot creation and deletion (in seconds)
max_sleep [ref]
Maximum sleep time between snapshot creation and deletion (in seconds)
kwargs [ref]
Optional args to create a snapshot
CinderVolumes.create_and_delete_volume [Scenario]¶
Create and then delete a volume.
Good for testing a maximal bandwidth of cloud. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
image [ref]
Image to be used to create volume
min_sleep [ref]
Minimum sleep time between volume creation and deletion (in seconds)
max_sleep [ref]
Maximum sleep time between volume creation and deletion (in seconds)
kwargs [ref]
Optional args to create a volume
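An illustrative task entry (a sketch; the size dictionary and sleep bounds are example values):
{
    "CinderVolumes.create_and_delete_volume": [
        {
            "args": {
                "size": {"min": 1, "max": 3},
                "min_sleep": 1,
                "max_sleep": 2
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 2,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}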
CinderVolumes.create_and_extend_volume [Scenario]¶
Create and extend a volume and then delete it.
Namespace: openstack
Parameters:
size [ref]
Volume size (in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
new_size [ref]
New volume size (in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
Notice: the new size should be bigger than the original volume size.
min_sleep [ref]
Minimum sleep time between volume extension and deletion (in seconds)
max_sleep [ref]
Maximum sleep time between volume extension and deletion (in seconds)
kwargs [ref]
Optional args to extend the volume
CinderVolumes.create_and_get_volume [Scenario]¶
Create a volume and get the volume.
Measure the "cinder show" command performance.
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
image [ref]
Image to be used to create volume
kwargs [ref]
Optional args to create a volume
CinderVolumes.create_and_list_snapshots [Scenario]¶
Create and then list a volume-snapshot.
Namespace: openstack
Parameters:
force [ref]
When set to True, allows snapshot of a volume when the volume is attached to an instance
detailed [ref]
True if detailed information about snapshots should be listed
kwargs [ref]
Optional args to create a snapshot
CinderVolumes.create_and_list_volume [Scenario]¶
Create a volume and list all volumes.
Measure the "cinder volume-list" command performance.
If you have only 1 user in your context, you will add 1 volume on every iteration. So you will have more and more volumes and will be able to measure the performance of the "cinder volume-list" command depending on the number of volumes owned by users.
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
detailed [ref]
Determines whether the volume listing should contain detailed information about all of them
image [ref]
Image to be used to create volume
kwargs [ref]
Optional args to create a volume
CinderVolumes.create_and_list_volume_backups [Scenario]¶
Create and then list a volume backup.
Namespace: openstack
Parameters:
size [ref]
Volume size in GB
detailed [ref]
True if detailed information about backup should be listed
do_delete [ref]
If True, a volume backup will be deleted
create_volume_kwargs [ref]
Optional args to create a volume
create_backup_kwargs [ref]
Optional args to create a volume backup
CinderVolumes.create_and_restore_volume_backup [Scenario]¶
Restore volume backup.
Namespace: openstack
Parameters:
size [ref]
Volume size in GB
do_delete [ref]
If True, the volume and the volume backup will be deleted after creation.
create_volume_kwargs [ref]
Optional args to create a volume
create_backup_kwargs [ref]
Optional args to create a volume backup
CinderVolumes.create_and_update_volume [Scenario]¶
Create a volume and update its name and description.
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB)
image [ref]
Image to be used to create volume
create_volume_kwargs [ref]
Dict, to be used to create volume
update_volume_kwargs [ref]
Dict, to be used to update volume
CinderVolumes.create_and_upload_volume_to_image [Scenario]¶
Create and upload a volume to image.
Namespace: openstack
Parameters:
size [ref]
Volume size (integers, in GB), or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
image [ref]
Image to be used to create volume.
force [ref]
When set to True, a volume that is attached to an instance can be uploaded to an image
container_format [ref]
Image container format
disk_format [ref]
Disk format for image
do_delete [ref]
Deletes image and volume after uploading if True
kwargs [ref]
Optional args to create a volume
CinderVolumes.create_from_volume_and_delete_volume [Scenario]¶
Create volume from volume and then delete it.
Scenario for testing volume cloning. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
size [ref]
Volume size (in GB), or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
Should be equal to or bigger than the source volume size.
min_sleep [ref]
Minimum sleep time between volume creation and deletion (in seconds)
max_sleep [ref]
Maximum sleep time between volume creation and deletion (in seconds)
kwargs [ref]
Optional args to create a volume
CinderVolumes.create_nested_snapshots_and_attach_volume [Scenario]¶
Create a volume from snapshot and attach/detach the volume
This scenario creates a volume, creates its snapshot, attaches the volume, then creates a new volume from the existing snapshot, and so on, up to the defined nesting level; after that, all of them are detached and deleted. volume->snapshot->volume->snapshot->volume ...
Namespace: openstack
Parameters:
size [ref]
Volume size - dictionary, contains two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
default values: {"min": 1, "max": 5}
nested_level [ref]
Amount of nested levels
create_volume_kwargs [ref]
Optional args to create a volume
create_snapshot_kwargs [ref]
Optional args to create a snapshot
kwargs [ref]
Optional parameters used during volume snapshot creation.
CinderVolumes.create_snapshot_and_attach_volume [Scenario]¶
Create volume, snapshot and attach/detach volume.
Namespace: openstack
Parameters:
volume_type [ref]
Name of volume type to use
size [ref]
Volume size - dictionary, contains two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
default values: {"min": 1, "max": 5}
kwargs [ref]
Optional parameters used during volume snapshot creation.
CinderVolumes.create_volume [Scenario]¶
Create a volume.
Good test to check how the amount of active volumes influences the performance of creating a new one.
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
image [ref]
Image to be used to create volume
kwargs [ref]
Optional args to create a volume
CinderVolumes.create_volume_and_clone [Scenario]¶
Create a volume, then clone it to another volume.
This creates a volume, then clones it to another volume, and then clones the new volume to the next volume:
- create source volume (from image)
- clone source volume to volume1
- clone volume1 to volume2
- clone volume2 to volume3
- ...
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB) or dictionary, must contain two values:
min - minimum size volumes will be created as; max - maximum size volumes will be created as.
image [ref]
Image to be used to create initial volume
nested_level [ref]
Amount of nested levels
kwargs [ref]
Optional args to create volumes
CinderVolumes.create_volume_and_update_readonly_flag [Scenario]¶
Create a volume and then update its readonly flag.
Namespace: openstack
Parameters:
size [ref]
Volume size (integer, in GB)
image [ref]
Image to be used to create volume
read_only [ref]
The value to indicate whether to update volume to read-only access mode
kwargs [ref]
Optional args to create a volume
CinderVolumes.create_volume_backup [Scenario]¶
Create a volume backup.
Namespace: openstack
Parameters:
size [ref]
Volume size in GB
do_delete [ref]
If True, a volume and a volume backup will be deleted after creation.
create_volume_kwargs [ref]
Optional args to create a volume
create_backup_kwargs [ref]
Optional args to create a volume backup
CinderVolumes.create_volume_from_snapshot [Scenario]¶
Create a volume-snapshot, then create a volume from this snapshot.
Namespace: openstack
Parameters:
do_delete [ref]
If True, a snapshot and a volume will be deleted after creation.
create_snapshot_kwargs [ref]
Optional args to create a snapshot
kwargs [ref]
Optional args to create a volume
CinderVolumes.list_transfers [Scenario]¶
List all transfers.
This simple scenario tests the "cinder transfer-list" command by listing all the volume transfers.
Namespace: openstack
Parameters:
detailed [ref]
If True, detailed information about volume transfer should be listed
search_opts [ref]
Search options to filter out volume transfers.
CinderVolumes.list_types [Scenario]¶
List all volume types.
This simple scenario tests the cinder type-list command by listing all the volume types.
Namespace: openstack
Parameters:
search_opts [ref]
Options used when searching for volume types
is_public [ref]
Whether to query public volume types
CinderVolumes.list_volumes [Scenario]¶
List all volumes.
This simple scenario tests the cinder list command by listing all the volumes.
Namespace: openstack
Parameters:
detailed [ref]
True if detailed information about volumes should be listed
CinderVolumes.modify_volume_metadata [Scenario]¶
Modify a volume's metadata.
This requires a volume to be created with the volumes context. Additionally, sets * set_size must be greater than or equal to deletes * delete_size.
Namespace: openstack
Parameters:
sets [ref]
How many set_metadata operations to perform
set_size [ref]
Number of metadata keys to set in each set_metadata operation
deletes [ref]
How many delete_metadata operations to perform
delete_size [ref]
Number of metadata keys to delete in each delete_metadata operation
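The following is a hypothetical task entry that satisfies the constraint above (sets * set_size = 30 >= deletes * delete_size = 15). It assumes the standard JSON task file format and the "volumes" context plugin for pre-creating a volume; all numbers are placeholders:
{
"CinderVolumes.modify_volume_metadata": [
{
"args": {"sets": 10, "set_size": 3, "deletes": 5, "delete_size": 3},
"runner": {"type": "constant", "times": 5, "concurrency": 1},
"context": {"users": {"tenants": 1, "users_per_tenant": 1}, "volumes": {"size": 1}}
}
]
}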
DesignateBasic.create_and_delete_domain [Scenario]¶
Create and then delete a domain.
Measure the performance of creating and deleting domains with different level of load.
Namespace: openstack
DesignateBasic.create_and_delete_records [Scenario]¶
Create and then delete records.
Measure the performance of creating and deleting records with different level of load.
Namespace: openstack
Parameters:
records_per_domain [ref]
Records to create per domain.
DesignateBasic.create_and_delete_recordsets [Scenario]¶
Create and then delete recordsets.
Measure the performance of creating and deleting recordsets with different level of load.
Namespace: openstack
Parameters:
recordsets_per_zone [ref]
Recordsets to create per zone.
DesignateBasic.create_and_delete_server [Scenario]¶
Create and then delete a server.
Measure the performance of creating and deleting servers with different level of load.
Namespace: openstack
DesignateBasic.create_and_delete_zone [Scenario]¶
Create and then delete a zone.
Measure the performance of creating and deleting zones with different level of load.
Namespace: openstack
DesignateBasic.create_and_list_domains [Scenario]¶
Create a domain and list all domains.
Measure the "designate domain-list" command performance.
If you have only 1 user in your context, you will add 1 domain on every iteration. So you will have more and more domains and will be able to measure the performance of the "designate domain-list" command depending on the number of domains owned by users.
Namespace: openstack
DesignateBasic.create_and_list_records [Scenario]¶
Create and then list records.
If you have only 1 user in your context, you will add 1 record on every iteration. So you will have more and more records and will be able to measure the performance of the "designate record-list" command depending on the number of domains/records owned by users.
Namespace: openstack
Parameters:
records_per_domain [ref]
Records to create per domain.
DesignateBasic.create_and_list_recordsets [Scenario]¶
Create and then list recordsets.
If you have only 1 user in your context, you will add 1 recordset on every iteration. So you will have more and more recordsets and will be able to measure the performance of the "openstack recordset list" command depending on the number of zones/recordsets owned by users.
Namespace: openstack
Parameters:
recordsets_per_zone [ref]
Recordsets to create per zone.
DesignateBasic.create_and_list_servers [Scenario]¶
Create a Designate server and list all servers.
If you have only 1 user in your context, you will add 1 server on every iteration. So you will have more and more servers and will be able to measure the performance of the "designate server-list" command depending on the number of servers owned by users.
Namespace: openstack
DesignateBasic.create_and_list_zones [Scenario]¶
Create a zone and list all zones.
Measure the "openstack zone list" command performance.
If you have only 1 user in your context, you will add 1 zone on every iteration. So you will have more and more zones and will be able to measure the performance of the "openstack zone list" command depending on the number of zones owned by users.
Namespace: openstack
DesignateBasic.create_and_update_domain [Scenario]¶
Create and then update a domain.
Measure the performance of creating and updating domains with different level of load.
Namespace: openstack
DesignateBasic.list_domains [Scenario]¶
List Designate domains.
This simple scenario tests the designate domain-list command by listing all the domains.
Suppose we have 2 users in the context and each has 2 domains uploaded for them; then we will be able to test the performance of the designate domain-list command in this case.
Namespace: openstack
DesignateBasic.list_records [Scenario]¶
List Designate records.
This simple scenario tests the designate record-list command by listing all the records in a domain.
Suppose we have 2 users in the context and each has 2 domains uploaded for them; then we will be able to test the performance of the designate record-list command in this case.
Namespace: openstack
Parameters:
domain_id [ref]
Domain ID
DesignateBasic.list_recordsets [Scenario]¶
List Designate recordsets.
This simple scenario tests the openstack recordset list command by listing all the recordsets in a zone.
Namespace: openstack
Parameters:
zone_id [ref]
Zone ID
DesignateBasic.list_servers [Scenario]¶
List Designate servers.
This simple scenario tests the designate server-list command by listing all the servers.
Namespace: openstack
DesignateBasic.list_zones [Scenario]¶
List Designate zones.
This simple scenario tests the openstack zone list command by listing all the zones.
Namespace: openstack
Dummy.dummy [Scenario]¶
Do nothing and sleep for the given number of seconds (0 by default).
Dummy.dummy can be used for testing performance of different ScenarioRunners and of the ability of rally to store a large amount of results.
Namespace: default
Parameters:
sleep [ref]
Idle time of method (in seconds).
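Since this scenario needs no OpenStack services, a task entry can be as small as the following illustrative sketch (standard JSON task file format assumed; runner values are placeholders):
{
"Dummy.dummy": [
{
"args": {"sleep": 0.5},
"runner": {"type": "constant", "times": 20, "concurrency": 5}
}
]
}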
Dummy.dummy_exception [Scenario]¶
Throw an exception.
Dummy.dummy_exception can be used to test whether exceptions are processed properly by ScenarioRunners, and to benchmark and analyze the rally results storing process.
Namespace: default
Parameters:
size_of_message [ref]
Int size of the exception message
sleep [ref]
Idle time of method (in seconds).
message [ref]
Message of the exception
Dummy.dummy_exception_probability [Scenario]¶
Throw an exception with given probability.
Dummy.dummy_exception_probability can be used to test if exceptions are processed properly by ScenarioRunners. This scenario will throw an exception sometimes, depending on the given exception probability.
Namespace: default
Parameters:
exception_probability [ref]
Sets how likely it is that an exception will be thrown. Float between 0 and 1: 0 = never, 1 = always.
Dummy.dummy_output [Scenario]¶
Generate dummy output.
This scenario generates example of output data.
Namespace: default
Parameters:
random_range [ref]
Max int limit for generated random values
Dummy.dummy_random_action [Scenario]¶
Sleep random time in dummy actions.
Namespace: default
Parameters:
actions_num [ref]
Int number of actions to generate
sleep_min [ref]
Minimum time to sleep, numeric seconds
sleep_max [ref]
Maximum time to sleep, numeric seconds
Dummy.dummy_random_fail_in_atomic [Scenario]¶
Randomly fail dummy atomic actions.
Can be used to test the processing of atomic action failures.
Namespace: default
Parameters:
exception_probability [ref]
Probability with which atomic actions fail in this dummy scenario (0 <= p <= 1)
Dummy.dummy_timed_atomic_actions [Scenario]¶
Run some sleepy atomic actions for SLA atomic action tests.
Namespace: default
Parameters:
number_of_actions [ref]
Int number of atomic actions to create
sleep_factor [ref]
Int multiplier for number of seconds to sleep
Dummy.failure [Scenario]¶
Raise errors in some iterations.
Namespace: default
Parameters:
sleep [ref]
Float iteration sleep time in seconds
from_iteration [ref]
Int iteration number which starts range of failed iterations
to_iteration [ref]
Int iteration number which ends range of failed iterations
each [ref]
Int, cyclic iteration number which actually raises an error in the selected range. For example, each=3 will raise an error in every 3rd iteration.
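A hypothetical task entry for this scenario could combine the range and cycle parameters like this (illustrative only, assuming the standard JSON task file format); with these values, every 3rd iteration between iterations 5 and 15 raises an error:
{
"Dummy.failure": [
{
"args": {"sleep": 0.1, "from_iteration": 5, "to_iteration": 15, "each": 3},
"runner": {"type": "constant", "times": 20, "concurrency": 2}
}
]
}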
EC2Servers.boot_server [Scenario]¶
Boot a server.
Assumes that cleanup is done elsewhere.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
kwargs [ref]
Optional additional arguments for server creation
EC2Servers.list_servers [Scenario]¶
List all servers.
This simple scenario tests the EC2 API list function by listing all the servers.
Namespace: openstack
FuelEnvironments.create_and_delete_environment [Scenario]¶
Create and delete Fuel environments.
Namespace: openstack
Parameters:
release_id [ref]
Release id (default 1)
network_provider [ref]
Network provider (default 'neutron')
deployment_mode [ref]
Deployment mode (default 'ha_compact')
net_segment_type [ref]
Net segment type (default 'vlan')
delete_retries [ref]
Retries count on delete operations (default 5)
FuelEnvironments.create_and_list_environments [Scenario]¶
Create and list Fuel environments.
Namespace: openstack
Parameters:
release_id [ref]
Release id (default 1)
network_provider [ref]
Network provider (default 'neutron')
deployment_mode [ref]
Deployment mode (default 'ha_compact')
net_segment_type [ref]
Net segment type (default 'vlan')
FuelNodes.add_and_remove_node [Scenario]¶
Add node to environment and remove.
Namespace: openstack
Parameters:
node_roles [ref]
List of roles with which the node should be assigned to the environment
GlanceImages.create_and_delete_image [Scenario]¶
Create and then delete an image.
Namespace: openstack
Parameters:
container_format [ref]
Container format of image. Acceptable formats: ami, ari, aki, bare, and ovf
image_location [ref]
Image file location
disk_format [ref]
Disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso
kwargs [ref]
Optional parameters to create image
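A minimal illustrative task entry for this scenario might look like the following; it assumes the standard JSON task file format, and the image_location URL is a placeholder that should point to a real image file:
{
"GlanceImages.create_and_delete_image": [
{
"args": {
"image_location": "http://example.com/images/cirros.img",
"container_format": "bare",
"disk_format": "qcow2"
},
"runner": {"type": "constant", "times": 5, "concurrency": 1},
"context": {"users": {"tenants": 1, "users_per_tenant": 1}}
}
]
}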
GlanceImages.create_and_list_image [Scenario]¶
Create an image and then list all images.
Measure the "glance image-list" command performance.
If you have only 1 user in your context, you will add 1 image on every iteration. So you will have more and more images and will be able to measure the performance of the "glance image-list" command depending on the number of images owned by users.
Namespace: openstack
Parameters:
container_format [ref]
Container format of image. Acceptable formats: ami, ari, aki, bare, and ovf
image_location [ref]
Image file location
disk_format [ref]
Disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso
kwargs [ref]
Optional parameters to create image
GlanceImages.create_image_and_boot_instances [Scenario]¶
Create an image and boot several instances from it.
Namespace: openstack
Parameters:
container_format [ref]
Container format of image. Acceptable formats: ami, ari, aki, bare, and ovf
image_location [ref]
Image file location
disk_format [ref]
Disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso
flavor [ref]
Nova flavor to be used to launch an instance
number_instances [ref]
Number of Nova servers to boot
create_image_kwargs [ref]
Optional parameters to create image
boot_server_kwargs [ref]
Optional parameters to boot server
kwargs [ref]
Optional parameters to create server (deprecated)
GlanceImages.list_images [Scenario]¶
List all images.
This simple scenario tests the glance image-list command by listing all the images.
Suppose we have 2 users in the context and each has 2 images uploaded for them; then we will be able to test the performance of the glance image-list command in this case.
Namespace: openstack
HeatStacks.create_and_delete_stack [Scenario]¶
Create and then delete a stack.
Measure the "heat stack-create" and "heat stack-delete" commands performance.
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
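A sketch of a task entry for this scenario is shown below; it assumes the standard JSON task file format, and the template_path value is a placeholder for a real Heat template on disk:
{
"HeatStacks.create_and_delete_stack": [
{
"args": {"template_path": "/path/to/default.yaml.template"},
"runner": {"type": "constant", "times": 5, "concurrency": 1},
"context": {"users": {"tenants": 1, "users_per_tenant": 1}}
}
]
}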
HeatStacks.create_and_list_stack [Scenario]¶
Create a stack and then list all stacks.
Measure the "heat stack-create" and "heat stack-list" commands performance.
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_check_delete_stack [Scenario]¶
Create, check and delete a stack.
Measure the performance of the following commands: - heat stack-create - heat action-check - heat stack-delete
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_snapshot_restore_delete_stack [Scenario]¶
Create, snapshot-restore and then delete a stack.
Measure performance of the following commands: heat stack-create heat stack-snapshot heat stack-restore heat stack-delete
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_stack_and_list_output [Scenario]¶
Create stack and list outputs by using new algorithm.
Measure performance of the following commands: heat stack-create heat output-list
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_stack_and_list_output_via_API [Scenario]¶
Create stack and list outputs by using old algorithm.
Measure performance of the following commands: heat stack-create heat output-list
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_stack_and_scale [Scenario]¶
Create an autoscaling stack and invoke a scaling policy.
Measure the performance of autoscaling webhooks.
Namespace: openstack
Parameters:
template_path [ref]
Path to template file that includes an OS::Heat::AutoScalingGroup resource
output_key [ref]
The stack output key that corresponds to the scaling webhook
delta [ref]
The number of instances the stack is expected to change by.
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template (dict of file name to file path)
environment [ref]
Stack environment definition (dict)
HeatStacks.create_stack_and_show_output [Scenario]¶
Create stack and show output by using new algorithm.
Measure performance of the following commands: heat stack-create heat output-show
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
output_key [ref]
The stack output key that corresponds to the scaling webhook
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_stack_and_show_output_via_API [Scenario]¶
Create stack and show output by using old algorithm.
Measure performance of the following commands: heat stack-create heat output-show
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
output_key [ref]
The stack output key that corresponds to the scaling webhook
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_suspend_resume_delete_stack [Scenario]¶
Create, suspend-resume and then delete a stack.
Measure performance of the following commands: heat stack-create heat action-suspend heat action-resume heat stack-delete
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
parameters [ref]
Parameters to use in heat template
files [ref]
Files used in template
environment [ref]
Stack environment definition
HeatStacks.create_update_delete_stack [Scenario]¶
Create, update and then delete a stack.
Measure the "heat stack-create", "heat stack-update" and "heat stack-delete" commands performance.
Namespace: openstack
Parameters:
template_path [ref]
Path to stack template file
updated_template_path [ref]
Path to updated stack template file
parameters [ref]
Parameters to use in heat template
updated_parameters [ref]
Parameters to use in the updated heat template. If not specified, the "parameters" value will be used instead
files [ref]
Files used in template
updated_files [ref]
Files used in the updated template. If not specified, the "files" value will be used instead
environment [ref]
Stack environment definition
updated_environment [ref]
Environment definition for updated stack
HeatStacks.list_stacks_and_resources [Scenario]¶
List all resources from tenant stacks.
Namespace: openstack
HttpRequests.check_random_request [Scenario]¶
Benchmark a list of requests.
This scenario takes a random url from the list of requests and raises an exception if the response is not the expected one.
Namespace: default
Parameters:
requests [ref]
List of request dicts
status_code [ref]
Expected response code; it is used only if no expected code is specified in the request itself
Module: rally.plugins.common.scenarios.requests.http_requests
HttpRequests.check_request [Scenario]¶
Standard way to benchmark web services.
This benchmark is used to make a request and check it against the expected response.
Namespace: default
Parameters:
url [ref]
Url for the Request object
method [ref]
Method for the Request object
status_code [ref]
Expected response code
kwargs [ref]
Optional additional request parameters
Module: rally.plugins.common.scenarios.requests.http_requests
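Because this scenario runs in the default namespace, it needs no OpenStack context. A minimal illustrative task entry, assuming the standard JSON task file format (the url is a placeholder), could be:
{
"HttpRequests.check_request": [
{
"args": {"url": "http://www.example.com", "method": "GET", "status_code": 200},
"runner": {"type": "constant", "times": 10, "concurrency": 2}
}
]
}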
IronicNodes.create_and_delete_node [Scenario]¶
Create and delete node.
Namespace: openstack
Parameters:
driver [ref]
The name of the driver used to manage this Node.
kwargs [ref]
Optional additional arguments for node creation
IronicNodes.create_and_list_node [Scenario]¶
Create and list nodes.
Namespace: openstack
Parameters:
driver [ref]
The name of the driver used to manage this Node.
associated [ref]
Optional argument of list request. Either a Boolean or a string representation of a Boolean that indicates whether to return a list of associated (True or "True") or unassociated (False or "False") nodes.
maintenance [ref]
Optional argument of list request. Either a Boolean or a string representation of a Boolean that indicates whether to return nodes in maintenance mode (True or "True"), or not in maintenance mode (False or "False").
detail [ref]
Optional, boolean whether to return detailed information about nodes.
sort_dir [ref]
Optional, direction of sorting, either 'asc' (the default) or 'desc'.
marker [ref]
DEPRECATED since Rally 0.10.0
limit [ref]
DEPRECATED since Rally 0.10.0
sort_key [ref]
DEPRECATED since Rally 0.10.0
kwargs [ref]
Optional additional arguments for node creation
KeystoneBasic.add_and_remove_user_role [Scenario]¶
Create a user role, add it to a user and then disassociate it.
Namespace: openstack
KeystoneBasic.authenticate_user_and_validate_token [Scenario]¶
Authenticate and validate a keystone token.
Namespace: openstack
KeystoneBasic.create_add_and_list_user_roles [Scenario]¶
Create user role, add it and list user roles for given user.
Namespace: openstack
KeystoneBasic.create_and_delete_ec2credential [Scenario]¶
Create and delete keystone ec2-credential.
Namespace: openstack
KeystoneBasic.create_and_delete_role [Scenario]¶
Create a user role and delete it.
Namespace: openstack
KeystoneBasic.create_and_delete_service [Scenario]¶
Create and delete service.
Namespace: openstack
Parameters:
service_type [ref]
Type of the service
description [ref]
Description of the service
KeystoneBasic.create_and_get_role [Scenario]¶
Create a user role and get its detailed information.
Namespace: openstack
Parameters:
kwargs [ref]
Optional additional arguments for roles creation
KeystoneBasic.create_and_list_ec2credentials [Scenario]¶
Create and List all keystone ec2-credentials.
Namespace: openstack
KeystoneBasic.create_and_list_roles [Scenario]¶
Create a role, then list all roles.
Namespace: openstack
Parameters:
create_role_kwargs [ref]
Optional additional arguments for roles create
list_role_kwargs [ref]
Optional additional arguments for roles list
KeystoneBasic.create_and_list_services [Scenario]¶
Create and list services.
Namespace: openstack
Parameters:
service_type [ref]
Type of the service
description [ref]
Description of the service
KeystoneBasic.create_and_list_tenants [Scenario]¶
Create a keystone tenant with random name and list all tenants.
Namespace: openstack
Parameters:
kwargs [ref]
Other optional parameters
KeystoneBasic.create_and_list_users [Scenario]¶
Create a keystone user with random name and list all users.
Namespace: openstack
Parameters:
kwargs [ref]
Other optional parameters to create users like "tenant_id", "enabled".
KeystoneBasic.create_and_update_user [Scenario]¶
Create user and update the user.
Namespace: openstack
Parameters:
create_user_kwargs [ref]
Optional additional arguments for user creation
update_user_kwargs [ref]
Optional additional arguments for user update
KeystoneBasic.create_delete_user [Scenario]¶
Create a keystone user with random name and then delete it.
Namespace: openstack
Parameters:
kwargs [ref]
Other optional parameters to create users like "tenant_id", "enabled".
KeystoneBasic.create_tenant [Scenario]¶
Create a keystone tenant with random name.
Namespace: openstack
Parameters:
kwargs [ref]
Other optional parameters
KeystoneBasic.create_tenant_with_users [Scenario]¶
Create a keystone tenant and several users belonging to it.
Namespace: openstack
Parameters:
users_per_tenant [ref]
Number of users to create for the tenant
kwargs [ref]
Other optional parameters for tenant creation
Returns: keystone tenant instance
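A task entry for this scenario can be as simple as the following sketch (standard JSON task file format assumed; the numbers are placeholders and the scenario runs with the admin credential from the deployment):
{
"KeystoneBasic.create_tenant_with_users": [
{
"args": {"users_per_tenant": 4},
"runner": {"type": "constant", "times": 10, "concurrency": 2}
}
]
}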
KeystoneBasic.create_update_and_delete_tenant [Scenario]¶
Create, update and delete tenant.
Namespace: openstack
Parameters:
kwargs [ref]
Other optional parameters for tenant creation
KeystoneBasic.create_user [Scenario]¶
Create a keystone user with random name.
Namespace: openstack
Parameters:
kwargs [ref]
Other optional parameters to create users like "tenant_id", "enabled".
KeystoneBasic.create_user_set_enabled_and_delete [Scenario]¶
Create a keystone user, enable or disable it, and delete it.
Namespace: openstack
Parameters:
enabled [ref]
Initial state of user 'enabled' flag. The user will be created with 'enabled' set to this value, and then it will be toggled.
kwargs [ref]
Other optional parameters to create user.
KeystoneBasic.create_user_update_password [Scenario]¶
Create user and update password for that user.
Namespace: openstack
KeystoneBasic.get_entities [Scenario]¶
Get instances of a tenant, user, role and service by their IDs.
An ephemeral tenant, user, and role are each created. By default, fetches the 'keystone' service. This can be overridden (for instance, to get the 'Identity Service' service on older OpenStack), or None can be passed explicitly to service_name to create a new service and then query it by ID.
Namespace: openstack
Parameters:
service_name [ref]
The name of the service to get by ID; or None, to create an ephemeral service and get it by ID.
MagnumClusterTemplates.list_cluster_templates [Scenario]¶
List all cluster_templates.
Measure the "magnum cluster_template-list" command performance.
Namespace: openstack
Parameters:
limit [ref]
- (Optional) The maximum number of results to return
per request, if:
- limit > 0, the maximum number of cluster_templates to return.
- limit param is NOT specified (None), the number of items returned respect the maximum imposed by the Magnum API (see Magnum's api.max_limit option).
kwargs [ref]
Optional additional arguments for cluster_templates listing
Module: rally.plugins.openstack.scenarios.magnum.cluster_templates
MagnumClusters.create_and_list_clusters [Scenario]¶
Create a cluster and then list all clusters.
Namespace: openstack
Parameters:
node_count [ref]
The cluster node count.
cluster_template_uuid [ref]
Optional; specify it to use an existing cluster_template
kwargs [ref]
Optional additional arguments for cluster creation
MagnumClusters.list_clusters [Scenario]¶
List all clusters.
Measure the "magnum clusters-list" command performance.
Namespace: openstack
Parameters:
limit [ref]
- (Optional) The maximum number of results to return
per request, if:
- limit > 0, the maximum number of clusters to return.
- limit param is NOT specified (None), the number of items returned respect the maximum imposed by the Magnum API (see Magnum's api.max_limit option).
kwargs [ref]
Optional additional arguments for clusters listing
MistralExecutions.create_execution_from_workbook [Scenario]¶
Scenario tests execution creation and deletion.
This scenario is a very useful tool to measure the "mistral execution-create" and "mistral execution-delete" commands performance.
Namespace: openstack
Parameters:
definition [ref]
String (yaml string) representation of given file content (Mistral workbook definition)
workflow_name [ref]
String, the workflow name to execute. Should be one of the workflows in the definition. If no workflow_name is passed, one of the workflows in the definition will be taken.
wf_input [ref]
File containing a json string of mistral workflow input
params [ref]
File containing a json string of mistral params (the string is the place to pass the environment)
do_delete [ref]
If False, it allows checking performance in "create only" mode.
Module: rally.plugins.openstack.scenarios.mistral.executions
MistralExecutions.list_executions [Scenario]¶
Scenario test mistral execution-list command.
This simple scenario tests the Mistral execution-list command by listing all the executions.
Namespace: openstack
Parameters:
marker [ref]
The last execution uuid of the previous page; displays the list of executions after "marker".
limit [ref]
Maximum number of executions to return in a single result.
sort_keys [ref]
Comma-separated list of sort keys, e.g. "id,description"
sort_dirs [ref]
[SORT_DIRS] Comma-separated list of sort directions. Default: asc.
Module: rally.plugins.openstack.scenarios.mistral.executions
MistralWorkbooks.create_workbook [Scenario]¶
Scenario tests workbook creation and deletion.
This scenario is a very useful tool to measure the "mistral workbook-create" and "mistral workbook-delete" commands performance.
Namespace: openstack
Parameters:
definition [ref]
String (yaml string) representation of given file content (Mistral workbook definition)
do_delete [ref]
If False, it allows checking performance in "create only" mode.
MistralWorkbooks.list_workbooks [Scenario]¶
Scenario test mistral workbook-list command.
This simple scenario tests the Mistral workbook-list command by listing all the workbooks.
Namespace: openstack
MonascaMetrics.list_metrics [Scenario]¶
Fetch user's metrics.
Namespace: openstack
Parameters:
kwargs [ref]
Optional arguments for list query: name, dimensions, start_time, etc
MuranoEnvironments.create_and_delete_environment [Scenario]¶
Create environment, session and delete environment.
Namespace: openstack
Module: rally.plugins.openstack.scenarios.murano.environments
MuranoEnvironments.create_and_deploy_environment [Scenario]¶
Create environment, session and deploy environment.
Create environment, create session, add app to environment packages_per_env times, send environment to deploy.
Namespace: openstack
Parameters:
packages_per_env [ref]
Number of packages per environment
Module: rally.plugins.openstack.scenarios.murano.environments
MuranoEnvironments.list_environments [Scenario]¶
List the murano environments.
Run murano environment-list for listing all environments.
Namespace: openstack
Module: rally.plugins.openstack.scenarios.murano.environments
MuranoPackages.import_and_delete_package [Scenario]¶
Import Murano package and then delete it.
Measure the "murano import-package" and "murano package-delete" commands performance. It imports Murano package from "package" (if it is not a zip archive then zip archive will be prepared) and deletes it.
Namespace: default
Parameters:
package [ref]
Path to zip archive that represents Murano application package or absolute path to folder with package components
MuranoPackages.import_and_filter_applications [Scenario]¶
Import Murano package and then filter packages by some criteria.
Measure the performance of package import and package filtering commands. It imports Murano package from "package" (if it is not a zip archive then zip archive will be prepared) and filters packages by some criteria.
Namespace: default
Parameters:
package [ref]
Path to zip archive that represents Murano application package or absolute path to folder with package components
MuranoPackages.import_and_list_packages [Scenario]¶
Import Murano package and get list of packages.
Measure the "murano import-package" and "murano package-list" commands performance. It imports Murano package from "package" (if it is not a zip archive then zip archive will be prepared) and gets list of imported packages.
Namespace: default
Parameters:
package [ref]
Path to zip archive that represents Murano application package or absolute path to folder with package components
include_disabled [ref]
Specifies whether the disabled packages will be included in the result or not. Default value is False.
MuranoPackages.package_lifecycle [Scenario]¶
Import Murano package, modify it and then delete it.
Measure the Murano import, update and delete package commands performance. It imports a Murano package from "package" (if it is not a zip archive then a zip archive will be prepared), modifies it (using data from "body") and deletes it.
Namespace: default
Parameters:
package [ref]
Path to zip archive that represents Murano application package or absolute path to folder with package components
body [ref]
Dict object that defines what package property will be updated, e.g {"tags": ["tag"]} or {"enabled": "true"}
operation [ref]
String object that defines the way of how package property will be updated, allowed operations are "add", "replace" or "delete". Default value is "replace".
NeutronLoadbalancerV1.create_and_delete_healthmonitors [Scenario]¶
Create a healthmonitor(v1) and delete healthmonitors(v1).
Measure the "neutron lb-healthmonitor-create" and "neutron lb-healthmonitor-delete" command performance. The scenario creates healthmonitors and deletes those healthmonitors.
Namespace: openstack
Parameters:
healthmonitor_create_args [ref]
Dict, POST /lb/healthmonitors request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_delete_pools [Scenario]¶
Create pools(v1) and delete pools(v1).
Measure the "neutron lb-pool-create" and "neutron lb-pool-delete" command performance. The scenario creates a pool for every subnet and then deletes those pools.
Namespace: openstack
Parameters:
pool_create_args [ref]
Dict, POST /lb/pools request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_delete_vips [Scenario]¶
Create a vip(v1) and then delete vips(v1).
Measure the "neutron lb-vip-create" and "neutron lb-vip-delete" command performance. The scenario creates a vip for pool and then deletes those vips.
Namespace: openstack
Parameters:
pool_create_args [ref]
Dict, POST /lb/pools request options
vip_create_args [ref]
Dict, POST /lb/vips request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_list_healthmonitors [Scenario]¶
Create healthmonitors(v1) and list healthmonitors(v1).
Measure the "neutron lb-healthmonitor-list" command performance. This scenario creates healthmonitors and lists them.
Namespace: openstack
Parameters:
healthmonitor_create_args [ref]
Dict, POST /lb/healthmonitors request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_list_pools [Scenario]¶
Create a pool(v1) and then list pools(v1).
Measure the "neutron lb-pool-list" command performance. The scenario creates a pool for every subnet and then lists pools.
Namespace: openstack
Parameters:
pool_create_args [ref]
Dict, POST /lb/pools request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_list_vips [Scenario]¶
Create a vip(v1) and then list vips(v1).
Measure the "neutron lb-vip-create" and "neutron lb-vip-list" command performance. The scenario creates a vip for every pool created and then lists vips.
Namespace: openstack
Parameters:
vip_create_args [ref]
Dict, POST /lb/vips request options
pool_create_args [ref]
Dict, POST /lb/pools request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_update_healthmonitors [Scenario]¶
Create a healthmonitor(v1) and update healthmonitors(v1).
Measure the "neutron lb-healthmonitor-create" and "neutron lb-healthmonitor-update" command performance. The scenario creates healthmonitors and then updates them.
Namespace: openstack
Parameters:
healthmonitor_create_args [ref]
Dict, POST /lb/healthmonitors request options
healthmonitor_update_args [ref]
Dict, POST /lb/healthmonitors update options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_update_pools [Scenario]¶
Create pools(v1) and update pools(v1).
Measure the "neutron lb-pool-create" and "neutron lb-pool-update" command performance. The scenario creates a pool for every subnet and then update those pools.
Namespace: openstack
Parameters:
pool_create_args [ref]
Dict, POST /lb/pools request options
pool_update_args [ref]
Dict, POST /lb/pools update options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_update_vips [Scenario]¶
Create vips(v1) and update vips(v1).
Measure the "neutron lb-vip-create" and "neutron lb-vip-update" command performance. The scenario creates a pool for every subnet and then update those pools.
Namespace: openstack
Parameters:
pool_create_args [ref]
Dict, POST /lb/pools request options
vip_create_args [ref]
Dict, POST /lb/vips request options
vip_update_args [ref]
Dict, POST /lb/vips update options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV2.create_and_list_loadbalancers [Scenario]¶
Create a loadbalancer(v2) and then list loadbalancers(v2).
Measure the "neutron lbaas-loadbalancer-list" command performance. The scenario creates a loadbalancer for every subnet and then lists loadbalancers.
Namespace: openstack
Parameters:
lb_create_args [ref]
Dict, POST /lbaas/loadbalancers request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v2
NeutronNetworks.create_and_delete_floating_ips [Scenario]¶
Create and delete floating IPs.
Measure the "neutron floating-ip-create" and "neutron floating-ip-delete" commands performance.
Namespace: openstack
Parameters:
floating_network [ref]
Str, external network for floating IP creation
floating_ip_args [ref]
Dict, POST /floatingips request options
NeutronNetworks.create_and_delete_networks [Scenario]¶
Create and delete a network.
Measure the "neutron net-create" and "net-delete" command performance.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options
NeutronNetworks.create_and_delete_ports [Scenario]¶
Create and delete a port.
Measure the "neutron port-create" and "neutron port-delete" commands performance.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
port_create_args [ref]
Dict, POST /v2.0/ports request options
ports_per_network [ref]
Int, number of ports for one network
NeutronNetworks.create_and_delete_routers [Scenario]¶
Create and delete a given number of routers.
Create a network, a given number of subnets and routers and then delete all routers.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
subnet_create_args [ref]
Dict, POST /v2.0/subnets request options
subnet_cidr_start [ref]
Str, start value for subnets CIDR
subnets_per_network [ref]
Int, number of subnets for one network
router_create_args [ref]
Dict, POST /v2.0/routers request options
NeutronNetworks.create_and_delete_subnets [Scenario]¶
Create and delete a given number of subnets.
The scenario creates a network, a given number of subnets and then deletes subnets.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
subnet_create_args [ref]
Dict, POST /v2.0/subnets request options
subnet_cidr_start [ref]
Str, start value for subnets CIDR
subnets_per_network [ref]
Int, number of subnets for one network
NeutronNetworks.create_and_list_floating_ips [Scenario]¶
Create and list floating IPs.
Measure the "neutron floating-ip-create" and "neutron floating-ip-list" commands performance.
Namespace: openstack
Parameters:
floating_network [ref]
Str, external network for floating IP creation
floating_ip_args [ref]
Dict, POST /floatingips request options
NeutronNetworks.create_and_list_networks [Scenario]¶
Create a network and then list all networks.
Measure the "neutron net-list" command performance.
If you have only 1 user in your context, you will add 1 network on every iteration. So you will have more and more networks and will be able to measure the performance of the "neutron net-list" command depending on the number of networks owned by users.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options
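A sketch of a task entry for this scenario is shown below, assuming the standard JSON task file format; the "quotas" context entry is an assumption added here only to avoid hitting the default network quota as networks accumulate:
{
"NeutronNetworks.create_and_list_networks": [
{
"args": {"network_create_args": {}},
"runner": {"type": "constant", "times": 10, "concurrency": 2},
"context": {"users": {"tenants": 1, "users_per_tenant": 1}, "quotas": {"neutron": {"network": -1}}}
}
]
}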
NeutronNetworks.create_and_list_ports [Scenario]¶
Create a given number of ports and list all ports.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
port_create_args [ref]
Dict, POST /v2.0/ports request options
ports_per_network [ref]
Int, number of ports for one network
NeutronNetworks.create_and_list_routers [Scenario]¶
Create a given number of routers and list all routers.
Create a network, a given number of subnets and routers and then list all routers.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
subnet_create_args [ref]
Dict, POST /v2.0/subnets request options
subnet_cidr_start [ref]
Str, start value for subnets CIDR
subnets_per_network [ref]
Int, number of subnets for one network
router_create_args [ref]
Dict, POST /v2.0/routers request options
NeutronNetworks.create_and_list_subnets [Scenario]¶
Create a given number of subnets and list all subnets.
The scenario creates a network, a given number of subnets and then lists subnets.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated
subnet_create_args [ref]
Dict, POST /v2.0/subnets request options
subnet_cidr_start [ref]
Str, start value for subnets CIDR
subnets_per_network [ref]
Int, number of subnets for one network
NeutronNetworks.create_and_show_network [Scenario]¶
Create a network and show network details.
Measure the "neutron net-show" command performance.
Namespace: openstack
Parameters:
network_create_args [ref]
Dict, POST /v2.0/networks request options
NeutronNetworks.create_and_update_networks [Scenario]¶
Create and update a network.
Measure the "neutron net-create and net-update" command performance.
Namespace: openstack
Parameters:
network_update_args [ref]
Dict, PUT /v2.0/networks update request
network_create_args [ref]
Dict, POST /v2.0/networks request options
NeutronNetworks.create_and_update_ports [Scenario]¶
Create and update a given number of ports.
Measure the "neutron port-create" and "neutron port-update" commands performance.
Namespace: openstack
Parameters:
port_update_args [ref]
Dict, PUT /v2.0/ports update request options
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
port_create_args [ref]
Dict, POST /v2.0/ports request options
ports_per_network [ref]
Int, number of ports for one network
NeutronNetworks.create_and_update_routers [Scenario]¶
Create and update a given number of routers.
Create a network, a given number of subnets and routers, and then update all routers.
Namespace: openstack
Parameters:
router_update_args [ref]
Dict, PUT /v2.0/routers update options
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
subnet_create_args [ref]
Dict, POST /v2.0/subnets request options
subnet_cidr_start [ref]
Str, start value for subnets CIDR
subnets_per_network [ref]
Int, number of subnets for one network
router_create_args [ref]
Dict, POST /v2.0/routers request options
NeutronNetworks.create_and_update_subnets [Scenario]¶
Create and update a subnet.
The scenario creates a network, a given number of subnets and then updates the subnet. This scenario measures the "neutron subnet-update" command performance.
Namespace: openstack
Parameters:
subnet_update_args [ref]
Dict, PUT /v2.0/subnets update options
network_create_args [ref]
Dict, POST /v2.0/networks request options. Deprecated.
subnet_create_args [ref]
Dict, POST /v2.0/subnets request options
subnet_cidr_start [ref]
Str, start value for subnets CIDR
subnets_per_network [ref]
Int, number of subnets for one network
NeutronNetworks.list_agents [Scenario]¶
List all neutron agents.
This simple scenario tests the "neutron agent-list" command by listing all the neutron agents.
Namespace: openstack
Parameters:
agent_args [ref]
Dict, POST /v2.0/agents request options
NeutronSecurityGroup.create_and_delete_security_groups [Scenario]¶
Create and delete Neutron security-groups.
Measure the "neutron security-group-create" and "neutron security-group-delete" command performance.
Namespace: openstack
Parameters:
security_group_create_args [ref]
Dict, POST /v2.0/security-groups request options
Module: rally.plugins.openstack.scenarios.neutron.security_groups
NeutronSecurityGroup.create_and_list_security_groups [Scenario]¶
Create and list Neutron security-groups.
Measure the "neutron security-group-create" and "neutron security-group-list" command performance.
Namespace: openstack
Parameters:
security_group_create_args [ref]
Dict, POST /v2.0/security-groups request options
Module: rally.plugins.openstack.scenarios.neutron.security_groups
NeutronSecurityGroup.create_and_update_security_groups [Scenario]¶
Create and update Neutron security-groups.
Measure the "neutron security-group-create" and "neutron security-group-update" command performance.
Namespace: openstack
Parameters:
security_group_create_args [ref]
Dict, POST /v2.0/security-groups request options
security_group_update_args [ref]
Dict, PUT /v2.0/security-groups update options
Module: rally.plugins.openstack.scenarios.neutron.security_groups
NovaAgents.list_agents [Scenario]¶
List all agent builds.
Measure the "nova agent-list" command performance.
Namespace: openstack
Parameters:
hypervisor [ref]
List agent builds on a specific hypervisor. None (default value) means list for all hypervisors
NovaAggregates.create_aggregate_add_and_remove_host [Scenario]¶
Create an aggregate, add a host to it and then remove the host from it.
Measure "nova aggregate-add-host" and "nova aggregate-remove-host" command performance.
Namespace: openstack
Parameters:
availability_zone [ref]
The availability zone of the aggregate
NovaAggregates.create_aggregate_add_host_and_boot_server [Scenario]¶
Scenario to create and verify an aggregate.
This scenario creates an aggregate, adds a compute host and metadata to the aggregate, adds the same metadata to the flavor and creates an instance. Verifies that instance host is one of the hosts in the aggregate.
Namespace: openstack
Parameters:
image [ref]
The image ID to boot from
metadata [ref]
The metadata to be set as flavor extra specs
availability_zone [ref]
The availability zone of the aggregate
ram [ref]
Memory in MB for the flavor
vcpus [ref]
Number of VCPUs for the flavor
disk [ref]
Size of local disk in GB
boot_server_kwargs [ref]
Optional additional arguments to verify host aggregates
NovaAggregates.create_and_delete_aggregate [Scenario]¶
Create an aggregate and then delete it.
This scenario first creates an aggregate and then deletes it.
Namespace: openstack
Parameters:
availability_zone [ref]
The availability zone of the aggregate
NovaAggregates.create_and_get_aggregate_details [Scenario]¶
Create an aggregate and then get its details.
This scenario first creates an aggregate and then gets its details.
Namespace: openstack
Parameters:
availability_zone [ref]
The availability zone of the aggregate
NovaAggregates.create_and_list_aggregates [Scenario]¶
Create an aggregate and then list all aggregates.
This scenario creates an aggregate and then lists all aggregates.
Namespace: openstack
Parameters:
availability_zone [ref]
The availability zone of the aggregate
NovaAggregates.create_and_update_aggregate [Scenario]¶
Create an aggregate and then update its name and availability_zone.
This scenario first creates an aggregate and then updates its name and availability_zone.
Namespace: openstack
Parameters:
availability_zone [ref]
The availability zone of the aggregate
NovaAggregates.list_aggregates [Scenario]¶
List all nova aggregates.
Measure the "nova aggregate-list" command performance.
Namespace: openstack
NovaAvailabilityZones.list_availability_zones [Scenario]¶
List all availability zones.
Measure the "nova availability-zone-list" command performance.
Namespace: openstack
Parameters:
detailed [ref]
True if the availability-zone listing should contain detailed information about all of them
Module: rally.plugins.openstack.scenarios.nova.availability_zones
NovaFlavors.create_and_delete_flavor [Scenario]¶
Create flavor and delete the flavor.
Namespace: openstack
Parameters:
ram [ref]
Memory in MB for the flavor
vcpus [ref]
Number of VCPUs for the flavor
disk [ref]
Size of local disk in GB
kwargs [ref]
Optional additional arguments for flavor creation
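An illustrative task entry for this scenario (standard JSON task file format assumed; flavor sizes are placeholders, and the scenario uses the admin credential) might be:
{
"NovaFlavors.create_and_delete_flavor": [
{
"args": {"ram": 512, "vcpus": 1, "disk": 1},
"runner": {"type": "constant", "times": 5, "concurrency": 1}
}
]
}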
NovaFlavors.create_and_get_flavor [Scenario]¶
Create flavor and get detailed information of the flavor.
Namespace: openstack
Parameters:
ram [ref]
Memory in MB for the flavor
vcpus [ref]
Number of VCPUs for the flavor
disk [ref]
Size of local disk in GB
kwargs [ref]
Optional additional arguments for flavor creation
NovaFlavors.create_and_list_flavor_access [Scenario]¶
Create a non-public flavor and list its access rules.
Namespace: openstack
Parameters:
ram [ref]
Memory in MB for the flavor
vcpus [ref]
Number of VCPUs for the flavor
disk [ref]
Size of local disk in GB
kwargs [ref]
Optional additional arguments for flavor creation
NovaFlavors.create_flavor [Scenario]¶
Create a flavor.
Namespace: openstack
Parameters:
ram [ref]
Memory in MB for the flavor
vcpus [ref]
Number of VCPUs for the flavor
disk [ref]
Size of local disk in GB
kwargs [ref]
Optional additional arguments for flavor creation
NovaFlavors.create_flavor_and_add_tenant_access [Scenario]¶
Create a flavor and add flavor access for the given tenant.
Namespace: openstack
Parameters:
ram [ref]
Memory in MB for the flavor
vcpus [ref]
Number of VCPUs for the flavor
disk [ref]
Size of local disk in GB
kwargs [ref]
Optional additional arguments for flavor creation
NovaFlavors.create_flavor_and_set_keys [Scenario]¶
Create flavor and set keys to the flavor.
Measure the "nova flavor-key" command performance. the scenario first create a flavor,then add the extra specs to it.
Namespace: openstack
Parameters:
ram [ref]
Memory in MB for the flavor
vcpus [ref]
Number of VCPUs for the flavor
disk [ref]
Size of local disk in GB
extra_specs [ref]
Additional arguments for flavor set keys
kwargs [ref]
Optional additional arguments for flavor creation
NovaFlavors.list_flavors [Scenario]¶
List all flavors.
Measure the "nova flavor-list" command performance.
Namespace: openstack
Parameters:
detailed [ref]
True if the flavor listing should contain detailed information
kwargs [ref]
Optional additional arguments for flavor listing
NovaFloatingIpsBulk.create_and_delete_floating_ips_bulk [Scenario]¶
Create nova floating IP by range and delete it.
This scenario creates a floating IP by range and then deletes it.
Namespace: openstack
Parameters:
start_cidr [ref]
Floating IP range
kwargs [ref]
Optional additional arguments for range IP creation
Module: rally.plugins.openstack.scenarios.nova.floating_ips_bulk
NovaFloatingIpsBulk.create_and_list_floating_ips_bulk [Scenario]¶
Create nova floating IP by range and list it.
This scenario creates a floating IP by range and then lists all.
Namespace: openstack
Parameters:
start_cidr [ref]
Floating IP range
kwargs [ref]
Optional additional arguments for range IP creation
Module: rally.plugins.openstack.scenarios.nova.floating_ips_bulk
NovaHosts.list_and_get_hosts [Scenario]¶
List all nova hosts, and get detailed information for compute hosts.
Measure the "nova host-describe" command performance.
Namespace: openstack
Parameters:
zone [ref]
List nova hosts in an availability-zone. None (default value) means list hosts in all availability-zones
NovaHosts.list_hosts [Scenario]¶
List all nova hosts.
Measure the "nova host-list" command performance.
Namespace: openstack
Parameters:
zone [ref]
List nova hosts in an availability-zone. None (default value) means list hosts in all availability-zones
NovaHypervisors.list_and_get_hypervisors [Scenario]¶
List and Get hypervisors.
The scenario first lists all hypervisors, then gets detailed information about each of the listed hypervisors in turn.
Measure the "nova hypervisor-show" command performance.
Namespace: openstack
Parameters:
detailed [ref]
True if the hypervisor listing should contain detailed information about all of them
NovaHypervisors.list_and_get_uptime_hypervisors [Scenario]¶
List hypervisors, then display their uptime.
The scenario first lists all hypervisors, then displays the uptime of each listed hypervisor in turn.
Measure the "nova hypervisor-uptime" command performance.
Namespace: openstack
Parameters:
detailed [ref]
True if the hypervisor listing should contain detailed information about all of them
NovaHypervisors.list_and_search_hypervisors [Scenario]¶
List all servers belonging to specific hypervisor.
The scenario first lists all hypervisors, then finds their hostnames, then lists all servers belonging to each hypervisor.
Measure the "nova hypervisor-servers <hostname>" command performance.
Namespace: openstack
Parameters:
detailed [ref]
True if the hypervisor listing should contain detailed information about all of them
NovaHypervisors.list_hypervisors [Scenario]¶
List hypervisors.
Measure the "nova hypervisor-list" command performance.
Namespace: openstack
Parameters:
detailed [ref]
True if the hypervisor listing should contain detailed information about all of them
NovaHypervisors.statistics_hypervisors [Scenario]¶
Get hypervisor statistics over all compute nodes.
Measure the "nova hypervisor-stats" command performance.
Namespace: openstack
NovaImages.list_images [Scenario]¶
List all images.
Measure the "nova image-list" command performance.
Namespace: openstack
Parameters:
detailed [ref]
True if the image listing should contain detailed information
kwargs [ref]
Optional additional arguments for image listing
NovaKeypair.boot_and_delete_server_with_keypair [Scenario]¶
Boot and delete server with keypair.
Plan of this scenario:
- create a keypair
- boot a VM with created keypair
- delete server
- delete keypair
Namespace: openstack
Parameters:
image [ref]
ID of the image to be used for server creation
flavor [ref]
ID of the flavor to be used for server creation
boot_server_kwargs [ref]
Optional additional arguments for VM creation
server_kwargs [ref]
Deprecated alias for boot_server_kwargs
kwargs [ref]
Optional additional arguments for keypair creation
NovaKeypair.create_and_delete_keypair [Scenario]¶
Create a keypair with random name and delete keypair.
This scenario creates a keypair and then deletes that keypair.
Namespace: openstack
Parameters:
kwargs [ref]
Optional additional arguments for keypair creation
NovaKeypair.create_and_get_keypair [Scenario]¶
Create a keypair and get the keypair details.
Namespace: openstack
Parameters:
kwargs [ref]
Optional additional arguments for keypair creation
NovaKeypair.create_and_list_keypairs [Scenario]¶
Create a keypair with random name and list keypairs.
This scenario creates a keypair and then lists all keypairs.
Namespace: openstack
Parameters:
kwargs [ref]
Optional additional arguments for keypair creation
NovaNetworks.create_and_delete_network [Scenario]¶
Create nova network and delete it.
Namespace: openstack
Parameters:
start_cidr [ref]
IP range
kwargs [ref]
Optional additional arguments for network creation
NovaNetworks.create_and_list_networks [Scenario]¶
Create nova network and list all networks.
Namespace: openstack
Parameters:
start_cidr [ref]
IP range
kwargs [ref]
Optional additional arguments for network creation
NovaSecGroup.boot_and_delete_server_with_secgroups [Scenario]¶
Boot and delete server with security groups attached.
Plan of this scenario:
- create N security groups with M rules per group
- boot a VM with the created security groups
- get the list of security groups attached to the server
- delete the server
- delete all security groups
- check that all groups were attached to the server
Namespace: openstack
Parameters:
image [ref]
ID of the image to be used for server creation
flavor [ref]
ID of the flavor to be used for server creation
security_group_count [ref]
Number of security groups
rules_per_security_group [ref]
Number of rules per security group
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaSecGroup.boot_server_and_add_secgroups [Scenario]¶
Boot a server and add a security group to it.
Plan of this scenario:
- create N security groups with M rules per group
- boot a VM
- add security groups to VM
Namespace: openstack
Parameters:
image [ref]
ID of the image to be used for server creation
flavor [ref]
ID of the flavor to be used for server creation
security_group_count [ref]
Number of security groups
rules_per_security_group [ref]
Number of rules per security group
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaSecGroup.create_and_delete_secgroups [Scenario]¶
Create and delete security groups.
This scenario creates N security groups with M rules per group and then deletes them.
Namespace: openstack
Parameters:
security_group_count [ref]
Number of security groups
rules_per_security_group [ref]
Number of rules per security group
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaSecGroup.create_and_list_secgroups [Scenario]¶
Create and list security groups.
This scenario creates N security groups with M rules per group and then lists them.
Namespace: openstack
Parameters:
security_group_count [ref]
Number of security groups
rules_per_security_group [ref]
Number of rules per security group
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaSecGroup.create_and_update_secgroups [Scenario]¶
Create and update security groups.
This scenario creates 'security_group_count' security groups then updates their name and description.
Namespace: openstack
Parameters:
security_group_count [ref]
Number of security groups
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaServerGroups.create_and_list_server_groups [Scenario]¶
Create a server group, then list all server groups.
Measure the "nova server-group-create" and "nova server-group-list" command performance.
Namespace: openstack
Parameters:
all_projects [ref]
If True, display server groups from all projects (admin only)
kwargs [ref]
Server group name and policy
Module: rally.plugins.openstack.scenarios.nova.server_groups
NovaServers.boot_and_associate_floating_ip [Scenario]¶
Boot a server and associate a floating IP to it.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_and_bounce_server [Scenario]¶
Boot a server and run specified actions against it.
Actions should be passed into the actions parameter. Available actions are 'hard_reboot', 'soft_reboot', 'stop_start', 'rescue_unrescue', 'pause_unpause', 'suspend_resume', 'lock_unlock' and 'shelve_unshelve'. Delete server after all actions were completed.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
force_delete [ref]
True if force_delete should be used
actions [ref]
List of action dictionaries, where each action dictionary specifies an action to be performed in the following format: {"action_name": <no_of_iterations>} (see the sample task arguments after this parameter list)
kwargs [ref]
Optional additional arguments for server creation
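As a sketch of the "actions" format referenced above, the scenario arguments might look like the following; the flavor and image names and the chosen actions are illustrative only:
"args": {
    "flavor": {"name": "m1.tiny"},
    "image": {"name": "^cirros.*$"},
    "force_delete": false,
    "actions": [
        {"hard_reboot": 1},
        {"stop_start": 2},
        {"pause_unpause": 1}
    ]
}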
NovaServers.boot_and_delete_multiple_servers [Scenario]¶
Boot multiple servers in a single request and delete them.
Deletion is done in parallel with one request per server, not with a single request for all servers.
Namespace: openstack
Parameters:
image [ref]
The image to boot from
flavor [ref]
Flavor used to boot instance
count [ref]
Number of instances to boot
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for instance creation
NovaServers.boot_and_delete_server [Scenario]¶
Boot and delete a server.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between server creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
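A typical task entry for this scenario might look like the sketch below, assuming placeholder flavor/image names and an illustrative random pause of 1-5 seconds between boot and delete:
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "^cirros.*$"},
                "min_sleep": 1,
                "max_sleep": 5
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}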
NovaServers.boot_and_get_console_output [Scenario]¶
Get text console output from server.
This simple scenario tests the nova console-log command by retrieving the text console log output.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
length [ref]
The number of tail log lines you would like to retrieve. None (default value) or -1 means unlimited length.
kwargs [ref]
Optional additional arguments for server creation
Returns: Text console log output for server
NovaServers.boot_and_list_server [Scenario]¶
Boot a server from an image and then list all servers.
Measure the "nova list" command performance.
If you have only 1 user in your context, you will add 1 server on every iteration. So you will have more and more servers and will be able to measure the performance of the "nova list" command depending on the number of servers owned by users.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
detailed [ref]
True if the server listing should contain detailed information about all servers
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_and_live_migrate_server [Scenario]¶
Live Migrate a server.
This scenario launches a VM on a compute node available in the availability zone and then migrates the VM to another compute node on the same availability zone.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between VM booting and running live migration (of random duration from range [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
block_migration [ref]
Specifies the migration type
disk_over_commit [ref]
Specifies whether to allow overcommit on migrated instance or not
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_and_migrate_server [Scenario]¶
Migrate a server.
This scenario launches a VM on a compute node available in the availability zone, and then migrates the VM to another compute node on the same availability zone.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_and_rebuild_server [Scenario]¶
Rebuild a server.
This scenario launches a VM, then rebuilds that VM with a different image.
Namespace: openstack
Parameters:
from_image [ref]
Image to be used to boot an instance
to_image [ref]
Image to be used to rebuild the instance
flavor [ref]
Flavor to be used to boot an instance
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_and_show_server [Scenario]¶
Show server details.
This simple scenario tests the nova show command by retrieving the server details.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
kwargs [ref]
Optional additional arguments for server creation
Returns: Server details
NovaServers.boot_and_update_server [Scenario]¶
Boot a server, then update its name and description.
The scenario first creates a server, then updates it. Assumes that cleanup is done elsewhere.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
description [ref]
Update the server description
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_lock_unlock_and_delete [Scenario]¶
Boot a server, lock it, then unlock and delete it.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between locking and unlocking the server (of random duration from min_sleep to max_sleep).
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
min_sleep [ref]
Minimum sleep time between locking and unlocking in seconds
max_sleep [ref]
Maximum sleep time between locking and unlocking in seconds
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_server [Scenario]¶
Boot a server.
Assumes that cleanup is done elsewhere.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
auto_assign_nic [ref]
True if NICs should be assigned
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_server_associate_and_dissociate_floating_ip [Scenario]¶
Boot a server associate and dissociate a floating IP from it.
The scenario first boots a server and creates a floating IP, then associates the floating IP with the server. Finally, it dissociates the floating IP.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_server_attach_created_volume_and_live_migrate [Scenario]¶
Create a VM, attach a volume to it and live migrate.
Simple test to create a VM and attach a volume, then migrate the VM, detach the volume and delete volume/VM.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between attaching a volume and running live migration (of random duration from range [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
image [ref]
Glance image name to use for the VM
flavor [ref]
VM flavor name
size [ref]
Volume size (in GB)
block_migration [ref]
Specifies the migration type
disk_over_commit [ref]
Specifies whether to allow overcommit on migrated instance or not
boot_server_kwargs [ref]
Optional arguments for VM creation
create_volume_kwargs [ref]
Optional arguments for volume creation
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
NovaServers.boot_server_attach_created_volume_and_resize [Scenario]¶
Create a VM from image, attach a volume to it and resize.
Simple test to create a VM and attach a volume, then resize the VM, detach the volume then delete volume and VM. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between attaching a volume and running resize (of random duration from range [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
image [ref]
Glance image name to use for the VM
flavor [ref]
VM flavor name
to_flavor [ref]
Flavor to be used to resize the booted instance
volume_size [ref]
Volume size (in GB)
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
force_delete [ref]
True if force_delete should be used
confirm [ref]
True if the resize should be confirmed; otherwise the resize is reverted
do_delete [ref]
True if resources need to be deleted explicitly; otherwise Rally cleanup removes the resources
boot_server_kwargs [ref]
Optional arguments for VM creation
create_volume_kwargs [ref]
Optional arguments for volume creation
NovaServers.boot_server_from_volume [Scenario]¶
Boot a server from volume.
The scenario first creates a volume and then a server. Assumes that cleanup is done elsewhere.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
volume_size [ref]
Volume size (in GB)
volume_type [ref]
Specifies volume type when there are multiple backends
auto_assign_nic [ref]
True if NICs should be assigned
kwargs [ref]
Optional additional arguments for server creation
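A sketch of the scenario arguments, with placeholder flavor/image names and an illustrative 10 GB bootable volume (volume_type is omitted here; it matters only when multiple volume backends are configured):
"args": {
    "flavor": {"name": "m1.tiny"},
    "image": {"name": "^cirros.*$"},
    "volume_size": 10,
    "auto_assign_nic": false
}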
NovaServers.boot_server_from_volume_and_delete [Scenario]¶
Boot a server from volume and then delete it.
The scenario first creates a volume and then a server. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
volume_size [ref]
Volume size (in GB)
volume_type [ref]
Specifies volume type when there are multiple backends
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_server_from_volume_and_live_migrate [Scenario]¶
Boot a server from volume and then migrate it.
The scenario first creates a volume and a server booted from the volume on a compute node available in the availability zone and then migrates the VM to another compute node on the same availability zone.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between VM booting and running live migration (of random duration from range [min_sleep, max_sleep]).
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
volume_size [ref]
Volume size (in GB)
volume_type [ref]
Specifies volume type when there are multiple backends
block_migration [ref]
Specifies the migration type
disk_over_commit [ref]
Specifies whether to allow overcommit on migrated instance or not
force_delete [ref]
True if force_delete should be used
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
kwargs [ref]
Optional additional arguments for server creation
NovaServers.boot_server_from_volume_and_resize [Scenario]¶
Boot a server from volume, then resize and delete it.
The scenario first creates a volume and then a server. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
This test will confirm the resize by default, or revert the resize if confirm is set to false.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
to_flavor [ref]
Flavor to be used to resize the booted instance
volume_size [ref]
Volume size (in GB)
min_sleep [ref]
Minimum sleep time in seconds (non-negative)
max_sleep [ref]
Maximum sleep time in seconds (non-negative)
force_delete [ref]
True if force_delete should be used
confirm [ref]
True if the resize should be confirmed; otherwise the resize is reverted
do_delete [ref]
True if resources need to be deleted explicitly; otherwise Rally cleanup removes the resources
boot_server_kwargs [ref]
Optional arguments for VM creation
create_volume_kwargs [ref]
Optional arguments for volume creation
NovaServers.boot_server_from_volume_snapshot [Scenario]¶
Boot a server from a snapshot.
The scenario first creates a volume and creates a snapshot from this volume, then boots a server from the created snapshot. Assumes that cleanup is done elsewhere.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
volume_size [ref]
Volume size (in GB)
volume_type [ref]
Specifies volume type when there are multiple backends
auto_assign_nic [ref]
True if NICs should be assigned
kwargs [ref]
Optional additional arguments for server creation
NovaServers.list_servers [Scenario]¶
List all servers.
This simple scenario tests the nova list command by listing all the servers.
Namespace: openstack
Parameters:
detailed [ref]
True if detailed information about servers should be listed
NovaServers.pause_and_unpause_server [Scenario]¶
Create a server, pause, unpause and then delete it
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
NovaServers.resize_server [Scenario]¶
Boot a server, then resize and delete it.
This test will confirm the resize by default, or revert the resize if confirm is set to false.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
to_flavor [ref]
Flavor to be used to resize the booted instance
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
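A sketch of the scenario arguments; the flavor names are placeholders, and confirm is shown explicitly even though confirming the resize is the default behaviour:
"args": {
    "flavor": {"name": "m1.tiny"},
    "to_flavor": {"name": "m1.small"},
    "image": {"name": "^cirros.*$"},
    "confirm": true,
    "force_delete": false
}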
NovaServers.resize_shutoff_server [Scenario]¶
Boot a server and stop it, then resize and delete it.
This test will confirm the resize by default, or revert the resize if confirm is set to false.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
to_flavor [ref]
Flavor to be used to resize the booted instance
confirm [ref]
True if the resize should be confirmed; otherwise the resize is reverted
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
NovaServers.shelve_and_unshelve_server [Scenario]¶
Create a server, shelve, unshelve and then delete it
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
NovaServers.snapshot_server [Scenario]¶
Boot a server, make its snapshot and delete both.
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
NovaServers.suspend_and_resume_server [Scenario]¶
Create a server, suspend, resume and then delete it
Namespace: openstack
Parameters:
image [ref]
Image to be used to boot an instance
flavor [ref]
Flavor to be used to boot an instance
force_delete [ref]
True if force_delete should be used
kwargs [ref]
Optional additional arguments for server creation
NovaServices.list_services [Scenario]¶
List all nova services.
Measure the "nova service-list" command performance.
Namespace: openstack
Parameters:
host [ref]
List nova services on host
binary [ref]
List nova services matching given binary
Quotas.cinder_get [Scenario]¶
Get quotas for Cinder.
Measure the "cinder quota-show" command performance
Namespace: openstack
Quotas.cinder_update [Scenario]¶
Update quotas for Cinder.
Namespace: openstack
Parameters:
max_quota [ref]
Max value to be updated for quota.
Quotas.cinder_update_and_delete [Scenario]¶
Update and Delete quotas for Cinder.
Namespace: openstack
Parameters:
max_quota [ref]
Max value to be updated for quota.
Quotas.neutron_update [Scenario]¶
Update quotas for neutron.
Namespace: openstack
Parameters:
max_quota [ref]
Max value to be updated for quota.
Quotas.nova_update [Scenario]¶
Update quotas for Nova.
Namespace: openstack
Parameters:
max_quota [ref]
Max value to be updated for quota.
Quotas.nova_update_and_delete [Scenario]¶
Update and delete quotas for Nova.
Namespace: openstack
Parameters:
max_quota [ref]
Max value to be updated for quota.
SaharaClusters.create_and_delete_cluster [Scenario]¶
Launch and delete a Sahara Cluster.
This scenario launches a Hadoop cluster, waits until it becomes 'Active' and deletes it.
Namespace: openstack
Parameters:
flavor [ref]
Nova flavor that will be used for nodes in the created node groups. Deprecated.
master_flavor [ref]
Nova flavor that will be used for the master instance of the cluster
worker_flavor [ref]
Nova flavor that will be used for the workers of the cluster
workers_count [ref]
Number of worker instances in a cluster
plugin_name [ref]
Name of a provisioning plugin
hadoop_version [ref]
Version of Hadoop distribution supported by the specified plugin.
floating_ip_pool [ref]
Floating ip pool name from which Floating IPs will be allocated. Sahara will determine automatically how to treat this depending on its own configurations. Defaults to None because in some cases Sahara may work w/o Floating IPs.
volumes_per_node [ref]
Number of Cinder volumes that will be attached to every cluster node
volumes_size [ref]
Size of each Cinder volume in GB
auto_security_group [ref]
Boolean value. If set to True Sahara will create a Security Group for each Node Group in the Cluster automatically.
security_groups [ref]
List of security groups that will be used while creating VMs. If auto_security_group is set to True, this list can be left empty.
node_configs [ref]
Config dict that will be passed to each Node Group
cluster_configs [ref]
Config dict that will be passed to the Cluster
enable_anti_affinity [ref]
If set to True, the VMs will be scheduled one per compute node.
enable_proxy [ref]
Use Master Node of a Cluster as a Proxy node and do not assign floating ips to workers.
use_autoconfig [ref]
If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually.
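For illustration, the scenario arguments might be filled in roughly as below; the flavor names, plugin name and Hadoop version are placeholders and must match what the target Sahara installation actually provides:
"args": {
    "master_flavor": {"name": "m1.large"},
    "worker_flavor": {"name": "m1.medium"},
    "workers_count": 3,
    "plugin_name": "vanilla",
    "hadoop_version": "2.7.1",
    "auto_security_group": true
}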
SaharaClusters.create_scale_delete_cluster [Scenario]¶
Launch, scale and delete a Sahara Cluster.
This scenario launches a Hadoop cluster, waits until it becomes 'Active', and then applies a series of scale operations. The scaling happens according to the numbers listed in the 'deltas' parameter.
Namespace: openstack
Parameters:
flavor [ref]
Nova flavor that will be used for nodes in the created node groups. Deprecated.
master_flavor [ref]
Nova flavor that will be used for the master instance of the cluster
worker_flavor [ref]
Nova flavor that will be used for the workers of the cluster
workers_count [ref]
Number of worker instances in a cluster
plugin_name [ref]
Name of a provisioning plugin
hadoop_version [ref]
Version of Hadoop distribution supported by the specified plugin.
deltas [ref]
List of integers which will be used to add or remove worker nodes from the cluster
floating_ip_pool [ref]
Floating ip pool name from which Floating IPs will be allocated. Sahara will determine automatically how to treat this depending on its own configurations. Defaults to None because in some cases Sahara may work w/o Floating IPs.
neutron_net_id [ref]
ID of a Neutron network that will be used for fixed IPs. This parameter is ignored when Nova Network is set up.
volumes_per_node [ref]
Number of Cinder volumes that will be attached to every cluster node
volumes_size [ref]
Size of each Cinder volume in GB
auto_security_group [ref]
Boolean value. If set to True Sahara will create a Security Group for each Node Group in the Cluster automatically.
security_groups [ref]
List of security groups that will be used while creating VMs. If auto_security_group is set to True this list can be left empty.
node_configs [ref]
Configs dict that will be passed to each Node Group
cluster_configs [ref]
Configs dict that will be passed to the Cluster
enable_anti_affinity [ref]
If set to True, the VMs will be scheduled one per compute node.
enable_proxy [ref]
Use Master Node of a Cluster as a Proxy node and do not assign floating ips to workers.
use_autoconfig [ref]
If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually.
SaharaJob.create_launch_job [Scenario]¶
Create and execute a Sahara EDP Job.
This scenario creates a Job entity and launches an execution on a Cluster.
Namespace: openstack
Parameters:
job_type [ref]
Type of the Data Processing Job
configs [ref]
Config dict that will be passed to a Job Execution
job_idx [ref]
Index of a job in a sequence. This index will be used to create different atomic actions for each job in a sequence
SaharaJob.create_launch_job_sequence [Scenario]¶
Create and execute a sequence of the Sahara EDP Jobs.
This scenario creates a Job entity and launches an execution on a Cluster for every job object provided.
Namespace: openstack
Parameters:
jobs [ref]
List of jobs that should be executed in one context
SaharaJob.create_launch_job_sequence_with_scaling [Scenario]¶
Create and execute Sahara EDP Jobs on a scaling Cluster.
This scenario creates a Job entity and launches an execution on a Cluster for every job object provided. The Cluster is scaled according to the deltas values and the sequence is launched again.
Namespace: openstack
Parameters:
jobs [ref]
List of jobs that should be executed in one context
deltas [ref]
List of integers which will be used to add or remove worker nodes from the cluster
SaharaNodeGroupTemplates.create_and_list_node_group_templates [Scenario]¶
Create and list Sahara Node Group Templates.
This scenario creates two Node Group Templates with different set of node processes. The master Node Group Template contains Hadoop's management processes. The worker Node Group Template contains Hadoop's worker processes.
By default the templates are created for the vanilla Hadoop provisioning plugin using the version 1.2.1
After the templates are created the list operation is called.
Namespace: openstack
Parameters:
flavor [ref]
Nova flavor that will be used for nodes in the created node groups
plugin_name [ref]
Name of a provisioning plugin
hadoop_version [ref]
Version of Hadoop distribution supported by the specified plugin.
use_autoconfig [ref]
If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually.
Module: rally.plugins.openstack.scenarios.sahara.node_group_templates
SaharaNodeGroupTemplates.create_delete_node_group_templates [Scenario]¶
Create and delete Sahara Node Group Templates.
This scenario creates and deletes the two most common types of Node Group Templates.
By default the templates are created for the vanilla Hadoop provisioning plugin using the version 1.2.1
Namespace: openstack
Parameters:
flavor [ref]
Nova flavor that will be used for nodes in the created node groups
plugin_name [ref]
Name of a provisioning plugin
hadoop_version [ref]
Version of Hadoop distribution supported by the specified plugin.
use_autoconfig [ref]
If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually.
Module: rally.plugins.openstack.scenarios.sahara.node_group_templates
SenlinClusters.create_and_delete_cluster [Scenario]¶
Create a cluster and then delete it.
Measure the "senlin cluster-create" and "senlin cluster-delete" commands performance.
Namespace: openstack
Parameters:
desired_capacity [ref]
The capacity or initial number of nodes owned by the cluster
min_size [ref]
The minimum number of nodes owned by the cluster
max_size [ref]
The maximum number of nodes owned by the cluster. -1 means no limit
timeout [ref]
The timeout value in seconds for cluster creation
metadata [ref]
A set of key value pairs to associate with the cluster
SwiftObjects.create_container_and_object_then_delete_all [Scenario]¶
Create container and objects then delete everything created.
Namespace: openstack
Parameters:
objects_per_container [ref]
Int, number of objects to upload
object_size [ref]
Int, temporary local object size
kwargs [ref]
Dict, optional parameters to create container
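A sketch of the scenario arguments, with illustrative values (ten 100 KB objects per container); depending on the cloud's Swift access rules, an appropriate roles context may also be required:
"args": {
    "objects_per_container": 10,
    "object_size": 102400
}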
SwiftObjects.create_container_and_object_then_download_object [Scenario]¶
Create container and objects then download all objects.
Namespace: openstack
Parameters:
objects_per_container [ref]
Int, number of objects to upload
object_size [ref]
Int, temporary local object size
kwargs [ref]
Dict, optional parameters to create container
SwiftObjects.create_container_and_object_then_list_objects [Scenario]¶
Create container and objects then list all objects.
Namespace: openstack
Parameters:
objects_per_container [ref]
Int, number of objects to upload
object_size [ref]
Int, temporary local object size
kwargs [ref]
Dict, optional parameters to create container
SwiftObjects.list_and_download_objects_in_containers [Scenario]¶
List and download objects in all containers.
Namespace: openstack
SwiftObjects.list_objects_in_containers [Scenario]¶
List objects in all containers.
Namespace: openstack
VMTasks.boot_runcommand_delete [Scenario]¶
Boot a server, run script specified in command and delete server.
Namespace: openstack
Parameters:
image [ref]
Glance image name to use for the VM. Optional if the "image_command_customizer" context is specified
flavor [ref]
VM flavor name
username [ref]
Ssh username on server, str
password [ref]
Password for SSH authentication
command [ref]
Command-specifying dictionary that either specifies a remote command path via 'remote_path' (which can be uploaded from a local file specified by 'local_path'), an inline script via 'script_inline', or a local script file path using 'script_file'. Both 'script_file' and 'local_path' are checked for accessibility by the 'file_exists' validator code.
Both 'script_inline' and 'script_file' require an 'interpreter' value that specifies the interpreter the script should be run with.
Note that both 'interpreter' and 'remote_path' can be arrays prefixed with environment variables and suffixed with arguments for the 'interpreter' command. The last component of 'remote_path' must be a path to a command to execute (which is also the upload destination if a 'local_path' is given). Uploading an interpreter is possible but requires that the 'remote_path' and 'interpreter' paths match.
Examples:
# Run a 'local_script.pl' file sending it to a remote Perl interpreter
command = {
    "script_file": "local_script.pl",
    "interpreter": "/usr/bin/perl"
}

# Run an inline script sending it to a remote interpreter
command = {
    "script_inline": "echo 'Hello, World!'",
    "interpreter": "/bin/sh"
}

# Run a remote command
command = {
    "remote_path": "/bin/false"
}

# Copy a local command and run it
command = {
    "remote_path": "/usr/local/bin/fio",
    "local_path": "/home/foobar/myfiodir/bin/fio"
}

# Copy a local command and run it with an environment variable
command = {
    "remote_path": ["HOME=/root", "/usr/local/bin/fio"],
    "local_path": "/home/foobar/myfiodir/bin/fio"
}

# Run an inline script sending it to a remote interpreter
command = {
    "script_inline": "echo \"Hello, ${NAME:-World}\"",
    "interpreter": ["NAME=Earth", "/bin/sh"]
}

# Run an inline script sending it to an uploaded remote interpreter
command = {
    "script_inline": "echo \"Hello, ${NAME:-World}\"",
    "interpreter": ["NAME=Earth", "/tmp/sh"],
    "remote_path": "/tmp/sh",
    "local_path": "/home/user/work/cve/sh-1.0/bin/sh"
}
volume_args [ref]
Volume args for booting server from volume
floating_network [ref]
External network name, for floating ip
port [ref]
Port to use for the SSH connection
use_floating_ip [ref]
Bool, floating or fixed IP for SSH connection
force_delete [ref]
Whether to use force_delete for servers
wait_for_ping [ref]
Whether to check connectivity on server creation
max_log_length [ref]
The number of tail nova console-log lines the user would like to retrieve
Returns: dictionary with keys 'data' and 'errors':
- data: dict, JSON output from the script
- errors: str, raw data from the script's stderr stream
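Putting the pieces above together, a hedged sketch of a task entry for this scenario; the image, flavor and network names and the SSH username are placeholders, and the inline script merely echoes a small JSON document so that the 'data' key of the result can be populated:
{
    "VMTasks.boot_runcommand_delete": [
        {
            "args": {
                "flavor": {"name": "m1.small"},
                "image": {"name": "^cirros.*$"},
                "command": {
                    "script_inline": "echo '{\"greeting\": \"Hello, World!\"}'",
                    "interpreter": "/bin/sh"
                },
                "username": "cirros",
                "use_floating_ip": true,
                "floating_network": "public"
            },
            "runner": {
                "type": "constant",
                "times": 5,
                "concurrency": 1
            },
            "context": {
                "users": {"tenants": 1, "users_per_tenant": 1},
                "network": {}
            }
        }
    ]
}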
VMTasks.dd_load_test [Scenario]¶
Boot a server from a custom image, run a command that outputs JSON.
Example Script in rally-jobs/extra/install_benchmark.sh
Namespace: openstack
Parameters:
command [ref]
Default parameter from scenario
VMTasks.runcommand_heat [Scenario]¶
Run workload on stack deployed by heat.
Workload can be either file or resource:
{"file": "/path/to/file.sh"} {"resource": ["package.module", "workload.py"]}Also it should contain "username" key.
Given file will be uploaded to gate_node and started. This script should print key value pairs separated by colon. These pairs will be presented in results.
Gate node should be accessible via ssh with keypair key_name, so heat template should accept parameter key_name.
Namespace: openstack
Parameters:
workload [ref]
Workload to run
template [ref]
Path to heat template file
files [ref]
Additional template files
parameters [ref]
Parameters for heat template
Watcher.create_audit_and_delete [Scenario]¶
Create and delete audit.
Create an audit, wait until it reaches either the SUCCEEDED or the FAILED state, and then delete the audit.
Namespace: openstack
Watcher.create_audit_template_and_delete [Scenario]¶
Create audit template and delete it.
Namespace: openstack
Parameters:
goal [ref]
The goal audit template is based on
strategy [ref]
The strategy used to provide resource optimization algorithm
Watcher.list_audit_templates [Scenario]¶
List existing audit templates.
Audit templates are being created by Audit Template Context.
Namespace: openstack
Parameters:
name [ref]
Name of the audit template
goal [ref]
Name of the goal
strategy [ref]
Name of the strategy
limit [ref]
The maximum number of results to return per request, if:
- limit > 0, the maximum number of audit templates to return.
- limit == 0, return the entire list of audit_templates.
- limit param is NOT specified (None), the number of items returned respects the maximum imposed by the Watcher API
(see Watcher's api.max_limit option).
sort_key [ref]
Optional, field used for sorting.
sort_dir [ref]
Optional, direction of sorting, either 'asc' (the default) or 'desc'.
detail [ref]
Optional, boolean whether to return detailed information about audit_templates.
ZaqarBasic.create_queue [Scenario]¶
Create a Zaqar queue with a random name.
Namespace: openstack
Parameters:
kwargs [ref]
Other optional parameters to create queues like "metadata"
ZaqarBasic.producer_consumer [Scenario]¶
Serial message producer/consumer.
Creates a Zaqar queue with random name, sends a set of messages and then retrieves an iterator containing those.
Namespace: openstack
Parameters:
min_msg_count [ref]
Min number of messages to be posted
max_msg_count [ref]
Max number of messages to be posted
kwargs [ref]
Other optional parameters to create queues like "metadata"
Scenario Runners¶
constant [Scenario Runner]¶
Creates constant load executing a scenario a specified number of times.
This runner will place a constant load on the cloud under test by executing each scenario iteration without pausing between iterations up to the number of times specified in the scenario config.
The concurrency parameter of the scenario config controls the number of concurrent iterations which execute during a single scenario in order to simulate the activities of multiple users placing load on the cloud under test.
Namespace: default
Parameters:
max_cpu_count (int) [ref]
The maximum number of processes to create load from.
Min value: 1.
type (str) [ref]
Type of Runner.
timeout (float) [ref]
Operation's timeout.
concurrency (int) [ref]
The number of parallel iteration executions.
Min value: 1.
times (int) [ref]
Total number of iteration executions.
Min value: 1.
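In a task file, the runner section for this plugin might look like the fragment below; the times and concurrency values are illustrative:
"runner": {
    "type": "constant",
    "times": 100,
    "concurrency": 10
}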
constant_for_duration [Scenario Runner]¶
Creates constant load executing a scenario for an interval of time.
This runner will place a constant load on the cloud under test by executing each scenario iteration without pausing between iterations until a specified interval of time has elapsed.
The concurrency parameter of the scenario config controls the number of concurrent iterations which execute during a single scenario in order to simulate the activities of multiple users placing load on the cloud under test.
Namespace: default
Parameters:
duration (float) [ref]
The number of seconds during which to generate a load.
Min value: 0.0.
type (str) [ref]
Type of Runner.
timeout (float) [ref]
Operation's timeout.
Min value: 1.
concurrency (int) [ref]
The number of parallel iteration executions.
Min value: 1.
rps [Scenario Runner]¶
Scenario runner that does the job with specified frequency.
Every single benchmark scenario iteration is executed with specified frequency (runs per second) in a pool of processes. The scenario will be launched for a fixed number of times in total (specified in the config).
An example of a rps scenario is booting 1 VM per second. This execution type is thus very helpful in understanding the maximal load that a certain cloud can handle.
Namespace: default
Parameters:
- rps [ref]
times (int) [ref]
Min value: 1.
- timeout (float) [ref]
max_cpu_count (int) [ref]
Min value: 1.
max_concurrency (int) [ref]
Min value: 1.
- type (str) [ref]
Module: rally.plugins.common.runners.rps
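A fragment showing how this runner might be configured in a task file; 10 runs per second up to 100 iterations in total is only an illustrative choice:
"runner": {
    "type": "rps",
    "times": 100,
    "rps": 10
}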
serial [Scenario Runner]¶
Scenario runner that executes benchmark scenarios serially.
Unlike scenario runners that execute in parallel, the serial scenario runner executes scenarios one-by-one in the same python interpreter process as Rally. This allows you to benchmark your scenario without introducing any concurrent operations as well as interactively debug the scenario from the same command that you use to start Rally.
Namespace: default
Parameters:
- type (str) [ref]
times (int) [ref]
Min value: 1.
Triggers¶
event [Trigger]¶
Triggers hook on specified event and list of values.
Namespace: default
Note
One of the following groups of parameters should be provided.
Option 1 of parameters:
Trigger the hook at the specified number of seconds after the start of the workload.
at (list) [ref]
Elements of the list should follow format(s) described below:
- Type: int.
unit [ref]
Set of expected values: 'time'.
Option 2 of parameters:
Trigger the hook at the specified iterations.
at (list) [ref]
Elements of the list should follow format(s) described below:
- Type: int.
unit [ref]
Set of expected values: 'iteration'.
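A hedged sketch of how such a trigger might be wired into the hooks section of a workload; the sys_call hook and its shell command are placeholders used purely for illustration:
"hooks": [
    {
        "name": "sys_call",
        "description": "Run a shell command on selected iterations",
        "args": "echo 'hook fired'",
        "trigger": {
            "name": "event",
            "args": {
                "unit": "iteration",
                "at": [5, 10, 15]
            }
        }
    }
]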
periodic [Trigger]¶
Periodically triggers hook with specified range and step.
Namespace: default
Note
One of the following groups of parameters should be provided.
Option 1 of parameters:
Periodically trigger the hook based on elapsed time after the start of the workload.
start (int) [ref]
Min value: 0.
step (int) [ref]
Min value: 1.
end (int) [ref]
Min value: 1.
unit [ref]
Set of expected values: 'time'.
Option 2 of parameters:
Periodically trigger the hook based on iterations.
start (int) [ref]
Min value: 1.
step (int) [ref]
Min value: 1.
end (int) [ref]
Min value: 1.
unit [ref]
Set of expected values: 'iteration'.
Verification Component¶
Verification Reporters¶
html-static [Verification Reporter]¶
Generates verification report in HTML format with embedded JS/CSS.
Namespace: default
json [Verification Reporter]¶
Generates verification report in JSON format.
An example of the report (All dates, numbers, names appearing in this example are fictitious. Any resemblance to real things is purely coincidental):
{"verifications": { "verification-uuid-1": { "status": "finished", "skipped": 1, "started_at": "2001-01-01T00:00:00", "finished_at": "2001-01-01T00:05:00", "tests_duration": 5, "run_args": { "pattern": "set=smoke", "xfail_list": {"some.test.TestCase.test_xfail": "Some reason why it is expected."}, "skip_list": {"some.test.TestCase.test_skipped": "This test was skipped intentionally"}, }, "success": 1, "expected_failures": 1, "tests_count": 3, "failures": 0, "unexpected_success": 0 }, "verification-uuid-2": { "status": "finished", "skipped": 1, "started_at": "2002-01-01T00:00:00", "finished_at": "2002-01-01T00:05:00", "tests_duration": 5, "run_args": { "pattern": "set=smoke", "xfail_list": {"some.test.TestCase.test_xfail": "Some reason why it is expected."}, "skip_list": {"some.test.TestCase.test_skipped": "This test was skipped intentionally"}, }, "success": 1, "expected_failures": 1, "tests_count": 3, "failures": 1, "unexpected_success": 0 } }, "tests": { "some.test.TestCase.test_foo[tag1,tag2]": { "name": "some.test.TestCase.test_foo", "tags": ["tag1","tag2"], "by_verification": { "verification-uuid-1": { "status": "success", "duration": "1.111" }, "verification-uuid-2": { "status": "success", "duration": "22.222" } } }, "some.test.TestCase.test_skipped[tag1]": { "name": "some.test.TestCase.test_skipped", "tags": ["tag1"], "by_verification": { "verification-uuid-1": { "status": "skipped", "duration": "0", "details": "Skipped until Bug: 666 is resolved." }, "verification-uuid-2": { "status": "skipped", "duration": "0", "details": "Skipped until Bug: 666 is resolved." } } }, "some.test.TestCase.test_xfail": { "name": "some.test.TestCase.test_xfail", "tags": [], "by_verification": { "verification-uuid-1": { "status": "xfail", "duration": "3", "details": "Some reason why it is expected.\n\n" "Traceback (most recent call last): \n" " File "fake.py", line 13, in <module>\n" " yyy()\n" " File "fake.py", line 11, in yyy\n" " xxx()\n" " File "fake.py", line 8, in xxx\n" " bar()\n" " File "fake.py", line 5, in bar\n" " foo()\n" " File "fake.py", line 2, in foo\n" " raise Exception()\n" "Exception" }, "verification-uuid-2": { "status": "xfail", "duration": "3", "details": "Some reason why it is expected.\n\n" "Traceback (most recent call last): \n" " File "fake.py", line 13, in <module>\n" " yyy()\n" " File "fake.py", line 11, in yyy\n" " xxx()\n" " File "fake.py", line 8, in xxx\n" " bar()\n" " File "fake.py", line 5, in bar\n" " foo()\n" " File "fake.py", line 2, in foo\n" " raise Exception()\n" "Exception" } } }, "some.test.TestCase.test_failed": { "name": "some.test.TestCase.test_failed", "tags": [], "by_verification": { "verification-uuid-2": { "status": "fail", "duration": "4", "details": "Some reason why it is expected.\n\n" "Traceback (most recent call last): \n" " File "fake.py", line 13, in <module>\n" " yyy()\n" " File "fake.py", line 11, in yyy\n" " xxx()\n" " File "fake.py", line 8, in xxx\n" " bar()\n" " File "fake.py", line 5, in bar\n" " foo()\n" " File "fake.py", line 2, in foo\n" " raise Exception()\n" "Exception" } } } } }
Namespace: default
junit-xml [Verification Reporter]¶
Generates verification report in JUnit-XML format.
An example of the report (All dates, numbers, names appearing in this example are fictitious. Any resemblance to real things is purely coincidental):
<testsuites>
  <!--Report is generated by Rally 0.8.0 at 2002-01-01T00:00:00-->
  <testsuite id="verification-uuid-1" tests="9" time="1.111" errors="0" failures="3" skipped="0" timestamp="2001-01-01T00:00:00">
    <testcase classname="some.test.TestCase" name="test_foo" time="8" timestamp="2001-01-01T00:01:00" />
    <testcase classname="some.test.TestCase" name="test_skipped" time="0" timestamp="2001-01-01T00:02:00">
      <skipped>Skipped until Bug: 666 is resolved.</skipped>
    </testcase>
    <testcase classname="some.test.TestCase" name="test_xfail" time="3" timestamp="2001-01-01T00:03:00">
      <!--It is an expected failure due to: something-->
      <!--Traceback: HEEELP-->
    </testcase>
    <testcase classname="some.test.TestCase" name="test_uxsuccess" time="3" timestamp="2001-01-01T00:04:00">
      <failure>
        It is an unexpected success. The test should fail due to: It should fail, I said!
      </failure>
    </testcase>
  </testsuite>
  <testsuite id="verification-uuid-2" tests="99" time="22.222" errors="0" failures="33" skipped="0" timestamp="2002-01-01T00:00:00">
    <testcase classname="some.test.TestCase" name="test_foo" time="8" timestamp="2001-02-01T00:01:00" />
    <testcase classname="some.test.TestCase" name="test_failed" time="8" timestamp="2001-02-01T00:02:00">
      <failure>HEEEEEEELP</failure>
    </testcase>
    <testcase classname="some.test.TestCase" name="test_skipped" time="0" timestamp="2001-02-01T00:03:00">
      <skipped>Skipped until Bug: 666 is resolved.</skipped>
    </testcase>
    <testcase classname="some.test.TestCase" name="test_xfail" time="4" timestamp="2001-02-01T00:04:00">
      <!--It is an expected failure due to: something-->
      <!--Traceback: HEEELP-->
    </testcase>
  </testsuite>
</testsuites>
Namespace: default
Verifier Managers¶
tempest [Verifier Manager]¶
Tempest verifier.
Description:
Quote from official documentation:
This is a set of integration tests to be run against a live OpenStack cluster. Tempest has batteries of tests for OpenStack API validation, Scenarios, and other specific tests useful in validating an OpenStack deployment.
Rally supports the features listed below:
- cloning Tempest: repository and version can be specified
- installation: system-wide with checking existence of required packages or in virtual environment
- configuration: options are discovered via OpenStack API, but you can override them if you need
- running: pre-creating all required resources (i.e. images, tenants, etc.), preparing arguments, launching Tempest, live-progress output
- results: all verifications are stored in the database; you can build reports and compare verifications whenever you want.
Appeared in Rally 0.8.0 (actually, it appeared a long time ago with the first revision of the Verification Component, but 0.8.0 is mentioned since it is the first release after the Verification Component redesign).
- Running arguments:
- concurrency: Number of processes to be used for launching tests. If the value is 0, the number of processes will be equal to the number of CPU cores.
- load_list: a list of tests to launch.
- pattern: a regular expression of tests to launch.
- set: Name of predefined set of tests. Known names: full, smoke, baremetal, clustering, compute, database, data_processing, identity, image, messaging, network, object_storage, orchestration, telemetry, volume, scenario
- skip_list: a list of tests to skip (actually, it is a dict where keys are names of tests, values are reasons).
- xfail_list: a list of tests that are expected to fail (actually, it is a dict where keys are names of tests, values are reasons).
- Installation arguments:
- system_wide: Whether or not to use the system-wide environment for verifier instead of a virtual environment. Defaults to False.
- source: Path or URL to the repo to clone verifier from. Defaults to https://git.openstack.org/openstack/tempest
- version: Branch, tag or commit ID to checkout before verifier installation. Defaults to 'master'.
Namespace: openstack
Module: rally.plugins.openstack.verification.tempest.manager