Rally Plugins Reference¶
Task Scenario Runners¶
constant [scenario runner]¶
Creates constant load executing a scenario a specified number of times.
This runner will place a constant load on the cloud under test by executing each scenario iteration without pausing between iterations up to the number of times specified in the scenario config.
The concurrency parameter of the scenario config controls the number of concurrent scenarios which execute during a single iteration in order to simulate the activities of multiple users placing load on the cloud under test.
Namespace: default
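As a sketch, the runner section of a task config for this plugin could look like the following (the times and concurrency values are illustrative, not recommendations):

```json
"runner": {
    "type": "constant",
    "times": 100,
    "concurrency": 10
}
```

This executes 100 iterations in total, with at most 10 running at any moment.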
constant_for_duration [scenario runner]¶
Creates constant load executing a scenario for an interval of time.
This runner will place a constant load on the cloud under test by executing each scenario iteration without pausing between iterations until a specified interval of time has elapsed.
The concurrency parameter of the scenario config controls the number of concurrent scenarios which execute during a single iteration in order to simulate the activities of multiple users placing load on the cloud under test.
Namespace: default
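A sketch of a runner section for this plugin, assuming a duration expressed in seconds (the values are illustrative):

```json
"runner": {
    "type": "constant_for_duration",
    "duration": 300,
    "concurrency": 10
}
```

Here iterations keep starting, 10 at a time, until 5 minutes have elapsed.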
serial [scenario runner]¶
Scenario runner that executes benchmark scenarios serially.
Unlike scenario runners that execute in parallel, the serial scenario runner executes scenarios one-by-one in the same python interpreter process as Rally. This allows you to benchmark your scenario without introducing any concurrent operations as well as interactively debug the scenario from the same command that you use to start Rally.
Namespace: default
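A minimal sketch of a runner section for this plugin (the times value is illustrative):

```json
"runner": {
    "type": "serial",
    "times": 20
}
```

Because iterations run one-by-one in the Rally process itself, concurrency is not configurable here.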
rps [scenario runner]¶
Scenario runner that launches scenario iterations at a specified frequency.
Every benchmark scenario iteration is executed at the specified frequency (runs per second) in a pool of processes. The scenario is launched a fixed number of times in total (specified in the config).
An example of an rps scenario is booting 1 VM per second. This execution type is therefore very helpful in understanding the maximum load that a certain cloud can handle.
Namespace: default
Module: rally.plugins.common.runners.rps
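A sketch of a runner section matching the "1 VM per second" example above (values are illustrative):

```json
"runner": {
    "type": "rps",
    "times": 60,
    "rps": 1
}
```

This launches one iteration per second until 60 iterations have been started in total.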
Task SLAs¶
performance_degradation [SLA]¶
Calculates performance degradation based on iteration time
This SLA plugin finds minimum and maximum duration of iterations completed without errors during Rally task execution. Assuming that minimum duration is 100%, it calculates performance degradation against maximum duration.
Namespace: default
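A hedged sketch of an SLA section using this plugin, assuming a max_degradation criterion expressed in percent (the value is illustrative):

```json
"sla": {
    "performance_degradation": {
        "max_degradation": 50
    }
}
```

With this setting, the task fails if the slowest successful iteration takes more than 150% of the fastest one's duration.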
outliers [SLA]¶
Limit the number of outliers (iterations that take too much time).
The outliers are detected automatically using the computation of the mean and standard deviation (std) of the data.
Namespace: default
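A hedged sketch of an SLA section for this plugin, assuming max (allowed outlier count), min_iterations, and sigmas (standard-deviation threshold) criteria; the values are illustrative:

```json
"sla": {
    "outliers": {
        "max": 1,
        "min_iterations": 10,
        "sigmas": 1
    }
}
```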
max_avg_duration_per_atomic [SLA]¶
Maximum average duration of one iteration's atomic actions in seconds.
Namespace: default
Module: rally.plugins.common.sla.max_average_duration_per_atomic
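A hedged sketch of an SLA section for this plugin, assuming the config maps atomic action names to a maximum average duration in seconds; the action name "nova.boot_server" and the value are purely illustrative:

```json
"sla": {
    "max_avg_duration_per_atomic": {
        "nova.boot_server": 10.0
    }
}
```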
Task Contexts¶
users [context]¶
Context class for generating temporary users/tenants for benchmarks.
Namespace: default
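This context appears throughout the examples below; a typical context section looks like:

```json
"context": {
    "users": {
        "tenants": 3,
        "users_per_tenant": 2
    }
}
```

This creates 3 temporary tenants with 2 users each (6 users in total), which are cleaned up after the task finishes.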
existing_users [context]¶
This context supports using existing users in Rally.
It uses information about the deployment to properly initialize context["users"] and context["tenants"], so there is little difference between using the "users" and "existing_users" contexts.
Namespace: default
Module: rally.plugins.openstack.context.keystone.existing_users
api_versions [context]¶
Context for specifying OpenStack clients versions and service types.
Some OpenStack services support several API versions. To distinguish the endpoints of each version, separate service types are registered in the Keystone service catalog.
Rally maintains a map of default service names to service types. But since a service type can be configured manually by an admin (via the Keystone API) without any relation to the service name, this map can be insufficient.
Also, the Keystone service catalog does not provide a map of service types to names (this is true for Keystone API < 3.3).
This context is designed for use with non-default service types and non-default API versions.
An example of specifying API version:
# In this example we will launch NovaKeypair.create_and_list_keypairs
# scenario on 2.2 api version.
{
    "NovaKeypair.create_and_list_keypairs": [
        {
            "args": {
                "key_type": "x509"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                },
                "api_versions": {
                    "nova": {
                        "version": 2.2
                    }
                }
            }
        }
    ]
}
An example of specifying API version along with service type:
# In this example we will launch CinderVolumes.create_and_attach_volume
# scenario on Cinder V2
{
    "CinderVolumes.create_and_attach_volume": [
        {
            "args": {
                "size": 10,
                "image": {
                    "name": "^cirros.*uec$"
                },
                "flavor": {
                    "name": "m1.tiny"
                },
                "create_volume_params": {
                    "availability_zone": "nova"
                }
            },
            "runner": {
                "type": "constant",
                "times": 5,
                "concurrency": 1
            },
            "context": {
                "users": {
                    "tenants": 2,
                    "users_per_tenant": 2
                },
                "api_versions": {
                    "cinder": {
                        "version": 2,
                        "service_type": "volumev2"
                    }
                }
            }
        }
    ]
}
It is also possible to use the service name as an identifier of the service endpoint, but an admin user is required (Keystone can return a map of service names to types, but this API call is permitted only for admins). An example:
# Similar to the previous example, but `service_name` argument is used
# instead of `service_type`
{
    "CinderVolumes.create_and_attach_volume": [
        {
            "args": {
                "size": 10,
                "image": {
                    "name": "^cirros.*uec$"
                },
                "flavor": {
                    "name": "m1.tiny"
                },
                "create_volume_params": {
                    "availability_zone": "nova"
                }
            },
            "runner": {
                "type": "constant",
                "times": 5,
                "concurrency": 1
            },
            "context": {
                "users": {
                    "tenants": 2,
                    "users_per_tenant": 2
                },
                "api_versions": {
                    "cinder": {
                        "version": 2,
                        "service_name": "cinderv2"
                    }
                }
            }
        }
    ]
}
Namespace: default
network [context]¶
Create networking resources.
This creates networks for all tenants and optionally creates other resources such as subnets and routers.
Namespace: default
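A hedged sketch of a context section using this plugin; the networks_per_tenant and subnets_per_network option names and values are assumptions for illustration:

```json
"context": {
    "network": {
        "networks_per_tenant": 2,
        "subnets_per_network": 1
    }
}
```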
existing_network [context]¶
This context supports using existing networks in Rally.
This context should be used on a deployment with existing users.
Namespace: default
Module: rally.plugins.openstack.context.network.existing_network
custom_image [context]¶
Base class for contexts that provide a customized image.
Every context class for a specific customization must implement the method _customize_image, which connects to the server via SSH and, for example, installs applications inside it.
This is used e.g. to install the benchmark application via SSH access.
This base context class provides a way to prepare an image with custom preinstalled applications. Basically, this code boots a VM, calls _customize_image, and then snapshots the VM disk, removing the VM afterwards. The image UUID is stored in user["custom_image"]["id"] and can be used afterwards by a scenario.
Namespace: default
image_command_customizer [context]¶
Context class for generating image customized by a command execution.
Run a command specified by configuration to prepare image.
Use this script e.g. to download and install something.
Namespace: default
Module: rally.plugins.openstack.context.vm.image_command_customizer
clusters [context]¶
Context class for generating temporary cluster for benchmarks.
Namespace: default
cluster_templates [context]¶
Context class for generating temporary cluster model for benchmarks.
Namespace: default
Module: rally.plugins.openstack.context.magnum.cluster_templates
manila_security_services [context]¶
This context creates 'security services' for Manila project.
Namespace: default
Module: rally.plugins.openstack.context.manila.manila_security_services
audit_templates [context]¶
Context class for adding temporary audit template for benchmarks.
Namespace: default
Module: rally.plugins.openstack.context.watcher.audit_templates
heat_dataplane [context]¶
Context class for creating stacks from a given template.
This context creates a stack from the given template for each tenant and adds the following details to the context:
- id of the stack
- template file contents
- files dictionary
- stack parameters
The Heat template should define a "gate" node that interacts with Rally via SSH and with workload nodes via any protocol. To make this possible, the Heat template should accept the following parameters:
- network_id: id of the public network
- router_id: id of the external router that connects the "gate" node
- key_name: name of the Nova SSH keypair to use for the "gate" node
Namespace: default
stacks [context]¶
Context class for creating temporary stacks with resources.
The stack generator creates an arbitrary number of stacks for each tenant before the test scenarios run. In addition, it allows you to define the number of resources (namely OS::Heat::RandomString) created inside each stack. After test execution the stacks are automatically removed from Heat.
Namespace: default
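A hedged sketch of a context section for this plugin; the stacks_per_tenant and resources_per_stack option names and values are assumptions for illustration:

```json
"context": {
    "stacks": {
        "stacks_per_tenant": 2,
        "resources_per_stack": 10
    }
}
```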
servers [context]¶
Context class for adding temporary servers for benchmarks.
Servers are added for each tenant.
Namespace: default
sahara_output_data_sources [context]¶
Context class for setting up Output Data Sources for an EDP job.
Namespace: default
Module: rally.plugins.openstack.context.sahara.sahara_output_data_sources
sahara_input_data_sources [context]¶
Context class for setting up Input Data Sources for an EDP job.
Namespace: default
Module: rally.plugins.openstack.context.sahara.sahara_input_data_sources
sahara_cluster [context]¶
Context class for setting up the Cluster for an EDP job.
Namespace: default
Module: rally.plugins.openstack.context.sahara.sahara_cluster
sahara_job_binaries [context]¶
Context class for setting up Job Binaries for an EDP job.
Namespace: default
Module: rally.plugins.openstack.context.sahara.sahara_job_binaries
ec2_servers [context]¶
Context class for adding temporary servers for benchmarks.
Servers are added for each tenant.
Namespace: default
tempest [context]¶
Namespace: default
Module: rally.plugins.openstack.context.not_for_production.tempest
murano_environments [context]¶
Context class for creating murano environments.
Namespace: default
Module: rally.plugins.openstack.context.murano.murano_environments
murano_packages [context]¶
Context class for uploading applications for murano.
Namespace: default
Module: rally.plugins.openstack.context.murano.murano_packages
ceilometer [context]¶
Context for creating samples and collecting resources for benchmarks.
Namespace: default
Task Scenarios¶
GlanceImages.create_and_list_image [scenario]¶
Create an image and then list all images.
Measure the "glance image-list" command performance.
If you have only 1 user in your context, you will add 1 image on every iteration. So you will have more and more images and will be able to measure the performance of the "glance image-list" command depending on the number of images owned by users.
Namespace: default
Parameters:
- container_format: container format of image. Acceptable formats: ami, ari, aki, bare, and ovf
- image_location: image file location
- disk_format: disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso
- kwargs: optional parameters to create image
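A sketch of a full task entry for this scenario, following the config format shown earlier in this reference; the image_location URL and all values are illustrative assumptions:

```json
{
    "GlanceImages.create_and_list_image": [
        {
            "args": {
                "image_location": "http://example.com/cirros-disk.img",
                "container_format": "bare",
                "disk_format": "qcow2"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 2,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}
```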
GlanceImages.list_images [scenario]¶
List all images.
This simple scenario tests the glance image-list command by listing all the images.
For example, if we have 2 users in the context, each with 2 images uploaded for them, we can test the performance of the glance image-list command in this case.
Namespace: default
GlanceImages.create_and_delete_image [scenario]¶
Create and then delete an image.
Namespace: default
Parameters:
- container_format: container format of image. Acceptable formats: ami, ari, aki, bare, and ovf
- image_location: image file location
- disk_format: disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso
- kwargs: optional parameters to create image
GlanceImages.create_image_and_boot_instances [scenario]¶
Create an image and boot several instances from it.
Namespace: default
Parameters:
- container_format: container format of image. Acceptable formats: ami, ari, aki, bare, and ovf
- image_location: image file location
- disk_format: disk format of image. Acceptable formats: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso
- flavor: Nova flavor to be used to launch an instance
- number_instances: number of Nova servers to boot
- kwargs: optional parameters to create server
CinderVolumes.create_and_list_volume [scenario]¶
Create a volume and list all volumes.
Measure the "cinder volume-list" command performance.
If you have only 1 user in your context, you will add 1 volume on every iteration. So you will have more and more volumes and will be able to measure the performance of the "cinder volume-list" command depending on the number of volumes owned by users.
Namespace: default
Parameters:
- size: volume size (integer, in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as
- detailed: determines whether the volume listing should contain detailed information about all of them
- image: image to be used to create volume
- kwargs: optional args to create a volume
CinderVolumes.list_volumes [scenario]¶
List all volumes.
This simple scenario tests the cinder list command by listing all the volumes.
Namespace: default
Parameters:
- detailed: True if detailed information about volumes should be listed
CinderVolumes.create_and_update_volume [scenario]¶
Create a volume and update its name and description.
Namespace: default
Parameters:
- size: volume size (integer, in GB)
- image: image to be used to create volume
- create_volume_kwargs: dict, to be used to create volume
- update_volume_kwargs: dict, to be used to update volume
CinderVolumes.create_and_delete_volume [scenario]¶
Create and then delete a volume.
Good for testing the maximal bandwidth of the cloud. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: default
Parameters:
- size: volume size (integer, in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as
- image: image to be used to create volume
- min_sleep: minimum sleep time between volume creation and deletion (in seconds)
- max_sleep: maximum sleep time between volume creation and deletion (in seconds)
- kwargs: optional args to create a volume
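A sketch of a task entry for this scenario, showing the dictionary form of size and the sleep bounds described above; all values are illustrative assumptions:

```json
{
    "CinderVolumes.create_and_delete_volume": [
        {
            "args": {
                "size": {
                    "min": 1,
                    "max": 5
                },
                "min_sleep": 1,
                "max_sleep": 10
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 2,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}
```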
CinderVolumes.create_volume [scenario]¶
Create a volume.
A good test to check how the number of active volumes influences the performance of creating a new one.
Namespace: default
Parameters:
- size: volume size (integer, in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as
- image: image to be used to create volume
- kwargs: optional args to create a volume
CinderVolumes.modify_volume_metadata [scenario]¶
Modify a volume's metadata.
This requires a volume to be created with the volumes context. Additionally, sets * set_size must be greater than or equal to deletes * delete_size.
Namespace: default
Parameters:
- sets: how many set_metadata operations to perform
- set_size: number of metadata keys to set in each set_metadata operation
- deletes: how many delete_metadata operations to perform
- delete_size: number of metadata keys to delete in each delete_metadata operation
CinderVolumes.create_and_extend_volume [scenario]¶
Create and extend a volume and then delete it.
Namespace: default
Parameters:
- size: volume size (in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as
- new_size: volume size (in GB) to extend to, or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as. Notice: the new size should be bigger than the original volume size
- min_sleep: minimum sleep time between volume extension and deletion (in seconds)
- max_sleep: maximum sleep time between volume extension and deletion (in seconds)
- kwargs: optional args to extend the volume
CinderVolumes.create_from_volume_and_delete_volume [scenario]¶
Create volume from volume and then delete it.
Scenario for testing volume cloning. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: default
Parameters:
- size: volume size (in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as. Should be equal to or bigger than the source volume size
- min_sleep: minimum sleep time between volume creation and deletion (in seconds)
- max_sleep: maximum sleep time between volume creation and deletion (in seconds)
- kwargs: optional args to create a volume
CinderVolumes.create_and_delete_snapshot [scenario]¶
Create and then delete a volume-snapshot.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between snapshot creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: default
Parameters:
- force: when set to True, allows snapshot of a volume when the volume is attached to an instance
- min_sleep: minimum sleep time between snapshot creation and deletion (in seconds)
- max_sleep: maximum sleep time between snapshot creation and deletion (in seconds)
- kwargs: optional args to create a snapshot
CinderVolumes.create_and_attach_volume [scenario]¶
Create a VM and attach a volume to it.
Simple test to create a VM and attach a volume, then detach the volume and delete volume/VM.
Namespace: default
Parameters:
- size: volume size (integer, in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as
- image: Glance image name to use for the VM
- flavor: VM flavor name
- create_volume_params: optional arguments for volume creation
- create_vm_params: optional arguments for VM creation
- kwargs: (deprecated) optional arguments for VM creation
CinderVolumes.create_snapshot_and_attach_volume [scenario]¶
Create volume, snapshot and attach/detach volume.
This scenario is based on the standalone qaStressTest.py (https://github.com/WaltHP/cinder-stress).
Namespace: default
Parameters:
- volume_type: whether or not to specify volume type when creating volumes
- size: volume size - dictionary, containing two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as. Default values: {"min": 1, "max": 5}
- kwargs: optional parameters used during volume snapshot creation
CinderVolumes.create_nested_snapshots_and_attach_volume [scenario]¶
Create a volume from a snapshot and attach/detach the volume.
This scenario creates a volume, creates its snapshot, attaches the volume, then creates a new volume from the existing snapshot, and so on, up to the defined nesting level; after that, it detaches and deletes them all. volume->snapshot->volume->snapshot->volume ...
Namespace: default
Parameters:
- size: volume size - dictionary, containing two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as. Default values: {"min": 1, "max": 5}
- nested_level: amount of nested levels
- create_volume_kwargs: optional args to create a volume
- create_snapshot_kwargs: optional args to create a snapshot
- kwargs: optional parameters used during volume snapshot creation
CinderVolumes.create_and_list_snapshots [scenario]¶
Create and then list a volume-snapshot.
Namespace: default
Parameters:
- force: when set to True, allows snapshot of a volume when the volume is attached to an instance
- detailed: True if detailed information about snapshots should be listed
- kwargs: optional args to create a snapshot
CinderVolumes.create_and_upload_volume_to_image [scenario]¶
Create and upload a volume to image.
Namespace: default
Parameters:
- size: volume size (integer, in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as
- image: image to be used to create volume
- force: when set to True, a volume that is attached to an instance can be uploaded to an image
- container_format: image container format
- disk_format: disk format for image
- do_delete: deletes image and volume after uploading if True
- kwargs: optional args to create a volume
CinderVolumes.create_volume_backup [scenario]¶
Create a volume backup.
Namespace: default
Parameters:
- size: volume size in GB
- do_delete: if True, a volume and a volume backup will be deleted after creation
- create_volume_kwargs: optional args to create a volume
- create_backup_kwargs: optional args to create a volume backup
CinderVolumes.create_and_restore_volume_backup [scenario]¶
Create and restore a volume backup.
Namespace: default
Parameters:
- size: volume size in GB
- do_delete: if True, the volume and the volume backup will be deleted after creation
- create_volume_kwargs: optional args to create a volume
- create_backup_kwargs: optional args to create a volume backup
CinderVolumes.create_and_list_volume_backups [scenario]¶
Create and then list a volume backup.
Namespace: default
Parameters:
- size: volume size in GB
- detailed: True if detailed information about backups should be listed
- do_delete: if True, a volume backup will be deleted
- create_volume_kwargs: optional args to create a volume
- create_backup_kwargs: optional args to create a volume backup
CinderVolumes.create_volume_and_clone [scenario]¶
Create a volume, then clone it to another volume.
This creates a source volume, then clones it to another volume, then clones the new volume to the next volume, and so on:
- create source volume (from image)
- clone source volume to volume1
- clone volume1 to volume2
- clone volume2 to volume3
- ...
Namespace: default
Parameters:
- size: volume size (integer, in GB) or dictionary, which must contain two values: min - minimum size volumes will be created as; max - maximum size volumes will be created as
- image: image to be used to create initial volume
- nested_level: amount of nested levels
- kwargs: optional args to create volumes
CinderVolumes.create_volume_from_snapshot [scenario]¶
Create a volume-snapshot, then create a volume from this snapshot.
Namespace: default
Parameters:
- do_delete: if True, a snapshot and a volume will be deleted after creation
- create_snapshot_kwargs: optional args to create a snapshot
- kwargs: optional args to create a volume
NeutronSecurityGroup.create_and_list_security_groups [scenario]¶
Create and list Neutron security-groups.
Measure the "neutron security-group-create" and "neutron security-group-list" command performance.
Namespace: default
Parameters:
- security_group_create_args: dict, POST /v2.0/security-groups request options
Module: rally.plugins.openstack.scenarios.neutron.security_groups
NeutronSecurityGroup.create_and_delete_security_groups [scenario]¶
Create and delete Neutron security-groups.
Measure the "neutron security-group-create" and "neutron security-group-delete" command performance.
Namespace: default
Parameters:
- security_group_create_args: dict, POST /v2.0/security-groups request options
Module: rally.plugins.openstack.scenarios.neutron.security_groups
NeutronSecurityGroup.create_and_update_security_groups [scenario]¶
Create and update Neutron security-groups.
Measure the "neutron security-group-create" and "neutron security-group-update" command performance.
Namespace: default
Parameters:
- security_group_create_args: dict, POST /v2.0/security-groups request options
- security_group_update_args: dict, PUT /v2.0/security-groups update options
Module: rally.plugins.openstack.scenarios.neutron.security_groups
NeutronNetworks.create_and_list_networks [scenario]¶
Create a network and then list all networks.
Measure the "neutron net-list" command performance.
If you have only 1 user in your context, you will add 1 network on every iteration. So you will have more and more networks and will be able to measure the performance of the "neutron net-list" command depending on the number of networks owned by users.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options
NeutronNetworks.create_and_update_networks [scenario]¶
Create and update a network.
Measure the "neutron net-create and net-update" command performance.
Namespace: default
Parameters:
- network_update_args: dict, PUT /v2.0/networks update request
- network_create_args: dict, POST /v2.0/networks request options
NeutronNetworks.create_and_delete_networks [scenario]¶
Create and delete a network.
Measure the "neutron net-create" and "net-delete" command performance.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options
NeutronNetworks.create_and_list_subnets [scenario]¶
Create a given number of subnets and list all subnets.
The scenario creates a network, a given number of subnets and then lists subnets.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options. Deprecated
- subnet_create_args: dict, POST /v2.0/subnets request options
- subnet_cidr_start: str, start value for subnets CIDR
- subnets_per_network: int, number of subnets for one network
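A sketch of a task entry for this scenario, using the parameters listed above; all values (including the starting CIDR) are illustrative assumptions:

```json
{
    "NeutronNetworks.create_and_list_subnets": [
        {
            "args": {
                "subnets_per_network": 2,
                "subnet_cidr_start": "1.1.0.0/30"
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 2,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}
```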
NeutronNetworks.create_and_update_subnets [scenario]¶
Create and update a subnet.
The scenario creates a network, a given number of subnets and then updates the subnet. This scenario measures the "neutron subnet-update" command performance.
Namespace: default
Parameters:
- subnet_update_args: dict, PUT /v2.0/subnets update options
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- subnet_create_args: dict, POST /v2.0/subnets request options
- subnet_cidr_start: str, start value for subnets CIDR
- subnets_per_network: int, number of subnets for one network
NeutronNetworks.create_and_delete_subnets [scenario]¶
Create and delete a given number of subnets.
The scenario creates a network, a given number of subnets and then deletes subnets.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- subnet_create_args: dict, POST /v2.0/subnets request options
- subnet_cidr_start: str, start value for subnets CIDR
- subnets_per_network: int, number of subnets for one network
NeutronNetworks.create_and_list_routers [scenario]¶
Create a given number of routers and list all routers.
Create a network, a given number of subnets and routers and then list all routers.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- subnet_create_args: dict, POST /v2.0/subnets request options
- subnet_cidr_start: str, start value for subnets CIDR
- subnets_per_network: int, number of subnets for one network
- router_create_args: dict, POST /v2.0/routers request options
NeutronNetworks.create_and_update_routers [scenario]¶
Create and update a given number of routers.
Create a network, a given number of subnets and routers, and then update all routers.
Namespace: default
Parameters:
- router_update_args: dict, PUT /v2.0/routers update options
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- subnet_create_args: dict, POST /v2.0/subnets request options
- subnet_cidr_start: str, start value for subnets CIDR
- subnets_per_network: int, number of subnets for one network
- router_create_args: dict, POST /v2.0/routers request options
NeutronNetworks.create_and_delete_routers [scenario]¶
Create and delete a given number of routers.
Create a network, a given number of subnets and routers and then delete all routers.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- subnet_create_args: dict, POST /v2.0/subnets request options
- subnet_cidr_start: str, start value for subnets CIDR
- subnets_per_network: int, number of subnets for one network
- router_create_args: dict, POST /v2.0/routers request options
NeutronNetworks.create_and_list_ports [scenario]¶
Create a given number of ports and list all ports.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- port_create_args: dict, POST /v2.0/ports request options
- ports_per_network: int, number of ports for one network
NeutronNetworks.create_and_update_ports [scenario]¶
Create and update a given number of ports.
Measure the "neutron port-create" and "neutron port-update" commands performance.
Namespace: default
Parameters:
- port_update_args: dict, PUT /v2.0/ports update request options
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- port_create_args: dict, POST /v2.0/ports request options
- ports_per_network: int, number of ports for one network
NeutronNetworks.create_and_delete_ports [scenario]¶
Create and delete a port.
Measure the "neutron port-create" and "neutron port-delete" commands performance.
Namespace: default
Parameters:
- network_create_args: dict, POST /v2.0/networks request options. Deprecated.
- port_create_args: dict, POST /v2.0/ports request options
- ports_per_network: int, number of ports for one network
NeutronNetworks.create_and_list_floating_ips [scenario]¶
Create and list floating IPs.
Measure the "neutron floating-ip-create" and "neutron floating-ip-list" commands performance.
Namespace: default
Parameters:
- floating_network: str, external network for floating IP creation
- floating_ip_args: dict, POST /floatingips request options
NeutronNetworks.create_and_delete_floating_ips [scenario]¶
Create and delete floating IPs.
Measure the "neutron floating-ip-create" and "neutron floating-ip-delete" commands performance.
Namespace: default
Parameters:
- floating_network: str, external network for floating IP creation
- floating_ip_args: dict, POST /floatingips request options
NeutronLoadbalancerV1.create_and_list_pools [scenario]¶
Create a pool(v1) and then list pools(v1).
Measure the "neutron lb-pool-list" command performance. The scenario creates a pool for every subnet and then lists pools.
Namespace: default
Parameters:
- pool_create_args: dict, POST /lb/pools request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_delete_pools [scenario]¶
Create pools(v1) and delete pools(v1).
Measure the "neutron lb-pool-create" and "neutron lb-pool-delete" command performance. The scenario creates a pool for every subnet and then deletes those pools.
Namespace: default
Parameters:
- pool_create_args: dict, POST /lb/pools request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_update_pools [scenario]¶
Create pools(v1) and update pools(v1).
Measure the "neutron lb-pool-create" and "neutron lb-pool-update" command performance. The scenario creates a pool for every subnet and then updates those pools.
Namespace: default
Parameters:
- pool_create_args: dict, POST /lb/pools request options
- pool_update_args: dict, POST /lb/pools update options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_list_vips [scenario]¶
Create a vip(v1) and then list vips(v1).
Measure the "neutron lb-vip-create" and "neutron lb-vip-list" command performance. The scenario creates a vip for every pool created and then lists vips.
Namespace: default
Parameters:
- vip_create_args: dict, POST /lb/vips request options
- pool_create_args: dict, POST /lb/pools request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_delete_vips [scenario]¶
Create a vip(v1) and then delete vips(v1).
Measure the "neutron lb-vip-create" and "neutron lb-vip-delete" command performance. The scenario creates a vip for each pool and then deletes those vips.
Namespace: default
Parameters:
- pool_create_args: dict, POST /lb/pools request options
- vip_create_args: dict, POST /lb/vips request options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_update_vips [scenario]¶
Create vips(v1) and update vips(v1).
Measure the "neutron lb-vip-create" and "neutron lb-vip-update" command performance. The scenario creates a vip for every pool and then updates those vips.
Namespace: default
Parameters:
- pool_create_args: dict, POST /lb/pools request options
- vip_create_args: dict, POST /lb/vips request options
- vip_update_args: dict, POST /lb/vips update options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_list_healthmonitors [scenario]¶
Create healthmonitors(v1) and list healthmonitors(v1).
Measure the "neutron lb-healthmonitor-list" command performance. This scenario creates healthmonitors and lists them.
Namespace: default
Parameters:
- healthmonitor_create_args: dict, POST /lb/healthmonitors request
options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_delete_healthmonitors [scenario]¶
Create a healthmonitor(v1) and delete healthmonitors(v1).
Measure the "neutron lb-healthmonitor-create" and "neutron lb-healthmonitor-delete" command performance. The scenario creates healthmonitors and deletes those healthmonitors.
Namespace: default
Parameters:
- healthmonitor_create_args: dict, POST /lb/healthmonitors request
options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
NeutronLoadbalancerV1.create_and_update_healthmonitors [scenario]¶
Create a healthmonitor(v1) and update healthmonitors(v1).
Measure the "neutron lb-healthmonitor-create" and "neutron lb-healthmonitor-update" command performance. The scenario creates healthmonitors and then updates them.
Namespace: default
Parameters:
- healthmonitor_create_args: dict, POST /lb/healthmonitors request
options
- healthmonitor_update_args: dict, POST /lb/healthmonitors update
options
Module: rally.plugins.openstack.scenarios.neutron.loadbalancer_v1
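To make the argument layout concrete, here is a sketch of a task entry for the health monitor update scenario above, written as a Python dict in Rally's task format. The scenario and parameter names come from this reference; the monitor option values and runner settings are illustrative assumptions, not recommended values.

```python
# Hypothetical Rally task entry for the scenario above.
# Parameter names (healthmonitor_create_args/healthmonitor_update_args)
# are from the reference; the option values below are assumptions.
task = {
    "NeutronLoadbalancerV1.create_and_update_healthmonitors": [{
        "args": {
            "healthmonitor_create_args": {
                "type": "TCP",        # assumed monitor type
                "delay": 5,           # assumed probe interval (seconds)
                "timeout": 2,
                "max_retries": 3,
            },
            "healthmonitor_update_args": {
                "delay": 10,          # assumed updated probe interval
            },
        },
        "runner": {
            "type": "constant",       # constant runner: fixed iteration count
            "times": 5,
            "concurrency": 1,
        },
    }]
}
```

The same shape (scenario name mapping to a list of `{"args": ..., "runner": ...}` entries) applies to the other scenarios in this section.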
Quotas.nova_update [scenario]¶
Update quotas for Nova.
Namespace: default
Parameters:
- max_quota: Max value to be updated for quota.
Quotas.nova_update_and_delete [scenario]¶
Update and delete quotas for Nova.
Namespace: default
Parameters:
- max_quota: Max value to be updated for quota.
Quotas.cinder_update [scenario]¶
Update quotas for Cinder.
Namespace: default
Parameters:
- max_quota: Max value to be updated for quota.
Quotas.cinder_update_and_delete [scenario]¶
Update and Delete quotas for Cinder.
Namespace: default
Parameters:
- max_quota: Max value to be updated for quota.
Quotas.neutron_update [scenario]¶
Update quotas for neutron.
Namespace: default
Parameters:
- max_quota: Max value to be updated for quota.
FuelNodes.add_and_remove_node [scenario]¶
Add node to environment and remove.
Namespace: default
Parameters:
- node_roles: list of roles to assign to the node when adding it to the
environment
FuelEnvironments.create_and_delete_environment [scenario]¶
Create and delete Fuel environments.
Namespace: default
Parameters:
- release_id: release id (default 1)
- network_provider: network provider (default 'neutron')
- deployment_mode: deployment mode (default 'ha_compact')
- net_segment_type: net segment type (default 'vlan')
- delete_retries: retries count on delete operations (default 5)
FuelEnvironments.create_and_list_environments [scenario]¶
Create and list Fuel environments.
Namespace: default
Parameters:
- release_id: release id (default 1)
- network_provider: network provider (default 'neutron')
- deployment_mode: deployment mode (default 'ha_compact')
- net_segment_type: net segment type (default 'vlan')
KeystoneBasic.create_user [scenario]¶
Create a keystone user with random name.
Namespace: default
Parameters:
- kwargs: Other optional parameters to create users like
"tenant_id", "enabled".
KeystoneBasic.create_delete_user [scenario]¶
Create a keystone user with random name and then delete it.
Namespace: default
Parameters:
- kwargs: Other optional parameters to create users like
"tenant_id", "enabled".
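As a sketch, a task entry for this scenario might look like the following. Since the scenario accepts free-form kwargs, the "enabled" argument below is forwarded to user creation; the runner settings are illustrative assumptions.

```python
# Hypothetical task entry: "enabled" is forwarded to keystone user
# creation via the scenario's kwargs; runner values are assumptions.
task = {
    "KeystoneBasic.create_delete_user": [{
        "args": {
            "enabled": True,          # optional user-creation kwarg
        },
        "runner": {
            "type": "constant",
            "times": 20,
            "concurrency": 4,         # simulates 4 concurrent users
        },
    }]
}
```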
KeystoneBasic.create_user_set_enabled_and_delete [scenario]¶
Create a keystone user, enable or disable it, and delete it.
Namespace: default
Parameters:
- enabled: Initial state of user 'enabled' flag. The user
will be created with 'enabled' set to this value, and then it will be toggled.
- kwargs: Other optional parameters to create user.
KeystoneBasic.create_tenant [scenario]¶
Create a keystone tenant with random name.
Namespace: default
Parameters:
- kwargs: Other optional parameters
KeystoneBasic.authenticate_user_and_validate_token [scenario]¶
Authenticate and validate a keystone token.
Namespace: default
KeystoneBasic.create_tenant_with_users [scenario]¶
Create a keystone tenant and several users belonging to it.
Namespace: default
Parameters:
- users_per_tenant: number of users to create for the tenant
- kwargs: Other optional parameters for tenant creation
Returns: keystone tenant instance
KeystoneBasic.create_and_list_users [scenario]¶
Create a keystone user with random name and list all users.
Namespace: default
Parameters:
- kwargs: Other optional parameters to create users like
"tenant_id", "enabled".
KeystoneBasic.create_and_list_tenants [scenario]¶
Create a keystone tenant with random name and list all tenants.
Namespace: default
Parameters:
- kwargs: Other optional parameters
KeystoneBasic.add_and_remove_user_role [scenario]¶
Create a user role, add it to a user and then disassociate it.
Namespace: default
KeystoneBasic.create_and_delete_role [scenario]¶
Create a user role and delete it.
Namespace: default
KeystoneBasic.create_add_and_list_user_roles [scenario]¶
Create user role, add it and list user roles for given user.
Namespace: default
KeystoneBasic.get_entities [scenario]¶
Get instances of a tenant, user, role and service by their IDs.
An ephemeral tenant, user, and role are each created. By default, fetches the 'keystone' service. This can be overridden (for instance, to get the 'Identity Service' service on older OpenStack), or None can be passed explicitly to service_name to create a new service and then query it by ID.
Namespace: default
Parameters:
- service_name: The name of the service to get by ID; or
None, to create an ephemeral service and get it by ID.
KeystoneBasic.create_and_delete_service [scenario]¶
Create and delete service.
Namespace: default
Parameters:
- service_type: type of the service
- description: description of the service
KeystoneBasic.create_update_and_delete_tenant [scenario]¶
Create, update and delete tenant.
Namespace: default
Parameters:
- kwargs: Other optional parameters for tenant creation
KeystoneBasic.create_user_update_password [scenario]¶
Create user and update password for that user.
Namespace: default
KeystoneBasic.create_and_list_services [scenario]¶
Create and list services.
Namespace: default
Parameters:
- service_type: type of the service
- description: description of the service
KeystoneBasic.create_and_list_ec2credentials [scenario]¶
Create and list all keystone ec2-credentials.
Namespace: default
KeystoneBasic.create_and_delete_ec2credential [scenario]¶
Create and delete keystone ec2-credential.
Namespace: default
CeilometerEvents.create_user_and_list_events [scenario]¶
Create user and fetch all events.
This scenario creates a user to store a new event and fetches the list of all events using GET /v2/events.
Namespace: default
CeilometerEvents.create_user_and_list_event_types [scenario]¶
Create user and fetch all event types.
This scenario creates a user to store a new event and fetches the list of all event types using GET /v2/event_types.
Namespace: default
CeilometerEvents.create_user_and_get_event [scenario]¶
Create user and get event.
This scenario creates a user to store a new event and fetches one event using GET /v2/events/<message_id>.
Namespace: default
CeilometerTraits.create_user_and_list_traits [scenario]¶
Create user and fetch all event traits.
This scenario creates a user to store a new event and fetches the list of all traits for a certain event type and trait name using GET /v2/event_types/<event_type>/traits/<trait_name>.
Namespace: default
CeilometerTraits.create_user_and_list_trait_descriptions [scenario]¶
Create user and fetch all trait descriptions.
This scenario creates a user to store a new event and fetches the list of all traits for a certain event type using GET /v2/event_types/<event_type>/traits.
Namespace: default
IronicNodes.create_and_list_node [scenario]¶
Create and list nodes.
Namespace: default
Parameters:
- associated: Optional. Either a Boolean or a string
representation of a Boolean that indicates whether to return a list of associated (True or "True") or unassociated (False or "False") nodes.
- maintenance: Optional. Either a Boolean or a string
representation of a Boolean that indicates whether to return nodes in maintenance mode (True or "True"), or not in maintenance mode (False or "False").
- marker: Optional, the UUID of a node, e.g. the last
node from a previous result set. Return the next result set.
- limit: The maximum number of results to return per
request, if:
- limit > 0, the maximum number of nodes to return.
- limit == 0, return the entire list of nodes.
- limit param is NOT specified (None), the number of items returned respects the maximum imposed by the Ironic API (see Ironic's api.max_limit option).
- detail: Optional, boolean whether to return detailed
information about nodes.
- sort_key: Optional, field used for sorting.
- sort_dir: Optional, direction of sorting, either 'asc' (the
default) or 'desc'.
- kwargs: Optional additional arguments for node creation
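The limit semantics above can be made concrete with a sketch: limit == 0 asks for the entire node list, while a positive limit caps the result set. The argument values below are illustrative assumptions, and the serial runner executes iterations one by one.

```python
# Hypothetical task entry exercising the listing filters documented above.
task = {
    "IronicNodes.create_and_list_node": [{
        "args": {
            "associated": "False",   # string form of a Boolean is accepted
            "maintenance": False,
            "limit": 0,              # 0 => return the entire list of nodes
            "detail": True,
            "sort_key": "created_at",  # assumed sort field
            "sort_dir": "asc",
        },
        "runner": {
            "type": "serial",        # one-by-one, in the Rally process
            "times": 3,
        },
    }]
}
```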
IronicNodes.create_and_delete_node [scenario]¶
Create and delete node.
Namespace: default
Parameters:
- kwargs: Optional additional arguments for node creation
TempestScenario.single_test [scenario]¶
Launch a single Tempest test by its name.
Namespace: default
Parameters:
- test_name: name of tempest scenario for launching
- log_file: name of file for junitxml results
- tempest_conf: User specified tempest.conf location
TempestScenario.all [scenario]¶
Launch all discovered Tempest tests by their names.
Namespace: default
Parameters:
- log_file: name of file for junitxml results
- tempest_conf: User specified tempest.conf location
TempestScenario.set [scenario]¶
Launch all Tempest tests from a given set.
Namespace: default
Parameters:
- set_name: set name of tempest scenarios for launching
- log_file: name of file for junitxml results
- tempest_conf: User specified tempest.conf location
TempestScenario.list_of_tests [scenario]¶
Launch all Tempest tests from a given list of their names.
Namespace: default
Parameters:
- test_names: list of tempest scenarios for launching
- log_file: name of file for junitxml results
- tempest_conf: User specified tempest.conf location
TempestScenario.specific_regex [scenario]¶
Launch Tempest tests whose names match a given regular expression.
Namespace: default
Parameters:
- regex: regexp to match Tempest test names against
- log_file: name of file for junitxml results
- tempest_conf: User specified tempest.conf location
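A sketch of a task entry for the regex-based Tempest launcher above; the regex and log file path are illustrative assumptions.

```python
# Hypothetical task entry: the regex and the junitxml path are assumptions.
task = {
    "TempestScenario.specific_regex": [{
        "args": {
            "regex": "tempest.api.compute.servers",  # assumed name prefix
            "log_file": "/tmp/tempest-results.xml",  # assumed results file
        },
        "runner": {
            "type": "serial",   # run the Tempest set once, serially
            "times": 1,
        },
    }]
}
```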
VMTasks.boot_runcommand_delete [scenario]¶
Boot a server, run script specified in command and delete server.
Namespace: default
Parameters:
- image: glance image name to use for the vm
- flavor: VM flavor name
- username: ssh username on server, str
- password: password for SSH authentication
- command: Command-specifying dictionary that either specifies a remote
command path via 'remote_path' (which can be uploaded from a local file specified by 'local_path'), an inline script via 'script_inline', or a local script file path using 'script_file'. Both 'script_file' and 'local_path' are checked for accessibility by the 'file_exists' validator code.
Both 'script_inline' and 'script_file' require an 'interpreter' value specifying the interpreter the script should be run with.
Note that both 'interpreter' and 'remote_path' can be arrays prefixed with environment variables and suffixed with args for the 'interpreter' command. The last component of 'remote_path' must be a path to a command to execute (which is also the upload destination if a 'local_path' is given). Uploading an interpreter is possible but requires that the 'remote_path' and 'interpreter' paths match.
Examples:

# Run a 'local_script.pl' file sending it to a remote Perl interpreter
command = {
    "script_file": "local_script.pl",
    "interpreter": "/usr/bin/perl"
}

# Run an inline script sending it to a remote interpreter
command = {
    "script_inline": "echo 'Hello, World!'",
    "interpreter": "/bin/sh"
}

# Run a remote command
command = {
    "remote_path": "/bin/false"
}

# Copy a local command and run it
command = {
    "remote_path": "/usr/local/bin/fio",
    "local_path": "/home/foobar/myfiodir/bin/fio"
}

# Copy a local command and run it with an environment variable
command = {
    "remote_path": ["HOME=/root", "/usr/local/bin/fio"],
    "local_path": "/home/foobar/myfiodir/bin/fio"
}

# Run an inline script sending it to a remote interpreter
command = {
    "script_inline": "echo \"Hello, ${NAME:-World}\"",
    "interpreter": ["NAME=Earth", "/bin/sh"]
}

# Run an inline script sending it to an uploaded remote interpreter
command = {
    "script_inline": "echo \"Hello, ${NAME:-World}\"",
    "interpreter": ["NAME=Earth", "/tmp/sh"],
    "remote_path": "/tmp/sh",
    "local_path": "/home/user/work/cve/sh-1.0/bin/sh"
}
- volume_args: volume args for booting server from volume
- floating_network: external network name, for floating ip
- port: ssh port for SSH connection
- use_floating_ip: bool, floating or fixed IP for SSH connection
- force_delete: whether to use force_delete for servers
- wait_for_ping: whether to check connectivity on server creation
- **kwargs: extra arguments for booting the server
- max_log_length: The number of tail nova console-log lines user
would like to retrieve
Returns: dictionary with keys 'data' (dict, JSON output from the script) and 'errors' (str, raw data from the script's stderr stream)
VMTasks.boot_runcommand_delete_custom_image [scenario]¶
Boot a server from a custom image, run a command that outputs JSON.
Example Script in rally-jobs/extra/install_benchmark.sh
Namespace: default
VMTasks.runcommand_heat [scenario]¶
Run workload on stack deployed by heat.
The workload can be specified either as a file or as a resource. It should also contain a "username" key.
The given file will be uploaded to the gate node and started. The script should print key-value pairs separated by colons; these pairs will be presented in the results.
The gate node should be accessible via ssh with the keypair key_name, so the heat template should accept a key_name parameter.
Namespace: default
Parameters:
- workload: workload to run
- template: path to heat template file
- files: additional template files
- parameters: parameters for heat template
NovaServers.boot_and_list_server [scenario]¶
Boot a server from an image and then list all servers.
Measure the "nova list" command performance.
If you have only 1 user in your context, you will add 1 server on every iteration. So you will have more and more servers and will be able to measure the performance of the "nova list" command depending on the number of servers owned by users.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- detailed: True if the server listing should contain
detailed information about all of them
- kwargs: Optional additional arguments for server creation
NovaServers.list_servers [scenario]¶
List all servers.
This simple scenario tests the nova list command by listing all the servers.
Namespace: default
Parameters:
- detailed: True if detailed information about servers
should be listed
NovaServers.boot_and_delete_server [scenario]¶
Boot and delete a server.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between server creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
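A sketch of a task entry for this scenario; all values (image, flavor, sleep bounds, runner settings) are illustrative assumptions.

```python
# Hypothetical task entry; min_sleep <= max_sleep bounds the random
# pause between server creation and deletion. All values assumed.
task = {
    "NovaServers.boot_and_delete_server": [{
        "args": {
            "image": {"name": "cirros-0.3.4"},   # assumed image
            "flavor": {"name": "m1.tiny"},       # assumed flavor
            "min_sleep": 5,
            "max_sleep": 10,
            "force_delete": False,
        },
        "runner": {
            "type": "constant",
            "times": 10,
            "concurrency": 2,
        },
    }]
}
```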
NovaServers.boot_and_delete_multiple_servers [scenario]¶
Boot multiple servers in a single request and delete them.
Deletion is done in parallel with one request per server, not with a single request for all servers.
Namespace: default
Parameters:
- image: The image to boot from
- flavor: Flavor used to boot instance
- count: Number of instances to boot
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for instance creation
NovaServers.boot_server_from_volume_and_delete [scenario]¶
Boot a server from volume and then delete it.
The scenario first creates a volume and then a server. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- volume_size: volume size (in GB)
- volume_type: specifies volume type when there are
multiple backends
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
NovaServers.boot_and_bounce_server [scenario]¶
Boot a server and run specified actions against it.
Actions should be passed into the actions parameter. Available actions are 'hard_reboot', 'soft_reboot', 'stop_start', 'rescue_unrescue', 'pause_unpause', 'suspend_resume', 'lock_unlock' and 'shelve_unshelve'. Delete server after all actions were completed.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- force_delete: True if force_delete should be used
- actions: list of action dictionaries, where each action
dictionary specifies an action to be performed in the following format: {"action_name": <no_of_iterations>}
- kwargs: Optional additional arguments for server creation
NovaServers.boot_lock_unlock_and_delete [scenario]¶
Boot a server, lock it, then unlock and delete it.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between locking and unlocking the server (of random duration from min_sleep to max_sleep).
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- min_sleep: Minimum sleep time between locking and unlocking
in seconds
- max_sleep: Maximum sleep time between locking and unlocking
in seconds
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
NovaServers.snapshot_server [scenario]¶
Boot a server, make its snapshot and delete both.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
NovaServers.boot_server [scenario]¶
Boot a server.
Assumes that cleanup is done elsewhere.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- auto_assign_nic: True if NICs should be assigned
- kwargs: Optional additional arguments for server creation
NovaServers.boot_server_from_volume [scenario]¶
Boot a server from volume.
The scenario first creates a volume and then a server. Assumes that cleanup is done elsewhere.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- volume_size: volume size (in GB)
- volume_type: specifies volume type when there are
multiple backends
- auto_assign_nic: True if NICs should be assigned
- kwargs: Optional additional arguments for server creation
NovaServers.resize_server [scenario]¶
Boot a server, then resize and delete it.
This test will confirm the resize by default, or revert the resize if confirm is set to false.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- to_flavor: flavor to be used to resize the booted instance
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
NovaServers.boot_server_attach_created_volume_and_resize [scenario]¶
Create a VM from image, attach a volume to it and resize.
Simple test to create a VM and attach a volume, then resize the VM, detach the volume then delete volume and VM. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between attaching a volume and running resize (of random duration from range [min_sleep, max_sleep]).
Namespace: default
Parameters:
- image: Glance image name to use for the VM
- flavor: VM flavor name
- to_flavor: flavor to be used to resize the booted instance
- volume_size: volume size (in GB)
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
- force_delete: True if force_delete should be used
- confirm: True if need to confirm resize else revert resize
- do_delete: True if resources need to be deleted explicitly,
else use rally cleanup to remove resources
- boot_server_kwargs: optional arguments for VM creation
- create_volume_kwargs: optional arguments for volume creation
NovaServers.boot_server_from_volume_and_resize [scenario]¶
Boot a server from volume, then resize and delete it.
The scenario first creates a volume and then a server. Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between volume creation and deletion (of random duration from [min_sleep, max_sleep]).
This test will confirm the resize by default, or revert the resize if confirm is set to false.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- to_flavor: flavor to be used to resize the booted instance
- volume_size: volume size (in GB)
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
- force_delete: True if force_delete should be used
- confirm: True if need to confirm resize else revert resize
- do_delete: True if resources need to be deleted explicitly,
else use rally cleanup to remove resources
- boot_server_kwargs: optional arguments for VM creation
- create_volume_kwargs: optional arguments for volume creation
NovaServers.suspend_and_resume_server [scenario]¶
Create a server, suspend, resume and then delete it.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
NovaServers.pause_and_unpause_server [scenario]¶
Create a server, pause, unpause and then delete it.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
NovaServers.shelve_and_unshelve_server [scenario]¶
Create a server, shelve, unshelve and then delete it.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- force_delete: True if force_delete should be used
- kwargs: Optional additional arguments for server creation
NovaServers.boot_and_live_migrate_server [scenario]¶
Live Migrate a server.
This scenario launches a VM on a compute node in the availability zone and then migrates the VM to another compute node in the same availability zone.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between VM booting and running live migration (of random duration from range [min_sleep, max_sleep]).
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- block_migration: Specifies the migration type
- disk_over_commit: Specifies whether to allow overcommit
on migrated instance or not
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
- kwargs: Optional additional arguments for server creation
NovaServers.boot_server_from_volume_and_live_migrate [scenario]¶
Boot a server from volume and then migrate it.
The scenario first creates a volume and a server booted from the volume on a compute node in the availability zone, and then migrates the VM to another compute node in the same availability zone.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between VM booting and running live migration (of random duration from range [min_sleep, max_sleep]).
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- volume_size: volume size (in GB)
- volume_type: specifies volume type when there are
multiple backends
- block_migration: Specifies the migration type
- disk_over_commit: Specifies whether to allow overcommit
on migrated instance or not
- force_delete: True if force_delete should be used
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
- kwargs: Optional additional arguments for server creation
NovaServers.boot_server_attach_created_volume_and_live_migrate [scenario]¶
Create a VM, attach a volume to it and live migrate.
Simple test to create a VM and attach a volume, then migrate the VM, detach the volume and delete volume/VM.
Optional 'min_sleep' and 'max_sleep' parameters allow the scenario to simulate a pause between attaching a volume and running live migration (of random duration from range [min_sleep, max_sleep]).
Namespace: default
Parameters:
- image: Glance image name to use for the VM
- flavor: VM flavor name
- size: volume size (in GB)
- block_migration: Specifies the migration type
- disk_over_commit: Specifies whether to allow overcommit
on migrated instance or not
- boot_server_kwargs: optional arguments for VM creation
- create_volume_kwargs: optional arguments for volume creation
- min_sleep: Minimum sleep time in seconds (non-negative)
- max_sleep: Maximum sleep time in seconds (non-negative)
NovaServers.boot_and_migrate_server [scenario]¶
Migrate a server.
This scenario launches a VM on a compute node in the availability zone, and then migrates the VM to another compute node in the same availability zone.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- kwargs: Optional additional arguments for server creation
NovaServers.boot_and_rebuild_server [scenario]¶
Rebuild a server.
This scenario launches a VM, then rebuilds that VM with a different image.
Namespace: default
Parameters:
- from_image: image to be used to boot an instance
- to_image: image to be used to rebuild the instance
- flavor: flavor to be used to boot an instance
- kwargs: Optional additional arguments for server creation
NovaServers.boot_and_associate_floating_ip [scenario]¶
Boot a server and associate a floating IP to it.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- kwargs: Optional additional arguments for server creation
NovaServers.boot_and_show_server [scenario]¶
Show server details.
This simple scenario tests the nova show command by retrieving the server details.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- kwargs: Optional additional arguments for server creation
Returns: Server details
NovaServers.boot_and_get_console_output [scenario]¶
Get text console output from server.
This simple scenario tests the nova console-log command by retrieving the text console log output.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- length: The number of tail log lines you would like to retrieve.
None (default value) or -1 means unlimited length.
- kwargs: Optional additional arguments for server creation
Returns: Text console log output for server
NovaServers.boot_and_update_server [scenario]¶
Boot a server, then update its name and description.
The scenario first creates a server, then updates it. Assumes that cleanup is done elsewhere.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- description: update the server description
- kwargs: Optional additional arguments for server creation
NovaServers.boot_server_from_volume_snapshot [scenario]¶
Boot a server from a snapshot.
The scenario first creates a volume and creates a snapshot from this volume, then boots a server from the created snapshot. Assumes that cleanup is done elsewhere.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- volume_size: volume size (in GB)
- volume_type: specifies volume type when there are
multiple backends
- auto_assign_nic: True if NICs should be assigned
- kwargs: Optional additional arguments for server creation
NovaAggregates.list_aggregates [scenario]¶
List all nova aggregates.
Measure the "nova aggregate-list" command performance.
Namespace: default
NovaAggregates.create_and_list_aggregates [scenario]¶
Create an aggregate and then list all aggregates.
This scenario creates an aggregate and then lists all aggregates.
Namespace: default
Parameters:
- availability_zone: The availability zone of the aggregate
NovaAggregates.create_and_delete_aggregate [scenario]¶
Create an aggregate and then delete it.
This scenario first creates an aggregate and then deletes it.
Namespace: default
NovaAggregates.create_and_update_aggregate [scenario]¶
Create an aggregate and then update its name and availability_zone.
This scenario first creates an aggregate and then updates its name and availability_zone.
Namespace: default
Parameters:
- availability_zone: The availability zone of the aggregate
NovaAggregates.create_aggregate_add_and_remove_host [scenario]¶
Create an aggregate, add a host to it and then remove the host from it.
Measure "nova aggregate-add-host" and "nova aggregate-remove-host" command performance.
Namespace: default
NovaServers.boot_server_associate_and_dissociate_floating_ip [scenario]¶
Boot a server, associate a floating IP with it and then dissociate it.
The scenario first boots a server and creates a floating IP, then associates the floating IP with the server, and finally dissociates it.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- kwargs: Optional additional arguments for server creation
NovaImages.list_images [scenario]¶
List all images.
Measure the "nova image-list" command performance.
Namespace: default
Parameters:
- detailed: True if the image listing
should contain detailed information
- kwargs: Optional additional arguments for image listing
NovaHosts.list_hosts [scenario]¶
List all nova hosts.
Measure the "nova host-list" command performance.
Namespace: default
Parameters:
- zone: List nova hosts in an availability-zone.
None (default value) means list hosts in all availability-zones
NovaAvailabilityZones.list_availability_zones [scenario]¶
List all availability zones.
Measure the "nova availability-zone-list" command performance.
Namespace: default
Parameters:
- detailed: True if the availability-zone listing should contain
detailed information about all of them
Module: rally.plugins.openstack.scenarios.nova.availability_zones
NovaNetworks.create_and_list_networks [scenario]¶
Create nova network and list all networks.
Namespace: default
Parameters:
- start_cidr: IP range
- kwargs: Optional additional arguments for network creation
NovaNetworks.create_and_delete_network [scenario]¶
Create nova network and delete it.
Namespace: default
Parameters:
- start_cidr: IP range
- kwargs: Optional additional arguments for network creation
NovaServices.list_services [scenario]¶
List all nova services.
Measure the "nova service-list" command performance.
Namespace: default
Parameters:
- host: List nova services on host
- binary: List nova services matching given binary
NovaFlavors.list_flavors [scenario]¶
List all flavors.
Measure the "nova flavor-list" command performance.
Namespace: default
Parameters:
- detailed: True if the flavor listing should contain detailed information
- kwargs: Optional additional arguments for flavor listing
NovaFlavors.create_and_list_flavor_access [scenario]¶
Create a non-public flavor and list its access rules
Namespace: default
Parameters:
- ram: Memory in MB for the flavor
- vcpus: Number of VCPUs for the flavor
- disk: Size of local disk in GB
- kwargs: Optional additional arguments for flavor creation
NovaFlavors.create_flavor [scenario]¶
Create a flavor.
Namespace: default
Parameters:
- ram: Memory in MB for the flavor
- vcpus: Number of VCPUs for the flavor
- disk: Size of local disk in GB
- kwargs: Optional additional arguments for flavor creation
NovaFlavors.create_and_get_flavor [scenario]¶
Create flavor and get detailed information of the flavor.
Namespace: default
Parameters:
- ram: Memory in MB for the flavor
- vcpus: Number of VCPUs for the flavor
- disk: Size of local disk in GB
- kwargs: Optional additional arguments for flavor creation
NovaFlavors.create_flavor_and_set_keys [scenario]¶
Create flavor and set keys to the flavor.
Measure the "nova flavor-key" command performance. The scenario first creates a flavor, then adds the extra specs to it.
Namespace: default
Parameters:
- ram: Memory in MB for the flavor
- vcpus: Number of VCPUs for the flavor
- disk: Size of local disk in GB
- extra_specs: additional arguments for flavor set keys
- kwargs: Optional additional arguments for flavor creation
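The ram/vcpus/disk and extra_specs parameters above become the scenario's args in a task file. A minimal sketch (the sizes and the extra spec key below are placeholders, not values from this reference):

```python
import json

# Illustrative args for NovaFlavors.create_flavor_and_set_keys; the
# ram/vcpus/disk values and the extra spec are placeholders.
task = {
    "NovaFlavors.create_flavor_and_set_keys": [{
        "args": {
            "ram": 512,   # memory in MB for the flavor
            "vcpus": 1,   # number of VCPUs
            "disk": 1,    # local disk size in GB
            "extra_specs": {"hw:cpu_policy": "dedicated"},  # placeholder key
        },
        "runner": {"type": "constant", "times": 5, "concurrency": 1},
    }]
}
print(json.dumps(task, indent=2))
```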
NovaFloatingIpsBulk.create_and_list_floating_ips_bulk [scenario]¶
Create nova floating IP by range and list it.
This scenario creates floating IPs by range and then lists them all.
Namespace: default
Parameters:
- start_cidr: Floating IP range
- kwargs: Optional additional arguments for range IP creation
Module: rally.plugins.openstack.scenarios.nova.floating_ips_bulk
NovaFloatingIpsBulk.create_and_delete_floating_ips_bulk [scenario]¶
Create nova floating IP by range and delete it.
This scenario creates floating IPs by range and then deletes them.
Namespace: default
Parameters:
- start_cidr: Floating IP range
- kwargs: Optional additional arguments for range IP creation
Module: rally.plugins.openstack.scenarios.nova.floating_ips_bulk
NovaSecGroup.create_and_delete_secgroups [scenario]¶
Create and delete security groups.
This scenario creates N security groups with M rules per group and then deletes them.
Namespace: default
Parameters:
- security_group_count: Number of security groups
- rules_per_security_group: Number of rules per security group
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaSecGroup.create_and_list_secgroups [scenario]¶
Create and list security groups.
This scenario creates N security groups with M rules per group and then lists them.
Namespace: default
Parameters:
- security_group_count: Number of security groups
- rules_per_security_group: Number of rules per security group
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaSecGroup.create_and_update_secgroups [scenario]¶
Create and update security groups.
This scenario creates 'security_group_count' security groups then updates their name and description.
Namespace: default
Parameters:
- security_group_count: Number of security groups
Module: rally.plugins.openstack.scenarios.nova.security_group
NovaSecGroup.boot_and_delete_server_with_secgroups [scenario]¶
Boot and delete server with security groups attached.
- Plan of this scenario:
- create N security groups with M rules per group
- boot a VM with the created security groups
- get the list of security groups attached to the server
- delete the server
- delete all security groups
- check that all groups were attached to the server
Namespace: default
Parameters:
- image: ID of the image to be used for server creation
- flavor: ID of the flavor to be used for server creation
- security_group_count: Number of security groups
- rules_per_security_group: Number of rules per security group
- **kwargs: Optional arguments for booting the instance
Module: rally.plugins.openstack.scenarios.nova.security_group
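The security_group_count (N) and rules_per_security_group (M) parameters above multiply, so each iteration creates N*M rules. A sketch of a task config (image and flavor names are placeholders):

```python
import json

# Illustrative task: boot a server guarded by 2 security groups of 5 rules
# each; the image and flavor names are placeholders.
task = {
    "NovaSecGroup.boot_and_delete_server_with_secgroups": [{
        "args": {
            "image": {"name": "cirros"},    # placeholder image
            "flavor": {"name": "m1.tiny"},  # placeholder flavor
            "security_group_count": 2,      # N security groups
            "rules_per_security_group": 5,  # M rules per group
        },
        "runner": {"type": "constant", "times": 4, "concurrency": 2},
    }]
}
args = task["NovaSecGroup.boot_and_delete_server_with_secgroups"][0]["args"]
# each iteration creates N * M rules in total
total_rules = args["security_group_count"] * args["rules_per_security_group"]
print(json.dumps(task, indent=2))
```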
NovaHypervisors.list_hypervisors [scenario]¶
List hypervisors.
Measure the "nova hypervisor-list" command performance.
Namespace: default
Parameters:
- detailed: True if the hypervisor listing should contain
detailed information about all of them
NovaHypervisors.list_and_get_hypervisors [scenario]¶
List and get hypervisors.
The scenario first lists all hypervisors, then gets detailed information for each of the listed hypervisors in turn.
Measure the "nova hypervisor-show" command performance.
Namespace: default
Parameters:
- detailed: True if the hypervisor listing should contain
detailed information about all of them
NovaKeypair.create_and_list_keypairs [scenario]¶
Create a keypair with random name and list keypairs.
This scenario creates a keypair and then lists all keypairs.
Namespace: default
Parameters:
- kwargs: Optional additional arguments for keypair creation
NovaKeypair.create_and_delete_keypair [scenario]¶
Create a keypair with random name and delete keypair.
This scenario creates a keypair and then deletes it.
Namespace: default
Parameters:
- kwargs: Optional additional arguments for keypair creation
NovaKeypair.boot_and_delete_server_with_keypair [scenario]¶
Boot and delete server with keypair.
- Plan of this scenario:
- create a keypair
- boot a VM with created keypair
- delete server
- delete keypair
Namespace: default
Parameters:
- image: ID of the image to be used for server creation
- flavor: ID of the flavor to be used for server creation
- boot_server_kwargs: Optional additional arguments for VM creation
- server_kwargs: Deprecated alias for boot_server_kwargs
- kwargs: Optional additional arguments for keypair creation
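Extra server-creation arguments go under boot_server_kwargs rather than kwargs (which is reserved for keypair creation). A sketch (image/flavor names and the network id are placeholders):

```python
import json

# Illustrative task for boot_and_delete_server_with_keypair; the image,
# flavor and network id are placeholders.
task = {
    "NovaKeypair.boot_and_delete_server_with_keypair": [{
        "args": {
            "image": {"name": "cirros"},
            "flavor": {"name": "m1.tiny"},
            # extra server-creation arguments go under boot_server_kwargs
            "boot_server_kwargs": {"nics": [{"net-id": "<network-uuid>"}]},
        },
        "runner": {"type": "constant", "times": 5, "concurrency": 2},
    }]
}
print(json.dumps(task, indent=2))
```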
NovaAgents.list_agents [scenario]¶
List all agent builds.
Measure the "nova agent-list" command performance.
Namespace: default
Parameters:
- hypervisor: List agent builds on a specific hypervisor.
None (default value) means list for all hypervisors
MagnumClusters.list_clusters [scenario]¶
List all clusters.
Measure the "magnum clusters-list" command performance.
Namespace: default
Parameters:
- limit: (Optional) The maximum number of results to return per request, if:
- limit > 0, the maximum number of clusters to return.
- limit param is NOT specified (None), the number of items returned respects the maximum imposed by the Magnum API (see Magnum's api.max_limit option).
- kwargs: optional additional arguments for clusters listing
MagnumClusters.create_and_list_clusters [scenario]¶
Create a cluster and then list all clusters.
Namespace: default
Parameters:
- node_count: the cluster node count
- cluster_template_uuid: optional, if the user wants to use an existing cluster_template
- kwargs: optional additional arguments for cluster creation
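The node_count parameter above is the only required scenario argument; a minimal sketch of a task config (the runner numbers are arbitrary):

```python
import json

# Illustrative task: build 2-node Magnum clusters, one iteration at a time.
task = {
    "MagnumClusters.create_and_list_clusters": [{
        "args": {"node_count": 2},  # number of nodes per created cluster
        "runner": {"type": "constant", "times": 1, "concurrency": 1},
    }]
}
print(json.dumps(task, indent=2))
```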
MagnumClusterTemplates.list_cluster_templates [scenario]¶
List all cluster_templates.
Measure the "magnum cluster_template-list" command performance.
Namespace: default
Parameters:
- limit: (Optional) The maximum number of results to return per request, if:
- limit > 0, the maximum number of cluster_templates to return.
- limit param is NOT specified (None), the number of items returned respects the maximum imposed by the Magnum API (see Magnum's api.max_limit option).
- kwargs: optional additional arguments for cluster_templates listing
Module: rally.plugins.openstack.scenarios.magnum.cluster_templates
SwiftObjects.create_container_and_object_then_list_objects [scenario]¶
Create container and objects then list all objects.
Namespace: default
Parameters:
- objects_per_container: int, number of objects to upload
- object_size: int, temporary local object size
- kwargs: dict, optional parameters to create container
SwiftObjects.create_container_and_object_then_delete_all [scenario]¶
Create container and objects then delete everything created.
Namespace: default
Parameters:
- objects_per_container: int, number of objects to upload
- object_size: int, temporary local object size
- kwargs: dict, optional parameters to create container
SwiftObjects.create_container_and_object_then_download_object [scenario]¶
Create container and objects then download all objects.
Namespace: default
Parameters:
- objects_per_container: int, number of objects to upload
- object_size: int, temporary local object size
- kwargs: dict, optional parameters to create container
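The three Swift create_container_* scenarios above share the same args shape; a sketch for the download variant (sizes and counts are arbitrary placeholders):

```python
import json

# Illustrative task: 5 objects of 1 KiB each per container.
task = {
    "SwiftObjects.create_container_and_object_then_download_object": [{
        "args": {
            "objects_per_container": 5,  # number of objects to upload
            "object_size": 1024,         # temporary local object size in bytes
        },
        "runner": {"type": "constant", "times": 4, "concurrency": 2},
    }]
}
print(json.dumps(task, indent=2))
```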
SwiftObjects.list_objects_in_containers [scenario]¶
List objects in all containers.
Namespace: default
SwiftObjects.list_and_download_objects_in_containers [scenario]¶
List and download objects in all containers.
Namespace: default
DesignateBasic.create_and_list_domains [scenario]¶
Create a domain and list all domains.
Measure the "designate domain-list" command performance.
If you have only 1 user in your context, you will add 1 domain on every iteration. So you will have more and more domains and will be able to measure the performance of the "designate domain-list" command depending on the number of domains owned by users.
Namespace: default
DesignateBasic.list_domains [scenario]¶
List Designate domains.
This simple scenario tests the designate domain-list command by listing all the domains.
For example, with 2 users in the context, each owning 2 domains, we can test the performance of the designate domain-list command under that load.
Namespace: default
DesignateBasic.create_and_delete_domain [scenario]¶
Create and then delete a domain.
Measure the performance of creating and deleting domains with different level of load.
Namespace: default
DesignateBasic.create_and_update_domain [scenario]¶
Create and then update a domain.
Measure the performance of creating and updating domains with different level of load.
Namespace: default
DesignateBasic.create_and_delete_records [scenario]¶
Create and then delete records.
Measure the performance of creating and deleting records with different level of load.
Namespace: default
Parameters:
- records_per_domain: Records to create per domain.
DesignateBasic.list_records [scenario]¶
List Designate records.
This simple scenario tests the designate record-list command by listing all the records in a domain.
For example, with 2 users in the context, each owning 2 domains, we can test the performance of the designate record-list command under that load.
Namespace: default
Parameters:
- domain_id: Domain ID
DesignateBasic.create_and_list_records [scenario]¶
Create and then list records.
If you have only 1 user in your context, you will add 1 record on every iteration. So you will have more and more records and will be able to measure the performance of the "designate record-list" command depending on the number of domains/records owned by users.
Namespace: default
Parameters:
- records_per_domain: Records to create per domain.
DesignateBasic.create_and_list_servers [scenario]¶
Create a Designate server and list all servers.
If you have only 1 user in your context, you will add 1 server on every iteration. So you will have more and more servers and will be able to measure the performance of the "designate server-list" command depending on the number of servers owned by users.
Namespace: default
DesignateBasic.create_and_delete_server [scenario]¶
Create and then delete a server.
Measure the performance of creating and deleting servers with different level of load.
Namespace: default
DesignateBasic.list_servers [scenario]¶
List Designate servers.
This simple scenario tests the designate server-list command by listing all the servers.
Namespace: default
DesignateBasic.create_and_list_zones [scenario]¶
Create a zone and list all zones.
Measure the "openstack zone list" command performance.
If you have only 1 user in your context, you will add 1 zone on every iteration. So you will have more and more zones and will be able to measure the performance of the "openstack zone list" command depending on the number of zones owned by users.
Namespace: default
DesignateBasic.list_zones [scenario]¶
List Designate zones.
This simple scenario tests the openstack zone list command by listing all the zones.
Namespace: default
DesignateBasic.create_and_delete_zone [scenario]¶
Create and then delete a zone.
Measure the performance of creating and deleting zones with different level of load.
Namespace: default
DesignateBasic.list_recordsets [scenario]¶
List Designate recordsets.
This simple scenario tests the openstack recordset list command by listing all the recordsets in a zone.
Namespace: default
Parameters:
- zone_id: Zone ID
DesignateBasic.create_and_delete_recordsets [scenario]¶
Create and then delete recordsets.
Measure the performance of creating and deleting recordsets with different level of load.
Namespace: default
Parameters:
- recordsets_per_zone: recordsets to create per zone.
DesignateBasic.create_and_list_recordsets [scenario]¶
Create and then list recordsets.
If you have only 1 user in your context, you will add 1 recordset on every iteration. So you will have more and more recordsets and will be able to measure the performance of the "openstack recordset list" command depending on the number of zones/recordsets owned by users.
Namespace: default
Parameters:
- recordsets_per_zone: recordsets to create per zone.
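Because each iteration adds recordsets_per_zone more recordsets, running many iterations lets you observe list time as the count grows. A sketch of such a task:

```python
import json

# Illustrative task: 5 recordsets per zone; repeated iterations grow the
# total, so "openstack recordset list" is measured against a growing set.
task = {
    "DesignateBasic.create_and_list_recordsets": [{
        "args": {"recordsets_per_zone": 5},
        "runner": {"type": "constant", "times": 10, "concurrency": 1},
    }]
}
print(json.dumps(task, indent=2))
```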
Watcher.create_audit_template_and_delete [scenario]¶
Create audit template and delete it.
Namespace: default
Parameters:
- goal: The goal audit template is based on
- strategy: The strategy used to provide resource optimization
algorithm
- extra: This field is used to specify some audit template
options
Watcher.list_audit_templates [scenario]¶
List existing audit templates.
Audit templates are being created by Audit Template Context.
Namespace: default
Parameters:
- name: Name of the audit template
- goal: Name of the goal
- strategy: Name of the strategy
- limit: The maximum number of results to return per request, if:
- limit > 0, the maximum number of audit templates to return.
- limit == 0, return the entire list of audit_templates.
- limit param is NOT specified (None), the number of items returned respects the maximum imposed by the Watcher API (see Watcher's api.max_limit option).
- sort_key: Optional, field used for sorting.
- sort_dir: Optional, direction of sorting, either 'asc' (the default) or 'desc'.
- detail: Optional, boolean whether to return detailed information about audit_templates.
Watcher.create_audit_and_delete [scenario]¶
Create and delete audit.
Create an audit, wait until the audit reaches either the SUCCEEDED or FAILED state, and then delete it.
Namespace: default
SenlinClusters.create_and_delete_cluster [scenario]¶
Create a cluster and then delete it.
Measure the "senlin cluster-create" and "senlin cluster-delete" commands performance.
Namespace: default
Parameters:
- desired_capacity: The capacity or initial number of nodes owned by the cluster
- min_size: The minimum number of nodes owned by the cluster
- max_size: The maximum number of nodes owned by the cluster. -1 means no limit
- timeout: The timeout value in seconds for cluster creation
- metadata: A set of key value pairs to associate with the cluster
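The sizing parameters above bound the cluster between min_size and max_size; a sketch of a task config (the numbers are arbitrary):

```python
import json

# Illustrative args for SenlinClusters.create_and_delete_cluster.
task = {
    "SenlinClusters.create_and_delete_cluster": [{
        "args": {
            "desired_capacity": 3,  # initial number of nodes
            "min_size": 0,          # lower bound on cluster size
            "max_size": 5,          # upper bound; -1 would mean no limit
            "timeout": 3600,        # seconds to wait for cluster creation
        },
        "runner": {"type": "constant", "times": 1, "concurrency": 1},
    }]
}
print(json.dumps(task, indent=2))
```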
HeatStacks.create_and_list_stack [scenario]¶
Create a stack and then list all stacks.
Measure the "heat stack-create" and "heat stack-list" commands performance.
Namespace: default
Parameters:
- template_path: path to stack template file
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.list_stacks_and_resources [scenario]¶
List all resources from tenant stacks.
Namespace: default
HeatStacks.create_and_delete_stack [scenario]¶
Create and then delete a stack.
Measure the "heat stack-create" and "heat stack-delete" commands performance.
Namespace: default
Parameters:
- template_path: path to stack template file
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.create_check_delete_stack [scenario]¶
Create, check and delete a stack.
Measure the performance of the following commands: heat stack-create, heat action-check, heat stack-delete.
Namespace: default
Parameters:
- template_path: path to stack template file
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.create_update_delete_stack [scenario]¶
Create, update and then delete a stack.
Measure the "heat stack-create", "heat stack-update" and "heat stack-delete" commands performance.
Namespace: default
Parameters:
- template_path: path to stack template file
- updated_template_path: path to updated stack template file
- parameters: parameters to use in heat template
- updated_parameters: parameters to use in updated heat template. If not specified, parameters will be used instead
- files: files used in template
- updated_files: files used in updated template. If not specified, the files value will be used instead
- environment: stack environment definition
- updated_environment: environment definition for updated stack
HeatStacks.create_stack_and_scale [scenario]¶
Create an autoscaling stack and invoke a scaling policy.
Measure the performance of autoscaling webhooks.
Namespace: default
Parameters:
- template_path: path to template file that includes an OS::Heat::AutoScalingGroup resource
- output_key: the stack output key that corresponds to the scaling webhook
- delta: the number of instances the stack is expected to change by
- parameters: parameters to use in heat template
- files: files used in template (dict of file name to file path)
- environment: stack environment definition (dict)
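The scenario invokes the webhook URL found under output_key and then checks that the group changed by delta instances. A sketch of the args (the template path and output key here are hypothetical and must match a real autoscaling template):

```python
import json

# Illustrative task for HeatStacks.create_stack_and_scale. The template
# path and output key are hypothetical placeholders.
task = {
    "HeatStacks.create_stack_and_scale": [{
        "args": {
            # hypothetical template containing an OS::Heat::AutoScalingGroup
            "template_path": "templates/autoscaling_group.yaml",
            # hypothetical output key holding the scaling webhook URL
            "output_key": "scaling_url",
            "delta": 1,  # expect the group to grow by one instance
        },
        "runner": {"type": "constant", "times": 2, "concurrency": 1},
    }]
}
print(json.dumps(task, indent=2))
```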
HeatStacks.create_suspend_resume_delete_stack [scenario]¶
Create, suspend-resume and then delete a stack.
Measure performance of the following commands: heat stack-create, heat action-suspend, heat action-resume, heat stack-delete.
Namespace: default
Parameters:
- template_path: path to stack template file
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.create_snapshot_restore_delete_stack [scenario]¶
Create, snapshot-restore and then delete a stack.
Measure performance of the following commands: heat stack-create, heat stack-snapshot, heat stack-restore, heat stack-delete.
Namespace: default
Parameters:
- template_path: path to stack template file
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.create_stack_and_show_output_via_API [scenario]¶
Create stack and show output by using old algorithm.
Measure performance of the following commands: heat stack-create, heat output-show.
Namespace: default
Parameters:
- template_path: path to stack template file
- output_key: the stack output key whose value should be shown
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.create_stack_and_show_output [scenario]¶
Create stack and show output by using new algorithm.
Measure performance of the following commands: heat stack-create, heat output-show.
Namespace: default
Parameters:
- template_path: path to stack template file
- output_key: the stack output key whose value should be shown
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.create_stack_and_list_output_via_API [scenario]¶
Create stack and list outputs by using old algorithm.
Measure performance of the following commands: heat stack-create, heat output-list.
Namespace: default
Parameters:
- template_path: path to stack template file
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
HeatStacks.create_stack_and_list_output [scenario]¶
Create stack and list outputs by using new algorithm.
Measure performance of the following commands: heat stack-create, heat output-list.
Namespace: default
Parameters:
- template_path: path to stack template file
- parameters: parameters to use in heat template
- files: files used in template
- environment: stack environment definition
Authenticate.keystone [scenario]¶
Check Keystone Client.
Namespace: default
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_glance [scenario]¶
Check Glance Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure the token gets validated. In the following we check for a non-existent image.
Namespace: default
Parameters:
- repetitions: number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_nova [scenario]¶
Check Nova Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: default
Parameters:
- repetitions: number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_ceilometer [scenario]¶
Check Ceilometer Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: default
Parameters:
- repetitions: number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_cinder [scenario]¶
Check Cinder Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: default
Parameters:
- repetitions: number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_neutron [scenario]¶
Check Neutron Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: default
Parameters:
- repetitions: number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_heat [scenario]¶
Check Heat Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: default
Parameters:
- repetitions: number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
Authenticate.validate_monasca [scenario]¶
Check Monasca Client to ensure validation of token.
Creation of the client does not ensure validation of the token. We have to do some minimal operation to make sure token gets validated.
Namespace: default
Parameters:
- repetitions: number of times to validate
Module: rally.plugins.openstack.scenarios.authenticate.authenticate
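Each validate_* scenario above takes only a repetitions count. A sketch of a task exercising token validation against Nova (the numbers are arbitrary):

```python
import json

# Illustrative task: validate the token against Nova 5 times per iteration.
task = {
    "Authenticate.validate_nova": [{
        "args": {"repetitions": 5},  # number of times to validate the token
        "runner": {"type": "constant", "times": 10, "concurrency": 5},
    }]
}
print(json.dumps(task, indent=2))
```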
SaharaJob.create_launch_job [scenario]¶
Create and execute a Sahara EDP Job.
This scenario creates a Job entity and launches an execution on a Cluster.
Namespace: default
Parameters:
- job_type: type of the Data Processing Job
- configs: config dict that will be passed to a Job Execution
- job_idx: index of a job in a sequence. This index will be used to create different atomic actions for each job in a sequence
SaharaJob.create_launch_job_sequence [scenario]¶
Create and execute a sequence of the Sahara EDP Jobs.
This scenario creates a Job entity and launches an execution on a Cluster for every job object provided.
Namespace: default
Parameters:
- jobs: list of jobs that should be executed in one context
SaharaJob.create_launch_job_sequence_with_scaling [scenario]¶
Create and execute Sahara EDP Jobs on a scaling Cluster.
This scenario creates a Job entity and launches an execution on a Cluster for every job object provided. The Cluster is scaled according to the deltas values and the sequence is launched again.
Namespace: default
Parameters:
- jobs: list of jobs that should be executed in one context
- deltas: list of integers which will be used to add or remove worker nodes from the cluster
SaharaNodeGroupTemplates.create_and_list_node_group_templates [scenario]¶
Create and list Sahara Node Group Templates.
This scenario creates two Node Group Templates with different set of node processes. The master Node Group Template contains Hadoop's management processes. The worker Node Group Template contains Hadoop's worker processes.
By default the templates are created for the vanilla Hadoop provisioning plugin using version 1.2.1.
After the templates are created the list operation is called.
Namespace: default
Parameters:
- flavor: Nova flavor that will be used for nodes in the created node groups
- plugin_name: name of a provisioning plugin
- hadoop_version: version of Hadoop distribution supported by the specified plugin
- use_autoconfig: If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually
Module: rally.plugins.openstack.scenarios.sahara.node_group_templates
SaharaNodeGroupTemplates.create_delete_node_group_templates [scenario]¶
Create and delete Sahara Node Group Templates.
This scenario creates and deletes two most common types of Node Group Templates.
By default the templates are created for the vanilla Hadoop provisioning plugin using version 1.2.1.
Namespace: default
Parameters:
- flavor: Nova flavor that will be used for nodes in the created node groups
- plugin_name: name of a provisioning plugin
- hadoop_version: version of Hadoop distribution supported by the specified plugin
- use_autoconfig: If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually
Module: rally.plugins.openstack.scenarios.sahara.node_group_templates
SaharaClusters.create_and_delete_cluster [scenario]¶
Launch and delete a Sahara Cluster.
This scenario launches a Hadoop cluster, waits until it becomes 'Active' and deletes it.
Namespace: default
Parameters:
- flavor: Nova flavor that will be used for nodes in the created node groups. Deprecated.
- master_flavor: Nova flavor that will be used for the master instance of the cluster
- worker_flavor: Nova flavor that will be used for the workers of the cluster
- workers_count: number of worker instances in a cluster
- plugin_name: name of a provisioning plugin
- hadoop_version: version of Hadoop distribution supported by the specified plugin
- floating_ip_pool: floating ip pool name from which Floating IPs will be allocated. Sahara will determine automatically how to treat this depending on its own configurations. Defaults to None because in some cases Sahara may work w/o Floating IPs.
- volumes_per_node: number of Cinder volumes that will be attached to every cluster node
- volumes_size: size of each Cinder volume in GB
- auto_security_group: boolean value. If set to True Sahara will create a Security Group for each Node Group in the Cluster automatically.
- security_groups: list of security groups that will be used while creating VMs. If auto_security_group is set to True, this list can be left empty.
- node_configs: config dict that will be passed to each Node Group
- cluster_configs: config dict that will be passed to the Cluster
- enable_anti_affinity: If set to true the VMs will be scheduled one per compute node.
- enable_proxy: Use the Master Node of a Cluster as a proxy node and do not assign floating IPs to workers.
- use_autoconfig: If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually
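A sketch of the scenario args, using the vanilla plugin and Hadoop 1.2.1 mentioned in the node-group-template scenarios below (the flavor names and worker count are placeholders):

```python
import json

# Illustrative args for SaharaClusters.create_and_delete_cluster; flavor
# names are placeholders for real flavors in the cloud under test.
task = {
    "SaharaClusters.create_and_delete_cluster": [{
        "args": {
            "master_flavor": {"name": "m1.medium"},  # placeholder flavor
            "worker_flavor": {"name": "m1.medium"},  # placeholder flavor
            "workers_count": 3,
            "plugin_name": "vanilla",
            "hadoop_version": "1.2.1",
            "auto_security_group": True,  # let Sahara create security groups
        },
        "runner": {"type": "constant", "times": 1, "concurrency": 1},
    }]
}
print(json.dumps(task, indent=2))
```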
SaharaClusters.create_scale_delete_cluster [scenario]¶
Launch, scale and delete a Sahara Cluster.
This scenario launches a Hadoop cluster and waits until it becomes 'Active'. Then a series of scale operations is applied. The scaling happens according to the numbers listed in the deltas parameter.
Namespace: default
Parameters:
- flavor: Nova flavor that will be used for nodes in the created node groups. Deprecated.
- master_flavor: Nova flavor that will be used for the master instance of the cluster
- worker_flavor: Nova flavor that will be used for the workers of the cluster
- workers_count: number of worker instances in a cluster
- plugin_name: name of a provisioning plugin
- hadoop_version: version of Hadoop distribution supported by the specified plugin
- deltas: list of integers which will be used to add or remove worker nodes from the cluster
- floating_ip_pool: floating ip pool name from which Floating IPs will be allocated. Sahara will determine automatically how to treat this depending on its own configurations. Defaults to None because in some cases Sahara may work w/o Floating IPs.
- neutron_net_id: id of a Neutron network that will be used for fixed IPs. This parameter is ignored when Nova Network is set up.
- volumes_per_node: number of Cinder volumes that will be attached to every cluster node
- volumes_size: size of each Cinder volume in GB
- auto_security_group: boolean value. If set to True Sahara will create a Security Group for each Node Group in the Cluster automatically.
- security_groups: list of security groups that will be used while creating VMs. If auto_security_group is set to True, this list can be left empty.
- node_configs: configs dict that will be passed to each Node Group
- cluster_configs: configs dict that will be passed to the Cluster
- enable_anti_affinity: If set to true the VMs will be scheduled one per compute node.
- enable_proxy: Use the Master Node of a Cluster as a proxy node and do not assign floating IPs to workers.
- use_autoconfig: If True, instances of the node group will be automatically configured during cluster creation. If False, the configuration values should be specified manually
EC2Servers.list_servers [scenario]¶
List all servers.
This simple scenario tests the EC2 API list function by listing all the servers.
Namespace: default
EC2Servers.boot_server [scenario]¶
Boot a server.
Assumes that cleanup is done elsewhere.
Namespace: default
Parameters:
- image: image to be used to boot an instance
- flavor: flavor to be used to boot an instance
- kwargs: optional additional arguments for server creation
MistralWorkbooks.list_workbooks [scenario]¶
Scenario tests the mistral workbook-list command.
This simple scenario tests the Mistral workbook-list command by listing all the workbooks.
Namespace: default
MistralWorkbooks.create_workbook [scenario]¶
Scenario tests workbook creation and deletion.
This scenario is a very useful tool to measure the "mistral workbook-create" and "mistral workbook-delete" commands performance.
Namespace: default
Parameters:
- definition: string (yaml string) representation of given file content (Mistral workbook definition)
- do_delete: if False, allows checking performance in "create only" mode.
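The definition argument is the workbook YAML passed as a string. A sketch of a task config (the workbook below is a minimal placeholder, not from this reference):

```python
import json

# Illustrative task; the workbook definition is a minimal placeholder.
definition = """---
version: "2.0"
name: test_workbook
workflows:
  wf:
    tasks:
      noop:
        action: std.noop
"""
task = {
    "MistralWorkbooks.create_workbook": [{
        "args": {
            "definition": definition,  # YAML string with the workbook definition
            "do_delete": True,         # also measure workbook-delete
        },
        "runner": {"type": "constant", "times": 5, "concurrency": 1},
    }]
}
print(json.dumps(task, indent=2))
```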
MonascaMetrics.list_metrics [scenario]¶
Fetch user's metrics.
Namespace: default
Parameters:
- kwargs: optional arguments for list query:
name, dimensions, start_time, etc
MuranoEnvironments.list_environments [scenario]¶
List the murano environments.
Run murano environment-list for listing all environments.
Namespace: default
Module: rally.plugins.openstack.scenarios.murano.environments
MuranoEnvironments.create_and_delete_environment [scenario]¶
Create environment, session and delete environment.
Namespace: default
Module: rally.plugins.openstack.scenarios.murano.environments
MuranoEnvironments.create_and_deploy_environment [scenario]¶
Create environment, session and deploy environment.
Create environment, create session, add app to environment packages_per_env times, send environment to deploy.
Namespace: default
Parameters:
- packages_per_env: number of packages per environment
Module: rally.plugins.openstack.scenarios.murano.environments
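A task for this scenario might look like the sketch below, assuming the "murano_packages" context is used to supply the application package added to each environment; the package path is a placeholder:

```json
{
    "MuranoEnvironments.create_and_deploy_environment": [
        {
            "args": {"packages_per_env": 2},
            "runner": {"type": "constant", "times": 8, "concurrency": 2},
            "context": {
                "users": {"tenants": 1, "users_per_tenant": 1},
                "murano_packages": {"app_package": "/home/user/HelloApp.zip"}
            }
        }
    ]
}
```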
MuranoPackages.import_and_list_packages [scenario]¶
Import Murano package and get list of packages.
Measure the "murano import-package" and "murano package-list" commands performance. It imports Murano package from "package" (if it is not a zip archive then zip archive will be prepared) and gets list of imported packages.
Namespace: default
Parameters:
- package: path to zip archive that represents Murano
application package or absolute path to folder with package components
- include_disabled: specifies whether the disabled packages will
be included in the result or not. Default value is False.
MuranoPackages.import_and_delete_package [scenario]¶
Import Murano package and then delete it.
Measure the "murano import-package" and "murano package-delete" commands performance. It imports Murano package from "package" (if it is not a zip archive then zip archive will be prepared) and deletes it.
Namespace: default
Parameters:
- package: path to zip archive that represents Murano
application package or absolute path to folder with package components
MuranoPackages.package_lifecycle [scenario]¶
Import Murano package, modify it and then delete it.
Measure the performance of the Murano import, update and delete package commands. It imports a Murano package from "package" (if it is not a zip archive then a zip archive will be prepared), modifies it (using data from "body") and deletes it.
Namespace: default
Parameters:
- package: path to zip archive that represents Murano
application package or absolute path to folder with package components
- body: dict object that defines what package property will be
updated, e.g {"tags": ["tag"]} or {"enabled": "true"}
- operation: string object that defines the way of how package
property will be updated, allowed operations are "add", "replace" or "delete". Default value is "replace".
MuranoPackages.import_and_filter_applications [scenario]¶
Import Murano package and then filter packages by some criteria.
Measure the performance of package import and package filtering commands. It imports Murano package from "package" (if it is not a zip archive then zip archive will be prepared) and filters packages by some criteria.
Namespace: default
Parameters:
- package: path to zip archive that represents Murano
application package or absolute path to folder with package components
- filter_query: dict that contains filter criteria; later it
will be passed as **kwargs to the filter method, e.g. {"category": "Web"}
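The filter_query parameter can be exercised from a task file as in the sketch below; the package path and filter criteria are placeholders:

```json
{
    "MuranoPackages.import_and_filter_applications": [
        {
            "args": {
                "package": "/home/user/HelloApp.zip",
                "filter_query": {"category": "Web"}
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
        }
    ]
}
```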
CeilometerSamples.list_matched_samples [scenario]¶
Get list of samples that matched fields from context and args.
Namespace: default
Parameters:
- filter_by_user_id: flag for query by user_id
- filter_by_project_id: flag for query by project_id
- filter_by_resource_id: flag for query by resource_id
- metadata_query: dict with metadata fields and values for query
- limit: count of samples in response
Module: rally.plugins.openstack.scenarios.ceilometer.samples
CeilometerSamples.list_samples [scenario]¶
Fetch all available queries for list sample request.
Namespace: default
Parameters:
- metadata_query: dict with metadata fields and values for query
- limit: count of samples in response
Module: rally.plugins.openstack.scenarios.ceilometer.samples
CeilometerAlarms.create_alarm [scenario]¶
Create an alarm.
This scenario tests POST /v2/alarms. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: default
Parameters:
- meter_name: specifies meter name of the alarm
- threshold: specifies alarm threshold
- kwargs: specifies optional arguments for alarm creation.
CeilometerAlarms.list_alarms [scenario]¶
Fetch all alarms.
This scenario fetches list of all alarms using GET /v2/alarms.
Namespace: default
CeilometerAlarms.create_and_list_alarm [scenario]¶
Create and get the newly created alarm.
This scenario tests GET /v2/alarms/(alarm_id). Initially an alarm is created and then the created alarm is fetched using its alarm_id. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: default
Parameters:
- meter_name: specifies meter name of the alarm
- threshold: specifies alarm threshold
- kwargs: specifies optional arguments for alarm creation.
CeilometerAlarms.create_and_update_alarm [scenario]¶
Create and update the newly created alarm.
This scenario tests PUT /v2/alarms/(alarm_id). Initially an alarm is created and then the created alarm is updated using its alarm_id. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: default
Parameters:
- meter_name: specifies meter name of the alarm
- threshold: specifies alarm threshold
- kwargs: specifies optional arguments for alarm creation.
CeilometerAlarms.create_and_delete_alarm [scenario]¶
Create and delete the newly created alarm.
This scenario tests DELETE /v2/alarms/(alarm_id). Initially an alarm is created and then the created alarm is deleted using its alarm_id. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: default
Parameters:
- meter_name: specifies meter name of the alarm
- threshold: specifies alarm threshold
- kwargs: specifies optional arguments for alarm creation.
CeilometerAlarms.create_alarm_and_get_history [scenario]¶
Create an alarm, get and set the state and get the alarm history.
This scenario makes the following queries:
- GET /v2/alarms/{alarm_id}/history
- GET /v2/alarms/{alarm_id}/state
- PUT /v2/alarms/{alarm_id}/state
Initially an alarm is created, then its state is fetched using its alarm_id, then the alarm history is retrieved, and finally the alarm state is updated to the given state. meter_name and threshold are required parameters for alarm creation. kwargs stores other optional parameters like 'ok_actions', 'project_id' etc. that may be passed while creating an alarm.
Namespace: default
Parameters:
- meter_name: specifies meter name of the alarm
- threshold: specifies alarm threshold
- state: an alarm state to be set
- timeout: The number of seconds for which to attempt a
successful check of the alarm state
- kwargs: specifies optional arguments for alarm creation.
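A task exercising this scenario could look like the sketch below; the meter name, threshold, state and timeout values are illustrative placeholders:

```json
{
    "CeilometerAlarms.create_alarm_and_get_history": [
        {
            "args": {
                "meter_name": "ram_util",
                "threshold": 10.0,
                "state": "ok",
                "timeout": 60
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
        }
    ]
}
```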
CeilometerResource.list_resources [scenario]¶
Check all available queries for list resource request.
This scenario fetches list of all resources using GET /v2/resources.
Namespace: default
Parameters:
- metadata_query: dict with metadata fields and values for query
- start_time: lower bound of resource timestamp in isoformat
- end_time: upper bound of resource timestamp in isoformat
- limit: count of resources in response
Module: rally.plugins.openstack.scenarios.ceilometer.resources
CeilometerResource.get_tenant_resources [scenario]¶
Get all tenant resources.
This scenario retrieves information about tenant resources using GET /v2/resources/(resource_id)
Namespace: default
Module: rally.plugins.openstack.scenarios.ceilometer.resources
CeilometerResource.list_matched_resources [scenario]¶
Get resources that matched fields from context and args.
Namespace: default
Parameters:
- filter_by_user_id: flag for query by user_id
- filter_by_project_id: flag for query by project_id
- filter_by_resource_id: flag for query by resource_id
- metadata_query: dict with metadata fields and values for query
- start_time: lower bound of resource timestamp in isoformat
- end_time: upper bound of resource timestamp in isoformat
- limit: count of resources in response
Module: rally.plugins.openstack.scenarios.ceilometer.resources
CeilometerStats.create_meter_and_get_stats [scenario]¶
Create a meter and fetch its statistics.
A meter is first created and then its statistics are fetched using GET /v2/meters/(meter_name)/statistics.
Namespace: default
Parameters:
- kwargs: contains optional arguments to create a meter
CeilometerStats.get_stats [scenario]¶
Fetch statistics for certain meter.
Statistics are fetched for the given meter using GET /v2/meters/(meter_name)/statistics.
Namespace: default
Parameters:
- meter_name: meter to take statistic for
- filter_by_user_id: flag for query by user_id
- filter_by_project_id: flag for query by project_id
- filter_by_resource_id: flag for query by resource_id
- metadata_query: dict with metadata fields and values for query
- period: the length of the time range covered by these stats
- groupby: the fields used to group the samples
- aggregates: name of function for samples aggregation
Returns: list of statistics data
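The grouping and period parameters can be combined in a task file as sketched below, assuming samples for the chosen meter already exist in the cloud; the meter name and grouping field are placeholders:

```json
{
    "CeilometerStats.get_stats": [
        {
            "args": {
                "meter_name": "cpu_util",
                "filter_by_project_id": true,
                "period": 300,
                "groupby": "resource_id"
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
        }
    ]
}
```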
CeilometerQueries.create_and_query_alarms [scenario]¶
Create an alarm and then query it with specific parameters.
This scenario tests POST /v2/query/alarms. An alarm is first created and then fetched using the input query.
Namespace: default
Parameters:
- meter_name: specifies meter name of alarm
- threshold: specifies alarm threshold
- filter: optional filter query dictionary
- orderby: optional param for specifying ordering of results
- limit: optional param for maximum number of results returned
- kwargs: optional parameters for alarm creation
Module: rally.plugins.openstack.scenarios.ceilometer.queries
CeilometerQueries.create_and_query_alarm_history [scenario]¶
Create an alarm and then query for its history.
This scenario tests POST /v2/query/alarms/history. An alarm is first created and then its alarm_id is used to fetch the history of that specific alarm.
Namespace: default
Parameters:
- meter_name: specifies meter name of alarm
- threshold: specifies alarm threshold
- orderby: optional param for specifying ordering of results
- limit: optional param for maximum number of results returned
- kwargs: optional parameters for alarm creation
Module: rally.plugins.openstack.scenarios.ceilometer.queries
CeilometerQueries.create_and_query_samples [scenario]¶
Create a sample and then query it with specific parameters.
This scenario tests POST /v2/query/samples. A sample is first created and then fetched using the input query.
Namespace: default
Parameters:
- counter_name: specifies name of the counter
- counter_type: specifies type of the counter
- counter_unit: specifies unit of the counter
- counter_volume: specifies volume of the counter
- resource_id: specifies resource id for the sample created
- filter: optional filter query dictionary
- orderby: optional param for specifying ordering of results
- limit: optional param for maximum number of results returned
- kwargs: parameters for sample creation
Module: rally.plugins.openstack.scenarios.ceilometer.queries
CeilometerMeters.list_meters [scenario]¶
Check all available queries for list resource request.
Namespace: default
Parameters:
- metadata_query: dict with metadata fields and values
- limit: limit of meters in response
CeilometerMeters.list_matched_meters [scenario]¶
Get meters that matched fields from context and args.
Namespace: default
Parameters:
- filter_by_user_id: flag for query by user_id
- filter_by_project_id: flag for query by project_id
- filter_by_resource_id: flag for query by resource_id
- metadata_query: dict with metadata fields and values for query
- limit: count of resources in response
ZaqarBasic.create_queue [scenario]¶
Create a Zaqar queue with a random name.
Namespace: default
Parameters:
- kwargs: other optional parameters to create queues like
"metadata"
ZaqarBasic.producer_consumer [scenario]¶
Serial message producer/consumer.
Creates a Zaqar queue with random name, sends a set of messages and then retrieves an iterator containing those.
Namespace: default
Parameters:
- min_msg_count: min number of messages to be posted
- max_msg_count: max number of messages to be posted
- kwargs: other optional parameters to create queues like
"metadata"
Dummy.failure [scenario]¶
Raise errors in some iterations.
Namespace: default
Parameters:
- sleep: float iteration sleep time in seconds
- from_iteration: int iteration number which starts range
of failed iterations
- to_iteration: int iteration number which ends range of
failed iterations
- each: int cyclic number of iteration which actually raises
an error in selected range. For example, each=3 will raise error in each 3rd iteration.
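The interplay of from_iteration, to_iteration and each can be sketched as follows; this is an illustration of the documented behaviour, not Rally's actual implementation:

```python
def should_fail(iteration, from_iteration, to_iteration, each):
    """Illustrative sketch: return True if this iteration should raise.

    An error is raised only inside [from_iteration, to_iteration], and
    only on iterations whose number is a multiple of `each` (so each=3
    raises on every 3rd iteration in the range).
    """
    in_range = from_iteration <= iteration <= to_iteration
    return in_range and iteration % each == 0
```

For example, with from_iteration=1, to_iteration=10 and each=3, iterations 3, 6 and 9 fail while all others succeed.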
Dummy.dummy [scenario]¶
Do nothing and sleep for the given number of seconds (0 by default).
Dummy.dummy can be used for testing the performance of different ScenarioRunners and Rally's ability to store a large amount of results.
Namespace: default
Parameters:
- sleep: idle time of method (in seconds).
Dummy.dummy_exception [scenario]¶
Throw an exception.
Dummy.dummy_exception can be used to test whether exceptions are processed properly by ScenarioRunners and to benchmark and analyze Rally's result-storing process.
Namespace: default
Parameters:
- size_of_message: int size of the exception message
- sleep: idle time of method (in seconds).
- message: message of the exception
Dummy.dummy_exception_probability [scenario]¶
Throw an exception with given probability.
Dummy.dummy_exception_probability can be used to test if exceptions are processed properly by ScenarioRunners. This scenario will throw an exception sometimes, depending on the given exception probability.
Namespace: default
Parameters:
- exception_probability: Sets how likely it is that an exception
will be thrown. Float between 0 and 1: 0=never, 1=always.
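The probability semantics can be sketched as below; this is an illustration of the documented behaviour, not Rally's actual code:

```python
import random


def maybe_raise(exception_probability, rng=random):
    """Raise a dummy error with the given probability (0=never, 1=always)."""
    # rng.random() is uniform in [0, 1), so the comparison succeeds with
    # exactly the requested probability.
    if rng.random() < exception_probability:
        raise RuntimeError("dummy exception")
```

With exception_probability=0 the call never raises; with 1 it always does, since random() is strictly less than 1.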
Dummy.dummy_output [scenario]¶
Generate dummy output.
This scenario generates example of output data.
Namespace: default
Parameters:
- random_range: max int limit for generated random values
Dummy.dummy_random_fail_in_atomic [scenario]¶
Randomly fail atomic actions of a dummy scenario.
Can be used to test atomic actions failures processing.
Namespace: default
Parameters:
- exception_probability: Probability with which atomic actions
fail in this dummy scenario (0 <= p <= 1)
Dummy.dummy_random_action [scenario]¶
Sleep random time in dummy actions.
Namespace: default
Parameters:
- actions_num: int number of actions to generate
- sleep_min: minimal time to sleep, numeric seconds
- sleep_max: maximum time to sleep, numeric seconds
Dummy.dummy_timed_atomic_actions [scenario]¶
Run some sleepy atomic actions for SLA atomic action tests.
Namespace: default
Parameters:
- number_of_actions: int number of atomic actions to create
- sleep_factor: int multiplier for number of seconds to sleep
HttpRequests.check_request [scenario]¶
Standard way to benchmark web services.
This benchmark makes a request and checks the response against the expected status code.
Namespace: default
Parameters:
- url: url for the Request object
- method: method for the Request object
- status_code: expected response code
- kwargs: optional additional request parameters
Module: rally.plugins.common.scenarios.requests.http_requests
HttpRequests.check_random_request [scenario]¶
Benchmark a list of requests.
This scenario takes a random url from the list of requests and raises an exception if the response is not the expected one.
Namespace: default
Parameters:
- requests: List of request dicts
- status_code: expected response code; it is used only if a
status code is not specified in the request dict itself
Module: rally.plugins.common.scenarios.requests.http_requests
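A task for this scenario could look like the sketch below; the urls are placeholders, and the second request overrides the default status_code with its own:

```json
{
    "HttpRequests.check_random_request": [
        {
            "args": {
                "requests": [
                    {"url": "http://example.com", "method": "GET"},
                    {"url": "http://example.com/doc", "method": "HEAD",
                     "status_code": 301}
                ],
                "status_code": 200
            },
            "runner": {"type": "constant", "times": 20, "concurrency": 5}
        }
    ]
}
```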
Processing Output Charts¶
StackedArea [output chart]¶
Display results as stacked area.
This plugin processes additive data and displays it in HTML report as stacked area with X axis bound to iteration number. Complete output data is displayed as stacked area as well, without any processing.
Keys "description", "label" and "axis_label" are optional.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Additive data as stacked area",
"description": "Iterations trend for foo and bar",
"chart_plugin": "StackedArea",
"data": [["foo", 12], ["bar", 34]]},
complete={"title": "Complete data as stacked area",
"description": "Data is shown as stacked area, as-is",
"chart_plugin": "StackedArea",
"data": [["foo", [[0, 5], [1, 42], [2, 15], [3, 7]]],
["bar", [[0, 2], [1, 1.3], [2, 5], [3, 9]]]],
"label": "Y-axis label text",
"axis_label": "X-axis label text"})
Namespace: default
Module: rally.task.processing.charts
Lines [output chart]¶
Display results as generic chart with lines.
This plugin processes additive data and displays it in HTML report as linear chart with X axis bound to iteration number. Complete output data is displayed as linear chart as well, without any processing.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Additive data as stacked area",
"description": "Iterations trend for foo and bar",
"chart_plugin": "Lines",
"data": [["foo", 12], ["bar", 34]]},
complete={"title": "Complete data as stacked area",
"description": "Data is shown as stacked area, as-is",
"chart_plugin": "Lines",
"data": [["foo", [[0, 5], [1, 42], [2, 15], [3, 7]]],
["bar", [[0, 2], [1, 1.3], [2, 5], [3, 9]]]],
"label": "Y-axis label text",
"axis_label": "X-axis label text"})
Namespace: default
Module: rally.task.processing.charts
Pie [output chart]¶
Display results as pie, calculate average values for additive data.
This plugin processes additive data and calculates average values. Both additive and complete data are displayed in the HTML report as pie charts.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Additive output",
"description": ("Pie with average data "
"from all iterations values"),
"chart_plugin": "Pie",
"data": [["foo", 12], ["bar", 34], ["spam", 56]]},
complete={"title": "Complete output",
"description": "Displayed as a pie, as-is",
"chart_plugin": "Pie",
"data": [["foo", 12], ["bar", 34], ["spam", 56]]})
Namespace: default
Module: rally.task.processing.charts
Table [output chart]¶
Display complete output as a table; cannot be used for additive data.
Use this plugin for complete output data to display it in the HTML report as a table. This plugin cannot be used for additive data because it does not contain any processing logic.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
complete={"title": "Arbitrary Table",
"description": "Just show columns and rows as-is",
"chart_plugin": "Table",
"data": {"cols": ["foo", "bar", "spam"],
"rows": [["a row", 1, 2], ["b row", 3, 4],
["c row", 5, 6]]}})
Namespace: default
Module: rally.task.processing.charts
StatsTable [output chart]¶
Calculate statistics for additive data and display it as table.
This plugin processes additive data and composes statistics that are displayed as a table in the HTML report.
Examples of using this plugin in Scenario, for saving output data:
self.add_output(
additive={"title": "Statistics",
"description": ("Table with statistics generated "
"from all iterations values"),
"chart_plugin": "StatsTable",
"data": [["foo stat", 12], ["bar", 34], ["spam", 56]]})
Namespace: default
Module: rally.task.processing.charts
Deployment Engines¶
LxcEngine [engine]¶
Deploy with other engines in lxc containers.
Sample configuration:
{
"type": "LxcEngine",
"provider": {
"type": "DummyProvider",
"credentials": [{"user": "root", "host": "example.net"}]
},
"distribution": "ubuntu",
"release": "raring",
"tunnel_to": ["10.10.10.10", "10.10.10.11"],
"start_lxc_network": "10.1.1.0/24",
"container_name_prefix": "devstack-node",
"containers_per_host": 16,
"start_script": "~/start.sh",
"engine": { ... }
}
Namespace: default
Module: rally.deployment.engines.lxc
ExistingCloud [engine]¶
Just use an existing OpenStack deployment without deploying anything.
To use ExistingCloud, you should put credential information into the config:
{
"type": "ExistingCloud",
"auth_url": "http://localhost:5000/v2.0/",
"region_name": "RegionOne",
"endpoint_type": "public",
"admin": {
"username": "admin",
"password": "password",
"tenant_name": "demo"
},
"https_insecure": False,
"https_cacert": "",
}
Or, using keystone v3 API endpoint:
{
"type": "ExistingCloud",
"auth_url": "http://localhost:5000/v3/",
"region_name": "RegionOne",
"endpoint_type": "public",
"admin": {
"username": "admin",
"password": "admin",
"user_domain_name": "admin",
"project_name": "admin",
"project_domain_name": "admin",
},
"https_insecure": False,
"https_cacert": "",
}
To specify extra options you can use the special "extra" parameter:
{
"type": "ExistingCloud",
"auth_url": "http://localhost:5000/v2.0/",
"region_name": "RegionOne",
"endpoint_type": "public",
"admin": {
"username": "admin",
"password": "password",
"tenant_name": "demo"
},
"https_insecure": False,
"https_cacert": "",
"extra": {"some_var": "some_value"}
}
Namespace: default
MultihostEngine [engine]¶
Deploy multihost cloud with existing engines.
Sample configuration:
{
"type": "MultihostEngine",
"controller": {
"type": "DevstackEngine",
"provider": {
"type": "DummyProvider"
}
},
"nodes": [
{"type": "Engine1", "config": "Config1"},
{"type": "Engine2", "config": "Config2"},
{"type": "Engine3", "config": "Config3"},
]
}
If {controller_ip} is specified in configuration values, it will be replaced with the controller address taken from the credential returned by the controller engine:
...
"nodes": [
{
"type": "DevstackEngine",
"local_conf": {
"GLANCE_HOSTPORT": "{controller_ip}:9292",
...
Namespace: default
DevstackEngine [engine]¶
Deploy Devstack cloud.
Sample configuration:
{
"type": "DevstackEngine",
"devstack_repo": "https://example.com/devstack/",
"local_conf": {
"ADMIN_PASSWORD": "secret"
},
"provider": {
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "10.2.0.8"}]
}
}
Namespace: default
Deployment Server Providers¶
LxcProvider [server provider]¶
Provide lxc container(s) on given host.
Sample configuration:
{
"type": "LxcProvider",
"distribution": "ubuntu",
"start_lxc_network": "10.1.1.0/24",
"containers_per_host": 32,
"tunnel_to": ["10.10.10.10"],
"forward_ssh": false,
"container_name_prefix": "rally-multinode-02",
"host_provider": {
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "host.net"}]
}
}
Namespace: default
ExistingServers [server provider]¶
Just return endpoints from its own configuration.
Sample configuration:
{
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "localhost"}]
}
Namespace: default
OpenStackProvider [server provider]¶
Provide VMs using an existing OpenStack cloud.
Sample configuration:
{
"type": "OpenStackProvider",
"amount": 42,
"user": "admin",
"tenant": "admin",
"password": "secret",
"auth_url": "http://example.com/",
"flavor_id": 2,
"image": {
"checksum": "75846dd06e9fcfd2b184aba7fa2b2a8d",
"url": "http://example.com/disk1.img",
"name": "Ubuntu Precise(added by rally)",
"format": "qcow2",
"userdata": "disable_root: false"
},
"secgroup_name": "Rally"
}
Namespace: default
CobblerProvider [server provider]¶
Creates servers via PXE boot from given cobbler selector.
A Cobbler selector may contain a combination of fields to select a number of systems. It is the user's responsibility to provide a selector that actually matches something. Since Cobbler stores server passwords encrypted, the user needs to specify the password in the configuration. All selected servers must have the same password.
Sample configuration:
{
"type": "CobblerProvider",
"host": "172.29.74.8",
"user": "cobbler",
"password": "cobbler",
"system_password": "password"
"selector": {"profile": "cobbler_profile_name", "owners": "user1"}
}
Namespace: default
VirshProvider [server provider]¶
Create VMs from prebuilt templates.
Sample configuration:
{
"type": "VirshProvider",
"connection": "alex@performance-01",
"template_name": "stack-01-devstack-template",
"template_user": "ubuntu",
"template_password": "password"
}
where:
- connection - ssh connection to vms host
- template_name - vm image template
- template_user - vm user to launch devstack
- template_password - vm password to launch devstack
Namespace: default