HowTo add support for a new tool
First of all, you should start by reading the Rally Plugins page. Once you have learned the basics of the Rally plugin mechanism, let's move on to the Verifier interface itself.
Spec
All verifier plugins should inherit rally.verification.manager.VerifierManager and implement all of its abstract methods. You can find its interface below:
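For orientation, here is a minimal, hypothetical sketch of what such a plugin looks like; the plugin name "my-tool", the repository URL and the method bodies are placeholders rather than a real implementation:

from rally.verification import manager


@manager.configure("my-tool", default_repo="https://example.com/my-tool.git")
class MyToolManager(manager.VerifierManager):
    """Verifier plugin skeleton for a hypothetical tool called my-tool."""

    def run(self, context):
        # Launch the tool here and return an object with `totals` and
        # `tests` properties (see the description of `run` below).
        raise NotImplementedError

    def list_tests(self, pattern=""):
        # Return the names of the tests the tool is able to run.
        return []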
class rally.verification.manager.VerifierManager(verifier)
    Verifier base class.
    This class provides an interface for operating a specific tool.
configure(extra_options=None)
    Configure a verifier.
    Parameters:
        extra_options – a dictionary with external verifier-specific options for configuration
    Raises:
        NotImplementedError – This feature is verifier-specific, so you should override this method in your plugin if it supports configuration
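For a tool that does support configuration, configure() typically renders the tool's own config file. The sketch below is only illustrative: the config path, the option names and the use of configparser are assumptions about the tool, not Rally requirements.

import configparser
import os

from rally.verification import manager


class MyToolManager(manager.VerifierManager):
    # Registration decorator and other methods omitted; see the skeleton above.

    # Placeholder location of the tool's configuration file.
    CONF_PATH = os.path.expanduser("~/.my-tool/my-tool.conf")

    def configure(self, extra_options=None):
        conf = configparser.ConfigParser()
        # Start from the defaults the tool always needs, then let
        # user-supplied options override them (values are assumed to be strings).
        options = {"verbose": "true"}
        options.update(extra_options or {})
        conf["DEFAULT"] = options
        os.makedirs(os.path.dirname(self.CONF_PATH), exist_ok=True)
        with open(self.CONF_PATH, "w") as f:
            conf.write(f)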
extend_configuration(extra_options)
    Extend verifier configuration with new options.
    Parameters:
        extra_options – Options to be used for extending configuration
    Raises:
        NotImplementedError – This feature is verifier-specific, so you should override this method in your plugin if it supports configuration
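extend_configuration() is usually a thin wrapper that re-reads the existing configuration and merges the new options into it; a sketch under the same assumptions as the configure() example above:

import configparser

from rally.verification import manager


class MyToolManager(manager.VerifierManager):
    # Registration decorator and other methods omitted; see above.

    CONF_PATH = "/path/to/my-tool.conf"  # placeholder

    def extend_configuration(self, extra_options):
        conf = configparser.ConfigParser()
        conf.read(self.CONF_PATH)
        # Merge the new options into the existing "DEFAULT" section.
        for key, value in extra_options.items():
            conf["DEFAULT"][key] = str(value)
        with open(self.CONF_PATH, "w") as f:
            conf.write(f)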
install_extension(source, version=None, extra_settings=None)
    Install a verifier extension.
    Parameters:
        source – Path or URL to the repo to clone verifier extension from
        version – Branch, tag or commit ID to checkout before verifier extension installation
        extra_settings – Extra installation settings for verifier extension
    Raises:
        NotImplementedError – This feature is verifier-specific, so you should override this method in your plugin if it supports extensions
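A possible install_extension() for a pip-installable extension might clone the repository, check out the requested revision and install it; the destination directory and the use of git/pip here are assumptions about the tool, not part of the Rally interface:

import os
import subprocess

from rally.verification import manager


class MyToolManager(manager.VerifierManager):
    # Registration decorator and other methods omitted; see above.

    EXTENSIONS_DIR = "/path/to/my-tool/extensions"  # placeholder

    def install_extension(self, source, version=None, extra_settings=None):
        dest = os.path.join(self.EXTENSIONS_DIR,
                            os.path.basename(source.rstrip("/")))
        subprocess.check_call(["git", "clone", source, dest])
        if version:
            # Check out the requested branch, tag or commit.
            subprocess.check_call(["git", "checkout", version], cwd=dest)
        # Install the extension into the tool's environment.
        subprocess.check_call(["pip", "install", dest])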
list_tests(pattern='')
    List all verifier tests.
    Parameters:
        pattern – Filter tests by given pattern
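If the tool has its own CLI for test discovery, list_tests() can simply shell out to it and filter the output; the fake-tool list-tests command below is hypothetical:

import re
import subprocess

from rally.verification import manager


class MyToolManager(manager.VerifierManager):
    # Registration decorator and other methods omitted; see above.

    def list_tests(self, pattern=""):
        # Ask the (hypothetical) tool for all test names, one per line.
        output = subprocess.check_output(["fake-tool", "list-tests"],
                                         universal_newlines=True)
        names = [line.strip() for line in output.splitlines() if line.strip()]
        # Keep only the tests matching the given pattern.
        return [name for name in names if re.match(pattern, name)]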
override_configuration(new_configuration)
    Override verifier configuration.
    Parameters:
        new_configuration – Content which should be used while overriding existing configuration
    Raises:
        NotImplementedError – This feature is verifier-specific, so you should override this method in your plugin if it supports configuration
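override_configuration() typically just replaces the configuration file contents wholesale; a minimal sketch with a placeholder path:

from rally.verification import manager


class MyToolManager(manager.VerifierManager):
    # Registration decorator and other methods omitted; see above.

    CONF_PATH = "/path/to/my-tool.conf"  # placeholder

    def override_configuration(self, new_configuration):
        # Replace the existing configuration with the given content as-is.
        with open(self.CONF_PATH, "w") as f:
            f.write(new_configuration)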
run(context)
    Run verifier tests.
    The Verification Component API expects this method to return an object. There is no special class; you can implement it however you like, but it must have the following properties:

    <object>.totals = {
        "tests_count": <total tests count>,
        "tests_duration": <total tests duration>,
        "failures": <total count of failed tests>,
        "skipped": <total count of skipped tests>,
        "success": <total count of successful tests>,
        "unexpected_success": <total count of unexpected successful tests>,
        "expected_failures": <total count of expected failed tests>
    }

    <object>.tests = {
        <test_id>: {
            "status": <test status>,
            "name": <test name>,
            "duration": <test duration>,
            "reason": <reason>,        # optional
            "traceback": <traceback>   # optional
        },
        ...
    }
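Any object exposing these two attributes will do; for example, a plain types.SimpleNamespace satisfies the contract just as well as the small Result wrapper class used in the example below (the test name here is made up):

import types

result = types.SimpleNamespace(
    totals={"tests_count": 1, "tests_duration": 1.5, "failures": 0,
            "skipped": 0, "success": 1, "unexpected_success": 0,
            "expected_failures": 0},
    tests={"my_tool.tests.SmokeTestCase.test_ping": {
        "status": "success",
        "name": "my_tool.tests.SmokeTestCase.test_ping",
        "duration": "1.5"}})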
uninstall(full=False)
    Uninstall a verifier.
    Parameters:
        full – If False (default behaviour), only deployment-specific data will be removed
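A hypothetical uninstall() might remove only per-deployment artifacts by default and the whole installation when full=True; the directories below are placeholders:

import shutil

from rally.verification import manager


class MyToolManager(manager.VerifierManager):
    # Registration decorator and other methods omitted; see above.

    def uninstall(self, full=False):
        # Remove per-deployment data (generated configs, cached results, ...).
        shutil.rmtree("/path/to/my-tool/deployment-data", ignore_errors=True)
        if full:
            # Remove the whole tool installation as well.
            shutil.rmtree("/path/to/my-tool", ignore_errors=True)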
Example of Fake Verifier Manager
FakeTool is a tool which requires neither configuration nor installation.
import random
import re

from rally.verification import manager


# Verification component expects that method "run" of verifier returns
# object. Class Result is a simple wrapper for two expected properties.
class Result(object):
    def __init__(self, totals, tests):
        self.totals = totals
        self.tests = tests


@manager.configure("fake-tool", default_repo="https://example.com")
class FakeTool(manager.VerifierManager):
    """Fake Tool \o/"""

    TESTS = ["fake_tool.tests.bar.FatalityTestCase.test_one",
             "fake_tool.tests.bar.FatalityTestCase.test_two",
             "fake_tool.tests.bar.FatalityTestCase.test_three",
             "fake_tool.tests.bar.FatalityTestCase.test_four",
             "fake_tool.tests.foo.MegaTestCase.test_one",
             "fake_tool.tests.foo.MegaTestCase.test_two",
             "fake_tool.tests.foo.MegaTestCase.test_three",
             "fake_tool.tests.foo.MegaTestCase.test_four"]

    # This fake verifier doesn't launch anything, just returns random
    # results, so let's override parent methods to avoid redundant
    # cloning of the repo, checking packages and so on.

    def install(self):
        pass

    def uninstall(self, full=False):
        pass

    # Each tool which supports configuration has its own mechanism
    # for that task. Writing a unified method is impossible. That is why
    # `VerifierManager` implements the case when the tool doesn't
    # need (doesn't support) configuration at all. Such behaviour is
    # ideal for FakeTool, since we do not need to change anything :)

    # Let's implement method `run` to return random data.
    def run(self, context):
        totals = {"tests_count": len(self.TESTS),
                  "tests_duration": 0,
                  "failures": 0,
                  "skipped": 0,
                  "success": 0,
                  "unexpected_success": 0,
                  "expected_failures": 0}
        tests = {}
        for name in self.TESTS:
            duration = random.randint(0, 10000)/100.
            totals["tests_duration"] += duration
            test = {"name": name,
                    "status": random.choice(["success", "fail"]),
                    "duration": "%s" % duration}
            if test["status"] == "fail":
                test["traceback"] = "Ooooppps"
                totals["failures"] += 1
            else:
                totals["success"] += 1
            tests[name] = test
        return Result(totals, tests=tests)

    def list_tests(self, pattern=""):
        return [name for name in self.TESTS if re.match(pattern, name)]
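Once a module with such a plugin is placed where Rally discovers plugins (for example, under ~/.rally/plugins), the new verifier type becomes available to the Verification Component; with the example above you should be able to create a verifier of type "fake-tool", e.g. with something like rally verify create-verifier --type fake-tool --name my-fake-tool.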