tmt.steps.execute package

Submodules

tmt.steps.execute.internal module

class tmt.steps.execute.internal.ExecuteInternal(**kwargs: Any)

Bases: ExecutePlugin[ExecuteInternalData]

Use the internal tmt executor to execute tests.

The internal tmt executor runs tests on the guest one by one, directly from the tmt code, showing testing progress and supporting interactive debugging. This is the default execute step implementation. The test result is based on the script exit code (for shell tests) or on the results file (for beakerlib tests).

The executor provides the following shell scripts which can be used by the tests for certain operations.

tmt-file-submit - archive the given file in the tmt test data directory. See the Save a log file section for more details.

tmt-reboot - soft reboot the machine from inside the test. After reboot the execution starts from the test which rebooted the machine. Use tmt-reboot -s for systemd soft-reboot which preserves the kernel and hardware state while restarting userspace only. An environment variable TMT_REBOOT_COUNT is provided which the test can use to handle the reboot. The variable holds the number of reboots performed by the test. For more information see the Reboot during test feature documentation.

tmt-report-result - generate a result report file from inside the test. Can be called multiple times by the test. The generated report file will be overwritten if a higher hierarchical result is reported by the test. The hierarchy is as follows: SKIP, PASS, WARN, FAIL. For more information see the Report test result feature documentation.

tmt-abort - generate an abort file from inside the test. This will set the current test result to failed and terminate the execution of subsequent tests. For more information see the Abort test execution feature documentation.

The scripts are hosted by default in the /usr/local/bin directory, except for guests using rpm-ostree, where /var/lib/tmt/scripts is used. The directory can be forced using the TMT_SCRIPTS_DIR environment variable. Note that for guests using rpm-ostree, the directory is added to executable paths using the system-wide /etc/profile.d/tmt.sh profile script.

Warning

Please be aware that for guests using rpm-ostree the provided scripts will only be available in a shell that loads the profile scripts. This is the default for bash-like shells, but might not work for others.
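
For illustration, a minimal sketch of an execute step whose inline script relies on the reboot helper and the TMT_REBOOT_COUNT variable. The plan keys are standard, but the script body itself is only an assumption of typical usage:

execute:
    how: tmt
    script: |
        # First pass: set things up and ask tmt to reboot the guest
        if [ "$TMT_REBOOT_COUNT" -eq 0 ]; then
            echo "Setup done, rebooting to apply changes"
            tmt-reboot
        fi
        # After the reboot the whole script runs again with the counter
        # increased, so the setup branch above is skipped
        echo "Running after $TMT_REBOOT_COUNT reboot(s)"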

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
data: ExecuteInternalData
essential_requires() list[DependencySimple | DependencyFmfId | DependencyFile]

Collect all essential requirements of the plugin.

Essential requirements of a plugin are necessary for the plugin to perform its basic functionality.

Returns:

a list of requirements.

execute(*, invocation: TestInvocation, extra_environment: Environment | None = None, logger: Logger) list[Result]

Run test on the guest

go(*, guest: Guest, environment: Environment | None = None, logger: Logger) None

Execute available tests

results() list[Result]

Return test results

class tmt.steps.execute.internal.ExecuteInternalData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, where: list[str] = <factory>, duration: str = '1h', ignore_duration: bool = False, exit_first: bool = False, script: list[tmt.utils.ShellScript] = <factory>, interactive: bool = False, restraint_compatible: bool = False, no_progress_bar: bool = False)

Bases: ExecuteStepData

interactive: bool = False
no_progress_bar: bool = False
restraint_compatible: bool = False
script: list[ShellScript]
to_spec() dict[str, Any]

Convert to a form suitable for saving in a specification file
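
For illustration, a sketch of how some of these step data fields might appear as keys in a plan. Underscores become dashes in plan files; treating restraint-compatible and duration as plan keys follows this convention and the fields above, but is an assumption here:

execute:
    how: tmt
    exit-first: true
    duration: 30m
    restraint-compatible: false
    script:
      - ./run-smoke.sh
      - ./run-regression.sh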

tmt.steps.execute.internal.TEST_INNER_WRAPPER_FILENAME_TEMPLATE = 'tmt-test-wrapper-inner.sh-{{ INVOCATION.test.pathless_safe_name }}-{{ INVOCATION.test.serial_number }}'

A template for the inner test wrapper filename.

Note

It is passed to tmt.utils.safe_filename(), but also includes the test name and serial number to make it unique even among all test wrappers. See #2997 for the issue motivating the inclusion; preventing accidental reuse seems to be a good idea in general.

tmt.steps.execute.internal.TEST_OUTER_WRAPPER_FILENAME_TEMPLATE = 'tmt-test-wrapper-outer.sh-{{ INVOCATION.test.pathless_safe_name }}-{{ INVOCATION.test.serial_number }}'

A template for the outer test wrapper filename.

Note

It is passed to tmt.utils.safe_filename(), but also includes the test name and serial number to make it unique even among all test wrappers. See #2997 for the issue motivating the inclusion; preventing accidental reuse seems to be a good idea in general.

class tmt.steps.execute.internal.UpdatableMessage(plugin: ExecuteInternal)

Bases: UpdatableMessage

Updatable message suitable for plan progress reporting.

Based on tmt.utils.UpdatableMessage, it simplifies reporting of plan progress, namely by extracting the necessary setup parameters from the plugin.

Updatable message suitable for progress-bar-like reporting.

with UpdatableMessage('foo') as message:
    while ...:
        ...

        # check state of remote request, and update message
        state = remote_api.check()
        message.update(state)
Parameters:
  • key – a string to use as the left-hand part of logged message.

  • enabled – if unset, no output would be performed.

  • indent_level – desired indentation level.

  • key_color – optional color to apply to key.

  • default_color – optional color to apply to value when update() is called with color left out.

  • clear_on_exit – if set, the message area would be cleared when leaving the progress bar when used as a context manager.

update(progress: str, test_name: str) None

Update progress message.

Parameters:
  • progress – new progress to display in the message area.

  • test_name – name of the test currently being executed.

tmt.steps.execute.upgrade module

class tmt.steps.execute.upgrade.ExecuteUpgrade(**kwargs: Any)

Bases: ExecuteInternal

Perform system upgrade during testing.

In order to enable developing tests for upgrade testing, we need to provide an easy way to execute these tests. This does not cover unit tests for individual actors but rather system tests which verify the whole upgrade story.

The upgrade executor runs the discovered tests (using the internal executor), then performs a set of upgrade tasks from a remote repository, and finally, re-runs the tests on the upgraded guest.

The IN_PLACE_UPGRADE environment variable is set during the test execution to differentiate between the stages of the test. It is set to old during the first execution and new during the second execution. Test names are prefixed with this value to make the names unique. Based on this variable, the test can perform appropriate actions.

  • old: setup, test

  • new: test, cleanup

  • without: setup, test, cleanup

The upgrade tasks performing the actual system upgrade are taken from a remote repository (specified by the url key) based on an upgrade path (e.g. fedora35to36) or other filters (e.g. specified by the filter key). If both upgrade-path and extra filters are specified, the discover keys in the remote upgrade path plan are overridden by the filters specified in the local plan.

The upgrade path must correspond to a plan name in the remote repository whose discover step selects the tests (upgrade tasks) performing the upgrade. Currently, the upgrade tasks in the remote repository can be selected using both the fmf and shell discover methods. If the url is not provided, the upgrade path and upgrade tasks are taken from the current repository. The supported keys in discover are:

  • ref

  • filter

  • exclude

  • tests

  • test

The environment variables defined in the remote upgrade path plan are passed to the upgrade tasks when they are executed. An example of an upgrade path plan (in the remote repository):

discover: # Selects appropriate upgrade tasks (L1 tests)
    how: fmf
    filter: "tag:fedora"
environment: # This is passed to upgrade tasks
    SOURCE: 35
    TARGET: 36
execute:
    how: tmt

If no upgrade path is specified in the plan, the tests (upgrade tasks) are selected based on the configuration of the upgrade plugin (e.g. based on the filter in its configuration).

If these two possible ways of specifying upgrade tasks are combined, the remote discover plan is used but its options are overridden with the values specified locally.
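
For illustration, a sketch of a local plan which points to a remote upgrade path but overrides some of its discover keys; the repository URL matches the examples below, while the ref and exclude values are purely illustrative:

execute:
    how: upgrade
    url: https://github.com/teemtee/upgrade
    upgrade-path: /paths/fedora35to36
    ref: main
    exclude:
      - slow-upgrade-task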

The same options, config keys, and values can be used as in the internal executor.

Minimal execute config example with an upgrade path:

execute:
    how: upgrade
    url: https://github.com/teemtee/upgrade
    upgrade-path: /paths/fedora35to36

Execute config example without an upgrade path:

execute:
    how: upgrade
    url: https://github.com/teemtee/upgrade
    filter: "tag:fedora"

# A simple beakerlib test using the $IN_PLACE_UPGRADE variable
. /usr/share/beakerlib/beakerlib.sh || exit 1

VENV_PATH=/var/tmp/venv_test

rlJournalStart
    # Perform the setup only for the old distro
    if [[ "$IN_PLACE_UPGRADE" !=  "new" ]]; then
        rlPhaseStartSetup
            rlRun "python3.9 -m venv $VENV_PATH"
            rlRun "$VENV_PATH/bin/pip install pyjokes"
        rlPhaseEnd
    fi

    # Execute the test for both old & new distro
    rlPhaseStartTest
        rlAssertExists "$VENV_PATH/bin/pyjoke"
        rlRun "$VENV_PATH/bin/pyjoke"
    rlPhaseEnd

    # Skip the cleanup phase when on the old distro
    if [[ "$IN_PLACE_UPGRADE" !=  "old" ]]; then
        rlPhaseStartCleanup
            rlRun "rm -rf $VENV_PATH"
        rlPhaseEnd
    fi
rlJournalEnd

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
data: ExecuteUpgradeData
property discover: Discover | DiscoverFmf

Return discover plugin instance

go(*, guest: Guest, environment: Environment | None = None, logger: Logger) None

Execute available tests

class tmt.steps.execute.upgrade.ExecuteUpgradeData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, where: list[str] = <factory>, duration: str = '1h', ignore_duration: bool = False, exit_first: bool = False, script: list[tmt.utils.ShellScript] = <factory>, interactive: bool = False, restraint_compatible: bool = False, no_progress_bar: bool = False, url: str | None = None, upgrade_path: str | None = None, skip_tests_before: bool = False, skip_tests_after: bool = False, ref: str | None = None, test: list[str] = <factory>, filter: list[str] = <factory>, exclude: list[str] = <factory>)

Bases: ExecuteInternalData

exclude: list[str]
filter: list[str]
ref: str | None = None
skip_tests_after: bool = False
skip_tests_before: bool = False
test: list[str]
upgrade_path: str | None = None
url: str | None = None
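
For illustration, a sketch of an upgrade config using the skip keys above to run only the upgrade tasks and the tests on the upgraded system. Dashes replace underscores in plan files; the exact plan spelling of these keys is an assumption based on that convention:

execute:
    how: upgrade
    url: https://github.com/teemtee/upgrade
    upgrade-path: /paths/fedora35to36
    skip-tests-before: true
    skip-tests-after: false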

Module contents

class tmt.steps.execute.Execute(*, plan: Plan, data: _RawStepData | list[_RawStepData], logger: Logger)

Bases: Step

Run tests using the specified executor

Initialize execute step data

DEFAULT_HOW: str = 'tmt'
cli_invocation: 'tmt.cli.CliInvocation' | None = None
cli_invocations: list['tmt.cli.CliInvocation'] = []
create_results(tests: list[TestOrigin]) list[Result]

Get all available results from tests. For tests not yet executed, create a pending result.

go(force: bool = False) None

Execute tests

load() None

Load test results

results() list[Result]

Results from executed tests

Return a list with test results according to the spec: https://tmt.readthedocs.io/en/latest/spec/plans.html#execute

results_for_tests(tests: list[TestOrigin]) list[tuple[Result | None, TestOrigin | None]]

Collect results and corresponding tests.

Returns:

a list of result and test pairs:

  • if no test is found for a result, e.g. when results were loaded from storage but tests were not, None represents the missing test: (result, None).

  • if there is no result for a test, e.g. when the test was not executed, None represents the missing result: (None, test).

save() None

Save test results to the workdir

summary() None

Give a concise summary of the execution

update_results(results: list[Result]) None

Update existing results with new results.

wake() None

Wake up the step (process workdir and command line)

class tmt.steps.execute.ExecutePlugin(*, step: Step, data: ExecuteStepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: Plugin[ExecuteStepDataT, None]

Common parent of execute plugins

Store plugin name, data and parent step

classmethod base_command(usage: str, method_class: type[Command] | None = None) Command

Create base click command (common for all execute plugins)

cli_invocation: 'tmt.cli.CliInvocation' | None = None
property discover: Discover

Return discover plugin instance

discover_phase: str | None = None

If set, plugin should run tests only from this discover phase.

extract_custom_results(invocation: TestInvocation) list[Result]

Extract results from the file generated by the test itself

extract_results(invocation: TestInvocation, logger: Logger) list[Result]

Check the test result

extract_tmt_report_results(invocation: TestInvocation) list[Result]

Extract results from a file generated by tmt-report-result script

extract_tmt_report_results_restraint(invocation: TestInvocation, default_log: Path) list[Result]

Extract results from the file generated by tmt-report-result script.

Special, restraint-like handling is used to convert each recorded result into a standalone result.

go(*, guest: Guest, environment: Environment | None = None, logger: Logger) None

Perform actions shared among plugins when beginning their tasks

how: str = 'tmt'
prepare_tests(guest: Guest, logger: Logger) list[TestInvocation]

Prepare discovered tests for testing

Check which tests have been discovered, prepare the aggregated metadata for each test in a file under the test data directory, and finally return a list of discovered tests.

abstractmethod results() list[Result]

Return test results

run_checks_after_test(*, invocation: TestInvocation, environment: Environment | None = None, logger: Logger) list[CheckResult]
run_checks_before_test(*, invocation: TestInvocation, environment: Environment | None = None, logger: Logger) list[CheckResult]
run_internal_checks(*, invocation: TestInvocation, environment: Environment | None = None, logger: Logger) list[CheckResult]
timeout_hint(invocation: TestInvocation) None

Append a duration increase hint to the test output

class tmt.steps.execute.ExecuteStepData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, where: list[str] = <factory>, duration: str = '1h', ignore_duration: bool = False, exit_first: bool = False)

Bases: WhereableStepData, StepData

duration: str = '1h'
exit_first: bool = False
ignore_duration: bool = False
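
For illustration, a sketch of the common execute keys above in a plan; ignore-duration and the when/where keys are spelled per the usual plan convention, with where targeting a guest role in a multihost plan. The exact values are illustrative only:

execute:
    how: tmt
    ignore-duration: true
    when:
      - distro == fedora
    where:
      - server
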
class tmt.steps.execute.ResultCollection(invocation: tmt.steps.execute.TestInvocation, filepaths: list[tmt._compat.pathlib.Path], file_exists: bool = False, results: list[typing.Any] = <factory>)

Bases: object

Collection of raw results loaded from a file

file_exists: bool = False
filepaths: list[Path]
invocation: TestInvocation
results: list[Any]
validate() None

Validate raw collected results against the result JSON schema.

Report found errors as warnings via invocation logger.

class tmt.steps.execute.TestInvocation(logger: tmt.log.Logger, phase: tmt.steps.execute.ExecutePlugin[typing.Any], test: tmt.base.Test, guest: tmt.steps.provision.Guest, process: subprocess.Popen[bytes] | None = None, process_lock: _thread.lock = <factory>, on_interrupt_callback_token: int | None = None, results: list[tmt.result.Result] = <factory>, check_results: list[tmt.result.CheckResult] = <factory>, check_data: dict[str, typing.Any] = <factory>, return_code: int | None = None, start_time: str | None = None, end_time: str | None = None, real_duration: str | None = None, exceptions: list[Exception] = <factory>)

Bases: HasStepWorkdir

A bundle describing one test invocation.

Describes a test invoked on a particular guest under the supervision of an execute plugin phase.

property abort: AbortContext

Abort context for this invocation.

check_data: dict[str, Any]
property check_files_path: Path

Construct a directory path for check files needed by tmt

check_results: list[CheckResult]
end_time: str | None = None
exceptions: list[Exception]

List of exceptions encountered by the invocation.

guest: Guest
invoke_test(command: ShellScript, *, cwd: Path, env: Environment, log: LoggingFunction, interactive: bool, timeout: int | None) CommandOutput

Start the command which represents the test in this invocation.

Parameters:
  • cwd – if set, command would be executed in the given directory, otherwise the current working directory is used.

  • env – environment variables to combine with the current environment before running the command.

  • interactive – if set, the command would be executed in an interactive manner, i.e. with its standard input and output connected to the terminal for live interaction with the user.

  • timeout – if set, command would be interrupted, if still running, after this many seconds.

  • log – a logging function to use for logging of command output. By default, logger.debug is used.

Returns:

command output.

property is_guest_healthy: bool

Whether the guest is deemed to be healthy and responsive.

Note

The answer is deduced from various flags set by the execute code while observing the test; no additional checks are performed.

logger: Logger
on_interrupt_callback_token: int | None = None

If set, there is a callback registered through tmt.utils.signals.add_callback() which would be called when tmt gets terminated.

property path: Path

Absolute path to invocation directory

phase: ExecutePlugin[Any]
property pidfile: PidFileContext

Pidfile context for this invocation.

process: Popen[bytes] | None = None

Process running the test. Which binary it is depends on the guest implementation and the test; it may be, for example, a shell process, an SSH process, or a podman process.

process_lock: lock
real_duration: str | None = None
property reboot: RebootContext

Reboot context for this invocation.

property relative_path: Path

Invocation directory path relative to step workdir

property relative_test_data_path: Path

Test data path relative to step workdir

property restart: RestartContext

Restart context for this invocation.

property restraint: RestraintContext

Restraint context for this invocation.

results: list[Result]
return_code: int | None = None
start_time: str | None = None
property step_workdir: Path

Path to a step workdir.

Raises:

GeneralError – when there is no step workdir yet.

property submission_log_path: Path

A path to the log containing the paths of submitted files

property submitted_files: list[Path]

Paths of all files submitted during test

terminate_process(signal: Signals = Signals.SIGTERM, logger: Logger | None = None) None

Terminate the invocation process.

Warning

This method should be used carefully. The process running the invocation's test has been started by a part of the tmt code which is responsible for its well-being. Unless you have a really good reason to do so, doing things behind tmt's back may lead to unexpected results.

Parameters:
  • signal – signal to send to the invocation process.

  • logger – logger to use for logging.

test: Test
property test_data_path: Path

Absolute path to test data directory