tmt.steps.provision package

Submodules

tmt.steps.provision.artemis module

class tmt.steps.provision.artemis.ArtemisAPI(guest: GuestArtemis)

Bases: object

create(path: str, data: dict[str, Any], request_kwargs: dict[str, Any] | None = None) Response

Create - or request creation of - a resource.

Parameters:
  • path – API path to contact.

  • data – optional key/value data to send with the request.

  • request_kwargs – optional request options, as supported by requests library.

delete(path: str, request_kwargs: dict[str, Any] | None = None) Response

Delete - or request removal of - a resource.

Parameters:
  • path – API path to contact.

  • request_kwargs – optional request options, as supported by requests library.

inspect(path: str, params: dict[str, Any] | None = None, request_kwargs: dict[str, Any] | None = None) Response

Inspect a resource.

Parameters:
  • path – API path to contact.

  • params – optional key/value query parameters.

  • request_kwargs – optional request options, as supported by requests library.

query(path: str, method: str = 'get', request_kwargs: dict[str, Any] | None = None) Response

Base helper for Artemis API queries.

Trivial dispatcher per method, returning retrieved response.

Parameters:
  • path – API path to contact.

  • method – HTTP method to use.

  • request_kwargs – optional request options, as supported by requests library.
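
For illustration, the dispatch described above could be sketched as follows. This is a hedged sketch using the requests library directly, not the actual tmt implementation; the base_url parameter is a hypothetical stand-in for the configured Artemis API URL.

import requests

def query(base_url: str, path: str, method: str = 'get',
          request_kwargs: dict | None = None) -> requests.Response:
    # Pick the requests helper matching the HTTP method and pass
    # through any extra request options (timeout, headers, ...).
    handler = getattr(requests, method.lower(), None)
    if handler is None:
        raise ValueError(f"Unsupported HTTP method '{method}'.")
    url = f"{base_url.rstrip('/')}/{path.lstrip('/')}"
    return handler(url, **(request_kwargs or {}))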

class tmt.steps.provision.artemis.ArtemisGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, api_url: str = 'http://127.0.0.1:8001', api_version: str = '0.0.74', arch: str = 'x86_64', image: str | None = None, pool: str | None = None, priority_group: str = 'default-priority', keyname: str = 'master-key', user_data: dict[str, str] = <factory>, kickstart: dict[str, str] = <factory>, log_type: list[str] = <factory>, guestname: str | None = None, provision_timeout: int = 600, provision_tick: int = 60, api_timeout: int = 10, api_retries: int = 10, api_retry_backoff_factor: int = 1, watchdog_dispatch_delay: int | None = None, watchdog_period_delay: int | None = None, skip_prepare_verify_ssh: bool = False, post_install_script: str | None = None)

Bases: GuestSshData

api_retries: int = 10
api_retry_backoff_factor: int = 1
api_timeout: int = 10
api_url: str = 'http://127.0.0.1:8001'
api_version: str = '0.0.74'
arch: str = 'x86_64'
guestname: str | None = None
image: str | None = None
keyname: str = 'master-key'
kickstart: dict[str, str]
log_type: list[str]
pool: str | None = None
post_install_script: str | None = None
priority_group: str = 'default-priority'
provision_tick: int = 60
provision_timeout: int = 600
skip_prepare_verify_ssh: bool = False
user_data: dict[str, str]
watchdog_dispatch_delay: int | None = None
watchdog_period_delay: int | None = None
exception tmt.steps.provision.artemis.ArtemisProvisionError(message: str, response: Response | None = None, request_data: dict[str, Any] | None = None, *args: Any, **kwargs: Any)

Bases: ProvisionError

Artemis provisioning error.

For some provisioning errors, we can provide more context.

General error.

Parameters:
  • message – error message.

  • causes – optional list of exceptions that caused this one. Since raise ... from ... allows only for a single cause, and some of our workflows may raise exceptions triggered by more than one exception, we need a mechanism for storing them. Our reporting will honor this field, and report causes the same way as __cause__.
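
For illustration only, a hedged sketch of collecting several underlying failures and attaching them via the causes parameter documented above. The submit callables and the error message are hypothetical, and the sketch assumes the causes keyword is accepted by the exception as described.

from tmt.steps.provision.artemis import ArtemisProvisionError

def submit_all(submitters):
    # Collect every failure instead of stopping at the first one, then
    # report them all together via the documented 'causes' parameter.
    failures = []
    for submit in submitters:
        try:
            submit()
        except Exception as error:
            failures.append(error)
    if failures:
        raise ArtemisProvisionError('Some guest requests failed.', causes=failures)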

class tmt.steps.provision.artemis.GuestArtemis(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)

Bases: GuestSsh

Artemis guest instance

The following keys are expected in the ‘data’ dictionary:

Initialize guest data

property api: ArtemisAPI
api_retries: int
api_retry_backoff_factor: int
api_timeout: int
api_url: str
api_version: str
arch: str
cli_invocation: 'tmt.cli.CliInvocation' | None = None
guestname: str | None
image: str
property is_ready: bool

Detect whether the guest is ready.

keyname: str
kickstart: dict[str, str]
log_type: list[str]
pool: str | None
post_install_script: str | None
priority_group: str
provision_tick: int
provision_timeout: int
reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

remove() None

Remove the guest

skip_prepare_verify_ssh: bool
start() None

Start the guest

Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.

user_data: dict[str, str]
watchdog_dispatch_delay: int | None
watchdog_period_delay: int | None
class tmt.steps.provision.artemis.GuestInspectType

Bases: TypedDict

address: str | None
state: str
class tmt.steps.provision.artemis.GuestLogArtemis(name: str, guest: tmt.steps.provision.artemis.GuestArtemis, log_type: str)

Bases: GuestLog

fetch(logger: Logger) str | None

Fetch and return content of a log.

Returns:

content of the log, or None if the log cannot be retrieved.

guest: GuestArtemis
log_type: str
class tmt.steps.provision.artemis.GuestLogBlobType

Bases: TypedDict

content: str
ctime: str
class tmt.steps.provision.artemis.ProvisionArtemis(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[ProvisionArtemisData]

Provision guest using Artemis backend.

Reserve a machine using the Artemis service. Users can specify many requirements, mostly regarding the desired OS, RAM, disk size and more. Most of the HW specifications defined in the Hardware specification are supported, including the kickstart.

Artemis takes machines from AWS, OpenStack, Beaker or Azure. By default, Artemis selects a suitable cloud provider to the best of its abilities, based on the required specification. However, it is possible to select the desired cloud provider explicitly using the pool keyword.

Artemis project: https://gitlab.com/testing-farm/artemis

Minimal configuration could look like this:

provision:
    how: artemis
    image: Fedora
    api-url: https://your-artemis.com/

Note

When used together with the Testing Farm infrastructure, some of the options from the full configuration example below will be filled in for you by the Testing Farm service.

Note

The actual value of image depends on which images - or “composes”, as Artemis calls them - the Artemis deployment supports and can deliver.

Note

The api-url can also be given via the TMT_PLUGIN_PROVISION_ARTEMIS_API_URL environment variable.

Full configuration example:

provision:
    how: artemis

    # Artemis API
    api-url: https://your-artemis.com/
    api-version: 0.0.32

    # Mandatory environment properties
    image: Fedora

    # Optional environment properties
    arch: aarch64
    pool: optional-pool-name

    # Provisioning process control (optional)
    priority-group: custom-priority-group
    keyname: custom-SSH-key-name

    # Labels to be attached to guest request (optional)
    user-data:
        foo: bar

    # Timeouts and deadlines (optional)
    provision-timeout: 3600
    provision-tick: 10
    api-timeout: 600
    api-retries: 5
    api-retry-backoff-factor: 1

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
go(*, logger: Logger | None = None) None

Provision the guest

class tmt.steps.provision.artemis.ProvisionArtemisData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, api_url: str = 'http://127.0.0.1:8001', api_version: str = '0.0.74', arch: str = 'x86_64', image: str | None = None, pool: str | None = None, priority_group: str = 'default-priority', keyname: str = 'master-key', user_data: dict[str, str] = <factory>, kickstart: dict[str, str] = <factory>, log_type: list[str] = <factory>, guestname: str | None = None, provision_timeout: int = 600, provision_tick: int = 60, api_timeout: int = 10, api_retries: int = 10, api_retry_backoff_factor: int = 1, watchdog_dispatch_delay: int | None = None, watchdog_period_delay: int | None = None, skip_prepare_verify_ssh: bool = False, post_install_script: str | None = None)

Bases: ArtemisGuestData, ProvisionStepData

tmt.steps.provision.bootc module

class tmt.steps.provision.bootc.BootcData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: ForwardRef('Size') | None = None, disk: ForwardRef('Size') | None = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: str | None = None, instance_name: str | None = None, stop_retries: int = 10, stop_retry_delay: int = 1, container_file: str | None = None, container_file_workdir: str = '.', container_image: str | None = None, add_tmt_dependencies: bool = True, image_builder: str = 'quay.io/centos-bootc/bootc-image-builder:latest', rootfs: str = 'xfs', build_disk_image_only: bool = False)

Bases: ProvisionTestcloudData

add_tmt_dependencies: bool = True
build_disk_image_only: bool = False
container_file: str | None = None
container_file_workdir: str = '.'
container_image: str | None = None
image_builder: str = 'quay.io/centos-bootc/bootc-image-builder:latest'
rootfs: str = 'xfs'
class tmt.steps.provision.bootc.GuestBootc(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger, containerimage: str | None, rootless: bool)

Bases: GuestTestcloud

Initialize guest data

cli_invocation: 'tmt.cli.CliInvocation' | None = None
containerimage: str | None
remove() None

Remove the guest (disk cleanup)

class tmt.steps.provision.bootc.ProvisionBootc(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[BootcData]

Provision a local virtual machine using a bootc container image

Minimal config which uses the CentOS Stream 9 bootc image:

provision:
    how: bootc
    container-image: quay.io/centos-bootc/centos-bootc:stream9
    rootfs: xfs

Here’s a config example using a Containerfile:

provision:
    how: bootc
    container-file: "./my-custom-image.containerfile"
    container-file-workdir: .
    image-builder: quay.io/centos-bootc/bootc-image-builder:stream9
    rootfs: ext4
    disk: 100

Another config example using an image that already includes tmt dependencies:

provision:
    how: bootc
    add-tmt-dependencies: false
    container-image: localhost/my-image-with-deps
    rootfs: btrfs

This plugin is an extension of the virtual.testcloud plugin. Essentially, it takes a container image as input, builds a bootc disk image from the container image, then uses the virtual.testcloud plugin to create a virtual machine using the bootc disk image.

The bootc disk creation requires running podman as root. The plugin will automatically check if the current podman connection is rootless. If it is, a podman machine will be spun up and used to build the bootc disk.

To trigger a hard reboot of a guest, the plugin uses the testcloud API. It is also used to trigger a soft reboot unless a custom reboot command was specified via tmt-reboot -c ....

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
go(*, logger: Logger | None = None) None

Provision the bootc instance

property is_in_standalone_mode: bool

Enable standalone mode when build_disk_image_only is True

tmt.steps.provision.connect module

class tmt.steps.provision.connect.ConnectGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, guest: str | None = None, soft_reboot: tmt.utils.ShellScript | None = None, systemd_soft_reboot: tmt.utils.ShellScript | None = None, hard_reboot: tmt.utils.ShellScript | None = None)

Bases: GuestSshData

classmethod from_plugin(container: ProvisionConnect) ConnectGuestData

Create guest data from plugin and its current configuration

guest: str | None = None
hard_reboot: ShellScript | None = None
soft_reboot: ShellScript | None = None
systemd_soft_reboot: ShellScript | None = None
class tmt.steps.provision.connect.GuestConnect(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)

Bases: GuestSsh

Initialize guest data

cli_invocation: 'tmt.cli.CliInvocation' | None = None
hard_reboot: ShellScript | None
reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Plugin will use special commands if specified via soft-reboot, systemd-soft-reboot, and hard-reboot keys to perform the RebootMode.SOFT, RebootMode.SYSTEMD_SOFT, and RebootMode.HARD reboot modes, respectively.

Warning

Unlike command, these commands would be executed on the runner, not on the guest.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

soft_reboot: ShellScript | None
start() None

Start the guest

systemd_soft_reboot: ShellScript | None
class tmt.steps.provision.connect.ProvisionConnect(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[ProvisionConnectData]

Connect to a provisioned guest using SSH.

Do not provision any system, tests will be executed directly on the machine that has been already provisioned. Use provided authentication information to connect to it over SSH.

Private key authentication (using sudo to run scripts):

provision:
    how: connect
    guest: host.example.org
    user: fedora
    become: true
    key: /home/psss/.ssh/example_rsa

Password authentication:

provision:
    how: connect
    guest: host.example.org
    user: root
    password: secret

The user defaults to root, so if you have your private key correctly set up, the minimal configuration can look like this:

provision:
    how: connect
    guest: host.example.org

To support hard reboot of a guest, hard-reboot must be set to an executable command or script. Without this key set, hard reboot remains unsupported and results in an error. In contrast, soft-reboot and systemd-soft-reboot are optional, but if set, the given commands will be preferred over the default soft and systemd soft-reboot commands:

provision:
  how: connect
  hard-reboot: virsh reboot my-example-vm
  systemd-soft-reboot: ssh root@my-example-vm 'systemctl soft-reboot'
  soft-reboot: ssh root@my-example-vm 'shutdown -r now'
provision --how connect \
          --hard-reboot="virsh reboot my-example-vm" \
          --systemd-soft-reboot="ssh root@my-example-vm 'systemctl soft-reboot'" \
          --soft-reboot="ssh root@my-example-vm 'shutdown -r now'"

Warning

hard-reboot, systemd-soft-reboot, and soft-reboot commands are executed on the runner, not on the guest.

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
go(*, logger: Logger | None = None) None

Prepare the connection

class tmt.steps.provision.connect.ProvisionConnectData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, guest: str | None = None, soft_reboot: tmt.utils.ShellScript | None = None, systemd_soft_reboot: tmt.utils.ShellScript | None = None, hard_reboot: tmt.utils.ShellScript | None = None)

Bases: ConnectGuestData, ProvisionStepData

tmt.steps.provision.local module

class tmt.steps.provision.local.GuestLocal(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)

Bases: Guest

Local Host

Initialize guest data

cli_invocation: 'tmt.cli.CliInvocation' | None = None
execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: Callable[[Command, Popen[bytes], CommandOutput, Logger], None] | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput

Execute command on localhost

install_scripts(scripts: Sequence[Script]) None

Install scripts required by tmt.

property is_ready: bool

Local is always ready

localhost = True
pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None

Nothing to be done to pull workdir

push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None

Nothing to be done to push workdir

reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

property scripts_path: Path

Absolute path to tmt scripts directory

start() None

Start the guest

stop() None

Stop the guest

class tmt.steps.provision.local.ProvisionLocal(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[ProvisionLocalData]

Use the localhost for the test execution.

Do not provision any system, tests will be executed directly on the localhost.

Warning

In general, it is not recommended to run tests on your local machine as there might be security risks. Run only those tests which you know are safe so that you don’t destroy your workstation ;-)

From tmt version 1.38, the --feeling-safe option or the TMT_FEELING_SAFE=1 environment variable is required in order to use the local provision plugin.

Using the plugin:

provision:
    how: local
provision --how local

Note

tmt run is expected to be executed under a non-privileged user account. For some actions on the localhost, e.g. installation of test requirements, local will require elevated privileges, either by running under the root account or by using sudo to run the sensitive commands. You may be asked for a password in such cases.

Note

Neither hard nor soft reboot is supported.

Note

Currently the TMT_SCRIPTS_DIR variable is not supported in the local provision plugin and the default scripts path is used instead. See issue #4081 for details.

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
go(*, logger: Logger | None = None) None

Provision the localhost guest

class tmt.steps.provision.local.ProvisionLocalData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None)

Bases: GuestData, ProvisionStepData

tmt.steps.provision.mock module

class tmt.steps.provision.mock.GuestMock(*args: Any, **kwargs: Any)

Bases: Guest

Mock environment

Initialize guest data

cli_invocation: 'tmt.cli.CliInvocation' | None = None
execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: Callable[[Command, Popen[bytes], CommandOutput, Logger], None] | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput

Execute command inside mock

property is_ready: bool

Mock is always ready

pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None

Pull content from the mock chroot via a pipe at /srv/tmt-mock/filesync. For directories we use tar. For files we use cp / install. The compress option is ignored as it only slows down the execution.

push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None

Push content into the mock chroot via a pipe at /srv/tmt-mock/filesync. For directories we use tar. For files we use cp / install. The compress option is ignored as it only slows down the execution. The create-destination option is ignored; there were problems with the workdir.

reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

remove() None

Currently the mock chroot is not pruned, as that may be undesirable.

root: str | None = None
rootdir: Path | None = None
stop() None

Stop the guest

Shut down a running guest instance so that it does not consume any memory or cpu resources. If needed, perform any actions necessary to store the instance status to disk.

suspend() None

Suspend the guest.

Perform any actions necessary before quitting step and tmt. The guest may be reused by future tmt invocations.

class tmt.steps.provision.mock.MockGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, root: str | None = None, rootdir: tmt._compat.pathlib.Path | None = None)

Bases: GuestData

root: str | None = None
rootdir: Path | None = None
class tmt.steps.provision.mock.MockShell(parent: GuestMock, root: str | None, rootdir: Path | None)

Bases: object

class ManagedEpollFd(epoll: epoll, fd: int)

Bases: object

try_unregister() None
class Stream(logger: Callable[[str], None])

Bases: object

enter_shell() None
execute(*args: Any, **kwargs: Any) tuple[str, str]
exit_shell() None
class tmt.steps.provision.mock.ProvisionMock(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[ProvisionMockData]

Use the mock tool for the test execution.

Tests will be executed inside a mock buildroot.

Warning

This plugin requires the --feeling-safe option or the TMT_FEELING_SAFE=1 environment variable to be defined. While it is roughly as safe as container provisioning, the plugin still bind-mounts the test directory.

Using the plugin:

provision:
    how: mock
    config: fedora-rawhide-x86_64
provision --how mock --config fedora-rawhide-x86_64

Note

Neither hard nor soft reboot is supported.

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
go(*, logger: Logger | None = None) None

Provision the mock environment

class tmt.steps.provision.mock.ProvisionMockData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, root: str | None = None, rootdir: tmt._compat.pathlib.Path | None = None)

Bases: MockGuestData, ProvisionStepData

tmt.steps.provision.mock.mock_config(root: str | None) dict[str, Any]

tmt.steps.provision.mrack module

tmt.steps.provision.mrack.BEAKER: Any
class tmt.steps.provision.mrack.BeakerAPI(guest: GuestBeaker)

Bases: object

Initialize the API class with defaults and load the config

create(data: CreateJobParameters) Any

Create - or request creation of - a resource using mrack up.

Parameters:

data – describes the provisioning request.

delete() Any

Delete - or request removal of - a resource

dsp_name: str = 'Beaker'
inspect() Any

Inspect a resource (effectively, wait until it is provisioned)

mrack_requirement: dict[str, Any] = {}
class tmt.steps.provision.mrack.BeakerGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, whiteboard: str | None = None, arch: str = 'x86_64', image: str | None = 'fedora', job_id: str | None = None, provision_timeout: int = 3600, provision_tick: int = 60, api_session_refresh_tick: int = 3600, kickstart: dict[str, str] = <factory>, beaker_job_owner: str | None = None, public_key: list[str] = <factory>, beaker_job_group: str | None = None, bootc_check_system_url: str | None = 'https://gitlab.com/fedora/bootc/tests/bootc-beaker-test/-/archive/1.8/bootc-beaker-test-1.8.tar.gz#check-system', bootc_image_url: str | None = None, bootc_registry_secret: str | None = None, bootc: bool = False)

Bases: GuestSshData

api_session_refresh_tick: int = 3600
arch: str = 'x86_64'
beaker_job_group: str | None = None
beaker_job_owner: str | None = None
bootc: bool = False
bootc_check_system_url: str | None = 'https://gitlab.com/fedora/bootc/tests/bootc-beaker-test/-/archive/1.8/bootc-beaker-test-1.8.tar.gz#check-system'
bootc_image_url: str | None = None
bootc_registry_secret: str | None = None
image: str | None = 'fedora'
job_id: str | None = None
kickstart: dict[str, str]
provision_tick: int = 60
provision_timeout: int = 3600
public_key: list[str]
whiteboard: str | None = None
tmt.steps.provision.mrack.BeakerProvider: Any
tmt.steps.provision.mrack.BeakerTransformer: Any
class tmt.steps.provision.mrack.ConstraintT

A type var representing actual constraint type in transformers and their type annotations.

alias of TypeVar(‘ConstraintT’, bound=Constraint)

tmt.steps.provision.mrack.ConstraintTransformer

A type of constraint transformers.

alias of Callable[[ConstraintT, Logger], MrackBaseHWElement | dict[str, Any]]

class tmt.steps.provision.mrack.CreateJobParameters(tmt_name: str, name: str, os: str, arch: str, hardware: Hardware | None, kickstart: dict[str, str], whiteboard: str | None, beaker_job_owner: str | None, public_key: list[str], beaker_job_group: str | None, bootc_credentials: dict[str, Any] | None, bootc_image_url: str | None, bootc: bool, bootc_check_system_url: str | None, group: str = 'linux')

Bases: object

Collect all parameters for a future Beaker job

arch: str
beaker_job_group: str | None
beaker_job_owner: str | None
bootc: bool
bootc_check_system_url: str | None
bootc_credentials: dict[str, Any] | None
bootc_image_url: str | None
group: str = 'linux'
hardware: Hardware | None
kickstart: dict[str, str]
name: str
os: str
public_key: list[str]
tmt_name: str
to_mrack() dict[str, Any]
whiteboard: str | None
tmt.steps.provision.mrack.DEFAULT_API_SESSION_REFRESH = 3600

How often the Beaker session should be refreshed to pick up an up-to-date Kerberos ticket.

class tmt.steps.provision.mrack.GuestBeaker(*args: Any, **kwargs: Any)

Bases: GuestSsh

Beaker guest instance

Make sure that the mrack module is available and imported

property api: BeakerAPI

Create BeakerAPI leveraging mrack

api_session_refresh_tick: int
arch: str
beaker_job_group: str | None = None
beaker_job_owner: str | None = None
bootc: bool
bootc_check_system_url: str | None
bootc_image_url: str | None
bootc_registry_secret: str | None
cli_invocation: 'tmt.cli.CliInvocation' | None = None
hardware: Hardware | None = None
image: str = 'fedora-latest'
property is_ready: bool

Check whether provisioning of the machine is done

job_id: str | None
kickstart: dict[str, str]
provision_tick: int
provision_timeout: int
public_key: list[str]
reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

The plugin will use the bkr system-power command to perform the RebootMode.HARD reboot. Unlike command, this command is executed on the runner, not on the guest.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

remove() None

Remove the guest

start() None

Start the guest

Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.

whiteboard: str | None
class tmt.steps.provision.mrack.GuestInspectType

Bases: TypedDict

address: str | None
status: str
system: str
class tmt.steps.provision.mrack.GuestLogBeaker(name: str, guest: tmt.steps.provision.mrack.GuestBeaker, url: str)

Bases: GuestLog

fetch(logger: Logger) str | None

Fetch and return content of a log.

Returns:

content of the log, or None if the log cannot be retrieved.

guest: GuestBeaker
url: str
class tmt.steps.provision.mrack.MrackBaseHWElement(name: str)

Bases: ABC

Base for Mrack hardware requirement elements

name: str
abstractmethod to_mrack() dict[str, Any]

Convert the element to Mrack-compatible dictionary tree

class tmt.steps.provision.mrack.MrackHWAndGroup(name: str = 'and', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)

Bases: MrackHWGroup

Represents <and/> element

name: str = 'and'
class tmt.steps.provision.mrack.MrackHWBinOp(name: str, operator: str, value: str)

Bases: MrackHWElement

An element describing a binary operation, a “check”

class tmt.steps.provision.mrack.MrackHWDeviceElement(operator: str, value: str, attribute_name: str = 'value')

Bases: MrackHWElement

An element for device with op and value attributes

class tmt.steps.provision.mrack.MrackHWElement(name: str, attributes: dict[str, str] = <factory>)

Bases: MrackBaseHWElement

An element with name and attributes.

This type of element is not allowed to have any child elements.

attributes: dict[str, str]
to_mrack() dict[str, Any]

Convert the element to Mrack-compatible dictionary tree

class tmt.steps.provision.mrack.MrackHWGroup(name: str, children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)

Bases: MrackBaseHWElement

An element with child elements.

This type of element is not allowed to have any attributes.

children: list[MrackBaseHWElement | dict[str, Any]]
to_mrack() dict[str, Any]

Convert the element to Mrack-compatible dictionary tree

class tmt.steps.provision.mrack.MrackHWKeyValue(name: str, operator: str, value: str)

Bases: MrackHWElement

A key-value element

class tmt.steps.provision.mrack.MrackHWNotGroup(name: str = 'not', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)

Bases: MrackHWGroup

Represents <not/> element

name: str = 'not'
class tmt.steps.provision.mrack.MrackHWOrGroup(name: str = 'or', children: list[~tmt.steps.provision.mrack.MrackBaseHWElement | dict[str, ~typing.Any]] = <factory>)

Bases: MrackHWGroup

Represents <or/> element

name: str = 'or'
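
A brief usage sketch composing the Mrack HW element classes documented above into a Mrack-compatible dictionary tree. The particular constraint names and values are illustrative only, not taken from tmt.

from tmt.steps.provision.mrack import (
    MrackHWAndGroup,
    MrackHWBinOp,
    MrackHWKeyValue,
)

# Hypothetical requirement: a hostname check AND a key/value check.
requirement = MrackHWAndGroup(children=[
    MrackHWBinOp('hostname', '==', 'foo.example.com'),
    MrackHWKeyValue('NR_DISKS', '>=', '2'),
])

# Convert the whole tree into the Mrack-compatible dictionary form.
mrack_tree = requirement.to_mrack()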
tmt.steps.provision.mrack.NotAuthenticatedError: Any
class tmt.steps.provision.mrack.ProvisionBeaker(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[ProvisionBeakerData]

Provision guest on Beaker system using mrack.

Reserve a machine from the Beaker pool using the mrack plugin. mrack is a multicloud provisioning library supporting multiple cloud services including Beaker.

The following two files are used for configuration:
  • /etc/tmt/mrack.conf – basic configuration

  • /etc/tmt/provisioning-config.yaml – configuration per supported provider

Beaker installs the distribution specified by the image key. If the image cannot be translated using the provisioning-config.yaml file, mrack passes the image value to the Beaker hub and tries to request a distribution based on that value. This way the default translations can be bypassed and the desired distribution requested directly, like the one in the example below.

Minimal configuration could look like this:

provision:
    how: beaker
    image: fedora

To trigger a hard reboot of a guest, the bkr system-power --action reboot command is executed.

Warning

The bkr system-power command is executed on the runner, not on the guest.

# Specify the distro directly
provision:
    how: beaker
    image: Fedora-37%
# Set custom whiteboard description (added in 1.30)
provision:
    how: beaker
    whiteboard: Just a smoke test for now

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
go(*, logger: Logger | None = None) None

Provision the guest

wake(data: BeakerGuestData | None = None) None

Wake up the plugin, process data, apply options

class tmt.steps.provision.mrack.ProvisionBeakerData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, whiteboard: str | None = None, arch: str = 'x86_64', image: str | None = 'fedora', job_id: str | None = None, provision_timeout: int = 3600, provision_tick: int = 60, api_session_refresh_tick: int = 3600, kickstart: dict[str, str] = <factory>, beaker_job_owner: str | None = None, public_key: list[str] = <factory>, beaker_job_group: str | None = None, bootc_check_system_url: str | None = 'https://gitlab.com/fedora/bootc/tests/bootc-beaker-test/-/archive/1.8/bootc-beaker-test-1.8.tar.gz#check-system', bootc_image_url: str | None = None, bootc_registry_secret: str | None = None, bootc: bool = False)

Bases: BeakerGuestData, ProvisionStepData

tmt.steps.provision.mrack.ProvisioningError: Any
tmt.steps.provision.mrack.TmtBeakerTransformer: Any
tmt.steps.provision.mrack.async_run(func: Any) Any

Decorate click actions to run as async

tmt.steps.provision.mrack.constraint_to_beaker_filter(constraint: BaseConstraint, logger: Logger) MrackBaseHWElement | dict[str, Any]

Convert a hardware constraint into a Mrack-compatible filter

tmt.steps.provision.mrack.import_and_load_mrack_deps(mrack_log: str, logger: Logger) None

Import mrack module only when needed (thread-safe)

tmt.steps.provision.mrack.init_mrack_global_context(config_path: str) None

Initialize mrack global context in a thread-safe manner

tmt.steps.provision.mrack.mrack: Any
tmt.steps.provision.mrack.mrack_constructs_ks_pre() bool

Kickstart construction has been improved in 1.21.0

tmt.steps.provision.mrack.operator_to_beaker_op(operator: Operator, value: str) tuple[str, str, bool]

Convert constraint operator to Beaker “op”.

Parameters:
  • operator – operator to convert.

  • value – value the operator works with. It shall be a string representation of the constraint value, as converted for the Beaker job XML.

Returns:

tuple of three items: the Beaker operator, fit for the op attribute of XML filters, a value to go with it instead of the input one, and a boolean signaling whether the filter, constructed by the caller, should be negated.
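
A hedged usage sketch based on the documented return value: the caller translates the operator, builds a binary-operation element, and wraps it in a <not/> group when negation is requested. The 'memory' element name is illustrative, not tmt's actual code.

from tmt.steps.provision.mrack import (
    MrackHWBinOp,
    MrackHWNotGroup,
    operator_to_beaker_op,
)

def memory_filter(operator, value: str):
    # Translate the tmt operator and value into their Beaker form ...
    beaker_operator, beaker_value, negate = operator_to_beaker_op(operator, value)
    # ... build the binary-operation element ('memory' is illustrative) ...
    element = MrackHWBinOp('memory', beaker_operator, beaker_value)
    # ... and wrap it in a <not/> group when the filter should be negated.
    return MrackHWNotGroup(children=[element]) if negate else element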

tmt.steps.provision.mrack.providers: Any
tmt.steps.provision.mrack.transforms(fn: Callable[[ConstraintT, Logger], MrackBaseHWElement | dict[str, Any]]) Callable[[ConstraintT, Logger], MrackBaseHWElement | dict[str, Any]]

A decorator marking a function as a constraint transformer.

The function name is expected to encode the constraint name it transforms: the decorator strips away the initial _transform_ prefix and replaces the first underscore (_) with a dot (.):

_transform_beaker_pool => beaker.pool
_transform_disk_physical_sector_size => disk.physical_sector_size
Parameters:

fn – function to decorate.
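
A small sketch (not the actual tmt code) of the name derivation described above:

def constraint_name(transformer_name: str) -> str:
    # Strip the '_transform_' prefix and replace only the first
    # underscore with a dot, per the convention described above.
    stem = transformer_name.removeprefix('_transform_')
    return stem.replace('_', '.', 1)

assert constraint_name('_transform_beaker_pool') == 'beaker.pool'
assert constraint_name('_transform_disk_physical_sector_size') == 'disk.physical_sector_size'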

tmt.steps.provision.podman module

class tmt.steps.provision.podman.GuestContainer(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)

Bases: Guest

Container Instance

Initialize guest data

cli_invocation: 'tmt.cli.CliInvocation' | None = None
container: str | None
execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: Callable[[Command, Popen[bytes], CommandOutput, Logger], None] | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput

Execute given commands in podman via shell

force_pull: bool
image: str | None
property is_ready: bool

Detect whether the guest is ready.

logger: Logger
podman(command: Command, silent: bool = True, **kwargs: Any) CommandOutput

Run given command via podman

pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None

Nothing to be done to pull workdir

pull_attempts: int
pull_image() None

Pull the image if it is not available or if the pull is forced

pull_interval: int
push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None

Make sure that the workdir has the correct SELinux context

reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Note

Only the RebootMode.HARD mode is supported by the plugin; other modes or a custom reboot command will result in an exception.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

remove() None

Remove the container

start() None

Start provisioned guest

stop() None

Stop provisioned guest

stop_time: int
user: str
wake() None

Wake up the guest

class tmt.steps.provision.podman.PodmanGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, image: str = 'fedora', user: str = 'root', force_pull: bool = False, container: str | None = None, network: str | None = None, pull_attempts: int = 5, pull_interval: int = 5, stop_time: int = 1)

Bases: GuestData

container: str | None = None
force_pull: bool = False
image: str = 'fedora'
network: str | None = None
pull_attempts: int = 5
pull_interval: int = 5
stop_time: int = 1
user: str = 'root'
class tmt.steps.provision.podman.ProvisionPodman(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[ProvisionPodmanData]

Create a new container using podman.

Example config:

provision:
    how: container
    image: fedora:latest
# Use an image with a non-root user with sudo privileges,
# and run scripts with sudo.
provision:
    how: container
    image: image with non-root user with sudo privileges
    user: tester
    become: true

In order to always pull a fresh container image, use pull: true.

In order to run the container with a user other than the default root, use user: USER.

Container-backed guests do not support soft reboots or custom reboot commands. A soft reboot or tmt-reboot -c ... will result in an error.

Store plugin name, data and parent step

cli_invocation: 'tmt.cli.CliInvocation' | None = None
default(option: str, default: Any = None) Any

Return default data for given option

go(*, logger: Logger | None = None) None

Provision the container

class tmt.steps.provision.podman.ProvisionPodmanData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, image: str = 'fedora', user: str = 'root', force_pull: bool = False, container: str | None = None, network: str | None = None, pull_attempts: int = 5, pull_interval: int = 5, stop_time: int = 1)

Bases: PodmanGuestData, ProvisionStepData

tmt.steps.provision.testcloud module

tmt.steps.provision.testcloud.AArch64ArchitectureConfiguration: Any
tmt.steps.provision.testcloud.BOOT_TIMEOUT: int = 120

How many seconds to wait for a VM to start. This is the effective value, combining the default and optional envvar, TMT_BOOT_TIMEOUT.
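
A hedged illustration of how such an effective value is typically combined from the default and the environment variable; this is a sketch, not tmt's actual code:

import os

DEFAULT_BOOT_TIMEOUT = 120  # default used unless told otherwise

# Fall back to the default when the TMT_BOOT_TIMEOUT envvar is not set.
BOOT_TIMEOUT = int(os.environ.get('TMT_BOOT_TIMEOUT', DEFAULT_BOOT_TIMEOUT))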

class tmt.steps.provision.testcloud.ConsoleLog(name: str, guest: 'Guest', testcloud_symlink_path: tmt._compat.pathlib.Path, exchange_directory: tmt._compat.pathlib.Path | None = None)

Bases: GuestLog

cleanup(logger: Logger) None

Remove the temporary directory.

exchange_directory: Path | None = None
fetch(logger: Logger) str | None

Read the content of the symlink target prepared by testcloud.

prepare(logger: Logger) None

Prepare temporary directory for the console log.

A special directory with the right SELinux context is needed for console logs so that virtlogd is able to write there.

tmt.steps.provision.testcloud.DEFAULT_BOOT_TIMEOUT: int = 120

How many seconds to wait for a VM to start. This is the default value tmt would use unless told otherwise.

tmt.steps.provision.testcloud.DEFAULT_STOP_RETRIES = 10

Default number of attempts to stop a VM.

Note

The value testcloud starts with is 3, and we have already observed some VMs with bootc involved failing to shut down in time. Therefore we start with an increased default on our side.

tmt.steps.provision.testcloud.DEFAULT_STOP_RETRY_DELAY = 1

Default time, in seconds, to wait between attempts to stop a VM.

tmt.steps.provision.testcloud.DomainConfiguration: Any
class tmt.steps.provision.testcloud.GuestTestcloud(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)

Bases: GuestSsh

Testcloud Instance

The following keys are expected in the ‘data’ dictionary:

image ...... qcow2 image name or url
user ....... user name to log in
memory ..... memory size for vm
disk ....... disk size for vm
connection . either session (default) or system, to be passed to qemu
arch ....... architecture for the VM, host arch is the default

Initialize guest data

arch: str
cli_invocation: 'tmt.cli.CliInvocation' | None = None
connection: str
disk: Size | None
image: str
image_url: str | None
instance_name: str | None
property is_coreos: bool
property is_kvm: bool
property is_legacy_os: bool
property is_ready: bool

Detect whether the guest is ready.

memory: Size | None
prepare_config() None

Prepare common configuration

prepare_ssh_key(key_type: str | None = None) str

Prepare ssh key for authentication

reboot(mode: RebootMode = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

remove() None

Remove the guest (disk cleanup)

start() None

Start provisioned guest

stop() None

Stop provisioned guest

stop_retries: int
stop_retry_delay: int
property testcloud_data_dirpath: Path
property testcloud_image_dirpath: Path
wake() None

Wake up the guest

tmt.steps.provision.testcloud.IMAGE_URL_FETCH_RETRY_ATTEMPTS = 5

Image url fetch retry attempts and interval

tmt.steps.provision.testcloud.NON_KVM_TIMEOUT_COEF = 10

How many times the timeouts should be multiplied in KVM-less cases. These include emulating a different architecture than the host, some nested virtualization cases, and hosts with degraded virt caps.

tmt.steps.provision.testcloud.Ppc64leArchitectureConfiguration: Any
class tmt.steps.provision.testcloud.ProvisionTestcloud(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: ProvisionPlugin[ProvisionTestcloudData]

Provision a local virtual machine using the testcloud library. testcloud takes care of downloading an image and making the necessary changes to it for an optimal experience (such as disabling UseDNS and GSSAPI for SSH).

Minimal config which uses the latest Fedora image:

provision:
    how: virtual

Here’s a full config example:

# Provision a virtual machine from a specific QCOW2 file,
# using specific memory and disk settings, using the fedora user,
# and using sudo to run scripts.
provision:
    how: virtual
    image: https://mirror.vpsnet.com/fedora/linux/releases/41/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-41-1.4.x86_64.qcow2
    user: fedora
    become: true
    # in MB
    memory: 2048
    # in GB
    disk: 30

Images

As the image, use fedora for the latest released Fedora compose, fedora-rawhide for the latest Rawhide compose, short aliases such as fedora-32, f-32 or f32 for a specific release, or a full URL to a qcow2 image, for example from https://kojipkgs.fedoraproject.org/compose/.

Short names are also provided for centos, centos-stream, alma, rocky, oracle, debian and ubuntu (e.g. centos-8 or c8).

Note

The non-rpm distros are not fully supported yet in tmt as the package installation is performed solely using dnf/yum and rpm. But you should be able to log in to the provisioned guest and start experimenting. Full support is coming in the future :)

Supported Fedora CoreOS images are:

  • fedora-coreos

  • fedora-coreos-stable

  • fedora-coreos-testing

  • fedora-coreos-next

Use the full path for images stored on local disk, for example:

/var/tmp/images/Fedora-Cloud-Base-31-1.9.x86_64.qcow2

In addition to the qcow2 format, Vagrant boxes can be used as well; testcloud will take care of unpacking the image for you.

Reboot

To trigger a hard reboot of a guest, the plugin uses the testcloud API. It is also used to trigger a soft reboot unless a custom reboot command was specified via tmt-reboot -c ....

Console

The full console log is available, after the guest is booted, in the logs directory under the provision step workdir, for example: plan/provision/client/logs/console.txt. Enable verbose mode using -vv to get the full path printed to the terminal for easy investigation.

Store plugin name, data and parent step

classmethod clean_images(clean: Clean, dry: bool, workdir_root: Path) bool

Remove the testcloud images

cli_invocation: 'tmt.cli.CliInvocation' | None = None
go(*, logger: Logger | None = None) None

Provision the testcloud instance

class tmt.steps.provision.testcloud.ProvisionTestcloudData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None, _OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: ForwardRef('Size') | None = None, disk: ForwardRef('Size') | None = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: str | None = None, instance_name: str | None = None, stop_retries: int = 10, stop_retry_delay: int = 1)

Bases: TestcloudGuestData, ProvisionStepData

tmt.steps.provision.testcloud.QCow2StorageDevice: Any
tmt.steps.provision.testcloud.RawStorageDevice: Any
tmt.steps.provision.testcloud.S390xArchitectureConfiguration: Any
tmt.steps.provision.testcloud.SystemNetworkConfiguration: Any
tmt.steps.provision.testcloud.TPMConfiguration: Any
tmt.steps.provision.testcloud.TPM_CONFIG_ALLOWS_VERSIONS: bool = False

If set, the testcloud TPM configuration accepts the TPM version as a parameter.

tmt.steps.provision.testcloud.TPM_VERSION_ALLOWED_OPERATORS: tuple[Operator, ...] = (Operator.EQ, Operator.GTE, Operator.LTE)

List of operators supported for tpm.version HW requirement.

tmt.steps.provision.testcloud.TPM_VERSION_SUPPORTED_VERSIONS = {False: ['2.0', '2'], True: ['2.0', '2', '1.2']}

TPM versions supported by the plugin. The key is TPM_CONFIG_ALLOWS_VERSIONS.

class tmt.steps.provision.testcloud.TestcloudGuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: tmt.steps.provision.GuestFacts = <factory>, environment: tmt.utils.Environment = <factory>, hardware: tmt.hardware.Hardware | None = None, ansible: tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>, image: str = 'fedora', memory: ForwardRef('Size') | None = None, disk: ForwardRef('Size') | None = None, connection: str = 'session', arch: str = 'x86_64', list_local_images: bool = False, image_url: str | None = None, instance_name: str | None = None, stop_retries: int = 10, stop_retry_delay: int = 1)

Bases: GuestSshData

arch: str = 'x86_64'
connection: str = 'session'
disk: Size | None = None
image: str = 'fedora'
image_url: str | None = None
instance_name: str | None = None
list_local_images: bool = False
memory: Size | None = None
show(*, keys: list[str] | None = None, verbose: int = 0, logger: Logger) None

Display guest data in a nice way.

Parameters:
  • keys – if set, only these keys would be shown.

  • verbose – desired verbosity. Some fields may be omitted in low verbosity modes.

  • logger – logger to use for logging.

stop_retries: int = 10
stop_retry_delay: int = 1
tmt.steps.provision.testcloud.UserNetworkConfiguration: Any
tmt.steps.provision.testcloud.Workarounds: Any
tmt.steps.provision.testcloud.X86_64ArchitectureConfiguration: Any
tmt.steps.provision.testcloud.import_testcloud(logger: Logger) None

Import testcloud module only when needed

tmt.steps.provision.testcloud.normalize_disk_size(key_address: str, value: Any, logger: Logger) Size | None

Normalize disk size.

As of now, it’s just a simple integer with implicit unit, GB.

tmt.steps.provision.testcloud.normalize_memory_size(key_address: str, value: Any, logger: Logger) Size | None

Normalize memory size.

As of now, it’s just a simple integer with implicit unit, MB.
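
A rough sketch of the normalization idea behind these two helpers, assuming Size behaves like a pint quantity; this is illustrative, not tmt's actual implementation:

import pint

units = pint.UnitRegistry()

def normalize_memory_size(value):
    # A bare integer gets the implicit unit (MB); anything else, such as
    # the string '4 GB', is parsed as a quantity with an explicit unit.
    if isinstance(value, int):
        return value * units.megabyte
    return units.Quantity(str(value))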

Module contents

tmt.steps.provision.BASE_SSH_OPTIONS: list[str | Path] = ['-oForwardX11=no', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', '-oConnectionAttempts=5', '-oConnectTimeout=60', '-oServerAliveInterval=5', '-oServerAliveCountMax=60']

Base SSH options. This is the base set of SSH options tmt would use for all SSH connections. It is a combination of the default SSH options and those provided by environment variables. SSH options are processed in order. Options provided via environment variables take precedence over default values. For options that set a specific value (e.g., ServerAliveInterval), the first occurrence takes precedence. For simple on/off flags (e.g., -v/-q), the last one wins. Identity files (-i) are all considered in order.

class tmt.steps.provision.BootMark

Bases: ABC

Fetch and compare “boot mark”

A “boot mark” is a piece of information identifying a particular guest boot, and it changes after a reboot. It is used to detect whether a reboot has already happened or not.

classmethod check(guest: Guest, current: str | None) None

Read the new boot mark, and compare it with the current one.

Intended to be called as a tmt.utils.wait.wait() callback.

Raises:

tmt.utils.wait.WaitingIncompleteError – when the guest is not yet ready after a reboot, e.g. because the boot mark is not updated yet.

abstractmethod classmethod fetch(guest: Guest) str

Read and return the current value of the boot mark.
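
As a sketch of how another boot mark could look, the following hypothetical subclass uses the kernel's per-boot random identifier; the class is not part of tmt and error handling is omitted:

# A minimal, hypothetical boot mark implementation.
from tmt.steps.provision import BootMark, Guest
from tmt.utils import ShellScript


class BootMarkBootId(BootMark):
    """Use the kernel's random boot_id, which changes on every full boot."""

    @classmethod
    def fetch(cls, guest: Guest) -> str:
        output = guest.execute(
            ShellScript("cat /proc/sys/kernel/random/boot_id"), silent=True)
        return (output.stdout or "").strip()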

class tmt.steps.provision.BootMarkBootTime

Bases: BootMark

Use boot time as a boot mark.

classmethod fetch(guest: Guest) str

Read and return the current value of the boot mark.

class tmt.steps.provision.BootMarkSystemdSoftRebootCount

Bases: BootMark

Use the systemd soft reboot count as a boot mark.

classmethod fetch(guest: Guest) str

Read and return the current value of the boot mark.

tmt.steps.provision.CONNECT_TIMEOUT: int = 120

How many seconds to wait for a connection to succeed after guest boot. This is the effective value, combining the default value with the optional TMT_CONNECT_TIMEOUT environment variable.

tmt.steps.provision.DEFAULT_CONNECT_TIMEOUT = 120

How many seconds to wait for a connection to succeed after guest boot. This is the default value tmt would use unless told otherwise.

tmt.steps.provision.DEFAULT_REBOOT_TIMEOUT: int = 600

How many seconds to wait for a connection to succeed after guest reboot. This is the default value tmt would use unless told otherwise.

tmt.steps.provision.DEFAULT_SSH_OPTIONS: list[str | Path] = ['-oForwardX11=no', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', '-oConnectionAttempts=5', '-oConnectTimeout=60', '-oServerAliveInterval=5', '-oServerAliveCountMax=60']

Default SSH options. This is the default set of SSH options tmt would use for all SSH connections.

tmt.steps.provision.DEFAULT_USER = 'root'

Default username to use in SSH connections.

class tmt.steps.provision.Guest(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)

Bases: HasRunWorkdir, HasPlanWorkdir, HasStepWorkdir, HasGuestWorkdir, Common

Guest provisioned for test execution

A base class for guest-like classes. Provides some of the basic methods and functionality, but note that some methods are intentionally left empty. These have no valid implementation at this level, and it is up to Guest subclasses to provide one that works in their respective infrastructure.

The following keys are expected in the ‘data’ container:

role ....... guest role in the multihost scenario
guest ...... name, hostname or ip address
become ..... boolean, whether to run shell scripts in tests, prepare, and finish with sudo

These are by default imported into instance attributes.

Initialize guest data

ansible: GuestAnsible | None
property ansible_host_groups: list[str]

Get guest list of groups for Ansible inventory.

property ansible_host_vars: dict[str, Any]

Get host variables for Ansible inventory.

assert_reachable(wait: Waiting | None = None) None

Assert that the guest is reachable and responding.

become: bool
property bootc_builder: PackageManager[PackageManagerEngine]
cli_invocation: 'tmt.cli.CliInvocation' | None = None
environment: Environment
classmethod essential_requires() list[DependencySimple | DependencyFmfId | DependencyFile]

Collect all essential requirements of the guest.

Essential requirements of a guest are necessary for the guest to be usable for testing.

Returns:

a list of requirements.

abstractmethod execute(command: ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: OnProcessEndCallback | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput
abstractmethod execute(command: Command, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: OnProcessEndCallback | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput

Execute a command on the guest.

Parameters:
  • command – either a command or a shell script to execute.

  • cwd – if set, execute command in this directory on the guest.

  • env – if set, set these environment variables before running the command.

  • friendly_command – nice, human-friendly representation of the command.
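
A brief usage sketch, assuming guest is an already provisioned Guest instance; the script itself is illustrative:

from tmt.utils import ShellScript

# Run a short script on the guest and inspect the captured output.
output = guest.execute(
    ShellScript("uname -r"),
    friendly_command="uname -r",
    silent=True,
)
print(output.stdout)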

property facts: GuestFacts
fetch_logs(logger: Logger, dirpath: Path | None = None, guest_logs: list[GuestLog] | None = None) None

Get log content and save it to a directory.

Parameters:
  • logger – logger to use for logging.

  • dirpath – a directory to save into. If not set, logdir or the current working directory will be used.

  • guest_logs – optional list of GuestLog. If not set, all guest logs from Guest.guest_logs would be collected.

classmethod get_data_class() type[GuestData]

Return step data class for this plugin.

By default, _data_class is returned, but plugin may override this method to provide different class.

property guest_workdir: Path

Path to a guest workdir.

Raises:

GeneralError – when there is no guest workdir yet.

hardware: Hardware | None
install_scripts(scripts: Sequence[Script]) None

Install scripts required by tmt.

abstract property is_ready: bool

Detect whether the guest is ready or not.

load(data: GuestData) None

Load guest data into object attributes for easy access

Called during guest object initialization. Takes care of storing all supported keys (see class attribute _keys for the list) from provided data to the guest object attributes. Child classes can extend it to make additional guest attributes easily available.

Data dictionary can contain guest information from both command line options / L2 metadata / user configuration and wake up data stored by the save() method below.

localhost = False
property logdir: Path | None

Path to store logs

Create the directory if it does not exist yet.

mkdtemp(prefix: str | None = None, template: str | None = None, parent: Path | None = None) Iterator[Path]

Create a temporary directory.

Modeled after tempfile.mkdtemp(), but creates the temporary directory on the guest by invoking mktemp -d. The implementation may slightly differ, but the temporary directory should be created safely, without conflicts, and it should be accessible only to the user who created it.

Since the caller is responsible for removing the directory, it is recommended to use it as a context manager, just as one would use tempfile.mkdtemp(); leaving the context will remove the directory:

with guest.mkdtemp() as path:
    ...
Parameters:
  • prefix – if set, the directory name will begin with this string.

  • template – if set, the directory name will follow the given naming scheme: the template must end with 6 consecutive X characters, i.e. XXXXXX. All X letters will be replaced with random characters.

  • parent – if set, new directory will be created under this path. Otherwise, the default directory is used.
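
For example, a hypothetical combination of the options above; the paths are illustrative and guest is assumed to be provisioned:

from tmt._compat.pathlib import Path
from tmt.utils import ShellScript

# Temporary directory under /var/tmp, with a recognizable prefix.
with guest.mkdtemp(prefix="tmt-demo-", parent=Path("/var/tmp")) as tmpdir:
    guest.execute(ShellScript(f"touch {tmpdir}/marker"))
# Leaving the context removes the directory on the guest.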

property multihost_name: str

Return guest’s multihost name, i.e. name and its role

classmethod options(how: str | None = None) list[Callable[[Any], Any]]

Prepare command line options related to guests

property package_manager: PackageManager[PackageManagerEngine]
perform_reboot(mode: RebootMode, action: Callable[[], Any], wait: Waiting) bool

Perform the actual reboot and wait for the guest to recover.

This is the core implementation of the common task of triggering a reboot and waiting for the guest to recover. reboot() is the public API of guest classes, and feeds perform_reboot() with the right action callable.

Note

perform_reboot() should be used by provision plugins only, when they decide what action they need to take to perform the desired reboot of the guest. Other code should use Guest.reboot() instead.

Parameters:
  • mode – which boot mode to perform.

  • action – a callable which will trigger the requested reboot.

  • wait – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

property plan_workdir: Path

Path to a plan workdir.

Raises:

GeneralError – when there is no plan workdir yet.

primary_address: str | None = None

Primary hostname or IP address for tmt/guest communication.

abstractmethod pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None

Pull files from the guest

abstractmethod push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None

Push files to the guest

abstractmethod reboot(mode: Literal[RebootMode.HARD] = RebootMode.HARD, command: None = None, waiting: Waiting | None = None) bool
abstractmethod reboot(mode: Literal[RebootMode.SOFT, RebootMode.SYSTEMD_SOFT] = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.
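
A sketch of the reboot flavours, assuming a provisioned guest; the custom command is illustrative:

from tmt.steps.provision import RebootMode
from tmt.utils import ShellScript

# Soft reboot using the plugin's default command.
guest.reboot(mode=RebootMode.SOFT)

# Soft reboot triggered by a custom command.
guest.reboot(mode=RebootMode.SOFT, command=ShellScript("shutdown -r now"))

# Hard reboot: no command is allowed, the plugin power-cycles the guest.
guest.reboot(mode=RebootMode.HARD)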

reconnect(wait: Waiting | None = None) bool

Ensure the connection to the guest is working

The default timeout is 5 minutes. A custom waiting context can be provided via the wait parameter. This may be useful when long operations (such as a system upgrade) are performed.

remove() None

Remove the guest

Completely remove all guest instance data so that it does not consume any disk resources.

role: str | None
run_ansible_playbook(playbook: AnsibleCollectionPlaybook | Path, playbook_root: Path | None = None, extra_args: str | None = None, friendly_command: str | None = None, log: LoggingFunction | None = None, silent: bool = False) CommandOutput

Run an Ansible playbook on the guest.

A wrapper for _run_ansible() which is responsible for running the playbook while this method makes sure our logging is consistent.

Parameters:
  • playbook – path to the playbook to run.

  • playbook_root – if set, playbook path must be located under the given root path.

  • extra_args – additional arguments to be passed to ansible-playbook via --extra-args.

  • friendly_command – if set, it would be logged instead of the command itself, to improve visibility of the command in logging output.

  • log – a logging function to use for logging of command output. By default, logger.debug is used.

  • silent – if set, logging of steps taken by this function would be reduced.
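
A minimal usage sketch; the playbook path is illustrative and guest is assumed to be provisioned:

from tmt._compat.pathlib import Path

# Apply a playbook on the guest.
guest.run_ansible_playbook(
    Path("/tmp/prepare.yml"),
    friendly_command="ansible-playbook prepare.yml",
)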

property run_workdir: Path

Path to a run workdir.

Raises:

GeneralError – when there is no current run, or the run does not have a workdir yet.

save() GuestData

Save guest data for future wake up

Export all essential guest data into a dictionary which will be stored in the guests.yaml file for possible future wake up of the guest. Everything needed to attach to a running instance should be added into the data dictionary by child classes.

property scripts_path: Path

Absolute path to tmt scripts directory

setup() None

Setup the guest

Setup the guest after it has been started. It is called after Guest.start().

show(show_multihost_name: bool = True) None

Show guest details such as distro and kernel

start() None

Start the guest

Get a new guest instance running. This should include preparing any configuration necessary to get it started. Called after load() is completed so all guest data should be available.

property step_workdir: Path

Path to a step workdir.

Raises:

GeneralError – when there is no step workdir yet.

abstractmethod stop() None

Stop the guest

Shut down a running guest instance so that it does not consume any memory or cpu resources. If needed, perform any actions necessary to store the instance status to disk.

suspend() None

Suspend the guest.

Perform any actions necessary before quitting step and tmt. The guest may be reused by future tmt invocations.

topology_address: str | None = None

Guest topology hostname or IP address for guest/guest communication.

wake() None

Wake up the guest

Perform any actions necessary after step wake up to be able to attach to a running guest instance and execute commands. Called after load() is completed so all guest data should be prepared.

class tmt.steps.provision.GuestCapability(*values)

Bases: Enum

Various Linux capabilities

SYSLOG_ACTION_READ_ALL = 'syslog-action-read-all'

Read all messages remaining in the ring buffer.

SYSLOG_ACTION_READ_CLEAR = 'syslog-action-read-clear'

Read and clear all messages remaining in the ring buffer.

class tmt.steps.provision.GuestData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: ~tmt.steps.provision.GuestFacts = <factory>, environment: ~tmt.utils.Environment = <factory>, hardware: ~tmt.hardware.Hardware | None = None, ansible: ~tmt.ansible.GuestAnsible | None = None)

Bases: SerializableContainer

Keys necessary to describe, create, save and restore a guest.

Very basic set of keys shared across all known guest classes.

ansible: GuestAnsible | None = None
become: bool = False
environment: Environment
facts: GuestFacts
classmethod from_plugin(container: ProvisionPlugin[ProvisionStepDataT]) Self

Create guest data from plugin and its current configuration

hardware: Hardware | None = None
classmethod options() Iterator[tuple[str, str]]

Iterate over option names.

Based on keys(), but skips fields that cannot be set by options.

Yields:

two-item tuples, a key and corresponding option name.

primary_address: str | None = None

Primary hostname or IP address for tmt/guest communication.

role: str | None = None
show(*, keys: list[str] | None = None, verbose: int = 0, logger: Logger) None

Display guest data in a nice way.

Parameters:
  • keys – if set, only these keys would be shown.

  • verbose – desired verbosity. Some fields may be omitted in low verbosity modes.

  • logger – logger to use for logging.

topology_address: str | None = None

Guest topology hostname or IP address for guest/guest communication.

class tmt.steps.provision.GuestFacts(in_sync: bool = False, arch: str | None = None, distro: str | None = None, kernel_release: str | None = None, package_manager: tmt.package_managers.GuestPackageManager | None = None, bootc_builder: tmt.package_managers.GuestPackageManager | None = None, has_selinux: bool | None = None, has_systemd: bool | None = None, has_rsync: bool | None = None, is_superuser: bool | None = None, can_sudo: bool | None = None, sudo_prefix: str | None = None, is_ostree: bool | None = None, is_toolbox: bool | None = None, toolbox_container_name: str | None = None, is_container: bool | None = None, systemd_soft_reboot: bool | None = None, capabilities: dict[~tmt.steps.provision.GuestCapability, bool] = <factory>, os_release_content: dict[str, str] = <factory>, lsb_release_content: dict[str, str] = <factory>)

Bases: SerializableContainer

Contains interesting facts about the guest.

Inspired by Ansible or Puppet facts, interesting guest facts tmt discovers while managing the guest are stored in this container, plus the code performing the discovery of these facts.

arch: str | None = None
bootc_builder: tmt.package_managers.GuestPackageManager | None = None
can_sudo: bool | None = None
capabilities: dict[GuestCapability, bool]

Various Linux capabilities and whether they are permitted to commands executed on this guest.

distro: str | None = None
format() Iterator[tuple[str, str, str]]

Format facts for pretty printing.

Yields:

three-item tuples: the field name, its pretty label, and formatted representation of its value.

has_capability(cap: GuestCapability) bool
has_rsync: bool | None = None
has_selinux: bool | None = None
has_systemd: bool | None = None
in_sync: bool = False

Set to True by the first call to sync().

is_container: bool | None = None
is_ostree: bool | None = None
is_superuser: bool | None = None
is_toolbox: bool | None = None
kernel_release: str | None = None
lsb_release_content: dict[str, str]
os_release_content: dict[str, str]
package_manager: tmt.package_managers.GuestPackageManager | None = None
sudo_prefix: str | None = None
sync(guest: Guest, *facts: str) None

Update stored facts to reflect the given guest.

Parameters:
  • guest – guest whose facts this container should represent.

  • facts – if specified, only the listed facts - names of attributes of this container, like arch or is_container - will be synced.
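
For example, refreshing just a couple of facts on an existing guest (a sketch, assuming guest is provisioned):

# Refresh only the listed facts instead of all of them.
guest.facts.sync(guest, "arch", "distro")
print(guest.facts.arch, guest.facts.distro)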

systemd_soft_reboot: bool | None = None
toolbox_container_name: str | None = None
class tmt.steps.provision.GuestLog(name: str, guest: 'Guest')

Bases: ABC

abstractmethod fetch(logger: Logger) str | None

Fetch and return content of a log.

Returns:

content of the log, or None if the log cannot be retrieved.

guest: Guest
name: str
store(logger: Logger, path: Path, logname: str | None = None) None

Save log content to a file.

Parameters:
  • logger – logger to use for logging.

  • path – a path to save into, could be a directory or a file path.

  • logname – name of the log, if not set, path is supposed to be a file path.
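
The following hypothetical subclass sketches how a custom log could be exposed; the class name and the collected command are made up, and guest and logger objects are assumed to exist:

from __future__ import annotations

from tmt.log import Logger
from tmt.steps.provision import GuestLog
from tmt.utils import ShellScript
from tmt._compat.pathlib import Path


class DmesgLog(GuestLog):
    """Collect the kernel ring buffer from the guest."""

    def fetch(self, logger: Logger) -> str | None:
        try:
            return self.guest.execute(ShellScript("dmesg"), silent=True).stdout
        except Exception:
            logger.debug("Could not fetch dmesg from the guest.")
            return None


# Usage sketch: save the log into the current directory as dmesg.txt.
DmesgLog("dmesg", guest).store(logger, Path("."), logname="dmesg.txt")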

class tmt.steps.provision.GuestSsh(*, data: GuestData, name: str | None = None, parent: Common | None = None, logger: Logger)

Bases: Guest

Guest provisioned for test execution, capable of accepting SSH connections

The following keys are expected in the ‘data’ dictionary:

role ....... guest role in the multihost scenario (inherited)
guest ...... hostname or ip address (inherited)
become ..... run shell scripts in tests, prepare, and finish with sudo (inherited)
port ....... port to connect to
user ....... user name to log in
key ........ path to the private key (str or list)
password ... password

These are by default imported into instance attributes.

Initialize guest data

property ansible_host_vars: dict[str, Any]

Get host variables for Ansible inventory with SSH-specific variables.

cli_invocation: 'tmt.cli.CliInvocation' | None = None
execute(command: Command | ShellScript, cwd: Path | None = None, env: Environment | None = None, friendly_command: str | None = None, test_session: bool = False, tty: bool = False, silent: bool = False, log: LoggingFunction | None = None, interactive: bool = False, on_process_start: Callable[[Command, Popen[bytes], Logger], None] | None = None, on_process_end: Callable[[Command, Popen[bytes], CommandOutput, Logger], None] | None = None, sourced_files: list[Path] | None = None, **kwargs: Any) CommandOutput

Execute a command on the guest.

Parameters:
  • command – either a command or a shell script to execute.

  • cwd – execute command in this directory on the guest.

  • env – if set, set these environment variables before running the command.

  • friendly_command – nice, human-friendly representation of the command.

property is_ready: bool

Detect whether the guest is ready or not.

property is_ssh_multiplexing_enabled: bool

Whether SSH multiplexing should be used

key: list[Path]
password: str | None
port: int | None
pull(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None) None

Pull files from the guest.

By default the whole plan workdir is synced from the same location on the guest. Use the source and destination to sync custom locations.

Parameters:
  • source – if set, this path will be downloaded from the guest. If not set, plan workdir is downloaded.

  • destination – if set, content will be downloaded to this path. If not set, root (/) is used.

  • options – custom transfer options to use instead of DEFAULT_PULL_OPTIONS.

push(source: Path | None = None, destination: Path | None = None, options: TransferOptions | None = None, superuser: bool = False) None

Push files to the guest.

By default the whole plan workdir is synced to the same location on the guest. Use the source and destination to sync custom locations.

Parameters:
  • source – if set, this path will be uploaded to the guest. If not set, plan workdir is uploaded.

  • destination – if set, content will be uploaded to this path. If not set, root (/) is used.

  • options – custom transfer options to use instead of DEFAULT_PUSH_OPTIONS.

  • superuser – if set, use sudo if user is not privileged. It is necessary for pushing to locations that only privileged users are allowed to modify.
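
A usage sketch with custom transfer options; paths are illustrative and guest is assumed to be provisioned:

from tmt._compat.pathlib import Path
from tmt.steps.provision import TransferOptions

# Upload a local directory to a privileged location on the guest.
guest.push(
    source=Path("./artifacts"),
    destination=Path("/usr/local/share/artifacts"),
    options=TransferOptions(recursive=True, create_destination=True),
    superuser=True,
)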

reboot(mode: Literal[RebootMode.HARD] = RebootMode.HARD, command: None = None, waiting: Waiting | None = None) bool
reboot(mode: Literal[RebootMode.SOFT, RebootMode.SYSTEMD_SOFT] = RebootMode.SOFT, command: Command | ShellScript | None = None, waiting: Waiting | None = None) bool

Reboot the guest, and wait for the guest to recover.

Parameters:
  • mode – which boot mode to perform.

  • command – a command to run on the guest to trigger the reboot. Only usable when mode is not RebootMode.HARD.

  • waiting – deadline for the reboot.

Returns:

True if the reboot succeeded, False otherwise.

remove() None

Remove the guest

Completely remove all guest instance data so that it does not consume any disk resources.

setup() None

Setup the guest

Setup the guest after it has been started. It is called after Guest.start().

ssh_option: list[str]
stop() None

Stop the guest

Shut down a running guest instance so that it does not consume any memory or cpu resources. If needed, perform any actions necessary to store the instance status to disk.

suspend() None

Suspend the guest.

Perform any actions necessary before quitting step and tmt. The guest may be reused by future tmt invocations.

user: str | None
class tmt.steps.provision.GuestSshData(_OPTIONLESS_FIELDS: tuple[str, ...] = ('primary_address', 'topology_address', 'facts'), primary_address: str | None = None, topology_address: str | None = None, role: str | None = None, become: bool = False, facts: ~tmt.steps.provision.GuestFacts = <factory>, environment: ~tmt.utils.Environment = <factory>, hardware: ~tmt.hardware.Hardware | None = None, ansible: ~tmt.ansible.GuestAnsible | None = None, port: int | None = None, user: str = 'root', key: list[~tmt._compat.pathlib.Path] = <factory>, password: str | None = None, ssh_option: list[str] = <factory>)

Bases: GuestData

Keys necessary to describe, create, save and restore a guest with SSH capability.

Derived from GuestData, this class adds keys relevant for guests that can be reached over SSH.

key: list[Path]
password: str | None = None
port: int | None = None
ssh_option: list[str]
user: str = 'root'
class tmt.steps.provision.Provision(*, plan: Plan, data: _RawStepData | list[_RawStepData], logger: Logger)

Bases: Step

Provision an environment for testing or use localhost.

Initialize provision step data

DEFAULT_HOW: str = 'virtual'
property ansible_inventory_path: Path

Get the path to the Ansible inventory. This property lazily generates the Ansible inventory file on first access.

Returns:

Path to the generated inventory.yaml file

cli_invocation: 'tmt.cli.CliInvocation' | None = None
cli_invocations: list['tmt.cli.CliInvocation'] = []
get_guests_info() list[tuple[str, str | None]]

Get a list containing the names and roles of guests that should be enabled.

go(force: bool = False) None

Provision all guests

guests: list[Guest]

All known guests.

Warning

Guests may not necessarily be fully provisioned. They are collected from plugins as soon as possible, and some guests may still be waiting for their infrastructure to finish the task. For the list of successfully provisioned guests, see ready_guests.

property is_multihost: bool
load() None

Load guest data from the workdir

property ready_guests: list[Guest]

All successfully provisioned guests.

Most of the time, after the provision step finishes successfully, the list should be the same as guests, i.e. it should contain all known guests. There are situations when ready_guests will be a subset of guests, and users must decide which collection best fits their goal (see the sketch after this list):

  • when provision is still running. ready_guests will be slowly gaining new guests as they get up and running.

  • in dry-run mode, no actual provisioning is expected to happen, therefore there are no unsuccessfully provisioned guests. In this mode, all known guests are considered as ready, and ready_guests is the same as guests.

  • if tmt is interrupted by a signal or user. Not all guests will finish their provisioning process, and ready_guests may contain just the finished ones.
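
A short sketch of consuming both collections, assuming provision is the step instance:

from tmt.utils import ShellScript

# Act only on guests that finished provisioning successfully.
for guest in provision.ready_guests:
    guest.execute(ShellScript("echo ready"), silent=True)

# Names of guests that are known but not (yet) ready.
pending = [
    guest.multihost_name
    for guest in provision.guests
    if guest not in provision.ready_guests
]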

save() None

Save guest data to the workdir

summary() None

Give a concise summary of the provisioning

suspend() None

Suspend the step.

Perform any actions necessary before quitting the step and tmt. The step may be revisited by future tmt invocations.

wake() None

Wake up the step (process workdir and command line)

class tmt.steps.provision.ProvisionPlugin(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)

Bases: GuestlessPlugin[ProvisionStepDataT, None]

Common parent of provision plugins

Store plugin name, data and parent step

classmethod base_command(usage: str, method_class: type[Command] | None = None) Command

Create base click command (common for all provision plugins)

classmethod clean_images(clean: Clean, dry: bool, workdir_root: Path) bool

Remove the images of one particular plugin

cli_invocation: 'tmt.cli.CliInvocation' | None = None
essential_requires() list[DependencySimple | DependencyFmfId | DependencyFile]

Collect all essential requirements of the guest implementation.

Essential requirements of a guest are necessary for the guest to be usable for testing.

By default, plugin’s guest class, ProvisionPlugin._guest_class, is asked to provide the list of required packages via Guest.requires() method.

Returns:

a list of requirements.

go(*, logger: Logger | None = None) None

Perform actions shared among plugins when beginning their tasks

property guest: Guest | None

Return the provisioned guest.

how: str = 'virtual'
opt(option: str, default: Any | None = None) Any

Get an option from the command line options

classmethod options(how: str | None = None) list[Callable[[Any], Any]]

Return list of options.

show(keys: list[str] | None = None) None

Show plugin details for given or all available keys

wake(data: GuestData | None = None) None

Wake up the plugin

Override data with command line options. Wake up the guest based on provided guest data.

class tmt.steps.provision.ProvisionQueue(name: str, logger: Logger)

Bases: Queue[ProvisionTask]

Queue class for running provisioning tasks

enqueue(*, phases: list[ProvisionPlugin[ProvisionStepData]], logger: Logger) None
class tmt.steps.provision.ProvisionStepData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, role: str | None = None, hardware: tmt.hardware.Hardware | None = None)

Bases: StepData

hardware: Hardware | None = None
role: str | None = None
class tmt.steps.provision.ProvisionTask(phases: list[ProvisionPlugin[ProvisionStepData]], logger: Logger)

Bases: GuestlessTask[None]

A task to run provisioning of multiple guests

go() Iterator[ProvisionTask]

Perform the task.

Called by Queue machinery to accomplish the task.

Invokes run() method to perform the task itself, and derived classes therefore must provide implementation of run method.

Yields:

instances of the same class, describing invocations of the task and their outcome. The task might be executed multiple times, depending on how exactly it was queued, and the method would yield corresponding results.

property name: str

A name of this task.

Left for child classes to implement, because the name depends on the actual task.

phase: ProvisionPlugin[ProvisionStepData] | None = None

When ProvisionTask instance is received from the queue, phase points to the phase that has been provisioned by the task.

phases: list[ProvisionPlugin[ProvisionStepData]]

Phases describing guests to provision. In the provision step, each phase describes one guest.

run(logger: Logger) None

Perform the task.

Called once from go(). Subclasses must implement their logic in this method rather than in go(), which is already provided.

tmt.steps.provision.REBOOT_TIMEOUT: int = 600

How many seconds to wait for a connection to succeed after guest reboot. This is the effective value, combining the default value with the optional TMT_REBOOT_TIMEOUT environment variable.

class tmt.steps.provision.RebootMode(*values)

Bases: Enum

HARD = 'hard'

A hardware-invoked reboot of the guest. Power off/power on kind of reboot.

SOFT = 'soft'

A software-invoked reboot of the guest. reboot or shutdown -r now kind of reboot.

SYSTEMD_SOFT = 'systemd-soft'

A software-invoked reboot of the guest userspace. systemd soft-reboot kind of reboot.

See https://www.freedesktop.org/software/systemd/man/latest/systemd-soft-reboot.service.html for systemd documentation on soft-reboot.

exception tmt.steps.provision.RebootModeNotSupportedError(message: str | None = None, guest: Guest | None = None, mode: RebootMode = RebootMode.SOFT, *args: Any, **kwargs: Any)

Bases: ProvisionError

A requested reboot mode is not supported by the guest

General error.

Parameters:
  • message – error message.

  • causes – optional list of exceptions that caused this one. Since raise ... from ... allows only for a single cause, and some of our workflows may raise exceptions triggered by more than one exception, we need a mechanism for storing them. Our reporting will honor this field, and report causes the same way as __cause__.

tmt.steps.provision.SSH_MASTER_SOCKET_LENGTH_LIMIT = 84

SSH master socket path is limited to this many characters.

  • UNIX socket path is limited to either 108 or 104 characters, depending on the platform. See man 7 unix and/or kernel sources, for example.

  • SSH client processes may append a “connection hash” to the path when connecting to the master; that adds a couple of characters we need to leave space for.

tmt.steps.provision.SSH_MASTER_SOCKET_MAX_HASH_LENGTH = 64

A maximal number of characters of guest ID hash used by _socket_path_hash() when looking for a free SSH socket filename.

tmt.steps.provision.SSH_MASTER_SOCKET_MIN_HASH_LENGTH = 4

A minimal number of characters of guest ID hash used by _socket_path_hash() when looking for a free SSH socket filename.

tmt.steps.provision.STAT_BTIME_PATTERN = re.compile('btime\\s+(\\d+)')

A pattern to extract btime from /proc/stat file.
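
For illustration, matching the pattern against a /proc/stat snippet:

import re

STAT_BTIME_PATTERN = re.compile(r'btime\s+(\d+)')

sample = "cpu  2255 34 2290 22625563\nbtime 1680000000\nprocesses 12345\n"
match = STAT_BTIME_PATTERN.search(sample)
if match:
    print(int(match.group(1)))  # boot time as a UNIX timestamp: 1680000000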

class tmt.steps.provision.TransferOptions(chmod: int | None = None, compress: bool = False, delete: bool = False, exclude: list[str] = <factory>, links: bool = False, preserve_perms: bool = False, protect_args: bool = False, recursive: bool = False, relative: bool = False, safe_links: bool = False, create_destination: bool = False)

Bases: object

Options for transferring files to/from the guest.

chmod: int | None = None

Apply permissions to the destination files

compress: bool = False

Enable compression during transfer

copy() TransferOptions

Create a copy of the options.

create_destination: bool = False

Run a mkdir -p of the destination before doing transfer

delete: bool = False

Delete extraneous files from destination directory

exclude: list[str]

Exclude files matching any of these patterns

links: bool = False

Copy symlinks as symlinks

preserve_perms: bool = False

Preserve file permissions

protect_args: bool = False

Protect file and directory names from interpretation

recursive: bool = False

Recurse into directories

relative: bool = False

Use relative paths

safe_links: bool = False

Ignore symlinks that point outside the source tree

to_rsync() list[str]

Convert to rsync command line options.
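
A small sketch of building options and converting them for rsync; the exact flags produced by to_rsync() depend on the selected fields:

from tmt.steps.provision import TransferOptions

# Recursive, compressed transfer that skips compiled Python files.
options = TransferOptions(recursive=True, compress=True, exclude=["*.pyc"])
print(options.to_rsync())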

tmt.steps.provision.configure_ssh_options() list[str | Path]

Extract custom SSH options from environment variables
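
A sketch of supplying extra options through the environment; the TMT_SSH_* naming follows tmt’s documented convention, but treat it as an assumption and verify against your tmt version:

import os

# Hypothetical example: request a longer connection timeout.
os.environ["TMT_SSH_CONNECT_TIMEOUT"] = "120"

from tmt.steps.provision import configure_ssh_options

# The returned list is expected to contain the corresponding -o option.
print(configure_ssh_options())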

tmt.steps.provision.default_connect_waiting() Waiting

Create default waiting context for connecting to the guest.

tmt.steps.provision.default_reboot_waiting() Waiting

Create default waiting context for guest reboots.

tmt.steps.provision.default_reconnect_waiting() Waiting

Create default waiting context for guest reconnect.

tmt.steps.provision.essential_ansible_requires() list[DependencySimple | DependencyFmfId | DependencyFile]

Return essential requirements for running Ansible modules

tmt.steps.provision.format_guest_full_name(name: str, role: str | None) str

Render guest’s full name, i.e. name and its role

tmt.steps.provision.normalize_hardware(key_address: str, raw_hardware: None | Any | Hardware, logger: Logger) Hardware | None

Normalize a hardware key value.

Parameters:
  • key_address – location of the key being that’s being normalized.

  • logger – logger to use for logging.

  • raw_hardware – input from either command line or fmf node.