tmt.steps.report package
Submodules
tmt.steps.report.display module
- class tmt.steps.report.display.ReportDisplay(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ReportPlugin[ReportDisplayData]
Show test results on the terminal.
Give a concise summary of test results directly on the terminal. Allows selecting the desired level of verbosity.
tmt run -l report        # overall summary only
tmt run -l report -v     # individual test results
tmt run -l report -vv    # show full paths to logs
tmt run -l report -vvv   # provide complete test output
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.report.display.ReportDisplayData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, display_guest: str = 'auto')
Bases: ReportStepData
- display_guest: str = 'auto'
- class tmt.steps.report.display.ResultRenderer(basepath: ~tmt._compat.pathlib.Path, logger: ~tmt.log.Logger, shift: int, verbosity: int = 0, display_guest: bool = True, variables: dict[str, ~typing.Any] = <factory>, result_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }}\n{%- if CONTEXT.display_guest %} (on {{ RESULT.guest | guest_full_name }}){% endif %}\n{%- if PROGRESS is defined %} {{ PROGRESS }}{% endif %}\n', result_check_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }} ({{ RESULT.event.value }} check)\n', subresult_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }} (subresult)\n', subresult_check_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }} ({{ RESULT.event.value }} check)\n', note_template: str = '\n{{ "Note:" | style(fg="yellow") }} {{ NOTE_LINES.pop(0) }}\n{% for line in NOTE_LINES %}\n {{ line }}\n{% endfor %}\n')
Bases: object
A rendering engine for turning results into a printable representation.
- basepath: Path
A base path for all log references.
- display_guest: bool = True
Whether the guest from which the results originated should be printed out.
- note_template: str = '\n{{ "Note:" | style(fg="yellow") }} {{ NOTE_LINES.pop(0) }}\n{% for line in NOTE_LINES %}\n {{ line }}\n{% endfor %}\n'
- render_check_result(result: CheckResult, template: str) Iterator[str]
Render a single test check result.
- render_check_results(results: Iterable[CheckResult], template: str) Iterator[str]
Render test check results.
- static render_log_content(log: Path) Iterator[str]
Render log info and content of a single log.
- static render_log_info(log: Path) Iterator[str]
Render info about a single log.
- render_logs_content(result: BaseResult) Iterator[str]
Render log info and content of result logs.
- render_logs_info(result: BaseResult) Iterator[str]
Render info about result logs.
- render_note(note: str) Iterator[str]
Render a single result note.
- render_notes(result: BaseResult) Iterator[str]
Render result notes.
- result_check_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }} ({{ RESULT.event.value }} check)\n'
- result_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }}\n{%- if CONTEXT.display_guest %} (on {{ RESULT.guest | guest_full_name }}){% endif %}\n{%- if PROGRESS is defined %} {{ PROGRESS }}{% endif %}\n'
- shift: int
Default shift of all rendered lines.
- subresult_check_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }} ({{ RESULT.event.value }} check)\n'
- subresult_header_template: str = '\n{{ RESULT | format_duration | style(fg="cyan") }} {{ OUTCOME | style(fg=OUTCOME_COLOR) }} {{ RESULT.name }} (subresult)\n'
- variables: dict[str, Any]
Additional variables to use when rendering templates.
- verbosity: int = 0
When 2 or more, log info (name and path) is printed out. When 3 or more, log output is printed out as well.
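For illustration, a minimal sketch of driving the renderer directly, assuming a tmt.log.Logger instance and a finished result are at hand; 'logger' and 'result' are placeholders, and only the attributes and methods listed above are used:

    from pathlib import Path

    from tmt.steps.report.display import ResultRenderer

    # Placeholders: 'logger' is a tmt.log.Logger, 'result' a result from a run.
    renderer = ResultRenderer(
        basepath=Path('/var/tmp/tmt/run-001/report'),  # base path for log references
        logger=logger,
        shift=1,       # indent all rendered lines by one level
        verbosity=2,   # 2+ prints log names and paths; 3+ prints log output too
    )
    for line in renderer.render_logs_info(result):
        print(line)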
tmt.steps.report.html module
- class tmt.steps.report.html.ReportHtml(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ReportPlugin[ReportHtmlData]
Format test results into an HTML report.
Create a local html file with test results arranged in a table. Optionally open the page in the default browser.
Example config:
# Enable html report from the command line
tmt run --all report --how html
tmt run --all report --how html --open
tmt run -l report -h html -o

# Use html as the default report for given plan
report:
    how: html
    open: true
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.report.html.ReportHtmlData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, file: tmt._compat.pathlib.Path | None = None, open: bool = False, absolute_paths: bool = False, display_guest: str = 'auto')
Bases: ReportStepData
- absolute_paths: bool = False
- display_guest: str = 'auto'
- file: Path | None = None
- open: bool = False
tmt.steps.report.junit module
- class tmt.steps.report.junit.ImplementProperties
Bases: object
Define a properties attribute.
This class can be used to easily add a properties attribute by inheriting from it; a minimal sketch follows the member listing below.
- class PropertyDict
Bases: TypedDict
Defines a property dict, which gets propagated into the final template properties.
- name: str
- value: str
- property properties: list[PropertyDict]
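A minimal sketch of the inheritance pattern, with a hypothetical subclass and payload; only the documented properties attribute is assumed:

    from tmt.steps.report.junit import ImplementProperties

    # Hypothetical subclass: gains the 'properties' attribute by inheritance.
    class WrappedPayload(ImplementProperties):
        def __init__(self, payload: str) -> None:
            super().__init__()
            self.payload = payload

    wrapped = WrappedPayload('example')
    # Each entry is a PropertyDict: a plain dict with 'name' and 'value' keys.
    print(wrapped.properties)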
- class tmt.steps.report.junit.ReportJUnit(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ReportPlugin[ReportJUnitData]
Save test results in the chosen JUnit flavor format.
When flavor is set to custom, the template-path with a path to a custom template must be provided.
When file is not specified, output is written into a file named junit.xml located in the current workdir.
# Enable junit report from the command line
tmt run --all report --how junit
tmt run --all report --how junit --file test.xml

# Use junit as the default report for given plan
report:
    how: junit
    file: test.xml
Store plugin name, data and parent step
- check_options() None
Check the module options
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.report.junit.ReportJUnitData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, file: tmt._compat.pathlib.Path | None = None, flavor: str = 'default', template_path: tmt._compat.pathlib.Path | None = None, prettify: bool = True, include_output_log: bool = True)
Bases: ReportStepData
- file: Path | None = None
- flavor: str = 'default'
- include_output_log: bool = True
- prettify: bool = True
- template_path: Path | None = None
- class tmt.steps.report.junit.ResultWrapper(wrapped: Result | SubResult, subresults_context_class: type[ResultsContext])
Bases: ImplementProperties
The context wrapper for tmt.Result.
Adds the possibility to wrap tmt.Result and dynamically add more attributes which become available inside the template context.
- property subresult: ResultsContext
Override the tmt.Result.subresult and wrap all the tmt.result.SubResult instances into the ResultsContext.
- class tmt.steps.report.junit.ResultsContext(results: list[Result] | list[SubResult])
Bases: ImplementProperties
The results context for Jinja templates.
A class which keeps the results context (especially the result summary) for a JUnit template. It wraps all the tmt.Result instances into the ResultWrapper; a usage sketch follows the property listing below.
Decorate/wrap all the Result and SubResult instances with more attributes
- property duration: int
Returns the total duration of all tests in seconds
- property errored: list[ResultWrapper]
Returns results of tests with error/warn outcome
- property failed: list[ResultWrapper]
Returns results of failed tests
- property passed: list[ResultWrapper]
Returns results of passed tests
- property skipped: list[ResultWrapper]
Returns results of skipped tests
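A minimal sketch of reading the summary properties; 'results' stands in for a list of tmt.Result instances collected elsewhere:

    from tmt.steps.report.junit import ResultsContext

    # Placeholder: 'results' is a list[tmt.result.Result] from a finished run.
    context = ResultsContext(results)
    print(f'passed:  {len(context.passed)}')
    print(f'failed:  {len(context.failed)}')
    print(f'errored: {len(context.errored)}')
    print(f'total duration: {context.duration} seconds')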
- tmt.steps.report.junit.make_junit_xml(phase: ReportPlugin[Any], flavor: str = 'default', template_path: Path | None = None, include_output_log: bool = True, prettify: bool = True, results_context: ResultsContext | None = None, **extra_variables: Any) str
Create a JUnit XML file and return the data; a usage sketch follows the parameter list below.
- Parameters:
phase – instance of a tmt.steps.report.ReportPlugin.
flavor – name of a JUnit flavor to generate.
template_path – if set, the provided template will be used instead of a pre-defined flavor template. In this case, flavor must be set to the custom value.
include_output_log – if enabled, the <system-out> tags are included in the final template output.
prettify – allows controlling the XML pretty print.
results_context – if set, the provided ResultsContext is used in the template.
extra_variables – if set, these variables get propagated into the Jinja template.
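A minimal sketch of calling the function from within a report plugin; 'phase' is a placeholder for the running plugin instance:

    from pathlib import Path

    from tmt.steps.report.junit import make_junit_xml

    # Placeholder: 'phase' is the ReportPlugin instance driving the report.
    xml = make_junit_xml(
        phase=phase,
        flavor='default',
        include_output_log=True,
        prettify=True,
    )
    Path('junit.xml').write_text(xml)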
tmt.steps.report.polarion module
- class tmt.steps.report.polarion.ReportPolarion(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ReportPlugin[ReportPolarionData]
Write test results into an xUnit file and upload to Polarion.
To get started quickly, create a pylero config file ~/.pylero in your home directory with the following content:
[webservice]
url=https://{your polarion web URL}/polarion
svn_repo=https://{your polarion web URL}/repo
default_project={your project name}
user={your username}
password={your password}
See the Pylero documentation for more details on how to configure the pylero module: https://github.com/RedHatQE/pylero
Note
For the Polarion report to export correctly you need to use password authentication, since exporting the report happens through the Polarion XUnit importer, which does not support tokens. You can still authenticate with a token to only generate the report, using the --no-upload argument.
Note
Your Polarion project might need a custom value format for the arch, planned-in and other fields. The format of these fields might differ across Polarion projects; for example, x8664 can be used instead of x86_64 for the architecture.
Examples:
# Enable polarion report from the command line
tmt run --all report --how polarion --project-id tmt
tmt run --all report --how polarion --project-id tmt --no-upload --file test.xml

# Use polarion as the default report for given plan
report:
    how: polarion
    file: test.xml
    project-id: tmt
    title: tests_that_pass
    planned-in: RHEL-9.1.0
    pool-team: sst_tmt
Store plugin name, data and parent step
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- class tmt.steps.report.polarion.ReportPolarionData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, file: tmt._compat.pathlib.Path | None = None, upload: bool = True, project_id: str | None = None, title: str | None = None, description: str | None = None, template: str | None = None, use_facts: bool = False, planned_in: str | None = None, assignee: str | None = None, pool_team: str | None = None, arch: str | None = None, platform: str | None = None, build: str | None = None, sample_image: str | None = None, logs: str | None = None, compose_id: str | None = None, test_cycle: str | None = None, fips: bool = False, prettify: bool = True, include_output_log: bool = True)
Bases: ReportStepData
- arch: str | None = None
- assignee: str | None = None
- build: str | None = None
- compose_id: str | None = None
- description: str | None = None
- file: Path | None = None
- fips: bool = False
- include_output_log: bool = True
- logs: str | None = None
- planned_in: str | None = None
- platform: str | None = None
- pool_team: str | None = None
- prettify: bool = True
- project_id: str | None = None
- sample_image: str | None = None
- template: str | None = None
- test_cycle: str | None = None
- title: str | None = None
- upload: bool = True
- use_facts: bool = False
tmt.steps.report.reportportal module
- class tmt.steps.report.reportportal.LogFilterSettings(size: 'Size' = <Quantity(1, 'megabyte')>, is_traceback: bool = False)
Bases: object
- is_traceback: bool = False
- size: Size = <Quantity(1, 'megabyte')>
- class tmt.steps.report.reportportal.ReportReportPortal(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: ReportPlugin[ReportReportPortalData]
Report test results and their subresults to a ReportPortal instance via the API.
Communication with the ReportPortal API requires the following options:
- token for authentication
- url of the ReportPortal instance
- project name
In addition to command line options it’s possible to use environment variables:
export TMT_PLUGIN_REPORT_REPORTPORTAL_${MY_OPTION}=${MY_VALUE}

# Boolean options are activated with value of 1:
TMT_PLUGIN_REPORT_REPORTPORTAL_SUITE_PER_PLAN=1
Assuming the URL and token are provided by the environment variables, the plan config can look like this:
# Use ReportPortal as the default report for given plan
report:
    how: reportportal
    project: baseosqe

# Report context attributes for given plan
context:
    ...
environment:
    ...

# Report description, contact, id and environment variables for given test
summary: ...
contact: ...
id: ...
environment: ...
The context and environment sections must be filled with the corresponding data in order to report the context as attributes (arch, component, distro, trigger, compose, etc.) and the environment variables as parameters in the Item Details.
Other reported fmf data are summary, id, web link and contact per test.
Two types of data structures are supported for reporting to ReportPortal:
- launch-per-plan mapping (default) that results in a launch-test structure.
- suite-per-plan mapping that results in a launch-suite-test structure.
Supported report use cases:
- Report a new run in a launch-suite-test or launch-test structure
- Report an additional rerun with the launch-rerun option and the same launch name (-> Retry items), or by reusing the run and reporting with the again option (-> append logs)
- To see plan progress, discover and report an empty (IDLE) run, then reuse the run for execution and update the report with the again option
- Report contents of a new run to an existing launch via the URL ID in three ways: tests to launch, suites to launch and tests to suite.
Example:
# Enable ReportPortal report from the command line depending on the use case:

## Simple upload with project, url endpoint and user token passed on the command line
tmt run --all report --how reportportal --project=baseosqe --url="https://reportportal.xxx.com" --token="abc...789"

## Simple upload with url and token exported in environment variables
tmt run --all report --how reportportal --project=baseosqe

## Upload with project name in fmf data, filtering out parameters (environment variables)
## that tend to be unique and break the history aggregation
tmt run --all report --how reportportal --exclude-variables="^(TMT|PACKIT|TESTING_FARM).*"

## Upload all plans as suites into one ReportPortal launch
tmt run --all report --how reportportal --suite-per-plan --launch=Errata --launch-description="..."

## Rerun the launch with suite structure for the test results to be uploaded
## into the latest launch with the same name as a new 'Retry' tab
## (mapping based on unique paths)
tmt run --all report --how reportportal --suite-per-plan --launch=Errata --launch-rerun

## Rerun the tmt run and append the new result logs under the previous one
## uploaded in ReportPortal (precise mapping)
tmt run --id run-012 --all report --how reportportal --again

## Additional upload of new suites into given launch with suite structure
tmt run --all report --how reportportal --suite-per-plan --upload-to-launch=4321

## Additional upload of new tests into given launch with non-suite structure
tmt run --all report --how reportportal --launch-per-plan --upload-to-launch=1234

## Additional upload of new tests into given suite
tmt run --all report --how reportportal --upload-to-suite=123456

## Upload Idle tests, then execute and add result logs into the prepared empty tests
tmt run discover report --how reportportal --defect-type=Idle
tmt run --last --all report --how reportportal --again
Store plugin name, data and parent step
- TMT_TO_RP_RESULT_STATUS = {ResultOutcome.ERROR: 'FAILED', ResultOutcome.FAIL: 'FAILED', ResultOutcome.INFO: 'SKIPPED', ResultOutcome.PASS: 'PASSED', ResultOutcome.PENDING: 'SKIPPED', ResultOutcome.SKIP: 'SKIPPED', ResultOutcome.WARN: 'FAILED'}
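For example, the mapping above can be used to translate a tmt outcome into the corresponding ReportPortal status; this sketch assumes ResultOutcome is importable from tmt.result:

    from tmt.result import ResultOutcome
    from tmt.steps.report.reportportal import ReportReportPortal

    # Both WARN and ERROR map to the ReportPortal 'FAILED' status.
    status = ReportReportPortal.TMT_TO_RP_RESULT_STATUS[ResultOutcome.WARN]
    assert status == 'FAILED'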
- append_description(curr_description: str) str
Extend text with the launch description (if provided)
- append_link_template(description: str, link_template: str, result: Result) str
Extend text with rendered link template
- check_options() None
Check options for known troublesome combinations
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- construct_item_attributes(attributes: list[dict[str, str]] | None = None, result: Result | None = None, test: Test | None = None) list[dict[str, str]]
- construct_launch_attributes(suite_per_plan: bool, attributes: list[dict[str, str]]) list[dict[str, str]]
- property datetime: str
- execute_rp_import(logger: Logger) None
Execute the import of tests, results and subresults into ReportPortal
- get_defect_type_locator(session: Session, defect_type: str | None) str
- go(*, logger: Logger | None = None) None
Report test results to the endpoint
Create a ReportPortal launch and its test items, fill it with all parts needed and report the logs.
- handle_response(response: Response) None
Check the endpoint response and raise an exception if needed
- property headers: dict[str, str]
- rp_api_get(session: Session, path: str) Response
- rp_api_post(session: Session, path: str, json: Any) Response
- rp_api_put(session: Session, path: str, json: Any) Response
- upload_result_logs(result: Result | SubResult, session: Session, item_uuid: str, launch_uuid: str, timestamp: str, write_out_failures: bool = True) None
Upload all result log files into the ReportPortal instance
- property url: str
- class tmt.steps.report.reportportal.ReportReportPortalData(name: str, how: str, order: int = 50, when: list[str] = <factory>, summary: str | None = None, url: str | None = None, token: str | None = None, project: str | None = None, launch: str | None = None, launch_description: str | None = None, launch_per_plan: bool = False, suite_per_plan: bool = False, upload_to_launch: str | None = None, upload_to_suite: str | None = None, launch_rerun: bool = False, defect_type: str | None = None, log_size_limit: 'Size' = <Quantity(1, 'megabyte')>, traceback_size_limit: 'Size' = <Quantity(50, 'kilobyte')>, exclude_variables: str = '^TMT_.*', api_version: str = 'v1', artifacts_url: str | None = None, ssl_verify: bool = True, upload_subresults: bool = False, link_template: str | None = None, upload_log_pattern: list[re.Pattern[str]] = <factory>, auto_analysis: bool = False, launch_url: str | None = None, launch_uuid: str | None = None, suite_uuid: str | None = None, test_uuids: dict[int, str] = <factory>)
Bases: ReportStepData
- api_version: str = 'v1'
- artifacts_url: str | None = None
- auto_analysis: bool = False
- defect_type: str | None = None
- exclude_variables: str = '^TMT_.*'
- launch: str | None = None
- launch_description: str | None = None
- launch_per_plan: bool = False
- launch_rerun: bool = False
- launch_url: str | None = None
- launch_uuid: str | None = None
- link_template: str | None = None
- log_size_limit: Size = <Quantity(1, 'megabyte')>
- project: str | None = None
- ssl_verify: bool = True
- suite_per_plan: bool = False
- suite_uuid: str | None = None
- test_uuids: dict[int, str]
- token: str | None = None
- traceback_size_limit: Size = <Quantity(50, 'kilobyte')>
- upload_log_pattern: list[Pattern[str]]
- upload_subresults: bool = False
- upload_to_launch: str | None = None
- upload_to_suite: str | None = None
- url: str | None = None
Module contents
- class tmt.steps.report.Report(*, plan: Plan, data: _RawStepData | list[_RawStepData] | None = None, name: str | None = None, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: Step
Provide test results overview and send reports.
Initialize and check the step data
- DEFAULT_HOW: str = 'display'
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- cli_invocations: list['tmt.cli.CliInvocation'] = []
- go(force: bool = False) None
Report the results
- summary() None
Give a concise report summary
- wake() None
Wake up the step (process workdir and command line)
- class tmt.steps.report.ReportPlugin(*, step: Step, data: StepDataT, workdir: Literal[True] | Path | None = None, logger: Logger)
Bases: GuestlessPlugin[ReportStepDataT, None]
Common parent of report plugins
Store plugin name, data and parent step
- classmethod base_command(usage: str, method_class: type[Command] | None = None) Command
Create base click command (common for all report plugins)
- cli_invocation: 'tmt.cli.CliInvocation' | None = None
- go(*, logger: Logger | None = None) None
Perform actions shared among plugins when beginning their tasks
- how: str = 'display'