TestRail CLI

The TestRail CLI is a command line interface tool that allows you to effortlessly upload test automation results from any JUnit-style XML file to TestRail.

Overview

You can use the TestRail CLI to parse test automation results from nearly any test automation tool or framework (as long as that tool can export test results in a JUnit-style XML file). Some of the most widely used test automation frameworks that can easily generate reports compatible with the TestRail CLI include:

  • JUnit
  • TestNG
  • Cypress.io
  • Playwright
  • Robot Framework
  • Pytest
  • NUnit

By parsing and uploading JUnit-style test results into TestRail directly from the command line or by running the CLI as part of an automated build pipeline, you can focus on writing test code instead of having to worry about API calls and other technical details related to uploading test results.
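
For example, if you use Pytest, its built-in JUnit XML reporter can generate a compatible results file in one step (the file name results.xml below is only an illustration; most of the frameworks listed above offer an equivalent reporter option):

$ pytest --junitxml=results.xml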

The TestRail CLI is an open source project hosted on GitHub, so anyone can contribute by creating issues or even developing code to implement new features or improvements. You can read further instructions in our README.

Installing the TestRail CLI

The TestRail CLI is programming-language agnostic when it comes to uploading your test results, but the tool itself is developed in Python and can be installed from the public Python Package Index (PyPI). To install it on your machine or build agent, we recommend downloading the Python 3.10.4 installer matching your operating system and following the install wizard, making sure pip is also installed. To make sure the install was successful, try executing the commands python --version and pip --version from your command line; they should output their versions.
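
For reference, the version checks should produce output similar to the lines below (the version numbers and install path are only illustrative and will depend on your setup):

$ python --version
Python 3.10.4
$ pip --version
pip 22.0.4 from /usr/lib/python3.10/site-packages/pip (python 3.10)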

After you’re done installing Python, installing the TestRail CLI is as easy as running one line on your system’s command line.

$ pip install trcli
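
To confirm the TestRail CLI itself was installed and is available on your PATH, you can print its help text; the full output is reproduced in the CLI reference section at the end of this article.

$ trcli --help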

Configuring your TestRail instance

Before using the TestRail CLI, you first need to configure your TestRail instance as follows:

  1. Enable the TestRail API for your instance. You can do this by accessing Administration > Site Settings, clicking on the API tab, and checking the Enable API option. You can refer to the API Introduction page for more information, and you can verify API access with the example request shown after this list.

    Enable TestRail API

  2. Create a custom test case field to map the automated test cases in your code to cases in TestRail. This will allow you to upload test results to TestRail without duplicating test cases or having to write TestRail case IDs in your test automation code. You can create a new custom field by accessing Administration > Customizations and clicking Add Field. After you’ve reached the field creation screen, as shown in the image below, make sure the field meets the two requirements listed below. If you need more information, please refer to the Configuring custom fields page.
    • System Name must be automation_id
    • Type must be String

    Create custom case field
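
As a quick sanity check for step 1, you can call any read-only TestRail API endpoint, such as get_projects, with your username and password or API key. The sketch below reuses the placeholder instance name and credentials from the examples in this article:

$ curl -H "Content-Type: application/json" \
>    -u "user@domain.com:passwordORapikey" \
>    "https://INSTANCE-NAME.testrail.io/index.php?/api/v2/get_projects"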

Using the TestRail CLI to upload test automation results

The TestRail CLI is designed to be simple and efficient. Once it has been installed and your TestRail instance is properly configured, a JUnit results file can be passed through the command line to quickly create a run, add test results, and even automatically create cases for automated test cases that do not yet exist on TestRail.

Using the sample JUnit XML report below, you can execute a simple command such as the one shown after it to send the test results to TestRail. Notice the -y option on the command, which skips the prompts by automatically confirming the creation of new test cases. This is useful when running the TestRail CLI through your CI tools.

It is recommended that you create an API key under My Settings on your account and use it instead of your password. This will prevent you from exposing your password and thereby creating a security risk.

<testsuites name="test suites root">
  <testsuite failures="0" errors="0" skipped="1" tests="1" time="0.05" name="tests.LoginTests">
    <properties>
      <property name="setting1" value="True"/>
    </properties>
    <testcase classname="tests.LoginTests" name="test_case_1" time="159">
      <skipped type="pytest.skip" message="Please skip">
        skipped by user
      </skipped>
    </testcase>
    <testcase classname="tests.LoginTests" name="test_case_2" time="650">
    </testcase>
    <testcase classname="tests.LoginTests" name="test_case_3" time="159">
      <failure type="pytest.failure" message="Fail due to...">
        failed due to…
      </failure>
    </testcase>
  </testsuite>
</testsuites>
$ trcli -y \
>    -h https://INSTANCE-NAME.testrail.io \
>    --project "TRCLI Test" \
>    --username user@domain.com \
>    --password passwordORapikey \
>    parse_junit \
>    --title "Automated Tests Run" \
>    -f results.xml

Checking project. Done.
Adding missing sections to the suite.
Found test cases not matching any TestRail case (count: 3)
Adding missing test cases to the suite.
Adding test cases: 3/3, Done.
Creating test run. Done.
Adding results: 3/3, Done.
Submitted 3 test results in 6.5 secs.

Once the import process is complete, if you go to the Test Cases page in your TestRail project, you’ll notice that the TestRail CLI automatically created the test cases that were in your test results report. Notice that it added a unique Automation ID for each test by combining the classname and name attributes from the JUnit report using the pattern classname.name. This Automation ID is used to map the tests in your automation code base to the test cases on TestRail. This means that each time you run the TestRail CLI, it first attempts to match an existing test case on TestRail and only creates a new one if there is no test case with that Automation ID.

Example:

Test Result from Automation Results File:

<testcase classname="tests.LoginTests" name="test_case_1" time="159">
</testcase>

Automation ID in TestRail:

tests.LoginTests.test_case_1

To make sure your test case mapping is on point, please be aware of the following:

  • If you would like to upload automation results for test cases that already exist in TestRail, be sure to update the automation_id field for those test cases before uploading your automation results (see the example request after this list).
  • If you later change the test name or location in your automation suite, a new test case will be created in TestRail, unless you also update the automation_id field for the test case in TestRail.
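
If you prefer to set the Automation ID through the TestRail API rather than the UI, a request along the lines of the sketch below can be used. It assumes the custom field created earlier is exposed by the API as custom_automation_id (custom case fields are prefixed with custom_) and uses a placeholder case ID of 1:

$ curl -X POST \
>    -H "Content-Type: application/json" \
>    -u "user@domain.com:passwordORapikey" \
>    -d '{"custom_automation_id": "tests.LoginTests.test_case_1"}' \
>    "https://INSTANCE-NAME.testrail.io/index.php?/api/v2/update_case/1"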

Test Cases

If you go to the Test Runs and Results page you will see a new run titled Automated Tests Run within the TRCLI Test project.

Test Run

By opening the test run, you can see the results for each test case. You can then drill further into a failed test case and check the error message that was imported directly from the JUnit report, which gives you a quick overview of what went wrong during the test.

Test results

Other useful features

Using config files to store alternate configurations

You can quickly and easily submit results to different instances or projects, use different credentials, or apply other preset parameters by using an alternate config file. The configuration file is written in YAML format, named config.yml, and stored in the same directory as the TRCLI executable file, unless otherwise specified. Environment variables can also be used. If a configuration file is referenced in the command, the parameters within the configuration file override the environment variables, and any parameters specified directly in the command override the config file.

The following example displays the use of an alternate configuration file that stores user credentials and other preset parameters:

host: https://INSERT-INSTANCE-NAME.testrail.io 
project: TRCLI Test
username: username@domain.com 
password: passwordORapikey
title: Automated Tests Run
$ trcli -y \
>    --config alternate_config.yaml \
>    parse_junit \
>    -f results.xml

Checking project. Done.
Creating test run. Done.
Adding results: 3/3, Done.
Closing test run. Done.
Submitted 3 test results in 3.2 secs.
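
Because parameters specified in the command override the config file, you can reuse the same alternate_config.yaml and still change individual values per run. For instance, the hypothetical title below replaces the title stored in the file:

$ trcli -y \
>    --config alternate_config.yaml \
>    parse_junit \
>    --title "Nightly Automated Tests Run" \
>    -f results.xml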

Updating test results on existing test run

Imagine you have already run the TestRail CLI and your automated test results are on TestRail, but some of the tests failed and you want to rerun them and update the existing test run with the new results. To do so, you just need to pass the --run-id argument and the TestRail CLI will update the test run with that ID.

$ trcli -y \
>    -h https://INSERT-INSTANCE-NAME.testrail.io \
>    --project "TRCLI Test" \
>    --username user@domain.com \
>    --password passwordORapikey \
>    parse_junit \
>    --title "Automated Tests Run" \
>    --run-id 32 \
>    -f results.xml

Checking project. Done.
Adding results: 3/3, Done.
Closing test run. Done.
Submitted 3 test results in 2.5 secs.

You should now see the new test result on the test details panel. This is one way to keep track of your automated test results under the same test run.

Test result updated

Closing the test run

If you want to immediately close your newly created test run, you can simply pass the --close-run argument and the TestRail CLI will perform that action after all the results have been added. This is useful if you don’t want to allow changes to be made to the results after the run has finished.

$ trcli -y \
>    -h https://INSERT-INSTANCE-NAME.testrail.io \
>    --project "TRCLI Test" \
>    --username username@domain.com \
>    --password passwordORapikey \
>    parse_junit \
>    --title "Automated Tests Run" \
>    --close-run \
>    -f results.xml

Checking project. Done.
Creating test run. Done.
Adding results: 3/3, Done.
Closing test run. Done.
Submitted 3 test results in 3.2 secs.

You can find your test run under the Completed test runs section.

Test run closed

CLI reference

General reference 

$ trcli --help
Usage: trcli [OPTIONS] COMMAND [ARGS]...

  TestRail CLI

Options:
  -c, --config       Optional path definition for testrail-credentials file or
                     CF file.
  -h, --host         Hostname of instance.
  --project          Name of project the Test Run should be created under.
  --project-id       Project id. Will be only used in case project name will
                     be duplicated in TestRail [x>=1]
  -u, --username     Username.
  -p, --password     Password.
  -k, --key          API key.
  -v, --verbose      Output all API calls and their results.
  --verify           Verify the data was added correctly.
  --insecure         Allow insecure requests.
  -b, --batch-size   Configurable batch size. [default: (50); x>=2]
  -t, --timeout      Batch timeout duration. [default: (30); x>=0]
  -y, --yes          answer 'yes' to all prompts around auto-creation
  -n, --no           answer 'no' to all prompts around auto-creation
  -s, --silent       Silence stdout
  --help             Show this message and exit.

JUnit results upload reference

$ trcli parse_junit --help
Usage: trcli parse_junit [OPTIONS]

  Parse report files and upload results to TestRail

Options:
  -f, --file            Filename and path.
  --close-run           Whether to close the newly created run
  --title               Title of Test Run to be created in TestRail.
  --suite-id            Suite ID for the results they are reporting. [x>=1]
  --run-id              Run ID for the results they are reporting (otherwise
                        the tool will attempt to create a new run). [x>=1]
  --case-fields         List of case fields and values for new test cases
                        creation. Usage: --case-fields type_id:1
                        --case-fields priority_id:3
  --help                Show this message and exit.

What next?

Now that you have centralized your test results on TestRail, not only can you check the results of your automated test runs, along with the error messages for failed tests, but you can also aggregate both your manual and automated testing efforts on reports that show you the full test coverage surrounding your app and even track test automation progress. You can also report a bug directly from the automated test result to an issue tracker of your preference as you would do for your manual test results!

You can look into TestRail’s Reports and Test Metrics video to learn how you can leverage TestRail’s reporting capabilities.
