Project author: qase-tms

Project description: Qase TMS Pytest Plugin
Language: Python
Repository: git://github.com/qase-tms/qase-pytest.git
Created: 2020-05-03T08:47:24Z
Project community: https://github.com/qase-tms/qase-pytest

License: Apache License 2.0


THIS REPO IS DEPRECATED. IT WAS MOVED HERE


Qase TMS Pytest Plugin


Installation

```shell
pip install qase-pytest
```

Usage

Configuration

Configuration can be provided either via pytest.ini/tox.ini parameters
or via command-line arguments:

  • Command-line arguments:

    --qase                           Use Qase TMS
    --qase-api-token=QS_API_TOKEN    API token for Qase TMS
    --qase-project=QS_PROJECT_CODE   Project code in Qase TMS
    --qase-testrun=QS_TESTRUN_ID     Testrun ID in Qase TMS
    --qase-testplan=QS_TESTPLAN_ID   Testplan ID in Qase TMS
    --qase-new-run                   Create new testrun, if no testrun ID provided
    --qase-debug                     Prints additional output of plugin
  • INI file parameters:

    qs_enabled (bool): default value for --qase
    qs_api_token (string): default value for --qase-api-token
    qs_project_code (string): default value for --qase-project
    qs_testrun_id (string): default value for --qase-testrun
    qs_testplan_id (string): default value for --qase-testplan
    qs_new_run (bool): default value for --qase-new-run
    qs_debug (bool): default value for --qase-debug
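Putting the INI parameters together, a pytest.ini might look like the following (the token and project code are placeholders, and the testrun ID is illustrative):

```ini
[pytest]
qs_enabled = true
qs_api_token = your-api-token-here
qs_project_code = PRJCODE
qs_testrun_id = 3
qs_new_run = false
qs_debug = true
```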

To link tests with test cases in Qase TMS, use the predefined decorator:

```python
from qaseio.pytest import qase

@qase.id(13)
def test_example_1():
    pass

@qase.id(12, 156)
def test_example_2():
    pass
```

You can pass as many IDs as you need.

Possible case statuses

  • PASSED - when test passed
  • FAILED - when test failed with AssertionError
  • BLOCKED - when test failed with any other exception
  • SKIPPED - when test has been skipped
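The mapping above can be illustrated with a small helper. This is not plugin code, just a sketch of the rules as stated (the function name and outcome convention are made up for the illustration):

```python
def qase_status(outcome):
    """Map a test outcome to a Qase case status.

    `outcome` is None for a pass, an exception instance for a failure,
    or the string "skipped" for a skipped test.
    """
    if outcome == "skipped":
        return "SKIPPED"
    if outcome is None:
        return "PASSED"
    if isinstance(outcome, AssertionError):
        return "FAILED"  # assertion failures map to FAILED
    return "BLOCKED"     # any other exception maps to BLOCKED

print(qase_status(None))                    # PASSED
print(qase_status(AssertionError("boom")))  # FAILED
print(qase_status(ValueError("oops")))      # BLOCKED
print(qase_status("skipped"))               # SKIPPED
```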

Add attachments to test results

When you need to push additional information to the server, you can use
attachments:

```python
import pytest
from selenium import webdriver
from qaseio.client.models import MimeTypes
from qaseio.pytest import qase

@pytest.fixture(scope="session")
def driver():
    driver = webdriver.Chrome()
    yield driver
    logs = "\n".join(str(row) for row in driver.get_log('browser')).encode('utf-8')
    qase.attach((logs, MimeTypes.TXT, "browser.log"))
    driver.quit()

@qase.id(13)
def test_example_1():
    qase.attach("/path/to/file", "/path/to/file/2")
    qase.attach(
        ("/path/to/file/1", "application/json"),
        ("/path/to/file/3", MimeTypes.XML),
    )

@qase.id(12, 156)
def test_example_2(driver):
    qase.attach((driver.get_screenshot_as_png(), MimeTypes.PNG, "result.png"))
```

You can pass as many files as you need.

Note that if no case ID is associated with the current test in
pytest, the attachment will not be uploaded:

```python
import pytest
from selenium import webdriver
from qaseio.client.models import MimeTypes
from qaseio.pytest import qase

@pytest.fixture(scope="session")
def driver():
    driver = webdriver.Chrome()
    yield driver
    logs = "\n".join(str(row) for row in driver.get_log('browser')).encode('utf-8')
    # This would do nothing, because the last test does not have a case ID link
    qase.attach((logs, MimeTypes.TXT, "browser.log"))
    driver.quit()

def test_example_2(driver):
    # This would do nothing
    qase.attach((driver.get_screenshot_as_png(), MimeTypes.PNG, "result.png"))
```

Linking code with steps

A test step can be linked to a function, or used as a context manager.
There are three ways to link code with a step:

  • position in case
  • step name
  • step unique hash
```python
from time import sleep

from qaseio.pytest import qase

@qase.step(1)  # position
def some_step():
    sleep(5)

@qase.step("Second step")  # test step name
def another_step():
    sleep(3)

...

def test_example():
    some_step()
    another_step()
    # test step hash
    with qase.step("2898ba7f3b4d857cec8bee4a852cdc85f8b33132"):
        sleep(1)
```

Sending tests to an existing testrun

The testrun in TMS will contain only those test results that are present
in the testrun, but every test will be executed.

```shell
# PRJCODE is the project where your testrun exists; 3 is the testrun ID
pytest \
    --qase \
    --qase-api-token=<your api token here> \
    --qase-project=PRJCODE \
    --qase-testrun=3
```

Creating a testrun based on a testplan

Create a new testrun based on a testplan. The testrun in TMS will contain only
those test results that are present in the testrun, but every test will be executed.

```shell
# PRJCODE is the project where your testrun exists; 3 is the testplan ID
pytest \
    --qase \
    --qase-api-token=<your api token here> \
    --qase-project=PRJCODE \
    --qase-testplan=3
```

Creating a new testrun from the current pytest run

The testrun in TMS will contain only those test results that have correct case IDs,
but every test will be executed.

```shell
# PRJCODE is the project where your testrun will be created
pytest \
    --qase \
    --qase-api-token=<your api token here> \
    --qase-project=PRJCODE \
    --qase-new-run
```

Debug information

If you specify the --qase-debug parameter, you will get additional output:

```
=================================== Qase TMS ===================================
This tests does not have test case ids:
test_no_deco
For test test_complex_run.py::test_multiple_ids_fail could not find test cases in run: 3
=========================== Qase TMS setup finished ============================
```

Execution logic

  1. Check the project exists
  2. Check the testrun exists
  3. Load all IDs for each test case
  4. Check which tests do not have IDs (debug: will list them all)
  5. Check every ID exists in the project (debug: will show which are missing)
  6. Check every ID is present in the testrun (debug: will show which are missing)
  7. Execute tests and publish results at runtime,
     without waiting for the whole run to finish
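The validation steps (3–6) amount to simple set logic. The sketch below is a hedged illustration of that logic only; the function and variable names are made up and are not the plugin's actual internals:

```python
def check_run(test_ids, project_case_ids, testrun_case_ids):
    """Report problems before a run, given a mapping of
    test name -> linked case IDs and the sets of case IDs
    known to the project and to the testrun.
    """
    # Step 4: tests with no linked case IDs
    no_ids = [name for name, ids in test_ids.items() if not ids]
    # Collect every case ID linked from any test
    all_ids = {i for ids in test_ids.values() for i in ids}
    # Step 5: IDs that do not exist in the project
    missing_in_project = sorted(all_ids - project_case_ids)
    # Step 6: IDs that are not present in the testrun
    missing_in_run = sorted(all_ids - testrun_case_ids)
    return no_ids, missing_in_project, missing_in_run

tests = {"test_no_deco": [], "test_a": [13], "test_b": [12, 156]}
no_ids, not_in_project, not_in_run = check_run(tests, {12, 13, 156}, {12, 13})
print(no_ids)          # ['test_no_deco']
print(not_in_project)  # []
print(not_in_run)      # [156]
```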