Basic patterns and examples¶
How to change command line options defaults¶
It can be tedious to type the same series of command line options every time you use pytest. For example, if you always want to see detailed info on skipped and xfailed tests, as well as have terser "dot" progress output, you can write it into a configuration file:
# content of pytest.ini
[pytest]
addopts = -ra -q
Alternatively, you can set a PYTEST_ADDOPTS environment variable to add command line options while the environment is in use:
export PYTEST_ADDOPTS="-v"
Here's how the command-line is built in the presence of addopts or the environment variable:
<pytest.ini:addopts> $PYTEST_ADDOPTS <extra command-line arguments>
So if the user executes in the command-line:
pytest -m slow
The actual command line executed is:
pytest -ra -q -v -m slow
Note that, as with other command-line applications, in case of conflicting options the last one wins, so the example above will show verbose output because -v overwrites -q.
Pass different values to a test function, depending on command line options¶
Suppose we want to write a test that depends on a command line option. Here is a basic pattern to achieve this:
# content of test_sample.py
def test_answer(cmdopt):
    if cmdopt == "type1":
        print("first")
    elif cmdopt == "type2":
        print("second")
    assert 0  # to see what was printed
For this to work we need to add a command line option and provide the cmdopt through a fixture function:
# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt", action="store", default="type1", help="my option: type1 or type2"
    )


@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
Let's run this without supplying our new option:
$ pytest -q test_sample.py
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
cmdopt = 'type1'
def test_answer(cmdopt):
if cmdopt == "type1":
print("first")
elif cmdopt == "type2":
print("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s
And now with supplying a command line option:
$ pytest -q --cmdopt=type2
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
cmdopt = 'type2'
def test_answer(cmdopt):
if cmdopt == "type1":
print("first")
elif cmdopt == "type2":
print("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s
You can see that the command line option arrived in our test.
We could add simple validation for the input by listing the choices:
# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt",
        action="store",
        default="type1",
        help="my option: type1 or type2",
        choices=("type1", "type2"),
    )
Now we'll get feedback on a bad argument:
$ pytest -q --cmdopt=type3
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: argument --cmdopt: invalid choice: 'type3' (choose from 'type1', 'type2')
If you need to provide more detailed error messages, you can use the type parameter and raise pytest.UsageError:
# content of conftest.py
import pytest


def type_checker(value):
    msg = "cmdopt must specify a numeric type as typeNNN"
    if not value.startswith("type"):
        raise pytest.UsageError(msg)
    try:
        int(value[4:])
    except ValueError:
        raise pytest.UsageError(msg)
    return value


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt",
        action="store",
        default="type1",
        help="my option: type1 or type2",
        type=type_checker,
    )
That completes the basic pattern. However, one often rather wants to process command line options outside of the test, and rather pass in different or more complex objects.
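For instance, instead of handing the raw option string to the test, a fixture can build a richer object from the option value and pass that in. The following is only a sketch of that idea: the Runtime class and the runtime fixture are hypothetical names used for illustration, not part of pytest itself.
# content of conftest.py (hypothetical variation)
import pytest


class Runtime:
    """Hypothetical helper object built from the command line option value."""

    def __init__(self, mode: str) -> None:
        self.mode = mode


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt", action="store", default="type1", help="my option: type1 or type2"
    )


@pytest.fixture
def runtime(request):
    # process the option outside of the test and hand over a richer object
    return Runtime(request.config.getoption("--cmdopt"))
Tests would then accept runtime and work with the prepared object instead of interpreting the option string themselves.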
Dynamically adding command line options¶
Through addopts you can statically add command line options for your project. You can also dynamically modify the command line arguments before they get processed:
# setuptools plugin
import sys


def pytest_load_initial_conftests(args):
    if "xdist" in sys.modules:  # pytest-xdist plugin
        import multiprocessing

        # use integer division so "-n" receives a whole number of workers
        num = max(multiprocessing.cpu_count() // 2, 1)
        args[:] = ["-n", str(num)] + args
If you have the xdist plugin installed you will now always perform test runs using a number of subprocesses close to your CPU count. Running in an empty directory with the above conftest.py:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 0 items
========================== no tests ran in 0.12s ===========================
Control skipping of tests according to command line option¶
Here is a conftest.py file adding a --runslow command line option to control the skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )


def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest


def test_func_fast():
    pass


@pytest.mark.slow
def test_func_slow():
    pass
and when running it will see a skipped "slow" test:
$ pytest -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items
test_module.py .s [100%]
========================= short test summary info ==========================
SKIPPED [1] test_module.py:8: need --runslow option to run
======================= 1 passed, 1 skipped in 0.12s =======================
Or run it including the slow marked test:
$ pytest --runslow
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items
test_module.py .. [100%]
============================ 2 passed in 0.12s =============================
Writing well integrated assertion helpers¶
If you have a test helper function called from a test you can use the pytest.fail marker to fail a test with a certain message. The test support function will not show up in the traceback if you set the __tracebackhide__ option somewhere in the helper function. Example:
# content of test_checkconfig.py
import pytest


def checkconfig(x):
    __tracebackhide__ = True
    if not hasattr(x, "config"):
        pytest.fail(f"not configured: {x}")


def test_something():
    checkconfig(42)
The __tracebackhide__ setting influences how pytest shows tracebacks: the checkconfig function will not be shown unless the --full-trace command line option is specified. Let's run our little function:
$ pytest -q test_checkconfig.py
F [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________
def test_something():
> checkconfig(42)
E Failed: not configured: 42
test_checkconfig.py:11: Failed
========================= short test summary info ==========================
FAILED test_checkconfig.py::test_something - Failed: not configured: 42
1 failed in 0.12s
If you only want to hide certain exceptions, you can set __tracebackhide__ to a callable which gets the ExceptionInfo object. You can for example use this to make sure unexpected exception types aren't hidden:
import operator

import pytest


class ConfigException(Exception):
    pass


def checkconfig(x):
    __tracebackhide__ = operator.methodcaller("errisinstance", ConfigException)
    if not hasattr(x, "config"):
        raise ConfigException(f"not configured: {x}")


def test_something():
    checkconfig(42)
This will avoid hiding the exception traceback on unrelated exceptions (i.e. bugs in the assertion helpers).
Detect if running from within a pytest run¶
Usually it is a bad idea to make application code behave differently if called from a test. But if you absolutely must find out if your application code is running from a test, you can do this:
# content of your_module.py
_called_from_test = False
# content of conftest.py
import your_module


def pytest_configure(config):
    your_module._called_from_test = True
and then check for the your_module._called_from_test flag:
if your_module._called_from_test:
    # called from within a test run
    ...
else:
    # called "normally"
    ...
accordingly in your application.
Adding info to test report header¶
It's easy to present extra information in a pytest run:
# content of conftest.py
def pytest_report_header(config):
    return "project deps: mylib-1.1"
which will add the string to the test header accordingly:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
project deps: mylib-1.1
rootdir: /home/sweet/project
collected 0 items
========================== no tests ran in 0.12s ===========================
It is also possible to return a list of strings which will be considered as several lines of information. You may consider config.getoption('verbose') in order to display more information if applicable:
# content of conftest.py
def pytest_report_header(config):
    if config.getoption("verbose") > 0:
        return ["info1: did you know that ...", "did you?"]
which will add info only when run with "-v":
$ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: .pytest_cache
info1: did you know that ...
did you?
rootdir: /home/sweet/project
collecting ... collected 0 items
========================== no tests ran in 0.12s ===========================
and nothing when run plainly:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 0 items
========================== no tests ran in 0.12s ===========================
Profiling test durations¶
If you have a slow running large test suite you might want to find out which tests are the slowest. Let's make an artificial test suite:
# content of test_some_are_slow.py
import time


def test_funcfast():
    time.sleep(0.1)


def test_funcslow1():
    time.sleep(0.2)


def test_funcslow2():
    time.sleep(0.3)
Now we can profile which test functions execute the slowest:
$ pytest --durations=3
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 3 items
test_some_are_slow.py ... [100%]
=========================== slowest 3 durations ============================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
============================ 3 passed in 0.12s =============================
Incremental testing - test steps¶
Sometimes you may have a testing situation which consists of a series of test steps. If one step fails it makes no sense to execute further steps, as they are all expected to fail anyway and their tracebacks add no insight. Here is a simple conftest.py file which introduces an incremental marker which is to be used on classes:
# content of conftest.py
from typing import Dict, Tuple

import pytest

# store history of failures per test class name and per index in parametrize (if parametrize used)
_test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        # incremental marker is used
        if call.excinfo is not None:
            # the test has failed
            # retrieve the class name of the test
            cls_name = str(item.cls)
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # retrieve the name of the test function
            test_name = item.originalname or item.name
            # store in _test_failed_incremental the original name of the failed test
            _test_failed_incremental.setdefault(cls_name, {}).setdefault(
                parametrize_index, test_name
            )


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        # retrieve the class name of the test
        cls_name = str(item.cls)
        # check if a previous test has failed for this class
        if cls_name in _test_failed_incremental:
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # retrieve the name of the first test function to fail for this class name and index
            test_name = _test_failed_incremental[cls_name].get(parametrize_index, None)
            # if name found, test has failed for the combination of class name & test name
            if test_name is not None:
                pytest.xfail(f"previous test failed ({test_name})")
These two hook implementations work together to abort incremental-marked tests in a class. Here is a test module example:
# content of test_step.py
import pytest


@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass

    def test_modification(self):
        assert 0

    def test_deletion(self):
        pass


def test_normal():
    pass
If we run this:
$ pytest -rx
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 4 items
test_step.py .Fx. [100%]
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling object at 0xdeadbeef0001>
def test_modification(self):
> assert 0
E assert 0
test_step.py:11: AssertionError
================================ XFAILURES =================================
______________________ TestUserHandling.test_deletion ______________________
item = <Function test_deletion>
def pytest_runtest_setup(item):
if "incremental" in item.keywords:
# retrieve the class name of the test
cls_name = str(item.cls)
# check if a previous test has failed for this class
if cls_name in _test_failed_incremental:
# retrieve the index of the test (if parametrize is used in combination with incremental)
parametrize_index = (
tuple(item.callspec.indices.values())
if hasattr(item, "callspec")
else ()
)
# retrieve the name of the first test function to fail for this class name and index
test_name = _test_failed_incremental[cls_name].get(parametrize_index, None)
# if name found, test has failed for the combination of class name & test name
if test_name is not None:
> pytest.xfail(f"previous test failed ({test_name})")
E _pytest.outcomes.XFailed: previous test failed (test_modification)
conftest.py:47: XFailed
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion - reason: previous test failed (test_modification)
================== 1 failed, 2 passed, 1 xfailed in 0.12s ==================
We'll see that test_deletion was not executed because test_modification failed. It is reported as an "expected failure".
Package/Directory-level fixtures (setups)¶
If you have nested test directories, you can have per-directory fixture scopes by placing fixture functions in a conftest.py file in that directory. You can use all types of fixtures including autouse fixtures, which are the equivalent of xUnit's setup/teardown concept. It's however recommended to have explicit fixture references in your tests or test classes rather than relying on implicitly executing setup/teardown functions, especially if they are far away from the actual tests.
Here is an example for making a db fixture available in a directory:
# content of a/conftest.py
import pytest


class DB:
    pass


@pytest.fixture(scope="package")
def db():
    return DB()
and then a test module in that directory:
# content of a/test_db.py
def test_a1(db):
    assert 0, db  # to show value
another test module:
# content of a/test_db2.py
def test_a2(db):
    assert 0, db  # to show value
and then a module in a sister directory which will not see the db fixture:
# content of b/test_error.py
def test_root(db):  # no db here, will error out
    pass
We can run this:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 7 items
a/test_db.py F [ 14%]
a/test_db2.py F [ 28%]
b/test_error.py E [ 42%]
test_step.py .Fx. [100%]
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file /home/sweet/project/b/test_error.py, line 1
def test_root(db): # no db here, will error out
E fixture 'db' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
/home/sweet/project/b/test_error.py:1
================================= FAILURES =================================
_________________________________ test_a1 __________________________________
db = <conftest.DB object at 0xdeadbeef0002>
def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef0002>
E assert 0
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
db = <conftest.DB object at 0xdeadbeef0002>
def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef0002>
E assert 0
a/test_db2.py:2: AssertionError
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling object at 0xdeadbeef0003>
def test_modification(self):
> assert 0
E assert 0
test_step.py:11: AssertionError
========================= short test summary info ==========================
FAILED a/test_db.py::test_a1 - AssertionError: <conftest.DB object at 0x7...
FAILED a/test_db2.py::test_a2 - AssertionError: <conftest.DB object at 0x...
FAILED test_step.py::TestUserHandling::test_modification - assert 0
ERROR b/test_error.py::test_root
============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============
The two test modules in the a directory see the same db fixture instance, while the one test in the sister directory b doesn't see it. We could of course also define a db fixture in that sister directory's conftest.py file (see the sketch below). Note that each fixture is only instantiated if there is a test actually needing it (unless you use "autouse" fixtures, which are always executed ahead of the first test executing).
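For example, the sister directory could provide its own fixture through a conftest.py of its own. This is a minimal sketch mirroring a/conftest.py above; in a real project you would likely share one DB helper rather than duplicating the class:
# content of b/conftest.py
import pytest


class DB:
    pass


@pytest.fixture(scope="package")
def db():
    return DB()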
Post-process test reports / failures¶
If you want to postprocess test reports and need access to the executing environment, you can implement a hook that gets called when the test "report" object is about to be created. Here we write out all failing test calls and also access a fixture (if it was used by the test) in case you want to query/look at it during postprocessing. In our example we just write some information out to a failures file:
# content of conftest.py
import os.path

import pytest


@pytest.hookimpl(wrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    rep = yield

    # we only look at actual failing test calls, not setup/teardown
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode, encoding="utf-8") as f:
            # let's also access a fixture for the fun of it
            if "tmp_path" in item.fixturenames:
                extra = " ({})".format(item.funcargs["tmp_path"])
            else:
                extra = ""

            f.write(rep.nodeid + extra + "\n")

    return rep
if you then have failing tests:
# content of test_module.py
def test_fail1(tmp_path):
    assert 0


def test_fail2():
    assert 0
and run them:
$ pytest test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items
test_module.py FF [100%]
================================= FAILURES =================================
________________________________ test_fail1 ________________________________
tmp_path = PosixPath('PYTEST_TMPDIR/test_fail10')
def test_fail1(tmp_path):
> assert 0
E assert 0
test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_fail1 - assert 0
FAILED test_module.py::test_fail2 - assert 0
============================ 2 failed in 0.12s =============================
you will have a "failures" file which contains the failing test ids:
$ cat failures
test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10)
test_module.py::test_fail2
Making test result information available in fixtures¶
If you want to make test result reports available in fixture finalizers, here is a little example implemented via a local plugin:
# content of conftest.py
from typing import Dict

import pytest
from pytest import StashKey, CollectReport

phase_report_key = StashKey[Dict[str, CollectReport]]()


@pytest.hookimpl(wrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    rep = yield

    # store test results for each phase of a call, which can
    # be "setup", "call", "teardown"
    item.stash.setdefault(phase_report_key, {})[rep.when] = rep

    return rep


@pytest.fixture
def something(request):
    yield
    # request.node is an "item" because we use the default
    # "function" scope
    report = request.node.stash[phase_report_key]
    if report["setup"].failed:
        print("setting up a test failed or skipped", request.node.nodeid)
    elif ("call" not in report) or report["call"].failed:
        print("executing test failed or skipped", request.node.nodeid)
if you then have failing tests:
# content of test_module.py
import pytest


@pytest.fixture
def other():
    assert 0


def test_setup_fails(something, other):
    pass


def test_call_fails(something):
    assert 0


def test_fail2():
    assert 0
and run it:
$ pytest -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 3 items
test_module.py Esetting up a test failed or skipped test_module.py::test_setup_fails
Fexecuting test failed or skipped test_module.py::test_call_fails
F
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@pytest.fixture
def other():
> assert 0
E assert 0
test_module.py:7: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
something = None
def test_call_fails(something):
> assert 0
E assert 0
test_module.py:15: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_call_fails - assert 0
FAILED test_module.py::test_fail2 - assert 0
ERROR test_module.py::test_setup_fails - assert 0
======================== 2 failed, 1 error in 0.12s ========================
You'll see that the fixture finalizers could use the precise reporting information.
PYTEST_CURRENT_TEST environment variable¶
Sometimes a test session might get stuck and there might be no easy way to figure out which test got stuck, for example if pytest was run in quiet mode (-q) or you don't have access to the console output. This is particularly a problem if the problem happens only sporadically, the famous "flaky" kind of tests.
pytest sets the PYTEST_CURRENT_TEST environment variable when running tests, which can be inspected by process monitoring utilities or libraries like psutil to discover which test got stuck if necessary:
import psutil

for pid in psutil.pids():
    environ = psutil.Process(pid).environ()
    if "PYTEST_CURRENT_TEST" in environ:
        print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')
During the test session pytest will set PYTEST_CURRENT_TEST to the current test node ID and the current stage, which can be setup, call or teardown.
For example, when running a single test function named test_foo from foo_module.py, PYTEST_CURRENT_TEST will be set to:
foo_module.py::test_foo (setup)
foo_module.py::test_foo (call)
foo_module.py::test_foo (teardown)
in that order.
Note
The contents of PYTEST_CURRENT_TEST are meant to be human readable and the actual format can change between releases (even bug fixes), so it shouldn't be relied on for scripting or automation.
Freezing pytest¶
If you freeze your application using a tool like PyInstaller in order to distribute it to your end-users, it is a good idea to also package your test runner and run your tests using the frozen application. This way packaging errors such as dependencies not being included in the executable can be detected early, while also allowing you to send test files to users so they can run them on their machines, which can be useful to obtain more information about a hard to reproduce bug.
Fortunately recent PyInstaller releases already have a custom hook for pytest, but if you are using another tool to freeze executables, such as cx_freeze or py2exe, you can use pytest.freeze_includes() to obtain the full list of internal pytest modules. How to configure the tools to find the internal modules varies from tool to tool, however.
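For example, with cx_Freeze the module list returned by pytest.freeze_includes() could be passed to the build_exe options. This is only a minimal sketch assuming a cx_Freeze based setup.py; the project name and entry script are placeholders:
# contents of setup.py
from cx_Freeze import Executable, setup

from pytest import freeze_includes

setup(
    name="app_main",  # placeholder project name
    executables=[Executable("app_main.py")],  # placeholder entry script
    # include pytest's internal modules in the frozen executable
    options={"build_exe": {"includes": freeze_includes()}},
)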
Instead of freezing the pytest runner as a separate executable, you can make your frozen program work as the pytest runner by some clever argument handling during program startup. This allows you to have a single executable, which is usually more convenient. Please note that the mechanism for plugin discovery used by pytest (setuptools entry points) doesn't work with frozen executables, so pytest can't find any third party plugins automatically. To include third party plugins like pytest-timeout they must be imported explicitly and passed on to pytest.main.
# contents of app_main.py
import sys

import pytest_timeout  # Third party plugin

if len(sys.argv) > 1 and sys.argv[1] == "--pytest":
    import pytest

    sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
else:
    # normal application execution: at this point argv can be parsed
    # by your argument-parsing library of choice as usual
    ...
This allows you to execute tests using the frozen application with standard pytest command-line options:
./app_main --pytest --verbose --tb=long --junit-xml=results.xml test-suite/