Managing pytest's output

Modifying Python traceback printing

Examples for modifying traceback printing:
pytest --showlocals # show local variables in tracebacks
pytest -l # show local variables (shortcut)
pytest --no-showlocals # hide local variables (if addopts enables them)
pytest --capture=fd # default, capture at the file descriptor level
pytest --capture=sys # capture at the sys level
pytest --capture=no # don't capture
pytest -s # don't capture (shortcut)
pytest --capture=tee-sys # capture to logs but also output to sys level streams
pytest --tb=auto # (default) 'long' tracebacks for the first and last
# entry, but 'short' style for the other entries
pytest --tb=long # exhaustive, informative traceback formatting
pytest --tb=short # shorter traceback format
pytest --tb=line # only one line per failure
pytest --tb=native # Python standard library formatting
pytest --tb=no # no traceback at all
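As a quick, made-up illustration of --showlocals (the file below is a hypothetical example, not part of the pytest documentation), running this failing test with pytest --showlocals or pytest -l also prints the current values of its local variables in the traceback:

# content of test_showlocals_demo.py (hypothetical example)
def test_reports_locals():
    expected = {"status": "ok"}
    got = {"status": "error"}
    # With --showlocals / -l the traceback for this failure additionally lists
    # the local variables "expected" and "got" together with their values.
    assert got == expected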
--full-trace causes very long traces to be printed on error (longer than --tb=long). It also ensures that a stack trace is printed on KeyboardInterrupt (Ctrl+C). This is very useful if the tests are taking too long and you interrupt them with Ctrl+C to find out where they are hanging. By default no output will be shown (because KeyboardInterrupt is caught by pytest). By using this option you make sure a trace is shown.
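For instance, with a hanging test like the hypothetical one below (the module name and the sleep are invented for illustration), pressing Ctrl+C during a plain pytest run prints nothing useful, while pytest --full-trace shows a stack trace pointing at the line the test was blocked on:

# content of test_hang.py (hypothetical example)
import time


def test_never_finishes():
    # Simulates a test that hangs. Interrupt the run with Ctrl+C while using
    # "pytest --full-trace" to get a stack trace pointing at this sleep.
    time.sleep(60 * 60)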
Verbosity

Examples for modifying printing verbosity:
pytest --quiet # quiet - less verbose - mode
pytest -q # quiet - less verbose - mode (shortcut)
pytest -v # increase verbosity, display individual test names
pytest -vv # more verbose, display more details from the test output
pytest -vvv # not standard, but may be used for even more detail in certain setups
The -v flag controls the verbosity of pytest output in various aspects: test session progress, assertion details when tests fail, fixture details with --fixtures, etc.

Consider this simple file:
# content of test_verbosity_example.py
def test_ok():
    pass


def test_words_fail():
    fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
    fruits2 = ["banana", "apple", "orange", "melon", "kiwi"]
    assert fruits1 == fruits2


def test_numbers_fail():
    number_to_text1 = {str(x): x for x in range(5)}
    number_to_text2 = {str(x * 10): x * 10 for x in range(5)}
    assert number_to_text1 == number_to_text2


def test_long_text_fail():
    long_text = "Lorem ipsum dolor sit amet " * 10
    assert "hello world" in long_text
Executing pytest normally gives us this output (we are skipping the header to focus on the rest):
$ pytest --no-header
=========================== test session starts ============================
collected 4 items
test_verbosity_example.py .FFF [100%]
================================= FAILURES =================================
_____________________________ test_words_fail ______________________________
def test_words_fail():
fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
fruits2 = ["banana", "apple", "orange", "melon", "kiwi"]
> assert fruits1 == fruits2
E AssertionError: assert ['banana', 'a...elon', 'kiwi'] == ['banana', 'a...elon', 'kiwi']
E
E At index 2 diff: 'grapes' != 'orange'
E Use -v to get more diff
test_verbosity_example.py:8: AssertionError
____________________________ test_numbers_fail _____________________________
def test_numbers_fail():
number_to_text1 = {str(x): x for x in range(5)}
number_to_text2 = {str(x * 10): x * 10 for x in range(5)}
> assert number_to_text1 == number_to_text2
E AssertionError: assert {'0': 0, '1':..., '3': 3, ...} == {'0': 0, '10'...'30': 30, ...}
E
E Omitting 1 identical items, use -vv to show
E Left contains 4 more items:
E {'1': 1, '2': 2, '3': 3, '4': 4}
E Right contains 4 more items:
E {'10': 10, '20': 20, '30': 30, '40': 40}
E Use -v to get more diff
test_verbosity_example.py:14: AssertionError
___________________________ test_long_text_fail ____________________________
def test_long_text_fail():
long_text = "Lorem ipsum dolor sit amet " * 10
> assert "hello world" in long_text
E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ips... sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
test_verbosity_example.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser...
FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass...
FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a...
======================= 3 failed, 1 passed in 0.12s ========================
Notice that:

- Each test inside the file is shown by a single character in the output: . for passing, F for failure.
- test_words_fail failed, and we are shown a short summary indicating that index 2 of the two lists differs.
- test_numbers_fail failed, and we are shown a summary of the left/right differences on dictionary items. Identical items are omitted.
- test_long_text_fail failed, and the right hand side of the in statement is truncated using ..., because it is longer than an internal threshold (240 characters currently).

Now we can increase pytest's verbosity:
$ pytest --no-header -v
=========================== test session starts ============================
collecting ... collected 4 items
test_verbosity_example.py::test_ok PASSED [ 25%]
test_verbosity_example.py::test_words_fail FAILED [ 50%]
test_verbosity_example.py::test_numbers_fail FAILED [ 75%]
test_verbosity_example.py::test_long_text_fail FAILED [100%]
================================= FAILURES =================================
_____________________________ test_words_fail ______________________________
def test_words_fail():
fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
fruits2 = ["banana", "apple", "orange", "melon", "kiwi"]
> assert fruits1 == fruits2
E AssertionError: assert ['banana', 'a...elon', 'kiwi'] == ['banana', 'a...elon', 'kiwi']
E
E At index 2 diff: 'grapes' != 'orange'
E
E Full diff:
E [
E 'banana',
E 'apple',...
E
E ...Full output truncated (7 lines hidden), use '-vv' to show
test_verbosity_example.py:8: AssertionError
____________________________ test_numbers_fail _____________________________
def test_numbers_fail():
number_to_text1 = {str(x): x for x in range(5)}
number_to_text2 = {str(x * 10): x * 10 for x in range(5)}
> assert number_to_text1 == number_to_text2
E AssertionError: assert {'0': 0, '1':..., '3': 3, ...} == {'0': 0, '10'...'30': 30, ...}
E
E Omitting 1 identical items, use -vv to show
E Left contains 4 more items:
E {'1': 1, '2': 2, '3': 3, '4': 4}
E Right contains 4 more items:
E {'10': 10, '20': 20, '30': 30, '40': 40}
E ...
E
E ...Full output truncated (16 lines hidden), use '-vv' to show
test_verbosity_example.py:14: AssertionError
___________________________ test_long_text_fail ____________________________
def test_long_text_fail():
long_text = "Lorem ipsum dolor sit amet " * 10
> assert "hello world" in long_text
E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
test_verbosity_example.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser...
FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass...
FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a...
======================= 3 failed, 1 passed in 0.12s ========================
Notice now that:

- Each test inside the file gets its own line in the output.
- test_words_fail now shows the two failing lists in full, in addition to which index differs.
- test_numbers_fail now shows a text diff of the two dictionaries, truncated.
- test_long_text_fail no longer truncates the right hand side of the in statement, because the internal threshold for truncation is larger now (2400 characters currently).

Now if we increase verbosity even more:
$ pytest --no-header -vv
=========================== test session starts ============================
collecting ... collected 4 items
test_verbosity_example.py::test_ok PASSED [ 25%]
test_verbosity_example.py::test_words_fail FAILED [ 50%]
test_verbosity_example.py::test_numbers_fail FAILED [ 75%]
test_verbosity_example.py::test_long_text_fail FAILED [100%]
================================= FAILURES =================================
_____________________________ test_words_fail ______________________________
def test_words_fail():
fruits1 = ["banana", "apple", "grapes", "melon", "kiwi"]
fruits2 = ["banana", "apple", "orange", "melon", "kiwi"]
> assert fruits1 == fruits2
E AssertionError: assert ['banana', 'apple', 'grapes', 'melon', 'kiwi'] == ['banana', 'apple', 'orange', 'melon', 'kiwi']
E
E At index 2 diff: 'grapes' != 'orange'
E
E Full diff:
E [
E 'banana',
E 'apple',
E - 'orange',
E ? ^ ^^
E + 'grapes',
E ? ^ ^ +
E 'melon',
E 'kiwi',
E ]
test_verbosity_example.py:8: AssertionError
____________________________ test_numbers_fail _____________________________
def test_numbers_fail():
number_to_text1 = {str(x): x for x in range(5)}
number_to_text2 = {str(x * 10): x * 10 for x in range(5)}
> assert number_to_text1 == number_to_text2
E AssertionError: assert {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4} == {'0': 0, '10': 10, '20': 20, '30': 30, '40': 40}
E
E Common items:
E {'0': 0}
E Left contains 4 more items:
E {'1': 1, '2': 2, '3': 3, '4': 4}
E Right contains 4 more items:
E {'10': 10, '20': 20, '30': 30, '40': 40}
E
E Full diff:
E {
E '0': 0,
E - '10': 10,
E ? - -
E + '1': 1,
E - '20': 20,
E ? - -
E + '2': 2,
E - '30': 30,
E ? - -
E + '3': 3,
E - '40': 40,
E ? - -
E + '4': 4,
E }
test_verbosity_example.py:14: AssertionError
___________________________ test_long_text_fail ____________________________
def test_long_text_fail():
long_text = "Lorem ipsum dolor sit amet " * 10
> assert "hello world" in long_text
E AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
test_verbosity_example.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_verbosity_example.py::test_words_fail - AssertionError: assert ['banana', 'apple', 'grapes', 'melon', 'kiwi'] == ['banana', 'apple', 'orange', 'melon', 'kiwi']
At index 2 diff: 'grapes' != 'orange'
Full diff:
[
'banana',
'apple',
- 'orange',
? ^ ^^
+ 'grapes',
? ^ ^ +
'melon',
'kiwi',
]
FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: assert {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4} == {'0': 0, '10': 10, '20': 20, '30': 30, '40': 40}
Common items:
{'0': 0}
Left contains 4 more items:
{'1': 1, '2': 2, '3': 3, '4': 4}
Right contains 4 more items:
{'10': 10, '20': 20, '30': 30, '40': 40}
Full diff:
{
'0': 0,
- '10': 10,
? - -
+ '1': 1,
- '20': 20,
? - -
+ '2': 2,
- '30': 30,
? - -
+ '3': 3,
- '40': 40,
? - -
+ '4': 4,
}
FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
======================= 3 failed, 1 passed in 0.12s ========================
Notice now that:

- Each test inside the file gets its own line in the output.
- test_words_fail gives the same output as before in this case.
- test_numbers_fail now shows a full text diff of the two dictionaries.
- test_long_text_fail also no longer truncates the right hand side of the in statement; at this level pytest will not truncate any text at all, regardless of its size.

Those were examples of how verbosity affects normal test session output, but verbosity is also used in other situations; for example, with pytest --fixtures -v you are shown even fixtures that start with _.

Using higher verbosity levels (-vvv, -vvvv, …) is supported, but has no effect in pytest itself at the moment; however, some plugins might make use of higher verbosity.
Fine-grained verbosity

In addition to specifying the application-wide verbosity level, it is possible to control specific aspects independently. This is done by setting a verbosity level for that aspect of the output in the configuration file (a small sketch of how to set such an option follows the list below).

- verbosity_assertions: Controls how verbose the assertion output should be when pytest is executed. Running pytest --no-header with a value of 2 would have the same output as the previous example, but each test inside the file is shown by a single character in the output.
- verbosity_test_cases: Controls how verbose the test execution output should be when pytest is executed. Running pytest --no-header with a value of 2 would have the same output as the first verbosity example, but each test inside the file gets its own line in the output.
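As a minimal sketch (the programmatic pytest.main call and the -o/--override-ini flag are just one way to set such an option; it would normally live in a [pytest] section of your configuration file), the snippet below keeps test progress terse while requesting fully verbose assertion output:

# content of run_verbosity_demo.py (illustrative only)
import pytest

# -o/--override-ini sets an ini option for this run only. Overall verbosity
# stays at the default (one character per test), while assertion failures are
# reported with the detail normally seen under -vv.
exit_code = pytest.main(["--no-header", "-o", "verbosity_assertions=2"])
print("pytest exited with", exit_code)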
Producing a detailed summary report

The -r flag can be used to display a "short test summary info" at the end of the test session, making it easy in large test suites to get a clear picture of all failures, skips, xfails, etc.

It defaults to fE to list failures and errors.

Example:
# content of test_example.py
import pytest


@pytest.fixture
def error_fixture():
    assert 0


def test_ok():
    print("ok")


def test_fail():
    assert 0


def test_error(error_fixture):
    pass


def test_skip():
    pytest.skip("skipping this test")


def test_xfail():
    pytest.xfail("xfailing this test")


@pytest.mark.xfail(reason="always xfail")
def test_xpass():
    pass
$ pytest -ra
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 6 items
test_example.py .FEsxX [100%]
================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
def test_fail():
> assert 0
E assert 0
test_example.py:14: AssertionError
================================ XFAILURES =================================
________________________________ test_xfail ________________________________
def test_xfail():
> pytest.xfail("xfailing this test")
E _pytest.outcomes.XFailed: xfailing this test
test_example.py:26: XFailed
================================= XPASSES ==================================
========================= short test summary info ==========================
SKIPPED [1] test_example.py:22: skipping this test
XFAIL test_example.py::test_xfail - reason: xfailing this test
XPASS test_example.py::test_xpass - always xfail
ERROR test_example.py::test_error - assert 0
FAILED test_example.py::test_fail - assert 0
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
The -r option accepts a number of characters after it, with a used above meaning "all except passes".

Here is the full list of available characters that can be used:
- f - failed
- E - error
- s - skipped
- x - xfailed
- X - xpassed
- p - passed
- P - passed with output

Special characters for (de)selection of groups:

- a - all except pP
- A - all
- N - none, this can be used to display nothing (since fE is the default)
More than one character can be used, so for example to only see failed and skipped tests, you can execute:
$ pytest -rfs
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 6 items
test_example.py .FEsxX [100%]
================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
def test_fail():
> assert 0
E assert 0
test_example.py:14: AssertionError
========================= short test summary info ==========================
FAILED test_example.py::test_fail - assert 0
SKIPPED [1] test_example.py:22: skipping this test
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
Using p lists the passing tests, whilst P adds an extra "PASSES" section with those tests that passed but had captured output:
$ pytest -rpP
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 6 items
test_example.py .FEsxX [100%]
================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
def test_fail():
> assert 0
E assert 0
test_example.py:14: AssertionError
================================== PASSES ==================================
_________________________________ test_ok __________________________________
--------------------------- Captured stdout call ---------------------------
ok
========================= short test summary info ==========================
PASSED test_example.py::test_ok
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
Creating resultlog format files

To create plain-text machine-readable result files you can issue:

pytest --resultlog=path

and look at the content at the path location. Such files are used e.g. by the PyPy-test web page to show test results over several revisions.
Creating JUnitXML format files

To create result files which can be read by Jenkins or other Continuous Integration servers, use this invocation:

pytest --junit-xml=path

to create an XML file at path.
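The file is plain XML, so it can be post-processed with standard tooling. For example, a small helper along the following lines (the report.xml file name is only an assumption) could print the suite counters that pytest records:

# content of summarize_junit.py (a minimal sketch; assumes a prior run of
# "pytest --junit-xml=report.xml")
import xml.etree.ElementTree as ET

root = ET.parse("report.xml").getroot()
# Depending on the pytest version, the root element is either a <testsuites>
# wrapper around one <testsuite>, or the <testsuite> element itself; iter()
# handles both because it also matches the root element.
for suite in root.iter("testsuite"):
    print(
        suite.get("name"),
        "tests:", suite.get("tests"),
        "failures:", suite.get("failures"),
        "errors:", suite.get("errors"),
        "skipped:", suite.get("skipped"),
    )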
To set the name of the root test suite xml item, you can configure the junit_suite_name option in your config file:
[pytest]
junit_suite_name = my_suite
Added in version 4.0.

The JUnit XML specification seems to indicate that the "time" attribute should report total test execution times, including setup and teardown (1, 2). This is the default pytest behaviour. To report just call durations instead, configure the junit_duration_report option like this:
[pytest]
junit_duration_report = call
record_property

If you want to log additional information for a test, you can use the record_property fixture:
def test_function(record_property):
    record_property("example_key", 1)
    assert True
This will add an extra property example_key="1" to the generated testcase tag:
<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
<properties>
<property name="example_key" value="1" />
</properties>
</testcase>
Alternatively, you can integrate this functionality with custom markers:
# content of conftest.py
def pytest_collection_modifyitems(session, config, items):
    for item in items:
        for marker in item.iter_markers(name="test_id"):
            test_id = marker.args[0]
            item.user_properties.append(("test_id", test_id))
And in your tests:
# content of test_function.py
import pytest


@pytest.mark.test_id(1501)
def test_function():
    assert True
will result in:
<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
<properties>
<property name="test_id" value="1501" />
</properties>
</testcase>
Warning

Please note that using this feature will break schema verifications for the latest JUnitXML schema. This might be a problem when used with some CI servers.
record_xml_attribute

To add an additional xml attribute to a testcase element, you can use the record_xml_attribute fixture. This can also be used to override existing values:
def test_function(record_xml_attribute):
    record_xml_attribute("assertions", "REQ-1234")
    record_xml_attribute("classname", "custom_classname")
    print("hello world")
    assert True
Unlike record_property, this will not add a new child element. Instead, this will add an attribute assertions="REQ-1234" inside the generated testcase tag and override the default classname with "classname=custom_classname":
<testcase classname="custom_classname" file="test_function.py" line="0" name="test_function" time="0.003" assertions="REQ-1234">
<system-out>
hello world
</system-out>
</testcase>
Warning

record_xml_attribute is an experimental feature, and its interface might be replaced by something more powerful and general in future versions. The functionality itself will be kept, however.

Using this over record_xml_property can help when using CI tools to parse the xml report. However, some parsers are quite strict about the elements and attributes that are allowed. Many tools use an xsd schema (like the example below) to validate incoming xml. Make sure you are using attribute names that are allowed by your parser.

Below is the schema used by Jenkins to validate XML reports:
<xs:element name="testcase">
<xs:complexType>
<xs:sequence>
<xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
<xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="name" type="xs:string" use="required"/>
<xs:attribute name="assertions" type="xs:string" use="optional"/>
<xs:attribute name="time" type="xs:string" use="optional"/>
<xs:attribute name="classname" type="xs:string" use="optional"/>
<xs:attribute name="status" type="xs:string" use="optional"/>
</xs:complexType>
</xs:element>
Warning

Please note that using this feature will break schema verifications for the latest JUnitXML schema. This might be a problem when used with some CI servers.
record_testsuite_property

Added in version 4.5.

If you want to add a properties node at the test-suite level, which may contain properties that are relevant to all tests, you can use the record_testsuite_property session-scoped fixture.

The record_testsuite_property session-scoped fixture can be used to add properties relevant to all tests:
import pytest


@pytest.fixture(scope="session", autouse=True)
def log_global_env_facts(record_testsuite_property):
    record_testsuite_property("ARCH", "PPC")
    record_testsuite_property("STORAGE_TYPE", "CEPH")


class TestMe:
    def test_foo(self):
        assert True
The fixture is a callable which receives name and value of a <property> tag added at the test-suite level of the generated xml:
<testsuite errors="0" failures="0" name="pytest" skipped="0" tests="1" time="0.006">
<properties>
<property name="ARCH" value="PPC"/>
<property name="STORAGE_TYPE" value="CEPH"/>
</properties>
<testcase classname="test_me.TestMe" file="test_me.py" line="16" name="test_foo" time="0.000243663787842"/>
</testsuite>
name must be a string, value will be converted to a string and properly xml-escaped.

The generated XML is compatible with the latest xunit standard, contrary to record_property and record_xml_attribute.
Sending test report to an online pastebin service

Creating a URL for each test failure:
pytest --pastebin=failed
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example -x if you only want to send one particular failure.

Creating a URL for a whole test session log:
pytest --pastebin=all
Currently only pasting to the https://bpaste.net/ service is implemented.

Changed in version 5.2: If creating the URL fails for any reason, a warning is generated instead of failing the entire test suite.