Errors and Failures
Let's look at tests that have errors and failures. First, we need to make a temporary copy of the entire testing directory (excluding .svn files, which may be read-only):
>>> import os.path, sys, tempfile, shutil
>>> tmpdir = tempfile.mkdtemp()
>>> directory_with_tests = os.path.join(tmpdir, 'testrunner-ex')
>>> source = os.path.join(this_directory, 'testrunner-ex')
>>> n = len(source) + 1
>>> for root, dirs, files in os.walk(source):
...     dirs[:] = [d for d in dirs if d != ".svn"]  # prune cruft
...     os.mkdir(os.path.join(directory_with_tests, root[n:]))
...     for f in files:
...         _ = shutil.copy(os.path.join(root, f),
...                         os.path.join(directory_with_tests, root[n:], f))
>>> from zope import testrunner
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]
>>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ '.split()
>>> testrunner.run_internal(defaults)
...
Running zope.testrunner.layer.UnitTests tests:
...
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
...
File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
Failed example:
f()
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
f()
File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
g()
File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
x = y + 1 # noqa: F821
- __traceback_info__: I don't know what Y should be.
NameError: name 'y' is not defined
Error in test test3 (sample2.sampletests_e.Test...)
Traceback (most recent call last):
File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
f()
File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
g()
File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
x = y + 1 # noqa: F821
- __traceback_info__: I don't know what Y should be.
NameError: name 'y' is not defined
Failure in test testrunner-ex/sample2/e.rst
Failed doctest test for e.rst
File "testrunner-ex/sample2/e.rst", line 0
...
File "testrunner-ex/sample2/e.rst", line 4, in e.rst
Failed example:
f()
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest e.rst[1]>", line 1, in ?
f()
File "<doctest e.rst[0]>", line 2, in f
return x
NameError: name 'x' is not defined
Failure in test test (sample2.sampletests_f.Test...)
Traceback (most recent call last):
File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
self.assertEqual(1, 0)
File "/usr/local/python/2.3/lib/python2.3/unittest.py", line 302, in failUnlessEqual
raise self.failureException, \
AssertionError: 1 != 0
Ran 164 tests with 3 failures, 1 errors and 0 skipped in N.NNN seconds.
...
Total: 329 tests, 3 failures, 1 errors and 0 skipped in N.NNN seconds.
True
We see that we get an error report and a traceback for each failing test. In addition, the test runner returned True, indicating that there was at least one failure or error.
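This boolean result makes it easy to wrap the test runner in a small driver script. The sketch below is purely illustrative (the script and its defaults are hypothetical); it relies only on the behaviour shown above, namely that run_internal() returns a true value when there were failures or errors:

import sys
from zope import testrunner

defaults = [
    '--path', 'testrunner-ex',              # hypothetical test directory
    '--tests-pattern', '^sampletestsf?$',
    ]

if __name__ == '__main__':
    failed = testrunner.run_internal(defaults)
    # Translate the boolean result into a conventional process exit code.
    sys.exit(1 if failed else 0)

The run() function, demonstrated in "Reporting Errors to Calling Processes" below, performs the sys.exit() call itself.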
If we ask for verbosity, the dotted output will be interrupted, and there’ll be a summary of the errors at the end of the test:
>>> sys.argv = 'test --tests-pattern ^sampletests(f|_e|_f)?$ -uv'.split()
>>> testrunner.run_internal(defaults)
...
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
...
Running:
.................................................................................................
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
...
File "testrunner-ex/sample2/sampletests_e.py", line 30,
in sample2.sampletests_e.eek
Failed example:
f()
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
f()
File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
g()
File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
x = y + 1 # noqa: F821
- __traceback_info__: I don't know what Y should be.
NameError: name 'y' is not defined
...
Error in test test3 (sample2.sampletests_e.Test...)
Traceback (most recent call last):
File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
f()
File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
g()
File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
x = y + 1 # noqa: F821
- __traceback_info__: I don't know what Y should be.
NameError: name 'y' is not defined
...
Failure in test testrunner-ex/sample2/e.rst
Failed doctest test for e.rst
File "testrunner-ex/sample2/e.rst", line 0
...
File "testrunner-ex/sample2/e.rst", line 4, in e.rst
Failed example:
f()
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest e.rst[1]>", line 1, in ?
f()
File "<doctest e.rst[0]>", line 2, in f
return x
NameError: name 'x' is not defined
.
Failure in test test (sample2.sampletests_f.Test...)
Traceback (most recent call last):
File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
self.assertEqual(1, 0)
File ".../unittest.py", line 302, in failUnlessEqual
raise self.failureException, \
AssertionError: 1 != 0
................................................................................................
Ran 164 tests with 3 failures, 1 errors and 0 skipped in 0.040 seconds.
...
Tests with errors:
test3 (sample2.sampletests_e.Test...)
Tests with failures:
eek (sample2.sampletests_e)
testrunner-ex/sample2/e.rst
test (sample2.sampletests_f.Test...)
True
Similarly for progress output, the progress ticker will be interrupted:
>>> sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$ -u -ssample2'
... ' -p').split()
>>> testrunner.run_internal(defaults)
...
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Running:
1/47 (2.1%)
Failure in test eek (sample2.sampletests_e)
Failed doctest test for sample2.sampletests_e.eek
File "testrunner-ex/sample2/sampletests_e.py", line 28, in eek
...
File "testrunner-ex/sample2/sampletests_e.py", line 30, in sample2.sampletests_e.eek
Failed example:
f()
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_e.eek[0]>", line 1, in ?
f()
File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
g()
File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
x = y + 1 # noqa: F821
- __traceback_info__: I don't know what Y should be.
NameError: name 'y' is not defined
2/47 (4.3%)\r
\r
3/47 (6.4%)\r
\r
4/47 (8.5%)
Error in test test3 (sample2.sampletests_e.Test...)
Traceback (most recent call last):
File "testrunner-ex/sample2/sampletests_e.py", line 43, in test3
f()
File "testrunner-ex/sample2/sampletests_e.py", line 19, in f
g()
File "testrunner-ex/sample2/sampletests_e.py", line 24, in g
x = y + 1 # noqa: F821
- __traceback_info__: I don't know what Y should be.
NameError: name 'y' is not defined
5/47 (10.6%)\r
\r
6/47 (12.8%)\r
\r
7/47 (14.9%)
Failure in test testrunner-ex/sample2/e.rst
Failed doctest test for e.rst
File "testrunner-ex/sample2/e.rst", line 0
...
File "testrunner-ex/sample2/e.rst", line 4, in e.rst
Failed example:
f()
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest e.rst[1]>", line 1, in ?
f()
File "<doctest e.rst[0]>", line 2, in f
return x
NameError: name 'x' is not defined
8/47 (17.0%)
Failure in test test (sample2.sampletests_f.Test...)
Traceback (most recent call last):
File "testrunner-ex/sample2/sampletests_f.py", line 21, in test
self.assertEqual(1, 0)
File ".../unittest.py", line 302, in failUnlessEqual
raise self.failureException, \
AssertionError: 1 != 0
9/47 (19.1%)\r
\r
10/47 (21.3%)\r
\r
11/47 (23.4%)\r
\r
12/47 (25.5%)\r
\r
13/47 (27.7%)\r
\r
14/47 (29.8%)\r
\r
15/47 (31.9%)\r
\r
16/47 (34.0%)\r
\r
17/47 (36.2%)\r
\r
18/47 (38.3%)\r
\r
19/47 (40.4%)\r
\r
20/47 (42.6%)\r
\r
21/47 (44.7%)\r
\r
22/47 (46.8%)\r
\r
23/47 (48.9%)\r
\r
24/47 (51.1%)\r
\r
25/47 (53.2%)\r
\r
26/47 (55.3%)\r
\r
27/47 (57.4%)\r
\r
28/47 (59.6%)\r
\r
29/47 (61.7%)\r
\r
30/47 (63.8%)\r
\r
31/47 (66.0%)\r
\r
32/47 (68.1%)\r
\r
33/47 (70.2%)\r
\r
34/47 (72.3%)\r
\r
35/47 (74.5%)\r
\r
36/47 (76.6%)\r
\r
37/47 (78.7%)\r
\r
38/47 (80.9%)\r
\r
39/47 (83.0%)\r
\r
40/47 (85.1%)\r
\r
41/47 (87.2%)\r
\r
42/47 (89.4%)\r
\r
43/47 (91.5%)\r
\r
44/47 (93.6%)\r
\r
45/47 (95.7%)\r
\r
46/47 (97.9%)\r
\r
47/47 (100.0%)\r
\r
Ran 47 tests with 3 failures, 1 errors and 0 skipped in 0.054 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True
If you also want a summary of errors at the end, ask for verbosity as well as progress output.
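For example (shown as a sketch only, not executed as part of this document), the options used above could be combined like this:

sys.argv = ('test --tests-pattern ^sampletests(f|_e|_f)?$'
            ' -u -ssample2 -p -v').split()
testrunner.run_internal(defaults)
# The progress ticker is printed while the tests run, and the
# "Tests with errors:" / "Tests with failures:" summary follows at the end.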
Capturing output
We can ask for output to stdout and stderr to be buffered, and emitted only for tests that fail or have errors.
>>> sys.argv = (
... 'test -vv --buffer -ssample2'
... ' --tests-pattern ^stdstreamstest$'.split())
>>> orig_stderr = sys.stderr
>>> sys.stderr = sys.stdout
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_stderr_error (sample2.stdstreamstest.Test...)
Error in test test_stderr_error (sample2.stdstreamstest.Test...)
Traceback (most recent call last):
testrunner-ex/sample2/stdstreamstest.py", Line NNN, in test_stderr_error
raise Exception("boom")
Exception: boom
Stderr:
stderr output on error
stderr buffer output on error
test_stderr_failure (sample2.stdstreamstest.Test...)
Failure in test test_stderr_failure (sample2.stdstreamstest.Test...)
Traceback (most recent call last):
testrunner-ex/sample2/stdstreamstest.py", Line NNN, in test_stderr_failure
self.assertTrue(False)
AssertionError: False is not true
Stderr:
stderr output on failure
stderr buffer output on failure
test_stderr_success (sample2.stdstreamstest.Test...)
test_stdout_error (sample2.stdstreamstest.Test...)
Error in test test_stdout_error (sample2.stdstreamstest.Test...)
Traceback (most recent call last):
testrunner-ex/sample2/stdstreamstest.py", Line NNN, in test_stdout_error
raise Exception("boom")
Exception: boom
Stdout:
stdout output on error
stdout buffer output on error
test_stdout_failure (sample2.stdstreamstest.Test...)
Failure in test test_stdout_failure (sample2.stdstreamstest.Test...)
Traceback (most recent call last):
testrunner-ex/sample2/stdstreamstest.py", Line NNN, in test_stdout_failure
self.assertTrue(False)
AssertionError: False is not true
Stdout:
stdout output on failure
stdout buffer output on failure
test_stdout_success (sample2.stdstreamstest.Test...)
Ran 6 tests with 2 failures, 2 errors and 0 skipped in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Tests with errors:
test_stderr_error (sample2.stdstreamstest.Test...)
test_stdout_error (sample2.stdstreamstest.Test...)
Tests with failures:
test_stderr_failure (sample2.stdstreamstest.Test...)
test_stdout_failure (sample2.stdstreamstest.Test...)
True
>>> sys.stderr = orig_stderr
Suppressing multiple doctest errors
Often, when a doctest example fails, the failure will cause later examples in the same test to fail. Each failure is reported:
>>> sys.argv = 'test --tests-pattern ^sampletests_1$'.split()
>>> testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
...
File "testrunner-ex/sample2/sampletests_1.py", line 19,
in sample2.sampletests_1.eek
Failed example:
x = y
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
x = y
NameError: name 'y' is not defined
...
File "testrunner-ex/sample2/sampletests_1.py", line 21,
in sample2.sampletests_1.eek
Failed example:
x
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
x
NameError: name 'x' is not defined
...
File "testrunner-ex/sample2/sampletests_1.py", line 24,
in sample2.sampletests_1.eek
Failed example:
z = x + 1
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
z = x + 1
NameError: name 'x' is not defined
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.002 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True
This can be a bit confusing, especially when there are enough failures that they scroll off the screen. Often you just want to see the first failure. This can be accomplished with the -1 option (for "just show me the first failed example in a doctest"):
>>> sys.argv = 'test --tests-pattern ^sampletests_1$ -1'.split()
>>> testrunner.run_internal(defaults) # doctest:
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
...
File "testrunner-ex/sample2/sampletests_1.py", line 19,
in sample2.sampletests_1.eek
Failed example:
x = y
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
x = y
NameError: name 'y' is not defined
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.001 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True
The --hide-secondary-failures option is an alias for -1:
>>> sys.argv = (
... 'test --tests-pattern ^sampletests_1$'
... ' --hide-secondary-failures'
... ).split()
>>> testrunner.run_internal(defaults) # doctest:
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
...
File "testrunner-ex/sample2/sampletests_1.py", line 19, in sample2.sampletests_1.eek
Failed example:
x = y
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
x = y
NameError: name 'y' is not defined
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.001 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True
The --show-secondary-failures option counters -1 (or its alias), causing the second and subsequent failures to be shown. This is useful if -1 is provided by a test script by inserting it ahead of command-line options in sys.argv.
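A project test script might arrange this along the following lines. The script itself is hypothetical; only the option handling reflects the behaviour described above:

import sys
from zope import testrunner

defaults = ['--tests-pattern', '^sampletests_1$']    # hypothetical defaults

# Hide secondary doctest failures by default.  Because the flag is inserted
# ahead of the user's command-line options, a developer can still pass
# --show-secondary-failures to see every failed example.
sys.argv[1:1] = ['-1']

testrunner.run_internal(defaults)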
>>> sys.argv = (
... 'test --tests-pattern ^sampletests_1$'
... ' --hide-secondary-failures --show-secondary-failures'
... ).split()
>>> testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test eek (sample2.sampletests_1)
Failed doctest test for sample2.sampletests_1.eek
File "testrunner-ex/sample2/sampletests_1.py", line 17, in eek
...
File "testrunner-ex/sample2/sampletests_1.py", line 19, in sample2.sampletests_1.eek
Failed example:
x = y
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[0]>", line 1, in ?
x = y
NameError: name 'y' is not defined
...
File "testrunner-ex/sample2/sampletests_1.py", line 21, in sample2.sampletests_1.eek
Failed example:
x
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[1]>", line 1, in ?
x
NameError: name 'x' is not defined
...
File "testrunner-ex/sample2/sampletests_1.py", line 24, in sample2.sampletests_1.eek
Failed example:
z = x + 1
Exception raised:
Traceback (most recent call last):
File ".../doctest.py", line 1256, in __run
compileflags, 1) in test.globs
File "<doctest sample2.sampletests_1.eek[2]>", line 1, in ?
z = x + 1
NameError: name 'x' is not defined
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.002 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
True
Getting diff output for doctest failures
When a doctest's expected and actual output are large, it can be hard to see where they differ. The --ndiff, --udiff, and --cdiff options can be used to get diff output of various kinds.
>>> sys.argv = 'test --tests-pattern ^pledge$'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
File "testrunner-ex/pledge.py", line 24, in pledge
...
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
print_pledge()
Expected:
I give my pledge, as an earthling,
to save, and faithfully, to defend from waste,
the natural resources of my planet.
It's soils, minerals, forests, waters, and wildlife.
Got:
I give my pledge, as and earthling,
to save, and faithfully, to defend from waste,
the natural resources of my planet.
It's soils, minerals, forests, waters, and wildlife.
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.002 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Here, the actual output uses the word “and” rather than the word “an”, but it’s a bit hard to pick this out. We can use the various diff outputs to see this better. We could modify the test to ask for diff output, but it’s easier to use one of the diff options.
The --ndiff option requests a diff using Python's ndiff utility. This is the only method that marks differences within lines as well as across lines. For example, if a line of expected output contains the digit "1" where the actual output contains the letter "l", a line is inserted with a caret marking the mismatching column positions.
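To see what these intra-line markers look like on their own, independent of the test runner, you can call Python's difflib directly; the strings below are made up for illustration:

import difflib

expected = ['pipeline 1\n']
actual = ['pipeline l\n']
print(''.join(difflib.ndiff(expected, actual)), end='')
# Prints something like:
# - pipeline 1
# ?          ^
# + pipeline l
# ?          ^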
>>> sys.argv = 'test --tests-pattern ^pledge$ --ndiff'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
File "testrunner-ex/pledge.py", line 24, in pledge
...
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
print_pledge()
Differences (ndiff with -expected +actual):
- I give my pledge, as an earthling,
+ I give my pledge, as and earthling,
? +
to save, and faithfully, to defend from waste,
the natural resources of my planet.
It's soils, minerals, forests, waters, and wildlife.
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.003 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
The --udiff option requests a standard "unified" diff:
>>> sys.argv = 'test --tests-pattern ^pledge$ --udiff'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
File "testrunner-ex/pledge.py", line 24, in pledge
...
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
print_pledge()
Differences (unified diff with -expected +actual):
@@ -1,3 +1,3 @@
-I give my pledge, as an earthling,
+I give my pledge, as and earthling,
to save, and faithfully, to defend from waste,
the natural resources of my planet.
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.002 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
The --cdiff option requests a standard "context" diff:
>>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff'.split()
>>> _ = testrunner.run_internal(defaults)
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
Failure in test pledge (pledge)
Failed doctest test for pledge.pledge
File "testrunner-ex/pledge.py", line 24, in pledge
...
File "testrunner-ex/pledge.py", line 26, in pledge.pledge
Failed example:
print_pledge()
Differences (context diff with expected followed by actual):
***************
*** 1,3 ****
! I give my pledge, as an earthling,
to save, and faithfully, to defend from waste,
the natural resources of my planet.
--- 1,3 ----
! I give my pledge, as and earthling,
to save, and faithfully, to defend from waste,
the natural resources of my planet.
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.002 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
Specifying more than one diff option at once causes an error:
>>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff --udiff'.split()
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1
>>> sys.argv = 'test --tests-pattern ^pledge$ --cdiff --ndiff'.split()
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1
>>> sys.argv = 'test --tests-pattern ^pledge$ --udiff --ndiff'.split()
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1
Testing-Module Import Errors
If there are errors when importing a test module, these errors are reported. To illustrate this, we need a module with a syntax error, which we create on the fly: such a module used to be checked in to the project, but it ended up being included in distributions of projects that use zope.testrunner, and distutils complained about the syntax error when it compiled Python files while installing those projects. So first we create a module with bad syntax:
>>> badsyntax_path = os.path.join(directory_with_tests,
... "sample2", "sampletests_i.py")
>>> f = open(badsyntax_path, "w")
>>> print("importx unittest", file=f) # syntax error
>>> f.close()
Then run the tests:
>>> sys.argv = ('test --tests-pattern ^sampletests(f|_i)?$ --layer 1 '
... ).split()
>>> testrunner.run_internal(defaults)
...
Test-module import failures:
Module: sample2.sampletests_i
Traceback (most recent call last):
File "testrunner-ex/sample2/sampletests_i.py", line 1
importx unittest
...^
SyntaxError: invalid syntax...
Module: sample2.sample21.sampletests_i
Traceback (most recent call last):
File "testrunner-ex/sample2/sample21/sampletests_i.py", line 15, in ?
import zope.testrunner.huh # noqa: F401
ModuleNotFoundError: No module named 'zope.testrunner.huh'
Module: sample2.sample23.sampletests_i
Traceback (most recent call last):
File "testrunner-ex/sample2/sample23/sampletests_i.py", line 18, in ?
class Test(unittest.TestCase):
File "testrunner-ex/sample2/sample23/sampletests_i.py", line 23, in Test
raise TypeError('eek')
TypeError: eek
Running samplelayers.Layer1 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Ran 9 tests with 0 failures, 3 errors and 0 skipped in 0.000 seconds.
Running samplelayers.Layer11 tests:
Set up samplelayers.Layer11 in 0.000 seconds.
Ran 26 tests with 0 failures, 3 errors and 0 skipped in 0.007 seconds.
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in 0.000 seconds.
Set up samplelayers.Layer111 in 0.000 seconds.
Ran 26 tests with 0 failures, 3 errors and 0 skipped in 0.007 seconds.
Running samplelayers.Layer112 tests:
Tear down samplelayers.Layer111 in 0.000 seconds.
Set up samplelayers.Layer112 in 0.000 seconds.
Ran 26 tests with 0 failures, 3 errors and 0 skipped in 0.007 seconds.
Running samplelayers.Layer12 tests:
Tear down samplelayers.Layer112 in 0.000 seconds.
Tear down samplelayers.Layerx in 0.000 seconds.
Tear down samplelayers.Layer11 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Ran 26 tests with 0 failures, 3 errors and 0 skipped in 0.007 seconds.
Running samplelayers.Layer121 tests:
Set up samplelayers.Layer121 in 0.000 seconds.
Ran 26 tests with 0 failures, 3 errors and 0 skipped in 0.007 seconds.
Running samplelayers.Layer122 tests:
Tear down samplelayers.Layer121 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Ran 26 tests with 0 failures, 3 errors and 0 skipped in 0.006 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
Test-modules with import problems:
sample2.sampletests_i
sample2.sample21.sampletests_i
sample2.sample23.sampletests_i
Total: 165 tests, 0 failures, 3 errors and 0 skipped in N.NNN seconds.
True
Reporting Errors to Calling Processes
The test runner returns an error status, indicating whether the tests failed. This can be useful for an invoking process that wants to monitor the result of a test run.
This is applied when invoking the test runner via the run() function instead of run_internal():
>>> sys.argv = (
... 'test --tests-pattern ^sampletests_1$'.split())
>>> try:
...     testrunner.run(defaults)
... except SystemExit as e:
...     print('exited with code', e.code)
... else:
...     print('sys.exit was not called')
...
Running zope.testrunner.layer.UnitTests tests:
Set up zope.testrunner.layer.UnitTests in N.NNN seconds.
...
Ran 1 tests with 1 failures, 0 errors and 0 skipped in 0.002 seconds.
Tearing down left over layers:
Tear down zope.testrunner.layer.UnitTests in N.NNN seconds.
exited with code 1
A passing test run exits with code 0, following UNIX conventions:
>>> sys.argv = (
... 'test --tests-pattern ^sampletests$'.split())
>>> try:
...     testrunner.run(defaults)
... except SystemExit as e2:
...     print('exited with code', e2.code)
... else:
...     print('sys.exit was not called')
...
Running zope.testrunner.layer.UnitTests tests:
...
Total: 286 tests, 0 failures, 0 errors and 0 skipped in N.NNN seconds.
exited with code 0
And remove the temporary directory:
>>> shutil.rmtree(tmpdir)