Introduction to Python Code Testing with PyTest

pytest is a popular testing framework for Python. It is simple and easy to use, and it lets you write and execute tests for your Python code. Here is a brief introduction to using it.

A test is code that executes code. When you start developing a new feature for your Python project, you can formalize its requirements as code. By doing so, you not only document how your implementation is meant to be used, but you can also run all the tests automatically to make sure your code always matches your requirements. One tool that assists you in doing this is pytest, probably the most popular testing tool in the Python universe.

It’s all about assert

Let’s assume you have written a function that validates an email address. Note that we keep it simple here and don’t use regular expressions or DNS checks to validate email addresses. Instead, we just make sure that the string to be tested contains exactly one @ sign and consists only of Latin letters, digits, and the characters ., - and _.

import string

def is_valid_email_address(s):
    s = s.lower()
    parts = s.split('@')
    if len(parts) != 2:
        # Not exactly one at-sign
        return False
    allowed = set(string.ascii_lowercase + string.digits + '.-_')
    for part in parts:
        if not set(part) <= allowed:
            # Characters other than the allowed ones are found
            return False
    return True

Now we can formulate some assertions about our code. For example, we assert that these email addresses are valid:

  • john@example.com
  • john.doe@example.com
  • jane_doe123@example.com

On the other hand, we would expect our function to return False for email addresses like:

  • not an.email@example.com (contains a space)
  • john.doe (no @ sign)
  • john,doe@example.com (contains a comma)

We can check that our function indeed behaves the way we expect:

print(is_valid_email_address('john@example.com'))            # True
print(is_valid_email_address('john.doe@example.com'))        # True
print(is_valid_email_address('jane_doe123@example.com'))     # True
print(is_valid_email_address('not an.email@example.com'))    # False
print(is_valid_email_address('john.doe'))                    # False
print(is_valid_email_address('john,doe@example.com'))        # False

These email address examples we came up with are called test cases. For each test case, we expect a certain result. A tool like pytest can help automate checking these assertions. Writing them down helps you to:

  • document how your code is going to be used
  • make sure that future changes do not break other parts of your software
  • think about possible edge cases of your functionalities

To make that happen, we just create a new file for all of our tests and put a few functions in there. By default, pytest discovers files whose names start with test_ and, inside them, functions whose names start with test_.

from validator import is_valid_email_address

def test_regular_email_validates():
    assert is_valid_email_address('john@example.com')
    assert is_valid_email_address('john.doe@example.com')
    assert is_valid_email_address('jane_doe123@example.com')

def test_valid_email_has_one_at_sign():
    assert not is_valid_email_address('john.doe')

def test_valid_email_has_only_allowed_chars():
    assert not is_valid_email_address('john,doe@example.com')
    assert not is_valid_email_address('not an.email@example.com')

Running tests

Easy example

So, we have two files in our project directory: validator.py and test_validator.py.

We can now simply run pytest from the command line. Its output should look something like this:

======================= test session starts =========================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 3 items

test_validator.py ...                                          [100%]

======================== 3 passed in 0.01s ==========================

Here pytest informs us that it has found three test functions inside test_validator.py and that all of these tests passed (as indicated by the three dots ...).

The 100% indicator gives us a good feeling, and we are confident that our validator works as expected. However, as outlined in the introduction, the validator function is far from perfect, and so are our test cases. Even without DNS checks, we would mark an address without a top-level domain like john@example as valid, while a perfectly fine address like john.doe+newsletter@example.com would be marked invalid.

Let’s now add these test cases to our test_validator.py:

...
def test_valid_email_can_have_plus_sign():
    assert is_valid_email_address('john.doe+newsletter@example.com')

def test_valid_email_must_have_a_tld():
    assert not is_valid_email_address('john@example')

If we run pytest again, we see failing tests:

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 5 items

test_validator.py ...FF                                                  [100%]

=================================== FAILURES ===================================
_____________________ test_valid_email_can_have_plus_sign ______________________

    def test_valid_email_can_have_plus_sign():
>       assert is_valid_email_address('john.doe+newsletter@example.com')
E       AssertionError: assert False
E        +  where False = is_valid_email_address('john.doe+newsletter@example.com')

test_validator.py:17: AssertionError
_____________________ test_valid_email_must_have_a_tld ______________________

    def test_valid_email_must_have_a_tld():
>       assert not is_valid_email_address('john@example')
E       AssertionError: assert not True
E        +  where True = is_valid_email_address('john@example')

test_validator.py:20: AssertionError
=========================== short test summary info ============================
FAILED test_validator.py::test_valid_email_can_have_plus_sign - AssertionErro...
FAILED test_validator.py::test_valid_email_must_have_a_tld - AssertionError: ...
========================= 2 failed, 3 passed in 0.05s ==========================

Note that we got two Fs in addition to our three dots (...), indicating that two test functions failed.

In addition, we get a new FAILURES section in the output, which shows in detail at which point each test failed. That’s pretty helpful for debugging.

Designing Tests

Our small validator example illustrates the importance of designing tests.

We wrote our validator function first and then came up with some test cases for it. Soon we noticed that these test cases were by no means comprehensive: we had missed some essential aspects of validating an email address.

You may have heard about Test Driven Development (TDD), which advocates the exact opposite: getting your requirements right by writing your test cases first, and not starting to implement a feature before you feel you have covered all test cases. This way of thinking has always been a good idea, but it has gained even more importance over time as software projects have grown in complexity.

I will write another blog post about TDD soon to cover it in depth.

Configuration

Usually, a project setup is much more complicated than just a single file with a validator function in it.

You may have a Python package structure for your project, or your code relies on external dependencies like a database.

Fixtures

You might have encountered the term fixture in different contexts. For example, in the Django web framework, fixtures refer to a collection of initial data to be loaded into the database. In the pytest context, however, fixtures refer to functions that pytest runs before and/or after the actual test functions.

Setup and Tear Down

We can create such functions using the pytest.fixture() decorator. We do this inside the test_validator.py file for now.

import pytest

@pytest.fixture()
def database_environment():
    # setup_database() and teardown_database() are placeholders
    # for your own setup and teardown logic
    setup_database()
    yield
    teardown_database()

Note that setting up the database and tearing it down happen in the same fixture. The yield statement marks the point at which pytest runs the actual test.

To have the fixture actually be used by one of your tests, you simply add the fixture’s name as an argument, like so (still in test_validator.py):

def test_world(database_environment):
    assert 1 == 1
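
The two helper functions are not part of pytest; you would provide them yourself. Here is a minimal sketch of what they could look like, assuming an in-memory SQLite database (the function names and the module-level connection are illustrative assumptions, not part of the article's code):

import sqlite3

_connection = None

def setup_database():
    # Assumption for illustration: create a fresh in-memory database
    global _connection
    _connection = sqlite3.connect(':memory:')
    _connection.execute('CREATE TABLE users (email TEXT)')

def teardown_database():
    # Dispose of the database connection after the test has run
    global _connection
    _connection.close()
    _connection = None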

Getting Data from Fixtures

Instead of using yield, a fixture function can also return arbitrary values:

import pytest

@pytest.fixture()
def my_fruit():
    return "apple"

Again, requesting that fixture from a test function is done by providing the fixture’s name as a parameter:

def test_fruit(my_fruit):
    assert my_fruit == "apple"
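
Both ideas can be combined: a fixture can yield a value, so the test receives data and the fixture still runs its teardown code after the test finishes. A minimal sketch (the fruit_basket fixture is made up for illustration):

import pytest

@pytest.fixture()
def fruit_basket():
    basket = ['apple', 'banana']
    yield basket    # the test runs here and receives the basket
    basket.clear()  # teardown code, executed after the test

def test_basket(fruit_basket):
    assert 'apple' in fruit_basket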

Configuration Files

pytest can read its project-specific configuration from one of these files:

  • pytest.ini
  • tox.ini
  • setup.cfg

Which file to use depends on what other tooling you use in your project. If you have packaged your project, you should use the setup.cfg file. If you use tox to test your code in different environments, you can put the pytest configuration into the tox.ini file. The pytest.ini file can be used if you do not want to use any tooling other than pytest.

The configuration file looks mostly the same for each of these three file types:

Using pytest.ini and tox.ini

[pytest]
# -rsxX: extra summary info for skips/xfails/xpasses; -l: show locals;
# --tb=short: short tracebacks; --strict: error on unknown markers
addopts = -rsxX -l --tb=short --strict

If you are using the setup.cfg file, the only difference is that you have to prefix the [pytest] section with tool: like so:

[tool:pytest]
addopts = -rsxX -l --tb=short --strict

conftest.py

Each folder containing test files can contain a conftest.py file, which is read by pytest. This is a good place to put your custom fixtures, as they can be shared between different test files.

The conftest.py file(s) can alter the behavior of pytest on a per-project basis.

Apart from shared fixtures, conftest.py can also hold external hooks and plugins, or modifications to the path pytest uses to discover tests and implementation code.
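
For example, the database_environment fixture from above could be moved into a conftest.py so that every test file in the folder can request it without importing anything (a minimal sketch, reusing the placeholder setup and teardown helpers):

# conftest.py
import pytest

@pytest.fixture()
def database_environment():
    setup_database()     # placeholder setup logic, as above
    yield
    teardown_database()  # placeholder teardown logic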

CLI / PDB

During development, mainly when you write your tests before your implementation, pytest can be a beneficial tool for debugging.

We will have a look at the most useful command-line options.

Running Only One Test

If you want to run one particular test only, you can reference that test via the test_ file it is in and the function’s name:

pytest test_validator.py::test_regular_email_validates

Collect Only

Sometimes you just want to see a list of the collected tests rather than execute them all.

pytest --collect-only

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 5 items

<Module test_validator.py>
  <Function test_regular_email_validates>
  <Function test_valid_email_has_one_at_sign>
  <Function test_valid_email_has_only_allowed_chars>
  <Function test_valid_email_can_have_plus_sign>
  <Function test_valid_email_must_have_a_tld>

========================== 5 tests collected in 0.01s ==========================

Exit on the first error

You can force pytest to stop executing further tests after a failed one:

pytest -x

Run the last failed test only

If you want to run only the tests that failed the last time, you can do so using the --lf flag:

pytest --lf

Run all tests, but run the last failed ones first

pytest --ff

Show values of local variables in the output

If we set up a more complex test function with some local variables, we can instruct pytest to display these local variables with the -l flag.

Let’s rewrite our test function like so:

...
def test_valid_email_can_have_plus_sign():
    email = 'john.doe+newsletter@example.com'
    assert is_valid_email_address('john.doe+newsletter@example.com')
...

Then,

1
pytest -l

will give us this output:

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example
collected 5 items

test_validator.py ...FF                                                  [100%]

=================================== FAILURES ===================================
_____________________ test_valid_email_can_have_plus_sign ______________________

    def test_valid_email_can_have_plus_sign():
        email = 'john.doe+newsletter@example.com'
>       assert is_valid_email_address('john.doe+newsletter@example.com')
E       AssertionError: assert False
E        +  where False = is_valid_email_address('john.doe+newsletter@example.com')

email      = 'john.doe+newsletter@example.com'

test_validator.py:18: AssertionError
_____________________ test_valid_email_must_have_a_tld ______________________

    def test_valid_email_must_have_a_tld():
>       assert not is_valid_email_address('john@example')
E       AssertionError: assert not True
E        +  where True = is_valid_email_address('john@example')

test_validator.py:21: AssertionError
=========================== short test summary info ============================
FAILED test_validator.py::test_valid_email_can_have_plus_sign - AssertionErro...
FAILED test_validator.py::test_valid_email_must_have_a_tld - AssertionError: ...
========================= 2 failed, 3 passed in 0.09s ==========================

Using pytest with a debugger

There is a command-line debugger called pdb built into Python. You can use it together with pytest to debug your test functions’ code.

If you start pytest with --pdb, it will start a pdb debugging session right after an exception is raised in your test. Most of the time this is not particularly useful on its own, as you might want to inspect each line of code before the exception is raised.
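
For example, to drop into a pdb session as soon as a test fails:

pytest --pdb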

Another option is the --trace flag, which sets a breakpoint at the first line of each test function. This can become a bit unwieldy if you have a lot of tests. So, for debugging purposes, a good combination is --lf --trace, which starts a pdb debugging session at the beginning of the tests that failed last time:

pytest --lf --trace

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-7.0.1, pluggy-1.0.0
rootdir: /Users/bascodes/Code/blogworkspace/pytest-example, configfile: pytest.ini
collected 2 items
run-last-failure: rerun previous 2 failures

test_validator.py
>>>>>>>>>>>>>>>>>>>> PDB runcall (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>
> /Users/bascodes/Code/blogworkspace/pytest-example/test_validator.py(17)test_valid_email_can_have_plus_sign()
-> email = 'john.doe+newsletter@example.com'
(Pdb)

CI / CD

In modern software projects, software is often developed according to Test Driven Development principles and delivered through a Continuous Integration / Continuous Deployment (CI/CD) pipeline that includes automated testing.

A typical setup is that commits to the main/master branch are rejected unless all test functions pass.
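
pytest supports this workflow out of the box: its exit code is non-zero if any test fails, which is what CI servers use to gate a build. Many CI systems can also consume a JUnit-style XML report, which pytest can produce (the report file name below is arbitrary):

pytest --junitxml=test-results.xml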

If you want to know more about using pytest in a CI/CD environment, stay tuned as I am planning a new article on that topic.

Documentation

Official documentation for pytest is here: https://docs.pytest.org

A quick note:

This is a repost of the original article by Bas Steins, made with his permission. Visit his site for more articles and/or follow him on Twitter: @bascodes