Introduction to Python Code Testing with PyTest
A test is code that executes code. When you start developing a new feature for your Python project, you can formalize its requirements as code. By doing so, you not only document how your implementation is supposed to be used, but you can also run all the tests automatically to make sure your code matches your requirements. One tool that assists you in doing this is pytest, probably the most popular testing tool in the Python universe.
It’s all about assertions
Let’s assume you have written a function that validates an email address. Note that we keep it simple here and don’t use regular expressions or DNS lookups for validating email addresses. Instead, we just make sure that there is exactly one @ sign in the string to be tested and that it otherwise contains only Latin characters, digits, and a few other allowed characters.
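The original implementation is not shown in this excerpt; a minimal sketch matching the description (the function name and the exact allow-list are assumptions) could look like this:

```python
import string

# Characters we allow besides the single @ sign (an assumption for this sketch).
ALLOWED = set(string.ascii_letters + string.digits + ".-_")

def is_valid_email(address):
    """Very naive email validation: exactly one @ sign, allowed characters only."""
    if address.count("@") != 1:
        return False
    local, domain = address.split("@")
    if not local or not domain:
        return False
    return all(character in ALLOWED for character in local + domain)
```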
Now, we can state some assertions about our code. For example, we would assert that addresses like jane.doe@example.com are valid, while strings such as jane.doe.example.com (no @ sign) or jane@doe@example.com (two @ signs) should yield False. We can check that our function indeed behaves the way we expect:
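With the validator sketched above, such checks might read as follows (the addresses are illustrative, and the validator is repeated so the snippet stands on its own):

```python
import string

# Naive validator repeated from above for a self-contained snippet.
ALLOWED = set(string.ascii_letters + string.digits + ".-_")

def is_valid_email(address):
    if address.count("@") != 1:
        return False
    local, domain = address.split("@")
    if not local or not domain:
        return False
    return all(character in ALLOWED for character in local + domain)

# These addresses are illustrative examples, not the author's originals.
assert is_valid_email("jane.doe@example.com")
assert is_valid_email("info@test.org")
assert not is_valid_email("jane.doe.example.com")
assert not is_valid_email("jane@doe@example.com")
```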
These email address examples we come up with are called test cases. For each test case, we expect a certain result. A tool like pytest can help automate testing these assertions. Writing down these assertions helps you to:
- document how your code is going to be used
- make sure that future changes do not break other parts of your software
- think about possible edge cases of your functionalities
To make that happen, we just create a new file for all of our tests and put a few functions in there.
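A test file along these lines, which could be called test_validator.py, might contain the following (names and test cases are illustrative; the validator is inlined here, while in the real project it would be imported from its own module):

```python
# test_validator.py
# In the real project, the validator would be imported from its own module,
# e.g. `from validator import is_valid_email`; it is inlined here so the
# snippet is self-contained.
import string

ALLOWED = set(string.ascii_letters + string.digits + ".-_")

def is_valid_email(address):
    if address.count("@") != 1:
        return False
    local, domain = address.split("@")
    if not local or not domain:
        return False
    return all(character in ALLOWED for character in local + domain)

def test_valid_addresses():
    assert is_valid_email("jane.doe@example.com")

def test_missing_at_sign():
    assert not is_valid_email("jane.doe.example.com")

def test_two_at_signs():
    assert not is_valid_email("jane@doe@example.com")
```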
So, we have two files in our project directory: the module containing the validator function and the test_validator.py file containing our tests.
We can now simply run pytest from the command line. pytest informs us that it has found three test functions inside test_validator.py and that all of them passed, as indicated by three dots (...) in the output.

The 100% indicator gives us a good feeling, since we are now confident that our validator works as expected. However, as outlined in the introduction, the validator function is far from perfect, and so are our test cases. Even without DNS lookups, we would mark an address with a made-up domain (for instance, user@no-such-domain.example) as valid, while a perfectly legitimate address containing a character missing from our allow-list (for instance, jane+filter@example.com) would be marked invalid.
Let’s add these test cases now to our test_validator.py file:
If we run pytest again, we see failing tests. Note that we now get two F characters in addition to our three dots, indicating that two test functions failed. In addition, the output contains a new FAILURES section which explains in detail at which point each test failed. That’s pretty helpful for debugging.
Our small validator example illustrates how important it is to design your tests carefully.
We wrote our validator function first and then came up with some test cases for it. Soon we noticed that these test cases are by no means comprehensive. Instead, we missed some essential aspects of validating an email address.
You may have heard about Test-Driven Development (TDD), which advocates the exact opposite: get your requirements right by writing your test cases first, and do not start implementing a feature before you feel you have covered all test cases. This way of thinking has always been a good idea, and it has only gained importance as software projects have grown in complexity.
I will write another blog post about TDD soon to cover it in depth.
Usually, a project setup is much more complicated than just a single file with a validator function in it.
You may have a Python package structure for your project, or your code relies on external dependencies like a database.
You might have used the term fixture in different contexts. For example, in the Django web framework, fixtures refer to a collection of initial data to be loaded into the database. In the context of pytest, however, fixtures are functions that pytest runs before and/or after the actual test functions.
Setup and Tear Down
We can create such functions using the pytest.fixture() decorator. For now, we do this inside the test_validator.py file.
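The original snippet is not shown here; a sketch of such a fixture, using an in-memory SQLite database as a stand-in for a real one (names are assumptions), could look like this:

```python
import sqlite3

import pytest

@pytest.fixture()
def database():
    # Set up: create and prepare an in-memory database.
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (email TEXT)")
    # Everything up to the yield runs before the test; the test receives the
    # yielded object. Everything after the yield runs as teardown.
    yield connection
    connection.close()
```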
Note that setting up the database and tearing it down happens in the same fixture function. The yield keyword marks the point at which pytest runs the actual tests.
To have the fixture actually be used by one of your tests, you simply add the fixture’s name as an argument (still inside test_validator.py).
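Such a test could look like this (the fixture is repeated so the snippet is self-contained; the names are illustrative):

```python
import sqlite3

import pytest

# Hypothetical fixture, repeated from above so this snippet runs on its own.
@pytest.fixture()
def database():
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (email TEXT)")
    yield connection
    connection.close()

# pytest sees the parameter name, finds the matching fixture, runs its setup,
# and passes the yielded connection into the test function.
def test_user_table_is_empty(database):
    rows = database.execute("SELECT * FROM users").fetchall()
    assert rows == []
```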
Getting Data from Fixtures
Instead of using yield, a fixture function can also simply return a value. Requesting such a fixture from a test function again works by providing the fixture’s name as a parameter:
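A sketch of both sides (names and data are assumptions):

```python
import pytest

# Hypothetical fixture that returns a value instead of yielding one.
@pytest.fixture()
def valid_addresses():
    return ["jane.doe@example.com", "info@test.org"]

def test_addresses_contain_one_at_sign(valid_addresses):
    for address in valid_addresses:
        assert address.count("@") == 1
```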
Configuration

pytest can read its project-specific configuration from one of these files: pytest.ini, setup.cfg, or tox.ini.
Which file to use depends on what other tooling you use in your project. If you have packaged your project, you can put the configuration into the setup.cfg file. If you use tox to test your code in different environments, the pytest configuration can go into the tox.ini file. A dedicated pytest.ini file is the right choice if you do not want to rely on any additional tooling.
The configuration file looks mostly the same for each of these three file types:
Using pytest.ini and tox.ini
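For pytest.ini and tox.ini, a minimal configuration might look like this (the option values are illustrative):

```ini
[pytest]
testpaths = tests
addopts = -v
```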
If you are using the setup.cfg file, the only difference is that you have to prefix the [pytest] section name with tool:, like so:
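The same illustrative configuration in setup.cfg:

```ini
[tool:pytest]
testpaths = tests
addopts = -v
```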
conftest.py

Each folder containing test files can also contain a conftest.py file, which pytest picks up automatically. This is a good place to put custom fixtures, as they can then be shared between different test files.
Beyond shared fixtures, conftest.py file(s) can alter the behavior of pytest on a per-project basis: you can place external hooks and plugins there, or modify the path pytest uses to discover tests and implementation code.
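For example, a conftest.py might provide a fixture shared by all test files in its folder (the names and data are illustrative):

```python
# conftest.py
# pytest picks this file up automatically; fixtures defined here are
# available to every test file in the same folder.
import pytest

@pytest.fixture()
def sample_addresses():
    # Hypothetical shared test data.
    return ["jane.doe@example.com", "not-an-email"]
```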
CLI / PDB
During development, especially when you write your tests before your implementation, pytest can be a very helpful debugging tool. Let’s have a look at the most useful command-line options.
Running Only One Test
If you want to run one particular test only, you can reference it via the test_ file it is in and the function’s name:
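For example (the heredoc below just creates a stand-in test file so the command can be tried in an empty directory; in your project, the file already exists):

```shell
# Create a minimal stand-in test file (illustrative).
cat > test_validator.py <<'EOF'
def test_valid_addresses():
    assert True
EOF

# Run just this one test function:
pytest test_validator.py::test_valid_addresses
```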
Sometimes you just want to see a list of the collected tests rather than executing them. The --collect-only flag prints exactly that list.
Exit on the first error
You can force pytest to stop executing further tests after the first failure by passing the -x (or --exitfirst) flag.
Run the last failed test only
If you want to run only the tests that failed the last time, you can do so with the --lf (or --last-failed) flag.
Run all tests, but run the last failed ones first
The --ff (or --failed-first) flag runs the whole suite but executes the previously failed tests first.
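The corresponding invocations look like this (a throwaway test file is created first so the commands can be tried in an empty directory):

```shell
# Create a throwaway test file (illustrative; in a real project your
# test files already exist).
cat > test_demo.py <<'EOF'
def test_one():
    assert True

def test_two():
    assert True
EOF

pytest -x test_demo.py      # stop after the first failing test
pytest --lf test_demo.py    # run only the tests that failed last time
pytest --ff test_demo.py    # run all tests, previously failed ones first
```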
Show values of local variables in the output
If we set up a more complex test function with some local variables, we can instruct pytest to display their values using the -l (or --showlocals) flag.
Let’s rewrite our test function like so:
Running pytest -l on that test will then list each local variable and its value in the FAILURES section of the output.
Using pytest with a debugger
There is a command-line debugger called pdb built into Python. You can use it together with pytest to debug your test functions’ code.
If you start pytest with the --pdb flag, it will open a pdb debugging session right after an exception is raised in your test. Most of the time this alone is not particularly useful, as you might want to inspect each line of code leading up to the exception.
Another option is the --trace flag, which makes pytest set a breakpoint at the first line of each test function. This can become a bit unhandy if you have a lot of tests. So, for debugging purposes, a good combination is pytest --lf --trace, which starts a pdb session at the beginning of the last test that failed.
CI / CD
In many modern software projects, code is developed according to Test-Driven Development principles and delivered through a Continuous Integration / Continuous Deployment (CI/CD) pipeline that includes automated testing.
A typical setup is that commits to the main/master branch are rejected unless all test functions pass.
If you want to know more about using pytest in a CI/CD environment, stay tuned: I am planning a new article on that topic.
The official documentation for pytest can be found at https://docs.pytest.org.