A Quick Introduction to Code Coverage in Python
Every line of code you write should be tested to prevent last-minute surprises. With a test suite in place, you can run the tests and, if any errors occur, pinpoint the problematic part of the code without taking down your working product (stating the obvious, obviously).
The two most common metrics for evaluating how well code is tested are test coverage and code coverage. Because the underlying ideas are similar, the two terms are occasionally used interchangeably; they are not, however, as identical as you might imagine.
Difference between Code Coverage & Test Coverage
Code Coverage : shows the proportion of the source code that is exercised by your test cases, whether they are run manually or through Selenium or another test automation framework. For instance, if your source code contained a straightforward if…else branch and your tests covered both the if and the else cases, the code coverage would be 100%.
Test Coverage : entails checking that the features you implemented behave as described in the functional requirements specification, software requirements specification, and other relevant documentation. For instance, how would you know whether your web application has undergone cross-browser testing and renders correctly on various browsers? The number of browser + OS combinations across which you have verified your application's compatibility would be your test coverage.
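To make the distinction concrete, here is a minimal sketch of how the two figures are calculated; the statement counts and the browser + OS combinations below are hypothetical numbers chosen purely for illustration.
# coverage_vs_test_coverage.py — illustrative numbers only

# Code coverage: statements executed by the tests / total statements in the code
total_statements = 40
executed_statements = 36
print("Code coverage: {:.0f}%".format(executed_statements / total_statements * 100))   # 90%

# Test coverage: verified requirement combinations / planned combinations
planned_combinations = 12   # e.g. 4 browsers x 3 operating systems
verified_combinations = 9
print("Test coverage: {:.0f}%".format(verified_combinations / planned_combinations * 100))   # 75%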
Now that we have a fundamental understanding of what Test Coverage and Code Coverage are, let’s explore how to ensure Code Coverage in Python.
A useful tool for this is Coverage, which generates a report telling you how much of your code has been exercised by your tests. Coverage works with unittest, pytest, and even nose. We'll quickly go through the fundamentals of using coverage with a very simple example.
We start with some very simple Python code: a function that, given a name, returns a hello message with that name, and if no name is provided, returns a "Hello Stranger" message instead.
# tutorial.py
def say_hello(name=None):
    if name:
        return "Hello {}".format(name)
    else:
        return "Hello Stranger"
We have another file, test_tutorial.py, which uses pytest to test the say_hello function by passing it my name, "Achal", and checking whether it returns "Hello Achal".
# test_tutorial.py
import pytest
from tutorial import say_hello

class TestTutorial:
    """ PyTest Test Cases
    """
    def test_hello_with_name(self):
        name = "Achal"
        expected_output = "Hello Achal"
        assert say_hello(name) == expected_output
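Before measuring coverage, it is worth confirming that the test passes on its own. pytest discovers files named test_*.py automatically, so running it from the project directory is enough:
pytest
# or target the file explicitly
pytest test_tutorial.py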
We'll now go through how to use coverage to create a report that shows how much of our code is covered by the test cases.
Installation
Like any other Python library, coverage can be installed using pip:
pip install coverage
Usage
The idea is to use coverage in combination with your test runner, which is quite easy from the command line. Let's see it with an example.
If you are using pytest, you simply prefix the command with coverage run -m. So:
pytest arg_1 arg_2
will become
coverage run -m pytest arg_1 arg_2
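The same pattern applies to other test runners. If your suite used the built-in unittest module instead, for example, the equivalent invocation would be:
coverage run -m unittest discover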
Now that you are aware of the syntax, let’s run our own example.
coverage run -m pytest
coverage report
Name               Stmts   Miss  Cover
--------------------------------------
test_tutorial.py       7      0   100%
tutorial.py            4      1    75%
--------------------------------------
TOTAL                 11      1    91%
You might wonder why the coverage is only 91%. If we look at tutorial.py again, the if statement has another branch, the else case that returns "Hello Stranger", and our test never exercises it. To cover that branch as well, we should add another test method to the testing code.
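Before adding it, we can ask coverage exactly which statement was missed: the report command accepts a -m (--show-missing) flag that appends a Missing column with the uncovered line numbers. The output below is illustrative; the exact line number depends on how your tutorial.py is laid out.
coverage report -m
Name               Stmts   Miss  Cover   Missing
------------------------------------------------
test_tutorial.py       7      0   100%
tutorial.py            4      1    75%   6
------------------------------------------------
TOTAL                 11      1    91%
With the missed line identified (the "Hello Stranger" return), here is the updated test file: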
# test_tutorial.py
import pytest
from tutorial import say_hello

class TestTutorial:
    """ PyTest Test Cases
    """
    def test_hello_with_name(self):
        name = "Achal"
        expected_output = "Hello Achal"
        assert say_hello(name) == expected_output

    def test_hello_without_name(self):
        expected_output = "Hello Stranger"
        assert say_hello() == expected_output
The newly added "test_hello_without_name" test case passes nothing to the say_hello function and checks whether the output is "Hello Stranger". Running coverage again generates the following report:
coverage run -m pytest
coverage report
Name               Stmts   Miss  Cover
--------------------------------------
test_tutorial.py      10      0   100%
tutorial.py            4      0   100%
--------------------------------------
TOTAL                 14      0   100%
As you can see, it now displays 100% code coverage. 🙂
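If you prefer something more browsable than the terminal table, coverage can also write an annotated HTML report, by default into an htmlcov/ directory, highlighting exactly which lines of each file were hit or missed:
coverage html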
References
- https://coverage.readthedocs.io/en/6.4.3/
- https://www.lambdatest.com/blog/code-coverage-vs-test-coverage/