Python Testing

Automate and integrate Python testing workflows using modern frameworks and best practices

Python Testing is an AI skill that provides patterns and strategies for writing effective tests in Python using pytest and the standard library. It covers test organization, fixture design, parametrized tests, mocking, integration test setup, and coverage analysis, enabling developers to build reliable Python test suites.

What Is This?

Overview

Python Testing provides structured approaches to writing comprehensive test suites in Python. It handles organizing tests with pytest conventions and directory structures, creating reusable fixtures with setup and teardown through yield patterns, writing parametrized tests that cover multiple input scenarios efficiently, mocking external dependencies with unittest.mock for isolated unit tests, setting up integration tests with database and API dependencies, and measuring code coverage to identify untested paths.

Who Should Use This

This skill serves Python developers writing tests for application logic, backend teams establishing testing standards for Python services, QA engineers building integration test suites, and teams improving code coverage on existing projects.

Why Use It?

Problems It Solves

Duplicated setup code across test functions creates maintenance burden when dependencies change. Tests that call external services fail intermittently due to network issues. Without parametrized tests, similar test cases require separate functions for each input variation. Missing coverage analysis leaves untested code paths that harbor bugs.

Core Highlights

Fixtures provide reusable setup with automatic cleanup through yield-based teardown. Parametrized tests consolidate many scenarios into a single function definition. Mock objects isolate units from external dependencies for reliable testing. Coverage reporting identifies untested code paths for targeted test writing.

How to Use It?

Basic Usage

import pytest

def calculate_discount(price, percentage):
    if percentage < 0 or percentage > 100:
        raise ValueError("Invalid percentage")
    return price * (1 - percentage / 100)

@pytest.mark.parametrize("price,pct,expected", [
    (100, 10, 90.0),
    (200, 50, 100.0),
    (50, 0, 50.0),
    (80, 100, 0.0),
])
def test_calculate_discount(price, pct, expected):
    assert calculate_discount(price, pct) == expected

@pytest.mark.parametrize("pct", [-1, 101, 200])
def test_discount_invalid_percentage(pct):
    with pytest.raises(ValueError):
        calculate_discount(100, pct)

@pytest.fixture
def sample_products():
    return [
        {"name": "Widget", "price": 25.0},
        {"name": "Gadget", "price": 49.99},
        {"name": "Doohickey", "price": 12.50},
    ]

def test_product_count(sample_products):
    assert len(sample_products) == 3

Real-World Examples

import pytest
from unittest.mock import Mock

class UserService:
    def __init__(self, repo, email_client):
        self.repo = repo
        self.email_client = email_client

    def register(self, email, name):
        existing = self.repo.find_by_email(email)
        if existing:
            raise ValueError("Email taken")
        user = self.repo.create({"email": email, "name": name})
        self.email_client.send_welcome(email)
        return user

@pytest.fixture
def mock_repo():
    repo = Mock()
    repo.find_by_email.return_value = None
    repo.create.return_value = {
        "id": "1", "email": "a@b.com", "name": "Alice"
    }
    return repo

@pytest.fixture
def mock_email():
    return Mock()

@pytest.fixture
def service(mock_repo, mock_email):
    return UserService(mock_repo, mock_email)

def test_register_success(service, mock_repo, mock_email):
    user = service.register("a@b.com", "Alice")
    assert user["email"] == "a@b.com"
    mock_repo.create.assert_called_once()
    mock_email.send_welcome.assert_called_once_with("a@b.com")

def test_register_duplicate(service, mock_repo):
    mock_repo.find_by_email.return_value = {"id": "1"}
    with pytest.raises(ValueError, match="Email taken"):
        service.register("a@b.com", "Alice")
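The examples above inject mocks through constructor arguments. When a dependency is referenced at module level instead, unittest.mock.patch can replace it for the duration of a test. A minimal sketch, using the standard library's time.time as the patched dependency (seconds_since is an illustrative helper, not part of the examples above):

```python
import time
from unittest.mock import patch

def seconds_since(start):
    # Depends on the wall clock, so a direct test would be flaky.
    return time.time() - start

def test_seconds_since():
    # Freeze time.time inside the with block; the patch is
    # automatically undone when the block exits.
    with patch("time.time", return_value=1_000.0):
        assert seconds_since(990.0) == 10.0
```

The same mechanism works as a decorator (@patch("time.time")), which passes the mock as an extra test argument.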

@pytest.fixture
def db_connection():
    import sqlite3
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"
    )
    yield conn
    conn.close()
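The db_connection fixture above is defined but not yet exercised. A test using it might look like the following sketch; the fixture is repeated so the snippet stands alone, and add_user is an illustrative helper, not part of any library:

```python
import sqlite3

import pytest

@pytest.fixture
def db_connection():
    # Fresh in-memory database per test; yield hands the connection
    # to the test, and the code after yield runs as teardown.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()

def add_user(conn, name):
    # Insert a row and return all stored names for verification.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return [row[0] for row in conn.execute("SELECT name FROM users")]

def test_add_user(db_connection):
    assert add_user(db_connection, "Alice") == ["Alice"]
```

Because each test gets its own in-memory database, tests stay independent even when they mutate the users table.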

Advanced Tips

Use fixture scoping (session, module, class) to control setup frequency and reduce test suite execution time. Apply conftest.py files to share fixtures across test directories without imports. Mark integration tests with custom markers so they run separately from unit tests in CI.
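The scoping and marker tips above can be sketched as follows; expensive_resource and the integration marker name are illustrative, and custom markers should be registered under markers in pytest.ini or pyproject.toml to avoid warnings:

```python
import pytest

# scope="session" runs the setup once for the whole test session
# instead of once per test, cutting repeated setup cost.
@pytest.fixture(scope="session")
def expensive_resource():
    resource = {"connected": True}  # stand-in for a slow connection
    yield resource
    resource["connected"] = False   # teardown after the last test

# Custom marker: select with `pytest -m integration`,
# or exclude with `pytest -m "not integration"`.
@pytest.mark.integration
def test_uses_resource(expensive_resource):
    assert expensive_resource["connected"]
```

In CI this lets a fast unit-test job run with -m "not integration" while a separate job runs the marked integration suite.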

When to Use It?

Use Cases

Use Python Testing when writing parametrized tests for functions with multiple input variations, when mocking external services for isolated unit testing, when setting up integration tests with database fixtures, or when measuring and improving code coverage.

Related Topics

Pytest plugin ecosystem, unittest.mock library, coverage.py analysis, factory_boy test data generation, and CI test pipeline configuration complement Python testing.

Important Notes

Requirements

pytest installed as the test runner, unittest.mock from the standard library for mocking, and coverage.py for code coverage measurement.
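A typical coverage workflow with coverage.py looks like this (output locations are the tool's defaults):

```shell
# Run the test suite under coverage measurement
coverage run -m pytest

# Print a per-file summary; -m lists the missing line numbers
coverage report -m

# Generate a browsable HTML report in htmlcov/
coverage html
```

Teams using the pytest-cov plugin can get the same data in one step with pytest --cov.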

Usage Recommendations

Do: use fixtures for shared setup to eliminate duplication across test functions. Write parametrized tests when multiple inputs exercise the same logic path. Assert on mock call arguments to verify correct interaction with dependencies.

Don't: mock everything, which creates tests that pass regardless of actual behavior; write tests that depend on execution order or shared mutable state; or skip error paths, which leaves failure handling unverified.

Limitations

Heavy mocking creates tests tightly coupled to implementation details rather than behavior. Coverage metrics measure line execution but not logical correctness. Fixture dependency chains can become complex and difficult to trace in large test suites.