Python Patterns

Implementing advanced Python design patterns for scalable automation and robust software architecture

Python Patterns is an AI skill that provides idiomatic design patterns and implementation strategies for building Python applications. It covers class design, decorator usage, context managers, error handling, generator patterns, and module organization that enable developers to write clean, Pythonic code.

What Is This?

Overview

Python Patterns provides structured approaches to solving common Python architecture challenges. It handles designing classes with dataclasses and protocols for type-safe composition, implementing decorators for cross-cutting concerns like caching, logging, and retry logic, building context managers for resource lifecycle management, structuring error handling with custom exception hierarchies, using generators for memory-efficient processing of large datasets, and organizing packages with clear public APIs through __init__.py exports.
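
For the last point, a package might re-export its public names from __init__.py so callers never import implementation modules directly — a minimal sketch assuming a hypothetical package layout:

# mypackage/__init__.py (hypothetical layout)
# Re-export the names callers should use; implementation modules stay internal.
from mypackage.config import Config
from mypackage.services import UserService

__all__ = ["Config", "UserService"]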

Who Should Use This

This skill serves Python developers establishing project architecture, backend teams building services and CLI tools in Python, tech leads defining coding patterns for team adoption, and developers transitioning from other languages to idiomatic Python.

Why Use It?

Problems It Solves

Verbose class definitions with boilerplate __init__ methods obscure the actual data structure. Repeated cross-cutting logic like retry and caching gets duplicated across functions. Resource leaks from unclosed files and connections cause memory and handle exhaustion. Flat module structures without clear public APIs make it difficult to identify which interfaces are meant for external use.

Core Highlights

Dataclasses generate __init__, __repr__, and equality methods automatically from type-annotated field declarations. Decorators encapsulate reusable behavior that applies across multiple functions. Context managers guarantee resource cleanup through the with statement protocol. Protocol classes enable structural typing for flexible interface definitions.

How to Use It?

Basic Usage

from dataclasses import dataclass, field
from typing import Protocol
from functools import wraps
import time

class Repository(Protocol):
    def find(self, id: str) -> dict | None: ...
    def save(self, entity: dict) -> None: ...

@dataclass
class Config:
    host: str
    port: int = 8080
    debug: bool = False
    tags: list[str] = field(default_factory=list)

def retry(max_attempts=3, delay=1.0):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_error = e
                    if attempt < max_attempts - 1:
                        time.sleep(delay * (attempt + 1))
            raise last_error
        return wrapper
    return decorator

@retry(max_attempts=3)
def fetch_data(url: str) -> dict:
    import json
    import urllib.request
    with urllib.request.urlopen(url) as resp:
        # Parse the JSON body so the return value matches the dict annotation.
        return json.loads(resp.read())
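
A quick check of what the dataclass generates for free — equality and a readable repr come straight from the field declarations, with no hand-written methods:

cfg = Config(host="localhost", tags=["web"])
print(cfg)  # Config(host='localhost', port=8080, debug=False, tags=['web'])
print(cfg == Config(host="localhost", tags=["web"]))  # True: __eq__ compares field values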

Real-World Examples

from contextlib import contextmanager
from pathlib import Path

class AppError(Exception):
    def __init__(self, message: str, code: str = "UNKNOWN"):
        super().__init__(message)
        self.code = code

class NotFoundError(AppError):
    def __init__(self, resource: str, id: str):
        super().__init__(
            f"{resource} {id} not found", code="NOT_FOUND"
        )

@contextmanager
def managed_connection(dsn):
    import psycopg2
    conn = psycopg2.connect(dsn)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

def stream_csv(path: str | Path, chunk_size: int = 1000):
    import csv
    with open(path, newline="") as f:  # newline="" is recommended by the csv docs
        reader = csv.DictReader(f)
        batch = []
        for row in reader:
            batch.append(row)
            if len(batch) >= chunk_size:
                yield batch
                batch = []
        if batch:
            yield batch

class UserService:
    def __init__(self, repo: Repository):
        self.repo = repo

    def get(self, user_id: str) -> dict:
        user = self.repo.find(user_id)
        if not user:
            raise NotFoundError("User", user_id)
        return user

for batch in stream_csv("users.csv"):
    print(f"Processing {len(batch)} rows")

Advanced Tips

Use Protocol classes instead of ABC when you want structural typing that checks method signatures without requiring explicit inheritance. Combine dataclasses with __post_init__ for validation that runs at construction time. Use generator pipelines to chain data transformations without materializing intermediate collections. The sketches below illustrate the latter two tips.
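
Two of those tips in miniature, using hypothetical names — a __post_init__ validator on a dataclass, and a generator pipeline whose stages pull one item at a time:

from dataclasses import dataclass

@dataclass
class Port:
    number: int

    def __post_init__(self):
        # Runs right after the generated __init__, so invalid values never escape.
        if not 0 < self.number < 65536:
            raise ValueError(f"invalid port: {self.number}")

def parse(lines):
    for line in lines:
        yield int(line)

def evens(values):
    for v in values:
        if v % 2 == 0:
            yield v

# No intermediate list is materialized between the stages.
print(list(evens(parse(["1", "2", "3", "4"]))))  # [2, 4]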

When to Use It?

Use Cases

Use Python Patterns when establishing project structure for new Python services, when implementing reusable decorators for cross-cutting concerns, when designing resource management with context managers, or when processing large datasets with generator-based streaming.

Related Topics

Python typing module usage, dataclass customization, asyncio patterns, packaging with pyproject.toml, and dependency injection in Python complement Python pattern adoption.

Important Notes

Requirements

Python 3.10 or later for the typing features used in the examples, such as the X | Y union syntax (typing.Protocol itself is available from Python 3.8). Familiarity with Python's descriptor and iterator protocols. A type checker such as mypy for static Protocol validation.

Usage Recommendations

Do: use dataclasses for value objects and configuration to reduce boilerplate. Write context managers for any resource that requires cleanup after use. Apply decorators for behavior that repeats across multiple functions.

Don't: create deep class hierarchies when composition with protocols provides more flexibility, write decorators that obscure the function signature and make debugging difficult, or omit type hints on public interfaces, which reduces IDE support and self-documentation.

Limitations

Protocol-based typing is checked statically and does not enforce contracts at runtime by default. Decorator stacking can make the execution order difficult to reason about. Generator-based patterns add complexity that may not benefit small datasets.
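
To make the first limitation concrete: typing.runtime_checkable enables isinstance checks against a Protocol, but at runtime only the presence of the method names is verified, never their signatures — a minimal sketch:

from typing import Protocol, runtime_checkable

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

class BadCloser:
    def close(self, force: bool) -> None:  # wrong signature
        pass

print(isinstance(BadCloser(), Closable))  # True: only the method name is checked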