Property Based Testing
Automated property-based testing integration to ensure robust software behavior across diverse edge cases
Property-Based Testing is a community skill for writing tests that verify code properties against generated inputs. It covers property definition, input generation, shrinking strategies, stateful testing, and integration with existing test suites.
What Is This?
Overview
Property-Based Testing provides tools for testing software by defining properties that should hold for all valid inputs, rather than writing specific test cases. It covers property definition, which specifies the invariants and relationships the code must satisfy; input generation, which creates random test data matching type constraints and preconditions; shrinking, which minimizes failing inputs to the simplest reproducing example; stateful testing, which verifies that sequences of operations maintain system invariants; and integration with pytest and unittest, so property tests can sit alongside existing test suites. The skill enables developers to find edge cases that example-based tests miss.
Who Should Use This
This skill serves software engineers testing complex algorithms and data transformations, quality assurance teams building comprehensive test suites that catch edge cases, and library authors verifying correctness properties across wide input ranges.
Why Use It?
Problems It Solves
Example-based tests verify only specific inputs, leaving edge cases undiscovered until production. Writing enough test cases to cover boundary conditions manually is impractical for functions with large input spaces. Bugs in serialization roundtrips and data transformations hide in specific input combinations that developers do not anticipate. Testing stateful systems with fixed scenarios misses interaction bugs between operation sequences.
Core Highlights
The property engine defines invariants that must hold across all generated inputs. The data generator creates typed random inputs matching specified constraints. The shrinker minimizes failing cases to the simplest reproducing example. The stateful tester verifies that operation sequences maintain system invariants.
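The shrinker is worth understanding even if you never implement one: when a property fails, the library searches for the smallest input that still fails. Below is a stdlib-only sketch of the idea, hand-rolled purely for illustration (Hypothesis automates all of this), using a deliberately false property as the target:

```python
def holds(xs):
    # Deliberately false property: claims every list of integers has a
    # nonnegative sum. Any list summing below zero is a counterexample.
    return sum(xs) >= 0

def shrink(xs):
    """Greedily minimize a failing input: first try dropping elements,
    then try moving each element toward zero, repeating until no
    transformation keeps the property failing."""
    changed = True
    while changed:
        changed = False
        # Pass 1: try removing one element at a time.
        for i in range(len(xs)):
            candidate = xs[:i] + xs[i + 1:]
            if not holds(candidate):
                xs = candidate
                changed = True
                break
        else:
            # Pass 2: try halving each element toward zero.
            for i, v in enumerate(xs):
                half = v // 2 if v > 0 else -((-v) // 2)
                candidate = xs[:i] + [half] + xs[i + 1:]
                if candidate != xs and not holds(candidate):
                    xs = candidate
                    changed = True
                    break
    return xs

# A noisy five-element failing input shrinks to a one-element counterexample.
print(shrink([7, -30, 12, -9, 4]))  # → [-1]
```

Hypothesis's shrinker is far more sophisticated, but the goal is the same: report a minimal case like [-1] rather than the original noisy input.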
How to Use It?
Basic Usage
from hypothesis import given, settings
from hypothesis import strategies as st

@given(st.lists(st.integers(), min_size=1))
def test_sort_preserves(xs: list[int]):
    result = sorted(xs)
    assert len(result) == len(xs)
    assert set(result) == set(xs)
    assert all(result[i] <= result[i + 1] for i in range(len(result) - 1))

@given(st.dictionaries(st.text(min_size=1), st.integers()))
def test_json_roundtrip(data: dict):
    import json
    encoded = json.dumps(data)
    decoded = json.loads(encoded)
    assert decoded == data

@given(st.integers(), st.integers())
def test_addition(a: int, b: int):
    assert a + b == b + a
    assert (a + b) - b == a

Real-World Examples
from hypothesis.stateful import RuleBasedStateMachine, rule, initialize, invariant
from hypothesis import strategies as st

class StackMachine(RuleBasedStateMachine):
    def __init__(self):
        super().__init__()
        self.stack = []
        self.model = []

    @rule(value=st.integers())
    def push(self, value):
        self.stack.append(value)
        self.model.append(value)

    @rule()
    def pop(self):
        if self.stack:
            actual = self.stack.pop()
            expected = self.model.pop()
            assert actual == expected

    @invariant()
    def sizes_match(self):
        assert len(self.stack) == len(self.model)

TestStack = StackMachine.TestCase

Advanced Tips
Use composite strategies to build complex test data that mirrors your domain models with valid relationships between fields. Register custom strategies for domain types so they can be reused across tests without duplicating generation logic. Use the @example decorator to pin specific regression cases alongside generated tests for known edge cases.
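The tips above can be sketched with Hypothesis's real APIs, `st.composite` and the `@example` decorator. The user model here is a hypothetical domain type invented for this sketch, not part of any library:

```python
from hypothesis import example, given, strategies as st

@st.composite
def user_strategy(draw):
    # Hypothetical domain model: the email is built from the username, so the
    # two generated fields always stay internally consistent.
    name = draw(st.text(alphabet="abcdefghij", min_size=1, max_size=8))
    domain = draw(st.sampled_from(["example.com", "test.org"]))
    return {"name": name, "email": f"{name}@{domain}"}

@given(user_strategy())
@example({"name": "a", "email": "a@example.com"})  # pinned regression case
def test_email_contains_name(user):
    assert user["email"].startswith(user["name"])
```

Because `user_strategy` is a plain strategy object, it can be registered once and reused across any test that needs a valid user.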
When to Use It?
Use Cases
Test serialization functions by verifying that encode followed by decode returns the original data for all valid inputs. Verify sorting algorithms preserve element counts and produce ordered output across random lists. Test stateful systems by generating random operation sequences and checking invariants after each step.
Related Topics
Property-based testing, Hypothesis, QuickCheck, fuzzing, test generation, software testing, and automated verification.
Important Notes
Requirements
Hypothesis library for Python property-based testing support. pytest or unittest as the test runner framework. Understanding of properties and invariants that characterize correct behavior.
Usage Recommendations
Do: start with simple properties such as roundtrip invariants and commutativity before writing complex stateful tests. Use settings to control the number of generated examples, balancing thoroughness against test execution time. Store the Hypothesis database in version control to preserve failing examples across runs.
Don't: replace all example-based tests, since specific test cases serve as documentation of expected behavior. Don't write properties that are trivially true and do not actually test meaningful behavior. Don't ignore shrunk failing examples, since they reveal the minimal conditions that trigger the bug.
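A sketch of the settings-related recommendations above, using Hypothesis's `settings` decorator and `DirectoryBasedExampleDatabase`. The path shown matches Hypothesis's default on-disk database location, which is the directory you would commit to version control:

```python
from hypothesis import given, settings, strategies as st
from hypothesis.database import DirectoryBasedExampleDatabase

# max_examples trades thoroughness for speed per test; the example database
# stores failing inputs on disk so they replay on other machines and in CI.
@settings(
    max_examples=500,
    database=DirectoryBasedExampleDatabase(".hypothesis/examples"),
)
@given(st.integers(min_value=0))
def test_square_nonnegative(n):
    assert n * n >= 0
```

Settings can also be grouped into named profiles with `settings.register_profile`, so CI can run more examples than local development without editing each test.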
Limitations
Property tests run slower than example-based tests because they generate and check many inputs per test function. Defining meaningful properties requires understanding the mathematical invariants of the code being tested. Flaky failures can occur when properties depend on timing or external system state.
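For timing-related flakiness specifically, one mitigation is to relax Hypothesis's per-example deadline, which defaults to 200 ms; a sketch:

```python
import time

from hypothesis import given, settings, strategies as st

# The default per-example deadline can make properties that touch slow
# resources fail intermittently; deadline=None disables that check, and a
# reduced max_examples keeps total runtime bounded.
@settings(deadline=None, max_examples=20)
@given(st.floats(min_value=0.0, max_value=0.001))
def test_tolerates_slow_examples(delay):
    time.sleep(delay)  # stand-in for a slow external call
    assert delay >= 0.0
```

Disabling the deadline hides genuine performance regressions, so prefer scoping it to the specific tests that need it rather than a global profile.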