Logging Best Practices
Implement standardized logging frameworks and automated monitoring integrations
Logging Best Practices is an AI skill that provides guidelines and implementation patterns for effective application logging across development, staging, and production environments. It covers log level strategies, structured logging formats, correlation identifiers, log aggregation, and retention policies that enable efficient debugging and operational monitoring.
What Is This?
Overview
Logging Best Practices provides structured approaches to instrumenting applications with meaningful log output. It handles selecting appropriate log levels for different event types, formatting log entries as structured JSON for machine parsing, propagating correlation IDs across service boundaries for request tracing, configuring log rotation and retention to manage storage costs, filtering sensitive data from log output to maintain compliance, and integrating with centralized aggregation platforms for search and alerting.
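For single-host rotation and retention, the standard library's rotating file handler covers the basics without extra infrastructure; a minimal sketch (the file path, size limit, and backup count are illustrative, not prescribed by this skill):
import logging
from logging.handlers import RotatingFileHandler

# Rotate the active file at roughly 10 MB and keep five backups; tune both
# values to match your retention policy and storage budget.
handler = RotatingFileHandler("app.log", maxBytes=10_000_000, backupCount=5)
logging.getLogger("api").addHandler(handler)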
Who Should Use This
This skill serves backend developers instrumenting application services, SRE teams building observability stacks, platform engineers configuring centralized logging infrastructure, and security teams establishing audit logging requirements.
Why Use It?
Problems It Solves
Unstructured log messages are difficult to search and aggregate across distributed services. Without consistent log levels, critical errors drown in verbose debug output. Missing correlation identifiers make tracing requests across microservices impossible during incident investigation. Logging sensitive data such as passwords or tokens creates compliance violations and security risks.
Core Highlights
Structured formatting outputs log entries as JSON for reliable parsing by aggregation tools. Level discipline reserves each severity for specific event categories, reducing noise. Correlation propagation traces requests across services through shared identifiers. Sensitive data filtering prevents credentials and personal information from reaching log storage.
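As an illustration of level discipline, one widely used mapping of severities to event categories (the messages are hypothetical):
import logging

logger = logging.getLogger("api")
logger.debug("Cache lookup for key user:42")              # diagnostic detail, development only
logger.info("Server started on port 8080")                # normal lifecycle events
logger.warning("Retrying upstream call, attempt 2 of 3")  # degraded but self-recovering
logger.error("Payment provider unreachable")              # a failed operation needing attention
logger.critical("Database connection pool exhausted")     # service-threatening condition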
How to Use It?
Basic Usage
import logging
import json
from datetime import datetime, timezone

class StructuredFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""

    def format(self, record):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "line": record.lineno,
        }
        # Correlation IDs are attached by a filter or middleware
        # (see Real-World Examples below).
        if hasattr(record, "correlation_id"):
            entry["correlation_id"] = record.correlation_id
        if record.exc_info:
            entry["exception"] = self.formatException(record.exc_info)
        return json.dumps(entry)

def setup_logger(name, level=logging.INFO):
    logger = logging.getLogger(name)
    logger.setLevel(level)
    handler = logging.StreamHandler()
    handler.setFormatter(StructuredFormatter())
    logger.addHandler(handler)
    return logger

logger = setup_logger("api")
logger.info("Server started on port 8080")
# Emits one JSON line, e.g.:
# {"timestamp": "...", "level": "INFO", "logger": "api", "message": "Server started on port 8080", ...}
Real-World Examples
import logging
import re
import time
import uuid
from contextvars import ContextVar

# Holds the current request ID for the active task; contextvars keeps it
# isolated per request even under async concurrency.
request_id_var = ContextVar("request_id", default=None)

SENSITIVE_PATTERNS = [
    (re.compile(r'"password"\s*:\s*"[^"]+"'), '"password": "***"'),
    (re.compile(r'"token"\s*:\s*"[^"]+"'), '"token": "***"'),
    (re.compile(r'"ssn"\s*:\s*"[^"]+"'), '"ssn": "***"'),
]

class SanitizingFilter(logging.Filter):
    """Mask sensitive fields and attach the correlation ID to every record."""

    def filter(self, record):
        msg = record.getMessage()
        for pattern, replacement in SENSITIVE_PATTERNS:
            msg = pattern.sub(replacement, msg)
        record.msg = msg
        record.args = ()  # message is already fully formatted
        rid = request_id_var.get()
        if rid:
            record.correlation_id = rid
        return True

class LoggingMiddleware:
    """ASGI middleware that assigns a request ID and logs request timing."""

    def __init__(self, app, logger):
        self.app = app
        self.logger = logger

    async def __call__(self, scope, receive, send):
        # Pass non-HTTP scopes (e.g. ASGI lifespan events) through untouched.
        if scope.get("type") != "http":
            await self.app(scope, receive, send)
            return
        rid = str(uuid.uuid4())
        request_id_var.set(rid)
        start = time.monotonic()
        self.logger.info(f"Request started: {scope.get('path')}")
        await self.app(scope, receive, send)
        duration = time.monotonic() - start
        self.logger.info(f"Request completed in {duration:.3f}s")

logger = logging.getLogger("api")
logger.addFilter(SanitizingFilter())
# Wrap the application so every request gets a correlation ID:
# app = LoggingMiddleware(app, logger)
Advanced Tips
Use context variables to propagate correlation IDs without passing them through every function signature. Configure separate log handlers for different destinations such as console for development and file or remote for production. Set log levels per module to enable verbose debugging in specific components without flooding output from the entire application.
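A minimal sketch of both techniques, assuming the application's loggers live under an "api" hierarchy:
import logging

# Separate destinations: everything to the console, warnings and above to a file.
console = logging.StreamHandler()
file_handler = logging.FileHandler("api.log")
file_handler.setLevel(logging.WARNING)

root = logging.getLogger("api")
root.setLevel(logging.INFO)
root.addHandler(console)
root.addHandler(file_handler)

# Per-module verbosity: "api.billing" records are filtered at the child logger,
# then propagate up to the "api" handlers.
logging.getLogger("api.billing").setLevel(logging.DEBUG)
logging.getLogger("api.billing").debug("Invoice recalculated")  # emitted
logging.getLogger("api.cache").debug("Cache miss")              # suppressed at INFO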
When to Use It?
Use Cases
Use Logging Best Practices when instrumenting new services with structured log output, when migrating from unstructured print statements to a proper logging framework, when building request tracing across microservice architectures, or when establishing compliance-aware logging that filters sensitive data.
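For the print-statement migration, the change is largely mechanical; a before-and-after sketch reusing the setup_logger helper from Basic Usage:
# Before: unstructured, no level, no timestamp, opaque to aggregation tools
print("user 42 logged in")

# After: leveled, timestamped JSON via the StructuredFormatter defined above
logger = setup_logger("auth")
logger.info("User logged in user_id=42")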
Related Topics
Distributed tracing with OpenTelemetry, log aggregation with the ELK stack, metrics and observability platforms, structured logging libraries, and audit trail design complement logging best practices.
Important Notes
Requirements
Logging framework appropriate to the application language and runtime. Storage or aggregation service for production log collection. Team agreement on log level definitions and structured field conventions.
Usage Recommendations
Do: use structured JSON formatting for all production log output to enable automated parsing; include correlation identifiers in every log entry to support request tracing across services; and review log output periodically to verify that sensitive data is properly masked.
Don't: log at debug level in production, which generates excessive volume and inflates storage costs; don't include raw request bodies containing user credentials or personal data in log entries; and don't rely on log output as the sole monitoring mechanism without complementary metrics and alerts.
Limitations
Structured logging adds serialization overhead compared to plain text output. Log aggregation systems require infrastructure investment for storage and search capabilities. Sensitive data filtering relies on pattern matching that may miss novel data formats.