Sentry

Automate Sentry error tracking and integrate real-time monitoring into your software development lifecycle

Sentry is a community skill for integrating error tracking and performance monitoring via the Sentry platform. It covers SDK setup, error capture, breadcrumb logging, release tracking, alert configuration, and data scrubbing for production applications deployed at scale.

What Is This?

Overview

Sentry provides integration patterns for the Sentry error monitoring platform. It covers SDK initialization and configuration, automatic and manual error capture, breadcrumb trails for debugging context, performance transaction tracing, release and deployment tracking, and alert rule configuration. The skill enables teams to detect, diagnose, and resolve production issues faster by providing detailed error context that log files alone cannot offer.

Who Should Use This

This skill serves developers deploying applications to production who need real-time error visibility, operations teams building monitoring dashboards for application health, and engineering managers tracking error rates and resolution times across services and releases.

Why Use It?

Problems It Solves

Production errors go unnoticed when the only monitoring is log file review. Without stack traces and request context, reproducing reported bugs requires extensive manual investigation. Error volume spikes after deployments are difficult to correlate with specific code changes. Distributed systems produce errors across multiple services that must be correlated to identify root causes. Manual error triage from log files does not scale as application complexity and traffic volume increase.

Core Highlights

Automatic exception capture sends unhandled errors with full stack traces and context to the Sentry dashboard. Breadcrumbs record events leading up to an error for debugging the sequence of actions that triggered the failure. Release tracking correlates errors with specific deployment versions to identify regression sources. Performance monitoring traces request latency through service boundaries for bottleneck identification.

How to Use It?

Basic Usage

import sentry_sdk
from sentry_sdk.integrations.fastapi import FastApiIntegration

sentry_sdk.init(
    dsn="https://key@sentry.io/project",
    traces_sample_rate=0.2,
    profiles_sample_rate=0.1,
    release="myapp@1.2.0",
    environment="production",
    integrations=[FastApiIntegration()]
)

def process_payment(order_id: str, amount: float):
    # Wrap the operation in a transaction so its latency is traced.
    with sentry_sdk.start_transaction(op="task", name="process_payment") as txn:
        # Breadcrumbs record the trail of events leading up to any error.
        sentry_sdk.add_breadcrumb(
            category="payment", message=f"Processing order {order_id}",
            level="info"
        )
        try:
            result = charge_card(order_id, amount)  # application-specific charge call
            txn.set_tag("payment.status", "success")
            return result
        except Exception as e:
            # Attach structured context before capturing, then re-raise.
            sentry_sdk.set_context("payment", {
                "order_id": order_id, "amount": amount
            })
            sentry_sdk.capture_exception(e)
            raise

Real-World Examples

class SentryErrorHandler:
    def __init__(self, dsn: str, app_name: str, version: str):
        sentry_sdk.init(
            dsn=dsn, release=f"{app_name}@{version}",
            traces_sample_rate=0.1,
            before_send=self._filter_events  # drop noise before transmission
        )

    def _filter_events(self, event, hint):
        # Returning None discards the event; filter expected shutdown signals.
        if "exc_info" in hint:
            exc_type = hint["exc_info"][0]
            if exc_type in (KeyboardInterrupt, SystemExit):
                return None
        return event

    def capture_with_context(self, error: Exception, context: dict):
        # push_scope confines the extra data to this single capture.
        with sentry_sdk.push_scope() as scope:
            for key, value in context.items():
                scope.set_extra(key, value)
            sentry_sdk.capture_exception(error)

    def log_message(self, message: str, level: str = "info"):
        sentry_sdk.capture_message(message, level=level)

handler = SentryErrorHandler(
    dsn="https://key@sentry.io/project",
    app_name="order-service", version="2.1.0"
)

Advanced Tips

Use before_send hooks to scrub sensitive data from error events before they leave the application. Configure alert rules that notify the team when new error types appear or existing error rates spike above baseline thresholds. Set meaningful release versions that correspond to Git tags for accurate regression tracking.
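A before_send hook receives the event payload as a plain dict before it leaves the process, so scrubbing can be written as an ordinary function. A minimal sketch is below; the field names in SENSITIVE_KEYS are illustrative assumptions, not a Sentry-defined list:

```python
SENSITIVE_KEYS = {"password", "card_number", "ssn", "email"}  # illustrative list

def scrub_event(event: dict, hint: dict) -> dict:
    """Mask sensitive values in the event's data sections before transmission."""
    def scrub(value):
        if isinstance(value, dict):
            return {
                k: "[Filtered]" if k.lower() in SENSITIVE_KEYS else scrub(v)
                for k, v in value.items()
            }
        if isinstance(value, list):
            return [scrub(v) for v in value]
        return value

    for section in ("extra", "contexts", "request"):
        if section in event:
            event[section] = scrub(event[section])
    return event  # returning None instead would drop the event entirely

# Registered at init time:
# sentry_sdk.init(dsn="https://key@sentry.io/project", before_send=scrub_event)
```

Because the hook is pure Python, it can be unit-tested without a Sentry connection, which makes the scrubbing rules easy to verify in CI.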

When to Use It?

Use Cases

Monitor production web applications for unhandled exceptions with automatic alerting. Track error rates across releases to identify deployments that introduce regressions. Profile request performance to find slow endpoints and optimize database queries.

Related Topics

Application performance monitoring, distributed tracing, log aggregation platforms, incident management workflows, and observability engineering practices.

Important Notes

Requirements

A Sentry account with a project DSN for the target application. The sentry-sdk Python package or equivalent SDK for the application language. Network connectivity from the application environment to Sentry servers for event transmission.

Usage Recommendations

Do: configure sample rates for performance tracing to control data volume and costs. Add custom tags and context to errors for faster debugging. Filter known, non-actionable exceptions using before_send hooks to reduce noise in the dashboard.

Don't: set traces_sample_rate to 1.0 in production, as this captures every request and generates excessive data volume. Send personally identifiable information to Sentry without configuring data scrubbing rules. Ignore Sentry alerts that pile up, as alert fatigue leads to missing critical production issues.
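Instead of a flat traces_sample_rate, the SDK also accepts a traces_sampler callable that returns a per-transaction probability. A sketch is below, assuming the sampling_context keys exposed by sentry-sdk (transaction_context, parent_sampled); the route prefixes and rates are illustrative:

```python
def traces_sampler(sampling_context: dict) -> float:
    """Return a sampling probability (0.0 to 1.0) for each transaction."""
    # Honor an upstream service's sampling decision when one exists.
    if sampling_context.get("parent_sampled") is not None:
        return 1.0 if sampling_context["parent_sampled"] else 0.0

    name = sampling_context.get("transaction_context", {}).get("name", "")
    if name.startswith("/health"):
        return 0.0   # never trace health checks
    if name.startswith("/payments"):
        return 0.5   # sample critical flows more heavily
    return 0.1       # default rate for everything else

# sentry_sdk.init(dsn="https://key@sentry.io/project", traces_sampler=traces_sampler)
```

This keeps high-traffic, low-value endpoints from dominating the quota while still tracing the flows where regressions are most costly.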

Limitations

High event volumes on free or lower-tier plans hit rate limits that may drop error events. Client-side SDK initialization adds a small overhead to application startup time. Source map upload is required for meaningful JavaScript stack traces in minified production code. Self-hosted Sentry deployments require additional infrastructure management compared to the cloud-hosted service.