Code Review
Perform code reviews following Sentry engineering practices. Use when reviewing pull requests, examining code changes, or providing feedback on code quality.
Category: design
Source: getsentry/skills
What Is This?
Overview
Code Review is a structured practice for examining code changes before they are merged into a codebase. Following Sentry engineering practices, this skill provides a systematic approach to evaluating pull requests across multiple dimensions including security vulnerabilities, performance implications, test coverage, and overall design quality. It transforms code review from an informal glance into a disciplined engineering process.
The Sentry code review methodology emphasizes identifying problems early, providing actionable feedback, and maintaining consistent standards across a team. Rather than relying on individual reviewer instincts, it applies a repeatable checklist that covers runtime errors, edge cases, security concerns, and architectural decisions. This consistency reduces the chance that critical issues slip through during busy review cycles.
Effective code review is one of the highest-leverage activities in software engineering. A single review session can prevent production incidents, improve long-term maintainability, and transfer knowledge between team members. This skill formalizes that process so every reviewer operates from the same foundation.
Who Should Use This
- Software engineers reviewing pull requests on any project that follows Sentry-style engineering standards
- Engineering leads who want to establish consistent review practices across their teams
- Junior developers learning how to give structured, constructive feedback on code changes
- Security-conscious teams that need a repeatable process for catching vulnerabilities during review
- Open source maintainers managing contributions from multiple external contributors
- DevOps and platform engineers reviewing infrastructure-as-code changes where correctness is critical
Why Use It?
Problems It Solves
- Inconsistent review quality: Without a structured checklist, different reviewers focus on different concerns, leading to uneven code quality across the codebase.
- Missed security issues: Security vulnerabilities are easy to overlook when reviewers focus only on logic correctness. A dedicated security review step catches injection risks, improper authentication, and data exposure.
- Performance regressions: Code that works correctly can still degrade system performance. Structured review prompts engineers to evaluate query efficiency, memory usage, and algorithmic complexity.
- Insufficient test coverage: Reviews without explicit testing criteria often approve changes that lack meaningful tests, creating technical debt and fragile code paths.
Core Highlights
- Covers runtime error detection including null pointer risks and unhandled exceptions
- Includes dedicated security review steps for common vulnerability patterns
- Addresses performance considerations such as database query efficiency and resource usage
- Evaluates test quality, not just test presence
- Encourages design-level feedback on architecture and component boundaries
- Provides a consistent framework applicable across different programming languages
- Supports knowledge transfer between senior and junior engineers
- Reduces time spent in post-merge debugging and hotfix cycles
How to Use It?
Basic Usage
When opening a pull request for review, start by examining the diff for potential runtime errors. Look for patterns like unguarded attribute access or missing error handling.
```python
# Problematic: no null check before attribute access
def get_user_email(user_id):
    user = User.objects.get(id=user_id)  # raises User.DoesNotExist if no match
    return user.profile.email  # profile could be None

# Improved: explicit guard
def get_user_email(user_id):
    user = User.objects.get(id=user_id)
    if user.profile is None:
        return None
    return user.profile.email
```
Specific Scenarios
Reviewing a database query change: Check whether new queries use appropriate indexes, avoid N+1 patterns, and include query timeouts where relevant. Use EXPLAIN ANALYZE output as supporting evidence in your review comment.
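The N+1 pattern can be illustrated without a real database. The sketch below uses a simple counter to stand in for database round-trips; `fetch_profile` and `fetch_profiles_bulk` are hypothetical helpers invented for this example, not Django or Sentry APIs:

```python
# Simulated query counter: each "query" increments it once.
query_count = 0

def fetch_profile(user_id):
    """Hypothetical helper: simulates one database round-trip per call."""
    global query_count
    query_count += 1
    return {"user_id": user_id}

def fetch_profiles_bulk(user_ids):
    """Hypothetical helper: simulates a single bulk query for all ids."""
    global query_count
    query_count += 1
    return {uid: {"user_id": uid} for uid in user_ids}

user_ids = [1, 2, 3, 4, 5]

# N+1: one query per loop iteration (5 queries for 5 users)
query_count = 0
profiles = [fetch_profile(uid) for uid in user_ids]
n_plus_one_queries = query_count

# Bulk: one query before the loop, then in-memory lookups
query_count = 0
by_id = fetch_profiles_bulk(user_ids)
profiles = [by_id[uid] for uid in user_ids]
bulk_queries = query_count
```

In a review comment, the same observation is usually phrased concretely: "this loop issues one query per user; a single bulk fetch before the loop keeps the query count constant as the dataset grows."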
Reviewing authentication logic: Verify that permission checks occur before data access, tokens are validated properly, and sensitive values are never logged.
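The "permission check before data access" ordering can be sketched in plain Python. Everything here (`PermissionDenied`, `can_view`, `load_record`) is an illustrative stand-in, not a real framework API:

```python
# Sketch: the permission check runs before any data is loaded,
# so a denied caller never triggers a fetch of sensitive data.
class PermissionDenied(Exception):
    pass

def can_view(user, record_id):
    """Illustrative check against a per-user allowlist."""
    return record_id in user.get("allowed_records", set())

def load_record(record_id):
    """Illustrative data access; in real code this might hit a database."""
    return {"id": record_id, "secret": "internal"}

def get_record(user, record_id):
    if not can_view(user, record_id):   # check first...
        raise PermissionDenied(record_id)
    return load_record(record_id)       # ...then fetch
```

A reviewer applying this step looks for the inverse ordering (fetch, then check), which can leak existence information or sensitive fields through error paths and logs.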
Real-World Examples
A reviewer notices a new API endpoint returns full user objects including internal fields. The structured review process flags this as a data exposure risk and requests a serializer that explicitly allowlists returned fields.
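The allowlist fix requested in that review can be sketched as follows. The field names and `serialize_user` helper are hypothetical; the point is that only explicitly named fields ever leave the API:

```python
# Explicit allowlist: internal fields are dropped by construction,
# so adding a new column to the model cannot accidentally expose it.
PUBLIC_USER_FIELDS = ("id", "username", "avatar_url")  # illustrative field set

def serialize_user(user: dict) -> dict:
    """Return only allowlisted fields, silently dropping everything else."""
    return {field: user[field] for field in PUBLIC_USER_FIELDS if field in user}

user = {
    "id": 7,
    "username": "ada",
    "avatar_url": "/a/7.png",
    "password_hash": "x",       # internal: must never be returned
    "internal_notes": "vip",    # internal: must never be returned
}
public = serialize_user(user)
```

An allowlist fails safe: forgetting to list a new field hides it, whereas a denylist that forgets a field exposes it.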
A performance review step catches a loop that executes one database query per iteration. The reviewer requests a bulk fetch using select_related or prefetch_related before the loop begins.
When to Use It?
Use Cases
- Reviewing feature pull requests before merging to the main branch
- Auditing hotfix changes where speed pressure can cause shortcuts
- Evaluating third-party library upgrades for security and compatibility
- Reviewing infrastructure configuration changes such as Terraform or Kubernetes manifests
- Assessing refactoring PRs to confirm behavior is preserved
- Onboarding new engineers by walking through review standards on their first submissions
- Conducting periodic retrospective reviews of merged code to identify systemic patterns
Important Notes
Requirements
- Reviewers should have sufficient context about the system being changed to evaluate design decisions accurately
- A working local environment or CI pipeline output should be available to verify test results
- Teams should agree on review standards before applying this skill to avoid conflicting feedback
- Access to the pull request diff and associated issue or ticket context is necessary for complete review