Code Review Pro
Automate professional-grade code reviews with structured analysis and actionable feedback
Code Review Pro is an AI skill that automates and enhances the code review process through structured analysis, best practice enforcement, and actionable feedback generation. It covers review checklists, automated comment generation, diff analysis, severity classification, and team workflow integration that enable faster and more consistent pull request reviews.
What Is This?
Overview
Code Review Pro provides structured approaches to conducting thorough code reviews. It handles analyzing pull request diffs for common anti-patterns and potential bugs, generating review comments with specific improvement suggestions and code examples, and classifying findings by severity to prioritize critical issues over style preferences. It also enforces team coding standards through configurable rule sets, tracks review metrics such as turnaround time and comment resolution rates, and integrates with version control platforms to post automated review feedback.
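One capability from the list above, review-metrics tracking, reduces to simple arithmetic over timestamps; the record layout and helper name below are illustrative assumptions, not part of the skill's API:

```python
from datetime import datetime

def avg_turnaround_hours(reviews):
    """Average hours between PR open and first review comment.

    Each record is an (opened_at, first_review_at) datetime pair;
    this field layout is an assumption for illustration.
    """
    if not reviews:
        return 0.0
    total = sum(
        (first - opened).total_seconds() / 3600
        for opened, first in reviews
    )
    return total / len(reviews)

reviews = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 13, 0)),   # 4 h
    (datetime(2024, 1, 2, 10, 0), datetime(2024, 1, 2, 12, 0)),  # 2 h
]
print(avg_turnaround_hours(reviews))  # 3.0
```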
Who Should Use This
This skill serves development teams seeking to standardize their review process, tech leads responsible for code quality across multiple repositories, junior developers learning from structured review feedback, and organizations scaling code review practices across distributed teams.
Why Use It?
Problems It Solves
Manual reviews vary in quality depending on reviewer availability and expertise. Common issues like missing error handling or security vulnerabilities slip through when reviewers focus on logic. Review bottlenecks slow down merge velocity when senior developers become gatekeepers. Inconsistent feedback across reviewers confuses authors about actual team standards.
Core Highlights
Structured analysis checks diffs against configurable categories including security, performance, and maintainability. Severity classification distinguishes blocking issues from suggestions and nitpicks. Actionable comments include fix examples rather than vague improvement requests. Platform integration posts review results directly to pull request interfaces.
How to Use It?
Basic Usage
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"
    WARNING = "warning"
    SUGGESTION = "suggestion"

@dataclass
class ReviewComment:
    file: str
    line: int
    severity: Severity
    message: str
    suggestion: str = ""

@dataclass
class ReviewResult:
    comments: list = field(default_factory=list)
    approved: bool = True

    def add(self, comment):
        self.comments.append(comment)
        if comment.severity == Severity.CRITICAL:
            self.approved = False

    def summary(self):
        by_sev = {}
        for c in self.comments:
            by_sev.setdefault(c.severity.value, []).append(c)
        lines = []
        for sev in ["critical", "warning", "suggestion"]:
            items = by_sev.get(sev, [])
            if items:
                lines.append(f"{sev}: {len(items)}")
        status = "approved" if self.approved else "changes requested"
        lines.append(f"Status: {status}")
        return "\n".join(lines)

Real-World Examples
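Real pull-request diffs arrive as unified diffs, where an added line's number comes from the @@ hunk headers rather than its index in the line list. A minimal sketch of that mapping (the helper name is hypothetical):

```python
import re

# Capture the new-file start line from a hunk header like "@@ -1,2 +1,3 @@"
HUNK = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,\d+)? @@")

def added_lines(diff_text):
    """Return (line_number, content) for each added line in a unified diff."""
    lineno = 0
    results = []
    for raw in diff_text.splitlines():
        m = HUNK.match(raw)
        if m:
            lineno = int(m.group(1))  # start line in the new file
            continue
        if raw.startswith("+") and not raw.startswith("+++"):
            results.append((lineno, raw[1:]))
            lineno += 1
        elif not raw.startswith("-"):
            lineno += 1  # context lines advance the new-file counter

    return results

diff = """@@ -1,2 +1,3 @@
 unchanged
+password = 'admin123'
 also unchanged"""
print(added_lines(diff))  # [(2, "password = 'admin123'")]
```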
import re

class SecurityReviewer:
    PATTERNS = [
        (re.compile(r'eval\('), "Avoid eval() which enables code injection"),
        (re.compile(r'password\s*=\s*["\']'),
         "Hardcoded passwords detected; use environment variables"),
        (re.compile(r'SELECT.*\+.*input'),
         "Potential SQL injection; use parameterized queries"),
    ]

    def review_diff(self, diff_lines):
        result = ReviewResult()
        for i, line in enumerate(diff_lines):
            if not line.startswith("+"):
                continue
            content = line[1:]
            for pattern, msg in self.PATTERNS:
                if pattern.search(content):
                    result.add(ReviewComment(
                        file="", line=i + 1,
                        severity=Severity.CRITICAL,
                        message=msg,
                        suggestion=self.get_fix(pattern, content)
                    ))
        return result

    def get_fix(self, pattern, content):
        fixes = {
            "eval": 'Use ast.literal_eval() for safe parsing',
            "password": 'password = os.environ["DB_PASSWORD"]',
            "SELECT": 'cursor.execute("SELECT * FROM t WHERE id = %s", (input_val,))',
        }
        for key, fix in fixes.items():
            if key in pattern.pattern:
                return fix
        return ""

reviewer = SecurityReviewer()
diff = ["+password = 'admin123'", "+name = input()"]
result = reviewer.review_diff(diff)
print(result.summary())  # prints "critical: 1" then "Status: changes requested"

Advanced Tips
Layer multiple specialized reviewers for security, performance, and style to produce comprehensive feedback. Weight severity based on the file path so that changes to core modules receive stricter scrutiny than test utilities. Cache analysis results for unchanged files to speed up reviews on large pull requests.
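The path-based weighting mentioned above can be as simple as a glob table consulted before a finding is recorded; the rule table and function name here are illustrative:

```python
from fnmatch import fnmatch

# Most specific pattern first; levels mirror the Severity enum values.
PATH_RULES = [
    ("src/core/*", 0),  # core modules: keep severity as reported
    ("tests/*", 2),     # test utilities: downgrade by two levels
    ("*", 1),           # everything else: downgrade by one level
]
LEVELS = ["critical", "warning", "suggestion"]

def weighted_severity(path, severity):
    """Downgrade a finding's severity based on which file it touches."""
    for pattern, downgrade in PATH_RULES:
        if fnmatch(path, pattern):
            idx = min(LEVELS.index(severity) + downgrade, len(LEVELS) - 1)
            return LEVELS[idx]
    return severity

print(weighted_severity("src/core/auth.py", "critical"))  # critical
print(weighted_severity("tests/helpers.py", "critical"))  # suggestion
```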
When to Use It?
Use Cases
Use Code Review Pro when automating first-pass reviews on pull requests before human reviewers, when enforcing security scanning as part of the merge process, when training junior developers through structured review feedback, or when tracking review quality metrics across teams.
Related Topics
Static analysis with ESLint and pylint, GitHub Actions for CI review automation, security scanning with Semgrep, code quality metrics, and review workflow optimization complement code review automation.
Important Notes
Requirements
Access to pull request diff data through version control platform APIs. Configurable rule sets for the target language and framework. Integration with the team notification workflow for review delivery.
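For the diff-access requirement, GitHub's REST API can return a pull request as a raw unified diff when the request asks for the diff media type. The sketch below only builds the request (no network call); the owner, repo, and token values are placeholders:

```python
import urllib.request

def build_diff_request(owner, repo, number, token):
    """Build a GET request for a pull request's unified diff.

    Uses GitHub's documented diff media type; swap the host and
    headers for other version control platforms.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}"
    return urllib.request.Request(url, headers={
        "Accept": "application/vnd.github.v3.diff",
        "Authorization": f"Bearer {token}",
    })

req = build_diff_request("octocat", "hello-world", 42, "TOKEN")
print(req.full_url)
# The diff text would come from urllib.request.urlopen(req).read()
```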
Usage Recommendations
Do: classify findings by severity so authors address critical issues before style suggestions. Include code examples in suggestions to make feedback immediately actionable. Update review rules as the team encounters new patterns or changes conventions.
Don't: block merges on style-only findings, which slows velocity. Don't replace human review entirely, as context-dependent decisions require human judgment. Don't generate vague comments like "improve this" without specific guidance on what to change.
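The first "don't" above translates into a merge gate that blocks only on critical findings; a minimal sketch (the function name is my own):

```python
def should_block_merge(findings):
    """Block the merge only when at least one finding is critical.

    `findings` is a list of severity strings, e.g. drawn from a
    review result; warnings and suggestions are surfaced to the
    author but never gate the merge.
    """
    return any(sev == "critical" for sev in findings)

print(should_block_merge(["suggestion", "warning"]))  # False
print(should_block_merge(["warning", "critical"]))    # True
```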
Limitations
Pattern-based detection cannot understand business logic intent behind code changes. Automated reviews may produce false positives that require tuning to reduce noise. Complex architectural concerns require human reviewers who understand the broader system design.