Code Review Expert
Automate and integrate Code Review Expert for advanced code quality checks
Category: development | Source: sanyuan0704/sanyuan-skills

Code Review Expert is an AI skill that provides automated, thorough code review with actionable feedback across quality, security, performance, and maintainability dimensions. It covers static analysis integration, pattern detection, security vulnerability identification, performance anti-pattern recognition, and structured feedback generation that elevate code quality across development teams.
What Is This?
Overview
Code Review Expert delivers comprehensive code analysis that supplements human reviewers with systematic, consistent checks. It assesses code quality (naming conventions, complexity metrics, code duplication), scans for security vulnerabilities (injection risks, authentication flaws, data exposure), analyzes performance (inefficient algorithms, unnecessary database queries, memory issues), checks architecture compliance against team coding standards and design patterns, evaluates test coverage for the reviewed changes, and produces structured feedback with severity levels and specific fix suggestions.
Who Should Use This
This skill serves development teams that want to improve code review consistency, senior engineers reviewing large pull requests across multiple files, team leads establishing automated quality gates in CI/CD pipelines, and developers who want pre-review checks before requesting human review.
Why Use It?
Problems It Solves
Human code reviews are inconsistent because different reviewers focus on different aspects. Security vulnerabilities and performance issues require specialized knowledge that not every reviewer has. Large pull requests receive superficial reviews because thorough review of hundreds of changed lines is exhausting. Review feedback quality varies between teams without shared standards.
Core Highlights
The skill applies consistent review criteria across all code changes regardless of reviewer. Security checks cover OWASP Top 10 vulnerability patterns in the reviewed language. Performance analysis identifies algorithmic and resource usage issues. Every finding includes a severity rating, explanation, and concrete fix suggestion.
How to Use It?
Basic Usage
class CodeReviewEngine:
    def __init__(self, rules_config):
        # load_rules parses the configured rule set (implementation omitted here)
        self.rules = self.load_rules(rules_config)
        self.findings = []

    def review_file(self, filepath, content, diff_only=True):
        # Run every configured rule against the file and record structured
        # findings; diff_only is reserved for limiting checks to changed lines.
        for rule in self.rules:
            matches = rule.check(content)
            for match in matches:
                self.findings.append({
                    "file": filepath,
                    "line": match.line,
                    "severity": rule.severity,
                    "category": rule.category,
                    "message": rule.format_message(match),
                    "suggestion": rule.suggest_fix(match)
                })
        return self.findings

    def generate_summary(self):
        # Aggregate findings by severity; any critical finding blocks the merge.
        by_severity = {}
        for f in self.findings:
            sev = f["severity"]
            by_severity[sev] = by_severity.get(sev, 0) + 1
        return {
            "total_findings": len(self.findings),
            "by_severity": by_severity,
            "blocking": any(f["severity"] == "critical" for f in self.findings)
        }
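The engine above leaves the rule interface and rule loading unspecified. As a minimal runnable sketch, the following assumes rules expose `check`, `format_message`, and `suggest_fix`, and uses a hypothetical regex-based `RegexRule` plus a simplified engine that takes a rule list directly instead of a config file:

```python
import re
from dataclasses import dataclass

@dataclass
class Match:
    line: int   # 1-based line number where the rule fired
    text: str   # the offending source line

class RegexRule:
    """Hypothetical regex-based rule; the skill's real rule format is not specified."""
    def __init__(self, pattern, severity, category, message, suggestion):
        self.pattern = re.compile(pattern)
        self.severity = severity
        self.category = category
        self.message_text = message
        self.suggestion_text = suggestion

    def check(self, content):
        # Flag every line that matches the pattern.
        return [Match(i, line)
                for i, line in enumerate(content.splitlines(), 1)
                if self.pattern.search(line)]

    def format_message(self, match):
        return self.message_text

    def suggest_fix(self, match):
        return self.suggestion_text

class CodeReviewEngine:
    # Simplified: takes rules directly rather than loading them from config.
    def __init__(self, rules):
        self.rules = rules
        self.findings = []

    def review_file(self, filepath, content):
        for rule in self.rules:
            for match in rule.check(content):
                self.findings.append({
                    "file": filepath, "line": match.line,
                    "severity": rule.severity, "category": rule.category,
                    "message": rule.format_message(match),
                    "suggestion": rule.suggest_fix(match),
                })
        return self.findings

sql_rule = RegexRule(
    pattern=r'f"SELECT .*\{',
    severity="critical", category="security",
    message="Possible SQL injection: user input interpolated into query string.",
    suggestion="Use a parameterized query.",
)
engine = CodeReviewEngine([sql_rule])
findings = engine.review_file(
    "src/api/users.py",
    'rows = db.execute(f"SELECT * FROM users WHERE name = \'{query}\'")',
)
```

Here `findings` contains a single critical security finding pointing at line 1 of the reviewed snippet.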
Real-World Examples
Code Review Report
PR #247: Add user search endpoint
Files reviewed: 5 | Lines changed: 342
[CRITICAL] src/api/users.py:67
SQL Injection: User input concatenated directly into query string.
Found: f"SELECT * FROM users WHERE name = '{query}'"
Fix: Use parameterized query:
cursor.execute("SELECT * FROM users WHERE name = %s", (query,))
[WARNING] src/api/users.py:89
N+1 Query: Database query inside loop will execute once per user.
Found: for user in users: get_profile(user.id)
Fix: Batch load profiles with single query using IN clause.
[SUGGESTION] src/api/users.py:45
Missing pagination: Endpoint returns all matching users without limit.
Fix: Add limit/offset parameters with a default maximum of 100.
[PASS] Security: No hardcoded credentials detected.
[PASS] Testing: New endpoint has corresponding test cases.
Summary: 1 critical (must fix), 1 warning, 1 suggestion
Recommendation: Request changes (critical finding)
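The critical and warning findings above translate into concrete fixes. The following self-contained sketch uses the standard-library sqlite3 module (which takes ? placeholders; the report's %s style belongs to other drivers such as psycopg2) to show both the parameterized query and the batched IN-clause load:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE profiles (user_id INTEGER, bio TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "grace")])
conn.executemany("INSERT INTO profiles VALUES (?, ?)",
                 [(1, "bio one"), (2, "bio two")])

# Fix for the critical finding: bind user input as a parameter instead of
# interpolating it into the SQL string.
name = "ada"
users = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

# Fix for the N+1 warning: load all profiles with one IN-clause query rather
# than one query per user inside a loop. Only the placeholders are built
# dynamically; user data still goes through parameter binding.
ids = [u[0] for u in users]
placeholders = ", ".join("?" for _ in ids)
profiles = conn.execute(
    f"SELECT user_id, bio FROM profiles WHERE user_id IN ({placeholders})", ids
).fetchall()
```

With this seed data, `users` holds the single matching row and `profiles` is fetched in one round trip regardless of how many users matched.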
Advanced Tips
Configure review rules per repository to match team conventions and suppress false positives. Integrate automated review as a CI check that runs before human reviewers are assigned. Track review finding trends over time to identify recurring issues that training could address.
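One way to wire automated review into CI (an illustrative sketch, not part of the skill itself) is a small gate function that turns the summary shape produced by generate_summary above into a pass/fail exit code:

```python
# Hypothetical CI gate: translate a review summary (the dict shape returned by
# CodeReviewEngine.generate_summary) into an exit code for the pipeline.
def ci_gate(summary):
    critical = summary["by_severity"].get("critical", 0)
    if summary["blocking"]:
        print(f"Review gate failed: {critical} critical finding(s). Request changes.")
        return 1
    print(f"Review gate passed: {summary['total_findings']} non-blocking finding(s).")
    return 0
```

In a pipeline, a wrapper script would call `sys.exit(ci_gate(engine.generate_summary()))` so that a critical finding blocks the merge before human reviewers are assigned.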
When to Use It?
Use Cases
Use Code Review Expert when reviewing pull requests that need thorough security and performance analysis, when establishing automated quality gates in CI/CD pipelines, when onboarding new team members who need consistent feedback on their code, or when auditing existing code for quality and security issues.
Related Topics
Static analysis tools like SonarQube and CodeQL, linting frameworks, SAST security scanning, pull request automation, and coding standards documentation all complement automated code review.
Important Notes
Requirements
Access to the code changes being reviewed, either as a diff or full file content. A configured rule set that matches the project's language, framework, and team conventions. Integration with the version control platform for inline comment delivery.
Usage Recommendations
Do: configure severity levels that match your team's blocking criteria so critical findings prevent merges; include positive feedback for well-written code alongside issue findings; and update rules regularly to cover new vulnerability patterns and framework-specific best practices.
Don't: rely solely on automated review for complex architectural decisions that need human judgment; set overly strict rules that generate excessive false positives and slow down development; or ignore automated findings just because they come from a tool rather than a person.
Limitations
Automated review excels at pattern matching but cannot evaluate business logic correctness or architectural appropriateness. False positives require rule tuning specific to each project's conventions. The tool cannot assess code readability and naming quality as effectively as an experienced human reviewer.