Gepetto

Automated code review and analysis for streamlined development workflows

Category: productivity | Source: softaworks/agent-toolkit

Gepetto is an AI skill that provides automated code review and analysis for pull requests, identifying potential bugs, security vulnerabilities, performance issues, and style violations. It covers static analysis integration, review comment generation, severity classification, fix suggestions, and continuous code quality monitoring across development workflows.

What Is This?

Overview

Gepetto offers automated code review capabilities that supplement human reviewers in pull request workflows. It handles scanning diffs for common bug patterns and logical errors, detecting security vulnerabilities such as injection flaws, identifying performance bottlenecks in changed code, checking adherence to project coding standards, generating inline review comments with suggested fixes, and tracking code quality trends across pull requests.
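Inline review comments land on specific lines of the diff. As a minimal sketch of what comment generation involves, the helper below maps a finding to the payload shape of GitHub's pull request review comment endpoint; the `to_review_comment` helper and the finding dict layout are illustrative, not part of Gepetto's API.

```python
# Sketch: turning an analysis finding into a GitHub pull request
# review comment payload. Field names (commit_id, path, line, side,
# body) follow GitHub's REST API for PR review comments; the finding
# dict shape is a hypothetical example.

def to_review_comment(finding, commit_sha):
    """Build the JSON body for POST /repos/{owner}/{repo}/pulls/{n}/comments."""
    return {
        "commit_id": commit_sha,
        "path": finding["file"],
        "line": finding["line"],  # line number in the new version of the file
        "side": "RIGHT",          # comment on the added/changed side of the diff
        "body": (
            f"**{finding['severity'].upper()}** ({finding['category']}): "
            f"{finding['message']}\n\nSuggested fix: {finding['suggestion']}"
        ),
    }

comment = to_review_comment(
    {
        "file": "app/db.py",
        "line": 42,
        "severity": "critical",
        "category": "security",
        "message": "Potential SQL injection via f-string in execute call",
        "suggestion": "Use parameterized queries",
    },
    commit_sha="abc123",
)
```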

Who Should Use This

This skill serves development teams that want faster initial review coverage on pull requests, security-conscious organizations needing automated vulnerability scanning, team leads ensuring consistent code quality, and open source maintainers handling high volumes of contributions.

Why Use It?

Problems It Solves

Human reviewers miss subtle bugs when fatigued or reviewing large pull requests. Security vulnerabilities in less familiar code paths escape detection during manual review. Inconsistent coding standards create style drift across a codebase. Review bottlenecks slow merge velocity when senior developers are overloaded.

Core Highlights

Pattern-based analysis catches common mistakes that human reviewers overlook in large diffs. Security scanning identifies OWASP Top 10 vulnerabilities in changed files. Inline comments with suggested fixes reduce the back-and-forth between reviewer and author. Severity classification helps authors prioritize findings.
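Severity-based prioritization can be as simple as a rank order over findings. The sketch below shows one way to do it; the rank values and finding dict shape are assumed conventions, not part of any Gepetto API.

```python
# Sketch: ordering findings so authors see the highest-impact items
# first. The severity ranks here are an assumed convention.

SEVERITY_RANK = {"critical": 0, "error": 1, "warning": 2, "info": 3}

def prioritize(findings):
    """Sort findings by severity, then by file and line for stable output."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK.get(f["severity"], 99), f["file"], f["line"]),
    )

findings = [
    {"file": "b.py", "line": 3, "severity": "warning"},
    {"file": "a.py", "line": 7, "severity": "critical"},
    {"file": "a.py", "line": 1, "severity": "info"},
]
ordered = prioritize(findings)
# The critical finding surfaces first regardless of file order.
```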

How to Use It?

Basic Usage

name: Automated Code Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Get changed files
        id: diff
        run: |
          echo "files=$(git diff --name-only origin/main...HEAD \
            | grep -E '\.(js|ts|py)$' | tr '\n' ' ')" >> $GITHUB_OUTPUT
      - name: Run static analysis
        run: |
          npx eslint ${{ steps.diff.outputs.files }} \
            --format json --output-file eslint-results.json
      - name: Run security scan
        run: |
          pip install bandit
          bandit -r . -f json -o security-results.json
      - name: Post review comments
        uses: gepetto/review-action@v2
        with:
          eslint-results: eslint-results.json
          security-results: security-results.json
          severity-threshold: medium
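The `severity-threshold` input implies the review step filters findings before posting. Here is a sketch of how such a filter might merge ESLint and Bandit JSON output; the severity mappings and the `collect_findings` helper are assumptions for illustration, not the action's actual logic (the ESLint and Bandit result field names shown are the tools' real JSON formats).

```python
# Sketch: merging static-analysis and security-scan results and
# applying a severity threshold. Severity mappings are assumptions.

ESLINT_SEVERITY = {1: "low", 2: "high"}          # ESLint: 1=warn, 2=error
BANDIT_SEVERITY = {"LOW": "low", "MEDIUM": "medium", "HIGH": "high"}
THRESHOLD_ORDER = ["low", "medium", "high"]

def meets_threshold(severity, threshold):
    return THRESHOLD_ORDER.index(severity) >= THRESHOLD_ORDER.index(threshold)

def collect_findings(eslint_results, bandit_results, threshold="medium"):
    """Flatten both result files into (path, line, severity, message) tuples."""
    findings = []
    for file_result in eslint_results:              # ESLint: one entry per file
        for msg in file_result.get("messages", []):
            sev = ESLINT_SEVERITY.get(msg["severity"], "low")
            if meets_threshold(sev, threshold):
                findings.append(
                    (file_result["filePath"], msg["line"], sev, msg["message"])
                )
    for issue in bandit_results.get("results", []):  # Bandit: flat result list
        sev = BANDIT_SEVERITY.get(issue["issue_severity"], "low")
        if meets_threshold(sev, threshold):
            findings.append(
                (issue["filename"], issue["line_number"], sev, issue["issue_text"])
            )
    return findings
```

In a real step the two inputs would come from `json.load` on `eslint-results.json` and `security-results.json`.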

Real-World Examples

import ast
import re
from dataclasses import dataclass

@dataclass
class ReviewFinding:
    file: str
    line: int
    severity: str
    category: str
    message: str
    suggestion: str

class CodeAnalyzer:
    def __init__(self, diff_files):
        self.files = diff_files
        self.findings = []

    def get_parent_call(self, tree, target):
        # Find a Call node that contains the target node, returning
        # its dumped form so callers can inspect the called name.
        for candidate in ast.walk(tree):
            if isinstance(candidate, ast.Call):
                if any(child is target for child in ast.walk(candidate)):
                    return ast.dump(candidate)
        return None

    def check_sql_injection(self, filepath, source):
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.JoinedStr):
                parent = self.get_parent_call(tree, node)
                if parent and "execute" in str(parent):
                    self.findings.append(ReviewFinding(
                        file=filepath,
                        line=node.lineno,
                        severity="critical",
                        category="security",
                        message="Potential SQL injection via "
                                "f-string in execute call",
                        suggestion="Use parameterized queries: "
                                   "cursor.execute(sql, params)"
                    ))

    def check_error_handling(self, filepath, source):
        pattern = r"except\s*:"
        for i, line in enumerate(source.split("\n"), 1):
            if re.search(pattern, line):
                self.findings.append(ReviewFinding(
                    file=filepath, line=i,
                    severity="warning",
                    category="quality",
                    message="Bare except clause catches all "
                            "exceptions including SystemExit",
                    suggestion="Catch specific exceptions: "
                               "except ValueError as e:"
                ))

    def generate_report(self):
        critical = [f for f in self.findings
                    if f.severity == "critical"]
        warnings = [f for f in self.findings
                    if f.severity == "warning"]
        return {
            "total": len(self.findings),
            "critical": len(critical),
            "warnings": len(warnings),
            "findings": self.findings
        }
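The SQL injection check above can also be written without the parent-lookup pass by matching `execute` calls directly. This self-contained variant is a sketch of the same idea, not Gepetto's implementation:

```python
import ast

# Standalone sketch of the f-string-in-execute check: flag f-strings
# passed directly as arguments to a .execute(...) call.

def find_fstring_execute(source):
    """Return line numbers of execute() calls taking an f-string argument."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "execute"
            and any(isinstance(arg, ast.JoinedStr) for arg in node.args)
        ):
            findings.append(node.lineno)
    return findings

unsafe = 'cursor.execute(f"SELECT * FROM users WHERE id = {uid}")\n'
safe = 'cursor.execute("SELECT * FROM users WHERE id = %s", (uid,))\n'
```

Matching the `Call` node directly avoids a second walk over the tree for every f-string found.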

Advanced Tips

Configure severity thresholds per directory so that security critical paths receive stricter analysis than utility code. Suppress known false positives with inline annotations rather than disabling entire rule categories. Track finding trends over time to identify which areas accumulate the most issues.
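Per-directory thresholds reduce to a longest-prefix lookup over the configured paths. The mapping below is an illustrative example, not a Gepetto configuration format:

```python
# Sketch: resolving a severity threshold per directory, with the most
# specific configured prefix winning. Paths and thresholds are
# illustrative examples.

DIRECTORY_THRESHOLDS = {
    "src/auth/": "low",    # security-critical path: surface everything
    "src/": "medium",
    "scripts/": "high",    # utility code: only the worst findings
}
DEFAULT_THRESHOLD = "medium"

def threshold_for(filepath):
    """Return the threshold of the longest matching path prefix."""
    best = DEFAULT_THRESHOLD
    best_len = -1
    for prefix, threshold in DIRECTORY_THRESHOLDS.items():
        if filepath.startswith(prefix) and len(prefix) > best_len:
            best, best_len = threshold, len(prefix)
    return best
```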

When to Use It?

Use Cases

Use Gepetto when scaling code review across a growing team, when enforcing security standards on every pull request, when onboarding new developers who benefit from automated feedback, or when maintaining open source projects with external contributors.

Related Topics

Static analysis tools like ESLint and Pylint, security scanning with Bandit and Semgrep, code quality platforms such as SonarQube, pull request automation with GitHub Actions, and code review best practices all complement automated review workflows.

Important Notes

Requirements

A CI/CD pipeline that triggers on pull request events. Static analysis tools configured for the project languages. Repository permissions allowing the review bot to post inline comments.

Usage Recommendations

Do: treat automated findings as a first pass that supplements human review rather than replacing it. Configure rules to match your team's coding standards so findings feel relevant. Update analysis rules quarterly to reduce false positives.

Don't: rely on automated review as the sole quality gate, since many design and architecture issues require human judgment. Don't enable every available rule at once, as excessive noise causes developers to ignore all findings. Don't block merges on low-severity style warnings that do not affect correctness.
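The "block on critical, not on style" recommendation can be expressed as a simple CI exit decision. The sketch below is one hypothetical gate, using the blocking severities as an assumption:

```python
# Sketch: a merge gate that fails the check only for findings at or
# above a blocking severity, letting low-severity style notes through.
# The BLOCKING set is an assumed policy choice.

BLOCKING = {"critical", "error"}

def gate(findings):
    """Return a process exit code: 1 if any blocking finding exists, else 0."""
    blocking = [f for f in findings if f["severity"] in BLOCKING]
    for f in blocking:
        print(f"BLOCK {f['file']}:{f['line']}: {f['message']}")
    return 1 if blocking else 0
```

In CI the return value would feed `sys.exit`, so style-only runs pass while critical findings fail the check.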

Limitations

Static analysis detects patterns but cannot understand business logic or architectural intent. False positives are inevitable and require ongoing rule tuning. Runtime behavior issues like race conditions and memory leaks are difficult to catch through diff-based analysis alone.