Vet

Automate and integrate Vet workflows to streamline your development pipeline

Category: productivity
Source: imbue-ai/vet

Vet is a code analysis and review automation tool that examines repositories for quality, security, and best-practice compliance through configurable rule sets and automated checks. It covers static analysis configuration, custom rule definition, CI integration, report generation, and review workflow automation, enabling consistent code quality enforcement.

What Is This?

Overview

Vet provides structured approaches to automated code review and quality analysis. It handles:

- Scanning codebases against configurable quality and security rules
- Running custom analysis checks defined through YAML or JSON configuration files
- Integrating with CI pipelines to enforce standards on every commit
- Generating detailed reports with findings categorized by severity and type
- Tracking quality metrics over time to identify trends and regressions
- Supporting multiple languages through pluggable analyzer backends

Who Should Use This

This skill serves development teams establishing code quality standards, security engineers enforcing vulnerability scanning in CI pipelines, tech leads maintaining consistency across large codebases, and open source maintainers reviewing external contributions.

Why Use It?

Problems It Solves

Manual code review catches only what reviewers remember to check, and its thoroughness varies with reviewer experience. Without automated analysis, security vulnerabilities slip through when reviewers focus on functionality. Inconsistent coding standards across teams create maintenance burdens as developers context-switch between projects. Without tooling that records findings over time, tracking code quality trends requires manual effort.

Core Highlights

Configurable rules let teams define and enforce standards specific to their codebase and requirements. CI integration runs checks automatically on every pull request to catch issues before merge. Severity categorization prioritizes critical findings over style suggestions in review output. Multi-language support analyzes polyglot repositories through a unified configuration interface.

How to Use It?

Basic Usage

Vet reads its rules from a configuration file (the runner in Real-World Examples below assumes .vet.yaml). A typical configuration enables analyzers, excludes generated paths, and controls reporting:

version: 1
analyzers:
  - name: security
    enabled: true
    rules:
      - no-hardcoded-secrets
      - no-sql-injection
      - no-eval-usage
  - name: quality
    enabled: true
    rules:
      - max-function-length: 50
      - max-complexity: 10
      - no-unused-imports
  - name: style
    enabled: true
    severity: warning
    rules:
      - consistent-naming
      - require-docstrings

ignore:
  paths:
    - vendor/
    - generated/
    - "*_test.go"

reporting:
  format: json
  output: vet-report.json
  fail_on: error
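
With fail_on: error, a run that produces error-severity findings fails (presumably via a nonzero exit code), so CI can block merges on errors while the warning-level style rules stay informational.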

Real-World Examples

This wrapper shells out to the vet CLI, parses its JSON report into dataclasses, and prints a severity-grouped summary:

import subprocess
import json
from dataclasses import dataclass, field

@dataclass
class Finding:
    # One rule violation reported by vet.
    file: str
    line: int
    rule: str
    severity: str
    message: str

@dataclass
class VetReport:
    findings: list[Finding] = field(default_factory=list)
    passed: bool = True

class VetRunner:
    def __init__(self, config_path: str = ".vet.yaml"):
        self.config_path = config_path

    def run_analysis(self, target_path: str = ".") -> VetReport:
        # Invoke the CLI and parse the JSON report it writes to stdout.
        result = subprocess.run(
            ["vet", "check", target_path,
             "--config", self.config_path,
             "--format", "json"],
            capture_output=True, text=True
        )
        raw = json.loads(result.stdout)
        findings = [
            Finding(
                file=f["file"], line=f["line"],
                rule=f["rule"], severity=f["severity"],
                message=f["message"]
            )
            for f in raw.get("findings", [])
        ]
        # The report passes only when no error-severity findings exist.
        has_errors = any(
            f.severity == "error" for f in findings
        )
        return VetReport(
            findings=findings, passed=not has_errors
        )

    def format_summary(self, report: VetReport) -> str:
        # Group findings by severity, then show counts plus the
        # first five findings in each group.
        by_severity: dict[str, list[Finding]] = {}
        for f in report.findings:
            by_severity.setdefault(f.severity, []).append(f)
        lines = []
        for sev in ["error", "warning", "info"]:
            items = by_severity.get(sev, [])
            if items:
                lines.append(f"{sev.upper()}: {len(items)}")
                for item in items[:5]:
                    lines.append(
                        f"  {item.file}:{item.line} "
                        f"{item.rule}: {item.message}"
                    )
        return "\n".join(lines)

runner = VetRunner()
report = runner.run_analysis("src/")
print(runner.format_summary(report))
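
In CI, the process exit status is what gates the merge. A minimal sketch reusing the runner above (the fail_on setting may make Vet handle this natively; the wrapper just makes the gate explicit):

import sys

runner = VetRunner()                 # VetRunner defined above
report = runner.run_analysis("src/")
print(runner.format_summary(report))
sys.exit(0 if report.passed else 1)  # nonzero exit fails the CI job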

Advanced Tips

Create baseline files to suppress existing findings when adopting Vet on a legacy codebase, then enforce zero new findings going forward. Write custom rules targeting project-specific patterns such as deprecated API usage or required error-handling conventions. Run analysis in parallel across subdirectories to reduce check times on large monorepos, as in the sketch below.
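
A minimal sketch of that parallel pattern, reusing the VetRunner from the example above (the directory names are illustrative, and whether findings can simply be concatenated depends on how your rules are scoped):

from concurrent.futures import ThreadPoolExecutor

def run_parallel(runner: VetRunner, subdirs: list[str]) -> VetReport:
    # Each call shells out to a separate vet process, so threads
    # suffice; the Python side only waits on subprocesses.
    with ThreadPoolExecutor(max_workers=4) as pool:
        reports = list(pool.map(runner.run_analysis, subdirs))
    merged = VetReport()
    for r in reports:
        merged.findings.extend(r.findings)
        merged.passed = merged.passed and r.passed
    return merged

report = run_parallel(VetRunner(), ["services/", "libs/", "tools/"])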

When to Use It?

Use Cases

Use Vet when enforcing code quality gates in CI pipelines before merging pull requests, when scanning repositories for security vulnerabilities during regular audit cycles, when onboarding new team members who need automated guidance on project standards, or when tracking code quality trends across releases.
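
For the last use case, one lightweight approach is to append each run's severity counts to a history file and compare across releases. A sketch building on the VetReport above (Vet may offer built-in metrics tracking; the file name and layout here are assumptions):

import json
import time
from collections import Counter
from pathlib import Path

def record_trend(report: VetReport, history_path: str = "vet-history.json") -> None:
    # Append a timestamped severity tally so regressions show up
    # as rising counts between releases.
    path = Path(history_path)
    history = json.loads(path.read_text()) if path.exists() else []
    counts = Counter(f.severity for f in report.findings)
    history.append({"timestamp": time.time(), **counts})
    path.write_text(json.dumps(history, indent=2))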

Related Topics

Static analysis with ESLint and pylint, security scanning with Semgrep, CI pipeline configuration, code review automation with GitHub Actions, and custom linter rule development complement Vet workflows.

Important Notes

Requirements

- The Vet CLI installed in the development or CI environment
- A configuration file defining enabled analyzers and rules
- Access to the repository source code for analysis

Usage Recommendations

Do:

- Start with a small set of critical rules and expand gradually to avoid overwhelming developers with findings
- Configure severity levels so errors block merges while warnings remain informational
- Review and update rule configurations as project conventions evolve

Don't:

- Enable every available rule on day one, which generates excessive noise and discourages adoption
- Ignore findings by suppressing entire categories rather than addressing root causes
- Skip running Vet locally before pushing, relying solely on CI to catch issues; the hook sketch below helps here
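
Git hooks can be any executable, so the runner from earlier can enforce local runs as a pre-push hook; a minimal sketch (save as .git/hooks/pre-push and mark it executable; this is standard git behavior, not Vet-specific, and the vet_runner module name is hypothetical):

#!/usr/bin/env python3
"""Pre-push hook: block the push on error-severity findings."""
import sys

from vet_runner import VetRunner  # hypothetical module holding the VetRunner class above

runner = VetRunner()
report = runner.run_analysis(".")
if not report.passed:
    print(runner.format_summary(report))
    sys.exit(1)  # any nonzero exit aborts the push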

Limitations

Static analysis cannot detect all runtime issues, such as race conditions or environment-specific bugs. Custom rule development requires understanding the tool's analyzer API and rule syntax. Large repositories with thousands of files may see slower analysis times without parallel execution.