Second Opinion

Second Opinion automation for getting alternative insights and integrating independent review into your workflow

Category: productivity Source: trailofbits/skills

Second Opinion is a community skill for obtaining alternative analysis and review perspectives on code, architecture decisions, debugging strategies, and technical approaches by providing a structured second evaluation of existing work.

What Is This?

Overview

Second Opinion provides a framework for getting an independent review of technical decisions and code quality. It covers code review analysis, which examines existing code for potential issues, alternative patterns, and improvement opportunities; architecture evaluation, which assesses system design choices against established patterns and scalability requirements; debugging review, which examines diagnostic approaches and suggests alternative investigation paths; approach comparison, which evaluates multiple solution strategies with tradeoff analysis; and risk assessment, which identifies potential failure modes in proposed implementations. The skill helps developers validate their technical decisions.

Who Should Use This

This skill serves developers seeking validation of architectural choices, team leads reviewing critical code changes before deployment, and engineers debugging complex issues who want alternative diagnostic perspectives.

Why Use It?

Problems It Solves

Developers working in the same codebase for extended periods can build up blind spots. Complex debugging sessions can lead to tunnel vision in which obvious solutions are overlooked. Architecture decisions made under time pressure may miss important tradeoffs. Code reviews by a single reviewer may not catch subtle issues that fresh eyes would identify.

Core Highlights

Code analyzer examines implementations for bugs, patterns, and improvements. Architecture evaluator assesses design decisions against best practices. Debug reviewer suggests alternative diagnostic approaches. Tradeoff comparator weighs multiple solution strategies with clear criteria.

How to Use It?

Basic Usage

from dataclasses import dataclass, field
from enum import Enum

class ReviewType(Enum):
    CODE = 'code'
    ARCHITECTURE = 'arch'
    DEBUG = 'debug'
    APPROACH = 'approach'

@dataclass
class ReviewItem:
    category: str
    severity: str
    finding: str
    suggestion: str

@dataclass
class SecondOpinion:
    review_type: ReviewType
    items: list[ReviewItem] = field(default_factory=list)

    def add_finding(self, category: str, severity: str,
                    finding: str, suggestion: str) -> None:
        self.items.append(ReviewItem(category, severity, finding, suggestion))

    def summary(self) -> str:
        high = sum(1 for i in self.items if i.severity == 'high')
        med = sum(1 for i in self.items if i.severity == 'medium')
        return f'{len(self.items)} findings: {high} high, {med} medium'

review = SecondOpinion(ReviewType.CODE)
review.add_finding(
    'error-handling', 'high',
    'Bare except catches all exceptions',
    'Catch specific exception types')
print(review.summary())  # 1 findings: 1 high, 0 medium

Real-World Examples

import ast

class CodeReviewer:
    def __init__(self):
        self.findings = []

    def check_complexity(self, filepath: str) -> list:
        """Flag functions with many branch points as candidates for splitting."""
        with open(filepath) as f:
            tree = ast.parse(f.read())
        results = []
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                branches = sum(
                    1 for n in ast.walk(node)
                    if isinstance(n, (ast.If, ast.For, ast.While,
                                      ast.ExceptHandler)))
                if branches > 8:
                    results.append({
                        'func': node.name,
                        'branches': branches,
                        'note': 'Consider splitting'})
        return results

    def check_imports(self, filepath: str) -> list:
        """Flag modules whose import count suggests too many responsibilities."""
        with open(filepath) as f:
            tree = ast.parse(f.read())
        imports = [n for n in ast.walk(tree)
                   if isinstance(n, (ast.Import, ast.ImportFrom))]
        if len(imports) > 20:
            return [{'count': len(imports),
                     'note': 'Module may have too many responsibilities'}]
        return []

reviewer = CodeReviewer()
issues = reviewer.check_complexity('app.py')
for i in issues:
    print(f"{i['func']}: {i['branches']} branches")
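The same two checks can also run on an in-memory source string rather than a file, which makes them easy to unit-test. A minimal sketch, independent of the `CodeReviewer` class above (the `branch_limit` and `import_limit` thresholds are illustrative, not prescribed by the skill):

```python
import ast

def review_source(source: str, branch_limit: int = 8,
                  import_limit: int = 20) -> list[dict]:
    """Run both structural checks on a source string and aggregate findings."""
    tree = ast.parse(source)
    findings = []
    # Complexity: count branching nodes per function definition.
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(
                1 for n in ast.walk(node)
                if isinstance(n, (ast.If, ast.For, ast.While,
                                  ast.ExceptHandler)))
            if branches > branch_limit:
                findings.append({'check': 'complexity',
                                 'func': node.name,
                                 'branches': branches})
    # Imports: flag modules with many import statements.
    imports = [n for n in ast.walk(tree)
               if isinstance(n, (ast.Import, ast.ImportFrom))]
    if len(imports) > import_limit:
        findings.append({'check': 'imports', 'count': len(imports)})
    return findings

# A synthetic function with ten branch points trips the complexity check.
sample = "def f(x):\n" + "".join(
    f"    if x > {i}: x -= 1\n" for i in range(10))
for finding in review_source(sample):
    print(finding)
```

Operating on strings keeps the checks deterministic and lets a test suite feed in hand-crafted snippets instead of touching the filesystem.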

Advanced Tips

Run automated complexity checks before requesting manual review to focus human attention on architectural concerns rather than mechanical issues. Compare findings across review cycles to identify recurring patterns indicating systemic issues. Document review criteria in a checklist to ensure consistent evaluation across reviewers.
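The cross-cycle comparison can be sketched with a plain counter over finding categories; the category names here are illustrative, not part of the skill:

```python
from collections import Counter

def recurring_categories(cycles: list[list[str]],
                         min_cycles: int = 2) -> list[str]:
    """Return finding categories seen in at least `min_cycles` review
    cycles -- a signal of a systemic issue rather than a one-off."""
    seen = Counter()
    for findings in cycles:
        for category in set(findings):  # count each cycle at most once
            seen[category] += 1
    return sorted(c for c, n in seen.items() if n >= min_cycles)

# Illustrative categories from three hypothetical review cycles.
cycles = [
    ['error-handling', 'naming'],
    ['error-handling', 'complexity'],
    ['complexity', 'error-handling'],
]
print(recurring_categories(cycles))  # ['complexity', 'error-handling']
```

Counting each category once per cycle, rather than per finding, keeps a single noisy review from masquerading as a trend.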

When to Use It?

Use Cases

Review a proposed database schema migration before applying it to production. Evaluate competing approaches for implementing a caching layer with performance and complexity tradeoffs. Get a fresh perspective on a debugging session that has stalled after initial investigation.

Related Topics

Code review, architecture review, debugging strategies, software quality, technical evaluation, peer review, and static analysis.

Important Notes

Requirements

Clear description of the code or decision being reviewed with sufficient context. Access to relevant source files and documentation for thorough evaluation. Understanding of the project constraints and requirements that influenced the original decision.

Usage Recommendations

Do: provide full context including constraints, deadlines, and requirements when requesting a second opinion. Focus review effort on high-risk areas such as security, data integrity, and performance-critical paths. Use structured findings with severity levels to prioritize follow-up actions.

Don't: treat automated analysis as a replacement for human judgment on architectural decisions. Request reviews without providing the reasoning behind original choices. Ignore findings that contradict your initial approach without careful evaluation.
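Severity-based prioritization can be as simple as a rank map. The `high` and `medium` labels below mirror the ones used earlier; `low` is an assumed additional level:

```python
# Lower rank sorts first; unknown severities fall to the end.
SEVERITY_RANK = {'high': 0, 'medium': 1, 'low': 2}

def prioritize(findings: list[dict]) -> list[dict]:
    """Order findings so the riskiest items are handled first."""
    return sorted(findings,
                  key=lambda f: SEVERITY_RANK.get(f['severity'], 99))

findings = [
    {'severity': 'medium', 'finding': 'Deep nesting'},
    {'severity': 'high', 'finding': 'Bare except'},
    {'severity': 'low', 'finding': 'Inconsistent naming'},
]
for f in prioritize(findings):
    print(f"[{f['severity']}] {f['finding']}")
```

Using `dict.get` with a large default rank means a typo in a severity label degrades to "sort last" instead of crashing the triage step.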

Limitations

Automated analysis tools cannot fully assess design intent or business context behind implementation choices. Review quality depends on the completeness of context provided about project constraints and goals. Second opinions may conflict with each other, requiring judgment to reconcile different perspectives on the same problem.