Brainstorm Experiments Existing

Design experiments to test assumptions for an existing product — prototypes, A/B tests, spikes, and other low-effort validation methods. Use when a feature idea or product change needs cheap validation before the team commits to full implementation.

Category: design | Source: phuryn/pm-skills

What Is This?

Overview

When a product already exists in the market, every new feature or change carries risk. Teams often invest weeks of engineering effort into ideas that users ultimately ignore or reject. The brainstorm-experiments-existing skill addresses this problem by helping product teams design low-effort validation methods before committing to full implementation. It provides a structured approach to generating experiment ideas, selecting the right validation method, and defining success criteria upfront.

This skill draws on established product experimentation techniques, including A/B tests, prototypes, fake door tests, and technical spikes. Rather than treating experimentation as an afterthought, it positions validation as a first-class activity in the product development cycle. Teams use it to surface the assumptions hidden inside feature ideas and then design the cheapest possible test to confirm or refute those assumptions.

The skill is particularly valuable for mature products where changes affect existing user behavior. Unlike greenfield development, changes to an existing product must account for current workflows, established expectations, and the risk of regression. Experiments designed with this skill respect those constraints while still enabling rapid learning.

Who Should Use This

  • Product managers who need to validate feature hypotheses before writing detailed specifications
  • UX designers looking to test interaction patterns without building full prototypes
  • Engineering leads evaluating technical approaches through spikes before committing to architecture decisions
  • Growth teams planning A/B tests to optimize conversion funnels or activation flows
  • Startup founders iterating on an existing product with limited engineering resources
  • Product analysts who need to define measurable success criteria before experiments begin

Why Use It?

Problems It Solves

  • Teams build features based on assumptions that were never validated, wasting significant development time on ideas users do not want
  • Experiments get designed after implementation begins, making it difficult to establish clean baselines or control groups
  • Product teams lack a shared vocabulary for experiment types, leading to inconsistent approaches across squads
  • Success criteria are defined vaguely, making it impossible to determine whether an experiment succeeded or failed
  • Technical feasibility is assumed rather than tested, causing late-stage surprises when engineering begins

Core Highlights

  • Generates multiple experiment options for a single assumption, allowing teams to choose based on cost and confidence requirements
  • Covers the full spectrum of experiment types from qualitative user interviews to quantitative A/B tests
  • Helps teams distinguish between assumption types: desirability, feasibility, and viability
  • Produces experiment designs with explicit hypotheses, metrics, and minimum detectable effects (see the sample-size sketch after this list)
  • Supports low-fidelity methods such as Wizard of Oz tests and fake door experiments
  • Integrates with existing product development workflows without requiring process overhaul
  • Encourages parallel experimentation to accelerate learning velocity
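
For the minimum-detectable-effect point above, a standard two-proportion power calculation shows how the pieces fit together. The following is a minimal sketch, assuming a two-sided test; the baseline rate, lift, and significance settings in the example call are illustrative and not taken from the skill itself.

    from statistics import NormalDist

    def sample_size_per_variant(baseline: float, mde: float,
                                alpha: float = 0.05, power: float = 0.8) -> int:
        """Approximate users per variant to detect an absolute lift of `mde`."""
        p1, p2 = baseline, baseline + mde
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
        z_beta = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        n = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
             + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / mde ** 2
        return int(n) + 1

    # e.g. detecting a 2-point absolute lift on a 10% baseline conversion rate
    print(sample_size_per_variant(0.10, 0.02))  # ~3,841 users per variant

Running the calculation before the test starts is what makes the "minimum detectable effect" commitment real: if the required sample exceeds the available traffic, the team learns to pick a cheaper experiment type up front.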

How to Use It?

Basic Usage

A typical experiment design prompt follows this structure:

Feature idea: Add a collaborative editing mode to the dashboard
Core assumption: Users want to edit reports simultaneously with teammates
Experiment type: Fake door test
Success metric: Click-through rate on "Invite collaborator" button
Minimum threshold: 15% CTR within 14 days

This format forces clarity on what is being tested and how success will be measured before any code is written.
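
One way to make that structure operational is to encode it as a small record whose success check runs mechanically rather than being debated after the fact. This is a minimal sketch; the ExperimentSpec class and its field names are illustrative assumptions mirroring the example above, not part of the skill itself.

    from dataclasses import dataclass

    @dataclass
    class ExperimentSpec:
        feature_idea: str
        core_assumption: str
        experiment_type: str
        success_metric: str
        min_threshold: float  # e.g. 0.15 for a 15% CTR
        window_days: int

        def evaluate(self, observed: float) -> str:
            """Compare an observed rate against the pre-committed threshold."""
            verdict = "validated" if observed >= self.min_threshold else "not validated"
            return (f"{self.success_metric}: {observed:.1%} observed vs "
                    f"{self.min_threshold:.1%} required -> assumption {verdict}")

    spec = ExperimentSpec(
        feature_idea="Collaborative editing mode for the dashboard",
        core_assumption="Users want to edit reports simultaneously with teammates",
        experiment_type="Fake door test",
        success_metric="CTR on 'Invite collaborator' button",
        min_threshold=0.15,
        window_days=14,
    )
    print(spec.evaluate(observed=0.18))  # -> assumption validated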

Specific Scenarios

Scenario 1: Testing demand before building. A team wants to know if users will pay for an advanced analytics tier. Rather than building the tier, they add a pricing page with a waitlist form. Sign-up rate becomes the validation metric.

Scenario 2: Technical spike for a risky integration. Engineering is unsure whether a third-party API can handle the required throughput. A time-boxed spike of two days produces a working proof of concept that answers the feasibility question.
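
A spike like this often reduces to a short, time-boxed load probe. Below is a minimal sketch, assuming a hypothetical endpoint URL, worker count, and time budget; a real spike would also respect the provider's rate limits and terms of service.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    ENDPOINT = "https://api.example.com/health"  # hypothetical third-party endpoint
    DURATION_S = 30   # time budget for the probe
    WORKERS = 8       # concurrent callers

    def probe(deadline: float) -> int:
        """Issue sequential requests until the deadline; return the count."""
        completed = 0
        while time.monotonic() < deadline:
            with urlopen(ENDPOINT, timeout=5) as resp:
                resp.read()
            completed += 1
        return completed

    deadline = time.monotonic() + DURATION_S
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        totals = pool.map(probe, [deadline] * WORKERS)

    print(f"Sustained ~{sum(totals) / DURATION_S:.0f} req/s across {WORKERS} workers")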

Real-World Examples

  • A SaaS company tests a new onboarding flow by running a five-user moderated usability session on a Figma prototype before writing a single line of production code.
  • A mobile app team runs a 50/50 A/B test on two notification copy variants, measuring seven-day retention as the primary metric (see the significance-test sketch below).
  • An e-commerce platform uses a fake door to gauge interest in a subscription model before committing to the billing infrastructure.
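
For a retention A/B test like the mobile example above, the read-out is typically a two-proportion z-test. This minimal sketch uses illustrative counts, not data from the example.

    from statistics import NormalDist

    def two_proportion_p_value(retained_a: int, n_a: int,
                               retained_b: int, n_b: int) -> float:
        """Two-sided p-value for a difference in two retention rates."""
        p_a, p_b = retained_a / n_a, retained_b / n_b
        pooled = (retained_a + retained_b) / (n_a + n_b)
        se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_a - p_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # variant A: 420 of 2,000 users retained at day 7; variant B: 480 of 2,000
    print(f"p = {two_proportion_p_value(420, 2000, 480, 2000):.3f}")  # p = 0.023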

Important Notes

Requirements

  • A clearly articulated feature idea or product change with at least one testable assumption
  • Access to users or a user panel for qualitative experiment methods
  • Basic analytics instrumentation on the existing product to support quantitative tests (a minimal instrumentation sketch follows this list)
  • Stakeholder alignment on the experiment timeline and decision criteria
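
The instrumentation requirement can be as light as an append-only event log from which the quantitative metrics are computed. This is a minimal sketch; the event names, fields, and file path are illustrative assumptions.

    import json
    import time

    def track(event: str, user_id: str, **props) -> None:
        """Append one analytics event as a JSON line."""
        record = {"event": event, "user_id": user_id, "ts": time.time(), **props}
        with open("events.jsonl", "a") as log:
            log.write(json.dumps(record) + "\n")

    # a fake door click feeding a CTR metric like the one in the Basic Usage example
    track("invite_collaborator_clicked", user_id="u_123", surface="dashboard")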