Create Technical Spike

create-technical-spike skill for programming & development

What Is This?

Overview

Create Technical Spike generates structured research plans for investigating technical unknowns. It identifies the specific questions that need answers, proposes a research methodology such as prototyping or benchmarking, defines a time box for the investigation, establishes success criteria and deliverables, outlines documentation requirements, and suggests evaluation frameworks for comparing alternatives.

Technical spikes are time-boxed investigations that answer questions like "Can this library handle our scale?" or "Which approach performs better?" The skill structures these investigations so they reach useful conclusions within their time constraints.

Who Should Use This

Technical leads evaluating technologies. Software architects assessing approaches. Development teams facing unknowns. Engineering managers planning risky work. Platform engineers selecting tools. Developers learning new technologies.

Why Use It?

Problems It Solves

Without a structure for investigation, technical unknowns cause analysis paralysis. Spikes provide a framework for systematic research within time limits.

Open-ended research consumes excessive time without clear goals. Defined success criteria and time boxes keep investigations focused and productive.

Investigation findings often go undocumented, causing repeated research. Spike documentation captures learnings for future reference and decision making.

Alternative evaluations often lack objective criteria. Structured spikes define metrics that enable data-driven technology choices.

Core Highlights

Research question formulation. Investigation approach definition. Time box establishment. Success criteria specification. Deliverable documentation requirements. Prototype or proof of concept scope. Evaluation metric definition. Finding presentation format.

How to Use It?

Basic Usage

Describe the technical uncertainty or decision requiring investigation. The skill generates a spike plan with questions, approach, and success criteria.

Create technical spike for evaluating database options
comparing PostgreSQL and MongoDB for our use case

Plan spike investigating GraphQL versus REST
for our API architecture
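One way to picture the plan such a prompt produces is as a small structured record: research questions, approach, time box, success criteria, and deliverables. A minimal Python sketch of that shape; the field names and example values are illustrative assumptions, not the skill's actual output format.

```python
from dataclasses import dataclass, field

@dataclass
class SpikePlan:
    """Illustrative shape of a technical spike plan (all names are assumptions)."""
    title: str
    questions: list[str]          # specific questions the spike must answer
    approach: str                 # prototyping, benchmarking, literature review, ...
    time_box_days: int            # hard limit on investigation time
    success_criteria: list[str]   # measurable conditions for "done"
    deliverables: list[str] = field(default_factory=list)

# Hypothetical plan for the PostgreSQL vs MongoDB prompt above
plan = SpikePlan(
    title="Database evaluation: PostgreSQL vs MongoDB",
    questions=[
        "Which option handles our write volume with acceptable latency?",
        "How complex is the operational setup for each?",
    ],
    approach="Benchmark a representative workload against both databases",
    time_box_days=5,
    success_criteria=["Latency measured at target write volume for both options"],
    deliverables=["Comparison document with benchmark results and a recommendation"],
)
print(plan.time_box_days)
```

Keeping the plan this explicit makes it easy to check, before starting, that every question has a matching success criterion.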

Specific Scenarios

For technology evaluation, emphasize comparison criteria.

Create spike comparing message queue technologies
measuring throughput, latency, and operational complexity

For proof of concept, focus on feasibility.

Plan spike building prototype demonstrating
real-time collaboration feasibility

For performance investigation, define benchmarks.

Create spike measuring query performance impact
of different indexing strategies
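A spike that compares indexing strategies needs a repeatable measurement harness rather than one-off timings. A minimal sketch using only the standard library; the "queries" here are stand-ins for real database calls (a list scan simulating an unindexed lookup, a set membership test simulating an indexed one).

```python
import timeit

def measure(label, fn, repeat=5, number=100):
    """Time fn and report the best of several runs to reduce noise."""
    best = min(timeit.repeat(fn, repeat=repeat, number=number))
    per_call_ms = best / number * 1000
    print(f"{label}: {per_call_ms:.4f} ms/query (best of {repeat})")
    return per_call_ms

# Stand-ins for queries against differently indexed tables
data = list(range(10_000))
indexed = set(data)  # simulates an index lookup

no_index_ms = measure("full scan", lambda: 9_999 in data)
with_index_ms = measure("indexed", lambda: 9_999 in indexed)
```

Taking the best of several repeats is a common benchmarking choice: it filters out interference from other processes while still reflecting the achievable cost per query.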

Real World Examples

A team debates between microservices and modular monolith for a new system. Arguments remain theoretical without data. A technical spike is created with research questions about deployment complexity, development velocity, and operational overhead. The investigation approach involves building a small prototype of each option, with success criteria measuring build time, deployment steps, and monitoring setup. A one-week time box covers both prototypes, and a documentation template captures findings. The spike produces an informed decision based on actual experience rather than opinion.

A platform team evaluates caching solutions to reduce database load. A spike plan defines questions about performance improvement, memory usage, and failure modes, with benchmark scenarios using representative workloads. Success criteria target 50% load reduction within a three-day time box. Metrics measure cache hit rate, latency reduction, and memory consumption, with a deliverable comparing Redis, Memcached, and application-level caching. A data-driven choice follows the systematic evaluation.

A developer considers reactive programming for a new service. A technical spike asks whether reactive patterns improve throughput for the workload, with investigation building small services using both blocking and reactive approaches. Success criteria require handling 10k requests per second within a four-day time box. Comparison metrics include CPU usage, memory footprint, and code complexity. The spike reveals reactive approach benefits while identifying team training needs.

Advanced Tips

Keep spikes strictly time-boxed to prevent scope creep. Focus on answering specific questions, not on building production code. Document negative findings; they are equally valuable. Share spike results broadly for organizational learning, and archive the documentation for future reference.

When to Use It?

Use Cases

Technology evaluation and selection. Architecture decision investigation. Performance optimization research. Third-party library assessment. New paradigm or pattern exploration. Proof of concept development. Risk mitigation before major changes. Team learning and skill development.

Related Topics

Architectural decision records. Risk management practices. Technology radar and evaluation frameworks. Prototyping methodologies. Benchmarking and performance testing. Decision-making frameworks.

Important Notes

Requirements

Clear technical question or uncertainty. Time availability for investigation. Resources for prototyping or testing. Success criteria definition. Commitment to time box limits. Plan for documenting and sharing findings.

Usage Recommendations

Define scope narrowly for focused investigation. Set realistic time boxes, typically one to five days. Establish measurable success criteria upfront. Document findings regardless of outcome. Share learnings with the broader team. Use findings to inform decisions rather than delay them indefinitely. Archive spike artifacts for reference, and when a prototype proves useful, convert it to a proper implementation deliberately rather than promoting it as-is.

Limitations

Spikes should not become extended development efforts. Time boxes must be respected strictly. Spikes cannot replace all technical risk management. Findings may not generalize beyond tested scenarios. Requires discipline to avoid perfectionism. Prototype code is generally not production ready.