Security Threat Model
Automate security threat modeling and integrate risk assessment into development workflows
Security Threat Model is a community skill for building structured threat modeling processes that identify potential attack vectors, assess risk levels, and produce actionable security requirements for software systems and their components.
What Is This?
Overview
Security Threat Model provides frameworks for systematically analyzing software systems to identify security threats before they are exploited. It covers asset identification, threat enumeration using STRIDE methodology, attack tree construction, risk scoring with likelihood and impact assessment, and mitigation planning. The skill produces structured threat documentation that drives security requirements and testing priorities.
Who Should Use This
This skill serves security engineers leading threat modeling exercises, development teams building security into the design phase, and architects evaluating the security posture of new system designs before implementation begins.
Why Use It?
Problems It Solves
Security vulnerabilities discovered after deployment cost significantly more to fix than those caught during design. Without systematic threat analysis, teams address obvious risks while missing subtle attack vectors. Ad-hoc security discussions produce incomplete coverage that varies based on who participates. Prioritizing security work requires structured risk assessment rather than reacting to the latest headline vulnerability.
Core Highlights
STRIDE classification categorizes threats into Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege categories. Data flow diagrams identify trust boundaries where security controls are needed. Risk scoring combines likelihood and impact ratings to prioritize mitigation efforts. Mitigation mapping connects identified threats to specific security controls and implementation tasks.
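To make the trust-boundary idea concrete, here is a minimal sketch of checking which data flows cross a boundary. The zone names and flows below are invented for illustration; they are not part of the skill itself.

```python
# Illustrative sketch: flag data flows that cross a trust boundary.
# The components, zones, and flows here are assumptions for this example.
flows = [
    ("browser", "load balancer", "card number"),
    ("load balancer", "payment API", "card number"),
    ("payment API", "database", "card number"),
]
zones = {
    "browser": "internet",
    "load balancer": "dmz",
    "payment API": "internal",
    "database": "internal",
}

# A flow crosses a trust boundary when its endpoints sit in different zones,
# which is where security controls (authentication, validation) belong.
crossings = [(src, dst, data) for src, dst, data in flows
             if zones[src] != zones[dst]]
for src, dst, data in crossings:
    print(f"Trust boundary crossed: {src} -> {dst} carrying {data}")
```

Each crossing becomes a candidate location for threat enumeration and a security control.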
How to Use It?
Basic Usage
from dataclasses import dataclass, field
from enum import Enum

class StrideCategory(Enum):
    SPOOFING = "spoofing"
    TAMPERING = "tampering"
    REPUDIATION = "repudiation"
    INFO_DISCLOSURE = "information_disclosure"
    DENIAL_OF_SERVICE = "denial_of_service"
    ELEVATION = "elevation_of_privilege"

@dataclass
class Threat:
    title: str
    category: StrideCategory
    description: str
    likelihood: int  # 1-5
    impact: int  # 1-5
    mitigations: list[str] = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

    @property
    def priority(self) -> str:
        score = self.risk_score
        if score >= 15:
            return "critical"
        if score >= 10:
            return "high"
        if score >= 5:
            return "medium"
        return "low"

Real-World Examples
class ThreatModel:
    def __init__(self, system_name: str):
        self.system = system_name
        self.assets: list[str] = []
        self.threats: list[Threat] = []
        self.trust_boundaries: list[str] = []

    def add_threat(self, threat: Threat):
        self.threats.append(threat)

    def by_priority(self) -> dict[str, list[Threat]]:
        grouped: dict[str, list[Threat]] = {}
        for t in self.threats:
            grouped.setdefault(t.priority, []).append(t)
        return grouped

    def unmitigated(self) -> list[Threat]:
        return [t for t in self.threats if not t.mitigations]

    def summary(self) -> dict:
        by_cat: dict[str, int] = {}
        for t in self.threats:
            cat = t.category.value
            by_cat[cat] = by_cat.get(cat, 0) + 1
        return {
            "system": self.system,
            "total_threats": len(self.threats),
            "unmitigated": len(self.unmitigated()),
            "by_category": by_cat,
            "critical_count": len(self.by_priority().get("critical", [])),
        }

model = ThreatModel("Payment API")
model.assets = ["credit card data", "user credentials", "transaction logs"]
model.trust_boundaries = ["internet to load balancer", "API to database"]
model.add_threat(Threat(
    title="SQL injection in payment endpoint",
    category=StrideCategory.TAMPERING,
    description="Attacker manipulates query parameters",
    likelihood=3, impact=5,
    mitigations=["Parameterized queries", "Input validation"],
))
print(model.summary())

Advanced Tips
Review threat models when system architecture changes significantly, not just during initial design. Use attack trees to decompose high-level threats into specific exploit paths that map to concrete test cases. Involve both developers and security specialists in threat modeling sessions to combine system knowledge with adversarial thinking.
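The attack-tree decomposition mentioned above can be sketched with a small recursive structure. The node class and the example goals are illustrative assumptions, not an API defined by the skill.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a minimal attack-tree node whose leaf paths
# enumerate concrete exploit routes. Names here are assumptions.
@dataclass
class AttackNode:
    goal: str
    children: list["AttackNode"] = field(default_factory=list)

    def leaf_paths(self, prefix=()):
        """Enumerate root-to-leaf paths; each path can map to a test case."""
        path = prefix + (self.goal,)
        if not self.children:
            return [path]
        paths = []
        for child in self.children:
            paths.extend(child.leaf_paths(path))
        return paths

tree = AttackNode("Steal card data", [
    AttackNode("Compromise API", [
        AttackNode("SQL injection in payment endpoint"),
        AttackNode("Stolen API credentials"),
    ]),
    AttackNode("Intercept traffic", [
        AttackNode("TLS downgrade"),
    ]),
])
for path in tree.leaf_paths():
    print(" -> ".join(path))
```

Each printed path is one exploit route, which security testing can target directly.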
When to Use It?
Use Cases
Evaluate the security posture of a new microservice before deployment. Identify threats to features that handle sensitive data during the design phase. Produce security requirements that inform penetration testing scope and priorities.
Related Topics
STRIDE methodology, OWASP threat modeling guides, attack surface analysis, risk assessment frameworks, and security requirements engineering.
Important Notes
Requirements
Architecture diagrams or data flow diagrams of the system under analysis. Understanding of system trust boundaries and data sensitivity classifications. Participation from team members who understand both the system internals and potential attack scenarios.
Usage Recommendations
Do: document threats with specific descriptions rather than vague categories. Score risk using consistent criteria across all threats for meaningful prioritization. Update the threat model as the system evolves and new components are added.
Don't: treat threat modeling as a one-time exercise completed only during initial design. Accept risk scores without discussion when team members disagree on likelihood or impact ratings. Skip mitigation planning for threats that score below the critical threshold, as lower-priority threats still need documented acceptance.
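The documented-acceptance point above can be made concrete with a lightweight record. The field names below are assumptions for illustration, not a standard format.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of a risk-acceptance record for threats the team
# decides not to mitigate; field names are assumptions, not a standard.
@dataclass
class RiskAcceptance:
    threat_title: str
    rationale: str
    accepted_by: str
    review_date: date  # when the acceptance should be revisited

accepted = RiskAcceptance(
    threat_title="Log tampering on isolated admin host",
    rationale="Host is network-isolated; likelihood rated 1 of 5",
    accepted_by="security lead",
    review_date=date(2026, 1, 1),
)
print(f"Accepted: {accepted.threat_title}, review by {accepted.review_date}")
```

Keeping such records alongside the threat model makes risk decisions auditable and ensures accepted risks are revisited rather than forgotten.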
Limitations
Threat models reflect the understanding of participants and may miss novel attack vectors outside their experience. Risk scoring is inherently subjective even with structured criteria. Maintaining threat models as systems evolve requires dedicated effort that competes with feature development priorities.