Database Designer

Database Designer automation and integration for structured data modeling

Category: development Source: alirezarezvani/claude-skills

Database Designer is a community skill for designing relational database schemas, covering entity-relationship modeling, normalization analysis, index strategy planning, constraint definition, and schema migration management for application data architecture.

What Is This?

Overview

Database Designer provides patterns for creating well-structured relational database schemas. It covers entity-relationship modeling that maps business concepts to tables with appropriate relationships; normalization analysis that eliminates data redundancy by applying normal form rules; index strategy planning that identifies columns and composite keys to index based on query patterns; constraint definition that enforces data integrity through foreign keys, unique constraints, and check rules; and migration management that versions schema changes for safe deployment across environments. The skill enables developers to build database schemas that balance performance with data integrity.

Who Should Use This

This skill serves backend developers designing data models for new applications, database administrators reviewing and optimizing existing schemas, and architects planning data layer strategy for multi-service systems.

Why Use It?

Problems It Solves

Poorly designed schemas create data anomalies where updates must be applied to multiple rows, risking inconsistency. Missing indexes cause queries to scan entire tables as data volume grows. Inadequate constraints allow invalid data to enter the database, which corrupts downstream processing. Schema changes without migration tooling risk data loss during deployment.

Core Highlights

ER modeler maps business entities to tables with relationship cardinality definitions. Normalizer detects redundancy and recommends decomposition to eliminate update anomalies. Index planner analyzes query patterns to recommend covering indexes. Migration generator creates versioned DDL scripts for safe schema evolution.

How to Use It?

Basic Usage

-- Schema design example
CREATE TABLE customers (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) NOT NULL UNIQUE,
  name VARCHAR(100) NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_id INTEGER NOT NULL REFERENCES customers(id) ON DELETE CASCADE,
  total DECIMAL(10,2) NOT NULL CHECK (total >= 0),
  status VARCHAR(20) DEFAULT 'pending',
  created_at TIMESTAMP DEFAULT NOW()
);

-- Index the foreign key for joins and per-customer lookups
CREATE INDEX idx_orders_customer ON orders(customer_id);

-- Partial index: skip the bulk of completed rows
CREATE INDEX idx_orders_status ON orders(status)
  WHERE status != 'completed';
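The DDL above uses PostgreSQL syntax. As a rough runnable sketch of what the constraints guarantee at runtime, here is an equivalent table pair exercised through Python's sqlite3 module (the types are adjusted for SQLite; the behavior shown, CHECK rejection and cascading delete, holds in either engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default

conn.executescript("""
CREATE TABLE customers (
    id INTEGER PRIMARY KEY,
    email TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL
);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL
        REFERENCES customers(id) ON DELETE CASCADE,
    total NUMERIC NOT NULL CHECK (total >= 0),
    status TEXT DEFAULT 'pending'
);
""")

conn.execute("INSERT INTO customers (id, email, name) VALUES (1, 'a@example.com', 'Ada')")
conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, 19.99)")

# The CHECK constraint rejects invalid data at the database level
rejected = False
try:
    conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, -5)")
except sqlite3.IntegrityError:
    rejected = True

# ON DELETE CASCADE removes the customer's orders with the customer
conn.execute("DELETE FROM customers WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(rejected, remaining)  # True 0
```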

Real-World Examples

from datetime import datetime
from pathlib import Path

class MigrationManager:
    """Creates and lists versioned SQL migration files."""

    def __init__(self, migrations_dir: str):
        self.dir = Path(migrations_dir)
        self.dir.mkdir(exist_ok=True)

    def create(self, name: str, up_sql: str, down_sql: str) -> str:
        """Write a timestamped migration file with UP and DOWN sections."""
        ts = datetime.now().strftime('%Y%m%d%H%M%S')
        filename = f'{ts}_{name}.sql'
        content = (
            f'-- UP\n{up_sql}\n\n'
            f'-- DOWN\n{down_sql}\n'
        )
        (self.dir / filename).write_text(content)
        return filename

    def pending(self, applied: set[str]) -> list[str]:
        """Return migration filenames not yet applied, in timestamp order."""
        files = sorted(self.dir.glob('*.sql'))
        return [f.name for f in files if f.name not in applied]
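The manager above creates and lists migration files but leaves tracking of applied migrations to the caller. One possible sketch of that missing piece, using SQLite and a schema_migrations table; the table name and the file-parsing convention are assumptions that mirror the UP/DOWN layout MigrationManager.create writes:

```python
import sqlite3
import tempfile
from pathlib import Path

def apply_pending(conn, migrations_dir):
    """Apply unapplied *.sql files in timestamp order, recording each name."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for path in sorted(Path(migrations_dir).glob("*.sql")):
        if path.name in applied:
            continue  # already recorded: skip so reruns are safe
        # Take only the -- UP section, matching MigrationManager.create's layout
        up_sql = path.read_text().split("-- DOWN")[0].removeprefix("-- UP")
        conn.executescript(up_sql)
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (path.name,))
    conn.commit()

with tempfile.TemporaryDirectory() as tmp:
    # A file laid out the way MigrationManager.create writes it
    Path(tmp, "20240101000000_init.sql").write_text(
        "-- UP\nCREATE TABLE customers (id INTEGER PRIMARY KEY);\n\n"
        "-- DOWN\nDROP TABLE customers;\n")
    conn = sqlite3.connect(":memory:")
    apply_pending(conn, tmp)
    tables = sorted(r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"))
    print(tables)  # ['customers', 'schema_migrations']
    apply_pending(conn, tmp)  # second run is a no-op thanks to the tracking table
```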

Advanced Tips

Use partial indexes on columns with skewed distributions to index only the rows that queries actually filter on, reducing index size and write overhead. Add composite indexes that match the exact column order of frequent multi-column WHERE clauses for covering index optimization. Include a down migration for every schema change to enable safe rollback in case a deployment needs to be reverted.
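The composite-index tip can be checked empirically with the database's query planner. A minimal sketch using SQLite's EXPLAIN QUERY PLAN (PostgreSQL's EXPLAIN serves the same role); the table and index names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    status TEXT NOT NULL,
    total NUMERIC NOT NULL
);
-- Composite index matching the column order of a frequent two-column WHERE clause
CREATE INDEX idx_orders_customer_status
    ON orders(customer_id, status);
""")

# Ask the planner how it would execute the frequent query
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id FROM orders
    WHERE customer_id = ? AND status = ?
""", (1, "pending")).fetchall()

# The last column of each plan row is a human-readable detail string
detail = " ".join(row[-1] for row in plan)
print(detail)  # the plan should name idx_orders_customer_status
```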

When to Use It?

Use Cases

Design a relational schema for a new application with proper normalization and referential integrity. Plan an index strategy for an existing database based on slow query log analysis. Create a migration workflow that versions schema changes across development and production environments.

Related Topics

Database design, schema normalization, SQL, index optimization, database migrations, and entity-relationship modeling.

Important Notes

Requirements

Relational database system such as PostgreSQL, MySQL, or SQLite. Migration tooling such as Alembic, Flyway, or custom scripts for versioned schema changes. Query logs or access patterns for informed index planning.

Usage Recommendations

Do: normalize to third normal form as a starting point and selectively denormalize only where query performance requires it. Define foreign key constraints to enforce referential integrity at the database level. Test migrations against a copy of production data before deploying.

Don't: create indexes on every column (each index adds write overhead and storage cost), skip down migrations (skipping them makes rollback impossible during failed deployments), or use generic column names like data or value (they obscure the schema's meaning and make queries harder to understand).
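The recommendation to test migrations before deploying can be rehearsed as an up/down roundtrip against a throwaway database. A minimal sketch with sqlite3 standing in for a copy of production; the SQL and table name are illustrative:

```python
import sqlite3

up_sql = "CREATE TABLE invoices (id INTEGER PRIMARY KEY, total NUMERIC NOT NULL);"
down_sql = "DROP TABLE invoices;"

def table_names(conn):
    """Return the set of table names currently in the database."""
    return {r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}

conn = sqlite3.connect(":memory:")  # stand-in for a copy of production data

conn.executescript(up_sql)
applied = "invoices" in table_names(conn)      # up migration took effect

conn.executescript(down_sql)
rolled_back = "invoices" not in table_names(conn)  # down migration reverts it

print(applied, rolled_back)  # True True
```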

Limitations

Relational schema design patterns assume structured data and may not suit document-oriented or graph data models. Index recommendations based on current query patterns may become suboptimal as application access patterns change. Migration tooling adds deployment complexity and requires coordination in multi-service environments sharing the same database.