Memory Optimize

Enhance system performance through automated memory optimization and management

Category: productivity | Source: kochetkov-ma/claude-brewcode

Memory Optimize is an AI skill that provides techniques and tools for identifying, diagnosing, and resolving memory issues in applications across languages and runtimes. It covers memory profiling, leak detection, allocation analysis, garbage collection tuning, and resource management patterns that enable developers to reduce memory consumption and prevent out-of-memory failures.

What Is This?

Overview

Memory Optimize provides structured approaches to managing application memory usage. It covers profiling heap allocations to identify which objects consume the most memory, detecting memory leaks through snapshot comparison and growth tracking, and analyzing garbage collection behavior to tune collector settings. It also addresses unnecessary object retention caused by stale references or caches, resource cleanup patterns that release memory deterministically, and monitoring memory trends in production to catch regressions before they cause outages.

Who Should Use This

This skill serves backend developers troubleshooting memory growth in long-running services, frontend engineers optimizing single-page application memory, DevOps teams configuring memory limits and monitoring for containerized workloads, and performance engineers conducting memory audits during optimization cycles.

Why Use It?

Problems It Solves

Memory leaks cause gradual consumption growth that eventually crashes long-running services. Excessive object allocation triggers frequent garbage collection pauses that degrade response latency. Unbounded caches grow without limit, consuming memory intended for active request processing. Without profiling, developers guess at memory issues rather than targeting actual allocation sources.
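
To make the unbounded-cache failure mode concrete, here is a minimal sketch; the memo dictionary and expensive function are illustrative names, not part of the skill itself.

import tracemalloc

tracemalloc.start()

memo = {}  # hypothetical memoization cache with no eviction policy

def expensive(n):
    if n not in memo:
        memo[n] = "x" * 1024  # stand-in for a costly computed result
    return memo[n]

for i in range(50_000):  # every key is new, so the cache only grows
    expensive(i)

current, peak = tracemalloc.get_traced_memory()
print(f"current={current / 1024 / 1024:.1f} MB peak={peak / 1024 / 1024:.1f} MB")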

Core Highlights

Heap profiling identifies the exact allocation sites responsible for memory consumption. Leak detection through snapshot diffing reveals objects that accumulate over time. GC analysis exposes collection frequency and pause durations affecting application throughput. Resource patterns ensure deterministic cleanup of memory-holding objects.
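
As one concrete way to observe collection frequency and pause durations in CPython, gc.callbacks can time each collector run; a minimal sketch follows, with an illustrative allocation workload.

import gc
import time

# gc.callbacks entries are invoked with ("start" | "stop", info) around
# each collection; time the gap to approximate pause duration.
pauses = []
_started = [0.0]

def _on_gc(phase, info):
    if phase == "start":
        _started[0] = time.perf_counter()
    else:  # "stop": info includes the generation that was collected
        pauses.append((info["generation"], time.perf_counter() - _started[0]))

gc.callbacks.append(_on_gc)

# Allocate enough short-lived objects to trigger several collections.
for _ in range(100):
    _ = [object() for _ in range(10_000)]

gc.callbacks.remove(_on_gc)
print(f"{len(pauses)} collections observed")
for gen, seconds in pauses[:5]:
    print(f"  gen{gen}: {seconds * 1000:.3f} ms")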

How to Use It?

Basic Usage

import tracemalloc
import gc

# Begin recording allocation tracebacks before any work happens.
tracemalloc.start()

# Baseline snapshot taken before the allocation under investigation.
snapshot1 = tracemalloc.take_snapshot()

data = [dict(key=i, value="x" * 100) for i in range(10000)]

# Second snapshot taken after the allocation.
snapshot2 = tracemalloc.take_snapshot()

# Diff the snapshots by source line to see where memory grew.
stats = snapshot2.compare_to(snapshot1, "lineno")
print("Top memory growth:")
for stat in stats[:5]:
    print(f"  {stat}")

# Force a full collection; gc.garbage lists uncollectable objects
# (populated mainly when gc.set_debug(gc.DEBUG_SAVEALL) is enabled).
gc.collect()
if gc.garbage:
    print(f"Uncollectable objects: {len(gc.garbage)}")

Real-World Examples

import tracemalloc
import weakref

class MemoryMonitor:
    """Captures labeled tracemalloc snapshots and reports growth."""

    def __init__(self):
        self.snapshots = []
        # 25 stack frames per traceback: enough to attribute
        # allocations made deep inside library code.
        tracemalloc.start(25)

    def capture(self, label=""):
        snap = tracemalloc.take_snapshot()
        self.snapshots.append((label, snap))
        return snap

    def report_growth(self):
        if len(self.snapshots) < 2:
            return "Need at least 2 snapshots"
        first = self.snapshots[0][1]
        latest = self.snapshots[-1][1]
        stats = latest.compare_to(first, "lineno")
        lines = []
        for s in stats[:10]:
            if s.size_diff > 0:  # report only allocations that grew
                kb = s.size_diff / 1024
                lines.append(f"+{kb:.1f}KB {s.traceback}")
        return "\n".join(lines)

class BoundedCache:
    """LRU cache with a hard size limit and explicit eviction."""

    def __init__(self, max_size=1000):
        self.max_size = max_size
        self.cache = {}
        self.access_order = []  # least recently used key is first

    def get(self, key):
        value = self.cache.get(key)
        if value is not None:
            # Move the key to the most-recently-used position.
            self.access_order.remove(key)
            self.access_order.append(key)
        return value

    def put(self, key, value):
        if key in self.cache:
            self.access_order.remove(key)
        elif len(self.cache) >= self.max_size:
            # Evict the least recently used entry to stay bounded.
            evicted = self.access_order.pop(0)
            del self.cache[evicted]
        self.cache[key] = value
        self.access_order.append(key)

class ResourceTracker:
    """Registers cleanup callbacks that run when objects are collected."""

    def __init__(self):
        self.refs = []

    def track(self, obj, cleanup):
        # weakref.finalize runs cleanup once obj is garbage collected,
        # without keeping obj alive itself.
        ref = weakref.finalize(obj, cleanup)
        self.refs.append(ref)

    def report(self):
        alive = sum(1 for r in self.refs if r.alive)
        return f"Tracked: {len(self.refs)}, Alive: {alive}"

monitor = MemoryMonitor()
monitor.capture("before")
cache = BoundedCache(max_size=500)
for i in range(2000):  # 2000 inserts, but the cache never exceeds 500
    cache.put(f"key_{i}", "x" * 200)
monitor.capture("after")
print(monitor.report_growth())
print(f"Cache size: {len(cache.cache)}")

Advanced Tips

Use weak references for caches whose entries can be recreated on demand; such entries become collectable as soon as no other code holds a strong reference to them, as the sketch below shows. Set the tracemalloc frame depth to 25 or higher when tracking allocations through deep call stacks. Profile memory after a garbage collection run to measure retained heap size rather than transient allocations.
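
As a minimal illustration of the weak-reference tip, a cache built on weakref.WeakValueDictionary lets entries disappear once nothing else references them; the Document class and load_document helper below are hypothetical.

import gc
import weakref

class Document:
    """Hypothetical cache entry that can be recreated on demand."""
    def __init__(self, doc_id):
        self.doc_id = doc_id
        self.body = "x" * 10_000

# Values are held only weakly: once no other code references a
# Document, it becomes collectable and drops out of the cache.
_doc_cache = weakref.WeakValueDictionary()

def load_document(doc_id):
    doc = _doc_cache.get(doc_id)
    if doc is None:
        doc = Document(doc_id)   # recreate on a cache miss
        _doc_cache[doc_id] = doc
    return doc

held = load_document("a")        # strong reference keeps "a" alive
load_document("b")               # no strong reference is retained
gc.collect()
print(sorted(_doc_cache.keys())) # ['a'] in CPython; "b" was reclaimed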

When to Use It?

Use Cases

Use Memory Optimize when diagnosing memory leaks in long-running server processes, when reducing peak memory usage to fit within container limits, when tuning cache sizes to balance hit rates against memory consumption, or when profiling allocation hotspots during performance optimization.
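
For cache-size tuning in particular, functools.lru_cache exposes hit and miss counters that make the trade-off measurable; a brief sketch follows, with an illustrative maxsize and access pattern.

from functools import lru_cache

@lru_cache(maxsize=256)  # candidate size: adjust, re-run, compare
def fetch(key):
    return "x" * 1000    # stand-in for an expensive lookup

for i in range(1000):
    fetch(i % 200)       # working set of 200 keys fits in the cache

info = fetch.cache_info()
hit_rate = info.hits / (info.hits + info.misses)
print(f"hits={info.hits} misses={info.misses} currsize={info.currsize}")
print(f"hit rate: {hit_rate:.1%}")  # larger maxsize buys hits with memory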

Related Topics

Garbage collection algorithms and tuning, heap profiling tools, container memory limits, weak reference patterns, and production monitoring with memory alerts complement memory optimization.

Important Notes

Requirements

Profiling tools appropriate to the target language and runtime. Access to reproduce memory issues in a test environment. Monitoring infrastructure for tracking memory metrics in production.

Usage Recommendations

Do: profile with realistic workloads that match production data volumes and access patterns. Use bounded data structures with explicit eviction policies for all caches. Monitor memory trends over time rather than checking only instantaneous usage.
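
One lightweight way to monitor trends rather than spot values is periodic sampling of tracemalloc's traced memory; a minimal sketch, with an illustrative workload standing in for real requests.

import time
import tracemalloc

tracemalloc.start()

samples = []  # (timestamp, bytes currently traced)

def sample():
    current, _peak = tracemalloc.get_traced_memory()
    samples.append((time.time(), current))

retained = []
for step in range(5):
    retained.append(["x" * 1000 for _ in range(1000)])  # simulated retention
    sample()

# A steadily rising series suggests a leak; flat or sawtooth is normal.
for ts, current in samples:
    print(f"{ts:.2f}: {current / 1024:.0f} KB traced")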

Don't: disable garbage collection as an optimization without understanding the memory management implications. Cache objects indefinitely without size limits or expiration policies. Rely solely on manual memory inspection instead of automated profiling tools.

Limitations

Profiling instruments add overhead that can alter the memory behavior being measured. Some memory leaks only manifest under production traffic patterns not reproducible in testing. Language-level profilers may not capture memory consumed by native extensions or system libraries.