File: hakmem/docs/design/DESIGN_FLAWS_SUMMARY.md
Commit 67fb15f35f (Moe Charm, CI): Wrap debug fprintf in !HAKMEM_BUILD_RELEASE guards (Release build optimization)
## Changes

### 1. core/page_arena.c
- Removed the init failure message (lines 25-27); the error is already handled by returning early
- All other fprintf statements were already wrapped in existing #if !HAKMEM_BUILD_RELEASE blocks

### 2. core/hakmem.c
- Wrapped the SIGSEGV handler init message (line 72)
- CRITICAL: kept the SIGSEGV/SIGBUS/SIGABRT error messages (lines 62-64); production builds still need crash logs (see the sketch below)
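
A minimal sketch of the policy applied in core/hakmem.c, assuming a signal-handler setup roughly like the following. Only the HAKMEM_BUILD_RELEASE macro and the signal names come from the commit; the function and message names are illustrative, not the actual code.

```c
/* Sketch only: illustrates the guard policy, not the literal hakmem.c source. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void hakmem_crash_handler(int sig) {
    /* Kept unconditionally: production builds still need crash diagnostics. */
    fprintf(stderr, "[HAKMEM] fatal signal %d (SIGSEGV/SIGBUS/SIGABRT)\n", sig);
    _exit(128 + sig);
}

static void hakmem_install_crash_handler(void) {
    signal(SIGSEGV, hakmem_crash_handler);
    signal(SIGBUS,  hakmem_crash_handler);
    signal(SIGABRT, hakmem_crash_handler);
#if !HAKMEM_BUILD_RELEASE
    /* Informational only: compiled out of release builds. */
    fprintf(stderr, "[HAKMEM] SIGSEGV handler installed\n");
#endif
}
```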

### 3. core/hakmem_shared_pool.c
- Wrapped all debug fprintf statements in #if !HAKMEM_BUILD_RELEASE:
  - Node pool exhaustion warning (line 252)
  - SP_META_CAPACITY_ERROR warning (line 421)
  - SP_FIX_GEOMETRY debug logging (line 745)
  - SP_ACQUIRE_STAGE0.5_EMPTY debug logging (line 865)
  - SP_ACQUIRE_STAGE0_L0 debug logging (line 803)
  - SP_ACQUIRE_STAGE1_LOCKFREE debug logging (line 922)
  - SP_ACQUIRE_STAGE2_LOCKFREE debug logging (line 996)
  - SP_ACQUIRE_STAGE3 debug logging (line 1116)
  - SP_SLOT_RELEASE debug logging (line 1245)
  - SP_SLOT_FREELIST_LOCKFREE debug logging (line 1305)
  - SP_SLOT_COMPLETELY_EMPTY debug logging (line 1316)
- Fixed lock_stats_init() for release builds (lines 60-65) so that g_lock_stats_enabled is still initialized (see the sketch below)
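
The same guard pattern covers all of the SP_* logging sites listed above. A hedged sketch of the two shapes of change: the tag names come from the list above and g_lock_stats_enabled from the note on lock_stats_init(), while the helper function, the environment variable, and the surrounding structure are assumptions for illustration.

```c
/* Sketch only: shows the shape of the change, not the literal pool code. */
#include <stdio.h>
#include <stdlib.h>

static int g_lock_stats_enabled = 0;  /* name taken from the note above */

/* Debug-only diagnostics: compiled out entirely in release builds. */
static void sp_log_stage(const char* tag, int slot) {
#if !HAKMEM_BUILD_RELEASE
    fprintf(stderr, "[%s] slot=%d\n", tag, slot);  /* e.g. "SP_ACQUIRE_STAGE3" */
#else
    (void)tag;
    (void)slot;
#endif
}

/* The enable flag must still be initialized in release builds, even though
 * the logging itself is compiled out. HAKMEM_LOCK_STATS is a hypothetical
 * environment switch used only for this example. */
static void lock_stats_init(void) {
    const char* env = getenv("HAKMEM_LOCK_STATS");
    g_lock_stats_enabled = (env && env[0] == '1');
#if !HAKMEM_BUILD_RELEASE
    if (g_lock_stats_enabled)
        fprintf(stderr, "[HAKMEM] lock statistics enabled\n");
#endif
}
```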

## Performance Validation

Before: 51M ops/s (debug fprintf still compiled into hot paths)
After:  49.1M ops/s (comparable throughput, with the fprintf calls compiled out of the hot paths)

## Build & Test

```bash
./build.sh larson_hakmem
./out/release/larson_hakmem 1 5 1 1000 100 10000 42
# Result: 49.1M ops/s
```

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

HAKMEM Design Flaws - Quick Reference

Date: 2025-11-08
Key Insight: "Aren't cache layers supposed to grow dynamically when they run out of capacity?" ← 100% CORRECT

Visual Summary

┌─────────────────────────────────────────────────────────────────┐
│                  HAKMEM Resource Management                     │
│                   Fixed vs Dynamic Analysis                     │
└─────────────────────────────────────────────────────────────────┘

Component          │ Type           │ Capacity      │ Expansion    │ Priority
───────────────────┼────────────────┼───────────────┼──────────────┼──────────
SuperSlab          │ Fixed Array    │ 32 slabs      │ ❌ None      │ 🔴 CRITICAL
  └─ slabs[]       │                │ COMPILE-TIME  │              │ 4T OOM!
                   │                │               │              │
TLS Cache          │ Fixed Cap      │ 256-768 slots │ ❌ None      │ 🟡 HIGH
  └─ g_tls_sll_*   │                │ ENV override  │              │ No adapt
                   │                │               │              │
BigCache           │ Fixed 2D Array │ 256×8 = 2048  │ ❌ Eviction  │ 🟡 MEDIUM
  └─ g_cache[][]   │                │ COMPILE-TIME  │              │ Hash coll
                   │                │               │              │
L2.5 Pool          │ Fixed Shards   │ 64 shards     │ ❌ None      │ 🟡 MEDIUM
  └─ freelist[][]  │                │ COMPILE-TIME  │              │ Contention
                   │                │               │              │
Mid Registry       │ Dynamic Array  │ 64 → 2x       │ ✅ Grows     │ ✅ GOOD
  └─ entries       │                │ RUNTIME mmap  │              │ Correct!
                   │                │               │              │
Mid TLS Ring       │ Fixed Array    │ 48 slots      │ ❌ Overflow  │ 🟢 LOW
  └─ items[]       │                │ to LIFO       │              │ Minor

Problem: SuperSlab Fixed 32 Slabs (CRITICAL)

Current Design (BROKEN):
┌────────────────────────────────────────────┐
│ SuperSlab (2MB)                            │
│ ┌────────────────────────────────────────┐ │
│ │ slabs[32] ← FIXED ARRAY!               │ │
│ │ [0][1][2]...[31] ← Cannot grow!        │ │
│ └────────────────────────────────────────┘ │
│                                            │
│ 4T high-contention:                        │
│   Thread 1: slabs[0-7]   ← all busy       │
│   Thread 2: slabs[8-15]  ← all busy       │
│   Thread 3: slabs[16-23] ← all busy       │
│   Thread 4: slabs[24-31] ← all busy       │
│   → OOM! No more slabs!                    │
└────────────────────────────────────────────┘

Proposed Fix (Mimalloc-style):
┌────────────────────────────────────────────┐
│ SuperSlabChunk (2MB)                       │
│ ┌────────────────────────────────────────┐ │
│ │ slabs[32] (initial)                    │ │
│ └────────────────────────────────────────┘ │
│          ↓ link on overflow               │
│ ┌────────────────────────────────────────┐ │
│ │ slabs[32] (expansion chunk)            │ │
│ └────────────────────────────────────────┘ │
│          ↓ can continue growing           │
│         ...                                │
│                                            │
│ 4T high-contention:                        │
│   Chunk 1: slabs[0-31]   ← full           │
│   → Allocate Chunk 2                       │
│   Chunk 2: slabs[32-63]  ← expand!        │
│   → No OOM!                                │
└────────────────────────────────────────────┘
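
The proposal above is a small structural change: keep the 32-slab array, but make it the payload of a chunk that can be chained when it fills up. A hedged sketch follows; SuperSlabChunk, the 32-slab count, and the 2MB chunk size come from this document, while the field names and the acquire function are illustrative, and locking is omitted.

```c
/* Sketch of the proposed mimalloc-style expansion; single-threaded for brevity. */
#include <stddef.h>
#include <sys/mman.h>

#define SLABS_PER_CHUNK 32          /* current fixed capacity, kept per chunk */

typedef struct Slab { unsigned char payload[64]; } Slab;   /* placeholder slab */

typedef struct SuperSlabChunk {
    Slab slabs[SLABS_PER_CHUNK];    /* same fixed array as today */
    size_t used;                    /* slabs handed out from this chunk */
    struct SuperSlabChunk* next;    /* NEW: link to an expansion chunk */
} SuperSlabChunk;

/* Find a free slab, allocating a new 2MB chunk instead of failing with OOM. */
static Slab* superslab_acquire(SuperSlabChunk** head) {
    for (SuperSlabChunk* c = *head; c != NULL; c = c->next) {
        if (c->used < SLABS_PER_CHUNK)
            return &c->slabs[c->used++];
    }
    SuperSlabChunk* fresh = mmap(NULL, 2u << 20, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (fresh == MAP_FAILED)
        return NULL;                /* genuine OOM, not a design limit */
    fresh->used = 1;
    fresh->next = *head;            /* push the new chunk onto the list */
    *head = fresh;
    return &fresh->slabs[0];
}
```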

Comparison: HAKMEM vs Other Allocators

┌─────────────────────────────────────────────────────────────────┐
│                      Dynamic Expansion                          │
└─────────────────────────────────────────────────────────────────┘

mimalloc:
  Segment → Pages → Blocks
  ✅ Variable segment size
  ✅ Dynamic page allocation
  ✅ Adaptive thread cache

jemalloc:
  Chunk → Runs → Regions
  ✅ Variable chunk size
  ✅ Dynamic run creation
  ✅ Adaptive tcache

HAKMEM:
  SuperSlab → Slabs → Blocks
  ❌ Fixed 2MB SuperSlab size
  ❌ Fixed 32 slabs per SuperSlab  ← PROBLEM!
  ❌ Fixed TLS cache capacity
  ✅ Dynamic Mid Registry (only this!)

Fix Priority Matrix

                High Impact
                     ▲
                     │
        ┌────────────┼────────────┐
        │ SuperSlab  │            │
        │ (32 slabs) │ TLS Cache  │
        │ 🔴 CRITICAL│ (256-768)  │
        │ 7-10 days  │ 🟡 HIGH    │
        │            │ 3-5 days   │
        ├────────────┼────────────┤
        │ BigCache   │ L2.5 Pool  │
        │ (256×8)    │ (64 shards)│
        │ 🟡 MEDIUM  │ 🟡 MEDIUM  │
        │ 1-2 days   │ 2-3 days   │
        └────────────┼────────────┘
                     │
                     ▼
                Low Impact
        ◄────────────┼────────────►
        Low Effort       High Effort

Quick Stats

Total Components Analyzed:    6
  ├─ CRITICAL issues:         1 (SuperSlab)
  ├─ HIGH issues:             1 (TLS Cache)
  ├─ MEDIUM issues:           2 (BigCache, L2.5)
  ├─ LOW issues:              1 (Mid TLS Ring)
  └─ GOOD examples:           1 (Mid Registry) ✅

Estimated Fix Effort:         13-20 days
  ├─ Phase 2a (SuperSlab):    7-10 days
  ├─ Phase 2b (TLS Cache):    3-5 days
  └─ Phase 2c (Others):       3-5 days

Expected Outcomes:
  ✅ 4T stable operation (no OOM)
  ✅ Adaptive performance (hot classes get more cache)
  ✅ Better memory efficiency (no over-provisioning)

Key Takeaways

  1. User is 100% correct: Cache layers should expand dynamically.

  2. Root cause of 4T crashes: SuperSlab fixed 32-slab array.

  3. Mid Registry is the gold standard: Use its pattern for other components (see the sketch after this list).

  4. Design principle: "Resources should expand on-demand, not be pre-allocated."

  5. Fix order: SuperSlab → TLS Cache → BigCache → L2.5 Pool.
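
The Mid Registry pattern referenced in takeaway 3 is the doubling growth shown in the summary table (64 entries, grown at runtime via mmap). A hedged sketch of that pattern; the starting capacity, doubling, and mmap-based growth come from the table, while MidEntry, MidRegistry, and mid_registry_push are assumed names for illustration.

```c
/* Sketch of the Mid Registry growth pattern; identifiers are illustrative. */
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

typedef struct { void* base; size_t size; } MidEntry;   /* placeholder entry */

typedef struct {
    MidEntry* entries;
    size_t    count;
    size_t    capacity;     /* starts at 64, doubles on demand */
} MidRegistry;

static int mid_registry_push(MidRegistry* r, MidEntry e) {
    if (r->count == r->capacity) {
        size_t new_cap = r->capacity ? r->capacity * 2 : 64;
        MidEntry* bigger = mmap(NULL, new_cap * sizeof(MidEntry),
                                PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (bigger == MAP_FAILED)
            return -1;      /* growth failed; caller decides how to degrade */
        if (r->entries) {
            memcpy(bigger, r->entries, r->count * sizeof(MidEntry));
            munmap(r->entries, r->capacity * sizeof(MidEntry));
        }
        r->entries  = bigger;
        r->capacity = new_cap;
    }
    r->entries[r->count++] = e;
    return 0;
}
```

The same grow-on-demand shape is what the fix order above proposes to bring to SuperSlab, the TLS cache, BigCache, and the L2.5 pool.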


Full Analysis: See DESIGN_FLAWS_ANALYSIS.md (11 chapters, detailed roadmap)