7311d32574
Phase 24 PageArena/HotSpanBox: Mid/VM page reuse cache (structural limit identified)
...
Summary:
- Implemented PageArena (Box PA1-PA3) for Mid-Large (8-52KB) / L25 (64KB-2MB)
- Integration: Pool TLS Arena + L25 alloc/refill paths
- Result: Minimal impact (+4.7% Mid-Large ops/s, no measurable page-fault reduction)
- Conclusion: Structural limit - existing Arena/Pool/L25 already optimized
Implementation:
1. Box PA1: Hot Page Cache (4KB pages, LIFO stack, 1024 slots; see the sketch after this list)
- core/page_arena.c: hot_page_alloc/free with mutex protection
- TLS cache for 4KB pages
2. Box PA2: Warm Span Cache (64KB-2MB spans, size-bucketed)
- 64KB/128KB/2MB span caches (256/128/64 slots)
- Size-class based allocation
3. Box PA3: Cold Path (mmap fallback)
- page_arena_alloc_pages/aligned with fallback to direct mmap
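A minimal sketch of the Box PA1 idea above: a mutex-protected LIFO stack of 4KB pages with a direct-mmap fallback. Only hot_page_alloc/free and the 1024-slot default come from this commit; the rest is illustrative, not the actual core/page_arena.c code.

    #include <pthread.h>
    #include <sys/mman.h>

    #define HOT_PAGE_SLOTS 1024   /* HAKMEM_PAGE_ARENA_HOT_SIZE default */
    #define PAGE_4K        4096

    static void *g_hot_stack[HOT_PAGE_SLOTS];  /* LIFO: most recently freed page on top */
    static int   g_hot_top = 0;
    static pthread_mutex_t g_hot_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Pop a recently released 4KB page if one is cached, else fall back to mmap (cold path). */
    void *hot_page_alloc(void) {
        void *p = NULL;
        pthread_mutex_lock(&g_hot_lock);
        if (g_hot_top > 0) p = g_hot_stack[--g_hot_top];
        pthread_mutex_unlock(&g_hot_lock);
        if (p) return p;
        p = mmap(NULL, PAGE_4K, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }

    /* Push a 4KB page back; when the stack is full, return it to the kernel. */
    void hot_page_free(void *p) {
        pthread_mutex_lock(&g_hot_lock);
        if (g_hot_top < HOT_PAGE_SLOTS) {
            g_hot_stack[g_hot_top++] = p;
            pthread_mutex_unlock(&g_hot_lock);
            return;
        }
        pthread_mutex_unlock(&g_hot_lock);
        munmap(p, PAGE_4K);
    }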
Integration Points:
4. Pool TLS Arena (core/pool_tls_arena.c)
- chunk_ensure(): Lazy init + page_arena_alloc_pages() hook
- arena_cleanup_thread(): Return chunks to PageArena if enabled
- Exponential growth preserved (1MB → 8MB)
5. L25 Pool (core/hakmem_l25_pool.c)
- l25_alloc_new_run(): Lazy init + page_arena_alloc_aligned() hook
- refill_freelist(): PageArena allocation for bundles
- 2MB run carving preserved
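Roughly how both integration hooks are shaped; this is a sketch, not the code in core/pool_tls_arena.c or core/hakmem_l25_pool.c: page_arena_enabled() and the page-count signature of page_arena_alloc_pages() are assumptions.

    #include <stddef.h>
    #include <sys/mman.h>

    /* Assumed interfaces (real signatures in core/page_arena.c may differ). */
    extern int   page_arena_enabled(void);
    extern void *page_arena_alloc_pages(size_t npages);

    /* Try the PageArena first when enabled, keep direct mmap as the Box PA3
     * cold-path fallback; the 1MB → 8MB exponential growth and 2MB run
     * carving in the callers stay unchanged. */
    static void *chunk_backing_pages(size_t bytes) {
        void *mem = NULL;
        if (page_arena_enabled())                 /* HAKMEM_PAGE_ARENA_ENABLE=1 */
            mem = page_arena_alloc_pages(bytes / 4096);
        if (!mem) {
            mem = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (mem == MAP_FAILED) return NULL;
        }
        return mem;   /* caller carves the chunk/run linearly as before */
    }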
ENV Variables:
- HAKMEM_PAGE_ARENA_ENABLE=1 (default: 0, OFF)
- HAKMEM_PAGE_ARENA_HOT_SIZE=1024 (default: 1024)
- HAKMEM_PAGE_ARENA_WARM_64K=256 (default: 256)
- HAKMEM_PAGE_ARENA_WARM_128K=128 (default: 128)
- HAKMEM_PAGE_ARENA_WARM_2M=64 (default: 64)
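All knobs default to OFF or the listed sizes; a hedged sketch of reading them (env_int is a hypothetical helper, not the project's actual parser):

    #include <stdlib.h>

    /* Hypothetical helper: integer-valued HAKMEM_* variable with a default. */
    static int env_int(const char *name, int defval) {
        const char *s = getenv(name);
        return (s && *s) ? atoi(s) : defval;
    }

    /* PageArena stays OFF unless explicitly enabled, e.g.:
     *   int enabled  = env_int("HAKMEM_PAGE_ARENA_ENABLE", 0);
     *   int hot_size = env_int("HAKMEM_PAGE_ARENA_HOT_SIZE", 1024);
     */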
Benchmark Results:
- Mid-Large MT (4T, 40K iter, 2KB):
- OFF: 84,535 page-faults, 726K ops/s
- ON: 84,534 page-faults, 760K ops/s (+4.7% ops, -0.001% faults)
- VM Mixed (200K iter):
- OFF: 102,134 page-faults, 257K ops/s
- ON: 102,134 page-faults, 255K ops/s (no fault change, -0.8% ops)
Root Cause Analysis:
- Hypothesis: 50-66% page-fault reduction (80-100K → 30-40K)
- Actual: <1% page-fault reduction, minimal performance impact
- Reason: Structural limit - existing Arena/Pool/L25 already highly optimized
- 1MB chunk sizes with high-density linear carving
- TLS ring + exponential growth minimize mmap calls
- PageArena becomes double-buffering layer with no benefit
- Remaining page-faults from kernel zero-clear + app access patterns
Lessons Learned:
1. Mid/Large allocators already page-optimal via Arena/Pool design
2. Middle-layer caching ineffective when base layer already optimized
3. Page-fault reduction requires app-level access pattern changes
4. Tiny layer (Phase 23) remains best target for frontend optimization
Next Steps:
- Defer PageArena (low ROI, structural limit reached)
- Focus on upper layers (allocation pattern analysis, size distribution)
- Consider app-side access pattern optimization
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 03:22:27 +09:00
03ba62df4d
Phase 23 Unified Cache + PageFaultTelemetry generalization: Mid/VM page-fault bottleneck identified
...
Summary:
- Phase 23 Unified Cache: +30% improvement (Random Mixed 256B: 18.18M → 23.68M ops/s)
- PageFaultTelemetry: Extended to generic buckets (C0-C7, MID, L25, SSM)
- Measurement-driven decision: Mid/VM page-faults (80-100K) >> Tiny (6K) → prioritize Mid/VM optimization
Phase 23 Changes:
1. Unified Cache implementation (core/front/tiny_unified_cache.{c,h})
- Direct SuperSlab carve (TLS SLL bypass)
- Self-contained pop-or-refill pattern
- ENV: HAKMEM_TINY_UNIFIED_CACHE=1, HAKMEM_TINY_UNIFIED_C{0-7}=128
2. Fast path pruning (tiny_alloc_fast.inc.h, tiny_free_fast_v2.inc.h)
- Unified ON → direct cache access (skip all intermediate layers)
- Alloc: unified_cache_pop_or_refill() → on miss, fail straight to slow path
- Free: unified_cache_push() → fallback to SLL only if full
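A minimal sketch of the self-contained pop-or-refill pattern, assuming a per-class TLS array of 128 slots; superslab_carve_batch() and the struct layout are illustrative, not the actual core/front/tiny_unified_cache.c code.

    #include <stddef.h>

    typedef struct {
        void *slots[128];   /* HAKMEM_TINY_UNIFIED_C{0-7} default capacity */
        int   count;
    } unified_cache_t;

    /* Assumed helper: carve a batch of blocks for this class directly from a
     * SuperSlab, bypassing the TLS SLL. Returns the number of blocks produced. */
    extern int superslab_carve_batch(int class_idx, void **out, int max);

    /* Alloc side: serve from the cache, otherwise refill once from the
     * SuperSlab; on failure the caller drops straight to the slow path. */
    static void *unified_cache_pop_or_refill(unified_cache_t *uc, int class_idx) {
        if (uc->count > 0)
            return uc->slots[--uc->count];
        uc->count = superslab_carve_batch(class_idx, uc->slots, 128);
        return (uc->count > 0) ? uc->slots[--uc->count] : NULL;
    }

    /* Free side: push back into the cache; spill to the TLS SLL only when full. */
    static int unified_cache_push(unified_cache_t *uc, void *p) {
        if (uc->count < 128) { uc->slots[uc->count++] = p; return 1; }
        return 0;   /* caller falls back to the SLL */
    }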
PageFaultTelemetry Changes:
3. Generic bucket architecture (core/box/pagefault_telemetry_box.{c,h}; sketch below)
- PF_BUCKET_{C0-C7, MID, L25, SSM} for domain-specific measurement
- Integration: hak_pool_try_alloc(), l25_alloc_new_run(), shared_pool_allocate_superslab_unlocked()
4. Measurement results (Random Mixed 500K / 256B):
- Tiny C2-C7: 2-33 pages per class, high reuse (64 down to 3.8 touches/page)
- SSM: 512 pages (initialization footprint)
- MID/L25: 0 (unused in this workload)
- Mid/Large VM benchmarks: 80-100K page-faults (13-16x higher than Tiny)
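Illustrative shape of the generic buckets referenced above (bucket names are from this commit; the enum order and the recording helper are assumptions, not the actual pagefault_telemetry_box API):

    #include <stdatomic.h>

    typedef enum {
        PF_BUCKET_C0, PF_BUCKET_C1, PF_BUCKET_C2, PF_BUCKET_C3,
        PF_BUCKET_C4, PF_BUCKET_C5, PF_BUCKET_C6, PF_BUCKET_C7,
        PF_BUCKET_MID, PF_BUCKET_L25, PF_BUCKET_SSM,
        PF_BUCKET_COUNT
    } pf_bucket_t;

    static _Atomic unsigned long g_pf_pages[PF_BUCKET_COUNT];   /* pages first touched */
    static _Atomic unsigned long g_pf_touches[PF_BUCKET_COUNT]; /* total touches (reuse ratio) */

    /* Call sites such as hak_pool_try_alloc() or l25_alloc_new_run() record into
     * their own bucket; touches/pages yields the reuse figures quoted above. */
    static inline void pf_telemetry_record(pf_bucket_t b, unsigned long new_pages,
                                           unsigned long touches) {
        atomic_fetch_add_explicit(&g_pf_pages[b],   new_pages, memory_order_relaxed);
        atomic_fetch_add_explicit(&g_pf_touches[b], touches,   memory_order_relaxed);
    }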
Ring Cache Enhancements:
5. Hot Ring Cache (core/front/tiny_ring_cache.{c,h})
- ENV: HAKMEM_TINY_HOT_RING_ENABLE=1, HAKMEM_TINY_HOT_RING_C{0-7}=size
- Conditional compilation cleanup
Documentation:
6. Analysis reports
- RANDOM_MIXED_BOTTLENECK_ANALYSIS.md: Page-fault breakdown
- RANDOM_MIXED_SUMMARY.md: Phase 23 summary
- RING_CACHE_ACTIVATION_GUIDE.md: Ring cache usage
- CURRENT_TASK.md: Updated with Phase 23 results and Phase 24 plan
Next Steps (Phase 24):
- Target: Mid/VM PageArena/HotSpanBox (page-fault reduction 80-100K → 30-40K)
- Tiny SSM optimization deferred (low ROI, ~6K page-faults already optimal)
- Expected improvement: +30-50% for Mid/Large workloads
Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 02:47:58 +09:00
1010a961fb
Tiny: fix header/stride mismatch and harden refill paths
...
- Root cause: header-based class indexing (HEADER_CLASSIDX=1) wrote a 1-byte
header during allocation, but linear carve/refill and initial slab capacity
still used bare class block sizes. This mismatch could overrun slab usable
space and corrupt freelists, causing reproducible SEGV at ~100k iters.
Changes
- Superslab: compute capacity with effective stride (block_size + header for
classes 0..6; class7 remains headerless) in superslab_init_slab(). Add a
debug-only bound check in superslab_alloc_from_slab() to fail fast if carve
would exceed usable bytes.
- Refill (non-P0 and P0): use header-aware stride for all linear carving and
TLS window bump operations. Ensure alignment/validation in tiny_refill_opt.h
also uses stride, not raw class size.
- Drain: keep existing defense-in-depth for remote sentinel and sanitize nodes
before splicing into freelist (already present).
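A sketch of the single stride definition the fix converges on (constants and helper names are illustrative; class 7 stays headerless as designed):

    #include <stddef.h>

    #define TINY_HEADER_BYTES 1   /* 1-byte class-index header, HEADER_CLASSIDX=1 */

    /* Classes 0..6 carry the header; class 7 (1024B) does not. */
    static inline size_t tiny_effective_stride(int class_idx, size_t block_size) {
        return (class_idx == 7) ? block_size : block_size + TINY_HEADER_BYTES;
    }

    /* Slab capacity must be derived from the stride, not the bare block size;
     * otherwise linear carve/refill can run past the slab's usable bytes and
     * corrupt the freelist (the SEGV described above). */
    static inline size_t superslab_slab_capacity(int class_idx, size_t block_size,
                                                 size_t usable_bytes) {
        return usable_bytes / tiny_effective_stride(class_idx, block_size);
    }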
Notes
- This unifies the memory layout across alloc/linear-carve/refill with a single
stride definition and keeps class7 (1024B) headerless as designed.
- Debug builds add fail-fast checks; release builds remain lean.
Next
- Re-run Tiny benches (256/1024B) in debug to confirm stability, then in release.
  If any crash persists, bisect with HAKMEM_TINY_P0_BATCH_REFILL=0 to isolate the
  P0 batch carve, and continue reducing branch misses as planned.
2025-11-09 18:55:50 +09:00