Phase 24 PageArena/HotSpanBox: Mid/VM page reuse cache (structural limit identified)

Summary:
- Implemented PageArena (Box PA1-PA3) for Mid-Large (8-52KB) / L25 (64KB-2MB)
- Integration: Pool TLS Arena + L25 alloc/refill paths
- Result: Minimal impact (+4.7% ops on Mid-Large MT, ~0% page-fault reduction on both benchmarks)
- Conclusion: Structural limit - existing Arena/Pool/L25 already optimized

Implementation:
1. Box PA1: Hot Page Cache (4KB pages, LIFO stack, 1024 slots) - see the PA1-PA3 layout sketch after this list
   - core/page_arena.c: hot_page_alloc/free with mutex protection
   - TLS cache for 4KB pages

2. Box PA2: Warm Span Cache (64KB-2MB spans, size-bucketed)
   - 64KB/128KB/2MB span caches (256/128/64 slots)
   - Size-class based allocation

3. Box PA3: Cold Path (mmap fallback)
   - page_arena_alloc_pages/aligned with fallback to direct mmap
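
For orientation, a minimal sketch of the PA1-PA3 layout described above. Field and constant
names are illustrative; core/page_arena.c in this commit is the authoritative definition.

    // Sketch of the three boxes (illustrative, not the committed code).
    #include <pthread.h>
    #include <stddef.h>

    typedef struct {                // PA1: Hot Page Cache (4KB pages)
        void**          pages;      // LIFO stack storage; NULL until page_arena_init()
        int             top;        // stack pointer
        int             capacity;   // HAKMEM_PAGE_ARENA_HOT_SIZE, default 1024
        pthread_mutex_t lock;       // hot_page_alloc/free are mutex-protected
    } HotPageCache;

    typedef struct {                // PA2: one size bucket of the Warm Span Cache
        void**          spans;      // cached spans of a single size class
        int             count;      // current occupancy
        int             capacity;   // 256 / 128 / 64 slots per bucket
        size_t          span_size;  // 64KB, 128KB or 2MB
    } WarmSpanBucket;

    typedef struct {
        HotPageCache   hot;         // PA1
        WarmSpanBucket warm[3];     // PA2: 64KB / 128KB / 2MB buckets
        // PA3 (cold path) carries no state: on a cache miss,
        // page_arena_alloc_pages()/page_arena_alloc_aligned() fall back to mmap().
    } PageArena;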

Integration Points:
4. Pool TLS Arena (core/pool_tls_arena.c)
   - chunk_ensure(): Lazy init + page_arena_alloc_pages() hook (hedged sketch after this list)
   - arena_cleanup_thread(): Return chunks to PageArena if enabled
   - Exponential growth preserved (1MB → 8MB)

5. L25 Pool (core/hakmem_l25_pool.c)
   - l25_alloc_new_run(): Lazy init + page_arena_alloc_aligned() hook
   - refill_freelist(): PageArena allocation for bundles
   - 2MB run carving preserved
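
The pool_tls_arena.c hunks are not reproduced in the diff excerpt below, so here is a hedged
sketch of the chunk_ensure() hook from point 4. page_arena_enabled(), page_arena_init(),
g_page_arena and the mmap fallback mirror the L25 hunks shown further down; the
page_arena_alloc_pages() signature and the arena_chunk_acquire() helper name are assumptions.

    // Hypothetical helper showing the shape of the Phase 24 hook inside
    // chunk_ensure(); not the committed code.
    #include <sys/mman.h>
    #include <stddef.h>
    #include "page_arena.h"   // page_arena_enabled/init/alloc_pages, g_page_arena

    static void* arena_chunk_acquire(size_t chunk_bytes) {
        // Lazy init on first use, as in the L25 hunks below (assumed identical here).
        if (page_arena_enabled() && g_page_arena.hot.pages == NULL) {
            page_arena_init(&g_page_arena);
        }
        // Try PageArena first (the page_arena_alloc_pages() signature is assumed) ...
        void* chunk = page_arena_alloc_pages(&g_page_arena, chunk_bytes);
        if (!chunk) {
            // ... fallback to direct mmap, preserving the existing 1MB -> 8MB
            // exponential growth handled by the caller.
            chunk = mmap(NULL, chunk_bytes, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (chunk == MAP_FAILED) return NULL;
        }
        return chunk;
    }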

ENV Variables (parsing sketch after the list):
- HAKMEM_PAGE_ARENA_ENABLE=1 (default: 0, OFF)
- HAKMEM_PAGE_ARENA_HOT_SIZE=1024 (default: 1024)
- HAKMEM_PAGE_ARENA_WARM_64K=256 (default: 256)
- HAKMEM_PAGE_ARENA_WARM_128K=128 (default: 128)
- HAKMEM_PAGE_ARENA_WARM_2M=64 (default: 64)
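
All knobs are plain integers read once at initialization. A minimal parsing sketch follows;
only the HAKMEM_PAGE_ARENA_* names and defaults come from this commit, while env_int() and
the surrounding function are illustrative.

    // Illustrative env parsing; not the committed code.
    #include <stdlib.h>

    static int env_int(const char* name, int def) {
        const char* v = getenv(name);
        return (v && *v) ? atoi(v) : def;
    }

    static void page_arena_read_env(void) {
        int enable    = env_int("HAKMEM_PAGE_ARENA_ENABLE",    0);    // OFF unless set to 1
        int hot_size  = env_int("HAKMEM_PAGE_ARENA_HOT_SIZE",  1024); // PA1 slots
        int warm_64k  = env_int("HAKMEM_PAGE_ARENA_WARM_64K",  256);  // PA2 64KB bucket
        int warm_128k = env_int("HAKMEM_PAGE_ARENA_WARM_128K", 128);  // PA2 128KB bucket
        int warm_2m   = env_int("HAKMEM_PAGE_ARENA_WARM_2M",   64);   // PA2 2MB bucket
        // The sizes would then be applied to the hot stack and warm buckets.
        (void)enable; (void)hot_size; (void)warm_64k; (void)warm_128k; (void)warm_2m;
    }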

Benchmark Results:
- Mid-Large MT (4T, 40K iter, 2KB):
  - OFF: 84,535 page-faults, 726K ops/s
  - ON:  84,534 page-faults, 760K ops/s (+4.7% ops, -0.001% faults)
- VM Mixed (200K iter):
  - OFF: 102,134 page-faults, 257K ops/s
  - ON:  102,134 page-faults, 255K ops/s (0% change)

Root Cause Analysis:
- Hypothesis: 50-66% page-fault reduction (80-100K → 30-40K)
- Actual: <1% page-fault reduction, minimal performance impact
- Reason: Structural limit - existing Arena/Pool/L25 already highly optimized
  - 1MB chunk sizes with high-density linear carving
  - TLS ring + exponential growth minimize mmap calls
  - PageArena becomes double-buffering layer with no benefit
  - Remaining page-faults from kernel zero-clear + app access patterns
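- Back-of-envelope check (illustrative; reading "40K iter" as per-thread and assuming most allocations touch fresh memory):
  - 4 threads × 40K iterations × 2KB ≈ 312MB ≈ ~80K 4KB pages ≈ ~80K first-touch faults
  - Close to the observed ~84.5K, i.e. the remaining faults are demand-zero first-touch faults that a page-reuse cache cannot remove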

Lessons Learned:
1. Mid/Large allocators already page-optimal via Arena/Pool design
2. Middle-layer caching ineffective when base layer already optimized
3. Page-fault reduction requires app-level access pattern changes
4. Tiny layer (Phase 23) remains best target for frontend optimization

Next Steps:
- Defer PageArena (low ROI, structural limit reached)
- Focus on upper layers (allocation pattern analysis, size distribution)
- Consider app-side access pattern optimization

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
Moe Charm (CI), 2025-11-17 03:22:27 +09:00
commit 7311d32574 (parent 03ba62df4d)
7 changed files with 751 additions and 17 deletions

Diff excerpt (core/hakmem_l25_pool.c):

@@ -51,6 +51,8 @@
 #include "hakmem_internal.h" // For AllocHeader and HAKMEM_MAGIC
 #include "hakmem_syscall.h" // Phase 6.X P0 Fix: Box 3 syscall layer (bypasses LD_PRELOAD)
 #include "box/pagefault_telemetry_box.h" // Box PageFaultTelemetry (PF_BUCKET_L25)
+#include "page_arena.h" // Phase 24: PageArena integration for L25
+#include "page_arena.h" // Phase 24: PageArena integration
 #include <stdlib.h>
 #include <string.h>
 #include <stdio.h>
@@ -335,7 +337,18 @@ static inline int l25_alloc_new_run(int class_idx) {
     int blocks = l25_blocks_per_run(class_idx);
     size_t stride = l25_stride_bytes(class_idx);
     size_t run_bytes = (size_t)blocks * stride;
-    void* raw = mmap(NULL, run_bytes, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    // Phase 24: Try PageArena first, fallback to mmap
+    if (page_arena_enabled() && g_page_arena.hot.pages == NULL) {
+        page_arena_init(&g_page_arena);
+    }
+    void* raw = page_arena_alloc_aligned(&g_page_arena, run_bytes, L25_PAGE_SIZE);
+    if (!raw) {
+        // PageArena cache miss → fallback to mmap
+        raw = mmap(NULL, run_bytes, PROT_READ | PROT_WRITE,
+                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    }
     if (raw == MAP_FAILED || raw == NULL) return 0;
     L25ActiveRun* ar = &g_l25_active[class_idx];
     ar->base = (char*)raw;
@@ -641,9 +654,14 @@ static int refill_freelist(int class_idx, int shard_idx) {
     int ok_any = 0;
     for (int b = 0; b < bundles; b++) {
         // Allocate bundle via mmap to avoid malloc contention and allow THP policy later
-        void* raw = mmap(NULL, bundle_size, PROT_READ | PROT_WRITE,
-                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+        // Phase 24: Try PageArena first, fallback to mmap
+        void* raw = page_arena_alloc_aligned(&g_page_arena, bundle_size, L25_PAGE_SIZE);
+        if (!raw) {
+            // PageArena cache miss → fallback to mmap
+            raw = mmap(NULL, bundle_size, PROT_READ | PROT_WRITE,
+                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+        }
         if (!raw) {
             if (ok_any) break; else return 0;
         }