Phase 23 Unified Cache + PageFaultTelemetry generalization: Mid/VM page-fault bottleneck identified
Summary:
- Phase 23 Unified Cache: +30% improvement (Random Mixed 256B: 18.18M → 23.68M ops/s)
- PageFaultTelemetry: Extended to generic buckets (C0-C7, MID, L25, SSM)
- Measurement-driven decision: Mid/VM page-faults (80-100K) >> Tiny (6K) → prioritize Mid/VM optimization
Phase 23 Changes:
1. Unified Cache implementation (core/front/tiny_unified_cache.{c,h})
- Direct SuperSlab carve (TLS SLL bypass)
- Self-contained pop-or-refill pattern
- ENV: HAKMEM_TINY_UNIFIED_CACHE=1, HAKMEM_TINY_UNIFIED_C{0-7}=128
2. Fast path pruning (tiny_alloc_fast.inc.h, tiny_free_fast_v2.inc.h)
- Unified ON → direct cache access (skip all intermediate layers)
   - Alloc: unified_cache_pop_or_refill() → on refill failure, go straight to the slow path
   - Free: unified_cache_push() → fall back to SLL only when the cache is full (see the sketch after this list)
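
Illustrative sketch (not the shipped tiny_unified_cache.{c,h}) of the pop-or-refill / push-with-SLL-fallback shape above; the entry-point names and the 128-slot capacity come from this commit, while the struct layout and the superslab_carve_batch()/tiny_sll_push() stubs are assumptions made only to keep the example self-contained:

// Minimal, self-contained sketch; assumed names are marked in the comments.
#include <stdio.h>
#include <stdlib.h>

#define UNIFIED_CAP   128   // per-class capacity, cf. HAKMEM_TINY_UNIFIED_C{0-7}=128
#define TINY_CLASSES    8

typedef struct {
    void* slots[UNIFIED_CAP];   // array-based, tcache-style single layer
    int   count;
} unified_cache_t;

static _Thread_local unified_cache_t g_ucache[TINY_CLASSES];

// Assumed backend stub: carve a batch of blocks directly from a SuperSlab
// (the real path bypasses the TLS SLL and every other frontend layer).
static int superslab_carve_batch(int class_idx, void** out, int want) {
    (void)class_idx;
    for (int i = 0; i < want; i++) {
        out[i] = malloc(64);            // stand-in for a carved block
        if (!out[i]) return i;
    }
    return want;
}

// Assumed overflow stub: hand the block to the existing TLS SLL when the array is full.
static void tiny_sll_push(int class_idx, void* base) { (void)class_idx; free(base); }

static void* unified_cache_pop_or_refill(int class_idx) {
    unified_cache_t* c = &g_ucache[class_idx];
    if (c->count == 0) {
        // Refill straight from the SuperSlab; on failure the caller goes to the slow path.
        c->count = superslab_carve_batch(class_idx, c->slots, UNIFIED_CAP / 2);
        if (c->count <= 0) return NULL;
    }
    return c->slots[--c->count];        // O(1) pop, no pointer chasing
}

static void unified_cache_push(int class_idx, void* base) {
    unified_cache_t* c = &g_ucache[class_idx];
    if (c->count < UNIFIED_CAP) c->slots[c->count++] = base;    // common case: array store
    else                        tiny_sll_push(class_idx, base); // fallback only when full
}

int main(void) {
    void* p = unified_cache_pop_or_refill(2);   // class C2
    unified_cache_push(2, p);
    printf("round-trip ok: %p\n", p);
    return 0;
}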
PageFaultTelemetry Changes:
3. Generic bucket architecture (core/box/pagefault_telemetry_box.{c,h})
- PF_BUCKET_{C0-C7, MID, L25, SSM} for domain-specific measurement
   - Integration: hak_pool_try_alloc(), l25_alloc_new_run(), shared_pool_allocate_superslab_unlocked() (accounting sketch after this list)
4. Measurement results (Random Mixed 500K / 256B):
- Tiny C2-C7: 2-33 pages, high reuse (64-3.8 touches/page)
- SSM: 512 pages (initialization footprint)
- MID/L25: 0 (unused in this workload)
- Mid/Large VM benchmarks: 80-100K page-faults (13-16x higher than Tiny)
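
Sketch of the generic bucket accounting behind items 3-4, assuming minor-fault deltas (getrusage ru_minflt) are attributed per bucket at the listed call sites; the PF_BUCKET_* names are from this commit, while the counter fields, helper names, and the delta mechanism are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>
#include <sys/resource.h>

typedef enum {
    PF_BUCKET_C0, PF_BUCKET_C1, PF_BUCKET_C2, PF_BUCKET_C3,
    PF_BUCKET_C4, PF_BUCKET_C5, PF_BUCKET_C6, PF_BUCKET_C7,
    PF_BUCKET_MID, PF_BUCKET_L25, PF_BUCKET_SSM,
    PF_BUCKET_COUNT
} pf_bucket_t;

static _Atomic unsigned long g_pf_pages[PF_BUCKET_COUNT];   // minor faults attributed to the bucket
static _Atomic unsigned long g_pf_events[PF_BUCKET_COUNT];  // instrumented call-site hits

static long pf_minflt_now(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

// Attribute the minor-fault delta since `before` to one bucket
// (e.g. PF_BUCKET_SSM inside shared_pool_allocate_superslab_unlocked()).
static void pf_bucket_account(pf_bucket_t b, long before) {
    long delta = pf_minflt_now() - before;
    if (delta > 0) atomic_fetch_add(&g_pf_pages[b], (unsigned long)delta);
    atomic_fetch_add(&g_pf_events[b], 1);
}

static void pf_bucket_dump(void) {
    static const char* names[PF_BUCKET_COUNT] = {
        "C0", "C1", "C2", "C3", "C4", "C5", "C6", "C7", "MID", "L25", "SSM" };
    for (int i = 0; i < PF_BUCKET_COUNT; i++)
        printf("PF[%s]: pages=%lu events=%lu\n", names[i],
               atomic_load(&g_pf_pages[i]), atomic_load(&g_pf_events[i]));
}

int main(void) {
    long before = pf_minflt_now();
    char* buf = malloc(1 << 20);
    if (!buf) return 1;
    for (size_t i = 0; i < (1 << 20); i += 4096) buf[i] = 1;  // first touch → minor faults
    pf_bucket_account(PF_BUCKET_SSM, before);                 // e.g. SuperSlab init footprint
    pf_bucket_dump();
    free(buf);
    return 0;
}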
Ring Cache Enhancements:
5. Hot Ring Cache (core/front/tiny_ring_cache.{c,h})
   - ENV: HAKMEM_TINY_HOT_RING_ENABLE=1, HAKMEM_TINY_HOT_RING_C{0-7}=size (parsing sketch below)
- Conditional compilation cleanup
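
A minimal sketch of how the ENV gates above can be parsed at init time; the variable names are the ones listed in items 1 and 5, while the helper and the default values shown are assumptions, not the actual ring/unified cache init code:

#include <stdio.h>
#include <stdlib.h>

// Read an integer ENV knob with a fallback default.
static int env_int(const char* name, int dflt) {
    const char* v = getenv(name);
    return (v && *v) ? atoi(v) : dflt;
}

int main(void) {
    int ring_on    = env_int("HAKMEM_TINY_HOT_RING_ENABLE", 1);  // default ON after Phase 21-1-D
    int unified_on = env_int("HAKMEM_TINY_UNIFIED_CACHE", 0);    // default OFF (Phase 23)
    for (int c = 0; c <= 7; c++) {
        char key[64];
        snprintf(key, sizeof key, "HAKMEM_TINY_HOT_RING_C%d", c);
        int ring_cap = env_int(key, 64);                          // assumed per-class default
        snprintf(key, sizeof key, "HAKMEM_TINY_UNIFIED_C%d", c);
        int unified_cap = env_int(key, 128);                      // capacity from item 1
        printf("C%d: ring=%d(%d slots) unified=%d(%d slots)\n",
               c, ring_on, ring_cap, unified_on, unified_cap);
    }
    return 0;
}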
Documentation:
6. Analysis reports
- RANDOM_MIXED_BOTTLENECK_ANALYSIS.md: Page-fault breakdown
- RANDOM_MIXED_SUMMARY.md: Phase 23 summary
- RING_CACHE_ACTIVATION_GUIDE.md: Ring cache usage
- CURRENT_TASK.md: Updated with Phase 23 results and Phase 24 plan
Next Steps (Phase 24):
- Target: Mid/VM PageArena/HotSpanBox (page-fault reduction 80-100K → 30-40K)
- Tiny SSM optimization deferred (low ROI: ~6K page-faults is already near-optimal)
- Expected improvement: +30-50% for Mid/Large workloads
Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
@@ -29,10 +29,12 @@
#ifdef HAKMEM_TINY_HEADER_CLASSIDX
#include "front/tiny_front_c23.h" // Phase B: Ultra-simple C2/C3 front
#include "front/tiny_ring_cache.h" // Phase 21-1: Ring cache (C2/C3 array-based TLS cache)
#include "front/tiny_unified_cache.h" // Phase 23: Unified frontend cache (tcache-style, all classes)
#include "front/tiny_heap_v2.h" // Phase 13-A: TinyHeapV2 magazine front
#include "front/tiny_ultra_hot.h" // Phase 14: TinyUltraHot C1/C2 ultra-fast path
#endif
#include "box/front_metrics_box.h" // Phase 19-1: Frontend layer metrics
#include "hakmem_tiny_lazy_init.inc.h" // Phase 22: Lazy per-class initialization
#include <stdio.h>

// Phase 7 Task 2: Aggressive inline TLS cache access
@@ -562,6 +564,9 @@ static inline void* tiny_alloc_fast(size_t size) {
    uint64_t call_num = atomic_fetch_add(&alloc_call_count, 1);
#endif

    // Phase 22: Global init (once per process)
    lazy_init_global();

    // 1. Size → class index (inline, fast)
    int class_idx = hak_tiny_size_to_class(size);

@@ -569,6 +574,9 @@ static inline void* tiny_alloc_fast(size_t size) {
        return NULL; // Size > 1KB, not Tiny
    }

    // Phase 22: Lazy per-class init (on first use)
    lazy_init_class(class_idx);

#if !HAKMEM_BUILD_RELEASE
    // Phase 3: Debug checks eliminated in release builds
    // CRITICAL: Bounds check to catch corruption
@@ -606,8 +614,26 @@ static inline void* tiny_alloc_fast(size_t size) {
    }
#endif

    // Phase 23-E: Unified Frontend Cache (self-contained, single-layer tcache)
    // ENV-gated: HAKMEM_TINY_UNIFIED_CACHE=1 (default: OFF)
    // Design: Pop-or-Refill → Direct SuperSlab batch refill (bypasses ALL frontend layers)
    // Target: 20-30% improvement (25-27M ops/s) via cache miss reduction (8-10 → 2-3)
    if (__builtin_expect(unified_cache_enabled(), 0)) {
        void* base = unified_cache_pop_or_refill(class_idx);
        if (base) {
            // Unified cache hit OR refill success - return USER pointer (BASE + 1)
            HAK_RET_ALLOC(class_idx, base);
        }
        // Unified cache is enabled but refill failed (OOM) → go directly to slow path.
        ptr = hak_tiny_alloc_slow(size, class_idx);
        if (ptr) {
            HAK_RET_ALLOC(class_idx, ptr);
        }
        return ptr;
    }

    // Phase 21-1: Ring Cache (C2/C3 only) - Array-based TLS cache
    // ENV-gated: HAKMEM_TINY_HOT_RING_ENABLE=1
    // ENV-gated: HAKMEM_TINY_HOT_RING_ENABLE=1 (default: ON after Phase 21-1-D)
    // Target: +15-20% (54.4M → 62-65M ops/s) by eliminating pointer chasing
    // Design: Ring (L0) → SLL (L1) → SuperSlab (L2) cascade hierarchy
    if (class_idx == 2 || class_idx == 3) {