diff --git a/PHASE15_BUG_ANALYSIS.md b/PHASE15_BUG_ANALYSIS.md
new file mode 100644
index 00000000..52abb980
--- /dev/null
+++ b/PHASE15_BUG_ANALYSIS.md
@@ -0,0 +1,139 @@
+# Phase 15 Bug Analysis - ExternalGuard Crash Investigation
+
+**Date**: 2025-11-15
+**Status**: ROOT CAUSE IDENTIFIED
+
+## Summary
+
+ExternalGuard is being called with a page-aligned pointer (`0x7fd8f8202000`) for which:
+- `hak_super_lookup()` returns NULL (not in the registry)
+- `__libc_free()` rejects it as an "invalid pointer"
+
+## Evidence
+
+### Crash Log
+```
+[ExternalGuard] ptr=0x7fd8f8202000 offset_in_page=0x0 (call #1)
+[ExternalGuard] >>> Use: addr2line -e 0x58b613548275
+[ExternalGuard] hak_super_lookup(ptr) = (nil)
+[ExternalGuard] ptr=0x7fd8f8202000 delegated to __libc_free
+free(): invalid pointer
+```
+
+### Caller Identification
+objdump analysis maps the caller address `0x...8275` to:
+- **Function**: `free()` wrapper (offset 0xb270 in the binary)
+- **Source**: `free(slots)` from bench_random_mixed.c line 85
+
+### Allocation Analysis
+```c
+// bench_random_mixed.c line 34:
+void** slots = (void**)calloc(256, sizeof(void*)); // = 2048 bytes
+```
+
+**calloc(2048) routing** (core/box/hak_wrappers.inc.h:282-285):
+```c
+if (ld_safe_mode_calloc >= 2 || total > TINY_MAX_SIZE) { // TINY_MAX_SIZE = 1023
+    extern void* __libc_calloc(size_t, size_t);
+    return __libc_calloc(nmemb, size); // ← Delegates to libc!
+}
+```
+
+**Expected**: `calloc(2048)` → `__libc_calloc()` (delegated to libc)
+
+## Root Cause Analysis
+
+### Free Path Bug (core/box/hak_wrappers.inc.h)
+
+**Lines 147-166**: Early classification
+```c
+ptr_classification_t c = classify_ptr(ptr);
+if (is_hakmem_owned) {
+    hak_free_at(ptr, ...); // Path A: HAKMEM allocations
+    return;
+}
+```
+
+**Lines 226-228**: **FINAL FALLBACK** - unconditional routing
+```c
+g_hakmem_lock_depth++;
+hak_free_at(ptr, 0, HAK_CALLSITE()); // ← BUG: Routes ALL pointers!
+g_hakmem_lock_depth--;
+```
+
+**The Bug**: Non-HAKMEM pointers that pass all early-exit checks (lines 171-225) get unconditionally routed to `hak_free_at()`, even though `classify_ptr()` returned `PTR_KIND_EXTERNAL` (not HAKMEM-owned).
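+
+A minimal sketch of what that final fallback would need to do instead: re-classify at the fallback, because `is_hakmem_owned` from the earlier block is not in scope there (see Option 1 under Next Steps). This is a sketch only, not code that exists in the tree yet:
+
+```c
+// Sketch only (not in the tree yet): ownership-aware final fallback.
+// Re-classifies because is_hakmem_owned from the earlier block is out of scope here.
+ptr_classification_t c2 = classify_ptr(ptr);
+if (c2.kind != PTR_KIND_EXTERNAL) {
+    g_hakmem_lock_depth++;
+    hak_free_at(ptr, 0, HAK_CALLSITE()); // HAKMEM-owned (or unknown): keep existing routing
+    g_hakmem_lock_depth--;
+} else {
+    extern void __libc_free(void*);
+    __libc_free(ptr); // libc-owned pointer goes back to libc
+}
+```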
+
+### Why __libc_free() Rejects the Pointer
+
+**Two Hypotheses**:
+
+**Hypothesis A**: The pointer is from `__libc_calloc()` (as expected), but something corrupts it before it reaches `__libc_free()`
+- Test: calloc(256, 8) returned a pointer at page offset 0x2a0 (not page-aligned)
+- **Contradiction**: The crash log shows a page-aligned pointer (0x...000)
+- **Conclusion**: The pointer is NOT from `calloc(slots)`
+
+**Hypothesis B**: The pointer is a HAKMEM allocation that `classify_ptr()` failed to recognize
+- Pool TLS allocations CAN be page-aligned (mmap'd chunks)
+- `hak_super_lookup()` returns NULL → not in the Tiny registry
+- **Likely**: This is a Pool TLS allocation (mmap'd, page-aligned, and not tracked by the Tiny registry)
+
+## Verification Tests
+
+### Test 1: Pool TLS Allocation Check
+```bash
+# Check if 2KB allocations use Pool TLS
+./test/pool_tls_allocation_test 2048
+```
+
+### Test 2: classify_ptr() Behavior
+```c
+void* ptr = calloc(256, sizeof(void*)); // 2048 bytes
+ptr_classification_t c = classify_ptr(ptr);
+printf("kind=%d (POOL_TLS=%d, EXTERNAL=%d)\n",
+       c.kind, PTR_KIND_POOL_TLS, PTR_KIND_EXTERNAL);
+```
+
+(A self-contained version of Test 2 is sketched in the appendix at the end of this document.)
+
+## Next Steps
+
+### Option 1: Fix free() Wrapper Logic (Recommended)
+Change line 227 to check HAKMEM ownership first:
+```c
+// Before (BUG):
+hak_free_at(ptr, 0, HAK_CALLSITE()); // Routes ALL pointers
+
+// After (FIX):
+if (is_hakmem_owned) {
+    hak_free_at(ptr, 0, HAK_CALLSITE());
+} else {
+    extern void __libc_free(void*);
+    __libc_free(ptr); // Proper fallback for libc allocations
+}
+```
+
+**Problem**: `is_hakmem_owned` is declared inside the early-classification block (lines 149-159) and is out of scope at line 227.
+
+**Solution**: Hoist `is_hakmem_owned` to function scope, or re-classify the pointer at line 226.
+
+### Option 2: Fix classify_ptr() to Recognize Pool TLS
+If the pointer is actually Pool TLS but misclassified:
+- Add a Pool TLS registry lookup to `classify_ptr()`
+- Ensure Pool allocations are properly registered
+
+### Option 3: Defer Phase 15 (Current)
+Revert to Phase 14-C until the free() wrapper logic is fixed.
+
+## User's Insight
+
+> "うん? mincore のセグフォはむしろ 違う層から呼ばれているという バグ発見じゃにゃいの?"
+
+**Translation**: "Wait, isn't the mincore SEGV actually detecting a bug - that it's being called from the wrong layer?"
+
+**Interpretation**: ExternalGuard being called here is CORRECT behavior - it is detecting that a HAKMEM pointer (Pool TLS?) is not being recognized by the classification layer!
+
+## Conclusion
+
+**Primary Bug**: The `free()` wrapper unconditionally routes all pointers to `hak_free_at()` at line 227, regardless of HAKMEM ownership.
+
+**Secondary Bug (suspected)**: `classify_ptr()` may fail to recognize Pool TLS allocations, causing them to be misclassified as `PTR_KIND_EXTERNAL`.
+
+**Recommendation**: Apply Option 1 (fix the free() wrapper logic) first, then investigate Pool TLS classification if the issue persists.
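+
+## Appendix: Self-Contained Version of Test 2 (Sketch)
+
+A minimal, compilable version of Test 2, assuming it is built inside the HAKMEM tree; the include path below is an assumption and may need adjusting:
+
+```c
+// Sketch: how does classify_ptr() see a 2048-byte calloc() pointer?
+#include <stdio.h>
+#include <stdlib.h>
+#include "core/box/front_gate_classifier.h" // assumed include path for classify_ptr()
+
+int main(void) {
+    void** slots = (void**)calloc(256, sizeof(void*)); // 2048 bytes, same as the bench
+    ptr_classification_t c = classify_ptr(slots);
+    printf("ptr=%p kind=%d (POOL_TLS=%d, EXTERNAL=%d)\n",
+           (void*)slots, c.kind, PTR_KIND_POOL_TLS, PTR_KIND_EXTERNAL);
+    free(slots); // exercises the same free() wrapper path as the crash
+    return 0;
+}
+```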
diff --git a/core/box/external_guard_box.h b/core/box/external_guard_box.h index cfcfd22f..81b4cd48 100644 --- a/core/box/external_guard_box.h +++ b/core/box/external_guard_box.h @@ -24,6 +24,7 @@ #include #include #include +#include "front_gate_v2.h" // Phase 15: For fg_classification_t types // ENV control: mincore enable/disable static inline int external_guard_mincore_enabled(void) { @@ -87,8 +88,34 @@ static inline int external_guard_try_free(void* ptr) { g_external_guard_stats.total_calls++; if (external_guard_log_enabled()) { - fprintf(stderr, "[ExternalGuard] ptr=%p (call #%lu)\n", - ptr, g_external_guard_stats.total_calls); + // PHASE 15: Track caller address for debugging (ChatGPT advice) + void* caller0 = __builtin_return_address(0); + void* caller1 = __builtin_return_address(1); + fprintf(stderr, "[ExternalGuard] ptr=%p offset_in_page=0x%lx (call #%lu)\n", + ptr, (uintptr_t)ptr & 0xFFF, g_external_guard_stats.total_calls); + fprintf(stderr, "[ExternalGuard] Stack: [0]=%p [1]=%p\n", caller0, caller1); + + // Debug: Read header at ptr-1 + if ((uintptr_t)ptr >= 4096 && ((uintptr_t)ptr & 0xFFF) != 0) { + uint8_t header = *((uint8_t*)ptr - 1); + fprintf(stderr, "[ExternalGuard] header at ptr-1 = 0x%02x (magic=0x%02x class=%d)\n", + header, header & 0xf0, header & 0x0f); + } + + // Debug: Check if this looks like a HAKMEM allocation + extern SuperSlab* hak_super_lookup(void*); + SuperSlab* ss = hak_super_lookup(ptr); + fprintf(stderr, "[ExternalGuard] hak_super_lookup(ptr) = %p\n", (void*)ss); + if (ss) { + fprintf(stderr, "[ExternalGuard] HAKMEM SS FOUND! ptr=%p ss=%p magic=0x%x class=%d\n", + ptr, (void*)ss, ss->magic, ss->slabs ? ss->slabs[0].class_idx : -1); + } + + // Debug: Check FrontGate classification (types defined in front_gate_v2.h) + fg_classification_t fg = fg_classify_domain(ptr); + const char* domain_name[] = {"TINY", "POOL", "MIDCAND", "EXTERNAL"}; + fprintf(stderr, "[ExternalGuard] FrontGate classification: domain=%s class_idx=%d\n", + domain_name[fg.domain], fg.class_idx); } // Safety check: is memory mapped? 
diff --git a/core/box/hak_free_api.inc.h b/core/box/hak_free_api.inc.h index e7d08b82..2f73d24c 100644 --- a/core/box/hak_free_api.inc.h +++ b/core/box/hak_free_api.inc.h @@ -1,4 +1,5 @@ // hak_free_api.inc.h — Box: hak_free_at() implementation +// Phase 15: Box Separation - One-way routing (FG → Domain boxes → ExternalGuard) #ifndef HAK_FREE_API_INC_H #define HAK_FREE_API_INC_H @@ -6,7 +7,8 @@ #include "hakmem_tiny_superslab.h" // For SUPERSLAB_MAGIC, SuperSlab #include "../tiny_free_fast_v2.inc.h" // Phase 7: Header-based ultra-fast free #include "../ptr_trace.h" // Debug: pointer trace immediate dump on libc fallback -#include "front_gate_classifier.h" // Box FG: Centralized pointer classification +#include "front_gate_v2.h" // Phase 15: Box FG V2 - 1-byte header classification +#include "external_guard_box.h" // Phase 15: Box ExternalGuard - mincore (ENV controlled) #ifdef HAKMEM_POOL_TLS_PHASE1 #include "../pool_tls.h" @@ -119,26 +121,22 @@ void hak_free_at(void* ptr, size_t size, hak_callsite_t site) { return; } - // ========== Box FG: Single Point of Classification ========== - // Classify pointer once using Front Gate (safe header probe + Registry fallback) - // This eliminates all scattered ptr-1 reads and centralizes classification logic - ptr_classification_t classification = classify_ptr(ptr); + // ========== Phase 15: Box FG V2 Classification ========== + // One-way routing: FG → Domain boxes → ExternalGuard + // Box FG V2: Ultra-fast 1-byte header classification (no mincore, no registry) + fg_classification_t fg = fg_classify_domain(ptr); + hak_free_route_log(fg_domain_name(fg.domain), ptr); - // Route based on classification result - switch (classification.kind) { - case PTR_KIND_TINY_HEADER: { - // C0-C6: Has 1-byte header, class_idx already determined by Front Gate - // Fast path: Use class_idx directly without SuperSlab lookup - hak_free_route_log("tiny_header", ptr); + switch (fg.domain) { + case FG_DOMAIN_TINY: { + // Fast path: Tiny (C0-C7) with 1-byte header (0xa0 | class_idx) #if HAKMEM_TINY_HEADER_CLASSIDX - // Use ultra-fast free path with pre-determined class_idx if (__builtin_expect(hak_tiny_free_fast_v2(ptr), 1)) { #if !HAKMEM_BUILD_RELEASE hak_free_v2_track_fast(); #endif goto done; } - // Fallback to slow path if TLS cache full #if !HAKMEM_BUILD_RELEASE hak_free_v2_track_slow(); #endif @@ -147,45 +145,68 @@ void hak_free_at(void* ptr, size_t size, hak_callsite_t site) { goto done; } - case PTR_KIND_TINY_HEADERLESS: { - // C7: Headerless 1KB blocks, SuperSlab + slab_idx provided by Registry - // Medium path: Use Registry result, no header read needed - hak_free_route_log("tiny_headerless", ptr); - hak_tiny_free(ptr); - goto done; - } - #ifdef HAKMEM_POOL_TLS_PHASE1 - case PTR_KIND_POOL_TLS: { - // Pool TLS: 8KB-52KB allocations with 0xb0 magic - hak_free_route_log("pool_tls", ptr); + case FG_DOMAIN_POOL: { + // Pool TLS: 8KB-52KB allocations with 1-byte header (0xb0 | class_idx) pool_free(ptr); goto done; } #endif - case PTR_KIND_UNKNOWN: - default: { - // Not Tiny or Pool - check 16-byte AllocHeader (Mid/Large/malloc/mmap) - // This is the slow path for large allocations - break; // Fall through to header dispatch below - } + case FG_DOMAIN_MIDCAND: + case FG_DOMAIN_EXTERNAL: + // Fall through to registry lookup + AllocHeader dispatch + break; } // ========== Slow Path: 16-byte AllocHeader Dispatch ========== // Handle Mid/Large allocations (malloc/mmap/Pool/L25) // Note: All Tiny allocations (C0-C7) already handled by Front Gate above - // Mid/L25 
headerless経路 + // ========== Mid/L25/Tiny Registry Lookup (Headerless) ========== + // MIDCAND: Could be Mid/Large/C7, needs registry lookup { extern int hak_pool_mid_lookup(void* ptr, size_t* out_size); extern void hak_pool_free_fast(void* ptr, uintptr_t site_id); - size_t mid_sz = 0; if (hak_pool_mid_lookup(ptr, &mid_sz)) { hak_free_route_log("mid_hit", ptr); hak_pool_free_fast(ptr, (uintptr_t)site); goto done; } + size_t mid_sz = 0; + if (hak_pool_mid_lookup(ptr, &mid_sz)) { + hak_free_route_log("mid_hit", ptr); + hak_pool_free_fast(ptr, (uintptr_t)site); + goto done; + } } { extern int hak_l25_lookup(void* ptr, size_t* out_size); extern void hak_l25_pool_free_fast(void* ptr, uintptr_t site_id); - size_t l25_sz = 0; if (hak_l25_lookup(ptr, &l25_sz)) { hak_free_route_log("l25_hit", ptr); hkm_ace_stat_large_free(); hak_l25_pool_free_fast(ptr, (uintptr_t)site); goto done; } + size_t l25_sz = 0; + if (hak_l25_lookup(ptr, &l25_sz)) { + hak_free_route_log("l25_hit", ptr); + hkm_ace_stat_large_free(); + hak_l25_pool_free_fast(ptr, (uintptr_t)site); + goto done; + } + } + // PHASE 15: C7 (1KB headerless) registry lookup + // Box FG V2 cannot classify C7 (no header), so use registry + { + SuperSlab* ss = hak_super_lookup(ptr); + if (ss && ss->magic == SUPERSLAB_MAGIC) { + hak_free_route_log("tiny_c7_registry", ptr); + hak_tiny_free(ptr); + goto done; + } + } + + // ========== Box ExternalGuard: Last Resort ========== + // PHASE 15: Delegate to ExternalGuard (mincore + libc fallback) + // Expected: Called 0-10 times in bench (if >100 → box leak!) + { + if (external_guard_try_free(ptr)) { + goto done; + } + // ExternalGuard failed (unmapped) → skip free (leak) + hak_free_route_log("external_guard_skip", ptr); + goto done; } // Raw header dispatch(mmap/malloc/BigCacheなど) diff --git a/core/front/tiny_ultra_hot.h b/core/front/tiny_ultra_hot.h new file mode 100644 index 00000000..89b62367 --- /dev/null +++ b/core/front/tiny_ultra_hot.h @@ -0,0 +1,458 @@ +// tiny_ultra_hot.h - Ultra-fast hot path for C2/C3/C4/C5 (16B-128B allocations) +// Purpose: +// - Minimize L1 dcache misses (30x → 3x target) by using 2 cache line TLS +// - Minimize instructions (6.2x → 2x target) by ultra-simple straight-line path +// - Minimize branches (7.1x → 2x target) by predict-likely hints +// +// Design (ChatGPT consultation Phase 14 + Phase 14-B): +// - Phase 14: C2/C3 (16B/32B) - Coverage: 1.71% +// - Phase 14-B: +C4/C5 (64B/128B) - Coverage: 11.14% (6.5x improvement!) +// - TLS structure: 2 cache lines (128B) for 4 magazines with adaptive slot counts +// - Path: 2-3 instructions per alloc/free (pop/push from magazine) +// - Fallback: If magazine empty/full → existing TinyHeapV2/FastCache path +// +// Cache locality strategy: +// - All state in 1 cache line (64B): 2x mag[8] + 2x top + padding +// - No pointer chasing, no indirect access +// - Touches only 1 struct per alloc/free +// +// Instruction reduction strategy: +// - Size→class: 1 compare (size <= 16 ? 
C1 : C2) +// - Magazine access: Direct array index (no loops) +// - Fallback: Return NULL immediately (caller handles) +// +// Branch prediction strategy: +// - __builtin_expect(hit, 1) - expect 95%+ hit rate +// - No nested branches in hot path + +#ifndef HAK_FRONT_TINY_ULTRA_HOT_H +#define HAK_FRONT_TINY_ULTRA_HOT_H + +#include +#include +#include +#include "../box/tls_sll_box.h" // Phase 14-C: Borrowing design - refill from TLS SLL + +// Magazine capacity - adaptive sizing for cache locality (Phase 14-B) +// Design principle: Balance capacity vs cache line usage +// +// Cache line 0 (64B): C2 + C3 magazines +// C2 (16B): 4 slots × 8B ptr = 32B +// C3 (32B): 4 slots × 8B ptr = 32B +// Total: 64B (perfect fit!) +// +// Cache line 1 (64B): C4 + C5 magazines + counters +// C4 (64B): 2 slots × 8B ptr = 16B +// C5 (128B): 1 slot × 8B ptr = 8B +// Counters: c1_top, c2_top, c4_top, c5_top = 4B +// Padding: 36B +// Total: 64B (fits!) +// +// Why fewer slots for larger classes? +// - Maintain cache locality (2 cache lines = 128B total) +// - Block size scales, so magazine memory scales proportionally +// - Free path supplies blocks → even 1-2 slots maintain high hit rate +// +#ifndef ULTRA_HOT_MAG_CAP_C2 +#define ULTRA_HOT_MAG_CAP_C2 4 // C2 (16B) - 4 slots +#endif +#ifndef ULTRA_HOT_MAG_CAP_C3 +#define ULTRA_HOT_MAG_CAP_C3 4 // C3 (32B) - 4 slots +#endif +#ifndef ULTRA_HOT_MAG_CAP_C4 +#define ULTRA_HOT_MAG_CAP_C4 2 // C4 (64B) - 2 slots (NEW Phase 14-B) +#endif +#ifndef ULTRA_HOT_MAG_CAP_C5 +#define ULTRA_HOT_MAG_CAP_C5 1 // C5 (128B) - 1 slot (NEW Phase 14-B) +#endif + +// TLS structure: 2 cache lines (128B) for hot path (Phase 14-B expanded) +// Layout: +// Cache line 0 (64B): C2_mag[4] (32B) + C3_mag[4] (32B) +// Cache line 1 (64B): C4_mag[2] (16B) + C5_mag[1] (8B) + counters (4B) + pad (36B) +// Cache line 2+: Statistics (cold path) +// Total hot state: 128B (2 cache lines) +typedef struct { + // ===== Cache line 0 (64B): C2/C3 magazines ===== + void* c1_mag[ULTRA_HOT_MAG_CAP_C2]; // C2 (16B) - 4 slots, 32B + void* c2_mag[ULTRA_HOT_MAG_CAP_C3]; // C3 (32B) - 4 slots, 32B + + // ===== Cache line 1 (64B): C4/C5 magazines + counters ===== + void* c4_mag[ULTRA_HOT_MAG_CAP_C4]; // C4 (64B) - 2 slots, 16B (NEW Phase 14-B) + void* c5_mag[ULTRA_HOT_MAG_CAP_C5]; // C5 (128B) - 1 slot, 8B (NEW Phase 14-B) + + uint8_t c1_top; // C2 magazine top index + uint8_t c2_top; // C3 magazine top index + uint8_t c4_top; // C4 magazine top index (NEW Phase 14-B) + uint8_t c5_top; // C5 magazine top index (NEW Phase 14-B) + uint8_t pad[36]; // Padding to cache line boundary + + // ===== Statistics (cold path, cache line 2+) ===== + uint64_t c1_alloc_calls; + uint64_t c1_hits; + uint64_t c1_misses; + uint64_t c2_alloc_calls; + uint64_t c2_hits; + uint64_t c2_misses; + uint64_t c4_alloc_calls; // NEW Phase 14-B + uint64_t c4_hits; // NEW Phase 14-B + uint64_t c4_misses; // NEW Phase 14-B + uint64_t c5_alloc_calls; // NEW Phase 14-B + uint64_t c5_hits; // NEW Phase 14-B + uint64_t c5_misses; // NEW Phase 14-B + + uint64_t c1_free_calls; + uint64_t c1_free_hits; + uint64_t c2_free_calls; + uint64_t c2_free_hits; + uint64_t c4_free_calls; // NEW Phase 14-B + uint64_t c4_free_hits; // NEW Phase 14-B + uint64_t c5_free_calls; // NEW Phase 14-B + uint64_t c5_free_hits; // NEW Phase 14-B +} __attribute__((aligned(64))) TinyUltraHot; + +// External TLS variable (defined in hakmem_tiny.c) +extern __thread TinyUltraHot g_ultra_hot; + +// Enable flag (cached) +// ENV: HAKMEM_TINY_ULTRA_HOT +// - 0: Disable (use existing 
TinyHeapV2/FastCache) +// - 1 (default): Enable ultra-fast C1/C2 path +static inline int ultra_hot_enabled(void) { + static int g_enable = -1; + if (__builtin_expect(g_enable == -1, 0)) { + const char* e = getenv("HAKMEM_TINY_ULTRA_HOT"); + if (e && *e) { + g_enable = (*e != '0') ? 1 : 0; + } else { + g_enable = 1; // Default: ON (Phase 14 decision) + } +#if !HAKMEM_BUILD_RELEASE + fprintf(stderr, "[UltraHot-INIT] ultra_hot_enabled() = %d\n", g_enable); + fflush(stderr); +#endif + } + return g_enable; +} + +// Phase 14-C: Max size control (ENV: HAKMEM_TINY_ULTRA_HOT_MAX_SIZE) +// Purpose: Control which size classes UltraHot handles +// Default: 32 (C2/C3 only, safe for Random Mixed) +// Fixed-size: 128 (C2-C5, optimal for fixed-size workloads) +static inline size_t ultra_hot_max_size(void) { + static size_t g_max_size = 0; + if (__builtin_expect(g_max_size == 0, 0)) { + const char* e = getenv("HAKMEM_TINY_ULTRA_HOT_MAX_SIZE"); + if (e && *e) { + g_max_size = (size_t)atoi(e); + } else { + g_max_size = 32; // Default: C2/C3 only (Phase 14 behavior) + } +#if !HAKMEM_BUILD_RELEASE + fprintf(stderr, "[UltraHot-INIT] ultra_hot_max_size() = %zu\n", g_max_size); + fflush(stderr); +#endif + } + return g_max_size; +} + +// Ultra-fast alloc (C2/C3/C4/C5 - Phase 14-B expanded) +// Contract: +// - Input: size (must be 9-128B for C2-C5) +// - Output: BASE pointer (not USER pointer!) or NULL +// - Caller converts BASE → USER via HAK_RET_ALLOC +// +// Hot path (expect 95% hit rate): +// 1. size → class (cascading compares) +// 2. magazine pop (1 load + 1 decrement + 1 store) +// 3. return BASE +// +// Cold path (5% miss rate): +// - return NULL → caller uses existing TinyHeapV2/FastCache +// +// Performance target: +// - L1 dcache: 2 cache lines load (128B) - all 4 mags +// - Instructions: 5-7 instructions total per hit +// - Branches: 2 branches (size check + mag empty check) +static inline void* ultra_hot_alloc(size_t size) { + // Fast path: size → class (cascading compares for branch prediction) + // C2 = 16B (9-16), C3 = 32B (17-32), C4 = 64B (33-64), C5 = 128B (65-128) + if (__builtin_expect(size <= 16, 1)) { + // C2 path (16B) + g_ultra_hot.c1_alloc_calls++; + + if (__builtin_expect(g_ultra_hot.c1_top > 0, 1)) { + // Magazine hit! (5 instructions: load top, dec, load mag, store top, ret) + g_ultra_hot.c1_hits++; + uint8_t idx = --g_ultra_hot.c1_top; + void* base = g_ultra_hot.c1_mag[idx]; + return base; // Return BASE (caller converts to USER) + } else { + // Magazine empty (cold path) + g_ultra_hot.c1_misses++; + return NULL; + } + } else if (__builtin_expect(size <= 32, 1)) { + // C3 path (32B) + g_ultra_hot.c2_alloc_calls++; + + if (__builtin_expect(g_ultra_hot.c2_top > 0, 1)) { + // Magazine hit! + g_ultra_hot.c2_hits++; + uint8_t idx = --g_ultra_hot.c2_top; + void* base = g_ultra_hot.c2_mag[idx]; + return base; + } else { + // Magazine empty + g_ultra_hot.c2_misses++; + return NULL; + } + } else if (__builtin_expect(size <= 64 && ultra_hot_max_size() >= 64, 0)) { + // C4 path (64B) - Phase 14-C: ENV gated + g_ultra_hot.c4_alloc_calls++; + + if (__builtin_expect(g_ultra_hot.c4_top > 0, 1)) { + // Magazine hit! 
+ g_ultra_hot.c4_hits++; + uint8_t idx = --g_ultra_hot.c4_top; + void* base = g_ultra_hot.c4_mag[idx]; + return base; + } else { + // Magazine empty + g_ultra_hot.c4_misses++; + return NULL; + } + } else if (__builtin_expect(size <= 128 && ultra_hot_max_size() >= 128, 0)) { + // C5 path (128B) - Phase 14-C: ENV gated + g_ultra_hot.c5_alloc_calls++; + + if (__builtin_expect(g_ultra_hot.c5_top > 0, 1)) { + // Magazine hit! + g_ultra_hot.c5_hits++; + uint8_t idx = --g_ultra_hot.c5_top; + void* base = g_ultra_hot.c5_mag[idx]; + return base; + } else { + // Magazine empty + g_ultra_hot.c5_misses++; + return NULL; + } + } else { + // Size out of range (C6+ or C0) + return NULL; + } +} + +// Ultra-fast free (C2/C3/C4/C5 - Phase 14-B expanded) +// Contract: +// - Input: base (BASE pointer), class_idx +// - Output: 1 if handled, 0 if magazine full (fallback to existing path) +// +// Hot path (expect 95% hit rate): +// 1. class check (1 compare) +// 2. magazine push (1 load top + 1 store mag + 1 increment + 1 store top) +// 3. return 1 +// +// Cold path (5% miss rate): +// - return 0 → caller uses existing TinyHeapV2/TLS SLL path +static inline int ultra_hot_free_by_class(void* base, int class_idx) { + // Fast path: class → magazine + // NOTE: HAKMEM class numbering: C0=8B, C1=?, C2=16B, C3=32B, C4=64B, C5=128B + if (__builtin_expect(class_idx == 2, 1)) { + // C2 path (16B) + g_ultra_hot.c1_free_calls++; + + if (__builtin_expect(g_ultra_hot.c1_top < ULTRA_HOT_MAG_CAP_C2, 1)) { + // Magazine has room! (5 instructions) + g_ultra_hot.c1_free_hits++; + uint8_t idx = g_ultra_hot.c1_top++; + g_ultra_hot.c1_mag[idx] = base; + return 1; // Success + } else { + // Magazine full → fallback + return 0; + } + } else if (__builtin_expect(class_idx == 3, 1)) { + // C3 path (32B) + g_ultra_hot.c2_free_calls++; + + if (__builtin_expect(g_ultra_hot.c2_top < ULTRA_HOT_MAG_CAP_C3, 1)) { + // Magazine has room! + g_ultra_hot.c2_free_hits++; + uint8_t idx = g_ultra_hot.c2_top++; + g_ultra_hot.c2_mag[idx] = base; + return 1; + } else { + // Magazine full + return 0; + } + } else if (__builtin_expect(class_idx == 4, 0)) { + // C4 path (64B) - NEW Phase 14-B + g_ultra_hot.c4_free_calls++; + + if (__builtin_expect(g_ultra_hot.c4_top < ULTRA_HOT_MAG_CAP_C4, 1)) { + // Magazine has room! + g_ultra_hot.c4_free_hits++; + uint8_t idx = g_ultra_hot.c4_top++; + g_ultra_hot.c4_mag[idx] = base; + return 1; + } else { + // Magazine full + return 0; + } + } else if (__builtin_expect(class_idx == 5, 0)) { + // C5 path (128B) - NEW Phase 14-B + g_ultra_hot.c5_free_calls++; + + if (__builtin_expect(g_ultra_hot.c5_top < ULTRA_HOT_MAG_CAP_C5, 1)) { + // Magazine has room! 
+ g_ultra_hot.c5_free_hits++; + uint8_t idx = g_ultra_hot.c5_top++; + g_ultra_hot.c5_mag[idx] = base; + return 1; + } else { + // Magazine full + return 0; + } + } else { + // Class out of range (not C2-C5) + return 0; + } +} + +// Magazine refill (called from existing front when it has spare blocks) +// Strategy: TinyHeapV2 / FastCache can "donate" blocks to UltraHot +// This is optional - UltraHot can work with just free path supply +static inline void ultra_hot_try_refill_c1(void* base) { + if (g_ultra_hot.c1_top < ULTRA_HOT_MAG_CAP_C2) { + g_ultra_hot.c1_mag[g_ultra_hot.c1_top++] = base; + } +} + +static inline void ultra_hot_try_refill_c2(void* base) { + if (g_ultra_hot.c2_top < ULTRA_HOT_MAG_CAP_C3) { + g_ultra_hot.c2_mag[g_ultra_hot.c2_top++] = base; + } +} + +static inline void ultra_hot_try_refill_c4(void* base) { + if (g_ultra_hot.c4_top < ULTRA_HOT_MAG_CAP_C4) { + g_ultra_hot.c4_mag[g_ultra_hot.c4_top++] = base; + } +} + +static inline void ultra_hot_try_refill_c5(void* base) { + if (g_ultra_hot.c5_top < ULTRA_HOT_MAG_CAP_C5) { + g_ultra_hot.c5_mag[g_ultra_hot.c5_top++] = base; + } +} + +// Print statistics (called at program exit if HAKMEM_TINY_ULTRA_HOT_STATS=1) +// Declaration only (implementation in hakmem_tiny.c for external linkage) +void ultra_hot_print_stats(void); + +// Design notes: +// +// 1. Cache locality: +// - All state fits in 2 cache lines (128B total) +// - First line (64B): Both magazines (C1 + C2) +// - Second line (64B): Counters + stats +// - Expected L1 miss: ~1-2 per alloc/free (vs 30+ currently) +// +// 2. Instruction count: +// - Alloc hit: ~7 instructions (size check + mag pop + return) +// - Free hit: ~7 instructions (size check + mag push + return) +// - Total: ~14 instructions per alloc/free pair (vs ~281M/500K = 562 currently) +// - Reduction: 562 → 14 = 40x improvement +// +// 3. Branch prediction: +// - Size check: __builtin_expect(size <= 16, 1) - predict C1 likely +// - Magazine check: __builtin_expect(top > 0, 1) - predict hit likely +// - Expected branch-miss: ~5% (vs 7.83% currently) +// +// 4. Integration with existing front: +// - UltraHot is L0 (fastest) +// - TinyHeapV2 is L1 (fast) +// - FastCache is L2 (normal) +// - If UltraHot misses → fallback to L1/L2 +// - Free path supplies both UltraHot and TinyHeapV2 +// +// 5. Supply strategy: +// - Free path: Always try UltraHot first, then TinyHeapV2, then TLS SLL +// - Alloc path: Try UltraHot first, then TinyHeapV2, then FastCache +// - No refill from backend (keeps UltraHot ultra-simple) +// +// 6. Expected performance: +// - Current: 9.3M ops/s (Random Mixed 256B) +// - Target: 40-60M ops/s (+330-545%) +// - L1 miss: 2.9M → ~300K (-90%) +// - Instructions: 281M → ~80M (-71%) +// - Branches: 59M → ~15M (-75%) +// +// 7. Why C1/C2 only? +// - C1 (16B) + C2 (32B) cover ~60% of tiny allocations +// - Small magazine (4 slots) fits both in 1-2 cache lines +// - Size check is trivial (size <= 16 / size <= 32) +// - Larger classes (C3+) have different access patterns (less cache-sensitive) +// +// 8. Why not C0 (8B)? +// - TinyHeapV2 showed -5% regression on C0 +// - 8B allocations are rare in real workloads +// - Magazine overhead too high for 8B blocks +// +// 9. Comparison with TinyHeapV2: +// - TinyHeapV2: 16 slots per class, covers C1-C3 +// - UltraHot: 4 slots per class, covers C1-C2 only +// - UltraHot is "ultra-hot subset" of TinyHeapV2 +// - Trade magazine capacity for cache locality +// +// 10. 
ENV flags: +// - HAKMEM_TINY_ULTRA_HOT=0/1 - Enable/disable (default: 1) +// - HAKMEM_TINY_ULTRA_HOT_STATS=0/1 - Print stats at exit (default: 0) + +// ============================================================================= +// Phase 14-C: Borrowing Design - Refill from TLS SLL (正史から借りる) +// ============================================================================= +// Design: UltraHot は「TLS SLL の手前にあるビュー」として動作 +// - Free: 正史(TLS SLL)に戻す(横取りしない) +// - Alloc miss: TLS SLL から借りて magazine を refill +// - 学習層(Superslab/drain)が正しい在庫を追跡できる +// +// Call this after ultra_hot_alloc() miss to refill magazine from TLS SLL +static inline void ultra_hot_try_refill(int class_idx) { + if (!ultra_hot_enabled()) return; + if (class_idx < 2 || class_idx > 5) return; // C2-C5 のみ + + // Refill magazine to full capacity (borrow from TLS SLL = 正史) + if (class_idx == 2) { + // C2 (16B): 4 slots magazine + while (g_ultra_hot.c1_top < ULTRA_HOT_MAG_CAP_C2) { + void* ptr = NULL; + if (!tls_sll_pop(class_idx, &ptr)) break; // TLS SLL から借りる + g_ultra_hot.c1_mag[g_ultra_hot.c1_top++] = ptr; + } + } else if (class_idx == 3) { + // C3 (32B): 4 slots magazine + while (g_ultra_hot.c2_top < ULTRA_HOT_MAG_CAP_C3) { + void* ptr = NULL; + if (!tls_sll_pop(class_idx, &ptr)) break; + g_ultra_hot.c2_mag[g_ultra_hot.c2_top++] = ptr; + } + } else if (class_idx == 4) { + // C4 (64B): 2 slots magazine + while (g_ultra_hot.c4_top < ULTRA_HOT_MAG_CAP_C4) { + void* ptr = NULL; + if (!tls_sll_pop(class_idx, &ptr)) break; + g_ultra_hot.c4_mag[g_ultra_hot.c4_top++] = ptr; + } + } else if (class_idx == 5) { + // C5 (128B): 1 slot magazine + while (g_ultra_hot.c5_top < ULTRA_HOT_MAG_CAP_C5) { + void* ptr = NULL; + if (!tls_sll_pop(class_idx, &ptr)) break; + g_ultra_hot.c5_mag[g_ultra_hot.c5_top++] = ptr; + } + } +} + +#endif // HAK_FRONT_TINY_ULTRA_HOT_H diff --git a/core/hakmem_tiny.c b/core/hakmem_tiny.c index 81521d93..f6d083d2 100644 --- a/core/hakmem_tiny.c +++ b/core/hakmem_tiny.c @@ -1767,6 +1767,10 @@ TinySlab* hak_tiny_owner_slab(void* ptr) { __thread TinyHeapV2Mag g_tiny_heap_v2_mag[TINY_NUM_CLASSES]; __thread TinyHeapV2Stats g_tiny_heap_v2_stats[TINY_NUM_CLASSES]; + // Phase 14: TinyUltraHot - Ultra-fast C1/C2 path (L1 dcache miss reduction) + #include "front/tiny_ultra_hot.h" + __thread TinyUltraHot g_ultra_hot; + // Box 6: Free Fast Path (Layer 2 - 2-3 instructions) #include "tiny_free_fast.inc.h" @@ -2090,3 +2094,62 @@ void tiny_heap_v2_print_stats(void) { fprintf(stderr, "==============================\n\n"); } +// Phase 14 + Phase 14-B: UltraHot statistics (C2-C5) +void ultra_hot_print_stats(void) { + extern __thread TinyUltraHot g_ultra_hot; + + static int g_stats_enable = -1; + if (g_stats_enable == -1) { + const char* e = getenv("HAKMEM_TINY_ULTRA_HOT_STATS"); + g_stats_enable = (e && *e && *e != '0') ? 
1 : 0; + } + if (!g_stats_enable) return; + + fprintf(stderr, "\n=== TinyUltraHot Statistics (Phase 14 + 14-B) ===\n"); + + // C1 (16B) stats - Phase 14 + uint64_t c1_total = g_ultra_hot.c1_alloc_calls; + if (c1_total > 0) { + double c1_hit_rate = 100.0 * g_ultra_hot.c1_hits / c1_total; + fprintf(stderr, "[C2-16B] alloc=%lu hits=%lu (%.1f%%) misses=%lu\n", + c1_total, g_ultra_hot.c1_hits, c1_hit_rate, g_ultra_hot.c1_misses); + fprintf(stderr, " free=%lu free_hits=%lu\n", + g_ultra_hot.c1_free_calls, g_ultra_hot.c1_free_hits); + } + + // C2 (32B) stats - Phase 14 + uint64_t c2_total = g_ultra_hot.c2_alloc_calls; + if (c2_total > 0) { + double c2_hit_rate = 100.0 * g_ultra_hot.c2_hits / c2_total; + fprintf(stderr, "[C3-32B] alloc=%lu hits=%lu (%.1f%%) misses=%lu\n", + c2_total, g_ultra_hot.c2_hits, c2_hit_rate, g_ultra_hot.c2_misses); + fprintf(stderr, " free=%lu free_hits=%lu\n", + g_ultra_hot.c2_free_calls, g_ultra_hot.c2_free_hits); + } + + // C4 (64B) stats - Phase 14-B NEW + uint64_t c4_total = g_ultra_hot.c4_alloc_calls; + if (c4_total > 0) { + double c4_hit_rate = 100.0 * g_ultra_hot.c4_hits / c4_total; + fprintf(stderr, "[C4-64B] alloc=%lu hits=%lu (%.1f%%) misses=%lu (NEW Phase 14-B)\n", + c4_total, g_ultra_hot.c4_hits, c4_hit_rate, g_ultra_hot.c4_misses); + fprintf(stderr, " free=%lu free_hits=%lu\n", + g_ultra_hot.c4_free_calls, g_ultra_hot.c4_free_hits); + } + + // C5 (128B) stats - Phase 14-B NEW + uint64_t c5_total = g_ultra_hot.c5_alloc_calls; + if (c5_total > 0) { + double c5_hit_rate = 100.0 * g_ultra_hot.c5_hits / c5_total; + fprintf(stderr, "[C5-128B] alloc=%lu hits=%lu (%.1f%%) misses=%lu (NEW Phase 14-B)\n", + c5_total, g_ultra_hot.c5_hits, c5_hit_rate, g_ultra_hot.c5_misses); + fprintf(stderr, " free=%lu free_hits=%lu\n", + g_ultra_hot.c5_free_calls, g_ultra_hot.c5_free_hits); + } + + if (c1_total == 0 && c2_total == 0 && c4_total == 0 && c5_total == 0) { + fprintf(stderr, "(No UltraHot allocs recorded)\n"); + } + fprintf(stderr, "==================================================\n\n"); +} + diff --git a/core/hakmem_tiny.d b/core/hakmem_tiny.d index 89bf083c..6d91f4de 100644 --- a/core/hakmem_tiny.d +++ b/core/hakmem_tiny.d @@ -44,7 +44,8 @@ core/hakmem_tiny.o: core/hakmem_tiny.c core/hakmem_tiny.h \ core/tiny_atomic.h core/tiny_alloc_fast.inc.h \ core/tiny_alloc_fast_sfc.inc.h core/hakmem_tiny_fastcache.inc.h \ core/front/tiny_front_c23.h core/front/../hakmem_build_flags.h \ - core/tiny_alloc_fast_inline.h core/front/tiny_heap_v2.h \ + core/front/tiny_heap_v2.h core/front/tiny_ultra_hot.h \ + core/front/../box/tls_sll_box.h core/tiny_alloc_fast_inline.h \ core/tiny_free_fast.inc.h core/hakmem_tiny_alloc.inc \ core/hakmem_tiny_slow.inc core/hakmem_tiny_free.inc \ core/box/free_publish_box.h core/mid_tcache.h \ @@ -152,8 +153,10 @@ core/tiny_alloc_fast_sfc.inc.h: core/hakmem_tiny_fastcache.inc.h: core/front/tiny_front_c23.h: core/front/../hakmem_build_flags.h: -core/tiny_alloc_fast_inline.h: core/front/tiny_heap_v2.h: +core/front/tiny_ultra_hot.h: +core/front/../box/tls_sll_box.h: +core/tiny_alloc_fast_inline.h: core/tiny_free_fast.inc.h: core/hakmem_tiny_alloc.inc: core/hakmem_tiny_slow.inc: diff --git a/core/tiny_alloc_fast.inc.h b/core/tiny_alloc_fast.inc.h index 02155574..d884f1ed 100644 --- a/core/tiny_alloc_fast.inc.h +++ b/core/tiny_alloc_fast.inc.h @@ -29,6 +29,7 @@ #ifdef HAKMEM_TINY_HEADER_CLASSIDX #include "front/tiny_front_c23.h" // Phase B: Ultra-simple C2/C3 front #include "front/tiny_heap_v2.h" // Phase 13-A: TinyHeapV2 magazine front +#include 
"front/tiny_ultra_hot.h" // Phase 14: TinyUltraHot C1/C2 ultra-fast path #endif #include @@ -602,6 +603,28 @@ static inline void* tiny_alloc_fast(size_t size) { } #endif + // Phase 14-C: TinyUltraHot Borrowing Design (正史から借りる設計) + // ENV-gated: HAKMEM_TINY_ULTRA_HOT=1 (default: ON) + // Targets C2-C5 (16B-128B) + // Design: UltraHot は TLS SLL から借りたブロックを magazine に保持 + // - Hit: magazine から返す (L0, fastest) + // - Miss: TLS SLL から refill して再試行 + if (__builtin_expect(ultra_hot_enabled(), 1)) { + void* base = ultra_hot_alloc(size); + if (base) { + HAK_RET_ALLOC(class_idx, base); // Header write + return USER pointer + } + // Miss → TLS SLL から借りて refill(正史から借用) + if (class_idx >= 2 && class_idx <= 5) { + ultra_hot_try_refill(class_idx); + // Retry after refill + base = ultra_hot_alloc(size); + if (base) { + HAK_RET_ALLOC(class_idx, base); + } + } + } + // Phase 13-A: TinyHeapV2 (per-thread magazine, experimental) // ENV-gated: HAKMEM_TINY_HEAP_V2=1 // Targets class 0-3 (8-64B) only, falls back to existing path if NULL diff --git a/core/tiny_free_fast_v2.inc.h b/core/tiny_free_fast_v2.inc.h index 0be5aa92..73c52d1a 100644 --- a/core/tiny_free_fast_v2.inc.h +++ b/core/tiny_free_fast_v2.inc.h @@ -22,6 +22,7 @@ #include "box/tls_sll_drain_box.h" // Box TLS-SLL Drain (Option B) #include "hakmem_tiny_integrity.h" // PRIORITY 1-4: Corruption detection #include "front/tiny_heap_v2.h" // Phase 13-B: TinyHeapV2 magazine supply +#include "front/tiny_ultra_hot.h" // Phase 14: TinyUltraHot C1/C2 ultra-fast path // Phase 7: Header-based ultra-fast free #if HAKMEM_TINY_HEADER_CLASSIDX @@ -131,6 +132,10 @@ static inline int hak_tiny_free_fast_v2(void* ptr) { // Phase E1: ALL classes (C0-C7) have 1-byte header → base = ptr-1 void* base = (char*)ptr - 1; + // Phase 14-C: UltraHot は free 時に横取りしない(Borrowing 設計) + // → 正史(TLS SLL)の在庫を正しく保つ + // → UltraHot refill は alloc 側で TLS SLL から借りる + // Phase 13-B: TinyHeapV2 magazine supply (C0-C3 only) // Two supply modes (controlled by HAKMEM_TINY_HEAP_V2_LEFTOVER_MODE): // Mode 0 (default): L0 gets blocks first ("stealing" design) diff --git a/hakmem.d b/hakmem.d index 8f5681f0..30aef2cd 100644 --- a/hakmem.d +++ b/hakmem.d @@ -31,8 +31,10 @@ hakmem.o: core/hakmem.c core/hakmem.h core/hakmem_build_flags.h \ core/box/../box/../tiny_debug_ring.h core/box/../box/tls_sll_drain_box.h \ core/box/../box/tls_sll_box.h core/box/../box/free_local_box.h \ core/box/../hakmem_tiny_integrity.h core/box/../front/tiny_heap_v2.h \ - core/box/../front/../hakmem_tiny.h core/box/front_gate_classifier.h \ - core/box/hak_wrappers.inc.h + core/box/../front/../hakmem_tiny.h core/box/../front/tiny_ultra_hot.h \ + core/box/../front/../box/tls_sll_box.h core/box/front_gate_v2.h \ + core/box/external_guard_box.h core/box/hak_wrappers.inc.h \ + core/box/front_gate_classifier.h core/hakmem.h: core/hakmem_build_flags.h: core/hakmem_config.h: @@ -105,5 +107,9 @@ core/box/../box/free_local_box.h: core/box/../hakmem_tiny_integrity.h: core/box/../front/tiny_heap_v2.h: core/box/../front/../hakmem_tiny.h: -core/box/front_gate_classifier.h: +core/box/../front/tiny_ultra_hot.h: +core/box/../front/../box/tls_sll_box.h: +core/box/front_gate_v2.h: +core/box/external_guard_box.h: core/box/hak_wrappers.inc.h: +core/box/front_gate_classifier.h: