Phase 14 v1: Pointer-Chase Reduction (tcache) NEUTRAL (+0.20%)

Implementation:
- Intrusive LIFO tcache layer (L1) before UnifiedCache (see the sketch below)
- TLS per-class bins (head pointer + count)
- Intrusive next pointers (via tiny_next_store/load SSOT)
- Cap: 64 blocks per class (default)
- ENV: HAKMEM_TINY_TCACHE=0/1 (default: 0, OFF)

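A minimal sketch of the idea, for orientation only: per-class TLS bins holding an intrusive LIFO with a 64-block cap. The real layer lives in core/box/tiny_tcache_box.h, is gated by HAKMEM_TINY_TCACHE, and uses the tiny_next_store/load SSOT helpers; the names suffixed `_sketch`, the stand-in next-pointer helpers, and the placeholder TINY_NUM_CLASSES value below are illustrative, not the committed code.

```c
#include <stdint.h>
#include <stddef.h>

#ifndef TINY_NUM_CLASSES
#define TINY_NUM_CLASSES 8      /* placeholder; real value comes from hakmem_tiny_config.h */
#endif
#define TINY_TCACHE_CAP 64      /* default cap: 64 blocks per class */

/* Stand-ins for the project's tiny_next_store/load SSOT helpers:
 * the next pointer is stored intrusively inside the free block itself. */
static inline void  tcache_next_store(void* block, void* next) { *(void**)block = next; }
static inline void* tcache_next_load(void* block)              { return *(void**)block; }

typedef struct {
    void*    head;              /* top of intrusive LIFO */
    uint32_t count;             /* bounded by TINY_TCACHE_CAP */
} TinyTcacheBin;

/* One bin per tiny size class, thread-local (TLS). */
static __thread TinyTcacheBin g_tcache_bin[TINY_NUM_CLASSES];

/* Push: O(1), touches only the block itself (no slot-array access). */
static inline int tiny_tcache_try_push_sketch(int class_idx, void* base) {
    TinyTcacheBin* bin = &g_tcache_bin[class_idx];
    if (bin->count >= TINY_TCACHE_CAP) return 0;   /* overflow -> caller falls back to array cache */
    tcache_next_store(base, bin->head);
    bin->head = base;
    bin->count++;
    return 1;
}

/* Pop: O(1), returns NULL when the bin is empty. */
static inline void* tiny_tcache_try_pop_sketch(int class_idx) {
    TinyTcacheBin* bin = &g_tcache_bin[class_idx];
    void* base = bin->head;
    if (base == NULL) return NULL;
    bin->head = tcache_next_load(base);
    bin->count--;
    return base;
}
```
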
A/B Test Results (Mixed 10-run):
- Baseline (TCACHE=0): 51,083,379 ops/s
- Optimized (TCACHE=1): 51,186,838 ops/s
- Mean delta: +0.20% (below +1.0% GO threshold)
- Median delta: +0.59%
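For reference, the throughput figures above give (51,186,838 - 51,083,379) / 51,083,379 ≈ +0.20%, matching the reported mean delta.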

Verdict: NEUTRAL - Freeze as research box (default OFF)

Root Cause (v1 wiring incomplete):
- Free side pushes to tcache via unified_cache_push()
- Alloc hot path (tiny_hot_alloc_fast) doesn't consume tcache
- tcache becomes a "sink" without an alloc-side pop → ROI not measurable

Files:
- Created: core/box/tiny_tcache_{env_box,box}.h, tiny_tcache_env_box.c
- Modified: core/front/tiny_unified_cache.h (integration)
- Modified: core/bench_profile.h (refresh sync)
- Modified: Makefile (build integration)
- Results: docs/analysis/PHASE14_POINTER_CHASE_REDUCTION_1_AB_TEST_RESULTS.md
- v2 Instructions: docs/analysis/PHASE14_POINTER_CHASE_REDUCTION_2_NEXT_INSTRUCTIONS.md

Next: Phase 14 v2 (connect tcache to tiny_front_hot_box alloc/free hot path)
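
A rough, hypothetical sketch of that v2 wiring (not part of this commit): tiny_tcache_try_pop/try_push are the real APIs introduced here, while tiny_hot_alloc_from_unified / tiny_hot_free_to_unified are invented placeholder names for the existing UnifiedCache fallback paths.

```c
/* Placeholder declarations (names invented for this sketch). */
void* tiny_tcache_try_pop(int class_idx);
int   tiny_tcache_try_push(int class_idx, void* base);
void* tiny_hot_alloc_from_unified(int class_idx);
void  tiny_hot_free_to_unified(int class_idx, void* base);

/* Idea: drain the tcache directly in the tiny hot path, so blocks
 * pushed on free are actually consumed on alloc. */
static inline void* tiny_hot_alloc_fast_v2(int class_idx) {
    void* base = tiny_tcache_try_pop(class_idx);        /* L1: intrusive LIFO, no array access */
    if (__builtin_expect(base != NULL, 1)) return base;
    return tiny_hot_alloc_from_unified(class_idx);       /* placeholder: existing UnifiedCache path */
}

static inline void tiny_hot_free_fast_v2(int class_idx, void* base) {
    if (tiny_tcache_try_push(class_idx, base)) return;   /* L1 absorbed the block */
    tiny_hot_free_to_unified(class_idx, base);            /* placeholder: existing fallback path */
}
```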

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

commit f8fb05bc13 (parent 0b306f72f4)
Moe Charm (CI), 2025-12-15 01:28:50 +09:00
11 changed files with 729 additions and 9 deletions

core/front/tiny_unified_cache.h

@@ -30,6 +30,7 @@
#include "../hakmem_tiny_config.h" // For TINY_NUM_CLASSES
#include "../box/ptr_type_box.h" // Phantom pointer types (BASE/USER)
#include "../box/tiny_front_config_box.h" // Phase 8-Step1: Config macros
#include "../box/tiny_tcache_box.h" // Phase 14 v1: Intrusive LIFO tcache
// ============================================================================
// Phase 3 C2 Patch 3: Bounds Check Compile-out
@@ -220,9 +221,16 @@ static inline int unified_cache_push(int class_idx, hak_base_ptr_t base) {
// Fast path: Unified cache disabled → return 0 (not handled)
if (__builtin_expect(!TINY_FRONT_UNIFIED_CACHE_ENABLED, 0)) return 0;
TinyUnifiedCache* cache = &g_unified_cache[class_idx]; // 1 cache miss (TLS)
void* base_raw = HAK_BASE_TO_RAW(base);
// Phase 14 v1: Try tcache first (intrusive LIFO, no array access)
if (tiny_tcache_try_push(class_idx, base_raw)) {
return 1; // SUCCESS (tcache hit, no array access)
}
// Tcache overflow or disabled → fall through to array cache
TinyUnifiedCache* cache = &g_unified_cache[class_idx]; // 1 cache miss (TLS)
// Phase 8-Step3: Lazy init check (conditional in PGO mode)
// PGO builds assume bench_fast_init() prewarmed cache → remove check (-1 branch)
#if !HAKMEM_TINY_FRONT_PGO
@@ -281,7 +289,23 @@ static inline hak_base_ptr_t unified_cache_pop_or_refill(int class_idx) {
}
#endif
// Try pop from cache (fast path)
// Phase 14 v1: Try tcache first (intrusive LIFO, no array access)
void* tcache_base = tiny_tcache_try_pop(class_idx);
if (tcache_base != NULL) {
#if !HAKMEM_BUILD_RELEASE
g_unified_cache_hit[class_idx]++;
#endif
// Performance measurement: count cache hits (ENV enabled only)
if (__builtin_expect(unified_cache_measure_check(), 0)) {
atomic_fetch_add_explicit(&g_unified_cache_hits_global,
1, memory_order_relaxed);
atomic_fetch_add_explicit(&g_unified_cache_hits_by_class[class_idx],
1, memory_order_relaxed);
}
return HAK_BASE_FROM_RAW(tcache_base); // HIT (tcache, no array access)
}
// Tcache miss or disabled → try pop from array cache (fast path)
if (__builtin_expect(cache->head != cache->tail, 1)) {
void* base = cache->slots[cache->head]; // 1 cache miss (array access)
cache->head = (cache->head + 1) & cache->mask;