Phase 5 E5-2: Header Write-Once (NEUTRAL, FROZEN)

Target: tiny_region_id_write_header (3.35% self%)
- Hypothesis: Headers redundant for reused blocks
- Strategy: Write headers ONCE at refill boundary, skip in hot alloc

Implementation:
- ENV gate: HAKMEM_TINY_HEADER_WRITE_ONCE=0/1 (default 0)
- core/box/tiny_header_write_once_env_box.h: ENV gate
- core/box/tiny_header_write_once_stats_box.h: Stats counters
- core/box/tiny_header_box.h: Added tiny_header_finalize_alloc()
- core/front/tiny_unified_cache.c: Prefill at 3 refill sites
- core/box/tiny_front_hot_box.h: Use finalize function
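The env-gate box itself is not reproduced in this commit message. As a hedged sketch, an ENV gate like `tiny_header_write_once_enabled()` is typically built by parsing the variable once and caching the result (the real `tiny_header_write_once_env_box.h` may differ in detail):

```c
#include <stdlib.h>

/* Hypothetical sketch of the ENV gate: parse HAKMEM_TINY_HEADER_WRITE_ONCE
 * once, cache the result, so hot paths only pay a load + compare. */
static inline int tiny_header_write_once_enabled(void) {
    static int cached = -1;                           /* -1 = not yet parsed */
    if (cached < 0) {
        const char* v = getenv("HAKMEM_TINY_HEADER_WRITE_ONCE");
        cached = (v != NULL && v[0] == '1') ? 1 : 0;  /* default 0 (OFF) */
    }
    return cached;
}
```

Note the cached check still costs a branch on every call, which is exactly the overhead discussed in the root-cause analysis below.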

A/B Test Results (Mixed, 10-run, 20M iters):
- Baseline (WRITE_ONCE=0): 44.22M ops/s (mean), 44.53M ops/s (median)
- Optimized (WRITE_ONCE=1): 44.42M ops/s (mean), 44.36M ops/s (median)
- Improvement: +0.45% mean, -0.38% median
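The improvement percentages follow directly from the means/medians quoted above; a trivial helper reproduces them:

```c
/* Percent change from baseline to optimized throughput (M ops/s). */
static double pct_change(double baseline, double optimized) {
    return (optimized - baseline) / baseline * 100.0;
}
```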

Decision: NEUTRAL (within ±1.0% threshold)
- Action: FREEZE as research box (default OFF, do not promote)

Root Cause Analysis:
- Header writes are NOT redundant: the existing code already writes only when needed
- Branch overhead (~4 cycles) cancels the savings (~3-5 cycles)
- perf self% ≠ optimization ROI (3.35% self% target yielded only +0.45% gain)

Key Lessons:
1. Verify assumptions before optimizing (inspect code paths)
2. Hot spot self% measures time IN function, not savings from REMOVING it
3. Branch overhead matters (even "simple" checks add cycles)

Positive Outcome:
- StdDev reduced 50% (0.96M → 0.48M) - more stable performance

Health Check: PASS (all profiles)

Next Candidates:
- free_tiny_fast_cold: 7.14% self%
- unified_cache_push: 3.39% self%
- hakmem_env_snapshot_enabled: 2.97% self%

Deliverables:
- docs/analysis/PHASE5_E5_2_HEADER_REFILL_ONCE_DESIGN.md
- docs/analysis/PHASE5_E5_2_HEADER_REFILL_ONCE_AB_TEST_RESULTS.md
- CURRENT_TASK.md (E5-2 complete, FROZEN)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Author: Moe Charm (CI)
Date: 2025-12-14 06:22:25 +09:00
Parent: 75e20b29cc
Commit: f7b18aaf13
9 changed files with 894 additions and 1 deletion


@@ -182,4 +182,44 @@ static inline int tiny_header_read(const void* base, int class_idx) {
#endif
}

// ============================================================================
// Header Finalize for Allocation (Phase 5 E5-2: Write-Once Optimization)
// ============================================================================
//
// Replaces direct calls to tiny_region_id_write_header() in allocation paths.
// Enables header write-once optimization:
// - C1-C6: Skip header write if already prefilled at refill boundary
// - C0, C7: Always write header (next pointer overwrites it anyway)
//
// Use this in allocation hot paths:
// - tiny_hot_alloc_fast()
// - unified_cache_pop()
// - All other allocation returns
//
// DO NOT use this for:
// - Freelist operations (use tiny_header_write_if_preserved)
// - Refill boundary (use direct write in unified_cache_refill)

// Forward declaration from tiny_region_id.h
void* tiny_region_id_write_header(void* base, int class_idx);
// Forward declaration from tiny_header_write_once_env_box.h
int tiny_header_write_once_enabled(void);

static inline void* tiny_header_finalize_alloc(void* base, int class_idx) {
#if HAKMEM_TINY_HEADER_CLASSIDX
// Write-once optimization: Skip header write for C1-C6 if already prefilled
if (tiny_header_write_once_enabled() && tiny_class_preserves_header(class_idx)) {
// Header already written at refill boundary → skip write, return USER pointer
return (void*)((uint8_t*)base + 1);
}
// Traditional path: C0, C7, or WRITE_ONCE=0
return tiny_region_id_write_header(base, class_idx);
#else
(void)class_idx;
return base;
#endif
}
#endif // TINY_HEADER_BOX_H
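
The branching logic above can be exercised in isolation. The following is a simplified, self-contained model (hypothetical names `finalize_alloc`, `preserves_header`, `write_header`; a 1-byte class header and the class-1-to-6 preserve rule are assumptions, not the real box internals):

```c
#include <stdint.h>

/* Assumed rule: classes 1-6 preserve their prefilled header. */
static int preserves_header(int class_idx) {
    return class_idx >= 1 && class_idx <= 6;
}

/* Write the 1-byte class header and return the USER pointer (base + 1). */
static void* write_header(void* base, int class_idx) {
    *(uint8_t*)base = (uint8_t)class_idx;
    return (uint8_t*)base + 1;
}

/* Model of the finalize decision: skip the write when the write-once gate
 * is on and the class preserves its refill-time header. */
static void* finalize_alloc(void* base, int class_idx, int write_once) {
    if (write_once && preserves_header(class_idx))
        return (uint8_t*)base + 1;   /* header prefilled at refill boundary */
    return write_header(base, class_idx);
}
```

Usage mirrors the hot-path call sites listed in the comment block: the allocator hands `finalize_alloc` the block base and class index, and always receives the USER pointer back, regardless of which branch ran.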