// tiny_alloc_fast_inline.h - Phase 7 Task 2: Aggressive inline TLS cache access
// Purpose: Eliminate function call overhead (5-10 cycles) in hot path
// Design: Macro-based inline expansion of TLS freelist operations
// Performance: Expected +10-15% (22M → 24-25M ops/s)
#ifndef TINY_ALLOC_FAST_INLINE_H
#define TINY_ALLOC_FAST_INLINE_H
#include <stddef.h>
#include <stdint.h> // uint32_t (TLS counters), uintptr_t (sentinel check)
#include "hakmem_build_flags.h"
#include "tiny_remote.h" // for TINY_REMOTE_SENTINEL (defense-in-depth)
#ifndef TINY_NUM_CLASSES
#define TINY_NUM_CLASSES 8
#endif
// External TLS variables (defined in hakmem_tiny.c)
extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES];
extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
// ========== Inline Macro: TLS Freelist Pop ==========
//
// Aggressive inline expansion of tiny_alloc_fast_pop()
// Saves: 5-10 cycles (function call overhead + register spilling)
//
// Assembly comparison (x86-64):
// Function call:
// push %rbx ; Save registers
// mov %edi, %ebx ; class_idx to %ebx
// call tiny_alloc_fast_pop ; Call (5-10 cycles overhead)
// pop %rbx ; Restore registers
// test %rax, %rax ; Check result
//
// Inline macro:
// mov g_tls_sll_head(%rdi), %rax ; Direct access (3-4 cycles)
// test %rax, %rax
// je .miss
// mov (%rax), %rdx
// mov %rdx, g_tls_sll_head(%rdi)
//
// Result: 5-10 fewer instructions, better register allocation
//
// Header-aware next-pointer offset (Phase 7): with the embedded class-index header,
// classes C0-C6 keep the freelist link at base+1; class C7 keeps it at base+0.
// Hoisted into its own macro because #if/#endif cannot appear inside a #define body.
#if HAKMEM_TINY_HEADER_CLASSIDX
#define TINY_SLL_NEXT_OFF(class_idx) (((class_idx) == 7) ? (size_t)0 : (size_t)1)
#else
#define TINY_SLL_NEXT_OFF(class_idx) ((size_t)0)
#endif
#define TINY_ALLOC_FAST_POP_INLINE(class_idx, ptr_out) do { \
void* _head = g_tls_sll_head[(class_idx)]; \
if (__builtin_expect(_head != NULL, 1)) { \
if (__builtin_expect((uintptr_t)_head == TINY_REMOTE_SENTINEL, 0)) { \
/* Break the chain defensively if sentinel leaked into TLS SLL */ \
g_tls_sll_head[(class_idx)] = NULL; \
if (g_tls_sll_count[(class_idx)] > 0) g_tls_sll_count[(class_idx)]--; \
(ptr_out) = NULL; \
} else { \
/* Phase 7: header-aware next (C0-C6: base+1, C7: base) */ \
size_t _off = TINY_SLL_NEXT_OFF(class_idx); \
void* _next = *(void**)((uint8_t*)_head + _off); \
g_tls_sll_head[(class_idx)] = _next; \
if (g_tls_sll_count[(class_idx)] > 0) { \
g_tls_sll_count[(class_idx)]--; \
} \
(ptr_out) = _head; \
if (__builtin_expect((class_idx) == 7, 0)) { \
*(void**)(ptr_out) = NULL; \
} \
} \
} else { \
(ptr_out) = NULL; \
} \
} while(0)
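// ---------- Usage sketch (illustrative, not part of this header's API) ----------
// A minimal example of how a caller might wire the pop macro into an allocation
// fast path. Guarded by #if 0 because tiny_alloc_slow() is a hypothetical name
// standing in for the real refill/SuperSlab path; only the macro usage is real.
#if 0
static inline void* tiny_alloc_fast_example(int class_idx) {
    void* p;
    TINY_ALLOC_FAST_POP_INLINE(class_idx, p);   // TLS SLL hit: ~3-4 cycles, no call
    if (__builtin_expect(p != NULL, 1)) {
        return p;                               // fast path: block reused from TLS cache
    }
    return tiny_alloc_slow(class_idx);          // miss (empty or sentinel): refill path
}
#endif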
// ========== Inline Macro: TLS Freelist Push ==========
//
// Aggressive inline expansion of tiny_alloc_fast_push()
// Saves: 5-10 cycles (function call overhead)
//
// Assembly comparison:
// Function call:
// mov %rdi, %rsi ; ptr to %rsi
// mov %ebx, %edi ; class_idx to %edi
// call tiny_alloc_fast_push ; Call (5-10 cycles)
//
// Inline macro:
// mov g_tls_sll_head(%rdi), %rax ; Direct inline (2-3 cycles)
// mov %rax, (%rsi)
// mov %rsi, g_tls_sll_head(%rdi)
//
#define TINY_ALLOC_FAST_PUSH_INLINE(class_idx, ptr) do { \
/* Phase 7: header-aware next (C0-C6: base+1, C7: base) */ \
size_t _off = TINY_SLL_NEXT_OFF(class_idx); \
*(void**)((uint8_t*)(ptr) + _off) = g_tls_sll_head[(class_idx)]; \
g_tls_sll_head[(class_idx)] = (ptr); \
g_tls_sll_count[(class_idx)]++; \
} while(0)
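// ---------- Usage sketch (illustrative, not part of this header's API) ----------
// How a free fast path might use the push macro once the size class is known.
// Guarded by #if 0: TINY_TLS_SLL_CAP and tiny_free_slow() are hypothetical
// placeholders for the real capacity limit and overflow/flush path.
#if 0
static inline void tiny_free_fast_example(void* ptr, int class_idx) {
    if (__builtin_expect(g_tls_sll_count[class_idx] < TINY_TLS_SLL_CAP, 1)) {
        TINY_ALLOC_FAST_PUSH_INLINE(class_idx, ptr);  // TLS SLL push: ~2-3 cycles
        return;
    }
    tiny_free_slow(ptr, class_idx);                   // cap reached: return blocks to SuperSlab
}
#endif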
// ========== Performance Notes ==========
//
// Benchmark results (expected):
// - Random Mixed 128B: 21M → 23M ops/s (+10%)
// - Random Mixed 256B: 19M → 22M ops/s (+15%)
// - Larson 1T: 2.7M → 3.0M ops/s (+11%)
//
// Key optimizations:
// 1. No function call overhead (save 5-10 cycles)
// 2. Better register allocation (inline knows full context)
// 3. No stack frame setup/teardown
// 4. Compiler can optimize across macro boundaries
//
// Trade-offs:
// 1. Code size: +100-200 bytes (each call site expanded)
// 2. Debug visibility: Macros harder to step through
// 3. Maintenance: Changes must be kept in sync with function version
//
// Recommendation: Use inline macros for CRITICAL hot paths only
// (alloc/free fast path), keep functions for diagnostics/debugging
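// ---------- Dispatch sketch (illustrative, hypothetical flag name) ----------
// One way to act on the recommendation above: a thin wrapper that expands to the
// inline macro in aggressive-inline builds and falls back to the out-of-line
// tiny_alloc_fast_pop() otherwise. HAKMEM_AGGRESSIVE_INLINE is an assumed flag
// name; adjust to the project's actual build flag.
#if 0
#if HAKMEM_AGGRESSIVE_INLINE
#define TINY_FAST_POP(class_idx, ptr_out) TINY_ALLOC_FAST_POP_INLINE((class_idx), (ptr_out))
#else
#define TINY_FAST_POP(class_idx, ptr_out) do { (ptr_out) = tiny_alloc_fast_pop(class_idx); } while (0)
#endif
#endif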
#endif // TINY_ALLOC_FAST_INLINE_H