// hakmem/core/front/malloc_tiny_fast.h (820 lines, 37 KiB)
// 2025-11-17 05:29:08 +09:00
// malloc_tiny_fast.h - Phase 26: Front Gate Unification (Tiny Fast Path)
//
// Goal: Eliminate 3-layer overhead (malloc → hak_alloc_at → wrapper → tiny_alloc_fast)
// Target: +10-15% performance (11.35M → 12.5-13.5M ops/s)
//
// Design (ChatGPT analysis):
// - Replace: malloc → hak_alloc_at (236 lines) → wrapper (diagnostics) → tiny_alloc_fast
// - With: malloc → malloc_tiny_fast (single-layer, direct to Unified Cache)
// - Preserves: Safety checks (lock depth, initializing, LD_SAFE, jemalloc block)
// - Leverages: Phase 23 Unified Cache (tcache-style, 2-3 cache misses)
//
// Performance:
// - Current overhead: malloc(8.97%) + routing + wrapper(3.63%) + tiny(5.37%) = 17.97%
// - BenchFast ceiling: 8-10 instructions (~1-2% overhead)
// - Gap: ~16%
// - Target: Close half the gap (+10-15% improvement)
//
// ENV Variables:
// HAKMEM_FRONT_GATE_UNIFIED=0  # Disable Front Gate Unification (default: 1, ON)
#ifndef HAK_FRONT_MALLOC_TINY_FAST_H
#define HAK_FRONT_MALLOC_TINY_FAST_H
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h> // For pthread_self() in cross-thread check
#include "../hakmem_build_flags.h"
#include "../hakmem_tiny_config.h" // For TINY_NUM_CLASSES
#include "../hakmem_super_registry.h" // For cross-thread owner check
#include "../superslab/superslab_inline.h" // For ss_fast_lookup, slab_index_for (Phase 12)
#include "../box/ss_slab_meta_box.h" // For ss_slab_meta_owner_tid_low_get
#include "../box/free_remote_box.h" // For tiny_free_remote_box
#include "tiny_unified_cache.h" // For unified_cache_pop_or_refill
#include "../tiny_region_id.h" // For tiny_region_id_write_header
#include "../hakmem_tiny.h" // For hak_tiny_size_to_class
#include "../box/tiny_front_hot_box.h" // Phase 4-Step2: Hot Path Box
#include "../box/tiny_front_cold_box.h" // Phase 4-Step2: Cold Path Box
#include "../box/tiny_c7_hotbox.h" // Optional: C7-dedicated hot box
#include "../box/tiny_heap_box.h" // TinyHeap generic Box
#include "../box/tiny_hotheap_v2_box.h" // TinyHotHeap v2 (Phase31 A/B)
#include "../box/smallobject_hotbox_v3_box.h" // SmallObject HotHeap v3 skeleton
#include "../box/smallobject_hotbox_v4_box.h" // SmallObject HotHeap v4 (C7 stub)
#include "../box/smallobject_hotbox_v5_box.h" // SmallObject HotHeap v5 (C6-only route stub, Phase v5-1)
#include "../box/smallobject_core_v6_box.h" // SmallObject Core v6 (Phase V6-HDR-2)
#include "../box/smallobject_v6_env_box.h" // SmallObject v6 ENV control (Phase V6-HDR-2)
#include "../box/smallobject_hotbox_v7_box.h" // SmallObject HotBox v7 stub (Phase v7-1)
#include "../box/smallobject_policy_v7_box.h" // Phase v7-4: Policy Box
#include "../box/smallobject_mid_v35_box.h" // Phase v11a-3: MID v3.5 HotBox
#include "../box/tiny_c7_ultra_box.h" // C7 ULTRA stub (UF-1, delegates to v3)
#include "../box/tiny_c6_ultra_free_box.h" // Phase 4-2: C6 ULTRA-free (free-only, C6-only)
#include "../box/tiny_c5_ultra_free_box.h" // Phase 5-1/5-2: C5 ULTRA-free + alloc integration
#include "../box/tiny_c4_ultra_free_box.h" // Phase 6: C4 ULTRA-free + alloc integration (cap=64)
#include "../box/tiny_ultra_tls_box.h" // Phase TLS-UNIFY-1: Unified ULTRA TLS API
#include "../box/tiny_ultra_classes_box.h" // Phase REFACTOR-1: Named constants for C4-C7
#include "../box/tiny_legacy_fallback_box.h" // Phase REFACTOR-2: Legacy fallback logic unification
#include "../box/tiny_ptr_convert_box.h" // Phase REFACTOR-3: Inline pointer macro centralization
#include "../box/tiny_front_v3_env_box.h" // Tiny front v3 snapshot gate
#include "../box/tiny_heap_env_box.h" // ENV gate for TinyHeap front (A/B)
#include "../box/tiny_route_env_box.h" // Route snapshot (Heap vs Legacy)
#include "../box/tiny_front_stats_box.h" // Front class distribution counters
#include "../box/free_path_stats_box.h" // Phase FREE-LEGACY-BREAKDOWN-1: Free path stats
#include "../box/alloc_gate_stats_box.h" // Phase ALLOC-GATE-OPT-1: Alloc gate stats
#include "../box/free_policy_fast_v2_box.h" // Phase POLICY-FAST-PATH-V2: Policy snapshot bypass
#include "../box/free_tiny_fast_hotcold_env_box.h" // Phase FREE-TINY-FAST-HOTCOLD-OPT-1: ENV control
#include "../box/free_tiny_fast_hotcold_stats_box.h" // Phase FREE-TINY-FAST-HOTCOLD-OPT-1: Stats
// Helper: current thread id (low 32 bits) for owner check
#ifndef TINY_SELF_U32_LOCAL_DEFINED
#define TINY_SELF_U32_LOCAL_DEFINED
static inline uint32_t tiny_self_u32_local(void) {
    return (uint32_t)(uintptr_t)pthread_self();
}
#endif
// ============================================================================
// ENV Control (cached, lazy init)
// ============================================================================
// Enable flag (default: 1, ON; set HAKMEM_FRONT_GATE_UNIFIED=0 to disable)
static inline int front_gate_unified_enabled(void) {
    static int g_enable = -1;
    if (__builtin_expect(g_enable == -1, 0)) {
        const char* e = getenv("HAKMEM_FRONT_GATE_UNIFIED");
        g_enable = (e && *e && *e == '0') ? 0 : 1; // default ON
#if !HAKMEM_BUILD_RELEASE
        if (g_enable) {
            fprintf(stderr, "[FrontGate-INIT] front_gate_unified_enabled() = %d\n", g_enable);
            fflush(stderr);
        }
#endif
    }
    return g_enable;
}
// ============================================================================
// Phase REFACTOR-2: Legacy free helper (unified in tiny_legacy_fallback_box.h)
// ============================================================================
// Legacy free handling is encapsulated in tiny_legacy_fallback_box.h
// (Removed inline implementation to avoid duplication)
// ============================================================================
// Phase 4-Step2: malloc_tiny_fast() - Hot/Cold Path Box (ACTIVE)
// ============================================================================
// Ultra-thin Tiny allocation using Hot/Cold Path Box (Phase 4-Step2)
//
// IMPROVEMENTS over Phase 26-A:
// - Branch reduction: Hot path has only 1 branch (cache empty check)
// - Branch hints: TINY_HOT_LIKELY/UNLIKELY for better CPU prediction
// - Hot/Cold separation: Keeps hot path small (better i-cache locality)
// - Explicit fallback: Clear hot→cold transition
//
// PERFORMANCE:
// - Baseline (Phase 26-A, no PGO): 53.3 M ops/s
// - Hot/Cold Box (no PGO): 57.2 M ops/s (+7.3%)
//
// DESIGN:
// 1. size → class_idx (same as Phase 26-A)
// 2. Hot path: tiny_hot_alloc_fast() - cache hit (1 branch)
// 3. Cold path: tiny_cold_refill_and_alloc() - cache miss (noinline, cold)
//
// Preconditions:
// - Called AFTER malloc() safety checks (lock depth, initializing, LD_SAFE)
// - size <= tiny_get_max_size() (caller verified)
// Returns:
// - USER pointer on success
// - NULL on failure (caller falls back to normal path)
//
// Phase ALLOC-TINY-FAST-DUALHOT-2: Probe window ENV gate (safe from early putenv)
static inline int alloc_dualhot_enabled(void) {
    static int g = -1;
    static int g_probe_left = 64; // Probe window: tolerate early putenv before gate init
    if (__builtin_expect(g == -1, 0)) {
        const char* e = getenv("HAKMEM_TINY_ALLOC_DUALHOT");
        if (e && *e && *e != '0') {
            g = 1; // explicitly enabled: commit
        } else if (e == NULL && g_probe_left > 0) {
            g_probe_left--;
            // Still probing: env not set (yet); report OFF without committing
            return 0;
        } else {
            g = 0; // explicit "0" or probe window exhausted: commit OFF
        }
    }
    return g;
}
// Phase ALLOC-GATE-SSOT-1: malloc_tiny_fast_for_class() - body (class_idx already known)
__attribute__((always_inline))
static inline void* malloc_tiny_fast_for_class(size_t size, int class_idx) {
    // Stats (class_idx already validated by gate)
    tiny_front_alloc_stat_inc(class_idx);
    ALLOC_GATE_STAT_INC_CLASS(class_idx);

    // Phase v11a-5b: C7 ULTRA early-exit (skip policy snapshot for the common case)
    // This is the hottest path - avoids TLS policy overhead
    if (class_idx == 7 && tiny_c7_ultra_enabled_env()) {
        void* ultra_p = tiny_c7_ultra_alloc(size);
        if (TINY_HOT_LIKELY(ultra_p != NULL)) {
            return ultra_p;
        }
        // C7 ULTRA miss → fall through to policy-based routing
    }
    // Phase ALLOC-TINY-FAST-DUALHOT-2: C0-C3 direct path (second hot path)
    // Skip the expensive policy snapshot and route determination for C0-C3.
    // NOTE: Branch only taken if class_idx <= 3 (rare when OFF, frequent when ON)
    if ((unsigned)class_idx <= 3u) {
        if (alloc_dualhot_enabled()) {
            // Direct to LEGACY unified cache (no policy snapshot)
            void* ptr = tiny_hot_alloc_fast(class_idx);
            if (TINY_HOT_LIKELY(ptr != NULL)) {
                return ptr;
            }
            return tiny_cold_refill_and_alloc(class_idx);
        }
    }

    // 2. Policy snapshot (TLS cached, single read)
    const SmallPolicyV7* policy = small_policy_v7_snapshot();
    SmallRouteKind route_kind = policy->route_kind[class_idx];

    // 3. Single switch on route_kind (all ENV checks moved to Policy init)
    switch (route_kind) {
    case SMALL_ROUTE_ULTRA: {
        // Phase TLS-UNIFY-1: Unified ULTRA TLS pop for C4-C6 (C7 handled above)
        void* base = tiny_ultra_tls_pop((uint8_t)class_idx);
        if (TINY_HOT_LIKELY(base != NULL)) {
            if (class_idx == 6)      FREE_PATH_STAT_INC(c6_ultra_alloc_hit);
            else if (class_idx == 5) FREE_PATH_STAT_INC(c5_ultra_alloc_hit);
            else if (class_idx == 4) FREE_PATH_STAT_INC(c4_ultra_alloc_hit);
            return tiny_base_to_user_inline(base);
        }
        // ULTRA miss → fall back to LEGACY
        break;
    }
    case SMALL_ROUTE_MID_V35: {
        // Phase v11a-3: MID v3.5 allocation
        void* v35p = small_mid_v35_alloc(class_idx, size);
        if (TINY_HOT_LIKELY(v35p != NULL)) {
            return v35p;
        }
        // MID v3.5 miss → fall back to LEGACY
        break;
    }
    case SMALL_ROUTE_V7: {
        // Phase v7: SmallObject v7 allocation (research box)
        void* v7p = small_heap_alloc_fast_v7_stub(size, (uint8_t)class_idx);
        if (TINY_HOT_LIKELY(v7p != NULL)) {
            return v7p;
        }
        // V7 miss → fall back to LEGACY
        break;
    }
    case SMALL_ROUTE_MID_V3: {
        // Phase MID-V3: MID v3 allocation (257-768B, C5-C6)
        // Note: MID v3 uses the same segment infrastructure as MID v3.5;
        // for now, delegate to MID v3.5, which handles both.
        void* v3p = small_mid_v35_alloc(class_idx, size);
        if (TINY_HOT_LIKELY(v3p != NULL)) {
            return v3p;
        }
        break;
    }
    case SMALL_ROUTE_LEGACY:
    default:
        break;
    }
// LEGACY fallback: Unified Cache hot/cold path
void* ptr = tiny_hot_alloc_fast(class_idx);
Phase 4-Step2: Add Hot/Cold Path Box (+7.3% performance) Implemented Hot/Cold Path separation using Box pattern for Tiny allocations: Performance Improvement (without PGO): - Baseline (Phase 26-A): 53.3 M ops/s - Hot/Cold Box (Phase 4-Step2): 57.2 M ops/s - Gain: +7.3% (+3.9 M ops/s) Implementation: 1. core/box/tiny_front_hot_box.h - Ultra-fast hot path (1 branch) - Removed range check (caller guarantees valid class_idx) - Inline cache hit path with branch prediction hints - Debug metrics with zero overhead in Release builds 2. core/box/tiny_front_cold_box.h - Slow cold path (noinline, cold) - Refill logic (batch allocation from SuperSlab) - Drain logic (batch free to SuperSlab) - Error reporting and diagnostics 3. core/front/malloc_tiny_fast.h - Updated to use Hot/Cold Boxes - Hot path: tiny_hot_alloc_fast() (1 branch: cache empty check) - Cold path: tiny_cold_refill_and_alloc() (noinline, cold attribute) - Clear separation improves i-cache locality Branch Analysis: - Baseline: 4-5 branches in hot path (range check + cache check + refill logic mixed) - Hot/Cold Box: 1 branch in hot path (cache empty check only) - Reduction: 3-4 branches eliminated from hot path Design Principles (Box Pattern): ✅ Single Responsibility: Hot path = cache hit only, Cold path = refill/errors ✅ Clear Contract: Hot returns NULL on miss, Cold handles miss ✅ Observable: Debug metrics (TINY_HOT_METRICS_*) gated by NDEBUG ✅ Safe: Branch prediction hints (TINY_HOT_LIKELY/UNLIKELY) ✅ Testable: Isolated hot/cold paths, easy A/B testing PGO Status: - Temporarily disabled (build issues with __gcov_merge_time_profile) - Will re-enable PGO in future commit after resolving gcc/lto issues - Current benchmarks are without PGO (fair A/B comparison) Other Changes: - .gitignore: Added *.d files (dependency files, auto-generated) - Makefile: PGO targets temporarily disabled (show informational message) - build_pgo.sh: Temporarily disabled (show "PGO paused" message) Next: Phase 4-Step3 (Front Config Box, 
target +5-8%) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 11:58:37 +09:00
if (TINY_HOT_LIKELY(ptr != NULL)) {
return ptr;
2025-11-17 05:29:08 +09:00
}
return tiny_cold_refill_and_alloc(class_idx);
}
// Wrapper: size → class_idx conversion (SSOT)
__attribute__((always_inline))
static inline void* malloc_tiny_fast(size_t size) {
// Phase ALLOC-GATE-OPT-1: counter instrumentation (1. function entry)
ALLOC_GATE_STAT_INC(total_calls);
// Phase ALLOC-GATE-SSOT-1: Single size→class conversion (SSOT)
ALLOC_GATE_STAT_INC(size_to_class_calls);
int class_idx = hak_tiny_size_to_class(size);
if (__builtin_expect(class_idx < 0 || class_idx >= TINY_NUM_CLASSES, 0)) {
return NULL;
}
// Delegate to *_for_class (stats tracked inside)
return malloc_tiny_fast_for_class(size, class_idx);
}
// ============================================================================
// Phase FREE-TINY-FAST-HOTCOLD-OPT-1: Hot/Cold split helpers
// ============================================================================
// Cold path: Cross-thread free, TinyHeap routes, and legacy fallback
// (noinline,cold to keep hot path small and I-cache clean)
__attribute__((noinline,cold))
static int free_tiny_fast_cold(void* ptr, void* base, int class_idx)
{
FREE_TINY_FAST_HOTCOLD_STAT_INC(cold_hit);
tiny_route_kind_t route = tiny_route_for_class((uint8_t)class_idx);
const int use_tiny_heap = tiny_route_is_heap_kind(route);
const TinyFrontV3Snapshot* front_snap =
__builtin_expect(tiny_front_v3_enabled(), 0) ? tiny_front_v3_snapshot_get() : NULL;
(void)front_snap; // fetched for parity with free_tiny_fast(); not consumed on this path
// TWO-SPEED: SuperSlab registration check is DEBUG-ONLY to keep HOT PATH fast.
// In Release builds, we trust header magic (0xA0) as sufficient validation.
#if !HAKMEM_BUILD_RELEASE
// Superslab registration check (guards against misclassification)
SuperSlab* ss_guard = hak_super_lookup(ptr);
if (__builtin_expect(!(ss_guard && ss_guard->magic == SUPERSLAB_MAGIC), 0)) {
return 0; // not managed by hakmem → fall back to the normal free path
}
#endif // !HAKMEM_BUILD_RELEASE
// Cross-thread free detection (Larson MT crash fix, ENV gated) + TinyHeap free path
{
static __thread int g_larson_fix = -1;
if (__builtin_expect(g_larson_fix == -1, 0)) {
const char* e = getenv("HAKMEM_TINY_LARSON_FIX");
g_larson_fix = (e && *e && *e != '0') ? 1 : 0;
#if !HAKMEM_BUILD_RELEASE
fprintf(stderr, "[LARSON_FIX_INIT] g_larson_fix=%d (env=%s)\n", g_larson_fix, e ? e : "NULL");
fflush(stderr);
#endif
}
if (__builtin_expect(g_larson_fix || use_tiny_heap, 0)) {
// Phase 12 optimization: Use fast mask-based lookup (~5-10 cycles vs 50-100)
SuperSlab* ss = ss_fast_lookup(base);
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (5. super_lookup call)
FREE_PATH_STAT_INC(super_lookup_called);
if (ss) {
int slab_idx = slab_index_for(ss, base);
if (__builtin_expect(slab_idx >= 0 && slab_idx < ss_slabs_capacity(ss), 1)) {
uint32_t self_tid = tiny_self_u32_local();
uint8_t owner_tid_low = ss_slab_meta_owner_tid_low_get(ss, slab_idx);
TinySlabMeta* meta = &ss->slabs[slab_idx];
// LARSON FIX: Use bits 8-15 for comparison (pthread TIDs aligned to 256 bytes)
uint8_t self_tid_cmp = (uint8_t)((self_tid >> 8) & 0xFFu);
#if !HAKMEM_BUILD_RELEASE
static _Atomic uint64_t g_owner_check_count = 0;
uint64_t oc = atomic_fetch_add(&g_owner_check_count, 1);
if (oc < 10) {
fprintf(stderr, "[LARSON_FIX] Owner check: ptr=%p owner_tid_low=0x%02x self_tid_cmp=0x%02x self_tid=0x%08x match=%d\n",
ptr, owner_tid_low, self_tid_cmp, self_tid, (owner_tid_low == self_tid_cmp));
fflush(stderr);
}
#endif
if (__builtin_expect(owner_tid_low != self_tid_cmp, 0)) {
// Cross-thread free → route to remote queue instead of poisoning TLS cache
FREE_TINY_FAST_HOTCOLD_STAT_INC(cold_cross_thread);
#if !HAKMEM_BUILD_RELEASE
static _Atomic uint64_t g_cross_thread_count = 0;
uint64_t ct = atomic_fetch_add(&g_cross_thread_count, 1);
if (ct < 20) {
fprintf(stderr, "[LARSON_FIX] Cross-thread free detected! ptr=%p owner_tid_low=0x%02x self_tid_cmp=0x%02x self_tid=0x%08x\n",
ptr, owner_tid_low, self_tid_cmp, self_tid);
fflush(stderr);
}
#endif
if (tiny_free_remote_box(ss, slab_idx, meta, ptr, self_tid)) {
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (6. cross-thread free)
FREE_PATH_STAT_INC(remote_free);
return 1; // handled via remote queue
}
return 0; // remote push failed; fall back to normal path
}
// Same-thread + TinyHeap route → route-based free
if (__builtin_expect(use_tiny_heap, 0)) {
FREE_TINY_FAST_HOTCOLD_STAT_INC(cold_tinyheap);
switch (route) {
case TINY_ROUTE_SMALL_HEAP_V7: {
// Phase v7-1: C6-only v7 stub (MID v3 fallback)
if (small_heap_free_fast_v7_stub(ptr, (uint8_t)class_idx)) {
return 1;
}
break; // fallthrough to legacy
}
case TINY_ROUTE_SMALL_HEAP_V6: {
// Phase V6-HDR-2: Headerless free (ENV gated)
if (small_v6_headerless_route_enabled((uint8_t)class_idx)) {
SmallHeapCtxV6* ctx_v6 = small_heap_ctx_v6();
if (small_v6_headerless_free(ctx_v6, ptr, (uint8_t)class_idx)) {
return 1; // Handled by v6
}
// v6 returned false -> fallback to legacy
}
break; // fallthrough to legacy
}
// Phase v10: v3/v4/v5 removed - routes now handled as LEGACY
case TINY_ROUTE_HOTHEAP_V2:
tiny_hotheap_v2_free((uint8_t)class_idx, base, meta);
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (v2 is counted as tiny_heap_v1)
FREE_PATH_STAT_INC(tiny_heap_v1_fast);
return 1;
case TINY_ROUTE_HEAP: {
tiny_heap_ctx_t* ctx = tiny_heap_ctx_for_thread();
if (class_idx == 7) {
tiny_c7_free_fast_with_meta(ss, slab_idx, base);
} else {
tiny_heap_free_class_fast_with_meta(ctx, class_idx, ss, slab_idx, base);
}
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (9. TinyHeap v1 route)
FREE_PATH_STAT_INC(tiny_heap_v1_fast);
return 1;
}
default:
break;
}
}
}
}
if (use_tiny_heap) {
// fallback: lookup failed but TinyHeap front is ON → use generic TinyHeap free
if (route == TINY_ROUTE_HOTHEAP_V2) {
tiny_hotheap_v2_record_free_fallback((uint8_t)class_idx);
}
// Phase v10: v3/v4 removed - no special fallback
tiny_heap_free_class_fast(tiny_heap_ctx_for_thread(), class_idx, ptr);
return 1;
}
}
}
// Debug: Log free operations (first 5000, all classes)
#if !HAKMEM_BUILD_RELEASE
{
extern _Atomic uint64_t g_debug_op_count;
extern __thread TinyTLSSLL g_tls_sll[];
uint64_t op = atomic_fetch_add(&g_debug_op_count, 1);
// Note: Shares g_debug_op_count with alloc logging, so bump the window.
if (op < 5000) {
fprintf(stderr, "[OP#%04lu FREE] cls=%d ptr=%p base=%p from=free_tiny_fast_cold tls_count_before=%u\n",
(unsigned long)op, class_idx, ptr, base,
g_tls_sll[class_idx].count);
fflush(stderr);
}
}
#endif
// Phase REFACTOR-2: Legacy fallback (use unified helper)
FREE_TINY_FAST_HOTCOLD_STAT_INC(cold_legacy_fallback);
tiny_legacy_fallback_free_base(base, class_idx);
return 1;
}
// Hot path: Fast-path validation + ULTRA/MID/V7 routes
// (always_inline to minimize overhead on critical path)
__attribute__((always_inline))
static inline int free_tiny_fast_hot(void* ptr) {
if (__builtin_expect(!ptr, 0)) {
FREE_TINY_FAST_HOTCOLD_STAT_INC(ret0_null_ptr);
return 0;
}
#if HAKMEM_TINY_HEADER_CLASSIDX
// 1. Page-boundary guard:
//    If ptr sits at a page start (offset == 0), ptr-1 may lie on a different
//    page or in unmapped memory. In that case, skip the header read and fall
//    back to the normal free path.
uintptr_t off = (uintptr_t)ptr & 0xFFFu;
if (__builtin_expect(off == 0, 0)) {
FREE_TINY_FAST_HOTCOLD_STAT_INC(ret0_page_boundary);
return 0;
}
// 2. Fast header magic validation (required)
//    In Release builds tiny_region_id_read_header() skips the magic check,
//    so validate the Tiny-specific header magic (0xA0) here ourselves.
uint8_t* header_ptr = (uint8_t*)ptr - 1;
uint8_t header = *header_ptr;
uint8_t magic = header & 0xF0u;
if (__builtin_expect(magic != HEADER_MAGIC, 0)) {
// Not a Tiny header → Mid/Large/foreign pointer, so take the normal free path
FREE_TINY_FAST_HOTCOLD_STAT_INC(ret0_bad_magic);
return 0;
}
// 3. Extract class_idx (low 4 bits)
int class_idx = (int)(header & HEADER_CLASS_MASK);
if (__builtin_expect(class_idx < 0 || class_idx >= TINY_NUM_CLASSES, 0)) {
FREE_TINY_FAST_HOTCOLD_STAT_INC(ret0_bad_class);
return 0;
}
// 4. Compute BASE and push it to the Unified Cache
void* base = tiny_user_to_base_inline(ptr);
tiny_front_free_stat_inc(class_idx);
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (1. function entry)
FREE_PATH_STAT_INC(total_calls);
// Phase v11b-1: C7 ULTRA early-exit (skip policy snapshot for most common case)
if (class_idx == 7 && tiny_c7_ultra_enabled_env()) {
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_c7_ultra);
tiny_c7_ultra_free(ptr);
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_hit);
return 1;
}
// Phase FREE-TINY-FAST-DUALHOT-1: C0-C3 direct path (48% of calls)
// Skips the expensive policy snapshot and route determination; goes directly to the legacy fallback.
// Safety: Check Larson mode (cross-thread free handling requires full validation path)
{
static __thread int g_larson_fix = -1;
if (__builtin_expect(g_larson_fix == -1, 0)) {
const char* e = getenv("HAKMEM_TINY_LARSON_FIX");
g_larson_fix = (e && *e && *e != '0') ? 1 : 0;
}
if (__builtin_expect(class_idx <= 3 && !g_larson_fix, 1)) {
// C0-C3 + Larson mode OFF → Direct to legacy (no policy snapshot overhead)
tiny_legacy_fallback_free_base(base, class_idx);
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_hit);
return 1;
}
}
// Phase POLICY-FAST-PATH-V2: Skip policy snapshot for known-legacy classes
if (free_policy_fast_v2_can_skip((uint8_t)class_idx)) {
FREE_PATH_STAT_INC(policy_fast_v2_skip);
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_policy_fast_skip);
goto cold_path; // Delegate to cold path for legacy handling
}
// Phase v11b-1: Policy-based single switch (replaces serial ULTRA checks)
const SmallPolicyV7* policy_free = small_policy_v7_snapshot();
SmallRouteKind route_kind_free = policy_free->route_kind[class_idx];
switch (route_kind_free) {
case SMALL_ROUTE_ULTRA: {
// Phase TLS-UNIFY-1: Unified ULTRA TLS push for C4-C6 (C7 handled above)
if (class_idx >= 4 && class_idx <= 6) {
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_ultra_tls);
tiny_ultra_tls_push((uint8_t)class_idx, base);
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_hit);
return 1;
}
// ULTRA for other classes → fallback to cold path
break;
}
case SMALL_ROUTE_MID_V35: {
// Phase v11a-3: MID v3.5 free
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_mid_v35);
small_mid_v35_free(ptr, class_idx);
FREE_PATH_STAT_INC(smallheap_v7_fast);
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_hit);
return 1;
}
case SMALL_ROUTE_V7: {
// Phase v7: SmallObject v7 free (research box)
if (small_heap_free_fast_v7_stub(ptr, (uint8_t)class_idx)) {
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_v7);
FREE_PATH_STAT_INC(smallheap_v7_fast);
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_hit);
return 1;
}
// V7 miss → fallback to cold path
break;
}
case SMALL_ROUTE_MID_V3: {
// Phase MID-V3: delegate to MID v3.5
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_mid_v35);
small_mid_v35_free(ptr, class_idx);
FREE_PATH_STAT_INC(smallheap_v7_fast);
FREE_TINY_FAST_HOTCOLD_STAT_INC(hot_hit);
return 1;
}
case SMALL_ROUTE_LEGACY:
default:
break;
}
cold_path:
// Delegate to cold path for cross-thread, TinyHeap, and legacy handling
return free_tiny_fast_cold(ptr, base, class_idx);
#else
// No header mode - fall back to normal free
return 0;
#endif
}
// ============================================================================
// Phase 26-B: free_tiny_fast() - Ultra-thin Tiny deallocation
// ============================================================================
// Single-layer Tiny deallocation (bypasses hak_free_at + wrapper + diagnostics)
// Preconditions:
// - ptr is from malloc_tiny_fast() (has valid header)
// - Front Gate Unified is enabled
// Returns:
// - 1 on success (pushed to Unified Cache)
// - 0 on failure (caller falls back to normal free path)
__attribute__((always_inline))
static inline int free_tiny_fast(void* ptr) {
if (__builtin_expect(!ptr, 0)) return 0;
#if HAKMEM_TINY_HEADER_CLASSIDX
// 1. Page-boundary guard:
//    If ptr sits at a page start (offset == 0), ptr-1 may lie on a different
//    page or in unmapped memory. In that case, skip the header read and fall
//    back to the normal free path.
uintptr_t off = (uintptr_t)ptr & 0xFFFu;
if (__builtin_expect(off == 0, 0)) {
return 0;
}
// 2. Fast header magic validation (required)
//    In Release builds tiny_region_id_read_header() skips the magic check,
//    so validate the Tiny-specific header magic (0xA0) here ourselves.
uint8_t* header_ptr = (uint8_t*)ptr - 1;
uint8_t header = *header_ptr;
uint8_t magic = header & 0xF0u;
if (__builtin_expect(magic != HEADER_MAGIC, 0)) {
// Not a Tiny header → Mid/Large/foreign pointer, so take the normal free path
return 0;
}
// 3. Extract class_idx (low 4 bits)
int class_idx = (int)(header & HEADER_CLASS_MASK);
if (__builtin_expect(class_idx < 0 || class_idx >= TINY_NUM_CLASSES, 0)) {
return 0;
}
// 4. Compute BASE and push it to the Unified Cache
void* base = tiny_user_to_base_inline(ptr);
tiny_front_free_stat_inc(class_idx);
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (1. function entry)
FREE_PATH_STAT_INC(total_calls);
// Phase v11b-1: C7 ULTRA early-exit (skip policy snapshot for most common case)
if (class_idx == 7 && tiny_c7_ultra_enabled_env()) {
tiny_c7_ultra_free(ptr);
return 1;
}
// Phase POLICY-FAST-PATH-V2: Skip policy snapshot for known-legacy classes
if (free_policy_fast_v2_can_skip((uint8_t)class_idx)) {
FREE_PATH_STAT_INC(policy_fast_v2_skip);
goto legacy_fallback;
}
// Phase v11b-1: Policy-based single switch (replaces serial ULTRA checks)
const SmallPolicyV7* policy_free = small_policy_v7_snapshot();
SmallRouteKind route_kind_free = policy_free->route_kind[class_idx];
switch (route_kind_free) {
case SMALL_ROUTE_ULTRA: {
// Phase TLS-UNIFY-1: Unified ULTRA TLS push for C4-C6 (C7 handled above)
if (class_idx >= 4 && class_idx <= 6) {
tiny_ultra_tls_push((uint8_t)class_idx, base);
return 1;
}
// ULTRA for other classes → fallback to LEGACY
break;
}
case SMALL_ROUTE_MID_V35: {
// Phase v11a-3: MID v3.5 free
small_mid_v35_free(ptr, class_idx);
FREE_PATH_STAT_INC(smallheap_v7_fast);
return 1;
}
case SMALL_ROUTE_V7: {
// Phase v7: SmallObject v7 free (research box)
if (small_heap_free_fast_v7_stub(ptr, (uint8_t)class_idx)) {
FREE_PATH_STAT_INC(smallheap_v7_fast);
return 1;
}
// V7 miss → fallback to LEGACY
break;
}
case SMALL_ROUTE_MID_V3: {
// Phase MID-V3: delegate to MID v3.5
small_mid_v35_free(ptr, class_idx);
FREE_PATH_STAT_INC(smallheap_v7_fast);
return 1;
}
case SMALL_ROUTE_LEGACY:
default:
break;
}
legacy_fallback:
// LEGACY fallback path
tiny_route_kind_t route = tiny_route_for_class((uint8_t)class_idx);
const int use_tiny_heap = tiny_route_is_heap_kind(route);
const TinyFrontV3Snapshot* front_snap =
__builtin_expect(tiny_front_v3_enabled(), 0) ? tiny_front_v3_snapshot_get() : NULL;
// TWO-SPEED: SuperSlab registration check is DEBUG-ONLY to keep HOT PATH fast.
// In Release builds, we trust header magic (0xA0) as sufficient validation.
#if !HAKMEM_BUILD_RELEASE
// 5. Superslab registration check (guards against misclassification)
SuperSlab* ss_guard = hak_super_lookup(ptr);
if (__builtin_expect(!(ss_guard && ss_guard->magic == SUPERSLAB_MAGIC), 0)) {
return 0; // not managed by hakmem → fall back to the normal free path
}
#endif // !HAKMEM_BUILD_RELEASE
// Cross-thread free detection (Larson MT crash fix, ENV gated) + TinyHeap free path
{
static __thread int g_larson_fix = -1;
if (__builtin_expect(g_larson_fix == -1, 0)) {
const char* e = getenv("HAKMEM_TINY_LARSON_FIX");
g_larson_fix = (e && *e && *e != '0') ? 1 : 0;
#if !HAKMEM_BUILD_RELEASE
fprintf(stderr, "[LARSON_FIX_INIT] g_larson_fix=%d (env=%s)\n", g_larson_fix, e ? e : "NULL");
fflush(stderr);
#endif
}
if (__builtin_expect(g_larson_fix || use_tiny_heap, 0)) {
// Phase 12 optimization: Use fast mask-based lookup (~5-10 cycles vs 50-100)
SuperSlab* ss = ss_fast_lookup(base);
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (5. super_lookup call)
FREE_PATH_STAT_INC(super_lookup_called);
if (ss) {
int slab_idx = slab_index_for(ss, base);
if (__builtin_expect(slab_idx >= 0 && slab_idx < ss_slabs_capacity(ss), 1)) {
uint32_t self_tid = tiny_self_u32_local();
uint8_t owner_tid_low = ss_slab_meta_owner_tid_low_get(ss, slab_idx);
TinySlabMeta* meta = &ss->slabs[slab_idx];
// LARSON FIX: Use bits 8-15 for comparison (pthread TIDs aligned to 256 bytes)
uint8_t self_tid_cmp = (uint8_t)((self_tid >> 8) & 0xFFu);
#if !HAKMEM_BUILD_RELEASE
static _Atomic uint64_t g_owner_check_count = 0;
uint64_t oc = atomic_fetch_add(&g_owner_check_count, 1);
if (oc < 10) {
fprintf(stderr, "[LARSON_FIX] Owner check: ptr=%p owner_tid_low=0x%02x self_tid_cmp=0x%02x self_tid=0x%08x match=%d\n",
ptr, owner_tid_low, self_tid_cmp, self_tid, (owner_tid_low == self_tid_cmp));
fflush(stderr);
}
#endif
if (__builtin_expect(owner_tid_low != self_tid_cmp, 0)) {
// Cross-thread free → route to remote queue instead of poisoning TLS cache
#if !HAKMEM_BUILD_RELEASE
static _Atomic uint64_t g_cross_thread_count = 0;
uint64_t ct = atomic_fetch_add(&g_cross_thread_count, 1);
if (ct < 20) {
fprintf(stderr, "[LARSON_FIX] Cross-thread free detected! ptr=%p owner_tid_low=0x%02x self_tid_cmp=0x%02x self_tid=0x%08x\n",
ptr, owner_tid_low, self_tid_cmp, self_tid);
fflush(stderr);
}
#endif
if (tiny_free_remote_box(ss, slab_idx, meta, ptr, self_tid)) {
// Phase FREE-LEGACY-BREAKDOWN-1: counter instrumentation (6. cross-thread free)
FREE_PATH_STAT_INC(remote_free);
return 1; // handled via remote queue
}
return 0; // remote push failed; fall back to normal path
}
// Same-thread + TinyHeap route → route-based free
if (__builtin_expect(use_tiny_heap, 0)) {
switch (route) {
case TINY_ROUTE_SMALL_HEAP_V7: {
// Phase v7-1: C6-only v7 stub (MID v3 fallback)
if (small_heap_free_fast_v7_stub(ptr, (uint8_t)class_idx)) {
return 1;
}
break; // fallthrough to legacy
}
case TINY_ROUTE_SMALL_HEAP_V6: {
// Phase V6-HDR-2: Headerless free (ENV gated)
if (small_v6_headerless_route_enabled((uint8_t)class_idx)) {
SmallHeapCtxV6* ctx_v6 = small_heap_ctx_v6();
if (small_v6_headerless_free(ctx_v6, ptr, (uint8_t)class_idx)) {
return 1; // Handled by v6
}
// v6 returned false -> fallback to legacy
}
break; // fallthrough to legacy
}
// Phase v10: v3/v4/v5 removed - routes now handled as LEGACY
case TINY_ROUTE_HOTHEAP_V2:
tiny_hotheap_v2_free((uint8_t)class_idx, base, meta);
// Phase FREE-LEGACY-BREAKDOWN-1: counter increment (v2 is counted as tiny_heap_v1)
FREE_PATH_STAT_INC(tiny_heap_v1_fast);
return 1;
case TINY_ROUTE_HEAP: {
tiny_heap_ctx_t* ctx = tiny_heap_ctx_for_thread();
if (class_idx == 7) {
tiny_c7_free_fast_with_meta(ss, slab_idx, base);
} else {
tiny_heap_free_class_fast_with_meta(ctx, class_idx, ss, slab_idx, base);
}
// Phase FREE-LEGACY-BREAKDOWN-1: counter increment (9. TinyHeap v1 route)
FREE_PATH_STAT_INC(tiny_heap_v1_fast);
return 1;
}
default:
break;
}
}
}
}
if (use_tiny_heap) {
// Fallback: lookup failed but the TinyHeap front is ON → use the generic TinyHeap free path
if (route == TINY_ROUTE_HOTHEAP_V2) {
tiny_hotheap_v2_record_free_fallback((uint8_t)class_idx);
}
// Phase v10: v3/v4 removed - no special fallback
tiny_heap_free_class_fast(tiny_heap_ctx_for_thread(), class_idx, ptr);
return 1;
}
}
}
// Debug: Log free operations (first 5000, all classes)
#if !HAKMEM_BUILD_RELEASE
{
extern _Atomic uint64_t g_debug_op_count;
extern __thread TinyTLSSLL g_tls_sll[];
uint64_t op = atomic_fetch_add(&g_debug_op_count, 1);
// Note: Shares g_debug_op_count with alloc logging, so bump the window.
if (op < 5000) {
fprintf(stderr, "[OP#%04lu FREE] cls=%d ptr=%p base=%p from=free_tiny_fast tls_count_before=%u\n",
(unsigned long)op, class_idx, ptr, base,
g_tls_sll[class_idx].count);
fflush(stderr);
}
}
#endif
// Phase REFACTOR-2: Legacy fallback (use unified helper)
tiny_legacy_fallback_free_base(base, class_idx);
return 1;
#else
// No header mode - fall back to normal free
return 0;
#endif
}
#endif // HAK_FRONT_MALLOC_TINY_FAST_H