Commit Graph

9 Commits

Author SHA1 Message Date
d378ee11a0 Phase 15: Box BenchMeta separation + ExternalGuard debug + investigation report
- Implement Box BenchMeta pattern in bench_random_mixed.c (BENCH_META_CALLOC/FREE)
- Add enhanced debug logging to external_guard_box.h (caller tracking, FG classification)
- Document investigation in PHASE15_BUG_ANALYSIS.md
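For reference, a minimal sketch of the Box BenchMeta idea, assuming BENCH_META_CALLOC/FREE are thin wrappers that route benchmark bookkeeping through libc so it never touches the allocator under test (the real definitions live in bench_random_mixed.c and may differ):

```c
/* Sketch only: benchmark metadata (slot and size tables) is allocated via
 * libc, while payload allocations go through the allocator under test.
 * BENCH_META_CALLOC/FREE are assumed to be wrappers like these. */
#include <stdlib.h>

#define BENCH_META_CALLOC(n, sz) calloc((n), (sz))   /* never hak_alloc_at */
#define BENCH_META_FREE(p)       free(p)             /* never hak_free_at  */

typedef struct {
    void  **slots;   /* pointers returned by the allocator under test */
    size_t *sizes;   /* requested size per slot */
    size_t  count;
} BenchMeta;

static BenchMeta *bench_meta_create(size_t count) {
    BenchMeta *m = BENCH_META_CALLOC(1, sizeof *m);
    if (!m) return NULL;
    m->slots = BENCH_META_CALLOC(count, sizeof *m->slots);
    m->sizes = BENCH_META_CALLOC(count, sizeof *m->sizes);
    m->count = count;
    return m;
}

static void bench_meta_destroy(BenchMeta *m) {
    if (!m) return;
    BENCH_META_FREE(m->slots);
    BENCH_META_FREE(m->sizes);
    BENCH_META_FREE(m);
}
```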

Issue: Page-aligned MIDCAND pointer not in SuperSlab registry → ExternalGuard → crash
Hypothesis: May be pre-existing SuperSlab bug (not Phase 15-specific)
Next: Test in Phase 14-C to verify
2025-11-15 23:00:21 +09:00
1a2c5dca0d TinyHeapV2: Enable by default with optimized settings
Changes:
- Default: TinyHeapV2 ON (was OFF; now enabled without an ENV flag)
- Default CLASS_MASK: 0xE (C1-C3 only, skip C0 8B due to -5% regression)
- Remove debug fprintf overhead in Release builds (HAKMEM_BUILD_RELEASE guard)

Performance (100K iterations, workset=128, default settings):
- 16B: 43.9M → 47.7M ops/s (+8.7%)
- 32B: 41.9M → 44.8M ops/s (+6.9%)
- 64B: 51.2M → 50.9M ops/s (-0.6%, within noise)

Key fix: Debug fprintf in tiny_heap_v2_enabled() caused 20-30% overhead
- Before: 36-42M ops/s (with debug log)
- After:  44-48M ops/s (debug log disabled in Release)

ENV override:
- HAKMEM_TINY_HEAP_V2=0 to disable
- HAKMEM_TINY_HEAP_V2_CLASS_MASK=0xF to enable all C0-C3
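As an illustration, a hedged sketch of how the default-ON gate and class mask could be read once and cached (the real tiny_heap_v2_enabled() helper and mask handling may be structured differently; the defaults below follow this commit):

```c
/* Sketch: default-ON enable flag plus class mask, resolved on first use.
 * Assumes the documented defaults (enabled, mask 0xE = C1-C3) and the ENV
 * overrides HAKMEM_TINY_HEAP_V2 / HAKMEM_TINY_HEAP_V2_CLASS_MASK. */
#include <stdlib.h>

static inline int tiny_heap_v2_enabled_sketch(void) {
    static int cached = -1;                    /* -1 = not yet resolved */
    if (cached < 0) {
        const char *e = getenv("HAKMEM_TINY_HEAP_V2");
        cached = (e && e[0] == '0') ? 0 : 1;   /* default: ON */
        /* No fprintf here in Release builds (see the key fix above). */
    }
    return cached;
}

static inline unsigned tiny_heap_v2_class_mask_sketch(void) {
    static int cached = -1;
    if (cached < 0) {
        const char *e = getenv("HAKMEM_TINY_HEAP_V2_CLASS_MASK");
        cached = e ? (int)strtoul(e, NULL, 0) : 0xE;   /* default: C1-C3 */
    }
    return (unsigned)cached;
}

/* A class is served by TinyHeapV2 only when both checks pass. */
static inline int tiny_heap_v2_handles_class(int class_idx) {
    return tiny_heap_v2_enabled_sketch()
        && ((tiny_heap_v2_class_mask_sketch() >> class_idx) & 1u);
}
```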

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 16:33:38 +09:00
bb70d422dc Phase 13-B: TinyHeapV2 supply path with dual-mode A/B framework (Stealing vs Leftover)
Summary:
- Implemented free path supply with ENV-gated A/B modes (HAKMEM_TINY_HEAP_V2_LEFTOVER_MODE)
- Mode 0 (Stealing, default): L0 gets freed blocks first → +18% @ 32B
- Mode 1 (Leftover): L1 primary owner, L0 gets leftovers → Box-clean but -5% @ 16B
- Decision: Default to Stealing for performance (ChatGPT analysis: L0 doesn't corrupt learning layer signals)

Performance (100K iterations, workset=128):
- 16B: 43.9M → 45.6M ops/s (+3.9%)
- 32B: 41.9M → 49.6M ops/s (+18.4%) 
- 64B: 51.2M → 51.5M ops/s (+0.6%)
- 100% magazine hit rate (supply from free path working correctly)

Implementation:
- tiny_free_fast_v2.inc.h: Dual-mode supply (lines 134-166)
- tiny_heap_v2.h: Add tiny_heap_v2_leftover_mode() flag + rationale doc
- tiny_alloc_fast.inc.h: Alloc hook with tiny_heap_v2_alloc_by_class()
- CURRENT_TASK.md: Updated Phase 13-B status (complete) with A/B results

ENV flags:
- HAKMEM_TINY_HEAP_V2=1                      # Enable TinyHeapV2
- HAKMEM_TINY_HEAP_V2_LEFTOVER_MODE=0        # Mode 0 (Stealing, default)
- HAKMEM_TINY_HEAP_V2_CLASS_MASK=0xE         # C1-C3 only (skip C0 -5% regression)
- HAKMEM_TINY_HEAP_V2_STATS=1                # Print statistics
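A simplified sketch of how the two supply policies might branch on free, assuming a bounded per-class L0 magazine and the TLS SLL as L1; the real ENV-gated code in tiny_free_fast_v2.inc.h is more involved, and "leftover" is interpreted here as feeding L0 only when the SLL is at capacity:

```c
/* Sketch of the two supply modes. Capacities and structures are stand-ins. */
#include <stdbool.h>

enum { MAG_CAP = 16, SLL_CAP = 64, NUM_CLASSES = 4 };

typedef struct { void *slot[MAG_CAP]; int top; } Mag;   /* L0 magazine */
typedef struct { void *slot[SLL_CAP]; int top; } Sll;   /* L1 (TLS SLL stand-in) */

static Mag g_mag[NUM_CLASSES];
static Sll g_sll[NUM_CLASSES];

static bool mag_push(int c, void *p) {
    if (g_mag[c].top >= MAG_CAP) return false;
    g_mag[c].slot[g_mag[c].top++] = p;
    return true;
}
static bool sll_push(int c, void *p) {
    if (g_sll[c].top >= SLL_CAP) return false;
    g_sll[c].slot[g_sll[c].top++] = p;
    return true;
}

/* leftover_mode: 0 = Stealing (default), 1 = Leftover */
static void supply_on_free(int class_idx, void *base, int leftover_mode) {
    if (leftover_mode == 0) {
        if (mag_push(class_idx, base)) return;   /* L0 steals the block first */
        sll_push(class_idx, base);               /* overflow goes to L1 */
    } else {
        if (sll_push(class_idx, base)) return;   /* L1 owns the block */
        mag_push(class_idx, base);               /* L0 takes the leftovers */
    }
    /* In the real path, a block neither level accepts would go to the slow free. */
}
```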

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 16:28:40 +09:00
d72a700948 Phase 13-B: TinyHeapV2 free path supply hook (magazine population)
Implement magazine supply from the free path to enable the TinyHeapV2 L0 cache

Changes:
1. core/tiny_free_fast_v2.inc.h (Line 24, 134-143):
   - Include tiny_heap_v2.h for magazine API
   - Add supply hook after BASE pointer conversion (Line 134-143)
   - Try to push freed block to TinyHeapV2 magazine (C0-C3 only)
   - Falls back to TLS SLL if magazine full (existing behavior)

2. core/front/tiny_heap_v2.h (Line 24-46):
   - Move TinyHeapV2Mag / TinyHeapV2Stats typedef from hakmem_tiny.c
   - Add extern declarations for TLS variables
   - Define TINY_HEAP_V2_MAG_CAP (16 slots)
   - Enables use from tiny_free_fast_v2.inc.h

3. core/hakmem_tiny.c (Line 1270-1276, 1766-1768):
   - Remove duplicate typedef definitions
   - Move TLS storage declarations after tiny_heap_v2.h include
   - Reason: tiny_heap_v2.h must be included AFTER tiny_alloc_fast.inc.h
   - Forward declarations remain for early reference

Supply Hook Flow:
```
hak_free_at(ptr) → hak_tiny_free_fast_v2(ptr)
  → class_idx = read_header(ptr)
  → base = ptr - 1
  → if (class_idx <= 3 && tiny_heap_v2_enabled())
      → tiny_heap_v2_try_push(class_idx, base)
        → success: return (magazine supplied)
        → full: fall through to TLS SLL
  → tls_sll_push(class_idx, base)  # existing path
```
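The same flow as a hedged C sketch; the three helpers below are stubs standing in for the real functions of the same names, and the 1-byte header in front of the user pointer is an assumption based on the `base = ptr - 1` step above:

```c
/* Sketch of the supply hook: try the TinyHeapV2 magazine for C0-C3,
 * fall through to the existing TLS SLL push when it is full/disabled. */
#include <stdbool.h>
#include <stdint.h>

static bool tiny_heap_v2_enabled(void)            { return true; }                      /* stub */
static bool tiny_heap_v2_try_push(int c, void *b) { (void)c; (void)b; return false; }   /* stub */
static void tls_sll_push(int c, void *b)          { (void)c; (void)b; }                 /* stub */

static void hak_tiny_free_fast_v2_sketch(void *ptr) {
    uint8_t *p = (uint8_t *)ptr;
    int   class_idx = p[-1];        /* assumed: 1-byte class header before ptr */
    void *base      = p - 1;        /* BASE pointer conversion */

    if (class_idx <= 3 && tiny_heap_v2_enabled()) {
        if (tiny_heap_v2_try_push(class_idx, base))
            return;                 /* magazine supplied */
        /* magazine full: fall through */
    }
    tls_sll_push(class_idx, base);  /* existing path */
}
```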

Benefits:
- Magazine gets populated from freed blocks (L0 cache warm-up)
- Next allocation hits magazine (fast L0 path, no backend refill)
- Expected: 70-90% hit rate for fixed-size workloads
- Expected: +200-500% performance for C0-C3 classes

Build & Smoke Test:
- ✅ Build successful
- ✅ bench_fixed_size 256B workset=50: 33M ops/s (stable)
- ✅ bench_fixed_size 16B workset=60: 30M ops/s (stable)
- 🔜 A/B test (hit rate measurement) deferred to next commit

Implementation Status:
- ✅ Phase 13-A: Alloc hook + stats (completed, committed)
- ✅ Phase 13-B: Free path supply (THIS COMMIT)
- 🔜 Phase 13-C: Evaluation & tuning

Notes:
- Supply hook is C0-C3 only (TinyHeapV2 target range)
- Magazine capacity=16 (same as Phase 13-A)
- No performance regression (hook is ENV-gated: HAKMEM_TINY_HEAP_V2=1)

🤝 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 13:39:37 +09:00
0836d62ff4 Phase 13-A Step 1 COMPLETE: TinyHeapV2 alloc hook + stats + supply infrastructure
Phase 13-A Status: ✅ COMPLETE
- Alloc hook working (hak_tiny_alloc via hakmem_tiny_alloc_new.inc)
- Statistics accurate (alloc_calls, mag_hits tracked correctly)
- NO-REFILL L0 cache stable (zero performance overhead)
- A/B tests: C1 +0.76%, C2 +0.42%, C3 -0.26% (all within noise)

Changes:
- Added tiny_heap_v2_try_push() infrastructure for Phase 13-B (free path supply)
- Currently unused but provides clean API for magazine supply from free path
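A hedged sketch of what such a bounded try_push could look like, assuming a fixed-capacity per-class TLS magazine (capacity 16, as in Phase 13-A); the real tiny_heap_v2_try_push() signature and storage may differ:

```c
/* Sketch: push a freed BASE pointer into the per-class L0 magazine,
 * returning false when full so the caller can fall back to the SLL. */
#include <stdbool.h>

#define THV2_MAG_CAP  16
#define THV2_CLASSES   4   /* C0-C3 */

typedef struct {
    void *blocks[THV2_MAG_CAP];
    int   count;
} ThV2MagSketch;

static __thread ThV2MagSketch g_thv2_mag[THV2_CLASSES];

static bool tiny_heap_v2_try_push_sketch(int class_idx, void *base) {
    if (class_idx < 0 || class_idx >= THV2_CLASSES) return false;
    ThV2MagSketch *m = &g_thv2_mag[class_idx];
    if (m->count >= THV2_MAG_CAP) return false;   /* full: caller falls back */
    m->blocks[m->count++] = base;
    return true;
}
```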

Verification:
- Modified bench_fixed_size.c to use hak_alloc_at/hak_free_at (HAKMEM routing)
- Verified HAKMEM routing works: workset=10-127 
- Found separate bug: workset=128 hangs (power-of-2 edge case, not HeapV2 related)

Phase 13-B: Free path supply deferred
- Actual free path: hak_free_at → hak_tiny_free_fast_v2
- Not tiny_free_fast (wrapper-only path)
- Requires hak_tiny_free_fast_v2 integration work

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 02:28:26 +09:00
5cc1f93622 Phase 13-A Step 1: TinyHeapV2 NO-REFILL L0 cache implementation
Implement TinyHeapV2 as a minimal "lucky hit" L0 cache that avoids a
circular dependency with FastCache by eliminating self-refill.

Key Changes:
- New: core/front/tiny_heap_v2.h - NO-REFILL L0 cache implementation
  - tiny_heap_v2_alloc(): Pop from magazine if available, else return NULL
  - tiny_heap_v2_refill_mag(): No-op stub (no backend refill)
  - ENV: HAKMEM_TINY_HEAP_V2=1 to enable
  - ENV: HAKMEM_TINY_HEAP_V2_CLASS_MASK=bitmask (C0-C3 control)
  - ENV: HAKMEM_TINY_HEAP_V2_STATS=1 to print statistics
- Modified: core/hakmem_tiny_alloc_new.inc - Add TinyHeapV2 hook
  - Hook at entry point (after class_idx calculation)
  - Fallback to existing front if TinyHeapV2 returns NULL
- Modified: core/hakmem_tiny_alloc.inc - Add hook for legacy path
- Modified: core/hakmem_tiny.c - Add TLS variables and stats wrapper
  - TinyHeapV2Mag: Per-class magazine (capacity=16)
  - TinyHeapV2Stats: Per-class counters (alloc_calls, mag_hits, etc.)
  - tiny_heap_v2_print_stats(): Statistics output at exit
- New: TINY_HEAP_V2_TASK_SPEC.md - Phase 13 specification

Root Cause Fixed:
- BEFORE: TinyHeapV2 refilled from FastCache → circular dependency
  - TinyHeapV2 intercepted all allocs → FastCache never populated
  - Result: 100% backend OOM, 0% hit rate, 99% slowdown
- AFTER: TinyHeapV2 is passive L0 cache (no refill)
  - Magazine empty → return NULL → existing front handles it
  - Result: 0% overhead, stable baseline performance
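A sketch of the passive AFTER behaviour under assumed storage (per-class magazine, capacity 16): the magazine only drains what the free path supplied, and an empty magazine returns NULL so the existing front handles the allocation.

```c
/* Sketch of the NO-REFILL L0 alloc path described above. */
#include <stddef.h>

#define MAG_CAP_SKETCH 16

typedef struct { void *blocks[MAG_CAP_SKETCH]; int count; } MagSketch;
static __thread MagSketch g_mag_sketch[4];   /* C0-C3 */

static void *tiny_heap_v2_alloc_sketch(int class_idx) {
    MagSketch *m = &g_mag_sketch[class_idx];
    if (m->count == 0)
        return NULL;                 /* empty: existing front takes over */
    return m->blocks[--m->count];    /* lucky hit: serve from L0 */
}

/* Deliberately a no-op: refilling here from FastCache is exactly what
 * created the circular dependency described in BEFORE. */
static void tiny_heap_v2_refill_mag_sketch(int class_idx) { (void)class_idx; }
```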

A/B Test Results (100K iterations, fixed-size bench):
- C1 (8B):  Baseline 9,688 ops/s → HeapV2 ON 9,762 ops/s (+0.76%)
- C2 (16B): Baseline 9,804 ops/s → HeapV2 ON 9,845 ops/s (+0.42%)
- C3 (32B): Baseline 9,840 ops/s → HeapV2 ON 9,814 ops/s (-0.26%)
- All within noise range: NO PERFORMANCE REGRESSION 

Statistics (HeapV2 ON, C1-C3):
- alloc_calls: 200K (hook works correctly)
- mag_hits: 0 (0%) - Magazine empty as expected
- refill_calls: 0 - No refill executed (circular dependency avoided)
- backend_oom: 0 - No backend access

Next Steps (Phase 13-A Step 2):
- Implement magazine supply strategy (from existing front or free path)
- Goal: Populate magazine with "leftover" blocks from existing pipeline

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-15 01:42:57 +09:00
897ce8873f Phase B: Set refill=64 as default (A/B optimized)
A/B testing showed refill=64 provides best balanced performance:
- 128B: +15.5% improvement (8.27M → 9.55M ops/s)
- 256B: +7.2% improvement (7.90M → 8.47M ops/s)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 19:35:56 +09:00
3f738c0d6e Phase B: TinyFrontC23Box - Ultra-simple front path for C2/C3
Implemented dedicated fast path for C2/C3 (128B/256B) to bypass
SFC/SLL/Magazine complexity and directly access FastCache + SuperSlab.

Changes:
- core/front/tiny_front_c23.h: New ultra-simple front path (NEW)
  - Direct FC → SS refill (2 layers vs 5+ in generic path)
  - ENV-gated: HAKMEM_TINY_FRONT_C23_SIMPLE=1
  - Refill target: 64 blocks (optimized via A/B testing)
- core/tiny_alloc_fast.inc.h: Hook at entry point (+11 lines)
  - Early return for C2/C3 when C23 path enabled
  - Safe fallback to generic path on failure

Results (100K iterations, A/B tested refill=16/32/64/128):
- 128B: 8.27M → 9.55M ops/s (+15.5% with refill=64) 
- 256B: 7.90M → 8.61M ops/s (+9.0% with refill=32) 
- 256B: 7.90M → 8.47M ops/s (+7.2% with refill=64) 

Optimal Refill: 64 blocks
- Balanced performance across C2/C3
- 128B best case: +15.5%
- 256B good performance: +7.2%
- Simple single-value default

Architecture:
- Flow: FC pop → (miss) → ss_refill_fc_fill(64) → FC pop retry
- Bypassed layers: SLL, Magazine, SFC, MidTC
- Preserved: Box boundaries, safety checks, fallback paths
- Free path: Unchanged (TLS SLL + drain)
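The alloc-side flow above as a hedged C sketch; fastcache_pop() and ss_refill_fc_fill() are represented by stubs with assumed signatures, not the project's real prototypes:

```c
/* Sketch of the C23 fast path: FC pop, refill 64 blocks from SuperSlab on
 * a miss, retry once, and return NULL so the generic front can fall back. */
#include <stddef.h>

#define C23_REFILL_TARGET 64   /* default chosen by the A/B testing above */

static void *fastcache_pop_stub(int class_idx)            { (void)class_idx; return NULL; }
static int   ss_refill_fc_fill_stub(int class_idx, int n) { (void)class_idx; (void)n; return 0; }

static void *tiny_front_c23_alloc_sketch(int class_idx) {
    void *p = fastcache_pop_stub(class_idx);
    if (p) return p;                                      /* FC hit */

    if (ss_refill_fc_fill_stub(class_idx, C23_REFILL_TARGET) > 0)
        p = fastcache_pop_stub(class_idx);                /* retry after refill */

    return p;   /* NULL → caller falls back to the generic path */
}
```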

Box Theory Compliance:
- Clear Front ← Backend boundary (ss_refill_fc_fill)
- ENV-gated A/B testing (default OFF, opt-in)
- Safe fallback: NULL → generic path handles slow case
- Zero impact when disabled

Performance Gap Analysis:
- Current: 8-9M ops/s
- After Phase B: 9-10M ops/s (+10-15%)
- Target: 15-20M ops/s
- Remaining gap: ~2x (suggests deeper bottlenecks remain)

Next Steps:
- Perf profiling to identify next bottleneck
- Current hypotheses: classify_ptr, drain overhead, refill path
- Phase C candidates: FC-direct free, inline optimizations

ENV Usage:
# Enable C23 fast path (default: OFF)
export HAKMEM_TINY_FRONT_C23_SIMPLE=1

# Optional: Override refill target (default: 64)
export HAKMEM_TINY_FRONT_C23_REFILL=32

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 19:27:45 +09:00
ccf604778c Front-Direct implementation: SS→FC direct refill + SLL complete bypass
## Summary

Implemented Front-Direct architecture with complete SLL bypass:
- Direct SuperSlab → FastCache refill (1-hop, bypasses SLL)
- SLL-free allocation/free paths when Front-Direct enabled
- Legacy path sealing (SLL inline opt-in, SFC cascade ENV-only)

## New Modules

- core/refill/ss_refill_fc.h (236 lines): Standard SS→FC refill entry point
  - Remote drain → Freelist → Carve priority
  - Header restoration for C1-C6 (NOT C0/C7)
  - ENV: HAKMEM_TINY_P0_DRAIN_THRESH, HAKMEM_TINY_P0_NO_DRAIN

- core/front/fast_cache.h: FastCache (L1) type definition
- core/front/quick_slot.h: QuickSlot (L0) type definition
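A hedged sketch of the refill priority described for ss_refill_fc.h above (remote drain, then freelist, then carve); the helpers are stubs with assumed signatures and the real entry point differs in detail:

```c
/* Sketch: fill up to `want` blocks into the FastCache in priority order. */
static int drain_remote_frees_stub(int c, int want) { (void)c; (void)want; return 0; }
static int pop_freelist_stub(int c, int want)       { (void)c; (void)want; return 0; }
static int carve_new_blocks_stub(int c, int want)   { (void)c; (void)want; return 0; }

static int ss_refill_fc_fill_sketch(int class_idx, int want) {
    int got = 0;
    got += drain_remote_frees_stub(class_idx, want - got);                 /* 1) remote drain */
    if (got < want) got += pop_freelist_stub(class_idx, want - got);       /* 2) freelist */
    if (got < want) got += carve_new_blocks_stub(class_idx, want - got);   /* 3) carve */
    /* Header restoration for C1-C6 (not C0/C7) would happen per block here. */
    return got;   /* blocks delivered to the FastCache */
}
```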

## Allocation Path (core/tiny_alloc_fast.inc.h)

- Added s_front_direct_alloc TLS flag (lazy ENV check)
- SLL pop guarded by: g_tls_sll_enable && !s_front_direct_alloc
- Refill dispatch:
  - Front-Direct: ss_refill_fc_fill() → fastcache_pop() (1-hop)
  - Legacy: sll_refill_batch_from_ss() → SLL → FC (2-hop, A/B only)
- SLL inline pop sealed (requires HAKMEM_TINY_INLINE_SLL=1 opt-in)
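As an illustration, a hedged sketch of the refill dispatch just described; all callees are stubs with assumed signatures, and the flag is a plain static here rather than the real lazily initialized TLS flag:

```c
/* Sketch: 1-hop SS→FC refill when Front-Direct is on, legacy 2-hop
 * SS→SLL→FC refill otherwise, followed by the FastCache pop. */
#include <stdbool.h>
#include <stddef.h>

static bool s_front_direct_alloc_sketch = true;   /* real code: TLS flag, lazy ENV check */

static int   ss_refill_fc_fill_stub(int c, int n)        { (void)c; (void)n; return 0; }
static int   sll_refill_batch_from_ss_stub(int c, int n) { (void)c; (void)n; return 0; }
static void *fastcache_pop_stub(int c)                   { (void)c; return NULL; }

static void *tiny_refill_dispatch_sketch(int class_idx, int batch) {
    if (s_front_direct_alloc_sketch) {
        ss_refill_fc_fill_stub(class_idx, batch);        /* Front-Direct: SS → FC (1-hop) */
    } else {
        sll_refill_batch_from_ss_stub(class_idx, batch); /* Legacy A/B: SS → SLL → FC (2-hop) */
    }
    return fastcache_pop_stub(class_idx);                /* serve from FC after refill */
}
```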

## Free Path (core/hakmem_tiny_free.inc, core/hakmem_tiny_fastcache.inc.h)

- FC priority: Try fastcache_push() first (same-thread free)
- tiny_fast_push() bypass: Returns 0 when s_front_direct_free || !g_tls_sll_enable
- Fallback: Magazine/slow path (safe, bypasses SLL)

## Legacy Sealing

- SFC cascade: Default OFF (ENV-only via HAKMEM_TINY_SFC_CASCADE=1)
- Deleted: core/hakmem_tiny_free.inc.bak, core/pool_refill_legacy.c.bak
- Documentation: ss_refill_fc_fill() promoted as CANONICAL refill entry

## ENV Controls

- HAKMEM_TINY_FRONT_DIRECT=1: Enable Front-Direct (SS→FC direct)
- HAKMEM_TINY_P0_DIRECT_FC_ALL=1: Same as above (alt name)
- HAKMEM_TINY_REFILL_BATCH=1: Enable batch refill (also enables Front-Direct)
- HAKMEM_TINY_SFC_CASCADE=1: Enable SFC cascade (default OFF)
- HAKMEM_TINY_INLINE_SLL=1: Enable inline SLL pop (default OFF, requires AGGRESSIVE_INLINE)

## Benchmarks (Front-Direct Enabled)

```bash
ENV: HAKMEM_BENCH_FAST_FRONT=1 HAKMEM_TINY_FRONT_DIRECT=1
     HAKMEM_TINY_REFILL_BATCH=1 HAKMEM_TINY_P0_DIRECT_FC_ALL=1
     HAKMEM_TINY_REFILL_COUNT_HOT=256 HAKMEM_TINY_REFILL_COUNT_MID=96
     HAKMEM_TINY_BUMP_CHUNK=256

bench_random_mixed (16-1040B random, 200K iter):
  256 slots: 1.44M ops/s (STABLE, 0 SEGV)
  128 slots: 1.44M ops/s (STABLE, 0 SEGV)

bench_fixed_size (fixed size, 200K iter):
  256B: 4.06M ops/s (has debug logs, expected >10M without logs)
  128B: Similar (also affected by debug logs)
```

## Verification

- TRACE_RING test (10K iter): **0 SLL events** detected 
- Complete SLL bypass confirmed when Front-Direct=1
- Stable execution: 200K iterations × multiple sizes, 0 SEGV

## Next Steps

- Disable debug logs in hak_alloc_api.inc.h (call_num 14250-14280 range)
- Re-benchmark with clean Release build (target: 10-15M ops/s)
- 128/256B shortcut path optimization (FC hit rate improvement)

Co-Authored-By: ChatGPT <chatgpt@openai.com>
Suggested-By: ultrathink
2025-11-14 05:41:49 +09:00