Commit Graph

6 Commits

0c66991393 WIP: Unify fast path to Box Theory (experimental)
### Changes:
- **Removed duplicate fast paths**: Disabled HAKMEM_TINY_FAST_PATH in:
  - malloc() entry point (line ~1257)
  - hak_alloc_at() helper (line ~682)
- **Unified to Box Theory**: All tiny allocations now use Box Theory's
  hak_tiny_alloc_fast_wrapper() at line ~712 (HAKMEM_TINY_PHASE6_BOX_REFACTOR)
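
A minimal sketch of the unified entry point, assuming the 128B tiny threshold used elsewhere in this log; only hak_tiny_alloc_fast_wrapper() is named by this commit, and hak_alloc_slow() is a hypothetical stand-in for the fallback:

```
#include <stddef.h>

/* Sketch only: hak_tiny_alloc_fast_wrapper() is the Box Theory fast path
 * named in this commit; hak_alloc_slow() is a hypothetical slow path. */
extern void *hak_tiny_alloc_fast_wrapper(size_t size);
extern void *hak_alloc_slow(size_t size);

void *malloc(size_t size) {
    /* One fast-path check replaces the duplicate HAKMEM_TINY_FAST_PATH
     * checks that lived in malloc() and hak_alloc_at(). */
    if (__builtin_expect(size <= 128, 1)) {          /* tiny range (<=128B) */
        void *p = hak_tiny_alloc_fast_wrapper(size); /* Box Theory path */
        if (__builtin_expect(p != NULL, 1))
            return p;
    }
    return hak_alloc_slow(size);
}
```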

### Rationale:
- Previous implementation had **2 fast path checks** (double overhead)
- Box Theory (tiny_alloc_fast.inc.h) provides optimized 3-4 instruction path
- CLAUDE.md claims +64% (debug), +150% (production) with Box Theory
- Attempt to eliminate redundant checks and unify to single fast path

### Performance Results:
⚠️ **REGRESSION** - Performance decreased:
```
Baseline (old tiny_fast_alloc): 1.68M ops/s
Box Theory (unified):           1.35M ops/s  (-20%)
System malloc:                  8.08M ops/s  (reference)
```

### Status:
🔬 **EXPERIMENTAL** - This commit documents the attempt but shows regression.
Possible issues:
1. Box Theory may need additional tuning (env vars not sufficient)
2. Refill backend may be slower than old implementation
3. TLS freelist initialization overhead
4. Missing optimizations in Box Theory integration

### Next Steps:
- Profile to identify why Box Theory is slower
- Compare refill efficiency: old vs Box Theory
- Check if TLS SLL variables are properly initialized
- Consider reverting if root cause not found

Related: LARSON_PERFORMANCE_ANALYSIS_2025_11_05.md, CLAUDE.md Phase 6-1.7
2025-11-05 06:06:34 +00:00
31af3eab27 Add malloc routing analysis and refill success tracking
### Changes:
- **Routing Counters**: Added per-thread counters in hakmem.c to track:
  - g_malloc_total_calls: Total malloc() invocations
  - g_malloc_tiny_size_match: Calls within tiny size range (<=128B)
  - g_malloc_fast_path_tried: Calls that attempted fast path
  - g_malloc_fast_path_null: Fast path returned NULL
  - g_malloc_slow_path: Calls routed to slow path

- **Refill Success Tracking**: Added counters in tiny_fastcache.c:
  - g_refill_success_count: Full batch (16 blocks)
  - g_refill_partial_count: Partial batch (<16 blocks)
  - g_refill_fail_count: Zero blocks allocated
  - g_refill_total_blocks: Total blocks across all refills

- **Profile Output Enhanced**: tiny_fast_print_profile() now shows:
  - Routing statistics (which path allocations take)
  - Refill success/failure breakdown
  - Average blocks per refill
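
As a rough sketch, the routing counters could be plain TLS variables bumped at each decision point (the actual declarations in hakmem.c may differ; hak_alloc_slow() is a hypothetical stand-in for the slow path):

```
#include <stddef.h>
#include <stdint.h>

extern void *tiny_fast_alloc(size_t size);  /* fast path named in this log */
extern void *hak_alloc_slow(size_t size);   /* hypothetical slow path */

/* Per-thread routing counters from this commit, assumed to be plain TLS. */
static __thread uint64_t g_malloc_total_calls;
static __thread uint64_t g_malloc_tiny_size_match;
static __thread uint64_t g_malloc_fast_path_tried;
static __thread uint64_t g_malloc_fast_path_null;
static __thread uint64_t g_malloc_slow_path;

void *malloc(size_t size) {
    g_malloc_total_calls++;
    if (size <= 128) {                      /* tiny size range (<=128B) */
        g_malloc_tiny_size_match++;
        g_malloc_fast_path_tried++;
        void *p = tiny_fast_alloc(size);
        if (p)
            return p;
        g_malloc_fast_path_null++;          /* fast path returned NULL */
    }
    g_malloc_slow_path++;
    return hak_alloc_slow(size);
}
```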

### Key Findings:
✅ Fast path routing: ~100% success (20,479/20,480 calls per thread)
✅ Refill success: 100% (1,285 refills, all 16 blocks each)
⚠️ Performance: still only 1.68M ops/s vs the system allocator's 8.06M (20.8%)

**Root Cause Confirmed**:
- NOT a routing problem (100% reach fast path)
- NOT a refill failure (100% success)
- IS a structural performance issue (avg 2,418 cycles per malloc() call)

**Bottlenecks Identified**:
1. Fast path cache hits: ~2,418 cycles (vs tcache ~100 cycles)
2. Refill operations: ~39,938 cycles (expensive but infrequent)
3. Overall throughput: 4.8x slower than system malloc
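
Per-call cycle figures like these can be gathered with a simple TSC-based sampler along the following lines (a sketch only; the analysis document may have measured differently, and serialization/warmup are omitted, so results are approximate):

```
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>   /* __rdtsc() on x86-64 (GCC/Clang) */

int main(void) {
    enum { N = 100000 };
    uint64_t total = 0;
    for (int i = 0; i < N; i++) {
        uint64_t t0 = __rdtsc();
        void *p = malloc(64);            /* tiny-range allocation */
        uint64_t t1 = __rdtsc();
        total += t1 - t0;
        free(p);
    }
    printf("avg cycles per malloc: %llu\n",
           (unsigned long long)(total / N));
    return 0;
}
```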

**Next Steps** (per LARSON_PERFORMANCE_ANALYSIS_2025_11_05.md):
- Option B: Refill efficiency (batch allocation from SuperSlab)
- Option C: Ultra-fast path redesign (tcache-equivalent)

Related: LARSON_PERFORMANCE_ANALYSIS_2025_11_05.md
2025-11-05 05:56:02 +00:00
3e4e90eadb Phase 6-5: Entry Point Optimization (Phase 1) - Unexpected results
Implementation: Move HAKMEM_TINY_FAST_PATH check BEFORE all guard checks
in malloc(), inspired by mimalloc/tcache entry point design.

Strategy:
- tcache has 0 branches before fast path
- mimalloc has 1-2 branches before fast path
- Old HAKMEM had 8+ branches before fast path
- Phase 1: move the fast path to the top of malloc(), add branch prediction hints

Changes in core/hakmem.c:
1. Fast Path First: Size check → Init check → Cache hit (3 branches)
2. Slow Path: All guards moved after fast path (rare cases)
3. Branch hints: __builtin_expect() for hot paths
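
In sketch form, the reordering looks roughly like this (the threshold value and the slow-path helper are illustrative; g_initialized, TINY_FAST_THRESHOLD, and tiny_fast_alloc() appear elsewhere in this log):

```
#include <stddef.h>

#define TINY_FAST_THRESHOLD 128              /* assumed value (8-128B range) */

extern int g_initialized;
extern void *tiny_fast_alloc(size_t size);
extern void *malloc_slow_path(size_t size);  /* hypothetical: guards live here */

void *malloc(size_t size) {
    /* Branches 1+2: init and size checks, predicted taken on the hot path. */
    if (__builtin_expect(g_initialized && size <= TINY_FAST_THRESHOLD, 1)) {
        void *p = tiny_fast_alloc(size);     /* Branch 3: cache hit? */
        if (__builtin_expect(p != NULL, 1))
            return p;
    }
    /* Recursion / LD_PRELOAD / other guard checks now run only on a miss. */
    return malloc_slow_path(size);
}
```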

Expected results (from research):
- ST: 0.46M → 1.4-2.3M ops/s (+204-400%)
- MT: 1.86M → 3.7-5.6M ops/s (+99-201%)

Actual results (Larson 2s 8-128B 1024):
- ST: 0.377M → 0.424M ops/s (only +12%)
- MT: 1.856M → 1.453M ops/s (-22% regression!)

Analysis:
- Similar pattern to previous Option A test (+42% ST, -20% MT)
- Entry point reordering alone is insufficient
- True bottleneck may be:
  1. tiny_fast_alloc() internals (size-to-class, cache access)
  2. Refill cost (1,600 cycles for 16 individual calls)
  3. Need Batch Refill optimization (Phase 3) as priority

Next steps:
- Investigate refill bottleneck with perf profiling
- Consider implementing Phase 3 (Batch Refill) before Phase 2
- May need combination of multiple optimizations for breakthrough

Related: LARSON_PERFORMANCE_ANALYSIS_2025_11_05.md
2025-11-05 05:10:02 +00:00
09e1d89e8d Phase 6-4: Larson benchmark optimizations - LUT size-to-class
Two optimizations to improve Larson benchmark performance:

1. **Option A: Fast Path Priority** (core/hakmem.c)
   - Move HAKMEM_TINY_FAST_PATH check before all guard checks
   - Reduce malloc() fast path from 8+ branches to 3 branches
   - Results: +42% ST, -20% MT (mixed results)

2. **LUT Optimization** (core/tiny_fastcache.h)
   - Replace 11-branch linear search with O(1) lookup table
   - Use size_to_class_lut[size >> 3] for fast mapping
   - Results: +24% MT, -24% ST (MT-optimized tradeoff)
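
A sketch of the O(1) mapping, assuming sixteen 8-byte-granular classes over the 1-128B tiny range (class sizes, entry count, and rounding are assumptions; only the `size_to_class_lut[size >> 3]` lookup is from this commit):

```
#include <stddef.h>
#include <stdint.h>

/* Assumed layout: class c serves 8*(c+1)-byte blocks, c = 0..15; the real
 * table in core/tiny_fastcache.h may differ. Indexing by size >> 3 means
 * entry i must cover every size in [8i, 8i+7], so entries round up; an
 * exact multiple of 8 pays one class step of slack. */
static const uint8_t size_to_class_lut[17] = {
    0,                                                  /* sizes 1..7 -> class 0 (8B) */
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, /* sizes 8..127 */
    15                                                  /* size 128 -> class 15 (128B) */
};

/* Replaces the 11-branch linear search with a single table load. */
static inline int tiny_size_to_class(size_t size) {
    return size_to_class_lut[size >> 3];
}
```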

Benchmark results (Larson 2s 8-128B 1024 chunks):
- Original:     ST 0.498M ops/s, MT 1.502M ops/s
- LUT version:  ST 0.377M ops/s, MT 1.856M ops/s

Analysis:
- ST regression: with a single thread the branch predictor learns the linear-search pattern, so its branches are nearly free and the LUT's extra memory load costs more
- MT improvement: the LUT avoids branch mispredictions after context switches, where predictor state is cold
- Recommendation: keep the LUT for multi-threaded workloads

Related: LARSON_PERFORMANCE_ANALYSIS_2025_11_05.md
2025-11-05 04:58:03 +00:00
b64cfc055e Implement Option A: Fast Path priority optimization (Phase 6-4)
Changes:
- Reorder malloc() to prioritize Fast Path (initialized + tiny size check first)
- Move Fast Path check before all guard checks (recursion, LD_PRELOAD, etc.)
- Optimize free() with same strategy (initialized check first)
- Add branch prediction hints (__builtin_expect)

Implementation:
- malloc(): Fast Path now executes with 3 branches total
  - Branch 1+2: g_initialized && size <= TINY_FAST_THRESHOLD
  - Branch 3: tiny_fast_alloc() cache hit check
  - Slow Path: All guard checks moved after Fast Path miss

- free(): Fast Path with 1-2 branches
  - Branch 1: g_initialized check
  - Direct to hak_free_at() on normal case
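
A sketch of the reordered free(), where hak_free_at() is named in this commit and free_slow_path() is a hypothetical home for the demoted guard checks:

```
extern int g_initialized;                /* init flag named in this commit */
extern void hak_free_at(void *ptr);      /* normal-case dispatch (named above) */
extern void free_slow_path(void *ptr);   /* hypothetical: guards, early boot */

void free(void *ptr) {
    /* Branch 1: allocator already initialized, the common case. */
    if (__builtin_expect(g_initialized != 0, 1)) {
        hak_free_at(ptr);
        return;
    }
    free_slow_path(ptr);
}
```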

Performance Results (Larson benchmark, size=8-128B):

Single-thread (threads=1):
- Before: 0.46M ops/s (10.7% of system malloc)
- After:  0.65M ops/s (15.4% of system malloc)
- Change: +42% improvement ✓

Multi-thread (threads=4):
- Before: 1.81M ops/s (25.0% of system malloc)
- After:  1.44M ops/s (19.9% of system malloc)
- Change: -20% regression ✗

Analysis:
- ST improvement shows Fast Path optimization works
- MT regression suggests contention or cache issues
- Did not meet target (+200-400%), further optimization needed

Next Steps:
- Investigate MT regression (cache coherency?)
- Consider more aggressive inlining
- Explore Option B (Refill optimization)
2025-11-05 04:44:50 +00:00
52386401b3 Debug Counters Implementation - Clean History
Major Features:
- Debug counter infrastructure for Refill Stage tracking
- Free Pipeline counters (ss_local, ss_remote, tls_sll)
- Diagnostic counters for early return analysis
- Unified larson.sh benchmark runner with profiles
- Phase 6-3 regression analysis documentation

Bug Fixes:
- Fix SuperSlab disabled by default (HAKMEM_TINY_USE_SUPERSLAB)
- Fix profile variable naming consistency
- Add .gitignore patterns for large files

Performance:
- Phase 6-3: 4.79 M ops/s (has OOM risk)
- With SuperSlab enabled: 3.13 M ops/s (+19% over the SuperSlab-disabled baseline)

This is a clean repository without large log files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-05 12:31:14 +09:00