Commit Graph

2 Commits

ccccabd944 Phase 17-1: Small-Mid Allocator - TLS Frontend Cache (result: ±0.3%, layer separation achieved)
Summary:
========
Phase 17-1 implements the Small-Mid allocator as a TLS frontend cache with delegation to the Tiny backend.
Result: clean layer separation achieved with minimal overhead (±0.3%), but no performance gain.
Conclusion: the frontend-only approach is a dead end; Phase 17-2 (a dedicated backend) is required to reach the 2-3x target.

Implementation:
===============
1. Small-Mid TLS frontend (256B/512B/1KB - 3 classes)
   - TLS freelist (32/24/16 capacity)
   - Backend delegation to Tiny C5/C6/C7
   - Header conversion (0xa0 → 0xb0)

2. Auto-adjust Tiny boundary
   - When Small-Mid ON: Tiny auto-limits to C0-C5 (0-255B)
   - When Small-Mid OFF: Tiny default C0-C7 (0-1023B)
   - Prevents routing conflict

3. Routing order fix
   - Small-Mid BEFORE Tiny (critical for proper execution)
   - Fall through to Tiny on TLS miss (see the sketch below)
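
A minimal sketch of the resulting dispatch and frontend path in C. Apart from
tiny_get_max_size(), the class layout, and the 0xa0/0xb0 tags, every name here
(smallmid_enabled(), tiny_alloc_class(), the header byte sitting just before
the user pointer) is an assumption for illustration, not the actual hakmem code:

```c
#include <stddef.h>
#include <stdint.h>

#define SMALLMID_CLASSES 3                 /* 256B / 512B / 1KB */

typedef struct {
    void    *head[SMALLMID_CLASSES];       /* per-class freelist heads */
    uint16_t count[SMALLMID_CLASSES];      /* caps 32/24/16 enforced on free (not shown) */
} sm_tls_cache;

static _Thread_local sm_tls_cache sm_tls;  /* the TLS frontend cache */

extern int   smallmid_enabled(void);       /* assumed config query */
extern void *tiny_alloc(size_t size);      /* assumed Tiny entry points */
extern void *tiny_alloc_class(int cls);
extern void *backend_alloc(size_t size);   /* assumed larger-size path */

/* Auto-adjusted boundary: with the frontend ON, Tiny stops at C5 (255B). */
static inline size_t tiny_get_max_size(void) {
    return smallmid_enabled() ? 255 : 1023;
}

static void *smallmid_alloc(size_t size) {
    int cls = (size <= 256) ? 0 : (size <= 512) ? 1 : 2;

    void *p = sm_tls.head[cls];            /* TLS hit: ~3-5 instructions */
    if (p) {
        sm_tls.head[cls] = *(void **)p;
        sm_tls.count[cls]--;
        return p;
    }

    /* TLS miss: 1:1 delegation to Tiny C5/C6/C7 (~3-5 instructions), then
     * retag the header 0xa0 -> 0xb0 (~2 instructions). With no batch refill
     * this runs on every miss, which is why the net gain over plain Tiny
     * measured ~0. Header placement is assumed for illustration. */
    p = tiny_alloc_class(5 + cls);
    if (p)
        ((uint8_t *)p)[-1] = 0xb0;
    return p;
}

void *hak_alloc(size_t size) {             /* illustrative dispatcher */
    if (smallmid_enabled() && size >= 256 && size <= 1024)
        return smallmid_alloc(size);       /* Small-Mid BEFORE Tiny */
    if (size <= tiny_get_max_size())
        return tiny_alloc(size);
    return backend_alloc(size);
}
```

Checking Small-Mid before Tiny matters because the auto-adjusted boundary only
prevents routing conflicts when the frontend gets first claim on 256B-1KB
requests.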

Files Modified:
===============
- core/hakmem_smallmid.h/c: TLS freelist + backend delegation
- core/hakmem_tiny.c: tiny_get_max_size() auto-adjust
- core/box/hak_alloc_api.inc.h: Routing order (Small-Mid → Tiny)
- CURRENT_TASK.md: Phase 17-1 results + Phase 17-2 plan

A/B Benchmark Results:
======================
| Size    | A: Small-Mid OFF | B: Small-Mid ON | Delta       | % Change |
|---------|------------------|-----------------|-------------|----------|
| 256B    | 5.87M ops/s      | 6.06M ops/s     | +191K ops/s | +3.3%    |
| 512B    | 6.02M ops/s      | 5.91M ops/s     | -112K ops/s | -1.9%    |
| 1024B   | 5.58M ops/s      | 5.54M ops/s     | -35K ops/s  | -0.6%    |
| Overall | 5.82M ops/s      | 5.84M ops/s     | +20K ops/s  | +0.3%    |

Analysis:
=========
- SUCCESS: Clean layer separation (Small-Mid ↔ Tiny coexist)
- SUCCESS: Minimal overhead (±0.3%, within measurement noise)
- FAIL: No performance gain (target was 2-4x)

Root Cause:
-----------
- Delegation overhead = TLS savings (net gain ≈ 0 instructions)
- Small-Mid TLS alloc: ~3-5 instructions
- Tiny backend delegation: ~3-5 instructions
- Header conversion: ~2 instructions
- No batching: 1:1 delegation to Tiny (no refill amortization)

Lessons Learned:
================
- Frontend-only approach ineffective (backend calls not reduced)
- Dedicated backend essential for meaningful improvement
- Clean separation achieved = solid foundation for Phase 17-2

Next Steps (Phase 17-2):
========================
- Dedicated Small-Mid SuperSlab backend (separate from Tiny)
- TLS batch refill (8-16 blocks per refill; see the sketch below)
- Optimized 0xb0 header fast path (no delegation)
- Target: 12-15M ops/s (2.0-2.6x improvement)
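
A minimal sketch of what that batch refill could look like, reusing the sm_tls
cache from the sketch above; smallmid_backend_pop_batch() and the dedicated
SuperSlab interface are assumptions about the planned Phase 17-2 backend, not
existing code:

```c
#define REFILL_BATCH 16        /* 8-16 blocks per refill, per the plan */

/* Assumed Phase 17-2 backend call: pops up to `want` free blocks of
 * class `cls` from the dedicated Small-Mid SuperSlab into `out`,
 * returning how many it delivered. */
extern int smallmid_backend_pop_batch(int cls, void **out, int want);

static void *smallmid_alloc_batched(int cls) {
    void *batch[REFILL_BATCH];
    int n = smallmid_backend_pop_batch(cls, batch, REFILL_BATCH);
    if (n == 0)
        return NULL;                       /* backend exhausted */

    /* Hand one block to the caller and chain the rest into the TLS
     * freelist, so the next n-1 allocations stay on the fast path with
     * no backend call: one backend visit amortized over n allocations. */
    for (int i = 1; i < n; i++) {
        *(void **)batch[i] = sm_tls.head[cls];
        sm_tls.head[cls] = batch[i];
    }
    sm_tls.count[cls] += (uint16_t)(n - 1);
    return batch[0];
}
```

This amortization is exactly what the 1:1 delegation in Phase 17-1 lacked, and
it is where the 2.0-2.6x headroom is expected to come from.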

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 02:37:24 +09:00
1010a961fb Tiny: fix header/stride mismatch and harden refill paths
- Root cause: header-based class indexing (HEADER_CLASSIDX=1) wrote a 1-byte
  header during allocation, but linear carve/refill and initial slab capacity
  still used bare class block sizes. This mismatch could overrun slab usable
  space and corrupt freelists, causing reproducible SEGV at ~100k iters.

Changes
- Superslab: compute capacity with the effective stride (block_size + header for
  classes 0..6; class7 remains headerless) in superslab_init_slab(). Add a
  debug-only bounds check in superslab_alloc_from_slab() to fail fast if a
  carve would exceed the usable bytes (see the sketch below).
- Refill (non-P0 and P0): use header-aware stride for all linear carving and
  TLS window bump operations. Ensure alignment/validation in tiny_refill_opt.h
  also uses stride, not raw class size.
- Drain: keep existing defense-in-depth for remote sentinel and sanitize nodes
  before splicing into freelist (already present).
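
A minimal sketch of the single stride rule in C. superslab_init_slab() and
superslab_alloc_from_slab() are the real functions named above, but the
signatures and helpers here are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TINY_HEADERLESS_CLASS 7    /* class7 (1024B) carries no header */

/* One stride definition shared by capacity, linear carve, and refill:
 * block size plus the 1-byte class header for classes 0..6. */
static inline size_t tiny_stride(int cls, size_t block_size) {
    return block_size + (cls == TINY_HEADERLESS_CLASS ? 0 : 1);
}

/* Capacity must divide by the stride, not the bare block size; the old
 * code divided by block_size and overcounted for classes 0..6. */
static uint32_t slab_capacity(size_t usable_bytes, int cls,
                              size_t block_size) {
    return (uint32_t)(usable_bytes / tiny_stride(cls, block_size));
}

/* Debug-only fail-fast check before a linear carve; with NDEBUG the
 * assert compiles out and release builds stay lean, as noted below. */
static inline void carve_bounds_check(size_t next_off, size_t usable_bytes,
                                      int cls, size_t block_size) {
    assert(next_off + tiny_stride(cls, block_size) <= usable_bytes);
    (void)next_off; (void)usable_bytes; (void)cls; (void)block_size;
}
```

Once capacity, carve, and refill all derive offsets from tiny_stride(), the
1-byte header can no longer push a carve past the slab's usable bytes.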

Notes
- This unifies the memory layout across alloc/linear-carve/refill with a single
  stride definition and keeps class7 (1024B) headerless as designed.
- Debug builds add fail-fast checks; release builds remain lean.

Next
- Re-run the Tiny benches (256/1024B) in debug to confirm stability, then in
  release. If any crash persists, bisect with HAKMEM_TINY_P0_BATCH_REFILL=0
  to isolate the P0 batch carve, and continue reducing branch misses as planned.
2025-11-09 18:55:50 +09:00