Commit Graph

2 Commits

f40be1a5ba Pool TLS: Lock-free MPSC remote queue implementation
Problem: pool_remote_push mutex contention (67% of syscall time in futex)
Solution: Lock-free MPSC queue using atomic CAS operations

Changes:
1. core/pool_tls_remote.c - Lock-free MPSC queue
   - Push: atomic_compare_exchange_weak (CAS loop, no locks!)
   - Pop: atomic_exchange (steal entire chain)
   - Mutex only for RemoteRec creation (rare: first push to a given thread)
   - Push/pop are sketched below (Sketch 1, after this list)

2. core/pool_tls_registry.c - Lock-free lookup
   - Buckets and next pointers now atomic: _Atomic(RegEntry*)
   - Lookup uses memory_order_acquire loads (no locks on hot path)
   - Registration/unregistration still use a mutex (rare operations)
   - Lookup is sketched below (Sketch 2, after this list)
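
Sketch 1 (illustrative, not the repo's actual code): the push/pop scheme
above in C11 stdatomic, with placeholder RemoteNode/RemoteQueue names; the
real types in core/pool_tls_remote.c may differ:

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct RemoteNode { struct RemoteNode *next; } RemoteNode;
    typedef struct { _Atomic(RemoteNode *) head; } RemoteQueue;

    /* Multi-producer push: CAS loop, no locks. */
    static void remote_push(RemoteQueue *q, RemoteNode *n)
    {
        RemoteNode *old = atomic_load_explicit(&q->head, memory_order_relaxed);
        do {
            n->next = old; /* old is refreshed by each failed CAS */
        } while (!atomic_compare_exchange_weak_explicit(
            &q->head, &old, n,
            memory_order_release,    /* publish n->next to the consumer */
            memory_order_relaxed));
    }

    /* Single-consumer pop: the owner thread steals the whole chain at once. */
    static RemoteNode *remote_steal_all(RemoteQueue *q)
    {
        return atomic_exchange_explicit(&q->head, NULL, memory_order_acquire);
    }

Each freed block is pushed at most once and never reused as a queue node
afterward, which is why the usual ABA hazard of CAS-based stacks does not
arise here.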
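
Sketch 2 (illustrative): the lock-free lookup path, with hypothetical
RegEntry fields, bucket count, and a caller-supplied hash:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stddef.h>

    #define REG_BUCKETS 64 /* hypothetical bucket count */

    typedef struct RegEntry {
        pthread_t tid;
        void *rec;                       /* per-thread remote record */
        _Atomic(struct RegEntry *) next; /* atomic chain pointer */
    } RegEntry;

    static _Atomic(RegEntry *) g_buckets[REG_BUCKETS];

    /* Hot path: acquire loads only, no locks. Register/unregister still
     * take a mutex and publish new entries with release stores. */
    static void *registry_lookup(pthread_t tid, size_t hash)
    {
        RegEntry *e = atomic_load_explicit(&g_buckets[hash % REG_BUCKETS],
                                           memory_order_acquire);
        while (e) {
            if (pthread_equal(e->tid, tid))
                return e->rec;
            e = atomic_load_explicit(&e->next, memory_order_acquire);
        }
        return NULL;
    }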

Results:
- futex calls: 209 → 7 (97% reduction!)
- Throughput: 0.97M → 1.0M ops/s (+3%)
- Remaining gap: 5.8x slower than System malloc (5.8M ops/s)

Key Finding:
- futex was NOT the primary bottleneck (only a small % of total runtime)
- True bottleneck: 8% cache miss rate + registry lookup overhead

Thread Safety:
- MPSC: Multi-producer (CAS), Single-consumer (owner thread)
- Memory ordering: release/acquire for correctness
- No ABA problem (pointers used once, no reuse)

Next: P0 registry lookup elimination via POOL_TLS_BIND_BOX

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 14:29:05 +09:00
1010a961fb Tiny: fix header/stride mismatch and harden refill paths
- Root cause: header-based class indexing (HEADER_CLASSIDX=1) wrote a 1-byte
  header during allocation, but linear carve/refill and initial slab capacity
  still used bare class block sizes. This mismatch could overrun the slab's
  usable space and corrupt freelists, causing a reproducible SEGV at ~100k
  iterations.

Changes
- Superslab: compute capacity with the effective stride (block_size + header
  for classes 0..6; class7 remains headerless) in superslab_init_slab(). Add a
  debug-only bound check in superslab_alloc_from_slab() to fail fast if a
  carve would exceed usable bytes (see the stride sketch after this list).
- Refill (non-P0 and P0): use header-aware stride for all linear carving and
  TLS window bump operations. Ensure alignment/validation in tiny_refill_opt.h
  also uses stride, not raw class size.
- Drain: keep existing defense-in-depth for remote sentinel and sanitize nodes
  before splicing into freelist (already present).
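
Stride sketch (illustrative names, not the repo's actual symbols; assumes the
1-byte HEADER_CLASSIDX header for classes 0..6 and the headerless class7
described above):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TINY_HEADER_BYTES     1  /* HEADER_CLASSIDX=1 */
    #define TINY_HEADERLESS_CLASS 7  /* class7 (1024B) stays headerless */

    /* One stride definition shared by alloc, linear carve, and refill. */
    static inline size_t tiny_stride(int cls, size_t block_size)
    {
        return block_size +
               (cls == TINY_HEADERLESS_CLASS ? 0 : TINY_HEADER_BYTES);
    }

    /* Dividing usable bytes by the bare block size overstates capacity for
     * header-carrying classes; dividing by the stride does not. */
    static inline uint32_t slab_capacity(size_t usable, int cls,
                                         size_t block_size)
    {
        return (uint32_t)(usable / tiny_stride(cls, block_size));
    }

    /* Linear carve of block i, with the debug-only fail-fast bound check. */
    static inline void *carve_block(uint8_t *base, size_t usable, int cls,
                                    size_t block_size, uint32_t i)
    {
        size_t stride = tiny_stride(cls, block_size);
        assert(((size_t)i + 1) * stride <= usable); /* compiled out in release */
        return base + (size_t)i * stride;
    }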

Notes
- This unifies the memory layout across alloc/linear-carve/refill with a single
  stride definition and keeps class7 (1024B) headerless as designed.
- Debug builds add fail-fast checks; release builds remain lean.

Next
- Re-run Tiny benches (256/1024B) in debug to confirm stability, then in
  release. If any crash persists, bisect with HAKMEM_TINY_P0_BATCH_REFILL=0 to
  isolate the P0 batch carve, and continue reducing branch misses as planned.
2025-11-09 18:55:50 +09:00