- Root cause: header-based class indexing (HEADER_CLASSIDX=1) wrote a 1-byte
header during allocation, but linear carve/refill and the initial slab
capacity calculation still used bare class block sizes. This mismatch could
overrun a slab's usable space and corrupt freelists, causing a reproducible
SEGV after ~100k iterations. The overrun arithmetic is sketched below.
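A minimal sketch of the overrun, using made-up geometry (64 KiB of usable
slab space, a 64B class, 1-byte header; the real slab sizes may differ):

    size_t usable     = 64 * 1024;
    size_t block_size = 64;                    // bare class size
    size_t stride     = block_size + 1;        // effective stride with header
    size_t cap_wrong  = usable / block_size;   // 1024 blocks (old capacity)
    size_t used       = cap_wrong * stride;    // 66560 > 65536 → overrun
    size_t cap_right  = usable / stride;       // 1008 blocks → fits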
Changes
- Superslab: compute capacity with the effective stride (block_size + header
for classes 0..6; class7 remains headerless) in superslab_init_slab(). Add a
debug-only bounds check in superslab_alloc_from_slab() to fail fast if a
carve would exceed usable bytes (stride rule sketched after this list).
- Refill (non-P0 and P0): use the header-aware stride for all linear carving
and TLS window bump operations. Ensure alignment/validation in
tiny_refill_opt.h also uses the stride, not the raw class size.
- Drain: keep existing defense-in-depth for remote sentinel and sanitize nodes
before splicing into freelist (already present).
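Illustrative stride rule (the helper name is hypothetical; the real code may
inline this differently), showing the single definition shared by capacity,
linear carve, and TLS window bumps:

    static inline size_t tiny_effective_stride(int cls, size_t block_size) {
        /* classes 0..6 carry a 1-byte header; class7 (1024B) is headerless */
        return (cls == 7) ? block_size : block_size + 1;
    }
    /* capacity  = usable_bytes / stride;
       nth block = base + (size_t)n * stride; */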
Notes
- This unifies the memory layout across alloc/linear-carve/refill with a single
stride definition and keeps class7 (1024B) headerless as designed.
- Debug builds add fail-fast checks; release builds remain lean.
Next
- Re-run the Tiny benches (256/1024B) in debug to confirm stability, then in
release. If any crash persists, bisect with HAKMEM_TINY_P0_BATCH_REFILL=0
to isolate the P0 batch carve, and continue reducing branch misses as planned.
Optimization:
=============
Check remote_counts[slab_idx] BEFORE calling the drain function.
If the remote queue is empty (count == 0), skip the drain entirely.
Impact:
- Single-threaded: remote_count is ALWAYS 0 → drain calls = 0
- Multi-threaded: only drain when there are actual remote frees
- Reduces unnecessary function call overhead in common case
Code:
if (tls->ss && tls->slab_idx >= 0) {
    /* Relaxed load is a cheap pre-check; drain only when remote frees exist. */
    uint32_t remote_count = atomic_load_explicit(
        &tls->ss->remote_counts[tls->slab_idx], memory_order_relaxed);
    if (remote_count > 0) {
        _ss_remote_drain_to_freelist_unsafe(tls->ss, tls->slab_idx, meta);
    }
}
Benchmark Results:
==================
bench_random_mixed (1 thread):
Before: 1,020,163 ops/s
After: 1,015,347 ops/s (-0.5%, within noise)
larson_hakmem (4 threads):
Before: 931,629 ops/s (1073 sec)
After: 929,709 ops/s (1075 sec) (-0.2%, within noise)
Note: Performance is unchanged, but the code is cleaner and avoids
unnecessary work in the single-threaded case. The real bottleneck
appears to be elsewhere (Magazine layer overhead per CLAUDE.md).
Next: Profile with perf to find actual hotspots.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Root Cause:
===========
The freelist and the remote queue could contain the SAME blocks, causing a use-after-free:
1. Thread A (owner): pops block X from freelist → allocates to user
2. User writes data ("ab") to block X
3. Thread B (remote): free(block X) → adds to remote queue
4. Thread A (later): drains remote queue → *(void**)block_X = chain_head
→ OVERWRITES USER DATA! 💥
The freelist pop path did NOT drain the remote queue first, so a block could
sit in both the freelist and the remote queue at the same time.
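For intuition, the corrupting store is the intrusive-freelist link write; a
simplified splice step (illustrative only, not the real
_ss_remote_drain_to_freelist_unsafe) looks like:

    static void splice_node(void **freelist_head, void *block) {
        /* A free block's first word holds the next pointer. If the block is
           actually live, this store clobbers the user's data: "ab\0..."
           later reads back as a bogus next pointer 0x...6261. */
        *(void **)block = *freelist_head;
        *freelist_head  = block;
    }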
Fix:
====
Add remote queue drain BEFORE freelist pop in refill path:
core/hakmem_tiny_refill_p0.inc.h:
- Call _ss_remote_drain_to_freelist_unsafe() BEFORE trc_pop_from_freelist()
- Add #include "superslab/superslab_inline.h"
- This ensures freelist and remote queue are mutually exclusive
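Ordering sketch of the fixed refill path (same identifiers as above; the
surrounding tls/meta context and refill logic are elided):

    /* Drain remote frees FIRST so the freelist and the remote queue are
       disjoint before any block can be handed out. */
    _ss_remote_drain_to_freelist_unsafe(tls->ss, tls->slab_idx, meta);
    void *blk = trc_pop_from_freelist(meta);   /* now safe to allocate */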
Test Results:
=============
BEFORE:
larson_hakmem (4 threads): ❌ SEGV in seconds (freelist corruption)
AFTER:
larson_hakmem (4 threads): ✅ 931,629 ops/s (1073 sec stable run)
bench_random_mixed: ✅ 1,020,163 ops/s (no crashes)
Evidence:
- Fail-Fast logs showed next pointer corruption: 0x...6261 (ASCII "ab")
- Single-threaded benchmarks worked (865K ops/s)
- Multi-threaded Larson crashed immediately
- Fix eliminates all crashes in both benchmarks
Files:
- core/hakmem_tiny_refill_p0.inc.h: Add remote drain before freelist pop
- CURRENT_TASK.md: Document fix details
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>