Tiny: fix header/stride mismatch and harden refill paths

- Root cause: header-based class indexing (HEADER_CLASSIDX=1) wrote a 1-byte
  header during allocation, but linear carve/refill and the initial slab
  capacity still used bare class block sizes. The mismatch could overrun a
  slab's usable space and corrupt freelists, causing a reproducible SEGV
  after ~100k iterations (see the stride sketch below).
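
A minimal sketch of the stride rule described above. The names
(tiny_class_stride, TINY_HEADER_BYTES, tiny_class_block_size) are
illustrative, not the actual HAKMEM identifiers:

```c
#include <stddef.h>

#define TINY_NUM_CLASSES  8
#define TINY_HEADER_BYTES 1  /* 1-byte class-index header (HEADER_CLASSIDX=1) */

/* Effective stride: classes 0..6 carry the 1-byte header, so their
 * on-slab footprint is block_size + 1; class 7 (1024B) stays headerless. */
static inline size_t tiny_class_stride(int class_idx, size_t block_size) {
    return (class_idx == 7) ? block_size
                            : block_size + TINY_HEADER_BYTES;
}
```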

Changes
- Superslab: compute capacity with the effective stride (block_size + header
  for classes 0..6; class 7 remains headerless) in superslab_init_slab(). Add
  a debug-only bound check in superslab_alloc_from_slab() to fail fast if a
  carve would exceed the usable bytes (see the sketch after this list).
- Refill (non-P0 and P0): use the header-aware stride for all linear carving
  and TLS window bump operations. Ensure alignment/validation in
  tiny_refill_opt.h also uses the stride, not the raw class size.
- Drain: keep the existing defense-in-depth for the remote sentinel and
  sanitize nodes before splicing them into the freelist (already present).
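
A hedged sketch of the capacity computation and the debug-only carve check,
reusing the hypothetical tiny_class_stride() above; these are not the actual
superslab_init_slab()/superslab_alloc_from_slab() bodies:

```c
#include <stddef.h>

/* Capacity divides by the effective stride, not the bare block size, so
 * the last block's header cannot spill past the usable region. */
static inline size_t slab_capacity(size_t usable_bytes, int class_idx,
                                   size_t block_size) {
    return usable_bytes / tiny_class_stride(class_idx, block_size);
}

#ifndef NDEBUG
/* Debug-only fail-fast: a linear carve must never step past usable bytes. */
static inline void carve_bound_check(size_t carve_off, size_t stride,
                                     size_t usable_bytes) {
    if (carve_off + stride > usable_bytes)
        __builtin_trap();  /* crash here instead of corrupting a freelist */
}
#endif
```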

Notes
- This unifies the memory layout across alloc/linear-carve/refill behind a
  single stride definition and keeps class 7 (1024B) headerless as designed;
  the carve loop sketched below shows the invariant.
- Debug builds add fail-fast checks; release builds remain lean.
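
For illustration only, a carve loop under the same assumptions, showing how
one stride definition drives header placement, the bump offset, and the
capacity bound together (base, usable_bytes, and the freelist push are
placeholders):

```c
static void carve_blocks(unsigned char *base, size_t usable_bytes,
                         int class_idx, size_t block_size) {
    size_t stride = tiny_class_stride(class_idx, block_size);
    for (size_t off = 0; off + stride <= usable_bytes; off += stride) {
        unsigned char *blk = base + off;
        if (class_idx != 7)
            blk[0] = (unsigned char)class_idx;  /* write 1-byte header */
        /* user payload begins at blk + (class_idx != 7 ? 1 : 0);
         * push it onto the per-class freelist here */
    }
}
```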

Next
- Re-run the Tiny benches (256B/1024B) in debug to confirm stability, then in
  release. If any crash persists, bisect with HAKMEM_TINY_P0_BATCH_REFILL=0
  to isolate the P0 batch carve, and continue reducing branch misses as
  planned.
Author: Moe Charm (CI)
Date:   2025-11-09 18:55:50 +09:00
Parent: ab68ee536d
Commit: 1010a961fb
171 changed files with 10238 additions and 634 deletions


@@ -336,6 +336,8 @@ static inline int tiny_alloc_fast_refill(int class_idx) {
// Previous: Complex precedence logic on every miss (5-10 cycles overhead)
// Now: Simple TLS cache lookup (1-2 cycles)
static __thread int s_refill_count[TINY_NUM_CLASSES] = {0};
// Simple adaptive booster: bump per-class refill size when refills are frequent.
static __thread uint8_t s_refill_calls[TINY_NUM_CLASSES] = {0};
int cnt = s_refill_count[class_idx];
if (__builtin_expect(cnt == 0, 0)) {
// First miss: Initialize from globals (parsed at init time)
@@ -375,6 +377,26 @@ static inline int tiny_alloc_fast_refill(int class_idx) {
// Note: g_rf_hit_slab counter is incremented inside sll_refill_small_from_ss()
int refilled = sll_refill_small_from_ss(class_idx, cnt);
// Lightweight adaptation: if refills keep happening, increase per-class refill.
// Focus on class 7 (1024B) to reduce mmap/refill frequency under Tiny-heavy loads.
if (refilled > 0) {
uint8_t c = ++s_refill_calls[class_idx];
if (class_idx == 7) {
// Every 4 refills, increase target by +16 up to 128 (unless overridden).
if ((c & 0x03u) == 0) {
int target = s_refill_count[class_idx];
if (target < 128) {
target += 16;
if (target > 128) target = 128;
s_refill_count[class_idx] = target;
}
}
}
} else {
// No refill performed (capacity full): slowly decay the counter.
if (s_refill_calls[class_idx] > 0) s_refill_calls[class_idx]--;
}
// Phase 2b: Track refill and adapt cache size
if (refilled > 0) {
track_refill_for_adaptation(class_idx);