Tiny: fix header/stride mismatch and harden refill paths

- Root cause: header-based class indexing (HEADER_CLASSIDX=1) wrote a 1-byte
  header during allocation, but linear carve/refill and initial slab capacity
  still used the bare class block sizes. This mismatch could overrun the slab's
  usable space and corrupt freelists, causing a reproducible SEGV at ~100k
  iterations (see the stride sketch below).
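
A minimal sketch of the stride rule that now applies everywhere (the helper
name tiny_class_stride and TINY_HEADER_SIZE are illustrative, not the actual
symbols; the 1-byte header for classes 0..6 and the headerless class 7 are
taken from this commit):

    #include <stddef.h>

    #define TINY_HEADER_SIZE 1  /* 1-byte class-index header (HEADER_CLASSIDX) */

    /* Effective per-block stride shared by alloc, linear carve, and refill.
     * Classes 0..6 prepend the header, so every carve step must advance by
     * block_size + 1; class 7 (1024B) stays headerless, stride == block_size. */
    static inline size_t tiny_class_stride(int class_idx, size_t block_size) {
        return (class_idx == 7) ? block_size : block_size + TINY_HEADER_SIZE;
    }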

Changes
- Superslab: compute capacity with the effective stride (block_size + header for
  classes 0..6; class 7 remains headerless) in superslab_init_slab(). Add a
  debug-only bound check in superslab_alloc_from_slab() to fail fast if a carve
  would exceed usable bytes (see the capacity/carve sketch after this list).
- Refill (non-P0 and P0): use header-aware stride for all linear carving and
  TLS window bump operations. Ensure alignment/validation in tiny_refill_opt.h
  also uses stride, not raw class size.
- Drain: keep the existing defense-in-depth for the remote sentinel and the
  node sanitization performed before splicing into the freelist (both already
  present).
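
A sketch of the capacity and fail-fast carve logic described above
(superslab_init_slab()/superslab_alloc_from_slab() are the real functions; the
helper names, parameters, and field layout below are illustrative):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Slab capacity from the effective stride, header included. */
    static inline uint32_t tiny_slab_capacity(size_t usable_bytes, size_t stride) {
        return (uint32_t)(usable_bytes / stride);
    }

    /* Linear carve / TLS window bump: advance by stride, never the raw class
     * size. Debug builds fail fast if a carve would run past usable bytes;
     * release builds just report the slab as exhausted. */
    static inline void *tiny_carve_next(uint8_t *slab_base, size_t *carve_off,
                                        size_t stride, size_t usable_bytes) {
        if (*carve_off + stride > usable_bytes) {
            assert(!"linear carve would exceed slab usable bytes");
            return NULL;
        }
        void *block = slab_base + *carve_off;
        *carve_off += stride;  /* header-aware bump */
        return block;
    }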

Notes
- This unifies the memory layout across alloc, linear carve, and refill behind a
  single stride definition and keeps class 7 (1024B) headerless as designed.
- Debug builds add fail-fast checks; release builds remain lean.

Next
- Re-run the Tiny benches (256B/1024B) in debug to confirm stability, then in
  release. If a crash still persists, bisect with HAKMEM_TINY_P0_BATCH_REFILL=0
  to isolate the P0 batch carve (see the sketch below), and keep reducing
  branch misses as planned.
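
The commit does not show how the toggle is consumed; below is a purely
hypothetical getenv-based reading of HAKMEM_TINY_P0_BATCH_REFILL, only to
illustrate the bisect step:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical sketch: the real plumbing is not part of this diff. Running
     * a bench with HAKMEM_TINY_P0_BATCH_REFILL=0 would disable the P0 batch
     * carve so a remaining crash can be attributed to it (or ruled out). */
    static int p0_batch_refill_enabled(void) {
        const char *v = getenv("HAKMEM_TINY_P0_BATCH_REFILL");
        return (v == NULL) || (strcmp(v, "0") != 0);  /* default: enabled */
    }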
Author: Moe Charm (CI)
Date:   2025-11-09 18:55:50 +09:00
Parent: ab68ee536d
Commit: 1010a961fb
171 changed files with 10238 additions and 634 deletions


@@ -23,7 +23,7 @@ int hak_is_initializing(void);
 #define TINY_NUM_CLASSES 8
 #define TINY_SLAB_SIZE (64 * 1024)  // 64KB per slab
-#define TINY_MAX_SIZE 1024          // Maximum allocation size (1KB)
+#define TINY_MAX_SIZE 1536          // Maximum allocation size (1.5KB, accommodate 1024B + header)
 // ============================================================================
 // Size Classes
@@ -244,12 +244,14 @@ void hkm_ace_set_drain_threshold(int class_idx, uint32_t threshold);
 static inline int hak_tiny_size_to_class(size_t size) {
     if (size == 0 || size > TINY_MAX_SIZE) return -1;
 #if HAKMEM_TINY_HEADER_CLASSIDX
-    // Phase 7 CRITICAL FIX (2025-11-08): Add 1-byte header overhead BEFORE class lookup
-    // Bug: 64B request was mapped to class 3 (64B blocks), leaving only 63B usable → BUS ERROR
-    // Fix: 64B request → alloc_size=65 → class 4 (128B blocks) → 127B usable ✓
-    size_t alloc_size = size + 1;               // Add header overhead
-    if (alloc_size > TINY_MAX_SIZE) return -1;  // 1024B request becomes 1025B, reject to Mid
-    return g_size_to_class_lut_1k[alloc_size];  // Look up with header-adjusted size
+    // Phase 7 header adds +1 byte. Special-case 1024B to remain in Tiny (no header).
+    // Rationale: Avoid forcing 1024B to Mid/OS which causes frequent mmap/madvise.
+    if (size == TINY_MAX_SIZE) {
+        return g_size_to_class_lut_1k[size];    // class 7 (1024B blocks)
+    }
+    size_t alloc_size = size + 1;               // Add header for other sizes
+    if (alloc_size > TINY_MAX_SIZE) return -1;
+    return g_size_to_class_lut_1k[alloc_size];
 #else
     return g_size_to_class_lut_1k[size];        // 1..1024: single load
 #endif
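
For reference, a hedged usage sketch of the new mapping; the class indices come
from the comments in this diff, and the wrapper function is illustrative:

    #include <stdio.h>

    /* Assumes the header above is included. Per the diff's comments, a 64B
     * request becomes 65B with the header and maps to class 4 (128B blocks),
     * while 1024B is intended to stay in Tiny as headerless class 7. */
    static void tiny_size_to_class_examples(void) {
        printf("64B   -> class %d\n", hak_tiny_size_to_class(64));
        printf("1024B -> class %d\n", hak_tiny_size_to_class(1024));
        printf("2048B -> class %d\n", hak_tiny_size_to_class(2048));  /* > TINY_MAX_SIZE: -1, route to Mid */
    }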