- Root cause: header-based class indexing (HEADER_CLASSIDX=1) wrote a 1-byte header during allocation, but linear carve/refill and the initial slab capacity still used bare class block sizes. This mismatch could overrun the slab's usable space and corrupt freelists, causing a reproducible SEGV at ~100k iterations.

Changes
- Superslab: compute capacity with the effective stride (block_size + header for classes 0..6; class 7 remains headerless) in superslab_init_slab(). Add a debug-only bound check in superslab_alloc_from_slab() to fail fast if a carve would exceed usable bytes. (A minimal sketch of the stride/capacity idea follows these notes.)
- Refill (non-P0 and P0): use the header-aware stride for all linear carving and TLS window bump operations. Ensure alignment/validation in tiny_refill_opt.h also uses the stride, not the raw class size.
- Drain: keep the existing defense-in-depth for the remote sentinel and sanitize nodes before splicing them into the freelist (already present).

Notes
- This unifies the memory layout across alloc/linear-carve/refill under a single stride definition and keeps class 7 (1024B) headerless as designed.
- Debug builds add fail-fast checks; release builds remain lean.

Next
- Re-run the Tiny benches (256B/1024B) in debug to confirm stability, then in release. If any crash persists, bisect with HAKMEM_TINY_P0_BATCH_REFILL=0 to isolate the P0 batch carve, and continue reducing branch misses as planned.
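A minimal sketch of the unified-stride idea described above, under stated assumptions: TINY_HEADER_BYTES, HEADERLESS_CLASS, and the function names below are illustrative stand-ins, not the actual HAKMEM identifiers; only the classes-0..6-with-header / class-7-headerless split comes from the change notes.

// Illustrative only -- names are hypothetical, not the real HAKMEM symbols.
#include <stddef.h>
#include <assert.h>

#define TINY_HEADER_BYTES 1   /* 1-byte class-index header (HEADER_CLASSIDX=1) */
#define HEADERLESS_CLASS  7   /* class 7 (1024B) stays headerless by design */

/* Effective stride: block size plus header for classes 0..6, bare block size
 * for the headerless class 7. Alloc, linear carve, and refill must all use
 * this single definition. */
static inline size_t tiny_stride(int class_idx, size_t block_size) {
    return (class_idx == HEADERLESS_CLASS) ? block_size
                                           : block_size + TINY_HEADER_BYTES;
}

/* Capacity derived from the same stride, so the last carved block cannot
 * overrun the slab's usable bytes (the mismatch fixed above). */
static inline size_t slab_capacity(size_t usable_bytes, int class_idx, size_t block_size) {
    return usable_bytes / tiny_stride(class_idx, block_size);
}

/* Debug-only fail-fast check before a linear carve, in the spirit of the
 * bound check added to superslab_alloc_from_slab(). */
static inline void carve_bound_check(size_t carve_offset, size_t stride, size_t usable_bytes) {
#ifndef NDEBUG
    assert(carve_offset + stride <= usable_bytes);
#else
    (void)carve_offset; (void)stride; (void)usable_bytes;
#endif
}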
C header, 17 lines, 526 B:
#ifndef HAKMEM_POOL_TLS_REGISTRY_H
#define HAKMEM_POOL_TLS_REGISTRY_H

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

// Register an arena chunk range with owner thread id and class index
void pool_reg_register(void* base, size_t size, pid_t tid, int class_idx);

// Unregister a previously registered chunk
void pool_reg_unregister(void* base, size_t size, pid_t tid);

// Lookup owner for a pointer; returns 1 if found, 0 otherwise
int pool_reg_lookup(void* ptr, pid_t* tid_out, int* class_idx_out);

#endif // HAKMEM_POOL_TLS_REGISTRY_H
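For context, a hypothetical usage sketch of this registry API. The calling code, the header file name (inferred from the include guard), the thread-id source, and the chunk size are assumptions, not taken from the HAKMEM sources.

// Hypothetical caller -- illustrates the intended register/lookup/unregister
// flow; not actual HAKMEM code.
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include "pool_tls_registry.h"   /* assumed file name, derived from the include guard */

static void registry_example(void) {
    size_t chunk_size = 64 * 1024;
    void*  chunk = mmap(NULL, chunk_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED) return;

    pid_t owner_tid = getpid();   /* stand-in; a real allocator would use its own thread id */
    int   class_idx = 3;          /* arbitrary example class */

    /* Owner thread publishes the chunk range it carved for this class. */
    pool_reg_register(chunk, chunk_size, owner_tid, class_idx);

    /* A free() path can later route any interior pointer back to its owner. */
    pid_t tid; int cls;
    void* p = (char*)chunk + 128;
    if (pool_reg_lookup(p, &tid, &cls)) {
        printf("pointer %p owned by tid %d, class %d\n", p, (int)tid, cls);
    }

    pool_reg_unregister(chunk, chunk_size, owner_tid);
    munmap(chunk, chunk_size);
}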