Tiny: fix remote sentinel leak → SEGV; add defense-in-depth; PoolTLS: refill-boundary remote drain; build UX help; quickstart docs

Summary
- Fix SEGV root cause in Tiny random_mixed: TINY_REMOTE_SENTINEL leaked from Remote queue into freelist/TLS SLL.
- Clear/guard sentinel at the single boundary where Remote merges to freelist.
- Add minimal defense-in-depth in freelist_pop and TLS SLL pop.
- Silence verbose prints behind debug gates to reduce noise in release runs.
- Pool TLS: integrate Remote Queue drain at refill boundary to avoid unnecessary backend carve/OS calls when possible.
- DX: strengthen build.sh with help/list/verify and add docs/BUILDING_QUICKSTART.md.

Details
- core/superslab/superslab_inline.h: guard head/node against TINY_REMOTE_SENTINEL; sanitize node[0] when splicing local chain; only print diagnostics when debug guard is enabled.
- core/slab_handle.h: freelist_pop breaks on sentinel head (fail-fast under strict).
- core/tiny_alloc_fast_inline.h: TLS SLL pop breaks on sentinel head (rare branch).
- core/tiny_superslab_free.inc.h: sentinel scan log behind debug guard.
- core/pool_refill.c: try pool_remote_pop_chain() before backend carve in pool_refill_and_alloc().
- core/tiny_adaptive_sizing.c: default adaptive logs off; enable via HAKMEM_ADAPTIVE_LOG=1.
- build.sh: add help/list/verify; EXTRA_MAKEFLAGS passthrough; echo pinned flags.
- docs/BUILDING_QUICKSTART.md: add one‑pager for targets/flags/env/perf/strace.

Verification (high level)
- Tiny random_mixed 10k 256/1024: SEGV resolved; runs complete.
- Pool TLS 1T/4T perf: HAKMEM >= system (≈ +0.7% 1T, ≈ +2.9% 4T); syscall counts ~10–13.

Known issues (to address next)
- Tiny random_mixed perf is weak vs system:
  - 1T/500k/256: cycles/op ≈ 240 vs ~47 (≈5× slower), IPC ≈0.92, branch‑miss ≈11%.
  - 1T/500k/1024: cycles/op ≈ 149 vs ~53 (≈2.8× slower), IPC ≈0.82, branch‑miss ≈10.5%.
  - Hypothesis: frequent SuperSlab path for class7 (fast_cap=0), branchy refill/adopt, and hot-path divergence.
- Proposed next steps:
  - Introduce fast_cap>0 for class7 (bounded TLS SLL) and a simpler batch refill.
  - Add env‑gated Remote Side OFF for 1T A/B (reduce side-table and guards).
  - Revisit likely/unlikely and unify adopt boundary sequencing (drain→bind→acquire) for Tiny.
This commit is contained in:
Moe Charm (CI)
2025-11-09 16:49:34 +09:00
parent 0da9f8cba3
commit 83bb8624f6
8 changed files with 234 additions and 79 deletions

View File

@ -1,29 +1,114 @@
#!/usr/bin/env bash #!/usr/bin/env bash
# build.sh - Unified build wrapper to eliminate flag drift # build.sh - Unified build wrapper (Phase 7 + Pool TLS) with discoverable help
#
# Quick use:
# ./build.sh bench_pool_tls_hakmem # Recommended target
# ./build.sh help # Show usage/hints/ENV
# ./build.sh verify bench_pool_tls_hakmem # Check freshness
#
# Notes:
# - Flags are pinned to avoid drift (see below). You can pass extra make flags via
# EXTRA_MAKEFLAGS, e.g. EXTRA_MAKEFLAGS="HAKMEM_DEBUG_VERBOSE=1" ./build.sh <target>
# - Arena ENV (Pool TLS): HAKMEM_POOL_TLS_ARENA_MB_INIT/MAX/GROWTH_LEVELS
# - See also: docs/BUILDING_QUICKSTART.md
set -euo pipefail set -euo pipefail
TARGET="${1:-bench_mid_large_mt_hakmem}" TARGET="${1:-bench_mid_large_mt_hakmem}"
usage() {
cat <<'USAGE'
=========================================
HAKMEM Build Script (help)
=========================================
Usage:
./build.sh <target>
./build.sh help # Show this help
./build.sh list # Show common targets
./build.sh verify <bin> # Verify binary freshness
Common targets (curated):
- bench_random_mixed_hakmem
- bench_pool_tls_hakmem
- bench_mid_large_mt_hakmem
- larson_hakmem
Pinned build flags (by default):
POOL_TLS_PHASE1=1 HEADER_CLASSIDX=1 AGGRESSIVE_INLINE=1 PREWARM_TLS=1 POOL_TLS_PREWARM=1
Extra flags (optional):
Use environment var EXTRA_MAKEFLAGS, e.g.:
EXTRA_MAKEFLAGS="HAKMEM_DEBUG_VERBOSE=1" ./build.sh bench_pool_tls_hakmem
EXTRA_MAKEFLAGS="HAKMEM_TINY_SAFE_FREE=1" ./build.sh bench_random_mixed_hakmem
Pool TLS Arena ENV (A/B friendly):
export HAKMEM_POOL_TLS_ARENA_MB_INIT=2 # default 1
export HAKMEM_POOL_TLS_ARENA_MB_MAX=16 # default 8
export HAKMEM_POOL_TLS_ARENA_GROWTH_LEVELS=4 # default 3
Verify & perf tips:
make print-flags
./verify_build.sh <bin>
perf stat -e cycles,instructions,branches,branch-misses,cache-misses -r 3 -- ./<bin> ...
strace -e trace=mmap,madvise,munmap -c ./<bin> ...
USAGE
}
list_targets() {
cat <<'LIST'
Common build targets:
bench_random_mixed_hakmem # Tiny 1T mixed
bench_pool_tls_hakmem # Pool TLS (852KB)
bench_mid_large_mt_hakmem # Mid-Large MT (832KB)
larson_hakmem # Larson mixed
bench_random_mixed_system # glibc baseline
bench_pool_tls_system # glibc baseline (PoolTLS workload)
bench_mid_large_mt_system # glibc baseline (Mid-Large workload)
LIST
}
if [[ "${TARGET}" == "help" || "${TARGET}" == "-h" || "${TARGET}" == "--help" ]]; then
usage
exit 0
fi
if [[ "${TARGET}" == "list" ]]; then
list_targets
exit 0
fi
if [[ "${TARGET}" == "verify" ]]; then
BIN="${2:-}"
if [[ -z "${BIN}" ]]; then
echo "Usage: ./build.sh verify <binary>" >&2
exit 2
fi
./verify_build.sh "${BIN}"
exit 0
fi
echo "=========================================" echo "========================================="
echo " HAKMEM Build Script" echo " HAKMEM Build Script"
echo " Target: ${TARGET}" echo " Target: ${TARGET}"
echo " Flags: POOL_TLS_PHASE1=1 POOL_TLS_PREWARM=1 HEADER_CLASSIDX=1 AGGRESSIVE_INLINE=1 PREWARM_TLS=1 ${EXTRA_MAKEFLAGS:-}"
echo "=========================================" echo "========================================="
# Always clean to avoid stale objects when toggling flags # Always clean to avoid stale objects when toggling flags
make clean >/dev/null 2>&1 || true make clean >/dev/null 2>&1 || true
# Phase 7 + Pool TLS Phase 1.5b defaults # Phase 7 + Pool TLS defaults (pinned) + user extras
make \ make \
POOL_TLS_PHASE1=1 \ POOL_TLS_PHASE1=1 \
POOL_TLS_PREWARM=1 \ POOL_TLS_PREWARM=1 \
HEADER_CLASSIDX=1 \ HEADER_CLASSIDX=1 \
AGGRESSIVE_INLINE=1 \ AGGRESSIVE_INLINE=1 \
PREWARM_TLS=1 \ PREWARM_TLS=1 \
${EXTRA_MAKEFLAGS:-} \
"${TARGET}" "${TARGET}"
echo "" echo ""
echo "=========================================" echo "========================================="
echo " ✅ Build successful" echo " ✅ Build successful"
echo " Run: ./${TARGET}" echo " Run: ./${TARGET}"
echo " Tip: ./build.sh help # flags, ENV, targets"
echo "=========================================" echo "========================================="

View File

@ -1,5 +1,7 @@
#include "pool_refill.h" #include "pool_refill.h"
#include "pool_tls.h" #include "pool_tls.h"
#include "pool_tls_arena.h"
#include "pool_tls_remote.h"
#include <sys/mman.h> #include <sys/mman.h>
#include <stdint.h> #include <stdint.h>
#include <errno.h> #include <errno.h>
@ -12,6 +14,26 @@ void* pool_refill_and_alloc(int class_idx) {
int count = pool_get_refill_count(class_idx); int count = pool_get_refill_count(class_idx);
if (count <= 0) return NULL; if (count <= 0) return NULL;
// Refill boundary integration: try draining remote frees first.
// If we can satisfy from remote queue, avoid backend carve (OS pressure).
{
void* rchain = NULL;
int rgot = pool_remote_pop_chain(class_idx, count, &rchain);
if (rgot > 0 && rchain) {
// Pop one to return, install the rest into TLS
void* ret = rchain;
rchain = *(void**)rchain;
rgot--;
if (rgot > 0 && rchain) {
pool_install_chain(class_idx, rchain, rgot);
}
#if POOL_USE_HEADERS
*((uint8_t*)ret - POOL_HEADER_SIZE) = POOL_MAGIC | class_idx;
#endif
return ret;
}
}
// Batch allocate from existing Pool backend // Batch allocate from existing Pool backend
void* chain = backend_batch_carve(class_idx, count); void* chain = backend_batch_carve(class_idx, count);
if (!chain) return NULL; // OOM if (!chain) return NULL; // OOM
@ -34,65 +56,36 @@ void* pool_refill_and_alloc(int class_idx) {
return ret; return ret;
} }
// Backend batch carve - Phase 1: Direct mmap allocation // Backend batch carve - Phase 1.5a: TLS Arena with chunk carving
void* backend_batch_carve(int class_idx, int count) { void* backend_batch_carve(int class_idx, int count) {
if (class_idx < 0 || class_idx >= POOL_SIZE_CLASSES || count <= 0) { if (class_idx < 0 || class_idx >= POOL_SIZE_CLASSES || count <= 0) {
return NULL; return NULL;
} }
// Get the class size // Allocate blocks array on stack
size_t block_size = POOL_CLASS_SIZES[class_idx]; void* blocks[64]; // Max refill count is 64
if (count > 64) count = 64;
// For Phase 1: Allocate a single large chunk via mmap // Carve from TLS Arena (Phase 1.5a)
// and carve it into blocks int carved = arena_batch_carve(class_idx, blocks, count);
#if POOL_USE_HEADERS if (carved == 0) {
size_t total_block_size = block_size + POOL_HEADER_SIZE; return NULL; // OOM
#else
size_t total_block_size = block_size;
#endif
// Allocate enough for all requested blocks
size_t total_size = total_block_size * count;
// Round up to page size
size_t page_size = 4096;
total_size = (total_size + page_size - 1) & ~(page_size - 1);
// Allocate memory via mmap
void* chunk = mmap(NULL, total_size, PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (chunk == MAP_FAILED) {
return NULL;
} }
// Carve into blocks and chain them // Chain the carved blocks
void* head = NULL; void* head = NULL;
void* tail = NULL; void* tail = NULL;
char* ptr = (char*)chunk;
for (int i = 0; i < count; i++) { for (int i = 0; i < carved; i++) {
#if POOL_USE_HEADERS void* block = blocks[i];
// Skip header space - user data starts after header
void* user_ptr = ptr + POOL_HEADER_SIZE;
#else
void* user_ptr = ptr;
#endif
// Chain the blocks // Chain the block
if (!head) { if (!head) {
head = user_ptr; head = block;
tail = user_ptr; tail = block;
} else { } else {
*(void**)tail = user_ptr; *(void**)tail = block;
tail = user_ptr; tail = block;
}
// Move to next block
ptr += total_block_size;
// Stop if we'd go past the allocated chunk
if ((ptr + total_block_size) > ((char*)chunk + total_size)) {
break;
} }
} }
@ -102,4 +95,4 @@ void* backend_batch_carve(int class_idx, int count) {
} }
return head; return head;
} }

View File

@ -266,6 +266,17 @@ static inline void* slab_freelist_pop(SlabHandle* h) {
} }
void* ptr = h->meta->freelist; void* ptr = h->meta->freelist;
// Option B: Defense-in-depth against sentinel leakage
if (__builtin_expect((uintptr_t)ptr == TINY_REMOTE_SENTINEL, 0)) {
if (__builtin_expect(g_debug_remote_guard, 0)) {
fprintf(stderr, "[FREELIST_POP] sentinel detected in freelist (cls=%u slab=%u) -> break chain\n",
h->ss ? h->ss->size_class : 0u,
(unsigned)h->slab_idx);
}
h->meta->freelist = NULL; // break the chain to avoid propagating corruption
if (__builtin_expect(g_tiny_safe_free_strict, 0)) { raise(SIGUSR2); }
return NULL;
}
if (ptr) { if (ptr) {
void* next = *(void**)ptr; void* next = *(void**)ptr;
h->meta->freelist = next; h->meta->freelist = next;

View File

@ -344,6 +344,18 @@ static inline void _ss_remote_drain_to_freelist_unsafe(SuperSlab* ss, int slab_i
_Atomic(uintptr_t)* head = &ss->remote_heads[slab_idx]; _Atomic(uintptr_t)* head = &ss->remote_heads[slab_idx];
uintptr_t p = atomic_exchange_explicit(head, (uintptr_t)NULL, memory_order_acq_rel); uintptr_t p = atomic_exchange_explicit(head, (uintptr_t)NULL, memory_order_acq_rel);
if (p == 0) return; if (p == 0) return;
// Option A: Fail-fast guard against sentinel leaking into freelist
if (__builtin_expect(p == TINY_REMOTE_SENTINEL, 0)) {
if (__builtin_expect(g_debug_remote_guard, 0)) {
fprintf(stderr, "[REMOTE_DRAIN] head is sentinel! cls=%u slab=%d head=%p\n",
ss ? ss->size_class : 0u,
slab_idx,
(void*)p);
}
if (__builtin_expect(g_tiny_safe_free_strict, 0)) { raise(SIGUSR2); }
// Drop this drain attempt to prevent corrupting freelist
return;
}
uint32_t drained = 0; uint32_t drained = 0;
uintptr_t base = (uintptr_t)ss; uintptr_t base = (uintptr_t)ss;
@ -369,6 +381,15 @@ static inline void _ss_remote_drain_to_freelist_unsafe(SuperSlab* ss, int slab_i
} }
} }
void* node = (void*)p; void* node = (void*)p;
// Additional defensive check (should be redundant with head guard)
if (__builtin_expect((uintptr_t)node == TINY_REMOTE_SENTINEL, 0)) {
if (__builtin_expect(g_debug_remote_guard, 0)) {
fprintf(stderr, "[REMOTE_DRAIN] node sentinel detected, abort chain (cls=%u slab=%d)\n",
ss ? ss->size_class : 0u, slab_idx);
}
if (__builtin_expect(g_tiny_safe_free_strict, 0)) { raise(SIGUSR2); }
break;
}
uintptr_t next = tiny_remote_side_get(ss, slab_idx, node); uintptr_t next = tiny_remote_side_get(ss, slab_idx, node);
tiny_remote_watch_note("drain_pull", ss, slab_idx, node, 0xA238u, drain_tid, 0); tiny_remote_watch_note("drain_pull", ss, slab_idx, node, 0xA238u, drain_tid, 0);
if (__builtin_expect(g_remote_side_enable, 0)) { if (__builtin_expect(g_remote_side_enable, 0)) {
@ -378,20 +399,24 @@ static inline void _ss_remote_drain_to_freelist_unsafe(SuperSlab* ss, int slab_i
uintptr_t observed = atomic_load_explicit((_Atomic uintptr_t*)node, memory_order_relaxed); uintptr_t observed = atomic_load_explicit((_Atomic uintptr_t*)node, memory_order_relaxed);
tiny_remote_report_corruption("drain", node, observed); tiny_remote_report_corruption("drain", node, observed);
TinySlabMeta* meta = &ss->slabs[slab_idx]; TinySlabMeta* meta = &ss->slabs[slab_idx];
fprintf(stderr, if (__builtin_expect(g_debug_remote_guard, 0)) {
"[REMOTE_SENTINEL-DRAIN] cls=%u slab=%d node=%p drained=%u observed=0x%016" PRIxPTR " owner=%u used=%u freelist=%p\n", fprintf(stderr,
ss->size_class, "[REMOTE_SENTINEL-DRAIN] cls=%u slab=%d node=%p drained=%u observed=0x%016" PRIxPTR " owner=%u used=%u freelist=%p\n",
slab_idx, ss->size_class,
node, slab_idx,
drained, node,
observed, drained,
meta->owner_tid, observed,
(unsigned)meta->used, meta->owner_tid,
meta->freelist); (unsigned)meta->used,
meta->freelist);
}
if (g_tiny_safe_free_strict) { raise(SIGUSR2); return; } if (g_tiny_safe_free_strict) { raise(SIGUSR2); return; }
} }
tiny_remote_side_clear(ss, slab_idx, node); tiny_remote_side_clear(ss, slab_idx, node);
} }
// Always sanitize node header before linking into freelist (defense-in-depth)
// Overwrite any stale sentinel/value in node[0] with the local chain link.
tiny_remote_watch_note("drain_link", ss, slab_idx, node, 0xA239u, drain_tid, 0); tiny_remote_watch_note("drain_link", ss, slab_idx, node, 0xA239u, drain_tid, 0);
tiny_remote_track_on_remote_drain(ss, slab_idx, node, "remote_drain", drain_tid); tiny_remote_track_on_remote_drain(ss, slab_idx, node, "remote_drain", drain_tid);
if (__builtin_expect(g_debug_remote_guard && drained < 3, 0)) { if (__builtin_expect(g_debug_remote_guard && drained < 3, 0)) {

View File

@ -12,8 +12,8 @@ __thread TLSCacheStats g_tls_cache_stats[TINY_NUM_CLASSES];
// Global enable flag (default: enabled, can disable via env) // Global enable flag (default: enabled, can disable via env)
int g_adaptive_sizing_enabled = 1; int g_adaptive_sizing_enabled = 1;
// Logging enable flag (default: enabled for debugging) // Logging enable flag (default: disabled; enable via HAKMEM_ADAPTIVE_LOG=1)
static int g_adaptive_logging_enabled = 1; static int g_adaptive_logging_enabled = 0;
// Forward declaration for draining blocks // Forward declaration for draining blocks
extern void tiny_superslab_return_block(void* ptr, int class_idx); extern void tiny_superslab_return_block(void* ptr, int class_idx);

View File

@ -8,6 +8,7 @@
#include <stddef.h> #include <stddef.h>
#include "hakmem_build_flags.h" #include "hakmem_build_flags.h"
#include "tiny_remote.h" // for TINY_REMOTE_SENTINEL (defense-in-depth)
// External TLS variables (defined in hakmem_tiny.c) // External TLS variables (defined in hakmem_tiny.c)
extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES]; extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES];
@ -42,12 +43,19 @@ extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
#define TINY_ALLOC_FAST_POP_INLINE(class_idx, ptr_out) do { \ #define TINY_ALLOC_FAST_POP_INLINE(class_idx, ptr_out) do { \
void* _head = g_tls_sll_head[(class_idx)]; \ void* _head = g_tls_sll_head[(class_idx)]; \
if (__builtin_expect(_head != NULL, 1)) { \ if (__builtin_expect(_head != NULL, 1)) { \
void* _next = *(void**)_head; \ if (__builtin_expect((uintptr_t)_head == TINY_REMOTE_SENTINEL, 0)) { \
g_tls_sll_head[(class_idx)] = _next; \ /* Break the chain defensively if sentinel leaked into TLS SLL */ \
if (g_tls_sll_count[(class_idx)] > 0) { \ g_tls_sll_head[(class_idx)] = NULL; \
g_tls_sll_count[(class_idx)]--; \ if (g_tls_sll_count[(class_idx)] > 0) g_tls_sll_count[(class_idx)]--; \
(ptr_out) = NULL; \
} else { \
void* _next = *(void**)_head; \
g_tls_sll_head[(class_idx)] = _next; \
if (g_tls_sll_count[(class_idx)] > 0) { \
g_tls_sll_count[(class_idx)]--; \
} \
(ptr_out) = _head; \
} \ } \
(ptr_out) = _head; \
} else { \ } else { \
(ptr_out) = NULL; \ (ptr_out) = NULL; \
} \ } \

View File

@ -198,19 +198,21 @@ static inline void hak_tiny_free_superslab(void* ptr, SuperSlab* ss) {
tiny_debug_ring_record(TINY_RING_EVENT_REMOTE_INVALID, (uint16_t)ss->size_class, (void*)cur, aux); tiny_debug_ring_record(TINY_RING_EVENT_REMOTE_INVALID, (uint16_t)ss->size_class, (void*)cur, aux);
uintptr_t observed = atomic_load_explicit((_Atomic uintptr_t*)(void*)cur, memory_order_relaxed); uintptr_t observed = atomic_load_explicit((_Atomic uintptr_t*)(void*)cur, memory_order_relaxed);
tiny_remote_report_corruption("scan", (void*)cur, observed); tiny_remote_report_corruption("scan", (void*)cur, observed);
fprintf(stderr, if (__builtin_expect(g_debug_remote_guard, 0)) {
"[REMOTE_SENTINEL] cls=%u slab=%d cur=%p head=%p ptr=%p scanned=%d observed=0x%016" PRIxPTR " owner=%u used=%u freelist=%p remote_head=%p\n", fprintf(stderr,
ss->size_class, "[REMOTE_SENTINEL] cls=%u slab=%d cur=%p head=%p ptr=%p scanned=%d observed=0x%016" PRIxPTR " owner=%u used=%u freelist=%p remote_head=%p\n",
slab_idx, ss->size_class,
(void*)cur, slab_idx,
(void*)head, (void*)cur,
ptr, (void*)head,
scanned, ptr,
observed, scanned,
meta->owner_tid, observed,
(unsigned)meta->used, meta->owner_tid,
meta->freelist, (unsigned)meta->used,
(void*)atomic_load_explicit(&ss->remote_heads[slab_idx], memory_order_relaxed)); meta->freelist,
(void*)atomic_load_explicit(&ss->remote_heads[slab_idx], memory_order_relaxed));
}
if (g_tiny_safe_free_strict) { raise(SIGUSR2); return; } if (g_tiny_safe_free_strict) { raise(SIGUSR2); return; }
break; break;
} }

View File

@ -0,0 +1,31 @@
# BUILDING Quickstart
Oneliner (recommended)
- `./build.sh <target>`
- Pins: `POOL_TLS_PHASE1=1 HEADER_CLASSIDX=1 AGGRESSIVE_INLINE=1 PREWARM_TLS=1 POOL_TLS_PREWARM=1`
- Help/targets: `./build.sh help`, `./build.sh list`
- Verify freshness: `./build.sh verify <binary>`
Common targets
- `bench_random_mixed_hakmem` (Tiny 1T mixed)
- `bench_pool_tls_hakmem` (Pool TLS 852KB)
- `bench_mid_large_mt_hakmem` (MidLarge MT 832KB)
- `larson_hakmem` (Larson)
- System baselines: `bench_*_system`
Pool TLS Arena ENV (A/B)
- `export HAKMEM_POOL_TLS_ARENA_MB_INIT=2` # default 1
- `export HAKMEM_POOL_TLS_ARENA_MB_MAX=16` # default 8
- `export HAKMEM_POOL_TLS_ARENA_GROWTH_LEVELS=4` # default 3
Runtime safety/verbosity (optional)
- `EXTRA_MAKEFLAGS="HAKMEM_TINY_SAFE_FREE=1" ./build.sh <target>`
- `EXTRA_MAKEFLAGS="HAKMEM_DEBUG_VERBOSE=1" ./build.sh <target>`
Perf & strace
- `perf stat -e cycles,instructions,branches,branch-misses,cache-misses -r 3 -- ./<bin> ...`
- `strace -e trace=mmap,madvise,munmap -c ./<bin> ...`
Troubleshooting
- `make print-flags` to inspect flags
- `./verify_build.sh <bin>` to check binary freshness