## Root Cause Analysis (GPT5)

**Physical Layout Constraints**:
- Class 0: 8B = [1B header][7B payload] → offset 1 needs 1+8 = 9B = ❌ IMPOSSIBLE
- Classes 1-6: >=16B = [1B header][15B+ payload] → offset 1 = ✅ POSSIBLE
- Class 7: 1KB → offset 0 (compatibility)

**Correct Specification**:
- HAKMEM_TINY_HEADER_CLASSIDX != 0:
  - Classes 0, 7: next at offset 0 (overwrites the header while the block sits on a freelist)
  - Classes 1-6: next at offset 1 (after the header)
- HAKMEM_TINY_HEADER_CLASSIDX == 0:
  - All classes: next at offset 0

**Previous Bug**:
- Attempted an "ALL classes at offset 1" unification
- Class 0 with offset 1 caused an immediate SEGV (9B needed > 8B block size)
- A mix of 2-arg and 3-arg APIs caused confusion

## Fixes Applied

### 1. Restored 3-Argument Box API (core/box/tiny_next_ptr_box.h)

```c
// Correct signatures
void tiny_next_write(int class_idx, void* base, void* next_value);
void* tiny_next_read(int class_idx, const void* base);

// Correct offset calculation
size_t offset = (class_idx == 0 || class_idx == 7) ? 0 : 1;
```

A standalone sketch of this API appears in the appendix below.

### 2. Updated 123+ Call Sites Across 34 Files

- hakmem_tiny_hot_pop_v4.inc.h (4 locations)
- hakmem_tiny_fastcache.inc.h (3 locations)
- hakmem_tiny_tls_list.h (12 locations)
- superslab_inline.h (5 locations)
- tiny_fastcache.h (3 locations)
- ptr_trace.h (macro definitions)
- tls_sll_box.h (2 locations)
- plus 27 additional files

Pattern: `tiny_next_read(base)` → `tiny_next_read(class_idx, base)`
Pattern: `tiny_next_write(base, next)` → `tiny_next_write(class_idx, base, next)`

### 3. Added Sentinel Detection Guards

- tiny_fast_push(): reject nodes with the sentinel in ptr or ptr->next
- tls_list_push(): reject nodes with the sentinel in ptr or ptr->next
- Defense in depth against remote-free sentinel leakage (see the guard sketch in the appendix below)

## Verification (GPT5 Report)

**Test Command**: `./out/release/bench_random_mixed_hakmem --iterations=70000`

**Results**:
- ✅ Main loop completed successfully
- ✅ Drain phase completed successfully
- ✅ NO SEGV (the previous crash at iteration 66151 is fixed)
- ℹ️ Final log line "tiny_alloc(1024) failed" is the normal fallback to the Mid/ACE layers

**Analysis**:
- Class 0 immediate SEGV: ✅ RESOLVED (correct offset 0 now used)
- 66K-iteration crash: ✅ RESOLVED (offset consistency fixed)
- Box API conflicts: ✅ RESOLVED (unified 3-arg API)

## Technical Details

### Offset Logic Justification

```
Class 0: 8B block    → next pointer (8B) fits ONLY at offset 0
Class 1: 16B block   → next pointer (8B) fits at offset 1 (after 1B header)
Class 2: 32B block   → next pointer (8B) fits at offset 1
...
Class 6: 512B block  → next pointer (8B) fits at offset 1
Class 7: 1024B block → offset 0 for legacy compatibility
```

### Files Modified (Summary)

- Core API: `box/tiny_next_ptr_box.h`
- Hot paths: `hakmem_tiny_hot_pop*.inc.h`, `tiny_fastcache.h`
- TLS layers: `hakmem_tiny_tls_list.h`, `hakmem_tiny_tls_ops.h`
- SuperSlab: `superslab_inline.h`, `tiny_superslab_*.inc.h`
- Refill: `hakmem_tiny_refill.inc.h`, `tiny_refill_opt.h`
- Free paths: `tiny_free_magazine.inc.h`, `tiny_superslab_free.inc.h`
- Documentation: multiple Phase E3 reports

## Remaining Work

None for the Box API offset bugs; all structural issues are resolved.

Future enhancements (non-critical):
- Periodically run `grep -RF '*(void**)' core/` to detect direct pointer access that bypasses the Box API
- Enforce Box API usage via static analysis
- Document the offset rationale in the architecture docs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
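## Appendix: Box API Offset Sketch

A minimal, self-contained sketch of the restored 3-argument Box API under the offset rule quoted above. The `memcpy`-based access and the helper `tiny_next_offset` are illustrative assumptions; the real `box/tiny_next_ptr_box.h` may be implemented differently. `memcpy` is used because the offset-1 slot for classes 1-6 is an unaligned 8-byte location.

```c
#include <stddef.h>
#include <string.h>

/* Sketch only: tiny_next_offset is a hypothetical helper.
 * Class 0 (8B) is too small to hold an 8B next pointer at offset 1,
 * and class 7 (1KB) keeps offset 0 for legacy compatibility, so both
 * store next at offset 0; classes 1-6 store it after the 1B header. */
static inline size_t tiny_next_offset(int class_idx) {
    return (class_idx == 0 || class_idx == 7) ? (size_t)0 : (size_t)1;
}

static inline void tiny_next_write(int class_idx, void* base, void* next_value) {
    /* memcpy tolerates the unaligned offset-1 store for classes 1-6. */
    memcpy((char*)base + tiny_next_offset(class_idx), &next_value, sizeof(void*));
}

static inline void* tiny_next_read(int class_idx, const void* base) {
    void* next;
    memcpy(&next, (const char*)base + tiny_next_offset(class_idx), sizeof(void*));
    return next;
}
```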
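## Appendix: Sentinel Guard Sketch

A hedged illustration of the defense-in-depth guard described in fix 3. The sentinel constant and the helper name `tiny_push_guard_ok` are hypothetical; only the rule itself (reject a node when either the pointer or its recorded next carries the sentinel) comes from the report.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sentinel constant; the real value lives in the
 * remote-free implementation, not in this report. */
#define TINY_REMOTE_SENTINEL ((uintptr_t)0xB0B0B0B0B0B0B0B0ull)

/* Returns false when ptr itself or its stored next pointer is the
 * sentinel, so callers like tiny_fast_push()/tls_list_push() can
 * refuse to enqueue a leaked remote-free marker. */
static inline bool tiny_push_guard_ok(int class_idx, void* ptr) {
    if ((uintptr_t)ptr == TINY_REMOTE_SENTINEL) return false;
    if ((uintptr_t)tiny_next_read(class_idx, ptr) == TINY_REMOTE_SENTINEL) return false;
    return true;
}
```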
## Appendix: hakmem_tiny_assist.inc.h

One of the migrated call sites: the targeted remote-drain helper, which links remotely freed blocks back onto the owner's freelist through the 3-argument Box API.

```c
// hakmem_tiny_assist.inc.h
// Tiny: helper routines to assist targeted remote-drain
// Keep inline to avoid call overhead on hot slow-paths.

#ifndef HAKMEM_TINY_ASSIST_INC_H
#define HAKMEM_TINY_ASSIST_INC_H

#include <stdatomic.h>
#include "hakmem_tiny_superslab.h"
#include "hakmem_tiny_ss_target.h"
#include "hakmem_tiny_drain_ema.inc.h"
#include "box/tiny_next_ptr_box.h" // Box API: next pointer read/write

// Drain remotely freed blocks for slabs owned by the calling thread.
// Processes up to max_items target superslabs; returns the number of
// target superslabs processed.
static inline uint16_t tiny_assist_drain_owned(int class_idx, int max_items) {
  int drained_sets = 0;
  while (drained_sets < max_items) {
    SuperSlab* t = ss_target_pop(class_idx);
    if (!t) break;
    uint32_t mytid = tiny_self_u32();
    for (int i = 0; i < SLABS_PER_SUPERSLAB; i++) {
      TinySlabMeta* m = &t->slabs[i];
      if (m->owner_tid != mytid) continue; // only drain slabs we own
      TinySlabPrefix* pref = tiny_slab_prefix(t, i);
      // Remote-free MPSC list head and pending count live in the
      // slab prefix's reserved words.
      _Atomic(uintptr_t)* rhead = (_Atomic(uintptr_t)*)&pref->reserved[0];
      _Atomic(uint32_t)* rcount = (_Atomic(uint32_t)*)&pref->reserved[1];
      uint32_t pending = atomic_load_explicit(rcount, memory_order_relaxed);
      if (pending == 0) continue;
      // Detach the whole remote chain at once; acquire pairs with the
      // releasing push on the remote-free side.
      uintptr_t chain = atomic_exchange_explicit(rhead, 0, memory_order_acquire);
      uint32_t cnt = atomic_exchange_explicit(rcount, 0, memory_order_relaxed);
      // Splice each remotely freed node back onto the local freelist
      // via the Box API (class-dependent next-pointer offset).
      while (chain && cnt > 0) {
        void* node = (void*)chain;
        uintptr_t next = (uintptr_t)tiny_next_read(class_idx, node);
        tiny_next_write(class_idx, node, m->freelist);
        m->freelist = node;
        if (m->used > 0) m->used--;
        ss_active_dec_one(t);
        chain = next;
        cnt--;
      }
    }
    drained_sets++;
  }
  return (uint16_t)drained_sets;
}

// Auto-sized assist based on EMA
static inline void tiny_assist_drain_auto(int class_idx) {
  uint16_t want = tiny_drain_target_from_ema(class_idx);
  uint16_t got = tiny_assist_drain_owned(class_idx, want);
  tiny_drain_ema_update(class_idx, got);
}

// Periodic assist from free-path (very lightweight)
static __thread uint32_t g_tls_free_tick[TINY_NUM_CLASSES];

static inline void tiny_assist_maybe_drain_on_free(int class_idx) {
  uint32_t k = ++g_tls_free_tick[class_idx];
  if ((k & 0x3Fu) == 0u) { // every 64 frees
    uint16_t want = tiny_drain_target_from_ema(class_idx);
    if (want > 8u) want = 8u; // keep it small on free path
    uint16_t got = tiny_assist_drain_owned(class_idx, want);
    tiny_drain_ema_update(class_idx, got);
  }
}

#endif // HAKMEM_TINY_ASSIST_INC_H
```
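For context, here is a sketch of the producer side that `tiny_assist_drain_owned()` consumes. It assumes remote frees push onto an MPSC list whose head and count live in the slab prefix's `reserved[]` words, matching what the drain reads back; the function name `tiny_remote_push` is hypothetical. Note how the `memory_order_release` CAS here pairs with the `memory_order_acquire` exchange in the drain.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical remote-free producer: a non-owner thread returns
 * `node` to its home slab by pushing it onto the slab's remote list. */
static inline void tiny_remote_push(_Atomic(uintptr_t)* rhead,
                                    _Atomic(uint32_t)* rcount,
                                    int class_idx, void* node) {
    uintptr_t old_head = atomic_load_explicit(rhead, memory_order_relaxed);
    do {
        /* Link node -> current head through the Box API so the
         * class-dependent offset matches what the drain reads back. */
        tiny_next_write(class_idx, node, (void*)old_head);
    } while (!atomic_compare_exchange_weak_explicit(
                 rhead, &old_head, (uintptr_t)node,
                 memory_order_release, memory_order_relaxed));
    atomic_fetch_add_explicit(rcount, 1u, memory_order_relaxed);
}
```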