Phase 12: Shared SuperSlab Pool implementation (WIP - runtime crash)

## Summary
Implemented the Phase 12 Shared SuperSlab Pool (mimalloc-style) to reduce
SuperSlab allocation churn (877 SuperSlabs → 100-200 target).

## Implementation (ChatGPT + Claude)
1. **Metadata changes** (superslab_types.h; sketched after this list):
   - Added class_idx to TinySlabMeta (per-slab dynamic class)
   - Removed size_class from SuperSlab (no longer per-SuperSlab)
   - Changed owner_tid (16-bit) → owner_tid_low (8-bit)

2. **Shared Pool** (hakmem_shared_pool.{h,c}; API sketched after this list):
   - Global pool shared by all size classes
   - shared_pool_acquire_slab() - Get free slab for class_idx
   - shared_pool_release_slab() - Return slab when empty
   - Per-class hints for fast path optimization

3. **Integration** (23 files modified):
   - Updated all ss->size_class → meta->class_idx
   - Updated all meta->owner_tid → meta->owner_tid_low
   - superslab_refill() now uses shared pool
   - Free path releases empty slabs back to pool

4. **Build system** (Makefile):
   - Added hakmem_shared_pool.o to OBJS_BASE and TINY_BENCH_OBJS_BASE
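
For orientation, here is a minimal sketch of the new per-slab metadata and the
shared-pool entry points. Only class_idx, owner_tid_low, and the two function
names come from this commit; the other field widths, the field order, and the
exact signatures are assumptions for illustration, not the actual headers.

```c
#include <stdint.h>

/* Sketch only: freelist/used/capacity widths and order are assumed;
 * class_idx and owner_tid_low are the fields this commit introduces. */
typedef struct TinySlabMeta {
    void*    freelist;       /* per-slab freelist head */
    uint16_t used;           /* blocks currently handed out */
    uint16_t capacity;       /* block capacity for the slab's current class */
    uint8_t  class_idx;      /* NEW: per-slab dynamic size class */
    uint8_t  owner_tid_low;  /* NEW: low 8 bits of owning thread id (was 16-bit owner_tid) */
} TinySlabMeta;

typedef struct SuperSlab SuperSlab;  /* opaque here; defined in superslab_types.h */

/* Shared pool entry points (hakmem_shared_pool.h); signatures are assumed. */
SuperSlab* shared_pool_acquire_slab(int class_idx, int* slab_idx_out); /* get a free slab, retagged to class_idx */
void       shared_pool_release_slab(SuperSlab* ss, int slab_idx);      /* return a fully empty slab to the pool */
```

With this layout, superslab_refill() asks the shared pool for a slab of the
requested class instead of carving a fresh per-class SuperSlab, and the free
path hands fully empty slabs back to the pool, which is what should bring the
SuperSlab count down from 877 toward the 100-200 target.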

## Status: ⚠️ Build OK, Runtime CRASH

**Build**: SUCCESS
- All 23 modified files compile without errors
- Only warning: a type mismatch in the legacy superslab_allocate() path

**Runtime**: SEGFAULT
- Crash location: sll_refill_small_from_ss()
- Exit code: 139 (SIGSEGV)
- Test case: ./bench_random_mixed_hakmem 1000 256 42

## Known Issues
1. **SEGFAULT in refill path** - likely an issue in shared_pool_acquire_slab() or slab initialization (see the probe sketch after this list)
2. **Legacy superslab_allocate()** still exists (type mismatch warning)
3. **Remaining TODOs** from design doc:
   - SuperSlab physical layout integration
   - slab_handle.h cleanup
   - Remove old per-class head implementation
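
As a first debugging probe for issue 1, it may help to validate the slab that
shared_pool_acquire_slab() hands back before sll_refill_small_from_ss()
touches it. This is a hypothetical helper, not existing hakmem code; the name
and return convention are made up, but class_idx, capacity, and slabs[] all
appear in the diff below.

```c
/* Hypothetical fail-fast probe (not part of hakmem): call it in the refill
 * path right after acquiring a slab to see which invariant is violated. */
static inline int debug_check_acquired_slab(SuperSlab* ss, int slab_idx, int class_idx) {
    if (!ss) return 0;                                    /* pool returned nothing */
    TinySlabMeta* meta = &ss->slabs[slab_idx];
    if (meta->class_idx != (uint8_t)class_idx) return 0;  /* slab was not (re)tagged for this class */
    if (meta->capacity == 0) return 0;                    /* superslab_init_slab() never ran for this slab */
    return 1;                                             /* metadata looks sane; the fault is further down */
}
```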

## Next Steps
1. Debug SEGFAULT (gdb backtrace shows sll_refill_small_from_ss)
2. Fix shared_pool_acquire_slab() or superslab_init_slab()
3. Basic functionality test (1K → 100K iterations)
4. Measure SuperSlab count reduction (877 → 100-200)
5. Performance benchmark (+650-860% expected)

## Files Changed (25 files)
core/box/free_local_box.c
core/box/free_remote_box.c
core/box/front_gate_classifier.c
core/hakmem_super_registry.c
core/hakmem_tiny.c
core/hakmem_tiny_bg_spill.c
core/hakmem_tiny_free.inc
core/hakmem_tiny_lifecycle.inc
core/hakmem_tiny_magazine.c
core/hakmem_tiny_query.c
core/hakmem_tiny_refill.inc.h
core/hakmem_tiny_superslab.c
core/hakmem_tiny_superslab.h
core/hakmem_tiny_tls_ops.h
core/slab_handle.h
core/superslab/superslab_inline.h
core/superslab/superslab_types.h
core/tiny_debug.h
core/tiny_free_fast.inc.h
core/tiny_free_magazine.inc.h
core/tiny_remote.c
core/tiny_superslab_alloc.inc.h
core/tiny_superslab_free.inc.h
Makefile

## New Files (3 files)
PHASE12_SHARED_SUPERSLAB_POOL_DESIGN.md
core/hakmem_shared_pool.c
core/hakmem_shared_pool.h

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: ChatGPT <chatgpt@openai.com>
commit 03df05ec75
parent 2b9a03fa8b
Author: Moe Charm (CI)
Date: 2025-11-13 16:33:03 +09:00
29 changed files with 1390 additions and 1302 deletions


@@ -218,14 +218,12 @@ SuperSlab* superslab_refill(int class_idx);
static inline void* superslab_alloc_from_slab(SuperSlab* ss, int slab_idx);
static inline uint32_t sll_cap_for_class(int class_idx, uint32_t mag_cap);
// Forward decl: used by tiny_spec_pop_path before its definition
// Phase 6-1.7: Export for box refactor (Box 5 needs access from hakmem.c)
// Note: Remove 'inline' to provide linkable definition for LTO
// P0 Fix: When P0 is enabled, use sll_refill_batch_from_ss instead
#if HAKMEM_TINY_P0_BATCH_REFILL
// P0 enabled: use batch refill
// P0 enabled: sll_refill_batch_from_ss is defined in hakmem_tiny_refill_p0.inc.h
static inline int sll_refill_batch_from_ss(int class_idx, int max_take);
#else
// P0 disabled: use original refill
// Phase 12: sll_refill_small_from_ss is defined in hakmem_tiny_refill.inc.h
// Only a single implementation exists there; declare here for callers.
#ifdef HAKMEM_TINY_PHASE6_BOX_REFACTOR
int sll_refill_small_from_ss(int class_idx, int max_take);
#else
@@ -452,9 +450,10 @@ static inline void tiny_debug_track_alloc_ret(int cls, void* ptr) {
tiny_failfast_abort_ptr("alloc_ret_slabidx", ss, slab_idx, ptr, "slab_idx_mismatch");
} else {
// Fail-Fast: class vs SuperSlab size_class must be consistent.
if (ss->size_class != cls) {
tiny_failfast_abort_ptr("alloc_ret_cls_mismatch", ss, slab_idx, ptr, "class_mismatch");
}
TinySlabMeta* meta = &ss->slabs[slab_idx];
if (meta->class_idx != (uint8_t)cls) {
tiny_failfast_abort_ptr("alloc_ret_cls_mismatch", ss, slab_idx, ptr, "class_mismatch");
}
size_t blk = g_tiny_class_sizes[cls];
uintptr_t base = (uintptr_t)tiny_slab_base_for(ss, slab_idx);
uintptr_t delta = (uintptr_t)base_ptr - base;
@@ -838,11 +837,11 @@ void ss_partial_publish(int class_idx, SuperSlab* ss) {
// The publishing thread must stop using this SS after publishing.
int cap_pub = ss_slabs_capacity(ss);
for (int s = 0; s < cap_pub; s++) {
uint32_t prev = __atomic_exchange_n(&ss->slabs[s].owner_tid, 0u, __ATOMIC_RELEASE);
uint8_t prev = __atomic_exchange_n(&ss->slabs[s].owner_tid_low, 0u, __ATOMIC_RELEASE);
if (__builtin_expect(g_debug_remote_guard && prev != 0u, 0)) {
uintptr_t aux = ((uintptr_t)s << 32) | (uintptr_t)prev;
tiny_debug_ring_record(TINY_RING_EVENT_OWNER_RELEASE,
(uint16_t)ss->size_class,
(uint16_t)ss->slabs[s].class_idx,
&ss->slabs[s],
aux);
}
@@ -939,8 +938,8 @@ SuperSlab* ss_partial_adopt(int class_idx) {
}
static inline void tiny_tls_bind_slab(TinyTLSSlab* tls, SuperSlab* ss, int slab_idx) {
// Canonical binding:
// - ss->size_class defines block size for this SuperSlab
// Canonical binding under Phase 12:
// - Per-slab TinySlabMeta.class_idx defines class for this slab
// - slab_idx is the owning slab index within ss
// - slab_base is ALWAYS derived from tiny_slab_base_for(ss, slab_idx)
tls->ss = ss;
@@ -1447,7 +1446,8 @@ static inline void* hak_tiny_alloc_superslab_try_fast(int class_idx) {
if (!meta) return NULL;
// Try linear (bump) allocation first when freelist is empty
if (meta->freelist == NULL && meta->used < meta->capacity && tls->slab_base) {
size_t block_size = g_tiny_class_sizes[tls->ss->size_class];
// Use per-slab class_idx to get stride
size_t block_size = tiny_stride_for_class(meta->class_idx);
void* block = tls->slab_base + ((size_t)meta->used * block_size);
meta->used++;
// Track active blocks in SuperSlab for conservative reclamation