## Summary
Implemented Phase 12 Shared SuperSlab Pool (mimalloc-style) to address
SuperSlab allocation churn (877 SuperSlabs → 100-200 target).
## Implementation (ChatGPT + Claude)
1. **Metadata changes** (superslab_types.h):
- Added class_idx to TinySlabMeta (per-slab dynamic class)
- Removed size_class from SuperSlab (no longer per-SuperSlab)
- Changed owner_tid (16-bit) → owner_tid_low (8-bit); see the struct sketch after this list
2. **Shared Pool** (hakmem_shared_pool.{h,c}):
- Global pool shared by all size classes
- shared_pool_acquire_slab() - Get free slab for class_idx
- shared_pool_release_slab() - Return slab when empty
- Per-class hints for fast-path optimization (see the API sketch after this list)
3. **Integration** (23 files modified):
- Updated all ss->size_class → meta->class_idx
- Updated all meta->owner_tid → meta->owner_tid_low
- superslab_refill() now uses shared pool
- Free path releases empty slabs back to pool
4. **Build system** (Makefile):
- Added hakmem_shared_pool.o to OBJS_BASE and TINY_BENCH_OBJS_BASE
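For reference, a minimal sketch of the metadata change from item 1 — only `class_idx`, `owner_tid_low`, and the removal of `size_class` come from this change; the surrounding field names, widths, and slab count are illustrative, not the real `superslab_types.h`:

```c
/* Sketch only -- not the actual superslab_types.h definitions. */
#include <stdint.h>

#define SLABS_PER_SUPERSLAB 64        /* illustrative slab count */

typedef struct TinySlabMeta {
    void*    freelist;                /* per-slab free list */
    uint32_t used;                    /* live objects in this slab */
    uint8_t  class_idx;               /* Phase 12: per-slab dynamic size class */
    uint8_t  owner_tid_low;           /* Phase 12: low 8 bits of owner tid (was 16-bit owner_tid) */
} TinySlabMeta;

typedef struct SuperSlab {
    uint32_t     magic;
    /* size_class removed: one SuperSlab can now host slabs of mixed classes */
    TinySlabMeta slabs[SLABS_PER_SUPERSLAB];
} SuperSlab;

/* Integration pattern from item 3: the size class now comes from the slab,
 * not the SuperSlab:
 *     old: class = ss->size_class;
 *     new: class = meta->class_idx;     (meta = &ss->slabs[slab_idx])
 */
```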
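And a compilable sketch of the item 2 contract, assuming a mutex-protected global pool with a one-slot per-class hint; the function names match the commit, but the real `hakmem_shared_pool.{h,c}` signatures, lock, and hint policy may differ:

```c
/* Sketch only: a global slab pool shared by all size classes. */
#include <pthread.h>
#include <stddef.h>

#define TINY_NUM_CLASSES 8                        /* illustrative */

typedef struct PoolSlab {                         /* hypothetical pool entry */
    struct PoolSlab* next;
    int              class_idx;                   /* class the slab currently serves */
} PoolSlab;

static struct {
    pthread_mutex_t lock;
    PoolSlab*       free_list;                    /* free slabs, usable by any class */
    PoolSlab*       hint[TINY_NUM_CLASSES];       /* one-slot per-class fast path */
} g_shared_pool = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Acquire a free slab for class_idx: prefer the per-class hint slot,
 * otherwise pop from the shared list.  NULL => caller must map a new SuperSlab. */
PoolSlab* shared_pool_acquire_slab(int class_idx) {
    pthread_mutex_lock(&g_shared_pool.lock);
    PoolSlab* s = g_shared_pool.hint[class_idx];
    if (s) {
        g_shared_pool.hint[class_idx] = NULL;
    } else if ((s = g_shared_pool.free_list) != NULL) {
        g_shared_pool.free_list = s->next;
    }
    if (s) s->class_idx = class_idx;              /* rebind the slab to the requesting class */
    pthread_mutex_unlock(&g_shared_pool.lock);
    return s;
}

/* Release a slab once it is empty: park it in the class hint if free,
 * else push it on the shared list for any class to reuse. */
void shared_pool_release_slab(PoolSlab* s) {
    pthread_mutex_lock(&g_shared_pool.lock);
    if (g_shared_pool.hint[s->class_idx] == NULL) {
        g_shared_pool.hint[s->class_idx] = s;
    } else {
        s->next = g_shared_pool.free_list;
        g_shared_pool.free_list = s;
    }
    pthread_mutex_unlock(&g_shared_pool.lock);
}
```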
## Status: ⚠️ Build OK, Runtime CRASH
**Build**: ✅ SUCCESS
- All 23 files compile without errors
- Only warnings: superslab_allocate type mismatch (legacy code)
**Runtime**: ❌ SEGFAULT
- Crash location: sll_refill_small_from_ss()
- Exit code: 139 (SIGSEGV)
- Test case: ./bench_random_mixed_hakmem 1000 256 42
## Known Issues
1. **SEGFAULT in refill path** - Likely shared_pool_acquire_slab() issue
2. **Legacy superslab_allocate()** still exists (type mismatch warning)
3. **Remaining TODOs** from design doc:
- SuperSlab physical layout integration
- slab_handle.h cleanup
- Remove old per-class head implementation
## Next Steps
1. Debug SEGFAULT (gdb backtrace shows sll_refill_small_from_ss)
2. Fix shared_pool_acquire_slab() or superslab_init_slab() (see the guard sketch after this list)
3. Basic functionality test (1K → 100K iterations)
4. Measure SuperSlab count reduction (877 → 100-200)
5. Performance benchmark (+650-860% expected)
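For step 2, one hedged example of the kind of guard worth adding while chasing the crash — hypothetical code, not from the repo, reusing the illustrative `TinySlabMeta` shape from the sketch above:

```c
/* Hypothetical guard around the shared-pool refill path. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct TinySlabMeta {                     /* illustrative, as above */
    void*    freelist;
    uint32_t used;
    uint8_t  class_idx;
    uint8_t  owner_tid_low;
} TinySlabMeta;

TinySlabMeta* shared_pool_acquire_slab(int class_idx);   /* signature illustrative */

TinySlabMeta* refill_acquire_guarded(int class_idx) {
    TinySlabMeta* meta = shared_pool_acquire_slab(class_idx);
    if (meta == NULL) return NULL;                        /* pool empty: fall back to a fresh SuperSlab */
    /* A stale or uninitialized slab handed back by the pool would make
     * sll_refill_small_from_ss() walk a bad freelist. */
    assert(meta->class_idx == (uint8_t)class_idx);        /* slab was rebound to our class */
    assert(meta->freelist != NULL || meta->used == 0);    /* either carved, or empty and still to be carved */
    return meta;
}
```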
## Files Changed (24 files)
core/box/free_local_box.c
core/box/free_remote_box.c
core/box/front_gate_classifier.c
core/hakmem_super_registry.c
core/hakmem_tiny.c
core/hakmem_tiny_bg_spill.c
core/hakmem_tiny_free.inc
core/hakmem_tiny_lifecycle.inc
core/hakmem_tiny_magazine.c
core/hakmem_tiny_query.c
core/hakmem_tiny_refill.inc.h
core/hakmem_tiny_superslab.c
core/hakmem_tiny_superslab.h
core/hakmem_tiny_tls_ops.h
core/slab_handle.h
core/superslab/superslab_inline.h
core/superslab/superslab_types.h
core/tiny_debug.h
core/tiny_free_fast.inc.h
core/tiny_free_magazine.inc.h
core/tiny_remote.c
core/tiny_superslab_alloc.inc.h
core/tiny_superslab_free.inc.h
Makefile
## New Files (3 files)
PHASE12_SHARED_SUPERSLAB_POOL_DESIGN.md
core/hakmem_shared_pool.c
core/hakmem_shared_pool.h
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: ChatGPT <chatgpt@openai.com>
## core/hakmem_tiny_bg_spill.c (114 lines, 4.6 KiB)
#include "hakmem_tiny_bg_spill.h"
|
|
#include "hakmem_tiny_superslab.h" // For SuperSlab, TinySlabMeta, ss_active_dec_one
|
|
#include "hakmem_super_registry.h" // For hak_super_registry_lookup
|
|
#include "tiny_remote.h"
|
|
#include "hakmem_tiny.h"
|
|
#include "box/tiny_next_ptr_box.h" // Phase E1-CORRECT: Box API
|
|
#include <pthread.h>
|
|
|
|
static inline uint32_t tiny_self_u32_guard(void) {
|
|
return (uint32_t)(uintptr_t)pthread_self();
|
|
}
|
|
#include <stdlib.h> // For getenv, atoi
|
|
|
|
// Global variables
|
|
int g_bg_spill_enable = 0; // HAKMEM_TINY_BG_SPILL=1
|
|
int g_bg_spill_target = 128; // HAKMEM_TINY_BG_TARGET (per class)
|
|
int g_bg_spill_max_batch = 128; // HAKMEM_TINY_BG_MAX_BATCH
|
|
_Atomic uintptr_t g_bg_spill_head[TINY_NUM_CLASSES];
|
|
_Atomic uint32_t g_bg_spill_len[TINY_NUM_CLASSES];
|
|
|
|
void bg_spill_init(void) {
|
|
// Parse environment variables
|
|
char* bs = getenv("HAKMEM_TINY_BG_SPILL");
|
|
if (bs) g_bg_spill_enable = (atoi(bs) != 0) ? 1 : 0;
|
|
char* bt2 = getenv("HAKMEM_TINY_BG_TARGET");
|
|
if (bt2) { int v = atoi(bt2); if (v > 0 && v <= 8192) g_bg_spill_target = v; }
|
|
char* mb = getenv("HAKMEM_TINY_BG_MAX_BATCH");
|
|
if (mb) { int v = atoi(mb); if (v > 0 && v <= 4096) g_bg_spill_max_batch = v; }
|
|
|
|
// Initialize atomic queues
|
|
for (int k = 0; k < TINY_NUM_CLASSES; k++) {
|
|
atomic_store_explicit(&g_bg_spill_head[k], (uintptr_t)0, memory_order_relaxed);
|
|
atomic_store_explicit(&g_bg_spill_len[k], 0u, memory_order_relaxed);
|
|
}
|
|
}
|
|
|
|
void bg_spill_drain_class(int class_idx, pthread_mutex_t* lock) {
|
|
uint32_t approx = atomic_load_explicit(&g_bg_spill_len[class_idx], memory_order_relaxed);
|
|
if (approx == 0) return;
|
|
|
|
uintptr_t chain = atomic_exchange_explicit(&g_bg_spill_head[class_idx], (uintptr_t)0, memory_order_acq_rel);
|
|
if (chain == 0) return;
|
|
|
|
// Split chain up to max_batch
|
|
int processed = 0;
|
|
void* rest = NULL;
|
|
void* cur = (void*)chain;
|
|
void* prev = NULL;
|
|
// Phase 7: header-aware next pointer (C0-C6: base+1, C7: base)
|
|
#if HAKMEM_TINY_HEADER_CLASSIDX
|
|
// Phase E1-CORRECT: ALL classes have 1-byte header, next ptr at offset 1
|
|
const size_t next_off = 1;
|
|
#else
|
|
const size_t next_off = 0;
|
|
#endif
|
|
#include "box/tiny_next_ptr_box.h"
|
|
while (cur && processed < g_bg_spill_max_batch) {
|
|
prev = cur;
|
|
cur = tiny_next_read(class_idx, cur);
|
|
processed++;
|
|
}
|
|
if (cur != NULL) { rest = cur; tiny_next_write(class_idx, prev, NULL); }
|
|
|
|
// Return processed nodes to SS freelists
|
|
pthread_mutex_lock(lock);
|
|
uint32_t self_tid = tiny_self_u32_guard();
|
|
void* node = (void*)chain;
|
|
while (node) {
|
|
SuperSlab* owner_ss = hak_super_lookup(node);
|
|
void* next = tiny_next_read(class_idx, node);
|
|
if (owner_ss && owner_ss->magic == SUPERSLAB_MAGIC) {
|
|
int slab_idx = slab_index_for(owner_ss, node);
|
|
if (slab_idx >= 0 && slab_idx < ss_slabs_capacity(owner_ss)) {
|
|
TinySlabMeta* meta = &owner_ss->slabs[slab_idx];
|
|
uint8_t node_class_idx = (meta->class_idx < TINY_NUM_CLASSES)
|
|
? meta->class_idx
|
|
: (uint8_t)class_idx;
|
|
if (!tiny_remote_guard_allow_local_push(owner_ss, slab_idx, meta, node, "bg_spill", self_tid)) {
|
|
(void)ss_remote_push(owner_ss, slab_idx, node);
|
|
if (meta->used > 0) meta->used--;
|
|
node = next;
|
|
continue;
|
|
}
|
|
void* prev = meta->freelist;
|
|
// Phase 12: use per-slab class for next pointer
|
|
tiny_next_write(node_class_idx, node, prev);
|
|
meta->freelist = node;
|
|
tiny_failfast_log("bg_spill", node_class_idx, owner_ss, meta, node, prev);
|
|
meta->used--;
|
|
// Active was decremented at free time
|
|
}
|
|
}
|
|
node = next;
|
|
}
|
|
pthread_mutex_unlock(lock);
|
|
|
|
if (processed > 0) {
|
|
atomic_fetch_sub_explicit(&g_bg_spill_len[class_idx], (uint32_t)processed, memory_order_relaxed);
|
|
}
|
|
|
|
if (rest) {
|
|
// Prepend remainder back to head
|
|
uintptr_t old_head;
|
|
void* tail = rest;
|
|
while (tiny_next_read(class_idx, tail)) tail = tiny_next_read(class_idx, tail);
|
|
do {
|
|
old_head = atomic_load_explicit(&g_bg_spill_head[class_idx], memory_order_acquire);
|
|
tiny_next_write(class_idx, tail, (void*)old_head);
|
|
} while (!atomic_compare_exchange_weak_explicit(&g_bg_spill_head[class_idx], &old_head,
|
|
(uintptr_t)rest,
|
|
memory_order_release, memory_order_relaxed));
|
|
}
|
|
}
|