// hakmem/core/hakmem_shared_pool.c
#include "hakmem_shared_pool_internal.h"
#include "hakmem_debug_master.h" // Phase 4b: Master debug control
#include "hakmem_stats_master.h" // Phase 4d: Master stats control
#include "box/ss_slab_meta_box.h" // Phase 3d-A: SlabMeta Box boundary
#include "box/ss_hot_cold_box.h" // Phase 12-1.1: EMPTY slab marking
#include "box/pagefault_telemetry_box.h" // Box PageFaultTelemetry (PF_BUCKET_SS_META)
#include "box/tls_sll_drain_box.h" // Box TLS SLL Drain (tiny_tls_sll_drain)
#include "box/tls_slab_reuse_guard_box.h" // Box TLS Slab Reuse Guard (P0.3)
#include "hakmem_policy.h" // FrozenPolicy (learning layer)
#include <stdlib.h>
#include <string.h>
#include <stdatomic.h>
#include <stdio.h>
#include <sys/mman.h> // For mmap/munmap (used in shared_pool_ensure_capacity_unlocked)
// ============================================================================
// P0 Lock Contention Instrumentation (Debug build only; counters defined always)
// ============================================================================
_Atomic uint64_t g_lock_acquire_count = 0; // Total lock acquisitions
_Atomic uint64_t g_lock_release_count = 0; // Total lock releases
_Atomic uint64_t g_lock_acquire_slab_count = 0; // Locks from acquire_slab path
_Atomic uint64_t g_lock_release_slab_count = 0; // Locks from release_slab path
#if !HAKMEM_BUILD_RELEASE
int g_lock_stats_enabled = -1; // -1=uninitialized, 0=off, 1=on
// Initialize lock stats from environment variable
// Phase 4b: Now uses hak_debug_check() for master debug control support
void lock_stats_init(void) {
if (__builtin_expect(g_lock_stats_enabled == -1, 0)) {
g_lock_stats_enabled = hak_debug_check("HAKMEM_SHARED_POOL_LOCK_STATS");
}
}
// Report lock statistics at shutdown
static void __attribute__((destructor)) lock_stats_report(void) {
if (g_lock_stats_enabled != 1) {
return;
}
uint64_t acquires = atomic_load(&g_lock_acquire_count);
uint64_t releases = atomic_load(&g_lock_release_count);
uint64_t acquire_path = atomic_load(&g_lock_acquire_slab_count);
uint64_t release_path = atomic_load(&g_lock_release_slab_count);
fprintf(stderr, "\n=== SHARED POOL LOCK STATISTICS ===\n");
fprintf(stderr, "Total lock ops: %lu (acquire) + %lu (release) = %lu\n",
acquires, releases, acquires + releases);
fprintf(stderr, "Balance: %ld (should be 0)\n",
(int64_t)acquires - (int64_t)releases);
fprintf(stderr, "\n--- Breakdown by Code Path ---\n");
fprintf(stderr, "acquire_slab(): %lu (%.1f%%)\n",
acquire_path, 100.0 * acquire_path / (acquires ? acquires : 1));
fprintf(stderr, "release_slab(): %lu (%.1f%%)\n",
release_path, 100.0 * release_path / (acquires ? acquires : 1));
fprintf(stderr, "===================================\n");
fflush(stderr);
}
#else
// Release build: No-op stubs
int g_lock_stats_enabled = 0;
#endif
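// Usage sketch (assumed invocation; any binary linked against the pool works
// the same way):
//   HAKMEM_SHARED_POOL_LOCK_STATS=1 ./out/release/larson_hakmem 1 5 1 1000 100 10000 42
// The destructor above prints the acquire/release totals and per-path
// breakdown at exit; in release builds the counters exist but nothing is printed.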
// ============================================================================
// SP Acquire Stage Statistics (Stage1/2/3 breakdown)
// ============================================================================
_Atomic uint64_t g_sp_stage1_hits[TINY_NUM_CLASSES_SS];
_Atomic uint64_t g_sp_stage2_hits[TINY_NUM_CLASSES_SS];
_Atomic uint64_t g_sp_stage3_hits[TINY_NUM_CLASSES_SS];
// Data collection gate (0=off, 1=on). Also enabled from the learning layer.
int g_sp_stage_stats_enabled = 0;
#if !HAKMEM_BUILD_RELEASE
// Logging gate for the destructor (ENV: HAKMEM_SHARED_POOL_STAGE_STATS)
static int g_sp_stage_stats_log_enabled = -1; // -1=uninitialized, 0=off, 1=on
void sp_stage_stats_init(void) {
// Phase 4d: Now uses hak_stats_check() for unified stats control
if (__builtin_expect(g_sp_stage_stats_log_enabled == -1, 0)) {
g_sp_stage_stats_log_enabled = hak_stats_check("HAKMEM_SHARED_POOL_STAGE_STATS", "pool");
if (g_sp_stage_stats_log_enabled == 1) {
// If logging is enabled, always enable measurement as well.
g_sp_stage_stats_enabled = 1;
}
}
}
static void __attribute__((destructor)) sp_stage_stats_report(void) {
if (g_sp_stage_stats_log_enabled != 1) {
return;
}
fprintf(stderr, "\n=== SHARED POOL STAGE STATISTICS ===\n");
fprintf(stderr, "Per-class acquire_slab() stage hits (Stage1=EMPTY, Stage2=UNUSED, Stage3=new SS)\n");
for (int cls = 0; cls < TINY_NUM_CLASSES_SS; cls++) {
uint64_t s1 = atomic_load(&g_sp_stage1_hits[cls]);
uint64_t s2 = atomic_load(&g_sp_stage2_hits[cls]);
uint64_t s3 = atomic_load(&g_sp_stage3_hits[cls]);
uint64_t total = s1 + s2 + s3;
if (total == 0) continue; // Skip unused classes
double p1 = 100.0 * (double)s1 / (double)total;
double p2 = 100.0 * (double)s2 / (double)total;
double p3 = 100.0 * (double)s3 / (double)total;
fprintf(stderr,
"Class %d: total=%llu S1=%llu (%.1f%%) S2=%llu (%.1f%%) S3=%llu (%.1f%%)\n",
cls,
(unsigned long long)total,
(unsigned long long)s1, p1,
(unsigned long long)s2, p2,
(unsigned long long)s3, p3);
}
fprintf(stderr, "====================================\n");
fflush(stderr);
}
#else
// Release build: No-op stubs
void sp_stage_stats_init(void) {}
#endif
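// Usage sketch (assumed invocation): enabling the log gate also forces
// g_sp_stage_stats_enabled = 1, so collection and reporting come on together:
//   HAKMEM_SHARED_POOL_STAGE_STATS=1 ./bench_random_mixed_hakmem 100000 256 42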
// Snapshot Tiny-related backend metrics for learner / observability.
void
shared_pool_tiny_metrics_snapshot(uint64_t stage1[TINY_NUM_CLASSES_SS],
uint64_t stage2[TINY_NUM_CLASSES_SS],
uint64_t stage3[TINY_NUM_CLASSES_SS],
uint32_t active_slots[TINY_NUM_CLASSES_SS])
{
// Ensure the env-based logging configuration is initialized first.
sp_stage_stats_init();
// When called from the learning layer, measurement itself is always enabled (logging stays env-controlled).
g_sp_stage_stats_enabled = 1;
for (int cls = 0; cls < TINY_NUM_CLASSES_SS; cls++) {
if (stage1) {
stage1[cls] = atomic_load_explicit(&g_sp_stage1_hits[cls],
memory_order_relaxed);
}
if (stage2) {
stage2[cls] = atomic_load_explicit(&g_sp_stage2_hits[cls],
memory_order_relaxed);
}
if (stage3) {
stage3[cls] = atomic_load_explicit(&g_sp_stage3_hits[cls],
memory_order_relaxed);
}
if (active_slots) {
active_slots[cls] = g_shared_pool.class_active_slots[cls];
}
}
}
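// Caller sketch (hypothetical learner-side code; observe() is made up for
// illustration). Any output array may be NULL if the caller does not need it:
//
//   uint64_t s1[TINY_NUM_CLASSES_SS], s3[TINY_NUM_CLASSES_SS];
//   uint32_t act[TINY_NUM_CLASSES_SS];
//   shared_pool_tiny_metrics_snapshot(s1, NULL, s3, act); // skip Stage 2
//   for (int c = 0; c < TINY_NUM_CLASSES_SS; c++)
//       observe(c, s1[c], s3[c], act[c]);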
// Helper: return per-class active slot limit from FrozenPolicy.tiny_cap[]
// Semantics:
// - tiny_cap[class] == 0 → no limit (unbounded)
// - otherwise: soft cap on ACTIVE slots managed by shared pool for this class.
uint32_t sp_class_active_limit(int class_idx) {
const FrozenPolicy* pol = hkm_policy_get();
if (!pol) {
return 0; // no limit
}
if (class_idx < 0 || class_idx >= 8) {
return 0;
}
return (uint32_t)pol->tiny_cap[class_idx];
}
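// Example: with pol->tiny_cap = {0, 0, 4, 0, ...}, class 2 is soft-capped at
// 4 ACTIVE shared-pool slots while the other classes stay unbounded (0 = no limit).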
// ============================================================================
// P0-4: Lock-Free Free Slot List - Node Pool
// ============================================================================
// Pre-allocated node pools (one per class, to avoid malloc/free)
FreeSlotNode g_free_node_pool[TINY_NUM_CLASSES_SS][MAX_FREE_NODES_PER_CLASS];
_Atomic uint32_t g_node_alloc_index[TINY_NUM_CLASSES_SS] = {0};
// Recycle list for FreeSlotNode (per class, lock-free LIFO).
// node_alloc() first tries to reuse a node from this list and carves a new node only when it is exhausted.
static _Atomic(FreeSlotNode*) g_node_free_head[TINY_NUM_CLASSES_SS] = {
[0 ... TINY_NUM_CLASSES_SS-1] = ATOMIC_VAR_INIT(NULL)
};
// Allocate a node from pool (lock-free fast path, may fall back to legacy path)
static inline FreeSlotNode* node_alloc(int class_idx) {
if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES_SS) {
return NULL;
}
// First, try to pop from recycle list (nodes returned by pop_lockfree).
FreeSlotNode* free_head = atomic_load_explicit(
&g_node_free_head[class_idx],
memory_order_acquire);
while (free_head != NULL) {
FreeSlotNode* next = free_head->next;
if (atomic_compare_exchange_weak_explicit(
&g_node_free_head[class_idx],
&free_head,
next,
memory_order_acq_rel,
memory_order_acquire)) {
return free_head; // Recycled node
}
// CAS failed: free_head is updated; retry with new head.
}
uint32_t idx = atomic_fetch_add(&g_node_alloc_index[class_idx], 1);
if (idx >= MAX_FREE_NODES_PER_CLASS) {
// Pool exhausted - should be rare.
return NULL;
}
return &g_free_node_pool[class_idx][idx];
}
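// Matching recycle push, as a minimal sketch (the name node_recycle is
// hypothetical; the actual release-side code lives further down this file).
// It is the Treiber-stack push mirroring the CAS pop in node_alloc():
//
//   static inline void node_recycle(int class_idx, FreeSlotNode* node) {
//       FreeSlotNode* head = atomic_load_explicit(
//           &g_node_free_head[class_idx], memory_order_acquire);
//       do {
//           node->next = head; // re-link on every retry; failed CAS refreshes head
//       } while (!atomic_compare_exchange_weak_explicit(
//           &g_node_free_head[class_idx], &head, node,
//           memory_order_acq_rel, memory_order_acquire));
//   }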
// ============================================================================
// Phase 12-2: SharedSuperSlabPool skeleton implementation
// Goal:
// - Centralize SuperSlab allocation/registration
// - Provide acquire_slab/release_slab APIs for later refill/free integration
// - Keep logic simple & conservative; correctness and observability first.
//
// Notes:
// - Concurrency: protected by g_shared_pool.alloc_lock for now.
// - class_hints is best-effort: read lock-free, written under lock.
// - LRU hooks left as no-op placeholders.
SharedSuperSlabPool g_shared_pool = {
.slabs = NULL,
.capacity = 0,
.total_count = 0,
.active_count = 0,
.alloc_lock = PTHREAD_MUTEX_INITIALIZER,
.class_hints = { NULL },
.lru_head = NULL,
.lru_tail = NULL,
.lru_count = 0,
// P0-4: Lock-free free slot lists (zero-initialized atomic pointers)
.free_slots_lockfree = {{.head = ATOMIC_VAR_INIT(NULL)}},
// Legacy: mutex-protected free lists
.free_slots = {{.entries = {{0}}, .count = 0}},
// Phase 12: SP-SLOT fields (ss_metadata is fixed-size array, auto-zeroed)
.ss_meta_count = 0
};
void
shared_pool_ensure_capacity_unlocked(uint32_t min_capacity)
{
if (g_shared_pool.capacity >= min_capacity) {
return;
}
uint32_t new_cap = g_shared_pool.capacity ? g_shared_pool.capacity : 16;
while (new_cap < min_capacity) {
new_cap *= 2;
}
// CRITICAL FIX: Use system mmap() directly to avoid recursion!
size_t new_size = new_cap * sizeof(SuperSlab*);
SuperSlab** new_slabs = (SuperSlab**)mmap(NULL, new_size,
PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (new_slabs == MAP_FAILED) {
// Allocation failure: keep old state; caller must handle NULL later.
return;
}
// Copy old data if exists
if (g_shared_pool.slabs != NULL) {
memcpy(new_slabs, g_shared_pool.slabs,
g_shared_pool.capacity * sizeof(SuperSlab*));
// Free old mapping (also use system munmap, not free!)
size_t old_size = g_shared_pool.capacity * sizeof(SuperSlab*);
munmap(g_shared_pool.slabs, old_size);
}
// Zero new entries to keep scanning logic simple.
memset(new_slabs + g_shared_pool.capacity, 0,
(new_cap - g_shared_pool.capacity) * sizeof(SuperSlab*));
g_shared_pool.slabs = new_slabs;
g_shared_pool.capacity = new_cap;
}
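// Worked example: capacity == 16, min_capacity == 20 -> new_cap doubles to 32;
// the new mapping is 32 * sizeof(SuperSlab*) = 256 bytes on LP64, the 16 old
// entries are copied over, and entries 16..31 are zeroed for the scanners.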
void
shared_pool_init(void)
{
// Idempotent init; safe to call from multiple early paths.
// pthread_mutex_t with static initializer is already valid.
pthread_mutex_lock(&g_shared_pool.alloc_lock);
if (g_shared_pool.capacity == 0 && g_shared_pool.slabs == NULL) {
shared_pool_ensure_capacity_unlocked(16);
}
pthread_mutex_unlock(&g_shared_pool.alloc_lock);
}
// ============================================================================
// Phase 12: SP-SLOT Box - Modular Helper Functions
// ============================================================================
// ---------- Layer 1: Slot Operations (Low-level) ----------
// Find first unused slot in SharedSSMeta
// P0-5: Uses atomic load for state check
// Returns: slot_idx on success, -1 if no unused slots
static int sp_slot_find_unused(SharedSSMeta* meta) {
if (!meta) return -1;
for (int i = 0; i < meta->total_slots; i++) {
SlotState state = atomic_load_explicit(&meta->slots[i].state, memory_order_acquire);
if (state == SLOT_UNUSED) {
return i;
}
}
return -1;
}
// Mark slot as ACTIVE (UNUSED→ACTIVE or EMPTY→ACTIVE)
// P0-5: Uses atomic store for state transition (caller must hold mutex!)
// Returns: 0 on success, -1 on error
int sp_slot_mark_active(SharedSSMeta* meta, int slot_idx, int class_idx) {
if (!meta || slot_idx < 0 || slot_idx >= meta->total_slots) return -1;
if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES_SS) return -1;
SharedSlot* slot = &meta->slots[slot_idx];
// Load state atomically
SlotState state = atomic_load_explicit(&slot->state, memory_order_acquire);
// Transition: UNUSED→ACTIVE or EMPTY→ACTIVE
if (state == SLOT_UNUSED || state == SLOT_EMPTY) {
atomic_store_explicit(&slot->state, SLOT_ACTIVE, memory_order_release);
slot->class_idx = (uint8_t)class_idx;
slot->slab_idx = (uint8_t)slot_idx;
meta->active_slots++;
return 0;
}
return -1; // Already ACTIVE or invalid state
}
// Mark slot as EMPTY (ACTIVE→EMPTY)
// P0-5: Uses atomic store for state transition (caller must hold mutex!)
// Returns: 0 on success, -1 on error
int sp_slot_mark_empty(SharedSSMeta* meta, int slot_idx) {
if (!meta || slot_idx < 0 || slot_idx >= meta->total_slots) return -1;
SharedSlot* slot = &meta->slots[slot_idx];
// Load state atomically
SlotState state = atomic_load_explicit(&slot->state, memory_order_acquire);
if (state == SLOT_ACTIVE) {
atomic_store_explicit(&slot->state, SLOT_EMPTY, memory_order_release);
if (meta->active_slots > 0) {
meta->active_slots--;
}
return 0;
}
return -1; // Not ACTIVE
}
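// Slot state machine implemented by the two helpers above:
//   SLOT_UNUSED --sp_slot_mark_active()--> SLOT_ACTIVE
//   SLOT_EMPTY  --sp_slot_mark_active()--> SLOT_ACTIVE
//   SLOT_ACTIVE --sp_slot_mark_empty()---> SLOT_EMPTY
// Transitions run under alloc_lock; the state field itself is atomic so the
// lock-free acquire stages can read it without taking the mutex.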
// Sync SP-SLOT view from an existing SuperSlab.
// This is needed when a legacy-allocated SuperSlab reaches the shared-pool
// release path for the first time (slot states are still SLOT_UNUSED).
void sp_meta_sync_slots_from_ss(SharedSSMeta* meta, SuperSlab* ss) {
if (!meta || !ss) return;
int cap = ss_slabs_capacity(ss);
if (cap > MAX_SLOTS_PER_SS) {
cap = MAX_SLOTS_PER_SS;
}
meta->total_slots = (uint8_t)cap;
meta->active_slots = 0;
for (int i = 0; i < cap; i++) {
SlotState state = SLOT_UNUSED;
uint32_t bit = (1u << i);
if (ss->slab_bitmap & bit) {
state = SLOT_ACTIVE;
meta->active_slots++;
} else {
TinySlabMeta* smeta = &ss->slabs[i];
uint16_t used = atomic_load_explicit(&smeta->used, memory_order_relaxed);
if (smeta->capacity > 0 && used == 0) {
state = SLOT_EMPTY;
}
}
uint8_t cls = ss->class_map[i];
if (cls == 255) {
cls = ss->slabs[i].class_idx;
}
meta->slots[i].class_idx = cls;
meta->slots[i].slab_idx = (uint8_t)i;
atomic_store_explicit(&meta->slots[i].state, state, memory_order_release);
}
}
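// Example: a SuperSlab with slab_bitmap = 0b0101 syncs to slots 0 and 2
// SLOT_ACTIVE (active_slots = 2); slot 1 becomes SLOT_EMPTY only if its
// meta has capacity > 0 and used == 0, otherwise it stays SLOT_UNUSED.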
// ---------- Layer 2: Metadata Management (Mid-level) ----------
// Ensure ss_metadata array has capacity for at least min_count entries
// Caller must hold alloc_lock
// Returns: 0 on success, -1 if capacity exceeded
// RACE FIX: No realloc! Fixed-size array prevents race with lock-free Stage 2
static int sp_meta_ensure_capacity(uint32_t min_count) {
if (min_count > MAX_SS_METADATA_ENTRIES) {
#if !HAKMEM_BUILD_RELEASE
static int warn_once = 0;
if (warn_once == 0) {
fprintf(stderr, "[SP_META_CAPACITY_ERROR] Exceeded MAX_SS_METADATA_ENTRIES=%d\n",
MAX_SS_METADATA_ENTRIES);
warn_once = 1;
}
#endif
return -1;
}
return 0;
}
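/*
 * Context sketch (illustrative, not compiled; field shapes are assumptions
 * mirrored from the race-fix notes for hakmem_shared_pool.h): the pool owns
 * a fixed-size metadata array whose address never changes, so lock-free
 * Stage 2 readers may hold pointers into it without a realloc() ever moving
 * the storage out from under them.
 */
#if 0
typedef struct {
    _Atomic(SuperSlab*) ss;     /* set to NULL before the SuperSlab is freed */
    uint8_t    total_slots;     /* ss_slabs_capacity(ss) at registration */
    uint8_t    active_slots;
    SharedSlot slots[32];       /* per-slab claim state (UNUSED/ACTIVE/...) */
} SharedSSMeta;

SharedSSMeta     ss_metadata[MAX_SS_METADATA_ENTRIES]; /* fixed: never realloc'd */
_Atomic uint32_t ss_meta_count;                        /* release on publish */
#endif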
// Find SharedSSMeta for given SuperSlab, or create if not exists
// Caller must hold alloc_lock
// Returns: SharedSSMeta* on success, NULL on error
SharedSSMeta* sp_meta_find_or_create(SuperSlab* ss) {
if (!ss) return NULL;
// RACE FIX: Load count atomically for consistency (even under mutex)
uint32_t count = atomic_load_explicit(&g_shared_pool.ss_meta_count, memory_order_relaxed);
// Search existing metadata
for (uint32_t i = 0; i < count; i++) {
// RACE FIX: Load pointer atomically for consistency
SuperSlab* meta_ss = atomic_load_explicit(&g_shared_pool.ss_metadata[i].ss, memory_order_relaxed);
if (meta_ss == ss) {
return &g_shared_pool.ss_metadata[i];
}
}
// Create new metadata entry
if (sp_meta_ensure_capacity(count + 1) != 0) {
return NULL;
}
// RACE FIX: Read current count atomically (even under mutex for consistency)
uint32_t current_count = atomic_load_explicit(&g_shared_pool.ss_meta_count, memory_order_relaxed);
SharedSSMeta* meta = &g_shared_pool.ss_metadata[current_count];
// RACE FIX: Store SuperSlab pointer atomically (visible to lock-free Stage 2)
atomic_store_explicit(&meta->ss, ss, memory_order_relaxed);
meta->total_slots = (uint8_t)ss_slabs_capacity(ss);
meta->active_slots = 0;
// Initialize all slots as UNUSED
// P0-5: Use atomic store for state initialization
for (int i = 0; i < meta->total_slots; i++) {
atomic_store_explicit(&meta->slots[i].state, SLOT_UNUSED, memory_order_relaxed);
meta->slots[i].class_idx = 0;
meta->slots[i].slab_idx = (uint8_t)i;
}
// RACE FIX: Atomic increment with release semantics
// This ensures all preceding writes to metadata[current_count] are visible
// before the count increment becomes visible to lock-free Stage 2 readers
atomic_fetch_add_explicit(&g_shared_pool.ss_meta_count, 1, memory_order_release);
return meta;
}
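/*
 * Reader-side sketch (illustrative only; the real Stage 2 loop lives in
 * shared_pool_acquire_slab()). The acquire load of ss_meta_count
 * synchronizes-with the release increment above, so every metadata[i] with
 * i < count is fully initialized; the NULL check skips entries whose
 * SuperSlab was released concurrently under the mutex.
 */
#if 0
uint32_t n = atomic_load_explicit(&g_shared_pool.ss_meta_count,
                                  memory_order_acquire);
for (uint32_t i = 0; i < n; i++) {
    SuperSlab* s = atomic_load_explicit(&g_shared_pool.ss_metadata[i].ss,
                                        memory_order_acquire);
    if (!s) continue;               /* released concurrently: skip */
    /* ... inspect slots[], attempt a lock-free claim ... */
}
#endif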
// Find UNUSED slot and claim it (UNUSED → ACTIVE) using lock-free CAS
// Returns: slot_idx on success, -1 if no UNUSED slots
int sp_slot_claim_lockfree(SharedSSMeta* meta, int class_idx) {
if (!meta) return -1;
// Possible optimization: keep a per-meta bitmap for an O(1) "any UNUSED?" check.
// For now, just iterate; metadata is small (at most 32 slots).
for (int i = 0; i < meta->total_slots; i++) {
SharedSlot* slot = &meta->slots[i];
SlotState state = atomic_load_explicit(&slot->state, memory_order_acquire);
if (state == SLOT_UNUSED) {
// Attempt CAS: UNUSED → ACTIVE
if (atomic_compare_exchange_strong_explicit(
&slot->state,
&state,
SLOT_ACTIVE,
memory_order_acq_rel,
memory_order_acquire)) {
return i; // Success!
}
// CAS failed: another thread claimed this slot; move on to the next one
}
}
return -1;
}
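/*
 * Caller sketch (hedged; the real call sites are the acquire stages in this
 * file): claim a slot lock-free, then bind it to the requesting class. The
 * class_idx assignment mirrors the slot initialization in
 * sp_meta_find_or_create(); geometry setup is elided.
 */
#if 0
int slot_idx = sp_slot_claim_lockfree(meta, class_idx);
if (slot_idx >= 0) {
    meta->slots[slot_idx].class_idx = (uint8_t)class_idx;
    /* ... initialize slab geometry for class_idx, bump active_slots ... */
}
#endif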
// ---------- Layer 3: Free List Management ----------
// Push an empty slot onto the per-class free list (lock-free LIFO push)
// Caller currently holds alloc_lock, but the push itself is CAS-based
// Returns: 0 on success, -1 if the node pool is exhausted
int sp_freelist_push_lockfree(int class_idx, SharedSSMeta* meta, int slot_idx) {
if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES_SS) return -1;
FreeSlotNode* node = node_alloc(class_idx);
if (!node) {
// Pool exhausted
return -1;
}
node->meta = meta;
node->slot_idx = slot_idx;
// Lock-free push to stack (LIFO)
FreeSlotNode* old_head = atomic_load_explicit(
&g_shared_pool.free_slots_lockfree[class_idx].head,
memory_order_relaxed);
do {
node->next = old_head;
} while (!atomic_compare_exchange_weak_explicit(
&g_shared_pool.free_slots_lockfree[class_idx].head,
&old_head,
node,
memory_order_release,
memory_order_relaxed));
return 0;
}
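/*
 * Release-path sketch (hedged): when a slab drains to used == 0, its slot
 * can be published here for reuse. Failure is non-fatal by design: the slot
 * remains discoverable through the slower metadata scan.
 */
#if 0
if (sp_freelist_push_lockfree(class_idx, meta, slot_idx) != 0) {
    /* Node pool exhausted: rely on the Stage 2 scan to rediscover the slot. */
}
#endif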
// Pop empty slot from per-class free list
// Lock-free
// Returns: 1 on success, 0 if empty
int sp_freelist_pop_lockfree(int class_idx, SharedSSMeta** meta_out, int* slot_idx_out) {
if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES_SS) return 0;
FreeSlotNode* head = atomic_load_explicit(
&g_shared_pool.free_slots_lockfree[class_idx].head,
memory_order_acquire);
while (head) {
FreeSlotNode* next = head->next;
if (atomic_compare_exchange_weak_explicit(
&g_shared_pool.free_slots_lockfree[class_idx].head,
&head,
next,
memory_order_acquire,
memory_order_acquire)) {
// Success!
*meta_out = head->meta;
*slot_idx_out = head->slot_idx;
// Recycle node (push to free_head list)
FreeSlotNode* free_head = atomic_load_explicit(&g_node_free_head[class_idx], memory_order_relaxed);
do {
head->next = free_head;
} while (!atomic_compare_exchange_weak_explicit(
&g_node_free_head[class_idx],
&free_head,
head,
memory_order_release,
memory_order_relaxed));
return 1;
}
// CAS failed: head updated, retry
}
return 0; // Empty list
}
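/*
 * Note (assumption, not an observed failure): because nodes are recycled
 * through g_node_free_head, this pop carries the classic Treiber-stack ABA
 * window: if `head` is popped, recycled, and re-pushed between our load and
 * the CAS, the CAS can succeed with a stale `next`. One common mitigation
 * is a generation-tagged head, sketched below (needs a 16-byte CAS, e.g.
 * cmpxchg16b on x86-64).
 */
#if 0
typedef struct {
    FreeSlotNode* ptr;
    uintptr_t     gen;              /* bumped on every successful pop */
} TaggedHead;

_Atomic TaggedHead tagged_head;
/* pop: snapshot {ptr, gen}; CAS to {ptr->next, gen + 1} so a recycled node
 * can never match an old snapshot. */
#endif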
// Allocator helper for SuperSlab (Phase 9-2 Task 1)
SuperSlab*
sp_internal_allocate_superslab(void)
{
// Use legacy backend to allocate a SuperSlab (malloc-based)
extern SuperSlab* superslab_allocate(uint8_t size_class);
// Pass 8 as a dummy size_class; each slab's class_idx is assigned later
SuperSlab* ss = superslab_allocate(8);
if (!ss) {
return NULL;
}
// Initialize basic fields in case superslab_allocate() did not
ss->active_slabs = 0;
ss->slab_bitmap = 0;
return ss;
}
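/*
 * Acquisition sketch (hedged): a fresh SuperSlab must be registered in the
 * shared metadata table, under alloc_lock, before the lock-free stages can
 * claim its slots.
 */
#if 0
SuperSlab* ss = sp_internal_allocate_superslab();
if (ss) {
    SharedSSMeta* meta = sp_meta_find_or_create(ss);  /* caller holds alloc_lock */
    if (!meta) {
        /* metadata table full (MAX_SS_METADATA_ENTRIES): back out */
    }
}
#endif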
// ============================================================================
// Public API (High-level)
// ============================================================================
SuperSlab*
shared_pool_acquire_superslab(void)
{
// Phase 12: legacy wrapper. Bypasses the staged acquire path and
// allocates a fresh SuperSlab directly.
return sp_internal_allocate_superslab();
}
void sp_fix_geometry_if_needed(SuperSlab* ss, int slab_idx, int class_idx) {
// Phase 9-1: For now, we assume geometry is compatible or set by caller.
// This hook exists for future use when we support dynamic geometry resizing.
(void)ss; (void)slab_idx; (void)class_idx;
}
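/*
 * Hedged sketch of what a real geometry fix could do here, modeled on the
 * stride validation added during the C7 2048B upgrade: recompute the slab's
 * expected capacity for class_idx and reinitialize stale metadata.
 * ss_slab_payload_bytes() and the `capacity` field are hypothetical names;
 * tiny_block_stride_for_class() and superslab_init_slab() exist elsewhere
 * in this codebase.
 */
#if 0
size_t stride = tiny_block_stride_for_class(class_idx);
size_t expect = ss_slab_payload_bytes(ss, slab_idx) / stride;  /* hypothetical */
if (ss->slabs[slab_idx].capacity != expect) {
    superslab_init_slab(ss, slab_idx, class_idx);  /* reinit with current stride */
}
#endif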