## Summary
Completed optimization work for Phases 54-60:
**Phases 54-56: Memory-Lean mode (LEAN+OFF prewarm suppression)**
- Implemented ss_mem_lean_env_box.h with ENV gates (see the sketch after this list)
- Balanced mode (LEAN+OFF) promoted as production default
- Result: +1.2% throughput, better stability, zero syscall overhead
- Added to bench_profile.h: MIXED_TINYV3_C7_BALANCED preset
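A minimal sketch of the ENV-gate shape, assuming it mirrors the cached-getenv pattern of the `HAKMEM_SS_OS_STATS` gate in `superslab_stats.c` further down; only the variable name `HAKMEM_SS_MEM_LEAN` comes from this log, the helper name is illustrative:

```c
// Sketch (assumed shape): cache the ENV lookup so the gate costs one
// well-predicted branch after the first call. HAKMEM_SS_MEM_LEAN is the
// real variable; ss_mem_lean_enabled() is an illustrative name.
#include <stdlib.h>

static inline int ss_mem_lean_enabled(void) {
    static int g = -1;                       // -1 = ENV not read yet
    if (__builtin_expect(g == -1, 0)) {      // getenv() runs once, then cached
        const char* e = getenv("HAKMEM_SS_MEM_LEAN");
        g = (e && *e && *e != '0') ? 1 : 0;  // unset/empty/"0" => LEAN off
    }
    return g;                                // well-predicted branch afterwards
}
```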
**Phase 57: 60-min soak finalization**
- Balanced mode: 60-min soak, RSS drift 0%, CV (coefficient of variation) 5.38%
- Speed-first mode: 60-min soak, RSS drift 0%, CV 1.58%
- Syscall budget: 1.25e-7/op (800× under the implied 1e-4/op target)
- Status: PRODUCTION-READY
**Phase 59: 50% recovery baseline rebase**
- hakmem FAST (Balanced): 59.184M ops/s, CV 1.31%
- mimalloc: 120.466M ops/s, CV 3.50%
- Ratio: 49.13% (M1 ACHIEVED within statistical noise)
- Superior stability: 2.68× better CV than mimalloc
**Phase 60: Alloc pass-down SSOT (NO-GO)**
- Implemented alloc_passdown_ssot_env_box.h
- Modified malloc_tiny_fast.h for SSOT pattern
- Result: -0.46% throughput (NO-GO)
- Key lesson: SSOT (single source of truth) does not pay off where an early-exit path already skips the redundant work (see the sketch below)
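A hypothetical sketch of why the pass-down lost; the real malloc_tiny_fast.h code is not shown here, and all names and stub bodies below are illustrative only:

```c
// Hypothetical sketch of the Phase 60 lesson; names and bodies are
// illustrative stubs, not the real malloc_tiny_fast.h code.
#include <stddef.h>
#include <stdlib.h>

static int size_to_class(size_t size) { return size <= 64 ? 0 : 1; } // toy map
static void* fast_pop(size_t size) { (void)size; return NULL; }      // stub
static void* slow_alloc(int cls) { (void)cls; return malloc(64); }   // stub

// Kept: the class is derived lazily, only after the early exit misses.
void* tiny_alloc(size_t size) {
    void* p = fast_pop(size);                // hot path (stubbed here)
    if (p) return p;                         // early exit; class never derived
    return slow_alloc(size_to_class(size));  // cold path pays the derivation
}

// NO-GO: the caller derives the class once (SSOT) and passes it down, but
// the hot path never needed it, so every call now pays for unused work.
void* tiny_alloc_ssot(size_t size) {
    int cls = size_to_class(size);           // now paid on EVERY call
    void* p = fast_pop(size);
    if (p) return p;
    return slow_alloc(cls);
}
```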
## Key Metrics
- Performance: 49.13% of mimalloc (M1 effectively achieved)
- Stability: CV 1.31% (superior to mimalloc 3.50%)
- Syscall budget: 1.25e-7/op (800× under target)
- RSS: 33MB stable, 0% drift over 60 minutes
## Files Added/Modified
New boxes:
- core/box/ss_mem_lean_env_box.h
- core/box/ss_release_policy_box.{h,c}
- core/box/alloc_passdown_ssot_env_box.h
Scripts:
- scripts/soak_mixed_single_process.sh
- scripts/analyze_epoch_tail_csv.py
- scripts/soak_mixed_rss.sh
- scripts/calculate_percentiles.py
- scripts/analyze_soak.py
Documentation: Phase 40-60 analysis documents
## Design Decisions
1. Profile separation (core/bench_profile.h):
- MIXED_TINYV3_C7_SAFE: Speed-first (no LEAN)
- MIXED_TINYV3_C7_BALANCED: Balanced mode (LEAN+OFF)
2. Box Theory compliance:
- All ENV gates reversible (HAKMEM_SS_MEM_LEAN, HAKMEM_ALLOC_PASSDOWN_SSOT)
- Single conversion points maintained
- No physical deletions (compile-out only; see the sketch after this list)
3. Lessons learned:
- SSOT effective only where redundancy exists (Phase 60 showed limits)
- Branch prediction is extremely effective (a well-predicted branch costs ~0 cycles)
- Early-exit pattern valuable even when seemingly redundant
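A minimal sketch of decision 2's layering, assuming a hypothetical compile-time macro on top of the real `HAKMEM_SS_MEM_LEAN` ENV gate; the `#if` compile-out style matches the `!HAKMEM_BUILD_RELEASE` block in `superslab_stats.c` below, but the macro name is illustrative:

```c
// Sketch of the reversible-gate layering (Box Theory): features are never
// physically deleted, only compiled out. HAKMEM_SS_MEM_LEAN_COMPILED is a
// hypothetical macro; HAKMEM_SS_MEM_LEAN is the real ENV variable.
#include <stdlib.h>

#ifndef HAKMEM_SS_MEM_LEAN_COMPILED
#define HAKMEM_SS_MEM_LEAN_COMPILED 1   // code stays in the tree; no deletion
#endif

static int mem_lean_active(void) {
#if HAKMEM_SS_MEM_LEAN_COMPILED
    const char* e = getenv("HAKMEM_SS_MEM_LEAN");  // runtime ENV gate
    return (e && *e && *e != '0') ? 1 : 0;         // reversible at run time
#else
    return 0;   // compiled out: the gate folds to a constant, zero overhead
#endif
}
```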
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
// superslab_stats.c - Statistics and debugging for SuperSlab allocator
// Purpose: Tracking and reporting allocation statistics
// License: MIT
// Date: 2025-11-28

#include "hakmem_tiny_superslab_internal.h"
#include "box/ss_os_acquire_box.h"

// Standard headers used directly in this file (some may already be pulled
// in via the internal header; listed here so the file stands alone).
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

// ============================================================================
// Global Statistics
// ============================================================================

pthread_mutex_t g_superslab_lock = PTHREAD_MUTEX_INITIALIZER;
uint64_t g_superslabs_allocated = 0; // Non-static for debugging
uint64_t g_superslabs_freed = 0;     // Phase 7.6: Non-static for test access
uint64_t g_bytes_allocated = 0;      // Non-static for debugging

// Debug counters
_Atomic uint64_t g_ss_active_dec_calls = 0;
_Atomic uint64_t g_hak_tiny_free_calls = 0;
_Atomic uint64_t g_ss_remote_push_calls = 0;
// Free path instrumentation (lightweight, for OOM/route diagnosis)
_Atomic uint64_t g_free_ss_enter = 0;         // hak_tiny_free_superslab() entries
_Atomic uint64_t g_free_local_box_calls = 0;  // same-thread freelist pushes
_Atomic uint64_t g_free_remote_box_calls = 0; // cross-thread remote pushes
// Per-class counters for gating/metrics (Tiny classes = 8)
uint64_t g_ss_alloc_by_class[8] = {0};
uint64_t g_ss_freed_by_class[8] = {0};

// Global counters for debugging (non-static for external access)
_Atomic uint64_t g_ss_mmap_count = 0;
_Atomic uint64_t g_final_fallback_mmap_count = 0;
_Atomic uint64_t g_ss_os_alloc_calls = 0;
_Atomic uint64_t g_ss_os_free_calls = 0;
_Atomic uint64_t g_ss_os_madvise_calls = 0;
_Atomic uint64_t g_ss_os_madvise_fail_enomem = 0;
_Atomic uint64_t g_ss_os_madvise_fail_other = 0;
_Atomic uint64_t g_ss_os_huge_alloc_calls = 0;
_Atomic uint64_t g_ss_os_huge_fail_calls = 0;
_Atomic bool g_ss_madvise_disabled = false;
_Atomic uint64_t g_ss_lean_decommit_calls = 0;
_Atomic uint64_t g_ss_lean_retire_calls = 0;

// Superslab/slab observability (Tiny-only; relaxed updates)
_Atomic uint64_t g_ss_live_by_class[8] = {0};
_Atomic uint64_t g_ss_empty_events[8] = {0};
_Atomic uint64_t g_slab_live_events[8] = {0};

// ============================================================================
// Statistics Functions
// ============================================================================

void ss_stats_os_alloc(uint8_t size_class, size_t ss_size) {
  pthread_mutex_lock(&g_superslab_lock);
  g_superslabs_allocated++;
  if (size_class < 8) {
    g_ss_alloc_by_class[size_class]++;
  }
  g_bytes_allocated += ss_size;
  pthread_mutex_unlock(&g_superslab_lock);
}

void ss_stats_cache_reuse(void) {
  pthread_mutex_lock(&g_superslab_lock);
  g_superslabs_reused++; // counter declared via the internal header
  pthread_mutex_unlock(&g_superslab_lock);
}

void ss_stats_cache_store(void) {
  pthread_mutex_lock(&g_superslab_lock);
  g_superslabs_cached++; // counter declared via the internal header
  pthread_mutex_unlock(&g_superslab_lock);
}

void ss_stats_on_ss_alloc_class(int class_idx) {
  if (class_idx >= 0 && class_idx < 8) {
    atomic_fetch_add_explicit(&g_ss_live_by_class[class_idx], 1, memory_order_relaxed);
  }
}

void ss_stats_on_ss_free_class(int class_idx) {
  if (class_idx >= 0 && class_idx < 8) {
    // Underflow guard: the load+sub pair is racy, but that is acceptable
    // for relaxed, stats-only counters.
    uint64_t prev = atomic_load_explicit(&g_ss_live_by_class[class_idx], memory_order_relaxed);
    if (prev > 0) {
      atomic_fetch_sub_explicit(&g_ss_live_by_class[class_idx], 1, memory_order_relaxed);
    }
  }
}

void ss_stats_on_ss_scan(int class_idx, int slab_live, int is_empty) {
  if (class_idx < 0 || class_idx >= 8) {
    return;
  }
  if (slab_live > 0) {
    atomic_fetch_add_explicit(&g_slab_live_events[class_idx],
                              (uint64_t)slab_live,
                              memory_order_relaxed);
  }
  if (is_empty) {
    atomic_fetch_add_explicit(&g_ss_empty_events[class_idx], 1, memory_order_relaxed);
  }
}

// ============================================================================
// Diagnostics
// ============================================================================

void log_superslab_oom_once(size_t ss_size, size_t alloc_size, int err) {
  (void)ss_size; (void)alloc_size; (void)err; // used only in debug builds
  static int logged = 0;
  if (logged) return;
  logged = 1;

  // CRITICAL FIX: Increment lock depth FIRST, before any libc calls.
  // fopen/fclose/getrlimit/fprintf may all call malloc internally;
  // we must bypass the HAKMEM wrapper to avoid a header-mismatch crash.
  extern __thread int g_hakmem_lock_depth;
  g_hakmem_lock_depth++; // Force wrapper to use __libc_malloc

  struct rlimit rl = {0};
  if (getrlimit(RLIMIT_AS, &rl) != 0) {
    rl.rlim_cur = RLIM_INFINITY;
    rl.rlim_max = RLIM_INFINITY;
  }

  unsigned long vm_size_kb = 0;
  unsigned long vm_rss_kb = 0;
  FILE* status = fopen("/proc/self/status", "r");
  if (status) {
    char line[256];
    while (fgets(line, sizeof(line), status)) {
      if (strncmp(line, "VmSize:", 7) == 0) {
        (void)sscanf(line + 7, "%lu", &vm_size_kb);
      } else if (strncmp(line, "VmRSS:", 6) == 0) {
        (void)sscanf(line + 6, "%lu", &vm_rss_kb);
      }
    }
    fclose(status);
  }
  // CRITICAL FIX: Do NOT decrement lock_depth yet!
  // fprintf() below may call malloc for buffering.

  char rl_cur_buf[32];
  char rl_max_buf[32];
  if (rl.rlim_cur == RLIM_INFINITY) {
    strcpy(rl_cur_buf, "inf");
  } else {
    snprintf(rl_cur_buf, sizeof(rl_cur_buf), "%llu", (unsigned long long)rl.rlim_cur);
  }
  if (rl.rlim_max == RLIM_INFINITY) {
    strcpy(rl_max_buf, "inf");
  } else {
    snprintf(rl_max_buf, sizeof(rl_max_buf), "%llu", (unsigned long long)rl.rlim_max);
  }

#if !HAKMEM_BUILD_RELEASE
  fprintf(stderr,
          "[SS OOM] mmap failed: err=%d ss_size=%zu alloc_size=%zu "
          "alloc=%llu freed=%llu bytes=%llu "
          "RLIMIT_AS(cur=%s max=%s) VmSize=%lu kB VmRSS=%lu kB\n",
          err,
          ss_size,
          alloc_size,
          (unsigned long long)g_superslabs_allocated,
          (unsigned long long)g_superslabs_freed,
          (unsigned long long)g_bytes_allocated,
          rl_cur_buf,
          rl_max_buf,
          vm_size_kb,
          vm_rss_kb);
#endif

  g_hakmem_lock_depth--; // Now safe to restore (all libc calls complete)
}

// ============================================================================
// Statistics / Debugging
// ============================================================================

void superslab_print_stats(SuperSlab* ss) {
  if (!ss || ss->magic != SUPERSLAB_MAGIC) {
    printf("Invalid SuperSlab\n");
    return;
  }

  printf("=== SuperSlab Stats ===\n");
  printf("Address: %p\n", (void*)ss);
  // Phase 12: per-SS size_class removed; classes are per-slab via meta->class_idx.
  printf("Active slabs: %u / %d\n", ss->active_slabs, ss_slabs_capacity(ss));
  printf("Bitmap: 0x%08X\n", ss->slab_bitmap);
  printf("\nPer-slab details:\n");
  for (int i = 0; i < ss_slabs_capacity(ss); i++) {
    if (ss->slab_bitmap & (1u << i)) {
      TinySlabMeta* meta = &ss->slabs[i];
      printf("  Slab %2d: used=%u/%u freelist=%p class=%u owner_tid_low=%u\n",
             i, meta->used, meta->capacity, (void*)meta->freelist,
             (unsigned)meta->class_idx, (unsigned)meta->owner_tid_low);
    }
  }
  printf("\n");
}

// Global statistics
void superslab_print_global_stats(void) {
  pthread_mutex_lock(&g_superslab_lock);
  printf("=== Global SuperSlab Stats ===\n");
  // Counters are uint64_t; cast to unsigned long long for a portable format.
  printf("SuperSlabs allocated: %llu\n", (unsigned long long)g_superslabs_allocated);
  printf("SuperSlabs freed: %llu\n", (unsigned long long)g_superslabs_freed);
  printf("SuperSlabs active: %llu\n",
         (unsigned long long)(g_superslabs_allocated - g_superslabs_freed));
  printf("Total bytes allocated: %llu MB\n",
         (unsigned long long)(g_bytes_allocated / (1024 * 1024)));
  pthread_mutex_unlock(&g_superslab_lock);
}

// ============================================================================
// OS call counters (optional, ENV gated)
// ============================================================================

static int ss_os_stats_env_enabled(void) {
  static int g = -1;
  if (__builtin_expect(g == -1, 0)) {
    const char* e = getenv("HAKMEM_SS_OS_STATS");
    g = (e && *e && *e != '0') ? 1 : 0;
  }
  return g;
}

static void ss_os_stats_dump(void) __attribute__((destructor, used));
static void ss_os_stats_dump(void) {
  if (!ss_os_stats_env_enabled()) {
    return;
  }
  fprintf(stderr,
          "[SS_OS_STATS] alloc=%llu free=%llu madvise=%llu madvise_enomem=%llu madvise_other=%llu madvise_disabled=%d "
          "mmap_total=%llu fallback_mmap=%llu huge_alloc=%llu huge_fail=%llu "
          "lean_decommit=%llu lean_retire=%llu\n",
          (unsigned long long)atomic_load_explicit(&g_ss_os_alloc_calls, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_os_free_calls, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_os_madvise_calls, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_os_madvise_fail_enomem, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_os_madvise_fail_other, memory_order_relaxed),
          atomic_load_explicit(&g_ss_madvise_disabled, memory_order_relaxed) ? 1 : 0,
          (unsigned long long)atomic_load_explicit(&g_ss_mmap_count, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_final_fallback_mmap_count, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_os_huge_alloc_calls, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_os_huge_fail_calls, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_lean_decommit_calls, memory_order_relaxed),
          (unsigned long long)atomic_load_explicit(&g_ss_lean_retire_calls, memory_order_relaxed));
}

void ss_stats_dump_if_requested(void) {
  const char* env = getenv("HAKMEM_SS_STATS_DUMP");
  if (!env || !*env || *env == '0') {
    return;
  }
  fprintf(stderr, "[SS_STATS] class live empty_events slab_live_events\n");
  for (int c = 0; c < 8; c++) {
    uint64_t live = atomic_load_explicit(&g_ss_live_by_class[c], memory_order_relaxed);
    uint64_t empty = atomic_load_explicit(&g_ss_empty_events[c], memory_order_relaxed);
    uint64_t slab_live = atomic_load_explicit(&g_slab_live_events[c], memory_order_relaxed);
    if (live || empty || slab_live) {
      fprintf(stderr, "  C%d: live=%llu empty=%llu slab_live=%llu\n",
              c,
              (unsigned long long)live,
              (unsigned long long)empty,
              (unsigned long long)slab_live);
    }
  }
}