// hakmem_tiny_lazy_init.inc.h - Phase 22: Lazy Per-Class Initialization
// Goal: Reduce cold-start page faults by initializing only used classes
//
// ChatGPT Analysis (2025-11-16):
// - hak_tiny_init() page faults: 94.94% of all page faults
// - Cause: Eager init of all 8 classes even if only C2/C3 used
// - Solution: Lazy init per class on first use
//
// Expected Impact:
// - Page faults: -90% (only touch C2/C3 for 256B workload)
// - Cold start: +30-40% performance (16.2M → 22-25M ops/s)
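//
// Usage sketch (illustrative only): callers are expected to run lazy_init_global()
// once for shared resources and lazy_init_class() for the class they are about to
// touch. The helpers tiny_size_to_class() and tiny_alloc_from_class() below are
// hypothetical placeholder names for the real fast-path entry points, not
// definitions provided by this header:
//
//     void* tiny_alloc_example(size_t size) {
//         int cls = tiny_size_to_class(size);   // map request size to a class index (C0-C7)
//         lazy_init_global();                   // one-time shared setup (registry, LRU, addr map)
//         lazy_init_class(cls);                 // initialize only the class actually used
//         return tiny_alloc_from_class(cls);    // continue on the normal per-class path
//     }
//
// Because only the requested class is initialized, a workload that hits one or two
// classes never touches the TLS/ACE state of the remaining classes, which is where
// the cold-start page-fault reduction comes from.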
#ifndef HAKMEM_TINY_LAZY_INIT_INC_H
#define HAKMEM_TINY_LAZY_INIT_INC_H
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>  // For fprintf
#include <stdlib.h> // For getenv
#include "superslab/superslab_types.h" // For SuperSlabACEState
#include "box/ss_addr_map_box.h" // Phase 9-1: SuperSlab address map
// ============================================================================
// Phase 22-1: Per-Class Initialization State
// ============================================================================
// Track which classes are initialized (per-thread)
__thread uint8_t g_class_initialized[TINY_NUM_CLASSES] = {0};
// Global one-time init flag (for shared resources)
static int g_tiny_global_initialized = 0;
static pthread_mutex_t g_lazy_init_lock = PTHREAD_MUTEX_INITIALIZER;
// ============================================================================
// Phase 22-2: Lazy Init Implementation
// ============================================================================
// Initialize one class lazily (called on first use)
static inline void lazy_init_class(int class_idx) {
    // Fast path: already initialized
    if (__builtin_expect(g_class_initialized[class_idx], 1)) {
        return;
    }

    // Slow path: need to initialize this class
    pthread_mutex_lock(&g_lazy_init_lock);

    // Double-check after acquiring lock
    if (g_class_initialized[class_idx]) {
        pthread_mutex_unlock(&g_lazy_init_lock);
        return;
    }

    // Extracted from hak_tiny_init.inc lines 84-103: TLS list init
    {
        TinyTLSList* tls = &g_tls_lists[class_idx];
        tls->head = NULL;
        tls->count = 0;
        uint32_t base_cap = (uint32_t)tiny_default_cap(class_idx);
        uint32_t class_max = (uint32_t)tiny_cap_max_for_class(class_idx);
        if (base_cap > class_max) base_cap = class_max;
        // Apply global cap limit if set
        extern int g_mag_cap_limit;
        extern int g_mag_cap_override[TINY_NUM_CLASSES];
        if ((uint32_t)g_mag_cap_limit < base_cap) base_cap = (uint32_t)g_mag_cap_limit;
        if (g_mag_cap_override[class_idx] > 0) {
            uint32_t ov = (uint32_t)g_mag_cap_override[class_idx];
            if (ov > class_max) ov = class_max;
            if (ov > (uint32_t)g_mag_cap_limit) ov = (uint32_t)g_mag_cap_limit;
            if (ov != 0u) base_cap = ov;
        }
        if (base_cap == 0u) base_cap = 32u;
        tls->cap = base_cap;
        tls->refill_low = tiny_tls_default_refill(base_cap);
        tls->spill_high = tiny_tls_default_spill(base_cap);
        tiny_tls_publish_targets(class_idx, base_cap);
    }
    // CRITICAL FIX: Clear TLS SLL (Phase 3d-B unified structure) to purge stale blocks.
    // This prevents C7 1024B→2048B stride upgrade issues where old misaligned blocks
    // remain in TLS SLL from previous runs or initialization paths.
    // Note: g_tls_sll is defined in hakmem_tiny_tls_state_box.inc, already visible here
    g_tls_sll[class_idx].head = NULL;
    g_tls_sll[class_idx].count = 0;
#if !HAKMEM_BUILD_RELEASE
    fprintf(stderr, "[LAZY_INIT] Cleared TLS SLL for class %d (purge stale blocks)\n", class_idx);
#endif
    // Extracted from hak_tiny_init.inc lines 623-625: per-class lock
    pthread_mutex_init(&g_tiny_class_locks[class_idx].m, NULL);

    // Extracted from hak_tiny_init.inc lines 628-637: ACE state
    {
        extern SuperSlabACEState g_ss_ace[TINY_NUM_CLASSES];
        g_ss_ace[class_idx].current_lg = 20; // Start with 1MB SuperSlabs (2^20 bytes)
        g_ss_ace[class_idx].target_lg = 20;
        g_ss_ace[class_idx].hot_score = 0;
        g_ss_ace[class_idx].alloc_count = 0;
        g_ss_ace[class_idx].refill_count = 0;
        g_ss_ace[class_idx].spill_count = 0;
        g_ss_ace[class_idx].live_blocks = 0;
        g_ss_ace[class_idx].last_tick_ns = 0;
    }

    // Mark this class as initialized
    g_class_initialized[class_idx] = 1;
    pthread_mutex_unlock(&g_lazy_init_lock);

#if !HAKMEM_BUILD_RELEASE
    fprintf(stderr, "[LAZY_INIT] Class %d initialized\n", class_idx);
#endif
}
// Global initialization (called once, for non-class resources)
static inline void lazy_init_global(void) {
    if (__builtin_expect(g_tiny_global_initialized, 1)) {
        return;
    }

    pthread_mutex_lock(&g_lazy_init_lock);
    if (g_tiny_global_initialized) {
        pthread_mutex_unlock(&g_lazy_init_lock);
        return;
    }

    // Initialize SuperSlab subsystem (only once)
    extern int g_use_superslab;
    if (g_use_superslab) {
        extern void hak_super_registry_init(void);
        extern void hak_ss_lru_init(void);
        extern void hak_ss_prewarm_init(void);
        hak_super_registry_init();
        hak_ss_lru_init();
        hak_ss_prewarm_init();

        // Phase 9-1: Initialize SuperSlab address map (hash table, O(1) lookup)
        ss_map_init(&g_ss_addr_map);
#if !HAKMEM_BUILD_RELEASE
        if (getenv("HAKMEM_SS_MAP_TRACE")) {
            fprintf(stderr, "[SS_MAP] Initialized hash table with %d buckets\n", SS_MAP_HASH_SIZE);
        }
#endif
    }

    // Mark global resources as initialized
    g_tiny_global_initialized = 1;
    pthread_mutex_unlock(&g_lazy_init_lock);

#if !HAKMEM_BUILD_RELEASE
    fprintf(stderr, "[LAZY_INIT] Global resources initialized\n");
#endif
}
#endif // HAKMEM_TINY_LAZY_INIT_INC_H