hakmem/core/front/tiny_ring_cache.c

// tiny_ring_cache.c - Phase 21-1: Ring cache implementation
#include "tiny_ring_cache.h"
#include "../box/tls_sll_box.h" // For tls_sll_pop/push (Phase 21-1-C refill)
#include <stdio.h>  // fprintf/fflush in !HAKMEM_BUILD_RELEASE debug paths
#include <stdlib.h>
#include <string.h>
// ============================================================================
// TLS Variables (defined here, extern in header)
// ============================================================================
__thread TinyRingCache g_ring_cache_c2 = {NULL, 0, 0, 0, 0};
__thread TinyRingCache g_ring_cache_c3 = {NULL, 0, 0, 0, 0};
__thread TinyRingCache g_ring_cache_c5 = {NULL, 0, 0, 0, 0};
// ============================================================================
// Metrics (Phase 21-1-E, optional for Phase 21-1-C)
// ============================================================================
#if !HAKMEM_BUILD_RELEASE
__thread uint64_t g_ring_cache_hit[8] = {0};
__thread uint64_t g_ring_cache_miss[8] = {0};
__thread uint64_t g_ring_cache_push[8] = {0};
__thread uint64_t g_ring_cache_full[8] = {0};
__thread uint64_t g_ring_cache_refill[8] = {0};
#endif
// ============================================================================
// Init (called at thread start, from hakmem_tiny.c)
// ============================================================================
void ring_cache_init(void) {
if (!ring_cache_enabled()) return;
// C2 init
size_t cap_c2 = ring_capacity_c2();
g_ring_cache_c2.slots = (void**)calloc(cap_c2, sizeof(void*));
if (!g_ring_cache_c2.slots) {
#if !HAKMEM_BUILD_RELEASE
fprintf(stderr, "[Ring-INIT] Failed to allocate C2 ring (%zu slots)\n", cap_c2);
fflush(stderr);
#endif
return;
}
g_ring_cache_c2.capacity = (uint16_t)cap_c2;
g_ring_cache_c2.mask = (uint16_t)(cap_c2 - 1);
g_ring_cache_c2.head = 0;
g_ring_cache_c2.tail = 0;
// C3 init
size_t cap_c3 = ring_capacity_c3();
g_ring_cache_c3.slots = (void**)calloc(cap_c3, sizeof(void*));
if (!g_ring_cache_c3.slots) {
#if !HAKMEM_BUILD_RELEASE
fprintf(stderr, "[Ring-INIT] Failed to allocate C3 ring (%zu slots)\n", cap_c3);
fflush(stderr);
#endif
// Free C2 if C3 failed
free(g_ring_cache_c2.slots);
g_ring_cache_c2.slots = NULL;
return;
}
g_ring_cache_c3.capacity = (uint16_t)cap_c3;
g_ring_cache_c3.mask = (uint16_t)(cap_c3 - 1);
g_ring_cache_c3.head = 0;
g_ring_cache_c3.tail = 0;
// C5 init
size_t cap_c5 = ring_capacity_c5();
g_ring_cache_c5.slots = (void**)calloc(cap_c5, sizeof(void*));
if (!g_ring_cache_c5.slots) {
#if !HAKMEM_BUILD_RELEASE
fprintf(stderr, "[Ring-INIT] Failed to allocate C5 ring (%zu slots)\n", cap_c5);
fflush(stderr);
#endif
// Free C2 and C3 if C5 failed
free(g_ring_cache_c2.slots);
g_ring_cache_c2.slots = NULL;
free(g_ring_cache_c3.slots);
g_ring_cache_c3.slots = NULL;
return;
}
g_ring_cache_c5.capacity = (uint16_t)cap_c5;
g_ring_cache_c5.mask = (uint16_t)(cap_c5 - 1);
g_ring_cache_c5.head = 0;
g_ring_cache_c5.tail = 0;
#if !HAKMEM_BUILD_RELEASE
fprintf(stderr, "[Ring-INIT] C2=%zu slots (%zu bytes), C3=%zu slots (%zu bytes), C5=%zu slots (%zu bytes)\n",
cap_c2, cap_c2 * sizeof(void*),
cap_c3, cap_c3 * sizeof(void*),
cap_c5, cap_c5 * sizeof(void*));
fflush(stderr);
#endif
}
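/*
 * Usage sketch (illustrative only; the actual hook is the thread-init path in
 * hakmem_tiny.c per the Phase 21-1 plan and is not shown in this file):
 *
 *   ring_cache_init();       // per thread, before the first tiny alloc
 *   ...                      // alloc/free fast paths hit the ring
 *   ring_cache_shutdown();   // at thread exit (optional)
 */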
// ============================================================================
// Shutdown (called at thread exit, optional)
// ============================================================================
void ring_cache_shutdown(void) {
if (!ring_cache_enabled()) return;
// Drain rings to TLS SLL before shutdown (prevent leak)
// TODO: Implement drain logic in Phase 21-1-C
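// A minimal drain sketch for Phase 21-1-C (assumption: a ring_cache_pop(class_idx, &ptr)
// inline exists in tiny_ring_cache.h, mirroring the ring_cache_push() used below):
//
//   void* p = NULL;
//   while (ring_cache_pop(2, &p)) tls_sll_push(2, p, (uint32_t)-1);
//   while (ring_cache_pop(3, &p)) tls_sll_push(3, p, (uint32_t)-1);
//   while (ring_cache_pop(5, &p)) tls_sll_push(5, p, (uint32_t)-1);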
// Free ring buffers
if (g_ring_cache_c2.slots) {
free(g_ring_cache_c2.slots);
g_ring_cache_c2.slots = NULL;
}
if (g_ring_cache_c3.slots) {
free(g_ring_cache_c3.slots);
g_ring_cache_c3.slots = NULL;
}
if (g_ring_cache_c5.slots) {
free(g_ring_cache_c5.slots);
g_ring_cache_c5.slots = NULL;
}
#if !HAKMEM_BUILD_RELEASE
fprintf(stderr, "[Ring-SHUTDOWN] C2/C3/C5 rings freed\n");
fflush(stderr);
#endif
}
// ============================================================================
// Refill from TLS SLL (cascade, Phase 21-1-C)
// ============================================================================
// Refill ring from TLS SLL (one-way cascade: SLL → Ring)
// Returns: number of blocks transferred
int ring_refill_from_sll(int class_idx, int target_count) {
if (!ring_cascade_enabled()) return 0;
if (class_idx != 2 && class_idx != 3) return 0; // Cascade covers C2/C3 only; the C5 ring is not refilled here
int transferred = 0;
while (transferred < target_count) {
void* ptr = NULL;
// Pop from TLS SLL
if (!tls_sll_pop(class_idx, &ptr)) {
break; // SLL empty
}
// Push to Ring
if (!ring_cache_push(class_idx, ptr)) {
// Ring full, push back to SLL
tls_sll_push(class_idx, ptr, (uint32_t)-1); // Unlimited capacity
break;
}
transferred++;
}
#if !HAKMEM_BUILD_RELEASE
if (transferred > 0) {
g_ring_cache_refill[class_idx]++; // Count refill operations
fprintf(stderr, "[Ring-REFILL] C%d: %d blocks transferred from SLL to Ring\n",
class_idx, transferred);
fflush(stderr);
}
#endif
return transferred;
}
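/*
 * Pop-or-refill sketch for the Phase 21-1-B alloc-path integration
 * (illustrative only; assumes a ring_cache_pop(class_idx, &ptr) inline in
 * tiny_ring_cache.h that returns nonzero on success, like ring_cache_push()):
 *
 *   void* ptr = NULL;
 *   if (!ring_cache_pop(class_idx, &ptr) &&
 *       ring_refill_from_sll(class_idx, 32) > 0) {   // batch size is a tuning knob
 *       (void)ring_cache_pop(class_idx, &ptr);
 *   }
 *   if (!ptr) {
 *       // fall back to the HeapV2 / TLS SLL / SuperSlab layers
 *   }
 */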
// ============================================================================
// Stats (Phase 21-1-C/E metrics)
// ============================================================================
void ring_cache_print_stats(void) {
if (!ring_cache_enabled()) return;
#if !HAKMEM_BUILD_RELEASE
// Current occupancy
uint16_t c2_count = (g_ring_cache_c2.tail >= g_ring_cache_c2.head)
? (g_ring_cache_c2.tail - g_ring_cache_c2.head)
: (g_ring_cache_c2.capacity - g_ring_cache_c2.head + g_ring_cache_c2.tail);
uint16_t c3_count = (g_ring_cache_c3.tail >= g_ring_cache_c3.head)
? (g_ring_cache_c3.tail - g_ring_cache_c3.head)
: (g_ring_cache_c3.capacity - g_ring_cache_c3.head + g_ring_cache_c3.tail);
fprintf(stderr, "\n[Ring-STATS] Ring Cache Metrics (C2/C3):\n");
fprintf(stderr, " C2: %u/%u slots occupied\n", c2_count, g_ring_cache_c2.capacity);
fprintf(stderr, " C3: %u/%u slots occupied\n", c3_count, g_ring_cache_c3.capacity);
// Metrics summary (C2/C3 only)
for (int c = 2; c <= 3; c++) {
uint64_t total_allocs = g_ring_cache_hit[c] + g_ring_cache_miss[c];
uint64_t total_frees = g_ring_cache_push[c] + g_ring_cache_full[c];
double hit_rate = (total_allocs > 0) ? (100.0 * g_ring_cache_hit[c] / total_allocs) : 0.0;
double full_rate = (total_frees > 0) ? (100.0 * g_ring_cache_full[c] / total_frees) : 0.0;
if (total_allocs > 0 || total_frees > 0) {
fprintf(stderr, " C%d: hit=%llu miss=%llu (%.1f%% hit), push=%llu full=%llu (%.1f%% full), refill=%llu\n",
c,
(unsigned long long)g_ring_cache_hit[c],
(unsigned long long)g_ring_cache_miss[c],
hit_rate,
(unsigned long long)g_ring_cache_push[c],
(unsigned long long)g_ring_cache_full[c],
full_rate,
(unsigned long long)g_ring_cache_refill[c]);
}
}
fflush(stderr);
#endif
}
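/*
 * ENV control (from the Phase 21-1 notes; defaults come from ring_cache_enabled()
 * and ring_capacity_*() in tiny_ring_cache.h):
 *
 *   HAKMEM_TINY_HOT_RING_ENABLE=1    # enable the ring cache (default: 0 = off)
 *   HAKMEM_TINY_HOT_RING_C2=128      # C2 ring capacity (power of 2)
 *   HAKMEM_TINY_HOT_RING_C3=128      # C3 ring capacity (power of 2)
 *   HAKMEM_TINY_HOT_RING_CASCADE=1   # enable SLL -> Ring refill (default: 0)
 *
 * The later-added C5 ring is presumably sized by HAKMEM_TINY_HOT_RING_C5
 * (assumption: the name follows the C2/C3 pattern; see ring_capacity_c5()).
 */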