Commit Graph

16 Commits

Author SHA1 Message Date
60b02adf54 hak_init_wait_for_ready: remove timeout + suppress debug output
- hak_init_wait_for_ready(): removed the timeout (i > 1000000)
  - Other threads now reliably wait until initialization completes
  - Prevents the libc fallback triggered by init_wait

- tls_sll_drain_box.h: wrapped debug output in #ifndef NDEBUG
  - Suppresses unneeded fprintf output in release builds
  - [TLS_SLL_DRAIN] messages no longer appear during benchmark runs

Performance impact:
- sh8bench, 8 threads: 17 s (unchanged)
- Fallbacks: 8 (initialization only; expected behavior)
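
A minimal sketch of the timeout-free wait, assuming a hypothetical atomic ready flag `g_hak_ready` (the real flag name is not shown in this log):

```c
#include <stdatomic.h>
#include <stdio.h>
#include <sched.h>

/* Hypothetical flag set by the thread that finishes initialization. */
extern _Atomic int g_hak_ready;

static inline void hak_init_wait_for_ready(void) {
    /* No iteration cap: late threads block until init completes,
     * so they never fall back to libc mid-initialization. */
    while (!atomic_load_explicit(&g_hak_ready, memory_order_acquire))
        sched_yield();
#ifndef NDEBUG
    fprintf(stderr, "[HAK_INIT] ready\n");  /* silenced in release builds */
#endif
}
```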

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 23:29:07 +09:00
0bc33dc4f5 Phase 9-2: Remove Legacy Backend & Unify to Shared Pool (50M ops/s)
- Removed Legacy Backend fallback; Shared Pool is now the sole backend.
- Removed Soft Cap limit in Shared Pool to allow full memory management.
- Implemented EMPTY slab recycling with a batched meta->used decrement in remote drain (sketched below).
- Updated tiny_free_local_box to return is_empty status for safe recycling.
- Fixed race condition in release path by removing from legacy list early.
- Achieved 50.3M ops/s in WS8192 benchmark (+200% vs baseline).
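
A hedged sketch of that batched decrement; `slab_meta_t` and `slab_recycle()` are illustrative names under assumed semantics, not the repository's actual API:

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct { _Atomic uint32_t used; } slab_meta_t;  /* illustrative */

/* Return a batch of remotely freed blocks to one slab, then decrement
 * used once for the whole batch instead of once per block. */
static void remote_drain_batch(slab_meta_t *meta, void **blocks, uint32_t n) {
    for (uint32_t i = 0; i < n; i++) {
        (void)blocks[i];  /* ... push blocks[i] onto the slab freelist ... */
    }
    uint32_t prev = atomic_fetch_sub_explicit(&meta->used, n,
                                              memory_order_acq_rel);
    if (prev == n) {
        /* Slab just became EMPTY: eligible for recycling. */
        /* slab_recycle(meta); */
    }
}
```
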
2025-12-01 13:47:23 +09:00
87b7d30998 Phase 9: SuperSlab optimization & EMPTY slab recycling (WIP)
Phase 9-1: O(1) SuperSlab lookup optimization
- Created ss_addr_map_box: Hash table (8192 buckets) for O(1) SuperSlab lookup
- Created ss_tls_hint_box: TLS caching layer for SuperSlab hints
- Integrated hash table into registry (init, insert, remove, lookup)
- Modified hak_super_lookup() to use new hash table
- Expected: 50-80 cycles → 10-20 cycles (not verified - SuperSlab disabled by default)
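
The O(1) lookup presumably hashes the SuperSlab-aligned base of a pointer into a fixed bucket array; a sketch under that assumption (the 8192-bucket count is from the commit, the alignment shift and struct fields are guesses):

```c
#include <stdint.h>
#include <stddef.h>

#define SS_MAP_BUCKETS 8192   /* per the commit message */
#define SS_ALIGN_SHIFT 21     /* assumed 2 MB SuperSlab alignment */

typedef struct ss_map_entry {
    uintptr_t key;                 /* SuperSlab base >> SS_ALIGN_SHIFT */
    void *ss;                      /* SuperSlab descriptor */
    struct ss_map_entry *next;     /* collision chain */
} ss_map_entry;

static ss_map_entry *g_ss_addr_map[SS_MAP_BUCKETS];

static inline void *ss_addr_map_lookup(const void *p) {
    uintptr_t key = (uintptr_t)p >> SS_ALIGN_SHIFT;
    for (ss_map_entry *e = g_ss_addr_map[key & (SS_MAP_BUCKETS - 1)];
         e; e = e->next)
        if (e->key == key) return e->ss;
    return NULL;                   /* not a registered SuperSlab */
}
```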

Phase 9-2: EMPTY slab recycling implementation
- Created slab_recycling_box: SLAB_TRY_RECYCLE() macro following Box pattern
- Integrated into remote drain (superslab_slab.c)
- Integrated into TLS SLL drain (tls_sll_drain_box.h) with touched slab tracking
- Observable: Debug tracing via HAKMEM_SLAB_RECYCLE_TRACE
- Updated Makefile: Added new box objects to 3 build targets

Known Issues:
- SuperSlab registry exhaustion still occurs (unregistration not working)
- shared_pool_release_slab() may not be removing from g_super_reg[]
- Needs investigation before Phase 9-2 can be completed

Expected Impact (when fixed):
- Stage 1 hit rate: 0% → 80%
- shared_fail events: 4 → 0
- Kernel overhead: 55% → 15%
- Throughput: 16.5M → 25-30M ops/s (+50-80%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 07:16:50 +09:00
4f2bcb7d32 Refactor: Phase 2 Box化 - SuperSlab Lookup Box with multiple contract levels
Purpose: Formalize SuperSlab lookup responsibilities with clear safety guarantees

Evolution:
- Phase 12: UNSAFE mask+dereference (5-10 cycles) → 12% crash rate
- Phase 1b: SAFE registry lookup (50-100 cycles) → 0% crash rate
- Phase 2: Box化 - multiple contracts (UNSAFE/SAFE/GUARDED)

Box Pattern Benefits:
1. Clear Contracts: Each API documents preconditions and guarantees
2. Multiple Levels: Choose speed vs safety based on context
3. Future-Proof: Enables optimizations without breaking existing code

API Design:
- ss_lookup_unsafe(): 5-10 cycles, requires validated pointer (internal use only)
- ss_lookup_safe(): 50-100 cycles, works with arbitrary pointers (recommended)
- ss_lookup_guarded(): 100-200 cycles, adds integrity checks (debug only)
- ss_fast_lookup(): Backward compatible (→ ss_lookup_safe)
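
A sketch of how the three contract levels might be declared; the cycle costs and usage notes come from the list above, the exact signatures are assumptions:

```c
struct SuperSlab;  /* opaque here */

/* UNSAFE: caller must guarantee p came from this allocator.
 * Mask + dereference, ~5-10 cycles. Internal use only. */
struct SuperSlab *ss_lookup_unsafe(const void *p);

/* SAFE: registry lookup, valid for arbitrary pointers,
 * ~50-100 cycles. Recommended default. */
struct SuperSlab *ss_lookup_safe(const void *p);

#ifndef NDEBUG
/* GUARDED: SAFE plus integrity checks, ~100-200 cycles. */
struct SuperSlab *ss_lookup_guarded(const void *p);
#endif

/* Backward compatibility for existing call sites. */
#define ss_fast_lookup(p) ss_lookup_safe(p)
```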

Implementation:
- Created core/box/superslab_lookup_box.h with full contract documentation
- Integrated into core/superslab/superslab_inline.h
- ss_lookup_safe() implemented as macro to avoid circular dependency
- ss_lookup_guarded() only available in debug builds
- Removed conflicting extern declarations from 3 locations

Testing:
- Build: Success (all warnings resolved)
- Crash rate: 0% (50/50 iterations passed)
- Backward compatibility: Maintained via ss_fast_lookup() macro

Future Optimization Opportunities (documented in Box):
- Phase 2.1: Hybrid lookup (try UNSAFE first, fallback to SAFE)
- Phase 2.2: Per-thread cache (1-2 cycles hit rate)
- Phase 2.3: Hardware-assisted validation (PAC/CPUID)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 08:44:29 +09:00
846daa3edf Cleanup: Fix 2 additional Class 0/7 header bugs (correctness fix)
Task Agent Investigation:
- Found 2 more instances of hardcoded `class_idx != 7` checks
- These are real bugs (C0 also uses offset=0, not just C7)
- However, NOT the root cause of 12% crash rate

Bug Fixes (2 locations):
1. tls_sll_drain_box.h:190
   - Path: TLS SLL drain → tiny_free_local_box()
   - Fix: Use tiny_header_write_for_alloc() (ALL classes)
   - Reason: tiny_free_local_box() reads header for class_idx

2. hakmem_tiny_refill.inc.h:384
   - Path: SuperSlab refill → TLS SLL push
   - Fix: Use tiny_header_write_if_preserved() (C1-C6 only)
   - Reason: TLS SLL push needs header for validation

Test Results:
- Before: 12% crash rate (88/100 runs successful)
- After: 12% crash rate (44/50 runs successful)
- Conclusion: Correctness fix, but not primary crash cause

Analysis:
- Bugs are real (incorrect Class 0 handling)
- Fixes don't reduce crash rate → different root cause exists
- Heisenbug characteristics (disappears under gdb)
- Likely: Race condition, uninitialized memory, or use-after-free

Remaining Work:
- 12% crash rate persists (requires different investigation)
- Next: Focus on TLS initialization, race conditions, allocation paths

Design Note:
- tls_sll_drain_box.h uses tiny_header_write_for_alloc()
  because tiny_free_local_box() needs header to read class_idx
- hakmem_tiny_refill.inc.h uses tiny_header_write_if_preserved()
  because TLS SLL push validates header (C1-C6 only)
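
A hedged sketch of the two writers' contracts as described in the design note; `tiny_header_byte()` and the exact signatures are assumptions (the 0xa0-family encoding is a guess loosely anchored by the 0xa7 header byte mentioned in commit 8b67718bf2 below):

```c
#include <stdint.h>

static inline uint8_t tiny_header_byte(unsigned cls) {  /* hypothetical */
    return (uint8_t)(0xa0u | cls);                      /* encoding guess */
}

/* Alloc path: every class gets its header byte at base[0], including
 * C0 and C7 (the free path reads class_idx from it). */
static inline void tiny_header_write_for_alloc(uint8_t *base, unsigned cls) {
    base[0] = tiny_header_byte(cls);
}

/* Freelist path: only C1-C6 keep a header while free; C0 and C7
 * store the next pointer at base[0] instead, so skip them. */
static inline void tiny_header_write_if_preserved(uint8_t *base, unsigned cls) {
    if (cls != 0 && cls != 7)
        base[0] = tiny_header_byte(cls);
}
```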

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 08:12:08 +09:00
dc9e650db3 Tiny Pool redesign: P0.1, P0.3, P1.1, P1.2 - Out-of-band class_idx lookup
This commit implements the first phase of Tiny Pool redesign based on
ChatGPT architecture review. The goal is to eliminate Header/Next pointer
conflicts by moving class_idx lookup out-of-band (to SuperSlab metadata).

## P0.1: C0(8B) class upgraded to 16B
- Size table changed: {16,32,64,128,256,512,1024,2048} (8 classes)
- LUT updated: 1..16 → class 0, 17..32 → class 1, etc.
- tiny_next_off: C0 now uses offset 1 (header preserved)
- Eliminates edge cases for 8B allocations
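
A sketch of the P0.1 mapping; the table contents follow the commit text, and the real code presumably uses a precomputed byte LUT (a linear scan is shown for brevity):

```c
#include <stddef.h>
#include <stdint.h>

/* 8 classes after the C0 upgrade to 16B. */
static const uint16_t g_tiny_class_size[8] =
    {16, 32, 64, 128, 256, 512, 1024, 2048};

/* 1..16 -> class 0, 17..32 -> class 1, etc. (per the commit text). */
static inline int tiny_size_to_class(size_t sz) {
    if (sz == 0 || sz > 2048) return -1;  /* not a tiny allocation */
    for (int c = 0; c < 8; c++)
        if (sz <= g_tiny_class_size[c]) return c;
    return -1;  /* unreachable */
}
```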

## P0.3: Slab reuse guard Box (tls_slab_reuse_guard_box.h)
- New Box for draining TLS SLL before slab reuse
- ENV gate: HAKMEM_TINY_SLAB_REUSE_GUARD=1
- Prevents stale pointers when slabs are recycled
- Follows Box theory: single responsibility, minimal API

## P1.1: SuperSlab class_map addition
- Added uint8_t class_map[SLABS_PER_SUPERSLAB_MAX] to SuperSlab
- Maps slab_idx → class_idx for out-of-band lookup
- Initialized to 255 (UNASSIGNED) on SuperSlab creation
- Set correctly on slab initialization in all backends

## P1.2: Free fast path uses class_map
- ENV gate: HAKMEM_TINY_USE_CLASS_MAP=1
- Free path can now get class_idx from class_map instead of Header
- Falls back to Header read if class_map returns invalid value
- Fixed Legacy Backend dynamic slab initialization bug
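
A hedged sketch of the P1.2 free-path lookup: class_map first, header read as fallback. The class_map field and UNASSIGNED value match the P1.1 description; the helpers and the ENV-gate variable are hypothetical:

```c
#include <stdint.h>

#define CLASS_UNASSIGNED 255        /* class_map init value per P1.1 */
#define SLABS_PER_SUPERSLAB_MAX 64  /* placeholder count */

typedef struct {
    uint8_t class_map[SLABS_PER_SUPERSLAB_MAX];  /* slab_idx -> class_idx */
} SuperSlab;

extern int g_use_class_map;  /* cached HAKMEM_TINY_USE_CLASS_MAP=1 gate */
extern unsigned tiny_header_read_class(const uint8_t *base);  /* hypothetical */

static inline unsigned tiny_class_for_free(const SuperSlab *ss,
                                           unsigned slab_idx,
                                           const uint8_t *base) {
    if (g_use_class_map) {
        uint8_t c = ss->class_map[slab_idx];  /* out-of-band lookup */
        if (c != CLASS_UNASSIGNED) return c;  /* valid mapping */
    }
    return tiny_header_read_class(base);      /* fallback: in-band header */
}
```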

## Documentation added
- HAKMEM_ARCHITECTURE_OVERVIEW.md: 4-layer architecture analysis
- TLS_SLL_ARCHITECTURE_INVESTIGATION.md: Root cause analysis
- PTR_LIFECYCLE_TRACE_AND_ROOT_CAUSE_ANALYSIS.md: Pointer tracking
- TINY_REDESIGN_CHECKLIST.md: Implementation roadmap (P0-P3)

## Test results
- Baseline: 70% success rate (30% crash - pre-existing issue)
- class_map enabled: 70% success rate (same as baseline)
- Performance: ~30.5M ops/s (unchanged)

## Next steps (P1.3, P2, P3)
- P1.3: Add meta->active for accurate TLS/freelist sync
- P2: TLS SLL redesign with Box-based counting
- P3: Complete Header out-of-band migration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 13:42:39 +09:00
8af9123bcc Larson double-free investigation: Add full operation lifecycle logging
**Diagnostic Enhancement**: Complete malloc/free/pop operation tracing for debug

**Problem**: Larson crashes with TLS_SLL_DUP at count=18. We need to trace the exact
pointer lifecycle to determine whether the allocator returns duplicate addresses or
the benchmark itself double-frees.

**Implementation** (ChatGPT + Claude + Task collaboration):

1. **Global Operation Counter** (core/hakmem_tiny_config_box.inc:9):
   - Single atomic counter for all operations (malloc/free/pop)
   - Chronological ordering across all paths

2. **Allocation Logging** (core/hakmem_tiny_config_box.inc:148-161):
   - HAK_RET_ALLOC macro enhanced with operation logging
   - Logs first 50 class=1 allocations with ptr/base/tls_count

3. **Free Logging** (core/tiny_free_fast_v2.inc.h:222-235):
   - Added before tls_sll_push() call (line 221)
   - Logs first 50 class=1 frees with ptr/base/tls_count_before

4. **Pop Logging** (core/box/tls_sll_box.h:587-597):
   - Added in tls_sll_pop_impl() after successful pop
   - Logs first 50 class=1 pops with base/tls_count_after

5. **Drain Debug Logging** (core/box/tls_sll_drain_box.h:143-151):
   - Enhanced drain loop with detailed logging
   - Tracks pop failures and drained block counts
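
A sketch of the counter-plus-gated-logging pattern in items 1-4; the macro name and log format are illustrative, while the release gate and the "first 50 class=1 operations" window follow the commit text:

```c
#include <stdatomic.h>
#include <stdio.h>

#if !defined(HAKMEM_BUILD_RELEASE)
static _Atomic unsigned long g_op_counter;  /* shared by malloc/free/pop */

#define HAK_OP_LOG(kind, cls, ptr, tls_count)                         \
    do {                                                              \
        unsigned long op_ = atomic_fetch_add(&g_op_counter, 1);       \
        if ((cls) == 1 && op_ < 50)                                   \
            fprintf(stderr, "OP#%04lu %s ptr=%p tls=%lu\n",           \
                    op_, (kind), (void *)(ptr),                       \
                    (unsigned long)(tls_count));                      \
    } while (0)
#else
#define HAK_OP_LOG(kind, cls, ptr, tls_count) ((void)0)  /* zero cost */
#endif
```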

**Initial Findings**:
- First 19 operations: ALL frees, ZERO allocations, ZERO pops
- OP#0006: First free of 0x...430
- OP#0018: Duplicate free of 0x...430 → TLS_SLL_DUP detected
- Suggests either (a) allocations occur before logging starts, or (b) a double-free bug in Larson itself

**Debug-only**: All logging gated by !HAKMEM_BUILD_RELEASE (zero cost in release)

**Next Steps**:
- Expand logging window to 200 operations
- Log initialization phase allocations
- Cross-check with Larson benchmark source

**Status**: Ready for extended testing
2025-11-27 08:18:01 +09:00
8553894171 Larson double-free investigation: Enhanced diagnostics + Remove buggy drain pushback
**Problem**: Larson benchmark crashes with TLS_SLL_DUP (double-free), 100% crash rate in debug

**Root Cause**: TLS drain pushback code (commit c2f104618) created duplicates by
pushing pointers back to TLS SLL while they were still in the linked list chain.

**Diagnostic Enhancements** (ChatGPT + Claude collaboration):
1. **Callsite Tracking**: Track file:line for each TLS SLL push (debug only)
   - Arrays: g_tls_sll_push_file[], g_tls_sll_push_line[]
   - Macro: tls_sll_push() auto-records __FILE__, __LINE__ (sketched below)

2. **Enhanced Duplicate Detection**:
   - Scan depth: 64 → 256 nodes (deep duplicate detection)
   - Error message shows BOTH current and previous push locations
   - Calls ptr_trace_dump_now() for detailed analysis

3. **Evidence Captured**:
   - Both duplicate pushes from same line (221)
   - Pointer at position 11 in TLS SLL (count=18, scanned=11)
   - Confirms pointer allocated without being popped from TLS SLL
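
A sketch of the callsite-recording wrapper from item 1; the per-class indexing and `tls_sll_push_impl` name are guesses (the real arrays may record per-node rather than per-class):

```c
#ifndef HAKMEM_BUILD_RELEASE
extern int tls_sll_push_impl(int cls, void *base);  /* hypothetical */

/* Last push location per size class (indexing scheme is a guess). */
static __thread const char *g_tls_sll_push_file[8];
static __thread int         g_tls_sll_push_line[8];

/* Wrap the real push so every call site self-identifies; the
 * duplicate-detection error path can then print both locations. */
#define tls_sll_push(cls, base)                     \
    (g_tls_sll_push_file[(cls)] = __FILE__,         \
     g_tls_sll_push_line[(cls)] = __LINE__,         \
     tls_sll_push_impl((cls), (base)))
#endif
```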

**Fix**:
- **core/box/tls_sll_drain_box.h**: Remove pushback code entirely
  - Old: Push back to TLS SLL on validation failure → duplicates!
  - New: Skip pointer (accept rare leak) to avoid duplicates
  - Rationale: SuperSlab lookup failures are transient/rare

**Status**: Fix implemented, ready for testing

**Updated**:
- LARSON_DOUBLE_FREE_INVESTIGATION.md: Root cause confirmed
2025-11-27 07:30:32 +09:00
c2f104618f Fix critical TLS drain memory leak causing potential double-free
## Root Cause

TLS drain was dropping pointers when SuperSlab lookup or slab_idx validation failed:
- Pop pointer from TLS SLL
- Lookup/validation fails
- continue → LEAK! Pointer never returned to any freelist

## Impact

Memory leak + potential double allocation:
1. Pointer P popped but leaked
2. Same address P reallocated from carve/other source
3. User frees P again → duplicate detection → ABORT

## Fix

**Before (BUGGY)**:
```c
if (!ss || invalid_slab_idx) {
    continue;  // ← LEAK!
}
```

**After (FIXED)**:
```c
if (!ss || invalid_slab_idx) {
    // Push back to TLS SLL head (retry later)
    tiny_next_write(class_idx, base, g_tls_sll[class_idx].head);
    g_tls_sll[class_idx].head = base;
    g_tls_sll[class_idx].count++;
    break;  // Stop draining to avoid infinite retry
}
```

## Files Changed

- core/box/tls_sll_drain_box.h: Fix 2 leak sites (SS lookup + slab_idx validation)
- docs/analysis/LARSON_DOUBLE_FREE_INVESTIGATION.md: Investigation report

## Related

- Larson double-free investigation (47% crash rate)
- Commit e4868bf23: Freelist header write + abort() on duplicate
- ChatGPT analysis: Larson benchmark code is correct (no user bug)
2025-11-27 06:49:38 +09:00
8b67718bf2 Fix C7 TLS SLL corruption: Protect next pointer from user data overwrites
## Root Cause
C7 (1024B allocations, 2048B stride) was using offset=1 for freelist next
pointers, storing them at `base[1..8]`. Since user pointer is `base+1`, users
could overwrite the next pointer area, corrupting the TLS SLL freelist.

## The Bug Sequence
1. Block freed → TLS SLL push stores next at `base[1..8]`
2. Block allocated → User gets `base+1`, can modify `base[1..2047]`
3. User writes data → Overwrites `base[1..8]` (next pointer area!)
4. Block freed again → tiny_next_load() reads garbage from `base[1..8]`
5. TLS SLL head becomes invalid (0xfe, 0xdb, 0x58, etc.)

## Why This Was Reverted
Previous fix (C7 offset=0) was reverted with comment:
  "C7も header を保持して class 判別を壊さないことを優先"
  (Prioritize preserving C7 header to avoid breaking class identification)

This reasoning was FLAWED because:
- Header IS restored during allocation (HAK_RET_ALLOC), not freelist ops
- Class identification at free time reads from ptr-1 = base[0] (after restoration)
- During freelist, header CAN be sacrificed (not visible to user)
- The revert CREATED the race condition by exposing base[1..8] to user

## Fix Applied

### 1. Revert C7 offset to 0 (tiny_nextptr.h:54)
```c
// BEFORE (BROKEN):
return (class_idx == 0) ? 0u : 1u;

// AFTER (FIXED):
return (class_idx == 0 || class_idx == 7) ? 0u : 1u;
```

### 2. Remove C7 header restoration in freelist (tiny_nextptr.h:84)
```c
// BEFORE (BROKEN):
if (class_idx != 0) {  // Restores header for all classes including C7

// AFTER (FIXED):
if (class_idx != 0 && class_idx != 7) {  // Only C1-C6 restore headers
```

### 3. Bonus: Remove premature slab release (tls_sll_drain_box.h:182-189)
Removed `shared_pool_release_slab()` call from drain path that could cause
use-after-free when blocks from same slab remain in TLS SLL.

## Why This Fix Works

**Memory Layout** (C7 in freelist):
```
Address:     base  base+1                  base+2048
            ┌────┬─────────────────────────────┐
Content:    │next│      (user accessible)      │
            └────┴─────────────────────────────┘
            8B ptr  ← USER CANNOT TOUCH base[0]
```

- **Next pointer at base[0]**: Protected from user modification ✓
- **User pointer at base+1**: User sees base[1..2047] only ✓
- **Header restored during allocation**: HAK_RET_ALLOC writes 0xa7 at base[0] ✓
- **Class ID preserved**: tiny_region_id_read_header(ptr) reads ptr-1 = base[0] ✓

## Verification Results

### Before Fix
- **Errors**: 33 TLS_SLL_POP_INVALID per 100K iterations (0.033%)
- **Performance**: 1.8M ops/s (corruption caused slow path fallback)
- **Symptoms**: Invalid TLS SLL heads (0xfe, 0xdb, 0x58, 0x80, 0xc2, etc.)

### After Fix
- **Errors**: 0 per 200K iterations 
- **Performance**: 10.0M ops/s (+456%!) 
- **C7 direct test**: 5.5M ops/s, 100K iterations, 0 errors 

## Files Modified
- core/tiny_nextptr.h (lines 49-54, 82-84) - C7 offset=0, no header restoration
- core/box/tls_sll_drain_box.h (lines 182-189) - Remove premature slab release

## Architectural Lesson

**Design Principle**: Freelist metadata MUST be stored in memory NOT accessible to user.

| Class | Offset | Next Storage | User Access | Result |
|-------|--------|--------------|-------------|--------|
| C0 | 0 | base[0] | base[1..7] | Safe ✓ |
| C1-C6 | 1 | base[1..8] | base[1..N] | Safe (header at base[0]) ✓ |
| C7 (broken) | 1 | base[1..8] | base[1..2047] | **CORRUPTED** ✗ |
| C7 (fixed) | 0 | base[0] | base[1..2047] | Safe ✓ |

🧹 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 23:42:43 +09:00
9b0d746407 Phase 3d-B: TLS Cache Merge - Unified g_tls_sll[] structure (+12-18% expected)
Merge separate g_tls_sll_head[] and g_tls_sll_count[] arrays into unified
TinyTLSSLL struct to improve L1D cache locality. Expected performance gain:
+12-18% from reducing cache line splits (2 loads → 1 load per operation).
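
A sketch of the unified node as described (16B aligned, head + count + pad); exact field widths are assumptions:

```c
#include <stdint.h>

/* Head and count share one 16B-aligned slot, so a push or pop
 * touches a single location instead of two separate arrays. */
typedef struct __attribute__((aligned(16))) TinyTLSSLL {
    void    *head;   /* singly linked freelist head */
    uint32_t count;  /* nodes currently in the list */
    uint32_t pad;    /* keep the struct at 16 bytes */
} TinyTLSSLL;

static __thread TinyTLSSLL g_tls_sll[8];  /* one per tiny size class */
```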

Changes:
- core/hakmem_tiny.h: Add TinyTLSSLL type (16B aligned, head+count+pad)
- core/hakmem_tiny.c: Replace separate arrays with g_tls_sll[8]
- core/box/tls_sll_box.h: Update Box API (13 sites) for unified access
- Updated 32+ files: All g_tls_sll_head[i] → g_tls_sll[i].head
- Updated 32+ files: All g_tls_sll_count[i] → g_tls_sll[i].count
- core/hakmem_tiny_integrity.h: Unified canary guards
- core/box/integrity_box.c: Simplified canary validation
- Makefile: Added core/box/tiny_sizeclass_hist_box.o to link

Build:  PASS (10K ops sanity test)
Warnings: Only pre-existing LTO type mismatches (unrelated)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 07:32:30 +09:00
82ba74933a Tiny Step 2: drain interval optimization (default 1024→2048)
Completed A/B testing for TLS SLL drain interval and implemented
optimal default value based on empirical results.

Changes:
- core/box/tls_sll_drain_box.h: Default drain interval 1024 → 2048
- TINY_DRAIN_INTERVAL_AB_REPORT.md: Complete A/B analysis report
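
A sketch of how the new default might be resolved against the HAKMEM_TINY_SLL_DRAIN_INTERVAL override (the ENV name appears in commit 88f3592ef6 below; the one-time caching is illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

/* Resolve once: ENV override wins, else the new default of 2048. */
static uint32_t tiny_drain_interval(void) {
    static uint32_t cached = 0;
    if (cached == 0) {
        const char *s = getenv("HAKMEM_TINY_SLL_DRAIN_INTERVAL");
        long v = s ? strtol(s, NULL, 10) : 0;
        cached = (v > 0) ? (uint32_t)v : 2048;  /* default was 1024 */
    }
    return cached;
}
```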

Results (100K iterations):
- 256B: 7.68M ops/s (+4.9% vs baseline 7.32M)
- 128B: 8.76M ops/s (+13.6% vs baseline 7.71M)
- Syscalls: Unchanged (2410) - drain affects frontend only

Key Findings:
- Size-dependent optimal intervals discovered (128B→512, 256B→2048)
- Prioritized 256B critical path (classify_ptr 3.65% in perf profile)
- No regression observed; both classes improved

Methodology:
- ENV-only testing (no code changes during A/B)
- Tested intervals: 512, 1024 (baseline), 2048
- Workload: bench_random_mixed_hakmem
- Metrics: Throughput, syscall count (strace -c)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 17:41:26 +09:00
dd613bc93a Drain optimization: Drain ALL blocks to maximize empty detection
Issue:
- Previous drain: only 32 blocks/trigger → slabs partially empty
- Shared pool SuperSlabs mix multiple classes (C0-C7)
- active_slabs only reaches 0 when ALL classes empty
- Result: superslab_free() rarely called, LRU cache unused

Fix:
- Change drain batch_size: 32 → 0 (drain all available)
- Added active_slabs logging in shared_pool_release_slab
- Maximizes chance of SuperSlab becoming completely empty
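
A sketch of the batch semantics after this change, assuming 0 is the sentinel for "drain everything"; the pop helper and signatures are guesses, while tiny_free_local_box is named in commit 88f3592ef6 below:

```c
#include <stdint.h>

extern void *tls_sll_pop(int class_idx);                  /* illustrative */
extern void tiny_free_local_box(int class_idx, void *p);  /* named below */

/* batch == 0 now means "drain all", maximizing the chance that a
 * slab's used count reaches 0 and the SuperSlab can be freed. */
static void tiny_tls_sll_drain_all(int class_idx, uint32_t batch) {
    uint32_t limit = (batch == 0) ? UINT32_MAX : batch;
    for (uint32_t i = 0; i < limit; i++) {
        void *base = tls_sll_pop(class_idx);
        if (!base) break;                     /* SLL exhausted */
        tiny_free_local_box(class_idx, base); /* decrements meta->used */
    }
}
```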

Performance Impact (ws=4096, 200K iterations):
- Before (batch=32): 5.9M ops/s
- After (batch=all): 6.1M ops/s (+3.4%)
- Baseline improvement: 563K → 6.1M ops/s (+980%!)

Known Issue:
- LRU cache still unused due to Shared Pool design
- SuperSlabs rarely become completely empty (multi-class mixing)
- Requires Shared Pool architecture optimization (Phase 12)

Next: Investigate Shared Pool optimization strategies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 07:55:51 +09:00
4ffdaae2fc Add empty slab detection to drain: call shared_pool_release_slab
Issue:
- Drain was detecting meta->used==0 but not releasing slabs
- Logic missing: shared_pool_release_slab() call after empty detection
- Result: SuperSlabs not freed, LRU cache not populated

Fix:
- Added shared_pool_release_slab() call when meta->used==0 (line 194)
- Mirrors logic in tiny_superslab_free.inc.h:223-236
- Empty slabs now released to shared pool
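
A sketch of the added empty check; shared_pool_release_slab() and the meta->used==0 condition are from the commit text, the surrounding types are illustrative:

```c
typedef struct { unsigned used; } SlabMeta;   /* illustrative */
typedef struct SuperSlab SuperSlab;           /* opaque here */

extern void shared_pool_release_slab(SuperSlab *ss, int slab_idx);

/* Called from the drain loop after a block returns to its slab
 * freelist; mirrors the logic in tiny_superslab_free.inc.h. */
static void drain_check_empty(SuperSlab *ss, int slab_idx, SlabMeta *meta) {
    if (meta->used == 0)
        shared_pool_release_slab(ss, slab_idx);  /* populate pool/LRU */
}
```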

Performance Impact (ws=4096, 200K iterations):
- Before (baseline): 563K ops/s
- After this fix: 5.9M ops/s (+950% improvement!)

Note: LRU cache still not populated (investigating next)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 07:13:00 +09:00
2ef28ee5ab Fix drain box compilation: Use pthread_self() directly
Issue:
- tiny_self_u32() is static inline, cannot be linked from drain box
- Link error: undefined reference to 'tiny_self_u32'

Fix:
- Use pthread_self() directly like hakmem_tiny_superslab.c:917
- Added <pthread.h> include
- Changed extern declaration from size_t to const size_t

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 07:10:46 +09:00
88f3592ef6 Option B: Periodic TLS SLL Drain - Fix Phase 9 LRU Architecture Issue
Root Cause:
- TLS SLL fast path (95-99% of frees) does NOT decrement meta->used
- Slabs never appear empty → SuperSlabs never freed → LRU never used
- Impact: 6,455 mmap/munmap calls per 200K iterations (74.8% time)
- Performance: -94% regression (9.38M → 563K ops/s)

Solution:
- Periodic drain every N frees (default: 1024) per size class
- Drain path: TLS SLL → slab freelist via tiny_free_local_box()
- This properly decrements meta->used and enables empty detection

Implementation:
1. core/box/tls_sll_drain_box.h - New drain box function
   - tiny_tls_sll_drain(): Pop from TLS SLL, push to slab freelist
   - tiny_tls_sll_try_drain(): Drain trigger with counter (sketched below)
   - ENV: HAKMEM_TINY_SLL_DRAIN_ENABLE=1/0 (default: 1)
   - ENV: HAKMEM_TINY_SLL_DRAIN_INTERVAL=N (default: 1024)
   - ENV: HAKMEM_TINY_SLL_DRAIN_DEBUG=1 (debug logging)

2. core/tiny_free_fast_v2.inc.h - Integrated drain trigger
   - Added drain call after successful TLS SLL push (line 145)
   - Cost: 2-3 cycles per free (counter increment + comparison)
   - Drain triggered every 1024 frees (0.1% overhead)
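
A minimal sketch of that trigger; the function names and the 1024 default come from the commit, while the per-class TLS counter and the hardcoded constant (really the ENV-tunable interval) are illustrative:

```c
#include <stdint.h>

extern void tiny_tls_sll_drain(int class_idx);  /* pop SLL -> slab freelist */

static __thread uint32_t g_drain_tick[8];  /* one counter per size class */

/* ~2-3 cycles per free: increment + compare; drain every 1024th push
 * (HAKMEM_TINY_SLL_DRAIN_INTERVAL, shown here as a constant). */
static inline void tiny_tls_sll_try_drain(int class_idx) {
    if (++g_drain_tick[class_idx] >= 1024) {
        g_drain_tick[class_idx] = 0;
        tiny_tls_sll_drain(class_idx);
    }
}
```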

Expected Impact:
- mmap/munmap: 6,455 → ~100 calls (-96-97%)
- Throughput: 563K → 8-10M ops/s (+1,300-1,700%)
- LRU utilization: 0% → >90% (functional)

Reference: PHASE9_LRU_ARCHITECTURE_ISSUE.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-14 07:09:18 +09:00