feat: Phase 7 + Phase 2 - Massive performance & stability improvements

Performance Achievements:
- Tiny allocations: +180-280% (21M → 59-70M ops/s random mixed)
- Single-thread: +24% (2.71M → 3.36M ops/s Larson)
- 4T stability: 0% → 95% (19/20 success rate)
- Overall: 91.3% of System malloc average (target was 40-55%) ✓

Phase 7 (Tasks 1-3): Core Optimizations
- Task 1: Header validation removal (Region-ID direct lookup)
- Task 2: Aggressive inline (TLS cache access optimization)
- Task 3: Pre-warm TLS cache (eliminate cold-start penalty; see sketch below)
  Result: +180-280% improvement, 85-146% of System malloc
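
A hedged sketch of what the Task 3 pre-warm can look like. sll_refill_small_from_ss appears in the diff below; NUM_TINY_CLASSES, the warm-up batch of 8, and the function name tiny_tls_prewarm are illustrative assumptions, not the shipped code.

    // Hypothetical pre-warm: fill each per-class TLS cache once per thread so
    // the first allocation does not pay the cold-start refill penalty.
    static __thread int g_tls_prewarmed = 0;

    static inline void tiny_tls_prewarm(void) {
        if (g_tls_prewarmed) return;
        for (int c = 0; c < NUM_TINY_CLASSES; c++) {
            // Pull a small batch from the SuperSlab layer into the TLS free list.
            sll_refill_small_from_ss(c, 8 /* assumed warm-up batch */);
        }
        g_tls_prewarmed = 1;
    }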

Critical Bug Fixes:
- Fix 64B allocation crash (size-to-class mapping must account for the header; see sketch below)
- Fix 4T wrapper recursion bugs (BUG #7, #8, #10, #11)
- Remove malloc fallback (30% → 50% stability)
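
One plausible reading of the 64B fix, as a sketch: the per-block header has to be counted before rounding the request up to a class. The 16-byte class step, the 1-byte header, and the helper name are assumptions for illustration.

    #define TINY_HEADER_SIZE 1   /* assumed in-block header size */

    // Illustrative: 16-byte class steps (16, 32, 48, 64, ...).
    static inline int tiny_size_to_class(size_t size) {
        size_t needed = size + TINY_HEADER_SIZE;   // header must fit in the block too
        return (int)((needed + 15) / 16) - 1;
    }
    // Without the "+ TINY_HEADER_SIZE", a 64B request maps to a 64B block and the
    // header write runs one byte past the block.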

Phase 2a: SuperSlab Dynamic Expansion (CRITICAL)
- Implement mimalloc-style chunk linking (see sketch below)
- Unlimited slab expansion (no more OOM at 32 slabs)
- Fix chunk initialization bug (bitmap=0x00000001 after expansion)
  Files: core/hakmem_tiny_superslab.c/h, core/superslab/superslab_types.h
  Result: 50% → 95% stability (19/20 4T success)
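
A minimal sketch of the chunk-linking idea; the real types live in core/superslab/superslab_types.h, and the field names, bitmap convention, and mmap_new_chunk helper here are assumptions.

    typedef struct superslab_chunk {
        struct superslab_chunk *next;   // singly linked list of chunks
        uint32_t slab_bitmap;           // one bit per slab in this chunk (convention assumed)
        /* ... per-slab metadata in the real structure ... */
    } superslab_chunk_t;

    // Hypothetical expansion path: when every slab in every chunk is in use,
    // map a fresh chunk, initialize its bitmap, and push it onto the list,
    // so the allocator is no longer capped at a fixed 32-slab array.
    static superslab_chunk_t *superslab_expand(superslab_chunk_t **head) {
        superslab_chunk_t *c = mmap_new_chunk();   // assumed mmap-backed helper
        if (!c) return NULL;
        c->slab_bitmap = 0;   // initialize before use (exact convention is the project's)
        c->next = *head;      // link in front of the existing chunks
        *head = c;
        return c;
    }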

Phase 2b: TLS Cache Adaptive Sizing
- Dynamic capacity: 16-2048 slots based on usage
- High-water mark tracking + exponential growth/shrink (see sketch below)
- Expected: +3-10% performance, -30-50% memory
  Files: core/tiny_adaptive_sizing.c/h (new)
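
A condensed sketch of the growth side of this policy. get_available_capacity and track_refill_for_adaptation are the names used in the diff below, but their bodies, the grow trigger of 4 refills, the 32-class array, and the tls_cache_count helper are assumptions.

    #define TLS_CAP_MIN 16
    #define TLS_CAP_MAX 2048

    typedef struct {
        int capacity;     // current slot budget for this class
        int high_water;   // most cached slots seen since the last shrink check
        int refills;      // refill events since the last adaptation
    } tls_adapt_t;

    static __thread tls_adapt_t g_adapt[32];   // capacities start at TLS_CAP_MIN via an init path not shown

    static inline int get_available_capacity(int class_idx) {
        // tls_cache_count() is an assumed helper returning currently cached slots.
        return g_adapt[class_idx].capacity - tls_cache_count(class_idx);
    }

    static inline void track_refill_for_adaptation(int class_idx) {
        tls_adapt_t *a = &g_adapt[class_idx];
        if (++a->refills >= 4 && a->capacity < TLS_CAP_MAX) {
            a->capacity *= 2;    // hot class: exponential growth toward 2048
            a->refills = 0;
        }
        // The shrink path (decay toward TLS_CAP_MIN when high_water stays low)
        // is omitted here for brevity.
    }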

Phase 2c: BigCache Dynamic Hash Table
- Migrate from fixed 256×8 array to dynamic hash table
- Auto-resize by doubling: 256 → 512 → 1024 → ... up to 65,536 buckets
- Improved hash function (FNV-1a) + collision chaining (see sketch below)
  Files: core/hakmem_bigcache.c/h
  Expected: +10-20% cache hit rate
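
The FNV-1a constants below are the standard 64-bit ones; the entry/table layout and the size-only key are illustrative, since the actual BigCache key is not shown in this commit message.

    #include <stdint.h>
    #include <stddef.h>

    // Standard 64-bit FNV-1a over the bytes of a key.
    static inline uint64_t fnv1a_64(const void *data, size_t len) {
        const unsigned char *p = (const unsigned char *)data;
        uint64_t h = 0xcbf29ce484222325ULL;        // FNV-1a 64-bit offset basis
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 0x100000001b3ULL;                 // FNV-1a 64-bit prime
        }
        return h;
    }

    typedef struct bigcache_entry {
        size_t size;                    // cached block size (illustrative key)
        void  *ptr;
        struct bigcache_entry *next;    // collision chain within a bucket
    } bigcache_entry_t;

    typedef struct {
        bigcache_entry_t **buckets;
        size_t bucket_count;            // power of two: 256, 512, ..., 65536
    } bigcache_table_t;

    // Illustrative lookup: power-of-two bucket count lets a mask replace modulo.
    static inline bigcache_entry_t *bigcache_find(bigcache_table_t *t, size_t size) {
        uint64_t h = fnv1a_64(&size, sizeof size);
        bigcache_entry_t *e = t->buckets[h & (t->bucket_count - 1)];
        while (e && e->size != size) e = e->next;   // walk the chain on collision
        return e;
    }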

Design Flaws Analysis:
- Identified 6 components with fixed-capacity bottlenecks
- SuperSlab (CRITICAL), TLS Cache (HIGH), BigCache/L2.5 (MEDIUM)
- Report: DESIGN_FLAWS_ANALYSIS.md (11 chapters)

Documentation:
- 13 comprehensive reports (PHASE*.md, DESIGN_FLAWS*.md)
- Implementation guides, test results, production readiness
- Bug fix reports, root cause analysis

Build System:
- Makefile: phase7 targets, PREWARM_TLS flag
- Auto dependency generation (-MMD -MP) for .inc files

Known Issues:
- 4T stability: 19/20 (95%) - investigating the remaining failure to reach 100%
- L2.5 Pool dynamic sharding: design only (needs 2-3 days of integration work)

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Moe Charm (CI)
Date: 2025-11-08 17:08:00 +09:00
Commit: 707056b765 (parent 7975e243ee)
36 changed files with 8488 additions and 483 deletions

@@ -7,12 +7,17 @@
// Architecture: SFC (Layer 0, 128-256 slots) → SLL (Layer 1, unlimited) → SuperSlab (Layer 2+)
// Cascade Refill: SFC ← SLL (one-way, safe)
// Goal: +200% performance (4.19M → 12M+ ops/s)
//
// Phase 2b: Adaptive TLS Cache Sizing
// Hot classes grow to 2048 slots, cold classes shrink to 16 slots
// Expected: +3-10% performance, -30-50% TLS cache memory overhead
#pragma once
#include "tiny_atomic.h"
#include "hakmem_tiny.h"
#include "tiny_route.h"
#include "tiny_alloc_fast_sfc.inc.h" // Box 5-NEW: SFC Layer
#include "tiny_region_id.h" // Phase 7: Header-based class_idx lookup
#include "tiny_adaptive_sizing.h" // Phase 2b: Adaptive sizing
#ifdef HAKMEM_TINY_FRONT_GATE_BOX
#include "box/front_gate_box.h"
#endif
@@ -320,6 +325,13 @@ static inline int tiny_alloc_fast_refill(int class_idx) {
    uint64_t start = tiny_profile_enabled() ? tiny_fast_rdtsc() : 0;
#endif
    // Phase 2b: Check available capacity before refill
    int available_capacity = get_available_capacity(class_idx);
    if (available_capacity <= 0) {
        // Cache is full, don't refill
        return 0;
    }
    // Phase 7 Task 3: Simplified refill count (cached per-class in TLS)
    // Previous: Complex precedence logic on every miss (5-10 cycles overhead)
    // Now: Simple TLS cache lookup (1-2 cycles)
@@ -348,6 +360,11 @@ static inline int tiny_alloc_fast_refill(int class_idx) {
        cnt = v;
    }
    // Phase 2b: Clamp refill count to available capacity
    if (cnt > available_capacity) {
        cnt = available_capacity;
    }
#if HAKMEM_DEBUG_COUNTERS
    // Track refill calls (compile-time gated)
    g_rf_total_calls[class_idx]++;
@@ -358,6 +375,11 @@ static inline int tiny_alloc_fast_refill(int class_idx) {
    // Note: g_rf_hit_slab counter is incremented inside sll_refill_small_from_ss()
    int refilled = sll_refill_small_from_ss(class_idx, cnt);
    // Phase 2b: Track refill and adapt cache size
    if (refilled > 0) {
        track_refill_for_adaptation(class_idx);
    }
    // Box 5-NEW: Cascade refill SFC ← SLL (if SFC enabled)
    // This happens AFTER SuperSlab → SLL refill, so SLL has blocks
    static __thread int sfc_check_done_refill = 0;
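
Putting the three Phase 2b touch points in these hunks together, the refill path now reads roughly as below; compute_refill_count stands in for the Phase 7 cached per-class count lookup and is not the real function name.

    static inline int tiny_alloc_fast_refill_sketch(int class_idx) {
        int avail = get_available_capacity(class_idx);   // Phase 2b gate
        if (avail <= 0) return 0;                        // cache full: skip refill
        int cnt = compute_refill_count(class_idx);       // Phase 7: cached count (assumed name)
        if (cnt > avail) cnt = avail;                    // Phase 2b: clamp to capacity
        int refilled = sll_refill_small_from_ss(class_idx, cnt);
        if (refilled > 0) track_refill_for_adaptation(class_idx);
        // The SFC <- SLL cascade refill follows here in the real code.
        return refilled;
    }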