P-Tier + Tiny Route Policy: Aggressive Superslab Management + Safe Routing

## Phase 1: Utilization-Aware Superslab Tiering (Plan B, implemented)

- Add ss_tier_box.h: Classify SuperSlabs into HOT/DRAINING/FREE by utilization (see the sketch below)
  - HOT (>25%): Accept new allocations
  - DRAINING (≤25%, still holding live blocks): Drain only, no new allocs
  - FREE (0%): Ready for eager munmap
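
A minimal sketch of the classification rule, assuming utilization is active blocks over capacity; the type and function names here are illustrative, not the actual ss_tier_box.h API:

```c
#include <stdint.h>

/* Illustrative tier type; the thresholds follow the list above. */
typedef enum { SS_TIER_HOT, SS_TIER_DRAINING, SS_TIER_FREE } SsTier;

static inline SsTier ss_tier_classify(uint32_t active_blocks, uint32_t capacity) {
    if (active_blocks == 0)
        return SS_TIER_FREE;                      /* 0%: eligible for eager munmap */
    /* active/capacity > 25%, computed without floating point */
    if ((uint64_t)active_blocks * 4 > (uint64_t)capacity)
        return SS_TIER_HOT;                       /* >25%: accepts new allocations */
    return SS_TIER_DRAINING;                      /* (0%, 25%]: drain only */
}
```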

- Enhanced shared_pool_release_slab() (see the sketch below):
  - Check for a tier transition after each slab release
  - If the tier reaches FREE: force remaining slots to EMPTY and call superslab_free() immediately
  - Bypasses the LRU cache so accumulating DRAINING SuperSlabs can no longer bloat the registry
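
A sketch of that eager-free path: superslab_free() is named above, while the accessor and the force-empty helper are hypothetical stand-ins for the real hakmem API:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the real hakmem types and helpers. */
typedef struct SuperSlab SuperSlab;
extern uint32_t ss_total_active_blocks(SuperSlab* ss);       /* assumed accessor */
extern void     superslab_force_slots_empty(SuperSlab* ss);  /* assumed helper */
extern void     superslab_free(SuperSlab* ss);               /* named in the text above */

/* Called after each slab release: if the SuperSlab just reached the FREE tier,
 * bypass the LRU cache and unmap immediately so its registry slot is reclaimed. */
static void sp_release_check_tier(SuperSlab* ss) {
    if (ss_total_active_blocks(ss) == 0) {   /* tier transition -> FREE */
        superslab_force_slots_empty(ss);     /* force remaining slots to EMPTY */
        superslab_free(ss);                  /* eager munmap, no LRU detour */
    }
}
```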

- Test results (bench_random_mixed_hakmem):
  - 1M iterations:  ~1.03M ops/s (previously passed)
  - 10M iterations:  ~1.15M ops/s (previously: registry full error)
  - 50M iterations:  ~1.08M ops/s (stress test)

## Phase 2: Tiny Front Routing Policy (new Box)

- Add tiny_route_box.h/c: Single 8-byte table for class→routing decisions (see the sketch below)
  - ROUTE_TINY_ONLY: Tiny front exclusive (no fallback)
  - ROUTE_TINY_FIRST: Try Tiny first, fall back to Pool on failure
  - ROUTE_POOL_ONLY: Skip Tiny entirely
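
A sketch of how such a table fits in 8 bytes, assuming 8 size classes and one route byte per class (names are illustrative):

```c
#include <stdint.h>

/* One route byte per size class; 8 classes pack into a single 8-byte word. */
enum { ROUTE_TINY_ONLY = 0, ROUTE_TINY_FIRST = 1, ROUTE_POOL_ONLY = 2 };

static uint8_t g_tiny_route[8];  /* the entire routing policy: 8 bytes */

static inline uint8_t tiny_route_for_class(int class_idx) {
    return g_tiny_route[class_idx & 7];  /* single table read on the hot path */
}
```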

- Profiles selected via the HAKMEM_TINY_PROFILE ENV variable (parsing sketch after this list):
  - "hot": C0-C3=TINY_ONLY, C4-C6=TINY_FIRST, C7=POOL_ONLY
  - "conservative" (default): All TINY_FIRST
  - "off": All POOL_ONLY (disable Tiny)
  - "full": All TINY_ONLY (microbench mode)

- A/B test results (ws=256, 100k ops random_mixed):
  - Default (conservative): ~2.90M ops/s
  - hot: ~2.65M ops/s (slower than the default)
  - off: ~2.86M ops/s
  - full: ~2.98M ops/s (best, by a small margin)

## Design Rationale

### Registry Pressure Fix (Plan B)
- Problem: DRAINING-tier SuperSlabs occupied registry slots indefinitely
- Solution: when total_active_blocks reaches 0, free the SuperSlab immediately to release its registry slot
- Result: no more "registry full" errors under stress

### Routing Policy Box (new)
- Problem: Tiny front routing decisions were scattered across ENV checks and ad-hoc branches
- Solution: centralize routing in a single table and select profiles via ENV
- Benefit: safe A/B testing without touching hot-path code (usage sketch below)
- Future: Integrate with RSS budget/learning layers for dynamic profile switching
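
As a usage illustration, the hot path then collapses to one table read plus a switch; `tiny_alloc()` and `pool_alloc()` are hypothetical stand-ins for the real entry points:

```c
#include <stddef.h>

extern void* tiny_alloc(int class_idx, size_t size);  /* assumed entry point */
extern void* pool_alloc(int class_idx, size_t size);  /* assumed entry point */

static void* alloc_dispatch(int class_idx, size_t size) {
    switch (tiny_route_for_class(class_idx)) {  /* single 1-byte policy read */
        case ROUTE_TINY_ONLY: return tiny_alloc(class_idx, size);
        case ROUTE_POOL_ONLY: return pool_alloc(class_idx, size);
        default: {                              /* ROUTE_TINY_FIRST */
            void* p = tiny_alloc(class_idx, size);
            return p ? p : pool_alloc(class_idx, size);  /* Pool fallback */
        }
    }
}
```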

## Next Steps (Performance Optimization)
- Profile Tiny front internals (TLS SLL, FastCache, Superslab backend latency)
- Identify bottleneck between current ~2.9M ops/s and mimalloc ~100M ops/s
- Consider:
  - Reduce shared pool lock contention
  - Optimize unified cache hit rate
  - Streamline Superslab carving logic

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

## Diff Excerpt: P-Tier Filtering in the Shared-Pool Allocation Path

```diff
@@ -6,6 +6,7 @@
 #include "box/pagefault_telemetry_box.h"
 #include "box/tls_sll_drain_box.h"
 #include "box/tls_slab_reuse_guard_box.h"
+#include "box/ss_tier_box.h"  // P-Tier: Tier filtering support
 #include "hakmem_policy.h"
 #include "hakmem_env_cache.h"  // Priority-2: ENV cache
@@ -41,6 +42,8 @@ sp_acquire_from_empty_scan(int class_idx, SuperSlab** ss_out, int* slab_idx_out,
     for (int i = 0; i < scan_limit; i++) {
         SuperSlab* ss = g_super_reg_by_class[class_idx][i];
         if (!(ss && ss->magic == SUPERSLAB_MAGIC)) continue;
+        // P-Tier: Skip DRAINING tier SuperSlabs
+        if (!ss_tier_is_hot(ss)) continue;
         if (ss->empty_count == 0) continue;  // No EMPTY slabs in this SS
         uint32_t mask = ss->empty_mask;
@@ -151,6 +154,16 @@ stage1_retry_after_tension_drain:
     SuperSlab* ss_guard = atomic_load_explicit(&reuse_meta->ss, memory_order_relaxed);
     if (ss_guard) {
         tiny_tls_slab_reuse_guard(ss_guard);
+        // P-Tier: Skip DRAINING tier SuperSlabs (reinsert to freelist and fallback)
+        if (!ss_tier_is_hot(ss_guard)) {
+            // DRAINING SuperSlab - skip this slot and fall through to Stage 2
+            if (g_lock_stats_enabled == 1) {
+                atomic_fetch_add(&g_lock_release_count, 1);
+            }
+            pthread_mutex_unlock(&g_shared_pool.alloc_lock);
+            goto stage2_fallback;
+        }
     }
     // Activate slot under mutex (slot state transition requires protection)
@@ -221,6 +234,13 @@ stage2_fallback:
     {
         SuperSlab* hint_ss = g_shared_pool.class_hints[class_idx];
         if (__builtin_expect(hint_ss != NULL, 1)) {
+            // P-Tier: Skip DRAINING tier SuperSlabs
+            if (!ss_tier_is_hot(hint_ss)) {
+                // Clear stale hint pointing to DRAINING SuperSlab
+                g_shared_pool.class_hints[class_idx] = NULL;
+                goto stage2_scan;
+            }
             // P0 Optimization: O(1) lookup via cached pointer (avoids metadata scan)
             SharedSSMeta* hint_meta = hint_ss->shared_meta;
             if (__builtin_expect(hint_meta != NULL, 1)) {
@@ -277,6 +297,7 @@ stage2_fallback:
         }
     }
+stage2_scan:
     // P0-5: Lock-free atomic CAS claiming (no mutex needed for slot state transition!)
     // RACE FIX: Read ss_meta_count atomically (now properly declared as _Atomic)
     // No cast needed! memory_order_acquire synchronizes with release in sp_meta_find_or_create
@@ -288,10 +309,23 @@ stage2_fallback:
     for (uint32_t i = 0; i < meta_count; i++) {
         SharedSSMeta* meta = &g_shared_pool.ss_metadata[i];
+        // RACE FIX: Load SuperSlab pointer atomically BEFORE claiming
+        // Use memory_order_acquire to synchronize with release in sp_meta_find_or_create
+        SuperSlab* ss_preflight = atomic_load_explicit(&meta->ss, memory_order_acquire);
+        if (!ss_preflight) {
+            // SuperSlab was freed - skip this entry
+            continue;
+        }
+        // P-Tier: Skip DRAINING tier SuperSlabs
+        if (!ss_tier_is_hot(ss_preflight)) {
+            continue;
+        }
         // Try lock-free claiming (UNUSED → ACTIVE via CAS)
         int claimed_idx = sp_slot_claim_lockfree(meta, class_idx);
         if (claimed_idx >= 0) {
-            // RACE FIX: Load SuperSlab pointer atomically (critical for lock-free Stage 2)
+            // RACE FIX: Load SuperSlab pointer atomically again after claiming
+            // Use memory_order_acquire to synchronize with release in sp_meta_find_or_create
             SuperSlab* ss = atomic_load_explicit(&meta->ss, memory_order_acquire);
             if (!ss) {
```