Current Task: Phase 7 + Pool TLS — Step 4.x Integration & Validation (Tiny P0: default ON)
Date: 2025-11-09 Status: 🚀 In Progress (Step 4.x) Priority: HIGH
🎯 Goal
In line with the Box theory, push forward "syscall thinning" and "single-point boundary consolidation" centered on Pool TLS, aiming for stable speedups on the Tiny/Mid/Larson workloads.
Why This Works
Phase 7 Task 3 achieved +180-280% improvement by pre-warming:
- Before: First allocation → TLS miss → SuperSlab refill (100+ cycles)
- After: First allocation → TLS hit (15 cycles, pre-populated cache)
Same bottleneck exists in Pool TLS:
- First 8KB allocation → TLS miss → Arena carve → mmap (1000+ cycles)
- Pre-warm eliminates this cold-start penalty
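A minimal sketch of the allocation fast path this relies on (an assumed shape, not hakmem's actual code; pool_tls_cache_pop() is a hypothetical stand-in, pool_refill_and_alloc() is the slow path referenced later in this document):
    #include <stddef.h>

    void* pool_tls_cache_pop(size_t size);      /* hypothetical: NULL when the TLS freelist is empty */
    void* pool_refill_and_alloc(size_t size);   /* slow path: arena carve, possibly mmap (1000+ cycles) */

    void* pool_alloc_sketch(size_t size) {
        void* p = pool_tls_cache_pop(size);     // pre-warmed: even the first call hits here (~15 cycles)
        if (p) return p;
        return pool_refill_and_alloc(size);     // cold start: TLS miss -> carve -> mmap
    }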
📊 Current Status (main progress through Step 4)
Implementation summary (Tiny + Pool TLS)
- ✅ Tiny 1024B special case (headerless) + lightweight adaptive class7 refill (cuts off the main source of excessive mmap calls)
- ✅ OS descent boundary (hak_os_map_boundary()): mmap calls consolidated into a single location
- ✅ Pool TLS Arena (1→2→4→8MB exponential growth, ENV-configurable): mmap calls consolidated into the arena
- ✅ Page Registry (chunk registration/lookup resolves owners)
- ✅ Remote Queue (for Pool, mutex-bucket version) + lightweight drain wired in before alloc
Tiny P0 (Batch Refill)
- ✅ Fixed a fatal P0 bug (meta->used += from_freelist was missing after the freelist→SLL bulk transfer)
- ✅ Fail-fast guards for linear carve (all paths: simple / general / TLS bump)
- ✅ Runtime A/B switch implemented (see the sketch below):
  - Default ON (HAKMEM_TINY_P0_ENABLE unset or ≠0)
  - Kill: HAKMEM_TINY_P0_DISABLE=1, drain toggle: HAKMEM_TINY_P0_NO_DRAIN=1, logging: HAKMEM_TINY_P0_LOG=1
- ✅ Benchmark: 100k×256B (1T) fastest with P0 ON (~2.76M ops/s), P0 OFF ~2.73M ops/s (stable)
- ⚠️ Known issue: a [P0_COUNTER_MISMATCH] warning (difference between active_delta and taken) still appears occasionally, but the SEGV is resolved (audit ongoing)
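The switch semantics above could be resolved roughly like this (an assumption about the parsing, not the actual implementation; only the ENV variable names come from this document):
    #include <stdlib.h>
    #include <string.h>

    // Hypothetical sketch: resolve the Tiny P0 runtime A/B switch once at init.
    static int tiny_p0_enabled(void) {
        const char* dis = getenv("HAKMEM_TINY_P0_DISABLE");
        if (dis && strcmp(dis, "1") == 0) return 0;   // explicit kill switch
        const char* en = getenv("HAKMEM_TINY_P0_ENABLE");
        if (en && strcmp(en, "0") == 0) return 0;     // ENABLE=0 turns it off
        return 1;                                     // default ON (unset or != 0)
    }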
NEW: Root cause of the P0 carve-loop bug and its fix (SEGV resolved)
- 🔴 Root cause: inside the P0 batch carve loop, superslab_refill(class_idx) makes the TLS point to a new SuperSlab, but only meta = tls->meta was updated without reloading tls → ss_active_add(tls->ss, batch) added to the old SuperSlab, corrupting the active counter and leading to the SEGV.
- 🛠 Fix: after superslab_refill(), reload tls = &g_tls_slabs[class_idx]; meta = tls->meta; (core/hakmem_tiny_refill_p0.inc.h). A toy illustration of the stale-pointer pattern follows below.
- 🧪 Verification: fixed-size 256B/1KB runs (200k iters) complete without reproducing the SEGV; active_delta=0 confirmed. RS improved slightly (0.8–0.9% → still a target for further optimization).
Details: docs/TINY_P0_BATCH_REFILL.md
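A self-contained toy illustration of the stale-pointer pattern behind this bug (simplified stand-in types, not hakmem's real structures; the only point is that the TLS slot must be re-read after a refill):
    #include <stdio.h>

    typedef struct { int active; } SuperSlab;
    typedef struct { SuperSlab* ss; } TlsSlot;

    static SuperSlab g_slab_old, g_slab_new;
    static TlsSlot   g_tls_slabs[1] = { { &g_slab_old } };

    static void superslab_refill(int class_idx) {
        g_tls_slabs[class_idx].ss = &g_slab_new;   // TLS now points at a NEW SuperSlab
    }

    int main(void) {
        TlsSlot*   tls   = &g_tls_slabs[0];
        SuperSlab* stale = tls->ss;                // captured BEFORE the refill

        superslab_refill(0);
        tls = &g_tls_slabs[0];                     // the fix: reload the TLS slot after refill ...
        tls->ss->active += 4;                      // ... so the batch is accounted on the new slab

        printf("old.active=%d new.active=%d (stale=%p)\n",
               g_slab_old.active, g_slab_new.active, (void*)stale);
        return 0;
    }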
🚀 Next Steps (actions)
- Integrate the Remote Queue drain with the Pool TLS refill boundary as well (on low water: drain→refill→bind); see the sketch after this list
  - Current state: drain at the pool_alloc entry, plus an extra low-water drain after pop, are already implemented
  - Addition: also attempt a drain on the refill path (right before the pool_refill_and_alloc call) and skip the refill when the drain succeeds
- Confirm the syscall reduction with strace (turn it into a tracked metric)
  - RandomMixed: 256 / 1024B, mmap/madvise/munmap counts for each (strace -c totals)
  - PoolTLS: compare the mmap/madvise/munmap reduction at 1T/4T (before vs. after introducing the Arena)
- Performance A/B (ENV: INIT/MAX/GROWTH) to find the tuning sweet spot
  - Evaluate combinations of HAKMEM_POOL_TLS_ARENA_MB_INIT, HAKMEM_POOL_TLS_ARENA_MB_MAX, HAKMEM_POOL_TLS_ARENA_GROWTH_LEVELS
  - Goal: reduce syscalls while keeping memory usage within an acceptable range
- Speed up the Remote Queue (next phase)
  - Start with mutex → split locks / lightweight spinning; add per-class queues if needed
  - Make the Page Registry O(1) (per-page table); move to per-arena IDs later
- Direct-fill optimization for Tiny 256B/1KB (performance)
  - Leverage the one-round-trip P0→FC direct-fill design and apply the following incrementally (A/B switch already in place)
    - Sweep the FC cap/batch limits (class5/7)
    - Tune the remote-drain thresholding (reduce frequency)
    - Enforce adopt-first (retry before mapping)
    - Light unrolling of the array fill / revisit branch hints (reduce branch misses)
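A hedged sketch of the proposed ordering at the refill boundary (pool_refill_and_alloc is referenced above; pool_remote_drain()/pool_tls_pop() are hypothetical stand-ins for the existing drain and TLS-pop helpers):
    #include <stddef.h>

    int   pool_remote_drain(size_t size);        /* hypothetical: returns number of blocks drained */
    void* pool_tls_pop(size_t size);             /* hypothetical: pop from TLS freelist, NULL if empty */
    void* pool_refill_and_alloc(size_t size);    /* existing slow path: arena carve / mmap */

    void* pool_alloc_slow_path(size_t size) {
        if (pool_remote_drain(size) > 0) {       // drain first ...
            void* p = pool_tls_pop(size);        // ... then bind from the recovered blocks
            if (p) return p;                     // refill (and its potential mmap) avoided
        }
        return pool_refill_and_alloc(size);      // otherwise fall back to refill / arena carve
    }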
NEW: Changes applied today and measurement snapshot (Ryzen 7 5825U)
- Changes (targeting Tiny 256B/1KB)
  - FastCache effective capacity now strictly enforced per class (tiny_fc_room/push_bulk use g_fast_cap[c])
  - Default caps revised: class5=96, class7=48 (overridable via ENV: HAKMEM_TINY_FAST_CAP_C{5,7})
  - Direct-FC drain threshold default raised 32→64 (ENV: HAKMEM_TINY_P0_DRAIN_THRESH)
  - class7 Direct-FC defaults to OFF (enable explicitly with HAKMEM_TINY_P0_DIRECT_FC_C7=1)
- Fixed-size benchmarks (release, 200k iters)
  - 256B: 4.49–4.54M ops/s, branch-miss ≈ 8.89% (improved from the previous ≈11%)
  - 1KB: currently SEGVs (reproduces even with Direct-FC OFF) → likely a remaining defect in the general P0 path
  - Results saved to: benchmarks/results/_ryzen7-5825U_fixed/
- Recommendation: for now, disable P0 for class7 via the A/B switch (HAKMEM_TINY_P0_DISABLE=1, or introduce a class7-only guard) and prioritize 256B tuning.
Challenge: Pool blocks are LARGE (8KB-52KB) vs Tiny (128B-1KB)
Memory Budget Analysis:
Phase 7 Tiny:
- 16 blocks × 1KB = 16KB per class
- 7 classes × 16KB = 112KB total ✅ Acceptable
Pool TLS (Naive, 16 blocks per class):
- 16 blocks × 8KB = 128KB (class 0)
- 16 blocks × 52KB = 832KB (class 6)
- Total across all 7 classes: ~3.5MB ❌ Too much!
Smart Strategy: Variable pre-warm counts based on expected usage
// Hot classes (8-24KB) - common in real workloads
Class 0 (8KB): 16 blocks = 128KB
Class 1 (16KB): 16 blocks = 256KB
Class 2 (24KB): 12 blocks = 288KB
// Warm classes (32-40KB)
Class 3 (32KB): 8 blocks = 256KB
Class 4 (40KB): 8 blocks = 320KB
// Cold classes (48-52KB) - rare
Class 5 (48KB): 4 blocks = 192KB
Class 6 (52KB): 4 blocks = 208KB
Total: ~1.6MB ✅ Acceptable
Rationale:
- Smaller classes are used more frequently (Pareto principle)
- Total memory: 1.6MB (reasonable for 8-52KB allocations)
- Covers most real-world workload patterns
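A quick arithmetic check of the figures above (counts and class sizes exactly as listed):
    #include <stdio.h>

    int main(void) {
        const int    counts[7]   = {16, 16, 12, 8, 8, 4, 4};
        const size_t sizes_kb[7] = {8, 16, 24, 32, 40, 48, 52};
        size_t total_kb = 0;
        for (int i = 0; i < 7; i++) total_kb += counts[i] * sizes_kb[i];
        printf("pre-warm footprint: %zu KB (~%.2f MB)\n", total_kb, total_kb / 1024.0);
        return 0;   // prints 1648 KB (~1.61 MB)
    }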
ENV (Arena-related)
# Initial chunk size in MB (default: 1)
export HAKMEM_POOL_TLS_ARENA_MB_INIT=2
# Maximum chunk size in MB (default: 8)
export HAKMEM_POOL_TLS_ARENA_MB_MAX=16
# Number of growth levels (default: 3 → 1→2→4→8MB)
export HAKMEM_POOL_TLS_ARENA_GROWTH_LEVELS=4
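How the exponential growth described above plays out (an illustrative helper assuming doubling per level with a clamp at the maximum; the real logic lives in the Pool TLS arena code):
    #include <stddef.h>

    static size_t arena_chunk_mb(size_t init_mb, size_t max_mb, int growth_levels, int level) {
        if (level > growth_levels) level = growth_levels;  // stop growing past the last level
        size_t mb = init_mb << level;                      // double per level: init, 2x, 4x, ...
        return mb > max_mb ? max_mb : mb;                  // clamp to HAKMEM_POOL_TLS_ARENA_MB_MAX
    }
    // Defaults (INIT=1, MAX=8, LEVELS=3): levels 0..3 give 1, 2, 4, 8 MB.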
Location: core/pool_tls.c
Code:
// Pre-warm counts optimized for memory usage
static const int PREWARM_COUNTS[POOL_SIZE_CLASSES] = {
    16, 16, 12,   // Hot: 8KB, 16KB, 24KB
    8, 8,         // Warm: 32KB, 40KB
    4, 4          // Cold: 48KB, 52KB
};

void pool_tls_prewarm(void) {
    for (int class_idx = 0; class_idx < POOL_SIZE_CLASSES; class_idx++) {
        int count = PREWARM_COUNTS[class_idx];
        size_t size = POOL_CLASS_SIZES[class_idx];
        // Allocate then immediately free to populate TLS cache
        for (int i = 0; i < count; i++) {
            void* ptr = pool_alloc(size);
            if (ptr) {
                pool_free(ptr); // Goes back to TLS freelist
            } else {
                // OOM during pre-warm (rare, but handle gracefully)
                break;
            }
        }
    }
}
Header Addition (core/pool_tls.h):
// Pre-warm TLS cache (call once at thread init)
void pool_tls_prewarm(void);
Quick check (recommended)
# PoolTLS
./build.sh bench_pool_tls_hakmem
./bench_pool_tls_hakmem 1 100000 256 42
./bench_pool_tls_hakmem 4 50000 256 42
# syscall measurement (check that the mmap/madvise/munmap totals went down)
strace -e trace=mmap,madvise,munmap -c ./bench_pool_tls_hakmem 1 100000 256 42
strace -e trace=mmap,madvise,munmap -c ./bench_random_mixed_hakmem 100000 256 42
strace -e trace=mmap,madvise,munmap -c ./bench_random_mixed_hakmem 100000 1024 42
Location: core/hakmem.c (or wherever Pool TLS init happens)
Code:
#ifdef HAKMEM_POOL_TLS_PHASE1
// Initialize Pool TLS
pool_thread_init();
// Pre-warm cache (Phase 1.5b optimization)
#ifdef HAKMEM_POOL_TLS_PREWARM
pool_tls_prewarm();
#endif
#endif
Makefile Addition:
# Pool TLS Phase 1.5b - Pre-warm optimization
ifeq ($(POOL_TLS_PREWARM),1)
CFLAGS += -DHAKMEM_POOL_TLS_PREWARM=1
endif
Update build.sh:
# POOL_TLS_PREWARM=1 is the new Phase 1.5b flag
make \
  POOL_TLS_PHASE1=1 \
  POOL_TLS_PREWARM=1 \
  HEADER_CLASSIDX=1 \
  AGGRESSIVE_INLINE=1 \
  PREWARM_TLS=1 \
  "${TARGET}"
Step 4: Build & Smoke Test ⏳ 10 min
# Build with pre-warm enabled
./build_pool_tls.sh bench_mid_large_mt_hakmem
# Quick smoke test
./dev_pool_tls.sh test
# Expected: No crashes, similar or better performance
Step 5: Benchmark ⏳ 15 min
# Full benchmark vs System malloc
./run_pool_bench.sh
# Expected results:
# Before (1.5a): 1.79M ops/s
# After (1.5b): 5-15M ops/s (+3-8x)
Additional benchmarks:
# Different sizes
./bench_mid_large_mt_hakmem 1 100000 256 42 # 8-32KB mixed
./bench_mid_large_mt_hakmem 1 100000 1024 42 # Larger workset
# Multi-threaded
./bench_mid_large_mt_hakmem 4 100000 256 42 # 4T
Step 6: Measure & Analyze ⏳ 10 min
Metrics to collect:
- ops/s improvement (target: +3-8x)
- Memory overhead (should be ~1.6MB per thread)
- Cold-start penalty reduction (first allocation latency)
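For the cold-start metric, a minimal timing harness could look like this (pool_alloc/pool_free are the Pool TLS API shown earlier; the harness itself is illustrative):
    #include <stdio.h>
    #include <stddef.h>
    #include <time.h>

    void* pool_alloc(size_t size);
    void  pool_free(void* ptr);

    static double ns_now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e9 + ts.tv_nsec;
    }

    int main(void) {
        double t0 = ns_now();
        void* p = pool_alloc(8 * 1024);          // first 8KB allocation: cold vs. pre-warmed
        double t1 = ns_now();
        printf("first pool_alloc: %.0f ns\n", t1 - t0);
        if (p) pool_free(p);
        return 0;
    }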
Success Criteria:
- ✅ No crashes or stability issues
- ✅ +200% or better improvement (5M ops/s minimum)
- ✅ Memory overhead < 2MB per thread
- ✅ No performance regression on small workloads
Step 7: Tune (if needed) ⏳ 15 min (optional)
If results are suboptimal, adjust pre-warm counts:
Too slow (< 5M ops/s):
- Increase hot class pre-warm (16 → 24)
- More aggressive: Pre-warm all classes to 16
Memory too high (> 2MB):
- Reduce cold class pre-warm (4 → 2)
- Lazy pre-warm: Only hot classes initially
Adaptive approach:
// Pre-warm based on runtime heuristics
void pool_tls_prewarm_adaptive(void) {
    // Start with minimal pre-warm; TODO: track usage patterns and adjust dynamically
    static const int MIN_PREWARM[7] = {8, 8, 4, 4, 2, 2, 2};
    for (int c = 0; c < POOL_SIZE_CLASSES; c++) {
        for (int i = 0; i < MIN_PREWARM[c]; i++) {
            void* p = pool_alloc(POOL_CLASS_SIZES[c]);
            if (!p) break;          // OOM during pre-warm: stop quietly
            pool_free(p);           // lands on the TLS freelist
        }
    }
}
📋 Implementation Checklist
Phase 1.5b: Pre-warm Optimization
- Step 1: Design pre-warm strategy (15 min)
  - Analyze memory budget
  - Decide pre-warm counts per class
  - Document rationale
- Step 2: Implement pool_tls_prewarm() (20 min)
  - Add PREWARM_COUNTS array
  - Write pre-warm function
  - Add to pool_tls.h
- Step 3: Integrate with init (10 min)
  - Add call to hakmem.c init
  - Add Makefile flag
  - Update build.sh
- Step 4: Build & smoke test (10 min)
  - Build with pre-warm enabled
  - Run dev_pool_tls.sh test
  - Verify no crashes
- Step 5: Benchmark (15 min)
  - Run run_pool_bench.sh
  - Test different sizes
  - Test multi-threaded
- Step 6: Measure & analyze (10 min)
  - Record performance improvement
  - Measure memory overhead
  - Validate success criteria
- Step 7: Tune (optional, 15 min)
  - Adjust pre-warm counts if needed
  - Re-benchmark
  - Document final configuration
Total Estimated Time: 1.5 hours (90 minutes)
🎯 Expected Outcomes
Performance Targets
Phase 1.5a (current): 1.79M ops/s
Phase 1.5b (target): 5-15M ops/s (+3-8x)
Conservative: 5M ops/s (+180%)
Expected: 8M ops/s (+350%)
Optimistic: 15M ops/s (+740%)
Comparison to Phase 7
Phase 7 Task 3 (Tiny):
Before: 21M → After: 59M ops/s (+181%)
Phase 1.5b (Pool):
Before: 1.79M → After: 5-15M ops/s (+180-740%)
Similar or better improvement expected!
Risk Assessment
- Technical Risk: LOW (proven pattern from Phase 7)
- Stability Risk: LOW (simple, non-invasive change)
- Memory Risk: LOW (1.6MB is negligible for Pool workloads)
- Complexity Risk: LOW (< 50 LOC change)
📁 Related Documents
- CLAUDE.md - Development history (Phase 1.5a documented)
- POOL_TLS_QUICKSTART.md - Quick start guide
- POOL_TLS_INVESTIGATION_FINAL.md - Phase 1.5a debugging journey
- PHASE7_TASK3_RESULTS.md - Pre-warm success pattern (Tiny)
🚀 Next Actions
NOW: Start Step 1 - Design pre-warm strategy
NEXT: Implement pool_tls_prewarm() function
THEN: Build, test, benchmark
Estimated Completion: 1.5 hours from start
Success Probability: 90% (proven technique)
Status: Ready to implement - awaiting user confirmation to proceed! 🚀