hakmem/scripts/run_mixed_10_cleanenv.sh

#!/usr/bin/env bash
set -euo pipefail
# 10-run Mixed benchmark with explicit clean ENV for frozen/research knobs.
# Purpose: avoid drift from the fact that bench_setenv_default() does not override already-exported ENV.
profile=${HAKMEM_PROFILE:-MIXED_TINYV3_C7_SAFE}
iters=${ITERS:-20000000}
ws=${WS:-400}
runs=${RUNS:-10}
bin=${BENCH_BIN:-./bench_random_mixed_hakmem}
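# Example invocations (a sketch; binary paths and values below are illustrative, adjust to your checkout):
#   ./scripts/run_mixed_10_cleanenv.sh
#       -> SAFE profile, 20M iters, ws=400, 10 runs, ./bench_random_mixed_hakmem
#   HAKMEM_PROFILE=MIXED_TINYV3_C7_BALANCED RUNS=20 ./scripts/run_mixed_10_cleanenv.sh
#       -> Balanced profile (LEAN on, DECOMMIT OFF), 20 runs
#   BENCH_BIN=./build/bench_random_mixed_hakmem ITERS=200000000 ./scripts/run_mixed_10_cleanenv.sh
#       -> custom binary location and iteration count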
# Keep profiles reproducible even if user exported env vars.
case "${profile}" in
MIXED_TINYV3_C7_BALANCED)
export HAKMEM_SS_MEM_LEAN=1
export HAKMEM_SS_MEM_LEAN_DECOMMIT=OFF
export HAKMEM_SS_MEM_LEAN_TARGET_MB=10
;;
*)
export HAKMEM_SS_MEM_LEAN=0
export HAKMEM_SS_MEM_LEAN_DECOMMIT=OFF
export HAKMEM_SS_MEM_LEAN_TARGET_MB=10
;;
esac
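# Optional visibility hook (a minimal sketch, left commented out; variable names match the exports above).
# Uncomment to record the effective memory-lean knobs at the top of each log:
#   echo "profile=${profile} HAKMEM_SS_MEM_LEAN=${HAKMEM_SS_MEM_LEAN} DECOMMIT=${HAKMEM_SS_MEM_LEAN_DECOMMIT} TARGET_MB=${HAKMEM_SS_MEM_LEAN_TARGET_MB}"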
# Default known research knobs to OFF (explicitly exported values still win) to avoid accidental carry-over.
export HAKMEM_TINY_HEADER_WRITE_ONCE=${HAKMEM_TINY_HEADER_WRITE_ONCE:-0}
export HAKMEM_TINY_C7_PRESERVE_HEADER=${HAKMEM_TINY_C7_PRESERVE_HEADER:-0}
export HAKMEM_TINY_TCACHE=${HAKMEM_TINY_TCACHE:-0}
export HAKMEM_TINY_TCACHE_CAP=${HAKMEM_TINY_TCACHE_CAP:-64}
export HAKMEM_MALLOC_TINY_DIRECT=${HAKMEM_MALLOC_TINY_DIRECT:-0}
export HAKMEM_FRONT_FASTLANE_ALLOC_LEGACY_DIRECT=${HAKMEM_FRONT_FASTLANE_ALLOC_LEGACY_DIRECT:-0}
export HAKMEM_FORCE_LIBC_ALLOC=${HAKMEM_FORCE_LIBC_ALLOC:-0}
export HAKMEM_ENV_SNAPSHOT_SHAPE=${HAKMEM_ENV_SNAPSHOT_SHAPE:-0}
Phase 59b & 61: Speed-first Rebase + C7 ULTRA Header-Light Optimization Phase 59b: Speed-first Mode Baseline Rebase - Rebase on MIXED_TINYV3_C7_SAFE profile (Speed-first, no prewarm suppression) - hakmem: 58.478 M ops/s (CV 2.52%) - mimalloc: 120.979 M ops/s (CV 0.90%) - Ratio: 48.34% of mimalloc (down from 49.13% Balanced mode in Phase 59) - Reason for difference: Profile selection (Speed-first vs Balanced) and mimalloc environment variance - Status: COMPLETE (measurement-only, zero code changes) Phase 61: C7 ULTRA Header-Light Optimization Attempt - Objective: Skip header write on C7 ULTRA alloc hit (write only on refill) - Implementation: ENV gate HAKMEM_TINY_C7_ULTRA_HEADER_LIGHT (default OFF) - Result: +0.31% (NEUTRAL, below +1.0% GO threshold) - Baseline: 59.543 M ops/s (CV 1.53%) - Treatment: 59.729 M ops/s (CV 2.66%) - Root cause analysis: - tiny_region_id_write_header only 2.32% of time (lower than Phase 42 estimate 4.56%) - Header-light mode adds branch to hot path, negating write savings - Mixed workload dilutes C7-specific optimization effectiveness - Variance increased due to branch prediction variability - Decision: Kept as research box with ENV gate (default OFF) - Lesson: Workload-specific optimizations need careful verification with full workloads Updated Documentation: - PHASE59B_SPEED_FIRST_REBASE_RESULTS.md: Full measurement results and analysis - PHASE61_C7_ULTRA_HEADER_LIGHT_RESULTS.md: A/B test results and root cause analysis - PHASE61_C7_ULTRA_HEADER_LIGHT_IMPLEMENTATION.md: Implementation details and design - CURRENT_TASK.md: Updated status and next phase planning (Phase 62) - PERFORMANCE_TARGETS_SCORECARD.md: Updated baseline and M1 milestone status M1 (50%) Milestone Status: - Current: 48.34% (Speed-first profile) - Gap: -1.66% (within measurement noise) - Profile recommendation: Speed-first as canonical default for throughput focus 🤖 Generated with Claude Code Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2025-12-17 16:25:26 +09:00
export HAKMEM_TINY_C7_ULTRA_HEADER_LIGHT=${HAKMEM_TINY_C7_ULTRA_HEADER_LIGHT:-0}
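# A/B opt-in example (hypothetical single-knob run; the research knobs above default to OFF,
# and an explicitly exported value is respected by the ${VAR:-0} defaults):
#   HAKMEM_TINY_TCACHE=1 HAKMEM_TINY_TCACHE_CAP=64 ./scripts/run_mixed_10_cleanenv.sh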
# NOTE: Phase 19-1b is promoted in presets. Keep cleanenv aligned by default.
export HAKMEM_FASTLANE_DIRECT=${HAKMEM_FASTLANE_DIRECT:-1}
# NOTE: Phase 9/10 are promoted (bench_profile defaults to 1). Keep cleanenv aligned by default.
export HAKMEM_FREE_TINY_FAST_MONO_DUALHOT=${HAKMEM_FREE_TINY_FAST_MONO_DUALHOT:-1}
export HAKMEM_FREE_TINY_FAST_MONO_LEGACY_DIRECT=${HAKMEM_FREE_TINY_FAST_MONO_LEGACY_DIRECT:-1}
# NOTE: Phase 69-1 winner (Warm Pool Size=16, +3.26% Strong GO, ENV-only)
export HAKMEM_WARM_POOL_SIZE=${HAKMEM_WARM_POOL_SIZE:-16}
for i in $(seq 1 "${runs}"); do
echo "=== Run ${i}/${runs} ==="
HAKMEM_PROFILE="${profile}" "${bin}" "${iters}" "${ws}" 1 2>&1 | rg "Throughput" || true
done
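
# Capture example (a sketch; the log name is illustrative): tee the full output,
# then re-filter the per-run Throughput lines for later aggregation.
#   ./scripts/run_mixed_10_cleanenv.sh | tee mixed_10run.log
#   rg "Throughput" mixed_10run.log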