hakmem/scripts/analyze_epoch_tail_csv.py
Moe Charm (CI) 7adbcdfcb6 Phase 54-60: Memory-Lean mode, Balanced mode stabilization, M1 (50%) achievement
## Summary

Completed Phase 54-60 optimization work:

**Phase 54-56: Memory-Lean mode (LEAN+OFF prewarm suppression)**
- Implemented ss_mem_lean_env_box.h with ENV gates
- Balanced mode (LEAN+OFF) promoted to production default
- Result: +1.2% throughput, better stability, zero syscall overhead
- Added to bench_profile.h: MIXED_TINYV3_C7_BALANCED preset

**Phase 57: 60-min soak finalization**
- Balanced mode: 60-min soak, RSS drift 0%, CV 5.38%
- Speed-first mode: 60-min soak, RSS drift 0%, CV 1.58%
- Syscall budget: 1.25e-7/op (800× under target)
- Status: PRODUCTION-READY
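
The "800× under target" figure implies a budget target of about 1e-4 syscalls/op; that target is an inference from the two numbers above, not stated explicitly:

```python
# Inferred arithmetic: measured syscall rate times the stated 800x headroom.
measured_syscalls_per_op = 1.25e-7   # Phase 57 soak measurement
headroom = 800                       # "800x under target"
implied_target = measured_syscalls_per_op * headroom
print(implied_target)                # ~1e-04 syscalls/op
```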

**Phase 59: 50% recovery baseline rebase**
- hakmem FAST (Balanced): 59.184M ops/s, CV 1.31%
- mimalloc: 120.466M ops/s, CV 3.50%
- Ratio: 49.13% (M1 ACHIEVED within statistical noise)
- Superior stability: 2.68× lower CV than mimalloc
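
As a sanity check, the ratio follows directly from the two throughput figures above (a quick sketch, not part of the benchmark harness):

```python
# Throughputs from the Phase 59 rebase (ops/s).
hakmem = 59.184e6     # hakmem FAST (Balanced)
mimalloc = 120.466e6  # mimalloc baseline
ratio = hakmem / mimalloc * 100
print(f"{ratio:.2f}%")  # 49.13%
```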

**Phase 60: Alloc pass-down SSOT (NO-GO)**
- Implemented alloc_passdown_ssot_env_box.h
- Modified malloc_tiny_fast.h for SSOT pattern
- Result: -0.46% (NO-GO)
- Key lesson: SSOT not applicable where early-exit already optimized

## Key Metrics

- Performance: 49.13% of mimalloc (M1 effectively achieved)
- Stability: CV 1.31% (superior to mimalloc 3.50%)
- Syscall budget: 1.25e-7/op (excellent)
- RSS: 33MB stable, 0% drift over 60 minutes
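
CV throughout this report is the coefficient of variation, stdev/mean, computed over per-epoch throughput samples as in scripts/analyze_epoch_tail_csv.py. A minimal sketch over hypothetical epoch samples:

```python
import math

def cv(samples):
    """Coefficient of variation: population stdev divided by mean."""
    mean = sum(samples) / len(samples)
    var = sum((v - mean) ** 2 for v in samples) / len(samples)
    return math.sqrt(var) / mean

# Hypothetical per-epoch throughputs (ops/s); real runs use the soak CSV.
epochs = [59.0e6, 59.5e6, 58.8e6, 59.2e6]
print(f"CV = {cv(epochs) * 100:.2f}%")
```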

## Files Added/Modified

New boxes:
- core/box/ss_mem_lean_env_box.h
- core/box/ss_release_policy_box.{h,c}
- core/box/alloc_passdown_ssot_env_box.h

Scripts:
- scripts/soak_mixed_single_process.sh
- scripts/analyze_epoch_tail_csv.py
- scripts/soak_mixed_rss.sh
- scripts/calculate_percentiles.py
- scripts/analyze_soak.py

Documentation: Phase 40-60 analysis documents
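
The epoch-tail analysis in scripts/analyze_epoch_tail_csv.py computes latency percentiles from per-epoch latencies rather than by inverting throughput percentiles. A toy illustration of why the inversion goes wrong (using max as a stand-in for the top percentile):

```python
# Toy data: 99 fast epochs at 100M ops/s, one slow epoch at 10M ops/s.
thr = [100e6] * 99 + [10e6]
lat = [1e9 / t for t in thr]      # per-epoch latency proxy (ns/op)

# Wrong: inverting a high-throughput figure lands on the FAST epochs.
wrong_tail_lat = 1e9 / max(thr)   # 10.0 ns -- the tail is missed entirely
# Right: take the high tail of the per-epoch latencies themselves.
right_tail_lat = max(lat)         # 100.0 ns -- the slow epoch
```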

## Design Decisions

1. Profile separation (core/bench_profile.h):
   - MIXED_TINYV3_C7_SAFE: Speed-first (no LEAN)
   - MIXED_TINYV3_C7_BALANCED: Balanced mode (LEAN+OFF)

2. Box Theory compliance:
   - All ENV gates reversible (HAKMEM_SS_MEM_LEAN, HAKMEM_ALLOC_PASSDOWN_SSOT)
   - Single conversion points maintained
   - No physical deletions (compile-out only)

3. Lessons learned:
   - SSOT effective only where redundancy exists (Phase 60 showed limits)
   - Branch prediction extremely effective (~0 cycles for well-predicted branches)
   - Early-exit pattern valuable even when seemingly redundant
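
The reversible ENV-gate pattern from item 2 lives in the C env-box headers; a rough Python analogue of the gate semantics, with purely illustrative defaults not taken from the source, might look like:

```python
import os

def env_gate(name: str, default: bool = False) -> bool:
    """Reversible ENV gate: unset -> compiled default, '0'/'1' overrides at run time."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip() not in ("", "0")

# The real gates are HAKMEM_SS_MEM_LEAN and HAKMEM_ALLOC_PASSDOWN_SSOT;
# the default shown here is an assumption for illustration.
mem_lean = env_gate("HAKMEM_SS_MEM_LEAN", default=False)
```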

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-17 06:24:01 +09:00


#!/usr/bin/env python3
"""
analyze_epoch_tail_csv.py
Compute correct tail proxy statistics from Phase 51/52 epoch CSV.
Input CSV (from scripts/soak_mixed_single_process.sh):
epoch,iter,throughput_ops_s,rss_mb
Key points:
- Tail in throughput space is the *LOW* tail (p1/p0.1), not p99.
- Tail in latency space is the *HIGH* tail (p99/p999), computed from per-epoch latency values:
latency_ns = 1e9 / throughput_ops_s
- Do NOT compute latency percentiles as 1e9 / throughput_percentile (nonlinear + order inversion).
"""
from __future__ import annotations
import argparse
import csv
import math
from dataclasses import dataclass
from typing import List, Tuple
def percentile(sorted_vals: List[float], p: float) -> float:
    """Linear-interpolated percentile of an already-sorted list.

    e.g. percentile([1, 2, 3, 4], 50) -> 2.5
    """
    if not sorted_vals:
        return float("nan")
    if p <= 0:
        return sorted_vals[0]
    if p >= 100:
        return sorted_vals[-1]
    # linear interpolation between closest ranks
    k = (len(sorted_vals) - 1) * (p / 100.0)
    f = math.floor(k)
    c = math.ceil(k)
    if f == c:
        return sorted_vals[int(k)]
    d0 = sorted_vals[f] * (c - k)
    d1 = sorted_vals[c] * (k - f)
    return d0 + d1
@dataclass
class Stats:
    mean: float
    stdev: float
    cv: float
    p50: float
    p90: float
    p99: float
    p999: float
    p10: float
    p1: float
    p01: float
    minv: float
    maxv: float
def compute_stats(vals: List[float]) -> Stats:
    if not vals:
        nan = float("nan")
        return Stats(nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan)
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    stdev = math.sqrt(var)
    cv = (stdev / mean) if mean != 0 else float("nan")
    s = sorted(vals)
    return Stats(
        mean=mean,
        stdev=stdev,
        cv=cv,
        p50=percentile(s, 50),
        p90=percentile(s, 90),
        p99=percentile(s, 99),
        p999=percentile(s, 99.9),
        p10=percentile(s, 10),
        p1=percentile(s, 1),
        p01=percentile(s, 0.1),
        minv=s[0],
        maxv=s[-1],
    )
def read_csv(path: str) -> Tuple[List[float], List[float]]:
    thr: List[float] = []
    rss: List[float] = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            t = row.get("throughput_ops_s", "").strip()
            r = row.get("rss_mb", "").strip()
            if not t:
                continue
            try:
                thr.append(float(t))
            except ValueError:
                continue
            if r:
                try:
                    rss.append(float(r))
                except ValueError:
                    pass
    return thr, rss
def main() -> int:
    ap = argparse.ArgumentParser()
    ap.add_argument("csv", help="epoch CSV (from scripts/soak_mixed_single_process.sh)")
    args = ap.parse_args()
    thr, rss = read_csv(args.csv)
    thr_stats = compute_stats(thr)
    # Latency percentiles come from per-epoch latencies, never from
    # inverted throughput percentiles (see module docstring).
    lat = [(1e9 / t) for t in thr if t > 0]
    lat_stats = compute_stats(lat)
    print(f"epochs={len(thr)}")
    print("")
    print("Throughput (ops/s) [NOTE: tail = low throughput]")
    print(f" mean={thr_stats.mean:,.0f} stdev={thr_stats.stdev:,.0f} cv={thr_stats.cv*100:.2f}%")
    print(f" p50={thr_stats.p50:,.0f} p10={thr_stats.p10:,.0f} p1={thr_stats.p1:,.0f} p0.1={thr_stats.p01:,.0f}")
    print(f" min={thr_stats.minv:,.0f} max={thr_stats.maxv:,.0f}")
    print("")
    print("Latency proxy (ns/op) [NOTE: tail = high latency]")
    print(f" mean={lat_stats.mean:,.2f} stdev={lat_stats.stdev:,.2f} cv={lat_stats.cv*100:.2f}%")
    print(f" p50={lat_stats.p50:,.2f} p90={lat_stats.p90:,.2f} p99={lat_stats.p99:,.2f} p99.9={lat_stats.p999:,.2f}")
    print(f" min={lat_stats.minv:,.2f} max={lat_stats.maxv:,.2f}")
    if rss:
        rss_stats = compute_stats(rss)
        print("")
        print("RSS (MB) [peak per epoch sample]")
        print(f" mean={rss_stats.mean:,.2f} stdev={rss_stats.stdev:,.2f} cv={rss_stats.cv*100:.2f}%")
        print(f" min={rss_stats.minv:,.2f} max={rss_stats.maxv:,.2f}")
    return 0
if __name__ == "__main__":
    raise SystemExit(main())