Moe Charm (CI) cdaf117581 Phase 17-1 Revision: Small-Mid Front Box Only (ChatGPT Strategy)
STRATEGY CHANGE (ChatGPT reviewed):
- Phase 17-1: Build FRONT BOX ONLY (no dedicated SuperSlab backend)
- Backend: Reuse existing Tiny SuperSlab/SharedPool APIs
- Goal: Measure performance impact before building dedicated infrastructure
- A/B test: Does a thin front layer improve 256B-1KB performance?

RATIONALE (ChatGPT analysis):
1. Tiny/Middle/Large need different properties; sharing one SuperSlab creates conflicts
2. Metadata shapes collide: struct bloat → more L1 misses
3. Learning signals get muddied: per-size control becomes difficult

IMPLEMENTATION:
- Reduced size classes: 5 → 3 (256B/512B/1KB only)
- Removed dedicated SuperSlab backend stub
- Backend: Direct delegation to hak_tiny_alloc/free
- TLS freelist: Thin front cache (per-class capacities 32/24/16)
- Fast path: TLS hit (pop/push with header 0xb0)
- Slow path: Backend alloc via Tiny (no TLS refill)
- Free path: TLS push if space, else delegate to Tiny (all three paths are sketched below)
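
A minimal sketch of the front box described above, not the actual hakmem_smallmid.c: the hak_tiny_alloc/hak_tiny_free signatures are assumed, and hak_smallmid_alloc, hak_smallmid_free, and the sm_* names are illustrative. The real code tags blocks with header 0xb0 so the free path can recognize them; the sketch passes the size explicitly to stay short.

```c
#include <stddef.h>

#define SM_CLASSES 3
static const size_t sm_class_size[SM_CLASSES] = { 256, 512, 1024 };
static const int    sm_class_cap[SM_CLASSES]  = { 32, 24, 16 };  /* per-class TLS capacity (order assumed) */

void *hak_tiny_alloc(size_t size);   /* existing Tiny backend (assumed signature) */
void  hak_tiny_free(void *ptr);      /* existing Tiny backend (assumed signature) */

/* Thin TLS front cache: one singly linked freelist per class (GCC/Clang TLS). */
static __thread struct { void *head; int count; } sm_tls[SM_CLASSES];

static int sm_class_of(size_t size) {
    for (int c = 0; c < SM_CLASSES; c++)
        if (size <= sm_class_size[c]) return c;
    return -1;                                   /* >1KB: not Small-Mid */
}

void *hak_smallmid_alloc(size_t size) {
    int c = sm_class_of(size);
    if (c < 0) return NULL;                      /* caller routes elsewhere */
    void *p = sm_tls[c].head;
    if (p != NULL) {                             /* fast path: TLS hit, pop */
        sm_tls[c].head = *(void **)p;
        sm_tls[c].count--;
        return p;
    }
    return hak_tiny_alloc(sm_class_size[c]);     /* slow path: Tiny backend, no TLS refill */
}

void hak_smallmid_free(void *ptr, size_t size) {
    int c = sm_class_of(size);
    if (c >= 0 && sm_tls[c].count < sm_class_cap[c]) {
        *(void **)ptr = sm_tls[c].head;          /* free path: push onto TLS freelist */
        sm_tls[c].head = ptr;
        sm_tls[c].count++;
        return;
    }
    hak_tiny_free(ptr);                          /* no TLS space: delegate back to Tiny */
}
```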

ARCHITECTURE:
  Tiny:      0-255B    (C0-C5, unchanged)
  Small-Mid: 256B-1KB  (SM0-SM2, Front Box, backend=Tiny)
  Mid:       8KB-32KB  (existing)
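
A hypothetical routing shape for hak_alloc_api.inc.h matching the table above; hak_alloc_route, hak_mid_or_large_alloc, and HAKMEM_SMALLMID_ENABLE are illustrative names for the dispatcher, the existing larger-size paths, and the A/B switch.

```c
#include <stddef.h>

void *hak_tiny_alloc(size_t size);         /* existing Tiny path (assumed signature) */
void *hak_smallmid_alloc(size_t size);     /* front-box entry point from the sketch above */
void *hak_mid_or_large_alloc(size_t size); /* stand-in for the existing Mid/Large paths */

/* Tiny first, then the Small-Mid front box behind an A/B switch,
 * then the unchanged larger-size paths. */
static inline void *hak_alloc_route(size_t size) {
    if (size <= 255)
        return hak_tiny_alloc(size);       /* Tiny: C0-C5, unchanged */
#ifdef HAKMEM_SMALLMID_ENABLE              /* illustrative A/B switch */
    if (size <= 1024)
        return hak_smallmid_alloc(size);   /* Small-Mid front box: SM0-SM2 */
#endif
    return hak_mid_or_large_alloc(size);   /* Mid (8KB-32KB) and other existing paths */
}
```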

FILES CHANGED:
- hakmem_smallmid.h: Reduced to 3 classes, updated docs
- hakmem_smallmid.c: Removed SuperSlab stub, added backend delegation

NEXT STEPS:
- Integrate into hak_alloc_api.inc.h routing
- A/B benchmark: Small-Mid ON/OFF comparison (a rough harness sketch follows)
- If successful (roughly 2x improvement on 256B-1KB workloads), consider a dedicated Phase 17-2 backend
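
A rough A/B harness sketch, not part of the repo, assuming hakmem is linked in (or LD_PRELOADed) as the malloc implementation: build and run it once with the Small-Mid front box enabled and once with it disabled, then compare the reported throughput.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    enum { ITERS = 10 * 1000 * 1000 };
    static const size_t sizes[] = { 256, 512, 1024 };  /* SM0-SM2 class sizes */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERS; i++) {
        void *p = malloc(sizes[i % 3]);
        if (!p) return 1;
        *(volatile char *)p = 0;   /* touch the block so the pair is not optimized away */
        free(p);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f M alloc/free pairs per second\n", ITERS / sec / 1e6);
    return 0;
}
```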

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 01:51:43 +09:00