// pool_api.inc.h — Box: L2 Pool public API (alloc/free/lookup)

#ifndef POOL_API_INC_H
#define POOL_API_INC_H

#include "pagefault_telemetry_box.h" // Box PageFaultTelemetry (PF_BUCKET_MID)
#include "box/pool_hotbox_v2_box.h"
#include "box/tiny_heap_env_box.h"       // TinyHeap profile (flatten is disabled under C7_SAFE)
#include "box/pool_zero_mode_box.h"      // Pool zeroing policy (env cached)
#include "box/pool_config_box.h"         // Pool configuration & ENV gates
#include "box/pool_stats_box.h"          // Pool statistics & monitoring
#include "box/pool_mid_desc_cache_box.h" // Mid descriptor TLS cache
#include "box/pool_free_v1_box.h"        // Pool v1 free implementation (L0-SplitBox + L1-FastBox/SlowBox)
#include "box/pool_block_to_user_box.h"  // Pool block to user pointer helpers
#include "box/pool_free_v2_box.h"        // Pool v2 free implementation (with hotbox v2)
#include "box/pool_alloc_v1_flat_box.h"  // Pool v1 flatten (TLS-only fast path)
#include "box/pool_alloc_v2_box.h"       // Pool v2 alloc implementation (with hotbox v2)
#include "box/pool_alloc_v1_box.h"       // Pool v1 alloc implementation (baseline)

#include <stdint.h>

static inline int hak_pool_mid_lookup_v1_impl(void* ptr, size_t* out_size) {
    // MF2 route first: resolve the owning MidPage directly from the address.
    if (g_mf2_enabled) {
        MidPage* page = mf2_addr_to_page(ptr);
        if (page) {
            int c = (int)page->class_idx;
            if (c < 0 || c >= POOL_NUM_CLASSES) return 0;
            size_t sz = g_class_sizes[c];
            if (sz == 0) return 0;
            if (out_size) *out_size = sz;
            return 1;
        }
    }
    // Fallback: TLS-cached mid descriptor lookup.
    MidPageDesc* d = mid_desc_lookup_cached(ptr);
    if (!d) return 0;
    int c = (int)d->class_idx;
    if (c < 0 || c >= POOL_NUM_CLASSES) return 0;
    size_t sz = g_class_sizes[c];
    if (sz == 0) return 0;
    if (out_size) *out_size = sz;
    return 1;
}
static inline void hak_pool_free_fast_v1_impl(void* ptr, uintptr_t site_id) {
    if (!ptr || !g_pool.initialized) return;

    // MF2 route first; fall back to the TLS mid-descriptor cache.
    if (g_mf2_enabled) {
        MidPage* page = mf2_addr_to_page(ptr);
        if (page) { mf2_free(ptr); return; }
    }

    MidPageDesc* d = mid_desc_lookup_cached(ptr);
    if (!d) return;
    size_t sz = g_class_sizes[(int)d->class_idx];
    if (sz == 0) return;
    hak_pool_free(ptr, sz, site_id);
}
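
/*
 * Note: pointers without a descriptor are ignored silently above (no MF2 page
 * and no cached MidPageDesc means a no-op). That keeps the fast free safe to
 * call with arbitrary addresses, but it can also mask double frees; adding a
 * debug counter here is a possible follow-up, not an existing hook.
 */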
// --- Public wrappers (env-gated) ----------------------------------------------
static inline int hak_pool_v2_route(void) { return hak_pool_v2_enabled(); }
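
/*
 * All public entry points below share this env-gated shape: resolve the route
 * once, then call the matching v1/v2 implementation. A minimal sketch of the
 * pattern (illustrative only; do_v1()/do_v2() are placeholders, not symbols
 * in this codebase):
 *
 *   static inline void pool_dispatch_sketch(void) {
 *       if (!hak_pool_v2_route()) { do_v1(); return; }  // v1 baseline path
 *       do_v2();                                        // v2 hotbox path
 *   }
 *
 * The concrete ENV gate behind hak_pool_v2_enabled() is owned by
 * pool_config_box.h (included above as "Pool configuration & ENV gates"),
 * so the wrappers here never read the environment directly.
 */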
void* hak_pool_try_alloc(size_t size, uintptr_t site_id) {
    if (!hak_pool_v2_route()) {
        if (hak_pool_v1_flatten_enabled()) {
            return hak_pool_try_alloc_v1_flat(size, site_id);
        }
        return hak_pool_try_alloc_v1_impl(size, site_id);
    }
    return hak_pool_try_alloc_v2_impl(size, site_id);
}
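
/*
 * Usage sketch (illustrative; my_site_id is a placeholder): the pool may
 * decline a request by returning NULL, so callers keep a fallback path, and
 * the size handed to hak_pool_free() should match the allocation request.
 *
 *   void* p = hak_pool_try_alloc(256, my_site_id);
 *   if (!p) {
 *       // class not served by the pool; fall back to the general allocator
 *   } else {
 *       // ... use p ...
 *       hak_pool_free(p, 256, my_site_id);
 *   }
 */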
void hak_pool_free(void* ptr, size_t size, uintptr_t site_id) {
    // Phase FREE-LEGACY-BREAKDOWN-1: pool v1 counter (incremented before routing)
    extern void free_path_stat_inc_pool_v1_fast(void);
    free_path_stat_inc_pool_v1_fast();

    if (!hak_pool_v2_route()) {
        if (hak_pool_v1_flatten_enabled()) {
            hak_pool_free_v1_flat(ptr, size, site_id);
        } else {
            hak_pool_free_v1_impl(ptr, size, site_id);
        }
        return;
    }
    hak_pool_free_v2_impl(ptr, size, site_id);
}
void hak_pool_free_fast(void* ptr, uintptr_t site_id) {
    if (!hak_pool_v2_route()) {
        // The fast path lacks the size; keep the existing v1 fast implementation
        // even when flatten is enabled to avoid behavior drift.
        hak_pool_free_fast_v1_impl(ptr, site_id);
        return;
    }
    hak_pool_free_fast_v2_impl(ptr, site_id);
}
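
/*
 * hak_pool_free_fast() is the size-less variant: on the v1 route the size is
 * recovered inside hak_pool_free_fast_v1_impl() via the MF2 page map or the
 * TLS mid-descriptor cache. Sketch of a caller that only holds a raw pointer
 * (illustrative; site is a placeholder):
 *
 *   void release(void* p, uintptr_t site) {
 *       hak_pool_free_fast(p, site);  // no size argument; lookup happens inside
 *   }
 */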
int hak_pool_mid_lookup(void* ptr, size_t* out_size) {
    if (!hak_pool_v2_route()) {
        return hak_pool_mid_lookup_v1_impl(ptr, out_size);
    }
    return hak_pool_mid_lookup_v2_impl(ptr, out_size);
}
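
/*
 * hak_pool_mid_lookup() doubles as an ownership probe: it returns 1 and
 * writes the class size only when ptr belongs to a pool mid page. A sketch
 * of a realloc-style size query (illustrative only):
 *
 *   size_t old_sz = 0;
 *   if (hak_pool_mid_lookup(p, &old_sz)) {
 *       // p is pool-owned; old_sz is its class size
 *   }
 */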
#endif // POOL_API_INC_H