hakmem/core/box/ss_pt_lookup_box.h
Moe Charm (CI) d9991f39ff Phase ALLOC-TINY-FAST-DUALHOT-1 & Optimization Roadmap Update
Add comprehensive design docs and research boxes:
- docs/analysis/ALLOC_TINY_FAST_DUALHOT_1_DESIGN.md: ALLOC DUALHOT investigation
- docs/analysis/FREE_TINY_FAST_DUALHOT_1_DESIGN.md: FREE DUALHOT final specs
- docs/analysis/FREE_TINY_FAST_HOTCOLD_OPT_1_DESIGN.md: Hot/Cold split research
- docs/analysis/POOL_MID_INUSE_DEFERRED_DN_BATCH_DESIGN.md: Deferred batching design
- docs/analysis/POOL_MID_INUSE_DEFERRED_REGRESSION_ANALYSIS.md: Stats overhead findings
- docs/analysis/MID_DESC_CACHE_BENCHMARK_2025-12-12.md: Cache measurement results
- docs/analysis/LAST_MATCH_CACHE_IMPLEMENTATION.md: TLS cache investigation

Research boxes (SS page table):
- core/box/ss_pt_env_box.h: HAKMEM_SS_LOOKUP_KIND gate
- core/box/ss_pt_types_box.h: 2-level page table structures (sketched after this list)
- core/box/ss_pt_lookup_box.h: ss_pt_lookup() implementation
- core/box/ss_pt_register_box.h: Page table registration
- core/box/ss_pt_impl.c: Global definitions
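
For orientation, the sketch below shows the shape of the structures and index macros that ss_pt_lookup() (shown further down) relies on. Only the names visible in the lookup code (SsPtL2, g_ss_pt, l2, entries, SS_PT_L1_INDEX, SS_PT_L2_INDEX) come from the source; the shift/width constants and field layout are assumptions, not the actual contents of ss_pt_types_box.h.

// Hypothetical sketch of the 2-level SS page table. Real definitions live
// in core/box/ss_pt_types_box.h; the constants below are assumed values.
#include <stdatomic.h>
#include <stdint.h>

struct SuperSlab; // opaque here

// Split the low 48 bits of an address into an L1 (root) and L2 (leaf)
// index. A 2 MiB leaf granularity is assumed.
#define SS_PT_SHIFT   21 // bytes covered per leaf entry: 2 MiB (assumed)
#define SS_PT_L2_BITS 13 // leaf entries per L2 node: 8192 (assumed)
#define SS_PT_L1_INDEX(a) ((uint32_t)(((uintptr_t)(a)) >> (SS_PT_SHIFT + SS_PT_L2_BITS)))
#define SS_PT_L2_INDEX(a) ((uint32_t)((((uintptr_t)(a)) >> SS_PT_SHIFT) & ((1u << SS_PT_L2_BITS) - 1u)))

typedef struct SsPtL2 {
    _Atomic(struct SuperSlab*) entries[1u << SS_PT_L2_BITS]; // leaf: address -> SuperSlab
} SsPtL2;

typedef struct SsPt {
    _Atomic(SsPtL2*) l2[1u << (48 - SS_PT_SHIFT - SS_PT_L2_BITS)]; // root, lazily populated
} SsPt;

extern SsPt g_ss_pt; // global table; per the list above, defined in ss_pt_impl.c

The acquire loads in ss_pt_lookup() suggest that ss_pt_register_box.h publishes new L2 nodes and entries with release stores (e.g. a CAS on g_ss_pt.l2[l1_idx]), so a reader that observes a non-NULL pointer also observes the memory it points to.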

Updates:
- docs/specs/ENV_VARS_COMPLETE.md: HOTCOLD, DEFERRED, SS_LOOKUP env vars
- core/box/hak_free_api.inc.h: FREE-DISPATCH-SSOT integration
- core/box/pool_mid_inuse_deferred_box.h: Deferred API updates
- core/box/pool_mid_inuse_deferred_stats_box.h: Stats collection
- core/hakmem_super_registry: SS page table integration

Current Status:
- FREE-TINY-FAST-DUALHOT-1: +13% improvement, ready for adoption
- ALLOC-TINY-FAST-DUALHOT-1: -2% regression, frozen as research box
- Next: optimization roadmap prioritized by ROI (current gap vs. mimalloc: 2.5x)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-13 05:35:46 +09:00


#ifndef SS_PT_LOOKUP_BOX_H
#define SS_PT_LOOKUP_BOX_H

#include <stdatomic.h> /* atomic_load_explicit, memory_order_acquire */
#include <stddef.h>    /* NULL */
#include <stdint.h>    /* uintptr_t, uint32_t */

#include "ss_pt_types_box.h"
#include "ss_pt_env_box.h"

// O(1) lookup (hot path, lock-free)
static inline struct SuperSlab* ss_pt_lookup(void* addr) {
    uintptr_t p = (uintptr_t)addr;
    // Out-of-range check (>> 48 for LA57 compatibility): addresses with
    // bits set above bit 47 are not covered by the table.
    if (__builtin_expect(p >> 48, 0)) {
        if (hak_ss_pt_stats_enabled()) t_ss_pt_stats.pt_out_of_range++;
        return NULL; // Fallback to hash handled by caller
    }
    uint32_t l1_idx = SS_PT_L1_INDEX(addr);
    uint32_t l2_idx = SS_PT_L2_INDEX(addr);
    // L1 load (acquire)
    SsPtL2* l2 = atomic_load_explicit(&g_ss_pt.l2[l1_idx], memory_order_acquire);
    if (__builtin_expect(l2 == NULL, 0)) {
        if (hak_ss_pt_stats_enabled()) t_ss_pt_stats.pt_miss++;
        return NULL;
    }
    // L2 load (acquire)
    struct SuperSlab* ss = atomic_load_explicit(&l2->entries[l2_idx], memory_order_acquire);
    if (hak_ss_pt_stats_enabled()) {
        if (ss) t_ss_pt_stats.pt_hit++;
        else    t_ss_pt_stats.pt_miss++;
    }
    return ss;
}

#endif // SS_PT_LOOKUP_BOX_H
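
Usage note: "Fallback to hash handled by caller" implies a call site along these lines. hak_ss_resolve() and hak_ss_lookup_hash() are hypothetical stand-in names for the dispatch wrapper and the existing hash-based registry path (core/hakmem_super_registry), not actual APIs.

// Hypothetical call site: try the O(1) page table first, fall back to the
// hash-based registry on a miss or out-of-range address.
static inline struct SuperSlab* hak_ss_resolve(void* addr) {
    struct SuperSlab* ss = ss_pt_lookup(addr); // lock-free hot path
    if (__builtin_expect(ss == NULL, 0))
        ss = hak_ss_lookup_hash(addr);         // slow path (stand-in name)
    return ss;
}

In practice this dispatch would also be gated by HAKMEM_SS_LOOKUP_KIND (see ss_pt_env_box.h above), so the page-table path can be toggled at runtime.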