Tiny Pool redesign: P0.1, P0.3, P1.1, P1.2 - Out-of-band class_idx lookup
This commit implements the first phase of the Tiny Pool redesign, based on a
ChatGPT architecture review. The goal is to eliminate Header/Next pointer
conflicts by moving the class_idx lookup out-of-band (into SuperSlab metadata).
## P0.1: C0(8B) class upgraded to 16B
- Size table changed: {16,32,64,128,256,512,1024,2048} (8 classes)
- LUT updated: 1..16 → class 0, 17..32 → class 1, etc.
- tiny_next_off: C0 now uses offset 1 (header preserved)
- Eliminates edge cases for 8B allocations
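
A minimal sketch of the new size-class table and LUT; the names
(`tiny_class_sizes`, `tiny_size_to_class_lut`, `TINY_NUM_CLASSES`) are
illustrative and not taken from the actual headers, only the eight sizes and
the 1..16 → class 0, 17..32 → class 1 mapping come from this commit:

```c
#include <stdint.h>
#include <stddef.h>

#define TINY_NUM_CLASSES 8
#define TINY_MAX_SIZE    2048

/* C0 is now 16B; the old 8B class is gone. */
static const uint16_t tiny_class_sizes[TINY_NUM_CLASSES] = {
    16, 32, 64, 128, 256, 512, 1024, 2048
};

/* size -> class LUT: 1..16 -> 0, 17..32 -> 1, 33..64 -> 2, ... */
static uint8_t tiny_size_to_class_lut[TINY_MAX_SIZE + 1];

static void tiny_lut_init(void) {
    uint8_t cls = 0;
    for (size_t sz = 1; sz <= TINY_MAX_SIZE; sz++) {
        while (sz > tiny_class_sizes[cls]) cls++;
        tiny_size_to_class_lut[sz] = cls;
    }
    tiny_size_to_class_lut[0] = 0;  /* treat size 0 as the smallest class */
}

static inline int tiny_size_to_class(size_t sz) {
    return (sz <= TINY_MAX_SIZE) ? tiny_size_to_class_lut[sz] : -1;  /* -1: not tiny */
}
```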
## P0.3: Slab reuse guard Box (tls_slab_reuse_guard_box.h)
- New Box for draining TLS SLL before slab reuse
- ENV gate: HAKMEM_TINY_SLAB_REUSE_GUARD=1
- Prevents stale pointers when slabs are recycled
- Follows Box theory: single responsibility, minimal API
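
Roughly what the guard Box boils down to, as a sketch: only the Box name, the
ENV gate, and the `tiny_tls_slab_reuse_guard()` entry point seen in the diff
are from this commit; the cached gate check and the call into the TLS SLL
drain Box (and its signature) are assumptions:

```c
/* tls_slab_reuse_guard_box.h -- illustrative sketch, not the real header. */
#pragma once
#include <stdlib.h>

struct SuperSlab;  /* defined elsewhere */

/* Assumed helper from the TLS SLL Drain Box; the real signature may differ. */
void tiny_tls_sll_drain(struct SuperSlab* ss);

/* ENV gate is read once and cached. */
static inline int tiny_tls_slab_reuse_guard_enabled(void) {
    static int cached = -1;
    if (cached < 0) {
        const char* e = getenv("HAKMEM_TINY_SLAB_REUSE_GUARD");
        cached = (e && e[0] == '1') ? 1 : 0;
    }
    return cached;
}

/* Drain TLS SLL entries that still point into `ss` before one of its EMPTY
 * slabs is rebound to a new class, so no stale pointers survive the reuse. */
static inline void tiny_tls_slab_reuse_guard(struct SuperSlab* ss) {
    if (!tiny_tls_slab_reuse_guard_enabled()) return;
    tiny_tls_sll_drain(ss);
}
```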
## P1.1: SuperSlab class_map addition
- Added uint8_t class_map[SLABS_PER_SUPERSLAB_MAX] to SuperSlab
- Maps slab_idx → class_idx for out-of-band lookup
- Initialized to 255 (UNASSIGNED) on SuperSlab creation
- Set correctly on slab initialization in all backends
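
A sketch of the out-of-band mapping; the value of `SLABS_PER_SUPERSLAB_MAX`
and the helper names are placeholders, only the `class_map` field, the
255 = UNASSIGNED convention, and the init/bind points come from this commit:

```c
#include <stdint.h>

#define SLABS_PER_SUPERSLAB_MAX 64   /* assumption: real value lives in the SuperSlab header */
#define TINY_CLASS_UNASSIGNED   255

typedef struct SuperSlab {
    /* ... existing fields: slab_bitmap, slabs[], active_slabs, ... */
    uint8_t class_map[SLABS_PER_SUPERSLAB_MAX];  /* slab_idx -> class_idx, 255 = UNASSIGNED */
} SuperSlab;

/* On SuperSlab creation every slot starts UNASSIGNED ... */
static void superslab_class_map_init(SuperSlab* ss, int max_slabs) {
    for (int i = 0; i < max_slabs; i++)
        ss->class_map[i] = TINY_CLASS_UNASSIGNED;
}

/* ... and each backend records the class when it binds a slab. */
static void superslab_class_map_bind(SuperSlab* ss, int slab_idx, int class_idx) {
    ss->class_map[slab_idx] = (uint8_t)class_idx;
}
```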
## P1.2: Free fast path uses class_map
- ENV gate: HAKMEM_TINY_USE_CLASS_MAP=1
- Free path can now get class_idx from class_map instead of Header
- Falls back to Header read if class_map returns invalid value
- Fixed Legacy Backend dynamic slab initialization bug
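
A sketch of the gated lookup order on the free path; `g_use_class_map` and
`tiny_header_read_class_idx` are hypothetical stand-ins, but the
class_map-first, Header-fallback rule is the one described above:

```c
#include <stdint.h>

#define TINY_CLASS_UNASSIGNED 255

/* Minimal stand-in for the real SuperSlab; only the P1.1 class_map matters here. */
typedef struct SuperSlab {
    uint8_t class_map[64];                       /* slab_idx -> class_idx */
    /* ... real fields omitted ... */
} SuperSlab;

extern int g_use_class_map;                      /* assumed cache of HAKMEM_TINY_USE_CLASS_MAP=1 */
int tiny_header_read_class_idx(const void* ptr); /* assumed legacy in-band Header read */

static inline int tiny_free_class_idx(const SuperSlab* ss, int slab_idx, const void* ptr) {
    if (g_use_class_map) {
        uint8_t cls = ss->class_map[slab_idx];
        if (cls != TINY_CLASS_UNASSIGNED)
            return cls;                          /* out-of-band hit: Header never touched */
        /* UNASSIGNED / invalid entry: fall through to the Header read */
    }
    return tiny_header_read_class_idx(ptr);      /* legacy fallback */
}
```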
## Documentation added
- HAKMEM_ARCHITECTURE_OVERVIEW.md: 4-layer architecture analysis
- TLS_SLL_ARCHITECTURE_INVESTIGATION.md: Root cause analysis
- PTR_LIFECYCLE_TRACE_AND_ROOT_CAUSE_ANALYSIS.md: Pointer tracking
- TINY_REDESIGN_CHECKLIST.md: Implementation roadmap (P0-P3)
## Test results
- Baseline: 70% success rate (30% crash rate; pre-existing issue)
- class_map enabled: 70% success rate (same as baseline)
- Performance: ~30.5M ops/s (unchanged)
## Next steps (P1.3, P2, P3)
- P1.3: Add meta->active for accurate TLS/freelist sync
- P2: TLS SLL redesign with Box-based counting
- P3: Complete Header out-of-band migration
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
@@ -5,6 +5,7 @@
 #include "box/ss_hot_cold_box.h"          // Phase 12-1.1: EMPTY slab marking
 #include "box/pagefault_telemetry_box.h"  // Box PageFaultTelemetry (PF_BUCKET_SS_META)
 #include "box/tls_sll_drain_box.h"        // Box TLS SLL Drain (tiny_tls_sll_drain)
+#include "box/tls_slab_reuse_guard_box.h" // Box TLS Slab Reuse Guard (P0.3)
 #include "hakmem_policy.h"                // FrozenPolicy (learning layer)
 
 #include <stdlib.h>
@@ -684,6 +685,8 @@ shared_pool_allocate_superslab_unlocked(void)
     int max_slabs = ss_slabs_capacity(ss);
     for (int i = 0; i < max_slabs; i++) {
         ss_slab_meta_class_idx_set(ss, i, 255); // UNASSIGNED
+        // P1.1: Initialize class_map to UNASSIGNED as well
+        ss->class_map[i] = 255;
     }
 
     if (g_shared_pool.total_count >= g_shared_pool.capacity) {
@@ -751,6 +754,8 @@ static inline void sp_fix_geometry_if_needed(SuperSlab* ss, int slab_idx, int cl
 
         superslab_init_slab(ss, slab_idx, stride, 0 /*owner_tid*/);
         meta->class_idx = (uint8_t)class_idx;
+        // P1.1: Update class_map after geometry fix
+        ss->class_map[slab_idx] = (uint8_t)class_idx;
     }
 }
 
@@ -861,11 +866,16 @@ stage1_retry_after_tension_drain:
         // Validate this slab is truly EMPTY and reusable
         TinySlabMeta* meta = &ss->slabs[empty_idx];
         if (meta->capacity > 0 && meta->used == 0) {
+            // P0.3: Guard against TLS SLL orphaned pointers before reusing slab
+            tiny_tls_slab_reuse_guard(ss);
+
            // Clear EMPTY state (will be re-marked on next free)
            ss_clear_slab_empty(ss, empty_idx);
 
            // Bind this slab to class_idx
            meta->class_idx = (uint8_t)class_idx;
+           // P1.1: Update class_map for EMPTY slab reuse
+           ss->class_map[empty_idx] = (uint8_t)class_idx;
 
 #if !HAKMEM_BUILD_RELEASE
            if (dbg_acquire == 1) {
@@ -905,6 +915,13 @@ stage1_retry_after_tension_drain:
 
     pthread_mutex_lock(&g_shared_pool.alloc_lock);
 
+    // P0.3: Guard against TLS SLL orphaned pointers before reusing slab
+    // RACE FIX: Load SuperSlab pointer atomically BEFORE guard (consistency)
+    SuperSlab* ss_guard = atomic_load_explicit(&reuse_meta->ss, memory_order_relaxed);
+    if (ss_guard) {
+        tiny_tls_slab_reuse_guard(ss_guard);
+    }
+
     // Activate slot under mutex (slot state transition requires protection)
     if (sp_slot_mark_active(reuse_meta, reuse_slot_idx, class_idx) == 0) {
         // RACE FIX: Load SuperSlab pointer atomically (consistency)
@@ -1291,6 +1308,8 @@ shared_pool_release_slab(SuperSlab* ss, int slab_idx)
     if (ss->slab_bitmap & bit) {
         ss->slab_bitmap &= ~bit;
         slab_meta->class_idx = 255; // UNASSIGNED
+        // P1.1: Mark class_map as UNASSIGNED when releasing slab
+        ss->class_map[slab_idx] = 255;
 
         if (ss->active_slabs > 0) {
             ss->active_slabs--;