/**
* hakmem_smallmid.c - Small-Mid Allocator Front Box Implementation
*
* Phase 17-1: Front Box Only (No Dedicated SuperSlab Backend)
*
* Strategy (ChatGPT reviewed):
* - Thin front layer with TLS freelist (256B/512B/1KB)
* - Backend: Use existing Tiny SuperSlab/SharedPool APIs
* - Goal: Measure performance impact before building dedicated backend
* - A/B test: Does Small-Mid front improve 256-1KB performance?
*
* Architecture:
* - 3 size classes: 256B/512B/1KB (reduced from 5)
* - TLS freelist for fast alloc/free (static inline)
* - Backend: Call Tiny allocator APIs (reuse existing infrastructure)
* - ENV controlled (HAKMEM_SMALLMID_ENABLE=1)
*
* Created: 2025-11-16
* Updated: 2025-11-16 (Phase 17-1 revision - Front Box only)
*/
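/*
 * Block layout assumed by the TLS freelist below (sketch; the authoritative
 * header encoding lives in the Tiny allocator / tiny_region_id.h):
 *
 *   base[0]    1-byte class header: 0xb0 | sm_class while owned by Small-Mid,
 *              0xa0 | tiny_class while owned by the Tiny backend
 *   base[1..]  user data; while the block sits on a TLS freelist, its first
 *              pointer-sized bytes hold the "next" link
 */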
#include "hakmem_smallmid.h"
#include "hakmem_build_flags.h"
#include "hakmem_tiny.h" // For backend: hak_tiny_alloc / hak_tiny_free
#include "tiny_region_id.h" // For header writing
#include <string.h>
#include <pthread.h>
#include <stdio.h>   // fprintf (stats output)
#include <stdlib.h>  // getenv / atoi (ENV control)
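/*
 * Routing sketch (illustrative): how a caller such as the dispatch code in
 * box/hak_alloc_api.inc.h is expected to decide whether a request belongs to
 * this front box. The _sketch name is hypothetical and not part of the API.
 */
static inline bool smallmid_handles_size_sketch(size_t size) {
    // Tiny keeps 0-255B; Small-Mid covers 256B-1KB (SM0-SM2) and only when
    // HAKMEM_SMALLMID_ENABLE=1; everything larger goes to the Mid/Large paths.
    return smallmid_is_enabled() && size >= 256 && size <= 1024;
}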
// ============================================================================
// TLS State
// ============================================================================
__thread void* g_smallmid_tls_head[SMALLMID_NUM_CLASSES] = {NULL};
__thread uint32_t g_smallmid_tls_count[SMALLMID_NUM_CLASSES] = {0};
// ============================================================================
// Size Class Table (Phase 17-1: 3 classes)
// ============================================================================
const size_t g_smallmid_class_sizes[SMALLMID_NUM_CLASSES] = {
    256,   // SM0: 256B
    512,   // SM1: 512B
    1024   // SM2: 1KB
};
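/*
 * Minimal size -> class mapping consistent with the table above (sketch only;
 * the real helper is expected to come from hakmem_smallmid.h, and the _sketch
 * name here is hypothetical). Callers are assumed to have already filtered
 * requests into the 256B-1KB range.
 */
static inline int smallmid_size_to_class_sketch(size_t size) {
    if (size <= 256)  return 0;   // SM0: 256B
    if (size <= 512)  return 1;   // SM1: 512B
    if (size <= 1024) return 2;   // SM2: 1KB
    return -1;                    // outside the Small-Mid range
}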
// ============================================================================
// Global State
// ============================================================================
static pthread_mutex_t g_smallmid_init_lock = PTHREAD_MUTEX_INITIALIZER;
static int g_smallmid_initialized = 0;
static int g_smallmid_enabled = -1; // -1 = not checked, 0 = disabled, 1 = enabled
// ============================================================================
// Statistics (Debug)
// ============================================================================
#ifdef HAKMEM_SMALLMID_STATS
SmallMidStats g_smallmid_stats = {0};
void smallmid_print_stats(void) {
    fprintf(stderr, "\n=== Small-Mid Allocator Statistics ===\n");
    fprintf(stderr, "Total allocs: %lu\n", g_smallmid_stats.total_allocs);
    fprintf(stderr, "Total frees: %lu\n", g_smallmid_stats.total_frees);
    fprintf(stderr, "TLS hits: %lu\n", g_smallmid_stats.tls_hits);
    fprintf(stderr, "TLS misses: %lu\n", g_smallmid_stats.tls_misses);
    fprintf(stderr, "SuperSlab refills: %lu\n", g_smallmid_stats.superslab_refills);
    if (g_smallmid_stats.total_allocs > 0) {
        double hit_rate = (double)g_smallmid_stats.tls_hits / g_smallmid_stats.total_allocs * 100.0;
        fprintf(stderr, "TLS hit rate: %.2f%%\n", hit_rate);
    }
    fprintf(stderr, "=======================================\n\n");
}
#endif
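/*
 * Usage note: the counters above exist only when this translation unit is
 * compiled with the stats flag, e.g. (illustrative build line):
 *
 *   cc -DHAKMEM_SMALLMID_STATS -c hakmem_smallmid.c
 */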
// ============================================================================
// ENV Control
// ============================================================================
bool smallmid_is_enabled(void) {
    if (__builtin_expect(g_smallmid_enabled == -1, 0)) {
        const char* env = getenv("HAKMEM_SMALLMID_ENABLE");
        g_smallmid_enabled = (env && atoi(env) == 1) ? 1 : 0;
        if (g_smallmid_enabled) {
            SMALLMID_LOG("Small-Mid allocator ENABLED (ENV: HAKMEM_SMALLMID_ENABLE=1)");
        } else {
            SMALLMID_LOG("Small-Mid allocator DISABLED (default, set HAKMEM_SMALLMID_ENABLE=1 to enable)");
        }
    }
    return (g_smallmid_enabled == 1);
}
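/*
 * Usage example: the front box stays inert unless explicitly enabled, e.g.
 *
 *   $ HAKMEM_SMALLMID_ENABLE=1 ./your_app
 *
 * Any value other than "1" (or an unset variable) keeps the default routing,
 * where Tiny handles the 256B-1KB classes on its own.
 */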
// ============================================================================
// Initialization
// ============================================================================
void smallmid_init(void) {
    if (g_smallmid_initialized) return;
    pthread_mutex_lock(&g_smallmid_init_lock);
    if (!g_smallmid_initialized) {
        SMALLMID_LOG("Initializing Small-Mid Front Box...");
        // Check ENV
        if (!smallmid_is_enabled()) {
            SMALLMID_LOG("Small-Mid allocator is disabled, skipping initialization");
            g_smallmid_initialized = 1;
            pthread_mutex_unlock(&g_smallmid_init_lock);
            return;
        }
        // Phase 17-1: No dedicated backend - use existing Tiny infrastructure
        // No additional initialization needed (TLS state is static)
        g_smallmid_initialized = 1;
        SMALLMID_LOG("Small-Mid Front Box initialized (3 classes: 256B/512B/1KB, backend=Tiny)");
    }
    pthread_mutex_unlock(&g_smallmid_init_lock);
}
// ============================================================================
// TLS Freelist Operations
// ============================================================================
/**
* smallmid_tls_pop - Pop a block from TLS freelist
*
* @param class_idx Size class index
* @return Block pointer (with header), or NULL if empty
*/
static inline void* smallmid_tls_pop(int class_idx) {
    void* head = g_smallmid_tls_head[class_idx];
    if (!head) return NULL;
    // Read next pointer (stored at offset 0 in user data, after 1-byte header)
    void* next = *(void**)((uint8_t*)head + 1);
    g_smallmid_tls_head[class_idx] = next;
    g_smallmid_tls_count[class_idx]--;
#ifdef HAKMEM_SMALLMID_STATS
    __atomic_fetch_add(&g_smallmid_stats.tls_hits, 1, __ATOMIC_RELAXED);
#endif
    return head;
}
/**
* smallmid_tls_push - Push a block to TLS freelist
*
* @param class_idx Size class index
* @param ptr Block pointer (with header)
* @return true on success, false if TLS full
*/
static inline bool smallmid_tls_push(int class_idx, void* ptr) {
    uint32_t capacity = smallmid_tls_capacity(class_idx);
    if (g_smallmid_tls_count[class_idx] >= capacity) {
        return false; // TLS full
    }
    // Write next pointer (at offset 0 in user data, after 1-byte header)
    void* head = g_smallmid_tls_head[class_idx];
    *(void**)((uint8_t*)ptr + 1) = head;
    g_smallmid_tls_head[class_idx] = ptr;
    g_smallmid_tls_count[class_idx]++;
    return true;
}
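/*
 * Worked footprint example, assuming the 32/24/16 per-class capacities quoted
 * for Phase 17-1 (the actual values come from smallmid_tls_capacity()):
 *
 *   SM0: 32 x 256B  =  8,192 B
 *   SM1: 24 x 512B  = 12,288 B
 *   SM2: 16 x 1KB   = 16,384 B
 *   --------------------------
 *   worst case      = 36,864 B (~36 KiB) parked per thread
 */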
// ============================================================================
// Backend Delegation (Phase 17-1: Reuse Tiny infrastructure)
// ============================================================================
/**
* smallmid_backend_alloc - Allocate from Tiny backend and convert header
*
* @param size Allocation size (256-1024)
* @return User pointer with Small-Mid header (0xb0), or NULL on failure
*
* Strategy:
* - Call Tiny allocator (handles C5/C6/C7 = 256B/512B/1KB)
* - Tiny writes header: 0xa5/0xa6/0xa7
* - Overwrite with Small-Mid header: 0xb0/0xb1/0xb2
*/
static void* smallmid_backend_alloc(size_t size) {
#ifdef HAKMEM_SMALLMID_STATS
    __atomic_fetch_add(&g_smallmid_stats.tls_misses, 1, __ATOMIC_RELAXED);
    __atomic_fetch_add(&g_smallmid_stats.superslab_refills, 1, __ATOMIC_RELAXED);
#endif
    // Call Tiny allocator
    void* ptr = hak_tiny_alloc(size);
    if (!ptr) {
        SMALLMID_LOG("smallmid_backend_alloc(%zu): Tiny allocation failed", size);
        return NULL;
    }
    // Overwrite header: Tiny (0xa0 | tiny_class) → Small-Mid (0xb0 | sm_class)
    // Tiny class mapping: C5=256B, C6=512B, C7=1KB
    // Small-Mid class mapping: SM0=256B, SM1=512B, SM2=1KB
    uint8_t* base = (uint8_t*)ptr - 1;
    uint8_t tiny_header = *base;
    uint8_t tiny_class = tiny_header & 0x0f;
    // Convert Tiny class (5/6/7) to Small-Mid class (0/1/2)
    int sm_class = tiny_class - 5;
    if (sm_class < 0 || sm_class >= SMALLMID_NUM_CLASSES) {
        // Should never happen - Tiny returned a class outside C5-C7
        SMALLMID_LOG("smallmid_backend_alloc(%zu): Invalid Tiny class %d", size, tiny_class);
        // Header is still the Tiny one at this point, so just hand the block back to Tiny
        hak_tiny_free(ptr);
        return NULL;
    }
    // Write Small-Mid header
    *base = 0xb0 | sm_class;
    SMALLMID_LOG("smallmid_backend_alloc(%zu) = %p (Tiny C%d → SM C%d)", size, ptr, tiny_class, sm_class);
    return ptr;
}
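/*
 * Allocation-path sketch (hypothetical _sketch name): how the public entry
 * point later in this file is expected to combine the TLS fast path with the
 * Tiny-backed slow path. Header/user-pointer offset handling is deliberately
 * omitted here; see the real entry point for the exact convention.
 */
static inline void* smallmid_alloc_path_sketch(int class_idx, size_t size) {
    void* blk = smallmid_tls_pop(class_idx);   // fast path: reuse a thread-local block
    if (blk) {
        return blk;                            // offset adjustment omitted in this sketch
    }
    return smallmid_backend_alloc(size);       // slow path: Tiny alloc + header rewrite
}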
/**
Phase 17-1: Small-Mid Allocator - TLS Frontend Cache (結果: ±0.3%, 層分離成功) Summary: ======== Phase 17-1 implements Small-Mid allocator as TLS frontend cache with Tiny backend delegation. Result: Clean layer separation achieved with minimal overhead (±0.3%), but no performance gain. Conclusion: Frontend-only approach is dead end. Phase 17-2 (dedicated backend) required for 2-3x target. Implementation: =============== 1. Small-Mid TLS frontend (256B/512B/1KB - 3 classes) - TLS freelist (32/24/16 capacity) - Backend delegation to Tiny C5/C6/C7 - Header conversion (0xa0 → 0xb0) 2. Auto-adjust Tiny boundary - When Small-Mid ON: Tiny auto-limits to C0-C5 (0-255B) - When Small-Mid OFF: Tiny default C0-C7 (0-1023B) - Prevents routing conflict 3. Routing order fix - Small-Mid BEFORE Tiny (critical for proper execution) - Fall-through on TLS miss Files Modified: =============== - core/hakmem_smallmid.h/c: TLS freelist + backend delegation - core/hakmem_tiny.c: tiny_get_max_size() auto-adjust - core/box/hak_alloc_api.inc.h: Routing order (Small-Mid → Tiny) - CURRENT_TASK.md: Phase 17-1 results + Phase 17-2 plan A/B Benchmark Results: ====================== | Size | Config A (OFF) | Config B (ON) | Delta | % Change | |--------|----------------|---------------|----------|----------| | 256B | 5.87M ops/s | 6.06M ops/s | +191K | +3.3% | | 512B | 6.02M ops/s | 5.91M ops/s | -112K | -1.9% | | 1024B | 5.58M ops/s | 5.54M ops/s | -35K | -0.6% | | Overall| 5.82M ops/s | 5.84M ops/s | +20K | +0.3% | Analysis: ========= ✅ SUCCESS: Clean layer separation (Small-Mid ↔ Tiny coexist) ✅ SUCCESS: Minimal overhead (±0.3% = measurement noise) ❌ FAIL: No performance gain (target was 2-4x) Root Cause: ----------- - Delegation overhead = TLS savings (net gain ≈ 0 instructions) - Small-Mid TLS alloc: ~3-5 instructions - Tiny backend delegation: ~3-5 instructions - Header conversion: ~2 instructions - No batching: 1:1 delegation to Tiny (no refill amortization) Lessons Learned: ================ - Frontend-only approach ineffective (backend calls not reduced) - Dedicated backend essential for meaningful improvement - Clean separation achieved = solid foundation for Phase 17-2 Next Steps (Phase 17-2): ======================== - Dedicated Small-Mid SuperSlab backend (separate from Tiny) - TLS batch refill (8-16 blocks per refill) - Optimized 0xb0 header fast path (no delegation) - Target: 12-15M ops/s (2.0-2.6x improvement) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 02:37:24 +09:00
* smallmid_backend_free - Convert header and delegate to Tiny backend
Phase 17-1 Revision: Small-Mid Front Box Only (ChatGPT Strategy) STRATEGY CHANGE (ChatGPT reviewed): - Phase 17-1: Build FRONT BOX ONLY (no dedicated SuperSlab backend) - Backend: Reuse existing Tiny SuperSlab/SharedPool APIs - Goal: Measure performance impact before building dedicated infrastructure - A/B test: Does thin front layer improve 256-1KB performance? RATIONALE (ChatGPT analysis): 1. Tiny/Middle/Large need different properties - same SuperSlab causes conflict 2. Metadata shapes collide - struct bloat → L1 miss increase 3. Learning signals get muddied - size-specific control becomes difficult IMPLEMENTATION: - Reduced size classes: 5 → 3 (256B/512B/1KB only) - Removed dedicated SuperSlab backend stub - Backend: Direct delegation to hak_tiny_alloc/free - TLS freelist: Thin front cache (32/24/16 capacity) - Fast path: TLS hit (pop/push with header 0xb0) - Slow path: Backend alloc via Tiny (no TLS refill) - Free path: TLS push if space, else delegate to Tiny ARCHITECTURE: Tiny: 0-255B (C0-C5, unchanged) Small-Mid: 256-1KB (SM0-SM2, Front Box, backend=Tiny) Mid: 8KB-32KB (existing) FILES CHANGED: - hakmem_smallmid.h: Reduced to 3 classes, updated docs - hakmem_smallmid.c: Removed SuperSlab stub, added backend delegation NEXT STEPS: - Integrate into hak_alloc_api.inc.h routing - A/B benchmark: Small-Mid ON/OFF comparison - If successful (2x improvement), consider Phase 17-2 dedicated backend 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 01:51:43 +09:00
*
Phase 17-1: Small-Mid Allocator - TLS Frontend Cache (結果: ±0.3%, 層分離成功) Summary: ======== Phase 17-1 implements Small-Mid allocator as TLS frontend cache with Tiny backend delegation. Result: Clean layer separation achieved with minimal overhead (±0.3%), but no performance gain. Conclusion: Frontend-only approach is dead end. Phase 17-2 (dedicated backend) required for 2-3x target. Implementation: =============== 1. Small-Mid TLS frontend (256B/512B/1KB - 3 classes) - TLS freelist (32/24/16 capacity) - Backend delegation to Tiny C5/C6/C7 - Header conversion (0xa0 → 0xb0) 2. Auto-adjust Tiny boundary - When Small-Mid ON: Tiny auto-limits to C0-C5 (0-255B) - When Small-Mid OFF: Tiny default C0-C7 (0-1023B) - Prevents routing conflict 3. Routing order fix - Small-Mid BEFORE Tiny (critical for proper execution) - Fall-through on TLS miss Files Modified: =============== - core/hakmem_smallmid.h/c: TLS freelist + backend delegation - core/hakmem_tiny.c: tiny_get_max_size() auto-adjust - core/box/hak_alloc_api.inc.h: Routing order (Small-Mid → Tiny) - CURRENT_TASK.md: Phase 17-1 results + Phase 17-2 plan A/B Benchmark Results: ====================== | Size | Config A (OFF) | Config B (ON) | Delta | % Change | |--------|----------------|---------------|----------|----------| | 256B | 5.87M ops/s | 6.06M ops/s | +191K | +3.3% | | 512B | 6.02M ops/s | 5.91M ops/s | -112K | -1.9% | | 1024B | 5.58M ops/s | 5.54M ops/s | -35K | -0.6% | | Overall| 5.82M ops/s | 5.84M ops/s | +20K | +0.3% | Analysis: ========= ✅ SUCCESS: Clean layer separation (Small-Mid ↔ Tiny coexist) ✅ SUCCESS: Minimal overhead (±0.3% = measurement noise) ❌ FAIL: No performance gain (target was 2-4x) Root Cause: ----------- - Delegation overhead = TLS savings (net gain ≈ 0 instructions) - Small-Mid TLS alloc: ~3-5 instructions - Tiny backend delegation: ~3-5 instructions - Header conversion: ~2 instructions - No batching: 1:1 delegation to Tiny (no refill amortization) Lessons Learned: ================ - Frontend-only approach ineffective (backend calls not reduced) - Dedicated backend essential for meaningful improvement - Clean separation achieved = solid foundation for Phase 17-2 Next Steps (Phase 17-2): ======================== - Dedicated Small-Mid SuperSlab backend (separate from Tiny) - TLS batch refill (8-16 blocks per refill) - Optimized 0xb0 header fast path (no delegation) - Target: 12-15M ops/s (2.0-2.6x improvement) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 02:37:24 +09:00
* @param ptr User pointer (must have Small-Mid header 0xb0)
* @param size Allocation size (unused, Tiny reads header)
Phase 17-1 Revision: Small-Mid Front Box Only (ChatGPT Strategy) STRATEGY CHANGE (ChatGPT reviewed): - Phase 17-1: Build FRONT BOX ONLY (no dedicated SuperSlab backend) - Backend: Reuse existing Tiny SuperSlab/SharedPool APIs - Goal: Measure performance impact before building dedicated infrastructure - A/B test: Does thin front layer improve 256-1KB performance? RATIONALE (ChatGPT analysis): 1. Tiny/Middle/Large need different properties - same SuperSlab causes conflict 2. Metadata shapes collide - struct bloat → L1 miss increase 3. Learning signals get muddied - size-specific control becomes difficult IMPLEMENTATION: - Reduced size classes: 5 → 3 (256B/512B/1KB only) - Removed dedicated SuperSlab backend stub - Backend: Direct delegation to hak_tiny_alloc/free - TLS freelist: Thin front cache (32/24/16 capacity) - Fast path: TLS hit (pop/push with header 0xb0) - Slow path: Backend alloc via Tiny (no TLS refill) - Free path: TLS push if space, else delegate to Tiny ARCHITECTURE: Tiny: 0-255B (C0-C5, unchanged) Small-Mid: 256-1KB (SM0-SM2, Front Box, backend=Tiny) Mid: 8KB-32KB (existing) FILES CHANGED: - hakmem_smallmid.h: Reduced to 3 classes, updated docs - hakmem_smallmid.c: Removed SuperSlab stub, added backend delegation NEXT STEPS: - Integrate into hak_alloc_api.inc.h routing - A/B benchmark: Small-Mid ON/OFF comparison - If successful (2x improvement), consider Phase 17-2 dedicated backend 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-16 01:51:43 +09:00
*
 * Strategy:
 * - Convert header: Small-Mid (0xb0 | sm_class) → Tiny (0xa0 | tiny_class)
 * - Call Tiny free to handle deallocation
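 *
 * Example: a block from the 512B class (SM1) carries header 0xb1; on free
 * it is rewritten to 0xa6 (Tiny C6, i.e. sm_class + 5) before
 * hak_tiny_free() is called.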
*/
static void smallmid_backend_free(void* ptr, size_t size) {
    (void)size; // Unused - Tiny reads the size from the header

    // Read Small-Mid header
    uint8_t* base = (uint8_t*)ptr - 1;
    uint8_t sm_header = *base;
    uint8_t sm_class = sm_header & 0x0f;

    // Convert Small-Mid class (0/1/2) to Tiny class (5/6/7)
    uint8_t tiny_class = sm_class + 5;

    // Write Tiny header
    *base = 0xa0 | tiny_class;

    SMALLMID_LOG("smallmid_backend_free(%p): SM C%d → Tiny C%d", ptr, sm_class, tiny_class);

    // Call Tiny free
    hak_tiny_free(ptr);
}
// ============================================================================
// Allocation
// ============================================================================
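/**
 * smallmid_alloc - Front-box allocation for the Small-Mid range.
 *
 * Fast path: pop a cached block from the per-class TLS freelist.
 * Slow path: delegate to the Tiny backend (Phase 17-1 does no TLS refill).
 * Returns NULL when disabled, out of range, or when the backend fails,
 * so the caller can fall through to another allocator.
 */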
void* smallmid_alloc(size_t size) {
    // Check if enabled
    if (!smallmid_is_enabled()) {
        return NULL; // Disabled, fall through to Mid or other allocators
    }

    // Initialize if needed
    if (__builtin_expect(!g_smallmid_initialized, 0)) {
        smallmid_init();
    }

    // Validate size range
    if (__builtin_expect(!smallmid_is_in_range(size), 0)) {
        SMALLMID_LOG("smallmid_alloc: size %zu out of range [%d-%d]",
                     size, SMALLMID_MIN_SIZE, SMALLMID_MAX_SIZE);
        return NULL;
    }

    // Get size class
    int class_idx = smallmid_size_to_class(size);
    if (__builtin_expect(class_idx < 0, 0)) {
        SMALLMID_LOG("smallmid_alloc: invalid class for size %zu", size);
        return NULL;
    }

#ifdef HAKMEM_SMALLMID_STATS
    __atomic_fetch_add(&g_smallmid_stats.total_allocs, 1, __ATOMIC_RELAXED);
#endif

    // Fast path: Pop from TLS freelist
    void* ptr = smallmid_tls_pop(class_idx);
    if (ptr) {
        SMALLMID_LOG("smallmid_alloc(%zu) = %p (TLS hit, class=%d)", size, ptr, class_idx);
        return (uint8_t*)ptr + 1; // Return user pointer (skip header)
    }

    // TLS miss: Allocate from Tiny backend
    // Phase 17-1: Reuse Tiny infrastructure (C5/C6/C7) instead of dedicated SuperSlab
    ptr = smallmid_backend_alloc(size);
    if (!ptr) {
        SMALLMID_LOG("smallmid_alloc(%zu) = NULL (backend failed)", size);
        return NULL;
    }
SMALLMID_LOG("smallmid_alloc(%zu) = %p (backend alloc, class=%d)", size, ptr, class_idx);
return ptr;
}
// ============================================================================
// Free
// ============================================================================
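/**
 * smallmid_free - Front-box free path.
 *
 * Blocks carrying the Small-Mid magic (0xb0) are pushed back onto the TLS
 * freelist when there is room; everything else (or a full TLS cache) is
 * delegated to the Tiny backend via smallmid_backend_free().
 */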
void smallmid_free(void* ptr) {
    if (!ptr) return;

    // Check if enabled
    if (!smallmid_is_enabled()) {
        return; // Disabled, should not be called
    }

#ifdef HAKMEM_SMALLMID_STATS
    __atomic_fetch_add(&g_smallmid_stats.total_frees, 1, __ATOMIC_RELAXED);
#endif

    // Phase 17-1: Read header to identify if this is a Small-Mid TLS allocation
    // or a backend (Tiny) allocation
    uint8_t* base = (uint8_t*)ptr - 1;
    uint8_t header = *base;

    // Small-Mid TLS allocations have magic 0xb0
    // Tiny allocations have magic 0xa0
    uint8_t magic = header & 0xf0;
    int class_idx = header & 0x0f;

    if (magic == 0xb0 && class_idx >= 0 && class_idx < SMALLMID_NUM_CLASSES) {
        // This is a Small-Mid TLS allocation, push to TLS freelist
        if (smallmid_tls_push(class_idx, base)) {
            SMALLMID_LOG("smallmid_free(%p): pushed to TLS (class=%d)", ptr, class_idx);
            return;
        }

        // TLS full: Delegate to Tiny backend
        SMALLMID_LOG("smallmid_free(%p): TLS full, delegating to backend", ptr);
        // Fall through to backend free
    }

    // Not cached in TLS (non-0xb0 header, or the TLS cache is full):
    // delegate to the Tiny backend. smallmid_backend_free() rewrites the
    // header to Tiny's 0xa0 form before calling hak_tiny_free().
    size_t size = 0; // Tiny free doesn't need the size; it reads the header
    smallmid_backend_free(ptr, size);
}
// ============================================================================
// Thread Cleanup
// ============================================================================
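/**
 * smallmid_thread_exit - Drain every per-class TLS freelist back to the
 * Tiny backend so cached blocks are not stranded when a thread exits.
 */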
void smallmid_thread_exit(void) {
    if (!smallmid_is_enabled()) return;

    SMALLMID_LOG("smallmid_thread_exit: cleaning up TLS state");

    // Phase 17-1: Return TLS blocks to Tiny backend
    for (int i = 0; i < SMALLMID_NUM_CLASSES; i++) {
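        // Freelist layout (as read below): the 1-byte class header sits at
        // head[0]; the next-block pointer lives in the block body right
        // after it, so it must be read before the block is handed back.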
        void* head = g_smallmid_tls_head[i];
        while (head) {
            void* next = *(void**)((uint8_t*)head + 1);
            void* user_ptr = (uint8_t*)head + 1;
            smallmid_backend_free(user_ptr, 0);
            head = next;
        }
        g_smallmid_tls_head[i] = NULL;
        g_smallmid_tls_count[i] = 0;
    }
}