# Current Task 2025-11-08
## 🚀 Phase 7: Region-ID Direct Lookup - Beat System malloc
### Mission
**Make HAKMEM faster than System malloc/mimalloc**
- **Current**: 1.2M ops/s (bench_random_mixed)
- **Target**: 40-80M ops/s (70-140% of System malloc)
- **Strategy**: Remove the SuperSlab lookup → ultra-fast free (3-5 instructions)
---
## 📊 Current Status Analysis (Complete)
### Performance Gap Identified
- **System malloc**: 56M ops/s
- **HAKMEM**: 1.2M ops/s
- **Gap**: **47x slower** 💀
### Root Cause Identified (ChatGPT Pro Ultrathink)
**Two SuperSlab lookups in the free path consume 52.63% of CPU time**
```c
// Current problem
void free(void* ptr) {
    SuperSlab* ss = hak_super_lookup(ptr);   // ← Lookup #1 (100+ cycles)
    int class_idx = ss->size_class;
    // ... 330 lines of validation ...
    hak_tiny_free_superslab(ptr, ss);        // ← Lookup #2 (redundant!)
}
```
**Comparison:**
| Path | Instructions | Atomics | Lookups | Cycles |
|------|--------------|---------|---------|--------|
| **Allocation** | 3-4 | 0 | 0 | ~10 |
| **Free (current)** | 330+ | 5-7 | 2 | ~500+ |
| **System tcache** | 3-4 | 0 | 0 | ~10 |
---
## ✅ Design Complete (Task Agent Opus Ultrathink)
### Recommended Approach: Smart Headers (Hybrid 1B)
**Brilliant insight:**
> Slab[0] of the SuperSlab contains **960 bytes of wasted padding**
> → reusing it for headers gives **zero memory overhead!**
**Implementation:**
```c
// Ultra-Fast Free (3-5 instructions, 5-10 cycles)
void hak_free_fast(void* ptr) {
    // 1. Get class from inline header (1 instruction)
    uint8_t cls = *((uint8_t*)ptr - 1);
    // 2. Push to TLS freelist (2-3 instructions)
    *(void**)ptr = g_tls_sll_head[cls];
    g_tls_sll_head[cls] = ptr;
    g_tls_sll_count[cls]++;
    // Done! No lookup, no validation, no atomic
}
```
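For reference, a minimal sketch of the TLS state this fast path assumes. The array names match the snippet above; the element types and the class count are illustrative assumptions (the real definitions live in the tiny allocator headers):

```c
#include <stdint.h>

#define TINY_NUM_CLASSES 8   /* illustrative; the real count is defined in core/hakmem_tiny.h */

/* Per-thread singly linked freelists, one per tiny size class.
 * A free block stores the next pointer in its first bytes, which is
 * what the *(void**)ptr store in hak_free_fast() above relies on. */
static __thread void*    g_tls_sll_head[TINY_NUM_CLASSES];
static __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
```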
**Performance Projection:**
- **1.2M → 40-60M ops/s** (30-50x improvement) 🚀
- **vs System malloc**: 70-110% (on par to winning!) 🏆
- **vs mimalloc**: comparable
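As a rough sanity check on that projection, assuming a ~3 GHz core (the clock speed is an assumption, not a measurement from this work):

```math
\frac{3 \times 10^{9}\ \text{cycles/s}}{50\text{-}75\ \text{cycles/op}} \approx 40\text{-}60\ \text{M ops/s}
```

So the 40-60M ops/s range corresponds to roughly 50-75 cycles per benchmark operation, which leaves room for the allocation side and benchmark overhead on top of the 5-10 cycle free itself.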
**Memory Overhead:**
- Slab[0]: 0% (reuses the wasted padding)
- Other slabs: ~1.5% (1 byte/block)
- Average: <2% (acceptable)
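As a rough illustration of the per-block figure (the 64 B block size below is only an example, not taken from the actual size-class table):

```math
\frac{1\ \text{B header}}{64\ \text{B block}} \approx 1.6\%, \qquad \frac{1\ \text{B}}{1023\ \text{B}} \approx 0.1\%
```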
**Design documents:**
- [`REGION_ID_DESIGN.md`](REGION_ID_DESIGN.md) - Full design (Task Agent Opus)
- [`CLAUDE.md#phase-7`](CLAUDE.md#phase-7-region-id-direct-lookup---ultra-fast-free-path-2025-11-08-) - Phase 7 overview
---
## 📋 Implementation Plan
### Phase 7-1: Proof of Concept (1-2 days) ⏳
**Goal**: Verify that the header scheme works and measure its impact
**Tasks:**
1. **Header write implementation** (allocation path)
- `core/tiny_alloc_fast.inc.h` - add the header write
- `core/tiny_region_id.h` - header API definitions (new file)
```c
// On allocation, write class_idx into the 1-byte header just below the
// user pointer (ptr == base + 1), then hand ptr back to the caller.
static inline void* alloc_with_header(int class_idx, void* ptr) {
*((uint8_t*)ptr - 1) = (uint8_t)class_idx;
return ptr;
}
```
2. **Ultra-fast free implementation** (free path)
- `core/tiny_free_fast_v2.inc.h` - new free path (new file, 10-20 LOC)
- Feature flag: `HAKMEM_TINY_HEADER_CLASSIDX=1`
```c
// Read class_idx from the 1-byte header, then push the block onto the
// per-thread (TLS) singly-linked freelist for that class (no SuperSlab lookup).
void hak_free_fast_v2(void* ptr) {
    uint8_t cls = *((uint8_t*)ptr - 1);   // header byte sits just below the user pointer
    *(void**)ptr = g_tls_sll_head[cls];   // link this block into the TLS freelist
    g_tls_sll_head[cls] = ptr;
    g_tls_sll_count[cls]++;
}
```
3. **Benchmark measurement**
```bash
# Before (current state)
make clean && make bench_random_mixed_hakmem
HAKMEM_TINY_USE_SUPERSLAB=1 ./bench_random_mixed_hakmem 100000 2048 1234567
# → 1.2M ops/s
# After (header scheme)
make clean && make HEADER_CLASSIDX=1 bench_random_mixed_hakmem
HAKMEM_TINY_USE_SUPERSLAB=1 HAKMEM_TINY_HEADER_CLASSIDX=1 \
./bench_random_mixed_hakmem 100000 2048 1234567
# → Target: 40-60M ops/s
```
**Success Criteria:**
- ✅ Throughput > 30M ops/s (25x improvement)
- ✅ No crashes (stability test, 10 runs; see the loop sketch below)
- ✅ Memory overhead < 3%
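A minimal sketch of the 10-run stability check, reusing the environment variables and arguments from the benchmark commands above; the loop itself is an assumption, not an existing script:
```bash
# Run the PoC benchmark 10 times and stop on the first crash
# (assumed loop, same env vars and arguments as the commands above).
for i in $(seq 1 10); do
  HAKMEM_TINY_USE_SUPERSLAB=1 HAKMEM_TINY_HEADER_CLASSIDX=1 \
    ./bench_random_mixed_hakmem 100000 2048 1234567 \
    || { echo "run $i crashed"; exit 1; }
done
echo "10/10 runs completed without crashes"
```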
---
### Phase 7-2: Production Integration (2-3 days)
**Goal**: Feature flag + Fallback + Debug validation
**Tasks:**
1. **Add feature flag**
- `core/hakmem_build_flags.h` - `HAKMEM_TINY_HEADER_CLASSIDX` flag
- Default: OFF (backward compatibility)
- Easy A/B switching via the flag (see the sketch below)
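A minimal sketch of what the compile-time toggle in `core/hakmem_build_flags.h` could look like; only the flag name comes from this plan, the macro wiring is an assumption (presumably the `HEADER_CLASSIDX=1` make variable used in the benchmark commands defines it to 1):
```c
/* Assumed sketch of the build flag; default OFF keeps the legacy
 * SuperSlab-lookup free path for backward compatibility. */
#ifndef HAKMEM_TINY_HEADER_CLASSIDX
#define HAKMEM_TINY_HEADER_CLASSIDX 0   /* 0 = legacy free path, 1 = 1-byte header fast path */
#endif
```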
2. **Fallback path implementation**
- Handle allocations that carry no header
- Legacy mode support
```c
if (has_header(ptr)) {
fast_free_v2(ptr); // header path
} else {
fast_free_v1(ptr); // Legacy (SuperSlab lookup)
}
```
3. **Debug validation**
- Magic byte for UAF detection (see the sketch after the validation snippet below)
- Header corruption check
- Fail-Fast integration
```c
#if !HAKMEM_BUILD_RELEASE
if (cls >= TINY_NUM_CLASSES) {
fprintf(stderr, "[HEADER_CORRUPT] Invalid class_idx=%u\n", cls);
abort();
}
#endif
```
**Success Criteria:**
- ✅ Instant rollback possible via the feature flag
- ✅ Existing code keeps working in legacy mode
- ✅ Validation is thorough in debug mode
---
### Phase 7-3: Testing & Optimization (1-2 days)
**Goal**: Reach production quality
**Tasks:**
1. **Unit tests**
- Header write/read correctness (see the round-trip sketch below)
- Edge cases (slab[0] padding, class boundaries)
- UAF detection
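A minimal sketch of the header round-trip test; it assumes nothing about the real test harness, `malloc` stands in for a slab block, and the class count of 8 is arbitrary:
```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Round-trip check: what the allocation path writes at base must be what the
 * free path reads back at (user_ptr - 1). */
static void test_header_roundtrip(void) {
    for (uint8_t cls = 0; cls < 8; cls++) {      /* arbitrary class count */
        uint8_t* base = malloc(64);              /* stand-in for a slab block */
        base[0] = cls;                           /* alloc path: header at base */
        uint8_t* user = base + 1;                /* pointer handed to the user */
        assert(*(user - 1) == cls);              /* free path: read class back */
        free(base);
    }
}

int main(void) { test_header_roundtrip(); return 0; }
```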
2. **Stress tests**
- Larson 4T (MT stability)
- Fragmentation stress
- Long-running test (1000+ seconds; see the command sketch below)
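A sketch of the long-running Larson stress run; it reuses the Phase 7-1 invocation shown later in this document and only stretches the duration argument past 1000 seconds (treating the first argument as run time in seconds is an assumption):
```bash
# Assumed long-running variant of the Phase 7-1 Larson command
# (first argument stretched from 2 to 1200 seconds, 4 threads).
HAKMEM_TINY_USE_SUPERSLAB=1 HAKMEM_TINY_HEADER_CLASSIDX=1 \
  ./larson_hakmem 1200 8 128 1024 1 12345 4
```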
3. **Full benchmark suite**
```bash
# Comprehensive benchmark
make bench_comprehensive_hakmem
./bench_comprehensive_hakmem | tee comprehensive_hakmem.txt
# vs System malloc
make bench_comprehensive_system
./bench_comprehensive_system | tee comprehensive_system.txt
# Comparison report
diff comprehensive_hakmem.txt comprehensive_system.txt
```
**Success Criteria:**
- ✅ bench_random_mixed: 40-60M ops/s
- ✅ larson_hakmem 4T: 4-6M ops/s
- ✅ vs System: 70-110%
- ✅ vs mimalloc: on par or better
---
## 🎯 Expected Outcomes
### Performance Targets
| Benchmark | Before | After | vs System | Result |
|-----------|--------|-------|-----------|--------|
| bench_random_mixed | 1.2M | **40-60M** | **70-110%** | ✅ On par to winning |
| larson_hakmem 4T | 0.8M | **4-6M** | **120-180%** | ✅ Winning |
| Tiny hot path | TBD | **50-80M** | **90-140%** | ✅ On par to winning |
### Overall Assessment (ChatGPT Pro)
**Areas where we can win:**
- ✅ **Tiny (≤1KB)**: direct header-based return puts us on par with System/mimalloc
- ✅ **MT Larson**: scales via the remote side-table
- ✅ **Mid-Large (8-32KB)**: already winning at +171%
**Hard spots (catching up):**
- ⚠️ **VM-heavy workloads**: mmap/munmap optimization needed
**Overall odds:**
> Front-path direct return + back-end batching + learning will **beat System/mimalloc** 🏆
---
## 📁 Related Documents
- [`REGION_ID_DESIGN.md`](REGION_ID_DESIGN.md) - Full design (Task Agent Opus Ultrathink)
- [`CLAUDE.md#phase-7`](CLAUDE.md#phase-7-region-id-direct-lookup---ultra-fast-free-path-2025-11-08-) - Phase 7 overview
- [`FREE_PATH_ULTRATHINK_ANALYSIS.md`](FREE_PATH_ULTRATHINK_ANALYSIS.md) - Analysis of the current free-path bottleneck
- [`DEBUG_LOGGING_POLICY.md`](DEBUG_LOGGING_POLICY.md) - Debug/Release build policy
---
## 🛠️ Commands to Run (Phase 7-1)
```bash
# Baseline measurement (current state)
make clean && make bench_random_mixed_hakmem
HAKMEM_TINY_USE_SUPERSLAB=1 ./bench_random_mixed_hakmem 100000 2048 1234567
# → Expected: 1.2M ops/s
# After the header scheme is implemented (Phase 7-1)
make clean && make HEADER_CLASSIDX=1 bench_random_mixed_hakmem
HAKMEM_TINY_USE_SUPERSLAB=1 HAKMEM_TINY_HEADER_CLASSIDX=1 \
./bench_random_mixed_hakmem 100000 2048 1234567
# → Target: 40-60M ops/s (30-50x improvement!)
# Larson MT test
HAKMEM_TINY_USE_SUPERSLAB=1 HAKMEM_TINY_HEADER_CLASSIDX=1 \
./larson_hakmem 2 8 128 1024 1 12345 4
# → Target: 4-6M ops/s
# Debug validation mode
HAKMEM_TINY_USE_SUPERSLAB=1 HAKMEM_TINY_HEADER_CLASSIDX=1 \
HAKMEM_TINY_REFILL_FAILFAST=2 \
./bench_random_mixed_hakmem 50000 2048 1234567
# → Header validation + Fail-Fast
```
---
## 📅 Timeline
- **Phase 7-1 (PoC)**: 1-2 days ← **next step!**
- **Phase 7-2 (Integration)**: 2-3 days
- **Phase 7-3 (Testing)**: 1-2 days
- **Total**: **beat System malloc in 4-6 days** 🎉
---
## ✅ Completed (through Phase 6)
### Release Build Optimization (2025-11-08)
- ✅ Moved safety checks to debug mode
- ✅ Added `-DNDEBUG` to the Makefile
- ✅ Disabled the remote-push debug log in Release builds
- **Result**: 1.02M → 1.20M ops/s (+17.3%)
### Remote Queue Race Bug Fix (2025-11-07)
- ✅ Added a remote-queue drain before the freelist pop
- ✅ Stabilized Larson 4T (1073-second run)
### Double-Allocation Bug Fix (2025-11-07)
- ✅ Added a `carved` field to `TinySlabMeta`
- ✅ Fixed the linear-carve cursor
---
**Next action: start Phase 7-1 implementation!** 🚀