Commit Graph

64 Commits

Author SHA1 Message Date
f2ce7256cd Add v4 C7/C6 fast classify and small-segment v4 scaffolding 2025-12-10 19:14:38 +09:00
3261025995 Phase v4-4: pilot C6 v4 route with opt-in gate 2025-12-10 18:18:05 +09:00
cbd33511eb Phase v4-3.1: reuse C7 v4 pages and record prep calls 2025-12-10 17:58:42 +09:00
acc64f2438 Phase ML1: Pool v1 memset 89.73% overhead 軽量化 (+15.34% improvement)
## Summary
- ChatGPT により bench_profile.h の setenv segfault を修正(RTLD_NEXT 経由に切り替え)
- core/box/pool_zero_mode_box.h 新設:ENV キャッシュ経由で ZERO_MODE を統一管理
- core/hakmem_pool.c で zero mode に応じた memset 制御(FULL/header/off)
- A/B テスト結果:ZERO_MODE=header で +15.34% improvement(1M iterations, C6-heavy)

## Files Modified
- core/box/pool_api.inc.h: pool_zero_mode_box.h include
- core/bench_profile.h: glibc setenv → malloc+putenv(segfault 回避)
- core/hakmem_pool.c: zero mode 参照・制御ロジック
- core/box/pool_zero_mode_box.h (新設): enum/getter
- CURRENT_TASK.md: Phase ML1 結果記載

## Test Results
| Iterations | ZERO_MODE=full | ZERO_MODE=header | Improvement |
|-----------|----------------|-----------------|------------|
| 10K       | 3.06 M ops/s   | 3.17 M ops/s    | +3.65%     |
| 1M        | 23.71 M ops/s  | 27.34 M ops/s   | **+15.34%** |

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2025-12-10 09:08:18 +09:00
a905e0ffdd Guard madvise ENOMEM and stabilize pool/tiny front v3 2025-12-09 21:50:15 +09:00
8f18963ad5 Phase 36-37: TinyHotHeap v2 HotBox redesign and C7 current_page policy fixes
- Redefine TinyHotHeap v2 as per-thread Hot Box with clear boundaries
- Add comprehensive OS statistics tracking for SS allocations
- Implement route-based free handling for TinyHeap v2
- Add C6/C7 debugging and statistics improvements
- Update documentation with implementation guidelines and analysis
- Add new box headers for stats, routing, and front-end management
2025-12-08 21:30:21 +09:00
a6991ec9e4 Add TinyHeap class mask and extend routing 2025-12-07 22:49:28 +09:00
fda6cd2e67 Boxify superslab registry, add bench profile, and document C7 hotpath experiments 2025-12-07 03:12:27 +09:00
18faa6a1c4 Add OBSERVE stats and auto tiny policy profile 2025-12-06 01:44:05 +09:00
03538055ae Restore C7 Warm/TLS carve for release and add policy scaffolding 2025-12-06 01:34:04 +09:00
d17ec46628 Fix C7 warm/TLS Release path and unify debug instrumentation 2025-12-05 23:41:01 +09:00
e96e9a4bf9 Feat: Add TLS carve experiment for warm C7 2025-12-05 20:50:24 +09:00
4c986fa9d1 Feat: Add experimental TLS Bind Box path in Unified Cache
- Added experimental path in unified_cache_refill to test ss_tls_bind_one for C7 class.
- Guarded by HAKMEM_WARM_TLS_BIND_C7 env var and debug build.
- Updated Page Box comments to clarify future TLS Bind Box integration.
2025-12-05 20:05:11 +09:00
45b2ccbe45 Refactor: Extract TLS Bind Box for unified slab binding
- Created core/box/ss_tls_bind_box.h containing ss_tls_bind_one().
- Refactored superslab_refill() to use the new box.
- Updated signatures to avoid circular dependencies (tiny_self_u32).
- Added future integration points for Warm Pool and Page Box.
2025-12-05 19:57:30 +09:00
093f362231 Add Page Box layer for C7 class optimization
- Implement tiny_page_box.c/h: per-thread page cache between UC and Shared Pool
- Integrate Page Box into Unified Cache refill path
- Remove legacy SuperSlab implementation (merged into smallmid)
- Add HAKMEM_TINY_PAGE_BOX_CLASSES env var for selective class enabling
- Update bench_random_mixed.c with Page Box statistics

Current status: Implementation safe, no regressions.
Page Box ON/OFF shows minimal difference - pool strategy needs tuning.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 15:31:44 +09:00
141b121e9c Phase 1: Warm Pool Capacity Increase (16 → 12 with matching threshold)
Key Changes:
- Reduced static capacity from 16 to 12 SuperSlabs per class
- Fixed prefill threshold from hardcoded 4 to match capacity (12)
- Updated environment variable clamping to [1,12]
- This allows warm pool to actually utilize its full capacity

Performance:
- Baseline (post-unified-cache-opt): 4.76M ops/s
- After Phase 1: 4.84M ops/s
- Improvement: +1.6% (expected +15-20%)

Note: Actual improvement lower than expected because the warm pool
bottleneck is only part of the overall allocation path. Unified cache
optimization (+14.9%) already addressed much of the registry scan overhead.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 12:16:39 +09:00
a04e3ba0e9 Optimize Unified Cache: Batch Freelist Validation + TLS Alignment
Two complementary optimizations to improve unified cache hot path performance:

1. Batch Freelist Validation (core/front/tiny_unified_cache.c)
   - Remove duplicate per-block freelist validation in release builds
   - Consolidated validation logic into unified_refill_validate_base() function
   - Previously: hak_super_lookup(p) called on EVERY freelist block (~128 blocks)
   - Now: Single validation function at batch start
   - Impact (RELEASE): Eliminates 50-100 cycles per block × 128 = 1,280-2,560 cycles/refill
   - Impact (DEBUG): Full validation still available via unified_refill_validate_base()
   - Safety: Block integrity protected by header magic (0xA0 | class_idx)

2. TLS Unified Cache Alignment (core/front/tiny_unified_cache.h)
   - Add __attribute__((aligned(64))) to TinyUnifiedCache struct
   - Aligns each per-class cache to 64-byte cache line boundary
   - Eliminates false sharing across classes (8 classes × 64B = 512B per thread)
   - Prevents cache line thrashing on concurrent class access
   - Fields stay same size (16B data + 48B padding), no binary compatibility issues
   - Requires clean rebuild due to struct size change (16B → 64B)

Performance Expectations (projected, pending clean build measurement):
- random_mixed (256B working set): +15-20% throughput gain
- tiny_hot: No regression (already cache-friendly)
- tiny_malloc: +3-5% throughput gain

Benchmark Results (after clean rebuild):
- Target: 4.3M → 5.0M ops/s (+17%)
- tiny_hot: Maintain 150M+ ops/s (no regression)

Code Quality:
-  Proper separation of concerns (validation logic centralized)
-  Clean compile-time gating with #if HAKMEM_BUILD_RELEASE
-  Memory-safe (all access patterns unchanged)
-  Maintainable (single source of truth for validation)

Testing Required:
- [ ] Clean rebuild (make clean && make bench_random_mixed_hakmem)
- [ ] Performance measurement with consistent parameters
- [ ] Debug build validation test (ensure corruption detection still works)
- [ ] Multi-threaded correctness (TLS alignment safe for MT)

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: ChatGPT (optimization implementation)
2025-12-05 11:32:07 +09:00
1cdc932fca Performance Optimization: Release Build Hygiene (Priority 1-4)
Implement 4 targeted optimizations for release builds:

1. **Remove freelist validation from release builds** (Priority 1)
   - Guard registry lookup on every freelist node with #if !HAKMEM_BUILD_RELEASE
   - Expected gain: +15-20% throughput (eliminates 30-40% of refill cycles)
   - File: core/front/tiny_unified_cache.c:501-529

2. **Optimize PageFault telemetry** (Priority 2)
   - Already properly gated with HAKMEM_DEBUG_COUNTERS
   - No change needed (verified correct implementation)

3. **Make warm pool stats compile-time gated** (Priority 3)
   - Guard all stats recording with #if HAKMEM_DEBUG_COUNTERS
   - File: core/box/warm_pool_stats_box.h:25-51

4. **Reduce warm pool prefill lock overhead** (Priority 4)
   - Reduced WARM_POOL_PREFILL_BUDGET from 3 to 2 SuperSlabs
   - Balances prefill lock overhead with pool depletion frequency
   - File: core/box/warm_pool_prefill_box.h:28

5. **Disable debug counters by default in release builds** (Supporting)
   - Modified HAKMEM_DEBUG_COUNTERS to auto-detect based on NDEBUG
   - File: core/hakmem_build_flags.h:33-40

Benchmark Results (1M allocations, ws=256):
- Before: 4.02-4.2M ops/s (with diagnostic overhead)
- After: 4.04-4.2M ops/s (release build optimized)
- Warm pool hit rate: Maintained at 55.6%
- No performance regressions detected

Expected Impact After Compilation:
- With -DHAKMEM_BUILD_RELEASE=1 and -DNDEBUG:
  - Freelist validation: compiled out completely
  - Debug counters: compiled out completely
  - Telemetry: compiled out completely
  - Stats recording: compiled out (single (void) statement remains)
  - Expected +15-25% improvement in release builds

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 06:16:12 +09:00
b6010dd253 Modularize Warm Pool with 3 Box Refactorings - Phase B-3a Complete
Objective: Clean up warm pool implementation by extracting inline boxes
for statistics, carving, and prefill logic. Achieved full modularity
with zero performance regression using aggressive inline optimization.

Changes:

1. **Legacy Code Removal** (Phase 0)
   - Removed unused static __thread prefill_attempt_count variable
   - Cleaned up duplicate comments
   - Simplified carve failure handling

2. **Warm Pool Statistics Box** (Phase 1)
   - New file: core/box/warm_pool_stats_box.h
   - Inline APIs: warm_pool_record_hit/miss/prefilled()
   - All statistics recording externalized
   - Integrated into unified_cache.c
   - Performance: 0 cost (inlined to direct memory write)

3. **Slab Carving Box** (Phase 2)
   - New file: core/box/slab_carve_box.h
   - Inline API: slab_carve_from_ss()
   - Extracted unified_cache_carve_from_ss() function
   - Now reusable by other refill paths (P0, etc.)
   - Performance: 100% inlined, O(slabs) scan unchanged

4. **Warm Pool Prefill Box** (Phase 3)
   - New file: core/box/warm_pool_prefill_box.h
   - Inline API: warm_pool_do_prefill()
   - Extracted prefill loop with configurable budget
   - WARM_POOL_PREFILL_BUDGET = 3 (tunable)
   - Cold path optimization (only on empty pool)
   - Performance: Cold path cost (non-critical)

Architecture:
- core/front/tiny_unified_cache.c now 40+ lines shorter
- Logic distributed to 3 well-defined boxes
- Each box has single responsibility (SRP)
- Inline compilation preserves hot path performance
- LTO (-flto) enables cross-file inlining

Performance Results:
- 1M allocations: 4.099M ops/s (maintained)
- 5M allocations: 4.046M ops/s (maintained)
- 55.6% warm pool hit rate (unchanged)
- Zero regression on throughput
- All three boxes fully inlined by compiler

Code Quality Improvements:
 Removed legacy unused variables
 Separated concerns into specialized boxes
 Improved readability and maintainability
 Preserved performance via aggressive inline
 Enabled future reuse (carve box for P0)

Testing:
 Compilation: No errors
 Functionality: 1M and 5M allocation tests pass
 Performance: Baseline maintained
 Statistics: Output identical to pre-refactor

Next Phase: Consider similar modularization for:
- Registry scanning (registry_scan_box.h)
- TLS management (tls_management_box.h)
- Cache operations (unified_cache_policy_box.h)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 23:39:02 +09:00
5685c2f4c9 Implement Warm Pool Secondary Prefill Optimization (Phase B-2c Complete)
Problem: Warm pool had 0% hit rate (only 1 hit per 3976 misses) despite being
implemented, causing all cache misses to go through expensive superslab_refill
registry scans.

Root Cause Analysis:
- Warm pool was initialized once and pushed a single slab after each refill
- When that slab was exhausted, it was discarded (not pushed back)
- Next refill would push another single slab, which was immediately exhausted
- Pool would oscillate between 0 and 1 items, yielding 0% hit rate

Solution: Secondary Prefill on Cache Miss
When warm pool becomes empty, we now do multiple superslab_refills and prefill
the pool with 3 additional HOT superlslabs before attempting to carve. This
builds a working set of slabs that can sustain allocation pressure.

Implementation Details:
- Modified unified_cache_refill() cold path to detect empty pool
- Added prefill loop: when pool count == 0, load 3 extra superlslabs
- Store extra slabs in warm pool, keep 1 in TLS for immediate carving
- Track prefill events in g_warm_pool_stats[].prefilled counter

Results (1M Random Mixed 256B allocations):
- Before: C7 hits=1, misses=3976, hit_rate=0.0%
- After:  C7 hits=3929, misses=3143, hit_rate=55.6%
- Throughput: 4.055M ops/s (maintained vs 4.07M baseline)
- Stability: Consistent 55.6% hit rate at 5M allocations (4.102M ops/s)

Performance Impact:
- No regression: throughput remained stable at ~4.1M ops/s
- Registry scan avoided in 55.6% of cache misses (significant savings)
- Warm pool now functioning as intended with strong locality

Configuration:
- TINY_WARM_POOL_MAX_PER_CLASS increased from 4 to 16 to support prefill
- Prefill budget hardcoded to 3 (tunable via env var if needed later)
- All statistics always compiled, ENV-gated printing via HAKMEM_WARM_POOL_STATS=1

Next Steps:
- Monitor for further optimization opportunities (prefill budget tuning)
- Consider adaptive prefill budget based on class-specific hit rates
- Validate at larger allocation counts (10M+ pending registry size fix)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 23:31:54 +09:00
c1c45106da Two-Speed HOT PATH: Guard hak_super_lookup calls with HAKMEM_BUILD_RELEASE
Phase E2 introduced registry lookup to the hot path, causing 84-88% regression
(70M → 9M ops/sec). This commit restores performance by guarding expensive
hak_super_lookup calls (50-100 cycles each) with conditional compilation.

Key changes:
- tls_sll_box.h push: Full validation in Debug, ss_fast_lookup (O(1)) in Release
- tls_sll_box.h pop: Registry validation in Debug, trust list structure in Release
- tiny_free_fast_v2.inc.h: Header/meta cross-check Debug-only
- malloc_tiny_fast.h: SuperSlab registration check Debug-only

Performance improvement:
- Release build: 2.9M → 87-88M ops/sec (30x improvement)
- Restored to historical UNIFIED-HEADER peak (70-80M range)

Release builds trust:
- Header magic (0xA0) as sufficient allocation origin validation
- TLS SLL linked list structure integrity
- Header-based class_idx classification

Debug builds maintain full validation with expensive registry lookups.

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 18:53:04 +09:00
860991ee50 Performance Measurement Framework: Unified Cache, TLS SLL, Shared Pool Analysis
## Summary

Implemented production-grade measurement infrastructure to quantify top 3 bottlenecks:
- Unified cache hit/miss rates + refill cost
- TLS SLL usage patterns
- Shared pool lock contention distribution

## Changes

### 1. Unified Cache Metrics (tiny_unified_cache.h/c)
- Added atomic counters:
  - g_unified_cache_hits_global: successful cache pops
  - g_unified_cache_misses_global: refill triggers
  - g_unified_cache_refill_cycles_global: refill cost in CPU cycles (rdtsc)
- Instrumented `unified_cache_pop_or_refill()` to count hits
- Instrumented `unified_cache_refill()` with cycle measurement
- ENV-gated: HAKMEM_MEASURE_UNIFIED_CACHE=1 (default: off)
- Added unified_cache_print_measurements() output function

### 2. TLS SLL Metrics (tls_sll_box.h)
- Added atomic counters:
  - g_tls_sll_push_count_global: total pushes
  - g_tls_sll_pop_count_global: successful pops
  - g_tls_sll_pop_empty_count_global: empty list conditions
- Instrumented push/pop paths
- Added tls_sll_print_measurements() output function

### 3. Shared Pool Contention (hakmem_shared_pool_acquire.c)
- Added atomic counters:
  - g_sp_stage2_lock_acquired_global: Stage 2 locks
  - g_sp_stage3_lock_acquired_global: Stage 3 allocations
  - g_sp_alloc_lock_contention_global: total lock acquisitions
- Instrumented all pthread_mutex_lock calls in hot paths
- Added shared_pool_print_measurements() output function

### 4. Benchmark Integration (bench_random_mixed.c)
- Called all 3 print functions after benchmark loop
- Functions active only when HAKMEM_MEASURE_UNIFIED_CACHE=1 set

## Design Principles

- **Zero overhead when disabled**: Inline checks with __builtin_expect hints
- **Atomic relaxed memory order**: Minimal synchronization overhead
- **ENV-gated**: Single flag controls all measurements
- **Production-safe**: Compiles in release builds, no functional changes

## Usage

```bash
HAKMEM_MEASURE_UNIFIED_CACHE=1 ./bench_allocators_hakmem bench_random_mixed_hakmem 1000000 256 42
```

Output (when enabled):
```
========================================
Unified Cache Statistics
========================================
Hits:        1234567
Misses:      56789
Hit Rate:    95.6%
Avg Refill Cycles: 1234

========================================
TLS SLL Statistics
========================================
Total Pushes:     1234567
Total Pops:       345678
Pop Empty Count:  12345
Hit Rate:         98.8%

========================================
Shared Pool Contention Statistics
========================================
Stage 2 Locks:    123456 (33%)
Stage 3 Locks:    234567 (67%)
Total Contention: 357 locks per 1M ops
```

## Next Steps

1. **Enable measurements** and run benchmarks to gather data
2. **Analyze miss rates**: Which bottleneck dominates?
3. **Profile hottest stage**: Focus optimization on top contributor
4. Possible targets:
   - Increase unified cache capacity if miss rate >5%
   - Profile if TLS SLL is unused (potential legacy code removal)
   - Analyze if Stage 2 lock can be replaced with CAS

## Makefile Updates

Added core/box/tiny_route_box.o to:
- OBJS_BASE (test build)
- SHARED_OBJS (shared library)
- BENCH_HAKMEM_OBJS_BASE (benchmark)
- TINY_BENCH_OBJS_BASE (tiny benchmark)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 18:26:39 +09:00
a0a80f5403 Remove legacy redundant code after Gatekeeper Box consolidation
Summary of Deletions:
- Remove core/box/unified_batch_box.c (26 lines)
  * Legacy batch allocation logic superseded by Alloc Gatekeeper Box
  * unified_cache now handles allocation aggregation

- Remove core/box/unified_batch_box.h (29 lines)
  * Header declarations for deprecated unified_batch_box module

- Remove core/tiny_free_fast.inc.h (329 lines)
  * Legacy fast-path free implementation
  * Functionality consolidated into:
    - tiny_free_gate_box.h (Fail-Fast layer + diagnostics)
    - malloc_tiny_fast.h (Free path integration)
    - unified_cache (return to freelist)
  * Code path now routes through Gatekeeper Box for consistency

Build System Updates:
- Update Makefile
  * Remove unified_batch_box.o from OBJS_BASE
  * Remove unified_batch_box_shared.o from SHARED_OBJS
  * Remove unified_batch_box.o from BENCH_HAKMEM_OBJS_BASE

- Update core/hakmem_tiny_phase6_wrappers_box.inc
  * Remove unified_batch_box references
  * Simplify allocation wrapper to use new Gatekeeper architecture

Impact:
- Removes ~385 lines of redundant/superseded code
- Consolidates allocation logic through unified Gatekeeper entry points
- All functionality preserved via new Box-based architecture
- Simplifies codebase and reduces maintenance burden

Testing:
- Build verification: make clean && make RELEASE=0/1
- Smoke tests: All pass (simple_alloc, loop 10M, pool_tls)
- No functional regressions

Rationale:
After implementing Alloc/Free Gatekeeper Boxes with Fail-Fast layers
and Unified Cache type safety, the legacy separate implementations
became redundant. This commit completes the architectural consolidation
and simplifies the allocator codebase.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 12:55:53 +09:00
0c0d9c8c0b Unify Unified Cache API to BASE-only pointer type with Phantom typing
Core Changes:
- Modified: core/front/tiny_unified_cache.h
  * API signatures changed to use hak_base_ptr_t (Phantom type)
  * unified_cache_pop() returns hak_base_ptr_t (was void*)
  * unified_cache_push() accepts hak_base_ptr_t base (was void*)
  * unified_cache_pop_or_refill() returns hak_base_ptr_t (was void*)
  * Added #include "../box/ptr_type_box.h" for Phantom types

- Modified: core/front/tiny_unified_cache.c
  * unified_cache_refill() return type changed to hak_base_ptr_t
  * Uses HAK_BASE_FROM_RAW() for wrapping return values
  * Uses HAK_BASE_TO_RAW() for unwrapping parameters
  * Maintains internal void* storage in slots array

- Modified: core/box/tiny_front_cold_box.h
  * Uses hak_base_ptr_t from unified_cache_refill()
  * Uses hak_base_is_null() for NULL checks
  * Maintains tiny_user_offset() for BASE→USER conversion
  * Cold path refill integration updated to Phantom types

- Modified: core/front/malloc_tiny_fast.h
  * Free path wraps BASE pointer with HAK_BASE_FROM_RAW()
  * When pushing to Unified Cache via unified_cache_push()

Design Rationale:
- Unified Cache API now exclusively handles BASE pointers (no USER mixing)
- Phantom types enforce type distinction at compile time (debug mode)
- Zero runtime overhead in Release mode (macros expand to identity)
- Hot paths (tiny_hot_alloc_fast, tiny_hot_free_fast) remain unchanged
- Layout consistency maintained via tiny_user_offset() Box

Validation:
- All 25 Phantom type usage sites verified (25/25 correct)
- HAK_BASE_FROM_RAW(): 5/5 correct wrappings
- HAK_BASE_TO_RAW(): 1/1 correct unwrapping
- hak_base_is_null(): 4/4 correct NULL checks
- Compilation: RELEASE=0 and RELEASE=1 both successful
- Smoke tests: 3/3 passed (simple_alloc, loop 10M, pool_tls)

Type Safety Benefits:
- Prevents USER/BASE pointer confusion at API boundaries
- Compile-time checking in debug builds via Phantom struct
- Zero cost abstraction in release builds
- Clear intent: Unified Cache exclusively stores BASE pointers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 12:20:21 +09:00
19ce4c1ac4 Add SuperSlab refcount pinning and critical failsafe guards
Major breakthrough: sh8bench now completes without SIGSEGV!
Added defensive refcounting and failsafe mechanisms to prevent
use-after-free and corruption propagation.

Changes:
1. SuperSlab Refcount Pinning (core/box/tls_sll_box.h)
   - tls_sll_push_impl: increment refcount before adding to list
   - tls_sll_pop_impl: decrement refcount when removing from list
   - Prevents SuperSlab from being freed while TLS SLL holds pointers

2. SuperSlab Release Guards (core/superslab_allocate.c, shared_pool_release.c)
   - Check refcount > 0 before freeing SuperSlab
   - If refcount > 0, defer release instead of freeing
   - Prevents use-after-free when TLS/remote/freelist hold stale pointers

3. TLS SLL Next Pointer Validation (core/box/tls_sll_box.h)
   - Detect invalid next pointer during traversal
   - Log [TLS_SLL_NEXT_INVALID] when detected
   - Drop list to prevent corruption propagation

4. Unified Cache Freelist Validation (core/front/tiny_unified_cache.c)
   - Validate freelist head before use
   - Log [UNIFIED_FREELIST_INVALID] for corrupted lists
   - Defensive drop to prevent bad allocations

5. Early Refcount Decrement Fix (core/tiny_free_fast.inc.h)
   - Removed ss_active_dec_one from fast path
   - Prevents premature refcount depletion
   - Defers decrement to proper cleanup path

Test Results:
 sh8bench completes successfully (exit code 0)
 No SIGSEGV or ABORT signals
 Short runs (5s) crash-free
⚠️ Multiple [TLS_SLL_NEXT_INVALID] / [UNIFIED_FREELIST_INVALID] logged
⚠️ Invalid pointers still present (stale references exist)

Status Analysis:
- Stability: ACHIEVED (no crashes)
- Root Cause: NOT FULLY SOLVED (invalid pointers remain)
- Approach: Defensive + refcount guards working well

Remaining Issues:
 Why does SuperSlab get unregistered while TLS SLL holds pointers?
 SuperSlab lifecycle: remote_queue / adopt / LRU interactions?
 Stale pointers indicate improper SuperSlab lifetime management

Performance Impact:
- Refcount operations: +1-3 cycles per push/pop (minor)
- Validation checks: +2-5 cycles (minor)
- Overall: < 5% overhead estimated

Next Investigation:
- Trace SuperSlab lifecycle (allocation → registration → unregister → free)
- Check remote_queue handling
- Verify adopt/LRU mechanisms
- Correlate stale pointer logs with SuperSlab unregister events

Log Volume Warning:
- May produce many diagnostic logs on long runs
- Consider ENV gating for production

Technical Notes:
- Refcount is per-SuperSlab, not global
- Guards prevent symptom propagation, not root cause
- Root cause is in SuperSlab lifecycle management

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 21:56:52 +09:00
2dc9d5d596 Fix include order in hakmem.c - move hak_kpi_util.inc.h before hak_core_init.inc.h
Problem: hak_core_init.inc.h references KPI measurement variables
(g_latency_histogram, g_latency_samples, g_baseline_soft_pf, etc.)
but hakmem.c was including hak_kpi_util.inc.h AFTER hak_core_init.inc.h,
causing undefined reference errors.

Solution: Reorder includes so hak_kpi_util.inc.h (definition) comes
before hak_core_init.inc.h (usage).

Build result:  Success (libhakmem.so 547KB, 0 errors)

Minor changes:
- Added extern __thread declarations for TLS SLL debug variables
- Added signal handler logging for debug_dump_last_push
- Improved hakmem_tiny.c structure for Phase 2 preparation

🤖 Generated with Claude Code + Task Agent

Co-Authored-By: Gemini <gemini@example.com>
Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 13:28:44 +09:00
b5be708b6a Fix potential freelist corruption in unified_cache_refill (Class 0) and improve TLS SLL logging/safety 2025-12-03 12:43:02 +09:00
c2716f5c01 Implement Phase 2: Headerless Allocator Support (Partial)
- Feature: Added HAKMEM_TINY_HEADERLESS toggle (A/B testing)
- Feature: Implemented Headerless layout logic (Offset=0)
- Refactor: Centralized layout definitions in tiny_layout_box.h
- Refactor: Abstracted pointer arithmetic in free path via ptr_conversion_box.h
- Verification: sh8bench passes in Headerless mode (No TLS_SLL_HDR_RESET)
- Known Issue: Regression in Phase 1 mode due to blind pointer conversion logic
2025-12-03 12:11:27 +09:00
bd5e97f38a Save current state before investigating TLS_SLL_HDR_RESET 2025-12-03 10:34:39 +09:00
6154e7656c 根治修正: unified_cache_refill SEGVAULT + コンパイラ最適化対策
問題:
  - リリース版sh8benchでunified_cache_refill+0x46fでSEGVAULT
  - コンパイラ最適化により、ヘッダー書き込みとtiny_next_read()の
    順序が入れ替わり、破損したポインタをout[]に格納

根本原因:
  - ヘッダー書き込みがtiny_next_read()の後にあった
  - volatile barrierがなく、コンパイラが自由に順序を変更
  - ASan版では最適化が制限されるため問題が隠蔽されていた

修正内容(P1-P3):

P1: unified_cache_refill SEGVAULT修正 (core/front/tiny_unified_cache.c:341-350)
  - ヘッダー書き込みをtiny_next_read()の前に移動
  - __atomic_thread_fence(__ATOMIC_RELEASE)追加
  - コンパイラ最適化による順序入れ替えを防止

P2: 二重書き込み削除 (core/box/tiny_front_cold_box.h:75-82)
  - tiny_region_id_write_header()削除
  - unified_cache_refillが既にヘッダー書き込み済み
  - 不要なメモリ操作を削除して効率化

P3: tiny_next_read()安全性強化 (core/tiny_nextptr.h:73-86)
  - __atomic_thread_fence(__ATOMIC_ACQUIRE)追加
  - メモリ操作の順序を保証

P4: ヘッダー書き込みデフォルトON (core/tiny_region_id.h - ChatGPT修正)
  - g_write_headerのデフォルトを1に変更
  - HAKMEM_TINY_WRITE_HEADER=0で旧挙動に戻せる

テスト結果:
   unified_cache_refill SEGVAULT: 解消(sh8bench実行可能に)
   TLS_SLL_HDR_RESET: まだ発生中(別の根本原因、調査継続)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 09:57:12 +09:00
936dc365ba Priority-2: ENV Cache - Warm Path (FastCache/SuperSlab) getenv() 置換
変更内容:
- hakmem_env_cache.h: 2つの新ENV変数を追加
  (TINY_FAST_STATS, TINY_UNIFIED_CACHE)
- tiny_fastcache.c: 2箇所の getenv() を置換
  (TINY_PROFILE, TINY_FAST_STATS)
- tiny_fastcache.h: 1箇所の getenv() を置換
  (TINY_PROFILE in inline function)
- superslab_slab.c: 1箇所の getenv() を置換
  (TINY_SLL_DIAG)
- tiny_unified_cache.c: 1箇所の getenv() を置換
  (TINY_UNIFIED_CACHE)

効果: Warm path層からも syscall を排除 (ENV変数数: 28→30)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 20:25:48 +09:00
daddbc926c fix(Phase 11+): Cold Start lazy init for unified_cache_refill
Root cause: unified_cache_refill() accessed cache->slots before initialization
when a size class was first used via the refill path (not pop path).

Fix: Add lazy initialization check at start of unified_cache_refill()
- Check if cache->slots is NULL before accessing
- Call unified_cache_init() if needed
- Return NULL if init fails (graceful degradation)

Also includes:
- ss_cold_start_box.inc.h: Box Pattern for default prewarm settings
- hakmem_super_registry.c: Use static array in prewarm (avoid recursion)
- Default prewarm enabled (1 SuperSlab/class, configurable via ENV)

Test: 8B→16B→Mixed allocation pattern now works correctly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 19:43:23 +09:00
195c74756c Fix mid free routing and relax mid W_MAX 2025-12-01 22:06:10 +09:00
4ef0171bc0 feat: Add ACE allocation failure tracing and debug hooks
This commit introduces a comprehensive tracing mechanism for allocation failures within the Adaptive Cache Engine (ACE) component. This feature allows for precise identification of the root cause for Out-Of-Memory (OOM) issues related to ACE allocations.

Key changes include:
- **ACE Tracing Implementation**:
  - Added  environment variable to enable/disable detailed logging of allocation failures.
  - Instrumented , , and  to distinguish between "Threshold" (size class mismatch), "Exhaustion" (pool depletion), and "MapFail" (OS memory allocation failure).
- **Build System Fixes**:
  - Corrected  to ensure  is properly linked into , resolving an  error.
- **LD_PRELOAD Wrapper Adjustments**:
  - Investigated and understood the  wrapper's behavior under , particularly its interaction with  and  checks.
  - Enabled debugging flags for  environment to prevent unintended fallbacks to 's  for non-tiny allocations, allowing comprehensive testing of the  allocator.
- **Debugging & Verification**:
  - Introduced temporary verbose logging to pinpoint execution flow issues within  interception and  routing. These temporary logs have been removed.
  - Created  to facilitate testing of the tracing features.

This feature will significantly aid in diagnosing and resolving allocation-related OOM issues in  by providing clear insights into the failure pathways.
2025-12-01 16:37:59 +09:00
e769dec283 Refactor: Clean up SuperSlab shared pool code
- Removed unused/disabled L0 cache implementation from core/hakmem_shared_pool.c.
- Deleted stale backup file core/hakmem_tiny_superslab.c.bak.
- Removed untracked and obsolete shared_pool source files.
2025-11-30 15:27:53 +09:00
191e659837 Phase 8 Root Cause Fix: BenchFast crash investigation and infrastructure isolation
Goal: Fix BenchFast mode crash and improve infrastructure separation
Status: Normal mode works perfectly (17.9M ops/s), BenchFast crash reduced but persists (separate issue)

Root Cause Analysis (Layers 0-3):

Layer 1: Removed unnecessary unified_cache_init() call
- Problem: Phase 8 Step 2 added unified_cache_init() to bench_fast_init()
- Design error: BenchFast uses TLS SLL strategy, NOT Unified Cache
- Impact: 16KB mmap allocations created, later misclassified as Tiny → crash
- Fix: Removed unified_cache_init() call from bench_fast_box.c lines 123-129
- Rationale: BenchFast and Unified Cache are different allocation strategies

Layer 2: Infrastructure isolation (__libc bypass)
- Problem: Infrastructure allocations (cache arrays) went through HAKMEM wrapper
- Risk: Can interact with BenchFast mode, causing path conflicts
- Fix: Use __libc_calloc/__libc_free in unified_cache_init/shutdown
- Benefit: Clean separation between workload (measured) and infrastructure (unmeasured)
- Defense: Prevents future crashes from infrastructure/workload mixing

Layer 3: Box Contract documentation
- Problem: Implicit assumptions about BenchFast behavior were undocumented
- Fix: Added comprehensive Box Contract to bench_fast_box.h (lines 13-51)
- Documents:
  * Workload allocations: Tiny only, TLS SLL strategy
  * Infrastructure allocations: __libc bypass, no HAKMEM interaction
  * Preconditions, guarantees, and violation examples
- Benefit: Future developers understand design constraints

Layer 0: Limit prealloc to actual TLS SLL capacity
- Problem: Old code preallocated 50,000 blocks/class
- Reality: Adaptive sizing limits TLS SLL to 128 blocks/class at runtime
- Lost blocks: 50,000 - 128 = 49,872 blocks/class × 6 = 299,232 lost blocks!
- Impact: Lost blocks caused heap corruption
- Fix: Hard-code prealloc to 128 blocks/class (observed actual capacity)
- Result: 768 total blocks (128 × 6), zero lost blocks

Performance Impact:
- Normal mode:  17.9M ops/s (perfect, no regression)
- BenchFast mode: ⚠️ Still crashes (different root cause, requires further investigation)

Benefits:
- Unified Cache infrastructure properly isolated (__libc bypass)
- BenchFast Box Contract documented (prevents future misunderstandings)
- Prealloc overflow eliminated (no more lost blocks)
- Normal mode unchanged (backward compatible)

Known Issue (separate):
- BenchFast mode still crashes with "free(): invalid pointer"
- Crash location: Likely bench_random_mixed.c line 145 (BENCH_META_FREE(slots))
- Next steps: GDB debugging, AddressSanitizer build, or strace analysis
- Not caused by Phase 8 changes (pre-existing issue)

Files Modified:
- core/box/bench_fast_box.h        - Box Contract documentation (Layer 3)
- core/box/bench_fast_box.c        - Removed prewarm, limited prealloc (Layer 0+1)
- core/front/tiny_unified_cache.c  - __libc bypass (Layer 2)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 04:51:36 +09:00
cfa587c61d Phase 8-Step1-3: Unified Cache hot path optimization (config macro + prewarm + PGO init removal)
Goal: Reduce branches in Unified Cache hot paths (-2 branches per op)
Expected improvement: +2-3% in PGO mode

Changes:
1. Config Macro (Step 1):
   - Added TINY_FRONT_UNIFIED_CACHE_ENABLED macro to tiny_front_config_box.h
   - PGO mode: compile-time constant (1)
   - Normal mode: runtime function call unified_cache_enabled()
   - Replaced unified_cache_enabled() calls in 3 locations:
     * unified_cache_pop() line 142
     * unified_cache_push() line 182
     * unified_cache_pop_or_refill() line 228

2. Function Declaration Fix:
   - Moved unified_cache_enabled() from static inline to non-static
   - Implementation in tiny_unified_cache.c (was in .h as static inline)
   - Forward declaration in tiny_front_config_box.h
   - Resolves declaration conflict between config box and header

3. Prewarm (Step 2):
   - Added unified_cache_init() call to bench_fast_init()
   - Ensures cache is initialized before benchmark starts
   - Enables PGO builds to remove lazy init checks

4. Conditional Init Removal (Step 3):
   - Wrapped lazy init checks in #if !HAKMEM_TINY_FRONT_PGO
   - PGO builds assume prewarm → no init check needed (-1 branch)
   - Normal builds keep lazy init for safety
   - Applied to 3 functions: unified_cache_pop(), unified_cache_push(), unified_cache_pop_or_refill()

Performance Impact:
  PGO mode: -2 branches per operation (enabled check + init check)
  Normal mode: Same as before (runtime checks)

Branch Elimination (PGO):
  Before: if (!unified_cache_enabled()) + if (slots == NULL)
  After:  if (!1) [eliminated] + [init check removed]
  Result: -2 branches in alloc/free hot paths

Files Modified:
  core/box/tiny_front_config_box.h        - Config macro + forward declaration
  core/front/tiny_unified_cache.h         - Config macro usage + PGO conditionals
  core/front/tiny_unified_cache.c         - unified_cache_enabled() implementation
  core/box/bench_fast_box.c               - Prewarm call in bench_fast_init()

Note: BenchFast mode has pre-existing crash (not caused by these changes)

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 17:58:42 +09:00
04186341c1 Phase 4-Step2: Add Hot/Cold Path Box (+7.3% performance)
Implemented Hot/Cold Path separation using Box pattern for Tiny allocations:

Performance Improvement (without PGO):
- Baseline (Phase 26-A):     53.3 M ops/s
- Hot/Cold Box (Phase 4-Step2): 57.2 M ops/s
- Gain: +7.3% (+3.9 M ops/s)

Implementation:
1. core/box/tiny_front_hot_box.h - Ultra-fast hot path (1 branch)
   - Removed range check (caller guarantees valid class_idx)
   - Inline cache hit path with branch prediction hints
   - Debug metrics with zero overhead in Release builds

2. core/box/tiny_front_cold_box.h - Slow cold path (noinline, cold)
   - Refill logic (batch allocation from SuperSlab)
   - Drain logic (batch free to SuperSlab)
   - Error reporting and diagnostics

3. core/front/malloc_tiny_fast.h - Updated to use Hot/Cold Boxes
   - Hot path: tiny_hot_alloc_fast() (1 branch: cache empty check)
   - Cold path: tiny_cold_refill_and_alloc() (noinline, cold attribute)
   - Clear separation improves i-cache locality

Branch Analysis:
- Baseline: 4-5 branches in hot path (range check + cache check + refill logic mixed)
- Hot/Cold Box: 1 branch in hot path (cache empty check only)
- Reduction: 3-4 branches eliminated from hot path

Design Principles (Box Pattern):
 Single Responsibility: Hot path = cache hit only, Cold path = refill/errors
 Clear Contract: Hot returns NULL on miss, Cold handles miss
 Observable: Debug metrics (TINY_HOT_METRICS_*) gated by NDEBUG
 Safe: Branch prediction hints (TINY_HOT_LIKELY/UNLIKELY)
 Testable: Isolated hot/cold paths, easy A/B testing

PGO Status:
- Temporarily disabled (build issues with __gcov_merge_time_profile)
- Will re-enable PGO in future commit after resolving gcc/lto issues
- Current benchmarks are without PGO (fair A/B comparison)

Other Changes:
- .gitignore: Added *.d files (dependency files, auto-generated)
- Makefile: PGO targets temporarily disabled (show informational message)
- build_pgo.sh: Temporarily disabled (show "PGO paused" message)

Next: Phase 4-Step3 (Front Config Box, target +5-8%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 11:58:37 +09:00
20f8d6f179 Cleanup: Add tiny_debug_api.h to eliminate guard/failfast implicit warnings
Created central header for debug instrumentation API to fix implicit
function declaration warnings across the codebase.

Changes:
1. Created core/tiny_debug_api.h
   - Declares guard system API (3 functions)
   - Declares failfast debugging API (3 functions)
   - Uses forward declarations for SuperSlab/TinySlabMeta

2. Updated 3 files to include tiny_debug_api.h:
   - core/tiny_region_id.h (removed inline externs)
   - core/hakmem_tiny_tls_ops.h
   - core/tiny_superslab_alloc.inc.h

Warnings eliminated (6 of 11 total):
 tiny_guard_is_enabled()
 tiny_guard_on_alloc()
 tiny_guard_on_invalid()
 tiny_failfast_log()
 tiny_failfast_abort_ptr()
 tiny_refill_failfast_level()

Remaining warnings (deferred to P1):
- ss_active_add (2 occurrences)
- expand_superslab_head
- hkm_ace_set_tls_capacity
- smallmid_backend_free

Impact:
- Cleaner build output
- Better type safety for debug functions
- No behavior changes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 06:47:13 +09:00
679c821573 ENV Cleanup Step 14: Gate HAKMEM_TINY_HEAP_V2_DEBUG
Gate the HeapV2 push debug logging behind #if !HAKMEM_BUILD_RELEASE:
- HAKMEM_TINY_HEAP_V2_DEBUG: Controls magazine push event tracing
- File: core/front/tiny_heap_v2.h:117-130

Wraps the ENV check and debug output that logs the first 5 push
operations per size class for HeapV2 magazine diagnostics.

Performance: 29.6M ops/s (within baseline range)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 04:33:39 +09:00
f8b0f38f78 ENV Cleanup Step 8: Gate HAKMEM_SUPER_LOOKUP_DEBUG in header
Gate HAKMEM_SUPER_LOOKUP_DEBUG environment variable behind
#if !HAKMEM_BUILD_RELEASE in hakmem_super_registry.h inline function.

Changes:
- Wrap s_dbg initialization in conditional compilation
- Release builds use constant s_dbg = 0 for complete elimination
- Debug logging in hak_super_lookup() now fully compiled out in release

Performance: 30.3M ops/s Larson (stable, no regression)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 01:45:45 +09:00
8355214135 Fix NULL pointer crash in unified_cache_refill ss_active_add
When superslab_refill() fails in the inner loop, tls->ss can remain
NULL even when produced > 0 (from earlier successful allocations).
This caused a segfault at high iteration counts (>500K) in the
random_mixed benchmark.

Root cause: Line 353 calls ss_active_add(tls->ss, ...) without
checking if tls->ss is NULL after a failed refill breaks the loop.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 13:31:46 +09:00
64ed3d8d8c Add ss_fast_lookup() for O(1) SuperSlab lookup via mask
Replaces expensive hak_super_lookup() (registry hash lookup, 50-100 cycles)
with fast mask-based lookup (~5-10 cycles) in free hot paths.

Algorithm:
1. Mask pointer with SUPERSLAB_SIZE_MIN (1MB) - works for both 1MB and 2MB SS
2. Validate magic (SUPERSLAB_MAGIC)
3. Range check using ss->lg_size

Applied to:
- tiny_free_fast.inc.h: tiny_free_fast() SuperSlab path
- tiny_free_fast_v2.inc.h: LARSON_FIX cross-thread check
- front/malloc_tiny_fast.h: free_tiny_fast() LARSON_FIX path

Note: Performance impact minimal with LARSON_FIX=OFF (default) since
SuperSlab lookup is skipped entirely in that case. Optimization benefits
LARSON_FIX=ON path for safe multi-threaded operation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 12:47:10 +09:00
d8e3971dc2 Fix cross-thread ownership check: Use bits 8-15 for owner_tid_low
Problem:
- TLS_SLL_PUSH_DUP crash in Larson multi-threaded benchmark
- Cross-thread frees incorrectly routed to same-thread TLS path
- Root cause: pthread_t on glibc is 256-byte aligned (TCB base)
  so lower 8 bits are ALWAYS 0x00 for ALL threads

Fix:
- Change owner_tid_low from (tid & 0xFF) to ((tid >> 8) & 0xFF)
- Bits 8-15 actually vary between threads, enabling correct detection
- Applied consistently across all ownership check locations:
  - superslab_inline.h: ss_owner_try_acquire/release/is_mine
  - slab_handle.h: slab_try_acquire
  - tiny_free_fast.inc.h: tiny_free_is_same_thread_ss
  - tiny_free_fast_v2.inc.h: cross-thread detection
  - tiny_superslab_free.inc.h: same-thread check
  - ss_allocation_box.c: slab initialization
  - hakmem_tiny_superslab.c: ownership handling

Also added:
- Address watcher debug infrastructure (tiny_region_id.h)
- Cross-thread detection in malloc_tiny_fast.h Front Gate

Test results:
- Larson 1T/2T/4T: PASS (no TLS_SLL_PUSH_DUP crash)
- random_mixed: PASS
- Performance: ~20M ops/s (regression from 48M, needs optimization)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 11:52:11 +09:00
6b791b97d4 ENV Cleanup: Delete Ultra HEAP & BG Remote dead code (-1,096 LOC)
Deleted files (11):
- core/ultra/ directory (6 files: tiny_ultra_heap.*, tiny_ultra_page_arena.*)
- core/front/tiny_ultrafront.h
- core/tiny_ultra_fast.inc.h
- core/hakmem_tiny_ultra_front.inc.h
- core/hakmem_tiny_ultra_simple.inc
- core/hakmem_tiny_ultra_batch_box.inc

Edited files (10):
- core/hakmem_tiny.c: Remove Ultra HEAP #includes, move ultra_batch_for_class()
- core/hakmem_tiny_tls_state_box.inc: Delete TinyUltraFront, g_ultra_simple
- core/hakmem_tiny_phase6_wrappers_box.inc: Delete ULTRA_SIMPLE block
- core/hakmem_tiny_alloc.inc: Delete Ultra-Front code block
- core/hakmem_tiny_init.inc: Delete ULTRA_SIMPLE ENV loading
- core/hakmem_tiny_remote_target.{c,h}: Delete g_bg_remote_enable/batch
- core/tiny_refill.h: Remove BG Remote check (always break)
- core/hakmem_tiny_background.inc: Delete BG Remote drain loop

Deleted ENV variables:
- HAKMEM_TINY_ULTRA_HEAP (build flag, undefined)
- HAKMEM_TINY_ULTRA_L0
- HAKMEM_TINY_ULTRA_HEAP_DUMP
- HAKMEM_TINY_ULTRA_PAGE_DUMP
- HAKMEM_TINY_ULTRA_FRONT
- HAKMEM_TINY_BG_REMOTE (no getenv, dead code)
- HAKMEM_TINY_BG_REMOTE_BATCH (no getenv, dead code)
- HAKMEM_TINY_ULTRA_SIMPLE (references only)

Impact:
- Code reduction: -1,096 lines
- Binary size: 305KB → 304KB (-1KB)
- Build: PASS
- Sanity: 15.69M ops/s (3 runs avg)
- Larson: 1 crash observed (seed 43, likely existing instability)

Notes:
- Ultra HEAP never compiled (#if HAKMEM_TINY_ULTRA_HEAP undefined)
- BG Remote variables never initialized (g_bg_remote_enable always 0)
- Ultra SLIM (ultra_slim_alloc_box.h) preserved (active 4-layer path)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 04:35:47 +09:00
bcfb4f6b59 Remove dead code: UltraHot, RingCache, FrontC23, Class5 Hotpath
(cherry-picked from 225b6fcc7, conflicts resolved)
2025-11-26 12:33:49 +09:00
2fe970252a Fix: workset=8192 SEGV - Unify SuperSlab geometry to Box3 (partial fix)
Problem:
- bench_random_mixed_hakmem with workset=8192 causes SEGV
- workset=256 works fine
- Root cause identified by ChatGPT analysis

Root Cause:
SuperSlab geometry double definition caused slab_base misalignment:
- Old: tiny_slab_base_for() used SLAB0_OFFSET + idx * SLAB_SIZE
- New: Box3 tiny_slab_base_for_geometry() uses offset only for idx=0
- Result: slab_idx > 0 had +2048 byte offset error
- Impact: Unified Cache carve stepped beyond slab boundary → SEGV

Fix 1: core/superslab/superslab_inline.h
========================================
Delegate SuperSlab base calculation to Box3:

  static inline uint8_t* tiny_slab_base_for(SuperSlab* ss, int slab_idx) {
      if (!ss || slab_idx < 0) return NULL;
      return tiny_slab_base_for_geometry(ss, slab_idx);  // ← Box3 unified
  }

Effect:
- All tiny_slab_base_for() calls now use single Box3 implementation
- TLS slab_base and Box3 calculations perfectly aligned
- Eliminates geometry mismatch between layers

Fix 2: core/front/tiny_unified_cache.c
========================================
Enhanced fail-fast validation (debug builds only):
- unified_refill_validate_base(): Use TLS as source of truth
- Cross-check with registry lookup for safety
- Validate: slab_base range, alignment, meta consistency
- Box3 + TLS boundary consolidated to one place

Fix 3: core/hakmem_tiny_superslab.h
========================================
Added forward declaration:
- SuperSlab* superslab_refill(int class_idx);
- Required by tiny_unified_cache.c

Test Results:
=============
workset=8192 SEGV threshold improved:

Before fix:
   Immediate SEGV at any iteration count

After fix:
   100K iterations: OK (9.8M ops/s)
   200K iterations: OK (15.5M ops/s)
   300K iterations: SEGV (different bug exposed)

Conclusion:
- Box3 geometry unification fixed primary SEGV
- Stability improved: 0 → 200K iterations
- Remaining issue: 300K+ iterations hit different bug
- Likely causes: memory pressure, different corruption pattern

Known Issues:
- Debug warnings still present: FREE_FAST_HDR_META_MISMATCH, NXT_HDR_MISMATCH
- These are separate header consistency issues (not related to geometry)
- 300K+ SEGV requires further investigation

Performance:
- No performance regression observed in stable range
- workset=256 unaffected: 60M+ ops/s maintained

Credit: Root cause analysis and fix strategy by ChatGPT

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-22 07:40:35 +09:00
5c9fe34b40 Enable performance optimizations by default (+557% improvement)
## Performance Impact

**Before** (optimizations OFF):
- Random Mixed 256B: 9.4M ops/s
- System malloc ratio: 10.6% (9.5x slower)

**After** (optimizations ON):
- Random Mixed 256B: 61.8M ops/s (+557%)
- System malloc ratio: 70.0% (1.43x slower) 
- 3-run average: 60.1M - 62.8M ops/s (±2.2% variance)

## Changes

Enabled 3 critical optimizations by default:

### 1. HAKMEM_SS_EMPTY_REUSE (hakmem_shared_pool.c:810)
```c
// BEFORE: default OFF
empty_reuse_enabled = (e && *e && *e != '0') ? 1 : 0;

// AFTER: default ON
empty_reuse_enabled = (e && *e && *e == '0') ? 0 : 1;
```
**Impact**: Reuse empty slabs before mmap, reduces syscall overhead

### 2. HAKMEM_TINY_UNIFIED_CACHE (tiny_unified_cache.h:69)
```c
// BEFORE: default OFF
g_enable = (e && *e && *e != '0') ? 1 : 0;

// AFTER: default ON
g_enable = (e && *e && *e == '0') ? 0 : 1;
```
**Impact**: Unified TLS cache improves hit rate

### 3. HAKMEM_FRONT_GATE_UNIFIED (malloc_tiny_fast.h:42)
```c
// BEFORE: default OFF
g_enable = (e && *e && *e != '0') ? 1 : 0;

// AFTER: default ON
g_enable = (e && *e && *e == '0') ? 0 : 1;
```
**Impact**: Unified front gate reduces dispatch overhead

## ENV Override

Users can still disable optimizations if needed:
```bash
export HAKMEM_SS_EMPTY_REUSE=0           # Disable empty slab reuse
export HAKMEM_TINY_UNIFIED_CACHE=0       # Disable unified cache
export HAKMEM_FRONT_GATE_UNIFIED=0       # Disable unified front gate
```

## Comparison to Competitors

```
mimalloc:      113.34M ops/s (1.83x faster than HAKMEM)
System malloc:  88.20M ops/s (1.43x faster than HAKMEM)
HAKMEM:         61.80M ops/s  Competitive performance
```

## Files Modified
- core/hakmem_shared_pool.c - EMPTY_REUSE default ON
- core/front/tiny_unified_cache.h - UNIFIED_CACHE default ON
- core/front/malloc_tiny_fast.h - FRONT_GATE_UNIFIED default ON

🚀 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-22 01:29:05 +09:00
25d963a4aa Code Cleanup: Remove false positives, redundant validations, and reduce verbose logging
Following the C7 stride upgrade fix (commit 23c0d9541), this commit performs
comprehensive cleanup to improve code quality and reduce debug noise.

## Changes

### 1. Disable False Positive Checks (tiny_nextptr.h)
- **Disabled**: NXT_MISALIGN validation block with `#if 0`
- **Reason**: Produces false positives due to slab base offsets (2048, 65536)
  not being stride-aligned, causing all blocks to appear "misaligned"
- **TODO**: Reimplement to check stride DISTANCE between consecutive blocks
  instead of absolute alignment to stride boundaries

### 2. Remove Redundant Geometry Validations

**hakmem_tiny_refill_p0.inc.h (P0 batch refill)**
- Removed 25-line CARVE_GEOMETRY_FIX validation block
- Replaced with NOTE explaining redundancy
- **Reason**: Stride table is now correct in tiny_block_stride_for_class(),
  defense-in-depth validation adds overhead without benefit

**ss_legacy_backend_box.c (legacy backend)**
- Removed 18-line LEGACY_FIX_GEOMETRY validation block
- Replaced with NOTE explaining redundancy
- **Reason**: Shared_pool validates geometry at acquisition time

### 3. Reduce Verbose Logging

**hakmem_shared_pool.c (sp_fix_geometry_if_needed)**
- Made SP_FIX_GEOMETRY logging conditional on `!HAKMEM_BUILD_RELEASE`
- **Reason**: Geometry fixes are expected during stride upgrades,
  no need to log in release builds

### 4. Verification
- Build:  Successful (LTO warnings expected)
- Test:  10K iterations (1.87M ops/s, no crashes)
- NXT_MISALIGN false positives:  Eliminated

## Files Modified
- core/tiny_nextptr.h - Disabled false positive NXT_MISALIGN check
- core/hakmem_tiny_refill_p0.inc.h - Removed redundant CARVE validation
- core/box/ss_legacy_backend_box.c - Removed redundant LEGACY validation
- core/hakmem_shared_pool.c - Made SP_FIX_GEOMETRY logging debug-only

## Impact
- **Code clarity**: Removed 43 lines of redundant validation code
- **Debug noise**: Reduced false positive diagnostics
- **Performance**: Eliminated overhead from redundant geometry checks
- **Maintainability**: Single source of truth for geometry validation

🧹 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-21 23:00:24 +09:00
9b0d746407 Phase 3d-B: TLS Cache Merge - Unified g_tls_sll[] structure (+12-18% expected)
Merge separate g_tls_sll_head[] and g_tls_sll_count[] arrays into unified
TinyTLSSLL struct to improve L1D cache locality. Expected performance gain:
+12-18% from reducing cache line splits (2 loads → 1 load per operation).

Changes:
- core/hakmem_tiny.h: Add TinyTLSSLL type (16B aligned, head+count+pad)
- core/hakmem_tiny.c: Replace separate arrays with g_tls_sll[8]
- core/box/tls_sll_box.h: Update Box API (13 sites) for unified access
- Updated 32+ files: All g_tls_sll_head[i] → g_tls_sll[i].head
- Updated 32+ files: All g_tls_sll_count[i] → g_tls_sll[i].count
- core/hakmem_tiny_integrity.h: Unified canary guards
- core/box/integrity_box.c: Simplified canary validation
- Makefile: Added core/box/tiny_sizeclass_hist_box.o to link

Build:  PASS (10K ops sanity test)
Warnings: Only pre-existing LTO type mismatches (unrelated)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 07:32:30 +09:00