Commit Graph

17 Commits

Author SHA1 Message Date
25cb7164c7 Comprehensive legacy cleanup and architecture consolidation
Summary of Changes:

MOVED TO ARCHIVE:
- core/hakmem_tiny_legacy_slow_box.inc → archive/
  * Slow path legacy code preserved for reference
  * Superseded by Gatekeeper Box architecture

- core/superslab_allocate.c → archive/superslab_allocate_legacy.c
  * Legacy SuperSlab allocation implementation
  * Functionality integrated into new Box system

- core/superslab_head.c → archive/superslab_head_legacy.c
  * Legacy slab head management
  * Refactored through Box architecture

REMOVED DEAD CODE:
- Eliminated unused allocation policy variants from ss_allocation_box.c
  * Reduced 127+ lines of conditional logic to a focused implementation
  * Removed: old policy branches, unused allocation strategies
  * Kept: current Box-based allocation path

ADDED NEW INFRASTRUCTURE:
- core/superslab_head_stub.c (41 lines)
  * Minimal stub for backward compatibility
  * Delegates to new architecture

- Enhanced core/superslab_cache.c (75 lines added)
  * Added missing API functions for cache management
  * Proper interface for SuperSlab cache integration

REFACTORED CORE SYSTEMS:
- core/hakmem_super_registry.c
  * Moved registration logic from scattered locations
  * Centralized SuperSlab registry management

- core/hakmem_tiny.c
  * Removed 27 lines of redundant initialization
  * Simplified through Box architecture

- core/hakmem_tiny_alloc.inc
  * Streamlined allocation path to use Gatekeeper
  * Removed legacy decision logic

- core/box/ss_allocation_box.c/h
  * Dramatically simplified allocation policy
  * Removed conditional branches for unused strategies
  * Focused on current Box-based approach

BUILD SYSTEM:
- Updated Makefile for archive structure
- Removed obsolete object file references
- Maintained build compatibility

SAFETY & TESTING:
- All deletions verified: no broken references
- Build verification: RELEASE=0 and RELEASE=1 pass
- Smoke tests: 100% pass rate
- Functional verification: allocation/free intact

Architecture Consolidation:
Before: Multiple overlapping allocation paths with legacy code branches
After:  Single unified path through Gatekeeper Boxes with clear architecture

Benefits:
- Reduced code size and complexity
- Improved maintainability
- Single source of truth for allocation logic
- Better diagnostic/observability hooks
- Foundation for future optimizations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 14:22:48 +09:00
4f2bcb7d32 Refactor: Phase 2 Box-ification - SuperSlab Lookup Box with multiple contract levels
Purpose: Formalize SuperSlab lookup responsibilities with clear safety guarantees

Evolution:
- Phase 12: UNSAFE mask+dereference (5-10 cycles) → 12% crash rate
- Phase 1b: SAFE registry lookup (50-100 cycles) → 0% crash rate
- Phase 2: Box-ification - multiple contracts (UNSAFE/SAFE/GUARDED)

Box Pattern Benefits:
1. Clear Contracts: Each API documents preconditions and guarantees
2. Multiple Levels: Choose speed vs safety based on context
3. Future-Proof: Enables optimizations without breaking existing code

API Design:
- ss_lookup_unsafe(): 5-10 cycles, requires validated pointer (internal use only)
- ss_lookup_safe(): 50-100 cycles, works with arbitrary pointers (recommended)
- ss_lookup_guarded(): 100-200 cycles, adds integrity checks (debug only)
- ss_fast_lookup(): Backward compatible (→ ss_lookup_safe)
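
A minimal sketch of how these contract levels could be declared (illustrative only; the commit notes ss_lookup_safe() is actually a macro to avoid a circular include, and the real header is core/box/superslab_lookup_box.h):

```c
/* Contract-level lookup API, sketched from the list above.
 * Names follow the commit; exact signatures and the debug gate are assumptions. */
typedef struct SuperSlab SuperSlab;

SuperSlab* ss_lookup_unsafe(void* ptr);  /* 5-10 cycles; ptr MUST be a validated Tiny pointer */
SuperSlab* ss_lookup_safe(void* ptr);    /* 50-100 cycles; any pointer, NULL if unknown */
#ifdef HAKMEM_DEBUG                      /* assumed debug-build gate */
SuperSlab* ss_lookup_guarded(void* ptr); /* 100-200 cycles; adds integrity checks */
#endif

/* Backward compatibility: the old name routes to the safe contract. */
#define ss_fast_lookup(p) ss_lookup_safe(p)
```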

Implementation:
- Created core/box/superslab_lookup_box.h with full contract documentation
- Integrated into core/superslab/superslab_inline.h
- ss_lookup_safe() implemented as macro to avoid circular dependency
- ss_lookup_guarded() only available in debug builds
- Removed conflicting extern declarations from 3 locations

Testing:
- Build: Success (all warnings resolved)
- Crash rate: 0% (50/50 iterations passed)
- Backward compatibility: Maintained via ss_fast_lookup() macro

Future Optimization Opportunities (documented in Box):
- Phase 2.1: Hybrid lookup (try UNSAFE first, fallback to SAFE)
- Phase 2.2: Per-thread cache (1-2 cycles hit rate)
- Phase 2.3: Hardware-assisted validation (PAC/CPUID)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 08:44:29 +09:00
dea7ced429 Fix: Replace unsafe ss_fast_lookup() with safe registry lookup (12% → 0% crash)
Root Cause:
- Phase 12 optimization used mask+dereference for fast SuperSlab lookup
- Masked arbitrary pointers could produce unmapped addresses
- Reading ss->magic from unmapped memory → SEGFAULT
- Crash rate: 12% (6/50 iterations)

Solution Phase 1a (Failed):
- Added user-space range checks (0x1000 to 0x00007fffffffffff)
- Result: Still 10-12% crash rate (range check insufficient)
- Problem: Addresses within range can still be unmapped after masking

Solution Phase 1b (Successful):
- Replace ss_fast_lookup() with hak_super_lookup() registry lookup
- hak_super_lookup() uses hash table - never dereferences arbitrary memory
- Implemented as macro to avoid circular include dependency
- Result: 0% crash rate (100/100 test iterations passed)
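
The difference between the two strategies, as a hedged sketch (registry_find and the bucketing scheme are assumptions, not the project's actual code):

```c
#include <stdint.h>
#include <stddef.h>

#define SUPERSLAB_SIZE_MIN ((uintptr_t)1 << 20)  /* 1MB, per the commit */

/* Phase 12 style: mask + dereference. If ptr is not a Tiny pointer,
 * the masked address may be unmapped, so reading ss->magic SEGVs. */
static inline void* lookup_by_mask(void* ptr) {
    return (void*)((uintptr_t)ptr & ~(SUPERSLAB_SIZE_MIN - 1));
}

/* Phase 1b style: hash-table lookup. Only the registry's own nodes are
 * dereferenced, never the masked address, so arbitrary pointers are safe. */
void* registry_find(uintptr_t key);  /* assumed registry helper */
static inline void* lookup_by_registry(void* ptr) {
    uintptr_t key = (uintptr_t)ptr >> 20;  /* bucket by 1MB region */
    return registry_find(key);             /* NULL if not registered */
}
```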

Trade-off:
- Performance: 50-100 cycles (vs 5-10 cycles Phase 12)
- Safety: 0% crash rate (vs 12% crash rate Phase 12)
- Rolls back the Phase 12 optimization but ensures crash-free operation
- Still faster than mincore() syscall (5000-10000 cycles)

Testing:
- Before: 44/50 success (12% crash rate)
- After: 100/100 success (0% crash rate)
- Confirmed stable across extended testing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 08:31:45 +09:00
dc9e650db3 Tiny Pool redesign: P0.1, P0.3, P1.1, P1.2 - Out-of-band class_idx lookup
This commit implements the first phase of Tiny Pool redesign based on
ChatGPT architecture review. The goal is to eliminate Header/Next pointer
conflicts by moving class_idx lookup out-of-band (to SuperSlab metadata).

## P0.1: C0(8B) class upgraded to 16B
- Size table changed: {16,32,64,128,256,512,1024,2048} (8 classes)
- LUT updated: 1..16 → class 0, 17..32 → class 1, etc.
- tiny_next_off: C0 now uses offset 1 (header preserved)
- Eliminates edge cases for 8B allocations

## P0.3: Slab reuse guard Box (tls_slab_reuse_guard_box.h)
- New Box for draining TLS SLL before slab reuse
- ENV gate: HAKMEM_TINY_SLAB_REUSE_GUARD=1
- Prevents stale pointers when slabs are recycled
- Follows Box theory: single responsibility, minimal API

## P1.1: SuperSlab class_map addition
- Added uint8_t class_map[SLABS_PER_SUPERSLAB_MAX] to SuperSlab
- Maps slab_idx → class_idx for out-of-band lookup
- Initialized to 255 (UNASSIGNED) on SuperSlab creation
- Set correctly on slab initialization in all backends

## P1.2: Free fast path uses class_map
- ENV gate: HAKMEM_TINY_USE_CLASS_MAP=1
- Free path can now get class_idx from class_map instead of Header
- Falls back to Header read if class_map returns invalid value
- Fixed Legacy Backend dynamic slab initialization bug
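
Combined, P1.1 and P1.2 amount to the following sketch (class_map and the 255 init value follow the commit; the capacity constant and header encoding are assumptions):

```c
#include <stdint.h>

#define SLABS_PER_SUPERSLAB_MAX 32   /* assumed capacity */
#define CLASS_UNASSIGNED        255  /* init value per P1.1 */

typedef struct SuperSlab {
    uint8_t class_map[SLABS_PER_SUPERSLAB_MAX];  /* slab_idx -> class_idx */
    /* ... remaining SuperSlab fields ... */
} SuperSlab;

/* Free fast path: prefer the out-of-band class_map, fall back to the
 * 1-byte header when the map entry is invalid. */
static inline int tiny_class_for_free(const SuperSlab* ss, int slab_idx,
                                      const uint8_t* block_base) {
    uint8_t cls = ss->class_map[slab_idx];
    if (cls != CLASS_UNASSIGNED) return cls;   /* no header read needed */
    return (int)block_base[0];                 /* assumed header encoding */
}
```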

## Documentation added
- HAKMEM_ARCHITECTURE_OVERVIEW.md: 4-layer architecture analysis
- TLS_SLL_ARCHITECTURE_INVESTIGATION.md: Root cause analysis
- PTR_LIFECYCLE_TRACE_AND_ROOT_CAUSE_ANALYSIS.md: Pointer tracking
- TINY_REDESIGN_CHECKLIST.md: Implementation roadmap (P0-P3)

## Test results
- Baseline: 70% success rate (30% crash - pre-existing issue)
- class_map enabled: 70% success rate (same as baseline)
- Performance: ~30.5M ops/s (unchanged)

## Next steps (P1.3, P2, P3)
- P1.3: Add meta->active for accurate TLS/freelist sync
- P2: TLS SLL redesign with Box-based counting
- P3: Complete Header out-of-band migration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 13:42:39 +09:00
7a03a614fd Restrict ss_fast_lookup to validated Tiny pointer paths only
Safety fix: ss_fast_lookup masks pointer to 1MB boundary and reads
memory at that address. If called with arbitrary (non-Tiny) pointers,
the masked address could be unmapped → SEGFAULT.

Changes:
- tiny_free_fast(): Reverted to safe hak_super_lookup (can receive
  arbitrary pointers without prior validation)
- ss_fast_lookup(): Added safety warning in comments documenting when
  it's safe to use (after header magic 0xA0 validation)

ss_fast_lookup remains in LARSON_FIX paths where header magic is
already validated before the SuperSlab lookup.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 12:55:40 +09:00
64ed3d8d8c Add ss_fast_lookup() for O(1) SuperSlab lookup via mask
Replaces expensive hak_super_lookup() (registry hash lookup, 50-100 cycles)
with fast mask-based lookup (~5-10 cycles) in free hot paths.

Algorithm:
1. Mask pointer with SUPERSLAB_SIZE_MIN (1MB) - works for both 1MB and 2MB SS
2. Validate magic (SUPERSLAB_MAGIC)
3. Range check using ss->lg_size
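
As a sketch (struct layout and magic value are assumptions; the dereference in step 2 is exactly what commit dea7ced429 later identifies as unsafe for arbitrary pointers):

```c
#include <stdint.h>
#include <stddef.h>

#define SUPERSLAB_SIZE_MIN ((uintptr_t)1 << 20)  /* 1MB */
#define SUPERSLAB_MAGIC    0x53535042u           /* assumed value */

typedef struct {
    uint32_t magic;
    uint8_t  lg_size;  /* log2(size): 20 for 1MB, 21 for 2MB */
} SuperSlab;

static inline SuperSlab* ss_fast_lookup(void* ptr) {
    /* 1. Mask pointer down to the 1MB boundary. */
    SuperSlab* ss = (SuperSlab*)((uintptr_t)ptr & ~(SUPERSLAB_SIZE_MIN - 1));
    /* 2. Validate magic (unsafe if the masked address is unmapped). */
    if (ss->magic != SUPERSLAB_MAGIC) return NULL;
    /* 3. Range check using the actual size from ss->lg_size. */
    if ((uintptr_t)ptr - (uintptr_t)ss >= ((uintptr_t)1 << ss->lg_size)) return NULL;
    return ss;
}
```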

Applied to:
- tiny_free_fast.inc.h: tiny_free_fast() SuperSlab path
- tiny_free_fast_v2.inc.h: LARSON_FIX cross-thread check
- front/malloc_tiny_fast.h: free_tiny_fast() LARSON_FIX path

Note: Performance impact minimal with LARSON_FIX=OFF (default) since
SuperSlab lookup is skipped entirely in that case. Optimization benefits
LARSON_FIX=ON path for safe multi-threaded operation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 12:47:10 +09:00
d8e3971dc2 Fix cross-thread ownership check: Use bits 8-15 for owner_tid_low
Problem:
- TLS_SLL_PUSH_DUP crash in Larson multi-threaded benchmark
- Cross-thread frees incorrectly routed to same-thread TLS path
- Root cause: pthread_t on glibc is 256-byte aligned (TCB base)
  so lower 8 bits are ALWAYS 0x00 for ALL threads

Fix:
- Change owner_tid_low from (tid & 0xFF) to ((tid >> 8) & 0xFF)
- Bits 8-15 actually vary between threads, enabling correct detection
- Applied consistently across all ownership check locations:
  - superslab_inline.h: ss_owner_try_acquire/release/is_mine
  - slab_handle.h: slab_try_acquire
  - tiny_free_fast.inc.h: tiny_free_is_same_thread_ss
  - tiny_free_fast_v2.inc.h: cross-thread detection
  - tiny_superslab_free.inc.h: same-thread check
  - ss_allocation_box.c: slab initialization
  - hakmem_tiny_superslab.c: ownership handling
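
The one-line change at the heart of the fix, sketched (glibc's pthread_t is the 256-byte-aligned TCB address, so bits 0-7 are constant while bits 8-15 vary):

```c
#include <stdint.h>
#include <pthread.h>

/* Before: always 0x00 on glibc, so every thread looked like the owner. */
static inline uint8_t owner_tid_low_old(void) {
    return (uint8_t)((uintptr_t)pthread_self() & 0xFF);
}

/* After: bits 8-15 differ between threads, so cross-thread frees are
 * detected and routed away from the same-thread TLS path. */
static inline uint8_t owner_tid_low_new(void) {
    return (uint8_t)(((uintptr_t)pthread_self() >> 8) & 0xFF);
}
```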

Also added:
- Address watcher debug infrastructure (tiny_region_id.h)
- Cross-thread detection in malloc_tiny_fast.h Front Gate

Test results:
- Larson 1T/2T/4T: PASS (no TLS_SLL_PUSH_DUP crash)
- random_mixed: PASS
- Performance: ~20M ops/s (regression from 48M, needs optimization)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 11:52:11 +09:00
6ae0db9fd2 Fix: workset=8192 SEGV - Align slab_index_for to Box3 geometry (iteration 2)
Problem:
After Box3 geometry unification (commit 2fe970252), workset=8192 still SEGVs:
- 200K iterations:  OK
- 300K iterations:  SEGV

Root Cause (identified by ChatGPT):
Header/metadata class mismatches around 300K iterations:
- [HDR_META_MISMATCH] hdr_cls=6 meta_cls=5
- [FREE_FAST_HDR_META_MISMATCH] hdr_cls=5 meta_cls=4
- [TLS_SLL_PUSH_META_MISMATCH] cls=5 meta_cls=4

Cause: slab_index_for() geometry mismatch with Box3
- tiny_slab_base_for_geometry() (Box3):
    - Slab 0: ss + SUPERSLAB_SLAB0_DATA_OFFSET
    - Slab 1: ss + 1*SLAB_SIZE
    - Slab k: ss + k*SLAB_SIZE

- Old slab_index_for():
    rel = p - (base + SUPERSLAB_SLAB0_DATA_OFFSET);
    idx = rel / SLAB_SIZE;

- Result: Off-by-one for slab_idx > 0
    Example: tiny_slab_base_for_geometry(ss, 4) returns 0x...40000
             slab_index_for(ss, 0x...40000) returns 3 (wrong!)

Impact:
- Block allocated in "C6 slab 4" appears to be in "C5 slab 3"
- Header class_idx (C6) != meta->class_idx (C5)
- TLS SLL corruption → SEGV after extended runs

Fix: core/superslab/superslab_inline.h
======================================
Rewrite slab_index_for() as inverse of Box3 geometry:

  static inline int slab_index_for(SuperSlab* ss, void* ptr) {
      // ... bounds checks ...

      // Slab 0: special case (has metadata offset)
      if (p < base + SLAB_SIZE) {
          return 0;
      }

      // Slab 1+: simple SLAB_SIZE spacing from base
      size_t rel = p - base;  // ← Changed from (p - base - OFFSET)
      int idx = (int)(rel / SLAB_SIZE);
      return idx;
  }

Verification:
- Round-trip invariant holds: slab_index_for(ss, tiny_slab_base_for_geometry(ss, idx)) == idx
- Consistent for any address within slab

Test Results:
=============
workset=8192 SEGV threshold improved further:

Before this fix (after 2fe970252):
   200K iterations: OK
   300K iterations: SEGV

After this fix:
   220K iterations: OK (15.5M ops/s)
   240K iterations: SEGV (different bug)

Progress:
- Iteration 1 (2fe970252): 0 → 200K stable
- Iteration 2 (this fix):  200K → 220K stable
- Total improvement: 0 → 220K stable iterations (this fix adds +10%)

Known Issues:
- 240K+ still SEGVs (suspected: TLS SLL double-free, per ChatGPT)
- Debug builds may show TLS_SLL_PUSH FATAL double-free detection
- Requires further investigation of free path

Impact:
- No performance regression in stable range
- Header/metadata mismatch errors eliminated
- workset=256 unaffected: 60M+ ops/s maintained

Credit: Root cause analysis and fix by ChatGPT

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-22 07:56:06 +09:00
2fe970252a Fix: workset=8192 SEGV - Unify SuperSlab geometry to Box3 (partial fix)
Problem:
- bench_random_mixed_hakmem with workset=8192 causes SEGV
- workset=256 works fine
- Root cause identified by ChatGPT analysis

Root Cause:
SuperSlab geometry double definition caused slab_base misalignment:
- Old: tiny_slab_base_for() used SLAB0_OFFSET + idx * SLAB_SIZE
- New: Box3 tiny_slab_base_for_geometry() uses offset only for idx=0
- Result: slab_idx > 0 had +2048 byte offset error
- Impact: Unified Cache carve stepped beyond slab boundary → SEGV

Fix 1: core/superslab/superslab_inline.h
========================================
Delegate SuperSlab base calculation to Box3:

  static inline uint8_t* tiny_slab_base_for(SuperSlab* ss, int slab_idx) {
      if (!ss || slab_idx < 0) return NULL;
      return tiny_slab_base_for_geometry(ss, slab_idx);  // ← Box3 unified
  }

Effect:
- All tiny_slab_base_for() calls now use single Box3 implementation
- TLS slab_base and Box3 calculations perfectly aligned
- Eliminates geometry mismatch between layers

Fix 2: core/front/tiny_unified_cache.c
========================================
Enhanced fail-fast validation (debug builds only):
- unified_refill_validate_base(): Use TLS as source of truth
- Cross-check with registry lookup for safety
- Validate: slab_base range, alignment, meta consistency
- Box3 + TLS boundary consolidated to one place

Fix 3: core/hakmem_tiny_superslab.h
========================================
Added forward declaration:
- SuperSlab* superslab_refill(int class_idx);
- Required by tiny_unified_cache.c

Test Results:
=============
workset=8192 SEGV threshold improved:

Before fix:
   Immediate SEGV at any iteration count

After fix:
   100K iterations: OK (9.8M ops/s)
   200K iterations: OK (15.5M ops/s)
   300K iterations: SEGV (different bug exposed)

Conclusion:
- Box3 geometry unification fixed primary SEGV
- Stability improved: 0 → 200K iterations
- Remaining issue: 300K+ iterations hit different bug
- Likely causes: memory pressure, different corruption pattern

Known Issues:
- Debug warnings still present: FREE_FAST_HDR_META_MISMATCH, NXT_HDR_MISMATCH
- These are separate header consistency issues (not related to geometry)
- 300K+ SEGV requires further investigation

Performance:
- No performance regression observed in stable range
- workset=256 unaffected: 60M+ ops/s maintained

Credit: Root cause analysis and fix strategy by ChatGPT

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-22 07:40:35 +09:00
fcf098857a Phase12 debug: restore SUPERSLAB constants/APIs, implement Box2 drain boundary, fix tiny_fast_pop to return BASE, honor TLS SLL toggle in alloc/free fast paths, add fail-fast stubs, and quiet capacity sentinel. Update CURRENT_TASK with A/B results (SLL-off stable; SLL-on crash). 2025-11-14 01:02:00 +09:00
03df05ec75 Phase 12: Shared SuperSlab Pool implementation (WIP - runtime crash)
## Summary
Implemented Phase 12 Shared SuperSlab Pool (mimalloc-style) to address
SuperSlab allocation churn (877 SuperSlabs → 100-200 target).

## Implementation (ChatGPT + Claude)
1. **Metadata changes** (superslab_types.h):
   - Added class_idx to TinySlabMeta (per-slab dynamic class)
   - Removed size_class from SuperSlab (no longer per-SuperSlab)
   - Changed owner_tid (16-bit) → owner_tid_low (8-bit)

2. **Shared Pool** (hakmem_shared_pool.{h,c}):
   - Global pool shared by all size classes
   - shared_pool_acquire_slab() - Get free slab for class_idx
   - shared_pool_release_slab() - Return slab when empty
   - Per-class hints for fast path optimization

3. **Integration** (23 files modified):
   - Updated all ss->size_class → meta->class_idx
   - Updated all meta->owner_tid → meta->owner_tid_low
   - superslab_refill() now uses shared pool
   - Free path releases empty slabs back to pool

4. **Build system** (Makefile):
   - Added hakmem_shared_pool.o to OBJS_BASE and TINY_BENCH_OBJS_BASE
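
The shared-pool interface from item 2 might look like the following (hypothetical signatures; the real ones are in core/hakmem_shared_pool.h and may differ):

```c
/* Global slab pool shared by all size classes (mimalloc-style). */
typedef struct SuperSlab SuperSlab;
typedef struct TinySlabMeta TinySlabMeta;

/* Acquire a free slab to serve class_idx; NULL if the pool is exhausted. */
TinySlabMeta* shared_pool_acquire_slab(int class_idx);

/* Release a slab back to the pool once it is completely empty, so any
 * class can reuse it (per-slab class_idx in TinySlabMeta makes this possible). */
void shared_pool_release_slab(TinySlabMeta* slab);
```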

## Status: ⚠️ Build OK, Runtime CRASH

**Build**: SUCCESS
- All 23 files compile without errors
- Only warnings: superslab_allocate type mismatch (legacy code)

**Runtime**: SEGFAULT
- Crash location: sll_refill_small_from_ss()
- Exit code: 139 (SIGSEGV)
- Test case: ./bench_random_mixed_hakmem 1000 256 42

## Known Issues
1. **SEGFAULT in refill path** - Likely shared_pool_acquire_slab() issue
2. **Legacy superslab_allocate()** still exists (type mismatch warning)
3. **Remaining TODOs** from design doc:
   - SuperSlab physical layout integration
   - slab_handle.h cleanup
   - Remove old per-class head implementation

## Next Steps
1. Debug SEGFAULT (gdb backtrace shows sll_refill_small_from_ss)
2. Fix shared_pool_acquire_slab() or superslab_init_slab()
3. Basic functionality test (1K → 100K iterations)
4. Measure SuperSlab count reduction (877 → 100-200)
5. Performance benchmark (+650-860% expected)

## Files Changed (25 files)
core/box/free_local_box.c
core/box/free_remote_box.c
core/box/front_gate_classifier.c
core/hakmem_super_registry.c
core/hakmem_tiny.c
core/hakmem_tiny_bg_spill.c
core/hakmem_tiny_free.inc
core/hakmem_tiny_lifecycle.inc
core/hakmem_tiny_magazine.c
core/hakmem_tiny_query.c
core/hakmem_tiny_refill.inc.h
core/hakmem_tiny_superslab.c
core/hakmem_tiny_superslab.h
core/hakmem_tiny_tls_ops.h
core/slab_handle.h
core/superslab/superslab_inline.h
core/superslab/superslab_types.h
core/tiny_debug.h
core/tiny_free_fast.inc.h
core/tiny_free_magazine.inc.h
core/tiny_remote.c
core/tiny_superslab_alloc.inc.h
core/tiny_superslab_free.inc.h
Makefile

## New Files (3 files)
PHASE12_SHARED_SUPERSLAB_POOL_DESIGN.md
core/hakmem_shared_pool.c
core/hakmem_shared_pool.h

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: ChatGPT <chatgpt@openai.com>
2025-11-13 16:33:03 +09:00
72b38bc994 Phase E3-FINAL: Fix Box API offset bugs - ALL classes now use correct offsets
## Root Cause Analysis (GPT5)

**Physical Layout Constraints**:
- Class 0: 8B = [1B header][7B payload] → offset 1 = 9B needed =  IMPOSSIBLE
- Class 1-6: >=16B = [1B header][15B+ payload] → offset 1 =  POSSIBLE
- Class 7: 1KB → offset 0 (compatibility)

**Correct Specification**:
- HAKMEM_TINY_HEADER_CLASSIDX != 0:
  - Class 0, 7: next at offset 0 (overwrites header when on freelist)
  - Class 1-6: next at offset 1 (after header)
- HAKMEM_TINY_HEADER_CLASSIDX == 0:
  - All classes: next at offset 0

**Previous Bug**:
- Attempted "ALL classes offset 1" unification
- Class 0 with offset 1 caused immediate SEGV (9B > 8B block size)
- Mixed 2-arg/3-arg API caused confusion

## Fixes Applied

### 1. Restored 3-Argument Box API (core/box/tiny_next_ptr_box.h)
```c
// Correct signatures
void tiny_next_write(int class_idx, void* base, void* next_value)
void* tiny_next_read(int class_idx, const void* base)

// Correct offset calculation
size_t offset = (class_idx == 0 || class_idx == 7) ? 0 : 1;
```
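
A runnable elaboration of that API under the same offset rule (illustrative; memcpy is used here because offset 1 is not pointer-aligned, and the real Box may handle this differently):

```c
#include <stddef.h>
#include <string.h>

static inline size_t tiny_next_off(int class_idx) {
    return (class_idx == 0 || class_idx == 7) ? 0 : 1;
}

void tiny_next_write(int class_idx, void* base, void* next_value) {
    memcpy((char*)base + tiny_next_off(class_idx), &next_value, sizeof(void*));
}

void* tiny_next_read(int class_idx, const void* base) {
    void* next;
    memcpy(&next, (const char*)base + tiny_next_off(class_idx), sizeof(void*));
    return next;
}
```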

### 2. Updated 123+ Call Sites Across 34 Files
- hakmem_tiny_hot_pop_v4.inc.h (4 locations)
- hakmem_tiny_fastcache.inc.h (3 locations)
- hakmem_tiny_tls_list.h (12 locations)
- superslab_inline.h (5 locations)
- tiny_fastcache.h (3 locations)
- ptr_trace.h (macro definitions)
- tls_sll_box.h (2 locations)
- + 27 additional files

Pattern: `tiny_next_read(base)` → `tiny_next_read(class_idx, base)`
Pattern: `tiny_next_write(base, next)` → `tiny_next_write(class_idx, base, next)`

### 3. Added Sentinel Detection Guards
- tiny_fast_push(): Block nodes with sentinel in ptr or ptr->next
- tls_list_push(): Block nodes with sentinel in ptr or ptr->next
- Defense-in-depth against remote free sentinel leakage

## Verification (GPT5 Report)

**Test Command**: `./out/release/bench_random_mixed_hakmem --iterations=70000`

**Results**:
- Main loop completed successfully
- Drain phase completed successfully
- NO SEGV (previous crash at iteration 66151 is FIXED)
- ℹ️ Final log: "tiny_alloc(1024) failed" is normal fallback to Mid/ACE layers

**Analysis**:
- Class 0 immediate SEGV: RESOLVED (correct offset 0 now used)
- 66K iteration crash: RESOLVED (offset consistency fixed)
- Box API conflicts: RESOLVED (unified 3-arg API)

## Technical Details

### Offset Logic Justification
```
Class 0:  8B block → next pointer (8B) fits ONLY at offset 0
Class 1: 16B block → next pointer (8B) fits at offset 1 (after 1B header)
Class 2: 32B block → next pointer (8B) fits at offset 1
...
Class 6: 512B block → next pointer (8B) fits at offset 1
Class 7: 1024B block → offset 0 for legacy compatibility
```

### Files Modified (Summary)
- Core API: `box/tiny_next_ptr_box.h`
- Hot paths: `hakmem_tiny_hot_pop*.inc.h`, `tiny_fastcache.h`
- TLS layers: `hakmem_tiny_tls_list.h`, `hakmem_tiny_tls_ops.h`
- SuperSlab: `superslab_inline.h`, `tiny_superslab_*.inc.h`
- Refill: `hakmem_tiny_refill.inc.h`, `tiny_refill_opt.h`
- Free paths: `tiny_free_magazine.inc.h`, `tiny_superslab_free.inc.h`
- Documentation: Multiple Phase E3 reports

## Remaining Work

None for Box API offset bugs - all structural issues resolved.

Future enhancements (non-critical):
- Periodic `grep -R '*(void**)' core/` to detect direct pointer access violations
- Enforce Box API usage via static analysis
- Document offset rationale in architecture docs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-13 06:50:20 +09:00
862e8ea7db Infrastructure and build updates
- Update build configuration and flags
- Add missing header files and dependencies
- Update TLS list implementation with proper scoping
- Fix various compilation warnings and issues
- Update debug ring and tiny allocation infrastructure
- Update benchmark results documentation

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
2025-11-11 21:49:05 +09:00
1b6624dec4 Fix debug build: gate Tiny observation snapshot in hakmem_tiny_stats.c behind HAKMEM_TINY_OBS_ENABLE to avoid incomplete TinyObsStats and missing globals. Now debug build passes, enabling C7 triage with fail‑fast guards. 2025-11-10 03:00:00 +09:00
83bb8624f6 Tiny: fix remote sentinel leak → SEGV; add defense-in-depth; PoolTLS: refill-boundary remote drain; build UX help; quickstart docs
Summary
- Fix SEGV root cause in Tiny random_mixed: TINY_REMOTE_SENTINEL leaked from Remote queue into freelist/TLS SLL.
- Clear/guard sentinel at the single boundary where Remote merges to freelist.
- Add minimal defense-in-depth in freelist_pop and TLS SLL pop.
- Silence verbose prints behind debug gates to reduce noise in release runs.
- Pool TLS: integrate Remote Queue drain at refill boundary to avoid unnecessary backend carve/OS calls when possible.
- DX: strengthen build.sh with help/list/verify and add docs/BUILDING_QUICKSTART.md.

Details
- core/superslab/superslab_inline.h: guard head/node against TINY_REMOTE_SENTINEL; sanitize node[0] when splicing local chain; only print diagnostics when debug guard is enabled.
- core/slab_handle.h: freelist_pop breaks on sentinel head (fail-fast under strict).
- core/tiny_alloc_fast_inline.h: TLS SLL pop breaks on sentinel head (rare branch).
- core/tiny_superslab_free.inc.h: sentinel scan log behind debug guard.
- core/pool_refill.c: try pool_remote_pop_chain() before backend carve in pool_refill_and_alloc().
- core/tiny_adaptive_sizing.c: default adaptive logs off; enable via HAKMEM_ADAPTIVE_LOG=1.
- build.sh: add help/list/verify; EXTRA_MAKEFLAGS passthrough; echo pinned flags.
- docs/BUILDING_QUICKSTART.md: add one‑pager for targets/flags/env/perf/strace.
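
Two of these changes sketched together (the sentinel value and helper names are assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#define TINY_REMOTE_SENTINEL ((void*)(uintptr_t)0x1u)  /* assumed value */

/* Defense-in-depth: freelist/TLS-SLL pop fails fast on a leaked sentinel
 * instead of dereferencing it. */
static inline void* freelist_pop_guarded(void** head) {
    void* node = *head;
    if (node == NULL || node == TINY_REMOTE_SENTINEL) return NULL;
    *head = *(void**)node;   /* next link stored in the block itself */
    return node;
}

/* Pool TLS refill boundary: drain the remote queue before carving from
 * the backend, avoiding OS calls when remote frees can satisfy the request. */
void* pool_remote_pop_chain(void);  /* assumed helpers */
void* pool_backend_carve(void);

static inline void* pool_refill_and_alloc(void) {
    void* chain = pool_remote_pop_chain();
    return chain ? chain : pool_backend_carve();
}
```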

Verification (high level)
- Tiny random_mixed 10k 256/1024: SEGV resolved; runs complete.
- Pool TLS 1T/4T perf: HAKMEM >= system (≈ +0.7% 1T, ≈ +2.9% 4T); syscall counts ~10–13.

Known issues (to address next)
- Tiny random_mixed perf is weak vs system:
  - 1T/500k/256: cycles/op ≈ 240 vs ~47 (≈5× slower), IPC ≈0.92, branch‑miss ≈11%.
  - 1T/500k/1024: cycles/op ≈ 149 vs ~53 (≈2.8× slower), IPC ≈0.82, branch‑miss ≈10.5%.
  - Hypothesis: frequent SuperSlab path for class7 (fast_cap=0), branchy refill/adopt, and hot-path divergence.
- Proposed next steps:
  - Introduce fast_cap>0 for class7 (bounded TLS SLL) and a simpler batch refill.
  - Add env‑gated Remote Side OFF for 1T A/B (reduce side-table and guards).
  - Revisit likely/unlikely and unify adopt boundary sequencing (drain→bind→acquire) for Tiny.
2025-11-09 16:49:34 +09:00
6b1382959c Phase 7-1 PoC: Region-ID Direct Lookup (+39%~+436% improvement!)
Implemented ultra-fast header-based free path that eliminates SuperSlab
lookup bottleneck (100+ cycles → 5-10 cycles).

## Key Changes

1. **Smart Headers** (core/tiny_region_id.h):
   - 1-byte header before each allocation stores class_idx
   - Memory layout: [Header: 1B] [User data: N-1B]
   - Overhead: <2% average (0% for Slab[0] using wasted padding)

2. **Ultra-Fast Allocation** (core/tiny_alloc_fast.inc.h):
   - Write header at base: *base = class_idx
   - Return user pointer: base + 1

3. **Ultra-Fast Free** (core/tiny_free_fast_v2.inc.h):
   - Read class_idx from header (ptr-1): 2-3 cycles
   - Push base (ptr-1) to TLS freelist: 3-5 cycles
   - Total: 5-10 cycles (vs 500+ cycles current!)

4. **Free Path Integration** (core/box/hak_free_api.inc.h):
   - Removed SuperSlab lookup from fast path
   - Direct header validation (no lookup needed!)

5. **Size Class Adjustment** (core/hakmem_tiny.h):
   - Max tiny size: 1023B (was 1024B)
   - 1024B requests → Mid allocator fallback
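
Taken together, items 1-3 boil down to this sketch (the TLS helpers are assumed names; the real fast paths live in the .inc.h files listed above):

```c
#include <stdint.h>
#include <stddef.h>

void* tls_freelist_pop(int class_idx);            /* assumed TLS ops */
void  tls_freelist_push(int class_idx, void* base);

/* Alloc: write the 1B header at base, hand the user base+1. */
static inline void* tiny_alloc_fast(int class_idx) {
    uint8_t* base = (uint8_t*)tls_freelist_pop(class_idx);
    if (!base) return NULL;
    base[0] = (uint8_t)class_idx;   /* Smart Header */
    return base + 1;
}

/* Free: recover class_idx from the header; no SuperSlab lookup at all. */
static inline void tiny_free_fast(void* ptr) {
    uint8_t* base = (uint8_t*)ptr - 1;
    tls_freelist_push((int)base[0], base);
}
```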

## Performance Results

| Size | Baseline | Phase 7 | Improvement |
|------|----------|---------|-------------|
| 128B | 1.22M | 6.54M | **+436%** 🚀 |
| 512B | 1.22M | 1.70M | **+39%** |
| 1023B | 1.22M | 1.92M | **+57%** |

## Build & Test

Enable Phase 7:
  make HEADER_CLASSIDX=1 bench_random_mixed_hakmem

Run benchmark:
  HAKMEM_TINY_USE_SUPERSLAB=1 ./bench_random_mixed_hakmem 10000 128 1234567

## Known Issues

- 1024B requests fall back to the Mid allocator (by design)
- Target 40-60M ops/s not yet reached (current: 1.7-6.5M)
- Further optimization needed (TLS capacity tuning, refill optimization)

## Credits

Design: ChatGPT Pro Ultrathink, Claude Code
Implementation: Claude Code with Task Agent Ultrathink support

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-08 03:18:17 +09:00
a430545820 Phase 6-2.8: SuperSlab modular refactoring (665 lines → 104 lines)
Purpose: eliminate the bloat of hakmem_tiny_superslab.h (500+ lines)

Implementation:
1. Created superslab_types.h
   - SuperSlab struct definitions (TinySlabMeta, SuperSlab)
   - Configuration constants (SUPERSLAB_SIZE_MAX, TINY_NUM_CLASSES_SS)
   - Compile-time assertions

2. Created superslab_inline.h
   - Consolidated hot-path inline functions
   - ss_slabs_capacity(), slab_index_for()
   - tiny_slab_base_for(), ss_remote_push()
   - _ss_remote_drain_to_freelist_unsafe()
   - Fail-fast validation helpers
   - ACE helpers (hak_now_ns, hak_tiny_superslab_next_lg)

3. Refactored hakmem_tiny_superslab.h
   - 665 lines → 104 lines (-84%)
   - Rewritten to contain only includes
   - Keeps only function declarations and extern declarations

Results:
- Build succeeds (libhakmem.so, larson_hakmem)
- Mid-Large allocator tests pass (3.98M ops/s)
⚠️ Tiny allocator freelist corruption bug remains unresolved (out of scope for this refactoring)

Notes:
- The freelist bug from Phase 6-2.6/6-2.7 still exists
- This refactoring targets maintainability improvements only
- The bug fix will be addressed in the next phase

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-07 23:05:33 +09:00