Phase 6-2.6: Fix slab_data_start() consistency in refill/validation paths

Problem:
- Phase 6-2.5 changed SUPERSLAB_SLAB0_DATA_OFFSET from 1024 → 2048
- This fixed the sizeof(SuperSlab) mismatch (the 1088-byte header no longer fit under a 1024-byte offset)
- But three locations still used the old slab_data_start() plus a manual offset

This caused:
- Address mismatch between the allocation-carving and validation paths
- Freelist-corruption false positives
- The 53-byte misalignment errors were resolved, but new errors appeared

Changes:
1. core/tiny_tls_guard.h:34
   - Validation: slab_data_start() → tiny_slab_base_for()
   - Ensures validation uses the same base address as allocation

2. core/hakmem_tiny_refill.inc.h:222
   - Allocation carving: Remove manual +2048 hack
   - Use canonical tiny_slab_base_for()

3. core/hakmem_tiny_refill.inc.h:275
   - Bump allocation: Remove duplicate slab_start calculation
   - Use existing base calculation with tiny_slab_base_for()

Result:
- Consistent use of tiny_slab_base_for() across all paths
- All code uses SUPERSLAB_SLAB0_DATA_OFFSET constant
- Remaining freelist corruption needs deeper investigation (not simple offset bug)

Related commits:
- d2f0d8458: Phase 6-2.5 (constants.h + 2048 offset)
- c9053a43a: Phase 6-2.3~6-2.4 (active counter + SEGV fixes)

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Moe Charm (CI)
Date: 2025-11-07 22:34:24 +09:00
Parent: d2f0d84584
Commit: b8ed2b05b4
4 changed files with 134 additions and 8 deletions


@@ -219,9 +219,7 @@ static inline int sll_refill_small_from_ss(int class_idx, int max_take) {
         // Track active blocks reserved into TLS SLL
         ss_active_inc(tls->ss);
     } else if (meta->used < meta->capacity) {
-        void* slab_start = slab_data_start(tls->ss, tls->slab_idx);
-        // ULTRATHINK FIX: Use aligned offset (2048) for slab 0
-        if (tls->slab_idx == 0) slab_start = (char*)slab_start + 2048;
+        void* slab_start = tiny_slab_base_for(tls->ss, tls->slab_idx);
         p = (char*)slab_start + ((size_t)meta->used * bs);
         meta->used++;
         // Track active blocks reserved into TLS SLL
@@ -274,9 +272,6 @@ static inline void* superslab_tls_bump_fast(int class_idx) {
     uint32_t chunk = (g_bump_chunk > 0 ? (uint32_t)g_bump_chunk : 1u);
     if (chunk > avail) chunk = avail;
     size_t bs = g_tiny_class_sizes[tls->ss->size_class];
-    void* slab_start = slab_data_start(tls->ss, tls->slab_idx);
-    // ULTRATHINK FIX: Use aligned offset (2048) for slab 0
-    if (tls->slab_idx == 0) slab_start = (char*)slab_start + 2048;
     uint8_t* base = tls->slab_base ? tls->slab_base : tiny_slab_base_for(tls->ss, tls->slab_idx);
     uint8_t* start = base + ((size_t)used * bs);
     // Reserve the chunk once in header (keeps remote-free accounting valid)