Files
hakmem/core/front/tiny_unified_cache.h
Moe Charm (CI) 860991ee50 Performance Measurement Framework: Unified Cache, TLS SLL, Shared Pool Analysis
## Summary

Implemented production-grade measurement infrastructure to quantify the top three bottleneck candidates:
- Unified cache hit/miss rates + refill cost
- TLS SLL usage patterns
- Shared pool lock contention distribution

## Changes

### 1. Unified Cache Metrics (tiny_unified_cache.h/c)
- Added atomic counters:
  - g_unified_cache_hits_global: successful cache pops
  - g_unified_cache_misses_global: refill triggers
  - g_unified_cache_refill_cycles_global: refill cost in CPU cycles (rdtsc)
- Instrumented `unified_cache_pop_or_refill()` to count hits
- Instrumented `unified_cache_refill()` with cycle measurement
- ENV-gated: HAKMEM_MEASURE_UNIFIED_CACHE=1 (default: off)
- Added unified_cache_print_measurements() output function
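The refill-cost counter described above accumulates rdtsc deltas around the refill call. A minimal sketch of that pattern, with hypothetical names (`measured_refill`, `avg_refill_cycles` are illustrative, not the actual instrumentation):

```c
#include <stdint.h>
#include <stdatomic.h>

static _Atomic uint64_t g_refill_cycles; /* summed rdtsc deltas */
static _Atomic uint64_t g_refill_count;  /* number of measured refills */

static inline uint64_t cycles_now(void) {
#if defined(__x86_64__) || defined(__i386__)
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
#else
    return 0; /* no cycle counter on this target; sketch only */
#endif
}

/* Wrap a refill call and accumulate its cost in cycles. */
static uint64_t measured_refill(void (*refill)(void)) {
    uint64_t t0 = cycles_now();
    refill();
    uint64_t dt = cycles_now() - t0;
    atomic_fetch_add_explicit(&g_refill_cycles, dt, memory_order_relaxed);
    atomic_fetch_add_explicit(&g_refill_count, 1, memory_order_relaxed);
    return dt;
}

static void dummy_refill(void) {}

/* Average refill cost, as a print function might compute it. */
static uint64_t avg_refill_cycles(void) {
    uint64_t n = atomic_load(&g_refill_count);
    return n ? atomic_load(&g_refill_cycles) / n : 0;
}
```

Note rdtsc is not serializing; for per-call precision a production version might use rdtscp or a fence, but for averages over millions of refills the raw counter is adequate.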

### 2. TLS SLL Metrics (tls_sll_box.h)
- Added atomic counters:
  - g_tls_sll_push_count_global: total pushes
  - g_tls_sll_pop_count_global: successful pops
  - g_tls_sll_pop_empty_count_global: empty list conditions
- Instrumented push/pop paths
- Added tls_sll_print_measurements() output function

### 3. Shared Pool Contention (hakmem_shared_pool_acquire.c)
- Added atomic counters:
  - g_sp_stage2_lock_acquired_global: Stage 2 locks
  - g_sp_stage3_lock_acquired_global: Stage 3 allocations
  - g_sp_alloc_lock_contention_global: total lock acquisitions
- Instrumented all pthread_mutex_lock calls in hot paths
- Added shared_pool_print_measurements() output function

### 4. Benchmark Integration (bench_random_mixed.c)
- Called all 3 print functions after benchmark loop
- The print functions emit output only when HAKMEM_MEASURE_UNIFIED_CACHE=1 is set

## Design Principles

- **Zero overhead when disabled**: Inline checks with __builtin_expect hints
- **Atomic relaxed memory order**: Minimal synchronization overhead
- **ENV-gated**: Single flag controls all measurements
- **Production-safe**: Compiles in release builds, no functional changes
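The first three principles combine into one pattern: a lazily cached getenv() check guarding a relaxed atomic increment. A minimal sketch (counter name `g_hits_demo` is illustrative):

```c
#include <stdlib.h>
#include <stdint.h>
#include <stdatomic.h>

static _Atomic uint64_t g_hits_demo;

/* Cached ENV check: getenv() runs once; afterwards the hot path pays
   one predicted-not-taken branch on a plain int. */
static inline int measure_enabled(void) {
    static int cached = -1;
    if (__builtin_expect(cached == -1, 0)) {
        const char* e = getenv("HAKMEM_MEASURE_UNIFIED_CACHE");
        cached = (e && *e && *e != '0') ? 1 : 0;
    }
    return cached;
}

/* Hot-path increment: relaxed order suffices for counters that are
   only read after the benchmark loop finishes. */
static inline void count_hit(void) {
    if (__builtin_expect(measure_enabled(), 0))
        atomic_fetch_add_explicit(&g_hits_demo, 1, memory_order_relaxed);
}
```

When the flag is unset, the compiled hot path reduces to a single well-predicted branch, which is why the counters are safe to leave in release builds.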

## Usage

```bash
HAKMEM_MEASURE_UNIFIED_CACHE=1 ./bench_allocators_hakmem bench_random_mixed_hakmem 1000000 256 42
```

Output (when enabled):
```
========================================
Unified Cache Statistics
========================================
Hits:        1234567
Misses:      56789
Hit Rate:    95.6%
Avg Refill Cycles: 1234

========================================
TLS SLL Statistics
========================================
Total Pushes:     1234567
Total Pops:       345678
Pop Empty Count:  12345
Hit Rate:         98.8%

========================================
Shared Pool Contention Statistics
========================================
Stage 2 Locks:    123456 (33%)
Stage 3 Locks:    234567 (67%)
Total Contention: 357 locks per 1M ops
```

## Next Steps

1. **Enable measurements** and run benchmarks to gather data
2. **Analyze miss rates**: Which bottleneck dominates?
3. **Profile hottest stage**: Focus optimization on top contributor
4. Possible targets:
   - Increase unified cache capacity if the miss rate exceeds 5%
   - Check whether the TLS SLL is effectively unused (potential legacy-code removal)
   - Evaluate whether the Stage 2 lock can be replaced with CAS
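If the Stage 2 mutex turns out to dominate, one candidate replacement is a lock-free LIFO free list popped via compare-and-swap. A hypothetical sketch only (names are illustrative, and a production version must additionally handle the ABA problem, e.g. with tagged pointers):

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct FreeNode { struct FreeNode* next; } FreeNode;
static _Atomic(FreeNode*) g_free_head;

/* Lock-free LIFO push: link the node in front of the current head. */
static void cas_push(FreeNode* n) {
    FreeNode* head = atomic_load_explicit(&g_free_head, memory_order_relaxed);
    do {
        n->next = head;
    } while (!atomic_compare_exchange_weak_explicit(
                 &g_free_head, &head, n,
                 memory_order_release, memory_order_relaxed));
}

/* Lock-free LIFO pop: returns NULL when the list is empty. */
static FreeNode* cas_pop(void) {
    FreeNode* head = atomic_load_explicit(&g_free_head, memory_order_acquire);
    while (head && !atomic_compare_exchange_weak_explicit(
                       &g_free_head, &head, head->next,
                       memory_order_acq_rel, memory_order_acquire))
        ;
    return head;
}
```

Whether this beats the mutex depends on the contention distribution the new counters are meant to reveal; under low contention an uncontended pthread mutex is already cheap.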

## Makefile Updates

Added core/box/tiny_route_box.o to:
- OBJS_BASE (test build)
- SHARED_OBJS (shared library)
- BENCH_HAKMEM_OBJS_BASE (benchmark)
- TINY_BENCH_OBJS_BASE (tiny benchmark)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 18:26:39 +09:00


// tiny_unified_cache.h - Phase 23: Unified Frontend Cache (tcache-style)
//
// Goal: Flatten 4-5 layer frontend cascade into single-layer array cache
// Target: +50-100% performance (20.3M → 30-40M ops/s)
//
// Design (Task-sensei analysis):
// - Replace: Ring → FastCache → SFC → TLS SLL (4 layers, 8-10 cache misses)
// - With: Single unified array cache per class (1 layer, 2-3 cache misses)
// - Fallback: Direct SuperSlab refill (skip intermediate layers)
//
// Performance:
// - Alloc: 2-3 cache misses (TLS access + array access)
// - Free: 2-3 cache misses (similar to System malloc tcache)
// - vs Current: 8-10 cache misses → 2-3 cache misses (70% reduction)
//
// ENV Variables:
// HAKMEM_TINY_UNIFIED_CACHE=1 # Enable Unified cache (default: 0, OFF)
// HAKMEM_TINY_UNIFIED_C0=128 # C0 cache size (default: 128)
// ...
// HAKMEM_TINY_UNIFIED_C7=128 # C7 cache size (default: 128)
#ifndef HAK_FRONT_TINY_UNIFIED_CACHE_H
#define HAK_FRONT_TINY_UNIFIED_CACHE_H
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdatomic.h>
#include "../hakmem_build_flags.h"
#include "../hakmem_tiny_config.h" // For TINY_NUM_CLASSES
#include "../box/ptr_type_box.h" // Phantom pointer types (BASE/USER)
#include "../box/tiny_front_config_box.h" // Phase 8-Step1: Config macros
// ============================================================================
// Performance Measurement: Unified Cache (ENV-gated)
// ============================================================================
// Global atomic counters for production performance measurement
// ENV: HAKMEM_MEASURE_UNIFIED_CACHE=1 to enable (default: OFF)
extern _Atomic uint64_t g_unified_cache_hits_global;
extern _Atomic uint64_t g_unified_cache_misses_global;
extern _Atomic uint64_t g_unified_cache_refill_cycles_global;
// Print statistics function
void unified_cache_print_measurements(void);
// Check if measurement is enabled (inline for hot path)
static inline int unified_cache_measure_check(void) {
    static int g_measure = -1;
    if (__builtin_expect(g_measure == -1, 0)) {
        const char* e = getenv("HAKMEM_MEASURE_UNIFIED_CACHE");
        g_measure = (e && *e && *e != '0') ? 1 : 0;
    }
    return g_measure;
}
// ============================================================================
// Unified Cache Structure (per class)
// ============================================================================
typedef struct {
    // slots holds BASE pointers (not user pointers).
    // The API handles them type-safely as hak_base_ptr_t; the internal
    // representation remains void*.
    void** slots;      // Dynamic array of BASE pointers (allocated at init)
    uint16_t head;     // Pop index (consumer)
    uint16_t tail;     // Push index (producer)
    uint16_t capacity; // Cache size (power of 2 for fast modulo: & (capacity-1))
    uint16_t mask;     // Capacity - 1 (for fast modulo)
} TinyUnifiedCache;
// ============================================================================
// External TLS Variables (defined in tiny_unified_cache.c)
// ============================================================================
extern __thread TinyUnifiedCache g_unified_cache[TINY_NUM_CLASSES];
// ============================================================================
// Metrics (Phase 23, optional for debugging)
// ============================================================================
#if !HAKMEM_BUILD_RELEASE
extern __thread uint64_t g_unified_cache_hit[TINY_NUM_CLASSES]; // Alloc hits
extern __thread uint64_t g_unified_cache_miss[TINY_NUM_CLASSES]; // Alloc misses
extern __thread uint64_t g_unified_cache_push[TINY_NUM_CLASSES]; // Free pushes
extern __thread uint64_t g_unified_cache_full[TINY_NUM_CLASSES]; // Free full (fallback to SuperSlab)
#endif
// ============================================================================
// ENV Control (cached, lazy init)
// ============================================================================
// Phase 8-Step1-Fix: Forward declaration only (implementation in .c file)
// Enable flag (default: 0, OFF) - implemented in tiny_unified_cache.c
int unified_cache_enabled(void);
// Per-class capacity (default: Hot_2048 strategy - optimized for 256B workload)
// Phase 23 Capacity Optimization Result: Hot_2048 = 14.63M ops/s (+43% vs baseline)
// Hot classes (C2/C3: 128B/256B) get 2048 slots, others get 64 slots
static inline size_t unified_capacity(int class_idx) {
    static size_t g_cap[TINY_NUM_CLASSES] = {0};
    if (__builtin_expect(g_cap[class_idx] == 0, 0)) {
        char env_name[64];
        snprintf(env_name, sizeof(env_name), "HAKMEM_TINY_UNIFIED_C%d", class_idx);
        const char* e = getenv(env_name);
        // Default: Hot_2048 strategy (C2/C3=2048, others=64)
        size_t default_cap = 64; // Cold classes
        if (class_idx == 2 || class_idx == 3) {
            default_cap = 2048; // Hot classes (128B, 256B)
        }
        g_cap[class_idx] = (e && *e) ? (size_t)atoi(e) : default_cap;
        // Clamp, then round up to a power of 2 (for fast modulo)
        if (g_cap[class_idx] < 32) g_cap[class_idx] = 32;
        if (g_cap[class_idx] > 4096) g_cap[class_idx] = 4096; // Increased limit for Hot_2048
        size_t pow2 = 32;
        while (pow2 < g_cap[class_idx]) pow2 *= 2;
        g_cap[class_idx] = pow2;
#if !HAKMEM_BUILD_RELEASE
        fprintf(stderr, "[Unified-INIT] C%d capacity = %zu (power of 2)\n", class_idx, g_cap[class_idx]);
        fflush(stderr);
#endif
    }
    return g_cap[class_idx];
}
// ============================================================================
// Init/Shutdown Forward Declarations
// ============================================================================
void unified_cache_init(void);
void unified_cache_shutdown(void);
void unified_cache_print_stats(void);
// ============================================================================
// Phase 23-D: Self-Contained Refill (Box U1 + Box U2 integration)
// ============================================================================
// Batch refill from SuperSlab (called on cache miss)
// Returns: BASE pointer (first block), or NULL if failed
void* unified_cache_refill(int class_idx);
// ============================================================================
// Ultra-Fast Pop/Push (2-3 cache misses, tcache-style)
// ============================================================================
// Pop from unified cache (alloc fast path)
// Returns: BASE pointer (wrapped hak_base_ptr_t; caller converts to USER)
static inline hak_base_ptr_t unified_cache_pop(int class_idx) {
    // Phase 8-Step1: Use config macro for dead code elimination in PGO mode
    // (tiny_front_config_box.h is already included at the top of this header)
    // Fast path: Unified cache disabled → return NULL immediately
    if (__builtin_expect(!TINY_FRONT_UNIFIED_CACHE_ENABLED, 0))
        return HAK_BASE_FROM_RAW(NULL);
    TinyUnifiedCache* cache = &g_unified_cache[class_idx]; // 1 cache miss (TLS)
    // Phase 8-Step3: Lazy init check (conditional in PGO mode)
    // PGO builds assume bench_fast_init() prewarmed cache → remove check (-1 branch)
#if !HAKMEM_TINY_FRONT_PGO
    // Lazy init check (once per thread, per class)
    if (__builtin_expect(cache->slots == NULL, 0)) {
        unified_cache_init(); // First call in this thread
        // Re-check after init (may fail if allocation failed)
        if (cache->slots == NULL)
            return HAK_BASE_FROM_RAW(NULL);
    }
#endif
    // Empty check
    if (__builtin_expect(cache->head == cache->tail, 0)) {
#if !HAKMEM_BUILD_RELEASE
        g_unified_cache_miss[class_idx]++;
#endif
        return HAK_BASE_FROM_RAW(NULL); // Empty
    }
    // Pop from head (consumer)
    void* base = cache->slots[cache->head]; // 1 cache miss (array access)
    cache->head = (cache->head + 1) & cache->mask; // Fast modulo (power of 2)
#if !HAKMEM_BUILD_RELEASE
    g_unified_cache_hit[class_idx]++;
#endif
    return HAK_BASE_FROM_RAW(base); // Return BASE pointer (2-3 cache misses total)
}
// Push to unified cache (free fast path)
// Input: BASE pointer (wrapped hak_base_ptr_t; caller must pass BASE, not USER)
// Returns: 1=SUCCESS, 0=FULL
static inline int unified_cache_push(int class_idx, hak_base_ptr_t base) {
    // Phase 8-Step1: Use config macro for dead code elimination in PGO mode
    // Fast path: Unified cache disabled → return 0 (not handled)
    if (__builtin_expect(!TINY_FRONT_UNIFIED_CACHE_ENABLED, 0)) return 0;
    TinyUnifiedCache* cache = &g_unified_cache[class_idx]; // 1 cache miss (TLS)
    void* base_raw = HAK_BASE_TO_RAW(base);
    // Phase 8-Step3: Lazy init check (conditional in PGO mode)
    // PGO builds assume bench_fast_init() prewarmed cache → remove check (-1 branch)
#if !HAKMEM_TINY_FRONT_PGO
    // Lazy init check (once per thread, per class)
    if (__builtin_expect(cache->slots == NULL, 0)) {
        unified_cache_init(); // First call in this thread
        // Re-check after init (may fail if allocation failed)
        if (cache->slots == NULL) return 0;
    }
#endif
    uint16_t next_tail = (cache->tail + 1) & cache->mask;
    // Full check (leave 1 slot empty to distinguish full/empty)
    if (__builtin_expect(next_tail == cache->head, 0)) {
#if !HAKMEM_BUILD_RELEASE
        g_unified_cache_full[class_idx]++;
#endif
        return 0; // Full
    }
    // Push to tail (producer)
    cache->slots[cache->tail] = base_raw; // 1 cache miss (array write)
    cache->tail = next_tail;
#if !HAKMEM_BUILD_RELEASE
    g_unified_cache_push[class_idx]++;
#endif
    return 1; // SUCCESS (2-3 cache misses total)
}
// ============================================================================
// Phase 23-D: Self-Contained Pop-or-Refill (tcache-style, single-layer)
// ============================================================================
// All-in-one: Pop from cache, or refill from SuperSlab on miss
// Returns: BASE pointer (wrapped hak_base_ptr_t), or NULL-wrapped if failed
// Design: Self-contained, bypasses all other frontend layers (Ring/FC/SFC/SLL)
static inline hak_base_ptr_t unified_cache_pop_or_refill(int class_idx) {
    // Phase 8-Step1: Use config macro for dead code elimination in PGO mode
    // Fast path: Unified cache disabled → NULL-wrapped (caller uses legacy cascade)
    if (__builtin_expect(!TINY_FRONT_UNIFIED_CACHE_ENABLED, 0))
        return HAK_BASE_FROM_RAW(NULL);
    TinyUnifiedCache* cache = &g_unified_cache[class_idx]; // 1 cache miss (TLS)
    // Phase 8-Step3: Lazy init check (conditional in PGO mode)
    // PGO builds assume bench_fast_init() prewarmed cache → remove check (-1 branch)
#if !HAKMEM_TINY_FRONT_PGO
    // Lazy init check (once per thread, per class)
    if (__builtin_expect(cache->slots == NULL, 0)) {
        unified_cache_init();
        if (cache->slots == NULL)
            return HAK_BASE_FROM_RAW(NULL);
    }
#endif
    // Try pop from cache (fast path)
    if (__builtin_expect(cache->head != cache->tail, 1)) {
        void* base = cache->slots[cache->head]; // 1 cache miss (array access)
        cache->head = (cache->head + 1) & cache->mask;
#if !HAKMEM_BUILD_RELEASE
        g_unified_cache_hit[class_idx]++;
#endif
        // Performance measurement: count cache hits
        if (__builtin_expect(unified_cache_measure_check(), 0)) {
            atomic_fetch_add_explicit(&g_unified_cache_hits_global, 1, memory_order_relaxed);
        }
        return HAK_BASE_FROM_RAW(base); // Hit! (2-3 cache misses total)
    }
    // Cache miss → Batch refill from SuperSlab
#if !HAKMEM_BUILD_RELEASE
    g_unified_cache_miss[class_idx]++;
#endif
    // Refill + return first block (BASE); wrap the raw pointer for type safety
    return HAK_BASE_FROM_RAW(unified_cache_refill(class_idx));
}
#endif // HAK_FRONT_TINY_UNIFIED_CACHE_H