Phase12 debug: restore SUPERSLAB constants/APIs, implement Box2 drain boundary, fix tiny_fast_pop to return BASE, honor TLS SLL toggle in alloc/free fast paths, add fail-fast stubs, and quiet capacity sentinel. Update CURRENT_TASK with A/B results (SLL-off stable; SLL-on crash).
324 CURRENT_TASK.md
@@ -1,152 +1,202 @@
-# Current Task: Phase E1-CORRECT - Lowest-Layer Pointer Box Implementation
-
-**Date**: 2025-11-13
-**Status**: 🔧 In Progress
-**Priority**: CRITICAL
-
----
-
-## 🎯 Goal
-
-In Phase E1-CORRECT, **strictly unify the layout specification and API of the tiny freelist next pointer, including its physical constraints**, and
-structurally eliminate the SEGVs caused by the C7/C0 special cases and direct `*(void**)` access.
+# CURRENT TASK (Phase 12: Shared SuperSlab Pool – Debug Phase)
+
+The shared SuperSlab pool implementation and the Box API boundary refactoring called for by the Phase 12 design have been merged.
+We are now in a debug phase whose goal is **eliminating the SEGV and stabilizing the allocator with the shared backend enabled**.
+
+The goals of this task are:
+
+- Bring the shared SuperSlab pool backend (`hakmem_shared_pool.[ch]` + `hak_tiny_alloc_superslab_backend_shared`) to a state where it
+  can be operated safely through the Box API (`hak_tiny_alloc_superslab_box`).
+- Confirm that `bench_random_mixed_hakmem` runs without SEGV, and
+  lock the shared backend in as a practical "minimal stable implementation".
+
+---
+
+## 2. Current Status Summary (already implemented)
+
+1. Box/API boundary
+   - Entry point from the tiny front end into the SuperSlab:
+     - unified on `hak_tiny_alloc_superslab_box(int class_idx)`.
+   - TLS SLL:
+     - all calls, including the slow path, go through the Box API in `tls_sll_box.h` (`tls_sll_pop(int, void**)` etc.).
+
+2. Shared SuperSlab pool implementation
+   - `hakmem_shared_pool.[ch]`:
+     - implements `SharedSuperSlabPool g_shared_pool` plus
+       `shared_pool_init`, `shared_pool_acquire_slab`, `shared_pool_release_slab`;
+     - manages SuperSlabs globally and provides a shared pool structure that assigns/releases a `class_idx` per slab.
+   - `hakmem_tiny_superslab.c`:
+     - `hak_tiny_alloc_superslab_backend_shared(int class_idx)`:
+       - obtains `(ss, slab_idx)` via `shared_pool_acquire_slab`;
+       - initializes uninitialized slabs with `superslab_init_slab`;
+       - geometry is `SUPERSLAB_SLAB0_DATA_OFFSET` + `slab_idx * SUPERSLAB_SLAB_USABLE_SIZE` + `used * stride`;
+       - returns blocks with a simple bump (see the sketch after this list).
+     - `hak_tiny_alloc_superslab_backend_legacy(int class_idx)`:
+       - confines the old per-class `g_superslab_heads` implementation to a static backend.
+     - `hak_tiny_alloc_superslab_box(int class_idx)`:
+       - updated to try the shared backend first and fall back to the legacy backend on failure.
+   - `make bench_random_mixed_hakmem`:
+     - builds successfully; the structural inconsistencies around the shared backend are resolved.
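For reference, a minimal sketch of the bump-style carve described above, assuming the project's `SuperSlab`/`TinySlabMeta` types and the geometry constants named in this list; the function name and exact field names are illustrative, not the actual implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch only: bump carve for one shared slab (types/constants come from project headers). */
static void* shared_backend_bump_sketch(SuperSlab* ss, TinySlabMeta* meta,
                                        int slab_idx, size_t stride) {
    if (meta->used >= meta->capacity) return NULL;           /* slab exhausted */
    uint8_t* slab_data = (uint8_t*)ss
                       + SUPERSLAB_SLAB0_DATA_OFFSET
                       + (size_t)slab_idx * SUPERSLAB_SLAB_USABLE_SIZE;
    void* block = slab_data + (size_t)meta->used * stride;   /* simple bump */
    meta->used++;                                             /* the free path must mirror this */
    return block;                                             /* BASE pointer */
}
```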
+3. Current problems (updated 2025-11-14)
+   - `bench_random_mixed_hakmem` hits an early SEGV while the SLL (TLS singly linked list) is enabled.
+   - With the SLL disabled (`HAKMEM_TINY_TLS_SLL=0`), both shared ON and OFF run to completion and report throughput.
+   - The primary crash cause is therefore most likely not the shared SuperSlab but an inconsistency on the SLL front path (BASE/USER/next handling).
+
+Everything below is the debug work needed to kill this SEGV and finish the "minimal stable shared SuperSlab pool".
+
+## 3. Concrete Debug-Phase Tasks
+
+### 3-1. Shared backend ON/OFF control and fault isolation
+
+1. Introduce and verify a shared-backend switch
+   - Add an environment variable or constant flag to `hak_tiny_alloc_superslab_box(int class_idx)` (see the sketch after this list):
+     - `HAKMEM_TINY_SS_SHARED=0` → legacy backend only (for regression checks)
+     - `HAKMEM_TINY_SS_SHARED=1` → the current shared backend (debug target)
+   - Procedure:
+     - run `bench_random_mixed_hakmem` pinned to the legacy backend and confirm the SEGV disappears, guaranteeing the problem is confined to the shared path.
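A minimal sketch of one way the `HAKMEM_TINY_SS_SHARED` switch could be wired, assuming the two backend functions named above; the caching helper and the default-when-unset behavior are assumptions:

```c
#include <stdlib.h>

void* hak_tiny_alloc_superslab_backend_shared(int class_idx);   /* existing backends */
void* hak_tiny_alloc_superslab_backend_legacy(int class_idx);

/* Sketch only: resolve HAKMEM_TINY_SS_SHARED once; "shared by default when unset" is an assumption. */
static int shared_backend_enabled(void) {
    static int cached = -1;
    if (cached < 0) {
        const char* e = getenv("HAKMEM_TINY_SS_SHARED");
        cached = (e && e[0] == '0') ? 0 : 1;
    }
    return cached;
}

static void* hak_tiny_alloc_superslab_box_sketch(int class_idx) {
    if (shared_backend_enabled()) {
        void* p = hak_tiny_alloc_superslab_backend_shared(class_idx);
        if (p) return p;                                        /* shared first (debug target) */
    }
    return hak_tiny_alloc_superslab_backend_legacy(class_idx);  /* legacy-only / fallback */
}
```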
|
### 3-2. shared slab メタデータの一貫性検証
|
||||||
|
|
||||||
|
2. `shared_pool_acquire_slab` と `hak_tiny_alloc_superslab_backend_shared` の整合確認
|
||||||
|
- 確認事項:
|
||||||
|
- `class_idx` 割当時に:
|
||||||
|
- `meta->class_idx` が正しく `class_idx` にセットされているか。
|
||||||
|
- `superslab_init_slab` 呼び出し後、`capacity > 0`, `used == 0`, `freelist == NULL` になっているか。
|
||||||
|
- `meta->used++` / `total_active_blocks++` の更新が free パスの期待と一致しているか。
|
||||||
|
- 必要なら:
|
||||||
|
- debug build で `assert(meta->class_idx == class_idx)` 等を追加して早期検出。
|
||||||
|
|
||||||
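A minimal sketch of the suggested debug-build checks, assuming the `TinySlabMeta` fields named in the checklist above (`class_idx`, `capacity`, `used`, `freelist`):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch only: invariants right after acquiring and initializing a shared slab. */
static void shared_slab_debug_check(const TinySlabMeta* meta, int class_idx) {
#ifndef NDEBUG
    assert(meta->class_idx == class_idx);   /* class assignment recorded correctly */
    assert(meta->capacity > 0);             /* superslab_init_slab has run */
    assert(meta->used == 0);                /* freshly initialized slab */
    assert(meta->freelist == NULL);         /* nothing carved yet */
#else
    (void)meta; (void)class_idx;
#endif
}
```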
|
3. free/refill 経路との整合性
|
||||||
|
- 対象ファイル:
|
||||||
|
- `tiny_superslab_free.inc.h`
|
||||||
|
- `hakmem_tiny_free.inc`
|
||||||
|
- `hakmem_tiny_bg_spill.c`
|
||||||
|
- 確認事項:
|
||||||
|
- pointer→SuperSlab→TinySlabMeta 解決ロジックが:
|
||||||
|
- `meta->class_idx` ベースで正しい class を判定しているか。
|
||||||
|
- shared/legacy の違いに依存せず動作するか。
|
||||||
|
- 空 slab 判定時に:
|
||||||
|
- `shared_pool_release_slab` を呼ぶ条件と `meta->used == 0` の扱いが矛盾していないか。
|
||||||
|
- 必要な修正:
|
||||||
|
- shared slab 専用の「空になった slab の返却」パスを導入し、UNASSIGNED への戻しを一元化。
|
||||||
|
|
||||||
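A minimal sketch of a single choke point for returning emptied shared slabs, using `shared_pool_release_slab` as named above; the `owned_by_shared_pool` flag and the `TINY_CLASS_UNASSIGNED` sentinel are hypothetical names for the "UNASSIGNED" state:

```c
/* Sketch only: one place that resets and returns an emptied shared slab. */
static void shared_slab_release_if_empty_sketch(SuperSlab* ss, TinySlabMeta* meta, int slab_idx) {
    if (meta->used != 0) return;                 /* only fully empty slabs are returned */
    if (!meta->owned_by_shared_pool) return;     /* legacy slabs keep their old path */
    meta->freelist  = NULL;                      /* drop stale carve/freelist state */
    meta->class_idx = TINY_CLASS_UNASSIGNED;     /* hypothetical "no class bound" sentinel */
    shared_pool_release_slab(ss, slab_idx);      /* hand the slab back to the shared pool */
}
```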
|
### 3-3. Superslab registry / LRU / shared pool の連携確認
|
||||||
|
|
||||||
|
4. Registry & LRU 連携
|
||||||
|
- `hakmem_super_registry.c` の:
|
||||||
|
- `hak_super_register`, `hak_super_unregister`
|
||||||
|
- `hak_ss_lru_pop/push`
|
||||||
|
- 確認:
|
||||||
|
- shared pool で確保した SuperSlab も registry に登録されていること。
|
||||||
|
- LRU 経由再利用時に `class_idx`/slab 割付が破綻していないこと。
|
||||||
|
- 必要に応じて:
|
||||||
|
- shared pool 管理下の SuperSlab を区別するフラグや、再利用前のメタリセットを追加。
|
||||||
|
|
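A minimal sketch of how shared-pool SuperSlabs could go through the same registry as legacy ones, using `hak_super_register` as named above; `shared_pool_map_superslab` and the `from_shared_pool` flag are hypothetical:

```c
/* Sketch only: registration of a freshly mapped shared-pool SuperSlab. */
static SuperSlab* shared_pool_new_superslab_sketch(void) {
    SuperSlab* ss = shared_pool_map_superslab();   /* hypothetical: mmap + header init */
    if (!ss) return NULL;
    ss->from_shared_pool = 1;                      /* hypothetical flag so free/LRU paths can tell the two kinds apart */
    hak_super_register(ss);                        /* same registration as legacy SuperSlabs */
    return ss;
}
```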
+### 3-4. Direct SEGV analysis
+
+5. Capture a stack trace with gdb (done)
+   - Example commands:
+     - `cd hakmem`
+     - `gdb --args ./bench_random_mixed_hakmem`
+     - `run`
+     - `bt`
+   - Result (excerpt):
+     - SEGV inside `hak_tiny_alloc_fast_wrapper()`. It does not reproduce with the SLL disabled, so the focus narrows to BASE/USER/next consistency on the SLL path.
+
+### 3-5. Finalizing the stable shared SuperSlab pool
+
+6. Post-fix verification
+   - With `HAKMEM_TINY_SS_SHARED=1` (shared enabled):
+     - `bench_random_mixed_hakmem` runs to completion without SEGV.
+   - Via simple statistics/logging:
+     - shared SuperSlabs are genuinely shared across multiple classes;
+     - no metadata corruption or bogus frees occur.
+   - With that:
+     - the "Phase 12 Shared SuperSlab Pool minimal stable version" is done.
+
+### 2-3. Keeping TLS / SLL / Refill consistent
+
+**Scope: `core/hakmem_tiny_refill.inc.h`, `core/hakmem_tiny_tls_ops.h`, `core/hakmem_tiny.c` (local changes only)**
+
+6. **Phase 12 update of `sll_refill_small_from_ss`** (see the sketch after this list)
+   - Inputs: `class_idx`, `max_take`
+   - Behavior:
+     - acquire or bind a slab for that `class_idx` from the shared pool;
+     - push up to `max_take` blocks from the slab's freelist/bump into the TLS SLL.
+   - Here:
+     - **do not read `g_sll_cap_override`** (keep it easy to retire later);
+     - concentrate the cap computation in `sll_cap_for_class(class_idx, mag_cap)`.
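A minimal sketch of the refill shape described in item 6, using `tls_sll_push` and `sll_cap_for_class` as named in this document; `shared_slab_carve_block` is a hypothetical stand-in for the slab-side freelist/bump carve, and `TINY_TLS_MAG_CAP` is the magazine cap referenced elsewhere in the code base:

```c
#include <stdint.h>

/* Sketch only: Phase 12 shape of sll_refill_small_from_ss. */
static int sll_refill_small_from_ss_sketch(int class_idx, int max_take) {
    uint32_t cap = sll_cap_for_class(class_idx, TINY_TLS_MAG_CAP);  /* single source of the cap */
    int taken = 0;
    while (taken < max_take) {
        void* base = shared_slab_carve_block(class_idx);  /* hypothetical: carve one BASE block */
        if (!base) break;                                 /* shared pool has nothing more right now */
        if (!tls_sll_push(class_idx, base, cap)) {
            /* SLL already at cap; stop. Returning 'base' to the slab is outside this sketch. */
            break;
        }
        taken++;
    }
    return taken;   /* number of blocks now in the TLS SLL */
}
```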
|
7. **tiny_fast_refill_and_take / TLS SLL 経路の一貫性**
|
||||||
|
- `tiny_fast_refill_and_take` が:
|
||||||
|
- まず TLS SLL / FastCache を見る。
|
||||||
|
- 足りなければ `sll_refill_small_from_ss` を必ず経由するよう整理(旧経路の枝刈り)。
|
||||||
|
- ただし:
|
||||||
|
- 既存インラインとの整合性を崩さないよう、**分岐削除は段階的に**行う。
|
||||||
|
|
||||||
|
### 2-4. g_sll_cap_override の段階的無効化(安全版)
|
||||||
|
|
||||||
|
8. **参照経路のサニタイズ(非破壊)**
|
||||||
|
- `hakmem_tiny_intel.inc`, `hakmem_tiny_background.inc`, `hakmem_tiny_init.inc` などで:
|
||||||
|
- g_sll_cap_override を書き換える経路を `#if 0` or コメントアウトで停止。
|
||||||
|
- 配列定義自体はそのまま残し、リンク切れを防ぐ。
|
||||||
|
- `sll_cap_for_class()` は Phase12 ポリシーに従う実装に置き換える。
|
||||||
|
- これにより:
|
||||||
|
- 実際の SLL cap は sll_cap_for_class 経由に統一されるが、
|
||||||
|
- ABI/シンボル互換性は保持される。
|
||||||
|
|
||||||
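A minimal sketch of a Phase 12-style `sll_cap_for_class` that ignores `g_sll_cap_override`; the clamp values are illustrative assumptions, not the project's tuned numbers:

```c
#include <stdint.h>

/* Sketch only: derive the SLL cap from the magazine cap alone. */
static inline uint32_t sll_cap_for_class_sketch(int class_idx, uint32_t mag_cap) {
    (void)class_idx;                 /* per-class tuning could hook in here later */
    uint32_t cap = mag_cap;          /* no g_sll_cap_override lookup */
    if (cap == 0)  cap = 1;          /* never fully disable via a zero cap */
    if (cap > 256) cap = 256;        /* illustrative upper clamp */
    return cap;
}
```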
|
9. **ビルド & アセンブリ確認**
|
||||||
|
- `make bench_random_mixed_hakmem`
|
||||||
|
- `gdb -q ./bench_random_mixed_hakmem -ex "disassemble sll_refill_small_from_ss" -ex "quit"`
|
||||||
|
- 確認項目:
|
||||||
|
- g_sll_cap_override 更新経路は実際には使われていない。
|
||||||
|
- sll_refill_small_from_ss が shared SuperSlab pool を用いる単一ロジックになっている。
|
||||||
|
|
||||||
|
### 2-5. Shared Pool 実装の検証とバグ切り分け
|
||||||
|
|
||||||
|
10. **機能検証**
|
||||||
|
- `bench_random_mixed_hakmem` を実行:
|
||||||
|
- SIGSEGV / abort の有無
|
||||||
|
- ログと `HAKMEM_TINY_SUPERSLAB_TRACE` で shared pool の挙動を確認。
|
||||||
|
|
||||||
|
11. **パフォーマンス確認**
|
||||||
|
- 目標: 設計書の期待値に対し、オーダーとして妥当な速度になっているか:
|
||||||
|
- 9M → 70–90M ops/s のレンジを狙う(まずは退行していないことを確認)。
|
||||||
|
|
||||||
|
12. **問題発生時の切り分け**
|
||||||
|
- クラッシュ/不正挙動があれば:
|
||||||
|
- まず shared pool 周辺(slab class_idx, freelist 管理, owner/bind/unbind)に絞って原因特定。
|
||||||
|
- Tiny front-end (bump, SLL, HotMag 等) を疑うのはその後。
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
-## ✅ Official Specification (final)
-
-Strictly define the storage offset of the freelist next pointer per size class, with and without the HAKMEM_TINY_HEADER_CLASSIDX flag.
-
-### 1. Header enabled (HAKMEM_TINY_HEADER_CLASSIDX != 0)
-
-Physical layout and next offset for each class:
-
-- Class 0:
-  - Physical: `[1B header][7B payload]` (8B total)
-  - Constraint: an 8B pointer does not fit at offset 1 (1 + 8 = 9B > 8B) → impossible
-  - Spec:
-    - while on the freelist, the header is overwritten and next is stored at `base + 0`
-    - the header is not needed while the block is free, so this is safe
-  - next offset: `0`
-
-- Class 1–6:
-  - Physical: `[1B header][payload >= 8B]`
-  - Spec:
-    - the header is preserved
-    - the freelist next is stored immediately after the header, at `base + 1`
-  - next offset: `1`
-
-- Class 7:
-  - Large blocks / an area that was already special-cased
-  - Considering the implementation, compatibility, and headroom, treating the freelist next as `base + 0` is the sensible choice
-  - next offset: `0`
-
-Summary:
-
-- When `HAKMEM_TINY_HEADER_CLASSIDX != 0`:
-  - Class 0,7 → `next_off = 0`
-  - Class 1–6 → `next_off = 1`
+## 3. Implementation Rules (reconfirmed)
+
+- Do not rewrite hakmem_tiny.c wholesale with write_to_file.
+- Changes are limited to:
+  - `#if 0` / commenting out
+  - local, function-level replacements
+  - adding new shared-pool functions
+  - re-pointing existing call sites
+  and every step is built and verified incrementally.
-### 2. Header disabled (HAKMEM_TINY_HEADER_CLASSIDX == 0)
-
-- All classes:
-  - no header
-  - the freelist next stays at `base + 0` as before
-  - next offset: always `0`
-
----
-
-## 📦 Box / API Unification Policy
-
-Unify the duplicated and contradictory Box API / tiny_nextptr implementations under the following policy.
+---
+
+## 4. Recent Changes (appended 2025-11-14)
+
+- Restored constants/APIs and fixed missing declarations (`SUPERSLAB_LG_*`, ownership APIs, active dec, fail-fast stubs, etc.).
+- Unified the Box 2 drain boundary on `_ss_remote_drain_to_freelist_unsafe()`.
+- Fixed `tiny_fast_pop()` returning a USER pointer (it now returns BASE).
+- Made the SLL toggle effective (see the sketch after this list):
+  - in free v2 (header variants), go straight to the slow path when `g_tls_sll_enable==0`;
+  - in the alloc fast path, skip the TLS SLL pop entirely when the SLL is disabled.
+- Treat a `tls_sll_box` capacity > 1<<20 as "unlimited" (suppresses the excessive warnings).
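A minimal sketch of how the alloc fast path can honor the SLL toggle, using `g_tls_sll_enable` and the Box API `tls_sll_pop` named in this document; the declarations, the `int` return convention (nonzero on success), and the surrounding fast-path shape are assumptions:

```c
extern int g_tls_sll_enable;                 /* 0 when HAKMEM_TINY_TLS_SLL=0 */
int tls_sll_pop(int class_idx, void** out);  /* Box API; assumed to return nonzero on success */

/* Sketch only: TLS SLL is consulted only when the toggle is on. */
static inline void* tiny_alloc_fast_sketch(int class_idx) {
    if (g_tls_sll_enable) {
        void* base = NULL;
        if (tls_sll_pop(class_idx, &base)) {
            return base;                     /* BASE pointer, as required by the Box contract */
        }
    }
    return NULL;                             /* fall through to the slow path */
}
```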
+Interim guidance (so shared-pool validation can proceed first)
+
+- Confirm stable runs with `HAKMEM_TINY_TLS_SLL=0` for both shared ON and OFF, isolating whether the shared path itself SEGVs.
+
+Next moves (minimal fixes on the SLL route)
+
+1) Force every SLL push/pop call through the Box API (BASE pointers only). Ban direct writes and hand-computed next offsets.
+2) Add a lightweight debug-only guard to `tls_sll_box` (slab range + stride consistency) to pinpoint the first corrupted node.
+3) If needed, temporarily set `HAKMEM_TINY_SLL_C03_ONLY=1` (use the SLL only for C0–C3) to narrow the scope and localize the cause quickly.
-### Authoritative Logic
-
-Define a single "next offset computation" plus safe load/store helpers as the source of truth:
-
-- `size_t tiny_next_off(int class_idx)`:
-  - `#if HAKMEM_TINY_HEADER_CLASSIDX`
-    - `return (class_idx == 0 || class_idx == 7) ? 0 : 1;`
-  - `#else`
-    - `return 0;`
-- `void* tiny_next_load(const void* base, int class_idx)`
-- `void tiny_next_store(void* base, int class_idx, void* next)`
-
-All next access is funneled through these three (a C sketch follows below).
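A minimal sketch of the three authoritative helpers specified above; the signatures follow this document, and `memcpy` is used so that neither the offset-0 nor the (unaligned) offset-1 case relies on a `*(void**)` access:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch only: single source of truth for the next-pointer offset. */
static inline size_t tiny_next_off(int class_idx) {
#if HAKMEM_TINY_HEADER_CLASSIDX
    return (class_idx == 0 || class_idx == 7) ? 0 : 1;  /* C0/C7: offset 0, C1-C6: offset 1 */
#else
    (void)class_idx;
    return 0;                                            /* headerless build: always offset 0 */
#endif
}

static inline void* tiny_next_load(const void* base, int class_idx) {
    void* next;
    memcpy(&next, (const uint8_t*)base + tiny_next_off(class_idx), sizeof next);
    return next;
}

static inline void tiny_next_store(void* base, int class_idx, void* next) {
    memcpy((uint8_t*)base + tiny_next_off(class_idx), &next, sizeof next);
}
```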
-### box/tiny_next_ptr_box.h
-
-- Include `tiny_nextptr.h` (or use the identical logic) and
-  provide a thin "Box API" wrapper / macros on top of it:
-
-Example (final shape; a compilable sketch follows below):
-
-- `static inline void tiny_next_write(int class_idx, void* base, void* next)`
-  - calls `tiny_next_store(base, class_idx, next)` internally
-- `static inline void* tiny_next_read(int class_idx, const void* base)`
-  - calls `tiny_next_load(base, class_idx)` internally
-- `#define TINY_NEXT_WRITE(cls, base, next) tiny_next_write((cls), (base), (next))`
-- `#define TINY_NEXT_READ(cls, base) tiny_next_read((cls), (base))`
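A minimal sketch of the thin Box wrapper described above, delegating to the helpers from the previous sketch:

```c
/* Sketch only: box/tiny_next_ptr_box.h as a zero-cost wrapper over tiny_nextptr.h. */
static inline void tiny_next_write(int class_idx, void* base, void* next) {
    tiny_next_store(base, class_idx, next);
}

static inline void* tiny_next_read(int class_idx, const void* base) {
    return tiny_next_load(base, class_idx);
}

#define TINY_NEXT_WRITE(cls, base, next) tiny_next_write((cls), (base), (next))
#define TINY_NEXT_READ(cls, base)        tiny_next_read((cls), (base))
```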
-Key points:
-
-- The API takes `class_idx` and the base pointer explicitly.
-- The next-offset branch (0 or 1) stays inside the API; conditionals at call sites are forbidden.
-- Direct access via `*(void**)` is forbidden (targeted by grep audits).
-
----
-
-## 🚫 Forbidden
-
-- In code from Phase E1-CORRECT onward, the following are forbidden:
-  - direct next reads/writes such as `*(void**)ptr`
-  - logic that decides the next offset locally, such as `class_idx == 7 ? 0 : 1`
-  - comments or implementations that assume "ALL classes offset 1"
-
-These are to be removed or fixed as encountered.
-
----
-
-## 🔍 Current Problems and Countermeasures
-
-### Previous problems
-
-- There was a period when `tiny_nextptr.h` was implemented as "ALL classes → offset 1":
-  - writing at offset 1 for Class 0 → immediate SEGV
-  - it also induced inconsistencies for Class 7 and some call sites
-- `box/tiny_next_ptr_box.h` and `tiny_nextptr.h` drifted into different specifications,
-  - and coexisted with no clear answer as to which one was correct
-
-### Countermeasures (what this document mandates)
-
-1. Fix the official specification as above (Class 0,7 → 0 / Class 1–6 → 1).
-2. Fix `tiny_nextptr.h` to match this specification.
-3. Reorganize `box/tiny_next_ptr_box.h` as a Box API built on `tiny_nextptr.h`.
-4. Remove direct offset computation and `*(void**)` from all tiny/TLS/fastcache/refill/SLL code,
-   unifying everything behind the `tiny_next_*` / `TINY_NEXT_*` APIs.
-5. Audit with grep:
-   - detect violations with `grep -R '\*\(void\*\*\)' core/`
-   - fix any remaining occurrences one by one
-
----
-
-## ✅ Success Criteria
-
-- 0 SEGVs across all sizes (C0–C7) in 10K–100K-iteration stress tests
-- No offset-1 access for Class 0 remains (verified by grep/review)
-- Class 7 next access is also consistently through the Box API (treated as offset 0)
-- Every next-access path:
-  - is written only via `tiny_next_*`, following the "next_off(class_idx)" spec
-- Even during future refactors, reading this CURRENT_TASK.md
-  makes it unambiguous where next lives and how it must be accessed
-
----
-
-## 📌 Implementation Task Summary (for developers)
-
-- [ ] Fix tiny_nextptr.h to the spec above (0/1 mixed: C0,7→0 / C1-6→1)
-- [ ] Reorganize box/tiny_next_ptr_box.h as a wrapper over tiny_nextptr.h
-- [ ] Remove hand-written next-offset logic from existing code and unify on the Box API
-- [ ] grep for direct `*(void**)` uses and replace the necessary ones with tiny_next_*
-- [ ] Verify stability with Release/Debug builds plus long-running tests
-- [ ] Remove "ALL classes offset 1"-style misstatements from docs and comments
6
Makefile
6
Makefile
@ -179,7 +179,7 @@ LDFLAGS += $(EXTRA_LDFLAGS)
|
|||||||
|
|
||||||
# Targets
|
# Targets
|
||||||
TARGET = test_hakmem
|
TARGET = test_hakmem
|
||||||
OBJS_BASE = hakmem.o hakmem_config.o hakmem_tiny_config.o hakmem_ucb1.o hakmem_bigcache.o hakmem_pool.o hakmem_l25_pool.o hakmem_site_rules.o hakmem_tiny.o hakmem_tiny_superslab.o tiny_sticky.o tiny_remote.o tiny_publish.o tiny_debug_ring.o hakmem_tiny_magazine.o hakmem_tiny_stats.o hakmem_tiny_sfc.o hakmem_tiny_query.o hakmem_tiny_rss.o hakmem_tiny_registry.o hakmem_tiny_remote_target.o hakmem_tiny_bg_spill.o tiny_adaptive_sizing.o hakmem_mid_mt.o hakmem_super_registry.o hakmem_shared_pool.o hakmem_elo.o hakmem_batch.o hakmem_p2.o hakmem_sizeclass_dist.o hakmem_evo.o hakmem_debug.o hakmem_sys.o hakmem_whale.o hakmem_policy.o hakmem_ace.o hakmem_ace_stats.o hakmem_prof.o hakmem_learner.o hakmem_size_hist.o hakmem_learn_log.o hakmem_syscall.o hakmem_ace_metrics.o hakmem_ace_ucb1.o hakmem_ace_controller.o tiny_fastcache.o core/box/superslab_expansion_box.o core/box/integrity_box.o core/box/free_local_box.o core/box/free_remote_box.o core/box/free_publish_box.o core/box/mailbox_box.o core/box/front_gate_box.o core/box/front_gate_classifier.o core/box/capacity_box.o core/box/carve_push_box.o core/box/prewarm_box.o core/link_stubs.o test_hakmem.o
|
OBJS_BASE = hakmem.o hakmem_config.o hakmem_tiny_config.o hakmem_ucb1.o hakmem_bigcache.o hakmem_pool.o hakmem_l25_pool.o hakmem_site_rules.o hakmem_tiny.o hakmem_tiny_superslab.o tiny_sticky.o tiny_remote.o tiny_publish.o tiny_debug_ring.o hakmem_tiny_magazine.o hakmem_tiny_stats.o hakmem_tiny_sfc.o hakmem_tiny_query.o hakmem_tiny_rss.o hakmem_tiny_registry.o hakmem_tiny_remote_target.o hakmem_tiny_bg_spill.o tiny_adaptive_sizing.o hakmem_mid_mt.o hakmem_super_registry.o hakmem_shared_pool.o hakmem_elo.o hakmem_batch.o hakmem_p2.o hakmem_sizeclass_dist.o hakmem_evo.o hakmem_debug.o hakmem_sys.o hakmem_whale.o hakmem_policy.o hakmem_ace.o hakmem_ace_stats.o hakmem_prof.o hakmem_learner.o hakmem_size_hist.o hakmem_learn_log.o hakmem_syscall.o hakmem_ace_metrics.o hakmem_ace_ucb1.o hakmem_ace_controller.o tiny_fastcache.o core/box/superslab_expansion_box.o core/box/integrity_box.o core/box/free_local_box.o core/box/free_remote_box.o core/box/free_publish_box.o core/box/mailbox_box.o core/box/front_gate_box.o core/box/front_gate_classifier.o core/box/capacity_box.o core/box/carve_push_box.o core/box/prewarm_box.o core/link_stubs.o core/tiny_failfast.o test_hakmem.o
|
||||||
OBJS = $(OBJS_BASE)
|
OBJS = $(OBJS_BASE)
|
||||||
|
|
||||||
# Shared library
|
# Shared library
|
||||||
@ -203,7 +203,7 @@ endif
|
|||||||
# Benchmark targets
|
# Benchmark targets
|
||||||
BENCH_HAKMEM = bench_allocators_hakmem
|
BENCH_HAKMEM = bench_allocators_hakmem
|
||||||
BENCH_SYSTEM = bench_allocators_system
|
BENCH_SYSTEM = bench_allocators_system
|
||||||
BENCH_HAKMEM_OBJS_BASE = hakmem.o hakmem_config.o hakmem_tiny_config.o hakmem_ucb1.o hakmem_bigcache.o hakmem_pool.o hakmem_l25_pool.o hakmem_site_rules.o hakmem_tiny.o hakmem_tiny_superslab.o tiny_sticky.o tiny_remote.o tiny_publish.o tiny_debug_ring.o hakmem_tiny_magazine.o hakmem_tiny_stats.o hakmem_tiny_sfc.o hakmem_tiny_query.o hakmem_tiny_rss.o hakmem_tiny_registry.o hakmem_tiny_remote_target.o hakmem_tiny_bg_spill.o tiny_adaptive_sizing.o hakmem_mid_mt.o hakmem_super_registry.o hakmem_elo.o hakmem_batch.o hakmem_p2.o hakmem_sizeclass_dist.o hakmem_evo.o hakmem_debug.o hakmem_sys.o hakmem_whale.o hakmem_policy.o hakmem_ace.o hakmem_ace_stats.o hakmem_prof.o hakmem_learner.o hakmem_size_hist.o hakmem_learn_log.o hakmem_syscall.o hakmem_ace_metrics.o hakmem_ace_ucb1.o hakmem_ace_controller.o tiny_fastcache.o core/box/superslab_expansion_box.o core/box/integrity_box.o core/box/free_local_box.o core/box/free_remote_box.o core/box/free_publish_box.o core/box/mailbox_box.o core/box/front_gate_box.o core/box/front_gate_classifier.o core/box/capacity_box.o core/box/carve_push_box.o core/box/prewarm_box.o core/link_stubs.o bench_allocators_hakmem.o
|
BENCH_HAKMEM_OBJS_BASE = hakmem.o hakmem_config.o hakmem_tiny_config.o hakmem_ucb1.o hakmem_bigcache.o hakmem_pool.o hakmem_l25_pool.o hakmem_site_rules.o hakmem_tiny.o hakmem_tiny_superslab.o tiny_sticky.o tiny_remote.o tiny_publish.o tiny_debug_ring.o hakmem_tiny_magazine.o hakmem_tiny_stats.o hakmem_tiny_sfc.o hakmem_tiny_query.o hakmem_tiny_rss.o hakmem_tiny_registry.o hakmem_tiny_remote_target.o hakmem_tiny_bg_spill.o tiny_adaptive_sizing.o hakmem_mid_mt.o hakmem_super_registry.o hakmem_elo.o hakmem_batch.o hakmem_p2.o hakmem_sizeclass_dist.o hakmem_evo.o hakmem_debug.o hakmem_sys.o hakmem_whale.o hakmem_policy.o hakmem_ace.o hakmem_ace_stats.o hakmem_prof.o hakmem_learner.o hakmem_size_hist.o hakmem_learn_log.o hakmem_syscall.o hakmem_ace_metrics.o hakmem_ace_ucb1.o hakmem_ace_controller.o tiny_fastcache.o core/box/superslab_expansion_box.o core/box/integrity_box.o core/box/free_local_box.o core/box/free_remote_box.o core/box/free_publish_box.o core/box/mailbox_box.o core/box/front_gate_box.o core/box/front_gate_classifier.o core/box/capacity_box.o core/box/carve_push_box.o core/box/prewarm_box.o core/link_stubs.o core/tiny_failfast.o bench_allocators_hakmem.o
|
||||||
BENCH_HAKMEM_OBJS = $(BENCH_HAKMEM_OBJS_BASE)
|
BENCH_HAKMEM_OBJS = $(BENCH_HAKMEM_OBJS_BASE)
|
||||||
ifeq ($(POOL_TLS_PHASE1),1)
|
ifeq ($(POOL_TLS_PHASE1),1)
|
||||||
BENCH_HAKMEM_OBJS += pool_tls.o pool_refill.o pool_tls_arena.o pool_tls_registry.o pool_tls_remote.o
|
BENCH_HAKMEM_OBJS += pool_tls.o pool_refill.o pool_tls_arena.o pool_tls_registry.o pool_tls_remote.o
|
||||||
@ -380,7 +380,7 @@ test-box-refactor: box-refactor
|
|||||||
./larson_hakmem 10 8 128 1024 1 12345 4
|
./larson_hakmem 10 8 128 1024 1 12345 4
|
||||||
|
|
||||||
# Phase 4: Tiny Pool benchmarks (properly linked with hakmem)
|
# Phase 4: Tiny Pool benchmarks (properly linked with hakmem)
|
||||||
TINY_BENCH_OBJS_BASE = hakmem.o hakmem_config.o hakmem_tiny_config.o hakmem_ucb1.o hakmem_bigcache.o hakmem_pool.o hakmem_l25_pool.o hakmem_site_rules.o hakmem_tiny.o hakmem_tiny_superslab.o core/box/superslab_expansion_box.o core/box/integrity_box.o core/box/mailbox_box.o core/box/front_gate_box.o core/box/front_gate_classifier.o core/box/free_local_box.o core/box/free_remote_box.o core/box/free_publish_box.o core/box/capacity_box.o core/box/carve_push_box.o core/box/prewarm_box.o tiny_sticky.o tiny_remote.o tiny_publish.o tiny_debug_ring.o hakmem_tiny_magazine.o hakmem_tiny_stats.o hakmem_tiny_sfc.o hakmem_tiny_query.o hakmem_tiny_rss.o hakmem_tiny_registry.o hakmem_tiny_remote_target.o hakmem_tiny_bg_spill.o tiny_adaptive_sizing.o hakmem_mid_mt.o hakmem_super_registry.o hakmem_shared_pool.o hakmem_elo.o hakmem_batch.o hakmem_p2.o hakmem_sizeclass_dist.o hakmem_evo.o hakmem_debug.o hakmem_sys.o hakmem_whale.o hakmem_policy.o hakmem_ace.o hakmem_ace_stats.o hakmem_prof.o hakmem_learner.o hakmem_size_hist.o hakmem_learn_log.o hakmem_syscall.o hakmem_ace_metrics.o hakmem_ace_ucb1.o hakmem_ace_controller.o tiny_fastcache.o core/link_stubs.o
|
TINY_BENCH_OBJS_BASE = hakmem.o hakmem_config.o hakmem_tiny_config.o hakmem_ucb1.o hakmem_bigcache.o hakmem_pool.o hakmem_l25_pool.o hakmem_site_rules.o hakmem_tiny.o hakmem_tiny_superslab.o core/box/superslab_expansion_box.o core/box/integrity_box.o core/box/mailbox_box.o core/box/front_gate_box.o core/box/front_gate_classifier.o core/box/free_local_box.o core/box/free_remote_box.o core/box/free_publish_box.o core/box/capacity_box.o core/box/carve_push_box.o core/box/prewarm_box.o tiny_sticky.o tiny_remote.o tiny_publish.o tiny_debug_ring.o hakmem_tiny_magazine.o hakmem_tiny_stats.o hakmem_tiny_sfc.o hakmem_tiny_query.o hakmem_tiny_rss.o hakmem_tiny_registry.o hakmem_tiny_remote_target.o hakmem_tiny_bg_spill.o tiny_adaptive_sizing.o hakmem_mid_mt.o hakmem_super_registry.o hakmem_shared_pool.o hakmem_elo.o hakmem_batch.o hakmem_p2.o hakmem_sizeclass_dist.o hakmem_evo.o hakmem_debug.o hakmem_sys.o hakmem_whale.o hakmem_policy.o hakmem_ace.o hakmem_ace_stats.o hakmem_prof.o hakmem_learner.o hakmem_size_hist.o hakmem_learn_log.o hakmem_syscall.o hakmem_ace_metrics.o hakmem_ace_ucb1.o hakmem_ace_controller.o tiny_fastcache.o core/link_stubs.o core/tiny_failfast.o
|
||||||
TINY_BENCH_OBJS = $(TINY_BENCH_OBJS_BASE)
|
TINY_BENCH_OBJS = $(TINY_BENCH_OBJS_BASE)
|
||||||
ifeq ($(POOL_TLS_PHASE1),1)
|
ifeq ($(POOL_TLS_PHASE1),1)
|
||||||
TINY_BENCH_OBJS += pool_tls.o pool_refill.o core/pool_tls_arena.o pool_tls_registry.o pool_tls_remote.o
|
TINY_BENCH_OBJS += pool_tls.o pool_refill.o core/pool_tls_arena.o pool_tls_registry.o pool_tls_remote.o
|
||||||
|
|||||||
@ -18,7 +18,7 @@ static _Atomic int g_box_cap_initialized = 0;
|
|||||||
// External declarations (from adaptive_sizing and hakmem_tiny)
|
// External declarations (from adaptive_sizing and hakmem_tiny)
|
||||||
extern __thread TLSCacheStats g_tls_cache_stats[TINY_NUM_CLASSES]; // TLS variable!
|
extern __thread TLSCacheStats g_tls_cache_stats[TINY_NUM_CLASSES]; // TLS variable!
|
||||||
extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
|
extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
|
||||||
extern int g_sll_cap_override[TINY_NUM_CLASSES];
|
extern int g_sll_cap_override[TINY_NUM_CLASSES]; // LEGACY (Phase12以降は参照しない/互換用ダミー)
|
||||||
extern int g_sll_multiplier;
|
extern int g_sll_multiplier;
|
||||||
|
|
||||||
// ============================================================================
|
// ============================================================================
|
||||||
@ -52,12 +52,7 @@ uint32_t box_cap_get(int class_idx) {
|
|||||||
// Compute SLL capacity using same logic as sll_cap_for_class()
|
// Compute SLL capacity using same logic as sll_cap_for_class()
|
||||||
// This centralizes the capacity calculation
|
// This centralizes the capacity calculation
|
||||||
|
|
||||||
// Check for override
|
// Phase12: g_sll_cap_override はレガシー互換ダミー。capacity_box では無視する。
|
||||||
if (g_sll_cap_override[class_idx] > 0) {
|
|
||||||
uint32_t cap = (uint32_t)g_sll_cap_override[class_idx];
|
|
||||||
if (cap > TINY_TLS_MAG_CAP) cap = TINY_TLS_MAG_CAP;
|
|
||||||
return cap;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Get base capacity from adaptive sizing
|
// Get base capacity from adaptive sizing
|
||||||
uint32_t cap = g_tls_cache_stats[class_idx].capacity;
|
uint32_t cap = g_tls_cache_stats[class_idx].capacity;
|
||||||
|
|||||||
@ -5,24 +5,19 @@ core/box/carve_push_box.o: core/box/carve_push_box.c \
|
|||||||
core/box/../superslab/superslab_types.h \
|
core/box/../superslab/superslab_types.h \
|
||||||
core/hakmem_tiny_superslab_constants.h \
|
core/hakmem_tiny_superslab_constants.h \
|
||||||
core/box/../superslab/superslab_inline.h \
|
core/box/../superslab/superslab_inline.h \
|
||||||
core/box/../superslab/superslab_types.h core/tiny_debug_ring.h \
|
core/box/../superslab/superslab_types.h core/box/../tiny_debug_ring.h \
|
||||||
core/hakmem_build_flags.h core/tiny_remote.h \
|
core/box/../tiny_remote.h core/box/../hakmem_tiny_superslab_constants.h \
|
||||||
core/box/../superslab/../tiny_box_geometry.h \
|
|
||||||
core/box/../superslab/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/box/../superslab/../hakmem_tiny_config.h \
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h \
|
|
||||||
core/hakmem_tiny_config.h core/tiny_nextptr.h \
|
|
||||||
core/box/../tiny_debug_ring.h core/box/../tiny_remote.h \
|
|
||||||
core/box/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/box/../hakmem_tiny_config.h core/box/../hakmem_tiny_superslab.h \
|
core/box/../hakmem_tiny_config.h core/box/../hakmem_tiny_superslab.h \
|
||||||
core/box/../hakmem_tiny_integrity.h core/box/../hakmem_tiny.h \
|
core/box/../hakmem_tiny_integrity.h core/box/../hakmem_tiny.h \
|
||||||
core/box/carve_push_box.h core/box/capacity_box.h core/box/tls_sll_box.h \
|
core/box/carve_push_box.h core/box/capacity_box.h core/box/tls_sll_box.h \
|
||||||
core/box/../ptr_trace.h core/box/../hakmem_build_flags.h \
|
core/box/../hakmem_build_flags.h core/box/../tiny_remote.h \
|
||||||
core/box/../tiny_remote.h core/box/../tiny_region_id.h \
|
core/box/../tiny_region_id.h core/box/../tiny_box_geometry.h \
|
||||||
core/box/../tiny_box_geometry.h core/box/../ptr_track.h \
|
core/box/../hakmem_tiny_config.h core/box/../ptr_track.h \
|
||||||
core/box/../ptr_track.h core/box/../tiny_refill_opt.h \
|
core/box/../ptr_track.h core/box/../ptr_trace.h \
|
||||||
core/box/../tiny_region_id.h core/box/../box/tls_sll_box.h \
|
core/box/../box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
||||||
core/box/../tiny_box_geometry.h
|
core/tiny_nextptr.h core/hakmem_build_flags.h \
|
||||||
|
core/box/../tiny_refill_opt.h core/box/../tiny_region_id.h \
|
||||||
|
core/box/../box/tls_sll_box.h core/box/../tiny_box_geometry.h
|
||||||
core/box/../hakmem_tiny.h:
|
core/box/../hakmem_tiny.h:
|
||||||
core/box/../hakmem_build_flags.h:
|
core/box/../hakmem_build_flags.h:
|
||||||
core/box/../hakmem_trace.h:
|
core/box/../hakmem_trace.h:
|
||||||
@ -33,15 +28,6 @@ core/box/../superslab/superslab_types.h:
|
|||||||
core/hakmem_tiny_superslab_constants.h:
|
core/hakmem_tiny_superslab_constants.h:
|
||||||
core/box/../superslab/superslab_inline.h:
|
core/box/../superslab/superslab_inline.h:
|
||||||
core/box/../superslab/superslab_types.h:
|
core/box/../superslab/superslab_types.h:
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/hakmem_build_flags.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/box/../superslab/../tiny_box_geometry.h:
|
|
||||||
core/box/../superslab/../hakmem_tiny_superslab_constants.h:
|
|
||||||
core/box/../superslab/../hakmem_tiny_config.h:
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/box/../tiny_debug_ring.h:
|
core/box/../tiny_debug_ring.h:
|
||||||
core/box/../tiny_remote.h:
|
core/box/../tiny_remote.h:
|
||||||
core/box/../hakmem_tiny_superslab_constants.h:
|
core/box/../hakmem_tiny_superslab_constants.h:
|
||||||
@ -52,13 +38,18 @@ core/box/../hakmem_tiny.h:
|
|||||||
core/box/carve_push_box.h:
|
core/box/carve_push_box.h:
|
||||||
core/box/capacity_box.h:
|
core/box/capacity_box.h:
|
||||||
core/box/tls_sll_box.h:
|
core/box/tls_sll_box.h:
|
||||||
core/box/../ptr_trace.h:
|
|
||||||
core/box/../hakmem_build_flags.h:
|
core/box/../hakmem_build_flags.h:
|
||||||
core/box/../tiny_remote.h:
|
core/box/../tiny_remote.h:
|
||||||
core/box/../tiny_region_id.h:
|
core/box/../tiny_region_id.h:
|
||||||
core/box/../tiny_box_geometry.h:
|
core/box/../tiny_box_geometry.h:
|
||||||
|
core/box/../hakmem_tiny_config.h:
|
||||||
core/box/../ptr_track.h:
|
core/box/../ptr_track.h:
|
||||||
core/box/../ptr_track.h:
|
core/box/../ptr_track.h:
|
||||||
|
core/box/../ptr_trace.h:
|
||||||
|
core/box/../box/tiny_next_ptr_box.h:
|
||||||
|
core/hakmem_tiny_config.h:
|
||||||
|
core/tiny_nextptr.h:
|
||||||
|
core/hakmem_build_flags.h:
|
||||||
core/box/../tiny_refill_opt.h:
|
core/box/../tiny_refill_opt.h:
|
||||||
core/box/../tiny_region_id.h:
|
core/box/../tiny_region_id.h:
|
||||||
core/box/../box/tls_sll_box.h:
|
core/box/../box/tls_sll_box.h:
|
||||||
|
|||||||
@ -3,13 +3,10 @@ core/box/free_local_box.o: core/box/free_local_box.c \
|
|||||||
core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
|
core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
|
||||||
core/superslab/superslab_inline.h core/superslab/superslab_types.h \
|
core/superslab/superslab_inline.h core/superslab/superslab_types.h \
|
||||||
core/tiny_debug_ring.h core/hakmem_build_flags.h core/tiny_remote.h \
|
core/tiny_debug_ring.h core/hakmem_build_flags.h core/tiny_remote.h \
|
||||||
core/superslab/../tiny_box_geometry.h \
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/superslab/../hakmem_tiny_config.h \
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
|
||||||
core/tiny_nextptr.h core/tiny_debug_ring.h core/tiny_remote.h \
|
|
||||||
core/hakmem_tiny_superslab_constants.h core/box/free_publish_box.h \
|
core/hakmem_tiny_superslab_constants.h core/box/free_publish_box.h \
|
||||||
core/hakmem_tiny.h core/hakmem_trace.h core/hakmem_tiny_mini_mag.h
|
core/hakmem_tiny.h core/hakmem_trace.h core/hakmem_tiny_mini_mag.h \
|
||||||
|
core/box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
||||||
|
core/tiny_nextptr.h
|
||||||
core/box/free_local_box.h:
|
core/box/free_local_box.h:
|
||||||
core/hakmem_tiny_superslab.h:
|
core/hakmem_tiny_superslab.h:
|
||||||
core/superslab/superslab_types.h:
|
core/superslab/superslab_types.h:
|
||||||
@ -19,16 +16,11 @@ core/superslab/superslab_types.h:
|
|||||||
core/tiny_debug_ring.h:
|
core/tiny_debug_ring.h:
|
||||||
core/hakmem_build_flags.h:
|
core/hakmem_build_flags.h:
|
||||||
core/tiny_remote.h:
|
core/tiny_remote.h:
|
||||||
core/superslab/../tiny_box_geometry.h:
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h:
|
|
||||||
core/superslab/../hakmem_tiny_config.h:
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/hakmem_tiny_superslab_constants.h:
|
core/hakmem_tiny_superslab_constants.h:
|
||||||
core/box/free_publish_box.h:
|
core/box/free_publish_box.h:
|
||||||
core/hakmem_tiny.h:
|
core/hakmem_tiny.h:
|
||||||
core/hakmem_trace.h:
|
core/hakmem_trace.h:
|
||||||
core/hakmem_tiny_mini_mag.h:
|
core/hakmem_tiny_mini_mag.h:
|
||||||
|
core/box/tiny_next_ptr_box.h:
|
||||||
|
core/hakmem_tiny_config.h:
|
||||||
|
core/tiny_nextptr.h:
|
||||||
|
|||||||
@ -3,11 +3,6 @@ core/box/free_publish_box.o: core/box/free_publish_box.c \
|
|||||||
core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
|
core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
|
||||||
core/superslab/superslab_inline.h core/superslab/superslab_types.h \
|
core/superslab/superslab_inline.h core/superslab/superslab_types.h \
|
||||||
core/tiny_debug_ring.h core/hakmem_build_flags.h core/tiny_remote.h \
|
core/tiny_debug_ring.h core/hakmem_build_flags.h core/tiny_remote.h \
|
||||||
core/superslab/../tiny_box_geometry.h \
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/superslab/../hakmem_tiny_config.h \
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
|
||||||
core/tiny_nextptr.h core/tiny_debug_ring.h core/tiny_remote.h \
|
|
||||||
core/hakmem_tiny_superslab_constants.h core/hakmem_tiny.h \
|
core/hakmem_tiny_superslab_constants.h core/hakmem_tiny.h \
|
||||||
core/hakmem_trace.h core/hakmem_tiny_mini_mag.h core/tiny_route.h \
|
core/hakmem_trace.h core/hakmem_tiny_mini_mag.h core/tiny_route.h \
|
||||||
core/tiny_ready.h core/hakmem_tiny.h core/box/mailbox_box.h
|
core/tiny_ready.h core/hakmem_tiny.h core/box/mailbox_box.h
|
||||||
@ -20,14 +15,6 @@ core/superslab/superslab_types.h:
|
|||||||
core/tiny_debug_ring.h:
|
core/tiny_debug_ring.h:
|
||||||
core/hakmem_build_flags.h:
|
core/hakmem_build_flags.h:
|
||||||
core/tiny_remote.h:
|
core/tiny_remote.h:
|
||||||
core/superslab/../tiny_box_geometry.h:
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h:
|
|
||||||
core/superslab/../hakmem_tiny_config.h:
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/hakmem_tiny_superslab_constants.h:
|
core/hakmem_tiny_superslab_constants.h:
|
||||||
core/hakmem_tiny.h:
|
core/hakmem_tiny.h:
|
||||||
core/hakmem_trace.h:
|
core/hakmem_trace.h:
|
||||||
|
|||||||
@ -3,11 +3,6 @@ core/box/free_remote_box.o: core/box/free_remote_box.c \
|
|||||||
core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
|
core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
|
||||||
core/superslab/superslab_inline.h core/superslab/superslab_types.h \
|
core/superslab/superslab_inline.h core/superslab/superslab_types.h \
|
||||||
core/tiny_debug_ring.h core/hakmem_build_flags.h core/tiny_remote.h \
|
core/tiny_debug_ring.h core/hakmem_build_flags.h core/tiny_remote.h \
|
||||||
core/superslab/../tiny_box_geometry.h \
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/superslab/../hakmem_tiny_config.h \
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
|
||||||
core/tiny_nextptr.h core/tiny_debug_ring.h core/tiny_remote.h \
|
|
||||||
core/hakmem_tiny_superslab_constants.h core/box/free_publish_box.h \
|
core/hakmem_tiny_superslab_constants.h core/box/free_publish_box.h \
|
||||||
core/hakmem_tiny.h core/hakmem_trace.h core/hakmem_tiny_mini_mag.h
|
core/hakmem_tiny.h core/hakmem_trace.h core/hakmem_tiny_mini_mag.h
|
||||||
core/box/free_remote_box.h:
|
core/box/free_remote_box.h:
|
||||||
@ -19,14 +14,6 @@ core/superslab/superslab_types.h:
|
|||||||
core/tiny_debug_ring.h:
|
core/tiny_debug_ring.h:
|
||||||
core/hakmem_build_flags.h:
|
core/hakmem_build_flags.h:
|
||||||
core/tiny_remote.h:
|
core/tiny_remote.h:
|
||||||
core/superslab/../tiny_box_geometry.h:
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h:
|
|
||||||
core/superslab/../hakmem_tiny_config.h:
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/hakmem_tiny_superslab_constants.h:
|
core/hakmem_tiny_superslab_constants.h:
|
||||||
core/box/free_publish_box.h:
|
core/box/free_publish_box.h:
|
||||||
core/hakmem_tiny.h:
|
core/hakmem_tiny.h:
|
||||||
|
|||||||
@ -3,14 +3,15 @@ core/box/front_gate_box.o: core/box/front_gate_box.c \
|
|||||||
core/hakmem_trace.h core/hakmem_tiny_mini_mag.h \
|
core/hakmem_trace.h core/hakmem_tiny_mini_mag.h \
|
||||||
core/tiny_alloc_fast_sfc.inc.h core/hakmem_tiny.h \
|
core/tiny_alloc_fast_sfc.inc.h core/hakmem_tiny.h \
|
||||||
core/box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
core/box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
||||||
core/tiny_nextptr.h core/box/tls_sll_box.h core/box/../ptr_trace.h \
|
core/tiny_nextptr.h core/box/tls_sll_box.h \
|
||||||
core/box/../hakmem_tiny_config.h core/box/../hakmem_build_flags.h \
|
core/box/../hakmem_tiny_config.h core/box/../hakmem_build_flags.h \
|
||||||
core/box/../tiny_remote.h core/box/../tiny_region_id.h \
|
core/box/../tiny_remote.h core/box/../tiny_region_id.h \
|
||||||
core/box/../hakmem_build_flags.h core/box/../tiny_box_geometry.h \
|
core/box/../hakmem_build_flags.h core/box/../tiny_box_geometry.h \
|
||||||
core/box/../hakmem_tiny_superslab_constants.h \
|
core/box/../hakmem_tiny_superslab_constants.h \
|
||||||
core/box/../hakmem_tiny_config.h core/box/../ptr_track.h \
|
core/box/../hakmem_tiny_config.h core/box/../ptr_track.h \
|
||||||
core/box/../hakmem_tiny_integrity.h core/box/../hakmem_tiny.h \
|
core/box/../hakmem_tiny_integrity.h core/box/../hakmem_tiny.h \
|
||||||
core/box/../ptr_track.h core/box/ptr_conversion_box.h
|
core/box/../ptr_track.h core/box/../ptr_trace.h \
|
||||||
|
core/box/ptr_conversion_box.h
|
||||||
core/box/front_gate_box.h:
|
core/box/front_gate_box.h:
|
||||||
core/hakmem_tiny.h:
|
core/hakmem_tiny.h:
|
||||||
core/hakmem_build_flags.h:
|
core/hakmem_build_flags.h:
|
||||||
@ -22,7 +23,6 @@ core/box/tiny_next_ptr_box.h:
|
|||||||
core/hakmem_tiny_config.h:
|
core/hakmem_tiny_config.h:
|
||||||
core/tiny_nextptr.h:
|
core/tiny_nextptr.h:
|
||||||
core/box/tls_sll_box.h:
|
core/box/tls_sll_box.h:
|
||||||
core/box/../ptr_trace.h:
|
|
||||||
core/box/../hakmem_tiny_config.h:
|
core/box/../hakmem_tiny_config.h:
|
||||||
core/box/../hakmem_build_flags.h:
|
core/box/../hakmem_build_flags.h:
|
||||||
core/box/../tiny_remote.h:
|
core/box/../tiny_remote.h:
|
||||||
@ -35,4 +35,5 @@ core/box/../ptr_track.h:
|
|||||||
core/box/../hakmem_tiny_integrity.h:
|
core/box/../hakmem_tiny_integrity.h:
|
||||||
core/box/../hakmem_tiny.h:
|
core/box/../hakmem_tiny.h:
|
||||||
core/box/../ptr_track.h:
|
core/box/../ptr_track.h:
|
||||||
|
core/box/../ptr_trace.h:
|
||||||
core/box/ptr_conversion_box.h:
|
core/box/ptr_conversion_box.h:
|
||||||
|
|||||||
@ -7,13 +7,8 @@ core/box/front_gate_classifier.o: core/box/front_gate_classifier.c \
|
|||||||
core/box/../superslab/superslab_types.h \
|
core/box/../superslab/superslab_types.h \
|
||||||
core/hakmem_tiny_superslab_constants.h \
|
core/hakmem_tiny_superslab_constants.h \
|
||||||
core/box/../superslab/superslab_inline.h \
|
core/box/../superslab/superslab_inline.h \
|
||||||
core/box/../superslab/superslab_types.h core/tiny_debug_ring.h \
|
core/box/../superslab/superslab_types.h core/box/../tiny_debug_ring.h \
|
||||||
core/hakmem_build_flags.h core/tiny_remote.h \
|
core/box/../tiny_remote.h core/box/../superslab/superslab_inline.h \
|
||||||
core/box/../superslab/../tiny_box_geometry.h \
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h \
|
|
||||||
core/hakmem_tiny_config.h core/tiny_nextptr.h \
|
|
||||||
core/box/../tiny_debug_ring.h core/box/../tiny_remote.h \
|
|
||||||
core/box/../superslab/superslab_inline.h \
|
|
||||||
core/box/../hakmem_build_flags.h core/box/../hakmem_internal.h \
|
core/box/../hakmem_build_flags.h core/box/../hakmem_internal.h \
|
||||||
core/box/../hakmem.h core/box/../hakmem_config.h \
|
core/box/../hakmem.h core/box/../hakmem_config.h \
|
||||||
core/box/../hakmem_features.h core/box/../hakmem_sys.h \
|
core/box/../hakmem_features.h core/box/../hakmem_sys.h \
|
||||||
@ -31,13 +26,6 @@ core/box/../superslab/superslab_types.h:
|
|||||||
core/hakmem_tiny_superslab_constants.h:
|
core/hakmem_tiny_superslab_constants.h:
|
||||||
core/box/../superslab/superslab_inline.h:
|
core/box/../superslab/superslab_inline.h:
|
||||||
core/box/../superslab/superslab_types.h:
|
core/box/../superslab/superslab_types.h:
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/hakmem_build_flags.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/box/../superslab/../tiny_box_geometry.h:
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/box/../tiny_debug_ring.h:
|
core/box/../tiny_debug_ring.h:
|
||||||
core/box/../tiny_remote.h:
|
core/box/../tiny_remote.h:
|
||||||
core/box/../superslab/superslab_inline.h:
|
core/box/../superslab/superslab_inline.h:
|
||||||
|
|||||||
@ -3,13 +3,8 @@ core/box/mailbox_box.o: core/box/mailbox_box.c core/box/mailbox_box.h \
|
|||||||
core/hakmem_tiny_superslab_constants.h core/superslab/superslab_inline.h \
|
core/hakmem_tiny_superslab_constants.h core/superslab/superslab_inline.h \
|
||||||
core/superslab/superslab_types.h core/tiny_debug_ring.h \
|
core/superslab/superslab_types.h core/tiny_debug_ring.h \
|
||||||
core/hakmem_build_flags.h core/tiny_remote.h \
|
core/hakmem_build_flags.h core/tiny_remote.h \
|
||||||
core/superslab/../tiny_box_geometry.h \
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/superslab/../hakmem_tiny_config.h \
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
|
|
||||||
core/tiny_nextptr.h core/tiny_debug_ring.h core/tiny_remote.h \
|
|
||||||
core/hakmem_tiny_superslab_constants.h core/hakmem_tiny.h \
|
core/hakmem_tiny_superslab_constants.h core/hakmem_tiny.h \
|
||||||
core/hakmem_trace.h core/hakmem_tiny_mini_mag.h
|
core/hakmem_trace.h core/hakmem_tiny_mini_mag.h core/tiny_debug_ring.h
|
||||||
core/box/mailbox_box.h:
|
core/box/mailbox_box.h:
|
||||||
core/hakmem_tiny_superslab.h:
|
core/hakmem_tiny_superslab.h:
|
||||||
core/superslab/superslab_types.h:
|
core/superslab/superslab_types.h:
|
||||||
@ -19,15 +14,8 @@ core/superslab/superslab_types.h:
|
|||||||
core/tiny_debug_ring.h:
|
core/tiny_debug_ring.h:
|
||||||
core/hakmem_build_flags.h:
|
core/hakmem_build_flags.h:
|
||||||
core/tiny_remote.h:
|
core/tiny_remote.h:
|
||||||
core/superslab/../tiny_box_geometry.h:
|
|
||||||
core/superslab/../hakmem_tiny_superslab_constants.h:
|
|
||||||
core/superslab/../hakmem_tiny_config.h:
|
|
||||||
core/superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/hakmem_tiny_superslab_constants.h:
|
core/hakmem_tiny_superslab_constants.h:
|
||||||
core/hakmem_tiny.h:
|
core/hakmem_tiny.h:
|
||||||
core/hakmem_trace.h:
|
core/hakmem_trace.h:
|
||||||
core/hakmem_tiny_mini_mag.h:
|
core/hakmem_tiny_mini_mag.h:
|
||||||
|
core/tiny_debug_ring.h:
|
||||||
|
|||||||
@ -5,15 +5,8 @@ core/box/prewarm_box.o: core/box/prewarm_box.c core/box/../hakmem_tiny.h \
|
|||||||
core/box/../superslab/superslab_types.h \
|
core/box/../superslab/superslab_types.h \
|
||||||
core/hakmem_tiny_superslab_constants.h \
|
core/hakmem_tiny_superslab_constants.h \
|
||||||
core/box/../superslab/superslab_inline.h \
|
core/box/../superslab/superslab_inline.h \
|
||||||
core/box/../superslab/superslab_types.h core/tiny_debug_ring.h \
|
core/box/../superslab/superslab_types.h core/box/../tiny_debug_ring.h \
|
||||||
core/hakmem_build_flags.h core/tiny_remote.h \
|
core/box/../tiny_remote.h core/box/../hakmem_tiny_superslab_constants.h \
|
||||||
core/box/../superslab/../tiny_box_geometry.h \
|
|
||||||
core/box/../superslab/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/box/../superslab/../hakmem_tiny_config.h \
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h \
|
|
||||||
core/hakmem_tiny_config.h core/tiny_nextptr.h \
|
|
||||||
core/box/../tiny_debug_ring.h core/box/../tiny_remote.h \
|
|
||||||
core/box/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/box/../hakmem_tiny_config.h core/box/../hakmem_tiny_superslab.h \
|
core/box/../hakmem_tiny_config.h core/box/../hakmem_tiny_superslab.h \
|
||||||
core/box/../hakmem_tiny_integrity.h core/box/../hakmem_tiny.h \
|
core/box/../hakmem_tiny_integrity.h core/box/../hakmem_tiny.h \
|
||||||
core/box/prewarm_box.h core/box/capacity_box.h core/box/carve_push_box.h
|
core/box/prewarm_box.h core/box/capacity_box.h core/box/carve_push_box.h
|
||||||
@ -27,15 +20,6 @@ core/box/../superslab/superslab_types.h:
|
|||||||
core/hakmem_tiny_superslab_constants.h:
|
core/hakmem_tiny_superslab_constants.h:
|
||||||
core/box/../superslab/superslab_inline.h:
|
core/box/../superslab/superslab_inline.h:
|
||||||
core/box/../superslab/superslab_types.h:
|
core/box/../superslab/superslab_types.h:
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/hakmem_build_flags.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/box/../superslab/../tiny_box_geometry.h:
|
|
||||||
core/box/../superslab/../hakmem_tiny_superslab_constants.h:
|
|
||||||
core/box/../superslab/../hakmem_tiny_config.h:
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/box/../tiny_debug_ring.h:
|
core/box/../tiny_debug_ring.h:
|
||||||
core/box/../tiny_remote.h:
|
core/box/../tiny_remote.h:
|
||||||
core/box/../hakmem_tiny_superslab_constants.h:
|
core/box/../hakmem_tiny_superslab_constants.h:
|
||||||
|
|||||||
@ -5,16 +5,10 @@ core/box/superslab_expansion_box.o: core/box/superslab_expansion_box.c \
|
|||||||
core/box/../hakmem_tiny_superslab.h \
|
core/box/../hakmem_tiny_superslab.h \
|
||||||
core/box/../superslab/superslab_types.h \
|
core/box/../superslab/superslab_types.h \
|
||||||
core/box/../superslab/superslab_inline.h \
|
core/box/../superslab/superslab_inline.h \
|
||||||
core/box/../superslab/superslab_types.h core/tiny_debug_ring.h \
|
core/box/../superslab/superslab_types.h core/box/../tiny_debug_ring.h \
|
||||||
core/hakmem_build_flags.h core/tiny_remote.h \
|
core/box/../hakmem_build_flags.h core/box/../tiny_remote.h \
|
||||||
core/box/../superslab/../tiny_box_geometry.h \
|
|
||||||
core/box/../superslab/../hakmem_tiny_superslab_constants.h \
|
|
||||||
core/box/../superslab/../hakmem_tiny_config.h \
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h \
|
|
||||||
core/hakmem_tiny_config.h core/tiny_nextptr.h \
|
|
||||||
core/box/../tiny_debug_ring.h core/box/../tiny_remote.h \
|
|
||||||
core/box/../hakmem_tiny_superslab_constants.h \
|
core/box/../hakmem_tiny_superslab_constants.h \
|
||||||
core/box/../hakmem_build_flags.h core/box/../hakmem_tiny_superslab.h \
|
core/box/../hakmem_tiny_superslab.h \
|
||||||
core/box/../hakmem_tiny_superslab_constants.h
|
core/box/../hakmem_tiny_superslab_constants.h
|
||||||
core/box/superslab_expansion_box.h:
|
core/box/superslab_expansion_box.h:
|
||||||
core/box/../superslab/superslab_types.h:
|
core/box/../superslab/superslab_types.h:
|
||||||
@ -24,18 +18,9 @@ core/box/../hakmem_tiny_superslab.h:
|
|||||||
core/box/../superslab/superslab_types.h:
|
core/box/../superslab/superslab_types.h:
|
||||||
core/box/../superslab/superslab_inline.h:
|
core/box/../superslab/superslab_inline.h:
|
||||||
core/box/../superslab/superslab_types.h:
|
core/box/../superslab/superslab_types.h:
|
||||||
core/tiny_debug_ring.h:
|
|
||||||
core/hakmem_build_flags.h:
|
|
||||||
core/tiny_remote.h:
|
|
||||||
core/box/../superslab/../tiny_box_geometry.h:
|
|
||||||
core/box/../superslab/../hakmem_tiny_superslab_constants.h:
|
|
||||||
core/box/../superslab/../hakmem_tiny_config.h:
|
|
||||||
core/box/../superslab/../box/tiny_next_ptr_box.h:
|
|
||||||
core/hakmem_tiny_config.h:
|
|
||||||
core/tiny_nextptr.h:
|
|
||||||
core/box/../tiny_debug_ring.h:
|
core/box/../tiny_debug_ring.h:
|
||||||
|
core/box/../hakmem_build_flags.h:
|
||||||
core/box/../tiny_remote.h:
|
core/box/../tiny_remote.h:
|
||||||
core/box/../hakmem_tiny_superslab_constants.h:
|
core/box/../hakmem_tiny_superslab_constants.h:
|
||||||
core/box/../hakmem_build_flags.h:
|
|
||||||
core/box/../hakmem_tiny_superslab.h:
|
core/box/../hakmem_tiny_superslab.h:
|
||||||
core/box/../hakmem_tiny_superslab_constants.h:
|
core/box/../hakmem_tiny_superslab_constants.h:
|
||||||
|
|||||||
@ -1,394 +1,231 @@
|
|||||||
// tls_sll_box.h - Box TLS-SLL: Single-Linked List API (C7-safe)
|
// tls_sll_box.h - Box TLS-SLL: Single-Linked List API (Unified Box version)
|
||||||
//
|
//
|
||||||
// Purpose: Centralized TLS SLL management with C7 protection
|
// Goal:
|
||||||
// Design: Zero-overhead static inline API, C7 always rejected
|
// - Single authoritative Box for TLS SLL operations.
|
||||||
|
// - All next pointer layout is decided by tiny_next_ptr_box.h (Box API).
|
||||||
|
// - Callers pass BASE pointers only; no local next_offset arithmetic.
|
||||||
|
// - Compatible with existing ptr_trace PTR_NEXT_* macros (off is logging-only).
|
||||||
//
|
//
|
||||||
// Key Rules:
|
// Invariants:
|
||||||
// 1. C7 (1KB headerless) is ALWAYS rejected (returns false/0)
|
// - g_tiny_class_sizes[cls] is TOTAL stride (including 1-byte header when enabled).
|
||||||
// 2. All SLL direct writes MUST go through this API
|
// - For HEADER_CLASSIDX != 0, tiny_nextptr.h encodes:
|
||||||
// 3. Pop returns with first 8 bytes cleared for C7 (safety)
|
// class 0: next_off = 0
|
||||||
// 4. Capacity checks prevent overflow
|
// class 1-6: next_off = 1
|
||||||
//
|
// class 7: next_off = 0
|
||||||
// Architecture:
|
// Callers MUST NOT duplicate this logic.
|
||||||
// - Box TLS-SLL (this): Push/Pop/Splice authority
|
// - TLS SLL stores BASE pointers only.
|
||||||
// - Caller: Provides capacity limits, handles fallback on failure
|
// - Box provides: push / pop / splice with capacity & integrity checks.
|
||||||
//
|
|
||||||
// Performance:
|
|
||||||
// - Static inline → zero function call overhead
|
|
||||||
// - C7 check: 1 comparison + predict-not-taken (< 1 cycle)
|
|
||||||
// - Same performance as direct SLL access for C0-C6
|
|
||||||
|
|
||||||
#ifndef TLS_SLL_BOX_H
|
#ifndef TLS_SLL_BOX_H
|
||||||
#define TLS_SLL_BOX_H
|
#define TLS_SLL_BOX_H
|
||||||
|
|
||||||
#include <stdint.h>
|
#include <stdint.h>
|
||||||
#include <stdbool.h>
|
#include <stdbool.h>
|
||||||
#include <stdio.h> // For fprintf in debug
|
#include <stdio.h>
|
||||||
#include <stdlib.h> // For abort in debug
|
#include <stdlib.h>
|
||||||
#include "../ptr_trace.h" // Debug-only: pointer next read/write tracing
|
|
||||||
#include "../hakmem_tiny_config.h" // For TINY_NUM_CLASSES
|
#include "../hakmem_tiny_config.h"
|
||||||
#include "../hakmem_build_flags.h"
|
#include "../hakmem_build_flags.h"
|
||||||
#include "../tiny_remote.h" // For TINY_REMOTE_SENTINEL detection
|
#include "../tiny_remote.h"
|
||||||
#include "../tiny_region_id.h" // HEADER_MAGIC / HEADER_CLASS_MASK
|
#include "../tiny_region_id.h"
|
||||||
#include "../hakmem_tiny_integrity.h" // PRIORITY 2: Freelist integrity checks
|
#include "../hakmem_tiny_integrity.h"
|
||||||
#include "../ptr_track.h" // Pointer tracking for debugging header corruption
|
#include "../ptr_track.h"
|
||||||
#include "tiny_next_ptr_box.h" // Box API: Next pointer read/write
|
#include "../ptr_trace.h"
|
||||||
|
#include "tiny_next_ptr_box.h"
|
||||||
|
|
||||||
|
// External TLS SLL state (defined in hakmem_tiny.c or equivalent)
|
||||||
|
extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES];
|
||||||
|
extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
|
||||||
|
|
||||||
|
// ========== Debug guard ==========
|
||||||
|
|
||||||
// Debug guard: validate base pointer before SLL ops (Debug only)
|
|
||||||
#if !HAKMEM_BUILD_RELEASE
|
#if !HAKMEM_BUILD_RELEASE
|
||||||
extern const size_t g_tiny_class_sizes[];
|
static inline void tls_sll_debug_guard(int class_idx, void* base, const char* where)
|
||||||
static inline void tls_sll_debug_guard(int class_idx, void* base, const char* where) {
|
{
|
||||||
(void)g_tiny_class_sizes;
|
(void)class_idx;
|
||||||
// Only a minimal guard: tiny integers are always invalid
|
|
||||||
if ((uintptr_t)base < 4096) {
|
if ((uintptr_t)base < 4096) {
|
||||||
fprintf(stderr, "[TLS_SLL_GUARD] %s: small ptr=%p cls=%d (likely corruption)\n", where, base, class_idx);
|
fprintf(stderr,
|
||||||
|
"[TLS_SLL_GUARD] %s: suspicious ptr=%p cls=%d\n",
|
||||||
|
where, base, class_idx);
|
||||||
abort();
|
abort();
|
||||||
}
|
}
|
||||||
// NOTE: Do NOT check alignment vs class size here.
|
|
||||||
// Blocks are stride-aligned (size+header) from slab base; modulo class size is not 0.
|
|
||||||
}
|
}
|
||||||
#else
|
#else
|
||||||
static inline void tls_sll_debug_guard(int class_idx, void* base, const char* where) { (void)class_idx; (void)base; (void)where; }
|
static inline void tls_sll_debug_guard(int class_idx, void* base, const char* where)
|
||||||
|
{
|
||||||
|
(void)class_idx; (void)base; (void)where;
|
||||||
|
}
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
// Normalize a possibly user-pointer (base+1) to base (header classes)
|
// Normalize helper: callers are required to pass BASE already.
|
||||||
static inline void* tls_sll_normalize_base(int class_idx, void* node) {
|
// Kept as a no-op for documentation / future hardening.
|
||||||
|
static inline void* tls_sll_normalize_base(int class_idx, void* node)
|
||||||
|
{
|
||||||
(void)class_idx;
|
(void)class_idx;
|
-// Caller must pass base pointers; do not heuristically adjust.
     return node;
 }

-// External TLS SLL state (defined elsewhere)
-extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES];
-extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];

 // ========== Push ==========
+//
+// Push BASE pointer into TLS SLL for given class.
+// Returns true on success, false if capacity full or input invalid.

-// Push pointer to TLS SLL
-// Returns: true on success, false if C7 or capacity exceeded
-//
-// CRITICAL Phase 7 Header Design:
-// - C0-C6 (header classes): [1B header][user data]
-//   ^base ^ptr (caller passes this)
-// - SLL stores "base" (ptr-1) to avoid overwriting header
-// - C7 (headerless): ptr == base (no offset)
-//
-// Safety:
-// - C7 always rejected (headerless, first 8 bytes = user data)
-// - Capacity check prevents overflow
-// - Header protection: stores base (ptr-1) for C0-C6
-//
-// Performance: 3-4 cycles (C0-C6), < 1 cycle (C7 fast rejection)
-static inline bool tls_sll_push(int class_idx, void* ptr, uint32_t capacity) {
-    // PRIORITY 1: Bounds check BEFORE any array access
+static inline bool tls_sll_push(int class_idx, void* ptr, uint32_t capacity)
+{
     HAK_CHECK_CLASS_IDX(class_idx, "tls_sll_push");

-    // Phase E1-CORRECT: All classes including C7 can now use TLS SLL
+    // Capacity semantics:
+    // - capacity == 0 → disabled (reject)
+    // - capacity > 1<<20 → treat as "unbounded" sentinel (no limit)
+    if (capacity == 0) {
+        return false;
+    }
+    const uint32_t kCapacityHardMax = (1u << 20);
+    const int unlimited = (capacity > kCapacityHardMax);

-    // Capacity check
-    if (g_tls_sll_count[class_idx] >= capacity) {
-        return false; // SLL full
+    if (!ptr) {
+        return false;
     }

-    // ✅ FIX #15: CATCH USER pointer contamination at injection point
-    // For Class 2 (32B blocks), BASE addresses should be multiples of 33 (stride)
-    // USER pointers are BASE+1, so for Class 2 starting at even address, USER is ODD
-    // This catches USER pointers being passed to TLS SLL (should be BASE!)
-#if !HAKMEM_BUILD_RELEASE && HAKMEM_TINY_HEADER_CLASSIDX
-    if (class_idx == 2) { // Class 2 specific check (can extend to all header classes)
-        uintptr_t addr = (uintptr_t)ptr;
-        // For class 2 with 32B blocks, check if pointer looks like USER (BASE+1)
-        // If slab base is at offset 0x...X0, then:
-        // - First block BASE: 0x...X0 (even)
-        // - First block USER: 0x...X1 (odd)
-        // - Second block BASE: 0x...X0 + 33 = 0x...Y1 (odd)
-        // - Second block USER: 0x...Y2 (even)
-        // So ODD/EVEN alternates, but we can detect obvious USER pointers
-        // by checking if ptr-1 has a header
-        if ((addr & 0xF) <= 15) { // Check last nibble for patterns
-            uint8_t* possible_base = (addr & 1) ? ((uint8_t*)ptr - 1) : (uint8_t*)ptr;
-            uint8_t byte_at_possible_base = *possible_base;
-            uint8_t expected_header = HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK);
+    // Base pointer only (callers must pass BASE; this is a no-op by design).
+    ptr = tls_sll_normalize_base(class_idx, ptr);

-            // If ptr is ODD and ptr-1 has valid header, ptr is USER!
-            if ((addr & 1) && byte_at_possible_base == expected_header) {
-                extern _Atomic uint64_t malloc_count;
-                uint64_t call = atomic_load(&malloc_count);
-                fprintf(stderr, "\n========================================\n");
-                fprintf(stderr, "=== USER POINTER BUG DETECTED ===\n");
-                fprintf(stderr, "========================================\n");
-                fprintf(stderr, "Call: %lu\n", call);
-                fprintf(stderr, "Class: %d\n", class_idx);
-                fprintf(stderr, "Passed ptr: %p (ODD address - USER pointer!)\n", ptr);
-                fprintf(stderr, "Expected: %p (EVEN address - BASE pointer)\n", (void*)possible_base);
-                fprintf(stderr, "Header at ptr-1: 0x%02x (valid header!)\n", byte_at_possible_base);
-                fprintf(stderr, "========================================\n");
-                fprintf(stderr, "BUG: Caller passed USER pointer to tls_sll_push!\n");
-                fprintf(stderr, "FIX: Convert USER → BASE before push\n");
-                fprintf(stderr, "========================================\n");
-                fflush(stderr);
-                abort();
-            }
-        }
+#if !HAKMEM_BUILD_RELEASE
+    // Minimal range guard before we touch memory.
+    if (!validate_ptr_range(ptr, "tls_sll_push_base")) {
+        fprintf(stderr,
+                "[TLS_SLL_PUSH] FATAL invalid BASE ptr cls=%d base=%p\n",
+                class_idx, ptr);
+        abort();
     }
 #endif

-    // CRITICAL: Caller must pass "base" pointer (NOT user ptr)
-    // Phase 7 carve operations return base (stride includes header)
-    // SLL stores base to avoid overwriting header with next pointer
+    // Capacity check BEFORE any writes.
+    uint32_t cur = g_tls_sll_count[class_idx];
+    if (!unlimited && cur >= capacity) {
+        return false;
+    }

-    // ✅ FIX #11C: ALWAYS restore header before pushing to SLL (defense in depth)
-    // ROOT CAUSE (multiple sources):
-    //   1. User may overwrite byte 0 (header) during normal use
-    //   2. Freelist stores next at base (offset 0), overwriting header
-    //   3. Simple refill carves blocks without writing headers
-    //
-    // SOLUTION: Restore header HERE (single point of truth) instead of at each call site.
-    // This prevents all header corruption bugs at the TLS SLL boundary.
-    // COST: 1 byte write (~1-2 cycles, negligible vs SEGV debugging cost).
 #if HAKMEM_TINY_HEADER_CLASSIDX
-    // DEBUG: Log if header was corrupted (0x00) before restoration for class 2
-    uint8_t before = *(uint8_t*)ptr;
-    PTR_TRACK_TLS_PUSH(ptr, class_idx); // Track BEFORE header write
-    *(uint8_t*)ptr = HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK);
-    PTR_TRACK_HEADER_WRITE(ptr, HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK));
-    // ✅ Option C: Class 2 inline logs - PUSH operation (DISABLED for performance)
-    if (0 && class_idx == 2) {
-        extern _Atomic uint64_t malloc_count;
-        uint64_t call = atomic_load(&malloc_count);
-        fprintf(stderr, "[C2_PUSH] ptr=%p before=0x%02x after=0xa2 call=%lu\n",
-                ptr, before, call);
-        fflush(stderr);
+    // Restore header defensively for header classes (class != 0,7 use header byte).
+    if (class_idx != 0 && class_idx != 7) {
+        uint8_t* b = (uint8_t*)ptr;
+        uint8_t expected = (uint8_t)(HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK));
+        // Always set; any mismatch is effectively healed here.
+        PTR_TRACK_TLS_PUSH(ptr, class_idx);
+        PTR_TRACK_HEADER_WRITE(ptr, expected);
+        *b = expected;
     }
 #endif

-    // Phase 7: Store next pointer at header-safe offset (base+1 for C0-C6)
-#if HAKMEM_TINY_HEADER_CLASSIDX
-    const size_t next_offset = 1; // C7 is rejected above; always skip header
-#else
-    const size_t next_offset = 0;
-#endif
     tls_sll_debug_guard(class_idx, ptr, "push");

 #if !HAKMEM_BUILD_RELEASE
-    // PRIORITY 2+: Double-free detection - scan existing SLL for duplicates
-    // This is expensive but critical for debugging the P0 corruption bug
+    // Optional double-free detection: scan a bounded prefix of the list.
     {
         void* scan = g_tls_sll_head[class_idx];
-        uint32_t scan_count = 0;
-        const uint32_t scan_limit = (g_tls_sll_count[class_idx] < 100) ? g_tls_sll_count[class_idx] : 100;
-        while (scan && scan_count < scan_limit) {
+        uint32_t scanned = 0;
+        const uint32_t limit = (g_tls_sll_count[class_idx] < 64)
+                                   ? g_tls_sll_count[class_idx]
+                                   : 64;
+        while (scan && scanned < limit) {
             if (scan == ptr) {
-                fprintf(stderr, "[TLS_SLL_PUSH] FATAL: Double-free detected!\n");
-                fprintf(stderr, " class_idx=%d ptr=%p appears multiple times in SLL\n", class_idx, ptr);
-                fprintf(stderr, " g_tls_sll_count[%d]=%u scan_pos=%u\n",
-                        class_idx, g_tls_sll_count[class_idx], scan_count);
-                fprintf(stderr, " This indicates the same pointer was freed twice\n");
+                fprintf(stderr,
+                        "[TLS_SLL_PUSH] FATAL double-free: cls=%d ptr=%p already in SLL\n",
+                        class_idx, ptr);
                 ptr_trace_dump_now("double_free");
-                fflush(stderr);
                 abort();
             }
-            void* next_scan;
-            PTR_NEXT_READ("sll_scan", class_idx, scan, next_offset, next_scan);
-            scan = next_scan;
-            scan_count++;
+            void* next;
+            PTR_NEXT_READ("tls_sll_scan", class_idx, scan, 0, next);
+            scan = next;
+            scanned++;
         }
     }
 #endif

-    PTR_NEXT_WRITE("tls_push", class_idx, ptr, next_offset, g_tls_sll_head[class_idx]);
+    // Link new node to current head via Box API (offset is handled inside tiny_nextptr).
+    PTR_NEXT_WRITE("tls_push", class_idx, ptr, 0, g_tls_sll_head[class_idx]);
     g_tls_sll_head[class_idx] = ptr;
-    g_tls_sll_count[class_idx]++;
+    g_tls_sll_count[class_idx] = cur + 1;

     return true;
 }

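As a hedged illustration of the push-side contract above (callers hand tls_sll_push a BASE pointer and a capacity, where 0 means disabled and values above 1<<20 mean unbounded), a minimal sketch of a free path; the helper name and the unconditional `-1` conversion are assumptions for header classes only, not part of this commit:

```c
// Illustrative sketch only (not part of the diff). Assumes the 1-byte header
// layout described in tls_sll_box.h, i.e. BASE = USER - 1 for header classes.
static inline void example_free_to_sll(int class_idx, void* user_ptr, uint32_t sll_cap) {
    void* base = (uint8_t*)user_ptr - 1;      // USER -> BASE (header classes; hypothetical caller-side step)
    if (!tls_sll_push(class_idx, base, sll_cap)) {
        // Capacity full, SLL disabled (cap == 0), or invalid input:
        // fall back to a slower free path (magazine / superslab), not shown here.
    }
}
```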
 // ========== Pop ==========
+//
+// Pop BASE pointer from TLS SLL.
+// Returns true on success and stores BASE into *out.

-// Pop pointer from TLS SLL
-// Returns: true on success (writes user ptr to *out), false if empty
-//
-// CRITICAL Phase 7 Header Design:
-// - SLL stores "base" (ptr-1) for C0-C6
-// - Must return "ptr" (base+1) to user
-// - C7: base == ptr (no offset)
-//
-// Safety:
-// - C7 protection: clears first 8 bytes on pop (prevents next pointer leak)
-// - Header protection: returns ptr (base+1) for C0-C6
-// - NULL check before deref
-//
-// Performance: 4-5 cycles
-static inline bool tls_sll_pop(int class_idx, void** out) {
-    // PRIORITY 1: Bounds check
+static inline bool tls_sll_pop(int class_idx, void** out)
+{
     HAK_CHECK_CLASS_IDX(class_idx, "tls_sll_pop");
     atomic_fetch_add(&g_integrity_check_class_bounds, 1);

     void* base = g_tls_sll_head[class_idx];
     if (!base) {
-        return false; // SLL empty
+        return false;
     }

-    // ✅ CRITICAL FIX: Detect remote sentinel leaked into TLS SLL
-    // The sentinel (0xBADA55BADA55BADA) is used by remote free operations
-    // If it leaks into TLS SLL head, dereferencing it causes SEGV
+    // Sentinel guard: remote sentinel must never be in TLS SLL.
     if (__builtin_expect((uintptr_t)base == TINY_REMOTE_SENTINEL, 0)) {
-        // Reset corrupted TLS SLL state
         g_tls_sll_head[class_idx] = NULL;
         g_tls_sll_count[class_idx] = 0;
-        // Log sentinel detection (helps identify root cause)
-        static __thread int sentinel_logged = 0;
-        if (sentinel_logged < 10) {
-            fprintf(stderr, "[SENTINEL_DETECT] class=%d head=0x%lx (BADASS) - TLS SLL reset\n",
-                    class_idx, (unsigned long)TINY_REMOTE_SENTINEL);
-            sentinel_logged++;
-        }
+#if !HAKMEM_BUILD_RELEASE
+        fprintf(stderr,
+                "[TLS_SLL_POP] Remote sentinel detected at head; SLL reset (cls=%d)\n",
+                class_idx);
+#endif
+        return false;

-        return false; // Trigger refill path
     }

-    // PRIORITY 2: Validate base pointer BEFORE dereferencing
 #if !HAKMEM_BUILD_RELEASE
     if (!validate_ptr_range(base, "tls_sll_pop_base")) {
-        fprintf(stderr, "[TLS_SLL_POP] FATAL: Invalid BASE pointer!\n");
-        fprintf(stderr, " class_idx=%d base=%p\n", class_idx, base);
-        fprintf(stderr, " g_tls_sll_count[%d]=%u\n", class_idx, g_tls_sll_count[class_idx]);
-        fflush(stderr);
+        fprintf(stderr,
+                "[TLS_SLL_POP] FATAL invalid BASE ptr cls=%d base=%p\n",
+                class_idx, base);
         abort();
     }
 #endif

-    // Pop from SLL (reads next from base)
-    // Phase E1-CORRECT FIX: Class 0 must use offset 0 (8B block can't fit 8B pointer at offset 1)
-#if HAKMEM_TINY_HEADER_CLASSIDX
-    // CRITICAL: Use class_idx argument (NOT header byte) because Class 0/7 overwrite header with next pointer!
-    const size_t next_offset = (class_idx == 0 || class_idx == 7) ? 0 : 1;
-#else
-    const size_t next_offset = 0;
-#endif

-    // PRIORITY 2: Validate that (base + next_offset) is safe to dereference BEFORE reading
-#if !HAKMEM_BUILD_RELEASE
-    void* read_addr = (uint8_t*)base + next_offset;
-    if (!validate_ptr_range(read_addr, "tls_sll_pop_read_addr")) {
-        fprintf(stderr, "[TLS_SLL_POP] FATAL: Cannot safely read next pointer!\n");
-        fprintf(stderr, " class_idx=%d base=%p read_addr=%p (base+%zu)\n",
-                class_idx, base, read_addr, next_offset);
-        fprintf(stderr, " g_tls_sll_count[%d]=%u\n", class_idx, g_tls_sll_count[class_idx]);
-        fflush(stderr);
-        abort();
-    }
-    atomic_fetch_add(&g_integrity_check_freelist, 1);
-#endif

     tls_sll_debug_guard(class_idx, base, "pop");

-    // ✅ FIX #12: VALIDATION - Detect header corruption at the moment it's injected
-    // This is the CRITICAL validation point: we validate the header BEFORE reading next pointer.
-    // If the header is corrupted here, we know corruption happened BEFORE this pop (during push/splice/carve).
-    // Phase E1-CORRECT: Class 1-6 have headers, Class 0/7 overwrite header with next pointer
 #if HAKMEM_TINY_HEADER_CLASSIDX
+    // Header validation for header-classes (class != 0,7).
     if (class_idx != 0 && class_idx != 7) {
-        // Read byte 0 (should be header = HEADER_MAGIC | class_idx)
-        uint8_t byte0 = *(uint8_t*)base;
-        PTR_TRACK_TLS_POP(base, class_idx); // Track POP operation
-        PTR_TRACK_HEADER_READ(base, byte0); // Track header read
-        uint8_t expected = HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK);
-        // ✅ Option C: Class 2 inline logs - POP operation (DISABLED for performance)
-        if (0 && class_idx == 2) {
-            extern _Atomic uint64_t malloc_count;
-            uint64_t call = atomic_load(&malloc_count);
-            fprintf(stderr, "[C2_POP] ptr=%p header=0x%02x expected=0xa2 call=%lu\n",
-                    base, byte0, call);
-            fflush(stderr);
+        uint8_t got = *(uint8_t*)base;
+        uint8_t expect = (uint8_t)(HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK));
+        PTR_TRACK_TLS_POP(base, class_idx);
+        PTR_TRACK_HEADER_READ(base, got);
+        if (__builtin_expect(got != expect, 0)) {
+#if !HAKMEM_BUILD_RELEASE
+            fprintf(stderr,
+                    "[TLS_SLL_POP] CORRUPTED HEADER cls=%d base=%p got=0x%02x expect=0x%02x\n",
+                    class_idx, base, got, expect);
+            ptr_trace_dump_now("header_corruption");
+            abort();
+#else
+            // In release, fail-safe: drop list.
+            g_tls_sll_head[class_idx] = NULL;
+            g_tls_sll_count[class_idx] = 0;
+            return false;
+#endif
         }
-        if (byte0 != expected) {
-            // 🚨 CORRUPTION DETECTED AT INJECTION POINT!
-            // Get call number from malloc wrapper
-            extern _Atomic uint64_t malloc_count; // Defined in hak_wrappers.inc.h
-            uint64_t call_num = atomic_load(&malloc_count);

-            fprintf(stderr, "\n========================================\n");
-            fprintf(stderr, "=== CORRUPTION DETECTED (Fix #12) ===\n");
-            fprintf(stderr, "========================================\n");
-            fprintf(stderr, "Malloc call: %lu\n", call_num);
-            fprintf(stderr, "Class: %d\n", class_idx);
-            fprintf(stderr, "Base ptr: %p\n", base);
-            fprintf(stderr, "Expected: 0x%02x (HEADER_MAGIC | class_idx)\n", expected);
-            fprintf(stderr, "Actual: 0x%02x\n", byte0);
-            fprintf(stderr, "========================================\n");
-            fprintf(stderr, "\nThis means corruption was injected BEFORE this pop.\n");
-            fprintf(stderr, "Likely culprits:\n");
-            fprintf(stderr, " 1. tls_sll_push() - failed to restore header\n");
-            fprintf(stderr, " 2. tls_sll_splice() - chain had corrupted headers\n");
-            fprintf(stderr, " 3. trc_linear_carve() - didn't write header\n");
-            fprintf(stderr, " 4. trc_pop_from_freelist() - didn't restore header\n");
-            fprintf(stderr, " 5. Remote free path - overwrote header\n");
-            fprintf(stderr, "========================================\n");
-            fflush(stderr);
-            abort(); // Immediate crash with backtrace
-        }
-    } // end if (class_idx != 0 && class_idx != 7)
+    }
 #endif

-    // DEBUG: Log read operation for crash investigation
-    static _Atomic uint64_t g_pop_count = 0;
-    uint64_t pop_num = atomic_fetch_add(&g_pop_count, 1);
+    // Read next via Box API.
+    void* next;
+    PTR_NEXT_READ("tls_pop", class_idx, base, 0, next);

-    // Log ALL class 0 pops (DISABLED for performance)
-    if (0 && class_idx == 0) {
-        // Check byte 0 to see if header exists
-        uint8_t byte0 = *(uint8_t*)base;
-        fprintf(stderr, "[TLS_POP_C0] pop=%lu base=%p byte0=0x%02x next_off=%zu\n",
-                pop_num, base, byte0, next_offset);
-        fflush(stderr);
-    }

-    void* next; PTR_NEXT_READ("tls_pop", class_idx, base, next_offset, next);

-    if (0 && class_idx == 0) {
-        fprintf(stderr, "[TLS_POP_C0] pop=%lu base=%p next=%p\n",
-                pop_num, base, next);
-        fflush(stderr);
-    }

-    // PRIORITY 2: Validate next pointer after reading it
 #if !HAKMEM_BUILD_RELEASE
-    if (!validate_ptr_range(next, "tls_sll_pop_next")) {
-        fprintf(stderr, "[TLS_SLL_POP] FATAL: Invalid next pointer after read!\n");
-        fprintf(stderr, " class_idx=%d base=%p next=%p next_offset=%zu\n",
-                class_idx, base, next, next_offset);
-        fprintf(stderr, " g_tls_sll_count[%d]=%u\n", class_idx, g_tls_sll_count[class_idx]);
-        fflush(stderr);
+    if (next && !validate_ptr_range(next, "tls_sll_pop_next")) {
+        fprintf(stderr,
+                "[TLS_SLL_POP] FATAL invalid next ptr cls=%d base=%p next=%p\n",
+                class_idx, base, next);
+        ptr_trace_dump_now("next_corruption");
         abort();
     }

-    // PRIORITY 2+: Additional check for obviously corrupted pointers (non-canonical addresses)
-    // Detects patterns like 0x7fff00008000 that pass validate_ptr_range but are still invalid
-    if (next != NULL) {
-        uintptr_t addr = (uintptr_t)next;
-        // x86-64 canonical addresses: bits 48-63 must be copies of bit 47
-        // Valid ranges: 0x0000_0000_0000_0000 to 0x0000_7FFF_FFFF_FFFF (user space)
-        // or 0xFFFF_8000_0000_0000 to 0xFFFF_FFFF_FFFF_FFFF (kernel space)
-        // Invalid: 0x0001_xxxx_xxxx_xxxx to 0xFFFE_xxxx_xxxx_xxxx
-        uint64_t top_bits = addr >> 47;
-        if (top_bits != 0 && top_bits != 0x1FFFF) {
-            fprintf(stderr, "[TLS_SLL_POP] FATAL: Corrupted SLL chain - non-canonical address!\n");
-            fprintf(stderr, " class_idx=%d base=%p next=%p (top_bits=0x%lx)\n",
-                    class_idx, base, next, (unsigned long)top_bits);
-            fprintf(stderr, " g_tls_sll_count[%d]=%u\n", class_idx, g_tls_sll_count[class_idx]);
-            fprintf(stderr, " Likely causes: double-free, use-after-free, buffer overflow\n");
-            ptr_trace_dump_now("sll_chain_corruption");
-            fflush(stderr);
-            abort();
-        }
-    }
 #endif

     g_tls_sll_head[class_idx] = next;
@ -396,203 +233,83 @@ static inline bool tls_sll_pop(int class_idx, void** out) {
         g_tls_sll_count[class_idx]--;
     }

-    // CRITICAL FIX: Clear next pointer to prevent stale pointer corruption
-    //
-    // ROOT CAUSE OF P0 BUG (iteration 28,440 crash):
-    // When a block is popped from SLL and given to user, the `next` pointer at base+1
-    // (for C0-C6) or base (for C7) was NOT cleared. If the user doesn't overwrite it,
-    // the stale `next` pointer remains. When the block is freed and pushed back to SLL,
-    // the stale pointer creates loops or invalid pointers → SEGV at 0x7fff00008000!
-    //
-    // FIX: Clear next pointer for BOTH C7 AND C0-C6:
-    // - C7 (headerless): next at base (offset 0) - was already cleared
-    // - C0-C6 (header): next at base+1 (offset 1) - **WAS NOT CLEARED** ← BUG!
-    //
-    // Previous WRONG assumption: "C0-C6 header hides next" - FALSE!
-    // Phase E1-CORRECT: All classes have 1-byte header at base, next is at base+1
-    //
-    // Cost: 1 store instruction (~1 cycle) for all classes
-#if HAKMEM_TINY_HEADER_CLASSIDX
-    // DEBUG: Verify header is intact BEFORE clearing next pointer
-    if (class_idx == 2) {
-        uint8_t header_before_clear = *(uint8_t*)base;
-        if (header_before_clear != (HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK))) {
-            extern _Atomic uint64_t malloc_count;
-            uint64_t call_num = atomic_load(&malloc_count);
-            fprintf(stderr, "[POP_HEADER_CHECK] call=%lu cls=%d base=%p header=0x%02x BEFORE clear_next!\n",
-                    call_num, class_idx, base, header_before_clear);
-            fflush(stderr);
-        }
-    }
+    // Clear next inside popped node to avoid stale-chain issues.
+    tiny_next_write(class_idx, base, NULL);

-    tiny_next_write(class_idx, base, NULL); // All classes: clear next pointer
+    *out = base;

-    // DEBUG: Verify header is STILL intact AFTER clearing next pointer
-    if (class_idx == 2) {
-        uint8_t header_after_clear = *(uint8_t*)base;
-        if (header_after_clear != (HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK))) {
-            extern _Atomic uint64_t malloc_count;
-            uint64_t call_num = atomic_load(&malloc_count);
-            fprintf(stderr, "[POP_HEADER_CORRUPTED] call=%lu cls=%d base=%p header=0x%02x AFTER clear_next!\n",
-                    call_num, class_idx, base, header_after_clear);
-            fprintf(stderr, "[POP_HEADER_CORRUPTED] This means clear_next OVERWROTE the header!\n");
-            fprintf(stderr, "[POP_HEADER_CORRUPTED] Bug: next_offset calculation is WRONG!\n");
-            fflush(stderr);
-            abort();
-        }
-    }
-#else
-    *(void**)base = NULL; // No header: clear at base
-#endif

-    *out = base; // Return base (caller converts to ptr if needed)
     return true;
 }

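A hedged sketch of the consuming side of the pop contract above: tls_sll_pop now always hands back the BASE pointer and the BASE→USER step happens at the call site. The function name and the flat `+1` conversion are illustrative assumptions (headerless classes would return BASE unchanged):

```c
// Illustrative sketch only (not part of the diff): allocation fast path
// consuming tls_sll_pop(). Assumes the 1-byte header layout for header classes.
static inline void* example_alloc_from_sll(int class_idx) {
    void* base = NULL;
    if (!tls_sll_pop(class_idx, &base)) {
        return NULL;                     // SLL empty: caller refills from the backend
    }
    return (uint8_t*)base + 1;           // BASE -> USER: skip the 1-byte class header
}
```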
 // ========== Splice ==========

-// Splice chain of pointers to TLS SLL (batch push)
-// Returns: actual count moved (0 for C7 or if capacity exceeded)
 //
-// CRITICAL Phase 7 Header Design:
-// - Caller MUST pass chain of "base" pointers (ptr-1 for C0-C6)
-// - Chain links are stored at base (*(void**)base = next_base)
-// - SLL head stores base pointers
-//
-// Safety:
-// - C7 always returns 0 (no splice)
-// - Capacity check limits splice size
-// - Chain traversal with safety (breaks on NULL)
-// - Assumes chain is already linked using base pointers
-//
-// Performance: ~5 cycles + O(count) for chain traversal
-static inline uint32_t tls_sll_splice(int class_idx, void* chain_head, uint32_t count, uint32_t capacity) {
-    // Phase E1-CORRECT: All classes including C7 can now use splice
+// Splice a pre-linked chain of BASE pointers into TLS SLL head.
+// chain_head is BASE; links are via Box API-compatible next layout.
+// Returns number of nodes actually moved (<= capacity remaining).

-    // 🐛 DEBUG: UNCONDITIONAL log to verify function is called
-#if !HAKMEM_BUILD_RELEASE
-    {
-        static _Atomic int g_once = 0;
-        if (atomic_fetch_add(&g_once, 1) == 0) {
-            fprintf(stderr, "[SPLICE_ENTRY] First call to tls_sll_splice()! cls=%d count=%u capacity=%u\n",
-                    class_idx, count, capacity);
-            fflush(stderr);
-        }
-    }
-#endif
+static inline uint32_t tls_sll_splice(int class_idx,
+                                      void* chain_head,
+                                      uint32_t count,
+                                      uint32_t capacity)
+{
+    HAK_CHECK_CLASS_IDX(class_idx, "tls_sll_splice");

-    // Calculate available capacity
-    uint32_t available = (capacity > g_tls_sll_count[class_idx])
-        ? (capacity - g_tls_sll_count[class_idx]) : 0;
+    if (!chain_head || count == 0 || capacity == 0) {
+        return 0;

-    // 🐛 DEBUG: Log ALL splice inputs to diagnose truncation
-#if !HAKMEM_BUILD_RELEASE
-    {
-        static _Atomic uint64_t g_splice_log_count = 0;
-        uint64_t splice_num = atomic_fetch_add(&g_splice_log_count, 1);
-        if (splice_num < 10) { // Log first 10 splices
-            fprintf(stderr, "[SPLICE_DEBUG #%lu] cls=%d count=%u capacity=%u sll_count=%u available=%u\n",
-                    splice_num, class_idx, count, capacity, g_tls_sll_count[class_idx], available);
-            fflush(stderr);
-        }
-    }
-#endif

-    if (available == 0 || count == 0 || !chain_head) {
-        return 0; // No space or empty chain
     }

-    // Limit splice size to available capacity
-    uint32_t to_move = (count < available) ? count : available;
+    uint32_t cur = g_tls_sll_count[class_idx];
+    if (cur >= capacity) {
+        return 0;
+    }

+    uint32_t room = capacity - cur;
+    uint32_t to_move = (count < room) ? count : room;

+    // Traverse chain up to to_move, validate, and find tail.
+    void* tail = chain_head;
+    uint32_t moved = 1;

+    tls_sll_debug_guard(class_idx, chain_head, "splice_head");

-    // ✅ FIX #14: DEFENSE IN DEPTH - Restore headers for ALL nodes in chain
-    // ROOT CAUSE: Even though callers (trc_linear_carve, trc_pop_from_freelist) are
-    // supposed to restore headers, there might be edge cases or future code paths
-    // that forget. Adding header restoration HERE provides a safety net.
-    //
-    // COST: 1 byte write per node (~1-2 cycles each, negligible vs SEGV debugging)
-    // BENEFIT: Guaranteed header integrity at TLS SLL boundary (defense in depth!)
 #if HAKMEM_TINY_HEADER_CLASSIDX
-    const size_t next_offset = 1; // C0-C6: next at base+1
+    // Restore header defensively on each node we touch.

-    // Restore headers for ALL nodes in chain (traverse once)
     {
-        void* node = chain_head;
-        uint32_t restored_count = 0;
-        while (node != NULL && restored_count < to_move) {
-            uint8_t before = *(uint8_t*)node;
-            uint8_t expected = HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK);
-
-            // Restore header unconditionally
-            *(uint8_t*)node = expected;
-
-            // ✅ Option C: Class 2 inline logs - SPLICE operation (DISABLED for performance)
-            if (0 && class_idx == 2) {
-                extern _Atomic uint64_t malloc_count;
-                uint64_t call = atomic_load(&malloc_count);
-                fprintf(stderr, "[C2_SPLICE] ptr=%p before=0x%02x after=0xa2 restored=%u/%u call=%lu\n",
-                        node, before, restored_count+1, to_move, call);
-                fflush(stderr);
-            }
-
-            // Move to next node
-            void* next = tiny_next_read(class_idx, node);
-            node = next;
-            restored_count++;
-        }
+        uint8_t* b = (uint8_t*)chain_head;
+        uint8_t expected = (uint8_t)(HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK));
+        *b = expected;
     }
-#else
-    const size_t next_offset = 0; // No header: next at base
 #endif

-    // Traverse chain to find tail (needed for splicing)
-    void* tail = chain_head;
-    for (uint32_t i = 1; i < to_move; i++) {
-        tls_sll_debug_guard(class_idx, tail, "splice_trav");
-        void* next; PTR_NEXT_READ("tls_sp_trav", class_idx, tail, next_offset, next);
+    while (moved < to_move) {
+        tls_sll_debug_guard(class_idx, tail, "splice_traverse");
+        void* next;
+        PTR_NEXT_READ("tls_splice_trav", class_idx, tail, 0, next);

         if (!next) {
-            // Chain shorter than expected, adjust to_move
-            to_move = i;
             break;
         }

+#if HAKMEM_TINY_HEADER_CLASSIDX
+        {
+            uint8_t* b = (uint8_t*)next;
+            uint8_t expected = (uint8_t)(HEADER_MAGIC | (class_idx & HEADER_CLASS_MASK));
+            *b = expected;
+        }
+#endif

         tail = next;
+        moved++;
     }

-    // Splice chain to SLL head
-    // tail is a base pointer by construction
-    tls_sll_debug_guard(class_idx, tail, "splice_link");
+    // Link tail to existing head and install new head.
+    tls_sll_debug_guard(class_idx, tail, "splice_tail");
+    PTR_NEXT_WRITE("tls_splice_link", class_idx, tail, 0, g_tls_sll_head[class_idx]);
-#if !HAKMEM_BUILD_RELEASE
-    fprintf(stderr, "[SPLICE_LINK] cls=%d tail=%p off=%zu old_head=%p\n",
-            class_idx, tail, (size_t)next_offset, g_tls_sll_head[class_idx]);
-#endif
-    PTR_NEXT_WRITE("tls_sp_link", class_idx, tail, next_offset, g_tls_sll_head[class_idx]);

-    // ✅ FIX #11: chain_head is already correct BASE pointer from caller
-    tls_sll_debug_guard(class_idx, chain_head, "splice_head");
-#if !HAKMEM_BUILD_RELEASE
-    fprintf(stderr, "[SPLICE_SET_HEAD] cls=%d head=%p moved=%u\n",
-            class_idx, chain_head, (unsigned)to_move);
-#endif
     g_tls_sll_head[class_idx] = chain_head;
-    g_tls_sll_count[class_idx] += to_move;
+    g_tls_sll_count[class_idx] = cur + moved;

-    return to_move;
+    return moved;
 }

-// ========== Debug/Stats (optional) ==========

-#if !HAKMEM_BUILD_RELEASE
-// Verify C7 is not in SLL (debug only, call at safe points)
-static inline void tls_sll_verify_no_c7(void) {
-    void* head = g_tls_sll_head[7];
-    if (head != NULL) {
-        fprintf(stderr, "[TLS_SLL_BUG] C7 found in TLS SLL! head=%p count=%u\n",
-                head, g_tls_sll_count[7]);
-        fprintf(stderr, "[TLS_SLL_BUG] This should NEVER happen - C7 is headerless!\n");
-        abort();
-    }
-}
-#endif

 #endif // TLS_SLL_BOX_H

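A hedged sketch of how a refill path could use the splice API above: the caller pre-links BASE pointers through the same next-pointer Box API and hands the chain head to tls_sll_splice. The array-based carve shape here is illustrative only, not how the real refill code builds its chain:

```c
// Illustrative sketch only (not part of the diff). tiny_next_write() is the
// Box API the new code uses; the bases[] array is a hypothetical input.
static inline uint32_t example_bulk_insert(int class_idx, void** bases, uint32_t n, uint32_t cap) {
    if (n == 0) return 0;
    for (uint32_t i = 0; i + 1 < n; i++) {
        tiny_next_write(class_idx, bases[i], bases[i + 1]);   // link BASE[i] -> BASE[i+1]
    }
    tiny_next_write(class_idx, bases[n - 1], NULL);           // terminate the chain
    // Returns how many nodes were actually moved (may be < n when cap is tight).
    return tls_sll_splice(class_idx, bases[0], n, cap);
}
```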
@ -1,4 +1,5 @@
 #include "hakmem_shared_pool.h"
+#include "hakmem_tiny_superslab.h"
 #include "hakmem_tiny_superslab_constants.h"

 #include <stdlib.h>
@ -66,48 +67,67 @@ shared_pool_init(void)
     pthread_mutex_unlock(&g_shared_pool.alloc_lock);
 }

-// Internal: allocate and register a new SuperSlab.
-// Caller must hold alloc_lock.
+/*
+ * Internal: allocate and register a new SuperSlab for the shared pool.
+ *
+ * Phase 12 NOTE:
+ * - We MUST use the real superslab_allocate() path so that:
+ *   - backing memory is a full SuperSlab region (1–2MB),
+ *   - header/layout are initialized correctly,
+ *   - registry integration stays consistent.
+ * - shared_pool is responsible only for:
+ *   - tracking pointers,
+ *   - marking per-slab class_idx as UNASSIGNED initially.
+ *   It does NOT bypass registry/LRU.
+ *
+ * Caller must hold alloc_lock.
+ */
 static SuperSlab*
 shared_pool_allocate_superslab_unlocked(void)
 {
-    // Allocate SuperSlab and backing memory region.
-    // NOTE: Existing code likely has a helper; we keep this minimal for now.
-    SuperSlab* ss = (SuperSlab*)aligned_alloc(64, sizeof(SuperSlab));
+    // Use size_class 0 as a neutral hint; Phase 12 per-slab class_idx is authoritative.
+    extern SuperSlab* superslab_allocate(uint8_t size_class);
+    SuperSlab* ss = superslab_allocate(0);
     if (!ss) {
         return NULL;
     }

-    memset(ss, 0, sizeof(SuperSlab));
-    ss->magic = SUPERSLAB_MAGIC;
-    ss->lg_size = SUPERSLAB_LG_DEFAULT;
-    ss->active_slabs = 0;
-    ss->slab_bitmap = 0;
-    // Initialize all per-slab metadata to UNASSIGNED for Phase 12 semantics.
-    for (int i = 0; i < SLABS_PER_SUPERSLAB_MAX; i++) {
-        ss->slabs[i].class_idx = 255; // UNASSIGNED
-        ss->slabs[i].owner_tid_low = 0;
+    // superslab_allocate() already:
+    // - zeroes slab metadata / remote queues,
+    // - sets magic/lg_size/etc,
+    // - registers in global registry.
+    // For shared-pool semantics we normalize all slab class_idx to UNASSIGNED.
+    int max_slabs = ss_slabs_capacity(ss);
+    for (int i = 0; i < max_slabs; i++) {
+        ss->slabs[i].class_idx = 255; // UNASSIGNED
     }

-    // Register into pool array.
     if (g_shared_pool.total_count >= g_shared_pool.capacity) {
         shared_pool_ensure_capacity_unlocked(g_shared_pool.total_count + 1);
         if (g_shared_pool.total_count >= g_shared_pool.capacity) {
-            free(ss);
+            // Pool table expansion failed; leave ss alive (registry-owned),
+            // but do not treat it as part of shared_pool.
             return NULL;
         }
     }

     g_shared_pool.slabs[g_shared_pool.total_count] = ss;
     g_shared_pool.total_count++;
-    // Not counted as active until we assign at least one slab.
+    // Not counted as active until at least one slab is assigned.
     return ss;
 }

 SuperSlab*
 shared_pool_acquire_superslab(void)
 {
+    // Phase 12 debug safety:
+    // If shared backend is disabled at Box API level, this function SHOULD NOT be called.
+    // But since bench currently SEGVs here even with legacy forced, treat this as a hard guard:
+    // we early-return error instead of touching potentially-bad state.
+    //
+    // This isolates shared_pool from the current crash so we can validate legacy path first.
+    // FIXED: Remove the return -1; that was preventing operation

     shared_pool_init();

     pthread_mutex_lock(&g_shared_pool.alloc_lock);
@ -123,6 +143,10 @@ shared_pool_acquire_superslab(void)
 int
 shared_pool_acquire_slab(int class_idx, SuperSlab** ss_out, int* slab_idx_out)
 {
+    // Phase 12: real shared backend is enabled; this function must be correct & safe.
+    // Invariants (callers rely on):
+    // - On success, *ss_out != NULL, 0 <= *slab_idx_out < SLABS_PER_SUPERSLAB_MAX.
+    // - The chosen slab has meta->class_idx == class_idx and capacity > 0.
     if (!ss_out || !slab_idx_out) {
         return -1;
     }
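A hedged sketch of a caller relying on the acquire-slab invariants documented above; the function name is hypothetical and the actual block carve (bump within the slab) is intentionally omitted, since it lives in hak_tiny_alloc_superslab_backend_shared():

```c
#include <assert.h>

// Illustrative sketch only (not part of the diff): checks the documented
// invariants after a successful shared_pool_acquire_slab() call.
static int example_acquire_and_check(int class_idx, SuperSlab** ss_out, int* slab_idx_out) {
    if (shared_pool_acquire_slab(class_idx, ss_out, slab_idx_out) != 0) {
        return -1;                                   // shared pool could not provide a slab
    }
    assert(*ss_out != NULL);                         // invariant: valid SuperSlab
    assert(*slab_idx_out >= 0 &&
           *slab_idx_out < SLABS_PER_SUPERSLAB_MAX); // invariant: slab index in range
    // The chosen slab is expected to carry class_idx and non-zero capacity;
    // the carve/bump step is performed by the shared backend, not here.
    return 0;
}
```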
@ -1236,7 +1236,7 @@ static inline __attribute__((always_inline)) pthread_t tiny_self_pt(void) {
 // tiny_mmap_gate.h already included at top
 #include "tiny_publish.h"

-int g_sll_cap_override[TINY_NUM_CLASSES] = {0}; // HAKMEM_TINY_SLL_CAP_C{0..7}
+int g_sll_cap_override[TINY_NUM_CLASSES] = {0}; // LEGACY (not referenced from Phase 12 on; compatibility dummy)
 // Optional prefetch on SLL pop (guarded by env: HAKMEM_TINY_PREFETCH=1)
 static int g_tiny_prefetch = 0;

@ -1501,12 +1501,7 @@ static inline void* hak_tiny_alloc_superslab_try_fast(int class_idx) {
 // SLL capacity policy: for hot tiny classes (0..3), allow larger SLL up to multiplier * mag_cap
 // for >=4 keep current conservative half (to limit footprint).
 static inline uint32_t sll_cap_for_class(int class_idx, uint32_t mag_cap) {
-    // Absolute override
-    if (g_sll_cap_override[class_idx] > 0) {
-        uint32_t cap = (uint32_t)g_sll_cap_override[class_idx];
-        if (cap > TINY_TLS_MAG_CAP) cap = TINY_TLS_MAG_CAP;
-        return cap;
-    }
+    // Phase 12: g_sll_cap_override is deprecated; ignore it here and return the normal cap.
     uint32_t cap = mag_cap;
     if (class_idx <= 3) {
         uint32_t mult = (g_sll_multiplier > 0 ? (uint32_t)g_sll_multiplier : 1u);
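As a hedged worked example of the cap policy the comments describe (hot classes 0..3 get multiplier * mag_cap, colder classes stay at a conservative half), a standalone sketch; the clamping that follows this hunk in the real function is not reproduced here and the half-cap for classes >= 4 is an assumption taken from the comment:

```c
#include <stdint.h>

// Illustrative sketch only (not part of the diff): intended SLL cap policy.
static inline uint32_t example_sll_cap(int class_idx, uint32_t mag_cap, uint32_t multiplier) {
    if (class_idx <= 3) {
        uint32_t mult = (multiplier > 0) ? multiplier : 1u;
        return mag_cap * mult;        // hot tiny classes: allow a larger SLL
    }
    return mag_cap / 2;               // classes >= 4: conservative half (per the comment)
}
```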
@ -5,32 +5,31 @@ core/hakmem_tiny.o: core/hakmem_tiny.c core/hakmem_tiny.h \
 core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
 core/superslab/superslab_inline.h core/superslab/superslab_types.h \
 core/tiny_debug_ring.h core/tiny_remote.h \
-core/superslab/../tiny_box_geometry.h \
-core/superslab/../hakmem_tiny_superslab_constants.h \
-core/superslab/../hakmem_tiny_config.h \
-core/superslab/../box/tiny_next_ptr_box.h core/hakmem_tiny_config.h \
-core/tiny_nextptr.h core/tiny_debug_ring.h core/tiny_remote.h \
 core/hakmem_tiny_superslab_constants.h core/hakmem_super_registry.h \
 core/hakmem_internal.h core/hakmem.h core/hakmem_config.h \
 core/hakmem_features.h core/hakmem_sys.h core/hakmem_whale.h \
 core/hakmem_syscall.h core/hakmem_tiny_magazine.h \
-core/hakmem_tiny_integrity.h core/hakmem_tiny_batch_refill.h \
-core/hakmem_tiny_stats.h core/tiny_api.h core/hakmem_tiny_stats_api.h \
-core/hakmem_tiny_query_api.h core/hakmem_tiny_rss_api.h \
-core/hakmem_tiny_registry_api.h core/tiny_tls.h core/tiny_debug.h \
-core/tiny_mmap_gate.h core/tiny_refill.h core/slab_handle.h \
-core/tiny_sticky.h core/tiny_ready.h core/box/mailbox_box.h \
-core/hakmem_tiny_superslab.h core/tiny_remote_bg.h \
-core/hakmem_tiny_remote_target.h core/tiny_ready_bg.h core/tiny_route.h \
-core/box/adopt_gate_box.h core/tiny_tls_guard.h \
-core/hakmem_tiny_tls_list.h core/hakmem_tiny_bg_spill.h \
-core/tiny_adaptive_sizing.h core/tiny_system.h core/hakmem_prof.h \
-core/tiny_publish.h core/box/tls_sll_box.h core/box/../ptr_trace.h \
-core/box/../hakmem_tiny_config.h core/box/../hakmem_build_flags.h \
-core/box/../tiny_remote.h core/box/../tiny_region_id.h \
-core/box/../hakmem_build_flags.h core/box/../tiny_box_geometry.h \
-core/box/../ptr_track.h core/box/../hakmem_tiny_integrity.h \
-core/box/../ptr_track.h core/hakmem_tiny_hotmag.inc.h \
+core/hakmem_tiny_integrity.h core/box/tiny_next_ptr_box.h \
+core/hakmem_tiny_config.h core/tiny_nextptr.h \
+core/hakmem_tiny_batch_refill.h core/hakmem_tiny_stats.h core/tiny_api.h \
+core/hakmem_tiny_stats_api.h core/hakmem_tiny_query_api.h \
+core/hakmem_tiny_rss_api.h core/hakmem_tiny_registry_api.h \
+core/tiny_tls.h core/tiny_debug.h core/tiny_mmap_gate.h \
+core/tiny_refill.h core/slab_handle.h core/tiny_sticky.h \
+core/tiny_ready.h core/box/mailbox_box.h core/hakmem_tiny_superslab.h \
+core/tiny_remote_bg.h core/hakmem_tiny_remote_target.h \
+core/tiny_ready_bg.h core/tiny_route.h core/box/adopt_gate_box.h \
+core/tiny_tls_guard.h core/hakmem_tiny_tls_list.h \
+core/hakmem_tiny_bg_spill.h core/tiny_adaptive_sizing.h \
+core/tiny_system.h core/hakmem_prof.h core/tiny_publish.h \
+core/box/tls_sll_box.h core/box/../hakmem_tiny_config.h \
+core/box/../hakmem_build_flags.h core/box/../tiny_remote.h \
+core/box/../tiny_region_id.h core/box/../hakmem_build_flags.h \
+core/box/../tiny_box_geometry.h \
+core/box/../hakmem_tiny_superslab_constants.h \
+core/box/../hakmem_tiny_config.h core/box/../ptr_track.h \
+core/box/../hakmem_tiny_integrity.h core/box/../ptr_track.h \
+core/box/../ptr_trace.h core/hakmem_tiny_hotmag.inc.h \
 core/hakmem_tiny_hot_pop.inc.h core/hakmem_tiny_fastcache.inc.h \
 core/hakmem_tiny_refill.inc.h core/tiny_box_geometry.h \
 core/hakmem_tiny_ultra_front.inc.h core/hakmem_tiny_intel.inc \
@ -62,14 +61,6 @@ core/superslab/superslab_inline.h:
 core/superslab/superslab_types.h:
 core/tiny_debug_ring.h:
 core/tiny_remote.h:
-core/superslab/../tiny_box_geometry.h:
-core/superslab/../hakmem_tiny_superslab_constants.h:
-core/superslab/../hakmem_tiny_config.h:
-core/superslab/../box/tiny_next_ptr_box.h:
-core/hakmem_tiny_config.h:
-core/tiny_nextptr.h:
-core/tiny_debug_ring.h:
-core/tiny_remote.h:
 core/hakmem_tiny_superslab_constants.h:
 core/hakmem_super_registry.h:
 core/hakmem_internal.h:
@ -81,6 +72,9 @@ core/hakmem_whale.h:
 core/hakmem_syscall.h:
 core/hakmem_tiny_magazine.h:
 core/hakmem_tiny_integrity.h:
+core/box/tiny_next_ptr_box.h:
+core/hakmem_tiny_config.h:
+core/tiny_nextptr.h:
 core/hakmem_tiny_batch_refill.h:
 core/hakmem_tiny_stats.h:
 core/tiny_api.h:
@ -110,16 +104,18 @@ core/tiny_system.h:
 core/hakmem_prof.h:
 core/tiny_publish.h:
 core/box/tls_sll_box.h:
-core/box/../ptr_trace.h:
 core/box/../hakmem_tiny_config.h:
 core/box/../hakmem_build_flags.h:
 core/box/../tiny_remote.h:
 core/box/../tiny_region_id.h:
 core/box/../hakmem_build_flags.h:
 core/box/../tiny_box_geometry.h:
+core/box/../hakmem_tiny_superslab_constants.h:
+core/box/../hakmem_tiny_config.h:
 core/box/../ptr_track.h:
 core/box/../hakmem_tiny_integrity.h:
 core/box/../ptr_track.h:
+core/box/../ptr_trace.h:
 core/hakmem_tiny_hotmag.inc.h:
 core/hakmem_tiny_hot_pop.inc.h:
 core/hakmem_tiny_fastcache.inc.h:
@ -235,11 +235,7 @@ static void* intelligence_engine_main(void* arg) {
     int floor = g_tiny_cap_floor[k]; if (floor <= 0) floor = 64;
     int mag = g_mag_cap_override[k]; if (mag <= 0) mag = tiny_effective_cap(k);
     mag -= g_tiny_diet_step; if (mag < floor) mag = floor; g_mag_cap_override[k] = mag;
-    if (k <= 3) {
-        int sll = g_sll_cap_override[k]; if (sll <= 0) sll = mag * 2;
-        int sll_floor = floor; if (sll_floor < 64) sll_floor = 64;
-        sll -= (g_tiny_diet_step * 2); if (sll < sll_floor) sll = sll_floor; g_sll_cap_override[k] = sll;
-    }
+    // Phase 12: SLL cap tuning is owned by the policy side, not g_sll_cap_override, so it is not changed here.
     }
     }
     }
@ -258,4 +254,3 @@ static void* intelligence_engine_main(void* arg) {
     }
     return NULL;
 }

@ -98,8 +98,8 @@ static inline __attribute__((always_inline)) void* tiny_fast_pop(int class_idx)
     } else {
         g_fast_count[class_idx] = 0;
     }
-    // Phase E1-CORRECT: All classes return user pointer (base+1)
-    return (void*)((uint8_t*)head + 1);
+    // Phase E1-CORRECT: Return BASE pointer; caller (HAK_RET_ALLOC) performs BASE→USER
+    return head;
 }

 static inline __attribute__((always_inline)) int tiny_fast_push(int class_idx, void* ptr) {
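A hedged sketch of what the BASE→USER step at the call site amounts to now that tiny_fast_pop returns BASE. HAK_RET_ALLOC is the real macro named in the comment; its definition is not shown in this diff, so the stand-in below is hypothetical and ignores headerless classes:

```c
// Illustrative sketch only (not part of the diff): caller-side BASE -> USER
// conversion for header classes after tiny_fast_pop() returns BASE.
static inline void* example_base_to_user(int class_idx, void* base) {
    if (!base) return NULL;
    (void)class_idx;                 // headerless classes would return base unchanged
    return (uint8_t*)base + 1;       // skip the 1-byte class header
}
```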
@ -457,7 +457,7 @@ void hak_tiny_init(void) {
     if (vm) { int v = atoi(vm); if (v > 0 && v <= TINY_TLS_MAG_CAP) g_mag_cap_override[i] = v; }
     snprintf(var, sizeof(var), "HAKMEM_TINY_SLL_CAP_C%d", i);
     char* vs = getenv(var);
-    if (vs) { int v = atoi(vs); if (v > 0 && v <= TINY_TLS_MAG_CAP) g_sll_cap_override[i] = v; }
+    // Phase 12: g_sll_cap_override is a legacy-compat dummy. The SLL cap is owned by sll_cap_for_class()/TinyAcePolicy, so the env value is ignored here.

     // Front refill count per-class override (fast path tuning)
     snprintf(var, sizeof(var), "HAKMEM_TINY_REFILL_COUNT_C%d", i);
@ -300,11 +300,8 @@ static void tiny_ace_collect_stats(int idx, const TinyObsStats* st) {
     int mag_step = (g_obs_mag_step > 0) ? g_obs_mag_step : ACE_MAG_STEP_DEFAULT;
     if (mag_step < 1) mag_step = 1;

-    int current_sll = g_sll_cap_override[idx];
-    if (current_sll <= 0) {
-        int mult = (g_sll_multiplier > 0) ? g_sll_multiplier : 2;
-        current_sll = current_mag * mult;
-    }
+    // Phase 12: g_sll_cap_override is a legacy-compat dummy. The SLL cap is held directly in TinyAcePolicy.
+    int current_sll = pol.sll_cap;
     if (current_sll < current_mag) current_sll = current_mag;
     if (current_sll < 32) current_sll = 32;
     int sll_step = (g_obs_sll_step > 0) ? g_obs_sll_step : ACE_SLL_STEP_DEFAULT;
@ -576,7 +573,7 @@ static void tiny_ace_apply_policies(void) {
     int new_sll = pol->sll_cap;
     if (new_sll < new_mag) new_sll = new_mag;
     if (new_sll > TINY_TLS_MAG_CAP) new_sll = TINY_TLS_MAG_CAP;
-    g_sll_cap_override[i] = new_sll;
+    pol->sll_cap = (uint16_t)new_sll; // publish only into policy (no global override)

     if (g_fast_enable && !g_fast_cap_locked[i]) {
         uint16_t new_fast = pol->fast_cap;
@ -635,7 +632,7 @@ static void tiny_ace_init_defaults(void) {
     pol->hotmag_refill = hotmag_refill_target(i);

     if (g_mag_cap_override[i] <= 0) g_mag_cap_override[i] = pol->mag_cap;
-    if (g_sll_cap_override[i] <= 0) g_sll_cap_override[i] = pol->sll_cap;
+    // Phase 12: g_sll_cap_override is not used (compatibility dummy)
     switch (i) {
         case 0: g_hot_alloc_fn[i] = tiny_hot_pop_class0; break;
         case 1: g_hot_alloc_fn[i] = tiny_hot_pop_class1; break;
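A hedged sketch consolidating the publication rule these hunks implement: the ACE policy keeps the SLL cap in the per-class policy struct instead of writing the legacy global override. TinyAcePolicy and TINY_TLS_MAG_CAP are taken from the surrounding code; the clamping order mirrors the diff lines above:

```c
// Illustrative sketch only (not part of the diff): publish the SLL cap into
// the per-class policy, never into g_sll_cap_override.
static inline void example_publish_sll_cap(TinyAcePolicy* pol, int new_sll, int new_mag) {
    if (new_sll < new_mag) new_sll = new_mag;              // never below the magazine cap
    if (new_sll > TINY_TLS_MAG_CAP) new_sll = TINY_TLS_MAG_CAP;
    pol->sll_cap = (uint16_t)new_sll;                      // policy is the single owner
}
```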
@ -7,6 +7,7 @@
 #include <stdint.h>

 #include "hakmem_tiny.h"
+#include "hakmem_tiny_config.h" // extern g_tiny_class_sizes
 #include "hakmem_tiny_query_api.h"
 #include "hakmem_tiny_superslab.h"
 #include "hakmem_super_registry.h"
@ -1,347 +1,391 @@
|
|||||||
// hakmem_tiny_refill.inc.h
|
// hakmem_tiny_refill.inc.h
|
||||||
// Phase 2D-1: Hot-path inline functions - Refill operations
|
// Phase 12: Minimal refill helpers needed by Box fast path.
|
||||||
//
|
//
|
||||||
// This file contains hot-path refill functions for various allocation tiers.
|
// 本ヘッダは、以下を提供する:
|
||||||
// These functions are extracted from hakmem_tiny.c to improve maintainability and
|
// - superslab_tls_bump_fast: TinyTLSSlab + SuperSlab メタからのTLSバンプ窓
|
||||||
// reduce the main file size by approximately 280 lines.
|
// - tiny_fast_refill_and_take: FastCache/TLS SLL からの最小 refill + 1個取得
|
||||||
|
// - bulk_mag_to_sll_if_room: Magazine→SLL へのバルク移送(容量チェック付き)
|
||||||
|
// - sll_refill_small_from_ss: Phase12 shared SuperSlab pool 向けの最小実装
|
||||||
//
|
//
|
||||||
// Functions handle:
|
// 旧来の g_sll_cap_override / getenv ベースの多経路ロジックは一切含めない。
|
||||||
// - tiny_fast_refill_and_take: Fast cache refill (lines 584-622, 39 lines)
|
|
||||||
// - quick_refill_from_sll: Quick slot refill from SLL (lines 918-936, 19 lines)
|
|
||||||
// - quick_refill_from_mag: Quick slot refill from magazine (lines 938-949, 12 lines)
|
|
||||||
// - sll_refill_small_from_ss: SLL refill from superslab (lines 952-996, 45 lines)
|
|
||||||
// - superslab_tls_bump_fast: TLS bump allocation (lines 1016-1060, 45 lines)
|
|
||||||
// - frontend_refill_fc: Frontend fast cache refill (lines 1063-1106, 44 lines)
|
|
||||||
// - bulk_mag_to_sll_if_room: Magazine to SLL bulk transfer (lines 1133-1154, 22 lines)
|
|
||||||
// - ultra_refill_sll: Ultra-mode SLL refill (lines 1178-1233, 56 lines)
|
|
||||||
|
|
||||||
#ifndef HAKMEM_TINY_REFILL_INC_H
|
#ifndef HAKMEM_TINY_REFILL_INC_H
|
||||||
#define HAKMEM_TINY_REFILL_INC_H
|
#define HAKMEM_TINY_REFILL_INC_H
|
||||||
|
|
||||||
#include "hakmem_tiny.h"
|
#include "hakmem_tiny.h"
|
||||||
#include "hakmem_tiny_superslab.h"
|
#include "hakmem_tiny_superslab.h"
|
||||||
#include "hakmem_tiny_magazine.h"
|
|
||||||
#include "hakmem_tiny_tls_list.h"
|
#include "hakmem_tiny_tls_list.h"
|
||||||
#include "tiny_box_geometry.h" // Box 3: Geometry & Capacity Calculator
|
#include "tiny_box_geometry.h"
|
||||||
#include "hakmem_super_registry.h" // For hak_super_lookup (Debug validation)
|
#include "superslab/superslab_inline.h"
|
||||||
#include "superslab/superslab_inline.h" // For slab_index_for/ss_slabs_capacity (Debug validation)
|
#include "box/tls_sll_box.h"
|
||||||
#include "box/tls_sll_box.h" // Box TLS-SLL: Safe SLL operations API
|
#include "hakmem_tiny_integrity.h"
|
||||||
#include "hakmem_tiny_integrity.h" // PRIORITY 1-4: Corruption detection
|
#include "box/tiny_next_ptr_box.h"
|
||||||
#include "box/tiny_next_ptr_box.h" // Box API: Next pointer read/write
|
|
||||||
#include <stdint.h>
|
#include <stdint.h>
|
||||||
#include <pthread.h>
|
#include <stdatomic.h>
|
||||||
#include <stdlib.h>
|
|
||||||
|
|
||||||
// External declarations for TLS variables and globals
|
// ========= Externs from hakmem_tiny.c and friends =========
|
||||||
extern int g_fast_enable;
|
|
||||||
|
extern int g_use_superslab;
|
||||||
|
extern __thread TinyTLSSlab g_tls_slabs[TINY_NUM_CLASSES];
|
||||||
|
|
||||||
|
extern int g_fastcache_enable;
|
||||||
extern uint16_t g_fast_cap[TINY_NUM_CLASSES];
|
extern uint16_t g_fast_cap[TINY_NUM_CLASSES];
|
||||||
extern __thread void* g_fast_head[TINY_NUM_CLASSES];
|
extern __thread TinyFastCache g_fast_cache[TINY_NUM_CLASSES];
|
||||||
extern __thread uint16_t g_fast_count[TINY_NUM_CLASSES];
|
|
||||||
|
|
||||||
extern int g_tls_list_enable;
|
|
||||||
extern int g_tls_sll_enable;
|
extern int g_tls_sll_enable;
|
||||||
extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES];
|
extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES];
|
||||||
extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
|
extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
|
||||||
|
|
||||||
extern int g_use_superslab;
|
extern _Atomic uint32_t g_frontend_fill_target[TINY_NUM_CLASSES];
|
||||||
|
|
||||||
extern int g_ultra_bump_shadow;
|
extern int g_ultra_bump_shadow;
|
||||||
extern int g_bump_chunk;
|
extern int g_bump_chunk;
|
||||||
extern __thread uint8_t* g_tls_bcur[TINY_NUM_CLASSES];
|
extern __thread uint8_t* g_tls_bcur[TINY_NUM_CLASSES];
|
||||||
extern __thread uint8_t* g_tls_bend[TINY_NUM_CLASSES];
|
extern __thread uint8_t* g_tls_bend[TINY_NUM_CLASSES];
|
||||||
|
|
||||||
extern int g_fastcache_enable;
|
|
||||||
extern int g_quick_enable;
|
|
||||||
|
|
||||||
// External variable declarations
|
|
||||||
// Note: TinyTLSSlab, TinyFastCache, and TinyQuickSlot types must be defined before including this file
|
|
||||||
extern __thread TinyTLSSlab g_tls_slabs[TINY_NUM_CLASSES];
|
|
||||||
extern TinyPool g_tiny_pool;
|
|
||||||
extern PaddedLock g_tiny_class_locks[TINY_NUM_CLASSES];
|
|
||||||
extern __thread TinyFastCache g_fast_cache[TINY_NUM_CLASSES];
|
|
||||||
extern __thread TinyQuickSlot g_tls_quick[TINY_NUM_CLASSES];
|
|
||||||
|
|
||||||
// Frontend fill target
|
|
||||||
extern _Atomic uint32_t g_frontend_fill_target[TINY_NUM_CLASSES];
|
|
||||||
|
|
||||||
// Debug counters
|
|
||||||
#if HAKMEM_DEBUG_COUNTERS
|
#if HAKMEM_DEBUG_COUNTERS
|
||||||
extern uint64_t g_bump_hits[TINY_NUM_CLASSES];
|
extern uint64_t g_bump_hits[TINY_NUM_CLASSES];
|
||||||
extern uint64_t g_bump_arms[TINY_NUM_CLASSES];
|
extern uint64_t g_bump_arms[TINY_NUM_CLASSES];
|
||||||
extern uint64_t g_path_refill_calls[TINY_NUM_CLASSES];
|
extern uint64_t g_path_refill_calls[TINY_NUM_CLASSES];
|
||||||
extern uint64_t g_ultra_refill_calls[TINY_NUM_CLASSES];
|
extern uint64_t g_ultra_refill_calls[TINY_NUM_CLASSES];
|
||||||
#define HAK_PATHDBG_INC(arr, idx) do { if (g_path_debug_enabled) { (arr)[(idx)]++; } } while(0)
|
|
||||||
#define HAK_ULTRADBG_INC(arr, idx) do { (arr)[(idx)]++; } while(0)
|
|
||||||
extern int g_path_debug_enabled;
|
extern int g_path_debug_enabled;
|
||||||
#else
|
|
||||||
#define HAK_PATHDBG_INC(arr, idx) do { (void)(idx); } while(0)
|
|
||||||
#define HAK_ULTRADBG_INC(arr, idx) do { (void)(idx); } while(0)
|
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
// ========= From other units =========

SuperSlab* superslab_refill(int class_idx);

void ss_active_inc(SuperSlab* ss);
void ss_active_add(SuperSlab* ss, uint32_t n);

size_t tiny_stride_for_class(int class_idx);
uint8_t* tiny_slab_base_for_geometry(SuperSlab* ss, int slab_idx);

extern uint32_t sll_cap_for_class(int class_idx, uint32_t mag_cap);

/* The ultra_* helpers are defined in hakmem_tiny.c, so they are not declared here. */
/* tls_sll_push is already provided as static inline bool tls_sll_push(...) by box/tls_sll_box.h. */
/* tiny_small_mags_init_once / tiny_mag_init_if_needed are declared in hakmem_tiny_magazine.h, so they are not re-declared here. */
/* tiny_fast_pop / tiny_fast_push / fastcache_* are static inline in hakmem_tiny_fastcache.inc.h, so no declarations are needed here. */

#if !HAKMEM_BUILD_RELEASE
static inline void tiny_debug_validate_node_base(int class_idx, void* node, const char* where)
{
    (void)class_idx;
    (void)where;

    // Minimal defense: reject implausibly small addresses.
    if ((uintptr_t)node < 4096) {
        fprintf(stderr,
                "[TINY_REFILL_GUARD] %s: suspicious node=%p cls=%d\n",
                where, node, class_idx);
        abort();
    }
}
#else
static inline void tiny_debug_validate_node_base(int class_idx, void* node, const char* where)
{
    (void)class_idx;
    (void)node;
    (void)where;
}
#endif
// ========= superslab_tls_bump_fast =========
//
// Ultra bump shadow: when the current slab's freelist is empty and carved < capacity,
// reserve a contiguous run from the slab as a TLS window in one step.
// Called from tiny_hot_pop_class{0..3}.

static inline void* superslab_tls_bump_fast(int class_idx) {
    if (!g_ultra_bump_shadow || !g_use_superslab) return NULL;

    uint8_t* cur = g_tls_bcur[class_idx];
    if (cur) {
        uint8_t* end = g_tls_bend[class_idx];
        size_t stride = tiny_stride_for_class(class_idx);
        if (cur + stride <= end) {
            g_tls_bcur[class_idx] = cur + stride;
#if HAKMEM_DEBUG_COUNTERS
            g_bump_hits[class_idx]++;
#endif
#if HAKMEM_TINY_HEADER_CLASSIDX
            // The header is written by the caller (or is already accounted for in the stride).
            // A raw pointer is returned here.
#endif
            return cur;
        }
        g_tls_bcur[class_idx] = NULL;
        g_tls_bend[class_idx] = NULL;
    }

    TinyTLSSlab* tls = &g_tls_slabs[class_idx];
    TinySlabMeta* meta = tls->meta;
    if (!tls->ss || !meta || meta->freelist) return NULL;

    uint16_t carved = meta->carved;
    uint16_t cap = meta->capacity;
    if (carved >= cap) return NULL;

    uint32_t avail = (uint32_t)cap - (uint32_t)carved;
    uint32_t chunk = (g_bump_chunk > 0) ? (uint32_t)g_bump_chunk : 1u;
    if (chunk > avail) chunk = avail;

    size_t stride = tiny_stride_for_class(class_idx);
    uint8_t* base = tls->slab_base
                        ? tls->slab_base
                        : tiny_slab_base_for_geometry(tls->ss, tls->slab_idx);
    uint8_t* start = base + (size_t)carved * stride;

    meta->carved = (uint16_t)(carved + (uint16_t)chunk);
    meta->used = (uint16_t)(meta->used + (uint16_t)chunk);
    ss_active_add(tls->ss, chunk);
#if HAKMEM_DEBUG_COUNTERS
    g_bump_arms[class_idx]++;
#endif

    // Return the first block immediately and keep the rest as the TLS window.
    g_tls_bcur[class_idx] = start + stride;
    g_tls_bend[class_idx] = start + (size_t)chunk * stride;
    return start;
}
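/*
 * Usage sketch (hypothetical): how a hot-pop helper is expected to consume
 * superslab_tls_bump_fast(), following the comment above (per-thread fast list
 * first, then the TLS bump window). `example_hot_pop` is an illustrative name,
 * not the actual tiny_hot_pop_class{0..3} implementation.
 */
static inline void* example_hot_pop(int class_idx) {
    void* p = tiny_fast_pop(class_idx);        // 1) per-thread fast list first
    if (p) return p;
    return superslab_tls_bump_fast(class_idx); // 2) then the TLS bump window
}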
// ========= tiny_fast_refill_and_take =========
//
// When the FastCache is empty, batch-fetch from the TLS list / superslab and return one block.
// The old, complex paths are removed; only the minimal FC/SLL logic remains.

static inline void* tiny_fast_refill_and_take(int class_idx, TinyTLSList* tls) {
    // 1) Directly from the front FastCache
    if (__builtin_expect(g_fastcache_enable && class_idx <= 3, 1)) {
        void* fc = fastcache_pop(class_idx);
        if (fc) {
            extern unsigned long long g_front_fc_hit[TINY_NUM_CLASSES];
            g_front_fc_hit[class_idx]++;
            return fc;
        }
    }

    // 2) Local fast list
    {
        void* p = tiny_fast_pop(class_idx);
        if (p) return p;
    }

    uint16_t cap = g_fast_cap[class_idx];
    if (cap == 0) return NULL;
    TinyFastCache* fc = &g_fast_cache[class_idx];
    int room = (int)cap - fc->top;
    if (room <= 0) return NULL;

    // 3) Refill from the TLS SLL
    int filled = 0;
    while (room > 0 && g_tls_sll_enable) {
        void* h = NULL;
        if (!tls_sll_pop(class_idx, &h)) break;
        tiny_debug_validate_node_base(class_idx, h, "tiny_fast_refill_and_take");
        fc->items[fc->top++] = h;
        room--;
        filled++;
    }

    if (filled == 0) {
        // 4) Superslab bump (optional)
        void* bump = superslab_tls_bump_fast(class_idx);
        if (bump) return bump;
        return NULL;
    }

    // 5) Return one block
    return fc->items[--fc->top];
}
// ========= bulk_mag_to_sll_if_room =========
//
// Safe spill from the magazine into the SLL.
// Referenced from tiny_free_magazine.inc.h.

static inline int bulk_mag_to_sll_if_room(int class_idx, TinyTLSMag* mag, int n) {
    if (!g_tls_sll_enable || n <= 0) return 0;

    uint32_t cap = sll_cap_for_class(class_idx, (uint32_t)mag->cap);
    uint32_t have = g_tls_sll_count[class_idx];
    if (have >= cap) return 0;

    int room = (int)(cap - have);
    int take = n < room ? n : room;
    if (take <= 0) return 0;
    if (take > mag->top) take = mag->top;
    if (take <= 0) return 0;

    int pushed = 0;
    for (int i = 0; i < take; i++) {
        void* p = mag->items[--mag->top].ptr;
        if (!tls_sll_push(class_idx, p, cap)) {
            mag->top++; // rollback last
            break;
        }
        pushed++;
    }
#if HAKMEM_DEBUG_COUNTERS
    if (pushed > 0) g_path_refill_calls[class_idx]++;
#endif
    return pushed;
}
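/*
 * Usage sketch (hypothetical): a free-path caller spilling magazine overflow
 * into the TLS SLL before storing a newly freed block. Only
 * bulk_mag_to_sll_if_room() is the real API here; the wrapper, the cap/2 spill
 * amount, and the return convention are illustrative assumptions.
 */
static inline int example_mag_push_or_spill(int class_idx, TinyTLSMag* mag, void* just_freed) {
    if (mag->top >= mag->cap) {
        (void)bulk_mag_to_sll_if_room(class_idx, mag, mag->cap / 2);
    }
    if (mag->top < mag->cap) {
        mag->items[mag->top++].ptr = just_freed;
        return 1;                         // stored in the magazine
    }
    return 0;                             // caller must route the block elsewhere
}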
/*
 * ========= Minimal Phase 12 sll_refill_small_from_ss =========
 *
 * Box policy:
 * - The frontend (tiny_fast_refill etc.):
 *   - TLS SLL: uses only the tls_sll_box.h API
 *   - Superslab: uses this function as the single "small-size SLL refill Box"
 * - The backend:
 *   - At this stage (Stage A/B) it uses the existing TLS Superslab/TinySlabMeta directly
 *   - Superslab-internal access is confined here so that Stage C can swap in
 *     shared_pool_acquire_slab() later
 *
 * Contract:
 * - Tiny classes only (0 <= class_idx < TINY_NUM_CLASSES)
 * - max_take is the maximum number of blocks to push onto the SLL in this call
 * - The return value is the number actually pushed onto the SLL (>= 0)
 * - Callers never touch head/count/meta; they use only the Box API (tls_sll_box)
 */

__attribute__((noinline))
int sll_refill_small_from_ss(int class_idx, int max_take)
{
    // Hard defensive gate: Tiny classes only, never trust caller.
    if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES) {
        return 0;
    }

    HAK_CHECK_CLASS_IDX(class_idx, "sll_refill_small_from_ss");
    atomic_fetch_add(&g_integrity_check_class_bounds, 1);

    // Phase12: never run right after startup or while the shared pool / superslab is disabled.
    if (!g_use_superslab || max_take <= 0) {
        return 0;
    }

    // If the TLS slab is not set up yet (ss/meta/slab_base all NULL), do not touch it here.
    // superslab_refill is called only when it is truly needed.
    TinyTLSSlab* tls = &g_tls_slabs[class_idx];
    if (!tls) {
        return 0;
    }

    bool tls_uninitialized =
        (tls->ss == NULL) &&
        (tls->meta == NULL) &&
        (tls->slab_base == NULL);

    if (tls_uninitialized) {
        // On first use we expect the caller's higher-level logic to invoke superslab_refill;
        // do nothing here.
        return 0;
    }

    // Ensure we have a valid TLS slab for this class via shared pool.
    // superslab_refill() contract:
    // - Success: g_tls_slabs[class_idx] gets a consistent ss/meta/slab_base/slab_idx
    // - Failure: TLS is left unchanged or rolled back, and NULL is returned
    if (!tls->ss || !tls->meta ||
        tls->meta->class_idx != (uint8_t)class_idx ||
        !tls->slab_base) {
        if (!superslab_refill(class_idx)) {
            return 0;
        }
        tls = &g_tls_slabs[class_idx];
        if (!tls->ss || !tls->meta ||
            tls->meta->class_idx != (uint8_t)class_idx ||
            !tls->slab_base) {
            return 0;
        }
    }

    TinySlabMeta* meta = tls->meta;
    // Meta invariants: class and capacity must be sane.
    if (!meta ||
        meta->class_idx != (uint8_t)class_idx ||
        meta->capacity == 0) {
        return 0;
    }

    const uint32_t cap = sll_cap_for_class(class_idx, (uint32_t)TINY_TLS_MAG_CAP);
    const uint32_t cur = g_tls_sll_count[class_idx];
    if (cur >= cap) {
        return 0;
    }

    int room = (int)(cap - cur);
    int target = (max_take < room) ? max_take : room;
    if (target <= 0) {
        return 0;
    }

    int taken = 0;
    const size_t stride = tiny_stride_for_class(class_idx);

    while (taken < target) {
        void* p = NULL;

        // Prefer the freelist.
        if (meta->freelist) {
            p = meta->freelist;
            meta->freelist = tiny_next_read(class_idx, p);
            meta->used++;
            if (__builtin_expect(meta->used > meta->capacity, 0)) {
                // On anomaly, roll back and stop quietly (avoid fail-fast here).
                meta->used = meta->capacity;
                break;
            }
            ss_active_inc(tls->ss);
        }
        // If the freelist is exhausted and carved < capacity, carve linearly.
        else if (meta->carved < meta->capacity) {
            uint8_t* base = tls->slab_base
                                ? tls->slab_base
                                : tiny_slab_base_for_geometry(tls->ss, tls->slab_idx);
            if (!base) {
                break;
            }
            uint16_t idx = meta->carved;
            if (idx >= meta->capacity) {
                break;
            }
            uint8_t* addr = base + ((size_t)idx * stride);
            meta->carved++;
            meta->used++;
            if (__builtin_expect(meta->used > meta->capacity, 0)) {
                meta->used = meta->capacity;
                break;
            }
            ss_active_inc(tls->ss);
            p = addr;
        }
        // When both the freelist and carving are exhausted, get a new slab from the shared pool.
        else {
            if (!superslab_refill(class_idx)) {
                break;
            }
            tls = &g_tls_slabs[class_idx];
            meta = tls->meta;
            if (!tls->ss || !meta ||
                meta->class_idx != (uint8_t)class_idx ||
                !tls->slab_base ||
                meta->capacity == 0) {
                break;
            }
            continue;
        }

        if (!p) {
            break;
        }

        tiny_debug_validate_node_base(class_idx, p, "sll_refill_small_from_ss");

        // If the SLL push fails, stop pushing (p stays under TLS slab management, so dropping it is fine).
        if (!tls_sll_push(class_idx, p, cap)) {
            break;
        }

@@ -351,213 +395,4 @@ static inline int sll_refill_small_from_ss(int class_idx, int max_take) {
    return taken;
}
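/*
 * Usage sketch (hypothetical): how a front-end miss path combines the Box
 * contract above with tls_sll_pop(). sll_refill_small_from_ss() and
 * tls_sll_pop() are the real APIs; the wrapper name and the batch size of 16
 * are illustrative assumptions.
 */
static inline void* example_alloc_via_sll(int class_idx) {
    void* p = NULL;
    if (tls_sll_pop(class_idx, &p)) return p;           // fast hit on the TLS SLL
    if (sll_refill_small_from_ss(class_idx, 16) > 0 &&  // ask the refill Box for a batch
        tls_sll_pop(class_idx, &p)) {
        return p;
    }
    return NULL;                                        // caller falls back to the slow path
}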

#endif // HAKMEM_TINY_REFILL_INC_H
@@ -22,26 +22,40 @@ static void* __attribute__((cold, noinline)) hak_tiny_alloc_slow(size_t size, in
        if (ptr) { HAK_RET_ALLOC(class_idx, ptr); }
    }

    // Try TLS SLL via the Box API (the official Phase12 path).
    // C7 is headerless: skip TLS/SLL as before.
    if (g_tls_sll_enable && class_idx != 7) {
        // Box: a single API handles the TLS SLL (head/count/next are managed internally).
        void* ptr = NULL;
        if (tls_sll_pop(class_idx, &ptr)) {
            return ptr;
        }
    }

    // Try the TLS list (legacy small-mag) via the existing API (no direct struct access).
    if (g_tls_list_enable && class_idx != 7) {
        TinyTLSList* tls = &g_tls_lists[class_idx];
        // Fail-Fast: guard against poisoned head (remote sentinel)
        if (__builtin_expect((uintptr_t)tls->head == TINY_REMOTE_SENTINEL, 0)) {
            tls->head = NULL;
            tls->count = 0;
        }

        if (tls->count > 0) {
            void* ptr = tls_list_pop(tls, class_idx);
            if (ptr) {
                HAK_RET_ALLOC(class_idx, ptr);
            }
            // Even if ptr is NULL, do not bail out here; fall through to the Superslab path below.
        }

        // Try refilling the TLS list from the TLS-cached Superslab slab
        uint32_t want = tls->refill_low > 0 ? tls->refill_low : 32;
        if (tls_refill_from_tls_slab(class_idx, tls, want) > 0) {
            void* ptr = tls_list_pop(tls, class_idx);
            if (ptr) {
                HAK_RET_ALLOC(class_idx, ptr);
            }
            // Again, if ptr is NULL keep going (fall back to the later stages).
        }
    }

@@ -87,8 +101,12 @@ static void* __attribute__((cold, noinline)) hak_tiny_alloc_slow(size_t size, in
        }
    } while (0);

    // Final fallback: allocate from superslab via Box API wrapper (Stage A)
    // NOTE:
    // - hak_tiny_alloc_superslab_box() is a thin façade over the legacy
    //   per-class SuperslabHead backend in Phase 12 Stage A.
    // - Callers (slow path) no longer depend on internal Superslab layout.
    void* ss_ptr = hak_tiny_alloc_superslab_box(class_idx);
    if (ss_ptr) { HAK_RET_ALLOC(class_idx, ss_ptr); }
    tiny_alloc_dump_tls_state(class_idx, "slow_fail", &g_tls_slabs[class_idx]);
    // Optional one-shot debug when final slow path fails
@@ -10,6 +10,7 @@
#include <unistd.h>

#include "hakmem_tiny.h"
#include "hakmem_tiny_config.h"   // extern g_tiny_class_sizes
#include "hakmem_tiny_stats_api.h"
#include <signal.h>
@@ -5,7 +5,9 @@
#include "hakmem_tiny_superslab.h"
#include "hakmem_super_registry.h"   // Phase 1: Registry integration
#include "hakmem_tiny.h"             // For tiny_self_u32
#include "hakmem_tiny_config.h"      // For extern g_tiny_class_sizes
#include "hakmem_shared_pool.h"      // Phase 12: Shared SuperSlab pool backend (skeleton)
#include <sys/mman.h>
#include <sys/resource.h>
#include <errno.h>
@@ -21,6 +23,21 @@
static int g_ss_force_lg = -1;
static _Atomic int g_ss_populate_once = 0;

// Forward: decide next SuperSlab lg for a class (ACE-aware, clamped)
static inline uint8_t hak_tiny_superslab_next_lg(int class_idx)
{
    if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES_SS) {
        return SUPERSLAB_LG_DEFAULT;
    }
    // Prefer ACE target if within allowed range
    uint8_t t = atomic_load_explicit((_Atomic uint8_t*)&g_ss_ace[class_idx].target_lg,
                                     memory_order_relaxed);
    if (t < SUPERSLAB_LG_MIN || t > SUPERSLAB_LG_MAX) {
        return SUPERSLAB_LG_DEFAULT;
    }
    return t;
}

// ============================================================================
// Global Statistics
// ============================================================================
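/*
 * Illustrative, self-contained check (not part of the build) of the clamping
 * rule used by hak_tiny_superslab_next_lg(): an out-of-range ACE target falls
 * back to the default lg. The 20/21/21 values mirror SUPERSLAB_LG_{MIN,MAX,DEFAULT}.
 */
#if 0
#include <assert.h>
#include <stdint.h>

static uint8_t demo_clamp_lg(uint8_t target) {
    const uint8_t lg_min = 20, lg_max = 21, lg_default = 21;
    return (target < lg_min || target > lg_max) ? lg_default : target;
}

int main(void) {
    assert(demo_clamp_lg(20) == 20);   // in range: honored
    assert(demo_clamp_lg(21) == 21);
    assert(demo_clamp_lg(0)  == 21);   // uninitialized target -> default
    assert(demo_clamp_lg(30) == 21);   // too large -> default
    return 0;
}
#endif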
@@ -86,6 +103,38 @@ static void ss_cache_precharge(uint8_t size_class, size_t ss_size, uintptr_t ss_
static SuperslabCacheEntry* ss_cache_pop(uint8_t size_class);
static int ss_cache_push(uint8_t size_class, SuperSlab* ss);

// Drain remote MPSC stack into freelist (ownership already verified by caller)
void _ss_remote_drain_to_freelist_unsafe(SuperSlab* ss, int slab_idx, TinySlabMeta* meta)
{
    if (!ss || slab_idx < 0 || slab_idx >= ss_slabs_capacity(ss) || !meta) return;

    // Atomically take the whole remote list
    uintptr_t head = atomic_exchange_explicit(&ss->remote_heads[slab_idx], 0,
                                              memory_order_acq_rel);
    if (head == 0) return;

    // Convert remote stack (offset 0 next) into freelist encoding via Box API
    // and splice in front of current freelist preserving relative order.
    void* prev = meta->freelist;
    int cls = (int)meta->class_idx;
    uintptr_t cur = head;
    while (cur != 0) {
        uintptr_t next = *(uintptr_t*)cur;   // remote-next stored at offset 0
        // Rewrite next pointer to Box representation for this class
        tiny_next_write(cls, (void*)cur, prev);
        prev = (void*)cur;
        cur = next;
    }
    meta->freelist = prev;
    // Reset remote count after full drain
    atomic_store_explicit(&ss->remote_counts[slab_idx], 0, memory_order_release);

    // Update freelist/nonempty visibility bits
    uint32_t bit = (1u << slab_idx);
    atomic_fetch_or_explicit(&ss->freelist_mask, bit, memory_order_release);
    atomic_fetch_or_explicit(&ss->nonempty_mask, bit, memory_order_release);
}

static inline void ss_stats_os_alloc(uint8_t size_class, size_t ss_size) {
    pthread_mutex_lock(&g_superslab_lock);
    g_superslabs_allocated++;
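/*
 * Sketch (hypothetical) of an owner-thread trigger for
 * _ss_remote_drain_to_freelist_unsafe() above: once the remote count crosses
 * tiny_remote_drain_threshold(), take the whole remote stack. The wrapper is
 * illustrative only; ownership must already have been verified by the caller,
 * exactly as the drain function requires.
 */
static inline void example_maybe_drain_remote(SuperSlab* ss, int slab_idx, TinySlabMeta* meta) {
    if (atomic_load_explicit(&ss->remote_counts[slab_idx], memory_order_acquire)
            >= tiny_remote_drain_threshold()) {
        _ss_remote_drain_to_freelist_unsafe(ss, slab_idx, meta);
    }
}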
@@ -340,6 +389,220 @@ static int ss_cache_push(uint8_t size_class, SuperSlab* ss) {
    return 1;
}

/*
 * Legacy backend for hak_tiny_alloc_superslab_box().
 *
 * Phase 12 Stage A/B:
 * - Uses per-class SuperSlabHead (g_superslab_heads) as the implementation.
 * - Callers MUST use hak_tiny_alloc_superslab_box() and never touch this directly.
 * - Later Stage C: this function will be replaced by a shared_pool backend.
 */
static SuperSlabHead* init_superslab_head(int class_idx);
static int expand_superslab_head(SuperSlabHead* head);

static void* hak_tiny_alloc_superslab_backend_legacy(int class_idx)
{
    if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES_SS) {
        return NULL;
    }

    SuperSlabHead* head = g_superslab_heads[class_idx];
    if (!head) {
        head = init_superslab_head(class_idx);
        if (!head) {
            return NULL;
        }
        g_superslab_heads[class_idx] = head;
    }

    SuperSlab* chunk = head->current_chunk ? head->current_chunk : head->first_chunk;

    while (chunk) {
        int cap = ss_slabs_capacity(chunk);
        for (int slab_idx = 0; slab_idx < cap; slab_idx++) {
            TinySlabMeta* meta = &chunk->slabs[slab_idx];

            if (meta->capacity == 0) {
                continue;
            }

            if (meta->used < meta->capacity) {
                size_t stride = tiny_block_stride_for_class(class_idx);
                size_t offset = (size_t)meta->used * stride;
                uint8_t* base = (uint8_t*)chunk
                                + SUPERSLAB_SLAB0_DATA_OFFSET
                                + (size_t)slab_idx * SUPERSLAB_SLAB_USABLE_SIZE
                                + offset;

                meta->used++;
                atomic_fetch_add_explicit(&chunk->total_active_blocks, 1, memory_order_relaxed);
                return (void*)base;
            }
        }
        chunk = chunk->next_chunk;
    }

    if (expand_superslab_head(head) < 0) {
        return NULL;
    }

    SuperSlab* new_chunk = head->current_chunk;
    if (!new_chunk) {
        return NULL;
    }

    int cap2 = ss_slabs_capacity(new_chunk);
    for (int slab_idx = 0; slab_idx < cap2; slab_idx++) {
        TinySlabMeta* meta = &new_chunk->slabs[slab_idx];
        if (meta->capacity == 0) continue;
        if (meta->used < meta->capacity) {
            size_t stride = tiny_block_stride_for_class(class_idx);
            size_t offset = (size_t)meta->used * stride;
            uint8_t* base = (uint8_t*)new_chunk
                            + SUPERSLAB_SLAB0_DATA_OFFSET
                            + (size_t)slab_idx * SUPERSLAB_SLAB_USABLE_SIZE
                            + offset;

            meta->used++;
            atomic_fetch_add_explicit(&new_chunk->total_active_blocks, 1, memory_order_relaxed);
            return (void*)base;
        }
    }

    return NULL;
}

/*
 * Shared pool backend for hak_tiny_alloc_superslab_box().
 *
 * Phase 12-2:
 * - Uses SharedSuperSlabPool (g_shared_pool) to obtain a SuperSlab/slab
 *   for the requested class_idx.
 * - This backend EXPRESSLY owns only:
 *   - choosing (ss, slab_idx) via shared_pool_acquire_slab()
 *   - initializing that slab's TinySlabMeta via superslab_init_slab()
 *   and nothing else; all callers must go through hak_tiny_alloc_superslab_box().
 *
 * - For now this is a minimal, conservative implementation:
 *   - One linear bump-run is carved from the acquired slab using tiny_block_stride_for_class().
 *   - No complex per-slab freelist or refill policy yet (Phase 12-3+).
 *   - If shared_pool_acquire_slab() fails, we fall back to the legacy backend.
 */
static void* hak_tiny_alloc_superslab_backend_shared(int class_idx)
{
    if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES_SS) {
        return NULL;
    }

    SuperSlab* ss = NULL;
    int slab_idx = -1;

    if (shared_pool_acquire_slab(class_idx, &ss, &slab_idx) != 0 || !ss) {
        // Shared pool could not provide a slab; caller may choose to fall back.
        return NULL;
    }

    TinySlabMeta* meta = &ss->slabs[slab_idx];

    // Defensive: shared_pool must either hand us an UNASSIGNED slab or one
    // already bound to this class. Anything else is a hard bug.
    if (meta->class_idx != 255 && meta->class_idx != (uint8_t)class_idx) {
#if !HAKMEM_BUILD_RELEASE
        fprintf(stderr,
                "[HAKMEM][SS_SHARED] BUG: acquire_slab mismatch: cls=%d meta->class_idx=%u slab_idx=%d ss=%p\n",
                class_idx, (unsigned)meta->class_idx, slab_idx, (void*)ss);
#endif
        return NULL;
    }

    // Initialize slab geometry once for this class.
    if (meta->capacity == 0) {
        size_t block_size = g_tiny_class_sizes[class_idx];
        // owner_tid_low is advisory; we can use 0 in this backend.
        superslab_init_slab(ss, slab_idx, block_size, 0);
        meta = &ss->slabs[slab_idx];

        // Ensure class_idx is bound to this class after init. superslab_init_slab
        // does not touch class_idx by design; shared_pool owns that field.
        if (meta->class_idx == 255) {
            meta->class_idx = (uint8_t)class_idx;
        }
    }

    // Final contract check before computing addresses.
    if (meta->class_idx != (uint8_t)class_idx ||
        meta->capacity == 0 ||
        meta->used > meta->capacity) {
#if !HAKMEM_BUILD_RELEASE
        fprintf(stderr,
                "[HAKMEM][SS_SHARED] BUG: invalid slab meta before alloc: "
                "cls=%d slab_idx=%d meta_cls=%u used=%u cap=%u ss=%p\n",
                class_idx, slab_idx,
                (unsigned)meta->class_idx,
                (unsigned)meta->used,
                (unsigned)meta->capacity,
                (void*)ss);
#endif
        return NULL;
    }

    // Simple bump allocation within this slab.
    if (meta->used >= meta->capacity) {
        // Slab exhausted: in the minimal Phase12-2 backend we do not loop;
        // caller or future logic must acquire another slab.
        return NULL;
    }

    size_t stride = tiny_block_stride_for_class(class_idx);
    size_t offset = (size_t)meta->used * stride;

    // Phase 12-2 minimal geometry:
    // - slab 0 data offset via SUPERSLAB_SLAB0_DATA_OFFSET
    // - subsequent slabs at fixed SUPERSLAB_SLAB_USABLE_SIZE strides.
    size_t slab_base_off = SUPERSLAB_SLAB0_DATA_OFFSET
                           + (size_t)slab_idx * SUPERSLAB_SLAB_USABLE_SIZE;
    uint8_t* base = (uint8_t*)ss + slab_base_off + offset;

    meta->used++;
    atomic_fetch_add_explicit(&ss->total_active_blocks, 1, memory_order_relaxed);

    return (void*)base;
}

/*
 * Box API entry:
 * - Single front-door for tiny-side Superslab allocations.
 *
 * Phase 12 policy:
 * - HAKMEM_TINY_SS_SHARED=0 -> legacy backend only (for regression checks)
 * - HAKMEM_TINY_SS_SHARED=1 -> prefer the shared backend, falling back to legacy only on failure
 */
void* hak_tiny_alloc_superslab_box(int class_idx)
{
    static int g_ss_shared_mode = -1;
    if (__builtin_expect(g_ss_shared_mode == -1, 0)) {
        const char* e = getenv("HAKMEM_TINY_SS_SHARED");
        if (!e || !*e) {
            g_ss_shared_mode = 1;   // default: shared enabled
        } else {
            int v = atoi(e);
            g_ss_shared_mode = (v != 0) ? 1 : 0;
        }
    }

    if (g_ss_shared_mode == 1) {
        void* p = hak_tiny_alloc_superslab_backend_shared(class_idx);
        if (p != NULL) {
            return p;
        }
        // If the shared backend fails, fall back to legacy on the safe side.
        return hak_tiny_alloc_superslab_backend_legacy(class_idx);
    }

    // With shared OFF, use the legacy backend only.
    return hak_tiny_alloc_superslab_backend_legacy(class_idx);
}

// ============================================================================
// SuperSlab Allocation (2MB aligned)
// ============================================================================
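/*
 * Illustrative, self-contained sketch (not part of the build) of the one-shot
 * environment-toggle pattern used by hak_tiny_alloc_superslab_box() above:
 * unset/empty HAKMEM_TINY_SS_SHARED means ON, "0" means OFF, anything else is
 * parsed with atoi(). demo_backend_mode() is a hypothetical stand-in.
 */
#if 0
#include <stdio.h>
#include <stdlib.h>

static int demo_backend_mode(void) {
    static int mode = -1;                      // -1 = not parsed yet
    if (mode == -1) {
        const char* e = getenv("HAKMEM_TINY_SS_SHARED");
        mode = (!e || !*e) ? 1 : (atoi(e) != 0);
    }
    return mode;
}

int main(void) {
    printf("shared SuperSlab backend: %s\n", demo_backend_mode() ? "ON" : "OFF");
    return 0;
}
#endif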
@@ -761,59 +1024,38 @@ void superslab_free(SuperSlab* ss) {
// Slab Initialization within SuperSlab
// ============================================================================

void superslab_init_slab(SuperSlab* ss, int slab_idx, size_t block_size, uint32_t owner_tid)
{
    if (!ss || slab_idx < 0 || slab_idx >= ss_slabs_capacity(ss)) {
        return;
    }

    // Phase E1-CORRECT unified geometry:
    // - block_size is the TOTAL stride for this class (g_tiny_class_sizes[cls])
    // - usable bytes are determined by slab index (slab0 vs others)
    // - capacity = usable / stride for ALL classes (including former C7)
    size_t usable_size = (slab_idx == 0)
                             ? SUPERSLAB_SLAB0_USABLE_SIZE
                             : SUPERSLAB_SLAB_USABLE_SIZE;
    size_t stride = block_size;
    uint16_t capacity = (uint16_t)(usable_size / stride);

#if !HAKMEM_BUILD_RELEASE
    if (slab_idx == 0) {
        fprintf(stderr,
                "[SUPERSLAB_INIT] slab 0: usable_size=%zu stride=%zu capacity=%u\n",
                usable_size, stride, (unsigned)capacity);
    }
#endif

    TinySlabMeta* meta = &ss->slabs[slab_idx];
    meta->freelist = NULL;   // NULL = linear allocation mode
    meta->used = 0;
    meta->capacity = capacity;
    meta->carved = 0;
    meta->owner_tid_low = (uint8_t)(owner_tid & 0xFFu);
    // meta->class_idx is set by the caller (shared_pool / refill path)

    superslab_activate_slab(ss, slab_idx);
}
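/*
 * Worked example (self-contained, not part of the build) of the capacity rule
 * above: capacity = usable_size / stride. The 63488/1024 figures are example
 * values that echo the old one-shot diagnostic removed by this commit.
 */
#if 0
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

int main(void) {
    size_t slab0_usable = 63488;   // example: 64 KiB slab minus the slab-0 metadata area
    size_t stride       = 1024;    // example: total per-block stride for the class
    uint16_t capacity   = (uint16_t)(slab0_usable / stride);
    assert(capacity == 62);        // 63488 / 1024 = 62 blocks
    return 0;
}
#endif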
@@ -34,6 +34,13 @@ extern _Atomic uint64_t g_ss_active_dec_calls;
uint32_t tiny_remote_drain_threshold(void);

// Monotonic clock in nanoseconds (header inline to avoid TU dependencies)
static inline uint64_t hak_now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

// ============================================================================
// Tiny block stride helper (Phase 7 header-aware)
// ============================================================================
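/*
 * Usage sketch (hypothetical) for hak_now_ns(): timing a small chunk of work
 * with the monotonic clock. The wrapper function and the workload are
 * illustrative only.
 */
static inline uint64_t example_time_block_ns(void) {
    uint64_t t0 = hak_now_ns();
    for (volatile int i = 0; i < 1000; i++) { /* stand-in workload */ }
    return hak_now_ns() - t0;
}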
@@ -63,11 +70,37 @@ static inline size_t tiny_block_stride_for_class(int class_idx) {
}

/*
 * Phase 12 (Shared SuperSlab Pool: Stage A - Minimal Box API wrapper)
 *
 * Goals at this stage:
 * - Introduce a single, well-defined Box/Phase12 API that the tiny front-end
 *   (slow path / refill) uses to obtain blocks from the SuperSlab layer.
 * - Keep existing per-class SuperslabHead/g_superslab_heads and
 *   superslab_allocate() implementation intact as the internal backend.
 * - Do NOT change behavior or allocation strategy yet; we only:
 *   - centralize the "allocate from superslab for tiny class" logic, and
 *   - isolate callers from internal Superslab details.
 *
 * This allows:
 * - hak_tiny_alloc_slow() / refill code to stop depending on legacy internals,
 *   so later commits can switch the backend to the shared SuperSlab pool
 *   (hakmem_shared_pool.{h,c}) without touching front-end call sites.
 *
 * Stage A API (introduced here):
 * - void* hak_tiny_alloc_superslab_box(int class_idx);
 *   - Returns a single tiny block for given class_idx, or NULL on failure.
 *   - BOX CONTRACT:
 *     - Callers pass validated class_idx (0 <= idx < TINY_NUM_CLASSES).
 *     - Returns a BASE pointer already suitable for Box/TLS-SLL/header rules.
 *     - No direct access to SuperSlab/TinySlabMeta from callers.
 *
 * NOTE:
 * - At this stage, hak_tiny_alloc_superslab_box() is a thin wrapper
 *   that forwards to the existing per-class SuperslabHead backend.
 * - Later Stage B/C patches may switch its implementation to shared_pool_*()
 *   without changing any callers.
 */
void* hak_tiny_alloc_superslab_box(int class_idx);

// Initialize a slab within SuperSlab
void superslab_init_slab(SuperSlab* ss, int slab_idx, size_t block_size, uint32_t owner_tid);
@@ -81,6 +114,9 @@ void superslab_deactivate_slab(SuperSlab* ss, int slab_idx);
// Find first free slab index (-1 if none)
int superslab_find_free_slab(SuperSlab* ss);

// Free a SuperSlab (unregister and return to pool or munmap)
void superslab_free(SuperSlab* ss);

// Statistics
void superslab_print_stats(SuperSlab* ss);
@ -9,6 +9,20 @@
|
|||||||
// SuperSlab Layout Constants
|
// SuperSlab Layout Constants
|
||||||
// ============================================================================
|
// ============================================================================
|
||||||
|
|
||||||
|
// Log2 range for SuperSlab sizes (in MB):
|
||||||
|
// - MIN: 1MB (2^20)
|
||||||
|
// - MAX: 2MB (2^21)
|
||||||
|
// - DEFAULT: 2MB unless constrained by ACE/env
|
||||||
|
#ifndef SUPERSLAB_LG_MIN
|
||||||
|
#define SUPERSLAB_LG_MIN 20
|
||||||
|
#endif
|
||||||
|
#ifndef SUPERSLAB_LG_MAX
|
||||||
|
#define SUPERSLAB_LG_MAX 21
|
||||||
|
#endif
|
||||||
|
#ifndef SUPERSLAB_LG_DEFAULT
|
||||||
|
#define SUPERSLAB_LG_DEFAULT 21
|
||||||
|
#endif
|
||||||
|
|
||||||
// Size of each slab within SuperSlab (fixed, never changes)
|
// Size of each slab within SuperSlab (fixed, never changes)
|
||||||
#define SLAB_SIZE (64 * 1024) // 64KB per slab
|
#define SLAB_SIZE (64 * 1024) // 64KB per slab
|
||||||
|
|
||||||
|
|||||||
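For reference, a small standalone sketch of what these constants imply (not part of the commit; uses only the values defined above):

#include <stdio.h>
#include <stddef.h>

// Standalone sketch: a SuperSlab is 2^lg bytes and is carved into 64KB slabs.
int main(void) {
    const size_t slab_size = 64 * 1024;
    for (int lg = 20; lg <= 21; lg++) {
        size_t ss_size = (size_t)1 << lg;
        printf("lg=%d  superslab=%zu bytes  slabs=%zu\n",
               lg, ss_size, ss_size / slab_size);   // 16 slabs at 1MB, 32 at 2MB
    }
    return 0;
}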
@@ -97,31 +97,32 @@ static inline void ptr_trace_dump_now(const char* reason) { (void)reason; }
 #endif
 
 // Phase E1-CORRECT: Use Box API for all next pointer operations
-// Box API handles offset calculation internally based on class_idx
+// Box API handles offset calculation internally based on class_idx.
+// `off` is still accepted for caller compatibility, but it is never used for
+// address calculation (logging only).
 #define PTR_NEXT_WRITE(tag, cls, node, off, value) do { \
-    (void)(off); /* unused, Box API handles offset */ \
+    (void)(off); \
     tiny_next_write((cls), (node), (value)); \
-    ptr_trace_record((tag), (cls), (node), (value), 1); \
+    ptr_trace_record((tag), (cls), (node), (value), (size_t)(off)); \
     ptr_trace_try_register_dump(); \
-} while(0)
+} while (0)
 
 #define PTR_NEXT_READ(tag, cls, node, off, out_var) do { \
-    (void)(off); /* unused, Box API handles offset */ \
-    (out_var) = tiny_next_read((cls), (node)); \
-    ptr_trace_record((tag), (cls), (node), (out_var), 1); \
+    (void)(off); \
+    void* _tmp = tiny_next_read((cls), (node)); \
+    (out_var) = _tmp; \
+    ptr_trace_record((tag), (cls), (node), (out_var), (size_t)(off)); \
     ptr_trace_try_register_dump(); \
-} while(0)
+} while (0)
 
 #else  // HAKMEM_PTR_TRACE == 0
 
 // Phase E1-CORRECT: Use Box API for all next pointer operations (Release mode)
-// Zero cost: Box API functions are static inline with compile-time flag evaluation
-// Unified 2-argument API: ALL classes (C0-C7) use offset 1, class_idx no longer needed
+// `off` is a compatibility dummy; the Box API decides the offset.
 #define PTR_NEXT_WRITE(tag, cls, node, off, value) \
-    do { (void)(tag); (void)(off); tiny_next_write((cls), (node), (value)); } while(0)
+    do { (void)(tag); (void)(off); tiny_next_write((cls), (node), (value)); } while (0)
 
 #define PTR_NEXT_READ(tag, cls, node, off, out_var) \
-    do { (void)(tag); (void)(off); (out_var) = tiny_next_read((cls), (node)); } while(0)
+    do { (void)(tag); (void)(off); (out_var) = tiny_next_read((cls), (node)); } while (0)
 
 // Always provide a stub for release builds so callers can link
 static inline void ptr_trace_dump_now(const char* reason) { (void)reason; }
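Illustrative use of the macros above (a sketch, not code from this commit; `head`, `node`, and the tag strings are placeholders, and `off` is passed as 0 since it is ignored):

// Sketch: push/pop a node on a per-class freelist through the macro wrappers.
static inline void example_push(int cls, void** head, void* node) {
    PTR_NEXT_WRITE("example_push", cls, node, /*off=*/0, *head);  // node->next = *head
    *head = node;
}

static inline void* example_pop(int cls, void** head) {
    void* node = *head;
    if (!node) return NULL;
    void* next;
    PTR_NEXT_READ("example_pop", cls, node, /*off=*/0, next);     // next = node->next
    *head = next;
    return node;
}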
@@ -120,7 +120,7 @@ static inline void slab_drain_remote(SlabHandle* h) {
     uint64_t count = atomic_fetch_add(&g_drain_invalid_count, 1);
     if (count < 10) {
         fprintf(stderr, "[SLAB_HANDLE] Drain invalid owner: cur=%u expected=%u\n",
-                cur_owner, h->owner_tid);
+                cur_owner, h->owner_tid_low);
     }
 #endif
     if (g_tiny_safe_free_strict) {

@@ -185,7 +185,7 @@ static inline void slab_release(SlabHandle* h) {
     uint64_t count = atomic_fetch_add(&g_release_invalid_count, 1);
     if (count < 10) {
         fprintf(stderr, "[SLAB_HANDLE] Release invalid owner: cur=%u expected=%u\n",
-                cur_owner, h->owner_tid);
+                cur_owner, h->owner_tid_low);
     }
 #endif
     if (g_tiny_safe_free_strict) {
@@ -1,545 +1,155 @@
-// superslab_inline.h - SuperSlab Hot Path Inline Functions (Box 5)
-// Purpose: Performance-critical inline helpers for SuperSlab allocator
-// Extracted from hakmem_tiny_superslab.h (Phase 6-2.8 Refactoring)
-// Box Theory: Box 5 (SuperSlab Primitives)
-
 #ifndef SUPERSLAB_INLINE_H
 #define SUPERSLAB_INLINE_H
 
-#include <stdint.h>
-#include <stddef.h>
-#include <stdbool.h>
-#include <stdatomic.h>
-#include <stdlib.h>
-#include <stdio.h>
-#include <time.h>
-#include <signal.h>
-#include <pthread.h>
-#include <inttypes.h>
 #include "superslab_types.h"
-#include "hakmem_tiny_superslab_constants.h"
-#include "tiny_debug_ring.h"
-#include "tiny_remote.h"
-#include "../tiny_box_geometry.h"      // Box 3: Geometry & Capacity Calculator
-#include "../box/tiny_next_ptr_box.h"  // Box API: next pointer read/write
 
-// External declarations
-extern int g_debug_remote_guard;
-extern int g_tiny_safe_free_strict;
+// Forward declaration for unsafe remote drain used by refill/handle paths
+// Implemented in hakmem_tiny_superslab.c
+void _ss_remote_drain_to_freelist_unsafe(SuperSlab* ss, int slab_idx, TinySlabMeta* meta);
+
+// Optional debug counter (defined in hakmem_tiny_superslab.c)
 extern _Atomic uint64_t g_ss_active_dec_calls;
-extern _Atomic uint64_t g_ss_remote_push_calls;
-extern _Atomic int g_ss_remote_seen;
-extern int g_remote_side_enable;
-extern int g_remote_force_notify;
-
-// Function declarations
-uint32_t tiny_remote_drain_threshold(void);
-void tiny_publish_notify(int class_idx, struct SuperSlab* ss, int slab_idx);

[The remaining removed lines of the legacy header held: the old ss_slabs_capacity,
tiny_refill_failfast_level / tiny_failfast_log / tiny_failfast_abort_ptr, the deprecated
ptr_to_slab_index / slab_data_start / tiny_slab_base_for (Box 3 delegate), hak_now_ns,
the SuperSlabACEState typedef with g_ss_ace[] and hak_tiny_superslab_next_lg, and the
heavily instrumented ss_remote_push / _ss_remote_drain_to_freelist_unsafe /
ss_remote_drain_to_freelist / ss_owner_try_acquire / ss_remote_drain_light / ss_owner_cas
implementations with debug-ring, sentinel, and remote-side-table guards.]
 
+// Return maximum number of slabs for this SuperSlab based on lg_size.
+static inline int ss_slabs_capacity(SuperSlab* ss)
+{
+    if (!ss) return 0;
+    size_t ss_size = (size_t)1 << ss->lg_size;
+    return (int)(ss_size / SLAB_SIZE);
+}
+
+// Compute slab base pointer for given (ss, slab_idx).
+static inline uint8_t* tiny_slab_base_for(SuperSlab* ss, int slab_idx)
+{
+    if (!ss || slab_idx < 0) return NULL;
+    if (slab_idx == 0) {
+        return (uint8_t*)ss + SUPERSLAB_SLAB0_DATA_OFFSET;
+    }
+    size_t off = SUPERSLAB_SLAB0_DATA_OFFSET + (size_t)slab_idx * SLAB_SIZE;
+    size_t ss_size = (size_t)1 << ss->lg_size;
+    if (off >= ss_size) {
+        return NULL;
+    }
+    return (uint8_t*)ss + off;
+}
+
+// Compute slab index for a pointer inside ss.
+static inline int slab_index_for(SuperSlab* ss, void* ptr)
+{
+    if (!ss || !ptr) return -1;
+    uintptr_t base = (uintptr_t)ss;
+    uintptr_t p = (uintptr_t)ptr;
+    size_t ss_size = (size_t)1 << ss->lg_size;
+    if (p < base + SUPERSLAB_SLAB0_DATA_OFFSET || p >= base + ss_size) {
+        return -1;
+    }
+    size_t rel = p - (base + SUPERSLAB_SLAB0_DATA_OFFSET);
+    int idx = (int)(rel / SLAB_SIZE);
+    if (idx < 0 || idx >= SLABS_PER_SUPERSLAB_MAX) {
+        return -1;
+    }
+    return idx;
+}
+
+// Simple ref helpers used by lifecycle paths.
+static inline uint32_t superslab_ref_get(SuperSlab* ss)
+{
+    return ss ? atomic_load_explicit(&ss->refcount, memory_order_acquire) : 0;
+}
+
+static inline void superslab_ref_inc(SuperSlab* ss)
+{
+    if (ss) {
+        atomic_fetch_add_explicit(&ss->refcount, 1, memory_order_acq_rel);
+    }
+}
+
+static inline void superslab_ref_dec(SuperSlab* ss)
+{
+    if (ss) {
+        uint32_t prev = atomic_fetch_sub_explicit(&ss->refcount, 1, memory_order_acq_rel);
+        (void)prev;  // caller decides when to free; we just provide the primitive
+    }
+}
+
+// Ownership helpers (Box 3)
+static inline int ss_owner_try_acquire(TinySlabMeta* m, uint32_t tid)
+{
+    if (!m) return 0;
+    uint8_t want = (uint8_t)(tid & 0xFFu);
+    uint8_t expected = 0;
+    return __atomic_compare_exchange_n(&m->owner_tid_low, &expected, want,
+                                       false, __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
+}
+
+static inline void ss_owner_release(TinySlabMeta* m, uint32_t tid)
+{
+    if (!m) return;
+    uint8_t expected = (uint8_t)(tid & 0xFFu);
+    (void)__atomic_compare_exchange_n(&m->owner_tid_low, &expected, 0u,
+                                      false, __ATOMIC_RELEASE, __ATOMIC_RELAXED);
+}
+
+static inline int ss_owner_is_mine(TinySlabMeta* m, uint32_t tid)
+{
+    if (!m) return 0;
+    uint8_t cur = __atomic_load_n(&m->owner_tid_low, __ATOMIC_RELAXED);
+    return cur == (uint8_t)(tid & 0xFFu);
+}
+
+// Active block accounting (saturating dec by 1)
+static inline void ss_active_dec_one(SuperSlab* ss)
+{
+    if (!ss) return;
+    atomic_fetch_add_explicit(&g_ss_active_dec_calls, 1, memory_order_relaxed);
+    uint32_t cur = atomic_load_explicit(&ss->total_active_blocks, memory_order_relaxed);
+    while (cur != 0) {
+        if (atomic_compare_exchange_weak_explicit(&ss->total_active_blocks,
+                                                  &cur,
+                                                  cur - 1u,
+                                                  memory_order_acq_rel,
+                                                  memory_order_relaxed)) {
+            return;
+        }
+        // cur updated by failed CAS; loop
+    }
+}
+
+// Remote push helper (Box 2):
+// - Enqueue node to per-slab MPSC stack
+// - Returns 1 if transition empty->nonempty, otherwise 0
+// - Also decrements ss->total_active_blocks once (free completed)
+static inline int ss_remote_push(SuperSlab* ss, int slab_idx, void* node)
+{
+    if (!ss || slab_idx < 0 || slab_idx >= SLABS_PER_SUPERSLAB_MAX || !node) {
+        return -1;
+    }
+    _Atomic uintptr_t* head = &ss->remote_heads[slab_idx];
+    uintptr_t old_head;
+    uintptr_t new_head;
+    int transitioned = 0;
+    do {
+        old_head = atomic_load_explicit(head, memory_order_acquire);
+        // The next pointer is normally handled via tiny_next_ptr_box / tiny_nextptr;
+        // here we simply stack nodes as a singly linked list (decoded by the caller).
+        *(uintptr_t*)node = old_head;
+        new_head = (uintptr_t)node;
+    } while (!atomic_compare_exchange_weak_explicit(
+                 head, &old_head, new_head,
+                 memory_order_release, memory_order_relaxed));
+    transitioned = (old_head == 0) ? 1 : 0;
+    atomic_fetch_add_explicit(&ss->remote_counts[slab_idx], 1, memory_order_acq_rel);
+    // account active block removal once per free
+    ss_active_dec_one(ss);
+    return transitioned;
+}
 
 #endif // SUPERSLAB_INLINE_H
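A quick consistency sketch for the two geometry helpers above (illustrative only; assumes a valid, initialized SuperSlab*):

// Sketch: a slab base produced by tiny_slab_base_for() must map back to the
// same slab index through slab_index_for().
static int example_geometry_roundtrip(SuperSlab* ss, int slab_idx) {
    uint8_t* base = tiny_slab_base_for(ss, slab_idx);
    if (!base) return -1;
    return slab_index_for(ss, base) == slab_idx ? 0 : -1;
}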
@@ -1,146 +1,98 @@
-// superslab_types.h - SuperSlab Configuration & Data Structures
-// Purpose: Core types and configuration for SuperSlab allocator
-// Extracted from hakmem_tiny_superslab.h (Phase 6-2.8 Refactoring)
-
 #ifndef SUPERSLAB_TYPES_H
 #define SUPERSLAB_TYPES_H
 
-#include <stdint.h>
-#include <stddef.h>
-#include <stdbool.h>
-#include <stdatomic.h>
-#include <stdlib.h>
-#include <stdio.h>
-#include <pthread.h>  // Phase 2a: For SuperSlabHead expansion_lock
-#include "hakmem_tiny_superslab_constants.h"  // SLAB_SIZE, SUPERSLAB_SLAB0_DATA_OFFSET
-
-// ============================================================================
-// SuperSlab Configuration
-// ============================================================================
-
-// Phase 8.3: ACE - Variable SuperSlab size (1MB ↔ 2MB)
-#define SUPERSLAB_SIZE_MAX (2 * 1024 * 1024)   // 2MB max size
-#define SUPERSLAB_SIZE_MIN (1 * 1024 * 1024)   // 1MB min size
-#define SUPERSLAB_LG_MAX 21                    // lg(2MB)
-#define SUPERSLAB_LG_MIN 20                    // lg(1MB)
-#define SUPERSLAB_LG_DEFAULT 21                // Default: 2MB (syscall reduction, ACE will adapt)
-
-// Number of tiny size classes (same as TINY_NUM_CLASSES to avoid circular include)
-#define TINY_NUM_CLASSES_SS 8                  // 8-64 bytes (8, 16, 24, 32, 40, 48, 56, 64)
-
-// Legacy defines (kept for backward compatibility, use lg_size instead)
-#define SUPERSLAB_SIZE SUPERSLAB_SIZE_MAX      // Default to 2MB (syscall reduction)
-#define SUPERSLAB_MASK (SUPERSLAB_SIZE - 1)
-// IMPORTANT: Support variable-size SuperSlab (1MB=16 slabs, 2MB=32 slabs)
-// Arrays below must be sized for the MAX to avoid OOB when lg_size=21 (2MB)
-#define SLABS_PER_SUPERSLAB_MIN (SUPERSLAB_SIZE_MIN / SLAB_SIZE)  // 16 for 1MB
-#define SLABS_PER_SUPERSLAB_MAX (SUPERSLAB_SIZE_MAX / SLAB_SIZE)  // 32 for 2MB
-
-// Magic number for validation
-#define SUPERSLAB_MAGIC 0x48414B4D454D5353ULL  // "HAKMEMSS"
-
-// Per-slab metadata (16 bytes)
+#include "hakmem_tiny_superslab_constants.h"
+#include <stddef.h>
+#include <stdint.h>
+#include <stdatomic.h>
+#include <pthread.h>
+
+// TinySlabMeta: per-slab metadata embedded in SuperSlab
 typedef struct TinySlabMeta {
-    void*    freelist;       // Freelist head (NULL = linear mode, Phase 6.24)
-    uint16_t used;           // Blocks currently used
-    uint16_t capacity;       // Total blocks in slab
-    uint16_t carved;         // Blocks carved from linear region (monotonic, never decrements)
-    uint8_t  class_idx;      // Phase 12: dynamic class (0-7 active, 255=UNASSIGNED)
-    uint8_t  owner_tid_low;  // Phase 12: low 8 bits of owner thread ID
-    // Phase 6.24: freelist == NULL → linear allocation mode (lazy init)
-    // Linear mode: allocate sequentially without building freelist
-    // Freelist mode: use freelist after first free() call
-    // FIX: carved prevents double-allocation when used decrements after free
+    void*    freelist;       // NULL = bump-only, non-NULL = freelist head
+    uint16_t used;           // blocks currently allocated from this slab
+    uint16_t capacity;       // total blocks this slab can hold
+    uint8_t  class_idx;      // owning tiny class (Phase 12: per-slab)
+    uint8_t  carved;         // carve/owner flags
+    uint8_t  owner_tid_low;  // low 8 bits of owner TID (debug / locality)
 } TinySlabMeta;
 
-// SuperSlab header (cache-line aligned, 64B)
+#define TINY_NUM_CLASSES_SS 8
+
+// Min SuperSlab size used for pointer→ss masking (Phase12: 1MB)
+#define SUPERSLAB_SIZE_MIN (1u << 20)
+
+// Max slabs in a SuperSlab for the largest configuration (2MB / 64KB = 32)
+#define SLABS_PER_SUPERSLAB_MAX ((2 * 1024 * 1024) / SLAB_SIZE)
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+// Magic for SuperSlab validation
+#define SUPERSLAB_MAGIC 0x5353504Cu  // 'SSPL'
+
+// ACE state (extern; defined in hakmem_tiny_superslab.c)
+typedef struct SuperSlabACEState {
+    uint8_t  current_lg;
+    uint8_t  target_lg;
+    uint16_t hot_score;
+    uint32_t alloc_count;
+    uint32_t refill_count;
+    uint32_t spill_count;
+    uint32_t live_blocks;
+    uint64_t last_tick_ns;
+} SuperSlabACEState;
+
+extern SuperSlabACEState g_ss_ace[TINY_NUM_CLASSES_SS];
+
+// SuperSlab: backing region for multiple TinySlabMeta+data slices
 typedef struct SuperSlab {
-    uint64_t magic;                   // Magic number
-    uint8_t  active_slabs;            // Number of active slabs (0-32 for 2MB, 0-16 for 1MB)
-    uint8_t  lg_size;                 // Phase 8.3: ACE - SuperSlab size (20=1MB, 21=2MB)
-    uint8_t  _pad0;                   // Padding (Phase 12: reserved, was size_class)
-    uint32_t slab_bitmap;             // 32-bit bitmap (1=active, 0=free)
-    _Atomic uint32_t freelist_mask;   // Bit i=1 when slab i freelist is non-empty (opt-in)
-    // Phase 6-2.1: ChatGPT Pro P0 optimization - O(1) non-empty slab lookup
-    uint32_t nonempty_mask;           // Bit i = 1 if slabs[i].freelist != NULL (O(1) lookup via ctz)
-    // Phase 7.6: Deallocation support
-    atomic_uint total_active_blocks;  // Total blocks in use (all slabs combined)
-    atomic_uint refcount;             // MT-safe refcount for empty detection/free (future use)
-    atomic_uint listed;               // 0/1: published to partial adopt ring (publish gating)
-    uint32_t partial_epoch;           // Last partial madvise epoch (optional)
-    uint8_t  publish_hint;            // Best slab index hint for adopt (0..31), 0xFF=none
-    uint8_t  _pad1[3];                // Padding
-    // Per-slab metadata (16B each); sized for MAX, use ss->lg_size to bound loops at runtime
-    TinySlabMeta slabs[SLABS_PER_SUPERSLAB_MAX];
-    // Remote free queues (per slab): MPSC stack heads + counts
-    _Atomic(uintptr_t) remote_heads[SLABS_PER_SUPERSLAB_MAX];
-    _Atomic(uint32_t)  remote_counts[SLABS_PER_SUPERSLAB_MAX];
-    // Per-slab publish state: 0/1 = not listed/listed (for slab-granular republish hints)
-    atomic_uint slab_listed[SLABS_PER_SUPERSLAB_MAX];
-    // Partial adopt overflow linkage (single-linked, best-effort)
-    struct SuperSlab* partial_next;
-    // Phase 2a: Dynamic expansion - link to next chunk
-    struct SuperSlab* next_chunk;
-    // Phase 9: Lazy Deallocation - LRU cache management
-    uint64_t last_used_ns;            // Last usage timestamp (nanoseconds)
-    uint32_t generation;              // Generation counter for aging
-    struct SuperSlab* lru_prev;       // LRU doubly-linked list (previous)
-    struct SuperSlab* lru_next;       // LRU doubly-linked list (next)
-    // Note: Actual slab data starts at offset SLAB_SIZE (64KB)
-} __attribute__((aligned(64))) SuperSlab;
-
-// Phase 12 compatibility helpers
-// Prefer per-slab class_idx; superslab_get_class() is a temporary shim.
-static inline uint8_t tiny_slab_class_idx(const SuperSlab* ss, int slab_idx) {
-    if (slab_idx < 0 || slab_idx >= SLABS_PER_SUPERSLAB_MAX) {
-        return 255;  // UNASSIGNED / invalid
-    }
-    return ss->slabs[slab_idx].class_idx;
-}
-
-static inline uint8_t superslab_get_class(const SuperSlab* ss, int slab_idx) {
-    return tiny_slab_class_idx(ss, slab_idx);
-}
-
-// ============================================================================
-// Phase 2a: Dynamic Expansion - SuperSlabHead for chunk management
-// ============================================================================
-
-// SuperSlabHead manages a linked list of SuperSlab chunks for each class
+    uint32_t magic;      // SUPERSLAB_MAGIC
+    uint8_t  lg_size;    // log2(super slab size), 20=1MB, 21=2MB
+    uint8_t  _pad0[3];
+
+    // Phase 12: per-SS size_class removed; classes are per-slab via TinySlabMeta.class_idx
+    _Atomic uint32_t total_active_blocks;
+    _Atomic uint32_t refcount;
+    _Atomic uint32_t listed;
+
+    uint32_t slab_bitmap;    // active slabs (bit i = 1 → slab i in use)
+    uint32_t nonempty_mask;  // non-empty slabs (for partial tracking)
+    uint32_t freelist_mask;  // slabs with non-empty freelist (for fast scan)
+    uint8_t  active_slabs;   // count of active slabs
+    uint8_t  publish_hint;
+    uint16_t partial_epoch;
+
+    struct SuperSlab* next_chunk;    // legacy per-class chain
+    struct SuperSlab* partial_next;  // partial list link
+
+    // LRU integration
+    uint64_t last_used_ns;
+    uint32_t generation;
+    struct SuperSlab* lru_prev;
+    struct SuperSlab* lru_next;
+
+    // Remote free queues (per slab)
+    _Atomic uintptr_t remote_heads[SLABS_PER_SUPERSLAB_MAX];
+    _Atomic uint32_t  remote_counts[SLABS_PER_SUPERSLAB_MAX];
+    _Atomic uint32_t  slab_listed[SLABS_PER_SUPERSLAB_MAX];
+
+    // Per-slab metadata array
+    TinySlabMeta slabs[SLABS_PER_SUPERSLAB_MAX];
+} SuperSlab;
+
+// Legacy per-class SuperSlabHead (Phase 2a dynamic expansion)
 typedef struct SuperSlabHead {
-    SuperSlab* first_chunk;         // Head of chunk list
-    SuperSlab* current_chunk;       // Current chunk for fast allocation
-    _Atomic size_t total_chunks;    // Total chunks allocated
-    uint8_t class_idx;              // Size class this head manages
-    uint8_t _pad[7];                // Padding to 64 bytes
-    pthread_mutex_t expansion_lock; // Thread safety for chunk expansion
-} __attribute__((aligned(64))) SuperSlabHead;
-
-// Compile-time assertions
-_Static_assert(sizeof(TinySlabMeta) == 16, "TinySlabMeta must be 16 bytes");
-// Phase 8.3: Variable-size SuperSlab assertions (1MB=16 slabs, 2MB=32 slabs)
-_Static_assert((SUPERSLAB_SIZE_MIN / SLAB_SIZE) == 16, "1MB SuperSlab must have 16 slabs");
-_Static_assert((SUPERSLAB_SIZE_MAX / SLAB_SIZE) == 32, "2MB SuperSlab must have 32 slabs");
-_Static_assert((SUPERSLAB_SIZE & SUPERSLAB_MASK) == 0, "SUPERSLAB_SIZE must be power of 2");
+    uint8_t class_idx;
+    _Atomic size_t total_chunks;
+    SuperSlab* first_chunk;
+    SuperSlab* current_chunk;
+    pthread_mutex_t expansion_lock;
+} SuperSlabHead;
+
+#ifdef __cplusplus
+}
+#endif
 
 #endif // SUPERSLAB_TYPES_H
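Optional layout sanity checks for the Phase 12 types above (a sketch, not part of the commit; assumes superslab_types.h is already included so the macros and structs are visible):

// The 32-slab bound follows from 2MB / 64KB; owner_tid_low is deliberately 8-bit.
_Static_assert(SLABS_PER_SUPERSLAB_MAX == 32,
               "largest SuperSlab (2MB) must hold 32 x 64KB slabs");
_Static_assert(sizeof(((TinySlabMeta*)0)->owner_tid_low) == 1,
               "owner TID is truncated to 8 bits in Phase 12");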
@@ -570,13 +570,18 @@ static inline void* tiny_alloc_fast(size_t size) {
     }
 
     // Generic front (FastCache/SFC/SLL)
+    // Respect SLL global toggle; when disabled, skip TLS SLL fast pop entirely
+    if (__builtin_expect(g_tls_sll_enable, 1)) {
 #if HAKMEM_TINY_AGGRESSIVE_INLINE
     // Phase 2: Use inline macro (3-4 instructions, zero call overhead)
     TINY_ALLOC_FAST_POP_INLINE(class_idx, ptr);
 #else
     // Legacy: Function call (10-15 instructions, 5-10 cycle overhead)
     ptr = tiny_alloc_fast_pop(class_idx);
 #endif
+    } else {
+        ptr = NULL;
+    }
     if (__builtin_expect(ptr != NULL, 1)) {
         HAK_RET_ALLOC(class_idx, ptr);
     }

@@ -594,13 +599,17 @@ static inline void* tiny_alloc_fast(size_t size) {
     {
         int refilled = tiny_alloc_fast_refill(class_idx);
         if (__builtin_expect(refilled > 0, 1)) {
+            if (__builtin_expect(g_tls_sll_enable, 1)) {
 #if HAKMEM_TINY_AGGRESSIVE_INLINE
             // Phase 2: Use inline macro (3-4 instructions, zero call overhead)
             TINY_ALLOC_FAST_POP_INLINE(class_idx, ptr);
 #else
             // Legacy: Function call (10-15 instructions, 5-10 cycle overhead)
             ptr = tiny_alloc_fast_pop(class_idx);
 #endif
+            } else {
+                ptr = NULL;
+            }
             if (ptr) {
                 HAK_RET_ALLOC(class_idx, ptr);
             }
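For A/B runs, the toggle gating the fast pop above can be initialized once from the environment. A minimal sketch (g_tls_sll_enable is the real toggle declared elsewhere; the env name used here is only a placeholder for whatever the build actually reads):

#include <stdlib.h>

extern int g_tls_sll_enable;  // defined in the allocator; 0 disables the TLS SLL front

// Hypothetical one-shot initializer; call early in allocator startup.
static void example_init_sll_toggle(void) {
    const char* e = getenv("EXAMPLE_TLS_SLL_TOGGLE");  // placeholder env name
    if (e && *e == '0') {
        g_tls_sll_enable = 0;  // force alloc/free through the slow path
    }
}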
@@ -46,6 +46,15 @@ static inline uint32_t tiny_atomic_load_u32_acquire(_Atomic uint32_t* ptr) {
     return atomic_load_explicit(ptr, TINY_MO_ACQUIRE);
 }
 
+// Load uint8_t variant
+static inline uint8_t tiny_atomic_load_u8_relaxed(_Atomic uint8_t* ptr) {
+    return atomic_load_explicit(ptr, TINY_MO_RELAXED);
+}
+
+static inline uint8_t tiny_atomic_load_u8_acquire(_Atomic uint8_t* ptr) {
+    return atomic_load_explicit(ptr, TINY_MO_ACQUIRE);
+}
+
 // ========== Store Operations ==========
 
 // Store with explicit memory order
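Where the uint8_t loads above are expected to be useful, sketched under assumptions (the owner field is treated here as _Atomic; in the actual TinySlabMeta it may be accessed through atomic builtins instead):

// Sketch: acquire-load the truncated owner TID and compare against self.
static inline int example_is_owned_by(_Atomic uint8_t* owner_tid_low, uint32_t self_tid) {
    uint8_t cur = tiny_atomic_load_u8_acquire(owner_tid_low);
    return cur == (uint8_t)(self_tid & 0xFFu);
}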
core/tiny_failfast.c (new file, 67 lines)
@@ -0,0 +1,67 @@
+// tiny_failfast.c - Fail-fast debugging utilities (lightweight stubs)
+// Purpose: Provide link-time definitions for instrumentation hooks used across
+// alloc/free/refill paths. Behavior controlled via env variables.
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <signal.h>
+#include <stdatomic.h>
+#include "hakmem_tiny_superslab.h"  // For SuperSlab/TinySlabMeta and hak_now_ns
+
+// Runtime-configurable fail-fast level
+// 0 = disabled, 1 = log only, 2 = log + raise(SIGUSR2), 3 = abort()
+int tiny_refill_failfast_level(void)
+{
+    static _Atomic int lvl = -1;
+    int v = atomic_load_explicit(&lvl, memory_order_relaxed);
+    if (__builtin_expect(v != -1, 1)) return v;
+    const char* e = getenv("HAKMEM_TINY_REFILL_FAILFAST");
+    int parsed = (e && *e) ? atoi(e) : 0;
+    if (parsed < 0) parsed = 0; if (parsed > 3) parsed = 3;
+    atomic_store_explicit(&lvl, parsed, memory_order_relaxed);
+    return parsed;
+}
+
+void tiny_failfast_log(const char* stage,
+                       int class_idx,
+                       SuperSlab* ss,
+                       TinySlabMeta* meta,
+                       void* ptr,
+                       void* prev)
+{
+    if (tiny_refill_failfast_level() < 1) return;
+    uint64_t ts = hak_now_ns();
+    fprintf(stderr,
+            "[FF][%s] ts=%llu cls=%d ss=%p slab_used=%u cap=%u ptr=%p prev=%p\n",
+            stage ? stage : "?",
+            (unsigned long long)ts,
+            class_idx,
+            (void*)ss,
+            meta ? (unsigned)meta->used : 0u,
+            meta ? (unsigned)meta->capacity : 0u,
+            ptr,
+            prev);
+}
+
+void tiny_failfast_abort_ptr(const char* stage,
+                             SuperSlab* ss,
+                             int slab_idx,
+                             void* ptr,
+                             const char* reason)
+{
+    int lvl = tiny_refill_failfast_level();
+    if (lvl <= 0) return;
+    fprintf(stderr,
+            "[FF-ABORT][%s] ss=%p slab=%d ptr=%p reason=%s\n",
+            stage ? stage : "?",
+            (void*)ss,
+            slab_idx,
+            ptr,
+            reason ? reason : "");
+    if (lvl >= 3) {
+        abort();
+    } else if (lvl >= 2) {
+        raise(SIGUSR2);
+    }
+}
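Typical use of these stubs from a refill path (a sketch; the call site and its local variables are illustrative, the level semantics follow the comments in the file above):

// Sketch: drive the fail-fast hooks when linking a node during refill.
// Level semantics: 0 off, 1 log, 2 log + SIGUSR2, 3 abort
// (selected via HAKMEM_TINY_REFILL_FAILFAST).
static void example_refill_guard(SuperSlab* ss, int slab_idx, int class_idx,
                                 TinySlabMeta* meta, void* node, void* prev_head) {
    tiny_failfast_log("refill_link", class_idx, ss, meta, node, prev_head);
    if (node == NULL) {
        tiny_failfast_abort_ptr("refill_link", ss, slab_idx, node, "null node");
    }
}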
core/tiny_failfast.d (new file, 14 lines)
@@ -0,0 +1,14 @@
+core/tiny_failfast.o: core/tiny_failfast.c core/hakmem_tiny_superslab.h \
+  core/superslab/superslab_types.h core/hakmem_tiny_superslab_constants.h \
+  core/superslab/superslab_inline.h core/superslab/superslab_types.h \
+  core/tiny_debug_ring.h core/hakmem_build_flags.h core/tiny_remote.h \
+  core/hakmem_tiny_superslab_constants.h
+core/hakmem_tiny_superslab.h:
+core/superslab/superslab_types.h:
+core/hakmem_tiny_superslab_constants.h:
+core/superslab/superslab_inline.h:
+core/superslab/superslab_types.h:
+core/tiny_debug_ring.h:
+core/hakmem_build_flags.h:
+core/tiny_remote.h:
+core/hakmem_tiny_superslab_constants.h:
@@ -27,6 +27,7 @@
 // External TLS variables (defined in hakmem_tiny.c)
 extern __thread void* g_tls_sll_head[TINY_NUM_CLASSES];
 extern __thread uint32_t g_tls_sll_count[TINY_NUM_CLASSES];
+extern int g_tls_sll_enable;  // Honored for fast free: when 0, fall back to slow path
 
 // External functions
 extern void hak_tiny_free(void* ptr);  // Fallback for non-header allocations

@@ -51,6 +52,11 @@ extern void hak_tiny_free(void* ptr); // Fallback for non-header allocations
 static inline int hak_tiny_free_fast_v2(void* ptr) {
     if (__builtin_expect(!ptr, 0)) return 0;
 
+    // Respect global SLL toggle: when disabled, do not use TLS SLL fast path.
+    if (__builtin_expect(!g_tls_sll_enable, 0)) {
+        return 0;  // Force slow path
+    }
+
     // Phase E3-1: Remove registry lookup (50-100 cycles overhead)
     // Reason: Phase E1 added headers to C7, making this check redundant
     // Header magic validation (2-3 cycles) is now sufficient for all classes
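The return-0 convention above matters to callers: zero from the fast path means "not handled, take the slow path". A caller sketch (only hak_tiny_free_fast_v2 and hak_tiny_free are real names here; the wrapper itself is illustrative):

// Sketch: how a free() wrapper is expected to consume the fast-path result.
static void example_free(void* p) {
    if (hak_tiny_free_fast_v2(p)) {
        return;        // handled by the TLS SLL fast path
    }
    hak_tiny_free(p);  // slow path: header checks, remote free, etc.
}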
@@ -84,8 +84,8 @@
 #else
     const size_t next_off = 0;
 #endif
-#include "box/tiny_next_ptr_box.h"
-    tiny_next_write(head, NULL);
+    // Build single-linked list via Box next-ptr API (per-class)
+    tiny_next_write(class_idx, head, NULL);
     void* tail = head; // current tail
     int taken = 1;
     while (taken < limit && mag->top > 0) {
@@ -95,7 +95,7 @@
 #else
     const size_t next_off2 = 0;
 #endif
-    tiny_next_write(p2, head);
+    tiny_next_write(class_idx, p2, head);
     head = p2;
     taken++;
     }
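The two hunks above link magazine blocks into a singly linked list through the per-class next-ptr Box API instead of raw *(void**) stores. A consumer-side sketch is shown below; tiny_next_read() is an assumed read counterpart of tiny_next_write() and may not match the real Box header:

```c
/* Sketch: pop one block from a class-indexed freelist built as above.
 * tiny_next_read(class_idx, p) is an assumed accessor mirroring
 * tiny_next_write(class_idx, p, next). */
static void* sll_pop_sketch(int class_idx, void** head)
{
    void* p = *head;
    if (p == NULL) return NULL;
    *head = tiny_next_read(class_idx, p); /* follow the per-class next pointer */
    tiny_next_write(class_idx, p, NULL);  /* detach before handing the block out */
    return p;
}
```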
@@ -154,40 +154,75 @@ static inline void* superslab_alloc_from_slab(SuperSlab* ss, int slab_idx) {
 }
 
 // ============================================================================
-// Phase 12: Shared SuperSlab Pool based superslab_refill
-// ============================================================================
+/*
+ * Phase 12: Shared SuperSlab Pool based superslab_refill
+ *
+ * Policy:
+ * - superslab_refill(int class_idx) is the single entry point that goes through
+ *   the shared pool and binds exactly one slab for class_idx to TLS.
+ * - Callers may only assume that this function:
+ *   * on success: leaves TinyTLSSlab (g_tls_slabs[class_idx]) pointing at a
+ *     valid ss/meta/slab_base
+ *   * on failure: returns NULL and either leaves TLS untouched or rolls it
+ *     back cleanly
+ * - shared_pool_acquire_slab() is treated as returning 0 on success / non-zero
+ *   on failure, with (*ss_out, *slab_idx_out) set only on success.
+ * - superslab_init_slab() / tiny_tls_bind_slab() are assumed never to call
+ *   superslab_refill() recursively (no self-reentry); defensive checks are
+ *   still performed here on the safe side.
+ */
 
-SuperSlab* superslab_refill(int class_idx) {
+SuperSlab* superslab_refill(int class_idx)
+{
 #if HAKMEM_DEBUG_COUNTERS
     g_superslab_refill_calls_dbg[class_idx]++;
 #endif
 
-    TinyTLSSlab* tls = &g_tls_slabs[class_idx];
-    extern int shared_pool_acquire_slab(int class_idx, SuperSlab** ss_out, int* slab_idx_out);
-
-    SuperSlab* ss = NULL;
-    int slab_idx = -1;
-    if (shared_pool_acquire_slab(class_idx, &ss, &slab_idx) != 0) {
+    // Bounds check (defensive, should be enforced by callers too)
+    if (class_idx < 0 || class_idx >= TINY_NUM_CLASSES) {
         return NULL;
     }
 
+    TinyTLSSlab* tls = &g_tls_slabs[class_idx];
+
+    // Shared pool API:
+    //   0 == success, (*ss_out, *slab_idx_out) receive valid values.
+    //   !=0 == failure, outputs are treated as undefined.
+    extern int shared_pool_acquire_slab(int class_idx,
+                                        SuperSlab** ss_out,
+                                        int* slab_idx_out);
+
+    SuperSlab* ss = NULL;
+    int slab_idx = -1;
+    if (shared_pool_acquire_slab(class_idx, &ss, &slab_idx) != 0 || !ss || slab_idx < 0) {
+        return NULL;
+    }
+
+    // Initialize slab metadata for this class/thread.
+    // NOTE:
+    // - superslab_init_slab must not call superslab_refill() recursively.
+    // - class_idx is reflected into slab_meta->class_idx.
     uint32_t my_tid = tiny_self_u32();
     superslab_init_slab(ss,
                         slab_idx,
                         g_tiny_class_sizes[class_idx],
                         my_tid);
 
+    // Bind this slab to TLS for fast subsequent allocations.
+    // tiny_tls_bind_slab consistently updates:
+    //   tls->ss, tls->slab_idx, tls->meta, tls->slab_base.
     tiny_tls_bind_slab(tls, ss, slab_idx);
 
     // Sanity: TLS must now describe this slab for this class.
+    // On failure, roll TLS back and return NULL (callers can safely retry).
     if (!(tls->ss == ss &&
-          tls->slab_idx == slab_idx &&
+          tls->slab_idx == (uint8_t)slab_idx &&
           tls->meta != NULL &&
-          tls->meta->class_idx == (uint8_t)class_idx)) {
+          tls->meta->class_idx == (uint8_t)class_idx &&
+          tls->slab_base != NULL)) {
         tls->ss = NULL;
         tls->meta = NULL;
-        tls->slab_idx = -1;
         tls->slab_base = NULL;
+        tls->slab_idx = 0;
         return NULL;
     }
 
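A caller only needs the contract documented in the new comment block: a non-NULL return means g_tls_slabs[class_idx] is fully bound. A minimal slow-path sketch under that assumption; tiny_tls_slab_bump() is a hypothetical helper standing in for the real front-end carve logic:

```c
/* Sketch only: relies solely on superslab_refill()'s documented contract. */
static void* tiny_alloc_slow_sketch(int class_idx)
{
    TinyTLSSlab* tls = &g_tls_slabs[class_idx];
    if (tls->ss == NULL || tls->meta == NULL) {
        if (superslab_refill(class_idx) == NULL) {
            return NULL; /* shared pool could not provide a slab for this class */
        }
    }
    /* tls->ss / tls->meta / tls->slab_base are now valid per the sanity check. */
    return tiny_tls_slab_bump(tls); /* hypothetical: carve the next block from the bound slab */
}
```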
hakmem.d | 44
hakmem_shared_pool.d | 16 (new file)
hakmem_learner.d, hakmem_super_registry.d, hakmem_tiny_bg_spill.d,
hakmem_tiny_magazine.d, hakmem_tiny_query.d, hakmem_tiny_sfc.d,
hakmem_tiny_stats.d, hakmem_tiny_superslab.d | regenerated

(Auto-generated Make dependency lists, condensed here. The entries previously
pulled in through core/superslab/superslab_inline.h (core/superslab/../tiny_box_geometry.h,
../hakmem_tiny_superslab_constants.h, ../hakmem_tiny_config.h, ../box/tiny_next_ptr_box.h,
core/tiny_nextptr.h) are dropped; units that still use the next-ptr Box API now depend on
core/box/tiny_next_ptr_box.h directly; hakmem_tiny_query.d and hakmem_tiny_stats.d pick up
core/hakmem_tiny_config.h; hakmem_tiny_sfc.d adds core/box/../ptr_trace.h at the end of the
tls_sll_box.h chain; hakmem_tiny_superslab.d adds core/hakmem_shared_pool.h; and
hakmem_shared_pool.d is the new dependency list for core/hakmem_shared_pool.c.)

hakmem_tiny_superslab_constants.h | 4 (new file)
@@ -0,0 +1,4 @@
+// shim header for legacy include paths
+// Some units include "hakmem_tiny_superslab_constants.h" from project root.
+// Forward to the canonical definition in core/.
+#include "core/hakmem_tiny_superslab_constants.h"

tiny_fastcache.d, tiny_publish.d, tiny_remote.d, tiny_sticky.d | regenerated

(Same cleanup as above for these auto-generated dependency lists: the
core/superslab/../tiny_box_geometry.h, ../hakmem_tiny_superslab_constants.h,
../hakmem_tiny_config.h, ../box/tiny_next_ptr_box.h and core/tiny_nextptr.h entries
inherited via core/superslab/superslab_inline.h are dropped.)