## Summary
Investigated OpenAI's new GPT-5-Codex model and Codex GitHub PR review integration capabilities.
## GPT-5-Codex Analysis
### Benchmark Performance (Good)
- SWE-bench Verified: 74.5% (vs GPT-5's 72.8%)
- Refactoring tasks: 51.3% (vs GPT-5's 33.9%)
- Code review: Higher developer ratings
### Real-World Issues (Concerning)
- Users report degraded coding performance
- Scripts that previously worked now fail
- Less consistent than GPT-4.5
- Longer response times (minutes vs instant)
- "Creatively and emotionally flat"
- Basic errors (e.g., counting letters incorrectly)
### Key Finding
A classic case of "optimizing for benchmarks vs real usability": the model scores well on tests but performs poorly in practice.
## Codex GitHub PR Integration
### Setup Process
1. Enable MFA and connect GitHub account
2. Authorize Codex GitHub app for repos
3. Enable "Code review" in repository settings
### Usage Methods
- **Manual**: Comment '@codex review' in PR
- **Automatic**: Triggers when PR moves from draft to ready
### Current Limitations
- One-way communication (doesn't respond to review comments)
- Prefers creating new PRs over updating existing ones
- Better for single-pass reviews than iterative feedback
## 'codex resume' Feature
New session management capability:
- Resume previous codex exec sessions
- Useful for continuing long tasks across days
- Maintains context from interrupted work
🐱 The investigation reveals that while GPT-5-Codex shows benchmark improvements, the practical developer experience has declined, a reminder that metrics don't always reflect real-world utility!
## Summary
Documented the "init block vs fields-at-top" design discussion as a valuable example of AI-human collaboration in language design.
## Changes
### Paper G (AI Collaboration)
- Added field-declaration-design.md documenting the entire discussion flow
- Showcased how complex init block proposal evolved to simple "fields at top" rule
- Demonstrates AI's tendency toward complexity vs human intuition for simplicity
### Paper H (AI Practical Patterns)
- Added Pattern #17: "Gradual Refinement Pattern" (段階的洗練型)
- Documents the process: Complex AI proposal → Detailed analysis → Human insight → Convergence
- Field declaration design as a typical example
### Paper K (Explosive Incidents)
- Added Incident #046: "init block vs fields-at-top incident"
- Updated total count to 46 incidents
- Shows how a single human comment redirected entire design approach
## Design Decision
After analysis, decided that BoxIndex should remain a compiler-internal structure, not a core Box:
- Core Boxes: User-instantiable runtime values (String, Integer, Array, Map)
- Compiler internals: BoxIndex for name resolution (compile-time only)
- Clear separation of concerns between language features and compiler tools
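As a rough Python sketch of this split (hypothetical names and shapes; not the actual compiler structures), the index lives only inside the compiler while Boxes remain the only runtime values:

```python
# Hedged sketch of the separation (hypothetical names; not the actual Nyash
# compiler code). BoxIndex is a plain compile-time table, never a Box.
class BoxIndex:
    """Compiler-internal name-resolution table, used only while compiling."""
    def __init__(self):
        self._declarations = {}  # name -> declaration info

    def declare(self, name, info):
        self._declarations[name] = info

    def resolve(self, name):
        return self._declarations.get(name)


class StringBox:
    """A core Box: a user-instantiable runtime value."""
    def __init__(self, value: str):
        self.value = value


# The index is consulted during compilation and then discarded;
# only Boxes exist at runtime.
index = BoxIndex()
index.declare("greeting", {"box_type": "StringBox"})
assert index.resolve("greeting")["box_type"] == "StringBox"
greeting = StringBox("hello")
```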
## Philosophy
This discussion exemplifies key principles:
- The best design needs no explanation
- Constraints provide clarity, not limitation
- "Everything is Box" doesn't mean "compiler internals are Boxes"
- AI tends toward theoretical completeness; humans toward practical simplicity
🐱 Sometimes the simplest answer is right in front of us!
Major implementation by ChatGPT:
- Complete JSON v0 Bridge layer with PHI generation for control flow
- If statement: Merge PHI nodes for variables updated in then/else branches (both PHI shapes are sketched below)
- Loop statement: Header PHI nodes for loop-carried dependencies
- Python MVP Parser Stage-2: Added local/if/loop/call/method/new support
- Full CFG guarantee: All blocks have proper terminators (branch/jump/return)
- Type metadata for string operations (+, ==, !=)
- Comprehensive PHI smoke tests for nested and edge cases
This enables MIR generation without the Rust MIR builder, a massive step towards eliminating the Rust build dependency!
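For reference, a minimal llvmlite sketch of the two PHI shapes described above, the if/else merge PHI and the loop header PHIs (illustrative only; function names and constants are hypothetical, not the Bridge's actual output):

```python
# Minimal llvmlite sketch of the two PHI shapes: an if/else merge PHI and
# loop header PHIs for loop-carried values. Illustrative only.
from llvmlite import ir

i64 = ir.IntType(64)
module = ir.Module(name="phi_demo")

# --- If/else merge PHI: one incoming value per branch ----------------------
sel = ir.Function(module, ir.FunctionType(i64, [ir.IntType(1)]), name="select_value")
cond, = sel.args
entry = sel.append_basic_block("entry")
then_bb = sel.append_basic_block("then")
else_bb = sel.append_basic_block("else")
merge_bb = sel.append_basic_block("merge")

b = ir.IRBuilder(entry)
b.cbranch(cond, then_bb, else_bb)
b.position_at_end(then_bb)
b.branch(merge_bb)
b.position_at_end(else_bb)
b.branch(merge_bb)

b.position_at_end(merge_bb)
merged = b.phi(i64, name="merged")
merged.add_incoming(ir.Constant(i64, 10), then_bb)  # value from the then branch
merged.add_incoming(ir.Constant(i64, 20), else_bb)  # value from the else branch
b.ret(merged)

# --- Loop header PHIs: loop-carried i and acc -------------------------------
loop_fn = ir.Function(module, ir.FunctionType(i64, [i64]), name="sum_to_n")
n, = loop_fn.args
l_entry = loop_fn.append_basic_block("entry")
header = loop_fn.append_basic_block("header")
body = loop_fn.append_basic_block("body")
exit_bb = loop_fn.append_basic_block("exit")

b = ir.IRBuilder(l_entry)
b.branch(header)

b.position_at_end(header)
i_phi = b.phi(i64, name="i")
acc_phi = b.phi(i64, name="acc")
i_phi.add_incoming(ir.Constant(i64, 0), l_entry)    # initial values from entry
acc_phi.add_incoming(ir.Constant(i64, 0), l_entry)
b.cbranch(b.icmp_signed("<", i_phi, n), body, exit_bb)

b.position_at_end(body)
next_acc = b.add(acc_phi, i_phi)
next_i = b.add(i_phi, ir.Constant(i64, 1))
b.branch(header)
i_phi.add_incoming(next_i, body)                    # back-edge values
acc_phi.add_incoming(next_acc, body)

b.position_at_end(exit_bb)
b.ret(acc_phi)

print(module)  # dumps textual LLVM IR; every block has a proper terminator
```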
🎉 ChatGPT spent over 30 minutes implementing this, nya!
Co-Authored-By: ChatGPT <noreply@openai.com>
Parser improvements:
- Added expression statement fallback in parse_statement() for flexible syntax (see the sketch after this list)
- Fixed ternary operator to use PeekExpr instead of If AST (better lowering)
- Added peek_token() check to avoid ?/?: operator conflicts
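A minimal sketch of the expression-statement fallback pattern (token kinds, AST classes, and helpers are hypothetical, not the project's actual parse_statement()):

```python
# Hedged sketch of an expression-statement fallback in a recursive-descent
# parser. Token kinds, AST classes, and helpers are hypothetical.
from dataclasses import dataclass

@dataclass
class Token:
    kind: str
    text: str

@dataclass
class ExprStatement:
    expr: str

class MiniParser:
    STATEMENT_KEYWORDS = {"local", "if", "loop", "return"}

    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek_token(self):
        return self.tokens[self.pos]

    def parse_statement(self):
        tok = self.peek_token()
        if tok.kind in self.STATEMENT_KEYWORDS:
            return self.parse_keyword_statement(tok.kind)
        # Fallback: anything that isn't a statement keyword is parsed as an
        # expression statement, so bare calls and operators still work
        # without enumerating every syntactic form up front.
        return ExprStatement(self.parse_expression())

    def parse_keyword_statement(self, kind):
        raise NotImplementedError(f"keyword statement: {kind}")

    def parse_expression(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok.text  # placeholder: a real parser builds an AST here

# Usage: a bare expression token falls through to the expression path.
stmt = MiniParser([Token("ident", "print(x)")]).parse_statement()
assert isinstance(stmt, ExprStatement)
```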
LLVM Python improvements:
- Added optional ESC_JSON_FIX environment flag for string concatenation
- Improved PHI generation with better default handling
- Enhanced substring tracking for esc_json pattern
Documentation updates:
- Updated language guide with peek expression examples
- Added box theory diagrams to Phase 15 planning
- Clarified peek vs when syntax differences
These changes enable a cleaner parser implementation for self-hosting, especially for digit conversion, which can now use peek expressions instead of a 19-line if-else chain.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added -j 24 to cargo build commands (lines 52, 83)
- Consistent with build_jit.sh which already uses 24 threads
- Significantly speeds up LLVM builds on multi-core systems
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changes to resolver.py:
- Improved PHI value tracking in _value_at_end_i64() (lines 268-285)
- Added trace logging for snap hits with PHI detection
- Fixed PHI placeholder reuse logic to preserve dominance
- PHI values now returned directly from snapshots when valid
Changes to llvm_builder.py:
- Fixed externcall instruction parsing (line 522: 'func' instead of 'name')
- Improved block snapshot tracing (line 439)
- Added PHI incoming metadata tracking (lines 316-376)
- Enhanced definition tracking for lifetime hints
This should help debug the string carry=0 issue in esc_dirname_smoke where
PHI values were being incorrectly coerced instead of preserved.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
✅ Print and FileBox paths now working correctly
✅ Resolver simplified by removing overly aggressive fast-path optimization
✅ Both OFF/ON in compare_harness_on_off.sh now use Python version
✅ String handle propagation issues resolved
Key changes:
- Removed instruction reordering in llvm_builder.py (respecting MIR order)
- Resolver now more conservative but reliable
- compare_harness_on_off.sh updated to use Python backend for both paths
This marks a major milestone towards Phase 15 self-hosting with Python/llvmlite!
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Added:
- Resolver API (resolve_i64) for unified value resolution with per-block cache (see the sketch at the end of this entry)
- llvmlite harness (Python) for rapid PHI/SSA verification
- Comprehensive LLVM documentation suite:
  - LLVM_LAYER_OVERVIEW.md: Overall architecture and invariants
  - RESOLVER_API.md: Value resolution strategy
  - LLVM_HARNESS.md: Python verification harness
Updated:
- BuilderCursor applied to ALL lowering paths (externcall/newbox/arrays/maps/call)
- localize_to_i64 for dominance safety in strings/compare/flow
- NYASH_LLVM_DUMP_ON_FAIL=1 for debug IR output
Key insight: LoopForm didn't cause the problems; it merely exposed existing design flaws:
- Scattered value resolution (now unified via Resolver)
- Inconsistent type conversion placement
- Ambiguous PHI wiring responsibilities
Next: Wire Resolver throughout, achieve sealed=ON green for dep_tree_min_string
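A hedged sketch of the per-block cache idea behind resolve_i64 (hypothetical shape; the real Resolver also handles PHIs, casts, and dominance):

```python
# Hedged sketch of a per-block value-resolution cache. Names and shapes are
# hypothetical; the real Resolver API covers PHIs, casts, and dominance.
class Resolver:
    def __init__(self):
        # (block_name, value_id) -> value already materialized in that block,
        # so repeated lookups do not re-emit loads or conversions.
        self._cache = {}

    def resolve_i64(self, block_name, value_id, materialize):
        key = (block_name, value_id)
        if key not in self._cache:
            self._cache[key] = materialize(block_name, value_id)
        return self._cache[key]


# Usage: `materialize` stands in for whatever builder callback emits the i64
# (load, cast, PHI lookup, ...); the cache guarantees one emission per
# (block, value) pair.
resolver = Resolver()
first = resolver.resolve_i64("bb3", 42, lambda blk, vid: f"%v{vid}_in_{blk}")
again = resolver.resolve_i64("bb3", 42, lambda blk, vid: "never called")
assert first == again == "%v42_in_bb3"
```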
Major changes:
- Removed 617 lines of duplicate/legacy code from mod.rs (lines 351-967)
- All BoxCall handling now properly delegated to instructions::lower_boxcall
- Updated CURRENT_TASK.md with new findings:
  - String concatenation issue (BinOp type mismatch)
  - Plugin return value smoke test added
  - Clear next steps for fixing return value display
Key improvements:
- Clean separation between dispatch (mod.rs) and implementation (instructions.rs)
- Legacy code marked as unreachable and ready for removal
- Better error visibility with modularized code structure
- llvm_smoke.sh updated with new plugin return value tests
Next steps:
1. Fix BinOp string concatenation type handling
2. Investigate MIR value_types for BoxCall returns
3. Further split lower_boxcall function (still 260+ lines)
- Split 2522-line codegen.rs into modular structure:
  - mod.rs (1330 lines) - main compilation flow and instruction dispatch
  - instructions.rs (1266 lines) - all MIR instruction implementations
  - types.rs (189 lines) - type conversion and classification helpers
  - helpers.rs retained for shared utilities
- Preserved all functionality including:
  - Plugin return value handling (BoxCall/ExternCall)
  - Handle-to-pointer conversions for proper value display
  - Type-aware return value processing based on MIR metadata
  - All optimization paths (ArrayBox fast-paths, string concat, etc.)
- Benefits:
  - Better code organization and maintainability
  - Easier to locate specific functionality
  - Reduced cognitive load when working on specific features
  - Cleaner separation of concerns
No functional changes - pure refactoring to improve code structure.
Investigated the issue where ChatGPT's string_smoke.nyash run got stuck in an
infinite loop (99.9% CPU). Confirmed the cause was the endless monitoring loop
(while true) in archive/codex-keep-two-loop.sh.
It has already been replaced with the safe single-shot version (codex-keep-two.sh).
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add JIT Self-Host Quickstart section for Phase 15
- Include important flags reference (plugins, parsers, debugging)
- Add Codex async workflow documentation for parallel tasks
- Update test execution with Phase 15 smoke tests
- Improve build time notes (JIT vs LLVM)
- Align with current Phase 15 progress and tooling
🎉 Bootstrap (c0→c1→c1') test confirmed working!
Co-Authored-By: Claude <noreply@anthropic.com>
- Update phase indicator to Phase 15 (Self-Hosting)
- Update documentation links to Phase 15 resources
- Reflect completion of R1-R5 tasks and ongoing work
- Fix CURRENT_TASK.md location to root directory
Co-Authored-By: Claude <noreply@anthropic.com>