refactor(llvm): Complete Call instruction modularization + Phase 21 organization
## LLVM Call Instruction Modularization

- Moved MirInstruction::Call lowering to a separate instructions/call.rs
- Follows the principle of one MIR instruction per file
- Call implementation was already complete, it just needed modularization

## Phase 21 Documentation

- Moved all Phase 21 content to private/papers/paper-f-self-parsing-db/
- Preserved AI evaluations from Gemini and Codex
- Academic paper potential confirmed by both AIs
- Self-parsing AST database approach validated

## Next Steps

- Continue monitoring ChatGPT5's LLVM improvements
- Consider creating a separate nyash-llvm-compiler crate once the LLVM layer is stable
- This will reduce build times by isolating the LLVM dependencies

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
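For readers outside the codebase, here is a minimal, self-contained sketch of the one-instruction-per-file layout this commit applies and that the diff below wires up. Every name in it (the toy `MirCall` struct, the pseudo-IR string output) is a hypothetical stand-in rather than nyash's real codegen API; only the `mod call;` / `pub(super) use call::lower_call;` shape matches the actual change.

```rust
// Sketch only: toy stand-ins, not nyash's real codegen types.
mod instructions {
    // In the real tree each instruction lives in its own file
    // (e.g. instructions/call.rs); an inline module keeps this
    // sketch to a single compilable file.
    mod call {
        /// Toy stand-in for a MIR `Call`: a callee name plus
        /// already-lowered integer arguments.
        pub struct MirCall {
            pub callee: String,
            pub args: Vec<i64>,
        }

        /// Lower one MIR `Call`. The real lower_call emits LLVM IR
        /// through the codegen context; this toy renders pseudo-IR text.
        pub fn lower_call(call: &MirCall) -> String {
            let args: Vec<String> =
                call.args.iter().map(|a| format!("i64 {a}")).collect();
            format!("call @{}({})", call.callee, args.join(", "))
        }
    }

    // Same shape as the re-exports in the diff below: code outside this
    // module only ever sees `instructions::lower_call`.
    pub(super) use call::{lower_call, MirCall};
}

fn main() {
    let call = instructions::MirCall {
        callee: "print".to_string(),
        args: vec![40, 2],
    };
    // Prints: call @print(i64 40, i64 2)
    println!("{}", instructions::lower_call(&call));
}
```

Keeping each lowering behind a re-export means callers only ever import from `instructions`, so moving a lowering into its own file never touches call sites.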
@@ -10,6 +10,7 @@ mod strings;
 mod arrays;
 mod maps;
 mod arith_ops;
+mod call;
 
 pub(super) use blocks::{create_basic_blocks, precreate_phis};
 pub(super) use flow::{emit_branch, emit_jump, emit_return};
@@ -20,3 +21,4 @@ pub(super) use arith::lower_compare;
 pub(super) use mem::{lower_load, lower_store};
 pub(super) use consts::lower_const;
 pub(super) use arith_ops::{lower_binop, lower_unary};
+pub(super) use call::lower_call;