Rayzor Blade

High-Performance, Next-Generation Compiler & Runtime for the Haxe Language

Fast compilation • Native performance • Ownership-based memory

Cranelift • LLVM • Apache 2.0

5-Tier JIT Compilation

Adaptive optimization from the MIR interpreter through Cranelift to LLVM. Hot paths automatically tier up for maximum performance.

~3ms/function → ~500ms LLVM
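
A rough sketch of what tiering looks like from the source side (illustrative Haxe; tier promotion is driven by the runtime's own counters, and the class and function names here are made up for the example):

Haxe
// Illustrative: the first calls to sum() run in the MIR interpreter;
// once it becomes hot, the runtime recompiles it with Cranelift and,
// if it stays hot, with LLVM. No annotations are needed in the source.
class JitDemo {
    static function sum(n:Int):Int {
        var total = 0;
        for (i in 0...n) total += i;
        return total;
    }

    static function main() {
        for (i in 0...100000) sum(1000); // hot loop that triggers tier-up
        trace(sum(1000));
    }
}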

🚀 100x Faster Compilation

Cranelift JIT compiles in ~3ms per function vs 2-5 seconds for C++ targets. Instant feedback during development.

~3ms vs 2-5s C++

🔒 Ownership-Based Memory

Rust-inspired ownership tracking with compile-time drop analysis. No garbage collection for statically-typed code.

Zero GC pauses
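
A minimal sketch of what compile-time drop analysis means in practice (illustrative Haxe; the drop point in the comment is what an ownership pass can infer, not syntax you write):

Haxe
class Report {
    public var lines:Array<String> = [];
    public function new() {}
    public function add(line:String) {
        lines.push(line);
    }
}

class OwnershipDemo {
    static function build():String {
        var report = new Report();      // build() owns `report`
        report.add("header");
        report.add("body");
        return report.lines.join("\n"); // last use: the compiler can free
                                        // `report` here, with no GC involved
    }

    static function main() {
        trace(build());
    }
}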

📦 BLADE Incremental Cache

Per-module binary caching with source hash validation. Skip unchanged modules entirely for lightning-fast rebuilds.

~30x faster rebuilds
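
For example, with an illustrative two-module layout (the module and file names below are made up), only the edited module is recompiled on a rebuild:

Haxe
// main.hx
import utils.MathTools;

class Main {
    static function main() {
        trace(MathTools.square(12));
    }
}

// utils/MathTools.hx
package utils;

class MathTools {
    public static function square(x:Int):Int {
        return x * x;
    }
}

// After editing main.hx, utils/MathTools.hx still hashes to the same value,
// so its cached module binary is reused and the module is skipped entirely.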

🎯 Native Performance

LLVM -O3 optimization runs 45-50x faster than the MIR interpreter. Profile-guided optimization targets hot code paths.

45-50x vs interpreter
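
The kind of code this tier targets is a tight numeric kernel like the one below (illustrative Haxe; the 45-50x figure above is the project's own and is not measured from this snippet):

Haxe
// Illustrative: once profiling marks dot() as hot, it is recompiled at the
// LLVM -O3 tier, where loop optimizations and vectorization can apply.
class Dot {
    static function dot(a:Array<Float>, b:Array<Float>):Float {
        var acc = 0.0;
        for (i in 0...a.length) acc += a[i] * b[i];
        return acc;
    }

    static function main() {
        var a = [for (i in 0...1024) i * 0.5];
        var b = [for (i in 0...1024) i * 0.25];
        trace(dot(a, b));
    }
}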

🏗️ Modern SSA Architecture

Full SSA-based IR with optimization passes, monomorphization, and SIMD vectorization infrastructure.

31k LOC MIR
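
Monomorphization, for instance, means a generic function is compiled once per concrete type it is used with, so each copy gets its own fully typed SSA body to optimize (illustrative Haxe):

Haxe
class MonoDemo {
    // Compiled as separate specializations for Int and String
    // rather than one dynamically dispatched version.
    static function first<T>(xs:Array<T>):T {
        return xs[0];
    }

    static function main() {
        trace(first([1, 2, 3]));       // instantiates the Int specialization
        trace(first(["a", "b", "c"])); // instantiates the String specialization
    }
}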

Quick Start

Get up and running in seconds

Terminal
# Clone and build
git clone https://github.com/darmie/rayzor.git
cd rayzor
cargo build --release

# Run with tiered JIT
rayzor run hello.hx --preset application

# Compile to native
rayzor compile hello.hx --stage native

# Create single-file executable
rayzor bundle main.hx --output app.rzb
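
A matching hello.hx for the commands above can be as small as this (minimal sketch; it is assumed that Rayzor resolves the entry point from the file passed on the command line):

hello.hx
class Hello {
    static public function main() {
        trace("Hello from Rayzor!");
    }
}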