How the Reactor thinks.
The quality of AI output depends on process design, not prompt engineering skill. The Reactor shifts control: away from prompting, toward designing cognitive recipes.
One stream.
Five composable layers.
Every API call builds its system prompt from five independent layers. The same recipe step produces fundamentally different results depending on persona, context, and method, drawing on a library of 200+ thinking frameworks.
Persona
Cognitive stance — analytical, creative, adversarial, or curatorial
Method
One of 200+ thinking methods, injected as a structured instruction
Context
Pinned content + pruned history (full, last_step, or none)
Slot
Model selection — LOGIC, CREATIVE, or SEARCH per step
Output
Structured nodes, streamed in real-time with transparent reasoning
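The five layers above can be sketched as a simple prompt assembler. This is an illustrative sketch, not the Reactor's actual API: the field names (`persona`, `method`, `context_mode`, `slot`) and the XML tags are assumptions based on this page.

```python
from dataclasses import dataclass

@dataclass
class RecipeStep:
    persona: str       # cognitive stance, e.g. "analytical" or "adversarial"
    method: str        # one of the 200+ thinking methods
    context_mode: str  # "full", "last_step", or "none"
    slot: str          # "LOGIC", "CREATIVE", or "SEARCH"

def build_system_prompt(step: RecipeStep, pinned: str, history: list) -> str:
    # Context layer: pinned content plus pruned history
    if step.context_mode == "full":
        context = "\n".join(history)
    elif step.context_mode == "last_step":
        context = history[-1] if history else ""
    else:
        context = ""
    return (
        f"<persona>{step.persona}</persona>\n"
        f"<method>{step.method}</method>\n"
        f"<pinned_context>{pinned}</pinned_context>\n"
        f"<context>{context}</context>\n"
        "<output>structured nodes, streamed in real time</output>"
    )
```

Swapping a single layer, say the persona from "analytical" to "adversarial", changes the whole prompt while the rest of the recipe stays fixed.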
Three slots.
Heterogeneous composition.
Each recipe step declares the required cognitive slot. Analytical work goes to reasoning models, research to web models, synthesis to generative models — all in one workflow.
Analytical Reasoning
Reasoning models (o1, o3-mini, Gemini Flash Thinking)
TRIZ analysis, contradiction detection, scoring, validation, adversarial audit
Ideation & Synthesis
Generative models (Claude Sonnet, GPT-4o, Llama 4)
Brainstorming, storytelling, concept synthesis, solution architecture, copywriting
Evidence-Based Research
Web models (Perplexity Sonar, Gemini with grounding)
Patent research, competitive analysis, market research, state of the art
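A minimal routing table for the three slots might look like the sketch below. The model identifiers mirror the examples above; the routing function itself is an illustrative assumption, not the Reactor's real dispatcher.

```python
# Hypothetical slot-to-model routing; candidate lists follow the page's examples.
SLOT_MODELS = {
    "LOGIC": ["o1", "o3-mini", "gemini-flash-thinking"],
    "CREATIVE": ["claude-sonnet", "gpt-4o", "llama-4"],
    "SEARCH": ["perplexity-sonar", "gemini-grounded"],
}

def route(slot, preferred=None):
    """Pick a model for the declared cognitive slot, honoring a preference."""
    candidates = SLOT_MODELS[slot]
    if preferred in candidates:
        return preferred
    return candidates[0]  # fall back to the slot's default model
```

Because each recipe step declares only a slot, the same recipe keeps working when the model behind a slot changes.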
Six ways to orchestrate thinking.
From linear pipelines to autonomous, self-correcting swarms, every recipe is built on composable execution patterns.
Linear Pipeline
Sequential steps, each building on the previous one. Simple, deterministic, fast.
e.g. TRIZ Express, Elevator Pitch
Multi-Model Orchestrated
Different model slots per step — creative, analytical, and research models tackle the same problem from their respective strengths.
e.g. TRIZ v8, Pitch Architect
User-Driven Branching
The engine pauses at a decision point. The user chooses a path, and the recipe branches accordingly.
e.g. Dilemma Decoder, Branching Demo
Self-Correcting Loop
Autonomous generate → verify → revise cycle. AI evaluates its own output and refines until quality standards are met.
e.g. Aletheia Engine, TRIZ v9 MAX
Parallel Swarm
Multiple agents work simultaneously with different perspectives. Outputs are aggregated and synthesized.
e.g. MAD Engine, Ergodic Hive
Composable Patterns
Patterns can be freely combined. Real recipes are hybrids:
- E+F: Autonomous swarm — parallel agents + self-correction
- C+F: Branching swarm — user decision → specialized swarm
- B+E: Slot-based correction — creative generates, logic verifies
- D+E: Copilot / Autopilot — user switches between manual and autonomous
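As a sketch of how two patterns compose, here is an autonomous swarm (parallel agents plus self-correction) with every model call abstracted as a plain callable. Nothing below is the Reactor's real engine code; it only shows the control flow.

```python
def autonomous_swarm(prompts, generate, synthesize, verify, revise,
                     threshold=0.8, max_rounds=3):
    """Parallel swarm feeding a generate -> verify -> revise loop."""
    drafts = [generate(p) for p in prompts]   # parallel swarm: one agent per prompt
    candidate = synthesize(drafts)            # aggregate the perspectives
    for _ in range(max_rounds):               # self-correcting loop
        score = verify(candidate)
        if score >= threshold:                # quality standard met, stop early
            break
        candidate = revise(candidate, score)
    return candidate
```

In a real recipe the callables would be recipe steps hitting different model slots (e.g. creative models generating, a reasoning model verifying).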
Glass Box UI.
Every AI response is fully transparent. No black box. See what the model thought before it wrote, inspect structured data, and debug raw output, all in real time.
Agent labels show which specialist is active during multi-step recipes. The Cognitive Protocol reveals the model's internal reasoning process before a single word of output appears.
Formatted output — clean, structured, actionable
Native reasoning trace from the model's thought process
Structured node data — parseable, exportable
Unprocessed model output for debugging
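A single streamed Glass Box event could carry all four channels at once. The field names below are hypothetical; they simply mirror the four views listed above.

```python
# Hypothetical shape of one streamed event; every field name is an assumption.
event = {
    "agent": "Adversarial Auditor",  # agent label: which specialist is active
    "reasoning": "Checking step 4 for unstated assumptions...",  # native trace
    "formatted": "Risk: single-vendor supply dependency.",       # clean output
    "node": {"type": "risk", "severity": "high"},                # structured data
    "raw": '{"type": "risk", "severity": "high"}',               # debug view
}
```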
TRIZ v8 Orchestrator.
5 specialized agents. 3 model slots. 2 context modes. The flagship recipe demonstrates full orchestrator capacity.
1. Decompose the problem into components, functions, and contradictions.
2. Apply the TRIZ contradiction matrix. Generate solution directions.
3. Research existing solutions, patents, and analogous domains.
4. Synthesize all insights into concrete, actionable concepts.
5. Identify weaknesses, risks, and unintended consequences.
Context Pruning
Steps 2 & 3 use last_step — the model focuses on the distilled output, not the full history.
Slot Diversity
Analysis → reasoning models. Research → web models. Synthesis → creative models. No single model does everything.
Adversarial Close
Step 5 sees the full context but uses a reasoning model — maximum scrutiny on the entire proposal.
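Put together, the five steps could be declared as data. The schema below is an assumption based on this page: the page specifies `last_step` for steps 2 and 3 and full context for step 5, while the context modes for steps 1 and 4 are guesses.

```python
# Illustrative declarative form of the TRIZ v8 recipe; not the real file format.
TRIZ_V8 = [
    {"step": 1, "agent": "Decomposer",  "slot": "LOGIC",    "context_mode": "full"},
    {"step": 2, "agent": "Matrix",      "slot": "LOGIC",    "context_mode": "last_step"},
    {"step": 3, "agent": "Researcher",  "slot": "SEARCH",   "context_mode": "last_step"},
    {"step": 4, "agent": "Synthesizer", "slot": "CREATIVE", "context_mode": "full"},
    {"step": 5, "agent": "Auditor",     "slot": "LOGIC",    "context_mode": "full"},
]
```

Counting over the table recovers the headline numbers: 5 agents, 3 distinct slots, 2 context modes.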
Why cognitive architecture beats mega-prompts.
In March 2026, the Kimi team published the paper Attention Residuals, a mathematical treatment of a problem the Reactor already solves architecturally at the process-control level.
PreNorm Dilution
In deep networks, residual connections accumulate. Essential early information gets diluted by noise from middle layers — deeper layers lose access to the original signals.
Context Contamination
In standard chats, context accumulates message by message. By the time a model reaches step 6, the original problem is diluted in the noise of iterative intermediate steps.
Skip Connections → Epistemic Anchor
Kimi: Later layers skip accumulated noise and directly access early, clean layers (Attention Residuals).
Reactor: The Epistemic Anchor preserves extracted facts from step 1 via auto-pinning. Later agents with context_mode: last_step work only with the distilled output — the ground truth remains directly accessible via <pinned_context>.
Block Compression → Swarm Synthesis
Kimi: Layers are grouped into blocks and compressed into a single vector. Later layers see only the clean summary, not the raw individual steps.
Reactor: When parallel agents work (Pattern F), a synthesizer step compresses the outputs into a dense XML aggregate. Subsequent steps process only this node — not the individual agent responses.
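The swarm-synthesis compression can be sketched in a few lines. The tag names and the use of `escape()` are illustrative assumptions, not the Reactor's actual aggregate format.

```python
from xml.sax.saxutils import escape

def synthesize_swarm(agent_outputs):
    """Compress N parallel agent responses into one dense XML aggregate node."""
    parts = [
        f'<agent name="{name}">{escape(text)}</agent>'
        for name, text in agent_outputs.items()
    ]
    return "<swarm_synthesis>" + "".join(parts) + "</swarm_synthesis>"
```

Downstream steps would receive only this single node, never the raw individual agent responses.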
Deep & Narrow → Microsteps
Kimi: Networks with Attention Residuals reach their optimum with deeper, narrower architectures — shallow networks stagnate.
Reactor: Many tightly focused steps (deep & narrow) beat few overloaded mega-prompts (shallow & wide). When the engine handles navigation and context pruning, the model can concentrate 100% of its parameters on pure transformation.
Intelligence in complex systems doesn't emerge from endlessly accumulating data, but from targeted noise reduction — regardless of whether the system is a neural network or a cognitive pipeline.
Source: Kimi / Moonshot AI, Attention Residuals (March 2026)
Methods think ahead. Think along. Think further.
200+ methods. 84+ recipes. Three model slots — ready to steer.
Launch Reactor