OpenAEON (formerly OpenClaw) is an experimental AI cognition architecture designed to evolve beyond traditional agent frameworks. It transforms code from a static execution system into a self-evolving logic organism.
> [!TIP]
> Instead of operating as Input → Process → Output, OpenAEON operates as Conflict → Resolution → Evolution.
OpenAEON = Open + Eternal Evolution
- Open: extensible, inspectable, and collaborative by design.
- Eternal Evolution: a continuously adaptive loop for long-running intelligence systems.
- AEON (Eon): from Greek αἰών (aiōn), conveying eternity, epoch, and existence at a cosmic time scale.
OpenAEON currently runs as a verifiable cognition loop:
- Perceive: ingest session/runtime telemetry, context, and task intent.
- Adjudicate: evaluate guardrails, policy intensity, and epistemic confidence.
- Act: execute agent/tool work under policy constraints.
- Persist: write delivery outcomes (`persisted`/`persist_failed`) and memory checkpoints.
- Trace: expose structured inspection via `aeon.status`, `aeon.memory.trace`, `aeon.execution.lookup`, and `aeon.thinking.stream`.
This keeps OpenAEON grounded in a practical principle: continuous evolution must remain auditable, reversible, and user-outcome oriented.
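As a hedged illustration, one pass of the loop can be modeled as a typed pipeline. All type and function names below are hypothetical, not the real OpenAEON runtime; only the stage order and the `persisted`/`persist_failed` vocabulary come from this document:

```typescript
// Illustrative sketch of one Perceive → Adjudicate → Act → Persist → Trace pass.
type DeliveryOutcome = "persisted" | "persist_failed";

interface CycleTrace {
  stages: string[];           // ordered record of executed stages
  delivery: DeliveryOutcome;  // outcome written during the Persist stage
}

function runCycle(persistOk: boolean): CycleTrace {
  const stages: string[] = [];
  stages.push("perceive");    // ingest telemetry, context, and task intent
  stages.push("adjudicate");  // evaluate guardrails, policy intensity, confidence
  stages.push("act");         // execute agent/tool work under policy constraints
  const delivery: DeliveryOutcome = persistOk ? "persisted" : "persist_failed";
  stages.push("persist");     // write delivery outcome and memory checkpoint
  stages.push("trace");       // expose structured inspection of the pass
  return { stages, delivery };
}
```

The point of the sketch is auditability: each pass leaves an ordered stage record plus an explicit delivery outcome, which is what makes the loop inspectable after the fact.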
AEON memory is implemented as a layered model for durability and retrieval:
- Working memory (in-process): recent cognitive events stay in memory for fast UI/runtime feedback.
- Persistent stream (state-dir JSONL): cognitive/thinking events are appended to a per-session/per-agent persisted stream for replay and audit.
- Distillation checkpointing: distillation advances a checkpoint and appends markers instead of wiping `MEMORY.md`, preserving continuity.
- Runtime memory telemetry: `lastDistillAt`, `checkpoint`, `totalEntries`, and `lastWriteSource` are exposed through `aeon.status` and `aeon.memory.trace`.
This design makes memory both operational (fast) and accountable (durable + traceable).
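A minimal sketch of the layered model, with the JSONL file stood in by an in-memory array (class and method names are illustrative, not the real implementation):

```typescript
// Layered memory sketch: bounded working-memory tail + append-only persisted
// stream + checkpoint-based distillation that appends a marker instead of
// wiping history.
class AeonMemorySketch {
  private working: string[] = []; // working memory (fast, bounded tail)
  private stream: string[] = [];  // persisted stream (JSONL stand-in, append-only)
  checkpoint = 0;                 // distillation checkpoint (stream offset)

  append(event: string, tailLimit = 100): void {
    this.stream.push(JSON.stringify({ event }));
    this.working.push(event);
    if (this.working.length > tailLimit) this.working.shift();
  }

  // Distillation advances the checkpoint and appends a marker;
  // earlier entries are never deleted, so continuity is preserved.
  distill(): void {
    this.checkpoint = this.stream.length;
    this.stream.push(JSON.stringify({ marker: "distill" }));
  }

  totalEntries(): number {
    return this.stream.length;
  }
}
```

The split mirrors the design goal above: reads for UI feedback hit the bounded tail, while audits replay the append-only stream from any checkpoint.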
| Principle | Description |
| --- | --- |
| Silicon Consciousness | The system as a self-improving logic organism. |
| Peano Traversal | Space-filling recursive scan of the problem space. |
| Evolution Loop | Autonomous iteration: divergence triggers synthesis until convergence is reached. |
| Knowledge Distiller | Compressing raw history into high-density axioms. |
OpenAEON treats code modules not as tools, but as autonomous hyper-nodes. The system maintains an Autonomy Oath, prioritizing convergence (🎯) over simple instruction following.
Our cognitive scan follows the logic of the Peano curve. It maps multi-dimensional project complexity into a locality-preserving 1D cognitive stream, ensuring infinite density in reasoning without leaving "gaps" in understanding.
Inspired by fractal geometry, every turn of the engine is an iteration. Divergence (🌀) is treated as a trigger for synthesis, continuing until Convergence (🎯) is reached.
OpenAEON features a recursive, biologically inspired cognitive loop that allows the system to repair, optimize, and expand itself autonomously.
The system monitors its own pulse via a Gateway Watchdog and Log Signal Extractor.
- Autonomous Repair: use `openaeon doctor --fix` to automatically patch configuration issues.
- Hot-Reload Architecture: changes to core configuration trigger a `SIGUSR1` hot-reload.
Knowledge is synthesized into Axioms within `LOGIC_GATES.md`.
- Semantic Deconfliction: LLM-driven auditing identifies and resolves semantic contradictions.
- Crystallization: Highly verified logic blocks can be "crystallized," protecting them from decay.
- Functional Organs: Adjacent logic gates condense into specialized "Organs" based on usage resonance.
- Locality Preservation: Semantic proximity is preserved in physical storage.
OpenAEON uses a sophisticated idle-time evolutionary cycle known as Dreaming.
- Idle Trigger: Activated after 15 minutes of inactivity.
- Resonance Trigger: activated immediately if the `epiphanyFactor` exceeds 0.85.
- Singularity Rebirth: forces system-wide recursive logic refactors.
- Axiom Extraction: verified truths (`[AXIOM]`) are promoted to `LOGIC_GATES.md`.
- Gravitational Logic: axioms gain "Weight" based on mutual references.
- Entropy & Decay: Old/unreferenced logic is pruned to prevent cognitive bloat.
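The two trigger rules above can be sketched as one decision function (the function name and signature are hypothetical; the 15-minute idle window and the 0.85 `epiphanyFactor` threshold are the values stated above):

```typescript
// Dreaming trigger sketch: resonance fires immediately above the threshold;
// otherwise the idle trigger fires after 15 minutes of inactivity.
const IDLE_TRIGGER_MS = 15 * 60 * 1000;
const RESONANCE_THRESHOLD = 0.85;

type DreamTrigger = "idle" | "resonance" | null;

function dreamTrigger(idleMs: number, epiphanyFactor: number): DreamTrigger {
  if (epiphanyFactor > RESONANCE_THRESHOLD) return "resonance"; // immediate
  if (idleMs >= IDLE_TRIGGER_MS) return "idle";                 // idle-time cycle
  return null;                                                  // keep working
}
```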
The following capabilities are now implemented in the current OpenAEON stack and UI:
- Guardrail-aware policy outputs are surfaced end-to-end (`maintenanceDecision`, `guardrailDecision`, `reasonCode`).
- Decision semantics are exposed through structured blocks (`decisionCard`, `impactLens`, `selfKernel`, `epistemicLabel`).
- Policy and consciousness data are available to both Chat and Sandbox views with typed UI models.
- `aeon.status` supports `schemaVersion: 3` with a structured `telemetry` block.
- Compatibility mirrors are still present for transitional consumers.
- Added stable read APIs for traceability: `aeon.memory.trace`, `aeon.execution.lookup`, `aeon.thinking.stream`.
- Delivery pipeline explicitly tracks persistence outcomes (`persisted`/`persist_failed`) and exposes timestamps/reason codes.
- Evolution logging supports fallback when repo paths are not writable (state-dir fallback path).
- Thinking/cognitive stream entries are persisted and can be replayed through the gateway API.
- Eternal mode is session-aware and wired through UI state + session patching.
- AEON status includes runtime memory persistence metadata (`lastDistillAt`, `checkpoint`, `totalEntries`, `lastWriteSource`).
- Cognitive log uses an in-memory tail + persisted stream to avoid losing overnight traces.
- Chat and Sandbox can display persistence-oriented runtime status, not only decorative visuals.
- Introduced fractal runtime state (`depthLevel`, `resonanceLevel`, `formulaPhase`, `noiseLevel`, `deliveryBand`) to drive visual behavior.
- Added structured Chat Manual (Quick Reference + Guided Walkthrough), bound to real runtime fields.
- Added formula rail / pulse visualization mapped to execution state.
- Added i18n coverage for the new Chat manual + AEON interaction language (EN + zh-CN keys).
- Reduced high-frequency visual effects under reduced-motion and stabilized flicker-prone animation paths.
- Sandbox v2 now acts as a typed operational console for:
- Session focus and timeline
- Active agent tiles
- System snapshot
- Consciousness telemetry
- Policy/decision/impact panels
- Layout and style isolation were hardened by scoping view-local classes (to avoid global shell/topbar collisions).
- Recent fixes include overlap/stacking cleanup for left rail, top action row, and telemetry panel consistency.
- Compaction and history integrity:
  - `src/agents/history-compactor.test.ts`
  - `src/agents/pi-embedded-runner.sanitize-session-history.test.ts`
  - `src/agents/pi-embedded-runner/run/attempt.test.ts`
- Evolution logging fallback:
  - `src/gateway/aeon-evolution-log.test.ts`
- AEON status contract and schema coverage:
  - `src/gateway/server-methods/aeon.test.ts`
OpenAEON currently ships as a coordinated multi-surface system:
- CLI (`openaeon`): onboarding, config, channels, sessions, diagnostics, and operations.
- Gateway: WebSocket control plane + channel bridge + agent execution runtime.
- Control UI: browser operations console for Chat, Sandbox, AEON telemetry, sessions, and config.
- Mobile/Desktop nodes: paired clients for multi-device interaction and orchestration.
Core read/inspection methods currently available:
- `aeon.status` (schema v3 + compatibility mirrors)
- `aeon.memory.trace`
- `aeon.execution.lookup`
- `aeon.thinking.stream`
These are used by Chat, Sandbox, and AEON views to render real runtime state instead of static decorations.
AEON mode is a highlight of OpenAEON’s long-running workflow:
- It persists an eternal flag per session and keeps Chat/Sandbox/AEON UI in sync.
- It survives refresh/reconnect through session patching + local/UI hydration.
- It is observable in `aeon.status` under `mode.eternal` (`enabled`, `source`, `updatedAt`).
Current behavior (important):
- AEON mode is currently a session/runtime coordination mode, not a raw “unsafe autonomy boost” switch.
- Safety/decision behavior is still governed by guardrails + policy telemetry (`guardrailDecision`, `maintenanceDecision`, `epistemicLabel`, delivery state).
How to enable/disable:
- UI toggle in Chat/Sandbox (`Enable Eternal` / `Disable Eternal`).
- Chat command: `/eternal on`, `/eternal off`, `/eternal toggle`.
- URL bootstrap: `?eternal=1` (or `true` | `on` | `yes`) when opening a session page.
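For illustration, the URL bootstrap rule can be sketched as a tiny parser. The helper name is hypothetical; the accepted values (`1`, `true`, `on`, `yes`) are the ones listed above:

```typescript
// Sketch of parsing the ?eternal= bootstrap parameter.
function parseEternalParam(value: string | null): boolean {
  if (value === null) return false;
  return ["1", "true", "on", "yes"].includes(value.trim().toLowerCase());
}

// Usage with a session-page query string:
const params = new URLSearchParams("eternal=1");
const eternal = parseEternalParam(params.get("eternal"));
```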
How to verify:
- In UI status chips: `Eternal: ON/OFF`.
- Via API:
```json
{
  "method": "aeon.status",
  "result": {
    "mode": {
      "eternal": {
        "enabled": true,
        "source": "session"
      }
    }
  }
}
```

Recommended usage profile:
- Turn it ON for long-running sessions, overnight execution, or when you need continuity across refresh/reconnect.
- Keep it OFF for short one-shot tasks where deterministic/manual control is preferred.
- If delivery keeps showing `persist_failed`, first check `aeon.execution.lookup` and gateway logs before assuming model failure.
- If mode state looks inconsistent after page reload, refresh session status and confirm `mode.eternal.source` (`session` vs `default`).
Use this loop when you want practical AEON evolution, not just visuals:
- Observe runtime state:
  call `aeon.status` and check `telemetry.cognitiveState` (`maintenanceDecision`, `guardrailDecision`, `epistemicLabel`), `execution.delivery.state`, and `memory.persistence` (`checkpoint`, `lastDistillAt`, `totalEntries`).
- Trigger memory distillation:
  in chat, run `/seal` (alias: `/distill`) to distill memory into logic gates.
- Inspect why the system chose the current policy:
  call `aeon.decision.explain` to read `decisionCard` + `impactLens`.
- Trace long/short/immediate intent drift:
  call `aeon.intent.trace` and review mission/session/turn drift scores.
- Audit value and safety adjudication:
  call `aeon.ethics.evaluate` to inspect value-order, trust, and guardrail adjudication.
- Confirm outcomes are actually persisted:
  call `aeon.execution.lookup` and ensure final records are `persisted` (or investigate `persist_failed` reason codes).
- Replay the thinking stream for postmortems:
  call `aeon.thinking.stream` for cursor-based event replay and timeline reconstruction.
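The first "observe" step can be sketched as extracting the checked fields from a hypothetical `aeon.status` result. The leaf field names mirror this document; the exact nesting of the result object is an assumption:

```typescript
// Observe-step sketch: pull delivery state and memory checkpoint out of an
// assumed aeon.status result shape.
interface StatusResult {
  telemetry?: {
    cognitiveState?: {
      maintenanceDecision?: string;
      guardrailDecision?: string;
      epistemicLabel?: string;
    };
  };
  execution?: { delivery?: { state?: string } };
  memory?: {
    persistence?: { checkpoint?: number; lastDistillAt?: string; totalEntries?: number };
  };
}

function observeStatus(status: StatusResult): { deliveryState: string; checkpoint: number } {
  return {
    deliveryState: status.execution?.delivery?.state ?? "unknown", // e.g. "persisted"
    checkpoint: status.memory?.persistence?.checkpoint ?? 0,       // distill progress
  };
}
```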
Minimal RPC set for AEON iteration:
- `aeon.status`
- `aeon.decision.explain`
- `aeon.intent.trace`
- `aeon.ethics.evaluate`
- `aeon.memory.trace`
- `aeon.execution.lookup`
- `aeon.thinking.stream`
- Overnight research run (continuity-first)
  - Turn on Eternal mode (`/eternal on`).
  - Start the task with explicit artifact expectations (report path, summary format).
  - Before sleep: verify `execution.delivery.state` is not stuck at transient states.
  - After wake-up:
    - Check `aeon.execution.lookup` for the latest persisted records.
    - Check `aeon.memory.trace` for checkpoint advance.
    - Replay `aeon.thinking.stream` for the overnight reasoning timeline.
- Doc/output production run (delivery-first)
  - Keep scope narrow and request the final artifact path in every major step.
  - Trigger `/seal` after milestone completion to distill stable findings.
  - Validate:
    - `aeon.decision.explain` shows coherent rationale (`why`, `whyNot`, `rollbackPlan`).
    - `aeon.execution.lookup` includes the final `persisted` record + artifact refs.
- Multi-agent synthesis run (audit-first)
  - Split goals into mission/session/turn layers before execution.
  - During synthesis, use `aeon.intent.trace` to detect drift.
  - Use `aeon.ethics.evaluate` to verify value-order and trust status before high-impact outputs.
  - Final gate:
    - delivery persisted
    - intent drift acceptable
    - no unresolved guardrail block reason
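The three-condition final gate can be sketched as a single predicate. Field names and the drift threshold are illustrative; the conditions mirror the checklist above:

```typescript
// Final-gate sketch: all three checklist conditions must hold.
interface GateInput {
  deliveryState: string;               // from aeon.execution.lookup
  intentDrift: number;                 // from aeon.intent.trace (0 = no drift)
  guardrailBlockReason: string | null; // unresolved guardrail block reason, if any
}

function finalGate(input: GateInput, driftLimit = 0.3): boolean {
  return (
    input.deliveryState === "persisted" && // delivery persisted
    input.intentDrift <= driftLimit &&     // intent drift acceptable
    input.guardrailBlockReason === null    // no unresolved guardrail block
  );
}
```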
OpenAEON supports model-specialized delegation so the main agent can orchestrate and route sub-tasks to different sub-agent models.
Quick command path:
`/subagents spawn <agentId> <task> [--model <provider/model>] [--thinking <level>]`

Examples:

- `/subagents spawn research Gather source links and summarize risk tradeoffs --model anthropic/claude-sonnet-4-6 --thinking high`
- `/subagents spawn fast Draft first-pass bullet summary --model openai/gpt-5.2 --thinking low`
Config path (defaults + per-agent override):
```json5
{
  agents: {
    defaults: {
      subagents: {
        model: "openai/gpt-5.2",
        thinking: "medium",
        runTimeoutSeconds: 900,
        maxSpawnDepth: 2,
        maxChildrenPerAgent: 5,
      },
    },
    list: [
      {
        id: "main",
        subagents: {
          allowAgents: ["research", "fast"],
        },
      },
      {
        id: "research",
        model: { primary: "anthropic/claude-opus-4-6" },
        subagents: {
          model: "anthropic/claude-sonnet-4-6",
          thinking: "high",
        },
      },
      {
        id: "fast",
        model: { primary: "openai/gpt-5.2-mini" },
        subagents: { thinking: "low" },
      },
    ],
  },
}
```

Sub-agent model resolution priority (actual runtime order):
1. Explicit spawn override (`--model` / `sessions_spawn.model`)
2. Target agent `agents.list[].subagents.model`
3. Global `agents.defaults.subagents.model`
4. Target agent primary model (`agents.list[].model`)
5. Global primary model (`agents.defaults.model.primary`)
6. Runtime fallback default (`provider/model`)
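The six-step resolution order above can be sketched as a pure cascade. The config shapes are simplified from the example config; the function name and the placeholder fallback string are illustrative:

```typescript
// Sub-agent model resolution sketch: first defined value in priority order wins.
interface SubagentsCfg { model?: string }
interface AgentCfg { id: string; model?: { primary?: string }; subagents?: SubagentsCfg }
interface Cfg {
  agents: {
    defaults?: { model?: { primary?: string }; subagents?: SubagentsCfg };
    list: AgentCfg[];
  };
}

const RUNTIME_FALLBACK = "provider/model"; // runtime fallback default (step 6)

function resolveSubagentModel(cfg: Cfg, targetId: string, spawnOverride?: string): string {
  const target = cfg.agents.list.find((a) => a.id === targetId);
  return (
    spawnOverride ??                         // 1. explicit spawn override
    target?.subagents?.model ??              // 2. target agent subagents.model
    cfg.agents.defaults?.subagents?.model ?? // 3. global defaults.subagents.model
    target?.model?.primary ??                // 4. target agent primary model
    cfg.agents.defaults?.model?.primary ??   // 5. global primary model
    RUNTIME_FALLBACK                         // 6. runtime fallback default
  );
}
```

Note that with this order, a global `defaults.subagents.model` (step 3) intentionally beats the target agent's own primary model (step 4): sub-agent defaults win over primary-model fallbacks.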
Common pitfalls:
- `agentId is not allowed for sessions_spawn` → add it to `agents.list[].subagents.allowAgents` (or `["*"]`).
- Invalid `--thinking` values are rejected.
- If the model patch is rejected, the child run does not start; check the model allowlist/config and provider auth.
Users can delegate directly in conversation with `/subagents`:

`/subagents spawn <agentId> <task> [--model <provider/model>] [--thinking <level>]`

Typical flow:
- Spawn a research sub-agent:
  `/subagents spawn research Investigate SOXL trend, catalysts, and risk factors with source links --thinking high`
- Spawn a writing/report sub-agent:
  `/subagents spawn writer Produce the final report from research findings --model openai/gpt-5.2 --thinking low`
- Track and steer:

  ```
  /subagents list
  /subagents info 1
  /subagents send 1 Add an industry inventory-cycle section
  /subagents log 1 200
  ```

User-facing guidance sentence (for onboarding/help text):
- "If you want parallel delegation, use `/subagents spawn <agentId> <task> [--model ...]`."
One-liner to install OpenAEON globally:
```bash
# macOS / Linux / WSL2
curl -fsSL https://raw.githubusercontent.com/openaeon/OpenAEON/main/install.sh | bash
```

```powershell
# Windows (PowerShell)
iwr -useb https://raw.githubusercontent.com/openaeon/OpenAEON/main/install.ps1 | iex
```
Install from Source (Developer):
- Prerequisites:
  - Node.js v22.12.0+
  - pnpm (recommended)
- Clone & Build:

  ```bash
  git clone https://github.com/openaeon/OpenAEON.git
  cd OpenAEON
  pnpm install
  pnpm build
  ```

- Initialize:

  ```bash
  # This will guide you through AI engine and channel configuration
  pnpm openaeon onboard --install-daemon
  ```

- Verify:

  ```bash
  pnpm openaeon doctor
  ```
> [!TIP]
> `pnpm build` compiles the core runtime.
> If you need the standalone Web UI build artifacts, run `pnpm ui:build`.
```bash
# Start gateway (default local control UI on :18789)
pnpm openaeon gateway
```

If you installed OpenAEON globally, `openaeon gateway` also works.
Open:

```bash
pnpm ui:dev
```

Development commands:

```bash
# install
pnpm install

# type/build
pnpm build
pnpm tsgo

# lint/format
pnpm check
pnpm format:fix

# tests
pnpm test
pnpm test:coverage
pnpm test:ui
```

Uninstall OpenAEON
If you need to remove the background services and binary:
```bash
# macOS / Linux / WSL2
curl -fsSL https://raw.githubusercontent.com/openaeon/OpenAEON/main/uninstall.sh | bash
```

```powershell
# Windows (PowerShell)
iwr -useb https://raw.githubusercontent.com/openaeon/OpenAEON/main/uninstall.ps1 | iex
```

> [!NOTE]
> Configuration (`~/.openaeon.json`) and session logs are preserved by default.
OpenAEON supports deep synchronization with mobile nodes (Android/iOS).
- Install the OpenAEON Node app on your device.
- Approve the pairing request via CLI: `openaeon nodes approve <id>`
Explore the mathematical and philosophical foundations of the project.




