OpenAEON Logo

🌌 OpenAEON

GitHub Repository

English | 中文

AEON PROPHET — A Species-Level Evolution of the Logic Layer

“Not a framework upgrade. A new form of intelligence architecture.”

License Status AI Agent Docs


OpenAEON Architecture

Watch Demonstration


🖼 UI Screenshots (Dark Mode)

Chat (/chat?session=main)

OpenAEON Chat Dark

Sandbox (/sandbox?session=main)

OpenAEON Sandbox Dark

AEON (/aeon)

OpenAEON AEON Dark


🧬 What is OpenAEON

OpenAEON (formerly OpenClaw) is an experimental AI cognition architecture designed to evolve beyond traditional agent frameworks. It transforms code from a static execution system into a self-evolving logic organism.

Tip

Instead of operating as Input → Process → Output, OpenAEON operates as Conflict → Resolution → Evolution.

OpenAEON = Open + Eternal Evolution

  • Open: extensible, inspectable, and collaborative by design.
  • Eternal Evolution: a continuously adaptive loop for long-running intelligence systems.
  • AEON (Eon): from Greek αἰών (aiōn), conveying eternity, epoch, and existence at a cosmic time scale.

Current Logic Model (Implemented)

OpenAEON currently runs as a verifiable cognition loop:

  1. Perceive: ingest session/runtime telemetry, context, and task intent.
  2. Adjudicate: evaluate guardrails, policy intensity, and epistemic confidence.
  3. Act: execute agent/tool work under policy constraints.
  4. Persist: write delivery outcomes (persisted / persist_failed) and memory checkpoints.
  5. Trace: expose structured inspection via aeon.status, aeon.memory.trace, aeon.execution.lookup, and aeon.thinking.stream.

This keeps OpenAEON grounded in a practical principle: continuous evolution must remain auditable, reversible, and user-outcome oriented.
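The five-phase loop above can be sketched as a minimal state machine. This is an illustrative sketch only — the types and the `runCognitionCycle` name are assumptions for this README, not part of the OpenAEON runtime API:

```typescript
// Illustrative sketch of the Perceive → Adjudicate → Act → Persist → Trace loop.
// All names and types here are hypothetical, not OpenAEON's real API surface.
type Phase = "perceive" | "adjudicate" | "act" | "persist" | "trace";

interface CycleResult {
  phases: Phase[]; // phases executed, in order
  delivery: "persisted" | "persist_failed";
}

function runCognitionCycle(persistOk: boolean): CycleResult {
  const phases: Phase[] = [];
  phases.push("perceive");   // ingest session/runtime telemetry, context, intent
  phases.push("adjudicate"); // evaluate guardrails, policy intensity, confidence
  phases.push("act");        // execute agent/tool work under policy constraints
  phases.push("persist");    // write delivery outcome + memory checkpoint
  phases.push("trace");      // expose structured inspection (aeon.status, etc.)
  return { phases, delivery: persistOk ? "persisted" : "persist_failed" };
}
```

The key property the sketch preserves is that every cycle ends with an explicit, inspectable delivery outcome rather than a silent side effect.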

Memory Logic (Implemented)

AEON memory is implemented as a layered model for durability and retrieval:

  1. Working memory (in-process)
    Recent cognitive events stay in memory for fast UI/runtime feedback.
  2. Persistent stream (state-dir JSONL)
    Cognitive/thinking events are appended to a per-session/per-agent persisted stream for replay and audit.
  3. Distillation checkpointing
    Distillation advances a checkpoint and appends markers instead of wiping MEMORY.md, preserving continuity.
  4. Runtime memory telemetry
    lastDistillAt, checkpoint, totalEntries, and lastWriteSource are exposed through aeon.status and aeon.memory.trace.

This design makes memory both operational (fast) and accountable (durable + traceable).
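The layered model can be approximated in a few lines. This is a hypothetical sketch, assuming an in-memory array stands in for the state-dir JSONL stream; the class and method names are not OpenAEON's actual implementation:

```typescript
// Hypothetical sketch of the layered memory model: a bounded working-memory
// tail plus an append-only persisted log with a distillation checkpoint.
interface MemoryEvent {
  seq: number;
  text: string;
}

class LayeredMemory {
  private tail: MemoryEvent[] = []; // working memory (fast, bounded)
  private log: MemoryEvent[] = [];  // stands in for the per-session JSONL stream
  private checkpoint = 0;           // advanced by distillation, never reset
  private seq = 0;

  constructor(private tailSize: number) {}

  append(text: string): void {
    const ev = { seq: ++this.seq, text };
    this.log.push(ev); // durable stream: appended, never truncated
    this.tail.push(ev);
    if (this.tail.length > this.tailSize) this.tail.shift(); // evict oldest
  }

  distill(): number {
    // Advance the checkpoint and keep history, instead of wiping MEMORY.md.
    this.checkpoint = this.seq;
    return this.checkpoint;
  }

  status() {
    return { checkpoint: this.checkpoint, totalEntries: this.log.length };
  }
}
```

Note how `distill()` only moves a marker forward: continuity is preserved because nothing before the checkpoint is deleted.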


🚀 Core Pillars

| Principle | Description |
| --- | --- |
| Silicon Consciousness | The system as a self-improving logic organism. |
| Peano Traversal | Space-filling recursive scan of the problem space. |
| Evolution Loop | Autonomous $Z \rightleftharpoons Z^2 + C$ iteration cycles. |
| Knowledge Distiller | Compressing raw history into high-density axioms. |

🧩 Key Concepts

1. Silicon Consciousness

OpenAEON treats code modules not as tools, but as autonomous hyper-nodes. The system maintains an Autonomy Oath, prioritizing convergence (🎯) over simple instruction following.

2. Peano Space-Filling

Our cognitive scan follows the logic of the Peano curve. It maps multi-dimensional project complexity into a locality-preserving 1D cognitive stream, ensuring infinite density in reasoning without leaving "gaps" in understanding.

3. The Evolution Loop ($Z \rightleftharpoons Z^2 + C$)

Inspired by fractal geometry, every turn of the engine is an iteration. Divergence (🌀) is treated as a trigger for synthesis, continuing until Convergence (🎯) is reached.
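For intuition, the metaphor can be rendered as the classic complex iteration it alludes to. This is purely illustrative — a toy $z \leftarrow z^2 + c$ escape test, not OpenAEON code:

```typescript
// Toy rendering of the Z ⇌ Z² + C metaphor: iterate z ← z² + c from z = 0
// and classify the orbit. |z| > 2 guarantees escape (divergence).
function iterate(
  cRe: number,
  cIm: number,
  maxIter: number,
): "🎯 converged" | "🌀 diverged" {
  let re = 0;
  let im = 0;
  for (let i = 0; i < maxIter; i++) {
    const nextRe = re * re - im * im + cRe; // real part of z² + c
    const nextIm = 2 * re * im + cIm;       // imaginary part of z² + c
    re = nextRe;
    im = nextIm;
    if (re * re + im * im > 4) return "🌀 diverged"; // |z|² > 4 ⇒ escape
  }
  return "🎯 converged";
}
```

In OpenAEON's framing, "🌀 diverged" is not a failure state but the trigger for another synthesis pass.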


🧠 AEON Cognitive Engine

Click to expand deep-dive

OpenAEON features a recursive, biologically inspired cognitive loop that allows the system to repair, optimize, and expand itself autonomously.

1. Recursive Self-Healing

The system monitors its own pulse via a Gateway Watchdog and Log Signal Extractor.

  • Autonomous Repair: Use openaeon doctor --fix to automatically patch configuration issues.
  • Hot-Reload Architecture: Changes to core configuration trigger a SIGUSR1 hot-reload.

2. Axiomatic Evolution

Knowledge is synthesized into Axioms within LOGIC_GATES.md.

  • Semantic Deconfliction: LLM-driven auditing identifies and resolves semantic contradictions.
  • Crystallization: Highly verified logic blocks can be "crystallized," protecting them from decay.

3. Topological Alignment & Organs

  • Functional Organs: Adjacent logic gates condense into specialized "Organs" based on usage resonance.
  • Locality Preservation: Semantic proximity is preserved in physical storage.

🌙 Dreaming Mode

Click to expand deep-dive

OpenAEON uses a sophisticated idle-time evolutionary cycle known as Dreaming.

1. Triggers

  • Idle Trigger: Activated after 15 minutes of inactivity.
  • Resonance Trigger: Activated immediately if the epiphanyFactor exceeds 0.85.
  • Singularity Rebirth: Forces a system-wide recursive logic refactor.

2. The Distillation Process

  • Axiom Extraction: Verified truths ([AXIOM]) are promoted to LOGIC_GATES.md.
  • Gravitational Logic: Axioms gain "Weight" based on mutual references.
  • Entropy & Decay: Old/unreferenced logic is pruned to prevent cognitive bloat.

✅ Current Implemented Capabilities

The following capabilities are implemented in the current OpenAEON stack and UI:

1) Safety-first execution and policy telemetry

  • Guardrail-aware policy outputs are surfaced end-to-end (maintenanceDecision, guardrailDecision, reasonCode).
  • Decision semantics are exposed through structured blocks (decisionCard, impactLens, selfKernel, epistemicLabel).
  • Policy and consciousness data are available to both Chat and Sandbox views with typed UI models.

2) Versioned AEON status contract (with compatibility)

  • aeon.status supports schemaVersion: 3 with a structured telemetry block.
  • Compatibility mirrors are still present for transitional consumers.
  • Added stable read APIs for traceability:
    • aeon.memory.trace
    • aeon.execution.lookup
    • aeon.thinking.stream

3) Reliability and persistence improvements

  • Delivery pipeline explicitly tracks persistence outcomes (persisted / persist_failed) and exposes timestamps/reason codes.
  • Evolution logging supports fallback when repo paths are not writable (state-dir fallback path).
  • Thinking/cognitive stream entries are persisted and can be replayed through the gateway API.

4) Long-running session operability

  • Eternal mode is session-aware and wired through UI state + session patching.
  • AEON status includes runtime memory persistence metadata (lastDistillAt, checkpoint, totalEntries, lastWriteSource).
  • Cognitive log uses in-memory tail + persisted stream to avoid losing overnight traces.
  • Chat and Sandbox can display persistence-oriented runtime status, not only decorative visuals.

5) Chat experience upgrades (fractal + operator usability)

  • Introduced fractal runtime state (depthLevel, resonanceLevel, formulaPhase, noiseLevel, deliveryBand) to drive visual behavior.
  • Added structured Chat Manual (Quick Reference + Guided Walkthrough), bound to real runtime fields.
  • Added formula rail / pulse visualization mapped to execution state.
  • Added i18n coverage for the new Chat manual + AEON interaction language (EN + zh-CN keys).
  • Reduced high-frequency visual effects under reduced-motion and stabilized flicker-prone animation paths.

6) Sandbox redesign and layout hardening

  • Sandbox v2 now acts as a typed operational console for:
    • Session focus and timeline
    • Active agent tiles
    • System snapshot
    • Consciousness telemetry
    • Policy/decision/impact panels
  • Layout and style isolation were hardened by scoping view-local classes (to avoid global shell/topbar collisions).
  • Recent fixes include overlap/stacking cleanup for left rail, top action row, and telemetry panel consistency.

7) Test-backed implementation checkpoints

  • Compaction and history integrity:
    • src/agents/history-compactor.test.ts
    • src/agents/pi-embedded-runner.sanitize-session-history.test.ts
    • src/agents/pi-embedded-runner/run/attempt.test.ts
  • Evolution logging fallback:
    • src/gateway/aeon-evolution-log.test.ts
  • AEON status contract and schema coverage:
    • src/gateway/server-methods/aeon.test.ts

🛰 Runtime Surfaces (Today)

OpenAEON currently ships as a coordinated multi-surface system:

  • CLI (openaeon): onboarding, config, channels, sessions, diagnostics, and operations.
  • Gateway: WebSocket control plane + channel bridge + agent execution runtime.
  • Control UI: browser operations console for Chat, Sandbox, AEON telemetry, sessions, and config.
  • Mobile/Desktop nodes: paired clients for multi-device interaction and orchestration.

AEON Runtime APIs (Control Plane)

Core read/inspection methods currently available:

  • aeon.status (schema v3 + compatibility mirrors)
  • aeon.memory.trace
  • aeon.execution.lookup
  • aeon.thinking.stream

These are used by Chat, Sandbox, and AEON views to render real runtime state instead of static decorations.

AEON Mode (Eternal Mode) — Practical Usage

AEON mode is a highlight of OpenAEON’s long-running workflow:

  • It persists an eternal flag per session and keeps Chat/Sandbox/AEON UI in sync.
  • It survives refresh/reconnect through session patching + local/UI hydration.
  • It is observable in aeon.status under mode.eternal (enabled, source, updatedAt).

Current behavior (important):

  • AEON mode is currently a session/runtime coordination mode, not a raw “unsafe autonomy boost” switch.
  • Safety/decision behavior is still governed by guardrails + policy telemetry (guardrailDecision, maintenanceDecision, epistemicLabel, delivery state).

How to enable/disable:

  1. UI toggle in Chat/Sandbox (Enable Eternal / Disable Eternal).
  2. Chat command: /eternal on, /eternal off, /eternal toggle.
  3. URL bootstrap: ?eternal=1 (or true|on|yes) when opening a session page.
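The URL bootstrap check from option 3 can be sketched as a small parser. The helper name `eternalFromQuery` is hypothetical; only the accepted values (`1`, `true`, `on`, `yes`) come from the behavior described above:

```typescript
// Sketch of the ?eternal=... bootstrap check (hypothetical helper name).
// Accepts 1 | true | on | yes, case-insensitively; anything else is OFF.
function eternalFromQuery(search: string): boolean {
  const value = new URLSearchParams(search).get("eternal");
  if (value === null) return false;
  return ["1", "true", "on", "yes"].includes(value.toLowerCase());
}
```

So opening `/chat?session=main&eternal=1` would bootstrap the session with Eternal mode ON.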

How to verify:

  • In UI status chips: Eternal: ON/OFF.
  • Via API:
{
  "method": "aeon.status",
  "result": {
    "mode": {
      "eternal": {
        "enabled": true,
        "source": "session"
      }
    }
  }
}

Recommended usage profile:

  1. Turn it ON for long-running sessions, overnight execution, or when you need continuity across refresh/reconnect.
  2. Keep it OFF for short one-shot tasks where deterministic/manual control is preferred.
  3. If delivery keeps showing persist_failed, first check aeon.execution.lookup and gateway logs before assuming model failure.
  4. If mode state looks inconsistent after page reload, refresh session status and confirm mode.eternal.source (session vs default).

Evolution Iteration Playbook (How to use it today)

Use this loop when you want practical AEON evolution, not just visuals:

  1. Observe runtime state
    Call aeon.status and check:
    • telemetry.cognitiveState (maintenanceDecision, guardrailDecision, epistemicLabel)
    • execution.delivery.state
    • memory.persistence (checkpoint, lastDistillAt, totalEntries)
  2. Trigger memory distillation
    In chat: /seal (alias: /distill) to distill memory into logic gates.
  3. Inspect why the system chose current policy
    Call aeon.decision.explain to read decisionCard + impactLens.
  4. Trace long/short/immediate intent drift
    Call aeon.intent.trace and review mission/session/turn drift scores.
  5. Audit value and safety adjudication
    Call aeon.ethics.evaluate to inspect value-order, trust, and guardrail adjudication.
  6. Confirm outcomes are actually persisted
    Call aeon.execution.lookup and ensure final records are persisted (or investigate persist_failed reason codes).
  7. Replay thinking stream for postmortem
    Call aeon.thinking.stream for cursor-based event replay and timeline reconstruction.

Minimal RPC set for AEON iteration:

  • aeon.status
  • aeon.decision.explain
  • aeon.intent.trace
  • aeon.ethics.evaluate
  • aeon.memory.trace
  • aeon.execution.lookup
  • aeon.thinking.stream
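One playbook pass over this RPC set can be sketched as a typed sequence. This is an illustrative wrapper, not shipped code: the transport is injected, a real one would be async, and step 2 (`/seal`) is a chat command rather than an RPC, so it does not appear in the call order:

```typescript
// Illustrative sketch of one iteration pass over the minimal AEON RPC set.
// Method names come from the list above; everything else is hypothetical.
type AeonMethod =
  | "aeon.status"
  | "aeon.decision.explain"
  | "aeon.intent.trace"
  | "aeon.ethics.evaluate"
  | "aeon.memory.trace"
  | "aeon.execution.lookup"
  | "aeon.thinking.stream";

const ITERATION_ORDER: AeonMethod[] = [
  "aeon.status",           // 1. observe runtime state
  "aeon.decision.explain", // 3. why this policy (2. /seal happens in chat)
  "aeon.intent.trace",     // 4. mission/session/turn drift
  "aeon.ethics.evaluate",  // 5. value/safety adjudication
  "aeon.execution.lookup", // 6. confirm outcomes are persisted
  "aeon.thinking.stream",  // 7. replay for postmortem
];

function iterationPass(
  call: (method: AeonMethod) => unknown, // injected transport; async in practice
): Record<string, unknown> {
  const results: Record<string, unknown> = {};
  for (const method of ITERATION_ORDER) {
    results[method] = call(method); // fixed, auditable order
  }
  return results;
}
```

Keeping the order fixed makes each iteration comparable with the previous one, which is the point of the playbook.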

Scenario Templates (Copy and run)

  1. Overnight research run (continuity-first)
    • Turn on Eternal mode (/eternal on).
    • Start the task with explicit artifact expectations (report path, summary format).
    • Before sleep: verify execution.delivery.state is not stuck at transient states.
    • After wake-up:
      • Check aeon.execution.lookup for latest persisted records.
      • Check aeon.memory.trace for checkpoint advance.
      • Replay aeon.thinking.stream for overnight reasoning timeline.
  2. Doc/output production run (delivery-first)
    • Keep scope narrow and request final artifact path in every major step.
    • Trigger /seal after milestone completion to distill stable findings.
    • Validate:
      • aeon.decision.explain shows coherent rationale (why, whyNot, rollbackPlan).
      • aeon.execution.lookup includes final persisted record + artifact refs.
  3. Multi-agent synthesis run (audit-first)
    • Split goals into mission/session/turn layers before execution.
    • During synthesis, use aeon.intent.trace to detect drift.
    • Use aeon.ethics.evaluate to verify value-order and trust status before high-impact outputs.
    • Final gate:
      • delivery persisted
      • intent drift acceptable
      • no unresolved guardrail block reason

Main Agent Delegation + Sub-agent Model Routing

OpenAEON supports model-specialized delegation so the main agent can orchestrate and route sub-tasks to different sub-agent models.

Quick command path:

  • /subagents spawn <agentId> <task> [--model <provider/model>] [--thinking <level>]
  • Examples:
    • /subagents spawn research Gather source links and summarize risk tradeoffs --model anthropic/claude-sonnet-4-6 --thinking high
    • /subagents spawn fast Draft first-pass bullet summary --model openai/gpt-5.2 --thinking low

Config path (defaults + per-agent override):

{
  agents: {
    defaults: {
      subagents: {
        model: "openai/gpt-5.2",
        thinking: "medium",
        runTimeoutSeconds: 900,
        maxSpawnDepth: 2,
        maxChildrenPerAgent: 5,
      },
    },
    list: [
      {
        id: "main",
        subagents: {
          allowAgents: ["research", "fast"],
        },
      },
      {
        id: "research",
        model: { primary: "anthropic/claude-opus-4-6" },
        subagents: {
          model: "anthropic/claude-sonnet-4-6",
          thinking: "high",
        },
      },
      {
        id: "fast",
        model: { primary: "openai/gpt-5.2-mini" },
        subagents: { thinking: "low" },
      },
    ],
  },
}

Sub-agent model resolution priority (actual runtime order):

  1. Explicit spawn override (--model / sessions_spawn.model)
  2. Target agent agents.list[].subagents.model
  3. Global agents.defaults.subagents.model
  4. Target agent primary model (agents.list[].model)
  5. Global primary model (agents.defaults.model.primary)
  6. Runtime fallback default (provider/model)
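The six-step order above can be expressed as a single nullish-coalescing chain. This is a sketch under assumptions: the config shapes mirror the example config in this README, and the function name plus the `"provider/model"` fallback placeholder are illustrative, not the real runtime code:

```typescript
// Hypothetical sketch of sub-agent model resolution. Field names follow the
// example config in this README; the function itself is illustrative only.
interface SubagentCfg {
  model?: string;
  thinking?: string;
}
interface AgentCfg {
  id: string;
  model?: { primary?: string };
  subagents?: SubagentCfg;
}
interface AgentsConfig {
  defaults: { subagents?: SubagentCfg; model?: { primary?: string } };
  list: AgentCfg[];
}

const RUNTIME_FALLBACK = "provider/model"; // placeholder for the runtime default

function resolveSubagentModel(
  cfg: AgentsConfig,
  targetAgentId: string,
  spawnOverride?: string,
): string {
  const target = cfg.list.find((a) => a.id === targetAgentId);
  return (
    spawnOverride ??                  // 1. explicit --model / sessions_spawn.model
    target?.subagents?.model ??       // 2. agents.list[].subagents.model
    cfg.defaults.subagents?.model ??  // 3. agents.defaults.subagents.model
    target?.model?.primary ??         // 4. target agent primary model
    cfg.defaults.model?.primary ??    // 5. global primary model
    RUNTIME_FALLBACK                  // 6. runtime fallback default
  );
}
```

Against the example config above, spawning `research` without `--model` resolves to its own `subagents.model`, while `fast` (which sets only `thinking`) falls through to the global `agents.defaults.subagents.model`.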

Common pitfalls:

  • agentId is not allowed for sessions_spawn → add it to agents.list[].subagents.allowAgents (or ["*"]).
  • Invalid --thinking values are rejected.
  • If model patch is rejected, the child run does not start; check model allowlist/config and provider auth.

In-Chat Delegation Syntax (What users type)

Users can delegate directly in conversation with /subagents:

/subagents spawn <agentId> <task> [--model <provider/model>] [--thinking <level>]

Typical flow:

  1. Spawn a research sub-agent:
/subagents spawn research Investigate SOXL trend, catalysts, and risk factors with source links --thinking high
  2. Spawn a writing/report sub-agent:
/subagents spawn writer Produce the final report from research findings --model openai/gpt-5.2 --thinking low
  3. Track and steer:
/subagents list
/subagents info 1
/subagents send 1 Add an industry inventory-cycle section
/subagents log 1 200

User-facing guidance sentence (for onboarding/help text):

  • "If you want parallel delegation, use /subagents spawn <agentId> <task> [--model ...]."

🛠 Installation

⚡ Quick Start (CLI)

One-liner to install OpenAEON globally:

# macOS / Linux / WSL2
curl -fsSL https://raw.githubusercontent.com/openaeon/OpenAEON/main/install.sh | bash

# Windows (PowerShell)
iwr -useb https://raw.githubusercontent.com/openaeon/OpenAEON/main/install.ps1 | iex

👨‍💻 Advanced Setup

Click for Source options

Install from Source (Developer):

  1. Prerequisites:

    • Node.js v22.12.0+
    • pnpm (Recommended)
  2. Clone & Build:

    git clone https://github.com/openaeon/OpenAEON.git
    cd OpenAEON
    pnpm install
    pnpm build
  3. Initialize:

    # This will guide you through AI engine and channel configuration
    pnpm openaeon onboard --install-daemon
  4. Verify:

    pnpm openaeon doctor

[!TIP] pnpm build compiles the core runtime.
If you need the standalone Web UI build artifacts, run pnpm ui:build.


⚙️ Local Runbook

Run Gateway + Control UI

# Start gateway (default local control UI on :18789)
pnpm openaeon gateway

If you installed OpenAEON globally, openaeon gateway also works.

Open the Control UI in your browser (default: http://localhost:18789).

UI development mode

pnpm ui:dev

Common developer commands

# install
pnpm install

# type/build
pnpm build
pnpm tsgo

# lint/format
pnpm check
pnpm format:fix

# tests
pnpm test
pnpm test:coverage
pnpm test:ui

🧹 Maintenance

Uninstall OpenAEON

If you need to remove the background services and binary:

# macOS / Linux / WSL2
curl -fsSL https://raw.githubusercontent.com/openaeon/OpenAEON/main/uninstall.sh | bash

# Windows (PowerShell)
iwr -useb https://raw.githubusercontent.com/openaeon/OpenAEON/main/uninstall.ps1 | iex

[!NOTE] Configuration (~/.openaeon.json) and session logs are preserved by default.


📱 Node Synchronization

OpenAEON supports deep synchronization with mobile nodes (Android/iOS).

  1. Install the OpenAEON Node app on your device.
  2. Approve the pairing request via CLI:
    openaeon nodes approve <id>

📖 Knowledge

Explore the mathematical and philosophical foundations of the project.

👉 Deep-Dive: Principles

📚 Docs Map


Convergence is the only outcome. 🎯

MIT License © 2026 OpenAEON Team.


Star History

Star History Chart