62 changes: 62 additions & 0 deletions crates/synapse-core/src/logic/metabolism.rs
@@ -4,6 +4,7 @@ use crate::{MemoryNode, NodeType};
use std::sync::Arc;
use chrono::Utc;
use uuid::Uuid;
use crate::logic::system3::{NeuroSymbolicEngine, ReasoningGraph, LogicNode, LogicNodeType};

/// Metabolism: The process of digesting short-term buffer into long-term memory.
///
@@ -20,6 +21,7 @@ pub struct Metabolism {
holographic: Arc<dyn HolographicPort>,
sanitizer: Arc<dyn SanitizerPort>,
threshold: usize,
neuro_symbolic: Option<Arc<NeuroSymbolicEngine>>,
}

impl Metabolism {
@@ -40,9 +42,16 @@ impl Metabolism {
holographic,
sanitizer,
threshold: 10, // Default threshold
neuro_symbolic: None,
}
}

/// Enable System 3 validation using the Neuro-Symbolic Engine
pub fn with_system3(mut self, engine: Arc<NeuroSymbolicEngine>) -> Self {
self.neuro_symbolic = Some(engine);
self
}

/// Push a new interaction to the short-term buffer.
pub async fn push_interaction(&self, interaction: crate::Interaction) -> Result<()> {
self.buffer.push(interaction).await
@@ -98,6 +107,59 @@ impl Metabolism {
// 4. Summarize via LLM
let summary = self.llm.summarize(&combined_text).await?;

// System 3 Validation: Ensure the summary reasoning does not contain hallucinations
if let Some(engine) = &self.neuro_symbolic {
// Very simplified: Generate a mock Reasoning Graph for the summary logic
// In a real scenario, the LLM should output the ReasoningGraph directly.
let mut graph = ReasoningGraph::new();
graph.add_node(LogicNode::new(
"1".to_string(),
LogicNodeType::Conclusion,
summary.clone(),
Some(summary.clone()) // Passing the summary directly as a translation to verify
));
Comment on lines +115 to +120

security-high

The Metabolism::digest function directly uses raw LLM output (summary) as the formal_translation for a LogicNode. This formal_translation is then passed to NeuroSymbolicEngine::evaluate_graph and SymbolicVerifierPort::verify, which is intended for deterministic code execution. This creates a critical vulnerability: prompt injection leading to remote code execution (RCE), since malicious LLM output could be executed. The formal_translation should be a formal, verifiable statement, not the natural-language summary, to keep verification deterministic and prevent this exploit.
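To make the reviewer's point concrete, here is a minimal sketch contrasting the unsafe pattern in the diff with the intended split between prose and a machine-checkable statement. The struct is a local stand-in for illustration only, not the real system3::graph definition, and the example strings are hypothetical:

```rust
// Stand-in for the real LogicNode; only the two relevant fields are shown.
struct LogicNode {
    proposition: String,                // natural language, for humans
    formal_translation: Option<String>, // formal statement, for the verifier
}

fn build_nodes(summary: String) -> (LogicNode, LogicNode) {
    // What the diff currently does: raw LLM output doubles as the
    // "formal" input that a symbolic backend may eventually execute.
    let unsafe_node = LogicNode {
        proposition: summary.clone(),
        formal_translation: Some(summary),
    };
    // What the reviewer asks for: a constrained, verifiable statement
    // produced separately from the prose summary.
    let safe_node = LogicNode {
        proposition: "Five doubled is ten.".to_string(),
        formal_translation: Some("assert 5 * 2 == 10".to_string()),
    };
    (unsafe_node, safe_node)
}
```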


let is_sound = engine.evaluate_graph(&mut graph).await.unwrap_or(false);

high

The use of unwrap_or(false) on the engine.evaluate_graph call can mask underlying errors in the NeuroSymbolicEngine. If the engine itself encounters a critical error (e.g., an I/O error with a symbolic backend), this will silently treat it as a validation failure rather than a system error. It would be more robust to log the error explicitly, or to propagate it if it represents a critical failure of the engine. A more explicit handling:

            let is_sound = match engine.evaluate_graph(&mut graph).await {
                Ok(sound) => sound,
                Err(e) => {
                    // Log the error explicitly for debugging
                    eprintln!("NeuroSymbolicEngine error during evaluation: {:?}", e);
                    false // Treat engine errors as validation failures for now
                }
            };

if !is_sound {
// For MVP: Return error or silently drop/flag the invalid memory
// We'll log and skip digestion if the logic is flawed
println!("System 3 Validation Failed: Hallucination detected in memory digestion.");
⚠️ Potential issue | 🟡 Minor

Use structured logging instead of println!.

For production visibility and consistency with typical Rust service patterns, replace println! with the tracing or log crate macros (e.g., tracing::warn!).

♻️ Suggested fix
-                println!("System 3 Validation Failed: Hallucination detected in memory digestion.");
+                tracing::warn!("System 3 Validation Failed: Hallucination detected in memory digestion.");

return Ok(0);
}
Comment on lines +123 to +128

security-medium

If System 3 validation fails in Metabolism::digest, interactions are silently lost: they are popped from the buffer but never stored. This can lead to a denial of service (DoS), where an attacker causes specific interactions to be dropped. Additionally, println! is unsuitable for logging validation failures in production; a proper logging framework such as log::warn! should be used so these critical events are captured and monitored.

Comment on lines +123 to +128

⚠️ Potential issue | 🔴 Critical

Do not return Ok(0) on System 3 rejection after consuming the batch.

By this point, the batch was already popped (Line 88). Returning a success-like Ok(0) on Line 127 silently drops interactions and is indistinguishable from "below threshold" to callers.

🔧 Proposed fix
             let is_sound = engine.evaluate_graph(&mut graph).await.unwrap_or(false);
             if !is_sound {
-                // For MVP: Return error or silently drop/flag the invalid memory
-                // We'll log and skip digestion if the logic is flawed
-                println!("System 3 Validation Failed: Hallucination detected in memory digestion.");
-                return Ok(0);
+                // Requeue consumed interactions so data is not lost on validation failure.
+                for interaction in interactions {
+                    self.buffer.push(interaction).await?;
+                }
+                return Err(crate::error::Error::System(
+                    "System 3 validation failed during digestion".to_string(),
+                ));
             }


// 5. Generate embedding
let embedding = self.embedder.embed(&summary).await?;

// 6. Diffuse validated logic into Holographic Memory
// Convert the discrete reasoning into a continuous latent representation (Diffusion / Entropy Injection)
let serialized_graph = serde_json::to_string(&graph).unwrap_or_default();
let neurogram = match self.holographic.diffuse_logic(&serialized_graph).await {
Ok(data) => Some(data),
Err(_) => None,
};

// 7. Store in Long-Term Memory
let memory = MemoryNode {
id: Uuid::new_v4().to_string(),
content: summary,
layer: 0,
node_type: crate::NodeType::Summary,
created_at: Utc::now().timestamp(),
updated_at: Utc::now().timestamp(),
embedding,
metadata: std::collections::HashMap::new(),
namespace: "default".to_string(),
source: "metabolism".to_string(),
holographic_data: neurogram,
loop_level: 0,
};

self.memory.store(memory).await?;
Comment on lines +130 to +157

⚠️ Potential issue | 🟠 Major

System3 branch skips engram upsert, causing feature regression.

with_engram() has no effect when System3 is enabled because Line 130-Line 157 bypasses the engram step that exists in the fallback path.

🔧 Proposed parity fix
             // 7. Store in Long-Term Memory
+            if let Some(engram) = &self.engram {
+                let ngrams = Self::extract_ngrams(&combined_text, 2, 3);
+                if !ngrams.is_empty() {
+                    engram.upsert_ngrams(&ngrams, &summary).await?;
+                }
+            }
+
             let memory = MemoryNode {
                 id: Uuid::new_v4().to_string(),
                 content: summary,

self.buffer.clear().await?;
return Ok(count);
}
Comment on lines +110 to +160

⚠️ Potential issue | 🟠 Major

Stub validation bypasses the core System 3 intent.

The current implementation passes the raw NL summary as formal_translation (line 119), but per graph.rs, this field is meant for "Python code, Lean4 statement" - actual formal logic. The MockPythonVerifier only checks for string keywords ("invalid", "False"), so this validates nothing meaningful.

As the comment at line 113 notes, the LLM should output the ReasoningGraph directly. Until that's implemented, this validation provides false confidence. Consider one of:

  1. Removing System 3 integration until proper graph generation exists
  2. Adding clear documentation/TODO that this is a no-op stub
  3. Gating this behind a feature flag to make the MVP nature explicit
🔧 Suggested improvement: Add explicit stub marker
         // System 3 Validation: Ensure the summary reasoning does not contain hallucinations
         if let Some(engine) = &self.neuro_symbolic {
-            // Very simplified: Generate a mock Reasoning Graph for the summary logic
-            // In a real scenario, the LLM should output the ReasoningGraph directly.
+            // TODO(system3-mvp): This is a stub implementation that does NOT provide real validation.
+            // Real implementation requires the LLM to output a structured ReasoningGraph.
+            // Currently, this only catches summaries containing "invalid" or "False" keywords.
+            #[cfg(debug_assertions)]
+            tracing::warn!("System 3 validation is using stub implementation - no real verification");
+            
             let mut graph = ReasoningGraph::new();

Comment on lines +157 to +160
⚠️ Potential issue | 🔴 Critical

System3 success path clears undigested items and over-reports digest count.

Line 158 wipes the whole buffer, but only one batch was digested. Line 159 returns the pre-pop buffer length instead of actual processed batch size.

🔧 Proposed fix
             self.memory.store(memory).await?;
-            self.buffer.clear().await?;
-            return Ok(count);
+            return Ok(interactions.len());


// Fallback for non-System3 path (Continuous Mode)
// 5. Generate embedding
let embedding = self.embedder.embed(&summary).await?;

1 change: 1 addition & 0 deletions crates/synapse-core/src/logic/mod.rs
@@ -6,4 +6,5 @@ pub mod translator;
pub mod dreaming;
pub mod hirag;
pub mod sanitizer;
pub mod system3;
// pub mod reranker;
136 changes: 136 additions & 0 deletions crates/synapse-core/src/logic/system3/engine.rs
@@ -0,0 +1,136 @@
//! The Neuro-Symbolic Engine
//!
//! Takes a Reasoning Graph generated by the continuous language model
//! and deterministically verifies each node using a symbolic backend
//! (e.g., Python math parser, Prover9 logic solver, etc.).
//!
//! If a node is invalid (e.g., result-oriented hallucination), it is marked
//! Invalid, and the explicit reasoning graph provides targeted feedback
//! for the LLM to learn or correct itself using graph-based reward thinking.

use crate::error::Result;
use crate::logic::system3::graph::{LogicNodeType, ReasoningGraph, VerificationStatus};
use async_trait::async_trait;
use std::sync::Arc;

/// Verifier Interface (A2A-compatible in future via `rust-adk` tools).
#[async_trait]
pub trait SymbolicVerifierPort: Send + Sync {
/// Deterministically verify a formal statement.
/// Returns: (Is Valid, Reward Signal [0.0 - 1.0], Error Message if any)
async fn verify(&self, translation: &str) -> Result<(bool, f32, Option<String>)>;
}

/// A local mocked verifier for Python expressions/Math.
/// In production, this would execute actual code or query a Prover9 solver.
pub struct MockPythonVerifier;

#[async_trait]
impl SymbolicVerifierPort for MockPythonVerifier {
async fn verify(&self, translation: &str) -> Result<(bool, f32, Option<String>)> {
// Very basic mock:
if translation.contains("invalid") || translation.contains("False") {
return Ok((false, 0.0, Some("Failed validation check".to_string())));
}
Comment on lines +32 to +34

critical

The MockPythonVerifier's verify method is very simplistic. While it is a mock, it is crucial that any production implementation of SymbolicVerifierPort handle formal statements robustly and securely. If it executes external code (e.g., a Python eval), it introduces significant security risks such as arbitrary code execution, which must be mitigated with sandboxing or strict input validation.
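One way to honor that requirement without executing any code at all is a verifier that accepts only a closed grammar. The sketch below is hypothetical (not part of this PR): it evaluates statements of the exact form `assert <int> <op> <int>` arithmetically and rejects everything else, so no attacker-controlled string ever reaches an interpreter:

```rust
/// Hypothetical restricted verifier: parses "assert <int> <op> <int>",
/// evaluates the comparison arithmetically, and returns None for any
/// input outside that grammar. No code execution ever occurs.
fn verify_restricted(stmt: &str) -> Option<bool> {
    let rest = stmt.strip_prefix("assert ")?;
    // Two-character operators must be checked before their one-character prefixes.
    for op in ["==", "!=", "<=", ">=", "<", ">"] {
        if let Some((lhs, rhs)) = rest.split_once(op) {
            let a: i64 = lhs.trim().parse().ok()?;
            let b: i64 = rhs.trim().parse().ok()?;
            return Some(match op {
                "==" => a == b,
                "!=" => a != b,
                "<=" => a <= b,
                ">=" => a >= b,
                "<" => a < b,
                ">" => a > b,
                _ => unreachable!(),
            });
        }
    }
    None
}
```

A real backend would parse a richer expression language, but the principle stands: the verifier's input grammar, not the LLM, decides what can run.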

// Simulating objective, unfalsifiable code-based reward signal (ImpRIF)
Ok((true, 1.0, None))
}
}

pub struct NeuroSymbolicEngine {
verifier: Arc<dyn SymbolicVerifierPort>,
}

impl NeuroSymbolicEngine {
pub fn new(verifier: Arc<dyn SymbolicVerifierPort>) -> Self {
Self { verifier }
}

/// Evaluates the entire reasoning graph step-by-step.
/// Returns true if all nodes are logically sound.
pub async fn evaluate_graph(&self, graph: &mut ReasoningGraph) -> Result<bool> {
if !graph.is_acyclic() {
// Cannot evaluate a cyclic graph
return Ok(false);
}

// In a real DAG, we would topological sort. For now, we iterate.
// A graph is considered sound if all Operations and Conclusions are valid.
Comment on lines +57 to +58

medium

The evaluate_graph method iterates through nodes without an explicit topological sort, as the comment notes. While premises are assumed valid, for Operation and Conclusion nodes the iteration order might not respect dependencies: if an Operation node depends on another Operation node that has not yet been evaluated, its verification may be incomplete or incorrect. Implementing a topological sort would ensure correct evaluation order and a more accurate overall_soundness.

let mut overall_soundness = true;

for node in graph.nodes.values_mut() {
if node.node_type == LogicNodeType::Knowledge || node.node_type == LogicNodeType::Requirement {
// Premises are assumed valid or validated differently.
node.status = VerificationStatus::Valid;
node.reward = 1.0;
continue;
}

if let Some(translation) = &node.formal_translation {

medium

The error message returned by self.verifier.verify(translation).await? is currently ignored (_err). It could contain valuable diagnostic information from the symbolic backend, useful for debugging or for providing targeted feedback to the LLM to improve its reasoning:

                let (is_valid, reward, err_msg) = self.verifier.verify(translation).await?;

let (is_valid, reward, _err) = self.verifier.verify(translation).await?;
if is_valid {
node.status = VerificationStatus::Valid;
node.reward = reward;
} else {
node.status = VerificationStatus::Invalid;
node.reward = 0.0;
overall_soundness = false;
}
} else {
// If there's no formal translation for an operation, we can't objectively verify it.
node.status = VerificationStatus::Invalid;
node.reward = 0.0;
overall_soundness = false;
}
}
Comment on lines +57 to +85
⚠️ Potential issue | 🟠 Major

HashMap iteration order is non-deterministic; DAG evaluation may produce inconsistent results.

Line 57's comment acknowledges the need for topological sort, but the current graph.nodes.values_mut() iteration uses HashMap's arbitrary order. For nodes with dependencies, this could evaluate a dependent node before its prerequisites, leading to inconsistent overall_soundness outcomes across runs.

Consider implementing topological sort or using an IndexMap/BTreeMap to ensure deterministic evaluation order.

🔧 Suggested approach
     pub async fn evaluate_graph(&self, graph: &mut ReasoningGraph) -> Result<bool> {
         if !graph.is_acyclic() {
             // Cannot evaluate a cyclic graph
             return Ok(false);
         }

-        // In a real DAG, we would topological sort. For now, we iterate.
+        // Topologically sort nodes to ensure dependencies are evaluated first
+        let sorted_ids = graph.topological_sort()?;
         // A graph is considered sound if all Operations and Conclusions are valid.
         let mut overall_soundness = true;

-        for node in graph.nodes.values_mut() {
+        for node_id in sorted_ids {
+            let node = graph.nodes.get_mut(&node_id).unwrap();
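As a concrete starting point for the suggested `topological_sort`, here is a self-contained sketch of Kahn's algorithm over a plain dependency map (assuming every dependency id is itself a key in the map, and edges point from a node to the ids it depends on — a dangling or cyclic graph yields None):

```rust
use std::collections::{HashMap, VecDeque};

/// Kahn's algorithm over a map of node id -> ids it depends on.
/// Returns None on a cycle (or dangling id); otherwise an order in
/// which every node appears after all of its dependencies.
fn topo_order(deps: &HashMap<String, Vec<String>>) -> Option<Vec<String>> {
    // Number of unresolved dependencies per node.
    let mut in_degree: HashMap<&str, usize> =
        deps.keys().map(|k| (k.as_str(), 0)).collect();
    // Reverse edges: dependency -> nodes that depend on it.
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (id, ds) in deps {
        for d in ds {
            *in_degree.get_mut(id.as_str())? += 1;
            dependents.entry(d.as_str()).or_default().push(id.as_str());
        }
    }
    // Seed with nodes that have no dependencies.
    let mut queue: VecDeque<&str> = in_degree
        .iter()
        .filter(|&(_, &deg)| deg == 0)
        .map(|(&id, _)| id)
        .collect();
    let mut order = Vec::new();
    while let Some(id) = queue.pop_front() {
        order.push(id.to_string());
        for &dep_user in dependents.get(id).map(|v| v.as_slice()).unwrap_or(&[]) {
            let deg = in_degree.get_mut(dep_user)?;
            *deg -= 1;
            if *deg == 0 {
                queue.push_back(dep_user);
            }
        }
    }
    // Anything left unprocessed sits on a cycle.
    if order.len() == deps.len() { Some(order) } else { None }
}
```

A ReasoningGraph method could delegate to this over `self.nodes`, then evaluate nodes in the returned order; switching `nodes` to an IndexMap would additionally make tie-breaking deterministic.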


Ok(overall_soundness)
}
}

#[cfg(test)]
mod tests {
use super::*;
use crate::logic::system3::graph::{LogicNode, LogicNodeType};

#[tokio::test]
async fn test_valid_reasoning_graph() {
let mut graph = ReasoningGraph::new();

let mut node1 = LogicNode::new("1".to_string(), LogicNodeType::Knowledge, "x = 5".to_string(), Some("x = 5".to_string()));
let mut node2 = LogicNode::new("2".to_string(), LogicNodeType::Operation, "x * 2 = 10".to_string(), Some("assert x * 2 == 10".to_string()));
node2.add_dependency("1".to_string());

graph.add_node(node1);
graph.add_node(node2);

let verifier = Arc::new(MockPythonVerifier);
let engine = NeuroSymbolicEngine::new(verifier);

let is_sound = engine.evaluate_graph(&mut graph).await.unwrap();
assert!(is_sound);
}

#[tokio::test]
async fn test_invalid_reasoning_graph_hallucination() {
// Testing "Result-Oriented Hallucination" prevention.
let mut graph = ReasoningGraph::new();

let mut node1 = LogicNode::new("1".to_string(), LogicNodeType::Knowledge, "x = 5".to_string(), Some("x = 5".to_string()));
let mut node2 = LogicNode::new("2".to_string(), LogicNodeType::Operation, "x * 2 = 12".to_string(), Some("invalid assert x * 2 == 12".to_string()));
node2.add_dependency("1".to_string());

graph.add_node(node1);
graph.add_node(node2);

let verifier = Arc::new(MockPythonVerifier);
let engine = NeuroSymbolicEngine::new(verifier);

let is_sound = engine.evaluate_graph(&mut graph).await.unwrap();
assert!(!is_sound);

let node2_eval = graph.get_node("2").unwrap();
assert_eq!(node2_eval.status, VerificationStatus::Invalid);
assert_eq!(node2_eval.reward, 0.0);
}
}
149 changes: 149 additions & 0 deletions crates/synapse-core/src/logic/system3/graph.rs
Original file line number Diff line number Diff line change
@@ -0,0 +1,149 @@
//! # System 3 AI - Reasoning Graph & Neuro-Symbolic Engine
//!
//! This module implements the "System 3 AI" paradigm described by the `LogicGraph` and `ImpRIF` models.
//!
//! ## Overview
//! Current LLMs (System 1/2) suffer from "Result-Oriented Hallucination". They prioritize semantic fluency
//! over logical soundness. To fix this, System 3 AI transitions from continuous differentiable spaces
//! to discrete logical graphs (DAGs).
//!
//! A reasoning process is split into discrete `LogicNode`s. Instead of generating text autoregressively,
//! the LLM generates a logic graph. Each node's validity is evaluated by a deterministic `NeuroSymbolicVerifier`
//! (e.g., Python code execution, Prover9, Lean4 solver).
//!
//! The graph explicitly tracks:
//! - Semantic knowledge nodes
//! - Relationships
//! - Operations
//! - Requirements/Dependencies

use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};

/// Types of logic nodes in the reasoning graph.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum LogicNodeType {
/// Semantic knowledge / Premise
Knowledge,
/// Logical operation or deduction step
Operation,
/// Requirement or Goal
Requirement,
/// Conclusion
Conclusion,
}

/// Verification status of a discrete node.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum VerificationStatus {
/// Not yet verified by the symbolic engine
Pending,
/// Validated by deterministic logic (e.g., code executed successfully)
Valid,
/// Logically invalid (e.g., invalid deduction, math error)
Invalid,
}

/// A single step in the reasoning graph.
/// In System 3 AI, natural language is compiled into these discrete blocks.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LogicNode {
pub id: String,
pub node_type: LogicNodeType,
/// Natural language representation of the step
pub proposition: String,
/// Formal translation for the symbolic engine (e.g., Python code, Lean4 statement)
pub formal_translation: Option<String>,
pub status: VerificationStatus,
/// IDs of nodes this node depends on
pub dependencies: Vec<String>,
/// Confidence or Reward given by the verification step (for explicit graph-based reward thinking)
pub reward: f32,
}

impl LogicNode {
pub fn new(id: String, node_type: LogicNodeType, proposition: String, formal_translation: Option<String>) -> Self {
Self {
id,
node_type,
proposition,
formal_translation,
status: VerificationStatus::Pending,
dependencies: Vec::new(),
reward: 0.0,
}
}

pub fn add_dependency(&mut self, dep_id: String) {
if !self.dependencies.contains(&dep_id) {
self.dependencies.push(dep_id);
}
}
}

Comment on lines +78 to +80

medium

The add_dependency method for LogicNode uses a Vec<String> for dependencies and checks for existing entries with !self.dependencies.contains(&dep_id). For graphs with a potentially large number of dependencies per node, this O(N) check can become inefficient. Using a HashSet<String> for dependencies would improve add_dependency and dependency checks to O(1) average time:

    pub fn add_dependency(&mut self, dep_id: String) {
        self.dependencies.insert(dep_id);
    }

/// The Reasoning Graph (DAG) that explicitly represents the logical path.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ReasoningGraph {
pub nodes: HashMap<String, LogicNode>,
}

impl ReasoningGraph {
pub fn new() -> Self {
Self {
nodes: HashMap::new(),
}
}

pub fn add_node(&mut self, node: LogicNode) {
self.nodes.insert(node.id.clone(), node);
}

pub fn get_node(&self, id: &str) -> Option<&LogicNode> {
self.nodes.get(id)
}

pub fn get_node_mut(&mut self, id: &str) -> Option<&mut LogicNode> {
self.nodes.get_mut(id)
}

/// Checks if the graph is a valid DAG (no cycles)
pub fn is_acyclic(&self) -> bool {
let mut visited = HashSet::new();
let mut path = HashSet::new();

for node_id in self.nodes.keys() {
if self.detect_cycle(node_id, &mut visited, &mut path) {
return false; // Cycle detected
}
}
true
}

fn detect_cycle(&self, node_id: &str, visited: &mut HashSet<String>, path: &mut HashSet<String>) -> bool {
if path.contains(node_id) {
return true;
}
if visited.contains(node_id) {
return false;
}

visited.insert(node_id.to_string());
path.insert(node_id.to_string());

if let Some(node) = self.nodes.get(node_id) {
for dep in &node.dependencies {
if self.detect_cycle(dep, visited, path) {
return true;
}
}
}

path.remove(node_id);
false
}

/// Determines if all nodes in the graph are marked as Valid.
pub fn is_logically_sound(&self) -> bool {
self.nodes.values().all(|n| n.status == VerificationStatus::Valid)
}
}
7 changes: 7 additions & 0 deletions crates/synapse-core/src/logic/system3/mod.rs
@@ -0,0 +1,7 @@
pub mod engine;
pub mod graph;
pub mod parser;

pub use engine::{NeuroSymbolicEngine, SymbolicVerifierPort, MockPythonVerifier};
pub use graph::{LogicNode, LogicNodeType, ReasoningGraph, VerificationStatus};
pub use parser::NeuroSymbolicParser;