
feat: elevate task system to instance-level global scope#468

Merged
jamiepine merged 7 commits into main from feat/global-task-database on Mar 24, 2026

Conversation


@jamiepine (Member) commented Mar 21, 2026

Summary

Replaces per-agent task databases with a single global task store. Tasks get globally unique numbers, explicit owner/assigned agent fields, and a unified API + UI surface.

Design doc: docs/design-docs/global-task-elevation.md
Addresses: #472 (borrow patterns from beads_rust — monotonic numbering, atomic claim semantics, schema invariants)

What changed

Database

  • New instance-level SQLite database at {instance_dir}/data/tasks.db with global tasks table
  • Monotonic task_number_seq high-water-mark table — numbers never reuse after deletes
  • agent_id replaced with owner_agent_id (creator) + assigned_agent_id (executor)
  • All timestamps use ISO 8601 (strftime('%Y-%m-%dT%H:%M:%SZ', 'now'))
  • One-time migration reads all per-agent task tables, assigns global numbers, writes to global DB

Rust

  • TaskStore refactored from per-agent to global — single Arc<TaskStore> shared across all agents
  • task_store_registry eliminated entirely (was ArcSwap<HashMap<String, Arc<TaskStore>>>)
  • SendAgentMessageTool simplified — creates tasks in global store with cross-agent assignment instead of writing to foreign agent databases
  • claim_next_ready filters by assigned_agent_id with atomic compare-and-set
  • Reassignment auto-clears worker_id so detached workers can't update reassigned tasks
  • Per-agent TaskStore fallback removed — fail-fast if global store not initialized

API

  • Single /tasks endpoint surface (no /agents/tasks duplication):
    • GET /tasks — list with optional agent_id (OR filter), owner_agent_id, assigned_agent_id, status, priority, created_by
    • POST /tasks — create with explicit owner_agent_id, optional assigned_agent_id
    • GET/PUT/DELETE /tasks/{number} — globally addressed by number
    • POST /tasks/{number}/approve — move to ready
    • POST /tasks/{number}/execute — move to ready (rejects pending_approval — must approve first)
    • POST /tasks/{number}/assign — reassign to different agent

UI

  • Shared TaskBoard component (interface/src/components/TaskBoard.tsx) with kanban columns, task cards, create dialog, detail dialog
  • Agent-scoped view (/agents/$agentId/tasks) uses agent_id OR filter — shows both assigned and owned tasks
  • New global view (/tasks) with agent badges on cards, explicit owner picker for task creation
  • Tasks icon added to sidebar between Orchestrate and Settings
  • Truncation warning when task list hits 200-task limit
  • Create dialog guarded against double-submit

Review fixes

  • Monotonic task numbering via task_number_seq table (no reuse after deletes)
  • Auto-clear worker_id on reassignment
  • ISO 8601 timestamps everywhere (store + migration normalization)
  • execute_task rejects pending_approval (409 CONFLICT)
  • Migration fails fast on per-agent error (no partial marker)
  • create_dir_all before global DB connect
  • Upstream config test fixes (missing team_id, mattermost fields)

Verification

  • cargo check — clean
  • cargo test --lib — 756 passed, 0 failed
  • bun run build — clean

Introduce the foundation for elevating the task system from per-agent
scope to a single global instance-level database shared across all
agents.

Changes:
- New migration directory (migrations/global/) with the global tasks
  schema featuring globally unique task_number, owner_agent_id and
  assigned_agent_id columns, and ISO 8601 timestamp defaults
- New connect_global_tasks() function in db.rs for connecting to and
  migrating the instance-level tasks.db
- Initialize the global task pool during startup in main.rs alongside
  the existing secrets.redb instance-level store
- Design specification for the full task elevation roadmap

coderabbitai bot commented Mar 21, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Migrates per-agent task storage to a single instance-level global task system: adds a global SQLite tasks DB and migration, rewires Rust task APIs/store to use owner/assigned agent IDs and TaskListFilter, moves HTTP routes to /tasks, updates tools and UI, and provides an idempotent one-time migration path.

Changes

Cohort / File(s): Summary

  • Design documentation (docs/design-docs/global-task-elevation.md): New design doc describing the global tasks DB schema, API routes, event payloads, UI changes, and phased migration plan.
  • DB migration (migrations/global/20260321000001_global_tasks.sql): New instance-wide tasks table with global task_number, owner_agent_id/assigned_agent_id, timestamps, metadata, and indexes.
  • DB connector (src/db.rs): Added connect_global_tasks(data_dir: &Path) to open {data_dir}/tasks.db and run global migrations.
  • Startup & wiring (src/main.rs, src/api/state.rs, src/api/agents.rs, src/lib.rs): Startup initializes a single global TaskStore and sets it on ApiState (task_store: ArcSwap<Option<_>>); removed per-agent task_store_registry and its population.
  • Task store core & migration (src/tasks/store.rs, src/tasks.rs, src/tasks/migration.rs): Refactor to global-scoped tasks: agent_id → owner_agent_id/assigned_agent_id, global task_number allocation, new TaskListFilter, updated store APIs (list/get/update/delete/claim), timestamp compatibility, and an idempotent one-time migration module.
  • API endpoints & handlers (src/api/tasks.rs, src/api/server.rs): Routes moved to top-level /tasks; handlers use get_task_store and TaskListFilter; request/response types updated for owner/assigned semantics; added POST /tasks/{number}/assign.
  • Agent logic & cortex (src/agent/cortex.rs, src/agent/channel.rs): Removed per-agent store args in task-store calls, switched to TaskListFilter-based queries, and updated test SQL fixtures to the new schema.
  • Tools & messaging (src/tools/send_agent_message.rs, src/tools/task_create.rs, src/tools/task_list.rs, src/tools/task_update.rs): Tools now use the global TaskStore; task creation/updates use owner_agent_id/assigned_agent_id; SendAgentMessageTool::new takes a single Arc<TaskStore>.
  • UI & client (interface/src/api/client.ts, interface/src/components/TaskBoard.tsx, interface/src/routes/GlobalTasks.tsx, interface/src/routes/AgentTasks.tsx, interface/src/components/Sidebar.tsx, interface/src/router.tsx): Client types and API methods updated for global endpoints and owner/assigned fields; added shared TaskBoard UI, new GlobalTasks route/page, refactored AgentTasks to reuse TaskBoard, and added sidebar link.
  • Tests (tests/*, inline SQL fixtures): Tests updated to remove registry initialization and match the new global tasks schema and inputs; added migration tests.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


🚥 Pre-merge checks: ✅ 3 passed

  • Docstring Coverage: Passed. Docstring coverage is 87.34%, above the required 80.00% threshold.
  • Title check: Passed. The title accurately summarizes the main change of migrating tasks from per-agent scope to an instance-level global system.
  • Description check: Passed. The description covers the database schema changes, Rust refactors, API updates, UI changes, and verification steps in the changeset.


Comment on lines +77 to +78
let db_path = data_dir.join("tasks.db");
let url = format!("sqlite:{}?mode=rwc", db_path.display());

Minor robustness: if {instance_dir}/data doesn’t exist yet, sqlite:...tasks.db connect will fail with “unable to open database file”. Creating the directory here keeps startup behavior predictable.

Suggested change:

let db_path = data_dir.join("tasks.db");
let url = format!("sqlite:{}?mode=rwc", db_path.display());

becomes

std::fs::create_dir_all(data_dir).with_context(|| {
    format!(
        "failed to create global task database directory: {}",
        data_dir.display()
    )
})?;
let db_path = data_dir.join("tasks.db");
let url = format!("sqlite:{}?mode=rwc", db_path.display());

…t model

Replace agent-scoped task storage with a global model where tasks have
owner_agent_id (creator) and assigned_agent_id (executor) instead of a
single agent_id. Task numbers are now globally unique (no composite key).

TaskStore API changes:
- create() takes owner_agent_id + assigned_agent_id instead of agent_id
- list() takes TaskListFilter struct instead of positional agent_id param
- get_by_number() no longer requires agent_id (globally unique numbers)
- update() no longer requires agent_id; gains assigned_agent_id field
- delete() no longer requires agent_id
- claim_next_ready() filters by assigned_agent_id
- list_ready() filters by assigned_agent_id

All 24 consumer call sites updated across tools (task_create, task_list,
task_update, send_agent_message), API handlers, and cortex ready-task
loop. Cortex test fixtures updated to use global schema.

New tests: global_task_numbers_are_unique_across_agents,
list_filters_by_assigned_agent, claim_next_ready_scopes_by_assigned_agent,
reassign_task_via_update

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/api/tasks.rs (1)

136-143: ⚠️ Potential issue | 🔴 Critical

Reapply agent scoping on the global-by-number handlers.

These handlers now fetch/update/delete by global task_number, while query.agent_id / request.agent_id is only used to pick a store entry. With the shared global store, any caller who knows #N can fetch, approve, execute, update, or delete another agent’s task. Gate each operation on the task’s assigned_agent_id / owner_agent_id, or push the agent into the SQL WHERE clause.

Also applies to: 215-237, 260-280, 296-320, 336-377

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/tasks.rs` around lines 136 - 143, The handler is fetching tasks by
global task_number via store.get_by_number(number) without enforcing the
caller's agent scope; update the logic after retrieving the task to verify that
task.assigned_agent_id or task.owner_agent_id equals query.agent_id (or modify
store.get_by_number to accept an agent_id filter) and return
StatusCode::FORBIDDEN when they differ; apply the same fix to the other affected
handlers referenced (around the blocks at 215-237, 260-280, 296-320, 336-377) so
every global-by-number operation (get_by_number, approve/update/delete/execute
flows) enforces agent scoping before proceeding.
src/tasks/store.rs (1)

371-406: ⚠️ Potential issue | 🟠 Major

Use the migration's ISO-8601 expression on updates too.

The new schema writes task timestamps with strftime('%Y-%m-%dT%H:%M:%SZ', 'now'), but these paths still persist datetime('now'). Because read_timestamp returns TEXT values verbatim, touched rows will start returning "YYYY-MM-DD HH:MM:SS" while untouched rows return ISO/RFC3339 strings, and string ordering on updated_at becomes unreliable. Switch every write here to the same strftime(...) expression, or normalize string reads before returning them.

Also applies to: 470-471, 623-644

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tasks/store.rs` around lines 371 - 406, The code uses sqlite
datetime('now') in approved_at, completed_at and the updated_at assignment while
the migration uses ISO-8601 via strftime('%Y-%m-%dT%H:%M:%SZ','now'); change
every occurrence of "datetime('now')" in the task update path (variables
approved_at, completed_at, and the query fragment that appends "updated_at =
datetime('now')") to use "strftime('%Y-%m-%dT%H:%M:%SZ','now')" so writes remain
consistent with the migration; apply the same replacement to the other update
sites mentioned (around the blocks currently at 470-471 and 623-644) to ensure
all timestamps written by this store use the ISO-8601 expression.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7f8a0c69-3d93-47a4-a512-b047e1e3bc1d

📥 Commits

Reviewing files that changed from the base of the PR and between e28e571 and 5c8bb24.

📒 Files selected for processing (8)
  • src/agent/cortex.rs
  • src/api/tasks.rs
  • src/tasks.rs
  • src/tasks/store.rs
  • src/tools/send_agent_message.rs
  • src/tools/task_create.rs
  • src/tools/task_list.rs
  • src/tools/task_update.rs
✅ Files skipped from review due to trivial changes (1)
  • src/tasks.rs

Comment on lines +158 to +159
/// Reassign the task to a different agent.
pub assigned_agent_id: Option<String>,

⚠️ Potential issue | 🟠 Major

Reassignment needs to break the old worker association.

assigned_agent_id can change while worker_id is silently preserved. Reassigning an in_progress task leaves the old worker attached, so get_by_worker_id here and the detached-worker completion path in src/agent/cortex.rs can keep updating a task that now belongs to someone else. Reject reassignment while a worker is bound, or force clear_worker_id / a non-in_progress status when the assignee changes.

Also applies to: 364-422

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tasks/store.rs` around lines 158 - 159, When assigned_agent_id changes
you must not leave the old worker_id attached; update the task update logic that
mutates assigned_agent_id to either reject the reassignment while worker_id is
set or atomically clear worker_id and set status away from in_progress when the
assignee changes. Specifically, in the Task struct update/assignment code that
touches assigned_agent_id (and the code paths mentioned around lines 364-422),
detect a change in assigned_agent_id versus the existing value and either return
an error if worker_id.is_some(), or call clear_worker_id() and set status !=
in_progress before applying the new assigned_agent_id; also ensure the
detached-worker completion path in src/agent/cortex.rs uses the same contract
(don’t complete or update a task if assigned_agent_id no longer matches the
worker) so a detached worker cannot keep updating a reassigned task.

Comment on lines +20 to 22
// Retained for future authorization checks on global task updates.
#[allow(dead_code)]
agent_id: AgentId,

⚠️ Potential issue | 🔴 Critical

Enforce task ownership before branch-scoped updates.

TaskStore::update is now global by task_number, but agent_id is explicitly unused here. In branch scope that means an agent can update another agent’s task just by referencing its number once this tool is backed by the shared global store. Fetch the task first and reject when it is not owned/assigned to self.agent_id, or restore a scoped update helper.

Also applies to: 156-236

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tools/task_update.rs` around lines 20 - 22, The TaskStore::update path is
now global-by-task_number but agent_id in the TaskUpdate tool is unused,
allowing cross-agent updates; modify the code in src/tools/task_update.rs
(including the logic around lines referenced 156-236) to first fetch the task by
task_number (e.g., call TaskStore::get or equivalent) and verify that the
fetched task.owner/assigned_agent equals self.agent_id, returning an
authorization/error when it does not; only call TaskStore::update after this
ownership check, or alternatively reintroduce a scoped update helper that
enforces self.agent_id ownership before applying changes (ensure you update the
code paths that currently call TaskStore::update to use this ownership-checked
flow).

Replace the per-agent task store map and cross-agent task_store_registry
with a single global TaskStore shared across all agents. This eliminates
the fragile cross-database write pattern where SendAgentMessageTool
looked up target agent stores from a registry.

Removed fields:
- AgentDeps.task_store_registry (lib.rs)
- ApiState.task_stores HashMap (api/state.rs)
- ApiState.task_store_registry (api/state.rs)

Added/changed:
- ApiState.task_store: ArcSwap<Option<Arc<TaskStore>>> (single global)
- ApiState.set_task_store() replaces set_task_stores()
- SendAgentMessageTool.task_store replaces task_store_registry
- initialize_agents() takes Arc<TaskStore> instead of registry
- All agents share the same global_task_store via AgentDeps

Cleanup:
- Removed task_store_registry population loop after agent init
- Removed per-agent TaskStore::new(db.sqlite) in agent init
- Removed per-agent task_stores map build for API state
- Simplified API task handlers from per-agent map lookup to single store
- Updated test fixtures (context_dump.rs, bulletin.rs)
Introduce tasks::migration module that runs on startup before agent
initialization. Scans each agent's per-agent spacebot.db for legacy
task rows and inserts them into the global tasks.db with new globally
unique task numbers.

Migration behavior:
- Idempotent via .tasks_migrated marker file in instance data dir
- Preserves original task UUIDs (skips already-migrated rows)
- Maps delegating_agent_id metadata to owner_agent_id for cross-agent
  tasks; assigned_agent_id comes from the legacy agent_id
- Enriches metadata with legacy_agent_id and legacy_task_number
- Handles both ISO 8601 TEXT and legacy TIMESTAMP column formats
- Per-agent databases are opened read-only, never modified
- Logs warnings and continues if individual agent migration fails

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/api/tasks.rs (1)

145-151: ⚠️ Potential issue | 🔴 Critical

Reintroduce agent scoping on the non-list task endpoints.

Lines 145, 223, 266, 302, and 340 now operate on a global task number without checking that the row is visible to request/query.agent_id. After removing per-agent stores, that lets callers read, approve, execute, update, or delete another agent's task if they know the number, and delete_task() will also publish the SSE event under the caller-supplied agent_id instead of the task's real assignee. Fetch first and return NOT_FOUND on mismatch—at least against assigned_agent_id, consistent with list_tasks()—before returning or mutating the task.

Also applies to: 221-243, 265-285, 300-315, 339-370

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/api/agents.rs`:
- Around line 403-408: The current code silently falls back to creating a
per-agent TaskStore when ApiState.task_store is unset; remove the unwrap_or_else
fallback and instead fail fast: in create_agent_internal check
ApiState.task_store.load() at the top and return an Err (or propagate an error)
if it's None so no AgentDeps/Channel::new/SendAgentMessageTool is constructed
with a local store; replace the clone().unwrap_or_else(...) logic around
task_store with a required lookup that errors if missing, and apply the same
change to the other occurrence that uses the same fallback.

In `@src/tasks/migration.rs`:
- Around line 69-86: The current logic writes the global marker (write_marker /
marker_path) after looping agents even when some agent migrations aborted inside
migrate_agent_tasks, which can permanently skip unfinished agents; change the
flow so that the global marker is only written for an agent after that agent's
migration completed successfully (i.e., only call write_marker for that agent
when migrate_agent_tasks returned Ok and no errors occurred), or alternatively
implement per-agent success markers/progress (e.g., per-agent .tasks_migrated
files or persisted last-processed task index) so partial migrations can be
resumed; update the code paths around migrate_agent_tasks, the outer agent loop,
and the write_marker call to ensure markers are written atomically per-agent
only on complete success.
- Around line 264-283: The helper functions read_timestamp and
read_optional_timestamp currently return TEXT timestamp strings verbatim; update
them to detect and parse legacy SQLite TEXT forms (e.g. "YYYY-MM-DD HH:MM:SS"
optionally with fractional seconds) before returning by attempting
chrono::NaiveDateTime::parse_from_str with a format like "%Y-%m-%d %H:%M:%S%.f"
(fall back to the existing behavior if parsing fails): for read_timestamp try
parsing the string value and, on success, convert with .and_utc().to_rfc3339()
(otherwise keep the current NaiveDateTime handling and final fallback to
chrono::Utc::now()), and for read_optional_timestamp do the same
parsing/normalization for Some(string) values before returning Option<String>.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f0ccaf40-72a1-40c0-ac88-8d300b604dbd

📥 Commits

Reviewing files that changed from the base of the PR and between 5c8bb24 and 2209b32.

📒 Files selected for processing (11)
  • src/agent/channel.rs
  • src/api/agents.rs
  • src/api/state.rs
  • src/api/tasks.rs
  • src/lib.rs
  • src/main.rs
  • src/tasks.rs
  • src/tasks/migration.rs
  • src/tools/send_agent_message.rs
  • tests/bulletin.rs
  • tests/context_dump.rs
💤 Files with no reviewable changes (3)
  • tests/bulletin.rs
  • src/lib.rs
  • tests/context_dump.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/tasks.rs


@vsumner (Collaborator) commented Mar 22, 2026

Global numbering and the owner_agent_id / assigned_agent_id split fix real problems in the current model, especially ambiguous #N references and the existing cross-database write path in send_agent_message.


  let task = store
-     .get_by_number(&query.agent_id, number)
+     .get_by_number(number)

The agent-scoped /agents/tasks read path lost its visibility check when tasks became globally addressed. get_by_number(number) will now return any task if the caller knows #N, regardless of query.agent_id. The same problem exists in the update/approve/execute/delete handlers below. Please fetch first and return NOT_FOUND unless the task is actually visible to the requested agent (at minimum assigned_agent_id, consistent with list_tasks()).

}
}

write_marker(&marker_path)?;

This writes the global migration marker even if one or more agents failed inside migrate_agent_tasks(). That makes partial migration permanent: later startups skip the migration entirely, leaving some legacy task sets stranded forever. The marker needs to reflect full success, or progress needs to be tracked per agent so failed agents can be retried safely.

));
- let task_store = std::sync::Arc::new(crate::tasks::TaskStore::new(db.sqlite.clone()));
+ let task_store =
+     state.task_store.load().as_ref().clone().unwrap_or_else(|| {

If the global task store is unexpectedly unset, this silently falls back to a per-agent TaskStore backed by spacebot.db. That breaks the new global-store invariant and can create split-brain task state depending on startup order. This should fail fast instead of quietly recreating the old topology.

metadata
}

fn read_timestamp(row: &sqlx::sqlite::SqliteRow, column: &str) -> String {

These helpers return TEXT timestamps verbatim. If legacy rows contain the usual SQLite YYYY-MM-DD HH:MM:SS form from the old TIMESTAMP columns, the migrated global store will end up with mixed timestamp formats. Please normalize legacy text forms to RFC 3339 before inserting them into the new TEXT columns.

Replace dual /agents/tasks + /tasks API surfaces with a single /tasks
endpoint. All handlers use the global task store with optional
owner_agent_id/assigned_agent_id filters.

Extract shared kanban components into TaskBoard.tsx so both the
agent-scoped view (/agents/$agentId/tasks) and the new top-level
view (/tasks) reuse the same board, columns, cards, and dialogs.
Add Tasks icon to the sidebar between Orchestrate and Settings.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/api/tasks.rs (1)

371-385: ⚠️ Potential issue | 🟠 Major

execute_task currently bypasses the approval gate.

A pending_approval task falls through to the Ready update path here, and approved_by can still be None. That makes the dedicated approve route optional and lets tasks enter the runnable state with no approver recorded. The UI already exposes this path for pending-approval cards, so this is reachable today. Reject execute requests for pending_approval, or funnel execute through the same approval rules.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/tasks.rs` around lines 371 - 385, The execute_task flow incorrectly
allows tasks with TaskStatus::PendingApproval to be updated to Ready without an
approver; in the execute_task handler, check for
crate::tasks::TaskStatus::PendingApproval (via current.status) and either return
an error response (reject the execute request) or enforce the same approval
logic used by the approve route (require request.approved_by to be Some and/or
call the approval path) before calling store.update; adjust the branch around
the existing matches(...) check and the call to store.update(...) so
PendingApproval does not fall through to the Ready update and ensure approved_by
is validated when transitioning to Ready.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d2037602-15e2-4f97-9211-784e68e96236

📥 Commits

Reviewing files that changed from the base of the PR and between 2209b32 and 595c682.

📒 Files selected for processing (8)
  • interface/src/api/client.ts
  • interface/src/components/Sidebar.tsx
  • interface/src/components/TaskBoard.tsx
  • interface/src/router.tsx
  • interface/src/routes/AgentTasks.tsx
  • interface/src/routes/GlobalTasks.tsx
  • src/api/server.rs
  • src/api/tasks.rs

Comment on lines +222 to +230
const { data, isLoading } = useQuery({
  queryKey,
  queryFn: () =>
    api.listTasks({
      assigned_agent_id: agentId,
      limit: 200,
    }),
  refetchInterval: 15_000,
});

⚠️ Potential issue | 🟠 Major

Don't present a truncated list as the full task board.

This hard-caps the query at 200 rows, but the toolbar later reports tasks.length as if it's the full total. On the new global board, anything beyond that first page silently disappears with no warning. Add pagination/infinite loading, or have the API return total/has_more so the UI can surface truncation.

Also applies to: 318-320

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interface/src/components/TaskBoard.tsx` around lines 222 - 230, The UI is
hard-capping api.listTasks to limit:200 via the useQuery call (queryKey /
queryFn) and then treating tasks.length as the full total in the toolbar, which
silently drops items past page 1; fix by either adding proper
pagination/infinite loading (replace useQuery with useInfiniteQuery and call
api.listTasks with page/limit or cursor, using queryKey that includes
page/cursor and merging pages before computing totals) or by changing the API to
return total/has_more and updating the list rendering and toolbar to use
returned total/has_more (do not use tasks.length as the authoritative total).
Update all occurrences (e.g., the call using api.listTasks at queryKey and the
toolbar code referencing tasks.length around the other mention) so truncation is
surfaced to users and more items are fetched on scroll or via pagination
controls.
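The `has_more` option the comment suggests is often implemented with the fetch-one-extra-row trick: request `limit + 1` rows and report whether the extra row existed. A small Python sketch of that pagination contract (the function name and response shape are illustrative, not the project's API):

```python
# Hypothetical page helper: fetch limit + 1 rows so the caller can tell
# a truncated page apart from the full result set.
def list_page(rows, limit, offset=0):
    window = rows[offset : offset + limit + 1]
    return {"tasks": window[:limit], "has_more": len(window) > limit}
```

With this shape the toolbar can render "200+ tasks" instead of presenting `tasks.length` as the authoritative total.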

Comment on lines +588 to +601
const handleSubmit = useCallback(() => {
  if (!title.trim()) return;
  onCreate({
    owner_agent_id: ownerAgentId,
    title: title.trim(),
    description: description.trim() || undefined,
    priority,
    status,
  });
  setTitle("");
  setDescription("");
  setPriority("medium");
  setStatus("backlog");
}, [title, description, priority, status, ownerAgentId, onCreate]);

⚠️ Potential issue | 🟠 Major

Block duplicate creates while the mutation is in flight.

handleSubmit doesn't guard on isPending, so a fast double-click or Enter key repeat can still invoke createTask twice before the disabled state lands. Because creation is non-idempotent, that can produce duplicate tasks.

Proposed fix
   const handleSubmit = useCallback(() => {
-    if (!title.trim()) return;
+    if (isPending || !title.trim()) return;
     onCreate({
       owner_agent_id: ownerAgentId,
       title: title.trim(),
       description: description.trim() || undefined,
       priority,
       status,
     });
     setTitle("");
     setDescription("");
     setPriority("medium");
     setStatus("backlog");
-  }, [title, description, priority, status, ownerAgentId, onCreate]);
+  }, [title, description, priority, status, ownerAgentId, onCreate, isPending]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interface/src/components/TaskBoard.tsx` around lines 588 - 601, handleSubmit
can fire twice before the disabled UI state appears; guard the in-flight
mutation by returning early when an "isPending" (or local submitting) flag is
set, and set/clear that flag around the create call. Concretely: inside
handleSubmit check if isPending (or a new submitting ref/state) and return if
true; set submitting = true immediately before calling onCreate({...}), and in
the promise finally set submitting = false; update the hook dependencies to
include the submitting value (or use a ref to avoid re-creating the callback).
Ensure you still call setTitle, setDescription, setPriority, and setStatus after
the create completes or immediately if that behavior is desired.

Comment on lines 3 to +4
export function AgentTasks({ agentId }: { agentId: string }) {
const queryClient = useQueryClient();
const { taskEventVersion } = useLiveContext();

// Invalidate on SSE task events
const prevVersion = useRef(taskEventVersion);
useEffect(() => {
if (taskEventVersion !== prevVersion.current) {
prevVersion.current = taskEventVersion;
queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
}
}, [taskEventVersion, agentId, queryClient]);

const { data, isLoading } = useQuery({
queryKey: ["tasks", agentId],
queryFn: () => api.listTasks(agentId, { limit: 200 }),
refetchInterval: 15_000,
});

const tasks = data?.tasks ?? [];

// Group tasks by status
const tasksByStatus: Record<TaskStatus, TaskItem[]> = {
pending_approval: [],
backlog: [],
ready: [],
in_progress: [],
done: [],
};
for (const task of tasks) {
tasksByStatus[task.status]?.push(task);
}

// Create task dialog
const [createOpen, setCreateOpen] = useState(false);
// Detail dialog — store task number and derive from live list to stay current.
const [selectedTaskNumber, setSelectedTaskNumber] = useState<number | null>(
null,
);
const selectedTask =
selectedTaskNumber !== null
? (tasks.find((t) => t.task_number === selectedTaskNumber) ?? null)
: null;

const createMutation = useMutation({
mutationFn: (request: CreateTaskRequest) =>
api.createTask(agentId, request),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
setCreateOpen(false);
},
});

const updateMutation = useMutation({
mutationFn: ({
taskNumber,
...request
}: {
taskNumber: number;
status?: TaskStatus;
priority?: TaskPriority;
}) => api.updateTask(agentId, taskNumber, request),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
},
});

const approveMutation = useMutation({
mutationFn: (taskNumber: number) =>
api.approveTask(agentId, taskNumber, "human"),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
},
});

const executeMutation = useMutation({
mutationFn: (taskNumber: number) => api.executeTask(agentId, taskNumber),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
},
});

const deleteMutation = useMutation({
mutationFn: (taskNumber: number) => api.deleteTask(agentId, taskNumber),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
setSelectedTaskNumber(null);
},
});

if (isLoading) {
return (
<div className="flex h-full items-center justify-center text-ink-faint">
Loading tasks...
</div>
);
}

return (
<div className="flex h-full flex-col">
{/* Toolbar */}
<div className="flex items-center justify-between border-b border-app-line px-4 py-2">
<div className="flex items-center gap-3">
<span className="text-sm text-ink-dull">
{tasks.length} task{tasks.length !== 1 ? "s" : ""}
</span>
{tasksByStatus.pending_approval.length > 0 && (
<Badge variant="amber" size="sm">
{tasksByStatus.pending_approval.length} pending approval
</Badge>
)}
{tasksByStatus.in_progress.length > 0 && (
<Badge variant="violet" size="sm">
{tasksByStatus.in_progress.length} in progress
</Badge>
)}
</div>
<Button size="sm" onClick={() => setCreateOpen(true)}>
Create Task
</Button>
</div>

{/* Kanban Board */}
<div className="flex flex-1 flex-wrap content-start gap-3 overflow-y-auto p-4">
{COLUMNS.map(({ status, label }) => (
<KanbanColumn
key={status}
status={status}
label={label}
tasks={tasksByStatus[status]}
onSelect={(task) => setSelectedTaskNumber(task.task_number)}
onApprove={(task) => approveMutation.mutate(task.task_number)}
onExecute={(task) => executeMutation.mutate(task.task_number)}
onStatusChange={(task, newStatus) =>
updateMutation.mutate({
taskNumber: task.task_number,
status: newStatus,
})
}
/>
))}
</div>

{/* Create Dialog */}
<CreateTaskDialog
open={createOpen}
onClose={() => setCreateOpen(false)}
onCreate={(request) => createMutation.mutate(request)}
isPending={createMutation.isPending}
/>

{/* Detail Dialog */}
{selectedTask && (
<TaskDetailDialog
task={selectedTask}
onClose={() => setSelectedTaskNumber(null)}
onApprove={() => approveMutation.mutate(selectedTask.task_number)}
onExecute={() => executeMutation.mutate(selectedTask.task_number)}
onDelete={() => deleteMutation.mutate(selectedTask.task_number)}
onStatusChange={(status) =>
updateMutation.mutate({
taskNumber: selectedTask.task_number,
status,
})
}
/>
)}
</div>
);
}

// -- Kanban Column --

function KanbanColumn({
status,
label,
tasks,
onSelect,
onApprove,
onExecute,
onStatusChange,
}: {
status: TaskStatus;
label: string;
tasks: TaskItem[];
onSelect: (task: TaskItem) => void;
onApprove: (task: TaskItem) => void;
onExecute: (task: TaskItem) => void;
onStatusChange: (task: TaskItem, status: TaskStatus) => void;
}) {
return (
<div className="flex min-h-0 min-w-[14rem] flex-1 basis-[14rem] flex-col rounded-lg border border-app-line/50 bg-app-darkBox/20">
{/* Column Header */}
<div className="flex items-center gap-2 border-b border-app-line/30 px-3 py-2">
<Badge variant={STATUS_COLORS[status]} size="sm">
{label}
</Badge>
<span className="text-tiny text-ink-faint">{tasks.length}</span>
</div>

{/* Cards */}
<div className="flex-1 space-y-2 overflow-y-auto p-2">
<AnimatePresence mode="popLayout">
{tasks.map((task) => (
<TaskCard
key={task.id}
task={task}
onSelect={() => onSelect(task)}
onApprove={() => onApprove(task)}
onExecute={() => onExecute(task)}
onStatusChange={(newStatus) => onStatusChange(task, newStatus)}
/>
))}
</AnimatePresence>
{tasks.length === 0 && (
<div className="py-4 text-center text-tiny text-ink-faint">
No tasks
</div>
)}
</div>
</div>
);
}

// -- Task Card --

function TaskCard({
task,
onSelect,
onApprove,
onExecute,
onStatusChange,
}: {
task: TaskItem;
onSelect: () => void;
onApprove: () => void;
onExecute: () => void;
onStatusChange: (status: TaskStatus) => void;
}) {
const subtasksDone = task.subtasks.filter((s) => s.completed).length;
const subtasksTotal = task.subtasks.length;

return (
<motion.div
layout
initial={{ opacity: 0, scale: 0.95 }}
animate={{ opacity: 1, scale: 1 }}
exit={{ opacity: 0, scale: 0.95 }}
transition={{ duration: 0.15 }}
className="cursor-pointer rounded-md border border-app-line/30 bg-app p-3 transition-colors hover:border-app-line"
onClick={onSelect}
>
{/* Title row */}
<div className="flex items-start justify-between gap-2">
<span className="text-sm font-medium text-ink leading-tight">
#{task.task_number} {task.title}
</span>
</div>

{/* Meta row */}
<div className="mt-2 flex flex-wrap items-center gap-1.5">
<Badge variant={PRIORITY_COLORS[task.priority]} size="sm">
{PRIORITY_LABELS[task.priority]}
</Badge>
{subtasksTotal > 0 && (
<span className="text-tiny text-ink-faint">
{subtasksDone}/{subtasksTotal}
</span>
)}
{task.worker_id && (
<Badge variant="violet" size="sm">
Worker
</Badge>
)}
<GithubMetadataBadges metadata={task.metadata} compact />
</div>

{/* Subtask progress bar */}
{subtasksTotal > 0 && (
<div className="mt-2 h-1 overflow-hidden rounded-full bg-app-line/30">
<div
className="h-full rounded-full bg-accent transition-all"
style={{ width: `${(subtasksDone / subtasksTotal) * 100}%` }}
/>
</div>
)}

{/* Quick actions */}
<div className="mt-2 flex gap-1" onClick={(e) => e.stopPropagation()}>
{task.status === "pending_approval" && (
<button
className="rounded px-1.5 py-0.5 text-tiny text-accent hover:bg-accent/10"
onClick={onApprove}
>
Approve
</button>
)}
{(task.status === "backlog" || task.status === "pending_approval") && (
<button
className="rounded px-1.5 py-0.5 text-tiny text-violet-400 hover:bg-violet-400/10"
onClick={onExecute}
>
Execute
</button>
)}
{task.status === "in_progress" && (
<button
className="rounded px-1.5 py-0.5 text-tiny text-emerald-400 hover:bg-emerald-400/10"
onClick={() => onStatusChange("done")}
>
Mark Done
</button>
)}
</div>

{/* Footer */}
<div className="mt-1.5 text-tiny text-ink-faint">
{formatTimeAgo(task.created_at)} by {task.created_by}
</div>
</motion.div>
);
}

// -- Create Task Dialog --

function CreateTaskDialog({
open,
onClose,
onCreate,
isPending,
}: {
open: boolean;
onClose: () => void;
onCreate: (request: CreateTaskRequest) => void;
isPending: boolean;
}) {
const [title, setTitle] = useState("");
const [description, setDescription] = useState("");
const [priority, setPriority] = useState<TaskPriority>("medium");
const [status, setStatus] = useState<TaskStatus>("backlog");

const handleSubmit = useCallback(() => {
if (!title.trim()) return;
onCreate({
title: title.trim(),
description: description.trim() || undefined,
priority,
status,
});
setTitle("");
setDescription("");
setPriority("medium");
setStatus("backlog");
}, [title, description, priority, status, onCreate]);

return (
<Dialog open={open} onOpenChange={(v) => !v && onClose()}>
<DialogContent>
<DialogHeader>
<DialogTitle>Create Task</DialogTitle>
</DialogHeader>
<div className="flex flex-col gap-3 py-2">
<div>
<label className="mb-1 block text-xs text-ink-dull">Title</label>
<input
className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink placeholder:text-ink-faint focus:border-accent focus:outline-none"
placeholder="Task title..."
value={title}
onChange={(e) => setTitle(e.target.value)}
onKeyDown={(e) => e.key === "Enter" && handleSubmit()}
autoFocus
/>
</div>
<div>
<label className="mb-1 block text-xs text-ink-dull">
Description
</label>
<textarea
className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink placeholder:text-ink-faint focus:border-accent focus:outline-none"
placeholder="Optional description..."
value={description}
onChange={(e) => setDescription(e.target.value)}
rows={3}
/>
</div>
<div className="flex gap-4">
<div className="flex-1">
<label className="mb-1 block text-xs text-ink-dull">
Priority
</label>
<select
className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink focus:border-accent focus:outline-none"
value={priority}
onChange={(e) => setPriority(e.target.value as TaskPriority)}
>
<option value="critical">Critical</option>
<option value="high">High</option>
<option value="medium">Medium</option>
<option value="low">Low</option>
</select>
</div>
<div className="flex-1">
<label className="mb-1 block text-xs text-ink-dull">
Initial Status
</label>
<select
className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink focus:border-accent focus:outline-none"
value={status}
onChange={(e) => setStatus(e.target.value as TaskStatus)}
>
<option value="pending_approval">Pending Approval</option>
<option value="backlog">Backlog</option>
<option value="ready">Ready</option>
</select>
</div>
</div>
</div>
<DialogFooter>
<Button variant="outline" size="sm" onClick={onClose}>
Cancel
</Button>
<Button
size="sm"
onClick={handleSubmit}
disabled={!title.trim() || isPending}
>
{isPending ? "Creating..." : "Create"}
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
);
}

// -- Task Detail Dialog --

function TaskDetailDialog({
task,
onClose,
onApprove,
onExecute,
onDelete,
onStatusChange,
}: {
task: TaskItem;
onClose: () => void;
onApprove: () => void;
onExecute: () => void;
onDelete: () => void;
onStatusChange: (status: TaskStatus) => void;
}) {
return (
<Dialog open={true} onOpenChange={(v) => !v && onClose()}>
<DialogContent className="!flex max-h-[85vh] max-w-lg !flex-col overflow-hidden">
<DialogHeader className="shrink-0">
<DialogTitle>
#{task.task_number} {task.title}
</DialogTitle>
</DialogHeader>
<div className="flex min-h-0 flex-1 flex-col gap-3 overflow-y-auto py-2 pr-1">
{/* Status + Priority */}
<div className="flex items-center gap-2">
<Badge variant={STATUS_COLORS[task.status]} size="md">
{task.status.replace("_", " ")}
</Badge>
<Badge variant={PRIORITY_COLORS[task.priority]} size="md">
{PRIORITY_LABELS[task.priority]}
</Badge>
{task.worker_id && (
<Badge variant="violet" size="md">
Worker: {task.worker_id.slice(0, 8)}
</Badge>
)}
</div>

{/* Description */}
{task.description && (
<div>
<label className="mb-1 block text-xs text-ink-dull">
Description
</label>
<Markdown className="break-words text-sm text-ink">
{task.description}
</Markdown>
</div>
)}

{/* Subtasks */}
{task.subtasks.length > 0 && (
<div>
<label className="mb-1 block text-xs text-ink-dull">
Subtasks ({task.subtasks.filter((s) => s.completed).length}/
{task.subtasks.length})
</label>
<ul className="space-y-1">
{task.subtasks.map((subtask, index) => (
<li key={index} className="flex items-center gap-2 text-sm">
<span
className={
subtask.completed
? "text-emerald-400"
: "text-ink-faint"
}
>
{subtask.completed ? "[x]" : "[ ]"}
</span>
<span
className={
subtask.completed
? "text-ink-dull line-through"
: "text-ink"
}
>
{subtask.title}
</span>
</li>
))}
</ul>
</div>
)}

{(() => {
const githubRefs = getGithubReferences(task.metadata);
if (githubRefs.length === 0) return null;
return (
<div>
<label className="mb-1 block text-xs text-ink-dull">
GitHub Links
</label>
<GithubMetadataBadges references={githubRefs} />
</div>
);
})()}

{/* Metadata */}
<div className="grid grid-cols-1 gap-2 text-xs text-ink-dull sm:grid-cols-2">
<div>Created: {formatTimeAgo(task.created_at)}</div>
<div>By: {task.created_by}</div>
{task.approved_at && (
<div>Approved: {formatTimeAgo(task.approved_at)}</div>
)}
{task.approved_by && <div>By: {task.approved_by}</div>}
{task.completed_at && (
<div>Completed: {formatTimeAgo(task.completed_at)}</div>
)}
<div>Updated: {formatTimeAgo(task.updated_at)}</div>
</div>
</div>

<DialogFooter className="shrink-0">
<div className="flex w-full flex-col gap-3 sm:flex-row sm:items-center sm:justify-between">
<button
className="text-xs text-red-400 hover:text-red-300"
onClick={onDelete}
>
Delete
</button>
<div className="flex flex-wrap gap-2 sm:justify-end">
{task.status === "pending_approval" && (
<Button size="sm" onClick={onApprove}>
Approve
</Button>
)}
{(task.status === "backlog" ||
task.status === "pending_approval") && (
<Button size="sm" onClick={onExecute}>
Execute
</Button>
)}
{task.status === "ready" && (
<Badge variant="accent" size="md">
Waiting for cortex pickup
</Badge>
)}
{task.status === "in_progress" && (
<Button size="sm" onClick={() => onStatusChange("done")}>
Mark Done
</Button>
)}
{task.status === "done" && (
<Button
size="sm"
variant="outline"
onClick={() => onStatusChange("backlog")}
>
Reopen
</Button>
)}
<Button size="sm" variant="outline" onClick={onClose}>
Close
</Button>
</div>
</div>
</DialogFooter>
</DialogContent>
</Dialog>
);
return <TaskBoard agentId={agentId} ownerAgentId={agentId} />;

⚠️ Potential issue | 🟠 Major

This route now drops owned-but-delegated tasks.

TaskBoard uses agentId only as assigned_agent_id, so once one of this agent's tasks is reassigned it disappears from /agents/$agentId/tasks even though owner_agent_id is still this agent. That changes the scoped view from “tasks for this agent” to “tasks currently assigned here” and removes visibility into delegated work.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interface/src/routes/AgentTasks.tsx` around lines 3 - 4, AgentTasks currently
passes ownerAgentId but TaskBoard only uses its agentId prop to filter
assigned_agent_id, dropping tasks that are owned but delegated; update TaskBoard
(the TaskBoard component and its data-fetching/filter logic) to accept and use
the ownerAgentId prop and change the query/filter so it returns tasks where
owner_agent_id === ownerAgentId OR assigned_agent_id === agentId (or equivalent
DB predicate), ensuring both owned and currently-assigned tasks are shown.

Comment on lines +23 to +31
// Use the first agent as default owner for task creation.
// In the global view this is a reasonable default — the user can reassign later.
const defaultOwner = agents[0]?.id;

return (
  <TaskBoard
    ownerAgentId={defaultOwner}
    agentNames={agentNames}
    showAgentBadge={agents.length > 1}

⚠️ Potential issue | 🟠 Major

Require an explicit owner before allowing creates on the global board.

defaultOwner = agents[0]?.id means every task created from /tasks is attributed to the first agent in the list. The current flow doesn't offer an owner picker or an owner edit path, so in multi-agent instances this will persist incorrect owner_agent_id values. Hide create until the user selects an owner, or add owner selection to the dialog.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interface/src/routes/GlobalTasks.tsx` around lines 23 - 31, Currently
defaultOwner = agents[0]?.id silently attributes new tasks to the first agent;
instead require an explicit owner selection: remove the implicit defaultOwner
assignment and pass ownerAgentId={selectedOwnerId} (initially undefined) into
TaskBoard, and update TaskBoard usage/props (e.g., ownerAgentId, onOwnerSelect
or disableCreateWhenNoOwner) so the create UI is hidden/disabled until
selectedOwnerId is set; alternatively add an owner picker modal triggered by
TaskBoard that calls onOwnerSelect to set selectedOwnerId before allowing
creates. Ensure references: defaultOwner, agents, TaskBoard, ownerAgentId, and
any new selectedOwnerId/onOwnerSelect props are used to gate creation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
src/api/agents.rs (1)

403-408: ⚠️ Potential issue | 🟠 Major

Fail fast instead of recreating per-agent task stores.

These fallbacks silently point task operations back at the agent SQLite DB when ApiState.task_store is unset. That breaks the new single-store invariant and can split task state between the instance-level tasks.db and per-agent spacebot.db. In create_agent_internal, the required lookup should also happen before the config / directory / DB side effects so a missing global store cannot leave a half-created agent behind.

💡 Suggested fix
@@
-        let task_store = state
-            .task_store
-            .load()
-            .as_ref()
-            .clone()
-            .unwrap_or_else(|| Arc::new(crate::tasks::TaskStore::new(sqlite_pool.clone())));
+        let task_store = state.task_store.load().as_ref().clone().ok_or_else(|| {
+            tracing::error!("global task store not initialized for warmup trigger");
+            StatusCode::SERVICE_UNAVAILABLE
+        })?;
@@
-    let task_store =
-        state.task_store.load().as_ref().clone().unwrap_or_else(|| {
-            std::sync::Arc::new(crate::tasks::TaskStore::new(db.sqlite.clone()))
-        });
+    let task_store = state.task_store.load().as_ref().clone().ok_or_else(|| {
+        tracing::error!(agent_id = %agent_id, "global task store not initialized");
+        "global task store not initialized".to_string()
+    })?;

For create_agent_internal, move that required lookup up near the start of the function so it fails before writing config.toml or initializing agent-local resources.

Also applies to: 719-722

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/api/agents.rs` around lines 403 - 408, The code currently falls back to
creating per-agent TaskStore when ApiState.task_store is unset (see the
task_store lookup using ApiState.task_store.load() and unwrap_or_else(||
Arc::new(crate::tasks::TaskStore::new(sqlite_pool.clone())))), which can split
state; instead, change create_agent_internal to perform the global task_store
lookup near the start and fail fast if ApiState.task_store is None (return an
error or propagate), and remove the fallback creation logic (do not call
TaskStore::new for per-agent DB); update all call sites (including the other
occurrence around lines 719-722) to expect or propagate the missing global store
rather than silently constructing a new one.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/main.rs`:
- Around line 1591-1594: The migration currently treats per-agent read errors as
non-fatal and still writes the ".tasks_migrated" marker, causing skipped agents
on future boots; update the migration logic in
spacebot::tasks::migration::migrate_legacy_tasks (implementation in
src/tasks/migration.rs) to fail the whole migration when any agent migration
errors occur (or alternatively accumulate per-agent results and only write the
.tasks_migrated marker after all agents migrated successfully), i.e. replace the
current "log-and-skip" behavior with error propagation or a consolidated Err
return so the caller (the startup call to migrate_legacy_tasks) will abort and
not write the completion marker on partial failures. Ensure the function's
signature and callers still compile and preserve the existing context-wrapping
at the call site.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7bad6708-112e-4082-8303-80cbd456f0e1

📥 Commits

Reviewing files that changed from the base of the PR and between 595c682 and d069002.

📒 Files selected for processing (8)
  • interface/src/api/client.ts
  • interface/src/router.tsx
  • src/agent/channel.rs
  • src/api/agents.rs
  • src/api/state.rs
  • src/lib.rs
  • src/main.rs
  • tests/context_dump.rs
💤 Files with no reviewable changes (2)
  • src/lib.rs
  • tests/context_dump.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • interface/src/router.tsx

Review findings (PR #468, issue #472):
- Replace silent per-agent TaskStore fallback with fail-fast (agents.rs)
- Use monotonic high-water-mark sequence for task numbers instead of
  MAX(task_number), preventing number reuse after deletes
- Force clear worker_id when reassigning a task to a different agent
- Normalize all timestamps to ISO 8601 (replace datetime('now') with
  strftime, normalize legacy timestamps during migration)
- Reject execute on pending_approval tasks (must approve first)
- Fail migration on per-agent error instead of writing marker after
  partial success
- Add create_dir_all before global task DB connect
- Guard create dialog submit on isPending to prevent duplicates
- Use agent_id OR filter so agent view shows owned + assigned tasks
- Add explicit agent picker on global tasks page
- Show truncation warning when task list hits limit

Upstream fix:
- Add missing team_id and mattermost fields to config test fixtures
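Two of the beads_rust-borrowed invariants above — the monotonic high-water-mark sequence and the atomic claim — can be sketched against SQLite directly. Shown here with Python's stdlib `sqlite3` for brevity (the real store is Rust); the reduced schema and function names are illustrative:

```python
# Illustrative sketch of two invariants from the findings list:
# 1. task numbers come from a high-water-mark row, never MAX(task_number),
#    so deleted numbers are never reused;
# 2. claiming is a compare-and-set on status, keyed by assigned_agent_id.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE task_number_seq (
  id INTEGER PRIMARY KEY CHECK (id = 1),
  next INTEGER NOT NULL
);
INSERT INTO task_number_seq VALUES (1, 1);
CREATE TABLE tasks (
  task_number INTEGER PRIMARY KEY,
  assigned_agent_id TEXT,
  status TEXT NOT NULL,
  created_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%SZ', 'now'))
);
""")


def create_task(agent_id):
    # Bump the high-water mark, then read back the allocated number.
    db.execute("UPDATE task_number_seq SET next = next + 1")
    n = db.execute("SELECT next - 1 FROM task_number_seq").fetchone()[0]
    db.execute(
        "INSERT INTO tasks (task_number, assigned_agent_id, status)"
        " VALUES (?, ?, 'ready')",
        (n, agent_id),
    )
    return n


def claim_next_ready(agent_id):
    row = db.execute(
        "SELECT MIN(task_number) FROM tasks"
        " WHERE status = 'ready' AND assigned_agent_id = ?",
        (agent_id,),
    ).fetchone()
    if row[0] is None:
        return None
    # Compare-and-set: the WHERE clause re-checks status, so only one
    # caller can flip ready -> in_progress for this task.
    cur = db.execute(
        "UPDATE tasks SET status = 'in_progress'"
        " WHERE task_number = ? AND status = 'ready'",
        (row[0],),
    )
    return row[0] if cur.rowcount == 1 else None
```

Deleting a task leaves the sequence untouched, so the next create still allocates a fresh number, and the `strftime` default keeps `created_at` in ISO 8601.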
@jamiepine jamiepine changed the title feat: global task database infrastructure (Step 1 of task elevation) feat: elevate task system to instance-level global scope Mar 24, 2026
@jamiepine jamiepine merged commit 193871b into main Mar 24, 2026
5 checks passed