feat: elevate task system to instance-level global scope #468
Conversation
Introduce the foundation for elevating the task system from per-agent scope to a single global instance-level database shared across all agents.
Changes:
- New migration directory (migrations/global/) with the global tasks schema featuring globally unique task_number, owner_agent_id and assigned_agent_id columns, and ISO 8601 timestamp defaults
- New connect_global_tasks() function in db.rs for connecting to and migrating the instance-level tasks.db
- Initialize the global task pool during startup in main.rs alongside the existing secrets.redb instance-level store
- Design specification for the full task elevation roadmap
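As a rough illustration of the path handling connect_global_tasks() needs (the function name comes from the PR description; the helper below is a hypothetical std-only sketch, with the actual sqlx connection and migration steps elided):

```rust
use std::path::Path;

/// Build the SQLite connection URL for the instance-level tasks.db.
/// `mode=rwc` asks SQLite to create the file if it does not exist,
/// but it will NOT create missing parent directories.
fn global_tasks_url(data_dir: &Path) -> String {
    let db_path = data_dir.join("tasks.db");
    format!("sqlite:{}?mode=rwc", db_path.display())
}
```

The review comment further down flags exactly the parent-directory gap this comment mentions: the URL alone is not enough if the data directory is missing.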
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
Walkthrough
Migrates per-agent task storage to a single instance-level global task system: adds a global SQLite tasks DB and migration, rewires Rust task APIs/store to use owner/assigned agent IDs and TaskListFilter, and moves HTTP routes to the single /tasks surface.
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
    let db_path = data_dir.join("tasks.db");
    let url = format!("sqlite:{}?mode=rwc", db_path.display());
Minor robustness: if {instance_dir}/data doesn’t exist yet, sqlite:...tasks.db connect will fail with “unable to open database file”. Creating the directory here keeps startup behavior predictable.
Suggested change:
    std::fs::create_dir_all(data_dir).with_context(|| {
        format!(
            "failed to create global task database directory: {}",
            data_dir.display()
        )
    })?;
    let db_path = data_dir.join("tasks.db");
    let url = format!("sqlite:{}?mode=rwc", db_path.display());
…t model
Replace agent-scoped task storage with a global model where tasks have owner_agent_id (creator) and assigned_agent_id (executor) instead of a single agent_id. Task numbers are now globally unique (no composite key).
TaskStore API changes:
- create() takes owner_agent_id + assigned_agent_id instead of agent_id
- list() takes a TaskListFilter struct instead of a positional agent_id param
- get_by_number() no longer requires agent_id (globally unique numbers)
- update() no longer requires agent_id; gains an assigned_agent_id field
- delete() no longer requires agent_id
- claim_next_ready() filters by assigned_agent_id
- list_ready() filters by assigned_agent_id
All 24 consumer call sites updated across tools (task_create, task_list, task_update, send_agent_message), API handlers, and the cortex ready-task loop. Cortex test fixtures updated to use the global schema.
New tests: global_task_numbers_are_unique_across_agents, list_filters_by_assigned_agent, claim_next_ready_scopes_by_assigned_agent, reassign_task_via_update
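The TaskListFilter mentioned in the commit message could look roughly like the following sketch. The field set is assumed from the owner/assigned split described above; the in-memory matches() helper only illustrates the filter semantics, since the real store presumably translates the filter into SQL:

```rust
#[derive(Default)]
struct TaskListFilter {
    owner_agent_id: Option<String>,
    assigned_agent_id: Option<String>,
}

struct TaskRow {
    owner_agent_id: String,
    assigned_agent_id: Option<String>,
}

impl TaskListFilter {
    /// A `None` field means "don't filter on this column".
    fn matches(&self, task: &TaskRow) -> bool {
        self.owner_agent_id
            .as_ref()
            .map_or(true, |id| *id == task.owner_agent_id)
            && self.assigned_agent_id.as_ref().map_or(true, |id| {
                task.assigned_agent_id.as_deref() == Some(id.as_str())
            })
    }
}
```

A struct filter like this also keeps list() extensible: new optional filter columns can be added without breaking existing call sites, unlike positional parameters.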
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/api/tasks.rs (1)
136-143: ⚠️ Potential issue | 🔴 Critical: Reapply agent scoping on the global-by-number handlers.
These handlers now fetch/update/delete by global task_number, while query.agent_id / request.agent_id is only used to pick a store entry. With the shared global store, any caller who knows #N can fetch, approve, execute, update, or delete another agent's task. Gate each operation on the task's assigned_agent_id / owner_agent_id, or push the agent into the SQL WHERE clause. Also applies to: 215-237, 260-280, 296-320, 336-377
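The gate the reviewer asks for can be expressed as a small visibility predicate, sketched below. Names are assumed from the review text, and the real handlers would call something like this after get_by_number() and return NOT_FOUND or FORBIDDEN on false:

```rust
struct Task {
    owner_agent_id: String,
    assigned_agent_id: Option<String>,
}

/// A task is visible to a caller only if the caller owns it or is
/// assigned to it. Handlers fetch the row first, then apply this check
/// before returning or mutating the task.
fn visible_to(task: &Task, caller_agent_id: &str) -> bool {
    task.owner_agent_id == caller_agent_id
        || task.assigned_agent_id.as_deref() == Some(caller_agent_id)
}
```

The alternative the review mentions, pushing the agent into the SQL WHERE clause, avoids the read-then-check round trip but couples the scoping rule to every query.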
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/tasks.rs` around lines 136 - 143, the handler is fetching tasks by global task_number via store.get_by_number(number) without enforcing the caller's agent scope; update the logic after retrieving the task to verify that task.assigned_agent_id or task.owner_agent_id equals query.agent_id (or modify store.get_by_number to accept an agent_id filter) and return StatusCode::FORBIDDEN when they differ; apply the same fix to the other affected handlers referenced (around the blocks at 215-237, 260-280, 296-320, 336-377) so every global-by-number operation (get_by_number, approve/update/delete/execute flows) enforces agent scoping before proceeding.
src/tasks/store.rs (1)
371-406: ⚠️ Potential issue | 🟠 Major: Use the migration's ISO-8601 expression on updates too.
The new schema writes task timestamps with strftime('%Y-%m-%dT%H:%M:%SZ', 'now'), but these paths still persist datetime('now'). Because read_timestamp returns TEXT values verbatim, touched rows will start returning "YYYY-MM-DD HH:MM:SS" while untouched rows return ISO/RFC 3339 strings, and string ordering on updated_at becomes unreliable. Switch every write here to the same strftime(...) expression, or normalize string reads before returning them. Also applies to: 470-471, 623-644
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tasks/store.rs` around lines 371 - 406, The code uses sqlite datetime('now') in approved_at, completed_at and the updated_at assignment while the migration uses ISO-8601 via strftime('%Y-%m-%dT%H:%M:%SZ','now'); change every occurrence of "datetime('now')" in the task update path (variables approved_at, completed_at, and the query fragment that appends "updated_at = datetime('now')") to use "strftime('%Y-%m-%dT%H:%M:%SZ','now')" so writes remain consistent with the migration; apply the same replacement to the other update sites mentioned (around the blocks currently at 470-471 and 623-644) to ensure all timestamps written by this store use the ISO-8601 expression.
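One way to keep this mismatch from creeping back is to centralize the SQL expression in a single constant that every write path reuses. A sketch (constant and function names are invented, not from the PR):

```rust
/// The ISO-8601 timestamp expression the migration uses for column
/// defaults; every UPDATE in the store should reuse this instead of
/// datetime('now'), so TEXT comparisons on updated_at stay consistent.
const NOW_ISO8601: &str = "strftime('%Y-%m-%dT%H:%M:%SZ', 'now')";

/// Build the SQL fragment that touches updated_at for a row.
fn touch_updated_at_sql(table: &str) -> String {
    format!("UPDATE {table} SET updated_at = {NOW_ISO8601} WHERE id = ?1")
}
```

With a single constant, a grep for datetime('now') in the store becomes a reliable lint for regressions.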
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/tasks/store.rs`:
- Around line 158-159: When assigned_agent_id changes you must not leave the old
worker_id attached; update the task update logic that mutates assigned_agent_id
to either reject the reassignment while worker_id is set or atomically clear
worker_id and set status away from in_progress when the assignee changes.
Specifically, in the Task struct update/assignment code that touches
assigned_agent_id (and the code paths mentioned around lines 364-422), detect a
change in assigned_agent_id versus the existing value and either return an error
if worker_id.is_some(), or call clear_worker_id() and set status != in_progress
before applying the new assigned_agent_id; also ensure the detached-worker
completion path in src/agent/cortex.rs uses the same contract (don’t complete or
update a task if assigned_agent_id no longer matches the worker) so a detached
worker cannot keep updating a reassigned task.
In `@src/tools/task_update.rs`:
- Around line 20-22: The TaskStore::update path is now global-by-task_number but
agent_id in the TaskUpdate tool is unused, allowing cross-agent updates; modify
the code in src/tools/task_update.rs (including the logic around lines
referenced 156-236) to first fetch the task by task_number (e.g., call
TaskStore::get or equivalent) and verify that the fetched
task.owner/assigned_agent equals self.agent_id, returning an authorization/error
when it does not; only call TaskStore::update after this ownership check, or
alternatively reintroduce a scoped update helper that enforces self.agent_id
ownership before applying changes (ensure you update the code paths that
currently call TaskStore::update to use this ownership-checked flow).
---
Outside diff comments:
In `@src/api/tasks.rs`:
- Around line 136-143: The handler is fetching tasks by global task_number via
store.get_by_number(number) without enforcing the caller's agent scope; update
the logic after retrieving the task to verify that task.assigned_agent_id or
task.owner_agent_id equals query.agent_id (or modify store.get_by_number to
accept an agent_id filter) and return StatusCode::FORBIDDEN when they differ;
apply the same fix to the other affected handlers referenced (around the blocks
at 215-237, 260-280, 296-320, 336-377) so every global-by-number operation
(get_by_number, approve/update/delete/execute flows) enforces agent scoping
before proceeding.
In `@src/tasks/store.rs`:
- Around line 371-406: The code uses sqlite datetime('now') in approved_at,
completed_at and the updated_at assignment while the migration uses ISO-8601 via
strftime('%Y-%m-%dT%H:%M:%SZ','now'); change every occurrence of
"datetime('now')" in the task update path (variables approved_at, completed_at,
and the query fragment that appends "updated_at = datetime('now')") to use
"strftime('%Y-%m-%dT%H:%M:%SZ','now')" so writes remain consistent with the
migration; apply the same replacement to the other update sites mentioned
(around the blocks currently at 470-471 and 623-644) to ensure all timestamps
written by this store use the ISO-8601 expression.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7f8a0c69-3d93-47a4-a512-b047e1e3bc1d
📒 Files selected for processing (8)
src/agent/cortex.rs, src/api/tasks.rs, src/tasks.rs, src/tasks/store.rs, src/tools/send_agent_message.rs, src/tools/task_create.rs, src/tools/task_list.rs, src/tools/task_update.rs
✅ Files skipped from review due to trivial changes (1)
- src/tasks.rs
    /// Reassign the task to a different agent.
    pub assigned_agent_id: Option<String>,
Reassignment needs to break the old worker association.
assigned_agent_id can change while worker_id is silently preserved. Reassigning an in_progress task leaves the old worker attached, so get_by_worker_id here and the detached-worker completion path in src/agent/cortex.rs can keep updating a task that now belongs to someone else. Reject reassignment while a worker is bound, or force clear_worker_id / a non-in_progress status when the assignee changes.
Also applies to: 364-422
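The contract the reviewer proposes can be sketched as follows. Types and field names are simplified stand-ins for the real Task struct; the point is only the state transition, detaching the worker and leaving in_progress whenever the assignee changes:

```rust
#[derive(Debug, PartialEq)]
enum Status {
    Ready,
    InProgress,
}

struct Task {
    assigned_agent_id: Option<String>,
    worker_id: Option<String>,
    status: Status,
}

/// When the assignee changes while a worker is still bound, detach the
/// worker and drop out of in_progress so the old worker can no longer
/// complete a task that now belongs to someone else.
fn reassign(task: &mut Task, new_assignee: String) {
    if task.assigned_agent_id.as_deref() != Some(new_assignee.as_str()) {
        if task.worker_id.take().is_some() && task.status == Status::InProgress {
            task.status = Status::Ready;
        }
        task.assigned_agent_id = Some(new_assignee);
    }
}
```

The stricter alternative the review also offers, rejecting the update outright while worker_id is set, pushes the decision to the caller instead of silently resetting state.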
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tasks/store.rs` around lines 158 - 159, When assigned_agent_id changes
you must not leave the old worker_id attached; update the task update logic that
mutates assigned_agent_id to either reject the reassignment while worker_id is
set or atomically clear worker_id and set status away from in_progress when the
assignee changes. Specifically, in the Task struct update/assignment code that
touches assigned_agent_id (and the code paths mentioned around lines 364-422),
detect a change in assigned_agent_id versus the existing value and either return
an error if worker_id.is_some(), or call clear_worker_id() and set status !=
in_progress before applying the new assigned_agent_id; also ensure the
detached-worker completion path in src/agent/cortex.rs uses the same contract
(don’t complete or update a task if assigned_agent_id no longer matches the
worker) so a detached worker cannot keep updating a reassigned task.
    // Retained for future authorization checks on global task updates.
    #[allow(dead_code)]
    agent_id: AgentId,
Enforce task ownership before branch-scoped updates.
TaskStore::update is now global by task_number, but agent_id is explicitly unused here. In branch scope that means an agent can update another agent’s task just by referencing its number once this tool is backed by the shared global store. Fetch the task first and reject when it is not owned/assigned to self.agent_id, or restore a scoped update helper.
Also applies to: 156-236
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tools/task_update.rs` around lines 20 - 22, The TaskStore::update path is
now global-by-task_number but agent_id in the TaskUpdate tool is unused,
allowing cross-agent updates; modify the code in src/tools/task_update.rs
(including the logic around lines referenced 156-236) to first fetch the task by
task_number (e.g., call TaskStore::get or equivalent) and verify that the
fetched task.owner/assigned_agent equals self.agent_id, returning an
authorization/error when it does not; only call TaskStore::update after this
ownership check, or alternatively reintroduce a scoped update helper that
enforces self.agent_id ownership before applying changes (ensure you update the
code paths that currently call TaskStore::update to use this ownership-checked
flow).
Replace the per-agent task store map and cross-agent task_store_registry with a single global TaskStore shared across all agents. This eliminates the fragile cross-database write pattern where SendAgentMessageTool looked up target agent stores from a registry.
Removed fields:
- AgentDeps.task_store_registry (lib.rs)
- ApiState.task_stores HashMap (api/state.rs)
- ApiState.task_store_registry (api/state.rs)
Added/changed:
- ApiState.task_store: ArcSwap<Option<Arc<TaskStore>>> (single global)
- ApiState.set_task_store() replaces set_task_stores()
- SendAgentMessageTool.task_store replaces task_store_registry
- initialize_agents() takes Arc<TaskStore> instead of registry
- All agents share the same global_task_store via AgentDeps
Cleanup:
- Removed task_store_registry population loop after agent init
- Removed per-agent TaskStore::new(db.sqlite) in agent init
- Removed per-agent task_stores map build for API state
- Simplified API task handlers from per-agent map lookup to single store
- Updated test fixtures (context_dump.rs, bulletin.rs)
Introduce a tasks::migration module that runs on startup before agent initialization. It scans each agent's per-agent spacebot.db for legacy task rows and inserts them into the global tasks.db with new globally unique task numbers.
Migration behavior:
- Idempotent via a .tasks_migrated marker file in the instance data dir
- Preserves original task UUIDs (skips already-migrated rows)
- Maps delegating_agent_id metadata to owner_agent_id for cross-agent tasks; assigned_agent_id comes from the legacy agent_id
- Enriches metadata with legacy_agent_id and legacy_task_number
- Handles both ISO 8601 TEXT and legacy TIMESTAMP column formats
- Per-agent databases are opened read-only, never modified
- Logs warnings and continues if individual agent migration fails
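The marker-file idempotency guard described above is a simple filesystem check; a minimal sketch (function names invented, error handling reduced to io::Result):

```rust
use std::fs;
use std::path::Path;

/// The migration runs only when no marker exists in the instance data dir.
fn needs_migration(data_dir: &Path) -> bool {
    !data_dir.join(".tasks_migrated").exists()
}

/// Write the marker after migration so later startups skip the scan.
fn write_marker(data_dir: &Path) -> std::io::Result<()> {
    fs::write(data_dir.join(".tasks_migrated"), b"")
}
```

As a later review comment points out, a single global marker is only safe if it is written strictly after all per-agent migrations succeed; otherwise partial failures become permanent.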
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/api/tasks.rs (1)
145-151: ⚠️ Potential issue | 🔴 Critical: Reintroduce agent scoping on the non-list task endpoints.
Lines 145, 223, 266, 302, and 340 now operate on a global task number without checking that the row is visible to request/query.agent_id. After removing per-agent stores, that lets callers read, approve, execute, update, or delete another agent's task if they know the number, and delete_task() will also publish the SSE event under the caller-supplied agent_id instead of the task's real assignee. Fetch first and return NOT_FOUND on mismatch, at least against assigned_agent_id, consistent with list_tasks(), before returning or mutating the task. Also applies to: 221-243, 265-285, 300-315, 339-370
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/api/agents.rs`:
- Around line 403-408: The current code silently falls back to creating a
per-agent TaskStore when ApiState.task_store is unset; remove the unwrap_or_else
fallback and instead fail fast: in create_agent_internal check
ApiState.task_store.load() at the top and return an Err (or propagate an error)
if it's None so no AgentDeps/Channel::new/SendAgentMessageTool is constructed
with a local store; replace the clone().unwrap_or_else(...) logic around
task_store with a required lookup that errors if missing, and apply the same
change to the other occurrence that uses the same fallback.
In `@src/tasks/migration.rs`:
- Around line 69-86: The current logic writes the global marker (write_marker /
marker_path) after looping agents even when some agent migrations aborted inside
migrate_agent_tasks, which can permanently skip unfinished agents; change the
flow so that the global marker is only written for an agent after that agent's
migration completed successfully (i.e., only call write_marker for that agent
when migrate_agent_tasks returned Ok and no errors occurred), or alternatively
implement per-agent success markers/progress (e.g., per-agent .tasks_migrated
files or persisted last-processed task index) so partial migrations can be
resumed; update the code paths around migrate_agent_tasks, the outer agent loop,
and the write_marker call to ensure markers are written atomically per-agent
only on complete success.
- Around line 264-283: The helper functions read_timestamp and
read_optional_timestamp currently return TEXT timestamp strings verbatim; update
them to detect and parse legacy SQLite TEXT forms (e.g. "YYYY-MM-DD HH:MM:SS"
optionally with fractional seconds) before returning by attempting
chrono::NaiveDateTime::parse_from_str with a format like "%Y-%m-%d %H:%M:%S%.f"
(fall back to the existing behavior if parsing fails): for read_timestamp try
parsing the string value and, on success, convert with .and_utc().to_rfc3339()
(otherwise keep the current NaiveDateTime handling and final fallback to
chrono::Utc::now()), and for read_optional_timestamp do the same
parsing/normalization for Some(string) values before returning Option<String>.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f0ccaf40-72a1-40c0-ac88-8d300b604dbd
📒 Files selected for processing (11)
src/agent/channel.rs, src/api/agents.rs, src/api/state.rs, src/api/tasks.rs, src/lib.rs, src/main.rs, src/tasks.rs, src/tasks/migration.rs, src/tools/send_agent_message.rs, tests/bulletin.rs, tests/context_dump.rs
💤 Files with no reviewable changes (3)
- tests/bulletin.rs
- src/lib.rs
- tests/context_dump.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/tasks.rs
    let task = store
-       .get_by_number(&query.agent_id, number)
+       .get_by_number(number)
The agent-scoped /agents/tasks read path lost its visibility check when tasks became globally addressed. get_by_number(number) will now return any task if the caller knows #N, regardless of query.agent_id. The same problem exists in the update/approve/execute/delete handlers below. Please fetch first and return NOT_FOUND unless the task is actually visible to the requested agent (at minimum assigned_agent_id, consistent with list_tasks()).
    write_marker(&marker_path)?;
This writes the global migration marker even if one or more agents failed inside migrate_agent_tasks(). That makes partial migration permanent: later startups skip the migration entirely, leaving some legacy task sets stranded forever. The marker needs to reflect full success, or progress needs to be tracked per agent so failed agents can be retried safely.
src/api/agents.rs (outdated)
    ));
-   let task_store = std::sync::Arc::new(crate::tasks::TaskStore::new(db.sqlite.clone()));
+   let task_store =
+       state.task_store.load().as_ref().clone().unwrap_or_else(|| {
If the global task store is unexpectedly unset, this silently falls back to a per-agent TaskStore backed by spacebot.db. That breaks the new global-store invariant and can create split-brain task state depending on startup order. This should fail fast instead of quietly recreating the old topology.
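Failing fast here reduces to refusing to proceed when the Option is unset instead of falling back. A sketch of the shape (the real field is an ArcSwap on ApiState, here reduced to a plain Option for illustration):

```rust
use std::sync::Arc;

struct TaskStore;

/// Refuse to construct agent dependencies when the global store is
/// missing: at this point startup ordering is broken, and silently
/// creating a per-agent store would split task state across databases.
fn require_global_store(
    store: Option<Arc<TaskStore>>,
) -> Result<Arc<TaskStore>, String> {
    store.ok_or_else(|| "global task store not initialized".to_string())
}
```

Propagating this error out of create_agent_internal surfaces the misconfiguration immediately instead of reviving the old per-agent topology.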
    metadata
}

fn read_timestamp(row: &sqlx::sqlite::SqliteRow, column: &str) -> String {
These helpers return TEXT timestamps verbatim. If legacy rows contain the usual SQLite YYYY-MM-DD HH:MM:SS form from the old TIMESTAMP columns, the migrated global store will end up with mixed timestamp formats. Please normalize legacy text forms to RFC 3339 before inserting them into the new TEXT columns.
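The normalization step can be illustrated without pulling in chrono, since the legacy shape is fixed-width. A std-only sketch (the review suggests chrono parsing, which also validates the digits; this version only rewrites the exact "YYYY-MM-DD HH:MM:SS" shape and passes everything else through unchanged):

```rust
/// Normalize a legacy SQLite "YYYY-MM-DD HH:MM:SS" TEXT timestamp to
/// the RFC 3339 shape the new schema writes ("YYYY-MM-DDTHH:MM:SSZ").
/// Values that don't match the legacy shape are returned as-is.
fn normalize_legacy_timestamp(value: &str) -> String {
    let v = value.trim();
    if v.len() == 19 && v.as_bytes().get(10) == Some(&b' ') {
        format!("{}T{}Z", &v[..10], &v[11..])
    } else {
        v.to_string()
    }
}
```

Running every TEXT timestamp read through a helper like this during migration keeps the global store's updated_at ordering reliable.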
Replace dual /agents/tasks + /tasks API surfaces with a single /tasks endpoint. All handlers use the global task store with optional owner_agent_id/assigned_agent_id filters. Extract shared kanban components into TaskBoard.tsx so both the agent-scoped view (/agents/$agentId/tasks) and the new top-level view (/tasks) reuse the same board, columns, cards, and dialogs. Add Tasks icon to the sidebar between Orchestrate and Settings.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/api/tasks.rs (1)
371-385: ⚠️ Potential issue | 🟠 Major: execute_task currently bypasses the approval gate.
A pending_approval task falls through to the Ready update path here, and approved_by can still be None. That makes the dedicated approve route optional and lets tasks enter the runnable state with no approver recorded. The UI already exposes this path for pending-approval cards, so this is reachable today. Reject execute requests for pending_approval, or funnel execute through the same approval rules.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/tasks.rs` around lines 371 - 385, The execute_task flow incorrectly allows tasks with TaskStatus::PendingApproval to be updated to Ready without an approver; in the execute_task handler, check for crate::tasks::TaskStatus::PendingApproval (via current.status) and either return an error response (reject the execute request) or enforce the same approval logic used by the approve route (require request.approved_by to be Some and/or call the approval path) before calling store.update; adjust the branch around the existing matches(...) check and the call to store.update(...) so PendingApproval does not fall through to the Ready update and ensure approved_by is validated when transitioning to Ready.
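The gate the review asks for boils down to one precondition check before the status transition. A sketch (enum trimmed to the relevant variants; the real handler would map the error to an HTTP response):

```rust
#[derive(Clone, Copy, PartialEq)]
enum TaskStatus {
    PendingApproval,
    Ready,
}

/// Execute may promote a task to Ready only if it is not awaiting
/// approval, or an approver has been recorded for the request.
fn may_execute(
    status: TaskStatus,
    approved_by: Option<&str>,
) -> Result<(), &'static str> {
    if status == TaskStatus::PendingApproval && approved_by.is_none() {
        return Err("task requires approval before execution");
    }
    Ok(())
}
```

Alternatively, execute could reuse the approve route's logic outright so there is exactly one place where PendingApproval is allowed to leave that state.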
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/components/TaskBoard.tsx`:
- Around line 222-230: The UI is hard-capping api.listTasks to limit:200 via the
useQuery call (queryKey / queryFn) and then treating tasks.length as the full
total in the toolbar, which silently drops items past page 1; fix by either
adding proper pagination/infinite loading (replace useQuery with
useInfiniteQuery and call api.listTasks with page/limit or cursor, using
queryKey that includes page/cursor and merging pages before computing totals) or
by changing the API to return total/has_more and updating the list rendering and
toolbar to use returned total/has_more (do not use tasks.length as the
authoritative total). Update all occurrences (e.g., the call using api.listTasks
at queryKey and the toolbar code referencing tasks.length around the other
mention) so truncation is surfaced to users and more items are fetched on scroll
or via pagination controls.
- Around line 588-601: handleSubmit can fire twice before the disabled UI state
appears; guard the in-flight mutation by returning early when an "isPending" (or
local submitting) flag is set, and set/clear that flag around the create call.
Concretely: inside handleSubmit check if isPending (or a new submitting
ref/state) and return if true; set submitting = true immediately before calling
onCreate({...}), and in the promise finally set submitting = false; update the
hook dependencies to include the submitting value (or use a ref to avoid
re-creating the callback). Ensure you still call setTitle, setDescription,
setPriority, and setStatus after the create completes or immediately if that
behavior is desired.
In `@interface/src/routes/AgentTasks.tsx`:
- Around line 3-4: AgentTasks currently passes ownerAgentId but TaskBoard only
uses its agentId prop to filter assigned_agent_id, dropping tasks that are owned
but delegated; update TaskBoard (the TaskBoard component and its
data-fetching/filter logic) to accept and use the ownerAgentId prop and change
the query/filter so it returns tasks where owner_agent_id === ownerAgentId OR
assigned_agent_id === agentId (or equivalent DB predicate), ensuring both owned
and currently-assigned tasks are shown.
In `@interface/src/routes/GlobalTasks.tsx`:
- Around line 23-31: Currently defaultOwner = agents[0]?.id silently attributes
new tasks to the first agent; instead require an explicit owner selection:
remove the implicit defaultOwner assignment and pass
ownerAgentId={selectedOwnerId} (initially undefined) into TaskBoard, and update
TaskBoard usage/props (e.g., ownerAgentId, onOwnerSelect or
disableCreateWhenNoOwner) so the create UI is hidden/disabled until
selectedOwnerId is set; alternatively add an owner picker modal triggered by
TaskBoard that calls onOwnerSelect to set selectedOwnerId before allowing
creates. Ensure references: defaultOwner, agents, TaskBoard, ownerAgentId, and
any new selectedOwnerId/onOwnerSelect props are used to gate creation.
---
Outside diff comments:
In `@src/api/tasks.rs`:
- Around line 371-385: The execute_task flow incorrectly allows tasks with
TaskStatus::PendingApproval to be updated to Ready without an approver; in the
execute_task handler, check for crate::tasks::TaskStatus::PendingApproval (via
current.status) and either return an error response (reject the execute request)
or enforce the same approval logic used by the approve route (require
request.approved_by to be Some and/or call the approval path) before calling
store.update; adjust the branch around the existing matches(...) check and the
call to store.update(...) so PendingApproval does not fall through to the Ready
update and ensure approved_by is validated when transitioning to Ready.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d2037602-15e2-4f97-9211-784e68e96236
📒 Files selected for processing (8)
interface/src/api/client.ts, interface/src/components/Sidebar.tsx, interface/src/components/TaskBoard.tsx, interface/src/router.tsx, interface/src/routes/AgentTasks.tsx, interface/src/routes/GlobalTasks.tsx, src/api/server.rs, src/api/tasks.rs
    const { data, isLoading } = useQuery({
      queryKey,
      queryFn: () =>
        api.listTasks({
          assigned_agent_id: agentId,
          limit: 200,
        }),
      refetchInterval: 15_000,
    });
Don't present a truncated list as the full task board.
This hard-caps the query at 200 rows, but the toolbar later reports tasks.length as if it's the full total. On the new global board, anything beyond that first page silently disappears with no warning. Add pagination/infinite loading, or have the API return total/has_more so the UI can surface truncation.
Also applies to: 318-320
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/components/TaskBoard.tsx` around lines 222 - 230, The UI is
hard-capping api.listTasks to limit:200 via the useQuery call (queryKey /
queryFn) and then treating tasks.length as the full total in the toolbar, which
silently drops items past page 1; fix by either adding proper
pagination/infinite loading (replace useQuery with useInfiniteQuery and call
api.listTasks with page/limit or cursor, using queryKey that includes
page/cursor and merging pages before computing totals) or by changing the API to
return total/has_more and updating the list rendering and toolbar to use
returned total/has_more (do not use tasks.length as the authoritative total).
Update all occurrences (e.g., the call using api.listTasks at queryKey and the
toolbar code referencing tasks.length around the other mention) so truncation is
surfaced to users and more items are fetched on scroll or via pagination
controls.
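On the backend side, the has_more signal the review suggests is commonly implemented by fetching one extra row past the page size. A sketch of that server-side piece (names invented; the Rust handler would put limit + 1 into the SQL LIMIT and post-process like this):

```rust
/// Given up to `limit + 1` rows fetched from the store, truncate to the
/// requested page and report whether more pages exist.
fn page_with_has_more<T>(mut rows: Vec<T>, limit: usize) -> (Vec<T>, bool) {
    let has_more = rows.len() > limit;
    rows.truncate(limit);
    (rows, has_more)
}
```

This avoids a separate COUNT(*) query while still letting the UI surface truncation instead of presenting tasks.length as the total.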
    const handleSubmit = useCallback(() => {
      if (!title.trim()) return;
      onCreate({
        owner_agent_id: ownerAgentId,
        title: title.trim(),
        description: description.trim() || undefined,
        priority,
        status,
      });
      setTitle("");
      setDescription("");
      setPriority("medium");
      setStatus("backlog");
    }, [title, description, priority, status, ownerAgentId, onCreate]);
Block duplicate creates while the mutation is in flight.
handleSubmit doesn't guard on isPending, so a fast double-click or Enter key repeat can still invoke createTask twice before the disabled state lands. Because creation is non-idempotent, that can produce duplicate tasks.
Proposed fix
const handleSubmit = useCallback(() => {
- if (!title.trim()) return;
+ if (isPending || !title.trim()) return;
onCreate({
owner_agent_id: ownerAgentId,
title: title.trim(),
description: description.trim() || undefined,
priority,
status,
});
setTitle("");
setDescription("");
setPriority("medium");
setStatus("backlog");
- }, [title, description, priority, status, ownerAgentId, onCreate]);
+ }, [title, description, priority, status, ownerAgentId, onCreate, isPending]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/components/TaskBoard.tsx` around lines 588 - 601, handleSubmit
can fire twice before the disabled UI state appears; guard the in-flight
mutation by returning early when an "isPending" (or local submitting) flag is
set, and set/clear that flag around the create call. Concretely: inside
handleSubmit check if isPending (or a new submitting ref/state) and return if
true; set submitting = true immediately before calling onCreate({...}), and in
the promise finally set submitting = false; update the hook dependencies to
include the submitting value (or use a ref to avoid re-creating the callback).
Ensure you still call setTitle, setDescription, setPriority, and setStatus after
the create completes or immediately if that behavior is desired.
```tsx
export function AgentTasks({ agentId }: { agentId: string }) {
  const queryClient = useQueryClient();
  const { taskEventVersion } = useLiveContext();

  // Invalidate on SSE task events
  const prevVersion = useRef(taskEventVersion);
  useEffect(() => {
    if (taskEventVersion !== prevVersion.current) {
      prevVersion.current = taskEventVersion;
      queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
    }
  }, [taskEventVersion, agentId, queryClient]);

  const { data, isLoading } = useQuery({
    queryKey: ["tasks", agentId],
    queryFn: () => api.listTasks(agentId, { limit: 200 }),
    refetchInterval: 15_000,
  });

  const tasks = data?.tasks ?? [];

  // Group tasks by status
  const tasksByStatus: Record<TaskStatus, TaskItem[]> = {
    pending_approval: [],
    backlog: [],
    ready: [],
    in_progress: [],
    done: [],
  };
  for (const task of tasks) {
    tasksByStatus[task.status]?.push(task);
  }

  // Create task dialog
  const [createOpen, setCreateOpen] = useState(false);
  // Detail dialog — store task number and derive from live list to stay current.
  const [selectedTaskNumber, setSelectedTaskNumber] = useState<number | null>(
    null,
  );
  const selectedTask =
    selectedTaskNumber !== null
      ? (tasks.find((t) => t.task_number === selectedTaskNumber) ?? null)
      : null;

  const createMutation = useMutation({
    mutationFn: (request: CreateTaskRequest) =>
      api.createTask(agentId, request),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
      setCreateOpen(false);
    },
  });

  const updateMutation = useMutation({
    mutationFn: ({
      taskNumber,
      ...request
    }: {
      taskNumber: number;
      status?: TaskStatus;
      priority?: TaskPriority;
    }) => api.updateTask(agentId, taskNumber, request),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
    },
  });

  const approveMutation = useMutation({
    mutationFn: (taskNumber: number) =>
      api.approveTask(agentId, taskNumber, "human"),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
    },
  });

  const executeMutation = useMutation({
    mutationFn: (taskNumber: number) => api.executeTask(agentId, taskNumber),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
    },
  });

  const deleteMutation = useMutation({
    mutationFn: (taskNumber: number) => api.deleteTask(agentId, taskNumber),
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ["tasks", agentId] });
      setSelectedTaskNumber(null);
    },
  });

  if (isLoading) {
    return (
      <div className="flex h-full items-center justify-center text-ink-faint">
        Loading tasks...
      </div>
    );
  }

  return (
    <div className="flex h-full flex-col">
      {/* Toolbar */}
      <div className="flex items-center justify-between border-b border-app-line px-4 py-2">
        <div className="flex items-center gap-3">
          <span className="text-sm text-ink-dull">
            {tasks.length} task{tasks.length !== 1 ? "s" : ""}
          </span>
          {tasksByStatus.pending_approval.length > 0 && (
            <Badge variant="amber" size="sm">
              {tasksByStatus.pending_approval.length} pending approval
            </Badge>
          )}
          {tasksByStatus.in_progress.length > 0 && (
            <Badge variant="violet" size="sm">
              {tasksByStatus.in_progress.length} in progress
            </Badge>
          )}
        </div>
        <Button size="sm" onClick={() => setCreateOpen(true)}>
          Create Task
        </Button>
      </div>

      {/* Kanban Board */}
      <div className="flex flex-1 flex-wrap content-start gap-3 overflow-y-auto p-4">
        {COLUMNS.map(({ status, label }) => (
          <KanbanColumn
            key={status}
            status={status}
            label={label}
            tasks={tasksByStatus[status]}
            onSelect={(task) => setSelectedTaskNumber(task.task_number)}
            onApprove={(task) => approveMutation.mutate(task.task_number)}
            onExecute={(task) => executeMutation.mutate(task.task_number)}
            onStatusChange={(task, newStatus) =>
              updateMutation.mutate({
                taskNumber: task.task_number,
                status: newStatus,
              })
            }
          />
        ))}
      </div>

      {/* Create Dialog */}
      <CreateTaskDialog
        open={createOpen}
        onClose={() => setCreateOpen(false)}
        onCreate={(request) => createMutation.mutate(request)}
        isPending={createMutation.isPending}
      />

      {/* Detail Dialog */}
      {selectedTask && (
        <TaskDetailDialog
          task={selectedTask}
          onClose={() => setSelectedTaskNumber(null)}
          onApprove={() => approveMutation.mutate(selectedTask.task_number)}
          onExecute={() => executeMutation.mutate(selectedTask.task_number)}
          onDelete={() => deleteMutation.mutate(selectedTask.task_number)}
          onStatusChange={(status) =>
            updateMutation.mutate({
              taskNumber: selectedTask.task_number,
              status,
            })
          }
        />
      )}
    </div>
  );
}

// -- Kanban Column --

function KanbanColumn({
  status,
  label,
  tasks,
  onSelect,
  onApprove,
  onExecute,
  onStatusChange,
}: {
  status: TaskStatus;
  label: string;
  tasks: TaskItem[];
  onSelect: (task: TaskItem) => void;
  onApprove: (task: TaskItem) => void;
  onExecute: (task: TaskItem) => void;
  onStatusChange: (task: TaskItem, status: TaskStatus) => void;
}) {
  return (
    <div className="flex min-h-0 min-w-[14rem] flex-1 basis-[14rem] flex-col rounded-lg border border-app-line/50 bg-app-darkBox/20">
      {/* Column Header */}
      <div className="flex items-center gap-2 border-b border-app-line/30 px-3 py-2">
        <Badge variant={STATUS_COLORS[status]} size="sm">
          {label}
        </Badge>
        <span className="text-tiny text-ink-faint">{tasks.length}</span>
      </div>

      {/* Cards */}
      <div className="flex-1 space-y-2 overflow-y-auto p-2">
        <AnimatePresence mode="popLayout">
          {tasks.map((task) => (
            <TaskCard
              key={task.id}
              task={task}
              onSelect={() => onSelect(task)}
              onApprove={() => onApprove(task)}
              onExecute={() => onExecute(task)}
              onStatusChange={(newStatus) => onStatusChange(task, newStatus)}
            />
          ))}
        </AnimatePresence>
        {tasks.length === 0 && (
          <div className="py-4 text-center text-tiny text-ink-faint">
            No tasks
          </div>
        )}
      </div>
    </div>
  );
}

// -- Task Card --

function TaskCard({
  task,
  onSelect,
  onApprove,
  onExecute,
  onStatusChange,
}: {
  task: TaskItem;
  onSelect: () => void;
  onApprove: () => void;
  onExecute: () => void;
  onStatusChange: (status: TaskStatus) => void;
}) {
  const subtasksDone = task.subtasks.filter((s) => s.completed).length;
  const subtasksTotal = task.subtasks.length;

  return (
    <motion.div
      layout
      initial={{ opacity: 0, scale: 0.95 }}
      animate={{ opacity: 1, scale: 1 }}
      exit={{ opacity: 0, scale: 0.95 }}
      transition={{ duration: 0.15 }}
      className="cursor-pointer rounded-md border border-app-line/30 bg-app p-3 transition-colors hover:border-app-line"
      onClick={onSelect}
    >
      {/* Title row */}
      <div className="flex items-start justify-between gap-2">
        <span className="text-sm font-medium text-ink leading-tight">
          #{task.task_number} {task.title}
        </span>
      </div>

      {/* Meta row */}
      <div className="mt-2 flex flex-wrap items-center gap-1.5">
        <Badge variant={PRIORITY_COLORS[task.priority]} size="sm">
          {PRIORITY_LABELS[task.priority]}
        </Badge>
        {subtasksTotal > 0 && (
          <span className="text-tiny text-ink-faint">
            {subtasksDone}/{subtasksTotal}
          </span>
        )}
        {task.worker_id && (
          <Badge variant="violet" size="sm">
            Worker
          </Badge>
        )}
        <GithubMetadataBadges metadata={task.metadata} compact />
      </div>

      {/* Subtask progress bar */}
      {subtasksTotal > 0 && (
        <div className="mt-2 h-1 overflow-hidden rounded-full bg-app-line/30">
          <div
            className="h-full rounded-full bg-accent transition-all"
            style={{ width: `${(subtasksDone / subtasksTotal) * 100}%` }}
          />
        </div>
      )}

      {/* Quick actions */}
      <div className="mt-2 flex gap-1" onClick={(e) => e.stopPropagation()}>
        {task.status === "pending_approval" && (
          <button
            className="rounded px-1.5 py-0.5 text-tiny text-accent hover:bg-accent/10"
            onClick={onApprove}
          >
            Approve
          </button>
        )}
        {(task.status === "backlog" || task.status === "pending_approval") && (
          <button
            className="rounded px-1.5 py-0.5 text-tiny text-violet-400 hover:bg-violet-400/10"
            onClick={onExecute}
          >
            Execute
          </button>
        )}
        {task.status === "in_progress" && (
          <button
            className="rounded px-1.5 py-0.5 text-tiny text-emerald-400 hover:bg-emerald-400/10"
            onClick={() => onStatusChange("done")}
          >
            Mark Done
          </button>
        )}
      </div>

      {/* Footer */}
      <div className="mt-1.5 text-tiny text-ink-faint">
        {formatTimeAgo(task.created_at)} by {task.created_by}
      </div>
    </motion.div>
  );
}

// -- Create Task Dialog --

function CreateTaskDialog({
  open,
  onClose,
  onCreate,
  isPending,
}: {
  open: boolean;
  onClose: () => void;
  onCreate: (request: CreateTaskRequest) => void;
  isPending: boolean;
}) {
  const [title, setTitle] = useState("");
  const [description, setDescription] = useState("");
  const [priority, setPriority] = useState<TaskPriority>("medium");
  const [status, setStatus] = useState<TaskStatus>("backlog");

  const handleSubmit = useCallback(() => {
    if (!title.trim()) return;
    onCreate({
      title: title.trim(),
      description: description.trim() || undefined,
      priority,
      status,
    });
    setTitle("");
    setDescription("");
    setPriority("medium");
    setStatus("backlog");
  }, [title, description, priority, status, onCreate]);

  return (
    <Dialog open={open} onOpenChange={(v) => !v && onClose()}>
      <DialogContent>
        <DialogHeader>
          <DialogTitle>Create Task</DialogTitle>
        </DialogHeader>
        <div className="flex flex-col gap-3 py-2">
          <div>
            <label className="mb-1 block text-xs text-ink-dull">Title</label>
            <input
              className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink placeholder:text-ink-faint focus:border-accent focus:outline-none"
              placeholder="Task title..."
              value={title}
              onChange={(e) => setTitle(e.target.value)}
              onKeyDown={(e) => e.key === "Enter" && handleSubmit()}
              autoFocus
            />
          </div>
          <div>
            <label className="mb-1 block text-xs text-ink-dull">
              Description
            </label>
            <textarea
              className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink placeholder:text-ink-faint focus:border-accent focus:outline-none"
              placeholder="Optional description..."
              value={description}
              onChange={(e) => setDescription(e.target.value)}
              rows={3}
            />
          </div>
          <div className="flex gap-4">
            <div className="flex-1">
              <label className="mb-1 block text-xs text-ink-dull">
                Priority
              </label>
              <select
                className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink focus:border-accent focus:outline-none"
                value={priority}
                onChange={(e) => setPriority(e.target.value as TaskPriority)}
              >
                <option value="critical">Critical</option>
                <option value="high">High</option>
                <option value="medium">Medium</option>
                <option value="low">Low</option>
              </select>
            </div>
            <div className="flex-1">
              <label className="mb-1 block text-xs text-ink-dull">
                Initial Status
              </label>
              <select
                className="w-full rounded-md border border-app-line bg-app-darkBox px-3 py-2 text-sm text-ink focus:border-accent focus:outline-none"
                value={status}
                onChange={(e) => setStatus(e.target.value as TaskStatus)}
              >
                <option value="pending_approval">Pending Approval</option>
                <option value="backlog">Backlog</option>
                <option value="ready">Ready</option>
              </select>
            </div>
          </div>
        </div>
        <DialogFooter>
          <Button variant="outline" size="sm" onClick={onClose}>
            Cancel
          </Button>
          <Button
            size="sm"
            onClick={handleSubmit}
            disabled={!title.trim() || isPending}
          >
            {isPending ? "Creating..." : "Create"}
          </Button>
        </DialogFooter>
      </DialogContent>
    </Dialog>
  );
}

// -- Task Detail Dialog --

function TaskDetailDialog({
  task,
  onClose,
  onApprove,
  onExecute,
  onDelete,
  onStatusChange,
}: {
  task: TaskItem;
  onClose: () => void;
  onApprove: () => void;
  onExecute: () => void;
  onDelete: () => void;
  onStatusChange: (status: TaskStatus) => void;
}) {
  return (
    <Dialog open={true} onOpenChange={(v) => !v && onClose()}>
      <DialogContent className="!flex max-h-[85vh] max-w-lg !flex-col overflow-hidden">
        <DialogHeader className="shrink-0">
          <DialogTitle>
            #{task.task_number} {task.title}
          </DialogTitle>
        </DialogHeader>
        <div className="flex min-h-0 flex-1 flex-col gap-3 overflow-y-auto py-2 pr-1">
          {/* Status + Priority */}
          <div className="flex items-center gap-2">
            <Badge variant={STATUS_COLORS[task.status]} size="md">
              {task.status.replace("_", " ")}
            </Badge>
            <Badge variant={PRIORITY_COLORS[task.priority]} size="md">
              {PRIORITY_LABELS[task.priority]}
            </Badge>
            {task.worker_id && (
              <Badge variant="violet" size="md">
                Worker: {task.worker_id.slice(0, 8)}
              </Badge>
            )}
          </div>

          {/* Description */}
          {task.description && (
            <div>
              <label className="mb-1 block text-xs text-ink-dull">
                Description
              </label>
              <Markdown className="break-words text-sm text-ink">
                {task.description}
              </Markdown>
            </div>
          )}

          {/* Subtasks */}
          {task.subtasks.length > 0 && (
            <div>
              <label className="mb-1 block text-xs text-ink-dull">
                Subtasks ({task.subtasks.filter((s) => s.completed).length}/
                {task.subtasks.length})
              </label>
              <ul className="space-y-1">
                {task.subtasks.map((subtask, index) => (
                  <li key={index} className="flex items-center gap-2 text-sm">
                    <span
                      className={
                        subtask.completed
                          ? "text-emerald-400"
                          : "text-ink-faint"
                      }
                    >
                      {subtask.completed ? "[x]" : "[ ]"}
                    </span>
                    <span
                      className={
                        subtask.completed
                          ? "text-ink-dull line-through"
                          : "text-ink"
                      }
                    >
                      {subtask.title}
                    </span>
                  </li>
                ))}
              </ul>
            </div>
          )}

          {(() => {
            const githubRefs = getGithubReferences(task.metadata);
            if (githubRefs.length === 0) return null;
            return (
              <div>
                <label className="mb-1 block text-xs text-ink-dull">
                  GitHub Links
                </label>
                <GithubMetadataBadges references={githubRefs} />
              </div>
            );
          })()}

          {/* Metadata */}
          <div className="grid grid-cols-1 gap-2 text-xs text-ink-dull sm:grid-cols-2">
            <div>Created: {formatTimeAgo(task.created_at)}</div>
            <div>By: {task.created_by}</div>
            {task.approved_at && (
              <div>Approved: {formatTimeAgo(task.approved_at)}</div>
            )}
            {task.approved_by && <div>By: {task.approved_by}</div>}
            {task.completed_at && (
              <div>Completed: {formatTimeAgo(task.completed_at)}</div>
            )}
            <div>Updated: {formatTimeAgo(task.updated_at)}</div>
          </div>
        </div>

        <DialogFooter className="shrink-0">
          <div className="flex w-full flex-col gap-3 sm:flex-row sm:items-center sm:justify-between">
            <button
              className="text-xs text-red-400 hover:text-red-300"
              onClick={onDelete}
            >
              Delete
            </button>
            <div className="flex flex-wrap gap-2 sm:justify-end">
              {task.status === "pending_approval" && (
                <Button size="sm" onClick={onApprove}>
                  Approve
                </Button>
              )}
              {(task.status === "backlog" ||
                task.status === "pending_approval") && (
                <Button size="sm" onClick={onExecute}>
                  Execute
                </Button>
              )}
              {task.status === "ready" && (
                <Badge variant="accent" size="md">
                  Waiting for cortex pickup
                </Badge>
              )}
              {task.status === "in_progress" && (
                <Button size="sm" onClick={() => onStatusChange("done")}>
                  Mark Done
                </Button>
              )}
              {task.status === "done" && (
                <Button
                  size="sm"
                  variant="outline"
                  onClick={() => onStatusChange("backlog")}
                >
                  Reopen
                </Button>
              )}
              <Button size="sm" variant="outline" onClick={onClose}>
                Close
              </Button>
            </div>
          </div>
        </DialogFooter>
      </DialogContent>
    </Dialog>
  );
}
```
```tsx
return <TaskBoard agentId={agentId} ownerAgentId={agentId} />;
```
This route now drops owned-but-delegated tasks.
TaskBoard uses agentId only as assigned_agent_id, so once one of this agent's tasks is reassigned it disappears from /agents/$agentId/tasks even though owner_agent_id is still this agent. That changes the scoped view from “tasks for this agent” to “tasks currently assigned here” and removes visibility into delegated work.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/AgentTasks.tsx` around lines 3 - 4, AgentTasks currently
passes ownerAgentId but TaskBoard only uses its agentId prop to filter
assigned_agent_id, dropping tasks that are owned but delegated; update TaskBoard
(the TaskBoard component and its data-fetching/filter logic) to accept and use
the ownerAgentId prop and change the query/filter so it returns tasks where
owner_agent_id === ownerAgentId OR assigned_agent_id === agentId (or equivalent
DB predicate), ensuring both owned and currently-assigned tasks are shown.
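The combined predicate the prompt asks for can be sketched as a plain filter. The `TaskRow` shape and `visibleToAgent` helper here are illustrative; the field names mirror the PR's schema.

```typescript
// A task belongs on an agent's board if the agent owns it OR is currently
// assigned to it — owned-but-delegated tasks stay visible.
interface TaskRow {
  owner_agent_id: string;
  assigned_agent_id: string | null;
}

function visibleToAgent(task: TaskRow, agentId: string): boolean {
  return task.owner_agent_id === agentId || task.assigned_agent_id === agentId;
}
```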
interface/src/routes/GlobalTasks.tsx
Outdated
```tsx
// Use the first agent as default owner for task creation.
// In the global view this is a reasonable default — the user can reassign later.
const defaultOwner = agents[0]?.id;

return (
  <TaskBoard
    ownerAgentId={defaultOwner}
    agentNames={agentNames}
    showAgentBadge={agents.length > 1}
```
Require an explicit owner before allowing creates on the global board.
defaultOwner = agents[0]?.id means every task created from /tasks is attributed to the first agent in the list. The current flow doesn't offer an owner picker or an owner edit path, so in multi-agent instances this will persist incorrect owner_agent_id values. Hide create until the user selects an owner, or add owner selection to the dialog.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/GlobalTasks.tsx` around lines 23 - 31, Currently
defaultOwner = agents[0]?.id silently attributes new tasks to the first agent;
instead require an explicit owner selection: remove the implicit defaultOwner
assignment and pass ownerAgentId={selectedOwnerId} (initially undefined) into
TaskBoard, and update TaskBoard usage/props (e.g., ownerAgentId, onOwnerSelect
or disableCreateWhenNoOwner) so the create UI is hidden/disabled until
selectedOwnerId is set; alternatively add an owner picker modal triggered by
TaskBoard that calls onOwnerSelect to set selectedOwnerId before allowing
creates. Ensure references: defaultOwner, agents, TaskBoard, ownerAgentId, and
any new selectedOwnerId/onOwnerSelect props are used to gate creation.
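The gating rule reduces to a small check. `canCreateTask` is a hypothetical helper, not code from the PR; it captures the invariant that creation on the global board requires an explicitly selected, valid owner rather than an implicit `agents[0]` fallback.

```typescript
// Create is allowed only once the user has picked an owner that actually
// exists in the agent list — no silent first-agent default.
function canCreateTask(
  selectedOwnerId: string | undefined,
  agentIds: string[],
): boolean {
  return selectedOwnerId !== undefined && agentIds.includes(selectedOwnerId);
}
```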
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/api/agents.rs (1)
403-408: ⚠️ Potential issue | 🟠 Major — Fail fast instead of recreating per-agent task stores.
These fallbacks silently point task operations back at the agent SQLite DB when `ApiState.task_store` is unset. That breaks the new single-store invariant and can split task state between the instance-level `tasks.db` and per-agent `spacebot.db`. In `create_agent_internal`, the required lookup should also happen before the config / directory / DB side effects so a missing global store cannot leave a half-created agent behind.💡 Suggested fix
```diff
@@
-        let task_store = state
-            .task_store
-            .load()
-            .as_ref()
-            .clone()
-            .unwrap_or_else(|| Arc::new(crate::tasks::TaskStore::new(sqlite_pool.clone())));
+        let task_store = state.task_store.load().as_ref().clone().ok_or_else(|| {
+            tracing::error!("global task store not initialized for warmup trigger");
+            StatusCode::SERVICE_UNAVAILABLE
+        })?;
@@
-        let task_store =
-            state.task_store.load().as_ref().clone().unwrap_or_else(|| {
-                std::sync::Arc::new(crate::tasks::TaskStore::new(db.sqlite.clone()))
-            });
+        let task_store = state.task_store.load().as_ref().clone().ok_or_else(|| {
+            tracing::error!(agent_id = %agent_id, "global task store not initialized");
+            "global task store not initialized".to_string()
+        })?;
```
For `create_agent_internal`, move that required lookup up near the start of the function so it fails before writing `config.toml` or initializing agent-local resources.

Also applies to: 719-722
create_agent_internal, move that required lookup up near the start of the function so it fails before writingconfig.tomlor initializing agent-local resources.Also applies to: 719-722
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/agents.rs` around lines 403 - 408, The code currently falls back to creating per-agent TaskStore when ApiState.task_store is unset (see the task_store lookup using ApiState.task_store.load() and unwrap_or_else(|| Arc::new(crate::tasks::TaskStore::new(sqlite_pool.clone())))), which can split state; instead, change create_agent_internal to perform the global task_store lookup near the start and fail fast if ApiState.task_store is None (return an error or propagate), and remove the fallback creation logic (do not call TaskStore::new for per-agent DB); update all call sites (including the other occurrence around lines 719-722) to expect or propagate the missing global store rather than silently constructing a new one.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main.rs`:
- Around line 1591-1594: The migration currently treats per-agent read errors as
non-fatal and still writes the ".tasks_migrated" marker, causing skipped agents
on future boots; update the migration logic in
spacebot::tasks::migration::migrate_legacy_tasks (implementation in
src/tasks/migration.rs) to fail the whole migration when any agent migration
errors occur (or alternatively accumulate per-agent results and only write the
.tasks_migrated marker after all agents migrated successfully), i.e. replace the
current "log-and-skip" behavior with error propagation or a consolidated Err
return so the caller (the startup call to migrate_legacy_tasks) will abort and
not write the completion marker on partial failures. Ensure the function's
signature and callers still compile and preserve the existing context-wrapping
at the call site.
---
Duplicate comments:
In `@src/api/agents.rs`:
- Around line 403-408: The code currently falls back to creating per-agent
TaskStore when ApiState.task_store is unset (see the task_store lookup using
ApiState.task_store.load() and unwrap_or_else(||
Arc::new(crate::tasks::TaskStore::new(sqlite_pool.clone())))), which can split
state; instead, change create_agent_internal to perform the global task_store
lookup near the start and fail fast if ApiState.task_store is None (return an
error or propagate), and remove the fallback creation logic (do not call
TaskStore::new for per-agent DB); update all call sites (including the other
occurrence around lines 719-722) to expect or propagate the missing global store
rather than silently constructing a new one.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7bad6708-112e-4082-8303-80cbd456f0e1
📒 Files selected for processing (8)
- interface/src/api/client.ts
- interface/src/router.tsx
- src/agent/channel.rs
- src/api/agents.rs
- src/api/state.rs
- src/lib.rs
- src/main.rs
- tests/context_dump.rs
💤 Files with no reviewable changes (2)
- src/lib.rs
- tests/context_dump.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- interface/src/router.tsx
Review findings (PR #468, issue #472):
- Replace silent per-agent TaskStore fallback with fail-fast (agents.rs)
- Use monotonic high-water-mark sequence for task numbers instead of MAX(task_number), preventing number reuse after deletes
- Force clear worker_id when reassigning a task to a different agent
- Normalize all timestamps to ISO 8601 (replace datetime('now') with strftime, normalize legacy timestamps during migration)
- Reject execute on pending_approval tasks (must approve first)
- Fail migration on per-agent error instead of writing marker after partial success
- Add create_dir_all before global task DB connect
- Guard create dialog submit on isPending to prevent duplicates
- Use agent_id OR filter so agent view shows owned + assigned tasks
- Add explicit agent picker on global tasks page
- Show truncation warning when task list hits limit

Upstream fix:
- Add missing team_id and mattermost fields to config test fixtures
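The high-water-mark finding above can be modeled in a few lines. This is an illustrative sketch, not the Rust implementation: `TaskNumberSeq` mimics the `task_number_seq` idea, and `NaiveStore` shows the `MAX(task_number) + 1` behavior the review replaced.

```typescript
// High-water-mark numbering: the counter only moves forward, so deleting
// task N never frees N for reuse.
class TaskNumberSeq {
  private highWater = 0;

  next(): number {
    this.highWater += 1;
    return this.highWater;
  }
}

// The replaced approach: derive the next number from the current maximum.
// After a delete, the maximum drops and the number gets handed out again.
class NaiveStore {
  numbers = new Set<number>();

  next(): number {
    const n = Math.max(0, ...this.numbers) + 1;
    this.numbers.add(n);
    return n;
  }
}
```

Deleting the highest-numbered task leaves the sequence untouched, while the naive store reissues the freed number.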
Summary
Replaces per-agent task databases with a single global task store. Tasks get globally unique numbers, explicit owner/assigned agent fields, and a unified API + UI surface.
Design doc:
`docs/design-docs/global-task-elevation.md`

Addresses: #472 (borrow patterns from beads_rust — monotonic numbering, atomic claim semantics, schema invariants)
What changed
Database
- `{instance_dir}/data/tasks.db` with global `tasks` table
- `task_number_seq` high-water-mark table — numbers never reuse after deletes
- `agent_id` replaced with `owner_agent_id` (creator) + `assigned_agent_id` (executor)
- ISO 8601 timestamp defaults (`strftime('%Y-%m-%dT%H:%M:%SZ', 'now')`)

Rust
- `TaskStore` refactored from per-agent to global — single `Arc<TaskStore>` shared across all agents
- `task_store_registry` eliminated entirely (was `ArcSwap<HashMap<String, Arc<TaskStore>>>`)
- `SendAgentMessageTool` simplified — creates tasks in global store with cross-agent assignment instead of writing to foreign agent databases
- `claim_next_ready` filters by `assigned_agent_id` with atomic compare-and-set
- Reassignment clears `worker_id` so detached workers can't update reassigned tasks

API
- `/tasks` endpoint surface (no `/agents/tasks` duplication):
  - `GET /tasks` — list with optional `agent_id` (OR filter), `owner_agent_id`, `assigned_agent_id`, `status`, `priority`, `created_by`
  - `POST /tasks` — create with explicit `owner_agent_id`, optional `assigned_agent_id`
  - `GET/PUT/DELETE /tasks/{number}` — globally addressed by number
  - `POST /tasks/{number}/approve` — move to ready
  - `POST /tasks/{number}/execute` — move to ready (rejects `pending_approval` — must approve first)
  - `POST /tasks/{number}/assign` — reassign to different agent

UI
- `TaskBoard` component (`interface/src/components/TaskBoard.tsx`) with kanban columns, task cards, create dialog, detail dialog
- Agent view (`/agents/$agentId/tasks`) uses `agent_id` OR filter — shows both assigned and owned tasks
- Global view (`/tasks`) with agent badges on cards, explicit owner picker for task creation

Review fixes
- Monotonic `task_number_seq` table (no reuse after deletes)
- Clear `worker_id` on reassignment
- `execute_task` rejects `pending_approval` (409 CONFLICT)
- `create_dir_all` before global DB connect
- Config test fixtures updated (missing `team_id`, `mattermost` fields)

Verification
- `cargo check` — clean
- `cargo test --lib` — 756 passed, 0 failed
- `bun run build` — clean
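The atomic claim semantics described for `claim_next_ready` can be modeled in a single-threaded sketch. This is illustrative only (the real implementation is Rust + SQLite, where the compare-and-set happens in one SQL statement): a claim succeeds only while the task is still `ready` and assigned to the caller, and the status flip plus worker stamp happen in the same step.

```typescript
// Illustrative model of claim-next-ready with compare-and-set semantics.
type Status = "ready" | "in_progress" | "done";

interface ClaimableTask {
  status: Status;
  assigned_agent_id: string | null;
  worker_id: string | null;
}

function claimNextReady(
  tasks: ClaimableTask[],
  agentId: string,
  workerId: string,
): ClaimableTask | null {
  for (const task of tasks) {
    // Compare: still ready and assigned to this agent...
    if (task.status === "ready" && task.assigned_agent_id === agentId) {
      // ...and set: flip status and stamp the worker in the same step,
      // so a second claimer no longer matches the predicate.
      task.status = "in_progress";
      task.worker_id = workerId;
      return task;
    }
  }
  return null;
}
```

A second claim by the same agent finds nothing, while another agent's assigned task remains claimable.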