fix(teams): remove chatroom fan-out to prevent agent feedback loops#220
jcenters wants to merge 2 commits into TinyAGI:main
Conversation
Every `[#team: ...]` post previously triggered a full queue message for each teammate, causing a new Claude invocation per agent. When agents responded with their own `[#team: ...]` posts, this created an exponential feedback loop that exhausted API token budgets in minutes.

Chat room messages are now stored as history only. Agents read recent chat room messages as passive context when next invoked for a real task, preserving coordination without the runaway cost of active fan-out.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Greptile Summary

This PR fixes a critical exponential feedback loop in the team chatroom system by removing the per-agent `enqueueMessage()` fan-out.
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent as Agent (any)
    participant postToChatRoom
    participant insertChatMessage as insertChatMessage (DB)
    participant enqueueMessage as enqueueMessage (REMOVED)
    participant Teammate as Teammate Agent(s)
    Note over Agent,Teammate: BEFORE (feedback loop)
    Agent->>postToChatRoom: [#team: message]
    postToChatRoom->>insertChatMessage: store history
    postToChatRoom->>enqueueMessage: fan-out to each teammate
    enqueueMessage-->>Teammate: new Claude invocation
    Teammate->>postToChatRoom: [#team: response] (triggers loop)
    Note over Agent,Teammate: AFTER (this PR)
    Agent->>postToChatRoom: [#team: message]
    postToChatRoom->>insertChatMessage: store history only
    Note right of insertChatMessage: No fan-out — teammates<br/>read history as passive<br/>context on next invocation
```
…removal

The `teamAgents` and `originalData` parameters became unused dead code when the `enqueueMessage` fan-out loop was removed. Every call site was computing and passing values that were silently discarded.

Cleaned up the function signature and both call sites:

- packages/teams/src/conversation.ts (agent response handler)
- packages/server/src/routes/chatroom.ts (human REST endpoint)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
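A minimal sketch of the signature cleanup described above. The parameter names `teamAgents` and `originalData` come from the commit message; the types and the history store are assumptions for illustration.

```typescript
interface ChatMessage { room: string; fromAgent: string; text: string }

// BEFORE (sketch): both call sites computed values the function discarded.
// function postToChatRoom(
//   msg: ChatMessage,
//   teamAgents: string[],   // dead once the fan-out loop was removed
//   originalData: unknown,  // dead once the fan-out loop was removed
// ): void

// AFTER: the function takes only what the history write needs.
function postToChatRoom(msg: ChatMessage, history: ChatMessage[]): void {
  history.push(msg); // stand-in for the real insertChatMessage() DB write
}
```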
Agents in a team could trigger runaway feedback loops by sending each other messages indefinitely. Two mechanisms failed to prevent this:

1. The chatroom fan-out (fixed separately in TinyAGI#220) escaped the conversation tracking system entirely, so `totalMessages` never incremented and the `maxMessages` guard never fired.
2. Agent-to-agent @mentions via `sendInternalMessage` had a `maxMessages` guard, but the default was 50 — enough for a 5-hour API limit burn before anything stopped.

This PR adds two independent, layered defenses:

**Rate limiter in enqueueMessage (queues.ts)**

Any message where `fromAgent` is set is agent-generated. Before inserting, count how many agent-to-agent messages the target agent already has queued in the last 60 seconds. If at or above the limit, drop the message and log a `[LoopGuard]` warning instead of enqueuing. Default: 10 messages/minute/agent. Configurable via settings.json:

```json
"protection": { "max_agent_messages_per_minute": 10 }
```

**Conversation chain depth cap (conversation.ts)**

Lower `DEFAULT_MAX_CONVERSATION_MESSAGES` from 50 to 10. Read the effective value from settings.json at conversation creation time so operators can tune it without a code change:

```json
"protection": { "max_chain_depth": 10 }
```

Both limits are independent — the rate limiter catches loops that escape the conversation system (e.g. chatroom messages, new conversations spawned by agents), while the chain depth cap limits depth within a single tracked conversation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
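The rate-limiter idea can be sketched as a sliding-window counter. This is illustrative only: the class name and in-memory store are assumptions (the PR counts queued rows in the DB, and the limit would come from settings.json's `protection.max_agent_messages_per_minute`); the `[LoopGuard]` log prefix and 60-second window follow the description above.

```typescript
interface QueuedStamp { toAgent: string; at: number }

class AgentRateLimiter {
  private recent: QueuedStamp[] = [];

  constructor(
    private maxPerMinute: number = 10,   // default from the PR description
    private windowMs: number = 60_000,   // 60-second sliding window
  ) {}

  /** Returns true if an agent-to-agent message for `toAgent` may be enqueued now. */
  tryAdmit(toAgent: string, now: number = Date.now()): boolean {
    // Drop stamps that have aged out of the window.
    this.recent = this.recent.filter((s) => now - s.at < this.windowMs);
    const count = this.recent.filter((s) => s.toAgent === toAgent).length;
    if (count >= this.maxPerMinute) {
      // Drop the message and warn instead of enqueuing.
      console.warn(`[LoopGuard] dropping agent message to ${toAgent}: ${count} in last minute`);
      return false;
    }
    this.recent.push({ toAgent, at: now });
    return true;
  }
}
```

The key design point is that the guard keys on the *target* agent, so one noisy sender cannot be starved out by traffic aimed at a different teammate.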
Yes this is a real concern and has come up in my testing as well. Thanks for fixing it!
Problem
Every `[#team: ...]` post was fanned out to all teammates as a full queue message, triggering a new Claude invocation per agent. When agents responded with their own `[#team: ...]` posts, this created an exponential feedback loop.

Real-world impact: in a 4-agent crew, a single chatroom post triggered 3 invocations, and each agent's response triggered 2 more. Within an hour of the crew team going live, this loop exhausted a 5-hour Anthropic API token limit entirely.
Root cause
`postToChatRoom()` in `packages/teams/src/conversation.ts` enqueued the chat message for every teammate via `enqueueMessage()`. Agents processed these as regular tasks and often responded with another `[#crew: ...]` post, which triggered the next round.

Fix
Chat room messages are now stored as history only — `insertChatMessage()` still runs (history is preserved), but the `enqueueMessage()` fan-out is removed entirely.

Agents already receive recent chat room history as passive context when invoked for real tasks. This preserves team coordination without the runaway invocation cost.
Trade-off
Agents no longer get a push notification for every chatroom post. They see chat history on their next invocation instead. For the intended use case (status broadcasts, findings, flags) this is the correct behavior — chatroom posts are not meant to demand immediate responses.
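One way the passive-context read might look when an agent is next invoked — purely illustrative; the helper name, the 10-message limit, and the prompt wording are all assumptions, not the PR's code.

```typescript
interface ChatMessage { fromAgent: string; text: string }

// Build a read-only preamble from the most recent chatroom messages,
// to be prepended to the agent's next real task prompt.
function recentChatContext(history: ChatMessage[], limit = 10): string {
  const recent = history.slice(-limit);
  if (recent.length === 0) return "";
  const lines = recent.map((m) => `[#team] ${m.fromAgent}: ${m.text}`);
  return "Recent team chatroom messages (context only; no reply expected):\n" + lines.join("\n");
}
```

Framing the history as "no reply expected" matters here: if the preamble read like an instruction, agents might still post responses and re-enter the loop through the front door.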
🤖 Generated with Claude Code