Conversation

@RUiNtheExtinct

Summary

  • Fixes crash when custom nodes use print() in API mode on Windows
  • Catches the OSError with errno 22 (EINVAL) that can occur in LogInterceptor.flush() when stdout is piped/redirected
  • Adds comprehensive unit tests for the logger flush behavior

Problem

When running ComfyUI in API mode on Windows, print statements from custom nodes crash with:

OSError: [Errno 22] Invalid argument

This occurs because the log interceptor's flush() method fails on piped/redirected streams, even though the write() operation succeeded.

Solution

Wrap the super().flush() call in a try-except block that specifically catches errno 22. This error is safe to ignore because:

  1. The write() already succeeded
  2. The flush callbacks still need to execute
  3. This is a known Windows issue with piped streams
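
A minimal sketch of the guarded flush, assuming LogInterceptor wraps the original stream via io.TextIOWrapper and keeps its flush callbacks in a `_flush_callbacks` list; the constructor and any attribute names other than `_logs_since_flush` are illustrative, not the project's exact code:

```python
import errno
import io


class LogInterceptor(io.TextIOWrapper):
    """Wraps a stream (e.g. sys.stdout) and records writes for API clients."""

    def __init__(self, stream):
        super().__init__(stream.buffer, encoding=stream.encoding,
                         line_buffering=True)
        self._flush_callbacks = []   # registered flush listeners (name assumed)
        self._logs_since_flush = []  # entries written since the last flush

    def flush(self):
        try:
            super().flush()
        except OSError as e:
            # On Windows, flushing a piped/redirected stdout can raise
            # EINVAL (errno 22) even though write() already succeeded.
            # Ignore only that error; anything else is re-raised.
            if e.errno != errno.EINVAL:
                raise
        # The flush callbacks must still run so API clients keep
        # receiving the buffered log entries.
        for callback in self._flush_callbacks:
            callback(self._logs_since_flush)
        self._logs_since_flush = []
```

Only the flush of the underlying buffer is guarded; write() and the callback bookkeeping are untouched by this part of the change.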

Test plan

  • Added unit tests in tests-unit/app_test/test_logger.py
  • All tests pass (python -m pytest tests-unit/app_test/test_logger.py -v)
  • Existing tests still pass

Fixes #11367

When running ComfyUI in API mode on Windows, print() statements from
custom nodes can crash with "OSError: [Errno 22] Invalid argument"
during flush. This occurs because piped/redirected stdout streams on
Windows may fail to flush even after successful writes.

This fix catches OSError with errno 22 (EINVAL) specifically in
LogInterceptor.flush(), allowing the flush callbacks to still execute.
The error is safe to ignore since write() already succeeded.

Fixes comfyanonymous#11367

Move _logs_since_flush reset outside the callback loop so all
registered callbacks receive the same log data instead of only
the first callback getting logs while subsequent ones get an empty list.

Add test to verify multiple callbacks all receive the same logs.
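
Illustratively (the callback list name is assumed), the reset simply moves from inside the loop to after it:

```python
# Before: the buffer was reset inside the loop, so only the first
# callback ever saw the pending entries.
def flush(self):
    for callback in self._flush_callbacks:
        callback(self._logs_since_flush)
        self._logs_since_flush = []  # later callbacks receive []

# After: every callback receives the same batch; reset once at the end.
def flush(self):
    for callback in self._flush_callbacks:
        callback(self._logs_since_flush)
    self._logs_since_flush = []
```
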
Clear _logs_since_flush before iterating callbacks by capturing logs
into a local variable first. This prevents duplicate logs if a
callback raises an exception, since the instance variable is already
cleared before any callback runs.

Add test to verify logs are cleared even when callbacks raise.
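
A sketch of that intermediate ordering (revised again by the next change):

```python
def flush(self):
    # Detach the pending entries before calling any callback, so a
    # callback that raises partway through cannot cause the same
    # entries to be delivered a second time on the next flush.
    pending = self._logs_since_flush
    self._logs_since_flush = []
    for callback in self._flush_callbacks:
        callback(pending)
```
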
Move log clearing to after all callbacks succeed, not before. This
ensures that if any callback raises an exception, the logs remain
available for retry on the next flush call instead of being lost.

The previous approach cleared logs before iterating callbacks,
which meant logs were permanently lost if any callback failed.

If flush callbacks persistently fail (e.g., network issues), logs would
accumulate indefinitely in _logs_since_flush, potentially causing OOM
on long-running servers.

Added a MAX_PENDING_LOGS (10000) limit: when exceeded, the oldest logs
are dropped. This mirrors how the global logs deque uses maxlen.
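
A sketch of the cap. Only the MAX_PENDING_LOGS name and the 10000 value come from the description above; the `_record` helper and the place where the check runs are assumptions for illustration:

```python
MAX_PENDING_LOGS = 10000  # upper bound on entries kept for callback retry

def _record(self, entry):
    """Hypothetical helper that stores one log entry for the flush callbacks."""
    self._logs_since_flush.append(entry)
    # Drop the oldest entries once the retry buffer exceeds the cap, so a
    # persistently failing callback cannot grow memory without bound.
    overflow = len(self._logs_since_flush) - MAX_PENDING_LOGS
    if overflow > 0:
        del self._logs_since_flush[:overflow]
```
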
If a callback modifies the logs list (e.g., clear()) and a subsequent
callback raises an exception, the logs preserved for retry would be
corrupted. Callbacks now receive a shallow copy instead.
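
Putting the last three changes together, the flush path would look roughly like this (a sketch under the same naming and import assumptions as the earlier sketches; only flush(), _logs_since_flush, and MAX_PENDING_LOGS are named in the PR text):

```python
class LogInterceptor(io.TextIOWrapper):
    # ... __init__ as in the earlier sketch ...

    def flush(self):
        try:
            super().flush()
        except OSError as e:
            if e.errno != errno.EINVAL:  # tolerate only the Windows pipe quirk
                raise
        # Hand callbacks a shallow copy so a callback that mutates the
        # list (e.g. calls clear()) cannot corrupt the entries kept for
        # retry.
        pending = list(self._logs_since_flush)
        for callback in self._flush_callbacks:
            callback(pending)
        # Reached only if every callback succeeded. On failure the
        # entries stay in _logs_since_flush (bounded by MAX_PENDING_LOGS)
        # and are retried on the next flush.
        self._logs_since_flush = []
```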