Add daemon mode (-d) to keep model loaded for multiple batch processing #10
Open
ollm wants to merge 3 commits into upscayl:master from
Conversation
I have added a daemon mode that allows the model to remain loaded in memory so images can be processed faster.
To make these changes I had to rely on Copilot, since my knowledge of C++ is limited. I reviewed the code it generated and fixed all the issues I found. I also tested it on both Linux and Windows, and both the new daemon mode and the normal mode work correctly. I haven't been able to test it on macOS because I don't have a Mac.
With daemon mode, the model can be loaded at startup so that when the user interacts (for example, in Upscayl), each image is processed much faster. At least on my graphics card (AMD RX 6700, on both Linux and Windows), model loading takes a considerable amount of time (several seconds), and with this mode that cost is paid once instead of once per image. For some models, loading even takes longer than the actual image processing.
You can also see my implementation of process spawning in Node in this other repository:
https://github.com/ollm/opencomic-ai-bin
https://github.com/ollm/opencomic-ai-bin/blob/b348e3754245cb633d02937d35a7270e53670c3a/index.mts#L1221
Example usage
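The original example was not captured in this extract. As a sketch of how a host application might drive the new mode from Node (in the spirit of the opencomic-ai-bin spawning code linked above): the `-d` flag comes from this PR, but the binary name, the `-i`/`-o`/`-n` option names, and the idea of sending jobs over stdin are assumptions here, not the PR's confirmed interface.

```typescript
import { spawn, ChildProcess } from "node:child_process";

// Build the CLI argument list. -d (daemon) is the flag added by this PR;
// -i/-o/-n follow the usual realesrgan-ncnn-vulkan conventions, but treat
// the exact names as assumptions rather than this PR's confirmed interface.
export function buildArgs(opts: {
  daemon?: boolean;
  input?: string;
  output?: string;
  model?: string;
}): string[] {
  const args: string[] = [];
  if (opts.daemon) args.push("-d");
  if (opts.input) args.push("-i", opts.input);
  if (opts.output) args.push("-o", opts.output);
  if (opts.model) args.push("-n", opts.model);
  return args;
}

// Spawn the binary once so the model stays resident, then feed it jobs.
// The per-job stdin protocol is hypothetical.
export function startDaemon(binary: string, model: string): ChildProcess {
  return spawn(binary, buildArgs({ daemon: true, model }), {
    stdio: ["pipe", "pipe", "inherit"],
  });
}
```

A host like Upscayl could then write one command per image to `child.stdin` and wait for a completion line on stdout, instead of paying the model-load cost for every image.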
Daemon comparative performance

Times for 10 images, 512×512 px (values were not preserved in this extract):

| Model | Daemon mode | Time |
| --- | --- | --- |
| OpenComic AI Upscale Lite | Disabled | |
| OpenComic AI Upscale Lite | Enabled | |
| RealESRGAN x4 Plus | Disabled | |
| RealESRGAN x4 Plus | Enabled | |