Search before asking
- [x] I have searched the Inference issues and found no similar bug report.
Bug
The same workflow was tested in two setups, measuring only the latency of single-frame processing inside the `WorkflowRunner.run_workflow(...)` function:
- MacBook, bare metal, running `InferencePipeline` directly in a script: ~40 ms
- MacBook, inside a Docker container, behind the API: ~110 ms
On top of that, the MacBook becomes unusable: CPU utilisation hits 100% when `InferencePipeline` runs inside the container 😢
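For reference, a minimal sketch of how per-frame latency can be observed from the prediction callback on bare metal, assuming `InferencePipeline.init_with_workflow`. The workspace/workflow names and the video file are placeholders, and this measures end-to-end frame latency rather than the `run_workflow(...)`-internal timing quoted above:

```python
from datetime import datetime

from inference import InferencePipeline

latencies_ms = []


def on_prediction(predictions: dict, video_frame) -> None:
    # video_frame.frame_timestamp is stamped when the frame is grabbed,
    # so the delta to now() approximates end-to-end per-frame latency
    # (the ~40 ms / ~110 ms above were measured deeper, inside
    # WorkflowRunner.run_workflow(...) itself)
    latency_ms = (datetime.now() - video_frame.frame_timestamp).total_seconds() * 1000
    latencies_ms.append(latency_ms)
    print(f"frame {video_frame.frame_id}: {latency_ms:.1f} ms")


# workspace / workflow identifiers are placeholders, not the actual
# workflow behind the numbers in this report
pipeline = InferencePipeline.init_with_workflow(
    workspace_name="my-workspace",
    workflow_id="my-workflow",
    video_reference="video.mp4",
    on_prediction=on_prediction,
    api_key="<ROBOFLOW_API_KEY>",
)
pipeline.start()
pipeline.join()
```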
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?
- [x] Yes, I'd like to help by submitting a PR!