Refresh ack timeout #4030
-
Hi, I am unsure if this is a thing, but...
Replies: 6 comments 5 replies
-
I will convert this issue to a GitHub discussion. GitHub will automatically close and lock the issue, even though your question will be transferred and responded to elsewhere. This is to let you know that we do not intend to ignore it; this is just how the current GitHub conversion mechanism makes it seem to users :(
-
As documented in the Consumers guide, you can increase the limit. Delivery of a message does not imply ownership over it; that concept does not exist in any of the protocols RabbitMQ supports. "Refreshing" would complicate a lot of things, including monitoring, and most consumers do not need anywhere close to the default 30m timeout. If an operation takes more than 30m, consumers performing it should expose a progress metric of some kind and acknowledge the delivery reasonably early. RabbitMQ queues and streams are not meant to be a general-purpose data store, either.
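For reference, the setting in question is `consumer_timeout` in `rabbitmq.conf`, as described in the RabbitMQ configuration documentation. The value is in milliseconds; one hour below is purely an illustrative choice, not a recommendation:

```ini
# rabbitmq.conf
# Raise the delivery acknowledgement timeout from the default 30 minutes.
# Value is in milliseconds: 3600000 ms = 1 hour.
consumer_timeout = 3600000
```

A node restart is required for the new value to take effect.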
-
Related to #2990. For now you will have to adjust the global consumer timeout if you need more time to complete the task. RabbitMQ can't release resources for a queued item until the consumer acks it. We had enough experience with support cases arising from abusive consumers that the timeout was implemented.
-
I think you are correct: a progress metric would make the most sense, although it might require some restructuring. The problem came up with transcoding. I used RabbitMQ as a queue to issue video transcoding tasks. The transcoder picks up a task and acks when the transcoding job is complete. Sometimes, when the videos are several hours long, the job takes longer than 30 minutes. I have since changed the timeout to 12h, but I thought maybe there should be a feature to resolve this.
-
We've had this issue come up with the open source Koha LMS project as well. I'm inclined to acknowledge the message after the task has been processed, as that seems the most robust method when using the message queue as a work/task queue. That's the current approach taken by Koha. However, there are currently cases where some tasks take longer than 30 minutes to complete, and we're getting these consumer timeouts.

I think the optimal solution is to break those tasks up into smaller batches, but that would involve significant refactoring, so it's not a practical short-term solution. In theory, the easiest answer is to change the global consumer timeout, but we distribute the Koha LMS code worldwide, so we don't directly control people's RabbitMQ installations.

I think that leaves Michael's suggestion of acknowledging the message early and then using separate progress metrics to manage the task. For users, I figure they can monitor the result store (which we update as the task progresses). For the system, I'm thinking we implement a cronjob that fails tasks that exceed a configurable, Koha-specified timeout.

Does that sound like an optimal strategy to you, @lukebakken and @michaelklishin? As always, we very much appreciate your assistance. Hope you guys had a good holiday season!
-
We have this issue as well. I wonder if we could at least have a way to configure the timeout value per queue?