The existing retry options are a bit limiting:

When the elasticsearch cluster is inaccessible (e.g. during a network interruption), we want the connector to be resilient to that situation and to resume operation when connectivity to elasticsearch is reestablished. And we want it to reconnect in a timely manner.

So first of all, it would be nice to have a way to set max.retries to "unlimited". Granted, the max value of 2147483647 is pretty darn large, and probably going to be enough in practice.

But more importantly, we really need to put an upper limit on the retry backoff. It is nice that it is designed to "wait up to twice as long as the previous wait", but if elasticsearch is down for a few hours, that lets the backoff grow to an unacceptably long interval.

We could really use an option to cap the growth of the backoff at some maximum value.
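For illustration, here is a rough sketch of the behavior we are after, assuming a hypothetical retry.backoff.max.ms setting alongside the existing retry.backoff.ms and max.retries (the setting name and the jitter are made up for this example, not something the connector provides today):

```java
import java.util.concurrent.ThreadLocalRandom;

public final class CappedBackoffSketch {

    // attempt starts at 0; assumes maxBackoffMs >= initialBackoffMs > 0,
    // i.e. retry.backoff.max.ms (hypothetical) >= retry.backoff.ms.
    static long nextWaitMs(int attempt, long initialBackoffMs, long maxBackoffMs) {
        if (initialBackoffMs <= 0) {
            return 0;
        }
        // Double the upper bound each attempt, but never let it grow past the cap.
        long doubled = initialBackoffMs << Math.min(attempt, 62);
        long bound = (doubled < 0) ? maxBackoffMs : Math.min(maxBackoffMs, doubled);
        // "Wait up to twice as long as the previous wait": pick a random wait
        // within the bound so retries from multiple tasks don't fire in lockstep.
        return ThreadLocalRandom.current().nextLong(bound + 1);
    }
}
```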
I was looking into this a little. There does seem to be an upper limit on the backoff time, which is 24 hours. This value is also applied automatically after 32 attempts (due to overflow).
Did you want to make this configurable instead of it being a constant?
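For reference, the capping I'm describing looks roughly like the following; this is a paraphrased sketch rather than the actual source, with MAX_RETRY_TIME_MS standing in for the 24-hour constant:

```java
import java.util.concurrent.TimeUnit;

public final class CurrentBehaviorSketch {

    // Stand-in for the hard-coded 24-hour ceiling mentioned above.
    private static final long MAX_RETRY_TIME_MS = TimeUnit.HOURS.toMillis(24);

    static long retryWaitTimeMs(int attempts, long initialBackoffMs) {
        if (initialBackoffMs <= 0 || attempts <= 0) {
            return Math.max(initialBackoffMs, 0);
        }
        // Left-shifting doubles the backoff per attempt; once the shifted value
        // overflows (goes negative) or exceeds the ceiling, the ceiling wins.
        long backoff = initialBackoffMs << Math.min(attempts, 62);
        return (backoff < 0 || backoff > MAX_RETRY_TIME_MS) ? MAX_RETRY_TIME_MS : backoff;
    }
}
```

Making that ceiling configurable would seem to cover the cap requested above.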