Merged
optimize: consistentHash endpoint selection by using binary search where it is faster than a linear scan
% benchstat old.txt new.txt
goos: linux
goarch: amd64
pkg: github.com/zalando/skipper/proxy
cpu: AMD Ryzen 7 PRO 4750U with Radeon Graphics
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
ConsistentHashSelectEndpoint/10_endpoints 195.8n ± 3% 182.7n ± 2% -6.72% (p=0.001 n=10)
ConsistentHashSelectEndpoint/10_endpoints-2 243.4n ± 2% 231.6n ± 3% -4.87% (p=0.001 n=10)
ConsistentHashSelectEndpoint/10_endpoints-4 235.2n ± 2% 218.1n ± 3% -7.27% (p=0.000 n=10)
ConsistentHashSelectEndpoint/10_endpoints-8 240.6n ± 3% 232.3n ± 2% -3.41% (p=0.000 n=10)
ConsistentHashSelectEndpoint/10_endpoints-16 239.9n ± 3% 231.1n ± 1% -3.65% (p=0.000 n=10)
ConsistentHashSelectEndpoint/100_endpoints 339.0n ± 5% 334.4n ± 1% -1.36% (p=0.011 n=10)
ConsistentHashSelectEndpoint/100_endpoints-2 376.8n ± 2% 360.0n ± 3% -4.45% (p=0.000 n=10)
ConsistentHashSelectEndpoint/100_endpoints-4 390.5n ± 4% 356.1n ± 3% -8.80% (p=0.000 n=10)
ConsistentHashSelectEndpoint/100_endpoints-8 394.8n ± 4% 370.8n ± 1% -6.07% (p=0.000 n=10)
ConsistentHashSelectEndpoint/100_endpoints-16 391.8n ± 3% 368.4n ± 1% -5.97% (p=0.000 n=10)
ConsistentHashSelectEndpoint/250_endpoints 276.8n ± 3% 286.2n ± 1% +3.41% (p=0.003 n=10)
ConsistentHashSelectEndpoint/250_endpoints-2 296.0n ± 4% 303.2n ± 4% +2.45% (p=0.043 n=10)
ConsistentHashSelectEndpoint/250_endpoints-4 283.3n ± 3% 303.1n ± 1% +6.99% (p=0.000 n=10)
ConsistentHashSelectEndpoint/250_endpoints-8 303.9n ± 2% 318.3n ± 1% +4.77% (p=0.000 n=10)
ConsistentHashSelectEndpoint/250_endpoints-16 302.1n ± 1% 314.4n ± 1% +4.09% (p=0.000 n=10)
ConsistentHashSelectEndpoint/300_endpoints 272.9n ± 1% 288.4n ± 1% +5.68% (p=0.000 n=10)
ConsistentHashSelectEndpoint/300_endpoints-2 277.3n ± 2% 304.7n ± 2% +9.86% (p=0.000 n=10)
ConsistentHashSelectEndpoint/300_endpoints-4 271.9n ± 3% 296.1n ± 1% +8.88% (p=0.000 n=10)
ConsistentHashSelectEndpoint/300_endpoints-8 276.0n ± 1% 307.1n ± 2% +11.27% (p=0.000 n=10)
ConsistentHashSelectEndpoint/300_endpoints-16 284.3n ± 3% 304.2n ± 1% +6.98% (p=0.000 n=10)
ConsistentHashSelectEndpoint/400_endpoints 968.4n ± 2% 710.4n ± 1% -26.64% (p=0.000 n=10)
ConsistentHashSelectEndpoint/400_endpoints-2 976.3n ± 2% 698.5n ± 4% -28.45% (p=0.000 n=10)
ConsistentHashSelectEndpoint/400_endpoints-4 975.3n ± 1% 690.0n ± 1% -29.26% (p=0.000 n=10)
ConsistentHashSelectEndpoint/400_endpoints-8 981.5n ± 1% 701.9n ± 1% -28.49% (p=0.000 n=10)
ConsistentHashSelectEndpoint/400_endpoints-16 985.3n ± 1% 700.5n ± 2% -28.90% (p=0.000 n=10)
ConsistentHashSelectEndpoint/500_endpoints 987.3n ± 0% 725.2n ± 0% -26.55% (p=0.000 n=10)
ConsistentHashSelectEndpoint/500_endpoints-2 960.5n ± 0% 702.5n ± 3% -26.85% (p=0.000 n=10)
ConsistentHashSelectEndpoint/500_endpoints-4 968.0n ± 1% 698.5n ± 1% -27.85% (p=0.000 n=10)
ConsistentHashSelectEndpoint/500_endpoints-8 975.9n ± 1% 715.6n ± 1% -26.67% (p=0.000 n=10)
ConsistentHashSelectEndpoint/500_endpoints-16 977.6n ± 1% 711.1n ± 1% -27.26% (p=0.000 n=10)
ConsistentHashSelectEndpoint/1000_endpoints 971.0n ± 0% 794.8n ± 2% -18.15% (p=0.000 n=10)
ConsistentHashSelectEndpoint/1000_endpoints-2 955.0n ± 1% 748.6n ± 1% -21.61% (p=0.000 n=10)
ConsistentHashSelectEndpoint/1000_endpoints-4 959.7n ± 1% 751.6n ± 1% -21.68% (p=0.000 n=10)
ConsistentHashSelectEndpoint/1000_endpoints-8 968.8n ± 0% 755.1n ± 1% -22.06% (p=0.000 n=10)
ConsistentHashSelectEndpoint/1000_endpoints-16 976.9n ± 11% 762.0n ± 1% -22.00% (p=0.000 n=10)
ConsistentHashSelectEndpoint/5000_endpoints 957.2n ± 0% 879.9n ± 0% -8.08% (p=0.000 n=10)
ConsistentHashSelectEndpoint/5000_endpoints-2 950.8n ± 6% 828.0n ± 0% -12.92% (p=0.000 n=10)
ConsistentHashSelectEndpoint/5000_endpoints-4 951.0n ± 7% 834.9n ± 1% -12.21% (p=0.000 n=10)
ConsistentHashSelectEndpoint/5000_endpoints-8 963.7n ± 9% 847.7n ± 1% -12.04% (p=0.000 n=10)
ConsistentHashSelectEndpoint/5000_endpoints-16 962.4n ± 3% 852.9n ± 1% -11.37% (p=0.000 n=10)
geomean 529.6n 467.9n -11.65%
Signed-off-by: Sandor Szücs <sandor.szuecs@zalando.de>
Member (Author): 👍

Member: But why? How does that help?

Member (Author): The commit message of 7d6a26a#diff-92db2045c41fcb6839064f0caac8129c14bf4aeedcc389a6ad161bafb7d8560a shows that it is faster to limit the hash ring to a reasonable size:

Member: 👍
func skipEndpoint(c *routing.LBContext, index int) bool {
	host := c.Route.LBEndpoints[index].Host
	if len(c.LBEndpoints) > 300 { // 300 see https://github.com/zalando/skipper/pull/3918/
Member: I think it's better to make this a constant with a descriptive name.
optimize: limit hash ring bucket size to max 10k
optimize: consistentHash endpoint selection by using binary search where it is faster than a linear scan
In practice the results will be even better than the benchmarks suggest, because the worst case favors the new code: the linear scan often only had to walk to, for example, index 332 instead of 5000. I added some logging during testing to verify this.