[feat] Remove machines that have been marked for deletion from the dns record #787
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

    @@           Coverage Diff           @@
    ##             main     #787   +/-  ##
    =======================================
    - Coverage   62.81%   62.80%  -0.02%
    =======================================
      Files          69       69
      Lines        6721     6735    +14
    =======================================
    + Hits         4222     4230     +8
    - Misses       2258     2262     +4
    - Partials      241      243     +2
Force-pushed from eeca44c to 2595660.
Pull Request Overview

This PR refactors DNS entry generation to skip machines marked for deletion by their corresponding CAPI Machine, and updates tests to mock this behavior.

- Introduce `processLinodeMachine` to handle per-machine DNS option logic and skip deleted machines
- Refactor `getDNSEntriesToEnsure` to take a context, lock around mutation, and call the new helper
- Update tests to include `OwnerReferences` and add a `mockCAPIMachine` helper for CAPI Machine mocking

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

| File | Description |
|---|---|
| cloud/services/domains.go | Refactored DNS entry computation and skipped deleted machines |
| cloud/services/domains_test.go | Added `OwnerReferences` in fixtures and a new `mockCAPIMachine` helper |
Comments suppressed due to low confidence (2)

cloud/services/domains.go:330

- Update the comment above `getDNSEntriesToEnsure` to reflect the new `ctx` parameter and that it now delegates to `processLinodeMachine` for each LinodeMachine, clarifying how `d.options` is populated.

    func (d *DNSEntries) getDNSEntriesToEnsure(ctx context.Context, cscope *scope.ClusterScope) ([]DNSOptions, error) {
cloud/services/domains_test.go:384

- Add a test case for a LinodeMachine with no `OwnerReferences` (so `GetOwnerMachine` returns nil) to validate that the code handles missing owners without panicking.

    {name: "Skip - If a CAPI machine is deleted, don't add its IP to the Domain but include other machines",
    return nil, fmt.Errorf("failed to get CAPI machine for LinodeMachine %s: %w", eachMachine.Name, err)
    }

    if capiMachine == nil || capiMachine.DeletionTimestamp != nil {
[nitpick] Distinguish between a missing owner (nil) and a deleted CAPI Machine—silently skipping on nil may hide misconfigured resources. Consider logging or returning an explicit error for missing owners.
What this PR does / why we need it:

When using a DNS load balancer, I noticed traffic still arriving at the api-server on a node that was marked for deletion. According to https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/machine_deletions, the `linodeMachine` is deleted only after the node is drained. However, since the node's IP was still present in the DNS record, new traffic was still sent to the node marked for deletion. To avoid this problem, this PR verifies that a CAPI machine exists and is not marked for deletion before adding that `linodeMachine`'s IPs to the DNS records.

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes #
Special notes for your reviewer:
TODOs: