Use preload image for worker nodes #20470
medyagh
left a comment
Good catch! I didn't know our multinode was not using preload.
Thank you for this PR. Do you mind pasting in the PR description the logs from before and after this PR (--alsologtostderr with verbose logs to see this is working)?
We should also add this to our multi-node integration test.
/ok-to-test
@Diff-fusion would you be interested in adding a small test to the multi-node suite?
You could check the worker node logs for the line showing that the preload is used.
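The suggested check could be sketched as a small helper that scans a node's start logs for a preload marker. This is a minimal sketch, assuming a hypothetical `usedPreload` helper; the matched substring `preloaded-images` is an illustrative assumption, and a real integration test would grep for whatever line minikube actually emits when it applies the preload.

```go
package main

import (
	"fmt"
	"strings"
)

// usedPreload reports whether a node's start logs contain a line that
// mentions the preload tarball. The matched substring is an assumption
// for illustration, not minikube's actual log text.
func usedPreload(logs string) bool {
	for _, line := range strings.Split(logs, "\n") {
		if strings.Contains(line, "preloaded-images") {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical worker-node log excerpt.
	workerLogs := "I0301 copying preloaded-images tarball to node minikube-m02\nI0301 kubelet started"
	fmt.Println(usedPreload(workerLogs))                       // true
	fmt.Println(usedPreload("I0301 pulling images from registry")) // false
}
```

An integration test would feed this helper the output of `minikube logs --node <worker>` after a multi-node start.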
@ComradeProgrammer can you please check this PR to verify it is correct?
I attached the logs from before and after this change to the initial comment. I also changed the error behavior to just log a message instead of failing the start, which seems more reasonable here.
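The error behavior described above (log a warning instead of aborting the start) can be sketched as follows. All names here are hypothetical stand-ins, not minikube's actual API; the point is only the control flow: a failed preload copy degrades to pulling images rather than failing the node start.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// copyPreloadToNode is a hypothetical stand-in for the real copy step.
func copyPreloadToNode(nodeName string) error {
	if nodeName == "" {
		return errors.New("no node name")
	}
	// ... transfer the cached preload tarball to the node ...
	return nil
}

// startWorker demonstrates the chosen error behavior: a failed preload
// copy is logged as a warning, and the start continues without it.
func startWorker(nodeName string) string {
	if err := copyPreloadToNode(nodeName); err != nil {
		log.Printf("warning: unable to copy preload to %q, images will be pulled instead: %v", nodeName, err)
		return "started-without-preload"
	}
	return "started-with-preload"
}

func main() {
	fmt.Println(startWorker("minikube-m02")) // started-with-preload
	fmt.Println(startWorker(""))             // started-without-preload
}
```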
kvm2 driver with docker runtime
- Times for minikube start: 47.8s 47.8s 51.3s 50.4s 50.7s
- Times for minikube ingress: 15.0s 15.5s 19.0s 14.5s 16.6s

docker driver with docker runtime
- Times for minikube (PR 20470) start: 21.1s 21.4s 21.0s 20.2s 21.2s
- Times for minikube (PR 20470) ingress: 12.3s 10.2s 12.3s 12.8s 12.2s

docker driver with containerd runtime
- Times for minikube ingress: 38.8s 23.2s 22.7s 39.2s 38.8s
- Times for minikube start: 20.4s 19.8s 19.8s 19.9s 22.3s
Here are the top 10 failed tests with the lowest flake rate in each environment.
Besides these, the following environments also have failed tests:
To see the flake rates of all tests by environment, click here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Currently, when a worker node is created, it is not populated with any container images, so it pulls the required images from the upstream registry. This makes it impossible to use a multi-node cluster in an offline environment. This PR changes the behavior to also copy the preload image to worker nodes.
Log before this patch: without-preload.log
Log after this patch: with-preload.log
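The shape of the change can be sketched as selecting every non-control-plane node as a recipient of the cached preload tarball. The `Node` type and function names below are illustrative assumptions, not minikube's actual config structs:

```go
package main

import "fmt"

// Node is a minimal stand-in for a cluster node config; the field
// names are illustrative, not minikube's real types.
type Node struct {
	Name         string
	ControlPlane bool
}

// nodesNeedingPreload returns the worker nodes that should receive the
// cached preload tarball. Before this PR only the control plane got the
// preload; copying it to workers too means an offline cluster does not
// have to pull images from an upstream registry.
func nodesNeedingPreload(nodes []Node) []string {
	var workers []string
	for _, n := range nodes {
		if !n.ControlPlane {
			workers = append(workers, n.Name)
		}
	}
	return workers
}

func main() {
	cluster := []Node{
		{Name: "minikube", ControlPlane: true},
		{Name: "minikube-m02", ControlPlane: false},
		{Name: "minikube-m03", ControlPlane: false},
	}
	fmt.Println(nodesNeedingPreload(cluster)) // [minikube-m02 minikube-m03]
}
```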