ISO: update linux kernel version to 6.12.19 #20978


Open
wants to merge 3 commits into base: master

Conversation

@ComradeProgrammer (Member) commented Jun 23, 2025

6.12.19 is the latest kernel version officially supported by buildroot. This update may also help @nirs implement krunkit support.

Status:
So far this at least boots Kubernetes with the kvm2 driver locally.
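
For context, a minimal sketch of where a bump like this usually lands in the ISO build (the path below is an assumption based on the usual minikube ISO layout, not an excerpt from this PR): buildroot pins the kernel via BR2_LINUX_KERNEL_CUSTOM_VERSION_VALUE in the per-arch defconfigs, so the core of the change looks roughly like

$ grep -rn BR2_LINUX_KERNEL_CUSTOM_VERSION_VALUE deploy/iso/minikube-iso/configs/
# bump the pinned version in each arch defconfig, e.g.
BR2_LINUX_KERNEL_CUSTOM_VERSION_VALUE="6.12.19"
# then rebuild the ISO (and refresh the per-arch kernel configs if the new kernel introduces new options)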

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 23, 2025
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ComradeProgrammer

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 23, 2025
@ComradeProgrammer ComradeProgrammer requested review from prezha and removed request for spowelljr June 23, 2025 17:53
@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Jun 23, 2025
@ComradeProgrammer (Member, Author)

/ok-to-build-iso

@ComradeProgrammer (Member, Author)

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Jun 23, 2025
@medyagh (Member) commented Jun 23, 2025

@ComradeProgrammer can you please also check with QEMU and vfkit to see whether they work too?
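
(For anyone following along, an assumed way to sanity-check those drivers on macOS is to use a minikube binary built from this PR branch so it picks up the new ISO; the profile names below are just placeholders:)

$ out/minikube start -p qemu-check --driver=qemu
$ out/minikube start -p vfkit-check --driver=vfkit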


@minikube-bot (Collaborator)

Hi @ComradeProgrammer, we have updated your PR with a reference to the newly built ISO. Pull the changes locally if you want to test with them or update your PR further.
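
(If you do not already have the updated branch checked out, one assumed way to pull it is to fetch the PR head, here assuming your upstream kubernetes/minikube remote is named origin:)

$ git fetch origin pull/20978/head:pr-20978
$ git checkout pr-20978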

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 20978) |
+----------------+----------+---------------------+
| minikube start | 50.9s    | 49.8s               |
| enable ingress | 16.1s    | 18.8s               |
+----------------+----------+---------------------+

Times for minikube start: 50.2s 47.9s 52.2s 50.9s 53.3s
Times for minikube (PR 20978) start: 48.6s 52.1s 46.9s 53.0s 48.3s

Times for minikube ingress: 15.9s 15.0s 16.0s 19.0s 14.5s
Times for minikube (PR 20978) ingress: 15.6s 20.1s 19.1s 19.1s 20.1s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 20978) |
+----------------+----------+---------------------+
| minikube start | 24.0s    | 23.1s               |
| enable ingress | 13.4s    | 13.1s               |
+----------------+----------+---------------------+

Times for minikube start: 22.3s 23.3s 24.7s 23.9s 25.8s
Times for minikube (PR 20978) start: 24.3s 23.3s 22.4s 22.6s 23.1s

Times for minikube (PR 20978) ingress: 13.3s 12.8s 13.3s 13.3s 12.8s
Times for minikube ingress: 13.3s 13.8s 12.8s 13.3s 13.8s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 20978) |
+----------------+----------+---------------------+
| minikube start | 22.8s    | 22.4s               |
| enable ingress | 39.7s    | 38.5s               |
+----------------+----------+---------------------+

Times for minikube (PR 20978) start: 24.3s 21.7s 21.3s 23.3s 21.4s
Times for minikube start: 25.5s 21.3s 21.6s 23.6s 21.8s

Times for minikube ingress: 38.8s 39.3s 40.3s 40.4s 39.9s
Times for minikube (PR 20978) ingress: 39.8s 39.3s 39.8s 40.3s 33.3s

@medyagh (Member) commented Jun 26, 2025

I started it with vfkit on macOS and it seems to start:

$ mk start -p p2 
😄  [p2] minikube v1.36.0 on Darwin 15.5 (arm64)
✨  Automatically selected the vfkit driver
💿  Downloading VM boot image ...
    > minikube-v1.36.0-1750701620...:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.36.0-1750701620...:  404.38 MiB / 404.38 MiB  100.00% 17.21 M
👍  Starting "p2" primary control-plane node in "p2" cluster
🔥  Creating vfkit VM (CPUs=2, Memory=6144MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.33.1 on Docker 28.0.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "p2" cluster and "default" namespace by default

@medyagh (Member) commented Jun 26, 2025

I tried "make functional" with Vfkit on macos
I got these error logs

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-528000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-528000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-vlsqn" [b518c5b1-3969-4dbb-a798-1ec11b453d88] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-vlsqn" [b518c5b1-3969-4dbb-a798-1ec11b453d88] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.003360792s
functional_test.go:1666: (dbg) Run:  out/minikube -p functional-528000 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.106.3:31821
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
I0626 10:57:19.096049   39905 retry.go:31] will retry after 1.275075028s: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
I0626 10:57:20.374098   39905 retry.go:31] will retry after 1.39631915s: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
I0626 10:57:21.772666   39905 retry.go:31] will retry after 2.893432371s: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
I0626 10:57:24.667812   39905 retry.go:31] will retry after 4.03392664s: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
I0626 10:57:28.703353   39905 retry.go:31] will retry after 3.26086095s: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
I0626 10:57:31.965953   39905 retry.go:31] will retry after 9.12904254s: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
I0626 10:57:41.097646   39905 retry.go:31] will retry after 6.364275731s: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1678: error fetching http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1698: failed to fetch http://192.168.106.3:31821: Get "http://192.168.106.3:31821": dial tcp 192.168.106.3:31821: connect: connection refused
functional_test.go:1615: service test failed - dumping debug information
functional_test.go:1616: -----------------------service failure post-mortem--------------------------------
functional_test.go:1619: (dbg) Run:  kubectl --context functional-528000 describe po hello-node-connect
functional_test.go:1623: hello-node pod describe:
Name:             hello-node-connect-8449669db6-vlsqn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-528000/192.168.106.3
Start Time:       Thu, 26 Jun 2025 10:57:04 -0700
Labels:           app=hello-node-connect
pod-template-hash=8449669db6
Annotations:      <none>
Status:           Running
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-8449669db6
Containers:
echoserver-arm:
Container ID:   docker://70475101e4df4cdbceb7c24020e5cde882e73790964cae3093f24af704032b3f
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    255
Started:      Thu, 26 Jun 2025 10:57:25 -0700
Finished:     Thu, 26 Jun 2025 10:57:25 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j76xc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-j76xc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  42s                default-scheduler  Successfully assigned default/hello-node-connect-8449669db6-vlsqn to functional-528000
Normal   Pulling    42s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     36s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 6.148s (6.148s including waiting). Image size: 84957542 bytes.
Normal   Created    22s (x3 over 36s)  kubelet            Created container: echoserver-arm
Normal   Started    22s (x3 over 36s)  kubelet            Started container echoserver-arm
Normal   Pulled     22s (x2 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    7s (x4 over 34s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-8449669db6-vlsqn_default(b518c5b1-3969-4dbb-a798-1ec11b453d88)
functional_test.go:1625: (dbg) Run:  kubectl --context functional-528000 logs -l app=hello-node-connect
functional_test.go:1629: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1631: (dbg) Run:  kubectl --context functional-528000 describe svc hello-node-connect
functional_test.go:1635: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.245.72
IPs:                      10.99.245.72
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31821/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube status --format={{.Host}} -p functional-528000 -n functional-528000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube -p functional-528000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|-------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      | User  | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|-------|---------|---------------------|---------------------|
	| mount     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | /var/folders/dc/cvsg4_xs3hv5r7hrthj402s400h78j/T/TestFunctionalparallelMountCmdany-port3170383556/001:/mount-9p     |                   |       |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh -- ls                                                                                         | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | -la /mount-9p                                                                                                       |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh cat                                                                                           | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | /mount-9p/test-1750960658536623000                                                                                  |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh stat                                                                                          | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh stat                                                                                          | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh sudo                                                                                          | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |       |         |                     |                     |
	| mount     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | /var/folders/dc/cvsg4_xs3hv5r7hrthj402s400h78j/T/TestFunctionalparallelMountCmdspecific-port937672006/001:/mount-9p |                   |       |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh -- ls                                                                                         | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | -la /mount-9p                                                                                                       |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh sudo                                                                                          | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |       |         |                     |                     |
	| mount     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | /var/folders/dc/cvsg4_xs3hv5r7hrthj402s400h78j/T/TestFunctionalparallelMountCmdVerifyCleanup3420531624/001:/mount3  |                   |       |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |       |         |                     |                     |
	| mount     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | /var/folders/dc/cvsg4_xs3hv5r7hrthj402s400h78j/T/TestFunctionalparallelMountCmdVerifyCleanup3420531624/001:/mount1  |                   |       |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |       |         |                     |                     |
	| mount     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | /var/folders/dc/cvsg4_xs3hv5r7hrthj402s400h78j/T/TestFunctionalparallelMountCmdVerifyCleanup3420531624/001:/mount2  |                   |       |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | -T /mount1                                                                                                          |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | -T /mount2                                                                                                          |                   |       |         |                     |                     |
	| ssh       | functional-528000 ssh findmnt                                                                                       | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT | 26 Jun 25 10:57 PDT |
	|           | -T /mount3                                                                                                          |                   |       |         |                     |                     |
	| mount     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | --kill=true                                                                                                         |                   |       |         |                     |                     |
	| start     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | --dry-run --memory 250MB                                                                                            |                   |       |         |                     |                     |
	|           | --alsologtostderr                                                                                                   |                   |       |         |                     |                     |
	| start     | -p functional-528000                                                                                                | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | --dry-run --memory 250MB                                                                                            |                   |       |         |                     |                     |
	|           | --alsologtostderr                                                                                                   |                   |       |         |                     |                     |
	| start     | -p functional-528000 --dry-run                                                                                      | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |       |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-528000 | medya | v1.36.0 | 26 Jun 25 10:57 PDT |                     |
	|           | -p functional-528000                                                                                                |                   |       |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |       |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|-------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/06/26 10:57:46
	Running on machine: medya-mac
	Binary: Built with gc go1.24.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 10:57:46.965272   40415 out.go:345] Setting OutFile to fd 1 ...
	I0626 10:57:46.965415   40415 out.go:397] isatty.IsTerminal(1) = false
	I0626 10:57:46.965418   40415 out.go:358] Setting ErrFile to fd 2...
	I0626 10:57:46.965421   40415 out.go:397] isatty.IsTerminal(2) = false
	I0626 10:57:46.965492   40415 root.go:338] Updating PATH: /Users/medya/.minikube/bin
	I0626 10:57:46.967192   40415 out.go:352] Setting JSON to false
	I0626 10:57:46.985880   40415 start.go:130] hostinfo: {"hostname":"medya-mac.roam.internal","uptime":2056960,"bootTime":1748903706,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.5","kernelVersion":"24.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"310c08a4-5f51-59cf-b00b-cdd8edf33699"}
	W0626 10:57:46.985954   40415 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0626 10:57:46.989731   40415 out.go:177] * [functional-528000] minikube v1.36.0 on Darwin 15.5 (arm64)
	I0626 10:57:46.997835   40415 notify.go:220] Checking for updates...
	I0626 10:57:46.998086   40415 config.go:182] Loaded profile config "functional-528000": Driver=vfkit, ContainerRuntime=docker, KubernetesVersion=v1.33.1
	I0626 10:57:46.998147   40415 driver.go:404] Setting default libvirt URI to qemu:///system
	I0626 10:57:47.002772   40415 out.go:177] * Using the vfkit driver based on existing profile
	I0626 10:57:47.009709   40415 start.go:304] selected driver: vfkit
	I0626 10:57:47.009718   40415 start.go:908] validating driver "vfkit" against &{Name:functional-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20978/minikube-v1.36.0-1750701620-20978-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:vfkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 C
lusterName:functional-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.106.3 Port:8441 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:nat Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0626 10:57:47.009776   40415 start.go:919] status for vfkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 10:57:47.010307   40415 cni.go:84] Creating CNI manager for ""
	I0626 10:57:47.010349   40415 cni.go:158] "vfkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0626 10:57:47.010495   40415 start.go:347] cluster config:
	{Name:functional-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20978/minikube-v1.36.0-1750701620-20978-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.47@sha256:6ed579c9292b4370177b7ef3c42cc4b4a6dcd0735a1814916cbc22c8bf38412b Memory:4096 CPUs:2 DiskSize:20000 Driver:vfkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.33.1 ClusterName:functional-528000 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.106.3 Port:8441 KubernetesVersion:v1.33.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:nat Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0626 10:57:47.018700   40415 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Jun 26 17:57:39 functional-528000 dockerd[6709]: time="2025-06-26T17:57:39.861525158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 26 17:57:39 functional-528000 cri-dockerd[7061]: time="2025-06-26T17:57:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f6e1e3c46ec885abe22bae070119f2493dab4865fcd1ff77b32ed4f2579d6a6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 26 17:57:42 functional-528000 cri-dockerd[7061]: time="2025-06-26T17:57:42Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jun 26 17:57:42 functional-528000 dockerd[6709]: time="2025-06-26T17:57:42.397832661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 26 17:57:42 functional-528000 dockerd[6709]: time="2025-06-26T17:57:42.397859578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 26 17:57:42 functional-528000 dockerd[6709]: time="2025-06-26T17:57:42.397865786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 26 17:57:42 functional-528000 dockerd[6709]: time="2025-06-26T17:57:42.397887495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 26 17:57:42 functional-528000 dockerd[6701]: time="2025-06-26T17:57:42.447587183Z" level=info msg="ignoring event" container=c9f63d13ac25dc1602ca777030bfb2b1bfa0b1972e2083eeadffd5716caf52a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 26 17:57:42 functional-528000 dockerd[6709]: time="2025-06-26T17:57:42.447625975Z" level=info msg="shim disconnected" id=c9f63d13ac25dc1602ca777030bfb2b1bfa0b1972e2083eeadffd5716caf52a8 namespace=moby
	Jun 26 17:57:42 functional-528000 dockerd[6709]: time="2025-06-26T17:57:42.447637933Z" level=warning msg="cleaning up after shim disconnected" id=c9f63d13ac25dc1602ca777030bfb2b1bfa0b1972e2083eeadffd5716caf52a8 namespace=moby
	Jun 26 17:57:42 functional-528000 dockerd[6709]: time="2025-06-26T17:57:42.447652059Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 26 17:57:43 functional-528000 dockerd[6701]: time="2025-06-26T17:57:43.616456799Z" level=info msg="ignoring event" container=6f6e1e3c46ec885abe22bae070119f2493dab4865fcd1ff77b32ed4f2579d6a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 26 17:57:43 functional-528000 dockerd[6709]: time="2025-06-26T17:57:43.616626092Z" level=info msg="shim disconnected" id=6f6e1e3c46ec885abe22bae070119f2493dab4865fcd1ff77b32ed4f2579d6a6 namespace=moby
	Jun 26 17:57:43 functional-528000 dockerd[6709]: time="2025-06-26T17:57:43.616639759Z" level=warning msg="cleaning up after shim disconnected" id=6f6e1e3c46ec885abe22bae070119f2493dab4865fcd1ff77b32ed4f2579d6a6 namespace=moby
	Jun 26 17:57:43 functional-528000 dockerd[6709]: time="2025-06-26T17:57:43.616656801Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716149858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716175275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716181400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716213150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716653861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716676153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716681737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 26 17:57:47 functional-528000 dockerd[6709]: time="2025-06-26T17:57:47.716703195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 26 17:57:47 functional-528000 cri-dockerd[7061]: time="2025-06-26T17:57:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc432920dcdc95c3fb2d14ff364b0f009d9acf937a710da5fcf75b68e31806d0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 26 17:57:47 functional-528000 cri-dockerd[7061]: time="2025-06-26T17:57:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7927bcdda41ddf9a69d58ebe933c8b1547973b4172046fc02a382763164e0791/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c9f63d13ac25d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 seconds ago        Exited              mount-munger              0                   6f6e1e3c46ec8       busybox-mount
	214552d817288       72565bf5bbedf                                                                                         15 seconds ago       Exited              echoserver-arm            1                   a3d79b2b6840c       hello-node-64fc58db8c-26qgg
	70475101e4df4       72565bf5bbedf                                                                                         22 seconds ago       Exited              echoserver-arm            2                   ff34552f9f690       hello-node-connect-8449669db6-vlsqn
	b3e038e01813e       nginx@sha256:dc53c8f25a10f9109190ed5b59bda2d707a3bde0e45857ce9e1efaa32ff9cbc1                         22 seconds ago       Running             myfrontend                0                   eb052b8983f32       sp-pod
	df3ec2590387f       f72407be9e08c                                                                                         About a minute ago   Running             coredns                   2                   d5072a9a124c0       coredns-674b8bbfcf-7nq22
	5efee9129077b       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   02cb4b05e622c       storage-provisioner
	057721cad27bd       3e58848989f55                                                                                         About a minute ago   Running             kube-proxy                2                   cd5ed83049df0       kube-proxy-k5v2p
	106e86b92b715       014094c90caac                                                                                         About a minute ago   Running             kube-scheduler            2                   72a38198fb41e       kube-scheduler-functional-528000
	2cfdc139a3cbf       674996a72aa59                                                                                         About a minute ago   Running             kube-controller-manager   2                   fe28a5c231c76       kube-controller-manager-functional-528000
	2c94d7fe5aa07       31747a36ce712                                                                                         About a minute ago   Running             etcd                      2                   040156a9b6d8f       etcd-functional-528000
	855ffe06fee2c       9a2b7cf4f8540                                                                                         About a minute ago   Running             kube-apiserver            0                   77fa38ce225e7       kube-apiserver-functional-528000
	dfdb20b146965       f72407be9e08c                                                                                         About a minute ago   Exited              coredns                   1                   9561fe73a7876       coredns-674b8bbfcf-7nq22
	64f516e53fc40       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   146a241fe4c41       storage-provisioner
	1ff7f6c8fa737       3e58848989f55                                                                                         About a minute ago   Exited              kube-proxy                1                   b51c3739137c7       kube-proxy-k5v2p
	1088c990a3670       014094c90caac                                                                                         About a minute ago   Exited              kube-scheduler            1                   0926ef264a6a9       kube-scheduler-functional-528000
	cb0d75ce626c4       31747a36ce712                                                                                         About a minute ago   Exited              etcd                      1                   42ba399cf7f30       etcd-functional-528000
	051612b4e557c       674996a72aa59                                                                                         About a minute ago   Exited              kube-controller-manager   1                   b8a3167fa20a6       kube-controller-manager-functional-528000
	
	
	==> coredns [df3ec2590387] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 783a63fd790b67347773cc11134852a0cefffe9807c510eacea6a36a53ac96373edc45eeec2b3b91b9aa83097e8a4305a7930990ee34ff08160db538153799e6
	CoreDNS-1.12.0
	linux/arm64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:43600 - 37270 "HINFO IN 1768346471835711663.1383593264685414243. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.099837017s
	
	
	==> coredns [dfdb20b14696] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 783a63fd790b67347773cc11134852a0cefffe9807c510eacea6a36a53ac96373edc45eeec2b3b91b9aa83097e8a4305a7930990ee34ff08160db538153799e6
	CoreDNS-1.12.0
	linux/arm64, go1.23.3, 51e11f1
	[INFO] 127.0.0.1:49051 - 29560 "HINFO IN 7772855047705592415.6135726515325411379. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.124692334s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-528000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-528000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8a4ee670a64b93df2b47d5d189a1ce6f3cd64c90
	                    minikube.k8s.io/name=functional-528000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_06_26T10_53_05_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Jun 2025 17:53:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-528000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Jun 2025 17:57:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Jun 2025 17:57:42 +0000   Thu, 26 Jun 2025 17:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Jun 2025 17:57:42 +0000   Thu, 26 Jun 2025 17:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Jun 2025 17:57:42 +0000   Thu, 26 Jun 2025 17:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Jun 2025 17:57:42 +0000   Thu, 26 Jun 2025 17:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.106.3
	  Hostname:    functional-528000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4007636Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4007636Ki
	  pods:               110
	System Info:
	  Machine ID:                 569531f1e25c4edabe00619287d13d73
	  System UUID:                ed688e70-21dc-e84e-b4cb-da55bb6771c8
	  Boot ID:                    424bd500-d580-4692-9430-14b51557aafc
	  Kernel Version:             6.12.19
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.33.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64fc58db8c-26qgg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     hello-node-connect-8449669db6-vlsqn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-674b8bbfcf-7nq22                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-functional-528000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m42s
	  kube-system                 kube-apiserver-functional-528000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-functional-528000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-k5v2p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-functional-528000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-d4q7t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-mckwr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m36s                kube-proxy       
	  Normal  Starting                 65s                  kube-proxy       
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 4m43s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m42s                kubelet          Node functional-528000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s                kubelet          Node functional-528000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s                kubelet          Node functional-528000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m41s                kubelet          Node functional-528000 status is now: NodeReady
	  Normal  RegisteredNode           4m38s                node-controller  Node functional-528000 event: Registered Node functional-528000 in Controller
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node functional-528000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node functional-528000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node functional-528000 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s                 node-controller  Node functional-528000 event: Registered Node functional-528000 in Controller
	  Normal  Starting                 69s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)    kubelet          Node functional-528000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)    kubelet          Node functional-528000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)    kubelet          Node functional-528000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                  node-controller  Node functional-528000 event: Registered Node functional-528000 in Controller
	
	
	==> dmesg <==
	[Jun26 17:55] KASLR disabled due to lack of seed
	[  +0.000210] ACPI PPTT: No PPTT table found, CPU and cache topology may be inaccurate
	[  +0.000001] cacheinfo: Unable to detect cache hierarchy for CPU 0
	[  +0.000211] (rpcbind)[99]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.000129] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.121477] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +1.022086] kauditd_printk_skb: 354 callbacks suppressed
	[  +0.135159] kauditd_printk_skb: 362 callbacks suppressed
	[  +0.189368] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.054753] kauditd_printk_skb: 219 callbacks suppressed
	[  +0.000088] kauditd_printk_skb: 44 callbacks suppressed
	[Jun26 17:56] kauditd_printk_skb: 358 callbacks suppressed
	[  +6.753116] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.000077] kauditd_printk_skb: 44 callbacks suppressed
	[  +3.400185] kauditd_printk_skb: 358 callbacks suppressed
	[Jun26 17:57] kauditd_printk_skb: 125 callbacks suppressed
	[  +2.043898] kauditd_printk_skb: 103 callbacks suppressed
	[  +1.292209] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.907654] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.701683] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.801644] kauditd_printk_skb: 77 callbacks suppressed
	[  +8.025224] kauditd_printk_skb: 76 callbacks suppressed
	[  +3.814527] kauditd_printk_skb: 61 callbacks suppressed
	
	
	==> etcd [2c94d7fe5aa0] <==
	{"level":"info","ts":"2025-06-26T17:56:39.401778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 switched to configuration voters=(16715338594042615938)"}
	{"level":"info","ts":"2025-06-26T17:56:39.402177Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da0c3f740ed801f5","local-member-id":"e7f8cfe3be499882","added-peer-id":"e7f8cfe3be499882","added-peer-peer-urls":["https://192.168.106.3:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-06-26T17:56:39.402298Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"da0c3f740ed801f5","local-member-id":"e7f8cfe3be499882","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-26T17:56:39.402323Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-06-26T17:56:39.403824Z","caller":"embed/etcd.go:762","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-06-26T17:56:39.404118Z","caller":"embed/etcd.go:633","msg":"serving peer traffic","address":"192.168.106.3:2380"}
	{"level":"info","ts":"2025-06-26T17:56:39.404141Z","caller":"embed/etcd.go:603","msg":"cmux::serve","address":"192.168.106.3:2380"}
	{"level":"info","ts":"2025-06-26T17:56:39.404205Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"e7f8cfe3be499882","initial-advertise-peer-urls":["https://192.168.106.3:2380"],"listen-peer-urls":["https://192.168.106.3:2380"],"advertise-client-urls":["https://192.168.106.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.106.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-06-26T17:56:39.404228Z","caller":"embed/etcd.go:908","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-06-26T17:56:41.198205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 is starting a new election at term 3"}
	{"level":"info","ts":"2025-06-26T17:56:41.198311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-06-26T17:56:41.198378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 received MsgPreVoteResp from e7f8cfe3be499882 at term 3"}
	{"level":"info","ts":"2025-06-26T17:56:41.198601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 became candidate at term 4"}
	{"level":"info","ts":"2025-06-26T17:56:41.198656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 received MsgVoteResp from e7f8cfe3be499882 at term 4"}
	{"level":"info","ts":"2025-06-26T17:56:41.198669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 became leader at term 4"}
	{"level":"info","ts":"2025-06-26T17:56:41.198681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e7f8cfe3be499882 elected leader e7f8cfe3be499882 at term 4"}
	{"level":"info","ts":"2025-06-26T17:56:41.200022Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"e7f8cfe3be499882","local-member-attributes":"{Name:functional-528000 ClientURLs:[https://192.168.106.3:2379]}","request-path":"/0/members/e7f8cfe3be499882/attributes","cluster-id":"da0c3f740ed801f5","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-26T17:56:41.200028Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-26T17:56:41.200294Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-26T17:56:41.200319Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-26T17:56:41.200049Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-26T17:56:41.200874Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-26T17:56:41.200875Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-26T17:56:41.201331Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.106.3:2379"}
	{"level":"info","ts":"2025-06-26T17:56:41.201676Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [cb0d75ce626c] <==
	{"level":"info","ts":"2025-06-26T17:56:01.498069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-06-26T17:56:01.498153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 received MsgPreVoteResp from e7f8cfe3be499882 at term 2"}
	{"level":"info","ts":"2025-06-26T17:56:01.498195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 became candidate at term 3"}
	{"level":"info","ts":"2025-06-26T17:56:01.498225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 received MsgVoteResp from e7f8cfe3be499882 at term 3"}
	{"level":"info","ts":"2025-06-26T17:56:01.498244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7f8cfe3be499882 became leader at term 3"}
	{"level":"info","ts":"2025-06-26T17:56:01.498350Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e7f8cfe3be499882 elected leader e7f8cfe3be499882 at term 3"}
	{"level":"info","ts":"2025-06-26T17:56:01.501466Z","caller":"etcdserver/server.go:2144","msg":"published local member to cluster through raft","local-member-id":"e7f8cfe3be499882","local-member-attributes":"{Name:functional-528000 ClientURLs:[https://192.168.106.3:2379]}","request-path":"/0/members/e7f8cfe3be499882/attributes","cluster-id":"da0c3f740ed801f5","publish-timeout":"7s"}
	{"level":"info","ts":"2025-06-26T17:56:01.501514Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-26T17:56:01.501700Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-06-26T17:56:01.501727Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-06-26T17:56:01.501743Z","caller":"embed/serve.go:124","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-06-26T17:56:01.502599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-26T17:56:01.504750Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.106.3:2379"}
	{"level":"info","ts":"2025-06-26T17:56:01.502599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-06-26T17:56:01.506481Z","caller":"embed/serve.go:275","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-06-26T17:56:26.016083Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-06-26T17:56:26.016126Z","caller":"embed/etcd.go:408","msg":"closing etcd server","name":"functional-528000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.106.3:2380"],"advertise-client-urls":["https://192.168.106.3:2379"]}
	{"level":"info","ts":"2025-06-26T17:56:33.019752Z","caller":"etcdserver/server.go:1546","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e7f8cfe3be499882","current-leader-member-id":"e7f8cfe3be499882"}
	{"level":"warn","ts":"2025-06-26T17:56:33.020597Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-26T17:56:33.020630Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-26T17:56:33.020660Z","caller":"embed/serve.go:235","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.106.3:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-06-26T17:56:33.020696Z","caller":"embed/serve.go:237","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.106.3:2379: use of closed network connection"}
	{"level":"info","ts":"2025-06-26T17:56:33.021975Z","caller":"embed/etcd.go:613","msg":"stopping serving peer traffic","address":"192.168.106.3:2380"}
	{"level":"info","ts":"2025-06-26T17:56:33.022029Z","caller":"embed/etcd.go:618","msg":"stopped serving peer traffic","address":"192.168.106.3:2380"}
	{"level":"info","ts":"2025-06-26T17:56:33.022091Z","caller":"embed/etcd.go:410","msg":"closed etcd server","name":"functional-528000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.106.3:2380"],"advertise-client-urls":["https://192.168.106.3:2379"]}
	
	
	==> kernel <==
	 17:57:48 up 2 min,  0 users,  load average: 0.51, 0.20, 0.07
	Linux functional-528000 6.12.19 #1 SMP PREEMPT Mon Jun 23 21:32:48 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [855ffe06fee2] <==
	I0626 17:56:41.670575       1 cache.go:39] Caches are synced for autoregister controller
	I0626 17:56:41.883640       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0626 17:56:42.555035       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0626 17:56:42.889726       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0626 17:56:42.896598       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0626 17:56:42.901168       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0626 17:56:42.902488       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0626 17:56:45.030411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0626 17:56:45.230156       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0626 17:56:45.232467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0626 17:56:45.277353       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0626 17:56:45.378190       1 controller.go:667] quota admission added evaluator for: endpoints
	I0626 17:56:59.902003       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0626 17:56:59.902965       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.127.153"}
	I0626 17:57:03.067961       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0626 17:57:04.800858       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0626 17:57:04.800965       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.245.72"}
	E0626 17:57:23.863204       1 conn.go:339] Error on socket receive: read tcp 192.168.106.3:8441->192.168.106.1:56568: use of closed network connection
	E0626 17:57:31.450631       1 conn.go:339] Error on socket receive: read tcp 192.168.106.3:8441->192.168.106.1:56582: use of closed network connection
	I0626 17:57:31.523480       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.153.123"}
	I0626 17:57:31.525155       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0626 17:57:47.326596       1 controller.go:667] quota admission added evaluator for: namespaces
	I0626 17:57:47.382614       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0626 17:57:47.382649       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.102.248"}
	I0626 17:57:47.388228       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.215.236"}
	
	
	==> kube-controller-manager [051612b4e557] <==
	I0626 17:56:05.197478       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0626 17:56:05.198522       1 shared_informer.go:357] "Caches are synced" controller="ReplicaSet"
	I0626 17:56:05.198531       1 shared_informer.go:357] "Caches are synced" controller="service-cidr-controller"
	I0626 17:56:05.199733       1 shared_informer.go:357] "Caches are synced" controller="TTL"
	I0626 17:56:05.200880       1 shared_informer.go:357] "Caches are synced" controller="ClusterRoleAggregator"
	I0626 17:56:05.201124       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0626 17:56:05.202235       1 shared_informer.go:357] "Caches are synced" controller="TTL after finished"
	I0626 17:56:05.203549       1 shared_informer.go:357] "Caches are synced" controller="namespace"
	I0626 17:56:05.203565       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0626 17:56:05.204733       1 shared_informer.go:357] "Caches are synced" controller="job"
	I0626 17:56:05.206046       1 shared_informer.go:357] "Caches are synced" controller="bootstrap_signer"
	I0626 17:56:05.208432       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0626 17:56:05.247080       1 shared_informer.go:357] "Caches are synced" controller="service account"
	I0626 17:56:05.300389       1 shared_informer.go:357] "Caches are synced" controller="stateful set"
	I0626 17:56:05.324189       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0626 17:56:05.423060       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0626 17:56:05.445290       1 shared_informer.go:357] "Caches are synced" controller="disruption"
	I0626 17:56:05.495860       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0626 17:56:05.500477       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0626 17:56:05.544965       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0626 17:56:05.547899       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0626 17:56:05.914787       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0626 17:56:05.995963       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0626 17:56:05.995982       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0626 17:56:05.995988       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [2cfdc139a3cb] <==
	I0626 17:56:45.077174       1 shared_informer.go:357] "Caches are synced" controller="persistent volume"
	I0626 17:56:45.078482       1 shared_informer.go:357] "Caches are synced" controller="expand"
	I0626 17:56:45.079625       1 shared_informer.go:357] "Caches are synced" controller="PVC protection"
	I0626 17:56:45.086245       1 shared_informer.go:357] "Caches are synced" controller="daemon sets"
	I0626 17:56:45.087594       1 shared_informer.go:357] "Caches are synced" controller="GC"
	I0626 17:56:45.089064       1 shared_informer.go:357] "Caches are synced" controller="TTL"
	I0626 17:56:45.092679       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice"
	I0626 17:56:45.102870       1 shared_informer.go:357] "Caches are synced" controller="taint-eviction-controller"
	I0626 17:56:45.104139       1 shared_informer.go:357] "Caches are synced" controller="taint"
	I0626 17:56:45.104169       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0626 17:56:45.104201       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-528000"
	I0626 17:56:45.104219       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0626 17:56:45.129907       1 shared_informer.go:357] "Caches are synced" controller="attach detach"
	I0626 17:56:45.178677       1 shared_informer.go:357] "Caches are synced" controller="endpoint_slice_mirroring"
	I0626 17:56:45.183281       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0626 17:56:45.186801       1 shared_informer.go:357] "Caches are synced" controller="endpoint"
	I0626 17:56:45.198252       1 shared_informer.go:357] "Caches are synced" controller="resource quota"
	I0626 17:56:45.595461       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0626 17:56:45.625769       1 shared_informer.go:357] "Caches are synced" controller="garbage collector"
	I0626 17:56:45.625785       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0626 17:56:45.625790       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0626 17:57:47.346829       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0626 17:57:47.347354       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0626 17:57:47.351592       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0626 17:57:47.353073       1 replica_set.go:562] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [057721cad27b] <==
	E0626 17:56:42.296300       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0626 17:56:42.300363       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.106.3"]
	E0626 17:56:42.300411       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0626 17:56:42.307145       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0626 17:56:42.307155       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0626 17:56:42.307169       1 server_linux.go:145] "Using iptables Proxier"
	I0626 17:56:42.308808       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0626 17:56:42.308877       1 server.go:516] "Version info" version="v1.33.1"
	I0626 17:56:42.308881       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 17:56:42.310029       1 config.go:329] "Starting node config controller"
	I0626 17:56:42.310036       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0626 17:56:42.310589       1 config.go:199] "Starting service config controller"
	I0626 17:56:42.310603       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0626 17:56:42.310613       1 config.go:105] "Starting endpoint slice config controller"
	I0626 17:56:42.310618       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0626 17:56:42.310623       1 config.go:440] "Starting serviceCIDR config controller"
	I0626 17:56:42.310624       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0626 17:56:42.410925       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0626 17:56:42.410925       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0626 17:56:42.410934       1 shared_informer.go:357] "Caches are synced" controller="service config"
	I0626 17:56:42.410940       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [1ff7f6c8fa73] <==
	E0626 17:56:02.792769       1 proxier.go:732] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0626 17:56:02.796471       1 server.go:715] "Successfully retrieved node IP(s)" IPs=["192.168.106.3"]
	E0626 17:56:02.796498       1 server.go:245] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0626 17:56:02.803476       1 server_linux.go:122] "No iptables support for family" ipFamily="IPv6"
	I0626 17:56:02.803484       1 server.go:256] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0626 17:56:02.803491       1 server_linux.go:145] "Using iptables Proxier"
	I0626 17:56:02.805016       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0626 17:56:02.805157       1 server.go:516] "Version info" version="v1.33.1"
	I0626 17:56:02.805176       1 server.go:518] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 17:56:02.806393       1 config.go:199] "Starting service config controller"
	I0626 17:56:02.806422       1 shared_informer.go:350] "Waiting for caches to sync" controller="service config"
	I0626 17:56:02.806442       1 config.go:105] "Starting endpoint slice config controller"
	I0626 17:56:02.806451       1 shared_informer.go:350] "Waiting for caches to sync" controller="endpoint slice config"
	I0626 17:56:02.806463       1 config.go:440] "Starting serviceCIDR config controller"
	I0626 17:56:02.806471       1 shared_informer.go:350] "Waiting for caches to sync" controller="serviceCIDR config"
	I0626 17:56:02.806873       1 config.go:329] "Starting node config controller"
	I0626 17:56:02.807424       1 shared_informer.go:350] "Waiting for caches to sync" controller="node config"
	I0626 17:56:02.907326       1 shared_informer.go:357] "Caches are synced" controller="serviceCIDR config"
	I0626 17:56:02.907521       1 shared_informer.go:357] "Caches are synced" controller="node config"
	I0626 17:56:02.907339       1 shared_informer.go:357] "Caches are synced" controller="endpoint slice config"
	I0626 17:56:02.907326       1 shared_informer.go:357] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [106e86b92b71] <==
	I0626 17:56:40.136684       1 serving.go:386] Generated self-signed cert in-memory
	W0626 17:56:41.570548       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0626 17:56:41.570561       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 17:56:41.570565       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0626 17:56:41.571134       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0626 17:56:41.603813       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.1"
	I0626 17:56:41.603854       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 17:56:41.608392       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0626 17:56:41.608427       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0626 17:56:41.608681       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0626 17:56:41.608710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0626 17:56:41.709387       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [1088c990a367] <==
	I0626 17:56:00.110052       1 serving.go:386] Generated self-signed cert in-memory
	W0626 17:56:01.884495       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0626 17:56:01.884622       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 17:56:01.884640       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0626 17:56:01.884649       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0626 17:56:01.907201       1 server.go:171] "Starting Kubernetes Scheduler" version="v1.33.1"
	I0626 17:56:01.907894       1 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 17:56:01.912224       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0626 17:56:01.912271       1 shared_informer.go:350] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0626 17:56:01.913095       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0626 17:56:01.913146       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0626 17:56:02.013097       1 shared_informer.go:357] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0626 17:56:26.013890       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0626 17:56:26.013906       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0626 17:56:26.013935       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 26 17:57:26 functional-528000 kubelet[7468]: I0626 17:57:26.315458    7468 scope.go:117] "RemoveContainer" containerID="3131cc9767ee4cfedbe222bcef91b7bf3e879a971ccf783896aeaa0991f1190a"
	Jun 26 17:57:26 functional-528000 kubelet[7468]: I0626 17:57:26.315581    7468 scope.go:117] "RemoveContainer" containerID="70475101e4df4cdbceb7c24020e5cde882e73790964cae3093f24af704032b3f"
	Jun 26 17:57:26 functional-528000 kubelet[7468]: E0626 17:57:26.315666    7468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-8449669db6-vlsqn_default(b518c5b1-3969-4dbb-a798-1ec11b453d88)\"" pod="default/hello-node-connect-8449669db6-vlsqn" podUID="b518c5b1-3969-4dbb-a798-1ec11b453d88"
	Jun 26 17:57:26 functional-528000 kubelet[7468]: I0626 17:57:26.323570    7468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.473288867 podStartE2EDuration="2.323555946s" podCreationTimestamp="2025-06-26 17:57:24 +0000 UTC" firstStartedPulling="2025-06-26 17:57:24.737407604 +0000 UTC m=+45.955560567" lastFinishedPulling="2025-06-26 17:57:25.587674683 +0000 UTC m=+46.805827646" observedRunningTime="2025-06-26 17:57:26.314582753 +0000 UTC m=+47.532735716" watchObservedRunningTime="2025-06-26 17:57:26.323555946 +0000 UTC m=+47.541708909"
	Jun 26 17:57:31 functional-528000 kubelet[7468]: I0626 17:57:31.587308    7468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-847dc\" (UniqueName: \"kubernetes.io/projected/b8457692-3b90-48eb-a088-8527c4f9ef19-kube-api-access-847dc\") pod \"hello-node-64fc58db8c-26qgg\" (UID: \"b8457692-3b90-48eb-a088-8527c4f9ef19\") " pod="default/hello-node-64fc58db8c-26qgg"
	Jun 26 17:57:32 functional-528000 kubelet[7468]: I0626 17:57:32.422085    7468 scope.go:117] "RemoveContainer" containerID="0dee77cc735f9977c1cf5e1e10fec5c68b63bf78410cc1c5a48918f0ba293880"
	Jun 26 17:57:33 functional-528000 kubelet[7468]: I0626 17:57:33.443973    7468 scope.go:117] "RemoveContainer" containerID="0dee77cc735f9977c1cf5e1e10fec5c68b63bf78410cc1c5a48918f0ba293880"
	Jun 26 17:57:33 functional-528000 kubelet[7468]: I0626 17:57:33.444251    7468 scope.go:117] "RemoveContainer" containerID="214552d817288fc3532d88ebe14740b42a7b9cc67ef80880da59f23de93ad0e5"
	Jun 26 17:57:33 functional-528000 kubelet[7468]: E0626 17:57:33.444408    7468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64fc58db8c-26qgg_default(b8457692-3b90-48eb-a088-8527c4f9ef19)\"" pod="default/hello-node-64fc58db8c-26qgg" podUID="b8457692-3b90-48eb-a088-8527c4f9ef19"
	Jun 26 17:57:38 functional-528000 kubelet[7468]: I0626 17:57:38.884264    7468 scope.go:117] "RemoveContainer" containerID="f2a1d79bd1efef53352a6cbf684aa7d738ae20d3f52a511bef43bdc8d5310fae"
	Jun 26 17:57:39 functional-528000 kubelet[7468]: I0626 17:57:39.660890    7468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/74371335-3480-4580-8569-55bb70e3b0dc-test-volume\") pod \"busybox-mount\" (UID: \"74371335-3480-4580-8569-55bb70e3b0dc\") " pod="default/busybox-mount"
	Jun 26 17:57:39 functional-528000 kubelet[7468]: I0626 17:57:39.660925    7468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f92b6\" (UniqueName: \"kubernetes.io/projected/74371335-3480-4580-8569-55bb70e3b0dc-kube-api-access-f92b6\") pod \"busybox-mount\" (UID: \"74371335-3480-4580-8569-55bb70e3b0dc\") " pod="default/busybox-mount"
	Jun 26 17:57:40 functional-528000 kubelet[7468]: I0626 17:57:40.824728    7468 scope.go:117] "RemoveContainer" containerID="70475101e4df4cdbceb7c24020e5cde882e73790964cae3093f24af704032b3f"
	Jun 26 17:57:40 functional-528000 kubelet[7468]: E0626 17:57:40.825470    7468 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-8449669db6-vlsqn_default(b518c5b1-3969-4dbb-a798-1ec11b453d88)\"" pod="default/hello-node-connect-8449669db6-vlsqn" podUID="b518c5b1-3969-4dbb-a798-1ec11b453d88"
	Jun 26 17:57:43 functional-528000 kubelet[7468]: I0626 17:57:43.800124    7468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f92b6\" (UniqueName: \"kubernetes.io/projected/74371335-3480-4580-8569-55bb70e3b0dc-kube-api-access-f92b6\") pod \"74371335-3480-4580-8569-55bb70e3b0dc\" (UID: \"74371335-3480-4580-8569-55bb70e3b0dc\") "
	Jun 26 17:57:43 functional-528000 kubelet[7468]: I0626 17:57:43.800154    7468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/74371335-3480-4580-8569-55bb70e3b0dc-test-volume\") pod \"74371335-3480-4580-8569-55bb70e3b0dc\" (UID: \"74371335-3480-4580-8569-55bb70e3b0dc\") "
	Jun 26 17:57:43 functional-528000 kubelet[7468]: I0626 17:57:43.800214    7468 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74371335-3480-4580-8569-55bb70e3b0dc-test-volume" (OuterVolumeSpecName: "test-volume") pod "74371335-3480-4580-8569-55bb70e3b0dc" (UID: "74371335-3480-4580-8569-55bb70e3b0dc"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Jun 26 17:57:43 functional-528000 kubelet[7468]: I0626 17:57:43.803932    7468 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74371335-3480-4580-8569-55bb70e3b0dc-kube-api-access-f92b6" (OuterVolumeSpecName: "kube-api-access-f92b6") pod "74371335-3480-4580-8569-55bb70e3b0dc" (UID: "74371335-3480-4580-8569-55bb70e3b0dc"). InnerVolumeSpecName "kube-api-access-f92b6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jun 26 17:57:43 functional-528000 kubelet[7468]: I0626 17:57:43.901235    7468 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f92b6\" (UniqueName: \"kubernetes.io/projected/74371335-3480-4580-8569-55bb70e3b0dc-kube-api-access-f92b6\") on node \"functional-528000\" DevicePath \"\""
	Jun 26 17:57:43 functional-528000 kubelet[7468]: I0626 17:57:43.901279    7468 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/74371335-3480-4580-8569-55bb70e3b0dc-test-volume\") on node \"functional-528000\" DevicePath \"\""
	Jun 26 17:57:44 functional-528000 kubelet[7468]: I0626 17:57:44.567832    7468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f6e1e3c46ec885abe22bae070119f2493dab4865fcd1ff77b32ed4f2579d6a6"
	Jun 26 17:57:47 functional-528000 kubelet[7468]: I0626 17:57:47.528747    7468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c989b8a7-b7a5-484e-b77a-21108695497b-tmp-volume\") pod \"kubernetes-dashboard-7779f9b69b-mckwr\" (UID: \"c989b8a7-b7a5-484e-b77a-21108695497b\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-mckwr"
	Jun 26 17:57:47 functional-528000 kubelet[7468]: I0626 17:57:47.528797    7468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/37309e71-b7c4-4453-8b23-df216d7df346-tmp-volume\") pod \"dashboard-metrics-scraper-5d59dccf9b-d4q7t\" (UID: \"37309e71-b7c4-4453-8b23-df216d7df346\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-d4q7t"
	Jun 26 17:57:47 functional-528000 kubelet[7468]: I0626 17:57:47.528804    7468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76vb\" (UniqueName: \"kubernetes.io/projected/37309e71-b7c4-4453-8b23-df216d7df346-kube-api-access-x76vb\") pod \"dashboard-metrics-scraper-5d59dccf9b-d4q7t\" (UID: \"37309e71-b7c4-4453-8b23-df216d7df346\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-d4q7t"
	Jun 26 17:57:47 functional-528000 kubelet[7468]: I0626 17:57:47.528811    7468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2vz\" (UniqueName: \"kubernetes.io/projected/c989b8a7-b7a5-484e-b77a-21108695497b-kube-api-access-rr2vz\") pod \"kubernetes-dashboard-7779f9b69b-mckwr\" (UID: \"c989b8a7-b7a5-484e-b77a-21108695497b\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-mckwr"
	
	
	==> storage-provisioner [5efee9129077] <==
	W0626 17:57:23.764084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:25.765890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:25.768084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:27.773122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:27.777045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:29.785205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:29.793294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:31.795618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:31.799980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:33.805780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:33.810900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:35.815656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:35.819819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:37.822383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:37.823989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:39.829649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:39.835152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:41.840902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:41.846697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:43.850048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:43.853289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:45.855349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:45.859102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:47.860974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:57:47.863844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [64f516e53fc4] <==
	I0626 17:56:02.761715       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 17:56:02.772400       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 17:56:02.772419       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0626 17:56:02.774240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:06.229313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:10.488201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:14.087955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:17.142183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:20.165514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:20.171103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0626 17:56:20.171243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 17:56:20.171380       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-528000_b622ea75-7a41-4d08-98d4-6b4e31556f08!
	I0626 17:56:20.172071       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b07e870f-1ce4-482e-94e1-0a4676f29a27", APIVersion:"v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-528000_b622ea75-7a41-4d08-98d4-6b4e31556f08 became leader
	W0626 17:56:20.176246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:20.178596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0626 17:56:20.272401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-528000_b622ea75-7a41-4d08-98d4-6b4e31556f08!
	W0626 17:56:22.186882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:22.190566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:24.192745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0626 17:56:24.195007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube status --format={{.APIServer}} -p functional-528000 -n functional-528000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-528000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5d59dccf9b-d4q7t kubernetes-dashboard-7779f9b69b-mckwr
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-528000 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-d4q7t kubernetes-dashboard-7779f9b69b-mckwr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-528000 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-d4q7t kubernetes-dashboard-7779f9b69b-mckwr: exit status 1 (34.222375ms)
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-528000/192.168.106.3
	Start Time:       Thu, 26 Jun 2025 10:57:39 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://c9f63d13ac25dc1602ca777030bfb2b1bfa0b1972e2083eeadffd5716caf52a8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 26 Jun 2025 10:57:42 -0700
	      Finished:     Thu, 26 Jun 2025 10:57:42 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f92b6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-f92b6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/busybox-mount to functional-528000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.454s (2.454s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container: mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-d4q7t" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-mckwr" not found
** /stderr **
helpers_test.go:279: kubectl --context functional-528000 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-d4q7t kubernetes-dashboard-7779f9b69b-mckwr: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (43.68s)

Contributor

@nirs nirs left a comment

@ComradeProgrammer can you explain how this change was made?

Also, you did not change
deploy/iso/minikube-iso/board/minikube/aarch64/linux_aarch64_defconfig,
so I think that on aarch64 we actually build the same kernel (5.10) instead of 6.12.
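
A quick way to confirm which kernel the aarch64 ISO actually pins, as a hedged sketch (the grep patterns and the configs/ layout are assumptions, not something verified in this PR):

$ # Look for a pinned kernel version in the aarch64 board files
$ grep -rn "5\.10\|6\.12" deploy/iso/minikube-iso/board/minikube/aarch64/
$ # Check which kernel version the buildroot configs reference
$ grep -rn "BR2_LINUX_KERNEL" deploy/iso/minikube-iso/configs/

If the aarch64 side still references 5.10, the new ISO would still boot the old kernel on arm64.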

Please check the draft I posted here, which builds kernel 6.6 and works with qemu, vfkit, and krunkit with --no-kubernetes:
#20923

I would be careful with kernel 6.12 since it may be too new. We know that Fedora 41+ has issues on macOS with kernel 6.13+. It will be easier to move to kernel 6.6 now, and upgrade the kernel again for a future release.

Contributor

Keeping old configs is not a good idea. We can always get the old config from git. Let's remove both .old files.
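
For reference, the old config can always be recovered from git history, so the .old copies add nothing (path and ref are illustrative, reusing the defconfig path mentioned earlier in this review):

$ # See when the file last changed and grab the pre-PR version
$ git log --oneline -- deploy/iso/minikube-iso/board/minikube/aarch64/linux_aarch64_defconfig
$ git show master:deploy/iso/minikube-iso/board/minikube/aarch64/linux_aarch64_defconfig > /tmp/old_defconfig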

# Architecture
#
# Automatically generated file; DO NOT EDIT.
# Buildroot 2025.02-dirty Configuration
Contributor

2025.02-dirty is suspicious. How did you generate this file?
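
For what it's worth, a "-dirty" suffix in that header usually just means the buildroot tree had uncommitted local changes (for example, applied patches) when the config was written. A hedged way to check, assuming the buildroot checkout used for the build is still around:

$ cd path/to/buildroot        # assumption: wherever the ISO build cloned buildroot
$ git status --short          # any output here would explain the "-dirty" suffix
$ git describe --tags --dirty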

#
BR2_ARCH_IS_64=y
BR2_USE_MMU=y
# BR2_arcle is not set
Contributor

Why do we have the defaults in this file? I think that buildroot removes the defaults to minimize the configs. Can you explain how the file was generated?
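
If the file was saved as a full .config rather than a minimal defconfig, that would explain the defaults. Buildroot's usual way to produce a minimal defconfig (only non-default options) is make savedefconfig; a rough sketch, with the defconfig target name assumed rather than taken from this PR:

$ # Inside the buildroot tree used for the ISO build
$ make minikube_aarch64_defconfig      # load the full config (target name assumed)
$ make savedefconfig BR2_DEFCONFIG=$PWD/minikube_aarch64_defconfig.min   # writes a minimal config without defaults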

@@ -5,7 +5,7 @@
################################################################################

HYPERV_DAEMONS_VERSION = $(call qstrip,$(BR2_LINUX_KERNEL_VERSION))
-HYPERV_DAEMONS_SITE = https://www.kernel.org/pub/linux/kernel/v5.x
+HYPERV_DAEMONS_SITE = https://www.kernel.org/pub/linux/kernel/v6.x
Contributor

When I tried to build the 6.12 kernel a few weeks ago, I could not compile it because the sources were not found at this URL. Maybe this was fixed recently?
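
One way to check whether the tarball is published before kicking off a build (plain curl against the URL from the diff; nothing minikube-specific):

$ curl -fsSLI https://www.kernel.org/pub/linux/kernel/v6.x/linux-6.12.19.tar.xz >/dev/null && echo "6.12.19 sources are published"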

@nirs
Contributor

nirs commented Jun 26, 2025

I started it with Vfkit on macOS and it seems to start:

$ mk start -p p2 
😄  [p2] minikube v1.36.0 on Darwin 15.5 (arm64)
✨  Automatically selected the vfkit driver
💿  Downloading VM boot image ...
    > minikube-v1.36.0-1750701620...:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.36.0-1750701620...:  404.38 MiB / 404.38 MiB  100.00% 17.21 M
👍  Starting "p2" primary control-plane node in "p2" cluster
🔥  Creating vfkit VM (CPUs=2, Memory=6144MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.33.1 on Docker 28.0.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "p2" cluster and "default" namespace by default

Looks promising, but what does uname -a say in the guest?
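
A standard way to confirm the guest kernel from the host (profile name taken from the run above; mk is assumed to be an alias for minikube):

$ minikube ssh -p p2 "uname -a"
$ minikube logs -p p2 | grep -A 2 "==> kernel <=="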

@nirs
Contributor

nirs commented Jun 29, 2025

@ComradeProgrammer this is not needed now, see #20995.
