Description
Using containers[i].volumeMounts[j].subPath produces "no such file or directory" errors.
$ kubectl describe pods -n my-namespace my-job-bad-6962x
:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m4s default-scheduler Successfully assigned my-namespace/my-job-bad-6962x to minikube
Normal Pulled 3m27s (x8 over 4m48s) kubelet, minikube Successfully pulled image "docker.io/centos:latest"
Warning Failed 3m27s (x8 over 4m48s) kubelet, minikube Error: stat /opt/my-path/: no such file or directory
Oddly, when a Pod is run without subPath, not only does it work, but it also initializes something that allows a Pod with subPath to work. There appears to be an initialization problem somewhere in minikube.
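A possible workaround (my own suggestion, not verified in this report; it assumes the kvm2 VM is reachable via minikube ssh) is to pre-create the directory that the kubelet stats before starting the Job:

```shell
# Hypothetical workaround: pre-create the hostPath directory and the
# subPath directory inside the minikube VM, so the kubelet's
# "stat /opt/my-path/" check succeeds.
minikube ssh -- sudo mkdir -p /opt/my-path/my-subpath-1
```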
Steps to reproduce the issue:
- Start minikube:
$ minikube start --cpus 4 --memory 8192 --vm-driver kvm2
😄 minikube v1.2.0 on linux (amd64)
🔥 Creating kvm2 VM (CPUs=4, Memory=8192MB, Disk=20000MB) ...
🐳 Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
🚜 Pulling images ...
🚀 Launching Kubernetes ...
⌛ Verifying: apiserver proxy etcd scheduler controller dns
🏄 Done! kubectl is now configured to use "minikube"
- Create the my-namespace.yaml file:
cat <<EOT > my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    name: my-namespace
EOT
- Create the namespace:
kubectl create -f my-namespace.yaml
- Create the my-persistent-volume.yaml file:
cat <<EOT > my-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
  labels:
    type: local
  namespace: my-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/opt/my-path/"
EOT
- Create the persistent volume:
kubectl create -f my-persistent-volume.yaml
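If the root cause is simply that /opt/my-path/ does not yet exist inside the minikube VM, a variant of the PersistentVolume using the standard hostPath type field may avoid the error: DirectoryOrCreate asks the kubelet to create the directory if it is missing. This is a sketch of a possible fix, not something verified in this report; the filename my-persistent-volume-fixed.yaml is my own.

```shell
cat <<EOT > my-persistent-volume-fixed.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/opt/my-path/"
    # DirectoryOrCreate: the kubelet creates the directory (0755)
    # on the node if it does not already exist.
    type: DirectoryOrCreate
EOT
```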
- Create the my-persistent-volume-claim.yaml file:
cat <<EOT > my-persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    cattle.io/creator: norman
  name: my-persistent-volume-claim
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: "manual"
  volumeName: my-persistent-volume
EOT
- Create the persistent volume claim:
kubectl create -f my-persistent-volume-claim.yaml
- Create the my-job-bad.yaml file:
cat <<EOT > my-job-bad.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-bad
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: subpath-test
          image: docker.io/centos:latest
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          volumeMounts:
            - name: my-volume
              mountPath: /opt/my-subpath
              subPath: my-subpath-1
      restartPolicy: Never
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
EOT
- Create the job with subPath, which fails:
kubectl create -f my-job-bad.yaml
- Watch for the error:
$ kubectl get pods --namespace my-namespace --watch
NAME READY STATUS RESTARTS AGE
my-job-bad-6962x 0/1 ContainerCreating 0 11s
my-job-bad-6962x 0/1 CreateContainerConfigError 0 77s
- Create the my-job-good.yaml file:
cat <<EOT > my-job-good.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-good
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: subpath-test
          image: docker.io/centos:latest
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          volumeMounts:
            - name: my-volume
              mountPath: /opt/my-subpath
      restartPolicy: Never
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
EOT
- Create the job without subPath, which succeeds:
kubectl create -f my-job-good.yaml
- Observe that both the "good" job and the previously failing "bad" job are now Running:
$ kubectl get pods --namespace my-namespace --watch
NAME READY STATUS RESTARTS AGE
my-job-bad-6962x 1/1 Running 0 4m25s
my-job-good-dlfxn 1/1 Running 0 12s
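To confirm from inside the container that the volume is actually mounted, one can exec into the good Job's pod. The pod name suffix is random, so the sketch below (my own, assuming the job-name label that the Job controller adds to its pods) looks it up first:

```shell
# Find the good Job's pod via the job-name label the Job controller sets,
# then list the mount point from inside the running container.
POD=$(kubectl get pods -n my-namespace -l job-name=my-job-good \
      -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n my-namespace "$POD" -- ls -la /opt/my-subpath
```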
Describe the results you received:
- View the error in my-job-bad:
$ kubectl describe pods -n my-namespace my-job-bad-6962x
Name: my-job-bad-6962x
Namespace: my-namespace
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.122.59
Start Time: Fri, 28 Jun 2019 16:36:29 -0400
Labels: controller-uid=cdc5f87b-9c59-4f58-94d5-d286f7597d65
job-name=my-job-bad
Annotations: <none>
Status: Running
IP: 172.17.0.4
Controlled By: Job/my-job-bad
Containers:
subpath-test:
Container ID: docker://965ad24defc7d2364982d9c7c5e8a5efa9293578be3b7cb7ef80cfe6e8ab3128
Image: docker.io/centos:latest
Image ID: docker-pullable://centos@sha256:b5e66c4651870a1ad435cd75922fe2cb943c9e973a9673822d1414824a1d0475
Port: <none>
Host Port: <none>
Command:
sleep
Args:
infinity
State: Running
Started: Fri, 28 Jun 2019 16:40:48 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt/my-subpath from my-volume (rw,path="my-subpath-1")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wrmc5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
my-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-persistent-volume-claim
ReadOnly: false
default-token-wrmc5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wrmc5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m4s default-scheduler Successfully assigned my-namespace/my-job-bad-6962x to minikube
Normal Pulled 3m27s (x8 over 4m48s) kubelet, minikube Successfully pulled image "docker.io/centos:latest"
Warning Failed 3m27s (x8 over 4m48s) kubelet, minikube Error: stat /opt/my-path/: no such file or directory
Normal Pulling 3m16s (x9 over 6m3s) kubelet, minikube Pulling image "docker.io/centos:latest"
Describe the results you expected:
A Pod using containers[i].volumeMounts[j].subPath should come up on its own, without requiring a Pod without subPath to first initialize "something".
Additional information you deem important (e.g. issue happens only occasionally):
As seen above, when running without subPath, the Pod comes up properly. My guess is that when subPath is used, an initialization step is missing.
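One way to test that guess (a diagnostic sketch, not performed in this report) is to check directly whether the hostPath directory exists inside the VM before and after the non-subPath Pod runs:

```shell
# If /opt/my-path/ is absent before the "good" Pod runs and present
# afterwards, the plain volume mount is what created it, which would
# explain why the subPath Pod only works afterwards.
minikube ssh -- ls -ld /opt/my-path /opt/my-path/my-subpath-1
```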
Version of Kubernetes:
- Output of kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
- Output of minikube version:
$ minikube version
minikube version: v1.2.0
Cleanup
kubectl delete -f my-job-good.yaml
kubectl delete -f my-job-bad.yaml
kubectl delete -f my-persistent-volume-claim.yaml
kubectl delete -f my-persistent-volume.yaml
kubectl delete -f my-namespace.yaml
minikube stop
minikube delete
The output of the minikube logs command:
$ minikube logs
==> coredns <==
.:53
2019-06-28T20:31:33.762Z [INFO] CoreDNS-1.3.1
2019-06-28T20:31:33.762Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-28T20:31:33.762Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
==> dmesg <==
[Jun28 20:29] APIC calibration not consistent with PM-Timer: 106ms instead of 100ms
[ +0.000000] core: CPUID marked event: 'bus cycles' unavailable
[ +0.001021] #2
[ +0.001080] #3
[ +0.022772] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.118421] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ +21.645559] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ +0.025752] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ +0.025602] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ +0.230451] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.047506] systemd-fstab-generator[1109]: Ignoring "noauto" for root device
[ +0.006948] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000004] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.614842] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.769835] vboxguest: loading out-of-tree module taints kernel.
[ +0.003883] vboxguest: PCI device not found, probably running on physical hardware.
[ +7.041667] systemd-fstab-generator[1986]: Ignoring "noauto" for root device
[Jun28 20:30] systemd-fstab-generator[2741]: Ignoring "noauto" for root device
[ +9.291779] systemd-fstab-generator[2990]: Ignoring "noauto" for root device
[Jun28 20:31] kauditd_printk_skb: 68 callbacks suppressed
[ +13.855634] tee (3708): /proc/3426/oom_adj is deprecated, please use /proc/3426/oom_score_adj instead.
[ +7.361978] kauditd_printk_skb: 20 callbacks suppressed
[ +6.562157] kauditd_printk_skb: 47 callbacks suppressed
[ +3.921583] NFSD: Unable to end grace period: -110
==> kernel <==
21:02:00 up 32 min, 0 users, load average: 0.33, 0.35, 0.34
Linux minikube 4.15.0 #1 SMP Sun Jun 23 23:02:01 PDT 2019 x86_64 GNU/Linux
==> kube-addon-manager <==
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:54:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:55:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:55:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:56:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:56:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:57:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:57:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:58:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:58:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T20:59:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
error: no objects passed to apply
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-06-28T20:59:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T21:00:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T21:00:32+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-28T21:01:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-28T21:01:32+00:00 ==
==> kube-apiserver <==
I0628 20:31:16.446365 1 client.go:354] scheme "" not registered, fallback to default scheme
I0628 20:31:16.446451 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
I0628 20:31:16.446513 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:16.463319 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:16.464540 1 client.go:354] parsed scheme: ""
I0628 20:31:16.464624 1 client.go:354] scheme "" not registered, fallback to default scheme
I0628 20:31:16.464699 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
I0628 20:31:16.464799 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:16.479368 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0628 20:31:19.006900 1 secure_serving.go:116] Serving securely on [::]:8443
I0628 20:31:19.007039 1 available_controller.go:374] Starting AvailableConditionController
I0628 20:31:19.007127 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0628 20:31:19.007616 1 crd_finalizer.go:255] Starting CRDFinalizer
I0628 20:31:19.007724 1 autoregister_controller.go:140] Starting autoregister controller
I0628 20:31:19.007821 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0628 20:31:19.007995 1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0628 20:31:19.008035 1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0628 20:31:19.008720 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0628 20:31:19.008768 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0628 20:31:19.009739 1 controller.go:83] Starting OpenAPI controller
I0628 20:31:19.009854 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0628 20:31:19.009924 1 naming_controller.go:288] Starting NamingConditionController
I0628 20:31:19.010007 1 establishing_controller.go:73] Starting EstablishingController
I0628 20:31:19.010074 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
E0628 20:31:19.010957 1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.223, ResourceVersion: 0, AdditionalErrorMsg:
I0628 20:31:19.011694 1 controller.go:81] Starting OpenAPI AggregationController
I0628 20:31:19.116245 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0628 20:31:19.116318 1 cache.go:39] Caches are synced for autoregister controller
I0628 20:31:19.207447 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0628 20:31:19.208523 1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0628 20:31:20.004685 1 controller.go:107] OpenAPI AggregationController: Processing item
I0628 20:31:20.004761 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0628 20:31:20.004891 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0628 20:31:20.021255 1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0628 20:31:20.027177 1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0628 20:31:20.027214 1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0628 20:31:21.789899 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0628 20:31:22.069843 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0628 20:31:22.085456 1 controller.go:606] quota admission added evaluator for: endpoints
W0628 20:31:22.375007 1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.39.223]
I0628 20:31:22.428889 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0628 20:31:22.872788 1 controller.go:606] quota admission added evaluator for: namespaces
I0628 20:31:23.516372 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0628 20:31:23.843781 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0628 20:31:24.145272 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0628 20:31:30.267937 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0628 20:31:30.368017 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0628 20:36:29.652258 1 controller.go:606] quota admission added evaluator for: jobs.batch
E0628 20:46:26.034713 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0628 20:59:47.138291 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
==> kube-proxy <==
W0628 20:31:31.469387 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0628 20:31:31.486180 1 server_others.go:143] Using iptables Proxier.
W0628 20:31:31.486736 1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0628 20:31:31.487461 1 server.go:534] Version: v1.15.0
I0628 20:31:31.506151 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0628 20:31:31.506240 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0628 20:31:31.506405 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0628 20:31:31.506668 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0628 20:31:31.507332 1 config.go:96] Starting endpoints config controller
I0628 20:31:31.507381 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0628 20:31:31.507608 1 config.go:187] Starting service config controller
I0628 20:31:31.507653 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0628 20:31:31.607874 1 controller_utils.go:1036] Caches are synced for service config controller
I0628 20:31:31.607905 1 controller_utils.go:1036] Caches are synced for endpoints config controller
==> kube-scheduler <==
I0628 20:31:13.084899 1 serving.go:319] Generated self-signed cert in-memory
W0628 20:31:14.157003 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0628 20:31:14.157134 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0628 20:31:14.157255 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0628 20:31:14.162595 1 server.go:142] Version: v1.15.0
I0628 20:31:14.162743 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0628 20:31:14.164519 1 authorization.go:47] Authorization is disabled
W0628 20:31:14.164559 1 authentication.go:55] Authentication is disabled
I0628 20:31:14.164685 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0628 20:31:14.170555 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0628 20:31:19.135466 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0628 20:31:19.195942 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0628 20:31:19.196169 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0628 20:31:19.196451 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0628 20:31:19.196823 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0628 20:31:19.203454 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0628 20:31:19.203884 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0628 20:31:19.206488 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0628 20:31:19.206807 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0628 20:31:19.206488 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0628 20:31:20.141756 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0628 20:31:20.198193 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0628 20:31:20.199220 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0628 20:31:20.206555 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0628 20:31:20.207831 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0628 20:31:20.213491 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0628 20:31:20.213838 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0628 20:31:20.213983 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0628 20:31:20.217183 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0628 20:31:20.217406 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0628 20:31:22.077586 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
I0628 20:31:22.088791 1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
E0628 20:31:30.321683 1 factory.go:702] pod is already present in the activeQ
==> kubelet <==
-- Logs begin at Fri 2019-06-28 20:29:38 UTC, end at Fri 2019-06-28 21:02:00 UTC. --
Jun 28 20:31:20 minikube kubelet[3010]: E0628 20:31:20.146241 3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402e59e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11d99e, ext:3375456312, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a962c502, ext:3598863731, loc:(*time.Location)(0x781d740)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:20 minikube kubelet[3010]: E0628 20:31:20.547212 3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee4031da5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c1211a5, ext:3375470655, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9633094, ext:3598891777, loc:(*time.Location)(0x781d740)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:20 minikube kubelet[3010]: E0628 20:31:20.948190 3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402603f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11543f, ext:3375422170, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9c2709a, ext:3605134986, loc:(*time.Location)(0x781d740)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:21 minikube kubelet[3010]: E0628 20:31:21.347536 3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402e59e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11d99e, ext:3375456312, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9c39b54, ext:3605210674, loc:(*time.Location)(0x781d740)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:21 minikube kubelet[3010]: E0628 20:31:21.746377 3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee4031da5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c1211a5, ext:3375470655, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9c4094f, ext:3605238835, loc:(*time.Location)(0x781d740)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:22 minikube kubelet[3010]: E0628 20:31:22.147513 3010 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15ac76dee402603f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc239c11543f, ext:3375422170, loc:(*time.Location)(0x781d740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3dbc23a9ea1830, ext:3607733600, loc:(*time.Location)(0x781d740)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495279 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/3a4da883-17ca-4a5b-9324-ca24aee64a30-lib-modules") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495365 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/3a4da883-17ca-4a5b-9324-ca24aee64a30-kube-proxy") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495415 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-vng88" (UniqueName: "kubernetes.io/secret/3a4da883-17ca-4a5b-9324-ca24aee64a30-kube-proxy-token-vng88") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:30 minikube kubelet[3010]: I0628 20:31:30.495522 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/3a4da883-17ca-4a5b-9324-ca24aee64a30-xtables-lock") pod "kube-proxy-b2jpw" (UID: "3a4da883-17ca-4a5b-9324-ca24aee64a30")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.301860 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3a56ab17-aad6-45f3-813c-dfb6d75ddd69-config-volume") pod "coredns-5c98db65d4-grhc2" (UID: "3a56ab17-aad6-45f3-813c-dfb6d75ddd69")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.303274 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-shjs8" (UniqueName: "kubernetes.io/secret/f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8-coredns-token-shjs8") pod "coredns-5c98db65d4-z6jl7" (UID: "f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.303648 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-shjs8" (UniqueName: "kubernetes.io/secret/3a56ab17-aad6-45f3-813c-dfb6d75ddd69-coredns-token-shjs8") pod "coredns-5c98db65d4-grhc2" (UID: "3a56ab17-aad6-45f3-813c-dfb6d75ddd69")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.303998 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8-config-volume") pod "coredns-5c98db65d4-z6jl7" (UID: "f849e8a2-462c-47d8-9cd8-86a9d0f2c5f8")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.605835 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-jrqgk" (UniqueName: "kubernetes.io/secret/0ea547b2-82bd-465c-b1d8-b020c49159c4-storage-provisioner-token-jrqgk") pod "storage-provisioner" (UID: "0ea547b2-82bd-465c-b1d8-b020c49159c4")
Jun 28 20:31:32 minikube kubelet[3010]: I0628 20:31:32.606061 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/0ea547b2-82bd-465c-b1d8-b020c49159c4-tmp") pod "storage-provisioner" (UID: "0ea547b2-82bd-465c-b1d8-b020c49159c4")
Jun 28 20:31:33 minikube kubelet[3010]: W0628 20:31:33.368502 3010 pod_container_deletor.go:75] Container "b73805d4a687d75d991610ad1c2552102d9f42f00e2e5529cfdd550a947c9d20" not found in pod's containers
Jun 28 20:31:33 minikube kubelet[3010]: W0628 20:31:33.554251 3010 pod_container_deletor.go:75] Container "14ff5eb56f6192f01f6f271da07ea970bf3a775f64966addcd8808a2912fb2ed" not found in pod's containers
Jun 28 20:36:29 minikube kubelet[3010]: I0628 20:36:29.780191 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-wrmc5" (UniqueName: "kubernetes.io/secret/4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153-default-token-wrmc5") pod "my-job-bad-6962x" (UID: "4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153")
Jun 28 20:36:29 minikube kubelet[3010]: I0628 20:36:29.780413 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "my-persistent-volume" (UniqueName: "kubernetes.io/host-path/4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153-my-persistent-volume") pod "my-job-bad-6962x" (UID: "4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153")
Jun 28 20:37:45 minikube kubelet[3010]: E0628 20:37:45.180086 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:37:45 minikube kubelet[3010]: E0628 20:37:45.180313 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:37:47 minikube kubelet[3010]: E0628 20:37:47.346431 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:37:47 minikube kubelet[3010]: E0628 20:37:47.346623 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:02 minikube kubelet[3010]: E0628 20:38:02.425642 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:02 minikube kubelet[3010]: E0628 20:38:02.426377 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:16 minikube kubelet[3010]: E0628 20:38:16.459424 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:16 minikube kubelet[3010]: E0628 20:38:16.459540 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:28 minikube kubelet[3010]: E0628 20:38:28.447362 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:28 minikube kubelet[3010]: E0628 20:38:28.450044 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:40 minikube kubelet[3010]: E0628 20:38:40.454969 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:40 minikube kubelet[3010]: E0628 20:38:40.455046 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:38:55 minikube kubelet[3010]: E0628 20:38:55.027408 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:38:55 minikube kubelet[3010]: E0628 20:38:55.027544 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:06 minikube kubelet[3010]: E0628 20:39:06.467351 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:06 minikube kubelet[3010]: E0628 20:39:06.467523 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:18 minikube kubelet[3010]: E0628 20:39:18.451915 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:18 minikube kubelet[3010]: E0628 20:39:18.452059 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:32 minikube kubelet[3010]: E0628 20:39:32.446451 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:32 minikube kubelet[3010]: E0628 20:39:32.446592 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:39:48 minikube kubelet[3010]: E0628 20:39:48.462440 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:39:48 minikube kubelet[3010]: E0628 20:39:48.462514 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:01 minikube kubelet[3010]: E0628 20:40:01.735487 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:40:01 minikube kubelet[3010]: E0628 20:40:01.735637 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:17 minikube kubelet[3010]: E0628 20:40:17.444403 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:40:17 minikube kubelet[3010]: E0628 20:40:17.444538 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:33 minikube kubelet[3010]: E0628 20:40:33.440607 3010 kuberuntime_manager.go:775] container start failed: CreateContainerConfigError: stat /opt/my-path/: no such file or directory
Jun 28 20:40:33 minikube kubelet[3010]: E0628 20:40:33.441939 3010 pod_workers.go:190] Error syncing pod 4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153 ("my-job-bad-6962x_my-namespace(4bebaf7b-adfe-480b-aa3f-ee3ea3a2e153)"), skipping: failed to "StartContainer" for "subpath-test" with CreateContainerConfigError: "stat /opt/my-path/: no such file or directory"
Jun 28 20:40:42 minikube kubelet[3010]: I0628 20:40:42.538640 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-wrmc5" (UniqueName: "kubernetes.io/secret/ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4-default-token-wrmc5") pod "my-job-good-dlfxn" (UID: "ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4")
Jun 28 20:40:42 minikube kubelet[3010]: I0628 20:40:42.538805 3010 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "my-persistent-volume" (UniqueName: "kubernetes.io/host-path/ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4-my-persistent-volume") pod "my-job-good-dlfxn" (UID: "ebc79fb2-80c0-4ed1-9e3e-fe74bde725c4")
==> storage-provisioner <==
The operating system version:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
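The repeated `stat /opt/my-path/: no such file or directory` errors in the kubelet log above suggest the hostPath directory is never created on the minikube node before the `subPath` mount is resolved, while a pod with a plain (non-`subPath`) mount of the same volume creates the directory as a side effect, which would explain the "initialization" behavior. A possible workaround (a sketch, not a confirmed fix) is to declare the hostPath with `type: DirectoryOrCreate` so the kubelet creates the directory itself. The snippet below rewrites the `my-persistent-volume.yaml` from the reproduction steps with that one extra field:

```shell
cat <<EOT > my-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/opt/my-path/"
    # DirectoryOrCreate asks the kubelet to create the path if it is
    # missing, instead of failing the stat at container-config time.
    type: DirectoryOrCreate
EOT
```

Recreating the PV (`kubectl delete -f` then `kubectl create -f`) and re-running the failing job would confirm whether the missing directory is the trigger; `minikube ssh -- ls -ld /opt/my-path` can also be used to check whether the directory exists inside the VM before any pod has run.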