I0614 16:45:11.556196 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0614 16:45:11.556376 17 e2e.go:129] Starting e2e run "cb45ef5f-6832-4a4a-814e-451faccc62e9" on Ginkgo node 1
{"msg":"Test Suite starting","total":18,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1623689110 - Will randomize all specs
Will run 18 of 5668 specs
Jun 14 16:45:11.664: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:45:11.669: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 14 16:45:11.700: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 14 16:45:11.746: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 14 16:45:11.746: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 14 16:45:11.746: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 14 16:45:11.756: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Jun 14 16:45:11.756: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 14 16:45:11.756: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
Jun 14 16:45:11.756: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 14 16:45:11.756: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Jun 14 16:45:11.756: INFO: e2e test version: v1.20.7
Jun 14 16:45:11.758: INFO: kube-apiserver version: v1.20.7
Jun 14 16:45:11.758: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:45:11.764: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:45:11.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
Jun 14 16:45:11.805: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 14 16:45:11.815: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 14 16:45:11.934: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 14 16:46:11.983: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create pods that use 2/3 of node resources.
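This step is where the spec pins down the preemption scenario: it creates one low-priority and one medium-priority pod sized so that together they occupy roughly two thirds of a node, then submits a high-priority pod with the same requirements and expects the scheduler to evict a victim. A minimal client-go sketch of that kind of pod creation follows; it is not the suite's own helper code, and the namespace, PriorityClass name, image, and request size are illustrative.

// Sketch only: create a pod that names an existing PriorityClass so the
// scheduler can later preempt it in favour of a higher-priority pod.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod0-low-priority"},
		Spec: corev1.PodSpec{
			PriorityClassName: "low-priority", // assumed to exist already
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Illustrative size; the e2e test computes requests
						// from the node's actual allocatable resources.
						corev1.ResourceMemory: resource.MustParse("1Gi"),
					},
				},
			}},
		},
	}
	// "sched-preemption-test" is a hypothetical namespace for this sketch.
	if _, err := cs.CoreV1().Pods("sched-preemption-test").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}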
Jun 14 16:46:12.130: INFO: Created pod: pod0-sched-preemption-low-priority
Jun 14 16:46:12.234: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:46:28.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8197" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:76.815 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":18,"completed":1,"skipped":113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:46:28.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jun 14 16:46:29.023: INFO: Pod name wrapped-volume-race-483bf954-7f6a-4a87-b44b-e894e2acdb9b: Found 0 pods out of 5
Jun 14 16:46:34.145: INFO: Pod name wrapped-volume-race-483bf954-7f6a-4a87-b44b-e894e2acdb9b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-483bf954-7f6a-4a87-b44b-e894e2acdb9b in namespace emptydir-wrapper-6024, will wait for the garbage collector to delete the pods
Jun 14 16:46:44.311: INFO: Deleting ReplicationController wrapped-volume-race-483bf954-7f6a-4a87-b44b-e894e2acdb9b took: 7.801193ms
Jun 14 16:46:45.111: INFO: Terminating ReplicationController wrapped-volume-race-483bf954-7f6a-4a87-b44b-e894e2acdb9b pods took: 800.29495ms
STEP: Creating RC which spawns configmap-volume pods
Jun 14 16:46:58.137: INFO: Pod name wrapped-volume-race-844047a2-6259-4b7c-896a-024aa28ac54e: Found 0 pods out of 5
Jun 14 16:47:03.145: INFO: Pod name wrapped-volume-race-844047a2-6259-4b7c-896a-024aa28ac54e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-844047a2-6259-4b7c-896a-024aa28ac54e in namespace emptydir-wrapper-6024, will wait for the garbage collector to delete the pods
Jun 14 16:47:13.235: INFO: Deleting ReplicationController wrapped-volume-race-844047a2-6259-4b7c-896a-024aa28ac54e took: 9.639554ms
Jun 14 16:47:14.035: INFO: Terminating ReplicationController wrapped-volume-race-844047a2-6259-4b7c-896a-024aa28ac54e pods took: 800.247622ms
STEP: Creating RC which spawns configmap-volume pods
Jun 14 16:47:18.061: INFO: Pod name wrapped-volume-race-5c8e2b79-a9ac-44bc-b89a-4305a6e9fb94: Found 0 pods out of 5
Jun 14 16:47:23.072: INFO: Pod name wrapped-volume-race-5c8e2b79-a9ac-44bc-b89a-4305a6e9fb94: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5c8e2b79-a9ac-44bc-b89a-4305a6e9fb94 in namespace emptydir-wrapper-6024, will wait for the garbage collector to delete the pods
Jun 14 16:47:37.303: INFO: Deleting ReplicationController wrapped-volume-race-5c8e2b79-a9ac-44bc-b89a-4305a6e9fb94 took: 7.430745ms
Jun 14 16:47:38.503: INFO: Terminating ReplicationController wrapped-volume-race-5c8e2b79-a9ac-44bc-b89a-4305a6e9fb94 pods took: 1.200294709s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:47:42.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6024" for this suite.
• [SLOW TEST:73.855 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":18,"completed":2,"skipped":425,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:47:42.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jun 14 16:47:42.505: INFO: Create a RollingUpdate DaemonSet
Jun 14 16:47:42.510: INFO: Check that daemon pods launch on every node of the cluster
Jun 14 16:47:42.516: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:42.519: INFO: Number of nodes with available pods: 0
Jun 14 16:47:42.519: INFO: Node leguer-worker is running more than one daemon pod
Jun 14 16:47:43.525: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:43.529: INFO: Number of nodes with available pods: 0
Jun 14 16:47:43.529: INFO: Node leguer-worker is running more than one daemon pod
Jun 14 16:47:44.728: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:44.732: INFO: Number of nodes with available pods: 2
Jun 14 16:47:44.732: INFO: Number of running nodes: 2, number of available pods: 2
Jun 14 16:47:44.732: INFO: Update the DaemonSet to trigger a rollout
Jun 14 16:47:44.740: INFO: Updating DaemonSet daemon-set
Jun 14 16:47:49.126: INFO: Roll back the DaemonSet before rollout is complete
Jun 14 16:47:49.136: INFO: Updating DaemonSet daemon-set
Jun 14 16:47:49.136: INFO: Make sure DaemonSet rollback is complete
Jun 14 16:47:49.332: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:49.332: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:49.339: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:50.344: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:50.344: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:50.350: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:51.344: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:51.344: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:51.350: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:52.344: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:52.344: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:52.350: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:53.344: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:53.344: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:53.349: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:54.344: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:54.344: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:54.350: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:55.343: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:55.343: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:55.349: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:56.344: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:56.344: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:56.349: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:57.344: INFO: Wrong image for pod: daemon-set-l8knd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jun 14 16:47:57.344: INFO: Pod daemon-set-l8knd is not available
Jun 14 16:47:57.349: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 14 16:47:58.622: INFO: Pod daemon-set-7cc6z is not available
Jun 14 16:47:58.628: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8327, will wait for the garbage collector to delete the pods
Jun 14 16:47:58.694: INFO: Deleting DaemonSet.extensions daemon-set took: 6.061676ms
Jun 14 16:48:00.094: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.400306201s
Jun 14 16:48:03.497: INFO: Number of nodes with available pods: 0
Jun 14 16:48:03.497: INFO: Number of running nodes: 0, number of available pods: 0
Jun 14 16:48:03.504: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6269493"},"items":null}
Jun 14 16:48:03.507: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6269493"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:48:03.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8327" for this suite.
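The rollback in this spec is simply a second update that points the pod template back at the original image before the broken rollout finishes; pods that never became ready with foo:non-existent get replaced, while healthy pods are left alone, hence "without unnecessary restarts". A client-go sketch of that step follows, reusing the DaemonSet and namespace names from the log but not the suite's own helpers.

// Sketch only: revert the DaemonSet's container image to the known-good one.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Retry on conflicts, the idiomatic way to do read-modify-write updates.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("daemonsets-8327").Get(
			context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Re-point the pod template at the image the rollout started from.
		ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
		_, err = cs.AppsV1().DaemonSets("daemonsets-8327").Update(
			context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}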
• [SLOW TEST:21.080 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":18,"completed":3,"skipped":908,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:48:03.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 14 16:48:03.667: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 14 16:49:03.722: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:49:03.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Jun 14 16:49:08.342: INFO: found a healthy node: leguer-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jun 14 16:49:26.414: INFO: pods created so far: [1 1 1]
Jun 14 16:49:26.414: INFO: length of pods created so far: 3
Jun 14 16:49:34.423: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:49:41.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-6571" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:49:41.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9381" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:98.070 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":18,"completed":4,"skipped":992,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:49:41.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:49:47.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7091" for this suite.
STEP: Destroying namespace "nsdeletetest-4108" for this suite.
Jun 14 16:49:47.870: INFO: Namespace nsdeletetest-4108 was already deleted
STEP: Destroying namespace "nsdeletetest-6929" for this suite.
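This spec creates a Service inside a throwaway namespace, deletes the namespace, waits for it to disappear, recreates it, and then confirms the Services API returns nothing. A compressed client-go sketch of the delete-and-verify half follows; the namespace name is illustrative and the wait for namespace finalization is elided.

// Sketch only: delete a namespace and confirm no Services are left in it.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "nsdeletetest-example" // illustrative name

	// Deleting the namespace cascades to everything in it, including Services;
	// the e2e test then waits for the namespace to be fully removed before it
	// recreates it and checks that the Services list is empty.
	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	svcs, err := cs.CoreV1().Services(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services remaining in %s: %d\n", ns, len(svcs.Items))
}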
• [SLOW TEST:6.272 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":18,"completed":5,"skipped":1037,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:49:47.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 14 16:49:47.916: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 14 16:50:47.966: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:50:47.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jun 14 16:50:48.016: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Jun 14 16:50:48.020: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:50:48.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-9599" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:50:48.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9584" for this suite.
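The two "Forbidden" lines above are the expected outcome of this spec: the value of an existing PriorityClass is immutable, so an update that changes it is rejected by validation. A minimal client-go sketch of that check follows, reusing the "p1" name from the log and assuming such an object exists when the sketch runs.

// Sketch only: show that changing PriorityClass.Value is rejected as invalid.
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(), "p1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Changing Value is forbidden; the API server returns an Invalid error,
	// which is what the e2e test expects here.
	pc.Value = pc.Value + 1
	_, err = cs.SchedulingV1().PriorityClasses().Update(context.TODO(), pc, metav1.UpdateOptions{})
	fmt.Println("value change rejected as invalid:", errors.IsInvalid(err))
}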
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:60.237 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":18,"completed":6,"skipped":1163,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 16:50:48.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Jun 14 16:50:48.147: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 14 16:50:48.156: INFO: Waiting for terminating namespaces to be deleted...
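The node-by-node pod listings that follow are produced by the e2e framework before the predicate test runs. A roughly equivalent query, shown only as a sketch, is a pod list filtered by a spec.nodeName field selector; the kubeconfig path and node name are taken from the log, everything else is illustrative.

// Sketch only: list the pods bound to one node, across all namespaces.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Field selector restricts the list to pods scheduled onto leguer-worker.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=leguer-worker",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}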
Jun 14 16:50:48.160: INFO: Logging pods the apiserver thinks is on node leguer-worker before test
Jun 14 16:50:48.169: INFO: chaos-daemon-5rrs8 from default started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 16:50:48.169: INFO: coredns-74ff55c5b-cjjs2 from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container coredns ready: true, restart count 0
Jun 14 16:50:48.169: INFO: coredns-74ff55c5b-jhwdl from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container coredns ready: true, restart count 0
Jun 14 16:50:48.169: INFO: create-loop-devs-sjhvx from kube-system started at 2021-06-09 07:53:50 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container loopdev ready: true, restart count 0
Jun 14 16:50:48.169: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container kindnet-cni ready: true, restart count 93
Jun 14 16:50:48.169: INFO: kube-multus-ds-9qpk4 from kube-system started at 2021-06-09 07:53:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container kube-multus ready: true, restart count 0
Jun 14 16:50:48.169: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 16:50:48.169: INFO: tune-sysctls-phstc from kube-system started at 2021-06-09 07:53:22 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container setsysctls ready: true, restart count 0
Jun 14 16:50:48.169: INFO: speaker-wn8vq from metallb-system started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.169: INFO: Container speaker ready: true, restart count 0
Jun 14 16:50:48.169: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test
Jun 14 16:50:48.179: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container chaos-mesh ready: true, restart count 0
Jun 14 16:50:48.179: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 16:50:48.179: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container dockerd ready: true, restart count 0
Jun 14 16:50:48.179: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container loopdev ready: true, restart count 0
Jun 14 16:50:48.179: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container kindnet-cni ready: true, restart count 149
Jun 14 16:50:48.179: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container kube-multus ready: true, restart count 1
Jun 14 16:50:48.179: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 16:50:48.179: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container setsysctls ready: true, restart count 0
Jun 14 16:50:48.179: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container chaos-operator ready: true, restart count 0
Jun 14 16:50:48.179: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container controller ready: true, restart count 0
Jun 14 16:50:48.179: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container speaker ready: true, restart count 0
Jun 14 16:50:48.179: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container contour ready: true, restart count 3
Jun 14 16:50:48.179: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded)
Jun 14 16:50:48.179: INFO: Container contour ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d95b9afa-ca70-4e93-af63-f8ce8aafac01 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
Jun 14 16:55:50.242: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002cc200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.createHostPortPodOnNode(0xc001815340, 0x4db8326, 0x4, 0xc004f78fc0, 0xf, 0x4dc1bbf, 0x9, 0xd431, 0x4db6ffe, 0x3, ...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:1129 +0x5bc
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.12()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:685 +0x52d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc004383080)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc004383080)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc004383080, 0x4fbaa38)
    /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: removing the label kubernetes.io/e2e-d95b9afa-ca70-4e93-af63-f8ce8aafac01 off the node leguer-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d95b9afa-ca70-4e93-af63-f8ce8aafac01
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "sched-pred-3805".
STEP: Found 10 events.
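The failure above occurred while waiting for pod1, a pod whose container requests hostPort 54321 on hostIP 127.0.0.1; the events below show the kubelet timing out on pod-sandbox creation (a Multus/CNI problem) rather than the scheduler rejecting the pod. For context, a hedged sketch of such a hostPort pod follows; the namespace is illustrative, and the real spec targets the labelled node through a nodeSelector rather than by setting NodeName directly as done here for brevity.

// Sketch only: a pod that binds hostPort 54321 on 127.0.0.1/TCP on one node.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod1"},
		Spec: corev1.PodSpec{
			NodeName: "leguer-worker", // simplified node pinning for this sketch
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54321,
					HostPort:      54321,
					HostIP:        "127.0.0.1",
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
	// "sched-pred-example" is a hypothetical namespace for this sketch.
	if _, err := cs.CoreV1().Pods("sched-pred-example").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}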
Jun 14 16:55:50.638: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod1: { } Scheduled: Successfully assigned sched-pred-3805/pod1 to leguer-worker Jun 14 16:55:50.638: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for without-label: { } Scheduled: Successfully assigned sched-pred-3805/without-label to leguer-worker Jun 14 16:55:50.638: INFO: At 2021-06-14 16:50:48 +0000 UTC - event for without-label: {multus } AddedInterface: Add eth0 [10.244.1.148/24] Jun 14 16:55:50.638: INFO: At 2021-06-14 16:50:49 +0000 UTC - event for without-label: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/pause:3.2" already present on machine Jun 14 16:55:50.638: INFO: At 2021-06-14 16:50:49 +0000 UTC - event for without-label: {kubelet leguer-worker} Created: Created container without-label Jun 14 16:55:50.638: INFO: At 2021-06-14 16:50:49 +0000 UTC - event for without-label: {kubelet leguer-worker} Started: Started container without-label Jun 14 16:55:50.638: INFO: At 2021-06-14 16:50:50 +0000 UTC - event for without-label: {kubelet leguer-worker} Killing: Stopping container without-label Jun 14 16:55:50.638: INFO: At 2021-06-14 16:50:50 +0000 UTC - event for without-label: {kubelet leguer-worker} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jun 14 16:55:50.638: INFO: At 2021-06-14 16:50:51 +0000 UTC - event for without-label: {kubelet leguer-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b469c09b0bcee3bc74cc8d98a0bc197c106e576c58371a8af68b6508cef5fa96": Multus: [sched-pred-3805/without-label]: error getting pod: pods "without-label" not found Jun 14 16:55:50.638: INFO: At 2021-06-14 16:54:50 +0000 UTC - event for pod1: {kubelet leguer-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded Jun 14 16:55:50.642: INFO: POD NODE PHASE GRACE CONDITIONS Jun 14 16:55:50.642: INFO: pod1 leguer-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-14 16:50:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-14 16:50:50 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-14 16:50:50 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-14 16:50:50 +0000 UTC }] Jun 14 16:55:50.642: INFO: Jun 14 16:55:50.646: INFO: Logging node info for node leguer-control-plane Jun 14 16:55:50.649: INFO: Node Info: &Node{ObjectMeta:{leguer-control-plane 6d457de0-9a0f-4ff6-bd75-0bbc1430a694 6270499 0 2021-05-22 08:23:02 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux ingress-ready:true kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-22 08:23:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:ingress-ready":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-22 08:23:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-05-22 08:23:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-14 16:52:48 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-14 16:52:48 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-14 16:52:48 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-14 16:52:48 +0000 UTC,LastTransitionTime:2021-05-22 08:23:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:leguer-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cd6232015d5d4123a4f981fce21e3374,SystemUUID:eba32c45-894e-4080-80ed-6ad2fd75cb06,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9 docker.io/kubernetesui/dashboard:v2.2.0],SizeBytes:67775224,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/envoyproxy/envoy@sha256:55d35e368436519136dbd978fa0682c49d8ab99e4d768413510f226762b30b07 docker.io/envoyproxy/envoy:v1.18.3],SizeBytes:51364868,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[docker.io/kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 docker.io/kubernetesui/metrics-scraper:v1.0.6],SizeBytes:15079854,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 
docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 14 16:55:50.650: INFO: Logging kubelet events for node leguer-control-plane Jun 14 16:55:50.654: INFO: Logging pods the kubelet thinks is on node leguer-control-plane Jun 14 16:55:50.682: INFO: create-loop-devs-dxl2f started at 2021-05-22 08:23:43 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container loopdev ready: true, restart count 0 Jun 14 16:55:50.682: INFO: tune-sysctls-s5nrx started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container setsysctls ready: true, restart count 0 Jun 14 16:55:50.682: INFO: kube-multus-ds-bxrtj started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container kube-multus ready: true, restart count 2 Jun 14 16:55:50.682: INFO: envoy-nwdcq started at 2021-05-22 08:23:46 +0000 UTC (1+2 container statuses recorded) Jun 14 16:55:50.682: INFO: Init container envoy-initconfig ready: true, restart count 0 Jun 14 16:55:50.682: INFO: Container envoy ready: true, restart count 0 Jun 14 16:55:50.682: INFO: Container shutdown-manager ready: true, restart count 0 Jun 14 16:55:50.682: INFO: kube-scheduler-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container kube-scheduler ready: true, restart count 3 Jun 14 16:55:50.682: INFO: kube-proxy-vqm28 started at 2021-05-22 08:23:20 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 16:55:50.682: INFO: local-path-provisioner-547f784dff-pbsvl started at 2021-05-22 08:23:41 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container local-path-provisioner ready: true, restart count 2 Jun 14 16:55:50.682: INFO: etcd-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container etcd ready: true, restart count 0 Jun 14 16:55:50.682: INFO: kube-apiserver-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container kube-apiserver ready: true, restart count 0 Jun 14 16:55:50.682: INFO: kindnet-8gg6p started at 2021-05-22 08:23:20 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container kindnet-cni ready: true, restart count 88 Jun 14 16:55:50.682: INFO: kube-controller-manager-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container kube-controller-manager ready: true, restart count 4 Jun 14 16:55:50.682: INFO: speaker-gjr9t started at 2021-05-22 08:23:45 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container speaker ready: true, restart count 0 Jun 14 16:55:50.682: INFO: kubernetes-dashboard-9f9799597-x8tx5 started at 2021-05-22 08:23:47 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jun 14 16:55:50.682: INFO: dashboard-metrics-scraper-79c5968bdc-krkfj started at 2021-05-22 08:23:47 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:50.682: INFO: Container 
dashboard-metrics-scraper ready: true, restart count 0 W0614 16:55:50.690257 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jun 14 16:55:50.867: INFO: Latency metrics for node leguer-control-plane Jun 14 16:55:50.867: INFO: Logging node info for node leguer-worker Jun 14 16:55:50.872: INFO: Node Info: &Node{ObjectMeta:{leguer-worker a0394caa-d22f-452e-99cd-7356a6b84552 6270934 0 2021-05-22 08:23:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1043":"csi-mock-csi-mock-volumes-1043","csi-mock-csi-mock-volumes-1206":"csi-mock-csi-mock-volumes-1206","csi-mock-csi-mock-volumes-1231":"csi-mock-csi-mock-volumes-1231","csi-mock-csi-mock-volumes-1333":"csi-mock-csi-mock-volumes-1333","csi-mock-csi-mock-volumes-1360":"csi-mock-csi-mock-volumes-1360","csi-mock-csi-mock-volumes-1570":"csi-mock-csi-mock-volumes-1570","csi-mock-csi-mock-volumes-1663":"csi-mock-csi-mock-volumes-1663","csi-mock-csi-mock-volumes-1684":"csi-mock-csi-mock-volumes-1684","csi-mock-csi-mock-volumes-1709":"csi-mock-csi-mock-volumes-1709","csi-mock-csi-mock-volumes-1799":"csi-mock-csi-mock-volumes-1799","csi-mock-csi-mock-volumes-1801":"csi-mock-csi-mock-volumes-1801","csi-mock-csi-mock-volumes-1826":"csi-mock-csi-mock-volumes-1826","csi-mock-csi-mock-volumes-1895":"csi-mock-csi-mock-volumes-1895","csi-mock-csi-mock-volumes-1928":"csi-mock-csi-mock-volumes-1928","csi-mock-csi-mock-volumes-1957":"csi-mock-csi-mock-volumes-1957","csi-mock-csi-mock-volumes-1979":"csi-mock-csi-mock-volumes-1979","csi-mock-csi-mock-volumes-2039":"csi-mock-csi-mock-volumes-2039","csi-mock-csi-mock-volumes-2104":"csi-mock-csi-mock-volumes-2104","csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2229":"csi-mock-csi-mock-volumes-2229","csi-mock-csi-mock-volumes-2262":"csi-mock-csi-mock-volumes-2262","csi-mock-csi-mock-volumes-2272":"csi-mock-csi-mock-volumes-2272","csi-mock-csi-mock-volumes-2290":"csi-mock-csi-mock-volumes-2290","csi-mock-csi-mock-volumes-231":"csi-mock-csi-mock-volumes-231","csi-mock-csi-mock-volumes-2439":"csi-mock-csi-mock-volumes-2439","csi-mock-csi-mock-volumes-2502":"csi-mock-csi-mock-volumes-2502","csi-mock-csi-mock-volumes-2573":"csi-mock-csi-mock-volumes-2573","csi-mock-csi-mock-volumes-2582":"csi-mock-csi-mock-volumes-2582","csi-mock-csi-mock-volumes-2589":"csi-mock-csi-mock-volumes-2589","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-264":"csi-mock-csi-mock-volumes-264","csi-mock-csi-mock-volumes-2708":"csi-mock-csi-mock-volumes-2708","csi-mock-csi-mock-volumes-2709":"csi-mock-csi-mock-volumes-2709","csi-mock-csi-mock-volumes-2834":"csi-mock-csi-mock-volumes-2834","csi-mock-csi-mock-volumes-2887":"csi-mock-csi-mock-volumes-2887","csi-mock-csi-mock-volumes-3020":"csi-mock-csi-mock-volumes-3020","csi-mock-csi-mock-volumes-3030":"csi-mock-csi-mock-volumes-3030","csi-mock-csi-mock-volumes-3239":"csi-mock-csi-mock-volumes-3239","csi-mock-csi-mock-volumes-3297":"csi-mock-csi-mock-volumes-3297","csi-mock-csi-mock-volumes-3328":"csi-mock-csi-mock-volumes-3328","csi-mock-csi-mock-volumes-3358":"csi-mock-csi-mock-volumes-3358","csi-mock-csi-mock-volumes-338":"csi-mock-csi-mock-volumes-338","csi-mock-csi-mock-volumes-3397":"csi-mock-csi-mock-volumes-3397","c
si-mock-csi-mock-volumes-3429":"csi-mock-csi-mock-volumes-3429","csi-mock-csi-mock-volumes-3509":"csi-mock-csi-mock-volumes-3509","csi-mock-csi-mock-volumes-3570":"csi-mock-csi-mock-volumes-3570","csi-mock-csi-mock-volumes-3684":"csi-mock-csi-mock-volumes-3684","csi-mock-csi-mock-volumes-3688":"csi-mock-csi-mock-volumes-3688","csi-mock-csi-mock-volumes-3826":"csi-mock-csi-mock-volumes-3826","csi-mock-csi-mock-volumes-3868":"csi-mock-csi-mock-volumes-3868","csi-mock-csi-mock-volumes-3935":"csi-mock-csi-mock-volumes-3935","csi-mock-csi-mock-volumes-4016":"csi-mock-csi-mock-volumes-4016","csi-mock-csi-mock-volumes-4061":"csi-mock-csi-mock-volumes-4061","csi-mock-csi-mock-volumes-4236":"csi-mock-csi-mock-volumes-4236","csi-mock-csi-mock-volumes-4241":"csi-mock-csi-mock-volumes-4241","csi-mock-csi-mock-volumes-4348":"csi-mock-csi-mock-volumes-4348","csi-mock-csi-mock-volumes-4356":"csi-mock-csi-mock-volumes-4356","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4490":"csi-mock-csi-mock-volumes-4490","csi-mock-csi-mock-volumes-4572":"csi-mock-csi-mock-volumes-4572","csi-mock-csi-mock-volumes-4622":"csi-mock-csi-mock-volumes-4622","csi-mock-csi-mock-volumes-4716":"csi-mock-csi-mock-volumes-4716","csi-mock-csi-mock-volumes-4721":"csi-mock-csi-mock-volumes-4721","csi-mock-csi-mock-volumes-476":"csi-mock-csi-mock-volumes-476","csi-mock-csi-mock-volumes-4796":"csi-mock-csi-mock-volumes-4796","csi-mock-csi-mock-volumes-4808":"csi-mock-csi-mock-volumes-4808","csi-mock-csi-mock-volumes-4881":"csi-mock-csi-mock-volumes-4881","csi-mock-csi-mock-volumes-5037":"csi-mock-csi-mock-volumes-5037","csi-mock-csi-mock-volumes-5044":"csi-mock-csi-mock-volumes-5044","csi-mock-csi-mock-volumes-5066":"csi-mock-csi-mock-volumes-5066","csi-mock-csi-mock-volumes-507":"csi-mock-csi-mock-volumes-507","csi-mock-csi-mock-volumes-5081":"csi-mock-csi-mock-volumes-5081","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5151":"csi-mock-csi-mock-volumes-5151","csi-mock-csi-mock-volumes-5192":"csi-mock-csi-mock-volumes-5192","csi-mock-csi-mock-volumes-521":"csi-mock-csi-mock-volumes-521","csi-mock-csi-mock-volumes-5212":"csi-mock-csi-mock-volumes-5212","csi-mock-csi-mock-volumes-5258":"csi-mock-csi-mock-volumes-5258","csi-mock-csi-mock-volumes-5438":"csi-mock-csi-mock-volumes-5438","csi-mock-csi-mock-volumes-5458":"csi-mock-csi-mock-volumes-5458","csi-mock-csi-mock-volumes-5473":"csi-mock-csi-mock-volumes-5473","csi-mock-csi-mock-volumes-5479":"csi-mock-csi-mock-volumes-5479","csi-mock-csi-mock-volumes-5489":"csi-mock-csi-mock-volumes-5489","csi-mock-csi-mock-volumes-5566":"csi-mock-csi-mock-volumes-5566","csi-mock-csi-mock-volumes-5607":"csi-mock-csi-mock-volumes-5607","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5779":"csi-mock-csi-mock-volumes-5779","csi-mock-csi-mock-volumes-5811":"csi-mock-csi-mock-volumes-5811","csi-mock-csi-mock-volumes-5822":"csi-mock-csi-mock-volumes-5822","csi-mock-csi-mock-volumes-5852":"csi-mock-csi-mock-volumes-5852","csi-mock-csi-mock-volumes-5913":"csi-mock-csi-mock-volumes-5913","csi-mock-csi-mock-volumes-6027":"csi-mock-csi-mock-volumes-6027","csi-mock-csi-mock-volumes-6074":"csi-mock-csi-mock-volumes-6074","csi-mock-csi-mock-volumes-6086":"csi-mock-csi-mock-volumes-6086","csi-mock-csi-mock-volumes-6090":"csi-mock-csi-mock-volumes-6090","csi-mock-csi-mock-volumes-6187":"csi-mock-csi-mock-volumes-6187","csi-mock-csi-mock-volumes-6192":"csi-mock-csi-mock-volumes-6192","
csi-mock-csi-mock-volumes-6350":"csi-mock-csi-mock-volumes-6350","csi-mock-csi-mock-volumes-641":"csi-mock-csi-mock-volumes-641","csi-mock-csi-mock-volumes-6434":"csi-mock-csi-mock-volumes-6434","csi-mock-csi-mock-volumes-6436":"csi-mock-csi-mock-volumes-6436","csi-mock-csi-mock-volumes-6449":"csi-mock-csi-mock-volumes-6449","csi-mock-csi-mock-volumes-6567":"csi-mock-csi-mock-volumes-6567","csi-mock-csi-mock-volumes-6584":"csi-mock-csi-mock-volumes-6584","csi-mock-csi-mock-volumes-6649":"csi-mock-csi-mock-volumes-6649","csi-mock-csi-mock-volumes-6748":"csi-mock-csi-mock-volumes-6748","csi-mock-csi-mock-volumes-6808":"csi-mock-csi-mock-volumes-6808","csi-mock-csi-mock-volumes-6835":"csi-mock-csi-mock-volumes-6835","csi-mock-csi-mock-volumes-6858":"csi-mock-csi-mock-volumes-6858","csi-mock-csi-mock-volumes-6881":"csi-mock-csi-mock-volumes-6881","csi-mock-csi-mock-volumes-6944":"csi-mock-csi-mock-volumes-6944","csi-mock-csi-mock-volumes-7014":"csi-mock-csi-mock-volumes-7014","csi-mock-csi-mock-volumes-7049":"csi-mock-csi-mock-volumes-7049","csi-mock-csi-mock-volumes-7063":"csi-mock-csi-mock-volumes-7063","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7223":"csi-mock-csi-mock-volumes-7223","csi-mock-csi-mock-volumes-7292":"csi-mock-csi-mock-volumes-7292","csi-mock-csi-mock-volumes-731":"csi-mock-csi-mock-volumes-731","csi-mock-csi-mock-volumes-7372":"csi-mock-csi-mock-volumes-7372","csi-mock-csi-mock-volumes-7390":"csi-mock-csi-mock-volumes-7390","csi-mock-csi-mock-volumes-7436":"csi-mock-csi-mock-volumes-7436","csi-mock-csi-mock-volumes-7562":"csi-mock-csi-mock-volumes-7562","csi-mock-csi-mock-volumes-7661":"csi-mock-csi-mock-volumes-7661","csi-mock-csi-mock-volumes-7711":"csi-mock-csi-mock-volumes-7711","csi-mock-csi-mock-volumes-7764":"csi-mock-csi-mock-volumes-7764","csi-mock-csi-mock-volumes-7779":"csi-mock-csi-mock-volumes-7779","csi-mock-csi-mock-volumes-7813":"csi-mock-csi-mock-volumes-7813","csi-mock-csi-mock-volumes-785":"csi-mock-csi-mock-volumes-785","csi-mock-csi-mock-volumes-7865":"csi-mock-csi-mock-volumes-7865","csi-mock-csi-mock-volumes-7884":"csi-mock-csi-mock-volumes-7884","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8126":"csi-mock-csi-mock-volumes-8126","csi-mock-csi-mock-volumes-8149":"csi-mock-csi-mock-volumes-8149","csi-mock-csi-mock-volumes-8201":"csi-mock-csi-mock-volumes-8201","csi-mock-csi-mock-volumes-8273":"csi-mock-csi-mock-volumes-8273","csi-mock-csi-mock-volumes-840":"csi-mock-csi-mock-volumes-840","csi-mock-csi-mock-volumes-8635":"csi-mock-csi-mock-volumes-8635","csi-mock-csi-mock-volumes-8665":"csi-mock-csi-mock-volumes-8665","csi-mock-csi-mock-volumes-8764":"csi-mock-csi-mock-volumes-8764","csi-mock-csi-mock-volumes-8765":"csi-mock-csi-mock-volumes-8765","csi-mock-csi-mock-volumes-8835":"csi-mock-csi-mock-volumes-8835","csi-mock-csi-mock-volumes-884":"csi-mock-csi-mock-volumes-884","csi-mock-csi-mock-volumes-8968":"csi-mock-csi-mock-volumes-8968","csi-mock-csi-mock-volumes-8973":"csi-mock-csi-mock-volumes-8973","csi-mock-csi-mock-volumes-8985":"csi-mock-csi-mock-volumes-8985","csi-mock-csi-mock-volumes-9044":"csi-mock-csi-mock-volumes-9044","csi-mock-csi-mock-volumes-9077":"csi-mock-csi-mock-volumes-9077","csi-mock-csi-mock-volumes-9265":"csi-mock-csi-mock-volumes-9265","csi-mock-csi-mock-volumes-9313":"csi-mock-csi-mock-volumes-9313","csi-mock-csi-mock-volumes-9378":"csi-mock-csi-mock-volumes-9378","csi-mock-csi-mock-volumes-9618":"csi-mock-csi-mock-volumes-9618","c
si-mock-csi-mock-volumes-963":"csi-mock-csi-mock-volumes-963","csi-mock-csi-mock-volumes-9639":"csi-mock-csi-mock-volumes-9639","csi-mock-csi-mock-volumes-9717":"csi-mock-csi-mock-volumes-9717","csi-mock-csi-mock-volumes-9736":"csi-mock-csi-mock-volumes-9736","csi-mock-csi-mock-volumes-9757":"csi-mock-csi-mock-volumes-9757","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9838":"csi-mock-csi-mock-volumes-9838","csi-mock-csi-mock-volumes-9918":"csi-mock-csi-mock-volumes-9918"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-22 08:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-06-09 08:26:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-06-14 16:49:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-06-14 16:50:50 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} 
BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:leguer-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b3190afa60a4b3f8acfa4d884b5f41e,SystemUUID:e4621450-f7e7-447f-a390-1b05f9cdaec2,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:706d69e007d61c69495dc384167c7cb242ced8b893ac8bb30bdee4367c894980 
docker.io/litmuschaos/go-runner:1.13.2],SizeBytes:153211568,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:806f80ccc41d7d5b33035d09bfc41bb7814f9989e738fcdefc29780934d4a663 docker.io/litmuschaos/chaos-runner:1.13.2],SizeBytes:56004602,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:0f30e5c1a1286a4bf6739dd8bdf1d00f0dd915474b3c62e892592277b0395986 docker.io/bitnami/kubectl:latest],SizeBytes:49444404,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f 
docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:17747507,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:8888823,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 14 16:55:50.874: 
INFO: Logging kubelet events for node leguer-worker
Jun 14 16:55:50.877: INFO: Logging pods the kubelet thinks is on node leguer-worker
Jun 14 16:55:50.905: INFO: kindnet-svp2q started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container kindnet-cni ready: true, restart count 93
Jun 14 16:55:50.905: INFO: chaos-daemon-5rrs8 started at 2021-06-09 07:53:21 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 16:55:50.905: INFO: speaker-wn8vq started at 2021-06-09 07:53:21 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container speaker ready: true, restart count 0
Jun 14 16:55:50.905: INFO: pod1 started at 2021-06-14 16:50:50 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container agnhost ready: false, restart count 0
Jun 14 16:55:50.905: INFO: kube-proxy-7g274 started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 16:55:50.905: INFO: tune-sysctls-phstc started at 2021-06-09 07:53:22 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container setsysctls ready: true, restart count 0
Jun 14 16:55:50.905: INFO: coredns-74ff55c5b-cjjs2 started at 2021-06-09 08:12:11 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container coredns ready: true, restart count 0
Jun 14 16:55:50.905: INFO: coredns-74ff55c5b-jhwdl started at 2021-06-09 08:12:11 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container coredns ready: true, restart count 0
Jun 14 16:55:50.905: INFO: create-loop-devs-sjhvx started at 2021-06-09 07:53:50 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container loopdev ready: true, restart count 0
Jun 14 16:55:50.905: INFO: kube-multus-ds-9qpk4 started at 2021-06-09 07:53:37 +0000 UTC (0+1 container statuses recorded)
Jun 14 16:55:50.905: INFO: Container kube-multus ready: true, restart count 0
W0614 16:55:50.914309 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
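The block above is the per-node debug dump the e2e framework prints while investigating the sched-pred-3805 spec: the full Node object for leguer-worker followed by the pods the kubelet reports on that node. For readers who want to reproduce that pod-per-node listing outside the suite, a minimal client-go sketch follows; it is not the framework's own code, and the program layout, output format, and the /root/.kube/config path (the one this run reads) are assumptions.

// List the pods bound to one node, similar in spirit to the
// "Logging pods the kubelet thinks is on node ..." dump above.
// Hypothetical standalone sketch; adjust the kubeconfig path and node name.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The e2e run above uses /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// spec.nodeName is a supported field selector for pods; "" lists all namespaces.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=leguer-worker",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Listing pods across all namespaces with the spec.nodeName field selector is effectively what the "Logging pods the kubelet thinks is on node ..." lines summarize, one "started at" entry and one "Container ... ready" entry per pod.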
Jun 14 16:55:51.194: INFO: Latency metrics for node leguer-worker Jun 14 16:55:51.194: INFO: Logging node info for node leguer-worker2 Jun 14 16:55:51.205: INFO: Node Info: &Node{ObjectMeta:{leguer-worker2 8f8eaae4-b1b9-4593-a956-0b952e0c41c9 6270788 0 2021-05-22 08:23:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-101":"csi-mock-csi-mock-volumes-101","csi-mock-csi-mock-volumes-1085":"csi-mock-csi-mock-volumes-1085","csi-mock-csi-mock-volumes-1097":"csi-mock-csi-mock-volumes-1097","csi-mock-csi-mock-volumes-116":"csi-mock-csi-mock-volumes-116","csi-mock-csi-mock-volumes-1188":"csi-mock-csi-mock-volumes-1188","csi-mock-csi-mock-volumes-1245":"csi-mock-csi-mock-volumes-1245","csi-mock-csi-mock-volumes-1317":"csi-mock-csi-mock-volumes-1317","csi-mock-csi-mock-volumes-1465":"csi-mock-csi-mock-volumes-1465","csi-mock-csi-mock-volumes-1553":"csi-mock-csi-mock-volumes-1553","csi-mock-csi-mock-volumes-1584":"csi-mock-csi-mock-volumes-1584","csi-mock-csi-mock-volumes-1665":"csi-mock-csi-mock-volumes-1665","csi-mock-csi-mock-volumes-1946":"csi-mock-csi-mock-volumes-1946","csi-mock-csi-mock-volumes-1954":"csi-mock-csi-mock-volumes-1954","csi-mock-csi-mock-volumes-2098":"csi-mock-csi-mock-volumes-2098","csi-mock-csi-mock-volumes-2254":"csi-mock-csi-mock-volumes-2254","csi-mock-csi-mock-volumes-2380":"csi-mock-csi-mock-volumes-2380","csi-mock-csi-mock-volumes-24":"csi-mock-csi-mock-volumes-24","csi-mock-csi-mock-volumes-2611":"csi-mock-csi-mock-volumes-2611","csi-mock-csi-mock-volumes-2722":"csi-mock-csi-mock-volumes-2722","csi-mock-csi-mock-volumes-2731":"csi-mock-csi-mock-volumes-2731","csi-mock-csi-mock-volumes-282":"csi-mock-csi-mock-volumes-282","csi-mock-csi-mock-volumes-2860":"csi-mock-csi-mock-volumes-2860","csi-mock-csi-mock-volumes-3181":"csi-mock-csi-mock-volumes-3181","csi-mock-csi-mock-volumes-3267":"csi-mock-csi-mock-volumes-3267","csi-mock-csi-mock-volumes-3275":"csi-mock-csi-mock-volumes-3275","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3308":"csi-mock-csi-mock-volumes-3308","csi-mock-csi-mock-volumes-3354":"csi-mock-csi-mock-volumes-3354","csi-mock-csi-mock-volumes-3523":"csi-mock-csi-mock-volumes-3523","csi-mock-csi-mock-volumes-3559":"csi-mock-csi-mock-volumes-3559","csi-mock-csi-mock-volumes-3596":"csi-mock-csi-mock-volumes-3596","csi-mock-csi-mock-volumes-3624":"csi-mock-csi-mock-volumes-3624","csi-mock-csi-mock-volumes-3731":"csi-mock-csi-mock-volumes-3731","csi-mock-csi-mock-volumes-3760":"csi-mock-csi-mock-volumes-3760","csi-mock-csi-mock-volumes-3791":"csi-mock-csi-mock-volumes-3791","csi-mock-csi-mock-volumes-3796":"csi-mock-csi-mock-volumes-3796","csi-mock-csi-mock-volumes-38":"csi-mock-csi-mock-volumes-38","csi-mock-csi-mock-volumes-3926":"csi-mock-csi-mock-volumes-3926","csi-mock-csi-mock-volumes-3935":"csi-mock-csi-mock-volumes-3935","csi-mock-csi-mock-volumes-3993":"csi-mock-csi-mock-volumes-3993","csi-mock-csi-mock-volumes-4187":"csi-mock-csi-mock-volumes-4187","csi-mock-csi-mock-volumes-419":"csi-mock-csi-mock-volumes-419","csi-mock-csi-mock-volumes-4231":"csi-mock-csi-mock-volumes-4231","csi-mock-csi-mock-volumes-4274":"csi-mock-csi-mock-volumes-4274","csi-mock-csi-mock-volumes-4278":"csi-mock-csi-mock-volumes-4278","csi-mock-csi-mock-volumes-4352":"csi-mock-csi-mock-volumes-4352","csi-mock-csi-mock-
volumes-438":"csi-mock-csi-mock-volumes-438","csi-mock-csi-mock-volumes-4439":"csi-mock-csi-mock-volumes-4439","csi-mock-csi-mock-volumes-4567":"csi-mock-csi-mock-volumes-4567","csi-mock-csi-mock-volumes-4864":"csi-mock-csi-mock-volumes-4864","csi-mock-csi-mock-volumes-4869":"csi-mock-csi-mock-volumes-4869","csi-mock-csi-mock-volumes-4902":"csi-mock-csi-mock-volumes-4902","csi-mock-csi-mock-volumes-4926":"csi-mock-csi-mock-volumes-4926","csi-mock-csi-mock-volumes-4981":"csi-mock-csi-mock-volumes-4981","csi-mock-csi-mock-volumes-5085":"csi-mock-csi-mock-volumes-5085","csi-mock-csi-mock-volumes-5254":"csi-mock-csi-mock-volumes-5254","csi-mock-csi-mock-volumes-529":"csi-mock-csi-mock-volumes-529","csi-mock-csi-mock-volumes-5359":"csi-mock-csi-mock-volumes-5359","csi-mock-csi-mock-volumes-5482":"csi-mock-csi-mock-volumes-5482","csi-mock-csi-mock-volumes-5526":"csi-mock-csi-mock-volumes-5526","csi-mock-csi-mock-volumes-5620":"csi-mock-csi-mock-volumes-5620","csi-mock-csi-mock-volumes-5823":"csi-mock-csi-mock-volumes-5823","csi-mock-csi-mock-volumes-5902":"csi-mock-csi-mock-volumes-5902","csi-mock-csi-mock-volumes-6003":"csi-mock-csi-mock-volumes-6003","csi-mock-csi-mock-volumes-6014":"csi-mock-csi-mock-volumes-6014","csi-mock-csi-mock-volumes-6026":"csi-mock-csi-mock-volumes-6026","csi-mock-csi-mock-volumes-6089":"csi-mock-csi-mock-volumes-6089","csi-mock-csi-mock-volumes-6102":"csi-mock-csi-mock-volumes-6102","csi-mock-csi-mock-volumes-6152":"csi-mock-csi-mock-volumes-6152","csi-mock-csi-mock-volumes-6220":"csi-mock-csi-mock-volumes-6220","csi-mock-csi-mock-volumes-6258":"csi-mock-csi-mock-volumes-6258","csi-mock-csi-mock-volumes-6290":"csi-mock-csi-mock-volumes-6290","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-6424":"csi-mock-csi-mock-volumes-6424","csi-mock-csi-mock-volumes-6448":"csi-mock-csi-mock-volumes-6448","csi-mock-csi-mock-volumes-6551":"csi-mock-csi-mock-volumes-6551","csi-mock-csi-mock-volumes-6564":"csi-mock-csi-mock-volumes-6564","csi-mock-csi-mock-volumes-661":"csi-mock-csi-mock-volumes-661","csi-mock-csi-mock-volumes-6620":"csi-mock-csi-mock-volumes-6620","csi-mock-csi-mock-volumes-6689":"csi-mock-csi-mock-volumes-6689","csi-mock-csi-mock-volumes-6776":"csi-mock-csi-mock-volumes-6776","csi-mock-csi-mock-volumes-7048":"csi-mock-csi-mock-volumes-7048","csi-mock-csi-mock-volumes-7182":"csi-mock-csi-mock-volumes-7182","csi-mock-csi-mock-volumes-7195":"csi-mock-csi-mock-volumes-7195","csi-mock-csi-mock-volumes-7255":"csi-mock-csi-mock-volumes-7255","csi-mock-csi-mock-volumes-7316":"csi-mock-csi-mock-volumes-7316","csi-mock-csi-mock-volumes-7339":"csi-mock-csi-mock-volumes-7339","csi-mock-csi-mock-volumes-7364":"csi-mock-csi-mock-volumes-7364","csi-mock-csi-mock-volumes-7388":"csi-mock-csi-mock-volumes-7388","csi-mock-csi-mock-volumes-7421":"csi-mock-csi-mock-volumes-7421","csi-mock-csi-mock-volumes-7435":"csi-mock-csi-mock-volumes-7435","csi-mock-csi-mock-volumes-7495":"csi-mock-csi-mock-volumes-7495","csi-mock-csi-mock-volumes-7533":"csi-mock-csi-mock-volumes-7533","csi-mock-csi-mock-volumes-7664":"csi-mock-csi-mock-volumes-7664","csi-mock-csi-mock-volumes-7688":"csi-mock-csi-mock-volumes-7688","csi-mock-csi-mock-volumes-7695":"csi-mock-csi-mock-volumes-7695","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7768":"csi-mock-csi-mock-volumes-7768","csi-mock-csi-mock-volumes-7791":"csi-mock-csi-mock-volumes-7791","csi-mock-csi-mock-volumes-7938":"csi-mock-csi-mock-volumes-7938","csi-mock-csi-mo
ck-volumes-800":"csi-mock-csi-mock-volumes-800","csi-mock-csi-mock-volumes-8090":"csi-mock-csi-mock-volumes-8090","csi-mock-csi-mock-volumes-8163":"csi-mock-csi-mock-volumes-8163","csi-mock-csi-mock-volumes-8244":"csi-mock-csi-mock-volumes-8244","csi-mock-csi-mock-volumes-8351":"csi-mock-csi-mock-volumes-8351","csi-mock-csi-mock-volumes-8495":"csi-mock-csi-mock-volumes-8495","csi-mock-csi-mock-volumes-8510":"csi-mock-csi-mock-volumes-8510","csi-mock-csi-mock-volumes-860":"csi-mock-csi-mock-volumes-860","csi-mock-csi-mock-volumes-868":"csi-mock-csi-mock-volumes-868","csi-mock-csi-mock-volumes-8794":"csi-mock-csi-mock-volumes-8794","csi-mock-csi-mock-volumes-8829":"csi-mock-csi-mock-volumes-8829","csi-mock-csi-mock-volumes-8875":"csi-mock-csi-mock-volumes-8875","csi-mock-csi-mock-volumes-8912":"csi-mock-csi-mock-volumes-8912","csi-mock-csi-mock-volumes-8951":"csi-mock-csi-mock-volumes-8951","csi-mock-csi-mock-volumes-9011":"csi-mock-csi-mock-volumes-9011","csi-mock-csi-mock-volumes-9167":"csi-mock-csi-mock-volumes-9167","csi-mock-csi-mock-volumes-9176":"csi-mock-csi-mock-volumes-9176","csi-mock-csi-mock-volumes-926":"csi-mock-csi-mock-volumes-926","csi-mock-csi-mock-volumes-9267":"csi-mock-csi-mock-volumes-9267","csi-mock-csi-mock-volumes-927":"csi-mock-csi-mock-volumes-927","csi-mock-csi-mock-volumes-9337":"csi-mock-csi-mock-volumes-9337","csi-mock-csi-mock-volumes-9346":"csi-mock-csi-mock-volumes-9346","csi-mock-csi-mock-volumes-9361":"csi-mock-csi-mock-volumes-9361","csi-mock-csi-mock-volumes-944":"csi-mock-csi-mock-volumes-944","csi-mock-csi-mock-volumes-9453":"csi-mock-csi-mock-volumes-9453","csi-mock-csi-mock-volumes-9494":"csi-mock-csi-mock-volumes-9494","csi-mock-csi-mock-volumes-9507":"csi-mock-csi-mock-volumes-9507","csi-mock-csi-mock-volumes-9529":"csi-mock-csi-mock-volumes-9529","csi-mock-csi-mock-volumes-9629":"csi-mock-csi-mock-volumes-9629","csi-mock-csi-mock-volumes-9788":"csi-mock-csi-mock-volumes-9788","csi-mock-csi-mock-volumes-9818":"csi-mock-csi-mock-volumes-9818","csi-mock-csi-mock-volumes-9836":"csi-mock-csi-mock-volumes-9836","csi-mock-csi-mock-volumes-9868":"csi-mock-csi-mock-volumes-9868"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-22 08:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-06-09 08:24:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-06-14 16:49:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-14 16:54:49 +0000 UTC,LastTransitionTime:2021-05-22 08:23:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:leguer-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:86c8c7b1af6542c49386440702c637be,SystemUUID:fe86f09a-28b3-4895-94ce-6312a2d07a57,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:706d69e007d61c69495dc384167c7cb242ced8b893ac8bb30bdee4367c894980 docker.io/litmuschaos/go-runner:1.13.2],SizeBytes:153211568,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/library/docker@sha256:87ed8e3a7b251eef42c2e4251f95ae3c5f8c4c0a64900f19cc532d0a42aa7107 docker.io/library/docker:dind],SizeBytes:81659525,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:332c4eff6fb327d140edbcc4cf5be7d3afd2ce5b6883348350f2336320c79ff7 docker.io/litmuschaos/chaos-operator:1.13.2],SizeBytes:57450276,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:806f80ccc41d7d5b33035d09bfc41bb7814f9989e738fcdefc29780934d4a663 docker.io/litmuschaos/chaos-runner:1.13.2],SizeBytes:56004602,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:0f30e5c1a1286a4bf6739dd8bdf1d00f0dd915474b3c62e892592277b0395986 docker.io/bitnami/kubectl:latest],SizeBytes:49444404,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[quay.io/metallb/controller@sha256:68c52b5301b42cad0cbf497f3d83c2e18b82548a9c36690b99b2023c55cb715a quay.io/metallb/controller:main],SizeBytes:35989620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:17747507,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 
docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:8888823,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 14 16:55:51.206: INFO: Logging kubelet events for node leguer-worker2 Jun 14 16:55:51.225: INFO: Logging pods the kubelet thinks is on node leguer-worker2 Jun 14 16:55:51.257: INFO: chaos-controller-manager-69c479c674-ld4jc started at 2021-05-26 09:15:28 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.257: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 16:55:51.258: INFO: kindnet-kx9mk started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 16:55:51.258: INFO: tune-sysctls-vjdll started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container setsysctls ready: true, restart count 0 Jun 14 16:55:51.258: INFO: chaos-operator-ce-5754fd4b69-zcrd4 started at 2021-05-26 09:12:47 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 16:55:51.258: INFO: speaker-55zcr started at 2021-05-22 08:23:57 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container speaker ready: true, restart count 0 Jun 14 16:55:51.258: INFO: kube-proxy-mp68m started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container kube-proxy 
ready: true, restart count 0 Jun 14 16:55:51.258: INFO: kube-multus-ds-n48bs started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container kube-multus ready: true, restart count 1 Jun 14 16:55:51.258: INFO: contour-6648989f79-8gz4z started at 2021-05-22 10:05:00 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container contour ready: true, restart count 1 Jun 14 16:55:51.258: INFO: controller-675995489c-h2wms started at 2021-05-22 08:23:59 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container controller ready: true, restart count 0 Jun 14 16:55:51.258: INFO: dockerd started at 2021-05-26 09:12:20 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container dockerd ready: true, restart count 0 Jun 14 16:55:51.258: INFO: chaos-daemon-2tzpz started at 2021-05-26 09:15:28 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 16:55:51.258: INFO: create-loop-devs-nbf25 started at 2021-05-22 08:23:43 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container loopdev ready: true, restart count 0 Jun 14 16:55:51.258: INFO: contour-6648989f79-2vldk started at 2021-05-22 08:24:02 +0000 UTC (0+1 container statuses recorded) Jun 14 16:55:51.258: INFO: Container contour ready: true, restart count 3 W0614 16:55:51.428288 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jun 14 16:55:51.821: INFO: Latency metrics for node leguer-worker2 Jun 14 16:55:51.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3805" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • Failure [303.722 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jun 14 16:55:50.242: Unexpected error: <*errors.errorString | 0xc0002cc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:1129 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":18,"completed":6,"skipped":1170,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:55:51.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:56:24.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4517" for this suite. STEP: Destroying namespace "nsdeletetest-7370" for this suite. Jun 14 16:56:24.522: INFO: Namespace nsdeletetest-7370 was already deleted STEP: Destroying namespace "nsdeletetest-3047" for this suite. • [SLOW TEST:32.688 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":18,"completed":7,"skipped":1395,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:56:24.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:56:24.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3189" for this suite. STEP: Destroying namespace "nspatchtest-ba4f016b-27c4-467f-8088-b9f864cf7ba2-1432" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":18,"completed":8,"skipped":1641,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:56:24.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jun 14 16:56:24.978: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 14 16:56:24.987: INFO: Number of nodes with available pods: 0 Jun 14 16:56:24.987: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 14 16:56:25.006: INFO: Number of nodes with available pods: 0
Jun 14 16:56:25.006: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:26.129: INFO: Number of nodes with available pods: 0
Jun 14 16:56:26.129: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:27.023: INFO: Number of nodes with available pods: 1
Jun 14 16:56:27.023: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 14 16:56:27.224: INFO: Number of nodes with available pods: 1
Jun 14 16:56:27.224: INFO: Number of running nodes: 0, number of available pods: 1
Jun 14 16:56:28.229: INFO: Number of nodes with available pods: 0
Jun 14 16:56:28.230: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 14 16:56:28.241: INFO: Number of nodes with available pods: 0
Jun 14 16:56:28.241: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:29.246: INFO: Number of nodes with available pods: 0
Jun 14 16:56:29.246: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:30.429: INFO: Number of nodes with available pods: 0
Jun 14 16:56:30.429: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:31.322: INFO: Number of nodes with available pods: 0
Jun 14 16:56:31.322: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:32.330: INFO: Number of nodes with available pods: 0
Jun 14 16:56:32.330: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:33.328: INFO: Number of nodes with available pods: 0
Jun 14 16:56:33.328: INFO: Node leguer-worker2 is running more than one daemon pod
Jun 14 16:56:34.244: INFO: Number of nodes with available pods: 1
Jun 14 16:56:34.244: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1723, will wait for the garbage collector to delete the pods
Jun 14 16:56:34.305: INFO: Deleting DaemonSet.extensions daemon-set took: 4.029691ms
Jun 14 16:56:35.105: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.292025ms
Jun 14 16:56:48.008: INFO: Number of nodes with available pods: 0
Jun 14 16:56:48.008: INFO: Number of running nodes: 0, number of available pods: 0
Jun 14 16:56:48.011: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6271222"},"items":null}
Jun 14 16:56:48.014: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6271222"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 16:56:48.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1723" for this suite.
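The spec above creates "daemon-set" with a node selector, verifies it stays at zero pods, then toggles scheduling by relabeling leguer-worker2 from blue to green and switching the update strategy to RollingUpdate. A rough client-go sketch of such a selector-scoped DaemonSet is shown below; the label key "color", the container image, and the function name are illustrative assumptions, not details taken from the test source.

// Sketch of a DaemonSet like the "daemon-set" exercised above: it only
// schedules onto nodes carrying an assumed "color: blue" label and uses the
// RollingUpdate strategy the test switches to.
package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createBlueDaemonSet creates a node-selector-scoped DaemonSet in the given
// namespace; while no node carries color=blue, no daemon pods are scheduled.
func createBlueDaemonSet(ctx context.Context, cs kubernetes.Interface, ns string) error {
	labels := map[string]string{"app": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods land only on nodes labeled color=blue; the test drives
					// scheduling on and off by relabeling a single worker node.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
					}},
				},
			},
		},
	}
	_, err := cs.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	return err
}

With this in place, adding or removing the assumed color label on a node is enough for the DaemonSet controller to start or drain the daemon pod there, which is what the "Number of running nodes / available pods" polling above is waiting on.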
• [SLOW TEST:23.424 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":18,"completed":9,"skipped":2847,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:56:48.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 16:56:48.645: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 16:56:48.654: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 16:56:48.658: INFO: Logging pods the apiserver thinks is on node leguer-worker before test
Jun 14 16:56:48.666: INFO: chaos-daemon-5rrs8 from default started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.666: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 16:56:48.666: INFO: coredns-74ff55c5b-cjjs2 from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.666: INFO: Container coredns ready: true, restart count 0
Jun 14 16:56:48.666: INFO: coredns-74ff55c5b-jhwdl from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.666: INFO: Container coredns ready: true, restart count 0
Jun 14 16:56:48.666: INFO: create-loop-devs-sjhvx from kube-system started at 2021-06-09 07:53:50 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.666: INFO: Container loopdev ready: true, restart count 0
Jun 14 16:56:48.666: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.667: INFO: Container kindnet-cni ready: true, restart count 93
Jun 14 16:56:48.667: INFO: kube-multus-ds-9qpk4 from kube-system started at 2021-06-09 07:53:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.667: INFO: Container kube-multus ready: true, restart count 0
Jun 14 16:56:48.667: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.667: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 16:56:48.667: INFO: tune-sysctls-phstc from kube-system started at 2021-06-09 07:53:22 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.667: INFO: Container setsysctls ready: true, restart count 0
Jun 14 16:56:48.667: INFO: speaker-wn8vq from metallb-system started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.667: INFO: Container speaker ready: true, restart count 0
Jun 14 16:56:48.667: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test
Jun 14 16:56:48.675: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.675: INFO: Container chaos-mesh ready: true, restart count 0
Jun 14 16:56:48.675: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.675: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 16:56:48.675: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.675: INFO: Container dockerd ready: true, restart count 0
Jun 14 16:56:48.675: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.675: INFO: Container loopdev ready: true, restart count 0
Jun 14 16:56:48.675: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.675: INFO: Container kindnet-cni ready: true, restart count 149
Jun 14 16:56:48.675: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.675: INFO: Container kube-multus ready: true, restart count 1
Jun 14 16:56:48.675: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 16:56:48.675: INFO: Container kube-proxy ready: true, restart count 0
Container kube-proxy ready: true, restart count 0 Jun 14 16:56:48.675: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 16:56:48.675: INFO: Container setsysctls ready: true, restart count 0 Jun 14 16:56:48.675: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 16:56:48.675: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 16:56:48.675: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 16:56:48.675: INFO: Container controller ready: true, restart count 0 Jun 14 16:56:48.675: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 16:56:48.675: INFO: Container speaker ready: true, restart count 0 Jun 14 16:56:48.675: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 16:56:48.675: INFO: Container contour ready: true, restart count 3 Jun 14 16:56:48.675: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 16:56:48.675: INFO: Container contour ready: true, restart count 1 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c1ca3b04-a51e-4be6-b1d5-e7fa61fcb044 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-c1ca3b04-a51e-4be6-b1d5-e7fa61fcb044 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c1ca3b04-a51e-4be6-b1d5-e7fa61fcb044 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:56:52.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3919" for this suite. 
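The [It] block above boils down to two API objects: a label applied to the chosen node and a pod relaunched with a nodeSelector that matches it. A minimal sketch of such a pod using the core/v1 Go types (label key/value, pod name, and image are illustrative placeholders, not the exact values from this run):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod that only schedules onto a node carrying the test label.
	// The label key/value stand in for the random kubernetes.io/e2e-* label
	// the test applies to the found node.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-example": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.2", // placeholder image
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out)) // print the manifest the scheduler would act on
}
```

Once the selector matches the freshly applied node label, the scheduler binds the pod to leguer-worker, which is why the node listings in the next spec show a with-labels pod running there.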
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":18,"completed":10,"skipped":3157,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:56:52.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 16:56:53.024: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 16:56:53.034: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 16:56:53.037: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 16:56:53.045: INFO: chaos-daemon-5rrs8 from default started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 16:56:53.045: INFO: coredns-74ff55c5b-cjjs2 from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container coredns ready: true, restart count 0 Jun 14 16:56:53.045: INFO: coredns-74ff55c5b-jhwdl from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container coredns ready: true, restart count 0 Jun 14 16:56:53.045: INFO: create-loop-devs-sjhvx from kube-system started at 2021-06-09 07:53:50 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container loopdev ready: true, restart count 0 Jun 14 16:56:53.045: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container kindnet-cni ready: true, restart count 93 Jun 14 16:56:53.045: INFO: kube-multus-ds-9qpk4 from kube-system started at 2021-06-09 07:53:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container kube-multus ready: true, restart count 0 Jun 14 16:56:53.045: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 16:56:53.045: INFO: tune-sysctls-phstc from kube-system started at 2021-06-09 07:53:22 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container setsysctls ready: true, restart count 0 Jun 14 16:56:53.045: INFO: speaker-wn8vq from metallb-system started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container speaker ready: true, restart count 0 Jun 14 16:56:53.045: INFO: with-labels from sched-pred-3919 started at 2021-06-14 16:56:50 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.045: INFO: Container with-labels ready: true, restart count 0 Jun 14 16:56:53.045: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 16:56:53.053: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 16:56:53.054: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 16:56:53.054: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container dockerd ready: true, restart count 0 Jun 14 16:56:53.054: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container loopdev ready: true, restart count 0 Jun 14 16:56:53.054: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 16:56:53.054: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: 
Container kube-multus ready: true, restart count 1 Jun 14 16:56:53.054: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 16:56:53.054: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container setsysctls ready: true, restart count 0 Jun 14 16:56:53.054: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 16:56:53.054: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container controller ready: true, restart count 0 Jun 14 16:56:53.054: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container speaker ready: true, restart count 0 Jun 14 16:56:53.054: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container contour ready: true, restart count 3 Jun 14 16:56:53.054: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 16:56:53.054: INFO: Container contour ready: true, restart count 1 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.168881484624b1e2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16888148469f8f0a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:56:54.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1644" for this suite. 
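In the non-matching case the pod never leaves Pending, and the only evidence is the pair of FailedScheduling events quoted above: the control-plane node is ruled out by its untolerated taint, and the plain nodeSelector mismatch on the two workers is reported by the scheduler's NodeAffinity plugin as "didn't match Pod's node affinity". A small client-go sketch, assuming the same kubeconfig path as the suite and reusing this spec's namespace and pod name as placeholders, that pulls those events back out of the API:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig the suite uses (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List events attached to the Pending pod and keep the scheduling failures,
	// i.e. the "0/3 nodes are available: ..." messages shown above.
	events, err := cs.CoreV1().Events("sched-pred-1644").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=restricted-pod",
	})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		if ev.Reason == "FailedScheduling" {
			fmt.Printf("%s: %s\n", ev.Reason, ev.Message)
		}
	}
}
```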
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":18,"completed":11,"skipped":3970,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:56:54.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 16:56:54.439: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 16:56:54.535: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 16:56:54.539: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 16:56:54.549: INFO: chaos-daemon-5rrs8 from default started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 16:56:54.549: INFO: coredns-74ff55c5b-cjjs2 from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container coredns ready: true, restart count 0 Jun 14 16:56:54.549: INFO: coredns-74ff55c5b-jhwdl from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container coredns ready: true, restart count 0 Jun 14 16:56:54.549: INFO: create-loop-devs-sjhvx from kube-system started at 2021-06-09 07:53:50 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container loopdev ready: true, restart count 0 Jun 14 16:56:54.549: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container kindnet-cni ready: true, restart count 93 Jun 14 16:56:54.549: INFO: kube-multus-ds-9qpk4 from kube-system started at 2021-06-09 07:53:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container kube-multus ready: true, restart count 0 Jun 14 16:56:54.549: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 16:56:54.549: INFO: tune-sysctls-phstc from kube-system started at 2021-06-09 07:53:22 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container setsysctls ready: true, restart count 0 Jun 14 16:56:54.549: INFO: speaker-wn8vq from metallb-system started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container speaker ready: true, restart count 0 Jun 14 16:56:54.549: INFO: with-labels from sched-pred-3919 started at 2021-06-14 16:56:50 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.549: INFO: Container with-labels ready: true, restart count 0 Jun 14 16:56:54.549: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 16:56:54.558: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 16:56:54.558: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 16:56:54.558: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container dockerd ready: true, restart count 0 Jun 14 16:56:54.558: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container loopdev ready: true, restart count 0 Jun 14 16:56:54.558: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 16:56:54.558: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: 
Container kube-multus ready: true, restart count 1 Jun 14 16:56:54.558: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 16:56:54.558: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container setsysctls ready: true, restart count 0 Jun 14 16:56:54.558: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 16:56:54.558: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container controller ready: true, restart count 0 Jun 14 16:56:54.558: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container speaker ready: true, restart count 0 Jun 14 16:56:54.558: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container contour ready: true, restart count 3 Jun 14 16:56:54.558: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 16:56:54.558: INFO: Container contour ready: true, restart count 1 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod chaos-controller-manager-69c479c674-ld4jc requesting resource cpu=25m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod chaos-daemon-2tzpz requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod chaos-daemon-5rrs8 requesting resource cpu=0m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod dockerd requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod coredns-74ff55c5b-cjjs2 requesting resource cpu=100m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod coredns-74ff55c5b-jhwdl requesting resource cpu=100m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod create-loop-devs-nbf25 requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod create-loop-devs-sjhvx requesting resource cpu=0m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod kindnet-kx9mk requesting resource cpu=100m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod kindnet-svp2q requesting resource cpu=100m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod kube-multus-ds-9qpk4 requesting resource cpu=100m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod kube-multus-ds-n48bs requesting resource cpu=100m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod kube-proxy-7g274 requesting resource cpu=0m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod kube-proxy-mp68m requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod tune-sysctls-phstc requesting resource cpu=0m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod tune-sysctls-vjdll requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod 
chaos-operator-ce-5754fd4b69-zcrd4 requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod controller-675995489c-h2wms requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod speaker-55zcr requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod speaker-wn8vq requesting resource cpu=0m on Node leguer-worker Jun 14 16:57:00.848: INFO: Pod contour-6648989f79-2vldk requesting resource cpu=0m on Node leguer-worker2 Jun 14 16:57:00.848: INFO: Pod contour-6648989f79-8gz4z requesting resource cpu=0m on Node leguer-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jun 14 16:57:00.848: INFO: Creating a pod which consumes cpu=61320m on Node leguer-worker Jun 14 16:57:00.854: INFO: Creating a pod which consumes cpu=61442m on Node leguer-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15.1688814a15b97e89], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5249/filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15.1688814a33ef09f6], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.154/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15.1688814a4588efcf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15.1688814a47125327], Reason = [Created], Message = [Created container filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15] STEP: Considering event: Type = [Normal], Name = [filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15.1688814a4ffb32db], Reason = [Started], Message = [Started container filler-pod-48df8770-0be4-41ff-9ec0-5e3272dfbc15] STEP: Considering event: Type = [Normal], Name = [filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b.1688814a160ade07], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5249/filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b.1688814a3627b721], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.181/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b.1688814a45647ef8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b.1688814a47130f5a], Reason = [Created], Message = [Created container filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b] STEP: Considering event: Type = [Normal], Name = [filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b.1688814a50230d3a], Reason = [Started], Message = [Started container filler-pod-cb127aff-fb59-4f6c-90ba-5bd14e83463b] STEP: Considering event: Type = [Warning], Name = [additional-pod.1688814a91dfc4a4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
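Both FailedScheduling events (the second one, just below, repeats the same verdict) come down to arithmetic on CPU requests: each filler pod asks for nearly all of the worker's remaining allocatable CPU, so the additional pod cannot fit on any schedulable node. A sketch of a filler-style pod, assuming the pause image used elsewhere in this run; the 61320m figure is the request logged for leguer-worker and would normally be computed from allocatable CPU minus the requests already on the node:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A filler pod that consumes most of the node's allocatable CPU via its
	// request, so a later pod with a non-trivial CPU request gets
	// "Insufficient cpu" from the scheduler.
	cpu := resource.MustParse("61320m") // illustrative; derived from allocatable minus existing requests
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```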
STEP: Considering event: Type = [Warning], Name = [additional-pod.1688814a9297c29c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:57:03.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5249" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:9.887 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":18,"completed":12,"skipped":4527,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:57:03.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 14 16:57:04.244: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:57:04.247: INFO: Number of nodes with available pods: 0 Jun 14 16:57:04.247: INFO: Node leguer-worker is running more than one daemon pod Jun 14 16:57:05.331: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:57:05.335: INFO: Number of nodes with available pods: 0 Jun 14 16:57:05.335: INFO: Node leguer-worker is running more than one daemon pod Jun 14 16:57:06.253: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:57:06.256: INFO: Number of nodes with available pods: 0 Jun 14 16:57:06.256: INFO: Node leguer-worker is running more than one daemon pod Jun 14 16:57:07.252: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:57:07.256: INFO: Number of nodes with available pods: 2 Jun 14 16:57:07.256: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 14 16:57:07.273: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:57:07.276: INFO: Number of nodes with available pods: 2 Jun 14 16:57:07.277: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9180, will wait for the garbage collector to delete the pods Jun 14 16:57:07.344: INFO: Deleting DaemonSet.extensions daemon-set took: 6.613508ms Jun 14 16:57:08.144: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.310053ms Jun 14 16:58:14.829: INFO: Number of nodes with available pods: 0 Jun 14 16:58:14.829: INFO: Number of running nodes: 0, number of available pods: 0 Jun 14 16:58:14.833: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6271651"},"items":null} Jun 14 16:58:14.836: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6271651"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:58:14.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9180" for this suite. 
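The "can't tolerate" lines above are expected: the DaemonSet under test carries no toleration for the control-plane taint, so only the two workers count toward the desired pods, and the controller recreates the pod whose phase the test flips to Failed. For comparison, a hedged sketch of a similar DaemonSet that does tolerate node-role.kubernetes.io/master:NoSchedule and would therefore also cover leguer-control-plane (labels and image are illustrative):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative selector label
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.2", // placeholder image
					}},
					// This toleration is the piece the test's DaemonSet lacks;
					// with it, the node-role.kubernetes.io/master:NoSchedule
					// taint no longer excludes the control-plane node.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(ds)
	fmt.Println(string(out))
}
```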
• [SLOW TEST:70.867 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":18,"completed":13,"skipped":4538,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:58:14.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 14 16:58:14.912: INFO: Waiting up to 1m0s for all nodes to be ready Jun 14 16:59:14.958: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Jun 14 16:59:14.988: INFO: Created pod: pod0-sched-preemption-low-priority Jun 14 16:59:15.013: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:59:51.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3843" for this suite. 
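The preemption spec relies on the pod priority API: two pods sized to roughly 2/3 of node resources are created at low and medium priority, then a critical pod with the same resource footprint forces one of them out. A sketch of the moving parts using an illustrative custom PriorityClass; the suite's own class names, values, and the class used for the critical pod differ:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Cluster-scoped priority class; a higher Value wins when the scheduler
	// has to pick preemption victims.
	pc := schedulingv1.PriorityClass{
		TypeMeta:    metav1.TypeMeta{APIVersion: "scheduling.k8s.io/v1", Kind: "PriorityClass"},
		ObjectMeta:  metav1.ObjectMeta{Name: "high-priority"},
		Value:       1000000,
		Description: "pods that may preempt lower-priority pods",
	}

	// A pod referencing that class; if it cannot fit, lower-priority pods on
	// the chosen node are evicted to make room.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "preemptor",
				Image: "k8s.gcr.io/pause:3.2", // placeholder image
			}},
		},
	}

	for _, obj := range []interface{}{pc, pod} {
		out, _ := yaml.Marshal(obj)
		fmt.Println(string(out))
	}
}
```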
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:96.476 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":18,"completed":14,"skipped":4758,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:59:51.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 14 16:59:51.401: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:51.403: INFO: Number of nodes with available pods: 0 Jun 14 16:59:51.403: INFO: Node leguer-worker is running more than one daemon pod Jun 14 16:59:52.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:52.413: INFO: Number of nodes with available pods: 0 Jun 14 16:59:52.413: INFO: Node leguer-worker is running more than one daemon pod Jun 14 16:59:53.430: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:53.438: INFO: Number of nodes with available pods: 2 Jun 14 16:59:53.438: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 14 16:59:53.456: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:53.459: INFO: Number of nodes with available pods: 1 Jun 14 16:59:53.459: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 16:59:54.466: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:54.469: INFO: Number of nodes with available pods: 1 Jun 14 16:59:54.469: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 16:59:55.466: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:55.470: INFO: Number of nodes with available pods: 1 Jun 14 16:59:55.470: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 16:59:56.525: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:56.529: INFO: Number of nodes with available pods: 1 Jun 14 16:59:56.529: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 16:59:57.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:57.470: INFO: Number of nodes with available pods: 1 Jun 14 16:59:57.470: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 16:59:58.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:58.469: INFO: Number of nodes with available pods: 1 Jun 14 16:59:58.469: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 16:59:59.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 16:59:59.469: INFO: Number of nodes with available pods: 1 Jun 14 16:59:59.469: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:00.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:00.469: INFO: Number of nodes with available pods: 1 Jun 14 17:00:00.469: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:01.523: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:01.528: INFO: Number of nodes with available pods: 1 Jun 14 17:00:01.528: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:02.466: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:02.470: INFO: Number of nodes with available pods: 1 Jun 14 17:00:02.470: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:03.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:03.469: INFO: Number of nodes with available pods: 1 Jun 14 17:00:03.469: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:04.529: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:04.533: INFO: Number of nodes with available pods: 1 Jun 14 17:00:04.533: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:05.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:05.469: INFO: Number of nodes with available pods: 1 Jun 14 17:00:05.469: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:06.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:06.469: INFO: Number of nodes with available pods: 1 Jun 14 17:00:06.469: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:07.465: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:07.524: INFO: Number of nodes with available pods: 1 Jun 14 17:00:07.524: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:08.467: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:08.471: INFO: Number of nodes with available pods: 1 Jun 14 17:00:08.471: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:00:09.466: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:00:09.470: INFO: Number of nodes with available pods: 2 Jun 14 17:00:09.470: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3454, will wait for the garbage collector to delete the pods Jun 14 17:00:09.578: INFO: Deleting DaemonSet.extensions daemon-set took: 6.257645ms Jun 14 17:00:10.378: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.256161ms Jun 14 17:00:18.124: INFO: Number of nodes with available pods: 0 Jun 14 17:00:18.124: INFO: Number of running nodes: 0, number of available pods: 0 Jun 14 17:00:18.128: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6272124"},"items":null} Jun 14 17:00:18.131: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6272124"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:00:18.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3454" for this suite. 
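The long run of "Number of nodes with available pods" lines is a poll loop waiting for the DaemonSet to report one available pod per schedulable node again after the stopped pod is replaced. A rough client-go equivalent, assuming the suite's kubeconfig path and reusing this spec's namespace and DaemonSet name as placeholders (this is a sketch, not the framework's own helper):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig the suite uses (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until every node the DaemonSet targets has an available pod,
	// mirroring the "Number of running nodes / available pods" lines above.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets("daemonsets-3454").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("desired=%d available=%d\n", ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}
```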
• [SLOW TEST:26.905 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":18,"completed":15,"skipped":4848,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:00:18.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 17:00:18.463: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 17:00:18.473: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 17:00:18.477: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 17:00:18.486: INFO: chaos-daemon-5rrs8 from default started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:00:18.486: INFO: coredns-74ff55c5b-cjjs2 from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container coredns ready: true, restart count 0 Jun 14 17:00:18.486: INFO: coredns-74ff55c5b-jhwdl from kube-system started at 2021-06-09 08:12:11 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container coredns ready: true, restart count 0 Jun 14 17:00:18.486: INFO: create-loop-devs-sjhvx from kube-system started at 2021-06-09 07:53:50 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:00:18.486: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container kindnet-cni ready: true, restart count 93 Jun 14 17:00:18.486: INFO: kube-multus-ds-9qpk4 from kube-system started at 2021-06-09 07:53:37 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container kube-multus ready: true, restart count 0 Jun 14 17:00:18.486: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:00:18.486: INFO: tune-sysctls-phstc from kube-system started at 2021-06-09 07:53:22 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:00:18.486: INFO: speaker-wn8vq from metallb-system started at 2021-06-09 07:53:21 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.486: INFO: Container speaker ready: true, restart count 0 Jun 14 17:00:18.486: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 17:00:18.495: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 17:00:18.495: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:00:18.495: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container dockerd ready: true, restart count 0 Jun 14 17:00:18.495: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:00:18.495: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 17:00:18.495: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container kube-multus ready: true, restart count 1 Jun 14 17:00:18.495: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: 
Container kube-proxy ready: true, restart count 0 Jun 14 17:00:18.495: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:00:18.495: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 17:00:18.495: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container controller ready: true, restart count 0 Jun 14 17:00:18.495: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container speaker ready: true, restart count 0 Jun 14 17:00:18.495: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container contour ready: true, restart count 3 Jun 14 17:00:18.495: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 17:00:18.495: INFO: Container contour ready: true, restart count 1 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-22fa7d19-01a7-4558-a463-7baf21c607cf 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled Jun 14 17:05:22.566: FAIL: Unexpected error: <*errors.errorString | 0xc0002cc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.createHostPortPodOnNode(0xc001815340, 0x4db8332, 0x4, 0xc004cfca90, 0xf, 0x0, 0x0, 0xd432, 0x4db6ffe, 0x3, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:1129 +0x5bc k8s.io/kubernetes/test/e2e/scheduling.glob..func4.13() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:778 +0x41b k8s.io/kubernetes/test/e2e.RunE2ETests(0xc004383080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc004383080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc004383080, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 STEP: removing the label kubernetes.io/e2e-22fa7d19-01a7-4558-a463-7baf21c607cf off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-22fa7d19-01a7-4558-a463-7baf21c607cf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "sched-pred-1347". STEP: Found 8 events. 
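For context on the failure: pod4 is scheduled successfully (hostPort 54322 with an empty hostIP, which behaves as 0.0.0.0), and the timeout comes from the pod never becoming ready; the events listed next show the kubelet failing to create the pod sandbox rather than any scheduling conflict. A sketch of a pod in the spirit of pod4; the agnhost container name comes from the pod conditions below, while the image tag, containerPort, and node-selector label are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod exposing a host port on the wildcard address; any other pod on the
	// same node using the same hostPort/protocol would conflict with it.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod4"},
		Spec: corev1.PodSpec{
			// Stand-in for the random kubernetes.io/e2e-* label applied to the node.
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "95"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // assumed image tag
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322, // assumed to match the host port
					HostPort:      54322,
					HostIP:        "0.0.0.0", // the test passes "", which defaults to the wildcard address
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```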
Jun 14 17:05:22.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod4: { } Scheduled: Successfully assigned sched-pred-1347/pod4 to leguer-worker Jun 14 17:05:22.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for without-label: { } Scheduled: Successfully assigned sched-pred-1347/without-label to leguer-worker Jun 14 17:05:22.590: INFO: At 2021-06-14 17:00:19 +0000 UTC - event for without-label: {multus } AddedInterface: Add eth0 [10.244.1.160/24] Jun 14 17:05:22.590: INFO: At 2021-06-14 17:00:19 +0000 UTC - event for without-label: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/pause:3.2" already present on machine Jun 14 17:05:22.590: INFO: At 2021-06-14 17:00:20 +0000 UTC - event for without-label: {kubelet leguer-worker} Created: Created container without-label Jun 14 17:05:22.590: INFO: At 2021-06-14 17:00:20 +0000 UTC - event for without-label: {kubelet leguer-worker} Started: Started container without-label Jun 14 17:05:22.590: INFO: At 2021-06-14 17:00:22 +0000 UTC - event for without-label: {kubelet leguer-worker} Killing: Stopping container without-label Jun 14 17:05:22.590: INFO: At 2021-06-14 17:04:22 +0000 UTC - event for pod4: {kubelet leguer-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded Jun 14 17:05:22.593: INFO: POD NODE PHASE GRACE CONDITIONS Jun 14 17:05:22.593: INFO: pod4 leguer-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-14 17:00:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-14 17:00:22 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-14 17:00:22 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-14 17:00:22 +0000 UTC }] Jun 14 17:05:22.593: INFO: Jun 14 17:05:22.598: INFO: Logging node info for node leguer-control-plane Jun 14 17:05:22.611: INFO: Node Info: &Node{ObjectMeta:{leguer-control-plane 6d457de0-9a0f-4ff6-bd75-0bbc1430a694 6272541 0 2021-05-22 08:23:02 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux ingress-ready:true kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-22 08:23:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:ingress-ready":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-22 08:23:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-05-22 08:23:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-14 17:02:52 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-14 17:02:52 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-14 17:02:52 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-14 17:02:52 +0000 UTC,LastTransitionTime:2021-05-22 08:23:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:leguer-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cd6232015d5d4123a4f981fce21e3374,SystemUUID:eba32c45-894e-4080-80ed-6ad2fd75cb06,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9 docker.io/kubernetesui/dashboard:v2.2.0],SizeBytes:67775224,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/envoyproxy/envoy@sha256:55d35e368436519136dbd978fa0682c49d8ab99e4d768413510f226762b30b07 docker.io/envoyproxy/envoy:v1.18.3],SizeBytes:51364868,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[docker.io/kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 docker.io/kubernetesui/metrics-scraper:v1.0.6],SizeBytes:15079854,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 
docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 14 17:05:22.611: INFO: Logging kubelet events for node leguer-control-plane Jun 14 17:05:22.616: INFO: Logging pods the kubelet thinks is on node leguer-control-plane Jun 14 17:05:22.664: INFO: tune-sysctls-s5nrx started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:05:22.664: INFO: kube-multus-ds-bxrtj started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container kube-multus ready: true, restart count 2 Jun 14 17:05:22.664: INFO: envoy-nwdcq started at 2021-05-22 08:23:46 +0000 UTC (1+2 container statuses recorded) Jun 14 17:05:22.664: INFO: Init container envoy-initconfig ready: true, restart count 0 Jun 14 17:05:22.664: INFO: Container envoy ready: true, restart count 0 Jun 14 17:05:22.664: INFO: Container shutdown-manager ready: true, restart count 0 Jun 14 17:05:22.664: INFO: kube-scheduler-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container kube-scheduler ready: true, restart count 3 Jun 14 17:05:22.664: INFO: kube-proxy-vqm28 started at 2021-05-22 08:23:20 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:05:22.664: INFO: create-loop-devs-dxl2f started at 2021-05-22 08:23:43 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:05:22.664: INFO: etcd-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container etcd ready: true, restart count 0 Jun 14 17:05:22.664: INFO: kube-apiserver-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container kube-apiserver ready: true, restart count 0 Jun 14 17:05:22.664: INFO: kindnet-8gg6p started at 2021-05-22 08:23:20 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container kindnet-cni ready: true, restart count 88 Jun 14 17:05:22.664: INFO: local-path-provisioner-547f784dff-pbsvl started at 2021-05-22 08:23:41 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.664: INFO: Container local-path-provisioner ready: true, restart count 2 Jun 14 17:05:22.664: INFO: kube-controller-manager-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.665: INFO: Container kube-controller-manager ready: true, restart count 4 Jun 14 17:05:22.665: INFO: speaker-gjr9t started at 2021-05-22 08:23:45 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.665: INFO: Container speaker ready: true, restart count 0 Jun 14 17:05:22.665: INFO: kubernetes-dashboard-9f9799597-x8tx5 started at 2021-05-22 08:23:47 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.665: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jun 14 17:05:22.665: INFO: dashboard-metrics-scraper-79c5968bdc-krkfj started at 2021-05-22 08:23:47 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:22.665: INFO: Container 
dashboard-metrics-scraper ready: true, restart count 0 W0614 17:05:22.833894 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jun 14 17:05:23.003: INFO: Latency metrics for node leguer-control-plane Jun 14 17:05:23.003: INFO: Logging node info for node leguer-worker Jun 14 17:05:23.007: INFO: Node Info: &Node{ObjectMeta:{leguer-worker a0394caa-d22f-452e-99cd-7356a6b84552 6272897 0 2021-05-22 08:23:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1043":"csi-mock-csi-mock-volumes-1043","csi-mock-csi-mock-volumes-1206":"csi-mock-csi-mock-volumes-1206","csi-mock-csi-mock-volumes-1231":"csi-mock-csi-mock-volumes-1231","csi-mock-csi-mock-volumes-1333":"csi-mock-csi-mock-volumes-1333","csi-mock-csi-mock-volumes-1360":"csi-mock-csi-mock-volumes-1360","csi-mock-csi-mock-volumes-1570":"csi-mock-csi-mock-volumes-1570","csi-mock-csi-mock-volumes-1663":"csi-mock-csi-mock-volumes-1663","csi-mock-csi-mock-volumes-1684":"csi-mock-csi-mock-volumes-1684","csi-mock-csi-mock-volumes-1709":"csi-mock-csi-mock-volumes-1709","csi-mock-csi-mock-volumes-1799":"csi-mock-csi-mock-volumes-1799","csi-mock-csi-mock-volumes-1801":"csi-mock-csi-mock-volumes-1801","csi-mock-csi-mock-volumes-1826":"csi-mock-csi-mock-volumes-1826","csi-mock-csi-mock-volumes-1895":"csi-mock-csi-mock-volumes-1895","csi-mock-csi-mock-volumes-1928":"csi-mock-csi-mock-volumes-1928","csi-mock-csi-mock-volumes-1957":"csi-mock-csi-mock-volumes-1957","csi-mock-csi-mock-volumes-1979":"csi-mock-csi-mock-volumes-1979","csi-mock-csi-mock-volumes-2039":"csi-mock-csi-mock-volumes-2039","csi-mock-csi-mock-volumes-2104":"csi-mock-csi-mock-volumes-2104","csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2229":"csi-mock-csi-mock-volumes-2229","csi-mock-csi-mock-volumes-2262":"csi-mock-csi-mock-volumes-2262","csi-mock-csi-mock-volumes-2272":"csi-mock-csi-mock-volumes-2272","csi-mock-csi-mock-volumes-2290":"csi-mock-csi-mock-volumes-2290","csi-mock-csi-mock-volumes-231":"csi-mock-csi-mock-volumes-231","csi-mock-csi-mock-volumes-2439":"csi-mock-csi-mock-volumes-2439","csi-mock-csi-mock-volumes-2502":"csi-mock-csi-mock-volumes-2502","csi-mock-csi-mock-volumes-2573":"csi-mock-csi-mock-volumes-2573","csi-mock-csi-mock-volumes-2582":"csi-mock-csi-mock-volumes-2582","csi-mock-csi-mock-volumes-2589":"csi-mock-csi-mock-volumes-2589","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-264":"csi-mock-csi-mock-volumes-264","csi-mock-csi-mock-volumes-2708":"csi-mock-csi-mock-volumes-2708","csi-mock-csi-mock-volumes-2709":"csi-mock-csi-mock-volumes-2709","csi-mock-csi-mock-volumes-2834":"csi-mock-csi-mock-volumes-2834","csi-mock-csi-mock-volumes-2887":"csi-mock-csi-mock-volumes-2887","csi-mock-csi-mock-volumes-3020":"csi-mock-csi-mock-volumes-3020","csi-mock-csi-mock-volumes-3030":"csi-mock-csi-mock-volumes-3030","csi-mock-csi-mock-volumes-3239":"csi-mock-csi-mock-volumes-3239","csi-mock-csi-mock-volumes-3297":"csi-mock-csi-mock-volumes-3297","csi-mock-csi-mock-volumes-3328":"csi-mock-csi-mock-volumes-3328","csi-mock-csi-mock-volumes-3358":"csi-mock-csi-mock-volumes-3358","csi-mock-csi-mock-volumes-338":"csi-mock-csi-mock-volumes-338","csi-mock-csi-mock-volumes-3397":"csi-mock-csi-mock-volumes-3397","c
si-mock-csi-mock-volumes-3429":"csi-mock-csi-mock-volumes-3429","csi-mock-csi-mock-volumes-3509":"csi-mock-csi-mock-volumes-3509","csi-mock-csi-mock-volumes-3570":"csi-mock-csi-mock-volumes-3570","csi-mock-csi-mock-volumes-3684":"csi-mock-csi-mock-volumes-3684","csi-mock-csi-mock-volumes-3688":"csi-mock-csi-mock-volumes-3688","csi-mock-csi-mock-volumes-3826":"csi-mock-csi-mock-volumes-3826","csi-mock-csi-mock-volumes-3868":"csi-mock-csi-mock-volumes-3868","csi-mock-csi-mock-volumes-3935":"csi-mock-csi-mock-volumes-3935","csi-mock-csi-mock-volumes-4016":"csi-mock-csi-mock-volumes-4016","csi-mock-csi-mock-volumes-4061":"csi-mock-csi-mock-volumes-4061","csi-mock-csi-mock-volumes-4236":"csi-mock-csi-mock-volumes-4236","csi-mock-csi-mock-volumes-4241":"csi-mock-csi-mock-volumes-4241","csi-mock-csi-mock-volumes-4348":"csi-mock-csi-mock-volumes-4348","csi-mock-csi-mock-volumes-4356":"csi-mock-csi-mock-volumes-4356","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4490":"csi-mock-csi-mock-volumes-4490","csi-mock-csi-mock-volumes-4572":"csi-mock-csi-mock-volumes-4572","csi-mock-csi-mock-volumes-4622":"csi-mock-csi-mock-volumes-4622","csi-mock-csi-mock-volumes-4716":"csi-mock-csi-mock-volumes-4716","csi-mock-csi-mock-volumes-4721":"csi-mock-csi-mock-volumes-4721","csi-mock-csi-mock-volumes-476":"csi-mock-csi-mock-volumes-476","csi-mock-csi-mock-volumes-4796":"csi-mock-csi-mock-volumes-4796","csi-mock-csi-mock-volumes-4808":"csi-mock-csi-mock-volumes-4808","csi-mock-csi-mock-volumes-4881":"csi-mock-csi-mock-volumes-4881","csi-mock-csi-mock-volumes-5037":"csi-mock-csi-mock-volumes-5037","csi-mock-csi-mock-volumes-5044":"csi-mock-csi-mock-volumes-5044","csi-mock-csi-mock-volumes-5066":"csi-mock-csi-mock-volumes-5066","csi-mock-csi-mock-volumes-507":"csi-mock-csi-mock-volumes-507","csi-mock-csi-mock-volumes-5081":"csi-mock-csi-mock-volumes-5081","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5151":"csi-mock-csi-mock-volumes-5151","csi-mock-csi-mock-volumes-5192":"csi-mock-csi-mock-volumes-5192","csi-mock-csi-mock-volumes-521":"csi-mock-csi-mock-volumes-521","csi-mock-csi-mock-volumes-5212":"csi-mock-csi-mock-volumes-5212","csi-mock-csi-mock-volumes-5258":"csi-mock-csi-mock-volumes-5258","csi-mock-csi-mock-volumes-5438":"csi-mock-csi-mock-volumes-5438","csi-mock-csi-mock-volumes-5458":"csi-mock-csi-mock-volumes-5458","csi-mock-csi-mock-volumes-5473":"csi-mock-csi-mock-volumes-5473","csi-mock-csi-mock-volumes-5479":"csi-mock-csi-mock-volumes-5479","csi-mock-csi-mock-volumes-5489":"csi-mock-csi-mock-volumes-5489","csi-mock-csi-mock-volumes-5566":"csi-mock-csi-mock-volumes-5566","csi-mock-csi-mock-volumes-5607":"csi-mock-csi-mock-volumes-5607","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5779":"csi-mock-csi-mock-volumes-5779","csi-mock-csi-mock-volumes-5811":"csi-mock-csi-mock-volumes-5811","csi-mock-csi-mock-volumes-5822":"csi-mock-csi-mock-volumes-5822","csi-mock-csi-mock-volumes-5852":"csi-mock-csi-mock-volumes-5852","csi-mock-csi-mock-volumes-5913":"csi-mock-csi-mock-volumes-5913","csi-mock-csi-mock-volumes-6027":"csi-mock-csi-mock-volumes-6027","csi-mock-csi-mock-volumes-6074":"csi-mock-csi-mock-volumes-6074","csi-mock-csi-mock-volumes-6086":"csi-mock-csi-mock-volumes-6086","csi-mock-csi-mock-volumes-6090":"csi-mock-csi-mock-volumes-6090","csi-mock-csi-mock-volumes-6187":"csi-mock-csi-mock-volumes-6187","csi-mock-csi-mock-volumes-6192":"csi-mock-csi-mock-volumes-6192","
csi-mock-csi-mock-volumes-6350":"csi-mock-csi-mock-volumes-6350","csi-mock-csi-mock-volumes-641":"csi-mock-csi-mock-volumes-641","csi-mock-csi-mock-volumes-6434":"csi-mock-csi-mock-volumes-6434","csi-mock-csi-mock-volumes-6436":"csi-mock-csi-mock-volumes-6436","csi-mock-csi-mock-volumes-6449":"csi-mock-csi-mock-volumes-6449","csi-mock-csi-mock-volumes-6567":"csi-mock-csi-mock-volumes-6567","csi-mock-csi-mock-volumes-6584":"csi-mock-csi-mock-volumes-6584","csi-mock-csi-mock-volumes-6649":"csi-mock-csi-mock-volumes-6649","csi-mock-csi-mock-volumes-6748":"csi-mock-csi-mock-volumes-6748","csi-mock-csi-mock-volumes-6808":"csi-mock-csi-mock-volumes-6808","csi-mock-csi-mock-volumes-6835":"csi-mock-csi-mock-volumes-6835","csi-mock-csi-mock-volumes-6858":"csi-mock-csi-mock-volumes-6858","csi-mock-csi-mock-volumes-6881":"csi-mock-csi-mock-volumes-6881","csi-mock-csi-mock-volumes-6944":"csi-mock-csi-mock-volumes-6944","csi-mock-csi-mock-volumes-7014":"csi-mock-csi-mock-volumes-7014","csi-mock-csi-mock-volumes-7049":"csi-mock-csi-mock-volumes-7049","csi-mock-csi-mock-volumes-7063":"csi-mock-csi-mock-volumes-7063","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7223":"csi-mock-csi-mock-volumes-7223","csi-mock-csi-mock-volumes-7292":"csi-mock-csi-mock-volumes-7292","csi-mock-csi-mock-volumes-731":"csi-mock-csi-mock-volumes-731","csi-mock-csi-mock-volumes-7372":"csi-mock-csi-mock-volumes-7372","csi-mock-csi-mock-volumes-7390":"csi-mock-csi-mock-volumes-7390","csi-mock-csi-mock-volumes-7436":"csi-mock-csi-mock-volumes-7436","csi-mock-csi-mock-volumes-7562":"csi-mock-csi-mock-volumes-7562","csi-mock-csi-mock-volumes-7661":"csi-mock-csi-mock-volumes-7661","csi-mock-csi-mock-volumes-7711":"csi-mock-csi-mock-volumes-7711","csi-mock-csi-mock-volumes-7764":"csi-mock-csi-mock-volumes-7764","csi-mock-csi-mock-volumes-7779":"csi-mock-csi-mock-volumes-7779","csi-mock-csi-mock-volumes-7813":"csi-mock-csi-mock-volumes-7813","csi-mock-csi-mock-volumes-785":"csi-mock-csi-mock-volumes-785","csi-mock-csi-mock-volumes-7865":"csi-mock-csi-mock-volumes-7865","csi-mock-csi-mock-volumes-7884":"csi-mock-csi-mock-volumes-7884","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8126":"csi-mock-csi-mock-volumes-8126","csi-mock-csi-mock-volumes-8149":"csi-mock-csi-mock-volumes-8149","csi-mock-csi-mock-volumes-8201":"csi-mock-csi-mock-volumes-8201","csi-mock-csi-mock-volumes-8273":"csi-mock-csi-mock-volumes-8273","csi-mock-csi-mock-volumes-840":"csi-mock-csi-mock-volumes-840","csi-mock-csi-mock-volumes-8635":"csi-mock-csi-mock-volumes-8635","csi-mock-csi-mock-volumes-8665":"csi-mock-csi-mock-volumes-8665","csi-mock-csi-mock-volumes-8764":"csi-mock-csi-mock-volumes-8764","csi-mock-csi-mock-volumes-8765":"csi-mock-csi-mock-volumes-8765","csi-mock-csi-mock-volumes-8835":"csi-mock-csi-mock-volumes-8835","csi-mock-csi-mock-volumes-884":"csi-mock-csi-mock-volumes-884","csi-mock-csi-mock-volumes-8968":"csi-mock-csi-mock-volumes-8968","csi-mock-csi-mock-volumes-8973":"csi-mock-csi-mock-volumes-8973","csi-mock-csi-mock-volumes-8985":"csi-mock-csi-mock-volumes-8985","csi-mock-csi-mock-volumes-9044":"csi-mock-csi-mock-volumes-9044","csi-mock-csi-mock-volumes-9077":"csi-mock-csi-mock-volumes-9077","csi-mock-csi-mock-volumes-9265":"csi-mock-csi-mock-volumes-9265","csi-mock-csi-mock-volumes-9313":"csi-mock-csi-mock-volumes-9313","csi-mock-csi-mock-volumes-9378":"csi-mock-csi-mock-volumes-9378","csi-mock-csi-mock-volumes-9618":"csi-mock-csi-mock-volumes-9618","c
si-mock-csi-mock-volumes-963":"csi-mock-csi-mock-volumes-963","csi-mock-csi-mock-volumes-9639":"csi-mock-csi-mock-volumes-9639","csi-mock-csi-mock-volumes-9717":"csi-mock-csi-mock-volumes-9717","csi-mock-csi-mock-volumes-9736":"csi-mock-csi-mock-volumes-9736","csi-mock-csi-mock-volumes-9757":"csi-mock-csi-mock-volumes-9757","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9838":"csi-mock-csi-mock-volumes-9838","csi-mock-csi-mock-volumes-9918":"csi-mock-csi-mock-volumes-9918"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-22 08:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-06-09 08:26:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-06-14 16:59:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-06-14 17:00:22 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 
DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:leguer-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b3190afa60a4b3f8acfa4d884b5f41e,SystemUUID:e4621450-f7e7-447f-a390-1b05f9cdaec2,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:706d69e007d61c69495dc384167c7cb242ced8b893ac8bb30bdee4367c894980 
docker.io/litmuschaos/go-runner:1.13.2],SizeBytes:153211568,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:806f80ccc41d7d5b33035d09bfc41bb7814f9989e738fcdefc29780934d4a663 docker.io/litmuschaos/chaos-runner:1.13.2],SizeBytes:56004602,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:0f30e5c1a1286a4bf6739dd8bdf1d00f0dd915474b3c62e892592277b0395986 docker.io/bitnami/kubectl:latest],SizeBytes:49444404,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f 
docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:17747507,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:8888823,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 14 17:05:23.008: 
INFO: Logging kubelet events for node leguer-worker Jun 14 17:05:23.011: INFO: Logging pods the kubelet thinks is on node leguer-worker Jun 14 17:05:23.039: INFO: kindnet-svp2q started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container kindnet-cni ready: true, restart count 93 Jun 14 17:05:23.039: INFO: chaos-daemon-5rrs8 started at 2021-06-09 07:53:21 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:05:23.039: INFO: speaker-wn8vq started at 2021-06-09 07:53:21 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container speaker ready: true, restart count 0 Jun 14 17:05:23.039: INFO: pod4 started at 2021-06-14 17:00:22 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container agnhost ready: false, restart count 0 Jun 14 17:05:23.039: INFO: kube-proxy-7g274 started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:05:23.039: INFO: tune-sysctls-phstc started at 2021-06-09 07:53:22 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:05:23.039: INFO: coredns-74ff55c5b-cjjs2 started at 2021-06-09 08:12:11 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container coredns ready: true, restart count 0 Jun 14 17:05:23.039: INFO: coredns-74ff55c5b-jhwdl started at 2021-06-09 08:12:11 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container coredns ready: true, restart count 0 Jun 14 17:05:23.039: INFO: create-loop-devs-sjhvx started at 2021-06-09 07:53:50 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:05:23.039: INFO: kube-multus-ds-9qpk4 started at 2021-06-09 07:53:37 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.039: INFO: Container kube-multus ready: true, restart count 0 W0614 17:05:23.047913 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jun 14 17:05:23.304: INFO: Latency metrics for node leguer-worker Jun 14 17:05:23.304: INFO: Logging node info for node leguer-worker2 Jun 14 17:05:23.309: INFO: Node Info: &Node{ObjectMeta:{leguer-worker2 8f8eaae4-b1b9-4593-a956-0b952e0c41c9 6272759 0 2021-05-22 08:23:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-101":"csi-mock-csi-mock-volumes-101","csi-mock-csi-mock-volumes-1085":"csi-mock-csi-mock-volumes-1085","csi-mock-csi-mock-volumes-1097":"csi-mock-csi-mock-volumes-1097","csi-mock-csi-mock-volumes-116":"csi-mock-csi-mock-volumes-116","csi-mock-csi-mock-volumes-1188":"csi-mock-csi-mock-volumes-1188","csi-mock-csi-mock-volumes-1245":"csi-mock-csi-mock-volumes-1245","csi-mock-csi-mock-volumes-1317":"csi-mock-csi-mock-volumes-1317","csi-mock-csi-mock-volumes-1465":"csi-mock-csi-mock-volumes-1465","csi-mock-csi-mock-volumes-1553":"csi-mock-csi-mock-volumes-1553","csi-mock-csi-mock-volumes-1584":"csi-mock-csi-mock-volumes-1584","csi-mock-csi-mock-volumes-1665":"csi-mock-csi-mock-volumes-1665","csi-mock-csi-mock-volumes-1946":"csi-mock-csi-mock-volumes-1946","csi-mock-csi-mock-volumes-1954":"csi-mock-csi-mock-volumes-1954","csi-mock-csi-mock-volumes-2098":"csi-mock-csi-mock-volumes-2098","csi-mock-csi-mock-volumes-2254":"csi-mock-csi-mock-volumes-2254","csi-mock-csi-mock-volumes-2380":"csi-mock-csi-mock-volumes-2380","csi-mock-csi-mock-volumes-24":"csi-mock-csi-mock-volumes-24","csi-mock-csi-mock-volumes-2611":"csi-mock-csi-mock-volumes-2611","csi-mock-csi-mock-volumes-2722":"csi-mock-csi-mock-volumes-2722","csi-mock-csi-mock-volumes-2731":"csi-mock-csi-mock-volumes-2731","csi-mock-csi-mock-volumes-282":"csi-mock-csi-mock-volumes-282","csi-mock-csi-mock-volumes-2860":"csi-mock-csi-mock-volumes-2860","csi-mock-csi-mock-volumes-3181":"csi-mock-csi-mock-volumes-3181","csi-mock-csi-mock-volumes-3267":"csi-mock-csi-mock-volumes-3267","csi-mock-csi-mock-volumes-3275":"csi-mock-csi-mock-volumes-3275","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3308":"csi-mock-csi-mock-volumes-3308","csi-mock-csi-mock-volumes-3354":"csi-mock-csi-mock-volumes-3354","csi-mock-csi-mock-volumes-3523":"csi-mock-csi-mock-volumes-3523","csi-mock-csi-mock-volumes-3559":"csi-mock-csi-mock-volumes-3559","csi-mock-csi-mock-volumes-3596":"csi-mock-csi-mock-volumes-3596","csi-mock-csi-mock-volumes-3624":"csi-mock-csi-mock-volumes-3624","csi-mock-csi-mock-volumes-3731":"csi-mock-csi-mock-volumes-3731","csi-mock-csi-mock-volumes-3760":"csi-mock-csi-mock-volumes-3760","csi-mock-csi-mock-volumes-3791":"csi-mock-csi-mock-volumes-3791","csi-mock-csi-mock-volumes-3796":"csi-mock-csi-mock-volumes-3796","csi-mock-csi-mock-volumes-38":"csi-mock-csi-mock-volumes-38","csi-mock-csi-mock-volumes-3926":"csi-mock-csi-mock-volumes-3926","csi-mock-csi-mock-volumes-3935":"csi-mock-csi-mock-volumes-3935","csi-mock-csi-mock-volumes-3993":"csi-mock-csi-mock-volumes-3993","csi-mock-csi-mock-volumes-4187":"csi-mock-csi-mock-volumes-4187","csi-mock-csi-mock-volumes-419":"csi-mock-csi-mock-volumes-419","csi-mock-csi-mock-volumes-4231":"csi-mock-csi-mock-volumes-4231","csi-mock-csi-mock-volumes-4274":"csi-mock-csi-mock-volumes-4274","csi-mock-csi-mock-volumes-4278":"csi-mock-csi-mock-volumes-4278","csi-mock-csi-mock-volumes-4352":"csi-mock-csi-mock-volumes-4352","csi-mock-csi-mock-
volumes-438":"csi-mock-csi-mock-volumes-438","csi-mock-csi-mock-volumes-4439":"csi-mock-csi-mock-volumes-4439","csi-mock-csi-mock-volumes-4567":"csi-mock-csi-mock-volumes-4567","csi-mock-csi-mock-volumes-4864":"csi-mock-csi-mock-volumes-4864","csi-mock-csi-mock-volumes-4869":"csi-mock-csi-mock-volumes-4869","csi-mock-csi-mock-volumes-4902":"csi-mock-csi-mock-volumes-4902","csi-mock-csi-mock-volumes-4926":"csi-mock-csi-mock-volumes-4926","csi-mock-csi-mock-volumes-4981":"csi-mock-csi-mock-volumes-4981","csi-mock-csi-mock-volumes-5085":"csi-mock-csi-mock-volumes-5085","csi-mock-csi-mock-volumes-5254":"csi-mock-csi-mock-volumes-5254","csi-mock-csi-mock-volumes-529":"csi-mock-csi-mock-volumes-529","csi-mock-csi-mock-volumes-5359":"csi-mock-csi-mock-volumes-5359","csi-mock-csi-mock-volumes-5482":"csi-mock-csi-mock-volumes-5482","csi-mock-csi-mock-volumes-5526":"csi-mock-csi-mock-volumes-5526","csi-mock-csi-mock-volumes-5620":"csi-mock-csi-mock-volumes-5620","csi-mock-csi-mock-volumes-5823":"csi-mock-csi-mock-volumes-5823","csi-mock-csi-mock-volumes-5902":"csi-mock-csi-mock-volumes-5902","csi-mock-csi-mock-volumes-6003":"csi-mock-csi-mock-volumes-6003","csi-mock-csi-mock-volumes-6014":"csi-mock-csi-mock-volumes-6014","csi-mock-csi-mock-volumes-6026":"csi-mock-csi-mock-volumes-6026","csi-mock-csi-mock-volumes-6089":"csi-mock-csi-mock-volumes-6089","csi-mock-csi-mock-volumes-6102":"csi-mock-csi-mock-volumes-6102","csi-mock-csi-mock-volumes-6152":"csi-mock-csi-mock-volumes-6152","csi-mock-csi-mock-volumes-6220":"csi-mock-csi-mock-volumes-6220","csi-mock-csi-mock-volumes-6258":"csi-mock-csi-mock-volumes-6258","csi-mock-csi-mock-volumes-6290":"csi-mock-csi-mock-volumes-6290","csi-mock-csi-mock-volumes-6381":"csi-mock-csi-mock-volumes-6381","csi-mock-csi-mock-volumes-6424":"csi-mock-csi-mock-volumes-6424","csi-mock-csi-mock-volumes-6448":"csi-mock-csi-mock-volumes-6448","csi-mock-csi-mock-volumes-6551":"csi-mock-csi-mock-volumes-6551","csi-mock-csi-mock-volumes-6564":"csi-mock-csi-mock-volumes-6564","csi-mock-csi-mock-volumes-661":"csi-mock-csi-mock-volumes-661","csi-mock-csi-mock-volumes-6620":"csi-mock-csi-mock-volumes-6620","csi-mock-csi-mock-volumes-6689":"csi-mock-csi-mock-volumes-6689","csi-mock-csi-mock-volumes-6776":"csi-mock-csi-mock-volumes-6776","csi-mock-csi-mock-volumes-7048":"csi-mock-csi-mock-volumes-7048","csi-mock-csi-mock-volumes-7182":"csi-mock-csi-mock-volumes-7182","csi-mock-csi-mock-volumes-7195":"csi-mock-csi-mock-volumes-7195","csi-mock-csi-mock-volumes-7255":"csi-mock-csi-mock-volumes-7255","csi-mock-csi-mock-volumes-7316":"csi-mock-csi-mock-volumes-7316","csi-mock-csi-mock-volumes-7339":"csi-mock-csi-mock-volumes-7339","csi-mock-csi-mock-volumes-7364":"csi-mock-csi-mock-volumes-7364","csi-mock-csi-mock-volumes-7388":"csi-mock-csi-mock-volumes-7388","csi-mock-csi-mock-volumes-7421":"csi-mock-csi-mock-volumes-7421","csi-mock-csi-mock-volumes-7435":"csi-mock-csi-mock-volumes-7435","csi-mock-csi-mock-volumes-7495":"csi-mock-csi-mock-volumes-7495","csi-mock-csi-mock-volumes-7533":"csi-mock-csi-mock-volumes-7533","csi-mock-csi-mock-volumes-7664":"csi-mock-csi-mock-volumes-7664","csi-mock-csi-mock-volumes-7688":"csi-mock-csi-mock-volumes-7688","csi-mock-csi-mock-volumes-7695":"csi-mock-csi-mock-volumes-7695","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7768":"csi-mock-csi-mock-volumes-7768","csi-mock-csi-mock-volumes-7791":"csi-mock-csi-mock-volumes-7791","csi-mock-csi-mock-volumes-7938":"csi-mock-csi-mock-volumes-7938","csi-mock-csi-mo
ck-volumes-800":"csi-mock-csi-mock-volumes-800","csi-mock-csi-mock-volumes-8090":"csi-mock-csi-mock-volumes-8090","csi-mock-csi-mock-volumes-8163":"csi-mock-csi-mock-volumes-8163","csi-mock-csi-mock-volumes-8244":"csi-mock-csi-mock-volumes-8244","csi-mock-csi-mock-volumes-8351":"csi-mock-csi-mock-volumes-8351","csi-mock-csi-mock-volumes-8495":"csi-mock-csi-mock-volumes-8495","csi-mock-csi-mock-volumes-8510":"csi-mock-csi-mock-volumes-8510","csi-mock-csi-mock-volumes-860":"csi-mock-csi-mock-volumes-860","csi-mock-csi-mock-volumes-868":"csi-mock-csi-mock-volumes-868","csi-mock-csi-mock-volumes-8794":"csi-mock-csi-mock-volumes-8794","csi-mock-csi-mock-volumes-8829":"csi-mock-csi-mock-volumes-8829","csi-mock-csi-mock-volumes-8875":"csi-mock-csi-mock-volumes-8875","csi-mock-csi-mock-volumes-8912":"csi-mock-csi-mock-volumes-8912","csi-mock-csi-mock-volumes-8951":"csi-mock-csi-mock-volumes-8951","csi-mock-csi-mock-volumes-9011":"csi-mock-csi-mock-volumes-9011","csi-mock-csi-mock-volumes-9167":"csi-mock-csi-mock-volumes-9167","csi-mock-csi-mock-volumes-9176":"csi-mock-csi-mock-volumes-9176","csi-mock-csi-mock-volumes-926":"csi-mock-csi-mock-volumes-926","csi-mock-csi-mock-volumes-9267":"csi-mock-csi-mock-volumes-9267","csi-mock-csi-mock-volumes-927":"csi-mock-csi-mock-volumes-927","csi-mock-csi-mock-volumes-9337":"csi-mock-csi-mock-volumes-9337","csi-mock-csi-mock-volumes-9346":"csi-mock-csi-mock-volumes-9346","csi-mock-csi-mock-volumes-9361":"csi-mock-csi-mock-volumes-9361","csi-mock-csi-mock-volumes-944":"csi-mock-csi-mock-volumes-944","csi-mock-csi-mock-volumes-9453":"csi-mock-csi-mock-volumes-9453","csi-mock-csi-mock-volumes-9494":"csi-mock-csi-mock-volumes-9494","csi-mock-csi-mock-volumes-9507":"csi-mock-csi-mock-volumes-9507","csi-mock-csi-mock-volumes-9529":"csi-mock-csi-mock-volumes-9529","csi-mock-csi-mock-volumes-9629":"csi-mock-csi-mock-volumes-9629","csi-mock-csi-mock-volumes-9788":"csi-mock-csi-mock-volumes-9788","csi-mock-csi-mock-volumes-9818":"csi-mock-csi-mock-volumes-9818","csi-mock-csi-mock-volumes-9836":"csi-mock-csi-mock-volumes-9836","csi-mock-csi-mock-volumes-9868":"csi-mock-csi-mock-volumes-9868"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-22 08:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-06-09 08:24:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-06-14 16:59:15 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-06-14 16:59:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-14 17:04:22 +0000 UTC,LastTransitionTime:2021-05-22 08:23:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:leguer-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:86c8c7b1af6542c49386440702c637be,SystemUUID:fe86f09a-28b3-4895-94ce-6312a2d07a57,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:706d69e007d61c69495dc384167c7cb242ced8b893ac8bb30bdee4367c894980 docker.io/litmuschaos/go-runner:1.13.2],SizeBytes:153211568,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/library/docker@sha256:87ed8e3a7b251eef42c2e4251f95ae3c5f8c4c0a64900f19cc532d0a42aa7107 docker.io/library/docker:dind],SizeBytes:81659525,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:332c4eff6fb327d140edbcc4cf5be7d3afd2ce5b6883348350f2336320c79ff7 docker.io/litmuschaos/chaos-operator:1.13.2],SizeBytes:57450276,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:806f80ccc41d7d5b33035d09bfc41bb7814f9989e738fcdefc29780934d4a663 docker.io/litmuschaos/chaos-runner:1.13.2],SizeBytes:56004602,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:0f30e5c1a1286a4bf6739dd8bdf1d00f0dd915474b3c62e892592277b0395986 docker.io/bitnami/kubectl:latest],SizeBytes:49444404,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[quay.io/metallb/controller@sha256:68c52b5301b42cad0cbf497f3d83c2e18b82548a9c36690b99b2023c55cb715a quay.io/metallb/controller:main],SizeBytes:35989620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:17747507,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 
docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:8888823,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 14 17:05:23.310: INFO: Logging kubelet events for node leguer-worker2 Jun 14 17:05:23.357: INFO: Logging pods the kubelet thinks is on node leguer-worker2 Jun 14 17:05:23.386: INFO: create-loop-devs-nbf25 started at 2021-05-22 08:23:43 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.386: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:05:23.386: INFO: contour-6648989f79-2vldk started at 2021-05-22 08:24:02 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.386: INFO: Container contour ready: true, restart count 3 Jun 14 17:05:23.386: INFO: kindnet-kx9mk started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.386: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 17:05:23.386: INFO: tune-sysctls-vjdll started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.386: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:05:23.386: INFO: chaos-operator-ce-5754fd4b69-zcrd4 started at 2021-05-26 09:12:47 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.386: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 17:05:23.386: INFO: chaos-controller-manager-69c479c674-ld4jc started at 2021-05-26 09:15:28 +0000 UTC (0+1 container statuses recorded) Jun 14 17:05:23.386: INFO: Container 
chaos-mesh ready: true, restart count 0
Jun 14 17:05:23.386: INFO: speaker-55zcr started at 2021-05-22 08:23:57 +0000 UTC (0+1 container statuses recorded)
Jun 14 17:05:23.386: INFO: Container speaker ready: true, restart count 0
Jun 14 17:05:23.386: INFO: kube-proxy-mp68m started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded)
Jun 14 17:05:23.386: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 17:05:23.386: INFO: kube-multus-ds-n48bs started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded)
Jun 14 17:05:23.386: INFO: Container kube-multus ready: true, restart count 1
Jun 14 17:05:23.386: INFO: contour-6648989f79-8gz4z started at 2021-05-22 10:05:00 +0000 UTC (0+1 container statuses recorded)
Jun 14 17:05:23.386: INFO: Container contour ready: true, restart count 1
Jun 14 17:05:23.386: INFO: controller-675995489c-h2wms started at 2021-05-22 08:23:59 +0000 UTC (0+1 container statuses recorded)
Jun 14 17:05:23.386: INFO: Container controller ready: true, restart count 0
Jun 14 17:05:23.386: INFO: dockerd started at 2021-05-26 09:12:20 +0000 UTC (0+1 container statuses recorded)
Jun 14 17:05:23.386: INFO: Container dockerd ready: true, restart count 0
Jun 14 17:05:23.386: INFO: chaos-daemon-2tzpz started at 2021-05-26 09:15:28 +0000 UTC (0+1 container statuses recorded)
Jun 14 17:05:23.386: INFO: Container chaos-daemon ready: true, restart count 0
W0614 17:05:23.395702 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jun 14 17:05:23.680: INFO: Latency metrics for node leguer-worker2
Jun 14 17:05:23.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1347" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83

• Failure [305.434 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jun 14 17:05:22.566: Unexpected error:
      <*errors.errorString | 0xc0002cc200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:1129
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":18,"completed":15,"skipped":5362,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:05:23.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jun 14 17:05:23.778: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jun 14 17:05:23.788: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:23.791: INFO: Number of nodes with available pods: 0 Jun 14 17:05:23.791: INFO: Node leguer-worker is running more than one daemon pod Jun 14 17:05:24.798: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:24.802: INFO: Number of nodes with available pods: 1 Jun 14 17:05:24.802: INFO: Node leguer-worker is running more than one daemon pod Jun 14 17:05:25.798: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:25.802: INFO: Number of nodes with available pods: 2 Jun 14 17:05:25.802: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 14 17:05:25.836: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:25.836: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:26.034: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:27.040: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:27.040: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:27.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:28.129: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:28.129: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 14 17:05:28.134: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:29.039: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:29.039: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:29.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:29.043: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:30.040: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:30.040: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:30.040: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:30.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:31.039: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:31.039: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:31.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:31.044: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:32.040: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:32.041: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:32.041: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:32.046: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:33.039: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:33.039: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:33.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:33.044: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:34.037: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:34.037: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:34.037: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 14 17:05:34.041: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:35.040: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:35.040: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:35.040: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:35.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:36.040: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:36.040: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:36.040: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:36.046: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:37.123: INFO: Wrong image for pod: daemon-set-562tg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:37.123: INFO: Pod daemon-set-562tg is not available Jun 14 17:05:37.123: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:37.128: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:38.039: INFO: Pod daemon-set-5885q is not available Jun 14 17:05:38.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:38.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:39.130: INFO: Pod daemon-set-5885q is not available Jun 14 17:05:39.130: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:39.135: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:40.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:40.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:41.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jun 14 17:05:41.039: INFO: Pod daemon-set-tjsgm is not available Jun 14 17:05:41.044: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:42.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:42.039: INFO: Pod daemon-set-tjsgm is not available Jun 14 17:05:42.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:43.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:43.039: INFO: Pod daemon-set-tjsgm is not available Jun 14 17:05:43.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:44.040: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:44.040: INFO: Pod daemon-set-tjsgm is not available Jun 14 17:05:44.046: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:45.040: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:45.041: INFO: Pod daemon-set-tjsgm is not available Jun 14 17:05:45.046: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:46.040: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:46.040: INFO: Pod daemon-set-tjsgm is not available Jun 14 17:05:46.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:47.039: INFO: Wrong image for pod: daemon-set-tjsgm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jun 14 17:05:47.039: INFO: Pod daemon-set-tjsgm is not available Jun 14 17:05:47.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:48.040: INFO: Pod daemon-set-fh4km is not available Jun 14 17:05:48.045: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
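------------------------------
The "Wrong image for pod" loop above is the spec waiting for the RollingUpdate rollout to replace each daemon pod's image. Outside the e2e framework, the same kind of check can be approximated with a short client-go poll like the sketch below. The namespace and target image are taken from this run; the kubeconfig path, label selector, timeout and single-container assumption are illustrative guesses, not values used by the test itself.

// Minimal sketch, not the e2e framework's actual helper.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location, mirroring the ">>> kubeConfig" lines in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const (
		ns       = "daemonsets-7592"                         // namespace from this run
		selector = "daemonset-name=daemon-set"               // assumed label selector
		expected = "k8s.gcr.io/e2e-test-images/agnhost:2.21" // target image from the log
	)

	// Poll once per second until every pod behind the selector reports the
	// expected image, or the (assumed) timeout expires.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		done := true
		for _, p := range pods.Items {
			if len(p.Spec.Containers) == 0 {
				continue
			}
			// Assumes a single-container daemon pod, as in this test.
			img := p.Spec.Containers[0].Image
			if img != expected {
				fmt.Printf("Wrong image for pod: %s. Expected: %s, got: %s\n", p.Name, expected, img)
				done = false
			}
		}
		return done, nil
	})
	if err != nil {
		panic(err)
	}
}
------------------------------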
Jun 14 17:05:48.049: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:48.053: INFO: Number of nodes with available pods: 1 Jun 14 17:05:48.053: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:05:49.061: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:49.066: INFO: Number of nodes with available pods: 1 Jun 14 17:05:49.066: INFO: Node leguer-worker2 is running more than one daemon pod Jun 14 17:05:50.227: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 14 17:05:50.232: INFO: Number of nodes with available pods: 2 Jun 14 17:05:50.232: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7592, will wait for the garbage collector to delete the pods Jun 14 17:05:50.312: INFO: Deleting DaemonSet.extensions daemon-set took: 7.173273ms Jun 14 17:05:51.312: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000330732s Jun 14 17:05:58.021: INFO: Number of nodes with available pods: 0 Jun 14 17:05:58.021: INFO: Number of running nodes: 0, number of available pods: 0 Jun 14 17:05:58.025: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6273090"},"items":null} Jun 14 17:05:58.028: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6273090"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:05:58.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7592" for this suite. 
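------------------------------
For context on what this spec exercised: the DaemonSet under test starts its pods with docker.io/library/httpd:2.4.38-alpine, switches the pod template image to k8s.gcr.io/e2e-test-images/agnhost:2.21, and relies on the RollingUpdate update strategy to replace the pods node by node. The control-plane node is skipped throughout because the test's daemon pods carry no toleration for its node-role.kubernetes.io/master:NoSchedule taint. Below is a minimal Go sketch of a DaemonSet of that shape; the names, namespace, labels and the added toleration are illustrative assumptions, not the framework's actual manifest.

// Minimal sketch of a RollingUpdate DaemonSet, assuming illustrative names.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func exampleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed labels
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-7592"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy named in the spec title: when the pod
			// template changes, pods are deleted and recreated one node at a time.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // initial image from the log
					}},
					// The test's DaemonSet has no such toleration, which is why the log
					// keeps skipping leguer-control-plane; adding one like this would
					// let the daemon pods schedule there as well.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
				},
			},
		},
	}
}

func main() { _ = exampleDaemonSet() }
------------------------------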
• [SLOW TEST:34.360 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":18,"completed":16,"skipped":5418,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 14 17:05:58.059: INFO: Running AfterSuite actions on all nodes
Jun 14 17:05:58.059: INFO: Running AfterSuite actions on node 1
Jun 14 17:05:58.059: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":18,"completed":16,"skipped":5650,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

Summarizing 2 Failures:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:1129

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:1129

Ran 18 of 5668 Specs in 1246.401 seconds
FAIL! -- 16 Passed | 2 Failed | 0 Pending | 5650 Skipped
--- FAIL: TestE2E (1246.55s)
FAIL

Ginkgo ran 1 suite in 20m48.089911385s
Test Suite Failed
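------------------------------
The two failures summarized above are the SchedulerPredicates hostPort specs; both point at predicates.go:1129, where the second one reported "timed out waiting for the condition", so the log records a timeout in the e2e wait rather than an explicit scheduling verdict. For reference, the conflict that the second spec validates has the shape sketched below: two pods targeted at the same node request the same hostPort and protocol, one bound to a specific hostIP and one bound to 0.0.0.0, and the scheduler is expected to refuse to place the second. This is an illustrative sketch only; the pod names, port and node selector are assumptions, not the test's own values.

// Minimal sketch of the hostPort/hostIP conflict, assuming illustrative values.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "sched-pred-1347"},
		Spec: corev1.PodSpec{
			// Steer both pods to the same node via the well-known hostname label,
			// so the hostPort filter is what decides whether the second pod fits.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "leguer-worker2"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Args:  []string{"netexec", "--http-port=8080"},
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      8080, // assumed port for the sketch
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	_ = hostPortPod("host-port-specific", "127.0.0.1") // binds one address on the node
	_ = hostPortPod("host-port-wildcard", "0.0.0.0")   // conflicts: the wildcard covers the same port/protocol
}
------------------------------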