I0903 14:00:19.877258 17 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0903 14:00:19.877527 17 e2e.go:129] Starting e2e run "b7934e04-3c96-4ddb-99ed-96efb31076b3" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1630677618 - Will randomize all specs Will run 17 of 5484 specs Sep 3 14:00:19.980: INFO: >>> kubeConfig: /root/.kube/config Sep 3 14:00:19.984: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Sep 3 14:00:20.016: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Sep 3 14:00:20.059: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Sep 3 14:00:20.059: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Sep 3 14:00:20.059: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Sep 3 14:00:20.073: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) Sep 3 14:00:20.073: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Sep 3 14:00:20.073: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Sep 3 14:00:20.073: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) Sep 3 14:00:20.073: INFO: e2e test version: v1.19.11 Sep 3 14:00:20.074: INFO: kube-apiserver version: v1.19.11 Sep 3 14:00:20.074: INFO: >>> kubeConfig: /root/.kube/config Sep 3 14:00:20.080: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:00:20.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces Sep 3 14:00:20.112: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Sep 3 14:00:20.121: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:00:49.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1310" for this suite. 
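Editor's note on the behaviour this spec exercises: deleting a namespace garbage-collects every pod inside it. The sketch below reproduces that flow with client-go outside the Ginkgo harness; it is not the suite's own code, and the namespace prefix, pod name, and timeouts are illustrative assumptions. The kubeconfig path is the one the run logs above.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run reports using.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Create a throwaway namespace and a pause pod inside it.
	ns, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"}},
		metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.2", // same image the filler pods in this run use
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns.Name).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Delete the namespace and wait for its pods to disappear with it.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns.Name).List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, nil // the namespace itself may already be gone
		}
		return len(pods.Items) == 0, nil
	})
	fmt.Println("pods removed with namespace:", err == nil)
}
```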
STEP: Destroying namespace "nsdeletetest-2512" for this suite. Sep 3 14:00:49.215: INFO: Namespace nsdeletetest-2512 was already deleted STEP: Destroying namespace "nsdeletetest-4239" for this suite. • [SLOW TEST:29.139 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":1,"skipped":24,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:00:49.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:00:49.258: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:00:49.267: INFO: Waiting for terminating namespaces to be deleted... 
Sep 3 14:00:49.270: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:00:49.277: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.277: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:00:49.277: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.277: INFO: Container coredns ready: true, restart count 0 Sep 3 14:00:49.277: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.277: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:00:49.277: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.277: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:00:49.277: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.277: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:00:49.277: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.277: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:00:49.277: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.277: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:00:49.277: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:00:49.284: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 14:00:49.284: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:00:49.284: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:00:49.284: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container coredns ready: true, restart count 0 Sep 3 14:00:49.284: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:00:49.284: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:00:49.284: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:00:49.284: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:00:49.284: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:00:49.284: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container 
statuses recorded) Sep 3 14:00:49.284: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node has the label node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod chaos-controller-manager-69c479c674-2scf8 requesting resource cpu=25m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod chaos-daemon-6lv64 requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod chaos-daemon-tzn7z requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.329: INFO: Pod dockerd requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod coredns-f9fd979d6-45cv5 requesting resource cpu=100m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod coredns-f9fd979d6-qdhsv requesting resource cpu=100m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.329: INFO: Pod create-loop-devs-4jkpj requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod create-loop-devs-qjl7t requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.329: INFO: Pod kindnet-55d6f requesting resource cpu=100m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.329: INFO: Pod kindnet-7cmgn requesting resource cpu=100m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod kube-proxy-h8v9x requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod kube-proxy-lqr9t requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.329: INFO: Pod tune-sysctls-mv2h6 requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:00:49.329: INFO: Pod tune-sysctls-wz9ls requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.329: INFO: Pod chaos-operator-ce-5754fd4b69-crx4p requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.329: INFO: Pod local-path-provisioner-556d4466c8-khwq6 requesting resource cpu=0m on Node capi-kali-md-0-76b6798f7f-7jvhm STEP: Starting Pods to consume most of the cluster CPU. Sep 3 14:00:49.329: INFO: Creating a pod which consumes cpu=61460m on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:00:49.336: INFO: Creating a pod which consumes cpu=61442m on Node capi-kali-md-0-76b6798f7f-7jvhm STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-280e7766-c596-4a3b-bbc2-95a04a0a1f48.16a154aeb05cf25d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-784/filler-pod-280e7766-c596-4a3b-bbc2-95a04a0a1f48 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [filler-pod-280e7766-c596-4a3b-bbc2-95a04a0a1f48.16a154aedc80fad4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-280e7766-c596-4a3b-bbc2-95a04a0a1f48.16a154aeddc2a670], Reason = [Created], Message = [Created container filler-pod-280e7766-c596-4a3b-bbc2-95a04a0a1f48] STEP: Considering event: Type = [Normal], Name = [filler-pod-280e7766-c596-4a3b-bbc2-95a04a0a1f48.16a154aee6288010], Reason = [Started], Message = [Started container filler-pod-280e7766-c596-4a3b-bbc2-95a04a0a1f48] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf2c122d-0141-4ea9-ab45-b49f101f07b1.16a154aeb0242724], Reason = [Scheduled], Message = [Successfully assigned sched-pred-784/filler-pod-bf2c122d-0141-4ea9-ab45-b49f101f07b1 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf2c122d-0141-4ea9-ab45-b49f101f07b1.16a154aedc507539], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf2c122d-0141-4ea9-ab45-b49f101f07b1.16a154aeddbbcc9e], Reason = [Created], Message = [Created container filler-pod-bf2c122d-0141-4ea9-ab45-b49f101f07b1] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf2c122d-0141-4ea9-ab45-b49f101f07b1.16a154aee62badcd], Reason = [Started], Message = [Started container filler-pod-bf2c122d-0141-4ea9-ab45-b49f101f07b1] STEP: Considering event: Type = [Warning], Name = [additional-pod.16a154af28c59820], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node doesn't have the label node STEP: removing the label node off the node capi-kali-md-0-76b6798f7f-7jvhm STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:00:52.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-784" for this suite. 
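The spec above fills each node's allocatable CPU with filler pods, then confirms one more pod fails to schedule with "Insufficient cpu". A rough equivalent of the final step, written as a helper in a made-up `e2esketches` package (the clientset is assumed to be built as in the earlier namespace sketch, and the oversized request is deliberately unrealistic):

```go
package e2esketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOversizedPod requests far more CPU than any node can allocate, so the
// scheduler should leave it Pending with a FailedScheduling / Insufficient cpu event.
func createOversizedPod(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// 1000 CPUs: comfortably above any node's allocatable.
						corev1.ResourceCPU: resource.MustParse("1000"),
					},
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}
```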
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":2,"skipped":968,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:00:52.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Sep 3 14:00:52.471: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:00:52.473: INFO: Number of nodes with available pods: 0 Sep 3 14:00:52.473: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:00:53.480: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:00:53.483: INFO: Number of nodes with available pods: 0 Sep 3 14:00:53.483: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:00:54.479: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:00:54.483: INFO: Number of nodes with available pods: 2 Sep 3 14:00:54.484: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Sep 3 14:00:54.502: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:00:54.506: INFO: Number of nodes with available pods: 1 Sep 3 14:00:54.506: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:00:55.511: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:00:55.516: INFO: Number of nodes with available pods: 1 Sep 3 14:00:55.516: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:00:56.512: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:00:56.517: INFO: Number of nodes with available pods: 2 Sep 3 14:00:56.517: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9643, will wait for the garbage collector to delete the pods Sep 3 14:00:56.585: INFO: Deleting DaemonSet.extensions daemon-set took: 7.030144ms Sep 3 14:00:57.085: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.316404ms Sep 3 14:01:03.688: INFO: Number of nodes with available pods: 0 Sep 3 14:01:03.688: INFO: Number of running nodes: 0, number of available pods: 0 Sep 3 14:01:03.694: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9643/daemonsets","resourceVersion":"1060903"},"items":null} Sep 3 14:01:03.697: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9643/pods","resourceVersion":"1060903"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:01:03.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9643" for this suite. 
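This DaemonSet spec forces one daemon pod into a Failed phase and checks that the controller revives it. The sketch below only approximates that: it deletes a daemon pod instead of failing it, then waits for the ready count to recover. Same illustrative `e2esketches` package and clientset as above; the pod label selector is whatever your DaemonSet template uses.

```go
package e2esketches

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetRecovery removes one pod owned by the DaemonSet and waits for
// the controller to bring NumberReady back to DesiredNumberScheduled.
func waitForDaemonSetRecovery(ctx context.Context, cs kubernetes.Interface, ns, dsName, podSelector string) error {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: podSelector})
	if err != nil {
		return err
	}
	if len(pods.Items) == 0 {
		return nil // nothing to knock over
	}
	// Remove one daemon pod; the DaemonSet controller should recreate it.
	if err := cs.CoreV1().Pods(ns).Delete(ctx, pods.Items[0].Name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, dsName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberReady == ds.Status.DesiredNumberScheduled, nil
	})
}
```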
• [SLOW TEST:11.304 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":3,"skipped":1213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:01:03.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:01:03.751: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:01:03.760: INFO: Waiting for terminating namespaces to be deleted... Sep 3 14:01:03.764: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:01:03.770: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.770: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:01:03.770: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.770: INFO: Container coredns ready: true, restart count 0 Sep 3 14:01:03.771: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.771: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:01:03.771: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.771: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:01:03.771: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.771: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:01:03.771: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.771: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:01:03.771: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.771: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:01:03.771: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:01:03.778: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 
14:01:03.778: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:01:03.778: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:01:03.778: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container coredns ready: true, restart count 0 Sep 3 14:01:03.778: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:01:03.778: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:01:03.778: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:01:03.778: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:01:03.778: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded) Sep 3 14:01:03.778: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a067ca75-c63b-4eef-803a-8138677f6792 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a067ca75-c63b-4eef-803a-8138677f6792 off the node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node doesn't have the label kubernetes.io/e2e-a067ca75-c63b-4eef-803a-8138677f6792 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:01:16.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1343" for this suite. 
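The three pods in this spec all declare hostPort 54321 yet co-schedule, because the (hostIP, protocol, hostPort) tuples differ. The spec pins them with a random node label; the sketch below approximates that with the standard `kubernetes.io/hostname` label instead. Same illustrative package and clientset as the earlier sketches.

```go
package e2esketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createHostPortTrio creates three pods on one node that all use hostPort 54321
// but differ in hostIP and/or protocol, so none of them conflict.
func createHostPortTrio(ctx context.Context, cs kubernetes.Interface, ns, nodeName string) error {
	cases := []struct {
		name     string
		hostIP   string
		protocol corev1.Protocol
	}{
		{"pod1", "127.0.0.1", corev1.ProtocolTCP},
		{"pod2", "127.0.0.2", corev1.ProtocolTCP}, // same port, different hostIP
		{"pod3", "127.0.0.2", corev1.ProtocolUDP}, // same port and hostIP, different protocol
	}
	for _, c := range cases {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: c.name},
			Spec: corev1.PodSpec{
				// Steer all three onto the same node (hostname label usually equals the node name).
				NodeSelector: map[string]string{"kubernetes.io/hostname": nodeName},
				Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
					Ports: []corev1.ContainerPort{{
						ContainerPort: 54321,
						HostPort:      54321,
						HostIP:        c.hostIP,
						Protocol:      c.protocol,
					}},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```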
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:12.343 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":17,"completed":4,"skipped":1251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:01:16.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:01:16.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4261" for this suite. STEP: Destroying namespace "nspatchtest-d07450c6-d3ac-410e-8652-31990a60fc19-7313" for this suite. 
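Patching a namespace, as the spec above does, is a single API call. A strategic-merge-patch sketch in the same illustrative package (the label key and value are placeholders, not the ones the test applies):

```go
package e2esketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelNamespace adds a label to an existing namespace via a strategic merge patch.
func labelNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	return cs.CoreV1().Namespaces().Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
}
```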
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":5,"skipped":1448,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:01:16.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:01:16.181: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:01:16.190: INFO: Waiting for terminating namespaces to be deleted... Sep 3 14:01:16.193: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:01:16.200: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:01:16.201: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container coredns ready: true, restart count 0 Sep 3 14:01:16.201: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:01:16.201: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:01:16.201: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:01:16.201: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:01:16.201: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:01:16.201: INFO: pod1 from sched-pred-1343 started at 2021-09-03 14:01:05 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container pod1 ready: true, restart count 0 Sep 3 14:01:16.201: INFO: pod2 from sched-pred-1343 started at 2021-09-03 14:01:07 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container pod2 ready: true, restart count 0 Sep 3 14:01:16.201: INFO: pod3 from sched-pred-1343 started at 2021-09-03 14:01:12 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.201: INFO: Container pod3 ready: true, restart count 0 Sep 3 14:01:16.201: 
INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:01:16.208: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 14:01:16.208: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:01:16.208: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:01:16.208: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container coredns ready: true, restart count 0 Sep 3 14:01:16.208: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:01:16.208: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:01:16.208: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:01:16.208: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:01:16.208: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded) Sep 3 14:01:16.208: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2cbd0c31-cb72-4996-822e-e43ea4a54714 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-2cbd0c31-cb72-4996-822e-e43ea4a54714 off the node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node doesn't have the label kubernetes.io/e2e-2cbd0c31-cb72-4996-822e-e43ea4a54714 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:01:20.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4753" for this suite. 
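The matching-NodeSelector spec applies a random label to one node and then relaunches the pod with a selector for it. A condensed sketch under the same assumptions as the earlier helpers (label key/value and pod name are invented):

```go
package e2esketches

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// runPodWithNodeSelector labels the given node and creates a pod that can only
// land on nodes carrying that label, mirroring the "respected if matching" case.
func runPodWithNodeSelector(ctx context.Context, cs kubernetes.Interface, ns, node string) error {
	patch := []byte(`{"metadata":{"labels":{"e2e-example":"42"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"e2e-example": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	fmt.Println("created", created.Name, "with nodeSelector e2e-example=42")
	return nil
}
```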
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":6,"skipped":1654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:01:20.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 3 14:01:20.339: INFO: Waiting up to 1m0s for all nodes to be ready Sep 3 14:02:20.369: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 3 14:02:20.400: INFO: Created pod: pod0-sched-preemption-low-priority Sep 3 14:02:20.418: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:02:36.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8885" for this suite. 
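Basic preemption needs PriorityClasses plus a high-priority pod whose request cannot fit unless a lower-priority pod is evicted, which is what the filler pods above set up. A condensed sketch with illustrative class names and sizes, not the test's own low/medium/high classes:

```go
package e2esketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setupPreemption creates two PriorityClasses and a high-priority pod; on a
// cluster already saturated by low-priority pods, the scheduler should preempt
// one of them to make room.
func setupPreemption(ctx context.Context, cs kubernetes.Interface, ns string) error {
	for name, value := range map[string]int32{"sketch-low": 1, "sketch-high": 1000} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
		Spec: corev1.PodSpec{
			PriorityClassName: "sketch-high",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					// Sized so it only fits if a low-priority pod is preempted.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```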
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:76.216 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":7,"skipped":2680,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:02:36.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 3 14:02:36.556: INFO: Waiting up to 1m0s for all nodes to be ready Sep 3 14:03:36.588: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 3 14:03:36.616: INFO: Created pod: pod0-sched-preemption-low-priority Sep 3 14:03:36.929: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:03:57.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9789" for this suite. 
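The critical-pod variant follows the same preemption flow but uses a built-in system priority class instead of a user-defined one; pods with those classes are only admitted in kube-system. A minimal pod shape under that assumption:

```go
package e2esketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCriticalPod schedules a cluster-critical pod; the scheduler may preempt
// lower-priority pods to place it. System priority classes are only accepted in
// the kube-system namespace.
func createCriticalPod(ctx context.Context, cs kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: metav1.NamespaceSystem},
		Spec: corev1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	_, err := cs.CoreV1().Pods(metav1.NamespaceSystem).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```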
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:80.680 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":8,"skipped":2756,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:03:57.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:04:03.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-835" for this suite. STEP: Destroying namespace "nsdeletetest-5832" for this suite. Sep 3 14:04:03.317: INFO: Namespace nsdeletetest-5832 was already deleted STEP: Destroying namespace "nsdeletetest-3829" for this suite. 
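Same namespace-lifecycle guarantee as the pods case earlier in this run, here for Services. A short sketch in the same illustrative package; it creates a Service and deletes the enclosing namespace, after which the Service should be gone along with it:

```go
package e2esketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// serviceGoneWithNamespace creates a Service in the namespace and then deletes
// the namespace; once finalization completes, the Service is removed with it.
func serviceGoneWithNamespace(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "test"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{})
}
```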
• [SLOW TEST:6.125 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":9,"skipped":2801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:04:03.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Sep 3 14:04:03.636: INFO: Pod name wrapped-volume-race-99ee8dea-3169-472e-adbb-45b91c01463b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-99ee8dea-3169-472e-adbb-45b91c01463b in namespace emptydir-wrapper-8832, will wait for the garbage collector to delete the pods Sep 3 14:04:17.766: INFO: Deleting ReplicationController wrapped-volume-race-99ee8dea-3169-472e-adbb-45b91c01463b took: 8.272288ms Sep 3 14:04:18.267: INFO: Terminating ReplicationController wrapped-volume-race-99ee8dea-3169-472e-adbb-45b91c01463b pods took: 500.36667ms STEP: Creating RC which spawns configmap-volume pods Sep 3 14:04:23.782: INFO: Pod name wrapped-volume-race-23c07c8d-cb21-4785-bad6-55af98d28b28: Found 0 pods out of 5 Sep 3 14:04:28.791: INFO: Pod name wrapped-volume-race-23c07c8d-cb21-4785-bad6-55af98d28b28: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-23c07c8d-cb21-4785-bad6-55af98d28b28 in namespace emptydir-wrapper-8832, will wait for the garbage collector to delete the pods Sep 3 14:04:38.983: INFO: Deleting ReplicationController wrapped-volume-race-23c07c8d-cb21-4785-bad6-55af98d28b28 took: 7.114954ms Sep 3 14:04:39.483: INFO: Terminating ReplicationController wrapped-volume-race-23c07c8d-cb21-4785-bad6-55af98d28b28 pods took: 500.276736ms STEP: Creating RC which spawns configmap-volume pods Sep 3 14:04:42.904: INFO: Pod name wrapped-volume-race-7b19788a-af98-4c8d-8adb-9c089b684f19: Found 0 pods out of 5 Sep 3 14:04:47.912: INFO: Pod name wrapped-volume-race-7b19788a-af98-4c8d-8adb-9c089b684f19: Found 
5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7b19788a-af98-4c8d-8adb-9c089b684f19 in namespace emptydir-wrapper-8832, will wait for the garbage collector to delete the pods Sep 3 14:04:58.003: INFO: Deleting ReplicationController wrapped-volume-race-7b19788a-af98-4c8d-8adb-9c089b684f19 took: 8.11298ms Sep 3 14:04:58.104: INFO: Terminating ReplicationController wrapped-volume-race-7b19788a-af98-4c8d-8adb-9c089b684f19 pods took: 100.335087ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:05:02.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8832" for this suite. • [SLOW TEST:59.450 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":10,"skipped":3156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:05:02.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 14:05:02.838: INFO: Create a RollingUpdate DaemonSet Sep 3 14:05:02.843: INFO: Check that daemon pods launch on every node of the cluster Sep 3 14:05:02.848: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:02.850: INFO: Number of nodes with available pods: 0 Sep 3 14:05:02.850: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:05:03.856: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:03.861: INFO: Number of nodes with available pods: 0 Sep 3 14:05:03.861: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is 
running more than one daemon pod Sep 3 14:05:04.856: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:04.860: INFO: Number of nodes with available pods: 2 Sep 3 14:05:04.860: INFO: Number of running nodes: 2, number of available pods: 2 Sep 3 14:05:04.861: INFO: Update the DaemonSet to trigger a rollout Sep 3 14:05:04.869: INFO: Updating DaemonSet daemon-set Sep 3 14:05:08.886: INFO: Roll back the DaemonSet before rollout is complete Sep 3 14:05:08.894: INFO: Updating DaemonSet daemon-set Sep 3 14:05:08.894: INFO: Make sure DaemonSet rollback is complete Sep 3 14:05:08.898: INFO: Wrong image for pod: daemon-set-qn24z. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 3 14:05:08.898: INFO: Pod daemon-set-qn24z is not available Sep 3 14:05:08.903: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:09.907: INFO: Wrong image for pod: daemon-set-qn24z. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 3 14:05:09.907: INFO: Pod daemon-set-qn24z is not available Sep 3 14:05:09.912: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:10.907: INFO: Pod daemon-set-qxngp is not available Sep 3 14:05:10.912: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8644, will wait for the garbage collector to delete the pods Sep 3 14:05:10.978: INFO: Deleting DaemonSet.extensions daemon-set took: 6.175715ms Sep 3 14:05:11.478: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.30076ms Sep 3 14:05:23.681: INFO: Number of nodes with available pods: 0 Sep 3 14:05:23.681: INFO: Number of running nodes: 0, number of available pods: 0 Sep 3 14:05:23.684: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8644/daemonsets","resourceVersion":"1062785"},"items":null} Sep 3 14:05:23.687: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8644/pods","resourceVersion":"1062785"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:05:23.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8644" for this suite. 
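The rollback spec updates the DaemonSet template to a broken image mid-rollout and then reverts it without restarting pods that never changed. `kubectl rollout undo daemonset/daemon-set` is the usual CLI route; in code, re-patching the template image back gives the same template-level result. The helper below is a sketch under the same assumptions as the earlier ones; the container name and images are parameters you supply.

```go
package e2esketches

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// rollBackDaemonSetImage patches the named container of a DaemonSet to a new
// image and then immediately patches it back, which is what a rollback amounts
// to at the pod-template level.
func rollBackDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name, container, goodImage, badImage string) error {
	patchFmt := `{"spec":{"template":{"spec":{"containers":[{"name":"%s","image":"%s"}]}}}}`
	for _, img := range []string{badImage, goodImage} {
		patch := []byte(fmt.Sprintf(patchFmt, container, img))
		if _, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```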
• [SLOW TEST:20.944 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":11,"skipped":3467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:05:23.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:05:23.753: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:05:23.760: INFO: Waiting for terminating namespaces to be deleted... Sep 3 14:05:23.763: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:05:23.768: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.768: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:05:23.768: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.768: INFO: Container coredns ready: true, restart count 0 Sep 3 14:05:23.768: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.768: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:05:23.768: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.768: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:05:23.768: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.768: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:05:23.768: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.769: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:05:23.769: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.769: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:05:23.769: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:05:23.774: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container chaos-mesh ready: true, restart 
count 0 Sep 3 14:05:23.774: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:05:23.774: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:05:23.774: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container coredns ready: true, restart count 0 Sep 3 14:05:23.774: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:05:23.774: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:05:23.774: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:05:23.774: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.774: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:05:23.774: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded) Sep 3 14:05:23.775: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16a154ee96e892c2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:05:24.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3782" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":12,"skipped":3557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:05:24.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 14:05:24.860: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Sep 3 14:05:24.870: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:24.872: INFO: Number of nodes with available pods: 0 Sep 3 14:05:24.872: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:05:25.878: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:25.882: INFO: Number of nodes with available pods: 0 Sep 3 14:05:25.882: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:05:26.878: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:26.882: INFO: Number of nodes with available pods: 1 Sep 3 14:05:26.883: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:05:27.878: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:27.882: INFO: Number of nodes with available pods: 2 Sep 3 14:05:27.882: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Sep 3 14:05:27.910: INFO: Wrong image for pod: daemon-set-8p4qt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:27.910: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 3 14:05:27.915: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:29.020: INFO: Wrong image for pod: daemon-set-8p4qt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:29.020: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:29.026: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:29.919: INFO: Wrong image for pod: daemon-set-8p4qt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:29.919: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:29.925: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:30.921: INFO: Wrong image for pod: daemon-set-8p4qt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:30.921: INFO: Pod daemon-set-8p4qt is not available Sep 3 14:05:30.921: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:30.926: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:32.020: INFO: Wrong image for pod: daemon-set-8p4qt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:32.020: INFO: Pod daemon-set-8p4qt is not available Sep 3 14:05:32.020: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:32.026: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:32.920: INFO: Wrong image for pod: daemon-set-8p4qt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:32.920: INFO: Pod daemon-set-8p4qt is not available Sep 3 14:05:32.920: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:32.925: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:33.920: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:33.920: INFO: Pod daemon-set-knvtv is not available Sep 3 14:05:33.925: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:34.920: INFO: Wrong image for pod: daemon-set-dwscs. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:34.920: INFO: Pod daemon-set-knvtv is not available Sep 3 14:05:34.926: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:35.920: INFO: Wrong image for pod: daemon-set-dwscs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 3 14:05:35.920: INFO: Pod daemon-set-dwscs is not available Sep 3 14:05:35.926: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:36.921: INFO: Pod daemon-set-7sxj8 is not available Sep 3 14:05:36.927: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Sep 3 14:05:36.932: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:36.935: INFO: Number of nodes with available pods: 1 Sep 3 14:05:36.935: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:05:37.942: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:05:37.946: INFO: Number of nodes with available pods: 2 Sep 3 14:05:37.946: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3755, will wait for the garbage collector to delete the pods Sep 3 14:05:38.024: INFO: Deleting DaemonSet.extensions daemon-set took: 8.427607ms Sep 3 14:05:38.524: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.126522ms Sep 3 14:05:43.728: INFO: Number of nodes with available pods: 0 Sep 3 14:05:43.728: INFO: Number of running nodes: 0, number of available pods: 0 Sep 3 14:05:43.731: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3755/daemonsets","resourceVersion":"1062955"},"items":null} Sep 3 14:05:43.734: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3755/pods","resourceVersion":"1062955"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:05:43.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3755" for this suite. 
• [SLOW TEST:18.948 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":13,"skipped":3580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:05:43.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 3 14:05:43.801: INFO: Waiting up to 1m0s for all nodes to be ready Sep 3 14:06:43.831: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:06:43.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Sep 3 14:06:45.895: INFO: found a healthy node: capi-kali-md-0-76b6798f7f-5n8xl [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 14:06:52.039: INFO: pods created so far: [1 1 1] Sep 3 14:06:52.039: INFO: length of pods created so far: 3 Sep 3 14:06:56.049: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:07:03.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5409" for this suite. 
[AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:07:03.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6942" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:79.391 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":14,"skipped":3729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:07:03.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 3 14:07:03.207: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Sep 3 14:07:03.215: INFO: Number of nodes with available pods: 0 Sep 3 14:07:03.215: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Sep 3 14:07:03.237: INFO: Number of nodes with available pods: 0 Sep 3 14:07:03.237: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:04.242: INFO: Number of nodes with available pods: 0 Sep 3 14:07:04.242: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:05.242: INFO: Number of nodes with available pods: 1 Sep 3 14:07:05.242: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 3 14:07:05.263: INFO: Number of nodes with available pods: 1 Sep 3 14:07:05.263: INFO: Number of running nodes: 0, number of available pods: 1 Sep 3 14:07:06.268: INFO: Number of nodes with available pods: 0 Sep 3 14:07:06.268: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 3 14:07:06.279: INFO: Number of nodes with available pods: 0 Sep 3 14:07:06.279: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:07.284: INFO: Number of nodes with available pods: 0 Sep 3 14:07:07.284: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:08.318: INFO: Number of nodes with available pods: 0 Sep 3 14:07:08.318: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:09.283: INFO: Number of nodes with available pods: 0 Sep 3 14:07:09.283: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:10.283: INFO: Number of nodes with available pods: 0 Sep 3 14:07:10.283: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:11.284: INFO: Number of nodes with available pods: 0 Sep 3 14:07:11.285: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:12.284: INFO: Number of nodes with available pods: 0 Sep 3 14:07:12.284: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:13.284: INFO: Number of nodes with available pods: 0 Sep 3 14:07:13.284: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:14.284: INFO: Number of nodes with available pods: 0 Sep 3 14:07:14.284: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:15.284: INFO: Number of nodes with available pods: 1 Sep 3 14:07:15.284: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1437, will wait for the garbage collector to delete the pods Sep 3 14:07:15.351: INFO: Deleting DaemonSet.extensions daemon-set took: 7.243075ms Sep 3 14:07:15.852: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.266448ms Sep 3 14:07:23.755: INFO: Number of nodes with available pods: 0 Sep 3 14:07:23.755: INFO: Number of running nodes: 0, number of available pods: 0 Sep 3 14:07:23.758: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1437/daemonsets","resourceVersion":"1063454"},"items":null} Sep 3 14:07:23.761: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1437/pods","resourceVersion":"1063454"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:07:23.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1437" for this suite. • [SLOW TEST:20.634 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":15,"skipped":4286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:07:23.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Sep 3 14:07:23.863: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:23.866: INFO: Number of nodes with available pods: 0 Sep 3 14:07:23.866: INFO: Node capi-kali-md-0-76b6798f7f-5n8xl is running more than one daemon pod Sep 3 14:07:24.871: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:24.876: INFO: Number of nodes with available pods: 1 Sep 3 14:07:24.876: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:25.872: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:25.876: INFO: Number of nodes with available pods: 2 Sep 3 14:07:25.876: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Sep 3 14:07:25.894: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:25.898: INFO: Number of nodes with available pods: 1 Sep 3 14:07:25.898: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:26.904: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:26.908: INFO: Number of nodes with available pods: 1 Sep 3 14:07:26.908: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:27.905: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:27.909: INFO: Number of nodes with available pods: 1 Sep 3 14:07:27.909: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:28.923: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:28.926: INFO: Number of nodes with available pods: 1 Sep 3 14:07:28.927: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:29.905: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:29.909: INFO: Number of nodes with available pods: 1 Sep 3 14:07:29.909: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:30.905: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:30.909: INFO: Number of nodes with available pods: 1 Sep 3 14:07:30.909: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:31.904: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
Sep 3 14:07:31.908: INFO: Number of nodes with available pods: 1 Sep 3 14:07:31.908: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:32.904: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:32.908: INFO: Number of nodes with available pods: 1 Sep 3 14:07:32.908: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:33.904: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:33.908: INFO: Number of nodes with available pods: 1 Sep 3 14:07:33.908: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:34.904: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:34.908: INFO: Number of nodes with available pods: 1 Sep 3 14:07:34.908: INFO: Node capi-kali-md-0-76b6798f7f-7jvhm is running more than one daemon pod Sep 3 14:07:35.905: INFO: DaemonSet pods can't tolerate node capi-kali-control-plane-ltrkf with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 3 14:07:35.909: INFO: Number of nodes with available pods: 2 Sep 3 14:07:35.909: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7957, will wait for the garbage collector to delete the pods Sep 3 14:07:35.973: INFO: Deleting DaemonSet.extensions daemon-set took: 7.118034ms Sep 3 14:07:36.473: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.184094ms Sep 3 14:07:43.777: INFO: Number of nodes with available pods: 0 Sep 3 14:07:43.777: INFO: Number of running nodes: 0, number of available pods: 0 Sep 3 14:07:43.780: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7957/daemonsets","resourceVersion":"1063577"},"items":null} Sep 3 14:07:43.783: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7957/pods","resourceVersion":"1063577"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:07:43.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7957" for this suite. 
• [SLOW TEST:20.004 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":16,"skipped":4856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:07:43.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:07:43.844: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:07:43.854: INFO: Waiting for terminating namespaces to be deleted... 
Sep 3 14:07:43.856: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:07:43.863: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.863: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:07:43.863: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.863: INFO: Container coredns ready: true, restart count 0 Sep 3 14:07:43.863: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.863: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:07:43.863: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.863: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:07:43.863: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.863: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:07:43.863: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.863: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:07:43.863: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.863: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:07:43.863: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:07:43.870: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.870: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 14:07:43.870: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.870: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:07:43.870: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.871: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:07:43.871: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.871: INFO: Container coredns ready: true, restart count 0 Sep 3 14:07:43.871: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.871: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:07:43.871: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.871: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:07:43.871: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.871: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:07:43.871: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:07:43.871: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:07:43.871: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container 
statuses recorded) Sep 3 14:07:43.871: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-50750310-73dd-4777-8b5a-a5cc1123bcf6 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-50750310-73dd-4777-8b5a-a5cc1123bcf6 off the node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node doesn't have the label kubernetes.io/e2e-50750310-73dd-4777-8b5a-a5cc1123bcf6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:12:47.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1437" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:304.177 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":17,"skipped":5366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSep 3 14:12:47.992: INFO: Running AfterSuite actions on all nodes Sep 3 14:12:47.992: INFO: Running AfterSuite actions on node 1 Sep 3 14:12:47.992: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5467,"failed":0} Ran 17 of 5484 Specs in 748.017 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5467 Skipped PASS Ginkgo ran 1 suite in 12m29.714465483s Test Suite Passed
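
For readers reproducing the DaemonSet specs above outside the e2e framework, the objects involved are ordinary apps/v1 DaemonSets with a RollingUpdate strategy. The manifest below is an illustrative sketch, not something emitted by this run; the name and httpd image mirror the log, while the label key and maxUnavailable value are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # label key is illustrative, not taken from the log
  updateStrategy:
    type: RollingUpdate          # strategy exercised by the RollingUpdate and rollback specs
    rollingUpdate:
      maxUnavailable: 1          # assumed value; the test only requires a rolling strategy
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # image the rollback spec expects after the undo

The rollback path seen in the "should rollback without unnecessary restarts" spec corresponds to updating the image to a non-existent one (for example, kubectl set image daemonset/daemon-set app=foo:non-existent) and then reverting with kubectl rollout undo daemonset/daemon-set before the rollout completes, which matches the "Wrong image ... got: foo:non-existent" lines above.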
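
The final SchedulerPredicates spec verifies that two pods requesting the same hostPort and protocol conflict even when one binds 0.0.0.0 and the other 127.0.0.1. The sketch below shows the assumed shape of the two pods; only the port number and host IPs come from the log, and the image, pod names, and node-selector label are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "95"     # stand-in for the random label the test applies to the node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2         # placeholder image
    ports:
    - containerPort: 54322
      hostPort: 54322                   # hostIP omitted, i.e. 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "95"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 54322
      hostPort: 54322
      hostIP: 127.0.0.1                 # conflicts with pod4, so pod5 stays Pending

The scheduler treats a 0.0.0.0 hostIP as covering every host address, so the second pod cannot land on the node already running the first, which is the "expect not scheduled" outcome recorded above.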