I0413 12:26:49.786109 15 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0413 12:26:49.786333 15 e2e.go:129] Starting e2e run "612191dc-69b4-4536-bfef-d1825af9887c" on Ginkgo node 1
{"msg":"Test Suite starting","total":4,"completed":0,"skipped":0,"failed":0}

Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1618316808 - Will randomize all specs
Will run 4 of 5667 specs

Apr 13 12:26:49.858: INFO: >>> kubeConfig: /root/.kube/config
Apr 13 12:26:49.862: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 13 12:26:50.129: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 13 12:26:51.966: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:26:51.966: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:26:51.966: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Apr 13 12:26:51.966: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:26:51.966: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:26:51.966: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }]
Apr 13 12:26:51.966: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }]
Apr 13 12:26:51.966: INFO:
Apr 13 12:26:55.735: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:26:55.735: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (5 seconds elapsed)
Apr 13 12:26:55.735: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:26:55.735: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:26:55.735: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }]
Apr 13 12:26:55.735: INFO:
Apr 13 12:26:56.907: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:26:56.907: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Apr 13 12:26:56.907: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:26:56.907: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:26:56.907: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }]
Apr 13 12:26:56.907: INFO:
Apr 13 12:26:58.614: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:26:58.614: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
Apr 13 12:26:58.614: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:26:58.614: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:26:58.614: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }]
Apr 13 12:26:58.614: INFO:
Apr 13 12:27:00.709: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:27:00.709: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
Apr 13 12:27:00.709: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:27:00.709: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:27:00.709: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }]
Apr 13 12:27:00.709: INFO:
Apr 13 12:27:02.067: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:27:02.067: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (11 seconds elapsed)
Apr 13 12:27:02.067: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:27:02.067: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:27:02.067: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }]
Apr 13 12:27:02.067: INFO:
Apr 13 12:27:04.985: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
Apr 13 12:27:04.985: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
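The poll above repeats until every kube-system pod reports phase Running with a Ready condition of True; the kindnet pods hold it back until 12:27:04. A minimal sketch of the per-pod check being waited on, using the k8s.io/api/core/v1 types (the helper name isPodRunningAndReady is illustrative, not the framework's own helper):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodRunningAndReady reports the condition the log is polling for:
// phase Running plus a PodReady condition with status True.
func isPodRunningAndReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Shape of kindnet-d9q5l above: Running, but Ready=false because
	// the kindnet-cni container is not ready.
	pod := &corev1.Pod{}
	pod.Status.Phase = corev1.PodRunning
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}
	fmt.Println(isPodRunningAndReady(pod)) // false, so the wait loop keeps polling
}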
Apr 13 12:27:04.985: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 13 12:27:05.034: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 13 12:27:05.034: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 13 12:27:05.034: INFO: e2e test version: v1.20.5
Apr 13 12:27:05.035: INFO: kube-apiserver version: v1.20.2
Apr 13 12:27:05.035: INFO: >>> kubeConfig: /root/.kube/config
Apr 13 12:27:05.231: INFO: Cluster IP family: ipv4
SSSS... [run of skipped-spec markers truncated]
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  only evicts pods without tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 13 12:27:05.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-multiple-pods
Apr 13 12:27:06.847: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
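All four NoExecuteTaintManager specs in this run revolve around one NoExecute taint, whose key, value, and effect appear verbatim in the STEP lines below. As a sketch only, that taint corresponds to the following k8s.io/api/core/v1 value (the struct literal is illustrative; the spec builds it inside test/e2e/node/taints.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Key, value, and effect are copied from the STEP lines in this log;
	// the literal itself only illustrates the type involved.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-evict-taint-key",
		Value:  "evictTaintVal",
		Effect: corev1.TaintEffectNoExecute,
	}
	// Prints kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute,
	// the same string the "verifying the node has the taint" STEPs check.
	fmt.Printf("%s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
}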
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345
Apr 13 12:27:06.850: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 13 12:28:06.862: INFO: Waiting for terminating namespaces to be deleted...
[It] only evicts pods without tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358
Apr 13 12:28:06.910: INFO: Starting informer...
STEP: Starting pods...
Apr 13 12:28:08.184: INFO: Pod1 is running on leguer-worker2. Tainting Node
Apr 13 12:28:08.822: INFO: Pod2 is running on leguer-worker2. Tainting Node
STEP: Trying to apply a taint on the Nodes
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting for Pod1 to be deleted
Apr 13 12:28:17.574: INFO: Noticed Pod "taint-eviction-a1" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 13 12:29:15.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-8414" for this suite.

• [SLOW TEST:131.020 seconds]
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  only evicts pods without tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes","total":4,"completed":1,"skipped":1885,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  evicts pods from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 13 12:29:16.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164
Apr 13 12:29:18.804: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 13 12:30:18.819: INFO: Waiting for terminating namespaces to be deleted...
[It] evicts pods from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177
Apr 13 12:30:19.457: INFO: Starting informer...
STEP: Starting pod...
Apr 13 12:30:24.426: INFO: Pod is running on leguer-worker2. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting for Pod to be deleted
Apr 13 12:30:36.521: INFO: Noticed Pod eviction. Test successful
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 13 12:30:38.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-7090" for this suite.

• [SLOW TEST:82.398 seconds]
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  evicts pods from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes","total":4,"completed":2,"skipped":1903,"failed":0}
SSSS... [run of skipped-spec markers truncated]
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  doesn't evict pod with tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 13 12:30:38.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164
Apr 13 12:30:39.675: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 13 12:31:39.691: INFO: Waiting for terminating namespaces to be deleted...
[It] doesn't evict pod with tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209
Apr 13 12:31:40.040: INFO: Starting informer...
STEP: Starting pod...
Apr 13 12:31:41.862: INFO: Pod is running on leguer-worker2. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting for Pod to be deleted
Apr 13 12:32:47.871: INFO: Pod wasn't evicted. Test successful
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 13 12:32:48.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-4567" for this suite.

• [SLOW TEST:130.159 seconds]
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  doesn't evict pod with tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes","total":4,"completed":3,"skipped":2824,"failed":0}
SSSS... [run of skipped-spec markers truncated]
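The spec that just passed gives its pod a toleration matching the taint, so the taint manager leaves it alone indefinitely; the next spec instead sets a finite tolerationSeconds, after which eviction is expected. A hedged sketch of the two toleration shapes, using the k8s.io/api/core/v1 types (field values other than the key/value/effect seen in this log are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Unbounded toleration: while it matches the NoExecute taint, the pod
	// is never evicted (the behavior the spec above just verified).
	unbounded := corev1.Toleration{
		Key:      "kubernetes.io/e2e-evict-taint-key",
		Operator: corev1.TolerationOpEqual,
		Value:    "evictTaintVal",
		Effect:   corev1.TaintEffectNoExecute,
	}

	// Finite toleration: once TolerationSeconds elapse with the taint still
	// present, the taint manager evicts the pod (the behavior the next spec
	// exercises). The 60 here is illustrative, not the spec's actual value.
	seconds := int64(60)
	finite := unbounded
	finite.TolerationSeconds = &seconds

	fmt.Println("unbounded:", unbounded.TolerationSeconds == nil)
	fmt.Println("finite:", *finite.TolerationSeconds, "seconds")
}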
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  eventually evict pod with finite tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 13 12:32:48.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164
Apr 13 12:32:51.800: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 13 12:33:51.811: INFO: Waiting for terminating namespaces to be deleted...
[It] eventually evict pod with finite tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242
Apr 13 12:33:52.043: INFO: Starting informer...
STEP: Starting pod...
Apr 13 12:33:53.062: INFO: Pod is running on leguer-worker2. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting to see if a Pod won't be deleted
Apr 13 12:34:58.830: INFO: Pod wasn't evicted
STEP: Waiting for Pod to be deleted
Apr 13 12:36:03.830: FAIL: Pod wasn't evicted

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0037fa180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0037fa180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0037fa180, 0x4fc2a88)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
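The FAIL means the spec never observed the pod's deletion before its deadline: the taint was applied around 12:33:53, the intermediate "Pod wasn't evicted" at 12:34:58 was the expected state inside the toleration window, and the event dump below shows the taint-controller marking taint-single-pod-1272/taint-eviction-3 for deletion at 12:35:03, yet the pod is still listed as Pending with a 30s grace period a minute later. For reference, a rough client-go approximation of the "Waiting for Pod to be deleted" check (the real spec watches via the informer started above; the helper shape, poll interval, and timeout here are illustrative assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Keep polling until a Get for the pod returns NotFound, i.e. the pod
	// named in the failure above has actually been deleted. The 2-minute
	// timeout is illustrative; the spec has its own deadline.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, getErr := client.CoreV1().Pods("taint-single-pod-1272").Get(
			context.TODO(), "taint-eviction-3", metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			return true, nil // evicted: the outcome this spec never observed
		}
		return false, getErr // nil while the pod still exists, so keep waiting
	})
	fmt.Println("pod evicted:", err == nil)
}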
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "taint-single-pod-1272".
STEP: Found 4 events.
Apr 13 12:36:03.975: INFO: At 2021-04-13 12:33:52 +0000 UTC - event for taint-eviction-3: {default-scheduler } Scheduled: Successfully assigned taint-single-pod-1272/taint-eviction-3 to leguer-worker2
Apr 13 12:36:03.976: INFO: At 2021-04-13 12:35:03 +0000 UTC - event for taint-eviction-3: {taint-controller } TaintManagerEviction: Marking for deletion Pod taint-single-pod-1272/taint-eviction-3
Apr 13 12:36:03.976: INFO: At 2021-04-13 12:35:05 +0000 UTC - event for taint-eviction-3: {kubelet leguer-worker2} Pulled: Container image "k8s.gcr.io/pause:3.2" already present on machine
Apr 13 12:36:03.976: INFO: At 2021-04-13 12:35:05 +0000 UTC - event for taint-eviction-3: {kubelet leguer-worker2} Failed: Error: cannot find volume "default-token-6v96b" to mount into container "pause"
Apr 13 12:36:04.048: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:36:04.048: INFO: taint-eviction-3 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:33:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [pause]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [pause]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:33:52 +0000 UTC }]
Apr 13 12:36:04.048: INFO:
Apr 13 12:36:04.156: INFO: Logging node info for node leguer-control-plane
Apr 13 12:36:04.228: INFO: Node Info: &Node{ObjectMeta:{leguer-control-plane 3670b09d-f7a8-4b72-a949-8919adfef587 95884 0 2021-04-13 08:13:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24 fd00:10:244::/64],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922059776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922059776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-04-13 12:32:19 +0000 UTC,LastTransitionTime:2021-04-13 08:13:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-04-13 12:32:19 +0000 UTC,LastTransitionTime:2021-04-13 08:13:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-04-13 12:32:19 +0000 UTC,LastTransitionTime:2021-04-13 08:13:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-04-13 12:32:19 +0000 UTC,LastTransitionTime:2021-04-13 08:13:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:InternalIP,Address:fc00:f853:ccd:e793::d,},NodeAddress{Type:Hostname,Address:leguer-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:74705ab9669d4284b794db774db44f8a,SystemUUID:081f2064-98ee-4ed2-af6e-2422440801c4,BootID:dc0058b1-aa97-45b0-baf9-d3a69a0326a3,KernelVersion:4.15.0-141-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.0-beta.3-24-g95513021e,KubeletVersion:v1.20.2,KubeProxyVersion:v1.20.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.2],SizeBytes:122890541,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210220-5b7e6d01],SizeBytes:121784635,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.2],SizeBytes:120344944,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.2],SizeBytes:117070143,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.2],SizeBytes:47614252,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:685714,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 13 12:36:04.229: INFO: Logging kubelet events for node leguer-control-plane
Apr 13 12:36:04.314: INFO: Logging pods the kubelet thinks is on node leguer-control-plane
Apr 13 12:36:04.408: INFO: kube-apiserver-leguer-control-plane started at 2021-04-13 08:13:36 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.408: INFO: Container kube-apiserver ready: true, restart count 0
Apr 13 12:36:04.408: INFO: kube-controller-manager-leguer-control-plane started at 2021-04-13 08:13:36 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.408: INFO: Container kube-controller-manager ready: true, restart count 0
Apr 13 12:36:04.408: INFO: kube-proxy-q6zg2 started at 2021-04-13 08:13:43 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.408: INFO: Container kube-proxy ready: true, restart count 0
Apr 13 12:36:04.408: INFO: kube-scheduler-leguer-control-plane started at 2021-04-13 08:13:36 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.408: INFO: Container kube-scheduler ready: true, restart count 0
Apr 13 12:36:04.408: INFO: etcd-leguer-control-plane started at 2021-04-13 08:13:36 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.408: INFO: Container etcd ready: true, restart count 0
Apr 13 12:36:04.408: INFO: kindnet-dnnlq started at 2021-04-13 08:13:43 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.408: INFO: Container kindnet-cni ready: true, restart count 35
Apr 13 12:36:04.408: INFO: local-path-provisioner-78776bfc44-8zb8r started at 2021-04-13 08:14:02 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.408: INFO: Container local-path-provisioner ready: true, restart count 0
W0413 12:36:04.476502 15 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Apr 13 12:36:04.718: INFO: Latency metrics for node leguer-control-plane
Apr 13 12:36:04.718: INFO: Logging node info for node leguer-worker
Apr 13 12:36:04.738: INFO: Node Info: &Node{ObjectMeta:{leguer-worker 5899105f-bf07-4d32-b34e-1830270b0ac6 95700 0 2021-04-13 08:13:54 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24 fd00:10:244:1::/64],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922059776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922059776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-04-13 12:31:39 +0000 UTC,LastTransitionTime:2021-04-13 08:13:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-04-13 12:31:39 +0000 UTC,LastTransitionTime:2021-04-13 08:13:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-04-13 12:31:39 +0000 UTC,LastTransitionTime:2021-04-13 08:13:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-04-13 12:31:39 +0000 UTC,LastTransitionTime:2021-04-13 08:14:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:InternalIP,Address:fc00:f853:ccd:e793::e,},NodeAddress{Type:Hostname,Address:leguer-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6b5cc8378dc7413caa9fa7871d5f97c6,SystemUUID:0f47f62c-ed39-48d7-be3c-fc1aee5ad071,BootID:dc0058b1-aa97-45b0-baf9-d3a69a0326a3,KernelVersion:4.15.0-141-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.0-beta.3-24-g95513021e,KubeletVersion:v1.20.2,KubeProxyVersion:v1.20.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.2],SizeBytes:122890541,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210220-5b7e6d01],SizeBytes:121784635,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.2],SizeBytes:120344944,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.2],SizeBytes:117070143,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.2],SizeBytes:47614252,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:8888823,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:685714,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 13 12:36:04.739: INFO: Logging kubelet events for node leguer-worker
Apr 13 12:36:04.850: INFO: Logging pods the kubelet thinks is on node leguer-worker
Apr 13 12:36:04.996: INFO: kindnet-d9q5l started at 2021-04-13 11:30:05 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.996: INFO: Container kindnet-cni ready: true, restart count 11
Apr 13 12:36:04.996: INFO: coredns-74ff55c5b-qrssg started at 2021-04-13 12:17:42 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.996: INFO: Container coredns ready: true, restart count 0
Apr 13 12:36:04.996: INFO: kube-proxy-srl76 started at 2021-04-13 08:13:55 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.996: INFO: Container kube-proxy ready: true, restart count 0
Apr 13 12:36:04.996: INFO: coredns-74ff55c5b-g6j98 started at 2021-04-13 12:17:42 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:04.996: INFO: Container coredns ready: true, restart count 0
W0413 12:36:05.042167 15 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Apr 13 12:36:05.186: INFO: Latency metrics for node leguer-worker
Apr 13 12:36:05.186: INFO: Logging node info for node leguer-worker2
Apr 13 12:36:05.207: INFO: Node Info: &Node{ObjectMeta:{leguer-worker2 b128357c-7817-4304-9d59-78ab8faa1cf3 98496 0 2021-04-13 08:13:54 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24 fd00:10:244:2::/64],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922059776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922059776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-04-13 12:33:39 +0000 UTC,LastTransitionTime:2021-04-13 08:13:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-04-13 12:33:39 +0000 UTC,LastTransitionTime:2021-04-13 08:13:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-04-13 12:33:39 +0000 UTC,LastTransitionTime:2021-04-13 08:13:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-04-13 12:33:39 +0000 UTC,LastTransitionTime:2021-04-13 08:14:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:InternalIP,Address:fc00:f853:ccd:e793::c,},NodeAddress{Type:Hostname,Address:leguer-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e94b41d0ce1e434fb5f7bc1ba9bb441a,SystemUUID:2a426c9c-c4c0-4526-a98c-189a15b3af7e,BootID:dc0058b1-aa97-45b0-baf9-d3a69a0326a3,KernelVersion:4.15.0-141-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.0-beta.3-24-g95513021e,KubeletVersion:v1.20.2,KubeProxyVersion:v1.20.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.2],SizeBytes:122890541,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210220-5b7e6d01],SizeBytes:121784635,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.2],SizeBytes:120344944,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.2],SizeBytes:117070143,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.2],SizeBytes:47614252,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:17747507,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:685714,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 13 12:36:05.207: INFO: Logging kubelet events for node leguer-worker2
Apr 13 12:36:05.292: INFO: Logging pods the kubelet thinks is on node leguer-worker2
Apr 13 12:36:05.351: INFO: kube-proxy-kfr76 started at 2021-04-13 08:13:55 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:05.351: INFO: Container kube-proxy ready: true, restart count 0
Apr 13 12:36:05.351: INFO: kindnet-nlczk started at 2021-04-13 12:36:04 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:05.351: INFO: Container kindnet-cni ready: false, restart count 0
Apr 13 12:36:05.351: INFO: taint-eviction-3 started at 2021-04-13 12:33:52 +0000 UTC (0+1 container statuses recorded)
Apr 13 12:36:05.351: INFO: Container pause ready: false, restart count 0
W0413 12:36:05.453392 15 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Apr 13 12:36:06.139: INFO: Latency metrics for node leguer-worker2
Apr 13 12:36:06.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-1272" for this suite.

• Failure [197.575 seconds]
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  eventually evict pod with finite tolerations from tainted nodes [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242

  Apr 13 12:36:03.830: Pod wasn't evicted

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","total":4,"completed":3,"skipped":5378,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]}
SSSS... [run of skipped-spec markers truncated]
Apr 13 12:36:06.430: INFO: Running AfterSuite actions on all nodes
Apr 13 12:36:06.430: INFO: Running AfterSuite actions on node 1
Apr 13 12:36:06.430: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_node_serial/junit_01.xml
{"msg":"Test Suite completed","total":4,"completed":3,"skipped":5663,"failed":1,"failures":["[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]}

Summarizing 1 Failure:

[Fail] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] [It] eventually evict pod with finite tolerations from tainted nodes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 4 of 5667 Specs in 556.576 seconds
FAIL! -- 3 Passed | 1 Failed | 0 Pending | 5663 Skipped
--- FAIL: TestE2E (556.69s)
FAIL

Ginkgo ran 1 suite in 9m18.250840736s
Test Suite Failed