I0413 09:57:51.424811      15 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0413 09:57:51.424909      15 e2e.go:129] Starting e2e run "4566acd6-9b05-4e4e-a6a7-f083163c33ae" on Ginkgo node 1
{"msg":"Test Suite starting","total":18,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1618307870 - Will randomize all specs
Will run 18 of 5667 specs

Apr 13 09:57:51.499: INFO: >>> kubeConfig: /root/.kube/config
Apr 13 09:57:51.504: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 13 09:57:51.524: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 13 09:57:51.559: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:57:51.559: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:57:51.559: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:57:51.559: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 13 09:57:51.559: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 09:57:51.559: INFO: POD            NODE                  PHASE    GRACE  CONDITIONS
Apr 13 09:57:51.559: INFO: kindnet-67q65  leguer-worker2        Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC  }]
Apr 13 09:57:51.560: INFO: kindnet-dnnlq  leguer-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC  }]
Apr 13 09:57:51.560: INFO: kindnet-hzqnl  leguer-worker         Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC  }]
Apr 13 09:57:51.560: INFO:

[Identical poll iterations at 2-second intervals from 09:57:53 through 09:58:43 elided: each repeated the three "Running (Ready = false)" status lines, the "9 / 12 pods in namespace 'kube-system' are running and ready" count, the "expected 2 pod replicas" line, and the POD/NODE/PHASE/GRACE/CONDITIONS table above verbatim, with only the log timestamps and the elapsed-seconds counter advancing.]

Apr 13 09:58:45.580: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:58:45.580: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:58:45.580: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:58:45.580: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (54 seconds elapsed)
Apr 13 09:58:45.580: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 09:58:45.580: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:45.580: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:45.580: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:45.580: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:45.580: INFO: Apr 13 09:58:47.583: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:47.583: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:47.583: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:47.583: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (56 seconds elapsed) Apr 13 09:58:47.583: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:58:47.583: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:47.583: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:47.583: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:47.583: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:47.583: INFO: Apr 13 09:58:49.579: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:49.579: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:49.579: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:49.579: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (58 seconds elapsed) Apr 13 09:58:49.579: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:58:49.579: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:49.579: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:49.579: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:49.579: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:49.579: INFO: Apr 13 09:58:51.584: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:51.584: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:51.584: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:51.584: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (60 seconds elapsed) Apr 13 09:58:51.584: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:58:51.584: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:51.584: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:51.584: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:51.584: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:51.584: INFO: Apr 13 09:58:53.582: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:53.583: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:53.583: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:53.583: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (62 seconds elapsed) Apr 13 09:58:53.583: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:58:53.583: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:53.583: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:53.583: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:53.583: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:53.583: INFO: Apr 13 09:58:55.578: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:55.578: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:55.578: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:55.578: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (64 seconds elapsed) Apr 13 09:58:55.578: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:58:55.578: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:55.578: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:55.578: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:55.578: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:55.578: INFO: Apr 13 09:58:57.582: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:57.582: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:57.582: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:57.582: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (66 seconds elapsed) Apr 13 09:58:57.582: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:58:57.582: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:57.582: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:57.582: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:57.582: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:57.582: INFO: Apr 13 09:58:59.579: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:59.579: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:59.579: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:58:59.579: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (68 seconds elapsed) Apr 13 09:58:59.579: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:58:59.579: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:58:59.579: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:59.579: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:58:59.580: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:58:59.580: INFO: Apr 13 09:59:01.578: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:01.578: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:01.578: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:01.578: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (70 seconds elapsed) Apr 13 09:59:01.578: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:59:01.578: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:59:01.578: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:59:01.578: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:59:01.578: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:59:01.578: INFO: Apr 13 09:59:03.578: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:03.578: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:03.578: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:03.578: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (72 seconds elapsed) Apr 13 09:59:03.578: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:59:03.578: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:59:03.578: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:59:03.578: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:59:03.578: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:59:03.578: INFO: Apr 13 09:59:05.580: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:05.580: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:05.580: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:05.580: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (74 seconds elapsed) Apr 13 09:59:05.580: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:59:05.580: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 09:59:05.580: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:59:05.580: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:02 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 09:59:05.580: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 09:59:05.580: INFO: Apr 13 09:59:07.581: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:07.581: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:07.581: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 09:59:07.581: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (76 seconds elapsed) Apr 13 09:59:07.581: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 09:59:09.579: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:59:09.579: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:59:09.579: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (78 seconds elapsed)
Apr 13 09:59:09.579: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 09:59:17.582: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:59:17.583: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:59:17.583: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (86 seconds elapsed)
Apr 13 09:59:17.583: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 09:59:17.583: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 09:59:17.583: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }]
Apr 13 09:59:17.583: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:13 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }]
Apr 13 09:59:17.583: INFO:
Apr 13 09:59:21.582: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 09:59:21.582: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (90 seconds elapsed)
Apr 13 09:59:21.582: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 09:59:21.582: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 09:59:21.582: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }]
Apr 13 09:59:21.582: INFO:
Apr 13 10:00:03.845: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 10:00:03.845: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (132 seconds elapsed)
Apr 13 10:00:03.845: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 10:00:03.845: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 10:00:03.845: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 09:54:52 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 10:00:03.845: INFO: Apr 13 10:00:05.587: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (134 seconds elapsed) Apr 13 10:00:05.587: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 10:00:05.587: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 13 10:00:05.601: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 13 10:00:05.601: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 13 10:00:05.601: INFO: e2e test version: v1.20.5 Apr 13 10:00:05.603: INFO: kube-apiserver version: v1.20.2 Apr 13 10:00:05.603: INFO: >>> kubeConfig: /root/.kube/config Apr 13 10:00:05.622: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:00:05.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets Apr 13 10:00:05.741: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
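Before the suite's first spec, the framework polled kube-system every two seconds until all 12 pods reported Ready, as logged above. A rough client-go equivalent of that readiness poll (a sketch, not the framework's implementation; the kubeconfig path is the one printed in the log, the timeout is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s (the cadence visible in the log) until every pod in
	// kube-system reports the PodReady condition, or 10 minutes pass.
	err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		ready := 0
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready++
					break
				}
			}
		}
		fmt.Printf("%d / %d pods in 'kube-system' are ready\n", ready, len(pods.Items))
		return ready == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}

The run above took 134 seconds to clear this gate because the three kindnet pods were crash-looping (restart count 16 appears later in the log).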
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Apr 13 10:00:05.764: INFO: Create a RollingUpdate DaemonSet Apr 13 10:00:05.769: INFO: Check that daemon pods launch on every node of the cluster Apr 13 10:00:05.775: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:05.789: INFO: Number of nodes with available pods: 0 Apr 13 10:00:05.789: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:00:06.794: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:06.799: INFO: Number of nodes with available pods: 0 Apr 13 10:00:06.799: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:00:07.795: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:07.799: INFO: Number of nodes with available pods: 0 Apr 13 10:00:07.799: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:00:08.793: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:08.843: INFO: Number of nodes with available pods: 1 Apr 13 10:00:08.843: INFO: Node leguer-worker2 is running more than one daemon pod Apr 13 10:00:09.795: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:09.798: INFO: Number of nodes with available pods: 2 Apr 13 10:00:09.798: INFO: Number of running nodes: 2, number of available pods: 2 Apr 13 10:00:09.799: INFO: Update the DaemonSet to trigger a rollout Apr 13 10:00:09.808: INFO: Updating DaemonSet daemon-set Apr 13 10:00:15.830: INFO: Roll back the DaemonSet before rollout is complete Apr 13 10:00:15.837: INFO: Updating DaemonSet daemon-set Apr 13 10:00:15.837: INFO: Make sure DaemonSet rollback is complete Apr 13 10:00:15.897: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:15.897: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:15.901: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:16.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:16.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:16.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:17.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 13 10:00:17.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:17.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:18.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:18.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:18.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:19.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:19.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:19.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:20.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:20.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:20.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:21.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:21.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:21.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:22.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:22.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:22.916: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:23.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:23.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:23.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:24.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:24.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:24.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:25.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:25.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:25.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:26.906: INFO: Wrong image for pod: daemon-set-tfgtl. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:26.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:26.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:27.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:27.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:27.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:28.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:28.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:28.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:29.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:29.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:29.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:30.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:30.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:30.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:31.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:31.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:31.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:32.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:32.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:32.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:33.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:33.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:33.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:34.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 13 10:00:34.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:34.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:35.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:35.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:35.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:36.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:36.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:36.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:37.908: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:37.908: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:37.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:38.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:38.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:38.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:39.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:39.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:39.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:40.905: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:40.905: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:40.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:41.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:41.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:41.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:42.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:42.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:42.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:43.906: INFO: Wrong image for pod: daemon-set-tfgtl. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:43.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:43.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:44.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:44.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:44.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:45.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:45.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:45.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:46.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:46.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:46.912: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:47.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:47.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:47.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:48.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:48.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:48.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:49.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:49.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:49.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:50.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:50.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:50.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:51.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 13 10:00:51.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:51.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:52.909: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:52.909: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:52.913: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:53.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:53.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:53.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:54.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:54.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:54.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:55.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:55.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:55.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:56.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:56.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:56.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:57.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:57.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:57.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:58.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:58.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:58.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:00:59.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:00:59.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:00:59.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:01:00.907: INFO: Wrong image for pod: daemon-set-tfgtl. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:01:00.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:01:00.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:01:01.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:01:01.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:01:01.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:01:02.906: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:01:02.906: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:01:02.910: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:01:03.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:01:03.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:01:03.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:01:04.907: INFO: Wrong image for pod: daemon-set-tfgtl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 13 10:01:04.907: INFO: Pod daemon-set-tfgtl is not available Apr 13 10:01:04.911: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:01:05.906: INFO: Pod daemon-set-6cwp2 is not available Apr 13 10:01:05.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8641, will wait for the garbage collector to delete the pods Apr 13 10:01:05.975: INFO: Deleting DaemonSet.extensions daemon-set took: 6.940754ms Apr 13 10:01:06.475: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.25655ms Apr 13 10:01:55.290: INFO: Number of nodes with available pods: 0 Apr 13 10:01:55.290: INFO: Number of running nodes: 0, number of available pods: 0 Apr 13 10:01:55.296: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"28012"},"items":null} Apr 13 10:01:55.299: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"28012"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:01:55.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8641" for this suite. 
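The rollback spec above boils down to two writes against the same DaemonSet: point the pod template at an image that can never pull (foo:non-existent), then restore the previous image before the rollout finishes, and assert that already-healthy pods were not restarted. A minimal client-go sketch of that flow (the namespace daemonsets-8641 and both images are from this run; the conflict-retry helper is standard k8s.io/client-go/util/retry, and this is not the e2e framework's own code):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func setImage(cs *kubernetes.Clientset, ns, name, image string) error {
	// RetryOnConflict re-reads the object if a concurrent write bumped
	// its resourceVersion between our Get and Update.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "daemonsets-8641" // the test generates a fresh namespace per run

	// Trigger a rollout with an image that can never pull...
	if err := setImage(cs, ns, "daemon-set", "foo:non-existent"); err != nil {
		panic(err)
	}
	// ...then roll back to the working image before the rollout completes.
	if err := setImage(cs, ns, "daemon-set", "docker.io/library/httpd:2.4.38-alpine"); err != nil {
		panic(err)
	}
}

The long "Wrong image for pod: daemon-set-tfgtl" stretch in the log is the poll between those two writes: the stuck pod keeps its broken image until the controller replaces it with daemon-set-6cwp2 at 10:01:05.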
• [SLOW TEST:109.683 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":18,"completed":1,"skipped":1062,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:01:55.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 13 10:01:55.428: INFO: Waiting up to 1m0s for all nodes to be ready Apr 13 10:02:55.449: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Apr 13 10:02:55.525: INFO: Created pod: pod0-sched-preemption-low-priority Apr 13 10:02:55.555: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:04:09.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9958" for this suite. 
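The preemption spec that just ran fills roughly 2/3 of node resources with low- and medium-priority pods and then submits a critical pod needing the same room, expecting the scheduler to evict the low-priority victim rather than leave the critical pod Pending. A hedged sketch of the victim side only (priority-class name and value, namespace, pod size, and image are illustrative, not taken from the run):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A low PriorityClass for the victim pod; name and value are made up.
	_, err = cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "low-priority"},
		Value:       100,
		Description: "victim priority for a preemption demo",
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// A pod sized so that a later, higher-priority pod cannot fit beside it.
	_, err = cs.CoreV1().Pods("default").Create(ctx, &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "victim-low-priority"},
		Spec: corev1.PodSpec{
			PriorityClassName: "low-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("2"),
					},
				},
			}},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// A second pod with PriorityClassName "system-cluster-critical" (built in)
	// and the same CPU request would then preempt the victim instead of pending.
}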
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:134.384 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":18,"completed":2,"skipped":1067,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:04:09.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:04:10.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1143" for this suite. STEP: Destroying namespace "nspatchtest-3138a130-484b-4704-85f9-d5986a5da7e1-1501" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":18,"completed":3,"skipped":1250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:04:10.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
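The Namespaces patch spec above ("patching the Namespace", then verifying the label) is a single API call; a small sketch, assuming a strategic-merge patch against an existing namespace, with an illustrative label key and value:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch that adds one label; key and value are made up.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(), "default",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels after patch:", ns.Labels)
}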
Apr 13 10:04:10.393: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:04:10.398: INFO: Number of nodes with available pods: 0 Apr 13 10:04:10.398: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:04:11.403: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:04:11.406: INFO: Number of nodes with available pods: 0 Apr 13 10:04:11.406: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:04:12.404: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:04:12.408: INFO: Number of nodes with available pods: 0 Apr 13 10:04:12.408: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:04:13.458: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:04:13.463: INFO: Number of nodes with available pods: 1 Apr 13 10:04:13.463: INFO: Node leguer-worker2 is running more than one daemon pod Apr 13 10:04:14.404: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:04:14.408: INFO: Number of nodes with available pods: 2 Apr 13 10:04:14.408: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 13 10:04:14.444: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:04:14.456: INFO: Number of nodes with available pods: 2 Apr 13 10:04:14.456: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3608, will wait for the garbage collector to delete the pods Apr 13 10:04:15.706: INFO: Deleting DaemonSet.extensions daemon-set took: 86.902143ms Apr 13 10:04:16.407: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.228937ms Apr 13 10:04:55.320: INFO: Number of nodes with available pods: 0 Apr 13 10:04:55.320: INFO: Number of running nodes: 0, number of available pods: 0 Apr 13 10:04:55.323: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"28555"},"items":null} Apr 13 10:04:55.325: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"28555"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:04:55.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3608" for this suite. 
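In the spec above, "Set a daemon pod's phase to 'Failed'" is done through the pod's status subresource; the DaemonSet controller then deletes the failed pod and creates a replacement, which is what the "revived" check waits for. A sketch of that poke (the namespace daemonsets-3608 is from this run, but the label selector is an assumed stand-in for the test's generated labels):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Find one pod owned by the DaemonSet.
	pods, err := cs.CoreV1().Pods("daemonsets-3608").List(ctx, metav1.ListOptions{
		LabelSelector: "daemonset-name=daemon-set", // assumed selector
	})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		panic("no daemon pods found")
	}

	// Write Failed through the status subresource; the DaemonSet controller
	// notices, deletes the pod, and schedules a replacement on the same node.
	pod := pods.Items[0]
	pod.Status.Phase = corev1.PodFailed
	if _, err := cs.CoreV1().Pods(pod.Namespace).UpdateStatus(ctx, &pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}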
• [SLOW TEST:45.088 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":18,"completed":4,"skipped":1308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:04:55.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Apr 13 10:04:55.423: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 13 10:04:55.452: INFO: Waiting for terminating namespaces to be deleted... Apr 13 10:04:55.455: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Apr 13 10:04:55.460: INFO: kindnet-hzqnl from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:04:55.460: INFO: Container kindnet-cni ready: false, restart count 16 Apr 13 10:04:55.460: INFO: kube-proxy-srl76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:04:55.460: INFO: Container kube-proxy ready: true, restart count 0 Apr 13 10:04:55.460: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Apr 13 10:04:55.464: INFO: kindnet-67q65 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:04:55.464: INFO: Container kindnet-cni ready: false, restart count 16 Apr 13 10:04:55.464: INFO: kube-proxy-kfr76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:04:55.464: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e64afe1e-5a9d-4508-bf31-d4b6f4ae0146 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-e64afe1e-5a9d-4508-bf31-d4b6f4ae0146 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e64afe1e-5a9d-4508-bf31-d4b6f4ae0146 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:05:03.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4811" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:8.305 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":18,"completed":5,"skipped":1447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:05:03.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
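The NodeSelector spec above labels whichever node accepted an unconstrained probe pod, then relaunches the pod with a matching nodeSelector so it can only land there. A sketch of those two steps (the node name leguer-worker and the value 42 appear in this run's log; the label key is shortened and illustrative, as is the pause image):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Apply a label to the chosen node via a strategic-merge patch.
	patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-demo":"42"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "leguer-worker",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// A pod with a matching nodeSelector can only schedule onto that node.
	_, err = cs.CoreV1().Pods("default").Create(ctx, &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}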
Apr 13 10:05:03.813: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:03.832: INFO: Number of nodes with available pods: 0 Apr 13 10:05:03.832: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:04.837: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:04.841: INFO: Number of nodes with available pods: 0 Apr 13 10:05:04.841: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:05.837: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:05.841: INFO: Number of nodes with available pods: 0 Apr 13 10:05:05.841: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:06.847: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:06.850: INFO: Number of nodes with available pods: 0 Apr 13 10:05:06.850: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:07.837: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:07.841: INFO: Number of nodes with available pods: 1 Apr 13 10:05:07.841: INFO: Node leguer-worker2 is running more than one daemon pod Apr 13 10:05:08.835: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:08.838: INFO: Number of nodes with available pods: 2 Apr 13 10:05:08.838: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
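"Stop a daemon pod" in the step above means deleting one of the DaemonSet's pods and letting the controller revive it, so the per-node pod count returns to one; sketched below (the namespace daemonsets-1490 is from this run, the label selector is assumed):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pods, err := cs.CoreV1().Pods("daemonsets-1490").List(ctx, metav1.ListOptions{
		LabelSelector: "daemonset-name=daemon-set", // assumed selector
	})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		panic("no daemon pods found")
	}

	// Delete one daemon pod; the DaemonSet controller recreates it, which is
	// the "revived" condition the polling loop below is waiting on.
	if err := cs.CoreV1().Pods(pods.Items[0].Namespace).Delete(ctx,
		pods.Items[0].Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}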
Apr 13 10:05:08.914: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:08.917: INFO: Number of nodes with available pods: 1 Apr 13 10:05:08.917: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:09.950: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:09.954: INFO: Number of nodes with available pods: 1 Apr 13 10:05:09.954: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:10.923: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:10.925: INFO: Number of nodes with available pods: 1 Apr 13 10:05:10.925: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:11.922: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:11.925: INFO: Number of nodes with available pods: 1 Apr 13 10:05:11.925: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:12.922: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:12.926: INFO: Number of nodes with available pods: 1 Apr 13 10:05:12.926: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:13.922: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:13.925: INFO: Number of nodes with available pods: 1 Apr 13 10:05:13.925: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:14.922: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:14.925: INFO: Number of nodes with available pods: 1 Apr 13 10:05:14.925: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:15.921: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:15.924: INFO: Number of nodes with available pods: 1 Apr 13 10:05:15.924: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:16.923: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:16.927: INFO: Number of nodes with available pods: 1 Apr 13 10:05:16.927: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:17.928: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:17.932: INFO: Number of nodes with available pods: 2 Apr 13 10:05:17.932: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1490, will wait for the garbage collector to delete the pods Apr 13 10:05:17.994: INFO: Deleting DaemonSet.extensions daemon-set took: 5.75986ms Apr 13 10:05:18.494: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.227862ms Apr 13 10:05:55.303: INFO: Number of nodes with available pods: 0 Apr 13 10:05:55.303: INFO: Number of running nodes: 0, number of available pods: 0 Apr 13 10:05:55.306: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"28798"},"items":null} Apr 13 10:05:55.308: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"28798"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:05:55.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1490" for this suite. • [SLOW TEST:51.653 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":18,"completed":6,"skipped":1777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:05:55.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Apr 13 10:05:55.446: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 13 10:05:55.454: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:55.473: INFO: Number of nodes with available pods: 0 Apr 13 10:05:55.473: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:56.479: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:56.484: INFO: Number of nodes with available pods: 0 Apr 13 10:05:56.484: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:57.484: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:57.488: INFO: Number of nodes with available pods: 0 Apr 13 10:05:57.488: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:05:58.492: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:58.554: INFO: Number of nodes with available pods: 1 Apr 13 10:05:58.555: INFO: Node leguer-worker2 is running more than one daemon pod Apr 13 10:05:59.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:05:59.482: INFO: Number of nodes with available pods: 2 Apr 13 10:05:59.482: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 13 10:05:59.554: INFO: Wrong image for pod: daemon-set-8gbkw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:05:59.554: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:05:59.574: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:00.592: INFO: Wrong image for pod: daemon-set-8gbkw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:00.592: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:00.599: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:01.580: INFO: Wrong image for pod: daemon-set-8gbkw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:01.581: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:01.584: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:02.579: INFO: Wrong image for pod: daemon-set-8gbkw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 13 10:06:02.579: INFO: Pod daemon-set-8gbkw is not available Apr 13 10:06:02.579: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:02.582: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:03.580: INFO: Wrong image for pod: daemon-set-8gbkw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:03.580: INFO: Pod daemon-set-8gbkw is not available Apr 13 10:06:03.580: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:03.584: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:04.584: INFO: Wrong image for pod: daemon-set-8gbkw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:04.584: INFO: Pod daemon-set-8gbkw is not available Apr 13 10:06:04.584: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:04.587: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:05.579: INFO: Pod daemon-set-68mqb is not available Apr 13 10:06:05.579: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:05.583: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:06.580: INFO: Pod daemon-set-68mqb is not available Apr 13 10:06:06.580: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:06.585: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:07.580: INFO: Pod daemon-set-68mqb is not available Apr 13 10:06:07.580: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:07.584: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:08.580: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:08.583: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:09.579: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 13 10:06:09.579: INFO: Pod daemon-set-qv7v7 is not available Apr 13 10:06:09.583: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:10.579: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:10.579: INFO: Pod daemon-set-qv7v7 is not available Apr 13 10:06:10.583: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:11.582: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:11.582: INFO: Pod daemon-set-qv7v7 is not available Apr 13 10:06:11.586: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:12.580: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:12.580: INFO: Pod daemon-set-qv7v7 is not available Apr 13 10:06:12.584: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:13.580: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:13.580: INFO: Pod daemon-set-qv7v7 is not available Apr 13 10:06:13.584: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:14.579: INFO: Wrong image for pod: daemon-set-qv7v7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Apr 13 10:06:14.579: INFO: Pod daemon-set-qv7v7 is not available Apr 13 10:06:14.582: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:15.579: INFO: Pod daemon-set-qch9t is not available Apr 13 10:06:15.582: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
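The one-pod-at-a-time replacement visible above (at most a single "not available" pod at any moment) is the default RollingUpdate behaviour, maxUnavailable=1. A minimal client-go sketch of the kind of image change the test makes; the function and argument names here are placeholders, not the suite's identifiers:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// bumpDaemonSetImage patches the pod template image of a DaemonSet. With the
// default RollingUpdate strategy (maxUnavailable=1) the controller then
// replaces pods one node at a time, which is why the log shows one pod
// going unavailable while the other still runs the old httpd image.
func bumpDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name, container, image string) error {
	patch := fmt.Sprintf(
		`{"spec":{"template":{"spec":{"containers":[{"name":%q,"image":%q}]}}}}`,
		container, image)
	_, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name,
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	return err
}
```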
Apr 13 10:06:15.586: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:15.589: INFO: Number of nodes with available pods: 1 Apr 13 10:06:15.589: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:06:16.595: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:16.601: INFO: Number of nodes with available pods: 1 Apr 13 10:06:16.601: INFO: Node leguer-worker is running more than one daemon pod Apr 13 10:06:17.594: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 13 10:06:17.598: INFO: Number of nodes with available pods: 2 Apr 13 10:06:17.598: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3628, will wait for the garbage collector to delete the pods Apr 13 10:06:17.673: INFO: Deleting DaemonSet.extensions daemon-set took: 5.642953ms Apr 13 10:06:18.174: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.207408ms Apr 13 10:06:25.277: INFO: Number of nodes with available pods: 0 Apr 13 10:06:25.277: INFO: Number of running nodes: 0, number of available pods: 0 Apr 13 10:06:25.280: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"28960"},"items":null} Apr 13 10:06:25.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"28960"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:06:25.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3628" for this suite. 
• [SLOW TEST:29.969 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":18,"completed":7,"skipped":2142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:06:25.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 13 10:06:25.462: INFO: Waiting up to 1m0s for all nodes to be ready Apr 13 10:07:25.484: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:07:25.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Apr 13 10:07:25.625: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Apr 13 10:07:25.629: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:07:25.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5088" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:07:25.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2630" for this suite. 
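The two "Forbidden" lines above are the assertion itself: .value on a PriorityClass is immutable, so the API server must reject any update that changes it. A sketch of the same round-trip, assuming a class named "p1" already exists in the cluster:

```go
package sketch

import (
	"context"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// priorityClassValueIsImmutable reproduces the Forbidden errors logged above:
// changing .value on an existing PriorityClass is rejected by validation.
func priorityClassValueIsImmutable(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	pc, err := cs.SchedulingV1().PriorityClasses().Get(ctx, "p1", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	pc.Value = pc.Value + 1 // mutate the immutable field
	_, err = cs.SchedulingV1().PriorityClasses().Update(ctx, pc, metav1.UpdateOptions{})
	// The server answers with an Invalid error ("Value: Forbidden: may not
	// be changed in an update"), which is what the test checks for.
	return errors.IsInvalid(err), nil
}
```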
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.644 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":18,"completed":8,"skipped":2166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:07:25.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Apr 13 10:07:26.042: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 13 10:07:26.059: INFO: Waiting for terminating namespaces to be deleted... 
Apr 13 10:07:26.062: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Apr 13 10:07:26.067: INFO: kindnet-hzqnl from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:07:26.067: INFO: Container kindnet-cni ready: false, restart count 16 Apr 13 10:07:26.067: INFO: kube-proxy-srl76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:07:26.067: INFO: Container kube-proxy ready: true, restart count 0 Apr 13 10:07:26.067: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Apr 13 10:07:26.071: INFO: kindnet-67q65 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:07:26.071: INFO: Container kindnet-cni ready: false, restart count 16 Apr 13 10:07:26.071: INFO: kube-proxy-kfr76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:07:26.071: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.167562f523109952], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:07:27.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6509" for this suite. 
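The FailedScheduling event above comes from a pod whose nodeSelector matches no node label, so no node qualifies ("2 node(s) didn't match Pod's node affinity"). A sketch of such a pod; the image is a stand-in, not necessarily what the suite uses:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createUnschedulablePod creates a pod with a nonempty nodeSelector that
// matches nothing, so the scheduler can only emit FailedScheduling events
// like the one quoted above.
func createUnschedulablePod(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // matches no node
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}
```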
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":18,"completed":9,"skipped":2590,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:07:27.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 13 10:07:27.843: INFO: Pod name wrapped-volume-race-ddad397b-b311-47dc-93f2-dd744bcc0339: Found 0 pods out of 5 Apr 13 10:07:32.883: INFO: Pod name wrapped-volume-race-ddad397b-b311-47dc-93f2-dd744bcc0339: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ddad397b-b311-47dc-93f2-dd744bcc0339 in namespace emptydir-wrapper-2008, will wait for the garbage collector to delete the pods Apr 13 10:07:46.978: INFO: Deleting ReplicationController wrapped-volume-race-ddad397b-b311-47dc-93f2-dd744bcc0339 took: 6.087851ms Apr 13 10:07:47.579: INFO: Terminating ReplicationController wrapped-volume-race-ddad397b-b311-47dc-93f2-dd744bcc0339 pods took: 600.310217ms STEP: Creating RC which spawns configmap-volume pods Apr 13 10:07:55.714: INFO: Pod name wrapped-volume-race-21e0200a-b857-4b48-ba40-84c75cab2dc0: Found 0 pods out of 5 Apr 13 10:08:00.723: INFO: Pod name wrapped-volume-race-21e0200a-b857-4b48-ba40-84c75cab2dc0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-21e0200a-b857-4b48-ba40-84c75cab2dc0 in namespace emptydir-wrapper-2008, will wait for the garbage collector to delete the pods Apr 13 10:08:14.836: INFO: Deleting ReplicationController wrapped-volume-race-21e0200a-b857-4b48-ba40-84c75cab2dc0 took: 32.235298ms Apr 13 10:08:15.336: INFO: Terminating ReplicationController wrapped-volume-race-21e0200a-b857-4b48-ba40-84c75cab2dc0 pods took: 500.194124ms STEP: Creating RC which spawns configmap-volume pods Apr 13 10:08:25.268: INFO: Pod name wrapped-volume-race-61fc977a-5299-4b7a-bd35-528b610bff90: Found 0 pods out of 5 Apr 13 10:08:30.277: INFO: Pod name wrapped-volume-race-61fc977a-5299-4b7a-bd35-528b610bff90: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-61fc977a-5299-4b7a-bd35-528b610bff90 in namespace emptydir-wrapper-2008, will wait for the garbage collector to delete the pods Apr 13 10:08:44.363: INFO: Deleting ReplicationController wrapped-volume-race-61fc977a-5299-4b7a-bd35-528b610bff90 took: 10.613524ms Apr 13 10:08:44.863: INFO: 
Terminating ReplicationController wrapped-volume-race-61fc977a-5299-4b7a-bd35-528b610bff90 pods took: 500.284442ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:08:56.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2008" for this suite. • [SLOW TEST:88.956 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":18,"completed":10,"skipped":2661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:08:56.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Apr 13 10:08:56.137: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 13 10:08:56.166: INFO: Waiting for terminating namespaces to be deleted... 
Apr 13 10:08:56.167: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Apr 13 10:08:56.171: INFO: kindnet-hzqnl from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:08:56.171: INFO: Container kindnet-cni ready: true, restart count 17 Apr 13 10:08:56.171: INFO: kube-proxy-srl76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:08:56.171: INFO: Container kube-proxy ready: true, restart count 0 Apr 13 10:08:56.171: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Apr 13 10:08:56.174: INFO: kindnet-67q65 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:08:56.175: INFO: Container kindnet-cni ready: true, restart count 17 Apr 13 10:08:56.175: INFO: kube-proxy-kfr76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:08:56.175: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e494df85-7d65-4774-9a72-b269e31399aa 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.12 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-e494df85-7d65-4774-9a72-b269e31399aa off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e494df85-7d65-4774-9a72-b269e31399aa [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:14:04.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7132" for this suite. 
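The conflict above arises because hostIP 0.0.0.0 claims the port on every node address, so a second pod asking for the same port and protocol on a specific IP cannot be scheduled onto the same node. A sketch of the container port at issue; the image and names are placeholders:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// hostPortContainer builds a container port like the ones pod4 and pod5 use
// above: pod4 binds hostPort 54322 on 0.0.0.0, pod5 asks for the same
// port/protocol on 172.18.0.12. Since 0.0.0.0 overlaps every node IP, the
// scheduler treats the two as a conflict and pod5 stays unscheduled there.
func hostPortContainer(name, hostIP string) corev1.Container {
	return corev1.Container{
		Name:  name,
		Image: "k8s.gcr.io/pause:3.2", // placeholder image
		Ports: []corev1.ContainerPort{{
			ContainerPort: 54322,
			HostPort:      54322,
			HostIP:        hostIP, // "0.0.0.0" for pod4, "172.18.0.12" for pod5
			Protocol:      corev1.ProtocolTCP,
		}},
	}
}
```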
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:308.658 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":18,"completed":11,"skipped":3090,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:14:04.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:14:36.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6037" for this suite. STEP: Destroying namespace "nsdeletetest-5361" for this suite. Apr 13 10:14:36.063: INFO: Namespace nsdeletetest-5361 was already deleted STEP: Destroying namespace "nsdeletetest-8949" for this suite. 
• [SLOW TEST:31.328 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":18,"completed":12,"skipped":3168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:14:36.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Apr 13 10:14:36.136: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 13 10:14:36.173: INFO: Waiting for terminating namespaces to be deleted... Apr 13 10:14:36.175: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Apr 13 10:14:36.181: INFO: kindnet-hzqnl from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:14:36.181: INFO: Container kindnet-cni ready: false, restart count 17 Apr 13 10:14:36.181: INFO: kube-proxy-srl76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:14:36.181: INFO: Container kube-proxy ready: true, restart count 0 Apr 13 10:14:36.181: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Apr 13 10:14:36.186: INFO: kindnet-67q65 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:14:36.186: INFO: Container kindnet-cni ready: false, restart count 17 Apr 13 10:14:36.186: INFO: kube-proxy-kfr76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:14:36.186: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Apr 13 10:14:36.241: INFO: Pod kindnet-67q65 requesting resource cpu=100m on Node leguer-worker2 Apr 13 10:14:36.241: INFO: Pod kindnet-hzqnl requesting resource cpu=100m on Node leguer-worker Apr 13 10:14:36.241: INFO: Pod kube-proxy-kfr76 requesting resource cpu=0m on Node leguer-worker2 Apr 13 10:14:36.241: INFO: Pod kube-proxy-srl76 requesting resource cpu=0m on Node leguer-worker STEP: Starting Pods to consume most of the cluster CPU. 
Apr 13 10:14:36.241: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker Apr 13 10:14:36.248: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-6713e6d1-9d01-483f-9b9a-ff10f7771de7.167563594adc7e8a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3121/filler-pod-6713e6d1-9d01-483f-9b9a-ff10f7771de7 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6713e6d1-9d01-483f-9b9a-ff10f7771de7.167563599b3e7cd6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6713e6d1-9d01-483f-9b9a-ff10f7771de7.16756359da0678ce], Reason = [Created], Message = [Created container filler-pod-6713e6d1-9d01-483f-9b9a-ff10f7771de7] STEP: Considering event: Type = [Normal], Name = [filler-pod-6713e6d1-9d01-483f-9b9a-ff10f7771de7.16756359ec31f877], Reason = [Started], Message = [Started container filler-pod-6713e6d1-9d01-483f-9b9a-ff10f7771de7] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e9f8193-018d-4e44-888d-0fba1abf58a6.167563594add96ef], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3121/filler-pod-7e9f8193-018d-4e44-888d-0fba1abf58a6 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e9f8193-018d-4e44-888d-0fba1abf58a6.16756359abd2e60b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e9f8193-018d-4e44-888d-0fba1abf58a6.16756359ec3128df], Reason = [Created], Message = [Created container filler-pod-7e9f8193-018d-4e44-888d-0fba1abf58a6] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e9f8193-018d-4e44-888d-0fba1abf58a6.16756359fb99b2f0], Reason = [Started], Message = [Started container filler-pod-7e9f8193-018d-4e44-888d-0fba1abf58a6] STEP: Considering event: Type = [Warning], Name = [additional-pod.1675635a3a3a6fc8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:14:41.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3121" for this suite. 
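The cpu=11130m figure above is the node's allocatable CPU minus what existing pods already request, so the filler pods leave no headroom and the extra pod fails with "Insufficient cpu". A simplified sketch of that arithmetic (init containers, pod overhead, and the test's exact scaling are ignored):

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// spareMilliCPU returns roughly how much CPU a filler pod can still request
// on a node: allocatable CPU minus the summed requests of pods bound there.
func spareMilliCPU(ctx context.Context, cs kubernetes.Interface, nodeName string) (int64, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	spare := node.Status.Allocatable.Cpu().MilliValue()

	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return 0, err
	}
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			if req, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
				spare -= req.MilliValue()
			}
		}
	}
	return spare, nil
}
```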
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:5.312 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":18,"completed":13,"skipped":3329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:14:41.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:14:47.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-280" for this suite. STEP: Destroying namespace "nsdeletetest-1259" for this suite. Apr 13 10:14:47.779: INFO: Namespace nsdeletetest-1259 was already deleted STEP: Destroying namespace "nsdeletetest-3354" for this suite. 
• [SLOW TEST:6.399 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":18,"completed":14,"skipped":3635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:14:47.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 13 10:14:47.899: INFO: Waiting up to 1m0s for all nodes to be ready Apr 13 10:15:47.917: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:15:47.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
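The two steps above are how the test discovers a node with free capacity: run an unconstrained throwaway pod, record which node the scheduler picked, then delete it so the ReplicaSets created next can use the freed resources. A sketch of that probe; the pod name is a placeholder and the wait for Running is elided:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// findSchedulableNode runs a pod with no constraints, reads back the node the
// scheduler bound it to, and deletes the pod again to free its resources.
func findSchedulableNode(ctx context.Context, cs kubernetes.Interface, ns string) (string, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-pod"}, // placeholder name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "probe", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return "", err
	}
	// ...wait for the pod to reach Running here, then read back the binding...
	bound, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	node := bound.Spec.NodeName
	err = cs.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{})
	return node, err
}
```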
Apr 13 10:15:52.042: INFO: found a healthy node: leguer-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Apr 13 10:16:08.210: INFO: pods created so far: [1 1 1] Apr 13 10:16:08.210: INFO: length of pods created so far: 3 Apr 13 10:16:20.222: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:16:27.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7448" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:16:27.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4973" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:99.592 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":18,"completed":15,"skipped":4732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:16:27.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Apr 13 10:16:27.510: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 13 10:16:27.517: INFO: Waiting for terminating namespaces to be 
deleted... Apr 13 10:16:27.520: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Apr 13 10:16:27.525: INFO: kindnet-hzqnl from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:16:27.525: INFO: Container kindnet-cni ready: true, restart count 18 Apr 13 10:16:27.525: INFO: kube-proxy-srl76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:16:27.525: INFO: Container kube-proxy ready: true, restart count 0 Apr 13 10:16:27.525: INFO: pod4 from sched-preemption-path-7448 started at 2021-04-13 10:16:19 +0000 UTC (1 container statuses recorded) Apr 13 10:16:27.525: INFO: Container pod4 ready: true, restart count 0 Apr 13 10:16:27.525: INFO: rs-pod3-727df from sched-preemption-path-7448 started at 2021-04-13 10:16:04 +0000 UTC (1 container statuses recorded) Apr 13 10:16:27.525: INFO: Container pod3 ready: true, restart count 0 Apr 13 10:16:27.525: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Apr 13 10:16:27.530: INFO: kindnet-67q65 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:16:27.530: INFO: Container kindnet-cni ready: false, restart count 17 Apr 13 10:16:27.530: INFO: kube-proxy-kfr76 from kube-system started at 2021-04-13 08:13:55 +0000 UTC (1 container statuses recorded) Apr 13 10:16:27.530: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
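A sketch of the label step above as a strategic-merge patch; the key and value stand in for the generated kubernetes.io/e2e-... label (a matching patch with a null value removes the label again afterwards):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelNode applies a single label to a node with a strategic-merge patch,
// the same effect as the "apply a random label on the found node" step.
func labelNode(ctx context.Context, cs kubernetes.Interface, node, key, value string) error {
	patch := []byte(`{"metadata":{"labels":{"` + key + `":"` + value + `"}}}`)
	_, err := cs.CoreV1().Nodes().Patch(ctx, node,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```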
STEP: verifying the node has the label kubernetes.io/e2e-5edf4165-a5f3-4cbe-953c-db491a377584 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.12 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.12 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Apr 13 10:16:47.982: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:47.982: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Apr 13 10:16:48.129: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:48.129: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Apr 13 10:16:48.235: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:48.235: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Apr 13 10:16:53.350: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:53.350: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Apr 13 10:16:53.482: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:53.482: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Apr 13 10:16:53.586: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:53.586: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Apr 13 10:16:58.700: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:58.700: INFO: >>> kubeConfig: /root/.kube/config STEP: checking 
connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Apr 13 10:16:58.811: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:58.811: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Apr 13 10:16:58.915: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:16:58.915: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Apr 13 10:17:04.067: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:17:04.067: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Apr 13 10:17:04.216: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:17:04.216: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Apr 13 10:17:04.314: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:17:04.314: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Apr 13 10:17:09.431: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.12 http://127.0.0.1:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:17:09.431: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 Apr 13 10:17:09.537: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.12:54321/hostname] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:17:09.537: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.12, port: 54321 UDP Apr 13 10:17:09.641: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.12 54321] Namespace:sched-pred-4402 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 10:17:09.641: INFO: >>> kubeConfig: /root/.kube/config STEP: removing the label kubernetes.io/e2e-5edf4165-a5f3-4cbe-953c-db491a377584 off the node leguer-worker2 STEP: 
verifying the node doesn't have the label kubernetes.io/e2e-5edf4165-a5f3-4cbe-953c-db491a377584 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:17:14.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4402" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:47.375 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":18,"completed":16,"skipped":5154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:17:14.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 13 10:17:15.014: INFO: Waiting up to 1m0s for all nodes to be ready Apr 13 10:18:15.034: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Apr 13 10:18:15.078: INFO: Created pod: pod0-sched-preemption-low-priority Apr 13 10:18:15.121: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 10:19:09.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1788" for this suite. 
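The preemption test above fills two thirds of node resources with a low-priority and a medium-priority pod, then submits a high-priority pod with the same requests; the scheduler must evict the low-priority pod to make room. A sketch of the objects involved; names and values are placeholders, not the suite's:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// priorityClasses returns two classes of the kind the test relies on: a
// pending pod referencing "high" lets the scheduler preempt a running pod
// referencing "low" when that is the only way to fit it.
func priorityClasses() []schedulingv1.PriorityClass {
	return []schedulingv1.PriorityClass{
		{ObjectMeta: metav1.ObjectMeta{Name: "low"}, Value: 100},
		{ObjectMeta: metav1.ObjectMeta{Name: "high"}, Value: 1000},
	}
}

// highPrioritySpec shows how a pod opts in: the scheduler resolves the class
// name to a numeric priority at admission time.
func highPrioritySpec(spec corev1.PodSpec) corev1.PodSpec {
	spec.PriorityClassName = "high"
	return spec
}
```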
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:114.479 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":18,"completed":17,"skipped":5409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 10:19:09.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Apr 13 10:19:09.386: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 13 10:19:09.394: INFO: Number of nodes with available pods: 0 Apr 13 10:19:09.394: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
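The blue/green label dance that follows is driven by a nodeSelector in the DaemonSet's pod template: relabeling the node away from the selector evicts the daemon pod, and updating the selector brings one back. A sketch, with a "color" key standing in for the test's generated label:

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
)

// withColorSelector pins a DaemonSet's pods to nodes carrying color=<color>.
// Flipping a node's label to another color makes the controller remove the
// pod from it; changing the selector to that color launches one again.
func withColorSelector(ds *appsv1.DaemonSet, color string) {
	if ds.Spec.Template.Spec.NodeSelector == nil {
		ds.Spec.Template.Spec.NodeSelector = map[string]string{}
	}
	ds.Spec.Template.Spec.NodeSelector["color"] = color
}
```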
STEP: Change node label to blue, check that daemon pod is launched.
Apr 13 10:19:09.454: INFO: Number of nodes with available pods: 0
Apr 13 10:19:09.454: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:10.461: INFO: Number of nodes with available pods: 0
Apr 13 10:19:10.461: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:11.461: INFO: Number of nodes with available pods: 0
Apr 13 10:19:11.461: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:12.458: INFO: Number of nodes with available pods: 1
Apr 13 10:19:12.458: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 13 10:19:12.525: INFO: Number of nodes with available pods: 1
Apr 13 10:19:12.525: INFO: Number of running nodes: 0, number of available pods: 1
Apr 13 10:19:13.530: INFO: Number of nodes with available pods: 0
Apr 13 10:19:13.530: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 13 10:19:13.544: INFO: Number of nodes with available pods: 0
Apr 13 10:19:13.544: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:14.701: INFO: Number of nodes with available pods: 0
Apr 13 10:19:14.701: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:15.550: INFO: Number of nodes with available pods: 0
Apr 13 10:19:15.550: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:16.547: INFO: Number of nodes with available pods: 0
Apr 13 10:19:16.547: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:17.549: INFO: Number of nodes with available pods: 0
Apr 13 10:19:17.549: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:18.547: INFO: Number of nodes with available pods: 0
Apr 13 10:19:18.547: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:19.548: INFO: Number of nodes with available pods: 0
Apr 13 10:19:19.548: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:20.548: INFO: Number of nodes with available pods: 0
Apr 13 10:19:20.548: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:21.550: INFO: Number of nodes with available pods: 0
Apr 13 10:19:21.550: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:22.556: INFO: Number of nodes with available pods: 0
Apr 13 10:19:22.557: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:23.550: INFO: Number of nodes with available pods: 0
Apr 13 10:19:23.551: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:24.548: INFO: Number of nodes with available pods: 0
Apr 13 10:19:24.548: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:25.549: INFO: Number of nodes with available pods: 0
Apr 13 10:19:25.549: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:26.551: INFO: Number of nodes with available pods: 0
Apr 13 10:19:26.551: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:27.548: INFO: Number of nodes with available pods: 0
Apr 13 10:19:27.548: INFO: Node leguer-worker is running more than one daemon pod
Apr 13 10:19:28.551: INFO: Number of nodes with available pods: 1
Apr 13 10:19:28.551: INFO: Number of running nodes: 1, number of available pods: 1
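------------------------------
Note: the blue/green transitions above are driven by relabeling a node; the DaemonSet controller then removes daemon pods from nodes that stop matching the selector and creates them on nodes that start matching. A sketch of the relabeling step using a strategic-merge patch follows; the node name is taken from the log, while the color label key reuses the assumption from the previous sketch.

// relabel_sketch.go — hypothetical illustration, not e2e framework code.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Overwrite the color label on the node: daemon pods selected by
	// color=blue get unscheduled here, and a DaemonSet whose selector
	// says color=green starts placing a pod on this node.
	patch := []byte(`{"metadata":{"labels":{"color":"green"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "leguer-worker",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
------------------------------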
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9055, will wait for the garbage collector to delete the pods
Apr 13 10:19:28.624: INFO: Deleting DaemonSet.extensions daemon-set took: 16.986311ms
Apr 13 10:19:29.125: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.207176ms
Apr 13 10:19:35.028: INFO: Number of nodes with available pods: 0
Apr 13 10:19:35.028: INFO: Number of running nodes: 0, number of available pods: 0
Apr 13 10:19:35.031: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"32053"},"items":null}
Apr 13 10:19:35.034: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"32053"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 13 10:19:35.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9055" for this suite.
• [SLOW TEST:25.830 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":18,"completed":18,"skipped":5560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 13 10:19:35.088: INFO: Running AfterSuite actions on all nodes
Apr 13 10:19:35.088: INFO: Running AfterSuite actions on node 1
Apr 13 10:19:35.088: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":18,"completed":18,"skipped":5649,"failed":0}

Ran 18 of 5667 Specs in 1303.592 seconds
SUCCESS! -- 18 Passed | 0 Failed | 0 Pending | 5649 Skipped
PASS

Ginkgo ran 1 suite in 21m44.981322087s
Test Suite Passed
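------------------------------
Note: the DaemonSet teardown above ("will wait for the garbage collector to delete the pods") corresponds to a delete with a propagation policy followed by polling until the daemon pods are gone. A sketch under those assumptions follows; the namespace and DaemonSet name are taken from the log, while the pod label selector and timeouts are illustrative.

// teardown_sketch.go — hypothetical illustration, not e2e framework code.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation: the DaemonSet object is removed immediately
	// and the garbage collector deletes its pods afterwards.
	policy := metav1.DeletePropagationBackground
	if err := cs.AppsV1().DaemonSets("daemonsets-9055").Delete(context.TODO(),
		"daemon-set", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}

	// Poll until no daemon pods remain (app=daemon-set is an assumed pod label).
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("daemonsets-9055").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=daemon-set"})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon pods deleted")
}
------------------------------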