Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1618303492 - Will randomize all specs
Will run 5667 specs

Running in parallel across 25 nodes

Apr 13 08:44:56.538: INFO: >>> kubeConfig: /root/.kube/config
Apr 13 08:44:56.540: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 13 08:44:56.565: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 13 08:44:56.646: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 08:44:56.646: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 08:44:56.646: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 08:44:56.646: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 13 08:44:56.646: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 08:44:56.646: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 08:44:56.646: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }]
Apr 13 08:44:56.646: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }]
Apr 13 08:44:56.646: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }]
Apr 13 08:44:56.646: INFO: 
Apr 13 08:44:58.668: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 08:44:58.668: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 08:44:58.668: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 08:44:58.668: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Apr 13 08:44:58.668: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
...
Apr 13 08:45:48.677: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:45:48.677: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:48.677: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 08:45:48.677: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:48.677: INFO: Apr 13 08:45:50.672: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:50.672: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:50.672: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:50.672: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (54 seconds elapsed) Apr 13 08:45:50.672: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:45:50.672: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:45:50.672: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:50.672: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 08:45:50.672: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:50.672: INFO: Apr 13 08:45:52.666: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:52.666: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:52.666: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:52.666: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (56 seconds elapsed) Apr 13 08:45:52.666: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:45:52.666: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:45:52.666: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:52.666: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 08:45:52.666: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:52.666: INFO: Apr 13 08:45:54.665: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:54.665: INFO: The status of Pod kindnet-dnnlq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:54.665: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:54.665: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (58 seconds elapsed) Apr 13 08:45:54.665: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:45:54.665: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:45:54.665: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:54.665: INFO: kindnet-dnnlq leguer-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:43 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:43 +0000 UTC }] Apr 13 08:45:54.665: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:54.665: INFO: Apr 13 08:45:56.665: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:56.665: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:56.665: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (60 seconds elapsed) Apr 13 08:45:56.665: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:45:56.665: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:45:56.665: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:56.665: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:56.665: INFO: Apr 13 08:45:58.665: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:58.665: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:45:58.665: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (62 seconds elapsed) Apr 13 08:45:58.665: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:45:58.665: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:45:58.665: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:58.665: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:45:58.665: INFO: Apr 13 08:46:00.661: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:00.661: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:00.661: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (64 seconds elapsed) Apr 13 08:46:00.661: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:00.661: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:00.661: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:00.661: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:00.661: INFO: Apr 13 08:46:02.667: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:02.667: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:02.667: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (66 seconds elapsed) Apr 13 08:46:02.667: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:02.667: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:02.667: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:02.667: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:02.667: INFO: Apr 13 08:46:04.670: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:04.670: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:04.670: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (68 seconds elapsed) Apr 13 08:46:04.670: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:04.670: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:04.671: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:04.671: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:04.671: INFO: Apr 13 08:46:06.668: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:06.668: INFO: The status of Pod kindnet-hzqnl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:06.668: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (70 seconds elapsed) Apr 13 08:46:06.668: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:06.668: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:06.669: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:06.669: INFO: kindnet-hzqnl leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:40:57 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:06.669: INFO: Apr 13 08:46:08.664: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:08.664: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (72 seconds elapsed) Apr 13 08:46:08.664: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:08.664: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:08.664: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:08.664: INFO: Apr 13 08:46:10.669: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:10.669: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (74 seconds elapsed) Apr 13 08:46:10.669: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:10.669: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:10.669: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:10.669: INFO: Apr 13 08:46:12.668: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:12.668: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (76 seconds elapsed) Apr 13 08:46:12.668: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:12.668: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:12.668: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:12.668: INFO: Apr 13 08:46:14.668: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:14.668: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (78 seconds elapsed) Apr 13 08:46:14.668: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:14.668: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:14.668: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:14.668: INFO: Apr 13 08:46:16.669: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:16.669: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (80 seconds elapsed) Apr 13 08:46:16.669: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:16.669: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:16.669: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:16.669: INFO: Apr 13 08:46:18.672: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:18.672: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (82 seconds elapsed) Apr 13 08:46:18.672: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:18.672: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:18.672: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:18.672: INFO: Apr 13 08:46:20.667: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:20.667: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (84 seconds elapsed) Apr 13 08:46:20.667: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:20.667: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:20.668: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:20.668: INFO: Apr 13 08:46:22.674: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:22.674: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (86 seconds elapsed) Apr 13 08:46:22.674: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:22.674: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:22.674: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:22.674: INFO: Apr 13 08:46:24.670: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:24.670: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (88 seconds elapsed) Apr 13 08:46:24.670: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:24.670: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:24.670: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:24.670: INFO: Apr 13 08:46:26.669: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:26.669: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (90 seconds elapsed) Apr 13 08:46:26.669: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:26.669: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:26.669: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:26.669: INFO: Apr 13 08:46:28.665: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:28.665: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (92 seconds elapsed) Apr 13 08:46:28.665: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:28.665: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:28.665: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:28.665: INFO: Apr 13 08:46:30.666: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:30.666: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (94 seconds elapsed) Apr 13 08:46:30.666: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:30.666: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:30.666: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:30.666: INFO: Apr 13 08:46:32.663: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:32.663: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (96 seconds elapsed) Apr 13 08:46:32.663: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:32.663: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:32.663: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:32.663: INFO: Apr 13 08:46:34.698: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 08:46:34.698: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (98 seconds elapsed) Apr 13 08:46:34.698: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 08:46:34.698: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 08:46:34.698: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:41:24 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 08:46:34.698: INFO: Apr 13 08:46:36.667: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (100 seconds elapsed) Apr 13 08:46:36.667: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 08:46:36.667: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 13 08:46:36.680: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 13 08:46:36.680: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 13 08:46:36.680: INFO: e2e test version: v1.20.5 Apr 13 08:46:36.683: INFO: kube-apiserver version: v1.20.2 Apr 13 08:46:36.683: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.688: INFO: Cluster IP family: ipv4 SSSSS ------------------------------ Apr 13 08:46:36.696: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.727: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.699: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.728: INFO: Cluster IP family: ipv4 SSSSSS ------------------------------ Apr 13 08:46:36.700: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.734: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.698: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.733: INFO: Cluster IP family: ipv4 SS ------------------------------ Apr 13 08:46:36.697: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.735: INFO: Cluster IP family: ipv4 SS ------------------------------ Apr 13 08:46:36.696: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.736: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.696: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.736: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.699: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.736: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.699: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.736: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.703: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.739: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.699: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.738: INFO: Cluster IP family: ipv4 S ------------------------------ Apr 13 08:46:36.698: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.739: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.699: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.739: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.700: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.738: INFO: Cluster IP family: ipv4 S ------------------------------ Apr 13 08:46:36.700: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.737: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.695: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.738: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.700: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.737: INFO: Cluster IP family: ipv4 SSSS ------------------------------ Apr 13 08:46:36.696: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.741: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSS ------------------------------ Apr 13 08:46:36.704: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.747: INFO: Cluster IP family: ipv4 SSSSSSSSS ------------------------------ Apr 13 08:46:36.702: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.752: INFO: Cluster IP family: ipv4 SSSSSSSSSS ------------------------------ Apr 13 08:46:36.698: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.752: INFO: Cluster IP family: ipv4 Apr 13 08:46:36.697: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.754: INFO: Cluster IP family: ipv4 S ------------------------------ Apr 13 08:46:36.705: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.754: INFO: Cluster 
IP family: ipv4 SSSS ------------------------------ Apr 13 08:46:36.698: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:46:36.755: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename localssd Apr 13 08:46:37.352: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36 Apr 13 08:46:37.357: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:37.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "localssd-493" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.468 seconds] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools Apr 13 08:46:37.413: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 Apr 13 08:46:37.420: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:37.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-8076" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.473 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Apr 13 08:46:37.411: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:37.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6915" for this suite. •SS ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":1,"skipped":84,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling Apr 13 08:46:38.421: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 13 08:46:38.424: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:38.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-1133" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0413 08:46:38.607512 177 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 88 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x44a33e0, 0x77ad860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x44a33e0, 0x77ad860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0000b80d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000de0760, 0xcb2400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0036607c0, 0xc000de0760, 0xc0036607c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000de0760, 0x3d4ede4a5b2, 0xc000de0788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x797fe80, 0x4d, 0x4f8fb7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0031eb6e0, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0014241e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0014241e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00081c038, 0x54f2200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000de16c8, 0xc003c341e0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003c341e0, 0x0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003c341e0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000692000, 0xc003c341e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000692000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000692000, 0xc00367a030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0000c2280, 0x7f9201d589f0, 0xc0011ddb00, 0x4dfa0a4, 0x14, 0xc0035c3ec0, 0x3, 0x3, 0x55ac780, 0xc0000b6900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54f6e40, 0xc0011ddb00, 0x4dfa0a4, 0x14, 0xc00389b000, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54f6e40, 0xc0011ddb00, 0x4dfa0a4, 0x14, 0xc00364f8e0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0011ddb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0011ddb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0011ddb00, 0x4fc2a88) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [1.358 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test Apr 13 08:46:38.897: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:39.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-5562" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:38.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 13 08:46:39.659: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:39.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-9488" for this suite. [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0413 08:46:39.852299 177 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 88 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x44a33e0, 0x77ad860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x44a33e0, 0x77ad860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0000b80d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000de0760, 0xcb2400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002678b60, 0xc000de0760, 0xc002678b60, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000de0760, 0x3d53816f55f, 0xc000de0788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x797fe80, 0xbb, 0x4f8fb7) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002670990, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0014241e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0014241e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc00081c038, 0x54f2200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000de16c8, 0xc003c340f0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003c340f0, 0x0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003c340f0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000692000, 0xc003c340f0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000692000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000692000, 0xc00367a030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0000c2280, 0x7f9201d589f0, 0xc0011ddb00, 0x4dfa0a4, 0x14, 0xc0035c3ec0, 0x3, 0x3, 0x55ac780, 0xc0000b6900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54f6e40, 0xc0011ddb00, 0x4dfa0a4, 0x14, 0xc00389b000, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54f6e40, 0xc0011ddb00, 0x4dfa0a4, 0x14, 0xc00364f8e0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0011ddb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0011ddb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0011ddb00, 0x4fc2a88) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.977 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:39.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:39.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-142" for this suite. 
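The nil-pointer panic recorded above recurs for every skipped "Cluster size autoscaler scalability" spec: the suite's BeforeEach bails out with "Only supported for providers [gce gke kubemark] (not skeleton)" before its clientset variable is assigned, yet the AfterEach at cluster_autoscaler_scalability.go:115 still tries to restore the cluster size and reaches WaitForReadyNodes with what appears to be a nil client (note the 0x0 first argument to waitListSchedulableNodes in the trace). The standalone Go sketch below reproduces that failure shape and shows the kind of nil guard that would avoid it; NodeLister, ListSchedulableNodes, and waitForReadyNodes are illustrative stand-ins, not the upstream e2e framework API.

package main

import "fmt"

// NodeLister is an illustrative stand-in for the e2e framework clientset.
type NodeLister interface {
	ListSchedulableNodes() ([]string, error)
}

// waitForReadyNodes mirrors the shape of the call in the trace: it uses the
// client unconditionally, so calling it with a nil interface value panics with
// "invalid memory address or nil pointer dereference".
func waitForReadyNodes(c NodeLister) error {
	nodes, err := c.ListSchedulableNodes()
	if err != nil {
		return err
	}
	fmt.Printf("%d schedulable nodes ready\n", len(nodes))
	return nil
}

func main() {
	var c NodeLister // never assigned: setup skipped before initializing it

	// Guarded cleanup: skip the restore step instead of panicking.
	if c == nil {
		fmt.Println("cleanup: client not initialized (spec skipped), nothing to restore")
		return
	}
	if err := waitForReadyNodes(c); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}

Run as-is, the sketch takes the guarded path and prints the cleanup-skip message; removing the nil check reproduces the same runtime panic class logged by runtime.go:78 in the traces above and below.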
•S ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":2,"skipped":362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Apr 13 08:46:38.077: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:40.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-296" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":1,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:40.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 13 08:46:40.653: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:40.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-4121" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0413 08:46:40.684045 17 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 130 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x44a33e0, 0x77ad860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x44a33e0, 0x77ad860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001a4078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0019a2760, 0xcb2400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002f65a20, 0xc0019a2760, 0xc002f65a20, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0019a2760, 0x3d569ab349e, 0xc0019a2788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x797fe80, 0x53, 0x4f8fb7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002f329c0, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0010122a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0010122a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0007cbe80, 0x54f2200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0019a36c8, 0xc002784690, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002784690, 0x0, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002784690, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0024aaa00, 0xc002784690, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0024aaa00, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0024aaa00, 0xc002dd2dc0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000180280, 0x7fc9e71bd718, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc002f34ab0, 0x3, 0x3, 0x55ac780, 0xc0001d68c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54f6e40, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc002aec0c0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54f6e40, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc00285e920, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00222ec00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00222ec00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00222ec00, 0x4fc2a88) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.470 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:41.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 13 08:46:41.629: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:41.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-2768" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0413 08:46:41.835885 17 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 130 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x44a33e0, 0x77ad860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x44a33e0, 0x77ad860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001a4078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0019a2760, 0xcb2400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003d0dac0, 0xc0019a2760, 0xc003d0dac0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0019a2760, 0x3d5ae52f32c, 0xc0019a2788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x797fe80, 0x70, 0x4f8fb7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003977590, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0010122a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0010122a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0007cbe80, 0x54f2200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0019a36c8, 0xc002784780, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002784780, 0x0, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002784780, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0024aaa00, 0xc002784780, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0024aaa00, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0024aaa00, 0xc002dd2dc0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000180280, 0x7fc9e71bd718, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc002f34ab0, 0x3, 0x3, 0x55ac780, 0xc0001d68c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54f6e40, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc002aec0c0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54f6e40, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc00285e920, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00222ec00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00222ec00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00222ec00, 0x4fc2a88) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.476 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:42.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 13 08:46:42.463: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:42.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-6428" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0413 08:46:42.470472 17 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 130 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x44a33e0, 0x77ad860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x44a33e0, 0x77ad860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001a4078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000ab4760, 0xcb2400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0026932c0, 0xc000ab4760, 0xc0026932c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000ab4760, 0x3d5d4262de5, 0xc000ab4788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x797fe80, 0x8c, 0x4f8fb7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00064fd10, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0010122a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0010122a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0007cbe80, 0x54f2200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000ab56c8, 0xc0027845a0, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0027845a0, 0x0, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0027845a0, 0x54f2200, 0xc0001d68c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0024aaa00, 0xc0027845a0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0024aaa00, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0024aaa00, 0xc002dd2dc0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000180280, 0x7fc9e71bd718, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc002f34ab0, 0x3, 0x3, 0x55ac780, 0xc0001d68c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54f6e40, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc002aec0c0, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54f6e40, 0xc00222ec00, 0x4dfa0a4, 0x14, 0xc00285e920, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00222ec00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00222ec00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00222ec00, 0x4fc2a88) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.319 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:36.948: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 Apr 13 08:46:37.038: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-8352" to be "Succeeded or Failed" Apr 13 08:46:37.100: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 61.20377ms Apr 13 08:46:39.140: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101353007s Apr 13 08:46:41.240: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202026191s Apr 13 08:46:43.394: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355855428s Apr 13 08:46:45.430: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.391829301s Apr 13 08:46:47.754: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.71591215s Apr 13 08:46:49.988: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.949040727s Apr 13 08:46:52.069: INFO: Pod "implicit-nonroot-uid": Phase="Running", Reason="", readiness=true. Elapsed: 15.030192643s Apr 13 08:46:54.467: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.428895431s Apr 13 08:46:54.467: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:54.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8352" for this suite. • [SLOW TEST:18.073 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":11,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:46:56.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7281" for this suite. 
• [SLOW TEST:18.986 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":2,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:39.504: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 Apr 13 08:46:39.557: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3043" to be "Succeeded or Failed" Apr 13 08:46:39.655: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 97.540723ms Apr 13 08:46:41.831: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273830984s Apr 13 08:46:43.873: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315969232s Apr 13 08:46:45.996: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438271601s Apr 13 08:46:48.194: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636330759s Apr 13 08:46:50.369: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.811295878s Apr 13 08:46:52.753: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 13.195512388s Apr 13 08:46:54.862: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 15.304629259s Apr 13 08:46:57.533: INFO: Pod "explicit-nonroot-uid": Phase="Running", Reason="", readiness=true. Elapsed: 17.975432449s Apr 13 08:46:59.736: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.178272625s Apr 13 08:46:59.736: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:00.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3043" for this suite. 
• [SLOW TEST:22.750 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":361,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:39.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Apr 13 08:46:40.188: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 STEP: creating the pod Apr 13 08:46:40.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-9414 create -f -' Apr 13 08:46:48.434: INFO: stderr: "" Apr 13 08:46:48.434: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Apr 13 08:46:58.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-9414 logs dapi-test-pod test-container' Apr 13 08:47:00.083: INFO: stderr: "" Apr 13 08:47:00.083: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9414\nMY_POD_IP=10.244.1.35\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.14\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Apr 13 08:47:00.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-9414 logs dapi-test-pod test-container' Apr 13 08:47:00.317: INFO: stderr: "" Apr 13 08:47:00.317: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9414\nMY_POD_IP=10.244.1.35\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.14\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 
08:47:00.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9414" for this suite. • [SLOW TEST:20.384 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:47:00.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Apr 13 08:47:00.533: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:00.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-1272" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0413 08:47:01.445399 44 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 142 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x44a33e0, 0x77ad860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x44a33e0, 0x77ad860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0000d0078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc000ffa760, 0xcb2400, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002f092c0, 0xc000ffa760, 0xc002f092c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc000ffa760, 0x3da3f224077, 0xc000ffa788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x797fe80, 0x1ca, 0x4f8fb7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002e02db0, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000e22a80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000e22a80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000527450, 0x54f2200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc000ffb6c8, 0xc0029261e0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0029261e0, 0x0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0029261e0, 0x54f2200, 0xc0000b6900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0035a6000, 0xc0029261e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0035a6000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0035a6000, 0xc0035a0030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0000ee230, 0x7f295bddefc8, 0xc0020f6d80, 0x4dfa0a4, 0x14, 0xc002aa5320, 0x3, 0x3, 0x55ac780, 0xc0000b6900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54f6e40, 0xc0020f6d80, 0x4dfa0a4, 0x14, 0xc002b28700, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54f6e40, 0xc0020f6d80, 0x4dfa0a4, 0x14, 0xc0021ed100, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0020f6d80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0020f6d80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0020f6d80, 0x4fc2a88) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [1.105 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 13 08:46:37.190: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:02.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8801" for this suite. 
• [SLOW TEST:26.032 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:36.925: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 Apr 13 08:46:37.016: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7" in namespace "security-context-test-3435" to be "Succeeded or Failed" Apr 13 08:46:37.063: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 47.166742ms Apr 13 08:46:39.140: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124162827s Apr 13 08:46:41.241: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224910568s Apr 13 08:46:43.394: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378589206s Apr 13 08:46:45.424: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.408438088s Apr 13 08:46:47.755: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.738970499s Apr 13 08:46:49.987: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.971594576s Apr 13 08:46:52.069: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.052790286s Apr 13 08:46:54.467: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.451618656s Apr 13 08:46:56.515: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.499659266s Apr 13 08:46:58.578: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.561843748s Apr 13 08:47:00.720: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Running", Reason="", readiness=true. Elapsed: 23.703890082s Apr 13 08:47:02.945: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7": Phase="Failed", Reason="", readiness=false. Elapsed: 25.929531161s Apr 13 08:47:02.945: INFO: Pod "busybox-readonly-true-a663a01f-9392-4f75-8f00-b24b0d1c68a7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:02.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3435" for this suite. • [SLOW TEST:26.500 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":7,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Apr 13 08:46:39.700: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 STEP: creating secret and pod Apr 13 08:46:39.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-7141 create -f -' Apr 13 08:46:49.027: INFO: stderr: "" Apr 13 08:46:49.027: INFO: stdout: "secret/test-secret created\n" Apr 13 08:46:49.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-7141 create -f -' Apr 13 08:46:50.545: INFO: stderr: "" Apr 13 08:46:50.545: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Apr 13 08:47:02.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-7141 logs secret-test-pod test-container' Apr 13 08:47:03.288: INFO: stderr: "" Apr 13 08:47:03.288: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:03.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7141" for this suite. • [SLOW TEST:26.581 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":1,"skipped":267,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:37.424: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:08.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3166" for this suite. 
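The "should not run with an explicit root user ID" spec passes when the kubelet refuses to start a container whose securityContext both requires runAsNonRoot and pins runAsUser to 0. A rough sketch of such a deliberately contradictory spec, assuming a placeholder image and names:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-root-uid"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "explicit-root-uid",
				Image:   "busybox:1.29", // placeholder image
				Command: []string{"id"},
				SecurityContext: &v1.SecurityContext{
					RunAsNonRoot: boolPtr(true),
					// UID 0 contradicts RunAsNonRoot, so the kubelet should
					// reject the container at creation time.
					RunAsUser: int64Ptr(0),
				},
			}},
		},
	}
	fmt.Println("sketch pod:", pod.Name)
}
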
• [SLOW TEST:31.122 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":94,"failed":0} SSSSS ------------------------------ Apr 13 08:47:08.675: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod Apr 13 08:46:37.015: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Apr 13 08:47:09.354: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5714 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 08:47:09.354: INFO: >>> kubeConfig: /root/.kube/config Apr 13 08:47:09.585: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-5714 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 08:47:09.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Apr 13 08:47:09.764: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5714 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 08:47:09.764: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:10.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-5714" for this suite. 
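The PrivilegedPod spec above runs one privileged and one non-privileged container in the same pod and execs "ip link add dummy1 type dummy" in each (the ExecWithOptions lines in the log), expecting the command to succeed only in the privileged container. A sketch of the two-container spec, assuming a placeholder image; the pod and container names match the ones in the log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{
					Name:            "privileged-container",
					Image:           "busybox:1.29", // placeholder image
					Command:         []string{"sh", "-c", "sleep 3600"},
					SecurityContext: &v1.SecurityContext{Privileged: boolPtr(true)},
				},
				{
					Name:            "not-privileged-container",
					Image:           "busybox:1.29",
					Command:         []string{"sh", "-c", "sleep 3600"},
					SecurityContext: &v1.SecurityContext{Privileged: boolPtr(false)},
				},
			},
		},
	}
	fmt.Println("sketch pod:", pod.Name)
}

Manually, the same check is roughly "kubectl exec privileged-pod -c privileged-container -- ip link add dummy1 type dummy" succeeding while the identical exec against not-privileged-container is denied.
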
• [SLOW TEST:33.958 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":11,"failed":0} Apr 13 08:47:10.844: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:38.109: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:10.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5039" for this suite. • [SLOW TEST:33.849 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":1,"skipped":120,"failed":0} Apr 13 08:47:11.191: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Apr 13 08:46:39.185: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:250 STEP: Creating pod liveness-c63b35a5-48f1-43ac-acea-81a646c274d5 in namespace container-probe-1383 Apr 13 08:46:53.574: INFO: Started pod liveness-c63b35a5-48f1-43ac-acea-81a646c274d5 in namespace container-probe-1383 STEP: checking the pod's current state and verifying that restartCount is present Apr 13 08:46:53.578: INFO: Initial restart count of pod liveness-c63b35a5-48f1-43ac-acea-81a646c274d5 is 0 Apr 13 08:47:15.616: INFO: Restart count of pod container-probe-1383/liveness-c63b35a5-48f1-43ac-acea-81a646c274d5 is now 1 (22.037572685s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:16.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1383" for this suite. • [SLOW TEST:39.597 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:250 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":1,"skipped":304,"failed":0} Apr 13 08:47:17.024: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:47:02.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:18.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6348" for this suite. 
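The Sysctls spec above sets kernel.shm_rmid_forced through the pod-level securityContext and then reads the value back from /proc inside the container. A minimal sketch, assuming a placeholder image and command; the sysctl name is the one shown in the log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-check"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				// kernel.shm_rmid_forced is a namespaced "safe" sysctl, so no
				// kubelet allowlist flag is needed for it.
				Sysctls: []v1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []v1.Container{{
				Name:    "sysctl-check",
				Image:   "busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"},
			}},
		},
	}
	fmt.Println("sketch pod:", pod.Name)
}
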
• [SLOW TEST:16.832 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":2,"skipped":699,"failed":0} Apr 13 08:47:18.883: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:37.328: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 Apr 13 08:46:37.424: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2" in namespace "security-context-test-1739" to be "Succeeded or Failed" Apr 13 08:46:37.473: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 48.280662ms Apr 13 08:46:39.519: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0946015s Apr 13 08:46:41.556: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131249936s Apr 13 08:46:43.568: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143771985s Apr 13 08:46:45.580: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155012356s Apr 13 08:46:47.754: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.329850151s Apr 13 08:46:49.987: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.562844802s Apr 13 08:46:52.069: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.64416689s Apr 13 08:46:54.467: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.042979776s Apr 13 08:46:56.516: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.09180127s Apr 13 08:46:58.578: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.153561013s Apr 13 08:47:00.719: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.294882782s Apr 13 08:47:02.945: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 25.520746344s Apr 13 08:47:05.156: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.731671357s Apr 13 08:47:07.197: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.772767679s Apr 13 08:47:09.311: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 31.886160341s Apr 13 08:47:11.469: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.044826002s Apr 13 08:47:13.513: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.088353336s Apr 13 08:47:16.043: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.618344124s Apr 13 08:47:18.144: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 40.719577113s Apr 13 08:47:20.240: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 42.815544915s Apr 13 08:47:22.514: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 45.089419733s Apr 13 08:47:22.514: INFO: Pod "alpine-nnp-nil-72a4272d-05ea-48e8-8539-0372e15d6fd2" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:22.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1739" for this suite. 
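The "allow privilege escalation when not explicitly set and uid != 0" spec above leaves allowPrivilegeEscalation nil on a non-root container and expects escalation to remain possible (no_new_privs not applied). A sketch of that shape only, assuming a placeholder image, UID and check command; the real spec uses a purpose-built alpine test image:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-nil"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "alpine-nnp-nil",
				Image:   "alpine:3.13", // placeholder; the suite uses its own test image
				Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &v1.SecurityContext{
					RunAsUser: int64Ptr(1000), // any non-zero UID
					// AllowPrivilegeEscalation is deliberately left nil: with a
					// non-root UID the default keeps escalation possible.
				},
			}},
		},
	}
	fmt.Println("sketch pod:", pod.Name)
}
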
• [SLOW TEST:46.034 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":71,"failed":0} Apr 13 08:47:23.058: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:26.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2454" for this suite. • [SLOW TEST:49.329 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":1,"skipped":166,"failed":0} Apr 13 08:47:26.986: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:37.740: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 Apr 13 08:46:37.867: INFO: Waiting up to 5m0s for pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510" in namespace "security-context-test-1109" to be "Succeeded or Failed" Apr 13 08:46:37.885: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 17.170432ms Apr 13 08:46:39.902: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034983514s Apr 13 08:46:41.906: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038285733s Apr 13 08:46:43.928: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060814951s Apr 13 08:46:45.994: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126300011s Apr 13 08:46:48.193: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326101034s Apr 13 08:46:50.368: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 12.501032428s Apr 13 08:46:52.753: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 14.885395659s Apr 13 08:46:54.862: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 16.994640953s Apr 13 08:46:57.533: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 19.665348841s Apr 13 08:46:59.736: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 21.868590837s Apr 13 08:47:02.144: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 24.276167641s Apr 13 08:47:04.462: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 26.594825252s Apr 13 08:47:06.491: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 28.62318318s Apr 13 08:47:08.930: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 31.062936085s Apr 13 08:47:11.027: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 33.159591727s Apr 13 08:47:13.273: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 35.405654992s Apr 13 08:47:15.615: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 37.748123765s Apr 13 08:47:17.858: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.990830895s Apr 13 08:47:20.240: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 42.373030844s Apr 13 08:47:22.514: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Pending", Reason="", readiness=false. Elapsed: 44.646237036s Apr 13 08:47:24.759: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Running", Reason="", readiness=true. Elapsed: 46.891214189s Apr 13 08:47:26.983: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510": Phase="Succeeded", Reason="", readiness=false. Elapsed: 49.115394078s Apr 13 08:47:26.983: INFO: Pod "busybox-user-0-e490b67e-6b7c-496a-bf5d-048511449510" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:26.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1109" for this suite. • [SLOW TEST:50.191 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":97,"failed":0} Apr 13 08:47:27.304: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Apr 13 08:46:38.109: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 Apr 13 08:46:38.248: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303" in namespace "security-context-test-6966" to be "Succeeded or Failed" Apr 13 08:46:38.274: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 26.071182ms Apr 13 08:46:40.352: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103750273s Apr 13 08:46:42.460: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211960569s Apr 13 08:46:44.472: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.224674126s Apr 13 08:46:47.719: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 9.471500363s Apr 13 08:46:49.753: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 11.505321784s Apr 13 08:46:51.762: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 13.513852035s Apr 13 08:46:53.858: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 15.609932831s Apr 13 08:46:55.970: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 17.722109683s Apr 13 08:46:58.396: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 20.148291435s Apr 13 08:47:00.499: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 22.251446967s Apr 13 08:47:02.561: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 24.31341967s Apr 13 08:47:05.158: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 26.909842636s Apr 13 08:47:07.198: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 28.949925677s Apr 13 08:47:09.311: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 31.063480599s Apr 13 08:47:11.470: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 33.221939805s Apr 13 08:47:13.947: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 35.699330553s Apr 13 08:47:16.042: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 37.794616545s Apr 13 08:47:18.145: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 39.897490168s Apr 13 08:47:20.240: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 41.992238464s Apr 13 08:47:22.514: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 44.266260806s Apr 13 08:47:24.757: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Pending", Reason="", readiness=false. Elapsed: 46.508910939s Apr 13 08:47:26.984: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 48.736045737s Apr 13 08:47:26.984: INFO: Pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303" satisfied condition "Succeeded or Failed" Apr 13 08:47:27.301: INFO: Got logs for pod "busybox-privileged-true-941afb3a-0c22-461c-94e4-19a1f888c303": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:27.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6966" for this suite. • [SLOW TEST:50.289 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":184,"failed":0} Apr 13 08:47:27.554: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 13 08:46:38.956: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:28.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6546" for this suite. 
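The "pull from private registry with secret" spec above first creates an image pull secret (the "create image pull secret" STEP) and then references it from the pod. A minimal sketch, assuming placeholder registry, image and secret names; the secret would be a kubernetes.io/dockerconfigjson secret created beforehand, e.g. with kubectl create secret docker-registry:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-pull"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// "registry-cred" is a placeholder name for the pull secret.
			ImagePullSecrets: []v1.LocalObjectReference{{Name: "registry-cred"}},
			Containers: []v1.Container{{
				Name:    "private-image-pull",
				Image:   "registry.example.com/team/app:1.0", // placeholder private image
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	fmt.Println("sketch pod:", pod.Name)
}
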
• [SLOW TEST:52.169 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:56.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:30.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7248" for this suite. • [SLOW TEST:33.897 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":232,"failed":0} Apr 13 08:47:30.739: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Apr 13 08:46:38.956: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 13 08:47:31.360: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:31.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6302" for this suite. • [SLOW TEST:54.871 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:47:03.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:31.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8916" for this suite. 
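The pod readiness gates spec above registers two custom condition types on the pod and then flips them by patching the pod's status, checking that the pod only reports Ready while every gate is True. A sketch of the gated spec, assuming a placeholder image and command; the condition names are the ones patched in the log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-gate-pod"},
		Spec: v1.PodSpec{
			// The pod can only become Ready once both custom conditions are
			// set to True in its status (via the status subresource).
			ReadinessGates: []v1.PodReadinessGate{
				{ConditionType: v1.PodConditionType("k8s.io/test-condition1")},
				{ConditionType: v1.PodConditionType("k8s.io/test-condition2")},
			},
			Containers: []v1.Container{{
				Name:    "readiness-gate-container",
				Image:   "busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	fmt.Println("sketch pod:", pod.Name)
}
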
• [SLOW TEST:28.707 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":259,"failed":0} Apr 13 08:47:32.237: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":2,"skipped":93,"failed":0} Apr 13 08:47:32.237: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:47:04.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 Apr 13 08:47:06.505: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc" in namespace "security-context-test-6417" to be "Succeeded or Failed" Apr 13 08:47:06.693: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 187.979421ms Apr 13 08:47:08.930: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.425041767s Apr 13 08:47:11.025: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520039435s Apr 13 08:47:13.276: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.770809786s Apr 13 08:47:15.615: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.110250589s Apr 13 08:47:17.860: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.354822807s Apr 13 08:47:20.241: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.736271039s Apr 13 08:47:22.514: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.008534078s Apr 13 08:47:24.757: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.251438361s Apr 13 08:47:26.983: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.477599381s Apr 13 08:47:29.113: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.607545982s Apr 13 08:47:31.146: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Running", Reason="", readiness=true. Elapsed: 24.640787971s Apr 13 08:47:33.300: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.794741986s Apr 13 08:47:33.300: INFO: Pod "alpine-nnp-true-cc9f45a7-62e7-43c0-90b4-f19dccac33cc" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:33.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6417" for this suite. • [SLOW TEST:29.063 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":280,"failed":0} Apr 13 08:47:33.507: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Apr 13 08:46:37.280: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:216 STEP: Creating pod busybox-9181f752-b037-4d38-87a9-d74e049f6878 in namespace container-probe-2438 Apr 13 08:47:05.829: INFO: Started pod busybox-9181f752-b037-4d38-87a9-d74e049f6878 in namespace container-probe-2438 STEP: checking the pod's current state and verifying that restartCount is present Apr 13 08:47:06.330: INFO: Initial restart count of pod busybox-9181f752-b037-4d38-87a9-d74e049f6878 is 0 Apr 13 08:47:57.383: INFO: Restart count of pod container-probe-2438/busybox-9181f752-b037-4d38-87a9-d74e049f6878 is now 1 (51.052757623s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:47:57.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2438" for this suite. 
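The "restarted with a docker exec liveness probe with timeout" spec above gives a container an exec liveness probe whose command runs longer than timeoutSeconds, so once exec-probe timeouts are enforced the probe fails and the kubelet restarts the container (restartCount going 0 to 1, as logged). A sketch with placeholder image and probe values; the Exec handler is assigned through the promoted field so the snippet is not tied to one core/v1 minor version:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The probed command sleeps longer than TimeoutSeconds, so every probe
	// attempt times out and the container eventually gets restarted.
	probe := &v1.Probe{
		InitialDelaySeconds: 10,
		TimeoutSeconds:      1,
		PeriodSeconds:       10,
		FailureThreshold:    1,
	}
	// Exec lives on the embedded probe-handler struct; assigning the promoted
	// field avoids naming that struct (its name changed across API versions).
	probe.Exec = &v1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 10"}}

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "exec-liveness-timeout"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:          "exec-liveness-timeout",
				Image:         "busybox:1.29", // placeholder image
				Command:       []string{"sh", "-c", "sleep 3600"},
				LivenessProbe: probe,
			}},
		},
	}
	fmt.Println("sketch pod:", pod.Name)
}
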
• [SLOW TEST:80.961 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:216 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout ","total":-1,"completed":1,"skipped":53,"failed":0} Apr 13 08:47:57.948: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Apr 13 08:46:37.163: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should not be ready until startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:376 Apr 13 08:46:37.411: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:39.504: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:41.556: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:43.424: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:45.430: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:47.754: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:49.754: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:51.748: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:53.442: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:55.532: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:57.533: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:46:59.737: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:47:01.689: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:47:03.582: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 
08:47:05.827: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:47:07.435: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Pending, waiting for it to be Running (with Ready = true) Apr 13 08:47:09.485: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:11.535: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:13.512: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:15.618: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:17.859: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:19.840: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:21.444: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:23.487: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:25.887: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:27.490: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:29.793: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:31.429: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:33.504: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:35.422: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:37.421: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:39.527: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:41.421: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:43.422: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:45.430: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:47.421: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:49.455: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:51.421: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:53.422: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:55.421: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:57.600: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:47:59.431: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = false) Apr 13 08:48:01.420: INFO: The status of Pod startup-78097ef3-7263-48f0-8428-f6c3d3adc5c6 is Running (Ready = true) Apr 13 08:48:01.423: INFO: Container started at 2021-04-13 08:47:06 +0000 UTC, pod became ready at 
2021-04-13 08:47:59 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:48:01.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9763" for this suite. • [SLOW TEST:84.543 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not be ready until startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:376 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should not be ready until startupProbe succeeds","total":-1,"completed":1,"skipped":32,"failed":0} Apr 13 08:48:01.432: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:47:00.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:347 STEP: Creating pod startup-8c7fea32-c329-43dd-b3ec-11d15ba8fc8f in namespace container-probe-1748 Apr 13 08:47:14.367: INFO: Started pod startup-8c7fea32-c329-43dd-b3ec-11d15ba8fc8f in namespace container-probe-1748 STEP: checking the pod's current state and verifying that restartCount is present Apr 13 08:47:15.055: INFO: Initial restart count of pod startup-8c7fea32-c329-43dd-b3ec-11d15ba8fc8f is 0 Apr 13 08:48:04.733: INFO: Restart count of pod container-probe-1748/startup-8c7fea32-c329-43dd-b3ec-11d15ba8fc8f is now 1 (49.677736303s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:48:04.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1748" for this suite. • [SLOW TEST:64.207 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:347 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":2,"skipped":626,"failed":0} Apr 13 08:48:04.836: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Apr 13 08:46:38.607: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
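
For reference, the two startup-probe specs that finish above ("should not be ready until startupProbe succeeds" and "should be restarted by liveness probe after startup probe enables it") exercise the rule that a container is reported Running but not Ready, and its liveness probe is not run, until its startup probe has succeeded. A minimal sketch of that kind of pod object, assuming k8s.io/api v0.21.x (where Probe embeds Handler; later releases rename it ProbeHandler); the image, file path and thresholds are illustrative, not the test's actual values:

```go
// Sketch only: a pod whose Ready condition, and whose liveness probing,
// are gated by a startup probe.
package probesketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// StartupGatedPod returns a pod whose container writes a marker file after
// ~60s. Until the startup probe sees that file, the kubelet reports the
// container Running with Ready = false and does not run the liveness probe.
func StartupGatedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 60; touch /tmp/started; sleep 600"},
				StartupProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}},
					},
					PeriodSeconds:    5,
					FailureThreshold: 30, // allows up to ~150s of startup
				},
				// Only consulted once the startup probe has succeeded.
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}},
					},
					PeriodSeconds: 5,
				},
			}},
		},
	}
}
```

The Pending, then Running (Ready = false), then Running (Ready = true) sequence polled in the log above is the progression such a spec produces while the marker file does not yet exist.
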
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should not be ready with a docker exec readiness probe timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233 STEP: Creating pod busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 in namespace container-probe-6793 Apr 13 08:47:18.827: INFO: Started pod busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 in namespace container-probe-6793 Apr 13 08:47:18.827: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (1.785µs elapsed) Apr 13 08:47:20.827: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (2.000144626s elapsed) Apr 13 08:47:22.827: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (4.000315759s elapsed) Apr 13 08:47:24.997: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (6.169760628s elapsed) Apr 13 08:47:26.997: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (8.169954806s elapsed) Apr 13 08:47:28.997: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (10.170146659s elapsed) Apr 13 08:47:30.997: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (12.170361356s elapsed) Apr 13 08:47:32.998: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (14.170518495s elapsed) Apr 13 08:47:34.998: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (16.170797771s elapsed) Apr 13 08:47:36.998: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (18.171078837s elapsed) Apr 13 08:47:38.998: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (20.171284562s elapsed) Apr 13 08:47:40.999: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (22.171516304s elapsed) Apr 13 08:47:42.999: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (24.171766264s elapsed) Apr 13 08:47:44.999: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (26.171985782s elapsed) Apr 13 08:47:46.999: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (28.17220143s elapsed) Apr 13 08:47:48.999: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (30.172420366s elapsed) Apr 13 08:47:51.000: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (32.172722371s elapsed) Apr 13 08:47:53.000: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (34.172950698s elapsed) Apr 13 08:47:55.000: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (36.173187385s elapsed) Apr 13 08:47:57.001: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (38.173456268s elapsed) Apr 13 08:47:59.001: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (40.173753245s elapsed) Apr 13 08:48:01.001: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready 
(42.174017488s elapsed) Apr 13 08:48:03.001: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (44.174261203s elapsed) Apr 13 08:48:05.002: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (46.174523343s elapsed) Apr 13 08:48:07.002: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (48.174739241s elapsed) Apr 13 08:48:09.002: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (50.174938709s elapsed) Apr 13 08:48:11.002: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (52.175155728s elapsed) Apr 13 08:48:13.003: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (54.176019036s elapsed) Apr 13 08:48:15.003: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (56.176219922s elapsed) Apr 13 08:48:17.003: INFO: pod container-probe-6793/busybox-b5bebdd9-8f5f-41b5-8959-57f04b43b2f8 is not ready (58.176400296s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:48:19.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6793" for this suite. • [SLOW TEST:101.884 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not be ready with a docker exec readiness probe timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should not be ready with a docker exec readiness probe timeout ","total":-1,"completed":1,"skipped":198,"failed":0} Apr 13 08:48:19.154: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Apr 13 08:46:36.887: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
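
The "should not be ready with a docker exec readiness probe timeout" spec that finishes above relies on an exec readiness probe whose command always outlives timeoutSeconds, so every probe attempt fails and the pod is never reported Ready. A minimal sketch under the same k8s.io/api v0.21.x assumption, with illustrative values:

```go
// Sketch only: an exec readiness probe whose command always outruns
// TimeoutSeconds, so the pod stays not-ready, matching the
// "... is not ready (Ns elapsed)" polling lines in the log above.
package probesketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func NeverReadyPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-timeout-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// Sleeps far longer than TimeoutSeconds, so each
						// probe attempt is reported as a failure.
						Exec: &corev1.ExecAction{Command: []string{"sleep", "10"}},
					},
					TimeoutSeconds: 1,
					PeriodSeconds:  2,
				},
			}},
		},
	}
}
```

Enforcement of timeouts for exec probes is comparatively recent (the ExecProbeTimeout feature gate); with it enabled, the kubelet counts each timed-out attempt as a failure, which is why the e2e pod above never reports Ready before it is deleted.
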
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:289 STEP: Creating pod startup-68bf8360-33b9-402f-b080-506380bb13ad in namespace container-probe-8229 Apr 13 08:46:55.041: INFO: Started pod startup-68bf8360-33b9-402f-b080-506380bb13ad in namespace container-probe-8229 STEP: checking the pod's current state and verifying that restartCount is present Apr 13 08:46:55.060: INFO: Initial restart count of pod startup-68bf8360-33b9-402f-b080-506380bb13ad is 0 Apr 13 08:48:31.580: INFO: Restart count of pod container-probe-8229/startup-68bf8360-33b9-402f-b080-506380bb13ad is now 1 (1m36.519963111s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:48:31.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8229" for this suite. • [SLOW TEST:114.850 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:289 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":4,"failed":0} Apr 13 08:48:31.629: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:36.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Apr 13 08:46:37.282: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Apr 13 08:46:37.337: INFO: Found ClusterRoles; assuming RBAC is enabled. 
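
The "should be restarted startup probe fails" spec above shows the other side of startup probes: once a startup probe has failed failureThreshold times, the kubelet kills the container and restarts it under the normal restart policy and back-off, which is the restartCount 0 -> 1 transition the test waits roughly a minute and a half for. A minimal sketch (same k8s.io/api v0.21.x assumption, illustrative values):

```go
// Sketch only: a startup probe that can never succeed. After
// FailureThreshold failed attempts (here 3 x 10s, after the initial delay)
// the kubelet restarts the container, bumping restartCount.
package probesketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func FailingStartupPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-fail-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "600"},
				StartupProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// The file never exists, so the probe always fails.
						Exec: &corev1.ExecAction{Command: []string{"cat", "/no/such/file"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
}
```
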
[It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 Apr 13 08:46:37.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-9230 create -f -' Apr 13 08:46:43.429: INFO: stderr: "" Apr 13 08:46:43.429: INFO: stdout: "pod/liveness-exec created\n" Apr 13 08:46:43.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:39371 --kubeconfig=/root/.kube/config --namespace=examples-9230 create -f -' Apr 13 08:46:43.933: INFO: stderr: "" Apr 13 08:46:43.933: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Apr 13 08:46:56.161: INFO: Pod: liveness-http, restart count:0 Apr 13 08:46:58.396: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:00.499: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:02.560: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:05.158: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:07.197: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:09.311: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:11.469: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:13.512: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:16.043: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:18.143: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:20.241: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:22.514: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:24.758: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:26.983: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:29.112: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:31.146: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:32.259: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:33.300: INFO: Pod: liveness-http, restart count:0 Apr 13 08:47:34.279: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:35.304: INFO: Pod: liveness-http, restart count:1 Apr 13 08:47:35.304: INFO: Saw liveness-http restart, succeeded... 
Apr 13 08:47:36.284: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:38.288: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:40.293: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:42.298: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:44.302: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:46.359: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:48.364: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:50.367: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:52.370: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:54.426: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:56.430: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:47:58.434: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:00.600: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:02.678: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:04.695: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:06.698: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:08.702: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:10.725: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:12.798: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:14.801: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:16.804: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:18.808: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:20.813: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:22.817: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:24.822: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:26.826: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:28.830: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:30.835: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:32.839: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:34.844: INFO: Pod: liveness-exec, restart count:0 Apr 13 08:48:36.848: INFO: Pod: liveness-exec, restart count:1 Apr 13 08:48:36.848: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:48:36.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9230" for this suite. • [SLOW TEST:119.875 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":1,"skipped":54,"failed":0} Apr 13 08:48:36.857: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:37.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test Apr 13 08:46:37.867: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
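
The [Feature:Example] spec above creates the documented liveness example pods with kubectl and waits for each to restart; liveness-http restarts first, then liveness-exec. In the spirit of the liveness-exec example, a Go sketch (same k8s.io/api v0.21.x assumption; the actual test applies the upstream YAML manifests, and the liveness-http pod does the equivalent with an HTTPGetAction against a /healthz endpoint):

```go
// Sketch only: a pod in the spirit of the documented liveness-exec example.
// The container is healthy for ~30s, then removes the file its liveness
// probe checks, so the kubelet restarts it -- the restart count moving
// from 0 to 1 in the log above.
package probesketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func LivenessExecPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox:1.29",
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}
```
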
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready Apr 13 08:46:37.963: INFO: Waiting up to 5m0s for node leguer-worker condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Apr 13 08:46:39.140: INFO: node status heartbeat is unchanged for 1.030421208s, waiting for 1m20s Apr 13 08:46:40.172: INFO: node status heartbeat is unchanged for 2.062715973s, waiting for 1m20s Apr 13 08:46:41.241: INFO: node status heartbeat is unchanged for 3.131508501s, waiting for 1m20s Apr 13 08:46:42.113: INFO: node status heartbeat is unchanged for 4.003825127s, waiting for 1m20s Apr 13 08:46:43.315: INFO: node status heartbeat is unchanged for 5.205162177s, waiting for 1m20s Apr 13 08:46:44.113: INFO: node status heartbeat is unchanged for 6.003460725s, waiting for 1m20s Apr 13 08:46:45.167: INFO: node status heartbeat is unchanged for 7.057604532s, waiting for 1m20s Apr 13 08:46:46.305: INFO: node status heartbeat is unchanged for 8.195080015s, waiting for 1m20s Apr 13 08:46:47.721: INFO: node status heartbeat is unchanged for 9.611888227s, waiting for 1m20s Apr 13 08:46:48.193: INFO: node status heartbeat is unchanged for 10.084020671s, waiting for 1m20s Apr 13 08:46:49.515: INFO: node status heartbeat is unchanged for 11.405088046s, waiting for 1m20s Apr 13 08:46:50.369: INFO: node status heartbeat is unchanged for 12.259288987s, waiting for 1m20s Apr 13 08:46:51.156: INFO: node status heartbeat is unchanged for 13.046127386s, waiting for 1m20s Apr 13 08:46:52.197: INFO: node status heartbeat is unchanged for 14.088018094s, waiting for 1m20s Apr 13 08:46:53.312: INFO: node status heartbeat is unchanged for 15.202588271s, waiting for 1m20s Apr 13 08:46:54.467: INFO: node status heartbeat is unchanged for 16.35725457s, waiting for 1m20s Apr 13 08:46:55.353: INFO: node status heartbeat is unchanged for 17.243492641s, waiting for 1m20s Apr 13 08:46:56.161: INFO: node status heartbeat is unchanged for 18.051996606s, waiting for 1m20s Apr 13 08:46:57.538: INFO: node status heartbeat is unchanged for 19.42890621s, waiting for 1m20s Apr 13 08:46:58.396: INFO: node status heartbeat changed in 30s (with other status changes), waiting for 40s Apr 13 08:46:58.403: INFO: v1.NodeStatus{ Capacity: {s"cpu": {i: {...}, s: "16", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "2303189964Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cpu": {i: {...}, s: "16", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "2303189964Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:57 +0000 UTC"}, LastTransitionTime: {Time: s"2021-04-13 08:13:54 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - 
LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:57 +0000 UTC"}, LastTransitionTime: {Time: s"2021-04-13 08:13:54 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:57 +0000 UTC"}, LastTransitionTime: {Time: s"2021-04-13 08:13:54 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-04-13 08:14:25 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "172.18.0.14"}, {Type: "InternalIP", Address: "fc00:f853:ccd:e793::e"}, {Type: "Hostname", Address: "leguer-worker"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, NodeInfo: {MachineID: "6b5cc8378dc7413caa9fa7871d5f97c6", SystemUUID: "0f47f62c-ed39-48d7-be3c-fc1aee5ad071", BootID: "dc0058b1-aa97-45b0-baf9-d3a69a0326a3", KernelVersion: "4.15.0-141-generic", ...}, Images: []v1.ContainerImage{ ... // 10 identical elements {Names: {"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04"..., "docker.io/library/nginx:1.14-alpine"}, SizeBytes: 6978806}, {Names: {"docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca99847"..., "docker.io/appropriate/curl:edge"}, SizeBytes: 2854657}, + { + Names: []string{ + "docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e1224"..., + "docker.io/library/busybox:1.29", + }, + SizeBytes: 732685, + }, {Names: {"docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bc"..., "docker.io/library/busybox:1.28"}, SizeBytes: 727869}, {Names: {"k8s.gcr.io/pause:3.4.1"}, SizeBytes: 685714}, {Names: {"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8"..., "k8s.gcr.io/pause:3.3"}, SizeBytes: 299480}, }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Apr 13 08:46:59.318: INFO: node status heartbeat is unchanged for 922.329072ms, waiting for 1m20s Apr 13 08:47:00.269: INFO: node status heartbeat is unchanged for 1.873771167s, waiting for 1m20s Apr 13 08:47:01.413: INFO: node status heartbeat is unchanged for 3.017150409s, waiting for 1m20s Apr 13 08:47:02.170: INFO: node status heartbeat is unchanged for 3.774294116s, waiting for 1m20s Apr 13 08:47:03.283: INFO: node status heartbeat is unchanged for 4.887614961s, waiting for 1m20s Apr 13 08:47:04.460: INFO: node status heartbeat is unchanged for 6.064756525s, waiting for 1m20s Apr 13 08:47:05.575: INFO: node status heartbeat is unchanged for 7.179701231s, waiting for 1m20s Apr 13 08:47:06.386: INFO: node status heartbeat is unchanged for 7.990443021s, waiting for 1m20s Apr 13 08:47:07.197: INFO: node status heartbeat is unchanged for 8.801911895s, waiting for 1m20s Apr 13 08:47:08.183: INFO: node status heartbeat is unchanged for 9.787267929s, waiting for 1m20s Apr 13 08:47:09.139: INFO: node status heartbeat is unchanged for 10.743172233s, waiting for 1m20s Apr 13 08:47:10.177: INFO: node status heartbeat is unchanged for 11.78187196s, waiting for 1m20s Apr 13 08:47:11.247: INFO: node status heartbeat is unchanged for 12.851313752s, waiting for 1m20s Apr 13 08:47:12.309: INFO: node status heartbeat is unchanged for 13.913310677s, waiting for 1m20s Apr 13 08:47:13.271: INFO: node status heartbeat is unchanged for 14.876079392s, waiting for 1m20s Apr 13 08:47:14.367: INFO: node status heartbeat is 
unchanged for 15.971859932s, waiting for 1m20s Apr 13 08:47:15.227: INFO: node status heartbeat is unchanged for 16.831618696s, waiting for 1m20s Apr 13 08:47:16.436: INFO: node status heartbeat is unchanged for 18.040959718s, waiting for 1m20s Apr 13 08:47:17.294: INFO: node status heartbeat is unchanged for 18.898468946s, waiting for 1m20s Apr 13 08:47:18.144: INFO: node status heartbeat is unchanged for 19.748119696s, waiting for 1m20s Apr 13 08:47:19.287: INFO: node status heartbeat is unchanged for 20.892006553s, waiting for 1m20s Apr 13 08:47:20.240: INFO: node status heartbeat is unchanged for 21.844693809s, waiting for 1m20s Apr 13 08:47:21.297: INFO: node status heartbeat is unchanged for 22.901302963s, waiting for 1m20s Apr 13 08:47:22.143: INFO: node status heartbeat is unchanged for 23.747975235s, waiting for 1m20s Apr 13 08:47:23.414: INFO: node status heartbeat is unchanged for 25.018601389s, waiting for 1m20s Apr 13 08:47:24.396: INFO: node status heartbeat is unchanged for 26.000726547s, waiting for 1m20s Apr 13 08:47:25.190: INFO: node status heartbeat is unchanged for 26.794702065s, waiting for 1m20s Apr 13 08:47:26.289: INFO: node status heartbeat is unchanged for 27.89406584s, waiting for 1m20s Apr 13 08:47:27.268: INFO: node status heartbeat is unchanged for 28.872666142s, waiting for 1m20s Apr 13 08:47:28.584: INFO: node status heartbeat changed in 31s (with other status changes), waiting for 40s Apr 13 08:47:28.698: INFO: v1.NodeStatus{ Capacity: {s"cpu": {i: {...}, s: "16", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "2303189964Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cpu": {i: {...}, s: "16", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "2303189964Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, s"hugepages-2Mi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:47:28 +0000 UTC"}, LastTransitionTime: {Time: s"2021-04-13 08:13:54 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:47:28 +0000 UTC"}, LastTransitionTime: {Time: s"2021-04-13 08:13:54 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:46:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2021-04-13 08:47:28 +0000 UTC"}, LastTransitionTime: {Time: s"2021-04-13 08:13:54 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-04-13 08:14:25 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "172.18.0.14"}, {Type: "InternalIP", Address: "fc00:f853:ccd:e793::e"}, {Type: "Hostname", Address: "leguer-worker"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, NodeInfo: {MachineID: "6b5cc8378dc7413caa9fa7871d5f97c6", SystemUUID: "0f47f62c-ed39-48d7-be3c-fc1aee5ad071", BootID: "dc0058b1-aa97-45b0-baf9-d3a69a0326a3", KernelVersion: "4.15.0-141-generic", ...}, Images: 
[]v1.ContainerImage{ ... // 9 identical elements {Names: {"docker.io/rancher/local-path-provisioner:v0.0.14"}, SizeBytes: 41982521}, {Names: {"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04"..., "docker.io/library/nginx:1.14-alpine"}, SizeBytes: 6978806}, + { + Names: []string{ + "gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e903921"..., + "gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0", + }, + SizeBytes: 3054649, + }, {Names: {"docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca99847"..., "docker.io/appropriate/curl:edge"}, SizeBytes: 2854657}, {Names: {"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e1224"..., "docker.io/library/busybox:1.29"}, SizeBytes: 732685}, ... // 3 identical elements }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } Apr 13 08:47:29.118: INFO: node status heartbeat is unchanged for 534.106126ms, waiting for 1m20s Apr 13 08:47:30.420: INFO: node status heartbeat is unchanged for 1.83691308s, waiting for 1m20s Apr 13 08:47:31.145: INFO: node status heartbeat is unchanged for 2.561671735s, waiting for 1m20s Apr 13 08:47:32.222: INFO: node status heartbeat is unchanged for 3.638618028s, waiting for 1m20s Apr 13 08:47:33.116: INFO: node status heartbeat is unchanged for 4.532031824s, waiting for 1m20s Apr 13 08:47:34.114: INFO: node status heartbeat is unchanged for 5.530355794s, waiting for 1m20s Apr 13 08:47:35.126: INFO: node status heartbeat is unchanged for 6.542304632s, waiting for 1m20s Apr 13 08:47:36.114: INFO: node status heartbeat is unchanged for 7.530038042s, waiting for 1m20s Apr 13 08:47:37.114: INFO: node status heartbeat is unchanged for 8.530884057s, waiting for 1m20s Apr 13 08:47:38.113: INFO: node status heartbeat is unchanged for 9.529738947s, waiting for 1m20s Apr 13 08:47:39.117: INFO: node status heartbeat is unchanged for 10.5331521s, waiting for 1m20s Apr 13 08:47:40.137: INFO: node status heartbeat is unchanged for 11.55384506s, waiting for 1m20s Apr 13 08:47:41.114: INFO: node status heartbeat is unchanged for 12.530074912s, waiting for 1m20s Apr 13 08:47:42.125: INFO: node status heartbeat is unchanged for 13.541887385s, waiting for 1m20s Apr 13 08:47:43.115: INFO: node status heartbeat is unchanged for 14.531023847s, waiting for 1m20s Apr 13 08:47:44.114: INFO: node status heartbeat is unchanged for 15.530030377s, waiting for 1m20s Apr 13 08:47:45.114: INFO: node status heartbeat is unchanged for 16.530003689s, waiting for 1m20s Apr 13 08:47:46.113: INFO: node status heartbeat is unchanged for 17.529379241s, waiting for 1m20s Apr 13 08:47:47.115: INFO: node status heartbeat is unchanged for 18.531580875s, waiting for 1m20s Apr 13 08:47:48.117: INFO: node status heartbeat is unchanged for 19.532937079s, waiting for 1m20s Apr 13 08:47:49.114: INFO: node status heartbeat is unchanged for 20.529978269s, waiting for 1m20s Apr 13 08:47:50.112: INFO: node status heartbeat is unchanged for 21.528571955s, waiting for 1m20s Apr 13 08:47:51.113: INFO: node status heartbeat is unchanged for 22.529751319s, waiting for 1m20s Apr 13 08:47:52.113: INFO: node status heartbeat is unchanged for 23.529030449s, waiting for 1m20s Apr 13 08:47:53.112: INFO: node status heartbeat is unchanged for 24.528797591s, waiting for 1m20s Apr 13 08:47:54.112: INFO: node status heartbeat is unchanged for 25.528693367s, waiting for 1m20s Apr 13 08:47:55.114: INFO: node status heartbeat is unchanged for 26.530174355s, waiting for 1m20s Apr 13 08:47:56.112: INFO: node status heartbeat is unchanged for 27.528275091s, 
waiting for 1m20s Apr 13 08:47:57.113: INFO: node status heartbeat is unchanged for 28.529865151s, waiting for 1m20s Apr 13 08:47:58.126: INFO: node status heartbeat is unchanged for 29.542156685s, waiting for 1m20s Apr 13 08:47:59.112: INFO: node status heartbeat is unchanged for 30.528480039s, waiting for 1m20s Apr 13 08:48:00.277: INFO: node status heartbeat is unchanged for 31.693548496s, waiting for 1m20s Apr 13 08:48:01.114: INFO: node status heartbeat is unchanged for 32.530179847s, waiting for 1m20s Apr 13 08:48:02.113: INFO: node status heartbeat is unchanged for 33.529505085s, waiting for 1m20s Apr 13 08:48:03.168: INFO: node status heartbeat is unchanged for 34.584647036s, waiting for 1m20s Apr 13 08:48:04.114: INFO: node status heartbeat is unchanged for 35.530325417s, waiting for 1m20s Apr 13 08:48:05.115: INFO: node status heartbeat is unchanged for 36.531028088s, waiting for 1m20s Apr 13 08:48:06.114: INFO: node status heartbeat is unchanged for 37.530254316s, waiting for 1m20s Apr 13 08:48:07.141: INFO: node status heartbeat is unchanged for 38.557166198s, waiting for 1m20s Apr 13 08:48:08.114: INFO: node status heartbeat is unchanged for 39.530207802s, waiting for 1m20s Apr 13 08:48:09.116: INFO: node status heartbeat is unchanged for 40.532546635s, waiting for 1m20s Apr 13 08:48:10.112: INFO: node status heartbeat is unchanged for 41.528486085s, waiting for 1m20s Apr 13 08:48:11.181: INFO: node status heartbeat is unchanged for 42.597653595s, waiting for 1m20s Apr 13 08:48:12.114: INFO: node status heartbeat is unchanged for 43.529923585s, waiting for 1m20s Apr 13 08:48:13.114: INFO: node status heartbeat is unchanged for 44.529963892s, waiting for 1m20s Apr 13 08:48:14.113: INFO: node status heartbeat is unchanged for 45.529894137s, waiting for 1m20s Apr 13 08:48:15.132: INFO: node status heartbeat is unchanged for 46.548483956s, waiting for 1m20s Apr 13 08:48:16.114: INFO: node status heartbeat is unchanged for 47.53056922s, waiting for 1m20s Apr 13 08:48:17.114: INFO: node status heartbeat is unchanged for 48.530681543s, waiting for 1m20s Apr 13 08:48:18.114: INFO: node status heartbeat is unchanged for 49.529943808s, waiting for 1m20s Apr 13 08:48:19.127: INFO: node status heartbeat is unchanged for 50.543190376s, waiting for 1m20s Apr 13 08:48:20.113: INFO: node status heartbeat is unchanged for 51.529571618s, waiting for 1m20s Apr 13 08:48:21.115: INFO: node status heartbeat is unchanged for 52.531877291s, waiting for 1m20s Apr 13 08:48:22.114: INFO: node status heartbeat is unchanged for 53.530517383s, waiting for 1m20s Apr 13 08:48:23.151: INFO: node status heartbeat is unchanged for 54.56714704s, waiting for 1m20s Apr 13 08:48:24.114: INFO: node status heartbeat is unchanged for 55.53058418s, waiting for 1m20s Apr 13 08:48:25.114: INFO: node status heartbeat is unchanged for 56.530417601s, waiting for 1m20s Apr 13 08:48:26.114: INFO: node status heartbeat is unchanged for 57.530893784s, waiting for 1m20s Apr 13 08:48:27.114: INFO: node status heartbeat is unchanged for 58.530430378s, waiting for 1m20s Apr 13 08:48:28.114: INFO: node status heartbeat is unchanged for 59.530327925s, waiting for 1m20s Apr 13 08:48:29.114: INFO: node status heartbeat is unchanged for 1m0.530699849s, waiting for 1m20s Apr 13 08:48:30.223: INFO: node status heartbeat is unchanged for 1m1.639311696s, waiting for 1m20s Apr 13 08:48:31.114: INFO: node status heartbeat is unchanged for 1m2.530358793s, waiting for 1m20s Apr 13 08:48:32.113: INFO: node status heartbeat is unchanged for 
1m3.529225858s, waiting for 1m20s Apr 13 08:48:33.114: INFO: node status heartbeat is unchanged for 1m4.530604748s, waiting for 1m20s Apr 13 08:48:34.114: INFO: node status heartbeat is unchanged for 1m5.529945067s, waiting for 1m20s Apr 13 08:48:35.115: INFO: node status heartbeat is unchanged for 1m6.530979802s, waiting for 1m20s Apr 13 08:48:36.114: INFO: node status heartbeat is unchanged for 1m7.530086803s, waiting for 1m20s Apr 13 08:48:37.115: INFO: node status heartbeat is unchanged for 1m8.531003531s, waiting for 1m20s Apr 13 08:48:38.114: INFO: node status heartbeat is unchanged for 1m9.530147699s, waiting for 1m20s Apr 13 08:48:39.114: INFO: node status heartbeat is unchanged for 1m10.530361799s, waiting for 1m20s Apr 13 08:48:40.114: INFO: node status heartbeat is unchanged for 1m11.530652973s, waiting for 1m20s Apr 13 08:48:41.114: INFO: node status heartbeat is unchanged for 1m12.530555712s, waiting for 1m20s Apr 13 08:48:42.112: INFO: node status heartbeat is unchanged for 1m13.528845535s, waiting for 1m20s Apr 13 08:48:43.114: INFO: node status heartbeat is unchanged for 1m14.530177406s, waiting for 1m20s Apr 13 08:48:44.175: INFO: node status heartbeat is unchanged for 1m15.59137614s, waiting for 1m20s Apr 13 08:48:45.113: INFO: node status heartbeat is unchanged for 1m16.52945139s, waiting for 1m20s Apr 13 08:48:46.120: INFO: node status heartbeat is unchanged for 1m17.536541047s, waiting for 1m20s Apr 13 08:48:47.114: INFO: node status heartbeat is unchanged for 1m18.53030049s, waiting for 1m20s Apr 13 08:48:48.114: INFO: node status heartbeat is unchanged for 1m19.530341827s, waiting for 1m20s Apr 13 08:48:49.113: INFO: node status heartbeat is unchanged for 1m20.529560566s, was waiting for at least 1m20s, success! STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:48:49.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2668" for this suite. 
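
The NodeLease spec above verifies that, with node leases enabled, the kubelet's Lease object in kube-node-lease carries the frequent heartbeat while NodeStatus is patched far less often: in the log the status heartbeat only moves when something else changes (the image-list diffs), and the test succeeds once the status has gone at least 1m20s without an update while the node stays Ready. A sketch of reading that Lease with client-go, assuming a client-go release of the same vintage; the kubeconfig path and node name are taken from this log but are otherwise placeholders:

```go
// Sketch only: read the node's Lease, the object the kubelet renews
// frequently (default every 10s) even when NodeStatus is left untouched.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Node leases live in the kube-node-lease namespace, one per node.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "leguer-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity)
	}
	if lease.Spec.RenewTime != nil {
		fmt.Println("last renewed:", lease.Spec.RenewTime.Time)
	}
}
```
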
• [SLOW TEST:131.997 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":1,"skipped":111,"failed":0} Apr 13 08:48:49.124: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:42.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:318 STEP: Creating pod startup-013c9646-56a8-4364-beba-28e823f96254 in namespace container-probe-3727 Apr 13 08:46:57.533: INFO: Started pod startup-013c9646-56a8-4364-beba-28e823f96254 in namespace container-probe-3727 STEP: checking the pod's current state and verifying that restartCount is present Apr 13 08:46:57.587: INFO: Initial restart count of pod startup-013c9646-56a8-4364-beba-28e823f96254 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:50:58.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3727" for this suite. 
• [SLOW TEST:255.434 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:318 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":3,"skipped":1151,"failed":0} Apr 13 08:50:58.154: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:40.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:265 STEP: Creating pod liveness-46761308-b2c7-4b1f-ba16-c6434a967b21 in namespace container-probe-711 Apr 13 08:46:59.318: INFO: Started pod liveness-46761308-b2c7-4b1f-ba16-c6434a967b21 in namespace container-probe-711 STEP: checking the pod's current state and verifying that restartCount is present Apr 13 08:46:59.322: INFO: Initial restart count of pod liveness-46761308-b2c7-4b1f-ba16-c6434a967b21 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:51:00.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-711" for this suite. 
• [SLOW TEST:259.419 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:265 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":2,"skipped":444,"failed":0} Apr 13 08:51:00.182: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:46:55.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:682 STEP: getting restart delay-0 Apr 13 08:48:08.710: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-04-13 08:47:38 +0000 UTC restartedAt=2021-04-13 08:48:07 +0000 UTC (29s) STEP: getting restart delay-1 Apr 13 08:48:59.112: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-04-13 08:48:12 +0000 UTC restartedAt=2021-04-13 08:48:58 +0000 UTC (46s) STEP: getting restart delay-2 Apr 13 08:50:36.981: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-04-13 08:49:03 +0000 UTC restartedAt=2021-04-13 08:50:35 +0000 UTC (1m32s) STEP: updating the image Apr 13 08:50:37.492: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Apr 13 08:51:09.678: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-04-13 08:50:52 +0000 UTC restartedAt=2021-04-13 08:51:08 +0000 UTC (16s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 08:51:09.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1501" for this suite. 
• [SLOW TEST:254.626 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:682 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":2,"skipped":52,"failed":0} Apr 13 08:51:09.692: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 08:47:03.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:723 STEP: getting restart delay when capped Apr 13 08:58:51.576: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-04-13 08:53:41 +0000 UTC restartedAt=2021-04-13 08:58:50 +0000 UTC (5m9s) Apr 13 09:04:08.685: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-04-13 08:58:55 +0000 UTC restartedAt=2021-04-13 09:04:07 +0000 UTC (5m12s) Apr 13 09:09:17.343: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-04-13 09:04:12 +0000 UTC restartedAt=2021-04-13 09:09:15 +0000 UTC (5m3s) STEP: getting restart delay after a capped delay Apr 13 09:14:27.008: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-04-13 09:09:20 +0000 UTC restartedAt=2021-04-13 09:14:25 +0000 UTC (5m5s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 09:14:27.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5291" for this suite. • [SLOW TEST:1643.946 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:723 ------------------------------ {"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":120,"failed":0} Apr 13 09:14:27.020: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":171,"failed":0} Apr 13 08:47:29.584: INFO: Running AfterSuite actions on all nodes Apr 13 09:14:27.073: INFO: Running AfterSuite actions on node 1 Apr 13 09:14:27.073: INFO: Skipping dumping logs from cluster Ran 36 of 5667 Specs in 1770.540 seconds SUCCESS! -- 36 Passed | 0 Failed | 0 Pending | 5631 Skipped Ginkgo ran 1 suite in 29m34.288581048s Test Suite Passed
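
The two [Slow] Pods specs at the end of the run both probe the kubelet's crash-loop back-off: the delay between restarts roughly doubles (29s, 46s, 1m32s in the first spec), is reset by an image update (16s immediately afterwards), and is capped at MaxContainerBackOff (the ~5m plateaus in the second spec). A sketch of the nominal sequence, assuming the kubelet defaults of a 10s initial back-off doubling up to a 5m cap; the delays observed in the log include sync-loop jitter on top of these nominal values:

```go
// Sketch only: the nominal restart back-off sequence under the assumed
// kubelet defaults (10s initial delay, doubling, capped at 5m).
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialBackOff      = 10 * time.Second
		maxContainerBackOff = 5 * time.Minute
	)
	delay := initialBackOff
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxContainerBackOff {
			delay = maxContainerBackOff // plateaus at the 5m cap
		}
	}
	// Updating the container image resets this back-off, so the next
	// restart waits only the initial delay again.
}
```
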