Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1618316276 - Will randomize all specs
Will run 5667 specs
Running in parallel across 25 nodes
Apr 13 12:18:01.116: INFO: >>> kubeConfig: /root/.kube/config
Apr 13 12:18:01.118: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 13 12:18:01.192: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 13 12:18:01.288: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:18:01.297: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:18:01.297: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 13 12:18:01.297: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:18:01.297: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 13 12:18:01.297: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }]
Apr 13 12:18:01.297: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }]
Apr 13 12:18:01.297: INFO:
Apr 13 12:18:03.406: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:18:03.406: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 13 12:18:03.406: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Apr 13 12:18:03.406: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 13 12:18:03.406: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:03.406: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:03.406: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:03.406: INFO: Apr 13 12:18:05.370: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:05.370: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:05.370: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) Apr 13 12:18:05.370: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:05.370: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:05.370: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:05.370: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:05.370: INFO: Apr 13 12:18:07.439: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:07.440: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:07.440: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (6 seconds elapsed) Apr 13 12:18:07.440: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:07.440: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:07.440: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:07.440: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:07.440: INFO: Apr 13 12:18:09.589: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:09.589: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:09.589: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (8 seconds elapsed) Apr 13 12:18:09.589: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:09.589: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:09.589: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:09.589: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:09.589: INFO: Apr 13 12:18:11.473: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:11.473: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:11.473: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) Apr 13 12:18:11.473: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:11.473: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:11.473: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:11.473: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:11.473: INFO: Apr 13 12:18:13.406: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:13.406: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:13.406: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Apr 13 12:18:13.406: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:13.406: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:13.406: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:13.406: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:13.406: INFO: Apr 13 12:18:15.392: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:15.392: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:15.392: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (14 seconds elapsed) Apr 13 12:18:15.392: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:15.392: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:15.392: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:15.392: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:15.392: INFO: Apr 13 12:18:17.453: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:17.453: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:17.453: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (16 seconds elapsed) Apr 13 12:18:17.453: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:17.453: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:17.453: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:17.453: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:17.453: INFO: Apr 13 12:18:19.398: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:19.398: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:19.398: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (18 seconds elapsed) Apr 13 12:18:19.398: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:19.398: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:19.398: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:19.398: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:19.398: INFO: Apr 13 12:18:21.416: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:21.416: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:21.416: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (20 seconds elapsed) Apr 13 12:18:21.416: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:21.416: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:21.416: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:21.416: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:21.416: INFO: Apr 13 12:18:23.476: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:23.476: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:23.476: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (22 seconds elapsed) Apr 13 12:18:23.476: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:23.476: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:23.476: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:23.476: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:23.476: INFO: Apr 13 12:18:25.366: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:25.366: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:25.366: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (24 seconds elapsed) Apr 13 12:18:25.366: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:25.366: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:25.366: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:25.366: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:25.366: INFO: Apr 13 12:18:27.511: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:27.511: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:27.511: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (26 seconds elapsed) Apr 13 12:18:27.511: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:27.511: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:27.511: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:27.511: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:27.511: INFO: Apr 13 12:18:29.447: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:29.447: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:29.447: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (28 seconds elapsed) Apr 13 12:18:29.447: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:29.447: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:29.447: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:29.447: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:29.447: INFO: Apr 13 12:18:31.393: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:31.393: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:31.393: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (30 seconds elapsed) Apr 13 12:18:31.393: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:31.393: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:31.393: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:31.393: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:31.393: INFO: Apr 13 12:18:33.759: INFO: The status of Pod kindnet-67q65 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:33.828: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:33.828: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (32 seconds elapsed) Apr 13 12:18:33.828: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:33.828: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:33.828: INFO: kindnet-67q65 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:25 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 08:13:54 +0000 UTC }] Apr 13 12:18:33.828: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:33.828: INFO: Apr 13 12:18:35.406: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:35.406: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (34 seconds elapsed) Apr 13 12:18:35.406: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:35.406: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:35.406: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:35.406: INFO: Apr 13 12:18:37.429: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:37.429: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (36 seconds elapsed) Apr 13 12:18:37.429: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:37.429: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:37.429: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:37.429: INFO: Apr 13 12:18:39.534: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:39.534: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (38 seconds elapsed) Apr 13 12:18:39.534: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:39.534: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:39.534: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:39.534: INFO: Apr 13 12:18:41.388: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:41.388: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (40 seconds elapsed) Apr 13 12:18:41.388: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 13 12:18:41.388: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:41.388: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:41.389: INFO: Apr 13 12:18:43.400: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:43.400: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (42 seconds elapsed) Apr 13 12:18:43.400: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:43.400: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:43.400: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:43.400: INFO: Apr 13 12:18:45.900: INFO: The status of Pod kindnet-d9q5l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 13 12:18:45.900: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (44 seconds elapsed) Apr 13 12:18:45.900: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 13 12:18:45.900: INFO: POD NODE PHASE GRACE CONDITIONS Apr 13 12:18:45.900: INFO: kindnet-d9q5l leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-13 12:13:45 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-13 11:30:05 +0000 UTC }] Apr 13 12:18:45.900: INFO: Apr 13 12:18:47.388: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (46 seconds elapsed) Apr 13 12:18:47.388: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
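The 46-second loop above is the suite's start-up gate: before any spec runs, it polls the kube-system namespace every two seconds until every pod reports Ready, and here the two kindnet CNI pods were the stragglers. The following is a minimal client-go sketch of that kind of readiness poll, not the e2e framework's own code; the kubeconfig path and the 10-minute budget come from the log, everything else is assumed.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as printed in the log (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s (the cadence visible in the log) for up to 10 minutes.
	err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		ready := 0
		for i := range pods.Items {
			p := &pods.Items[i]
			if p.Status.Phase == corev1.PodFailed {
				return false, fmt.Errorf("pod %s failed", p.Name)
			}
			if p.Status.Phase == corev1.PodRunning && podReady(p) {
				ready++
			} else {
				fmt.Printf("pod %s is %s (Ready=%v), still waiting\n", p.Name, p.Status.Phase, podReady(p))
			}
		}
		fmt.Printf("%d / %d pods in kube-system are running and ready\n", ready, len(pods.Items))
		return ready == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}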
Apr 13 12:18:47.388: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 13 12:18:47.472: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 13 12:18:47.472: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 13 12:18:47.472: INFO: e2e test version: v1.20.5 Apr 13 12:18:47.474: INFO: kube-apiserver version: v1.20.2 Apr 13 12:18:47.474: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.497: INFO: Cluster IP family: ipv4 S ------------------------------ Apr 13 12:18:47.494: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.528: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.493: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.528: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.494: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.528: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.492: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.528: INFO: Cluster IP family: ipv4 S ------------------------------ Apr 13 12:18:47.492: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.530: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.492: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.530: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.499: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.531: INFO: Cluster IP family: ipv4 SSSSS ------------------------------ Apr 13 12:18:47.493: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.533: INFO: Cluster IP family: ipv4 SS ------------------------------ Apr 13 12:18:47.492: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.533: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Apr 13 12:18:47.491: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 SSSSS ------------------------------ Apr 13 12:18:47.498: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.493: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.499: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.551: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.492: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.508: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.500: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.508: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 SSS ------------------------------ Apr 13 12:18:47.493: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.551: INFO: Cluster IP family: ipv4 SSSSSS ------------------------------ Apr 13 12:18:47.497: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.554: INFO: Cluster IP family: ipv4 S ------------------------------ Apr 13 12:18:47.493: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.553: INFO: Cluster IP family: ipv4 S ------------------------------ Apr 13 12:18:47.491: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.550: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.496: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.552: INFO: Cluster IP family: ipv4 Apr 13 12:18:47.493: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.556: INFO: Cluster IP family: ipv4 
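Once the pods are Ready, the suite also confirms that every DaemonSet in kube-system (kindnet and kube-proxy above) has all of its scheduled pods ready, then records the e2e and kube-apiserver versions. Below is a rough client-go sketch of that DaemonSet comparison and version lookup; it only illustrates the pattern in the log, and the kubeconfig path is assumed to be the same one the suite used.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List every DaemonSet in kube-system and compare desired vs. ready pods,
	// mirroring the "3 / 3 pods ready in ... daemonset 'kindnet'" lines above.
	dss, err := client.AppsV1().DaemonSets("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ds := range dss.Items {
		fmt.Printf("%d / %d pods ready in daemonset %q\n",
			ds.Status.NumberReady, ds.Status.DesiredNumberScheduled, ds.Name)
	}

	// The apiserver version printed in the log comes from the discovery endpoint.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver version:", v.GitVersion)
}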
SSSSSSSS ------------------------------ Apr 13 12:18:47.500: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:18:47.554: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:47.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor Apr 13 12:18:47.860: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Apr 13 12:18:47.864: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:18:47.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-1167" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.430 seconds] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:267 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
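The AppArmor spec above is skipped because the nodes report a Debian OS image rather than the gci or ubuntu distros the test supports. For context, what such a spec exercises is a pod whose container is pinned to an AppArmor profile through the beta annotation used in this Kubernetes version. The sketch below is illustrative only: the namespace is arbitrary and the profile name is hypothetical and would have to be loaded on the node beforehand.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-demo",
			Annotations: map[string]string{
				// Beta annotation that pins a named container to an AppArmor profile
				// (Kubernetes 1.20 era). The profile name here is hypothetical and
				// must already be loaded on the node for the pod to start.
				"container.apparmor.security.beta.kubernetes.io/test": "localhost/k8s-apparmor-example-deny-write",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "touch /tmp/should-fail && echo unexpected write"},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}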
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector Apr 13 12:18:48.546: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:51 Apr 13 12:18:48.549: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:18:48.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-260" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.713 seconds] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:59 No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh Apr 13 12:18:48.597: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Apr 13 12:18:48.600: INFO: Only supported for providers [gce gke aws local] (not skeleton) [AfterEach] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:18:48.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-3096" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.791 seconds] [k8s.io] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 Only supported for providers [gce gke aws local] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:38 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Apr 13 12:18:48.931: INFO: Running AfterSuite actions on all nodes Apr 13 12:18:48.931: INFO: Running AfterSuite actions on all nodes SS ------------------------------ Apr 13 12:18:48.931: INFO: Running AfterSuite actions on all nodes Apr 13 12:18:48.931: INFO: Running AfterSuite actions on all nodes S ------------------------------ Apr 13 12:18:48.932: INFO: Running AfterSuite actions on all nodes Apr 13 12:18:48.932: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Apr 13 12:18:48.682: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:18:48.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-484" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.641 seconds] [k8s.io] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:267 ------------------------------ Apr 13 12:18:48.941: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl Apr 13 12:18:48.885: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Apr 13 12:18:48.896: INFO: Only supported for providers [gce gke] (not skeleton) [AfterEach] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:18:48.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-8520" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.896 seconds] [k8s.io] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ Apr 13 12:18:49.282: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap Apr 13 12:18:49.067: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:140 STEP: Creating ConfigMap configmap-1108/configmap-test-7cc0b576-b971-44b3-95fb-bbda789f9425 STEP: Updating configMap configmap-1108/configmap-test-7cc0b576-b971-44b3-95fb-bbda789f9425 STEP: Verifying update of ConfigMap configmap-1108/configmap-test-7cc0b576-b971-44b3-95fb-bbda789f9425 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:18:49.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1108" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":249,"failed":0} Apr 13 12:18:49.697: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:47.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:48.461: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:157 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 13 12:18:48.604: INFO: Waiting up to 5m0s for pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00" in namespace "security-context-8318" to be "Succeeded or Failed" Apr 13 12:18:48.681: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Pending", Reason="", readiness=false. Elapsed: 76.747766ms Apr 13 12:18:50.842: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237873895s Apr 13 12:18:53.744: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Pending", Reason="", readiness=false. Elapsed: 5.140325964s Apr 13 12:18:56.064: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Pending", Reason="", readiness=false. Elapsed: 7.459732175s Apr 13 12:18:58.390: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Pending", Reason="", readiness=false. Elapsed: 9.785803272s Apr 13 12:19:01.071: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Pending", Reason="", readiness=false. Elapsed: 12.467521814s Apr 13 12:19:03.676: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Pending", Reason="", readiness=false. Elapsed: 15.072116614s Apr 13 12:19:06.513: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.909129778s STEP: Saw pod success Apr 13 12:19:06.513: INFO: Pod "security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00" satisfied condition "Succeeded or Failed" Apr 13 12:19:06.702: INFO: Trying to get logs from node leguer-worker2 pod security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00 container test-container: STEP: delete the pod Apr 13 12:19:09.352: INFO: Waiting for pod security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00 to disappear Apr 13 12:19:09.967: INFO: Pod security-context-5aeff4b0-2ea7-4802-a1c4-967fa2272e00 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:09.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8318" for this suite. 
• [SLOW TEST:22.911 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:157 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":1,"skipped":95,"failed":0} Apr 13 12:19:10.978: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:47.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api Apr 13 12:18:48.076: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109 STEP: Creating a pod to test downward api env vars Apr 13 12:18:48.164: INFO: Waiting up to 5m0s for pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917" in namespace "downward-api-6547" to be "Succeeded or Failed" Apr 13 12:18:48.310: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Pending", Reason="", readiness=false. Elapsed: 145.190992ms Apr 13 12:18:50.575: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Pending", Reason="", readiness=false. Elapsed: 2.411147487s Apr 13 12:18:53.282: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Pending", Reason="", readiness=false. Elapsed: 5.117836838s Apr 13 12:18:55.432: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Pending", Reason="", readiness=false. Elapsed: 7.267543077s Apr 13 12:18:58.390: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Running", Reason="", readiness=true. Elapsed: 10.226047299s Apr 13 12:19:01.072: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Running", Reason="", readiness=true. Elapsed: 12.907658033s Apr 13 12:19:03.673: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Running", Reason="", readiness=true. Elapsed: 15.508811106s Apr 13 12:19:06.512: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.347896593s STEP: Saw pod success Apr 13 12:19:06.512: INFO: Pod "downward-api-2088812a-139e-4764-a99d-8f61830f4917" satisfied condition "Succeeded or Failed" Apr 13 12:19:06.702: INFO: Trying to get logs from node leguer-worker2 pod downward-api-2088812a-139e-4764-a99d-8f61830f4917 container dapi-container: STEP: delete the pod Apr 13 12:19:09.349: INFO: Waiting for pod downward-api-2088812a-139e-4764-a99d-8f61830f4917 to disappear Apr 13 12:19:09.969: INFO: Pod downward-api-2088812a-139e-4764-a99d-8f61830f4917 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:10.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6547" for this suite. 
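The Downward API spec whose namespace was just torn down injects the node IP and pod IP into the environment of a host-network pod, where both are expected to resolve to the node address. A sketch of that pod shape; the env var names and image are assumptions, not copied from the test source:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIHostNetworkPod exposes status.hostIP and status.podIP as env
    // vars. With HostNetwork set, both values should be the node IP.
    func downwardAPIHostNetworkPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // illustrative
            Spec: corev1.PodSpec{
                HostNetwork:   true,
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "env | grep _IP"},
                    Env: []corev1.EnvVar{
                        {
                            Name: "HOST_IP",
                            ValueFrom: &corev1.EnvVarSource{
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                            },
                        },
                        {
                            Name: "POD_IP",
                            ValueFrom: &corev1.EnvVarSource{
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
                            },
                        },
                    },
                }},
            },
        }
    }
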
• [SLOW TEST:23.251 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109 ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:47.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:47.773: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:164 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 13 12:18:47.846: INFO: Waiting up to 5m0s for pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160" in namespace "security-context-7235" to be "Succeeded or Failed" Apr 13 12:18:47.916: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Pending", Reason="", readiness=false. Elapsed: 63.648542ms Apr 13 12:18:50.537: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.684338906s Apr 13 12:18:53.285: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Pending", Reason="", readiness=false. Elapsed: 5.432894627s Apr 13 12:18:55.787: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Pending", Reason="", readiness=false. Elapsed: 7.934732774s Apr 13 12:18:58.391: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Pending", Reason="", readiness=false. Elapsed: 10.539115067s Apr 13 12:19:01.072: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Pending", Reason="", readiness=false. Elapsed: 13.220178225s Apr 13 12:19:03.677: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Pending", Reason="", readiness=false. Elapsed: 15.824630621s Apr 13 12:19:06.515: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.662803477s STEP: Saw pod success Apr 13 12:19:06.515: INFO: Pod "security-context-60d99af6-6029-4950-8c4b-88eeac3bf160" satisfied condition "Succeeded or Failed" Apr 13 12:19:06.702: INFO: Trying to get logs from node leguer-worker2 pod security-context-60d99af6-6029-4950-8c4b-88eeac3bf160 container test-container: STEP: delete the pod Apr 13 12:19:09.348: INFO: Waiting for pod security-context-60d99af6-6029-4950-8c4b-88eeac3bf160 to disappear Apr 13 12:19:09.968: INFO: Pod security-context-60d99af6-6029-4950-8c4b-88eeac3bf160 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:10.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7235" for this suite. 
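The runtime/default spec that just finished drives the same legacy annotation with the value "runtime/default". Since seccomp went GA in v1.19 the equivalent request can be made with the structured securityContext.seccompProfile field; a sketch, with illustrative names:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // runtimeDefaultSeccompPod requests the container runtime's default seccomp
    // profile through the structured field rather than the legacy annotation.
    func runtimeDefaultSeccompPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "seccomp-runtime-default-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    SeccompProfile: &corev1.SeccompProfile{
                        Type: corev1.SeccompProfileTypeRuntimeDefault,
                    },
                },
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "grep Seccomp /proc/self/status"},
                }},
            },
        }
    }
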
• [SLOW TEST:23.468 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:164 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":40,"failed":0} Apr 13 12:19:11.003: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0} Apr 13 12:19:11.003: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:47.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:48.100: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:149 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 13 12:18:48.241: INFO: Waiting up to 5m0s for pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01" in namespace "security-context-3628" to be "Succeeded or Failed" Apr 13 12:18:48.380: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 138.433718ms Apr 13 12:18:50.576: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334087447s Apr 13 12:18:53.285: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 5.043475526s Apr 13 12:18:55.786: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 7.544438772s Apr 13 12:18:58.391: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.149622435s Apr 13 12:19:01.071: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 12.829726564s Apr 13 12:19:03.675: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 15.43361881s Apr 13 12:19:06.512: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 18.270911529s Apr 13 12:19:09.349: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.107241412s STEP: Saw pod success Apr 13 12:19:09.349: INFO: Pod "security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01" satisfied condition "Succeeded or Failed" Apr 13 12:19:09.969: INFO: Trying to get logs from node leguer-worker2 pod security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01 container test-container: STEP: delete the pod Apr 13 12:19:11.907: INFO: Waiting for pod security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01 to disappear Apr 13 12:19:12.482: INFO: Pod security-context-ab3cf668-dac4-4e1f-8450-76afda04fa01 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:12.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3628" for this suite. • [SLOW TEST:25.481 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:149 ------------------------------ [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:48.627: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Apr 13 12:18:48.755: INFO: Waiting up to 5m0s for pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e" in namespace "security-context-6961" to be "Succeeded or Failed" Apr 13 12:18:48.887: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. Elapsed: 132.067202ms Apr 13 12:18:51.237: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482619808s Apr 13 12:18:53.744: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.989376575s Apr 13 12:18:56.064: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.308976387s Apr 13 12:18:58.394: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.639153011s Apr 13 12:19:01.072: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.31722521s Apr 13 12:19:03.677: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.922102639s Apr 13 12:19:06.515: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.760352781s Apr 13 12:19:09.348: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.593135466s STEP: Saw pod success Apr 13 12:19:09.527: INFO: Pod "security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e" satisfied condition "Succeeded or Failed" Apr 13 12:19:09.967: INFO: Trying to get logs from node leguer-worker pod security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e container test-container: STEP: delete the pod Apr 13 12:19:11.907: INFO: Waiting for pod security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e to disappear Apr 13 12:19:12.483: INFO: Pod security-context-0853a8d8-6c9d-4709-bf73-5b022d23752e no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:12.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6961" for this suite. • [SLOW TEST:25.153 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":155,"failed":0} Apr 13 12:19:13.444: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":1,"skipped":44,"failed":0} Apr 13 12:19:13.267: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:48.938: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 13 12:18:49.150: INFO: Waiting up to 5m0s for pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1" in namespace "security-context-4920" to be "Succeeded or Failed" Apr 13 12:18:49.344: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 194.549413ms Apr 13 12:18:51.624: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474461958s Apr 13 12:18:53.816: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.666164993s Apr 13 12:18:56.114: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.964819476s Apr 13 12:18:58.389: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.239172748s Apr 13 12:19:01.071: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.921473341s Apr 13 12:19:03.673: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.52289358s Apr 13 12:19:06.512: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.362132781s Apr 13 12:19:09.349: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Running", Reason="", readiness=true. Elapsed: 20.19982046s Apr 13 12:19:11.824: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.674767296s STEP: Saw pod success Apr 13 12:19:11.825: INFO: Pod "security-context-00d997d2-5f54-482b-942b-2a73c9169eb1" satisfied condition "Succeeded or Failed" Apr 13 12:19:11.829: INFO: Trying to get logs from node leguer-worker2 pod security-context-00d997d2-5f54-482b-942b-2a73c9169eb1 container test-container: STEP: delete the pod Apr 13 12:19:13.534: INFO: Waiting for pod security-context-00d997d2-5f54-482b-942b-2a73c9169eb1 to disappear Apr 13 12:19:13.890: INFO: Pod security-context-00d997d2-5f54-482b-942b-2a73c9169eb1 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:13.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4920" for this suite. • [SLOW TEST:26.312 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:89 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":1,"skipped":265,"failed":0} Apr 13 12:19:15.027: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:48.705: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
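The SupplementalGroups and the combined RunAsUser/RunAsGroup specs that passed in the chunk above both set fields on pod.Spec.SecurityContext, which apply to every container in the pod. A combined sketch with illustrative IDs; running `id` in the container surfaces all of them:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podLevelSecurityContextPod sets UID, GID and supplementary groups once at
    // pod scope; every container in the pod runs with these identities.
    func podLevelSecurityContextPod() *corev1.Pod {
        uid, gid := int64(1001), int64(2002) // illustrative values
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-security-context-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser:          &uid,
                    RunAsGroup:         &gid,
                    SupplementalGroups: []int64{1234, 5678}, // illustrative GIDs
                },
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "id"},
                }},
            },
        }
    }
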
STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:171 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Apr 13 12:18:48.891: INFO: Waiting up to 5m0s for pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e" in namespace "security-context-1125" to be "Succeeded or Failed" Apr 13 12:18:49.067: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 175.846182ms Apr 13 12:18:51.238: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346445474s Apr 13 12:18:53.738: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.846465902s Apr 13 12:18:56.064: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.172626767s Apr 13 12:18:58.389: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.49768378s Apr 13 12:19:01.072: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.180088381s Apr 13 12:19:03.673: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.781650801s Apr 13 12:19:06.512: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.620692598s Apr 13 12:19:09.348: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.456926558s Apr 13 12:19:11.824: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.93300076s STEP: Saw pod success Apr 13 12:19:11.825: INFO: Pod "security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e" satisfied condition "Succeeded or Failed" Apr 13 12:19:11.909: INFO: Trying to get logs from node leguer-worker pod security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e container test-container: STEP: delete the pod Apr 13 12:19:13.273: INFO: Waiting for pod security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e to disappear Apr 13 12:19:13.890: INFO: Pod security-context-a57c959c-15c6-43d1-8bb6-e18aaec0077e no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:13.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1125" for this suite. 
• [SLOW TEST:27.366 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:171 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":153,"failed":0} Apr 13 12:19:15.524: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:49.468: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 13 12:18:49.661: INFO: Waiting up to 5m0s for pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003" in namespace "security-context-5838" to be "Succeeded or Failed" Apr 13 12:18:49.698: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 37.513381ms Apr 13 12:18:51.833: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172460003s Apr 13 12:18:54.054: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393291567s Apr 13 12:18:57.308: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 7.647366109s Apr 13 12:18:59.687: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025955457s Apr 13 12:19:02.656: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 12.99513101s Apr 13 12:19:04.703: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 15.042273138s Apr 13 12:19:07.511: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 17.850022404s Apr 13 12:19:09.966: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Pending", Reason="", readiness=false. Elapsed: 20.305707321s Apr 13 12:19:12.483: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Running", Reason="", readiness=true. Elapsed: 22.82262497s Apr 13 12:19:14.769: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.107980881s STEP: Saw pod success Apr 13 12:19:14.769: INFO: Pod "security-context-bd2c710f-f9e1-4636-b300-55ff4c596003" satisfied condition "Succeeded or Failed" Apr 13 12:19:15.266: INFO: Trying to get logs from node leguer-worker pod security-context-bd2c710f-f9e1-4636-b300-55ff4c596003 container test-container: STEP: delete the pod Apr 13 12:19:18.404: INFO: Waiting for pod security-context-bd2c710f-f9e1-4636-b300-55ff4c596003 to disappear Apr 13 12:19:19.200: INFO: Pod security-context-bd2c710f-f9e1-4636-b300-55ff4c596003 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:19.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5838" for this suite. • [SLOW TEST:31.492 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:118 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":1,"skipped":360,"failed":0} Apr 13 12:19:20.155: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:47.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:48.420: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 13 12:18:48.546: INFO: Waiting up to 5m0s for pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1" in namespace "security-context-5176" to be "Succeeded or Failed" Apr 13 12:18:48.627: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 80.729095ms Apr 13 12:18:50.841: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294506234s Apr 13 12:18:53.744: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.197678215s Apr 13 12:18:56.064: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.51708416s Apr 13 12:18:58.389: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.842228085s Apr 13 12:19:01.072: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.525336411s Apr 13 12:19:03.677: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.130442546s Apr 13 12:19:06.515: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.968066541s Apr 13 12:19:09.348: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.801490115s Apr 13 12:19:11.824: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 23.277935697s Apr 13 12:19:13.890: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 25.343859141s Apr 13 12:19:16.470: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 27.923918435s Apr 13 12:19:19.200: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Running", Reason="", readiness=true. Elapsed: 30.653063146s Apr 13 12:19:21.723: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.176890308s STEP: Saw pod success Apr 13 12:19:21.933: INFO: Pod "security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1" satisfied condition "Succeeded or Failed" Apr 13 12:19:22.510: INFO: Trying to get logs from node leguer-worker pod security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1 container test-container: STEP: delete the pod Apr 13 12:19:22.899: INFO: Waiting for pod security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1 to disappear Apr 13 12:19:22.958: INFO: Pod security-context-cc34e336-47ae-4b8f-9bdc-c67565218fc1 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:22.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5176" for this suite. • [SLOW TEST:35.169 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":94,"failed":0} Apr 13 12:19:23.114: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context Apr 13 12:18:49.553: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
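The pod.Spec.SecurityContext.RunAsUser spec above and the container.SecurityContext.RunAsUser/RunAsGroup spec that passed a little earlier differ only in where the IDs are set; when both scopes set them, the container-level security context takes precedence. A sketch of that precedence, all IDs illustrative:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // containerRunAsOverridePod sets UID/GID at both scopes; the container-level
    // values override the pod-level ones for that container.
    func containerRunAsOverridePod() *corev1.Pod {
        podUID, podGID := int64(1000), int64(1000)             // illustrative
        containerUID, containerGID := int64(1001), int64(2002) // illustrative
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "container-run-as-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &podUID, RunAsGroup: &podGID},
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "busybox", // illustrative
                    Command:         []string{"sh", "-c", "id -u; id -g"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &containerUID, RunAsGroup: &containerGID},
                }},
            },
        }
    }
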
STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:103 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 13 12:18:49.695: INFO: Waiting up to 5m0s for pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861" in namespace "security-context-7807" to be "Succeeded or Failed" Apr 13 12:18:50.536: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 840.676675ms Apr 13 12:18:53.285: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 3.589919342s Apr 13 12:18:55.788: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092593809s Apr 13 12:18:58.392: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 8.696258364s Apr 13 12:19:01.073: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 11.377486236s Apr 13 12:19:03.677: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 13.981164824s Apr 13 12:19:06.516: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 16.820431167s Apr 13 12:19:09.349: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 19.653680808s Apr 13 12:19:11.825: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 22.129329994s Apr 13 12:19:13.891: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 24.195544549s Apr 13 12:19:16.472: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 26.776892887s Apr 13 12:19:19.199: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Pending", Reason="", readiness=false. Elapsed: 29.50371373s Apr 13 12:19:21.272: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Running", Reason="", readiness=true. Elapsed: 31.576178193s Apr 13 12:19:23.310: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.614494448s STEP: Saw pod success Apr 13 12:19:23.310: INFO: Pod "security-context-32a522b3-30d0-4293-b2d6-ed8a45258861" satisfied condition "Succeeded or Failed" Apr 13 12:19:23.333: INFO: Trying to get logs from node leguer-worker2 pod security-context-32a522b3-30d0-4293-b2d6-ed8a45258861 container test-container: STEP: delete the pod Apr 13 12:19:24.230: INFO: Waiting for pod security-context-32a522b3-30d0-4293-b2d6-ed8a45258861 to disappear Apr 13 12:19:24.354: INFO: Pod security-context-32a522b3-30d0-4293-b2d6-ed8a45258861 no longer exists [AfterEach] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:24.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7807" for this suite. 
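Every spec in this stretch emits the same "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" poll before it fetches container logs. A standalone approximation of that loop with client-go; the helper name is an assumption and the 2-second interval is a guess at the framework's polling default rather than something the log states:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodSucceededOrFailed polls the pod until it reaches a terminal
    // phase or the 5m timeout expires, printing progress like the log above.
    func waitForPodSucceededOrFailed(client kubernetes.Interface, ns, name string) (corev1.PodPhase, error) {
        var phase corev1.PodPhase
        start := time.Now()
        err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            phase = pod.Status.Phase
            fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, phase, time.Since(start))
            return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
        })
        return phase, err
    }

Called with the clientset, namespace, and pod name, this reproduces the Phase="Pending" ... Elapsed: ... progression seen throughout the entries above.
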
• [SLOW TEST:35.834 seconds] [k8s.io] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:103 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":321,"failed":0} Apr 13 12:19:24.499: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:47.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Apr 13 12:18:48.531: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Apr 13 12:18:48.627: INFO: Waiting up to 5m0s for pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56" in namespace "pods-262" to be "Succeeded or Failed" Apr 13 12:18:48.706: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 79.24492ms Apr 13 12:18:50.841: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214180193s Apr 13 12:18:53.744: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 5.117218602s Apr 13 12:18:56.065: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 7.437585159s Apr 13 12:18:58.390: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 9.762614558s Apr 13 12:19:01.072: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 12.44506631s Apr 13 12:19:03.676: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 15.048589719s Apr 13 12:19:06.518: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 17.891056643s Apr 13 12:19:09.348: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 20.721330401s Apr 13 12:19:11.824: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 23.19728364s Apr 13 12:19:13.890: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 25.263407198s Apr 13 12:19:16.471: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.843878234s Apr 13 12:19:19.201: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 30.574211001s Apr 13 12:19:21.724: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 33.097375293s Apr 13 12:19:24.080: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Pending", Reason="", readiness=false. Elapsed: 35.452632833s Apr 13 12:19:26.705: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Running", Reason="", readiness=true. Elapsed: 38.078431537s Apr 13 12:19:28.734: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.10755106s STEP: Saw pod success Apr 13 12:19:28.735: INFO: Pod "pod-always-succeed0570db38-3895-48a6-abe2-3acb7d296b56" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:30.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-262" for this suite. • [SLOW TEST:42.923 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":1,"skipped":103,"failed":0} Apr 13 12:19:30.898: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop Apr 13 12:18:49.150: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Apr 13 12:19:37.193: INFO: pod is running [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:37.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-952" for this suite. 
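The PreStop spec whose namespace was just destroyed verifies that a gracefully deleted pod stays running until its preStop hook completes, and the Delete Grace Period spec that follows exercises the grace period on the delete call itself. A hedged sketch of both pieces: the hook command, sleep durations, and grace periods are illustrative, and the exec handler type is named Handler in the k8s.io/api release contemporary with this run (it was renamed LifecycleHandler in later releases):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // preStopPod keeps a terminating pod alive until its preStop hook returns,
    // bounded by terminationGracePeriodSeconds.
    func preStopPod() *corev1.Pod {
        grace := int64(30) // illustrative grace period
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"}, // illustrative
            Spec: corev1.PodSpec{
                TerminationGracePeriodSeconds: &grace,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &corev1.Lifecycle{
                        // corev1.Handler in the v0.20-era API; LifecycleHandler in newer releases.
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "sleep 10"}},
                        },
                    },
                }},
            },
        }
    }

    // deleteGracefully issues a graceful delete: the pod stays Running while the
    // grace period (and the preStop hook) plays out, as the specs here observe.
    func deleteGracefully(client kubernetes.Interface, ns, name string) error {
        grace := int64(30) // illustrative
        return client.CoreV1().Pods(ns).Delete(context.TODO(), name,
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
    }
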
• [SLOW TEST:48.850 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":264,"failed":0} Apr 13 12:19:37.395: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Apr 13 12:18:48.861: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 13 12:19:22.567: INFO: start=2021-04-13 12:19:17.208305239 +0000 UTC m=+81.021221750, now=2021-04-13 12:19:22.567700885 +0000 UTC m=+86.380617401, kubelet pod: {"metadata":{"name":"pod-submit-remove-7210081c-2062-4270-bdfb-06f97c3ea601","namespace":"pods-6208","uid":"d4c5aaff-f93b-4b10-9f56-fe65218e2c83","resourceVersion":"86837","creationTimestamp":"2021-04-13T12:18:49Z","deletionTimestamp":"2021-04-13T12:19:46Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"864574453"},"annotations":{"kubernetes.io/config.seen":"2021-04-13T12:18:49.278348939Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-04-13T12:18:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-6skgb","secret":{"secretName":"default-token-6skgb","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-6skgb","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{
"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:14Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:14Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"}],"hostIP":"172.18.0.14","podIP":"10.244.1.14","podIPs":[{"ip":"10.244.1.14"}],"startTime":"2021-04-13T12:18:49Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2021-04-13T12:19:14Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a","containerID":"containerd://26d33e655db9abb29889b8c101b21393e47a237ff164b68fccf2b530df84b236","started":true}],"qosClass":"BestEffort"}} Apr 13 12:19:27.490: INFO: start=2021-04-13 12:19:17.208305239 +0000 UTC m=+81.021221750, now=2021-04-13 12:19:27.490338601 +0000 UTC m=+91.303255131, kubelet pod: {"metadata":{"name":"pod-submit-remove-7210081c-2062-4270-bdfb-06f97c3ea601","namespace":"pods-6208","uid":"d4c5aaff-f93b-4b10-9f56-fe65218e2c83","resourceVersion":"86837","creationTimestamp":"2021-04-13T12:18:49Z","deletionTimestamp":"2021-04-13T12:19:46Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"864574453"},"annotations":{"kubernetes.io/config.seen":"2021-04-13T12:18:49.278348939Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-04-13T12:18:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-6skgb","secret":{"secretName":"default-token-6skgb","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-6skgb","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"
status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"}],"hostIP":"172.18.0.14","podIP":"10.244.1.14","podIPs":[{"ip":"10.244.1.14"}],"startTime":"2021-04-13T12:18:49Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Apr 13 12:19:32.372: INFO: start=2021-04-13 12:19:17.208305239 +0000 UTC m=+81.021221750, now=2021-04-13 12:19:32.37291535 +0000 UTC m=+96.185831836, kubelet pod: {"metadata":{"name":"pod-submit-remove-7210081c-2062-4270-bdfb-06f97c3ea601","namespace":"pods-6208","uid":"d4c5aaff-f93b-4b10-9f56-fe65218e2c83","resourceVersion":"86837","creationTimestamp":"2021-04-13T12:18:49Z","deletionTimestamp":"2021-04-13T12:19:46Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"864574453"},"annotations":{"kubernetes.io/config.seen":"2021-04-13T12:18:49.278348939Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-04-13T12:18:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-6skgb","secret":{"secretName":"default-token-6skgb","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-6skgb","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"las
tTransitionTime":"2021-04-13T12:18:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"}],"hostIP":"172.18.0.14","podIP":"10.244.1.14","podIPs":[{"ip":"10.244.1.14"}],"startTime":"2021-04-13T12:18:49Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Apr 13 12:19:37.351: INFO: start=2021-04-13 12:19:17.208305239 +0000 UTC m=+81.021221750, now=2021-04-13 12:19:37.35152101 +0000 UTC m=+101.164437503, kubelet pod: {"metadata":{"name":"pod-submit-remove-7210081c-2062-4270-bdfb-06f97c3ea601","namespace":"pods-6208","uid":"d4c5aaff-f93b-4b10-9f56-fe65218e2c83","resourceVersion":"86837","creationTimestamp":"2021-04-13T12:18:49Z","deletionTimestamp":"2021-04-13T12:19:46Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"864574453"},"annotations":{"kubernetes.io/config.seen":"2021-04-13T12:18:49.278348939Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-04-13T12:18:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-6skgb","secret":{"secretName":"default-token-6skgb","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-6skgb","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTra
nsitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"}],"hostIP":"172.18.0.14","podIP":"10.244.1.14","podIPs":[{"ip":"10.244.1.14"}],"startTime":"2021-04-13T12:18:49Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Apr 13 12:19:42.222: INFO: start=2021-04-13 12:19:17.208305239 +0000 UTC m=+81.021221750, now=2021-04-13 12:19:42.222281594 +0000 UTC m=+106.035198112, kubelet pod: {"metadata":{"name":"pod-submit-remove-7210081c-2062-4270-bdfb-06f97c3ea601","namespace":"pods-6208","uid":"d4c5aaff-f93b-4b10-9f56-fe65218e2c83","resourceVersion":"86837","creationTimestamp":"2021-04-13T12:18:49Z","deletionTimestamp":"2021-04-13T12:19:46Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"864574453"},"annotations":{"kubernetes.io/config.seen":"2021-04-13T12:18:49.278348939Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-04-13T12:18:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-6skgb","secret":{"secretName":"default-token-6skgb","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-6skgb","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready 
status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"}],"hostIP":"172.18.0.14","podIP":"10.244.1.14","podIPs":[{"ip":"10.244.1.14"}],"startTime":"2021-04-13T12:18:49Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Apr 13 12:19:47.271: INFO: start=2021-04-13 12:19:17.208305239 +0000 UTC m=+81.021221750, now=2021-04-13 12:19:47.271819869 +0000 UTC m=+111.084736349, kubelet pod: {"metadata":{"name":"pod-submit-remove-7210081c-2062-4270-bdfb-06f97c3ea601","namespace":"pods-6208","uid":"d4c5aaff-f93b-4b10-9f56-fe65218e2c83","resourceVersion":"86837","creationTimestamp":"2021-04-13T12:18:49Z","deletionTimestamp":"2021-04-13T12:19:46Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"864574453"},"annotations":{"kubernetes.io/config.seen":"2021-04-13T12:18:49.278348939Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-04-13T12:18:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-6skgb","secret":{"secretName":"default-token-6skgb","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-6skgb","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"}],"hostIP":"172.18.0.14","podIP":"10.244.1.14","podIPs":[{"ip":"10.244.1.14"}],"startTime":"2021-04-13T12:18:49Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Apr 13 12:19:52.270: INFO: start=2021-04-13 12:19:17.208305239 +0000 UTC m=+81.021221750, now=2021-04-13 12:19:52.270453253 +0000 UTC m=+116.083369815, kubelet pod: {"metadata":{"name":"pod-submit-remove-7210081c-2062-4270-bdfb-06f97c3ea601","namespace":"pods-6208","uid":"d4c5aaff-f93b-4b10-9f56-fe65218e2c83","resourceVersion":"86837","creationTimestamp":"2021-04-13T12:18:49Z","deletionTimestamp":"2021-04-13T12:19:46Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"864574453"},"annotations":{"kubernetes.io/config.seen":"2021-04-13T12:18:49.278348939Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-04-13T12:18:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-6skgb","secret":{"secretName":"default-token-6skgb","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-6skgb","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"leguer-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:19:21Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-04-13T12:18:49Z"}],"hostIP":"172.18.0.14","podIP":"10.244.1.14","podIPs":[{"ip":"10.244.1.14"}],"startTime":"2021-04-13T12:18:49Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.21","imageID":"","started":false}],"qosClass":"BestEffort"}} Apr 13 12:19:57.271: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:57.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6208" for this suite. • [SLOW TEST:69.970 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed","total":-1,"completed":1,"skipped":188,"failed":0} Apr 13 12:19:58.222: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet Apr 13 12:18:49.662: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] [k8s.io] [sig-node] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
[BeforeEach] [k8s.io] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 13 12:18:48.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
Apr 13 12:18:49.662: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274
[BeforeEach] [k8s.io] [sig-node] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295
[It] kubelet should be able to delete 10 pods per node in 1m0s.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac in namespace kubelet-8936
I0413 12:18:51.237481 17 runners.go:190] Created replication controller with name: cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac, namespace: kubelet-8936, replica count: 20
Apr 13 12:18:51.446: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:18:51.460: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:18:52.402: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:18:57.290: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:18:57.305: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
I0413 12:19:01.287894 17 runners.go:190] cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 13 12:19:02.310: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:02.706: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:05.882: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:08.212: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:08.869: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:11.052: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
I0413 12:19:11.288098 17 runners.go:190] cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 13 12:19:13.324: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:13.939: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:19.040: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:19.254: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:20.582: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
I0413 12:19:21.288318 17 runners.go:190] cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac Pods: 20 out of 20 created, 4 running, 16 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 13 12:19:24.188: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:24.432: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:26.702: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:29.504: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:29.542: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
I0413 12:19:31.288617 17 runners.go:190] cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
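The runners.go lines above track a ReplicationController ramping from 20 pending pods to 20 running. A rough client-go equivalent of that setup and wait follows; it is an illustrative sketch only, not what runners.go actually does, and the RC name abbreviation, label, pause image, and poll intervals are assumptions.

// rc.go - create a ReplicationController and wait for its replicas to become ready,
// roughly what the runners.go progress lines above report on. Sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns := "kubelet-8936"
	replicas := int32(20)
	labels := map[string]string{"name": "cleanup20"} // hypothetical label, not the test's

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup20", Namespace: ns},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.2", // placeholder image
					}},
				},
			},
		},
	}

	if _, err := client.CoreV1().ReplicationControllers(ns).Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until every replica is ready, like the "20 out of 20 created, 20 running" line.
	err = wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		cur, getErr := client.CoreV1().ReplicationControllers(ns).Get(context.TODO(), "cleanup20", metav1.GetOptions{})
		if getErr != nil {
			return false, getErr
		}
		fmt.Printf("ready %d/%d\n", cur.Status.ReadyReplicas, replicas)
		return cur.Status.ReadyReplicas == replicas, nil
	})
	fmt.Println("all replicas ready:", err == nil)
}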
Apr 13 12:19:31.923: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:32.589: INFO: Checking pods on node leguer-worker2 via /runningpods endpoint
Apr 13 12:19:32.590: INFO: Checking pods on node leguer-worker via /runningpods endpoint
Apr 13 12:19:32.675: INFO: [Resource usage on node "leguer-control-plane" is not ready yet, Resource usage on node "leguer-worker" is not ready yet, Resource usage on node "leguer-worker2" is not ready yet]
Apr 13 12:19:32.675: INFO:
STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac in namespace kubelet-8936, will wait for the garbage collector to delete the pods
Apr 13 12:19:32.878: INFO: Deleting ReplicationController cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac took: 133.818076ms
Apr 13 12:19:34.379: INFO: Terminating ReplicationController cleanup20-40f8805f-bd15-409d-b19e-b6014c1132ac pods took: 1.500255934s
Apr 13 12:19:35.161: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:35.166: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:37.157: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:40.640: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:40.656: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:42.218: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:45.761: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:45.788: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:47.318: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:51.015: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:51.032: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:53.633: INFO: Missing info/stats for container "runtime" on node "leguer-worker2"
Apr 13 12:19:56.111: INFO: Missing info/stats for container "runtime" on node "leguer-worker"
Apr 13 12:19:56.129: INFO: Missing info/stats for container "runtime" on node "leguer-control-plane"
Apr 13 12:19:56.979: INFO: Checking pods on node leguer-worker2 via /runningpods endpoint
Apr 13 12:19:56.979: INFO: Checking pods on node leguer-worker via /runningpods endpoint
Apr 13 12:19:57.008: INFO: Deleting 20 pods on 2 nodes completed in 1.029519972s after the RC was deleted
Apr 13 12:19:57.023: INFO: CPU usage of containers on node "leguer-worker":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.496  0.658  0.745  0.793  0.793  0.793
"runtime"    0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"    0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "leguer-worker2":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.697  0.759  0.782  0.802  0.802  0.802
"runtime"    0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"    0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "leguer-control-plane":
container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"          0.000  0.744  0.798  0.950  1.024  1.024  1.024
"runtime"    0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"    0.000  0.000  0.000  0.000  0.000  0.000  0.000

[AfterEach] [k8s.io] [sig-node] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node leguer-worker2
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node leguer-worker
STEP: verifying the node 
doesn't have the label kubelet_cleanup [AfterEach] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 13 12:19:57.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-8936" for this suite. • [SLOW TEST:69.538 seconds] [k8s.io] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] [sig-node] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ [BeforeEach] [k8s.io] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 13 12:18:48.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation Apr 13 12:18:49.553: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Apr 13 12:19:42.947: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:42.947: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:43.080: INFO: Exec stderr: "" Apr 13 12:19:43.127: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:43.127: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:43.387: INFO: Exec stderr: "" Apr 13 12:19:43.474: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:43.474: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:43.667: INFO: Exec stderr: "" Apr 13 12:19:43.714: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:43.714: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:43.845: INFO: Exec stderr: "" Apr 13 12:19:43.881: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:43.890: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:44.066: INFO: Exec stderr: "" Apr 13 12:19:44.085: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:44.085: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:44.200: 
INFO: Exec stderr: "" Apr 13 12:19:44.241: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:44.241: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:44.430: INFO: Exec stderr: "" Apr 13 12:19:44.499: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:44.499: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:44.638: INFO: Exec stderr: "" Apr 13 12:19:44.691: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:44.691: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:44.843: INFO: Exec stderr: "" Apr 13 12:19:44.912: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:44.912: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:45.142: INFO: Exec stderr: "" Apr 13 12:19:45.235: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:45.235: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:45.472: INFO: Exec stderr: "" Apr 13 12:19:45.475: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:45.475: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:45.790: INFO: Exec stderr: "" Apr 13 12:19:45.864: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:45.864: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:46.015: INFO: Exec stderr: "" Apr 13 12:19:46.026: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:46.026: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:46.171: INFO: Exec stderr: "" Apr 13 12:19:46.174: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:46.174: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:46.327: INFO: Exec stderr: "" Apr 13 12:19:46.366: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:46.366: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:46.547: INFO: Exec stderr: "" Apr 13 12:19:46.558: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t 
tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:46.558: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:46.693: INFO: Exec stderr: "" Apr 13 12:19:46.738: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:46.738: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:46.918: INFO: Exec stderr: "" Apr 13 12:19:46.925: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:46.925: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:47.084: INFO: Exec stderr: "" Apr 13 12:19:47.115: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:47.115: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:47.225: INFO: Exec stderr: "" Apr 13 12:19:53.948: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-2016"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-2016"/host; echo host > "/var/lib/kubelet/mount-propagation-2016"/host/file] Namespace:mount-propagation-2016 PodName:hostexec-leguer-worker2-6cwkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 13 12:19:53.949: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:54.122: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:54.122: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:54.278: INFO: pod master mount master: stdout: "master", stderr: "" error: Apr 13 12:19:54.284: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:54.284: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:54.393: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:54.396: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:54.396: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:54.543: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:54.553: INFO: 
ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:54.559: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:54.694: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:54.738: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:54.738: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:54.861: INFO: pod master mount host: stdout: "host", stderr: "" error: Apr 13 12:19:54.881: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:54.889: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:55.015: INFO: pod slave mount master: stdout: "master", stderr: "" error: Apr 13 12:19:55.025: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:55.033: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:55.199: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Apr 13 12:19:55.210: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:55.210: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:55.444: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:55.475: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:55.475: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:55.639: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:55.690: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:55.690: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:55.806: INFO: pod slave mount host: stdout: "host", stderr: "" error: Apr 13 12:19:55.811: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:55.811: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:55.946: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:55.996: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/mnt/test/slave/file] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:55.996: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:56.155: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:56.247: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:56.247: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:56.379: INFO: pod private mount private: stdout: "private", stderr: "" error: Apr 13 12:19:56.469: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:56.469: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:56.637: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:56.792: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:56.792: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:56.966: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:57.005: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:57.005: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:57.240: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:57.265: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:57.265: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:57.420: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:57.784: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:57.961: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:58.299: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:58.355: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:58.355: INFO: >>> kubeConfig: 
/root/.kube/config Apr 13 12:19:58.608: INFO: pod default mount default: stdout: "default", stderr: "" error: Apr 13 12:19:58.658: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:58.667: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:58.801: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 13 12:19:58.801: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-2016"/master/file` = master] Namespace:mount-propagation-2016 PodName:hostexec-leguer-worker2-6cwkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 13 12:19:58.801: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:58.924: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-2016"/slave/file] Namespace:mount-propagation-2016 PodName:hostexec-leguer-worker2-6cwkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 13 12:19:58.924: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:59.013: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-2016"/host] Namespace:mount-propagation-2016 PodName:hostexec-leguer-worker2-6cwkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 13 12:19:59.013: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:59.359: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-2016 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:59.359: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:59.643: INFO: Exec stderr: "" Apr 13 12:19:59.688: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-2016 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:59.688: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:19:59.867: INFO: Exec stderr: "" Apr 13 12:19:59.916: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-2016 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:19:59.917: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:20:00.055: INFO: Exec stderr: "" Apr 13 12:20:00.097: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-2016 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 13 12:20:00.098: INFO: >>> kubeConfig: /root/.kube/config Apr 13 12:20:00.226: INFO: Exec stderr: "" Apr 13 12:20:00.226: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-2016"] Namespace:mount-propagation-2016 PodName:hostexec-leguer-worker2-6cwkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 13 
12:20:00.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Deleting pod hostexec-leguer-worker2-6cwkd in namespace mount-propagation-2016
[AfterEach] [k8s.io] [sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 13 12:20:00.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "mount-propagation-2016" for this suite.
• [SLOW TEST:71.925 seconds]
[k8s.io] [sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
should propagate mounts to the host
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":1,"skipped":321,"failed":0}
Apr 13 12:20:00.563: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 13 12:18:48.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Apr 13 12:18:48.679: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pod Container Status
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202
[It] should never report success for a pending container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
STEP: creating pods that should always exit 1 and terminating the pod after a random delay
Apr 13 12:18:57.444: INFO: watch delete seen for pod-submit-status-2-0
Apr 13 12:18:57.444: INFO: Pod pod-submit-status-2-0 on node leguer-worker timings total=8.76212789s t=1.669s run=0s execute=0s
Apr 13 12:18:59.871: INFO: watch delete seen for pod-submit-status-0-0
Apr 13 12:18:59.872: INFO: Pod pod-submit-status-0-0 on node leguer-worker2 timings total=11.189471384s t=1.39s run=0s execute=0s
Apr 13 12:19:01.634: INFO: watch delete seen for pod-submit-status-1-0
Apr 13 12:19:01.634: INFO: Pod pod-submit-status-1-0 on node leguer-worker2 timings total=12.951485362s t=1.476s run=0s execute=0s
Apr 13 12:19:03.706: INFO: watch delete seen for pod-submit-status-2-1
Apr 13 12:19:03.715: INFO: Pod pod-submit-status-2-1 on node leguer-worker timings total=6.271248983s t=1.693s run=0s execute=0s
Apr 13 12:19:05.205: INFO: watch delete seen for pod-submit-status-0-1
Apr 13 12:19:05.205: INFO: Pod pod-submit-status-0-1 on node leguer-worker timings total=5.333145861s t=849ms run=0s execute=0s
Apr 13 12:19:08.434: INFO: watch delete seen for pod-submit-status-2-2
Apr 13 12:19:08.434: INFO: Pod pod-submit-status-2-2 on node leguer-worker timings total=4.718803107s t=1.934s run=0s execute=0s
Apr 13 12:19:18.649: INFO: watch delete seen for pod-submit-status-2-3
Apr 13 12:19:18.812: INFO: Pod pod-submit-status-2-3 on node leguer-worker2 timings total=10.377327736s t=1.235s run=0s execute=0s
Apr 13 12:19:20.570: INFO: watch delete seen for pod-submit-status-0-2
Apr 13 12:19:20.570: INFO: Pod pod-submit-status-0-2 on node leguer-worker2 timings total=15.365494025s t=1.365s run=0s execute=0s
Apr 13 12:19:24.229: INFO: watch delete seen for 
pod-submit-status-0-3 Apr 13 12:19:24.229: INFO: Pod pod-submit-status-0-3 on node leguer-worker2 timings total=3.65885498s t=348ms run=0s execute=0s Apr 13 12:19:25.442: INFO: watch delete seen for pod-submit-status-1-1 Apr 13 12:19:25.529: INFO: Pod pod-submit-status-1-1 on node leguer-worker timings total=23.895569608s t=1.973s run=0s execute=0s Apr 13 12:19:46.444: INFO: watch delete seen for pod-submit-status-2-4 Apr 13 12:19:46.444: INFO: Pod pod-submit-status-2-4 on node leguer-worker2 timings total=27.63257664s t=1.777s run=0s execute=0s Apr 13 12:19:50.470: INFO: watch delete seen for pod-submit-status-2-5 Apr 13 12:19:50.470: INFO: Pod pod-submit-status-2-5 on node leguer-worker2 timings total=4.025864867s t=1.58s run=0s execute=0s Apr 13 12:19:55.239: INFO: watch delete seen for pod-submit-status-1-2 Apr 13 12:19:55.239: INFO: Pod pod-submit-status-1-2 on node leguer-worker timings total=29.709716253s t=1.341s run=0s execute=0s Apr 13 12:19:56.082: INFO: watch delete seen for pod-submit-status-0-4 Apr 13 12:19:56.082: INFO: Pod pod-submit-status-0-4 on node leguer-worker timings total=31.852408458s t=298ms run=0s execute=0s Apr 13 12:20:05.362: INFO: watch delete seen for pod-submit-status-2-6 Apr 13 12:20:05.362: INFO: Pod pod-submit-status-2-6 on node leguer-worker2 timings total=14.891625329s t=205ms run=0s execute=0s Apr 13 12:20:15.231: INFO: watch delete seen for pod-submit-status-2-7 Apr 13 12:20:15.231: INFO: Pod pod-submit-status-2-7 on node leguer-worker timings total=9.86903706s t=1.948s run=0s execute=0s Apr 13 12:20:46.034: INFO: watch delete seen for pod-submit-status-1-3 Apr 13 12:20:46.035: INFO: Pod pod-submit-status-1-3 on node leguer-worker2 timings total=50.795337223s t=238ms run=0s execute=0s Apr 13 12:20:55.104: INFO: watch delete seen for pod-submit-status-0-5 Apr 13 12:20:55.104: INFO: Pod pod-submit-status-0-5 on node leguer-worker timings total=59.022722491s t=504ms run=0s execute=0s Apr 13 12:20:55.276: INFO: watch delete seen for pod-submit-status-1-4 Apr 13 12:20:55.277: INFO: Pod pod-submit-status-1-4 on node leguer-worker timings total=9.241942635s t=1.565s run=0s execute=0s Apr 13 12:20:55.518: INFO: watch delete seen for pod-submit-status-2-8 Apr 13 12:20:55.518: INFO: Pod pod-submit-status-2-8 on node leguer-worker timings total=40.287074958s t=1.188s run=0s execute=0s Apr 13 12:21:55.344: INFO: watch delete seen for pod-submit-status-0-6 Apr 13 12:21:55.344: INFO: Pod pod-submit-status-0-6 on node leguer-worker2 timings total=1m0.239594472s t=570ms run=0s execute=0s Apr 13 12:21:55.652: INFO: watch delete seen for pod-submit-status-2-9 Apr 13 12:21:55.652: INFO: Pod pod-submit-status-2-9 on node leguer-worker2 timings total=1m0.134282188s t=1.084s run=0s execute=0s Apr 13 12:22:05.146: INFO: watch delete seen for pod-submit-status-1-5 Apr 13 12:22:05.146: INFO: Pod pod-submit-status-1-5 on node leguer-worker timings total=1m9.869172059s t=264ms run=0s execute=0s Apr 13 12:22:05.377: INFO: watch delete seen for pod-submit-status-2-10 Apr 13 12:22:05.378: INFO: Pod pod-submit-status-2-10 on node leguer-worker2 timings total=9.725050562s t=245ms run=0s execute=0s Apr 13 12:22:05.557: INFO: watch delete seen for pod-submit-status-0-7 Apr 13 12:22:05.557: INFO: Pod pod-submit-status-0-7 on node leguer-worker2 timings total=10.212669164s t=985ms run=0s execute=0s Apr 13 12:22:15.448: INFO: watch delete seen for pod-submit-status-2-11 Apr 13 12:22:15.448: INFO: Pod pod-submit-status-2-11 on node leguer-worker2 timings total=10.070343682s t=1.41s 
run=0s execute=0s Apr 13 12:22:15.619: INFO: watch delete seen for pod-submit-status-1-6 Apr 13 12:22:15.619: INFO: Pod pod-submit-status-1-6 on node leguer-worker2 timings total=10.472888546s t=1.513s run=0s execute=0s Apr 13 12:22:26.428: INFO: watch delete seen for pod-submit-status-1-7 Apr 13 12:22:26.428: INFO: Pod pod-submit-status-1-7 on node leguer-worker2 timings total=10.808763053s t=1.411s run=0s execute=0s Apr 13 12:22:56.873: INFO: watch delete seen for pod-submit-status-2-12 Apr 13 12:22:56.873: INFO: Pod pod-submit-status-2-12 on node leguer-worker2 timings total=41.42526541s t=326ms run=0s execute=0s Apr 13 12:22:58.327: INFO: watch delete seen for pod-submit-status-1-8 Apr 13 12:22:58.327: INFO: Pod pod-submit-status-1-8 on node leguer-worker2 timings total=31.899088337s t=1.028s run=0s execute=0s Apr 13 12:23:00.582: INFO: watch delete seen for pod-submit-status-0-8 Apr 13 12:23:00.582: INFO: Pod pod-submit-status-0-8 on node leguer-worker2 timings total=55.024803504s t=759ms run=0s execute=0s Apr 13 12:23:04.317: INFO: watch delete seen for pod-submit-status-1-9 Apr 13 12:23:04.317: INFO: Pod pod-submit-status-1-9 on node leguer-worker2 timings total=5.990210234s t=905ms run=0s execute=0s Apr 13 12:23:07.024: INFO: watch delete seen for pod-submit-status-2-13 Apr 13 12:23:07.024: INFO: Pod pod-submit-status-2-13 on node leguer-worker2 timings total=10.150403389s t=1.099s run=0s execute=0s Apr 13 12:23:10.318: INFO: watch delete seen for pod-submit-status-2-14 Apr 13 12:23:10.318: INFO: Pod pod-submit-status-2-14 on node leguer-worker2 timings total=3.293738065s t=126ms run=0s execute=0s Apr 13 12:23:16.199: INFO: watch delete seen for pod-submit-status-0-9 Apr 13 12:23:16.199: INFO: Pod pod-submit-status-0-9 on node leguer-worker2 timings total=15.617163566s t=636ms run=0s execute=0s Apr 13 12:23:26.816: INFO: watch delete seen for pod-submit-status-0-10 Apr 13 12:23:26.816: INFO: Pod pod-submit-status-0-10 on node leguer-worker2 timings total=10.617401526s t=978ms run=0s execute=0s Apr 13 12:23:56.513: INFO: watch delete seen for pod-submit-status-0-11 Apr 13 12:23:56.513: INFO: Pod pod-submit-status-0-11 on node leguer-worker2 timings total=29.696611552s t=1.7s run=0s execute=0s Apr 13 12:23:58.586: INFO: watch delete seen for pod-submit-status-1-10 Apr 13 12:23:58.644: INFO: Pod pod-submit-status-1-10 on node leguer-worker2 timings total=54.326733689s t=145ms run=0s execute=0s Apr 13 12:24:57.073: INFO: watch delete seen for pod-submit-status-1-11 Apr 13 12:24:57.312: INFO: Pod pod-submit-status-1-11 on node leguer-worker2 timings total=58.668256365s t=347ms run=0s execute=0s Apr 13 12:24:59.237: INFO: watch delete seen for pod-submit-status-0-12 Apr 13 12:24:59.238: INFO: Pod pod-submit-status-0-12 on node leguer-worker2 timings total=1m2.72432412s t=1.868s run=0s execute=0s Apr 13 12:25:03.890: INFO: watch delete seen for pod-submit-status-1-12 Apr 13 12:25:03.890: INFO: Pod pod-submit-status-1-12 on node leguer-worker2 timings total=6.578011626s t=43ms run=0s execute=0s Apr 13 12:25:57.127: INFO: watch delete seen for pod-submit-status-1-13 Apr 13 12:25:57.127: INFO: Pod pod-submit-status-1-13 on node leguer-worker2 timings total=53.236379584s t=1.675s run=0s execute=0s Apr 13 12:25:59.949: INFO: watch delete seen for pod-submit-status-0-13 Apr 13 12:25:59.949: INFO: Pod pod-submit-status-0-13 on node leguer-worker2 timings total=1m0.711384783s t=1.232s run=0s execute=0s Apr 13 12:26:02.522: INFO: watch delete seen for pod-submit-status-1-14 Apr 13 12:26:02.522: 
INFO: Pod pod-submit-status-1-14 on node leguer-worker2 timings total=5.395046768s t=153ms run=0s execute=0s
Apr 13 12:26:16.120: INFO: watch delete seen for pod-submit-status-0-14
Apr 13 12:26:16.120: INFO: Pod pod-submit-status-0-14 on node leguer-worker2 timings total=16.171329349s t=1.92s run=0s execute=0s
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 13 12:26:16.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7008" for this suite.
• [SLOW TEST:448.584 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
[k8s.io] Pod Container Status
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
should never report success for a pending container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container Status should never report success for a pending container","total":-1,"completed":1,"skipped":145,"failed":0}
Apr 13 12:26:16.987: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":349,"failed":0}
Apr 13 12:19:58.257: INFO: Running AfterSuite actions on all nodes
Apr 13 12:26:17.015: INFO: Running AfterSuite actions on node 1
Apr 13 12:26:17.015: INFO: Skipping dumping logs from cluster
Ran 17 of 5667 Specs in 496.850 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5650 Skipped
Ginkgo ran 1 suite in 8m21.174155604s
Test Suite Passed
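Each spec also emits a JSON progress record (the {"msg":"PASSED ..."} lines interleaved above), so the final tally can be reproduced from a saved copy of this output. A minimal sketch, assuming the run was captured to a file named e2e.log (the file name and the record layout used here are assumptions, not guaranteed by the suite):

// tally.go - count the per-spec JSON progress records in a saved suite log
// and reproduce a "N Passed | M Failed" style summary. Sketch only.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type specRecord struct {
	Msg       string `json:"msg"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	f, err := os.Open("e2e.log") // assumed path to the captured output
	if err != nil {
		panic(err)
	}
	defer f.Close()

	passed, failed := 0, 0
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, `{"msg":`) {
			continue // not a per-spec progress record
		}
		var rec specRecord
		if json.Unmarshal([]byte(line), &rec) != nil {
			continue
		}
		switch {
		case strings.HasPrefix(rec.Msg, "PASSED"):
			passed++
		case strings.HasPrefix(rec.Msg, "FAILED"):
			failed++
		}
	}
	fmt.Printf("%d Passed | %d Failed\n", passed, failed)
}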